Tuesday, February 3, 2015






Increase internet speed by 20% without any software –

Microsoft reserves 20% of your available bandwidth for its own purposes, such as Windows Update and other background checks on your PC. By unreserving this bandwidth, you can make your internet connection up to 20% faster. The steps are:

1. Click Start, then Run, and type "gpedit.msc" (without the quotes).

2. Go to Local Computer Policy > Computer Configuration > Administrative Templates > Network > QoS Packet Scheduler. In the right pane, find Limit reservable bandwidth and double-click it.

3. It will say it is Not Configured, but the truth is under the ‘Explain’ tab. Select ‘Enabled’ and set the reservable bandwidth limit to zero.

4. Click ‘Apply’, and your connection gets that reserved 20% of bandwidth back.
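
For anyone who prefers to script the change, here is a minimal sketch of doing the same thing through the registry. It assumes the "Limit reservable bandwidth" policy maps to the NonBestEffortLimit DWORD under HKLM\SOFTWARE\Policies\Microsoft\Windows\Psched; run it as administrator and back up the registry before trying it.

// Sketch only: set the QoS reservable-bandwidth limit to 0 programmatically.
// Assumption: the "Limit reservable bandwidth" policy is stored as the
// NonBestEffortLimit DWORD under HKLM\SOFTWARE\Policies\Microsoft\Windows\Psched.
#include <windows.h>
#include <iostream>

int main() {
    HKEY key = nullptr;
    const wchar_t* path = L"SOFTWARE\\Policies\\Microsoft\\Windows\\Psched";

    // Create (or open) the policy key.
    LONG rc = RegCreateKeyExW(HKEY_LOCAL_MACHINE, path, 0, nullptr,
                              REG_OPTION_NON_VOLATILE, KEY_SET_VALUE,
                              nullptr, &key, nullptr);
    if (rc != ERROR_SUCCESS) {
        std::cerr << "Could not open key, error " << rc << "\n";
        return 1;
    }

    DWORD limit = 0;  // 0% reserved, instead of the default 20%
    rc = RegSetValueExW(key, L"NonBestEffortLimit", 0, REG_DWORD,
                        reinterpret_cast<const BYTE*>(&limit), sizeof(limit));
    RegCloseKey(key);

    if (rc != ERROR_SUCCESS) {
        std::cerr << "Could not set value, error " << rc << "\n";
        return 1;
    }
    std::cout << "Reservable bandwidth limit set to 0%.\n";
    return 0;
}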

Saturday, January 31, 2015


Google Glasses

This isn’t the headline you’d probably expect me to write about Google Glass. At least not lately, with news that Google has killed off the failed device. Well, I’m here to set the record straight. Google’s first wearable didn’t fail. In fact, I’d argue that it was the most important wearable of 2014.
First off, let’s tackle the recent Glass news. On January 15, Google communicated on its Google Glass Google+ page that Glass was “graduating from Google[x] labs,” explaining that the open beta Explorer program would officially be closed on January 19. The post also explained that this was not the end for Glass and that Google will be “continuing to build for the future, and you’ll start to see future versions of Glass when they’re ready.” Subsequently, we found out that Google Glass is being transitioned to an independent division which will report to Tony Fadell, CEO of Nest (maker of the insanely popular smart thermostat) and one of the fathers of the iPod. A good sign that Google is getting ready to make Glass a product.
The most important thing to remember about Google Glass up until this point is that it was a project out of Google[x], an experimental facility of Google that prides itself on “moonshot” ideas, which have included working on a space elevator, teleportation and hoverboards. Project Glass was an experiment, and those wearing the device (Explorers) and the people they came into contact with were all its subjects.
Viewing Glass as an experiment changes all expectations of this device. An experiment is expected to end. It has an objective to test, with results to analyze. And most importantly, it allows the experimenter to draw conclusions which can then be used for other purposes. With Glass as an experiment, Google tested what would happen if people wore a computer on their face, by equipping pioneers, or Explorers, and using the world as a real-time lab. Using what they learned, Google can now, as Google CFO Patrick Pichette put it on Google’s most recent earnings call, “take a pause and take the time to reset their strategy.”
As one of these Explorers, I voluntarily purchased Glass understanding that this device was in beta, and that the purpose was for me to explore with it and bring Google along for the ride. My journey with Glass began more than a year and a half ago, and during this time I’ve personally discovered some killer use cases for the device, such as taking pictures as if they were being snapped with your eyes, or being able to look at a sign in another language and have it magically translate to English. I’ve also experienced other people’s wonder in trying on the device, as well as uncomfortable and almost fearful reactions to Glass because of the camera. It’s true that my use of Glass has waned greatly these past few months – in fact, I hardly wear it out of the house at all – but there hasn’t been one day that I’ve regretted the $1,500 purchase. Google promised I would be able to explore with Glass and it has delivered on this promise.
And it’s in this same way that I feel the word “fail” isn’t accurate when it comes to Google Glass. Glass as a product can’t be dead as it was never alive. And Glass as an experiment can’t fail because the objective of an experiment is to test, analyze and then draw conclusions. How Google will use these conclusions to roll out the next iteration of Glass is what is most subject to be judged.
Viewing Glass as an experiment changes all expectations of this device.
Now, that isn’t to say there weren’t some major inconsistencies in the messaging Google put out there for this real-life experiment. Google, perhaps intentionally, started to blur the lines between Glass as a program and Glass as a product soon after it started to widen the net for its beta program. The fact that it dropped the word “Project” from its marketing and started to just call it “Google Glass” didn’t help matters much. Nor did the mysterious barges, physical stores in select cities, and then finally opening up the program to anyone who had the money to buy the heads-up display (including a release in the UK). So I can see where all the confusion came from in thinking Glass, as it exists now, was going to officially launch.
Why then do I think Google Glass is the most important wearable of 2014? Well, I have to answer this both from an industry perspective and on a personal level. Industry-wise, Glass may have had few users, been overpriced and run into constant battles with fashion and privacy (again, all part of the experiment), but it did one thing very well, and that was create a tremendous amount of buzz and awareness. I believe we wouldn’t be as far along in the wearable tech conversation, especially the mainstream conversation, if Google had tested Glass in a closed lab rather than through its open Explorer program. Glass helped bring wearables to a head (pun intended) and forced us to start having some serious conversations. From the never-ending privacy concerns that came with wearing a camera on your face, or the social stigma that coined the term Glasshole, to examples of Glass helping to improve the lives of people with disabilities and aiding doctors in performing heart surgery – Google’s first wearable brokered discussions which will help pave the way for the next wave of wearable tech, especially the next wave of smartglasses, and the learnings from this experiment are golden nuggets anyone getting into this space can benefit from.
For example, Glass identified a couple of areas where wearables, smartglasses in particular, seemed to naturally succeed. It became quite clear early on that Glass was becoming an extremely useful tool in enterprise, so much so that Google created its Glass at Work program back in June of last year. Partners such as APX Labs, Augmedix, and Wearable Intelligence all raised venture capital last year, ranging from $8 million to $16 million, to use Glass as an efficient, safe and hands-free tool in the workplace. Travel was another lucrative pocket for Glass. Hospitality companies such as Starwood and Virgin Atlantic used Glass to better the guest experience, and there were a number of apps that turned Glass into a tour guide (Field Trip) and real-time translator (Word Lens and Captioning for Glass).
On a personal level, Glass helped me move faster and deeper into this wearable world I now live in, opening doors and opportunities that may not have been as easy to access without wearing a computer on my face. Many of my first published articles here on BetaKit and MobileSyrup were about my experiences as a Google Glass Explorer. I’ve had the chance to demo Glass to a number of companies and media outlets, and speak about my experiences. My futuristic specs always attracted a crowd at networking events and gave everyone an immediate icebreaker, allowing us all to have some pretty interesting conversations rather than the usual small talk.
Glass opened up our eyes to a brand new world.
Glass was also one of the catalysts which pushed me to create a community around wearable tech. As one of ten or so Canadian Google Glass Explorers at the time, I decided to put together an event where the city of Toronto could come to me to try out Glass, rather than being stopped randomly on the way to a meeting to explain and demo the device. What started as 150 people in Toronto has now grown to one of the largest wearable communities in the world, with chapters in Toronto and Chicago.
Outside of giving us access to the tech, the Explorer program also unified a bunch of like-minded people who were willing and excited to test out a new wave of computing in our personal and professional lives. From artists to doctors and everyone in between, I have met some fascinating people, many of whom have gone on to become major influencers leading and shaping the wearable tech conversation.
But most importantly, for me and for the rest of the Explorers and those that took the journey with us, Glass opened up our eyes to a brand new world. One where we aren’t craning our necks looking down at the screen normally stuffed in our pocket. One where we can capture moments with minimal disruption and from our own point of view. And one which begins to merge the digital with the physical in ways we previously thought weren’t possible. The Glass experiment may be over, but this sense of wonder and hunger for the future hasn’t gone away; it’s only gotten stronger.

Thursday, January 29, 2015

Together with his colleagues, Martin Rinard, a professor of computer science at MIT, has created new software called ClearView that is able to detect the intrusion of malicious code and generate a set of patches meant to repair the compromised software.

The new invention detects which rule (invariant) was violated and then generates candidate patches that force the software to obey that rule again.

Afterwards the software analyses all the options in order to decide which of the candidate patches is the most suitable. The team's invention searches for particular types of errors, which are often caused by malicious code injected into the running software.

A great advantage of the new invention is that it can be installed on several computers that run the same software. Once it selects the most effective patch, that patch can be installed on all the other computers.

The team of researchers tested the software on a group of machines running Firefox. A team of independent programmers attacked Firefox, each using a different type of attack. ClearView managed to block the malicious code of every attack; before a working patch existed, it simply shut down the program to stop the attack from succeeding.
The new invention discarded the incorrect corrections and generated patches that fixed the errors caused by the malware. Within about five minutes of the first attack, ClearView had created a working patch.
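To make the idea concrete, here is a toy sketch of the "pick the best candidate patch" loop described above. This is not ClearView's actual code: the invariant, the candidate patches and the test inputs are all invented for illustration.

// Illustrative sketch (not ClearView): evaluate candidate patches against a
// monitored invariant and keep the one that restores correct behaviour.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Candidate {
    std::string name;
    std::function<int(int)> patchedFunction;  // patched version of the vulnerable routine
};

// Invariant learned from normal executions: the routine must return a
// non-negative value no larger than its input.
bool invariantHolds(int input, int output) {
    return output >= 0 && output <= input;
}

int main() {
    // A "buggy" routine whose output violates the invariant after an attack.
    auto buggy = [](int x) { return x + 1000; };

    std::vector<Candidate> candidates = {
        {"clamp-to-input",      [](int x) { return x; }},
        {"return-zero",         [](int x) { return 0; }},
        {"no-op (still buggy)", [](int x) { return x + 1000; }},
    };

    const std::vector<int> testInputs = {0, 5, 42, 1000};

    std::cout << "buggy(42) = " << buggy(42)
              << ", violates invariant: " << !invariantHolds(42, buggy(42)) << "\n";

    // Score each candidate by how many test executions keep the invariant intact.
    const Candidate* best = nullptr;
    int bestScore = -1;
    for (const auto& c : candidates) {
        int score = 0;
        for (int in : testInputs)
            if (invariantHolds(in, c.patchedFunction(in))) ++score;
        std::cout << c.name << " score: " << score << "/" << testInputs.size() << "\n";
        if (score > bestScore) { bestScore = score; best = &c; }
    }

    std::cout << "Selected patch: " << best->name << "\n";
    return 0;
}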

Thursday, January 22, 2015


Internetworking

Definition - What does internetworking mean?
Internetworking is the practice of connecting a computer network to other networks through gateways that provide a common method of routing information packets between the networks. The resulting system of interconnected networks is called an internetwork. Put another way, internetworking is the process or technique of connecting different networks by using intermediary devices such as routers or gateways.

EXPLANATION

Internetworking is a term used by Cisco, BBN, and other providers of network products and services as a comprehensive term for all the concepts, technologies, and generic devices that allow people and their computers to communicate across different kinds of networks.


The most notable example of internetworking is the Internet, a network of networks based on many underlying hardware technologies but unified by an internetworking protocol standard, the Internet Protocol Suite, often also referred to as TCP/IP.
The smallest amount of effort needed to create an internet (an internetwork, not the Internet) is to connect two LANs of computers via a router. Simply using a switch or a hub to connect two local area networks doesn't imply internetworking; it just expands the original LAN.

Internetworking ensures data communication among networks owned and operated by different entities, using a common data communication protocol and Internet routing protocols. The Internet is the largest pool of networks, geographically distributed throughout the world, yet these networks are interconnected using the same protocol stack, TCP/IP. Internetworking is only possible when all the connected networks use the same protocol stack or compatible communication methodologies.

A computer network is a set of computers connected together using networking devices such as switches and hubs. To enable communication, each individual network node or segment is configured with the same protocol or communication logic, which is usually TCP/IP. When a network communicates with another network that has the same or compatible communication procedures, it is known as internetworking.

Internetworking is also implemented using internetworking devices such as routers. These are physical hardware devices which have the ability to connect different networks and ensure error-free data communication. They are the core devices enabling internetworking and the backbone behind the Internet.
[Image: Routing between networks – http://www.tutorialspoint.com/data_communication_computer_network/images/routing.jpg]
Interconnection of networks:
Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The original term for an internetwork was catenet.

The definition of an internetwork today includes the connection of other types of computer networks such as personal area networks. The network elements used to connect individual networks in the ARPANET, the predecessor of the Internet, were originally called gateways, but the term has been deprecated in this context, because of possible confusion with functionally different devices. Today the interconnecting gateways are called Internet routers.

Another type of interconnection of networks often occurs within enterprises at the Link Layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single subnetwork, and no internetworking protocol, such as the Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments and logically dividing the segment traffic with routers.

The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network; instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must use an appropriate Transport Layer protocol, such as the Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connectionless transport protocol, the User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service.
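As a minimal illustration of the connectionless UDP service just mentioned, the sketch below sends a single datagram using POSIX sockets. The address 127.0.0.1 and port 9999 are arbitrary examples; no connection is set up and delivery is not guaranteed (swapping SOCK_DGRAM for SOCK_STREAM would be the starting point for a TCP socket instead).

// Minimal POSIX-sockets sketch: send one UDP datagram to 127.0.0.1:9999.
// There is no handshake, acknowledgement or retransmission at this layer.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <iostream>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);   // SOCK_STREAM would give a TCP socket
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9999);                  // example port, assumed free
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char msg[] = "hello over UDP";
    // sendto() hands the datagram to IP; nothing confirms it arrived.
    if (sendto(sock, msg, sizeof(msg), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof(dest)) < 0) {
        perror("sendto");
    }
    close(sock);
    return 0;
}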
Tunneling:
If two geographically separate networks want to communicate with each other, they may deploy a dedicated line between them, or they have to pass their data through intermediate networks.
Tunneling is a mechanism by which two or more networks of the same kind communicate with each other while bypassing the complexities of the intermediate networks. Tunneling is configured at both ends.
[Image: Tunneling – http://www.tutorialspoint.com/data_communication_computer_network/images/tunneling.jpg]
When data enters one end of the tunnel, it is tagged. This tagged data is then routed through the intermediate, or transit, network to reach the other end of the tunnel. When data exits the tunnel, its tag is removed and it is delivered to the other part of the network.
Both ends feel as if they are directly connected, and the tagging lets data travel through the transit network without any modification.
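The following toy sketch shows only the tagging idea: an inner packet is wrapped in an outer header for the trip across the transit network and unwrapped at the far end. The addresses and field names are invented for illustration; real tunnels (for example GRE or IP-in-IP) do this at the packet level.

// Toy illustration of encapsulation/decapsulation in a tunnel.
#include <iostream>
#include <string>

struct InnerPacket {            // packet as seen by the two private networks
    std::string srcPrivate, dstPrivate, payload;
};

struct TunnelPacket {           // what actually crosses the transit network
    std::string tunnelSrc, tunnelDst;   // public endpoints of the tunnel (example values)
    InnerPacket inner;                  // original packet carried unchanged
};

TunnelPacket encapsulate(const InnerPacket& p) {
    return {"203.0.113.1", "198.51.100.7", p};   // tag the data for transit
}

InnerPacket decapsulate(const TunnelPacket& t) {
    return t.inner;                               // tag removed, packet unchanged
}

int main() {
    InnerPacket p{"10.0.0.5", "10.1.0.9", "hello across the tunnel"};
    TunnelPacket onWire = encapsulate(p);
    InnerPacket delivered = decapsulate(onWire);
    std::cout << delivered.srcPrivate << " -> " << delivered.dstPrivate
              << ": " << delivered.payload << "\n";
    return 0;
}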

Networking models:

Two architectural models are commonly used to describe the protocols and methods used in internetworking.
The Open Systems Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.
The Internet Protocol Suite, also called the TCP/IP model of the Internet, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Requests for Comments and Internet standards. Despite a similar appearance as a layered model, it uses a much less rigorous, loosely defined architecture that concerns itself only with the aspects of logical networking. It does not discuss hardware-specific low-level interfaces, and assumes availability of a Link Layer interface to the local network link to which the host is connected. Internetworking is facilitated by the protocols of its Internet Layer.

Open Systems Interconnection model (OSI):
The Open Systems Interconnection model (OSI) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO), designated ISO/IEC 7498-1.
The model groups communication functions into seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive the packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection on that layer.

[Image: Communication in the OSI model – http://upload.wikimedia.org/wikipedia/commons/thumb/4/41/OSI-model-Communication.svg/434px-OSI-model-Communication.svg.png]


Internet protocol suite:
The Internet protocol suite is the computer networking model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP, because its most important protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP), were the first networking protocols defined in this standard. Often also called the Internet model, it was originally also known as the DoD model, because the development of the networking model was funded by DARPA, an agency of the United States Department of Defense.
TCP/IP provides end-to-end connectivity, specifying how data should be packetized, addressed, transmitted, routed and received at the destination. This functionality is organized into four abstraction layers which are used to sort all related protocols according to the scope of networking involved.[1][2] From lowest to highest, the layers are the link layer, containing communication technologies for a single network segment (link); the internet layer, connecting hosts across independent networks, thus establishing internetworking; the transport layer, handling host-to-host communication; and the application layer, which provides process-to-process application data exchange.
The TCP/IP model and related protocol models are maintained by the Internet Engineering Task Force (IETF).

[Image: IP stack connections – http://upload.wikimedia.org/wikipedia/commons/thumb/c/c4/IP_stack_connections.svg/350px-IP_stack_connections.svg.png]


Monday, January 5, 2015

How to calculate property tax in C++

#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    double ActualValue, AssessValue, tax;

    cout << "Enter the actual value of the property: $";
    cin >> ActualValue;

    // The property is assessed at 60% of its actual value.
    AssessValue = (ActualValue * 60) / 100;

    // The tax rate is 0.64% of the assessed value.
    tax = (AssessValue * 0.64) / 100;

    cout << setprecision(2) << fixed;
    cout << "The actual value of the property is: $" << ActualValue << endl;
    cout << "The assessment value of the property is: $" << AssessValue << endl;
    cout << "The property tax is: $" << tax << endl;

    return 0;
}

Thursday, January 1, 2015

Intro To Windows 10


Windows 10 is a personal computer operating system developed by Microsoft as part of the Windows NT family of operating systems. First presented in April 2014 at the Build Conference, it is scheduled to be released in 2015, and is currently in public beta testing. During its first year of availability, Windows 10 will be offered at no charge for consumer users of Windows 8.1 and Windows 7.
Windows 10 aims to improve the user experience for non-touchscreen devices (such as desktop computers and non-touchscreen laptops) by adding a new revision of the desktop Start menu and a virtual desktop system, and allowing Windows Store apps to run within windows on the desktop as well as in full-screen mode; it also aims to cater to tablets and touch-screen laptops with an easy way to switch between ‘tablet’ full-screen modes and ‘desktop’ windowed modes with Continuum. Windows 10 also furthers Microsoft’s ongoing efforts to unify the Windows PC, Windows Phone and Windows Embedded product families around a common internal core and similar user interface.

Development

In December 2013, technology writer Mary Jo Foley reported that Microsoft was working on an update to Windows 8, codenamed Threshold after a planet in Microsoft’s Halo franchise.[1] Similarly to “Blue” (which became Windows 8.1),[2] Foley called Threshold a “wave of operating systems” across multiple Microsoft platforms and services, scheduled for the second quarter of 2015. Foley reported that among the goals for Threshold was to create a unified application platform and development toolkit for Windows, Windows Phone and Xbox One (which all use a similar Windows NT kernel).[1][3] It was speculated that Threshold would be branded as “Windows 9”.[4]
In April 2014, at the Build Conference, Microsoft’s Terry Myerson unveiled an updated version of Windows that added the ability to run Windows Store apps inside desktop windows, and a more traditional Start menu in place of the Start screen seen in Windows 8. The new Start menu takes after Windows 7’s design by using only a portion of the screen and including a Windows 7-style application listing in the first column. The second column displays Windows 8-style app tiles. Myerson stated that these changes would occur in a future update, but did not elaborate.[5][6] Microsoft also unveiled the concept of a “universal Windows app,” allowing Windows Runtime apps to be ported to Windows Phone 8.1 and Xbox One while sharing a common codebase, and allowing user data and licenses for an app to be shared between multiple platforms.[5][7]
In July 2014, Microsoft’s new CEO Satya Nadella explained that the company was planning to “streamline the next version of Windows from three operating systems into one single converged operating system for screens of all sizes,” unifying Windows, Windows Phone, and Windows Embedded around a common architecture and a unified application ecosystem. However, Nadella stated that these internal changes would not have any effect on how the operating systems are marketed and sold.[8][9] Screenshots of a Windows build which purported to be Threshold were leaked in July 2014, showing the previously presented Start menu and windowed apps,[3] followed by further screenshots in September 2014 of a build identifying itself as “Windows Technical Preview”, numbered 9834, showing a new virtual desktop system, a notification center, and a new File Explorer icon inspired by the Metro design language.[10]
Threshold was officially unveiled during a media event on September 30, 2014, under the name Windows 10; Myerson said that Windows 10 would be Microsoft’s “most comprehensive platform ever,” providing a single, unified platform for desktop computers, laptops, tablets, smartphones, and all-in-one devices.[4][11][12] He emphasized that Windows 10 would take steps towards restoring user interface mechanics from Windows 7 to improve the experience for users on non-touch devices, noting criticism of Windows 8’s touch-oriented interface by keyboard and mouse users.[13][14] Despite these concessions, Myerson noted that the touch-oriented interface would “evolve” as well on 10.[15] In describing the changes, Joe Belfiore likened the two operating systems to electric cars, comparing Windows 7 to a first-generation Toyota Prius hybrid, and Windows 10 to an all-electric Tesla—considering the latter to be an extension of the technology first introduced in the former.[16] Regarding the operating system’s name, Terry Myerson refused to elaborate on why Microsoft skipped directly from Windows 8 to 10, stating only that “based on the product that’s coming, and just how different our approach will be overall, it wouldn’t be right”. He also joked that they couldn’t call it “Windows One” (alluding to several recent Microsoft products with a similar brand, such as OneNote, Xbox One and OneDrive) because they had already made a Windows 1.[4]
Further details surrounding 10’s consumer-oriented features were presented during another media event held on January 21, 2015, entitled “Windows 10: The Next Chapter”. The keynote featured the unveiling of Cortana integration within the operating system, new Xbox-oriented features, Windows 10 for phones and small tablets, an updated Office Mobile suite, Surface Hub—a large-screened Windows 10 device for enterprise collaboration based upon Perceptive Pixel technology,[17] along with HoloLens, augmented reality eyewear and an associated platform for building apps that can render “holograms” through HoloLens.[18] Additional information surrounding Windows 10 is expected to be announced during Build 2015.[13][15][19]