Keynote by Satya Nadella
Satya Nadella's keynote was organized around two topics: opportunities and responsibilities.
It seems Microsoft paid attention to recent commentary on Artificial Intelligence, sometimes framed as “death by robot” or “the age of robots”. The GDPR was also put in the right perspective.
Responsibility is organized around three pillars. The first is privacy: Microsoft stated it will only use data when the user benefits from it, and it will allow the user to keep control. Microsoft will even go so far as to defend user privacy before the Supreme Court.
Cyber security is the second pillar; it will require collaboration across the tech sector. Since recent attacks may have affected democracy itself, Satya mentioned the need for a digital Geneva Convention: attacking systems at the heart of a democracy could be seen as an act of war.
The third pillar of responsibility is ethical AI. We should not ask what a computer can do, but what a computer should do. AI benefits from cross-company data; machine learning, for example, benefits from broad datasets. AI could become more intelligent if data were combined across companies, but this must not come at the cost of privacy. Private AI is the answer: data is shared but kept secure, so the privacy of users is guaranteed. Techniques like homomorphic encryption are key here.
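The core idea of homomorphic encryption is that you can compute on ciphertexts without ever seeing the plaintexts. A minimal toy sketch of an additively homomorphic scheme (this is an illustration only, not real cryptography, and not what any Microsoft product uses; the parameters and brute-force decryption are assumptions for demo purposes):

```typescript
// Toy additively homomorphic scheme: encrypt(m) = g^m mod p,
// so encrypt(a) * encrypt(b) mod p = encrypt(a + b).
// NOT secure: p is tiny and decryption is brute force. Demo only.
const p = 2039n; // small prime, fine for a demo, useless for security
const g = 7n;    // base whose powers hide the message

// Square-and-multiply modular exponentiation with BigInt.
function powMod(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const encrypt = (m: bigint): bigint => powMod(g, m, p);

// Add two plaintexts without ever decrypting them.
const addEncrypted = (c1: bigint, c2: bigint): bigint => (c1 * c2) % p;

// Brute-force "decryption" — only viable for tiny message spaces.
function decrypt(c: bigint): bigint {
  for (let m = 0n; m < p; m++) {
    if (powMod(g, m, p) === c) return m;
  }
  throw new Error("ciphertext not found");
}

const sum = decrypt(addEncrypted(encrypt(20n), encrypt(22n)));
console.log(`sum = ${sum}`); // prints "sum = 42"
```

The point of the sketch is the middle step: `addEncrypted` operates purely on ciphertexts, which is what lets a cloud service aggregate data from multiple parties without learning any individual value.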
Satya summarized this with Microsoft's mission statement: “Empower every person and every organization on the planet to achieve more”.
The second part of the keynote covered opportunities created by newly available technology. Three aspects are key here: ubiquitous computing, AI, and multi-sense.
Ubiquitous computing is, for Microsoft, a paradigm shift from a computer in every home to an app on every device. It means the future is app-driven, hosted in serverless environments.
For the AI part, the announcement that Azure IoT Edge will be open sourced was big news. AI is maturing fast: in specialized areas, Microsoft's AI is reaching human parity. It did so in 2016 for object recognition, in 2017 for speech recognition, and in 2018 machine reading and translation were realized at human level. The Azure AI framework leads with a more than competitive tool chain and is the most open on the market. Another great announcement was that Kinect is coming back in an AI context, with Azure IoT integration. Project Brainwave is driving Microsoft's specialized chip approach, realizing real-time AI through programmable FPGA chips.
An upcoming Microsoft Meeting Room concept was shown, where an AI assistant creates meeting minutes and is context aware of the technical language used in previous meetings and of time or place references (“Jim, we discussed this last time”).
In later AI sessions we saw that Microsoft can translate or transcribe language input trained for a specific individual, for example deaf people with their own specific way of expressing English or any other language. The language models have become personalized to an individual's language expression.
Other AI influences can be found in the new MS Timeline app. This app shows a card-based overview of the documents you have worked on, the e-mails you have sent, and the internet searches you have executed, giving an overview of your activities. Timeline is also context aware: it detects separate streams in your life and groups information by context, such as working on project x, y, or z, or planning a personal trip.
A new phone integration app allows you to connect any phone directly to your desktop. This will integrate texting, browsing through photos and seeing your phone’s notifications all in Windows 10.
Multi-sense and multi-device form Microsoft's inclusive approach: people, not devices, are put at the center of everything. Microsoft wants to be inclusive by offering multiple ways to interact with devices. For some people these are optional conveniences; for people with disabilities they might be the only way to interact with apps.
For HoloLens, two new applications were announced: MS Remote Assist and MS Layout. Both extend existing HoloLens use cases. MS Remote Assist is a tool for working together on a task, and MS Layout brings mixed reality to plant organization and planning.
Next, Scott Guthrie gave the technical keynote. Scott first introduced Visual Studio Live Share, a real-time distributed pair-programming tool. Two developers can work together while each uses their own device, regardless of whether it runs Windows, Linux, or macOS, and each can use their preferred IDE, whether Visual Studio or Visual Studio Code. This is not just a screen-sharing solution: a secured connection gives access to the localhost instance running the application in debug mode on the other developer's machine. Live Share supports distributed breakpoints in code that are hit simultaneously on both machines. (more on https://www.visualstudio.com/services/live-share/ )
Another developer topic Scott covered was private development spaces. Azure Kubernetes Service allows you to set up private development spaces. This helps new developers who join a team to be up and running in no time, instead of wasting five days setting up their development environment. (more on http://landinghub.visualstudio.com/devspaces )
.NET Core 2.1 was released today. Highlights include the new Azure SignalR Service, which provides real-time notifications as a service. ASP.NET Core 2.1 now understands the concept of a pure API app that can run without any UI and produces Swagger documentation as part of the middleware.
On the desktop side we see new, improved, and backward-compatible support for forms-based desktop applications. XAML Islands allow WinForms and WPF applications to be extended with new high-DPI controls.
In Visual Studio, “go to definition” now allows automatic decompilation of libraries for which source code or debug references are missing. The feature is fully privacy compliant, as it asks the user's permission before decompiling external libraries.
Web UI .NET allows native code to run in the browser, similar to what WebAssembly has done for the Intel platform.
With regard to front-end development, it was announced that Babel 7 will support TypeScript: Babel can strip TypeScript's type annotations during compilation, while the TypeScript compiler remains available for type checking.
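In practice this means you can write annotated TypeScript and let Babel emit plain JavaScript from it, running `tsc` separately only for the checks. A small illustrative snippet (the `User` interface and `greet` function are made-up names for the example):

```typescript
// All the type annotations below are erased by Babel at build time;
// running tsc --noEmit over the same file catches type errors.
interface User {
  name: string;
  visits: number;
}

function greet(user: User): string {
  return `Hello ${user.name}, visit #${user.visits}`;
}

const u: User = { name: "Ada", visits: 3 };
console.log(greet(u)); // prints "Hello Ada, visit #3"
```

Passing `{ name: "Ada" }` without `visits` would compile fine under Babel (types are simply stripped) but fail under `tsc`, which is exactly the division of labor being announced.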
All this was just day 1 of MS Build 2018 in Seattle.