Google Goes Deeper Into AI At I/O Conference

CE manufacturers most often make headlines by announcing new hardware or software products — which is great if you can sell them.

Less frequently, a company combines the two by announcing how its software products will help improve a retailer’s hardware sales and associated installation revenue — even when there is no charge for the new software itself. This is just what Google did on Wednesday during its 11th annual Google I/O conference.

At its core, Google I/O is where those who develop apps and products learn about the latest versions of the basic software tools and suites. However, the first-day keynote set the stage with a more general overview of what is coming down the pike and how Google intends to implement it.

The wide variety of announcements centered on what Google CEO Sundar Pichai described as the company’s core mission of “Organizing the world’s information.” Looking deeper, last year’s declaration that the search and software giant was moving from “Mobile First to AI First” proved even more of a guiding principle, now with “deep learning” added to the construct. This was demonstrated by the melding of databases, search and selection, machine learning and the way in which end results are displayed.

On the search front, the Google Assistant is being continually improved to mine not only web-based data, but also personal storage and location awareness. The new Google Lens initiative will use deep learning so that the user’s device can recognize a location, determine what objects are there, and complete an action. For instance, a user can take a picture of a router, have the device recognized, and have its serial number and other details read from the information label.

With the Google Assistant as the front end, a key announcement from Google I/O was that the Assistant will become available for Apple’s iOS. With search and command now possible from both major mobile platforms, seamless operation through to end devices will be accelerated by the coming availability of a developer platform that will enable apps, services and product manufacturers to build Assistant integration into third-party devices.

It could be said that Amazon’s Alexa has paved the way for that, but here Google will go further by allowing commands to be typed as well as spoken. Another difference will be the integration of transactional processing, which will let the user go beyond receiving a search result: a third-party establishment can recognize the delivery address and accept secure, fingerprint-authorized payment.

On the output side, we will also see a push into not only voice and activity response, but visual response as well. For example, speaking a request to a Google Assistant-capable device can have the result shown on a TV’s internal Chromecast or a connected streaming dongle. Indeed, it was said during the keynote that the “fastest-growing screen for control is not mobile, but the one in the living room.”
