
Google Going Deeper Into AI

CE manufacturers most often make headlines by announcing new hardware or software products — which is great if you can sell them.

Less frequently, a company combines the two by announcing how its software products will help improve a retailer’s hardware sales and associated installation revenue — even when there is no charge for the new software itself. This is just what Google did during its 11th annual Google I/O conference.

The wide variety of announcements focused on what Google CEO Sundar Pichai described as the core mission of “organizing the world’s information.” Looking deeper, last year’s statement that the search and software giant was moving from “Mobile First to AI First” proved even more of a guiding principle with the addition of “deep learning” to the construct. This was demonstrated by the melding of databases, search and selection, machine learning, and the way end results are displayed.

On the search front, the Google Assistant is being continually improved to mine not only web-based data but also personal storage and location awareness. The new Google Lens initiative will use deep learning so that the user’s device can recognize a location, determine what objects are there, and complete an action. For example, a user can take a picture of a router, have it recognized, and be presented with the device’s serial number and information label.

With the Google Assistant as the front end, a key announcement from Google I/O was that the Assistant will become available for Apple’s iOS. With the ability to search and issue commands from the two major mobile platforms, seamless operation through to end devices will be accelerated by the coming availability of a developer platform that will enable apps, services and product manufacturers to build in Assistant integration via third-party devices.

It could be said that Amazon’s Alexa has paved the way for that, but here Google will go further by allowing commands to be typed as well as spoken. Another difference will be the integration of transaction processing that will allow the user not only to receive a search result, but also to have a third-party establishment recognize the delivery address and accept secure, fingerprint-authorized payment.

On the output side, we will also see a push beyond voice and activity response into visual response as well. For example, speaking a request to a Google Assistant-capable device can have the result shown on a TV via its internal Chromecast or a connected streaming dongle.