Google made a slew of announcements at its I/O 2017 keynote covering Android, VR/AR, YouTube, the Google Assistant and more.
Google Lens essentially uses vision-based computing to identify things in the real world, drawing on Google's AI and Knowledge Graph. Think of Google Lens as Google Goggles but supercharged (or a better version of Samsung's Bixby Vision).
Google Lens will be integrated into the Google Assistant and Google Photos first, then into other Google products. Examples of what Google Lens could do include identifying flowers, pointing your camera at a router's label to grab its Wi-Fi network name and password, or pulling up details on a restaurant you're looking at in the real world.
It's been about a year since Google first introduced the Assistant, but it only started to become widely available with the Pixel phones and Google Home. Since then, Google has enabled the Assistant on all phones running Android 6.0 and later, and last month it announced the Assistant SDK to bring the Assistant to even more devices beyond smartphones.
Google says over 100 million devices now have access to the Assistant. And with Google Lens integration coming to the Assistant, it's about to get even smarter: the Assistant will be able to have a conversation about what your camera sees, and you'll soon be able to talk to the Assistant just by typing.
Google will also be bringing the Assistant to the iPhone (presumably US-only at launch), just as previously reported. And the Google Assistant will be available in more languages sometime this summer, including French, German, Brazilian Portuguese and Japanese, with Italian and Korean following by the end of the year.
Google also announced that they've teamed up with manufacturers like Sony and LG to build the Assistant into devices beyond phones, so you'll start seeing products carrying a "Google Assistant built-in" badge.
After what seems like an eternity, Google Home will (finally) be making its way to Canada, Australia, France, Germany, Japan and the UK this summer. Proactive assistance is also coming to Google Home: asking it "what's up" will surface notifications, like a heads-up that it's time to leave for your next calendar appointment.
Hands-free calling is also coming to Google Home, with the ability to call landlines and cell phones for free in the US and Canada. Thanks to the recently added multi-user support, you'll be able to say things like "call mom" and Google Home will call the right person depending on who's speaking.
Subscribers to Spotify's free tier will be able to stream music to Google Home, and in the US, HBO Now is getting Google Home support, so you'll be able to say things like, "Ok Google, show me Game of Thrones".
You'll also be able to get visual responses from the Google Assistant on your phone or, via Chromecast, on your TV. So you can ask Google Home to show you your calendar or the weather and have it appear on the big screen.
Actions on Google
Actions on Google is a platform that allows developers to build apps for the Google Assistant. Google announced that the platform will be getting payment support. Google showed off a demo of ordering food from Panera using the Assistant.
Google.ai and TPU Cloud
Google.ai is Google's new AI initiative that, among other things, will let developers build machine learning applications using the cloud. One highlight is research in which neural nets design other neural nets as part of a reinforcement-learning approach. Google also announced Cloud TPUs, its second-generation Tensor Processing Units, which will be available to developers through Google Cloud.
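To give a feel for "neural nets designing neural nets", here is a heavily simplified, self-contained sketch. A controller keeps a weight per design choice, samples candidate architectures, scores them, and reinforces choices that beat a running baseline. The search space and the reward function are invented for illustration; Google's actual system trains and evaluates each candidate network on real data.

```python
import random

# Toy stand-in for reinforcement-learning architecture search.
# SEARCH_SPACE and reward() are made up for demonstration purposes.

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128, 256],
    "activation": ["relu", "tanh", "swish"],
}

def reward(arch):
    # Stand-in for validation accuracy: more capacity helps, with a
    # penalty for model size (so the best design is not just "biggest").
    return (arch["layers"] * 0.5 + arch["width"] ** 0.5
            - 0.01 * arch["layers"] * arch["width"])

def controller_search(steps=300, seed=0):
    rng = random.Random(seed)
    weights = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline, best, best_r = 0.0, None, float("-inf")
    for _ in range(steps):
        # Sample one option per dimension, proportionally to its weight.
        idx = {k: rng.choices(range(len(v)), weights=weights[k])[0]
               for k, v in SEARCH_SPACE.items()}
        arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
        r = reward(arch)
        if r > best_r:
            best, best_r = arch, r
        # Crude policy update: reinforce choices that beat the baseline.
        for k, i in idx.items():
            weights[k][i] *= 1.1 if r > baseline else 0.95
        baseline += 0.1 * (r - baseline)  # running average of rewards
    return best, best_r

best, best_r = controller_search()
print("best architecture:", best, "reward:", round(best_r, 2))
```

The key idea the real research shares with this sketch is the feedback loop: the controller's sampling distribution gradually shifts toward architectures that score well, instead of searching blindly.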
Google announced three new features coming to Google Photos: Suggested Sharing, Shared Libraries and Photo Books.
Suggested Sharing helps you find the best pictures of your friends and family and share them. It uses machine learning to recognize the people in your photos, and the app will suggest sharing with those people based on your sharing patterns.
Shared Libraries will automatically share pictures of certain people, things or places. It will notify recipients of new photos and save the photos to their library.
Both Suggested Sharing and Shared Libraries will be rolling out on Android, iOS and the web in the next few weeks.
Photo Books will let you create and print photo albums, automatically selecting the best shots from the photos you choose. For now, Photo Books are only available in the US: softcover books cost $9.99 USD, hardcover books cost $19.99 USD, and your book arrives within a matter of days. Photo Books will come to more countries later this year.
Google Lens will also be integrated into Google Photos, so you'll be able to identify things in your photos using Google's Knowledge Graph.
Google didn't spend a lot of time on Android and the upcoming release of Android O. They did announce that there are now over 2 billion active Android devices around the world, that Drive has 800 million users, and that Photos has 500 million users with 1.2 billion photos uploaded each day.
Google also announced that the Android O public beta (the second developer preview) is available today, with the final release coming this summer.
There will be more watches running Android Wear 2.0 coming this year from 24 of the world’s top watch makers. Google is also bringing the Assistant to Android TV along with a new UI.
The company also announced Fluid Experiences in Android O, which they demonstrated by showing the new picture-in-picture mode for a more fluid and intuitive multitasking setup.
Notification Dots are Google's take on the unread badges seen on iOS, but on Android you'll also be able to long-press an app icon to see a preview of that app's notifications.
Google also announced that Android O will use on-device machine learning to automatically select entire phrases, names, addresses and phone numbers when you tap on them. Autofill is also being built into O at the system level, making it easier to set up a new device.
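To illustrate the behavior of that smart text selection, here's a toy, rule-based sketch: given a tap position in a string, it expands the selection to a surrounding "entity". Android's real feature uses an on-device neural network; the regex patterns and the `smart_select` helper below are purely illustrative stand-ins.

```python
import re

# Toy stand-in for Android O's smart text selection. The patterns are
# deliberately simple and would miss many real-world formats.
ENTITY_PATTERNS = [
    ("phone", re.compile(r"\+?\d[\d\s().-]{6,}\d")),
    ("email", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),
    ("address", re.compile(r"\d+\s+\w+\s+(?:St|Ave|Rd|Blvd)\b")),
]

def smart_select(text, tap_index):
    """Return (label, matched_text) for the entity under the tap, if any."""
    for label, pattern in ENTITY_PATTERNS:
        for m in pattern.finditer(text):
            if m.start() <= tap_index < m.end():
                return label, m.group()
    return None

msg = "Meet me at 221 Baker St, or call 555-867-5309."
print(smart_select(msg, 14))   # tap lands inside the street address
print(smart_select(msg, 36))   # tap lands inside the phone number
```

The point is the UX change: instead of dragging selection handles word by word, a single tap yields the whole address or phone number, ready to share or act on.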
They also announced a new version of TensorFlow, called TensorFlow Lite, which will help developers run lightweight neural nets on mobile devices.
Android Go is a new experience that optimizes Android for low-spec, low-cost devices. There will be a set of Google apps that use much less data and memory, and a Play Store that highlights apps that work well on Android Go devices.
Android Go is all about data management: there will be a quick settings toggle for data allowances, Chrome's Data Saver will be on by default, and YouTube Go will let users download videos over Wi-Fi to watch later.
Google also introduced "Building for Billions", guidelines to help developers create apps that target emerging markets. Every Android release from O onward will have an Android Go configuration, and the first Android Go devices will arrive in 2018.
Daydream VR & AR
Google announced its Daydream VR platform back at Google I/O 2016, and at this year's I/O they announced that the Galaxy S8 & S8+ will get Daydream support later this summer via an update.
There will also be standalone Daydream VR headsets: Qualcomm is building a reference headset, while HTC and Lenovo are making standalone consumer Daydream headsets that will arrive by the end of the year. The standalone headsets won't require any external beacons to use.
Google also announced WorldSense, which is a new form of positional tracking for Daydream. It can allow headsets to track precise movements in a space without the need for external sensors.
Asus' ZenFone AR, which was announced at CES 2017 and supports both Daydream and Tango, will be coming this summer.
Google also announced its Visual Positioning Service (VPS), which will help you find specific objects in the real world by using key visual feature points.
Google is also bringing Tango elements to Google Expeditions.
360-degree video will be coming to the YouTube app on TVs, including support for live 360 video, and you'll be able to use your remote to pan around.
Super Chat, YouTube's paid highlighted-chat feature, is getting a new API, so chats will be able to trigger things in the real world, like controlling the lights in a creator's studio or piloting a drone.
Google for Jobs
Google for Jobs is Google's AI-powered job search engine. When you start searching for a job in the search box, Google will show you suggested openings at the top, including ones close to you. Google is partnering with LinkedIn, Facebook, CareerBuilder, Monster, Glassdoor and more to help train its machine learning models.
You’ll be able to filter results by location, title, date posted, category, wage levels and more.
Smart Reply in Gmail
Google also announced that Smart Reply will be coming to the Gmail app on Android and iOS. Smart Reply uses machine learning to suggest three responses based on the email you received, and it adapts to you: Google says that if you're more of a "thanks!" than a "thanks." person, Gmail's Smart Reply will show that option more often.
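The behavior described above can be sketched with a toy ranker: score a set of canned replies against the incoming email, add a small bonus for replies the user has picked before (the "thanks!" vs "thanks." personalization), and surface the top three. This is not Google's actual model, which is a neural network trained on real mail; the canned replies and scoring are invented for illustration.

```python
from collections import Counter

# Invented canned replies for this toy demo.
CANNED = [
    "Thanks!", "Thanks.", "Sounds good, see you then.",
    "I'll take a look.", "Can we push this to next week?",
    "Congrats!", "Got it, thanks for the update.",
]

def tokens(text):
    # Lowercase words with trailing punctuation stripped.
    return {w.strip(".,!?").lower() for w in text.split()}

def suggest(email, user_style=None, k=3):
    """Rank canned replies by word overlap with the email, plus a small
    bonus for replies this user has chosen before."""
    user_style = user_style or Counter()
    body = tokens(email)
    def score(reply):
        return len(tokens(reply) & body) + 0.5 * user_style[reply]
    return sorted(CANNED, key=score, reverse=True)[:k]

email = "Here's the updated report -- take a look when you can. Thanks!"
# Pretend this user has picked "Thanks!" four times before.
print(suggest(email, user_style=Counter({"Thanks!": 4})))
```

Even in this crude form, the two signals Google describes are visible: relevance to the message and a per-user style preference that nudges the ranking.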