At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates from Google’s big event, along with some additional announcements that came after the keynote.

Gemini 1.5 Flash and Updates to Gemini 1.5 Pro

Google announced a brand-new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini Nano, the company's smallest model, which runs locally on devices. Google said it created Flash because developers wanted a lighter, less expensive model than Gemini 1.5 Pro for building AI-powered apps and services, one that still keeps key features like the one-million-token context window that differentiates Gemini 1.5 Pro from competing models. Later this year, Google will double Gemini's context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code, or over 1.4 million words at once.
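For developers, the practical difference between the tiers is mostly which model identifier you request and what you pay per token. Below is a minimal sketch of how that choice might look with Google's google-generativeai Python SDK; the prompt, API-key handling, and printed output are illustrative assumptions, not part of Google's announcement.

```python
# Minimal sketch: calling Gemini 1.5 Flash with the google-generativeai
# Python SDK. The prompt and key handling here are illustrative; the
# model identifier follows Google's published naming.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Flash trades some quality for speed and cost; both tiers accept the
# same long-context inputs (one million tokens at launch).
flash = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Summarize the main decisions in this meeting transcript: ..."

# Check a request against the context window before generating.
print(flash.count_tokens(prompt).total_tokens)

response = flash.generate_content(prompt)
print(response.text)
```

Swapping in "gemini-1.5-pro" is the only change needed to send the same request to the larger model.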

Project Astra: Google’s AI Assistant

Google showcased Project Astra, an early version of a universal assistant powered by AI. Google DeepMind CEO Demis Hassabis described it as Google's version of an AI agent "that can be helpful in everyday life."

In a video that Google says was shot in a single take, an Astra user walks through Google's London office, pointing the phone's camera at objects like a speaker, some code on a whiteboard, and the view out a window, all while holding a natural conversation with the app about what it sees. In one of the video's most impressive moments, Astra correctly tells the user where she left her glasses without her ever mentioning them. The video ends with a twist: when the user finds and puts on the missing glasses, we learn they have an onboard camera of their own and can run Project Astra, seamlessly picking up the conversation and hinting that Google may be building a competitor to Meta's Ray-Ban smart glasses.

Enhanced Google Photos with AI

Google Photos could already search for specific images or videos, but AI is taking that further. Google One subscribers in the US will soon be able to ask Google Photos complex questions like "show me the best photo from each national park I've visited" as the feature rolls out over the next few months. Google Photos will use GPS information and its own judgment of what counts as "best" to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo and Imagen 3: AI-Powered Media Creation

Google introduced Veo and Imagen 3, its new AI-powered media creation engines. Veo is Google's answer to OpenAI's Sora: it can produce "high-quality" 1080p videos that last "beyond a minute" and can understand cinematic concepts like timelapses.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its predecessor, Imagen 2. The result is the company's highest-quality text-to-image model yet, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts, pitting it squarely against OpenAI's DALL-E 3.

Google Search Gets AI Overviews

Google is making significant changes to how Search fundamentally works. Most of the updates, like the ability to ask complex questions ("Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.") and to use Search to plan meals and vacations, won't be available unless you opt in to Search Labs, the company's platform for trying out experimental features.

However, a big new feature called AI Overviews, which Google has been testing for a year, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default. The company says it will bring this feature to more than a billion users worldwide by the end of the year.

Gemini Integration with Android 15

Google is integrating Gemini directly into Android. When Android 15 arrives later this year, Gemini will be aware of the app, image, or video on your screen, and you'll be able to pull it up as an overlay and ask context-specific questions. Notably, Google did not mention Google Assistant during the keynote, leaving questions about its future.

Wear OS 5 Battery Life Improvements

Google isn’t quite ready to roll out the latest version of its smartwatch OS, but it promises major battery life improvements. Wear OS 5 will consume 20 percent less power than Wear OS 4 during a marathon. Wear OS 4 already brought battery life improvements, but it could still better manage device power. Google provided developers with a new guide on conserving power and battery to create more efficient apps.

Android 15 Anti-Theft Features

Android 15’s developer preview has been rolling for months, but more features are coming. Theft Detection Lock, a new Android 15 feature, will use AI to predict phone thefts and lock things up accordingly. Google says its algorithms can detect motions associated with theft, such as grabbing the phone and running, biking, or driving away. If an Android 15 handset detects one of these situations, the phone’s screen will quickly lock, making it harder for thieves to access your data.

Google also announced a handful of other updates: digital watermarks for AI-generated video and text, Gemini in the side panel of Gmail and Docs, a virtual AI teammate in Workspace, real-time scam detection that listens in on phone calls, and more.
