Google I/O 2025: All the Major Announcements and Product Launches

Google's annual developer conference delivered groundbreaking updates across AI, Android, Search, and hardware ecosystems, setting the stage for the next generation of technology innovation. From Gemini 2.5's revolutionary Deep Think capabilities to Android 16's desktop mode and Project Mariner's agentic features, here's everything you need to know about Google I/O 2025.

AI Revolution: Gemini 2.5 and Beyond

Google's flagship AI model family received major upgrades at I/O 2025, cementing the company's position at the forefront of artificial intelligence development. Gemini 2.5 Pro now leads multiple industry benchmarks, including WebDev Arena with an Elo score of 1415, and tops all leaderboards on LMArena, which evaluates human preference across various dimensions.

Perhaps the most revolutionary addition is Deep Think, an experimental enhanced reasoning mode for Gemini 2.5 Pro that enables the model to consider multiple hypotheses before responding. This capability has produced remarkable results on complex mathematical problems, with Deep Think achieving unprecedented scores on the 2025 USAMO, one of the most challenging math benchmarks available. It also leads on LiveCodeBench for competition-level coding and scores an impressive 84.0% on MMMU for multimodal reasoning.

Google is taking a measured approach with Deep Think, making it available initially to trusted testers via the Gemini API to gather feedback and conduct additional frontier safety evaluations before wider release.

The more efficient Gemini 2.5 Flash has also received substantial upgrades, improving across key benchmarks for reasoning, multimodality, code, and long context while becoming 20-30% more efficient in token usage. It's now available to everyone in the Gemini app and will be generally available for production in Google AI Studio and Vertex AI in early June.
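For developers, trying the model is a single request. The snippet below is a minimal sketch using the google-genai Python SDK, assuming an API key is set in the environment; the exact model ID string for the 2.5 Flash release is an assumption and may differ from the preview identifier.

```python
# Minimal sketch: one text request to Gemini 2.5 Flash via the
# google-genai Python SDK. The model ID is an assumption and may differ
# from the identifier used during the preview period.
from google import genai

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model ID
    contents="Summarize the key Gemini 2.5 announcements from Google I/O 2025.",
)
print(response.text)
```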

Security has been a major focus, with Google significantly enhancing protections against threats like indirect prompt injection. These improvements make Gemini 2.5 Google's most secure model family to date, addressing concerns about AI safety and reliability.

New AI Ultra Subscription Tier

Google unveiled its premium AI Ultra subscription, priced at $249.99 per month, delivering what the company calls the "highest level of access" to Google's AI-powered apps and services. The subscription includes:

- Access to Google's Veo 3 video generator
- The new Flow video editing app
- Gemini 2.5 Pro with Deep Think mode
- Higher limits in Google's NotebookLM platform and Whisk image remixing app
- The Gemini chatbot in the Chrome browser
- Project Mariner agentic tools
- YouTube Premium
- 30TB of storage across Google Drive, Google Photos, and Gmail

This premium tier represents Google's most comprehensive AI offering to date, targeting professional creators and power users who need advanced AI capabilities.

Project Mariner: AI That Takes Action

Project Mariner, Google's experimental AI agent that browses and uses websites, received significant updates enabling it to handle nearly a dozen tasks simultaneously. This technology allows users to accomplish tasks like purchasing tickets to events, making restaurant reservations, or buying groceries online without ever visiting third-party websites.

The technology is being integrated into the Gemini API and Vertex AI, with companies including Automation Anywhere, UiPath, Browserbase, Autotab, The Interaction Company, and Cartwheel already exploring its potential. Broader developer access is planned for summer 2025.

For everyday users, the expansion of Project Mariner's agentic functions to AI Mode in Search will enable Google to find the best event tickets or restaurant reservations based on user prompts. While the AI won't complete purchases, it will present options that best match specific criteria, such as the lowest price.

Search Reinvented with AI

AI Mode in Search is starting to roll out to everyone in the U.S., bringing a conversational AI interface to Google's core product. For questions that require more thorough responses, Google is introducing Deep Search, which expands the number of background queries from tens to hundreds to produce more robust, fully cited reports in minutes.

Live capabilities from Project Astra are coming to AI Mode with Search Live, launching this summer. This feature will allow users to talk back-and-forth with Search about what they see in real-time using their camera.

Google is also introducing a new AI Mode shopping experience that brings together advanced AI capabilities with the Shopping Graph to help users browse for inspiration, consider options, and find the right products. Users can virtually try on billions of apparel listings by uploading a photo of themselves, with this experiment rolling out to Search Labs users in the U.S. starting immediately.

A new agentic checkout feature will help users buy products at their desired price point. Users can tap "track price" on any product listing, set what they want to spend, and receive notifications when the price drops.

AI Overviews have scaled to 1.5 billion monthly users in 200 countries and territories, making Google Search the product that brings generative AI to more people than any other in the world. In major markets like the U.S. and India, AI Overviews are driving over a 10% increase in Google usage for the queries that show them.

Android 16: New Design and Desktop Mode

While Android didn't take center stage in the main keynote, Google announced significant updates to its mobile operating system. Android 16 introduces Material 3 Expressive, a new design language that refreshes the visual identity of the platform.

The update adds AI-powered weather effects that can make it rain on photos, along with new wallpaper and lock screen options for Pixel phones. When selecting an image as wallpaper, users can access new AI effects that frame subjects within various shapes, similar to iOS's Depth Effect feature.

Google is working with Samsung to bring a desktop mode to Android, building on the foundation of Samsung's DeX platform. This feature will bring enhanced windowing capabilities to Android 16, allowing windows to stretch and move across the screen when connected to a larger display.

Security enhancements include new ways to find lost Android phones and other items, additional device-level features for Google's Advanced Protection program, and improved tools to protect against scams and theft.

Android Auto and Mobile Experiences

Android Auto is receiving a substantial update with support for Spotify Jam, allowing users to share control of an audio source from their individual devices. The platform is also getting a light mode option and, more significantly, support for web browsers and video apps.

The Gemini app has reached 400 million monthly active users, with camera and screen-sharing capabilities for Gemini Live rolling out beyond Android to iOS users. A new Create menu within Canvas helps users explore what Canvas can build, transforming text into interactive infographics, web pages, immersive quizzes, and podcast-style Audio Overviews in 45 languages.

Deep Research is being enhanced to allow users to upload PDFs and images directly, with upcoming support for linking documents from Drive or Gmail and customizing research sources.

An experimental Agent Mode is coming soon to Google AI Ultra subscribers, allowing users to simply describe their end goal and have Gemini complete tasks on their behalf.

New AI Models and Creative Tools

Veo 3, Google's latest video-generating AI model, can create videos with sound effects, background noises, and even dialogue. The model improves upon its predecessor in terms of quality and is available to Google AI Ultra subscribers.

Imagen 4, the latest iteration of Google's image generator, offers faster rendering and the ability to capture fine details like fabrics, water droplets, and animal fur. It handles both photorealistic and abstract styles, creating images in various aspect ratios up to 2K resolution. A future variant promises to be up to 10 times faster than Imagen 3.
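For developers with API access, image generation follows the same request pattern as other Gemini API calls. The sketch below uses the google-genai Python SDK; the Imagen 4 model identifier is a placeholder assumption, as is its availability through the public API.

```python
# Hypothetical sketch: requesting an image from an Imagen model through the
# google-genai Python SDK. The model ID is a placeholder, not a confirmed
# identifier for Imagen 4.
from google import genai
from google.genai import types

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_images(
    model="imagen-4.0-generate-preview",  # assumed/placeholder model ID
    prompt="Macro shot of water droplets on dark wool fabric, soft daylight",
    config=types.GenerateImagesConfig(number_of_images=1, aspect_ratio="16:9"),
)

# Write the first returned image to disk.
with open("droplets.png", "wb") as f:
    f.write(response.generated_images[0].image.image_bytes)
```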

Both Veo 3 and Imagen 4 will power Flow, Google's AI-powered video tool for filmmaking.

Stitch is a new AI-powered tool to help people design web and mobile app front ends by generating necessary UI elements and code. Users can prompt Stitch to create app UIs with a few words or even an image, receiving HTML and CSS markup for the designs it generates.

Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code, understand complex codebases, create pull requests on GitHub, and handle certain backlog items and programming tasks.

Hardware and Communication

Beam, previously called Project Starline, is Google's 3D teleconferencing technology that uses a six-camera array and custom light field display to create the illusion of in-person conversation. The system features near-perfect millimeter-level head tracking and 60fps video streaming.

When used with Google Meet, Beam provides AI-powered real-time speech translation that preserves the original speaker's voice, tone, and expressions. Google Meet itself is getting real-time speech translation capabilities.

Google is also building Project Astra glasses with partners including Samsung and Warby Parker, though no specific launch date has been announced. These smart glasses will incorporate the company's advanced AI capabilities for real-time visual recognition and assistance.

Developer Tools and Platform Updates

Google is infusing LearnLM directly into Gemini 2.5, which the company describes as making it the world's leading model for learning. According to Google's latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles.

The company added native SDK support for Model Context Protocol (MCP) definitions in the Gemini API for easier integration with open-source tools. Google is also exploring ways to deploy MCP servers and other hosted tools to simplify building agentic applications.
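As a rough illustration of what that integration could look like, the sketch below passes an MCP client session to a Gemini API call using the google-genai Python SDK and the reference mcp package; the SDK surface for MCP tools, the local server command, and the model ID are all assumptions rather than confirmed details.

```python
# Hypothetical sketch: exposing an MCP server's tools to a Gemini API call
# via the google-genai Python SDK. The server command and model ID are
# placeholders, and the MCP tool-passing surface is an assumption.
import asyncio

from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    client = genai.Client()  # reads the GEMINI_API_KEY environment variable

    # Launch a local MCP server (placeholder command) and open a session.
    server = StdioServerParameters(command="my-mcp-server")  # hypothetical server
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Hand the MCP session to Gemini as a tool so the model can call
            # the server's tools while generating its answer.
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",  # assumed model ID
                contents="Use the available tools to answer the question.",
                config=types.GenerateContentConfig(tools=[session]),
            )
            print(response.text)

asyncio.run(main())
```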

Additional updates include new features for Google Wallet, Wear OS improvements, Google Play Store enhancements, and Google TV updates that weren't highlighted during the main keynote.

Conclusion

Google I/O 2025 demonstrated the company's commitment to advancing AI technology while integrating it meaningfully across its product ecosystem. From the groundbreaking capabilities of Gemini 2.5 and Deep Think to the practical applications of Project Mariner and AI Mode in Search, Google is pushing the boundaries of what's possible with artificial intelligence.

The updates to Android, including the new desktop mode and Material 3 Expressive design language, show Google's continued innovation in mobile operating systems. Meanwhile, hardware initiatives like Beam and the Project Astra glasses partnerships point to an exciting future for communication and augmented reality.

As these technologies continue to evolve and become more integrated into everyday digital experiences, they promise to transform how we interact with information, solve problems, and accomplish tasks. Google I/O 2025 may well be remembered as a pivotal moment in the development of truly helpful, human-centered artificial intelligence and computing.
