Google, AI and Gemini
Google on Tuesday revealed new Android development tools, a new mobile AI architecture, and an expanded developer community. The announcements accompanied the unveiling of an AI Mode for Google Search at the Google I/O keynote in Mountain View, California.
Gemini 2.5 Flash is now the default model in the Gemini chatbot, positioned as the fast, cost-efficient option for everyday use. Google says it understands images and text better than predecessors such as Gemini 2.0 Flash while remaining much cheaper to run.
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
Google says the release version of 2.5 Flash is better at reasoning, coding, and multimodality while using 20–30 percent fewer tokens than the preview version. It is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
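For developers, the practical upshot is that the model can already be selected by name. Below is a minimal sketch, assuming Google's public google-genai Python SDK and the gemini-2.5-flash model ID; both are drawn from public documentation, not from this digest:

```python
# Minimal sketch: calling Gemini 2.5 Flash via the AI Studio API.
# Assumes the google-genai SDK (pip install google-genai) and a valid
# API key; the model ID "gemini-2.5-flash" is an assumption based on
# public docs, not something stated in this digest.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the key announcements from Google I/O.",
)
print(response.text)
```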
Android XR glasses are in the works from Google partners such as Warby Parker. My preview was tantalizing—but many questions remain unanswered.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Google has launched its new Google AI Ultra subscription, which costs $250 per month: Here's what you get from the most expensive tier.
Google is adding its Gemini AI assistant to Chrome, the company announced at Google I/O on Tuesday.