The much-derided Google Glass didn’t take off like the tech giant hoped, but Google has remained convinced that we’ll want mixed-reality eyewear at some point, developing various prototypes over the years. More recent advances from Meta and Apple perhaps suggest that Glass was simply too far ahead of its time.
But Google’s not been dissuaded. It’s now putting two of its favourite technologies together – smart glasses and AI – and that combination could be just what it takes to make truly smart smart glasses, if people are prepared for this kind of tech.
We first got a glimpse of Google’s Project Astra always-on AI agent back in May. Now the company’s shared a new demo of its prototype for a multimodal virtual assistant that would live in your phone – and your glasses – and would be able to see everything you do.
At the briefing for the launch of Gemini 2.0 this week, Google revealed that a small group will begin testing prototype Gemini AI glasses running Project Astra as part of its Trusted Tester program. Another group of testers will try Project Astra on Android phones.
The idea is that having an AI agent in glasses form will help with things like providing directions and language translations in situations where we want to keep our hands free. Compared to where things were in May, Google says Astra can now understand more languages, mix languages, and handle accents and “uncommon words”. It can also respond at “about the latency of human conversation” thanks to new native audio understanding and streaming capabilities.
Meanwhile, Astra can now access Google Maps, Lens and Search to answer questions, and it has 10 minutes of in-session memory as well as the ability to remember previous conversations.
While Project Astra remains a prototype, Google’s also announced the launch of a new operating system, Android XR, for both smart glasses and headsets. Available as a preview for developers for now, it supports tools like ARCore, Android Studio, Jetpack Compose, Unity, and OpenXR to allow developers to start building apps and games for upcoming devices.
The operating system, which is set to debut on a Samsung headset known internally as Project Moohan, signals Google’s intention to take on rivals Apple and Meta in the AR and mixed-reality space. Apple’s expensive Vision Pro was hardly destined for mainstream success, but it introduced the concept of spatial computing, from which Google appears to have taken pointers.
Meanwhile, Meta has been aiming its Quest headsets and Ray-Ban smart glasses at a more mainstream market. While the former remain mainly popular among gamers, the collaboration with Ray-Ban does seem to have helped make smart glasses more appealing to the public by promoting them as fashion accessories that don’t obviously look like smart glasses.
Google’s prototype looks much more powerful thanks to its integration of Gemini with various apps and that ability to remember, but we don’t know what form it will take if it ever reaches market. We’re told that Android XR is a collaboration with Samsung and Qualcomm, and we’ve also seen a glimpse of what looks to be a Samsung mixed-reality headset. This could mean that it will be Samsung rather than Google itself that launches the first Gemini AI glasses. That makes me wonder whether they would be as appealing to the fashion-conscious consumer as the Meta Ray-Ban offering.
I’m also wondering how Google will make its Project Astra capabilities practical given current battery limitations, as well as the potential privacy obstacles if the AI agent needs to record all the time (I don’t see how it will know where I left my keys if it doesn’t). Microsoft delayed the rollout of its controversial Recall tool for Copilot+ PCs partly due to similar concerns.
For an overview of what the headset market looks like at the moment, see our pick of the best VR headsets. Just last week, we saw the launch of the world’s first Cinematic AR glasses with the XREAL One and XREAL One Pro Series, described as “the most advanced consumer AR glasses on the market” by XREAL’s co-founder and CEO, Chi Xu.