Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there’s a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google’s Android XR smart glasses prototypes. (The Project Mariner booth is maybe 10 feet away and remarkably empty.)
While nothing was going to steal AI’s spotlight at this year’s keynote — 95 mentions! — Android XR has been generating a lot of buzz on the ground. But the demos we got to see here were notably shorter, with more guardrails, than what I got to see back in December. Probably because, unlike a few months ago, there are cameras everywhere and these are “risky” demos.
First up is Project Moohan. Not much has changed since I first slipped on the headset. It’s still an Android-flavored Apple Vision Pro, albeit much lighter and more comfortable to wear. Like Oculus headsets, it has a dial in the back that lets you adjust the fit. Pressing the top button brings up Gemini. You can ask Gemini to do things, because that is what AI assistants are here for. Specifically, I ask it to take me to my old college stomping grounds in Tokyo in Google Maps without having to open the Google Maps app. Natural language and context, baby.
But that’s a demo I’ve gotten before. The “new” thing Google has to show me today is spatialized video. As in, you can now get 3D depth in a regular old video you’ve filmed without any special equipment. (Never mind that the example video I’m shown is most certainly filmed by someone with an eye for enhancing dramatic perspectives.)
Because of the clamoring crowd outside, I’m then given a quick run-through of Google’s prototype Android XR glasses. Emphasis on prototype. They’re simple; it’s actually hard to spot the camera in the frame and the discreet display in the right lens. When I slip them on, I can see a tiny translucent screen showing the time and weather. If I press the temple, it brings up — you guessed it — Gemini. I’m prompted to ask Gemini to identify one of two paintings in front of me. At first, it fails because I’m too far away. (Remember, these demos are risky.) I ask it to compare the two paintings, and it tells me some obvious conclusions. The one on the right uses brighter colors, and the one on the left is more muted and subdued.
On a nearby shelf, there are a few travel guidebooks. I tell Gemini a lie: I’m not an outdoorsy type, so which book would be best for planning a trip to Japan? It picks one. I’m then prompted to take a photo with the glasses. I do, and a little preview pops up on the display. Now that’s something the Ray-Ban Meta smart glasses can’t do — and arguably, it’s one of the Meta glasses’ biggest weaknesses for the content creators who make up a huge chunk of their audience. The addition of the display lets you frame your images. You’re less likely to tilt your head into an accidental Dutch angle, or to have the perfect shot ruined by your ill-fated late-night decision to get curtain bangs.
These are the safest demos Google can do. Though I don’t have video or photo evidence, the things I saw behind closed doors in December made a more convincing case for why someone might want this tech. There were prototypes with not one but two built-in displays, so you could have a more expansive view. I got to try live AI translation. The whole “Gemini can identify things in your surroundings and remember things for you” demo felt personalized, proactive, powerful, and pretty dang creepy. But those demos were tightly guardrailed, too — and at this point in Google’s story of smart glasses redemption, it can’t afford a throng of tech journalists all saying, “Hey, this stuff? It doesn’t work.”
Meta is the name Google hasn’t said aloud with Android XR, but you can feel its presence looming here at the Shoreline. You can see it in the way Google announced stylish eyewear brands like Gentle Monster and Warby Parker as partners for the consumer glasses that will launch… sometime, later. This is Google’s answer to Meta’s partnership with EssilorLuxottica and Ray-Ban. You can also see it in the way Google is positioning AI as the killer app for headsets and smart glasses. Meta, for its part, has been preaching the same for months — and why shouldn’t it? It’s already sold 2 million pairs of the Ray-Ban Meta glasses.
The problem is that even with video, even with photos this time, it is so freakin’ hard to convey why Silicon Valley is so gung-ho on smart glasses. I’ve said it time and time again: you have to see it to believe it. Renders and video capture don’t cut it. And even if, in the limited time we have, we could frame the camera just so and give you a glimpse into what I see when I’m wearing these things, it just wouldn’t be the same.