I Tried Google and Samsung's Next-Gen Android XR Headsets and Glasses, and the Killer App Is AI

I've used many VR and AR headsets and had a lot of experiences. But I've never worn one with an all-seeing, all-listening AI companion by my side until this week, when I got my first taste at Google's headquarters in New York City.

Android XR, which is available in early form for developers now and will fully launch in 2025, promises a whole OS for headsets and glasses of all types, along with a bridge to Android phones. But its killer app, the one Google is clearly banking on, is its AI, Gemini. From what I've seen, it's a heck of a preview of how much headsets and glasses will change in the next few years… but I still have a lot of questions about how it'll fit into everyday life.

What I remember most clearly afterward, my head buzzing with an hour's worth of demo memories in headset and glasses, is wandering through worlds with a persistent AI companion by my side. For instance, I was standing on a 3D map of my own neighborhood, my home below me. I pinched and zoomed, hovering over my roof, until I could see the horizon and some buildings a few streets away. I pointed to them.

 "What's that building over there?" I asked. 

"That's the high school," Gemini said, identifying my town's school name. 

I got closer and asked about the municipal building next door, too. We explored my town together, Gemini and I, in a new Samsung mixed-reality headset that feels a lot like Apple's Vision Pro. But as I asked Gemini to take me to other places -- beyond Maps to Chrome or YouTube, where it helped me recognize things in videos and narrated scenes on the fly, or even pointed out and searched for things in the real world in a constructed living room at Google's New York headquarters -- I started to lose track of what app I was in. Gemini was always with me, though. And after a few demos, Gemini even told me what I'd done, jogging my memory in case I forgot.

A lot of it starts to feel like those dreams of sci-fi assistants, and that's not accidental. Google's President of Android Ecosystem, Sameer Samat, equates multimodal AI to a "Tony Stark" moment: "What these [AI] models can do using the cameras on the phone as a way of interacting with the world, it was truly blowing us away. Wouldn't this be perfect for a pair of glasses?" 

During a long exclusive conversation with Samat, it became clear that AI has motivated Google to rewrite its future AR/VR plans and re-enter a space it walked away from years ago after ending support for the Google Daydream.

And yes, Google and Samsung have a lot of AR/VR plans for 2025: Android XR will launch then, and so will Samsung's headset. But Android XR will also work with Android phones and with other headsets and glasses, ranging from VR to AR to Meta Ray-Ban-like smart glasses. Glasses are very much on Google's roadmap: I also got multiple demos of display-enabled, Gemini-equipped prototype smart glasses, each with a floating head-up display. Those glasses, built under Google's AI initiative code-named Project Astra, are a preview of what's coming next.

It's a lot to take in, and fascinating stuff: a massive taste of what will probably be a big shift toward AI living on XR very soon.

A bridge to phones, an ecosystem for all sorts of hardware

I've said it for years: the big missing piece in VR and AR has been our phones. To this point, iOS and Android haven't been deeply connected with VR and AR headsets and glasses. But Android XR, a new platform launching in 2025 that was announced for developers Thursday, will open that all up. Starting with Samsung's Vision Pro-like mixed reality headset, Google aims to create a universe of glasses, goggles and headsets that interconnect with Google Play, run multiple 2D apps at once and use Gemini AI.

Google is making AI both the reason for Android XR and its biggest feature. In that sense, it's already different from Meta and Apple, which have slow-played AI in VR and AR so far. Apple Intelligence hasn't emerged on Vision Pro yet, though it's likely to arrive next year. And while Meta already has generative AI running on its Ray-Ban smart glasses, its Quest VR headsets don't have a lot of AI tools baked in yet.

Android XR is only in its early stages, a preview for developers and early partners to start getting used to. Google is working with Samsung as its starting hardware partner, and the mixed reality headset I got to briefly try will be the first product next year. Samsung is also making glasses, which we don't know much about yet… in the meantime, Google has its own in-house smart glasses built for Project Astra (I got to try those too).

There will be other partners and other products: Xreal, which already has a wide range of display glasses and a new set of AI-ready Xreal One glasses, is one of them. But for the year to come, it'll mostly be about Google and Samsung, with hardware using chipsets made by Qualcomm.

Even though Android XR's starting point is a high-end VR headset, the endpoint is a whole range of products to come. "This is not just about one product," says Kihwan Kim, executive VP of immersive technologies and hardware at Samsung. Kim sees it as the foundation for what will be a range of devices, including glasses. "This is more like the route to making this market," Kim says.

Meta's Orion glasses, which I saw earlier in the fall, are years away from becoming a real product but show what AR glasses could be. No one is there yet, though, and Google, like everyone else, is splitting the difference to get there.

"We have this kind of parallel approach," Shahram Izadi, Google's VP and GM for XR, says about the headset/glasses strategy. "One starts with a lot of capabilities, one starts with limited capabilities, but you're locking on form factor. Most people are attacking these two vectors to get to all-day wearable AR glasses."

Project Moohan is a familiar-looking mixed reality-enabled VR headset. It'll be the first Android XR-enabled product next year. (Image: Samsung)

Samsung's Project Moohan is the first step

I was one of only a few people to get an early hands-on with Samsung's Android XR headset, and I got to wear it for just a half hour or so. It's called Project Moohan, and Google didn't allow me to take any photos or videos of my demos or the mixed reality headset. The hardware feels very familiar: It has the fit and feel of a Meta Quest Pro but the video quality of an Apple Vision Pro. The headset's clear lenses and visor-like design perch over my forehead, resting over my eyes without a face piece pressing in. The headband tightens in the back, and it's lightweight, but there's also a tethered battery pack, much like the Vision Pro's, which I tuck in my pocket.

Google outfitted me with prescription lenses for my demos, which was a huge help since the headset doesn't seem made to fit over glasses. The hardware has both eye tracking and hand tracking, just like Vision Pro, and uses color cameras to pass through the real world and overlay it with what's shown in VR, creating mixed reality much like Meta's Quest 3 or Vision Pro.

Project Moohan is what Google and Samsung started with a while ago, before a rapid rise in generative AI interest and capabilities that, according to Samat, led the team to pivot toward an agent-based Gemini system that would work on both headsets and glasses. But Moohan is the starting point that Google feels covers enough bases -- interaction, Google Play app compatibility, AI and interface -- to fuel ideas for a wave of other, smaller glasses that may eventually not have all these features.

The windowed feel of Android XR looks a lot like visionOS, but Gemini AI can also see what you're seeing. (Image: Google)

Familiar, but with some AI magic

Tapping on the side of the headband opens up a grid of Google Play apps, much like how Vision Pro (or my Meta Orion demo) worked. I can pinch open apps with my hand by casting a pointer in space, and app windows can be dragged around by the edges and expanded in size. A top button on the headset can bring me back to the home screen, which includes an immersive 3D landscape that, again, is very Vision Pro.

Google's demos were all of Google apps, several of which aren't on other headsets yet -- namely Maps and YouTube. Google Maps starts in 2D but can launch a full 3D view that feels like the Google Earth experience I tried in VR years ago. Landscapes spread around magically, with searchable locations studded throughout. Google's also adding full 3D-scanned locations over time using a technique called Gaussian splatting, which knits 2D photos into realistic (but a bit hazy) walkable rooms. I popped into a scan of Scarpetta, a New York restaurant, and entered the dining rooms. I've seen these types of scans at Meta and via companies like Varjo and Niantic, but it's fun to see them knitted into Maps.

YouTube feels like a standard viewer with pop-out panes for comments and metadata, but it can also play immersive 3D, 180- and 360-degree videos that have been on YouTube for years. There's another trick too: Google is using AI to convert 2D YouTube videos into 3D. It didn't look bad, and more impressively, it works with home videos in the Photos app, too, along with 2D-to-3D photo conversions. Apple already converts 2D photos to 3D in Vision Pro, but the video trick is a next-level move for immersive memories.

Android XR can show photos and videos, and convert them into 3D. (Image: Google)

I also dragged my Chrome browser over to a table to demo how swapping to mouse and keyboard from hand tracking would work, and the transition was pretty seamless; the mouse cursor moved all around the room, not just in the browser window. When I lifted my hand off the mouse, hand tracking was instantly back in action. My demo didn't have eye tracking enabled (maybe because of my prescription inserts), but the headset and Android XR are made to adapt to whatever inputs are available: hands, eyes, voice or things like keyboards, mice or connected phones. (The headset does have automatic eye distance adjustment, by the way.)

There's no price or release date for Samsung's headset, or even an official name -- Moohan means "infinity" in Korean -- and it's available only to developers now. But it feels like a very real product, and it runs on Qualcomm's Snapdragon XR2 Plus Gen 2 chip, announced back in January. Again, though, it's the Gemini AI that feels like the special ingredient right now. My demos were pretty contained, in a preset space with preconfigured apps, but Gemini seemed like a pretty compelling magic trick. The magic continued on glasses in another room.

Google's prototype smart glasses running Project Astra look pretty normal but have a display inside one lens. (Image: Google)

Glasses: All-seeing AI with a head-up display

Samsung's next product will be smart glasses, with more details coming in 2025. But those glasses don't exist yet. Instead, Google is currently experimenting with its own in-house glasses, part of an AI initiative called Project Astra, which are being field-tested now to get feedback on how they work and feel in public. The second room I entered had a number of pairs of these glasses, one outfitted with makeshift prescription inserts for me. The glasses look pretty normal: lightweight and wireless (like Meta's Ray-Bans), with an onboard camera, speakers and mics in the arms, and a few input buttons.

These glasses have a single display in the right lens, projected via a micro-LED chip on the arm onto etched waveguides in a small square patch of the lens glass. They feel like a modern riff on Google Glass, but with much better tech. The display mainly shows text: directional information, or captions of whatever Gemini might be saying to me through the speakers.

Google's representation of what head-up displays in its smart glasses look like. My experience was pretty close to this, but I didn't build a shelf. (Image: Google)

I wandered around the room, looking at books on the shelf and asking about them (Jeff VanderMeer's Absolution, for instance, and whether I needed to read the other books first). I opened up a Yuval Noah Harari book and asked Gemini to summarize what was in front of me. I also had the glasses translate a poster on the wall. Meta's Ray-Bans can already do this too, but Gemini, once invoked, stays active and doesn't need additional prompts. Instead of constantly reactivating it, I keep it on… and pause it by tapping on the side of the glasses when I want a break from the assistant.

I also got a demo of live translation, where someone else in the room approached me and spoke first in English and then in Spanish. Everything she said was auto-captioned in the head-up display, and the captions stayed in English even when she changed languages.

This representation of what Maps looks like on glasses is close to what I experienced on dual-display prototypes, though in a contained demo space. (Image: Google)

Another short demo showed where the tech aims to go next: A pair of glasses with dual displays gave me simulated map information, and when I looked down, I saw a 3D map appear to guide my orientation and show me what street I was facing. Looking up and spinning around, I saw a map appear when I was in motion, then fade away when I was still. I also saw a brief video clip showing the potential resolution of the display; the micro-LED color and pixel density looked really good, but the square field of view was pretty small. Google sees it expanding over time, but it was notably smaller than the Meta Orion prototype, Xreal's glasses and even Snap's developer Spectacles. Then again, Google and hardware partners like Samsung may deliberately be limiting how much visual detail these glasses deliver, so they don't feel like an interruption or become unsafe to use while walking around in public.

Meta sees headsets and glasses as two parallel classes of products like PCs and phones, and Google feels the same way. "You'll probably use the more immersive products akin to laptops. On the glasses side, these are more like smartphones of the future or wearable devices of the future like watches or buds. So you have to support both," Izadi says. 

A representation of what instant translation would look like. How will this feel out in the everyday world? (Image: Google)

Gemini as an always-ready agent: Am I ready for it?

Through all of these demos, Gemini's one-tap-away readiness was a constant. That's Google's push here, by design. But it's also the most eye-opening, surprising part of everything I experienced. Whatever your concerns about AI might be, it can be extremely helpful in a headset or glasses, where inputs like a keyboard or touchscreen are harder to access. I use Siri a lot more in Vision Pro or with AirPods, and Meta's Ray-Bans use voice as a deeper way to control things too. But current VR/AR devices limit how aware the AI feels. Gemini, because it can see everything you're seeing in real time, feels like a buddy… and maybe not one you'll always want.

I found Gemini bubbly and friendly at first (it said "Hi!" and I awkwardly said "hi" back), but it then settled into a listening mode where anything I said could be interpreted as an instruction -- there's no "Hey, Gemini" wake phrase. That makes it helpful but also intrusive. The way to stop it is to pause it or turn it off, which feels like the reverse of how AI assistants work now: Instead of tapping to invoke it, you tap to stop it. No doubt Gemini will have limits on how long it can run continuously on small glasses, if only from a battery perspective. According to Google, in mixed reality on Project Moohan, Gemini works as a layer that uses casting to interpret everything it sees. It can even be used while playing games, though there might be a bit of a performance hit.

The advantage could be how it continuously breaks the fourth wall of mixed reality, in a sense: I could "circle to search" things in Chrome and have responses pop up, pull 3D objects into my world on command, or jump from app to app as I requested a location or a video or asked to play music from an album I saw in front of me (which happened during my demo). Samsung's Kim suggests I could get tutorials while playing games, for instance, if Gemini sees what I'm doing in the headset or even with glasses. And, of course, it can remember what I was doing, too, and when. When I asked Gemini to recognize my colleague Lisa Eadicicco in the room with me, though, it said it couldn't be used to identify people (yet).

Google's already laying out sweeping plans for the just-announced Gemini 2.0 to be an agent-like system that works across devices. Adding camera feeds to the AI input mix also means more data to collect and train on. It won't just be on headsets and glasses, and Google isn't the only company pursuing this vision. The implications are vast.

"The assistant is coming with you, whether it's your glasses, your headset, your phone, your watch," Izadi says.

Would I want Gemini to see everything I'm doing? No, of course not. Microsoft experimented with an always-on Recall AI mode in Windows before delaying it after backlash. How Google will handle that dance between always-helpful and privacy-invasive is unclear, although Google promises that video feeds used for AI recognition are kept private and local. 

The shape of future Android XR products remains unknown, but expect lots of glasses. (Image: Getty Images/CNET)

Android XR will open doors between phones, headsets and everything else

One big thing seems clear, though: With Android XR, all sorts of headsets and glasses will be able to bridge into phones more easily than before. That could let a whole host of otherwise isolated products feel more knitted together in a way that Apple and Meta haven't managed yet (although the exact steps for how that happens aren't clear for Google, either). Google's Samat points to Samsung being the first partner to co-explore the software, but Qualcomm's existing Snapdragon Spaces software, which already bridges phones to glasses, will also be compatible and part of Android XR. Google's also enabling WebXR and Unity tools to work with Android XR, and existing 2D Google Play apps will all run in Android XR, as long as developers opt in to listing them there.
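For web developers, that WebXR support is the most concrete piece today. As a rough illustration, here's a minimal sketch of the standard W3C WebXR Device API handshake -- the API Google says Android XR will support. Nothing in it is Android XR-specific: these are existing browser calls, and the immersive-vr mode and hand-tracking feature request are just illustrative choices.

```ts
// A minimal sketch of the standard WebXR Device API handshake.
// Nothing here is Android XR-specific; these calls already work in
// WebXR-capable browsers on existing headsets.

async function enterImmersiveSession(canvas: HTMLCanvasElement): Promise<void> {
  // Feature-detect: navigator.xr is undefined on non-XR browsers.
  if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-vr'))) {
    console.log('Immersive VR not available on this device.');
    return;
  }

  // Must be called from a user gesture, such as a button click.
  const session = await navigator.xr.requestSession('immersive-vr', {
    optionalFeatures: ['hand-tracking'], // hands are a primary headset input
  });

  // Bind a WebGL context to the session so frames can be rendered.
  const gl = canvas.getContext('webgl2', { xrCompatible: true }) as WebGL2RenderingContext;
  await session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  // A 'local' reference space tracks the viewer relative to a starting position.
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // ...render one view per pose.views entry (one per eye) here...
    }
    session.requestAnimationFrame(onFrame);
  });
}
```

The point is less the code than the compatibility story: a page written this way for, say, a Quest browser should, in principle, run on Moohan-style hardware without changes.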

Individual hardware makers should be able to customize their own software and tools and still connect to Google Play, but what about putting Google's already widespread services on other devices too? Right now, Google isn't offering any specifics there, but having XR Maps and YouTube -- and Gemini -- on Quest and Vision Pro headsets and elsewhere would be helpful. 

It could also change the way developers envision future VR and AR apps. "While we are looking to bring existing games like Demeo to Android XR, the platform also opens us up to develop entirely new ideas," Tommy Palm, head of Resolution Games, a developer that's made games for lots of existing VR/AR hardware, tells CNET. "Android XR's open nature, developer-friendly approach and unique innovations make it not only viable but allow us to consider new and novel ways to use mixed reality for storytelling. For instance, the natural language interface of chatbots could be a very potent extension for XR and games."

These moves are early, but they're also pointers to what's happening next. Apple and Meta will undoubtedly bring more AI services to AR and VR in the years to come, and Apple will likely find ways to let Vision Pro work with iPhones -- or it'll need to. Google's plans make a lot of sense, and they'll probably let headsets and glasses work as true peripherals to phones and, eventually, watches too. With three partners in the equation -- Google, Samsung and Qualcomm -- plus other manufacturers, it could get messy. But it's also the unifying progress that an already fragmented landscape needs. We'll know more about what's really happening in 2025, which isn't very far away at all.
