A first look at Google’s Project Aura glasses built with Xreal

Teased at Google I/O, Project Aura is a collaboration between Xreal and Google. It’s the second Android XR device (the first being Samsung’s Galaxy XR headset) and is expected to launch in 2026. Putting it on, I get why the term “smart glasses” doesn’t exactly fit.

Is it a headset? Smart glasses? Both? Those were the questions running through my head as I held Project Aura in my hands in a recent demo. It looked like a pair of chunky sunglasses, except for the cord dangling off the left side, leading down to a battery pack that also served as a trackpad. When I asked, Google’s reps told me they consider it a headset masquerading as glasses. They have a term for it, too: wired XR glasses.

I can connect wirelessly to a laptop and create a giant virtual desktop in my space. I have up to a 70-degree field of view. My first task is to launch Lightroom on the virtual desktop while opening YouTube in another window. I play a 3D tabletop game where I can pinch and pull the board to zoom in and out. I look at a painting on the wall and summon Circle to Search. Gemini tells me the name of the artwork and the artist.

I’ve done all of this before in the Vision Pro and Galaxy XR. This time, my head isn’t stuffed into a bulky headset. If I wore this in public, most people wouldn’t notice. But this isn’t augmented reality, which overlays digital information onto the real world. It’s much more like using a Galaxy XR: apps float in front of you while your surroundings stay visible.

A Google representative told me everything I tried on Project Aura had originally been developed for Galaxy XR. None of the apps, features, or experiences had to be remade for Project Aura’s form factor. That’s huge.

XR has a major app problem. Take the Meta Ray-Ban Display and the Vision Pro. Both launched with few third-party apps, giving consumers little reason to wear them. Developers also have to pick and choose which of these gadgets they’ll invest in making apps for. That leaves little room for smaller companies with big ideas to compete or experiment.

That’s what makes Android XR fascinating. Smaller players, like Xreal, can access apps developed for Samsung’s headset. Android apps will also work on the AI glasses launching next year from Warby Parker and Gentle Monster.

“I think this is probably the best thing for all the developers. You just don’t see any fragmentation anymore. And I do believe there will be more and more devices converging together. That’s the whole point of Android XR,” says Xreal CEO Chi Xu.

Image: A close-up side view of Google’s prototype AI glasses at Google I/O.

Slipping on Google’s latest prototype AI glasses, I’m treated to an Uber demo in which a fictional version of me is hailing a ride from JFK Airport. A rep summons an Uber on the phone. I see an Uber widget pop up on the glasses display. It shows the estimated pickup time and my driver’s license plate and car model. If I look down, a map of the airport appears with real-time directions to the pickup zone.

It’s all powered by Uber’s existing Android app, meaning Uber didn’t have to code an Android XR app from scratch. Theoretically, users could just pair the glasses and start using the apps they already have.

When I’m prompted to ask Gemini to play some music, a YouTube Music widget pops up, showing the title of a funky jazz mix and media controls. It’s also just using the YouTube Music app on an Android phone.

I’m asked to tell Gemini to take a photo with the glasses. A preview appears on the display and on a paired Pixel Watch. The idea is that smartwatch integration gives users more options: someone who opts for audio-only glasses with a camera can still take a picture and preview it on their wrist. It’ll work on any compatible Wear OS watch.

Image: Photos taken with Google’s AI glasses prototypes, with K-pop-inspired effects overlaid. On the left, a pantry lit in pink and blue neon; on the right, a person swirled in neon effects, Korean lettering, and concert lighting.

I also try live translations where the glasses detect the language being spoken. I take Google Meet video calls. I get Nano Banana Pro to add K-pop elements to another photo I’ve taken. I try a second prototype with a display in both lenses, enabling a larger field of view. (These are not coming out next year.) I watch a 3D YouTube video.

It’s all impressive. I hear a few spiels about how Gemini truly is the killer app. But my jaw really drops when I’m told next year’s Android XR glasses will support iOS.

“The goal is to give this ability to have multimodal Gemini in your glasses to as many people as possible. If you’re an iPhone user and you have the Gemini app on your phone, great news. You’re gonna get the full Gemini experience there,” says Juston Payne, Google’s director of product management for XR.

Payne notes that this will be broadly true across Google’s iOS apps, such as Google Maps and YouTube Music. The limitations on iOS will mostly involve third-party apps. But even there, Payne says the Android XR team is exploring workarounds. At a time when wearable ecosystem lock-in is at an all-time high, this is a breath of fresh air.

Google’s use of its existing Android ecosystem is an astute move that could give Android XR an edge over Meta, which currently leads in hardware but has only just opened its API to developers. It also ramps up the pressure on Apple, which has fallen behind on both the AI and glasses fronts. Making things interoperable between device form factors? Frankly, it’s the only way an in-between device like Project Aura has a shot.

“I know we can make these glasses smaller and smaller in the future, but we don’t have this ecosystem,” adds Xu, Xreal’s CEO. “There are only two companies right now in the world that can really have an ecosystem: Apple and Google. Apple, they’re not going to work with others. Google is the only option for us.”

Google is trying to avoid past mistakes. It’s deliberately partnering with other companies to make the hardware. It’s steering clear of the conspicuous design of the original Google Glass. It has apps lined up ahead of launch. The prototypes explore multiple form factors: audio-only, and displays in one or both lenses.

Payne doesn’t dodge when I ask the big cultural question: How do you discourage glassholes?

“There’s a very bright, pulsing light if anything’s being recorded. So if the sensor is on with the intent to save anything, it will tell everyone around,” says Payne. That includes queries to Gemini for any task involving the camera. On and off switches will have clear red and green markings, so users can show the people around them that the glasses really aren’t recording. Payne says Android’s and Gemini’s existing permissions frameworks, privacy policies, encryption, data-retention rules, and security guarantees will also apply.

“There’s going to be a whole process for getting certain sensor access so we can avoid certain things that could happen if somebody decides to use the camera in a bad way,” Payne says, noting Google’s taking a conservative approach to granting third parties access to the cameras.

On paper, Google is making smart moves that address many of the challenges inherent in this space. It all sounds good, but that’s easy to say before these glasses launch. A lot could change between now and then.
