In the early days of phones as we now know them (feature phones, specifically), things were, how should I say this… batshit wacky. There were all sorts of different form factors (looking at you, T-shaped LG VX9400), boundary-pushing, albeit doomed, features (remember FM transmitters?), and enough variety in color and shape to make your head spin like a Samsung Juke. It was interesting; it was tumultuous; it was, at times, downright stupid. It was also, in a lot of ways, just like smart glasses are now.
Smart glasses, in case you haven’t been keeping an eye on them, have had quite a year. Most of that hype can be attributed to one company, Meta, which has had some initial success in selling its Ray-Ban-branded AI glasses—enough success to justify pushing out a pair of $800 display smart glasses, the Meta Ray-Ban Display. And it’s not just Ray-Bans; Meta has greatly expanded its smart glasses universe with Oakley-branded specs too, bringing its current smart glasses lineup to four different models—two types of Ray-Ban smart glasses and two types of Oakley (or, I guess, five if you still count the Stories).
But just because Meta is dominating the space with volume doesn’t mean it’s alone in its quest. Far from it, in fact. As Meta plows ahead, the wake of its warpath is growing bigger and bigger, and the very definition of smart glasses is ballooning along with it. That definition, as we’re learning, can vary greatly.
Meta Ray-Ban Display aren’t the only show in town. © Raymond Wong / Gizmodo

Meta’s vision is clear: smart glasses, in its estimation, are all about cameras, computer vision, audio, and AI. While a screen also factors in via the Meta Ray-Ban Display, those four components are what all of Meta’s smart glasses have in common. Meta thinks everyone wants to take pictures and videos, and everyone wants to interface with its sometimes frustrating AI voice assistant. Other players in the smart glasses game aren’t so sure. Even Realities, for example, just launched its Even G2 smart glasses, which have a screen but no speakers or cameras. As a result, they’re lightweight and more glasses-like (alternatively, less gadget-like), and also more appealing to anyone worried about the privacy implications of walking around with a discreetly placed camera on your face.
That last part, privacy, is important. Amid all of the diverging hardware choices, there’s a more philosophical debate happening. Even Realities hasn’t shied away from differentiating itself from Meta as a more privacy-focused alternative, but there’s an implicit throughline of ambient computing, too. The screen inside the Even G2 isn’t designed to dominate your field of view; it’s meant to blend in, allowing for semi-discreet notifications, navigation, news updates, and more. These are what insiders would call heads-up display glasses, or HUD glasses for short. And they, of course, have a foil in the smart glasses world too.
The Inmo Air 3 smart glasses, which I tested out, lean fully into AR, for example. Instead of a HUD, they offer a full-on projection experience (similar to what you’d get with smart glasses made by Xreal) that plasters a big, unavoidable screen in front of your face. And the thing is, Inmo doesn’t picture devices like the Air 3 as a virtual screen that you stay home and game or watch movies with. Inmo envisions people wearing the Air 3 (like Meta’s Orion prototype) out and about. All day. Every day. In some ways, the philosophical divide here is even more stark. Are smart glasses a background accessory? Are they ambient? Are they immersive? Even more existentially, are they a computer? Or are they just another Apple Watch on your face?
The Inmo Air 3 have a big, immersive AR screen, which you can’t see here because it’s near impossible to photograph. © Raymond Wong / Gizmodo

Even when the philosophy isn’t totally divergent, makers of smart glasses don’t seem to agree on a template quite yet. Take screens, for example. Meta, for its part, gave the Meta Ray-Ban Display a Liquid Crystal on Silicon (LCoS) screen that’s lit up by an LED light source. The result is a full-color display with a high max brightness of 5,000 nits. It’s interesting technology, to be sure, but do smart glasses need that much color and brightness? Maybe not; competitors like Rokid, for example, opt for a monochrome green microLED display that isn’t as bright. It’s a simple, almost early-CRT-computing-like screen, but it gets the job done.
As experimental as things are right now, that free-wheeling spirit likely won’t last forever. Large presences in the space loom, and with their entry will likely come some kind of consensus. Apple, for example, is rumored to be pursuing its own pair of smart glasses, which could come next year or the year after. What those will bring to the table is anyone’s guess, but the impact of an Apple-made pair of smart glasses is obvious. With deep iPhone integration, a penchant for polished UI, and a head start with software like visionOS, which is used inside its XR headset (er, sorry, spatial computer), the Vision Pro, Apple has a chance to blow the gates wide open.

I say “a chance” here because the same could have been said for the Vision Pro, which, as you probably know, hasn’t quite lived up to expectations. Nothing is guaranteed in AR/XR, even with cachet as powerful as Apple’s. For now, companies are destined to throw AI and microLEDs at the wall and see what sticks. So far, the results have been interesting, to say the least, and have even given rise to weird controllers like Meta’s Neural Band and touch-sensitive smart rings. Some results, I’ll admit, have made me want to swear off gadgets entirely. I can only hope that one day, when smart glasses are finally figured out, made obsolete, or whatever their ultimate fate might be, we can look back on all of this chaos and, just like phones, say, “Damn, that was fun.”