Both Meta and Snap have now put their glasses in the hands of (or maybe on the faces of) reporters. And both have proved that after years of promise, AR specs are at last A Thing. But what’s really interesting about all this to me isn’t AR at all. It’s AI.

Take Meta’s new glasses. They are still just a prototype, as the cost to build them—reportedly $10,000—is so high. But the company showed them off anyway this week, awing basically everyone who got to try them out. The holographic functions look very cool. The gesture controls also appear to work really well. And possibly best of all, they look more or less like normal, if chunky, glasses. (Caveat: I may have a different definition of normal-looking glasses than most people.) If you want to learn more about their features, Alex Heath has a great hands-on writeup in The Verge.

But what’s so intriguing to me about all this is the way smart glasses enable you to seamlessly interact with AI as you go about your day. I think that’s going to be a lot more useful than viewing digital objects in physical spaces. Put more simply: it’s not about the visual effects, it’s about the brains.

Today if you want to ask a question of ChatGPT or Google’s Gemini or what have you, you pretty much have to use your phone or laptop to do it. Sure, you can use your voice, but it still needs that device as an anchor. That’s especially true if you have a question about something you see—you’re going to need the smartphone camera for that. Meta has already pulled ahead here by letting people interact with its AI via its Ray-Ban Meta smart glasses. It’s liberating to be freed from the tether of the screen. Frankly, staring at a screen kinda sucks.

That’s why when I tried Snap’s new Spectacles a couple of weeks ago, I was less taken by the ability to simulate a golf green in the living room than I was with the way I could look out at the horizon, ask Snap’s AI agent about the tall ship I saw in the distance, and have it not only identify the ship but give me a brief description of it. Similarly, Heath notes in The Verge that the most impressive part of Meta’s Orion demo came when he looked at a set of ingredients and the glasses told him what they were and how to make a smoothie out of them.

The killer feature of Orion or other glasses won’t be AR ping-pong games—batting an invisible ball around with the palm of your hand is just goofy. But the ability to use multimodal AI to better understand, interact with, and just get more out of the world around you without getting sucked into a screen? That’s amazing.

And really, that’s always been the appeal. At least to me. Back in 2013, when I was writing about Google Glass, what was most revolutionary about that extremely nascent face computer was its ability to offer up relevant, contextual information using Google Now (at the time the company’s answer to Apple’s Siri) in a way that bypassed my phone.

While I had mixed feelings about Glass overall, I argued, “You are so going to love Google Now for your face.” I still think that’s true.
