Meta's Ray-Ban Smart Glasses Receive 'Live AI' Update for Enhanced User Experience
Meta has announced a new software update for its Ray-Ban smart glasses that adds "Live AI," a feature that uses a video feed to gather context for questions, similar to Google's Project Astra.
A new update, v11, is rolling out to Ray-Ban Meta smart glasses, bringing a few new options.
That includes Shazam integration, which will allow users to ask the glasses "Hey Meta, what is this song?" and then have the result read aloud. This feature will be available in the US and Canada.
Beyond that, Meta is also introducing new AI features, and they look enticing. The first of these new features is "Live AI," which allows Ray-Ban Meta glasses to capture video which is then used by the AI to offer "real-time, hands-free help" on the things you're actively doing.
Meta describes the feature in its announcement:

"The first is live AI, which adds video to Meta AI on your glasses. During a live AI session, Meta AI can see what you see continuously and converse with you more naturally than ever before. Get real-time, hands-free help and inspiration with everyday activities like meal prep, gardening, or exploring a new neighborhood. You can ask questions without saying 'Hey Meta,' reference things you discussed earlier in the session, and interrupt anytime to ask follow-up questions or change topics. Eventually live AI will, at the right moment, give useful suggestions even before you ask."
"Live translation," meanwhile, will be able to translate speech in real-time, with the other person's speech being output in English through the glasses (and also transcribed on your phone). This works for Spanish, French, and Italian.
Meta will only be rolling these features out through a waitlist, and only in the US and Canada for now.
Google is working on something just like this.
At Google I/O 2024 in May, the company showed off "Project Astra," an AI project that can use a video feed to gather context and then answer questions based on what it sees. Google teased the functionality on glasses, but has yet to roll anything out. With the announcement of Gemini 2.0 earlier this month, Google detailed new updates to Astra: it will be able to converse in multiple languages, store up to 10 minutes of memory, offer improved latency, and more. It's unclear how Meta's "Live AI" will compare, but it's certainly exciting to see this functionality arriving so soon, especially as Google's version won't be fully realized until sometime next year.