
Samsung's Android XR Headset is a Curious Combo of Quest & Vision Pro, With One Stand-out Advantage


Samsung is the first partner to formally announce a new MR headset based on the newly announced Android XR. The device, codenamed "Project Moohan," is planned for consumer release in 2025. We went hands-on with an early version.

Note: Samsung and Google aren’t yet sharing any key details for this headset like resolution, weight, field-of-view, or price. During my demo I also wasn’t allowed to capture photos or videos, so we only have an official image for the time being.

If I told you that Project Moohan felt like a mashup between Quest and Vision Pro, you'd probably get the idea that it has a lot of overlapping capabilities. But I'm not just making a rough analogy. Just looking at the headset, it's clear that it has taken significant design cues from Vision Pro. Everything from the colors to the button placement to the calibration steps makes it unmistakably aware of other products on the market.

And then on the software side, if I had told you “please make an OS that mashes together Horizon OS and VisionOS,” and you came back to me with Android XR, I’d say you nailed the assignment.

It’s actually uncanny just how much Project Moohan and Android XR feel like a riff on the two other biggest headset platforms.

But this isn’t a post to say someone stole something from someone else. Tech companies are always borrowing good ideas and good designs from each other—sometimes improving them along the way. So as long as Android XR and Project Moohan got the good parts of others, and avoided the bad parts, that’s a win for developers and users.

And many of the good parts do indeed appear to be there.

Hands-on With Samsung Project Moohan Android XR Headset

Image courtesy Google

Starting from the Project Moohan hardware—it’s a good-looking device, no doubt. It definitely has the ‘goggles’-style look of Vision Pro, as well as a tethered battery pack (not pictured above).

But where Vision Pro has a soft strap (which I find rather uncomfortable without a third-party upgrade), Samsung's headset has a rigid strap with a tightening dial, and an overall ergonomic design that's pretty close to Quest Pro. That means an open-peripheral design, which is great for using the headset for AR. Also like Quest Pro, the headset has magnetic snap-on blinders for those who want a blocked-out periphery for fully immersive experiences.

And though the goggles-look and even many of the button placements (and shapes) are strikingly similar to Vision Pro, Project Moohan doesn’t have an external display to show the user’s eyes. Vision Pro’s external ‘EyeSight’ display has been criticized by many, but I maintain it’s a desirable feature, and one that I wish Project Moohan had. Coming from Vision Pro, it’s just kind of awkward to not be able to ‘see’ the person wearing the headset, even though they can see you.

Samsung has been tight-lipped about the headset’s tech details, insisting that it’s still a prototype. However, we have learned the headset is running a Snapdragon XR2+ Gen 2 processor, a more powerful version of the chip in Quest 3 and Quest 3S.

In my hands-on I was able to glean a few details. For one, the headset is using pancake lenses with automatic IPD adjustment (thanks to integrated eye-tracking). The field-of-view feels smaller than Quest 3 or Vision Pro, but before I say that definitively, I first need to try different forehead pad options (confirmed to be included) which may be able to move my eyes closer to the lenses for a wider field-of-view.

From what I got to try however, the field-of-view did feel smaller—albeit, enough to still feel immersive—and so did the sweet spot due to brightness fall-off toward the outer edges of the display. Again, this is something that may improve if the lenses were closer to my eyes, but the vibe I got for now is that, from a lens standpoint, Meta’s Quest 3 is still leading, followed by Vision Pro, with Project Moohan a bit behind.

Although Samsung has confirmed that Project Moohan will have its own controllers, I didn’t get to see or try them yet. I was told they haven’t decided if the controllers will ship with the headset by default or be sold separately.

So it was all hand-tracking and eye-tracking input in my time with the headset. Again, this was a surprisingly similar mashup of both Horizon OS and VisionOS. You can use raycast cursors like Horizon OS or you can use eye+pinch inputs like VisionOS. The Samsung headset also includes downward-facing cameras so pinches can be detected when your hands are comfortably in your lap.

When I actually got to put the headset on, the first thing I noticed was how sharp my hands appeared to be. From memory, the headset's passthrough cameras appear to have a sharper image than Quest 3 and less motion blur than Vision Pro (though I only got to test in excellent lighting conditions). Given that my hands seemed sharp but things further away seemed less so, it almost felt like the passthrough cameras were focused at roughly arm's-length distance.


Inside Android XR

Anyway, onto Android XR. As noted, it's immediately comparable to a mashup of Horizon OS and VisionOS. You'll see the same kind of 'home screen' as Vision Pro, with app icons on a transparent background. Look and pinch to select one and you get a floating panel (or a few) containing the app. It's even the same gesture to open the home screen (look at your palm and pinch).

The system windows themselves look closer to those of Horizon OS than VisionOS, with mostly opaque backgrounds and the ability to move the window anywhere by reaching for an invisible frame that wraps around the entire panel.

In addition to flat apps, Android XR can do fully immersive stuff too. I got to see a VR version of Google Maps which felt very similar to Google Earth VR, letting me pick anywhere on the globe to visit, with major cities modeled in 3D, Street View imagery, and, newly, volumetric captures of interior spaces.

While Street View is monoscopic 360 imagery, the volumetric captures are rendered in real-time and fully explorable. Google said this was a gaussian splat solution, though I’m not clear on whether it was generated from existing interior photography that’s already available on standard Google Maps, or if it required a brand new scan. It wasn’t nearly as sharp as you’d expect from a photogrammetry scan, but not bad either. Google said the capture was running on-device and not streamed, and that sharpness is expected to improve over time.

Google Photos has also been updated for Android XR, including the ability to automatically convert any existing 2D photo or video from your library into 3D. In the brief time I had with it, the conversions looked really impressive; similar in quality to the same feature on Vision Pro.

YouTube is another app Google has updated to take full advantage of Android XR. In addition to watching regular flatscreen content on a large, curved display, you can also watch the platform’s existing library of 180, 360, and 3D content. Not all of it is super high quality, but it’s nice that it’s not being forgotten—and will surely be added to as more headsets are able to view this kind of media.

Google also showed me a YouTube video that was originally shot in 2D but automatically converted to 3D to be viewed on the headset. It looked pretty good, seemingly similar in quality to the Google Photos 3D conversion tech. It wasn't made clear whether this is something YouTube creators would need to opt into, or something YouTube would do automatically. I'm sure there are more details to come.

The Stand-out Advantage (for now)

Android XR and Project Moohan, both from a hardware and software standpoint, feel very much like a Google-fied version of what’s already on the market. But what it clearly does better than any other headset right now is conversational AI.

Google's AI agent, Gemini (specifically the 'Project Astra' variant) can be triggered right from the home screen. Not only can it hear you, but it can see what you see in both the real world and the virtual world—continuously. Its ongoing perception of what you're saying and what you're seeing makes it feel smarter, better integrated, and more conversational than the AI agents on contemporary headsets.

Yes, Vision Pro has Siri, but Siri can only hear you and is mostly focused on single-tasks rather than an ongoing conversation.

And Quest has an experimental Meta AI agent that can hear you and see what you’re seeing—but only the real world. It has no sense of what virtual content is in front of you, which creates a weird disconnect. Meta says this will change eventually, but for now that’s how it works. And in order to ‘see’ things, you have to ask it a question about your environment and then stand still while it makes a ‘shutter’ sound, then starts thinking about that image.

Gemini, on the other hand, gets something closer to a low framerate video feed of what you’re seeing in both the real and virtual worlds; which means no awkward pauses to make sure you’re looking directly at the thing you asked about as a single picture is taken.

Gemini on Android XR also has memory, which gives it a boost when it comes to contextual understanding. Google says it has a rolling 10-minute memory and retains "key details of past conversations," which means you can refer not only to things you talked about recently, but also things you saw.
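Google hasn't shared how this memory works internally, but the behavior described (a fixed 10-minute retention window over things the agent heard and saw) can be sketched as a simple rolling buffer. Everything below, including the `RollingMemory` class and its API, is a hypothetical illustration for intuition, not Google's actual implementation:

```python
import time
from collections import deque


class RollingMemory:
    """Hypothetical sketch of a rolling conversation memory with a
    fixed retention window (illustrative only, not Google's design)."""

    def __init__(self, window_seconds=600):  # 600 s = 10 minutes
        self.window = window_seconds
        self.events = deque()  # (timestamp, description) pairs, oldest first

    def observe(self, description, now=None):
        # Record something the agent heard or saw.
        now = time.time() if now is None else now
        self.events.append((now, description))
        self._evict(now)

    def recall(self, now=None):
        # Return everything still inside the rolling window,
        # oldest first, for use as conversational context.
        now = time.time() if now is None else now
        self._evict(now)
        return [desc for _, desc in self.events]

    def _evict(self, now):
        # Drop events older than the retention window.
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()


memory = RollingMemory()
memory.observe("Spanish sign: 'Salida'", now=0)
memory.observe("French sign: 'Sortie'", now=120)
print(memory.recall(now=300))  # both signs still in the window
print(memory.recall(now=700))  # the Spanish sign has aged out
```

A real agent would presumably also distill "key details" into longer-term storage rather than simply discarding them, which would explain how references to older context can survive past the raw window.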

I was shown what is by now becoming a common AI demo: you’re in a room filled with stuff and you can ask questions about it. I tried to trip the system up with a few sly questions, and was impressed at its ability to avoid the diversions.

I used Gemini on Android XR to translate a sign written in Spanish into English. It quickly obliged. Then I asked it to translate another nearby sign into French, knowing full well that this sign was already in French. Gemini had no problem with this, correctly noting, "this sign is already in French, it says [xyz]," and it even spoke the French words with a French accent.

I moved on to asking about some other objects in the room, and a few minutes after asking about the signs, I said "what did that sign say earlier?" It knew what I was talking about and read the French sign aloud. Then I said "what about the one before that?"…

A few years ago this question—”what about the one before that?”—would have been a wildly challenging question for any AI system (and it still is for many). Answering it correctly requires multiple levels of context from our conversation up to that point, and an understanding of how the thing I had just asked about relates to another thing we had talked about previously.

But it knew exactly what I meant, and quickly read the Spanish sign back to me. Impressive.

Gemini on Android XR can also do more than just answer general questions. It remains to be seen how deep this will be at launch, but Google showed me a few ways that Gemini can actually control the headset.

For one, asking it to “take me to the Eiffel tower,” pulls up an immersive Google Maps view so I can see it in 3D. And since it can see virtual content as well as real, I can continue having a fairly natural conversation, with questions like “how tall is it?” or “when was it built?”

Gemini can also fetch specific YouTube videos that it thinks are the right answer to your query. So saying something like “show a video of the view from the ground,” while looking at the virtual Eiffel tower, will pop up a YouTube video to show what you asked for.

Ostensibly Gemini on Android XR should also be able to do the usual assistant stuff that most phone AI can do (e.g. send text messages, compose an email, set reminders), but it will be interesting to see how deep it will go with XR-specific capabilities.

Gemini on Android XR feels like the best version of an AI agent on a headset yet (including what Meta has right now on their Ray-Ban smartglasses) but Apple and Meta are undoubtedly working toward similar capabilities. How long Google can maintain the lead here remains to be seen.

Gemini on Project Moohan feels like a nice value-add when using the headset for spatial productivity purposes, but its true destiny probably lies on smaller, everyday wearable smartglasses, which I also got to try… but more on that in another article.


