Tech Talk #19 : Augmented Reality

The main difference between AR and VR is that VR is a fully computer-generated simulation: reality, or an alternative world, is rendered entirely in graphics. AR, by contrast, overlays digital content onto the real world.


Computer vision, depth tracking, and mapping play a key role in this process. The data can be collected in real time, via cameras for example, and processed directly, which makes it possible to display digital content exactly when the user needs it.
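
To make that loop concrete, here is a minimal sketch in C++ of a single AR frame. Every type and function name in it (CameraFrame, estimateDepth, trackPose, renderOverlay) is a hypothetical placeholder, not any real SDK’s API:

```cpp
#include <cstdio>
#include <optional>

// Minimal sketch of the loop described above. Every type and function here
// is a hypothetical placeholder; a real AR SDK such as ARKit or ARCore
// provides its own equivalents.
struct CameraFrame { /* pixels + timestamp */ };
struct DepthMap    { /* per-pixel distance estimates */ };
struct Pose        { float x = 0, y = 0, z = 0; /* + orientation */ };

CameraFrame captureFrame() { return {}; }                  // camera, in real time
DepthMap estimateDepth(const CameraFrame&) { return {}; }  // depth tracking
std::optional<Pose> trackPose(const CameraFrame&, const DepthMap&) {
    return Pose{};  // computer vision + mapping: where is the device?
}
void renderOverlay(const Pose&, const DepthMap&) {
    std::puts("drawing digital content registered to the world");
}

int main() {
    // One frame of the pipeline: capture, understand, overlay.
    CameraFrame frame = captureFrame();
    DepthMap depth = estimateDepth(frame);
    if (auto pose = trackPose(frame, depth)) {
        // Content is only drawn once the device pose is known, so the
        // overlay appears exactly where (and when) the user needs it.
        renderOverlay(*pose, depth);
    }
}
```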

Advantages:

  • Enables individualized learning and enhances the learning process.
  • AR offers a wide range of applications that are continuously being improved.
  • Experience or knowledge can be shared over long distances.

Disadvantages:

  • The costs of implementing AR are comparatively high.
  • Many devices have only a low level of performance.
  • A key disadvantage is the lack of user privacy.

Do you think MR is a perfect combination of VR and AR?

Which technology is currently leading, AR or VR?

Will AR be the future?

AR and VR are so similar that they should be considered the same thing, like XR or MR.
AR is nothing but VR plus a layer of imagery from reality. Anyone who uses a Meta Quest 2 knows they can coexist on the same device.

Heh, I once made the argument, on Reddit I think, that in my humble opinion the difference between VR and AR is one of degree, not of kind.

A few months later, I watched a Michael Abrash talk where he made the exact opposite statement. I wasn’t swayed by his arguments, but I was amused that we chose the same idiom to express opposing conclusions.

What you are describing technologically is passthrough (aka VST, Video See-Through), and it’s only a small component of Mixed Reality.

The real challenges of Mixed Reality are the features that help merge the real world into VR in order to augment it:

  • Controller-free interaction: hand tracking, eye tracking, voice recognition.
  • Environment understanding: scene mapping, recognizing walls to project holograms onto, detecting obstacles for navigation, and creating collision meshes.
  • Unbounded spatial location: creating persistent anchors in space and being able to locate your device beyond the preset “guardian” or room-scale area.
  • Sharing of the experience: observer cameras and scene graphs to share your AR experience with other devices, whether they are also a headset or maybe just a 2D screen.

These functionalities should not be underestimated; without them your Mixed Reality experience will be extremely poor (if your device only supports video see-through and none of the features above, you don’t really have AR). Seeing your surroundings without the ability for an application to accurately project content onto the environment is not that useful. Being able to move around without the ability to interact isn’t that useful either. Not being able to leave your predefined “play area” without getting a warning and having to reconfigure it is quite annoying. And being alone in your experience, unable to invite other people to see what you are experiencing, gets boring and lonely.
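
To make one of these concrete, here is a minimal sketch of what a persistent spatial anchor store could look like. All names in it (Pose, AnchorStore, save, load) are hypothetical illustrations, not the API of any SDK discussed here:

```cpp
#include <array>
#include <cstdio>
#include <map>
#include <string>

// Hypothetical sketch of a persistent spatial anchor store. The idea: an
// anchor is a named pose in the real world that survives app restarts, so
// holograms can be re-attached to the same physical spot, even outside the
// preset "guardian" area.
struct Pose {
    std::array<float, 3> position{};     // metres, in the device's world map
    std::array<float, 4> orientation{};  // quaternion
};

class AnchorStore {
public:
    // Persist an anchor under a stable id. Real SDKs also serialize the
    // surrounding visual feature map so the anchor can be re-localized.
    void save(const std::string& id, const Pose& pose) { anchors_[id] = pose; }

    // In a later session, look the anchor up to re-attach content to it.
    bool load(const std::string& id, Pose& out) const {
        auto it = anchors_.find(id);
        if (it == anchors_.end()) return false;
        out = it->second;
        return true;
    }

private:
    std::map<std::string, Pose> anchors_;  // real stores persist to disk/cloud
};

int main() {
    AnchorStore store;
    store.save("kitchen-table", Pose{{1.2f, 0.0f, -0.7f}, {0, 0, 0, 1}});
    Pose p;
    if (store.load("kitchen-table", p))
        std::printf("anchor found at x=%.1f m\n", p.position[0]);
}
```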

These challenges are an order of magnitude harder to solve, for both device vendors and application developers, than the passthrough bit.

These features aren’t really must-haves for VR, and most VR devices today do not implement them (the exception being the controller-free interaction features, which are more and more common, but mostly for comfort rather than necessity). Hence I don’t quite agree with your statement that MR is nothing but VR with an extra layer of visuals.

Sure, the Quest 2 supports some of these features. So do other devices like HoloLens or Magic Leap. But these are still areas where it’s very hard to create good applications, and there is still poor commonality between vendors: legacy APIs like OpenVR support none of these features. Prior to OpenXR, they all needed dedicated SDKs that aren’t portable from one device to another. With OpenXR, some of them are in the process of being implemented cross-vendor, but the road ahead is still very, very long.
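
As an illustration of how that cross-vendor work surfaces to developers: OpenXR exposes these capabilities as optional extensions that an application has to probe for at startup. A minimal sketch, assuming the OpenXR SDK and a runtime are installed (XR_EXT_hand_tracking and XR_FB_passthrough are real extension names; error handling trimmed):

```cpp
#include <openxr/openxr.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Ask the installed OpenXR runtime which extensions it offers, then check
// for two MR-related ones. Whether they show up depends entirely on the
// vendor's runtime -- which is exactly the portability gap described above.
int main() {
    // Standard OpenXR two-call idiom: first ask for the count...
    uint32_t count = 0;
    xrEnumerateInstanceExtensionProperties(nullptr, 0, &count, nullptr);

    // ...then fetch the actual list.
    std::vector<XrExtensionProperties> exts(count, {XR_TYPE_EXTENSION_PROPERTIES});
    xrEnumerateInstanceExtensionProperties(nullptr, count, &count, exts.data());

    for (const auto& e : exts) {
        if (std::strcmp(e.extensionName, "XR_EXT_hand_tracking") == 0)
            std::puts("hand tracking available");
        if (std::strcmp(e.extensionName, "XR_FB_passthrough") == 0)
            std::puts("passthrough available");
    }
    return 0;
}
```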

The story for middleware is also quite far behind. The best engine today for supporting all these features is probably Unity, but there are still areas where devices like the Quest 2 and HoloLens 2 cannot be supported well without writing device-specific code. This is a huge blocker for app developers, who cannot create and distribute consistent experiences across all platforms.

VR is pretty much known and solved at this point. All you’re going to get now is “better performance”, “better optics”, “better battery life”.
There are a lot of really hard problems that still need to be solved to provide some of the basic features of AR.

> VR is pretty much known and solved at this point. All you’re going to get now is “better performance”, “better optics”, “better battery life”.

Well, to fool the eyes, maybe, but for real VR you need haptic feedback and a treadmill or some other locomotion solution, so there is still much room for improvement. This is much easier in AR, where only the feedback from the virtual content is missing. But at least you don’t need a treadmill.

I tried various models of AR at AWE Asia. They still seem a long way from mass adoption.



You have a good point about locomotion, though from experience, having tried one of the “treadmill” solutions, it really did not enhance the experience or the immersion at all (quite the opposite for me: cues that break immersion are much worse, IMO, than cues that are missing), and “move with thumbstick” can be pretty immersive if you get all the right cues from the other senses and the application does a good job at it.

I don’t quite agree that locomotion and world-scale tracking are easier in AR, especially if you consider the many more possible use cases, like being on a moving platform (a car or boat, which will not work well with currently tuned sensor-fusion algorithms), being outdoors (some environments have fewer tracking cues for the computer vision to pick up), and non-ideal lighting conditions. You also have new constraints for displaying holograms due to depth perception and occlusion (how do you adjust the scene when the depth in the app and the depth in reality wildly conflict as it follows your movements?). Sure, you can move your body, but having the experience follow you well is much harder.
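
To make the moving-platform point concrete: headset tracking typically fuses fast-but-drifting IMU data with slower absolute vision data, under the assumption that any measured acceleration comes from head motion. A toy 1-D complementary filter (all numbers and names illustrative, not any vendor’s algorithm) shows where that assumption breaks in a car:

```cpp
#include <cstdio>

// Minimal 1-D complementary filter, illustrative only: fuse a fast but
// drifting IMU estimate with slow but absolute camera-based tracking.
// The hidden assumption: IMU acceleration is caused by *head* motion.
// In a car or boat, the vehicle's acceleration is indistinguishable from
// head motion to the IMU, so the fused pose drifts with the vehicle.
struct Filter1D {
    double pos = 0.0, vel = 0.0;
    double alpha = 0.98;  // trust the IMU short-term, vision long-term

    void predictFromImu(double accel, double dt) {
        vel += accel * dt;  // vehicle acceleration leaks in right here
        pos += vel * dt;
    }
    void correctFromVision(double measuredPos) {
        pos = alpha * pos + (1.0 - alpha) * measuredPos;
    }
};

int main() {
    Filter1D f;
    // Headset is stationary inside a car accelerating at 2 m/s^2 for 1 s:
    for (int i = 0; i < 100; ++i) f.predictFromImu(2.0, 0.01);
    f.correctFromVision(0.0);  // vision says we haven't moved at all
    std::printf("estimated position: %.3f m (should be 0)\n", f.pos);
}
```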

I think controller haptics could be there next year with PSVR2. The haptics on the DualSense are pretty good, and they just need to apply that tech to VR. I’m not convinced more advanced systems like haptic gloves will help immersion in VR (though I have not tried any). I believe the force-feedback experience is best achieved with controllers specific to the experience (think wheels, force-feedback joysticks, motion rigs); these are pretty good today (having tried a couple) and have been for a while. So yeah, I maintain that these problems are mostly solved :wink: and now it’s only about optimization on many levels: performance, form factor, pricing, accessibility…

Now for haptics and force-feedback in Mixed Reality, that’s one more thing to add to the list of “hard to solve, not solved yet” things in my post.
