I’m really curious how good the depth sensor is. On the iPhone there are apps that let you use the depth sensor as a 3D scanner; I’d like to see something like that here. I’m particularly curious whether it would be possible to use the scanner to capture the contours of your face, then use that to 3D print a custom facial interface out of TPU, similar to what the Bigscreen Beyond does.
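To make the idea concrete, here’s a rough sketch of how a depth frame could be turned into a printable point cloud. This assumes the runtime actually exposes a raw depth image plus the camera intrinsics (fx, fy, cx, cy), which Meta hasn’t confirmed it does; the function names are mine, not any real Quest API:

```python
# Hypothetical sketch: back-project a depth frame into a point cloud
# for 3D printing. Assumes access to a raw (H, W) depth image in
# meters and the pinhole intrinsics -- neither is guaranteed to be
# exposed by the Quest runtime.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, max_depth=1.0):
    """Convert a (H, W) depth image to an (N, 3) point cloud.

    Points beyond max_depth (anything past a face at arm's length)
    are discarded.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = (depth > 0) & (depth < max_depth)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx   # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

def save_ply(points, path):
    """Write an ASCII PLY; MeshLab or Blender can turn it into a
    printable surface."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```

Whether the sensor’s resolution is high enough for a well-fitting facial interface is the open question.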
Doesn’t sound that great, according to UploadVR:
Dynamic occlusion should make mixed reality on Quest 3 look more natural. However, the headset’s depth sensing resolution is very low, so it won’t pick up details like the spaces between your fingers and you’ll see an empty gap around the edges of objects.
The depth map is also only suggested to be leveraged out to 4 meters, after which “accuracy drops significantly,” so developers may want to also use the Scene Mesh for static occlusion.
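If those numbers are right, the hybrid occlusion logic developers would end up writing looks roughly like this. The array names and the per-pixel fallback are my own sketch of what the article describes, not Meta’s actual Depth API:

```python
# Hypothetical sketch of hybrid occlusion: trust the dynamic depth
# map up close, fall back to depth rendered from the static Scene
# Mesh past 4 m, where accuracy reportedly drops off.
import numpy as np

DYNAMIC_DEPTH_LIMIT_M = 4.0  # cutoff suggested by UploadVR

def occlusion_depth(dynamic_depth, scene_mesh_depth):
    """Per-pixel real-world depth used to occlude virtual content.

    dynamic_depth:    (H, W) live depth-sensor map in meters
    scene_mesh_depth: (H, W) depth rendered from the static Scene Mesh
    """
    use_dynamic = (dynamic_depth > 0) & (dynamic_depth < DYNAMIC_DEPTH_LIMIT_M)
    return np.where(use_dynamic, dynamic_depth, scene_mesh_depth)

def is_occluded(virtual_depth, dynamic_depth, scene_mesh_depth):
    """True wherever a virtual pixel sits behind the real world."""
    return virtual_depth > occlusion_depth(dynamic_depth, scene_mesh_depth)
```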
Thanks!