Thoughts on FDVR
I recently came across the concept of FDVR, or full-dive virtual reality. Strangely, almost no one outside r/singularity calls it FDVR, though it does go by other names, like immersion (virtual reality), but direct searches for FDVR always end up back at r/singularity.
I'm surprised no research mentions FDVR, nor have I ever heard a professor or anyone else claim to be working toward that goal, even as a long-range moonshot. The closest thing is AAA gaming, with Unreal Engine and its Nanite geometry as key tech, especially given some of the visuals in Subnautica and whatever Ark 2 looks like whenever it comes out.
With current trends, cloud-based FDVR will happen long before local FDVR: Nvidia already runs cloud GPUs for gamers, and immersion-level content will demand large computational resources. Latency could be an issue, but it's already tolerable with Nvidia's cloud GPUs today, and with future latency improvements, sending only the minimal information needed to render, and some compression, it could work well. Capitalism is also supportive, because you can scale up servers while charging a subscription, a model that's increasingly prominent in business today. Servers also probably mean multiplayer, which is a nice plus.
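To make the latency point concrete, here's a toy motion-to-photon budget in Python; every number is an assumption I picked for illustration, and the ~20 ms target is just a commonly cited comfort figure for VR:

```python
# Toy motion-to-photon latency budget for cloud-rendered FDVR.
# Every number here is an assumption for illustration, not a measurement.
def motion_to_photon_ms(network_rtt=15, encode=4, decode=3, render=8, scanout=5):
    """Sum the stages of a cloud streaming pipeline, in milliseconds."""
    return network_rtt + encode + decode + render + scanout

COMFORT_TARGET_MS = 20  # commonly cited motion-to-photon comfort target for VR
total = motion_to_photon_ms()
print(f"estimated {total} ms vs. ~{COMFORT_TARGET_MS} ms target, "
      f"over by {max(0, total - COMFORT_TARGET_MS)} ms")
```

Even with optimistic per-stage numbers, the network round trip dominates, which is why edge data centers and sending only minimal render info matter so much here.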
Some of the predictions on r/singularity are rather insane. No, you don't need ASI + nanobots + a "neuro cortex chip"; that's just overkill. In terms of tech achievable in the next couple of decades (two to three), I feel like we could get to FDVR, or proto-FDVR, in every sense:
Sight:
- Kind of the whole point of the headsets
- better graphics, AI world models, and more realistic physics help us unlock this (rough pixel-throughput math after this list)
- (side note, but minimizing headset size is ideal; people will wear this for hours or days at a time)
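As a rough sense of scale for "better graphics", here's back-of-the-envelope math for eye-resolution rendering; the 60 pixels per degree, 110x100 degree per-eye FOV, and 90 Hz figures are assumptions, not the specs of any real headset:

```python
# Back-of-the-envelope pixel throughput for "eye-resolution" rendering.
# 60 px/deg acuity, 110x100 deg per-eye FOV, and 90 Hz are assumptions.
px_per_deg = 60
fov_h_deg, fov_v_deg = 110, 100
refresh_hz = 90
eyes = 2

pixels_per_eye = (px_per_deg * fov_h_deg) * (px_per_deg * fov_v_deg)
pixels_per_second = pixels_per_eye * eyes * refresh_hz
print(f"{pixels_per_eye / 1e6:.1f} MP per eye, {pixels_per_second / 1e9:.1f} Gpx/s total")
# Foveated rendering exists precisely because brute-forcing this is so expensive.
```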
Sound:
- spatial audio is already good enough that FDVR games can immerse you with multiple layers of sound from multiple directions (a sketch of one underlying cue follows this list)
- future generations of headphones and mics in general
- better recorded sound, and maybe AI-generated audio?
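For a flavor of what spatial audio engines actually lean on, here's a minimal sketch of one binaural cue, interaural time difference, using the Woodworth approximation with an assumed head radius; real engines layer HRTFs, level differences, and reverb on top of this:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of interaural time difference (ITD)
    for a source at the given azimuth (0 degrees = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg -> {itd_seconds(az) * 1e6:.0f} microseconds")
```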
Smell:
- future iterations of Osmo
- AI probably comes in here on smell datasets, generating plausible smells, smell file formats (a hypothetical one is sketched below), etc. I can't wait to run RLHF from smells one day
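Purely hypothetical, but here's what a "smell frame" in such a file format might look like as a data structure; the channel names and intensity ranges are invented for illustration and don't correspond to any real Osmo API or device:

```python
from dataclasses import dataclass, field

@dataclass
class SmellFrame:
    """Hypothetical per-timestep smell data: odorant channel -> intensity in [0, 1]."""
    timestamp_ms: int
    channels: dict[str, float] = field(default_factory=dict)

    def clamp(self) -> None:
        # Keep intensities in the valid range before sending to a (hypothetical) emitter.
        for name, value in self.channels.items():
            self.channels[name] = min(1.0, max(0.0, value))

campfire = SmellFrame(timestamp_ms=0, channels={"smoke": 0.8, "pine": 0.4, "damp_earth": 0.2})
campfire.clamp()
```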
Taste:
- Unsure actually, but this shouldn't be an issue too often, considering you're not eating real food, and the other senses can cover for immersion in most cases
Touch:
- some way to apply pressure, probably with material engineering and full suits
- depending on how sensitive or how low-res the material is, you could feel different textures (fabric, dirt, bricks, etc.); rough actuator-count math follows this list
- it also needs a way to restrict movement believably, so you feel like you're actually touching things or hitting a wall; I'm unsure how current tech would pull that off
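To get a feel for the resolution problem, here's rough actuator-count math using two-point discrimination distance as the target spacing per body region; the areas and distances are ballpark assumptions, not anatomical data:

```python
# Rough actuator-count estimate for a haptic suit, spacing actuators at the
# two-point discrimination distance of each region. All figures are ballpark assumptions.
regions = {
    # name: (skin area in cm^2, target spacing in mm)
    "fingertips": (50, 3),
    "palms": (300, 10),
    "torso_and_back": (6000, 40),
    "arms_and_legs": (10000, 45),
}

total = 0
for name, (area_cm2, spacing_mm) in regions.items():
    per_cm2 = (10 / spacing_mm) ** 2  # actuators per cm^2 at that spacing
    count = int(area_cm2 * per_cm2)
    total += count
    print(f"{name:>15}: ~{count:,} actuators")
print(f"{'total':>15}: ~{total:,}")
```

The takeaway is that fingertips need orders of magnitude denser coverage than the torso, so a realistic suit would concentrate actuators in the gloves.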
Movement, Balance, etc.:
- future iterations of the Disney HoloTile, maybe with an added 3D element, so tiles can move you around and change elevation
- tile size and tile count will trade off against each other: the smaller the tiles, the more of them the floor needs, and the more closely it can mimic terrain (quick scaling math below)
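The scaling is simple but worth seeing: assuming a 2 m square platform (an arbitrary size for illustration), tile count grows with the square of platform size over tile size:

```python
# Tile count vs. tile size for a square omnidirectional floor.
# The 2 m platform is an arbitrary assumption; the point is the quadratic growth.
platform_m = 2.0
for tile_cm in (20, 10, 5, 2):
    tiles_per_side = int(platform_m * 100 / tile_cm)
    print(f"{tile_cm:>2} cm tiles -> {tiles_per_side ** 2:,} tiles")
```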
Of course, an embedded brain chip could do it all, but that's more complicated than all of these tech branches converging.
Peak FDVR is Ready Player One, where people spend more time in FDVR worlds than out of them, though hopefully less dystopian.
Two ways to see it here:
- Utopia: Ready Player One without the big corpo ads. Infinitely explorable worlds, defining your own life and destiny in any way you like, and maybe actions that impact the real world, like booking a spaceship probe to head to alien worlds and report back once it has a simulation for you to explore, or IRL droids being driven from VR. Cybersecurity looks wild in this world, though: you have to look for hacks and patches while also dealing with how hackers manifest, whether that's hallucinations, psychological torture, or just glitches.
- Dystopia: pure monkey-activation dopamine hell, or enough ads to cause borderline seizures; one company controls and overprices everything, and people simultaneously want the whole model to crash and burn for their freedom while also needing the servers that keep FDVR alive; a vicious cycle of control.
Also, in terms of GenAI, I think a big step forward is a Sora for Nanite or something similar: fully generated 3D worlds, where attention and sequence length determine how volumetrically big the world can be (think the X*Y*Z axes).
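To see why sequence length is the bottleneck, here's toy math: at a fixed patch resolution, token count grows cubically with world size, and vanilla attention cost grows quadratically in token count (the 0.5 m patch size is an arbitrary assumption):

```python
# Token count grows cubically with world size at fixed patch resolution,
# and vanilla attention cost grows quadratically in tokens.
def tokens_for_world(size_m, patch_m=0.5):
    per_axis = int(size_m / patch_m)
    return per_axis ** 3  # X * Y * Z patches

for size_m in (10, 50, 100):
    n = tokens_for_world(size_m)
    print(f"{size_m:>3} m cube -> {n:,} tokens, attention pairs ~ {n * n:.2e}")
```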
If you do have any research ideas, email vatsapandey123@gmail.com, and I'll take a look; I might be able to work on a couple, since my uni does have funding and research in VR.