In late November of last year, I tweeted a bite-sized view of what I think the future of VR (I’ll use the term VR as inclusive of AR) will look like:
I think there's a world, perhaps 10-ish years from now, in which phones exist entirely as apps in AR (i.e. you always have AR glasses or contacts or implants, and they enable you to see / interact with your "phone").
— Brown Recluse (@RDesai01) November 29, 2017
The point of this post is to expand a bit more on why I believe this to be true and what needs to happen to make it real. We’re going to discuss a couple of concepts around screen resolution, graphics engines, and immersive VR. Taken together, those concepts will, I hope, paint a picture of what apps in VR might look like 10 to 20 years from now.
First, let’s take a look at screen resolution. The standard resolution in VR today is about 1.3 megapixels per eye (1080 × 1200 panels), used by both the Oculus Rift and the HTC Vive. While that might sound pretty good, it gets you images that look like this:
Not the greatest quality out there. This is because VR today is nowhere near the resolution of the human eye. If you think of our eyes as video cameras, they operate at roughly 576 megapixels. Even a new VR headset from the stealth-mode startup Varjo, operating at 70 megapixels, doesn’t come close. So we have a ways to go to get headsets up to snuff; however, I definitely expect it to happen as the tech matures.
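The gap is easy to quantify. A back-of-the-envelope comparison, using the headsets' published per-eye panel resolutions and the ~576-megapixel eye estimate (the exact figures are assumptions for illustration):

```python
# Back-of-the-envelope comparison of headset resolution vs. the
# oft-cited ~576-megapixel estimate for the human eye's full visual field.
# Panel figures are assumptions based on published specs.

EYE_MEGAPIXELS = 576

headsets = {
    "Oculus Rift / HTC Vive (per eye)": 1080 * 1200,  # ~1.3 MP
    "Varjo prototype (claimed)": 70_000_000,          # ~70 MP
}

for name, pixels in headsets.items():
    mp = pixels / 1_000_000
    pct = 100 * mp / EYE_MEGAPIXELS
    print(f"{name}: {mp:.1f} MP ({pct:.2f}% of the eye's ~{EYE_MEGAPIXELS} MP)")
```

Even the most optimistic entry here lands at barely a tenth of the eye's estimated resolution, which is the sense in which "reality resolution" is still a long way off.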
Next, we need to get the actual VR content to the right level of fidelity. One of the top firms in 3D games and simulated content is Unity, whose graphics engine is an industry standard for making really high-quality content. For example, look at this:
It’s visually stunning, but still not “real.” It’s too perfect, yet not high-resolution enough. But if you think about how far video games have come in the last 10 to 20 years, I don’t think we’re too far off from something pretty close to reality.
One of the underlying issues with VR, in my opinion, is the lack of full-body immersion. If you’re going to put me into a world, really put me in it. I want to feel the rain and hear the crickets. So to complement the visual developments we’re making, we need to build virtual reality for sound, smell, and touch as well. I could expand on that a lot, so perhaps I’ll come back to it in another post.
But if we can build headsets and create content at “reality resolution,” supplemented with VR for our other senses, I think we can do some very cool things. One of them is rendering our current devices inside of VR / AR.
If the computer strapped to your face can generate convincing facsimiles of reality, why wouldn’t it be able to generate a convincing mobile phone or laptop? My guess is that we might end up with laptops and smartphones as apps inside of VR.
The action of using my phone might be:
- I reach into my pocket and “feel” my “phone” thanks to a haptic feedback suit.
- As I pull it out, the VR headset (or implant, or whatever) renders it in real time, so it feels like I’ve actually taken a physical object out of my pocket.
- Smartphone apps run as subroutines inside the larger “phone” app. Neither the phone nor its apps exist in physical reality, but I see and feel them through my VR setup.
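The "apps as subroutines inside a phone app" structure above can be sketched in a few lines. This is purely illustrative; every class and method name here is hypothetical, not a real AR API:

```python
# Illustrative sketch only: the "phone" is just another app the AR
# runtime renders, and smartphone apps are sub-apps hosted inside it.
# All names are hypothetical.

class VirtualApp:
    """A smartphone app that exists only as rendered AR content."""
    def __init__(self, name):
        self.name = name

    def render(self):
        return f"[{self.name} UI]"

class VirtualPhone:
    """The phone itself: a container app that hosts other apps."""
    def __init__(self):
        self.apps = []

    def install(self, app):
        self.apps.append(app)

    def render(self):
        # The headset draws the phone's body plus its installed apps;
        # a haptic suit would supply the matching touch feedback.
        return "[phone body] " + " ".join(a.render() for a in self.apps)

phone = VirtualPhone()
phone.install(VirtualApp("Messages"))
phone.install(VirtualApp("Camera"))
print(phone.render())
```

The design point is just the nesting: nothing in this hierarchy corresponds to a physical object, yet the rendering (and, paired with haptics, the touch) is layered exactly the way physical phones and their apps are today.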
As a thought experiment of what VR might look like in the future, I think this is pretty cool. It makes me wonder whether we’ll see a consolidation of technologies more broadly, not just folding phones and laptops into VR.