Are there any AR/VR technologies addressing software developers? I would very much like to replace my shitty monitor with AR/VR glasses for development.
I know this may not seem like the "intended use case", but the developer experience could use some innovation for a change. It's also one way to bring these technologies closer to developers.
Not for coding. The resolution just isn't there yet.
Roughly, per-eye resolution is in the same ballpark as an HD display, but stretched over a 90+ degree field of view. Fonts need to be very large to be legible. You can create a theater-sized virtual monitor, but it's just taxing to use, and aliasing artifacts make it worse.
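To make the "stretched over the field of view" point concrete, here is a back-of-the-envelope comparison in angular resolution (pixels per degree). The specific numbers — a 1920-pixel-wide per-eye panel over a 100° FOV, and a 27" QHD monitor viewed from 60 cm — are illustrative assumptions, not specs of any particular headset:

```javascript
// Back-of-the-envelope angular resolution comparison.
// All numbers are assumptions for illustration.

// Headset: ~1920 horizontal pixels per eye spread over ~100 degrees of FOV.
const headsetPpd = 1920 / 100; // ~19 pixels per degree

// Monitor: 27" 16:9 QHD (2560 px wide, ~59.8 cm) viewed from 60 cm.
const monitorWidthCm = 59.8;
const viewDistCm = 60;
const monitorFovDeg =
  2 * Math.atan(monitorWidthCm / 2 / viewDistCm) * (180 / Math.PI);
const monitorPpd = 2560 / monitorFovDeg; // ~48 pixels per degree

console.log(headsetPpd.toFixed(1), monitorPpd.toFixed(1));
```

Under these assumptions the monitor packs roughly 2.5x the pixels into each degree of text, which is why the same font size reads so much worse in the headset.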
At least for text-focused tasks, I'd take virtually any display built in the past 40 years over a modern VR headset.
> Not for coding. The resolution just isn't there yet.
Spoken like someone who hasn't programmed in VR yet.
I don't think you'll be programming any operating systems in VR anytime soon, but there is still a lot of programming, specifically object scripting, that could be done in VR. A number of people--including myself--have built demos that prove out the concept.
One reason is that text legibility is not strictly about display resolution. Motion within the view improves legibility significantly. Yes, the fonts render to very large pixels, but the specific pixels they render to are constantly changing, and your brain fuses those images over time. I can't find the paper right now, but the US Navy ran a study showing that pilot visual acuity improved in a dynamic scenario. The study had pilots identify letters rendered on a flight simulator's screen. One group had full use of the simulator's motion system; the other group was told the motion system was broken, but still sat in the simulator and performed the same test on the same screen. The group in motion scored better.
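The "brain fuses those images over time" effect is essentially temporal supersampling. A toy model (my own illustration, not from the study): a single low-res pixel straddles an image edge, the sample point jitters sub-pixel amounts across frames (as head motion shifts the scene relative to the pixel grid), and averaging the frames recovers the edge position far more precisely than any single static sample could:

```javascript
// Toy model of temporal supersampling: one low-res "pixel" spans [0.25, 0.5)
// and an image edge sits at x = 0.37, i.e. at a sub-pixel position.
// Each frame the sample point jitters within the pixel; averaging the
// binary samples recovers the edge far below single-pixel precision.
const pixelLeft = 0.25;
const pixelWidth = 0.25;
const edge = 0.37; // signal is 1.0 right of the edge, 0.0 left of it

const frames = 1000;
let sum = 0;
for (let i = 0; i < frames; i++) {
  // evenly spaced jitter offsets across the pixel (deterministic "motion")
  const sample = pixelLeft + ((i + 0.5) / frames) * pixelWidth;
  sum += sample > edge ? 1 : 0;
}
const avg = sum / frames; // fraction of the pixel to the right of the edge
const estimatedEdge = pixelLeft + (1 - avg) * pixelWidth;

console.log(estimatedEdge.toFixed(3)); // ≈ 0.370
```

One static sample would only tell you "bright" or "dark"; a thousand jittered samples pin the edge to three decimal places. That's the intuition for why moving text in a headset reads better than a static screenshot of the same frame would suggest.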
And as you said, larger fonts are easier to read. There is a lot of spatial resolution in VR that is not used very often. You're used to organizing your code on a 2D display, but you have an entire 3D environment around you. That environment could be a zoomable interface where code editors are linked to live objects: use individual editors for individual code units, and organize them in a tree structure linked to the object. Tree organizers are a lot easier to navigate in 3D than on a 2D screen, especially if you eliminate window scrolling.
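A minimal sketch of the "editors arranged around you" idea: lay out one panel per code unit on a cylinder around the viewer, so each editor gets a stable spatial position you can remember and physically turn toward. All names here are illustrative, not from any particular library:

```javascript
// Minimal sketch: one editor panel per code unit, placed on a cylinder
// around the viewer. Every panel sits at a comfortable fixed distance and
// gains a stable spatial position, instead of hiding in a scrolled window.
// (Illustrative names; not any real VR framework's API.)
function layoutPanels(names, radius = 2, stepDeg = 30) {
  return names.map((name, i) => {
    const angle = (i * stepDeg * Math.PI) / 180; // spread around the viewer
    return {
      name,
      x: radius * Math.sin(angle),
      y: 0, // overflow rows could stack vertically instead of scrolling
      z: -radius * Math.cos(angle), // -z is "straight ahead"
      yawDeg: -i * stepDeg, // rotate the panel to face back at the viewer
    };
  });
}

const panels = layoutPanels(["Player.update", "Enemy.ai", "HUD.render"]);
console.log(panels[0]); // first panel straight ahead at z = -2
```

The tree-structure version would just recurse: child units get smaller panels clustered near their parent's position, which is exactly the kind of layout that is painful on a flat screen but natural in a volume.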
Window scrolling was created to account for the limited spatial resolution of 2D displays, but in the process you lose spatial memory of where things are located. Windows, tabs, and desktop workspaces were invented to wrangle that problem further, but they are not as good as a real, spatial filing system.
Think about it: you probably know exactly where your favorite book is on your bookshelf. You could probably walk over and pick it off the shelf without even opening your eyes. But there is very little chance you could pick out any particular file that way in a 2D GUI, precisely because of the absence of spatial relationships.
So a combination of "text legibility is not as bad as you think it is" and "code could be a lot more organized than it is on 2D displays" means that programming in VR is a lot better than you're making it out to be.
Hi there. I don't see any code in your repo. What were you considering for the text editor component? I wrote a text editor that renders to HTML5 canvas elements specifically for use as textures on WebGL meshes. I'm not working on VR programming environments anymore, but I recently did a complete refresh on all the code: https://www.primrosevr.com/
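For anyone unfamiliar with the canvas-as-texture approach: you draw the text into a 2D canvas, upload that canvas as a WebGL texture on a quad, then map pointer hits on the quad (UV coordinates) back to text positions. Here is the UV-to-text mapping piece as a sketch of the general technique — not Primrose's actual API:

```javascript
// Sketch of the canvas-as-texture editor technique (not Primrose's real API):
// 1. render the text buffer into a 2D canvas with fillText(),
// 2. upload the canvas as a texture onto a WebGL quad each time it changes,
// 3. when a pointer ray hits the quad, convert the hit's UV coordinates
//    back into a row/column in the text, as below.
function uvToTextCell(u, v, cols, rows) {
  // WebGL's v axis runs bottom-up, while text rows run top-down: flip v.
  return {
    col: Math.min(cols - 1, Math.floor(u * cols)),
    row: Math.min(rows - 1, Math.floor((1 - v) * rows)),
  };
}

// e.g. an 80x25 editor texture: a hit near the top-left corner of the quad
console.log(uvToTextCell(0.01, 0.99, 80, 25)); // { col: 0, row: 0 }
```

The clamping with `Math.min` matters because a hit exactly on the right or top edge (u or v equal to 1) would otherwise index one cell past the buffer.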
There is code if you go to packages/webxr. I only have code for the browser (functional, but it needs perf improvements) and the terminal (should be done in a few weeks).
Oh dude, thank you. I was aware of your project 6-ish months ago (there are only like 3-4 canvas text editors, so I try to follow them all; most are abandoned though), but what you're doing now fits perfectly with what I'm looking for. I'll make sure to use it in v0.1. I couldn't get the WebGL demo working though: https://github.com/capnmidnight/Primrose/blob/master/demo3d.... returns a 404.
But my long-term goal is to render the text completely in 3D using SDF or MSDF techniques.
Nice thanks (I previously opened it in Safari and assumed it was broken).
SDF would definitely be tricky, but ideally I want to render text in 3D space without a mesh in between. I do realise optimising an approach like that would be extremely hard, but in terms of UX it'd be better than anything possible with meshes, and it would enable a few interesting possibilities.
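For context on why (M)SDF text stays crisp where canvas textures go blurry: the texture stores signed distance to the glyph edge rather than the glyph's pixels, and the fragment shader converts distance into coverage with a smoothstep around the 0.5 isoline, which resolves a sharp edge at any scale or viewing angle. The core of it, written in plain JS for illustration (in practice this lives in GLSL, with the smoothing width derived from `fwidth()`):

```javascript
// The heart of (M)SDF text rendering, in plain JS for illustration.
// In a real renderer this runs per-fragment in a shader.
function smoothstep(a, b, x) {
  const t = Math.min(1, Math.max(0, (x - a) / (b - a)));
  return t * t * (3 - 2 * t); // Hermite interpolation between a and b
}

// `dist` is the sampled distance-field value in [0, 1]; 0.5 is the glyph
// edge. `smoothing` is the antialiasing band half-width, which a shader
// would shrink as the glyph covers more screen pixels.
function sdfAlpha(dist, smoothing = 0.0625) {
  return smoothstep(0.5 - smoothing, 0.5 + smoothing, dist);
}

console.log(sdfAlpha(0.3)); // 0   — well outside the glyph
console.log(sdfAlpha(0.5)); // 0.5 — exactly on the edge
console.log(sdfAlpha(0.7)); // 1   — well inside the glyph
```

MSDF extends this by storing three distance channels and taking their median, which preserves sharp corners that a single-channel SDF rounds off.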
Just going from your username, if you're actually available, I'd love to hire you for a bit of your help/expertise on my project. I'd like to render Primrose with a transparent/translucent background without the text being constrained to a height-limited viewport (render all lines at once), implement scrolling on the mesh itself (and improve scrolling performance), and support all keyboard + mouse shortcuts.
My email is me@manuis.in, please message me there if you're interested.
My advice is to save your eyes: take the money you'd spend on VR over the next decade and buy a very nice monitor now instead. Every aspect of the experience will be superior, and if you're a developer the payback time will be relatively short.