My first encounter with the Oculus Rift head-mounted display ended with me feeling nauseous after using it for only 10 seconds. A good friend told me that this happens to everybody and suggested that I should give it another try. Thankfully I did, and I must admit I was very impressed by the level of immersion in the official Tuscany scene and some of the other third-party demos.
In fact, I’m so hooked that I ordered a development kit and started coding for it right away. The results may not look impressive quite yet, but they already feel like a real place, and that is just so cool. Here are some screenshots of my latest experiments.
The Oculus Rift SDK is very easy to integrate into an existing engine and comes with great documentation and helpful examples that are easy to understand. Good API design is pretty rare, so I appreciate the time and care that obviously went into the Oculus SDK.
Even though I haven’t spent that much time writing Oculus applications yet, I have already learned some important lessons about VR:
- Rendering at 60 Hz is essential, as the added latency of lower frame rates quickly causes nausea.
- Realistic values are required for an immersive experience. For example, most contemporary FPS games use greatly exaggerated movement speeds. The average human walks at about 5 km/h, so that should be taken into account when writing the player controller.
- Reference frames are important. When you look at the ground in real life you’ll (hopefully) see your own body and legs, so naturally your brain will have the same expectation even in a virtual environment.
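To make the movement-speed point concrete: 5 km/h is roughly 1.39 m/s, which is the number a player controller would actually work with. Here is a minimal Python sketch of that conversion and a per-frame position update; the function and variable names are my own, not from any actual Rift project.

```python
# A realistic walking speed, converted to the metres-per-second
# value a player controller would typically use per frame.
WALK_SPEED_KMH = 5.0
WALK_SPEED_MS = WALK_SPEED_KMH * 1000.0 / 3600.0  # ~1.39 m/s

def step_position(position, direction, dt, speed=WALK_SPEED_MS):
    """Advance a position along a unit direction vector by one frame.

    position, direction: sequences of equal length (e.g. [x, y, z])
    dt: frame time in seconds
    """
    return [p + d * speed * dt for p, d in zip(position, direction)]
```

At 60 Hz (dt ≈ 0.0167 s) this moves the player about 2.3 cm per frame, which feels far slower than typical FPS run speeds.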
I think my next steps will be to build upon my basic test and combine it with research into physically based shading, which has been on my hobby-coding list for a while as well.
Recently I became very interested in compiler design and implementation. At university I chose Computer Graphics over Compiler Architecture, so I felt it was time to finally close this gap in my knowledge. I had a rough understanding of how to write a compiler, but I had never actually attempted it. I have, of course, implemented many parsers for various game-related data formats, but that’s not quite the same.
After finishing my implementation of the DCPU-16 I really wanted to write programs for it in a high-level language. I mentioned this to a friend, who suggested checking out ANTLR, a parser generator. ANTLR lets you define a grammar and then generates code that parses source text into an abstract syntax tree (AST).
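To give an idea of what such a grammar looks like, here is a minimal expression grammar in ANTLR 4 syntax (my own illustrative sketch, not the grammar from this project, which may also have targeted an earlier ANTLR version):

```antlr
grammar Math;

// operator precedence falls out of the rule nesting:
// '*' and '/' bind tighter than '+' and '-'
expr    : term (('+' | '-') term)* ;
term    : factor (('*' | '/') factor)* ;
factor  : NUMBER | IDENT | '(' expr ')' ;

NUMBER  : [0-9]+ ;
IDENT   : [a-zA-Z]+ ;
WS      : [ \t\r\n]+ -> skip ;
```

From a grammar like this, ANTLR generates a lexer and parser in the chosen target language; changing the language then only means editing the grammar and regenerating.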
I didn’t write the parser myself for a few reasons:
- I really wanted to be able to easily change or extend the grammar, which would mean constantly modifying a hand-written parser’s source code.
- Writing parsers is not the most exciting thing in the world.
- It’s fun to learn about new SDKs and APIs!
I chose the C# target, which means ANTLR generates the lexer and parser as C# classes. The remaining task was to process the AST in order to emit DCPU assembly, which turned out to be a pretty interesting and challenging problem.
Writing a naïve assembly generator is pretty straightforward. The trick, however, is to generate efficient DCPU code: the goal is to represent a program written in a high-level language with as few ‘hardware’ instructions as possible. My solution is far from perfect, but it’s better than a naïve implementation.
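To illustrate what the naïve approach looks like, here is a small Python sketch of a post-order AST walk that evaluates each subtree into the next free DCPU-16 register. This is my own simplified reconstruction, not the actual compiler: the AST representation (tuples, strings for variables, ints for literals) and the register-allocation scheme are invented for the example.

```python
# Naïve code generation: walk the expression tree in post-order and
# evaluate each subtree into the next free register. DCPU-16 binary
# ops like "ADD a, b" compute a = a op b, so the left subtree's
# register also receives the result.
REGISTERS = ["A", "B", "C", "X", "Y", "Z", "I", "J"]
OPCODES = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}

def emit(node, out, reg=0):
    """Append assembly to `out` that leaves node's value in REGISTERS[reg]."""
    if isinstance(node, int):                      # numeric literal
        out.append(f"SET {REGISTERS[reg]}, {node}")
    elif isinstance(node, str):                    # variable held in memory
        out.append(f"SET {REGISTERS[reg]}, [{node}]")
    else:                                          # (operator, left, right)
        op, left, right = node
        emit(left, out, reg)                       # left result  -> reg
        emit(right, out, reg + 1)                  # right result -> reg + 1
        out.append(f"{OPCODES[op]} {REGISTERS[reg]}, {REGISTERS[reg + 1]}")

# Example: compile a + b * 2; the result ends up in register A.
code = []
emit(("+", "a", ("*", "b", 2)), code)
```

This scheme burns one register per nesting level and never reuses intermediate results, which is exactly the kind of waste a smarter generator can avoid.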
The following example shows a math expression with variables (left column) and the corresponding DCPU assembly (right column) generated by my compiler:
| Math expression | Generated DCPU assembly |
| --- | --- |
There are a few more things I’d like to investigate. Right now the compiler only supports simple math expressions (the result is written to register A), and I would like to add support for functions, loops and conditional statements. Ideally the high-level language could also communicate with the various devices I already implemented (e.g. monitor, keyboard, …).