
Memories of Virtuality: Adding VR support to Mnemonic

Now that we’ve wrapped up this year’s Amnesia Fortnight (Double Fine’s public game jam) and released the prototypes, I wanted to share my thoughts on adding Oculus Rift support to ‘Mnemonic’. Some of you might know that I’m personally very excited about the current developments in VR, which is why I pitched a sci-fi exploration game called ‘Derelict’ that would have been developed for the Oculus Rift from day one.

Even though it didn’t get picked in the end, I was still lucky enough to work on VR for Derek Brand’s film noir-inspired exploration/adventure game ‘Mnemonic’, and I think it turned out great! In this blog post I will shed some light on the technical and non-technical aspects of adding support for the Oculus Rift to the prototype.

Obviously I didn’t do all the work myself and the VR version of ‘Mnemonic’ would not have been possible without major contributions from Brandon Dillon (@Noughtceratops) and Matt Enright (@ColdEquation)!

The following quick presentation introduces the topic in five short minutes:

Game design

In ‘Mnemonic’ you have to discover your past by entering the surreal world of your memories. As you explore different events you can restore more memories by solving adventure-game style puzzles, which will eventually lead you to a dark secret. I think the team definitely nailed Derek’s vision of a film noir art style. The game is rendered (almost) entirely in black and white and looks stunning, and the fact that the prototype was created in only two weeks still blows my mind.

The design of ‘Mnemonic’ is a great fit for the Oculus Rift since the game is slow paced and doesn’t require fast or unnatural player movement (e.g. strafing, jumps). The core mechanic is to look at and interact with interesting things, which works very well even with the current generation of VR headsets. The fact that you are exploring memories also helps the VR experience, because the brain does not expect 100% realistic behavior from the game world (e.g. ‘why can’t I push that barrel?’).

One thing I wish we could have done is give the main character a (virtual) body, since it would have increased the feeling of immersion by creating a stronger connection between the player and the virtual alter ego. In one of the memories you are sitting in a car, and it really feels weird to see the seat instead of a body when looking down.

I think the finished result proves the point that you have to think about VR from the beginning in order to create a great experience. Porting the design of a game to VR after it is done will never be as successful as incorporating it from the start.

Mnemonic in VR mode

Adding VR support to our game engine

I originally integrated Oculus Rift support into ‘Autonomous’ a while ago in my spare time, because I wanted to know how complicated it would be to add such a feature to a preexisting and relatively complex code base.

‘Autonomous’ is a first-person game that lets you build and program autonomous robots in a cool 80s inspired sci-fi world. The game was Lee Petty’s pitch for Amnesia Fortnight in 2012 and I contributed to the prototype as a graphics programmer. If you have a Leap controller you can check out the game here: http://autonomousgame.com/

Autonomous

In order to add Oculus Rift support to our proprietary ‘Buddha’ game engine, I first had to solve the problem of rendering the current scene once for each eye. Since ‘Buddha’ uses immediate-mode rendering, I was able to draw the scene twice by simply duplicating the frame data and offsetting the camera for each eye.
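To make this more concrete, here is a minimal C++ sketch of the per-eye camera setup (illustrative names, not the actual ‘Buddha’ code): the head camera is shifted along its right vector by half the interpupillary distance for each eye, and the duplicated frame data is submitted once per eye.

struct Vec3 { float x, y, z; };

struct Camera
{
    Vec3 position;
    Vec3 right;   // normalized right vector of the head camera
    // ... orientation, projection parameters, etc.
};

// Build the camera for one eye by offsetting the head camera by half the
// interpupillary distance (IPD) along its right vector.
Camera MakeEyeCamera(const Camera& head, float ipd, bool leftEye)
{
    const float offset = (leftEye ? -0.5f : 0.5f) * ipd;
    Camera eye = head;
    eye.position.x += head.right.x * offset;
    eye.position.y += head.right.y * offset;
    eye.position.z += head.right.z * offset;
    return eye;
}

// The duplicated frame data is then submitted once per eye, e.g.:
//   DrawScene(MakeEyeCamera(head, ipd, true),  leftEyeTarget);
//   DrawScene(MakeEyeCamera(head, ipd, false), rightEyeTarget);

In practice the per-eye projection matrices differ slightly as well (the Oculus SDK provides the necessary parameters), but the positional offset is the part that matters for the shadow issue described next.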

I ran into an interesting problem with our implementation of directional sun-light shadows, though. Since ‘Buddha’ was originally developed for ‘Brutal Legend’, it uses cascaded shadow maps in order to provide high-quality shadows at varying distances. The cascades are computed by splitting the view frustum into multiple slices, but since the frustums for the two eyes were slightly different, the resulting shadow maps caused a disparity between the left and right eye.

This may seem like a minor problem, but the differences were big enough to create discomfort when playing the game. It took me a while to figure out what was going on, but in the end I was able to identify the discrepancy by performing the ‘left-eye-right-eye test’: close the right eye and look at the scene, then open the right eye, close the left eye and compare the rendered results. Any visual difference that is not directly connected to the camera offset is a VR bug!

My solution for this problem was to perform all shadow calculations in the space centered between both eyes. Due to complications, which are beyond the scope of this blog post, it wasn’t possible to cache the shadow maps for the second frame, so there is definitely some room for future improvements.
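As a rough sketch of that fix (reusing the Camera type from the snippet above; ShadowCascades and ComputeCascades are made-up names), all shadow-related work is driven by a single ‘center eye’ camera placed halfway between the two eye positions, so both eyes end up with identical cascades:

// Derive a single 'center eye' camera that is used for all shadow
// calculations, so the cascade splits are identical for both eyes.
Camera MakeCenterEyeCamera(const Camera& leftEye, const Camera& rightEye)
{
    Camera center = leftEye;
    center.position.x = 0.5f * (leftEye.position.x + rightEye.position.x);
    center.position.y = 0.5f * (leftEye.position.y + rightEye.position.y);
    center.position.z = 0.5f * (leftEye.position.z + rightEye.position.z);
    return center;
}

// The cascade splits and shadow matrices are derived from the center camera
// and used when rendering either eye (in our case the shadow maps themselves
// still had to be re-rendered per eye):
//   ShadowCascades cascades = ComputeCascades(MakeCenterEyeCamera(left, right));
//   DrawScene(left,  cascades);
//   DrawScene(right, cascades);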

The rest of the integration was relatively straightforward and in the end ‘Autonomous’ looked like this in VR mode:

Autonomous in VR mode

Integrating VR into Mnemonic

Adding support for the Oculus Rift to ‘Mnemonic’ was straightforward using the VR integration mentioned above (especially after Brandon cleaned up my experimental code). We simply retrieve the orientation from the HMD and apply it to the FPS camera. The user is also able to rotate the camera left and right with the mouse (or gamepad) in order to reorient the main character.
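A minimal sketch of that camera update looks roughly like this (the quaternion type and the function names are illustrative; the actual engine and SDK calls differ):

#include <cmath>

struct Quat
{
    float w, x, y, z;
    Quat operator*(const Quat& o) const
    {
        return { w*o.w - x*o.x - y*o.y - z*o.z,
                 w*o.x + x*o.w + y*o.z - z*o.y,
                 w*o.y - x*o.z + y*o.w + z*o.x,
                 w*o.z + x*o.y - y*o.x + z*o.w };
    }
};

// Rotation around the world up axis, used for the mouse / gamepad yaw.
Quat QuatFromYaw(float yawRadians)
{
    return { std::cos(yawRadians * 0.5f), 0.0f, std::sin(yawRadians * 0.5f), 0.0f };
}

// bodyYaw is accumulated from mouse / gamepad input, hmdOrientation is read
// from the headset every frame. The head rotation is applied on top of the
// body rotation (the multiplication order depends on your conventions).
Quat ComputeCameraOrientation(float bodyYaw, const Quat& hmdOrientation)
{
    return QuatFromYaw(bodyYaw) * hmdOrientation;
}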

Initially there were plans to add cutscenes to the prototype that would take camera control away from the player, but we managed to convince the team that this would make people feel sick and therefore break the VR experience. In the final prototype the player has full control over the orientation of the camera at all times, and I think it definitely helps to prevent nausea (aka VR sickness).

Here is a picture of Tim playing the ‘Mnemonic’ prototype in VR mode using the Oculus Rift:

Tim playing the ‘Mnemonic’ prototype in VR mode

User interface design

Traditional 2D UI doesn’t work well in VR, which is a lesson I had previously learned when adding Oculus Rift support to ‘Autonomous’. The main problem is that you can’t simply blit the UI on top of the scene: drawing a 2D element at the same screen-space location for both eyes essentially means that the UI is infinitely far away, yet it is supposed to be on top of everything else, so the brain can’t really make sense of what it sees.

For ‘Autonomous’ I used the solution described in Joe Ludwig’s excellent paper about adding VR support to ‘Team Fortress 2’. The idea is to draw the UI as a camera-attached plane which ‘floats’ in front of the camera. Since the plane has a real distance to the camera, each eye sees it at a slightly different screen-space location and the brain can therefore interpret it correctly. Readability of the UI is still a problem, but that is a story for another day…
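Conceptually it boils down to something like this sketch (made-up names, reusing the Vec3 type from the earlier snippets): every frame the UI quad is re-positioned a fixed distance in front of the camera and then rendered like any other piece of world geometry, with the UI texture mapped onto it.

// Place a UI quad 'distance' meters in front of the camera. Because the quad
// has an actual depth, each eye sees it at a slightly different screen-space
// position and the stereo cues stay consistent.
struct UiQuad { Vec3 center; float width, height; };

UiQuad PlaceUiPlane(const Vec3& camPos, const Vec3& camForward,
                    float distance, float widthMeters, float aspect)
{
    UiQuad quad;
    quad.center = { camPos.x + camForward.x * distance,
                    camPos.y + camForward.y * distance,
                    camPos.z + camForward.z * distance };
    quad.width  = widthMeters;
    quad.height = widthMeters / aspect;
    // The quad is oriented to face the camera and drawn with the UI texture
    // as part of the regular 3D scene instead of being blitted in screen space.
    return quad;
}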

For ‘Mnemonic’ we decided to avoid 2D UI entirely. Fortunately the game doesn’t require menus and we only had to find a solution for the inventory. Items carried by the player are represented by real 3D models that are located on a ring around the camera. This way the inventory items are rendered as part of the regular scene and show up at the correct location in VR. This approach works very well and I would like to explore it further in a future project.
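As a small illustration of the idea (again with made-up names), the item placement is essentially just an even distribution on a circle in the camera’s local space, and each item transform is then parented to the camera:

#include <cmath>

// Distribute the carried items evenly on a ring around the player. Positions
// are in the camera's local space (x right, y up, z forward).
void PlaceInventoryItems(int itemCount, float ringRadius, float ringHeight,
                         Vec3* outLocalPositions)
{
    const float twoPi = 6.2831853f;
    for (int i = 0; i < itemCount; ++i)
    {
        const float angle = (twoPi * i) / itemCount;
        outLocalPositions[i].x = std::sin(angle) * ringRadius;  // around the player
        outLocalPositions[i].y = ringHeight;                    // slightly below eye level
        outLocalPositions[i].z = std::cos(angle) * ringRadius;
    }
}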

3D inventory in Mnemonic

Post effects and other screen-space problems

Image post-processing of the rendered scene is a pretty standard (and useful) technique in games these days. Typical post effects include color correction, edge darkening (aka vignetting), anti-aliasing, bloom and depth-of-field blur. These operations are usually applied in screen space, which makes them problematic for the same reason that 2D UI doesn’t work well in VR.

Unless the operations are spatially independent (e.g. color correction), it is important to take the interpupillary distance into account when rendering the effect. In ‘Mnemonic’ we offset the texture coordinates of the extra textures used during the image post-processing step for each eye.

In the prototype the player is able to return to the ‘memory hub’ at any point by pressing a button, and the transition is represented by an animated Rorschach image. By applying a horizontal offset to the Rorschach texture for each eye, the effect essentially gets rendered at a virtual depth (much closer than infinity), which is important since the effect is faded in (and out) on top of the scene.
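The amount of offset needed for a given virtual depth can be derived from the eye separation and the projection. Here is a small sketch of that calculation (an idealized version; the exact value and sign convention depend on the engine and on whether you shift the sample coordinates or the geometry):

#include <cmath>

// Horizontal UV offset that makes a full-screen overlay appear at a virtual
// depth instead of at infinity.
//   ipd          - interpupillary distance in meters (e.g. 0.064f)
//   virtualDepth - distance at which the overlay should appear, in meters
//   halfFovX     - half of the horizontal field of view, in radians
// One eye uses +offset, the other -offset.
float ComputeOverlayUvOffset(float ipd, float virtualDepth, float halfFovX)
{
    // Screen-space disparity of a point straight ahead at 'virtualDepth'.
    const float ndcOffset = (0.5f * ipd) / (virtualDepth * std::tan(halfFovX));
    return 0.5f * ndcOffset;   // NDC range [-1, 1] -> UV range [0, 1]
}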

While this works alright in the prototype, a better approach will be necessary for a full game. Drawing these kinds of effects in 3D space (just like the inventory) seems to be the only real solution to this problem. I’d love to experiment with camera-attached particles or similar techniques.

Of course there are effects that can’t be represented by 3D geometry (e.g. the vignette), and more research will be necessary to figure out how to implement them in VR.

Conclusion

I think the potential of VR to create an immersive experience for the player is very exciting, but creating an excellent VR experience isn’t trivial. In his talk at the Steam Dev Days, Palmer Luckey (the founder of Oculus) said that you really have to design a game with VR in mind, and I very much agree with him. Adding VR support later on is very difficult and will require quite a few changes.

The games industry is only at the beginning of figuring out how to use VR effectively, and I think the next few years will be very interesting and exciting. I’m very grateful that I was able to experiment with VR during this year’s Amnesia Fortnight and I really hope that I can come back to it and work on a full game.

I’m very proud of what we achieved with ‘Mnemonic’ and I hope you will check out the prototype (especially if you own an Oculus Rift). You can still get access to all of the games on Humble Bundle.

Post scriptum

If you made it this far and you are still not tired of my ramblings, then you might also want to check out the excellent documentary about Amnesia Fortnight made by 2 Player Productions. You can find the entire playlist on YouTube.

I’m talking a bit about adding VR support to ‘Mnemonic’ in the episodes about day 9 (starting at 17:18) and 10 (starting at 37:44):

Dynamic 2D Character Lighting

It is certainly no secret that lighting can tremendously improve the visual quality of a game. In this blog post I’ll describe various techniques for dynamic 2D character lighting that are very easy to implement and don’t require any additional assets (e.g. normal maps). I have successfully used these techniques in various games like Monkey Island: Special Edition, Lucidity and most recently in Broken Age.

But why is character lighting important in 2D games? Very often artists will paint lighting (and shadowing) directly into the environment artwork, which is okay because the world is generally static. In fact it would be very difficult to compute the lighting for, let’s say, a background image in real-time, since no spatial information (e.g. z-depth, normal direction) is available. Characters on the other hand move through the world and will be visible in various locations with different lighting conditions, so it is generally not possible to paint the influence of light sources directly into the sprites (or textures). Therefore the only solution is to calculate the character lighting during run-time.

The video below demonstrates how Shay (who is one of the main characters in Broken Age) looks with and without lighting. I hope you’ll agree that lighting helps to integrate Shay into the (beautiful) world.

Ambient Lighting

The idea behind ambient lighting is to make a character look like they are part of the game world rather than floating on top of it. So if, for example, a player moves from a shadowed into a fully lit area, the visual appearance of the character should change from dark to bright.

Thankfully this can be achieved very easily by tinting the sprite (or texture). Using a gradient tint allows independent control of the color of the lower and upper part of the body, which makes it possible to emulate ambient occlusion between the floor and the character.

Ambient gradient tinting

Ambient gradient tinting is the backbone of Broken Age’s lighting system and is used in every scene for every character. The following picture demonstrates how the ambient gradient changes based on the location of Shay.

Ambient gradient sampling based on Shay’s location

There are different ways to interpolate the gradient color across the body of a character. The simplest approach is to use the bounding box, in which case vertices on the head sample from the top of the gradient whereas the feet use the bottom of the gradient. This approach works great for sprite-based characters, but doesn’t offer much control over the shape of the gradient.
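Here is a sketch of the bounding-box variant (plain C++ for clarity; in practice this typically happens per vertex in a shader):

struct Color { float r, g, b; };

Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Each vertex samples the ambient gradient based on its normalized height
// within the character's bounding box: the feet get the bottom color, the
// head gets the top color.
Color SampleAmbientGradient(float vertexY, float boundsMinY, float boundsMaxY,
                            const Color& bottomTint, const Color& topTint)
{
    const float t = (vertexY - boundsMinY) / (boundsMaxY - boundsMinY);
    return Lerp(bottomTint, topTint, t);
}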

Broken Age applies a different strategy by computing normals as the vectors from the translated center of the bounding box to each vertex. The ambient gradient is then sampled based on the y-coordinate of the resulting normal. Not only does this technique work great with skinned geometry, it also makes it possible to define how quickly the gradient interpolates across the body by changing the offset of the center. My GDC Europe talk ‘Broken Age’s Approach to Scalability’ contains more information about this approach, so please feel free to check it out if you want to know more.

Normals computed from the translated center of the bounding box
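Here is a simplified sketch of that idea (reusing the Color and Lerp helpers from the previous snippet; this is not the actual Broken Age code, and the remapping details are just one possible choice):

#include <cmath>

struct Vec3 { float x, y, z; };

// The 'normal' of a vertex is the direction from a vertically offset center
// of the bounding box to the vertex; the gradient is sampled from the
// normal's y component. Moving the center down makes the gradient wrap
// around the body more quickly, moving it up stretches it out.
Color SampleGradientFromNormal(const Vec3& vertexPos, const Vec3& boxCenter,
                               float centerYOffset,
                               const Color& bottomTint, const Color& topTint)
{
    const Vec3 dir = { vertexPos.x - boxCenter.x,
                       vertexPos.y - (boxCenter.y + centerYOffset),
                       vertexPos.z - boxCenter.z };
    const float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    const float ny  = (len > 0.0f) ? (dir.y / len) : 0.0f;  // in [-1, 1]
    return Lerp(bottomTint, topTint, 0.5f * (ny + 1.0f));   // remap to [0, 1]
}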

Local Lighting

The goal of local lighting is to show the influence of nearby light sources. In other words, it answers the question of where the characters are in relation to the lights. The video of Monkey Island: Special Edition (MISE) below shows how the local lighting of the campfire illuminates the two characters. Please make sure to watch it fullscreen and in high definition.

If you look closely you’ll notice that the campfire only illuminates the parts of the characters that are facing towards it. There are different ways to achieve this look. The characters in Broken Age have hand-authored normal maps that are used to compute the reflected light. While painted normals offer a lot of artistic freedom, they obviously require a lot of time to create.

In MISE we weren’t able to go down this route due to time constraints of the development schedule, so we needed a technique that required no extra assets and only a minimal amount of tweaking. Our solution utilizes a screen-space light map that contains the illumination of nearby light sources. The following image shows the light map for the lookout scene.

Light map for the lookout scene

In order to calculate the light map we first render the alpha mask of the characters into an offscreen render-target (see image A below) before downsampling and blurring it using a Gaussian kernel (image B). The resulting image can be interpreted as a soft height field containing a blobby approximation of the characters. With this in place we can now compute the normals of the blob approximation as the gradient of the height field (see image C). Now that normal information is available we can finally calculate the light map by accumulating the reflected illumination from all nearby light sources (image D). In MISE we used a very simple Lambertian lighting model (aka ‘n dot l’), but a more sophisticated formula could easily be used instead.

Light map calculation: (A) alpha mask, (B) blurred height field, (C) normals, (D) accumulated lighting
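The normal reconstruction in step (C) is just a finite-difference gradient of the height field. Here is a sketch of that math (reusing the Vec3 struct from the snippet above; in practice this would typically run in a pixel shader, and bumpScale is an arbitrary tuning value):

#include <cmath>

// Reconstruct a normal from the blurred alpha mask by treating it as a height
// field and taking its gradient with central differences. 'height' is the
// blurred, downsampled alpha buffer of size width x heightPx.
Vec3 NormalFromHeightField(const float* height, int width, int heightPx,
                           int x, int y, float bumpScale)
{
    const float hL = height[y * width + (x > 0            ? x - 1 : x)];
    const float hR = height[y * width + (x < width - 1    ? x + 1 : x)];
    const float hD = height[(y > 0            ? y - 1 : y) * width + x];
    const float hU = height[(y < heightPx - 1 ? y + 1 : y) * width + x];

    // The slopes of the height field become the x/y components of the normal;
    // z points towards the viewer.
    Vec3 n = { (hL - hR) * bumpScale, (hD - hU) * bumpScale, 1.0f };
    const float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}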

The resulting lighting data can be combined with the scene in different ways. The easiest solution is to blit the light map on top of the framebuffer using additive blending. We used this approach in MISE and it worked great, but it has the disadvantage that reflected light is barely visible on bright pixels (e.g. Guybrush’s shirt).

Another compositing strategy would be to sample the illumination from the light map in the character shader. This way the light can be integrated in different ways with the texels of the sprite (or texture), so characters could effectively have varying reflectivity. In this case the light map essentially represents the results of a light pre-pass.

Light Sources

One thing I haven’t talked about up until now is how light sources are authored. It should be easy to associate different parts of the world with specific parameters in order to be able to match the character lighting with the illumination painted into the environment.

Representing light sources as radial basis functions works great for ambient as well as local lighting. The gradient tint for each character can easily be computed as the weighted average based on the distance from the light sources. In my experience it is also a good idea to expose some kind of global light, which is sampled if there are no light sources nearby. The following image illustrates the radial basis function blending for ambient gradient tints.

Radial basis function blending of ambient gradient tints
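Here is a sketch of the blending itself (reusing the Color struct from the earlier snippet; the Gaussian falloff and the constant weight of the global light are just one possible choice):

#include <cmath>

Color AddColor(const Color& a, const Color& b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }
Color ScaleColor(const Color& a, float s)      { return { a.r * s, a.g * s, a.b * s }; }

struct AmbientLight { float x, y, radius; Color bottom, top; };

// Each light contributes a Gaussian weight based on its distance to the
// character; a global light with a small constant weight acts as the fallback
// when no light source is nearby.
void BlendAmbientTints(const AmbientLight* lights, int count,
                       float charX, float charY, const AmbientLight& global,
                       Color* outBottom, Color* outTop)
{
    float wSum   = 0.05f;                          // weight of the global light
    Color bottom = ScaleColor(global.bottom, wSum);
    Color top    = ScaleColor(global.top, wSum);

    for (int i = 0; i < count; ++i)
    {
        const float dx = charX - lights[i].x;
        const float dy = charY - lights[i].y;
        const float w  = std::exp(-(dx * dx + dy * dy) /
                                  (lights[i].radius * lights[i].radius));
        bottom = AddColor(bottom, ScaleColor(lights[i].bottom, w));
        top    = AddColor(top,    ScaleColor(lights[i].top, w));
        wSum  += w;
    }

    *outBottom = ScaleColor(bottom, 1.0f / wSum);  // normalize by the total weight
    *outTop    = ScaleColor(top,    1.0f / wSum);
}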

The light map can be calculated by drawing the light sources as approximated shapes (e.g. quad or circle) into the render-target. The color and intensity of each output fragment is computed by evaluating the lighting model of your choice. In MISE we calculated the reflected light basically like this:

// Evaluated for every fragment of the light shape drawn into the light map.
float3 l      = light_pos - fragment_world_pos;              // vector from the fragment to the light
float3 n      = tex2D(normal_map, fragment_screen_pos).xyz;  // blob normal from the blurred height field
float  nDotL  = saturate(dot(normalize(l), n));              // Lambertian term ('n dot l')
float  dist   = length(l);
float3 result = light_color * nDotL / (dist * dist);         // inverse-square distance falloff

Using radial basis functions has the additional benefit that it’s very easy to animate a light source, both by changing its transformation (e.g. position, scale) and its parameters (e.g. color, intensity). A flickering campfire can easily be achieved by interpolating between two states using a (nice) noise function. Attaching this light source to the hand of a character effectively turns it into a cool-looking torch.
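A sketch of such a flicker (reusing the AmbientLight struct and the Lerp helper from the snippets above; the ‘noise’ here is just two sine waves standing in for whatever noise function you prefer):

#include <cmath>

// Cheap flicker signal in [0, 1]; a real implementation would use a nicer
// noise function (e.g. value or Perlin noise).
float Flicker01(float time)
{
    return 0.5f + 0.25f * std::sin(time * 7.3f) + 0.25f * std::sin(time * 13.1f);
}

// Blend between a 'low' and a 'high' state of the campfire light.
AmbientLight AnimateCampfire(const AmbientLight& low, const AmbientLight& high,
                             float time)
{
    const float t = Flicker01(time);
    AmbientLight result = low;
    result.radius = low.radius + (high.radius - low.radius) * t;
    result.bottom = Lerp(low.bottom, high.bottom, t);
    result.top    = Lerp(low.top,    high.top,    t);
    return result;
}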

Conclusion

Dynamic character lighting helps to make a game world and its inhabitants look more believable and interesting. Thankfully it’s relatively easy to implement a lighting system that produces nice results, and I hope that this blog post provided you with some inspiration as well as practical advice. Please feel free to post questions as comments below.

That’s it for today. The last one to leave turn off the (dynamic) lights! 🙂

PS.: I talked about this topic at Unknown Worlds’ postmortem event and you can watch the recording here: http://www.twitch.tv/naturalselection2/c/3341463

What I expect from a Graphics Programmer candidate

This is a crosspost from #AltDevBlog: http://www.altdevblogaday.com/2013/11/08/how-to-become-a-graphics-programmer-in-the-games-industry/

As we were recently hiring a new Graphics Programmer at Double Fine, I had to identify what kind of technical knowledge and skills we would expect from a potential candidate. Although this definition is somewhat specific to what we look for in a candidate, it might still be of interest to other coders trying to score a job in the industry as a Rendering Engineer.

This post might help you identify areas to learn about in order to get closer to your goal of becoming a Graphics Engineer, whether you just finished your degree or have been working in the games industry in a different role. Alternatively, if you are a seasoned Rendering Programmer, then you know all of this stuff already and I would love to hear your comments on the topic.

Know the Hardware

Learning about the strengths and weaknesses of the hardware that will execute your code should be important for any programmer, but it’s an essential skill for a Graphics Engineer. Making your game look beautiful is important; getting all of the fancy effects to run at the target frame rate is often the trickier part.

Of course, it would be unrealistic to expect you to know every little detail about the underlying hardware (especially if you are just starting out), but having a good high-level understanding of what is involved in making a 3D model appear on screen is a mandatory skill, in my opinion. A candidate should definitely know about the common GPU pipeline stages (vertex shader, pixel shader, rasterizer, etc.), what their functionality is, and whether they are programmable, configurable or fixed.

Very often, there are many ways to implement a rendering effect, so it’s important to know which solution will work best on a given target device. Nothing is worse than having to tell the artists that they will have to modify all of the existing assets, because the GPU doesn’t support a necessary feature very well.

For example, the game that I am currently working on is targeting desktop computers as well as mobile devices, which is important because mobile GPUs have very different performance characteristics compared to their desktop counterparts (if you are interested you can find my micro talk on this topic below). Our team took this difference into account when making decisions about the scene complexity and what kind of effects we would be able to draw.

A great way to learn more about GPUs is to read chapter 18 of Real-Time Rendering (Third Edition), because it contains an excellent overview of the Xbox 360, PlayStation 3 and Mali (mobile) rendering architectures.

Good Math Skills

Extensive knowledge of trigonometry, linear algebra and even calculus is very important for a Graphics Programmer, since a lot of the day-to-day work involves dealing with math problems of varying complexity.

I certainly expect a candidate to know about the dot and cross products and why they are so useful in computer graphics. In addition to that, it is essential to have an intuitive understanding of the contents of a matrix, because debugging a rendering problem can make it necessary to manually ‘decompose’ a matrix in order to identify incorrect values. For example, not that long ago I had to fix a problem in our animation system and was able to identify the source of the problem purely by looking at the joint matrices.
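As a tiny example of what I mean by ‘decomposing’ a matrix by hand, here is a sketch that reads the translation and the per-axis scale out of a 4x4 transform (assuming a row-major layout with the basis vectors in the first three rows; conventions differ between engines and APIs):

#include <cmath>

struct Mat4 { float m[4][4]; };

// The length of each basis vector is the scale along that axis; a scale of
// zero or a NaN here is usually the smoking gun when a joint looks broken.
// The translation sits in the fourth row.
void DecomposeForDebugging(const Mat4& mat, float outScale[3], float outTranslation[3])
{
    for (int row = 0; row < 3; ++row)
    {
        outScale[row] = std::sqrt(mat.m[row][0] * mat.m[row][0] +
                                  mat.m[row][1] * mat.m[row][1] +
                                  mat.m[row][2] * mat.m[row][2]);
        outTranslation[row] = mat.m[3][row];
    }
}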

In my opinion, a candidate should be able to analytically calculate the intersection between a ray and a plane. Also, given an incident vector and a normal, I would expect every Rendering Engineer to be able to easily derive the reflected vector.
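For reference, here is what I would expect on that piece of paper, in code form (plain C++ with a minimal vector type; the derivations themselves are left as the actual exercise):

#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3  Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }

// Ray: origin o, normalized direction d. Plane: normalized normal n and a
// point p on the plane. Solving dot(o + t*d - p, n) = 0 for t gives:
bool RayPlaneIntersection(const Vec3& o, const Vec3& d,
                          const Vec3& n, const Vec3& p, float* outT)
{
    const float denom = Dot(n, d);
    if (std::fabs(denom) < 1e-6f)   // ray is parallel to the plane
        return false;
    *outT = Dot(n, Sub(p, o)) / denom;
    return *outT >= 0.0f;           // intersection in front of the origin
}

// Reflection of an incident vector i around a normalized normal n:
// r = i - 2 * dot(i, n) * n
Vec3 Reflect(const Vec3& i, const Vec3& n)
{
    return Sub(i, Scale(n, 2.0f * Dot(i, n)));
}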

There are plenty of resources available on the web; you can find some good ones in the links section. I would also strongly recommend attempting to solve some of these problems on a piece of paper instead of looking up a preexisting solution. It’s actually kind of fun, so you should definitely give it a try.

Passion for Computer Graphics

An ideal candidate will keep up to date with the latest developments in computer graphics especially since the field is constantly and rapidly advancing (just compare the visual fidelity of games made 10 years ago with what is possible today).

There are plenty of fascinating research papers (e.g. current SIGGRAPH publications), developer talks (e.g. GDC presentations) and technical blogs available on the internet, so it should be pretty easy to find something that interests you. You can find quite a few blogs of fellow Rendering Engineers in the links section.

Of course implementing an algorithm is the best way to learn about it, plus it gives you something to talk about in an interview. Writing a cool graphics demo also helps you to practice your skills and most of all it is a lot of fun.

Performance Analysis and Optimization

One of the responsibilities of a Graphics Programmer is to profile the game in order to identify and remove rendering related bottlenecks. If you are just starting out I wouldn’t necessarily expect you to have a lot of practical experience in this area, but you should definitely know the difference between being CPU and GPU bound.

An ideal candidate will have used at least one graphics analysis tool, such as PIX (part of the DirectX SDK), gDEBugger or Intel’s GPA. These applications are available for free and allow you to take a closer look at what’s going on inside the GPU, isolate bugs (e.g. an incorrect render state when drawing geometry) and identify performance problems (e.g. texture stalls, slow shaders, etc.).

Conclusion

The job of a Graphics Programmer is pretty awesome, since you’ll be directly involved with the visual appearance of a product. The look of a game is very often the first thing a player gets to see (e.g. in trailers and screenshots), and working on it has been very gratifying for me personally.

Truth be told, you won’t be able to write fancy shaders every day. You should be prepared to work on other tasks such as data compression (e.g. textures, meshes, animations), mathematical and geometry problems (e.g. culling, intersection computations), as well as plenty of profiling and optimization. Especially the latter can be very challenging, since the GPU and the associated driver cannot be modified.

To sum up, becoming a Rendering Engineer requires a lot of expert knowledge, and it is certainly not the easiest way to get a foot in the proverbial games industry door, but if you are passionate about computer graphics it might be the right place for you!

Post Scriptum

Please make sure to also check out the excellent #AltDevBlog article “So you want to be a Graphics Programmer” by Keith Judge.

As mentioned above I recently gave a micro talk about some of the fundamental differences between desktop and mobile GPUs at the PostMortem event hosted by Unknown Worlds.

Stating the obvious: The adventure of programming

It just occurred to me the other day (and I’m not sure why it took me so long to make that connection) that I enjoy programming and adventure games for similar reasons.

Adventure games are all about puzzles and the joy of solving them. Well, so is programming. You are constantly confronted with questions like ‘Why does the system behave like that?’ or ‘How can we get around these limitations and make feature X work?’

The other day I was trying to track down a weird issue where parts of the screen would sometimes get corrupted. Initially I thought it had to do with how I was managing graphics memory, so I was poking around in that code but couldn’t find the problem. The issue turned out to be in a completely different system, and I (eventually) identified the source by observing the behavior of the affected code. I approached the problem just like a puzzle in an adventure game (I even tried to look up the solution on the internet) and felt quite good after I finally solved it.

But there are more similarities than just solving puzzles. The taxonomy of adventure games in my simple world view is defined by whether the puzzles follow designer logic or observational logic. For the former category you basically have to figure out what the designer was thinking when creating the puzzle. Very often there is only one solution, and it might not be the most obvious or logical one (usually the solution is very creative and funny, though). The latter category describes games where the puzzles can be solved by observing the environment and drawing logical conclusions. I’m not going to name examples here, but I’m sure you know what I’m talking about.

The same taxonomy can be applied to programming, too! For example, if you have to work with a closed-source API you’ll have to start thinking like the architect of the system in order to use it properly. Very often there is only one right way of interfacing with the API, and other approaches will introduce obvious (and, worse than that, non-obvious) bugs. Unfortunately, there is barely a funny pay-off, though. Observational logic is also important, because debugging pretty much relies on reasoning based on the changing state of systems.

Not only that but sometimes you even have to do pixel hunting when trying to find and fix syntax problems. Also did you ever notice that branching is very similar to dialog trees… 😀

Maybe it’s far-fetched, but it would explain why I like adventure games and coding a lot. 🙂