Back at GDC after 2009 and 2010! Like last time, here’s an overview of my thoughts on the presentations I heard.
Liquid Intelligence: Connecting AI and Physics in Vessel
This talk was a nice combination of background information on the technical details and the way in which they interacted with the design of Vessel. He started out with the inspiration for the fluid simulation, which, as so often, came from a SIGGRAPH paper. He covered some implementation details, also addressing the parameters available to the designers, such as the viscosity of the fluid. One of the most interesting parts was how emergent gameplay resulted from linking the fluid simulation to AI objects. For example, he described how amazed they were when they introduced an AI that drinks up all the fluid it can find, and it drank up its fallen brothers when they were killed. This led to puzzles being built on this principle.
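To illustrate the idea: the talk didn’t show actual code, so all the names and structures in this little C++ sketch are my own guesses, but linking fluid to AI can be as simple as letting an agent remove nearby fluid particles from the simulation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical data types -- illustrative only, not Vessel's actual code.
struct FluidParticle {
    float x, y;
    bool  consumed = false;
};

// An AI that "drinks" any fluid particle within reach. Since fluid
// creatures decay back into particles when killed, a drinker will also
// consume its fallen brothers -- the emergent behaviour from the talk.
struct DrinkerAI {
    float x, y;
    float reach;

    void update(std::vector<FluidParticle>& fluid) {
        for (auto& p : fluid) {
            float dx = p.x - x, dy = p.y - y;
            if (!p.consumed && std::sqrt(dx * dx + dy * dy) < reach)
                p.consumed = true;  // remove the particle from the simulation
        }
    }
};

int main() {
    std::vector<FluidParticle> fluid = {{0.5f, 0.2f}, {3.0f, 0.0f}};
    DrinkerAI drinker{0.0f, 0.0f, 1.0f};
    drinker.update(fluid);
    for (const auto& p : fluid)
        std::printf("particle at (%.1f, %.1f) consumed: %d\n", p.x, p.y, p.consumed);
}
```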
Crafting the World of Crysis
I found this talk really interesting because I learned a lot about visual art in games, even though it didn’t focus much on the programming side of Crysis. The speaker explained how Crytek started out with the premise of combining the jungle surroundings of Crysis 1 with the urban landscape of Crysis 2, which governed the look of Crysis 3. One interesting tidbit is that they moved away from providing the level designers with generic, re-usable assets and instead created assets for specific parts of the game. These assets are still designed to be modular, however: collapsed buildings, for example, are made up of parts, with some additional detail objects added to cover up repetition or straight edges.
Nintendo Wii U Application Development with HTML and JavaScript
This sponsored talk explained Nintendo’s way of handling HTML5 on the Wii U, which seems to be more open than before. You still need a Nintendo dev kit, which will probably remain an obstacle for most small or newly founded indie studios. At least they seem to be flexible about the previous rule that dev kits may only be placed in company offices.
On the technical side, they showed a system for deploying and debugging HTML5 code on a Wii U dev kit. There are libraries for making use of the specific features of the Wii U, such as the tilt sensor. They brought up the author of impact.js to demonstrate his engine running on the Wii U, also showing off that the system can handle a lot of objects on screen. However, in response to an audience question it was stated that developers don’t have access to WebGL, which seems like a huge gap for 3D games.
PlayStation Shading Language for PS4
This presentation contained a lot of shader code, and apart from the information about the PS4, the guys from Sony showed an implementation of voxel cone tracing on it. For me, the largest takeaway was that graphics programming on the PS4 will be considerably easier than on the PS3.
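I can’t reproduce their PSSL shaders here, but the core loop of voxel cone tracing (in the spirit of Crassin et al.) looks roughly like the following C++ sketch. The voxel sampler is stubbed out and every name and constant is mine, not Sony’s.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Placeholder for sampling the pre-filtered voxel mip chain at a world
// position; a real implementation reads a 3D texture. This stub fakes it.
struct Sample { float radiance; float opacity; };
Sample sampleVoxels(float x, float y, float z, float lod) {
    (void)x; (void)y; (void)z;
    return {0.1f / (1.0f + lod), 0.2f};
}

// March one cone through the voxel volume, compositing front-to-back.
// The cone's aperture picks the mip level: wider cone, coarser voxels.
float traceCone(float ox, float oy, float oz,
                float dx, float dy, float dz,
                float aperture, float maxDist, float voxelSize) {
    float radiance = 0.0f, occlusion = 0.0f;
    float t = voxelSize;  // start one voxel out to avoid self-sampling
    while (t < maxDist && occlusion < 1.0f) {
        float diameter = std::max(voxelSize, 2.0f * aperture * t);
        float lod = std::log2(diameter / voxelSize);
        Sample s = sampleVoxels(ox + dx * t, oy + dy * t, oz + dz * t, lod);
        radiance  += (1.0f - occlusion) * s.radiance;  // front-to-back blend
        occlusion += (1.0f - occlusion) * s.opacity;
        t += diameter * 0.5f;  // step proportional to the cone width
    }
    return radiance;
}

int main() {
    float r = traceCone(0, 0, 0, 0, 1, 0, 0.3f, 10.0f, 0.25f);
    std::printf("accumulated radiance: %.3f\n", r);
}
```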
I had read about the architecture of the PS3 and graphics programming on it in books and on the web, but for games researchers working on Serious Games, a PS3 dev kit was pretty much out of our reach ;-) Good to know that I won’t need that knowledge for the next generation of consoles.
Broken Age’s Approach to Scalability
One of my favourite talks of GDC this year. Like the talk on Vessel, it got into some parts of the design with a focus on the technical side. The engine of Broken Age uses only painted 2D objects for the graphics, but achieves a pseudo-3D look through a few tricks. For example, scenes are built in layers, which creates parallax when scrolling. More innovative are the ways in which animation and lighting are handled. For animation, characters use 2D skeletal animation, with hand-drawn body parts that are created for the several directions a character can face. In this way, they can realize animations without going way overboard on the texture budget.
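The parallax part is simple enough to sketch: each painted layer gets a depth factor that scales how far it scrolls with the camera. The layer names and numbers below are my own illustration, not theirs.

```cpp
#include <cstdio>

// Hypothetical layer description: distant layers scroll more slowly than
// the camera, which is what produces the pseudo-3D parallax effect.
struct Layer {
    const char* name;
    float depth;  // ~0 = far away (barely moves), 1 = moves with the playfield
};

float layerOffset(float cameraX, const Layer& layer) {
    return cameraX * layer.depth;
}

int main() {
    Layer layers[] = {{"sky", 0.1f}, {"hills", 0.5f},
                      {"playfield", 1.0f}, {"foreground", 1.4f}};
    float cameraX = 100.0f;  // how far the camera has scrolled
    for (const auto& l : layers)
        std::printf("%-10s scrolls by %5.1f px\n", l.name, layerOffset(cameraX, l));
}
```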
For lighting, the engine approximates normals from the shapes of the body parts and uses them to tint the characters, creating an effect like rim lighting. For shadows, a simple blob is drawn underneath each character. To give the effect of the light position influencing the shadow, the blob is scaled and stretched according to the distance from the light. Really simple, but it looks convincing enough.
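A minimal sketch of that blob-shadow trick, with formulas that are my own guesses rather than Double Fine’s actual code:

```cpp
#include <cmath>
#include <cstdio>

// A dark ellipse under the character, scaled and stretched with its
// distance from the light -- the exact response curve is assumed here.
struct Blob { float scaleX, scaleY; };

Blob blobShadow(float charX, float lightX, float lightHeight) {
    float d = std::fabs(charX - lightX) / lightHeight;  // normalized distance
    Blob b;
    b.scaleX = 1.0f + d;         // stretch the blob away from the light
    b.scaleY = 1.0f / b.scaleX;  // flatten it so the covered area stays similar
    return b;
}

int main() {
    Blob b = blobShadow(4.0f, 0.0f, 2.0f);
    std::printf("blob scale: (%.2f, %.2f)\n", b.scaleX, b.scaleY);
}
```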
The scalability part was largely realized by looking out for mobile devices, where reducing overdraw is the major requirement. Instead of putting the textures for the objects on large planes with a lot of transparent space, they create planar objects that approximate the shapes of the drawings, so far less invisible area has to be rasterized.
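Fitting arbitrary polygons to a drawing is more involved, but even the simplest version of the idea, trimming the fully transparent border so the quad hugs the drawing, captures why it cuts overdraw. This C++ sketch is that simplified stand-in, not their actual shape fitting:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Tight bounds of the opaque pixels in a sprite's alpha mask: the quad
// that gets rasterized (and thus the overdraw) is only as large as the
// drawing itself, instead of the whole texture.
struct Rect { int x0, y0, x1, y1; };  // half-open bounds

Rect opaqueBounds(const std::vector<std::vector<unsigned char>>& alpha) {
    int h = (int)alpha.size(), w = (int)alpha[0].size();
    Rect r{w, h, 0, 0};
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (alpha[y][x] > 0) {
                r.x0 = std::min(r.x0, x);     r.y0 = std::min(r.y0, y);
                r.x1 = std::max(r.x1, x + 1); r.y1 = std::max(r.y1, y + 1);
            }
    return r;
}

int main() {
    std::vector<std::vector<unsigned char>> alpha(8, std::vector<unsigned char>(8, 0));
    alpha[3][2] = alpha[4][5] = 255;  // two opaque pixels in a mostly empty sprite
    Rect r = opaqueBounds(alpha);
    std::printf("tight quad (%d,%d)-(%d,%d) instead of (0,0)-(8,8)\n",
                r.x0, r.y0, r.x1, r.y1);
}
```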
Lastly, the engine is a custom engine built on top of the Moai engine. Since I’m working on something similar myself, I’d love to see how they handle scripting and other point-and-click adventure elements, but maybe they’ll show that in some other talk.
Virtual Reality Gaming and Game Development
The founders of Oculus VR gave this presentation themselves. The session had massive attendance; the gamedev community is clearly really interested in this one. They gave several examples of successful integration of VR into games and shared insights into the best practices they have found so far. Some of the tips are quite old, such as looking out for flat GUI elements that hang in space.
The more interesting part was how scenes have to be composed for VR to work. For example, they showed how normal 3D rendering on 2D non-stereo screens leads to characters that are really short. This is due to our expectations of seeing characters on a screen, where we want to see not only their faces but also their torsos. In a VR environment, you suddenly notice how low the camera has to go for this effect.
Another interesting point is personal space. Close-ups of game characters (such as NPCs in Skyrim during conversations) are often way too close to us. In VR, people notice how uncomfortable this makes them.
The major recommendation was to never take control over the head movement away from the player. This means no cut-scenes with automatic camera control, no sudden motions (the discussion is still open over whether head bobbing is good or not), and no GUI elements that stick to the same place in the player’s view no matter where they look. They pointed out that this can help reduce simulator sickness.
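One common way to follow the “no view-locked GUI” advice (my own sketch, not something they showed) is to let HUD elements ease toward the gaze direction instead of being glued to it, so they behave more like objects in the world:

```cpp
#include <cstdio>

// A HUD element that lazily follows the player's head yaw. The smoothing
// scheme and the followRate constant are illustrative assumptions.
struct Hud { float yaw = 0.0f; };

void updateHud(Hud& hud, float headYaw, float dt) {
    const float followRate = 2.0f;  // per second; tune to taste
    hud.yaw += (headYaw - hud.yaw) * followRate * dt;
}

int main() {
    Hud hud;
    float headYaw = 45.0f;  // the player snaps their head to the side
    for (int frame = 0; frame < 5; ++frame) {
        updateHud(hud, headYaw, 1.0f / 60.0f);
        std::printf("frame %d: hud lags at %.1f deg\n", frame, hud.yaw);
    }
}
```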
Expo
Apart from the presentations, I looked around a bit on the expo floor, talking among others to the people of nevigo, who are creating articy:draft, and checking out the Oculus Rift for myself. Of course it’s a shiny new toy, and I’m looking forward to having one. However, at least in the breakout demo I was playing, there was still some blurring when moving your head, and the resolution was relatively low. But it’s still fascinating to look up or behind you while playing and remain in the game world. I’m really looking forward to seeing something like this on a Rift.
See also the posts on this year’s presentations on Tuesday and Wednesday.