Ender’s Game

I feel I have to talk about my thoughts on the Ender’s Game movie, especially in light of the mixed reviews I have heard.

There are two ways to think about the process of turning a book into a movie.

The Engineer’s Way is methodical. Given a movie, what are the changes from the book? For each change, does it modify the meaning or impact of the event from the book? The fewer the changes, the more faithful the movie is to the book.

The Artist’s Way takes a more emotional approach. What messages and emotions made the book interesting? How can we capture those same elements in the cinematic form?

Up front, neither way is inherently better. For a literate moviegoer, the Engineer’s Way may prove more interesting. With the supporting knowledge from having read the book, the movie falls into context. In this case, the moviegoer is looking to see the images in their head turned into CGI reality on the screen. They want to see the cool things, and watch the faces of the characters as they go through their journey. The literate moviegoer has already been inside the characters’ heads and emotionally experienced the story. Now they want to experience it visually.

On the other hand, the hapless, uninformed average Joe has not experienced the story yet, on any level. They have not heard the facts, been on the emotional roller-coaster, or seen the ending. Some of these moviegoers may still prefer the Engineer’s Way, especially if they are looking for shallow entertainment. But if they are looking for an engaging story, they will almost always want the Artist’s Way.

This presents a dilemma for the filmmaker. Do you risk the wrath of the fans by deviating from the book? Or do you faithfully reproduce the book and risk losing the emotional intensity found within its pages? Few books allow for both approaches.

Ender’s Game took the Engineer’s Way. Personally, I think this was wrong. Ender’s Game is a long book with several plot lines and milieu elements that don’t especially lend themselves to the film medium. In fact, some of the best parts of the film adaptation were the parts that deviated most from the book: for example, the two invasions compressed into one, and the space battle reimagined as a fighter-plane battle. Of course, those changes didn’t alter the impact of the events, so you might still call them examples of the Engineer’s Way. But the exclusion of the Earth-bound politics certainly falls under the Artist’s Way.

The point I want to make, in a strange, roundabout way, is that the film was faithful but devoid of emotional involvement. It had the intensity, but the audience was left behind as the film skipped along at a brisk pace. One of the cardinal sins of blockbuster films (and AAA games, for that matter) is that their sense of pacing is nonexistent. There were almost no moments of complete silence in the Ender’s Game movie. Much of it flew along, approaching the discontinuity of montage. Light music accompanied the quick delivery of dialogue and the display of action, squelching any opportunity for a realistic pause.

Even having read and enjoyed the book a number of times, I could not emotionally connect with the characters onscreen. I watched the action rather than experiencing it. The movie did a little too much tell, and not enough show.

While armchair directing is the most despicable form of cinematic criticism, I want to give my two cents. If they had selected a few of the most emotionally charged, story-driving scenes and played them out over an extended period, the audience would have been given time to think. When there are realistic pauses in a conversation, the audience can form their own responses and then contrast them with what is said onscreen. Through that comparison, the audience connects with the characters. There is nothing wrong with having a second or, god forbid, two seconds of near-silence. A moment of ambient room noise can say as much as a minute of dialogue.

That said, they did pretty well with adapting the book. I’m not going to comment on the ending, because I am as stumped as anyone when it comes to turning the end of that book into a meaningful cinematic sequence.


Pyglet

A couple of months ago I discovered Pyglet, a Python library that provides windowing, input handling, and a thin interface to OpenGL. I experimented with it, figuring out different features. I had never really worked with OpenGL before, and the little experience I had was limited to translating GLUT spheres (for an N-body simulation in my Parallel Computing class).

Screenshot of my program (image link broken)

My exploration into Pyglet was a two-pronged attack. On the one hand, I had to learn how to use OpenGL, and on the other I had to learn Pyglet’s idiosyncratic diversions from standard C OpenGL. Fortunately, there is a wealth of tutorials and explanations on the Internet regarding OpenGL, and Pyglet is fairly well documented.

However, there is a caveat to exploring a new tool: debugging becomes much harder. When your code stops doing what it is supposed to, the bug could stem from a careless mistake in your code, from a fundamental error in your method, or from a sound method that is using the new API incorrectly. Since there is no way to test all three of these possibilities at once, you have to assume two of them are correct and look for an error in the third. If you pick wrong, you can spend days looking for a bug in the wrong place.

For instance, I was writing code to find the ray originating from the camera and passing through the point on the screen where the user had clicked. The trouble is that although you know the location of the click relative to the window, you don’t know where it is in world space. I went through three or four methods for finding the world-space vectors that corresponded to the vertical and horizontal directions on the screen. Once you have those, the rest of the problem is trivial.

A graphic description of my problem
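For anyone attempting the same thing, here is a minimal sketch of the unproject approach I eventually settled on, rewritten from memory rather than lifted from my actual code. It assumes pyglet 1.x (where pyglet.gl also exposes the GLU calls) and an active GL context with the scene’s matrices already loaded.

    import math
    from ctypes import byref
    from pyglet.gl import (GLdouble, GLint, glGetDoublev, glGetIntegerv,
                           GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX,
                           GL_VIEWPORT, gluUnProject)

    def pick_ray(x, y):
        """Return (origin, direction) of the world-space ray through pixel (x, y)."""
        # Grab the current matrices and viewport. Pyglet's mouse coordinates
        # already have their origin at the bottom-left, matching OpenGL's
        # viewport convention, so no y-flip is needed.
        model = (GLdouble * 16)()
        proj = (GLdouble * 16)()
        view = (GLint * 4)()
        glGetDoublev(GL_MODELVIEW_MATRIX, model)
        glGetDoublev(GL_PROJECTION_MATRIX, proj)
        glGetIntegerv(GL_VIEWPORT, view)

        # Unproject the click at the near and far clipping planes.
        near = [GLdouble() for _ in range(3)]
        far = [GLdouble() for _ in range(3)]
        gluUnProject(x, y, 0.0, model, proj, view, *(byref(c) for c in near))
        gluUnProject(x, y, 1.0, model, proj, view, *(byref(c) for c in far))

        origin = tuple(c.value for c in near)
        direction = tuple(f.value - n.value for n, f in zip(near, far))

        # Normalizing the *direction* is correct; normalizing a world-space
        # *position* is the mistake described below.
        length = math.sqrt(sum(d * d for d in direction))
        return origin, tuple(d / length for d in direction)

Intersect that ray with your scene geometry, and the picking problem really is trivial.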

In desperation I started printing out random values in the body of my code, and realized that two variables which should have been identical were very different. It turned out that when I found the location of the user’s click in world space (which was stored as a vector), I was accidentally normalizing it.
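In spirit, the bug boiled down to something like this (the values and context are hypothetical):

    import math

    # click_world is a *position* in world space, say where the picking ray
    # hits a plane. It is not a direction, so it has no business being
    # normalized.
    click_world = (12.0, 3.5, -40.0)

    # The buggy step: normalizing a position silently projects it onto the
    # unit sphere around the origin. The result still prints as a
    # plausible-looking vector, which is how it hid for days.
    length = math.sqrt(sum(c * c for c in click_world))
    click_world = tuple(c / length for c in click_world)  # oops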

I had assumed my code had no careless errors, and had instead blamed the bug on my method. In reality, I doubt there was ever a problem with my method. The problem had always lain in the three lines of code that I considered the “trivial” part. And while that part was indeed trivial, its triviality also shielded it from proofreading. After wasting a good four or five days on the problem, I have learned my lesson: there was absolutely nothing I could have done about it.

Crysis 3: First Impressions

I put my beefy new graphics card to the test. I’ve always been a fan of Crysis. The first Crysis game was such a brilliant creation, from the spine-tingling intro scenes to the perfect mix of cutscenes and free-roam arenas. The vehicles, guns, and explosions all felt right. And the game kept getting better: the tank battle was a nice departure from the jungle stealth of the start, and then the zero-gravity sequence just totally blew my mind. That turned Crysis into delicious cake. The ice level, the VTOL sequence, and the entire last level (with that epic end sequence) were all just frosting.

Crysis Screenshot

I know every level of that game by heart. So when Crysis 2 came out, I was excited. The multiplayer beta gave me some idea of how the controls would differ, but I reserved judgement, since the singleplayer campaign is the heart of any game. So imagine my surprise and disappointment when the game came out, and it sucked. The gameplay was boring, the enemies were samey and uninteresting, the vehicle sections were highly linear, and the graphics were somehow worse than the first game. Despite all the hype over CryEngine 3, the visuals were plasticky and bloomy. Crytek took everything interesting out of the series, and removed all the main characters to boot – Nomad was replaced by a silent, unimpressive protagonist. The game was cut and dried; there was no boisterous spirit left in the IP.

When Crysis 3 came out and I had a new graphics card, I figured I would buy the game. Maybe Crytek had taken the lessons of Crysis 2 to heart. Nyeeeh. The enemies and weapons are the same, and the interface is still dumbed down. I’ll admit, the graphics look a bit better, and the choice of environment is sounder. But since when is a bow and arrow cool? The bow seems like a feature tacked on to justify the game; without it, Crysis 3 would just be a short story add-on to Crysis 2.

My biggest issue is that the game is still highly linear. There are such excellent, expansive sets in Crysis 3, but each area is bounded by myriad invisible walls. The crudest element, which really insults me, is that in some places you can see into the void where they forgot to put geometry. CryEngine, by default, lays a terrain layer across the entire map; the fact that Crytek eschewed a feature designed for creating large free-roam environments means they have truly forsaken the idea of open gameplay. This makes me sad, because there was great opportunity in this urban-grassland idea. Imagine being able to fight through semi-collapsed buildings, then out onto a grass plain, then climb onto a freeway and drive a tank down it, scaring deer out of the way and shooting down helicopters that crash into skyscrapers.

There were good things about Crysis 2 and 3: the idea that the nano-suit is alien technology, the idea of Prophet’s consciousness switching bodies. The stalkers in the high grass were cool. But they screwed up the aliens, didn’t bring back the ice levels or zero gravity, and took away speed and strength modes, tactical nuke launchers, and in-game freedom. I will continue to cite the demise of the Crysis franchise as a definitive argument against EA and consoles.

< / rant >

Zombies, Pixels, and Cubes (Oh my!)

It’s no secret that many games these days have incurred that oh-so-virulent infection. Like the T-virus, it has spread to every sector of the market, turning developers into shambling shells of their former selves. I speak, of course, of zombies. Just last year we saw WarZ, ZombiU, BlOps 2, and Amy. The year before that saw Yakuza: Dead Souls, Rise of Nightmares, Dead Island, and the rather well-named Zombies. That list excludes low-profile games and those which aren’t, in my opinion, terrible. Is this trend developer laziness, or perhaps corporate influence? I wouldn’t be surprised if teams were pushed towards zombie games because, statistically, they make more money.

While that rationale is understandable for large-budget games, it falls short of excusing indie games. Zombies are a prop-up, a cop-out for a developer who can’t come up with a better framework. Sure, they save you the effort of establishing a complete universe (which is extremely tricky), and that effort can go back into making other parts of the game better. But is the tradeoff worth it? To me, zombies don’t allow many avenues for creative gameplay and storytelling. Are they a fall-back for those who need an extra kick in their games? Just search “zombie” in the Steam Store and sort by release date. Decide for yourself.

On a seemingly unrelated note, I want to talk about retro graphics. Let’s take a stroll down the Steam Greenlight aisle, shall we? In the first few pages we see:

  • MANOS: The Hands of Fate
  • Dead Colony
  • Deprivation
  • Hammerwatch
  • Potatoman Seeks the Troof
  • Dungeonmans
  • Topia Online
  • 16 Bit Arena
  • Spuds Quest
  • Legend of Dungeon

Keep in mind, these are only the ones easily identifiable by their tile image – many more lurk out there behind well-illustrated thumbnails.

What is the cause of this tsunami of retro graphics in the indie game market? Pixel graphics have the added bonus of nostalgic appeal for a certain generation, and the art assets may be cheaper to produce. But, at least to me, pixel graphics convey a sense of harsh, delineated gameplay, where fun is equated with difficulty. My mind drifts to games like Megaman, where the reward for beating one level is to play the same level over again with a different color tileset. Whatever the benefits of pixel graphics, I think they fall by the wayside once the decision is made. Pixel graphics, like zombies, are a knee-jerk reflex for the mediocre game developer. The offenders often differ, but I guarantee there is more than one pixellated zombie game produced in the last five years.

Which brings me to cubes. Thanks, Minecraft. I both enjoy and loathe your trend-setting magnificence. It’s time for another stroll through Greenlight. Bonus points for games that have the word “Cube” in them.

  • Block Story
  • Slip
  • Logicubiks
  • Cell Emergence
  • Brain Cube Reloaded
  • King Voxel
  • Cubes and Zombies
  • Ace of Spades
  • Cube Park
  • Cube World

Ugh. *shiver*. I should do another post on how not to make your game look totally unappealing on Steam Greenlight. You would think choosing a good name and thumbnail would be at the top of everybody’s list. Apparently not.

The Future of the Source Engine

Valve’s Golden Source and Source engines and Epic’s Unreal engines have had a long, acrimonious rivalry. Both Golden Source and the Unreal Engine debuted in 1998, in Half Life and Unreal respectively, and both games were considered revolutionary at the time. Unreal blew technical and graphical expectations out of the water, while Half Life left a legacy as one of the most influential games in the FPS genre.

Unreal Engine screenshot
Golden Source screenshot

Fast forward six years. Valve has, in the meantime, released Team Fortress Classic and Counter-Strike, both revolutionary games in their own right. The Unreal and Unreal 2 engines (the latter released two years prior) had become extremely popular platforms for game developers, mostly because of their notable modularity and room for modification.

In 2004, Valve debuts the Source engine with Half Life 2, a groundbreaking game that completely demolishes the competition and sets a long-lasting standard in story, gameplay, and graphics. For comparison, Unreal Tournament 2004 was published the same year.

Unreal Engine 2 screenshot
Source screenshot

Another seven years on, Unreal Engine 3 has been released, and games like Gears of War and Batman: Arkham City have been built with it. Valve has just published Portal 2, its first game with wide console support. The Source engine has evolved over the years, picking up many graphical upgrades along with compatibility with the major consoles.

Batman: Arkham City screenshot

However, it becomes readily apparent that the visual styles of these two engines have diverged in the years since 1998. The Unreal line of engines has supported games like Bioshock and Mass Effect, but has also borne the brunt of AAA games, which are known for their muted brown-grey color palettes, uninteresting stories, and factory-made gameplay. Unreal Engine games are commonly criticized for character models that look “plastic” (a result of developers setting specular too high on materials), awkward character animations, and overuse of lens flares and bloom.

Games on the Source engine, on the other hand, consistently revolutionize some aspect of gaming: Team Fortress 2, Portal, and Left 4 Dead are widely known for innovative gameplay. Unfortunately, Valve has lagged behind in pushing the graphical frontier. Half Life 2 was smashingly good for its time, much in the same way that Halo stunned the gaming world back in 2001, but every Source game since has looked more and more dated.

Even worse, developers are driven away from using the Source engine due to a set of tools that have barely evolved since they were developed in 1998. Hammer, the level creation program, and Face Poser, the character animation blender, are unwieldy and unfinished; Source SDK tools are notorious for their bugs and frequent crashes.

Conversely, the Unreal toolset is streamlined and easy to jump into, which has drawn amateurs and professional developers alike. The Unreal editor lets you pop right into the game to see changes, whereas the Source engine still requires maps to be compiled (which can take minutes) before the most recent revision can be played. Unreal’s deformable meshes dwarf the Source engine’s awkward displacement system.

However, I have a feeling that a couple of factors are going to come together and lift both engines out of the stigma they have recently incurred. The biggest factor is that at some point the AAA game industry is going to collapse. The other critical event is Half Life 3.

Yes! Do I know something you don’t? Have I heard a rumor lurking on the Internet about this mysterious game? No. But I do know history, and that is more useful than all the forum threads in the universe.

Half Life was released in 1998. Half Life 2 was released in 2004, and Episode 2 in 2007. Half Life 2 took six years to develop, despite sitting on the back burner for some of that time. By extrapolation, Half Life 3 should be nearing release within the next two years. However, circumstances are different this time.

The Source engine was developed FOR Half Life 2. The graphics were updated over the years, but the toolset remained the same. In the time between HL2 and now, Valve has been exploring other genres: Team Fortress 2, Portal 2, and Left 4 Dead 2 all took a portion of the company’s resources. In addition, the last few years have been spent intensively on developing Dota 2 (which, by the way, was the cause of the free release of Alien Swarm), while the second Counter-Strike was contracted out. So Half Life 3 has been a side project, no doubt going through constant revisions and new directions.

However, unless Valve is going to release Day of Defeat 2 or Ricochet 2 (yeah right) in 2013, production on Half Life 3 is going to kick into high gear. And there is one fact that makes me believe this theory even more strongly.

Since 2011, and probably even earlier, Valve has been pumping a huge amount of effort into redesigning their entire suite of development tools. It had become readily apparent to everyone at the company that the outdated tools were making it impossible to develop games efficiently.

“Oh yeah, we’re spending a tremendous amount of time on tools right now. So, our current tools are… very painful, so we probably are spending more time on tools development now than anything else and when we’re ready to ship those I think everybody’s life will get a lot better. Just way too hard to develop content right now, both for ourselves and for third-parties so we’re going to make enormously easier and simplify that process a lot.”
-Gabe Newell

Because both TF2 and Portal 2 have been supported continuously since release, they have been the first to show the effects of this new tool development. Valve seems to have used these games as testing grounds, not only for its free-to-play business model and Steam Workshop concept, but also for new kinds of development tools. First, the Portal 2 Puzzle Maker changed the way maps are made: in the same way that Python streamlines the programming process, the Puzzle Maker cuts out the tedious technical parts of building a level.

The second tool released was the Source Filmmaker. Although it doesn’t directly influence the way maps are made, it’s obviously been the subject of a lot of thought and development. The new ways of thinking about animation and time introduced by the SFM are probably indicative of shifting paradigms inside Valve’s tool-development group.

Don’t think that Valve is going to be trampled by any of its competitors. Despite Unreal Engine’s public edge over the Source engine, especially with the recent UE4 reveal, the AAA game industry is sick, and no other publisher has a grip on the PC game market quite like Valve does. And although 90% of PC gamers pirate games, PC game sales are hardly smarting. In fact, the PC game market is hugely profitable, racking up $19 billion in 2011. This is just a few billion shy of the collective profits of the entire console market. Yet the next best thing to Steam is, laughably, EA’s wheezing digital content delivery system Origin.

(Source for the numbers)

Anyways, here’s hoping for Half Life 3 and a shiny new set of developer tools!

Source Filmmaker: First Impressions

Meet the Pyro

As you may have heard, the Source Filmmaker was released two weeks ago at the conclusion of the Pyromania Update for Team Fortress 2. To get it at first, everybody was required to submit a survey of basic hardware and software specs, including whether or not a microphone was attached. The idea was that a limited, graded release would give a taste of what the tool is like without flooding the Internet with videos. Now, after three weeks of semi-open beta, the SFM team has gone public. You can download it here. Below are my first impressions of the tool (there is a TL;DR at the bottom).

The Source Filmmaker is a tool that allows “players” to set up scenes within any Source game, and then edit the resulting clips as if they were in a video editing program. This hybrid system bypasses a lot of the conventional paradigms of film making: you can simultaneously modify how you want a shot to look AND change how the sequence is cut together. Scenes still have props, actors, lights, and cameras. But if you decide while editing that you want a shot of the same scene from a different angle, you can create one in seconds.

This is definitely the direction movies are headed as a medium. Computer graphics have reached a level of visual fidelity that lets filmmakers create entirely new elements and mix them with live footage. For instance, Sky Captain (an awesome movie, by the way) was shot entirely on blue-screen in some guy’s basement. All the environments and non-human actors were computer generated, which allowed the maker to move the actors around as he pleased: if he didn’t like the direction they were facing or their position on-screen, he could simply move them around like any other 3D asset.

Sky Captain and the World of Tomorrow

So far I’ve used the Source Filmmaker for a little over a week, on and off (I made this). From what I hear, experts can deftly assemble complex scenes in minutes. I have yet to figure out all the hotkeys and efficient methods, however, so it takes me a long time to even sketch out a rudimentary scene. My speed is hampered, in some part, by the strange choice of hotkeys; the lower-left part of the keyboard seems to have shortcuts distributed at random. Yes, every program has a learning period in which shortcuts are committed to muscle memory. But the SFM, for all its similarities to 3D programs, seems to have flipped the traditional hotkey set.

I digress, however. The primary aspect of the SFM that impedes my work is the tool’s concept of time and animation. To illustrate, let me explain the structure of the program. Each file is called a “session”: a self-contained clip, with a single map associated with it. A session contains a strip of “film” which is composed of different shots.

Shots are independent scenes within the same map. Each shot has a scene camera and various elements that expand upon the base map set. Each shot also has an independent concept of time. You can move a shot “forwards” or “backwards” in time, which doesn’t move the clip in relation to other clips, but changes which segment of time the shot shows within its own universe. You can also change the time scale, which slows down or speeds up the clip.
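To make that concrete, here is a toy model of how I understand a shot’s relationship to time. This is just my mental model in code form, not Valve’s actual implementation:

    class Shot:
        """Toy model of an SFM shot: a window into its own timeline."""

        def __init__(self, name, duration, time_offset=0.0, time_scale=1.0):
            self.name = name
            self.duration = duration        # length on the film strip, in seconds
            self.time_offset = time_offset  # slides the shot's universe forwards/backwards
            self.time_scale = time_scale    # > 1 speeds the clip up, < 1 slows it down

        def local_time(self, strip_time):
            """Map a time on the film strip into this shot's own universe."""
            return self.time_offset + strip_time * self.time_scale

    # Reordering shots on the strip changes only the playback order; each
    # shot still samples its own universe through its offset and scale.
    film = [Shot("wide", 4.0), Shot("closeup", 2.0, time_offset=1.5)]
    for shot in film:
        print(shot.name, shot.local_time(0.0), shot.local_time(1.0))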

Moving one shot before another doesn’t change the shots themselves, only the sequence in which they are displayed. This can be confusing and/or annoying. For instance, if you have a shot of someone talking and you want a close-up or a different angle inside that clip, there are two ways to do it. You could go into the motion editor and move the camera within a specific segment of time within the shot. The easier way, however, is to split the shot into three clips: the end clips remain the same, inheriting their elements from the single parent shot (which no longer exists), while in the middle clip you change the camera to show the close-up angle. Both methods look the same, until you change your mind.

Once you split a clip into different shots, you can’t (to the best of my knowledge) add a common element that spans all three, even though the elements that were there beforehand were inherited by all three. If you move a prop in one shot, the change doesn’t carry over. This limitation encourages a strange workflow, in which you set up the entire scene from one camera view and only split it into separate clips once you are satisfied.

But how about the other method I mentioned? The motion editor allows you to select “portions of time” within a shot’s universe. You can make changes to objects and their properties, but the changes will only be visible within that time segment. For smooth transitions, it lets you “partially” select time and blend between two different settings. This feature can be extremely useful and powerful, but it is also a pain in the ass. While hand-animating actors, I often get annoyed because I want to go back to the same time selection and add something, or smooth over multiple curves. And since each entity stores its animation separately (each bone in an actor’s skeleton, for instance), I regularly change an animation but forget about a bone. The animation ends up completely screwed, and it’s easier to start over than to fix it.

Yes, a lot of this pain is due to my inexperience with the workflow. I’m sure I’ll get the hang of the strange animation system. But for any filmmaker or animator starting out, it will be quite a jump from the traditional keyframe methodology. In the Valve-made tutorials, the presenter talks about the graph editor, which seems to be the closest thing to a keyframed timeline. However, I have yet to glean success from its obtuse interface, and in any case the “bookmarking” system seems unnecessarily complex.

I want to cover one more thing before wrapping up: what can you put in a scene? Any model from any Source game can be added and animated, and there are new high-res versions of the TF2 characters. Lights, particle systems, and cameras are also available. For each of these elements you create an Animation Set, which defines how the element’s properties change over time. IK rigs can be added to some skeletons, and any property of any object in the session can be edited in real time via the Element Viewer.

Another huge aspect of the program is the ability to record gameplay. At any time, you can jump into the game and run around as if you were playing; all the elements of the current shot are visible as seen by the scene camera, and you can even run around while the sequence is playing. You can also capture your character’s motion in “takes”, which is great for generic running around that doesn’t need gestures or facial animations. If you need to change something, you can convert a take into an animation set and edit it.

On the note of character animation, lip syncing is extremely easy. Gone are the pains of the phoneme editor in Face Poser: you pop in a sound clip, run auto-detect for phonemes, apply the result to a character, and then go into the motion editor and manually adjust the facial animation and mouth movements.

TL;DR: To summarize my feelings, anyone who admires the Meet the Team clips or the Left 4 Dead 2 intro trailer should definitely check out the Source Filmmaker. It’s free, and the current tutorials let you jump into making cool short clips; every clip looks really nice after rendering. The program does require a lot of memory and processing power, though, so you will be unable to work efficiently if your computer doesn’t get decent framerates in TF2.
