A Problem with Films

The film industry has been approaching a point of conflict with technology. In recent years especially, more films have started using frame rates significantly higher than the traditional 24 fps. This is driven by the increasing movement from film-based cameras to tape (or digital) cameras. However beneficial this switch might be, the public hasn’t received it very well so far. For example, Peter Jackson decided to film The Hobbit at 48 fps, but so far audiences have found the screened clips unpleasant.



The problem is that faster frame rates tend to take away the “cinematic” aesthetic that separates feature films from home videos and cheap television. Unfortunately, there is no way around this; our minds and eyes associate 24 fps with movies. That stigma won’t go away anytime soon as long as movies continue to use obtusely slow frame rates, so there will, by necessity, be a period in which all movies look “cheap”. Once the transition is made, however, audiences will adjust, just as they have before.

The same thing occurred with 3D films. At first people were averse to the concept, because it violated their idea of what the “movie experience” was like. However, more and more films took to the technique, and eventually the majority of moviegoers became comfortable with the feeling. I experienced this recently, when I saw Prometheus and decided to watch it in 2D. Mere minutes into the film, I already had a faint feeling in the back of my head that something wasn’t right; my eyes have become trained to expect 3D sensations when I sit down in a movie theater.

Historically, this trend of initial rejection has held true for every new advance in film: color, synced sound, computer-generated graphics, and so on. Take, for instance, this excerpt from an article I snagged from IGN; it voices the feelings that movie audiences will be experiencing at some point in the next ten years. However, I think this is a positive switch.

“I didn’t go into CinemaCon expecting to write anything less than great things about The Hobbit, but the very aesthetic chosen by Peter Jackson has made me very nervous about this film. It just looked … cheap, like a videotaped or live TV version of Lord of the Rings and not the epic return to Tolkien that we have all so long been waiting for.”


Source Filmmaker: First Impressions

Meet the Pyro



As you may have heard, the Source Filmmaker was released two weeks ago at the conclusion of the Pyromania Update for Team Fortress 2. To get it at first, everybody was required to submit a survey that included basic hardware and software specs about their computer, including whether or not a microphone was attached. The idea was that a limited, graded release would help give a taste of what the tool is like without flooding the Internet with videos. However, after three weeks of semi-open beta, the SFM team has gone public. You can download it here. Here are my first impressions of the tool (there is a TL;DR at the bottom).

The Source Filmmaker is a tool that allows “players” to set up scenes within any Source game, and then edit the resulting clips as if they were in a video editing program. This hybrid system bypasses many of the conventional paradigms of filmmaking: you can simultaneously modify how you want a shot to look AND change how the sequence is cut together. Scenes still have props, actors, lights, and cameras. However, if you decide while editing that you want a shot of the same scene from a different angle, you can create one in seconds.

This is definitely the direction that movies are headed as a medium. Computer graphics have reached a level of visual fidelity that allows filmmakers to create entirely new elements and mix them with live footage. For instance, Sky Captain (an awesome movie, by the way) was shot entirely on blue-screen in some guy’s basement. All the environments and non-human actors were computer generated, which allowed the maker to move the actors around as he pleased. If he didn’t like the direction they were facing or their position on-screen, he could simply move them around like any other 3D asset.

Sky Captain and the World of Tomorrow



So far I’ve used the Source Filmmaker for a little over one week, on and off (I made this). From what I hear, experts at the program can deftly make complex scenes in minutes. However, I have yet to figure out all the hotkeys and efficient methods, so it takes me a long time to even sketch out a rudimentary scene. My speed is hampered, in some part, by the strange choice of hotkeys; the lower-left part of the keyboard seems to have shortcuts distributed at random. Yes, every program has a learning period in which shortcuts are committed to muscle memory. The SFM, though, for all its similarities to 3D programs, seems to have flipped the traditional hotkey set.

I digress, however. The primary aspect of the SFM that impedes my work is the tool’s concept of time and animation. To illustrate, let me explain the structure of the program: each file is called a “session”, a self-contained clip, and a single map is associated with each session. A session contains a strip of “film” composed of different shots.

Shots are independent scenes within the same map. Each shot has a scene camera and various elements that expand upon the base map set. Each shot also has an independent concept of time: you can move a shot “forwards” or “backwards” in time, which doesn’t move the clip in relation to other clips, but changes which segment of time the shot is showing within its universe. You can also change the time scale, which slows down or speeds up the clip.
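
To make that concrete, here is a minimal Python sketch of the idea. The class and names are mine, purely illustrative; this models the concept of offset and scale, not SFM’s actual internals.

```python
# Hypothetical model of how a shot maps clip time to its own "universe" time.
# This is a sketch of the concept only, not SFM's real data structures.

class Shot(object):
    def __init__(self, time_offset=0.0, time_scale=1.0):
        self.time_offset = time_offset  # moving the shot "forwards"/"backwards"
        self.time_scale = time_scale    # <1.0 slows the clip down, >1.0 speeds it up

    def universe_time(self, local_time):
        """Convert a time within the clip to a time in the shot's universe."""
        return self.time_offset + local_time * self.time_scale

# Sliding the shot backwards shows an earlier slice of the scene without
# moving the clip relative to its neighbors on the filmstrip.
shot = Shot(time_offset=-2.0, time_scale=0.5)  # also plays at half speed
print(shot.universe_time(1.0))  # -1.5
```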

Moving a shot before another shot doesn’t change the shot itself, only the sequence in which the shots are displayed. This can be confusing and/or annoying. For instance, if you have a shot of someone talking, and you want a close-up or a different angle inside that clip, there are two ways to do so. You could go into the motion editor and move the camera within the specific segment of time within the shot. The easier way, however, is to split the shot into three clips. The end clips remain the same, and inherit the elements from the single parent shot (which doesn’t exist anymore); in the middle clip, you change the camera to show a close-up angle. Both of these methods look the same until you change your mind.

After you split a clip up into different shots, you can’t (to the best of my knowledge) add in a common element that spans all three shots, even though the elements that were there beforehand were inherited by all three. If you move a prop in one shot, the change doesn’t translate over. This limitation lends itself to a strange workflow, in which you set up the entire scene from one camera view, and only when you are satisfied do you split it up into different clips.

But how about the other method I mentioned? The motion editor allows you to select “portions of time” within a shot’s universe. You can make changes to objects and their properties, but the changes will only be visible within that time segment. For smooth transitions, it allows you to “partially” select time and blend between two different settings. This feature can be extremely useful and powerful, but it is also a pain in the ass. While trying to hand-animate actors, I often find myself getting annoyed because I want to go back to the same time selection and add something in, or smooth over multiple curves. Since each entity stores its animation separately (each bone in an actor’s skeleton, for instance), I often change an animation but forget about a bone. The animation ends up completely screwed, and it’s easier to start over than to fix it.
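
For a sense of what that partial selection does, here’s a rough Python sketch of blending between an old and a new setting across a falloff region. The linear ramp is my assumption; SFM’s actual falloff curves may be shaped differently.

```python
# Rough sketch of "partially" selected time: the new setting gets full
# weight inside the selection and ramps to zero across the falloff edges.
# The linear ramp is an assumption, not SFM's documented behavior.

def blend_weight(t, sel_start, sel_end, falloff):
    """Weight of the new setting at time t; 1.0 inside the selection."""
    if sel_start <= t <= sel_end:
        return 1.0
    if sel_start - falloff < t < sel_start:
        return (t - (sel_start - falloff)) / falloff
    if sel_end < t < sel_end + falloff:
        return ((sel_end + falloff) - t) / falloff
    return 0.0

def blended_value(t, old, new, sel_start, sel_end, falloff):
    w = blend_weight(t, sel_start, sel_end, falloff)
    return (1.0 - w) * old + w * new

# Halfway through the leading falloff, a property sits halfway between settings.
print(blended_value(1.5, old=0.0, new=10.0,
                    sel_start=2.0, sel_end=4.0, falloff=1.0))  # 5.0
```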

Yes, a lot of this pain is due to my inexperience with the workflow. I’m sure I’ll get the hang of working with the strange animation system. But for any filmmaker or animator starting out, it will be quite a jump from the traditional keyframe methodology. In the Valve-made tutorials, the narrator talks about the graph editor, which seems closest to a traditional keyframed timeline. However, I have yet to glean success from its obtuse interface, and in any case the “bookmarking” system seems unnecessarily complex.

I want to cover one more thing before wrapping up: what can you put in a scene? Any model from any Source game can be added in and animated, and there are also new high-res versions of the TF2 characters. Lights, particle systems, and cameras are also available. For each of these elements, you need to create an Animation Set, which defines how the properties of the element change over time. IK rigs can be added to some skeletons, and any property of any object in the session can be edited in real time via the Element Viewer.

Another huge aspect of the program is the ability to record gameplay. At any time, you can jump into the game and run around like you are playing; all the elements of the current shot are visible as seen by the scene camera, and you can even run around while the sequence is playing. You can also capture your character’s motion in “takes”. This is great for generic running around that doesn’t need gestures or facial animations, and if you need to change something, you can convert the take into an animation set and edit it.

On the note of character animation, lip syncing is extremely easy. Gone are the pains of the phoneme editor in Face Poser: you can pop in a sound clip, run auto-detection for phonemes, apply the result to a character, and then go into the motion editor and manually adjust the facial animation and mouth movements.

TL;DR: To summarize my feelings, anyone who admires the Meet the Team clips or the Left 4 Dead 2 intro trailer should definitely check out the Source Filmmaker. It’s free, and the current tutorials let you jump into making cool short clips; every clip looks really nice after rendering. The program does require a lot of memory and processing power, though, so you will be unable to work efficiently if your computer doesn’t get decent frame rates in TF2.

RPCreate: The Website

I apologize for the recent set of more technical posts. Then again, I guess I’m not that repentant, since this is going to be of the same breed. I was told that to attract an audience I have to focus on a certain subject; I’d rather that subject be me. Perhaps I’m just going through a technical phase… Next post won’t be technical though, just generally nerdy.

After learning the basics of Google AppEngine, I’m ready to create a full-fledged website for my own purposes. It will be a clever conglomeration of some of the projects I’ve been wanting to do: a wiki, a forum (bulletin board), and a hub site for RPCreate. RPCreate is, of course, my new idea for a Minecraft server; I wrote a post about it not long ago. It needs a strong site to support the community, and I feel like a free forum and a free wiki are not enough. Not only are they missing a critical piece, the main homepage, but having the services separate and externally hosted also means I can’t implement custom features that would really help with building a strong community.

The main front page would be largely static, with only occasional major announcements, plus portals to the wiki and forum. It could also pull feeds from both sites, providing an easy way to check up on the latest activity. The site would also interface with the Minecraft server itself, allowing server status (online/offline, players, etc.) to be displayed in a banner across all the pages (wiki, forum, and hub).
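
As a sketch of what that status banner’s backend might look like, here’s a rough Python example that polls a server with the old pre-1.7 “legacy ping”. The protocol details and the placeholder hostname are my assumptions, and this would need to run somewhere that allows raw outbound sockets (classic AppEngine restricted them).

```python
# Rough sketch: poll a Minecraft server for the status banner using the
# "legacy ping" (send 0xFE 0x01; the server replies with a kick packet
# whose UTF-16BE payload holds the MOTD and player counts). The field
# layout below is an assumption based on pre-1.7 servers, and the host
# is a placeholder.

import socket

def server_status(host='mc.example.com', port=25565, timeout=3.0):
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.send(b'\xfe\x01')
        data = sock.recv(1024)
        sock.close()
    except socket.error:
        return {'online': False}
    # Skip the 0xFF packet id and two-byte length, decode the string,
    # and split on the null separators.
    fields = data[3:].decode('utf-16-be').split('\x00')
    return {'online': True, 'motd': fields[3],
            'players': int(fields[4]), 'max_players': int(fields[5])}
```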

Another bonus of consolidating online resources is that the site can have a single user database that works for the wiki AND forum, so you can view recent activity from a single profile page and track karma/helpfulness across the entire community. In addition, it would facilitate the fight against griefers, spammers, and other mischief-makers: banned or misbehaving players could have recent hurtful activity reverted with a single click. In terms of helpful utilities, setting up a system for emailing notifications of server events (like Steam community groups do), or emailing players that haven’t logged onto the server in a while, would be trivial.
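
Here’s a minimal sketch of what that shared user record could look like on AppEngine’s datastore, using the ndb library. All the model and property names are illustrative, not a finished schema.

```python
# Minimal sketch of a single user record shared by the wiki, forum, and
# game server. Property names are placeholders, not a finished schema.

from google.appengine.ext import ndb

class Member(ndb.Model):
    username = ndb.StringProperty(required=True)
    email = ndb.StringProperty()
    karma = ndb.IntegerProperty(default=0)      # helpfulness across wiki + forum
    banned = ndb.BooleanProperty(default=False)
    last_seen = ndb.DateTimeProperty(auto_now=True)

class Activity(ndb.Model):
    member = ndb.KeyProperty(kind=Member)
    site = ndb.StringProperty(choices=['wiki', 'forum', 'server'])
    summary = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)

def recent_activity(member_key, limit=20):
    """Everything a member has done lately, across both sites."""
    return (Activity.query(Activity.member == member_key)
                    .order(-Activity.created).fetch(limit))
```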

A free service like Google AppEngine would suffice, and if I posted banner ads then the revenue could be spent on something like a text notification service or a real domain name. I also like AppEngine because it uses Python to generate the entire response, so you can do whatever you like behind the scenes, unlike PHP. You can even map multiple URL paths to a single handler, or generate dynamic paths. Because of this flexibility, I figure a Python-based web application would be the best choice for interfacing with another server, like a Minecraft server. PHP either doesn’t have that kind of capability or it’s too difficult to figure out for a lazy bum like me.
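
To show what I mean about the routing, here’s a minimal sketch using webapp2, the framework bundled with AppEngine’s Python runtime. The handler and route names are placeholders.

```python
# Minimal sketch of AppEngine routing with webapp2: several paths can
# share one handler, and a regex route serves a whole dynamic "directory".
# Handler names and paths are placeholders.

import webapp2

class FrontPage(webapp2.RequestHandler):
    def get(self):
        self.response.write('RPCreate hub')

class WikiPage(webapp2.RequestHandler):
    def get(self, page_name):
        # One handler serves every /wiki/* path; the page name is dynamic.
        self.response.write('Rendering wiki page: %s' % page_name)

app = webapp2.WSGIApplication([
    # Multiple paths mapped to the same handler...
    ('/', FrontPage),
    ('/home', FrontPage),
    # ...and one regex route generates infinitely many pages.
    (r'/wiki/(\w+)', WikiPage),
])
```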

This entire website would also act as a springboard into other web applications I have planned. Once I have this under my belt (because I don’t pretend to be experienced in website creation), I can move on to ambitious, revenue-generating projects. I would talk about them here, but they might get stolen…

Eh, I’ll probably go over them in another post. Stay tuned, and subscribe to my RSS feed, or follow me on Twitter @mattlevonian.
