Programming a 40K Simulator


This has been a pet project of mine for a long time. I’ve taken a couple cracks at making one, but have failed miserably each time. My objective is to create a simulator that allows players to play battles with various armies, allowing them to refine their strategies before actually buying the models. However, I want a fairly flexible and modular rule system that enables alternate rulesets to be introduced (not only other official rule editions, but custom gametypes like racing) and of course custom armies. There are two inherent problems with creating a realistic simulator:

  1. The number of exceptions to the rules is about as large as the number of rules. Many exceptions rely on human understanding of the rules and get resolved intuitively, which is problematic for a procedural approach.
  2. The rules change. I started playing with 4th edition rules. My first attempt was during that phase. When I made my second attempt, the rules had moved onto 5th edition. Now they are in 6th edition. It’s a pain in my ass.

The two previous times I attempted to make a simulator, I programmed from the ground up. This time I’m thinking about using Unity3D. This makes sense in at least one regard: Line of Sight is contingent on precise model and terrain geometries. If I want the project to match my objectives, I need some sort of 3D system behind it.

I see a pretty clear path for development. The first step is to break down the rules into a clear procedural system. Once I can see the possibilities that need to be handled, I can figure out how to generalize them into as simple an architecture as possible. Then I can figure out how to implement that in Unity3D.

Unfortunately, every step of that is hard. I’m still stuck on that first step. First, I’m unaccustomed to the 6th edition ruleset. Second, the rulebook isn’t laid out optimally for my purposes. It works well for a beginner building up their understanding: the basics of each step are introduced, and then later modified with exceptions and special cases. I need an organizational scheme somewhere between a flowchart and an outline.

So I have two things I need to do: map the rules, and learn C# for later use in Unity3D.
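To make the rule-mapping idea concrete, here is a sketch of the kind of modular rule system I have in mind (all names are hypothetical, and it's in Python rather than C# just for brevity): each rule is a base procedure plus a stack of exceptions that can override its result, so edition changes and special cases become data rather than hard-coded branches.

```python
# Minimal sketch of a modular rule system (all names hypothetical).
# Exceptions are checked in reverse order so later additions take priority.

class Rule:
    def __init__(self, name, base):
        self.name = name
        self.base = base          # base procedure: context -> result
        self.exceptions = []      # (applies(ctx), resolve(ctx, base)) pairs

    def add_exception(self, applies, resolve):
        self.exceptions.append((applies, resolve))

    def evaluate(self, context):
        for applies, resolve in reversed(self.exceptions):
            if applies(context):
                return resolve(context, self.base)
        return self.base(context)

# Example: a basic movement rule, modified by a special case.
move = Rule("movement", lambda ctx: 6)                      # 6" base move
move.add_exception(lambda ctx: ctx.get("terrain") == "difficult",
                   lambda ctx, base: base(ctx) // 2)        # halved in difficult terrain

print(move.evaluate({}))
print(move.evaluate({"terrain": "difficult"}))
```

A custom ruleset (or a new edition) would then be a different set of rule objects and exception stacks rather than a rewrite of the engine.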


A Problem with Films

The film industry has been approaching a point of conflict with technology. In recent years especially, more films have started using frame rates significantly greater than the traditional 24 fps, a result of the increasing movement from film-based cameras to tape (or digital) cameras. However beneficial this switch might be, the public hasn’t received it very well so far. For example, Peter Jackson decided to film The Hobbit at 48 fps, but so far people have found the screened clips unpleasant.



The problem is that faster frame rates tend to take away the “cinematic” aesthetic that separates feature films from home videos and cheap television. Unfortunately, there is no quick fix; our minds and eyes associate 24 fps with movies. That association won’t go away anytime soon as long as movies continue to use the traditional slow frame rate, so there will, by necessity, be a period in which all movies look “cheap”. Once the transition is made, however, the stigma will fade.

The same thing occurred with 3D films. At first people were averse to the concept, because it violated their idea of what the “movie experience” was like. However, more and more films took to the technique, and eventually the majority of moviegoers became comfortable with the feeling. I experienced this recently, when I saw Prometheus and decided to watch it in 2D. Mere minutes into the film, I already had a faint feeling in the back of my head that something wasn’t right; my eyes had become trained to expect 3D sensations when I sit down in a movie theater.

Historically, this trend of initial rejection has held for every new advance in film: color, synced sound, computer-generated graphics, and so on. Take, for instance, this excerpt from an article I snagged from IGN. It voices the feelings that movie audiences will be experiencing at some point in the next ten years. However, I think this is a positive switch.

“I didn’t go into CinemaCon expecting to write anything less than great things about The Hobbit, but the very aesthetic chosen by Peter Jackson has made me very nervous about this film. It just looked … cheap, like a videotaped or live TV version of Lord of the Rings and not the epic return to Tolkien that we have all so long been waiting for.”

The Chicken or the Engine?

When designing an application with separate functions for creating and displaying data, the developer faces a dilemma. It is difficult to test the part that displays the data without an existing test set. However, it is hard to create a data set by hand, and it is difficult to know whether your data-creation program is operating correctly without being able to view its product in the display program. It always ends up being a balancing game: develop a small part of the display app with a limited hand-crafted data set, then build the creation app, and then try to develop small modules in parallel until you have a robust enough codebase.

Really the problem is the development cycle itself. Say I want to create a game. So maybe I decide to use a preexisting engine so I don’t have to create my own renderer (by the way, my 3D game engine is coming along nicely, albeit slower than I expected. I need to rework a lot of the math behind it, but once I’ve built the basic graphics part I expect it will get easier). First you strip down the engine, but then what the hell do you do?

I suppose you code game mechanics, or at least the UI and then the way user controls interact with the game. Then you start to build up a set of game entities, until you have the basic game and then you can add in features. You can create test assets as you add features. Once the main meat of the game is coded, you can pass it off to the environment designers, etc. After that you can continue to polish the game and add features that don’t change asset requirements or level design needs.

But that’s only if you start with a pre-built engine. When building an engine from scratch, you need some sort of test data, such as a 3D model or an XML file. You need it to be simple enough to debug, but that may involve a lot of hand work. Often you need the display codebase to build the test data anyway! Hmm. Not sure where this is going. I guess it was more of a complaining session than anything else.
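That said, one partial escape from the chicken-and-egg is to script the simplest possible hand-checkable asset. A sketch of what I mean (the file name and the choice of a unit cube in Wavefront OBJ format are mine):

```python
# Sketch: generate the simplest possible test asset (a unit cube as a
# Wavefront OBJ) so display code has something known-good to load before
# the real creation tool exists.

def cube_obj():
    # Eight corners of the unit cube; index = 4*x + 2*y + z + 1 (1-based).
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    # Six quad faces, using OBJ's 1-based vertex indices.
    faces = [(1, 2, 4, 3), (5, 7, 8, 6), (1, 5, 6, 2),
             (3, 4, 8, 7), (1, 3, 7, 5), (2, 6, 8, 4)]
    lines = ["v %g %g %g" % v for v in verts]
    lines += ["f %d %d %d %d" % f for f in faces]
    return "\n".join(lines)

with open("test_cube.obj", "w") as fh:
    fh.write(cube_obj())
```

Eight vertices and six faces is small enough to verify by hand in a text editor, which is exactly the property hand-crafted test data needs.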

OpenGL and Geometry Generation

Today I was thinking about 3D rendering (in part because of the recent work I’ve been doing with ray tracing). I worked out all the math for drawing a polygon based on a list of vertices and a camera. I was considering coding it up, but then I realized that I was very unfamiliar with Windows programming (because I sure as hell wasn’t going to do this in Java). So I spent the greater portion of the afternoon reading a tutorial on Windows programming and using OpenGL, at which point I abandoned my original plan; I was just going to finally figure out how to use OpenGL.

I had worked with GLUT before on a Parallel Computing lab, though I only used pixel control in that case; I was rendering subsections of a Mandelbrot set. That was easier because all the requisite libraries were already installed in the major lab at school (which has workstations running Gentoo). Working at home, I have been confounded; I just can’t get the linker to use all the requisite libraries.

The whole thing that got me thinking about 3D engines was my work on an HL2 level. Often I will import brushwork (pieces of the level) from the game’s campaign levels; it saves time and adds a nice level of detail to the environment. However, I was thinking about common elements such as stairs, doors, windows, and grates. It’s a multi-step process to cut a hole in a brush, unless you use carve (but nobody uses carve, because it doesn’t give you control over how the geometry cuts). Doors are tedious to cut out and then line up with the texture. Non-solid stairs are the most painful to make, however. You have to arrange the steps and make sure the sidings match up, and for each new type of turn you have to rework the geometry. The whole idea of hand-making all the geometry in a level is ridiculous. I haven’t seen a single FPS level editor that lets you define procedures for geometry generation.
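To make the idea concrete, here is a sketch of the kind of generation procedure I mean, not tied to Hammer or any real editor’s API: solid step boxes for a straight staircase, derived from a few parameters instead of hand-placed brushes. Each box is a pair of opposite corners.

```python
# Sketch of a stair-generation procedure (hypothetical, editor-agnostic).
# Produces axis-aligned solid step boxes ((x0, y0, z0), (x1, y1, z1))
# for a straight staircase: x is the direction of travel, y is height.

def straight_stairs(steps, rise, run, width):
    boxes = []
    for i in range(steps):
        x0, x1 = i * run, (i + 1) * run
        # Each step is a solid box from the floor up to its tread height.
        boxes.append(((x0, 0, 0), (x1, (i + 1) * rise, width)))
    return boxes

# Ten steps, 8 units of rise and 12 of run per step, 48 units wide.
for box in straight_stairs(10, 8, 12, 48):
    print(box)
```

A turn would just be another procedure with the same parameters plus an angle, which is exactly the rework that currently has to be done by hand for every staircase.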

A screenshot of the Hammer UI


I feel like it would be relatively simple to define a generation process for buildings, for example. Each building is spaced a certain distance in from the sidewalk, with maybe two or three justifications for things like planters and doors. Windows are spaced evenly apart, with buffer spaces on either side of the building. You could attach balconies or planters onto every window, awnings above doors, and even outdoor area layouts for cafes. After meticulously defining a couple of building styles, you could almost instantly generate entire blocks.

Then come the nested procedures. A street, for example, would have periodic drains and manholes, distributions of building types based on the neighborhood type, and junctions to more streets. Signs, traffic lights, road markings, and crosswalks would all be placed correctly at street corners. Cul-de-sacs could fill up empty space. Interiors could be set for buildings as well, and floor plans could be modular: rooms with distributions of room types and different layout permutations would combine into floors, and a building type could have a sequence of floor types defined, such as bottom-level stores and top-level apartments. Central structures like stairwells would only need to be made once.
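The window-spacing rule alone shows how little input such a procedure needs. A sketch (all parameter names and numbers are made up): fit as many windows as possible between the end buffers, keeping at least a minimum gap around each, and spread the leftover space evenly.

```python
# Sketch of the even-window-spacing rule (hypothetical parameters).
# Returns the left edge of each window along a facade.

def window_positions(facade_w, window_w, buffer, min_gap):
    usable = facade_w - 2 * buffer
    # Most windows that fit with at least min_gap on both sides of each.
    n = int((usable - min_gap) // (window_w + min_gap))
    if n <= 0:
        return []
    gap = (usable - n * window_w) / (n + 1)   # spread leftover space evenly
    return [buffer + gap + i * (window_w + gap) for i in range(n)]

# A 20-unit facade, 2-unit windows, 1-unit end buffers and minimum gaps.
print(window_positions(20.0, 2.0, 1.0, 1.0))
```

The same pattern of "count what fits, then distribute the remainder" would cover the doors, awnings, and balcony placements described above.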

Although the procedural parameter definitions might take a while longer than making regular geometry, it would be a huge time saver. Not only could full geometries be generated, but intricate, custom-designed battle areas could be laid out faster. Common terrain pieces like walls, fortifications, stairs, railings, gates, and hedges could be created with the use of a single spline. Suddenly a task like designing the maps for my strategy game becomes less daunting. The pipeline for map production is shortened. General map layouts can be quickly sketched out and then directly generated. Beta testing would be infinitely easier, as map adjustments could be made in hours, rather than days.

Ray Tracing

I don’t have time for a full fledged post today due to school work. However, I do have the time to quickly describe my latest lab in my CS class, Parallel Computing.

I’ll start out with a short summary of the class. Essentially, first semester focused on distributing computation across multiple machines. We used MPI with C to solve both ‘trivial’ problems (where each computation is independent) and ‘complex’ ones (where computations require communication between workers). We mostly used a top-down framework with a manager machine coordinating workers. Second semester is focusing on multi-threading on one machine, including harnessing GPUs and multicore CPUs.

The problem I’ve been working on is actually a lead up to the parallelization; that is, it isn’t running in parallel yet. We are doing ray tracing within a 3D space to determine shadows. Because it doesn’t particularly matter how complex the scene is, we are rendering spheres and planes only. The way it works is like this:

A point in space acts as a camera. The “screen” is actually represented as a plane. For each “pixel” on the screen, a vector is created that runs from the camera to the middle of the pixel (or other fractions if you are anti-aliasing) and is then normalized. Next, you work out whether the vector passes through each sphere and choose the solution closest to the camera. That determines which sphere or plane (if any) is rendered. To determine color, two normalized vectors are created. One goes from the center of the sphere to the point being rendered. The other runs from the point being rendered to a light source. If the cosine of the angle between them (their dot product over the product of their magnitudes) is negative, the point is in the sphere’s own shadow. If it is positive, then you check whether or not the point-to-light vector intersects a sphere. If it does, then the point is in a shadow. If it has line of sight to the light, then you merely multiply the color of the sphere by the cosine of the angle to produce falloff. A point in shadow uses the product of the sphere’s color and the scene’s ambient color. If a point’s vector doesn’t intersect a plane or sphere, it is rendered with the environment color (usually black).
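The per-pixel test above condenses to a few lines. This sketch (in Python rather than our C lab code, with a made-up one-sphere scene) intersects a single ray with a sphere via the quadratic formula and computes the cosine falloff term:

```python
import math

# Sketch of the per-pixel ray test: one ray, one sphere, cosine falloff.
# Scene values are made up for illustration.

def ray_sphere(origin, d, center, r):
    """Smallest positive t with origin + t*d on the sphere, else None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(di * oci for di, oci in zip(d, oc))
    c = sum(x * x for x in oc) - r * r
    disc = b * b - 4 * c              # a == 1 for a normalized direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return [x / m for x in v]

# Camera at the origin looking down +z at a unit sphere centered at (0, 0, 5),
# with a light above and in front of it.
cam, center, radius = (0.0, 0.0, 0.0), (0.0, 0.0, 5.0), 1.0
light = (0.0, 5.0, 0.0)
d = normalize([0.0, 0.0, 1.0])

t = ray_sphere(cam, d, center, radius)
hit = [cam[i] + t * d[i] for i in range(3)]
n = normalize([hit[i] - center[i] for i in range(3)])          # surface normal
to_light = normalize([light[i] - hit[i] for i in range(3)])
cos_theta = sum(n[i] * to_light[i] for i in range(3))
# cos_theta < 0 would mean the point is in the sphere's own shadow;
# otherwise the sphere's color is scaled by cos_theta for falloff.
print(t, cos_theta)
```

The full lab loops this over every pixel and adds the occlusion check against the other spheres before shading.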

So far we haven’t implemented reflectivity or made the code run in parallel. Anyways, back to work!
