The Interplanetary Transport System

Space has been on my brain a lot lately. One of the causes was the long-awaited presentation by Elon Musk at the International Astronautical Congress (IAC) last month. During the talk, he finally laid out the details of his “Interplanetary Transport System” (ITS). The architecture is designed to enable a massive number of flights to Mars for absurdly low costs, hopefully enabling the rapid and sustainable colonization of Mars. The motivation behind the plan is a good one: humanity needs to become a multi-planetary species. The list of things that could take civilization down a few pegs or destroy it outright is frighteningly long: engineered bio-weapons, nuclear bombs, asteroid strikes, and solar storms crippling our electrical infrastructure are some of the most obvious. Rampant AI, out-of-control self-replicating robots, and plain old nation-state collapse from war, disease, and famine are some other threats. In the face of all those horrifying things, what really keeps me up at night is the fact that if civilization collapses right now, we probably won’t get another shot. Ever. We’ve abused and exhausted the Earth’s resources so severely that we simply cannot reboot human civilization to its current state. This is the last and best chance we’ll ever get. If we don’t establish an independent, self-sufficient colony on Mars within 50 years, we’ll have solved the Fermi Paradox (so to speak).

But Musk’s Mars architecture, like most of his plans, is ambitious to the point of absurdity. It at once seems like both fanciful science fiction and impending reality. Because Musk works from first principles, his plans defy socio-political norms and cut straight to the heart of the matter, and this lateral approach tends to rub the established thinkers of an industry the wrong way. But we’ve seen Musk prove people wrong again and again with SpaceX and Tesla. SpaceX has broken a lot of ground by being the first private company to achieve orbit (as well as return safely to Earth), to dock with the International Space Station, and to propulsively land a part of a rocket from an orbital launch. That last one is particularly important, since it was sheer engineering bravado that allowed them to stand in the face of ridicule from established aerospace figureheads. SpaceX is going to need that same sort of moxie in spades if they are going to succeed at building the ITS. Despite their track record, the ITS will be deceptively difficult to develop, and I wanted to explore the new and unsolved challenges that SpaceX will have to overcome if they want to follow through on Musk’s designs.

SpaceX ITS Diagram

The basics of the ITS architecture are simple enough: a large first stage launches a spaceship capable of carrying 100 people to orbit. More spaceships (outfitted as tankers) are launched to refill the craft with propellants before it departs for Mars during an open transfer window. After a 3 to 6 month flight to the Red Planet, the spaceship lands on Mars. It does so by first bleeding off speed with a Space Shuttle-style belly-first descent, before flipping over and igniting its engines at supersonic speeds for a propulsive landing. After landing, the craft refills its tanks by processing water and carbon dioxide present in Mars’s environment and turning them into propellant for the trip back to Earth. Then the spaceship simply takes off from Mars, returns to Earth, and lands propulsively back at home.
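The plan doesn’t spell out the chemistry, but in-situ methane production is generally described as electrolysis of mined water feeding the Sabatier reaction (my summary of the standard scheme, not an official SpaceX specification):

```latex
\begin{align*}
2\,\mathrm{H_2O} &\rightarrow 2\,\mathrm{H_2} + \mathrm{O_2} && \text{(electrolysis of mined water ice)}\\
\mathrm{CO_2} + 4\,\mathrm{H_2} &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} && \text{(Sabatier reaction, using atmospheric } \mathrm{CO_2}\text{)}
\end{align*}
```

The outputs, methane and oxygen, are exactly the propellants the spaceship burns, which is a large part of the appeal of that propellant combination for a Mars architecture.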

Now, there are a lot of hidden challenges and unanswered questions present in this plan. The first stage is supposed to land back on the launch mount (instead of landing on a pad like the current Falcon 9 first stage), requiring centimeter-scale targeting precision. The spaceship needs to support 100 people during the flight over, and the psychology of a group that size in a confined space for 6 months is basically unstudied. Besides other concerns like storing highly cryogenic propellants for a months-long flight, radiation exposure during the flight, the difficulty of re-orienting 180 degrees during re-entry, and the feasibility of landing a multi-ton vehicle on soft Martian regolith using powerful rocket engines alone, there are the big questions of exactly how the colonists will live and what they will do when they get to Mars, where the colony infrastructure will come from, how easy it will be to mine water on Mars, and how the venture will become economically and technologically self-sufficient. Despite all of these roadblocks and question marks, the truly shocking thing about the proposal is the price tag. Musk wants the scalability of the ITS to eventually drive the per-person cost down to $200,000. While still high, this figure is a drop in the bucket compared to the per-capita cost of any other Mars architecture on the table. It’s well within the net worth of the average American (although that figure is deceptive; the median American net worth is only $45,000. As far as I can figure, somewhere between 30% and 40% of Americans would be able to afford the trip by liquidating most or all of their worldly assets). Can SpaceX actually achieve such a low operational cost?

Falcon 9 Production Floor

Remember that SpaceX was originally targeting a per-flight price of $27 million for the Falcon 9. Today, the price is more like $65 million. Granted, the cost to SpaceX might be more like $35 million per flight, and they haven’t even started re-using first stages. But it is not a guarantee that SpaceX can get the costs as low as they want. We have little data on the difficulty of re-using cores. Despite recovering several in various stages of post-flight damage, SpaceX has yet to re-fly one of them (hopefully that will change later this year or early next year).

That isn’t the whole story, though. The Falcon 9 was designed to have the lowest possible construction costs. The Merlin engines that power it use a well-studied engine design (gas generator), low chamber pressures, an easier propellant choice (RP-1 and LOX), and relatively simple fabrication techniques. The Falcon 9 uses aluminum tanks with a small diameter to enable easy transport. All of their design choices enabled SpaceX to undercut existing prices in the space launch industry.

But the ITS is going to be a whole other beast. They are using carbon fiber tanks to reduce weight, but have no experience in building large (12 meter diameter) carbon fiber tanks capable of holding extremely cryogenic liquids. The Raptor engine uses a hitherto unflown propellant combination (liquid methane and liquid oxygen). Its chamber pressure is going to be the highest of any engine ever built (30 MPa; the next highest is the RD-191 at 25 MPa). This means it will be very efficient, but also incredibly difficult to build and maintain. Since reliability and reusability are crucial for the ITS architecture, SpaceX is between a rock and a hard place with its proposed design. They need the efficiency to make the system feasible, but the high performance envelope means the hardware will tolerate less abuse before needing repairs, reducing the reusability of the system and driving up costs. At the same time, reusability is crucial because the ITS will cost a lot to build, with its carbon fiber hull and exacting standards needed to survive re-entry at Mars and Earth many times over.
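To get a feel for why efficiency is so non-negotiable, the Tsiolkovsky rocket equation shows how sharply the required propellant load grows as specific impulse drops. This is a back-of-the-envelope sketch; the 6 km/s burn and the Isp values are illustrative round numbers of mine, not SpaceX figures:

```python
import math

G0 = 9.81  # standard gravity, m/s^2


def mass_ratio(delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation: initial-to-final mass ratio (m0/mf)
    needed to achieve delta_v with an engine of the given specific impulse."""
    return math.exp(delta_v_ms / (isp_s * G0))


# Illustrative ~6 km/s burn at three hypothetical Isp values:
for isp in (330, 360, 380):
    print(isp, "s ->", round(mass_ratio(6000.0, isp), 2))
```

Even a ~15% gain in Isp cuts the required mass ratio from roughly 6.4 down to 5.0, and that saving compounds across every leg of the mission, which is exactly the pressure pushing Raptor toward extreme chamber pressures.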

It’s almost like the ITS and Falcon 9 are polar opposites. The Falcon 9 was designed to be cheap and easy to build, allowing it to be economical as an expendable launch vehicle, while still being able to function in a large performance envelope and take a beating before needing refurbishment. The ITS, on the other hand, needs all the performance gains it can get, uses exotic materials and construction techniques, and has to be used many times over to make it an economical vehicle.

All of these differences make me think that the timeline for the development of the ITS is, to put it mildly, optimistic. The Falcon 9 went from the drawing board to full-stack tests in 6 years, with a first flight a few years later. Although the SpaceX of 2004 is not the SpaceX of 2016, the ITS sure as hell isn’t the Falcon 9. A rocket using some of the most traditional and well-worn engineering methods in the book took 6 years to design and build. A rocket of unprecedented scale, designed for an unprecedented mission profile, using cutting-edge construction techniques… is not going to take 6 years to design and build. Period. Given SpaceX’s endemic delays with the development of the Dragon 2 and the Falcon Heavy, which are a relatively normal-sized spaceship and rocket, respectively, I suspect the development of a huge spaceship and rocket will take more like 10 years. Even when they do finally fly it, it will take years before the price of a seat on a flight falls anywhere as low as $200,000.

Red Dragon over Mars

If SpaceX manages to launch their Red Dragon mission in time for the 2018 transfer window, then I will have a little more hope. The Red Dragon mission needs both a proven Falcon Heavy and a completely developed Dragon 2. It will also allow SpaceX to answer a variety of open questions about the mission profile of the ITS. How hard is it to land a multi-ton vehicle on Martian regolith using only a powered, propulsive descent? How difficult will it be to harvest water on Mars, and produce cryogenic propellants from in situ water and carbon dioxide? However, if SpaceX misses the launch window, I definitely won’t be holding my breath for humans on Mars by 2025.

Lone Wolf (Part 1 of 2)

I was struck by a muse and started writing a story. Partly to get it out there, and partly to force myself to finish it, I’m posting the first half of it here. I’ve reproduced the first scene here; you can download the full PDF (18 pages) to read the rest.


I’ve never considered myself a “people person”. It isn’t that I don’t like people; I just never find the right thing to say, or end up doing something I later look back on with cringe-inducing horror. I mention this only to give you a notion of how deep in over my head I was from the moment I heard the faint knocking at my door.

It was a Friday, right around 8pm, and the last rays of dusk were filtering out of the sky. It started almost as a scratching, then escalated to a weak yet persistent tapping by the time I had navigated from the kitchenette, through the tight space of my apartment, to the front door.

I wasn’t expecting visitors, and the door’s peephole was non-functional (I had never worked up the courage to call a repair service), so I wrenched the door open knowing in the back of my mind that there was a roughly 30% chance that whatever stood on the other side wanted to kill me. But instead of a combatant, the body of a young woman, bloodied and weak, slumped through the doorway onto my carpet.

So four things quickly filtered through my mind in this moment. First I thought “oh shit.” That was quickly followed by the sinking realization that I was going to miss the TNG marathon later tonight. The last two came as I appraised the situation: that it was no mere coincidence that this girl had chosen to rap on my door, and that literally the last thing I should do at this moment was phone the police.

I kicked into action. Although my interpersonal skills may be lacking, I do know a good amount of first-aid. I dragged her body into the cramped interior of my apartment and laid her on my couch. As I fetched my first-aid kit, I winced at the blood trail soaking into my carpet and upholstery.

Claw marks raked across her arms and back, and a gash on her scalp hinted at a treacherous fall. Fortunately for me (and her), it didn’t look like there was much internal damage besides maybe some fractured ribs. It would hurt to move and breathe for a few weeks, but she would recover. Judging by the head wound, she might also have suffered a light-to-moderate concussion. At least on this count, I thought as I started tending to the wounds, things could have gone a lot worse. I didn’t relish the idea of driving a half-dead girl with no relation to me to the hospital.

Of course, that was the least of my concerns at the moment. I mulled over several pieces of information that pointed to a whole lot of strife for me in the near future. First, she was a werewolf. I could smell it on her as clear as day. Second, she had been attacked by other werewolves – lingering scents pointed to a single pack. Third, after somehow escaping, she had – bleeding, in shock, and near-death – decided to head straight for my doorstep. If this didn’t already sound bad enough, it was made 10 times worse by the fact that I was a werewolf.

Read the rest here.

What Does It Take To Become A Programmer?

So these are my thoughts on this article (hint, it’s utter tripe): Programming Doesn’t Require Talent or Even Passion.

On the one hand, this article espouses a good sentiment (you don’t have to be gifted to learn programming). On the other, it completely disregards the important idea that being able to do something is not the same as being able to do it well.

I can draw, but anyone who has seen me draw would agree that I’m pretty shit at it. I can draw just well enough to get my concepts across to other people. However, if I intended to become an artist for a living, I should probably learn about proportions, shading, composition, perspective, and color theory, and be able to work with a range of mediums. Of course, there isn’t some big secret to learning these things. You just practice every day and study good artistic work, analyzing how it was made. Maybe you take some courses, or read some books that formally teach certain techniques. After thousands of invested hours, you will find that your drawing has radically improved, as shown again and again by progress comparison pictures (that one is after 2 years of practice).

The same holds true for programming. Anyone can learn programming. It requires nothing except a little dedication and time. But the article starts out by promising to ‘debunk’ the following quote (I’m not sure if it’s actually a real quote – they don’t attribute it to anybody):

You not only need to have talent, you also need to be passionate to be able to become a good programmer.

The article immediately ignores the fact that the ‘quote’ is talking about good programmers. Just like becoming a good artist requires artistic talent and a passion for learning and improving every day, good programmers are driven by the need to learn and improve their skills. Perhaps an argument can be made for “talent” being something you acquire as a result of practice, and thus you don’t need talent to start becoming good; you become good as you acquire more and more talent. This is a debate for the ages, but I would say that almost invariably a passion for a skill will result in an early baseline proficiency, which is often called “talent”. Innate talent may or may not exist, and it may or may not influence learning ability.

It doesn’t really matter though, because the article then goes on to equate “talent” and “passion” with being a genius. It constructs a strawman who has always known how to program and has never been ignorant about a single thing. This strawman, allegedly, causes severe anxiety to every other programmer, forcing them to study programming at the exclusion of all else. It quotes the creator of Django (after affirming that, yes, programmers also suffer from imposter syndrome):

Programming is just a bunch of skills that can be learned, it doesn’t require that much talent, and it’s not shameful to be a mediocre programmer.

Honestly, though, the fact of the matter is that being a good programmer is incredibly valuable. If your job is to write code, you should be able to do it well. You should write code that doesn’t waste other people’s time, that doesn’t break, that is maintainable and performant. You need to be proud of your craft. Of course, not every writer or musician or carpenter takes pride in their craft. We call these people hacks and they churn out shitty fiction that only shallow people read, or uninteresting music, or houses that fall down in an earthquake and kill dozens of people.

So, unless you want to be responsible for incredibly costly and embarrassing software failures, you better be interested in becoming a good programmer if you plan on doing it for a career. But nobody starts out as a good programmer. People learn to be good programmers by having a passion for the craft, and by wanting to improve. If I look at older programmers and feel inferior by comparison, I know it’s not because they are a genius while I am only a humble human being. Their skill is a result of decades of self-improvement and experience creating software both good and bad.

I think it’s telling that the article only quotes programmers from web development. Web development is notorious for herds of code monkeys jumping from buzzword to buzzword, churning out code with barely-acceptable performance and immense technical debt. Each developer quote is followed by a paragraph that tears down the strawman that was erected earlier. At this point, the author has you cheering against the supposedly omnipresent and overpowering myth of the genius programmer — which, I might remind you, is much like the myth of the genius painter or genius writer; perhaps accepted by those with a fixed mindset, but dismissed by anybody with knowledge of how the craft functions. This sort of skill smokescreen is probably just a natural product of human behavior. In any case, it isn’t any stronger for programming than for art, writing, dance, or stunt-car driving.

The article really takes a turn for the worse in the second half, however. First, it effectively counters itself by quoting jokes from famous developers that prove the “genius programmer” myth doesn’t exist:

* One man’s crappy software is another man’s full time job. (Jessica Gaston)

* Any fool can write code that a computer can understand. Good programmers write code that humans can understand. (Martin Fowler)

* Software and cathedrals are much the same — first we build them, then we pray. (Sam Redwine)

The author LITERALLY ASKS: “If programmers all really had so much talent and passion, then why are these jokes so popular amongst programmers?”, as if to prove that he was full of shit when he said back in the beginning “It’s as if people who write code had already decided that they were going to write code in the future by the time they were kids.”

But the absolute worst transgression the article makes is quoting Rasmus Lerdorf, creator of PHP. For those of you not “in the know”, PHP is a server-side language. It is also one of the worst affronts to good software design in recent history. The reason it was the de facto server-side language before the recent Javascript explosion is that it can be readily picked up by people who don’t know what they are doing. Like you would expect from a language designed by someone who “hates programming” and used by people who don’t know what they are doing, PHP is responsible for thousands of insecure, slow, buggy websites.

PHP’s shortcomings are amusingly enumerated in this famous post: PHP – a fractal of bad design. In the post, the following analogy is used to illustrate how PHP is bad:

I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

And according to Rasmus Lerdorf, the creator of this language:

I’m not a real programmer. I throw together things until it works then I move on. The real programmers will say “Yeah it works but you’re leaking memory everywhere. Perhaps we should fix that.” I’ll just restart Apache every 10 requests.

It’s like the article is admitting that if you don’t take the time to learn good programming principles, you are going to be responsible for horrible systems that cause headaches five years down the line for the people maintaining them and that regularly allow hackers to access confidential personal information like patient information and social security numbers for millions of people.

So yes, if you aren’t planning on programming for a career, learning to program is fairly straightforward. It’s as easy as learning carpentry or glass-blowing. It might seem daunting, but invest a half dozen hours and you can have your foot solidly in the door.

But if you plan on building systems other people will rely on, you had sure as hell better pick up some solid programming fundamentals. If you aren’t motivated to improve your skillset and become a better programmer, don’t bother learning at all. Don’t be the reason that the mobile web sucks, and don’t be the reason that 28 American soldiers died. Learn to be a good programmer.

Indiscriminately Valuing Non-Violent Games

Starting with arcade games like Galaxian (1979) and Missile Command (1980), games and combat became nearly synonymous. This was only exacerbated in the 90s by the advent of wildly popular shooters like Doom. The choice to focus a game around antagonism, combat, and violence was not a conscious design decision, but a necessity of the industry and environment. There were abstract games that didn’t contain violence, but in general the highest-profile games were about, in essence, murder.

Doom screenshot

Doom: you shoot things. Dead simple.

Then a renaissance occurred in academia, and suddenly games were art. Nobody really knew what to do with this fact or what it meant, but it was revolutionary, and regardless of anything else, games were definitely art. To support this, a number of innovative (perhaps iconoclastic) non-violent games — games like Journey and Gone Home — were held up as evidence that games are art. “Games are art, they can convey aesthetics beyond violence.” Good, great. Innovative games that are fun without using violence in their designs are awesome.

Journey screenshot

Journey is one of the seminal games in the recent wave of “artistically-valuable” indie games.

However, this easily morphed into a reactionary movement. Since these games without violence or combat were touted as being somehow better or “more elevated” than your run-of-the-mill murder simulator, the implication became that a game that was violent was inherently lesser.

Obviously, this sort of indiscriminate valuing of non-violent games is a terrible idea. A game that doesn’t use violence can be poorly designed and not-fun (Dear Esther, Mountain), just like a game that uses violence and combat can provoke deeper aesthetics (Hotline Miami, This War of Mine). Part of the problem is that nobody has developed the proper critical skills to analyze these non-violent, pacifistic games. Those that could view the design choices evenly and rationally are too busy climbing up their own assholes and praising the games for not using combat. On the other side, core gamers are immediately turned off by the lack of combat and write it off as boring.

This War Of Mine screenshot

Refugees have said This War of Mine accurately conveys the constant fear of living in a war-torn region.

One result of this dysfunction is the proliferation of so-called “walking simulators”. These are games whose main play involves walking around consuming either written, visual, or aural media, perhaps with light puzzle-solving mechanics (or similar accents). Many enterprising developers, whether they realize it consciously or not, have seized on the fact that making such a game guarantees some measure of success. They will be praised by academics and critics interested in furthering games as a legitimate medium, and have their game purchased by the small-but-steady audience of non-core, non-casual gamers (most of whom probably chafe at being called gamers).

Some walking simulators are great; I actually enjoyed Gone Home, in a way that I probably wouldn’t have if it had been a movie. They do a good job of immersing you in a focused, meaningful experience. Others are scattered or diluted by dissonant design decisions — like Corpse of Discovery. But nobody cares, because these games aren’t being evaluated on their merits as a game. They are either praised for being a game without combat mechanics, or they are ignored because they are a game without combat mechanics. Little else tends to go into the evaluation process.

Gone Home screenshot

Gone Home gives the player a meaningful experience despite being limited to looking at rooms and listening to audio.

A student game at USC, Chambara, got changed during development to be “non-violent”. The game originally saw samurai dueling in a starkly colored world. Now instead of blood, hitting an enemy produces a burst of feathers. Apparently this one tweak now qualifies it as “a transcendently beautiful and artistic entertainment game with a pacifistic outlook”. That is a direct quote from a faculty member at the school. You may see why this is troublesome to me. First of all, changing blood to feathers doesn’t change the fact that your game is about sneaking around and hitting other people with sticks before they hit you. That seems a far cry from a “pacifist outlook”. Second, this change actually hurts the game aesthetically. The blood splatters beautifully complemented the dichromatic nature of the game’s world. I consider the stark look of a blood splatter to be more artistic than a burst of feathers. Yet the game’s devs decided to make this tweak. Did they do it because it would benefit the game? No. According to the devs, “we were uncomfortable with the violence the game displayed and did not feel like it accurately reflected who we were and what we believed.” In other words, they value a game that contains bloodshed differently than a game that does not. Are they allowed to make this decision based on their personal beliefs? Absolutely. But isn’t it absurd to pretend that this tweak lends the game a “pacifist outlook”, and that it in turn allows the game to transcend to the angelic ranks of non-violent video games?

Blood Splatters

Blood splatters…

Feather Splatters

…and “feather splatters”.

I would urge critics and academics to judge pacifistic games on their merits as a game, not on their merits as a non-violent game. I would urge developers to treat the presence of combat and violence as just one among a countless sea of other design possibilities. If it aids your experience goal, you should include it and tailor it to the needs of your game as an experience. If it doesn’t, don’t include it. But don’t decide to make your game non-violent or exclude combat mechanics just because it means your game will be valued as inherently better by a specific set of people.

Escaping UI Idioms

Personally I find that whenever my engineer brain switches on, my designer brain switches off. I have to step away from coding for a while in order to objectively make the best decisions about what to implement and how. When I let my engineer brain do the designing, I end up falling into age-old preconceptions about how things should be. This is especially true when it comes to UI design.

But is it the best idea to blindly follow UI conventions, either new or old? On the one hand, a familiar UI layout and universal UI idioms will make it easier for users to jump straight into your program. However, if those idioms aren’t well suited to your application, the user can quickly find themselves confused, frustrated, and lost. If the UI is unfamiliar but uniquely designed around your application, users will be less confused, because they have no expectations that can be unwittingly subverted.

Some bad features:

  • Confirmation emails which require you to click a link before you can do anything with your account. Confirmation emails that require a link to be clicked in 24 hours but which do not impede progress are much better.
  • The “re-enter your email” fields on signup forms. Every modern browser automatically fills in these fields anyway, defeating their purpose.
  • Separating the “Find” and “Replace” functions, putting them in the “View” and “Edit” menus respectively.
  • Speaking of “View” and “Edit” menus, the standard “File”, “View”, “Edit” menu tabs often don’t suit applications. Choose menu item labels that suit your application.

An example of a good feature is the use of universal symbols for universal functions. Using a crazy new “save” icon is not a good subversion of conventional UI idioms. Another is exit confirmation; in a lot of cases, confirming whether you want to save before exiting is a great feature.

Here are two features which are not standard for applications with text-editing capability but which should be (I’ve only seen them in a handful of programs, of which Notepad++ is the most prominent):

  • A “Rename” option under the File menu, which saves the file with a new name and removes the file with the old name. This saves the tiresome task of doing “Save As” and then deleting the file in the save window, or (God forbid) having to navigate to the file in your OS’s file browser and renaming the file there.
  • Special character (\t, \n) and Regex support in “Find and Replace” modes.
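For concreteness, here is roughly what that second feature buys you, sketched in Python (the sample text and replacements are my own toy example, not any particular editor’s behavior):

```python
import re

text = "alpha\tbeta\ngamma\tdelta"

# Special-character replace: turn every tab into ", "
step1 = text.replace("\t", ", ")

# Regex replace: collapse any run of newlines into "; "
step2 = re.sub(r"\n+", "; ", step1)

print(step2)  # alpha, beta; gamma, delta
```

A plain find-and-replace box that treats `\t` and `\n` as literal backslash-t and backslash-n simply cannot express either of these edits, which is why the feature matters.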

VR Isn’t Ready

Recently I’ve heard a lot of hullabaloo about VR, especially with regards to games. This wave of hype has been going on for a while, but it has personally intensified for me because one of my professors this semester is running a VR startup. I’m also working on a VR-compatible game, so VR talk has become more relevant to me.

Array of current VR headsets

First off, I believe VR is still 10 years away from its prime-time. The tech simply isn’t at a viable level right now, and some fundamental issues of user experience have yet to be solved.

For example, my professor gave an example of why VR is such an immersive mode of interaction: the first time people put on the headset and jump into a virtual world, they reach out and try to touch objects. He trumpeted this as evidence of a kinetic experience (i.e. it pushed them to “feel” things beyond what they immediately see). While this is kind of true, I see it far more as evidence of a fundamental shortcoming. The moment a user tries to interact with the world and fails, they are jerked out of the fantasy and immersion is broken. This is true in all games; if a user believes they can interact with the world in a certain way but the world doesn’t respond correctly, the user is made painfully and immediately aware that they are in a game, a simulation.

Control VR isn’t enough.

This brings me to the first huge issue: the input problem. VR output is relatively advanced, what with Oculus and Gear VR and Morpheus. But we’ve seen little to no development effort targeted at ways for the user to interact with the world. Sure we have Control VR and such projects, but I think these haven’t caught on because they are so complicated to set up. Oculus made huge strides by turning the HMD into a relatively streamlined plug-and-play experience with a minimal mess of cables. We have yet to see how Oculus’s custom controllers affect the space, but I have a feeling they aren’t doing enough to bridge the haptic gap. We won’t see VR take off until users are no longer frustrated by the effort to give input to the game by these unintuitive means. As long as users are constantly reminded they are in a simulation, VR is no better than a big TV and a comfy couch.

Speaking of big TVs: the output tech isn’t good enough either. The 1080p of the DK2 is nowhere near a high enough resolution to be immersive. Trust me: I’ve gotten to try out a DK2 extensively over the past few months at zero personal cost, so my opinion is informed and unbiased. Trying to pick out details in the world is like peering through a blurry screen door. As long as I’m tempted to pop off the headset and peek at the monitor to figure out what I’m looking at, VR isn’t going to take off. Even the 2160×1200 of the consumer Oculus won’t be enough. Only when we get 3K or 4K resolutions in our HMDs will VR become a viable alternative to monitor gaming, and that tech is likely 5-10 years away for the average consumer.

These never caught on.

This all isn’t to say that current VR efforts are for naught. These early-adopter experiments are definitely useful for figuring out design paradigms and refining the tech. However, it would be foolish to operate under the assumption that VR is poised to take the gaming world by storm. VR is not the new mobile; VR is the new Kinect. And like the Wii and Kinect, VR is not a catch-all interaction mode; most gaming will always favor a static, laid-back experience. You can’t force people to give up lazy couch-potato gaming.

Of course, outside of gaming, VR may be more than a niche interaction mode. In applications where immersion is not the goal and users expect to train in the operation of unnatural, unintuitive controls, VR may very well thrive. Medicine, industrial operation, design, and engineering are obvious applications. It might even be useful for education. But temper your expectations for gaming.

New Coding Paradigms

So I’ve recently been thinking that the whole idea of editing text files filled with code is outmoded. When I’m thinking about code, I certainly don’t think of it as a set of classes and functions laid out in a particular order. I think of it as a cloud of entities with properties and interactions flowing between them. Shouldn’t our experience of writing code reflect this?

We need to start rethinking our code-editing tools. A lot. Here is a simple example:
XML heatmaps

What else could we do? How about the ability to arbitrarily break off chunks of code and view them in parallel, even nesting this behavior to break long blocks of code into a string of chunks:
Nesting chunks

What if we let the flow of the documentation decide how a reader is introduced to the code base, instead of letting the flow of compiler-friendly source files decide it? Chunks of code are embedded within wiki-style documentation, and while you can follow the code back to its source, reading the documentation will eventually introduce you to the whole codebase in a human-friendly fashion.

The same code could even appear in multiple places (obviously updated when the source changes), and you could see all the places in the documentation where a particular chunk of code appears. This could bridge the gap between documentation and code; documentation will never grow stale, as updating code necessitates interaction with it. Similarly, updating documentation is the same process as writing code. When a standard changes or an SLA (service level agreement) is modified, the code changes too.
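The chunk-and-reference idea above is essentially the “tangle” step of literate programming. Here is a minimal sketch of what an assembler for such documentation might look like; the `<<name>>=` chunk syntax (loosely borrowed from noweb’s conventions) and the helper functions are invented for illustration, not part of any real tool:

```python
import re

# Named chunks are declared with <<name>>= and terminated with @.
# A chunk body may reference other chunks as <<name>>.
CHUNK_RE = re.compile(r"<<(?P<name>[^>]+)>>=\n(?P<body>.*?)\n@", re.S)
REF_RE = re.compile(r"<<(?P<name>[^>]+)>>")

def parse_chunks(doc: str) -> dict:
    """Collect every named chunk; repeated declarations append to the
    same name, so one chunk can appear across many documentation pages."""
    chunks: dict[str, list[str]] = {}
    for m in CHUNK_RE.finditer(doc):
        chunks.setdefault(m.group("name"), []).append(m.group("body"))
    return chunks

def tangle(name: str, chunks: dict) -> str:
    """Recursively expand chunk references into plain source code."""
    body = "\n".join(chunks[name])
    return REF_RE.sub(lambda m: tangle(m.group("name"), chunks), body)

doc = """Start with a greeting:
<<main>>=
def main():
    <<say hello>>
@
The greeting itself is one line:
<<say hello>>=
print("hello")
@
"""

print(tangle("main", parse_chunks(doc)))
```

The documentation stays the single source of truth: rerunning the tangle step after editing any page regenerates the compiler-friendly source, so code and prose cannot drift apart.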

But why restrict ourselves to semi-linear, text-based documentation a la wikis? We tend to find UML diagrams extremely helpful for visualizing complex systems in code. What if we could build powerful, adaptable tools to translate between raw code, text-based documentation, and visual diagrams? Strictly binding them together might restrict you in the lowest levels of coding (much like, for example, using a high-level language restricts your ability to control memory allocation), but it opens up the new ability to make changes to a diagram and have most of the code rearrange and resolve itself before you. Then you step in to give a guiding hand, and adjust the text documentation, and voila! Best of all, this is more than a diagram-to-code tool; the diagram is a living thing. In fact, the diagrams, the documentation, and the codebase are synonymous. A change in one is a change in the others.
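One direction of that translation is already tractable today: deriving the diagram from the code, so the picture can never grow stale. As a rough sketch using Python’s standard `ast` module, this emits a class-inheritance diagram in Graphviz DOT text straight from source; the `inheritance_dot` helper and the example classes are hypothetical, and the reverse direction (editing the diagram to rewrite the code) remains the hard, open part:

```python
import ast

source = """
class Animal: pass
class Dog(Animal): pass
class Cat(Animal): pass
"""

def inheritance_dot(code: str) -> str:
    """Walk the syntax tree and emit one DOT edge per
    class -> base-class relationship found in the source."""
    tree = ast.parse(code)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                if isinstance(base, ast.Name):
                    edges.append(f'  "{node.name}" -> "{base.id}";')
    return "digraph classes {\n" + "\n".join(edges) + "\n}"

print(inheritance_dot(source))
```

Because the diagram is regenerated from the code on every change, this half of the loop is trivially a “living” diagram; the interesting research problem is making the arrow point the other way too.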

We’re getting to the point where it is much more useful to be able to dance across a codebase quickly than to tweak and tune the minutiae of your code. Some allowances must be made for processing-intensive applications; perhaps this system wouldn’t even be useful in those cases. But when you find yourself favoring adaptability and iteration speed over efficiency during development, and when you find yourself hampered by the need to jump between files, scroll through large swathes of code, or refer back and forth between code and documentation, maybe it’s time to rethink your coding paradigms.
