VR Isn’t Ready

Recently I’ve heard a lot of hullabaloo about VR, especially with regard to games. This wave of hype has been going on for a while, but it has intensified for me personally because one of my professors this semester is running a VR startup. I’m also working on a VR-compatible game, so VR talk has become more relevant to me.

Array of current VR headsets

First off, I believe VR is still 10 years away from its prime time. The tech just isn’t at a viable level yet, and some fundamental issues of user experience have yet to be solved.

My professor gave an example of why VR is such an immersive mode of interaction: the first time people put on the headset and jump into a virtual world, they reach out and try to touch objects. He trumpeted this as evidence of a kinetic experience (i.e. it pushed them to “feel” things beyond what they immediately see). While this is kind of true, I see it far more as evidence of a fundamental shortcoming. The moment a user tries to interact with the world and fails, they are jerked out of the fantasy and immersion is broken. This is true in all games; if a user believes they can interact with the world in a certain way but the world doesn’t respond correctly, the user is made painfully and immediately aware that they are in a game, a simulation.

Control VR isn’t enough.

This brings me to the first huge issue: the input problem. VR output is relatively advanced, what with Oculus and Gear VR and Morpheus. But we’ve seen little to no development effort targeted at ways for the user to interact with the world. Sure, we have Control VR and similar projects, but I think they haven’t caught on because they are so complicated to set up. Oculus made huge strides by turning the HMD into a relatively streamlined plug-and-play experience with a minimal mess of cables. We have yet to see how Oculus’s custom controllers affect the space, but I have a feeling they aren’t doing enough to bridge the haptic gap. We won’t see VR take off until users are no longer frustrated by the effort of giving input to the game through these unintuitive means. As long as users are constantly reminded they are in a simulation, VR is no better than a big TV and a comfy couch.

Speaking of big TVs: the output tech isn’t good enough. The 1080p of the DK2 is nowhere near high enough to be immersive. Trust me: I’ve gotten to try out a DK2 extensively in the past few months at zero personal cost. My opinion is informed and unbiased. Trying to pick out details in the world is like peering through a blurry screen door. As long as I’m tempted to pop off the headset and peek at the monitor to figure out what I’m looking at, VR isn’t going to take off. Even the 2160×1200 of the consumer Oculus won’t be enough. When we get 3K or 4K resolutions in our HMDs, VR will be a viable alternative to monitor gaming. Of course, this tech is likely 5-10 years away for the average consumer.
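
To put some rough numbers on that, here’s a back-of-the-envelope sketch of angular resolution. The ~100° field of view and the 60 pixels-per-degree figure for 20/20 acuity are my own assumptions, not official specs, so treat the exact percentages loosely:

```python
# Back-of-the-envelope angular resolution. FOV and acuity numbers are rough assumptions.

def pixels_per_degree(px_per_eye, fov_degrees):
    """Approximate horizontal pixel density per eye, assuming pixels spread evenly over the FOV."""
    return px_per_eye / fov_degrees

RETINAL_LIMIT = 60  # ~60 px/deg is a common rule of thumb for 20/20 vision

headsets = {
    "DK2 (1920x1080, 960 px per eye)": pixels_per_degree(960, 100),
    "Consumer Rift (2160x1200, 1080 px per eye)": pixels_per_degree(1080, 100),
    "Hypothetical 4K HMD (3840x2160, 1920 px per eye)": pixels_per_degree(1920, 100),
}

for name, ppd in headsets.items():
    print(f"{name}: {ppd:.1f} px/deg ({ppd / RETINAL_LIMIT:.0%} of 20/20 acuity)")
```

Under those assumptions, even a 4K panel only reaches about a third of what the eye can resolve, which is part of why the screen-door effect is so stubborn.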

These never caught on.

This all isn’t to say that current VR efforts are for naught. These early-adopter experiments are definitely useful for figuring out design paradigms and refining the tech. However, it would be foolish to operate under the assumption that VR is poised to take the gaming world by storm. VR is not the new mobile. VR is the new Kinect. And like the Wii and Kinect, VR is not a catch-all interaction mode; most gaming will always favor a static, laid-back experience. You can’t force people to give up lazy couch-potato gaming.

Of course, outside of gaming it may not be a niche interaction mode. In applications where immersion is not the goal and users expect to have to train in the operation of unnatural, unintuitive controls, VR may very well thrive. Medicine, industrial operation, design, and engineering are obvious applications. It might even be useful for educational purposes. But temper your expectations for gaming.

New Coding Paradigms

So I’ve recently been thinking that the whole idea of editing text files filled with code is outmoded. When I’m thinking about code, I certainly don’t think of it as a set of classes and functions laid out in a particular order. I think of it as a cloud of entities with properties and interactions flowing between them. Shouldn’t our experience of writing code reflect this?

We need to start rethinking our code-editing tools. A lot. Here is a simple example:
XML heatmaps

What else could we do? How about the ability to arbitrarily break off chunks of code and view them in parallel, even nesting this behavior to break long blocks of code into a string of chunks:
Nesting chunks

What if we let the flow of the documentation decide how a reader is introduced to the code base, instead of letting the flow of compiler-friendly source files decide it? Chunks of code are embedded within wiki-style documentation, and while you can follow the code back to its source, reading the documentation will eventually introduce you to the whole codebase in a human-friendly fashion.

The same code could even appear in multiple places (obviously updated when the source changes), and you could see all the places in the documentation where a particular chunk of code appears. This could bridge the gap between documentation and code; documentation will never grow stale, as updating code necessitates interaction with it. Similarly, updating documentation is the same process as writing code. When a standard changes or an SLA (service level agreement) is modified, the code changes too.
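
To make that a bit more concrete, here’s a toy sketch of what chunk transclusion might look like. The `# chunk:`/`# endchunk` markers and the `{{chunk:name}}` reference syntax are invented purely for illustration:

```python
import re
from pathlib import Path

# Toy transclusion: pull named chunks out of source files and splice them into
# wiki-style documentation. Nothing here is a real tool; it's just a sketch.

CHUNK_RE = re.compile(r"# chunk: (\w+)\n(.*?)# endchunk", re.DOTALL)
REF_RE = re.compile(r"\{\{chunk:(\w+)\}\}")

def index_chunks(source_dir):
    """Scan source files for named chunks; the docs always reflect the current code."""
    chunks = {}
    for path in Path(source_dir).rglob("*.py"):
        for name, body in CHUNK_RE.findall(path.read_text()):
            chunks[name] = body
    return chunks

def render_doc(doc_text, chunks):
    """Replace every {{chunk:name}} reference with the live code it points at."""
    return REF_RE.sub(lambda match: chunks[match.group(1)], doc_text)
```

Because the documentation stores references rather than copies, regenerating it always pulls in the current code, which is exactly the “never grows stale” property described above.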

But why restrict ourselves to semi-linear, text-based documentation a la wikis? We tend to find UML diagrams extremely helpful for visualizing complex systems in code. What if we could build powerful, adaptable tools to translate between raw code, text-based documentation, and visual diagrams? Strictly binding them together might restrict you in the lowest levels of coding (much like, for example, using a high-level language restricts your ability to control memory allocation), but it opens up the new ability to make changes to a diagram and have most of the code rearrange and resolve itself before you. Then you step in to give a guiding hand, and adjust the text documentation, and voila! Best of all, this is more than a diagram-to-code tool; the diagram is a living thing. In fact, the diagrams, the documentation, and the codebase are synonymous. A change in one is a change in the others.

We’re getting to the point where it is much more useful to be able to dance across a codebase quickly than to tweak and tune the minutiae of your code. Some allowances must be made for processing-intensive applications; perhaps this system wouldn’t even be useful in those cases. But when you find yourself favoring adaptability and iteration speed over efficiency during development, and when you find yourself hampered by the need to jump between files, scroll through large swathes of code, or refer back and forth between code and documentation, maybe it’s time to rethink your coding paradigms.

Trapped between Eye Candy and Motivation

There’s this really big problem when it comes to working on games (or really any sort of project that lies at the intersection of engineering and design). It has nothing to do with programming or design or testing or art or sound or anything else like that.

The problem is staying motivated. This is especially bad when you are working alone, but it can even happen in groups of 2 or 3 people. Beyond that, you can always find motivation in the stuff that other people are doing, because it comes from outside of your personal drive and creativity. But in small groups or solo projects, the game becomes your baby, and then you get tired of your baby.

Sometimes this happens when you work so long on one subset of features that they sort of blur together and become the totality of the project to you. You quickly get tired of this smaller sub-problem (especially tweaking and tweaking and tweaking), then get tired of the game without realizing there is other interesting work to be done.

Or maybe you realize that there is a lot of stuff to do on the project, but you’ve been working on it so long without much visible or marked improvement that you begin to despair. Maybe the project will never flower, you think. Maybe your efforts will never be used to the full extent they were designed for.

Wherever this loss of motivation comes from, there is one piece of advice I heard that really helps me. It boils down to this: if you keep wishing your game was awesome, make it awesome. Add in that feature you keep thinking about, but keep putting off because there is more important framework-laying to do. Or take some time off and mess around with that one technical gimmick (shader, hardware stuff, multi-threading, proc-gen, or what have you). When you feel yourself losing motivation, give yourself permission to go off and get it back. Don’t soldier on, because your project will inevitably end up on the dump heap with all the other projects you abandoned.

The only problem is, everyone (including myself) always says that adding eye candy and little trinkets to your project prematurely is a Bad Idea. If you make your game cool by adding eye candy, the wisdom goes, then your game is no longer cool because of the gameplay (you know, the point of a game). Arguments about how important gameplay really is notwithstanding, if adding a few bits of visual indulgence saves your game from succumbing to ennui, then by all means, add the cool things!

From Light

I haven’t posted in a while, in part because I’ve been busy with a lot of things. Maybe I’ll make posts about some of those other things at one point, but right now I just want to talk about From Light.

Logo for the game.

From Light is a game that I have had the pleasure and honor to help develop. It was originally created as a class project by two other students, but when it showed promise they decided to develop it further. Our team has now grown to 10 people, all (save one) students at USC.

The game is a 2D puzzle platformer based on long-exposure photography (holy hell have I said that line a lot). Basically, you can etch out light trails onto film using the stars in the sky, then jump on those trails to navigate the levels.

I mention that I’ve said the above line a lot because the game got accepted into the PAX 10 for PAX 2015, and I went up to Seattle last weekend with 3 other teammates to show the game off at the four-day gaming convention. This, you may have gathered, is completely and mindbogglingly awesome. I get to work on a game that is recognized and validated by real-world people! And truly, the reception of PAX was way more than I ever would have expected. People frickin’ loved the game!

PAX 10 Logo. Photo of us at the booth.

And at PAX one of the things I heard again and again was that taking a game to completion, to the point where it could be shipped and sold as an actual game (y’know, for money), is an invaluable experience. Not only do you get a sellable game and a fantastic line on your resume, you also get all the experience involved in taking a game from 80% to 100%, and all the non-development business stuff involved in getting your game out to consumers. Needless to say, this convinced me that we should take From Light to completion. Before, I had been hesitant because as students it was unlikely we could put in the time to finish it fully. However, I am now willing to work harder than I have ever worked before to finish this game.

In the meantime, if it strikes your fancy please “like” the game on Facebook, or follow us on Twitter, or just download the game from our website.

Pluto “Fans”

The recent flyby of Pluto by the New Horizons spacecraft has reignited a debate that should have stayed buried forever. I’m not saying the IAU’s 2006 definition of “planet” wasn’t lacking; it’s just that this specific debate should have died and stayed dead.

Plutesters, hehehe.


The problem is that it is entirely unclear why we’re defining “planet” to begin with. Categorization of phenomena is supposed to help us organize them epistemologically. This is why we have a taxonomy of species. Any definition of space objects should be designed to help us classify and study them, not contrived for cultural reasons. We shouldn’t try to exclude KBOs or other minor bodies because we don’t want to have 15 planets, and we shouldn’t try to include Pluto because we feel bad for it. The classifications we come up with should mirror our current understanding of how similar the bodies are. On the other hand, our precise definitions should produce the same results as our imprecise cultural definitions for well-known cases. As evidenced by the outrage caused by the IAU’s “exclusion of Pluto from planethood”, people don’t like changing how they think about things.

Images of Pluto and Charon.


Which brings us to the current debate. Fans of Pluto seem to be hinging their argument on the fact that Pluto is geologically active, and that its diameter is actually larger than that of Eris. Previously it was thought that Eris was both more massive (by 27%) and larger in diameter than Pluto (with the flyby of New Horizons, we now believe Pluto has the larger diameter). This is what moved the IAU to action in the first place; if Pluto is a planet, then so is Eris. There is no world in which we have 9 planets. We either have 8, or 10+.

Then you have Makemake, Haumea, Sedna, and Ceres. How do those fit in? It’s possible we would end up having far more than 15 planets, based on current predictions of KBO size distributions. This illuminates a fundamental problem: what is the use of a classification that includes both Sedna and Jupiter? These two bodies are so different that any category that includes both is operationally useless for science within our solar system. But continuing that logic, the Earth is also extremely dissimilar to Jupiter. The Earth is more similar to Pluto than it is to Jupiter. So having Earth and Jupiter in the same category but excluding Pluto also seems weird.

Unless we consider our definition of similarity. There are two ways to evaluate a body: intrinsic properties (mass, diameter, geological activity, etc.) and extrinsic properties (orbit, nearby bodies, etc.). One would be tempted to define a planet based on its intrinsic properties. After all, at one time Jupiter was still clearing its orbit, and in the future Pluto will eventually clear its orbit. Does it make sense for the same body to drop in and out of planethood? Well… yes. The fact that a human stops being a child at some point doesn’t make the category of “child” any less useful for a huge range of societal and cultural rules.

In fact, “intrinsic properties” is sort of a gray area. Rotation rate doesn’t really count, since tidal locking is common yet caused by extrinsic forces. Geological activity is also not necessarily intrinsic. Io has extreme internal activity caused by tidal heating. One can imagine the same for a planet close to its parent star. Composition can change as atmosphere is blown away by the parent star, and even mass and diameter can change through planetary collisions.

Regardless, defining a planet only on its intrinsic properties means that moons are now technically “planets”. “Moon” becomes a subcategory of “planet”. This is actually a great definition, but it is too radical to be accepted currently, and thus functionally useless.

So we must define a planet at least partially based on extrinsic properties. The rocky inner planets and the gaseous outer planets are similar in that they make up the vast majority of the mass within their orbital region. Earth is 1.7 million times more massive than the rest of the stuff in its orbit. On the other hand, Pluto is 0.07 times the mass of the rest of the Kuiper Belt. Yeah, it makes up less than 10% of the Kuiper Belt. This is a pretty clear separation.

After that revelation, everything falls into place. We have large, orbit-clearing objects, and we have smaller objects that are still in hydrostatic equilibrium but are part of a larger belt of objects.


It turns out this definition is already in place. For all the hubbub about the IAU’s definition, most everybody agrees with splitting bodies via two parameters: one that measures the likelihood of a body ejecting other bodies from its orbit (the Stern–Levison parameter Λ), and one that measures a body’s mass relative to the total mass of other bodies in its orbit (the planetary discriminant µ). The split occurs at a semi-arbitrary Λ = 1 and µ = 100.
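
Plugging in the rough numbers from above, the planetary discriminant alone makes the split obvious:

$$
\mu = \frac{M_{\text{body}}}{M_{\text{rest of its orbital zone}}},
\qquad
\mu_{\text{Earth}} \approx 1.7 \times 10^{6} \gg 100,
\qquad
\mu_{\text{Pluto}} \approx 0.07 \ll 100
$$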

What everybody is really arguing about is whether or not we get to call both types of bodies planets, or just the big ones.

Stern and Levison propose the terms überplanet and unterplanet, but I think major planet and minor planet are more adoptable.

Finally, just plain old “planet” should refer by default to major planets only, but can contextually refer to both classes in some cases.

Problem solved.

Language Gamification: Bullshit vs Bullshit

Gamification may be bullshit, but does that mean it might be just the tool to fight your own, personal brand of bullshit?

Screenshot of Duolingo

Learning foreign languages is hard. Really hard. Part of this has to do with complex neurological reasons, which can only be explained using words like neuroplasticity and monolinguals. Yes, some of the difficulty is hard-wired. But additionally, a part of you just doesn’t like learning foreign languages. It’s complicated and easy to forget, requires a lot of memorization, and you can still sound like an idiot after years of practice. Sometimes the linguistic variations are impossible to pronounce or hear, or the grammatical structures are completely foreign to your mental processes. So you make up bullshit: reasons to skip or skimp on practice, or give up altogether. Learning a foreign language is a constant battle against your lazier self.

Duolingo logo

But Duolingo, a site I’ve recently come to frequent, changes the game, so to speak. It gamifies the process of learning a foreign language, adding daily goals, streaks of meeting your daily goal, unlocking mechanics, currency and purchasing, and total progress towards fluency. Now, it’s not a particularly good way of learning a language. In fact, it’s terrible at teaching. But really, teaching isn’t the point of Duolingo. It’s just a way of defeating your bullshit by replacing it with a more benign type of bullshit.
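
As a programmer, I can’t help but see that loop as a tiny state machine. Here’s a toy sketch of the mechanics as I understand them; the XP numbers, field names, and reward logic are all my own invention, not anything Duolingo actually exposes:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Toy model of the daily-goal / streak / currency loop. All numbers and names
# are invented for illustration; this is not Duolingo's actual model.

@dataclass
class Learner:
    daily_goal_xp: int = 30
    xp_today: int = 0
    streak: int = 0
    lingots: int = 0          # the in-app currency
    last_active: date = field(default_factory=date.today)

    def practice(self, xp: int, today: date) -> None:
        # A missed day resets the streak; that looming reset is the hook.
        if today - self.last_active > timedelta(days=1):
            self.streak = 0
        if today != self.last_active:
            self.xp_today = 0
        self.last_active = today

        met_goal_before = self.xp_today >= self.daily_goal_xp
        self.xp_today += xp
        if not met_goal_before and self.xp_today >= self.daily_goal_xp:
            self.streak += 1
            self.lingots += 1  # a small tangible reward for an intangible gain
```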

Duolingo assigns tangible, meaningless progression to the real, intangible progress of learning a language. Without Duolingo as an external, concrete arbiter that says “Yes, you are getting better”, learning a language can feel hopeless because no matter how much you master it, there are always more words to learn, faster sentences to parse, and structures you don’t understand. Now, the “percent fluency” that Duolingo feeds you doesn’t necessarily correspond to any real gains, but it affirms that the hard mental work you put in today actually paid off in some continuing educational journey. And that affirmation is what makes you come back the next day to learn more.

Going Nowhere on the Information Superhighway

More than 50% of people die within 30 miles of where they were born. Even though America has a well-maintained highway system that spans the continent, most people don’t randomly pack up from their home town and go on a road trip to the opposite side of the country. And so it is with the virtual world. Before the Internet, information was highly segregated geographically. The farther you were from a source of information, the longer it took to reach you, and the more you had to go out of your way to consume it. This was the result of both the technology and the media networks that existed.

The Internet was supposed to revolutionize the way information moved. The so-called information superhighway would advance digital transit in the same way the Interstate Highway System advanced physical transit in the 1950s. But just like the real highway system, the Internet hasn’t caused a mass exodus of ordinary bitizens. In this analogy, the reason is painfully obvious. It takes a huge amount of effort to leave your Internet communities and travel to another place where the dialect or even language is different. And to what gain?

These barriers to information cross-pollination result in an Internet that experiences de facto segregation along cultural boundaries. This division is no less real than the geographic segregation experienced by human populations in the real world. A TED talk by Ethan Zuckerman explores the vast sections of Twitter you may not even be aware exist; huge parts of Twitter are occupied by Brazilians and by African Americans, but if you are a Caucasian American, you’ve probably never interacted with that side of Twitter. Even in the information age, we still consume the media closest to us. Yet this is even more dangerous, because the ease of information transfer lulls us into thinking that we are getting a cosmopolitan viewpoint, when in fact we are stuck in the middle of an echo chamber.

This is why it is so hard for people to branch out and become informed about subjects they don’t believe they are interested in. Be it international politics, scientific advances, or social justice debates, people often sit back and consume their news from whatever source is most familiar and convenient. The result is that I am woefully uninformed about the geopolitical situation in Africa, and the general public is woefully uninformed about anything related to space exploration. Then again, you don’t see me going out and reading up on African conflicts, so I don’t blame anyone for having a spotty knowledge base.
