What Does It Take To Become A Programmer?

So these are my thoughts on this article (hint, it’s utter tripe): Programming Doesn’t Require Talent or Even Passion.

On the one hand, this article espouses a good sentiment (you don’t have to be gifted to learn programming). On the other, it completely disregards the important idea that being able to do something is not the same as being able to do it well.

I can draw, but anyone who has seen me draw would agree that I’m pretty bad at it. I can draw just well enough to get my concepts across to other people. However, if I intended on becoming an artist for a living, I should probably learn about proportions, shading, composition, perspective, color theory, and be able to work with a range of mediums. Of course, there isn’t some big secret to learning these things. You just practice every day and study good artistic work, analyzing how it was made. Maybe you take some courses, or read some books that formally teach certain techniques. After thousands of invested hours, you will find that your drawing has radically improved, as shown again and again by progress comparison pictures (that one is after 2 years of practice).

The same holds true for programming. Anyone can learn programming. It requires nothing except a little dedication and time. But the article starts out by promising to ‘debunk’ the following quote (I’m not sure if it’s actually a real quote – they don’t attribute it to anybody):

You not only need to have talent, you also need to be passionate to be able to become a good programmer.

The article immediately ignores the fact that the ‘quote’ is talking about good programmers. Just like becoming a good artist requires artistic talent and a passion for learning and improving every day, good programmers are driven by the need to learn and improve their skills. Perhaps an argument can be made for “talent” being something you acquire as a result of practice, and thus you don’t need talent to start becoming good; you become good as you acquire more and more talent. This is a debate for the ages, but I would say that almost invariably a passion for a skill will result in an early baseline proficiency, which is often called “talent”. Innate talent may or may not exist, and it may or may not influence learning ability.

It doesn’t really matter though, because the article then goes on to equate “talent” and “passion” with being a genius. It constructs a strawman who has always known how to program and has never been ignorant about a single thing. This strawman, allegedly, causes severe anxiety to every other programmer, forcing them to study programming to the exclusion of all else. It quotes the creator of Django (after affirming that, yes, programmers also suffer from imposter syndrome):

Programming is just a bunch of skills that can be learned, it doesn’t require that much talent, and it’s not shameful to be a mediocre programmer.

Honestly, though, the fact of the matter is that being a good programmer is incredibly valuable. If your job is to write code, you should be able to do it well. You should write code that doesn’t waste other people’s time, that doesn’t break, that is maintainable and performant. You need to be proud of your craft. Of course, not every writer or musician or carpenter takes pride in their craft. We call these people hacks and they churn out deplorable fiction that only shallow people read, or uninteresting music, or houses that fall down in an earthquake and kill dozens of people.

So, unless you want to be responsible for incredibly costly and embarrassing software failures, you better be interested in becoming a good programmer if you plan on doing it for a career. But nobody starts out as a good programmer. People learn to be good programmers by having a passion for the craft, and by wanting to improve. If I look at older programmers and feel inferior by comparison, I know it’s not because they are a genius while I am only a humble human being. Their skill is a result of decades of self-improvement and experience creating software both good and bad.

I think it’s telling that the article only quotes programmers from web development. Web development is notorious for herds of code monkeys jumping from buzzword to buzzword, churning out code with barely-acceptable performance and immense technical debt. Each developer quote is followed by a paragraph that tears down the strawman that was erected earlier. At this point, the author has you cheering against the supposedly omnipresent and overpowering myth of the genius programmer — which, I might remind you, is much like the myth of the genius painter or genius writer; perhaps accepted by those with a fixed mindset, but dismissed by anybody with knowledge of how the craft functions. This sort of skill smokescreen is probably just a natural product of human behavior. In any case, it isn’t any stronger for programming than for art, writing, dance, or stunt-car driving.

The article really takes a turn for the worse in the second half, however. First, it effectively counters itself by quoting jokes from famous developers that prove the “genius programmer” myth doesn’t exist:

* One man’s crappy software is another man’s full time job. (Jessica Gaston)

* Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

* Software and cathedrals are much the same — first we build them, then we pray. (Sam Redwine)

The author LITERALLY ASKS: “If programmers all really had so much talent and passion, then why are these jokes so popular amongst programmers?”, as if to prove that he was being intellectually dishonest when he said back in the beginning “It’s as if people who write code had already decided that they were going to write code in the future by the time they were kids.”

But the absolute worst transgression the article makes is quoting Rasmus Lerdorf, creator of PHP. PHP is a server-side language. It is also one of the worst affronts to good software design in recent history. The reason it was the de facto server-side language before the recent Javascript explosion is that it can be readily picked up by people who don’t know what they are doing. As you would expect from a language designed by someone who “hates programming” and used by people who don’t know what they are doing, PHP is responsible for thousands of insecure, slow, buggy websites.

PHP’s shortcomings are amusingly enumerated in this famous post: PHP – a fractal of bad design. In the post, the following analogy is used to illustrate how PHP is bad:

I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

And according to Rasmus Lerdorf, the creator of this language:

I’m not a real programmer. I throw together things until it works then I move on. The real programmers will say “Yeah it works but you’re leaking memory everywhere. Perhaps we should fix that.” I’ll just restart Apache every 10 requests.

It’s like the article is admitting that if you don’t take the time to learn good programming principles, you are going to be responsible for horrible systems that cause headaches for their maintainers five years down the line, and that regularly allow hackers to access confidential personal data like patient records and social security numbers for millions of people.

So yes, if you aren’t planning on programming for a career, learning to program is fairly straightforward. It’s as easy as learning carpentry or glass-blowing. It might seem daunting, but invest a half dozen hours and you can have your foot solidly in the door.

But if you plan on building systems other people will rely on, you sure as hell had better pick up some solid programming fundamentals. If you aren’t motivated to improve your skillset and become a better programmer, don’t bother learning at all. Don’t be the reason that the mobile web sucks, and don’t be the reason that 28 American soldiers died. Learn to be a good programmer.

Indiscriminately Valuing Non-Violent Games

Starting with the 1980s arcade games Galaxian and Missile Command, games and combat became nearly synonymous. This was only exacerbated in the 90s by the advent of wildly popular shooters like Doom. The choice to focus a game around antagonism, combat, and violence was not a conscious design decision, but a necessity of the industry and environment. There were abstract games that didn’t contain violence, but in general the highest-profile games were about, in essence, murder.


Doom: you shoot things. Dead simple.



Then a renaissance occurred in academia, and suddenly games were art. Nobody really knew what to do with this fact or what it meant, but it was revolutionary, and regardless of anything else, games were definitely art. To support this, a number of innovative (perhaps iconoclastic) non-violent games — games like Journey and Gone Home — were held up as evidence that games are art. “Games are art, they can convey aesthetics beyond violence.” Good, great. Innovative games that are fun without using violence in their designs are awesome.


Journey is one of the seminal games in the recent wave of “artistically-valuable” indie games.



However, this easily morphed into a reactionary movement. Since these games without violence or combat were touted as being somehow better or “more elevated” than your run-of-the-mill murder simulator, the implication took hold that a violent game was inherently lesser.

Obviously, this sort of indiscriminate valuing of non-violent games is a terrible idea. A game that doesn’t use violence can be poorly designed and not-fun (Dear Esther, Mountain), just like a game that uses violence and combat can provoke deeper aesthetics (Hotline Miami, This War of Mine). Part of the problem is that nobody has developed the proper critical skills to analyze these non-violent, pacifistic games. Champions of “games are art” too frequently praise these games for not using combat, rather than evaluating them holistically and praising good design choices. On the other side, core gamers are immediately turned off by the lack of combat and write such games off as boring.


Refugees have said This War of Mine accurately conveys the constant fear of living in a war-torn region.



One result of this dysfunction is the proliferation of so-called “walking simulators”. These are games whose main play involves walking around consuming written, visual, or aural media, perhaps with light puzzle-solving mechanics (or similar accents). Many enterprising developers, whether they realize it consciously or not, have seized on the fact that making such a game guarantees some measure of success. They will be praised by academics and critics interested in furthering games as a legitimate medium, and have their game purchased by the small-but-steady audience of non-core, non-casual gamers (most of whom probably chafe at being called gamers).

Some walking simulators are great; I actually enjoyed Gone Home, in a way that I probably wouldn’t have if it had been a movie. They do a good job of immersing you in a focused, meaningful experience. Others are scattered or diluted by dissonant design decisions — like Corpse of Discovery. But nobody cares, because these games aren’t being evaluated on their merits as a game. They are either praised for being a game without combat mechanics, or they are ignored because they are a game without combat mechanics. Little else tends to go into the evaluation process.


Gone Home gives the player a meaningful experience despite being limited to looking at rooms and listening to audio.



A student game at USC, Chambara, was changed during development to be “non-violent”. The game originally saw samurai dueling in a starkly colored world; now, instead of blood, hitting an enemy produces a burst of feathers. Apparently this one tweak qualifies it as “a transcendently beautiful and artistic entertainment game with a pacifistic outlook”. That is a direct quote from a faculty member at the school.

You may see why this is troublesome to me. First of all, changing blood to feathers doesn’t change the fact that your game is about sneaking around and hitting other people with sticks before they hit you. That seems a far cry from a “pacifistic outlook”. Second, this change actually hurts the game aesthetically. The blood splatters beautifully complemented the dichromatic nature of the game’s world; I consider the stark look of a blood splatter to be more artistic than a burst of feathers.

Yet the game’s devs decided to make this tweak. Did they do it because it would benefit the game? No. According to the devs, “we were uncomfortable with the violence the game displayed and did not feel like it accurately reflected who we were and what we believed.” In other words, they value a game that contains bloodshed differently than a game that does not. Are they allowed to make this decision based on their personal beliefs? Absolutely. But isn’t it absurd to pretend that this one tweak lends the game a “pacifistic outlook”, and that it in turn allows the game to transcend to the angelic ranks of non-violent video games?


Blood splatters…



…and “feather splatters”.



I would urge critics and academics to judge pacifistic games on their merits as games, not on their merits as non-violent games. I would urge developers to treat the presence of combat and violence as just one possibility in a countless sea of other design choices. If it aids your experience goal, you should include it and tailor it to the needs of your game as an experience. If it doesn’t, don’t include it. But don’t decide to make your game non-violent or exclude combat mechanics just because it means your game will be valued as inherently better by a specific set of people.

Escaping UI Idioms

Personally I find that whenever my engineer brain switches on, my designer brain switches off. I have to step away from coding for a while in order to objectively make the best decisions about what to implement and how. When I let my engineer brain do the designing, I end up falling into age-old preconceptions about how things should be. This is especially true when it comes to UI design.

But is it the best idea to blindly follow UI conventions, either new or old? On the one hand, a familiar UI layout and universal UI idioms will make it easier for users to jump straight into your program. On the other, if those idioms aren’t well suited to your application, the user can quickly find themselves confused, frustrated, and lost. If the UI is unfamiliar but uniquely designed around your application, users will be less confused, because they bring no expectations that can be unwittingly subverted.

Some bad features:

  • Confirmation emails which require you to click a link before you can do anything with your account. Confirmation emails whose link must be clicked within 24 hours but which don’t impede progress in the meantime are much better (see the sketch after this list).
  • The “re-enter your email” fields on signup forms. Every modern browser autofills these fields anyway, so they catch nothing.
  • Separating the “Find” and “Replace” functions, putting them in the “View” and “Edit” menus respectively.
  • Speaking of “View” and “Edit” menus, the standard “File”, “View”, “Edit” menu tabs often don’t suit applications. Choose menu item labels that suit your application.
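On the first point, here is a minimal sketch of the non-blocking variant in Python. The Account model and the 24-hour grace period are my own illustration, not something from any particular framework:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

CONFIRMATION_GRACE = timedelta(hours=24)  # how long an unconfirmed account keeps working

@dataclass
class Account:
    created_at: datetime
    email_confirmed: bool = False

def can_use_account(account: Account) -> bool:
    """Let new users do things immediately; only lock the account if the
    confirmation link still hasn't been clicked once the grace period is up."""
    if account.email_confirmed:
        return True
    return datetime.utcnow() - account.created_at < CONFIRMATION_GRACE
```

The hostile version is the same check without the grace-period branch, which is exactly why it feels so punishing to new users.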

An example of a good feature is the use of universal symbols for universal functions. Using a crazy new “save” icon is not a good subversion of conventional UI idioms. Another is exit confirmation; in a lot of cases, confirming whether you want to save before exiting is a great feature.

Here are two features which are not standard for applications with text-editing capability but which should be (I’ve only seen them in a handful of programs, of which Notepad++ is the most prominent):

  • A “Rename” option under the File menu, which saves the file with a new name and removes the file with the old name. This saves the tiresome task of doing “Save As” and then deleting the file in the save window, or (God forbid) having to navigate to the file in your OS’s file browser and renaming the file there.
  • Special character (\t, \n) and Regex support in “Find and Replace” modes (a minimal sketch of both features follows this list).
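In Python, and purely as my own illustration (no particular editor’s API; the helper names are invented), those two features boil down to something like this:

```python
import re
from pathlib import Path

def rename_file(old_path: str, new_name: str) -> Path:
    """'Rename' as described above: save the contents under the new name,
    then remove the file with the old name."""
    old = Path(old_path)
    new = old.with_name(new_name)
    new.write_text(old.read_text())
    old.unlink()
    return new

def find_and_replace(text: str, find: str, replace: str, use_regex: bool = False) -> str:
    r"""Find & Replace that understands \t and \n escapes, with an optional regex mode."""
    if use_regex:
        # The regex engine already understands \t, \n, and backreferences like \1.
        return re.sub(find, replace, text)
    # Literal mode: interpret \t and \n escapes ourselves, then do a plain replace.
    find_literal = find.encode().decode("unicode_escape")
    replace_literal = replace.encode().decode("unicode_escape")
    return text.replace(find_literal, replace_literal)
```

Notepad++ exposes roughly this through the “Extended” and “Regular expression” search modes in its Find dialog.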

VR Isn’t Ready

Recently I’ve heard a lot of hullabaloo about VR, especially with regards to games. This wave of hype has been going on for a while, but it has personally intensified for me because one of my professors this semester is running a VR startup. I’m also working on a VR-compatible game, so VR talk has become more relevant to me.

Array of current VR headsets

First off, I believe VR is still 10 years away from its prime time. The tech just isn’t at a viable level right now, and some fundamental issues of user experience have yet to be solved.

My professor gave an example of why VR is such an immersive mode of interaction: the first time people put on the headset and jump into a virtual world, they reach out and try to touch objects. He trumpeted this as evidence of a kinetic experience (i.e. it pushed them to “feel” things beyond what they immediately see). While this is kind of true, I see it far more as evidence of a fundamental shortcoming. The moment a user tries to interact with the world and fails, they are jerked out of the fantasy and immersion is broken. This is true in all games; if a user believes they can interact with the world in a certain way but the world doesn’t respond correctly, the user is made painfully and immediately aware that they are in a game, a simulation.

Control VR isn’t enough.

This brings me to the first huge issue: the input problem. VR output is relatively advanced, what with Oculus and Gear VR and Morpheus. But we’ve seen little to no development effort targeted at ways for the user to interact with the world. Sure, we have Control VR and similar projects, but I think these haven’t caught on because they are so complicated to set up. Oculus made huge strides by turning the HMD into a relatively streamlined plug-and-play experience with a minimal mess of cables. We have yet to see how Oculus’s custom controllers affect the space, but I have a feeling they aren’t doing enough to bridge the haptic gap. We won’t see VR take off until users are no longer frustrated by the effort of giving input to the game through these unintuitive means. As long as users are constantly reminded they are in a simulation, VR is no better than a big TV and a comfy couch.

Speaking of big TVs: the output tech isn’t good enough. The 1080p of the DK2 is nowhere near high enough to be immersive. Trust me: I’ve gotten to try out a DK2 extensively in the past few months at zero personal cost. My opinion is informed and unbiased. Trying to pick out details in the world is like peering through a blurry screen door. As long as I’m tempted to pop off the headset and peek at the monitor to figure out what I’m looking at, VR isn’t going to take off. Even the 2160×1200 of the consumer Oculus won’t be enough. When we get 3K or 4K resolutions in our HMDs, VR will be a viable alternative to monitor gaming. Of course, this tech is likely 5-10 years away for our average consumer.
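Some rough arithmetic makes the gap concrete. The field-of-view figures and the monitor comparison below are my own ballpark assumptions, not specs quoted from anywhere in particular:

```python
def pixels_per_degree(pixels_per_eye_horizontal: float, fov_degrees: float) -> float:
    """Angular pixel density: how many pixels cover one degree of your view."""
    return pixels_per_eye_horizontal / fov_degrees

# Assuming roughly a 100-degree horizontal FOV and half the panel per eye:
print(pixels_per_degree(1920 / 2, 100))   # DK2 (1080p panel): ~9.6 px/deg
print(pixels_per_degree(2160 / 2, 100))   # consumer Oculus (2160x1200): ~10.8 px/deg
print(pixels_per_degree(3840 / 2, 100))   # a hypothetical 4K-wide panel: ~19.2 px/deg

# A 1080p monitor filling ~45 degrees of your vision sits above 40 px/deg,
# which is why the HMD feels like a screen door by comparison.
print(pixels_per_degree(1920, 45))        # ~42.7 px/deg
```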

These never caught on.

This all isn’t to say that current VR efforts are for naught. These early-adopter experiments are definitely useful for figuring out design paradigms and refining the tech. However, it would be foolish to operate under the assumption that VR is poised to take the gaming world by storm. VR is not the new mobile. VR is the new Kinect. And like the Wii and Kinect, VR is not a catch-all interaction mode; most gaming will always favor a static, laid-back experience. You can’t force people to give up lazy couch-potato gaming.

Of course, outside of gaming it may not be a niche interaction mode. In applications where immersion is not the goal and users expect to have to train in the operation of unnatural, unintuitive controls, VR may very well thrive. Medicine, industrial operation, design, and engineering are obvious applications. It might even be useful for educational purposes. But temper your expectations for gaming.

New Coding Paradigms

So I’ve recently been thinking that the whole idea of editing text files filled with code is outmoded. When I’m thinking about code, I certainly don’t think of it as a set of classes and functions laid out in a particular order. I think of it as a cloud of entities with properties and interactions flowing between them. Shouldn’t our experience of writing code reflect this?

We need to start rethinking our code-editing tools. A lot. Here is a simple example:
XML heatmaps

What else could we do? How about the ability to arbitrarily break off chunks of code and view them in parallel, even nesting this behavior to break long blocks of code into a string of chunks:
Nesting chunks

What if we let the flow of the documentation decide how a reader is introduced to the code base, instead of letting the flow of compiler-friendly source files decide it? Chunks of code are embedded within wiki-style documentation, and while you can follow the code back to its source, reading the documentation will eventually introduce you to the whole codebase in a human-friendly fashion.

The same code could even appear in multiple places (obviously updated when the source changes), and you could see all the places in the documentation where a particular chunk of code appears. This could bridge the gap between documentation and code; documentation will never grow stale, as updating code necessitates interaction with it. Similarly, updating documentation is the same process as writing code. When a standard changes or an SLA (service level agreement) is modified, the code changes too.
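As a toy sketch of what that could look like (my own illustration, not an existing tool; the {{code:file#chunk}} reference syntax and the chunk markers are invented for this example), a renderer could splice live source chunks into wiki-style pages, so the documentation always shows the code as it currently exists:

```python
import re
from pathlib import Path

# A documentation page references code with markers like: {{code:src/physics.py#integrate}}
CHUNK_REF = re.compile(r"\{\{code:(?P<file>[^#}]+)#(?P<chunk>\w+)\}\}")

def extract_chunk(source: Path, name: str) -> str:
    """Return the lines between '# <<name>>' and '# <<end>>' markers in a source file."""
    lines = source.read_text().splitlines()
    start = lines.index(f"# <<{name}>>") + 1
    end = lines.index("# <<end>>", start)
    return "\n".join(lines[start:end])

def render_doc(doc: Path) -> str:
    """Replace every chunk reference with the current contents of that chunk."""
    def splice(match: re.Match) -> str:
        return extract_chunk(Path(match["file"]), match["chunk"])
    return CHUNK_REF.sub(splice, doc.read_text())
```

Run the extraction in reverse (write edited chunks back to their source files) and editing the documentation really does become the same act as editing the code.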

But why restrict ourselves to semi-linear, text-based documentation a la wikis? We tend to find UML diagrams extremely helpful for visualizing complex systems in code. What if we could build powerful, adaptable tools to translate between raw code, text-based documentation, and visual diagrams? Strictly binding them together might restrict you in the lowest levels of coding (much like, for example, using a high-level language restricts your ability to control memory allocation), but it opens up the new ability to make changes to a diagram and have most of the code rearrange and resolve itself before you. Then you step in to give a guiding hand, and adjust the text documentation, and voila! Best of all, this is more than a diagram-to-code tool; the diagram is a living thing. In fact, the diagrams, the documentation, and the codebase are synonymous. A change in one is a change in the others.

We’re getting to the point where it is much more useful to be able to dance across a codebase quickly than to be able to tweak and tune the minutiae of your code. Some allowances must be made for processing-intensive applications. Perhaps this system wouldn’t even be useful in those cases. But when you find yourself favoring adaptability and iteration speed over efficiency during development, and when you find yourself being hampered by the need to go between files, scroll through large swathes of code, or refer back and forth between code and documentation, maybe it’s time to rethink your coding paradigms.

Trapped between Eye Candy and Motivation

There’s this really big problem when it comes to working on games (or really any sort of project that lies at the intersection of engineering and design). It has nothing to do with programming or design or testing or art or sound or anything else like that.

The problem is staying motivated. This is especially bad when you are working alone, but it can even happen in groups of 2 or 3 people. Beyond that, you can always find motivation in the stuff that other people are doing, because it comes from outside of your personal drive and creativity. But in small groups or solo projects, the game becomes your baby, and then you get tired of your baby.

Sometimes this happens when you work so long on one subset of features that they sort of blur together and become the totality of the project to you. You quickly get tired of this smaller sub-problem (especially tweaking and tweaking and tweaking), then get tired of the game without realizing there is other interesting work to be done.

Or maybe you realize that there is a lot of stuff to do on the project, but you’ve been working on it so long without much visible or marked improvement that you begin to despair. Maybe the project will never flower, you think. Maybe your efforts will never be used to the full extent they were designed for.

Wherever this loss of motivation comes from, there is one piece of advice I heard that really helps me. It boils down to this: if you keep wishing your game was awesome, make it awesome. Add in that feature you keep thinking about, but keep putting off because there is more important framework-laying to do. Or take some time off and mess around with that one technical gimmick (shader, hardware stuff, multi-threading, proc-gen, or what have you). When you feel yourself losing motivation, give yourself permission to go off and get it back. Don’t soldier on, because your project will inevitably end up on the dump heap with all the other projects you abandoned.

The only problem is, everyone always says that adding eye-candy and little trinkets to your project prematurely is a Bad Idea. If you make your game cool by adding eye-candy, the wisdom goes, then your game is no longer cool because of the gameplay (you know, the point of a game). Arguments about whether gameplay is important notwithstanding, if adding a few bits of visual indulgence saves your game from being abandoned out of ennui, then by all means, add the cool things!

From Light

I haven’t posted in a while, in part because I’ve been busy with a lot of things. Maybe I’ll make posts about some of those other things at one point, but right now I just want to talk about From Light.

Logo for the game.

From Light is a game that I have had the pleasure and honor to help develop. It was originally created as a class project by two other students, but when it showed promise they decided to develop it further. Our team has now grown to 10 people, all (save one) students at USC.

The game is a 2D puzzle platformer based on long-exposure photography (holy hell have I said that line a lot). Basically, you can etch out light trails onto film using the stars in the sky, then jump on those trails to navigate the levels.

I mention that I’ve said the above line a lot because the game got accepted into the PAX 10 for PAX 2015, and I went up to Seattle last weekend with 3 other teammates to show the game off at the four-day gaming convention. This, you may have gathered, is completely and mindbogglingly awesome. I get to work on a game that is recognized and validated by real-world people! And truly, the reception of PAX was way more than I ever would have expected. People frickin’ loved the game!

 PAX 10 Logo  Photo of us at the booth.

And at PAX one of the things I heard again and again was that taking a game to completion, to the point where it could be shipped and sold as an actual game (y’know, for money), is an invaluable experience. Not only do you get a sellable game and a fantastic line on your resume, you also get all the experience involved in taking a game from 80% to 100%, and all the non-development business stuff involved in getting your game out to consumers. Needless to say, this convinced me that we should take From Light to completion. Before, I had been hesitant because as students it was unlikely we could put in the time to finish it fully. However, I am now willing to work harder than I have ever worked before to finish this game.

In the meantime, if it strikes your fancy please “like” the game on Facebook, or follow us on Twitter, or just download the game from our website.

Pluto “Fans”

The recent flyby of Pluto by the New Horizons spacecraft has reignited a debate that should have stayed buried forever. I’m not saying the IAU’s 2006 definition of planet wasn’t lacking; it’s just that this specific debate should have died and stayed dead.

Plutesters, hehehe.


The problem is that it is entirely unclear why we’re defining “planet” to begin with. Categorization of phenomena is supposed to help us organize them epistemologically. This is why we have a taxonomy of species. Any definition of space objects should be designed to help us classify and study them, not contrived for cultural reasons. We shouldn’t try to exclude KBOs or other minor bodies because we don’t want to have 15 planets, and we shouldn’t try to include Pluto because we feel bad for it. The classifications we come up with should mirror our current understanding of how similar the bodies are. On the other hand, our precise definitions should produce the same results as our imprecise cultural definitions for well-known cases. As evidenced by the outrage caused by the IAU’s “exclusion of Pluto from planethood”, people don’t like changing how they think about things.

Images of Pluto and Charon.


Which brings us to the current debate. Fans of Pluto seem to be hinging their argument on the fact that Pluto is geologically active, and that its diameter is actually larger than that of Eris. Previously it was thought that Eris was both more massive (by 27%) and larger in diameter than Pluto (with the flyby of New Horizons, we now believe Pluto has the larger diameter). This is what moved the IAU to action in the first place; if Pluto is a planet, then so is Eris. There is no world in which we have 9 planets. We either have 8, or 10+.

Then you have Makemake, Haumea, Sedna, and Ceres. How do those fit in? It’s possible we would end up having far more than 15 planets, based on current predictions of KBO size distributions. This illuminates a fundamental problem: what is the use of a classification that includes both Sedna and Jupiter? These two bodies are so different that any category that includes both is operationally useless for science within our solar system. But continuing that logic, the Earth is also extremely dissimilar to Jupiter. The Earth is more similar to Pluto than it is to Jupiter. So having Earth and Jupiter in the same category but excluding Pluto also seems weird.

Unless we consider our definition of similarity. There are two ways to evaluate a body: intrinsic properties (mass, diameter, geological activity, etc.) and extrinsic properties (orbit, nearby bodies, etc.). One would be tempted to define a planet based on its intrinsic properties. After all, at one time Jupiter was still clearing its orbit, and in the future Pluto will eventually clear its orbit. Does it make sense for the same body to drop in and out of planethood? Well… yes. The fact that a human stops being a child at some point doesn’t make the category of “child” any less useful for a huge range of societal and cultural rules.

In fact, “intrinsic properties” is sort of a gray area. Rotation rate doesn’t really count, since tidal locking is common yet caused by extrinsic forces. Geological activity is also not necessarily intrinsic. Io has extreme internal activity caused by tidal heating. One can imagine the same for a planet close to its parent star. Composition can change as atmosphere is blown away by the parent star, and even mass and diameter can change through planetary collisions.

Regardless, defining a planet only by its intrinsic properties means that moons are now technically “planets”. “Moon” becomes a subcategory of “planet”. This is actually a great definition, but it is too radical to be accepted currently, and thus functionally useless.

So we must define a planet at least partially based on extrinsic properties. The rocky inner planets and the gaseous outer planets are similar in that they make up the VAST majority of the mass within their orbital region. Earth is 1.7 million times more massive than the rest of the stuff in its orbit. On the other hand, Pluto is 0.07 times the mass of the rest of the Kuiper Belt. Yeah, it makes up less than 10% of the Kuiper Belt. This is a pretty clear separation.

After that revelation, everything falls into place. We have large, orbit-clearing objects, and we have smaller objects that are still in hydrostatic equilibrium but are part of a larger belt of objects.


It turns out, this definition is already in place. For all the hubbub about the IAU’s definition, most everybody agrees with splitting bodies via two parameters: one that measures the likelihood of a body ejecting other bodies from its orbit (the Stern-Levison parameter Λ), and one that measures a body’s mass relative to the total mass of the other bodies in its orbital zone (the planetary discriminant µ). The split occurs at a semi-arbitrary Λ=1 and µ=100.
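Using the numbers quoted above, the split isn’t even close. Here is the µ test as a short sketch (my own illustration of the cutoff, not anything the IAU publishes):

```python
def classify_by_mu(mu: float, cutoff: float = 100.0) -> str:
    """mu = a body's mass divided by the mass of everything else in its orbital zone."""
    return "major planet (dominates its orbit)" if mu > cutoff else "minor planet (member of a belt)"

print(classify_by_mu(1.7e6))  # Earth: ~1.7 million times the rest of its orbital zone
print(classify_by_mu(0.07))   # Pluto: ~7% of the mass of the rest of the Kuiper Belt
```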

What everybody is really arguing about is whether we get to call both types of bodies planets, or just the big ones.

Stern and Levison propose the terms überplanet and unterplanet, but I think “major planet” and “minor planet” are more adoptable.

Finally, just plain old “planet” should refer by default to major planets only, but can contextually refer to both classes in some cases.

Problem solved.

Language Gamification

Gamification may be bullshit, but might it be just the tool to fight your own, personal brand of bullshit?

Screenshot of Duolingo

Learning foreign languages is hard. Really hard. Part of this has to do with complex neurological reasons, which can only be explained using words like neuroplasticity and monolinguals. Yes, some of the difficulty is hard-wired. But additionally, a part of you just doesn’t like learning foreign languages. It’s complicated and easy to forget, requires a lot of memorization, and you can still sound like an idiot after years of practice. Sometimes the linguistic variations are impossible to pronounce or hear, or the grammatical structures are completely foreign to your mental processes. So you make up bullshit: reasons to skip or skimp on practice, or give up altogether. Learning a foreign language is a constant battle against your lazier self.

Duolingo logo

But Duolingo changes the game, so to speak. It gamifies the process of learning a foreign language, adding daily goals, streaks of meeting your daily goal, unlocking mechanics, currency and purchasing, and total progress towards fluency. Now, it’s not a particularly good way of learning a language. In fact, it’s terrible at teaching. But really, teaching isn’t the point of Duolingo. It’s just a way of defeating your bullshit by replacing it with a more benign type of bullshit.

Duolingo assigns tangible, meaningless progression to the real, intangible progress of learning a language. Without Duolingo as an external, concrete arbiter that says “Yes, you are getting better”, learning a language can feel hopeless, because no matter how much you master it, there are always more words to learn, faster sentences to parse, and structures you don’t understand. Now, the “percent fluency” that Duolingo feeds you doesn’t necessarily correspond to any real gains, but it affirms that the hard mental work you put in today actually paid off in some continuing educational journey. And that affirmation is what makes you come back the next day to learn more.
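If you squint, the whole mechanism fits in a few lines. This is just my caricature of the loop (the names and numbers are made up, not Duolingo’s): every time you practice, visible numbers go up, and the only thing that costs you anything is not showing up:

```python
from dataclasses import dataclass

@dataclass
class Learner:
    streak_days: int = 0
    xp: int = 0
    percent_fluency: float = 0.0  # a progress bar, not a real measure of fluency

    def complete_daily_goal(self, xp_earned: int) -> None:
        """Practicing today always moves every number forward."""
        self.streak_days += 1
        self.xp += xp_earned
        self.percent_fluency = min(100.0, self.percent_fluency + 0.5)

    def miss_a_day(self) -> None:
        """Skipping is the only way to lose anything."""
        self.streak_days = 0
```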

Going Nowhere on the Information Superhighway

More than 50% of people die within 30 miles of where they were born. Even though America has a well-maintained highway system that spans the continent, most people don’t randomly pack up from their home town and go on a road trip to the opposite side of the country. And so it is with the virtual world. Before the Internet, information was highly segregated geographically. The farther you were from a source of information, the longer it took to reach you, and the more you had to go out of your way to consume it. This was the result of both the technology and the media networks that existed.

The Internet was supposed to revolutionize the way information moved. The so-called information superhighway would advance digital transit in the same way the Interstate Highway System did in the 1950s. But just like the real highway system, the Internet hasn’t caused a mass exodus of ordinary bitizens. In this analogy, the reason is painfully obvious. It takes a huge amount of effort to leave your Internet communities and travel to another place where the dialect or even language is different. And for what gain?

These barriers to information cross-pollination result in an Internet that experiences de facto segregation along cultural boundaries. This division is no less real than the geographic segregation experienced by human populations in the real world. A TED talk by Ethan Zuckerman explores the vast sections of Twitter you may not even be aware existed; huge parts of Twitter are occupied by Brazilians and by African Americans, but if you are a Caucasian American, you’ve probably never interacted with that side of Twitter. Even in the information age, we still consume the media closest to us. Yet this is even more dangerous, because the ease of information transfer lulls us into thinking that we are getting a cosmopolitan viewpoint, when in fact we are stuck in the middle of an echo chamber.

This is why it is so hard for people to branch out and become informed about subjects they don’t believe they are interested in. Be it international politics, scientific advances, or social justice debates, people often sit back and consume their news from whatever source is most familiar and convenient. The result is that I am woefully uninformed about the geopolitical situation in Africa, and the general public is woefully uninformed about anything related to space exploration. Then again, you don’t see me going out and reading up on African conflicts, so I don’t blame anyone for having a spotty knowledge base.