Escaping UI Idioms

Personally I find that whenever my engineer brain switches on, my designer brain switches off. I have to step away from coding for a while in order to objectively make the best decisions about what to implement and how. When I let my engineer brain do the designing, I end up falling into age-old preconceptions about how things should be. This is especially true when it comes to UI design.

But is it the best idea to blindly follow UI conventions, either new or old? On the one hand, a familiar layout and universal UI idioms make it easier for users to jump straight into your program. However, if those idioms aren’t well suited to your application, users can quickly find themselves confused, frustrated, and lost. A UI that is unfamiliar but designed uniquely around your application may confuse users less, because they bring no expectations that can be unwittingly subverted.

Some bad features:

  • Confirmation emails which require you to click a link before you can do anything with your account. Confirmation emails that require a link to be clicked within 24 hours but don’t impede progress in the meantime are much better.
  • The “re-enter your email” and “re-enter your password” fields on signup forms. Every modern browser autofills these anyway.
  • Separating the “Find” and “Replace” functions, putting them in the “View” and “Edit” menus respectively.
  • Speaking of “View” and “Edit” menus, the standard “File”, “View”, “Edit” menu tabs often don’t suit applications. Choose menu item labels that suit your application.

An example of a good convention is the use of universal symbols for universal functions; using a crazy new “save” icon is not a good subversion of conventional UI idioms. Another is exit confirmation: in a lot of cases, asking whether you want to save before exiting is a great feature.

Here are two features which are not standard for applications with text-editing capability but which should be (I’ve only seen them in a handful of programs, of which Notepad++ is the most prominent):

  • A “Rename” option under the File menu, which saves the file under a new name and removes the file with the old name. This saves the tiresome dance of doing “Save As” and then deleting the old file in the save dialog, or (God forbid) navigating to the file in your OS’s file browser and renaming it there.
  • Special character (\t, \n) and Regex support in “Find and Replace” modes.
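Regex support is worth spelling out, since it subsumes the special-character case. A minimal Python sketch of what a regex-aware Find and Replace buys you (the sample text and patterns here are made up for illustration):

```python
import re

# Regex find-and-replace can express things a literal search cannot,
# such as tabs, runs of newlines, or arbitrary patterns.
text = "name\tvalue\n\n\nname\tvalue"

# Collapse any run of blank lines into a single newline...
collapsed = re.sub(r"\n{2,}", "\n", text)

# ...then turn tab-separated fields into comma-separated ones.
result = re.sub(r"\t", ",", collapsed)

print(result)
```

A plain literal Find and Replace would need one pass per exact string and still couldn’t express “two or more newlines”.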

VR Isn’t Ready

Recently I’ve heard a lot of hubbub about VR, especially with regard to games. This wave of hype has been going on for a while, but it has intensified for me personally because one of my professors this semester is running a VR startup. I’m also working on a VR-compatible game, so VR talk has become more relevant to me.

Array of current VR headsets

First off, I believe VR is still 10 years away from its prime time. The tech just hasn’t advanced to a viable level, and some fundamental user-experience issues have yet to be solved.

For example, my professor gave an example of why VR is such an immersive mode of interaction: the first time people put on the headset and jump into a virtual world, they reach out and try to touch objects. He trumpeted this as evidence of a kinetic experience (i.e. it pushes users to “feel” things beyond what they immediately see). While this is kind of true, I see it far more as evidence of a fundamental shortcoming. The moment a user tries to interact with the world and fails, they are jerked out of the fantasy and immersion is broken. This is true in all games: if users believe they can interact with the world in a certain way but the world doesn’t respond correctly, they are made painfully and immediately aware that they are in a game, a simulation.

Control VR isn’t enough.

This brings me to the first huge issue: the input problem. VR output is relatively advanced, what with Oculus and Gear VR and Morpheus. But we’ve seen little to no development effort targeted at ways for the user to interact with the world. Sure, we have Control VR and similar projects, but I think these haven’t caught on because they are so complicated to set up. Oculus made huge strides by turning the HMD into a relatively streamlined plug-and-play experience with a minimal mess of cables. We have yet to see how Oculus’s custom controllers affect the space, but I have a feeling they aren’t doing enough to bridge the haptic gap. We won’t see VR take off until users are no longer frustrated by the effort of giving input to the game through unintuitive means. As long as users are constantly reminded they are in a simulation, VR is no better than a big TV and a comfy couch.

Speaking of big TVs: the output tech isn’t good enough. The 1080p of the DK2 is nowhere near high enough to be immersive. Trust me: I’ve gotten to try out a DK2 extensively over the past few months at zero personal cost, so my opinion is informed and unbiased. Trying to pick out details in the world is like peering through a blurry screen door. As long as I’m tempted to pop off the headset and peek at the monitor to figure out what I’m looking at, VR isn’t going to take off. Even the 2160×1200 of the consumer Oculus won’t be enough. When we get 3K or 4K resolutions in our HMDs, VR will become a viable alternative to monitor gaming. Of course, that tech is likely 5-10 years away for the average consumer.

These never caught on.

This all isn’t to say that current VR efforts are for naught. These early-adopter experiments are definitely useful for figuring out design paradigms and refining the tech. However, it would be foolish to operate under the assumption that VR is poised to take the gaming world by storm. VR is not the new mobile. VR is the new Kinect. And like the Wii and Kinect, VR is not a catch-all interaction mode; most gaming will always favor a static, laid-back experience. You can’t force people to give up lazy couch-potato gaming.

Of course, outside of gaming VR may be more than a niche interaction mode. In applications where immersion is not the goal and users expect to train in the operation of unnatural, unintuitive controls, VR may very well thrive. Medicine, industrial operation, design, and engineering are obvious applications. It might even be useful for education. But temper your expectations for gaming.

New Coding Paradigms

So I’ve recently been thinking that the whole idea of editing text files filled with code is outmoded. When I’m thinking about code, I certainly don’t think of it as a set of classes and functions laid out in a particular order. I think of it as a cloud of entities with properties and interactions flowing between them. Shouldn’t our experience of writing code reflect this?

We need to start rethinking our code-editing tools. A lot. Here is a simple example:
XML heatmaps

What else could we do? How about the ability to arbitrarily break off chunks of code and view them in parallel, even nesting this behavior to break long blocks of code into a string of chunks:
Nesting chunks

What if we let the flow of the documentation decide how a reader is introduced to the code base, instead of letting the flow of compiler-friendly source files decide it? Chunks of code are embedded within wiki-style documentation, and while you can follow the code back to its source, reading the documentation will eventually introduce you to the whole codebase in a human-friendly fashion.

The same code could even appear in multiple places (obviously updated when the source changes), and you could see all the places in the documentation where a particular chunk of code appears. This could bridge the gap between documentation and code; documentation will never grow stale, as updating code necessitates interaction with it. Similarly, updating documentation is the same process as writing code. When a standard changes or an SLA (service level agreement) is modified, the code changes too.
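As a toy sketch of this single-source-of-truth idea (every name and structure below is invented for illustration, not an existing tool): keep code chunks in one store and have documents reference them by id, so a change to a chunk propagates to every page that embeds it.

```python
# Chunks live in one store; documents reference them by id, so editing
# a chunk updates every page that embeds it. All names are made up.
chunks = {
    "load-config": 'config = json.load(open("settings.json"))',
}

docs = {
    "getting-started": ["Load the config first:", ("chunk", "load-config")],
    "reference":       ["Full config API. Loading:", ("chunk", "load-config")],
}

def render(doc_name):
    parts = []
    for item in docs[doc_name]:
        if isinstance(item, tuple):   # embedded code chunk reference
            parts.append(chunks[item[1]])
        else:                         # plain documentation text
            parts.append(item)
    return "\n".join(parts)

# Update the code once...
chunks["load-config"] = 'config = yaml.safe_load(open("settings.yaml"))'

# ...and every document that embeds it reflects the change.
print(render("getting-started"))
print(render("reference"))
```

A real system would pull chunks from the actual source files rather than a dictionary, but the property is the same: documentation can never show stale code, because it has no copy of its own to go stale.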

But why restrict ourselves to semi-linear, text-based documentation a la wikis? We tend to find UML diagrams extremely helpful for visualizing complex systems in code. What if we could build powerful, adaptable tools to translate between raw code, text-based documentation, and visual diagrams? Strictly binding them together might restrict you in the lowest levels of coding (much like, for example, using a high-level language restricts your ability to control memory allocation), but it opens up the new ability to make changes to a diagram and have most of the code rearrange and resolve itself before you. Then you step in to give a guiding hand, and adjust the text documentation, and voila! Best of all, this is more than a diagram-to-code tool; the diagram is a living thing. In fact, the diagrams, the documentation, and the codebase are synonymous. A change in one is a change in the others.

We’re getting to the point where it is much more useful to dance across a codebase quickly than to tweak and tune the minutiae of your code. Some allowances must be made for processing-intensive applications; perhaps this system wouldn’t even be useful in those cases. But when you find yourself favoring adaptability and iteration speed over efficiency during development, and when you find yourself hampered by the need to jump between files, scroll through large swathes of code, or refer back and forth between code and documentation, maybe it’s time to rethink your coding paradigms.

Trapped between Eye Candy and Motivation

There’s this really big problem when it comes to working on games (or really any sort of project that lies at the intersection of engineering and design). It has nothing to do with programming or design or testing or art or sound or anything else like that.

The problem is staying motivated. This is especially bad when you are working alone, but it can even happen in groups of 2 or 3 people. Beyond that, you can always find motivation in the stuff that other people are doing, because it comes from outside of your personal drive and creativity. But in small groups or solo projects, the game becomes your baby, and then you get tired of your baby.

Sometimes this happens when you work so long on one subset of features that they sort of blur together and become the totality of the project to you. You quickly get tired of this smaller sub-problem (especially tweaking and tweaking and tweaking), then get tired of the game without realizing there is other interesting work to be done.

Or maybe you realize that there is a lot of stuff to do on the project, but you’ve been working on it so long without much visible or marked improvement that you begin to despair. Maybe the project will never flower, you think. Maybe your efforts will never be used to the full extent they were designed for.

Wherever this loss of motivation comes from, there is one piece of advice I’ve heard that really helps me. It boils down to this: if you keep wishing your game was awesome, make it awesome. Add in that feature you keep thinking about but keep putting off because there is more important framework-laying to do. Or take some time off and mess around with that one technical gimmick (shaders, hardware stuff, multi-threading, proc-gen, or what have you). When you feel yourself losing motivation, give yourself permission to go off and get it back. Don’t just soldier on; if you do, your project will inevitably end up on the dump heap with all the other projects you abandoned.

The only problem is, everyone (including myself) always says that adding eye candy and little trinkets to your project prematurely is a Bad Idea. If you make your game cool by adding eye candy, the wisdom goes, then your game is no longer cool because of the gameplay (you know, the point of a game). Arguments about the primacy of gameplay notwithstanding, if adding a few bits of visual indulgence saves your game from succumbing to ennui, then by all means, add the cool things!

From Light

I haven’t posted in a while, in part because I’ve been busy with a lot of things. Maybe I’ll make posts about some of those other things at one point, but right now I just want to talk about From Light.

Logo for the game.

From Light is a game that I have had the pleasure and honor to help develop. It was originally created as a class project by two other students, but when it showed promise they decided to develop it further. Our team has now grown to 10 people, all (save one) students at USC.

The game is a 2D puzzle platformer based on long-exposure photography (holy hell have I said that line a lot). Basically, you can etch out light trails onto film using the stars in the sky, then jump on those trails to navigate the levels.

I mention that I’ve said the above line a lot because the game got accepted into the PAX 10 for PAX 2015, and I went up to Seattle last weekend with 3 other teammates to show the game off at the four-day gaming convention. This, you may have gathered, is completely and mindbogglingly awesome. I get to work on a game that is recognized and validated by real-world people! And truly, the reception of PAX was way more than I ever would have expected. People frickin’ loved the game!

 PAX 10 Logo  Photo of us at the booth.

And at PAX one of the things I heard again and again was that taking a game to completion, to the point where it could be shipped and sold as an actual game (y’know, for money), is an invaluable experience. Not only do you get a sellable game and a fantastic line on your resume, you also get all the experience involved in taking a game from 80% to 100%, and all the non-development business stuff involved in getting your game out to consumers. Needless to say, this convinced me that we should take From Light to completion. Before, I had been hesitant because as students it was unlikely we could put in the time to finish it fully. However, I am now willing to work harder than I have ever worked before to finish this game.

In the meantime, if it strikes your fancy please “like” the game on Facebook, or follow us on Twitter, or just download the game from our website.

Pluto “Fans”

The recent fly-by of Pluto by the New Horizons spacecraft has reignited a debate that should have stayed buried forever. I’m not saying the IAU’s 2006 definition of planet wasn’t lacking, it’s just that this specific debate should have died and stayed dead.

Plutesters, hehehe.


The problem is that it is entirely unclear why we’re defining “planet” to begin with. Categorization of phenomena is supposed to help us organize them epistemologically; this is why we have a taxonomy of species. Any definition of space objects should be designed to help us classify and study them, not contrived for cultural reasons. We shouldn’t try to exclude KBOs (Kuiper Belt objects) or other minor bodies just because we don’t want to have 15 planets, and we shouldn’t try to include Pluto because we feel bad for it. The classifications we come up with should mirror our current understanding of how similar the bodies are. On the other hand, our precise definitions should produce the same results as our imprecise cultural definitions for well-known cases. As evidenced by the outrage over the IAU’s “exclusion of Pluto from planethood”, people don’t like changing how they think about things.

Images of Pluto and Charon.


Which brings us to the current debate. Fans of Pluto seem to be hinging their argument on the fact that Pluto is geologically active, and that its diameter is actually larger than that of Eris. Previously it was thought that Eris was both more massive (by 27%) and larger in diameter than Pluto (with the New Horizons flyby, we now believe Pluto has the larger diameter). This is what moved the IAU to action in the first place: if Pluto is a planet, then so is Eris. There is no world in which we have 9 planets. We either have 8, or 10+.

Then you have Makemake, Haumea, Sedna, and Ceres. How do those fit in? It’s possible we would end up having far more than 15 planets, based on current predictions of KBO size distributions. This illuminates a fundamental problem: what is the use of a classification that includes both Sedna and Jupiter? These two bodies are so different that any category that includes both is operationally useless for science within our solar system. But continuing that logic, the Earth is also extremely dissimilar to Jupiter. The Earth is more similar to Pluto than it is to Jupiter. So having Earth and Jupiter in the same category but excluding Pluto also seems weird.

Unless we reconsider our definition of similarity. There are two ways to evaluate a body: intrinsic properties (mass, diameter, geological activity, etc.) and extrinsic properties (orbit, nearby bodies, etc.). One would be tempted to define a planet based on its intrinsic properties. After all, at one time Jupiter was still clearing its orbit, and in the future Pluto will eventually clear its orbit. Does it make sense for the same body to drop in and out of planethood? Well… yes. The fact that a human stops being a child at some point doesn’t make the category of “child” any less useful for a huge range of societal and cultural rules.

In fact, “intrinsic properties” is sort of a gray area. Rotation rate doesn’t really count, since tidal locking is common yet caused by extrinsic forces. Geological activity is also not necessarily intrinsic. Io has extreme internal activity caused by tidal heating. One can imagine the same for a planet close to its parent star. Composition can change as atmosphere is blown away by the parent star, and even mass and diameter can change through planetary collisions.

Regardless, defining a planet only on its intrinsic properties means that moons are now technically “planets”. “Moon” becomes a subcategory of “planet”. This is actually a great definition, but too radical to get accepted currently, so thus functionally useless.

So we must define a planet at least partially by its extrinsic properties. The rocky inner planets and the gaseous outer planets are similar in that each makes up the vast majority of the mass within its orbital region. Earth is about 1.7 million times more massive than the rest of the material in its orbit. Pluto, on the other hand, is only about 0.07 times the mass of the rest of the Kuiper Belt; it makes up less than 10% of the belt. This is a pretty clear separation.
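The separation can be made concrete with a quick back-of-envelope computation of the planetary discriminant µ (a body’s mass divided by the remaining mass in its orbital zone). The absolute masses below are approximate values I’m assuming; the ratios come from the text.

```python
# Approximate masses (kg); the "rest of zone" masses are derived from
# the ratios quoted above rather than from measured values.
M_EARTH = 5.97e24
M_EARTH_ZONE_REST = M_EARTH / 1.7e6   # Earth is ~1.7 million x the rest

M_PLUTO = 1.30e22
M_KUIPER_REST = M_PLUTO / 0.07        # Pluto is ~0.07x the rest of the belt

def discriminant(m_body, m_rest):
    """Planetary discriminant: body mass / remaining mass in its zone."""
    return m_body / m_rest

mu_earth = discriminant(M_EARTH, M_EARTH_ZONE_REST)   # ~1.7e6
mu_pluto = discriminant(M_PLUTO, M_KUIPER_REST)       # ~0.07

print(f"Earth: {mu_earth:.2g}, Pluto: {mu_pluto:.2g}")
```

Earth and Pluto land many orders of magnitude apart, which is why almost any threshold between them produces the same two classes.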

After that revelation, everything falls into place. We have large, orbit-clearing objects, and we have smaller objects that are still in hydrostatic equilibrium but are part of a larger belt of objects.


It turns out, this definition is already in place. For all the hubbub about the IAU’s definition, most everybody agrees with splitting bodies via two parameters: one measuring the likelihood of a body ejecting other bodies from its orbit (the Stern-Levison parameter Λ), and one measuring a body’s mass relative to the total mass of other bodies in its orbit (the planetary discriminant µ). The split occurs at a semi-arbitrary Λ = 1 and µ = 100.

What everybody is really arguing about is whether or not we get to call both types of bodies planets, or just the big ones.

Stern and Levison propose the terms überplanet and unterplanet, but I think “major planet” and “minor planet” are more adoptable.

Finally, just plain old “planet” should refer by default to major planets only, but can contextually refer to both classes in some cases.

Problem solved.

Going Nowhere on the Information Superhighway

More than 50% of people die within 30 miles of where they were born. Even though America has a well-maintained highway system that spans the continent, most people don’t randomly pack up from their home town and go on a road trip to the opposite side of the country. And so it is with the virtual world. Before the Internet, information was highly segregated geographically. The farther you were from a source of information, the longer it took to reach you, and the more you had to go out of your way to consume it. This was the result of both the technology and the media networks that existed.

The Internet was supposed to revolutionize the way information moved. The so-called information superhighway would advance digital transit the same way the Interstate Highway System did in the 1950s. But just like the real highway system, the Internet hasn’t caused a mass exodus of ordinary bitizens. In this analogy, the reason is painfully obvious: it takes a huge amount of effort to leave your Internet communities and travel to another place where the dialect or even the language is different. And for what gain?

These barriers to information cross-pollination result in an Internet that experiences de facto segregation along cultural boundaries. This division is no less real than the geographic segregation experienced by human populations in the real world. A TED talk by Ethan Zuckerman explores the vast sections of Twitter you may not even be aware exist; huge parts of Twitter are occupied by Brazilians and by African Americans, but if you are a Caucasian American, you’ve probably never interacted with that side of Twitter. Even in the information age, we still consume the media closest to us. Yet this is even more dangerous, because the ease of information transfer lulls us into thinking that we are getting a cosmopolitan viewpoint, when in fact we are stuck in the middle of an echo chamber.

This is why it is so hard for people to branch out and become informed about subjects they don’t believe they are interested in. Be it international politics, scientific advances, or social justice debates, people often sit back and consume their news from whatever source is most familiar and convenient. The result is that I am woefully uninformed about the geopolitical situation in Africa, and the general public is woefully uninformed about anything related to space exploration. Then again, you don’t see me going out and reading up on African conflicts, so I don’t blame anyone for having a spotty knowledge base.

Introduction to Programming

Taking an introductory programming course this semester has been an interesting experience. Since I grasp the course material well, I’ve spent some time helping others with their work. As anyone who has taught math can attest, teaching even basic concepts requires you to understand the material far better than the student needs to. When it comes to programming, helping people is even more difficult because you can’t just tell them how to do it. You need to let them figure it out on their own; otherwise they won’t have learned anything.

But leading someone along without explicitly telling them anything is really, REALLY difficult. Our professor is a master at this, and I respect him deeply because of it. A student will ask a question, and the professor will reply with an oblique statement that doesn’t seem to address the student’s question at all. Yet soon enough the student says “Oh! I get it!” and goes on their merry way. I try as hard as possible to emulate this method when I help those who are struggling, but it is nigh impossible to strike the correct balance. Help them too much, and they don’t learn. Help them too little, and they despair or begin to resent programming. And as much as I don’t like seeing it happen, many of the people in the class have come to resent programming.

This is as sad as a student resenting literature because of a bad English class, or resenting math because of a bad math teacher. Yet I don’t fully understand how to prevent it. If there were a good, standardized methodology for teaching difficult concepts without causing students to resent the field, I feel a lot of the problems in society today could be solved. Maybe that is just wishful thinking, though.

The second interesting observation from taking this class has come from observing a peer. The first language she learned was Python, and learning C++ this semester has caused some distress. There were many lamentations along the lines of “why is the computer so dumb?!” Of course, I found this hilarious because it mirrors a situation in the novel A Fire Upon the Deep. As the protagonists head towards the bottom of the Beyond, much of their advanced computer technology stops working, and they are forced to adopt more primitive methods. Needless to say, the characters who grew up with the advanced technology are indignant that they are forced to use such primitive technologies as a keyboard. Meanwhile, the character who grew up using primitive technology merely smiles.

In my mind, this helps settle the argument of whether new students of the art of programming should start on a high-level language or a low-level one. Until low-level programming is needed only in rare circumstances, students should start at a medium-to-low level. For example, it is easier to step up to Python from Java than it is to step down. I was originally of the mind that new students should start at a high level so as to learn common computing concepts without getting bogged down in obtuse technicalities and syntax, but getting a first-hand view of the results of such an approach has changed my mind.

Truly Sustainable Energy

Nuclear.

The US public is split nearly 50/50 between those who favor nuclear power and those who don’t. Because of this, nuclear is often a dirty word in the political arena. Nobody wants to lose half their constituency over a marginal issue like nuclear power. Before 1979, the political climate was ripe for the rapid expansion of nuclear power. However, the Three Mile Island accident resulted in the cancellation of most new nuclear plant projects. 30 years later, the public was just starting to warm up to the idea of nuclear as part of the so-called “nuclear renaissance.” Then, in a case of incredibly poor timing, the Fukushima disaster struck.

There is a lot of weird cultural weight attached to the word, not least due to an entire generation being psychologically scarred by the perceived overhanging threat of nuclear war. Unfortunately, this snubs one of humanity’s greatest hopes for survival.

Nuclear might not be as cost-effective as geothermal, wind, or hydro power. It also isn’t as clean as solar. However, I would argue that neither cost-effectiveness nor cleanliness displaces nuclear from being the best “clean” energy source available. And not only would widespread adoption of nuclear energy entirely solve the climate crisis, it would save humanity from eventual extinction by hastening our spread through the universe.

As I see it, the only other power source that is as scalable as nuclear is solar. Solar, however, loses out on two counts. First, it is really expensive compared to, like, any other power source. Second, the energy density of solar is really, really low. We would need to cover 496,805 square kilometers of area with solar panels to satisfy the world’s projected energy consumption in 2030. While the price of solar power has really come down, that’s also in part due to subsidized research. On the other hand, nuclear has a much higher power density, and despite years of marginalization, is still competitive with current cutting-edge solar power. It is also extremely reliable, with fluctuations in power output virtually non-existent. This is something other forms of renewable energy lack.
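As a sanity check on that land-area figure, here’s a back-of-envelope version. The projected mean power demand, average insolation, and panel efficiency below are my own rough assumptions, not the original source’s numbers, so expect only order-of-magnitude agreement.

```python
# Assumed inputs (not from the source of the figure quoted above):
AVG_DEMAND_W = 22.7e12        # projected mean world power demand, 2030 (~22.7 TW)
MEAN_INSOLATION_W_M2 = 200.0  # day/night/weather-averaged sunlight at the surface
PANEL_EFFICIENCY = 0.20       # typical commercial panel

# Each square meter of panel then delivers ~40 W on average, so:
area_m2 = AVG_DEMAND_W / (MEAN_INSOLATION_W_M2 * PANEL_EFFICIENCY)
area_km2 = area_m2 / 1e6

print(f"{area_km2:,.0f} km^2")
```

This comes out a bit under 600,000 km², the same order of magnitude as the ~500,000 km² figure above, which is the point: solar needs country-sized land areas no matter how you round the inputs.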

If we started investing in nuclear research, we could dramatically lower the costs of nuclear power and satisfy a huge portion of the world’s energy demands. Cheap electricity would hasten the widespread use of electric cars (okay, this would probably happen anyways). With combustion cars and both natural gas and coal plants replaced, the influx of greenhouse gases into the atmosphere would be greatly reduced. Cheap, portable reactors would allow developing countries to get on their feet in terms of manufacturing capability. Cheap energy would let us implement energy-intensive climate engineering schemes. Advanced nuclear technology would lead to the development of closed-core nuclear rockets, allowing safe, clean, and cheap access to space. Portable reactors would jump-start unmanned planetary exploration, interstellar exploration, human colonization, and asteroid mining.

Of course, none of this will happen. Nuclear is still a dirty word, burdened by the historical and cultural baggage it must drag around. The first step to a better, cleaner future is to get the public to accept nuclear power. As long as we are afraid to approach the energy problem space head-on, we are holding ourselves back from achieving our full potential.

The Community-Driven Game

Imagine you are driving a car, and you have three of your misanthropic friends in the back. Suddenly they lean forwards and ask if they can help steer. You think this might be a bad idea, but before you can react they clamber forwards and put their hands on the wheel. Most people would at this point judge the situation as “not a good idea”.

Replace your annoying friends with the Internet (uh oh), and replace the car with an indie game. Congratulations: you have just created the perfect environment for a terrible game to develop. Actually, oftentimes the situation only gets as far as the Internet playing backseat driver, yelling out confusing and contradictory directions that are both useless and hard to ignore. But for a game like Kerbal Space Program (KSP), the community has leapt into the passenger seat and nearly wrested control from the developer.

The developers of KSP are driving towards a cliff of not-fun. Left to their own devices, they could probably make a good game that stood on its own and appealed to a certain audience. However, because the early prototypes of the game drew such a diverse crowd, the fans want the game to head in several conflicting directions. Few people share a common vision for the game, and a lot of people like to play armchair game designer.

I honestly think some of the more prolific modders in the community have been taking the game in a more suitable direction. Meanwhile, the community quibbles over what should be included in the stock game and what shouldn’t. I want to take one of my biggest peeves as a case study:

One of the most touted arguments against certain large features is that the feature merely adds another level of complexity without adding any “true gameplay”. For example,

  • Life Support would just mean another thing to worry about, and it would reduce the amount of shenanigans you can do (stranding Kerbals on planets for years, etc).
  • Living Room/Sanity mechanics? Nope, it would just be a hassle. You have to bring up bigger habitats any time you want to send a mission to somewhere far away. It doesn’t add any gameplay during the mission.
  • Reentry heating? That just restricts craft designs, making people conform to certain designs and plan around reentry.
  • Different fuel types? Too complex, requires a lot of learning and planning before hand, and only restricts your options during a mission (again, restricting shenanigans).
  • Realistic reaction wheels that don’t provide overwhelming amounts of torque and require angular momentum to be bled off with a reaction system periodically? Could prove to be annoying during a critical part of a mission if you hit max angular momentum. Requires you to put in a reaction system even if you only want to rotate your craft (not translate).

Do you see the problem with these arguments? You are arguing that something shouldn’t be added to the game because it adds gameplay that isn’t in the game right now. See how circular and pointless the argument is? The worst part is that it could be extended to basically any part of the game that exists right now.

  • Electric charge? What if you run out of charge during a critical maneuver, or pass behind the dark side of the planet? It’s A GAME; we shouldn’t have to worry about whether or not the craft is receiving light. Just assume it has large batteries.
  • Different engine types? That would add too much planning, and just limits the performance of the craft. What if I need to take off, but my thrust is too low to get off the ground? That wouldn’t be very fun.
  • Taking different scientific readings? That sounds like it would be pretty tedious. You shouldn’t add something that is just going to be grinding. The game doesn’t have to be realistic, just fun.
  • A tech tree? Why restrict players from using certain parts? What if they want to use those parts? You shouldn’t restrict parts of the game just so the player has to play to unlock them. That doesn’t accomplish anything.

Hell, why even have a game in the first place? It sounds like a lot of thinking and planning and micromanagement and grinding.

Of course, this could be considered reductio ad absurdum, but the problem is that it actually isn’t. The arguments against Life Support or different fuel types or reentry heating just don’t hold any water. Yet people rail against them, so the developers are less likely to put them in the game. Since I started with a metaphor, I’ll end with one:

The developers of KSP are driving towards a cliff because the community told them to. Fortunately, they realized it and are now putting on the brakes. In response, the community is shouting “why are you putting on the brakes? That only slows the car down!” To which I reply, “yes, yes it does.”