What Does It Take To Become A Programmer?

So these are my thoughts on this article (hint, it’s utter tripe): Programming Doesn’t Require Talent or Even Passion.

On the one hand, this article espouses a good sentiment (you don’t have to be gifted to learn programming). On the other, it completely disregards the important idea that being able to do something is not the same as being able to do it well.

I can draw, but anyone who has seen me draw would agree that I’m pretty shit at it. I can draw just well enough to get my concepts across to other people. However, if I intended to become an artist for a living, I should probably learn about proportions, shading, composition, perspective, color theory, and be able to work with a range of mediums. Of course, there isn’t some big secret to learning these things. You just practice every day and study good artistic work, analyzing how it was made. Maybe you take some courses, or read some books that formally teach certain techniques. After thousands of invested hours, you will find that your drawing has radically improved, as shown again and again by progress comparison pictures (that one is after 2 years of practice).

The same holds true for programming. Anyone can learn programming. It requires nothing except a little dedication and time. But the article starts out by promising to ‘debunk’ the following quote (I’m not sure if it’s actually a real quote – they don’t attribute it to anybody):

You not only need to have talent, you also need to be passionate to be able to become a good programmer.

The article immediately ignores the fact that the ‘quote’ is talking about good programmers. Just like becoming a good artist requires artistic talent and a passion for learning and improving every day, good programmers are driven by the need to learn and improve their skills. Perhaps an argument can be made for “talent” being something you acquire as a result of practice, and thus you don’t need talent to start becoming good; you become good as you acquire more and more talent. This is a debate for the ages, but I would say that almost invariably a passion for a skill will result in an early baseline proficiency, which is often called “talent”. Innate talent may or may not exist, and it may or may not influence learning ability.

It doesn’t really matter though, because the article then goes on to equate “talent” and “passion” with being a genius. It constructs a strawman who has always known how to program and has never been ignorant about a single thing. This strawman, allegedly, causes severe anxiety to every other programmer, forcing them to study programming at the exclusion of all else. It quotes the creator of Django (after affirming that, yes, programmers also suffer from imposter syndrome):

Programming is just a bunch of skills that can be learned, it doesn’t require that much talent, and it’s not shameful to be a mediocre programmer.

Honestly, though, the fact of the matter is that being a good programmer is incredibly valuable. If your job is to write code, you should be able to do it well. You should write code that doesn’t waste other people’s time, that doesn’t break, that is maintainable and performant. You need to be proud of your craft. Of course, not every writer or musician or carpenter takes pride in their craft. We call these people hacks and they churn out shitty fiction that only shallow people read, or uninteresting music, or houses that fall down in an earthquake and kill dozens of people.

So, unless you want to be responsible for incredibly costly and embarrassing software failures, you better be interested in becoming a good programmer if you plan on doing it for a career. But nobody starts out as a good programmer. People learn to be good programmers by having a passion for the craft, and by wanting to improve. If I look at older programmers and feel inferior by comparison, I know it’s not because they are a genius while I am only a humble human being. Their skill is a result of decades of self-improvement and experience creating software both good and bad.

I think it’s telling that the article only quotes programmers from web development. Web development is notorious for herds of code monkeys jumping from buzzword to buzzword, churning out code with barely-acceptable performance and immense technical debt. Each developer quote is followed by a paragraph that tears down the strawman that was erected earlier. At this point, the author has you cheering against the supposedly omnipresent and overpowering myth of the genius programmer — which, I might remind you, is much like the myth of the genius painter or genius writer; perhaps accepted by those with a fixed mindset, but dismissed by anybody with knowledge of how the craft functions. This sort of skill smokescreen is probably just a natural product of human behavior. In any case, it isn’t any stronger for programming than for art, writing, dance, or stunt-car driving.

The article really takes a turn for the worse in the second half, however. First, it effectively counters itself by quoting jokes from famous developers that prove the “genius programmer” myth doesn’t exist:

* One man’s crappy software is another man’s full time job. (Jessica Gaston)

* Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

* Software and cathedrals are much the same — first we build them, then we pray. (Sam Redwine)

The author LITERALLY ASKS: “If programmers all really had so much talent and passion, then why are these jokes so popular amongst programmers?”, which only proves he was full of shit when he claimed back at the beginning that “It’s as if people who write code had already decided that they were going to write code in the future by the time they were kids.”

But the absolute worst transgression the article makes is quoting Rasmus Lerdorf, creator of PHP. For those of you not “in the know”, PHP is a server-side language. It is also one of the worst affronts to good software design in recent history. The reason it was the de facto server-side language before the recent Javascript explosion is that it can be readily picked up by people who don’t know what they are doing. Like you would expect from a language designed by someone who “hates programming” and used by people who don’t know what they are doing, PHP is responsible for thousands of insecure, slow, buggy websites.

PHP’s shortcomings are amusingly enumerated in this famous post: PHP – a fractal of bad design. In the post, the following analogy is used to illustrate what is wrong with PHP:

I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

That’s what’s wrong with PHP.

And according to Rasmus Lerdorf, the creator of this language:

I’m not a real programmer. I throw together things until it works then I move on. The real programmers will say “Yeah it works but you’re leaking memory everywhere. Perhaps we should fix that.” I’ll just restart Apache every 10 requests.

It’s like the article is admitting that if you don’t take the time to learn good programming principles, you are going to be responsible for horrible systems that cause headaches five years down the line for the people maintaining them and that regularly allow hackers to access confidential personal information like patient information and social security numbers for millions of people.

So yes, if you aren’t planning on programming for a career, learning to program is fairly straightforward. It’s as easy as learning carpentry or glass-blowing. It might seem daunting, but invest a half dozen hours and you can have your foot solidly in the door.

But if you plan on building systems other people will rely on, you sure as hell had better pick up some solid programming fundamentals. If you aren’t motivated to improve your skillset and become a better programmer, don’t bother learning at all. Don’t be the reason that the mobile web sucks, and don’t be the reason that 28 American soldiers died. Learn to be a good programmer.

Computer Mysticism

Last weekend I installed two different versions of Windows on two computers. One was a brand-new PC I built myself, and one was an HP that needed a reinstall. One needed a VPN connection to the MIT network to validate. The other one needed to have its proprietary drivers backed up and restored.

There’s a certain magic to computers when you start getting into the low-level stuff. I don’t mean programming-wise. Reinstalling Windows is more of a mystical art than a straightforward process.

Ancient forum tomes are filled with archaic tutorials. Software is a moving target, and complex formulas and hacks are prone to break down over time.

But even worse is the amount of superstition that gets poured into computer maintenance. Each user has rituals that they are convinced ward off errors. Actually, we see this in all sorts of technology usage; people have rites designed to improve buffering speed, battery life, and disk readability. I know a group of people who have a running joke that involves standing on one foot when doing any complex computer maintenance to make it work.

The reclusive Linux alchemists mix their own potions (disdaining the draughts pushed by the shops in town), but use indecipherable notation in their recipes. Elixirs are delicate brews, and the average person doesn’t have the same instincts that let alchemists be productive.

Yet after going through the ordeal of reinstalling Windows or constructing a computer from scratch (and having it work!), you have a lingering feeling of power. The minor incongruities and annoyances that plague modern software usage no longer make you feel helpless. You are an empowered user, able to conquer any confounding roadblock. You may not be a mage, but you aren’t completely powerless under the whims of the wizards in the grand Corporate Tower.

Pyglet

A couple of months ago I discovered Pyglet, which is basically an OpenGL interface for Python. I experimented with it, figuring out different features. I had never really worked with OpenGL before, and the little bit of experience I had was limited to translating GLUT spheres (for an N-body simulation for my Parallel Computing class).

[Screenshot of my program (image link broken)]

My exploration into Pyglet was a two-pronged attack. On the one hand, I had to learn how to use OpenGL, and on the other I had to learn Pyglet’s idiosyncratic diversions from standard C OpenGL. Fortunately, there is a wealth of tutorials and explanations on the Internet regarding OpenGL, and Pyglet is fairly well documented.
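
To give a sense of why it hooked me, here is roughly the kind of minimal program Pyglet lets you write. This is a sketch from memory rather than code from my actual project, and it assumes a reasonably recent Pyglet install, so treat the specifics as illustrative:

```python
import pyglet
from pyglet.gl import glClearColor

# Pyglet creates the window, the OpenGL context, and the event loop for you.
window = pyglet.window.Window(width=800, height=600, caption="pyglet experiment")
label = pyglet.text.Label("Hello, OpenGL",
                          x=window.width // 2, y=window.height // 2,
                          anchor_x="center", anchor_y="center")

glClearColor(0.1, 0.1, 0.15, 1.0)  # dark background, set via the raw GL binding

@window.event
def on_draw():
    window.clear()  # clear the color buffer
    label.draw()    # draw the text on top

pyglet.app.run()
```

The nice part is that the raw OpenGL calls (like glClearColor above) live right alongside Pyglet’s higher-level conveniences, which is exactly what makes the C OpenGL tutorials on the Internet still useful.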

However, there is a caveat to exploring a new tool: debugging becomes much harder. When your code stops doing what it is supposed to, you have no idea whether the bug stems from a careless mistake in your code, a fundamental error in your method, or a misuse of the new API (your method may work in theory while you are calling the library incorrectly). Since there is no way to test all three of these possibilities at once, you have to assume two of them are correct and look for an error in the third. If you pick wrong, you can spend days looking for a bug in the wrong place.

For instance, I was writing code to find the ray originating from the camera and passing through the point on the screen where the user had clicked. The issue here is that although you know the location of the click relative to the window, you don’t know where it is in world space. Well, I went through 3 or 4 methods for finding the vectors in world space that corresponded to the vertical and horizontal directions on the screen. After that the problem becomes trivial.
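
For the curious, the approach I eventually settled on looks roughly like this. This is a reconstruction with invented names (click_to_ray, fov_y_deg, and so on), not my original code:

```python
import math

def click_to_ray(camera_pos, forward, up, fov_y_deg, width, height, mouse_x, mouse_y):
    """Return (origin, direction) of a world-space ray through a clicked pixel.

    camera_pos, forward and up are 3-tuples; forward and up are assumed
    to be unit length and perpendicular to each other.
    """
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    # Horizontal screen direction in world space.
    right = cross(forward, up)

    # Map the pixel to [-1, 1] coordinates (pyglet's window origin is bottom-left).
    ndc_x = 2.0 * mouse_x / width - 1.0
    ndc_y = 2.0 * mouse_y / height - 1.0

    # Half-extents of the view plane at distance 1 in front of the camera.
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * (width / height)

    direction = tuple(f + ndc_x * half_w * r + ndc_y * half_h * u
                      for f, r, u in zip(forward, right, up))

    # Normalize the direction only; a *position* (like the click point in
    # world space) must never be normalized, which is exactly the careless
    # mistake described below.
    return camera_pos, normalize(direction)
```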

[A graphic description of my problem]

In desperation I started printing out random things in the body of my code. I realized that two variables which should have been the same were very different. It turned out that when I found the location of the user’s click in world space (which was stored as a vector), I was accidentally normalizing it.

I had assumed my code had no careless errors, and had instead blamed the bug on my method. In reality, I doubt there was ever a problem with my method. Instead, the problem had always lain in the three lines of code that I considered the “trivial” part of the problem. While that part was indeed trivial, its triviality also protected it from proofreading. After wasting a good 4 or 5 days on the problem, I have learned my lesson: there was absolutely nothing I could have done about it.

Programming Paradigms

Computer science is a relatively young field, and it has evolved rapidly ever since its inception. This becomes evident when you compare the computer science being taught with the computer science being used, and it is embodied in the very name, which is a misnomer: CS is more technical art than science.

For a long time, computers had severely limited computational resources and memory. Today, the average consumer-grade computer is comparable to a supercomputer from 1985. Thus, the twenty-first century requires programming paradigms far different from those taught in the twentieth century. It no longer pays off to optimize the number of calculations or the amount of memory your program uses, unless you are specifically performing mathematically intensive operations. This blog voices that sentiment much better than I can.

So programming now is about implementing an idea. It’s easy to rise above the technical nitty-gritty details and focus on the concept at hand. Then programming becomes a form of poetry, in which you express your ideas in a structured and rhythmic way. Programming, at a consumer level, is no longer about getting a machine to do what you want; it’s about empowering people.

Just like a poet spends many hours revising their verses and getting the words to say exactly what is meant, a programmer spends hours rearranging and improving code to fulfill their idea effectively. And like poetry, there are many genres and styles of programming. Unfortunately, programming is also like poetry in the way that many students get turned off to it by the experiences they have with it in school.

Programming should be taught with the main objective in mind: we are here to accomplish a mission. Writing mechanics are practiced and improved, but without an idea behind a poem or story, it is pointless. Algorithms are important, and so is project design and planning. But these are merely implements with which to express the programmer’s idea.

This is why the most successful software is easy to use, is powerful, or grants people an ability they didn’t have before. When you use a program, it doesn’t matter whether all the variables are global, or whether the project was built top-down or bottom-up. The functional differences between some of the most disputed methods are minuscule. Optimization is a trivial concern when compared with the user interface. Is the parse speed of one file format more important than the support of a larger number of formats?

Kids want to be programmers because of coding heroes like Notch, the creator of Minecraft. But Minecraft isn’t well-designed. In fact, the program is a piece of crap that can barely run on a laptop from 5 years ago despite its simplicity. But the idea is gold, and that is what people notice. This is why Minecraft and Bioshock, and not COD, inspire people to be game developers.

However, what schools teach as CS is mostly the mechanics of programming. Schools need to teach the art of computer science, not only the science. Imagine if writing was only taught, even up through college, in the scope of writing paragraphs. Essays and papers would just be a string of non sequiturs (kind of like this blog). Fiction would have no comprehensible story, only a series of finely crafted paragraphs. Only those who figured out the basic structures of plot, perhaps by reading books by others who had done the same, would learn to write meaningful stories.

In the future, everyone will be a programmer to some degree. At some point data will become so complex that, just to manipulate information, people will need to interface with data processors through some sort of technical language that describes what they want. To survive in a digital world you will either need software to help you interface with it, or you will need to learn the language of the realm.

Yet children are being driven off in droves because computers are being approached in education from completely the wrong angle. Computers are tools we use to accomplish tasks; the use of computers should not be taught just because “people need to be able to use computers in order to survive in the modern world”, but because children will be able to implement their ideas and carry out tasks much more easily if they have an expanded skillset on the computer. Computer skills should be taught in the form of “how would you go about doing X? Ok, what if I told you there was a much easier way?”
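
As a hypothetical illustration of that style of teaching: have a student rename a few hundred photos by hand, then show them that a handful of lines can do the whole job. The folder name and naming scheme here are made up for the example:

```python
import os

# Rename every .jpg in a folder to a consistent, numbered scheme,
# e.g. "IMG_4032.jpg" -> "vacation_001.jpg".
folder = "photos"  # hypothetical directory full of camera files
pictures = sorted(f for f in os.listdir(folder) if f.lower().endswith(".jpg"))

for number, name in enumerate(pictures, start=1):
    new_name = "vacation_{:03d}.jpg".format(number)
    os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
```

The point isn’t the script itself; it’s the moment where a tedious task the student already understands suddenly becomes trivial.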

Snow Crash

Oh. Yes. I am going to start off this post by talking about the absolutely brilliant book by Neal Stephenson (see Cryptonomicon), Snow Crash. The book that popularized the use of the word “avatar” as it applies to the Web and gaming. The book that inspired Google Earth. And despite being 20 years old, it is more relevant than ever, using the cyberpunk theme to hilarious and thought-provoking effect. It paints the picture of an Internet/MMO mashup, sort of like Second Life, based in a franchised world. Governments have split up and been replaced in function by companies; competing highway companies set up snipers where their road systems cross, military companies bid for retired aircraft carriers, and inflation has made trillion-dollar bills nigh worthless.

In the book, a katana-wielding freelance hacker named Hiro Protagonist follows a trail of mysterious clues and eventually discovers a plot to infect people with an ancient Sumerian linguistic virus. The entire book is bizarre, but it has some great concepts and is absolutely entertaining. Stephenson never fails to tell a great story; his only problem is wrapping them up. Anyways, I highly suggest you read it.

Well, I’ve been thinking about games again. I have two great ideas in the works, and one of them is a “hacking” game based roughly in the Snow Crash universe. It doesn’t really use any of the unique concepts from the book besides the general post-fall world setting and things like the Central Intelligence Corporation. It probably won’t even use the Metaverse, although that depends on how much I choose to expand the game from the core concept. The player does play, however, as a freelance hacker who may or may not wield swords (not that it matters, since you probably won’t be doing any running around).

I’m writing up a Project Design Document which will cover all the important points of the game:
Download the whole document

Why Richard Stallman is Wrong

I listened to an interview with Richard Stallman, and I truly believe he is wrong about the ethics of proprietary software, and especially about his fundamental beliefs regarding computer and Internet usage.

Fundamentally, his assumptions are flawed. He says that people should be able to use computers for free, and fair enough; but that doesn’t mean that having people pay to improve the experience is evil. I can decide to gnaw through a tree on my property for free, but I can obviously pay to have it cut down. Similarly, a user should be able to do anything they want for free, but should also be able to pay to improve the experience, do it faster, or change the feel. The point at which you start getting involved with morality is when the development of proprietary software begins to interfere with the development of open-source software. However, I think that if proprietary software were somehow banned, the rate of development of open-source software would not increase by very much.

Stallman is fine with software developed for a single client, where the developer is paid for the development of free software rather than for the software itself. However, that is fundamentally the same as distributing proprietary software. The cost of the proprietary software represents the effort that went into making it, as well as upkeep for the company, including other workers’ salaries and continued research and development. I do agree that such costs can get out of hand and that a ridiculous amount of money can end up going to those higher up on the corporate ladder. However, that is a necessary evil to keep high-quality proprietary software pumping out at a rate faster than free software can be developed.

Although he demands that the functionality of ebooks mirror that of books, he doesn’t seem to make the same connection regarding proprietary software and its real world parallel: non-free services. Although you should be able to live in a house and use public transportation for minimal costs, you almost always buy furniture and hire services to make your life more comfortable. Similarly, proprietary software allows users to improve the aspects of their experience that they want to.

As I said before, Stallman discusses ebooks, and how you should be able to do the same with an ebook as you can with a regular book. However, as a completely different medium, ebooks can’t simply be held to that demand. Suppose I demand that JPEGs be viewable in the same resolution as the paintings at a museum, for free. That doesn’t even make sense. Being a completely different medium, ebooks need to be approached in a completely different fashion. It would be nice to be able to easily share ebooks or sell them used. However, for an ebook to exist as a single economic and material object, the way a paper book does, proprietary software is absolutely necessary. Using Stallman’s logic, I can say that if you want a book to be freely available, write it yourself!

In some ways, open-source philosophy (or at least Stallman’s) is like Communism. Everybody pools their resources and in return everybody gets the same free software. However, as we see with many actual implementations of Communism, somebody who contributes resources may not need all the products. If I spend time coding, I want a video editor, not a database manipulator. The obvious solution is to have both developed, and then have those who want the video editor give their share of resources to that developer, and those who want the database software give theirs to the other.
