Introduction to Programming

Taking an introductory programming course this semester has been an interesting experience. Since I grasp the course material well, I’ve spent some time helping others with their work. As anyone who has taught math can attest, teaching even basic concepts requires you to understand the material far better than the student must. When it comes to programming, helping people is even more difficult because you can’t just tell them how to do it. You need to let them figure it out on their own; otherwise, they won’t have learned anything.

But leading someone along without explicitly telling them anything is really, REALLY difficult. Our professor is a master at this, and I respect him deeply because of it. A student will ask a question, and the professor will reply with an oblique statement that doesn’t seem to address the student’s question at all. Yet soon enough the student says “Oh! I get it!” and goes on their merry way. I try as hard as possible to emulate this method when I help those who are struggling, but it is nigh impossible to strike the correct balance. Help them too much, and they don’t learn. Help them too little, and they despair or begin to resent programming. And as much as I don’t like seeing it happen, many of the people in the class have come to resent programming.

This is as sad as a student resenting literature because of a bad English class experience, or resenting math because of a bad math teacher. Yet I don’t fully understand how to prevent it. If there were a good, standardized methodology for teaching difficult concepts without causing students to resent the field, I feel a lot of the problems in society today could be solved. Maybe that is just wishful thinking, though.

The second interesting observation from taking this class has come from observing a peer. The first language she learned was Python, and learning C++ this semester has caused some distress. There were many lamentations along the lines of “why is the computer so dumb?!” Of course, I found this hilarious because it mirrors a situation in the novel A Fire Upon the Deep. As the protagonists head towards the bottom of the Beyond, much of their advanced computer technology stops working, and they are forced to adopt more primitive methods. Needless to say, the characters who grew up with the advanced technology are indignant that they are forced to use such primitive technologies as a keyboard. Meanwhile, the character who grew up using primitive technology merely smiles.

In my mind, this helps settle the argument over whether new students of programming should be started on a high-level language or a low-level one. Until low-level programming is needed only in rare circumstances, students should be started at a medium-to-low level; it is easier to step up from Java to Python, for example, than it is to step down. I was originally of the mind that new students should start at a high level so as to learn common computing concepts without getting bogged down in obtuse technicalities and syntax, but getting a first-hand view of the results of such an approach has changed my mind.
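
To make that gap concrete, here is a minimal C++ sketch (purely illustrative, my own example, not course material) of summing a list and printing the total, with comments noting what Python would have done implicitly:

    // A minimal sketch of the bookkeeping a Python-first student suddenly
    // owns in C++. The task is trivial; the point is that nothing here
    // happens implicitly.
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<int> scores = {90, 85, 77};

        // Python: total = sum(scores)  -- one builtin, no declared types.
        // C++: you choose the accumulator's type and write the loop yourself.
        int total = 0;
        for (int s : scores) {
            total += s;
        }

        // Python: print("total: " + str(total))  -- str() is the only ceremony.
        // C++ refuses to mix strings and ints implicitly.
        std::string message = "total: " + std::to_string(total);
        std::cout << message << '\n';
        return 0;
    }

None of this is hard, but a student who has only ever written the Python versions has never had to think about types or conversions at all; that is exactly the “why is the computer so dumb?!” moment.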

Truly Sustainable Energy

Nuclear.

The US public is split nearly 50/50 between those who favor nuclear power and those who don’t. Because of this, nuclear is often a dirty word in the political arena. Nobody wants to lose half their constituency over a marginal issue like nuclear power. Before 1979, the political climate was ripe for the rapid expansion of nuclear power. However, the Three Mile Island accident resulted in the cancellation of most new nuclear plant projects. Thirty years later, the public was just starting to warm up to the idea of nuclear as part of the so-called “nuclear renaissance.” Then, in a case of incredibly poor timing, the Fukushima disaster struck.

There is a lot of weird cultural weight attached to the word, not least because an entire generation was psychologically scarred by the perceived overhanging threat of nuclear war. Unfortunately, this stigma stifles one of humanity’s greatest hopes for survival.

Nuclear might not be as cost-effective as geothermal, wind, or hydro power. It also isn’t as clean as solar. However, I would argue that neither cost-effectiveness nor cleanliness disqualifies nuclear as the best “clean” energy source available. And not only would widespread adoption of nuclear energy entirely solve the climate crisis, it would save humanity from eventual extinction by hastening our spread through the universe.

As I see it, the only other power source that is as scalable as nuclear is solar. Solar, however, loses out on two counts. First, it is really expensive compared to, like, any other power source. Second, the power density of solar is really, really low. We would need to cover 496,805 square kilometers with solar panels to satisfy the world’s projected energy consumption in 2030. While the price of solar power has really come down, that’s also in part due to subsidized research. On the other hand, nuclear has a much higher power density, and despite years of marginalization, is still competitive with current cutting-edge solar power. It is also extremely reliable, with fluctuations in power output virtually non-existent. This is something other forms of renewable energy lack.
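
As a sanity check on that area figure, here is a rough back-of-envelope sketch. The demand, insolation, and efficiency numbers below are my own assumptions for illustration, not sourced data:

    // Back-of-envelope check of the quoted land-area figure. Every input
    // here is an assumption chosen for illustration, not sourced data.
    #include <iostream>

    int main() {
        const double demand_w = 2.0e13;       // assumed ~20 TW projected 2030 world demand
        const double insolation_w_m2 = 200.0; // assumed day/night/weather-averaged flux
        const double efficiency = 0.20;       // assumed panel efficiency

        const double usable_w_m2 = insolation_w_m2 * efficiency; // ~40 W/m^2 delivered
        const double area_km2 = demand_w / usable_w_m2 / 1.0e6;  // m^2 -> km^2

        std::cout << "required area: ~" << area_km2 << " km^2\n"; // ~500,000 km^2
        return 0;
    }

With roughly 20 TW of demand and 40 W/m^2 of delivered power, you get about 500,000 km^2, the same ballpark as the figure above.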

If we started investing in nuclear research, we could dramatically lower the costs of nuclear power and satisfy a huge portion of the world’s energy demands. Cheap electricity would hasten the widespread use of electric cars (okay, this would probably happen anyway). With combustion cars and both natural gas and coal plants replaced, the influx of greenhouse gases into the atmosphere would be greatly reduced. Cheap, portable reactors would allow developing countries to get on their feet in terms of manufacturing capability. Cheap energy would allow us to implement energy-intensive climate engineering schemes. Advanced nuclear technology would lead to the development of closed-core nuclear rockets, allowing safe, clean, and cheap access to space. Portable reactors would jump-start unmanned planetary exploration, interstellar exploration, human colonization, and asteroid mining.

Of course, none of this will happen. Nuclear is still a dirty word, burdened by the historical and cultural baggage it must drag around. The first step to a better, cleaner future is to get the public to accept nuclear power. As long as we are afraid to approach the energy problem space head-on, we are holding ourselves back from achieving our full potential.

The Community-Driven Game

Imagine you are driving a car, and you have three of your misanthropic friends in the back. Suddenly they lean forwards and ask if they can help steer. You think this might be a bad idea, but before you can react they clamber forwards and put their hands on the wheel. Most people would at this point judge the situation as “not a good idea”.

Replace your annoying friends with the Internet (uh oh), and replace the car with an indie game. Congratulations, you have just created the perfect environment for a terrible game to develop. Actually, oftentimes the situation only gets as far as the Internet playing backseat driver, yelling out confusing and contradictory directions that are both useless and hard to ignore. But for a game like KSP (Kerbal Space Program), the community has leapt into the passenger seat and nearly wrested control from the developers.

The developers of KSP are driving towards a cliff of not-fun. Left to their own devices, they could probably make a good game that stood on its own and appealed to a certain audience. However, because the early prototypes of the game drew such a diverse crowd, the fans want the game to head in several conflicting directions. Few people share a common vision for the game, and a lot of people like to play armchair game designer.

I honestly think some of the more prolific modders in the community have been taking the game in a more suitable direction. Meanwhile, the community quibbles over what should be included in the stock game and what shouldn’t. I want to take one of my biggest peeves as a case study:

One of the most touted arguments against certain large features is that the feature merely adds another level of complexity without adding any “true gameplay”. For example,

  • Life Support would just mean another thing to worry about, and it would reduce the amount of shenanigans you can do (stranding Kerbals on planets for years, etc).
  • Living Room/Sanity mechanics? Nope, it would just be a hassle. You have to bring up bigger habitats any time you want to send a mission to somewhere far away. It doesn’t add any gameplay during the mission.
  • Reentry heating? That just restricts craft designs, making people conform to certain designs and plan around reentry.
  • Different fuel types? Too complex, requires a lot of learning and planning beforehand, and only restricts your options during a mission (again, restricting shenanigans).
  • Realistic reaction wheels that don’t provide overwhelming amounts of torque and require angular momentum to be bled off with a reaction system periodically? Could prove to be annoying during a critical part of a mission if you hit max angular momentum. Requires you to put in a reaction system even if you only want to rotate your craft (not translate).

Do you see the problem with these arguments? You are arguing that something shouldn’t be added to the game because it adds gameplay that isn’t in the game right now. See how circular and pointless the argument is? The worst part is that it could be extended to basically any part of the game that exists right now.

  • Electric charge? What if you run out of charge during a critical maneuver, or go behind the dark side of the planet? It’s A GAME; we shouldn’t have to worry about whether or not the craft is receiving light. Just assume they have large batteries.
  • Different engine types? That would add too much planning, and just limits the performance of the craft. What if I need to take off, but my thrust is too low to get off the ground? That wouldn’t be very fun.
  • Taking different scientific readings? That sounds like it would be pretty tedious. You shouldn’t add something that is just going to be grinding. The game doesn’t have to be realistic, just fun.
  • A tech tree? Why restrict players from using certain parts? What if they want to use those parts? You shouldn’t restrict parts of the game just so the player has to play to unlock them. That doesn’t accomplish anything.

Hell, why even have a game in the first place? It sounds like a lot of thinking and planning and micromanagement and grinding.

Of course, this could be considered reductio ad absurdum, but the problem is that it actually isn’t. The arguments against Life Support or different fuel types or reentry heating just don’t hold any water. Yet people rail against them, so the developers are less likely to put them in the game. Since I started with a metaphor, I’ll end with one:

The developers of KSP are driving towards a cliff because the community told them to. Fortunately, they realized it and are now putting on the brakes. In response, the community is shouting “why are you putting on the brakes? That only slows the car down!” To which I reply, “yes, yes it does.”

Fetishizing Apollo

America has an unhealthy obsession with historic US space missions. This obsession is even more pronounced in the space-enthusiast community; it is no surprise that there are multitudes of mods for KSP that allow users to build and fly their very own Saturn V rocket. Really, America’s fixation on the NASA programs of the 1960s and ’70s has reached a pornographic level (I use this word not in the sexual sense, but in the same sense as in the pornography of violence).

It is an understandable attraction, I suppose — many of the iconic space photographs were taken by Apollo astronauts.

[Photos: Earthrise, an Apollo astronaut, and the full Earth.]

Landing people on the Moon might be considered one of mankind’s greatest achievements, and was certainly the height of glory for the US space program.

But the level at which America has turned the Moon missions into a fetish is astounding. Countless books, movies, rehashed TV series, photo remasters, articles, celebrations… it’s depressing.

We should appreciate Apollo for what it was: an antenna. Celebrating Apollo is like including the antenna mast in the height measurement for a really tall building. Yes, the fact that we stuck a tall pole on top of a tall building is cool, but it’s not really the pole that you’re interested in, is it?

People like thinking about Apollo because they like the idea of humans expanding into space, and in their mind Apollo is the farthest we’ve ever gotten towards that goal. It’s an understandable misconception, considering the Moon is literally “the farthest humans have ever gone”. But Apollo was unsustainable (even if the Apollo Applications Program had gone forwards, it still would have been a step in the wrong direction!). We are now much closer to accomplishing the goal of long-term human expansion into space than we ever were.

[Image: render of the SLS, captioned “SLS, more like SMH”. Granted, it won’t be painted the same way in real life.]

This is why the SLS is so disappointing, I think. Right now we have highly advanced computing and robotics technologies, excellent ground support infrastructure for space missions, incredibly advanced materials knowledge, and a huge array of novel manufacturing techniques being developed. As a civilization, we are much more ready to colonize space than we were a half-century ago. Yet the government has decided the best way to start human expansion into space is to build a cargo cult around Apollo. The US is building a rocket that looks like the Saturn V, as if some sort of high-tech idolatry will bring back the glory of Apollo. They are resurrecting an architecture that was never a good idea to begin with!

The space program paradigm is outdated. Despite my most optimistic hopes, let’s be real: the next big driver in space travel will be high-powered corporations following the profits of a few innovative companies that pioneer the market. It won’t be enthusiastic supporters who become the first space colonists, but employees doing their stints in the outer solar system before returning to Earth.

Mass Paradigm

One of the most interesting things to think about with respect to the near future of space travel is the removal of limited mass as a paradigm. That is to say, right now the predominant design constraint for spacecraft is mass: because rockets are very expensive, each kilogram of payload must be put to best use. Unfortunately, this means that the design and construction costs for spacecraft are very high, as much effort is put towards shaving off grams by using exotic materials and efficient designs.

But soon the current launch vehicle renaissance will result in launch costs low enough to demolish the limited-mass paradigm. There is a tipping point where it becomes economical to cut design costs and take the hit on launch costs. We will also see a growing emphasis on tough, reliable systems that last a long time, rather than fragile, light, efficient ones. Combined with lower fuel costs from asteroid mining and improved refueling technologies, the predominant modus operandi will be repairing spacecraft rather than replacing them. Designing for reusability and, more importantly, refurbishment will be critical.

We’re already seeing a shift towards this paradigm with SpaceX. Their launch vehicles use redundant systems to make up for their cheaper designs; their avionics, for example, are not rad-hardened but instead redundant in triplicate. The mass penalty is minimal, and they make up for it by using modern electronics concepts. For instance, instead of running numerous copper wires up and down the length of their rockets, they run a single Ethernet cable and use a lot of multiplexing.
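
For illustration, here is a toy sketch of the multiplexing idea: many logical channels share one physical link by tagging each message with a source ID, instead of one dedicated wire per sensor. The frame layout and names are hypothetical; SpaceX’s actual protocol is certainly more involved.

    // Toy multiplexing sketch: interleaved frames on one shared bus,
    // demultiplexed by ID at the receiver. Hypothetical framing, not
    // any real flight protocol.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct Frame {
        std::uint16_t sensor_id; // which logical channel this reading belongs to
        double value;            // the reading itself
    };

    int main() {
        // Everything arrives interleaved on the single shared link...
        std::vector<Frame> bus = {{1, 0.98}, {2, 101.3}, {1, 0.97}, {3, -0.2}};

        // ...and the receiver sorts readings back out by sensor ID.
        for (const Frame& f : bus) {
            std::cout << "sensor " << f.sensor_id << ": " << f.value << '\n';
        }
        return 0;
    }

The design trade is the one the paragraph describes: a little processing and protocol overhead at each end replaces a large mass of dedicated copper.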

This kind of change is just the beginning, however. There will be a time when it makes sense to loft a big bundle of steel rods into orbit and have workers weld them into a frame for a spaceship. This has a number of benefits: the frame doesn’t have to be fit into a fairing, it can be reconfigured on the fly, and it doesn’t have to endure the acceleration and acoustic stresses of launch. Additionally, lifting big bundles of steel makes best use of the volume in a launch vehicle fairing.

I think the only two questions about the future of space travel are: How much will it be dominated by robots? and Where will the money come from? But those are questions for another time.

Learning a Foreign Language

I have had the benefit of taking Japanese 1 this semester, and it is quite a humbling experience. Learning a language that has no Romance roots — a truly foreign language — lends a certain perspective that learning French or Latin does not.

However, it also seems to me that the teaching method is geared towards a very specific type of learning style. The class starts out by teaching a number of phrases which the students are to memorize, and meanwhile students also begin to learn one of the writing systems. It is not until a few weeks in that students finally learn some grammar (i.e. the thing that actually determines whether or not a communication system is a language), and even then it takes time to learn the exact mechanics behind the memorized phrases.

For instance, we learned how to ask how to say something in Japanese: (English word)wa nihongo de nanto iimasuka. Yet we are not told that nihongo means “Japanese language” (although it can be inferred), and we certainly aren’t told that ~go is a suffix, applied to the word nihon (Japan), meaning “the language of”. In addition, we aren’t told that nan means “what”, that ~wa is a topic particle (and we certainly aren’t told it’s spelled using は instead of わ, because hell, we don’t even know how to write at that point), or that to is a sort of quotation mark (if we are, it is only in passing and without context).

Insights can only be gleaned by comparing the response: (Japanese word)to iimasu. Now it becomes clear that ~ka is a question particle. And since nanto became (word)to, ~to must be some sort of quotation-marker suffix. iimasu must be “say”, or thereabouts.

My point is that it is very hard for me to memorize phrases or words with no context. The teaching style is designed to help a certain type of learner. My learning style would benefit greatly from learning grammar and vocabulary separately, and letting my brain concoct the phrases from their base elements; when I speak, it flows logically, and my mind pronounces one morpheme at a time. Learning whole phrases with no context means I can’t break them down into morphemes, so producing the sounds is much harder.

Perhaps this will change after we get past the first few weeks, but I can’t help worrying that this sort of learn-specifics-then-learn-rules teaching style will continue.

Why Scientific Philosophy Is Important

I recently talked to a person who was convinced that scientific theories, mathematical theories, mathematical theorems, knowledge, truth, and scientific laws were all basically synonymous. He said that physics could not exist without math, because math defined physics. He also was convinced that believing and agreeing were the same thing. I attempted to remedy these misconceptions using some basic arguments, but I was finally written off as “not understanding anything” and “unwilling to do the math”. When I asked him to define the word truth, he merely kept repeating, “I don’t know what you mean. Truth is just that which is.” When I attempted to explain that the word “truth” was a symbol referring to a concept, and that we couldn’t have a discussion if we were referring to different concepts with the same word, he said “you don’t need to define truth, it just is. It’s very simple.” He couldn’t understand why I kept “bringing up philosophy when we’re talking about simple truths here.”

Sigh. If I can’t break through that kind of rhetoric, I might as well just explain my thoughts here.

Why is it important to know about the philosophy behind knowledge, truth, and science when talking about them? Isn’t it possible to rely on the natural human consensus about truth? Besides, even if the concept is hard to explain using language, people intuitively grasp it. Right?

Well, let’s give some examples. It’s true that if you drop an object, it falls, right? Well, yeah, that statement is true if you are on the surface of a planet, and not orbiting it. Or if you are underwater and you drop a buoyant object — it goes up! But wait, can you drop something underwater if it doesn’t go down? No, that wouldn’t be dropping; it would just be… releasing? Hold on, when an object is in orbit, isn’t it actually just falling in a special way? It’s moving sideways fast enough that it misses the ground by the time it’s fallen far enough. But if an astronaut releases a wrench and it floats right in front of him, you wouldn’t call that “dropping”.

What we see is that the word “drop” has a definition, and we need to know what the definition of “drop” is before we can begin to assess the truth of the statement “if you drop an object, it falls”. As it turns out, “dropping” an object consists of releasing it such that it falls away from you. Uh oh. So yeah, “if you drop an object, it falls” is true, but it doesn’t actually convey any physical knowledge; it just defines a property of the word “drop” in terms of another word, “fall”.

So let’s look at some more meaningful examples. Most people would say it’s true that planets orbit the sun in an elliptical manner. Except it isn’t true. It’s true that the movement of the planets can be approximated by ellipses, but in fact there are measurable deviations. “Okay, sure. The movement is actually described by Newton’s laws of motion and the law of gravitation.” Okay, yes, an N-body approximation gets much, much closer to describing reality. In fact, it perfectly matched the observations Newton was working from. However, it’s still not true that Newton’s laws describe the motion of the planets.

We can look to general relativity to describe the motion of the planets even better. We have launched satellites to observe very minor fluctuations in the path of the Earth that would confirm the prediction made by general relativity. As it turns out, general relativity makes predictions that perfectly match our observations. Woof. Finally, we’ve found some truth. The path of the planets around the sun is described by general relativity.

But wait, can we say this in good conscience? No! Just like Newton, we’ve found a set of laws which create predictions that match our observations. But just like Newton, we cannot measure the motion perfectly. All we can say is that general relativity describes the motion of the planets as far as we can observe. We don’t know if there is some unknown mechanic that affects the motion of planets in a way we can’t measure right now. We can’t say that general relativity is “true”; we can only say that it is confirmed by all of our observations to date, much in the same way that Newton could not say his laws of motion were true: they merely described all the physical data he was capable of obtaining.

This gets to the root of the problem. While mathematical notions can be “true” because they exist within an entirely constructed framework defined through logic, theories in science can never be “true”. The point of science is not to find things that are true, but to find the best explanation for why the world works the way it does. And just to be clear: theories are explanations of “why”, and laws are explicit definitions of how physical quantities relate. So no, we don’t use “math to define physics”; physics uses math to explain the physical universe. But even without math, we can perform a sort of qualitative physics.

For instance, “things stay still until you push them, and things keep going straight unless you push them.” This phrasing of Newton’s first law of motion is simplistic and uses words like “thing” and “push” without really defining them, but it gets the point across. Similarly, “big things move less when you push them, and small things move more.” This is very simplistic, and doesn’t even mention the fact that acceleration changes linearly with force, but it communicates the basic idea of Newton’s second law of motion, without even getting into what “big”, “small”, and “move” really mean.

The point is that the traditional phrasing of Newton’s second law, F=ma (which, by the way, is more accurately ΣF = ma, with ΣF the net force), merely uses mathematical symbols rather than English symbols, which allows us to manipulate it using the rules of mathematics. But just because we can manipulate arbitrary quantities with math doesn’t mean anything physically. Just because I calculate that an object massing 1 kg should accelerate at 1 m/s^2 when I apply 1 N of force doesn’t mean the thing will actually act that way if I perform the experiment. This is because “mass” is really a simplification of a whole range of things, as is “acceleration”. The equation doesn’t even account for internal forces, and only describes the movement of the center of mass.

Math may be true, but only within the realm of math. When we translate physical quantities into the mathematical universe, they lose their physical meaning. We may translate them back, but the results we get can only be an approximation, not a truth, not a reality. These approximations can be very useful, but we have to remember the limitations of our theories and our instruments.

Defining Life

I’ve had this conversation a couple of times recently, because it poses an interesting question: can we create a definition for ‘alive’ that encompasses not only known biological life, but also any theoretical lifeforms we can imagine? This might include alternative biochemistry, artificial life (nanites?), and even digital lifeforms.

Obviously there is an inherent problem in this discussion: we are assuming everyone shares a similar definition of life. However, even skin-deep probing reveals divisive philosophical questions. Are computer viruses alive? How about self-replicating structures of dust particles in a plasma? Is the Earth alive? We can’t truly resolve this problem without first clearly setting a boundary between what is alive and what isn’t. For example, scientists seem to have resolutely decided that biological viruses are not alive. Similarly, it’s clear to our human sensibilities that a car engine is not alive, even if it is highly advanced and has all sorts of sensors and regulatory mechanisms.

For the sake of discussion, I’m going to skip over this roadblock and dive in. Wikipedia gives these criteria for calling something ‘alive’:

  1. Homeostasis: Regulation of the internal environment to maintain a constant state.
  2. Organization: Being structurally composed of one or more cells.
  3. Metabolism: Converting chemicals and energy to maintain internal organization.
  4. Growth: A growing organism increases in size in all of its parts, rather than simply accumulating matter.
  5. Adaptation: The ability to change over time in response to the environment.
  6. Response to stimuli: A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
  7. Reproduction: The ability to produce new individual organisms, either asexually from a single parent organism, or sexually from two parent organisms.

There are some good ones in there, but a few need to go. Let’s throw out Organization (this can almost be seen as tautological — things made of cells are alive because they are made of cells — and exclusive of otherwise potential candidates for life), Growth (one can imagine an organism which is artificially constructed, but then maintains itself perfectly, or a mechanical organism that starts life by being constructed externally, and slowly grows smaller as it sacrifices components to stay operational), and Reproduction (again, imagine a constructed organism that cannot reproduce). This leaves Homeostasis, Metabolism, and Adaptation/Response to stimuli.

However, it’s clear that Metabolism is important: an organism must take something from its environment and consume it to maintain an internal state. Metabolism and Homeostasis are where biological viruses fail the ‘life test’. While some advanced viruses meet the Adaptation and Response to Stimuli criteria (arguably the same thing, just at different scales), no virus can use resources from its environment to perform internal upkeep. It requires the hijacked machinery of a cell to do that.

Unless you say that living things are part of a virus’s ‘environment’. Then you could argue that in some sense of the word, viruses are alive, because they use resources present in the environment to perform internal upkeep. This raises an important question about context. Indeed, all definitions of life seem to hinge on context. For example, a computer virus’s environment is the computer system. Resources would be computing time and memory, perhaps.

Is a computer virus alive? Advanced viruses can modify their own state (metamorphic code), respond to stimuli (anti-virus, user activity, etc), and metabolize resources from their environment. They also reproduce, although we cut that criterion so the point is moot. If a computer virus meets the requirements for life (albeit unconventionally), then do we have to accept it as a lifeform?
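
To make the criteria explicit, here is a toy version of the ‘life test’ as a sketch. The verdicts encoded in it are just this essay’s judgments, and, as noted, the computer virus’s scores depend entirely on what you count as its environment:

    // Toy "life test": score candidates against the three retained criteria
    // (Homeostasis, Metabolism, Adaptation/Response). The true/false values
    // below encode the essay's own judgments, not settled biology.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Candidate {
        std::string name;
        bool homeostasis;
        bool metabolism;
        bool adaptation;
    };

    int main() {
        std::vector<Candidate> candidates = {
            {"bacterium",        true,  true,  true},
            {"biological virus", false, false, true}, // no internal upkeep without a host cell
            {"computer virus",   true,  true,  true}, // if its environment is the computer
        };
        for (const Candidate& c : candidates) {
            const bool alive = c.homeostasis && c.metabolism && c.adaptation;
            std::cout << c.name << ": " << (alive ? "alive" : "not alive") << '\n';
        }
        return 0;
    }

The interesting part is not the output but the inputs: change the definition of “environment” and the biological virus’s row flips.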

Moreover, there are things we wouldn’t normally call a single entity that fulfill the requirements for life. These are often termed “living systems”. The Earth is a prime example. It has systems that regulate its interior, and it absorbs sunlight, which helps fuel the regulatory cycles on the surface. It’s debatable whether the Earth responds to stimuli. Sure, there are feedback loops, but the Earth doesn’t really respond to changes (say, changes in solar luminosity or meteoric impacts) in a way that maintains homeostasis. Quite the opposite, in fact. For example: a decrease in solar radiation produces more ice, which raises the planet’s albedo, which reflects more sunlight and cools the planet further, producing yet more ice.

So maybe the Earth isn’t alive, but we nonetheless have to consider that systems can be alive. In fact, it’s questionable whether humans are single organisms. Several pounds of our weight are gut bacteria: independent organisms that share no DNA with us, but on which we rely for survival. We are a system. Call it a colony, call it symbiosis; the entity that is a human is in fact a collection of trillions of ‘independent’ organisms, and yet that entity is also singularly ‘alive’.

Can we trust our initial, gut reaction that tells us what is alive and what isn’t? Moreover, what use is there in classifying life in the first place? We treat cars that are definitely not alive as if they are precious animals with a will of their own, and then squash bugs without a second thought. Is it important to define life at all, rigorous criteria or not?

Mobile Computing

Many have predicted the fall of the PC in favor of large-scale mobile computing with smartphones and tablets. Most people don’t need the power of a high-end laptop or desktop computer to check email and play Facebook games. Indeed, most services are now provided over the Internet, with low client computational requirements. However, we may see an abrupt reversal in this trend.

There are two factors at play that could radically change the direction of the computing market. First, some experts are now predicting doom and gloom for the “free Internet”. The post-Snowden Internet is very likely going to fragment along national lines, with each country creating its own insulated network over security concerns. Not only does this mean the US will lose its disproportionate share of Internet business (and US tech companies will see significant declines in overseas sales), but it also means the era of cloud services may be coming to a premature close. As users see the extent of NSA data mining, they may become less willing to keep all of their data with a potentially insecure third party. If users wish to start doing more computing offline – or at least locally – in the name of security, then desktop computers and high-power tablets may see a boost in sales.

Second, the gulf between “PCs” and “tablets” is rapidly closing; the agony over PC-mobile market shifts will soon be moot. Seeing a dip in traditional PC sales, many manufacturers have branched out, and are now creating a range of hybrid devices. These are often large tabletop-scale tablets to replace desktops, or tablets like the Surface Pro to replace laptops. I suspect the PC market will fragment, with a majority of sales going towards these PC-mobile hybrids, and a smaller percentage going towards specialty desktops for high-power gaming and industry work (think CAD and coding).

I doubt desktop computers will disappear. In 10 years, the average household might have a large tablet set in a holder on a desk, connected to a mouse and keyboard, or laid flat on a coffee table. It would be used for playing intensive computer games, or the entire family could gather round and watch videos. In addition to this big tablet-computer, each person would have one or two “mobile” devices: a smallish smartphone, and a medium tablet with a keyboard attachment that turns it into a laptop. Some people may opt for a large-screen phone and forgo the tablet.

It’s hard to tell whether or not the revelations about national spying will significantly impact the civilian net (the same goes for the fall of net neutrality). On the one hand, people are concerned about the security of their data. However, being able to access data from any device without physically carrying it around has proved to be a massive game-changer for business and society in general. We may be past the point-of-no-return when it comes to adopting a cloud computing framework. On the whole, transitioning from a dichotomy between “mobile devices” and “computers” to a spectrum of portability seems to be a very good thing.

Digital Copyright

We’ve got a big problem in America. Well, we’ve got a number of big problems. But one of the biggest, baddest problems is that monstrous leviathan known as copyright law.

Glossing over the issues with traditional copyright law, I want to focus on digital copyright. It has been apparent for some time that there is something dreadfully wrong with the way the US handles copyright management on the Internet. An explosion of annoying DRM, horrific lawsuits, and illegal prosecution has illuminated the fact that our current system for managing content rights is broken.

Currently, the DMCA governs much of US digital copyright law. It is based on two tenets: content providers are not accountable for user-uploaded content, so long as there is a means for quickly taking down content at the request of the copyright owner.

However, many large content producers have taken to spamming such takedown requests, to the point of absurdity; for example, HBO at one point requested that YouTube take down a video with HBO content – a video that HBO itself had posted. We also hear stories about kids being sued for hundreds of thousands of dollars because they pirated a few dozen songs. And in at least one case, monolithic content producers like the MPAA and RIAA have gotten the US government to grossly violate a swath of other laws in order to enforce the DMCA. I speak of the Kim Dotcom raid. Invalid permits, illegal seizure of evidence, failure to unfreeze funds for legal defense, harassment while in custody, illegal withholding of evidence from the defense – the list goes on. It shows that the crusade against copyright infringement has become a farce, and the DMCA is no longer effective.

Ironically, it’s not even clear that this hard-line approach is the right way to deter copyright infringement in the first place. Over the last few years, Netflix has grown to comprise around 35% of peak-hours downstream Internet traffic in North America; it has become the de facto way to easily watch movies and TV online. And while Netflix has grown, file-sharing traffic has dropped from around 30% to 8%. This suggests that legitimate content consumption has effectively replaced online piracy for movies and TV shows.

Why did this happen? Simple: it became easy to watch movies and TV online without pirating. Piracy doesn’t occur because people don’t want to pay for content; it occurs because they can’t pay for content. If they could shell out cash for their favorite movies on demand over the Internet, they would; but until streaming sites like Netflix came along, there was simply no mechanism for doing so. In trying to protect their content, the MPAA actually encouraged online piracy.

We see the same thing with music and video games. In many cases, reduced DRM leads to increased sales. There are two explanations. First, if content is easy to pirate, people pirate it quickly after release; because more people are, say, playing the latest video game, word of mouth spreads faster, and more people end up buying the game legitimately. Second, when a content creator releases something without heavy DRM, the public collectively takes it as a show of good faith, and would rather purchase the content to show support than pirate it and take advantage of the creator.

In any case, we can expect to see a change in digital copyright in the near future. For everyone’s sake (that is, both content creators and consumers), I hope we take the path of less DRM and easier legitimate access to content, rather than the path of heavy-handed piracy suppression and draconian DRM.