Defining Life

I’ve had this conversation a couple of times recently, because it poses an interesting question: can we create a definition for ‘alive’ that encompasses not only known biological life, but also any theoretical lifeforms we can imagine? This might include alternative biochemistry, artificial life (nanites?), and even digital lifeforms.

Obviously there is an inherent problem in this discussion; we are assuming everyone shares a similar definition of life. However, even a skin-deep probing reveals divisive philosophical questions. Are computer viruses alive? How about self-replicating structures of dust particles in a plasma? Is the Earth alive? We can’t truly resolve this problem without first clearly setting a boundary between what is alive and what isn’t. For example, scientists seem to have resolutely decided that biological viruses are not alive. Similarly, it’s clear to our human sensibilities that a car engine is not alive, even if it is highly advanced and has all sorts of sensors and regulatory mechanisms.

For the sake of discussion, I’m going to skip over this roadblock and dive in. Wikipedia gives these criteria for calling something ‘alive’:

  1. Homeostasis: Regulation of the internal environment to maintain a constant state.
  2. Organization: Being structurally composed of one or more cells.
  3. Metabolism: Converting chemicals and energy to maintain internal organization.
  4. Growth: A growing organism increases in size in all of its parts, rather than simply accumulating matter.
  5. Adaptation: The ability to change over time in response to the environment.
  6. Response to stimuli: A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis.
  7. Reproduction: The ability to produce new individual organisms, either asexually from a single parent organism, or sexually from two parent organisms.

There are some good ones in there, but a few need to go. Let’s throw out Organization (it is almost tautological — things made of cells are alive because they are made of cells — and it excludes otherwise-plausible candidates for life), Growth (one can imagine an organism that is artificially constructed and then maintains itself perfectly, or a mechanical organism that starts life fully built and slowly grows smaller as it sacrifices components to stay operational), and Reproduction (again, imagine a constructed organism that cannot reproduce). This leaves Homeostasis, Metabolism, and Adaptation/Response to stimuli.
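To make this concrete, here is a minimal sketch in Python of the pruned checklist as a simple predicate. The names and fields are entirely hypothetical; this is an illustration of the reduced criteria, not any formal standard:

    # Purely illustrative: the three surviving criteria as a checklist.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        maintains_homeostasis: bool   # regulates some internal state
        metabolizes: bool             # consumes environmental resources for upkeep
        adapts_or_responds: bool      # changes in response to its environment

    def is_alive(candidate: Candidate) -> bool:
        """Apply the reduced criteria: Homeostasis, Metabolism, Adaptation/Response."""
        return (candidate.maintains_homeostasis
                and candidate.metabolizes
                and candidate.adapts_or_responds)

    # A bacterium passes all three; a biological virus, scored as lacking
    # its own metabolism and homeostasis, fails.
    print(is_alive(Candidate("bacterium", True, True, True)))          # True
    print(is_alive(Candidate("biological virus", False, False, True))) # False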

Of the remaining criteria, it’s clear that Metabolism is important: an organism must take something from its environment and consume it to maintain an internal state. Metabolism and Homeostasis are where biological viruses fail the ‘life test’. While some advanced viruses meet the Adaptation and Response to Stimuli criteria (arguably the same thing, just at different scales), no virus can use resources from its environment to perform internal upkeep. It requires the hijacked machinery of a cell to do that.

Unless you say that living things are part of a virus’s ‘environment’. Then you could argue that in some sense of the word, viruses are alive, because they use resources present in the environment to perform internal upkeep. This raises an important question about context. Indeed, all definitions of life seem to hinge on context. For example, a computer virus’s environment is the computer system. Resources would be computing time and memory, perhaps.

Is a computer virus alive? Advanced viruses can modify their own state (metamorphic code), respond to stimuli (antivirus software, user activity, etc.), and metabolize resources from their environment. They also reproduce, although we cut that criterion, so the point is moot. If a computer virus meets the requirements for life (albeit unconventionally), then do we have to accept it as a lifeform?
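Plugging a computer virus into the earlier sketch makes the discomfort explicit (again, the scoring here is purely hypothetical; you may weigh the criteria differently):

    # Continuing the illustrative sketch above: score a metamorphic computer
    # virus within the context of its host system as its 'environment'.
    computer_virus = Candidate(
        name="metamorphic computer virus",
        maintains_homeostasis=True,   # maintains and repairs its own code/state
        metabolizes=True,             # consumes CPU time and memory for upkeep
        adapts_or_responds=True,      # reacts to antivirus scans, user activity
    )
    print(is_alive(computer_virus))  # True, by this checklist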

Moreover, there are things we wouldn’t normally call a single entity that fulfill the requirements for life. These are often termed “living systems”. The Earth is a prime example. It has systems that regulate its interior, and it absorbs sunlight, which helps fuel the regulatory cycles on its surface. It’s debatable whether the Earth responds to stimuli. Sure, there are feedback loops, but the Earth doesn’t really respond to changes (say, changes in solar luminosity or meteoric impacts) in a way that maintains homeostasis. Quite the opposite, in fact. For example: a decrease in solar radiation produces more ice, which raises the planet’s albedo, reflecting away more sunlight and cooling the surface further.
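A toy model with entirely made-up constants shows why this kind of loop is the opposite of homeostasis: a small dip in sunlight gets amplified rather than corrected.

    # Toy ice-albedo feedback (made-up numbers, not a climate model):
    # a drop in solar input grows ice cover, which raises albedo,
    # which cools the surface further on every pass.
    solar_input = 0.95   # fraction of 'normal' sunlight after a small dip
    albedo = 0.30        # fraction of sunlight reflected back to space
    for step in range(5):
        absorbed = solar_input * (1 - albedo)
        temperature = 15.0 + 40.0 * (absorbed - 0.70)  # crude linear response, deg C
        if temperature < 15.0:
            albedo = min(0.90, albedo + 0.05)  # colder -> more ice -> more reflective
        print(f"step {step}: albedo={albedo:.2f}, temperature={temperature:.1f}")
    # Each pass cools the planet a little more: positive feedback, not regulation.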

So maybe the Earth isn’t alive, but we have to consider nonetheless that systems can be alive. In fact, it’s questionable whether humans are single organisms. A nontrivial fraction of our body weight is gut bacteria: independent organisms, genetically distinct from us, on which we nonetheless rely for survival. We are a system. Call it a colony, call it symbiosis; the entity that is a human is in fact a collection of trillions of ‘independent’ organisms, and yet that entity is also singularly ‘alive’.

Can we trust our initial, gut reaction that tells us what is alive and what isn’t? Moreover, what use is there in classifying life in the first place? We treat cars that are definitely not alive as if they are precious animals with a will of their own, and then squash bugs without a second thought. Is it important to define life at all, rigorous criteria or not?

Mobile Computing

Many have predicted the fall of the PC in favor of large-scale mobile computing with smartphones and tablets. Most people don’t need the power of a high-end laptop or desktop computer to check email and play Facebook games. Indeed, most services are now provided over the Internet, with low client computational requirements. However, we may see an abrupt reversal in this trend.

There are two factors at play that could radically change the direction of the computing market. First, some experts are now predicting doom and gloom for the “free Internet”. The post-Snowden Internet is very likely going to fragment along national lines, with each country creating its own insulated network over security concerns. Not only does this mean the US will lose its disproportionate share of Internet business (and US tech companies will see significant declines in overseas sales), but it also means the era of cloud services may be coming to a premature close. As users see the extent of NSA data mining, they may become less willing to keep all of their data with a potentially insecure third party. If users wish to start doing more computing offline – or at least locally – in the name of security, then desktop computers and high-power tablets may see a boost in sales.

Second, the gulf between “PCs” and “tablets” is rapidly closing; the agony over PC-mobile market shifts will soon be moot. Seeing a dip in traditional PC sales, many manufacturers have branched out, and are now creating a range of hybrid devices. These are often large tabletop-scale tablets to replace desktops, or tablets like the Surface Pro to replace laptops. I suspect the PC market will fragment, with a majority of sales going towards these PC-mobile hybrids, and a smaller percentage going towards specialty desktops for high-power gaming and industry work (think CAD and coding).

I doubt desktop computers will disappear. In 10 years, the average household might have a large tablet set in a holder on a desk and connected to a mouse and keyboard, or laid flat on a coffee table. It would be used for playing intensive computer games, or the entire family could gather round and watch videos. In addition to this big tablet-computer, each person would have one or two “mobile” devices: a smallish smartphone, and a medium tablet with a keyboard attachment that could turn it into a laptop. Some people may opt for a large-screen phone and forgo the tablet.

It’s hard to tell whether or not the revelations about national spying will significantly impact the civilian net (the same goes for the fall of net neutrality). On the one hand, people are concerned about the security of their data. On the other, being able to access data from any device without physically carrying it around has proved to be a massive game-changer for business and society in general. We may be past the point of no return when it comes to adopting a cloud computing framework. On the whole, transitioning from a dichotomy between “mobile devices” and “computers” to a spectrum of portability seems to be a very good thing.

Digital Copyright

We’ve got a big problem in America. Well, we’ve got a number of big problems. But one of the biggest, baddest problems is that monstrous leviathan known as copyright law.

Glossing over the issues with traditional copyright law, I want to focus on digital copyright. It has been apparent for some time that there is something dreadfully wrong with the way the US handles copyright management on the Internet. An explosion of annoying DRM, horrific lawsuits, and illegal prosecution has illuminated the fact that our current system for managing content rights is broken.

Currently, the DMCA governs much of US digital copyright law. It rests on two tenets: first, content providers are not held accountable for user-uploaded content, so long as, second, they provide a means for quickly taking down content at the request of the owner of any copyrighted material it contains.

However, many large content producers have taken to spamming such takedown requests, to the point of absurdity; for example, HBO at one point requested that YouTube take down a video with HBO content – that HBO itself had posted. We also hear the stories about kids being sued for hundreds of thousands of dollars because they pirated a few dozen songs. And in at least one case, monolithic content producers like the MPAA and RIAA have gotten the US government to grossly violate a swath of other laws in order to enforce the DMCA. I speak of the Kim Dotcom raid. Invalid permits, illegal seizure of evidence, failing to unfreeze funds for legal defense, harassment while in custody, illegal withholding of evidence from the defense – the list goes on. It shows that the crusade against copyright infringement has become a farce, and the DMCA is no longer effective.

Ironically, it’s not even clear that taking this hard-line approach is the right way to deter copyright infringement in the first place. Over the last few years, Netflix has grown to comprise around 35% of all Internet traffic during peak hours; it has become the de facto way to easily watch movies and TV online. And while Netflix has grown, file-sharing sites have dropped from 30% to 8% of all traffic. This suggests that legitimate content consumption has effectively replaced online piracy for movies and TV shows.

Why did this happen? Simple: it became easy to watch movies and TV online without pirating. Pirating doesn’t occur because people don’t want to pay for content. It occurs because they practically can’t pay for content. If they could shell out cash for their favorite movies on demand over the Internet, they would; but until streaming services like Netflix appeared, there was simply no mechanism for doing so. In trying to protect their content, the MPAA actually encouraged online piracy.

We see the same thing occur with music and video games. In many cases, reduced DRM leads to increased sales. There are two explanations. First, if content is easy to pirate, then people do so quickly after release. Because more people are, say, playing the latest video game, word of mouth spreads faster, so more people end up buying the game legitimately. Second, it could be that when a content creator releases something without heavy DRM, the public collectively takes it as a show of good faith, and would rather purchase the content to show support than pirate it and take advantage of the creator.

In any case, we can expect to see a change in digital copyright in the near future. For everyone’s sake (that is, both content creators and consumers), I hope we take the path of less DRM and easier legitimate access to content, rather than the path of heavy-handed piracy suppression and draconian DRM.

Project Morpheus

I recently discovered an interesting tidbit: NASA has been quietly developing the technology necessary for landing a humanoid robot on the Moon. Now, this is not a particularly interesting goal on its own.

[Image: Morpheus lander in flight]

However, the point is not the end goal. Project Morpheus, as it is now called, is really an experiment with different workflows. Morpheus is based on the principle of working quickly and efficiently, rather than the slow-and-steady plod that NASA generally adopts. Instead of planning for every possible contingency, the small team is designing low-cost systems with a rapid iteration rate.

The project is also an integration of a number of technologies — methane-oxygen engines, advanced robotics, advanced landing techniques, etc. — which are being developed in parallel. Instead of breaking the goal down into small steps and working straight at it, Project M seems to be making more generalized progress, so that the technologies it develops can be used in a variety of applications. This is good, as it will lead to cheaper, faster development cycles for other missions.

Finally, it is not high-profile. Low-profile projects are less likely to get bloated politically and bureaucratically; politicians want to pork-barrel big projects, which leads to missed deadlines and overshot budgets. Keeping projects out of the limelight means they are less likely to get axed for inefficiency, and keeping them low cost means they are less likely to get axed for budgetary reasons.

So I wouldn’t mind if NASA created more of these low-cost, fast-paced projects. Sure, not every one of them would get finished, but the approach is appealing — don’t put all your eggs in one basket, and all that.