The age of the cyborg
We seem to be done with RoboCop, Replicants, and the Borg from the Star Trek universe. To the general public, the cyborg is no longer merely a science-fiction staple. Bionic limbs, artificial eyes, and brain-computer interface research crowd the news. We are in the age of the cyborg, it is declared. You see it pop up on ScienceDaily, in general newspapers, in YouTube videos. In 2010, the Cyborg Foundation was created by the artist Harbisson, who ‘hears’ color through an electronic device attached to his skull. In short, cyborgs are everywhere. But most people have only a vague, sci-fi-inspired notion of what a cyborg actually is. If we truly are in the age of the cyborg, when did this age begin?
In its most literal sense, the age of the cyborg began in 1960, with the first mention of the term in the article ‘Cyborgs and Space’ by Clynes and Kline. In this largely hypothetical paper, the focus lies heavily on space travel and the inhospitable environments the space-traveling human will encounter. The technology the authors propose consists largely of external components that enhance homeostatic control – for instance, to withstand higher temperatures. To administer the needed chemicals continuously, a pump would have to be connected to the body.
However, this concept of physically attaching technology to the human body and integrating it with it is older than the term ‘cyborg’. One of the first accounts that go beyond mere fiction is the 1920s paper ‘Daedalus: Science and the Future’ by the biologist and mathematician Haldane. He makes a distinction between technology that serves humans and technology that becomes a part of us, predicting the latter to be the focus of the coming century. Some thirty years later, the computer scientist Licklider began to argue for AI research that does just that – augmenting human intelligence instead of creating autonomous intelligent beings.
A shaky definition
But the ideas about cyborgs do not stop there. What most definitions of ‘a cyborg’ have in common can be summarized as follows: ‘a self-regulating system with both biological and artificial parts’. Note two things: (1) the borders of the system are not specified, and (2) ‘artificial’ does not necessarily mean non-carbon-based. This means that, by this definition, a lab-grown donor organ makes a cyborg. But it also means, according to the sociologist Hess (1995, ‘On Low-Tech Cyborgs’), that a society of humans and technology can be called a cyborg society. A person and their smartphone form a self-regulating system. In fact, by this definition one could argue that a cyborg was created the very first time a human made and used an artificial tool.
This concept would most likely appeal to the philosopher Clark, best known for his ideas on the Extended Mind. In short, this is the notion that the ‘mind’ can incorporate the environment and is therefore not limited to the brain, or even to the whole body, of its user. According to Clark, humans are natural-born cyborgs – animals specialized in the use and creation of tools (clothes instead of fur, spears instead of claws, writing as a way to share and store ideas, and so on). Indeed, neurological research indicates that the human brain is very flexible in its corporeal awareness and readily includes external tools in its ‘body map’ (1997, ‘The body in the brain’, by Berlucchi).
But where does that leave us? Are we really already cyborgs? I would argue that we have never tried to extend the capabilities of the human body quite as much as we do right now. But there is a strange limitation to our efforts. Artificial limbs are mainly for people missing one. Despite the growing pervasiveness of plastic surgery, there is, for now, no revolution in enhancing your physical abilities through hard technology. No exoskeletons to walk faster – yet. Doping (essentially chemical enhancement) is banned in professional sports. True, a lot of our technology focuses on extending our cognitive abilities rather than our physical ones. There is an unprecedented amount of information stored in the world right now – outside of human brains. Most of us carry small computers in our pockets that hold half our lives for us.
Still, when it comes to incorporated technology, the research seems mostly driven by medical science. Neuro-prosthetics, especially cognitive prostheses, are a hot research topic, but lab-grown neurons are usually developed for neurodegenerative disorders. Additionally, a lot of research focuses on animal-technology systems, like bomb-sniffing rats that transmit a danger signal via their brains and can even get a small neurochemical ‘reward’ for a job well done. These are all very worthy, and arguably even the most important, causes to serve with this technology. Yet wouldn’t one expect more willingness from the general public to jump on the cyborg bandwagon? Spending money on aesthetic plastic surgery is increasingly normal, but building technology into a healthy body to extend its functions remains an exception. An interesting exception is the scientist Warwick (2002), who had a neural transponder built into his arm to use its neuronal signals to accomplish various tasks.
Nevertheless, the view of the relationship between human and technology is changing, partly due to advances in the medical field. Earlier this year, the bionic-limb engineer and user Herr argued in a TED talk (2014, ‘The new bionics that let us run, climb, and dance’) that “Humans are never disabled. Technology is disabled”. With this – not unfamiliar – sentiment, something very profound changes in our relationship with technology. Technology is no longer merely helping us – using it to extend our capabilities is a right we have. This echoes the philosopher More, who introduced the concept of morphological freedom in 1993, formulating it as “the ability to alter bodily form at will through technologies such as surgery, genetic engineering, nanotechnology, or uploading”. Morphological freedom gives us the right to shape our bodies the way we want – whether by restoring a lost limb or by adding one that was never there. In the future, the line between the two might just become blurry.