A recent issue of the Observer features two pieces about human enhancement ahead of the FutureFest festival in London in September (see here and here). The articles mention Bertolt Meyer, a Swiss man born without a left hand who was recently fitted with a state-of-the-art bionic one (which he controls from his iPhone), and include quotes from well-known authors associated with the topic of human enhancement, such as Nick Bostrom and Andy Miah.
At the moment, prosthetic devices like Meyer’s are used to restore normal human functions to those who lack them. Yet as such devices become ever more sophisticated, to the point that they eventually outperform “natural” limbs in speed, strength, executive control and the like, Meyer asks, “will it become the norm to have one of these?” The author of the Observer editorial raises a related worry: “what happens when these technologies and machines get so smart that humans can be written out of the equation altogether?” For instance, what if we could simply turn to our iPhone, rather than a human doctor, for a diagnosis of our ailments and appropriate treatment recommendations? Such suggestions can elicit fears of a dystopian future in which humans are pressured to become “cyborgs”, whether they like it or not, if they are to remain competitive on the job market (and in competitive sports) and in other contexts; or in which they are increasingly made obsolete by more effective machines, so that real-life human interaction shrinks (machines replacing staff not only at supermarket checkouts but also GPs, etc.) and becomes less accessible than it is now (think of having to pay a significant premium to see a human doctor).
Critics of enhancement technologies have rightly emphasized such concerns, yet on that basis they have also tended to endorse a blanket anti-enhancement stance, in complete opposition to transhumanist thinkers. In light of the potentially immense benefits of these technologies (including their role in preserving the existing goods that we, enhancement critics included, rightly value), such a radical stance strikes me as untenable. Suppose, for instance, that machines eventually proved able to diagnose illnesses more accurately, or to conduct medical research more effectively, than even the most competent humans can. If so, isn’t there a case to be made that we should step aside and let them do the relevant work, even if this meant ceding to them certain intrinsically valuable occupations? (This wouldn’t apply to occupations where achieving successful outcomes as efficiently as possible is not a moral imperative; moreover, professions like medicine also involve a social function that it isn’t clear we should want to delegate to machines.)
Secondly, let us assume that, unlike some transhumanists, we prefer our current fleshly envelope to a cybernetic form of embodiment. This assumption is still compatible with supporting various forms of enhancement, such as radical life extension, insofar as it preserves our fleshly envelope where decrepitude and death would have destroyed it. Likewise, preferring forms of life extension that preserve our current bodies over those involving cyborgization is compatible with choosing cyborgization if the only alternative is destruction. Finally, such preferences seem largely to rest on aesthetic considerations, which no longer warrant them once we consider enhancements that do not affect a person’s appearance, or the phenomenology of her experiences, in a negative way (or at all). Some of us may not like the idea of replacing our natural hands with bionic ones, but why should we feel the same about an artificial “enhanced” heart, were this to become possible, if the only consequence were to prolong our healthy lifespan? Concerns about fair access, raised by both Bostrom and Miah, would then still apply, yet these could in principle be addressed without refraining from developing the technology, or trying to enforce a ban on its use.
It is to be hoped that events like FutureFest will help move the enhancement debate beyond the overly simple, radical opposition between “pro-” and “anti-” camps towards a more nuanced and constructive assessment of these technologies, one that takes into account both their great promise and their potential perils – ideally yielding suggestions on how to fulfill that promise while minimizing the perils.
Author Bio: Alex Erler is a research associate at the Oxford Uehiro Centre for Practical Ethics. You can find his academic page here and his Twitter page here.