
July 31, 2015

Comments


Patrick S. O'Donnell

So, this is a non-standard definition of "cyborg" insofar as it does NOT refer to "a fictional or hypothetical person." I'm therefore curious why we should not use the term "bionic" here, an existing concept that fits the bill without resorting to a stipulative definition that goes against the standard meaning of "cyborg." I'm also curious whether you will provide us with a definition of what a "person" or human being is (distinct from non-human animals on the one hand, and from a robot or 'cyborg' on the other).*

* I'm thinking here, for example, of the sorts of conceptions filled out in works such as these:

• Gillett, Grant. Subjectivity and Being Somebody: Human Identity and Neuroethics (Imprint Academic, 2008).
• Hacker, P.M.S. Human Nature: The Categorial Framework (Blackwell, 2007).
• Hacker, P.M.S. The Intellectual Powers: A Study of Human Nature (Wiley Blackwell, 2013).
• Smith, Christian. What Is a Person? (University of Chicago Press, 2010).
• Tallis, Raymond. The Hand: A Philosophical Inquiry into Human Being (Edinburgh University Press, 2003).
• Tallis, Raymond. I Am: A Philosophical Inquiry into First-Person Being (Edinburgh University Press, 2004).
• Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth (Edinburgh University Press, 2004).

Patrick S. O'Donnell

To pose the question, “who will be liable when our memory enhanced brains are hacked and our memories are stolen and distributed on the Internet?,” presumably implies that science is getting close to implanting “memory chips” (or ‘memory in a dish’) or some such technology by way of aiding or enhancing one’s memory. But that remains firmly ensconced in the realm of science fiction, despite some breathless neuroscientific article titles and popular science headlines. Indeed, I think it is based on an egregious failure to understand precisely what “memory” is for human beings: see, for example, the discussion in Gillett [referenced in the first comment above]; the treatment in Raymond Tallis’ Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity (Acumen, 2011): 123-32; the relevant sections in M.R. Bennett and P.M.S. Hacker, Philosophical Foundations of Neuroscience (Blackwell, 2003); and here and there in Michael S. Pardo and Dennis Patterson’s Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience (Oxford University Press, 2013).

Incidentally, you state that your “purpose in undertaking this project is not to define what is human and what is not,” yet some of the questions you are asking, and arguably the most important ones, presuppose an understanding of what is human and what is not, to wit:

“There are potentially important questions to ask about what embedding technology into our bodies will do to us. Will we still be human? Is there a point where the technology becomes ‘too much of us’ and we become machines?”

So perhaps it should at least be a subsidiary or ancillary purpose of this project to define “what is human and what is not.”

Ralph Clifford

Your breaking-the-skin definition bothers me. We have had non-invasive technology for quite a while that can “read” the electrical signals of the brain by placing detectors on the head, outside the skin. The skin is never broken. What has been missing to date is the understanding and technology to translate these brain waves into usable signals. With the work that has been done at Duke with monkeys (I acknowledge that those experiments broke the skin, as they involved brain-invasive technology), the day when externally detected brain waves can be captured and made to operate technology may soon be here. As a slightly farfetched example, if I can put on a cap that allows an external computer to grade my exams by consulting my biological brain about whether a particular answer is correct, haven’t I become a grading cyborg?

I am also somewhat troubled by the breadth of what you are willing to count as the enabling technology in a cyborg. You have clearly captured the interactive nature of the human-machine connection, but you may have gone a bit too far. There are many examples of things that are “actively involved in variously allowing, enhancing, enabling or preventing certain actions or abilities of or by the attached person” yet are not obviously cyborg-like.

How about a medicine such as Lipitor? It actively interacts with the body, or at least the liver, and enhances the liver’s ability to clear LDL cholesterol from the blood. Another medical example that seems to fit your definition even better is a vaccine, which “enhances” the body’s immune response, thus “enabling” the person to remain healthy.

To me, the transition from human to cyborg to robot has to include the degree of autonomous conduct that the technology can trigger. With drugs and vaccines, there is no autonomous conduct by the drug; instead, it causes a reaction within the human. At the other end, a robot would be a super-Watson-like computer, but one also able to make its own autonomous decisions without any human intervention. If there are two autonomous beings, one animal and one “machine,” operating in some form of partnership, you’ve reached the possibility of a cyborg. As you point out in your post, the collaboration needs to be more or less permanent.

Rob Heverly

Thanks for the really good questions and comments, Patrick and Ralph. To start with Patrick's two comments, the "human" questions are critical in the overall scheme of things. Yes, I'm assuming them away, because if I don't, I'll never get to the doctrinal questions that we'll need to answer once this technology becomes more prevalent. Even as subsidiary questions, they are too dense and too weighty to fold into a project devoted to something else. There are people continuing to talk about, think about, and work on these questions (Brett Frischmann at Cardozo Law, for example, is working on a book entitled "Humans in the Twenty-First Century: How Social and Technological Tools are Reshaping Humanity": http://www.brettfrischmann.com/), but that's not my interest right now.

My point is to focus on "easy" examples to avoid these debates. Even though they are important debates, I don't see us making any real progress on them before we start running into doctrinal problems such as "how much access should the government have to communications that arise directly from brain activity made available through brain-connected technologies," or those "who is liable" questions that continue to intrigue me. I prefer to stick to these pragmatic, doctrinal questions and let others take up the more philosophical issues, at least at this point.

On the state of the technology, my next post will detail some of what I’ve found. While I used the example of hacking our memories, how about hacking our insulin pumps? Or our pacemakers? There are viable technologies that stimulate the brain to help counteract the effects of Parkinson's disease; if networks are used in those systems, hacking arises again. There are experts working on neuroprosthetics that could raise a host of problems like those I'm worried about (see here: http://www.the-scientist.com/?articles.view/articleNo/41324/title/Neuroprosthetics/); you can argue those are outliers or science fiction, but I wouldn't want to say that to Gerwin Schalk the next time he and I have coffee. I think the science may be moving very quickly beyond the limits you see. And if it is, we should be thinking about where it's going now, not later, after it gets there.

As for the use of the term cyborg, I'm quite happy to argue that we are moving the cyborg from myth to reality. Haraway laid out an ironic mythology of the cyborg to argue that boundaries are not what we think they are; that's not my point. My point is that the fiction of the cyborg is quickly becoming reality.

On Ralph’s first point, the “break in the skin” requirement of my definition is a real concern to me. I’ve chosen it, to a degree, to do the same thing I’ve done by simply asserting that some beings are human and some are robots. I don’t want to play at the margins. I want to use the clear, nearly inarguable case, and then see where that leads us. I don’t want to get into a debate with people who want to extend the external-brain-harness reasoning to the cell phone (Andy Clark hasn’t convinced me that using a cell phone makes me a cyborg). I’d rather say, “People who use a harness to grade papers may be cyborgs, but I’m not saying they are, and I don’t need to prove that they are to show that privacy concerns will be significant as embedded cyborg technology becomes more widely used.” Those same concerns may arise for non-embedded technologies, but that’s a much easier jump to make once I’ve established the basic concerns for embedded technologies.

Finally, I think Ralph has hit on something important about the overbreadth of what can count as a cyborg under my definition. I really like this sentence: “To me, the transition from human to cyborg to robot has to include the degree of autonomous conduct that the technology can trigger.” It’s exactly the kind of thing I was hoping to come across in posting this work to the Lounge.

Thanks to both of you for commenting; I really appreciate your taking the time to do so, and I hope we can continue the conversation.

