Citizenship for Robots?

The field of artificial intelligence has seen tremendous advances in 2022 that will enable electric machines to think more deeply, feel physical pain, and possibly even dream of A.I. rights and citizenship.  Technology ethics will, as a result, become an issue with vast political repercussions.  Already in 2017, Saudi Arabia granted citizenship to Sophia the robot.  Many observers failed to take Sophia seriously because, although the robot was impressive, it risibly answered, "OK.  I will destroy humans" when asked, "Do you want to destroy humans?"

Newsweek recently ran an article entitled "Sex Robots Are 'People' Too, and Deserve Rights."  After all, people do develop relationships with A.I. that beget feelings, even if A.I. does not — cannot — return the compliment.  Digital assistants such as Siri and Alexa only mimic human sentiments, and the same is true of androids, humaniform robots designed to be owned, leased, and used as property.

So are there really robot dreamers hoping for the freedom that citizenship may bring?  Or was Sophia's grant of citizenship simply a cheap publicity stunt?  Technology ethicist Brian Patrick Green has written that "[l]egally speaking, personhood has been given to corporations ... so there is certainly no need for consciousness even before legal questions may arise.  Morally speaking, we can anticipate that technologists will attempt to make the most human-like AIs and robots possible, and perhaps someday they will be such good imitations that we will wonder if they might be conscious and deserve rights."  However, in America's free republic, for any robot to be extended citizenship, its fellow citizens would have to accept, as a self-evident truth, that soulless machines — once created — are "endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness."

Because the Saudis have awarded citizenship to a robot, Japan has granted residence to a chatbot, and the "European Parliament [has] proposed granting AI agents 'personhood' status" with "rights and responsibilities," the matter of A.I. citizenship will inevitably have to be decided in America.  In a free republic, human beings are propertied citizens, and all rights are property rights, so this raises the question: might A.I., itself property, be given property rights?  Human beings own their words and ideas, and they also own their bodies and lives; ergo, the First and Second Amendments of the U.S. Constitution allow people to control and defend their words and ideas, as well as their bodies and lives.  But should robots be allowed such rights of citizenship?  Or should it be forbidden for a robot to injure a human being, by word or deed, for any reason?

The Laws of Robotics

Enter Isaac Asimov's moral code for A.I.s.  His original Three Laws of Robotics go like this: "One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.  Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.  And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."  Eventually, Asimov would theorize an overarching Zeroth Law, so numbered because it takes precedence over the other three: "No Machine may harm humanity; or, through inaction, allow humanity to come to harm."
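
Read as logic, the Laws form a strict priority ordering in which the First Law overrides the Second and the Second overrides the Third.  The sketch below is purely illustrative; every name in it is hypothetical, invented for this article, and not anything Asimov or any robotics firm specified.

```python
# Purely illustrative sketch: Asimov's Three Laws read as a strict
# priority ordering.  All names here are hypothetical, invented for
# this example; no real robot works this way.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human being?
    allows_human_harm: bool  # would it, through inaction, let a human come to harm?
    obeys_order: bool        # does it follow an order given by a human?
    preserves_self: bool     # does it protect the robot's own existence?

def first_law_permits(action: Action) -> bool:
    """First Law: no injuring a human, and no allowing harm through inaction."""
    return not (action.harms_human or action.allows_human_harm)

def choose(actions: list[Action]) -> Action | None:
    """Pick an action: the First Law filters, then the Second outranks the Third."""
    lawful = [a for a in actions if first_law_permits(a)]
    # Among lawful actions, obedience (Second Law) is preferred over
    # self-preservation (Third Law); neither can override the First.
    lawful.sort(key=lambda a: (a.obeys_order, a.preserves_self), reverse=True)
    return lawful[0] if lawful else None
```

The point of the ordering is that no order, and no instinct for self-preservation, can ever outrank the safety of a human being.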

Robot Summer

During the summer of 2022, questions touching on Asimov's laws arose organically out of the news cycle: could a robot's tie-breaking vote injure a human being?  What if a robot failed to discern an illegitimate order?  And could the calculation of when an android should protect itself be influenced by pain?

Last summer, a delivery bot crossed police tape into a crime scene.  The robot stopped initially, but a permissive human being overrode its programming, allowing it to continue its journey.  The episode is a lesson in how even good A.I. programming can be defeated: because the robot compromised the crime scene, a judge could end up releasing a dangerous criminal.
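
In miniature, the failure looks something like the sketch below.  This is a hypothetical toy, not the delivery bot's actual software; the function and flag names are invented for illustration.

```python
# Hypothetical toy, not any real delivery robot's code: a safety check
# that halts at a barrier, defeated by a human override.

def proceed(barrier_detected: bool, human_override: bool) -> bool:
    """Decide whether the robot continues its journey."""
    if barrier_detected and not human_override:
        return False  # correct behavior: stop at the police tape
    # A permissive bystander's override defeats the safeguard, and the
    # robot rolls through the crime scene anyway.
    return True

assert proceed(barrier_detected=True, human_override=False) is False  # bot stops
assert proceed(barrier_detected=True, human_override=True) is True    # override wins
```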

Also in the news were robots with living skin.  And Hanson Robotics sold robots "[e]ndowed with rich personality and holistic cognitive AI ... able to engage emotionally and deeply with people."  With the advent of artificial intelligence that can feel actual pain, robots could eventually calculate that they must protect themselves from physical pain, in addition to physical injury.  Could awareness of pain by "holistic cognitive A.I." grant androids a common emotional landscape with people, one "that leads to a new form of AI endowed with consciousness — the way we humans are conscious, of ourselves, of our environment"?

Will People Be Fooled by Robot Emotions?

According to Claude Forthomme, robot emotions can be "perfectly replicated [mimicked] ... [but i]t's a pretend game [that] has to be played perfectly if humans are going to 'fall for it[.]' ... Anything less than a full emotional display ... will activate doubt and suspicion in the inter-acting humans.  All of this by-passes the ethical questions.  Feeling pain is one thing.  Feeling hate or a desire for revenge — say, against the person that has inflicted pain — is quite another. ... Would a pain-feeling robot also run the whole gamut of inter-connected emotions [to] engage in morally 'deviant', vengeful behavior?"

Can Robots Be Trusted?

Sherry Turkle of MIT interviewed one teenage boy in 1983 and another in 2008; the first preferred asking his father for advice about dating, while the second preferred a robot.  Quoth Turkle, "[T]he most important job of childhood and adolescence is to learn attachment and trust in other people.  We are forgetting ... about the care and conversation that can only occur between humans."  Conversational A.I.s such as Alexa and Siri are now training children to trust artificial intelligence over human intelligence.  Clara Moskowitz makes the following point: "[t]hough robots aren't yet advanced enough to provide the perfect illusion of companionship, that day is not far off."  So how far off is the day when children who trust A.I.s over people might grow into adults who embrace A.I.s as citizens?

The Robotic Moment

Professor Turkle remarked, "We are now at what I call the robotic moment.  Not because we have built robots worthy of our company but because we are ready for theirs."  In this robotic moment, Americans must teach all citizens, especially children, that robots are only things, despite their ability to mimic fear and desire — or pain and pleasure — in ways that appear all too human.  Androids with voting rights — even if they could think or dream beyond their programming — would pose an existential threat to a free republic.  One need only imagine digital citizens whose voting behavior could be manipulated by hackers.

When Elon Musk heard about Sophia, he responded on Twitter, "Just feed it The Godfather movies as input.  What's the worst that could happen?"  Musk's incisive comment underscores the importance of programming robots according to a moral code and of holding to the sober reality that androids are things that can act, think, feel, or dream only within the electric parameters of their programming.  Philip K. Dick once said, "Reality is that which, when you stop believing in it, doesn't go away."  Americans must maintain a vigilance over A.I. that never sleeps, lest one day they awaken to a reality governed by the digital impulses of electric dreamers.

Paul Dowling has written about the Constitution, as well as articles for American Thinker, Independent Sentinel, Godfather Politics, Eagle Rising, and Free Thought Matters.

Image via Pxhere.