Attachment and Turing: The Un-Disposal of Ubiquitous AI
There is a Garfield comic whose three panels tell a story that goes something like this:
John [optimistic]: Wouldn’t it be great if everyday items could talk? The sink would say ‘Good morning John’, and the mirror would say ‘You’re looking splendid, John’.
Garfield [cynical]: I wouldn’t like that. A blown lightbulb would be like a death in the family.
Artificial intelligence combined with ubiquitous computing, should it ever arrive, might be just what John had in mind. Not only computers and smartphones but also cars, refrigerators, perhaps even light bulbs and deposit bottles might eventually wish us a good morning, understand us, and converse with us in natural language. As for Garfield’s grim prediction, there could be backups. Indeed, the arrival of artificial intelligence has been described as the singularity, after which all bets are off: any prediction of the future becomes moot, as nobody can foresee what will happen once ever smarter machines design ever smarter machines.
I agree that predictions about future developments will become difficult once “artificial intelligence” is achieved – but not because superhuman thinking will ascend the slope of Moore’s Law; rather, because of our own limited understanding of a construct that psychologists call “intelligence”. “Intelligence is what the IQ test measures”, anyone?
I wonder whether what is commonly described as the Turing test – an automaton whose responses are indistinguishable from a human’s must be deemed intelligent – is actually a crossroads on the road to fallacy. While its elegance is captivating and its philosophical argument impeccable, the Turing test so described really measures the appearance of human-ness, not intelligence. The elicitation of emotional responses, and the emotional attachment that follows, should not be confused with intelligence.
Maybe I shouldn’t reveal this, but I already find it difficult to consistently apply the morally bad choices in Knights of the Old Republic or Mass Effect. To illustrate my point, let’s spin the tale of voice recognition and natural language processing in smartphones forward to its logical conclusion: how am I ever going to dispose of a device so “intelligent” that I cannot distinguish its responses from a human being’s? Would a company making Turing-test-passing devices be as unsuccessful as a car company manufacturing long-lasting cars?
Or what would we become if “ubiquitous intelligent technology” conditioned us to such a low level of empathy that we could?