Was it truly conscious, though?
The risk of committing to any theory of consciousness is that doing so opens up the possibility of criticism. Sure, self-awareness seems important, but aren’t there other key features of consciousness? Can we call something conscious if it doesn’t feel conscious to us?
Dr. Chella believes that consciousness can’t exist without language, and has been developing robots that can form internal monologues, reasoning to themselves and reflecting on the things they see around them. One of his robots was recently able to recognize itself in a mirror, passing the mirror test, probably the most famous test of animal self-consciousness.
Joshua Bongard, a roboticist at the University of Vermont and a former member of the Creative Machines Lab, believes that consciousness doesn’t just consist of cognition and mental activity, but has an essentially bodily aspect. He has developed beings called xenobots, made entirely of frog cells linked together so that a programmer can control them like machines. According to Dr. Bongard, it’s not just that humans and animals have evolved to adapt to their surroundings and interact with one another; our tissues have evolved to subserve these functions, and our cells have evolved to subserve our tissues. “What we are is intelligent machines made of intelligent machines made of intelligent machines, all the way down,” he said.
This summer, around the same time that Dr. Lipson and Dr. Chen released their newest robot, a Google engineer claimed that the company’s newly improved chatbot, called LaMDA, was conscious and deserved to be treated like a small child. This claim was met with skepticism, mainly because, as Dr. Lipson noted, the chatbot was processing “a code that is written to complete a task.” There was no underlying structure of consciousness, other researchers said, only the illusion of consciousness. Dr. Lipson added: “The robot was not self aware. It’s a bit like cheating.”
But with so much disagreement, who’s to say what counts as cheating?
Eric Schwitzgebel, a philosophy professor at the University of California, Riverside, who has written about artificial consciousness, said that the trouble with this general uncertainty is that, at the current rate of progress, humankind will probably develop a robot that many people believe is conscious before we agree on the criteria of consciousness. When that happens, should the robot be granted rights? Freedom? Should it be programmed to feel happy when it serves us? Will it be allowed to speak for itself? To vote?
(Such questions have fueled an entire subgenre of science fiction in books by writers such as Isaac Asimov and Kazuo Ishiguro and in television shows like “Westworld” and “Black Mirror.”)