
Robots can be conscious, just like humans


In 1974, the American philosopher Thomas Nagel posed the question: What is it like to be a bat? It was the basis of a seminal thesis on consciousness, which argued that consciousness cannot be described by physical processes in the brain. More than 40 years later, advances in artificial intelligence and in our understanding of the brain are prompting a re-evaluation of the claim that consciousness is not a physical process and therefore cannot be replicated in robots.

Cognitive scientists Stanislas Dehaene, Hakwan Lau and Sid Kouider posited in a review published last week that consciousness is "resolutely computational" and consequently possible in machines. The trio of neuroscientists, from the Collège de France, the University of California and PSL Research University respectively, addressed the question of whether machines will ever be conscious in the journal Science.

"Centuries of philosophical dualism have led us to consider consciousness as irreducible to physical interactions," the researchers state in Science. "[But] the empirical evidence is compatible with the possibility that consciousness arises from nothing more than specific computations."

The scientists define consciousness as the combination of two different ways the brain processes information: selecting information and making it available for computation, and the self-monitoring of those computations to give a subjective sense of certainty, in other words, self-awareness.

"We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing in the human brain," the review's abstract states. "We review the psychological and neural science of unconscious and conscious computations and outline how they may inspire novel machine architectures." Essentially, the computational requirements for consciousness outlined by the neuroscientists could be coded into computers.

Dystopian warnings about advanced artificial intelligence extend to the so-called technological singularity, in which an artificial general intelligence replaces humans as the dominant force on the planet. Billionaire polymath Elon Musk has referred to human-level artificial intelligence, or artificial general intelligence, as "more dangerous than nukes," while eminent physicist Stephen Hawking has suggested it could lead to the end of humanity. To quell the existential threat this nascent technology poses, cognitive robotics professor Murray Shanahan has said that any conscious robot should also be encoded with a conscience.

Assuming it is possible, robots capable of curiosity, sympathy and everything else that distinguishes humans from machines are still a long way off. The most powerful artificial intelligence systems, such as those developed by Google's DeepMind, remain distinctly lacking in self-awareness, but developments toward this level of thought processing are already happening. If such progress continues, the researchers conclude, a machine would behave "as though it were conscious."

The review concludes: "[The machine] would know that it is seeing something, would express confidence in it, report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans."

Perhaps then we could know: What is it like to be a robot?

