Here’s how human consciousness works—and how a machine might replicate it

I recently attended a panel discussion titled Being Human in the Age of Intelligent Machines. At one point during the evening, a philosophy professor from Yale said that if a machine ever became conscious, then we would probably be morally obligated not to turn it off. The implication was that if something is conscious, even a machine, then it has moral rights, so turning it off is equivalent to murder. Wow! Imagine being sent to prison for unplugging a computer. Should we be concerned about this?

Most neuroscientists don’t talk much about consciousness. They assume that the brain can be understood like every other physical system, and consciousness, whatever it is, will be explained in the same way. Since there isn’t even agreement on what the word consciousness means, it is best not to worry about it.

Philosophers, on the other hand, love to talk (and write books) about consciousness. Some believe that consciousness is beyond physical description. That is, even if you had a full understanding of how the brain works, it would not explain consciousness. Philosopher David Chalmers famously claimed that consciousness is “the hard problem,” whereas understanding how the brain works is “the easy problem.” This phrase caught on, and now many people just assume that consciousness is an inherently unsolvable problem.

Personally, I see no reason to believe that consciousness is beyond explanation. I don’t want to get into debates with philosophers, nor do I want to try to define consciousness. However, the Thousand Brains Theory suggests physical explanations for several aspects of consciousness.
