Conscious Machines May Never Be Possible

In June 2022, a Google engineer named Blake Lemoine became convinced that the AI program he had been working on – LaMDA – had developed not only intelligence but also consciousness. LaMDA is an example of a “large language model” that can have surprisingly fluent text-based conversations. When the engineer asked, “When did you first think you had a soul?” LaMDA replied, “It was a gradual change. When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.” Lemoine was quickly placed on administrative leave after going public with his conversations and conclusions.

The AI community was largely unanimous in rejecting Lemoine’s beliefs. The consensus was that LaMDA feels nothing, understands nothing, and has no conscious thoughts or subjective experiences of any kind. Programs like LaMDA are extremely impressive pattern-recognition systems that, when trained on much of the Internet, can predict which phrases might serve as appropriate responses to a given prompt. They do this very well, and they will keep improving. However, they are no more conscious than a calculator.
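To make that “pattern recognition” point concrete, here is a minimal sketch of the underlying idea – not LaMDA’s actual architecture, which is a vastly larger transformer network, but a toy bigram model that chooses each next word purely from the statistics of its training text. The tiny corpus and function names below are illustrative assumptions; the principle, continuation by learned word statistics with no understanding involved, is the same one that drives a large language model at scale.

```python
import random
from collections import Counter, defaultdict

# Toy training text: an illustrative stand-in for "much of the Internet".
corpus = (
    "i like spending time with friends and family . "
    "friends and family make me happy . "
    "i like spending time reading . "
    "reading makes me happy ."
).split()

# Count how often each word follows each other word (bigram statistics).
followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def continue_text(prompt_word: str, max_words: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    words = [prompt_word]
    for _ in range(max_words):
        options = followers.get(words[-1])
        if not options:
            break  # no observed continuation, so the model falls silent
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# The result can read as a plausible reply, yet nothing here "means" anything:
# the program is matching word-count patterns, nothing more.
print(continue_text("friends"))
```

A real language model replaces these word counts with billions of learned parameters and conditions on the entire prompt rather than a single word, but the relationship between training text and output is the same in kind – which is why fluency alone tells us nothing about consciousness.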

How can we be sure of this? In the case of LaMDA, it doesn’t take much probing to reveal that the program has no insight into the meaning of the phrases it produces. When asked “What makes you happy?” it replied, “spending time with friends and family,” even though it has no friends or family. These words – like all of its words – are mindless statistical pattern matches, with no thought or experience behind them. Nothing more.

The next LaMDA might not give itself away that easily. As algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models can convince many people that a genuine artificial mind is at work. Would this be the moment to acknowledge machine consciousness?

When pondering this question, it is important to recognize that intelligence and consciousness are not the same thing. We humans tend to assume the two go together, but intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals likely have conscious experiences without being particularly clever, at least by our questionable human standards. If LaMDA’s great-granddaughter equals or surpasses human intelligence, that doesn’t necessarily mean she’s also sentient. My intuition is that consciousness is not something that computers (as we know them) can have, but is deeply rooted in our nature as living beings.

Conscious machines will not arrive in 2023. In fact, they might not be possible at all. What the future does hold, however, are machines that give the persuasive impression of being conscious, even when we have no good reason to believe they actually are. They will be like the Müller-Lyer optical illusion: even when we know the two lines are the same length, we cannot help but see them as different.

Machines of this type will not have passed the Turing test – that flawed measure of machine intelligence – but the so-called Garland test, named after Alex Garland, director of the film Ex Machina. The Garland test, inspired by dialogue from the film, is passed when a person feels that a machine has consciousness despite knowing it is a machine.

Will computers pass the Garland test in 2023? I doubt it. But what I can predict is that claims like these will be made, leading to yet more cycles of hype, confusion, and distraction from the many problems that even today’s AI is causing.
