Will AI eventually be as smart as or smarter than humans?

President Biden offered his most forthright warning to date about the power of artificial intelligence in a speech at the Air Force Academy earlier this month, saying the technology may “overtake human thinking” in the not-too-distant future.

“It’s not going to be easy,” Biden warned, referring to a recent Oval Office discussion with “eight leading scientists in the area of AI.”

“We’ve got a lot to deal with,” he added. “A fantastic opportunity, but a lot to deal with.”

To anyone who has played with OpenAI’s ChatGPT, Microsoft’s Bing, or Google’s Bard, the president’s dire prediction certainly sounded more like science fiction than science.

The latest round of generative AI chatbots is impressive, a sceptic might concede. They can help you plan a family vacation, practise difficult real-life conversations, summarise complex academic papers, and “explain fractional reserve banking at a high school level.”

But what does it mean to “overtake human thinking”? That’s quite a leap.

However, in recent weeks, some of the world’s most prominent AI scientists — people who know far more about the subject than, say, Biden — have begun to raise concerns about what is to come.

The technology that powers ChatGPT today is a large language model, or LLM. Trained to recognise patterns in massive volumes of text — the vast bulk of what is on the internet — these models take any sequence of words they are given and predict which words are likely to come next. They are a sophisticated example of what is often called narrow AI: a model designed to solve a specific problem or deliver a specific service. In this case, LLMs keep getting better at chatting, but they cannot teach themselves other tasks.
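To make that “predict what comes next” framing concrete, here is a deliberately simplified sketch in Python that guesses the next word from simple bigram (word-pair) counts. It is only a loose analogy, not how LLMs are actually built: real LLMs are large transformer neural networks trained on vast token corpora, not frequency tables, and the tiny corpus and function names below are invented for this illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on a large slice of the public internet.
corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (follows "the" twice; "mat" and "dog" once each)
print(predict_next("on"))   # -> 'the'
```

An LLM does the same kind of next-word guessing, but over billions of learned parameters rather than a lookup table, which is what lets it generalise far beyond its training sentences.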

Or can they?

For decades, researchers have speculated about a higher level of machine intelligence known as “artificial general intelligence,” or AGI: software that can learn any task or subject. AGI, often called “strong AI,” would be a machine that can do everything the human brain can do.

In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems had displayed “sparks of artificial general intelligence.” What else, they asked, could account for it “coming up with humanlike answers and ideas that weren’t programmed into it”?

In April, computer scientist Geoffrey Hinton — a neural network pioneer regarded as one of the “Godfathers of AI” — resigned from Google so that he could speak freely about the risks of AGI.

In May, a group of business heavyweights (including Hinton) issued a one-sentence statement warning that AGI might pose an existential threat to humanity on par with “pandemics and nuclear war” if its goals are not aligned with ours.

“The idea that this stuff could actually get smarter than people — there were a few people who believed that,” Hinton told the New York Times. “However, most people thought it was a long shot. And I thought it was a long shot. I assumed it would be 30 to 50 years or possibly longer. Clearly, I no longer believe that.”

Of course, each of these foreboding moments has sparked debate. (More on that in a moment.) But together they have intensified one of the tech world’s most heated debates: Are machines that can outthink the human brain unrealistic or inevitable? And could we be much closer to opening Pandora’s box than most people believe?

Why there’s debate

There are two reasons why concerns about AGI have suddenly become more credible – and serious.

The first is the unexpected speed of recent AI advances. “Look at how it was five years ago and how it is now,” Hinton told the New York Times. “Take the difference and spread it around. That’s terrifying.”

The second factor is uncertainty. Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, told CNN that he could not explain the inner workings of today’s LLMs.

As a result, no one knows where artificial intelligence goes from here. Many experts predict that AI will eventually evolve into AGI. Some believe that AGI will not arrive for a long time, if ever, and that overhyping it diverts attention from more pressing challenges, such as AI-fuelled misinformation or job losses. Others believe the shift is already underway. A smaller group worries that, once it starts, progress could escalate dramatically. As the New Yorker put it, “a computer system [that] can write code — as ChatGPT already can — … might eventually learn to improve itself over and over again until computing technology reaches what’s known as ‘the singularity’: a point at which it escapes our control.”

“My confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better” at certain things, Hinton recently told the Guardian. He went on to estimate that true AGI could be five to twenty years away.

“I’m dealing with a lot of uncertainty right now,” Hinton added. “However, I wouldn’t rule out another year or two. And I still wouldn’t rule out a hundred years. People who are confident in this circumstance are insane, in my opinion.”

Perspectives

Today’s artificial intelligence is simply not nimble enough to mimic human intellect.

“While artificial intelligence is progressing — synthetic images are becoming more realistic, and speech recognition can often work in noisy environments — we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is mired in the same problems that academic scientists (including myself) have been highlighting for years: getting AI to be dependable and dealing with uncommon conditions.” — Gary Marcus in Scientific American

There is nothing that ‘biological’ brains can do that their digital equivalents will not be able to replicate (at some point).

“I’m frequently told that AGI and superintelligence will never happen because they’re impossible: human-level intelligence is a mysterious thing that can only exist in brains. Such carbon chauvinism ignores a key AI revolution insight: intelligence is all about information processing, and it makes no difference whether the information is handled by carbon atoms in brains or silicon atoms in computers. AI has been ruthlessly outperforming humans on task after task, and I challenge carbon chauvinists to quit shifting the goalposts and openly predict which tasks AI will never be able to do.” — Max Tegmark in Time

Perhaps AGI is already here if we consider what ‘general’ intelligence might imply.

“These days, my opinion is that this is AGI, in the sense that it is a type of intelligence that is general — but we need to be a little less, you know, hysterical about what AGI means. We’re getting a lot of raw intellect that doesn’t always come with an ego-viewpoint, ambitions, or a feeling of coherent self. That is very fascinating to me.” — Noah Goodman, Stanford University associate professor of psychology, computer science, and linguistics, to Wired

We may never agree on what AGI is – or when we will have attained it.

“It’s a philosophical question. So, because we’re a scientific field, it’s a difficult moment to be in our field in certain ways. It’s quite improbable that there will be a single moment when we can tick the box and declare, ‘AGI achieved.’” — Sara Hooker, director of a machine learning research group, to Wired