New Study Claims AI Does Not Present an Existential Threat to Humanity

By Eric

According to new research from the University of Bath and the Technical University of Darmstadt in Germany, ChatGPT and other large language models (LLMs) do not possess the ability to learn independently or acquire new skills, thus posing no existential threat to humanity.

The study, published in the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), revealed that while LLMs can follow instructions and demonstrate proficiency in language, they cannot master new skills without explicit guidance. This makes these models inherently controllable, predictable, and safe.


The research team concluded that, even as they are trained on ever-larger datasets, LLMs can be deployed without significant safety concerns. However, the technology still carries the risk of misuse.

Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, noted, “The prevailing narrative that this type of AI poses a threat to humanity hinders the widespread adoption and development of these technologies and diverts attention from genuine issues that need addressing.”

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the research team conducted experiments to assess LLMs’ ability to complete unfamiliar tasks, a capacity commonly referred to as “emergent abilities.”

While LLMs can respond to questions about social situations without specific programming, the researchers found this capability stems from “in-context learning” (ICL), where models complete tasks based on examples provided to them.
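To make the distinction concrete, here is a minimal sketch of what in-context learning looks like in practice. The helper function and the sentiment-labeling task below are illustrative assumptions, not taken from the study; the point is that the “skill” lives entirely in the worked examples placed in the prompt, with no update to the model itself.

```python
# Minimal sketch of an in-context-learning (few-shot) prompt.
# The task (sentiment labeling) and the examples are hypothetical;
# no model is called here. The skill is supplied by the prompt,
# not by any change to the model's weights.

def build_icl_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: worked examples followed by the new case."""
    lines = ["Label the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The model is expected to continue the pattern for the unlabeled query.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("It broke within a week and support never replied.", "negative"),
]

print(build_icl_prompt(examples, "Setup took five minutes and it just works."))
```

Because the examples, rather than any retraining, define the task, the model’s behavior remains bounded by what it already learned, which is the sense in which the authors describe LLMs as predictable and controllable.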

Dr. Tayyar Madabushi remarked, “There has been a fear that larger models might unpredictably solve new problems, presenting threats through hazardous abilities like reasoning and planning. Our study demonstrates that this concern is unfounded.”

The study’s findings counter concerns, voiced by many leading AI researchers, about the potential existential threat posed by LLMs. However, the research team emphasized the importance of addressing existing risks, such as the creation of fake news and the increased potential for fraud.

Professor Gurevych added, “Our results do not imply that AI is without threat. Instead, we show that the emergence of complex thinking skills associated with specific threats is not supported by evidence. Future research should focus on other risks posed by these models.”
