AI Regulation Faces Uncertainty as Global Powers Pursue Divergent Approaches

Hassan Khan

As the world prepares for the Paris AI summit on February 10-11, countries are adopting diverse approaches to regulating artificial intelligence (AI), leading to a fragmented global governance landscape.

The United States, under returning President Donald Trump, revoked the executive order on AI oversight issued by Joe Biden in October 2023. That directive, which relied largely on voluntary compliance, had asked AI developers such as OpenAI to submit safety assessments and key data to the government. Though supported by several tech firms, the order aimed to protect privacy and prevent civil rights violations; with it gone, the U.S. has no formal AI regulatory framework and instead relies on existing privacy laws. Digital lawyer Yael Cohen-Hadria likened the situation to a “Wild West” environment, with the government opting for minimal regulation.

In contrast, China is advancing a legal framework for generative AI while implementing interim measures that mandate respect for personal data, consent for usage, labeling of AI-generated content, and user safety. AI systems are also required to “adhere to core socialist values,” meaning they cannot criticize the Communist Party or jeopardize national security. The DeepSeek AI model, for example, refused to respond to inquiries about President Xi Jinping or the 1989 Tiananmen Square protests. Cohen-Hadria noted that China would impose strict regulations on businesses, especially foreign ones, but would allow itself exceptions.

The European Union, on the other hand, prioritizes ethics in its AI laws. Its “AI Act,” passed in March 2024, is considered the world’s most comprehensive AI regulation. It prohibits AI systems like predictive policing that profile individuals based on sensitive attributes such as race, religion, or sexual orientation. The law introduces a risk-based approach, subjecting high-risk AI systems to more rigorous compliance standards. Cohen-Hadria highlighted the EU’s strong intellectual property protections and facilitation of controlled data circulation, which she believes will accelerate innovation.

India, which has no AI-specific regulations, applies existing laws on defamation, privacy, and cybercrime to AI-related issues. While there have been numerous discussions about AI regulation, concrete legislative action has been scarce. In March 2024, the Indian government issued an advisory requiring companies to seek approval before deploying untested AI models, which sparked backlash, particularly from AI firms like Perplexity. Following controversy over a Google AI accusing Prime Minister Narendra Modi of fascist policies, the government revised the advisory to require only disclaimers on AI-generated content.

The UK, the third-largest AI market after the U.S. and China, has incorporated AI regulation into its economic growth strategy. Prime Minister Keir Starmer introduced an “AI opportunities action plan” in January, focusing on testing AI before formal regulation. The plan emphasizes that well-designed regulation can foster rapid, safe AI development, while ineffective regulation may hinder adoption in crucial sectors.

Internationally, the Global Partnership on Artificial Intelligence (GPAI), comprising over 40 countries, aims to promote responsible AI use. The French presidency confirmed a broader meeting for the 2025 action plan. Additionally, in May 2024, the Council of Europe adopted the world’s first binding AI treaty, signed by the U.S., UK, and EU.

Despite these international efforts, AI governance remains uneven. Of 193 UN member states, only seven participate in major AI governance frameworks, while 119, mostly in the Global South, are outside any initiative.
