Ukrainian Student’s Image Misused for AI Personas on Chinese Social Media

Olga Loiek, a University of Pennsylvania student from Ukraine, experienced an unsettling misuse of her image after launching a YouTube channel in November last year. Shortly after her debut, Loiek discovered that her likeness was being used to create AI-generated alter egos on Chinese social media platforms. These digital doppelgangers, such as “Natasha,” portrayed Russian women fluent in Chinese who expressed gratitude to China for supporting Russia and sold products such as Russian candies.

Loiek’s virtual counterparts amassed hundreds of thousands of followers in China, significantly more than her actual social media presence. “This is literally like my face speaking Mandarin and, in the background, I’m seeing the Kremlin and Moscow, and I’m talking about how great Russia and China are,” Loiek told Reuters, describing her discomfort at seeing AI-generated versions of herself promoting views she would never endorse.

Her case highlights a growing trend of AI-generated personas on Chinese social media, often built from the images of real women without their knowledge. These avatars sell products by playing on pro-Russia sentiment tied to the Russia-China “no limits” partnership declared in 2022, just before Russia invaded Ukraine.

Experts note that the technology for creating such realistic AI likenesses is widely available. Jim Chai, CEO of XMOV, which develops advanced AI technology, explained, “For example, to produce my own 2D digital human, I just need to shoot a 30-minute video of myself, and then after finishing that, I re-work the video. Of course, it looks very real, and of course, if you change the language, the only thing you have to adjust is the lip-sync.”

The use of AI to create and disseminate content raises significant ethical and legal concerns, especially as generative AI systems such as ChatGPT become more popular. China has responded by issuing draft guidelines for standardizing the AI industry, aiming to establish more than 50 national and industry-wide standards by 2026. The European Union’s AI Act, which imposes strict transparency obligations on high-risk AI systems, entered into force this month and has set a potential global benchmark.

However, regulation struggles to keep pace with rapid AI advancements. Xin Dai, an associate professor at Peking University Law School, highlighted the immense volume of AI-generated content as a critical issue. “We can only predict that with increasingly powerful tools for creating information, creating content, and disseminating content to become available basically every next minute,” Dai said, emphasizing the global challenge posed by the proliferation of AI-generated media.