SHAH ALAM – The rise of artificial intelligence (AI)-generated fake accounts and misinformation is posing an increasing threat to digital spaces, undermining trust and distorting public discourse.
While social media platforms have implemented various countermeasures, experts warned that the rapid advancement of AI-driven deception necessitated a stronger and more adaptive response.
Taylor’s University School of Media and Communication (SOMAC), Faculty of Social Sciences and Leisure Management senior research fellow Associate Professor Dr Massila Hamzah said tackling AI-generated fake accounts required a multi-faceted approach combining advanced detection tools, regulatory oversight and global cooperation. She said Malaysia could learn from international efforts to enhance scam detection, early intervention and the swift removal of fraudulent content.
"One strategic approach is fostering international collaboration by learning from successful initiatives in other countries.
"Direct reporting mechanisms should be strengthened to enable users to flag suspicious accounts efficiently, allowing for immediate action before misinformation spreads widely," she said when contacted.
She added that regulatory monitoring was crucial in addressing AI-driven deception, particularly in Malaysia, where the Malaysian Communications and Multimedia Commission (MCMC) oversees digital safety.
However, she noted that the growing prevalence of AI-generated misinformation necessitated a review of existing enforcement strategies.
"Perhaps, integrating global strategies that have proven to be effective as well as best practices employed by the authorities in other regions would be deemed necessary.
"Likewise, by analysing international efforts to combat AI-driven deception, Malaysia can refine its policies and enforcement mechanisms to better suit local challenges and ensure a safer digital ecosystem," she said.
Massila also stressed that social media platforms must remain proactive in adapting to emerging threats by implementing advanced AI detection systems, increasing transparency in AI-generated content and educating users on recognising and reporting deceptive activities.
Meanwhile, Malaysian Research Accelerator for Technology and Innovation (MRANTI) Innovation Commercialisation head and AI expert Dr Afnizanfaizal Abdullah highlighted the importance of combining technological advancements with stricter user verification protocols and public awareness initiatives.
"Advanced artificial intelligence and machine learning algorithms can analyse behavioural patterns, detect bot-like activity and flag deepfake content.
"Platforms like X (formerly known as Twitter) have already utilised such technologies to identify and remove millions of spam accounts daily.
"Image recognition technologies can scrutinise profile pictures for signs of AI generation, as research indicates that AI-generated faces often contain subtle artefacts like eye misalignment," he told Sinar Daily.
Afnizanfaizal further suggested that stronger identity verification measures should be introduced, such as multi-factor authentication, CAPTCHA tests and biometric verification.
He noted that requiring users to verify their identities through official documentation or digital identification can enhance accountability while safeguarding legitimate users’ rights.
"Transparency tools such as AI-generated content labelling and digital watermarking ensure users can identify manipulated media," he added.
He also highlighted the role of human moderation, fact-checking initiatives and crowdsourced reporting in maintaining content integrity.
He said platforms must enforce clear policies against coordinated disinformation campaigns and educate users on identifying AI-driven misinformation.
Citing a recent example, Afnizanfaizal pointed out that Medium has faced a surge in AI-generated content, with studies estimating that nearly 47 per cent of posts were AI-created.
However, he noted that the platform’s filtering systems effectively prevented most AI-generated content from gaining widespread visibility.
"By integrating AI detection, verification protocols, policy enforcement and public awareness initiatives, social media platforms can significantly mitigate the risks posed by AI-generated fake accounts and foster a more authentic and trustworthy online environment," he added.