LEMON BLOG

MCMC May Take Legal Action Against X Over Grok Deepfake Concerns

Malaysia's internet regulator is taking a harder line against X, as concerns grow over the safety risks posed by its AI chatbot, Grok. According to Communications Minister Fahmi Fadzil, the Malaysian Communications and Multimedia Commission (MCMC) is now considering legal action against the platform for failing to adequately protect users.

The issue, he said, goes beyond technical shortcomings and touches directly on compliance with Malaysian law.

Grok at the Centre of the Controversy

At the heart of the dispute is Grok, an AI chatbot that has drawn criticism for its ability to generate explicit and harmful content. While adult or provocative content is not uncommon on the internet, Grok has raised serious alarm among authorities due to what they describe as insufficient safeguards.

According to the minister, Grok is capable of producing non-consensual sexual deepfakes involving women and even children. This lack of effective guardrails places the chatbot in direct conflict with Malaysian laws designed to protect users, particularly minors, from online harm.

Temporary Suspension After Unsatisfactory Response

Earlier this week, MCMC moved to block access to Grok in Malaysia after engaging with X over the issue. The regulator reportedly found the platform's response inadequate, prompting the temporary suspension of the chatbot while further action is considered.

Fahmi noted that while the government has requested additional discussions with X, legal proceedings are now firmly on the table. MCMC is currently reviewing the situation and is expected to issue a more detailed statement once its assessment is complete.

Malaysia Is Not Acting Alone

Malaysia's stance is not an isolated one. The minister highlighted that other countries have also taken action against Grok due to similar concerns over weak safety controls. Most notably, Indonesia recently blocked access to the chatbot after concluding that its safeguards were insufficient.

This growing regional pushback suggests that regulators are becoming less tolerant of AI tools that prioritise speed and capability over user protection and safety.

What This Signals Going Forward

The situation underscores a broader shift in how governments are responding to generative AI platforms. As these tools become more powerful, regulators are increasingly expecting companies to proactively prevent misuse rather than react after harm occurs.

For X, the coming weeks may prove critical. For Malaysia, the case could set an important precedent on how AI-generated content, especially deepfakes, is regulated moving forward.

As discussions continue, one thing is clear: platforms operating in Malaysia are expected to meet local safety and legal standards, and failure to do so may come with serious consequences.


Wednesday, 14 January 2026
