Malaysia is moving toward a more "grown-up" way of dealing with artificial intelligence: not by banning it, not by pretending it's just another app feature, but by building a national framework that spells out who's responsible for what when AI systems are created, deployed, and used at scale. That's the basic direction of the upcoming AI Governance Bill, according to Digital Minister Gobind Singh Deo.
And the timing isn't random.
Across the internet (and in Malaysia too), the scariest AI problems right now aren't "AI will become sentient." They're much more human: scams, impersonation, reputational attacks, and deepfakes that can pressure, shame, trick, or financially drain victims. In other words, the problem isn't just cybersecurity anymore. It's manipulation at industrial speed.
"AI governance" sounds abstract… until you ask: who's accountable?
When people hear "AI law," they often imagine rules aimed at regular users. But the bill's emphasis (based on what has been said publicly so far) is more about the professionals and organisations involved across the lifecycle of AI systems: the people who build models, integrate them into products, deploy them in real environments, and maintain them after launch.
That lifecycle framing matters because a lot of harm happens in the messy middle, not at the moment of invention.
For example: a model might be trained by one company, wrapped into a product by a second, deployed by a third, and maintained (or quietly neglected) after launch, and when something goes wrong, each party can plausibly point at the others.

A governance framework tries to cut through that accountability fog by setting common expectations for safe practice and responsibility-sharing.
Why the bill is talking about "production safeguards"
"Safeguards" can sound like corporate jargon, but the idea is simple: if AI is being produced and deployed, there should be consistent safety hygiene around it.
Public reporting so far describes the bill as aiming for safeguards in AI production, and a coherent national framework rather than a patchwork of one-off rules.
In practical terms (without pretending we've seen the final draft), AI safeguards in many countries usually end up touching areas like:

- risk assessment before a system is deployed, especially for high-impact uses
- transparency and labeling of AI-generated content
- data governance and documentation of how models were trained
- human oversight, with the ability to intervene or switch a system off
- incident reporting when something goes wrong in production

Malaysia's key challenge will be balancing this so innovation doesn't get strangled… but safety isn't treated like an optional plugin.
Deepfakes: the problem isn't the tech, it's the leverage
The deepfake angle keeps coming up because deepfakes have evolved from "weird internet trick" into "scam toolkit."
A deepfake doesn't need to fool everyone. It just needs to fool one person at the right moment:

- a finance clerk who gets a video call from someone who looks and sounds exactly like the CEO, asking for an urgent transfer
- a parent who hears what sounds like their child's voice pleading for money
- a victim threatened with a fabricated intimate image unless they pay

That's why the minister's comments frame deepfakes as increasingly serious and tied to victim manipulation, not just a technical security issue.
"Don't we already have laws for this?" Yes — and the government keeps pointing to them
Even before an AI Governance Bill lands, Malaysia already has laws used to act against harmful outcomes involving AI-generated content, especially when it crosses into obscenity, harassment, or scams.
For instance, Malaysian authorities have reiterated that manipulating images (including misuse of AI) to produce obscene, highly offensive, or harmful material can be treated as an offence.
And when the harm is a classic scam, the legal framing doesn't magically change just because AI was involved. The Penal Code provisions on cheating are commonly referenced in this context, including:

- Section 415, which defines cheating
- Section 416, which covers cheating by personation (directly relevant when someone impersonates a real person)
- Section 420, which covers cheating and dishonestly inducing delivery of property

So the message is basically: "AI doesn't give you a legal immunity cloak."
The interesting part: tools to fight deepfakes, not just rules to punish them
One notable detail is that Malaysia isn't only talking about enforcement. There's also talk of capability-building: developing AI-based tools to verify the authenticity of images and videos to support cybercrime investigations.
This has been reported as a collaboration involving CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, including in official ministry communications framing it as a proactive step against online scams.
That's a practical approach: if deepfakes lower the cost of lying, verification tools try to lower the cost of proving what's real.
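To make "verification tooling" less abstract, here's a minimal, hypothetical sketch of the kind of triage an investigator's toolkit might run before any deep-learning detector gets involved: hash the file so the evidence can be re-verified later, and check whether camera metadata is present. The file name and the "no EXIF is suspicious" heuristic are illustrative assumptions only, not a description of the actual CyberSecurity Malaysia / UKM tools.

```python
# Illustrative triage sketch only; not the actual CyberSecurity Malaysia / UKM tooling.
# Assumes Pillow is installed: pip install Pillow
import hashlib
from PIL import Image

def triage_image(path: str) -> dict:
    # Hash the raw bytes so the evidence file can be re-verified later.
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()

    # Pull EXIF metadata; AI-generated images and re-encoded screenshots
    # often carry little or no camera metadata.
    exif = Image.open(path).getexif()

    return {
        "sha256": sha256,
        "exif_tags": len(exif),
        # Missing metadata is only a weak signal: a prompt for deeper checks,
        # not proof of manipulation.
        "needs_deeper_review": len(exif) == 0,
    }

print(triage_image("evidence_frame.jpg"))  # hypothetical file name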
A useful comparison: South Korea's "AI Basic Act" and the transparency rule idea
One suggestion floating around is that Malaysia could borrow a page from South Korea's approach, especially around disclosure and transparency for generative AI.
South Korea's AI Basic Act has been described (in legal summaries and news reporting) as requiring clearer labeling/notice for AI-generated content and user notification in certain cases, particularly around generative AI outputs that can be mistaken for real content.
That's relevant because labeling doesn't "solve" deepfakes, but it does:

- set a baseline expectation that synthetic content is disclosed as synthetic
- give platforms and regulators something concrete to enforce against
- make it harder for mainstream tools and services to quietly blur the line between real and generated content

Of course, disclosure rules also raise hard questions: what counts as "AI-generated," how labeling is enforced, and whether bad actors will ignore it anyway (spoiler: many will). But governance frameworks usually aim to raise the baseline across the mainstream ecosystem, even if criminals keep doing criminal things.
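As a rough illustration of what machine-readable disclosure can look like, here's a sketch that writes and reads an "ai_generated" text chunk in a PNG using Pillow. The tag names are made-up placeholders, not a format any law mandates; real provenance standards (such as C2PA) are far more involved.

```python
# Illustrative only: a toy disclosure label, not any law's required format.
# Assumes Pillow is installed: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(img: Image.Image, path: str) -> None:
    meta = PngInfo()
    # "ai_generated" and "generator" are hypothetical tag names for illustration.
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")
    img.save(path, pnginfo=meta)

def read_disclosure(path: str) -> dict:
    # PNG text chunks are exposed via the .text attribute on the opened image.
    return dict(Image.open(path).text)

# Example: label a blank image, then read the label back.
save_with_disclosure(Image.new("RGB", (64, 64)), "labeled.png")
print(read_disclosure("labeled.png"))
```

The obvious limitation, in keeping with the point above: a metadata label survives only as long as nobody strips it, which is exactly why disclosure rules target the mainstream ecosystem rather than determined criminals.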
What this means for regular people (not AI engineers)
Even if the bill is aimed more at developers, deployers, and platforms, everyday users still feel the impact, because it shapes the environment they operate in.
If Malaysia gets this right over time, you'd expect:

- clearer accountability when an AI system or AI-generated content causes harm
- faster, more consistent responses from platforms and authorities to deepfake scams
- better tooling and clearer norms for telling real content from synthetic content

And if Malaysia gets it wrong, you get one of two failure modes:

- rules so heavy or vague that legitimate builders route around Malaysia entirely, or
- rules so toothless that "safeguards" become a checkbox while the scams keep scaling

That's the tightrope.
A quick reality check you can actually use today
While laws and bills move at government speed, deepfake scams move at "two clicks and a Wi-Fi signal" speed. So the practical habits still matter:

- verify urgent money or "help me" requests through a second channel you already trust, like calling the person back on a number you know
- treat urgency itself as a red flag; pressure to act right now is the scammer's main tool
- don't treat a voice or a video alone as proof of identity anymore
- if you're targeted, keep the evidence and report it rather than quietly paying

