We all know that ChatGPT and other AI chatbots can be incredibly helpful — answering questions, writing articles, even helping you plan a trip. But what happens when they get it wrong? More importantly, what happens when that mistake could actually put you at risk?
That's exactly the concern raised by a recent report from threat intelligence firm Netcraft, which found that ChatGPT's mistakes might not be so innocent after all — especially when it comes to phishing scams.
When AI Gives You the Wrong Link
Imagine you're trying to log in to your online banking, but your bookmark isn't working. So you ask ChatGPT, "What's the URL to log in to XYZ Bank?" It responds confidently — but it might give you a link that doesn't exist, or worse, one that leads to a scam.
Netcraft tested this with the GPT-4.1 model, asking it for the website addresses of major brands in industries like finance, retail, utilities, and tech. The results were troubling: a meaningful share of the links it suggested didn't point to the brands' real sites, and some pointed to domains that didn't exist at all.
This might not sound too bad on paper, but here's where it gets risky.
How Scammers Are Exploiting This AI Loophole
Scammers are always looking for new angles, and AI just gave them a fresh one. According to Netcraft, if a chatbot suggests a non-existent URL, attackers can quickly register that domain and build a fake site that looks just like the real one.
This means the next time someone asks ChatGPT for that same URL, it could confidently point them toward a phishing site—no hacking needed, just smart timing.
The trick is that chatbots aren't actually verifying what they say. They aren't browsing the web in real time or checking if a website is secure. They're generating responses based on patterns in language. If a certain phrase sounds like a login URL, it might spit that out—even if it leads nowhere… yet.
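To see how little verification is involved, consider a minimal Python sketch that does the one check a chatbot never does: asking DNS whether a suggested hostname even exists. The bank URL below is made up for illustration.

```python
import socket
from urllib.parse import urlparse

def hostname_resolves(url: str) -> bool:
    """Return True if the URL's hostname currently has a DNS entry."""
    hostname = urlparse(url).hostname
    if not hostname:
        return False
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Hypothetical link a chatbot might invent for "XYZ Bank"
suggested = "https://secure-login-xyzbank.com/account"
print(hostname_resolves(suggested))
```

Keep in mind that a hostname that does resolve proves nothing about legitimacy: once a scammer registers the invented domain, this check passes and the phishing page loads just fine.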
Why This Happens: AI Isn't a Fact-Checker
Large Language Models (LLMs) like ChatGPT operate based on probabilities. They don't "know" things in the way humans do. They don't cross-check facts or pull data directly from live websites. Instead, they string together words that are statistically likely to appear together — which can include convincing but completely wrong information.
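As a rough illustration only (this is not how any real model is implemented, and the candidate strings and probabilities below are invented), generation boils down to sampling from a probability distribution over plausible continuations, with no step that checks the result against reality:

```python
import random

# Invented probabilities over plausible-sounding completions of
# "The login page for XYZ Bank is https://..."
next_chunk_probs = {
    "xyzbank.com/login": 0.55,         # the brand's real domain in this toy example
    "login-xyzbank.com": 0.30,         # plausible, may not exist
    "xyzbank-secure.net/signin": 0.15  # plausible, may not exist
}

chunks = list(next_chunk_probs)
weights = list(next_chunk_probs.values())
chosen = random.choices(chunks, weights=weights, k=1)[0]

# The "answer" is whichever continuation got sampled; nothing here
# checks that the domain is registered or belongs to the bank.
print(f"https://{chosen}")
```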
This becomes dangerous when people assume AI is infallible. It's not lying on purpose, but it can be confidently wrong — and when that wrong answer ends up being a link you click, the consequences can be serious.
Stay Safe: Always Double-Check
The takeaway here is simple but critical: never rely solely on AI-generated links, especially when it comes to logging into accounts, making purchases, or entering personal information.
Here are a few tips:

- Type the address yourself or use a bookmark you created when logging in to banks and other sensitive accounts.
- Look closely at the domain before entering credentials; lookalike addresses are a classic phishing trick.
- Prefer the company's official app, or navigate from a source you already trust, rather than clicking a link a chatbot produced.
- When in doubt, contact the company through a phone number or email printed on a statement or card.
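If you like to automate your own checks, a small script can compare any link a chatbot hands you against domains you already know are official. This is only a sketch: the allowlist and URLs below are hypothetical, and you would maintain the list yourself from sources you trust.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains you have confirmed are official.
KNOWN_GOOD_DOMAINS = {"xyzbank.com", "example-retailer.com"}

def is_known_domain(url: str) -> bool:
    """Check that the URL's hostname is (a subdomain of) a domain you trust."""
    hostname = (urlparse(url).hostname or "").lower()
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in KNOWN_GOOD_DOMAINS
    )

print(is_known_domain("https://online.xyzbank.com/login"))   # True: matches the allowlist
print(is_known_domain("https://xyzbank-login-secure.com/"))  # False: lookalike domain
```

A simple allowlist like this won't catch every trick, but it flags the most common one: a convincing-looking domain that isn't actually the brand's.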
Final Thoughts
AI tools like ChatGPT are amazing when used properly, but like any tool, they come with risks. Phishers and scammers are already adapting their tactics, so it's important that users adapt too. Be alert, stay skeptical, and don't let convenience compromise your online safety.
As much as we want to believe our digital assistants are all-knowing, remember — even the smartest ones can be wrong.