AI doesn't know what day it is
AI will give you an answer whether it knows one or not. Understanding the structural blind spots — the things it genuinely can't know — is the difference between using AI well and being misled by it.
Ask an AI assistant what day of the week it is and it will tell you confidently. It will also, fairly often, be wrong. Not because it's broken — because it genuinely doesn't have access to a clock. It calculates the day from a date string, applies a formula, and delivers the answer with the same certainty it uses for everything else.
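A toy sketch makes the point. This is deliberately simplified and is not how a language model works internally, but it captures the failure mode: the calculation is flawless, yet the answer depends entirely on what date the code was handed.

```python
from datetime import datetime

def weekday_from_string(date_string: str) -> str:
    """Compute the day of the week purely from a supplied date string.

    There is no clock anywhere in this function. The answer is only as
    good as the string passed in: a stale or wrong date produces a
    confident, correctly formatted, wrong answer.
    """
    return datetime.strptime(date_string, "%Y-%m-%d").strftime("%A")

# A correct input gives a correct answer...
print(weekday_from_string("2024-01-01"))  # Monday (1 Jan 2024 was a Monday)

# ...and a stale input gives an equally confident wrong one.
print(weekday_from_string("2023-01-01"))  # Sunday, delivered with identical certainty
```

The formula never errs; the premise can. That is the shape of the problem.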
That gap — between sounding right and being right — is the thing most businesses aren't accounting for.
Not a bug
AI systems have structural blind spots that aren't failures of the software. They're features of how the technology is built.
An AI doesn't know what's happening right now. Its training has a cutoff — anything after that date simply doesn't exist for it. It doesn't know whether a business is still trading, whether a price has changed, or whether the person it's describing is still in the same role. It doesn't know your organisation's internal terminology, your clients' preferences, or what "urgent" means in your context.
There's a subtler version of this inside a single conversation. As a session grows longer, earlier context gets progressively diluted. The AI at message fifty is not working with the same full picture it had at message one. Instructions you set at the start of a conversation quietly carry less weight as new material accumulates. The output drifts — and usually nobody notices until something is wrong.
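The drift described above can be caricatured in a few lines of Python. This is a deliberate oversimplification (real assistants dilute earlier context rather than cleanly truncating it, and the limit below is an invented number), but the practical effect is similar:

```python
from collections import deque

# A crude model of a fixed-size context window: once it fills,
# the oldest material silently falls away.
CONTEXT_LIMIT = 5  # hypothetical limit, measured here in whole messages

context = deque(maxlen=CONTEXT_LIMIT)

context.append("INSTRUCTION: always use British spelling")
for i in range(1, 8):
    context.append(f"message {i}")  # the conversation keeps growing

# The opening instruction is no longer in the window at all.
print(any("INSTRUCTION" in m for m in context))  # False
```

Nothing announced that the instruction was gone. The conversation simply kept going, which is exactly how it feels in practice.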
None of this is a defect. It's the nature of the system.
The confidence problem
The issue for business users is that AI doesn't signal uncertainty the way a human expert would. A good adviser who doesn't know something will say so. An AI will often fill the gap with a plausible-sounding answer — and the tone will be identical whether the answer is solid or invented.
This isn't a reason not to use AI. It's a reason to use it with someone who knows where the gaps are.
What this means in practice
For most business tasks — drafting communications, summarising documents, generating options — the blind spots don't matter much. The output is good enough and the stakes of a small error are low.
For higher-stakes tasks — customer-facing content, commercial decisions, compliance work — the same confidence that makes AI fast also makes it risky without the right oversight. Someone needs to know which questions to ask, which outputs to verify, and where to cross-reference against a source that actually knows what's happening now.
That's not a criticism of AI. It's an argument for using it well rather than just using it.
If you're making business decisions with AI and no one on your team is asking those questions, that's worth a conversation.
Robin Carswell