You're Not AI-Ready. Here's What That Actually Means.
Most dental practices think they're using AI. They're chatting with it. There's a massive difference — and when compute costs spike, that difference becomes very expensive.
A few months ago, someone paid $100 a month for Claude — one of the best AI models on the planet — connected it to their email and calendar, and asked it to prepare a daily briefing. The kind of thing that should take AI about ten seconds. Pull my meetings, summarise my emails, tell me what matters today.
It grabbed half the meetings. Five emails. Couldn't generate useful insights from any of it. On a hundred-dollar-a-month plan.
The model wasn't the problem. Claude is exceptional. The problem was that the company behind it — Anthropic — literally doesn't have enough computing power to let it do the work. They're rate-limiting on the backend. Rationing intelligence to save costs. Even for paying customers.
If that doesn't set off alarm bells for anyone relying on AI tools for their business, it should. Because compute — the raw processing power that makes AI work — is about to get a lot more scarce. And if you're running a dental practice and your strategy for AI is "we use ChatGPT sometimes," you are not prepared for what's coming.
The chatbot trap
Here's a stat that should make you uncomfortable. Two-thirds of people who use AI regularly believe the responses are either pulled from a database or read from a pre-written script. They think it's basically Google with better grammar.
That mental model — AI as a search engine you can talk to — is the single biggest barrier to actually benefiting from it. Because if you think AI is a chatbot, you'll use it like a chatbot. You'll ask it a question, get an answer, and move on. One question, one answer, done.
That is like buying a car to sit in the driveway and listen to the radio. Technically you're using it. But you're not going anywhere.
Real AI — the kind that actually changes how a practice operates — doesn't answer questions. It does work. It reads your emails and classifies them before your receptionist even opens the inbox. It listens to what you say during a procedure and writes the clinical notes while you move to the next patient. It answers the phone at 7pm on a Thursday when everyone's gone home, books the emergency appointment, and logs it in your system. It scans the referral letter that arrived by fax, extracts the patient details, and files it against the right record.
None of that is chatting. None of it requires a human to type a prompt. It just runs. And the practices that have this set up — there are a few of us, not many yet — are operating at a fundamentally different level than the ones where the receptionist opens ChatGPT when she's stuck on a word.
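The shape of a workflow like the email step above can be sketched in a few lines. A real setup would hand classification to a local language model; this stand-in uses keyword rules instead, and every category, keyword, and message here is invented purely for illustration:

```python
# Minimal sketch of an always-on email triage step. A real deployment
# would swap classify() for a call to a local language model; the
# categories and keywords below are illustrative, not from a real system.

CATEGORIES = {
    "emergency": ["pain", "swelling", "broken tooth", "urgent"],
    "referral": ["referral", "specialist", "orthodontist"],
    "billing": ["invoice", "payment", "statement", "hicaps"],
}

def classify(subject: str, body: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = f"{subject} {body}".lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "general"

def triage(inbox: list[dict]) -> dict[str, list[dict]]:
    """Group messages by category so the front desk opens a sorted inbox."""
    routed: dict[str, list[dict]] = {}
    for msg in inbox:
        routed.setdefault(classify(msg["subject"], msg["body"]), []).append(msg)
    return routed

inbox = [
    {"subject": "Severe pain since last night", "body": "Can I come in today?"},
    {"subject": "Invoice 1042", "body": "Please find the statement attached."},
]
print(sorted(triage(inbox)))  # ['billing', 'emergency']
```

The point is not the rules; it's that this runs unattended, on every message, with nobody typing a prompt.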
Why this matters right now
There's something happening in the AI industry that's going to make the difference between "using AI" and "using AI properly" a lot more consequential. It comes down to compute — the raw processing power that makes all of this work.
Nate B. Jones, who covers AI strategy and is one of the clearest thinkers in this space, uses an analogy I think is perfect. Think of AI companies as airlines running a popular route. The plane has a fixed number of seats — that's the compute. And right now, they're trying to sell those seats to three very different types of passengers.
There's the economy passenger — the billion people on the free ChatGPT plan, asking it to fix their grammar and generate birthday card messages. These people barely cover the fuel cost.
There's the business class passenger — enterprises buying thousands of seats, burning through tokens, demanding high-quality inference for real work. Sam Altman has said publicly that enterprises are asking to consume a trillion tokens and OpenAI will "fail in 2026 to meet enterprise demand."
And there's the investor seat keeping the airline flying until it reaches profitability.
The problem? There aren't enough seats on the plane. Compute is the constraint. And when demand outstrips supply, the airline starts making choices. Free users get bumped to smaller, dumber models. Rate limits kick in. Prices go up for everyone else.
This isn't a hypothetical future. It's happening right now.
It's already showing up in the products you're being sold
Anthropic — the company behind Claude, arguably the best AI model available today — is so compute-constrained that even their $100-a-month plan limits you to 50 tool calls across email, calendar, and documents. Not per day. Total. Check your email twice and look at three documents and you've burned through your allowance by Wednesday.
Nate tested Claude's email integration and had a genuinely terrible experience. Not because the model was bad — Claude is brilliant — but because Anthropic is rate-limiting on the backend to save compute costs. The AI pulled half his meetings, grabbed five emails, and couldn't generate useful insights from what it had. On a hundred-dollar-a-month plan.
Meanwhile, OpenAI is rolling back their more intelligent reasoning models for free users, because serving good intelligence to people who won't pay isn't sustainable. So the 950 million people on the free plan — the ones forming their mental model of what AI can do — are getting steadily dumber versions of it. They're being trained to think AI is mediocre.
And this gets to the core issue. When compute is scarce, AI companies don't degrade equally for everyone. They allocate. Enterprise customers with deep pockets get the good models, the fast inference, the high token limits. Small businesses get what's left.
Where does a four-person dental practice in Darwin sit in that allocation? Not in business class.
Agents make it 100x worse
Everything I've described is about today's AI. Single-shot interactions. One prompt, one response. A chatbot.
The entire industry is pivoting to something called agents. An AI agent doesn't answer a question — it does a job. It plans the work, executes step by step, checks itself, calls other tools, handles errors, and keeps going until the task is finished. Where a chatbot makes one AI call, an agent makes 50 to 200. A single task that used to be one API call becomes a hundred.
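The jump from one call to many is easy to make concrete. This is a toy loop with a stubbed model, not any vendor's agent framework; it exists only to count how many model calls a single job consumes:

```python
# Toy agent loop with a stubbed model, to show why one task fans out
# into many model calls. Nothing here is a real vendor API.

calls = 0

def model(prompt: str) -> str:
    """Stand-in for an LLM call; each invocation costs compute."""
    global calls
    calls += 1
    # Pretend the model plans three steps, then just acknowledges work.
    if prompt.startswith("plan"):
        return "step1;step2;step3"
    return "done"

def run_agent(task: str) -> int:
    """Plan, then execute each step with its own self-check."""
    plan = model(f"plan: {task}")        # 1 call to plan the job
    for step in plan.split(";"):
        model(f"execute: {step}")        # 1 call per step
        model(f"verify: {step}")         # 1 call to check the result
    return calls

total = run_agent("file the referral letter")
print(total)  # 7 model calls for one task; a chatbot would have made 1
```

A real agent also retries failed steps and calls external tools, which is how one task climbs from 7 calls to the 50–200 range.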
Now picture every company you've ever heard of — Salesforce, Microsoft, Google, Apple, Amazon — rolling out agent platforms to millions of enterprise workers simultaneously. The compute demand doesn't increase linearly. It multiplies by orders of magnitude.
NVIDIA controls roughly 80% of the AI chip market and their latest GPUs sell out months in advance. New chip fabrication plants take 3–5 years to build. The industry is projecting $5.2 trillion in data centre investment by 2030. That is the largest infrastructure build in human history.
And even that might not be enough.
What happens to your software subscriptions
Let's bring this home. You don't need to understand GPU supply chains. You need to understand what happens to the things you pay for.
That AI clinical scribe you're evaluating? It runs on cloud compute. When the provider's GPU costs go up — and they will — your subscription goes from $30/month per clinician to $60, then $99. Or they keep the price the same and quietly switch to a cheaper, less capable model. Your notes start containing more errors, and you don't know why, because nobody told you the model changed.
That after-hours answering service with "AI-powered" call handling? It starts charging per minute instead of per month. Or it introduces "fair use" limits you didn't notice in the updated terms.
That practice management system that just added an "AI assistant" feature? It gets slower during peak hours. "Your request is being processed" becomes the new hold music. Enterprise hospitals get priority. Your four-chair practice in the suburbs? You get the scraps.
None of this is speculation. It's the inevitable consequence of a supply-demand imbalance in compute that is already here and about to get significantly worse.
What AI-ready actually looks like
So when I say a practice is "AI-ready," I don't mean you've tried ChatGPT. I mean something specific. I mean you've set up your practice so that when compute costs spike — and they will — you're insulated from the worst of it. And beyond that, you're actually using AI in ways that change how your practice operates, not just how your receptionist writes emails.
You run the heavy lifting locally
A modern desktop computer — a Mac Studio, a PC with a decent GPU — can run speech-to-text, text-to-speech, document classification, and small language models completely offline. No cloud. No API fees. No rate limits. No allocation decisions by a company in San Francisco about whether your practice deserves the good model today.
We run our practice AI on a Mac Studio with an M4 Max chip. $3,499 once. It handles dictation, email classification, document scanning, and our entire knowledge base. The monthly running cost is about $15 in electricity. When Anthropic's compute costs go up 50%, our dictation still works. When OpenAI reprices their API, our email classifier doesn't notice.
You only use cloud AI where it genuinely earns its keep
Cloud AI — Claude, GPT-4, the frontier models — is worth paying for when the task is complex and the volume is low. Drafting a difficult referral letter. Analysing an unusual treatment plan. Helping with a tricky patient communication. These tasks benefit from the best intelligence available, and you might do 10–20 of them across the whole practice in a day.
Even if per-token costs double, that's going from $40/month to $80. Manageable.
But running 200 email classifications a day through a cloud API? That's like flying business class on a route you take five times a week. And when the airline raises prices, you can't switch to economy because you never built the option.
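The business-class analogy is just arithmetic. Here is a back-of-envelope comparison for those 200 daily classifications, where the per-call cloud price is an illustrative assumption, not any provider's actual rate:

```python
# Back-of-envelope monthly cost for 200 classifications a day, cloud vs
# local. Prices are illustrative assumptions, not real vendor rates.

CLOUD_COST_PER_CALL = 0.01   # assumed: about 1 cent per classification via API
LOCAL_POWER_COST = 15.00     # flat monthly electricity, per the article
CALLS_PER_DAY = 200
DAYS_PER_MONTH = 30

def monthly_cost(route: str) -> float:
    if route == "cloud":
        return CLOUD_COST_PER_CALL * CALLS_PER_DAY * DAYS_PER_MONTH
    return LOCAL_POWER_COST  # local hardware: fixed running cost

print(monthly_cost("cloud"))      # 60.0 at today's assumed rate
print(monthly_cost("cloud") * 2)  # 120.0 if per-token prices double
print(monthly_cost("local"))      # 15.0, unchanged when cloud reprices
```

High-volume, low-complexity work is exactly the traffic that should never touch a metered API.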
You own the workflow, not just the subscription
This is the part most people miss. When you subscribe to an AI-powered SaaS tool, you own nothing. Not the model. Not the workflow. Not the data pipeline. If they double the price, you pay or you lose access. There's no negotiation because there's no alternative you can switch to.
A practice that's built even basic AI workflows with open-source tools has options. You can swap models — if one provider gets expensive, you switch to another. You can move workloads between local and cloud. You can add capabilities without waiting for a vendor's product roadmap. When compute gets scarce and everyone is fighting for allocation, you have leverage. That's the difference.
Your data is structured and accessible
AI needs data to work with. If your patient records are in your PMS but your referral letters are in a filing cabinet, your appointment notes are in three different systems, and your SOPs exist only in someone's head, then AI can't help you — no matter how good the model is.
The practices that will adopt AI fastest aren't the ones with the biggest budgets. They're the ones whose data is already organised. Digital records. Consistent naming. Documented procedures. This costs nothing but time, and it's the single highest-value preparation you can make.
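"Consistent naming" is something a machine can check. This sketch assumes a hypothetical filename convention (patient ID, date, document type); yours will differ, but the point stands: a convention a script can verify is a convention AI can use.

```python
# Check scanned-document filenames against a convention. The pattern
# below, <patient_id>_<YYYY-MM-DD>_<doctype>.pdf, is a hypothetical
# example; pick any convention you like and enforce it the same way.
import re

PATTERN = re.compile(r"^\d+_\d{4}-\d{2}-\d{2}_(referral|xray|consent)\.pdf$")

def check_names(filenames: list[str]) -> list[str]:
    """Return the filenames that break the convention, for manual fixing."""
    return [name for name in filenames if not PATTERN.match(name)]

files = [
    "10423_2026-01-15_referral.pdf",   # conforms
    "scan0001.pdf",                    # untraceable: no patient, no date
]
print(check_names(files))  # ['scan0001.pdf']
```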
The real question
Are you actually using AI to its potential, or are you using it like 99% of people — as a chatbot you type questions into when you're stuck?
Because the gap between those two things is about to get very expensive. Right now, in early 2026, we're in a window where hardware is affordable, open-source models are genuinely excellent, and cloud prices are still low enough to experiment cheaply. This window does not stay open.
At our practice in Darwin, our total ongoing AI cost is about $50–65 per month. That covers clinical dictation, email automation, after-hours phone answering, document processing, patient registration assistance, and a voice notes system. Off-the-shelf SaaS doing the same work would cost $1,500–2,500/month today. And those prices are going up, not down.
When compute costs double — and the dynamics make this a when, not an if — our bill goes from $60 to maybe $90. Practices relying entirely on cloud subscriptions go from $2,000 to $3,500. The gap widens every year. And at some point, the gap becomes the difference between a practice that can afford AI and one that can't.
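Those figures fall out of splitting each bill into a fixed slice and a cloud-dependent slice. The split used below is an assumed decomposition chosen to be consistent with the totals above, not audited numbers:

```python
# The scenario above: compute costs double. A mostly-local bill rises a
# little, because only its small cloud slice doubles; an all-cloud bill
# rises a lot. The fixed/cloud split per bill is an assumption.

def bill_when_compute_doubles(fixed: float, cloud_slice: float) -> float:
    """fixed = local hardware/electricity; cloud_slice = compute-linked cost."""
    return fixed + cloud_slice * 2

local_practice = bill_when_compute_doubles(fixed=30, cloud_slice=30)      # $60 -> $90
cloud_practice = bill_when_compute_doubles(fixed=500, cloud_slice=1500)   # $2,000 -> $3,500
print(local_practice, cloud_practice)  # 90 3500
```

Run the doubling twice more and the all-cloud bill is the one that becomes a line item the practice can no longer justify.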
The best time to start was a year ago. The second-best time is this week.
Want to build something like this?
We build custom AI tools for businesses. Tell us what you're dealing with — we'll tell you what's possible.