What Most People Get Wrong About AI & Automation (and What to Do Instead)
If you’ve tried using AI or automation in your ABA clinic, you’ve probably hit a wall. Maybe the tool seemed amazing at first, but then something broke. Or the time savings you expected never showed up. Or worse, you created a new problem instead of solving the old one.
You’re not alone. Most BCBAs and clinic leaders run into common AI and automation mistakes because the technology moves fast and the guidance is scattered.
This article walks through the biggest errors people make, explains why they happen, and gives you practical steps you can start this week. The goal isn’t to scare you away from these tools—it’s to help you use them safely, ethically, and in ways that actually work.
We’ll start with ethics and safety because that foundation matters more than speed. Then we’ll clear up what “AI” and “automation” actually mean. After that, we cover the eight most common mistakes, with a “do this instead” for each. Finally, we give you a simple playbook and guardrails you can use right away.
Start Here: Ethics and Safety Come First
Before you think about efficiency, think about risk.
AI can help with tasks like drafting notes or rephrasing parent updates. Automation can save time on reminders and scheduling handoffs. But neither one replaces your clinical judgment. You’re still responsible for what happens to your clients.
The big risks fall into a few categories: privacy leaks, wrong information, bias in AI outputs, and unsafe shortcuts that can hurt clients or put your clinic in a tough spot with compliance.
A simple rule: if it touches client care, you review it first. When you’re unsure, ask your supervisor, compliance lead, or privacy officer.
Quick Safety Rules in Plain Language
Don’t paste client-identifying details into AI tools without approved safeguards. Most consumer AI tools like public ChatGPT aren’t appropriate for protected health information (PHI). If a vendor won’t sign a Business Associate Agreement (BAA), treat that tool as not approved for PHI.
Don’t send AI-written messages to caregivers without a human read-through. The AI might get the tone wrong, include inaccurate details, or miss important context. Your review is the safety net.
Don’t let automation change clinical data without checks. Any workflow that touches the clinical record, billing, or authorization status needs a human approval step before it goes live.
For more on this topic, see our guide on [AI ethics and privacy basics for ABA](/ai-and-automation/ai-ethics-and-privacy-basics-for-aba).
What “AI” and “Automation” Mean (Simple Definitions)
People mix up these terms all the time, and that confusion leads to avoidable mistakes.
Automation follows set steps or rules—“if this happens, then do that.” It runs the same way every time. Example: when an appointment is scheduled, automatically send a reminder to the caregiver. The rules are fixed. The system doesn’t think or adapt.
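If anyone on your team writes a little code, here’s a minimal sketch of what a fixed-rule automation looks like. The appointment fields and the reminder helper are hypothetical stand-ins for whatever your scheduling system actually provides.

```python
from datetime import datetime, timedelta

def send_reminder(to, when, message):
    # Stand-in for your scheduling system's real reminder feature.
    print(f"Queued for {when}: {message!r} -> {to}")

def on_appointment_scheduled(appointment):
    # Fixed rule: when an appointment is scheduled, queue a reminder
    # 24 hours before the start time. The rule never thinks or adapts.
    reminder_time = appointment["start_time"] - timedelta(hours=24)
    send_reminder(
        to=appointment["caregiver_contact"],
        when=reminder_time,
        message="Reminder: your session is tomorrow at "
                + appointment["start_time"].strftime("%I:%M %p"),
    )

on_appointment_scheduled({
    "caregiver_contact": "caregiver@example.com",
    "start_time": datetime(2025, 6, 12, 15, 0),
})
```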
AI generates or predicts outputs. It can summarize text, draft messages, or sort information. It’s flexible, but that flexibility comes with risk—AI can be wrong in ways that look confident. Example: you ask an AI tool to rephrase a parent update in plain language. The AI drafts something, but you must review it before sending.
Why does this matter? They need different safety checks. Automation risks include brittle rules, broken handoffs, and silent failures. AI risks include hallucinations (made-up information), privacy mistakes, and inconsistent outputs.
If you’re not sure whether something is “AI” or “automation,” label it first. That one step makes safety planning easier. For more detail, read [what workflow automation means in ABA](/ai-and-automation/what-is-workflow-automation-in-aba).
Before You Automate: A Fast Self-Audit (2 Minutes)
Before you build or buy anything, answer these questions:
- What problem are you solving (in one sentence)?
- Who will use it (BCBA, RBT, admin, billing)?
- How often does this task happen?
- Is it mostly rules-based, or does it require judgment?
- What could go wrong (privacy leak, wrong output, missed step)?
- How will you know it worked (pick one clear measure)?
- Where will a human review happen?
Think of it as a traffic light system. Green light tasks are low risk: admin reminders, internal task lists, non-client templates. Yellow light tasks need review: draft notes, draft emails, draft session summaries. Red light tasks are off-limits without serious safeguards: anything that makes clinical decisions or sends sensitive information without human review.
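To make the traffic light concrete, here’s a minimal sketch of that triage as code. The categories below are illustrative, not an official list; adjust them to your clinic’s own policies.

```python
# Illustrative risk categories; adapt these to your clinic's policy.
GREEN = {"admin reminder", "internal task list", "non-client template"}
YELLOW = {"draft note", "draft email", "draft session summary"}

def triage(task_type):
    """Classify a candidate workflow by risk before building anything."""
    if task_type in GREEN:
        return "green: low risk, automate with basic checks"
    if task_type in YELLOW:
        return "yellow: drafts only, with human review before anything sends"
    # Anything unlisted defaults to red: clinical decisions and
    # sensitive outbound information need serious safeguards first.
    return "red: off-limits without serious safeguards"

print(triage("draft email"))        # yellow
print(triage("clinical decision"))  # red by default
```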
Print this checklist and keep it where your team plans workflows. For a deeper version, check out our [automation readiness checklist](/ai-and-automation/automation-readiness-checklist).
Mistake #1: Automating a Broken Process First
This is the most common trap. You’re overwhelmed. A process is painful. So you automate it, hoping technology will fix the mess.
But if your process is unclear, automation just makes the mess run faster.
What it looks like: the automation runs, but the outcome is still wrong. You end up with the same errors, just delivered more quickly. Or the workflow breaks in new, confusing ways because nobody mapped out the handoffs.
What to do instead: Fix the steps first, then automate the clean version. Write the process in five to ten steps. Remove steps that don’t add value. Make handoff points clear (who does what next). Only automate one step at a time.
Pick one painful workflow and map it on paper first. If it’s unclear on paper, it will be worse in automation. For guidance, see [how to map an ABA workflow before you automate](/ai-and-automation/workflow-mapping-for-aba-clinics).
Mistake #2: No Clear Goal
This mistake sneaks up on teams that want to “use AI” without defining success. When “using AI” is the goal itself, you end up with lots of experiments and no stable workflow. Time and money disappear into pilots that never finish.
What to do instead: Set one goal and one stop rule. The goal tells you what you’re trying to accomplish. The stop rule tells you when to pause or roll back.
A goal might sound like: “Fewer late notes, without lowering quality.” Or: “Fewer missed appointments, without more staff work.” The stop rule might be: “If our error rate goes above five percent, we pause and review.”
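A stop rule works best when it’s a number someone actually computes. Here’s a minimal sketch using the five percent threshold from the example above; the numbers are illustrative.

```python
def check_stop_rule(errors, total_runs, threshold=0.05):
    # Pause the workflow when the error rate crosses the agreed threshold.
    if total_runs == 0:
        return "pause: no runs recorded this period, investigate first"
    error_rate = errors / total_runs
    if error_rate > threshold:
        return f"pause and review: {error_rate:.1%} is above {threshold:.0%}"
    return f"keep running: error rate {error_rate:.1%} is within bounds"

print(check_stop_rule(errors=3, total_runs=40))  # 7.5% -> pause and review
```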
Write your goal in one sentence. If your team can’t repeat it, your automation will drift. For more, see [how to set safe goals for AI projects in ABA](/ai-and-automation/setting-goals-for-ai-projects-in-aba).
Mistake #3: Using Bad Data
AI and automation are only as good as the data underneath them. If your inputs are messy, your outputs will be messy. No tool can fix missing fields, duplicate client records, or data that doesn’t match across systems.
Data silos happen when clinical notes live in one system, billing data in another, and scheduling in a third. When those systems don’t talk to each other, the same information might be entered differently in each place.
This shows up as wrong names, wrong dates, missing fields, or billing denials because session logs don’t match authorizations.
What to do instead: Pick one place that’s official for each data type (a “source of truth”). Standardize key fields like client name format and service codes. Add a simple data check step before reports or messages go out.
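Here’s a minimal sketch of what that data check step might look like. The field names and the “Last, First” rule are hypothetical examples; the point is to catch missing or inconsistent data before anything goes out.

```python
REQUIRED_FIELDS = ["client_id", "client_name", "service_code", "session_date"]

def data_check(record):
    # Return a list of problems; an empty list means the record passes.
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS
                if not record.get(field)]
    # Example standardization rule: names stored as "Last, First".
    name = record.get("client_name", "")
    if name and ", " not in name:
        problems.append(f"name not in 'Last, First' format: {name!r}")
    return problems

issues = data_check({"client_id": "A-104", "client_name": "Sam Rivera",
                     "service_code": "97153", "session_date": "2025-06-12"})
print(issues or "record passes")  # flags the name format
```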
If the input is messy, the output will be messy. Fix the data before you blame the tool. For more, see [data quality basics for ABA workflows](/ai-and-automation/data-quality-basics-for-aba-workflows).
Mistake #4: Siloed Tools and Brittle Logic
This mistake creeps in when you patch together tools with lots of “if this, then that” rules scattered across different systems. Each rule makes sense on its own, but together they become fragile. Change one label, one field, or one API, and the whole thing breaks.
Signs your automation is becoming fragile: only one person knows how it works, small updates break it, you fix it every week, and nobody knows what to do when it fails.
What to do instead: Simplify, reduce handoffs, and document the rules. Write down the steps and the “if/then” rules in plain words. If you can’t explain it, it’s too fragile.
For more, see [orchestration for clinic workflows](/ai-and-automation/orchestration-for-clinic-workflows).
Mistake #5: Skipping Human Review
Teams often want “hands-off” automation. But when you skip human review, errors go out unchecked. Messages to families contain wrong details. Notes read oddly. Billing codes get misfiled.
What to do instead: AI drafts, humans decide. Set up workflows so emails and session summaries are generated as drafts, not auto-sent. Add a second set of eyes for high-risk items. Include an approval step before anything is filed or sent.
A typical human-in-the-loop flow: the AI drafts a message; the system pauses before sending; a reviewer sees the draft with context; the reviewer approves, edits, or denies it; the decision is logged; and only then does the message send.
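For teams that build their own workflows, here’s a minimal sketch of that flow. The drafting function is a placeholder for whatever approved tool you use; the design choice that matters is that nothing sends without a logged human decision.

```python
from datetime import datetime

audit_log = []  # in practice, use your system's real audit trail

def draft_message(context):
    # Placeholder for an approved AI drafting tool; it never auto-sends.
    return f"Hi {context['caregiver']}, here is this week's update: ..."

def review_and_send(context, decision, edited_text=None):
    # AI drafts, a human decides, the decision is logged, then it sends.
    draft = draft_message(context)
    final_text = edited_text if decision == "edit" else draft
    audit_log.append({"time": datetime.now().isoformat(),
                      "reviewer": context["reviewer"],
                      "decision": decision})
    if decision in ("approve", "edit"):
        print(f"SENT to {context['caregiver']}: {final_text}")
    else:
        print("DENIED: nothing was sent.")

review_and_send({"caregiver": "Jordan", "reviewer": "BCBA-1"}, "approve")
```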
For more detail, read [human-in-the-loop: what it means in ABA](/ai-and-automation/human-in-the-loop-aba).
Mistake #6: Treating a Pilot Like the Final System
Pilots feel easier than full rollout planning. But without a clear end date and success criteria, pilots drag on forever. You end up with one-off fixes, unclear ownership, and inconsistent use.
What to do instead: Set a pilot timeline, success criteria, and a decision date. A common approach: get the pilot live within 30 days, test it through day 90, and make a go/no-go decision at the three-month mark. Keep scope small—one workflow, one team, one month.
Pilot rules that reduce risk: limit users at first, keep a manual backup plan, track issues in one place, and decide at the end whether to expand, fix, or stop.
Set a decision date for your pilot. Otherwise, you’ll keep paying for confusion. For more, see [how to run a safe AI pilot in a clinic](/ai-and-automation/how-to-run-a-safe-ai-pilot-in-a-clinic).
Mistake #7: No Monitoring
“Set it and forget it” doesn’t work. Automations fail quietly. Errors build up. Missed tasks, wrong messages, and incorrect reports pile up until someone notices the damage.
What to do instead: Monitor a few key checks weekly. Review run history and audit logs. Filter for errors and warnings. Drill into the exact step that failed. Create an easy way to report failures without blame.
A simple monitoring plan: weekly spot-check of a small sample, an error log (what broke, why, the fix), and an owner assigned (who checks and who fixes).
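If your tools export run histories, even a short script can handle the weekly spot-check. A minimal sketch, assuming runs come out as a simple list of records:

```python
import random

def weekly_spot_check(runs, sample_size=5):
    # Summarize failures, then pull a small random sample for manual review.
    failures = [run for run in runs if run["status"] == "error"]
    print(f"{len(failures)} of {len(runs)} runs failed this week")
    for run in failures:
        print(f"  run {run['run_id']} failed at step: {run['step']}")
    sample = random.sample(runs, min(sample_size, len(runs)))
    print("Spot-check by hand:", [run["run_id"] for run in sample])

weekly_spot_check([
    {"run_id": 101, "status": "ok", "step": None},
    {"run_id": 102, "status": "error", "step": "send_reminder"},
    {"run_id": 103, "status": "ok", "step": None},
])
```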
If no one owns monitoring, the system will fail in silence. For more, see [how to monitor automations in ABA clinics](/ai-and-automation/monitoring-automations-in-aba-clinics).
Mistake #8: Poor Documentation and Training
Building is faster than documenting. But when only one person understands how a workflow works, you have a single point of failure. Staff avoid the workflow or use it wrong. Turnover becomes a crisis.
What to do instead: Write a one-page “how it works” guide and train with examples. The guide should include purpose (one sentence), inputs (what it needs), outputs (what it creates), a fail plan (what to do when it breaks), and privacy rules (what data is allowed).
If a new hire can’t use it after a quick read, rewrite the guide in simpler words. For a template, see [standard work for ABA tech workflows](/ai-and-automation/standard-work-for-aba-tech-workflows).
Safer Steps: A Simple “Start Small, Test, Then Grow” Playbook
Now that you know the mistakes, here’s a simple playbook:
- Pick one low-risk workflow
- Map the steps and remove waste
- Decide the human review points
- Test with a small group
- Track errors and fix them
- Roll out slowly with training and monitoring
Your “minimum safe version” has one input source, one output type, one owner, and one backup plan. Grow only after it stays stable.
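One way to keep yourself honest is to write the minimum safe version down before you build, almost like a config file. A sketch with illustrative values:

```python
# Illustrative values; fill in your own workflow, owner, and rules.
minimum_safe_version = {
    "workflow": "session reminder emails",       # one low-risk workflow
    "input_source": "scheduling system export",  # one source of truth
    "output_type": "draft email for review",     # one output type
    "owner": "office manager",                   # one named owner
    "backup_plan": "manual reminder calls",      # what to do when it breaks
    "review_point": "before any message sends",  # where the human sits
    "stop_rule": "pause if weekly error rate tops 5%",
}
```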
For more, see the [minimum safe automation framework](/ai-and-automation/minimum-safe-automation-framework).
Privacy, Confidentiality, and Compliance Guardrails
Your clinic needs clear rules for AI and automation that protect clients and your team.
Keep client data protected: Use the minimum necessary information. Don’t enter PHI into consumer-grade AI tools. Only use AI vendors who sign a BAA. Prefer vendors who offer “zero retention” or prompt deletion of data. Require encryption and security assessments.
Separate practice data from real client data when testing. Use de-identification methods for any experimentation. When AI is part of care, consider disclosing that to clients and allowing opt-out pathways.
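De-identification is a job for approved tools and written policy, not a quick script, but a rough pre-flight check can still catch obvious slips before text reaches an AI tool. A deliberately conservative sketch; a clean result here does not mean the text is safe:

```python
import re

# Patterns for obvious identifiers only. This is NOT real de-identification;
# it only catches careless slips before text is pasted into an AI tool.
OBVIOUS_IDENTIFIERS = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "possible SSN"),
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "possible date of birth"),
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "email address"),
    (r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "phone number"),
]

def preflight_check(text):
    # Flag obvious identifiers; an empty result does NOT mean it's safe.
    return [label for pattern, label in OBVIOUS_IDENTIFIERS
            if re.search(pattern, text)]

print(preflight_check("Please rephrase: mom's number is 555-201-3344."))
```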
Human oversight remains essential. AI advises while humans decide. No clinical decision should be made solely by AI. Verification happens before anything enters the record.
A quick check before using AI for any task: Does it include client identifiers? Could it change care decisions? Could it be sent outside the clinic? Is there a required human review step?
Write your clinic’s “AI rules” in one page. Clear rules protect clients and your team. For a starter template, see our [clinic AI policy starter](/ai-and-automation/clinic-ai-policy-starter).
Frequently Asked Questions
How do I know if I’m making common AI & automation mistakes? Use the two-minute self-audit from this article. Look for warning signs like unclear goals, messy data, no owner, and no monitoring. Fix the highest-risk items first, especially anything involving privacy or client-facing work.
Should I automate a task that already feels chaotic? Usually no. Fix the process first. Map the steps, remove extra steps, then automate one small part. Keep a manual backup plan while testing.
What is the difference between AI and automation? AI helps create or sort information and can be flexible but sometimes wrong. Automation runs steps the same way every time and follows fixed rules. They need different safety checks.
What are “data silos,” and why do they break automations? Data silos mean your data is stuck in separate places. The same information may not match across systems. Pick a source of truth and standardize key fields.
What does “human-in-the-loop” mean in an ABA clinic? A person reviews and approves before anything important happens. This is especially critical for client-facing messages and documentation.
How do I start small with AI without wasting time? Pick one workflow and one goal. Build the minimum safe version. Pilot with a small group and set a decision date.
What tasks are safest to automate first in a clinic? Start with low-risk admin tasks like internal reminders or scheduling confirmations. Avoid tasks that require clinical judgment. Add extra safeguards for anything that touches client information.
Moving Forward the Right Way
The best AI and automation systems don’t replace your judgment; they support it. They protect clients first, support staff second, and deliver efficiency third.
You don’t need to adopt every new tool. You don’t need to move fast and break things. What you need is a thoughtful approach: map your workflows, fix broken processes, set clear goals, add human review where it matters, and monitor what you build.
Pick one workflow, run the self-audit, and build the minimum safe version with clear human review. If you want more help, explore our AI & automation pillar for step-by-step clinic-ready guides.
Small, careful steps create lasting change.