The Three Goals of Behavior Analysis: Description, Prediction, and Control
If you work in ABA—whether as a BCBA, RBT, clinic director, or caregiver—you’ve likely heard that behavior analysis is a science. But what does that actually mean in practice? The answer lies in three foundational goals: description, prediction, and control. These form the backbone of every functional behavior assessment, intervention plan, and progress review you’ll conduct. Mastering them means the difference between guessing at why behavior happens and having solid, data-based answers.
This article is for practicing clinicians, clinic leaders, and senior supervisors who want to deepen their understanding of these core concepts and see how they apply in real clinical scenarios. We’ll walk through plain-language definitions, show how the three goals build on each other, and highlight the ethical guardrails that keep your work client-centered and trustworthy.
One-Paragraph Summary
Behavior analysis describes behavior by systematically observing and measuring what people do, predicts behavior by identifying reliable patterns between behavior and environmental events, and achieves scientific control by demonstrating a functional relation—showing that manipulating an environmental variable reliably changes behavior. Prediction is probabilistic, based on likelihood under specific conditions and a person’s learning history, not absolute certainty. Ethical practice distinguishes scientific “control” (data-based, testable influence through environmental changes) from coercion: effective behavior change should be consent-informed, use the least restrictive procedures, and be evaluated for social validity—whether goals, methods, and outcomes meaningfully improve quality of life.
Ready to apply these concepts to your current caseload? Download our one-page behavior description and prediction checklist to get started. [/resources/behavior-description-checklist]
Understanding the Three Scientific Goals
What Description Means
Description is where everything starts. When you describe behavior in ABA, you’re not interpreting it, judging it, or guessing at intent. You’re simply recording what happened in observable, measurable terms.
A good description answers: Who did what? When? Where? How often or for how long? Instead of “Leo was aggressive,” a solid description says: “When told to transition to math, Leo pushed his materials off the desk, made eye contact with the teacher, and screamed ‘No!’ for 45 seconds. This occurred on 4 of 5 school days this week.”
To write a description you can measure and track, you need an operational definition—a clear, objective, complete description that allows different observers to consistently identify the same behavior. A good operational definition passes the “stranger test”: could someone unfamiliar with the client spot the behavior just from your definition? It also passes the “dead man’s test”: if a dead man could do it, it’s not behavior. “Sitting quietly” isn’t behavior; “hands folded, eyes forward, no vocalizations” gets closer.
How Prediction Builds on Description
Once you have clear descriptions and data, patterns emerge. This is where prediction comes in.
Prediction means identifying conditions under which behavior is more likely to occur. It’s not about certainty—it’s about probability. When you notice that Leo’s screaming increases during difficult math tasks, or that a client’s requests spike after visual prompts are faded, you’re predicting: “Under these conditions, this behavior tends to happen.”
The antecedent-behavior-consequence (ABC) model helps you find these patterns. By recording what happens before a behavior, the behavior itself, and what happens after, teams can spot triggers and potential reinforcers. Maybe behavior escalates when noise is high or preferred activities are removed. Maybe it decreases when a specific staff member provides clear instructions. These patterns let you shift from reacting to planning.
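To make the pattern-finding step concrete, here is a minimal sketch of how ABC records could be tallied to surface common antecedents. The data, labels, and target behavior are entirely hypothetical and for illustration only, not a prescribed data system.

```python
from collections import Counter

# Hypothetical ABC records: (antecedent, behavior, consequence) tuples
# collected across several sessions. All names are illustrative.
abc_records = [
    ("math demand", "scream", "task removed"),
    ("math demand", "scream", "task removed"),
    ("free play ends", "scream", "extra time given"),
    ("math demand", "on-task", "praise"),
    ("reading demand", "on-task", "praise"),
]

# Count which antecedents most often precede the target behavior.
target = "scream"
antecedent_counts = Counter(a for a, b, c in abc_records if b == target)

for antecedent, count in antecedent_counts.most_common():
    print(f"{antecedent}: {count} occurrence(s) before '{target}'")
```

A tally like this doesn't prove function; it simply highlights conditions worth building a testable prediction around.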
Prediction is probabilistic, not absolute. Saying “Leo is more likely to scream when math is presented” is accurate and useful. Saying “Leo will definitely scream every time” sets you up for confusion when he doesn’t. Behavior is complex, learning histories vary, and environments shift. Probability language—“more likely,” “tends to occur,” “increases the likelihood”—keeps your predictions honest and scientifically sound.
Control: Demonstrating a Functional Relation
Control in behavior analysis means something specific: demonstrating that changing an environmental variable reliably produces a behavior change. This is called a functional relation, and it’s the gold standard of evidence that you’ve actually caused the change you’re seeing.
Control is not punishment, coercion, or domination. It’s not forcing compliance or manipulating someone into behaving a certain way. Scientific control is about testing whether your intervention *works*—whether the changes you made actually led to behavior change.
To show control, you typically use experimental logic. In a single-subject design, you might use an ABAB reversal: baseline, intervention, return to baseline, intervention again. If behavior improves only when your intervention is in place and returns to baseline when it’s removed, you’ve demonstrated control. You’ve replicated the effect across conditions, strengthening confidence that your intervention is the cause.
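The ABAB logic above can be sketched as a simple phase comparison. The daily counts below are invented for illustration, and a real analysis relies on visual inspection of graphed data across phases rather than a single numeric check.

```python
# Daily counts of a target behavior across an ABAB reversal design.
# Numbers are illustrative, not client data.
phases = {
    "baseline_1":     [8, 9, 7, 8],
    "intervention_1": [3, 2, 2, 1],
    "baseline_2":     [7, 8, 9, 8],
    "intervention_2": [2, 1, 2, 1],
}

def mean(xs):
    return sum(xs) / len(xs)

means = {phase: mean(counts) for phase, counts in phases.items()}

# A crude check of the reversal logic: behavior drops in both
# intervention phases and recovers when the intervention is withdrawn.
replicated = (
    means["intervention_1"] < means["baseline_1"]
    and means["baseline_2"] > means["intervention_1"]
    and means["intervention_2"] < means["baseline_2"]
)
print(means)
print("Effect replicated across phases:", replicated)
```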
In everyday clinical work, this might look like: you describe baseline behavior, predict that teaching a replacement skill will reduce problem behavior, implement the intervention, and measure whether the predicted change happens. If your data show reliable improvement, you’ve achieved control.
How These Three Goals Connect
Think of description, prediction, and control as a logical sequence: observe → identify patterns → test and demonstrate change.
You cannot reliably predict what you haven’t clearly described. Fuzzy definitions (“He’s being defiant”) don’t give you the precision needed to spot real patterns. Without prediction, control becomes guessing. You change something, behavior shifts, but you can’t confidently say your change caused it unless you had a solid prediction and measured whether it came true.
The best clinical work weaves all three together. At intake, you describe and operationally define target behaviors. During functional assessment, you predict the conditions and functions maintaining behavior. When you implement interventions, you demonstrate control by showing that systematic environmental changes produce reliable improvement.
Description in Practice: Avoiding Interpretation
One common mistake is slipping from description into interpretation. Description is what you see; interpretation is what you think it means.
If a staff member says “The client was being lazy and didn’t want to try,” that’s interpretation. You can’t directly observe laziness or willingness. Instead, describe: “When shown the worksheet, the client looked away, did not write within 10 seconds, and put their head down.” That’s observable and measurable. Now you can test hypotheses: Is the task too hard? Does the client need a break? Is the environment distracting?
This matters because interpretations can mislead your team. If everyone agrees “he’s being lazy,” you might assume motivation is the problem. But if you describe the actual behavior, you might notice the task needs smaller steps or a different prompt. Description keeps your team honest and focused on what’s changeable.
Prediction: Building a Testable Hypothesis
Once you have solid descriptions across several days or weeks, look for patterns. ABC analysis is invaluable here.
Say you notice: every time a non-preferred activity is announced, screaming occurs within 30 seconds. Or: requests only happen after the adult leans in and says “What do you need?” These observations form the basis of prediction. Your hypothesis becomes testable: “If we add a visual warning before transitions, screaming will decrease” or “If we fade the adult prompt more slowly, requests will remain stable.”
Prediction guides your intervention and tells you what data to collect. It also keeps everyone aligned on what change to expect. Because predictions are probabilistic, you’re protected against over-interpreting normal variability. Some days screaming will happen even with the visual warning; the prediction is about likelihood, not a guarantee.
Control: What It Is and What It Isn’t
Many people outside ABA hear “control” and think of coercion or force. That’s not scientific control.
Scientific control is functional influence demonstrated through data. You change something measurable, behavior changes predictably, and you can show the connection by measuring both across time. It’s testable, replicable, and transparent.
Ethical control has guardrails. It requires informed consent or assent—the client or guardian understands what you’re doing and why, and can refuse. It uses the least restrictive, least intrusive procedures: offer a choice before a demand, teach a replacement skill before eliminating a behavior, reinforce desired behavior before using punishment. And it’s evaluated for social validity—do the goals matter to the client? Are the methods acceptable? Are the outcomes actually improving daily life?
When you measure whether behavior changes only when your intervention is in place, you’re testing for control. If data show improvement, you have evidence of a functional relation. This is powerful not because you’ve dominated the person, but because you’ve proven your approach works—which means you can refine it, replicate it, and trust it will help the next client with similar needs.
When You Use Description, Prediction, and Control
In a typical clinical workflow, these goals unfold naturally.
At intake and baseline, you describe the target behavior, develop an operational definition, and choose how to measure it (frequency, duration, intensity, latency). You collect baseline data—the “before” picture—so you have something to compare against later.
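As a small illustration of those measurement dimensions, here is a sketch that computes frequency, rate, and mean duration from a hypothetical session log. The field names and values are assumptions for the example, not a required data format.

```python
# Hypothetical session log: each episode of the target behavior with
# its onset offset and duration in seconds. Values are illustrative.
session_minutes = 30
episodes = [
    {"onset_s": 120, "duration_s": 45},
    {"onset_s": 600, "duration_s": 30},
    {"onset_s": 1500, "duration_s": 60},
]

frequency = len(episodes)                   # count per session
rate_per_min = frequency / session_minutes  # rate = count / observation time
total_duration = sum(e["duration_s"] for e in episodes)
mean_duration = total_duration / frequency  # average episode length

print(f"Frequency: {frequency} episodes")
print(f"Rate: {rate_per_min:.2f} per minute")
print(f"Mean duration: {mean_duration:.1f} s")
```

Reporting rate alongside raw frequency matters when session lengths vary; otherwise a longer session can look like a worse day.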
During functional behavior assessment, you gather more data to predict the function and conditions surrounding behavior. ABC charts, preference assessments, and functional analysis all serve prediction. Your FBA hypothesis might be: “When demands are high and preferred activities are absent, escape-maintained behavior is likely.”
When you implement interventions, you manipulate environmental variables and track behavior. Did teaching a break card reduce avoidance? Does pairing the adult with preferred items increase engagement? Use single-subject logic: does behavior change only when the intervention is in place? If so, you’ve shown control. If not, the data guide you to modify your approach.
During progress reviews, you use data to confirm whether you’ve maintained control. Is the behavior staying improved, or drifting? Are there side effects or collateral benefits? Ongoing measurement ensures you’re responsive, not just hoping it works.
Real-World Examples
Example 1: Transitions and Aggression
The scenario: A student, Leo, shows aggression when transitioning from preferred to non-preferred activities.
Description: “At the bell marking end of recess, Leo hit a peer on the shoulder (force sufficient to leave a red mark), screamed ‘No!’ repeatedly, and ran to the corner. Duration: 2 minutes. Frequency: 4 of 5 days this week.”
Prediction: You notice every transition from recess to classroom instruction triggers aggression within 10 seconds. Hypothesis: “Leo’s behavior is maintained by escape; providing advance notice and reinforcing calm transitions will reduce aggression.”
Control: You introduce a visual timer (5-minute warning) and reinforce Leo for calm transitions. Data show aggression drops to 1 incident per week. You remove the timer; aggression rises. You reintroduce it; aggression drops again. This replication shows control.
Example 2: Requesting and Prompting
The scenario: An adolescent rarely requests help independently.
Description: “During independent work time, the student did not raise their hand to request help during a 30-minute period. On 5 of 5 observed days, zero independent requests were made.”
Prediction: Requests increase immediately after an adult leans in and models the request. Hypothesis: “Gradually fading prompts will maintain or increase independent requesting.”
Control: You introduce a prompt hierarchy (model → gesture → proximity cue → no prompt) and reinforce each independent request. Requests increase from 0 to 3–4 per session. As you fade prompts, requests remain stable. This shows control.
Common Mistakes That Undermine These Goals
Confusing description with interpretation. “He was being defiant” is not a description. Write: “When given a non-preferred task, the student said ‘No,’ turned away, and did not start within 15 seconds.”
Expecting prediction to be certain. Behavior is influenced by many factors. Say “is likely to” and let variability be normal.
Treating correlation as proof of control. You notice behavior improves when a new staff member joins. That’s a clue, not control—unless you systematically manipulate the variable and measure the effect.
Equating control with punishment. Control means demonstrating a functional relation through any ethical method—usually reinforcement and skill teaching.
Skipping measurement because improvement seems obvious. Without data, you’re guessing. A clinician might feel the client is doing better, but the data might show frequency is unchanged or worsening even as intensity decreases. Measurement keeps you honest.
The Ethical Imperative: Control Must Respect Dignity
This is non-negotiable: scientific control must be ethical control.
Ethical control starts with informed consent or assent. The client or guardian understands what behavior is being targeted, why, what methods will be used, and what risks might occur. They can decline or withdraw consent.
Use the least restrictive, least intrusive procedures. Before restricting access to a preferred item, teach a replacement skill and reinforce it. Before using a consequence, try antecedent modification and positive reinforcement. Intrusive procedures are only justified for immediate safety and only after less restrictive options have been attempted and documented.
Social validity ensures your work actually improves the person’s life. Do the goals match what the client and family value? Are the methods acceptable and non-stigmatizing? Do outcomes matter in real contexts—home, school, community—not just the clinic? Monitor for side effects. If your intervention reduces problem behavior but damages the client’s confidence or eliminates preferred activities, that’s a red flag. Your data should include quality-of-life checks, not just frequency counts.
Practical Decision Points
When should you move from description to prediction? After you have clear, operational data across at least 3–5 days or sessions. One incident isn’t a pattern.
When should you implement an intervention? When your prediction is testable and specific. “The client will be better” isn’t testable. “Escape-maintained behavior will decrease when a break card is taught and reinforced” is testable.
How do you know you’ve achieved control? Data show reliable change only when the intervention is in place, replicated across conditions or phases. If you remove the intervention and behavior returns to baseline, that’s strong evidence. If behavior improves without your intervention, other variables may be at play.
Key Takeaways
Description, prediction, and control are connected steps: observe behavior clearly, identify reliable patterns, then intervene and measure change. All three rest on clear, observable measurement and ethical practice. Control means a demonstrated functional relation—reliable, data-based influence—not coercion or force. Use data to guide decisions, monitor for side effects, and ensure your work actually improves the client’s quality of life. When you ground your practice in these three goals, you’re honoring the science and protecting client dignity.
Next step: Review one of your active cases. Can you describe the target behavior in operational terms? Can you name a testable prediction about when or why it occurs? What data would show control? Take a few minutes to map these out. This is the foundation of accountable, effective ABA practice.