F.5. Design and evaluate descriptive assessments.

Design and Evaluate Descriptive Assessments: A Practical Guide for ABA Clinicians

If you work in ABA—whether as a BCBA, clinic director, senior supervisor, or caregiver partner—you’ve probably heard that descriptive assessments are the “first step” in functional behavior assessment. But what does that actually mean, and how do you know when you’re doing it right?

This guide walks you through what descriptive assessments are, why they matter in real-world practice, and how to use them ethically and effectively. We’ll cover different ways to collect data, when to move beyond descriptive assessment, and common pitfalls that can lead teams astray.

What Is a Descriptive Assessment?

A descriptive assessment is a direct observation method in which you watch and record a target behavior and the events around it as they naturally occur—without changing anything. You’re not manipulating antecedents or consequences, and you’re not running a controlled experiment. Instead, you’re describing what you see in the client’s real environment: when the behavior happens, what comes before it, what comes after it, and any patterns you notice.

The goal is straightforward: collect objective data that helps your team understand the context of a behavior so you can generate testable hypotheses about its function. A descriptive assessment tells you what might be maintaining a behavior; it doesn’t prove why it’s happening. That distinction is crucial.

Descriptive assessments fit into the larger functional behavior assessment (FBA) process as a natural starting point. Most FBAs begin with indirect assessment (interviews and questionnaires), move into descriptive assessment (direct observation), and may eventually include experimental analysis (controlled manipulation of variables) if more certainty is needed. Each step builds on the last.

It helps to clarify where descriptive assessment sits in the landscape of assessment approaches, because clinicians and caregivers sometimes use these terms interchangeably when they’re actually quite different.

Descriptive assessment vs. indirect assessment. Indirect assessments rely on what people tell you—interviews, behavior rating scales, caregiver reports. They’re valuable for gathering context and perspective, but they depend on memory and interpretation. Descriptive assessments rely on your own eyes and ears in the moment.

Both are useful, and they often complement each other. Interview data might reveal patterns you then watch for in real time. Or your observations might show that caregiver reports don’t match what you actually see—and that mismatch becomes important information.

Descriptive assessment vs. functional analysis. Functional analysis (FA) is an experimental procedure in which you systematically change antecedents or consequences to see if the behavior changes in response. It’s the gold standard for confirming causation. Descriptive assessment, by contrast, looks at natural contingencies and lets you form hypotheses. If descriptive data suggest a behavior is attention-maintained, you have a working hypothesis—but you haven’t proven it. That’s where FA comes in.

Descriptive assessment vs. preference or competency assessments. Sometimes teams conduct assessments to identify what a client prefers (a preference assessment) or what skills they already have (a skills or competency assessment). These answer different questions. Descriptive assessments focus specifically on behavior-environment relationships.

Why This Matters in Real Practice

Here’s the honest truth: descriptive assessments can prevent costly mistakes. Many well-intentioned teams implement interventions based on a hunch or a caregiver’s report, only to find that the intervention fails or, worse, intensifies the behavior. A solid descriptive assessment grounds your team in real data before you redesign supports.

Consider a teacher who reports that a student is “disruptive during transitions.” That’s a starting point, but it’s vague. Is the student calling out? Leaving the classroom? Arguing with peers? Does it happen when transitioning from preferred activities to less preferred ones, or is it random? Do certain adults trigger it?

A structured descriptive observation across several transition periods can reveal patterns that interviews and hunches alone would miss. That clarity changes how you design an intervention—and whether it actually works.

Descriptive assessments also build team alignment. When everyone sees the same data collected the same way, it’s harder to fall back on anecdotes or opinions. Caregivers and teachers trust objective observation, and it opens better conversations about what change is really needed.

The ethical dimension matters too. Interventions based on solid observational evidence are more likely to be effective and less likely to be harmful. When you take time to observe before you act, you signal to caregivers that you’re thoughtful and precise about supporting their child.

Key Features of a Strong Descriptive Assessment

A solid descriptive assessment has several hallmarks that distinguish it from casual observation or anecdotal notes.

First, it happens in the client’s natural environment—the classroom, home, community setting, or wherever the behavior typically occurs. This is where the behavior is most authentic and where contextual factors are real.

Second, you choose and document your recording method before you start. You’re not deciding on the fly whether to count frequency, note duration, or write a narrative. Pre-planning ensures consistency and reduces bias.

Third, the focus is always on antecedents (A), behavior topography (B), and immediate consequences (C). You’re not recording everything; you’re recording what’s relevant to understanding the context.

Fourth, time-bound sampling is built in. You might observe during math instruction on Mondays, Wednesdays, and Fridays for two weeks. Or you might collect data during all transitions for five days. The point is that you’ve decided in advance how much and when you’ll observe.

Finally, you use descriptive data to form hypotheses that guide next steps, not as a final answer. If your data suggest a function, your next move might be to try a targeted intervention, run a functional analysis, or collect more information. Descriptive data open doors; they don’t close them.

Data Collection Methods in Descriptive Assessment

Descriptive assessment isn’t one method—it’s a family of methods, each with its own strengths. Choosing the right one depends on what you’re trying to understand about the behavior.

ABC Narrative Recording. This is open-ended description. You write down what happened before the behavior, describe the behavior itself, and note what happened immediately after.

Narrative data are rich and contextual. They let you capture nuance, the client’s words, setting details, and your clinical observations all in one place. The trade-off is that narrative data take time to write and analyze.

They’re especially useful when you need to understand how a behavior unfolds or when you’re unsure what details matter yet. For example, if a young child often engages in self-injury but the trigger is unclear, narrative ABC data across multiple instances might reveal a pattern—like that it happens most often when the child is denied a preferred activity, when there’s a loud noise, or when a specific adult is present.

Event Recording (Frequency/Continuous Recording). This method counts every time the target behavior happens during an observation window. It answers the question: “How many times did this happen?”

Event recording is efficient and gives you a clear frequency count. Use it when the behavior is discrete (has a clear beginning and end) and not too high-rate. If a student raises their hand in class, you can count every hand-raise. If a toddler says “no,” you can count each instance.

Event recording is less useful if the behavior is continuous (like sustained crying) or happens so often that counting becomes impossible.
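When observation windows vary in length, raw counts can mislead, so teams usually convert counts to rate (count divided by observation time). A minimal Python sketch, using made-up session data, shows why that conversion matters:

```python
# Sketch: converting event-recording counts to rate so sessions of
# different lengths can be compared. Session data are illustrative.

sessions = [
    {"label": "Mon math (20 min)", "count": 8,  "minutes": 20},
    {"label": "Wed math (45 min)", "count": 12, "minutes": 45},
]

for s in sessions:
    rate = s["count"] / s["minutes"]  # responses per minute
    print(f'{s["label"]}: {rate:.2f} responses per minute')
```

With these invented numbers, the 20-minute session has the higher rate (0.40 per minute versus about 0.27) even though its raw count is lower.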

Interval Recording. This method divides your observation into equal time blocks—say, 1-minute or 5-minute intervals—and you mark whether the behavior occurred during each interval.

There are three main flavors: partial interval (did the behavior happen at any point during the interval?), whole interval (was the behavior happening for the entire interval?), and momentary time sampling (was the behavior happening at the exact moment the interval ended?).

Interval recording is useful when you need to estimate occurrence over time without the precision of event recording. It’s also practical when observing multiple behaviors or when the behavior is high-rate. The trade-off is that interval recording can over- or under-estimate frequency depending on which flavor you use, so choose carefully based on your question.
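To make the over- and under-estimation concrete, here is a small Python sketch that scores the same 10-minute session under all three flavors. The episode timestamps are invented for illustration:

```python
# Sketch: how the three interval-recording flavors score one session.
# A 10-minute observation split into 1-minute intervals; the behavior's
# occurrence is logged as (start_second, end_second) episodes.

INTERVAL = 60   # seconds per interval
SESSION = 600   # 10-minute observation

# Hypothetical episodes of the target behavior (seconds into the session)
episodes = [(30, 45), (120, 185), (300, 360), (590, 600)]

def occurred(t0, t1):
    """True if any episode overlaps the window [t0, t1)."""
    return any(s < t1 and e > t0 for s, e in episodes)

partial, whole, momentary = [], [], []
for start in range(0, SESSION, INTERVAL):
    end = start + INTERVAL
    # Partial interval: behavior at ANY point during the interval
    partial.append(occurred(start, end))
    # Whole interval: behavior for the ENTIRE interval
    whole.append(all(occurred(t, t + 1) for t in range(start, end)))
    # Momentary time sampling: behavior at the moment the interval ends
    momentary.append(occurred(end - 1, end))

print("partial:  ", sum(partial), "of", len(partial), "intervals")
print("whole:    ", sum(whole), "of", len(whole), "intervals")
print("momentary:", sum(momentary), "of", len(momentary), "intervals")
```

With these particular made-up episodes, the behavior truly occurs for 25% of the session; partial interval scores 50% (overestimate), whole interval 20% (underestimate), and momentary time sampling 30%—a concrete picture of why the flavor you choose should match your question.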

Scatterplot. This is a visual grid that maps the occurrence (or intensity) of a behavior across times of day and multiple days. One axis shows time blocks (e.g., 8:00–8:30 a.m.), and the other shows dates. You mark each cell to show whether the behavior occurred during that block.

Scatterplots are powerful for spotting temporal patterns. Did the behavior cluster in the morning? Before lunch? During a specific subject? A scatterplot shows that at a glance. Once you identify a high-risk time, you can zoom in with more detailed ABC data.
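If your team logs episodes with timestamps (on paper or in a spreadsheet), building the grid is mechanical. Here is a minimal Python sketch; the dates, times, and half-hour block size are invented for illustration:

```python
# Sketch: building a scatterplot grid from timestamped behavior logs.
from collections import defaultdict
from datetime import datetime

# Hypothetical log: each entry is the date-time an episode was observed
log = [
    "2024-03-04 08:10", "2024-03-04 11:45",
    "2024-03-05 08:05", "2024-03-05 08:20",
    "2024-03-06 11:50",
]

grid = defaultdict(int)  # (date, half-hour block) -> count
for entry in log:
    dt = datetime.strptime(entry, "%Y-%m-%d %H:%M")
    block = dt.replace(minute=0 if dt.minute < 30 else 30)
    grid[(dt.date().isoformat(), block.strftime("%H:%M"))] += 1

# Print the grid: rows are time blocks, columns are dates
dates = sorted({d for d, _ in grid})
blocks = sorted({b for _, b in grid})
print(f"{'block':5}  " + "  ".join(dates))
for b in blocks:
    row = "  ".join(f"{grid.get((d, b), 0):>10}" for d in dates)
    print(f"{b}  {row}")
```

The same grid works on paper: rows are time blocks, columns are days, and each cell gets a mark (or count) when the behavior occurred in that block.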

In practice, teams often combine methods. You might start with a scatterplot to find when the behavior is most likely, then shift to ABC narrative during those high-risk windows.

Descriptive Assessment Within the Larger FBA Process

Descriptive assessment is one piece of a bigger puzzle. A comprehensive FBA typically includes three layers: indirect assessment (what caregivers and teachers report), descriptive assessment (what you directly observe), and sometimes experimental analysis (what you test under controlled conditions).

Indirect assessment is quick and inexpensive. You ask parents, teachers, and the client (if age-appropriate) what they think is triggering and maintaining the behavior. This gives you leads to follow and context you might not see in a brief observation window.

Descriptive assessment takes those leads and tests them in the real world. It’s more time-intensive than interviews, but safer and less intrusive than experimental analysis. You watch the behavior unfold naturally and record what you see.

Functional analysis—if you do it—is the most rigorous step. You systematically manipulate variables to confirm a function. But not every case needs FA. Many interventions can be designed and tested using careful descriptive data and clinical judgment.

The decision to move to experimental analysis depends on safety, feasibility, expertise, and how much certainty your team needs.

Common Mistakes to Avoid

Even experienced clinicians sometimes trip up on descriptive assessment. Here are the most frequent pitfalls.

Treating correlation as causation. This is the biggest one. You collect descriptive data, see that the behavior follows attention, and assume attention is the function—so you design an extinction procedure. But attention might be a coincidental correlate, not the actual driver. Descriptive data suggest function; they don’t prove it. Stay humble about what your data can tell you.

Skipping operational definitions. If you don’t define “aggression,” “on-task,” or “elopement” in concrete, observable terms before you start, different observers will record different things. Consistency falls apart. Before you collect a single data point, write down exactly what the behavior looks like: “Aggression is hitting, kicking, or biting another person with force sufficient to make contact.”

Using the wrong method for the question. If you need to know frequency, event recording or ABC continuous recording is better than interval. If you need to know when it happens, scatterplot is your friend. If you need rich context, narrative is key. Think about your question first, then pick your tool.

Observing too little, or too much, without a plan. If a behavior is rare, a single 10-minute observation window might miss it entirely. If a behavior is high-rate, three weeks of continuous narrative recording will exhaust your team. Predefine your sampling plan based on expected behavior frequency and variability.

Ignoring interobserver agreement (IOA). If only one person is collecting data and no one’s checking their work, bias and drift creep in. Have a second observer collect data on at least 20% of sessions, and calculate agreement. If it’s below 80%, retrain and troubleshoot. High IOA is a sign of reliable, credible data.

How to Collect Descriptive Data Well: Ethical and Practical Essentials

Before you pick up a clipboard, make sure you’ve handled the foundational pieces.

Obtain informed consent. Explain to caregivers and the client (at an age-appropriate level) what you’ll be observing, why, how long it will take, where it will happen, and how you’ll use the information. Consent should be voluntary, with the right to withdraw without penalty. If methods change—like moving from in-person to video observation—update consent.

Protect privacy and secure data. Observations should happen in private settings when possible. Notes should be de-identified (use initials or case numbers, not full names). Store data securely—encrypted digital files, locked cabinets for paper—and limit access to people directly involved in the client’s care.

Train observers and check reliability. Even trained clinicians need to calibrate on each case. Walk observers through operational definitions, show examples of what counts and what doesn’t, and practice coding sample videos or real situations together. Once data collection starts, periodically have a second observer collect data independently. Calculate IOA using a method appropriate to your data type. If agreement is below 80%, stop, retrain, and restart.
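The arithmetic itself is simple. Here is a sketch of two standard IOA formulas (total-count IOA and interval-by-interval IOA) applied to made-up observer records:

```python
# Sketch: two common IOA calculations from the ABA literature.
# The observer records below are invented for illustration.

def total_count_ioa(count_a, count_b):
    """Total-count IOA: smaller count / larger count * 100."""
    return min(count_a, count_b) / max(count_a, count_b) * 100

def interval_ioa(obs_a, obs_b):
    """Interval-by-interval IOA: agreements / total intervals * 100."""
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a) * 100

# Two observers' interval records for the same session (True = occurred)
primary   = [True, False, True, True,  False, False, True, False, True, True]
secondary = [True, False, True, False, False, False, True, False, True, True]

print(f"Total-count IOA: {total_count_ioa(sum(primary), sum(secondary)):.1f}%")
print(f"Interval IOA:    {interval_ioa(primary, secondary):.1f}%")
```

With these invented records, total-count IOA comes out to about 83.3% and interval-by-interval IOA to 90.0%, so this pair would clear the 80% threshold.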

Plan for bias reduction. Observers are human, and they see what they expect to see. Assign observers who don’t have strong preexisting opinions about the behavior. Use structured forms (like an ABC sheet) rather than blank paper, which reduces the temptation to editorialize. Blind observers to hypotheses if possible—they should record what they see, not what they think should happen.

When to Move Beyond Descriptive Assessment

Descriptive assessment is a strong starting point, but it’s not always the final step. There are clear decision points where your team might benefit from moving into experimental analysis.

If your descriptive data are ambiguous—if multiple functions seem possible or the patterns are scattered—a functional analysis can narrow it down. FA removes the noise of natural contingencies and tests one thing at a time under controlled conditions.

If time is critical, FA can sometimes yield answers faster than extended descriptive data collection. If your client’s behavior is rare or extremely dangerous, a standard FA might not be safe, but modified versions (like latency FA or precursor-focused FA) might work.

If your team’s expertise and resources allow it, and your client’s safety is assured, a controlled FA often leads to more precise, function-based interventions and better outcomes.

That said, functional analysis isn’t mandatory or always appropriate. For many clients, descriptive data plus informed clinical judgment and careful, minimally invasive trials are sufficient. The decision should be made as a team, with caregiver input, and always with safety and dignity at the center.

Bringing It Together: Examples in Real Settings

Let’s see how this works in two realistic scenarios.

Scenario 1: School-based student with disruptive behavior during transitions.

A teacher reports that a fifth-grader, Maya, frequently yells and refuses to move when it’s time to transition from math to reading. The team decides to observe Maya during all transitions over two weeks, using an ABC narrative form across three transition periods per day.

What they find: Maya yells and refuses most often when transitioning from a preferred activity (math with a certain peer buddy) to a less preferred activity (independent reading work). When an adult gives a 2-minute warning, she’s more compliant. When she’s seated near a friend, the behavior drops.

The hypotheses: Maya may be escape-motivated (avoiding non-preferred work) and/or attention-motivated (wanting connection before transitions). The team designs a transition support plan: a buddy system, increased adult attention during warnings, and a preferred task to start reading time. They track whether this works before considering more intensive intervention.

Scenario 2: Home-based client with elopement behavior.

A parent reports that their 7-year-old sometimes runs toward the door without warning, creating a safety risk. The behavior feels unpredictable.

The team uses a scatterplot across a week, recording whether elopement occurred during each half-hour block. The scatterplot reveals that most elopements happen between 4:00 and 6:00 p.m.—exactly when the parent is preparing dinner and the child has been in school all day.

They then zoom in with event recording and ABC notes during that window for another week. The pattern: elopement happens most often when the parent is busy and the child is bored or seeking connection. The team hypothesizes the function is escape (from boredom) and/or attention (from a busy caregiver).

They suggest structured after-school snack time, a clear activity menu, and scheduled parent-child connection before dinner prep. The descriptive data guided practical, low-intensity supports that addressed the real-world context.

Key Takeaways

Descriptive assessments are your foundation for understanding behavior in context. They give you objective data, not opinions. But they show you patterns and possibilities, not proof. Treat descriptive data as the beginning of a conversation, not the end.

Choose your recording method to match your question: narrative for rich context, event recording for frequency, interval recording for efficient occurrence sampling, and scatterplots for temporal patterns. Combine methods if it serves your team.

Plan for reliability from day one. Clear operational definitions, observer training, IOA checks, and ethical safeguards—consent, privacy, data security—are non-negotiable. They protect your client and strengthen your data.

Use descriptive findings to form testable hypotheses and design initial supports. If those supports work, great. If the picture remains unclear or your team needs stronger evidence, you have the information to decide whether a functional analysis is warranted.


As you review your current assessment practices, ask yourself: Are we collecting descriptive data systematically, or relying on anecdotes? Do our operational definitions match across observers? Have we obtained consent and explained why we’re watching? Start with these fundamentals—and watch how much clearer the picture becomes.
