Design and Apply Discontinuous Measurement Procedures: A Practical Guide for Busy ABA Settings
If you’re a BCBA, clinic director, or senior RBT working in a classroom, group setting, or clinic where one-on-one observation isn’t realistic, you know the challenge: you need reliable data on behavior, but continuous recording is often impossible. That’s where discontinuous measurement procedures—interval recording and time sampling—become your working solution. These methods let you collect meaningful data even when you’re juggling multiple learners, managing staff, or balancing clinical duties with administrative tasks.
In this article, we’ll walk through what discontinuous measurement is, why it matters in real ABA work, how to choose the right method for your situation, and how to report it ethically so your data actually guides good decisions.
What Is Discontinuous Measurement?
Discontinuous measurement samples behavior at intervals rather than recording every single occurrence. Instead of watching a learner’s off-task behavior for a full 30-minute lesson and counting every instance, you divide the lesson into smaller time chunks—say, thirty-second intervals—and check whether the behavior happened during each chunk. Your result is an estimate of how often or how long the behavior occurred, not an exact count.
This method trades precision for feasibility. In exchange for less granular data, you gain the ability to collect information on multiple learners, conduct classroom observations, or oversee treatment in a realistic, sustainable way. But here’s the critical part: because interval data are estimates, you need to understand their limits and report them honestly to your team and families.
The Three Main Discontinuous Procedures
All discontinuous methods divide observation time into equal intervals, but they differ in how you score each interval. Let’s break down partial-interval recording, whole-interval recording, and momentary time sampling—and when you’d actually choose one over the others.
Partial-Interval Recording (PIR)
With partial-interval recording, you mark an interval as a “yes” if the behavior occurs at any time within that interval, even for just a moment.
Say you’re monitoring off-task behavior in a busy classroom, and you set 30-second intervals over a 10-minute lesson. If a child glances away from the assignment for just 3 seconds during interval 1, that entire interval gets marked as a positive instance of off-task behavior—even if the child was on-task for the other 27 seconds.
This method tends to overestimate how much time the behavior actually occupied, because even a brief dip gets treated like a full-interval occurrence. That said, partial-interval recording is excellent for capturing brief or high-frequency behaviors without needing constant eyes-on-target. Behaviors targeted for *reduction*—interruptions, elopement attempts, off-task glances—are often tracked this way because you want to detect any occurrence, not measure sustained duration.
Whole-Interval Recording (WIR)
Whole-interval recording does the opposite. You mark an interval as a “yes” only if the behavior is present for the entire duration of that interval.
Using the same classroom example, if a child drifts off-task for even 5 seconds during a 1-minute interval, that interval does not count as on-task—even if the child was engaged for the other 55 seconds. This method underestimates total time engaged in behavior, because any brief interruption disqualifies the whole interval.
Whole-interval recording is your choice when you need to verify sustained behavior—staying seated for a full work block, maintaining attention throughout a transition, or meeting a fluency target for an entire timing period. It’s more conservative and answers the question: “For how long was this learner genuinely, continuously engaged?”
Momentary Time Sampling (MTS)
Momentary time sampling works differently. Instead of monitoring behavior across an interval, you check whether the behavior is occurring at one specific moment—the end of each interval.
You might set 2-minute intervals during a group activity and glance over at the target learner right at the 2-minute mark, the 4-minute mark, and so on. You record whether the learner is engaged at that exact instant. Over a 20-minute session with ten 2-minute intervals, you might observe engagement at eight of those snapshots, giving you an 80% “momentary engagement” estimate.
Momentary sampling’s accuracy depends heavily on whether your sampling moments happen to land near actual behavior changes. It’s less granular than interval recording, but it’s often the most practical option when you’re teaching a group and need quick, low-intrusion checks on multiple learners’ engagement at once.
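The biases described above are easy to see in a quick simulation. The sketch below is purely illustrative (the timeline, interval length, and behavior pattern are invented for the example): it scores the same second-by-second record of off-task behavior with all three methods, showing PIR overestimating, WIR underestimating, and MTS missing brief episodes entirely.

```python
# Illustrative sketch: scoring one simulated 10-minute observation with
# PIR, WIR, and MTS. Behavior is represented second-by-second as
# True (occurring) or False (not occurring).

INTERVAL = 30  # seconds per interval

def score_pir(timeline, interval=INTERVAL):
    """Partial-interval: an interval is positive if the behavior
    occurred at ANY point within it."""
    chunks = [timeline[i:i + interval] for i in range(0, len(timeline), interval)]
    return sum(any(c) for c in chunks) / len(chunks)

def score_wir(timeline, interval=INTERVAL):
    """Whole-interval: positive only if the behavior filled the ENTIRE interval."""
    chunks = [timeline[i:i + interval] for i in range(0, len(timeline), interval)]
    return sum(all(c) for c in chunks) / len(chunks)

def score_mts(timeline, interval=INTERVAL):
    """Momentary time sampling: check only the final second of each interval."""
    moments = timeline[interval - 1::interval]
    return sum(moments) / len(moments)

# 600 seconds of simulated off-task behavior: a brief 5-second episode
# at the start of every minute, so true off-task time is 50 of 600
# seconds (about 8%).
timeline = [(t % 60) < 5 for t in range(600)]

print(f"Actual: {sum(timeline) / len(timeline):.0%}")  # 8%
print(f"PIR:    {score_pir(timeline):.0%}")            # 50% -- overestimates
print(f"WIR:    {score_wir(timeline):.0%}")            # 0%  -- underestimates
print(f"MTS:    {score_mts(timeline):.0%}")            # 0%  -- snapshots miss brief episodes
```

Note how dramatic the divergence is for a brief, frequent behavior: PIR reports 50% of intervals even though the behavior filled only about 8% of the session, while WIR and MTS both report zero. This is exactly why method choice and interval length must be reported alongside the numbers.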
Discontinuous Measurement vs. Continuous Measurement
Continuous measurement records every instance of a behavior—every time a learner raises their hand, every second of silence, every transition. It’s precise, but it requires dedicated observation time or technology like video review.
Discontinuous measurement gives you a sample—a snapshot estimate—instead. You trade some accuracy for real-world feasibility. The question isn’t which is “better”; it’s which fits your decision-making needs and your actual capacity.
If you’re evaluating whether a new reinforcement protocol is improving task completion across a classroom, partial-interval data on engagement will inform that decision. If you’re deciding whether a learner has met a mastery criterion requiring sustained performance, continuous duration data or whole-interval data is more appropriate because brief interruptions matter.
Why This Matters in Your Practice
Discontinuous measurement exists because continuous observation is often impossible in real settings. A classroom teacher can’t stop teaching to count every off-task glance. A clinic director can’t watch every room simultaneously. Supervisors juggle scheduling, paperwork, and compliance alongside clinical oversight. These methods make data collection possible in those contexts.
Beyond feasibility, interval methods offer important flexibility. You can measure group-level engagement, adjust interval length based on behavior patterns, and scale observations across multiple learners without deploying an observer per child. In a group ABA program or a school where you’re supporting multiple classrooms, that scalability is essential.
But here’s where the ethical stakes come in: misusing these methods can lead to bad decisions. If you interpret a partial-interval estimate of 40% as “the learner was off-task for exactly 40% of the time” when intervals were very long, you’ve overstated what you know. If you use momentary sampling to detect a rare behavior that might only happen once or twice per session, you’ll likely miss it entirely and conclude it’s not a problem when it is. And if you base a high-stakes decision—like restraint, seclusion, or skill sign-off—on unclear or inappropriately chosen interval data, you’ve put your learner and your practice at risk.
That’s why documenting your method, checking observer reliability, and being transparent about limitations matter so much.
Choosing the Right Method: A Practical Guide
Start with your behavior and your question.
For brief, high-frequency, or disruptive behaviors where you want to detect presence: use partial-interval recording. You’re asking, “Did this behavior happen at all during each interval?” PIR will catch it.
For sustained, continuous, or engagement behaviors where duration matters: use whole-interval recording. You’re asking, “Was the learner engaged for the whole interval?” WIR gives you that conservative estimate.
For group observations, quick checks, or when continuous monitoring is genuinely impractical: use momentary time sampling. You’re asking, “What’s the status right now?” MTS fits busy, multitasking environments.
Your next consideration is interval length. Very short intervals—10 to 30 seconds—give you more precision for fast-moving behaviors but require intense focus. Longer intervals—3, 5, or even 10 minutes—reduce observer workload but increase bias (partial-interval overestimation and whole-interval underestimation both get worse with longer intervals). A practical guideline: favor intervals of 3 minutes or less when possible, balancing accuracy with what your staff can realistically execute.
Finally, consider your observers’ capacity. If your RBTs are also running a session or managing a classroom, they’ll do better with momentary sampling or longer intervals than with minute-by-minute partial-interval recording. If you have a dedicated data collector or can use video review, shorter intervals and more granular methods become feasible.
How to Set Up an Interval Recording Observation
Here’s what good preparation looks like:
Define your behavior operationally. Not “off-task” but “eyes not on assigned task, paper, or teacher.” Not “stereotypy” but “hand-flapping, spinning, or vocal stimming lasting longer than 3 seconds.”
Choose your method and interval length based on what we’ve covered above.
Determine total observation time. How long will you observe in one session? 10 minutes? 30 minutes? Be realistic—if you plan a 30-minute observation but classroom demands interrupt after 15, you’ll have gaps in your data.
Set up your data sheet. Draw or print a simple grid with intervals listed down the left side. As you observe, mark each interval + (behavior present) or − (not present) according to your method’s rule. Many programs now use digital recording apps, which can speed up calculation and reduce transcription errors.
Train your observers. If multiple staff are collecting data, practice together on the same video clip or live session. Independently score the same intervals and compare. This builds confidence and reveals confusion before real data collection starts.
Understanding Interval Data and Its Limits
Your final data will be a percentage of intervals with the behavior. If you collected 40 intervals and marked 12 as positive, you have 30% of intervals with the behavior.
From there, you can estimate duration by multiplying: 12 intervals × 30 seconds per interval = 360 seconds, or 6 minutes of estimated off-task time.
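The arithmetic above can be written as a few lines of code. This is a minimal sketch using the same numbers as the example in the text (40 intervals of 30 seconds, 12 marked positive):

```python
# Minimal sketch: converting interval marks into a percentage of
# intervals and a rough duration estimate. Numbers mirror the example
# in the text: 40 intervals of 30 seconds, 12 scored positive.

interval_seconds = 30
marks = [True] * 12 + [False] * 28  # 12 of 40 intervals positive

pct_intervals = sum(marks) / len(marks)          # 0.30
est_minutes = sum(marks) * interval_seconds / 60  # 6.0

print(f"{pct_intervals:.0%} of intervals")        # 30% of intervals
print(f"~{est_minutes:.0f} min estimated")        # ~6 min estimated
```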
But notice the word “estimated.” If you used partial-interval recording, that 6 minutes is likely an overestimate—the actual off-task time was probably shorter because single-second glances were counted as full intervals. If you used whole-interval, 6 minutes is likely an underestimate because brief interruptions eliminated whole intervals from the count. If you used momentary sampling, it’s a rough snapshot, not a precise duration.
These biases aren’t failures—they’re inherent to the method and acceptable when you’ve chosen thoughtfully. But they must be reported and acknowledged. When you present data to a team, parent, or supervisor, say: “Using partial-interval recording with 30-second intervals, off-task behavior occurred in 30% of observed intervals, suggesting approximately 6 minutes of off-task behavior, though this is likely an overestimate.” That honesty is more useful than a false sense of precision.
Ensuring Reliability: Interobserver Agreement
If multiple staff are collecting discontinuous data, you need to verify they’re using the method the same way. That’s interobserver agreement, or IOA.
The most common approach is interval-by-interval agreement: both observers score the same intervals independently, then compare. You count how many intervals they agreed on (both said + or both said −) and divide by total intervals. If two observers agreed on 36 out of 40 intervals, that’s 90% IOA.
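Interval-by-interval agreement is simple enough to compute by hand, but a short function makes the calculation explicit. This is a sketch with invented observer records chosen to reproduce the 36-of-40 example from the text:

```python
# Sketch of interval-by-interval IOA: compare two observers' interval
# scores position by position, then divide agreements by total intervals.

def interval_ioa(obs_a, obs_b):
    """Return the proportion of intervals on which two observers agree."""
    if len(obs_a) != len(obs_b):
        raise ValueError("Observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return agreements / len(obs_a)

# Hypothetical records: observer B disagrees on 4 of 40 intervals,
# matching the 36-of-40 (90%) example in the text.
observer_a = [True] * 20 + [False] * 20
observer_b = [True] * 16 + [False] * 24

print(f"IOA: {interval_ioa(observer_a, observer_b):.0%}")  # IOA: 90%
```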
Aim for 80% or higher, with 85–90%+ preferred for critical decisions. Collect IOA on a representative sample—roughly 20–33% of sessions across phases—not just once and then assume all data are good.
If IOA is low, the culprit is usually an unclear behavior definition, inconsistent interval timing, or observer fatigue. Circle back, clarify, retrain, and try again. Strong IOA is your signal that the data are trustworthy.
Ethical Reporting and Documentation
When you use discontinuous measurement, document:
- The target behavior and operational definition—exactly what you’re looking for.
- Method chosen (PIR, WIR, or MTS) and why.
- Interval length and total observation time—so anyone reviewing the data understands the sampling design.
- Scoring rule—the exact criterion you used for marking an interval positive.
- Any deviations—if an observation was interrupted, note it.
- IOA results—the percentage of agreement and observer pairs.
- Limitations—whether the method overestimates, underestimates, or depends on timing.
This level of detail isn’t busywork; it’s the foundation of accountability. It allows supervisors to audit your data, allows you to explain your method if decisions are questioned, and protects your learner by ensuring data quality.
Most importantly, it signals to everyone involved—your team, the family, the referring physician—that you take measurement seriously and won’t hide the compromises inherent in real-world data collection.
Common Mistakes and How to Avoid Them
Treating interval percentages as exact counts or durations. They’re estimates. Say “approximately” when you’re doing the math, and always report the method and interval length alongside the number.
Confusing partial- and whole-interval definitions. The distinction is simple but easy to blur in the moment: PIR scores an interval positive if the behavior happened at all; WIR scores it positive only if the behavior lasted the entire interval. Spend two minutes with your team clarifying your rule before the first observation.
Using very long intervals for brief behaviors. A 5-minute interval for hand-flapping? You’ll miss it. Match your interval to your behavior’s speed. Fast behavior, short intervals.
Ignoring interval length when comparing across phases. If you collected baseline data with 1-minute intervals and intervention data with 5-minute intervals, the comparison is apples-to-oranges on the graph. Keep intervals consistent or note the change clearly.
Collecting IOA only once, then assuming all subsequent data are solid. Spot-check regularly. Observer drift happens—people get tired, shortcuts creep in, definitions get fuzzier. Periodic IOA is your quality-control checkpoint.
When Discontinuous Measurement Isn’t Enough
There are moments when discontinuous methods fall short, and you need continuous or event-based measurement instead.
If a behavior is rare and critical—a learner’s elopement attempts occur once or twice per week—momentary time sampling will probably miss them. Count every instance (event recording) instead.
If you’re awarding a significant reinforcer or making a major clinical decision based on sustained performance—a learner has “mastered” a skill and can move to the next goal—don’t rely on whole-interval estimates alone. Add continuous timing or direct observation of full-criterion performance to confirm.
If behavior duration is central to the intervention—you’re building fluency and speed matters—measure continuous duration, not intervals.
Discontinuous methods are tools for specific jobs. Know what they can and can’t do, and choose the tool that matches the decision.
Key Takeaways
Discontinuous measurement makes data collection feasible in real, busy ABA settings. Choosing the right method—partial-interval for detecting brief behaviors, whole-interval for confirming sustained engagement, momentary sampling for quick group checks—depends on your behavior pattern and your environment. Always document your procedures, check interobserver reliability regularly, and be transparent about the biases and limits of your chosen method. When you do that consistently, interval data become a credible part of your clinical decision-making, not a guess dressed up as precision.
As you review your own measurement practices, ask: Are my observers clear on which method we’re using and why? Do I have recent IOA data? Am I reporting limitations alongside results? If the answer to any of those is no, a quick team huddle to clarify procedure and a refresher IOA check will pay dividends in data quality and team confidence.



