Using Nonexemplar Video Modeling to Improve Staff Training
Training staff to implement ABA procedures with high fidelity is one of the most persistent challenges in clinical practice. This article examines recent research on whether showing common mistakes—clearly labeled as errors—alongside correct models can improve procedural integrity. The findings offer practical guidance for clinics looking to strengthen their video-based training without adding significant time.
What is the research question being asked and why does it matter?
The question is whether short video trainings work better when they show only the “right way” to do a procedure, or when they also show common mistakes and label them as errors.
This matters because staff often learn new procedures quickly, and then small errors show up later—during the day, later in the week, or when no trainer is watching. Those small errors add up. They change what the learner experiences: missed praise, unclear instructions, or messy preference assessment results.
The study focused on procedural integrity—how closely someone follows the steps of a procedure. High integrity ensures the learner gets support as planned and helps your data mean what you think it means.
Many clinics want training that is quick and doesn’t require a trainer present the whole time. But “quick” training can miss the parts that prevent common errors.
The researchers also wanted to know if adding “mistake examples” (nonexemplars) would take too much extra time. If it adds only a little time but helps staff avoid common errors, that could be a good tradeoff for busy teams—especially when you need to train many new staff and can’t do full behavioral skills training with lots of practice and feedback every time.
What did the researchers do to answer that question?
They recruited six college students who were new to these procedures. Sessions were conducted over Zoom, and the “learner” was a trained research assistant using a script, not an actual client.
Participants ran three procedures: discrete trial teaching (DTT), a multiple stimulus without replacement (MSWO) preference assessment, and a free operant (FO) preference assessment that served as a control.
At baseline, participants had only written instructions, and their procedural integrity was low.
Then each person watched two short video model trainings with voice-over instruction—one for DTT and one for MSWO. For one procedure, they watched an exemplars-only video (only correct steps). For the other, they watched an exemplars-plus-nonexemplars video (correct steps plus clips showing common mistakes, clearly marked as “do not do this”). The assignment was counterbalanced so each procedure was trained both ways across participants.
After watching each video once, participants ran the procedure again. The researchers measured integrity using task analyses (16 steps for DTT, 17 for MSWO, 11 for FO). They also looked at which steps were most often missed, checked for carryover between procedures, and conducted follow-up checks at two and five weeks for most participants.
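Procedural integrity here is simply the percentage of task-analysis steps implemented correctly. Here is a minimal sketch of that scoring, assuming a per-step yes/no checklist; the step names and data structure are illustrative, not from the study:

```python
# Score procedural integrity as the percentage of task-analysis steps
# implemented correctly. Step names and data below are illustrative only.

def integrity_score(steps_correct: dict[str, bool]) -> float:
    """Percent of task-analysis steps implemented correctly."""
    if not steps_correct:
        raise ValueError("checklist is empty")
    return 100 * sum(steps_correct.values()) / len(steps_correct)

# Hypothetical MSWO probe scored against a task analysis
# (only a few steps shown here for brevity).
mswo_probe = {
    "arrange array in a line": True,
    "deliver clear instruction": True,
    "remove selected item from array": False,  # a common error
    "record selection order": True,
}
print(f"Integrity: {integrity_score(mswo_probe):.0f}%")  # Integrity: 75%
```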
What did the researchers find?
Both kinds of video training improved integrity substantially for most participants. Training with nonexemplars led to slightly better performance for five of six participants, and errors on the targeted “common mistake” steps were less frequent when those mistakes had been shown and labeled during training.
Results should be interpreted carefully: this was remote, with simulated learners, and a small group of novice trainees.
How you can use this in your day-to-day clinical practice
If you already use video models to train RBTs, consider adding a short “common mistakes” section instead of only showing perfect performance. In this study, the nonexemplar videos were brief and clearly labeled as errors, and that small addition was linked with higher integrity for most trainees.
For a busy clinic, this suggests a practical change: keep your main “correct model” video, but add a few short clips that say, in plain language, “Do not do this,” and then show the mistake. The goal isn’t to shame staff—it’s to make the differences easy to see before practicing with a learner.
Pick your nonexemplars based on real errors you see in your setting, not just what looks good in a training. The researchers chose three common mistakes per procedure by asking experienced BCBAs what new staff often miss.
You can do the same by reviewing your fidelity checklists, error correction notes, and recurring supervision comments. Start with errors that are both common and meaningful for learner outcomes: missed praise, unclear SDs, skipped error correction steps, or incorrect preference assessment data recording. This keeps the video short and focused—which matters when staff are watching on a phone between sessions.
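If your fidelity checklists are already digital, picking nonexemplar targets can be as simple as tallying which steps are missed most often. A rough sketch, assuming each observation is recorded as step name to pass/fail; all step names here are made up for illustration:

```python
from collections import Counter

# Each observation maps a step name to whether it was done correctly.
# These observations and step names are hypothetical.
observations = [
    {"deliver praise within 3s": False, "present clear SD": True,  "record trial data": False},
    {"deliver praise within 3s": False, "present clear SD": False, "record trial data": True},
    {"deliver praise within 3s": True,  "present clear SD": False, "record trial data": False},
]

error_counts = Counter(
    step for obs in observations for step, correct in obs.items() if not correct
)

# The most frequently missed steps are candidate nonexemplar clips.
for step, misses in error_counts.most_common(3):
    print(f"{step}: missed in {misses} of {len(observations)} observations")
```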
Use nonexemplars especially for steps that are easy to skip when people feel rushed. In this study, some steps stayed “sticky” as errors even after training—like ready prompts and parts of data recording or ranking.
In real practice, those are often the steps staff drop first when the environment gets busy. If you have a procedure where staff do the main part correctly but miss the “bookends” (setup, clear instruction, quick data, reset), those are strong targets for nonexemplar clips.
When building mistake clips, make them visually obvious and pair them with a simple fix. The researchers used voice-over plus on-screen symbols to highlight errors as they happened.
You can copy this approach without fancy editing by adding clear text like “Mistake: did not remove item from array” and then “Fix: remove it right away before the next trial.” Keep each mistake clip focused on one step. Avoid long scenes with multiple errors—staff may not know what to focus on.
Don’t rely on videos alone for all staff or all skills. One participant had variable DTT performance even after training, a reminder that some people need practice and feedback to stabilize performance.
Use video modeling as a first layer, then decide who needs additional support. A practical rule: pair video training with at least one short competency check—a role-play or live fidelity probe—so you can see who’s ready to work with clients and who needs coaching.
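As one concrete version of that rule, you could gate client-facing work on a single post-video probe score. A sketch, assuming a clinic-chosen mastery threshold; the 90% cutoff is an example policy, not a figure from the study:

```python
# Decide the next training step from one post-video fidelity probe.
# The 90% mastery threshold is an illustrative clinic policy choice.
MASTERY_THRESHOLD = 90.0

def next_step(probe_score: float) -> str:
    if probe_score >= MASTERY_THRESHOLD:
        return "cleared for supervised client sessions; schedule follow-up probe"
    return "schedule role-play with feedback, then re-probe"

print(next_step(94.0))
print(next_step(76.0))
```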
Use short follow-up checks, not just a one-time training sign-off. In the study, performance for a couple of participants in the exemplars-only condition had dropped by the five-week follow-up.
In real clinics, drift is common, especially when procedures aren’t used every day. Plan a quick integrity probe two to five weeks after training for high-risk procedures. If you see drift, assign a “booster” video targeting the exact steps that slipped, rather than re-teaching the whole procedure.
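One way to make booster assignments step-specific is to compare the post-training probe with the follow-up probe and flag only the steps that slipped. A minimal sketch with hypothetical step names:

```python
# Flag task-analysis steps that were correct right after training
# but missed at the 2-5 week follow-up probe. Step names are hypothetical.

def slipped_steps(post_training: dict[str, bool], follow_up: dict[str, bool]) -> list[str]:
    return [step for step, was_correct in post_training.items()
            if was_correct and not follow_up.get(step, False)]

post = {"ready prompt": True, "clear SD": True, "record data": True}
later = {"ready prompt": False, "clear SD": True, "record data": False}

# These steps become the targets for a short booster video.
print(slipped_steps(post, later))  # ['ready prompt', 'record data']
```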
Be careful about generalizing these results to all staff and real learners. These were college students, not DSPs or RBTs working under real time pressure, and the “learner” was scripted and couldn’t show the full range of challenging behavior.
Treat this as a helpful training design idea, not proof that it will fix integrity problems for every team. Before changing your whole training system, pilot the exemplars-plus-nonexemplars format with one procedure and measure both fidelity and staff experience.
Finally, keep learner dignity at the center when training staff. Nonexemplars should focus on the staff action, not make fun of a learner response or show harsh interactions.
If you add mistake clips, make sure the “wrong way” doesn’t include unsafe or disrespectful behavior, even as a demonstration. The point is to help staff notice and avoid common errors so the learner gets clearer teaching, better reinforcement, and more accurate assessments.
Works Cited
Bartle, G. E., Ruby, S. A., & DiGennaro Reed, F. D. (2025). The effects of video modeling containing different exemplar types on procedural integrity. Journal of Organizational Behavior Management. https://doi.org/10.1080/01608061.2025.2476425