An analysis of interactive computer training on staff acquisition of MSWO preference assessment implementation

Understanding the Limits of Computer-Based Training for MSWO Preference Assessments

Interactive computer training has become a popular way to onboard staff quickly and consistently. But when it comes to running MSWO preference assessments correctly, can a computer module alone get staff to mastery? This study offers practical answers—and a clear reminder that efficient training still requires human feedback.

What is the research question being asked and why does it matter?

This study asked a simple, practical question: Can interactive computer training (ICT) alone teach staff to run an MSWO preference assessment correctly?

This matters because MSWOs are widely used to identify strong reinforcers. If staff run them incorrectly, you may select weak items—and your teaching or treatment may stall.

Many programs favor ICT because it is fast, consistent, and does not require a BCBA to train every person one-on-one. But if ICT does not lead to correct performance, it can create a false sense that staff are “trained” when they are not ready.

The study also asked two follow-up questions relevant to real clinics. First, does repeating the same ICT help staff reach mastery? Second, if ICT is not enough, what additional training is needed to achieve high accuracy? These questions help supervisors build efficient training plans without cutting corners on fidelity.

What did the researchers do to answer that question?

The researchers trained 9 staff at a residential school to run an MSWO with a 5-item array. Some were new hires with little or no ABA background. Others were already employed with basic ABA exposure but had not been trained on MSWO.

At baseline, staff read written instructions for 10 minutes and then completed a scored role-play MSWO with a confederate “student,” with no help during the test.

Next, staff completed an ICT module on a tablet. The module included written steps with pictures, audio, short videos of correct MSWO runs, and quiz questions with feedback. It also covered data recording and included a drag-and-drop activity for summarizing results. Staff were tested again after the ICT. If they did not reach mastery, they repeated the module and were tested again.

If staff still did not meet mastery, the researchers added live training. First, they gave corrective feedback on the errors from the most recent test. If that was not enough, they added a live model of the correct steps along with the feedback.

Mastery was set at 90% correct steps in a performance test. The test scored many small steps—sampling items, giving the “choose one” cue, scoring, rotating items, and calculating results. Staff also completed a short survey about their experience with the training.
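
To make that mastery criterion concrete, the arithmetic is just the proportion of checklist steps performed correctly. Here is a minimal Python sketch, assuming an illustrative checklist; the step names are hypothetical and not the study’s actual scoring sheet.

```python
# Minimal sketch of a step-by-step fidelity score.
# The checklist items below are illustrative, not the study's actual steps.
MASTERY_CRITERION = 0.90  # 90% of steps performed correctly

def fidelity_score(step_results: dict[str, bool]) -> float:
    """Return the proportion of checklist steps scored as correct."""
    return sum(step_results.values()) / len(step_results)

# Example role-play check with hypothetical step names.
observed = {
    "samples each item before trial 1": True,
    "gives the 'choose one' cue": True,
    "records the selected item": False,    # scored the wrong item
    "removes selected item from array": True,
    "rotates remaining items": False,      # skipped rotation
    "calculates selection ranks": True,
}

score = fidelity_score(observed)
print(f"Fidelity: {score:.0%}  Mastery met: {score >= MASTERY_CRITERION}")
```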

How you can use this in your day-to-day clinical practice

Do not assume ICT equals competency for MSWO implementation. In this study, only 1 of 9 staff met mastery after ICT alone. Most improved a little, but they still made many errors until a supervisor added feedback—and sometimes modeling.

A practical change: treat ICT as “pre-training,” not the final step. Plan from the start to run a performance check before the staff member uses MSWO results for clinical decisions.

Build your training flow like a ladder, moving up only when performance data show it is needed.

  • Step 1: ICT to cover basic terms, the flow of the assessment, and what the data sheet means.
  • Step 2: A brief, scored role-play check right after ICT, using the same materials staff will use in real sessions.
  • Step 3: If they are not at your fidelity goal, provide short, focused corrective feedback on the exact errors you observed.
  • Step 4: If they still struggle, model the missing or incorrect steps, then have them try again.

The key is deciding what to add based on observed performance, not time spent in training.
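
To show the ladder’s logic in one place, here is a minimal decision-flow sketch. The run_performance_check helper is a hypothetical stand-in for a scored role-play MSWO, and the tier names mirror the steps above; the code is illustrative, not the study’s procedure.

```python
# Hypothetical sketch of a performance-based training ladder.
# run_performance_check() stands in for a scored role-play MSWO check.
MASTERY = 0.90

TIERS = [
    "interactive computer training (ICT)",
    "corrective feedback on observed errors",
    "corrective feedback plus live modeling",
]

def train_to_mastery(run_performance_check) -> str:
    """Advance to the next tier only when the data say more support is needed."""
    for tier in TIERS:
        print(f"Deliver: {tier}")
        score = run_performance_check()
        if score >= MASTERY:
            return f"Mastery met after: {tier} ({score:.0%})"
    return "Below mastery after all tiers; schedule additional live training."

# Example with simulated check results for one staff member.
simulated_scores = iter([0.62, 0.81, 0.95])
print(train_to_mastery(lambda: next(simulated_scores)))
```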

Expect that repeating the same ICT may not fix the problem. In this study, a second pass through ICT usually led to only small or moderate gains. Some people did not improve much at all.

If you are behind on onboarding, repeating the module may feel efficient, but it may delay the real fix. After the first ICT, move quickly to a short role-play plus feedback. This respects staff time while protecting clients from low-fidelity assessment.

Use ICT more confidently with experienced staff, but still verify with a check-out. Staff with more experience did better than brand-new staff, and one experienced participant reached 100% after ICT alone. That does not guarantee the same result in your setting, but it suggests ICT may work best as a booster for people who already know basic teaching routines, data sheets, and prompting.

For brand-new staff, plan on live support. Do not schedule them to run preference assessments alone until they pass a performance test.

Make your feedback specific and tied to MSWO outcomes. Common errors can change the meaning of the data—not rotating items correctly, scoring the wrong item, or inconsistent sampling.

When you give feedback, connect the step to why it matters: “If we do not rotate, we might think the first item is the favorite just because it was closest.” This helps staff remember the step and reduces “robot” performance that breaks down in real sessions.

Plan for language and learning-history differences without blaming the staff member. Several participants spoke English as a second language, and the ICT was in English with no chance to ask questions during training.

You can add simple supports: allow questions after the module, review key words with pictures, and do a short “teach-back” where the staff member explains the next step before doing it. The goal is removing barriers so you are measuring skill, not reading level or test anxiety.

Do not over-trust social validity ratings. Staff reported they liked the training and felt prepared, even when their accuracy was low. “They liked the module” is not a sign they can run MSWO correctly.

Keep dignity high by separating likeability from competence: you can value their experience and still require a check-out to protect the learner and the program.

Keep your mastery standard, but apply it with context. The study used 90% accuracy—a reasonable target for an assessment that guides reinforcer choice. Still, not every step carries the same risk.

When a staff member misses mastery, look at what they missed. Errors that change the ranked order of items or break standardization should trigger more training right away. Small errors that do not change results may be coached quickly, but re-check performance before using their data for treatment planning.
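
As a concrete illustration of why some errors matter more than others, MSWO results are commonly summarized by selection order: earlier picks rank higher, and selection percentages reflect how often an item was chosen when it was presented. The sketch below uses made-up item names and data, not results from the study.

```python
# Illustrative MSWO summary: earlier selection = higher preference rank.
# Selection percentage = times selected / times presented (made-up data).
selection_order = ["bubbles", "tablet", "puzzle", "crackers", "stickers"]

for rank, item in enumerate(selection_order, start=1):
    # In a single 5-item MSWO, an item stays in the array until it is chosen,
    # so the item chosen on trial k was presented k times and selected once.
    pct = 100 / rank
    print(f"Rank {rank}: {item} (selected on {pct:.0f}% of trials presented)")

# If staff record the wrong item on trial 1, the top two items swap ranks,
# and the reinforcer chosen for teaching may change.
```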

Do not let ICT replace real practice with real materials. The ICT here had videos and quizzes but did not include full guided rehearsal with feedback inside the module. Staff improved most when feedback and modeling were added by a person.

If you adopt ICT, pair it with short, planned role-plays and performance feedback as your default package. This keeps training efficient while staying honest about what ICT can and cannot do for most staff.


Works Cited

Sherman, J., Vedora, J., Hotchkiss, R., & Colón-Kwedor, C. (2025). An analysis of interactive computer training on staff acquisition of MSWO preference assessment implementation. Journal of Organizational Behavior Management, 45(4), 307–323. https://doi.org/10.1080/01608061.2024.2438013
