Understanding Agreement Between Supervisors and Employees on Performance Assessments
When staff performance falls short, figuring out why is half the battle. The Performance Diagnostic Checklist–Human Services (PDC-HS) is a common tool for identifying barriers—but whose answers should guide your next steps? This study compared what supervisors and employees reported when asked about the same performance problem. The findings have practical implications for how ABA leaders diagnose workplace issues and choose interventions.
What is the research question being asked and why does it matter?
This study asked a straightforward question: when there’s a staff performance problem, do the supervisor and the employee identify the same root cause on the PDC-HS 1.1?
This matters because the PDC-HS helps determine what kind of fix to try first—more training, clearer expectations and prompts, better resources and processes, or stronger feedback and consequences. If supervisors and employees usually agree, you might only need one interview. If they often disagree, you may need both viewpoints before choosing an intervention.
The performance problem in this study is common in ABA agencies: BCBAs not reporting procedural fidelity data as expected. In practice, this kind of issue can stem from many causes—time constraints, unclear expectations, weak prompts, or missing consequences.
Pick the wrong cause, and you waste time on an intervention that doesn’t match the real barrier. The core question: whose report should you trust when figuring out why the work isn’t happening?
What did the researchers do to answer that question?
The researchers recruited 10 BCBA–supervisor pairs from a large home and community-based ABA agency. The BCBAs were the employees with the performance concern. The supervisors were Senior Behavior Analysts who oversaw them. None had used the PDC-HS before, though some had heard of it.
The first author interviewed each BCBA and supervisor separately using the PDC-HS 1.1 over secure teleconferencing. BCBAs answered questions about their own barriers. Supervisors answered the same questions about that BCBA’s barriers. Neither saw the other’s answers, and no one received feedback—this was purely about comparing reports.
Trained graduate students scored the forms, converting answers into scores for each of the four PDC-HS domains (Training; Task Clarification and Prompting; Resources, Materials, and Processes; and Performance Consequences, Effort, and Competition), ranked from least to most “indicated.” The main outcome was whether the BCBA and supervisor agreed on the top-ranked domain. The researchers also calculated a rank correlation for each pair, showing how similar the full pattern of rankings was across all four domains.
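The rank-correlation step can be sketched with the standard Spearman formula for tie-free rankings. The rankings below are invented for illustration and are not the study's data; with only four domains per pair, the calculation is simple enough to do by hand or in a few lines of code.

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two tie-free rankings (1..n)."""
    n = len(ranks_a)
    # Sum of squared differences between paired ranks
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Hypothetical pair: BCBA and supervisor each rank the four domains
# from 1 (least indicated) to 4 (most indicated).
bcba_ranks = [1, 2, 3, 4]
supervisor_ranks = [1, 2, 4, 3]
print(spearman_rho(bcba_ranks, supervisor_ranks))  # -> 0.8
```

A rho near 1.0 means the two people ordered the domains almost identically even if their raw scores differed; a rho near zero or negative means their overall pictures of the problem diverge.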
How you can use this in your day-to-day clinical practice
Don’t assume agreement. In this study, only 6 of 10 pairs agreed on the most indicated domain. That’s “sometimes yes, sometimes no”—not “close enough every time.” If you can only interview one person, treat the results as a starting point, not a final diagnosis. Avoid jumping from one interview to one intervention without a quick reality check.
When possible, interview both people. Compare outputs intentionally. If the top domain matches, you can have more confidence you’re targeting the right problem area. If they don’t match, don’t average scores and hope for the best. Treat the mismatch as useful information—it suggests the situation looks different depending on where someone sits in the system, or that one person may not have full contact with the real barriers.
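As a concrete sketch of that comparison, the check below asks whether two completed interviews point at the same top domain. The domain names and scores are hypothetical, and real PDC-HS scoring has more nuance than a single max; this only illustrates the decision logic described above.

```python
# Hypothetical domain scores from two PDC-HS 1.1 interviews about
# the same performance problem (higher = more indicated).
bcba = {
    "Training": 1,
    "Task Clarification and Prompting": 2,
    "Resources, Materials, and Processes": 3,
    "Performance Consequences": 4,
}
supervisor = {
    "Training": 1,
    "Task Clarification and Prompting": 2,
    "Resources, Materials, and Processes": 4,
    "Performance Consequences": 3,
}

def most_indicated(scores):
    """Return the domain with the highest (most indicated) score."""
    return max(scores, key=scores.get)

if most_indicated(bcba) == most_indicated(supervisor):
    print("Top domains match: target that domain first.")
else:
    print("Top domains differ: add direct checks before intervening.")
```

In this invented example the top domains differ, which under the guidance above would trigger observation and record review rather than an immediate intervention.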
Build in direct checks before selecting an intervention, especially when interviews disagree. If the concern is “fidelity data aren’t being reported,” look at timestamps, caseloads, where reporting sits in the workflow, and what prompts exist at the moment the behavior should happen. Check what happens after reporting: does anyone comment? Does it change anything? Or does it disappear into a database with no feedback?
This study couldn’t confirm which domain was “true,” so don’t treat PDC-HS outcomes as facts. Use them as hypotheses to confirm through observation, record review, and follow-up questions.
Consider how close the supervisor is to daily work. Some supervisors in this study had newer relationships with the BCBA; some had longer ones. That could affect agreement. If a supervisor is removed from day-to-day steps, their interview may miss key barriers like response effort, tech problems, or competing tasks.
If you only interview the supervisor, add questions that force contact with the real workflow: “Show me where in the system they enter the data.” “When during the week is that supposed to happen?”
If you only interview the employee, ask for examples and artifacts—screenshots, written expectations, calendar invites—so you’re not relying solely on memory.
Watch for bias without assuming bad intent. Employees may report causes that feel safer (“I need more training”) and underreport others (“I don’t see a reason to do it”). Supervisors may assume motivation problems when the real issue is unclear steps or missing materials.
Ask neutral, choice-based questions that protect dignity: “What makes this easy on a good week?” “What gets in the way on a hard week?” Then confirm with data instead of debating who’s right.
Be mindful of reactivity. Interviewing someone about barriers might change their behavior. When people talk through obstacles, they may start fixing them on their own. In services, that’s often a good thing—but it can confuse your evaluation of later interventions.
If you’re planning to measure a specific change (like a new reminder system), start tracking performance right away. Note the interview date as a possible event that could shift behavior.
Use the PDC-HS as a self-management tool carefully. This study suggests self-report can align with supervisor report for some people, but not all. A reasonable approach: have both the employee and supervisor complete it, compare results together, and pick one small, respectful change.
If outcomes are close and follow-through improves, self-checks may work well for that person. If outcomes are far apart, keep the supervisor more involved and increase direct observation before selecting solutions.
Don’t overgeneralize. This was a small sample (10 pairs) in one ABA agency, focused on one performance issue. Agreement might look different with other tasks, other industries, or more experienced staff.
The safest takeaway isn’t “always interview employees” or “always interview supervisors.” It’s this: when possible, get both viewpoints. When you can’t, add direct checks so your intervention still matches the real barrier.
Works Cited
Echeverria, F., & Wilder, D. A. (2025). Further evaluation of the Performance Diagnostic Checklist 1.1: Outcome agreement between supervisors and employees. *Journal of Organizational Behavior Management, 45*(4), 297–306. https://doi.org/10.1080/01608061.2024.2430787



