Beyond Social Validity: Using Qualitative Research in Behavior Analysis
Gathering meaningful feedback from families and clients often requires more than a simple rating scale. This article explores how behavior analysts can use qualitative methods—like interviews and focus groups—to capture the nuances that numbers alone miss. It also addresses a growing concern: how to protect data quality when recruiting participants online. For clinicians who want richer stakeholder input without sacrificing rigor, the guidance here offers a practical starting point.
What is the research question being asked and why does it matter?
This paper asks a practical question: if we want richer social validity information than quick rating scales can capture, how can behavior analysts use qualitative research in a way that is careful and useful?
This matters because families and clients often have more to say than what fits on a 1–5 scale. If we only use rating forms, we can miss key details—what feels respectful, what is actually doable at home, and what outcomes matter most to the person living with the plan.
The authors also tackle a real problem in modern research: when you recruit people online, some respondents may be fraudulent. Without planning for that, you can waste time, pay incentives to fake participants, and end up with bad data that leads to bad clinical decisions.
For clinicians, the takeaway is this: if you ever run a clinic survey, a caregiver interview project, or any stakeholder feedback effort, you need procedures that protect data quality.
This article is not testing an ABA intervention. It is a “lessons learned” guide based on the authors’ experience doing qualitative work with caregivers and professionals—often online—about hard topics like severe challenging behavior and caregiver adherence. The value for practice is in the process tips, not in outcome data.
What did the researchers do to answer that question?
The authors described the qualitative methods their team has been using—semi-structured interviews, focus groups, and document review—then shared the concrete problems they ran into and how they handled them.
A major focus was recruiting and screening a purposeful sample: participants chosen because they could give rich information, not because they represent everyone.
They explained how they recruited through a mix of direct outreach and online methods. Direct-to-stakeholder routes (like caregiver social media groups) worked better than going through intermediary organizations. They also described screening steps, like collecting key demographics and sometimes asking for documentation to confirm details and reduce fraud risk.
The authors shared warning signs for fraudulent responders in online studies:
- Odd email patterns
- Poor audio or video quality
- Vague answers that do not match the question
- Shifting details
- Mismatched location data
- Strong focus on getting paid in a specific way
They built in rules allowing the team to remove someone when information quality was too poor or contradictory. They also used payment methods (like mailed checks) that made fraud less rewarding.
To support real participants, they planned for technology needs, childcare needs, and scheduling limits. They offered flexible times, tech help, and childcare reimbursements when needed. Having a “lead” and a “support” staff member in interviews helped—one person focused on the conversation while the other handled tech issues, notes, and participant support.
How you can use this in your day-to-day clinical practice
If you want better social validity than a rating form, add a short, planned interview step. Keep it simple. For example, after a new BIP has been running for 2–3 weeks, schedule 15–20 minutes and ask open questions:
- “What parts are easiest to do?”
- “What parts are hardest?”
- “What feels respectful to your child?”
- “What is not working for your family schedule?”
Then ask, “Can you give me an example from this week?” That last question often surfaces the real barriers you can actually plan around.
Use qualitative questions to find the hidden variables your data sheets do not show. If problem behavior is down but the caregiver looks burnt out, your graphs will not tell you why. A calm conversation might reveal that the plan requires too many steps, triggers conflict in the home, or makes the caregiver feel judged. Once you learn that, you can adjust the plan to fit the family’s real life while keeping the core behavior principles.
Plan for the “therapeutic triad” (clinician, caregiver, and child) on purpose. Many caregiver-mediated plans place a lot of work on the caregiver, even when the child is the main client. Use interviews to check whether your plan is quietly shifting all the burden onto the caregiver without support.
Ask caregivers what support they want from you—modeling, role-play, written steps, fewer targets at once. Also ask what choices the child should have in the plan, so the work is not only about adult control.
Do not treat qualitative feedback like “nice comments” at the end of treatment. Treat it like clinical data that can change your next steps.
If a caregiver says, “I can’t do planned ignoring because it feels unsafe,” your job is not to persuade them to comply. Your job is to re-check risk, define what “unsafe” means in that home, and pick an option that meets the function and fits the family’s values. Qualitative input is often where you learn which parts of a plan could harm rapport or dignity if you push too hard.
If you collect stakeholder feedback online—even as a clinician—set up basic fraud and data-quality rules. This applies to remote intakes, online caregiver surveys, telehealth recruitment, or clinic satisfaction interviews with gift cards.
Decide ahead of time what counts as a stop sign: inconsistent stories, inability to answer simple context questions, or strong pressure to change payment methods. Put a clause in your consent or clinic policy that you may end the session if information is inconsistent or if the person cannot participate safely and clearly.
If you offer incentives for feedback (like a raffle or gift card), make fraud less rewarding. Fraud risk drops when payment is not instant and cannot be sent straight to an email address, as with a mailed check instead of an e-gift card. You can also delay incentives until after a participation check—for example, after the interview is completed and basic information matches.
Be careful here: do not make it so hard that real families cannot participate. Your goal is balanced access and data quality, not gatekeeping.
Reduce participation barriers the same way you reduce treatment barriers. If you want honest caregiver feedback, you may need to offer evening times, short sessions, and flexible ways to share documents or examples.
Some caregivers will not use scheduling apps easily, so be ready to schedule by email or text if your workplace allows it. If you run group caregiver meetings, consider having a second staff member present so the lead can keep the conversation going while support handles tech issues or private questions.
When you ask caregivers to report service history or details (like “Did you have an FA?”), expect errors and memory gaps. Build your clinical decisions on more than recall when it matters.
In practice, that can mean reviewing past reports, requesting consent to coordinate with prior providers, or asking for copies of key pages. Do this only when truly needed for care, with clear respect for privacy and workload.
Finally, be honest about what this paper can and cannot tell you. It does not prove that qualitative methods improve outcomes, and it does not give a single “right” way to do interviews or analysis.
What it gives you is a set of practical cautions: open-ended feedback can improve fit and dignity, online methods bring fraud risks, and planning for barriers improves participation. Use these ideas as add-ons to your clinical judgment and your usual data-based decision-making—not as a replacement for them.
Works Cited
Brown, K. R., & Pence, S. T. (2025). Beyond social validity: Embracing qualitative research in behavior analysis. Behavior Analysis in Practice. https://doi.org/10.1007/s40617-025-01126-0