Evaluating contributions of progressive ratio analysis to economic metrics of demand

Understanding Progressive Ratio Assessments: What They Can and Cannot Tell Us About Reinforcer Value

Clinicians often look for quick, practical ways to measure how valuable a reinforcer is for a given learner. Progressive ratio procedures—where the work requirement increases over time—are appealing because they’re faster than traditional demand assessments. But can these quicker tests actually replace more thorough methods? A recent study examined this question, and the findings have important implications for how we select reinforcers and design reinforcement schedules in clinical practice.


What Is the Research Question Being Asked and Why Does It Matter?

Clinicians and researchers sometimes use progressive ratio procedures—where the work requirement keeps increasing—to estimate how “valuable” a reinforcer is. A common hope is that this faster test can replace slower demand assessments that show how responding changes as the “price” (the ratio) changes.

The main question here was straightforward: Can a Basis x progressive ratio analysis (Basis x PRA) produce the same demand metrics as a progressive fixed ratio analysis (PFRA)?

This matters because PFRA-style demand checks take time and staff effort. In busy clinics, you may not be able to run many long sessions at multiple ratio values. If a quicker Basis x PRA could accurately identify things like the “best” ratio for maximizing work output (often called Pmax), that would help with programming decisions. But if it gives wrong answers, it could lead you to pick schedules that are too hard, waste teaching time, or set up frustration.

The study also matters for dignity and choice. These assessments ask people to “work” for items, and the procedures themselves can be tiring or feel unpleasant. If a shorter method doesn’t give the same information, clinicians shouldn’t assume it does.


What Did the Researchers Do to Answer That Question?

The researchers worked with 96 adults with disabilities (ages 18–79). Each person chose a preferred reinforcer—usually a small edible item or brief access to music. The response was meant to be easy and not tied to prior reinforcement history, like rolling a die or squeezing a clothespin.

Each participant completed two assessments, always in the same order.

First, they did a Basis x PRA. In this test, the ratio increased within a session, but only after every two reinforcers (Basis 2), increasing by three each step—FR 1, FR 1, FR 4, FR 4, FR 7, FR 7, and so on. Participants completed three test sessions, with control sessions mixed in to confirm responding would stop when no reinforcer was delivered.
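The Basis 2, step-3 progression described above can be sketched in a few lines of Python. This is my own reconstruction of the arithmetic, not the authors' code: the ratio requirement repeats for two reinforcers, then steps up by three.

```python
def basis_pra_schedule(basis=2, step=3, start=1, n_reinforcers=8):
    """Return the FR requirement in effect for each successive reinforcer."""
    schedule = []
    ratio = start
    while len(schedule) < n_reinforcers:
        schedule.extend([ratio] * basis)  # same ratio for `basis` reinforcers
        ratio += step                     # then step the requirement up
    return schedule[:n_reinforcers]

print(basis_pra_schedule())  # [1, 1, 4, 4, 7, 7, 10, 10]
```

Changing `basis` or `step` shows how other published progressions (e.g., a step of 1, or a basis of 1) would look under the same logic.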


Second, each participant did a PFRA demand assessment. In PFRA, the ratio stayed the same for the entire session, then increased for the next session. Ratios increased by three across sessions. Sessions ended when the person stopped responding for 60 seconds or asked to stop. The PFRA ended after researchers saw responding drop past the peak, allowing them to locate Pmax.

The researchers compared what Basis x PRA predicted versus what PFRA actually showed. They examined whether Basis x PRA measures aligned with PFRA measures of elasticity (especially Pmax) and with PFRA measures of equilibrium (like consumption at free access and total responses at peak).


How You Can Use This in Your Day-to-Day Clinical Practice

Do not use Basis x PRA results to pick the “optimal” ratio requirement for programming. Saying “this reinforcer’s Pmax is FR 13, so we should teach at FR 13” isn’t supported by this research. Basis x PRA measures didn’t match PFRA Pmax in a useful way—it predicted the exact PFRA Pmax for only 8 out of 96 people. Most often it overestimated the ratio by about two steps. In practice, that means Basis x PRA can tempt you into making work requirements too hard, then blaming the learner when responding drops.

If you want a quick progressive ratio-style assessment, use it for a different question: “How much work will this reinforcer likely support compared to others?” In this study, Basis x PRA measures (like breakpoint) did relate to several PFRA outcomes reflecting overall “strength” of demand. People with higher breakpoints tended to consume more at free access and FR 1, and showed higher peak response output in PFRA. So Basis x PRA seems more useful for ranking reinforcers by general strength—not for deciding the best schedule value.

When choosing reinforcers for teaching or functional communication training, treat a high breakpoint as a clue that the item might support more responding when things get harder. That could help if you’re planning longer work periods, thinning reinforcement, or teaching difficult new skills. But stay conservative: even with a high breakpoint, start with easy ratios and thin slowly based on the learner’s data and comfort. The breakpoint isn’t a safe map to “what ratio to run”—only a rough signal of how much effort the person may tolerate.

If your goal is schedule design, use PFRA-style sampling when possible, even if brief. You don't need a perfect economics study to get clinical value. You can run short sessions at a few ratios you might actually use (like FR 1, FR 3, FR 6, FR 9) and look for where responding stays strong versus where it starts dropping. This is closer to what PFRA measures, because the ratio stays fixed within a session and the learner can settle into that "price." This study supports the idea that rising requirements within a session introduce carryover effects, so responding at each step doesn't reflect true demand at that price.
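The brief fixed-ratio sampling idea above reduces to simple arithmetic. In demand terms, work output at each ratio is the ratio times the reinforcers earned, and the sampled ratio with the highest output approximates Pmax. This is a minimal illustrative sketch with hypothetical probe numbers, not a substitute for a full demand analysis:

```python
def approximate_pmax(session_data):
    """session_data: {ratio: reinforcers_earned} from brief fixed-ratio probes."""
    output = {ratio: ratio * earned for ratio, earned in session_data.items()}
    pmax = max(output, key=output.get)  # sampled ratio with peak response output
    return pmax, output

# Hypothetical probe results: consumption falls as the "price" rises.
probes = {1: 10, 3: 9, 6: 7, 9: 3}
pmax, output = approximate_pmax(probes)
print(pmax)    # 6 -> output peaks near FR 6 (outputs: 10, 27, 42, 27)
```

In practice you would look at the whole output curve, not just the peak, and weigh it against the learner's comfort and assent.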

Build choice and exit into any “work for reinforcer” assessment. The researchers repeatedly reminded participants they didn’t have to work, could stop anytime, and had alternative activities available. That’s a good clinical standard, not just a research detail. If you run any progressive schedule, include a clear stop response, allow walking away, and ensure the learner has a real alternative. If your assessment only “works” when the learner feels trapped, the data aren’t trustworthy and the practice isn’t respectful.

Be cautious about using Pmax as a label for “best reinforcer.” A reinforcer can produce high peak responding (high Omax) even when its Pmax is at a low ratio. Another reinforcer might have higher Pmax but support very little responding overall. For clinical decisions, how much behavior the reinforcer supports at realistic ratios often matters more than the single ratio value where a peak happens. If someone says “Pmax is higher, so it’s more powerful,” this study suggests that’s not a safe conclusion.
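A small worked example makes the Pmax/Omax caution above concrete. The numbers are hypothetical: reinforcer A peaks at a low ratio but supports far more total responding (higher Omax) than reinforcer B, whose peak sits at a higher ratio.

```python
def demand_summary(session_data):
    """Return (Pmax, Omax) from {ratio: reinforcers_earned} probe data."""
    output = {r: r * earned for r, earned in session_data.items()}
    pmax = max(output, key=output.get)  # ratio where response output peaks
    omax = output[pmax]                 # peak response output at that ratio
    return pmax, omax

a = {1: 20, 3: 18, 6: 5}  # A: Pmax = FR 3, Omax = 54 responses
b = {1: 6, 3: 5, 6: 4}    # B: Pmax = FR 6, Omax = 24 responses
print(demand_summary(a), demand_summary(b))  # (3, 54) (6, 24)
```

Here B has the higher Pmax, yet A supports more than twice the responding at its peak, which is exactly why "higher Pmax means more powerful" is not a safe conclusion.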

Use these findings mainly with learners and settings similar to the study: adults with disabilities, small standardized reinforcer units, and assessments emphasizing voluntary participation. Don’t assume the same patterns hold for young children, severe problem behavior contexts, or when reinforcers are larger or poorly controlled. The researchers also didn’t fully test very high prices in PFRA (they stopped after finding Pmax), and PFRA always came second—fatigue or boredom could have affected those outcomes.

Keep the clinical takeaway narrow and honest. Basis x PRA seems reasonable for quickly comparing reinforcers by how much responding they may support. It doesn’t appear valid for deciding the “right” ratio requirement or making precise elasticity claims. If you must choose one method for schedule planning, prioritize a PFRA-like approach with fixed ratios per session, even if you can only sample a small set of ratios.


Works Cited

Lambert, J. M., Osina, M. A., Staubitz, J. L., Reed, D. D., & Madden, G. J. (2026). Evaluating contributions of progressive ratio analysis to economic metrics of demand. *Journal of the Experimental Analysis of Behavior, 125*(1), e70077. https://doi.org/10.1002/jeab.70077
