For decades, the field of Applied Behavior Analysis has wrestled with a deceptively simple question: How many hours of therapy does a child with autism need? The answer, depending on whom you ask, might be ten hours a week, or it might be forty. It might depend on the severity of the diagnosis, the child’s age, or whether the family has access to other services. It might depend on what insurance is willing to cover, or what a clinician feels is right based on twenty years of experience. In an industry now serving an estimated one in thirty-one American children, the lack of consensus has become increasingly untenable.
Dr. Paula Braga-Kenyon has spent three decades thinking about this problem. A licensed psychologist in Brazil who moved to the United States in 1996 to study behavior analysis at the New England Center for Children, she has watched the field evolve from a niche specialty into a sprawling, insurance-funded enterprise. She now consults for ABA companies in both countries and teaches in the master’s program for behavior analysis at Northeastern University. And for the past several years, she has been part of a team at Gracent, a pediatric therapy organization based in Illinois, working on a tool they hope might bring some clarity to the dosage question.
“There’s no test where you can just say, ‘Score here, and this is how much therapy you need,’” Braga-Kenyon explained in a recent interview with Acuity Media Network. “It’s not as black and white as we would want it to be.”
The Trouble with Clinical Judgment
The traditional approach to determining ABA hours has relied heavily on clinical judgment—a behavior analyst assesses the child, identifies skill deficits across developmental areas, develops treatment goals, and recommends a number of weekly hours based on professional experience. This approach has an obvious limitation: clinical judgment is only as good as the clinician. According to data from the Behavior Analyst Certification Board, more than half of all Board Certified Behavior Analysts received their credentials within the past five years. Many of the people now making dosage recommendations simply haven’t been in the field long enough to develop robust intuitions.
“My worry is that the people prescribing don’t have the insight or experience to assess with the right tools, interpret the results, and develop meaningful goals,” Braga-Kenyon said. “The more tools we can give them, the better.”
Mark Shalvarjian, Gracent’s CEO, encountered this problem firsthand. A healthcare executive with decades of experience in oncology and nephrology before moving into behavioral health, he began reviewing the hours of care his company was recommending to clients and noticed troubling patterns.
“There were significant variations in recommendations across different regions,” Shalvarjian told Acuity Media Network. Some of those variations, he found, were influenced by parental preferences rather than clinical best practices. It was a dynamic that struck him as fundamentally misaligned with the mission of delivering appropriate care.
Shalvarjian had learned something from his years working with oncologists and nephrologists: medicine is underpinned by science, but at the end of the day, it is an art. “It really comes down to the individual and what’s best for that individual,” he said. He saw the same principle applying to behavioral health—and saw an opportunity to give clinicians better tools to practice that art.
The problem is compounded by the patchwork of insurance policies governing coverage. Braga-Kenyon described a landscape in which payers impose unpublished caps and arbitrary rules that have little grounding in clinical evidence. “Some insurers won’t approve, and when you go to peer review, they’ll say, ‘We only approve up to ten hours,’” she recalled. “The law says you can’t do that, but that’s how people are navigating it.”
A Model Emerges
In 2024, a team of researchers at LittleStar ABA Therapy, a provider based in Indiana, published what may be the first rigorously validated dosage-determination tool in the field. Their Patient Outcome Planning Calculator, or POP-C, appeared in the journal Behavior Analysis in Practice and demonstrated strong correlations with established assessment instruments like the Vineland Adaptive Behavior Scales. The tool generates dosage recommendations based on symptom severity, challenging behavior, communication skills, and other factors. In a pilot study of seventy-seven patients, the POP-C achieved eighty-eight percent interrater reliability when compared with recommendations made by experienced assessment specialists.
Braga-Kenyon saw the publication as a significant step forward. She reached out to the authors, hoping to learn more about the methodology. When it became clear that the full tool would not be made publicly available in the near term, she and a team of colleagues at Gracent—including Elizabeth Mazzocco-Bodner, Kathleen Bailey Stengel, Kelli Jost, and Ryan Emry—decided to develop their own version, drawing on the published research and their collective clinical experience.
“We took what this published article suggested and built on it—about five variables,” Braga-Kenyon explained. The team built an online calculator that weighs five factors: the severity of the autism diagnosis as documented by a physician, the child’s Vineland score (a standardized measure of adaptive behavior), age, participation in other therapies like speech or occupational therapy, and quality of life. Each variable is assigned a weight, with diagnostic severity accounting for the largest share at forty-four percent, followed by Vineland score at twenty-five percent.
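The weighting scheme described above can be sketched as a simple composite-score calculation. To be clear, only two of the weights (diagnostic severity at forty-four percent, Vineland at twenty-five percent) come from the article; the split of the remaining thirty-one percent across age, other therapies, and quality of life, the normalization of each input to a 0–1 scale, and the mapping from composite score to weekly hours are illustrative assumptions, not Gracent's actual formula.

```python
# Illustrative sketch of a weighted dosage calculator in the spirit of
# the tool described above. Only the severity (0.44) and Vineland (0.25)
# weights are reported; everything else here is an assumption.

WEIGHTS = {
    "severity": 0.44,         # physician-documented severity (from the article)
    "vineland": 0.25,         # Vineland adaptive behavior score (from the article)
    "age": 0.12,              # assumed split of the remaining weight
    "other_therapies": 0.10,  # assumed
    "quality_of_life": 0.09,  # assumed
}

MIN_HOURS, MAX_HOURS = 10, 40  # assumed clinically plausible range

def recommend_hours(inputs: dict[str, float]) -> int:
    """Map normalized inputs (each in [0, 1], where 1 = greater need)
    to a weekly-hours recommendation via a weighted composite score."""
    if set(inputs) != set(WEIGHTS):
        raise ValueError("inputs must cover exactly the five variables")
    score = sum(WEIGHTS[k] * inputs[k] for k in WEIGHTS)  # composite in [0, 1]
    return round(MIN_HOURS + score * (MAX_HOURS - MIN_HOURS))

# Example: a hypothetical child with high severity and low adaptive scores.
profile = {
    "severity": 0.9,
    "vineland": 0.8,
    "age": 0.7,
    "other_therapies": 0.3,
    "quality_of_life": 0.6,
}
print(recommend_hours(profile))  # → 33
```

Even this toy version makes the design tension visible: a linear weighting treats every variable independently, whereas the validation results described below suggest that clinicians weight age nonlinearly, recommending disproportionately more hours for very young children.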
Testing the Tool
To validate their Care Calculator, the Gracent team recruited twelve Board Certified Behavior Analysts from outside the organization, all with doctoral-level credentials or at least eight years of experience. Each clinician reviewed three de-identified client profiles and submitted dosage recommendations. The researchers then compared these expert judgments with the calculator’s output.
The results revealed both promise and complexity. For two of the three clients, the calculator’s recommendations closely matched the clinician averages. For Client A, a child with moderate support needs, the experts recommended a mean of 16.3 hours per week while the calculator suggested seventeen. For Client C, an older child, the figures were 22.3 and twenty-two hours, respectively. But for Client B—the youngest in the sample, at three years old, with a more complex clinical profile—the experts recommended an average of 33.6 hours while the calculator suggested only twenty-two.
The divergence for the youngest client underscored a recurring theme in the dosage literature: age matters, but in ways that are difficult to quantify. The literature has long emphasized the value of early intervention, and clinicians may instinctively recommend higher dosages for younger children, anticipating greater developmental plasticity. The Gracent team has noted this as an area for refinement.
A separate survey of twenty-one BCBAs at Gracent who had been using the calculator in their daily practice yielded encouraging but nuanced feedback. Seventy-six percent found the tool easy to use. Eighty-one percent said it helped support or validate their clinical recommendations. But only forty-three percent agreed that the calculator’s output aligned with their own clinical judgment—a tension the team acknowledges is inherent to any standardized approach.
Rolling It Out
Implementing the calculator across Gracent’s operations required more than just training clinicians on the mechanics. Shalvarjian found that experienced BCBAs in Texas were already recommending hours within five percent of what the calculator suggested—their clinical intuitions, honed over years of practice, were landing in roughly the same place. In Illinois, where the company has a larger operation and more variability in clinician experience levels, the gaps were wider.
“We had BCBAs who were prescribing fairly accurately, and others who were not,” Shalvarjian said. “There was just more variability within the organization.” The calculator helped surface those discrepancies in a way that felt constructive rather than punitive. “It’s really just to raise a flag and say, ‘Hey, let’s have a discussion about this,’” he explained. “It’s a guide. It’s not a mandate.”
Still, the rollout surfaced anxieties. Some clinicians had already made recommendations to families and now found that the calculator was suggesting different figures. Going back to a family to revise a treatment plan felt uncomfortable.
“A lot of people were concerned about having those conversations,” Shalvarjian acknowledged. But Braga-Kenyon, Mazzocco-Bodner, and the clinical team coached them through it. The framing mattered: this was not about admitting error, but about incorporating new information—something that happens all the time in healthcare. “You start a course of therapy for whatever physical condition you have, and along the way, the doctor adjusts,” Shalvarjian said. “They give you a different drug, they move you to a different protocol. The reason for that is because we have new information.”
By and large, he said, families received the revised recommendations well. “They understood that there was real thought behind them.”
A Supplement, Not a Substitute
Braga-Kenyon is emphatic that the Care Calculator is not meant to replace professional judgment. At Gracent, clinicians first conduct their own assessments and formulate their hour recommendations independently. Only then do they run the calculator as a cross-check. When there is a significant discrepancy, a peer review is triggered.
“Often there’s a reason why they want more or fewer hours, and we’ll say, ‘Yeah, that’s totally fine,’” Braga-Kenyon said. “The calculator isn’t meant to dictate the decision. It’s a tool to help you arrive at one.”
She invoked the analogy of a physician ordering multiple tests before reaching a diagnosis. “A medical doctor, when they prescribe something, they don’t only look at one thing,” she said. “They do a series of exams, they talk to you, they look at your history, and then they come up with something.”
The team submitted their paper to Behavior Analysis in Practice, the same journal that published the POP-C research, but it was rejected. They have since invited an outside researcher to strengthen the statistical analysis and plan to resubmit to a different journal. In the meantime, they intend to present the tool at the ABA Reimagined conference in February 2026 and make it freely available online.
“Our goal was to make the tool free for everybody,” Braga-Kenyon said. “Others can tweak it—swap out a variable if they think a different one works better. The intent is really to generate discussion.”
Shalvarjian echoed that philosophy. “Good thinking, good ideas should not be held in a box,” he said. “They should be shared broadly.” He sees the decision to release the tool freely as aligned with a broader vision for the company. “We want to be a place where providers want to come work, where we bring innovation to the industry at large. We want to be a thought leader.”
The Ongoing Debate
The broader question of how much ABA is enough remains deeply contested. The dosage debate has historical roots in the landmark 1987 study by O. Ivar Lovaas, which found dramatic improvements in children receiving forty hours per week of intensive behavioral intervention. That benchmark has shaped the field ever since, even as subsequent research has complicated the picture. A 2017 study published in Translational Psychiatry found that treatment intensity and duration were both significant predictors of skill mastery across developmental domains. But a 2022 study in the World Journal of Pediatrics suggested that data-driven dose optimization could improve outcomes compared to uniform high-intensity protocols.
More recent research has raised further questions about the relationship between hours and outcomes. Braga-Kenyon noted that some studies have found higher hours are not necessarily associated with improvements in adaptive behavior, though they may correlate with meeting more treatment goals. She suggested such findings require careful interpretation. “If you’re meeting more goals, and those goals are set intentionally to improve quality of life, then more hours might actually be an indicator of better outcomes,” she said. “This discussion can’t be done in a vacuum.”
The Council for Autism Service Providers has published practice guidelines recommending comprehensive treatment of twenty to forty hours per week for young children with significant support needs, while acknowledging that treatment should be individualized. The field appears to be moving, however haltingly, toward a more nuanced understanding of the relationship between dosage and outcomes.
Shalvarjian sees the Care Calculator as one step in that direction, but not the last. “If this is the best it can be, I would call that out,” he said. “I think it can and will be improved over time, just like all research does.” He pointed to the Lovaas study as a reference point. “Lovaas did some groundbreaking research back in the eighties, but we’re going on forty years now. We need to continue to sharpen our tools.”
A Question of Craft
Braga-Kenyon acknowledged that some in the field may object to standardized dosage tools on principle, arguing that any such instrument risks diminishing the role of individual clinical expertise. It is a concern she takes seriously. “The problem is that not everyone has that level of experience anymore,” she said.
Shalvarjian raised a related concern: the training pipeline itself may be part of the problem. He had spoken with representatives from a couple of master’s programs for BCBAs and was surprised to learn that dosage determination was not part of the curriculum. “I was really surprised that at least these couple did not,” he said. Even setting aside the question of whether all programs lack such training, he noted, “at least something that everyone would accept is that people that are newer in the field don’t have the same ability or experience to diagnose as accurately as somebody that’s been in the field for a while. The Care Calculator can really bridge that gap.”
The Care Calculator, in this view, is less a replacement for craft than a scaffold for those still developing it. A newly certified BCBA who recommends fifteen hours for a child while the calculator suggests twenty-five has been given an occasion to pause and reflect, to consult with more experienced colleagues, to articulate the reasoning behind the discrepancy. The tool does not decide; it prompts a conversation.
Whether such tools will gain traction across the industry remains to be seen. The POP-C research has been public for over a year, and the Care Calculator will soon be released for anyone to use. Shalvarjian envisions a future in which individualized care plans, grounded in measurable inputs and supported by robust research, become the standard. “That would be amazing,” he said.
In Brazil, where Braga-Kenyon spends part of her time, the dosage question takes a different form. There, physicians prescribe ABA hours as part of the diagnostic process, often before any functional assessment has taken place. Braga-Kenyon has been working to educate doctors on how to make more informed prescriptions. The problem, she has found, is the same in both countries: people are making consequential decisions without adequate guidance.
The Care Calculator is one attempt to provide some. Its creators are under no illusion that it will resolve the dosage debate. But they hope it will make the debate more productive, grounding clinical conversations in shared data rather than competing intuitions. For a field that has long struggled to define medical necessity, that would be no small thing.