Beyond planning: validating what we actually delivered
A deep-dive into the importance of post-treatment QA and how Atlas makes it possible
Eli Kress
Co-founder, CEO

Clinicians, physicists, and dosimetrists spend so much time getting plans right. Far less time goes to answering one important question: how did we actually do?
This post is about why we believe post-treatment plan validation should be routine in all radiation oncology clinics, what makes that hard to implement in practice, and why we built Atlas to make it simple and local.
Planning quality is not the same as delivered quality
Planning excellence is table stakes. Patient-specific QA, checks of contouring and coverage, and peer review all matter. Yet the end of treatment is where the truth lives. Anatomy changes, couch shifts, replans, setup variation, and dose accumulation can all move reality away from the plan. Over a cohort or a trial these small deltas add up.
Post-treatment validation asks: did our delivered dose statistics land where we expected for target coverage and OAR sparing, not just for one patient but across many? When the answer is known, programs learn faster and RTQA runs smoother.
Why this is hard today
- Data is scattered across TPS, PACS, trial folders, spreadsheets, and the drives of individual workstations.
- Structure names vary site to site and even planner to planner, which blocks aggregation.
- Cohort-level DVH checks are slow to compute and slower to standardize.
- Evidence-based constraints exist, but applying them at scale to local data is tedious and time-consuming.
- Security concerns keep data local by design, which rules out many cloud tools.
The result is that many teams validate a handful of cases or rely on anecdote. That builds intuition, but it is not a basis for informed decision-making.
What Atlas does
Atlas helps teams validate post-treatment quality across cases and trials using de-coupled, non-sensitive data. Your patients' PHI always stays within your environment.
- Standardize DICOM-RT structure labels and dose using a local mapping assistant.
- Apply constraints at scale, either defined by your institution's clinical goals or drawn from RadOncCalc's evidence-based library, versioned and cited.
- Evaluate cohorts with DVH statistics, pass or fail summaries, and tolerance bands (a simple example of this kind of check is sketched below).
- Explore your data from program trends to patient-level context in a click.
- Export reports for QA review, trial packets, and regulatory documentation.
All personal data processing runs on-prem, and our local application is built so IT can audit every step, every time.
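To make the constraint checks above concrete, here is a minimal sketch in Python of evaluating a Vxx-style DVH constraint against a limit with a tolerance band. The structure, bin values, and the check_constraint helper are made up for illustration; this is not Atlas's actual implementation.

```python
import numpy as np

def v_at_dose(dose_gy, volume_pct, threshold_gy):
    """Volume (%) receiving at least threshold_gy, read off a cumulative DVH."""
    # dose_gy must be ascending; volume_pct is the % volume receiving >= that dose
    return float(np.interp(threshold_gy, dose_gy, volume_pct))

def check_constraint(dose_gy, volume_pct, threshold_gy, limit_pct, tolerance_pct=0.0):
    """Classify a 'V[threshold] <= limit' constraint as pass / within tolerance / fail."""
    value = v_at_dose(dose_gy, volume_pct, threshold_gy)
    if value <= limit_pct:
        return value, "pass"
    if value <= limit_pct + tolerance_pct:
        return value, "within tolerance"
    return value, "fail"

# Made-up cumulative DVH for a combined lung structure, checked against V20Gy <= 30%
dose_gy = np.array([0, 5, 10, 20, 30, 40, 50, 60], dtype=float)
volume_pct = np.array([100, 80, 55, 28, 15, 8, 3, 0], dtype=float)
value, status = check_constraint(dose_gy, volume_pct, threshold_gy=20, limit_pct=30, tolerance_pct=2)
print(f"V20Gy = {value:.1f}% -> {status}")  # V20Gy = 28.0% -> pass
```

Atlas applies this kind of check across every case in a cohort and rolls the results up into the pass or fail summaries described above.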
Where it helps on day one
- End-of-treatment reviews that move from manual spot checks to systematic validation.
- Technique changes such as new planning systems or optimizers, with before and after comparisons.
- Trial QA with consistent constraint application and faster queries.
- Program benchmarking across sites in a network, apples to apples.
- M&M and education with real data, not screenshots.
On gamma...
Patient-specific QA and gamma pass rates are important, just not sufficient for program learning. Gamma answers “did this plan and machine deliver as expected for this patient.” Post-treatment validation answers “did our patients, as a group, receive what our constraints and coverage goals intended.” Atlas focuses on the second question to complement the first.
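As a toy illustration of the cohort-level question (hypothetical patient IDs and constraint names, not Atlas's internals), a program-level view can be as simple as rolling per-patient constraint results up into pass rates per constraint:

```python
from collections import defaultdict

# Hypothetical per-patient results: (patient_id, constraint, passed)
results = [
    ("P001", "PTV D95 >= 95% of Rx", True),
    ("P001", "Rectum V70Gy <= 20%", True),
    ("P002", "PTV D95 >= 95% of Rx", False),
    ("P002", "Rectum V70Gy <= 20%", True),
]

# Roll per-patient pass/fail up into program-level pass rates per constraint
tally = defaultdict(lambda: [0, 0])  # constraint -> [n_passed, n_total]
for _, constraint, passed in results:
    tally[constraint][0] += int(passed)
    tally[constraint][1] += 1

for constraint, (n_passed, n_total) in tally.items():
    print(f"{constraint}: {n_passed}/{n_total} passed ({100 * n_passed / n_total:.0f}%)")
```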
Using Atlas
- Choose a cohort such as all prostate cases last quarter or all cases on a trial.
- Connect data locally from your TPS and file shares.
- Confirm structure mappings at the project level and address one-offs with ease (see the mapping sketch after this list).
- Define constraints on your own or with our library.
- Upload data, review program-level trends, and click into outliers.
- Share an export for physics QA, tumor board, or your upcoming publication.
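For a sense of what the structure-mapping step is solving, here is a minimal sketch of normalizing planner-specific structure labels to canonical names before any cohort aggregation. The synonym table and labels are hypothetical; Atlas's local mapping assistant handles this interactively rather than with a hard-coded dictionary.

```python
# Hypothetical synonym table: canonical structure name -> labels seen across planners and sites
SYNONYMS = {
    "Rectum": {"rectum", "rect", "rectum_wall"},
    "Bladder": {"bladder", "blad", "urinary_bladder"},
    "PTV": {"ptv", "ptv_70", "ptv70gy"},
}

def canonical_name(raw_label):
    """Map a raw DICOM-RT structure label to its canonical name, or None if unmapped."""
    key = raw_label.strip().lower().replace(" ", "_")
    for canonical, variants in SYNONYMS.items():
        if key == canonical.lower() or key in variants:
            return canonical
    return None  # unmapped labels are the one-offs a reviewer confirms by hand

for label in ["Rectum_Wall", "BLAD", "PTV 70", "CouchSurface"]:
    print(f"{label!r} -> {canonical_name(label)}")
```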
Guardrails we take seriously
- Local by design so PHI stays inside the firewall.
- Versioning and audit trails for every constraint set and run.
- No lock-in for your data or reports.
Why we built Atlas
While building Accelerator, we realized that the ability to analyze constraint compliance was something users outside of trials needed as well. The need to spin this out as a standalone offering hit us one day during a demo when we were asked: "Could this be used to do a retrospective quality analysis of my hospital's breast plans?"
After digging in with our trusted stakeholders and testing the idea in the clinics that regularly inspire our work, we found three key questions that a variety of users were looking to answer:
- Are our outcomes consistent with what we aimed to deliver across the last 100 cases?
- Which sites or techniques drift from our intended constraints?
- Can we prove to ourselves and to a sponsor that our program hits its marks reliably?
The first version of Atlas was built to give clear, local answers to those questions without adding more work to physicists' already busy weeks and, hopefully, while reducing some of their existing workload.
What’s next
- Clinical goal exports to automatically make your learnings actionable in the TPS.
- Automated alerts when a metric trends toward the edge of tolerance.
- De-identified aggregate insights you can opt into for research use.
If you want to try it
We are offering open access for a limited time to help teams validate recent cohorts. If you want a short walkthrough, reach out. If you'd like to sign up now, submit a form response here.
Aitrium Atlas helps radiation oncology programs validate what they delivered, not just what they planned. If that resonates, we would love to show you how it works.
