The ROI of Neurodiversity Training
The ROI of neurodiversity hiring and training is real, but most HR teams measure the wrong things. Cost-per-hire and time-to-fill on neurodivergent placements miss the point. The four metrics that actually predict program success are 90-day retention, comparative offer rate at the structured-interview stage, manager confidence at 60 days, and the cost-curve trajectory over a 3-year horizon — all of which decouple training investment from per-hire staffing margin. The TCO framework on this page shows how a one-time training investment compares to ongoing staffing-firm placement margins over a multi-year program.
The cost of inaction
The simplest framing: every qualified neurodivergent candidate who leaves an interview without the offer their performance warranted is a candidate your competitor hires. Multiply that by the number of mis-screened candidates per quarter, then by the loaded cost of an unfilled role, and the inaction column on your spreadsheet starts to look expensive.
Three categories of cost most HR teams under-account for:
Missed hires. Standard interview processes systematically screen out qualified neurodivergent candidates at the structured-interview and panel stages. Each missed hire is a candidate your team interviewed, evaluated, and rejected — paying the full hiring-cost overhead with none of the benefit. The fix is upstream, in interview design, not downstream in pipeline volume.
First-year attrition. Hires made through unmodified interview processes, into management cultures unprepared for disclosure conversations, churn at higher rates than hires made through redesigned processes into trained management teams. Industry research documents the cost of replacing a salaried employee in their first year; SHRM and other sources typically estimate it at six to nine months of the employee's salary, which makes each preventable first-year departure a meaningful line item. The structural patterns behind these departures — and the five that Debra Solomon sees in nearly every workplace — are covered in why your neurodivergent hires aren't staying.
Manager-time cost without strategy. When managers have no training in handling disclosure conversations or accommodation requests, they spend three to five times longer per situation working through the right move — most of it on Slack with peers asking "how do I handle this?" instead of with the employee. That time has a loaded hourly rate. Compound it across a 200-manager organization and the answer is not small.
The point isn't the specific dollar amount. It's that "we're not investing in this yet" is itself an investment — in the form of inefficiency that the budget hides because no one calls it that.
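The three cost categories above are spreadsheet arithmetic. As a minimal sketch, here is that spreadsheet as Python; every input value is a hypothetical placeholder to replace with your own data, not a benchmark.

```python
# Hypothetical inaction-cost arithmetic. All inputs are placeholders;
# substitute your own hiring volume, salaries, and loaded rates.

# 1. Missed hires: candidates screened out by interview design.
mis_screened_per_quarter = 3
cost_of_unfilled_role = 25_000          # loaded quarterly cost of an open role (assumption)
missed_hire_cost = 4 * mis_screened_per_quarter * cost_of_unfilled_role

# 2. Preventable first-year attrition, at ~9 months of salary per departure
#    (the upper end of the six-to-nine-month SHRM-style estimate).
preventable_departures = 2
avg_salary = 90_000
replacement_cost = preventable_departures * 0.75 * avg_salary

# 3. Manager time without a strategy: extra hours per disclosure or
#    accommodation situation, at a loaded hourly rate.
managers = 200
situations_per_manager = 1              # per year (assumption)
extra_hours_untrained = 4               # the "3 to 5x longer" gap, assumed as 4 extra hours
loaded_hourly_rate = 85
manager_time_cost = managers * situations_per_manager * extra_hours_untrained * loaded_hourly_rate

annual_cost_of_inaction = missed_hire_cost + replacement_cost + manager_time_cost
print(f"Annual cost of inaction (illustrative): ${annual_cost_of_inaction:,.0f}")
```

With these placeholder inputs the total lands in the mid six figures; the point of the sketch is the structure of the sum, not the number.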
The four metrics that matter
Four metrics actually correlate with neurodiversity-program success. Each has an operational definition, a leading indicator that surfaces problems early, and a failure mode that hides program weakness.
1. 90-day retention of neurodivergent hires
The single highest-signal metric. Why 90 days specifically? Because it's the first window in which the manager-readiness investment shows up in the data. Hires made into management teams that handle disclosure well, set explicit expectations, and protect calendar time stay through 90 days at higher rates than hires made into teams that don't.
Operational definition: percentage of cohort still employed at day 90, segmented by hires made through redesigned interview processes versus pre-redesign processes. Leading indicator: 30-day manager-confidence survey scores. Failure mode: aggregating across cohorts so the redesign signal disappears in the noise.
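The operational definition above is simple to compute, and keeping the cohorts segmented is the whole trick. A minimal sketch, with hypothetical data:

```python
# Illustrative 90-day retention calculation. Cohort data is hypothetical.
def retention_90d(cohort):
    """Share of a hiring cohort still employed at day 90."""
    retained = sum(1 for hire in cohort if hire["days_employed"] >= 90)
    return retained / len(cohort)

pre_redesign  = [{"days_employed": d} for d in (45, 200, 88, 300, 60)]
post_redesign = [{"days_employed": d} for d in (120, 365, 95, 210, 180)]

# Report the segments separately. Aggregating them is the failure mode:
# the redesign signal disappears into the blended average.
print(f"pre-redesign:  {retention_90d(pre_redesign):.0%}")
print(f"post-redesign: {retention_90d(post_redesign):.0%}")
```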
2. Comparative offer rate at the structured-interview stage
Does your interview process surface neurodivergent candidate ability, or filter it out? Track the offer rate of candidates who reach the structured-interview round, segmented by voluntary self-identification where candidates have consented to share it. Pre-redesign, the gap is usually wide; post-redesign, it should narrow over the first three cohorts.
Leading indicator: panelist scoring variance. If three panelists score the same candidate within a tighter range than they did pre-redesign, the structured rubric is working — that's the calibration signal that precedes the offer-rate change. Failure mode: declaring victory on a single quarter's data when the sample size is small.
3. Manager confidence at 60 days post-disclosure
Self-reported manager comfort handling accommodation requests, post-disclosure conversations, and performance feedback. Yes, it's a soft metric. It also predicts the harder retention numbers more reliably than any process compliance measure does, because a manager who reports low confidence usually has reasons that show up in the relationship a quarter later.
Operational definition: structured 5-question pulse survey administered 60 days after each new disclosure conversation, anonymized to the disclosure event. Leading indicator: open-text responses describing situations the manager wasn't sure how to handle. Those situations are training material for the next cohort.
4. Cost-curve trajectory over 3 years
Is your per-hire cost flat, declining, or compounding? Training-led models produce a declining curve as managers reuse skills across hires. Staffing-led models produce a flat curve indefinitely — every hire pays the same staffing margin as the first.
This is the metric that reframes the budget conversation. The first hire under a training-led model is more expensive than the first hire under a staffing-led model. The tenth hire is dramatically cheaper. The hundredth hire is in a different cost universe. Measure not just the absolute cost per hire but the slope of the curve — the slope is what predicts where you'll be at year three.
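The two curve shapes can be made concrete with a few lines of arithmetic. In this sketch, every dollar figure is an assumption chosen for illustration, not a quote for either model:

```python
# Illustrative per-hire cost curves. All dollar figures are placeholders.
TRAINING_UPFRONT = 60_000           # one-time training-led investment (assumption)
TRAINING_PER_HIRE = 3_000           # internal loaded cost per hire after training
STAFFING_MARGIN_PER_HIRE = 20_000   # recurring staffing-firm placement margin

def per_hire_cost_training_led(n_hires):
    # Upfront investment amortized across all hires so far, plus marginal cost:
    # the curve declines as managers reuse the capability.
    return TRAINING_UPFRONT / n_hires + TRAINING_PER_HIRE

def per_hire_cost_staffing_led(n_hires):
    # Flat curve: the hundredth hire pays the same margin as the first.
    return STAFFING_MARGIN_PER_HIRE

for n in (1, 10, 100):
    print(n, per_hire_cost_training_led(n), per_hire_cost_staffing_led(n))
```

With these placeholders the training-led model is more expensive at hire one, cheaper by hire ten, and far cheaper by hire one hundred, which is the slope argument in miniature.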
TCO framework
A 3-year total-cost-of-ownership comparison framework — for any vendor or approach you're evaluating. Fill in the values for your own situation; we'll explain the categories. This is a buyer's tool, not a sales table.
| Cost category | Year 1 | Year 2 | Year 3 | 3-year total |
|---|---|---|---|---|
| Vendor or training investment (one-time vs recurring) | One-time for training-led; per-placement for staffing-led | $0 for training-led (capability is yours); recurring per-placement for staffing-led | Same as Year 2 | Sum |
| Internal recruiter / sourcing time (loaded hourly rate × hours/hire) | Higher in Year 1 as the team learns redesigned process | Lower as the process becomes routine | Stable at the trained baseline | Sum |
| Staffing-firm margin (per-placement, only if using staffing-led) | Per-placement margin × hires made via firm | Same per-placement | Same per-placement | Sum |
| Manager training time (hours × loaded rate) | Highest in Year 1 (initial cohort) | Refresher cycle, lower | Refresher cycle, lower | Sum |
| Accommodation budget (per JAN survey data, median per-employee accommodation cost is in the low hundreds of dollars per year) | Low; concentrated in first-year hires | Low; varies by cohort size | Low; stabilized | Sum |
| Internal capacity built (this is a gain, not a cost — track it) | Beginning to compound | Compounding visible | Capability owned in-house, transferable to new hires and new managers | Net asset |
The framework's most useful property: it surfaces the cost-curve difference. Staffing-led models keep paying the same per-placement margin in Year 3 as in Year 1. Training-led models retire that margin and replace it with internal capacity that doesn't recur. Run the framework with your own numbers; the answer is sometimes one model, sometimes the other, and sometimes a hybrid.
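Running the framework with your own numbers amounts to filling in the table and summing rows and columns. A minimal sketch of a training-led scenario, with every value a hypothetical placeholder:

```python
# TCO framework as a computation. All values are placeholders to replace,
# one dict entry per cost-category row of the table above.
tco = {
    "training_or_vendor": {"y1": 60_000, "y2": 0,      "y3": 0},
    "recruiter_time":     {"y1": 18_000, "y2": 12_000, "y3": 12_000},
    "staffing_margin":    {"y1": 0,      "y2": 0,      "y3": 0},      # training-led: no recurring margin
    "manager_training":   {"y1": 15_000, "y2": 5_000,  "y3": 5_000},  # initial cohort, then refreshers
    "accommodations":     {"y1": 2_000,  "y2": 2_000,  "y3": 2_000},  # JAN: low per-employee medians
}

row_totals = {category: sum(years.values()) for category, years in tco.items()}
three_year_total = sum(row_totals.values())
print(f"3-year TCO (illustrative): ${three_year_total:,.0f}")
```

To compare models, clone the dict for a staffing-led scenario (zero the training rows, fill the staffing-margin row identically in all three years) and compare the totals; the Year 2 and Year 3 columns are where the curves diverge.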
For accommodation cost specifically, JAN's annual cost survey publishes employer-reported median costs of accommodations across condition types. Most accommodations come in well below what HR teams typically budget for them; the median per-accommodation cost is consistently low.
The 30 / 90 / 365 day rhythm
A measurement cadence that surfaces problems early without creating evaluation theater.
Day 30 — what should already be visible. Panelist scoring variance has tightened. Manager-confidence pulse survey returns are in for any new disclosures. The first redesigned-interview cohort has been hired and is in onboarding. Trailing indicator: nothing — it's too early. Leading indicators: scoring variance, manager-confidence first reads.
Day 90 — the first reliable retention signal. The first cohort hired through redesigned processes hits the 90-day mark. Compare retention against the pre-redesign baseline. If the redesign is working, retention should be at parity or better with the comparison cohort. If it's worse, the post-hire side of the workflow (manager readiness) is usually the variable that's lagging — the interview redesign worked, the post-hire experience didn't catch up. Cross-link to the manager disclosure response guide as a remediation read.
Day 365 — durable behavior change vs reverting to old patterns. A year in, you're looking for whether the changes have stuck across multiple hiring cycles, multiple cohorts of new managers, and multiple performance review cycles. The honest test: would a new hiring manager joining the team this quarter run the redesigned process by default, without explicit instruction? If yes, the program is durable. If no, it's process-dependent and will erode the next time a senior leader changes.
Training-led vs staffing-led ROI
Two different shapes of return, both valid for different scenarios. The honest framing isn't "which is better" — it's "which is the right shape of investment for what you're trying to build."
Staffing-led models operate at the placement layer of the hiring funnel. They source candidates, screen them, and place them into client engagements — sometimes as the staffing firm's own employees billed to the client, sometimes as direct hires with the client. The ROI shape is fast and flat: speed-to-first-hire is the strongest dimension, and pipelines in concentrated tech roles are dense. The trade-off: the per-placement cost recurs indefinitely. Year 3 looks the same as Year 1 on a cost basis.
For organizations with concentrated tech hiring needs and time pressure (roles that must be filled within 30 to 90 days), the staffing-led model is often the right answer. It exists for a reason and serves a real need.
Training-led models operate at the capability layer. They train your hiring managers, recruiters, and panelists to run redesigned processes for any role and any candidate — neurodivergent or otherwise — with the explicit goal of building internal capacity that compounds across hires. The ROI shape is slower and steeper: the first hire takes longer because the team is learning, but the tenth hire reuses the same capability with no incremental capability cost. By Year 3, the per-hire cost is dramatically below the staffing-led equivalent.
For organizations building durable in-house hiring capability — across functions broader than just tech, with a longer time horizon — the training-led model is usually the right shape of investment. Essential Training covers the manager and recruiter readiness layer; Premium Coaching adds direct one-on-one work with Debra Solomon for senior HR and DEI leaders who need higher-touch advisory.
Many HR teams use both. Staffing-led for urgent filled tech roles; training-led for the broader organizational capacity that lets the company eventually reduce staffing-firm reliance. They're complements more often than substitutes.
Frequently asked questions
What's the realistic time-to-first-measurable-result for a training-led program?
Leading indicators usually surface in 30 to 60 days: panelist scoring variance, candidate offer-acceptance rates, and qualitative manager-confidence signals. The first reliable retention signal is at 90 days post-hire, which means the first cohort hired through a redesigned process produces interpretable retention data 4 to 5 months after kickoff. Full cost-curve advantage versus staffing-led models is typically visible in year 2.
How do we measure neurodiversity hiring outcomes without identifying individual employees?
Aggregated cohort metrics, voluntary self-identification with privacy protection, and comparative metrics that don't require disability disclosure. Comparative offer rate at the structured-interview stage — segmented by voluntary self-identification where candidates have consented — is the most reliable measure that doesn't require individual identification. The EEOC's guidance on demographic data collection sets the framework here.
What if our CFO wants a hard ROI number before approving the budget?
The ROI number depends on your hiring volume, current attrition rate, manager hourly cost, and whether you're comparing against staffing-led costs or against a no-program baseline. We provide the framework and the inputs; the ROI calculation is yours, with realistic ranges based on published research and your own historical data. We do not publish a generic "X% ROI" claim for two reasons: it would not be defensible across companies, and our brand integrity rule requires that every numerical claim be sourced or shown as a structure for the buyer to fill in.
Should we benchmark against industry data or our own historical baseline?
Both, with priority to your own baseline. Industry data is useful for context — what does the median look like? — but your historical baseline is what tells you whether the program is working for your specific company. Track both. When they disagree, your baseline is the truth.
Do training-led and staffing-led approaches play well together?
Yes. A common pattern: staffing-firm vendors for urgent filled tech roles where placement velocity matters, training-led investment for the broader organizational capacity that compounds over years. The two approaches operate at different points in the hiring funnel and on different time horizons. They're complements more often than substitutes.
External sources we cite and trust
Primary sources for the legal, structural, and cost claims on this page.
- JAN — Workplace Accommodations: Low Cost, High Impact — Job Accommodation Network's annual employer cost survey. The primary source for accommodation-cost expectations.
- ADA.gov — Employment — Americans with Disabilities Act, the interactive process, and employer obligations.
- EEOC — Disability Discrimination — Equal Employment Opportunity Commission guidance, including demographic data collection and the federal complaint process.
- Harvard Business Review — Neurodiversity as a Competitive Advantage — Austin and Pisano's foundational piece on the workplace business case.
- SHRM — Cost-per-hire and turnover-cost research — Society for Human Resource Management published research on hiring and turnover economics, useful for the inaction-cost calculation.
The hiring side of the ROI calculation depends on interview redesign. The retention side depends on manager readiness post-disclosure. The framework on this page combines both.