Five Problems We Can Fix
A decade of NDIS evidence and the work ahead
This consolidates a decade of formal recommendations and the evidence behind them: 676 recommendations from 63 review reports, 51 public submissions to the JCPAA inquiry, 451 discrete issues. Five root causes account for approximately 93 percent of the recommendation evidence base and 94 percent of the public submission issues.
This webbook is the consolidated SP analysis of NDIS reforms, recommendations and the work ahead. The five-root-cause framing in Chapter 3 is the analytical anchor; Chapter 4 includes both the relational view and an alternative four-cause causal chain as a complementary lens.
Use the sidebar to move between chapters. Each chapter is self-contained but cross-references to other chapters are linked. To save the whole document as a PDF, use your browser's print function (the print stylesheet collapses the tabs into a linear document).
Copyright and intellectual property
© 2026 Angela Harvey, Supporting Potential. All rights reserved.
The content of this report, including the five-root-cause framework, the four-cause causal chain lens, the analytical synthesis across 676 formal recommendations and 51 public submissions, the cause-and-effect chains, the power map, the visual artefacts, and the underlying methodology, is the intellectual property of Angela Harvey trading as Supporting Potential.
No part of this report may be reproduced, redistributed, transmitted, or used to train artificial intelligence systems, in any form or by any means, without prior written permission. Short quotations for the purposes of journalism, academic review, or policy commentary are permitted provided the report is cited as “Supporting Potential, Five Problems We Can Fix (May 2026)”.
For permission to reproduce, cite at length, or build derivative work on the analytical framework, please contact Angela Harvey at Supporting Potential.
Executive summary
Over the last 10 years, the Australian Government has spent more than $680 million on reviews and reports to interrogate the NDIS and ensure it is fit for purpose. This report consolidates that decade of formal recommendations and the evidence behind them. It draws on 676 formal recommendations across 63 review reports between 2016 and 2025, on 51 public submissions to the Joint Committee of Public Accounts and Audit inquiry into the administration of the NDIS, and on the 451 discrete issues raised by those submissions.
Five root causes account for approximately 93 percent of the recommendation evidence base and 94 percent of the public submission issues. They are the same five problems, named and re-named across multiple reviews, met with partial reforms that have not redesigned the underlying cause.
The headline finding is descriptive. The same five problems have been raised in formal reviews for up to a decade. The same five problems are raised again by the public submissions to the JCPAA inquiry in 2026. Across the 676 recommendations, 13 (1.9 percent) are recorded as implemented; 290 (42.9 percent) are recorded as in progress; and 362 (53.6 percent) have no published status, which is itself a transparency gap. Whatever the exact mix of implemented, in-progress and unknown, recommendations are accumulating faster than they are being closed out, and the design failures named in those recommendations remain visible in the public submissions a decade later.
What is at stake is also straightforward. Providers are exiting markets. Quality is under pressure. Participants in higher-risk settings are most exposed where the system still relies on complaints to surface harm. The reforms scheduled for 2026 will land on a system that has not addressed the design failures that produced the harm those reforms exist to prevent.
This report sets out the evidence for each root cause, maps the May 2026 reform landscape against them, segments the implications by service type, and offers an action set for government, peak bodies and providers. The framing throughout is intentional. These are problems we can fix. The work ahead is not to discover what is wrong; it is to act on what is already known.
The five root causes at a glance
| # | Root cause | Direction |
|---|---|---|
| RC1 | Reasonable and necessary undefined in practice | Toward a transparent, consistent line of sight between need, funding and life quality |
| RC2 | Transaction-based workforce design with no support circle architecture | Toward a coordinated workforce with defined roles, qualifications, interfaces and shared accountability for outcomes |
| RC3 | No proactive quality system, only reactive complaints | Toward quality built into the system, not extracted after harm |
| RC4 | Individual funding applied to group economics | Toward a housing and supports model that aligns funding with group economics without surrendering individual self-direction |
| RC5 | Designed for a default participant who doesn’t exist | Toward a system designed from the margins so it works for everyone |
The five do not act independently. Two relationships in particular shape what can be fixed and in what order. The workforce design failure (RC2) and the proactive quality system gap (RC3) are entangled rather than parallel; a reform that addresses one without the other leaves half the gap open. The funding architecture for housing and group settings (RC4) sits upstream of the proactive quality reform (RC3) in delivery sequence; pushing harder on quality without first redesigning the economics that make quality deliverable accelerates the provider exits the quality reform is intended to prevent. Section 4.4 sets out these relationships in detail; Sections 8 and 9 reflect them in the recommendations.
1. The evidence base
This report draws on a single consolidated dataset built over twelve months from publicly available sources.
Headline figures
- 676 formal recommendations across 63 distinct review reports between 2016 and 2025.
- $680 million or more in directly attributable review spending. Four components are firm and individually disclosed: the Disability Royal Commission ($527.9 million), the 2023 NDIS Review ($18.1 million), the NDIS Review implementation and design work ($129.8 million), and the nine NDIS-related ANAO performance audits in this dataset ($5.76 million combined). The Productivity Commission’s 2011 Disability Care and Support inquiry, the Tune Review (2019), the McKinsey Independent Pricing Review (2018) and the Boland Review of the NDIS Act are not separately disclosed and are not included in the $680 million figure. Joint Standing Committee, Senate Estimates and similar parliamentary scrutiny costs are absorbed in standing parliamentary appropriations and are also excluded.
- 51 public submissions to the Joint Committee of Public Accounts and Audit inquiry into the administration of the NDIS, producing 451 discrete issues raised.
- Five root causes account for approximately 93 percent of the recommendation evidence base and 94 percent of the public submission issues.
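The build-up of the headline spending figure is simple arithmetic over the four firm, individually disclosed components listed above. As an illustrative check (this sketch is not part of the report's tooling; the dictionary keys are shorthand labels for the components named in the bullet):

```python
# Firm, individually disclosed components of review spending, in $ million,
# as listed in the bullet above. Non-disclosed reviews are excluded.
firm_components = {
    "Disability Royal Commission": 527.9,
    "2023 NDIS Review": 18.1,
    "NDIS Review implementation and design work": 129.8,
    "Nine NDIS-related ANAO performance audits": 5.76,
}

total = sum(firm_components.values())
print(f"Firm components total: ${total:.2f} million")
# The firm components alone exceed $680 million, which is why the report
# states the figure as "$680 million or more".
```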
The 676 by review stream
| Stream | Recommendations | Reports |
|---|---|---|
| Disability Royal Commission (2019–2023) | 219 | 9 chapters and volumes |
| NDIS Review (Bonyhady/Paul, 2023) | 138 | 26 chapters |
| Joint Standing Committee inquiries | 137 | 5 inquiries |
| ANAO performance audits (2016–2025) | 56 | 9 audits |
| Productivity Commission NDIS Costs Study (2017) | 44 | 1 report |
| Tune Review (2019) | 29 | 1 report |
| McKinsey Independent Pricing Review (2018) | 28 | 1 report |
| Community Visitor Schemes Review (2018) | 6 | 1 report |
| Grattan Institute (2025) | 4 | 1 report |
| Boland Review (2024) | 2 | 1 report |
| Other (Senate inquiries, Regulatory Burden, General Issues, Ministerial, Pricing Reform, Commission AR) | 13 | 12 reports |
| Total | 676 | 63 reports |
Source type breakdown
| Type | Recs | Share |
|---|---|---|
| Royal Commission | 219 | 32% |
| Independent reviews (NDIS Review, Tune, McKinsey, PC NDIS Costs, Grattan, Boland, CVS Review) | 251 | 37% |
| Parliamentary inquiries (JSC) | 137 | 20% |
| Performance audits (ANAO) | 56 | 8% |
| Other (Senate, Regulatory Burden, General Issues, Ministerial, Pricing Reform, Commission AR) | 13 | 2% |
The 51 JCPAA submissions
- Lodged with the Joint Committee of Public Accounts and Audit inquiry into the administration of the NDIS, with submissions made public progressively from February 2026 through May 2026.
- 451 distinct issues extracted from those submissions.
- 94.0 percent of issues map to at least one of the five root causes.
A short note on method. Recommendations were extracted from each formal review report, tagged against a seven-pain-point taxonomy, then consolidated into the five root causes used in this report. Submission issues were extracted from each public JCPAA submission as discrete one or two sentence statements, mapped against the same taxonomy, and confidence-rated. The $680 million figure is built up from disclosed Federal Budget allocations and the per-audit cost figures published by ANAO in each performance audit report; the breakdown above shows the firm and not-disclosed components. The full method, the source registry and the limitations are set out in Appendix D, with per-audit ANAO costs documented at research/policy-evidence/docs/anao_audit_costs.md. The data files behind every figure in this report are listed in the appendices.
2. How to read this report
The report follows a deliberate structure. Each root cause chapter has the same shape so the reader can compare across them.
Each chapter contains the diagnostic title and direction, years unresolved, an evidence-at-a-glance table, the diagnosis, a section on what has been tried, a section on what is still missing, and a reform exposure section.
Where the report uses qualified language (“appears to”, “is consistent with”), that reflects the limit of the evidence available rather than hedging. Where it uses definitive language, the underlying evidence is robust.
The headline figures throughout are unique counts. Where a recommendation or submission issue speaks to more than one root cause, it is counted once for each. Totals across the five root causes therefore sum to more than 100 percent.
A note on the JCPAA tables
Three counts appear in each root cause chapter, drawn from the 51 public submissions to the JCPAA inquiry and the 451 discrete issues extracted from them.
- JCPAA submissions raising it is the number of submitters (out of 51) whose submission contains at least one issue touching the root cause. This is breadth, how many distinct voices raised something on the topic.
- JCPAA issues raised is the number of discrete issues (out of 451) that touch the root cause at any level, primary or secondary. An issue can touch more than one root cause, so issue counts across the five root causes sum to more than 451.
- JCPAA issues where this is the primary concern is the subset of issues where the root cause is the main framing, not a secondary mention. Because each issue carries at most one primary tag, primary counts across the five root causes sum to roughly 451.
A worked example. An issue that says “plan managers should have minimum qualifications, and the lack of qualifications allows fraud to go undetected” touches both RC2 (workforce design) and RC3 (proactive quality). It is counted as one issue for each, but its primary concern is workforce design, so RC2 gets the primary tag and RC3 does not.
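The three counts can be sketched in code. This is an illustrative sketch only, not the project's actual tooling; the issue records, submitter identifiers and field names are hypothetical, chosen to mirror the worked example above.

```python
# Hypothetical issue records: each issue names every root cause it touches
# and marks exactly one of them as the primary concern.
issues = [
    # The worked example: touches RC2 and RC3, primary is RC2.
    {"submitter": "S01", "touches": {"RC2", "RC3"}, "primary": "RC2"},
    {"submitter": "S01", "touches": {"RC1"}, "primary": "RC1"},
    {"submitter": "S02", "touches": {"RC3"}, "primary": "RC3"},
]

def jcpaa_counts(issues, rc):
    touching = [i for i in issues if rc in i["touches"]]
    return {
        # Breadth: distinct submitters with at least one issue on the topic.
        "submissions_raising_it": len({i["submitter"] for i in touching}),
        # Issues touching the root cause at any level, primary or secondary.
        "issues_raised": len(touching),
        # Subset where the root cause is the main framing.
        "primary_concern": sum(1 for i in touching if i["primary"] == rc),
    }

print(jcpaa_counts(issues, "RC3"))
# Issue counts can sum to more than the number of issues, because one
# issue may touch several root causes; primary tags are unique per issue.
```

Note how the worked example's issue contributes one to the RC3 issue count but zero to the RC3 primary count, which is exactly why the issue totals overlap while the primary totals do not.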
3. The five root causes
3.1 RC1. Reasonable and necessary undefined in practice
Toward a transparent, consistent line of sight between need, funding and life quality.
Years unresolved: 9 (first formally identified 2017).
Evidence at a glance
| Source | Count |
|---|---|
| Formal recommendations tagged to this root cause | 206 |
| JCPAA submissions raising it | 40 of 51 |
| JCPAA issues raised | 99 |
| JCPAA issues where this is the primary concern | 82 |
The diagnosis
The “reasonable and necessary” test is the gateway between need and funding in the NDIS. It is also the test that has never been defined operationally. The Act sets out criteria, the Operational Guidelines paraphrase them, and the planner applies them. Different planners interpret the same situation differently. Two participants with similar needs receive different decisions. The system is filtering applications through a definition it has not actually written.
The cost of that ambiguity falls on participants and providers. The Administrative Appeals Tribunal, and now the Administrative Review Tribunal that replaced it from October 2024, has set aside or varied 76 percent of NDIA decisions taken to appeal. Less than 2 percent of appealed decisions are affirmed. Appeal volumes grew 727 percent between 2016 and 2020. The most common issue across appeals is reasonable and necessary. Read at face value, those numbers say that the great majority of appealed decisions were wrong, and that the system rewards those who can afford to appeal.
The “everyday items” rule sharpens the problem. The Act excludes from funding items “available to everyone”. Available to everyone is not the same as serving the same purpose. An iPad to most people is a device for entertainment, communication and productivity. To a person who is non-verbal and uses it as their primary communication system, the iPad is the communication system. The rule applied without context excludes the second use because it cannot distinguish it from the first.
Financial inequality compounds the access problem. People with disability are about 1.6 times more likely to live in poverty than the general population. Severe and persistent disability adds an estimated $173 a week in extra costs. Approximately 20 percent of NDIS participants live in poverty. To prove eligibility, a participant needs evidence. To get evidence, the participant often needs to pay for assessments. The participants for whom the system would matter most are the participants least equipped to navigate it.
Nine years of formal recommendations have called for a clearer line of sight between need and funding. Episodic conditions were recognised in legislation in July 2022 (section 24(3) of the NDIS Act). The Getting Back on Track Act provides a framework for further definition. Support Lists under section 10 specify what funding can and cannot be spent on. Each is partial. None defines reasonable and necessary at the operational level where the planner makes the decision.
What has been tried
Episodic conditions are now recognised in legislation. The amendment in July 2022 acknowledged that conditions like multiple sclerosis, mental health conditions and chronic fatigue can fluctuate without ceasing to be permanent for the purposes of the Act.
The Getting Back on Track Act 2024 introduced new statutory tools, including support lists and functional capacity assessment provisions. These create the legal hooks for clearer definitions.
Section 10 support lists, effective October 2024, specify what NDIS funding can and cannot be spent on. The lists clarify funded inclusions and exclusions but do not define need.
Operational guidelines and planner training have been refreshed. Functional capacity assessment tools are in development. The PACE planning system aimed to standardise reassessment workflows.
These initiatives are not bad. They add hooks, lists and processes around the existing test. None of them defines reasonable and necessary at the operational level.
What is still missing
The single most consequential gap is an operational definition of reasonable and necessary that planners can apply consistently. Five specific elements are absent:
- A published method for translating Act criteria into planner decisions, so two planners reach the same result on the same facts.
- Explicit recognition of disability context in the everyday items rule, so the same item serving a different purpose can be funded where appropriate.
- Funded access pathways for participants in financial hardship who cannot afford the evidence required to apply or appeal.
- Use of tribunal overturn rates (Administrative Appeals Tribunal and now the Administrative Review Tribunal) as a quality signal feeding back into planner training and decision support, rather than as a separate appeals system parallel to scheme operation.
- Published consistency measures, so the scheme can be judged on whether the same need produces the same decision.
Until these are in place, the contested, disempowering and stressful argument the NDIS Review described will continue to play out at every plan and every reassessment.
Reform exposure
Minister Butler’s 22 April reform package contains three propositions that touch this root cause.
R1 introduces functional capacity assessments to replace diagnosis-based access. The intent is greater consistency through standardised assessment. The reform changes the access gate. It does not, by itself, define reasonable and necessary inside the scheme. Two participants who pass the same functional capacity test can still receive different reasonable and necessary decisions.
R2 announces that approximately 160,000 participants will be moved off the scheme, framed as a tightening of who is eligible. The scheme boundary is shifting. The definition of need within the scheme is not.
R3 foundational supports outside the NDIS is the proposition closest to addressing this root cause. If properly designed and funded, foundational supports create the access pathway for participants who do not meet NDIS eligibility but still need disability supports. The dependency on state funding and commissioning architecture qualifies the conclusion. As at May 2026 no state disability minister has committed cash beyond the December 2023 communique, and the receiving commissioning architecture has not been built.
Section 10 support lists, in force since October 2024, tighten what NDIS funding can be spent on. They clarify scope but do not define need.
The functional capacity assessment workforce required to deliver R1 across the existing 760,000 participants does not currently exist within the NDIA. Mass reassessment in the announced timeframe carries operational delivery risk.
Of the formal recommendations targeting this root cause, the May 2026 alignment refresh recorded approximately 1 percent as implemented. That is the highest implementation rate across the five root causes, and still consistent with the wider pattern of recommendations accumulating faster than they resolve.
Taken together, the reforms in flight tighten the scheme boundary, list what can be funded, and propose a new assessment framework. None defines reasonable and necessary in operation. The gap the NDIS Review described in 2023 remains.
3.2 RC2. Transaction-based workforce design with no support circle architecture
Toward a coordinated workforce with defined roles, qualifications, interfaces and shared accountability for outcomes.
Years unresolved: 10 (first formally identified 2016).
Evidence at a glance
| Source | Count |
|---|---|
| Formal recommendations tagged to this root cause | 247 |
| JCPAA submissions raising it | 45 of 51 |
| JCPAA issues raised | 161 |
| JCPAA issues where this is the primary concern | 108 |
The diagnosis
Every role inside the NDIS workforce was scoped to perform an activity. None were scoped to support a participant’s outcome with defined accountability for it. None were scoped as part of a connected support circle around the participant. The result is a workforce that, a decade in, still operates as a collection of independent transactions rather than as a coordinated quality system.
The pattern recurs across every role. NDIA planners make funding decisions without required clinical or disability expertise. Local Area Coordinators were created without defined interfaces to the planner, the support coordinator, or the participant’s existing supports. Support coordinators were given a vague capacity-building remit, no minimum qualifications, and no defined position in the support circle. Plan managers number more than 1,400 nationally, without consistent oversight or training requirements. Behaviour support practitioners produce plans that authorise restrictive practices, while no role is scoped to ensure those plans reduce restrictive practices in operation. Support workers deliver intimate care, behaviour support implementation and complex daily routines, often without supervision built into the role. Allied health professionals see their clinical evidence routinely set aside by planners. Auditors interpret the practice standards inconsistently across firms. Advocates exist where states fund them and not where they do not. Community visitors were recommended in 2018 and have not been implemented at the national level.
These are not isolated failings. They are the same design failure repeated across every role. Each role was scoped to do something specific (write a plan, manage a plan, deliver a support, run an audit) without a parent design specifying how the roles connect, what each role owes the participant, or how quality is checked across the chain.
The compounding effect shows up at the points of greatest risk. The participant is the only person who can fully own their outcome, and many participants are fully capable of directing their own supports and do so well. The design failure shows up for participants whose risk profile, complexity or constrained capacity to self-direct means they need a dedicated function working in their best interests. No current role is scoped to be that function, so where outcomes are poor for these participants accountability cannot be located. This is the pattern 45 of 51 JCPAA submissions describe, and the pattern formal reviews have described in 247 separate recommendations across the past decade.
The framing matters because it changes the response. A workforce shortage problem is solved by adding more practitioners. A workforce design problem is not, because adding more practitioners to a system whose roles were never properly scoped scales the failure rather than solving it. The shortage is real, but it sits downstream of the design.
The roles, named
| Role | How it was scoped | Design failure surfaced in the evidence |
|---|---|---|
| NDIA planner | Front-line decision-maker on funding | No required clinical or disability expertise; inconsistent application of “reasonable and necessary”; clinical evidence routinely set aside |
| Local Area Coordinator (LAC) | Community-based connection role | No defined interface with planner, support coordinator or other supports; being replaced by Navigator before original design failures were resolved |
| Support coordinator | Build participant capacity to use plan | No clear scope, no minimum qualifications, independence optional, activity rather than outcome measures, gatekeeping risk in thin markets |
| Plan manager | Process payments and provide financial oversight | No mandatory training or qualifications baseline; 1,400-plus providers operating with minimal oversight; carry risk disproportionate to remuneration |
| Behaviour support practitioner | Develop plans for participants with complex behaviours | Funding tied to plan production rather than to improvements in behavioural responses and reductions in restrictive practices; no defined interface with treating clinicians; 68 percent of participants not consulted |
| Support worker | Deliver supports as billable hours | Minimum mandatory training is the Worker Orientation Module; supervision not built into the role; deliver behaviour support implementation and intimate care without clinical backing |
| Allied health professional | Provide therapy, assessment and clinical input | Clinical evidence routinely set aside; pricing model does not cover supervision, training, travel or non-billable coordination; AHPRA-regulated outside the scheme |
| Auditor | Verify compliance with practice standards | Limited disability services exposure across many firms; inconsistent interpretation between auditors; assessing compliance not quality; no calibration mechanism across the audit market |
| Advocate | Support participants through complaints and appeals | Funded where states choose to fund; assumed by the complaints system but not provided as part of NDIS infrastructure |
| Community visitor | Independent oversight of institutional and group settings | Recommended in 2018 as a national framework, not implemented; where state schemes exist they have inspection rights but no enforcement powers, and coverage is inconsistent across jurisdictions |
What has been tried
Multiple workforce initiatives have been announced over the past decade. None has reached the underlying design problem.
The Navigator role has been proposed to replace Local Area Coordinators and parts of support coordination. It is in design phase, with a five-year transition. Registration of support coordinators was paused in December 2025. No design framework has been published showing how the original support coordination failures will be avoided in the new role.
Behaviour support workforce strategies have focused on increasing practitioner supply. None have addressed the funding model that pays for plan production rather than restrictive practice reduction.
Mandatory registration for SIL and platforms commences in July 2026. It will lift compliance benchmarks for those service types but does not extend to support coordination, plan management or many of the role types named above.
The Integrity and Safeguarding Act 2025 strengthens what the regulator can do after harm. It does not redesign what each role owes the participant before harm.
These are not bad initiatives, only partial ones, each addressing one role at a time without reaching the support circle problem.
What is still missing
The single most consequential gap is the support circle architecture itself. There is no published model showing how the roles connect, what each role owes the participant, where one role’s responsibility ends and another’s begins, and who has accountability to the participant for life quality outcomes.
Beneath that, several specific elements are absent across the workforce:
- A minimum qualifications baseline for each role.
- Defined interfaces between roles, including formal handovers, shared documentation and joint accountability for participant outcomes.
- Outcome measures embedded in role design rather than activity measures bolted on top.
- Quality checking that lives inside the workforce, not only outside it at the regulator.
- Workforce capability infrastructure (training pipelines, supervision frameworks and career pathways) so that redesigned roles are filled with people equipped to perform them.
Until the support circle is architected and the capability infrastructure is built behind it, fixing one role at a time will keep producing the same pattern, a slightly improved transaction in a still-disconnected workforce.
Reform exposure
The reforms in flight as at May 2026 touch this root cause unevenly. Minister Butler’s reform package, announced at the National Press Club on 22 April 2026, sits at the centre of the picture and changes the shape of several of these reforms.
The Navigator design, reaffirmed in the announcement, has the opportunity to address the original support coordination failures. As at May 2026 no detailed design framework has been published. If the published design does not specify scope, qualifications, interfaces, position in the support circle and outcome measures, the new role will inherit the same pattern.
Mandatory registration was expanded in Butler’s announcement to cover higher-risk activities including personal care, with effect from 1 July 2026. Registration lifts compliance benchmarks for the registered roles but reaches only one node in a longer chain. SIL providers implement behaviour support plans others write, and they are assessed against standards by auditors whose disability services exposure varies. Quality at the participant level depends on the BSP being well-written, the auditor being able to recognise quality in disability services, and the registered provider having the workforce capability to implement consistently. On its own, this work is unlikely to lift the chain. A registered support worker still operates without supervision built into the role.
Butler’s announced cut of approximately 30 percent to third-party intermediary spend, applying primarily to plan management and support coordination, recasts what was previously deferred plan management reform as a budgetary measure. Reducing plan management spend without specifying minimum qualifications, oversight requirements or scope leaves the existing plan manager market doing the same activity with less resource. Multiple JCPAA submissions identified plan management as a design problem; cutting spend on that role does not redesign it.
The practice standards refresh provides a vehicle for embedding outcome measures, but only if the standards themselves are redesigned around outcomes rather than processes.
The Integrity and Safeguarding Act sharpens the regulator’s powers to act after harm. The workforce architecture that would prevent the harm is not part of the Act.
The May 2026 alignment analysis tested 89 recommendations from the evidence base that substantively align with Butler’s agenda, of which 16 address a consolidated design failure as identified in this report’s framework. The remaining reforms strengthen administrative efficiency or revenue protection, both of which are reasonable scheme objectives. Every root-cause-targeting recommendation in the aligned set sits at implementation status “unknown” or “in progress”. The reforms with implementation traction are R10 (fraud detection) and parts of R7 (digital payment integrity). The reforms that would redesign roles around participant outcomes are not yet on the published delivery path. Taken together, the reform landscape touches the symptoms of RC2 in several places. None of the reforms in flight, individually or collectively, redesigns the support circle.
3.3 RC3. No proactive quality system, only reactive complaints
Toward quality built into the system, not extracted after harm.
Years unresolved: 10 (first formally identified 2016). Tied with RC2 as the longest-running unresolved root cause.
Evidence at a glance
| Source | Count |
|---|---|
| Formal recommendations tagged to this root cause | 285 |
| JCPAA submissions raising it | 46 of 51 |
| JCPAA issues raised | 149 |
| JCPAA issues where this is the primary concern | 91 |
The diagnosis
The NDIS has no proactive quality system. Quality monitoring relies on participants raising complaints when something has already gone wrong. For participants who are well-informed, well-supported and able to leave a poor provider, the complaints route is one safeguard among several. For participants with high support needs, limited informal networks, or living in settings where leaving is not a realistic option, complaints arrive too late or do not arrive at all.
Two layers are missing. Inside the support circle, no role is scoped to check whether quality is actually being delivered. RC2 details that gap from the workforce angle. Outside the support circle, the regulator has no mandated proactive presence. The NDIS Commission acts after a complaint or after a serious incident report. There is no equivalent of the welfare visits, mandatory site inspections or independent oversight that operate in comparable systems.
The complaints model itself rests on assumptions that do not hold in disability support. The model assumes the participant knows what good service looks like, can compare it against alternatives, can walk away from a poor provider, and has no ongoing dependency on the staff member or organisation they would be complaining about. Disability support is often the opposite. A participant who has only ever received support one way may not know it could be different. A participant who lives in supported accommodation cannot walk away from a complaint without losing their home. A participant whose support worker will be in their home tomorrow has a different calculus than someone returning a faulty product.
The volume tells the story. NDIS Commission complaints grew from 1,422 in 2018-19 to 29,054 in 2023-24, a 20-fold increase in five years. Sector estimates suggest 70 to 85 percent of abuse experienced by people with disability goes unreported. The complaints volume that does reach the Commission is the visible portion of a larger pattern.
This is, together with RC2, the longest-running unresolved root cause in the evidence base. ANAO first identified it in 2016. The Disability Royal Commission’s six alternative mechanisms, recommended in 2023, remain unimplemented at national scale. The Community Visitor Schemes Review of 2018 recommended a national framework. That has not been implemented either. The 2025 ANAO audit is still finding the same gap. A decade of formal recommendations has not produced a proactive system.
What has been tried
The NDIS Commission has expanded its complaint-handling capacity over the past several years. Triage has improved, response times have shortened, and online complaint portals and accessible information for participants have been added.
The Integrity and Safeguarding Act 2025 strengthens what the regulator can do once it knows about harm. Penalties of up to $15 million for corporations. Expanded banning orders for individuals, auditors and consultants. Anti-promotion orders to stop predatory marketing. Stronger information-gathering powers and information-sharing arrangements with other safeguarding bodies.
Worker screening continues to expand, with progressive coverage harmonisation across jurisdictions.
Some states operate Community Visitor Schemes for disability supported accommodation. Coverage is inconsistent across jurisdictions, and the schemes that do exist vary in scope and inspection powers. The 2018 Community Visitor Schemes Review documented this fragmentation and recommended a national framework that has not been implemented.
These initiatives are not bad. They strengthen what happens after harm has been reported, and they make reporting somewhat easier. None of them is a proactive system. None places the Commission, an independent body, or a defined function in regular contact with participants who cannot self-advocate.
What is still missing
Six mechanisms have been recommended by formal reviews and remain absent at national scale:
- Adult safeguarding functions, modelled on child protection but adapted for adult disability contexts.
- A one-stop-shop reporting mechanism with warm referrals and integrated advocacy.
- Expanded OPCAT-style independent inspection of detention and detention-like settings.
- A national community visitor scheme operating consistently across all jurisdictions, with inspection rights and defined enforcement consequences.
- A national disability death review mechanism.
- A reportable conduct scheme requiring providers to notify an independent body of allegations against staff.
Beyond these named mechanisms, three structural elements are missing:
- Funded welfare visitor positions independent of the providers being visited, with mandated unannounced visits to SIL, group homes and other 24-hour settings.
- Direct contact between Commission staff and participants in higher-risk settings, scheduled rather than triggered by complaint.
- Immediate safety arrangements for participants during a complaint investigation, so a participant raising a complaint about their accommodation provider does not continue to live with that provider while the investigation proceeds.
The recurrence analysis in Section 4.1 confirms the pattern. The largest cluster of recommendations made multiple times across separate reviews concerns market data and stewardship. Conflicts of interest in support coordination, regulatory risk frameworks and active fraud detection follow. The same gaps have been re-described in different language across a decade.
Reform exposure
The Integrity and Safeguarding Act 2025 passed both Houses of Parliament on 1 April 2026 and received Royal Assent on 8 April 2026. Schedule 1, the enforcement schedule, is in force. Schedule 2, which transfers functions related to the NDIA, is likely to be in force from 6 May 2026 (readers should verify against the Federal Register of Legislation). Penalties have lifted. Banning powers have expanded. Information-gathering has strengthened. None of these provisions establishes a proactive monitoring framework. The Act sharpens the post-harm response without redesigning the pre-harm system.
Minister Butler’s 22 April reform package contains R10 (fraud crackdown expansion), which extends financial integrity tools. Financial integrity protects scheme funds. It is not the same as participant safety. The Butler reform package does not include funded welfare visits, a national community visitor scheme, adult safeguarding functions, or reportable conduct schemes.
R6 (mandatory registration expansion) creates a register of providers. A register is not proactive monitoring. Registration tells the regulator who is delivering supports; it does not tell the regulator whether quality is being delivered or whether a participant is safe today.
The Community Visitor Schemes Review’s 2018 recommendation for a national framework remains unadopted. The Royal Commission’s six alternative mechanisms, recommended in 2023, remain unimplemented at national scale.
Of the formal recommendations targeting this root cause, the May 2026 alignment refresh recorded approximately 1 percent as implemented. That rate is consistent with the pattern from RC2. Reforms that strengthen revenue protection, administrative efficiency and post-harm enforcement move. Reforms that establish proactive quality presence do not.
Taken together, the reform landscape touches the symptoms of RC3 in several places. None of the reforms in flight, individually or collectively, establishes a proactive quality system either inside the workforce or at the regulator.
3.4 RC4. Individual funding applied to group economics
Toward a housing and supports model that aligns funding with group economics without surrendering individual self-direction.
Years unresolved: 6 (first formally identified 2020).
Evidence at a glance
| Source | Count |
|---|---|
| Formal recommendations tagged to this root cause | 93 |
| JCPAA submissions raising it | 47 of 51 |
| JCPAA issues raised | 189 |
| JCPAA issues where this is the primary concern | 116 |
The highest-volume single concern across the public submissions.
The diagnosis
The NDIS funds individual entitlements. Specialist Disability Accommodation and Supported Independent Living operate on group economics. The two logics are incompatible, and the provider absorbs the gap.
A SIL provider is paid for the support hours each participant uses, but the cost of delivering those hours is shared across the group. When one resident leaves and the room sits vacant, the provider continues to staff the ratio while the income falls. When a new resident arrives whose needs do not match the existing group, the provider absorbs the mismatch. When a resident’s needs change, the funding follows the participant and the staffing does not. The model promises individual choice and runs on group economics. Neither is fully delivered.
The housing layer compounds the funding mismatch. The Specialist Disability Accommodation waitlist sits above 15,000 nationally. There are approximately 33,000 SIL recipients but only 23,000 SDA recipients, leaving a gap of about 10,000 participants whose accommodation is provided through provider head leasing or other workarounds. The capital required to close the housing gap has been estimated at around $12 billion. Specialist disability housing is not the responsibility of any single body. The Commonwealth funds individual supports, the states have largely exited social housing, and the private market for accessible properties does not exist at the required scale.
Provider economics absorb both gaps. Vacancy risk sits 100 percent on the provider; the NDIS does not fund vacancy. Head leasing risk shifts property obligations onto providers not designed to hold them. SCHADS award wage increases flow through to costs, and pricing has not kept pace. Compliance costs have grown alongside thinning margins. The FY 2024-25 financial picture set out in Section 5.1 shows that across the not-for-profit SIL providers reporting publicly, financial pressure is widespread, with multiple operators reporting losses. These are not small organisations on the edge. They are among the providers the system has come to rely on.
Self-direction matters here, and the current model often surrenders it to satisfy the economics. Supported accommodation is too often presented as a binary choice between group living a participant did not choose and individual living that is not consistently available at scale. The two are not the only options. Group economics can be retained without surrendering each participant’s right to direct their own supports, choose their housemates, change their support worker, or relocate without losing their funding. The current model bundles housing and support, ties choice to vacancy availability, and treats self-direction as something pursued after the financial logic has been resolved rather than as a design constraint shaping the financial logic itself.
Six years of formal recommendations have called for the funding model to align with how supports are actually delivered. The Joint Standing Committee inquiry into Supported Independent Living tabled its report in 2020, naming the structural mismatch between individual funding and group economics. The NDIS Review of 2023 recommended separating accommodation from support. The Disability Royal Commission of 2023 recommended phasing out group homes. The NDIS Commission’s Own Motion Inquiry into Supported Accommodation, conducted between 2022 and 2024, investigated the seven largest supported accommodation providers and fed an action plan into SIL Practice Standards co-design. Forty-seven of 51 JCPAA submissions raise this root cause. The economics underneath have not been redesigned.
What has been tried
The NDIS Review recommended separating SDA from SIL to create theoretical choice. Separation does not create housing supply, and the SDA waitlist remains above 15,000.
Mandatory SIL registration commences in July 2026. Registration adds compliance benchmarks for providers. It does not address vacancy risk, mismatch risk or the housing gap. It increases the cost of operating a model already identified as not supporting safe and quality delivery.
SIL Practice Standards have been refreshed through co-design. The standards lift expectations for quality. They do not fund quality.
The Integrity and Safeguarding Act 2025 strengthens enforcement against poor providers. Stronger enforcement on a market with a cost model identified as not supporting safe delivery is a regulatory response to an economic problem.
The NDIS Commission’s Own Motion Inquiry into the seven largest providers identified patterns of harm and produced an action plan. The inquiry was diagnostic. It was not a redesign of the funding architecture.
Each of these is a reasonable initiative. None addresses the underlying economic and architectural mismatch.
What is still missing
Five elements are absent in the design of the housing and supports system:
- A coherent funding model that acknowledges group economics where they apply, funds individual choice where it applies, and explicitly addresses where the two intersect.
- Assigned responsibility for accessible disability housing, with a single body or coordinated bodies accountable for closing the supply gap.
- Funded provider risk, including vacancy funding, mismatch funding and head lease risk where providers are required to hold it.
- Mechanisms that preserve self-direction inside group settings, including the ability to change provider without losing housing, the ability to direct one’s own supports, and the ability to relocate without losing funding.
- Sequencing that fixes economics before adding compliance cost, rather than the current sequence of lifting compliance benchmarks before redesigning the model that makes them deliverable.
Reform exposure
Mandatory SIL registration commences 1 July 2026, expanded in Minister Butler’s 22 April announcement to cover higher-risk activities including personal care. Registration lifts compliance benchmarks for SIL and platforms. It does not address the funding mismatch this root cause names. SIL providers operating at a loss are now required to absorb additional compliance cost.
The Integrity and Safeguarding Act 2025 received Royal Assent on 8 April 2026, with Schedule 1 (enforcement) now in force. The Act sharpens enforcement against poor providers. That approach assumes a market that can sustain quality if pushed harder; the cost model evidence suggests caution about that assumption.
Butler’s reform package does not include an SDA capital plan, vacancy funding or a redesigned SIL pricing model. R3 (foundational supports) sits outside the NDIS and does not extend to specialist disability accommodation. R8 (intermediary spend cut) and R9 (shortlisted quality providers) target plan management, support coordination and provider curation, not housing or SIL economics.
The 2023 NDIS Review and Disability Royal Commission recommendations on housing and SIL remain largely unimplemented. The cross-jurisdictional architecture required to assign housing responsibility has not been negotiated.
This is the highest-volume single concern across the JCPAA submissions. Forty-seven of 51 submissions raise it. The signal to the JCPAA committee is consistent across submitter types and provider scales. Whether the committee report due mid-2026 produces movement remains to be seen.
Of the formal recommendations targeting this root cause, the May 2026 alignment refresh recorded zero as implemented. Together with RC2, that is the lowest implementation rate across the five root causes.
Taken together, the reform landscape adds compliance cost to a model that the same evidence base says cannot afford it. None of the reforms in flight redesigns the underlying economics or assigns the missing housing responsibility.
3.5 RC5. Designed for a default participant who doesn’t exist
Toward a system designed from the margins so it works for everyone.
Years unresolved: 9 (first formally identified 2017).
Evidence at a glance
| Source | Count |
|---|---|
| Formal recommendations tagged to this root cause | 128 |
| JCPAA submissions raising it | 25 of 51 |
| JCPAA issues raised | 53 |
| JCPAA issues where this is the primary concern | 27 |
The diagnosis
The NDIS was designed for a participant who is urban, English-speaking, has a single stable disability, makes individual decisions, and can self-advocate. Anyone who does not match those characteristics meets a system that was not built for them. Adding bolt-on strategies to a default-participant design has not closed the access gap.
The exclusion shows up across multiple groups. First Nations participants are 28 percent less likely to receive disability supports through the NDIS than non-Indigenous Australians, and 54 percent less likely to access disability services at all. CALD participants represent approximately 10 percent of NDIS participants, against approximately 19.7 percent of the Australian population, an under-representation of around half. Women are around 37 percent of NDIS participants when their share of the disability population would suggest closer to 50 percent. LGBTQIA+ data is not collected at the scheme level, and research suggests around 30 percent of LGBTQIA+ Australians have disability, a higher rate than the general population. Remote and regional communities face thin markets, fly-in fly-out workforce models and reduced choice of providers.
Intersectionality multiplies these effects. A First Nations woman in a remote area meets cultural unsafety, gender-based barriers and thin markets concurrently. The system addresses each barrier through a separate strategy that operates in parallel with the others. The barriers themselves compound; the strategies do not. Designing for groups in silos has not kept pace with the way participants experience the system.
The bolt-on pattern is consistent. Translated materials assume the issue is language, not the underlying decision-making model. Cultural training assumes individual practitioner attitudes, not system design. Dedicated liaison roles add a single person at the interface between an excluded community and a system not built for that community. Group-specific consultations create separate strategies rather than redesigning the core. Each addition is well-intentioned. None changes what the system was designed to do.
Nine years of formal recommendations have called for a system redesigned from the margins, with intersectional data, alternative commissioning arrangements, and integration into core operations rather than separate strategies. The First Nations Strategy is now several iterations old. Women remain largely absent from the 2023 NDIS Review’s main framing. CALD under-representation persists. LGBTQIA+ participants remain absent from scheme data. The default-participant pattern has not been redesigned.
What has been tried
The NDIA First Nations Strategy has been refreshed across multiple iterations. It exists in parallel with core operations rather than embedded within them.
Cultural training programs operate within the NDIA, the Commission and provider organisations. Training programs have not changed system design.
Translated materials are available in major community languages. Translation addresses one barrier without addressing eligibility processes, planning conversations or service delivery models.
Dedicated liaison roles operate in some jurisdictions and within some providers. These individuals work hard at the interface between communities and a system not designed for those communities.
Group-specific consultations have produced reports for First Nations participants, women, CALD communities and LGBTQIA+ Australians. Consultation has produced strategies. The strategies have not been integrated into core scheme design.
These initiatives are not bad. They have generated insights, data and small-scale success cases. None of them changes the default-participant assumption underneath.
What is still missing
Five elements are absent from the system design:
- Mandated intersectional data collection across access, planning, supports used, outcomes achieved and complaints. Data not collected cannot be acted on.
- Co-designed alternative commissioning arrangements for communities the default-participant model does not serve, with First Nations communities, CALD organisations, women’s organisations and LGBTQIA+ organisations as genuine co-governance partners rather than consultees.
- Published disaggregated outcomes data, so the scheme can be judged on whether it is working for everyone, not on aggregates that average over the participants it does not serve.
- A gender-responsive review of NDIS design and operation, named in the 2023 NDIS Review’s submissions process and not yet conducted.
- Integration of group-specific strategies into core NDIA, NDIS Commission and Department of Social Services operations, rather than parallel strategies operating alongside business as usual.
Until these are in place, intersectional exclusion will continue to be addressed through bolt-ons that operate in parallel with the system.
Reform exposure
Minister Butler’s 22 April reform package contains R3 (foundational supports) which, if properly designed and funded, could create access pathways for participants who currently fall outside the NDIS for reasons connected to RC5. The dependency on state funding and on culturally appropriate commissioning architecture qualifies the conclusion.
R1 (functional capacity assessments) replaces diagnosis-based access with standardised functional tests. Standardised tests applied uniformly to a non-uniform population can deepen rather than close the access gap, depending on how the tests are designed and who validates them. As at May 2026 the assessment tools and workforce have not been finalised.
The 2023 NDIS Review’s Recommendation 14, on alternative commissioning for First Nations participants and remote communities, remains in implementation phase. As at May 2026 no detailed framework has been published showing how alternative commissioning will be operationalised, who the receiving partners will be, or how outcomes will be measured.
Mandatory data improvement under the Getting Back on Track Act creates the legal hooks for intersectional data collection. Whether intersectional data is actually collected and disaggregated reporting actually published is a delivery question still to play out.
Butler’s reform package does not include a gender-responsive review, a CALD-specific commissioning framework, or LGBTQIA+ data collection commitments.
Of the formal recommendations targeting this root cause, the May 2026 alignment refresh recorded approximately 1 percent as implemented. That figure is consistent with RC2 and RC3.
Taken together, the reform landscape touches the access gate (R1, R2, R3) without redesigning the underlying default-participant assumption. The First Nations, CALD, gender and LGBTQIA+ patterns named in the evidence base remain in place.
4. Patterns across the five
4.1 Recurring recommendations
Across all 676 recommendations, 26 distinct themes have been raised more than once by separate reviews. Seventy recommendations (10.4 percent) sit inside one of these tight recurrence clusters. A further 38 recommendations participate in cross-review pairs at a looser similarity threshold, bringing total cross-review recurrence to 108 recommendations (16.0 percent).
This is a conservative measure. NDIS recommendations are typically reworded between reviews rather than copied, which means lexical similarity will undercount substantive recurrence. The 10.4 percent figure should be read as a floor, not a ceiling.
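The headline shares follow directly from the stated counts. A minimal check, using only the figures given above (676, 70 and 38 are the source numbers, not recomputed data):

```python
# Recurrence shares from the counts stated in the text above.
TOTAL_RECS = 676          # recommendations in the corpus
TIGHT_CLUSTER_RECS = 70   # recs inside tight recurrence clusters
LOOSE_PAIR_RECS = 38      # additional recs in looser cross-review pairs

tight_share = round(TIGHT_CLUSTER_RECS / TOTAL_RECS * 100, 1)
total_recurrence = TIGHT_CLUSTER_RECS + LOOSE_PAIR_RECS
total_share = round(total_recurrence / TOTAL_RECS * 100, 1)

print(tight_share)       # 10.4
print(total_recurrence)  # 108
print(total_share)       # 16.0
```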
The most-repeated themes
| Theme | Recs in cluster | Reviews represented |
|---|---|---|
| Market data and provider stewardship | 7 | ANAO Transition 2016, McKinsey IPR 2018, Auditor-General 2025-26 |
| First Nations engagement and workforce | 6 | General Issues 2021, JSC Capability and Culture 2023, JSC Psychosocial 2017, JSC Workforce 2022, Tune Review 2019 |
| Conflicts of interest in support coordination | 4 | ANAO Board 2024-25, ANAO Daily Life 2022-23, JSC Capability and Culture 2023, Tune Review 2019 |
| Regulatory risk framework | 4 | ANAO Transition 2016, ANAO Board 2024-25, Auditor-General 2025-26, Ministerial SoE 2025 |
| Price guide and SIL pricing settings | 4 | McKinsey IPR 2018, JSC SIL 2020 |
| Active fraud detection | 4 | ANAO Fraud 2018-19, ANAO Daily Life 2022-23, JSC Capability and Culture 2023 |
| Future accommodation and support needs in planning | 3 | JSC SIL 2020, JSC Capability and Culture 2023 |
Where the recurrences sit
Recurring recommendations cluster inside the proactive-quality and workforce-design areas, consistent with these being the longest-running and largest-evidence root causes.
| Pain point tag | Recurring recs |
|---|---|
| PP03 (proactive quality) | 28 |
| PP_WORKFORCE | 27 |
| PP01 (R&N) | 18 |
| PP07 (complaints model) | 14 |
| PP02 (support coordination) | 11 |
| PP_SYSTEMS | 11 |
| PP05 (housing/SIL) | 10 |
| PP06 (default participant) | 9 |
Mapped to the consolidated five root causes, recurring recommendations sit overwhelmingly in RC3 (proactive quality) and RC2 (workforce design). That is the same evidence concentration the rest of this report develops.
What this tells us
The system is not short of recommendations. The same problems have been re-described in different language across multiple reviews, and the recurrence pattern confirms that earlier recommendations did not resolve the underlying issues. The cluster analysis is a quantitative companion to the qualitative finding the rest of this report develops: a decade of recommendations, four major reform vehicles, and the same five root causes still standing.
Methodology note. Recurrence is measured by TF-IDF cosine similarity (sublinear term frequency, L2 normalised) across the 676-recommendation corpus. Pair threshold 0.35 for any cross-review pair; cluster-forming threshold 0.40 via union-find. Royal Commission chapters and NDIS Review chapters are treated as single parent reviews to avoid double-counting within-document near-duplicates. Full method and data in Appendix D, with source files at research/policy-evidence/data/processed/recommendation_recurrence_clusters.json and research/policy-evidence/docs/recommendation_recurrence_analysis.md.
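The methodology above can be illustrated with a simplified sketch. This is not the report's pipeline: it collapses the two thresholds (0.35 pair, 0.40 cluster-forming) into the single cluster-forming threshold, uses whitespace tokenisation, and assumes a scikit-learn-style smoothed IDF; the example recommendation strings are invented for illustration.

```python
import math
from collections import Counter
from itertools import combinations

def tfidf_vectors(docs):
    """TF-IDF with sublinear term frequency (1 + log tf) and L2 normalisation."""
    tokenised = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter()
    for tokens in tokenised:
        df.update(set(tokens))
    # Smoothed IDF (scikit-learn convention): log((1 + n) / (1 + df)) + 1.
    idf = {t: math.log((1 + n) / (1 + d)) + 1 for t, d in df.items()}
    vectors = []
    for tokens in tokenised:
        tf = Counter(tokens)
        vec = {t: (1 + math.log(c)) * idf[t] for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in vec.values()))
        vectors.append({t: w / norm for t, w in vec.items()})
    return vectors

def cosine(u, v):
    # Vectors are already L2-normalised, so cosine similarity is the dot product.
    if len(v) < len(u):
        u, v = v, u
    return sum(w * v.get(t, 0.0) for t, w in u.items())

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def recurrence_clusters(docs, threshold=0.40):
    """Group documents whose pairwise similarity meets the cluster threshold."""
    vectors = tfidf_vectors(docs)
    uf = UnionFind(len(docs))
    for i, j in combinations(range(len(docs)), 2):
        if cosine(vectors[i], vectors[j]) >= threshold:
            uf.union(i, j)
    groups = {}
    for i in range(len(docs)):
        groups.setdefault(uf.find(i), []).append(i)
    # Only groups of two or more count as recurrence clusters.
    return [sorted(members) for members in groups.values() if len(members) > 1]

# Illustrative recommendation texts (hypothetical, not drawn from the corpus).
recs = [
    "improve market data collection and provider stewardship",
    "strengthen market data and stewardship of provider supply",
    "expand active fraud detection capability",
    "establish a national community visitor scheme",
]
print(recurrence_clusters(recs))  # → [[0, 1]]
```

The two reworded market-data recommendations land in one cluster; the unrelated items stay singletons and are excluded, mirroring how only cross-review near-matches count toward the recurrence figures.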
4.2 Reform exposure heatmap
The map below summarises where each major reform in flight at May 2026 touches the five root causes. “Addresses” means the reform substantively redesigns the underlying cause. “Partial” means it touches the cause or enables a future redesign without delivering one. A blank cell means the reform does not touch the cause.
Three observations follow from the heatmap.
First, no reform in flight is rated addresses for any of the five root causes. Every reform either touches a cause partially or does not touch it at all. The strongest single connection is Butler’s R3 (foundational supports) for RC1, and that connection depends on state funding agreements that have not yet been concluded.
Second, RC4 is the least covered. The intermediary spend cut and growth cap touch SIL economics indirectly, and mandatory registration adds compliance cost to the existing model. Nothing in the reform landscape redesigns the funding architecture or assigns housing responsibility.
Third, RC3 is touched by the largest number of reforms (Integrity Act, mandatory registration, digital payment, fraud detection, shortlisted providers, practice standards) but each of these touches it partially. No reform establishes proactive quality monitoring at the participant level.
4.3 Years unresolved
The five root causes have accumulated formal recommendations across the past six to ten years.
The two longest-running root causes are workforce design and proactive quality, both anchored in the ANAO Transition audits of 2016. Both remain unaddressed at May 2026. The two nine-year-old causes (R&N undefined, default participant) were first identified in the JSC Psychosocial 2017 inquiry and the Productivity Commission NDIS Costs Study respectively. RC4 is the most recent root cause but carries the highest single-issue concentration in the JCPAA submissions, with 47 of 51 submissions raising it.
A pattern is visible across the timing. Reforms have moved most often where they touch revenue protection or administrative efficiency. Reforms that would redesign roles, regulators or funding architecture have moved least often. The decade of accumulated recommendations is concentrated in the latter set.
4.4 How the root causes interact
The five root causes are presented separately in Section 3 because each has its own evidence base, design failure and reform exposure. They do not act independently in the system. Two relationships in particular shape what can be fixed and in what order.
RC2 and RC3 are entangled, not parallel.
The proactive quality system gap (RC3) is what shows up at the regulator and at the surface of the participant experience. The workforce design gap (RC2) is what produces it. Inside the support circle, no role is scoped to do quality checking. Outside the support circle, the regulator only acts after harm. The two are the same problem viewed from different angles.
A reform that addresses RC3 without redesigning the workforce roles that should be doing quality checking inside (RC2) will at best produce a parallel inspection regime; the role-level checking that prevents harm in the first place will still be absent. A reform that addresses RC2 without establishing proactive quality presence outside (RC3) will produce a redesigned workforce with no external check that the redesign is delivering. Either reform on its own leaves half the gap open.
The implication is sequencing-neutral but coupling-essential. RC2 and RC3 reforms should be designed together with shared accountability for outcomes, even if they ship at different times. Treating them as independent workstreams produces the pattern of the past decade: workforce strategies on one side, integrity legislation on the other, neither closing the participant safety gap.
RC4 sits upstream of RC3 in delivery sequence.
The proactive quality system that RC3 calls for must be delivered through providers. Where the provider economics are not viable (RC4), pushing harder on quality (RC3) accelerates exits rather than improving quality. The two are not in conflict, but they are sequenced.
Mandatory registration in July 2026 is the clearest current example. It lifts compliance benchmarks for SIL providers operating at thin or negative margins. Without addressing the underlying economics first, the same providers either exit, dilute quality across remaining capacity, or absorb cost until they cannot. Each outcome is poor for participants, and each is foreseeable.
The implication is that RC4 work should lead RC3 work for SIL and other settings where the cost model is identified as inadequate. This is not an argument against quality reform; it is an argument for redesigning the economics that make quality deliverable before increasing the price of failure. The 2023 Royal Commission and the 2023 NDIS Review both recognised this sequencing in their housing recommendations. The May 2026 reform set has not yet reflected it.
These two relationships shape every recommendation in Sections 8 and 9. Reforms designed RC by RC will produce the same partial pattern the evidence base already records. Reforms designed across the entanglements will not.
4.5 An alternative lens: the four-cause causal chain
The five-root-cause framing in Section 3 is the analytical anchor of this report. For some audiences a more compact narrative is useful, and the relationships set out in Section 4.4 can be re-presented as a four-cause causal chain. This section sets out that lens, names what it adds and what it leaves out, and runs a disconfirming test against it.
The chain.
The story reads as follows. Undefined obligation produces wrong workforce design. Wrong workforce design produces reactive-only quality. Economic mismatch sits in parallel and amplifies all three.
What CC1 absorbs. RC1 and RC5 are different mechanisms with a shared philosophical root. R&N undefined names the missing definition of what the system owes. The default-participant design names the missing definition of who the system owes it to. Together they say: the obligation has never been written. CC1 captures this combined gap in a single causal step.
Why CC3 is presented as downstream of CC2. If every NDIS role had been scoped to support participant outcomes with quality checking built in, the proactive-quality work that RC3 calls for would already be partly done inside the system. The reactive-only model is partly a consequence of having no internal role doing the work that proactive quality requires. Inside-out and outside-in are facets of the same gap.
Why CC4 is parallel rather than in-line. The economic mismatch is not produced by CC1 to CC2 to CC3. It is a separate design choice (individual funding applied to group economics). It compounds all three. Where economics are not viable, defining obligation has no purchase, redesigning workforce has no funding, and pushing harder on quality accelerates exits. Treating CC4 as parallel rather than sequential preserves the structural distinction.
Disconfirming test.
The test is the obvious one. If CC3 can be fixed without CC2 redesign, the chain collapses. If proactive quality is achievable through regulatory mandates alone, the workforce-as-cause claim does not hold.
The honest answer is partly yes. External quality mechanisms can move without workforce redesign. Community Visitor Schemes operate in several jurisdictions entirely outside the workforce. OPCAT-style independent inspection operates in detention and aged care without requiring those workforces to be redesigned first. Adult safeguarding functions and reportable conduct schemes are deliberately external to service delivery. The Disability Royal Commission’s six alternative mechanisms were proposed as an external system that could move independently of workforce reform.
External-only CC3 fixes are visible-but-partial. They observe institutional and group settings effectively. They cannot observe in-home individual support at the same density. The Royal Commission proposed the six mechanisms knowing they were insufficient on their own. The full proactive-quality picture requires both external presence and internal role-level checking. The chain therefore does not collapse, but it does soften. CC3 reforms can move without CC2, and they should, but neither alone delivers full proactive quality.
What the chain is good for.
The chain is useful when the audience benefits from a story rather than a list. An executive briefing has different needs from a parliamentary submission. A board paper that needs to land in five minutes can be carried by a chain in a way it cannot be carried by five independent diagnoses. Strategic communication benefits from sequenced framing.
What the chain leaves out.
The intersectional case that RC5 makes (First Nations under-access, CALD under-representation, gendered exclusion, LGBTQIA+ data not collected) does not dissolve cleanly into “obligation undefined”. It has its own evidence base, its own bolt-on critique and its own redesign call. CC1 tells the philosophical story; the practical implications still sit at the level of RC5.
The chain also implies a sequencing that the data does not fully support. The disconfirming test above shows that CC3 reforms can usefully move ahead of CC2.
How to use both framings.
The five-root-cause framing remains the anchor of this report. It maps directly to the evidence base, the recommendation register, and the JCPAA submissions. The four-cause causal chain is a complementary lens, useful for executive audiences, board papers and short-form communication where the relational point in Section 4.4 needs to land in a single image. Use the chain where a story serves better than a list; return to the five where the evidence needs to be defended.
5. State of the sector
This section sets out the financial and operational state of the NDIS sector at FY 2024-25, the most recent year for which substantive provider data is available. The picture is consistent with the design failures named in Section 3. The cost model has not been redesigned, the workforce has not been redesigned, and the providers the system relies on are absorbing the gaps.
For scheme-wide context, NDIA’s most recent published rolling-12-month total for scheme payments stood at $49.0 billion for the year ending 31 December 2025, up from $38.3 billion two years earlier (Q2 FY25/26 quarterly report). The discrete FY24/25 annual figure is not yet separately published in the form used in this report; based on the published trajectory, it would sit somewhere below $49 billion. Active participants reached 739,413 at the end of Q4 FY24/25. The registered provider count grew from approximately 17,800 in early 2023 to 21,387 by the end of October–December 2024, with around 1,400 to 1,800 new registrations per quarter and 600 to 900 deregistrations per quarter across the most recent year of available Commission data. The proportion of registered providers actually claiming against participant plans drifted down from 71.7 percent in January–March 2024 to 67.3 percent in October–December 2024. Registration is not the same as participation; an increasing share of registered providers are not active in the scheme.
5.1 Provider viability
The sample used here is a 150-provider longitudinal cohort drawn randomly from the publicly released 2021 NDIS provider payments list and matched to ACNC charity records to obtain FY 2024-25 financials. The cohort therefore represents the not-for-profit segment of the scheme. For-profit providers are not included because their financial data is not publicly available, which is itself a finding worth naming. The picture below describes the publicly reportable NFP segment, not the full provider universe.
Within the sample, the median provider revenue change from FY 2023-24 to FY 2024-25 was minus 7.9 percent. Forty-one providers (29.7 percent) grew revenue. Ninety-five (68.8 percent) declined. Two were flat. The aggregate cohort revenue change was minus 27.2 percent, but this figure is inflated by scope-mismatch reporting in four large group structures. The median is the more reliable read on the typical provider trajectory.
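The gap between the minus 7.9 percent median and the minus 27.2 percent aggregate is a standard small-cohort effect: a few large group structures with scope-mismatch reporting dominate the pooled total while leaving the typical provider's trajectory unchanged. A minimal sketch with made-up figures (not the actual cohort data) illustrates why the median is the more reliable read:

```python
# Illustrative only: synthetic revenue figures, not the actual cohort data.
# Ten small providers each decline about 8 percent; one large group structure
# shows a steep "decline" caused by a reporting scope change, not contraction.
prior = [10.0] * 10 + [500.0]    # FY 2023-24 revenue ($m)
current = [9.2] * 10 + [150.0]   # FY 2024-25 revenue ($m)

# Per-provider change, sorted so the middle value is the median.
changes = sorted((c - p) / p for p, c in zip(prior, current))
median_change = changes[len(changes) // 2]                    # typical provider
aggregate_change = (sum(current) - sum(prior)) / sum(prior)   # pooled total

print(f"median:    {median_change:+.1%}")     # tracks the typical provider
print(f"aggregate: {aggregate_change:+.1%}")  # dominated by the one large entity
```

Here the median stays at minus 8 percent while the aggregate collapses to roughly minus 60 percent on the strength of a single large entity, which is the same shape as the cohort result above.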
Across the same cohort, 42.8 percent of reporting providers ran at a loss in FY 2024-25, similar to the 41.5 percent loss rate in FY 2023-24. The weighted margin sat at minus 0.44 percent. The median margin was plus 0.95 percent, with the lower quartile at minus 3.7 percent. Roughly four in ten reporting providers ran at a loss. The median provider earned a margin under 1 percent. The pattern across two years is not improvement; it is steady erosion.
Twelve providers in the sample fell off the FY 2024-25 list. Five are real exits: two merged into other entities, one closed, one ceased NDIS services from August 2025, and one absorbed another sample entity through merger. The remaining seven are recorded as “overdue” or “not yet submitted”. The combined FY 2023-24 revenue of the twelve providers was approximately $670 million.
Concentration shifted within the cohort, with the top 10 share dropping from 44.8 percent to 39.8 percent and the top 25 from 65.8 percent to 62.4 percent. The shift is largely a reporting artefact and should not be read as structural deconcentration. Concentration trends across the full NDIS provider population cannot be answered from this dataset alone.
The not-for-profit visibility limit matters for any reading of these numbers. The publicly available financial data covers only NFP providers. The for-profit segment, which has grown materially over the past five years, sits outside this dataset because for-profit financials are not lodged publicly. Any sector-wide claim about provider viability from this dataset is therefore a claim about the visible portion of the sector, not the whole.
5.2 Life outcomes
Life outcomes data at scheme level is limited by the design of the NDIS Outcomes Framework. The framework relies on self-reported survey data, has no published targets, and has no causal link between supports received and outcomes achieved. The 2023 NDIS Review was the first major review to systematically name this measurement gap.
What is published is therefore a description of activity rather than a measurement of effect. Aggregate participant outcome scores can be reported without being linked to whether the supports received caused those outcomes or whether the scheme is achieving the purposes set out in section 3 of the NDIS Act. For this reason this report does not rely on published outcome scores as a measure of system performance.
The systemic finding is that the bar was never set. The NDIS Act sets out purposes rather than success criteria. The Productivity Commission’s 2011 report defined what was broken in the disability system that preceded the NDIS, but did not define what working would look like in measurable terms. A decade later, that gap is still open.
5.3 Mortality and harm
This section is necessarily brief. Mortality data published by the NDIA is incomplete, mortality reviews are inconsistent across jurisdictions, and there is no national disability death review mechanism. The Disability Royal Commission of 2023 documented the pattern in detail across multiple volumes and recommended a national mechanism. That mechanism has not been implemented.
Sector estimates suggest 70 to 85 percent of abuse experienced by people with disability goes unreported. The visible part of the harm is a fraction of the actual pattern. RC3 (no proactive quality system) is the design failure that produces this asymmetry. Until proactive monitoring is in place, mortality and harm data will continue to undercount the risks the scheme exists to manage.
Coronial systems do publish some data, and Supporting Potential maintains a working coronial case database for analytical purposes. That data sits behind this report but is not summarised in detail here, both because of the sensitivity of the underlying material and because the analysis is more useful in a private-briefing context than a published one.
5.4 Workforce condition
The workforce picture is structurally consistent with RC2. Behaviour support practitioners are in shortage, with multiple announced workforce strategies that have not changed the supply trajectory. Allied health professionals report difficulty operating within the NDIS pricing model and are exiting the scheme in unquantified but material numbers. Support coordinators face the same role-design issues that have been documented for eight years. Support workers operate without supervision built into the role.
Across the workforce, retention pressures are reported across multiple JCPAA submissions, including from peak bodies and providers. The pattern most often described is moral injury, where qualified workers leave because the system prevents them from doing the work they are trained to do. The workforce shortage is real, but as Section 3.2 sets out, it sits downstream of the design.
6. The reform landscape, May 2026
This section orients the reader to the reforms in flight at May 2026. The five root causes set out the problem; the reform landscape sets out the response.
6.1 What has passed
The Integrity and Safeguarding Act 2025 passed both Houses of Parliament on 1 April 2026 and received Royal Assent on 8 April 2026. Schedule 1 (enforcement) is in force. Schedule 2 (NDIA functions) is likely to be in force from 6 May 2026 (readers should verify against the Federal Register of Legislation). The Act introduces penalties of up to $15 million for corporations, expanded banning orders for individuals, auditors and consultants, anti-promotion orders to stop predatory marketing, and strengthened information-gathering and information-sharing powers. The Act sharpens what the regulator can do once harm has occurred. It does not establish proactive quality presence.
The Getting the NDIS Back on Track Act 2024 introduced statutory tools including support lists under section 10 and functional capacity assessment provisions. Section 10 support lists took effect from 3 October 2024. Plan duration extensions of up to three years for stable needs and the introduction of funding periods commenced concurrently. The 2024 Act provides the legal framework for several Butler reform propositions; substantive operational delivery is mostly still ahead.
Section 24(3) of the NDIS Act, recognising that conditions can fluctuate without ceasing to be permanent, has been operative since July 2022.
6.2 What is in flight
Minister Butler announced a 12-proposition reform package at the National Press Club on 22 April 2026. The package is the most consequential single set of reform announcements since the Disability Royal Commission and NDIS Review reported in late 2023. Its interaction with each root cause is mapped in Section 4.2.
| Reform | Target | Current status |
|---|---|---|
| R1. Functional capacity assessments | Replace diagnosis-based access | Tools and assessor workforce in development |
| R2. Approximately 160,000 participants moved off scheme | Tighten scheme boundary | Legislative framework via 2024 Act |
| R3. Foundational supports outside the NDIS | Receiving tier for participants moving off | Federal-state agreement pending |
| R4. Reassessment of all 760,000 current participants | Apply functional capacity test cohort-wide | Operational sequencing not yet specified |
| R5. Growth rate capped at 5 percent by 2030 | Constrain total cost growth | Successor to the 8 percent target |
| R6. Mandatory registration expansion | Lift compliance benchmarks | Commences 1 July 2026 for higher-risk activities including personal care |
| R7. Digital payment system | Pay-on-evidence integrity | Platform under design |
| R8. Third-party intermediary spend cut, approximately 30 percent | Plan management, support coordination | Mechanism not yet specified |
| R9. Shortlisted quality providers | Curated lists for participant selection | Quality signal not yet defined |
| R10. Fraud crackdown expansion | Detection, investigation, prosecution | Fraud Fusion Taskforce already operating |
| R11. NDIS and aged care interface | $1 billion investment, free personal care in aged care settings | Boundary reform |
| R12. Legislation introduced next sitting | Delivery vehicle | Sitting calendar pending |
Beyond the Butler package, additional reforms are in flight. The Navigator design is intended to replace Local Area Coordinators and parts of support coordination, and is in the design phase with a five-year transition. Registration of support coordinators was paused in December 2025. No detailed Navigator design framework has been published.
The Practice Standards refresh has completed SIL co-design work, which is feeding into the broader Practice Standards review. The Pricing Reform Program is ongoing under the Independent Health and Aged Care Pricing Authority following the 2024 Act.
6.3 What has been deferred or paused
Support coordination registration was paused in December 2025 pending Navigator design publication. Plan management reform was deferred and is now reframed by Butler’s 30 percent intermediary spend cut. Foundational supports were agreed in principle in the December 2023 National Cabinet communiqué, with a 50/50 federal-state funding split, but no state has yet committed funding, and the receiving commissioning architecture has not been designed.
6.4 The cumulative load
No single body is modelling the cumulative cost of concurrent reforms on providers. Providers are concurrently absorbing the Integrity and Safeguarding Act commencement, the 1 July 2026 mandatory registration expansion, pricing changes flowing from the IHACPA Pricing Reform Program, the functional capacity assessment rollout, the reassessment process, the digital payment system enrolment and the Practice Standards refresh.
The signal across multiple JCPAA submissions in 2026 is that this cumulative load is a board-level risk for many providers. No published assessment quantifies it at sector scale.
The Butler analysis splits the 12 propositions into three tiers of likelihood. The reforms most likely to ship are those that protect Commonwealth revenue and can be delivered without state agreement: R7 (digital payment), R10 (fraud), R12 (legislation) and R6 (mandatory registration). The reforms least likely to ship as announced are those requiring federal-state agreement and substantive design work: R3 (foundational supports), R4 (whole-of-scheme reassessment in the announced timeframe) and R8 (intermediary spend cut without role redesign).
The structural barriers are unchanged from the Shorten era. Federal-state agreement is required for the design fixes and has not been concluded. NDIA workforce capacity for mass reassessment does not currently exist. NDIS Commission capacity to regulate a multiplied registered provider population at the announced timeframe is not in place. The PACE planning system risk profile applies to the digital payment system. The political cycle and legislative attrition that broke the Shorten 2024 Bill into tranches still applies.
7. What this means by service type
Each segment of the provider sector encounters a different combination of the five root causes. This section maps the compound exposure for six common service types.
7.1 SIL and supported accommodation
RC4 is the core issue. The individual funding model does not work for group economics, and 47 of the 51 JCPAA submissions raise this directly. Vacancy risk sits 100 percent on the provider. Pricing does not match SCHADS-driven cost growth. The housing supply gap is not the provider’s responsibility, but providers absorb it through head leasing arrangements they were not designed to hold.
RC2 is the secondary exposure. Quality at the participant level depends on the BSP being well-written, the auditor being able to recognise quality in disability services, and the registered provider having the workforce capability to implement consistently. The chain has no single point of accountability across roles.
RC3 affects SIL providers most when participants in their settings are exposed to harm that the complaints model does not surface. Higher-risk settings are where the proactive monitoring gap matters most, and SIL houses are among the most concentrated such settings in the scheme.
The 1 July 2026 mandatory registration adds compliance cost to a model already operating at thin or negative margins (Section 5.1). Butler’s R8 (30 percent intermediary spend cut) and R5 (5 percent growth cap) do not address SIL economics directly but constrain the surrounding financial envelope. The structural redesign that would address RC4 is not in the published reform set.
7.2 Support coordination and plan management
RC2 is the core issue. The role was designed without scope, qualifications, interfaces or a defined position in the support circle. The Navigator model is in design with a five-year transition. No design framework has been published showing how the original design failures will be avoided.
Butler’s R8 cuts third-party intermediary spend by approximately 30 percent. For plan management and support coordination, this means doing the same activity with less resource on a design that is already inadequate. Multiple JCPAA submissions identified plan management as a design problem before R8 was announced; the cut does not redesign the role.
For support coordinators specifically, registration was paused in December 2025 pending Navigator design. The Navigator transition will reshape or replace the service over five years. When the design is published, it can usefully be checked against the original eight design failures (scope, qualifications, independence, position, measurement, capacity building, exit, boundaries). If the design does not address each, the same problems will recur under a different name.
7.3 Allied health and therapy
RC1 is the daily reality. “Reasonable and necessary” decisions are made by NDIA planners without required clinical or disability expertise. Tribunal data (Administrative Appeals Tribunal, now the Administrative Review Tribunal) shows 76 percent of appealed decisions are set aside or varied. Allied health professional evidence is routinely discounted by planners.
RC2 affects allied health pricing specifically. The pricing model does not cover supervision, training, travel and non-billable coordination. Allied health is regulated externally by AHPRA but treated by the scheme as a “support provider” rather than a clinician whose advice is the primary evidence. The pricing-compliance-viability feedback loop named in three JCPAA submissions affects allied health directly.
Butler’s R1 (functional capacity assessments) adds workforce demand without an established workforce. The mass reassessment of 760,000 participants depends on functional capacity assessors who do not currently exist within the NDIA at the required scale. Whether this is delivered through NDIA hiring or outsourced to allied health remains unclear at May 2026.
7.4 Behaviour support
RC2 names the specific design failure. Funding is tied to plan production rather than to improvements in behavioural responses and reductions in restrictive practices. Sixty-eight percent of participants have not been consulted in the development of their behaviour support plans. The interface with treating clinicians is undefined.
RC3 affects behaviour support implementation. Plans are written by behaviour support practitioners but implemented by support workers, often without supervision built into the support worker role. The participant’s actual outcome depends on the implementation chain that no role currently owns end-to-end.
Butler’s reform package does not specify a redesigned behaviour support funding model. R6 (mandatory registration expansion) lifts compliance benchmarks for SIL providers implementing BSPs. It does not change what BSP funding pays for.
7.5 Remote, regional and CALD-focused providers
RC5 is the structural barrier. The system was designed for an urban, English-speaking, individually-deciding, self-advocating participant. Participants in remote, regional or CALD communities do not match those assumptions. Fly-in fly-out models do not deliver outcomes. Translated materials do not address eligibility processes, planning conversations or service delivery models.
RC2 affects these providers because thin markets concentrate market power in support coordinators who become gatekeepers rather than enablers. Allied health workforce shortages are most acute in remote and regional settings.
The compounded effect of the May 2026 reforms falls disproportionately on small and regional providers. Mandatory registration adds cost on a small revenue base; compliance reporting lands on a thin workforce. The 2023 NDIS Review’s Recommendation 14 on alternative commissioning for First Nations participants and remote communities remains in implementation phase. As at May 2026 no detailed framework has been published.
7.6 All providers, cross-cutting
Mandatory registration from 1 July 2026 lifts compliance benchmarks for the registered roles. The Integrity and Safeguarding Act 2025 expands the regulator’s post-harm enforcement powers. Both add cost to operating in the scheme. Neither redesigns the underlying model.
For providers operating at the median margin of under 1 percent, the cumulative cost of the reforms in flight (Section 6.4) can exceed the annual margin. This is a board-level risk, not an operational task. Boards that have not stress-tested cumulative reform exposure should do so before the July 2026 commencement.
The signal in the JCPAA submissions is that the sector has been clear with parliament about what is coming. Whether parliament reflects that clarity in the committee report due mid-2026 will determine the next phase of the reform sequence.
8. What government and peak bodies should do
The largest levers sit with government. The five root causes are design choices and design failures. Only the system designer can redesign the system. Beyond that, two further observations shape how this section reads.
The first is that government and providers have a productive working relationship available to them that is not currently being used productively. Both sides hold legitimate goals. Government has a mandate for scheme sustainability and participant safety. Providers have a need for financial viability and practice quality. These goals are not in conflict and should reinforce each other. The mechanisms each side currently uses to pursue them are creating unintended consequences for the other. Government tightens pricing, centralises decisions and increases compliance, which shifts the cost of delivery onto providers and reads provider responses as gaming. Providers paper-comply, reduce caseload complexity or exit, which shifts the cost of unmet need onto government and reads government action as hostile. Each side’s rational response escalates the dynamic. The reforms that will work are reforms that change the mechanisms rather than tighten the existing ones.
The second is that the levers available divide into three domains. Some can only be pulled by government. Some can only be pulled by providers. Some require both sides to act together. This section treats the three government and shared domains in turn; Section 9 covers what providers can do without waiting.
8.1 For government
One priority sits above the five that follow.
Set the goal, track implementation, measure effect, and adjust when necessary. A scheme of this scale, cost and consequence requires a published statement of what success looks like, an implementation tracking framework against published commitments, an outcome measurement framework that links supports to participant life quality, and a governance discipline that allows the Commonwealth and states to stop and adjust when the evidence shows a reform is not working as intended. The NDIS has had multiple reform announcements over the past decade. What it has not had is a public, durable answer to the question “how will we know this worked, and what will we do if it didn’t”. This is the work that sits above any individual root cause priority and makes the rest of them auditable.
The single piece of work that would make this discipline possible is a published value ledger for the scheme. The Productivity Commission’s 2011 case for the NDIS argued the scheme would partly pay for itself through avoided costs in health, justice, housing and income support, and through workforce participation gains for people with disability and their carers. That premise has not been tested against actual data in the years since. Treasury, the Productivity Commission and the Parliamentary Budget Office all have the methodological capacity to produce an annual fiscal return report for the scheme. What is missing is the mandate. Commissioning that report is the most consequential single decision available to the Commonwealth, because it changes the terms of every subsequent reform debate. Cost is currently the only measured driver. Adding fiscal return and outcome return as named, measured drivers does not require new analytical methodology; it requires a decision to do the work that was implied at the scheme’s design and never operationalised.
Beneath that, five priorities, one per root cause. Practical, not aspirational. Each priority names what only government can do.
For RC1, define reasonable and necessary at the operational level. The Act provides criteria, the Operational Guidelines paraphrase them, and the planner applies them. The missing layer is a published method that translates Act criteria into consistent planner decisions, supported by a feedback loop using tribunal overturn rates as a quality signal. Functional capacity assessments alone do not deliver this layer. A complementary lever is to restore weight to clinical evidence inside the planning process. Centralised assessment tools have replaced clinical judgment in several places where the participant’s circumstances cannot be captured in standardised inputs. Returning weight to clinical evidence, where it is consistent with the rest of the planning approach, is the practical reform most likely to reduce the appeal volume that currently sits in the tribunal.
For RC2, publish the support circle architecture. Specify how each role connects, what each role owes the participant, and where one role’s accountability ends and another’s begins. The Navigator design is the most immediate vehicle. If it is published without a support circle architecture, minimum qualifications, defined interfaces and outcome measures, the new role will inherit the same pattern. Pair this with a workforce pipeline investment: pricing that supports competitive wages, supervision built into role design, and career pathways that allow people to develop and remain in the workforce sustainably. Workforce design and workforce capability are different projects; both need to land.
For RC3, establish proactive quality presence. Implement the Disability Royal Commission’s six alternative mechanisms at national scale, namely adult safeguarding functions, one-stop-shop reporting with warm referrals, OPCAT-style independent inspection, a national community visitor scheme with defined enforcement consequences, a national disability death review mechanism, and a reportable conduct scheme. Fund welfare visitor positions independent of providers. Mandate scheduled Commission contact with participants in higher-risk settings. The Integrity Act sharpens post-harm enforcement; it does not establish pre-harm presence.
For RC4, redesign the housing and supports funding model and assign housing responsibility. Choose between group economics with appropriate funding or individual choice with appropriate housing supply. The current bundling has not delivered either. Pair this with two structural reforms that the underlying evidence base specifically calls for. The first is an independent pricing authority that separates price-setting from cost-controlling, so pricing can reflect the actual cost of compliant delivery rather than a budget target. The second is a market stewardship function with early warning triggers, so provider withdrawal is detected and acted on before participant access collapses. Mandatory registration in July 2026 will accelerate provider exits in this segment unless the underlying economics are addressed.
For RC5, redesign from the margins rather than bolting on group-specific strategies. Mandate intersectional data collection. Co-design alternative commissioning with First Nations communities, CALD organisations, women’s organisations and LGBTQIA+ organisations. Publish disaggregated outcomes data by cohort, geography and disability type, so the populations the scheme has not been designed for become visible in reporting. Conduct the gender-responsive review the 2023 NDIS Review’s submissions process called for.
Two cross-cutting government levers complete the picture. Procedural fairness in payment decisions addresses the mechanism that currently destroys compliant providers without due process. Reasons, notice and appeal pathways for section 45 rejections and payment locks would stop the most acute single trigger of provider exit and would reduce the adversarial escalation that the current system rewards. Reform sequencing with transition support addresses the cumulative load named in Section 6.4. A cumulative impact assessment before each reform wave would allow providers to absorb and implement each change before the next arrives, and would allow government to detect and adjust where a reform is not working.
These priorities are not new. Each maps to recommendations made multiple times across multiple reviews over the past decade. The task is implementation, not discovery.
8.2 For peak bodies
The consolidation opportunity is to stop engaging with reforms one at a time. Peak body advocacy is currently distributed across multiple concurrent reforms, including registration, practice standards, integrity legislation, Navigator and pricing. Each campaign mobilises the same evidence base and the same membership.
Framing each campaign around the underlying root cause it addresses or fails to address shifts the conversation upward. A campaign on registration alone is engaging with a symptom; a campaign on workforce design (RC2) is engaging with the root cause. The mid-2026 JCPAA committee report and the parliamentary cycle that follows offer the strongest single window for coordinated cross-peak-body advocacy in years.
A practical step is cross-peak-body coordination on the five root causes, with each peak body taking primary carriage of one root cause for its membership and supporting the other four. The current distributed approach produces simultaneous campaigns that absorb sector capacity without aggregating influence. The five-root-cause framing offers a coordination structure that does not require any peak body to set its own priorities aside.
Beyond coordination, peak bodies hold three further levers that are underused. The first is evidence aggregation. Provider-level evidence of value (avoided hospitalisations, behaviour support reduction, supported employment outcomes, community inclusion) currently sits inside individual provider submissions and individual services. No mechanism aggregates that evidence into a learning system the scheme can use. Peak bodies are well positioned to do that aggregation work, both for advocacy and to demonstrate to government what works at provider level.
The second is acting as the structural bridge to shared levers. Real-time data sharing, co-designed practice standards, dispute resolution pathways and workforce pipeline co-investment all require provider-sector engagement at peak level rather than at individual provider level. Peak bodies are the structural counterpart for those conversations.
The third is sector-wide trust building. The dynamic between providers and government described in the section opening is not productive for either side. Peak bodies that publicly acknowledge where provider practice is poor and call for the changes that would address it have far more standing than peak bodies that defend the existing state. A joint accountability statement with government, of the kind several peak bodies have already proposed, would change the terms of the conversation.
8.3 For the JCPAA committee
The Joint Committee of Public Accounts and Audit is a parliamentary oversight committee, not a policy review body. It can compel government responses. Its report, due mid-2026, is an opportunity to shift the framing from “another review” to “implementation of the five things already documented for a decade”.
The 51 public submissions and 451 issues raised tell the committee the same story the 676 formal recommendations told. The signal is consistent across submitter type, geographic location and service segment. The next phase of work is implementation discipline rather than further inquiry.
The single most consequential set of recommendations the committee could make is to commission the work that closes the measurement gap at scheme level. Four asks make up that work, listed in order of dependency.
Ask one. A published insurance ledger for the NDIS. An annual fiscal return report that tests the 2011 Productivity Commission premise against actual data, with five components: health system avoided costs (hospitalisations, emergency department presentations, mental health crisis admissions); justice system avoided costs (incarceration, remand, court contacts); housing and homelessness avoided costs; workforce participation gains for participants and carers; and income support interaction. This ask requires no new methodology. Treasury, the Productivity Commission and the Parliamentary Budget Office all have the capacity. The first report could be commissioned within an 18-month window and updated annually. This is the ask the committee can act on immediately.
Ask two. A redesigned NDIS Outcomes Framework with targets, benchmarks and causal links. The current framework has no targets, no benchmarks, no causal link between supports received and outcomes achieved, and no cohort disaggregation. A redesign should define what good looks like for each support type, set measurable benchmarks, track the causal link, and publish disaggregated results.
Ask three. Aggregation of proof-of-concept services into a learning system. Provider-level evidence of the insurance return already exists in isolated services. An aggregation mechanism would identify services producing measurable return, independently verify the numbers, and feed them into policy and pricing decisions as operational learning rather than as anecdotes.
Ask four. Named accountability for closing the measurement gap. Without this, the first three can be commissioned, produced and ignored, in the same way that a substantial share of ANAO recommendations have been recycled or stalled. A single body should own the value ledger, be publicly accountable for it, and report annually to the committee.
A complementary procedural recommendation is a published implementation timeline against the recommendations already accepted by government, with a six-monthly progress audit by ANAO. This requires no further evidence, no further inquiry, and no further design work. It requires a decision that the sequence of “review, accept, delay, re-review” has produced enough cycles.
8.4 Where government and providers must act together
Beyond what each side can do alone, six levers can only be pulled by government and providers acting together. Each addresses the dynamic that has produced the past decade’s accumulating recommendations.
Real-time data sharing. Plan managers already hold purchasing behaviour, billing patterns and financial anomaly data. Connecting that intelligence to NDIA and Commission data creates a proactive monitoring infrastructure that neither side can build alone. Government must build the architecture; providers must share the data honestly.
Co-designed practice standards. Current standards were designed without operational input from the providers that implement them. Standards that acknowledge real-world constraints (pricing, workforce, complexity) are more likely to drive practice improvement than paper compliance. Government sets the standard; providers bring operational reality.
Direct dispute resolution pathways. There is currently no mechanism for direct conversation to resolve provider-NDIA disputes. Every disagreement escalates to formal review or the tribunal, costing both sides. A dispute resolution pathway would resolve a meaningful proportion of disputes earlier, more cheaply, and without the adversarial escalation that the current system rewards.
Outcome measurement frameworks. Neither side currently has agreed metrics for quality beyond compliance. Outcome-based measurement gives government evidence for value and gives providers a way to demonstrate worth that is not just tick-box audit. Government must accept outcomes over process; providers must invest in measurement.
Workforce pipeline co-investment. Government controls pricing, which caps wages. Providers control workplace culture and supervision. Neither can solve the workforce position alone. A joint workforce strategy with shared accountability for recruitment, training, retention and career pathways would address what individual reforms have not.
Trust-building mechanisms. Joint accountability statements and transparent error acknowledgment on both sides change the dynamic that has produced the past decade’s pattern. Both sides need to acknowledge their own contributions to dysfunction. Government must acknowledge administrative harm; providers must acknowledge where practice is poor.
These six levers do not replace what each side does alone. They make what each side does alone more effective. The reforms most likely to land in the next parliamentary cycle are the reforms designed across all three domains, not within any one of them.
9. What providers can still do
The largest design changes sit with government, but the quality of service participants receive today is shaped by what providers do, not only by what government has yet to fix. The five root causes remain unfixed at the system level. Within those constraints, providers retain meaningful agency over the quality of service their participants receive. This section sets out what that agency looks like in practice, drawing on the cause-and-effect evidence base and on what providers currently demonstrating measurable value are doing differently.
The frame is straightforward. The reforms that will eventually land are reforms that change the mechanisms by which the scheme operates rather than tighten the existing ones. Providers cannot wait for government to redesign those mechanisms before activating quality at provider level. The actions below are entirely within provider control. They build resilience against policy volatility, reduce incidents and regulatory burden, and generate the outcome evidence that becomes the strongest single argument for policy change. The provider that builds for adaptability rather than for compliance perfection absorbs each reform wave as a paperwork exercise rather than an operational overhaul.
9.1 Document your actual cost of delivery
Line item by line item, against the price guide. The pricing model gap is the evidence base for any future pricing reform conversation. Providers who hold the data will shape the response. Providers who do not will absorb whatever comes.
Include direct support hours, supervision, training, travel, non-billable coordination, incident management, compliance reporting, insurance, and the cost of implementing each concurrent reform. The gap between what you are paid and what it costs to deliver safely and compliantly is your evidence. The signal across multiple JCPAA submissions in 2026 is that this gap is widely felt; what most providers do not yet have is the data to quantify it at their own organisation.
Three measurement moves convert the cost picture from a compliance frame into a value frame.
The first is workforce stability as a financial return. Turnover is expensive, and the cost is currently invisible in most provider profit-and-loss statements. Four metrics from payroll and HR systems make it visible: twelve-month rolling staff turnover rate disaggregated by role type; cost per replacement (recruitment, onboarding, training, productivity ramp-up); days to fill vacancies and days of unfilled shifts covered by agency or overtime; and participant-reported continuity of support. The combined cost of churn, expressed in dollars per participant per year or per funded hour, is almost always larger than providers expect. Many organisations discover that a 10 percent reduction in turnover would release more budget than any pricing round would. Retention is not an HR line. It is a return-on-investment line, and it should be presented to the board alongside revenue and compliance cost.
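The churn arithmetic behind that claim can be sketched in a few lines. Every figure below (headcount, turnover rate, replacement cost, participant count, funded hours) is a hypothetical placeholder, not a benchmark; substitute values from your own payroll and HR systems.

```python
# Illustrative churn-cost arithmetic. All inputs are hypothetical placeholders.

def annual_churn_cost(headcount, turnover_rate, cost_per_replacement):
    """Total annual cost of staff churn: leavers x fully loaded replacement cost."""
    leavers = headcount * turnover_rate
    return leavers * cost_per_replacement

def churn_cost_per_unit(total_churn_cost, participants, funded_hours):
    """Express churn cost per participant per year and per funded hour."""
    return total_churn_cost / participants, total_churn_cost / funded_hours

# Hypothetical service: 120 staff, 28% annual turnover, $12,000 to recruit,
# onboard, train and ramp up each replacement.
total = annual_churn_cost(120, 0.28, 12_000)  # approximately 403,200
per_participant, per_hour = churn_cost_per_unit(total, 300, 250_000)
print(f"Annual churn cost: ${total:,.0f}")
print(f"Per participant: ${per_participant:,.0f}; per funded hour: ${per_hour:.2f}")
```

Expressed this way, the churn figure sits naturally beside revenue and compliance cost in a board paper, which is the point of the return-on-investment framing above.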
The second is direct care hours as a ratio rather than a total. An hour of admin crowds out an hour of care, and the ratio is invisible in most provider reporting. Three metrics from rosters and timesheets make it visible: direct contact hours as a percentage of total paid hours, by team and by service type; documentation hours as a percentage of total paid hours, separated into claims-required, compliance-required and internal; and meeting hours for front-line staff as a percentage of total paid hours. When the ratio moves in the wrong direction, it is a leading indicator of quality decline and workforce burnout. The data exists in every roster system. Reporting it quarterly to the board changes the conversation about workforce and compliance from a cost discussion to a productivity discussion.
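The ratio calculation can be sketched directly from roster exports. The rows, team names and category labels below are invented for illustration; the real input is whatever shift categorisation your roster system already records.

```python
# Illustrative direct-care ratio from roster data; rows and labels are hypothetical.
from collections import defaultdict

shifts = [
    # (team, category, paid_hours) — category is one of:
    # "direct", "claims_doc", "compliance_doc", "internal_doc", "meetings"
    ("team_a", "direct", 620.0),
    ("team_a", "claims_doc", 90.0),
    ("team_a", "compliance_doc", 70.0),
    ("team_a", "internal_doc", 40.0),
    ("team_a", "meetings", 30.0),
]

totals = defaultdict(float)
for _, category, hours in shifts:
    totals[category] += hours

paid = sum(totals.values())
direct_pct = 100 * totals["direct"] / paid
doc_pct = 100 * (totals["claims_doc"] + totals["compliance_doc"]
                 + totals["internal_doc"]) / paid
meeting_pct = 100 * totals["meetings"] / paid
print(f"Direct contact: {direct_pct:.1f}% | Documentation: {doc_pct:.1f}% "
      f"| Meetings: {meeting_pct:.1f}%")
```

Tracking these three percentages per team per quarter is what turns the ratio into the leading indicator described above.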
The third is avoided crisis events and hospitalisations. This is where the scheme-level insurance return becomes visible at the service level, and where a service’s value to other parts of government is measurable. Four metrics from incident logs and health liaison records make it visible: participant-level hospitalisation rates before and after commencing with the service, where the data can be obtained with consent; emergency department and crisis service use; preventive interventions recorded; and “near miss” incidents where a preventive action stopped an escalation. Providers who measure this become proof points for the case that quality practice is financially sustainable.
A complementary discipline is to stop measuring activity that produces no learning. The number of policies reviewed, staff trained, audits passed, incidents reported and participants in service are activity measures. They tell the board what happened. They do not tell the board what changed for the people supported. Report them where funders require. Do not let them occupy the space in the board report that should be occupied by the value measures above.
9.2 Map your exposure to concurrent reforms
For each reform listed in Section 6.2, document what it requires you to change, what the change costs, and when it takes effect. Registration. Practice standards. NDIS Act amendments. Getting Back on Track Act provisions. Functional capacity assessments. Reassessment process. Navigator design. Integrity and Safeguarding Act. Section 10 support lists.
If the cumulative compliance cost across these reforms exceeds your annual margin, that is a board-level risk. The decision is not whether to comply; it is whether the organisation can sustain compliance on the existing revenue base, and if not, what the alternative is. Providers who arrive at that decision after July 2026 commencement will be navigating it under operational pressure. Providers who arrive at it before commencement can plan deliberately.
Two adjacent disciplines change how reform exposure is absorbed.
The first is to build flexible systems rather than rigid ones. Reform absorptive capacity is itself a design choice. Providers who build operational systems around participant-specific protocols, role-clear workforce design and quality data are providers whose response to a new reform is to update one or two documents and continue operating. Providers who build operational systems around compliance perfection are providers whose response to a new reform is a multi-month project. The first approach scales with reform pace. The second does not.
The second is to operationalise risk profiling at provider level. Map how minor risks roll up to systemic ones, and clarify who needs to know what and when. Reviews have recommended for a decade that the NDIS Commission build this capacity at scheme level; it has not yet done so, and providers do not have to wait. An internal risk profile that connects board governance to frontline signals lets the board see its own risk landscape rather than waiting for a Commission notice. It is also the foundation on which any future regulatory conversation about the provider’s risk maturity will be conducted.
9.3 Build your evidence position
The mid-2026 JCPAA committee report, the Navigator design publication, the July 2026 mandatory registration commencement and the Integrity and Safeguarding Act enforcement schedule are each opportunities to provide evidence, respond to consultation, or engage with peak body advocacy. Providers who can articulate what the system needs, with data rather than complaints, will be positioned to influence. Providers who wait for the system to fix itself will absorb whatever comes.
The articulation matters. “Pricing is too low” is a complaint. “Our cost of delivering one hour of two-to-one personal care, including supervision, training, travel and non-billable coordination, is X dollars; the price guide pays Y; the gap is Z; we have absorbed it for N years” is evidence.
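The evidence sentence can be generated mechanically once the cost build-up exists. The cost components and dollar figures below are hypothetical placeholders, used only to show the shape of the calculation.

```python
# Turning the pricing-gap claim into the evidence sentence.
# All component names and dollar figures are hypothetical placeholders.

def pricing_gap_statement(support_type, cost_components, price_paid, years_absorbed):
    """Build the evidence sentence from a fully loaded hourly cost build-up."""
    cost = sum(cost_components.values())
    gap = cost - price_paid
    return (f"Our cost of delivering one hour of {support_type}, including "
            f"{', '.join(cost_components)}, is ${cost:.2f}; the price guide pays "
            f"${price_paid:.2f}; the gap is ${gap:.2f}; we have absorbed it for "
            f"{years_absorbed} years.")

components = {  # hypothetical hourly cost build-up
    "direct support wages": 52.40,
    "supervision": 6.10,
    "training": 3.20,
    "travel": 2.80,
    "non-billable coordination": 4.50,
}
print(pricing_gap_statement("two-to-one personal care", components, 65.47, 4))
```

The value of the exercise is not the sentence; it is that producing the sentence forces the line-item cost build-up that Section 9.1 describes.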
The wider opportunity is to build a value ledger inside the organisation that mirrors the structure of what should exist at scheme level. Three drivers translate from scheme to service: cost (unit cost of delivery, overhead ratio, award compliance, compliance burden); financial return (workforce stability, direct care hours, avoided crises, retention); and outcome return (incidents, complaints, cohort outcomes). The internal value ledger is not the equivalent of the scheme-level ledger described in Section 8.3; it is the operational counterpart that makes the same conversation possible inside the organisation.
Two further measurement moves complete the evidence picture.
Complaint-to-action time as a safety signal. Complaint data is currently treated as reputational risk. It is also a live indicator of whether the service is listening. Four metrics from the complaints register make this visible: median and 90th percentile time from complaint received to first response; median and 90th percentile time to resolution visible to the complainant; proportion of complaints where the complainant was asked whether they felt the response was adequate; and recurrence rate for the same complaint type within 90 days. Whether the service is using complaints as a safety learning system or as a risk containment process is invisible in the file but visible in those four metrics. At scheme level, complaint volume has grown twenty-fold in five years. At service level, response cadence and recurrence rate are more useful signals than volume itself.
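The cadence and recurrence metrics can be computed from any complaints register that records dates and a complaint type. The four rows below are invented examples; the field layout is an assumption, not a prescribed schema.

```python
# Illustrative complaint-cadence metrics; the register rows are hypothetical.
from statistics import median, quantiles
from datetime import date

register = [
    # (received, first_response, resolved, complaint_type)
    (date(2026, 1, 5),  date(2026, 1, 7),  date(2026, 1, 20), "roster change"),
    (date(2026, 1, 12), date(2026, 1, 12), date(2026, 2, 2),  "billing"),
    (date(2026, 2, 3),  date(2026, 2, 10), date(2026, 3, 1),  "roster change"),
    (date(2026, 2, 20), date(2026, 2, 21), date(2026, 2, 28), "staff conduct"),
]

response_days = [(fr - rc).days for rc, fr, _, _ in register]
resolution_days = [(rs - rc).days for rc, _, rs, _ in register]

print("median response:", median(response_days), "days")
print("p90 resolution:", quantiles(resolution_days, n=10)[-1], "days")

# Recurrence: same complaint type received again within 90 days.
recurrences = sum(
    1 for i, (rc_i, _, _, t_i) in enumerate(register)
    for rc_j, _, _, t_j in register[i + 1:]
    if t_j == t_i and 0 < (rc_j - rc_i).days <= 90
)
print("recurrences within 90 days:", recurrences)
```

Reporting the 90th percentile alongside the median matters: the median can look healthy while a tail of slow responses carries most of the risk.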
Cohort outcomes as an equity audit. Scheme averages hide the populations the scheme was never designed for. A provider that tracks cohort outcomes is ahead of where the regulator currently is. Five metrics, disaggregated by First Nations status, cultural and linguistic background, primary disability, age band, gender and geographic band, make this visible: wait time from referral to first service contact; plan utilisation rates where visible; incident rates per 1,000 support hours; complaint rates per 100 active participants; and participant feedback scores where collected. Whether the service is replicating the structural exclusions named in RC5 or working against them becomes visible in the patterns. This data is almost always already in the system and almost never looked at this way.
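The disaggregation itself is a small grouping exercise over data most providers already hold. The rows below are invented, and the single disaggregation dimension shown (First Nations status) stands in for the full set named above.

```python
# Illustrative cohort disaggregation; rows and cohort labels are hypothetical.
from collections import defaultdict

records = [  # (cohort, support_hours, incident_count)
    ("First Nations", 4_200, 9),
    ("First Nations", 1_800, 5),
    ("Non-Indigenous", 22_000, 31),
    ("Non-Indigenous", 9_500, 12),
]

hours = defaultdict(float)
counts = defaultdict(int)
for cohort, h, n in records:
    hours[cohort] += h
    counts[cohort] += n

for cohort in sorted(hours):
    rate = 1000 * counts[cohort] / hours[cohort]  # incidents per 1,000 support hours
    print(f"{cohort}: {rate:.2f} incidents per 1,000 support hours")
```

The same grouping pattern applies to complaint rates per 100 active participants and wait times per cohort; the point of normalising by hours or headcount is that raw counts always favour the larger cohort.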
Together these moves produce a one-page board paper that gives the board a different conversation. Cost has been the only frame for years. Cost beside the two return sides, refreshed quarterly, is the conversation the scheme has been unable to have at parliamentary level for a decade. Providers who give their boards that conversation are providers who are positioned to participate in the wider conversation when it eventually starts.
9.4 Build quality from the inside
Within the support circle the provider controls, providers can architect what the system has not. The simplest test is the participant’s own experience on a Tuesday afternoon. Does the support worker know the behaviour support plan and have someone they can call when a routine breaks down? Does the allied health professional’s recommendation make it into the daily routine, or does it sit in a folder? When the participant changes, who notices, and who acts? When something goes wrong, who is accountable for noticing it, and who is accountable for fixing it? These are questions providers can answer for their own participants without waiting for system redesign.
Six concrete mechanisms turn that test into operational reality.
Participant-specific protocols. A team-built description of what good support looks like for this person. Generic shift notes, one-size-fits-all support and broken feedback loops all trace back to the same root: there is no shared description of the participant that the team has produced together. With a participant-specific protocol, support is differentiated, outcomes improve, incidents reduce. Without it, the team is supporting a category of participant rather than a person. This is the foundation everything else sits on.
Internal pattern recognition. Most providers can record incidents and review them. Far fewer can zoom out to see what their own data is telling them. The discipline is to follow the thread from one participant, to a site, to a system, and prioritise by risk. A near-miss in one house may be a pattern across three. A complaint from one family may be a structural feature of a process. Pattern recognition turns incident data into operational intelligence; operational intelligence turns reactive response into preventive action.
Work practice over policy. Policies protect the organisation legally. Work practices guide the worker operationally. Providers who only have policies have compliance documentation, not operational direction. Operational guidance tailored to the workforce (in plain language, specific to the staff cohort, describing what to actually do when the situation arises) is what produces consistent practice. The format matters. A policy designed for an auditor is not a policy that supports a support worker at 9pm on a Saturday.
Workforce culture and supervision. Supervision built into the support worker role as a scheduled, paid activity rather than something that happens when a manager has time. Near-miss reporting cultures that do not punish the report. Supported entry pathways for early-career staff. These are the disciplines that retain the workforce and that make pattern recognition possible. Workers who can see how their work fits, who have supervision to fall back on, and who are part of a team that talks about the participant rather than around them are workers who stay. Workers who are alone in someone’s home with a plan they have not seen and no one to call are workers who leave, and the participants they support feel both the loss of continuity and the loss of trust.
Multidisciplinary case review on a defined cadence. Behaviour support, allied health, support workers and accommodation staff looking at the participant together, not in sequence. A short structured forum that asks the same questions every time: what changed since last review; what is the current risk picture; what is the next intervention; who owns it; how will we know it worked. Where the team holds these forums consistently, the documentation that flows out is what the regulator has been asking for. Where the team does not, the documentation is reconstructed retrospectively to satisfy audit, and the practice underneath remains opaque.
Outcome measures the participant has agreed matter. Activity measures (hours billed, plans produced, audits passed) are visible in every provider’s data. Outcome measures (whether the participant is sleeping more soundly, whether they are seeing more people, whether their restrictive practices have reduced) often are not. The discipline is to ask the participant what good would look like for them, write it down in a way the team can see, and report against it on the same cadence as the activity measures. Outcome measurement at provider level is the operational analogue of the scheme-level Outcomes Framework redesign that government has not yet undertaken; the providers that do it now are the providers that will set the benchmark for the redesign when it eventually happens.
The human dimension matters because quality is delivered by people. Building quality from the inside is at least partly a workforce retention strategy, and the providers who treat it that way will hold their workforce through the reform turbulence ahead.
This is not a substitute for system redesign. It is the difference between a participant’s actual experience today and a worse one. Providers who do this work consistently across years will also be the providers most able to demonstrate what good looks like when system redesign eventually lands. The evidence base for the next round of reform will come from organisations that can show, not say, what works.
Appendices
Appendix A. Reviews register
The 676 recommendations consolidated in this report come from 63 distinct review reports between 2016 and 2025. The full register, with rec_id, year, source, source_short, source_type, owner, status, pain point tagging and lineage references, is at:
research/policy-evidence/data/processed/unified_recommendations_v3.json
Summary by review stream:
| Stream | Reports | Recommendations |
|---|---|---|
| Disability Royal Commission (2019–2023) | 9 chapters and volumes | 219 |
| NDIS Review (Bonyhady/Paul, 2023) | 26 chapters | 138 |
| Joint Standing Committee inquiries | 5 inquiries | 137 |
| ANAO performance audits (2016–2025) | 9 audits | 56 |
| Productivity Commission NDIS Costs Study (2017) | 1 report | 44 |
| Tune Review (2019) | 1 report | 29 |
| McKinsey Independent Pricing Review (2018) | 1 report | 28 |
| Community Visitor Schemes Review (2018) | 1 report | 6 |
| Grattan Institute (2025) | 1 report | 4 |
| Boland Review (2024) | 1 report | 2 |
| Other (Senate inquiries, Regulatory Burden, General Issues, Ministerial, Pricing Reform, Commission AR) | 12 reports | 13 |
| Total | 63 | 676 |
The five Joint Standing Committee inquiries are NDIS Psychosocial 2017, Supported Independent Living 2020, NDIS Quality and Safeguards Commission 2021, NDIS Workforce 2022, and Capability and Culture 2023.
Appendix B. Source registry
Provenance metadata for every recommendation is at research/policy-evidence/data/sources/source_registry.csv. Each row records the report title, publishing body, year, URL where publicly accessible, and ingestion date. Where the original document is locally archived, the path is recorded under local_path.
Appendix C. JCPAA submissions list
The 51 submissions producing 451 discrete issues are mapped at:
research/policy-evidence/data/processed/unified_inquiry_alignment_v2.json
The submissions are numbered by JCPAA assignment (001 through 054, with some submission numbers either not yet released or withheld at the submitter’s request). The submitters are mixed: individuals, providers, peak bodies, consumer organisations, advocacy organisations, government bodies, and an Aboriginal Community Controlled Health Organisation peak body.
Submitter type distribution and primary root cause distribution per submission are documented in the v2 addendum at research/policy-evidence/docs/jcpaa_submissions_v2_addendum.md.
Appendix D. Methodology
Recommendation extraction. Each formal review report was parsed to identify numbered recommendations or equivalent formal proposals. Where a review used a non-numbered structure, each discrete proposition was extracted as a single rec. Recommendation text was preserved verbatim. Each rec was tagged with year, source, source_type and owner. Pain point tagging was assigned manually using the seven-pain-point taxonomy (PP01–PP07 plus PP_SYSTEMS and PP_WORKFORCE) documented in pain_points_enriched.json. Pain point tags were then mapped to the consolidated five root causes per the framework in Section 3.
Cross-review duplicate analysis. Recurring recommendations were identified using TF-IDF cosine similarity (sublinear term frequency, L2 normalised) across the 676-rec corpus, with a pair threshold of 0.35 and a cluster-forming threshold of 0.40 via union-find. Royal Commission chapters and NDIS Review chapters were treated as single parent reviews to avoid double-counting within-document near-duplicates. Full method at research/policy-evidence/docs/recommendation_recurrence_analysis.md.
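As a minimal sketch of the method described above: sublinear-TF-IDF vectors, L2-normalised, pairwise cosine similarity, and union-find clustering at the 0.40 threshold. The three-sentence toy corpus is invented for illustration; the actual analysis runs over the 676-rec corpus with the tooling documented in the method file.

```python
# Toy reproduction of the recurrence method: sublinear TF-IDF, L2 norm,
# cosine similarity, union-find clustering at the 0.40 threshold.
import math
from collections import Counter

corpus = [  # hypothetical recommendation texts
    "publish plan decision reasons and review rights",
    "publish reasons for plan decisions with review rights",
    "co-design practice standards with providers",
]

docs = [Counter(text.split()) for text in corpus]
n = len(docs)
df = Counter(term for doc in docs for term in doc)
idf = {t: math.log(n / df[t]) + 1 for t in df}

def vectorise(doc):
    """Sublinear TF (1 + log tf) times IDF, L2-normalised."""
    vec = {t: (1 + math.log(tf)) * idf[t] for t, tf in doc.items()}
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()}

vecs = [vectorise(d) for d in docs]

def cosine(a, b):
    return sum(w * b[t] for t, w in a.items() if t in b)

# Union-find: merge pairs whose similarity clears the cluster threshold.
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

for i in range(n):
    for j in range(i + 1, n):
        if cosine(vecs[i], vecs[j]) >= 0.40:
            parent[find(i)] = find(j)

clusters = {find(i) for i in range(n)}
print(f"{len(clusters)} clusters from {n} recommendations")
```

On this toy corpus the first two texts cluster together and the third stands alone. The production analysis adds the 0.35 pair-reporting threshold and the parent-review grouping described above.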
JCPAA submission issue extraction. Each public submission was read in full. Discrete issues were extracted as one or two-sentence statements. Each issue was tagged with submitter, submitter_type, inquiry_category, evidence_type, mapped_pain_points, primary_pain_point, confidence (low, medium or high), and alignment_type (confirms, extends or contradicts existing review evidence). Confidence ratings reflect the quality of evidence provided in the submission, not the salience of the issue. Issues that did not map to any of the seven pain points were tagged against gap themes (GAP01–GAP03) where appropriate.
Reform exposure scoring. Each Butler reform proposition was scored against each pain point as ROOT (substantively addresses), ENABLE (necessary precondition without delivering), SYMPTOM (touches a downstream symptom), or NA. Scoring criteria are in research/policy-evidence/scripts/score_root_cause.py and the scoring memo at research/policy-evidence/outputs/butler-reform-analysis.md.
Implementation rate. The status field on each recommendation records implemented, in_progress, recycled, stalled, or unknown. Where status is unknown, the rec is excluded from implementation rate calculations. Status assignments are drawn from public information available at the time of analysis and are refreshed when the alignment analysis is re-run.
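The exclusion convention is simple but worth stating precisely, since including unknowns in the denominator would understate the rate. A sketch, with an invented status list:

```python
# Implementation-rate convention: unknown-status recs are excluded from the
# denominator. The status list below is hypothetical.
statuses = ["implemented", "recycled", "stalled", "unknown",
            "implemented", "in_progress", "unknown"]

known = [s for s in statuses if s != "unknown"]
rate = sum(s == "implemented" for s in known) / len(known)
print(f"Implementation rate: {rate:.0%} of {len(known)} recs with known status")
```

Under this convention the example yields 40 percent of five known-status recs, rather than roughly 29 percent of seven.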
Limitations are stated transparently:
- The recommendations register is reproducible but not exhaustive. Some review reports can be parsed to yield more or fewer recommendations than the canonical numbered set; in those cases the canonical set was preferred.
- Pain point tagging is an analytical judgment. Different analysts could plausibly tag a recommendation differently. The tags and the underlying rationale are visible in the data file for inspection.
- The 7→5 root cause consolidation is Supporting Potential’s analytical synthesis. Other consolidations are possible. The five-root-cause framing is documented and defended in Section 3.
- The JCPAA submission issue extraction is qualitative. Different analysts could plausibly extract different issue counts from the same submission. The issue mappings are visible at the source-text level for inspection.
- Implementation status data is limited by what government has published. Many recommendations have no published status; these are treated as unknown rather than as not-implemented.
Appendix E. Glossary
- AAT — Administrative Appeals Tribunal (replaced by Administrative Review Tribunal in October 2024).
- AHPRA — Australian Health Practitioner Regulation Agency.
- AMS — Activity Management System (NDIS Commission’s compliance portal).
- ANAO — Australian National Audit Office.
- ART — Administrative Review Tribunal.
- BSP — Behaviour Support Plan.
- CALD — Culturally and Linguistically Diverse.
- CVS — Community Visitor Scheme.
- JCPAA — Joint Committee of Public Accounts and Audit.
- JSC — Joint Standing Committee on the National Disability Insurance Scheme.
- LAC — Local Area Coordinator.
- NDIA — National Disability Insurance Agency.
- NDIS — National Disability Insurance Scheme.
- NFP — Not for profit.
- OPCAT — Optional Protocol to the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment.
- PACE — NDIA’s planning IT system.
- PP — Pain Point (PP01–PP07, plus PP_SYSTEMS and PP_WORKFORCE in the underlying tagging schema).
- R&N — Reasonable and Necessary (NDIS Act eligibility test).
- RC — Root Cause (the five consolidated root causes in this report, RC1–RC5).
- RoRD — Review of Reviewable Decision.
- RP — Restrictive Practice.
- SCHADS — Social, Community, Home Care and Disability Services Industry Award.
- SDA — Specialist Disability Accommodation.
- SIL — Supported Independent Living.
Document control
| Field | Value |
|---|---|
| Version | 1.0 (complete first draft) |
| Date | May 2026 |
| Owner | Angela Harvey, Supporting Potential |
| Status | All sections drafted across the v3 evidence base (676 recommendations, 51 JCPAA submissions, 451 issues) |
| Next milestone | Editorial review and refinement |
Supporting Potential helps NDIS providers deliver better quality services that enhance life outcomes for people with disability whilst improving financial margins. We work at the intersection of regulatory design, operational systems, and provider strategy.
Systems view
The five-root-cause framing in the main report is the analytical anchor. This companion view shows how those root causes interact at the level of the system as a whole. It draws on Supporting Potential's prior systems-mapping work, rebuilt here in the same visual style as the rest of the report.
Three lenses appear in this chapter. The first is the dynamic between government and providers, which both sides experience as adversarial even though their goals reinforce each other. The second is the set of cause-and-effect chains that turn a single decision into a downstream pattern. The third is the power map, which sorts the available reform levers by who can actually pull them.
The dynamic between government and providers
Government and providers are often described in adversarial terms. The evidence base shows something more specific: both sides hold legitimate goals (sustainability and safety on one side, viability and quality on the other) and the mechanisms each side uses to pursue those goals create unintended consequences for the other. Each side's rational response then escalates the dynamic. Reforms that change the mechanisms work; reforms that tighten the existing ones tend to make the dynamic worse.
The cause-and-effect chains
Six cascades appear repeatedly across the evidence base. Each begins with a government or system action and produces a sequence of provider responses, system-level consequences, and harm. They are not independent. The pricing cascade feeds the workforce cascade, which feeds the market cascade. Reading them as separate problems hides the structural shape.
The power map
The reform levers available to fix these chains divide into three domains. Some can only be pulled by government. Some can only be pulled by providers. Some require both sides to act together. The map below sorts the levers named throughout this report by which domain they sit in. Sections 8 and 9 set out how each is used; the visual makes the distribution visible at a glance.
The strategic implication is that providers do not have to wait for government to fix the system before activating the provider-side levers. Each provider lever activated produces evidence that strengthens the case for the government-side and shared-side levers to be pulled.
The system at a glance
Ten variables shape the scheme. Participant outcomes (the purpose) sits downstream of every other variable. Provider viability, workforce capability, regulatory effectiveness, NDIA administrative capacity, scheme financial sustainability, market depth, trust, data quality, and reform absorptive capacity each connect through feedback loops that currently run as vicious cycles rather than as the virtuous cycles the scheme was designed to produce. The diagram below positions the central feedback (the quality death spiral) and the variables that feed it.
This is a simplified view. The fuller mapping (nine specific feedback loops, including the dormant virtuous cycles R8 and R9 that providers can activate without waiting for policy change) sits in research/ndis-systems-map.html.
Footnotes
i. First formally identified. Refers to the year of the earliest formal review recommendation that has been mapped to this root cause pattern in the consolidated dataset. The five-root-cause framing is Supporting Potential's analytical synthesis across 676 recommendations from 63 review reports. Early formal reviews typically named specific symptoms or design issues rather than the consolidated root cause as framed in this report; the pattern accumulated across subsequent reviews. The year therefore reflects when the underlying issue first surfaced in formal evidence, not when it was named as a coherent root cause. A per-root-cause audit of the earliest mapped recommendation, including the post-JSC-ingestion update that strengthened the RC1 anchor, is documented at research/policy-evidence/docs/years_unresolved_audit.md.