The federal government and National Cabinet have committed to rebooting and fixing the National Disability Insurance Scheme (NDIS) for people with disabilities. They plan to invest more in the National Disability Insurance Agency (NDIA), which administers the scheme, and drive change to better support participants.
As part of these initiatives, the government has indicated a move towards prioritising evidence-based supports to ensure funds are appropriately and effectively spent. NDIS minister Bill Shorten promised “a renewed focus on evidence and data,” adding that he wanted to
[…] get rid of shoddy therapies that offer little to no value to participants or desperate parents.
The rhetoric raises important questions. How is “evidence” defined? And can it be usefully applied within the complex NDIS context?
Medical research origins
The term “evidence-based practice” comes from the medical field, mostly from research trials with a clear cause-and-effect relationship. A specific drug or treatment (termed an “intervention”) might be given to certain subjects, and any changes are then tracked with objective measurement tools, such as blood tests, improvements in health or changes in function.
Research evidence is ranked in a hierarchy to denote its reliability and significance. Expert opinion sits at the base, then case studies, then randomised controlled trials (in which subjects are randomly assigned to experimental or control groups), with systematic reviews (which combine the results of many different trials and studies) at the prestigious peak.
But this narrow idea of what evidence is can be problematic when applied to a complicated scheme like the NDIS.
Disability is different
Firstly, disability is not a medical condition. It is part of being human and affects everyone uniquely, due to factors such as each person’s social, psychological and physical make-up and the context and environment they are in. Support services need to be tailored to each person and their circumstances.
This uniqueness, together with the multiple and often unpredictable benefits and outcomes of intervention, makes measuring clear cause-and-effect relationships inaccurate or incomplete in many cases. It calls for a different approach to the definition of evidence.
To add to this complexity, each support service is unique in terms of set-up, context and resources available.
Finally, disability research has historically been overlooked and severely underfunded compared with medical research into drugs, detection or therapies. The quality and quantity of published research available is very limited.
3 things we can consider about supports
So, how can we judge NDIS supports and practice to ensure funds are appropriately spent?
Evidence within complex environments needs to incorporate:
1. Qualitative outcomes
The current focus on highly rigorous published research outcomes, for example from randomised controlled trials, should be complemented with qualitative research studies. These studies may involve fewer participants but incorporate the voices of people with disabilities. Participants can articulate their views on the services provided, the outcomes and benefits, and their preferences. Systematic reviews can then be formulated to survey and summarise both quantitative and qualitative research studies.
2. NDIS participant feedback
Research takes a long time. Information can be gathered more quickly from NDIS participants that reflects their choices, priorities, values, preferences and individual context. Service providers should be regularly surveying and monitoring their client groups. The NDIS Review is due to report in the coming months and is also investing in a wellbeing measure, and the government has developed a Disability Strategy Outcomes Framework to track and report improvements for people with disability.
3. Supports in context
Real-world supports don’t happen in a vacuum. To judge effectiveness and suitability we will need information about service provision. This might include the available resources to provide services (such as telecommunications access in remote areas of Australia), contexts (such as geographical or population demographics including culture and language), and organisational factors such as service delivery and set-up (for example, inter-disciplinary teams or sole-practitioner models).
Evidence-based recommendations in the real world
An example of how these three important components can inform evidence-based practice can be found in the recently released guidelines for supporting children with autism and their families.
Autism is the largest disability category in the NDIS, with around one in three active NDIS participants receiving funding for the condition. The fresh guidelines draw on extensive systematic reviews, incorporating qualitative and quantitative research studies and the voices of autistic people, families and service providers. The surrounding context of service provision – how and where supports are delivered in the real world – was described and applied to the recommendations.
This broader view and application of evidence-based practice is more appropriate for the supports the NDIA funds. However, these types of evidence sources are currently limited: we do not have them for all disability groups or age groups.
Investment will be needed to develop these evidence sources and to ensure the government stays true to its commitment to work with people with disability and the sector to provide “choice and control” and effective support.
This article is republished from The Conversation, a collaboration between academics and journalists publishing research-based news and analysis. It was written by Kobie Boshoff, University of South Australia.
Kobie Boshoff does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.