Research

Comparing applets and oranges: barriers to evidence-based practice for app-based psychological interventions

Published in: BMJ Evidence-Based Mental Health

Poor-quality pharmaceuticals and medical devices rarely make it to market; however, the same cannot be said for app-based interventions. With high availability but a low evidence base, mHealth apps are an increasingly uncertain prospect for users and healthcare professionals alike.

Although, in an ideal world, the burden of proof concerning app safety, clinical effectiveness and cost-effectiveness should ultimately lie with app developers, a number of barriers to evidence generation, not least that ‘acceptable evidence’ is itself largely open to interpretation, mean it may be folly to expect this paucity of real-world effectiveness research to improve.

While the health technology assessment of established therapeutic modalities, including pharmaceuticals and talking therapies, benefits from approved evaluative guidelines, no equivalent guidance exists for app-based interventions, particularly with regard to outcome measurement.

As such, if the comparative assessment of apps is not simply to become an exercise in comparing apples and oranges, there is a clear need for consensus and guidance for app developers as to which patient-reported outcome measures, among the hundreds available, are of clinical use to decision-makers and should therefore be used when developing app-based interventions.

By allaying the fear that any evidence collected may be of poor quality, we can reincentivise developers to engage in evidence generation and, in doing so, maximise the likelihood of evidence-based decision-making taking a firm hold. Only by dispelling the ambiguity around what acceptable evidence can and should look like, however, can we begin to do so.


Click here to read the full research piece.