New methods of analysis and advances in technology are changing the way observational studies are performed.
Clinical registries
Clinical registries are essentially cohort studies and are gaining importance as a method to monitor and improve the quality of care.19 These registries systematically collect a uniform longitudinal dataset to evaluate specific outcomes for a population that is identified by a specific disease, condition or exposure. This allows variations in clinical practice to be identified20 and enables benchmarking across practitioners or institutions. These data can then be used to develop initiatives to improve evidence-based care and patient outcomes.21
An example of a clinical registry in Australia is the Australian Rheumatology Association Database,22 which collects data on the biologic disease-modifying antirheumatic drugs used for inflammatory arthritis. Clinical data from treating specialists are combined with patient-reported quality of life data and linked to national databases such as Medicare and the National Death Index. This registry has provided insight into the safety and efficacy of these drugs and their effect on quality of life, and was used by the Pharmaceutical Benefits Advisory Committee to assess their cost-effectiveness.23
Another example is the Haemostasis Registry, which was used to determine the thromboembolic adverse effects of off-label use of recombinant factor VII.24
Clinical registries can also be used to undertake clinical trials that are nested within the registry architecture. Patients within a registry are randomised to interventions and comparators of interest, and their outcome data are then collected as part of the routine operation of the registry. The key advantages are convenience, reduced costs and the greater representativeness of registry populations compared with those of traditional clinical trials.
One of the first registry-based trials was nested within the SWEDEHEART registry.25 It prospectively examined manual aspiration of thrombus at the time of percutaneous coronary intervention in over 7000 patients.26 The primary endpoint of all-cause mortality was ascertained through linkage to another Swedish registry. The cost of the trial was estimated to be US$400 000, a fraction of the many millions that a conventional randomised controlled trial would have cost.
Propensity score matching
Even without randomising people within cohorts, methods have emerged in recent years that allow less biased comparisons of two or more subgroups. Propensity score matching is a way to assemble two or more groups for comparison so that they appear as if they had been randomised to an intervention or a comparator.27 In short, the method uses logistic regression to estimate the likelihood (propensity) of each person within a cohort receiving the intervention, and then matches people who received the intervention to those who did not on the basis of their propensity scores. Outcomes are then compared between the matched groups. A propensity score analysis of a large cohort of patients with relapsing remitting multiple sclerosis found that natalizumab was associated with better outcomes than interferon beta and glatiramer acetate.28
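The steps can be illustrated with a short, simplified example. The following Python sketch uses simulated data and illustrative variable names (age and a severity score as confounders); it is not the analysis used in the cited multiple sclerosis study, and a real analysis would also check covariate balance after matching.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Simulated baseline covariates (illustrative confounders)
age = rng.normal(45, 12, n)
severity = rng.normal(0, 1, n)

# Treatment assignment depends on the covariates (confounding by indication)
p_treat = 1 / (1 + np.exp(-(-2 + 0.03 * age + 0.8 * severity)))
treated = rng.binomial(1, p_treat)

# Simulated outcome with a built-in treatment benefit of 1.5
outcome = 1.5 * treated - 0.5 * severity + rng.normal(0, 1, n)

# Step 1: logistic regression estimates each person's propensity
# to receive the intervention
X = np.column_stack([age, severity])
propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated person to the untreated person
# with the nearest propensity score (1:1, with replacement)
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
matches = [control_idx[np.argmin(np.abs(propensity[control_idx] - propensity[i]))]
           for i in treated_idx]

# Step 3: compare outcomes between the matched groups
effect = outcome[treated_idx].mean() - outcome[matches].mean()
print(f"Estimated treatment effect after matching: {effect:.2f}")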
Data technology
Increasing sophistication in techniques for data collection will lead to ongoing improvements in the capacity to undertake observational studies (and also clinical trials). Data linkage already offers a convenient way to capture outcomes, including retrospectively. However, ethical considerations must be taken into account, such as the possibility that informed consent might be required before linking data. Machine learning will soon allow for easy analysis of unstructured text (such as free-text entries in an electronic prescription).29 Patient-reported outcome measures are important, and in future their capture, processing and analysis will be greatly facilitated by standardised, secure hardware and software platforms.
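As a simple illustration of how machine learning can be applied to unstructured text, the following sketch trains a text classifier on a handful of hypothetical free-text prescription entries. The entries, labels and task (flagging mentions of biologic antirheumatic drugs) are illustrative only and are not drawn from any cited study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical free-text prescription entries
# (label 1 = mentions a biologic antirheumatic drug)
texts = [
    "adalimumab 40 mg subcut fortnightly for rheumatoid arthritis",
    "paracetamol 1 g four times daily as needed",
    "etanercept 50 mg weekly, review in 3 months",
    "amoxicillin 500 mg three times daily for 5 days",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Classify a new, unseen entry
print(model.predict(["etanercept 25 mg twice weekly"]))

A real application would need far more labelled data and careful validation, but the structure is the same: convert free text into numerical features and learn a mapping to the outcome of interest.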