Guide to economic evaluations

This second article on pharmacoeconomics gives an overview of how to assess the quality of an economic evaluation and discusses how economic evaluations are used to influence the provision of health care.

This content was published in 2003. We do not recommend that you take any clinical decisions based on this information without first ensuring you have checked the latest guidance.

Economic evaluations are increasingly being used to make decisions in health care, so it is useful to be able to assess their quality. This article reviews guidelines published in the BMJ.[1] Panel 1 contains a checklist of questions that should be asked when reading an economic evaluation.

Economic evaluation study question

The question raised by the evaluation should be economically important. Asking whether a new treatment is simply “worthwhile” is not enough. The relevant question is whether the new treatment is worthwhile, compared with existing interventions. Similarly, merely asking whether drug A is more expensive than drug B ignores the issue of treatment efficacy, and economic evaluations should explore outcomes as well as costs. The evaluation should also consider resource implications, ie, whether the extra cost of a more effective drug is affordable.

The perspective of the economic evaluation should be clearly stated at the outset because this influences the categories of costs and the types of outcomes that are included. For example, an economic evaluation conducted from the perspective of society will include out-of-pocket costs incurred by the patient. If the perspective is clear, different decision-makers (eg, government, purchaser, provider) will know how relevant the evaluation is to their aims.

Selection of alternatives

The economic evaluation of an intervention is only useful if the intervention is compared with legitimate alternative treatments. An example of an inappropriate comparator would be the use of an older generation drug or a placebo if a better agent is available. Alternatives must be clearly stated and the reasons for selecting them must be justified. This helps the reader to determine whether or not the economic evaluation can be applied to his or her own setting. If the study considers alternatives that are not available in that setting, the evaluation will not provide relevant cost-effectiveness information. Comparators should also be the most cost-effective interventions currently available. In practice, the comparator tends to be the most widely used intervention in the setting in which the evaluation is conducted.

Type of evaluation conducted

Different forms of evaluation include cost-minimisation analysis (CMA), cost-effectiveness analysis (CEA), cost-utility analysis (CUA) and cost-benefit analysis (CBA) (see PJ, 15 November, pp679-81). The type of evaluation performed should be stated, along with justifications for choosing it. For example, if a CMA is used, the authors need to state why the treatments are considered to be equally effective. A CBA can answer broader questions of resource allocation from a societal point of view and enables comparisons of health and non-health programmes. Because the outcome in CUA is measured in the number of healthy years gained, comparisons can be made between health programmes. In contrast, CEA can only be used to compare programmes with the same outcomes (eg, deaths avoided from smoking cessation and deaths avoided from treating high blood pressure).

Efficacy data

Assessing the evidence for the efficacy of interventions is a key component of economic evaluation. If the data come from a single study (eg, a clinical trial), design details such as sample size and selection, method of randomisation and the type of analysis conducted (eg, intention to treat or treatment completers only) should be given. Results should also be reported with confidence intervals. The gold standard for assessing the efficacy of an intervention is the randomised double blind controlled trial. However, it is possible that no clinical trials have been conducted and, instead, observational studies such as cohort or case-control studies have been used. If so, the authors should discuss any limitations that could affect the validity of the estimate of efficacy.

If the data are based on a review of a number of primary studies, the evaluation should consider how the studies were selected. This will involve describing the search strategy, the criteria for inclusion of the studies in the review and the criteria used to ensure the studies are valid. The number of primary studies included should be stated and the method used to combine the results should be explained (eg, the statistical techniques used in a meta-analysis). Finally, if differences are found between primary studies, the authors should investigate them and discuss how this may affect the estimate of efficacy.

Measurement of benefits

The primary outcome measure used in the economic evaluation should be clearly stated. Examples of outcome measures are life-years gained and quality adjusted life years (QALYs). In cases where a value (or utility score) has been attached to a health state, the source of the values, as well as the method used to assign them (eg, generic health state preference instrument or direct measurement), should be given. In the case of direct measurement, the category of people asked (eg, patients, health care professionals, or the general population) and the number of people asked should be reported.
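The QALY calculation underlying such outcome measures can be sketched in a few lines of Python; all the years and utility scores below are hypothetical, purely for illustration:

```python
# Sketch of a QALY calculation: the years spent in each health state
# are weighted by a utility score between 0 (dead) and 1 (full
# health) and summed. All figures are hypothetical.

health_states = [
    (2.0, 1.0),   # 2 years in full health
    (3.0, 0.75),  # 3 years in a chronic state valued at 0.75
]

qalys = sum(years * utility for years, utility in health_states)
print(qalys)  # 4.25
```

The utility scores themselves would come from one of the sources described above, such as a generic health state preference instrument or direct measurement.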


Measurement of costs

Both the quantity of resources used and their unit costs should be reported so that the reader can apply the assessment to his or her own circumstances. For example, the reader should be able to recalculate costs using prices from his or her own setting. The methods used to estimate costs and quantities, and the dates when estimates (eg, prices) were made, should be specified. For example, data could have been collected alongside clinical data in a trial, or sourced prospectively or retrospectively from medical records. The categories of costs considered should be appropriate for the perspective adopted in the evaluation.

Modelling

Modelling techniques enable an evaluation to be extended beyond what has been observed. For example, it may be necessary to transform intermediate outcomes such as cholesterol concentrations into final outcomes such as coronary heart disease events and survival. Models also allow the analyst to combine evidence from a variety of sources when it is not available from a single source. For example, the effectiveness of treatments may be obtained from randomised controlled trials, while incidence data on the condition may be obtained from population surveys. Costs may be obtained from a third source. These data are then combined in a single model.

In cases where modelling is used, the authors should say why. Details should also be given about the type of model used in the analysis (eg, decision tree model, epidemiological model, regression model) and the choice of model should be justified. Finally, any assumptions made in constructing the model should be stated.
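The simplest case of such a model, a two-branch decision tree, can be expressed in a few lines of Python. Every probability and cost below is hypothetical, standing in for the trial and survey data from separate sources described above:

```python
# Minimal decision-tree sketch: each strategy branches into "event"
# and "no event", with event probabilities taken from (here,
# hypothetical) trial data and costs from a separate source.

def expected_cost(drug_cost, event_prob, event_cost):
    """Expected cost per patient across the two branches of the tree."""
    return drug_cost + event_prob * event_cost

treat = expected_cost(drug_cost=300.0, event_prob=0.08, event_cost=2000.0)
no_treat = expected_cost(drug_cost=0.0, event_prob=0.10, event_cost=2000.0)

print(treat)     # expected cost per treated patient
print(no_treat)  # expected cost per untreated patient
```

A real decision-tree or epidemiological model would have many more branches and states, but the principle is the same: each source of evidence supplies one parameter, and the model combines them into an expected cost (and, analogously, an expected benefit) per strategy.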

Panel 1: Questions that should be asked about economic evaluations

  • Is the question economically important?
  • Does the study consider both costs and benefits?
  • Is the perspective of the study clearly stated?
  • Is the alternative, or comparator, clearly stated?
  • Is the choice of alternative justified?
  • Is the form of economic evaluation stated?
  • Is the form of economic evaluation justified?
  • Do the effectiveness data come from a single study or from a
    review of studies?
  • Is the main health outcome measure or benefit clearly stated?
  • Are the sources and methods used to derive health benefit
    values clearly stated?
  • Are quantities and costs reported separately?
  • Are the methods used to estimate quantities and costs clearly
    stated?
  • Are the categories of costs considered appropriate for the
    perspective adopted?
  • Is the year that the price was sourced from stated?
  • Is the use of modelling justified in the analysis?
  • Is the type of model used justified?
  • Are the assumptions made in the model clearly stated?
  • Is the duration of the study sufficient to answer the study
    question?
  • Are costs and benefits appropriately discounted?
  • Are appropriate statistical analyses applied to sampled data?
  • Are sensitivity analyses conducted on uncertain parameters?
  • Is an incremental analysis conducted?
  • Are appropriate comparisons with other studies made?

Adjustment for timing of costs and benefits

The duration of the study should be long enough to observe the effects of the interventions being analysed in the economic evaluation; if necessary, it should cover the whole life of the treated individuals.

In economic evaluations where the follow-up extends beyond one year, discounting should be conducted. Discounting is an economic technique that reflects the fact that people prefer to delay costs as long as possible and to receive benefits as soon as possible. Streams of costs and benefits occurring over a number of years are discounted to obtain their net present value (NPV). Calculating the NPV requires the discount rate, the number of years over which the intervention is relevant, and the costs and benefits themselves. The choice of discount rate is not straightforward: there is no agreement among health economists on which rate to use, or on whether costs and benefits should be discounted at the same rate. It is good practice for economic evaluations to present both discounted and undiscounted results, allowing the reader to apply a discount rate more appropriate to his or her own setting. In the United Kingdom, analysts tend to use the 6 per cent rate recommended by the Treasury.
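The NPV calculation can be illustrated with a short Python sketch. The 6 per cent rate follows the Treasury recommendation mentioned above, but the cost stream itself is hypothetical:

```python
def npv(values, rate):
    """Net present value of a stream of yearly values.

    values[0] occurs now (year 0) and is not discounted; the value
    in year t is divided by (1 + rate) ** t.
    """
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

# Hypothetical intervention costing 1,000 pounds a year for 5 years,
# discounted at the 6 per cent rate mentioned in the text.
costs = [1000.0] * 5
print(round(npv(costs, 0.06), 2))  # about 4,465 -- less than the
                                   # undiscounted total of 5,000
```

The same function applied to the stream of benefits gives their present value, and presenting both discounted and undiscounted totals lets the reader substitute a different rate.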

Allowance for uncertainty

Some of the parameters used in an economic evaluation may be uncertain. If the uncertainty stems from data sampled from a population, during a clinical trial for example, then standard statistical methods should be used to represent this uncertainty and confidence intervals should be presented.

If uncertainty stems from adjusting the results to suit other settings or patients, from extrapolating beyond the observed data, or from the type of analytical methods used, then sensitivity analysis is best suited to indicating the uncertainty. Sensitivity analysis involves varying the uncertain parameter over a chosen range of values. For example, if the relative risk of disease is 0.8 with treatment compared with no treatment, a sensitivity analysis could explore the costs and benefits of the treatment with relative risks of 0.25, 0.5, 0.8 and 1. Possible types of sensitivity analysis include one-way or multi-way sensitivity analysis, threshold analysis and probabilistic sensitivity analysis. The range over which each parameter is varied should be clearly stated. Recently, a number of studies have used cost-effectiveness acceptability curves to present the results of the sensitivity analysis.[2] These graphs show the probability that an intervention is cost-effective as a function of the decision-maker’s ceiling cost-effectiveness ratio.
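A one-way sensitivity analysis of the kind described above can be sketched as follows. The relative-risk range comes from the example in the text, while the baseline risk and cost figures are hypothetical:

```python
def cost_per_event_avoided(relative_risk, baseline_risk=0.10,
                           extra_cost=500.0):
    """Incremental cost per event avoided in a hypothetical model
    where treatment scales a baseline event risk by the relative risk."""
    events_avoided = baseline_risk * (1 - relative_risk)  # per patient
    if events_avoided <= 0:
        return float("inf")  # no benefit: the ratio is undefined
    return extra_cost / events_avoided

# One-way analysis: vary the relative risk over the range in the text
# while holding every other parameter at its base-case value.
for rr in (0.25, 0.5, 0.8, 1.0):
    print(rr, cost_per_event_avoided(rr))
```

The result shows how sensitive the conclusion is to this one parameter: the more effective the treatment (lower relative risk), the lower the cost per event avoided, and at a relative risk of 1 the treatment confers no benefit at all.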

Presentation of results

An analysis using incremental cost-effectiveness ratios (see PJ, 15 November, pp679-81), comparing the extra benefits gained with the extra costs incurred, should be reported. The main costs and benefits should be presented in both disaggregated and aggregated form. This gives the reader a clear summary of the different elements involved and makes it possible to apply the results to his or her own setting. For example, if the costs do not apply to the reader’s setting, the results for the benefits may still be used. Comparisons with other studies should only be made when the methods are similar and the settings are comparable.
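The incremental comparison reduces to a one-line calculation. The drugs and figures below are invented purely for illustration:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: the extra cost incurred
    per extra unit of benefit gained (eg, per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical comparison: the new drug costs 2,000 pounds more per
# patient and yields 0.5 extra QALYs.
print(icer(cost_new=5000.0, effect_new=1.5,
           cost_old=3000.0, effect_old=1.0))  # 4000.0 per QALY gained
```

Reporting the costs (numerator) and effects (denominator) separately, as the text recommends, lets a reader in another setting substitute local prices and recompute the ratio.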

Policy uses of economic evaluations

In the 1990s, a number of countries began using cost-effectiveness techniques to define health care policies. In Canada and Australia, reimbursement decisions for pharmaceuticals started using economic evidence, and pharmaceutical companies were required to provide evidence that their products were cost-effective. In the UK, the National Institute for Clinical Excellence was created to appraise health care interventions, and a number of National Health Service treatments are now recommended on both clinical and economic evidence. The rationale for the establishment of NICE was to make the decision-making process more efficient and more equitable by applying health economic techniques to health policy decisions. Increasing efficiency means selecting the most cost-effective interventions, thereby making the best use of limited health care resources. Increasing equity means providing the same treatment for the whole population. In particular, the uneven geographical distribution of some forms of health care (“postcode prescribing”) has long been a concern.

NICE is responsible for producing three types of guidance for the NHS in England and Wales:

  • Technology appraisals on the use of new and existing medicines and treatments
  • Clinical guidelines for the appropriate treatment and care of patients with specific diseases and conditions
  • Recommendations on whether interventional procedures used for the diagnosis and treatment are safe and work well enough for routine use

A NICE decision to recommend an intervention depends on a number of criteria, which include the clinical benefit of the intervention, its cost-effectiveness and the costs to the NHS of recommending it. For example, in October 2001 guidance on the use of sibutramine for the treatment of obesity in adults was issued. This recommended that sibutramine should be prescribed as part of an overall treatment plan for management of nutritional obesity in people aged 18-65 years. In terms of clinical effectiveness, a number of clinical trials were evaluated. The guidance reviews the outcomes of the trials in terms of weight loss, dosage, target population (eg, it was not recommended for people outside the age range of 18-65 years) and possible adverse effects (eg, the increased risk of raised blood pressure in the older population). In the section on cost-effectiveness, the guidelines indicate that the manufacturer’s submission estimated a cost per QALY gained of £10,500. However, because a number of parameters were judged to be uncertain (in particular, the cost of not treating obesity, the rate of natural weight gain and the rate of weight regain after treatment is stopped), the guidelines estimated that a more realistic cost per QALY gained could be anything up to £30,000. Cost implications for the NHS are also assessed: in the first year, the total costs of prescribing the drug and monitoring therapy were estimated at approximately £8.4m. It is clear, then, how important it is for health professionals to gain an understanding of economic evaluation techniques, because these techniques are to play a central role in defining national treatment guidelines.

How has NICE fared with its goals of increasing efficiency and equity in health care? With respect to geographical inequity, NICE was established so that, for treatments with uncertain efficacy where local authorities were consequently providing different treatments, national guidance implemented by all local authorities would ensure the same treatment for all patients concerned. While this is happening, one consequence may be that, in order to follow NICE guidance, the provision of local services not appraised by NICE is affected, so that local variations remain. A long-term solution is to appraise all existing technologies, identifying those that are widely used but not cost-effective. This would free resources that could be allocated to more cost-effective treatments.

With respect to efficiency, the main problem may be the implementation of the guidelines in practice. While the status of the guidelines remains advisory rather than mandatory, there is strong pressure for the recommendations to be followed in practice unless clinical judgement strongly contraindicates them. Moreover, in January 2002, the Government announced a statutory obligation for the NHS in England and Wales to provide funding for treatments and drugs recommended by NICE if deemed appropriate by a clinician. Despite all this, it is unclear how compliance with NICE’s guidelines is being monitored. In fact, a recent study shows that the recommended use of Herceptin for breast cancer has been followed variably, and a number of women in some areas of the UK are still not being given access to this treatment.[4]

NICE’s appraisal procedures are also likely to have implications for the pharmaceutical industry. In the past, companies have concentrated on designing clinical trials to satisfy the evidence required by drug licensing authorities. However, the information required by NICE is different. Not only is there a need for economic data to be collected alongside information on the clinical effects of the intervention, but there also needs to be an emphasis on more “real life” clinical populations. The emphasis is therefore likely to switch from traditional clinical trials with strict inclusion and exclusion criteria to pragmatic clinical trials that try to replicate real life more closely. There has been some concern that the industry may unduly influence the procedures at NICE during the submission process. However, NICE commissions an independent appraisal from academic centres alongside industry submissions and this should safeguard against bias. In the appraisal of sibutramine,


  1. Drummond MF, Jefferson TO. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ 1996;313:275-83.
  2. Briggs AH. Handling uncertainty in cost-effectiveness models. Pharmacoeconomics 2000;17:479-500.
  3. National Institute for Clinical Excellence. Guidance on sibutramine for obesity. Available at:
  4. CancerBACUP. Data on access to Herceptin. Available at:
  5. Health Technology Assessment. HTA web pages. Available at:
  6. NHS Centre for Reviews and Dissemination. NHS EED web pages. Available at:

The Pharmaceutical Journal, November 2003. DOI: 10.1211/PJ.2021.1.65620