
Designing information technology to support prescribing decision making
N Barber

Correspondence to: Professor N Barber, Department of Practice and Policy, The School of Pharmacy, 29 Brunswick Square, London WC1N 1AX, UK


The use of computerised prescribing and decision support to reduce medication error is a common element of medication safety policy. This paper discusses the sort of characteristics that a decision support system should have. The system should slot into a wider vision of good prescribing, not conflict with it, and should be based on our understanding of the causes of error. As yet there is little evidence that decision support is effective in changing patient outcome, and the evaluation in this field is of limited quality and generalisability. It is proposed that software design should target high risk patients and drugs, trap dosing errors, have standardised methods of production and evaluation, be congruent with good prescribing, focus on the tasks that computers do well, individualise treatment, and ensure that prescribers enjoy using the final product.

  • information technology
  • prescriptions
  • decision making
  • computerised prescribing


The recognition of the extent and significance of medical error has led to policy initiatives in the UK, USA, and other countries. Medication error is a common source of medical error, hence a focus of initiatives to make the prescribing and administration of medicines safer. Central to this policy in the UK and USA is the adoption of computerised prescribing (called Computerised Physician Order Entry (CPOE) in the USA). The literature contains some spectacular examples of computerised prescribing systems producing remarkable reductions in errors, and the same few studies from fewer sites are commonly quoted in the policy documents. However, there seems to be little debate about the basic question—namely, how should computerised prescribing software be designed to improve patient safety? As the UK is about to embark on a massive programme of introducing computerised prescribing into National Health Service (NHS) hospitals, the question seems an important one to ask.

This paper comes from consideration of the question for a UK/US workshop on patient safety. It starts by defining good prescribing and then describes what is known about prescribing error (predominantly from a UK perspective). The evaluation of systems is discussed from the perspective of the methodological issues in evaluating information technology in general, and the literature on decision support in particular. Finally, these points are brought together in a synthesis that makes some suggestions for the design of software to improve patient safety in prescribing.


If computers are to help in prescribing, we first need a concept of what good prescribing is. For many years appropriate prescribing was assumed by many to be the prescription of a drug that had evidence of clinical effectiveness. However, this construction of prescribing—solely as a technical pharmacological act—is at odds with much of the reality of prescribing and with the models of prescribing that are currently promoted in the literature and in government policy.

I have previously argued1 that good prescribing involves consideration of three broad areas:

  • What the patient wants. Most health care starts with a patient wanting something, thinking a doctor may be able to help them, and seeking information from the doctor. What the patient wants for themselves must be a prime factor in decisions about good prescribing.

  • The technical/rational. This covers the scientific measurement of the drug—its pharmacology and its effects (good and bad)—together with the costs associated with its use. For example, a decision tool that helps to decide on the correct dose of a drug would include physical parameters of the drug and patient; the dose could then be calculated from this technical information.

  • The greater good. There are areas of prescribing in which some consideration of societal good needs to be considered. In a national utilitarian system such as the NHS in the UK, cost reduction appears under this banner—for example, as generic substitution or use of a formulary. If a patient wants a very expensive drug, then a rationale for denying the patient that drug can be that the same effect could be achieved by a cheaper alternative and the remaining money could go to the treatment of other patients.

We can represent possible prescribing states by showing these three elements in a Venn diagram (fig 1). In the centre, where all three goals are met, prescribing is not controversial. All the remaining sections are debatable. A judgement needs to be made by studying the unique details of the case alongside the principles and rationale of each element. The process is analogous to the way that a judge in a court of law will pass sentence—partly according to the general principles of law associated with the crime and partly according to the specific circumstances under which the crime was committed.

Figure 1

 A framework for evaluating good prescribing.

Good prescribing should therefore take into account not just the facts, but the values of individual patients, sometimes the values of those close to them, and of the NHS and of society as a whole. This shows that the concept of prescribing error can also be contestable at times.

Computers can aid or subvert these goals, depending on how they are programmed and put to use. There is an obvious role for computers in the area of technical pharmacological measures—calculating doses and identifying overdoses, drug interactions, or a previous allergic reaction to the drug. In the area of the greater good they can also have a role, for example, by imposing a formulary of drugs chosen for the benefit of the population as a whole. Computers also allow a different way of achieving the greater good—for example, surveillance of prescribing to allow an audit of the prescriber. Provided this is linked to an appropriate change process, a cycle of quality improvement for the good of the local population can be achieved.

The area of patient wants is less well dealt with by computers, as is the case with value judgements generally. However, one area in which we can expect progress is that of the patients’ wants for information about their medicines, or the information required to help them make a choice between two different types of treatment. Provided this is linked to consultation with a knowledgeable prescriber, this function can be useful.


By understanding the nature of prescribing error, we can understand how to prevent it. Prescribing is an extraordinarily frequent act in health care. Hence, even though a small percentage of prescriptions may contain an error, these errors are frequent events. We estimated that, on a typical weekday from 09.00 hours to 18.00 hours, an inpatient prescription was written every 20 seconds in a teaching hospital.2 Prescribers had a 1.5% error rate, a quarter of which were potentially serious,2 although these were usually trapped by pharmacists before harm was done. The incidence of prescribing error in primary care is not really known; it is hard to interpret the studies in the literature because of differences in methodology and definitions of error, but it is probably higher than the incidence in hospital.

To find out why prescribers made errors we interviewed hospital doctors who made serious prescribing errors, then categorised the causes according to Reason’s theory of the causation of errors in an organisation.3 Reason proposed a nested series of causes of an error.4 First are the cognitive processes in the prescriber’s head; at the next level are the local conditions at the time of the error which might have contributed to it (tiredness, stress, etc); and finally there is the organisational climate in which the person was working, called “latent conditions” by Reason. The local conditions and organisational climate can all be significant contributors to an individual making an error.

Of the 44 serious prescribing errors considered in depth, 57% were inadvertent slips and lapses, 39% were “rule based” mistakes (not knowing the right rule for prescribing in that case), and 4% were violations (intentional breaking of rules). If this were the case generally, it suggests that over half of the serious errors did not occur because the doctor was ignorant of the appropriate rules; the errors were inadvertent. Most were slips and lapses, those mental “hiccups” which we all have, such as when we try to get into our home using our office key.

As most prescribers do not know that they have made an error, it follows that software must run constantly in the background to intercept slips and lapses. “Pull down” software which has to be accessed by the prescriber will not trap these errors; it will only be of use if the prescriber knows that he/she does not know what the correct prescription should be. This is a drawback of the Prodigy system developed by the NHS.

How could prescribing decision support software help? The most common form of prescribing error is an error in dosage. The best software support to eradicate this type of error would be some form of error trapping that identified abnormal or inappropriate doses. The introduction of some simple rule sets, such as dose adjustment in renal failure, would remove several of the remaining errors. The next level according to Reason’s theory concerns the error producing conditions. Here the most common factors are the work environment (particularly staffing), how individuals feel, their skills and knowledge, and issues within their clinical team such as communication and responsibility. Finally, issues of organisational culture—such as correct doses not being taught to junior doctors and the detail of prescribing not being seen as important—also play a part. It can be seen that computers have the ability to prevent some of these failings, but not most of them. It is obvious that education and training also have a large part to play in error reduction.


Different professions and academic tribes understand different things when they use the term “decision support”. In this paper we use the widest conception of decision support—that is, anything which stops bad decisions being enacted or improves the quality of decisions.

Computerised decision support has been around in medicine for over 30 years, commonly in the area of diagnosis and dose calculations. With the development of GP and pharmacy computer systems came software that, at its simplest, ensured a legible prescription in which the drug name was spelt correctly and the strength of the tablet was one that existed for that drug. In addition, software was introduced that checked for drug-drug interactions and sometimes for allergies or contraindications. Many individual programmes were devised for clinicians and pharmacists to do specific tasks—for example, the calculation of dose adjustments from pharmacokinetic principles or neonatal TPN (hyperalimentation) formulae.

There are interesting differences in the development of computer systems in prescribing in the UK and the USA. Before starting on them it is worth clearing up the language differences. In the UK “computerised prescribing” (the term I use in this paper) is generally taken to be prescribing by a physician using a computer and usually includes some sort of decision support. In the USA the separate terms “Computerised Physician Order Entry” (CPOE) and “Clinical Decision Support Systems” (CDSS) are used.

In primary care in the UK almost all surgeries use computerised prescribing and have done so for many years. In the USA it is much rarer in this setting. In contrast, hospitals in the USA are looking to acquire CPOE rapidly; in 1999 a survey suggested that one in eight hospitals had it and one in four were actively seeking it.5 However, the current proportion of hospitals with full implementation is unknown. In the UK four general hospitals currently have computerised prescribing throughout the hospital, although I recently conducted a one in five survey of hospitals in England and Wales and found one in three had computerised prescribing on at least one ward. Computerised prescribing in hospitals is high on the policy agenda in both countries.


Before examining the extent to which decision support systems have been proved to be effective, it is helpful to first discuss the evaluation of IT systems as it illuminates both the uncertainty of our current knowledge base and the design of software in the future. In doing so, it will become clear that many of the existing evaluations are limited.

The effectiveness of a computerised decision support system depends not just on the way it handles a patient’s data, but also on who uses it, under which conditions, to which ends. It needs to be tested by a range of staff in a range of settings. What is more, it is designed to improve some aspect of current performance of a system involving humans, so the effectiveness of the IT system depends on the ineffectiveness of the human system with which it is being compared. The benefits of an IT system may not be generalisable across different human systems of work. As the human systems of prescribing, dispensing and administration in the UK are different from those in the USA, we must assume that the effectiveness of any given computerised prescribing system in the UK will be different from its effectiveness in the USA.

Evaluation of an IT system is different from that of a medicine in several important ways. Most importantly, the IT system constantly changes and develops over time whereas a drug molecule does not. Consequently, in IT both formative and summative evaluations are needed. Formative ones help form the development process and summative ones are more formal assessments of the performance of a system at a given time. What is more, unlike a drug given to treat a disease, the many different stakeholders in an IT system may have different concepts of what performance is, and hence what the evaluation should assess. The clinical staff may be looking for a reduction in errors while the finance director may be concerned about the cost of implementation but also wants useful financial information out of the system. Chiefs of service may want to use it to keep track of what their staff are doing and to monitor troublesome staff; the microbiologists may want to use it to enforce antimicrobial policies on the surgeons, and so on.

Many evaluations spend more time looking for benefits than they do looking for harm, yet the very nature of good IT development involves failure and learning from that failure. Parnas et al6 wrote: “As a rule software systems do not work well until they have been used, and have failed repeatedly, in real applications.” This can be seen in the performance of the computerised prescribing system at Brigham and Women’s Hospital, probably the most famous and most evaluated system in the world. Six months after implementation, four of the eight types of error being monitored had worsened. Injectable potassium chloride errors became much greater and remained so for more than 2.5 years, although they had been brought under control by 4.5 years.7 In radiotherapy, tens of patients at a UK hospital died because staff did not understand that decision support software was already built into their system; consequently, they also applied a manual correction so patients were systematically underdosed in their cancer treatment. The fault occurred because staff did not understand how to use the technology.8 Evaluations of IT in health care must specifically look for harm.


There have been three important reviews of decision support in this area. A systematic review of the effects of computer decision support systems on doctors’ performance and patient outcomes conducted in 1998 found that 43 of 65 studies showed an improvement in the doctors’ performance.9 Fifteen of the programmes were intended to improve dosing and nine did so. Six of 14 studies on patient outcome in the previous 24 years showed an improvement; most of the others were underpowered.

The Cochrane review of computer advice on dosing to improve prescribing found 15 satisfactory trials in the previous 36 years and concluded that there were significant reductions in the time to achieve therapeutic control and a significantly shortened length of stay.10 The number of toxic levels and adverse drug reactions were also reduced, although one limit of the 95% CI of the difference hovered around zero.

A Cochrane review of the use of software (and other tools) to help patients make decisions on screening or treatment was published in 2003.11 Most of these systems were designed to be used by patients before meeting with a counsellor. Of 131 systems found, most of which were on the web, 30 had been evaluated. More complicated decision aids were generally more effective than simple ones. Knowledge and realistic expectations were increased, and both the proportion of patients remaining undecided and the extent of decisional conflict were reduced, but there was no effect on satisfaction, anxiety, or patient outcome.

An interesting insight comes from the Agency for Healthcare Research and Quality review of CPOE and CDSS in which it was noted that, of the primary studies, six out of eight were performed at three institutions, all with sophisticated “home grown” systems.12 One interpretation of this is that centres of excellence using committed able people can significantly reduce medication errors through the use of computers. We still need to know the effect of commercial systems on typical hospitals.

What we know from these reviews needs to be balanced by what we don’t know. Firstly, we have no knowledge of the tens or hundreds of decision support systems that were embarked on but failed. Secondly, it is likely that there are many systems in use which have not been formally evaluated. Thirdly, most of the studies have been conducted at single sites so we have no idea about generalisability. Finally, the software and the comparator (human) system are often so poorly described that we do not know whether the benefits would be realised if we purchased the system and installed it locally (indeed, as the version number of the software is usually lacking, we do not know whether we would be purchasing the same system or not). Probably one of the few conclusions we can draw is that, if capable people work on a local IT system and are properly funded, it will eventually be beneficial. Also, isolated systems to calculate a solution to a single problem (such as warfarin dosage or neonatal TPN) have a reasonable chance of being effective and, given the growth of hand held computing, may be available in the pocket of prescribers for those occasions on which they need them. Looking back at the Venn diagram of good prescribing (fig 1), it is clear that these work by improving the technical part of prescribing and hence will be most effective when local performance in this area is weak.


When designing decision support software it would make sense to start with understanding our needs—where and how the current system is failing. The first focus should be on the areas of most harm and most frequent errors. The product needs to be robust and to add to the quality of prescribing; it should also be developed with likely future advances in prescribing in mind. The seven suggestions shown in box 1 are a starting point for debate.

Box 1 Writing prescribing software to improve safety

  • Target high risk drugs and patients first

  • Start with simple error trapping software

  • Use agreed standards of software writing and development

  • Make it congruent with the wider view of good prescribing

  • Focus on what computers do well

  • Work to a patient focus in the future

  • Make it so doctors look forward to using it

Target high risk drugs and patients first

High risk drugs are those, usually with a narrow therapeutic index, that are often associated with harm such as digoxin, warfarin and methotrexate. As dosage errors are the most common form of error, the correct dose of these drugs must be a priority. High risk patients are those whose well being is highly dependent on their drugs being right or who are so “in extremis” that they have no reserve to cope with what, to other patients, may be a relatively minor adverse event resulting from an error. High risk patients include neonates, children, cancer patients and those on renal wards, cardiac units, and ITUs. In other words, we are placing our risk reduction system precisely at the riskiest part of the system rather than investing a great deal of effort and resources to cover a wide spectrum of low risk patients.
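This targeting could be expressed as a simple triage rule at the front of a prescribing system. The sketch below is illustrative only: the drug names come from the text, but the patient group labels and function names are assumptions, not a clinical reference.

```python
# Hypothetical triage rule: route prescriptions involving high risk drugs
# or high risk patients to the fullest set of decision support checks.
# Group labels are illustrative placeholders.

HIGH_RISK_DRUGS = {"digoxin", "warfarin", "methotrexate"}
HIGH_RISK_GROUPS = {"neonatal", "paediatric", "oncology", "renal", "cardiac", "itu"}

def is_high_risk(drug: str, patient_group: str) -> bool:
    """Flag a prescription for priority checking when either the drug
    or the patient group falls in a high risk category."""
    return (drug.lower() in HIGH_RISK_DRUGS
            or patient_group.lower() in HIGH_RISK_GROUPS)
```

The point of the design is economy: checking effort is concentrated where an error is most likely to cause serious harm, rather than spread thinly over low risk prescriptions.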

Start with simple error trapping programmes

If a dose exceeds twice the normal range, or falls below half of it, a warning should be displayed. These programmes should always be active and running in the background. The nature of medication errors is that the person making them is usually sure they are doing the right thing—hence they will not seek the use of a decision support package. Exceptions may be when the prescriber knows that there is a difficult calculation to perform or when he/she comes across an unfamiliar condition or drug. In these cases, information and guidance needs to be easily available.
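A background trap of this kind needs nothing more than the usual dose range for each drug and a comparison against the trap limits. The following is a minimal sketch under stated assumptions: the drug name and dose range are placeholders, not clinical values.

```python
# Hypothetical background dose trap: warn when a prescribed dose is more
# than twice the usual maximum, or less than half the usual minimum.
# The dose ranges are illustrative placeholders, not clinical data.
from typing import Optional

USUAL_DAILY_DOSE_MG = {
    "drug_a": (50, 100),  # (usual minimum, usual maximum) - illustrative only
}

def check_dose(drug: str, dose_mg: float) -> Optional[str]:
    """Return a warning string if the dose falls outside the trap limits,
    or None so the prescriber is not interrupted unnecessarily."""
    low, high = USUAL_DAILY_DOSE_MG[drug]
    if dose_mg > 2 * high:
        return f"{drug}: {dose_mg} mg exceeds twice the usual maximum ({high} mg)"
    if dose_mg < low / 2:
        return f"{drug}: {dose_mg} mg is below half the usual minimum ({low} mg)"
    return None
```

Because the check runs on every prescription without being invoked, it can catch the slips and lapses that the prescriber does not know they have made.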

Use agreed standards of software writing and development

There are many small start-up companies and “hobbyists” in the field of software writing. In some ways the market is similar to the production of pharmaceuticals in the early 1960s before the problems of thalidomide were known. Purchasers need to have confidence that the software has been developed using satisfactory standards. The roll out of the software should be linked to a programme of progressive evaluation that considers all aspects of performance. The product should be designed and evaluated with generalisability in mind.

Make it congruent with the principles of good prescribing

It should not diminish caring. A computer system needs to be designed and used so that it does not interfere with the wider goals of care. Computers should allow the prescriber to write the occasional placebo and not to record as an error a subtherapeutic dose of a drug which has knowingly been prescribed to maintain a relationship with an unstable patient. We know that many GPs prescribe things which they know are not “correct” in terms of pure pharmacology but for which they have a rationale based on the long term benefit to their patient. I am not convinced that we should try to stop this. What is more, I think the use of patient decision tools needs to be carefully addressed so that they do not become a substitute for a real person. We should not forget the benefits that accrue from a knowledgeable person engaging with a patient, listening to his or her problems, and giving advice. It is true that this does not generally happen all that frequently in prescribing, but the solutions to this are, I think, to tackle the cultural and managerial issues rather than to give up on humans and hand over the role to technology.

Focus on what computers do well

Computer programmes should focus on what computers do well. They are high reliability machines with great calculating power and the ability to store and relate large quantities of data. They are also generally a tool of control and, hence, of limited flexibility. This is seen as a strength by those who wish to control prescribers and as a weakness by many prescribers. However, in order for many of the benefits of a computer to be delivered, there is a need to start with a comprehensive set of data of high certainty. It is on this that many systems founder—for example, keeping an accurate list of diagnoses, medicines or protocols. Computers also have difficulty when “knowledge” is uncertain, as in many drug interactions. These are often based on case histories and theory, but the real probability of an adverse clinical outcome may be unknown. It also requires the knowledge base to be frequently updated, an expensive investment. Computers should not be seen as an automatic solution to human failings (the “no legs good, two legs bad” philosophy). Humans, like computers, have a large capacity to learn and to vary what they do. Before embarking on software design the potential costs and effectiveness of other human systems—training, better management, alternative skill mix, process re-engineering, etc—should also be considered.

Work to a patient focus

In the future there is (as there has been) great potential for computerised decision support:

  • The big development from the early days is that powerful processors are now carried to the patient by the doctor rather than the doctor having to go to the processor. Prescribers can use hand held systems that contain large amounts of knowledge and processing power. The use of radio networks based on IEEE 802.11 (WiFi) will allow even more data to be delivered to the prescriber’s fingertips. The need to use simplistic, slow, and inadequate programmes will be a thing of the past.

  • We will see a movement from databases based on the properties of drugs to patient centred databases which include the pharmacogenomic profile of the patient. This would allow the development of a personalised formulary which would hold the drugs which are more likely to work for that patient and less likely to cause adverse events when compared with the population as a whole. Several enzymes and carrier systems that affect the way the body handles a drug have been found to have a genetic component to their activity. For example, an important enzyme in the metabolism of several drugs is CYP2D6 which exhibits polymorphism with the result that poor metabolisers (who have a low level of activity of the enzyme) get toxic effects at normal dosage and rapid metabolisers find the drug ineffective. This knowledge could be incorporated into the patient’s profile to either personalise dosage or to avoid certain drugs.

    Key messages

    • Computerised prescribing should enhance the wider concept of good prescribing, not conflict with it.

    • It should not be assumed that the effectiveness of a computerised prescribing system in one country is any guide to its effectiveness in another.

    • The evaluation of information technology is complex and poorly performed in medicine.

    • Most evidence for the success of computerised prescribing comes from a few centres of excellence in the USA.

    • The literature overestimates the effectiveness of computerised prescribing and decision support.

    • Decision support software rarely focuses on the most frequent causes of errors.

  • The other area of potential growth is the conversion of effectiveness and risk data into an understandable form for patients (and sometimes prescribers). Patient safety is likely to be achieved through informed adherence to treatment resulting in fewer treatment failures and hospitalisations from non-adherence or preventable adverse events.
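The pharmacogenomic idea above—a personalised formulary informed by, for example, CYP2D6 metaboliser status—could be sketched as a simple rule layer. Everything below is an assumption for illustration: the drug names, the substrate list, and the dose adjustment factor are hypothetical, not clinical guidance.

```python
# Hypothetical sketch of a CYP2D6-aware dosing rule for a personalised
# formulary. Drug names, substrate membership, and the halving of dose
# for poor metabolisers are illustrative assumptions only.

# Drugs assumed (for illustration) to rely heavily on CYP2D6 metabolism
CYP2D6_SUBSTRATES = {"drug_x", "drug_y"}

def adjust_for_cyp2d6(drug: str, status: str, standard_dose_mg: float):
    """Suggest a dose or an alternative for CYP2D6 substrates.

    status: "poor", "normal", or "rapid" metaboliser.
    Returns (suggested_dose_mg, note).
    """
    if drug not in CYP2D6_SUBSTRATES or status == "normal":
        return standard_dose_mg, "standard dosing"
    if status == "poor":
        # poor metabolisers risk toxic effects at normal dosage
        return standard_dose_mg / 2, "reduced dose: poor metaboliser"
    # rapid metabolisers may find the drug ineffective
    return standard_dose_mg, "consider alternative: rapid metaboliser"
```

In a patient centred database, such rules would be driven by the individual’s profile rather than by population averages, which is precisely the shift from drug centred to patient centred data that the text describes.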

Make it so doctors want to use it

Finally—and this is only slightly tongue in cheek—software needs to be designed so that doctors salivate at the thought of using it and preen themselves having used it. If using the software is considered too much of a chore, it does not matter how good it is technically: it is unlikely to deliver the promised benefits in practice.


Computer supported decision making has been promising great things since the 1970s but has delivered far less than its promise. Technology has so often produced profound benefits that we tend to forget how much failure and trial and error there was in the early days. Computerised prescribing is common in primary care in the UK and is increasingly common in hospitals in the USA, but we are still largely in the dark about the effectiveness and safety of the decision support in these systems. From the perspective of patient safety, it may be more cost effective to target prescribing decision support on high risk patients and high risk drugs.

The use of computerised prescribing tends to be driven by the technology rather than by true patient safety need. The literature base supporting the system is often over interpreted. If computerised prescribing is to be implemented to make prescribing safer then, certainly in the UK, it needs to be rethought and emphasis placed on developments that trap or avoid known causes of harm.

The use of computerised prescribing to improve patient safety should not be seen as the only contribution of the system to quality. There are many potential benefits—for example, in the areas of information storage, access and transfer—which may produce safety and quality benefits for the patient and the healthcare provider. However, one of the most exciting areas is in improving the overall quality of prescribing. Of the three domains of good prescribing presented in fig 1, it has been shown how computers could contribute to the technical aspects and the greater good (through formularies, for example). The real benefits of quality in efficacy and, to some extent, in safety will come when the software helps patients work out whether they want a medicine and, if so, which one in which formulation and dose regime. The software should not replace a caring human but there is potential for many of the technical aspects of prescribing to be performed by computer, allowing the prescriber time to listen, guide and care for the patient.



  • This paper is adapted from a presentation given at the 2nd US/UK Patient Safety Research Methodology Workshop: Safety by Design held in Washington in 2003. The views expressed are those of the author.