We were pleased to read the recent article by Greaves et al.1
outlining new methodological techniques to analyze patients' online
ratings of care. We agree with the authors that social media websites
represent a wealth of first-hand patient experiences with health and
healthcare, but have largely remained untapped by biomedical researchers -
especially to gain new insights into how to improve clinical care. We
concur that "big data" techniques such as machine learning and natural
language processing can be extremely powerful for synthesizing the large
amount of textual data on these sites.
However, our previous work has also suggested the importance of
traditional research methods applied to social media content. In
particular, qualitative analysis adds perspective to patients' online
dialogue where big data mining techniques perhaps cannot. In a
qualitative examination of primary care provider ratings on Yelp,2 we
analyzed 712 reviews of 455 doctors in four large urban areas (Chicago,
New York, Atlanta and San Francisco). We found that these provider ratings
often reflected the entire visit experience (i.e., parking, wait times,
front desk staff) rather than focusing solely on the clinical encounter
with the provider. Similarly, we recently qualitatively coded over 450
Twitter messages about cancer screening,3 and found that miscellaneous
tweets such as jokes or popular culture references could be distinguished
from the rich information about personal patient experiences with Pap
smears or mammograms. In both instances, these nuances in the online
content may have been missed by applying data mining or natural language
processing alone.
Therefore, we advocate using mixed methods approaches to analyzing
social media content about health and healthcare experiences, as these
techniques are inherently complementary to one another. Big data
approaches allow researchers to examine millions of messages to uncover
trends and overall sentiment in the online content, as well as to rank
the prevalence of specific discussion topics on social
media sites. However, in combination with qualitative analysis of a
carefully selected subsample of online content, the textual data can be
interpreted in light of additional context - allowing researchers both
breadth and depth in their work.
Moreover, not only should we aim to understand patient values and
preferences from the large amounts of publicly available dialogue on
social media, but we should also look to online social media as a means to
directly engage in this dialogue with patients. Because of the ease of
use and the speed of information dissemination, online social media
channels have become a cornerstone of everyday life, transforming the ways
that society shares ideas and beliefs, news, and information about
products and services among individuals and organizations. To be truly
patient-centered, healthcare providers and systems should play an active
role in communicating important health and healthcare messages through the
channels in which growing numbers of patients are already engaged.
Courtney R. Lyles, PhD & Urmimala Sarkar, MD, MPH
University of California San Francisco, Division of General Internal
Medicine
References
1. Greaves F, Ramirez-Cano D, Millett C, Darzi A, Donaldson L. Harnessing
the cloud of patient experience: using social media to detect poor quality
healthcare. BMJ Quality & Safety 2013.
2. Lopez A, Detz A, Ratanawongsa N, Sarkar U. What patients say about
their doctors online: a qualitative content analysis. Journal of General
Internal Medicine 2012;27(6):685-92.
3. Lyles CR, Lopez A, Pasick R, Sarkar U. "5 Mins of Uncomfyness Is Better
than Dealing with Cancer 4 a Lifetime": an exploratory qualitative
analysis of cervical and breast cancer screening dialogue on Twitter.
Journal of Cancer Education 2012.
We read the study by Durani et al (1) and the accompanying editorial
(2) with great interest. Aspiring to engage junior doctors in the safety
and quality movement is a noble aim but in doing so it is essential to
consider the influences of both the formal (explicit) curriculum and the
informal ('hidden') curriculum on doctors in training. Whilst Durani et
al's questionnaire may be useful for charting temporal trends in junior
doctors' knowledge of and attitudes towards patient safety, we would
caution against using subtle differences uncovered in trainees' responses
to inform the subsequent development of educational interventions. To do
so risks 'over-engineering' approaches to patient safety education and
neglecting the basics.
Our experience of implementing sustainable patient safety training
across a Foundation School ('Lessons Learnt: Building a Safer
Foundation')(3) has revealed two core ingredients for engaging junior
doctors in safety and quality improvement: as a minimum, training
providers must ensure i) a safe environment for junior doctors to raise and
act on safety concerns, and ii) basic instruction and opportunities in
patient safety and quality improvement for all junior doctors.
First and foremost, we argue that in order to engage junior doctors
in safety improvement, they need to feel safe in the
environment within which they work. Whilst informal discussions of safety
and quality issues by junior doctors are commonplace in the 'Doctors'
Mess' and at other social gatherings, structured and protected
opportunities to do so within teaching programmes are severely lacking.
Moreover, whilst leadership and quality improvement schemes described
by Lemer et al(2) are laudable, they invariably appeal to a self-selected
group and are not always accessible to all. In the UK, the latest guidance
from the General Medical Council emphasises doctors' duty to raise and
act on concerns about patient safety(4) and states that leadership and
management are a core role of all doctors,(5) not reserved for the
privileged few. To ensure equity of opportunity and to fulfil the
regulator's standards we need to ensure the provision of basic training
and opportunities for junior doctors across both the domains of patient
safety and leadership.
Through providing basic instruction in patient safety and integrating
facilitated case-based discussions of patient safety incidents (PSIs)
within the teaching programme, we have successfully created a springboard
for Foundation trainees' engagement in safety and quality improvement.(3)
Importantly, trainees are not passive recipients of the intervention but
active collaborators, with trainee 'Leads' at each site leading
local delivery of the programme and rising to the challenge of peer
leadership. Whilst we do not claim that our programme is a panacea for
engaging junior doctors in quality and safety, we do feel it is an
important first step in promoting wide-scale clinical engagement in the
quality movement.
Conflict of Interest:
The authors are part of a team who developed, implemented and evaluated 'Lessons Learnt: Building a Safer Foundation' - a patient safety training programme for Foundation trainees in collaboration with the North Western Deanery and the Imperial Centre for Patient Safety and Service Quality. The programme won the BMJ Excellence in Healthcare Education Award 2012.
The authors of the article 'Perceived Causes of Prescribing Errors by
Junior Doctors in Hospitals' published in the BMJ Quality & Safety on
30 October 2012 report that "the main task factor identified was poor
availability of drug information on admission (often out of hours)" and
"Systems which should aid prescribers were not always available (e.g. the
Emergency Care Summary was available, but the doctor did not have a
password for it)". The article postulates that had the information
contained in the Emergency Care Summary (ECS) been available, it would
have led to a decrease in errors.
The ECS is a national system of shared electronic records in Scotland
which enables up-to-date prescribing information from Primary Care systems
to be available to clinicians working in unscheduled care (i.e. Out of
Hours services, Ambulance, Emergency Rooms and Acute Receiving Units)(1). It was
designed to improve the information available when GP practices are
closed. At the time of the study, in 2011, ECS was not available for
junior doctors dealing with scheduled admissions in secondary care.
The lack of access to ECS in secondary care has been identified as a
critical patient safety gap and plans have been made to address this. New
developments to make the medication information in ECS available for all
patients in hospitals and outpatients are underway. In 2011, a pilot
project in Lanarkshire reported(2) that the use of ECS for medicines
reconciliation in Medicine for the Elderly, Orthopaedic admissions and
Surgical day cases was found to be helpful by all users. A review of 31
cases found 119 discrepancies between the medicines information in ECS and
the referral letter, an average of 5 per patient, as the average length of
time between referral and pre-assessment was 110 days. The ECS records
were accessed by nursing staff and pharmacists carrying out medicines
reconciliation, and the system was felt to be so beneficial that it was
agreed to extend the use of ECS within secondary care, using the Clinical
Portals(3) to provide secure identity and event-based governance(4).
The article states that "problems with inadequate quality medicines
information at admission to hospital were highlighted. It is
disappointing to see that measures such as the ECS which have been
designed to tackle this very issue by providing an up to date list of
patient's medicines are not working (many doctors said that they did not
have access to the Emergency Care Summary)". We would like to correct
this statement: since its inception, the ECS was specifically designed
to improve care out of hours and was not available to hospital doctors for
planned admissions. Medicines reconciliation (a process by which the most
recent and accurate sources of information are used to create a full list
of medicines for a patient) has been a major priority for the Scottish
Patient Safety Programme, which has helped to make the case for
extending use of ECS for this purpose.
Significant developments are underway to extend access to ECS for all
clinical users and eHealth developments such as the Clinical Portals will
mean that ECS accounts and separate passwords will not be required in the
longer term.
Dr Libby Morris, eHealth Clinical Lead, Scottish Government Health
and Social Care Directorate and GP, Hermitage Medical Practices, 5
Hermitage Terrace, Edinburgh, EH10 4RP
Dr Ian M Thompson, Chair, Emergency Care Summary Service Board and
GP, East Linton Surgery, Station Road, East Linton, East Lothian, EH40 3DP
Jonathan Cameron, Programme Manager/ Interim Head of Project
Management, National Information Systems Group, NHS National Services
Scotland, Gyle Square, 1 South Gyle Crescent, Edinburgh EH12 9EB
Conflict of Interest:
LM and JC were responsible for managing the ECS as a development project. IMT is the clinical chair for ECS as a business as usual service.
I appreciated Shearer et al's recent article in BMJQS[1]; it brings
to light the debilitating effects of ill-placed social and cultural
influences and of the professional hierarchies evident in all hospitals. The
issues identified from the research further validate the necessity for a
systems approach when dealing with clinical risk management[2]. That said,
mandating rapid response systems (RRS) as part of hospital protocols
should not be so quickly dismissed as an ineffective avenue to improving
RRS effectiveness. It is not a new concept that workplace culture has much
to do with clinical efficacy and patient safety; in fact, Leape and
Berwick[3] pinpoint culture as a significant barrier to progress in
patient safety and highlight the necessity for dramatic changes to occur
as the next step to achieving higher standards. Unequal relationships
exist within the healthcare team, and inter-occupational hierarchies
between doctors and nurses impede the flow of information[4], as does the
seniority of doctors over their junior staff[5]. This element of fear
created by an institution's structure (fear of reprimand by senior
staff, fear of failing to meet expectations, and fear of judgement from
others) acts alongside a clinician's medical knowledge in determining
whether or not to call for help in poorly defined clinical situations, or
whether to activate the RRS protocol even when the patient fulfils the defined
criteria. Standardisation of processes is an acceptable and widely
employed mechanism for the prevention of errors[6]. Sliding scale insulin
dosing and perioperative antibiotic protocols were adopted by institutions
to produce significant improvements in patient safety, much of which
younger clinicians, like myself, take for granted these days. Once a
rapid response team has been successfully established, an effective way to
ensure that patients receive a high standard of care when their status is
deteriorating is not only to educate and train staff on the RRS, but also
to make such a protocol compulsory.
At present, readily available and easily identifiable criteria exist
to guide the management of specific diseases, and to minimise variations
in clinical judgement within an institution. A mandated RRS protocol would
provide a similar opportunity to remove the pressure of judging the risk
involved in tricky clinical situations and the fear of repercussions
associated with initiating the call. I imagine, like those sliding scales
and antibiotic protocols, once implemented, a compulsory RRS protocol
would seem like second nature. Explicit criteria for determining when one
needs help and how to access that help may serve as a means to
effectively overcome the negative implications of workplace culture, as
well as inter- and intra-professional hierarchies[4].
1. Shearer B, Marshall S, Buist MD, et al. What stops hospital
clinical staff from following protocols? An analysis of the incidence and
factors behind the failure of bedside clinical staff to activate the rapid
response system in a multi-campus Australian metropolitan healthcare
service. BMJ Qual Saf 2012;21:569-75.
2. Kohn LT, Corrigan JM, Donaldson MS. Chapter 8: creating safety
systems in health care organizations. In: To Err Is Human: Building a
Safer Health System. Washington, DC: National Academy Press; 1999:134-174.
3. Leape LL, Berwick DM. Five years after To Err Is Human, what have
we learned? JAMA. 2005;293(19):2384-90
4. Mackintosh N, Sandall J. Overcoming gendered and professional
hierarchies in order to facilitate escalation of care in emergency
situations: the role of standardised communication protocols. Soc Sci Med.
2010;71(9):1683-6
5. Stewart J. To call or not to call: a judgement of risk by pre-
registration house officers. Med Educ. 2008;42(9):938-44
6. Leape LL. Error in medicine. JAMA. 1994;272(23):1851-7
We read the article on discharge summaries by Mohta et al with
interest. We passionately believe that we must keep trying innovative
methods to improve the quality of this most important handover document of
care. Earlier this month, our audit evaluating the extent to which all
fields in the electronic discharge summary template are completed with
relevant information revealed that trainees had failed to complete some of
the most important fields in the template. We then interviewed doctors of
different seniority in our hospital to find the reasons for this practice.
We also interviewed GPs to confirm what they want in these summaries.
Based on the results, we now intend to implement three interventions:
(1) trainees will print randomly selected summaries they have completed
and use them for case-based discussions (CbDs) with their supervisors for
their e-portfolios, giving them an opportunity for feedback from senior
consultants; (2) we will place a large sticker on the top of the case
record so that clinicians can note any important clinical event as it
happens, and the person completing the discharge summary will make sure
that all events on the sticker form part of the summary at the time of the
patient's discharge; (3) a formal training module on discharge summaries
will be delivered at induction, on the first day the trainee joins the
Department. It will be interesting to see the results when we close the
loop on this audit.
I appreciated seeing an introduction to analysis of means (ANOM) by
Mohammed and Holder. As stated in their article, the technique is not well
known, but nonetheless I would like to encourage people to learn this
useful graphical display to compare groups. I have been using this method
in healthcare improvement work (1,2) and would like to share a couple of
lessons learned over the years.
The proportion ANOM chart should meet most of your needs. First,
continuous data can be converted to a proportion. For example, length
of stay (LOS) greater than 2 days can be used in the proportion ANOM chart
to compare groups (such as hospitals or providers), rather than using LOS
in the continuous ANOM chart. Second, the proportion ANOM chart is easier
to use because it is essentially the p-chart (a statistical process
control chart), which people may already be familiar with. The only
difference between the ANOM for proportions and the p-chart is that the
control limits on the ANOM chart are not set at 3 sigma; they are adjusted
to account for the number of groups being compared. The best reference I
have seen on ANOM is the book by Balestracci & Barlow.(3)
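To illustrate the relationship between the p-chart and the ANOM chart for
proportions, here is a minimal Python sketch (not taken from the Mohammed
and Holder article or the references cited here). It uses a Bonferroni-adjusted
normal quantile as a stand-in for the exact ANOM critical value, so the limits
are approximate, and the hospital counts at the end are made up for illustration.

```python
# Approximate ANOM-for-proportions decision limits: structurally a p-chart,
# but the limits are widened to account for the number of groups compared.
# The Bonferroni-adjusted z below approximates the exact ANOM critical value
# (published ANOM tables, e.g. Nelson's h, give slightly different limits).
import numpy as np
from scipy.stats import norm

def anom_proportions(events, totals, alpha=0.05):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    k = len(events)                         # number of groups being compared
    p_bar = events.sum() / totals.sum()     # overall (grand) proportion
    z = norm.ppf(1 - alpha / (2 * k))       # adjusted for k comparisons
    se = np.sqrt(p_bar * (1 - p_bar) / totals)
    lower, upper = p_bar - z * se, p_bar + z * se
    rates = events / totals
    outside = (rates < lower) | (rates > upper)   # candidate special causes
    return p_bar, lower, upper, outside

# Hypothetical example: proportion of stays longer than 2 days at 5 hospitals
stays_gt_2 = [12, 30, 25, 8, 40]
admissions = [60, 100, 90, 55, 120]
print(anom_proportions(stays_gt_2, admissions))
```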
Another issue with the ANOM chart is the denominator size: you need the
right size for these charts to be most helpful. A good rule of thumb is a
denominator of at least 5 divided by the overall proportion (5/p-bar). If
you are comparing LOS greater than 2 days across hospitals and 25% is the
overall rate, then each hospital will need to have 21 or more patients
(5/0.25 = 20). If there were only 10 patients in each group, then the
control limits would be too wide and may not yield useful information.
Besides too few patients in the denominator, another issue is too many
patients. If your subpopulation of patients is 1,000, then you have 50
times more patients than needed, and you may have many groups crossing the
limits, which most likely include Type 1 errors and are also not useful
information. The primary purpose of the ANOM chart is to find the hospital
(or whatever you are comparing) that is performing outside the system
result, because there will be an opportunity to learn from the hospital
that is performing beyond the others.
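A small sketch of the 5/p-bar rule of thumb, using the 25% length-of-stay
rate and the 1.5% mortality rate discussed in this letter (the helper
function itself is mine, added only for illustration):

```python
import math

def min_denominator(p_bar, target=5):
    """Smallest group size n such that n * p_bar >= target (the 5/p-bar rule)."""
    return math.ceil(target / p_bar)

print(min_denominator(0.25))   # LOS > 2 days, 25% overall rate -> 20 per group
print(min_denominator(0.015))  # 1.5% mortality rate -> 334 per group
```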
One last lesson to keep in mind is that the ANOM chart can be of limited
use if you are analyzing rare events or a proportion that is less than 10%.
For example, suppose the mortality rate (MR) for a specific procedure is
1.5% and you want to see whether there is a difference across 15 hospitals,
but on average only 100 patients per year have the procedure. Using the
5/p-bar rule, you would need 334 or more patients in the denominator, so
you would need 3.3 years of data. You may have the data, but most likely
you will only find a low performer, which can be motivating information
for the low-performing hospital; more useful information is finding a
hospital that is doing this well. If a hospital had 0% MR, then the ANOM
chart will not show the 0% crossing the lower limit with 334 patients in
the denominator. The hospital would need 2.5 times more patients
(835 with 0% MR) before the result would be significantly different.
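To see why 334 patients are not enough to flag a hospital with 0%
mortality, the approximate lower decision limit can be computed directly.
The Bonferroni-style adjustment below is only an approximation of the
exact ANOM critical value, so the thresholds may differ slightly from the
figures quoted above:

```python
import numpy as np
from scipy.stats import norm

p_bar, k, alpha = 0.015, 15, 0.05      # 1.5% overall MR across 15 hospitals
z = norm.ppf(1 - alpha / (2 * k))      # approximate ANOM critical value

for n in (334, 835):
    lower = p_bar - z * np.sqrt(p_bar * (1 - p_bar) / n)
    status = "0% would be flagged" if lower > 0 else "0% would not be flagged"
    print(f"n = {n}: lower limit = {lower:.4f} ({status})")
```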
If ANOM is not in your analytical toolbox, I would highly suggest
learning more about this method, since the graphical display does so
effectively what statistical process control charts are supposed to do:
point out the difference between common cause and special cause variation.
1. Homa K. Analysis of means used to compare providers' referral
patterns. Quality Management in Health Care 16(3): 256-264, July/September
2007.
2. Homa K, Kirkland KB. Determining next steps in a hand hygiene
improvement initiative by examining variation in hand hygiene compliance
rates. Quality Management in Health Care 20(2):116-121, April - June 2011
3. Balestracci D Jr, Barlow JL. Quality Improvement: Practical
Applications for Medical Group Practice. 2nd ed. Englewood, CO: Center for
Research in Ambulatory Health Care Administration; 1998.
There is a paucity of papers focused on the sustainability of
improvement projects, and the authors and the VA are to be
congratulated on sharing less-than-positive results so that we can all
learn.
The quality improvement collaborative (QIC) process is excellent in
raising awareness of issues, training staff in QI techniques and in
mobilising action to improve. With all methods there are some gains and
losses. The positive contribution of the QIC process needs to be balanced
with the lack of deep behavioural or system change - and this matters when
it comes to sustainability. The continual focus on technical measurements
in QIC allows participants to displace the need for behavioural changes.
Also, because the process is team focused rather than organisation
focused, system changes are difficult to make.
Just because sustainability drops off after the program does not mean
it is not a good program. It may, however, mean that to achieve
sustainability, additional parallel support is required or the QIC needs
to be redesigned in content and structure.