Surveys in the context of pharmacy practice can best be described as the act of investigating the experiences, opinions, or behaviours of a group of patients or colleagues by asking them questions. While “knowledge, attitude and practices” surveys (the KAP model) have been used extensively in the social sciences for decades, surveys have also become a commonly used tool in healthcare to gather relevant information from patients as well as practitioners. A Medline search using the term “survey research in healthcare” for the period between December 2005 and December 2015 returned 45,667 entries.
The purpose of a survey
Survey research in pharmacy practice may be used for descriptive purposes only
, for example, collecting data to describe the demographic characteristics and risk factors of patients receiving influenza vaccination in community pharmacies
. It can also be used for investigating correlational or differential hypotheses — for example, Urbonas and Kubiliene used surveys to test whether a correlational relationship existed between job satisfaction and over-the-counter counselling practice at community pharmacies
and O’Brien et al. conducted a retrospective analysis of surveys to investigate possible differences in opinions regarding knowledge and skills needed for pharmacy practice between licensed pharmacists and new graduates or graduating senior pharmacy students.
Survey research can even be used to establish cause-effect relationships; for example, Gupta et al. assessed the impact of a pharmacist-led, web-based video presentation in increasing patients’ awareness of the importance of medication adherence.
However, while survey data alone are sufficient for descriptive, correlational and differential purposes, establishing a cause-effect relationship requires an experimental design: an active intervention by the pharmacist, and a control group with random assignment. Furthermore, while surveys can be used for collecting data both retrospectively and prospectively, experimental studies are by nature limited to prospective data collection.
Although retrospective surveys are helpful for collecting data from the past and covering long periods, the quality of the data relies on recall memory, which is prone to bias, particularly for emotionally intense events.
Prospective surveys can be cross-sectional or longitudinal — cross-sectional surveys collect data once at a specific point in time, while longitudinal surveys collect data multiple times over a specified period which can extend from months to years, or even decades. A cross-sectional survey allows the researcher to determine correlations or differences at a given time, while a longitudinal survey may reveal trends and provide hints on probable cause-effect relationships. For example, Papastergiou et al. collected cross-sectional survey data once from each participant at four different community pharmacy locations during a period of eight weeks
, while Willemsen et al. reported a 25-year survey study, in which each subject completed two or more surveys collecting information on parameters such as physical and mental health, lifestyle, and personality.
Time and cost are the biggest disadvantages of a longitudinal survey design and, unless it is completed over a short period, this design may not be practical for a community pharmacy. Regarding retrospective versus prospective surveys, while the latter offer the advantage of not being subject to recall bias, they may face the threat of response bias, and care must therefore be taken to minimise this possibility (see ‘Carrying out a survey’).
Designing, developing and validating a survey
While pharmacists may use surveys for a wide range of purposes, it is important to note that all survey research, like any other research, must meet the criteria for scientific “validity” and “reliability”
. In the context of research, the concepts of validity and reliability, while related, are distinct from each other. Validity reflects the degree to which the study succeeds in accurately measuring what it set out to measure (internal validity), and the degree to which the findings of the study can be applied to the real world (external validity or generalisability). Reliability refers to the degree of accuracy with which a specific variable can be measured repeatedly.
To assure quality, pharmacists must know the guidelines and recommendations regarding best practices for survey research and be mindful of the rigour that must be applied in the design, conduct and reporting of survey research so that the findings credibly reflect the target population and are a true contribution to the scientific literature
. The format of the survey, the number of questions and the scales of measurement should all be considered to ensure the survey is both valid and reliable.
When designing questions, it is important to be aware of the cognitive aspects involved in answering survey questions, such as making sense of the question and the role of memory and guessing in answering
. It is highly recommended that pharmacists make use of online self-guided survey development tools such as Survey Monkey and Survey Planet.
Regarding the length of a survey, a trade-off exists between the extent of information sought and the response rate. A short survey may increase participation but may not yield the required information. A longer survey covering more information may decrease the response rate below acceptable levels (see ‘Maximising response rates’).
Finally, ethical issues of confidentiality and anonymity must be considered in survey design and development.
Sampling frame and size
Potential survey participants are selected from the ‘sampling frame’, which is defined by a careful statement of inclusion and exclusion criteria. This task must be done in a manner that gives the highest probability of yielding the sample truly representative of the target population along important variables. For example, if the pharmacist wishes to target subjects with a particular characteristic such as being elderly, or being diabetic, then the sampling frame must focus on those areas where the probability of elderly or diabetic subjects is high.
The next step is to determine the sample size with sufficient statistical power. If the sample is too small, there may not be enough power to observe the desired phenomenon, but a sample larger than required is a waste of valuable resources. Study design, effect size, type of data, variance (a measure of how far each number in a data set is from the mean), the desired confidence level (the percentage of all possible samples that can be expected to include the true population value), the power (the likelihood that a study will detect an effect where an effect exists) and the margin of error all need to be taken into account when estimating the optimal sample size. The optimal sample size is directly proportional to the confidence level, power, and variance, and inversely proportional to the margin of error.
In general, the sample size should yield at least 95% confidence in results with 5% margin of error for a type I error (i.e. false positive), and with at least 80% power to detect a type II error (i.e. false negative). A number of online tools are available where pharmacists can compute the optimal sample size by completing desired values for these factors. For example, Dupont and Plummer Jr. have provided a free interactive programme that can be used for measurements on all scales common to surveys. The programme can determine the sample size needed to detect a specified effect with the required power, the power with which a specific effect can be detected with a given sample size, or the specific effect that can be detected with a given power and sample size.
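The arithmetic behind these factors can be illustrated with the standard Cochran formula for estimating a population proportion; this is a generic sketch for one common case, not the method used by the programme cited above, and the function name is illustrative only.

```python
import math

def sample_size_for_proportion(z=1.96, margin_of_error=0.05, p=0.5):
    """Minimum sample size needed to estimate a population proportion p
    within the given margin of error, at the confidence level implied by
    the z-score (z=1.96 corresponds to 95% confidence).
    p=0.5 is the most conservative choice, as it maximises the variance p(1-p)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# 95% confidence with a 5% margin of error
print(sample_size_for_proportion())  # 385
```

Note the inverse-square relationship with the margin of error: halving it to 2.5% roughly quadruples the required sample, to 1,537.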
Finally, if under-representation of specific populations, such as paediatric or geriatric patients, is expected, data should be collected with alternative methods. For example, patients aged over 75 years have been shown to visit the pharmacy to get their prescriptions filled, and this opportunity for personal interaction can be used to recruit them.
Carrying out a survey
While pharmacists may administer the survey on site, published reports in healthcare settings using this mode of survey administration are few. Most studies report off-site administration via post, telephone, or electronic (email and web-based) modes, and some also describe mixed methods.
Given advances in information technology and communication capabilities, electronic modes have become the most prevalent and offer the advantages of reduced costs, fast delivery and response, access to large populations and, in the case of web-based applications, direct data capture. Disadvantages of electronic modes include time-consuming initial development, coding and testing, sample bias due to computer illiteracy or non-availability, and data security concerns. Email surveys can also suffer from incorrect email addresses (which change more often than postal addresses), software incompatibility, and data download errors.
Telephone surveys are the least preferable mode for pharmacists given the practical constraints of time, personnel and financial resources. Nevertheless, since the majority of adults aged 18–34 years have mobile phones and use them extensively, random digit dialling of mobile phones was found to be a feasible methodology for surveillance of young adults.
Alternatively, respondents could fill out surveys on a computer provided by the pharmacist on site. While this approach eliminates software compatibility issues and permits direct data capture, it adds the burden of providing computers in a private location and of scheduling their use.
Finally, the old-fashioned method of posting surveys is still a viable mode of survey administration and in some cases has been found more effective at improving response rates than electronic modes.
Maximising response rates
By nature, data from survey methodology are entirely dependent upon subjects’ willingness to respond, and poor response rates can be a serious problem for a study’s validity
. While desired response rates for most research and for pharmacy are approximately 60% and 80% respectively, the average response rate in 490 survey studies published between 2000 and 2005 for journals in the US was 52.7%
. Among strategies reported to improve response rates are: pre-notifying participants about the survey and personalising it; publicising the survey; monitoring responses and sending reminders; promising feedback on results; and using monetary or non-monetary incentives. Although shortening a lengthy questionnaire has been shown to significantly increase the response rate, the trade-off between the value of additional questions and a larger sample should be considered
. Face-to-face interaction as a primer to set up the survey was also shown to be effective.
Regarding the mode of survey administration, the latest reports tend to support the best response rates with mixed-method approaches using both hard-copy and electronic survey collection methods.
Irrespective of the response rate, it is critical to either show that the non-respondents are not different from respondents in terms of important characteristics, or take into account the differences when interpreting the results. A statistical comparison of the responders and the non-responders on key parameters will reveal whether or not significant differences exist. It is also important to state the proportion of the questionnaire not completed, which is not the same as the response rate.
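As a sketch of such a comparison, a Pearson chi-square test on a 2×2 table of responders versus non-responders, split by a key characteristic, can flag a significant difference. The counts below are hypothetical, and the test is implemented in plain Python using the shortcut formula for 2×2 tables.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    using the shortcut formula n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: response status by age group
#                    responded   did not respond
# under 65 years         40            60
# 65 years and over      25            75
chi2 = chi_square_2x2(40, 60, 25, 75)
print(round(chi2, 2))  # 5.13, above the 3.84 critical value at p = 0.05
# with 1 degree of freedom, so response appears to differ by age group
```

A statistic above the critical value would suggest the respondents are not representative on that characteristic, and the difference should be taken into account when interpreting the results.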
Asking sensitive questions
Another area requiring special attention is collecting data on sensitive questions, such as those on sexual habits, illicit drug use, and other behaviours considered unlawful or socially undesirable. Typically, sensitive questions are either left unanswered or not answered truthfully
. While some studies report on possible ways of improving the response, such as using face-to-face interviews
or non-randomised response techniques
, it is best to keep the possibility of omission or incorrect answers in mind when interpreting the results.
With thanks to Anant Kishore for minor editing.
 Breese LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm 2014;67(4):286–291. doi: 10.4212/cjhp.v67i4.1373
 Papastergiou J, Folkins C, Li W et al. Community pharmacist-administered influenza immunization improves patient access to vaccination. Can Pharm J (Ott) 2014;147(6):359–365. doi: 10.1177/1715163514552557
 Urbonas G & Kubilienė L. Assessing the relationship between pharmacists’ job satisfaction and over-the-counter counselling at community pharmacies. Int J Clin Pharm 2015. doi: 10.1007/s11096-015-0232-y
 O’Brien CE, Flowers SK & Stowe CD. Desirable skills in new pharmacists: a comparison of opinions from practitioners and senior student pharmacists. J Pharm Pract 2015. doi: 10.1177/0897190015621804
 Gupta V, Hincapie AL, Frausto S et al. Impact of a web-based intervention on the awareness of medication adherence. Res Social Adm Pharm 2015. doi:10.1016/j.sapharm.2015.11.003
 Willemsen G, Vink JM, Abdellaoui A et al. The Adult Netherlands Twin Register: twenty-five years of survey and biological data collection. Twin Res Hum Genet 2013;16(1):271–281. doi: 10.1017/thg.2012.140
 Etchegaray JM & Fischer WG. Understanding evidence-based research methods: reliability and validity considerations in survey research. Health Environments Research Design 2010;4(1):131–135. doi: 10.1177/193758671000400109
 Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care 2015;4(3):324–332. doi: 10.4103/2249-4863.161306
 Draugalis JR, Coons SJ & Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Ed 2008;72(1):1–6. doi: 10.5688/aj720111
 Etchegaray JM & Fischer WG. Understanding evidence-based research methods: developing and conducting effective surveys. Health Environments Research Design 2010;3(4):8–13. doi: 10.1177/193758671000300402
 Schwarz N. Cognitive aspects of survey methodology. Applied Cognitive Psychology 2007; 21:277–287. doi: 10.1002/acp.1340
 Frisk P, Kälvemark-Sporrong S & Wettermark B. Selection bias in pharmacy-based patient surveys. Pharmacoepidemiol Drug Saf 2014;23(2):128–139. doi: 10.1002/pds.3488
 Jansen KJ, Corley KG & Jansen BJ. Chapter 1, E-Survey Methodology. In: Reynolds RA, Woods R, & Baker JD. (eds.) Handbook of research on electronic surveys and measurements. Idea Group Inc., Hershey, PA, USA, 2007. doi: 10.4018/978-1-59140-792-8
 Cope DG. Using electronic surveys in nursing research. Oncology Nursing Forum 2014;41(6):681–682. doi: 10.1188/14.ONF.681-682
 Gundersen DA, ZuWallack RS, Dayton J et al. Assessing the feasibility and sample quality of a national random-digit dialing cellular phone survey of young adults. Am J Epidemiol 2014;179 (1):39–47. doi: 10.1093/aje/kwt226
 Cho YL, Johnson TP & Vangeest JB. Enhancing surveys of health care professionals: a meta-analysis of techniques to improve response. Evaluation & the Health Professions 2013;36(3):382–407. doi: 10.1177/0163278713496425
 Fincham JE. Response rates and responsiveness for surveys, standards, and the journal. Am J Pharm Ed 2008;72(2):1–3. doi: 10.5688/aj720243
 Baruch Y & Holtom BC. Survey response rate levels and trends in organizational research. Human Relations 2008;61(8):1139–1160. doi: 10.1177/0018726708094863
 Sahlqvist S, Song Y, Bull F et al. Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial. BMC Med Res Methodol 2011;11:62. doi: 10.1186/1471-2288-11-62
 Kroth PJ, McPherson L, Leverence R et al. Combining web-based and mail surveys improves response rates: a PBRN study from PRIME Net. Ann Fam Med 2009;7(3):245–248. doi: 10.1370/afm.944
 Scott A, Jeon SH, Joyce CM et al. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol 2011;11:126. doi: 10.1186/1471-2288-11-126
 McPeake J, Bateson M & O’Neill A. Electronic surveys: how to maximize success. Nurse Researcher 2014;21(3):24–26. doi: 10.7748/nr2014.01.21.3.24.e1205
 Tourangeau R & Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133(5):859–883. doi: 10.1037/0033-2909.133.5.859
 Herbert DL, Loxton D, Bateson D et al. Challenges for researchers investigating contraceptive use and pregnancy intentions of young women living in urban and rural areas of Australia: face-to-face discussions to increase participation in a web-based survey. J Med Internet Res 2013;15(1):e10. doi: 10.2196/jmir.2266
 Wu Q & Tang ML. Non-randomized response model for sensitive survey with noncompliance. Stat Methods Med Res 2014. doi: 10.1177/0962280214533022