“My AI scribe stopped working first thing this morning,” says Yunzheng Jiao, principal pharmacist in research and clinical trials at Dudley Group NHS Foundation Trust.
Jiao uses ambient voice technology (AVT), also known as an ambient scribe, which relies on AI to take notes during conversations with patients.
“My first patient was late. After our consultation, I realised the AI did not transcribe our conversation. It just kept looping, round and round,” he recalls. “I tried three or four times to fix it but couldn’t. I thought, ‘Oh god, what do I do now?’”
In the end, Jiao restarted the system and discovered the file was corrupted and unusable. “I’d relied too much on the transcription,” he says. “Luckily I still remembered what we’d discussed.”
By the time he had rewritten his notes, Jiao was 40 minutes behind for his next patient.
Jiao is one of many pharmacy professionals now using ambient scribes to automate clinical note-taking. Tools such as Heidi (Heidi Health), Accurx Scribe and Tortus ‘listen’ to consultations and use AI to transcribe and summarise conversations into structured notes or letters. These tools promise to ease the administrative burden of documentation and free up clinicians to spend more time directly with patients1. In pharmacy, they have been shown to streamline operations, reduce errors and improve patient care2,3.
The wins are big, but the tools' failures and misreadings of human nuance are less well documented.
“Sometimes the transcription […] sounds funny, or it doesn’t make sense. It can miss emphasis or tone, especially with patients who have strong accents or talk softly,” says Jiao.
Patients generally understand and appreciate the use of technology
Dervis Alkan Gurol, director of Sussex Pharmacies
Yasmin Karsan, clinical safety officer and medical device consultant, adds that these types of AI tools can occasionally generate inaccurate details. “I was told about a patient who was sitting with a clinician discussing their inhaler technique. The patient said, ‘I have a Seretide inhaler’, but the AI summarised it as, ‘I use my Seretide inhaler two times a day.’ That’s factually incorrect.”
A 2024 evaluation also found that summaries can “misgender” patients and “mistake critical details in transcription”4.
Both Jiao and Karsan believe accountability still sits squarely with them: as clinicians, they are responsible for reviewing and correcting notes. In its guidance on ambient scribe technology, published in April 2025, NHS England states that liability “remains complex and largely uncharted, with limited case law to provide clarity”.
For now, any mistake is likely to fall on the clinician or their employer.
Many of these tools are already embedded in NHS and private settings across the UK, often under self-certified Medicines and Healthcare products Regulatory Agency (MHRA) classification. Yet pharmacy-specific regulation and professional standards on their use lag behind; as one BMJ article warns: “We are in danger of being swept up in the promise of ambient scribes, without considering wider and longer-term effects.”5
The General Pharmaceutical Council (GPhC) has yet to issue any guidance on AI use and, while the Royal Pharmaceutical Society (RPS) released its own recommendations in January 2025, questions remain around liability and safe implementation, even as such tools become routinely used.
Automation in action
Ambient scribes use a form of generative AI known as large language models (LLMs), the same technology behind ChatGPT and Gemini (Google), to convert speech into structured clinical notes. Despite occasional technical errors, Jiao calls the technology a “game-changer”.
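In outline, the pipeline has two stages: a speech-to-text model transcribes the consultation, then an LLM maps the transcript onto a structured note. The sketch below illustrates that shape only; the function names, note fields and stubbed outputs are illustrative and do not reflect the internals of any named product, which use proprietary models.

```python
# A minimal sketch of an ambient scribe pipeline. The transcribe() and
# summarise() functions are hypothetical stand-ins for proprietary models.

from dataclasses import dataclass


@dataclass
class ClinicalNote:
    """An illustrative structured note; real products define their own schemas."""
    history: str
    examination: str
    plan: str


def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text model; returns raw consultation text."""
    return "Patient reports using a Seretide inhaler. Technique reviewed."


def summarise(transcript: str) -> ClinicalNote:
    """Stand-in for an LLM call that maps free text onto a structured note.
    Real scribes prompt a model; here the fields are hardcoded for illustration."""
    return ClinicalNote(
        history=transcript,
        examination="Inhaler technique observed.",
        plan="Continue current inhaler; review in three months.",
    )


if __name__ == "__main__":
    note = summarise(transcribe(b"...audio..."))
    # The clinician must still verify every field before signing off.
    print(note)
```

The key design point is the hand-off at the end: the system produces a draft, and responsibility for checking it remains with the clinician.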
Several studies across healthcare settings show that staff morale improves and burnout falls after AVT is introduced, largely because clinicians spend more time with patients and less time on paperwork2,6–8. Meanwhile, pharmacy workflow systems that use AI, such as Titan (Invatech Health), a patient management record system incorporating AI clinical checks, are reducing dispensing errors and freeing up clinical capacity.
Survey data suggest the public supports the use of AI systems: in June 2024, a UK survey of 7,201 people found that just over half (54%) backed the use of AI in healthcare, rising to 61% when it is used for administrative purposes.
Jiao adds that patients have not objected to its use in consultations.
Dervis Alkan Gurol, director of Sussex Pharmacies, agrees, adding: “I have a clip-on microphone and explain that AI will take notes. Patients generally understand and appreciate the use of technology.”
Karsan says uptake of these tools is growing as clinicians welcome the workload relief, but she warns of a danger of over-reliance. “Some clinicians trust the tool after five correct consultations,” she says. This ‘automation bias’ creeps in when clinicians start trusting a system too much because it has worked well in the past; some studies express concern that it may lead clinicians to accept what the ambient scribe tells them without critiquing it or applying professional judgement9,10.
But Gurol isn’t worried about over-reliance. “I don’t think we can say that a clinician’s memory or thinking quality will reduce,” he says. “On the contrary, when I’m taking notes, I’m having to stop occasionally to think, am I writing this right? That affects my consultations with patients.”
Companies developing AI software are taking this into account. Jack Tabner, general manager at Accurx, a software platform that has developed an ambient scribe, says they’re being “thoughtful” about some of the potential unintended consequences.
“The product is designed to encourage users to read through their notes,” he explains. “If you try to complete a scribe and haven’t spent long enough on the page — meaning you couldn’t possibly have read it properly — a warning pops up asking, ‘Have you read through this? Have you checked it’s accurate?’”
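The logic Tabner describes can be sketched simply: if the time a user spends on the draft is implausibly short for its length, prompt them to confirm they have read it. The sketch below is a hedged illustration of that idea only; the threshold, reading speed and function names are invented, not Accurx's actual implementation.

```python
# A hypothetical sketch of a minimum-read-time safeguard: warn the user if
# the note could not plausibly have been read in the time spent on the page.

WORDS_PER_SECOND = 5  # assumed generous upper bound on reading speed


def needs_read_warning(note_text: str, seconds_on_page: float) -> bool:
    """Return True if the review time is too short for the note's length."""
    min_read_time = len(note_text.split()) / WORDS_PER_SECOND
    return seconds_on_page < min_read_time


if __name__ == "__main__":
    draft = "Patient uses a Seretide inhaler twice daily. " * 15  # ~105 words
    if needs_read_warning(draft, seconds_on_page=4.0):
        print("Have you read through this? Have you checked it's accurate?")
```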
Safeguards such as this may help reduce user error, but they don’t resolve the bigger ethical questions.
Ethical challenges of AI
Stephen Goundrey-Smith, a consultant in pharmacy informatics, believes the main ethical challenges with using AI-powered technology in healthcare are not new. “Privacy is a problem with all digital systems, conventional as well as AI ones,” he says, with questions about data and bias also presenting challenges. What is new, however, is the magnification of these issues, as AI becomes less transparent and harder to check, making “accuracy totally and utterly vital”, he says.
We don’t have the experience yet to know what the problems are ethically with specific systems
Stephen Goundrey-Smith, a consultant in pharmacy informatics
“We don’t have the experience yet to know what the problems are ethically with specific systems,” Goundrey-Smith adds.
This sentiment is echoed by Abi Eccles, senior researcher in digital health at the University of Oxford, and colleagues, writing in the BMJ: “Until further evidence is available, clinicians should use the technology with diligence and caution”5, particularly given the limited formal guidance and longer-term evidence on their use.
Some companies are taking a proactive approach to building AI systems responsibly. Tariq Muhammad is chief executive of Invatech Health, which developed technology to automate dispensing, known as ‘Titan’.
According to Muhammad, Titan automates around 80% of the dispensing process, leaving pharmacists to focus on the 20% that need closer attention. “Because the clinical check is repetitive, Titan AI can learn from millions of transactions across pharmacies,” he explains.
“In many ways, we’re improving patient safety. We’re saving pharmacists from brain rot and instead showing them a handful of things that actually need their focus.”
To mitigate risk, Titan uses layered oversight: random sampling, pharmacist review and independent safety panels. “It’s a living, breathing system, not just something you switch on and forget,” Muhammad says.
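The random-sampling layer Muhammad mentions can be illustrated with a short sketch: a fixed fraction of AI-checked items is routed to a pharmacist for independent review. The sampling rate and record fields below are assumptions for illustration, not Titan's actual design.

```python
# A minimal sketch of random-sampling oversight: route a fraction of
# AI-approved dispensing records to a pharmacist for independent re-check.
# SAMPLE_RATE is hypothetical, not a figure from Invatech Health.

import random

SAMPLE_RATE = 0.05  # assumed: re-check 5% of automated clinical checks


def route_for_review(records: list[dict]) -> list[dict]:
    """Return the subset of AI-approved records a pharmacist should re-check."""
    return [r for r in records if random.random() < SAMPLE_RATE]


if __name__ == "__main__":
    records = [{"id": i, "ai_check": "pass"} for i in range(1000)]
    for record in route_for_review(records):
        print(f"Pharmacist review queued for record {record['id']}")
```

Sampling like this gives a continuous measure of the automated check's error rate, which is what lets such a system be monitored as “living and breathing” rather than switched on and forgotten.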
Like many developers, he describes Titan as “a tool, not a replacement”.
This ‘human-in-the-loop’ approach offers reassurance for clinicians and patients — but not quite full protection. Matthew Boyd, professor of medicines safety at the University of Nottingham and vice chair of the Pharmacy Law and Ethics Association, believes all pharmacists must understand AI functionality and how and where data are stored. “The underpinning data pool informing decision-making must be really sound,” he says. “Governance is critical.”
Regulation arena
However, governance only works within a clear regulatory framework, and that framework is still catching up with the pace of AI development for tools such as AVT.
Great Britain follows the UK Medical Devices Regulations 2002 (UK MDR), which set out essential requirements for safety and performance. The MHRA, which oversees these regulations, indicated in June 2025 that AI software with a medical purpose, as defined in the regulations, is likely to qualify as a ‘medical device’.
If AI software has a ‘medical purpose’ as defined in the UK MDR, it is likely a medical device
Hadi Shahidipour, a national clinical lead and senior regulatory specialist
Hadi Shahidipour, a national clinical lead and senior regulatory specialist, who oversees medical device technical documentation and compliance for NHS England, says it’s a “myth” that AI software used for healthcare purposes isn’t regulated.
“If AI software has a ‘medical purpose’ as defined in the UK MDR, it is likely a medical device,” he explains.
“Most AI has sufficient complexity to meet this definition,” he adds, although some international standards still need updating.
Medical devices are classified from class I to class III according to risk. Most AI or health IT medical devices — including Heidi, Titan and Accurx Scribe — are currently class I, which is the lowest risk category and only requires companies to self-certify. However, the MHRA plans to update medical device regulation in 2026, which could lead to the reclassification of some devices, meaning they may require independent audit before they can be registered. In October 2025, it also announced a National AI Commission to shape recommendations for future regulatory oversight.
Shahidipour notes that technical files for class I devices can range from “very good” to “non-existent” in quality because, essentially, “you’re checking your own homework”.
Muhammad shares this concern, warning that self-certification could lead to “many new products… with poor safety standards”.
Yass Omar, head of legal and regulatory affairs at ambient scribe company Heidi Health, admits that moving into higher-risk classifications is “time-consuming, resource-intensive and costly”.
“It’s a balance,” he says. “We want to build innovative, impactful technology, but we have to work closely with regulators and government to do it safely.”
In spring 2024, the MHRA launched the AI Airlock programme, allowing health technology companies to test their AI tools in a sandbox environment under regulatory supervision. Karsan was a stakeholder on that programme and calls it a “positive step” to help inform future regulation. The first pilot ended in April 2025, with a report pending and a second phase planned for 2026.
Goundrey-Smith adds that it will be crucial to “join the dots” with regulations and professional standards.
“There’s work to be done to ensure our professional standards retain integrity in an AI world,” he explains. “For example, what does it mean for me as a pharmacist to keep data confidential if the system I’m using could disclose it to a third party without my control?”
Who is responsible?
While the UK is committed to technological innovation, as outlined in the NHS ten-year health plan, the question of liability remains unanswered. In 2024, study results suggested that clinicians risk becoming a “liability sink” for AI, absorbing responsibility from AI systems much like a ‘heat sink’ absorbs excess heat11.
NHS England’s current guidance on ambient scribes, published in April 2025, states that if no specific liable party can be established — or if a supplier “lacks sufficient coverage” — liability may default to the NHS trust or primary care provider, which holds a “non-delegable duty” to ensure patient safety and quality of care.
NHS Wales guidance, published in August 2025, emphasises safe AVT adoption without mentioning liability, while NHS Scotland advises clear contracts defining roles and liability, and consulting medical defence bodies when in doubt.
For now, pharmacists rely largely on professional judgement. The RPS has said that the GPhC will include AI deployment in future professional standards and, in October 2025, called for digital and AI skills to become a core competency for all healthcare professionals.
Amareen Kamboh, head of pharmacy workforce for NHS Hampshire and Isle of Wight, believes education is crucial. “It’s our responsibility… to equip the workforce with the necessary skills for the future,” she says.
“At the same time, we can’t forget the current workforce, many of whom don’t yet have those skills.”
Patients are central to the conversation surrounding AI in healthcare, yet evidence of direct impact is still limited. A spokesperson from Patient.info, a patient-facing health platform, said it had not yet seen reported cases where AI use in pharmacy consultations has directly caused problems around confidentiality or consent, but warned that “these are real risks if the technology is adopted without clear safeguards”.
Insurance considerations add another layer of complexity. Keith Bryceland, principal at Segment Risk, said that the growing use of AI “raises important questions around professional accountability” and warned that “pharmacists will remain accountable for their clinical decisions, even when AI is involved”.
Karsan points out that, while AI can make mistakes, so can humans. “At what point and at what risk appetite are you accepting the fallibility of AI?” she asks.
Kamboh concurs. “AI can make things more efficient and safer — but if an error occurs, it can feel scarier because it feels out of our control,” she says.
“As a patient, if something goes wrong and you hear ‘It’s down to the machine’ — it doesn’t carry the same understanding as human error.”
For developers, this brings a level of moral responsibility. Muhammad envisions a future where “AI is essential, not optional”.
“It’s clear the primary care system is broken, so pharmacists have an opportunity to plug that gap,” he says. “They’re capable and qualified now as prescribers. But we must work differently to meet demand, using technology like AI to scale services safely.”
Ultimately, it is down to the individual pharmacist, and their employer, to decide how best to use AI tools while regulations and guidance continue to develop. In the meantime, pharmacists must weigh the risks against the rewards. As the spokesperson for Patient.info puts it: “Pharmacists work in a trusted role at the frontline of patient care, and it’s essential that any use of AI tools […] meets the same standards of confidentiality, transparency and professional judgement as traditional practice.”
References
1. Sasseville M, Yousefi F, Ouellet S, et al. The impact of AI scribes on streamlining clinical documentation: a systematic review. Healthcare. 2025;13(12):1447. doi:10.3390/healthcare13121447
2. Duggan MJ, Gervase J, Schoenbaum A, et al. Clinician experiences with ambient scribe technology to assist with documentation burden and efficiency. JAMA Netw Open. 2025;8(2):e2460637. doi:10.1001/jamanetworkopen.2024.60637
3. González-Pérez Y, Montero Delgado A, Martinez Sesmero JM. [Translated article] Introducing artificial intelligence to hospital pharmacy departments. Farmacia Hospitalaria. 2024;48:TS35-TS44. doi:10.1016/j.farma.2024.04.001
4. Bundy H, Gerhart J, Baek S, et al. Can the administrative loads of physicians be alleviated by AI-facilitated clinical documentation? J Gen Intern Med. 2024;39(15):2995-3000. doi:10.1007/s11606-024-08870-z
5. Eccles A, Pelly T, Pope C, Powell J. Unintended consequences of using ambient scribes in general practice. BMJ. 2025;390:e085754. doi:10.1136/bmj-2025-085754
6. Nambudiri VE, Watson AJ, Buzney EA, et al. Medical scribes in an academic dermatology practice. JAMA Dermatol. 2018;154(1):101. doi:10.1001/jamadermatol.2017.3658
7. Tierney AA, Gayre G, Hoberman B, et al. Ambient artificial intelligence scribes to alleviate the burden of clinical documentation. NEJM Catalyst. 2024;5(3). doi:10.1056/cat.23.0404
8. Olson KD, Meeker D, Troup M, et al. Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Netw Open. 2025;8(10):e2534976. doi:10.1001/jamanetworkopen.2025.34976
9. Coiera E, Kocaballi B, Halamka J, Laranjo L. The digital scribe. npj Digital Med. 2018;1(1). doi:10.1038/s41746-018-0066-9
10. Kocaballi AB, Ijaz K, Laranjo L, et al. Envisioning an artificial intelligence documentation assistant for future primary care consultations: a co-design study with general practitioners. Journal of the American Medical Informatics Association. 2020;27(11):1695-1704. doi:10.1093/jamia/ocaa131
11. Lawton T, Morgan P, Porter Z, et al. Clinicians risk becoming “liability sinks” for artificial intelligence. Future Healthcare Journal. 2024;11(1):100007. doi:10.1016/j.fhj.2024.100007