Applications of AI in pharmacy practice

Generative artificial intelligence (AI) has the potential to transform many aspects of the healthcare sector. This article looks at how AI can be used in pharmacy practice.

Artificial intelligence (AI) — “the science and engineering of making intelligent machines”​1​ — has been present for decades in many forms, including machine learning (creating models from existing data) and deep learning (predictions and decision making from data)​2​. However, it is generative AI (deep-learning models that create original content) that is suddenly transforming many sectors including healthcare, business and society. This is owing to its ability to automate complex tasks, generate new data and provide innovative solutions to longstanding challenges​2​.

In the UK, there are many situations where AI is already changing healthcare practice. For instance, AI is currently being trialled in breast cancer screening, after studies demonstrated an increase in detection without an increase in staffing time​3–5​. Another example is an AI tool aimed at identifying patients at risk of falls or clinical deterioration as part of home care visits​6​; a study showed that a predictive tool was able to anticipate falls with 83% accuracy, demonstrating the potential of AI to enable early, preventative interventions​7​. Both examples are possible owing to the vast amount of data available to train computer models to create algorithms, identify patterns and predict outcomes. 

In pharmacy, AI has proven to be effective at streamlining operations, reducing errors and improving patient care​8​. It can be used in prescription verification, automated dispensing and forecasting stock needs, as well as ‘chatbots’ that provide customer care​9,10​. Generative AI can also expedite the drug discovery process by analysing vast amounts of data, identifying potential drug candidates, predicting efficacy and safety, and reducing costs​11​.

Outside of specific AI tools or enhanced programmes, open generative AI also offers benefits for pharmacists in their everyday tasks, such as creating communications and summarising reports. In 2025, the Royal Pharmaceutical Society (RPS) published its policy on the use of AI in pharmacy, which supported the responsible and effective use of AI, outlining the challenges and opportunities that this technology presents​12​.

This article provides an introduction to generative AI and explores some of its potential applications in everyday pharmacy practice.

Policy and regulatory context

In the UK, current AI policy and guidance in healthcare emphasises ethical standards, data protection and patient safety, while aiming to promote the use of AI to drive efficiency gains. Regulators are expanding their scope, with the Medicines and Healthcare products Regulatory Agency (MHRA) regulating AI-based software as medical devices, tailoring its requirements to the level of risk associated with the technology​13​. The Care Quality Commission (CQC) has a role in monitoring the implementation of AI in healthcare. To support adoption, the MHRA and CQC have collaborated with the National Institute for Health and Care Excellence (NICE) and the NHS Health Research Authority on the AI and digital regulations service (AIRDS) for health and social care​14​. This provides a central point for regulatory information and support for “both developers and adopters of AI and data-driven technologies”​14​.

Currently, there is no explicit guidance on the use of AI from the General Pharmaceutical Council (GPhC); however, the standards for pharmacy professionals can be applied to the use of AI:

  • Ensuring that any use supports person-centred care;
  • Using continuing professional development to stay informed about AI and how to integrate tools responsibly into practice;
  • Ensuring that AI systems used comply with data-protection regulations and using AI in a way that respects patient confidentiality;
  • Using professional judgement to interpret AI and make final decisions based on clinical experience.

As AI software for health is regulated by the MHRA, pharmacists should report any safety concerns or adverse incidents to it under the Yellow Card scheme​15​.

How generative AI works

Generative AI is built on a ‘foundation model’: a model trained on an extensive dataset. This is commonly a large language model (LLM), trained on a huge amount of text from wide-ranging sources, including books, articles, websites and social media platforms​2​. During training, the model learns links, relationships and patterns between data points, resulting in a network of information​2​. This network allows the model to respond almost instantly to queries and, from its knowledge points, create a response to a question (e.g. ‘What are the symptoms of impetigo?’) or a request (e.g. ‘Suggest some names for a health check campaign’).

In this way, it can return curated information that it believes to be factually relevant, as well as being creative with its outputs and suggestions if your request requires it. The examples given here are very simple, but it can create responses to complex requests and can be interacted with through conversation to refine outputs.

The more training and feedback the AI model receives, the more accurate its outputs become. Like many tools, the more competent the user is at interacting with the AI model, the better the responses will meet the user’s needs. This is discussed further below, in the section on ‘Prompt engineering’.
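The pattern-learning idea described above can be illustrated with a deliberately simplified sketch. Real LLMs use neural networks trained on vast corpora; this toy Python example (illustrative only, not how any production model works) merely counts which word follows which in a tiny ‘training’ text, then generates text by repeatedly predicting a likely next word:

```python
import random
from collections import defaultdict

# Toy 'training data': a few sentences about a single topic
training_text = (
    "impetigo causes red sores . "
    "impetigo causes itchy blisters . "
    "impetigo is a skin infection ."
)

# 'Training': record which word follows which in the text
links = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    links[current_word].append(next_word)

def generate(start_word, length=5, seed=0):
    """Generate text by repeatedly choosing a plausible next word."""
    random.seed(seed)
    output = [start_word]
    for _ in range(length):
        followers = links.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate("impetigo"))
```

Because “causes” follows “impetigo” twice in the training text but “is” only once, the model is more likely to continue with “causes”; scaled up enormously, this is the sense in which more (and better) training data shapes the patterns a generative model reproduces.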

There are also foundation models for images, video, sound and music, meaning that several types of output requests are possible​2​. Some AI products will support multiple types of output — for example, ChatGPT and Microsoft Copilot produce both text and images​16​ — whereas others are limited to a specific type of output; for example, Voicebox uses speech synthesis to produce realistic human voices​17​.

Open versus closed generative AI systems

AI systems can be classified as open or closed source. Open-source AI systems (e.g. TensorFlow, PyTorch) are models and software that are available to anyone to use, modify or improve. This openness has helped to democratise access to the technology and fosters collaboration and further innovation. Being open source allows users to audit and review the AI system they are using, but can leave it more open to abuse by hackers or those with malicious intent. Quality-control methods also vary between open-source projects, which could lead to errors or bugs. These systems require significant set-up, customisation and integration before use, so open-source models are best suited to people with technical skills or those looking to develop them.

Closed-source AI systems (e.g. Google Assistant) are the opposite. The model and software are proprietary and accessible only by the company and its developers. This protects the asset for the company; however, there is reduced transparency of the code for the user. Often monetised and quality controlled, they tend to be more reliable. Closed-source systems are preconfigured and can be used with minimal expertise.

Limitations of generative AI

As generative AI is dependent on the foundation model that it is built upon, the quality of its output relies on the quality of this data. Data inputs can be biased, and this bias can feed into outputs​18​. By making links between data points, a model may generate false or misleading information, termed ‘hallucinations’: outputs that do not align with reality because the connections made are incorrect or misleading (but may seem plausible)​19​. There have been several cases of AI generating fake legal cases and court filings, as well as incorrectly stating basic facts​19​. Hallucinations are often presented convincingly, making them challenging to spot for users who lack subject knowledge. The outputs produced by generative AI are also inconsistent, with models producing different responses to the same question asked at different times (non-determinism). This variability can reduce user trust and, at present, makes generative AI unsuitable for critical applications, such as clinical decision-making.

Ethical and legal issues regarding AI use 

Aside from the legal cases relating to bias and misinformation in output generation, there are some other considerations when using generative AI. Just as a chef’s sharp knife can be misused as a weapon, generative AI can be used for malicious purposes. For example, it can be used to produce fake healthcare communications to gain the trust of others, or its data model could be attacked by inserting data that distorts the model and its output, known as an ‘AI poisoning attack’​20​. The responsibility for legal and ethical use of AI and verification of outputs lies with the user.

There are also ethical considerations around the use of personal healthcare data within AI models. Although generative AI products all carry privacy notices, it is important to note that the data are stored and used by systems to further train the models. Data could therefore be used outside their original intent and scope, breaching data privacy and patient confidentiality. For some documents and text, this could infringe copyright if used in the training model.

Free-access AI products can leave businesses open to more risk than products with a business licensing or subscription agreement that have defined data policies governing use of inputted data. For this reason, the RPS has suggested that “pharmacists should exercise caution in sharing patient or any personal information with third-party AI tools”​12​. This presents a dilemma within healthcare, as new, AI-driven solutions are likely to require the use of patient data to train the model. There is also ambiguity around whether consent to care extends to the use of AI in care​21​. This poses a challenge to the rules on “collection, use, and disclosure of personal data”​21​. Pharmacists should be transparent about when AI is employed in patient care, and patients should be given information about its role in their diagnosis and treatment. 

Although various codes of ethics have been proposed by a number of organisations, there is currently no globally unified approach. The UK has signed an international treaty addressing the risks of AI, and UK legislation is expected in the near future​22​. The EU AI Act, passed in 2024, may have some impact on UK businesses whose outputs are used in the EU, such as collaborative projects and research. At the time of writing, the UK’s focus remained on the overall principles and the sector-specific guidance and regulation outlined previously.

How pharmacists might apply generative AI to aspects of their work

There are several ways that generative AI outputs can assist with everyday tasks. Some examples include:

  • The ‘jumping-off point’. AI can provide a starting point for completing tasks. Its output will often require further input or reworking, but it can save time on tasks such as creating training plans, job descriptions and strategies. Other examples include asking the AI to create patient cases to use with trainee pharmacists or to provide initial suggestions on how to structure a report;
  • Summarising content: 
    • If time is short, AI can assist in catching up with missed meetings, even if minutes have not yet been produced. If there is a meeting transcript available to upload, AI can create a one-page summary, key points from each speaker and a list of action points;
    • It is also useful for summarising documents; for example, when stepping in for a colleague at short notice and being unable to read the preparatory papers in full. The AI can summarise content, helping to prioritise which documents to read in full or which might contain the most relevant information to skim through. Depending on the system being used, it may also extract any data in the papers and offer the ability to interact with them;
  • Minuting meetings. There are add-on AI products that can run in online meetings to turn transcripts into automated minutes. When using these, ensure that actions, owners and due dates are clearly articulated in the meeting after each discussion point so that the tool can pick them up;
  • Research. AI products, such as Paperguide, can assist with literature reviews and interaction with research papers. Common AI tools, such as ChatGPT, are also expanding their functions with deep-research options, which take more time to generate returns, but these returns are more in-depth and will pull in research relevant to the query​23​;
  • Data analysis. AI can potentially speed up the analysis of any dataset, from the basic ‘analyse data’ feature of Microsoft Excel to more sophisticated analytical software used for insights and predictive analytics;
  • Crafting communication. As seen in example 4 (below), generative AI can assist with writing emails, reducing the time needed to produce detailed wording and tailoring it to the audience. This might be particularly useful where a rushed attempt at an email might set the wrong tone, or when writing to a colleague who has been particularly challenging. Another approach is to write your own email and paste it into the AI, asking it to make it sound more professional, authoritative or friendly (depending on your needs);
  • Automated communication. Patients can be set up to receive medication adherence reminders, educational resources and patient information related to their condition.

Although it can feel as if AI has already made big advances, it is a rapidly evolving area. New features, improvements and add-ons to existing software are being made available on a daily basis, and it is highly likely that use of AI will rapidly increase and steadily affect more of our working lives.

Prompt engineering and the use of dialogue to refine requests

For those just starting out with generative AI and wanting to make efficient use of its outputs, it is worth learning a little about ‘prompt engineering’: that is, crafting effective requests through the words and information that you provide.

An alternative approach is to interact with the generative AI about the request itself. The user may follow the prompt ideas above or simply describe their needs while telling the AI how many questions it can ask to refine the request. The AI will then decide what it most needs to know to produce the required output.
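As a simple illustration of structuring a written prompt, the sketch below assembles a request from a role, context, task and output format. This four-part pattern is one commonly suggested approach rather than a formal standard, and the function and field names are illustrative:

```python
def build_prompt(role, context, task, output_format):
    """Assemble a structured prompt from its component parts.

    The four-part structure (role, context, task, format) is one
    commonly suggested pattern for prompts, not a formal standard.
    """
    return (
        f"You are {role}. "
        f"Context: {context} "
        f"Task: {task} "
        f"Format: {output_format}"
    )

# Hypothetical pharmacy example of a filled-in prompt
prompt = build_prompt(
    role="a community pharmacist writing for patients",
    context="a local flu vaccination campaign is launching next month",
    task="suggest five short, friendly campaign slogans",
    output_format="a numbered list",
)
print(prompt)
```

Spelling out the role, context and desired format in this way tends to produce outputs that need less reworking than a bare, single-sentence request.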

This approach to prompt engineering focuses on written outputs. A similar process is required for image generation, but with prompts that are more aligned to photography terms. Microsoft Copilot has a useful quick guide​16​, but if more guidance is required on the style of image to request, the DALL·E 2 prompt book may help​24​.

Referencing the use of AI

If quoting AI-generated information or including an AI-generated picture, it is good practice to reference the AI. According to Microsoft Copilot, items to include in the reference are​25​:

  • The AI tool’s name and version;
  • The interaction date;
  • A descriptive title that captures the query or task.
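As an illustration, the three items above can be combined into a single reference string. The format used in this sketch is hypothetical; citation styles vary, so follow your publisher's or employer's house style where one exists:

```python
def format_ai_reference(tool_name, version, interaction_date, title):
    """Build a reference string for AI-generated content.

    The ordering and punctuation here are illustrative only; actual
    citation styles differ between publishers and organisations.
    """
    return f"{tool_name} ({version}). '{title}'. Interaction on {interaction_date}."

# Hypothetical example based on the items listed above
ref = format_ai_reference(
    tool_name="Microsoft Copilot",
    version="May 2025 release",
    interaction_date="1 May 2025",
    title="Response to query on referencing AI output",
)
print(ref)
```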

It is the advice of the Modern Language Association that, as “AI tools cannot take ethical or legal responsibility for their output or enter into legal agreements”, they should not be considered an author of work​26​. However, manuscripts or reports can state how AI was used in the work. Publishers and businesses may have their own guidance on transparent use of AI.

Best practice

  • Check if your place of work has an AI policy for you to work within. If it does not have one already, it is time to introduce one. Advice is available from the government’s AI playbook​27​ and the ‘AI governance policy for NHS provider organisations’​28​;
  • Consider the information that you are sharing in your interactions with AI and whether it is appropriate to do so. Do not enter patient information or sensitive business information, and do not use colleagues’ names when generating correspondence;
  • If looking for assistance with clinical topics, remember that AI can hallucinate and produce biased information. Always confirm clinical information that will affect a patient with reliable evidence-based medicines resources;
  • Assist the AI with prompts to refine your request.
  1. Grewal PDS. A Critical Conceptual Analysis of Definitions of Artificial Intelligence as Applicable to Computer Engineering. IOSRJCE. 2014;16(2):09-13. doi:10.9790/0661-16210913
  2. Stryker C, Kavlakoglu E. What is artificial intelligence (AI)? IBM. 2024. Accessed July 2025. https://www.ibm.com/think/topics/artificial-intelligence
  3. Department of Health and Social Care, Department for Science, Innovation and Technology, NHS England, Peter Kyle MP and Wes Streeting MP. World-leading AI trial to tackle breast cancer launched. UK government. February 2025. Accessed July 2025. https://www.gov.uk/government/news/world-leading-ai-trial-to-tackle-breast-cancer-launched
  4. Hernström V, Josefsson V, Sartor H, et al. Screening performance and characteristics of breast cancer detected in the Mammography Screening with Artificial Intelligence trial (MASAI): a randomised, controlled, parallel-group, non-inferiority, single-blinded, screening accuracy study. Lancet Digit Health. 2025;7(3):e175-e183. doi:10.1016/s2589-7500(24)00267-x
  5. Ng AY, Oberije CJG, Ambrózay É, et al. Prospective implementation of AI-assisted screen reading to improve early detection of breast cancer. Nat Med. 2023;29(12):3044-3049. doi:10.1038/s41591-023-02625-9
  6. Nationwide roll out of artificial intelligence tool that predicts falls and viruses. NHS England. March 2025. Accessed July 2025. https://www.england.nhs.uk/2025/03/nationwide-roll-out-of-artificial-intelligence-tool-that-predicts-falls-and-viruses/
  7. Heger T, Windle N, Bucci M, Prando G, Maruthappu M. MT12 Developing and Applying Predictive Models to Identify Patients at Increased Risk of Falling in Domiciliary Care. Value in Health. 2024;27(12):S489. doi:10.1016/j.jval.2024.10.2482
  8. González-Pérez Y, Montero Delgado A, Martinez Sesmero JM. Acercando la inteligencia artificial a los servicios de farmacia hospitalaria [Bringing artificial intelligence to hospital pharmacy services]. Farmacia Hospitalaria. 2024;48:S35-S44. doi:10.1016/j.farma.2024.02.007
  9. Bu F, Sun H, Li L, et al. Artificial intelligence-based internet hospital pharmacy services in China: Perspective based on a case study. Front Pharmacol. 2022;13. doi:10.3389/fphar.2022.1027808
  10. Clark M, Bailey S. Published online January 1, 2024. http://www.ncbi.nlm.nih.gov/books/NBK602381/
  11. Kant S, Deepika, Roy S. Artificial intelligence in drug discovery and development: transforming challenges into opportunities. Discov Pharm Sci. 2025;1(1). doi:10.1007/s44395-025-00007-3
  12. Artificial Intelligence (AI) in Pharmacy. Royal Pharmaceutical Society. 2025. Accessed July 2025. https://www.rpharms.com/recognition/all-our-campaigns/policy-a-z/ai#refs
  13. Impact of AI on the regulation of medical products. Medicines and Healthcare products Regulatory Agency. April 2024. Accessed July 2025. https://www.gov.uk/government/publications/impact-of-ai-on-the-regulation-of-medical-products/impact-of-ai-on-the-regulation-of-medical-products
  14. Understanding regulations of AI and digital technology in health and social care. NHS AI and digital regulations service for health and social care. Accessed July 2025. https://www.digitalregulations.innovation.nhs.uk
  15. What to report to the Yellow Card scheme. Medicines and Healthcare products Regulatory Agency. Accessed July 2025. https://yellowcard.mhra.gov.uk/what-to-report
  16.
  17. Introducing Voicebox: The first generative AI model for speech to generalize across tasks with state-of-the-art performance. Meta. 2023. Accessed July 2025. https://ai.meta.com/blog/voicebox-generative-ai-model-speech/
  18. Shedding light on AI bias with real world examples. IBM. 2023. Accessed July 2025. https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples
  19. Sidhu BK. Hallucinations in Artificial Intelligence: Origins, Detection, and Mitigation. Int J Sci Res. 2025;18(1):8-15. https://www.ijsr.net/getabstract.php?paperid=SR241229170309
  20. Henderson B. Top 10 AI Attacks Health Care Technology Professionals Need To Know! Microsoft Tech Community. April 2024. Accessed July 2025. https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/top-10-ai-attacks-health-care-technology-professionals-need-to-know/3989431
  21. Corfmat M, Martineau JT, Régis C. High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare. BMC Med Ethics. 2025;26(1). doi:10.1186/s12910-024-01158-1
  22. Ministry of Justice and Shabana Mahmood MP. UK signs first international treaty addressing risks of artificial intelligence. UK government. 2024. Accessed July 2025. https://www.gov.uk/government/news/uk-signs-first-international-treaty-addressing-risks-of-artificial-intelligence
  23. Introducing deep research. OpenAI. 2025. Accessed July 2025. https://openai.com/index/introducing-deep-research
  24. The DALL·E 2 Prompt Book. Dallery Gallery. 2022. Accessed July 2025. https://dallery.gallery/the-dalle-2-prompt-book
  25. Response to query on referencing AI output. Microsoft Copilot. May 2025. Accessed July 2025. https://copilot.microsoft.com
  26. Policy on Generative AI. Modern Language Association. 2025. Accessed July 2025. https://www.mla.org/Publications/MLA-Book-Publications/Resources-for-Contributors/Policy-on-Generative-AI
  27. Artificial Intelligence Playbook for the UK Government (HTML). Government Digital Service. 2025. Accessed July 2025. https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html
  28. Artificial Intelligence (AI) Governance Policy for NHS Provider Organisations. Innovate Health Consulting Ltd. 2025. Accessed July 2025. https://wmidsimagingnetwork.nhs.uk/wp-content/uploads/2025/05/AI-Governance-Policy-for-NHS-Providers_Ver2.0_InnovateHealthConsultingLtd.pdf
Citation
The Pharmaceutical Journal, PJ, July 2025, Vol 315, No 7999;315(7999)::DOI:10.1211/PJ.2025.1.364330
