Artificial Intelligence (AI) Questions and Answers

The ISMPP Artificial Intelligence (AI) Task Force provides answers to questions received from ISMPP members. Below are questions-and-answers from the AI plenary session and workshops at the ISMPP 20th Annual Meeting, April 28-May 1, 2024.

Email [email protected] to submit your questions related to AI in medical publications and communications.

Some journals allow for the use of AI in publication development if appropriately disclosed; are reviewers made aware of the use of AI in the pubs they are reviewing? If so, do you think that disclosure biases reviews?

Reviewers *should* be made aware of AI use in the publications they are reviewing, as disclosure is recommended per journal requirements and ICMJE guidance.

What is the best way for small medical communication groups to begin learning how to incorporate AI to write first drafts?

We suggest starting slowly. Try AI on discrete manuscript or abstract sections, such as the background or introduction (non-proprietary information only for "external" AI tools), or the methods and other sections (using "internal" or private AI systems). Experiment with prompts that generate summaries of published literature, and practice on already-published manuscripts to understand output nuances before moving to full manuscript drafts.

ICMJE/WAME guidelines ask for full disclosure (and sometimes publication in the manuscript itself) for all prompts and outputs generated. Personally, I would look mad if I wrote out my full conversations with a chatbot! Does the AI Task Force think these guidelines, whilst encouraging transparency, are also themselves barriers to medical writers experimenting with and using large language models (LLMs)?

We don't believe this is a barrier. The AI Task Force interprets the ICMJE requirement to disclose prompts as reporting the prompts used when applying AI to data collection, analysis, or figure generation. Generally, if AI generated new content, it is best practice to keep (and report if required) the prompts used to generate the content. This reinforces our call to practice with AI ahead of using it with submitted drafts.

We’re struggling with copyright challenges in terms of using AI to write summaries of open access and published data — is the workaround partnerships with the journal? Are there other things we’re missing?

When summarizing content, regardless of whether done entirely by human or assisted by AI, the same copyright rules apply to the accountable human.

I work for a large pharma company with its own AI tools and high restrictions on the use of public tools; however, we've seen an increase in agency partners using tools like Zoom's AI tool, etc., and sometimes they turn them on before we (the client) are made aware. Any best practices for addressing incongruency between agency practices and client guardrails?

Using AI during an online call is analogous to recording the call; therefore, it is best practice to obtain acknowledgement before using AI on a call. If you are not comfortable with AI being used during a call, feel free to ask that it be turned off.

In terms of disclosure, where wording states "use of AI should be disclosed," where do you draw the line on "use" of AI? Content generation is an obvious yes, but what about just prompting for ideas, or summarizing a disease area for background?

When AI generates content included in the publication, it is important to disclose this. If AI is used for a brainstorm, but no content (edited or not) from the AI is included in the publication, disclosure may not be needed. For example, using AI simply to educate yourself may not need to be disclosed. However, if AI is used as part of the research methodology, such as generating a list of publications as part of a review, then it would be important to disclose its use per ICMJE guidance.

The opposite of the question above: I work for an agency and would never use AI without talking to clients first, but do you have any advice on how to approach those conversations? It may require escalation to procurement teams to grant that permission, and they may demand reduced prices, which is another massive barrier.

On an individual basis, have a conversation about the concerns and benefits of using AI: confidentiality, privacy, and speed or efficiency gains. Discuss the value of using AI and how to mitigate the risks of its use. AI is an evolving landscape, and we are still evaluating how its use translates into time and cost savings.

How do we encourage professionals to learn about AI and how it can benefit their outputs?

Experiment with AI in a safe environment. Have AI open at every opportunity and have fun with it to see how AI might or might not benefit you. ISMPP has considerable educational opportunities about AI and has more planned for its members.

Have you received any feedback on a document created using AI from the end user/audience (i.e., publishers, patients, regulators)?

ISMPP member research has compared the readability of AI- and human-generated documents; most platforms can generate output comparable to human writing. However, AI still makes mistakes and therefore necessitates a human in the loop.

Other than ChatGPT, Gemini, etc., are there any other AI tools you would recommend?

The AI Task Force will soon publish an analysis of helpful AI tools.

Everyone wants to explore AI, but trepidation persists. What can we do to encourage more test-and-learn scenarios?

Practice and experiment with AI continuously and remember to "keep a human in the loop" to understand AI's limitations.

How can we mitigate the risk of biases from the underlying data used to train AI models?

Test the AI output and remember that final accountability for content lies with the human authors. Also, be mindful of how specific your prompt is: more detail may help mitigate bias, but you still need to verify the output.

What is the best resource to learn how to best create AI prompts?

Helpful resources include the Prompt Engineering Guide and the Gemini for Google Workspace Prompt Guide.

How can we measure the efficacy of AI-driven solutions in improving medical communication outcomes?

Use measures such as time saved on specific tasks, readability scores, error rates, and accuracy. As more use cases are discovered, the key performance measures for those use cases will become apparent; they will be project- and situation-specific.
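One of the measures mentioned above, readability scoring, can be automated. As an illustration only (not a tool endorsed by the Task Force), the sketch below computes the widely used Flesch Reading Ease score, where higher scores indicate easier reading. The vowel-group syllable counter is a naive approximation; dedicated readability libraries use more robust methods.

```python
import re

def count_syllables(word):
    # Naive heuristic: count groups of consecutive vowels as syllables.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    # Roughly account for a silent final "e" (e.g., "measure").
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)  # every word has at least one syllable

def flesch_reading_ease(text):
    # Flesch Reading Ease:
    # 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Comparing scores for an AI draft and a human draft of the same content is one quick, repeatable check; it measures surface readability only, not accuracy, so human review is still required.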