The Web and Digital Experience team at Monash University Library explains how an AI‑driven prompt assistant was created to strengthen voice consistency across multiple authors.
The use of generative AI tools is becoming increasingly common in editorial processes for content publishing. In 2025, the Monash University Library web team experimented with Copilot as a tool for achieving a consistent voice across content written by a decentralised authorship team for an online audience. While Copilot could generate content quickly, the tone and voice of its output were regularly inconsistent with the Library's existing editorial standards.
Maintaining a consistent editorial voice across multiple authors was a challenge, whether the content was written by staff or drafted with AI assistance. Copilot has no inherent understanding of our web writing conventions, our editorial voice, or our preferred spellings for specific Library-related words. Any solution we came up with needed to support authors' expertise rather than replace it.
To address this, we developed a structured prompt briefing document that staff could use directly within Copilot. This document, referred to internally as the AI Prompt Assistant, was designed to reduce editorial inconsistency among content authors without requiring new software, technical training, or major changes to existing workflows. It supported a decentralised model of authorship and eased the bottleneck created by all content having to be reviewed by a central editorial team.
In practice, the AI Prompt Assistant is a text-based document containing instructions and context for Copilot, including a copy of the current web editorial guidelines reformatted so that a large language model can interpret them efficiently. Instead of having to restate editorial requirements in each prompt, staff provide the guidelines upfront by uploading the document once, when they first start using Copilot to assist with their work.
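As a purely illustrative sketch (the wording below is hypothetical, not a quotation from the Library's actual document), a briefing of this kind might open with instructions such as: “You are an editorial assistant for Monash University Library web content. Apply the attached web editorial guidelines in every response: write in plain, accessible English; use the Library's preferred spellings for Library-specific terms; and when you suggest an edit, note which guideline it follows.” The remainder of the document would then reproduce the guidelines themselves in a form the model can follow reliably, for example as short, clearly headed lists rather than long narrative paragraphs.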
This document effectively turns Copilot into a domain-specific editorial assistant with relevant, up-to-date contextual information. Staff remain responsible for subject matter and decision-making, but are now supported by a tool that is pre-aligned with the Library's communication standards. This has proven especially useful when rapidly drafting new content and when reviewing existing material for consistency before it is published for external audiences.
“...one of the main strengths of the tool is its ability to introduce new ways to frame content, with particular user groups in mind. I believe we have a good sense of our users' needs, but this can sometimes be hard to translate into practice. This is where working with AI in careful ways can help bring on the ground insight to life.” - User testing participant #1
Developing the AI Prompt Assistant presented several challenges during testing, particularly in preventing ‘drift’ in responses and ensuring that outputs could be clearly explained and linked back to specific editorial standards. Early testing showed that even with a well-designed briefing document, AI tools remain heavily dependent on the quality of the prompts they receive. To support staff in using the tool effectively, the team developed a webpage with examples and guidance on good prompting practice. Using a design thinking approach, the team iteratively incorporated feedback from test groups to refine the tool into the version currently used at Monash University Library.
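For instance, guidance of this kind might include prompts along the lines of (again a hypothetical example, not one taken from the Library's page): “Using the uploaded editorial guidelines, rewrite the paragraph below for students visiting the Library website, and list the guidelines you applied.” Prompts that name the audience, the purpose, and the guidelines to follow tend to produce output that is easier to check against the editorial standards than open-ended requests.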
Before implementation, staff relied on their individual understanding of how content should be written for the Library, and drafts often needed adjustment to align with editorial standards. Following implementation, staff have successfully used the AI Prompt Assistant, particularly for drafting new web content. Outputs align more closely with the Library's tone and voice, reducing the amount of reworking required before publication.

The tool has also been incorporated into more visible projects, including the redevelopment of faculty subject guides that receive millions of visits each year. These guides, maintained by multiple authors, provide subject-specific information that supports students throughout their studies. In this context, the AI Prompt Assistant has enabled authors to maintain a consistent editorial voice throughout their day-to-day content workflow. Results from multidisciplinary testing groups indicated that the briefing document had improved consistency and helped staff feel more confident experimenting with AI-supported workflows.
We compared the results of processing the same block of text in two ways: once through Copilot without the Prompt Assistant, and once with it. The difference in output was significant. The version produced with the Prompt Assistant was concise, used accessible language, and effectively simplified complex ideas. In contrast, the version generated without it was overly long, relied on complex language and library-specific jargon, and included common AI markers such as em dashes (—).
Use of generative AI is likely to keep growing, including in library workplaces. The question is not whether these tools will be used, but how. For Monash University Library, introducing a shared editorial assistant has provided a practical way to integrate generative AI into existing workflows while reinforcing established editorial practices. By embedding shared standards directly into the tool, staff can work more efficiently and confidently, with fewer review cycles required before content is ready for publication.


