As a generative AI consulting company, we know firsthand the incredible potential that AI has for corporate communication teams. However, we also recognize that there are common mistakes that teams make when working with this technology. In this blog post, we’ve outlined some of these mistakes and provided tips on how to avoid them to ensure success.

Not Defining the Right Objectives
One of the biggest mistakes teams make when working with generative AI in a communications context is not defining clear objectives for the communications output in close alignment with their communication strategy. It's crucial to have a clear understanding of what your priority messaging is. By breaking down your communications strategy into a messaging house and a target word cloud, and by specifying goals per target audience that you want to achieve over the year, you can ensure that the AI-generated content is aligned with your communication goals and that your assets remain specific to your brand and strategic market positioning. If you are not clear about this first step, the AI will only leverage what is already there or follow the individual impulses of decentralized editors across your organization, leaving an unclear profile for what your company stands for and cannibalizing your credibility on your strategic topics. Especially as our productivity in content creation has increased, getting your strategic priorities clear and tracking them with measurable data has become more essential than ever.
Not Having Quality Input Data
Another common mistake is not having quality input data for the AI to work with. The quality of the content used for training the AI is critical to the success of the process. By ensuring that the content used for training the AI is clean, complete, and representative of the assets that you want to create, you can optimize the output and maximize the impact of your communication efforts. Branding guidelines, specific keywords, tone of voice, etc. are essential when training the AI for your specific purpose in order to maintain the user experience. Often, a very efficient way to obtain this kind of input data is to generate it during an in-depth page clean-up: you create ideal digital content in the tone of voice and with the branding claims you want to use, and this content is then used to train future AI content generation.
Ignoring the Importance of Human Oversight
Generative AI is an incredible tool, but it's not a replacement for human oversight. It's crucial to have a team of human experts (subject matter experts, but also highly skilled digital editors and specialists with know-how about your audiences) who can review the output generated by the AI and ensure that it aligns with the brand's messaging and objectives. Without human oversight, the AI may generate content that is irrelevant, incorrect, or inappropriate, damaging the brand's reputation. The main tool is an editorial process template (available from us) for creating new assets that ensures an optimized but quality-controlled use of AI. We normally make sure this is done by integrating these roles in the editorial board. Our editorial process also foresees that the editorial board mainly uses AI in the ideation phase, up to the point where the editor has identified the key questions the target audiences have; the editor then confronts a subject matter expert with these questions to source truly specific and novel content for your organization. Afterwards, AI can be used to polish the copy and generate various assets for different channels. But the creation of new ideas needs to pass through a human brain if you don't want to be perceived as a "stochastic parrot" (hence the visual of this article, in case you were wondering).
Not Continuously Evaluating the Output
One of the most significant mistakes that corporate communication teams make when working with generative AI is not continuously evaluating the output. It's essential to have a process for regularly reviewing and evaluating the AI-generated content to ensure that it's meeting the brand's objectives and producing the desired results. In order to work, this monitoring needs to be simple to understand and highly relevant. If the editors cannot immediately draw conclusions from the reporting, the reporting needs to be simplified. Key variables to monitor are engagement metrics such as click-through rates and conversion rates; make adjustments as necessary. In our projects, the dedicated role of the "Voice of the Audience" has paid off, as lifting a whole group's data literacy is a lengthy and complex undertaking. Having a dedicated specialist make sense of your data and reflect it back in relation to the latest assets the team members created is highly efficient.
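As a minimal illustration of what such a simple, immediately interpretable report can look like, here is a sketch that computes click-through and conversion rates per asset and flags underperformers. All asset names, numbers, and the threshold are hypothetical placeholders, not a prescribed setup:

```python
# Minimal sketch of sprint-level engagement monitoring.
# Asset names, figures, and the CTR threshold below are hypothetical.

def engagement_report(assets, ctr_threshold=0.02):
    """Compute click-through and conversion rates per asset and flag laggards."""
    report = {}
    for name, stats in assets.items():
        # Click-through rate: clicks per impression (guard against zero impressions).
        ctr = stats["clicks"] / stats["impressions"] if stats["impressions"] else 0.0
        # Conversion rate: conversions per click (guard against zero clicks).
        conversion = stats["conversions"] / stats["clicks"] if stats["clicks"] else 0.0
        report[name] = {
            "ctr": round(ctr, 4),
            "conversion_rate": round(conversion, 4),
            # A single boolean tells editors at a glance which assets need attention.
            "needs_review": ctr < ctr_threshold,
        }
    return report

sprint_data = {
    "blog_post_q1": {"impressions": 12000, "clicks": 180, "conversions": 9},
    "linkedin_teaser": {"impressions": 8000, "clicks": 320, "conversions": 12},
}

for asset, metrics in engagement_report(sprint_data).items():
    print(asset, metrics)
```

The point of the `needs_review` flag is exactly the simplicity argued for above: an editor can draw a conclusion from each row without any data-science background.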
Unclear Messaging Focus and Brand Positioning
Another common mistake that corporate communication teams make when working with generative AI is an unclear messaging focus and brand positioning. It can be tempting to churn out AI-generated content in large quantities, but it's crucial to maintain a clear focus on the brand's messaging and positioning. Failing to do so can result in the dilution of the brand's key assets and positioning, leading to generic content that could have been created by any competitor. Coming back to the stochastic parrot: if the AI is not trained on who is talking, in which tone, and with which brand personality, content will sound random, less engaging, and interchangeable. This is not how good communication works, and it needs to be considered in the initial AI training process.
Furthermore, to avoid this mistake, it’s crucial to define clear messaging and positioning objectives for the AI to follow. This includes identifying the brand’s unique value proposition, target audience, and brand voice. By doing so, the AI-generated content will be aligned with the brand’s messaging and positioning, helping to maintain consistency and a strong brand identity. Additionally, it’s essential to regularly review and evaluate the AI-generated content to ensure that it’s consistent with the brand’s messaging and positioning.
Lack of Continuous Monitoring and Prompt Enhancement
It’s important to monitor the performance of the AI-generated content and how it resonates with the target audience. This includes tracking engagement metrics such as views, clicks, and shares, and analyzing feedback to gain insights into what resonates with the target audience.
By utilizing data-driven monitoring and prompt enhancement, communication teams can gain a better understanding of what works and what doesn't in their AI-generated content. This information can be used to make adjustments and enhancements to future content, ensuring that it is optimized for maximum impact and effectiveness. Here too, the Voice of the Audience role from our proposed editorial board structure (see our blog post on the topic) can be a valuable game changer in your effort to drive AI use successfully in your organization.
In the best case, this procedure will become a sacred ritual for the team, driving organizational learning, data literacy, and prompt design know-how across the various roles of the editorial board, not to mention the professional profiles of your editorial team members. In order to establish this routine, we normally advise our clients to split their communication planning into sprints of four weeks, so that data feedback and insights for the next sprint are generated at least every four weeks. Now you have the routine (sprint structure) and the owner (Voice of the Audience editor).
In conclusion, generative AI can be an incredibly powerful tool for corporate communication teams. However, it's essential to avoid these common mistakes to ensure success. Define clear objectives, ensure quality input data, maintain human oversight, continuously evaluate the output, and establish a culture of collective critical exploration and learning. By doing so, you can leverage the power of generative AI to create compelling, effective assets that help your brand stand out in a crowded market.
