Watch out: this image was AI-generated using DALL·E and intentionally highlights the limitations of AI-generated content. The spelling mistakes are a reminder that, while AI is powerful, it lacks the human touch needed for flawless communication.
By Carolyn Watson, Senior Account Manager at ZPB
A bit like Marmite, you either love or hate the way generative AI is shaping marketing and content creation – especially the more creative elements. Generative AI can produce marketing strategies, text, images and videos in mere seconds – work that could take a team of marketers, designers and videographers days or weeks. It’s changing marketing and communications, and we are finding AI tools like Microsoft’s Copilot, ChatGPT and some of the clever tools in Canva Pro to be really useful.
While gen AI can have many benefits, it also presents several challenges and ethical considerations to be navigated. Here, we delve into the potential pitfalls of using generative AI in marketing, particularly within the healthcare sector.
Ethical concerns and trust in healthcare communications
AI-generated content can be misleading or incorrect – we’ve all read articles about ChatGPT or other systems making up information. In healthcare, where accurate information is critical, the dissemination of false or misleading content can have serious consequences and damage an organisation’s reputation. It can lead to patients making poorly informed decisions about their health, which can negatively impact their well-being. If you are using AI tools to generate or review content, it is vital to have stringent quality control measures in place to ensure accuracy.
Many tools now let you ask for cited sources. Be sure to use your judgement about whether those sources are credible, robust, up to date and geographically relevant. Double- and triple-check any data points put forward. Fact-checking AI is probably more work than going to the sources you know and trust and writing the content yourself – for now.
Google downgrades websites with AI-generated content
It is very tempting to rely on gen AI to draft a flurry of blogs within minutes – who doesn’t want to spend less time on admin? New media posts, blog articles and email newsletters can now be produced at an ever-growing rate – but beware: Google has already updated its algorithm, and I’m sure it won’t be long before the social media platforms follow suit.
Google’s March 2024 core algorithm update now penalises sites publishing AI-generated content, as part of a crackdown on low-quality, unhelpful and unoriginal material. There are also major concerns about AI being used to spread fake news and misinformation, which creates trust and regulatory-compliance problems for Google.
How exactly does Google identify AI-generated content? Here are a few of the signals it is likely to use:
· Poor language quality: Checking for semantic coherence, syntax, and language patterns
· Lack of context: AI often struggles with nuanced language, context-specific information, and cultural references
· Lack of emotion: AI-generated content tends to lack personality, while human content can be differentiated by writing style, personal touches and unique insights
· Dull: Just as content professionals can spot an AI-written web page, other readers can too, which translates into high bounce rates and low CTRs
· Repetition: AI relies on templates and pre-programmed patterns, so it tends to use formulaic or repetitive language which is easy to spot
Watch out for AI nonsense
While generative AI can enhance efficiency and creativity, have a human review every word and cut the repetition. Look out for the AI-obvious lists of bulleted statements that are creeping into posts and online content. Sometimes AI produces nonsensical or strange sentences when it is struggling to make a coherent argument. And we know that AI can hallucinate and completely make things up!
Believe me, reproducing nonsense is not good for your corporate reputation. AI lacks the emotional intelligence and contextual understanding that humans bring to content creation. Moreover, human creativity is invaluable in developing authentic and emotionally engaging campaigns. This may change in the future, based on how rapidly this field is advancing – but for the time being, gen AI is a tool that augments human capabilities rather than acting as a replacement for them.
Data privacy and security concerns
Generative AI relies on vast amounts of data, raising significant concerns about data privacy and security. We advise against putting any GDPR-sensitive material into any gen AI tool, even one that claims to be closed. You may also want to check whether your clients have rules or policies around using AI tools to generate content, as these may extend to external agencies.
The utmost care must be taken, especially if you are uploading data to an AI tool. Healthcare data is particularly sensitive, and any breach could have serious consequences. Unauthorised access to personal health information can lead to privacy violations and identity theft, undermining patient trust. Think twice about using ChatGPT to analyse data – if it contains any identifiable information at all, steer well clear.
Moving with the future of gen AI
We are on board with AI: it is getting better and better, and some of the pitfalls listed above may not be problems for much longer. But, as exciting as it is, we’re still cautious about how we use generative AI. Getting communications right in healthcare is arguably more important than in any other industry, and generative AI gets it wrong too often to rely on. For this reason, we use it carefully and with a heavy human hand, picking out and re-working the good stuff and discarding the dross.
Here are some helpful resources and further reading on generative AI:
The Market Research Society: https://www.mrs.org.uk/standards/guidance-on-using-ai-and-related-technologies
McKinsey: The state of AI in early 2024
UCL’s guidance on referencing AI: https://library-guides.ucl.ac.uk/referencing-plagiarism/acknowledging-AI