Jonathan Hopfner | February 23rd, 2024

A backlash against the creeping use of generative AI (GenAI) in journalism has already forced some publications to reverse course, and even claimed a few scalps.

Having been promised that GenAI would enable vast improvements to content creation, along with other aspects of their work, some of the many marketing and communications teams experimenting with the technology might be getting a little nervous. Could the campaigns against procedurally generated content, and the push for authenticity, come for their organisations as well? Marketers aren’t typically held to the same standards as journalists, of course, but the often harsh reaction to brands that have been ‘caught’ using AI in advertising and other forms of outreach at the very least highlights the need to tread carefully – and to be prepared to take a stand.

Lost in many of these controversies is the question of whether AI-generated content is inherently less effective, or more problematic, than the human variety. An interesting recent study by MIT scholars of people’s reactions to human- versus AI-created marketing campaigns suggests that’s not always the case. In fact, when uninformed about the origin of what they were reading, most people preferred the AI-generated work. There are multiple plausible reasons for this: drawing on vast amounts of information, AI content can often sound breezily authoritative. It’s perhaps also less prone to erratic leaps of logic, or jarring inconsistencies.

Giving credit where it’s due

As a company founded by former journalists of a … certain vintage, we might be expected to find research like this distinctly unsettling. But it’s important to (even grudgingly) acknowledge AI’s strengths as well as its limitations, and to leverage those strengths where it makes sense – as we’ve done in our iN/Ntelligence platform, which mines material from media and other publishers for insights at a scale and speed that would be impossible for even our talented, and largely tireless, human team to replicate.

Whether it’s synthesising or summarising vast amounts of content, creating lists or categories, or even producing the first draft of a relatively low-stakes press release to push through blank page syndrome – AI can shine when it comes to augmenting human creative endeavour. It’s more problematic when that endeavour is outsourced to AI entirely. At best, this comes across as slightly lazy or rubs some the wrong way; MIT’s study, for example, found people become less receptive to content as soon as they realise it’s AI-generated. At worst, letting AI take over results in decisions, even hallucinations, that endanger hard-won reputations.
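To make that division of labour concrete, here’s a minimal sketch of what an augmentation workflow might look like. It assumes the OpenAI Python SDK purely for illustration – the model name, prompt and `draft_press_release` helper are all our own inventions, not a prescription – and the point is the mandatory human sign-off at the end, not the particular vendor.

```python
# A minimal sketch: AI produces a first draft, a human always signs off.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def draft_press_release(key_facts: list[str]) -> str:
    """Ask the model for a rough first draft from human-supplied facts."""
    prompt = (
        "Draft a short press release using only these facts:\n- "
        + "\n- ".join(key_facts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

draft = draft_press_release([
    "Acme Corp opens a new Singapore office",
    "20 local hires planned by Q4",
])
# The draft is a starting point, never the finished product: a human
# editor reviews, corrects and approves before anything is published.
print("DRAFT FOR HUMAN REVIEW ONLY:\n", draft)
```

The design choice worth copying is structural rather than technical: the model’s output is labelled a draft and routed to a person, never published directly.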

Especially when it comes to the written word, enterprises experimenting with AI on the sly also have to realise that more often than not, they’re not fooling anybody. As software grows more sophisticated, there may yet come a time when AI-written material is entirely indistinguishable from the human variety. But for now, AI text exhibits certain readily apparent characteristics. Lengthy strings of facts with little attention paid to sourcing; the kind of casual confidence possible only for someone oblivious to the stakes of getting things wrong; or, as one columnist aptly put it, a consistently “cheery, noncommittal, aim-to-please averageness” – all are dead giveaways.
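These giveaways don’t require sophisticated tooling to spot. The toy scorer below is something we’ve invented purely to illustrate the idea – genuine AI-text detection remains notoriously unreliable, and the phrase lists and weighting here are entirely arbitrary – but it shows how signals like missing attribution and aim-to-please filler could, in principle, be counted.

```python
import re

# A toy heuristic, invented for illustration only: real AI-text detection
# is far harder and far less reliable than this sketch suggests.
GIVEAWAY_PHRASES = [
    "in today's fast-paced world", "it's important to note",
    "in conclusion", "delve into", "unlock the potential",
]
SOURCING_MARKERS = ["according to", "said", "reported", "study", "survey"]

def giveaway_score(text: str) -> float:
    """Crude score: aim-to-please filler up, attributed sourcing down."""
    lowered = text.lower()
    filler = sum(lowered.count(p) for p in GIVEAWAY_PHRASES)
    sourcing = sum(lowered.count(m) for m in SOURCING_MARKERS)
    sentences = max(len(re.findall(r"[.!?]", text)), 1)
    # More filler per sentence and less sourcing -> higher score.
    return (filler - 0.5 * sourcing) / sentences

sample = "In today's fast-paced world, it's important to note that..."
print(f"giveaway score: {giveaway_score(sample):.2f}")
```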

Standing on principles

From a brand’s point of view, as more scandals inevitably surface, enterprises can be sure that whatever they publish will be scrutinised for signs of AI involvement – and, if existing research is any indication, heavily discounted if that role is believed to be significant.

Content that’s thoughtfully created or curated by individuals, on the other hand, is positioned to command a premium. Many organisations are already waking up to this reality and experimenting with features like watermarks to distinguish digitally generated content from the human article, though these are far from infallible.
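Approaches to watermarking vary widely: some schemes operate statistically on a model’s word choices, while others attach provenance metadata along the lines of the C2PA content-credentials standard. The sketch below is only a toy of the latter kind, with a schema we’ve made up for illustration – and its weakness is instructive, since anyone who edits the text can simply re-stamp it, which is exactly why such measures are far from infallible.

```python
import hashlib

def stamp(content: str, generator: str) -> dict:
    """Attach a toy provenance record (invented schema) to some content."""
    return {
        "content": content,
        "provenance": {
            "generator": generator,  # e.g. "human" or a model name
            "sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

def verify(record: dict) -> bool:
    """Check the stamp still matches the content it claims to describe."""
    digest = hashlib.sha256(record["content"].encode()).hexdigest()
    return digest == record["provenance"]["sha256"]

record = stamp("Our Q3 results exceeded expectations.", generator="gpt-4o-mini")
print(verify(record))  # True -- but anyone can edit and re-stamp the text,
# which is why naive provenance labels are far from infallible.
```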

In this kind of environment it’s important for organisations to do as much as they can to dispel the ambiguity. Rather than keeping people guessing, we advise adopting a policy of strict AI transparency – that is, defining and articulating the role AI plays in researching or producing any content created for an external audience.
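What ‘defining and articulating the role AI plays’ might look like in practice: below is one hypothetical shape for a per-piece disclosure record. The taxonomy and field names are ours, invented for illustration – the specific categories matter far less than publishing a consistent set and applying it to everything released externally.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class AIRole(str, Enum):
    """Hypothetical taxonomy of AI involvement in a published piece."""
    NONE = "none"                 # entirely human-made
    RESEARCH = "research"         # AI-assisted research or summarisation
    FIRST_DRAFT = "first_draft"   # AI drafted; human rewrote and approved
    GENERATED = "generated"       # AI-generated; human reviewed only

@dataclass
class AIDisclosure:
    """One possible per-article transparency record (illustrative)."""
    role: AIRole
    tools: list[str]      # e.g. ["gpt-4o-mini"]
    human_reviewed: bool

disclosure = AIDisclosure(
    role=AIRole.FIRST_DRAFT,
    tools=["gpt-4o-mini"],
    human_reviewed=True,
)
print(json.dumps(asdict(disclosure), indent=2))
```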

Just as it’s now routine for companies to set out and share their standards on sustainability, diversity, equity and inclusion, or data privacy, we envision a time when policies on the use of AI in marketing, communications and content are commonplace, even prominently displayed as a sign of commitment to transparency and trust. Whether a company always lives up to its stated principles is another matter entirely. But just as with any other issue that has significant social and cultural as well as business implications, those who prefer to say nothing or avoid the AI topic entirely can’t expect to receive the benefit of the doubt.
