By Matthew Dunn on Thursday, 19 September 2024
Category: Email Strategy

Seeing The Future Through Blurred I’s

Picture this scenario: The marketing team at your scrappy little company huddles for days hashing out the brand guidelines for a critical new product launch. The results, everyone agrees, are brilliant. The name is snappy, memorable and perfect. The tag line says it all. With all-hands-on-deck intensity, everything quickly goes public: website, collateral, press releases.

A few weeks later, you’re checking up on your big, well-funded industry competitor, who you know has been working on a competing product. Wow…wait a minute…that’s our tag line!

Horrors! Pitchforks! Intellectual property (IP) theft! Call the lawyers! Sue them!

Your CEO — wise and calm veteran — takes the less expensive course. She calls up the competitor CEO and politely inquires, WTH? 

His response: remember when you bragged about your little company being “all-in” on AI to that industry columnist? “We treat AI like a colleague; it’s got a seat at the table at every meeting,” I think you said? Well, that means you don’t own that tag line. AI = no IP, no copyright. Too bad, so sad. See you at the conference in Vegas. I’ll buy the first round.

Currently, AI outputs can’t be copyrighted or patented. That’s not news or post-worthy. But assuming that stands — which seems likely — there will be some challenging implications and changes for many aspects of business operations, including marketing and email.

It’s Not The AI Headline That Worries Me, It’s The Byline

Over the past few months, a venture I’m involved in made the strategic jump into AI-generated content in a big way. We’ve harnessed a mix of GenAI systems (and a hefty amount of conventional data warehouse work) to uncover deep insights on the customer segments for over 5,000 companies, organizations and social enterprises. There are AI code snippets in the source code, AI data analysis write-ups on web pages, AI-generated descriptions of charts — the list goes on and on. The production scale is amazing; one project that took a month would have taken about 3 man-years without AI. No issue with results, but…

The issue, and the “think about this” point of this post, is the difficulty of keeping track of which work was produced by AI and which by people.

AI technologies — particularly GenAI — are sneaking in everywhere because they can be so easily useful. Chat up some alternative subject lines for a campaign — easy. Gin up a Python function to sort a CSV file — easy. Rewrite a long web page into a short summary — easy.
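For instance, here’s a minimal sketch of the kind of throwaway function a GenAI assistant might gin up for that CSV task (the function, file names and column name are placeholders of my own, not output from any particular tool):

```python
import csv

def sort_csv(in_path: str, out_path: str, column: str) -> None:
    """Sort the rows of a CSV file by one named column and write the result."""
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = sorted(reader, key=lambda row: row[column])  # reads every row
        fieldnames = reader.fieldnames

    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Hypothetical usage: sort a subscriber list by signup date.
# sort_csv("subscribers.csv", "subscribers_sorted.csv", "signup_date")
```

Easy, yes. And six months from now, who remembers whether a person or a model wrote it?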

It’s becoming even easier as AI moves from general-use endpoint — ChatGPT.com — to contextualized assistant, like GitHub Copilot. Already, software development tools like Copilot and Cursor have become so unobtrusively good at “helping” that it’s truly difficult to keep the I-wrote-this and AI-wrote-this lines from blurring.

The “work products” of marketing aren’t always treated as intellectual property, and I’m not suggesting that every subject line merits a copyright filing. But that strange little dividing line of done-by-human (could-be-IP) versus done-by-machine illuminates the changing definition of work itself. We’re still at a relatively early, experimental stage of AI technologies, but I think some of the challenges of more mature stages are becoming visible. Let’s make the point with a more marketing-centric example, without the IP burden.

For the sake of discussion, say one of your vendors provides GenAI copywriting, right in the platform. Yay! Give me alternative subject lines, you cry! Punch up this boilerplate for me! Writing is hard work, and a bit of help with tight deadlines is always welcome.

Run that operation for a year or so in your head, and then consider this question. Would you have the slightest idea (or record) which copy was written by people, and which by machine? Would the quality of marketing go up or down over time as that line blurred? (I’ve got an opinion; what do you think?)

Alternatively, say your email platform delivered AI-coded templates. “This LLM has digested billions of email messages and it codes them perfectly.” Difficult job, and certainly tedious; seems like an ideal case for AI. But I would make an educated guess that over time, the organizational ability to modify or fix issues with those templates would go down, and the complexity of the templates would go up.

In short, the I’s — human intelligence and artificial intelligence — are going to blur and blend. In a relatively short time, at a practical level, we just won’t be able to keep track of which was which.

Is A Blurred Line Between Human And AI Outputs A Problem?

You could argue that this isn’t a problem except in areas like IP, where there’s (at least for now) a very bright-line distinction between the two, with value and dollars and consequences attached. That may be; I wouldn’t be surprised to see quasi “clean room” practices spring up in IP-heavy industries, where companies invest in safeguards to keep the artificial “I” out of the room to ensure viable IP.

Alternatively, we may try to cope with the deluge of artificial-I “stuff” systematically. In that scenario, the ESP that provides GenAI copywriting assistance would be tagging and recording everything — Bill wrote this, Skippy the Robot wrote that. Ironically, that coping approach would require explosive growth in conventional digital-and-data systems, which some pundits thought AI was going to make easier or eliminate!
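To be concrete, the record itself is trivial; the hard part is capturing it consistently, everywhere, forever. A hypothetical provenance tag (the names and fields here are my own illustration, not any real ESP’s API) might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProvenanceTag:
    """Records who (or what) produced a piece of marketing content."""
    content_id: str              # e.g. a subject line or template ID
    author: str                  # "Bill" or "Skippy the Robot"
    author_kind: str             # "human", "ai", or "mixed"
    model: Optional[str] = None  # which model, when author_kind isn't "human"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Hypothetical usage:
# tag = ProvenanceTag("subject-line-0042", "Skippy the Robot", "ai",
#                     model="some-llm")
```

Multiply that by every subject line, snippet and template an organization produces, and you can see where that explosive growth in conventional digital-and-data systems would come from.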

Or we’ll revive Juvenal’s Latin question “Quis custodiet ipsos custodes?” (“Who watches the watchers?”) and try to make AI systems “watch themselves”, or watch each other, or some similarly convoluted scheme. It’s a nice notion, but copy-and-paste will kill it in the long run.

You might be thinking, well, this is not a marketing or an email problem. Perhaps it isn’t a problem, but that doesn’t mean it’s not a profound change. For better or worse, there used to be a “who” attached to most kinds of work. Who wrote this headline? Who’s going to draft the report? Hey, didn’t you code this template — I’m seeing a problem in Outlook, can you take a look?

There’s a related longer-term issue in this blur with potentially greater impact. Will blurred-I’s work improve or degrade people’s ability to grow their own knowledge and skills? If I don’t really have to rack my brain to understand that data well enough to write the web-page copy for it, I haven’t learned as much about that data. If I delegate a bit of code crafting to Cursor’s AI, have I learned more about the problem — or about Python — than if I’d done it “by hand”? Over the longer haul of a job or a career, will we accrue as much experience and have as many stories to tell?

It’s really, really hard to tell. On the one hand, the current crop of AI tools can be an incredible learning resource; on the other, they can do “the easy stuff” faster and better. Google Gemini kept trying to “help” finish sentences in the course of writing this post — kind of laughable, kind of scary. An AI enthusiast might say “just give it the prompt and skip the hours of writing” — but thinking through the prompt and thinking through the post are more-or-less the same challenge: garbage prompt, garbage output. I think I learn more doing it “by hand”, but that stack of things to do isn’t getting smaller — is the learning gained worth the time invested?

At this early moment, there’s a fair degree of decide-that-for-yourself. I don’t think that will last. Organizations will have to start grappling with scaled-up organizational challenges of blurred I’s — who owns the work, who understands the job, who’s responsible for what, and more. This is going to be interesting.

Photo by Mario Blasquez on Unsplash
