DALL·E 2022-11-09 08.58.01 - An email message soaring over a mountain valley in the style of Maxfield Parrish.
In the next year or three, expect the look of marketing content — including email — to change more than it has in at least a decade, because of AI-generated art.
Do yourself a professional favor — seriously. Stop reading this blog post. Go invest five minutes playing with one of these:
https://huggingface.co/spaces/stabilityai/stable-diffusion
Write a description of something you imagine — called a prompt. Put your inner child in charge, plus your internal scrapbook of artists, visual styles, and memories.
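(If you’d rather drive this from code than from the web demo, here’s a minimal sketch using Hugging Face’s diffusers library. The checkpoint name, the output filename, and the prompt are just illustrative assumptions, not anything the demo above requires.)

```python
# A rough sketch of the same experiment in code.
# Assumes the diffusers and torch packages are installed.
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; any Stable Diffusion model on the Hugging Face hub should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # no GPU? use "cpu" and drop torch_dtype (much slower)

# Your prompt: the description of the thing you imagined.
prompt = "An email message soaring over a mountain valley in the style of Maxfield Parrish"
image = pipe(prompt).images[0]  # run the diffusion process, keep the first result
image.save("my_prompt.png")
```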
For the rest of this post, let’s assume that you tried that. My bet is that you went past 5 minutes.
AI-generated images have more or less exploded into public awareness in the past few months. The technology wasn’t built overnight, of course. Years of research and (no doubt) millions of dollars have gone into making it happen. But it has crossed the threshold of “magic” recently, and I think it’s going to transform a lot of fields — including marketing and email.
I’m not qualified to say anything about the technical internals of what these AI engines do. My rough understanding of how they work is this: they’ve been trained on millions of reference images paired with descriptive text, learning what apple and cat and 3D and Maxfield Parrish look like. (It’s actually difficult to type sentences about AI without putting every other word in ‘quotes’ — is that machine actually ‘learning’? Are a million pictures of apples ‘apples,’ or just a bunch of pixels?)
Somehow, it doesn’t matter.
With that ‘training’ done, these astonishing programs turn around and transform words into approximations of those millions of images. DALL·E generated this image from a simple, linear prompt: ‘An oil painting of a German Shepherd dog’.
DALL·E 2022-11-09 09.10.50 - An oil painting of a German Shepherd dog.
You can sort of see the process at work with that simple prompt. Yes, it’s recognizably a German Shepherd, with the brush patterns of an oil painting. The tag around his (her?) neck is kind of rough — more a dab and an approximation than a detailed dog tag. But it is recognizable and engaging — certainly much better than many of us would do with a brush, a blank canvas, and tubes of paint.
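For what it’s worth, this kind of prompt can also be sent to DALL·E programmatically. Here’s a hedged sketch using OpenAI’s Python package (you’d need your own API key; the image size and the way the result is fetched are just illustrative choices):

```python
# Sketch of generating the same image through the DALL·E API
# (assumes the openai package, 0.x series, and a valid API key).
import openai

openai.api_key = "sk-..."  # your OpenAI API key goes here

response = openai.Image.create(
    prompt="An oil painting of a German Shepherd dog",
    n=1,                 # how many variations to generate
    size="1024x1024",    # DALL·E 2 supports 256x256, 512x512, and 1024x1024
)
print(response["data"][0]["url"])  # a temporary URL for the generated image
```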
That’s just the on-ramp, though. As prompts become less linear and more imaginative, the visual results get more and more startling. Writing an imaginative, novel prompt is a relatively quick exercise in language that, somehow, magically becomes an image a few seconds later.
DALL·E 2022-11-09 10.28.38 - High quality photo of Mona Lisa astronaut.
You can see the averaging and artifacts here — eyes a bit off, lips a bit goofy, flowing hair on the helmet. That’s today’s state of the art; what will this prompt do in a year? In 10 years?
The current state of this technology already raises a bunch of possibilities and questions. Project it forward, at an AI rate of change, and it gets a bit mind-boggling and rather serious.
Let’s take the commercial “marketing world” impact first, since that’s the nominal topic of this post. Why is this going to have an impact on email and marketing content?
Objectively speaking, the visual content in email is pretty bland. That’s not a critique of the creative departments doing the work — there’s terrific stuff out there. But the majority is decorative filler — stock photos, off-the-shelf graphics, “brand treatments” and glorified clip art.
That’s a result of natural market forces. Visual talent is (was?) rare. Original production is time-consuming. The result: “original visuals” that are specific to a given campaign or message are expensive. For all of the amazing digital creative tools from companies like Affinity and Adobe, this visual stuff is hard.
The key thing is that the people at the other end of that campaign send button are visual-first. We are wired to “process” visual content far, far faster than language — in fact, visual stimuli are pre-processed in the retina before they ever reach the brain. The visual aspects of your marketing message — imagery, layout, whitespace, typography, and so on — are first in the comprehension queue.
The intersection between marketing communications and these new image-making tools is purely pragmatic. When it becomes easier and cheaper to generate meaningful visuals that are part of the message than it is to drop in stock photos and decorative filler, we’ll use them.
Moves are already afoot in the market that make this inevitable. Case in point: a month ago, Microsoft announced a new graphic design offering — Microsoft Designer — with DALL·E 2 integration.
This meteor is going to make quite a few waves; let’s skim the tops, briefly:
- What’s the impact on people and org structures already in the business of visual content?
- How tangled are rights and ownership going to be?
- Who will control that intangible thing called “style”?
- How will companies grapple with brand control?
- As generated visuals become more & more realistic, what’s the impact on truth and fact?
Change comes in unexpected ways. For all the speculation about AI and marketing, it’s been difficult to pin down visible, tangible change. Now, here it comes. Should be fascinating to watch.