Over the past year, the AI research company OpenAI has been feeding a giant bank of captioned images into a program called DALL-E. Named after the Surrealist artist Salvador Dalí and the endearing robot from Pixar’s 2008 classic WALL-E, the machine learning system lets users generate images simply by typing short descriptions into a text box. From that brief prompt, the system conjures a set of original visuals that reflect some version of the phrase. Our favorites from DALL-E’s Instagram: “Michelangelo’s sculpture of David wearing headphones DJing” and “claymation of teenage unicorns being taught algebra.”
Some may dismiss the humorous renderings as another viral art-world trend, but the implications of photorealistic images generated by AI within seconds from simple text prompts are sure to reverberate throughout the creative industries. For one, there’s the all-too-real fear that those trained in analog skills will lose their jobs. While layoffs can be expected with the adoption of any new technology, others theorize that programs like DALL-E will instead jumpstart an explosion of innovation. That scenario largely hinges on artists finding inventive ways to make the AI work for them rather than against them. An architect, for example, could use the tool to envision new possibilities of coexistence between buildings and nature; product designers might use it to streamline the sketching process.
If a machine is handling most of the “creative” work, can the images be called art? Aaron Hertzmann, a computer science professor at the University of Washington, explains that this way of thinking echoes early 20th-century arguments against the artistic merit of photography. Today, most people acknowledge the extent of human craft involved in taking a photograph. As the old adage goes, it’s the eye behind the lens, not the lens itself, that’s responsible for impactful photos. One scroll through Instagram proves that.
There’s also the law of unintended consequences. Hertzmann notes that modern art emerged partly in response to photography: once cameras became more readily available, painting moved away from photorealism and toward abstraction. A similar dynamic is on display in the work of artists like Danielle Baskin, who uses DALL-E to imagine “alternative realities”—think high-speed rail zooming across the Golden Gate Bridge or ramen flowing out of a faucet.
“Art maintains its vitality through continual innovation, and technology is one of the main engines,” Hertzmann writes. “We’re lucky to be alive at a time when artists can explore ever-more powerful tools. Today, through GitHub and Twitter, there’s an extremely rapid interplay between machine learning researchers and artists; it seems like, every day, we see tinkerers and artists tweeting new creative experiments with the latest neural networks. Seeing an artist create something wonderful with new technology is thrilling, because each piece is a step in the evolution of new forms of art. As artists’ tools, AI software will surely transform the way we think about art in thrilling and unpredictable ways.”
The software comes with its share of limitations. For one, DALL-E won’t generate visuals of real people, and OpenAI retains the rights to each image the system produces. Its algorithm has also generated sexist and racist imagery—search terms like “CEO” yielded white men in suits—illustrating an ongoing flaw with AI. In a statement, OpenAI acknowledged that DALL-E “inherits various biases from its training data, and its output sometimes reinforces societal stereotypes.” False images that appear realistic can also open more doors for fake news, which has already fostered an unstable political climate. “It’s all fun and games when you’re generating ‘robot playing chess’ in the style of Matisse,” Sarah Rose Sharp writes for Hyperallergic, “but dropping machine-generated imagery on a public that seems less capable than ever of distinguishing fact from fiction feels like a dangerous trend.”