How Realistic AI Image Generators Are Filtering Into Creative Platforms

Recent integrations and partnerships could make the technology more mainstream

Artificial intelligence-powered image generators have turned heads on social media with their ability to produce realistic, often fanciful renderings from any text prompt one might imagine. Now they are quickly making their way into professional creative platforms.

A series of partnerships has begun to integrate machine-learning-powered image generation technology into the software and online platforms on which designers rely. Stock image bank Shutterstock announced a partnership last month with research group OpenAI, the maker of state-of-the-art image generator DALL-E 2, which will allow customers to generate their own content for creative use. TikTok and Picsart recently integrated basic text-to-image generators for their users. And Adobe has indicated that it will begin to build generative AI into its products starting next year.

Meanwhile, generative AI is beginning to attract big money in Silicon Valley, where investors see it as an opportunity to get in early on a trend that could define a new generation of creative tools. Stability AI, the startup behind popular image generation platform Stable Diffusion, recently raised $101 million, bringing the small startup to a reported valuation of more than $1 billion.

Experts say moves like these will speed the adoption of generative AI into mainstream usage and could raise new questions for the creative industry at large. “As sort of mind-boggling as some of these tools are, I think their long-term implications are, if anything, under-appreciated,” said Gartner research analyst Andrew Frank. “This is a really big revolution.”

Platforms in a tight spot

As recently as a couple of years ago, AI text-to-image generation could produce only garbled abstractions that took some squinting to find any resemblance to the input text prompt. That changed earlier this year with OpenAI's release of DALL-E 2, which was trained on a much larger data set and brought a new, sometimes-photorealistic quality to these creations. Other tools, like Stable Diffusion and Midjourney, have made these capabilities accessible to even larger audiences.

As these tools proliferated, it wasn't long before creative platforms were forced to make some choices about this kind of art. Stock image banks like Shutterstock and Getty Images have found themselves flooded with submissions generated by these tools. While Getty reacted by banning all AI-generated images from its site, citing copyright concerns, Shutterstock decided to embrace the wave of this technology in the most ethical way it could manage.

“The mediums to express creativity are constantly evolving and expanding,” Shutterstock CEO Paul Hennessy said. “We recognize that it is our great responsibility to embrace this evolution and to ensure that the generative technology that drives innovation is grounded in ethical practices.”

Along with its OpenAI partnership, Shutterstock has worked with the research group to launch a fund designed to compensate artists for their contributions to the AI model in the form of royalties.

Avoiding artist alienation

Even as creative platforms begin to embrace this technology, however, risks and open questions abound. Legal issues around copyrighting AI images and the use of protected images in training these models remain far from settled. And despite built-in software guardrails, there is still potential for nefarious use in fabricating images of real people or surfacing results that might be disturbing or offensive.

“There are certainly issues with the legality of certain images. Even if an image is legal, some of the stuff that you see coming out of these tools can be quite disturbing, particularly for people who are sensitive to dark imagery,” Frank said.

Platforms are experimenting with this technology gradually to make sure they address some of these risks. Adobe recently said it wants to integrate the technology in a way that creates more opportunities for iterative development and feels more like a collaboration between humans and AI.

“As impressive as early generative AI is, the first models have, in some ways, cut out human beings,” said Adobe’s chief product officer and executive vice president of Creative Cloud. “We are investing our research and product design talent to develop an approach that centers on the needs of creatives.”

That implementation might look like a tool in Photoshop that generates a slew of options for a given piece of artwork from which the human artist can choose their favorite. Creators might also use AI to generate a custom template as a starting point or elements like fonts or color schemes.

Creative platforms will have to weigh risks and considerations like these against the benefits of the technology as they implement it, and to do so in a way that respects the contributions of human artists, Frank said.

“There’s bound to be some level of alienation [among artists], as these tools start to find their way into tools like Adobe,” Frank said. “But I think that the hope is that the creative possibilities that these tools unlock will sort of more than make up for maybe some of the negative feelings that they generate.”