Is this picture of Trump real? Free AI tools come with risks


On Monday, a tech startup in London did something most software companies never do: It released the code behind its creation so anyone could reproduce it. Any developer in the world can rebuild the image generation model created by Stability AI, which can spit out any image or photo you can imagine from a single text prompt.

The tool is almost magical – scary, even – in what it can do. Want a picture of a British Shorthair cat playing a guitar?

But here’s how this tool is potentially revolutionary compared with DALL-E 2, a similar program launched by San Francisco-based OpenAI earlier this year that hundreds of thousands of people have used to make wacky art: Stability AI’s model is free to replicate and comes with very few restrictions. The code for DALL-E 2 has not been released, and the tool will not generate images of specific individuals or of politically sensitive subjects such as Ukraine, to prevent the software from being misused. The London tool, by contrast, is a true free-for-all.

In fact, Stability AI’s tool offers huge potential for creating fake images of real people. I’ve used it to conjure up several “photos” of British Prime Minister Boris Johnson awkwardly dancing with a young woman, Tom Cruise walking through the rubble of war-torn Ukraine, a realistic portrait of actress Gal Gadot, and an alarming image of London’s Palace of Westminster on fire. Most of the images of Johnson and Cruise looked fake, but a few looked like they might fool the most gullible among us.

Stability AI said in its Monday post that its model includes a “safety classifier,” which blocks sexual scenes but can also be adjusted or removed entirely at the user’s discretion.
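
What “adjusted or removed entirely at the user’s discretion” means in practice: because both the code and the weights are public, the filter is just another setting. Here is a minimal sketch, assuming the open-source Hugging Face diffusers wrapper for Stable Diffusion (the official release ships its own scripts with an equivalent switch); the checkpoint name is the public v1.4 release and the prompt is illustrative.

```python
# A minimal sketch, assuming the Hugging Face "diffusers" wrapper for
# Stable Diffusion; the official release's own scripts expose an
# equivalent switch.
import torch
from diffusers import StableDiffusionPipeline

# Default load: the bundled safety classifier screens every output image.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe("a British Shorthair cat playing a guitar").images[0]
image.save("cat.png")

# Because the code is open, the same classifier can be switched off
# with a single argument, which is the user-discretion point made above.
unfiltered = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    safety_checker=None,  # removes the filter entirely
).to("cuda")
```

The classifier, in other words, is a default rather than an enforcement mechanism: nothing in an open release stops a user from deleting one argument.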

Stability AI founder and CEO Emad Mostaque says he’s more worried about public access to AI than about the harm his software might cause. “I think control of these models shouldn’t be determined by a group of self-appointed people in Palo Alto,” he told me in an interview in London last week. “I think they should be open.” His company will make money by charging for special access to the system, as well as by selling licenses to generate famous characters, he said.

Mostaque’s release is part of a broader push to make AI freely available, on the belief that it shouldn’t be controlled by a handful of Big Tech companies. That’s a noble sentiment, but it also carries risks. For instance, while Adobe Photoshop may be better at faking an embarrassing photo of a politician, Stability AI’s tool requires far less skill and costs nothing. Anyone with a keyboard can hit refresh over and over until the system, known as Stable Diffusion, spits out something that looks convincing. And Stable Diffusion’s images will only grow more accurate over time as the model is rebuilt and retrained on new datasets.(1)

Mostaque’s answer is that we are, depressingly, in the midst of an inevitable surge in fake images anyway, and our sensibilities will simply have to adapt. “People will be aware that anyone can create this image on their phone, in a second … people will be like, ‘Oh, it’s probably just created,’” he said. In other words, people will learn to trust the internet even less than they already do, and the phrase “pics or it didn’t happen” will evolve into “pictures don’t prove anything anymore.” Even so, he predicts that 99% of the people who use his tool will have good intentions.

Now that Mostaque’s model has been released, social media companies like Snap Inc. and ByteDance Ltd.’s TikTok could replicate it for their own platforms. TikTok, for example, recently added an AI tool that generates background images, but its output is highly stylized and it won’t produce specific images of people or objects. That could change if TikTok decides to adopt the new model. Mostaque, a former hedge fund manager who studied computer science at Oxford University, said developers in Russia had already replicated it.

Mostaque’s open-source approach runs counter to how most big tech companies have handled their AI discoveries, driven as much by intellectual-property concerns as by public safety. Alphabet Inc.’s Google has a model called Imagen whose images look even more realistic than DALL-E 2’s, but the company won’t release it, citing “potential risks of misuse.” It says it is “exploring a framework” for a possible future release, which may include some form of oversight. OpenAI likewise won’t release the details that would let anyone copy its tools. (2)

Monopolistic tech companies shouldn’t be the sole guardians of powerful AI, since they’re bound to steer it toward their own agendas, whether in advertising or in keeping people hooked on endless scrolling. But I’m also uncomfortable with the alternative idea of “democratizing AI.” Mostaque himself used the expression, which is increasingly popular in tech circles.(3)

Making a product cheap, or even free, doesn’t quite fit the definition. At its heart, democracy relies on governance to function properly, and there is little evidence of oversight for tools like Stable Diffusion. Mostaque says he relied on a community of several thousand developers and supporters, deliberating on the chat platform Discord, to decide when it would be safe to release his tool into the wild. So that’s something. But now that Stable Diffusion is out, its use will go largely unchecked.

You could argue that releasing powerful AI tools into the wild will, on balance, contribute to human progress, and that Stable Diffusion will transform creativity as Mostaque predicts. But we should also expect unintended consequences just as widespread as the benefits of making anyone an AI artist, whether that’s a new generation of misinformation campaigns, new types of online scams, or something else entirely.

Mostaque won’t be the last person to release a powerful AI tool to the world; if Stability AI hadn’t, someone else would have. The race to be first to bring powerful innovation to the masses is part of what drives this gray area of software development. When I pointed out the irony of his company’s name, given the disruption it’s likely to cause, he retorted that “instability and chaos is coming anyway.” The world should prepare for an increasingly bumpy ride.

More from Bloomberg Opinion:

• Who needs the government to explore deep space?: Adam Minter

• Robots are key to winning the productivity war: Thomas Black

• Can India master app-based lending?: Andy Mukherjee

(1) The release of the system’s “weights” on Monday means anyone can fine-tune the model to make it more accurate in certain areas. For example, someone with a large cache of Donald Trump images could retrain the model to conjure up far more convincing “photos” of the former US president, or of anyone else.
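
For the technically curious, here is a hedged sketch of what that fine-tuning involves, following the standard open-source text-to-image training recipe for the released weights rather than Stability AI’s own training code; the batch of images and captions passed to train_step is a placeholder assumption.

```python
# A hedged sketch of fine-tuning the released weights on new image-caption
# pairs, using the open diffusers/transformers stack (not Stability AI's
# own training code). Dataset variables are placeholders.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"  # the public v1.4 checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the denoising UNet is updated; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def train_step(pixel_values, captions):
    """One gradient step on a batch of (image tensor, caption string) pairs."""
    with torch.no_grad():
        # Compress images into the latent space the diffusion model works in.
        latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
        # Encode the captions that condition the image generation.
        tokens = tokenizer(list(captions), padding="max_length",
                           max_length=tokenizer.model_max_length,
                           truncation=True, return_tensors="pt")
        text_states = text_encoder(tokens.input_ids)[0]
    # Add noise at a random timestep; the UNet learns to predict that noise.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=latents.device)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)
    pred = unet(noisy_latents, timesteps,
                encoder_hidden_states=text_states).sample
    loss = F.mse_loss(pred, noise)  # standard denoising objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Looped over a few thousand steps on a cache of captioned photos of one person, a step like this is enough to bias the model toward that person, which is the low barrier described above.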

(2) OpenAI started in 2015 as a nonprofit whose goal was to democratize AI, but running AI systems requires powerful computers that can cost hundreds of millions of dollars. To solve that problem, OpenAI took a $1 billion investment from Microsoft Corp. in 2019 in exchange for giving the tech giant first rights to commercialize OpenAI’s discoveries. OpenAI has since released fewer and fewer details about new models such as DALL-E 2, often to the dismay of some computer scientists.

(3) Among the many examples of the trope: Robinhood Markets Inc. wants to “democratize finance” (it makes an app for trading stocks and crypto assets), while the controversial startup Clearview AI wants to “democratize facial recognition.”

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.

More stories like this are available at bloomberg.com/opinion
