Meta has unveiled Make-A-Scene, an AI research tool that uses sketches and text to produce new artwork. Shared by Mark Zuckerberg, the tool is designed to give creative control to anyone, artists and non-artists alike.
Most AI systems rely solely on text prompts to generate images, but given the limitations of text, it's often difficult to predict how the resulting image will turn out.
Make-A-Scene, meanwhile, allows users to “convey their vision with greater specificity, using a variety of elements, forms, arrangements, depth, compositions, and structures,” a post on Meta’s AI blog reads.
“This multimodal generative AI method puts creative control in the hands of people who use it by allowing them to describe and illustrate their vision through both text descriptions and freeform sketches.”
The system works by learning the components of a sketch that are likely to matter to its creator, such as objects and animals. From there, it assesses the quality of images produced by different generative models. Make-A-Scene can still generate an image from a text-only prompt if that's what a creator chooses.
Meta noted that the AI was trained on millions of example images from public datasets, and as such, "the bias reflected in the training data affects the output of those models."
“The AI industry is still in the early days of understanding and addressing these challenges, and there’s a lot more work to be done,” Meta said.
Excited to announce Make-A-Scene, our latest research tool Mark Zuckerberg just shared. Make-A-Scene is an exploratory concept that gives creative control to anyone, artists & non-artists alike to use both text & sketches to guide AI image generation: https://t.co/p9HNFy3VeY
— Meta AI (@MetaAI) July 14, 2022
In other tech news, take a closer look at the Nothing Phone (1).