According to the Meta blog post "New AI Research Tool Turns Ideas Into Art," users can now create a digital painting without ever picking up a paintbrush, or generate storybook illustrations to accompany their words.
Meta has showcased an artificial intelligence (AI) research concept called Make-A-Scene that will allow people to bring their visions to life. Make-A-Scene empowers people to create images using text prompts and freeform sketches. Prior image-generating AI systems typically used text descriptions as input, but the results could be difficult to predict.
With Make-A-Scene, this is no longer the case. The system demonstrates how people can use both text and simple drawings, combining a variety of elements, to convey their visions with greater specificity.
‘Weird Dall-E Mini Generations’ is a good place to find examples: some are highly useful and applicable in new contexts, while others are simply strange, mind-warping interpretations that show how the AI system views the world.
One of the more interesting AI applications is Dall-E, an AI-powered tool that lets you enter any text input – like ‘horse using social media’ – and it will pump out images based on its understanding of that data.
Meta’s new ‘Make-A-Scene’ system also uses text prompts, as well as input drawings, to create wholly new visual interpretations.
Make-A-Scene seeks to provide more controls to help guide the output – so it’s like Dall-E but, in Meta’s view at least, a little better, with the capacity to use more inputs to guide the system.
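To make the dual-input idea concrete: neither Dall-E nor Make-A-Scene exposes a public API, so the sketch below is purely hypothetical. The `build_scene_request` helper and its payload format are invented for illustration; it simply shows how a text prompt (content and style) and a freeform sketch (here, a coarse grid of region labels standing in for a drawing) could be packaged together as the two guiding inputs.

```python
# Hypothetical illustration only: neither Dall-E nor Make-A-Scene
# offers a public API, and this payload format is invented here.
def build_scene_request(prompt: str, sketch: list) -> dict:
    """Package a text prompt and a label-grid sketch into one request payload.

    The sketch is a rectangular grid of region labels, a stand-in for the
    freeform drawing that pins down layout, while the prompt supplies the
    content and style of the image.
    """
    if not prompt:
        raise ValueError("a text prompt is required")
    rows = len(sketch)
    cols = len(sketch[0]) if rows else 0
    if any(len(row) != cols for row in sketch):
        raise ValueError("all sketch rows must have the same length")
    return {
        "prompt": prompt,
        "sketch": {"height": rows, "width": cols, "regions": sketch},
    }

# The sketch fixes the layout (sky above, horse on grass below) while the
# prompt supplies subject and style, mirroring Make-A-Scene's two inputs.
request = build_scene_request(
    "a horse grazing at sunset, oil-painting style",
    [["sky", "sky", "sky"],
     ["grass", "horse", "grass"]],
)
```

Separating layout (the sketch) from content and style (the prompt) is what gives this kind of system its extra predictability compared with text-only generation.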
Computer systems have come a long way in interpreting different inputs, and AI networks can now understand much of what we communicate, and what we mean, in a visual sense.
Eventually, these machine learning processes will learn and understand more about how humans see the world. This could help power a range of functional applications, like autonomous vehicles, accessibility tools, improved AR and VR experiences, and more.