Google's newest creation is a tool based on artificial intelligence (AI) that can turn any text into an image. Presented on Monday (23rd), the technology is not yet slated for public release, as it still needs some improvements, but it has already drawn the attention of experts.
Named “Imagen”, the tool works similarly to DALL-E, a program developed by OpenAI that generates images from text. Quite simply, you open the software, type any description that comes to mind, such as “two robots sitting at a table with the Eiffel Tower in the background”, and wait for the magic to happen.
The platform’s AI can draw on a number of styles to generate the image, such as oil paintings, photographs, and CGI renderings. In some cases the result is not what the user expected, with the scene rendered by the search giant’s system appearing unfinished, blurred, or smudged; in others, the illustration is quite realistic.
Photo generated by Google’s new text-to-image converter. Source: Google/Disclosure
A team of human evaluators hired by the Mountain View company analyzed the images generated by the Imagen technology, comparing them with illustrations created by competing software. In several of these comparisons, Google’s system delivered better results than DALL-E, which has been in development for longer.
Although it has proved capable of generating images closer to reality, the Google program that creates images from text has some limitations and problems. Without revealing further details, the company said the AI also encodes social biases, often producing results that are racist, sexist, or toxic in some other way.
The AI has shown surprising results. Source: Google/Disclosure
As The Verge points out, the problem may lie in the web data used to train the algorithm, which was probably collected without any kind of curation and includes content that reflects social stereotypes and derogatory associations with certain marginalized groups. This caused the tool to learn and replicate some of the behaviors found in the online environment.
Only images of animals, food, and objects were shown by the Mountain View company. Source: Google/Disclosure
The company itself acknowledged that the software may prioritize “imagery of people with lighter skin tones and a trend towards images depicting different professions to align with Western gender stereotypes”. Notably, Google did not release any of the photos of people generated by the engine.
More images automatically created by the technology. Source: Google/Disclosure
Because of these problems, the technology still needs improvements and testing before it can be released to the general public; in fact, the tech giant has not even confirmed that it will be made available at all.