The US tech giant, which only released its revamped Gemini AI in some parts of the world on February 8, said it was “working to address recent issues” with the image generation feature.
“While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” the company said in a statement.
This came two days after an X (formerly Twitter) user posted images showing Gemini’s results for the prompt “generate an image of a 1943 German soldier”.
The AI had generated four images of soldiers: one was white, one was Black, and two were women of colour, according to the X user, named John L.
Tech companies see AI as the future for everything from search engines to smartphone cameras.
But AI programs, not only those produced by Google, have been widely criticised for perpetuating racial biases in their results.
“@GoogleAI has a bolted on diversity mechanism that someone did not think through very well or test,” John L wrote on X.
Big tech firms have often been accused of rushing out AI products before they have been properly tested.
And Google has a chequered history of launching AI products.
Last February the firm apologised after an ad for its newly released Bard chatbot showed the program getting a basic question about astronomy wrong.
AFP