Washington Examiner

Google apologizes for AI bot’s inaccurate portrayal of ‘diverse’ historical figures


Google has issued an apology after its artificial intelligence chatbot, Gemini, generated images that depicted historical figures with inaccurate racial and ethnic diversity. Jack Krawczyk, the head of product for Google’s AI division, released a statement acknowledging the issue and assuring users that the company is working to fix it immediately.

Krawczyk emphasized that Google takes representation and bias seriously and that its image generation capabilities are designed to reflect its global user base. He also said the company will be “tuning” the model behind Gemini to account for a more nuanced historical context.

The inaccuracies in the generated images sparked criticism from conservative users who claimed it was evidence of the AI model being too “woke.” Computer scientist Debarghya Das expressed frustration, stating that it was difficult to get Gemini to acknowledge the existence of white people. Babylon Bee writer Frank Fleming even turned it into a game, attempting to get Gemini to create an image of a Caucasian male.

This is not the first time Google has faced accusations regarding diversity issues. Almost a decade ago, the company had to apologize for its photo app labeling an image of a black couple as “gorillas.”

Google Gemini, previously known as Google Bard, was launched in March 2023 as a chatbot powered by Google’s large language model. It has undergone multiple upgrades and was renamed “Gemini” in February 2024 to reflect its advanced technology.


In conclusion, Google’s apology for the racially inaccurate images generated by Gemini reflects its commitment to addressing diversity and bias issues. The company’s efforts to improve the AI model and ensure accurate representation are commendable steps toward fostering inclusivity and understanding in the digital world.

