Google has been slow with its AI rollout, and the episode that transpired this week explains the reasons for its hesitancy. The company was forced to pause its AI image generator tool after it made mistakes when depicting historical figures. Many people were unimpressed with the results Google's Gemini AI image generator produced, which has forced the company not only to halt the tool's availability but also to admit the mistakes that were made.
So, how did Google's AI tech get it so wrong, and why did this happen? Google's Prabhakar Raghavan has tried to explain the issue in a blog post, highlighting the worrying concerns that AI still poses for the world.
He mentions that the AI tool struggled to distinguish between requests for a wide range of people and requests for specific ones, and that the model became extra cautious to avoid any major lapses or offending anyone. “These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” Raghavan points out.
Google's reasons do make sense, but they do not fully explain why the AI model misinterpreted the prompt instead of following it faithfully. AI models are trained on extensive datasets, yet even then these tools struggle with prompts related to ethnicity and sex, or even historical facts.
For instance, an AI model should not depict German soldiers from the World War era with historically inaccurate ethnicities. Google has understandably decided to keep the AI model offline for further training so it gets these facts correct in the future.
The company has anticipated these issues before, and now that they have become a reality, changes are needed before concerns about AI spread further.