Failures with Google's generative AI "will gradually erode our trust in Google."

Google had a hectic Memorial Day weekend as it scrambled to contain the fallout from some outrageous suggestions made by the new AI Overview feature in its Search product. In case you were enjoying a beachside tanning session or some hot dogs and beer instead of scrolling through X and Instagram (META), let me catch you up.

AI Overview is meant to deliver generative AI-based answers to search queries, and it usually does. Over the past week, however, it has also informed users that Barack Obama was the first Muslim president and that they can use nontoxic glue to keep cheese from sliding off their pizza.

In response, Google removed the offending answers and said it would use the mistakes to improve its systems. But the incidents could gravely harm Google’s reputation, coming as they do on the heels of the company’s disastrous debut of its Gemini image generator, which produced historically inaccurate images.

“Google is supposed to be the premier source of information on the internet,” explained Chinmay Hegde, associate professor of computer science and engineering at NYU’s Tandon School of Engineering. “And if that product is compromised, our trust in Google will gradually decline.”

Google’s AI errors
The issues with AI Overview are not the first difficulties Google has encountered since launching its generative AI effort. In a promotional video released in February 2023, Google’s chatbot Bard, which the company relaunched as Gemini a year later, famously displayed an inaccuracy in one of its responses, sending Google’s stock tumbling.

Then there was the Gemini image generator, which produced pictures of people in historically inaccurate settings, such as racially diverse German soldiers in 1943.

Google had tried to correct for historical bias in AI by having the model depict a greater mix of ethnicities when generating images of people. But the company overcorrected, and the software ended up refusing some requests for images of people from particular backgrounds. Google apologized for the incident and took the feature offline for a short while.

Meanwhile, Google said the problems with AI Overview surfaced when users posed unusual queries. A Google representative explained that the rock-eating suggestion traced back to a website that syndicates geology articles from other sources onto its platform, including a piece that originally appeared in The Onion. AI Overview provided a link to that information.

While there are valid explanations, it is becoming increasingly tiresome that Google keeps releasing products with defects it then has to explain away.

PC Soni Editor

Categorized in: Artificial Intelligence

Last Update: 3 July 2024
