Proceed With Caution: Famous AI Failures
We all make mistakes! From Google's Gemini pause to Air Canada's empathetic chatbot, here are some unexpected ways artificial intelligence went rogue.
Artificial intelligence has achieved revolutionary breakthroughs, generating text, images, and videos, and even serving as assistants and imitators. However, it is not without its flaws. We've compiled some infamous AI fails to both entertain you and remind you to stay vigilant when using these tools.
Let’s start with the least expected: Google pausing Gemini’s image-generation feature.
“It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn't work well,” stated Prabhakar Raghavan, a senior vice president at Google, in a company blog post.
What went wrong? An attempt to diversify image outputs across gender, race, and other characteristics backfired. The tool was criticized for over-diversifying its depictions of historical figures and producing inaccurate, and in some cases offensive, images. Examples include a female pope, an Asian founding father, and a Black Nazi.
This incident not only damaged Gemini’s credibility but also cost Google’s parent company, Alphabet, around $90 billion in market value in a single day.
Google is not the only company pausing AI features. OpenAI was quick to pause its new “Sky” voice for ChatGPT when actress Scarlett Johansson called out the voice for being an imitation of her own.
Johansson claimed she declined an offer from Sam Altman, CEO of OpenAI, to be the voice of ChatGPT.
“When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine," Johansson wrote in a statement. "Mr Altman even insinuated that the similarity was intentional, tweeting a single word 'her' - a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human."
OpenAI maintains that it hired paid voice actors and that the resemblance between “Sky” and Johansson’s voice is purely coincidental. (We’re not too sure, but let us know what you think in the comments!)
On a lighter note, some chatbots have been caught lying. A passenger flying Air Canada asked the airline’s chatbot about bereavement fares and received a surprising response. The chatbot empathetically advised that a bereavement discount could be claimed up to 90 days after purchasing a regular ticket, a policy far more lenient than the airline’s actual one.
In reality, Air Canada’s bereavement discounts cannot be applied retroactively once a ticket has been purchased. The passenger took the airline to a tribunal, arguing that it was responsible for the information provided by its chatbot.
What do you think of this case? (Spoiler: the passenger won.)
More issues have surfaced around AI inaccuracies, some of which demand regulatory attention. According to Forbes, the following cases illustrate the serious risks AI errors can pose:
A senior citizen cut off from healthcare coverage due to a faulty algorithm;
A father wrongfully imprisoned due to biased facial recognition technology;
A woman threatened by an abusive partner with explicit deepfake photos.
These are just a few examples among many. As we are still in the early stages of AI development, it's clear that regulatory precautions are necessary to avoid serious risks and implications.
Companies need to conduct thorough reviews and extensive trial-and-error testing before releasing AI tools and chatbots, given the potential for unexpected issues. No AI is perfect, and mistakes are inevitable. Creating more reliable AI tools will require patience and oversight.
Best,
Ariana for the Don’t Count Us Out Yet Team