“60 Minutes” made a badly misleading claim about a Google AI chatbot

Since OpenAI unleashed ChatGPT on the world, we’ve watched people react to it in ways you wouldn’t believe. Some have claimed that chatbots harbor agendas of their own. US Senator Chris Murphy tweeted that ChatGPT “taught” itself advanced chemistry. Even seasoned tech journalists have written stories about how the chatbot fell in love with them. It seems as if the world is reacting to AI much the way cavemen likely reacted when they first saw fire: with utter confusion and incoherent babble.

One of the most recent examples comes from 60 Minutes, which threw its hat into the ring with a new episode focused on innovations in artificial intelligence that aired Sunday on CBS. The episode featured interviews with the likes of Google CEO Sundar Pichai, and it included questionable claims about one of the company’s large language models (LLMs).

The segment is about emergent behavior, which describes an unexpected side effect of an AI system that was not necessarily intended by the model’s developers. We’ve already seen emergent behavior crop up in other recent AI projects. For example, in a study published online last week, researchers used ChatGPT to create digital personas with goals and backgrounds. They note that the system performed various emergent behaviors, such as passing new information from one character to another and even forming relationships with one another, something the authors had not originally planned for the system to do.

Emergent behavior is certainly a worthwhile topic to discuss on a news program. But the 60 Minutes segment takes a turn when we hear claims that Google’s chatbot was able to teach itself a language it didn’t previously know after being prompted in that language. “For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know,” CBS News reporter Scott Pelley said in the segment.


AI experts say this is complete nonsense. Not only could the bot not have learned a foreign language it was “never trained to know,” but it never taught itself a new skill at all. The clip prompted AI researchers and experts to criticize the news show’s misleading framing on Twitter.

“I certainly hope some journalists will review the entire @60Minutes segment on Google Bard as a case study in how *not* to cover AI,” Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, wrote in a tweet.

“Stop magical thinking about technology! It is not possible for #AI to respond in Bengali unless the training data was contaminated with Bengali, or the model was trained on a language that overlaps with Bengali, such as Assamese, Oriya or Hindi,” M. Alex O., a researcher at the Massachusetts Institute of Technology, added in another post.

It is worth noting that the 60 Minutes clip didn’t say exactly which AI was being discussed. A CBS spokesperson told The Daily Beast that the segment was not about Bard but about a separate AI program. Still, the reason this part is so frustrating to these experts is that it ignores and plays fast and loose with the reality of what generative AI can actually do. It can’t “teach” itself a language it doesn’t have access to in the first place. That would be like trying to teach yourself Mandarin when you’ve only ever heard it once, after someone asked you a question in Mandarin.

After all, language is incredibly complex, with nuances and rules that require an incredible degree of context to understand and use. There is no way for even the most advanced LLM to absorb and learn all of that from a few prompts.

Moreover, the AI software may well have already been trained to know Bengali, the dominant language of Bangladesh. Margaret Mitchell (no relation), a researcher at the AI startup HuggingFace and formerly of Google, explained as much in a tweet thread laying out why 60 Minutes likely got it wrong.


Mitchell noted that Bard, Google’s recently introduced publicly available chatbot, includes work from an earlier iteration of its model called PaLM. In a 2022 demonstration, Google showed that PaLM could communicate and respond to prompts in Bengali. The paper behind PaLM revealed in a data sheet that the model was in fact trained on the language, with roughly 194 million tokens of Bengali text.

Although we don’t know what the mysterious separate AI program was, it likely relied on the same work behind PaLM, which would explain the presence of Bengali in its dataset.

It is unclear why Pichai, the CEO of Google, sat for the interview and allowed these claims to go unchallenged. (Google did not respond to requests for comment.) Since the episode aired, he has remained silent despite experts pointing out the misleading and false claims made in the clip. On Twitter, Margaret Mitchell suggested the reason could be a combination of Google’s leaders not knowing how their own products work and a willingness to let sloppy messaging spread in order to feed the current hype around generative AI.

“I suspect [Google executives] literally don’t understand how it works,” Mitchell tweeted. “What I wrote above is most likely news to them. And they are motivated not to understand (TURN YOUR EYES TO THIS DATA SHEET!!).”

The second half of the segment can also be seen as problematic, as Pichai and Pelley discuss a short story Bard created that “sounded very human,” leaving the two men somewhat shaken.

The truth is, these products aren’t magic. They can’t be “human” because they aren’t human. They are text predictors like the one on your phone, trained to come up with the most likely words and phrases to follow a given sequence of words and phrases. Suggesting otherwise grants them a level of authority that could be incredibly dangerous.
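To make the “text predictor” point concrete, here is a minimal sketch of the idea behind next-word prediction: count which word tends to follow which in some training text, then always emit the most likely continuation. The tiny corpus, the bigram counting, and the function name are all illustrative assumptions, not anyone’s actual system; real LLMs use neural networks over tokens, but the core task of predicting a plausible continuation is the same.

```python
from collections import Counter, defaultdict

# Toy "training corpus" -- a purely illustrative stand-in for web-scale text.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word seen during training, or None if unseen."""
    candidates = following.get(word)
    if not candidates:
        return None  # the model cannot "teach itself" words it never saw
    return candidates.most_common(1)[0][0]

print(predict_next("the"))    # -> "cat" (follows "the" twice in the corpus)
print(predict_next("mat"))    # -> "because"
print(predict_next("বাংলা"))   # -> None: no Bengali in the training data, no Bengali out
```

The last line mirrors the experts’ point: a predictor, however large, can only reproduce patterns that were present in some form in its training data.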


After all, people can use these generative AI systems to do things like spread misinformation. We’ve already seen them used to manipulate people’s likenesses and even clone their voices.

Even a chatbot on its own can cause harm if it ends up producing biased results, something we’ve already seen with the likes of ChatGPT and Bard. Given these chatbots’ propensity to hallucinate and fabricate results, they may well spread misinformation to unsuspecting users.

Research bears this out, too. A recent study published in Scientific Reports found that human responses to moral questions could easily be swayed by arguments made by ChatGPT, and that users greatly underestimated how much the bot had influenced them.

The misleading claims on 60 Minutes are really just a symptom of a larger problem: the need for digital literacy at a time when we need it most. Many AI experts say that now, more than ever, people need to understand what AI can and cannot do, and that these basic facts about chatbots must be communicated clearly to the broader public.

That means the people with the biggest platforms and the loudest voices (the media, politicians, and Big Tech CEOs) bear the most responsibility for ensuring a safer and better-educated AI future. If they don’t, we could all end up like the aforementioned cavemen, playing with the magic of fire and getting burned in the process.

Editor’s Note: A previous version of this story said that the AI discussed in the 60 Minutes segment was Bard; it was a separate Google AI program.
