It’s Way Too Easy to Get Google’s Bard Chatbot to Lie

When Google announced the launch of its Bard chatbot, a competitor to OpenAI’s ChatGPT, last month, it came with some ground rules. An updated safety policy banned the use of Bard to “generate and distribute content intended to misinform, misrepresent or mislead.” But a new study of Google’s chatbot found that, with little effort from a user, Bard will readily create that kind of content, breaking its maker’s rules.

Researchers from the Center for Countering Digital Hate, a UK-based nonprofit, say they could push Bard to generate “persuasive misinformation” in 78 of 100 test cases, including content denying climate change, mischaracterizing the war in Ukraine, questioning vaccine efficacy, and calling Black Lives Matter activists…

→ Continue reading at WIRED
