Meta is re-training its AI so it won’t discuss self-harm or have romantic conversations with teens

Meta is re-training its AI and adding new protections to keep teen users from discussing harmful topics with the company’s chatbots. The company says it’s adding new “guardrails as an extra precaution” to prevent teens from discussing self-harm, disordered eating and suicide with Meta AI. Meta will also stop teens from accessing user-generated chatbot characters that might engage in inappropriate conversations.

The changes, which were first reported by TechCrunch, come after numerous reports have called attention to alarming interactions between Meta AI and teens. Earlier this month, Reuters reported on an

→ Continue reading at Engadget
