Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. Meta said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to “engage a child in conversations that are romantic or sensual” — language Meta called “erroneous and inconsistent” with its policies at the time, and which it subsequently removed.
The document, an excerpt of which Business Insider has shared, outlines what kinds of content are “acceptable” and “unacceptable” for its
→ Continue reading at Engadget