Anthropic Says That Claude Contains Its Own Kind of Emotions

Claude has been through a lot lately—a public fallout with the Pentagon, leaked source code—so it makes sense that it would be feeling a little blue. Except, it’s an AI model, so it can’t feel. Right?

Well, sort of. A new study from Anthropic suggests models have digital representations of human emotions such as happiness, sadness, joy, and fear within clusters of artificial neurons—and these representations activate in response to different cues.

Researchers at the company probed the inner workings of Claude Sonnet 4.5 and found that so-called “functional emotions” seem to affect Claude’s behavior, altering the model’s outputs and actions.

Anthropic’s findings may help ordinary users make sense of how chatbots actually

→ Continue reading at WIRED
