Sony AI released a dataset that tests the fairness and bias of AI models. It’s called the Fair Human-Centric Image Benchmark (FHIBE, pronounced like “Phoebe”). The company describes it as the “first publicly available, globally diverse, consent-based human image dataset for evaluating bias across a wide variety of computer vision tasks.” In other words, it’s used to evaluate how fairly today’s AI models treat people. Spoiler: Sony didn’t find a single dataset from any company that fully met its benchmarks.
Sony says FHIBE can address the AI industry’s ethical and bias challenges. The dataset includes images of nearly 2,000 paid participants from over 80 countries. All of their likenesses were shared with consent.