Web crawlers deployed by Perplexity to scrape websites are allegedly skirting restrictions, according to a new report from Cloudflare. Specifically, the report claims that the company’s bots appear to be “stealth crawling” sites by disguising their identity to get around robots.txt files and firewalls.
Robots.txt is a simple file a website hosts that tells web crawlers whether they may scrape the site’s content. Perplexity’s official web crawling bots are “PerplexityBot” and “Perplexity-User.” In Cloudflare’s tests, Perplexity was still able to display the content of a new,
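As an illustration of the robots.txt mechanism described above, the sketch below uses Python’s standard-library `urllib.robotparser` to evaluate a hypothetical robots.txt that blocks Perplexity’s declared crawlers by name (the user-agent strings match those above; the URL is a placeholder). Compliance is voluntary, which is why disguising the user agent sidesteps it:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt disallowing Perplexity's declared crawlers site-wide.
rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler checks can_fetch() before requesting a page.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

Because the check keys off the self-reported user-agent string, a crawler that identifies itself as anything else is simply not matched by these rules.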
→ Continue reading at Engadget