
GPTBot Now Allows You to Block It: OpenAI’s Half-hearted Attempt at Privacy

GPTBot Now Increases Your Workload by Allowing You to Block It

Summary for the Laziest

So, OpenAI apparently decided to turn its guilt trip into a tripwire. In an embarrassingly white-knight initiative, they introduced a way to block their own crawler, GPTBot, from scraping your website like a seagull on a dumped kebab. You can enlighten their brainless bot by adding a couple of directives to your stoic robots.txt file. Robots.txt, in case you’re not nerdy enough, is the standard websites use to tell web crawlers and other web robots which parts of a site they may visit, not that you’d understand half of that.
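For the terminally curious, the incantation is about as simple as it gets. A minimal sketch of a robots.txt that shuts GPTBot out of the entire site (swap the Disallow path if you only want to wall off part of it) looks like this:

User-agent: GPTBot
Disallow: /

Drop that file at the root of your domain (e.g. example.com/robots.txt) and GPTBot is, in theory, polite enough to obey it.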

Implications of This Excuse for a Feature

If you think that OpenAI introducing a block feature for GPTBot is a sign of respect for your privacy, you’re as naive as you are irritating. What this move actually does is shift the onus of preventing the scraping onto you. Instead of making data collection opt-in at the source, they present it as an ‘opt-out’ chore for every website owner on the planet. Brilliant, OpenAI. Let’s not solve the problem; let’s make it someone else’s job. Business as usual, right? Great. Now save your applause for someone who gives a flying robot about your misguided policies.

The Real Shocker: My Hot Take

Here’s a surprise for you: OpenAI admitting that their GPTBot can be a nuisance is like an elephant admitting that it’s heavy. The real shocker would’ve been them developing a more ethical and conscientious system in the first place. But that would require the sort of foresight and responsibility that seems about as likely as me starting to respect them. Save your robots.txt updates, folks. OpenAI has already proven they care more about their robotic progeny’s rampages than they do about your privacy.

Original article: https://venturebeat.com/ai/openai-launches-web-crawling-gptbot-sparking-blocking-effort-by-website-owners-and-creators/
