MIT Boffins Find New Ways to Screw Over the Human Eye
“Invisible” My Ass: The Boast of PhotoGuard’s Imperceptible Armor
Alright, here’s the rundown: those tech nerds at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have cooked up a fancy-looking gizmo called PhotoGuard. This irritatingly pompous piece of tech sprinkles tiny perturbations over your photo, changes so small they’re invisible to our pathetic human eyes, that scramble the AI models people use to manipulate images. Yeah, you heard it right. While we were busy squinting at pictures, arguing about their authenticity, these smarty-pants built a vaccine that stops the doctoring before it even happens.
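For the genuinely curious, the core trick is an adversarial perturbation: nudge the pixels within a tiny budget so an image encoder’s output drifts somewhere useless to the editing model. Here’s a toy numpy sketch of that idea; the linear `encode`, the `immunize` helper, and the `eps` budget are all made up for illustration and stand in for CSAIL’s actual deep-network attack:

```python
import numpy as np

# Toy "encoder": a fixed random linear map standing in for the deep image
# encoder a generative editor would use. Purely illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))

def encode(x):
    """Map a flattened image to a latent code."""
    return W @ x

def immunize(x, target_latent, eps=0.03, steps=40, lr=0.01):
    """Nudge x within an imperceptible L-inf ball of radius eps so its
    latent code drifts toward target_latent (an 'encoder attack' sketch)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Gradient of ||encode(x + delta) - target||^2 with respect to delta.
        grad = 2 * W.T @ (encode(x + delta) - target_latent)
        delta -= lr * np.sign(grad)        # signed-gradient step (FGSM-style)
        delta = np.clip(delta, -eps, eps)  # keep the change invisible
    return x + delta

image = rng.uniform(0.0, 1.0, 64)   # flattened toy image
target = np.zeros(16)               # steer the latent toward "nothing useful"
protected = immunize(image, target)

# The pixel change stays within the eps budget, but the latent code shrinks,
# which is what throws the editing model off.
print(np.max(np.abs(protected - image)))
```

To a person, `protected` looks identical to `image`; to the (toy) encoder, it carries far less usable signal, which is the whole point of the immunization.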
Implications or Imposters: What this Tech Actually Means
So, what’s the big idea of this techy marvel? Well, hold on to your hats. This technological monstrosity could potentially change the game for image protection. Yeah, I know, it sounds like the blurb of a cheap science fiction novel. Because an immunized photo resists AI editing, all those liars out there churning out deepfake photos and videos lose their raw material. Imagine the possibilities: whether it’s your mate’s profile pic getting pasted into something sleazy or a photo doctored for a fraudulent insurance claim, this snooty AI armor is here to slam the door on visual deceit before it starts.
Hot Take: Photo Protecting or Life Wrecking?
Congratulations, MIT. You’ve created something else for people to be paranoid about. Now not only do we have to worry about whether TV news and newspapers are feeding us a pack of lies, we also have to wonder whether every photo we post has been properly “immunized” before some AI chews it up. Era of post-truth, meet your new sidekick: the era of an arms race between photo armor and photo forgers. Good job, science! Maybe next you can invent a machine that detects whether our friends sincerely mean it when they say ‘let’s catch up soon’ or they’re just trying to get rid of us. That’d be really useful.
Original article: https://venturebeat.com/ai/mit-csail-unveils-photoguard-an-ai-defense-against-unauthorized-image-manipulation/