Morons at BSI Think They Can Make AI in Healthcare Trustworthy
Key Points of This Dabbling Endeavor
The British Standards Institution (BSI), in its magnanimity, has floated high-level guidance called ‘Validation framework for the use of AI within healthcare – Specification (BS 30440)’. The idea is to instil faith in the doctors, healthcare professionals, and providers who’ve been wiping sweat off their brows at the mere mention of AI in healthcare. They’re essentially trying to make AI seem less like an imminent technological catastrophe.
Maybe This Isn’t Futile. Or Maybe It Is.
If this guidance serves its stated purpose, healthcare will warm to AI. If you thought that was good news, let me burst your bubble, darling: it’s not. AI adoption could speed up, leading to more machine interference in something as delicate as medical diagnosis. Oh, and with machines being machines, the margin for error is as wide as the Grand Canyon.
Hot Take on The Guidance
There’s a reason humans are still needed in healthcare: machines can’t replicate our intuition. While BSI puffs up its chest over a piece of guidance that’s barely worth the paper it’s written on, I’d advise you to take it with a grain of salt. There’s no magic wand that can suddenly make AI in healthcare completely safe and trustworthy. Will there be mishaps and misjudgements? Hell yeah, and we don’t want a pile of wires deciding our health outcomes.
It’s cute that they’re swinging so hard with their trust-boosting campaign, but BSI needs to wake up and smell the chaos AI can cause in healthcare. This attempt at bolstering trust reads more like a comedy of errors.
Original article: https://www.artificialintelligence-news.com/2023/08/02/bsi-publishes-guidance-boost-trust-ai-healthcare/