- The EU AI Act requires AI explainability and accountability
- Only 38% of workers can accurately pinpoint who's accountable in their business
- More than half (59%) aren't even sure how quickly they could shut down AI in a crisis
Despite rapid AI adoption, new research from ISACA suggests many businesses might be going in blindly – more than half (59%) of UK businesses wouldn't even know how quickly they could stop AI during a crisis.
Only around one in five (21%) say they'd feel confident stopping an AI system within 30 minutes, highlighting major safety gaps.
And it's not just shutting them down that's a problem – not even half (42%) say they could explain an AI failure to leadership or regulators.
Are businesses blind about the risks of AI?
ISACA explained that the gaps aren't just concerning for business operations and reputation, but also from a legislative standpoint: the EU AI Act requires explainability and accountability.
Part of the failure comes down to unclear accountability, with 20% of workers unsure who is responsible for AI failures. Poor visibility is also a contributing factor, with one in three organizations not requiring AI use at work to be disclosed, which ISACA says is a nightmare for blind spots.
The report explains that businesses are currently treating this as a technical problem, but they should instead approach it as an organization-wide governance challenge. "Truly closing the gap can't be done by process changes alone," Chief Global Strategy Officer Chris Dimitriadis wrote. "Rather, it will require professionals who have the expertise to evaluate AI risk rigorously, embed oversight across the full lifecycle."
Looking ahead, businesses are being urged to define accountability at the senior level and to start rolling out better visibility and auditing. Besides this, they must also build AI incident response into their strategies and factor it into their broader cybersecurity postures.
With only 38% of respondents able to identify the board or an executive as accountable in the event of an AI incident, it's clear more needs to be done to disseminate information and processes throughout the workforce.