
ZDNET's key takeaways
- IT, engineering, data, and AI teams now lead responsible AI efforts.
- PwC recommends a three-tier "defense" model.
- Embed, don't bolt on, responsible AI in everything.
"Responsible AI" is a very hot and important topic these days, and the onus is on technology managers and professionals to ensure that the artificial intelligence work they are doing builds trust while aligning with business goals.
Fifty-six percent of the 310 executives participating in a new PwC survey say their first-line teams -- IT, engineering, data, and AI -- now lead their responsible AI efforts. "That shift puts responsibility closer to the teams building AI and sees that governance happens where decisions are made, refocusing responsible AI from a compliance conversation to that of quality enablement," according to the PwC authors.
Also: Consumers more likely to pay for 'responsible' AI tools, Deloitte survey says
Responsible AI -- associated with eliminating bias and ensuring fairness, transparency, accountability, privacy, and security -- is also relevant to business viability and success, according to the PwC survey. "Responsible AI is becoming a driver of business value, boosting ROI, efficiency, and innovation while strengthening trust."
"Responsible AI is a team sport," the report's authors explain. "Clear roles and tight hand-offs are now essential to scale safely and confidently as AI adoption accelerates." To leverage the advantages of responsible AI, PwC recommends rolling out AI applications within an operating structure with three "lines of defense."
- First line: Builds and operates responsibly.
- Second line: Reviews and governs.
- Third line: Assures and audits.
The challenge to achieving responsible AI, cited by half the survey respondents, is converting responsible AI principles "into scalable, repeatable processes," PwC found.
About six in ten respondents (61%) to the PwC survey say responsible AI is actively integrated into core operations and decision-making. Roughly one in five (21%) report being in the training stage, focused on developing employee training, governance structures, and practical guidance. The remaining 18% say they're still in the early stages, working to build foundational policies and frameworks.
Also: So long, SaaS: Why AI spells the end of per-seat software licenses - and what comes next
Across the industry, there is debate on how tight the reins on AI should be to ensure responsible applications. "There are definitely situations where AI can provide great value, but rarely within the risk tolerance of enterprises," said Jake Williams, former US National Security Agency hacker and faculty member at IANS Research. "The LLMs that underpin most agents and gen AI solutions do not create consistent output, leading to unpredictable risk. Enterprises value repeatability, yet most LLM-enabled applications are, at best, close to correct most of the time."
As a result of this uncertainty, "we're seeing more organizations roll back their adoption of AI initiatives as they realize they can't effectively mitigate risks, particularly those that introduce regulatory exposure," Williams continued. "In some cases, this will result in re-scoping applications and use cases to counter that regulatory risk. In other cases, it will result in entire projects being abandoned."
8 expert guidelines for responsible AI
Industry experts offer the following guidelines for building and managing responsible AI:
1. Build in responsible AI from start to finish: Make responsible AI part of system design and deployment, not an afterthought.
"For tech leaders and managers, making sure AI is responsible starts with how it's built," Rohan Sen, principal for cyber, data, and tech risk with PwC US and co-author of the survey report, told ZDNET.
"To build trust and scale AI safely, focus on embedding responsible AI into every stage of the AI development lifecycle, and involve key functions like cyber, data governance, privacy, and regulatory compliance," said Sen. "Embed governance early and continuously.
Also: 6 essential rules for unleashing AI on your software development process - and the No. 1 risk
2. Give AI a purpose -- not just to deploy AI for AI's sake: "Too often, leaders and their tech teams treat AI as a tool for experimentation, generating countless bytes of data simply because they can," said Danielle An, senior software architect at Meta.
"Use technology with taste, discipline, and purpose. Use AI to sharpen human intuition -- to test ideas, identify weak points, and accelerate informed decisions. Design systems that enhance human judgment, not replace it."
3. Underscore the importance of responsible AI up front: According to Joseph Logan, chief information officer at iManage, responsible AI initiatives "should start with clear policies that define acceptable AI use and clarify what's prohibited."
"Start with a value statement around ethical use," said Logan. "From here, prioritize periodic audits and consider a steering committee that spans privacy, security, legal, IT, and procurement. Ongoing transparency and open communication are paramount so users know what's approved, what's pending, and what's prohibited. Additionally, investing in training can help reinforce compliance and ethical usage."
4. Make responsible AI a key part of jobs: Responsible AI practices and oversight need to be as much of a priority as security and compliance, said Mike Blandina, chief information officer at Snowflake. "Ensure models are transparent, explainable, and free from harmful bias."
Also key to such an effort are governance frameworks that meet the requirements of regulators, boards, and customers. "These frameworks need to span the entire AI lifecycle -- from data sourcing, to model training, to deployment, and monitoring."
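As one way to picture what "spanning the entire AI lifecycle" could look like in practice, here is a minimal sketch of a stage-by-stage sign-off gate. The stage names mirror the ones Blandina lists, but the specific checks, the StageReview class, and the ready_to_ship gate are hypothetical, not a framework PwC or Snowflake defines.

```python
# Illustrative lifecycle governance checklist; stages follow the article,
# while the individual checks are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class StageReview:
    stage: str
    checks: list[str]
    completed: set[str] = field(default_factory=set)

    def sign_off(self, check: str) -> None:
        """Record that a named check has been completed for this stage."""
        if check not in self.checks:
            raise ValueError(f"Unknown check for {self.stage}: {check}")
        self.completed.add(check)

    @property
    def cleared(self) -> bool:
        return set(self.checks) <= self.completed


LIFECYCLE = [
    StageReview("data sourcing", ["provenance documented", "licensing reviewed"]),
    StageReview("model training", ["bias evaluation run", "privacy review signed"]),
    StageReview("deployment", ["explainability notes published", "owner assigned"]),
    StageReview("monitoring", ["drift alerts configured", "incident contact listed"]),
]


def ready_to_ship(stages: list[StageReview]) -> bool:
    """A release gate: every lifecycle stage must have all of its checks signed off."""
    return all(stage.cleared for stage in stages)
```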
Also: The best free AI courses and certificates for upskilling - and I've tried them all
5. Keep humans in the loop at all stages: Make it a priority to "continually discuss how to responsibly use AI to increase value for clients while ensuring that both data security and IP concerns are addressed," said Tony Morgan, senior engineer at Priority Designs.
"Our IT team reviews and scrutinizes every AI platform we approve to make sure it meets our standards to protect us and our clients. For respecting new and existing IP, we make sure our team is educated on the latest models and methods, so they can apply them responsibly."
6. Avoid acceleration risk: Many tech teams have "an urge to put generative AI into production before the team has an answer back on question X or risk Y," said Andy Zenkevich, founder & CEO at Epiic.
"A new AI capability will be so exciting that projects will charge ahead to use it in production. The result is often a spectacular demo. Then things break when real users start to rely on it. Maybe there's the wrong kind of transparency gap. Maybe it's not clear who's accountable if you return something illegal. Take extra time for a risk map or check model explainability. The business loss from missing the initial deadline is nothing compared to correcting a broken rollout."
Also: Everyone thinks AI will transform their business - but only 13% are making it happen
7. Document, document, document: Ideally, "every decision made by AI should be logged, easy to explain, auditable, and have a clear trail for humans to follow," said McGehee. "Any effective and sustainable AI governance will include a review cycle every 30 to 90 days to properly check assumptions and make necessary adjustments."
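A minimal sketch of what such a decision trail could look like, assuming one structured record per AI decision written to an append-only store; the field names and the log_ai_decision helper are illustrative, not a prescribed schema.

```python
# Hypothetical audit record for one AI decision; fields are illustrative examples
# of a log that is explainable and easy for humans to follow.
import json
import uuid
from datetime import datetime, timezone
from typing import Optional


def log_ai_decision(model_name: str, inputs: dict, output: str,
                    rationale: str, reviewer: Optional[str] = None) -> dict:
    """Build one audit record; in practice it would go to durable, queryable storage."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "inputs": inputs,            # what the model was asked
        "output": output,            # what it returned
        "rationale": rationale,      # plain-language explanation for auditors
        "human_reviewer": reviewer,  # who, if anyone, reviewed the decision
    }
    print(json.dumps(record))        # stand-in for an append-only audit sink
    return record
```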
8. Vet your data: "How organizations source training data can have significant security, privacy, and ethical implications," said Fredrik Nilsson, vice president, Americas, at Axis Communications.
"If an AI model consistently shows signs of bias or has been trained on copyrighted material, customers are likely to think twice before using that model. Businesses should use their own, thoroughly vetted data sets when training AI models, rather than external sources, to avoid infiltration and exfiltration of sensitive information and data. The more control you have over the data your models are using, the easier it is to alleviate ethical concerns."