Leaders Need to Take Responsibility for, and Action on, Responsible AI Practices – PwC
The estimated $15.7trn economic potential of artificial intelligence (AI)[1] will only be realised if responsible AI practices are integrated across organisations and considered before any development takes place, according to a new paper by PwC.
Combating a piecemeal approach to AI’s development and integration – an approach that exposes organisations to potential risks – requires them to embed end-to-end understanding, development and integration of responsible AI practices, according to a new toolkit published this week by PwC.
PwC has identified five dimensions organisations need to focus on and tailor to their specific strategy, design, development, and deployment of AI: Governance; Ethics and Regulation; Interpretability and Explainability; Robustness and Security; and Bias and Fairness.
The dimensions focus on embedding strategic planning and governance in AI’s development, combating growing public concern about fairness, trust and accountability.
Earlier this year, 85% of CEOs said AI would significantly change the way they do business in the next five years, and 84% admitted that AI-based decisions need to be explainable in order to be trusted[2].
Speaking this week at the World Economic Forum in Dalian, Anand Rao, Global AI Leader, PwC US, says:
“The issues of ethics and responsibility in AI are clearly of concern to the majority of business leaders. The C-suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI in order to balance the potential economic gains with the once-in-a-generation transformation it can bring to business and society. One without the other represents fundamental reputational, operational and financial risks.”
As part of PwC’s Responsible AI Toolkit, a diagnostic survey enables organisations to assess their understanding and application of responsible and ethical AI practices. In May and June 2019, around 250 respondents involved in the development and deployment of AI completed the assessment.
The results demonstrate immaturity and inconsistency in the understanding and application of responsible and ethical AI practices:
- Only 25% of respondents said they would prioritise a consideration of the ethical implications of an AI solution before implementing it.
- One in five (20%) have clearly defined processes for identifying risks associated with AI. Over 60% rely on developers or informal processes, or have no documented procedures.
- Ethical AI frameworks or considerations existed, but enforcement was not consistent.
- 56% said they would find it difficult to articulate the cause if their organisation’s AI did something wrong.
- Over half of respondents have not formalised their approach to assessing AI for bias, citing a lack of knowledge and tools and a reliance on ad hoc evaluations.
- 39% of respondents with AI applied at scale were only “somewhat” sure they would know how to stop their AI if it goes wrong.
Anand Rao, Global AI Leader, PwC US, says:
“AI brings opportunity but also inherent challenges around trust and accountability. Realising AI’s productivity prize requires integrated organisational and workforce strategies and planning. There is a clear need for those in the C-suite to review the current and future AI practices within their organisation, asking questions not just to tackle potential risks, but also to identify whether adequate strategy, controls and processes are in place.
“AI decisions are not unlike those made by humans. In each case, you need to be able to explain your choices, and understand the associated costs and impacts. That’s not just about technology solutions for bias detection, correction, explanation and building safe and secure systems. It necessitates a new level of holistic leadership that considers the ethical and responsible dimensions of technology’s impact on business, starting on day one.”
Also at the launch this week at the World Economic Forum in Dalian, Wilson Chow, Global Technology, Media and Telecommunications Leader, PwC China, added:
“The foundation for responsible AI is end-to-end enterprise governance. The ability of organisations to answer questions on accountability, alignment and controls will be a defining factor to achieve China’s ambitious AI growth strategy.”
PwC’s Responsible AI Toolkit consists of a flexible and scalable suite of global capabilities, and is designed to enable and support the assessment and development of AI across an organisation, tailored to its unique business requirements and level of AI maturity.
SOURCE PwC