How Organizations Can Mitigate the Risks of AI

Has Responsible AI Peaked?

It’s no secret that the pandemic has accelerated the adoption and, more critically, organizations’ desire to adopt artificial intelligence (AI) capabilities. However, it’s notably difficult to make AI work. Only 6% of organizations have been able to operationalize AI, according to PwC’s recent global Responsible AI survey of more than 1,000 participants from leading organizations in the U.S., U.K., Japan, and India. More than half of companies in the survey said they are still experimenting and remain uncommitted to major investments in AI capabilities.

But companies that have an embedded AI strategy can more reliably deploy applications at scale, with more widespread adoption across the business, than those that don’t. Larger companies (those with more than $1 billion in revenue) in particular are significantly more likely to be exploring new use cases for AI (39%), increasing their use of AI (38%), and training employees to use AI (35%).

Responsible AI

While some challenges to operationalization are technical or stem from limited skill sets, a trust gap remains an inhibitor.

A major trend is to incorporate “responsible AI” practices to bridge this trust gap. Responsible AI comprises the tools, processes, and people needed to control AI systems and govern them appropriately for the environment in which an organization wants to operate. It is implemented through technical and procedural capabilities that address bias, explainability, robustness, safety, and security concerns, among others. The intent of responsible AI, which is sometimes referred to as (or conflated with) trusted AI, AI ethics, or beneficial AI, is to develop AI and analytics systems methodically, producing high-quality, documented systems that reflect an organization’s beliefs and values while minimizing unintended harms.

Responsible AI in the enterprise

Appreciation of the new concerns AI can pose to an organization has led to a significant increase in risk mitigation activities. Organizations are pursuing strategies to mitigate the risks of individual applications as well as broader risks posed to the business or to society, which customers and regulators alike are increasingly demanding (Figure 1). These risks arise at the application level, including performance instability and bias in AI decision-making; at the business level, such as enterprise or financial risk; and at the national level, such as job displacement from automation and misinformation. To address these risks and more, organizations are using a variety of risk-mitigation measures, starting with ad hoc measures and advancing to a more structured governance process. More than a third of companies (37%) have strategies and policies to tackle AI risk, a stark increase from 2019 (18%).

Figure 1: Risk taxonomy, PwC

Despite this increased emphasis on risk mitigation, organizations are still debating how to govern AI. Only 19% of companies in the survey have a formal documented process that gets reported to all stakeholders; 29% have a formal process only to address a specific event; and the balance have only an informal process or no clearly defined process at all.

Part of this discrepancy is due to lack of clarity around AI governance ownership. Who owns this process? What are the responsibilities of the developers, compliance or risk-management function, and internal audit?

Banks and other organizations already subject to regulatory oversight of their algorithms tend to have robust functions (“second-line” teams) that can independently validate models. Others, however, have to rely on separate development teams, because their second line does not have the appropriate skills to review AI systems. Some of these organizations are choosing to bolster their second-line teams with more technical expertise, while others are creating more robust guidelines for quality assurance within the first line.
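
Where the review does sit with a technically capable second line, the core of an independent validation can be straightforward: re-score the model on data the developers never touched and compare the results against what the first line reported. Below is a minimal sketch, assuming a scikit-learn-style classifier; the function name, thresholds, and reported-accuracy comparison are illustrative assumptions rather than a prescribed review standard.

```python
# Minimal sketch of an independent ("second-line") validation check.
# Assumes a scikit-learn-style classifier and a holdout set the validators control;
# thresholds are illustrative, not regulatory requirements.
from sklearn.metrics import accuracy_score, roc_auc_score

def second_line_review(model, X_holdout, y_holdout,
                       min_auc=0.75, reported_accuracy=None, max_accuracy_drop=0.05):
    """Re-score the model on unseen data and compare against first-line claims."""
    proba = model.predict_proba(X_holdout)[:, 1]
    preds = model.predict(X_holdout)

    auc = roc_auc_score(y_holdout, proba)
    acc = accuracy_score(y_holdout, preds)

    findings = []
    if auc < min_auc:
        findings.append(f"AUC {auc:.3f} is below the minimum threshold of {min_auc}")
    if reported_accuracy is not None and reported_accuracy - acc > max_accuracy_drop:
        findings.append(
            f"Holdout accuracy {acc:.3f} falls more than {max_accuracy_drop} "
            f"short of the reported {reported_accuracy}"
        )

    # An empty findings list means the model clears this review.
    return {"auc": auc, "accuracy": acc, "findings": findings, "passed": not findings}
```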

Regardless of responsibility, organizations require a standard development methodology, complete with stage gates at specific points, to enable high-quality AI development and monitoring (Figure 2). This methodology extends to procurement teams as well, given that many AI systems enter organizations through a vendor or software platform.
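
As a rough illustration of how such stage gates might be encoded in a development pipeline, the sketch below checks a model package against a couple of gates before it can advance; the gate names, checks, and artifact fields are illustrative assumptions, not the specific methodology shown in Figure 2.

```python
# Minimal sketch of stage gates in an AI development pipeline.
# Gate names, checks, and artifact fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StageGate:
    name: str
    checks: list  # each check takes the artifact and returns (passed, message)

    def run(self, artifact) -> bool:
        results = [check(artifact) for check in self.checks]
        for passed, message in results:
            print(f"[{self.name}] {'PASS' if passed else 'FAIL'}: {message}")
        return all(passed for passed, _ in results)

def has_data_owner_signoff(artifact):
    return (artifact.get("data_owner_signoff", False), "data owner sign-off recorded")

def has_bias_assessment(artifact):
    return ("bias_report" in artifact, "bias assessment attached")

def has_monitoring_plan(artifact):
    return ("monitoring_plan" in artifact, "post-deployment monitoring plan defined")

GATES = [
    StageGate("Data approval", [has_data_owner_signoff]),
    StageGate("Pre-deployment review", [has_bias_assessment, has_monitoring_plan]),
]

def ready_to_deploy(artifact: dict) -> bool:
    """Run every gate in order; all checks must pass before the model advances."""
    return all(gate.run(artifact) for gate in GATES)

# Example: a package with sign-off and a bias report but no monitoring plan fails.
artifact = {"data_owner_signoff": True, "bias_report": "reports/bias_v1.pdf"}
print(ready_to_deploy(artifact))  # False until a monitoring plan is added
```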

Figure 2: Stage gates in the AI development process, PwC

Awareness of AI risks complements another trend: consideration of technology ethics, which means adopting practices for the development, procurement, usage, and monitoring of AI driven by a “what should you do” rather than a “what can you do” mindset.

While there is a litany of ethical principles for AI, data, and technology, fairness remains a core principle. Thirty-six percent of survey respondents identify algorithmic bias as a primary risk focus area, and 56% believe they can address bias risks adequately. As companies mature in their adoption of AI, they also tend to make algorithmic bias a primary focus, given their greater expertise in developing AI and their awareness of AI risks: fairness rates as the fifth-most important principle for AI-mature companies versus eighth for less mature organizations. Other principles include safety, security, privacy, accountability, explainability, and human agency.

Organizational approaches to implementing AI and data ethics tend to focus on narrow initiatives, considered in isolation, that rely on one-off tools such as impact assessments and codes of conduct. Large companies with mature AI use are significantly more likely to invest in a variety of initiatives, including carrying out impact assessments (62%), creating an ethics board (60%), and providing ethics training (47%). This push signals a recognition that multiple internal initiatives are required to operationalize responsible AI.
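
As one concrete example of what an algorithmic bias check can look like in practice, below is a minimal sketch of a demographic parity comparison, assuming binary predictions and a single protected attribute; the 10% tolerance used to flag a gap is an illustrative assumption, not a regulatory threshold.

```python
# Minimal sketch of a demographic parity check on binary predictions.
# The group encoding and the 0.10 tolerance are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: 8 predictions, two groups of equal size.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.10:
    print("Flag for review: selection rates differ materially across groups.")
```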

What organizations can do

  • Establish guiding principles: A set of ethical principles adopted and supported by leadership provides a north star for the organization. Principles on their own, however, are not sufficient to embed responsible AI practices. Stakeholders need to consider principles in the context of their day-to-day work to design policies and practices that the whole company can get behind.
  • Consider governance ownership: Fortunately, many leaders within organizations are interested in establishing governance practices for AI and data. However, without specifying an owner for this governance, an organization is likely to find itself with a different problem—discrete practices that may be in conflict with one another. Identify which teams should design governance approaches, and agree on an owner and a process to identify updates to existing policies.
  • Develop a well-defined and integrated process for data, model, and software lifecycle: Implement standardized processes for development and monitoring, with specific stage gates to indicate where approvals and reviews are needed to proceed (Figure 2). This process should connect to existing data and privacy governance mechanisms as well as the software-development lifecycle.
  • Break down silos: Align across necessary stakeholder groups to connect teams for the purposes of sharing ideas and leading practices. Create common inventories for AI and data for the governance process (a minimal inventory sketch follows this list), and use this exercise as an opportunity to consider structural changes or realignments that could enable the business to run better.
  • Keep tabs on the rapidly changing regulatory climate: It’s not just customers, investors, and employees who are demanding responsible practices. Regulators are taking notice and proposing legislation at the state, national, and supranational levels. Some regulations stem from expanded data-protection and privacy efforts, some from specific regulators focused on narrow use case areas (such as banking), and some from a more general desire to improve accountability (such as the European Union’s Artificial Intelligence Act). Keeping pace with these regulations is key to identifying future compliance activities.
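
As a concrete illustration of the common-inventory idea above, here is a minimal sketch of a shared model register, assuming a simple in-memory Python record; the field names, risk tiers, and one-year review window are illustrative assumptions rather than a standard schema.

```python
# Minimal sketch of a shared AI/model inventory record and a governance query.
# Field names, risk tiers, and the one-year window are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ModelRecord:
    model_id: str
    business_owner: str
    use_case: str
    risk_tier: str                 # e.g. "high", "medium", "low"
    data_sources: List[str]
    last_validation: date
    vendor: Optional[str] = None   # set when the system comes in through a third party

INVENTORY = [
    ModelRecord(
        model_id="credit-scoring-v3",
        business_owner="Retail Lending",
        use_case="Consumer credit decisioning",
        risk_tier="high",
        data_sources=["bureau_data", "application_data"],
        last_validation=date(2021, 9, 1),
    ),
]

# A governance team can query the shared register, for example to find
# high-risk models whose last independent validation is more than a year old.
overdue = [m.model_id for m in INVENTORY
           if m.risk_tier == "high" and (date.today() - m.last_validation).days > 365]
print(overdue)
```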

With these actions, organizations will be better positioned to address AI risks in an agile fashion.


Learn how PwC can help your organization build responsible AI practices.
