
Five compliance best practices for a successful AI governance program


The artificial intelligence regulatory landscape is quickly shifting. Most recently, U.S. President Joe Biden's administration issued an executive order on AI, G7 leaders agreed on a set of guiding principles for AI and a voluntary code of conduct for AI developers, and the EU AI Act could become the world's first comprehensive AI regulation.

As companies around the world develop and deploy this technology, they are closely watching regulatory developments and recognize the urgency of building out AI governance programs.

While there is no one-size-fits-all approach to AI governance, all responsible AI stakeholders can draw on established compliance program best practices to implement effective AI governance programs.

1. Map principles to written procedures

Similar to other compliance programs, an AI governance program must exist beyond a set of principles and core values, and should be strategically implemented as a living, adaptable framework within an organization.

AI principles can be brought to life by building out written protocols and procedures tailored to organizational needs and regulatory requirements.

Specifically, consider mapping each responsible AI principle, such as fairness or explainability, to a set of implementation guidelines, policies and procedures. This is a particularly helpful exercise to demonstrate the implementation of responsible AI principles across the organization.

Once these are mapped out, visualize the entire AI governance program in one comprehensive chart or playbook, capturing the applicable written protocols, procedures and requirements for each principle.

As an additional benefit, this effort can help companies isolate and pinpoint specific protocols of a program that require updates due to changes in organizational AI strategy or the regulatory landscape.
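The principle-to-procedure map described above can also be captured as plain data, which makes the playbook easy to chart and to query when a regulation or strategy change forces an update. The sketch below is a minimal, hypothetical illustration; all principle, policy and procedure names are invented for the example.

```python
# Hypothetical sketch: represent the principle-to-procedure map as plain data
# so it can be rendered as a playbook chart or queried for impact analysis.
# All principle, policy and procedure names below are illustrative.

PRINCIPLE_MAP = {
    "fairness": {
        "policies": ["Bias Mitigation Policy"],
        "procedures": ["Pre-deployment fairness assessment", "Quarterly bias audit"],
    },
    "explainability": {
        "policies": ["Model Documentation Policy"],
        "procedures": ["Model card creation", "Explanation review for high-risk models"],
    },
}

def principles_using(artifact: str) -> list[str]:
    """Return the principles whose written protocols reference the given
    policy or procedure -- useful for pinpointing exactly which parts of
    the program a regulatory or strategy change touches."""
    return [
        principle
        for principle, items in PRINCIPLE_MAP.items()
        if artifact in items["policies"] or artifact in items["procedures"]
    ]
```

For instance, if the "Bias Mitigation Policy" must change, `principles_using("Bias Mitigation Policy")` immediately surfaces every principle whose documentation needs review.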

Consider, for example, the fairness principle — arguably one of the more complex principles to implement given challenges associated with managing AI bias. Drawing from compliance program best practices, apply the following four elements when operationalizing this principle:

  • Define "fairness": It is critical to align internally on the applicable definition of fairness to ensure a shared understanding of this term and what it means from an implementation perspective.
  • Define the regulatory scope: Determine any applicable regulatory requirements in relation to this principle, and document compliance measures in written policies and procedures.
  • Measure fairness: Document protocols for mitigating bias and carrying out fairness assessments.
  • Prepare for change management: Update fairness implementation guidelines and procedures as needed, keeping in mind organizational changes to AI deployment or use, client needs and new legal requirements.
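To make the "measure fairness" element concrete, one commonly documented metric is demographic parity difference: the gap in positive-outcome rates between groups. The sketch below is an illustrative example only; the data, group labels and any acceptance threshold are hypothetical, and a real program would document its chosen metric and acceptable ranges in its written procedures.

```python
# Illustrative fairness measurement: demographic parity difference, i.e. the
# gap between the highest and lowest positive-outcome rate across groups.
# Data and labels are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """outcomes: iterable of 0/1 model decisions; groups: group label per row.
    Returns max(group positive rate) - min(group positive rate)."""
    rates = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + outcome, total + 1)
    positive_rates = [positives / total for positives, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(outcomes, groups)
# Group A is approved 3/4 of the time, group B 1/4, so gap == 0.5.
# A documented procedure might flag any gap above an agreed threshold.
```

The point of the exercise is not the specific metric but that "measure fairness" becomes a written, repeatable procedure rather than an abstract principle.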

2. Establish multiple lines of defense

A multiple-lines-of-defense strategy is a popular compliance risk management mechanism — particularly in heavily regulated industries — and can help manage AI-related risks within an organization.

This strategy mobilizes teams at separate stages of AI development or deployment, where each line owns specified risks, is responsible for mitigating those risks and helps drive responsible AI behaviors.

From an AI governance program perspective, the multiple-lines-of-defense model is particularly helpful with implementing the accountability principle.

The Data and Trust Alliance and IBM recently provided a breakdown of three lines of defense from an AI governance program perspective. The first line of defense includes people either building AI models or buying them from vendors. This team is tasked with aligning the design, development and deployment of all AI models with the organization's documented responsible AI principles.

The second line of defense focuses on the risk function, evaluating and validating the work performed by the first line of defense. As an example, teams in the second line may perform a compliance check to ensure certain data variables — such as variables from a protected class — are not used in AI models.
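A second-line check like the one described above can be partly automated. The sketch below is a hypothetical illustration, assuming a simple set of model feature names; the protected-attribute list and feature names are invented, and real checks would also need to catch proxies for protected attributes, which simple name matching cannot do.

```python
# Hypothetical second-line compliance check: verify that no named
# protected-class variable appears among a model's input features.
# Attribute and feature names are illustrative; this does not detect
# proxy variables, only direct matches.

PROTECTED_ATTRIBUTES = {"race", "religion", "sex", "national_origin"}

def compliance_check(model_features: set[str]) -> list[str]:
    """Return the protected attributes found in the feature set.
    An empty list means the check passed and can be logged as evidence
    for the audit (third) line of defense."""
    return sorted(model_features & PROTECTED_ATTRIBUTES)

violations = compliance_check({"income", "tenure", "sex"})
# violations == ["sex"] -> escalate to the first line for remediation
```

Logging each check's result also gives the third line of defense an audit trail to evaluate.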

The third line of defense serves as the audit function, drawing on various subject matter experts to conduct evaluations of the AI systems in use.

3. Initiate AI literacy efforts and responsible AI training

As in the privacy field, it is easy to slip into technical, and increasingly legal, jargon when discussing AI.

To effectively drive adoption of a responsible AI culture, companies must account for various levels of understanding about AI across teams, including what responsible development and responsible deployment mean.

Ongoing training will be essential to advance AI literacy and encourage adoption of responsible AI behaviors across an organization.

In addition to organization-wide training on an AI governance program, role-based training can help employees understand the implications of interacting with AI systems given their specific roles and responsibilities.

Consider using available responsible AI materials and training in the market, including resources within the IAPP AI Governance Center and AI Governance Professional training.

4. Strategically drive a responsible AI culture

A compliance program is limited in its effectiveness if organizational behaviors are misaligned with program objectives. To help drive AI governance program adoption across an organization, it is essential to start with the company's responsible AI culture.

Do employees appreciate the value and benefits of responsible AI? Do they understand the risks associated with irresponsible use of AI, and do they embrace the role they play in relation to responsible use of AI systems?

Effective communication is critical for explaining the "why" behind responsible AI, establishing guardrails around interaction with AI systems, and outlining employee roles and responsibilities for AI use.

A responsible AI strategy should support the company's compliance culture. This means aligning responsible AI principles and program vision and mission statements with company core values, updating job descriptions to include responsible AI expectations, and adding responsible AI as a performance review metric.

Establish a responsible AI champion network across the organization to scale efforts, similar to the privacy champion concept.

5. Prepare an AI governance program for change

As a compliance best practice, an AI governance program should be designed to stay relevant for the organization and meet business needs. To that end, programs should include dynamic controls and automated solutions to account for required program updates.

A KPMG report stressed the importance of implementing change-management protocols for compliance programs.

KPMG noted such protocols could help teams take a practical and efficient approach to adjusting compliance programs for new regulations. In addition, change-management protocols help companies foster coordination across silos and gain meaningful insights to improve compliance risk management.

A number of factors contribute to the need for AI governance program adaptability, including the rapidly developing technological landscape, new regulations, corporate changes to AI strategy, and consumer and client demands.

Consider, for example, the proposed EU AI Act. If it passes, what protocols are in place to account for this regulation, including updating policies, procedures, company-wide and role-based training, and any external-facing communications to consumers and clients?

It’s critical to build change management into AI governance program design so the program can adapt alongside any new applicable developments.

If a company is already using change-management protocols for onboarding and implementing AI systems, it may be helpful to align the AI governance program changes with those efforts. That way, employees can be briefed on responsible use of any new AI systems as soon as possible.

Position an AI governance program for success

When operationalizing an AI governance program, certain compliance strategies can be used to ensure a successful rollout and drive responsible AI behaviors across the organization.

Remember to document procedures in accordance with each principle, establish multiple lines of defense, initiate AI literacy efforts and responsible AI training, strategically drive a responsible AI culture, and adopt change-management protocols. These compliance best practice tips can help position implementation efforts for success and adaptation over time.

