From AI Risk Policy to Board-Ready Compliance

A structured AI governance program built for your organization - policies, model risk classification, governance charter, and regulatory alignment across EU AI Act, NIST AI RMF, and ISO 42001.

Duration: 4-12 weeks
Team: 1 Senior Consultant + 1 Policy Analyst

You might be experiencing...

The EU AI Act assigns risk categories to AI systems your organization is already deploying - and you have no governance structure to assess or document them.
Your board has asked for an AI risk report, and your team has no framework, no risk register, and no policies to draw from.
Enterprise customers' procurement processes are requiring documented AI governance policies before they sign contracts.
Your organization deploys AI across multiple teams, but there is no central oversight, no AI use policy, and no accountability structure for AI decisions.
Your organization is considering NIST AI Risk Management Framework adoption or ISO 42001 certification, but the documentation and governance gap is too large to know where to begin.

AI governance is no longer optional for organizations deploying AI at scale. The EU AI Act, NIST AI Risk Management Framework, and ISO 42001 create specific obligations that require documented policies, risk classification systems, human oversight mechanisms, and accountability structures. Building these from scratch without specialist guidance typically takes 18-24 months and still produces gaps. We build it in 12 weeks.

The AI Governance Gap

Most organizations deploying AI have informal governance at best - an unwritten understanding that AI features need engineering review before launch, a legal team that reviews terms of service, and a CISO who is broadly aware AI is being used. This informal structure fails when:

  • Enterprise procurement requires documented AI policies before contract signature
  • Regulators ask for evidence of AI risk assessment and human oversight mechanisms
  • An AI incident occurs and there is no documented response plan, no clear accountability, and no record of pre-deployment risk assessment
  • The board asks what the organization’s AI risk exposure is, and the answer is “we don’t have a structured view”

Our AI Governance & Risk Framework engagement builds the formal structure that closes these gaps.

Policy Suite Architecture

The governance program is built around 8-12 AI policies tailored to your organization’s AI use cases, regulatory obligations, and risk tolerance. Core policies include an AI Use Policy governing acceptable AI use across the organization, a Model Risk Policy defining classification criteria and approval workflows for new AI deployments, an AI Incident Response Policy integrating AI-specific incidents into existing security processes, and a Human Oversight Policy defining where human review is required before AI-generated outputs are acted upon.

Regulatory Alignment

Every governance framework we build includes regulatory mapping - a structured analysis of which specific EU AI Act articles, NIST AI RMF subcategories, and ISO 42001 clauses apply to your AI portfolio and what evidence is required to demonstrate compliance. This mapping gives your compliance team the documentation they need for audit preparation and regulatory inquiry response, without requiring them to interpret complex regulatory text themselves.
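The mapping described above can be sketched as a simple data structure. This is an illustrative sketch only: the system name, article references, RMF subcategories, and clause numbers below are example placeholders, not a complete or authoritative mapping, which is produced during the assessment phase.

```python
# Illustrative regulatory mapping entry for one AI system.
# All framework references shown are examples, not a delivered mapping.
from dataclasses import dataclass, field

@dataclass
class RegulatoryMapping:
    system_name: str
    eu_ai_act_risk_class: str                  # e.g. "high-risk", "limited-risk"
    eu_ai_act_articles: list[str] = field(default_factory=list)
    nist_ai_rmf_subcategories: list[str] = field(default_factory=list)
    iso_42001_clauses: list[str] = field(default_factory=list)
    evidence_required: list[str] = field(default_factory=list)

# Example: a CV screening model falls into an employment use case,
# which the EU AI Act treats as high-risk.
resume_screener = RegulatoryMapping(
    system_name="cv-screening-model",
    eu_ai_act_risk_class="high-risk",
    eu_ai_act_articles=["Art. 9 (risk management)", "Art. 14 (human oversight)"],
    nist_ai_rmf_subcategories=["GOVERN 1.1", "MAP 1.1"],
    iso_42001_clauses=["6.1 (actions to address risks)"],
    evidence_required=["risk assessment record", "human oversight procedure"],
)
print(resume_screener.eu_ai_act_risk_class)  # -> high-risk
```

One mapping entry per AI system gives the compliance team a queryable inventory rather than a static document.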

Governance That Actually Works

A policy document that sits in a SharePoint folder is not governance. Effective AI risk management requires governance structures that are operationalized: approval workflows in your project management tools, risk register reviews in your existing governance calendar, model cards completed before high-risk AI systems go live, and incident response procedures tested through tabletop exercises. Our implementation and training phases ensure governance is adopted, not just documented.
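As one concrete example of operationalized governance, a model card can be kept as structured data and checked for completeness before a high-risk system goes live. The field names and example values below are assumptions for the sketch, not our full model card template:

```python
# Minimal illustrative model card completeness check, run as a gate
# before go-live. Field names are example assumptions.
REQUIRED_FIELDS = [
    "model_name", "owner", "intended_use", "risk_tier",
    "training_data_summary", "known_limitations", "human_oversight_point",
]

def missing_fields(card: dict) -> list[str]:
    """Return required model-card fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]

card = {
    "model_name": "support-ticket-router",
    "owner": "ml-platform-team",
    "intended_use": "Route inbound support tickets to queues",
    "risk_tier": "medium",
    "training_data_summary": "12 months of anonymized tickets",
    "known_limitations": "",  # incomplete -- this blocks go-live
    "human_oversight_point": "Agent reviews low-confidence routes",
}
print(missing_fields(card))  # -> ['known_limitations']
```

A check like this can run in CI or in the approval workflow, so an incomplete card stops a deployment rather than sitting unread in a folder.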

Engagement Phases

Weeks 1-2: Assessment

AI portfolio inventory, regulatory exposure analysis (EU AI Act risk classification, NIST AI RMF gap assessment, ISO 42001 readiness), stakeholder interviews, and governance gap identification.

Weeks 3-6: Design

AI governance framework architecture, policy suite drafting (8-12 policies), model risk classification system design, governance charter development, model card template creation, and AI incident response plan design.

Weeks 7-10: Implementation

Policy review and approval workflows, governance committee structure setup, risk register deployment, model card implementation for existing high-risk systems, and regulatory mapping documentation.

Weeks 11-12: Training

Four governance workshops (executive, technical, product, operations), AI risk awareness training, governance process walkthroughs, and ongoing advisory relationship establishment.

Deliverables

  • 8-12 AI governance policies (AI Use Policy, Model Risk Policy, AI Incident Response Policy, Data Quality Policy, Human Oversight Policy, AI Procurement Policy, and others as applicable)
  • Model risk classification system - tiered framework for categorizing AI systems by risk level
  • AI governance charter defining accountability structures, decision rights, and oversight mechanisms
  • Model card templates for documenting high-risk AI systems
  • AI incident response plan integrated with existing security incident processes
  • Regulatory mapping document - EU AI Act obligations, NIST AI RMF alignment, ISO 42001 gap analysis
  • Four governance workshops delivered to executive, technical, product, and operations audiences
  • 90-day implementation roadmap for governance program operationalization
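A tiered model risk classification system like the one delivered above can be sketched as a small rules-based function. The tier names and criteria here are illustrative assumptions, not the delivered framework, which is tailored to each organization's portfolio:

```python
def classify_risk_tier(affects_individuals: bool,
                       automated_decision: bool,
                       regulated_domain: bool) -> str:
    """Illustrative three-tier model risk classification.

    Criteria and tier names are example assumptions only; a real
    classification system encodes the organization's own risk tolerance.
    """
    if regulated_domain and automated_decision:
        return "Tier 1 (high)"    # e.g. fully automated hiring decisions
    if affects_individuals:
        return "Tier 2 (medium)"  # individual impact, human in the loop
    return "Tier 3 (low)"         # internal tooling, no individual impact

print(classify_risk_tier(True, True, True))    # -> Tier 1 (high)
print(classify_risk_tier(True, False, False))  # -> Tier 2 (medium)
print(classify_risk_tier(False, False, False)) # -> Tier 3 (low)
```

Each tier then maps to an approval workflow: Tier 1 systems require a completed model card and governance committee sign-off before launch, while Tier 3 systems follow a lightweight checklist.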

Before & After

Metric | Before | After
Governance Maturity | No AI policies, no risk classification, no oversight structure | Full policy suite and governance charter in 12 weeks
Regulatory Readiness | Unknown EU AI Act obligations, no NIST AI RMF alignment | Regulatory mapping document with specific obligations and compliance evidence
Board Reporting | No AI risk report for board or leadership | Board-ready AI risk register and governance program summary

Tools We Use

NIST AI RMF, EU AI Act, ISO 42001, MITRE ATLAS, Model Cards

Frequently Asked Questions

Does the EU AI Act apply to our organization?

The EU AI Act applies to any organization that places AI systems on the EU market or uses AI systems to serve EU-based users - regardless of where the organization is headquartered. It also applies to organizations using AI systems developed outside the EU if those systems affect EU residents. The Act creates different obligations for high-risk AI systems (healthcare, employment, law enforcement, critical infrastructure) versus general-purpose AI. Our assessment phase identifies exactly which obligations apply to your specific AI portfolio.

What is the NIST AI Risk Management Framework?

The NIST AI RMF is a voluntary framework from the US National Institute of Standards and Technology that helps organizations manage AI risks across four functions: GOVERN (establish accountability and culture), MAP (identify and classify AI risks), MEASURE (analyze and assess risks), and MANAGE (prioritize and treat risks). While voluntary for most US organizations, it is increasingly referenced in enterprise procurement requirements and regulatory guidance.
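The four functions summarized above can be represented as a simple lookup, useful for tagging risk-register entries by the RMF function they fall under. The helper function here is a hypothetical sketch, not part of the framework itself:

```python
# The four NIST AI RMF core functions, as summarized above.
NIST_AI_RMF_FUNCTIONS = {
    "GOVERN":  "establish accountability and culture",
    "MAP":     "identify and classify AI risks",
    "MEASURE": "analyze and assess risks",
    "MANAGE":  "prioritize and treat risks",
}

def tag_risk(description: str, function: str) -> dict:
    """Attach a risk-register entry to an RMF function (illustrative helper)."""
    if function not in NIST_AI_RMF_FUNCTIONS:
        raise ValueError(f"Unknown RMF function: {function}")
    return {"risk": description, "rmf_function": function}

entry = tag_risk("Chatbot may expose customer PII in responses", "MEASURE")
print(entry["rmf_function"])  # -> MEASURE
```

Tagging entries this way lets a risk register report coverage per function, which is a common first question in procurement reviews that reference the RMF.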

How long does this engagement take?

The standard engagement is 12 weeks for a full governance framework build. Organizations with simpler AI portfolios or urgent regulatory deadlines can complete a baseline governance program in 4-6 weeks - covering the highest-priority policies and most critical regulatory obligations first, with a roadmap for completing the remainder.

Who needs to be involved from our organization?

Effective AI governance requires involvement from legal/compliance, engineering leadership, product leadership, and executive sponsorship (typically CISO or CTO). The assessment phase requires access to AI project owners. Policy design workshops involve cross-functional stakeholders. The governance charter requires executive sign-off to be effective.

Can this help us achieve ISO 42001 certification?

Yes. ISO 42001 is an AI management system standard that follows the same high-level structure as ISO 27001. Our AI Governance & Risk Framework engagement produces the documentation and governance structures that form the foundation of an ISO 42001 compliance program. Organizations seeking certification will also need internal audits and third-party certification - we provide the governance foundation that makes both achievable.

Know Your AI Attack Surface

Request a free AI Security Scorecard assessment and discover your AI exposure in 5 minutes.

Get Your Free Scorecard