Building Trustworthy AI in a Regulated Industry
How Insurance Leaders Can Approach AI Governance
Unlike many other sectors experimenting with Artificial Intelligence, the insurance industry operates under an evolving patchwork of regulations designed to protect consumers, ensure fair outcomes and preserve market stability. AI introduces a new set of variables into this environment — powerful, yes, but also non-deterministic, data-hungry and capable of unintended bias if left unchecked.
That reality is exactly why regulators such as the National Association of Insurance Commissioners (NAIC) are sharpening their focus on AI governance. Their guidance is not a call for perfection or complete certainty — because no model, no matter how sophisticated, can eliminate all risk. Rather, it’s a call for maturity: documented processes, ongoing evaluation, transparency, human oversight, and an ability to demonstrate that AI systems behave as intended.
AI Governance is No Longer a “One-time Approval” Activity
Traditional software is deterministic; you test, validate and deploy it. AI-driven systems behave differently; they learn from dynamic data, interact with users in complex ways and can produce variable outputs for similar inputs. In other words, AI introduces an ever-present variable — one that demands continuous quality assurance (QA).
For solution providers in the insurance space, QA extends far beyond accuracy; it must include fairness, privacy, security, explainability, and the operational reliability carriers depend on.
The goal is to show a disciplined, repeatable approach that builds confidence over time by proactively assessing risk, preventing bias before the AI-powered solution reaches production, ensuring humans remain in control, and maintaining transparent documentation that meets the expectations of regulators and carrier partners alike.
How Carriers Can Evaluate Technology Partners in an Emerging AI Landscape
Evaluating AI vendors in insurance is still an emerging discipline. Most risk managers apply frameworks designed for traditional, rule-based software or established infrastructure and systems with predictable inputs and outputs and well-understood risk categories. AI doesn’t fit neatly into those legacy models.
While the NAIC introduced guidance to help insurers think more holistically about AI governance, modern AI systems’ variables are ever-evolving, broader than what the industry is used to evaluating, and require ongoing oversight.
Governance in this space cannot be one-sided, which is why strong vendor–carrier collaboration is becoming a leading indicator of AI readiness. Carriers need visibility into how vendors build, monitor and improve systems. And vendors need real-world feedback to refine models, detect issues early and ensure outputs align with business and regulatory expectations. The organizations that get the most value from AI treat governance as a shared responsibility and maintain open, continuous communication with their solution providers.
Choosing Partners Who Balance AI Innovation with Data Protection
There’s also a unique challenge when it comes to data. AI systems learn from interactions, outcomes and historical patterns — the exact kinds of insights that often sit on top of highly sensitive, regulated information. This creates a new tension: how do we make AI smarter while upholding the absolute obligation to protect Personally Identifiable Information (PII) and customer confidentiality?
The industry must evolve approaches that maximize model quality and efficiency without compromising privacy or expanding risk unnecessarily.
Ultimately, evaluating AI partners today requires assessing their maturity, transparency, willingness to collaborate, and ability to operate responsibly in an environment with evolving technology and regulatory expectations. Carriers that select partners with strong governance instincts, and collaborate with them actively, will see faster implementations, better risk management and greater long-term ROI.
How Hi Marley Approaches AI Governance
Hi Marley believes that trust is the foundation for innovation in insurance. Every carrier we work with relies on us to protect their data, uphold their brand, and help their teams deliver better experiences, especially in the age of AI.
Our approach to AI governance starts with a simple principle: AI should enhance the quality, safety and consistency of insurance conversations, not introduce uncertainty. To deliver on that, we focus on a set of core practices that align with regulatory expectations while remaining flexible enough to adapt to a rapidly evolving technology landscape.
- Structured Risk Assessments
Before introducing any AI capability, we perform a structured, multi-disciplinary evaluation of potential risks, including privacy, fairness, operational reliability and customer impact. These assessments ensure that we understand how a model performs and behaves across a range of real-world scenarios.
- Bias Prevention and Responsible Data Practices
Fair outcomes matter. To meet compliance expectations and build systems that respect the diversity of customers and claims experiences, we apply conservative data-handling practices, limit model exposure to sensitive information, and look for potential sources of unintended bias early.
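To make the idea of looking for unintended bias concrete, here is a minimal sketch of one common check: comparing outcome rates across groups. This is purely illustrative (the function names, groups and the 0.1 tolerance are assumptions for the example, not our production tooling or a prescribed standard):

```python
# Illustrative sketch: a simple outcome-parity check across groups.
# Group labels, data and the tolerance are hypothetical examples.

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Flag the model for review if the gap exceeds a chosen tolerance.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
needs_review = parity_gap(sample) > 0.1
```

Checks like this are a starting point, not a complete fairness program; the value is running them early and repeatedly rather than once at launch.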
- Human-in-the-Loop by Design
AI in insurance works best when it supports, not replaces, human expertise. We designed many of our AI-enabled features with controlled autonomy: they accelerate workflows, provide suggestions, or help surface information, while keeping adjusters, managers and customer-facing teams in complete control of the final decision and action.
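The shape of controlled autonomy can be sketched in a few lines: the AI may draft, but nothing is sent without an explicit human approval. This is a simplified illustration (the `Suggestion`, `human_review` and `send` names are hypothetical, not our actual API):

```python
# Illustrative sketch of "controlled autonomy": an AI draft can only be
# released after a human approves it, optionally with edits.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    draft: str
    approved: bool = False
    final_text: Optional[str] = None

def human_review(suggestion: Suggestion, approve: bool,
                 edits: Optional[str] = None) -> Suggestion:
    """A human approves (optionally replacing the draft) or rejects it."""
    suggestion.approved = approve
    suggestion.final_text = (edits or suggestion.draft) if approve else None
    return suggestion

def send(suggestion: Suggestion) -> str:
    """Sending is impossible unless a human has approved the draft."""
    if not suggestion.approved:
        raise PermissionError("AI drafts cannot be sent without human approval")
    return suggestion.final_text
```

The design point is that the approval gate lives in the workflow itself, so autonomy is bounded by construction rather than by policy alone.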
- Security Controls That Prioritize Privacy
Our AI systems operate within the same security and privacy standards our customers already trust: the principle of least privilege, strict access controls, encrypted data flows, and strong boundaries that prevent models from retaining or rediscovering sensitive PII. We treat AI not as a separate stack, but as an extension of the same secure ecosystem carriers expect.
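One boundary of this kind can be illustrated with a redaction step that strips obvious PII patterns before text ever reaches a model. The patterns below are deliberately simplified examples for illustration, not a complete PII policy or our actual implementation:

```python
# Illustrative sketch: replace matched PII values with typed placeholders
# before model exposure, so the model never sees the raw values.
# These regexes are simplified examples, not an exhaustive PII catalog.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Substitute each matched PII value with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Reach me at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
```

Placeholders preserve enough context for the model to be useful while keeping the sensitive values out of its inputs entirely.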
- Documentation and Transparency for Regulators and Partners
We maintain clear, accessible documentation that explains how our AI features work, what data they use, how they’re monitored, and how we manage risks over time. This level of detail and visibility gives carriers and regulators confidence that the systems supporting their business are traceable, explainable and backed by a repeatable governance process.
- Continuous Quality Assurance
We treat QA as an ongoing responsibility. We continually test models against shifting data patterns, customer needs and operational requirements. This process aligns with what regulators, including the NAIC, want to see: a commitment to continuous oversight and measurable improvement.
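One way to monitor shifting data patterns is to compare a model's live output distribution against a validated baseline and alert on drift. The sketch below is illustrative only; the metric (total variation distance), the labels and the 0.2 threshold are assumptions for the example:

```python
# Illustrative sketch of ongoing QA: detect drift between a baseline
# output distribution and live outputs using total variation distance.
from collections import Counter

def distribution(labels):
    """Turn a list of categorical outputs into a probability distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(p, q):
    """Total variation distance between two categorical distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

baseline = distribution(["route", "route", "escalate", "route"])
live = distribution(["route", "escalate", "escalate", "escalate"])
drifted = total_variation(baseline, live) > 0.2  # trigger re-evaluation
```

The specific metric matters less than the habit: measure continuously, compare against a known-good reference, and act when behavior moves.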
What “Doing It Right” Looks Like in an Emerging Space
AI has the potential to reshape how carriers handle claims, how customers communicate, and how the insurance industry operates — but only if it’s deployed responsibly. Doing AI right means embracing a disciplined approach today that can scale as expectations rise and change. It’s about transparency, collaboration and the understanding that building trustworthy AI is a shared effort between carriers and their technology partners.
As insurers navigate this new landscape, the most successful organizations will be those that choose partners committed to responsible innovation: partners who take governance seriously, engage openly and recognize that trust is earned through action and consistency.
At Hi Marley, we’re proud to take that responsibility seriously. And we’re excited to help carriers unlock the power of AI while protecting what matters most: their customers, their data and the integrity of the insurance experience.