Regulatory Reference
What Regulators Expect for AI in Financial Services
The SEC, FINRA, NYDFS, NAIC, and CISA have all published guidance on the proper and secure implementation of AI. This page compiles their key positions, exact language, and source documents in one place.
Summary
Cross-Cutting Themes Across All Regulators
Despite different jurisdictions and mandates, five regulators converge on the same core expectations for firms using AI.
Every regulator expects documented AI governance frameworks with board-level oversight, clear policies and procedures, and designated accountability for AI use.
AI must be included in enterprise risk assessments. This means assessing risks from both the organization’s own AI use and from threat actors leveraging AI.
Organizations remain responsible for AI used on their behalf, including vendor-supplied AI tools. Due diligence must cover AI-specific risks, data handling, and security practices.
AI systems that process sensitive data (NPI, customer information, PII) must have appropriate controls for data minimization, access, quality, and retention.
AI systems must be continuously monitored, tested, and validated. Outputs must be checked for accuracy and bias.
Staff must be trained on AI-specific risks, including social engineering enhanced by AI such as deepfakes and sophisticated phishing.
No regulator has created entirely new AI-specific regulations. Each has clarified that existing regulatory frameworks already apply to AI use. Firms cannot claim ignorance.
Firms must not overstate AI capabilities (SEC AI-washing enforcement). AI-driven decisions must be explainable and auditable.
The SEC brought its first-ever enforcement actions against investment advisers for “AI washing” — making false or misleading statements about their use of AI. Delphia (USA) Inc. paid $225,000 and Global Predictions, Inc. paid $175,000 in civil penalties.
Although withdrawn, the proposed Predictive Data Analytics (PDA) rule signaled the SEC’s intent to regulate AI-driven conflicts of interest. The SEC continues to apply existing regulatory frameworks to AI use.
Proposed Rule S7-12-23 (PDF)
Specific regulatory areas implicated include recordkeeping (FINRA Rules 3110 and 4511), customer information protection (Reg S-P), risk management, Reg BI, and communications with the public (Rule 2210 — content standards apply whether communications are “generated by a human or technology tool”).
FINRA’s annual oversight report also recommends conducting initial and ongoing due diligence on third-party vendors, maintaining an inventory of firm data types accessed or stored by vendors, and monitoring vendor services for vulnerabilities or data breaches.
2026 Oversight Report — GenAI Section
Superintendent Adrienne A. Harris issued guidance explaining how 23 NYCRR Part 500 applies to AI risks. The letter does not impose new requirements — it clarifies how the existing cybersecurity regulation applies to AI.
Four AI-related cybersecurity threats identified:
- From threat actors: AI-enabled social engineering (deepfakes via email, phone, and text)
- From threat actors: AI-enhanced cyberattacks (amplified potency, scale, and speed)
- From a firm’s own AI use: exposure of nonpublic information (NPI) when AI tools process sensitive data
- From a firm’s own AI use: increased attack surface from AI systems creating new vulnerabilities
Addresses the use of AI systems and external consumer data in insurance underwriting and pricing. Establishes principles around fairness, transparency, and accountability in AI use that reflect broader regulatory expectations applicable across financial services.
Circular Letter CL2024-07
The Model AI Bulletin provides guidelines for insurers on responsible AI use. As of August 2025, at least 24 states and the District of Columbia have adopted it in full or substantially similar form. The bulletin recommends NIST’s AI Risk Management Framework (AI RMF) as a reference.
The AIS Program must: (1) Address governance, risk management controls, and internal audit functions. (2) Be adopted by the board of directors or an appropriate board committee. (3) Be tailored to and proportionate with the insurer’s use and reliance on AI. (4) Address all AI systems that make decisions impacting customers. (5) Address AI use across the insurance product life cycle.
Required controls must address: oversight and approval process for AI system acquisition; data practices (currency, lineage, quality, integrity, bias, minimization, suitability); validation, testing, and retesting; privacy of non-public information; and data and record retention.
Published by the NSA AI Security Center, CISA, FBI, and international partners (Australia, Canada, New Zealand, UK). Provides best practices for deploying and operating externally developed AI systems.
Additional recommendations include sandboxing ML model environments in hardened containers, monitoring networks with firewall allow lists, validating AI model integrity before deployment, and assessing security practices of AI vendors and suppliers.
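The model-integrity recommendation above can be sketched as a pre-deployment checksum check: compute the artifact's cryptographic digest and refuse to load it unless it matches a value pinned at approval time. This is a minimal illustration, not language from the guidance; the function names and file paths are assumptions.

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large model artifacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_integrity(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact's digest matches the value
    pinned when the model was approved for deployment."""
    return sha256_of_file(path) == expected_sha256.lower()
```

In practice the pinned digest would come from a signed model registry or release manifest rather than a hardcoded string, so that an attacker who swaps the artifact cannot also swap the expected value.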
Deploying AI Systems Securely (PDF)
Published by CISA, NSA, FBI, and international partners. Outlines ten cybersecurity best practices specific to AI systems, covering the full AI lifecycle from development to operation. References NIST SP 800-53 for additional controls.
Three key risk categories: data supply chain risks (compromised data from third parties), maliciously modified data (poisoning and adversarial manipulation), and data drift (gradual degradation of data quality and relevance).
The ten recommendations address securing the data supply chain, protecting data against unauthorized modification, data integrity validation, access control for training and operational data, and monitoring for data anomalies.
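One concrete way to monitor for the data drift the guidance describes is a simple mean-shift check: compare the mean of an incoming batch against a trusted baseline and alert when it deviates by more than a few baseline standard deviations. This is a hedged sketch of one possible check, not a method named in the guidance; the threshold and inputs are illustrative assumptions.

```python
from statistics import mean, stdev


def mean_shift_alert(baseline: list[float],
                     current: list[float],
                     z_threshold: float = 3.0) -> bool:
    """Flag possible drift when the current batch mean deviates
    from the baseline mean by more than z_threshold baseline
    standard deviations."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # Constant baseline: any change at all is a deviation.
        return mean(current) != mu
    z_score = abs(mean(current) - mu) / sigma
    return z_score > z_threshold
```

A production setup would track many features, use distribution-level tests rather than means alone, and route alerts into the same monitoring pipeline that watches for the poisoning and supply-chain anomalies described above.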
AI Data Security Best Practices (PDF)
Published by CISA and the Australian Cyber Security Centre with international partners. Outlines four key principles for integrating AI into operational technology systems while reducing risk. Focuses on machine learning, large language models, and AI agents due to their complex security challenges.
Secure AI Integration in OT
Reference
Source Documents
| Regulator | Document | Date |
|---|---|---|
| SEC | 2025 Examination Priorities | Oct 2024 |
| SEC | AI-Washing Enforcement (Press Release 2024-36) | Mar 2024 |
| SEC | Proposed PDA Rule (S7-12-23) | Jul 2023 |
| SEC | AI at the SEC | Ongoing |
| FINRA | Regulatory Notice 24-09 (GenAI) | Jun 2024 |
| FINRA | 2025 Annual Regulatory Oversight Report | Feb 2025 |
| FINRA | 2026 Annual Regulatory Oversight Report | Dec 2025 |
| NYDFS | Industry Letter on AI Cybersecurity Risks | Oct 2024 |
| NYDFS | Circular Letter No. 7 (AI in Insurance) | Jul 2024 |
| NAIC | Model Bulletin on AI (Adopted) | Dec 2023 |
| CISA/NSA/FBI | Deploying AI Systems Securely | Apr 2024 |
| CISA/NSA/FBI | AI Data Security Best Practices | May 2025 |
| CISA | Secure AI Integration in OT | Dec 2025 |
See How FCI Aligns with These Requirements
FCI has been deploying AI in cybersecurity since 2017 and helps financial services firms govern AI risk — implementing the controls, enforcement, and evidence that regulators expect.