Regulatory Reference

What Regulators Expect for AI in Financial Services

The SEC, FINRA, NYDFS, NAIC, and CISA have all published guidance on the proper and secure implementation of AI. This page compiles their key positions, exact language, and source documents in one place.

Summary

Cross-Cutting Themes Across All Regulators

Despite different jurisdictions and mandates, five regulators converge on the same core expectations for firms using AI.

Governance & Policies

Every regulator expects documented AI governance frameworks with board-level oversight, clear policies and procedures, and designated accountability for AI use.

Risk Assessment

AI must be included in enterprise risk assessments. This means assessing risks both from the organization’s own AI use and from threat actors leveraging AI.

Vendor Due Diligence

Organizations remain responsible for AI used on their behalf, including vendor-supplied AI tools. Due diligence must cover AI-specific risks, data handling, and security practices.

Data Protection & Privacy

AI systems that process sensitive data (NPI, customer information, PII) must have appropriate controls for data minimization, access, quality, and retention.

Monitoring & Testing

AI systems must be continuously monitored, tested, and validated. Outputs must be checked for accuracy and bias.

Training & Awareness

Staff must be trained on AI-specific risks, including social engineering enhanced by AI such as deepfakes and sophisticated phishing.

Existing Frameworks Apply

No regulator has created entirely new AI-specific regulations. Each has clarified that existing regulatory frameworks already apply to AI use. Firms cannot claim ignorance.

Transparency & Accuracy

Firms must not overstate AI capabilities (SEC AI-washing enforcement). AI-driven decisions must be explainable and auditable.

SEC — Securities and Exchange Commission
October 2024
2025 Examination Priorities
“Firms touting their use of artificial intelligence will be reviewed for consistency with actual practices and whether such firms have implemented adequate policies and procedures to monitor and/or supervise their use of AI, including for tasks related to fraud prevention and detection, back-office operations, anti-money laundering, and trading functions.”
“How firms protect against loss or misuse of client records occurring from the use of third-party AI models and tools.”
Key Takeaway: The SEC is actively examining whether firms’ AI claims match reality, and whether adequate policies, procedures, and supervision exist for AI use — including vendor/third-party AI tools.
SEC 2025 Exam Priorities (PDF)
March 2024
AI-Washing Enforcement Actions

The SEC brought its first-ever enforcement actions against investment advisers for “AI washing” — making false or misleading statements about their use of AI. Delphia (USA) Inc. paid $225,000 and Global Predictions, Inc. paid $175,000 in civil penalties.

“Investment advisers should not mislead the public by saying they are using an AI model when they are not.” — SEC Chair Gary Gensler
Key Takeaway: Claims about AI capabilities must be accurate and substantiated. The SEC will enforce against firms that overstate their use of AI.
SEC Press Release 2024-36
July 2023 — Withdrawn 2024
Proposed Predictive Data Analytics Rule
“Eliminate or neutralize the effect of conflicts of interest associated with their use of covered predictive data analytics and adopt written compliance policies regarding the same.”

Although withdrawn, this proposed rule signaled the SEC’s intent to regulate AI-driven conflicts of interest. The SEC continues to apply existing regulatory frameworks to AI use.

Proposed Rule S7-12-23 (PDF)
August 2025
SEC AI Task Force & Compliance Plan
“AI must be governed with the same care as any other business tool.”
“Market participants using AI in their business operations are advised to review their AI usage and duly implement and/or update AI policies and procedures to ensure compliance with the existing regulatory framework.”
SEC AI Page
FINRA — Financial Industry Regulatory Authority
June 2024
Regulatory Notice 24-09: Generative AI & LLMs
“FINRA’s rules — which are intended to be technology neutral — and the securities laws more generally, continue to apply when member firms use Gen AI or similar technologies in the course of their businesses, just as they apply when member firms use any other technology or tool.”
“A member firm should evaluate Gen AI tools prior to deploying them and ensure that the member firm can continue to comply with existing FINRA rules applicable to the business use of those tools.”

Specific regulatory areas implicated include recordkeeping (Rules 3110, 4511), customer information protection (Reg S-P), risk management, Reg BI, and communications with the public (Rule 2210 — content standards apply whether communications are “generated by a human or technology tool”).

Key Takeaway: Existing FINRA rules apply fully to AI. Firms must evaluate AI tools before deployment and ensure ongoing compliance.
FINRA Regulatory Notice 24-09
February 2025
2025 Annual Regulatory Oversight Report
“Implementing a Gen AI governance program that (1) identifies low-risk AI use cases that do not need robust compliance review, (2) identifies prohibited use cases and ensures none are in production, (3) identifies risks associated with other Gen AI use cases and mitigation measures, and (4) keeps track of higher-risk Gen AI use cases in production.”
2025 Oversight Report
December 2025
2026 Annual Regulatory Oversight Report
“FINRA expects firms to establish a supervision, governance or model risk management framework that establishes clear policies and procedures to develop, implement, use and monitor GenAI, while maintaining comprehensive documentation throughout.”
“Ensuring contracts with third-party vendors comply with regulatory obligations (e.g., adding language that prohibits firm or customer sensitive information from being ingested into a third-party vendor’s open-source GenAI tool).”

The report also recommends conducting initial and ongoing due diligence on third-party vendors, maintaining an inventory of firm data types accessed or stored by vendors, and monitoring vendor services for vulnerabilities or data breaches.

2026 Oversight Report — GenAI Section
NYDFS — New York Department of Financial Services
October 16, 2024
Industry Letter: Cybersecurity Risks Arising from AI

Superintendent Adrienne A. Harris issued guidance explaining how 23 NYCRR Part 500 applies to AI risks. The letter does not impose new requirements — it clarifies how the existing cybersecurity regulation applies to AI.

“The Guidance does not impose any new requirements beyond obligations that are in DFS’s cybersecurity regulation codified at 23 NYCRR Part 500; rather, the Guidance is meant to explain how Covered Entities should use the framework set forth in Part 500 to assess and address the cybersecurity risks arising from AI.”

Four AI-related cybersecurity threats identified:

From threat actors: AI-enabled social engineering (deepfakes via email, phone, text) and AI-enhanced cyberattacks (amplified potency, scale, and speed). From a firm’s own AI use: exposure of nonpublic information (NPI) when AI tools process sensitive data, and increased attack surface from AI systems creating new vulnerabilities.

“Covered Entities should update risk assessments to include their organization’s use of AI and new risks from threat actors’ use of AI.”
“Covered Entities should consider the threats facing their TPSPs from the use of AI, and how their TPSPs protect themselves from such threats.”
“Covered Entities should also adapt incident response and business continuity plans to include disruptions relating to AI.”
Key Takeaway: The NYDFS maps AI risks directly onto Part 500 controls — risk assessment (500.9), third-party management (500.11), access controls, training (500.14), and incident response. Firms regulated under Part 500 already have obligations that extend to AI.
NYDFS Industry Letter (Oct 2024)
July 2024
Insurance Circular Letter No. 7: AI in Underwriting & Pricing

Addresses the use of AI systems and external consumer data in insurance underwriting and pricing. Establishes principles around fairness, transparency, and accountability in AI use that reflect broader regulatory expectations applicable across financial services.

Circular Letter CL2024-07
NAIC — National Association of Insurance Commissioners
December 2023 — Adopted by 24+ States
Model Bulletin on the Use of AI Systems by Insurers

The Model AI Bulletin provides guidelines for insurers on responsible AI use. As of August 2025, at least 24 states and the District of Columbia have adopted it in full or substantially similar form. The bulletin recommends NIST’s AI Risk Management Framework (AI RMF) as a reference.

“Insurers must adopt, implement and maintain a documented AI program (an AIS Program) to support the responsible use of AI and mitigate the potential risk of inaccurate or discriminatory decisions, particularly when AI is used in regulated processes.”

The AIS Program must: (1) Address governance, risk management controls, and internal audit functions. (2) Be adopted by the board of directors or an appropriate board committee. (3) Be tailored to and proportionate with the insurer’s use and reliance on AI. (4) Address all AI systems that make decisions impacting customers. (5) Address AI use across the insurance product life cycle.

Required controls must address: oversight and approval process for AI system acquisition; data practices (currency, lineage, quality, integrity, bias, minimization, suitability); validation, testing, and retesting; privacy of non-public information; and data and record retention.

Key Takeaway: The NAIC sets the most prescriptive AI governance requirements of any regulator — a documented program, board adoption, data accountability, and third-party vendor accountability. This model is being adopted state by state.
NAIC Model Bulletin (PDF)
CISA — Cybersecurity and Infrastructure Security Agency
April 15, 2024
Joint Guidance: Deploying AI Systems Securely

Published by the NSA AI Security Center, CISA, FBI, and international partners (Australia, Canada, New Zealand, UK). Provides best practices for deploying and operating externally developed AI systems.

“Prevent unauthorized access or tampering with the AI model by applying role-based access controls (RBAC), or preferably attribute-based access controls (ABAC) where feasible, to limit access to authorized personnel only.”
“Implement hardware protections for model weight storage and aggressively isolate weight storage by storing model weights in a protected storage vault in a highly restricted zone (HRZ).”
“Adopt a zero trust mindset, which assumes a breach is inevitable or has already occurred, implement detection and response capabilities enabling quick identification, and integrate an incident detection system to help prioritize incidents.”

Additional recommendations include sandboxing ML model environments in hardened containers, monitoring networks with firewall allow lists, validating AI model integrity before deployment, and assessing security practices of AI vendors and suppliers.
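As one illustration of the “validating AI model integrity before deployment” recommendation, a deployment pipeline can verify model-weight checksums against a trusted manifest before loading anything. A minimal sketch, assuming a simple JSON manifest of SHA-256 digests (the manifest format and file names are illustrative assumptions, not part of the joint guidance):

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large weight files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_artifacts(model_dir: Path, manifest_path: Path) -> bool:
    """Compare each artifact's digest to the trusted manifest; refuse on any mismatch."""
    manifest = json.loads(manifest_path.read_text())  # e.g. {"weights.bin": "<hex digest>"}
    for name, expected in manifest.items():
        if sha256_of(model_dir / name) != expected:
            return False  # tampered or corrupted -- do not load the model
    return True
```

In practice the manifest itself would be signed and distributed out of band, so an attacker who can modify the weights cannot also rewrite the expected digests.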

Deploying AI Systems Securely (PDF)
May 22, 2025
AI Data Security: Best Practices for Securing Data

Published by CISA, NSA, FBI, and international partners. Outlines ten cybersecurity best practices specific to AI systems, covering the full AI lifecycle from development to operation. References NIST SP 800-53 for additional controls.

Three key risk categories: data supply chain risks (compromised data from third parties), maliciously modified data (poisoning and adversarial manipulation), and data drift (gradual degradation of data quality and relevance).

The ten recommendations address securing the data supply chain, protecting data against unauthorized modification, data integrity validation, access control for training and operational data, and monitoring for data anomalies.
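The “data drift” risk category above can be caught with even simple statistical monitoring: compare a summary statistic of each incoming batch against a trusted baseline and flag when it moves past a threshold. A minimal sketch (the three-sigma threshold and the choice of mean shift as the statistic are illustrative assumptions, not prescribed by the guidance):

```python
from statistics import fmean, pstdev

def drift_score(baseline: list[float], current: list[float]) -> float:
    """Shift of the current batch's mean from the baseline mean,
    measured in baseline standard deviations."""
    sigma = pstdev(baseline) or 1.0  # guard against a constant baseline
    return abs(fmean(current) - fmean(baseline)) / sigma

def has_drifted(baseline: list[float], current: list[float],
                threshold: float = 3.0) -> bool:
    """Flag when the current batch's mean is more than `threshold` sigmas away."""
    return drift_score(baseline, current) > threshold
```

A production system would track many features and use richer tests (e.g., distribution-level comparisons), but the control pattern is the same: a fixed baseline, a per-batch score, and an alert threshold.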

AI Data Security Best Practices (PDF)
December 2025
Principles for Secure AI Integration in Operational Technology

Published by CISA and the Australian Cyber Security Centre with international partners. Outlines four key principles for integrating AI into operational technology systems while reducing risk. Focuses on machine learning, large language models, and AI agents due to their complex security challenges.

Secure AI Integration in OT

See How FCI Aligns with These Requirements

FCI has been deploying AI in cybersecurity since 2017 and helps financial services firms govern AI risk — implementing the controls and enforcement, and producing the evidence, that regulators expect.