
NYDFS Releases Guidance on Combating AI Cybersecurity Risks

The New York State Department of Financial Services (NYDFS) Cybersecurity Requirements for Financial Services Companies (23 NYCRR 500) took effect in 2017 to better regulate and protect customer data. The requirements were amended on November 1, 2023.

On October 16, 2024, the NYDFS issued new guidance on cybersecurity risks, specifically the uptick in cybercriminals using artificial intelligence (AI) to commit crimes at greater scale and speed. The guidance addresses businesses regulated by the NYDFS that use AI or outsource business to entities that use AI. However, the insights it provides may offer an instructive framework for all organizations to consider.

Key takeaways 

The primary goal of the guidance is to assist “Covered Entities”—that is, entities regulated by the NYDFS—in understanding and assessing the cybersecurity risks associated with the use of AI, as well as the controls organizations can implement to mitigate those risks.

Key threats when using AI 

The guidance identifies four primary threats associated with AI: 

  1. AI-enabled social engineering: Threat actors (individuals or groups seeking to exploit digital systems) may use AI to create deepfakes (highly personalized, manipulated audio, video, images, and text) to deceive authorized users into releasing sensitive information or transferring funds. 
  2. AI-enhanced cyber-attacks: AI can help threat actors scan and analyze large amounts of information quickly, accelerating the speed and scale of cyber-attacks. This includes faster system access, more effective malware deployment, and the ability to develop new malware quickly. 
  3. Exposure of nonpublic information: AI products collect and process large amounts of data, including nonpublic and biometric information, making them particularly attractive targets for cyber-attacks. With this information, threat actors may attempt to imitate authorized users or bypass Multi-Factor Authentication (MFA). 
  4. Outsourced services using AI: Outsourced vendors that use AI products can introduce additional vulnerabilities, as these vendors may collect and manage significant amounts of company information. 

Mitigation strategies 

The NYDFS guidance offers detailed examples of controls and measures Covered Entities may take to mitigate AI-related cyber threats. 

Risk assessments and risk-based programs, policies, procedures, and plans: 

  • Address AI-related risks with respect to your organization’s own AI use, third-party AI technologies, and potential vulnerabilities from AI applications. 

  • Focus on risks to confidentiality, integrity, or availability of systems or nonpublic information. 

Third-party service provider and vendor management: 

  • Consider AI-related threat vulnerabilities specific to third-party service providers. 

  • Require third-party service providers to notify the entity of any AI-related cybersecurity events. 

  • Include appropriate representations or warranties in agreements with third-party service providers. 

Access controls: 

  • Be cautious of MFA methods that are vulnerable to AI deepfakes and other AI-enhanced attacks. 

  • Employ technology with liveness detection or texture analysis to verify biometric factors. 

  • Use multiple biometric modalities simultaneously, such as combining fingerprint with iris recognition or user keystrokes. 

Cybersecurity training: 

  • Train employees on AI-related social engineering attacks and AI-enhanced cyberattacks. 

  • Educate relevant personnel on securing and defending AI systems, designing secure AI systems, and using AI-powered applications without disclosing nonpublic information. 

Monitoring: 

  • Monitor AI-enabled products or services for unusual search behaviors indicating attempts to extract nonpublic information. 

  • Block searches that could expose nonpublic information to public AI products or systems. 

Data management: 

  • Implement data minimization practices for data used in AI applications. 

  • Identify and control all information systems that use or rely on AI. 

  • Maintain an inventory of AI systems and prioritize mitigation measures for critical systems. 

Relevance to all employers 

Although the guidance applies only to organizations regulated by the NYDFS, with the uptick in the use of AI among organizations and cybercriminals alike, it may provide useful information for all organizations to consider. The guidance calls on employers to train employees in the proper use of AI, noting that many AI-driven attacks target employees in hopes of obtaining proprietary company information. Employers regulated by the NYDFS should remain aware of the AI-related cybersecurity risks identified, especially as they pertain to employee AI use and training and to third-party vendor management. 

We are committed to using AI responsibly. MBI Worldwide remains current on Professional Background Screening Association (PBSA) standards, maintaining compliance with rigorous industry standards to keep our systems and client data safe. To learn more, see the official NYDFS industry letter.

This article is for informational purposes only and does not constitute legal advice. Employers should consult their legal counsel before taking any action. 
