If there is one area where AI is making a massive impact in financial services, that area is cybersecurity.
A recent report from the U.S. Treasury Department underscores the opportunities and challenges that AI presents for the financial services industry. The product of a presidential executive order and led by the Treasury's Office of Cybersecurity and Critical Infrastructure Protection (OCCIP), the report highlights in particular the growing gap between larger and smaller institutions' ability to leverage advanced AI technology to defend themselves against emerging AI-based fraud threats.
In addition to what it calls "the growing capability gap," the report – Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector – also points to another difference between larger and smaller financial institutions: the fraud data divide. This issue is closely related to the capability gap; larger institutions simply have more historical fraud data than their smaller rivals. When it comes to building in-house anti-fraud AI models, larger FIs are able to leverage their data in ways that smaller firms cannot.
These observations are among ten takeaways from the report shared last week. Other concerns include:
Regulatory coordination
Expanding the NIST AI Risk Management Framework
Best practices for data supply chain mapping and “nutrition labels”
Explainability for black box AI solutions
Gaps in human capital
A need for a common AI lexicon
Untangling digital identity solutions
International coordination
More than 40 companies from the fintech and financial services industries contributed to the report. The Treasury research team interviewed companies of all sizes, from "systemically important" international financial firms to regional banks and credit unions. In addition to financial services companies, the team also interviewed technology companies and data providers, cybersecurity specialists, and regulatory agencies.
The report touches on a wide range of issues relating to the integration of AI technology and financial services, among them the increasingly prominent role of data. “To an extent not seen with many other technology developments, technological advancements with AI are dependent on data,” the report’s Executive Summary notes. “In most cases, the quality and quantity of data used for training, testing, and refining an AI model, including those used for cybersecurity and fraud detection, directly impact its eventual precision and efficiency.”
One of the more refreshing takeaways from the Treasury report relates to the "arms race" nature of fraud prevention: fraudsters tend to have access to many of the same technological tools as those charged with stopping them. To this point, the report even acknowledges that, in many instances, cybercriminals will "at least initially" have the upper hand. That said, the report concludes that "at the same time, many industry experts believe that most cyber risks exposed by AI tools or cyber threats related to AI tools can be managed like other IT systems."
At a time when enthusiasm for AI technology is increasingly challenged by anxiety over AI capabilities, this report from the U.S. Treasury is a sober and constructive guide toward a path forward.
Photo by Jorge Jesus