Financial services are racing to integrate GenAI. But with new regulations and growing complexity, how can organizations effectively manage AI risks and ensure transparency?
It's difficult to find a financial services organization that's not engaging with (or actively debating) generative artificial intelligence (AI). With masters of the universe making visionary proclamations, such as Jamie Dimon, CEO of JPMorgan Chase, stating that "AI could be as transformative as electricity or the internet," pressure continues to mount in boardrooms and senior leadership ranks, with questions — and demands — about how to use AI effectively to transform operations.
On the technology front, there's a consolidation of early winners utilizing their infrastructure foothold and mass exposure of consumer products to get ahead. Google, Amazon Web Services (AWS), and Microsoft (in partnership with OpenAI) have already established a presence in financial services through their cloud integrations. Meanwhile, Meta and Apple are leveraging their widespread use of personal devices and consumer apps to make inroads into the financial services sector.
This leaves financial services organizations with a narrow vision as to how and where they should focus their risk management efforts.
In June 2023, the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) published joint interagency guidance on risk management for third-party relationships. Although the agencies acknowledged requests for more specific AI guidance, they chose a broad, principles-based approach.
As a result, risk managers are closely examining and dissecting this guidance as they work to implement AI across their organizations' internal and, eventually, customer-facing use cases. Specific areas under examination include the unique vendor landscape, existing vendor entrenchment, and the impact on financial services' strategic risks.
More recently, in March 2024, the U.S. Department of the Treasury released guidance for the financial services industry focused on cybersecurity risks, a priority for U.S. regulators and for the senior management and board members of these organizations.
'Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector' tackles a number of third-party risk management concerns, in particular how to achieve explainability for black-box AI solutions. The Treasury acknowledges the lack of explainability as a concern, specifically around safety, privacy, and consumer protection, and points to the research and development community as an avenue for answers.
And while the Treasury alludes to frameworks that could support longer-term assessment, it's clear that a mix of experience, talent, research, independence, and evidence will be part of the solution. That mix is producing breakthrough AI research every day, much of it highly useful for financial services organizations.
AI researchers at Dynamo AI divide explainability into four assessable components, with risk-mitigating controls alongside each:
Each of these four components is crucial to deploying AI safely and effectively. However, there are significant roadblocks to achieving each in practice. Companies have become increasingly secretive about their model development processes, infamously going down the path of closed-source, black-box models in 2021. This not only makes AI transparency extremely challenging, if not impossible, to achieve, but also calls into question interpretability, accountability, and trust.
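Even when a model's internals are opaque, its behavior can still be probed from the outside. One common technique is perturbation-based sensitivity analysis: nudge each input feature slightly and measure how much the output moves. The sketch below illustrates the idea with a hypothetical stand-in scoring function — `black_box_score` and its feature names are illustrative assumptions, not any real vendor model or API:

```python
# A minimal sketch of perturbation-based explainability for a black-box model.
# `black_box_score` is a hypothetical stand-in whose internals we treat as opaque.

def black_box_score(features: dict) -> float:
    """Hypothetical credit-risk score in [0, 1]; assumed to be a black box."""
    raw = (0.5
           + 0.004 * (features["income_k"] - 50)
           - 0.3 * features["delinquencies"] / 10
           + 0.002 * (features["credit_age_yrs"] - 5))
    return min(1.0, max(0.0, raw))

def sensitivity(model, baseline: dict, delta: float = 1.0) -> dict:
    """Estimate each feature's local influence on the model's output by
    perturbing one feature at a time and measuring the change in score."""
    base = model(baseline)
    influences = {}
    for name in baseline:
        bumped = dict(baseline)
        bumped[name] += delta          # perturb a single feature
        influences[name] = (model(bumped) - base) / delta
    return influences

applicant = {"income_k": 60, "delinquencies": 1, "credit_age_yrs": 8}
for feature, influence in sensitivity(black_box_score, applicant).items():
    print(f"{feature}: {influence:+.4f}")
```

A risk team could run the same loop against any scoring endpoint it can call, producing per-feature influence estimates to support interpretability reviews — without ever needing access to the vendor's model weights.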
Where does that leave financial services in terms of effective independent oversight of AI? Dynamo AI identifies several key strategies emerging as organizations seek to balance competition, innovation, and risk management:
Dynamo AI provides financial services organizations with the tools needed to assess and demonstrate AI compliance. Schedule your free demo.