AI Governance
Jun 3, 2024

Applying Third-Party Risk Management Principles Effectively on Generative AI Use


It would be difficult to find a financial services organization that is not engaging with (or actively debating) the use of generative Artificial Intelligence (AI) in some form or fashion. With masters of the universe making visionary proclamations, such as Jamie Dimon, CEO of JPMorgan Chase, stating that AI 'could be as transformative as electricity or the internet' [1], pressure continues to fill boardrooms and senior leadership ranks with questions (and demands) about how to use AI effectively to transform operations.

On the technology front, there appears to be a consolidation of early winners successfully leveraging their infrastructure footholds and the mass exposure of their consumer products to get ahead. Google, Amazon Web Services (AWS), and Microsoft (in partnership with OpenAI) all have existing footholds across the internal workings and infrastructure of financial services organizations, largely paved by the cloud integrations embedded across these organizations in recent years. Additionally, technology titans such as Meta and Apple are using their unique reach into the financial services employee base, via prolific use of personal devices and consumer applications, to gain traction.

For the near term, this consolidation leaves financial services organizations with a narrow vision as to how and where they should focus their risk management efforts.  

Almost one year ago, the Board of Governors of the Federal Reserve System (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) published joint interagency guidance on risk management for third-party relationships [2]. And while the agencies acknowledged that commenters had asked for additional guidance related to artificial intelligence, the topic was not addressed specifically, owing to the regulators' intentionally broad, principles-based approach. As a result, components of this guidance have been highlighted and pulled apart throughout corporate halls by risk managers as their organizations look to implement AI across internal and, eventually, customer-facing use cases.

Specifically, the unique vendor make-up, existing entrenchment, and impact on financial services organizations' strategic risks are driving a laser focus on a number of core tenets in the joint guidance, including:

  • Addressing the need for 'independent testing and objective reporting of results and findings', especially in a field where the required machine learning expertise is scarce, both internally and across vendor partners;
  • Choosing an acceptable 'conformity assessment or certification by independent third parties' in a space where financial services regulators have published little specific guidance, and where broader government bodies and trade associations are only recently beginning to publish and battle-test principles and risk management control recommendations;
  • How to effectively implement contract provisions that allow for 'periodic, independent audits of the third party and its relevant subcontractors, consistent with the risk and complexity of the third-party relationship', given that the large technology partners hold vast internal and political power, and the black-box nature of AI naturally inhibits efforts toward explainability, demonstrated evidence, and traceability;
  • Proactively inserting 'ongoing monitoring and independent reviews' into the technology lifecycle to address changing risk or material issues, specifically as these risks are changing as AI models learn and mature; and
  • Establishing a process to provide evidence of risks to the vendor to facilitate remediation of identified issues or course correct the operational processes.

More recently, in March 2024, the US Department of the Treasury released guidance to the financial services industry focused on cybersecurity risks, an area of priority for US regulators, senior management, and the boards of these organizations. 'Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector' [3] tackles a number of third-party risk management concerns, in particular how to decipher 'explainability for black box AI solutions'. The Treasury acknowledges the concern over lack of explainability, specifically around safety, privacy, and consumer protection, and points to the research and development community as an avenue for answers. And while the Treasury alludes to frameworks to support longer-term assessment, it is clear that a mix of experience, talent, research, independence, and evidence will be part of the solution.

This mix of experience, talent, research, independence, and evidence is producing new AI research each day that is useful for financial services organizations. AI researchers at Dynamo AI divide explainability into four assessable components (and develop risk-mitigating controls alongside each; a simple sketch of how these components might be scored follows the list):

  1. Transparency: Making the development and training process of AI systems clear and understandable to users, developers, and stakeholders.
  2. Interpretability: Enabling people to comprehend how an AI system arrives at its decisions or predictions, and what factors influence its outputs.
  3. Accountability: Ensuring that the processes used to train and deploy AI systems can be monitored and held accountable for their decisions and actions, especially under the lens of user privacy, fairness and bias for critical domains such as healthcare, finance, and law.
  4. Trust: Building trust between users and the outputs of AI systems in terms of safety, truthfulness, and helpfulness.
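
To make these components concrete, here is a minimal Python sketch (not Dynamo AI's actual API) of how the four components might be captured as a scoreable assessment record that risk and audit teams could maintain per vendor or per use case. The field names, scoring scale, and rating rule are illustrative assumptions.

```python
# A minimal sketch (not Dynamo AI's actual API) of capturing the four
# explainability components as an assessable record for risk and audit teams.
from dataclasses import dataclass, field


@dataclass
class ExplainabilityAssessment:
    """Scores each component from 0 (no evidence) to 3 (independently verified)."""
    vendor: str
    use_case: str
    transparency: int = 0      # development/training process documented?
    interpretability: int = 0  # decision factors explainable to reviewers?
    accountability: int = 0    # monitoring and audit trail for decisions?
    trust: int = 0             # safety/truthfulness evaluation evidence?
    evidence: list[str] = field(default_factory=list)

    def overall_risk(self) -> str:
        # Conservative rule: the weakest component drives the rating,
        # since a gap in any one component undermines the other three.
        weakest = min(self.transparency, self.interpretability,
                      self.accountability, self.trust)
        return {0: "high", 1: "elevated", 2: "moderate", 3: "low"}[weakest]


# Hypothetical example: vendor name, scores, and evidence are placeholders.
assessment = ExplainabilityAssessment(
    vendor="example-llm-provider",
    use_case="internal knowledge search",
    transparency=1, interpretability=2, accountability=2, trust=1,
    evidence=["SOC 2 report", "model card v0.3"],
)
print(assessment.overall_risk())  # -> "elevated"
```

The 'weakest component drives the rating' rule mirrors the point that follows: all four components are crucial, so strength in three cannot compensate for a gap in the fourth.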

Each of these four components is crucial to deploying AI safely and effectively. However, there are significant roadblocks to achieving each in practice. Model providers can be increasingly secretive about their development processes, infamously going down the path of closed-source, black-box models in 2021. This not only makes AI transparency extremely challenging, if not impossible, to achieve, but also calls into question interpretability, accountability, and trust.

Where does that leave financial services in thinking about effective independent oversight of AI? Dynamo AI sees a number of key strategies that are becoming more prevalent as organizations balance competition and innovation with risk management:

  • Constantly evaluating your AI technology stack and partners, assessing for concentration risk. This is particularly relevant in evaluating whether the technology vendor is also providing the risk management metrics or controls on the AI being deployed.  
  • Incorporating methods to intake and assess new research on AI, its risks, and control methods. This may take the form of in-house initiatives across risk and audit divisions, as well as ensuring that vendors have built-in research teams and/or the ability to review and react to the latest technical strategies for AI deployment and risk management.  
  • Establishing innovative and effective controls for AI deployment as part of the pre-deployment and post-deployment / ongoing monitoring strategy. Controls implemented pre-deployment may need to feed back into any organizational AI governance function that has been established, as the risks identified (or mitigated) during this phase may inform the overall risk profile of the use case and its impact. Ongoing monitoring should be constant and embedded in the AI stack, with outputs that are clear to staff across a variety of technical and risk management skill levels (see the sketch after this list).  
  • Being increasingly specific about the control reports expected of your vendors, and using these outputs to help inform a broad set of stakeholders about the testing and validation to expect of AI as your use case and processes mature. Aligning this within your Process, Risk, and Control Self-Assessment processes is another good way to bring together the collective set of stakeholders that all require knowledge about AI risk management.  
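
As a minimal illustration of embedding ongoing monitoring in the AI stack, the Python sketch below wraps a generic generate(prompt) callable so that every call emits an auditable log record and fails closed on a policy hit. The policy terms, logger name, and withholding behavior are illustrative assumptions, not a specific vendor's control set.

```python
# A minimal sketch, assuming a generic `generate(prompt) -> str` callable.
# The blocked-terms policy and logger name are illustrative placeholders.
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai.ongoing_monitoring")

BLOCKED_TERMS = {"account number", "ssn"}  # placeholder policy, not exhaustive


def monitored(generate: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a model call so every request emits an auditable record."""
    def wrapper(prompt: str) -> str:
        started = time.time()
        output = generate(prompt)
        flags = [t for t in BLOCKED_TERMS if t in output.lower()]
        # Structured evidence: readable by both technical and risk staff.
        log.info(
            "model_call latency_ms=%d flags=%s prompt_len=%d",
            int((time.time() - started) * 1000), flags or "none", len(prompt),
        )
        if flags:
            return "[withheld pending review]"  # fail closed on policy hits
        return output
    return wrapper


safe_generate = monitored(lambda p: f"echo: {p}")  # stand-in for a real model
print(safe_generate("What is our policy on SSN storage?"))
```

Because the control lives in the call path itself, evidence is produced constantly rather than sampled, which supports both the embedded-monitoring and vendor-reporting strategies above.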

Dynamo AI is the enterprise platform for enabling private, secure, and regulation-compliant AI models. Our team of PhDs, machine learning and risk management experts, and our community of industry partners across financial services are leading the way in providing organizations with the tools to assess and demonstrate compliance while using AI. Let us know how you are engaging with AI and how we can help.

[1] https://www.cnbc.com/2024/04/09/jpmorgan-chase-ceo-jamie-dimon-ai-could-be-as-big-as-the-internet.html

[2] https://www.occ.gov/news-issuances/bulletins/2023/bulletin-2023-17.html

[3] Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector (treasury.gov)