As enterprises dive deeper into GenAI, new risks emerge, from misuse to misapplication. Discover how to develop effective AI guardrails and ensure compliance with evolving regulations
Purus suspendisse a ornare non erat pellentesque arcu mi arcu eget tortor eu praesent curabitur porttitor ultrices sit sit amet purus urna enim eget. Habitant massa lectus tristique dictum lacus in bibendum. Velit ut viverra feugiat dui eu nisl sit massa viverra sed vitae nec sed. Nunc ornare consequat massa sagittis pellentesque tincidunt vel lacus integer risu.
Mauris posuere arcu lectus congue. Sed eget semper mollis felis ante. Congue risus vulputate nunc porttitor dignissim cursus viverra quis. Condimentum nisl ut sed diam lacus sed. Cursus hac massa amet cursus diam. Consequat sodales non nulla ac id bibendum eu justo condimentum. Arcu elementum non suscipit amet vitae. Consectetur penatibus diam enim eget arcu et ut a congue arcu.
Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.
Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.
“Nisi consectetur velit bibendum a convallis arcu morbi lectus aecenas ultrices massa vel ut ultricies lectus elit arcu non id mattis libero amet mattis congue ipsum nibh odio in lacinia non”
Nunc ut facilisi volutpat neque est diam id sem erat aliquam elementum dolor tortor commodo et massa dictumst egestas tempor duis eget odio eu egestas nec amet suscipit posuere fames ded tortor ac ut fermentum odio ut amet urna posuere ligula volutpat cursus enim libero libero pretium faucibus nunc arcu mauris sed scelerisque cursus felis arcu sed aenean pharetra vitae suspendisse ac.
As enterprises integrate generative AI (GenAI) systems deeper into their operations, concerns about the risks of these systems grow. Harvard Business Review recently classified these concerns based on two factors — risks related to the creation of GenAI content, and risks related to the consumption of GenAI content.
GenAI systems lacking controls on both the creation and consumption of content guardrails are particularly vulnerable to:
In parallel, regulators, especially across financial services, are becoming more vocal about requiring guardrails from industry participants. In one example, out of many across the industry, Gary Gensler, the Chair of the US Securities and Exchange Commission (SEC), details the types of guardrail coverage an institution may require in a recent speech:
"Investor protection requires the humans who deploy a model to put in place appropriate guardrails. Did those guardrails take into account current law and regulation, such as those pertaining to front-running, spoofing, fraud, and providing advice or recommendations? Did they test it before deployment and how? Did they continue to test and monitor? What is their governance plan — did they update the various guardrails for changing regulations, market conditions, and disclosures?"
"Did those guardrails take into account current law and regulation, such as those pertaining to front-running, spoofing, fraud, and providing advice or recommendations?" — Gary Gensler, Chair of the US Securities and Exchange Commission (SEC)
Guardrails, a critical component of GenAI control frameworks, are lightweight systems designed to shield against user inputs or model outputs that fall into these risk categories. When properly applied, these controls mitigate risks, such as malicious user inputs, exposure of sensitive user data, non-compliant user inputs and model responses, and hallucinations.
Guardrails help protect GenAI models from misuse and prevent the production of inaccurate or unsafe content, all while enhancing visibility into the GenAI system.
While guardrails may seem like a cure-all, their effectiveness is reliant on the quality of the guardrail itself. Dynamo AI has collaborated closely with the financial services industry to develop and deploy guardrails across a broad set of GenAI use cases. During this time, our research experts have observed that defining effective guardrails can be a challenge for even the most seasoned financial services executives.
The goal of a guardrail is to filter out all ‘undesirable’ content while maintaining the functionality of the AI system. This means maximizing true positives (non-compliant content correctly caught) while minimizing false positives (compliant content incorrectly caught). This is a challenging objective, but a high-quality guardrail definition can help.
At DynamoAI, we believe good guardrails have three components:
Now that we’ve outlined what a guardrail is and the structural components to consider when developing a guardrail, let’s turn toward how financial service organizations can start to develop a comprehensive inventory of guardrails.
To develop a comprehensive inventory of guardrails for GenAI applications, financial service organizations must think critically about their use cases.
Some questions to ask include:
Each element will drive a unique set of guardrails that will help build an effective GenAI guardrail control inventory.
GenAI use cases are mainly categorized as internal or external (consumer) facing, with different risks and control requirements associated with each. An internally facing use case primarily serves employees or contractors within an organization. An externally facing use case serves consumers or the public at large.
Each use case requires an assessment of applicable rules, laws, and regulations, which may influence the specific guardrails that should be applied. This should be a systematic process that identifies the use case risks and recommends guardrail control standards (as one component in a larger control framework).
For example, a community bank may want to deploy an internal chatbot that allows personnel to search internal policies, or engage with an internal compliance department to discuss marketing guidelines.
Labor laws from the Department of Labor and individual states may also help inspire a set of guardrails to protect against human resource policy, federal, or state law infringement. This is critical as policymakers and a Supreme Court Justice have noted that organizations may be liable for what their AI says.
Examples include:
These types of guardrails help mitigate a whole set of risks. For example, if employee disability information is inputted, guardrails can prevent this information from being leaked to users or sent to the model. Guardrails can also prevent discussion of sensitive topics that should be handled by a human resource expert or a manager.
Beyond labor laws, the nature of the business plays a significant role in dictating its guardrail inventory. Depending on the nature of the financial services firm, regulations and guidance from each federal agency can inform use case-specific guardrails.
For example, the community bank may decide to use their chatbot to provide consumers with information about a specific financial product. The existing CFPB regulations can help inform any guardrails that should be implemented.
Potential guardrails include controls on limiting financial or legal advice, prohibiting discussions of credit reporting, and ensuring that consumers can connect with a support agent should they have questions about their personal finances.
Consumer harm regulations or research such as those listed below also provide requirements that can be translated into guardrails:
DynamoGuard can help customize your organization’s guardrails to internal AI policies and workflows. Learn more.
Finally, the jurisdictions in which the chatbot is deployed is a critical input to guide guardrail development. The location where an employee utilizes AI, regardless of where the company is based, may impact the types of guardrails required.
Local market laws, rules, and regulations for each jurisdiction need to be considered. The EU AI act provides some of the clearest expectations, while state-specific bills being introduced across the US, such as in Colorado and Connecticut, are also sources of inspiration.
Once a baseline inventory of guardrails is created, organizations should consider including them in their process, risks, and control management. Guardrails should be assessed and tested periodically in alignment with an organization's risk tolerance and testing plans. Procedures should be established for participants across the first and second lines of defense to recommend guardrail enhancements or creations. Additionally, test evidence should be periodically reviewed and strengthened.
The management of GenAI guardrails is a new process across the financial services sector, and an operating model should be designed to identify, deploy, manage, and retire guardrails across their lifecycle, in line with the size and complexity of each organization.
We're here to help you deploy enterprise compliant financial services GenAI use cases. Learn more with a free product demo.