Product
May 9, 2024

DynamoGuard: a platform for customizing powerful AI gates

DynamoGuard: a platform for customizing powerful AI gates

Low-code tools are going mainstream

Purus suspendisse a ornare non erat pellentesque arcu mi arcu eget tortor eu praesent curabitur porttitor ultrices sit sit amet purus urna enim eget. Habitant massa lectus tristique dictum lacus in bibendum. Velit ut viverra feugiat dui eu nisl sit massa viverra sed vitae nec sed. Nunc ornare consequat massa sagittis pellentesque tincidunt vel lacus integer risu.

  1. Vitae et erat tincidunt sed orci eget egestas facilisis amet ornare
  2. Sollicitudin integer  velit aliquet viverra urna orci semper velit dolor sit amet
  3. Vitae quis ut  luctus lobortis urna adipiscing bibendum
  4. Vitae quis ut  luctus lobortis urna adipiscing bibendum

Multilingual NLP will grow

Mauris posuere arcu lectus congue. Sed eget semper mollis felis ante. Congue risus vulputate nunc porttitor dignissim cursus viverra quis. Condimentum nisl ut sed diam lacus sed. Cursus hac massa amet cursus diam. Consequat sodales non nulla ac id bibendum eu justo condimentum. Arcu elementum non suscipit amet vitae. Consectetur penatibus diam enim eget arcu et ut a congue arcu.

Vitae quis ut  luctus lobortis urna adipiscing bibendum

Combining supervised and unsupervised machine learning methods

Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.

  • Dolor duis lorem enim eu turpis potenti nulla  laoreet volutpat semper sed.
  • Lorem a eget blandit ac neque amet amet non dapibus pulvinar.
  • Pellentesque non integer ac id imperdiet blandit sit bibendum.
  • Sit leo lorem elementum vitae faucibus quam feugiat hendrerit lectus.
Automating customer service: Tagging tickets and new era of chatbots

Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.

“Nisi consectetur velit bibendum a convallis arcu morbi lectus aecenas ultrices massa vel ut ultricies lectus elit arcu non id mattis libero amet mattis congue ipsum nibh odio in lacinia non”
Detecting fake news and cyber-bullying

Nunc ut facilisi volutpat neque est diam id sem erat aliquam elementum dolor tortor commodo et massa dictumst egestas tempor duis eget odio eu egestas nec amet suscipit posuere fames ded tortor ac ut fermentum odio ut amet urna posuere ligula volutpat cursus enim libero libero pretium faucibus nunc arcu mauris sed scelerisque cursus felis arcu sed aenean pharetra vitae suspendisse ac.

In 2023, DynamoFL had the opportunity to enable some of the largest global enterprises to embed privacy, safety, and security into their GenAI stacks. These partnerships laid the foundation for these enterprises to streamline the launch of GenAI solutions at greater speed and efficiency. As we move into 2024, we are witnessing a wave of enterprises seeking to productionize GenAI beyond just preliminary experimentation, and are seriously addressing security and safety challenges with GenAI.

Enterprises are also weighing the impact of emerging regulations like the EU AI Act and US Executive Order, which define a new framework of compliance requirements for GenAI. For example, LLMs can be prompted in nearly infinitely diverse ways, resulting in a virtually unbound range of risk of LLMs generating non-compliant outputs. Efforts to enforce safety guardrails around models have thus far proven difficult and unsatisfactory for enterprise risk management, thus presenting challenges to adhere to guidelines in NIST’s Risk Management Framework or the EU’s risk triaging paradigm.  

The Enterprise’s Challenge with GenAI Safety and Security Today

In our work enabling production-grade GenAI for enterprises, we have found that most LLM safety solutions (i.e. guardrail products) fall incredibly short of emerging regulatory requirements and general safety standards. We highlight three reasons for this failure:

  1. Today’s guardrail offerings lack true customizability needed by enterprises: each enterprise has their own unique set of AI governance requirements tied to their specific LLM use-cases. Out-of-box LLMs are designed to adhere to a limited set of safety principles that are often too broad to tackle edge-cases that make up the bulk of LLM compliance violations.  
  1. Poor detection of non-compliant usage: since LLMs are only aligned to broad safety guidelines, they fail spectacularly when encountering nuanced non-compliance edge-cases. For example, we found that LlamaGuard (based on the Llama2-7B architecture) failed to correctly flag 86% of prompt injection attacks from two popular prompt injection classification datasets.  
  1. Limited ability to incorporate safety requirements from compliance and risk teams: compliance and risk teams commonly create a list of bespoke AI governance requirements (i.e. an “AI Constitution”) to promote safe LLM usage, but enterprises lack a meaningful workflow beyond limited prompt-engineering to enforce these requirements.  

DynamoGuard is a Significant Leap Forward in GenAI Safety and Security

DynamoGuard is currently enabling a critical step forward in addressing these gaps in AI safety. In contrast to previous guardrailing approaches, DynamoGuard meaningfully advances LLM safety by integrating the following key capabilities:

  1. Unprecedented Customization of Guardrails: Compliance teams can simply copy-and-paste their AI governance requirements into DynamoFL to enable truly customizable guardrails.  
  1. A Major Leap in Non-Compliance Detection Accuracy: DynamoGuard achieves a 2-3X improvement in compliance violation detection (i.e. in the detection of prompt-injection violations) compared to leading LLMs by leveraging DynamoFL’s “Automatic Policy Optimization” (APO) technique to teach guardrails to address non-compliance edge-cases.  
  1. A Human-Centric Workflow for Building Robust Guardrails: While edge-case reasoning is a powerful technique to bolster guardrail efficacy, compliance teams (human beings) still need to be closely involved in tuning and monitoring guardrails. DynamoGuard provides compliance teams with an end-to-end workflow to fine-tune AI guardrails and monitor their performance in real-time to close the gap in meeting compliance requirements.

How it Works: the DynamoGuard User Journey

  1. Compliance team describe their AI governance policies to DynamoGuard in natural language (or just copy and paste their existing AI governance policies into DynamoGuard)
  1. DynamoGuard leverages Automatic Policy Optimization (APO) to generate a series of example user-interaction edge-cases that violate AI governance policies.  
  1. Compliance teams edit or reject these edge-case examples to refine DynamoGuard’s understanding of nuanced edge-case violations.  
  1. DynamoGuard fine-tunes a lightweight guard model to classify the generated edge-cases
  1. DynamoGuard is integrated into the enterprise's production-ready LLM system and leverages its fine-tuned lightweight guard model to flag noncompliance violations in LLM inputs and outputs.  
  1. Compliance teams can monitor guardrail efficacy in real-time through DynamoGuard’s LLM monitoring dashboard and continue to fine-tune their LLMs to strengthen guardrails.  

Expanding the reach of DynamoGuard with Dynamo 8B, our multilingual foundation model to democratize access to safe GenAI

Even with the constant stream of exciting updates in the LLM space, there is a relative lack of investment in non-English languages, resulting in a gap in performance between English and other languages for open-source language models. We built Dynamo 8B to address this gap in multilingual LLM offerings.  

We are excited about the downstream applications that Dynamo 8B will support. AI teams are struggling to address the challenge of unsafe user queries and LLM outputs, resulting in major compliance challenges for enterprises deploying the technology. As language models like LlamaGuard and Phi-2 were developed to act as more lightweight guardrail models to regulate LLM inputs and outputs, we are excited for Dynamo 8B to similarly enable safe and compliant usage of LLMs globally across a diverse set of languages.  

DynamoGuard completes DynamoFL’s comprehensive GenAI safety and security offerings

DynamoFL’s complete product offering seeks to provide our customers with the appropriate tools and techniques to enable Blue Teaming & Red Teaming, while also ensuring end-to-end auditability of the entire LLMOps lifecycle.  

Our products to-date include: 

  1. DynamoEval – Evaluate an unlimited number of existing closed or open-source LLMs for privacy, security, and reliability risks with 20+ different adversarial testing approaches 
  1. Regulation-Compliant Co-Pilots – A pre-trained catalog of co-pilots within Banking, Healthcare, and Life Sciences that are embedded with differential privacy and optimized for cost. This ensures that AI systems are regulation compliant, without driving up your expenses. 
  1. DynamoFL – Enable private and efficient federated machine learning when training AI models across distributed data sets 

Our 2 new product additions are: 

  1. DynamoGuard: Enable real-time moderation of both internally and 3rd party hosted LLMs, based on natural language processing of your internal compliance policies. DynamoGuard then creates your AI guardrails to prevent and monitor non-compliant inputs / outputs  
  1. Dynamo 8B foundation model:  DynamoGuard is made possible based on our multilingual Foundation Models that have unparalleled performance in comparison with similar sized models. Read more about our Dynamo8B release here! 

With DynamoEval, we are introducing a unique, comprehensive, and technical approach to red teaming. Now, DynamoGuard is the first fully-customizable guardrail development & deployment platform enabling blue teaming. Leveraging both DynamoEval and DynamoGuard, AI teams can ensure they have a fully auditable pipeline from pre-deployment through post-deployment.