News
May 9, 2024

Navigating AI Security in Response to the White House Executive Order

After the White House Executive Order, investigate AI security. Learn how DynamoFL uses cutting-edge AI solutions to manage legal compliance and safety.

Navigating AI Security in Response to the White House Executive Order

Low-code tools are going mainstream

Purus suspendisse a ornare non erat pellentesque arcu mi arcu eget tortor eu praesent curabitur porttitor ultrices sit sit amet purus urna enim eget. Habitant massa lectus tristique dictum lacus in bibendum. Velit ut viverra feugiat dui eu nisl sit massa viverra sed vitae nec sed. Nunc ornare consequat massa sagittis pellentesque tincidunt vel lacus integer risu.

  1. Vitae et erat tincidunt sed orci eget egestas facilisis amet ornare
  2. Sollicitudin integer  velit aliquet viverra urna orci semper velit dolor sit amet
  3. Vitae quis ut  luctus lobortis urna adipiscing bibendum
  4. Vitae quis ut  luctus lobortis urna adipiscing bibendum

Multilingual NLP will grow

Mauris posuere arcu lectus congue. Sed eget semper mollis felis ante. Congue risus vulputate nunc porttitor dignissim cursus viverra quis. Condimentum nisl ut sed diam lacus sed. Cursus hac massa amet cursus diam. Consequat sodales non nulla ac id bibendum eu justo condimentum. Arcu elementum non suscipit amet vitae. Consectetur penatibus diam enim eget arcu et ut a congue arcu.

Vitae quis ut  luctus lobortis urna adipiscing bibendum

Combining supervised and unsupervised machine learning methods

Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.

  • Dolor duis lorem enim eu turpis potenti nulla  laoreet volutpat semper sed.
  • Lorem a eget blandit ac neque amet amet non dapibus pulvinar.
  • Pellentesque non integer ac id imperdiet blandit sit bibendum.
  • Sit leo lorem elementum vitae faucibus quam feugiat hendrerit lectus.
Automating customer service: Tagging tickets and new era of chatbots

Vitae vitae sollicitudin diam sed. Aliquam tellus libero a velit quam ut suscipit. Vitae adipiscing amet faucibus nec in ut. Tortor nulla aliquam commodo sit ultricies a nunc ultrices consectetur. Nibh magna arcu blandit quisque. In lorem sit turpis interdum facilisi.

“Nisi consectetur velit bibendum a convallis arcu morbi lectus aecenas ultrices massa vel ut ultricies lectus elit arcu non id mattis libero amet mattis congue ipsum nibh odio in lacinia non”
Detecting fake news and cyber-bullying

Nunc ut facilisi volutpat neque est diam id sem erat aliquam elementum dolor tortor commodo et massa dictumst egestas tempor duis eget odio eu egestas nec amet suscipit posuere fames ded tortor ac ut fermentum odio ut amet urna posuere ligula volutpat cursus enim libero libero pretium faucibus nunc arcu mauris sed scelerisque cursus felis arcu sed aenean pharetra vitae suspendisse ac.

This week, the White House published a sweeping executive order on artificial intelligence (AI) announcing new federal mandates that are intended to promote the safe, reliable, and responsible deployment of AI. This executive order is the heaviest regulatory step taken by the US federal government so far, building upon its introduction of the Blueprint for an AI Bill of Rights and previous Executive Order directing agencies to combat algorithmic discrimination. The executive order has a major focus on guiding federal agencies on how to regulate AI and thus carries considerable consequences for the private sector, particularly with its expectations around AI safety, security, privacy, algorithmic discrimination and the preservation of civil rights.  

President Biden has placed AI safety and security as the first and foundational theme in his 111-page Executive Order. This is underscored by Vice President Harris' follow-up announcement on Wednesday that the administration will launch a new AI Safety Institute within the Department of Commerce focused on creating standards to test the safety of AI models and release draft policy guidance for the U.S. government’s use of the technology.

DynamoFL has been in active discussion with key policymakers from both the Legislative and Executive offices that are helping to shape the federal government's collective approach to AI regulation. We can assist organizations who are seeking to understand the Executive Order’s new emphasis on standards for AI safety and security and help you plan ahead. In anticipation of the emerging Executive Order, we designed DynamoFL’s LLM platform to streamline compliance with unfolding regulatory requirements from governments around the world, including this US Executive Order and the EU AI Act. We have been preparing our existing customers to efficiently and safely productionize their Generative AI solutions, while helping them document compliance along the way.

To start with, the executive order explicitly calls out the risk of data leakage and data extraction:

“AI not only makes it easier to extract, identify, and exploit personal data, but it also heightens incentives to do so because companies use data to train AI systems.”

DynamoFL provides state-of-the-art privacy-preserving techniques to support end-to-end LLM training and an AI safety evaluation platform that is easily integratable with existing ML workflows. DynamoFL's solutions have already helped some of the largest enterprises address risks mentioned in the EO through automated, ongoing penetration testing of LLMs for malicious data extraction attacks. 

Regulators are striving to engage with deeply technical topics pertaining to LLMs and this is further evidenced by FTC’s actions earlier this year. Enterprises productionizing AI services should expect their regulators to be knowledgeable of technical topics and prepare accordingly.

In addition to proactive engagement with policymakers, our world-class team of AI experts reviews emerging regulatory frameworks and filters hundreds of new ML research papers published to productionize regulation-compliant methods best tailored to your AI use case and business needs.

Red-Teaming and Penetration testing

The executive order also requests the creation of guidelines for AI audits and red-teaming, especially of AI capabilities which could cause harm. This focus on AI red teaming builds on the “external red teaming” language used in the voluntary AI commitments declaration and independent “testing” mentioned in the NIST AI RMF and EU AI Act. AI “red-teaming" would be a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.

 “Testing and evaluations will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies.”

Red-teaming can help identify potential compliance incidents, data leakage, and malicious attacks has become higher with the onset of Generative AI and LLMs. This category of ML models are more likely to train on larger and more sensitive datasets, and introduce new attack vectors. In fact, DynamoFL’s team of academic experts and privacy researchers recently uncovered major data and privacy vulnerabilities with GPT-3, a major large language model provided by OpenAI. 

DynamoFL offers LLM penetration testing solutions that have been utilized by financial services institutions, hardware companies, and other large organizations deploying GenAI since early 2023. Through our evaluation suite, we stress-test organizations’ open-source and closed-source LLMs for critical LLM risks such as sensitive data leakage, hallucinations, and more.

Safety Documentation

Safety and security evaluations must be paired with comprehensive and standardized reporting requirements, and DynamoFL is ahead of the industry in defining what standardized AI safety reports should contain. The executive order requests developers of dual-use foundational models to provide “a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.

DynamoFL offers a suite of customizable, auto-generated AI model evaluation reports that can be shared with your company leadership, regulators, and other key stakeholders.

Conclusion

We started this company to enable Fortune 500 enterprises, federal agencies, and other large organizations to deploy innovative AI solutions securely, safely, and with mitigations in place against potential risk vectors that can affect users, communities, and organizations at large. DynamoFL is here to help and we encourage you to reach us at hello@dynamofl.com to learn more.