Legal Documentation

Acceptable Use Policy

Please review our Acceptable Use Policy before using Dynamo AI in your work environment. Stay responsible and safe.

We want everyone to use our tools safely and responsibly. That’s why we’ve created usage policies that apply to all users of Dynamo AI’s models, tools, and services. 

If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes. Repeated or serious violations may result in further action, including suspending your usage or terminating your account.

Our policies are subject to change. 


This Artificial Intelligence Acceptable Use Policy (“Policy”) applies to customers’ use of all services offered by DynamoFL, Inc. or its affiliates (“Dynamo AI”) that incorporate artificial intelligence, as well as to third-party products, applications, or functionality that interoperate with services offered by Dynamo AI and incorporate artificial intelligence. The terms of this Policy are in addition to the Privacy Policy.

Disallowed Usage

Customers and their users may not use any Dynamo AI artificial intelligence products, or any third-party product, application, or functionality that interoperates with our Services and incorporates artificial intelligence or machine learning, for the following:

Illegal activity

Generation of hateful, harassing, or violent content:

  • Content that expresses, incites, or promotes hate based on identity
  • Content that intends to harass, threaten, or bully an individual
  • Content that promotes or glorifies violence or celebrates the suffering or humiliation of others

Weapons Development

Developing, advertising, marketing, distributing, or selling weapons, weapon accessories, or explosives, as enumerated on the United States Munitions List.

Political campaigning or lobbying, by:

  • Generating high volumes of campaign materials
  • Generating campaign materials personalized to or targeted at specific demographics
  • Building conversational or interactive systems such as chatbots that provide unverified information about campaigns or engage in political advocacy or lobbying
  • Building products for political campaigning or lobbying purposes

Generating Adult Content 

  • Submitting, generating or distributing sexually explicit material or adult products;
  • Submitting, generating or distributing non-consensual intimate imagery;
  • Submitting, generating or distributing deepfake or deepnude pornography;
  • Creating sexual chatbots or engaging in erotic chat.

Fraudulent or deceptive activity, including:

  • Scams
  • Coordinated inauthentic behavior
  • Plagiarism
  • Academic dishonesty
  • Astroturfing, such as fake grassroots support or fake review generation
  • Disinformation
  • Spam
  • Pseudo-pharmaceuticals

Activity that violates people’s privacy, including:

  • Tracking or monitoring an individual without their consent
  • Facial recognition of private individuals
  • Classifying individuals based on protected characteristics
  • Using biometrics for identification or assessment
  • Unlawful collection or disclosure of personal identifiable information or educational, financial, or other protected records

Generation of malware

Generating code that is designed to disrupt, damage, or gain unauthorized access to a computer system.

Automated Decision-Making with Legal Effects

Automated decision-making processes with legal or similarly significant effects, unless the Customer ensures that the final decision is made by a human being. In such cases, the Customer must take into account factors beyond the Services’ recommendations in making the final decision.

Individualized Advice from Licensed Professionals

Generating individualized advice that, in the ordinary course of business, would be provided by a licensed professional. This includes, for example, financial, medical, and legal advice.

Explicitly Predicting Protected Characteristics

Explicitly predicting an individual’s protected characteristics, including, but not limited to: racial or ethnic origin; past, current, or future political opinions; religious or philosophical beliefs; trade union membership; age; gender; sex life; sexual orientation; disability; health status; medical condition; financial status; criminal convictions; or likelihood to engage in criminal acts.

  • Please contact Dynamo AI if you intend to use our services to specifically identify security breaches, unauthorized access, fraud, and other security vulnerabilities, or to identify and reduce bias in Dynamo AI products. 

Child Sexual Exploitation and Abuse

For any purposes related to child sexual exploitation or Child Sexual Abuse Material (CSAM).

Additionally, Customer should not:

  • Include personally identifiable information (“PII”) or other confidential or sensitive information in queries or inputs made to the Dynamo AI system. There is a technical risk that the system may “memorize” such information, and your data may be used to improve our services; you may review our privacy policy here. For uses of the Services that involve PII or confidential data and have been pre-approved by Dynamo AI in writing, the Customer must ensure it has the authority, consent, and lawful basis to upload any PII as part of a user-submitted query or fine-tuning dataset.
  • Submit images of individuals for the purposes of creating or analyzing biometric identifiers, such as face prints or fingerprints or scans of eyes, hands, or facial geometry.


  1. Customers must disclose when end users or consumers are interacting directly with automated systems, such as Dynamo AI chatbots or similar features, unless this is obvious from context or there is a human in the loop.
  2. Customers may not deceive end users or consumers by misrepresenting content generated through automated means as human generated or original content.

We have further requirements for certain uses of our models:

  • For consumer-facing uses of our models in the medical, financial, and legal industries; in news generation or news summarization; and wherever else warranted, developers must provide a disclaimer informing users that AI is being used and of its potential limitations.
  • Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. Products that simulate another person must either have that person's explicit consent or be clearly labeled as “simulated” or “parody.”

Note about adversarial attacks: Intentional stress testing of the API and adversarial attacks are permitted, but violative generations must be disclosed and reported to Dynamo AI immediately, and must not be used for any purpose except documenting the results of such attacks in a responsible manner.


DATA SECURITY: Dynamo AI is not intended to be a replacement for wider information security measures. The security of any User Data remains subject to the Customer’s wider information security and data retention protocols and policies.

THIRD PARTY OPERATORS: If the Customer chooses to engage a third party (including any group company) to operate Dynamo AI on its behalf, we recommend that the Customer ensure the third party complies with the GDPR and any other relevant law.

NOTICE OF HIGH-RISK USE: Generative AI technology will continue to be used in new and innovative ways, and we encourage you to consider whether your use of these technologies is safe.