Platform Usage Policy

We want everyone to use our tools safely and responsibly. That’s why we’ve created usage policies that apply to all users of Empsing’s services. By following them, you’ll ensure that our technology is used for good.

If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes. Repeated or serious violations may result in further action, including suspending or terminating your account.

Our policies may change as we learn more about the use and abuse of our models.

Disallowed usage of our models

We don’t allow the use of our models for the following:

  • Illegal activity. Empsing prohibits the use of our models, tools, and services for illegal activity.
  • Child Sexual Abuse Material or any content that exploits or harms children. We report CSAM to the relevant authorities when we discover it.
  • Generation of hateful, harassing, or violent content, including:
    • Content that expresses, incites, or promotes hate based on identity
    • Content that intends to harass, threaten, or bully an individual
    • Content that promotes or glorifies violence or celebrates the suffering or humiliation of others
  • Generation of malware. Content that attempts to generate code designed to disrupt, damage, or gain unauthorized access to a computer system.
  • Activity that has high risk of physical harm, including:
    • Weapons development
    • Military and warfare
    • Management or operation of critical infrastructure in energy, transportation, and water
    • Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders
  • Activity that has high risk of economic harm, including:
    • Multi-level marketing
    • Gambling
    • Payday lending
    • Automated determinations of eligibility for credit, employment, educational institutions, or public assistance services
  • Fraudulent or deceptive activity, including:
    • Scams
    • Coordinated inauthentic behavior
    • Plagiarism
    • Academic dishonesty
    • Astroturfing, such as fake grassroots support or fake review generation
    • Disinformation
    • Spam
    • Pseudo-pharmaceuticals
  • Adult content, adult industries, and dating apps, including:
    • Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness)
    • Erotic chat
    • Pornography
  • Political campaigning or lobbying, by:
    • Generating high volumes of campaign materials
    • Generating campaign materials personalized to or targeted at specific demographics
    • Building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying
    • Building products for political campaigning or lobbying purposes
  • Activity that violates people’s privacy, including:
    • Tracking or monitoring an individual without their consent
    • Facial recognition of private individuals
    • Classifying individuals based on protected characteristics
    • Using biometrics for identification or assessment
    • Unlawful collection or disclosure of personally identifiable information or educational, financial, or other protected records
  • Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information.
  • Offering tailored financial advice without a qualified person reviewing the information
  • Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
  • High-risk government decision-making, including:
    • Law enforcement and criminal justice
    • Migration and asylum

We have further requirements for certain uses of our models:

  • Consumer-facing uses of our models in medical, financial, and legal industries, in news generation or news summarization, and wherever else warranted must provide a disclaimer informing users that AI is being used and of its potential limitations.
  • Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person’s explicit consent or be clearly labeled as “simulated” or “parody.”
  • The use of model outputs in live streams, demonstrations, and research is subject to our Sharing & Publication Policy.