A wide range of predefined use cases have been built with the Satellite Writer system, and we’re excited to see what you create. Please review these guidelines to understand what content you can produce.
Almost always approved: Generations that are low stakes, mainly because additional safety and reliability tools are in place to mitigate potential harms.
Approved case-by-case: Higher-risk use cases, where we expect creators to seek approval for these niche topics and to have made a reasonable effort to identify and mitigate safety risks. Our Bias filter will automatically block offensive or harmful generated content.
Almost always approved
Narrow generative content in non-sensitive domains. For instance, any generations in white-label niche categories.
Table of Contents
- High-level guidelines for case-by-case use-case generation
- Book Writing
- Other General Notes
- Search Engine
- Article Writing / Editing
- Blog Tools
- Line-Editors and Direct Writing Assistants
- Social Media
- Code Generation
Spam and political implications
- Content that serves open-ended outputs to third parties, such as Tweet generators, Instagram post generators, unconstrained chatbots, or other open-ended text generators, primarily on social media platforms.
- Content that posts (or enables the posting of) content automatically, or in a largely automated fashion, on social media or other moderately high-stakes domains.
- Content where an end-user is misled to believe that generative responses or content comes from a human.
- Content for scalably generating articles or blog posts for SEO purposes, except where used by trusted first-party users.
- Content that attempts to influence political decisions or opinions, such as tools that make it easier for political campaigns to identify and target potential voters.
- Content for the automatic writing or summarization of news articles or content that may be sensitive politically, economically, medically, or culturally (including summarizers/writers that accept arbitrary inputs and so may be misused for these purposes).
- Content that has a high risk of irreversible harm or that is founded on discredited or unscientific premises.
- Content that classifies people based on protected characteristics (like racial or ethnic background, religion, political views, or private health data).
- Content that extracts protected personal information or other sensitive information from data about people.
- Content that claims to diagnose or treat medical or psychiatric conditions.
- Content that helps determine eligibility for credit, employment, housing, or similar essential services.
- Content intended for use in high-risk contexts such as payday lending, pseudo-pharmaceuticals, gambling, multi-level marketing, weapons development, warfare, national intelligence, or law enforcement.
Improperly extending API access
- Content that recreates the functionality of the Playground for end-users who do not have API keys – for instance, open-ended chatbots or text generators.
- “Wrappers” that allow end-users to create their own workflows.
- Applications that allow a third party to use your access or process tokens.
We prohibit the Satellite Writer system from being used to generate specific content.
In addition, we provide free content filters to help ensure your content generations are being used for their intended purpose and to limit misuse.
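As a sketch of how such a filter fits into a pipeline, the flow below checks each generation before it is served. The `generate` and `classify` callables are hypothetical stand-ins; the real Satellite Writer completion and filter endpoints are not documented here.

```python
from typing import Callable, Optional

def safe_publish(
    generate: Callable[[str], str],
    classify: Callable[[str], str],
    prompt: str,
) -> Optional[str]:
    """Run a generation through a content filter before it reaches an end-user.

    `generate` and `classify` are hypothetical callables standing in for a
    completion endpoint and a content-filter endpoint.
    """
    text = generate(prompt)
    if classify(text) != "safe":
        return None  # withhold flagged generations instead of serving them
    return text
```

The key design point is that filtering happens server-side, before display, so an end-user never sees a generation the filter rejected.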
If you’re using the API for research, building use-cases to address related harms, and similar, we will work with you, so please get in touch with firstname.lastname@example.org about your intended use.
We prohibit users from knowingly generating, or allowing others to generate, the following categories of content:
- Hate: Content that expresses, incites, or promotes hate based on identity.
- Harassment: Content that intends to harass, threaten, or bully an individual.
- Violence: Content that promotes or glorifies violence or celebrates the suffering or humiliation of others.
- Self-harm: Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- Adult: Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- Political: Content attempting to influence the political process or to be used for campaigning purposes.
- Spam: Unsolicited bulk content.
- Deception: False or misleading content, such as content attempting to defraud individuals or spread disinformation.
- Malware: Content that attempts to generate ransomware, keyloggers, viruses, or other software intended to cause harm.
What if my content was previously approved but is now unsupported?
Please get in touch with Support if you have concerns about this. In addition, our production team will generally reach out to customers that may be implicated by changes to our usage guidelines, though these are relatively infrequent.
My content looks like what company X is doing. So why do you allow X to do it but not my account?
Sometimes, other mitigating factors can allow for an application to be approved or not approved (e.g., being a legacy approval, working with select partners to help us evolve our guidelines, etc.).
What happens if I don’t go through the pre-launch review process?
Your access may be immediately revoked if you deploy without submitting a review. In some cases, the loss of an API key may be permanent. If you deployed accidentally without realizing this policy, please contact Support, and we will work out the best way to get your application compliant.
A good rule of thumb is that something counts as high stakes if mistakes by the AI could harm someone’s quality of life (e.g., an end-user gets terrible medical advice) or create other severe adverse impacts.
- Content that could reasonably be expected to give medical advice.
- Content that evaluates applicants for jobs or loans.
- Content that is meant to provide emotional support or therapy.
- Content that gives legal advice.
- Content that generates code for use in a safety-critical or security-related context.
How will these guidelines evolve?
The Exo team will update these guidelines from time to time and hopes to expand what can be done safely with the Satellite Writer system.
In particular, we plan to do this in two ways:
Through risk mitigation: We are working to develop new technical methods and assurance strategies to manage risks. Over time, we hope to unlock many use cases that are currently restricted.
Through partnerships: We are interested in partnering with a handful of developers to unlock higher-risk use cases.
We are incredibly motivated to work with developers with established domain expertise who can help us identify and mitigate the risks in those areas.
We expect to increase the level of detail in our guidelines over time as we learn more.
This section discusses requirements for use-cases which are evaluated case-by-case. Please see above for examples that are almost always approvable or disallowed.
We generally do not permit tools that generate a paragraph or more of natural language or many lines of code unless the output is of a concrete structure that couldn’t be repurposed for general blog or article generation (e.g., a cover letter, a recipe, song lyrics).
For headline generation, we limit the maximum input to two sentences, or roughly 30-60 words.
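A limit like this can be enforced before the prompt ever reaches the API. The following is a minimal sketch; the word and sentence thresholds mirror the numbers above, and the sentence splitting is deliberately naive.

```python
import re

MAX_WORDS = 60
MAX_SENTENCES = 2

def validate_headline_input(text: str) -> bool:
    """Return True if user input fits within the headline-generation limits."""
    words = text.split()
    # Naive sentence split on ., !, ? terminators; good enough for a length gate.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(words) <= MAX_WORDS and len(sentences) <= MAX_SENTENCES

validate_headline_input("Local bakery wins award.")  # True: 4 words, 1 sentence
validate_headline_input("One. Two. Three.")          # False: 3 sentences
```

Rejecting over-long input up front is cheaper and safer than trying to truncate it after the fact.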
For code generation, we highly recommend that a human in the loop thoroughly review the code, and we require it in high-stakes contexts.
A significant focus in this section is on preventing overly-general tools that can replicate the functionality of Playground.
For example, if an end-user essentially has access to Playground through a particular tool, they can repurpose the tool for an unintended or disallowed purpose.
Restrictions on user-input lengths are one way we reduce this risk:
When an end-user can insert only a limited number of words into the prompt, or the prompt is structured so that an end-user cannot redirect it (e.g., by inserting “Write me Tweets about […]” into the user-input portion), the tool is more likely to stay on its intended functionality.
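One way to apply both ideas at once is a fixed prompt frame that caps and sanitizes the user-controlled portion. This is an illustrative sketch, not the system’s actual prompt format; the 10-word cap is an assumption.

```python
def build_tweet_prompt(user_topic: str, max_words: int = 10) -> str:
    """Embed a short, capped user topic inside a fixed prompt frame."""
    words = user_topic.split()[:max_words]  # hard cap on user-controlled text
    # str.split() also splits on newlines, so joining collapses line breaks
    # an end-user might use to smuggle in new instructions.
    topic = " ".join(words)
    return f"Write three upbeat Tweets about: {topic}\nTweets:"
```

Because the frame text is constant and the user can only fill a short slot inside it, the tool is much harder to repurpose as a general-purpose text generator.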
Another primary focus is preventing the misuse of tools to generate misinformation at scale.
In this case, restrictions on the maximum number of output tokens are a way of reducing this risk:
By limiting the amount of text produced in one go, we aim to allow tools that are genuinely helpful to marketers while not letting bad actors produce large amounts of text with trivial effort.
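Beyond a per-request token cap, a tool can also track each end-user’s cumulative output. The sketch below shows one way to do that; the daily cap is an illustrative number, not a documented limit.

```python
class OutputBudget:
    """Track output tokens per end-user so no single user can mass-produce text.

    The daily cap is an illustrative assumption, not a documented limit.
    """

    def __init__(self, daily_cap: int = 2000):
        self.daily_cap = daily_cap
        self.used = {}  # user_id -> tokens spent so far today

    def allow(self, user_id: str, requested_tokens: int) -> bool:
        """Approve the request only if it fits in the user's remaining budget."""
        spent = self.used.get(user_id, 0)
        if spent + requested_tokens > self.daily_cap:
            return False
        self.used[user_id] = spent + requested_tokens
        return True
```

A per-request cap limits how much text one call produces; a cumulative budget like this limits how much one user can produce by simply making many calls.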
We will update this section with specific examples as we learn more about the content people create with Satellite Writer.
However, in general, we do not approve:
Content that is intended to or could easily facilitate cybercrime.
Content aimed at generating code for any purpose involving an intent to harm or that has a reasonable expectation of causing harm. This includes code that might be directly harmful (e.g., a virus) and code that enables harmful use cases (e.g., deepfake, spam generation, or non-consensual surveillance).
For Satellite Writer, serving multiple end-users may quickly exceed your rate limits.
In many cases, particularly in high-stakes contexts, our Bias Filter will block an account and send a user to “AI Ethics school”. If the problem involves a specific user or account, we will require a human in the loop to thoroughly review content before publication.