Pre-Screening and Post-Screening Policy
Last Modified: April 20, 2026
This Pre-Screening and Post-Screening Policy is published by SugarGlitch Ltd, a Cyprus private limited company ("SugarGlitch," "we," "our," or "us"), and is incorporated into our Terms of Service.
This Policy describes the two-stage content review process we use on the Services. It works alongside our Prohibited Content Policy (which describes what is not allowed), our Content Moderation Policy (which describes our overall moderation approach), and our Content Removal Policy (which describes how Content is removed and the rights available to affected users).
1. What Is Screened
Both pre-screening and post-screening apply to:
- AI-generated outputs (text, images, audio, video, and other Generations)
- Prompts and inputs submitted by users
- Characters and their metadata (names, descriptions, traits, voices, instructions)
- Usernames, display names, profile fields, and avatars
- Any other Content created, uploaded, or shared on the Services
2. Pre-Screening
Pre-screening occurs before Content is generated, displayed, or made available on the Services. It includes:
- Real-time automated scanning of prompts and inputs against rule-based filters and against classifiers trained to detect prohibited categories
- Filters that block patterns, terms, and behaviors associated with known abuse vectors and circumvention techniques
- Strict pre-generation checks designed to prevent the creation of Content depicting minors in sexual contexts and other zero-tolerance categories
- Human review before publication for some categories of user-submitted Content (such as public Characters)
If a prompt or input is blocked at the pre-screening stage, the user may receive a notice that the request was rejected. We do not always disclose the specific basis for a block where doing so would assist circumvention.
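For illustration only, the pre-screening flow described above can be sketched roughly as follows. The pattern list, classifier stub, and threshold are hypothetical stand-ins, not our actual detection systems, and real filters are substantially more sophisticated:

```python
import re
from dataclasses import dataclass

# Hypothetical placeholder rules; not an actual filter list.
BLOCKED_PATTERNS = [
    re.compile(r"\bforbidden_term\b", re.IGNORECASE),
]

@dataclass
class Decision:
    allowed: bool
    # Only a generic notice is returned; the specific basis for a block
    # is not always disclosed, mirroring Section 2 of this Policy.
    notice: str = ""

def classify(prompt: str) -> float:
    """Stub for a trained classifier returning a violation score in [0, 1]."""
    return 1.0 if "abuse" in prompt.lower() else 0.0

def pre_screen(prompt: str, threshold: float = 0.8) -> Decision:
    # Stage 1: rule-based filters for known abuse and circumvention patterns.
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return Decision(False, "Your request was rejected.")
    # Stage 2: automated classifier scan against prohibited categories.
    if classify(prompt) >= threshold:
        return Decision(False, "Your request was rejected.")
    return Decision(True)
```

The key design point the sketch captures is that blocking happens before any Content is generated, and a blocked request yields only a generic notice.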
3. Post-Screening
Post-screening occurs after Content has been generated, published, or otherwise made available. It includes:
- Continuous automated monitoring of Generations and other Content for violations not detected at the pre-screening stage
- Review of Content flagged by user reports submitted under our Complaint Policy or Content Moderation Policy
- Proactive audits of trending Characters, popular prompts, frequently flagged accounts, and other patterns of platform activity
- Review by human moderators of escalated or context-sensitive cases
We may edit, restrict, demote, or remove Content at the post-screening stage, including Content that was permitted at pre-screening but later identified as violating our policies. The procedure for removal and the rights available to affected users are described in our Content Removal Policy.
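As an illustration of the post-screening stage, the sketch below shows how automated monitoring and user reports might be combined to select actions on already-published Content. The flagging heuristic, report threshold, and action names are hypothetical examples, not our production logic:

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    text: str
    user_reports: int = 0  # reports filed under the Complaint Policy

def automated_flag(item: ContentItem) -> bool:
    """Stub for continuous automated monitoring of published Content."""
    return "violation" in item.text.lower()

def post_screen(items: list[ContentItem], report_threshold: int = 3) -> list[tuple[str, str]]:
    """Return (content_id, action) pairs for a batch of published Content."""
    actions = []
    for item in items:
        if automated_flag(item):
            # Clear automated detection: remove even if it passed pre-screening.
            actions.append((item.content_id, "remove"))
        elif item.user_reports >= report_threshold:
            # Context-sensitive cases go to human moderators for review.
            actions.append((item.content_id, "escalate_to_human_review"))
    return actions
```

The sketch reflects the division of labor described above: automated monitoring handles clear-cut violations, while report-driven and context-sensitive cases are escalated to human moderators.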
4. Why Both Stages Are Needed
Pre-screening is well-suited to catching prompts and inputs that match known abuse patterns before any Content is produced — preventing harm at the point of generation. But pre-screening alone is not sufficient because:
- AI generation is inherently variable; identical prompts can produce different outputs, and edge-case outputs cannot always be predicted from inputs
- Abuse patterns evolve, and post-screening allows us to identify and respond to new patterns
- Some violations only become apparent in context (for example, when comparing a Character's behavior across many interactions) rather than in any single prompt
Pre-screening blocks the most clearly prohibited inputs at the source. Post-screening is how we catch what slipped through, what we did not anticipate, and what only becomes problematic over time.
5. Limitations
Both stages are imperfect. Automated systems produce false positives (blocking or flagging Content that does not violate our policies) and false negatives (failing to detect Content that does). Human review is constrained by volume and the inherently subjective nature of some judgments. We continuously work to improve detection accuracy.
If you believe pre-screening has blocked a legitimate prompt, or that post-screening has incorrectly removed your Content, you may appeal through the process described in our Content Removal Policy.
6. Enforcement
Violations identified at either stage may result in any of the consequences described in our Content Moderation Policy and Prohibited Content Policy, up to and including permanent termination and reporting to law enforcement.
7. Updates
Our screening systems are updated over time to reflect new abuse patterns, new model behaviors, and new legal requirements. This Policy may be updated to reflect material changes in process. The date of the most recent update is shown at the top of this Policy.
8. Contact Us
For questions about this Policy:
SugarGlitch Ltd
[REGISTERED ADDRESS — TBD]
Cyprus
Email: [email protected]