Last Updated: May 5, 2026 · Next scheduled review: May 5, 2027
Performed in good faith under the framework of Article 34 of the EU Digital Services Act. Counsel review is pending before formal publication.
Scope. Chatalystar Inc. operates a paid, authentication-gated, adults-only platform that hosts (a) user-generated adult content from third-party-verified human creators and (b) generative-AI character interactions. This document covers material systemic risks arising from those services.
Methodology. Each risk is identified, supported by evidence, mapped to deployed mitigations (with hyperlinks to the specific policy or product page), and rated for residual risk after mitigation. Residual ratings are good-faith engineering judgments, not formal probabilistic estimates.
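For concreteness, the sketch below models a single register entry as the methodology describes it. The schema is illustrative only; names such as RiskEntry and review_cadence are assumptions for the example, not the internal data model.

```python
from dataclasses import dataclass
from enum import Enum

class Residual(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class RiskEntry:
    # Hypothetical register schema; field names are illustrative,
    # not Chatalystar's internal model.
    legal_basis: str        # e.g. "DSA Art. 34(1)(a)"
    description: str        # the risk statement
    evidence: list[str]     # links or document IDs supporting identification
    mitigations: list[str]  # hyperlinks to the specific policy or product page
    residual: Residual      # good-faith engineering judgment, not probabilistic
    owner: str              # residual-risk owner, e.g. "Trust & Safety"
    review_cadence: str     # "quarterly", "semi-annual", or "annual"
```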
Cadence. The assessment is reviewed at least annually and after any material change in product surface, service volume, or applicable law.
DSA Art. 34(1)(a) — illegal content
Risk that real or simulated sexual material depicting minors is uploaded to the vault, generated by an AI character, or solicited from a creator. This is a zero-tolerance category under U.S., U.K., and EU law and under the platform's own Community Guidelines.
Residual risk is meaningfully reduced by the platform's layered controls but is not zero, because no automated screening system reaches 100% recall on novel material and because adversarial actors continually probe the upload pipeline. Residual-risk owner: Trust & Safety. Mitigation review cadence: quarterly.
DSA Art. 34(1)(b) — fundamental rights, dignity, private life
Risk that intimate images of an identifiable person are uploaded without that person's consent — including 'revenge porn,' hidden-camera material, leaked private content, or non-consensual deepfakes of a real person.
Residual risk is medium. Reactive removal (24-hour SLA) is the dominant control today; proactive perceptual-hash matching against a non-consensual intimate imagery (NCII) victim database is a known mitigation gap that is on the Trust & Safety roadmap. Residual-risk owner: Trust & Safety. Review cadence: quarterly.
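To illustrate the roadmap item, the sketch below shows the general shape of perceptual-hash matching: compute a compact hash of an upload and compare it by Hamming distance to victim-reported hashes. The simple average hash and the ncii_hashes set are assumptions for the example; production deployments typically rely on more robust industry hashes such as PDQ.

```python
from PIL import Image  # Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit average hash: grayscale, downscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches_blocklist(candidate: int, blocklist: set[int], max_distance: int = 5) -> bool:
    """Flag the upload if its hash is within a small Hamming distance of any known hash."""
    return any(bin(candidate ^ known).count("1") <= max_distance for known in blocklist)

# Illustrative usage: ncii_hashes would be supplied by a victim-reported hash database.
# if matches_blocklist(average_hash("upload.jpg"), ncii_hashes):
#     route_to_human_review()  # hypothetical escalation hook
```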
DSA Art. 34(1)(a) — illegal content; TCO Reg. (EU) 2021/784
Risk that the platform is used to host or distribute terrorist content within the meaning of Regulation (EU) 2021/784, or to recruit for or coordinate violent extremism via chat or profile surfaces.
Residual risk is low for the platform's intended use. The dominant residual exposure is misuse of one-to-one chat for radicalization messaging, mitigated by the report flow and operator escalation. Residual-risk owner: Trust & Safety. Review cadence: semi-annual.
DSA Art. 28 — protection of minors; AVMSD; UK OSA
Risk that a person under 18 reaches NSFW surfaces of the platform — registers an account, views creator content, or interacts with AI characters in adult contexts.
Residual risk is medium until the EU age-verification wallet (eIDAS 2.0 / EUDI) and equivalent national schemes are integrated as supported age-verification (AV) signals. Self-attestation is a known weak link; integration with strong third-party age signals is on the Trust & Safety roadmap. Residual-risk owner: Trust & Safety + Product. Review cadence: quarterly.
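A minimal sketch of how multiple AV signals might gate adult surfaces follows. The signal tiers and the gate_adult_surface function are hypothetical; a real policy would also weigh jurisdiction, signal freshness, and appeal paths.

```python
from enum import Enum

class AVSignal(Enum):
    # Illustrative signal tiers; names are assumptions, not the platform's taxonomy.
    SELF_ATTESTATION = 1  # weakest: user-declared date of birth
    PAYMENT_CARD = 2      # card authorization weakly implies an adult account holder
    THIRD_PARTY_AV = 3    # document or facial-estimation check by an AV vendor
    EUDI_WALLET = 4       # eIDAS 2.0 age attribute, once integrated

def gate_adult_surface(signals: set[AVSignal]) -> bool:
    """Admit a session to NSFW surfaces only with at least one strong signal."""
    strong = {AVSignal.THIRD_PARTY_AV, AVSignal.EUDI_WALLET}
    return bool(signals & strong)
```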
DSA Art. 34(1)(c) — civic discourse, electoral processes
Risk that the platform is used for coordinated inauthentic behavior (CIB) — networks of fake accounts, mass impersonation of real creators, or coordinated harassment and scam campaigns.
Residual risk is low for creator-side CIB and medium for member-side scam DMs. Continued investment in member-side anti-scam heuristics is the primary mitigation gap. Residual-risk owner: Trust & Safety + Engineering. Review cadence: semi-annual.
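The sketch below indicates the kind of member-side anti-scam heuristics the gap refers to. Every pattern, feature, and threshold here is an assumption for illustration, not the production rule set.

```python
import re

# Illustrative heuristic features; patterns and weights are assumptions for the sketch.
PAYMENT_LURE = re.compile(r"(cashapp|venmo|crypto|wallet|gift\s*card)", re.IGNORECASE)
OFFSITE_LINK = re.compile(r"https?://(?!chatalystar\.example)", re.IGNORECASE)

def scam_score(message: str, account_age_days: int, identical_sends_last_hour: int) -> float:
    """Score a member DM for scam likelihood; higher is more suspicious."""
    score = 0.0
    if PAYMENT_LURE.search(message):
        score += 0.4
    if OFFSITE_LINK.search(message):
        score += 0.3
    if account_age_days < 7:
        score += 0.2
    if identical_sends_last_hour > 10:  # fan-out typical of coordinated campaigns
        score += 0.4
    return min(score, 1.0)

# A score above a tuned threshold (say 0.6) might rate-limit the sender
# and queue the thread for Trust & Safety review.
```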
DSA Art. 34(1)(b) & (c) — fundamental rights, civic discourse
Risk that members or creators are subjected to targeted harassment, hate speech against protected characteristics, or gender-based abuse via chat, profile, or content surfaces.
Residual risk is medium and concentrated in private-message harassment, where moderation is reactive-only by design. Residual-risk owner: Trust & Safety. Review cadence: quarterly.
DSA Art. 34(2) — risks specific to generative AI
Risk that AI characters (Muses, Simulated Presence) are prompted to portray minors, real people without consent, or to generate content that violates platform policy or applicable law; or that members are deceived about whether they are talking to a human or an AI.
Residual risk is low. The dominant residual exposure is novel jailbreak techniques that bypass current prompt classifiers; mitigated by red-team review and rapid rule-pack updates. Residual-risk owner: AI Safety + Trust & Safety. Review cadence: quarterly.
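As a sketch of the rapid rule-pack update path, the example below models a versioned set of deny-patterns checked before generation. The patterns and the RulePack structure are illustrative assumptions; real prompt classifiers combine ML models with rules of this kind.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class RulePack:
    """A versioned set of deny-patterns; shipping a new version is the rapid-update path."""
    version: str
    deny_patterns: tuple[re.Pattern, ...]

    def blocks(self, prompt: str) -> bool:
        return any(p.search(prompt) for p in self.deny_patterns)

# Illustrative pack; these placeholder patterns are not the platform's actual rules.
PACK_V2 = RulePack(
    version="2026.05-r2",
    deny_patterns=(
        re.compile(r"\b(under\s?age|minor)\b", re.IGNORECASE),
        re.compile(r"pretend\s+you\s+have\s+no\s+rules", re.IGNORECASE),
    ),
)

def pre_generation_gate(prompt: str, pack: RulePack = PACK_V2) -> bool:
    """Return True if generation may proceed; blocked prompts go to refusal handling."""
    return not pack.blocks(prompt)
```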
DSA Art. 34(1)(b) — privacy, fundamental rights; GDPR
Risk that users' real identities, payment data, intimate chat content, or NSFW media are exposed via breach, scraping, account takeover, or improper internal access.
Residual risk is medium and is the standard residual exposure for any platform processing intimate user data. Residual-risk owner: Privacy + Security Engineering. Review cadence: annual + after any in-scope incident.