How Chatalystar built compliance-first from launch
1. Executive summary
Chatalystar is a verified-creator adult companion platform. From launch we treated trust & safety as a product surface rather than a compliance afterthought. This white paper documents the architecture as deployed: a cryptographic identity chain that binds every upload to a real, age-verified, consenting adult; a responsive infrastructure that turns reports into action within 24 hours; and a transparent operations posture published quarterly.
Our thesis is that the median operator in this category does almost nothing, the leader was forced into compliance by a near-death event, and the regulatory environment is converging on the EU DSA / UK OSA model. A platform built compliance-first is durable; one retrofitted under enforcement pressure is not. This document is meant to be readable by regulators, journalists, payment processors, banking partners, and creators evaluating where to host.
2. Problem framing
Three risks define modern adult-content hosting: child sexual abuse material (CSAM), non-consensual intimate imagery (NCII), and identity fraud (uploaders posing as someone else). The first two are criminal harms with mandatory reporting obligations; the third is the upstream risk that enables both. A platform that cannot prove who uploaded what cannot meaningfully respond to either.
Most platforms address these by checking identity once at signup and trusting the account thereafter. The failure mode is well documented: stolen sessions, bulk uploads from compromised accounts, and post-hoc 'someone else uploaded that' defenses. The architecture in this paper closes that loop by binding each upload back to a freshly signed attestation from the verified creator.
3. Phase 1 — Identity binding chain
3.1 Veriff verification
Every Star completes third-party government-ID verification with biometric liveness through Veriff before publishing. Members are 18+; Stars are 21+. The verification record is retained in accordance with the privacy policy published at /privacy and made available under valid legal process.
3.2 Wallet binding
On verification, the Star signs an EIP-712 typed-data attestation with their wallet that binds (a) wallet address, (b) hash of the Veriff session, (c) Creator Agreement version and content hash. The attestation is anchored on-chain so its existence is independently verifiable.
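The typed-data payload can be sketched as follows. This is an illustrative shape only: the field names, type names, and domain parameters here are assumptions, not the production schema, but they show how the three bound facts (wallet, Veriff session hash, agreement version and hash) fit into an EIP-712 structure.

```python
# Illustrative EIP-712 typed-data payload for the identity binding.
# All field/type names and domain values are assumed, not the deployed schema.
binding_attestation = {
    "types": {
        "EIP712Domain": [
            {"name": "name", "type": "string"},
            {"name": "version", "type": "string"},
            {"name": "chainId", "type": "uint256"},
        ],
        "CreatorBinding": [
            {"name": "wallet", "type": "address"},          # (a) wallet address
            {"name": "veriffSessionHash", "type": "bytes32"},  # (b) Veriff session hash
            {"name": "agreementVersion", "type": "string"},    # (c) agreement version...
            {"name": "agreementHash", "type": "bytes32"},      # ...and content hash
        ],
    },
    "primaryType": "CreatorBinding",
    "domain": {"name": "Chatalystar", "version": "1", "chainId": 1},
    "message": {
        "wallet": "0x0000000000000000000000000000000000000000",
        "veriffSessionHash": "0x" + "00" * 32,
        "agreementVersion": "1.0",
        "agreementHash": "0x" + "11" * 32,
    },
}
```

Because the signature covers the agreement's content hash, a later edit to the Creator Agreement cannot be silently back-dated onto an existing attestation.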
3.3 Vault PIN
An additional out-of-band second factor, separate from wallet keys, is required to enter the upload vault. This protects against device theft and key compromise scenarios where a wallet is unlocked but the human is not present.
3.4 Per-session attestation
Each vault session emits a fresh signed attestation binding wallet → identity → agreement → session ID → timestamp. The attestation lives only for the duration of the session; replayed attestations are rejected.
3.5 Per-upload binding
Every uploaded asset embeds a signed reference to the active session attestation at the moment of upload. The asset metadata row stores the session ID, the asset hash, and an HMAC over the prior row in the creator's history.
3.6 HMAC chain of custody
Each upload row is HMAC-linked to the previous row. Tampering with any earlier row (selectively deleting an embarrassing upload, altering a timestamp, swapping the asset reference) breaks the chain. An external auditor with access to the relevant verification keys can verify the chain end-to-end without trusting the platform's own assertions.
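A minimal sketch of the chain construction and verification, assuming HMAC-SHA-256 per Annex A. The row fields and the genesis value are illustrative; the production row carries additional metadata, but the linking logic is the same.

```python
import hmac
import hashlib

def row_mac(key: bytes, prev_mac: bytes, session_id: str, asset_hash: str) -> bytes:
    """HMAC over the previous row's MAC plus this row's fields (fields are illustrative)."""
    msg = prev_mac + session_id.encode() + bytes.fromhex(asset_hash)
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_chain(key: bytes, rows: list[dict]) -> bool:
    """Recompute every link; a tampered, deleted, or reordered row breaks verification."""
    prev = b"\x00" * 32  # assumed genesis value for the first row
    for row in rows:
        expected = row_mac(key, prev, row["session_id"], row["asset_hash"])
        if not hmac.compare_digest(expected, row["mac"]):
            return False
        prev = row["mac"]
    return True
```

Note that because each MAC covers the previous MAC, deleting a middle row invalidates every row after it, which is what makes selective deletion detectable.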
4. Phase 2 — Responsive infrastructure
4.1 Public reporting routes
Five inboxes (report@, trust@, le@, dmca@, and 2257@) are staffed, each with a documented SLA and guidance on what to include. A structured report form is available at /trust/report; URLs, screenshots, reporter relationship, and reason are captured and routed to the appropriate triage lane.
4.2 Triage queue & SLA
Reports route to a moderation queue with severity-based prioritization. Severity-1 reports (suspected CSAM, NCII, imminent harm) target acknowledgement within 4 hours and action within 24 hours. Other categories target acknowledgement within 24 hours.
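The severity routing above reduces to a small lookup. The category names below are illustrative stand-ins for the production taxonomy; the SLA hours are the ones stated in this section.

```python
# Illustrative triage routing. Category names are assumed; the acknowledgement
# and action targets (in hours) mirror the SLAs stated in section 4.2.
SEVERITY_1 = {"csam", "ncii", "imminent_harm"}

def triage(category: str) -> dict:
    """Return severity tier and SLA targets for a report category."""
    if category in SEVERITY_1:
        return {"severity": 1, "ack_hours": 4, "action_hours": 24}
    # Other categories: 24h acknowledgement; action SLA is set per lane.
    return {"severity": 2, "ack_hours": 24, "action_hours": None}
```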
4.3 NCMEC / IWF / Stop NCII integrations
Sightengine performs upload-time screening for apparent minors. Confirmed matches are reported to NCMEC under 18 USC § 2258A. The Stop NCII hash list is checked at upload to block re-circulation of intimate imagery that the individuals depicted have submitted for removal. The IWF hash list is integrated for known-CSAM blocking.
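The upload-time gate can be sketched as a simple ordered check. Two simplifications to flag: the real lists are distributed under access agreements (the sets below are stand-ins), and production matching typically uses perceptual hashes such as PhotoDNA or PDQ rather than the plain SHA-256 shown here.

```python
import hashlib

# Stand-ins for hash lists distributed under access agreements (IWF, Stop NCII).
# Real matching is typically perceptual (PhotoDNA/PDQ), not plain SHA-256.
iwf_hashes: set[str] = set()
stopncii_hashes: set[str] = set()

def upload_gate(asset_bytes: bytes) -> str:
    """Return the routing outcome for an uploaded asset."""
    digest = hashlib.sha256(asset_bytes).hexdigest()
    if digest in iwf_hashes:
        return "block_and_report"    # hard-block + CyberTipline submission
    if digest in stopncii_hashes:
        return "block_and_review"    # hard-block + flagged-creator review
    return "continue_screening"      # proceed to Sightengine screening
```

Ordering matters: the known-CSAM check runs first because its outcome carries a mandatory reporting obligation, not just a block.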
4.4 Quarterly transparency reports
Anonymized counts are aggregated and published at quarter close: reports received per category, action rates, mean response times, appeal outcomes, NCMEC reports filed, and hash-match counts. The pipeline writes to a read-only metrics view that the public /trust page consumes.
4.5 Law enforcement liaison
le@chatalystar.com is monitored by trained personnel. Subpoenas, preservation requests, and emergency disclosure requests have documented response times. Preservation requests are honored before any contested deletion proceeds.
5. Threat model & abuse cases
5.1 Adversary profiles
- Opportunistic uploader: tries to upload prohibited material, expects loose moderation. Defeated by Sightengine + minor screening + manual review.
- Identity-fraud actor: poses as someone else; tries to use a stolen or purchased account. Defeated by per-session attestation + vault PIN.
- Coercion actor: pressures a verified creator to upload non-consensual material of a third party. Defeated by per-upload binding + reporting routes + 24h SLA.
- Repeat-offender: attempts to re-upload removed material. Defeated by hash-list integrations (IWF, Stop NCII).
- Plausible-deniability actor: uploads material then claims 'someone else used my account'. Defeated by per-session attestation that binds session entry to the verified human.
5.2 Out-of-scope adversaries
Nation-state-level cryptographic attacks against EIP-712 signature validity, the wallet provider, or NIST-approved HMAC primitives are out of scope for this document. So are platform-internal insider attacks against the logging substrate; we mitigate with read-only metrics views, audit trails, and least-privilege access, but do not claim cryptographic immunity from a fully-rooted operator.
6. Standards mapping
The /trust page hosts the live standards table. The summary: every operative requirement of the EU DSA notice-and-action and transparency duties (Articles 14, 16, 17, 24, 34), every obligation under 18 USC § 2257 / § 2257A and § 2258A, every operative duty under the UK Online Safety Act for adult-content services, and every relevant US state age-verification statute is implemented. A row that says 'in deployment' is not eligible to ship to the public table; only live implementations are listed.
7. Verification approach
Independent verification of these claims is supported by three mechanisms:
- On-chain anchoring of session attestations means the existence and timing of attestations can be independently verified without trusting Chatalystar.
- The HMAC chain-of-custody can be reconstructed from any creator's upload history; an auditor with read access to a creator's metadata rows and to the relevant HMAC verification keys can verify integrity end-to-end.
- Quarterly transparency reports are reproducible from the responsive-infrastructure pipeline; the methodology is documented in Annex D and category definitions are stable across reports.
Outside-counsel legal review of public claims on /trust is recommended before each major content update; we treat that as operational rather than architectural and budget for it separately.
8. Roadmap
Phase 3 is hardware-rooted C2PA content provenance. We track the C2PA specification and hardware ecosystem maturity but do not claim a launch date; the technology is not yet practical at commercial scale for the device population our creators use. We document its eventual landing on the /trust landscape animation as the dashed 'theoretical maximum' to keep the comparison honest.
Other tracks under active design but not yet shipped include cross-platform takedown coordination through industry working groups, formal Tech Coalition membership, and localized white-paper translations.
Annex A — Cryptographic primitives
- Wallet signatures: EIP-712 typed-data v4 (secp256k1).
- Per-row HMAC: HMAC-SHA-256 with a platform key rotated at 90-day intervals; previous keys retained for verification.
- Asset hashing: SHA-256 over the canonical bytes of the uploaded asset.
- On-chain anchoring: per-session attestation Merkle root anchored on a public EVM-compatible chain at session close.
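The Merkle-root computation for session close can be sketched as below. The pairing convention (duplicate the last node on odd-length levels) is an assumption for illustration; the deployed tree may use a different convention, and the leaves here stand in for per-upload attestation bytes.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Merkle root over per-upload attestations in a session.
    Assumed convention: hash each leaf, then pairwise-hash levels,
    duplicating the last node when a level has odd length."""
    if not leaves:
        return _h(b"")  # assumed sentinel for an empty session
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Anchoring only the root keeps on-chain cost constant per session while still letting any single upload's inclusion be proven with a logarithmic-size path.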
Annex B — Hash-list integrations
- IWF (Internet Watch Foundation) hash list — known CSAM. Hard-block at upload + NCMEC report on confirmed match.
- NCMEC hash sharing — known CSAM. Hard-block at upload + automated CyberTipline submission.
- Stop NCII — non-consensual intimate imagery hashes contributed by the individuals depicted. Hard-block at upload + flagged-creator review.
Annex C — Article 34 risk register
EU DSA Article 34 requires very large online platforms (VLOPs) to conduct annual systemic-risk assessments. We voluntarily maintain an equivalent register. The current register identifies five systemic risks (CSAM, NCII, identity fraud, coordinated inauthentic behavior, deceptive AI persona use) and tracks each to specific deployed mitigations. The register is reviewed quarterly and re-published with the transparency report.
Annex D — Comparison methodology
Composite scores in the /trust comparison chart are derived from five factors weighted equally: identity binding (does the platform bind uploads to a real person?), per-upload accountability (can the platform attribute an individual asset to an uploader?), response SLA (median time from report to action), NCMEC integration (is reporting automated and mandatory?), and transparency (does the platform publish quantitative moderation data?). Each factor scores 0-20; the composite is the sum.
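The composite is a plain sum of five 0-20 factors, which the following sketch makes explicit (factor key names are illustrative labels for the five factors defined above):

```python
# Annex D composite: five equally weighted factors, each scored 0-20, summed.
# Factor key names are illustrative labels for the factors defined in the text.
FACTORS = ("identity_binding", "per_upload_accountability",
           "response_sla", "ncmec_integration", "transparency")

def composite(scores: dict[str, int]) -> int:
    """Sum the five factor scores; the maximum composite is 100."""
    assert set(scores) == set(FACTORS), "exactly the five defined factors"
    assert all(0 <= v <= 20 for v in scores.values()), "each factor scores 0-20"
    return sum(scores.values())
```

Equal weighting is the methodological choice being disclosed here: no single factor (e.g. transparency reporting) can compensate for a zero on identity binding by more than 20 of 100 points.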
Scores for non-Chatalystar platforms are derived from those platforms' own published policies and from observable behavior at content-upload, takedown, and reporting moments. Where a platform's posture has changed materially (e.g. Pornhub's December 2020 reset), the post-change posture is scored and labeled.