Fanvue

Scaling Moderation Transparency at Fanvue

Building an account health and user warnings system for creators, compliance, and trust at scale.

Creator Economy

AI

Compliance & Moderation

Date:

Sep 2025 - Dec 2025

Project Type:

Platform system · Trust & Safety · Creator Tools

Role:

Senior Product Designer

Team:

CEO, PM, Head of Support, Moderation & Compliance Lead, 2 Engineers, Senior Product Designer, Lead Product Designer

Responsibilities:

Product & UX Strategy

System Design

Information Architecture

Trust & Safety UX

Cross-functional Alignment

Why did we build this?

As Fanvue scaled, content moderation moved from an edge case to a core platform risk. Creators were informed of violations only via email. In practice, this led to:

Missed or disputed warnings

Repeated violations

High support load (“I didn’t see the email”)

Escalation-heavy enforcement (temporary login locks, manual follow-ups)

The platform needed a moderation system that could:

Clearly communicate violations to creators

Support cumulative enforcement rules

Scale across moderation, support, and compliance teams


Impact & Achievements

1 Source of truth

for moderation actions

Clear separation

between user-visible warnings and internal admin notes

Consistent enforcement

of cumulative warning rules

Reduced ambiguity

between creators, moderators, and support

Fanvue in Numbers

Fanvue is a rapidly growing creator economy platform headquartered in London, enabling paid exclusive content distribution and innovation in AI-powered creator tools. In early 2026, the company announced a $22M Series A and a $100M+ annualised run rate, signaling strong market traction and, with it, increasing complexity in content moderation and compliance.


200k+

Creators on Fanvue

5M+

Monthly Unique Visitors

17M+

Monthly Website Visits

$100M

ARR

Project Goals

Make moderation status visible and understandable in-product

Reduce reliance on email-only communication

Enable creators to self-serve before contacting support

Support cumulative warning logic (e.g. 3 warnings = ban)

Align creator-facing UX with internal moderation workflows

Create a scalable system that could adapt to future legal changes

Outcomes

Centralized Account Health experience for creators

User Warnings system replacing the previous strike logic

Visibility of warning type, severity, account status, and escalation risk

Consistent enforcement across content categories (copyright, AI, safety, fraud)

Role & Ownership

I led:

  • Problem framing and system definition

  • UX strategy for Account Health and Warnings

  • Translation of compliance policy into product UX

  • Alignment across Product, Trust & Safety, Support, and Engineering

  • Core information architecture and interaction models

After this foundation was established and V1 of the feature was built, the project continued into validation and refinement, with usability testing and final adjustments completed after handoff.

The problem space

Moderation information was fragmented:

Emails for creators

Admin Notes for internal teams

No shared visibility

No cumulative warning clarity

Creators did not know:

  • How many warnings they had

  • Whether a warning was final

  • What risk they were approaching

Moderators lacked:

  • Consistent tracking

  • Clear user-facing representations of enforcement

Fanvue updated its moderation policy in response to:

  • New copyright regulations

  • AI-generated and deepfake content risks

  • Online safety and responsible content creation standards

Solution: Account Health & User Warnings system

The system is built around three core components (a simplified data sketch follows the lists below):

Account Health (creator-facing)

  • Overall account status

  • Active warnings and severity

  • Escalation risk

  • Clear explanation of why action was taken

User Warnings

  • Date issued

  • Violation type

  • Severity

  • Whether it is a final warning

  • Account risk status

Clear status communication

  • In review

  • Removed

  • Approved
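
A minimal sketch of how such a warning record, its content status, and the cumulative escalation rule could be modelled. The field names, types, and threshold below are illustrative assumptions for this case study, not Fanvue's actual schema:

```typescript
// Illustrative sketch only: field names, types, and the threshold are
// assumptions, not Fanvue's production data model.
type ViolationType = "copyright" | "ai_content" | "safety" | "fraud";
type Severity = "low" | "medium" | "high";
type ContentStatus = "in_review" | "removed" | "approved"; // clear status communication
type AccountStatus = "good_standing" | "at_risk" | "final_warning" | "banned";

interface UserWarning {
  id: string;
  issuedAt: Date;               // date issued
  violationType: ViolationType; // which policy area was violated
  severity: Severity;
  contentStatus: ContentStatus; // in review, removed, or approved
  isFinalWarning: boolean;      // whether it is a final warning
  policyReference: string;      // the rule shown to the creator, with guidance
}

// Hypothetical cumulative rule, e.g. 3 active warnings leads to a ban.
const WARNING_LIMIT = 3;

function deriveAccountStatus(activeWarnings: UserWarning[]): AccountStatus {
  if (activeWarnings.length >= WARNING_LIMIT) return "banned";
  if (activeWarnings.some((w) => w.isFinalWarning)) return "final_warning";
  if (activeWarnings.length > 0) return "at_risk";
  return "good_standing";
}
```

Deriving account status from the same warning records creators can see is what keeps Account Health, internal moderation tooling, and support answers consistent.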

Education & prevention

The goal was behavior correction, not punishment-first UX.

Policy guidance is embedded contextually, explaining:

  • Why content was flagged

  • What rule applies

  • How to avoid future violations

Internal alignment (moderation & support)

A key goal of this project was reducing friction and ambiguity between internal teams, not just improving creator-facing UX.

Previously, moderation actions, internal notes, and creator communications lived in separate systems, forcing Support to act as an intermediary.

The Account Health system aligned moderation, compliance, and support around a shared source of truth, ensuring that actions taken internally were reflected clearly and consistently in the creator experience.

Moderators

  • Log User Warnings (creator-visible)

  • Use Admin Notes for investigations, fraud, AML, or legal context

Support

  • Access full context directly from Account Health

  • No longer act as the “translator” between systems
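
Continuing the earlier sketch, one hypothetical way to enforce this separation at the data level is to keep Admin Notes in a distinct type that the creator-facing Account Health view never reads. Again, names and shapes here are illustrative assumptions, not the shipped implementation:

```typescript
// Hypothetical illustration of the visibility split; names are assumptions.
// UserWarning and deriveAccountStatus refer to the earlier sketch.
interface AdminNote {
  id: string;
  createdAt: Date;
  author: string;                                        // moderator or compliance analyst
  category: "investigation" | "fraud" | "aml" | "legal";
  body: string;                                          // never rendered to the creator
}

interface AccountModerationRecord {
  userWarnings: UserWarning[]; // creator-visible
  adminNotes: AdminNote[];     // internal only
}

// The creator-facing Account Health view is built exclusively from
// UserWarnings, so investigative context cannot leak into the UI.
function toAccountHealthView(record: AccountModerationRecord) {
  return {
    status: deriveAccountStatus(record.userWarnings),
    warnings: record.userWarnings,
  };
}
```

Keeping the split in the data model, rather than in UI logic alone, is what lets Support read the same record moderators write without acting as a translator.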

Key design decisions

This project required making deliberate trade-offs between transparency, compliance, and operational scalability.

Why replace strikes with warnings

  • “Strikes” were perceived as opaque and punitive

  • Warnings allowed clearer escalation and education

Why separate User Warnings and Admin Notes

  • Legal and investigative information should not be user-visible

  • Transparency must still be controlled and intentional

Why a centralized Account Health view

  • Inline indicators alone lacked context and history

  • Creators needed to understand patterns, not incidents

Feature Launch

The system launched as a platform-level capability, not a feature toggle.

It introduced:

Creator-facing Account Health and Warnings views

Internal moderation tooling aligned with compliance rules

A shared mental model across Product, Trust & Safety, and Support

Learnings

  • Moderation is not an edge case — it is a core product system

  • Transparency reduces conflict more effectively than enforcement alone

  • UX must sometimes prioritize legal clarity over simplicity

  • Designing for scale means designing for org alignment, not just users

Rod Martinez © 2025
