Impact & Achievements
Project Goals
Make moderation status visible and understandable in-product
Reduce reliance on email-only communication
Enable creators to self-serve before contacting support
Support cumulative warning logic (e.g. three warnings lead to a ban; see the sketch after this list)
Align creator-facing UX with internal moderation workflows
Create a scalable system that could adapt to future legal changes
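To make the cumulative warning logic concrete, here is a minimal TypeScript sketch. The threshold, type names, and function are assumptions for illustration only, not Fanvue's actual rules or code.

// Illustrative only: assumes a simple "N active warnings triggers a ban" rule.
type Severity = "low" | "medium" | "high";

interface Warning {
  issuedAt: Date;
  severity: Severity;
  isFinal: boolean; // a single final warning can trigger enforcement on its own
}

// Hypothetical threshold: three active warnings lead to a ban.
const BAN_THRESHOLD = 3;

function shouldBan(activeWarnings: Warning[]): boolean {
  return (
    activeWarnings.length >= BAN_THRESHOLD ||
    activeWarnings.some((w) => w.isFinal)
  );
}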
Outcomes
Centralized Account Health experience for creators
User Warnings system replacing the previous strike logic
Each warning capturing its type, severity, account status, and escalation risk
Consistent enforcement across content categories (copyright, AI, safety, fraud)
Role & Ownership
I led:
Problem framing and system definition
UX strategy for Account Health and Warnings
Translation of compliance policy into product UX
Alignment across Product, Trust & Safety, Support, and Engineering
Core information architecture and interaction models
After this foundation was established and V1 of the feature was built, the project continued into validation and refinement, with usability testing and final adjustments completed after handoff.
The problem space
Moderation information was fragmented:
Emails for creators
Admin Notes for internal teams
No shared visibility
No clarity on cumulative warnings
Creators did not know:
How many warnings they had
Whether a warning was final
What risk they were approaching
Moderators lacked:
Consistent tracking
Clear user-facing representations of enforcement
Fanvue updated its moderation policy in response to:
New copyright regulations
AI-generated and deepfake content risks
Online safety and responsible content creation standards
Solution: Account Health & User Warnings system
The new system is organized around the following components:
Account Health (creator-facing)
Overall account status
Active warnings and severity
Escalation risk
Clear explanation of why action was taken
User Warnings
Date issued
Violation type
Severity
Whether it is a final warning
Account risk status
Clear status communication
In review
Removed
Approved
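Taken together, the creator-facing fields and statuses above suggest a simple data model. The sketch below is illustrative only; the type names, enum values, and shapes are assumptions rather than Fanvue's production schema.

// Illustrative data model; assumed names and shapes only.
type ContentStatus = "in_review" | "removed" | "approved";
type ViolationType = "copyright" | "ai_deepfake" | "safety" | "fraud";
type WarningSeverity = "low" | "medium" | "high";
type AccountStatus = "good_standing" | "at_risk" | "final_warning" | "banned";

interface UserWarning {
  issuedAt: Date;               // date issued
  violationType: ViolationType;
  severity: WarningSeverity;
  isFinalWarning: boolean;      // whether it is a final warning
  contentStatus: ContentStatus; // in review, removed, or approved
}

interface AccountHealth {
  status: AccountStatus;           // overall account status shown to the creator
  activeWarnings: UserWarning[];
  escalationRisk: WarningSeverity; // how close the account is to further enforcement
}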
Education & prevention
Policy guidance is embedded contextually, explaining:
Why content was flagged
What rule applies
How to avoid future violations
Internal alignment (moderation & support)
Previously, moderation actions, internal notes, and creator communications lived in separate systems, forcing Support to act as an intermediary.
The Account Health system aligned moderation, compliance, and support around a shared source of truth, ensuring that actions taken internally were reflected clearly and consistently in the creator experience.
Moderators
Log User Warnings (creator-visible)
Use Admin Notes for investigations, fraud, AML, or legal context
Support
Access full context directly from Account Health
No longer act as the “translator” between systems
Key design decisions
This project required making deliberate trade-offs between transparency, compliance, and operational scalability.
Why replace strikes with warnings
“Strikes” were perceived as opaque and punitive
Warnings allowed clearer escalation and education
Why separate User Warnings and Admin Notes
Legal and investigative information should not be user-visible
Transparency must still be controlled and intentional
Why a centralized Account Health view
Inline indicators alone lacked context and history
Creators needed to understand patterns, not incidents
Feature Launch
The launch introduced:
Creator-facing Account Health and Warnings views
Internal moderation tooling aligned with compliance rules
A shared mental model across Product, Trust & Safety, and Support