Case Studies

Abuse Moderation at Bumble

A project I led to revamp the user (moderator) experience and enhance the developer experience for Bumble's most essential moderation application.

This project helped underpin the safety of millions of daily active users on Bumble’s multiple applications across web, iOS and Android.


I was given the reins as Project Lead to oversee both UX and Frontend. Tasks spanned the breadth of both disciplines (below), all while managing multiple stakeholders from the product, data science and legal teams. I was able to fully showcase my skillset across engineering, design and leadership.

Project lead

  • Communicate with multiple stakeholders
  • Taskboard management
  • Document decisions
  • Identify and alleviate blockers
  • Manage project timeline and deliverables
  • Responsible for deadlines and expectations

UX Lead

  • Define UX strategy
  • Audit existing UI
  • Recruit sponsor users (moderators)
  • Showcase designs and prototypes
  • Gather user feedback
  • Ideate and iterate on feedback
  • Create high-fidelity designs
  • Create internal component library in Figma

Frontend Co-lead

  • Create project scaffold
  • Create custom styling theme
  • Implement component library in React
  • Pixel perfect implementation of designs
  • State management and API layer
  • Write tests and documentation
  • Ensure accessibility standards


Bumble has millions of daily active users across its multiple dating and lifestyle apps on web, iOS and Android platforms. As a female-founded company and app, Bumble places huge emphasis on member safety. To deliver on this, we created a suite of moderation tools that is used by up to 1,000 moderators.

Potentially malicious behaviour is either member-submitted or automatically detected by our Deception Detector AI model. A portion of these cases are automatically sanctioned because they have a high ML probability score, but the majority of cases need to be moderated by a person.

It is then up to a member of Bumble's Member Safety Team to decide whether the case breaks the company's community guidelines and to apply a sanction: block, warn or no sanction. This is done on our in-house Abuse Moderation application. It's an essential cog in the system that keeps millions of Bumble members safe, and it has many useful features that help moderators process dozens of cases each per hour.


Unfortunately, out of respect for confidentiality agreements, I can't include images and various other proprietary information in this case study.


While the Abuse Moderation application served its purpose relatively effectively for our moderation team, there were several issues relating to both user and developer experience.

These pain points were identified via feedback exercises with both the moderation team, who used the application on a daily basis, and the development team, who worked on the codebase regularly. They were then used to align on and define goals and definitions of success for the project.

User experience problems

Manual identification of reported content

When a moderator loaded a new case, the only upfront information they could gather was what type of content had been reported (e.g. a photo or message). It was up to them to manually find the reported content within the reported user's profile in order to take action.

Too many clicks

Moderators were required to manually click through the UI (e.g. paginating through a conversation, applying a sanction by clicking a button). Moderator throughput was limited by the speed of a trackpad or mouse.

Inconsistent design system and patterns

The existing UI was built using Bootstrap v2, however no designs were ever created. UI/UX design had always been an afterthought.

Lack of power-user features

No power-user features existed. However, during the research phase, we found out that a savvy moderator had created their own Chrome extension to add keyboard shortcuts to their workflow. Moderators work as contractors and are paid on a per-case basis, so features like these mean not just productivity, but profit.
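As an illustration of how lightweight this kind of power-user feature can be, here is a minimal sketch of a shortcut-to-action mapping. All names here (ModerationAction, keyToAction, the key bindings themselves) are hypothetical, not the shortcuts we actually shipped:

```typescript
// Hypothetical single-key shortcuts for the moderation workflow.
// The action names and key choices are illustrative only.
type ModerationAction = "block" | "warn" | "no-sanction" | "next-case";

const shortcuts: Record<string, ModerationAction> = {
  b: "block",
  w: "warn",
  n: "no-sanction",
  j: "next-case",
};

// Resolve a pressed key to a moderation action (undefined if unbound).
function keyToAction(key: string): ModerationAction | undefined {
  return shortcuts[key.toLowerCase()];
}

// In a React app this would be wired to a window "keydown" listener
// inside a useEffect hook, dispatching the mapped action.
```

Keeping the mapping as a plain lookup table makes the bindings trivial to document, test and later expose as user-configurable settings.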

Developer experience problems

Stagnating frontend stack

The frontend tech stack was built using now-legacy web standards, libraries and frameworks: Backbone.js, jQuery and Bootstrap v2. Over time this led to tech debt, alongside deprecated and unmaintained code in the codebase.

Additionally, it meant undesired churn for otherwise simple bugs and feature work. A new feature that would, for example, take three days to implement end-to-end with a modern UI framework was taking five days with the legacy framework.

This legacy tech stack issue was mirrored across other applications within the Member Safety tool suite, each with a differing stack. There was no alignment, and thus constant context switching, no defined standards and few opportunities for reusability.

Context shifting between codebases

It was also difficult for existing engineers within the company to context shift into the project to help with work. Many were not familiar with the legacy tech stack and thus needed to (re)familiarise themselves. The same applied to onboarding new engineers to the team/company.

Quality assurance workflow

Although the codebase had end-to-end testing integrated into its CI/CD workflow, there were no unit tests. Additionally, the end-to-end tests were written and maintained by a separate QA team. This created additional churn and a dependency on another team in order to release features and bug fixes.

Lack of componentisation, flexibility, reusability and reactivity

Due to the limitations of Backbone.js, we found ourselves copy-pasting related fragments of UI code. Modern component-based UI frameworks (e.g. React and Vue) are architected around components, which let developers build flexible and reusable pieces of UI. Performant state management with reactivity was also difficult to integrate.


Increase moderation throughput by 20%

The Member Safety Team measures its success by the number of cases it can complete in a given time period, avoiding backlogs. These figures are presented directly to the CEO and C-suite on a monthly basis.

Identify reported content in one click

Removing the need for moderators to manually identify reported content was key to improving the average time-to-sanction (TTS) metric.

Increase accuracy and quality of sanctions

Providing as much context as possible to moderators will enable them to make the most accurate and informed sanction decision.

Increase moderator productivity

Opportunities to supercharge the moderation workflow were being missed. Power-users were crying out for features that would make them more productive and more profitable. It's a win-win for both parties.

Decrease future project bootstrap time

Looking to the future, we knew that other products within the Member Safety tool suite would need a similar treatment to Abuse Moderation. We wanted to standardise and define an opinionated tech stack going forward to eliminate the pain points of context switching and onboarding.

Integrate seamlessly into existing backend

With a backend-heavy presence within the team, the backend codebase of Abuse Moderation was mature and well engineered. We wanted to integrate the green-field frontend seamlessly into that existing backend. Essentially, we didn't want to change much on the backend or take away from the backend capacity of the team.


Audit of existing UI

This was essentially a fat-trimming exercise: a breakdown of the layout, components and features, as well as the information architecture. The idea was to first isolate these elements and define their relationships, purpose, relevance and importance to the UI/UX.

These isolated elements were used as part of the user interview exercise to help gauge user sentiment at both a micro and macro level.

User observation and interviews

In order to truly empathise with users (moderators), we needed to observe them using the application, ask clarifying questions and gather general feedback. This is an essential stage in the whole design process because, as the saying goes, "you are not the user". We needed to clarify any assumptions and verify any preconceptions we may have had so that, going forward, we could build the right thing and then build the thing right.

Ideation, playback and iteration

With a wealth of feedback and understanding at our backs, and the main user experience problems identified (above) — I was able to begin design explorations in Figma.

Through communication with the other Frontend Lead, we made the decision to use MUI as the component library for the application. On the design side, this meant we could purchase and make use of the MUI for Figma UI kit, which allowed me to move quickly towards high-fidelity mockups. Slightly unusual, but it worked for our process this time.

Regular playbacks were scheduled with stakeholders, including separate playbacks for sponsor users (moderators). Feedback was gathered and documented — and used to iterate in the next phase of design or indeed development.

Figma component library

This actually ties into the developer experience goal of being able to quickly bootstrap new moderation projects. Essentially we wanted to create common and standardised components, UX patterns and user flows across our new look future projects. And that starts at the design phase.

The idea was to (first) create a set of reusable layouts and templates that designers could pull from and subsequently developers too (in code). From a UX perspective, this also creates familiarity around things like look/feel, navigation, discoverability etc. Then do the same but at a micro level for components.

The end goal was an almost drag-and-drop experience for building future moderation projects, where designers (heck, even Figma-savvy developers or PMs) could create attractive, functional and consistent UIs with ease.


Inspired by popular scaffold NPM packages such as create-react-app and create-next-app, we wanted to create our own internal scaffold package to quickly spin up a new application for our Member Safety team — create-member-safety-app.

The frontend team came together and collectively considered and assessed the potential scaffold components of the project.

Scaffold components to consider

  • UI framework
  • Component library
  • Component workshop
  • Type safety
  • API layer
  • State management
  • Styling
  • Unit testing
  • End-to-end testing
  • Error handling
  • Authentication
  • Linting
  • Bundling
  • Observability
  • Deployment

As you can see, there were many components to consider — but we also wanted to keep the scaffold as lean as possible. And so we opted to extend create-react-app as opposed to rolling our own scaffold tool.

We did, however, configure and standardise a few components to align the team and minimise the effect of context shifting.
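As a sketch of how the developer-facing entry point could work: create-react-app supports custom templates published as cra-template-* packages, so an internal scaffold can be distributed as a template rather than a hand-rolled CLI. The template name below is hypothetical:

```shell
# Spin up a new Member Safety frontend from a shared internal template.
# "member-safety" is a hypothetical template name, resolving to a
# package such as cra-template-member-safety on the internal registry.
npx create-react-app abuse-moderation --template member-safety
```

This keeps the scaffold lean: the team maintains only the template package, while create-react-app continues to own the build tooling underneath.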

Scaffold configuration

React for the UI framework

Pretty straightforward from an organisational standpoint. Bumble as a company was all-in on React, so every frontend engineer at the company was comfortable working in the ecosystem.

TypeScript for type safety

A similar decision to choosing React above. Most of Bumble's frontend codebase is already typed with TypeScript, and we as a team very much enjoy the developer experience with it.

MUI for the component library

As these are non-customer-facing applications, our priority wasn't to build highly custom, brand-identifiable components. We wanted something robust, extensive, attractive, well maintained and well documented; MUI ticked all the boxes for us. It also has a fantastic Figma UI kit to help align our design system with our component system.

RTK Query for the API layer

We compared quite a few libraries in this category, and it came down to RTK Query versus React Query. Ultimately we chose RTK Query due to its integration with Redux Toolkit as a state management tool. They are siblings, optimised for each other.

Redux Toolkit for state management

A controversial pick, because of the rise of the Context API in React. We did actually allow consumers to opt out of this one.

But when absolutely needed, Redux Toolkit integrates really nicely with RTK Query. We found the new auto-generated React hooks to be a nice developer experience too.

Jest and React Testing Library for unit testing

This was simply inherited from extending create-react-app. A well-trusted, well-maintained and well-documented testing setup.
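For flavour, a unit test in this setup might look like the following. The SanctionButtons component and its props are invented for the example, not the real implementation:

```typescript
// SanctionButtons.test.tsx — hypothetical component and test for illustration.
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";

// A toy component standing in for the real sanction controls.
function SanctionButtons({ onSanction }: { onSanction: (s: string) => void }) {
  return (
    <div>
      <button onClick={() => onSanction("block")}>Block</button>
      <button onClick={() => onSanction("warn")}>Warn</button>
    </div>
  );
}

test("clicking Block applies the block sanction", () => {
  const onSanction = jest.fn();
  render(<SanctionButtons onSanction={onSanction} />);
  fireEvent.click(screen.getByRole("button", { name: "Block" }));
  expect(onSanction).toHaveBeenCalledWith("block");
});
```

Querying by accessible role, as React Testing Library encourages, also doubles as a lightweight check on the accessibility standards goal above.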

Cypress for end-to-end testing

A well-established player in the industry with a good developer experience. Because we would be shifting to writing our own end-to-end tests, we were looking for the best testing experience for frontend engineers, not necessarily for QA engineers.
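An end-to-end spec in this setup might read as below. The route and data-testid selectors are hypothetical, not the application's real ones:

```typescript
// cases.cy.ts — hypothetical Cypress spec; routes and selectors are illustrative.
describe("case moderation flow", () => {
  it("loads a case and applies a sanction", () => {
    cy.visit("/cases/next");
    // The reported content should be surfaced upfront, not hunted for manually.
    cy.get("[data-testid=reported-content]").should("be.visible");
    cy.get("[data-testid=sanction-block]").click();
    cy.contains("Sanction applied").should("exist");
  });
});
```

Owning specs like this in the frontend repo removes the cross-team dependency that the old QA-maintained suite created.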


Unfortunately, I left the company before release, so I can't speak to the final impact and outcomes, but I can talk about how they were planned and would've been measured.

Impact and outcomes

The monitoring and analytics infrastructure for Member Safety applications already existed in the backend, so we were able to quantitatively measure success straight away. The metrics that we cared about related back to the goals we had set at the start of the project.

The data science team within the Member Safety team has a whole dashboard of metrics, with alerts, which helps with monitoring and measuring moderation functions across the company.


Improved moderation throughput

This is a metric that we looked at from an organisational (macro) scale as opposed to a per-moderator (micro) scale, due to variations in moderator productivity and fluctuations in member activity. Certain periods (days/weeks) can experience more or less activity and thus more or fewer reports to moderate.

As mentioned, moderators are paid on a per-case basis as contractors, and from an organisational perspective we just want as many cases completed as possible. Micro-analysing moderator throughput wasn't a priority.

It's important when comparing metrics before and after to consider variables that have changed. For example, it would be unwise to compare monthly metrics of December to January, because the holiday season is traditionally an outlier.

For measuring impact and outcomes here, with our goal of a 20% increase in moderation throughput in mind, it would've been wise to compare moderation throughput on a rolling monthly basis per monthly active users, as well as the average moderation queue time (i.e. how long a case waits in the moderation queue before being moderated).

Improved accuracy and quality of sanctions

Measuring accuracy and quality when it comes to sanctions can be difficult because we process hundreds of thousands of reports each month. It's not possible to re-moderate and quality-assess manually.

What we can look at from a metric-analysis perspective is the number of disputed sanctions per 1,000 sanctions, and also the number of repeat offenders when it comes to reports against them. We want to keep these types of metrics as low as possible.


Improved moderation experience

When your users rely on a product or service to make a living — user experience matters. Throughout the process we liaised heavily with our sponsor users to research, interview, playback designs and development, gain insights and feedback etc.

With a project of this type, it’s never truly finished. There is always feedback to be gained, features to add and bugs to fix. It’s the same constant albeit slower iteration cycle that was used during design and development.

In terms of measuring user experience, it's a similar process to the user observation and interviews method that we used during the design phase of the project, except "refine" is the name of the game, not "redesign".

Improved developer experience

As a frontend engineer on this project I can speak to this impact personally! The upgrade from the old legacy tech stack to the new modern tech stack made the frontend team's life much (×10) easier.

As mentioned, I left the company before the completion of the project, but I have full confidence that any new or future frontend engineers coming onto the project will be working in a codebase that is clean, documented and easily extendable.

Increased trust in the team

As previously mentioned, the moderation metrics are reported to the CEO and C-suite on a monthly basis. They also need to interact with the moderation applications themselves from time to time.

Having both improved moderation metrics and a world-class suite of moderation tools gives people outside of our team added confidence and trust that, whatever moderation report the company receives, we can deal with it with the highest level of effectiveness, efficiency and accuracy, helping keep Bumble's millions of daily active users safe.
