About YOU, THE MODERATOR

This project seeks to comment on the issues of disinformation and hateful content on the internet — and social media in particular — by placing the user in the position of a content moderator. Moderation has in recent years been increasingly vilified, with some characterizing it as a tool for those who wish to control conversations against users' will. While it may very well be true that certain platforms have moderated content in conspicuously uneven ways, this perspective ignores the fact that content moderation is nonetheless a gargantuan task that requires tremendous effort on the part of moderators. Although algorithmic moderation has been on the rise for many years, it still cannot be relied upon entirely, and at many companies a disproportionately small group of moderators is responsible for an enormous quantity of content. This is especially true for users in poorer countries or those who speak less widely used languages, whose content receives even less moderation — if any.

This interface simulates a hypothetical moderation system combining both algorithmic and human work. An unseen algorithm “flags” posts to appear on the user’s screen; the posts are visually censored to avoid unnecessary bias, and the user is asked to “examine” them by clicking on them repeatedly. This action is a catch-all for the work of moderators, representing both the critical thinking and the research necessary to determine whether a given piece of media may be harmful. The process is represented by emojis that appear: at first they are random, indicating that the nature of the post is still unclear, but as the user repeatedly “examines” the post they consolidate on a single emoji, indicating that its nature is now clear. The user can then confidently decide whether to “delete” or “approve” the post, depending on what the emoji represents (a Pinocchio-nose emoji (🤥) representing “disinformation,” for example).
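
As a rough illustration of this mechanic only, the sketch below shows one way the displayed emoji could drift from random toward a post’s true category as the user keeps clicking. All of the names here (Post, examine, CATEGORY_EMOJI, the category labels) are assumptions made for the example and are not taken from the project’s actual code.

```typescript
// Hypothetical sketch of the "examine" mechanic; not the project's real code.

type Category = "disinformation" | "hate" | "benign";

// Assumed mapping from a post's hidden category to the emoji shown
// once the post has been fully examined.
const CATEGORY_EMOJI: Record<Category, string> = {
  disinformation: "🤥",
  hate: "💢",
  benign: "🙂",
};

const ALL_EMOJI = Object.values(CATEGORY_EMOJI);

interface Post {
  trueCategory: Category;   // hidden from the user
  clicks: number;           // how many times the post has been "examined"
  clicksToReveal: number;   // clicks needed before the emoji settles
}

// Each click makes the displayed emoji more likely to be the post's true
// emoji; early on, a random emoji is shown instead.
function examine(post: Post): string {
  post.clicks += 1;
  const progress = Math.min(post.clicks / post.clicksToReveal, 1);
  if (Math.random() < progress) {
    return CATEGORY_EMOJI[post.trueCategory]; // the post's nature is clear
  }
  return ALL_EMOJI[Math.floor(Math.random() * ALL_EMOJI.length)]; // still unclear
}
```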

A score tracks whether the user has moderated posts correctly or incorrectly. As the user continues to moderate, posts appear more and more rapidly, until eventually it becomes impossible to maintain an adequate score. Posts also “push” themselves around the screen, adding to the sense of being overwhelmed. The point of this is to communicate how larger and more active sites require larger and more active moderation teams, or else they risk allowing more violent content and more disinformation to spread, with greater reach. The intent is to leave the user with a better understanding of the amount of work social media companies must undertake to provide an experience free of violence and other harmful content.
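
To make that escalation concrete, here is one possible way the scoring and quickening pace could be wired together. It is a sketch under assumed names (GameState, recordDecision, the specific interval and decay values), not a description of the project’s actual implementation.

```typescript
// Hypothetical sketch of scoring plus an ever-shrinking spawn interval.

interface GameState {
  correct: number;
  incorrect: number;
  spawnIntervalMs: number; // time between flagged posts appearing
}

// Record a "delete"/"approve" decision against the post's true nature,
// then speed up the flow of new posts so the user eventually falls behind.
function recordDecision(state: GameState, wasCorrect: boolean): void {
  if (wasCorrect) {
    state.correct += 1;
  } else {
    state.incorrect += 1;
  }
  // Shrink the spawn interval slightly after every decision, down to a floor,
  // so posts arrive faster the longer the session runs.
  state.spawnIntervalMs = Math.max(400, state.spawnIntervalMs * 0.95);
}
```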