What does speculative-provocative content moderation look like?
What would visually provocative censorship look like in different speculative futures?
If you have been on Twitter, you may understand the complexity of the platform and its network. Run by a considerably small number of employees, Twitter is perceived as an opaque-but-vague platform that may block an activist while leaving untouched the alt-right profile generating and propagating hateful content. Users often credit the vague wording of The Twitter Rules for the censorship and moderation of content, but a platform that strives for overall free speech, rather than absolute free speech, can only achieve a limited amount of satisfaction.
Following the rules of moderation is hard for every user on the platform as well, as an individual never knows how or when their account or tweet got blocked. In their cover story for Fast Company in May 2018, Carr and McCracken describe how Rosenberg, a Twitter user, got blocked for sharing his perspective on harassment against Jews, while the trolls and accounts propagating hate speech against him remained on the platform. They wrote that Rosenberg, who considers his effort good citizenship rather than vigilantism, still isn't sure why Twitter found it unacceptable; he never received an explanation directly from the company. As a Twitter user myself, I often see tweets removed from my feed, replaced with an image warning that the content may be sensitive or violent. This kind of stealth censorship led me to interrogate the nature of censorship on the platform, and what it might look like in its most provocative form. For this exercise I created alternative scenarios of speculative worlds, each governed by a different entity with its own rules and goals, which are as follows:
Community Censorship: Tagging/Suggestions
Rules:
1. Maximising the ‘snackability’ of online content
2. Unlocking profile and features of the platform through continual interaction/gamification
3. Giving ownership to each individual/ Open-source censorship
Goal: Creating a collaborative narrative that defines the current socio-political structure.
Moral Policing
Rules:
1. Streamlining the socio-cultural narrative through online monitoring
2. Penalty for those who don't align with the source's thoughts and beliefs
3. Promoting positive content, based on moral-emotional responses and triggers
Goal: Eliminating/Changing the data present to morph the narrative and behaviour of users
Religious Censorship
Rules:
1. Streamlining the socio-cultural narrative through online monitoring
2. Penalty for those who don't align with the source's thoughts and beliefs
3. Projecting a desirable image of self
Goal: Eliminating/Changing the data present to morph the narrative and behaviour of users
Self Censorship
Rules:
1. Projecting a desirable image of self
2. Unlocking profile and features of the platform through continual interaction/gamification
3. Increasing connectivity to become a higher authority
Goal: Changing the data present to morph the narrative and behaviour of users (connections)
Exploring the visual aesthetics of the digital field you are researching can broaden your expectations of both the users and the platform, prompting you to question the social and technological limitations the field struggles with. Exploring the aesthetic alternative futures of Twitter opened up avenues for further questions, interrogations, and explorations. After speculating on censorship and moderation of the Twitter platform, I wanted to know who the people getting censored on the platform are, and how I could learn more about them, which led me to my next step.