Pensive: A Blog

The Meaningful Thoughts of Matthew Eleazar

On Content Moderation (CSE Ethics)

Published on November 13, 2024

  1. Can content moderation be an effective mechanism for quelling problems on the Internet? The conversation on moderation tends to take the form of an absolutist defense of freedom of speech or an inclination to ban speech that is deemed unacceptable to certain groups. Neither feels like it's the right framing for this conversation. What would be a sensible policy for effective moderation?

I suppose that, to a reasonable extent, content moderation is a relatively effective mechanism for quelling problems on the Internet when it comes to issues surrounding inappropriate content, speech, writing, threads, and the like. Hearkening back to my take on moderated content for Project 01, I think that no speech should ever be removed for its own sake, even if it offends other people. What I think people neglect to mention, though, is context. If you post a very political stance on a particular issue, one that could be deemed offensive but authentic on your part, on a music subreddit/thread, the mods absolutely have the right to take down what you posted, because it was inappropriate and was not posted in the right context/environment.

I think that, depending on the platform you're posting your speech on, there will be rules that need to be followed and conventions that are ideally observed, so that wherever people spit out their verbiage, the place where they spit it out makes sense.

In the case of Twitter, Facebook, IG, and the rest of the relevant platforms, for any political post out on display for the public, tied to an individual user's stream-of-consciousness sharing, I think there should be very minimal content moderation. Although I wouldn't say I hold an "absolutist defense of freedom of speech" position, since there are clearly limits to that freedom, I sure as hell don't hold the "ban speech that is deemed unacceptable to certain groups" position. I think that whatever racist, genocidal "hate speech" and whatnot gets thrown out online should not be taken down, because I believe the idiots who post things like these deserve to get shunned and berated for holding braindead ideas by the, hopefully, level-headed and reasonable public. I also think the dislike button (with visible counts) should not only come back to YouTube but also be introduced to the major social media platforms, so that people are not limited in the reactions they can express toward content (especially toward very obviously dislikable content). People who post material like this deserve to have their idiocy out on display so that they get embarrassed out of existence.


  2. Imagine that you have posted a controversial, yet respectful and not intentionally inflammatory, take on social media. After public outcry, this lands you in hot water with the platform and you find yourself banned. What do you do?

I'll probably document everything I said, collect examples of what the people who opposed me said to me, and keep track of all the events that transpired leading up to my ban, especially the actions taken and statements made by the social media platform itself. I'll then probably start my own blog website and, for one of my blog posts, write a very clear and succinct account of the details of my ban from that platform, and, to the extent the truth allows, frame myself in the most positive light and the platform and the people who canceled me not so much (demonize them), so that I can demonstrate the unreasonableness of the treatment I received from the hostile mob and the hostile-mob-worshiping platform. If time permits, I might reach out to a journalist who's interested in freedom-of-speech issues and share my story with them; maybe that journalist can write a sensational story about how the social media platform that banned me hates freedom of speech, should in turn be canceled, and should lose users. Easy.