Annika Thompson (’26, Graphic Design) and Sophia Rossi (’24, Political Science)

In the United States, 72% of the population uses at least one form of social media, and the most popular platforms among those users are YouTube, Facebook, and Instagram. While the majority of Americans use these sites regularly, there is disagreement over whether the platforms are moderating user content effectively. Some believe there isn’t enough content moderation, pointing to the influx of fake news circulating online. Others worry that moderation could lead to censorship of certain political viewpoints or infringements on free speech. Many users, especially young people, get their news from these sites, making this a crucial issue to be informed on.

Free speech and censorship of content have been at the forefront of people’s minds, especially as social media sites have updated their platforms with new policies. Instagram, for example, has added a fact-checking system that appears at the bottom of posts, prompting users to see why “fact checkers say this is false,” and the app will blur out posts that it deems “sensitive content.” Additionally, after Elon Musk took over Twitter and rebranded the platform as X, many of the app’s features were updated. One of those updates changed the content moderation policy to be less strict on matters of hate speech. Musk has even gone so far as to sue the state of California over its law requiring social media companies to disclose their content moderation policies.

Federally, the laws governing content moderation of social media center on the First Amendment and Section 230 of the Communications Decency Act. The First Amendment protects the freedoms of religion, speech, press, assembly, and petition. In disputes over content moderation, it is often the first authority raised, and it is frequently used to strike down laws attempting to prevent hate speech. Section 230 of the Communications Decency Act shields platforms and users from being treated as the publisher of content created by others. Social media companies are encouraged to address illegal content, including threats, terrorism, and child abuse, but they cannot be sued over what users post on their platforms. In short, they are not held responsible for what people choose to say, and they remain in control of how much they moderate their content.

When JMU students were asked, “Should there be stronger content moderation of social media?” responses varied. Much like the rest of the population, students seem to be split on the issue. Some believed stronger content moderation would not affect free speech because laws already exist to prevent people from saying whatever they want online. Others thought it would limit self-expression and that there should be less moderation overall, with exceptions for preventing inappropriate content. While opinions varied, one thing stayed consistent: no one thought eliminating content moderation entirely would be a good idea.

The issue of content moderation is complex, and creating an effective and balanced system will take time and effort, especially because we don’t all agree on what should and should not be allowed on certain platforms. The evolution of different policies has elicited responses from people that are both positive and negative. New federal laws on moderation will likely be difficult to pass because the subject is so broad. For now, users can best control what they see online through their choice of platform.