Hate speech is any form of expression that attacks or disparages a person or a group based on their identity, such as race, religion, gender, sexual orientation, disability, or nationality. Hate speech can have serious negative consequences for the targets, such as psychological distress, fear, isolation, self-censorship, and even physical violence. Hate speech can also undermine social cohesion, democratic values, and human rights.
Twitter is one of the most popular and influential social media platforms in the world, with over 300 million active users per month. However, Twitter also has a dark side: it can be a platform for spreading hate speech, harassment, and misinformation.
According to a recent study by the Anti-Defamation League (ADL), 37% of Americans reported experiencing severe online hate and harassment in 2020, an increase from 28% in 2019. The most common types of online hate and harassment were being called offensive names (25%), being physically threatened (18%), being sexually harassed (14%), and being stalked (10%). The study also found that Twitter was the second most common source of online hate and harassment, after Facebook.
The impact of hate speech on Twitter users can be devastating. For example, in 2016, Leslie Jones, a comedian and actress who starred in the Ghostbusters remake, was subjected to a barrage of racist and sexist tweets from trolls who also hacked her personal website and posted nude photos of her. Jones temporarily left Twitter after expressing her anguish and frustration.
What can be done to combat hate speech on Twitter? Several strategies are possible, involving different actors and levels of intervention:
- Twitter itself: As the platform owner and operator, Twitter has both the responsibility and the power to enforce its own policies and standards against hate speech and harassment. It has taken some steps to address the issue, such as expanding its definition of hateful conduct, creating a reporting system for abusive tweets, suspending or banning accounts that violate its rules, and using artificial intelligence to detect and remove harmful content (a toy sketch of this kind of classifier follows this list). Many critics nevertheless argue that Twitter's efforts are insufficient, inconsistent, or biased: some users complain that the company is too slow or reluctant to act against prominent figures who spread hate speech or incite violence, while others claim that it is too quick to censor, or arbitrary in silencing, voices that challenge the status quo or express unpopular opinions.
- Governments: As the regulators and enforcers of the law, governments have the authority and the duty to protect their citizens from hate speech and its consequences. Some governments have enacted laws or regulations that require social media platforms like Twitter to remove illegal hate speech within a set timeframe or face fines or other sanctions. In Germany, for example, the Network Enforcement Act (NetzDG) obliges platforms to delete manifestly unlawful content within 24 hours or risk penalties of up to 50 million euros. This approach, however, raises its own challenges and risks: how can hate speech be defined clearly and consistently while still respecting freedom of expression? How can platforms comply with differing legal frameworks across jurisdictions? And how can governments be prevented from abusing this power to suppress dissent or criticism?
- Civil society: As the stakeholders and beneficiaries of a free and safe online environment, civil society groups have the role and the opportunity to monitor, expose, and counter hate speech on Twitter. Examples of civil society initiatives include:
  - creating online tools or platforms that allow users to report or flag hateful tweets
  - conducting research or analysis on the prevalence and impact of hate speech on Twitter
  - providing support or assistance to victims or targets of hate speech
  - raising awareness or educating users about the dangers and effects of hate speech
  - promoting positive or alternative narratives that celebrate diversity and inclusion
  - engaging in dialogue or debate with perpetrators or supporters of hate speech
  - mobilizing collective action or advocacy to demand change from Twitter or governments
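Both Twitter's automated detection and the civil-society flagging tools mentioned above ultimately rest on the same building block: a text classifier that scores how likely a tweet is to be hateful. The snippet below is a minimal, purely illustrative sketch of that idea, assuming Python with scikit-learn; the four hand-written training examples and the tf-idf plus logistic regression pipeline are assumptions made only for demonstration and bear no relation to Twitter's actual production systems, which rely on far larger annotated corpora and more sophisticated models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = hateful/abusive, 0 = benign. A real system would need a
# large, carefully annotated corpus and far more nuanced label definitions.
train_texts = [
    "go back to your own country, you people are vermin",
    "nobody wants your kind around here",
    "great game last night, congratulations to the whole team",
    "thanks for sharing this article, really informative",
]
train_labels = [1, 1, 0, 0]

# tf-idf features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new tweet. In practice, high-scoring tweets would typically be queued
# for human review rather than removed automatically.
example = "you people are vermin and should leave"
prob_abusive = model.predict_proba([example])[0][1]
print(f"estimated probability of abusive content: {prob_abusive:.2f}")
```

Even at this toy scale, the sketch hints at why critics worry about both under- and over-enforcement: wherever the score threshold is set, some abuse will slip through, and lowering it will start flagging legitimate speech.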
Hate speech on Twitter is a complex and multifaceted problem that no single actor or strategy can solve alone; addressing it requires Twitter, governments, and civil society to work together.