The European Commission has given an ultimatum to social media giants and web companies: get rid of hate speech and violent content on their platforms or face legal action.
This decision comes after years of trying to work together with social media companies to curb the situation yielded few results.
Facebook, Twitter, Google, and Microsoft have all pledged in the past to curb user-generated hate speech and violent content in a timely fashion but, according to the EU, they haven’t lived up to their promises.
Facebook said earlier this year that it was hiring 3,000 people to ramp up content moderation on its platform, but the EU doesn’t feel this has brought any improvement in how quickly the company acts.
EU Isn’t Satisfied With the Status Quo
“The situation is not sustainable: in more than 28% of cases, it takes more than one week for online platforms to take down illegal content,” said Mariya Gabriel, commissioner for the digital economy and society.
The commission will be implementing new laws in the coming months to deal with social media companies if they fail to increase their efforts in dealing with the situation.
The commission wants the companies to invest more in the detection and timely removal of illegal content, as well as in preventing it from reappearing on their platforms.
In a press release last month, the commission said: “Given their increasingly important role in providing access to information, the Commission expects online platforms to take swift action over the coming months, in particular in the area of terrorism and illegal hate speech – which is already illegal under EU law, both online and offline.”
It’s not clear what punitive measures the European Commission will put in place to deal with companies that do not comply with the rules.
However, given the commission’s history of slapping hefty fines on companies that don’t play by its rules, the penalties are likely to be steep.
Common Tools Suggested by the European Commission
The press release we linked to above lists a few common tools the European Commission feels online platforms should adopt. They are:
- Detection and notification: Social media platforms should cooperate with “competent” government authorities by designating common “points of contact” to ensure immediate communication and rapid removal of flagged illegal content. To speed up detection, platforms should work closely with “trusted flaggers” – highly specialized parties with expert knowledge of what constitutes illegal content. The EU also wants companies to make user flagging readily available and automated detection protocols more effective.
- Effective removal: Because timeliness is critical in cases such as incitement to terrorism, where serious harm could be caused, the commission wants flagged content removed as quickly as possible, and it says it will do further research into specific timeframes for removing illegal content. The commission also feels social media platforms should be transparent with users about their policies, past transgressions, and statistics on how often illegal content was found and removed, while taking care not to let this censorship go too far and create “over-removal”.
- Prevention of reappearance: The commission suggests that social media platforms pay extra attention to repeat offenders and put tools in place to prevent them from repeatedly posting illegal content, again suggesting that automated tools be developed for this express purpose.
How do you feel about the EU’s warning? Is this necessary, or is it dangerous censorship?