Science fiction is rife with stories of AI taking control of people’s daily lives. Google is planning to use its Conversation AI to help moderate online comments for the New York Times’ website, and the idea behind this implementation dances on the razor-thin edge between moderating harmful comments and the Orwellian nightmare of ‘thoughtcrime.’ How will Google’s AI help the Times, and could this lead to Internet censorship, an infringement on the right to free speech?
Let’s face it: The Internet can be a downright rude and nasty place sometimes. Give some people the right to free expression and a relatively unregulated place to practice that right, and they’ll show you an entirely different set of values than what you may think they possess. Social platforms attract cyberbullies, trolls, and people looking to shame others by exposing or insulting them.
Of course, the right to insult people is, in many cases, protected by our bill of rights. Language is how humans express their thoughts and opinions, and censoring language serves to invalidate those thoughts and opinions. In places where freedom of speech is treated as a fundamental human right, there is a history of social development in place that gives people a refined yet still imperfect sense of what speech will be acceptable in any given context. After all, your opinions should not land you in jail, but that doesn’t mean that people will like them. Saying the wrong thing in the wrong place can have significant consequences.
The fact that such social development hasn’t yet occurred on the Internet is alarming, especially considering that there are often no consequences for saying something that would most likely get you beaten in the real world. Therefore, comments sections net-wide have become a space for acidic arguments, harmful speech, and the occasional guilty laugh.
The New York Times, however, isn’t having it. Rather than disable comments on their articles, they have turned to Jigsaw, a company under the umbrella of Google. Jigsaw has developed a program called Conversation AI, which is intended to help humans moderate comments sections.
Implementation With Cautious Intent
Currently, the comments published on the New York Times‘ website are moderated by people, but their workload is immense. Conversation AI is learning to detect each type of comment that should be rejected, helping moderators manage the amount of content they need to review. Those types include comments that are insubstantial, off-topic, spam, inflammatory, incomprehensible, or crass, as well as direct attacks on authors, publishers, or other commenters.
The Times considers abuse on its site to be largely under control thanks to the efforts of its moderators. The point of Conversation AI isn’t to replace those moderators, but to streamline their job so they can focus on the largest abuse issues. According to Erica Greene, an engineering manager at the Times, “We don’t ever expect to have a system that’s fully automated.” Commenters can rest assured that the AI isn’t out to get people; it merely makes it easier for an employee to find a problem comment and use their discretion in handling it.
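The workflow described above — a model scores comments and surfaces the worst ones for human review, rather than rejecting anything on its own — can be sketched in a few lines. This is purely an illustration under assumed details, not the Times’ or Jigsaw’s actual system; the keyword-based `abuse_score` is a hypothetical stand-in for a trained model like Conversation AI.

```python
def abuse_score(comment: str) -> float:
    """Hypothetical stand-in for a trained model such as Conversation AI.
    Here: a crude keyword heuristic, purely for illustration."""
    flagged_terms = {"idiot", "stupid", "trash"}
    words = comment.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in flagged_terms)
    return min(1.0, hits / max(len(words), 1) * 5)

def triage(comments, threshold=0.3):
    """Split comments into (review_queue, auto_approved).

    Nothing is auto-rejected: the model only prioritizes the human
    moderator's workload, with the worst-scoring comments first."""
    queue, approved = [], []
    for c in comments:
        score = abuse_score(c)
        (queue if score >= threshold else approved).append((score, c))
    queue.sort(reverse=True)  # highest abuse scores surface first
    return queue, approved

comments = [
    "Great reporting, thank you.",
    "You are an idiot and this article is trash.",
    "I disagree with the premise, but it's well argued.",
]
queue, approved = triage(comments)
```

The key design point mirrors Greene’s statement: the threshold only decides what a human sees first, and the final accept/reject call always stays with the moderator.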
The AI is going to be implemented in a few months, which should help the Times foster the kind of atmosphere they want to have on their website, but what does this say about the climate of the Internet as a whole?
Wiping out Internet Abuse is Double-plus Good, Right?
The intent behind Conversation AI is tantamount to Internet censorship, and censorship is in direct opposition to the freedom of speech.
History is full of moments where significant advances were almost stopped because they presented unfavorable opinions, such as the struggle between the Church and Galileo over his heliocentric model of the solar system.
Freedom of speech allows us to discuss ideas without dismissing them as unacceptable, giving societies the ability to discern whether or not those ideas have merit without the intervention of a restrictive power structure. Ideas, then, should be engaged with rather than dismissed out of hand.
Accepting ideas is not the same as agreeing with them, and only through discussion can we judge how agreeable an idea is and whether or not it should be implemented. Technology is allowing us a new forum where our biases are being laid bare before us even if we want to convince ourselves that they do not exist, and that kind of naked truth is far more useful than the absence of it.
Cyberbullying and harassment represent an ugly truth about human nature. However, while Internet censorship should not deter the free creation of ideas, people who feel threatened by the speech on the Internet should have legitimate recourses to protect themselves.
For example, it should not be Google’s responsibility to filter or censor information; their responsibility should be to provide the means for users to file complaints if they feel threatened by another user’s actions or statements. Additionally, it is their duty to establish a code of conduct, which would give users a clear set of guidelines to reference when issuing a complaint about another user.
The Internet Censorship Debate
Detecting abuse requires knowledge of the context in which a comment is made. This represents a linguistic problem that AI may never be able to solve, so completely automated censorship is, thankfully, impossible. This means that while Conversation AI may seem like an attempt to build a machine that imitates Big Brother, it will always be trying to catch up to the latest trends in what it labels abuse rather than staying ahead of them.
Ironically, Internet censorship is a hotly debated topic that deserves discussion. Many people carry the sentiment that people are capable of discerning what is and isn’t useful out of what they read, and they see censorship as a regressive strategy to combat Internet abuse. In a recent chat with Twitter CEO Jack Dorsey on the fake news phenomenon, Edward Snowden offered the opinion that people deserved to choose for themselves what speech is useful or not. He said, “We point out what is fake; we point out what is true — the answer to bad speech is not censorship, the answer is more speech.”
So, will Conversation AI be the ember that lights the flame of Internet censorship? The world will soon find out. Let’s all hope that the answer is a clear ‘no.’