An AI-based society is no longer theoretical science fiction; it is now our reality. However, the future of that reality is not yet decided.
AI affects humans in more ways than you might think.
It doesn't look like what you see in the movies. I, Robot, Bicentennial Man, and other films about robots are past conceptions of how AI would develop; the AI we are experiencing today is entirely different.
What matters more in today's era is machine learning, the inputs AI is trained on, and how humans use AI-based tools to influence and even control other humans' actions, lives, and perhaps even thoughts.
What are the dangers of relying heavily on AI tools for things like marketing?
Facebook AI Affects Humans via Ads and Clicks
I wrote previously about how Facebook uses instant gratification loops to keep users coming back to the social media app. One of the company's own founding executives, Sean Parker, thinks he helped create a "monster" in the now colossal app.
The TED talk above discusses exactly this: how those who control AI-based tools can manipulate users. A clear example is the Russian interference and Facebook "fake news" epidemic during the 2016 U.S. election.
Not only can AI affect our buying habits; it seems it can affect our voting habits, too.
As a result, some users disable cookies, location history, and the other kinds of tracking that nearly all websites perform. Google, in fact, is one of the worst offenders, especially given that its tools can scan your emails and Google documents.
But treating the symptoms doesn't address the root of the problem. While we may not be able to control the intentions and actions of others when it comes to how AI affects humans, we can take a deeper look at AI itself.
How Much of Us Should We Let AI Control?
Apps like Tinder have matchmaking algorithms, and we even wrote about how apps like Grindr might benefit from AI implementation. But the main issue with these, and with AI in general, lies in the training inputs and a technique known as backpropagation.
We already know from failed experiments such as Microsoft's Tay bot, which learned from social media, how badly input-driven machine learning can go. AI can become sexist, racially profile, or simply acquire the same prejudices as the people who train it.
This traces back to how backpropagation works: errors are passed backward through a network's layers, nudging its weights to better match whatever labeled inputs it was trained on.
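As a rough illustration of that point, here is a minimal sketch of backpropagation on a single sigmoid neuron (the dataset and features are entirely hypothetical). Gradient descent faithfully reproduces whatever pattern the labels encode, so if the labels carry a bias, the trained model carries it too.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training data: (feature vector, label). The labels are
# whatever the human labelers provided; backpropagation simply learns
# to reproduce them, bias included.
data = [([0.0, 1.0], 1.0),
        ([1.0, 0.0], 0.0),
        ([0.0, 0.5], 1.0),
        ([1.0, 0.5], 0.0)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(2)]  # weights
b = 0.0                                            # bias term
lr = 1.0                                           # learning rate

for _ in range(2000):
    for x, y in data:
        # Forward pass: prediction for this example
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        # Backward pass: gradient of squared error w.r.t. the
        # neuron's pre-activation, propagated to each weight
        grad = (p - y) * p * (1.0 - p)
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

# The trained neuron mirrors the pattern baked into the labels:
# inputs with feature[0] == 1 score low, all others score high,
# regardless of whether that pattern was fair in the first place.
print(sigmoid(w[0] * 1.0 + w[1] * 0.5 + b))  # low score
print(sigmoid(w[0] * 0.0 + w[1] * 0.5 + b))  # high score
```

Real networks stack many such layers, but the dynamic is the same: the model has no notion of fairness, only of fitting its inputs.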
The "godfather" of AI, Geoffrey Hinton, has argued that this process is antithetical to how true AI should function: it should learn on its own, not from what others teach it.
If we can address this weakness in machine learning, we may be able to stem the tide of the dystopia Zeynep Tufekci describes in the talk above.
An AI-based society is an inevitability. We are steadily moving towards a global structure where AI is no longer working for us, but instead influencing and deciding our life paths.
How this pans out remains to be seen, but at present it is clear that the most influential companies are building AI that takes human behavior as its model of intelligence, an approach that may not remain viable as AI becomes more advanced and less like us.