Scared of Artificial Intelligence? You're Not Alone: Top 4 Fears About AI Taking Over
Are AI fears overblown? Let's discuss the top four fears people have concerning the growth of artificial intelligence.
Photo Source: Acceleration Economy
We’ve done a lot of talking over the past year about all the good that generative artificial intelligence has to offer, but what about all the people voicing their fears about an AI future? Are those fears just a dramatic misunderstanding of what AI is, or should we all be listening to them a little more closely?
Today, we’re going to delve into the top four fears people have about AI, and give you the information to decide whether or not they’re simply overblown.
1. Mass Unemployment
The most talked about AI fear seems to be the threat of mass unemployment, as technology replaces not only physical laborers, but also knowledge workers. Americans have a tendency to feel like we were born to work, so the thought of being replaced by AI threatens our core identity and sense of intrinsic value. If AI can perform certain tasks within minutes that we’ve spent years trying to master, we might just be screwed.
The Hollywood writers’ strike is a perfect example of this fear already spurring action, with writers fearing their work will be replaced by generative AI that works faster and costs production companies a fraction of a writer’s salary. Potential job cuts might also lead to the further concentration of wealth among the top one percent, in this case those who own and invest in AI.
Despite all of this alarm about AI stealing our jobs, things might not turn out the way we expect. The same panic over impending mass unemployment surfaced alongside the invention of railroads, telephones and computers. And what happened in those eras? New jobs were created that could never have been imagined before, and productivity dramatically expanded. In other words, it's easy to list the jobs that might become obsolete, but much harder to imagine all the new jobs that may emerge.
We’re already seeing an increase in jobs requiring creativity and critical thinking, including prompt engineers and algorithm trainers, and as AI processes evolve, new unskilled jobs will emerge as well. In fact, according to Forbes, almost 60 percent of the jobs people have today didn’t exist in 1940 – imagine how many brand-new jobs might exist 80 years from now in collaboration with AI!
Is AI really going to replace the American Dream, or is this existential crisis just a result of shifting roles? As I mentioned in my previous article, “What we’re looking at is probably more of a shift in job responsibilities than an outright replacement.”
2. Systematic Bias
Another lesser-known, but no less important, limitation of AI is its propensity for bias. AI is built and trained by humans, so any bias present in the trainers or the data will show up in the system’s outputs.
This type of systematic bias is especially scary when AI is used to make real-time decisions, such as which candidate to hire or whom to promote, or when a biased algorithm in the media inevitably shapes public opinion on a critical topic. This fear is, in my opinion, a rational one, and one that requires human oversight to combat. We can’t see directly how AI arrives at its decisions, so human review of, and control over, the final outcome is essential, along with transparency about the training data.
3. Privacy Concerns
Another apprehension surrounding AI, which this piece from Entrepreneur covers well, involves cybersecurity risk, including the stealing or leaking of data, reputation attacks, and a general loss of privacy. Like any powerful tool, not everyone who wields AI plans to use it for good – there is always the potential for someone to use AI to steal an identity, damage a reputation or attack a company’s software. The proliferation of deepfakes is especially concerning to celebrities, whose image and voice (essentially what makes them famous) can be recreated by AI either for profit or in an attempt to disgrace them. However, if we really think about it, these types of violations and infiltrations occurred long before this wave of generative AI, with computer viruses and Photoshop.
No matter how primitive or evolved technology is, there will always be people using it for malicious purposes, meaning that plans for data privacy, regulation of use, and policing of published content are probably our best defense. Social media platforms in particular will have the responsibility of implementing tools that detect AI-generated content.
4. The Singularity
Finally, the science fiction dystopian world in which robots gain consciousness and eliminate the human race, also known as the singularity, is another source of distress among AI skeptics. Even some AI experts perpetuate this fear and call for swift regulation. Ian Hogarth, who has heavily invested in AI companies, warns against a “God-like AI” that no longer needs humans to operate. Right now, we are still required to administer AI, but what Hogarth is referring to is artificial general intelligence (AGI), where AI achieves a human level of consciousness and is able to learn on its own.
AI is fueled by neuroscience insights, and developers attempt to mimic the human brain using machine learning algorithms and neural networks. It might seem surprising (and perhaps counter-intuitive) to some, but AGI is actually the ultimate goal of companies engaged in the AI race.
While we haven’t reached AGI yet, nobody quite knows how long it will take to get there and what might happen if we do, which is why calls for a slowdown and regulation have amplified as technology expands.
Some researchers even argue that AGI is not a prerequisite for AI with innately human abilities. Geoffrey Hinton, who recently quit his decade-long job working on AI at Google, claims that neural nets can already have feelings, in some sense of the word. If feelings are merely counterfactual statements about what would have caused an action, or a way of discussing inclinations to action, then generative AI engages in "feeling" all the time. While we see AI's "hallucinations" as a flaw, by this reasoning they actually serve as evidence of a human-like intelligence. Hinton also points out that digital intelligence is essentially immortal; as he told The New Yorker, "We have to be real. We need to think, how do we make it not as awful for humanity as it might be?"
On the other hand, theoretical physicist Michio Kaku actually called chatbots “glorified tape recorders.”
The bottom line is that fear is one of humans’ critical survival mechanisms, arising from our brain’s responses to uncertainty and potential threat. Fears about unemployment, discrimination, privacy violations and the end of the human race are all valid, but are they well-founded? That’s for you to decide as we continue to keep an eye on AI’s progress and on any approaches taken to maximize its benefits and minimize its harms.
Best,
Nina for the Don’t Count Us Out Yet Team