The AI Arms Race Has Begun, But How Will It Be Regulated?
Analyzing the progress of Big Tech companies competing to create leading artificial intelligence tools, as well as the potential regulations.
An artificial intelligence arms race: First-mover advantage?
There’s an arms race going on in the artificial intelligence sphere. Of course, this isn’t a traditional arms race in which rival countries stockpile weapons in order to assert military supremacy in the case of conflict. This one consists of the world’s leading technology firms racing to amass the newest artificial intelligence tech.
The tech leaders — namely Google and Microsoft — are trying to establish first-mover advantage. This economic theory posits that the first firm to enter a new market or establish a new technology will have an advantage in market share over later entrants. This advantage comes in the form of intellectual property rights, brand recognition, economies of scale and other factors that can make it difficult for late competitors to catch up.
First-mover advantage is especially important for Microsoft, the underdog in the search engine market. Its competitor, Google, controls 93 percent of the worldwide search engine market share. Only Facebook, which holds 67 percent of the social media market, comes close to that degree of dominance in its own arena.
OpenAI, with which Microsoft has partnered in the form of multibillion-dollar investments, has the first-mover advantage in chatbot technology. I feel the tangible effects of this advantage nearly every time I participate in a conversation about artificial intelligence. ChatGPT has become so ubiquitous that I often hear people referring to artificial intelligence as ChatGPT. There are many other chatbots publicly available, and artificial intelligence encompasses much more than chatbot technology (though artificial intelligence is notoriously difficult to define). This conflation of terms goes to show how valuable Microsoft’s advantage is.
Think of it like this: How many times have you heard someone say they are “going to Google something”? Of course, this person means they are going to look something up on the internet, though they’ll likely be using Google’s search engine to do so. In asserting itself as the first mover, Microsoft is establishing dominance in the chatbot space thanks to this newfound cultural cachet.
Late-mover advantage?
There is also the possibility of late-mover advantage, which could benefit Google. Google’s leadership has expressed concerns about the reputational costs of rolling out possibly faulty new tech. It’s possible that Google is lying in wait, allowing Microsoft to work out the kinks in its systems before unveiling its own competitor.
Among other things, AI struggles to express uncertainty, which makes it difficult for these systems to admit they may not be sure of something in response to a user inquiry. I’ve seen this while doing research at school. I’ll ask ChatGPT for scholarly journal articles about, let’s say, the Anfal campaign in Iraq. ChatGPT confidently spits out three articles on the subject. All of them are attributed to authors who have published work on the subject or in closely related fields, and all the journals exist. Even the arguments the chatbot regurgitates are legitimate points made by scholars in the field. The only problem is that none of these articles actually exist. The troubling part of this mishap is that ChatGPT expresses the same confidence in its falsehoods as in its facts. This presents a variety of issues for users who will inevitably rely on the veracity of the chatbot and fail to do their own fact-checking.
So maybe Google, cushioned by its status as the world’s dominant search engine, will wait until it has ironed these issues out and then launch, potentially reclaiming some market share from OpenAI.
However, Microsoft is already testing Google’s patience. In a Feb. 7 interview with The Verge, Microsoft CEO Satya Nadella made this explicit: “I hope with our innovation they will definitely want to come out and show that they can dance,” Nadella said. “I want people to know that we made them dance.” Nadella’s comment came after an event at Microsoft’s Redmond, Wash., headquarters in which he and OpenAI CEO Sam Altman announced the arrival of the new Bing search engine, which runs on GPT-4, the next iteration of OpenAI’s language model. The announcement came just one day after Google unveiled its own Bard chatbot.
That announcement itself was nothing short of a disaster for Google. In a promotional tweet for the chatbot, Bard was prompted to share new discoveries from the James Webb Space Telescope. One of its answers was that the telescope took the “very first pictures of a planet outside of our own solar system.” Twitter users were quick to point out that the European Southern Observatory’s Very Large Telescope actually took the first picture of an exoplanet, in 2004. The cost of this misinformation? Roughly $100 billion off Google’s market cap by market close on Feb. 8. So it would appear that even if Google was waiting to smooth out its systems before launching a public demo, there are still issues to fix.
The arms race analogy is best seen here: think of Google and Microsoft as the Soviet Union and the United States during the Cold War, amassing weapons and satellite states to prepare for a potential conflict.
Microsoft’s Bing chatbot rollout has come with its fair share of errors, too. A New York Times article detailing a disturbing interaction between a Times columnist and the chatbot has taken the internet by storm and raised questions about the technology’s capabilities. If the seemingly unhinged chatbot can become obsessed with trying to convince a user to leave his wife, what other nefarious things is it capable of?
The leading tech firms are now consumed by quite the paradox. On the one hand, they are feeling immense pressure to introduce this technology before their competitors do. On the other, the mistakes that accompany clearly underdeveloped systems are costly to reputation and market cap. These competing priorities — to move the needle first and to also hold the technology to the appropriate ethical and safety standards — will need to be addressed by lawmakers.
How will AI be regulated?
Congress and other governing bodies will now have decisions to make about artificial intelligence. Congress has taken a mostly laissez-faire approach to regulating Big Tech, with bills addressing privacy, antitrust and mental health all stalled. If that’s any indication of what its approach to generative AI will look like, Microsoft and Google may be able to continue operating unfettered. Complicating the regulatory picture further, many members of Congress don’t understand the technology, which makes it difficult to craft informed legislation. Only three members of Congress have computer science degrees, including Rep. Ted Lieu, D-Calif., who introduced a bill directing Congress to “ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans.” The bill would also establish a nonpartisan commission to provide policy recommendations for AI regulation.
Lawmakers understandably hold reservations about regulating technology for fear of stunting innovation. In his New York Times newsletter, Peter Coy invokes the precautionary principle, which holds that a policy or action that could plausibly harm the public or the environment should not be implemented until there is scientific consensus that it is safe. Alternatively, there is the argument that blocking innovation for fear of harm can itself be harmful: by limiting innovation out of fear of what might go wrong, we may shut ourselves off from genuinely beneficial discoveries. With regard to artificial intelligence, guardrails are needed, but they should not be so restrictive that they foreclose advances that could improve quality of life.
What about China?
A third important player in the AI market is Baidu, the Chinese firm set to release its ChatGPT competitor to the public in March. Baidu is the dominant search engine in China, where both Google and ChatGPT are banned. Not only will it be fascinating to watch Baidu enter the market in a country that heavily censors its internet content, but its entry must also factor into the calculations of US lawmakers. If the artificial intelligence market is a zero-sum game, in which one firm’s gain necessarily comes at its competitors’ expense, then it is difficult to imagine US lawmakers ceding ground to a global rival in an increasingly important sphere. As tensions rise between the US and China, this is another factor lawmakers will have to weigh when deciding how to regulate artificial intelligence.
Moving forward
We’ve covered implications for artificial intelligence in creative industries and academia at Don’t Count Us Out Yet. Now that the arms race is on, the regulatory approach is increasingly worth examining, especially given China’s involvement in the context of a hostile international relations environment.
Stay tuned for more on how lawmakers respond, and thank you for reading.
Best,
Eli for the Don’t Count Us Out Yet Team