Are We Already in the Third Artificial Intelligence Winter?
Analyzing AI Progress and Problem Recognition
Photo Credit: Science Focus
The past few months have brought a steady stream of news and breakthroughs in artificial intelligence. If you are like us, you may have felt a mix of worry and intrigue when reading prevalent headlines like these:
“The Google engineer who thinks the company’s AI has come to life”
“Dall-E 2: Why the AI image generator is a revolutionary invention”
Wow, one would think that artificial intelligence is poised to disrupt a lot of areas, either by recognizing problems we don't know about or by speeding up our ability to solve the ones we do. But is this a new kind of intelligence, one that uses AI's capacity for learning to teach itself? Or is it just a quicker way of doing things we have known how to do for a few years? Does it really add another dimension to AI's ability to act more and more humanlike?
When we teach an introduction to artificial intelligence to non-technology-oriented students, we always include a history section, noting the 1956 Dartmouth workshop that really launched the field and the two AI "winters" since then. The first, in the 1970s, came because there wasn't enough computing power. The second, in the 1990s, came because the data artificial intelligence needed to make decisions wasn't generally available.
Now we are at a point where artificial intelligence not only has far superior memory but can also be programmed to reason faster and more thoroughly than humans do. Yet now that AI can learn on its own thanks to machine- and deep-learning breakthroughs, where are the big examples of AI helping humans spot new, major problems? Examples would include: "If you shut down a factory that produces 40% of the baby formula in the United States, you are going to have a huge shortage in a few months" and "Will a major heat wave hit most of the Midwest in two to three weeks?"
Shouldn't AI at least be starting to act like young children, who can recognize new problems on their own: "Daddy, watch out for that car!"?
If AI is currently doing this, we certainly haven’t seen the examples.
It is not as if major tech companies, think tanks, and universities aren't aware of this problem or aren't putting huge resources toward it. However, almost all of their solutions focus on finding better data to answer existing questions, or the secondary questions that follow from them. Some organizations, such as Big Science, argue that better data analysis should be prioritized over simply having more data.
We have written about how Andrew Ng and his new company, Landing AI, are focusing on this problem, mainly for manufacturing companies. But are all these approaches really just focusing on the trees instead of the forest? And isn't the real problem that you might not be asking the right questions?
So can AI come up with questions of its own? And if so, will they lead to better answers, whatever the data? Anyone who has raised a child knows that part of their growth in intelligence starts with the five-year-old asking, "Daddy, why is the sky blue?"
We seem to be at the point where current professional thinking in computing cannot answer this question with either hardware or software. Isn't that the definition of another artificial intelligence winter?
As readers in our soft-launch phase may be aware, our goal at Don't Count Us Out Yet is to make more non-technology-trained professionals aware of the problems happening in technology and science, and to get them to participate. Or, as MIT professors have so aptly put it, we are moving from the age of enlightenment into the age of entanglement.
It is a good question to start with, and we have a possible answer. Journalists, journalism scholars, and good editors at investigative newspapers and publications have been working on how to ask the right questions for years. We don't think there is a better group than journalists to help artificial intelligence break through this issue. Shouldn't every AI research and development group have at least one journalist looking at how questions are developed in its machine/deep-learning processes?
Here is one eight-step "how to ask the right question" approach we developed while teaching the introductory Media and Society course at Lehigh University. We are sure there are many others, and we believe these types of thinkers need to be involved in the development of artificial intelligence now. What other big questions could AI easily answer, like the baby formula shortage, that would help immensely right now? If we don't see this soon, grab your coat and gloves: it is going to be a cold third winter. But at least AI will be able to help you pick the right color for your living room out of your 50 preferences.