Modern AI technology is more successful than at any point in its history, having quietly become part of our daily lives. Despite this success, AI still falls well short of the promise many hold for the technology, which is why some fear the field is heading into its third winter. This article examines the factors that will determine whether we are heading for an AI summer, AI autumn, AI winter or AI spring, and provides a story that encapsulates the plausible future each of these seasons represents.
Artificial intelligence: overview
Over the past 250 years, the primary drivers of economic growth have been technological innovations that became general-purpose technologies, creating waves of complementary innovations and opportunities. Artificial intelligence, and more specifically machine learning, is a machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it is given. Machine learning is a subset of AI; when this article refers to either AI or machine learning, it means the ability of a system to learn and improve automatically from experience without being explicitly programmed.
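To make this definition concrete, the following minimal sketch (in Python, using the scikit-learn library purely for illustration; the data are invented) shows a model inferring a relationship from labelled examples rather than having that relationship programmed by hand.

```python
# Minimal illustration of "learning from experience" rather than explicit
# programming: the model is never told the rule; it infers one from examples.
from sklearn.linear_model import LogisticRegression

# Toy "experience": [hours studied, prior attempts] -> pass (1) / fail (0).
X = [[2, 0], [4, 1], [6, 1], [8, 2], [1, 0], [9, 3]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                    # performance improves from the data alone
print(model.predict([[5, 1]]))     # prediction for an unseen case
```

Adding more examples and refitting is all it takes for the model to keep improving, which is the sense in which it learns without being reprogrammed.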
PwC predicts that AI will add $16 trillion to the global economy by 2030, while McKinsey estimates this figure at $13 trillion. AI’s most important attributes, which will only grow in importance in the coming years, are connectivity and updatability. AI can connect to an integrated network of countless similar applications, which enables it to process data and learn far more quickly than humans ever could. Updatability means that AI always works with the latest data, something humans cannot match.
The promise of AI has, however, not yet materialised in its application, leading some researchers to suggest that the field is heading for its third winter. The previous two AI winters occurred when the promise of what AI could do went unmet, research funding dried up and the importance of the field was questioned. The Boston Consulting Group surveyed 2 500 CEOs, seven out of ten of whom stated that their AI projects had generated little benefit thus far. Furthermore, two-fifths of the respondents who had invested significantly in AI had not realised any benefits. A PwC survey similarly found that only 4% of respondents planned to deploy AI across their organisations, down from 20% the year before.
Both surveys suggest a cooling of the current enthusiasm for AI and could indicate that an AI winter is approaching.
The revival of AI over the past few years can be ascribed to the convergence of three trends: the improvement in AI algorithms, the use of big data and cloud supercomputing. These three trends not only drove the revival of AI but will also determine whether or not AI is moving into its third winter. They will also shape the future use of AI, as discussed in the sections that follow.
AI algorithms
One of the limiting factors of current AI algorithms is their lack of cognitive ability: they can only correlate inputs with outputs, and they do so blindly, with no understanding of the broader context. Although they are powerful pattern-recognition tools, they therefore lack the top-down reasoning that characterises the way humans approach problems and tasks. Instead, current AI algorithms are trained with bottom-up reasoning that requires mountains of big data, a serious limitation because the algorithms do not know how to handle situations where little or no data exist. If future algorithms can improve their cognitive ability by using top-down reasoning and relying less on bottom-up big data, they will more closely resemble the way humans approach problems and tasks and will be more broadly applicable.
To create cognitive algorithms that use top-down reasoning, researchers are encouraged to widen the scope of what machines are taught rather than simply increase the volume of data. It is further suggested that researchers combine current machine-learning techniques with older, symbolic AI approaches that emphasised formal logic, hierarchical categories and top-down reasoning.
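As a rough, purely hypothetical illustration of such a combination, the sketch below pairs a bottom-up statistical classifier with a small top-down rule base expressed as hierarchical categories and simple logic; every name and fact in it is invented and it does not represent any specific research system.

```python
# Toy combination of bottom-up pattern recognition with a top-down symbolic
# layer: the statistical model proposes a label, and a hand-written knowledge
# base (hierarchical categories plus simple logical checks) vetoes proposals
# that contradict known facts. All names, labels and facts here are invented.

def statistical_classifier(features):
    # Stand-in for a trained neural network: returns (label, confidence).
    return ("penguin", 0.62)

KNOWLEDGE_BASE = {
    "penguin": {"is_a": "bird", "can_fly": False},
    "sparrow": {"is_a": "bird", "can_fly": True},
}

def consistent_with_facts(label, observed_facts):
    """Top-down check: reject a label that contradicts independently known facts."""
    entry = KNOWLEDGE_BASE.get(label, {})
    return all(entry.get(key, value) == value for key, value in observed_facts.items())

label, confidence = statistical_classifier(features=None)
facts = {"can_fly": False}            # e.g. derived from context rather than pixels
if consistent_with_facts(label, facts):
    print(f"accepted: {label} (confidence {confidence:.2f})")
else:
    print(f"rejected: '{label}' contradicts background knowledge")
```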
Data requirements
The data requirements of AI algorithms depend on whether future algorithms develop the cognitive capacity for top-down reasoning or remain limited to bottom-up reasoning, which requires vast amounts of data for training. Researchers have found that one of the key factors differentiating AI leaders from laggard organisations is their approach to data.
Current AI algorithms require large numbers of carefully labelled examples, and those labels usually have to be applied by humans, a process known as data wrangling that takes approximately 80% of the time spent on an AI project. The more complex an AI algorithm and the more parameters it has, the higher the cost of its training data, and the more computing power, typically cloud computing, it requires.
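The toy sketch below (in Python with pandas, using invented records) illustrates the kind of wrangling this involves: cleaning the raw records and then attaching the human-supplied labels on which training depends.

```python
import pandas as pd

# Raw, unlabelled records (toy example standing in for real source data).
raw = pd.DataFrame({
    "ticket_id": [1, 2, 2, 3, 4],
    "ticket_text": ["  Cannot log in ", "Billing error", "Billing error",
                    None, "App CRASHES on start"],
})

# Typical wrangling: drop incomplete rows, de-duplicate, normalise text.
clean = (raw.dropna(subset=["ticket_text"])
            .drop_duplicates(subset=["ticket_id"])
            .assign(ticket_text=lambda d: d["ticket_text"].str.strip().str.lower()))

# The expensive part: a human annotator supplies the label for every example.
clean["label"] = ["access", "billing", "stability"]   # hand-applied labels
print(clean)
```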
There are some problems with training data that need to be taken into account:
- In many cases the training data contain biases that are transferred to the algorithm, creating or reinforcing unfair bias.
- The use of personal data raises ethical constraints as well as privacy issues.
- Beyond the initial training data, AI algorithms require a continual stream of new data to improve their predictive accuracy.
One way to deal with these data problems is to make up some of the data by creating synthetic virtual training data, as sketched below. It was found that algorithms trained on synthetic data performed better than algorithms trained on real data alone. Synthetic data are also more attractive because they address the privacy concerns relating to personal data.
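A minimal sketch of the synthetic-data idea is shown below; it simply fits a basic statistical distribution to a handful of invented records and samples artificial ones from it, whereas production systems would use far richer generative models.

```python
# Create synthetic training data by fitting a simple distribution to a small
# "real" sample and drawing new, artificial records from it, so no real
# individual's data needs to be shared. The records here are invented.
import numpy as np

rng = np.random.default_rng(seed=0)

# Small "real" sample: e.g. patient age and a measured blood value.
real = np.array([[34, 5.1], [51, 6.3], [47, 5.9], [29, 4.8], [63, 7.0]])

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw 1 000 synthetic records with the same overall statistics.
synthetic = rng.multivariate_normal(mean, cov, size=1000)
print(synthetic[:3])
```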
Computational power
One of the key trends that brought about the revival of AI is near-unlimited access to supercomputing in the cloud, estimated at $70 billion in 2015, together with the continual growth of big data, which has grown at a compound annual rate of more than 50% since 2010. One of the central laws used to predict computing power is Moore’s Law, which states that computing power doubles on average every two years at a constant cost.
The explosion in demand for computing power has put pressure on Moore’s Law: shrinking computer chips is getting harder and the benefits of doing so are not what they once were. This has led to optimisation techniques such as changing the computer architecture to follow the structure of the data being processed, or eliminating the time AI models spend multiplying numbers by zero.
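The sketch below illustrates the second of these optimisations in software terms: when the many zero-valued weights are stored in a sparse format (scipy is assumed here purely for illustration), the multiplications by zero are simply never performed, yet the result is unchanged.

```python
# Store a mostly-zero weight matrix in sparse form so multiplications by zero
# are skipped entirely; real AI accelerators implement this idea in hardware.
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(seed=1)
dense = rng.random((1000, 1000))
dense[dense < 0.9] = 0.0                 # roughly 90% of the weights are zero

sparse = csr_matrix(dense)               # stores only the non-zero entries
x = rng.random(1000)

y_dense = dense @ x                      # multiplies every entry, zeros included
y_sparse = sparse @ x                    # skips the zero entries entirely

print(np.allclose(y_dense, y_sparse))    # same result, far less arithmetic
```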
Other researchers are looking at alternative ideas such as quantum computing and neuromorphic chips. These two ideas are explained below:
- Quantum computing uses the counter-intuitive properties of quantum mechanics to provide large speed-ups for certain types of computation, such as those in which a computer must make trade-offs between millions of variables to arrive at a solution that minimises as many of them as possible.
- Neuromorphic chips are inspired by biology: chip makers are investigating chips whose components are designed to mimic more closely the electrical behaviour of the neurons that make up biological brains.
Companies should take full advantage of the collaboration between AI and humans. They must understand how humans can most effectively augment machines, how machines can enhance what humans do best, and how to redesign business processes to support the partnership.
Four AI development scenarios by 2030
Figure 1 was developed by identifying the two main trends that will shape the future of AI by 2030: the development of cognitive ability in AI algorithms to use top-down reasoning, and the development of data sharing and use under a global framework. Such a global framework would protect data privacy, govern what the data may be used for and ensure that the data are usable and uniform, because without data even the most cognitive algorithms cannot function. The two trends influence each other: as cognitive ability improves, the algorithms become less data dependent, and vice versa. Both trends also affect the computing power that will be needed in the future, which is why these two trends were chosen as the basis of the scenarios.
Figure 1 shows the four scenarios, each described by way of a story in the sections that follow, depicting AI summer, AI autumn, AI winter and AI spring.
AI summer
At 06:00 on 13 July 2030 in Genève, Switzerland, Meryl Oyomno, a Kenyan artificial intelligence engineer, has just been woken by her alarm clock. She is filled with excitement, as today marks a momentous occasion: she is presenting the Global Artificial Intelligence Framework to members of the United Nations for ratification. Approval of the framework has become increasingly important following the introduction of cognitive AI algorithms, as it will allow them to become a general-purpose technology that can affect entire economies.
The Global Artificial Intelligence Framework governs the use of all AI algorithms to ensure their application is in line with the ten principles of the United Nations Global Compact. She reflects on the development of the framework and on the introduction of cognitive ability to AI algorithms, which moved artificial intelligence from a powerful pattern-recognition tool to one of cognitive intelligence. The breakthrough came when Vishaal Lutchman, a PhD candidate in Biology at the University of Bangalore, discovered the neural sequences that enable human cognitive ability and applied them to AI algorithms. He found the missing piece of the puzzle that had evaded so many other researchers and FinTech companies. Meryl smiles at the thought that a 28-year-old PhD biologist solved the cognitive problem of AI algorithms.
She also thinks back to David Webb, a master’s student in Biology from the University of Oxford, who created a neuromorphic chip based on the sequence of the electrical behaviour of neurons in the human brain. By studying this electrical sequence, he was able to apply it to a computer chip and so create the neuromorphic chip that solved the computing problem of running the large cognitive AI algorithms. This decreased the cost of using cognitive algorithms and of analysing unlimited amounts of data.
While reflecting on the exploits of Dr Vishaal Lutchman and David Webb, Meryl realises that the seed of the Global Artificial Intelligence Framework was the Global Data Sharing and Usage Framework, ratified in October 2025, which governs the sharing of personal data, the format in which such data are kept and the uses to which they may be put. It ensured that personal information is protected, that a standardised price is paid for the use of this information, and that the information is used only for purposes aligned with the ten principles of the United Nations Global Compact.
As Meryl gets ready for her presentation, she thinks back to mid-2020 when, in the midst of the COVID-19 pandemic, she read an article debating whether the world would move into an AI winter in the coming decade. She smiles and thinks, with the gift of hindsight, that what has since happened in the field of AI can only be described as an AI summer.
AI autumn
Dr John Omalu is one of a handful of ophthalmologists in Abuja and the sole licence holder in Nigeria of the ophthalmology cognitive AI algorithm built by DeepMind, a subsidiary of Google.
Sitting in his favourite coffee shop this morning, he is deeply concerned about the announcement of the dissolution of the Global Data Sharing and Usage Framework that has been in existence for the past five years. He is worried about the impact that it will have on his business and astounded by the short-sightedness of the decision.
His licence for the ophthalmology cognitive AI algorithm enables him to provide ophthalmology services at a fraction of the price, and to more patients, than a human ophthalmologist could. The algorithm also identifies ophthalmic conditions more accurately, provides better treatment plans and achieves better patient outcomes than any human ophthalmologist. The power of the algorithm is its ability to adjust patient care and treatment based on the patient’s prognosis. It uses millions of patient records, accessed via the Global Data Sharing and Usage Framework, to remain up to date with the latest treatment protocols, new and rare ophthalmic conditions and the outcomes of patients on the various treatments given. Furthermore, the algorithm’s cognitive ability allows it to apply these data to each individual patient in the same way a human ophthalmologist would.
Dr Omalu has extended his services across Nigeria through mobile clinics, each staffed by two nurses and a driver. One nurse positions the patient and holds the ophthalmology scanner to the patient’s eye. The algorithm does the rest of the work, diagnosing the patient and printing out a treatment plan that the second nurse then explains to the patient.
The dissolution of the Global Data Sharing and Usage Framework means that each country will need to create its own data sharing and usage framework, which could result in data being stored in inconsistent formats across different jurisdictions, affecting the usability of the data. This will be devastating for Dr Omalu’s business, as the ophthalmology AI algorithm will no longer have access to billions of anonymised patient records to provide updated treatments, clinical trials and outcomes based on best clinical practice. Limiting the availability of data will severely reduce the usefulness of the ophthalmology cognitive AI algorithm because, like a human doctor, it requires data to learn and decide on the appropriate treatment. This will also increase the cost of treating patients suffering from eye disorders in Nigeria and around the world, as additional ophthalmologists will need to be trained and local data sets will need to be built by each country.
Dr Omalu just shakes his head while contemplating the short-sightedness of the removal of the global framework. He recalls an article that he read in July 2020 during the COVID-19 pandemic about AI heading for the third AI winter. Thinking back to the article, he is disappointed because the introduction of the cognitive AI algorithm has paved the way for an AI summer, but the removal of the Global Data Sharing and Usage Framework has pushed AI into an AI autumn.
AI winter
On 13 July 2030, Sundar Pichai is sitting on his porch pondering the development of AI over the past decade. Friends and old business associates often remind him of the prediction he made while CEO of both Google and Alphabet Inc. that the development of AI would be ‘more profound than the discovery of fire or electricity’.
The key question that haunts Sundar is how his prediction could have been so wrong. For the ten years since the COVID-19 pandemic hit the world, AI has been in what is termed the third AI winter. The two factors that largely contributed to this winter are the failure to develop AI algorithms that possess cognitive ability and the failure to develop an international framework governing the collection, sharing and use of personal data across the world.
The inability of algorithms to develop cognitive ability meant that they could only correlate inputs with outputs, doing so blindly with no understanding of the broader context. Furthermore, the use of AI algorithms was limited because there was no framework for the collection, sharing and use of personal data across the world. A growing amount and greater variety of data would have improved the accuracy and predictive ability of these algorithms and reduced biases such as those related to race and sex. A central framework would also have allowed the data to be stored in a uniform format, improving its usability.
These two factors meant that AI could not live up to its initial promise. Organisations did not obtain the envisaged benefits from their AI investments, and investment in AI projects and AI firms decreased substantially. Consequently, by 2030 AI has added far less to the global economy than PwC’s estimate of $16 trillion or McKinsey’s estimate of $13 trillion.
The AI winter also hindered the development of computer chips, as the reduced investment in AI brought a decrease in funding for research into quantum and biological computer chips. Sundar recalls the article he penned for the Wall Street Journal in December 2020, arguing that the next decade would be an AI summer. Looking back, neither of the two factors required to create an AI summer has materialised, and the world has thus been in its third AI winter since 2020.
AI spring
On 23 July 2030, Dr Leonard Musk is reminiscing about his work as the head of AI at MIT. He is searching for reasons why, despite countless hours of research across several collaborative projects and billions in funding, he and his colleagues have been unable to create algorithms with cognitive ability.
Although traditional AI algorithms are powerful pattern-recognition tools, they learn in a very narrow way, need far more data than humans to learn a new task, and require humans to provide high-level concepts through labels. They are limited to correlating inputs with outputs blindly, with no understanding of the broader context, whereas cognitive ability would enable algorithms to become the general-purpose technology that can affect every area of human life.
Dr Musk’s discouragement gives way to a smile when he thinks about the strides that have been made, especially after the adoption of the Global Data Sharing and Usage Framework by the members of the United Nations. It improved the performance of current AI algorithms by keeping data in a uniform and usable format, decreased the cost of training algorithms and reduced biases such as those related to race and sex, avoiding the creation or reinforcement of unfair bias. It also brought about breakthroughs in medicine and other fields, enabling AI algorithms and humans to work together and create more synergy than either would have achieved alone.
Although the cognitive ability of AI algorithms has eluded researchers for the past ten years, Dr Musk thinks back to an article from December 2020 claiming that AI was heading for its third winter. He takes a sip of coffee and decides that an AI spring would be a better description of the developments in AI over the past decade.
This article is based on the author’s coursework submissions towards his master’s degree in Future Studies at the University of Stellenbosch. References are available on request.