A Brief History of Artificial Intelligence


Not a day goes by without news about advancements in the artificial intelligence (AI) field.

The growing adoption of AI technologies such as Machine Learning based on neural networks and Natural Language Processing has created new markets, like that of the smart assistants from Google and Amazon.

However, the applications of artificial intelligence reach much further.

There are numerous use cases, ranging from self-driving cars to fraud prediction in finance. But the implementation of AI isn’t limited to tech giants. StopAd is an example of a smaller company harnessing the power of AI subsets like Machine Learning and Computer Vision to increase the effectiveness of ad blocking.

These advances are accompanied by continuous debate on the possible implications of AI and its impact on society at large. Opinions range from outright alarmist, as expressed by Elon Musk, to highly optimistic, such as the vision of the Technological Singularity put forward by scientist and futurist Ray Kurzweil. Musk believes that powerful AI-driven machines controlled by monopoly-like businesses will take over. Kurzweil believes that humanity will greatly benefit from human-machine symbiosis.

It is no wonder that the opposing views of opinion leaders and the overall hype have contributed to a distorted perception of AI among laypeople. In recent years, AI has been in the spotlight, but coverage has concentrated mostly on recent innovations, creating a sense that AI research is a new trend in computer science. In this article, we’ll look at the decades-long, complicated history of AI to help dispel the myth that this powerful technology has burst into existence ready to take over the world.

Who Started Artificial Intelligence?

The history of AI is somewhat of a myth for a significant part of the population. Once in a while I run into people who think the concept of AI appeared shortly after The Terminator was filmed. In fact, the beginning of AI research dates back to the 1950s; in 2006, Dartmouth College hosted a conference marking the 50th anniversary of the field. The term “Artificial Intelligence” was coined in 1956 by John McCarthy, who was a math professor at Dartmouth at the time. He was applying for a grant to fund his research, known as “The Dartmouth Summer Research Project on Artificial Intelligence.” His goal for the project was:

“To proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” John McCarthy

The Imitation Game

However, a full six years before that, mathematician Alan Turing, also known as the “father of computer science,” introduced a concept that is now known as the Turing Test. It is described in his paper “Computing Machinery and Intelligence.” Turing posed the question: “Can machines think?” However, instead of getting caught up in defining “machine” and “intelligence,” he proposed an experiment known as the “Imitation Game.”

The Imitation Game involves three parties: an interrogator, a human, and a machine. The interrogator engages in a written question-and-answer dialogue with both the human and the machine for a set amount of time. If the interrogator is unable to tell which answers came from the human and which from the machine, the machine is judged to have passed the test.
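Turing described the Imitation Game only in prose, but its structure is simple enough to sketch in code. Below is a minimal, hypothetical Python sketch of a single round; the judge, human_reply, and machine_reply callables are illustrative stand-ins you would supply yourself, not part of any established benchmark.

import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """One simplified round of Turing's Imitation Game (hypothetical sketch)."""
    # Randomly hide the human and the machine behind the anonymous labels A and B.
    labels = ["A", "B"]
    random.shuffle(labels)
    respondents = {labels[0]: human_reply, labels[1]: machine_reply}
    machine_label = labels[1]

    # The interrogator questions both parties in writing and collects transcripts.
    transcripts = {
        label: [(question, respond(question)) for question in questions]
        for label, respond in respondents.items()
    }

    # The judge inspects the anonymous transcripts and guesses which label is the machine.
    guess = judge(transcripts)

    # The machine "passes" this round only if the judge guesses wrong.
    return guess != machine_label

A serious evaluation would run many such rounds with many interrogators, but the sketch captures the core idea: the machine wins only by being indistinguishable from the human.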

Turing’s test is remarkable because it offers a practical method and a meaningful framework for measuring what Turing considered a computer’s capability to think. The test has its flaws, but to this day no computer has passed it in sustained conversation: sooner or later the machine’s responses degenerate into random blabber.

What does this test mean for the current state of AI?

Obviously, there’s a lot of room for improvement across the various subsets of AI before the test can be passed. Critically, however, passing the Turing Test is not the sole purpose of AI research. This paper from the University of Washington on the history of artificial intelligence provides a great explanation of the basics of Turing’s test and other concepts crucial to understanding the development of AI research.

AI Research Ups and Downs: AI Winters

Importantly, the history of artificial intelligence is marked by alternating periods of interest and increased funding and stages of financial cutbacks, known as “AI winters.” AI historians identify two winters: the first in the 1970s and the latest between 1987 and 1993. During an AI winter, technological limitations such as a lack of computing power slow down research, while more pessimistic views gradually replace high expectations.

Does this mean that the current hype precedes the next interval of disenchantment with AI?

Andrew Ng, former chief scientist at Baidu and a Stanford University professor specializing in Machine Learning, is optimistic:

“Multiple [hardware vendors] have been kind enough to share their roadmaps. . . I feel very confident that they are credible and we will get more computational power and faster networks in the next several years.”— Andrew Ng

Investment statistics provided by the McKinsey Global Institute confirm that tech giants are serious about AI and its advancement. McKinsey estimates that in 2016, Google and Baidu spent around $20 to $30 billion on internal R&D and on acquiring startups in the field. Companies focused on Machine Learning received 60% of the tech giants’ external investment, with Computer Vision and Natural Language Processing in second and third place, respectively.

Large tech companies are leading the way in incorporating Machine Learning and related technologies into their consumer-facing products, but even smaller businesses can already gain measurable benefits from embracing certain aspects of AI. Think of chatbots, which will continue to be a trend in 2018 after proving their value for brands that want to improve the customer experience and stay in touch with their audience. At the same time, training such bots requires time and a sufficient amount of data. Other use cases include existing Machine Learning implementations in cybersecurity and healthcare (IBM’s Watson helps suggest treatments and establish diagnoses).

Further Reading on the History of AI

Like any other field of science, AI research has faced many challenges on its way from a purely academic discipline to real-life application. Current progress rests on a solid foundation that has been building for decades.

If this short overview got you interested in more detailed accounts, check out this piece from the Harvard blog and this AI timeline posted by Carnegie Mellon University. The University of Washington paper cited above is another great resource to consider. Finally, chapter 9 of Funding a Revolution: Government Support for Computing Research is dedicated to the development of AI and the role US government agencies have played in the process.

Which aspect of AI is most interesting to you?
