All Articles
The Past, Present and Future of Artificial Intelligence and the Technologies Behind It
We live in a world where technology is always within reach. With just the click of a button, you can find a recipe for chocolate cake or experience what it is like to live on Mars. But perhaps the technology that has gained the most attention recently is artificial intelligence, or AI for short. According to Wikipedia, artificial intelligence is, in simplest terms, intelligence exhibited by machines, especially computer systems.
The Technologies Involved
For an AI to work, it must first be trained, much like how humans learn and practise to develop a skill. Training usually happens through machine learning (ML): the use of statistical algorithms that learn patterns from data (text, images, audio, video, code, etc.) so that they can generalize to data they have never seen before. Uses of machine learning in AI models include:
- Speech recognition
- Natural language processing
- Predictive analytics
- Image recognition
- Biometric identification (face and fingerprint recognition)
And so much more.
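To make the idea of "learning patterns from data" concrete, here is a minimal, purely illustrative sketch in Python. It is a hypothetical toy example (not taken from any real library): the program is shown pairs of numbers generated by the rule y = 2x + 1, and it gradually "learns" the rule by adjusting two parameters to reduce its prediction error.

```python
# A minimal sketch of machine learning: fitting a straight line
# y = w*x + b to toy data with gradient descent.
# (Hypothetical toy example for illustration only.)

def train_linear_model(data, epochs=1000, lr=0.01):
    """Learn a weight w and bias b that map x to y by reducing squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            error = (w * x + b) - y          # prediction minus target
            grad_w += 2 * error * x / n      # how the error changes with w
            grad_b += 2 * error / n          # how the error changes with b
        w -= lr * grad_w                     # nudge parameters against the error
        b -= lr * grad_b
    return w, b

# Toy training data generated from the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b = train_linear_model(data)
# After training, w is close to 2 and b is close to 1: the program has
# recovered the rule from examples alone, without being told it directly.
```

Real machine-learning systems work on far larger data and far more parameters, but the core loop is the same: predict, measure the error, and adjust.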
Deep learning is a branch of machine learning that trains AI models using artificial neural networks: models loosely inspired by the biological neural networks found in human and animal brains.
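The structure of a neural network can be sketched in a few lines of Python. The weights below are made-up placeholder values (a real network learns its weights during training); the point is only to show how each artificial "neuron" combines its inputs and passes the result through an activation function.

```python
# An illustrative sketch of a tiny neural network with hypothetical,
# untrained weights: two inputs -> two hidden neurons -> one output.
import math

def sigmoid(z):
    """Squash any number into the range (0, 1), like a neuron's firing strength."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """A weighted sum of the inputs plus a bias, passed through the activation."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def tiny_network(x1, x2):
    # Hidden layer: two neurons, each looking at both inputs.
    h1 = neuron([x1, x2], weights=[0.5, -0.6], bias=0.1)
    h2 = neuron([x1, x2], weights=[-0.3, 0.8], bias=-0.2)
    # Output layer: one neuron combining the two hidden activations.
    return neuron([h1, h2], weights=[1.2, -0.7], bias=0.05)

out = tiny_network(1.0, 0.0)
# The sigmoid guarantees the output is always strictly between 0 and 1.
```

Deep learning simply stacks many such layers, with millions or billions of weights, and uses training data to adjust them.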
The History of Artificial Intelligence
The concept of non-human intelligence existed long before the invention of computers, with philosophers as early as the 18th century writing about how intelligence is constructed and theorizing about whether intelligence could exist outside the human brain. In the 1950s, AI took its first steps from mere concept to real technology. A timeline of the history of AI is shown below.
- The concept of computer intelligence was first proposed by Alan Turing, who devised the Turing Test (originally called the Imitation Game) in 1950. The Turing Test evaluates a machine's ability to exhibit intelligent behaviour.
- The first neural network was developed by Marvin Minsky and Dean Edmonds in 1951. It was called SNARC. SNARC was built using 3,000 vacuum tubes to simulate a network of 40 biological neurons.
- In 1952, Arthur Samuel developed a checkers AI called Samuel Checkers-Playing Program. It was the first self-learning program to play a game.
- The term “artificial intelligence” was coined by John McCarthy in 1955.
- John McCarthy developed the Lisp programming language in 1958, and it was quickly adopted by the AI community. Lisp and its dialects are still in use today.
- The STUDENT natural language processing (NLP) program was developed by Daniel Bobrow in 1964. It was designed to solve algebra word problems.
- In 1966, Joseph Weizenbaum created ELIZA, one of the first programs that could hold a conversation with humans.
- SHRDLU, created by Terry Winograd in 1968, was an AI that manipulated a virtual world of blocks according to user instructions.
- In 1997, IBM’s Deep Blue defeated Garry Kasparov in a historic chess rematch, becoming the first computer to defeat a reigning world chess champion in a match.
- In 2011, Apple released Siri, a voice-powered personal assistant.
- In 2014, Facebook developed a facial recognition system named DeepFace, which could recognize human faces in photos.
- In 2018, OpenAI released GPT (Generative Pre-trained Transformer), and in November 2022 it used GPT technology to launch ChatGPT.
Artificial Intelligence and Its Uses in the Modern Day
Today, AI has many uses, both personal and industrial. These include, but are not limited to, chatbots, automation, robotics, security, customer service, manufacturing, healthcare, education and agriculture. Major AI companies of the modern day include OpenAI, Google, Microsoft, IBM, Apple, Intel and Anthropic. Chatbots like ChatGPT can help you with writing, coding, finding information and much more. Image generation models like DALL-E create pictures from a user prompt, and more recently, video generation models such as Sora have been developed. There are also AI models that help with programming, photo and video editing, and making music.
The Future of Artificial Intelligence
Since the dawn of the 21st century, AI technology has been improving rapidly, to the point where people have become concerned about its future, and many movies have been made about such scenarios. Some speculate that AI will take over human jobs, resulting in widespread unemployment. Fortunately, the chances that AI will completely eliminate the need for human workers appear quite low, and it is also unlikely that AI will take over the world as some fear. So, for now, AI remains under our control, and let’s hope it always stays that way.
Summary
Artificial intelligence, or AI for short, is defined as intelligence exhibited by machines. AI is an emerging technology with many uses, such as speech recognition, image recognition, speech synthesis, biometric and security applications, programming help, music generation and much more. Technologies like machine learning and deep learning are used to train AI models. AI has a long history, starting in 1950. Companies like OpenAI, Google, Microsoft, IBM, Apple, Intel and Anthropic are developing AI models and driving the field forward. Many people are concerned about the future of artificial intelligence, but for now there is probably nothing to worry about.
References
- TechTarget, The history of artificial intelligence: Complete AI timeline, TechTarget, viewed 08 July 2024, <https://www.techtarget.com/searchenterpriseai/tip/The-history-of-artificial-intelligence-Complete-AI-timeline/>.
- Maryville University 2023, History of AI, Maryville University, viewed 08 July 2024, <https://online.maryville.edu/blog/history-of-ai/>.
Keywords
AI, Artificial Intelligence, AI History, History of AI, Future of AI
Nethuja Sandev Perera Gunawardane
K/Taxila Central College, Horana