
AI Is Not So New: Everything You Should Know About Its History


Fri, January 03

When was the last time you heard something about AI? Maybe five minutes ago? Maybe just now, when you googled something and found this article?

AI has slipped into our daily routine as smoothly as water filling the empty space in a bucket full of rocks. When something gets into your life this easily, it's worth taking a closer look at it, or at least finding out where it actually came from. That's what we will be doing today: this article is about how AI started out in our world, how it got better, and what it is in our present day.


The Very Beginning

The story starts in 1950. Alan Turing, a mathematician and computer scientist, proposed the idea of the Turing Test, something you can now hear mentioned almost anywhere AI comes up. In the test, a person is given the task of figuring out which of two other participants is a computer and which is a human, using only their responses to written questions. By the way, a chatbot was first claimed to have passed the test in 2014.

What’s interesting is that the two-word term “Artificial Intelligence” only appeared in 1956, at a conference held at Dartmouth College. That conference also presented the Logic Theorist, a program capable of symbolic reasoning: instead of crunching numbers, it manipulated logical symbols and managed to prove mathematical theorems on its own.

Let’s move forward several years, to when the golden times of AI began.

The Golden Times (1st Times)

As mentioned, some years later, in 1960, John McCarthy (one of the organisers of the Dartmouth conference, by the way) developed Lisp, a programming language that quickly became the most widely used language for AI programs. It would soon be used to create several iconic projects.

In 1966, ELIZA, the very first chatbot, was developed (written in MAD-SLIP, a list-processing language, though it was later re-created in Lisp many times). Talking with ELIZA gave you the feeling of talking with a psychotherapist, which is essentially what it was meant to imitate. ELIZA was also one of the first AI programs to attempt the Turing Test; it failed, and so did many programs after it.

The Winter of AI Research

The development of ELIZA started a trend of similar AI programs. Unfortunately, none of them were capable of “general intelligence”, in other words imitating an ordinary human rather than a psychotherapist. The slow progress towards this goal soon killed the optimism, and funding for AI research around the world was cut back.

Instead, the money went into narrower AI applications, for example MYCIN, developed in 1972: an expert system capable of diagnosing certain bacterial infections and suggesting the right medication to treat them. It never made it into everyday medical practice, unfortunately.

The Modern Golden Times (2nd Times)

As people kept developing the Internet (the one modern source capable of providing enough data for AI) the end of the “winter” drew closer. Around the same time, neural networks came back into fashion, and the idea of Machine Learning (ML from here on) started rising in popularity.

A famous AI milestone of that era was Deep Blue, the first chess-playing computer to beat a reigning world champion, Garry Kasparov, in 1997 (though, to be fair, Deep Blue relied more on brute-force search than on ML).

At this point, we slowly approach the years when most of us were already born. As AI research continued, the deep learning revolution began. Deep learning, basically an upgrade of ML that allows for even more complex tasks, led to the creation of things like the GPT models (which later powered ChatGPT) and BERT. The first GPT model, GPT-1, was released in June 2018, while BERT, a Google language model used to understand search queries rather than chat with you, was released in October 2018. Both are considered major reasons for the AI popularity boom.

About GPT models

GPT-1, the first one, released in June 2018 as mentioned, was a relatively small language model compared to the modern GPT-4. The model learned from about 7,000 books in a dataset called BookCorpus.

This was not the biggest dataset available, but it was chosen because books contain long, continuous text, so the model could work with long user requests just as well as with short ones. After the book training, the model was also “fine-tuned”: simply put, it was given extra data on specific topics, making it more specialised in those exact subjects.
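If you're curious what fine-tuning looks like in practice, here is a minimal sketch in Python using the Hugging Face transformers library. GPT-2 stands in for GPT-1 here, and the file my_topic.txt is a made-up placeholder for your own topic-specific text; the tiny settings are just for illustration, not a real training recipe.

```python
# Minimal fine-tuning sketch: take a pre-trained GPT-2 and keep training it
# on one plain-text file about a specific topic (hypothetical "my_topic.txt").
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, Trainer,
                          TrainingArguments, TextDataset,
                          DataCollatorForLanguageModeling)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # already trained on general text

# Chop the topic text into fixed-size chunks the model can learn from.
dataset = TextDataset(tokenizer=tokenizer, file_path="my_topic.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()   # after this, the model is a bit more "specialised" in that topic
```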

GPT-2’s main difference from GPT-1 was that fine-tuning was no longer mandatory. BookCorpus was replaced with a dataset of about 8 million web pages collected from links shared on Reddit, and the model showed much better results than GPT-1, which made it possible to skip fine-tuning and still use it without much trouble.

GPT-3, the one most of us have probably already experienced, received a new feature: in-context learning, which means the model can pick up a pattern from the examples you put directly into your request, without being retrained. GPT-3.5, sitting between GPT-3 and GPT-4, is, as you might guess, just GPT-3 with some of the basic stats improved (e.g. dataset size). Internet browsing, which lets the model collect fresh information for its responses, also appeared around this generation of models.
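To picture in-context learning, here is a tiny made-up example: the “teaching” happens inside the request itself, not inside the model’s weights.

```python
# In-context learning in one picture: we show the model a couple of examples
# inside the prompt, and it continues the pattern. The word pairs are made up
# for illustration; no API call is made here.
prompt = (
    "Translate English to French.\n"
    "sea -> mer\n"
    "sky -> ciel\n"
    "tree -> "   # a GPT-3-style model would typically continue this with "arbre"
)
print(prompt)
```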

GPT-4 is the modern GPT model most of us use now. The main upgrade from 3.5 was image recognition (with speech support added later on), but it also came with nice bonuses: fewer mistakes in mathematics, a better understanding of the specific terms used in different sciences, and enhanced creativity.

After the first GPT model, things such as Midjourney, DALL-E and many more appeared. Today AI is almost everywhere in our lives, and one of the main things stopping it from integrating even further is that there is not enough data on the internet to satisfy deep-learning neural networks’ appetite. This has led people to try training AI on data it generated itself, but a model cannot train purely on its own output, or it will get “poisoned” and stop giving satisfying results. Google “poisoned AI” for an image. A related kind of “poisoning” is now deliberately used by artists to keep their images from being used for AI training without their consent. Interesting, isn’t it?

Vlas Vigilianskii

Vlas Vigilyanskiy studies at a Russian school and works through a homeschooled A-Levels programme in his free time. The small moments left between lessons are mostly devoted to balisong flipping.
