A Brief History of Artificial Intelligence

Happy Weekend Friends!

Today's article covers a brief history of AI, an incredibly rich history of research and experimentation. The goal is to cover topics lightly, without diving in too deeply or rambling.

Introduction

My aim is to cover the birth of AI as a field of study, some key milestones in its development, and recent advancements in Natural Language Processing (NLP). If that tickles your fancy, then buckle up.

It is important to outline the two driving factors that led to the AI of 2025. The first is processing power. The invention of the transistor, the central processing unit (CPU), and the graphics processing unit (GPU), and the resulting growth in compute, has completely changed the way we view computers and AI.

The second is the amalgamation of data. Not just the collection and connection of data, but our ability to structure, label, and query it in ways that are easier and faster for machines to digest at scale. Not long ago, simply having a large, clean dataset was unthinkable, let alone a trainable, labelled dataset of words and images.

Origin

"Can Machines Think?"

It is important to understand that artificial intelligence is NOT NEW. That's right, the field of machines that can imitate humans goes back further than the 1950s. For our sake, we will start in 1950 with our British friend, Alan Turing, and his paper "Computing Machinery and Intelligence" (1), where he posed the question, "Can machines think?"

Alan Mathison Turing, Mathematician and Computer Scientist, 1951 (2)

Turing moved beyond the hard-to-define concepts of "think" and "machine" by proposing "The Imitation Game," now known as the Turing Test. Rather than wrestle with vague definitions, he offers a scenario. Imagine two contestants, a machine and a human, who are both asked natural-language questions and give their answers to a third person. The machine and the human are hidden from that third person, the interrogator. If the interrogator cannot tell the machine from the human based on the answers given, the machine has passed the Turing Test (3).
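
If you like to think in code, here is a minimal sketch of that setup in Python. Everything in it is hypothetical: the human, the machine, and the judge are just placeholder callables you would supply yourself. The point is only the key constraint, that the judge sees nothing except text.

```python
import random

def imitation_game(questions, human_respond, machine_respond, judge):
    """Run one round of the imitation game (a sketch, not Turing's exact protocol).

    human_respond, machine_respond: callables mapping a question string to an answer string.
    judge: callable that receives the anonymised transcripts and returns
        the label ("A" or "B") it believes belongs to the machine.
    Returns True if the judge misidentifies the machine, i.e. the machine "passes".
    """
    # Hide the contestants behind anonymous labels, in random order.
    labels = ["A", "B"]
    random.shuffle(labels)
    contestants = {labels[0]: human_respond, labels[1]: machine_respond}

    # Collect answers; the judge only ever sees text, never the source.
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in contestants.items()
    }

    guess = judge(transcripts)
    machine_label = labels[1]
    return guess != machine_label
```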

Etymology

The term "Artificial Intelligence" was coined at Dartmouth Conference in 1956. It is attributed to John McCarthy (4), the founder of LISP. Combining experts in Neural Networks, the Theory of Computation, and Automata Theory to see which aspects of humans could be replicated by artificial intelligence.

Advancements in Computer Technology

Truly a story of legends. The origin story of Silicon Valley is hardly rivaled in modern history for its scale of innovation, running from William Shockley in 1956 to today's giants like Intel, Google, and many others. The inventions of the semiconductor, the transistor, and the microprocessor changed the face of the world we live in and set the stage for legendary tales and titans of industry. Although it is outside the scope of today's writing, I highly advise you to listen to the Acquired podcast (6). It is incredible how instrumental Bell Labs, the "traitorous eight," and Fairchild Semiconductor were in creating the world we live in today, including the direct line of progress to the chatbots we are using in 2025.

When it comes to the advancement of computational power, one law stands above them all: Moore's Law (7), which observes that the number of transistors on an integrated circuit doubles roughly every two years. Considering this observation has largely held true since 1975, it is a critical component of our journey toward AI plausibility.
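
To make the doubling concrete, here is a quick back-of-the-envelope sketch in Python. The Intel 4004 baseline and the 2025 target year are just illustrative inputs, not claims about any specific modern chip.

```python
def projected_transistors(baseline_count, baseline_year, target_year,
                          doubling_period_years=2.0):
    """Project a transistor count under Moore's Law-style doubling."""
    doublings = (target_year - baseline_year) / doubling_period_years
    return baseline_count * 2 ** doublings

# Illustration: the Intel 4004 (1971) had roughly 2,300 transistors.
print(f"{projected_transistors(2_300, 1971, 2025):,.0f}")
# -> roughly 3 x 10^11, the same order of magnitude as today's largest chips
```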

Advancements from the 1990s and early 2000s

It took decades for AI to become a mainstream talking point, which finally happened in 1997 when IBM's Deep Blue defeated world chess champion Garry Kasparov (8). In the late 1990s and early 2000s, the amalgamation of data and a significant increase in computing power allowed major strides in AI research. Even by 2007, Sony's Smile Shutter technology was already identifying faces on camera screens (9).

During this time, the Stanford ImageNet project had been growing. Started in 2006 by data scientist Li Fei-Fei (10) (11), the dataset has since grown to over 14 million labelled images available for academic use in training AI models. This has been a huge advancement for the "data" portion of the AI requirements.

The Godfather of AI

Geoffrey Everest Hinton was awarded the Nobel Prize in Physics in 2024 (12) for "foundational discoveries and inventions that enable machine learning with artificial neural networks." In 1986, Hinton, David Rumelhart, and Ronald J. Williams published a paper on the backpropagation algorithm (13) (14), contributing not only to the establishment of neural networks but also to the training and tuning machinery necessary to make them useful to humans. Truly great accomplishments that are still critical for training and tuning models in 2025.
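
For readers who want to see what backpropagation actually does, here is a minimal sketch of it on a tiny two-layer network in plain NumPy. The network size, data, and learning rate are made up purely for illustration; real frameworks compute these same gradients automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny made-up dataset: 4 samples, 3 features, 1 target each.
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Two-layer network: 3 inputs -> 5 hidden units (tanh) -> 1 output (linear).
W1, b1 = rng.normal(scale=0.5, size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(scale=0.5, size=(5, 1)), np.zeros(1)
lr = 0.1

for step in range(200):
    # Forward pass.
    h_pre = X @ W1 + b1
    h = np.tanh(h_pre)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: propagate the error back through each layer
    # with the chain rule (Rumelhart, Hinton & Williams, 1986).
    d_yhat = 2 * (y_hat - y) / len(X)           # dLoss / dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                         # error at the hidden layer
    d_hpre = d_h * (1 - np.tanh(h_pre) ** 2)    # through the tanh nonlinearity
    dW1 = X.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(f"final mean-squared error: {loss:.4f}")
```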

Advancements from 2012 to 2022

The biggest advancements from 2012 to 2022 were the increase in computational power and the amount of available data. More effort was put into building labelled, structured datasets that were optimized for machines to learn on. For example, ImageNet hosted yearly competitions for model training. The winning entry in 2012, AlexNet from the University of Toronto (15), took roughly two weeks to train on the GPUs of the day; by November of 2022, the same model could be trained in about five minutes. A monumental effort to tie computational abilities together meant that large amounts of data were now being processed (trained on) simultaneously. A massive breakthrough on two fronts: both computational power and organized data were primed to lead to an explosion of AI.

A third metric was added alongside computational power and data availability and organization: MLPerf (16), a set of benchmarks that helps measure the pace of AI progress. If you think Moore's Law is impressive, the gains in AI models from 2022 to the present are just incomprehensible. Imagine what the future holds for these models.

Cue OpenAI (17) and, eventually, ChatGPT (18), followed by Anthropic, Gemini, and beyond. I am extremely excited to see what the future holds, and I hope you are too. For better or for worse, we are all in this together. As the world changes, I hope it's for the betterment of all.

Thanks for reading, let me know what I should add.

And add. (19)

REFERENCES

  1. Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
  2. https://www.turing.org.uk/sources/archive.html
  3. https://en.wikipedia.org/wiki/Turing_test
  4. https://www.britannica.com/biography/John-McCarthy
  5. https://www.acquired.fm/episodes/adapting-episode-3-intel
  6. https://www.acquired.fm/episodes/adapting-episode-3-intel
  7. https://en.wikipedia.org/wiki/Moore%27s_law
  8. https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
  9. https://www.cnet.com/culture/say-cheese-sony-technology-focuses-on-smiles/
  10. https://image-net.org/about.php
  11. https://www.historyofdatascience.com/imagenet-a-pioneering-vision-for-computers/#:~:text=Set%20up%20by%20data%20scientist,It%20all%20began%20in%201985.
  12. https://en.wikipedia.org/wiki/Geoffrey_Hinton#:~:text=Geoffrey%20Everest%20Hinton%20(born%201947,%22the%20Godfather%20of%20AI%22.
  13. https://en.wikipedia.org/wiki/Backpropagation
  14. Rumelhart, D., Hinton, G. & Williams, R. Learning representations by back-propagating errors. Nature 323, 533–536 (1986). https://doi.org/10.1038/323533a0
  15. https://www.pinecone.io/learn/series/image-search/imagenet/
  16. https://spectrum.ieee.org/mlperf-rankings-2022
  17. https://en.wikipedia.org/wiki/OpenAI
  18. https://en.wikipedia.org/wiki/ChatGPT
  19. https://people.idsia.ch/~juergen/who-invented-backpropagation.html