Artificial Intelligence · Beginner

What is Artificial Intelligence?

Tags: artificial intelligence, AI basics, history of AI, narrow AI, AGI

Defining Intelligence — Then Making It Artificial

[Diagram: Merriam-Webster's definition of intelligence, "the ability to learn or understand or to deal with new or trying situations", alongside the three abilities AI attempts to replicate: 1. Learn (training on labelled examples), 2. Understand (pattern recognition in data), 3. Adapt (generalise to new situations).]
Merriam-Webster's definition of intelligence — and the three abilities AI attempts to replicate in machines.

Merriam-Webster defines intelligence as "the ability to learn or understand or to deal with new or trying situations." That is exactly what AI researchers and engineers set out to build into machines. The word "artificial" here simply means intelligence that exists outside of humans — in silicon, not neurons.

Which raises the real questions: how do machines acquire intelligence? And how do they handle new or trying situations compared to how we do?

The honest answer: in much the same way we do. They either get it right, or they go badly wrong. For example, a self-driving system trained mostly on dry suburban roads can fail unpredictably on an icy motorway, and a medical classifier trained predominantly on data from one demographic may misdiagnose patients from another. Push a machine into territory it was not trained for and it responds the way any unprepared mind would: unreliably.
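The out-of-distribution failure mode can be shown with a toy model. This is an illustrative sketch, not a real driving or medical system: a straight line fitted by hand-rolled least squares to data from one narrow regime, then queried far outside it, where (by assumption) the true process changes.

```python
# Illustrative sketch only: fit a line to data from one "distribution"
# (x in 0..10), then query it far outside that range, where the true
# process is assumed to behave differently.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training regime: the true relationship here is y = 2x
# (the "dry suburban roads" of this toy example).
train_x = list(range(11))
train_y = [2 * x for x in train_x]
a, b = fit_line(train_x, train_y)

# In-distribution query: the model is spot on.
print(a * 5 + b)      # 10.0 -- matches the true value 2 * 5

# Out of distribution we assume the real process saturates at 20
# (the "icy motorway"), but the model keeps extrapolating the only
# pattern it ever saw.
print(a * 100 + b)    # 200.0 -- wildly wrong, with no warning
```

Nothing in the model signals that the second prediction is untrustworthy; the arithmetic runs with identical confidence in both cases. That silence is the core of the out-of-distribution problem.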

So what separates machine intelligence from human intelligence?

For situations that are already defined — tasks the system has encountered before, in some form, during training — machines tend to outperform us. They are more consistent, more tireless, and less prone to mood or distraction. A well-trained model doing a familiar task will get it right more often, and more reliably, than a human doing the same thing repeatedly.

What sets human intelligence apart is often described as creativity and novelty — the ability to face a genuinely unprecedented situation, one that no prior experience maps onto, and still reason through it. That has traditionally been where machines fall short.

This is, however, actively debated. Generative models now produce music, art, and writing that many experience as genuinely creative — leading some researchers to argue that human creativity is itself pattern recombination at a deeper level. Others insist that intuition, intention, and lived experience cannot be reduced to statistics. Neither side has a settled answer.

One more thing worth holding onto: an AI is only as good as the developer who trained it and the data it was trained on. And that data is, almost always, a reflection of human behaviour. Which means the intelligence inside any AI system is, at its core, a distillation of us.


A Brief History

[Timeline]
  • 1950: the Turing Test asks "can machines think?"
  • 1956: "AI" coined at the Dartmouth Workshop
  • 1970s–80s: the AI Winters, when funding dries up twice
  • 1997: Deep Blue defeats Kasparov
  • 2012: AlexNet kicks off the deep learning era
  • 2017–now: Transformers (GPT, BERT, and beyond)
Six moments that shaped AI — from a philosophical thought experiment in 1950 to the foundation models redefining the field today.

The pattern across these milestones is striking: waves of optimism, followed by disillusionment, then a single breakthrough that resets expectations entirely. Alan Turing's 1950 question, "Can a machine think?", was less a technical specification than a provocation that forced people to define intelligence at all. The Dartmouth Workshop six years later gave the field its name and an early confidence that human-level AI was a generation away. It was not.

The AI Winters are as much a part of the story as AlexNet or the Transformer. They reveal that progress in AI is discontinuous, not gradual. One architecture change, one new dataset, or one shift in available compute can move the entire field forward faster than a decade of incremental work. That pattern has not changed.

Narrow AI vs General AI

Every AI system deployed today is Narrow AI. It does one type of task extremely well and fails completely outside that domain. AGI and ASI remain theoretical — here is how the three tiers compare.

|               | Narrow AI (ANI)                          | General AI (AGI)                                      | Superintelligence (ASI)                         |
| ------------- | ---------------------------------------- | ----------------------------------------------------- | ----------------------------------------------- |
| Scope         | One specific domain                      | Any intellectual task a human can do                  | Exceeds human intelligence across all domains   |
| Exists today? | Yes                                      | No                                                    | No                                              |
| Examples      | GPT, spam filters, self-driving systems  | None                                                  | None                                            |
| Timeline      | Present                                  | Decades, centuries, or never (researchers disagree)   | Further in the future than AGI, if ever         |

Researchers disagree sharply on whether AGI is achievable at all with current paradigms — let alone when. ASI is even further in the realm of speculation. What is not speculation is the gap between the narrow tools we have today and anything resembling general intelligence.

Quick Check

A chess engine that beats world champions but cannot perform any other task is an example of:

The Two Approaches: Rules vs Learning

Historically there have been two camps in AI:

  • Symbolic AI (rule-based) encodes human knowledge as explicit rules. It is interpretable and predictable, but brittle: any situation not covered by the rules breaks it. For example, an expert system for medical diagnosis might have thousands of if-then rules written by doctors, yet fail entirely when a patient presents an unusual combination of symptoms those rules do not cover.

  • Machine learning lets the system discover its own rules from data. It generalises better, but the resulting rules are often opaque. For example, instead of writing "if the tumour is larger than 3 cm and irregular in shape, classify as malignant", you feed the algorithm thousands of labelled scans and let it find the patterns itself.
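The contrast between the two camps can be made concrete in a few lines. Everything here is invented for illustration (the 3 cm rule, the toy data): a hand-written symbolic rule sits next to a one-dimensional "decision stump" that learns its own threshold from labelled examples.

```python
# Toy contrast between the two camps. All rules, thresholds and data
# below are invented for illustration.

# --- Symbolic AI: a human expert writes the rule explicitly ---
def rule_based(size_cm, irregular):
    if size_cm > 3 and irregular:
        return "malignant"
    return "benign"

# --- Machine learning: the system derives its own rule from data ---
# Toy training set of (size_cm, label) pairs; 1 = malignant, 0 = benign.
data = [(1.0, 0), (2.0, 0), (2.5, 0), (3.5, 1), (4.0, 1), (5.0, 1)]

def learn_threshold(samples):
    """Pick the size cutoff that best separates the labels (a 1-D decision stump)."""
    best_t, best_correct = None, -1
    for t in (size for size, _ in samples):
        correct = sum((size > t) == bool(label) for size, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

threshold = learn_threshold(data)       # 2.5 on this toy data

def learned(size_cm):
    return "malignant" if size_cm > threshold else "benign"

print(rule_based(4.0, irregular=True))  # malignant -- rule written by an expert
print(learned(4.0))                     # malignant -- rule discovered from data
```

Note what each approach gives you: the symbolic rule is fully readable but covers only what the expert anticipated, while the learned threshold adapts to whatever the data shows, at the cost of the "rule" living inside a fitted number rather than a sentence a doctor wrote.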

Modern AI is dominated by machine learning, specifically deep learning, but symbolic approaches are still used where interpretability is non-negotiable: legal reasoning, medical decision support, and safety-critical systems all rely on them.

Quick Check

A fraud detection system is built by a team of banking experts who write 2,000 explicit rules covering known fraud patterns. When a completely new type of fraud emerges, the system misses it entirely. What does this illustrate?

Why AI Matters Now

Three things converged in the last decade that made AI genuinely transformative:

  • Data at scale. Every smartphone, website interaction, and connected sensor generates data. Models need examples to learn from — and we now have billions of them.
  • Cheap compute. GPUs originally built for video games turned out to be perfect for the matrix operations neural networks require, and cloud platforms made that hardware available to anyone.
  • Open research — and its limits. Major breakthroughs such as the Transformer architecture, diffusion models, and RLHF were published openly, allowing the global research community to build on them rapidly. This dynamic began shifting around 2022–2023. Frontier labs — OpenAI, Google DeepMind, Anthropic — have increasingly moved toward closed development, sharing little about their architectures, training data, or methods. At the same time, a parallel open-source ecosystem has grown around models like Meta's LLaMA and Mistral, aiming to keep capable AI accessible. Whether openness or closed development produces better and safer AI is itself an active debate in the field, with reasonable arguments on both sides.

The result is that AI has moved from academic curiosity to infrastructure: it now runs inside search engines, hospital diagnostic tools, code editors, financial systems, and supply chains. Understanding what it is, and what it is not, is no longer optional for anyone working in technology.

What AI Cannot Do

Despite the hype, AI today has real limits:

  • It does not understand — it finds statistical patterns
  • It fails badly outside its training distribution
  • It requires enormous amounts of labelled data for most tasks
  • It cannot reliably reason about novel situations the way humans can
  • It has no goals, intentions, or common sense

These are not merely engineering problems to be solved with more compute. Some are fundamental to how current AI works. Knowing the limits is as important as knowing the capabilities.

Test Your Knowledge

Ready to check how much you remember? Take the quiz for What is Artificial Intelligence? and see your score on the leaderboard.
