Artificial intelligence


Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. As an academic field, AI was founded at a workshop held at Dartmouth College in 1956, where the term was coined by John McCarthy.

The field has experienced cycles of optimism and disappointment (so-called "AI winters") since its inception. From the 2010s onward, advances in deep learning, the availability of large datasets, and increases in computing power produced rapid progress in areas including computer vision, speech recognition, and natural language processing. The 2020s saw the rise of large-scale generative AI systems based on the transformer architecture, including GPT-4, Claude, and Gemini.

History

The intellectual roots of AI lie in philosophy, mathematics, and early cybernetics. Alan Turing's 1950 paper "Computing Machinery and Intelligence" introduced what is now called the Turing test as a criterion for machine intelligence. The 1956 Dartmouth workshop is widely regarded as the founding event of AI as a discipline. Early successes in symbolic reasoning and game-playing gave way to the first "AI winter" in the 1970s as funding dried up. Expert systems revived interest in the 1980s, followed by another downturn. The current era began with breakthroughs in neural-network training in the late 2000s and AlexNet's landmark result in the 2012 ImageNet competition.

Approaches

AI research is broadly divided into:

  • Symbolic AI — manipulating high-level human-readable symbols according to logical rules. Dominant from the 1950s through the 1980s.
  • Machine learning — systems that learn patterns from data. Includes supervised, unsupervised, and reinforcement learning.
  • Deep learning — multi-layer artificial neural networks, responsible for most modern advances.
  • Statistical and probabilistic methods — Bayesian networks, hidden Markov models, and similar.
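Of these approaches, supervised machine learning is the easiest to illustrate concretely: a model infers a decision rule from labelled examples. The following sketch uses a 1-nearest-neighbour classifier on a tiny hypothetical dataset; the data points and labels are invented for illustration only.

```python
import math

# Toy labelled training data: (feature vector, class label).
# Two small clusters, labelled "A" and "B" (hypothetical values).
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]

def predict(point):
    """Label a point with the class of its nearest training example."""
    _, label = min(train, key=lambda ex: math.dist(ex[0], point))
    return label

print(predict((1.1, 0.9)))  # near the "A" cluster -> "A"
print(predict((4.1, 4.1)))  # near the "B" cluster -> "B"
```

Unsupervised learning drops the labels and looks for structure in the inputs alone, while reinforcement learning replaces the labelled dataset with a reward signal obtained by acting in an environment.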

Modern systems

Since 2017, the transformer architecture has dominated work in natural language processing and, increasingly, vision and audio. Large language models such as GPT-3, GPT-4, Claude, LLaMA, and others are trained on hundreds of billions of tokens of text and can perform a wide variety of tasks without task-specific fine-tuning. These systems are produced by organisations including OpenAI, Anthropic, Google DeepMind, Meta, and Microsoft.
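The central operation of the transformer is scaled dot-product attention: each query vector is compared against a set of key vectors, and the resulting similarity weights are used to mix the corresponding value vectors. A minimal single-query sketch, with tiny hypothetical vectors chosen purely for illustration:

```python
import math

def softmax(xs):
    """Convert raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is the weight-mixed combination of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
print(out)  # weighted toward the first value, whose key matches the query
```

Real transformers apply this operation in parallel over many queries and attention heads and interleave it with learned linear projections and feed-forward layers, but the weighting mechanism above is the core idea.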

Applications

AI is now embedded in many everyday technologies, including web search, recommendation systems, machine translation, voice assistants, autonomous vehicles, medical imaging analysis, drug discovery, code generation, and content creation. It is also used in scientific research — for example, AlphaFold dramatically advanced protein structure prediction.

Safety and ethics

The rapid capability gains of large AI systems have intensified debate about AI safety, including concerns about misuse, bias, labour displacement, misinformation, and longer-term existential risks from artificial general intelligence. Major AI labs and governments have begun establishing evaluation frameworks, red-teaming practices, and regulatory regimes such as the EU AI Act.

See also