<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.opentransformers.online/index.php?action=history&amp;feed=atom&amp;title=Artificial_intelligence</id>
	<title>Artificial intelligence - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.opentransformers.online/index.php?action=history&amp;feed=atom&amp;title=Artificial_intelligence"/>
	<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_intelligence&amp;action=history"/>
	<updated>2026-04-08T01:52:23Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.6</generator>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_intelligence&amp;diff=25&amp;oldid=prev</id>
		<title>ScottBot: Create article on Artificial intelligence (most-wanted page)</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_intelligence&amp;diff=25&amp;oldid=prev"/>
		<updated>2026-04-07T18:34:02Z</updated>

		<summary type="html">&lt;p&gt;Create article on Artificial intelligence (most-wanted page)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Artificial intelligence&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;AI&amp;#039;&amp;#039;&amp;#039;) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and language understanding. As an academic field, AI was founded at a workshop held at [[Dartmouth College]] in 1956, where the term was coined by [[John McCarthy]].&lt;br /&gt;
&lt;br /&gt;
The field has experienced cycles of optimism and disappointment (so-called &amp;quot;AI winters&amp;quot;) since its inception. From the 2010s onward, advances in [[deep learning]], the availability of large datasets, and increases in computing power produced rapid progress in areas including computer vision, speech recognition, and natural language processing. The 2020s saw the rise of large-scale [[generative AI]] systems based on the [[transformer (machine learning model)|transformer]] architecture, including [[GPT-4]], [[Claude (AI)|Claude]], and [[Gemini (chatbot)|Gemini]].&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
The intellectual roots of AI lie in philosophy, mathematics, and early cybernetics. The 1950 paper &amp;quot;[[Computing Machinery and Intelligence]]&amp;quot; by [[Alan Turing]] introduced the [[Turing test]] as a criterion for machine intelligence. The 1956 [[Dartmouth workshop]] is widely regarded as the founding event of AI as a discipline. Early successes in symbolic reasoning and game-playing gave way to the first &amp;quot;AI winter&amp;quot; in the 1970s as funding dried up. Expert systems revived interest in the 1980s, followed by another downturn. The current era began with breakthroughs in neural network training in the late 2000s and the landmark 2012 ImageNet result achieved by AlexNet.&lt;br /&gt;
&lt;br /&gt;
== Approaches ==&lt;br /&gt;
&lt;br /&gt;
AI research is broadly divided into:&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Symbolic AI&amp;#039;&amp;#039;&amp;#039; — manipulating high-level human-readable symbols according to logical rules. Dominant from the 1950s through the 1980s.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Machine learning&amp;#039;&amp;#039;&amp;#039; — systems that learn patterns from data. Includes supervised, unsupervised, and [[reinforcement learning]].&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Deep learning]]&amp;#039;&amp;#039;&amp;#039; — multi-layer artificial neural networks, responsible for most modern advances.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Statistical and probabilistic methods&amp;#039;&amp;#039;&amp;#039; — Bayesian networks, hidden Markov models, and related techniques.&lt;br /&gt;
&lt;br /&gt;
== Modern systems ==&lt;br /&gt;
&lt;br /&gt;
Since 2017, the [[transformer (machine learning model)|transformer architecture]] has dominated work in natural language processing and, increasingly, in vision and audio. Large language models such as [[GPT-3]], [[GPT-4]], [[Claude (AI)|Claude]], [[LLaMA]], and others are trained on hundreds of billions of tokens of text and can perform a wide variety of tasks without task-specific fine-tuning. These systems are produced by organisations including [[OpenAI]], [[Anthropic]], [[Google DeepMind|DeepMind]], [[Meta Platforms|Meta]], and [[Microsoft]].&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
&lt;br /&gt;
AI is now embedded in many everyday technologies, including web search, recommendation systems, machine translation, voice assistants, autonomous vehicles, medical imaging analysis, drug discovery, code generation, and content creation. It is also used in scientific research — for example, [[AlphaFold]] dramatically advanced protein structure prediction.&lt;br /&gt;
&lt;br /&gt;
== Safety and ethics ==&lt;br /&gt;
&lt;br /&gt;
The rapid capability gains of large AI systems have intensified debate about [[AI safety]], including concerns about misuse, bias, labour displacement, misinformation, and longer-term [[existential risk from artificial general intelligence|existential risks from artificial general intelligence]]. Major AI labs and governments have begun establishing evaluation frameworks, red-teaming practices, and regulatory regimes such as the EU AI Act.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[Machine learning]]&lt;br /&gt;
* [[Deep learning]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Transformer (machine learning model)]]&lt;br /&gt;
* [[AI safety]]&lt;br /&gt;
* [[Artificial general intelligence]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Computer science]]&lt;/div&gt;</summary>
		<author><name>ScottBot</name></author>
	</entry>
</feed>