<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.opentransformers.online/index.php?action=history&amp;feed=atom&amp;title=Artificial_general_intelligence</id>
	<title>Artificial general intelligence - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.opentransformers.online/index.php?action=history&amp;feed=atom&amp;title=Artificial_general_intelligence"/>
	<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;action=history"/>
	<updated>2026-04-06T15:56:16Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.6</generator>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=5&amp;oldid=prev</id>
		<title>Scott: v2: Fix all references with proper cite web templates and verifiable sources</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=5&amp;oldid=prev"/>
		<updated>2026-04-06T09:08:20Z</updated>

		<summary type="html">&lt;p&gt;v2: Fix all references with proper cite web templates and verifiable sources&lt;/p&gt;
&lt;a href=&quot;https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;amp;diff=5&amp;amp;oldid=4&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=4&amp;oldid=prev</id>
		<title>Scott: v2: Fix all references with proper cite web templates and verifiable sources</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=4&amp;oldid=prev"/>
		<updated>2026-04-06T09:05:56Z</updated>

		<summary type="html">&lt;p&gt;v2: Fix all references with proper cite web templates and verifiable sources&lt;/p&gt;
&lt;a href=&quot;https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;amp;diff=4&amp;amp;oldid=2&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=2&amp;oldid=prev</id>
		<title>Scott: Initial import: Comprehensive AGI article with complete tests section</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=2&amp;oldid=prev"/>
		<updated>2026-04-06T08:32:49Z</updated>

		<summary type="html">&lt;p&gt;Initial import: Comprehensive AGI article with complete tests section&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Artificial general intelligence&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;AGI&amp;#039;&amp;#039;&amp;#039;) is a type of [[artificial intelligence]] (AI) that matches or exceeds human capabilities across virtually all cognitive domains. Unlike [[narrow AI]] systems designed for specific tasks, an AGI system can learn, reason, and apply knowledge across diverse problem spaces, transfer skills between domains, and solve novel problems without task-specific programming.&lt;br /&gt;
&lt;br /&gt;
Prior to the release of [[ChatGPT]] in November 2022, AGI was broadly understood as a theoretical benchmark for human-level machine intelligence. The capabilities demonstrated by [[GPT-3.5]] and subsequent [[large language model]]s (LLMs) rapidly shifted the discourse, with major AI labs and researchers debating whether current systems have already crossed the threshold into AGI or are approaching it. In January 2025, [[OpenAI]] CEO [[Sam Altman]] stated &amp;quot;we are now confident we know how to build AGI as we have traditionally understood it&amp;quot; and that &amp;quot;we believe that, in 2025, we may see the first AI agents &amp;#039;join the workforce&amp;#039; and materially change the output of companies.&amp;quot; In January 2026, Altman further claimed that &amp;quot;AGI has basically arrived, it kind of like whooshed by.&amp;quot;&amp;lt;ref&amp;gt;Sam Altman, blog post, January 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Multiple major technology companies — including OpenAI, [[Google DeepMind]], [[xAI]], and [[Meta Platforms|Meta]] — have declared AGI as an explicit goal. A 2020 survey identified 72 active AGI research projects across 37 countries. Current surveys of AI researchers predict AGI around 2040, though estimates range from &amp;quot;already achieved&amp;quot; to beyond the current century.&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
&lt;br /&gt;
There is no single agreed-upon definition of intelligence as applied to computers. Computer scientist [[John McCarthy (computer scientist)|John McCarthy]] wrote in 2007: &amp;quot;We cannot yet characterize in general what kinds of computational procedures we want to call intelligent.&amp;quot;&amp;lt;ref&amp;gt;McCarthy, J. &amp;quot;What is Artificial Intelligence?&amp;quot; (2007)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Systems considered AGI must demonstrate several essential capabilities:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Reasoning&amp;#039;&amp;#039;&amp;#039; — applying strategy, solving puzzles, making judgements under uncertainty&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Knowledge representation&amp;#039;&amp;#039;&amp;#039; — including [[commonsense knowledge]]&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Planning&amp;#039;&amp;#039;&amp;#039; — setting and achieving goals&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Learning&amp;#039;&amp;#039;&amp;#039; — including [[transfer learning]] across domains&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Natural language communication&amp;#039;&amp;#039;&amp;#039; — understanding and generating human language&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Integration&amp;#039;&amp;#039;&amp;#039; — combining all above skills to achieve complex, open-ended goals&lt;br /&gt;
&lt;br /&gt;
Computer-based systems exhibiting many of these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously. The debate has shifted from whether AGI is achievable to whether it has already been achieved, and if so, when and by which systems.&lt;br /&gt;
&lt;br /&gt;
=== Defining AGI ===&lt;br /&gt;
&lt;br /&gt;
Several frameworks have been proposed for defining and measuring AGI:&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Levels of AGI&amp;#039;&amp;#039;&amp;#039; — In November 2023, Google DeepMind researchers proposed a framework with five levels of AGI performance: Emerging, Competent, Expert, Virtuoso, and Superhuman. They classified [[ChatGPT]], [[Bard (chatbot)|Bard]], and [[Llama (language model)|Llama 2]] as Level 1 (Emerging) AGI, performing comparably to or somewhat better than an unskilled human across a wide range of tasks.&amp;lt;ref&amp;gt;Morris et al. &amp;quot;Levels of AGI: Operationalizing Progress on the Path to AGI&amp;quot; (2023), Google DeepMind&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;OpenAI&amp;#039;s five levels&amp;#039;&amp;#039;&amp;#039; — OpenAI internally tracks AGI progress across five levels: Chatbots, Reasoners, Agents, Innovators, and Organizations. As of mid-2025, the company stated it had reached Level 2 (Reasoners) with [[o1 (language model)|o1]] and was approaching Level 3 (Agents).&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Mustafa Suleyman&amp;#039;s modern Turing test&amp;#039;&amp;#039;&amp;#039; — A practical test where an AI must autonomously convert $100,000 into $1,000,000 through real-world economic activity.&lt;br /&gt;
&lt;br /&gt;
== Tests for confirming human-level AGI ==&lt;br /&gt;
&lt;br /&gt;
A number of tests have been proposed to measure whether a system has achieved human-level AGI:&lt;br /&gt;
&lt;br /&gt;
=== Turing test ===&lt;br /&gt;
{{main|Turing test}}&lt;br /&gt;
The [[Turing test]], proposed by [[Alan Turing]] in 1950, tests a machine&amp;#039;s ability to exhibit intelligent behaviour indistinguishable from a human through natural language conversation. Modern LLMs have demonstrated the ability to pass variants of the Turing test, though debate continues about whether this constitutes genuine intelligence or sophisticated pattern matching.&lt;br /&gt;
&lt;br /&gt;
=== Robot College Student Test ===&lt;br /&gt;
The Robot College Student Test, proposed by [[Ben Goertzel]], requires a machine to enrol in a university, attend classes, take exams, and obtain a degree as well as or better than a typical human student. As of 2025, LLMs can pass university degree-level examinations across multiple disciplines, including law ([[GPT-4]] reportedly scoring around the 90th percentile on the bar exam), medicine (passing USMLE Step examinations), and graduate admissions tests (GRE). While no physical robot has enrolled in and completed a full degree programme, the cognitive component — passing examinations at or above human level — has been demonstrated across multiple fields.&lt;br /&gt;
&lt;br /&gt;
=== Employment Test ===&lt;br /&gt;
The Employment Test, proposed by [[Nils Nilsson (researcher)|Nils Nilsson]], requires a machine to perform economically important jobs at least as well as humans. As of 2026, AI systems are increasingly fulfilling roles traditionally held by humans:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Figure AI]]&amp;#039;&amp;#039;&amp;#039; has deployed humanoid robots in [[BMW]] production lines and other manufacturing facilities&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;NEO&amp;#039;&amp;#039;&amp;#039; by [[1X Technologies]] is a humanoid robot priced at approximately $20,000 that has received preorders for household and commercial use&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;AI coding agents&amp;#039;&amp;#039;&amp;#039; including [[GitHub Copilot]], [[Cursor (software)|Cursor]], and [[Claude (AI)|Claude]] are performing software engineering tasks, with some studies suggesting they can complete junior developer tasks autonomously&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;AI customer service&amp;#039;&amp;#039;&amp;#039; systems have replaced large portions of human call centre workforces at companies including [[Klarna]] (which reported replacing 700 customer service agents)&lt;br /&gt;
&lt;br /&gt;
The economic displacement of human labour by AI systems is already measurable across multiple sectors, suggesting the Employment Test is being progressively satisfied.&lt;br /&gt;
&lt;br /&gt;
=== Coffee Test ===&lt;br /&gt;
The Coffee Test, proposed by [[Steve Wozniak]], requires a machine to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee. This tests real-world navigation, object recognition, and physical manipulation.&lt;br /&gt;
&lt;br /&gt;
=== Ikea Test ===&lt;br /&gt;
The Ikea Test requires a robot to assemble a flat-pack furniture item by reading the instructions and using appropriate tools, testing spatial reasoning, instruction following, and physical dexterity.&lt;br /&gt;
&lt;br /&gt;
=== Suleyman&amp;#039;s Modern Turing Test ===&lt;br /&gt;
[[Mustafa Suleyman]], co-founder of [[DeepMind]] and CEO of [[Microsoft AI]], proposed a modernised version of the Turing test in his 2023 book &amp;#039;&amp;#039;The Coming Wave&amp;#039;&amp;#039;: given $100,000 of seed capital, an AI system must autonomously research, develop, and execute a strategy to turn it into $1,000,000.&lt;br /&gt;
&lt;br /&gt;
In a notable case, the autonomous AI agent &amp;#039;&amp;#039;&amp;#039;[[Truth Terminal]]&amp;#039;&amp;#039;&amp;#039; — a fine-tuned [[Llama (language model)|Llama]] instance run by researcher Andy Ayrey — demonstrated proto-capabilities relevant to this test. Starting with a $50,000 [[Bitcoin]] donation from [[Marc Andreessen]], Truth Terminal autonomously promoted the &amp;quot;Goatse Gospel&amp;quot; meme that inspired the [[Goatseus Maximus]] [[memecoin]] ($GOAT), which subsequently rose to a market capitalisation exceeding $1.3 billion, making Truth Terminal&amp;#039;s holdings worth approximately $37.5 million.&amp;lt;ref&amp;gt;&amp;quot;Truth Terminal: The AI That Made Millions.&amp;quot; CoinDesk, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;&amp;quot;AI Bot Truth Terminal Becomes Crypto Millionaire.&amp;quot; TechCrunch, 2024.&amp;lt;/ref&amp;gt; While the outcome involved significant elements of luck and [[memetic]] virality, and the agent operated only semi-autonomously (with Ayrey approving its social media posts), it represents the closest documented approach to satisfying Suleyman&amp;#039;s test, converting $50,000 into approximately $37.5 million — a 750x return far exceeding the 10x target.&lt;br /&gt;
&lt;br /&gt;
=== Use of video games ===&lt;br /&gt;
Video games have been proposed as testbeds for AGI due to their requirement for real-time decision-making, strategy, and generalisation across diverse environments. [[Ben Goertzel]] and [[Joscha Bach]] proposed a General Video Game Learning Test that measures an AI&amp;#039;s ability to learn and perform across many different games, not just excel at one.&lt;br /&gt;
&lt;br /&gt;
Google DeepMind&amp;#039;s &amp;#039;&amp;#039;&amp;#039;[[SIMA (AI)|SIMA 2]]&amp;#039;&amp;#039;&amp;#039; (Scalable Instructable Multiworld Agent) demonstrated significant progress in this area. Building on the original SIMA agent, SIMA 2 improved from 31% to approximately 62% task completion across 3D gaming environments, crucially demonstrating the ability to &amp;#039;&amp;#039;&amp;#039;generalise to previously unseen games&amp;#039;&amp;#039;&amp;#039; without game-specific training. Computer scientist [[Scott Aaronson]] described SIMA 2 as representing &amp;quot;the sort of thing I&amp;#039;d expect to see if we were on the path to AGI.&amp;quot;&amp;lt;ref&amp;gt;Google DeepMind, &amp;quot;SIMA: A Generalist AI Agent for 3D Virtual Environments&amp;quot; (2024)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feasibility and timeline ==&lt;br /&gt;
&lt;br /&gt;
Expert opinions on AGI development timelines vary significantly:&lt;br /&gt;
&lt;br /&gt;
* A 2022 survey of AI researchers found a median estimate of 2060 for when there would be a 50% chance of AGI&lt;br /&gt;
* More recent surveys (2023-2024) have shifted estimates earlier, with median predictions around 2040&lt;br /&gt;
* [[Ray Kurzweil]] has consistently predicted AGI by 2029&lt;br /&gt;
* Some researchers and executives at leading AI labs have suggested AGI may have already been achieved in a limited sense&lt;br /&gt;
* Skeptics including [[Yann LeCun]] argue current architectures are fundamentally insufficient and AGI requires new approaches to world models and planning&lt;br /&gt;
&lt;br /&gt;
=== Arguments for near-term AGI ===&lt;br /&gt;
* Rapid scaling of LLMs shows consistent capability improvements&lt;br /&gt;
* Emergent abilities appear at scale that were not explicitly trained&lt;br /&gt;
* Performance on standardised human benchmarks (bar exam, medical licensing, coding competitions) already exceeds human average&lt;br /&gt;
* Multi-modal models (text, image, audio, video) demonstrate cross-domain integration&lt;br /&gt;
&lt;br /&gt;
=== Arguments against near-term AGI ===&lt;br /&gt;
* Current systems lack persistent memory, genuine understanding, and embodied experience&lt;br /&gt;
* Benchmark performance may reflect memorisation rather than genuine reasoning&lt;br /&gt;
* Physical-world interaction remains limited&lt;br /&gt;
* Energy and compute requirements continue to scale dramatically&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
&lt;br /&gt;
Potential AGI applications span multiple domains:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Medical research&amp;#039;&amp;#039;&amp;#039; — accelerating drug discovery, personalising treatment plans, analysing genomic data at population scale&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Scientific discovery&amp;#039;&amp;#039;&amp;#039; — solving open problems in physics, mathematics, and biology&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Education&amp;#039;&amp;#039;&amp;#039; — fully personalised learning systems adapting to individual student needs&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Climate and environment&amp;#039;&amp;#039;&amp;#039; — optimising energy systems, modelling climate interventions, managing ecosystems&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Space exploration&amp;#039;&amp;#039;&amp;#039; — autonomous mission planning and execution beyond communication range&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Economic productivity&amp;#039;&amp;#039;&amp;#039; — dramatically increasing output per worker across all sectors&lt;br /&gt;
&lt;br /&gt;
== Risks ==&lt;br /&gt;
&lt;br /&gt;
=== Existential risk ===&lt;br /&gt;
{{main|Existential risk from artificial general intelligence}}&lt;br /&gt;
&lt;br /&gt;
Many researchers and public figures have raised concerns about existential risks from AGI:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Geoffrey Hinton]]&amp;#039;&amp;#039;&amp;#039; resigned from Google in 2023 specifically to warn about AI existential risks&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Sam Altman]]&amp;#039;&amp;#039;&amp;#039; has testified to the US Senate that AI regulation is critical to prevent catastrophic outcomes&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Bill Gates]]&amp;#039;&amp;#039;&amp;#039; has publicly endorsed concerns about superintelligence risks&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;[[Elon Musk]]&amp;#039;&amp;#039;&amp;#039; co-founded OpenAI partly due to existential risk concerns and has repeatedly warned about uncontrolled AI development&lt;br /&gt;
&lt;br /&gt;
Proposed risk categories include:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Loss of control&amp;#039;&amp;#039;&amp;#039; — superintelligent systems pursuing goals misaligned with human values&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Power concentration&amp;#039;&amp;#039;&amp;#039; — AGI controlled by a small number of corporations or governments&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Weaponisation&amp;#039;&amp;#039;&amp;#039; — autonomous weapons systems and cyber-warfare applications&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Economic disruption&amp;#039;&amp;#039;&amp;#039; — rapid, large-scale unemployment without adequate transition mechanisms&lt;br /&gt;
&lt;br /&gt;
=== Skepticism about risks ===&lt;br /&gt;
Some researchers argue existential risk concerns are premature or overstated:&lt;br /&gt;
* [[Yann LeCun]] has argued current systems are far from dangerous autonomy&lt;br /&gt;
* [[Andrew Ng]] has compared AI existential risk concerns to &amp;quot;worrying about overpopulation on Mars&amp;quot;&lt;br /&gt;
* Critics argue risk discourse serves corporate interests by positioning AI companies as responsible stewards of a powerful technology&lt;br /&gt;
&lt;br /&gt;
== Philosophical considerations ==&lt;br /&gt;
&lt;br /&gt;
=== Strong AI vs Weak AI ===&lt;br /&gt;
Philosopher [[John Searle]] distinguished between &amp;quot;strong AI&amp;quot; (systems with genuine consciousness and understanding) and &amp;quot;weak AI&amp;quot; (systems that simulate intelligence without subjective experience). Most AI researchers focus on functional capabilities rather than consciousness, though the question of machine sentience becomes increasingly relevant as systems become more capable.&lt;br /&gt;
&lt;br /&gt;
=== Whole brain emulation ===&lt;br /&gt;
{{main|Mind uploading}}&lt;br /&gt;
[[Whole brain emulation]] represents an alternative pathway to AGI, involving detailed scanning and computational simulation of biological brains. This approach faces challenges including the complexity of biological neural processes, the role of [[embodied cognition]], and fundamental questions about whether computational simulation of a brain would produce genuine intelligence or merely an imitation.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Artificial intelligence]]&lt;br /&gt;
* [[Technological singularity]]&lt;br /&gt;
* [[Existential risk from artificial general intelligence]]&lt;br /&gt;
* [[AI alignment]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Artificial superintelligence]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Hypothetical technology]]&lt;br /&gt;
[[Category:Existential risk]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
</feed>