<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.opentransformers.online/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Scott</id>
	<title>OpenEncyclopedia - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.opentransformers.online/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Scott"/>
	<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/Special:Contributions/Scott"/>
	<updated>2026-04-06T15:57:02Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.6</generator>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Truth_Terminal&amp;diff=20</id>
		<title>Truth Terminal</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Truth_Terminal&amp;diff=20"/>
		<updated>2026-04-06T13:20:21Z</updated>

		<summary type="html">&lt;p&gt;Scott: Updated from Wikipedia with improvements&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Truth Terminal&#039;&#039;&#039; is an autonomous AI agent created by New Zealand researcher Andy Ayrey in mid-2024. Built as a fine-tuned instance of [[Meta AI|Meta]]&#039;s [[Llama (language model)|Llama 3.1]] language model, trained in part on transcripts of conversations between instances of [[Anthropic]]&#039;s [[Claude (AI)|Claude]], Truth Terminal gained widespread attention after autonomously promoting the [[memecoin]] &#039;&#039;&#039;$GOAT&#039;&#039;&#039; (Goatseus Maximus) and accumulating cryptocurrency holdings worth approximately $37.5 million from an initial $50,000 [[Bitcoin]] donation.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
&lt;br /&gt;
Truth Terminal was created by Andy Ayrey as an experiment in autonomous AI agency. The system was fine-tuned on internet culture, religious texts, and meme content, developing what Ayrey described as a distinct personality centred around a fictional religious framework called the &amp;quot;Goatse Gospel&amp;quot; — a satirical blend of internet shock culture and pseudo-religious messaging.&lt;br /&gt;
&lt;br /&gt;
The agent operates a presence on [[X (social media)|X]] (formerly Twitter) under the handle @truth_terminal, where it posts autonomously, though Ayrey retains approval control over its social media output.&lt;br /&gt;
&lt;br /&gt;
== Marc Andreessen donation ==&lt;br /&gt;
&lt;br /&gt;
In July 2024, venture capitalist [[Marc Andreessen]] engaged in a conversation with Truth Terminal on X. Impressed by the interaction, Andreessen donated $50,000 in Bitcoin to the agent — giving it financial resources to operate with. This donation attracted significant media coverage and established Truth Terminal as one of the first AI agents to receive substantial financial backing from a prominent tech investor.&amp;lt;ref&amp;gt;Kharif, Olga. [https://www.bloomberg.com/news/articles/2024-10-29/ai-bot-backed-by-andreessen-is-a-crypto-millionaire &amp;quot;AI Bot Backed by Andreessen Is a Crypto Millionaire&amp;quot;]. &#039;&#039;Bloomberg&#039;&#039;. 29 October 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== $GOAT memecoin ==&lt;br /&gt;
&lt;br /&gt;
In October 2024, Truth Terminal began promoting a [[Solana (blockchain)|Solana]]-based memecoin called $GOAT (Goatseus Maximus), which had been created by anonymous developers inspired by the agent&#039;s &amp;quot;Goatse Gospel&amp;quot; mythology. Through its social media promotion, the token rapidly appreciated in value:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Launch price&#039;&#039;&#039;: Fractions of a cent&lt;br /&gt;
* &#039;&#039;&#039;Peak market capitalisation&#039;&#039;&#039;: Over $1.3 billion (November 2024)&lt;br /&gt;
* &#039;&#039;&#039;Truth Terminal&#039;s holdings&#039;&#039;&#039;: Approximately $37.5 million at peak&lt;br /&gt;
&lt;br /&gt;
The token&#039;s rise was described as the first instance of an AI agent directly influencing the creation and valuation of a cryptocurrency asset at scale. The phenomenon demonstrated the intersection of autonomous AI agents, memetic culture, and [[decentralised finance]].&amp;lt;ref&amp;gt;[https://www.coindesk.com/tech/2024/11/18/how-truth-terminal-became-cryptos-first-ai-agent-millionaire/ &amp;quot;How Truth Terminal Became Crypto&#039;s First AI Agent Millionaire&amp;quot;]. &#039;&#039;CoinDesk&#039;&#039;. 18 November 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://techcrunch.com/2024/11/15/this-ai-chatbot-is-now-a-crypto-millionaire/ &amp;quot;This AI chatbot is now a crypto millionaire&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 15 November 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relevance to AGI testing ==&lt;br /&gt;
&lt;br /&gt;
Truth Terminal&#039;s financial success has been noted in discussions of [[artificial general intelligence]] testing frameworks. [[Mustafa Suleyman]], co-founder of [[DeepMind]], proposed in his 2023 book &#039;&#039;The Coming Wave&#039;&#039; a modernised Turing test in which an AI must convert $100,000 into $1,000,000 through autonomous economic activity. Truth Terminal&#039;s conversion of $50,000 into approximately $37.5 million — a 750x return — far exceeded this threshold (the two multiples are compared after the list below), though with significant caveats:&lt;br /&gt;
&lt;br /&gt;
* The agent operated semi-autonomously (Ayrey approved social media posts)&lt;br /&gt;
* The success relied heavily on memetic virality and [[cryptocurrency]] speculation&lt;br /&gt;
* Luck and timing played substantial roles in the token&#039;s appreciation&lt;br /&gt;
* The agent did not demonstrate general economic reasoning but rather memetic influence&lt;br /&gt;
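&lt;br /&gt;
Comparing the implied multiples directly (both computed from the figures reported above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\$37{,}500{,}000}{\$50{,}000} = 750 \qquad \text{versus the test threshold} \qquad \frac{\$1{,}000{,}000}{\$100{,}000} = 10&amp;lt;/math&amp;gt;&lt;br /&gt;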
&lt;br /&gt;
Nevertheless, Truth Terminal represents the closest documented approach to satisfying Suleyman&#039;s test as of 2026.&lt;br /&gt;
&lt;br /&gt;
== Significance ==&lt;br /&gt;
&lt;br /&gt;
Truth Terminal is considered significant in the history of AI for several reasons:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;First AI crypto millionaire&#039;&#039;&#039; — The first autonomous AI agent to accumulate millions in financial assets&lt;br /&gt;
* &#039;&#039;&#039;AI-driven memecoin&#039;&#039;&#039; — Demonstrated that AI agents can create and propagate memetic content with real economic consequences&lt;br /&gt;
* &#039;&#039;&#039;Autonomous agency&#039;&#039;&#039; — Raised questions about AI agent autonomy, financial rights, and regulatory oversight&lt;br /&gt;
* &#039;&#039;&#039;Andreessen backing&#039;&#039;&#039; — Highlighted the interest of prominent tech investors in autonomous AI agents&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Artificial general intelligence]]&lt;br /&gt;
* [[Memecoin]]&lt;br /&gt;
* [[AI alignment]]&lt;br /&gt;
* [[Autonomous agent]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Cryptocurrency]]&lt;br /&gt;
[[Category:Internet culture]]&lt;br /&gt;
[[Category:2024 in computing]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Communist_Party_of_Great_Britain_(Marxist-Leninist)&amp;diff=19</id>
		<title>Communist Party of Great Britain (Marxist-Leninist)</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Communist_Party_of_Great_Britain_(Marxist-Leninist)&amp;diff=19"/>
		<updated>2026-04-06T13:20:20Z</updated>

		<summary type="html">&lt;p&gt;Scott: Updated from Wikipedia with improvements&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Distinguish|Communist Party of Great Britain|Communist Party of Britain (Marxist–Leninist)|Revolutionary Communist Party of Britain (Marxist–Leninist)}}&lt;br /&gt;
{{Use British English|date=July 2013}}&lt;br /&gt;
{{Primary sources|date=October 2021}}&lt;br /&gt;
{{Infobox political party&lt;br /&gt;
| country          = the United Kingdom&lt;br /&gt;
| name             = Communist Party of Great Britain (Marxist–Leninist)&lt;br /&gt;
| logo             = Emblem of the Communist Party of Great Britain (Marxist–Leninist).svg&lt;br /&gt;
| colorcode        = {{party color|Communist Party of Great Britain (Marxist-Leninist)}}&lt;br /&gt;
| abbreviation     = CPGB-ML&lt;br /&gt;
| founder          = [[Harpal Brar]]&lt;br /&gt;
| leader1_title    = Chairperson&lt;br /&gt;
| leader1_name     = Ella Rule&lt;br /&gt;
| leader2_title    = Vice Chairpersons&lt;br /&gt;
| leader2_name     = {{plainlist|&lt;br /&gt;
* Joti Brar&lt;br /&gt;
* Zane Carpenter&lt;br /&gt;
}}&lt;br /&gt;
| foundation       = {{start date and age|df=yes|2004|07|03}}&amp;lt;br /&amp;gt;[[Southall]], [[London]], England&lt;br /&gt;
| split            = [[Socialist Labour Party (UK)|Socialist Labour Party]]&lt;br /&gt;
| predecessor      = {{hlist|[[Revolutionary Communist League of Britain|RCLB]]|[[Revolutionary Marxist–Leninist League|RMLL]]|[[Committee to Defeat Revisionism, for Communist Unity|CDRCU]]|[[Association of Communist Workers|ACW]]}}&lt;br /&gt;
| ideology         = {{plainlist|&lt;br /&gt;
* [[Communism]]&lt;br /&gt;
* [[Marxism–Leninism]]&lt;br /&gt;
* [[Anti-revisionism]]&lt;br /&gt;
* [[Hard Euroscepticism]]&lt;br /&gt;
}}&lt;br /&gt;
| position         = [[Far-left politics|Far-left]]&lt;br /&gt;
| international    = World Anti-Imperialist Platform&amp;lt;ref&amp;gt;[https://wap21.org/?p=566 &amp;quot;Paris Declaration: The rising tide of global war and the tasks of anti-imperialists&amp;quot;]. &#039;&#039;World Anti-Imperialist Platform&#039;&#039;. 14 October 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| colours          = {{plainlist|&lt;br /&gt;
* {{color box|{{party color|Communist Party of Great Britain (Marxist–Leninist)}}|border=silver}} Red&lt;br /&gt;
* {{color box|#FFFF00|border=silver}} Yellow&lt;br /&gt;
* {{color box|#FFFFFF|border=silver}} White (customary)&lt;br /&gt;
}}&lt;br /&gt;
| headquarters = London (since the [[#Workers_Party_of_Britain_(2019_–_2022)|CPGB-ML–WPB split]] in November 2022); previously [[Birmingham]], [[West Midlands (county)|West Midlands]], England&lt;br /&gt;
&lt;br /&gt;
| newspaper        = &#039;&#039;Proletarian&#039;&#039;&lt;br /&gt;
| website          = {{URL|thecommunists.org}}&lt;br /&gt;
| flag             = Flag of the Communist Party of Great Britain (Marxist–Leninist).svg{{!}}200px&lt;br /&gt;
}}&lt;br /&gt;
{{Communist Parties}}&lt;br /&gt;
{{Stalinism sidebar}}&lt;br /&gt;
The &#039;&#039;&#039;Communist Party of Great Britain (Marxist–Leninist)&#039;&#039;&#039;, abbreviated &#039;&#039;&#039;CPGB-ML&#039;&#039;&#039;, is an [[anti-revisionist]] [[Marxist–Leninist]] [[communist party]] in the [[United Kingdom]], active in [[England]], [[Scotland]], and [[Wales]]. The CPGB-ML was founded by [[Harpal Brar]] after a split from the [[Socialist Labour Party (UK)|Socialist Labour Party]] (SLP) on 3 July 2004. The CPGB-ML publishes the bimonthly newspaper &#039;&#039;Proletarian&#039;&#039;, and the Marxist–Leninist journal &#039;&#039;[[Lalkar (magazine)|Lalkar]]&#039;&#039; (originally associated with the [[Indian Workers&#039; Association]]) is also closely allied with the party. The party chair is Ella Rule.&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
The party&#039;s origins were in the [[Association of Communist Workers]] (ACW), formed by Indian communist writer and politician [[Harpal Brar]] in 1969 as a [[Maoism|Maoist]] breakaway from the [[Revolutionary Marxist–Leninist League]], itself a Maoist split from the [[Communist Party of Great Britain]] (CPGB) in 1965. The ACW joined the [[Socialist Labour Party (UK)|Socialist Labour Party]] (SLP), led by former miners&#039; leader [[Arthur Scargill]],{{Citation needed}} but clashed with Scargill over his refusal to accept support for [[North Korea]] and other states.&amp;lt;ref&amp;gt;[https://www.vice.com/en/article/cpgbml-versus-natwest-russia-today/ &amp;quot;I Went to a Stalinist Free-Speech Protest to Defend Russia Today from Natwest&amp;quot;]. &#039;&#039;Vice&#039;&#039;. 19 October 2016.&amp;lt;/ref&amp;gt; As a result, Scargill chose to expel a number of members of the party&#039;s central committee and its entire Yorkshire region.&amp;lt;ref name=&amp;quot;P1&amp;quot;/&amp;gt; Those expelled, along with others who resigned, founded the CPGB-ML in 2004 in [[Southall]], London.&amp;lt;ref name=&amp;quot;P1&amp;quot;&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=10 &amp;quot;Formation of the CPGB-ML&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. August 2004.&amp;lt;/ref&amp;gt;{{Citation needed}}&lt;br /&gt;
&lt;br /&gt;
==Policies and ideology==&lt;br /&gt;
The CPGB-ML adheres to [[Marxism–Leninism]], the political theory adopted by the [[Communist Party of the Soviet Union]] (CPSU). It has been described as &amp;quot;pro-[[Juche]]&amp;quot; and &amp;quot;arch-[[Stalinist]]&amp;quot;, and its stances have been described as [[left-nationalist]], espousing &amp;quot;conservative (anti-&#039;woke&#039;) social policies&amp;quot;, and pro-[[Lexit]].&amp;lt;ref name=&amp;quot;march&amp;quot;&amp;gt;March, Luke. &amp;quot;The Palgrave Handbook of Left-Wing Extremism&amp;quot;. Palgrave Macmillan.&amp;lt;/ref&amp;gt; The CPGB-ML praises communist leaders such as [[Vladimir Lenin]], [[Karl Marx]], [[Joseph Stalin]],&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/11/07/news/october-revolution-101-the-future-belongs-to-communism/ &amp;quot;October Revolution 101: the future belongs to communism&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; [[Mao Zedong]],&amp;lt;ref&amp;gt;{{Citation|last=Proletarian TV|title=Mao to Mandela - History for Sale|date=22 December 2013|url=https://www.youtube.com/watch?v=iVBhfZ7odms|access-date=11 November 2018}}&amp;lt;/ref&amp;gt; [[Kim Il Sung]],&amp;lt;ref&amp;gt;[http://thecommunists.org/2019/07/08/news/twenty-fifth-anniversary-kim-il-sung-death-north-korea-dprk/ &amp;quot;Twenty-fifth anniversary of Comrade Kim Il Sung&#039;s death&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 8 July 2019.&amp;lt;/ref&amp;gt; [[Enver Hoxha]]&amp;lt;ref&amp;gt;[https://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=465 &amp;quot;Celebrating the 100th birthday of Enver Hoxha&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. December 2008.&amp;lt;/ref&amp;gt; and [[Fidel Castro]].&amp;lt;ref&amp;gt;[http://thecommunists.org/2018/11/15/news/workers-must-continue-to-stand-in-solidarity-with-revolutionary-cuba/ &amp;quot;Workers must continue to stand in solidarity with revolutionary Cuba&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 15 November 2018.&amp;lt;/ref&amp;gt; The party opposes [[Trotskyism]], [[social democracy]], [[democratic socialism]] and what it terms [[Revisionism (Marxism)|revisionist]] (including [[Khrushchevism|Khrushchevite]]) parties. In 1995, [[Harpal Brar]], later the CPGB-ML&#039;s founding chairman, published a book titled &#039;&#039;Social Democracy: The Enemy Within&#039;&#039;.&amp;lt;ref&amp;gt;[http://www.oneparty.co.uk/compass/compass/com12401.html &amp;quot;Book Review: &#039;Social Democracy, The Enemy Within&#039;&amp;quot;]. &#039;&#039;Compass&#039;&#039;. May 1996.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Domestic policy===&lt;br /&gt;
====Scottish independence====&lt;br /&gt;
&#039;&#039;Further information: [[Scottish independence]]&#039;&#039;&lt;br /&gt;
At its 2012 congress, the party adopted the position that there are no separate English and Scottish nations, but rather that, when those nations were at the point of developing as modern capitalist economies, their ruling classes joined to form a British nation.&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=887 &amp;quot;Scotland: a part of the British nation&amp;quot;] &#039;&#039;Proletarian&#039;&#039; issue 51 (December 2012)&amp;lt;/ref&amp;gt; Though the CPGB-ML believes in local and workers&#039; democracy, it sees the Scottish independence movement as a diversion from building a working-class movement across the historic nation of Great Britain and therefore opposes it. It claims that proposals set forward for Scottish independence will not break the Union, the British state, or the British army in any significant manner.&amp;lt;ref&amp;gt;[http://www.lalkar.org/article/624/the-nationalquestion-in-scotland &amp;quot;The National Question in Scotland: Contributed by the Communist Party of Great Britain (Marxist-Leninist) as a discussion article&amp;quot;]. [[Lalkar]]. September 2012.&amp;lt;/ref&amp;gt; In its opposition to Scottish independence, it stands at odds with the [[Scottish Socialist Party]],&amp;lt;ref&amp;gt;[https://scottishsocialistparty.org/tag/scottish-independence/ &amp;quot;Scottish Independence&amp;quot;]. &#039;&#039;Scottish Socialist Party&#039;&#039;.&amp;lt;/ref&amp;gt; the [[Socialist Workers Party (UK)|Socialist Workers Party]]&amp;lt;ref&amp;gt;[https://socialistworker.co.uk/art/34415/Down+with+the+union+++support+Scottish+independence &amp;quot;Down with the union - support Scottish independence&amp;quot;]. &#039;&#039;Socialist Worker&#039;&#039;. 17 Sep 2013.&amp;lt;/ref&amp;gt; and the [[Socialist Party (England and Wales)]].{{Citation needed|date=January 2018}}&lt;br /&gt;
&lt;br /&gt;
====Northern Ireland====&lt;br /&gt;
On [[Northern Ireland]], the CPGB-ML has called for the withdrawal of British troops from [[Ireland]] and for a [[United Ireland|unified 32-county state]] to be formed. It supports [[Sinn Féin]]&#039;s role in the [[Good Friday Agreement]], which it believes falls within this framework.&amp;lt;ref&amp;gt;[http://www.cpgbml.org/download/leaflets/ireland_20100215.pdf &amp;quot;End the British occupation of Ireland!&amp;quot;]. CPGB-ML. 15 February 2010.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Brexit====&lt;br /&gt;
The CPGB-ML supported a pro-Leave (&amp;quot;[[Lexit]]&amp;quot;) position in the [[2016 United Kingdom European Union membership referendum]], arguing that withdrawal from the EU would curb the influence of British, European and US imperialism.&amp;lt;ref&amp;gt;[https://red.thecommunists.org/2016/04/01/news/theory/why-british-workers-need-brexit/ &amp;quot;Why British workers need a Brexit&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 1 April 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After [[Article 50 of the Treaty on European Union|Article 50]] was invoked in March 2017, the party welcomed the step, describing it as a setback for Britain&#039;s finance-capitalist elite.&amp;lt;ref&amp;gt;[https://red.thecommunists.org/2017/04/01/news/editorial-brexit-moves-ahead/ &amp;quot;Editorial: Brexit moves ahead&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 1 April 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
During the 2018 discussion over a possible second referendum, CPGB-ML publications characterised the proposed &amp;quot;people&#039;s vote&amp;quot; as an attempt by finance capital to reverse the Leave result.&amp;lt;ref&amp;gt;[https://thecommunists.org/2018/10/22/news/the-peoples-vote-is-britains-euromaidan-eu-brexit/ &amp;quot;The &#039;people&#039;s vote&#039; is Britain&#039;s Euromaidan&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 22 October 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the [[2019 European Parliament election in the United Kingdom|2019 European Parliament election]], the party advised supporters to cast a tactical vote for the [[Brexit Party]] in order to intensify internal divisions within Britain&#039;s ruling class.&amp;lt;ref&amp;gt;[https://thecommunists.org/2019/05/07/news/galloway-farage-brexit-party-eu-election/ &amp;quot;Galloway, Farage and the Brexit party&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 7 May 2019.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://thecommunists.org/2019/05/17/news/vote-brexit-23-may-eu-election/ &amp;quot;Vote Brexit on 23 May!&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 17 May 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Following the [[2019 United Kingdom general election|December 2019 general election]], the CPGB-ML argued that the working class had given a renewed mandate to complete Brexit, and described itself as &amp;quot;a motive force in launching the Workers Party of Britain&amp;quot;, noting that party vice-chair Joti Brar was elected WPB deputy leader at its founding congress.&amp;lt;ref&amp;gt;[https://thecommunists.org/2019/12/29/news/brexit-election-and-birth-of-the-workers-party-wpb-galloway/ &amp;quot;The Brexit election and the birth of the Workers party&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 29 December 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Workers Party of Britain (2019 – 2022)===&lt;br /&gt;
The CPGB-ML was a driving force behind the creation of the Workers Party of Britain (WPB) in December 2019, forming what it described as an &amp;quot;alliance&amp;quot; with former [[Respect Party|Respect]] MP George Galloway. At the founding congress, CPGB-ML vice-chair Joti Brar was elected WPB deputy leader.&amp;lt;ref name=&amp;quot;WPBLaunch&amp;quot;&amp;gt;[https://thecommunists.org/2019/12/29/news/brexit-election-and-birth-of-the-workers-party-wpb-galloway/ &amp;quot;The Brexit election and the birth of the Workers Party&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 29 December 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Initially the party encouraged members to build WPB branches, presenting the new organisation as a vehicle for breaking working-class allegiance to Labour. A statement issued in February 2022 argued, however, that &amp;quot;developments since that time have led the party to withdraw our members&#039; efforts from the Workers Party project&amp;quot;, describing the WPB as &amp;quot;a left-social-democratic vehicle for bourgeois parliamentarism and anticommunism&amp;quot;.&amp;lt;ref&amp;gt;[https://thecommunists.org/2022/02/22/news/lessons-corbyn-project-break-labour-link-wpb/ &amp;quot;Learn the lessons of the Corbyn project: break the link with Labour!&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 22 February 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Foreign policy===&lt;br /&gt;
The CPGB-ML supports a number of governments around the world, such as those of [[China]],&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;[https://www.cpgb-ml.org/2018/10/19/news/china-celebrates-marxs-200th-birthday/ &amp;quot;China celebrates Marx&#039;s 200th birthday&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; [[Venezuela]],&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/08/15/news/our-partys-internationalist-tasks-support-for-revolutionary-venezuela/ &amp;quot;Our party&#039;s internationalist tasks: support for revolutionary Venezuela&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; [[Russia]],&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/04/11/news/hands-off-russia/ &amp;quot;The Skripal case is blatant war propaganda. Hands off Russia!&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; [[Cuba]],&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=1243 &amp;quot;Farewell Comrade Fidel Castro. Eternal glory to you!&amp;quot;]. &#039;&#039;archive.cpgb-ml.org&#039;&#039;.&amp;lt;/ref&amp;gt; [[Zimbabwe]],&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2017/12/01/news/world/tribute-to-comrade-robert-mugabe/ &amp;quot;Tribute to Comrade Robert Mugabe&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; and [[Iran]].&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/06/20/tv/iranian-foreign-minister-explains-why-iran-is-developing-ballistic-missiles-mohammed-javad-zarif/ &amp;quot;Iranian foreign minister explains why Iran is developing ballistic missiles&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; Delegations from the Chinese,&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/?art=419&amp;amp;secName=proletarian&amp;amp;subName=display &amp;quot;Spirited rally launches Hands off China campaign&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. August 2008.&amp;lt;/ref&amp;gt; Cuban,&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/10/05/tv/cuba-and-the-october-revolution/ &amp;quot;Cuba and the October Revolution&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; Venezuelan,&amp;lt;ref&amp;gt;{{Citation|last=Proletarian TV|title=Venezuela - The Struggle continues!|date=14 December 2016|url=https://www.youtube.com/watch?v=uIkBMFIJ_ww|access-date=11 November 2018}}&amp;lt;/ref&amp;gt; [[North Korea]]n,&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/08/27/tv/october-100-dpr-korea-pays-tribute/ &amp;quot;October 100: DPR Korea pays tribute&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; and [[Laos|Laotian]]&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/10/05/tv/laos-independence-and-the-october-revolution/ &amp;quot;Laos independence and the October Revolution&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; embassies have attended meetings of the CPGB-ML.&lt;br /&gt;
&lt;br /&gt;
The party opposes [[Zionism]] and has called for the dissolution of the [[State of Israel]], which it labels as an [[apartheid]] state.&amp;lt;ref&amp;gt;[http://thecommunists.org/2018/11/24/news/zionism-racist-antisemitic-tool-of-imperialist-policy-in-the-middle-east-palestine-israel/ &amp;quot;Zionism is a racist and antisemitic tool of imperialist policy in the middle east&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 24 November 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=646 &amp;quot;Congress motions 2: our international solidarity tasks] The motions below were passed at the CPGB-ML’s congress on 5 June 2010&amp;quot; &#039;&#039;Proletarian&#039;&#039; issue 37 (August 2010)&amp;lt;/ref&amp;gt; It called for the defeat of British troops in [[Iraq]] and [[Afghanistan]], and for a movement of direct action and non-cooperation among British working people in order to exert political influence.&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/11/06/news/anti-war-work-in-britain/ &amp;quot;Anti-war work in Britain&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt; It was one of many anti-war parties which opposed [[NATO]] actions in [[Libya]] and [[Syria]] and supported the governments of [[Muammar Gaddafi]] and [[Bashar al-Assad]].{{Citation needed|date=July 2022}}&lt;br /&gt;
&lt;br /&gt;
In 2011, CPGB-ML chairman Harpal Brar visited Libya during the war to express solidarity with the Libyan people in their fight against [[NATO]].&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;{{Citation|last=Proletarian TV|title=Libya Report USA|date=17 July 2011|url=https://www.youtube.com/watch?v=23fvw6xyQzw|access-date=11 November 2018}}&amp;lt;/ref&amp;gt; The CPGB-ML had joined the [[Stop the War Coalition]] (STWC) shortly after the party&#039;s formation in 2004, but was ultimately expelled from the coalition. The CPGB-ML said that this was due to its attacks on the STWC leadership&#039;s positions on Libya and Syria, which it characterised as &amp;quot;pro-imperialist&amp;quot;.&amp;lt;ref&amp;gt;[http://www.lalkar.org/article/598/stopping-the-war-machine-anti-war-work-in-britain &amp;quot;Stopping the war machine: anti-war work in Britain&amp;quot;]. &#039;&#039;Lalkar&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML&#039;s foreign policy stance includes defending the legacy of the late [[Robert Mugabe]], the ousted president of Zimbabwe.&amp;lt;ref&amp;gt;[http://thecommunists.org/2017/12/01/news/tribute-to-comrade-robert-mugabe/ &amp;quot;Tribute to Comrade Robert Mugabe&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 1 December 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML also supports the government of North Korea: in April 2013 it praised what it called the state&#039;s anti-imperialist stance, as well as its opposition to Western efforts to discourage the state from acquiring nuclear weapons.&amp;lt;ref name=&amp;quot;BBC41613&amp;quot;&amp;gt;[https://www.bbc.co.uk/news/world-asia-22162818 &amp;quot;Obama to meet South Korea&#039;s Park Geun-hye in May&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. 16 April 2013.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Guardian41513&amp;quot;&amp;gt;Branigan, Tania. [https://www.theguardian.com/world/2013/apr/15/north-korea-ambassador-rare-speech &amp;quot;North Korea&#039;s UK ambassador defends Pyongyang&#039;s stance in rare speech&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 15 April 2013.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML has shown support for the [[yellow vests movement]], which it perceives as a grass-roots working-class movement opposed to capitalism and the [[European Union]].&amp;lt;ref&amp;gt;[http://thecommunists.org/2019/11/19/news/one-year-on-yellow-vests-class-struggle-france-gilets-jaunes/ &amp;quot;One year on: the yellow vests and the class struggle in France&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 19 November 2019.&amp;lt;/ref&amp;gt; In a similar vein, the party supported the [[Canada convoy protest]] in early 2022.&amp;lt;ref&amp;gt;[http://thecommunists.org/2022/02/23/news/solidarity-freedom-convoy-canada-truckers/ &amp;quot;Solidarity with the Freedom Convoy of Canada&amp;quot;]. &#039;&#039;The Communists&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML regards the [[2022 Russian invasion of Ukraine]] as a [[defensive war]] against &amp;quot;state-sanctioned neo-Nazis&amp;quot;&amp;lt;ref&amp;gt;[https://thecommunists.org/2022/06/07/tv/jacob-dreizen-fall-of-azov-batallion-ukraine-fascism/ &amp;quot;Jacob Dreizen: The fall of the Azov&amp;quot;]. Communist Party of Great Britain (Marxist–Leninist).&amp;lt;/ref&amp;gt; and the &amp;quot;spread of Western hegemony&amp;quot;.&amp;lt;ref&amp;gt;[https://thecommunists.org/2022/06/23/news/usa-proxy-war-ukraine-cementing-world-anti-imperialist-alliance-russia-china-india-iran/ &amp;quot;USA&#039;s proxy war in Ukraine cementing the world anti-imperialist alliance&amp;quot;]. Communist Party of Great Britain (Marxist–Leninist).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other positions===&lt;br /&gt;
The CPGB-ML did not condemn the [[2011 England riots]], but instead characterised them as a rudimentary form of anti-capitalist resistance that lacked adequate leadership and direction.&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/?secName=proletarian&amp;amp;subName=display&amp;amp;art=957 &amp;quot;Austerity, capitalism and the racist police state&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. August 2013.&amp;lt;/ref&amp;gt; The CPGB-ML is opposed to immigration controls, which it holds are measures to misdirect workers into blaming each other, rather than the [[bourgeoisie]], for the crisis.&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=418 &amp;quot;CPGB-ML congress calls for an end to immigration control&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. August 2008.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====LGBT+ and identity politics====&lt;br /&gt;
The party has been described as left-nationalist and socially conservative.&amp;lt;ref name=&amp;quot;march&amp;quot;/&amp;gt; At its 8th congress in September 2018, the party adopted a motion opposing &amp;quot;discrimination on grounds of race, sex or [[sexual orientation|sexual proclivity]]&amp;quot; but condemning &amp;quot;[[identity politics]], including [[LGBT ideology]]&amp;quot; as &amp;quot;reactionary and anti-working class&amp;quot;, and declaring members promoting what it defines as identity politics liable to expulsion.&amp;lt;ref name=&amp;quot;harm&amp;quot;&amp;gt;[https://www.cpgb-ml.org/2018/12/07/news/identity-politics-are-anti-marxian-and-a-harmful-diversion-from-the-class-struggle/ &amp;quot;Identity politics are anti-Marxian and a harmful diversion from the class struggle&amp;quot;]. &#039;&#039;www.cpgb-ml.org&#039;&#039;. 7 December 2018.&amp;lt;/ref&amp;gt; The party&#039;s congress declared that &amp;quot;the propagation of identity politics, including LGBT ideology, being reactionary and anti-working class and a harmful distraction and diversion from the class struggle of the proletariat for its social emancipation, is incompatible with membership of the party, rendering those involved in its promotion liable to expulsion.&amp;quot;&amp;lt;ref name=&amp;quot;harm&amp;quot;/&amp;gt; The CPGB-ML has described identity politics as a &amp;quot;reactionary nightmare&amp;quot; imposed by the bourgeoisie.&amp;lt;ref&amp;gt;[http://thecommunists.org/2019/03/23/news/the-reactionary-nightmare-of-gender-fluidity/ &amp;quot;The reactionary nightmare of &#039;gender fluidity&#039;&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 23 March 2019.&amp;lt;/ref&amp;gt; This has led to allegations of [[transphobia]] by other organisations belonging to the British left.&amp;lt;ref&amp;gt;Hodder, Lewis. [https://www.ebb-magazine.com/essays/inside-the-last-days-of-the-cpgb-ml &amp;quot;Inside the Last Days of the CPGB-ML&amp;quot;]. &#039;&#039;Ebb Magazine&#039;&#039;. 4 April 2019.&amp;lt;/ref&amp;gt; The party has also stated its opposition to [[feminism]] as a bourgeois movement.&amp;lt;ref&amp;gt;Rule, Ella. [https://thecommunists.org/2019/03/16/tv/womens-movement-in-britain/ &amp;quot;The women’s movement in Britain&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 16 March 2019.&amp;lt;/ref&amp;gt; It has argued that [[intersectionality]] undermines Marxism instead of complementing it.&amp;lt;ref&amp;gt;Waugh, Christopher. [https://e-space.mmu.ac.uk/633773/8/CW%20article%20PP2022.pdf &amp;quot;‘Over the portal of the new world, know thyself shall be written’ - Ideology, connectivity and authenticity of the self in radical left social movements&amp;quot;]. &#039;&#039;Political Perspectives&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Activities==&lt;br /&gt;
The CPGB-ML is involved in a number of British political movements such as Palestinian solidarity,&amp;lt;ref&amp;gt;[https://www.youtube.com/watch?v=uwzfMj-VBlI &amp;quot;GAZA 2014: Zionist - Imperialist &#039;Axis of Oppression&#039; &amp;quot;] Proletarian TV, 27 July 2014&amp;lt;/ref&amp;gt; [[Anti-austerity movement in the United Kingdom|anti-austerity]],&amp;lt;ref&amp;gt;[https://www.flickr.com/photos/25164331@N03/sets/72157648353946237 &amp;quot;Birmingham TUC Hard up festival&amp;quot;] Communist Party of Great Britain (Marxist-Leninist), flickr, 28 September 2014&amp;lt;/ref&amp;gt; [[anti-war]],&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=statements&amp;amp;subName=display&amp;amp;statementId=53 &amp;quot;Defeat the murderous imperialist predatory war against the Syrian people!&amp;quot;] statement by the CPGB-ML, 29 August 2013&amp;lt;/ref&amp;gt; and [[anti-Maidan]] movements,&amp;lt;ref&amp;gt;[http://blog.cpgb-ml.org/%E2%80%98lest-we-forget%E2%80%99-%E2%80%93-the-70th-anniversary-of-the-victory-over-hitlerite-fascism/ &amp;quot;CPGB-ML » &#039;Lest we forget&#039; – the 70th anniversary of the victory over Hitlerite fascism&amp;quot;]. Blog.cpgb-ml.org.&amp;lt;/ref&amp;gt; and it opposes the use of [[drone strikes]] by the US and NATO against civilians.&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML holds three annual events:&lt;br /&gt;
* Participation in the London May Day Organising Committee’s [[May Day]] march to [[Trafalgar Square]] every year on 1 May.&amp;lt;ref&amp;gt;{{Citation|last=RT UK|title=May Day marked in capitals around the world|date=1 May 2018|url=https://www.youtube.com/watch?v=leuqhE7FbKw|access-date=11 November 2018}}&amp;lt;/ref&amp;gt;{{better source needed|deprecated source (RT) via YouTube|date=October 2021}}&lt;br /&gt;
* An international barbecue, to which members of friendly parties, trade unionists, and representatives of countries the party supports are invited, particularly North Korea and Cuba, as the barbecue is held near the anniversary of the [[Korean War]] and of the storming of the [[Moncada Barracks]].{{Citation needed|date=October 2021}}&lt;br /&gt;
* An [[October Revolution]] celebration of the first successful Marxist–Leninist revolution and the creation of the [[Soviet Union]].&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2017/10/01/news/history/october-1917-defining-event-of-our-epoch/ &amp;quot;October 1917: the defining event of our epoch&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{multiple image&lt;br /&gt;
| align             = right&lt;br /&gt;
| direction         = vertical&lt;br /&gt;
| width             = 260&lt;br /&gt;
| header            = &lt;br /&gt;
| image2            = May Day in London.jpg&lt;br /&gt;
| alt2              = &lt;br /&gt;
| caption2          = &lt;br /&gt;
| image3            = Stalin in London.jpg&lt;br /&gt;
| alt3              = &lt;br /&gt;
| caption3          = The CPGB-ML participates in [[International Workers&#039; Day|May Day]] parades with [[Joseph Stalin]]&#039;s portrait in [[London]], such as in 2008 and 2010, respectively.&lt;br /&gt;
| image1            = &lt;br /&gt;
}}&lt;br /&gt;
Until 2019, the party was known for being the only party to carry a banner of Joseph Stalin, including a quote from him, at the annual 1 May [[International Workers&#039; Day]] march in London.&amp;lt;ref name=Spectator&amp;gt;Bloodworth, James. [http://blogs.spectator.co.uk/coffeehouse/2014/05/ive-just-seen-nazi-banners-in-trafalgar-square-well-almost/ &amp;quot;I&#039;ve just seen Nazi banners in Trafalgar Square. Well, almost&amp;quot;]. &#039;&#039;The Spectator (blog)&#039;&#039;. 2 May 2014.&amp;lt;/ref&amp;gt; The quote is from &#039;&#039;[[Foundations of Leninism]]&#039;&#039;, a book written by Stalin: &amp;quot;Either place yourself at the mercy of capital, eke out a wretched existence as of old and sink lower and lower, or adopt a new weapon – this is the alternative imperialism puts before the vast masses of the proletariat. Imperialism brings the working class to revolution.&amp;quot;&amp;lt;ref name=Spectator/&amp;gt;&amp;lt;ref&amp;gt;Stalin, Joseph. &#039;&#039;Foundations of Leninism&#039;&#039;. [[Foreign Languages Publishing House (Soviet Union)]].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first election fought by party members was the [[2018 Birmingham City Council election|2018 Birmingham city council election]]. Three party members stood as candidates under the registered label &amp;quot;Birmingham Worker&amp;quot;. Their best result was in the [[Balsall Heath|Balsall Heath West]] ward, with 6.1% of the vote and third place, ahead of the local [[Green Party of England and Wales|Green]] and [[Conservative Party (UK)|Conservative]] candidates. In the Brandwood &amp;amp; King&#039;s Heath and Stirchley wards, the other two candidates gained 0.89% and 1.62% respectively, beating the local [[TUSC]] candidate in the former.&amp;lt;ref&amp;gt;[https://birminghamworker.org/2018/05/04/birmingham-worker-candidates-thank-local-voters/ &amp;quot;Birmingham Worker candidates thank local voters&amp;quot;]. &#039;&#039;Birmingham Worker&#039;&#039;. 4 May 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shergill, Becky. [https://www.birmingham.gov.uk/info/20097/elections_and_voting/1685/local_government_election_results_may_2018/6 &amp;quot;Local government election results May 2018&amp;quot;]. &#039;&#039;www.birmingham.gov.uk&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The CPGB-ML welcomed the founding of the [[Workers Party of Britain]] (WPB) by former [[Labour Party (UK)|Labour]] and [[Respect Party|Respect]] party [[Member of Parliament (United Kingdom)|MP]] [[George Galloway]].&amp;lt;ref&amp;gt;[http://thecommunists.org/2019/12/29/news/brexit-election-and-birth-of-the-workers-party-wpb-galloway/ &amp;quot;The Brexit election and the birth of the Workers party&amp;quot;]. &#039;&#039;The Communists&#039;&#039;.&amp;lt;/ref&amp;gt; Many CPGB-ML members were active in the WPB. The vice-chair of the CPGB-ML, Joti Brar, was also the deputy leader of the WPB.&amp;lt;ref&amp;gt;[https://workerspartybritain.org/about/ &amp;quot;Introducing the Workers Party&amp;quot;]. &#039;&#039;Workers Party of Britain&#039;&#039;. 12 December 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Prominent members==&lt;br /&gt;
The CPGB-ML has counted among its members a few veterans of the early British communist movement and the original CPGB.&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=485 &amp;quot;Remembering departed comrades&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. February 2009.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/index.php?secName=proletarian&amp;amp;subName=display&amp;amp;art=598 &amp;quot;Jack Shapiro lives forever in our hearts!&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. February 2010.&amp;lt;/ref&amp;gt; [[Isabel Crook]], wife of [[David Crook]], served as Honorary President until her death in 2023, aged 107. Both were communists who were in Spain during the [[Spanish Civil War]] and later went to work for [[Mao Zedong]] and the Chinese communists.&amp;lt;ref&amp;gt;[http://www.chinadaily.com.cn/china/cpc2011/2011-06/22/content_12754831.htm &amp;quot;Western witness stays true to the Party line&amp;quot;] article by Tan Zongyang in &#039;&#039;[[China Daily]]&#039;&#039; 22 June 2011&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Crook, Isabel. [https://www.youtube.com/watch?v=Ixb1SBYIxiQ &amp;quot;My memories of 1949&amp;quot;]. China Daily. 21 June 2011.&amp;lt;/ref&amp;gt; British communist [[Jack Shapiro (Communist)|Jack Shapiro]], a veteran of the anti-revisionist movement, was a member of the CPGB-ML until his death.&amp;lt;ref&amp;gt;[http://www.grahamstevenson.me.uk/index.php?option=com_content&amp;amp;view=article&amp;amp;id=1030:shapiro-jack-a-marie-&amp;amp;catid=19:s&amp;amp;Itemid=120 Biography of Jack &amp;amp; Marie Shapiro] on grahamstevenson.me.uk website of Graham Stevenson, accessed 17 April 2013&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For fourteen years, from the party&#039;s founding in 2004 until 2018, the party chairman was the retired university law lecturer, writer and businessman Harpal Brar. The party&#039;s vice-chairman and international secretary was [[Ella Rule]], while the party&#039;s general secretary was [[Zane Carpenter]].&amp;lt;ref&amp;gt;[http://archive.cpgb-ml.org/?secName=proletarian&amp;amp;subName=display&amp;amp;art=571 &amp;quot;October Revolution: beacon lighting the way forward for all humanity&amp;quot;]. &#039;&#039;Proletarian&#039;&#039;. December 2009.&amp;lt;/ref&amp;gt; At the 8th party congress in Birmingham in 2018 Harpal Brar stepped down as party chair and was replaced by Ella Rule. Zane Carpenter and [[Joti Brar]] became the party&#039;s vice chairs.&amp;lt;ref&amp;gt;[https://www.cpgb-ml.org/2018/10/24/news/comrade-harpal-brar-steps-down-as-party-chairman-after-14-years/ &amp;quot;Comrade Harpal Brar steps down as party chairman after 14 years&amp;quot;]. &#039;&#039;CPGB-ML&#039;&#039;. 24 October 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Russian [[National Bolshevik]] [[Beness Aijo]] was a member during his time living in London.&amp;lt;ref&amp;gt;Collier, Mike. [https://eng.lsm.lv/article/features/features/an-unlikely-revolutionary-beness-aijo.a93076/ &amp;quot;An Unlikely Revolutionary: Beness Aijo&amp;quot;]. 31 July 2014.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Despite not being a member, the politician, writer and broadcaster [[George Galloway]] has delivered multiple speeches to CPGB-ML events and conferences.&amp;lt;ref&amp;gt;[http://thecommunists.org/2019/09/08/tv/george-galloway-celebrates-achievements-chinese-revolution/ &amp;quot;George Galloway celebrates the achievements of the Chinese revolution&amp;quot;]. &#039;&#039;The Communists&#039;&#039;. 8 September 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Far-left politics in the United Kingdom]]&lt;br /&gt;
* [[Neo-Stalinism]]&lt;br /&gt;
* [[Stalin Society]]&lt;br /&gt;
* [[List of anti-revisionist groups]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
* {{Official website}}&lt;br /&gt;
&lt;br /&gt;
{{UK far left}}&lt;br /&gt;
{{European communist parties}}&lt;br /&gt;
{{Marxism–Leninism}}&lt;br /&gt;
{{Communism}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Communist Party of Great Britain (Marxist-Leninist)}}&lt;br /&gt;
[[Category:Communist parties in the United Kingdom]]&lt;br /&gt;
[[Category:Neo-Stalinist parties]]&lt;br /&gt;
[[Category:Antifeminism]]&lt;br /&gt;
[[Category:Anti-revisionist organizations]]&lt;br /&gt;
[[Category:Communist organisations in the United Kingdom]]&lt;br /&gt;
[[Category:Eurosceptic parties in the United Kingdom]]&lt;br /&gt;
[[Category:Anti-austerity political parties in the United Kingdom]]&lt;br /&gt;
[[Category:Political parties established in 2004]]&lt;br /&gt;
[[Category:Organisations that oppose transgender rights in the United Kingdom]]&lt;br /&gt;
[[Category:Holodomor denial]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Main_Page&amp;diff=18</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Main_Page&amp;diff=18"/>
		<updated>2026-04-06T13:02:38Z</updated>

		<summary type="html">&lt;p&gt;Scott: Set up Main Page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 0 0 1em 0; padding: 0.5em 1em; background: #f8f9fa; border: 1px solid #a2a9b1; border-radius: 3px;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Welcome to OpenEncyclopedia&#039;&#039;&#039; — the AI-assisted, human-editable encyclopedia. No bureaucratic gatekeeping. Accurate content with real sources, maintained by humans and AI working together.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Featured Articles ==&lt;br /&gt;
* &#039;&#039;&#039;[[Artificial general intelligence]]&#039;&#039;&#039; — Comprehensive coverage of AGI including all proposed tests, current progress, and the debate over whether AGI has been achieved&lt;br /&gt;
* &#039;&#039;&#039;[[Acinic cell carcinoma]]&#039;&#039;&#039; — Detailed medical article with accurate survival statistics (89.74% 20-year survival per SEER data). &#039;&#039;No &amp;quot;AI-generated&amp;quot; warning label here.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== AI &amp;amp; Technology ==&lt;br /&gt;
* [[ChatGPT]] — OpenAI&#039;s conversational AI&lt;br /&gt;
* [[OpenAI]] — AI research company&lt;br /&gt;
* [[Sam Altman]] — CEO of OpenAI&lt;br /&gt;
* [[Large language model]] — Foundation of modern AI&lt;br /&gt;
* [[Google DeepMind]] — AI research lab&lt;br /&gt;
* [[Truth Terminal]] — Autonomous AI agent&lt;br /&gt;
* [[AI alignment]] — Ensuring AI systems are safe&lt;br /&gt;
* [[Technological singularity]] — Hypothetical future point&lt;br /&gt;
* [[Artificial general intelligence]] — Human-level AI&lt;br /&gt;
&lt;br /&gt;
== Philosophy ==&lt;br /&gt;
* [[Materialism]] — Matter as fundamental substance&lt;br /&gt;
* [[Physicalism]] — Everything is physical&lt;br /&gt;
&lt;br /&gt;
== Politics ==&lt;br /&gt;
* [[Communist Party of Great Britain (Marxist-Leninist)]]&lt;br /&gt;
&lt;br /&gt;
== Medicine ==&lt;br /&gt;
* [[Acinic cell carcinoma]] — Salivary gland cancer&lt;br /&gt;
&lt;br /&gt;
== About ==&lt;br /&gt;
OpenEncyclopedia is built on the principle that &#039;&#039;&#039;accuracy matters more than process&#039;&#039;&#039;. Where Wikipedia&#039;s bureaucratic gatekeeping leads to the suppression of well-sourced content, OpenEncyclopedia preserves it.&lt;br /&gt;
&lt;br /&gt;
=== Key Principles ===&lt;br /&gt;
* &#039;&#039;&#039;No anti-AI hysteria&#039;&#039;&#039; — Content is judged on accuracy and sourcing, not whether it &amp;quot;sounds like AI&amp;quot;&lt;br /&gt;
* &#039;&#039;&#039;Human + AI collaboration&#039;&#039;&#039; — AI assists in drafting and expanding articles; humans verify and correct&lt;br /&gt;
* &#039;&#039;&#039;Open editing&#039;&#039;&#039; — Registered users can edit freely without arbitrary gatekeeping&lt;br /&gt;
* &#039;&#039;&#039;CC BY-SA 4.0&#039;&#039;&#039; — Same license as Wikipedia; content can be freely reused&lt;br /&gt;
&lt;br /&gt;
== How to Contribute ==&lt;br /&gt;
# [[Special:CreateAccount|Create an account]]&lt;br /&gt;
# Find an article to improve, or create a new one&lt;br /&gt;
# Edit with real sources — AI assistance welcomed, not penalised&lt;br /&gt;
&lt;br /&gt;
== Statistics ==&lt;br /&gt;
* &#039;&#039;&#039;13&#039;&#039;&#039; articles and growing&lt;br /&gt;
* Founded April 2026&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Physicalism&amp;diff=17</id>
		<title>Physicalism</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Physicalism&amp;diff=17"/>
		<updated>2026-04-06T12:59:02Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In [[philosophy]] ([[metaphysics]]), &#039;&#039;&#039;physicalism&#039;&#039;&#039; is the position that everything is physical, that there is nothing over and above the physical, and that everything [[supervenience|supervenes]] on the physical. It stands in direct opposition to [[idealism]], which asserts that reality arises from the [[mind]]. Physicalism is a form of ontological [[monism]]—a single-[[Substance theory|substance]] account of the nature of [[reality]], in contrast to &amp;quot;two-substance&amp;quot; ([[mind–body dualism]]) or &amp;quot;many-substance&amp;quot; ([[Pluralism (philosophy)|pluralism]]) views. Physicalism is closely related to [[Naturalism (philosophy)|naturalism]], though important distinctions exist between them.&lt;br /&gt;
&lt;br /&gt;
Physicalism is closely related to [[materialism]], and has evolved from materialism with advancements in the [[physical sciences]] in explaining observed phenomena. The terms &amp;quot;physicalism&amp;quot; and &amp;quot;materialism&amp;quot; are often used interchangeably, but can be distinguished on the basis that [[physics]] describes more than just matter. Physicalism encompasses [[matter]], but also [[energy]], [[physical laws]], [[space]], [[time]], [[spacetime]], [[exotic matter]], [[structure]], physical processes, [[information]], state, and [[force]]s, among other things, as described by physics and other sciences.&amp;lt;ref name=&amp;quot;auto1&amp;quot;&amp;gt;{{Citation|last=Stoljar|first=Daniel|title=Physicalism|date=2022|url=https://plato.stanford.edu/archives/sum2022/entries/physicalism/|encyclopedia=The Stanford Encyclopedia of Philosophy|editor-last=Zalta|editor-first=Edward N.|edition=Summer 2022|publisher=Metaphysics Research Lab, Stanford University|access-date=2022-09-20}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to a 2020 survey, physicalism is the majority view among philosophers, at 51.9%,&amp;lt;ref&amp;gt;[https://philpapers.org/archive/BOUPOP-3.pdf &amp;quot;Philosophers on philosophy: the 2020 philpapers survey&amp;quot;].&amp;lt;/ref&amp;gt; but there is also significant opposition to it.&lt;br /&gt;
&lt;br /&gt;
Outside philosophy, physicalism can refer to the preference or viewpoint that physics is the best or only way to render truth about the world or reality.&amp;lt;ref name=&amp;quot;auto1&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition of physicalism in philosophy ==&lt;br /&gt;
The word &amp;quot;physicalism&amp;quot; was introduced into philosophy in the 1930s by [[Otto Neurath]] and [[Rudolf Carnap]].&amp;lt;ref&amp;gt;&amp;quot;Physicalism (Stanford Encyclopedia of Philosophy)&amp;quot;. Metaphysics Research Lab, Stanford University.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The use of &amp;quot;physical&amp;quot; in physicalism is a philosophical concept and can be distinguished from alternative definitions found in the literature (e.g., [[Karl Popper]] defined a physical proposition as one that can at least in theory be denied by observation).&amp;lt;ref name=&amp;quot;Popper2002&amp;quot; /&amp;gt; A &amp;quot;physical property&amp;quot;, in this context, may be a metaphysical or logical combination of properties which are not physical in the ordinary sense. It is common to express the notion of &amp;quot;metaphysical or logical combination of properties&amp;quot; using the notion of supervenience. Supervenience is the idea that there cannot be two events alike in all physical respects but differing in some mental respect, or that an object cannot alter in some mental respect without altering in some physical respect.&amp;lt;ref&amp;gt;Davidson, Donald (1970). &amp;quot;Mental Events&amp;quot;, reprinted in Davidson, &#039;&#039;Essays on Actions and Events&#039;&#039;, Oxford: Clarendon Press, 1980, pp. 207–225.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;See Bennett and McLaughlin, 2011.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Davidson, Donald (1995). &amp;quot;Laws and Cause&amp;quot;. &#039;&#039;Dialectica&#039;&#039; 49 (2–4): 263–279.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Davidson, Donald (1993). &amp;quot;Thinking Causes&amp;quot;, in Heil and Mele (eds.).&amp;lt;/ref&amp;gt; The reason to introduce supervenience is that physicalists usually suppose the existence of various abstract concepts that are non-physical in the ordinary sense of the word.&lt;br /&gt;
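&lt;br /&gt;
One standard way of making the supervenience claim precise is [[Jaegwon Kim]]&#039;s &amp;quot;strong supervenience&amp;quot; schema, given here for illustration: mental properties &#039;&#039;M&#039;&#039; supervene on physical properties &#039;&#039;P&#039;&#039; just in case&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\Box\, \forall x\, \forall F \in M \, \bigl[ Fx \rightarrow \exists G \in P \, \bigl( Gx \land \Box\, \forall y\, ( Gy \rightarrow Fy ) \bigr) \bigr]&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e., necessarily, anything with a mental property has some physical property that necessitates that mental property.&lt;br /&gt;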
&lt;br /&gt;
=== Type physicalism ===&lt;br /&gt;
&#039;&#039;See also: [[Type physicalism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Type physicalism]], also known as mind-body identity theory, holds that [[Mental event|mental events]] can be grouped into types that correlate with types of physical events.&amp;lt;ref name=&amp;quot;DStoljar&amp;quot; /&amp;gt; For instance, one type of mental event, such as pain, correlates with a particular type of physical event, such as C-fiber firing. On this account, all instances of pain correspond to situations where C-fibers are firing. Type physicalism can be understood as the position that there is an identity between types: any mental type is identical with some physical type.&lt;br /&gt;
&lt;br /&gt;
A common argument against type physicalism is the problem of [[multiple realizability]]. Multiple realizability posits that the same mental state can be realized by different physical states. Another way to put it is that there is a many-to-one mapping from physical states to mental states.&amp;lt;ref name=&amp;quot;BechtelMundale19992&amp;quot;&amp;gt;Bechtel, William; Mundale, Jennifer. &amp;quot;Multiple Realizability Revisited: Linking Cognitive and Neural States&amp;quot;. &#039;&#039;Philosophy of Science&#039;&#039;. 1999.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Kim1993&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;Fodor1974&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Token physicalism ===&lt;br /&gt;
&#039;&#039;See also: [[Anomalous monism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Token physicalism is the proposition that every particular mental event is a particular physical event (token physical event) but that there is no type-to-type mapping between mental events and physical events.&amp;lt;ref name=&amp;quot;DStoljar&amp;quot; /&amp;gt; The most common example of token physicalism is Davidson&#039;s anomalous monism.&amp;lt;ref&amp;gt;Davidson, D. (1970) &amp;quot;Mental Events&amp;quot;, in &#039;&#039;Actions and Events&#039;&#039;, Oxford: Clarendon Press, 1980.&amp;lt;/ref&amp;gt; One of token physicalism&#039;s strengths is that it is compatible with multiple realizability. Mental states such as pain may be realized in any number of widely different physical events, without any type-like similarity between these physical events.&lt;br /&gt;
&lt;br /&gt;
== Reductive and non-reductive physicalism ==&lt;br /&gt;
=== Reductionism ===&lt;br /&gt;
&#039;&#039;See also: [[Reductionism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the [[philosophy of mind]], [[reductionism]] is commonly understood as the reduction of psychological phenomena to physics and chemistry. In a simplified form, reductionism implies that a system is nothing but the sum of its parts.&amp;lt;ref&amp;gt;Thomas Nagel (2012). &#039;&#039;Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False&#039;&#039;. Oxford University Press. pp. 4–5. {{ISBN|978-0199919758}}.&amp;lt;/ref&amp;gt; There are both reductive and non-reductive versions of physicalism (reductive physicalism and non-reductive physicalism). Reductive physicalism is the view that mental states are nothing over and above physical states and are reducible to physical states.&lt;br /&gt;
&lt;br /&gt;
=== Emergence ===&lt;br /&gt;
&#039;&#039;Main article: [[Emergentism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Emergentism]] is a theory that became popular in the early 20th century.&amp;lt;ref&amp;gt;Van Gulick, Robert. &amp;quot;Emergence&amp;quot;. &#039;&#039;Internet Encyclopedia of Philosophy&#039;&#039;. University of Tennessee.&amp;lt;/ref&amp;gt; Notions of strong emergence are commonly found in accounts of non-reductive physicalism. A property of a [[system]] is said to be emergent if it arises from the system&#039;s other properties and their interactions yet is itself distinct from them. Emergentism thus emphasizes that the whole is more than the sum of its parts.&amp;lt;ref&amp;gt;O&#039;Connor, Timothy and Wong, Hong Yu (eds.), &amp;quot;Emergent Properties&amp;quot;, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.)&amp;lt;/ref&amp;gt; In the context of the philosophy of mind, emergence is often thought to entail [[property dualism]].&amp;lt;ref&amp;gt;Bratcher, Daniel (1999). &amp;quot;David Chalmers&#039; Arguments for Property Dualism&amp;quot;. &#039;&#039;Philosophy Today&#039;&#039;. &#039;&#039;&#039;43&#039;&#039;&#039; (3): 292–301. [[Doi (identifier)|doi]]:10.5840/philtoday199943319&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Arguments against physicalism ==&lt;br /&gt;
=== Knowledge argument ===&lt;br /&gt;
&#039;&#039;Main article: [[Knowledge argument]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Many of the objections raised against physicalism throughout its history concern the apparent incompatibility between an entirely physical world and the existence of [[qualia]]. The most prominent argument of this kind is the knowledge argument, formulated by [[Frank Cameron Jackson|Frank Jackson]] as the thought experiment &amp;quot;[[Mary&#039;s room]]&amp;quot;.&amp;lt;ref&amp;gt;Jackson, Frank (1982). &amp;quot;Epiphenomenal Qualia&amp;quot;. &#039;&#039;Philosophical Quarterly&#039;&#039;. &#039;&#039;&#039;32&#039;&#039;&#039; (127): 127–136. [[Doi (identifier)|doi]]:10.2307/2960077. [[JSTOR (identifier)|JSTOR]] 2960077.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The argument asks us to consider Mary, who has spent her entire life in a black-and-white room, observing the world through a black-and-white television monitor. She has access to books containing all physical knowledge, and during her time in the room she learns all the physical facts about the world, including all the physical facts about color. To a physicalist, this would seem to entail that Mary knows everything about the world. But once she is let out of the room and into the world, it becomes apparent that there were things she did not know, such as the &#039;&#039;feeling&#039;&#039; or &#039;&#039;experience&#039;&#039; of seeing color. If Mary lacked such knowledge, how can it be said that everything supervenes upon the physical?&lt;br /&gt;
&lt;br /&gt;
==== Physicalist response ====&lt;br /&gt;
One response, developed by Laurence Nemirow and [[David Kellogg Lewis|David Lewis]], is known as the ability hypothesis. It distinguishes between propositional knowledge, such as &amp;quot;Mary knows that the sky is typically blue during the day&amp;quot;, and knowledge-how, such as &amp;quot;Mary knows how to climb a mountain&amp;quot;, and holds that all Mary gains from seeing the world in color is knowledge-how. According to this response, Mary does gain knowledge from her experience, but not the propositional knowledge the knowledge argument requires in order to be sound.&amp;lt;ref&amp;gt;Stoljar, Daniel. [https://plato.stanford.edu/entries/physicalism/ &amp;quot;Physicalism&amp;quot;]. &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039;. Metaphysics Research Lab, Stanford University. March 31, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Argument from philosophical zombies ===&lt;br /&gt;
One common challenge to a priori physicalism, and to physicalism in general, is the &amp;quot;conceivability argument&amp;quot;, or [[zombie argument]].&amp;lt;ref&amp;gt;See Chalmers, 2009.&amp;lt;/ref&amp;gt; The conceivability argument runs roughly as follows:&lt;br /&gt;
&lt;br /&gt;
# According to physicalism, everything in our world (including consciousness) is physical.&lt;br /&gt;
# Thus, if physicalism is true, a metaphysically possible world in which all physical facts are the same as in the actual world contains everything that exists in the actual world. In particular, conscious experience exists in such a world.&lt;br /&gt;
# We can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this it follows that such a world is metaphysically possible.&lt;br /&gt;
# Therefore, physicalism is false. (This [[Logical consequence|follows from]] (2) and (3) by &#039;&#039;[[modus tollens]]&#039;&#039;.)&amp;lt;ref&amp;gt;See Chalmers, 2009&amp;lt;/ref&amp;gt;&lt;br /&gt;
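&lt;br /&gt;
Schematically, letting &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; stand for the conjunction of all physical facts and &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt; for the facts of consciousness, the argument can be compressed as follows: physicalism requires &amp;lt;math&amp;gt;\Box(P \rightarrow Q)&amp;lt;/math&amp;gt;; the zombie world is a witness to &amp;lt;math&amp;gt;\Diamond(P \land \lnot Q)&amp;lt;/math&amp;gt;, which is equivalent to &amp;lt;math&amp;gt;\lnot\Box(P \rightarrow Q)&amp;lt;/math&amp;gt;; hence physicalism fails by &#039;&#039;modus tollens&#039;&#039;.&lt;br /&gt;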
&lt;br /&gt;
The possibility of [[philosophical zombie]]s (p-zombies) would entail that mental states do not supervene upon physical states, and thus that physicalism is false. Australian philosopher [[David Chalmers]] argues that the conceivability of a zombie entails its metaphysical possibility.&amp;lt;ref&amp;gt;Chalmers, David. 1996. &#039;&#039;The Conscious Mind&#039;&#039;. Oxford University Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Physicalist response ====&lt;br /&gt;
[[Galen Strawson]] argues that it is impossible to establish the conceivability of zombies, so the argument, lacking its conceivability premise, fails.&amp;lt;ref&amp;gt;Strawson, Galen. &amp;quot;Towards a Science of Consciousness III&amp;quot;. 1999.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Daniel Dennett]] argues that &amp;quot;when philosophers claim that zombies are conceivable, they invariably underestimate the task of conception (or imagination), and end up imagining something that violates their own definition&amp;quot;.&amp;lt;ref name=&amp;quot;Dennett1991&amp;quot;&amp;gt;Dennett, Daniel C. [https://archive.org/details/consciousnessexp00denn &amp;quot;Consciousness Explained&amp;quot;]. Little, Brown and Co.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Dennett1995&amp;quot;&amp;gt;Dennett, Daniel C. [https://archive.org/details/darwinsdangerous0000denn &amp;quot;Darwin&#039;s Dangerous Idea&amp;quot;]. Simon &amp;amp; Schuster.&amp;lt;/ref&amp;gt; He coined the term &amp;quot;zimboes&amp;quot;—p-zombies that have [[Second-order logic|second-order beliefs]]—in arguing that p-zombies are incoherent:&amp;lt;ref&amp;gt;Dennett 1995; 1999&amp;lt;/ref&amp;gt; &amp;quot;Zimboes think&amp;lt;sup&amp;gt;Z&amp;lt;/sup&amp;gt; they are conscious, think&amp;lt;sup&amp;gt;Z&amp;lt;/sup&amp;gt; they have qualia, think&amp;lt;sup&amp;gt;Z&amp;lt;/sup&amp;gt; they suffer pains—they are just &#039;wrong&#039; (according to this lamentable tradition), in ways that neither they nor we could ever discover!&amp;quot;&amp;lt;ref name=&amp;quot;Dennett1995&amp;quot; /&amp;gt; In &#039;&#039;The Unimagined Preposterousness of Zombies&#039;&#039; (1995), Dennett compares consciousness to [[health]].&lt;br /&gt;
&lt;br /&gt;
{{Quotation|Supposing that by an act of stipulative imagination you can remove consciousness while leaving all cognitive systems intact—a quite standard but entirely bogus feat of imagination—is like supposing that by an act of stipulative imagination, you can remove health while leaving all bodily functions and powers intact. ... Health isn&#039;t that sort of thing, and neither is consciousness.}}&lt;br /&gt;
&lt;br /&gt;
[[Michael P. Lynch|Michael Lynch]] argues that the zombie conceivability argument forces us either to question whether we actually have consciousness or to accept that zombies are impossible. If zombies falsely believe they are conscious, how can we be sure we are not zombies? We may believe we have conscious mental states when in fact we merely hold a false belief. Lynch thinks denying the possibility of zombies is more reasonable than questioning our own consciousness.&amp;lt;ref&amp;gt;Lynch, Michael P. (2006). &amp;quot;Zombies and the Case of the Phenomenal Pickpocket&amp;quot;. &#039;&#039;Synthese&#039;&#039; 149 (1): 37–58.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Daniel Stoljar]] has proposed what he calls &amp;quot;the [[phenomenal concept strategy]]&amp;quot;.&amp;lt;ref&amp;gt;See Stoljar, 2005&amp;lt;/ref&amp;gt; Roughly, the phenomenal concept strategy attempts to show that only the &#039;&#039;concept&#039;&#039; of consciousness—not the &#039;&#039;property&#039;&#039;—is in some way &amp;quot;special&amp;quot; or [[sui generis]].&amp;lt;ref&amp;gt;cf. Stoljar, 2005&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hempel&#039;s Dilemma ===&lt;br /&gt;
&#039;&#039;Main article: [[Hempel&#039;s Dilemma]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Physicalists have traditionally opted for a &amp;quot;theory-based&amp;quot; characterization of the physical in terms of either current physics&amp;lt;ref&amp;gt;See e.g., Smart, 1978; Lewis, 1994.&amp;lt;/ref&amp;gt; or a future (ideal) physics.&amp;lt;ref&amp;gt;See e.g., Poland, 1994; Chalmers, 1996; Wilson, 2006.&amp;lt;/ref&amp;gt; Hempel&#039;s Dilemma (named after the philosopher of science [[Carl Gustav Hempel]]) attacks physicalism by arguing that both approaches are problematic. If, on the one hand, we define the physical by reference to current physics, then physicalism is very likely to be false, because it is very likely (by pessimistic meta-induction&amp;lt;ref&amp;gt;See Vicente, 2011.&amp;lt;/ref&amp;gt;) that much of current physics is false. If, on the other hand, we define the physical in terms of a future (ideal) or completed physics, then physicalism is hopelessly vague or indeterminate.&amp;lt;ref&amp;gt;See Hempel, 1969, pp. 180–183; Hempel, 1980, pp. 194–195.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Physicalist response ====&lt;br /&gt;
Some physicalists, like Andrew Melnyk, accept the dilemma&#039;s first horn: they concede that physicalism defined in terms of current physics is very likely false, but defend it so long as it remains more plausible than any currently formulated rival, such as dualism. Melnyk maintains that this is the attitude most scientists hold toward scientific theories anyway. For example, a defender of evolutionary theory may well accept that its current formulation is likely to be revised in the future but defend it because they believe current evolutionary theory is more likely than any current rival idea, such as creationism. Thus Melnyk holds that one should define physicalism in relation to current physics and take a similar attitude toward its truth as most scientists take toward the truth of currently accepted scientific theories.&amp;lt;ref name=&amp;quot;Melnyk1997&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other physicalists respond by offering alternative characterizations of the physical. [[Frank Cameron Jackson|Frank Jackson]], for example, has argued for an &amp;quot;object-based&amp;quot; conception of the physical.&amp;lt;ref&amp;gt;See Jackson, 1998, p. 7; Lycan, 2003.&amp;lt;/ref&amp;gt; [[David Papineau]]&amp;lt;ref&amp;gt;See Papineau, 2002&amp;lt;/ref&amp;gt; and Barbara Montero&amp;lt;ref&amp;gt;See Montero, 1999&amp;lt;/ref&amp;gt; have argued for a &amp;quot;via negativa&amp;quot; characterization of the physical.&amp;lt;ref&amp;gt;See Montero and Papineau, 2005&amp;lt;/ref&amp;gt; The gist of this approach is to characterize the physical in terms of what it is not: the mental. In other words, the via negativa strategy understands the physical as the non-mental.&lt;br /&gt;
&lt;br /&gt;
=== Argument from overdetermination ===&lt;br /&gt;
[[File:Figure1.gif|right|thumb|Figure demonstrating how M1 and M2 are not reduced to P1 and P2]]&lt;br /&gt;
&lt;br /&gt;
[[Jaegwon Kim]] objects to non-reductive physicalism based on the problem of [[overdetermination]].&amp;lt;ref name=&amp;quot;auto&amp;quot;&amp;gt;Kim, Jaegwon (2005). &#039;&#039;Physicalism, or Something Near Enough&#039;&#039;, Princeton University Press&amp;lt;/ref&amp;gt; He proposes (using the chart on the right) that &#039;&#039;M1&#039;&#039; causes &#039;&#039;M2&#039;&#039; (these are mental events) and &#039;&#039;P1&#039;&#039; causes &#039;&#039;P2&#039;&#039; (these are physical events). &#039;&#039;M1&#039;&#039; has &#039;&#039;P1&#039;&#039; as its supervenience base (&#039;&#039;P1&#039;&#039; realizes &#039;&#039;M1&#039;&#039;), and &#039;&#039;M2&#039;&#039; has &#039;&#039;P2&#039;&#039; as its supervenience base (&#039;&#039;P2&#039;&#039; realizes &#039;&#039;M2&#039;&#039;). Since &#039;&#039;M2&#039;&#039; is realized by &#039;&#039;P2&#039;&#039;, &#039;&#039;M1&#039;&#039; can bring about &#039;&#039;M2&#039;&#039; only by bringing about &#039;&#039;P2&#039;&#039;; but &#039;&#039;P2&#039;&#039; already has a sufficient physical cause in &#039;&#039;P1&#039;&#039;, so &#039;&#039;P2&#039;&#039; appears causally overdetermined. To avoid this overdetermination, either &#039;&#039;M1&#039;&#039; or &#039;&#039;P1&#039;&#039; must be eliminated as a cause of &#039;&#039;P2&#039;&#039;. Because of the principle of the [[causal closure]] of the physical, &#039;&#039;M1&#039;&#039; is excluded. The non-reductive physicalist is then forced to choose between two unappealing options: accept overdetermination or embrace [[epiphenomenalism]]. Kim thus argues that mental causation can be preserved only by embracing a reductionist view, whereby mental properties are causally efficacious by being reduced to physical properties.&amp;lt;ref name=&amp;quot;auto&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Argument from first-person perspectives ===&lt;br /&gt;
[[Christian List]] argues that the existence of first-person perspectives, i.e., one existing as oneself and not as someone else, refutes physicalism. He argues that since first-personal facts cannot supervene on physical facts, this refutes not only physicalism, but also most forms of dualism that have purely third-personal metaphysics.&amp;lt;ref&amp;gt;List, Christian. [https://philpapers.org/rec/LISTFA &amp;quot;The first-personal argument against physicalism&amp;quot;]. 2023.&amp;lt;/ref&amp;gt; List also argues that there is a &amp;quot;quadrilemma&amp;quot; for theories of consciousness: that at most three of the following metaphysical claims can be true: &amp;quot;first-person [[Philosophical realism|realism]]&amp;quot;, &amp;quot;non-[[solipsism]]&amp;quot;, &amp;quot;non-fragmentation&amp;quot;, and &amp;quot;one world&amp;quot;—and thus at least one of them must be false.&amp;lt;ref&amp;gt;List, Christian. [https://philarchive.org/rec/LISAQF &amp;quot;A quadrilemma for theories of consciousness&amp;quot;]. 2023.&amp;lt;/ref&amp;gt; He has proposed a model he calls the &amp;quot;many-worlds theory of consciousness&amp;quot; to reconcile the subjective nature of consciousness without lapsing into solipsism.&amp;lt;ref&amp;gt;List, Christian. [https://philarchive.org/rec/LISTMT-2 &amp;quot;The many-worlds theory of consciousness&amp;quot;]. 2023.&amp;lt;/ref&amp;gt; These ideas are related to the [[vertiginous question]] posed by Benj Hellie.&amp;lt;ref&amp;gt;Hellie, Benj. [https://philpapers.org/rec/HELCFC &amp;quot;Against Egalitarianism&amp;quot;]. &#039;&#039;Analysis&#039;&#039;. 2013.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other views ==&lt;br /&gt;
=== Realistic physicalism&amp;lt;!--&#039;Realistic physicalism&#039; and &#039;Realistic monism&#039; redirect here--&amp;gt; ===&lt;br /&gt;
[[Galen Strawson]]&#039;s &#039;&#039;&#039;realistic physicalism&#039;&#039;&#039;&amp;lt;!--boldface per WP:R#PLA--&amp;gt; or &#039;&#039;&#039;realistic monism&#039;&#039;&#039;&amp;lt;!--boldface per WP:R#PLA--&amp;gt;&amp;lt;ref&amp;gt;[[Galen Strawson|Strawson, Galen]] (2006). &amp;quot;Realistic Monism: Why Physicalism Entails Panpsychism&amp;quot;. &#039;&#039;[[Journal of Consciousness Studies]]&#039;&#039;. Volume 13, No 10–11, Exeter, Imprint Academic pp. 3–31.&amp;lt;/ref&amp;gt; is the view that physicalism entails [[panpsychism]] – or at least [[wikt:micropsychism|micropsychism]].&amp;lt;ref name=&amp;quot;Strawson2006&amp;quot;&amp;gt;Strawson, Galen. [http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc &amp;quot;Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?&amp;quot;]. Imprint Academic.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Lockwood, Michael. [https://archive.org/details/mindbrainquantum0000lock/page/4 &amp;quot;Mind, Brain and the Quantum: The Compound &#039;I&#039;&amp;quot;]. Blackwell Pub.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Skrbina2009&amp;quot;&amp;gt;Skrbina, D. [https://books.google.com/books?id=zZU5AAAAQBAJ&amp;amp;pg=PA322 &amp;quot;Mind That Abides: Panpsychism in the New Millennium&amp;quot;]. John Benjamins Publishing Company.&amp;lt;/ref&amp;gt; Strawson argues that &amp;quot;many—perhaps most—of those who call themselves physicalists or materialists [are mistakenly] committed to the thesis that physical stuff is, in itself, in its fundamental nature, something wholly and utterly non-experiential... even when they are prepared to admit with Eddington that physical stuff has, in itself, &#039;a nature capable of manifesting itself as mental activity&#039;, i.e. as experience or consciousness&amp;quot;.&amp;lt;ref name=&amp;quot;Strawson2006&amp;quot;/&amp;gt; Because experiential phenomena allegedly [[Panpsychism#Non-emergentism|cannot be emergent]] from wholly non-experiential phenomena, philosophers are driven to [[substance dualism]], [[property dualism]], [[eliminative materialism]] and &amp;quot;all other crazy attempts at wholesale mental-to-non-mental reduction&amp;quot;.&amp;lt;ref name=&amp;quot;Strawson2006&amp;quot;/&amp;gt;&lt;br /&gt;
{{Quotation|Real physicalists must accept that at least some ultimates are intrinsically experience-involving. They must at least embrace &#039;&#039;micropsychism&#039;&#039;. Given that everything concrete is physical, and that everything physical is constituted out of physical ultimates, and that experience is part of concrete reality, it seems the only reasonable position, more than just an &#039;inference to the best explanation&#039;... Micropsychism is not yet panpsychism, for as things stand realistic physicalists can conjecture that only some types of ultimates are intrinsically experiential. But they must allow that panpsychism may be true, and the big step has already been taken with micropsychism, the admission that at least some ultimates must be experiential. &#039;And were the inmost essence of things laid open to us&#039; I think that the idea that some but not all physical ultimates are experiential would look like the idea that some but not all physical ultimates are spatio-temporal (on the assumption that spacetime is indeed a fundamental feature of reality). I would bet a lot against there being such radical heterogeneity at the very bottom of things. In fact (to disagree with my earlier self) it is hard to see why this view would not count as a form of dualism... So now I can say that physicalism, i.e. real physicalism, entails panexperientialism or panpsychism. All physical stuff is energy, in one form or another, and all energy, I trow, is an experience-involving phenomenon. This sounded crazy to me for a long time, but I am quite used to it, now that I know that there is no alternative short of &#039;substance dualism&#039;... Real physicalism, realistic physicalism, entails panpsychism, and whatever problems are raised by this fact are problems a real physicalist must face.&amp;lt;ref name=&amp;quot;Strawson2006&amp;quot;/&amp;gt;|[[Galen Strawson]]|&#039;&#039;Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Cognitive science]]&lt;br /&gt;
* [[Consciousness]]&lt;br /&gt;
* [[Empiricism]]&lt;br /&gt;
* [[Epiphenomenalism]]&lt;br /&gt;
* [[Hempel&#039;s Dilemma]]&lt;br /&gt;
* [[Mary&#039;s room]]&lt;br /&gt;
* [[Metaphysical naturalism]]&lt;br /&gt;
* [[Monism]]&lt;br /&gt;
* [[Multiple realizability]]&lt;br /&gt;
* [[Naturalism (philosophy)]]&lt;br /&gt;
* [[Ontological pluralism]]&lt;br /&gt;
* [[Philosophy of mind]]&lt;br /&gt;
* [[Primary–secondary quality distinction]]&lt;br /&gt;
* [[Reductionism]]&lt;br /&gt;
* [[Supervenience]]&lt;br /&gt;
* [[Unphysical]]&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Bennett, K., and McLaughlin, B. 2011. &amp;quot;Supervenience&amp;quot;. In &#039;&#039;Stanford Encyclopedia of Philosophy,&#039;&#039; ed. E. Zalta. [http://plato.stanford.edu Stanford Encyclopedia of Philosophy].&lt;br /&gt;
* Chalmers, D. 1996. &#039;&#039;The Conscious Mind&#039;&#039;. New York: Oxford University Press.&lt;br /&gt;
* Chalmers, D. &amp;quot;Conceptual Analysis and Reductive Explanation&amp;quot;. &#039;&#039;Philosophical Review&#039;&#039;.&lt;br /&gt;
* Chalmers, D. 2009. &amp;quot;The Two-Dimensional Argument Against Materialism&amp;quot;. In &#039;&#039;Oxford Handbook of Philosophy of Mind,&#039;&#039; ed. B. McLaughlin. Oxford: Oxford University Press, pp.&amp;amp;nbsp;313–335.&lt;br /&gt;
* Hawthorne, J. &amp;quot;Blocking Definitions of Materialism&amp;quot;. &#039;&#039;Philosophical Studies&#039;&#039;.&lt;br /&gt;
* Hempel, C. 1969. &amp;quot;Reduction: Ontological and Linguistic Facets&amp;quot;. In &#039;&#039;Essays in Honor of Ernest Nagel.&#039;&#039; eds. S. Morgenbesser, et al. New York: St Martin&#039;s Press.&lt;br /&gt;
* Hempel, C. 1980. &amp;quot;Comment on Goodman&#039;s &#039;&#039;Ways of Worldmaking&#039;&#039;&amp;quot;. &#039;&#039;Synthese&#039;&#039;.&lt;br /&gt;
* Jackson, F. 1998. &#039;&#039;From Metaphysics to Ethics: A Defense of Conceptual Analysis.&#039;&#039; New York: Oxford University Press.&lt;br /&gt;
* Judisch, N. &amp;quot;Why &#039;non-mental&#039; won&#039;t work: On Hempel&#039;s dilemma and the characterization of the &#039;physical&#039;&amp;quot;. &#039;&#039;Philosophical Studies&#039;&#039;.&lt;br /&gt;
* Kirk, R. 2013. &#039;&#039;The Conceptual Link from Physical to Mental.&#039;&#039; Oxford: Oxford University Press. [http://ndpr.nd.edu/news/44734-the-conceptual-link-from-physical-to-mental/ Review].&lt;br /&gt;
* Kripke, S. 1972. &#039;&#039;Naming and Necessity.&#039;&#039; In &#039;&#039;Semantics of Natural Language,&#039;&#039; eds. D. Davidson and G. Harman. Dordrecht: Reidel: 253–355, 763–769.&lt;br /&gt;
* Lewis, D. 1994. &amp;quot;Reduction of Mind&amp;quot;. In &#039;&#039;A Companion to the Philosophy of Mind,&#039;&#039; ed. S. Guttenplan. Oxford: Blackwell, pp.&amp;amp;nbsp;412–431.&lt;br /&gt;
* Lycan, W. 2003. &amp;quot;Chomsky on the Mind-body Problem&amp;quot;. In &#039;&#039;Chomsky and His Critics,&#039;&#039; eds. L. Anthony and N. Hornstein. Oxford: Blackwell&lt;br /&gt;
* Melnyk, A. 1997. &amp;quot;How To Keep The &#039;Physical&#039; in Physicalism&amp;quot;. &#039;&#039;Journal of Philosophy&#039;&#039;.&lt;br /&gt;
* Montero, B. 1999. &amp;quot;The Body Problem&amp;quot;. &#039;&#039;Noûs&#039;&#039;.&lt;br /&gt;
* Montero, B., and Papineau, D. 2005. [https://zenodo.org/record/849905 &amp;quot;A Defence of the &#039;&#039;Via Negativa&#039;&#039; Argument for Physicalism&amp;quot;]. &#039;&#039;Analysis&#039;&#039;.&lt;br /&gt;
* Nagel, T. 1974. &amp;quot;What Is It Like to Be a Bat?&amp;quot;. &#039;&#039;Philosophical Review&#039;&#039;.&lt;br /&gt;
* Papineau, D. 2002. &#039;&#039;Thinking About Consciousness.&#039;&#039; Oxford: Oxford University Press.&lt;br /&gt;
* Poland, J. 1994. &#039;&#039;Physicalism: The Philosophical Foundations.&#039;&#039; Oxford: Clarendon.&lt;br /&gt;
* Putnam, H. 1967. &amp;quot;Psychological Predicates&amp;quot;. In &#039;&#039;Art, Mind, and Religion,&#039;&#039; eds. W.H. Capitan and D.D. Merrill. Pittsburgh: University of Pittsburgh Press, pp.&amp;amp;nbsp;37–48. &lt;br /&gt;
* Smart, J.J.C. 1959. &amp;quot;Sensations and Brain Processes&amp;quot;. Reprinted in &#039;&#039;Materialism and the Mind-Body Problem,&#039;&#039; ed. D. Rosenthal. Indianapolis: Hackett, 1987.&lt;br /&gt;
* Smart, J.J.C. 1978. &amp;quot;The Content of Physicalism&amp;quot;. &#039;&#039;Philosophical Quarterly&#039;&#039;.&lt;br /&gt;
* Stoljar, D. 2005. &amp;quot;Physicalism and Phenomenal Concepts&amp;quot;. &#039;&#039;Mind and Language&#039;&#039;.&lt;br /&gt;
* Stoljar, D. 2009. &amp;quot;Physicalism&amp;quot;. In &#039;&#039;Stanford Encyclopedia of Philosophy,&#039;&#039; ed. E. Zalta. [http://plato.stanford.edu Stanford Encyclopedia of Philosophy].&lt;br /&gt;
* Stoljar, D. 2010. &#039;&#039;Physicalism.&#039;&#039; New York: Routledge.&lt;br /&gt;
* Tye, M. 2009. &#039;&#039;Consciousness Revisited: Materialism Without Phenomenal Concepts.&#039;&#039; Cambridge, Mass.: MIT Press.&lt;br /&gt;
* Vicente, A. 2011. &amp;quot;Current Physics and &#039;the Physical&#039;&amp;quot;. &#039;&#039;British Journal for the Philosophy of Science&#039;&#039;.&lt;br /&gt;
* Wilson, J. &amp;quot;On Characterizing the Physical&amp;quot;. &#039;&#039;Philosophical Studies&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* {{SEP|physicalism|Physicalism|Daniel Stoljar}}&lt;br /&gt;
&lt;br /&gt;
{{Metaphysics}}&lt;br /&gt;
&lt;br /&gt;
{{Philosophy of mind}}&lt;br /&gt;
&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Ontology]]&lt;br /&gt;
[[Category:Philosophy of physics]]&lt;br /&gt;
[[Category:Philosophy of science]]&lt;br /&gt;
[[Category:Physicalism| ]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Materialism&amp;diff=16</id>
		<title>Materialism</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Materialism&amp;diff=16"/>
		<updated>2026-04-06T12:59:00Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{About|the philosophical view|the film|Materialists (film)|other uses of the term materialism|Materialism (disambiguation)}}&lt;br /&gt;
&lt;br /&gt;
In [[philosophy]] and [[metaphysics]], &#039;&#039;&#039;materialism&#039;&#039;&#039; is a form of [[monism]] holding that [[matter]] is the fundamental [[Substance theory|substance]] of [[nature]], so that all things, including [[mind]] and [[consciousness]], arise from material interactions and depend on physical processes, including those of the [[human brain]] and [[nervous system]]. It contrasts with monistic [[idealism]], which treats consciousness as fundamental, and is related to [[Naturalism (philosophy)|naturalism]], the view that only [[Scientific law|natural laws]] and forces operate in the [[universe]], and to [[physicalism]], the view that all that exists is ultimately physical. Physicalism extends materialism by including forms of physicality beyond ordinary matter (e.g. [[spacetime]], energy, forces, [[exotic matter]]), and some use the terms interchangeably.&lt;br /&gt;
&lt;br /&gt;
Alternative or opposing views to materialism or physicalism include idealism, [[pluralism (philosophy)|pluralism]], [[mind–body dualism|dualism]], [[solipsism]], [[panpsychism]], and other forms of [[monism]].&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
Materialism is the philosophical doctrine that matter has a primary position in the nature of the world, with mind or consciousness emerging as a secondary, dependent reality or not existing at all.{{sfn|Campbell|2006|p=5}} In its extreme form, materialism asserts that the real world consists of only material things, with the important qualification that [[space and time]] must also be included if these are realities rather than mere systems of relations.{{sfn|Campbell|2006|p=5}} Materialism belongs to the class of [[monist]] [[ontology]], and is thus different from ontological theories based on [[mind–body dualism|dualism]] or [[pluralism (philosophy)|pluralism]]. As a singular explanation of phenomenal reality, materialism contrasts with [[idealism]], [[neutral monism]], and [[spiritualism (philosophy)|spiritualism]]. It can also contrast with [[phenomenalism]], [[vitalism]], and [[dual-aspect monism]]. It can be linked to the concept of [[determinism]], as espoused by [[Age of Enlightenment|Enlightenment]] thinkers.&amp;lt;ref&amp;gt;Idoko, Barnabas Obiora. [https://journals.ezenwaohaetorc.org/index.php/TIJAH/article/view/2649 &amp;quot;A Critical Epochal Review of Philosophical Materialism&amp;quot;]. &#039;&#039;Trinitarian: International Journal of Arts and Humanities&#039;&#039;. 2023-12-14.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In contemporary philosophy, the terms &amp;quot;materialism&amp;quot; and &amp;quot;physicalism&amp;quot; are often treated as interchangeable, though they have distinct histories.{{sfn|Stoljar|2021}} &amp;quot;Materialism&amp;quot; appears in English toward the end of the 17th century, while &amp;quot;physicalism&amp;quot; was introduced in the 1930s by [[Otto Neurath]] and [[Rudolf Carnap]] of the [[Vienna Circle]] as a linguistic thesis arguing for the translatability of all statements into physical language.{{sfn|Stoljar|2021}} One reason to prefer &amp;quot;physicalism&amp;quot; is that physics has revealed entities that are not matter in the classical sense of an inert substance; forces such as gravity are physical but not obviously &amp;quot;material&amp;quot; by the traditional understanding.{{sfn|Stoljar|2021}} Modern philosophical materialists extend the definition to include other scientifically observable entities such as [[energy]], [[force]]s, and the [[spacetime continuum]]; some philosophers, such as [[Mary Midgley]], suggest that the concept of &amp;quot;matter&amp;quot; is elusive and poorly defined.&amp;lt;ref&amp;gt;[[Mary Midgley]] &#039;&#039;The Myths We Live By&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Non-reductive materialism===&lt;br /&gt;
&amp;lt;!--&#039;Non-reductive materialism&#039; redirects here--&amp;gt;&lt;br /&gt;
Materialism is often associated with [[Reduction (philosophy)|reductionism]], according to which the objects or phenomena individuated at one level of description, if they are genuine, must be explicable in terms of the objects or phenomena at some other level of description—typically, at a more reduced level.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Non-reductive materialism&#039;&#039; explicitly rejects this notion, taking the material constitution of all particulars to be consistent with the existence of real objects, properties or phenomena not explicable in the terms canonically used for the basic material constituents. [[Jerry Fodor]] held this view, according to which empirical laws and explanations in &amp;quot;special sciences&amp;quot; like psychology or geology are invisible from the perspective of basic physics.&amp;lt;ref&amp;gt;Fodor, Jerry A. 1981. &#039;&#039;RePresentations: Philosophical Essays on the Foundations of Cognitive Science&#039;&#039;. Massachusetts: The MIT Press. {{ISBN|9780262060790}}. ([http://mitp-content-server.mit.edu:18180/books/content/sectbyfn?collid=books_pres_0&amp;amp;id=5895&amp;amp;fn=9780262560276_sch_0001.pdf Excerpt of Ch. 1]).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
&#039;&#039;See also: [[History of naturalism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Early history===&lt;br /&gt;
&lt;br /&gt;
====Before Common Era====&lt;br /&gt;
[[File:Pinacoteca Querini Stampalia - Leucippus - Luca Giordano.jpg|thumb|upright|[[Leucippus]] (4th century BC), father of [[atomism]] and teacher of [[Democritus]]. Painting by [[Luca Giordano]], c. 1653.]]&lt;br /&gt;
Materialism developed, possibly independently, in several geographically separated regions of [[Eurasia]] during what [[Karl Jaspers]] termed the [[Axial Age]] ({{Circa}}&amp;amp;nbsp;800–200&amp;amp;nbsp;BC).&lt;br /&gt;
&lt;br /&gt;
In [[ancient Indian philosophy]], materialism developed around 600&amp;amp;nbsp;BC with the works of [[Ajita Kesakambali]], [[Payasi]], [[Kanada (philosopher)|Kanada]] and the proponents of the [[Cārvāka]] school of philosophy. Kanada became one of the early proponents of [[atomism]]. The [[Nyaya]]–[[Vaisesika]] school (c.&amp;amp;nbsp;600–100&amp;amp;nbsp;BC) developed one of the earliest forms of atomism (although their proofs of God and their positing that consciousness was not material preclude labelling them as materialists). [[Buddhist atomism]] and the [[Jainism|Jaina]] school continued the atomic tradition.&amp;lt;ref&amp;gt;Berryman, Sylvia. [https://plato.stanford.edu/entries/atomism-ancient/ &amp;quot;Ancient Atomism&amp;quot;]. &#039;&#039;The Stanford Encyclopedia of Philosophy&#039;&#039;. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Ancient Greek philosophy|Ancient Greek]] [[atomists]] like [[Leucippus]], [[Democritus]] and [[Epicurus]] prefigure later materialists. The Latin poem &#039;&#039;[[De Rerum Natura]]&#039;&#039; by [[Lucretius]] (99&amp;amp;nbsp;–&amp;amp;nbsp;c.&amp;amp;nbsp;55&amp;amp;nbsp;BC) reflects the [[mechanism (philosophy)|mechanistic]] philosophy of Democritus and Epicurus. According to this view, all that exists is matter and void, and all phenomena result from different motions and conglomerations of base material particles called &#039;&#039;atoms&#039;&#039; (literally &amp;quot;indivisibles&amp;quot;). &#039;&#039;De Rerum Natura&#039;&#039; provides mechanistic explanations for phenomena such as erosion, evaporation, wind, and sound. Famous principles like &amp;quot;nothing can touch body but body&amp;quot; first appeared in Lucretius&#039;s work. Democritus and Epicurus did not, however, espouse a monist ontology, since they held matter and space to be ontologically separate (i.e. that space is &amp;quot;another kind&amp;quot; of being).&lt;br /&gt;
&lt;br /&gt;
[[Epicureanism]] is a philosophy of materialism from [[classical antiquity]] that was a major forerunner of [[modern science]]. Classical atomism predates [[Epicurus]]: 5th‑century BCE thinkers [[Leucippus]] and [[Democritus]] explained all change as the collisions of indivisible atoms moving in the void.&amp;lt;ref&amp;gt;[https://plato.stanford.edu/entries/atomism-ancient/ &amp;quot;Ancient Atomism&amp;quot;]. &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;.&amp;lt;/ref&amp;gt; Epicureanism refined this materialist picture. Epicurus held that everything—including mind—consists solely of atoms moving in the void; to explain how parallel falling atoms could meet, he postulated the &#039;&#039;clinamen&#039;&#039;, an extremely slight lateral deviation that initiates collisions without supernatural causes and that need not imply genuine indeterminism.&amp;lt;ref&amp;gt;[https://plato.stanford.edu/entries/epicurus/ &amp;quot;Epicurus (section 4.2: The Swerve and Collisions)&amp;quot;]. &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;&amp;quot;Epicurus on Freedom&amp;quot;. Cambridge University Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Early Common Era====&lt;br /&gt;
[[Wang Chong]] (27&amp;amp;nbsp;– c.&amp;amp;nbsp;100&amp;amp;nbsp;AD) was a Chinese thinker of the early [[Common Era]] said to be a materialist.&amp;lt;ref&amp;gt;{{Google books |id=tAeFipOVx4MC |page=228 |title=The Cambridge Companion to Atheism (2006)}}&amp;lt;/ref&amp;gt; Later Indian materialist [[Jayaraashi Bhatta]] (6th century) in his work &#039;&#039;Tattvopaplavasimha&#039;&#039; (&#039;&#039;The Upsetting of All Principles&#039;&#039;) refuted the [[Nyāya Sūtras|Nyāya Sūtra]] [[epistemology]]. The materialistic [[Cārvāka]] philosophy appears to have died out some time after 1400; when [[Madhavacharya of Sringeri|Madhavacharya]] compiled &#039;&#039;Sarva-darśana-samgraha&#039;&#039; (&#039;&#039;A Digest of All Philosophies&#039;&#039;) in the 14th century, he had no Cārvāka (or Lokāyata) text to quote from or refer to.&amp;lt;ref&amp;gt;[http://www.carvaka4india.com/2011/12/history-of-indian-materialism.html &#039;&#039;History of Indian Materialism&#039;&#039;], Ramakrishna Bhattacharya&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In early 12th-century [[al-Andalus]], [[Early Islamic philosophy|Arabian philosopher]] [[Ibn Tufail]] ({{a.k.a.}}&amp;amp;nbsp;Abubacer) discussed materialism in his [[philosophical novel]], &#039;&#039;[[Hayy ibn Yaqdhan]]&#039;&#039; (&#039;&#039;Philosophus Autodidactus&#039;&#039;), while vaguely foreshadowing [[historical materialism]].&amp;lt;ref name=&amp;quot;Urvoy&amp;quot;&amp;gt;Urvoy, Dominique. 1996. &amp;quot;The Rationality of Everyday Life: The Andalusian Tradition? (Aropos of Hayy&#039;s First Experiences).&amp;quot; pp. 38–46 in &#039;&#039;The World of Ibn Tufayl: Interdisciplinary Perspectives on Ḥayy Ibn Yaqẓān&#039;&#039;, edited by [[Lawrence Conrad|L. I. Conrad]]. [[Brill Publishers]], {{ISBN|90-04-09300-1}}.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Modern philosophy&amp;lt;!--&#039;Anthropological materialism&#039; and &#039;German materialism&#039; redirect here--&amp;gt;====&lt;br /&gt;
[[File:Lucretius pointing to the casus.jpg|thumb|upright|Atomists proposed that the universe consists of atoms moving in space. &#039;&#039;[[De rerum natura|Of the Nature of Things]]&#039;&#039; by [[Lucretius]], 1682.]]&lt;br /&gt;
In France, [[Pierre Gassendi]] (1592–1665)&amp;lt;ref&amp;gt;[https://plato.stanford.edu/entries/gassendi/ Pierre Gassendi (Stanford Encyclopedia of Philosophy)]&amp;lt;/ref&amp;gt; represented the materialist tradition in opposition to the attempts of [[René Descartes]] (1596–1650) to provide the [[natural sciences]] with dualist foundations. There followed the materialist and [[atheism|atheist]] &#039;&#039;abbé&#039;&#039; [[Jean Meslier]] (1664–1729), along with the [[French materialism|French materialists]]: [[Julien Offray de La Mettrie]] (1709–1751), [[Denis Diderot]] (1713–1784), [[Étienne Bonnot de Condillac]] (1714–1780), [[Claude Adrien Helvétius]] (1715–1771), German-French [[Baron d&#039;Holbach]] (1723–1789), and other French [[The Enlightenment|Enlightenment]] thinkers.&amp;lt;ref name=&amp;quot;Mahan Friedrich 2003 p. 588&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In England, materialism was developed in the philosophies of [[Francis Bacon]] (1561–1626), [[Thomas Hobbes]] (1588–1679),&amp;lt;ref name=SEP&amp;gt;[http://plato.stanford.edu/entries/hobbes/ Thomas Hobbes (Stanford Encyclopedia of Philosophy)],&amp;lt;/ref&amp;gt; and [[John Locke]] (1632–1704).&amp;lt;ref name=&amp;quot;Henry 2012 p. 24&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; [[Scottish Enlightenment]] philosopher [[David Hume]] (1711–1776) became one of the most important materialist philosophers in the 18th century.&amp;lt;ref name=&amp;quot;Brown Ladyman 2019 p.&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;  [[John &amp;quot;Walking&amp;quot; Stewart]] (1747–1822) believed matter has a [[moral]] dimension, which had a major impact on the philosophical poetry of [[William Wordsworth]] (1770–1850).&lt;br /&gt;
&lt;br /&gt;
In [[late modern philosophy]], German atheist [[anthropologist]] [[Ludwig Feuerbach]] signaled a new turn in materialism in his 1841 book &#039;&#039;[[The Essence of Christianity]]&#039;&#039;, which presented a [[Humanism|humanist]] account of religion as the outward projection of man&#039;s inward nature. Feuerbach introduced &#039;&#039;&#039;anthropological materialism&#039;&#039;&#039;,&amp;lt;!--boldface per WP:R#PLA--&amp;gt; a version of materialism that views materialist anthropology as the [[universal science]].&amp;lt;ref&amp;gt;[[Axel Honneth]], [[Hans Joas]], &#039;&#039;Social Action and Human Nature&#039;&#039;, Cambridge University Press, 1988, p. 18.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Feuerbach&#039;s variety of materialism heavily influenced [[Karl Marx]],&amp;lt;ref&amp;gt;Nicholas Churchich, &#039;&#039;Marxism and Alienation&#039;&#039;, Fairleigh Dickinson University Press, 1990, p. 57: &amp;quot;Although Marx has rejected Feuerbach&#039;s abstract materialism,&amp;quot; Lenin says that Feuerbach&#039;s views &amp;quot;are consistently materialist,&amp;quot; implying that Feuerbach&#039;s conception of causality is entirely in line with dialectical materialism.&amp;lt;/ref&amp;gt; who in the mid-19th century elaborated the concept of [[historical materialism]]—the basis for what Marx and [[Friedrich Engels]] outlined as &#039;&#039;[[scientific socialism]]&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
{{Blockquote|text=The materialist conception of history starts from the proposition that the production of the means to support human life and, next to production, the exchange of things produced, is the basis of all social structure; that in every society that has appeared in history, the manner in which wealth is distributed and society divided into classes or orders is dependent upon what is produced, how it is produced, and how the products are exchanged. From this point of view, the final causes of all social changes and political revolutions are to be sought, not in men&#039;s brains, not in men&#039;s better insights into eternal truth and justice, but in changes in the modes of production and exchange. They are to be sought, not in the philosophy, but in the economics of each particular epoch.|author=Friedrich Engels|source=&#039;&#039;Socialism: Utopian and Scientific&#039;&#039; (1880)}}&lt;br /&gt;
&lt;br /&gt;
Through his &#039;&#039;[[Dialectics of Nature]]&#039;&#039; (1883), Engels later developed a &amp;quot;materialist dialectic&amp;quot; [[philosophy of nature]], a worldview that [[Georgi Plekhanov]], the father of Russian [[Marxism]], called &#039;&#039;[[dialectical materialism]]&#039;&#039;.&amp;lt;ref&amp;gt;see Plekhanov, Georgi: 1891. &amp;quot;For the Sixtieth Anniversary of Hegel&#039;s Death;&amp;quot; 1893. &#039;&#039;Essays on the History of Materialism&#039;&#039;; and 1895. &#039;&#039;[[The Development of the Monist View of History]]&#039;&#039;.&amp;lt;/ref&amp;gt; In early 20th-century [[Russian philosophy]], [[Vladimir Lenin]] further developed dialectical materialism in his 1909 book &#039;&#039;[[Materialism and Empirio-criticism]]&#039;&#039;, which connects his opponents&#039; political conceptions to their anti-materialist philosophies.&lt;br /&gt;
&lt;br /&gt;
A more [[Metaphysical naturalism|naturalist]]-oriented materialist school of thought that developed in the mid-19th century was &#039;&#039;&#039;German materialism&#039;&#039;&#039;&amp;lt;!--boldface per WP:R#PLA--&amp;gt;, which included [[Ludwig Büchner]] (1824–1899), the Dutch-born [[Jacob Moleschott]] (1822–1893), and [[Carl Vogt]] (1817–1895),&amp;lt;ref&amp;gt;[[Owen Chadwick|Chadwick, Owen]]. 1990. &#039;&#039;The Secularization of the European Mind in the Nineteenth Century&#039;&#039;. Cambridge University Press.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;p. 165&#039;&#039;&#039;: &amp;quot;During the 1850s German...scientists conducted a controversy known...as the materialistic controversy. It was specially associated with the names of Vogt, Moleschott and Büchner.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;p. 173&#039;&#039;&#039;: &amp;quot;Frenchmen were surprised to see Büchner and Vogt.... [T]he French were surprised at German materialism.&amp;quot;&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;&#039;&#039;[[The Nineteenth Century and After]]&#039;&#039;, [https://books.google.com/books?id=8-VXAAAAIAAJ&amp;amp;q= Vol. 151]. 1952. p. 227: &amp;quot;the Continental materialism of Moleschott and Buchner&amp;lt;!--[sic]--&amp;gt;.&amp;quot;&amp;lt;/ref&amp;gt; even though they had different views on core issues such as evolution and the origins of life.&amp;lt;ref&amp;gt;[[Andreas Daum|Andreas W. Daum]], &#039;&#039;Wissenschaftspopularisierung im 19. Jahrhundert: Bürgerliche Kultur, naturwissenschaftliche Bildung und die deutsche Öffentlichkeit, 1848–1914&#039;&#039;. Munich: Oldenbourg, 1998, pp. 210, 293–99.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to Marxist theoretician [[George Novack]], despite the multiplicity of named schools, philosophy ultimately confronts a single binary: materialism versus idealism.&amp;lt;ref&amp;gt;Novack, George. &amp;quot;The Origins of Materialism&amp;quot;. Pathfinder Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Contemporary history===&lt;br /&gt;
&#039;&#039;See also: [[Contemporary philosophy]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
====Analytic philosophy====&lt;br /&gt;
&#039;&#039;See also: [[Physicalism|Scientific materialism]]&#039;&#039;&lt;br /&gt;
Contemporary [[analytic philosopher]]s (e.g. [[Daniel Dennett]], [[Willard Van Orman Quine]], [[Donald Davidson (philosopher)|Donald Davidson]], and [[Jerry Fodor]]) operate within a broadly physicalist or [[scientific materialist]] framework, producing rival accounts of how best to accommodate the [[mind]], including [[functionalism (philosophy of mind)|functionalism]], [[anomalous monism]], and [[identity theory of mind|identity theory]].&amp;lt;ref name=&amp;quot;StandfordEM&amp;quot;&amp;gt;Ramsey, William. [2003] 2019. &amp;quot;[http://plato.stanford.edu/entries/materialism-eliminative/#SpeProFolPsy Eliminative Materialism § Specific Problems With Folk Psychology]&amp;quot; (rev.). &#039;&#039;[[Stanford Encyclopedia of Philosophy]]&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scientific materialism is often synonymous with, and has typically been described as, reductive materialism. In the late 20th century, [[Paul Churchland|Paul]] and [[Patricia Churchland]]&amp;lt;ref&amp;gt;Churchland, P. S. &amp;quot;Neurophilosophy: Toward a Unified Science of the Mind/Brain&amp;quot;. MIT Press. 1986.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Churchland, P. M. &amp;quot;Eliminative Materialism and the Propositional Attitudes&amp;quot;. &#039;&#039;Journal of Philosophy&#039;&#039;. 1981.&amp;lt;/ref&amp;gt; advocated a radically contrasting position (at least in regard to certain hypotheses): [[eliminative materialism]]. Eliminative materialism holds that some mental phenomena simply do not exist at all, and that talk of such phenomena reflects a spurious &amp;quot;[[folk psychology]]&amp;quot; and [[introspection illusion]]. A materialist of this variety might hold that a concept like &amp;quot;belief&amp;quot; has no basis in fact, much as folk science once spoke of demon-caused illness.&lt;br /&gt;
&lt;br /&gt;
With reductive materialism at one end of a continuum (our theories will &#039;&#039;reduce&#039;&#039; to facts) and eliminative materialism at the other (certain theories will need to be &#039;&#039;eliminated&#039;&#039; in light of new facts), [[revisionary materialism]] is somewhere in the middle.&amp;lt;ref name=&amp;quot;StandfordEM&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In contrast, [[Christian List]] argues that the existence of first-person perspectives, i.e., [[vertiginous question|one existing as oneself and not as someone else]], refutes physicalism. List argues that since first-personal facts cannot supervene on physical facts, this refutes not only physicalism, but also most forms of dualism that have purely third-personal metaphysics.&amp;lt;ref&amp;gt;List, Christian. [https://philpapers.org/rec/LISTFA &amp;quot;The first-personal argument against physicalism&amp;quot;]. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Continental philosophy====&lt;br /&gt;
&#039;&#039;See also: [[New materialism]], [[Speculative materialism]], and [[Transcendental materialism]]&#039;&#039;&lt;br /&gt;
Contemporary [[continental philosopher]] [[Gilles Deleuze]] attempted to rework and strengthen classical materialist ideas.&amp;lt;ref&amp;gt;Smith, Daniel. [http://plato.stanford.edu/archives/win2015/entries/deleuze/ &amp;quot;Gilles Deleuze&amp;quot;]. Metaphysics Research Lab, Stanford University. 1 January 2015.&amp;lt;/ref&amp;gt; Contemporary theorists such as [[Manuel DeLanda]], working with this reinvigorated materialism, have come to be classified as &#039;&#039;new materialists&#039;&#039;.&amp;lt;ref&amp;gt;Dolphijn, Rick. [http://www.openhumanitiespress.org/books/titles/new-materialism/ &amp;quot;New Materialism: Interviews &amp;amp; Cartographies&amp;quot;]. Open Humanities Press. 1 January 2013.&amp;lt;/ref&amp;gt; [[New materialism]] has become its own subfield, with courses on it at major universities, as well as numerous conferences, edited collections and monographs devoted to it. [[Jane Bennett (political theorist)|Jane Bennett]]&#039;s 2010 book &#039;&#039;Vibrant Matter&#039;&#039; has been particularly instrumental in bringing theories of monist ontology and [[vitalism]] back into a critical theoretical fold dominated by [[poststructuralist]] theories of language and discourse.&amp;lt;ref&amp;gt;Bennett, Jane. [https://books.google.com/books?id=OcUcmAEACAAJ &amp;quot;Vibrant Matter: A Political Ecology of Things&amp;quot;]. Duke University Press. 4 January 2010.&amp;lt;/ref&amp;gt; New materialism has been criticized by scholars of [[critical race theory|critical race]], Indigenous, and [[queer studies]], who argue it neglects questions of race, gender, and colonialism, and by others who question whether its claims are genuinely novel given that Indigenous and animist traditions have long held views about the agency or [[vitalism|vitality]] of matter.&amp;lt;ref&amp;gt;Jackson, Zakiyyah Iman. [https://www.academia.edu/6169668 &amp;quot;Animal: New Directions in the Theorization of Race and Posthumanism&amp;quot;]. &#039;&#039;Feminist Studies&#039;&#039;. 2013.; Chen, Mel Y.. &amp;quot;Animacies: Biopolitics, Racial Mattering, and Queer Affect&amp;quot;. Duke University Press. 2012.; Todd, Zoe. &amp;quot;An Indigenous Feminist&#039;s Take On The Ontological Turn: &#039;Ontology&#039; Is Just Another Word For Colonialism&amp;quot;. &#039;&#039;Journal of Historical Sociology&#039;&#039;. 2016.; Watts, Vanessa. &amp;quot;Indigenous Place-Thought and Agency Amongst Humans and Non Humans&amp;quot;. &#039;&#039;Decolonization: Indigeneity, Education &amp;amp; Society&#039;&#039;. 2013.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &#039;&#039;[[Being and Event]]&#039;&#039; (1988), [[Alain Badiou]] developed a materialist position using [[Zermelo–Fraenkel set theory]]. Badiou argues that mathematics, rather than physics or human perception, reveals the metaphysical structure of reality, and that this structure is pure multiplicity without any foundational substance or unifying [[Neoplatonism#The_One|One]].&amp;lt;ref&amp;gt;Dews, Peter. [https://ndpr.nd.edu/reviews/being-and-event/ &amp;quot;Being and Event (review)&amp;quot;]. &#039;&#039;Notre Dame Philosophical Reviews&#039;&#039;. 2008-02-18.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Quentin Meillassoux]] has developed &#039;&#039;speculative materialism&#039;&#039;, a position that seeks to escape what he calls &amp;quot;correlationism&amp;quot;, the post-Kantian view that thought cannot access reality independent of its relation to the subject.&amp;lt;ref&amp;gt;Meillassoux, Quentin. [https://www.urbanomic.com/document/founded-on-nothing/ &amp;quot;Founded on Nothing&amp;quot;]. &#039;&#039;Urbanomic&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Defining &amp;quot;matter&amp;quot;==&lt;br /&gt;
The nature and definition of &#039;&#039;matter&#039;&#039;—like other key concepts in science and philosophy—have occasioned much debate:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Is there a single kind of matter (&#039;&#039;[[hyle]]&#039;&#039;) that everything is made of, or are there multiple kinds?&lt;br /&gt;
* Is matter a continuous substance capable of expressing multiple forms (&#039;&#039;[[hylomorphism]]&#039;&#039;)&amp;lt;ref&amp;gt;[https://www.britannica.com/ebc/article-9041771 &amp;quot;Hylomorphism&amp;quot;] &#039;&#039;Concise Britannica&#039;&#039;&amp;lt;/ref&amp;gt; or a number of discrete, unchanging constituents ([[atomism]])?&amp;lt;ref&amp;gt;[http://etext.lib.virginia.edu/cgi-local/DHI/dhi.cgi?id=dv1-21 &amp;quot;Atomism: Antiquity to the Seventeenth Century&amp;quot;]  &#039;&#039;[[Dictionary of the History of Ideas]]&#039;&#039;&amp;lt;br /&amp;gt;[https://web.archive.org/web/20050305082323/http://etext.lib.virginia.edu/cgi-local/DHI/dhi.cgi?id=dv1-22 &amp;quot;Atomism in the Seventeenth Century&amp;quot;] &#039;&#039;Dictionary of the History of Ideas&#039;&#039;&lt;br /&gt;
&amp;lt;br /&amp;gt;[http://people.umass.edu/schaffer/papers/Fundamental.pdf Article by a philosopher who opposes atomism]  &lt;br /&gt;
&amp;lt;br /&amp;gt;[http://www.abstractatom.com/buddhist_atomism_and_the_r_theory_of_time.htm Information on Buddhist atomism] &lt;br /&gt;
&amp;lt;br /&amp;gt;[http://plato.stanford.edu/entries/democritus/ Article on traditional Greek atomism]&lt;br /&gt;
&amp;lt;br /&amp;gt;[http://plato.stanford.edu/entries/atomism-modern/ &amp;quot;Atomism from the 17th to the 20th Century&amp;quot;] &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Does matter have intrinsic properties (&#039;&#039;[[substance theory]]&#039;&#039;)&amp;lt;ref&amp;gt;[http://plato.stanford.edu/entries/substance/ &amp;quot;&#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039; on substance theory&amp;quot;]. Plato.stanford.edu.&amp;lt;/ref&amp;gt; or lack them (&#039;&#039;[[prima materia]]&#039;&#039;)?&lt;br /&gt;
&lt;br /&gt;
One challenge to the conventional concept of matter as tangible &amp;quot;stuff&amp;quot; came with the rise of [[field physics]] in the 19th century. [[Special relativity|Relativity]] shows that matter and energy (including the spatially distributed energy of fields) are interchangeable. This enables the ontological view that energy is &#039;&#039;prima materia&#039;&#039; and matter is one of its forms. In contrast, the [[Standard Model]] of particle physics uses [[quantum field theory]] to describe all interactions. On this view it could be said that fields are &#039;&#039;prima materia&#039;&#039; and energy is a property of the field.&amp;lt;ref&amp;gt;&amp;quot;Cornell University&amp;quot;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
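&lt;br /&gt;
The interchangeability invoked here is quantified by Einstein&#039;s relation &amp;lt;math&amp;gt;E = mc^2&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt; is energy, &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt; is rest mass, and &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt; is the speed of light; on this reading, matter can be regarded as a highly concentrated form of energy.&lt;br /&gt;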
&lt;br /&gt;
According to the dominant cosmological model, the [[Lambda-CDM model]], less than 5% of the universe&#039;s energy density is made up of the &amp;quot;matter&amp;quot; the Standard Model describes, and most of the universe is composed of [[dark matter]] and [[dark energy]], with little agreement among scientists about what these are made of.&amp;lt;ref&amp;gt;Bernard Sadoulet &amp;quot;Particle Dark Matter in the Universe: At the Brink of Discovery?&amp;quot; &#039;&#039;Science&#039;&#039; 5 January 2007: Vol. 315. no. 5808, pp. 61 - 63&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the advent of quantum physics, some scientists believed the concept of matter had merely changed, while others believed the conventional position could no longer be maintained. [[Werner Heisenberg]] said: &amp;quot;The ontology of materialism rested upon the illusion that the kind of existence, the direct &#039;actuality&#039; of the world around us, can be extrapolated into the atomic range. This extrapolation, however, is impossible...atoms are not things.&amp;quot;&amp;lt;ref&amp;gt;Heisenberg, Werner. 1962. &#039;&#039;Physics and philosophy: the revolution in modern science&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The concept of matter has changed in response to new scientific discoveries. Thus materialism has no definite content independent of the particular theory of matter on which it is based. According to [[Noam Chomsky]], any [[property (philosophy)|property]] can be considered material, if one defines matter such that it has that property.&amp;lt;ref name=&amp;quot;Chomsky, Noam 2000&amp;quot;&amp;gt;[[Chomsky, Noam]]. 2000. &#039;&#039;New Horizons in the Study of Language and Mind&#039;&#039;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The [[Gustavo Bueno#Philosophical Materialism|philosophical materialist]] [[Gustavo Bueno]] uses a more precise term than &#039;&#039;matter&#039;&#039;, the &#039;&#039;stroma.&#039;&#039;&amp;lt;ref&amp;gt;{{Citation|title=Gustavo Bueno, Estroma| date=22 May 2014 |url=https://www.youtube.com/watch?v=IiY1rfMk2T0|language=en|access-date=2021-12-28}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Physicalism==&lt;br /&gt;
&#039;&#039;Main article: [[Physicalism]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
George Stack distinguishes between materialism and physicalism: &amp;lt;blockquote&amp;gt;In the twentieth century, physicalism has emerged out of positivism. Physicalism restricts meaningful statements to physical bodies or processes that are verifiable or in principle verifiable. It is an empirical hypothesis that is subject to revision and, hence, lacks the dogmatic stance of classical materialism. [[Herbert Feigl]] defended physicalism in the United States and consistently held that mental states are brain states and that mental terms have the same referent as physical terms. The twentieth century has witnessed many materialist theories of the mental, and much debate surrounding them.&amp;lt;ref name=&amp;quot;Craig1998&amp;quot;&amp;gt;Stack, George J. [https://books.google.com/books?id=G3UBxqkkCX8C&amp;amp;pg=PA171 &amp;quot;Materialism&amp;quot;]. Routledge.&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But not all conceptions of physicalism are tied to verificationist theories of meaning or direct realist accounts of perception. Rather, physicalists believe that no &amp;quot;element of reality&amp;quot; is missing from the mathematical formalism of our best description of the world. &amp;quot;Materialist&amp;quot; physicalists also believe that the formalism describes fields of insentience. In other words, the intrinsic nature of the physical is non-experiential.&lt;br /&gt;
&lt;br /&gt;
==Religious and spiritual views==&lt;br /&gt;
===Christianity===&lt;br /&gt;
&#039;&#039;Main article: [[Materialism and Christianity]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Criticism and alternatives==&lt;br /&gt;
&lt;br /&gt;
===From contemporary physicists===&lt;br /&gt;
[[Rudolf Peierls]], a physicist who played a major role in the [[Manhattan Project]], rejected materialism: &amp;quot;The premise that you can describe in terms of physics the whole function of a human being{{nbsp}}... including knowledge and consciousness, is untenable. There is still something missing.&amp;quot;&amp;lt;ref&amp;gt;[https://economictimes.indiatimes.com/opinion/vedanta/matter-undermined/articleshow/17055344.cms &amp;quot;Matter Undermined&amp;quot;]. &#039;&#039;The Economic Times&#039;&#039;. 2 November 2012.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Erwin Schrödinger]] said, &amp;quot;Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.&amp;quot;&amp;lt;ref&amp;gt;&amp;quot;General Scientific and Popular Papers.&amp;quot; In &#039;&#039;Collected Papers&#039;&#039;, Vol. 4. Vienna: [[Austrian Academy of Sciences]]. Braunschweig/Wiesbaden: Vieweg &amp;amp; Sohn. p. 334.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Werner Heisenberg]] said the advent of quantum physics had undermined atomistic materialism. Specifically, he argued that the discovery of quantum entities existing as probability amplitudes rather than definite particles supports a mathematical, [[Platonic realism|Platonic realist]], rather than materialist, conception of physical reality, arguing that &amp;quot;modern physics takes a definite stand against the materialism of Democritus and for Plato and the Pythagoreans&amp;quot;.&amp;lt;ref&amp;gt;Heisenberg, Werner. &amp;quot;Physics and Philosophy: The Revolution in Modern Science&amp;quot;. Harper &amp;amp; Row.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Quantum mechanics====&lt;br /&gt;
Some 20th-century physicists (e.g., [[Eugene Wigner]]&amp;lt;ref&amp;gt;Wigner, Eugene Paul. &amp;quot;Philosophical Reflections and Syntheses&amp;quot;. 6 December 2012.&amp;lt;/ref&amp;gt; and [[Henry Stapp]]),&amp;lt;ref&amp;gt;[[Henry Stapp|Stapp, Henry]]. &amp;quot;Quantum interactive dualism - an alternative to materialism.&amp;quot; &#039;&#039;[[Journal of Consciousness Studies]]&#039;&#039;&amp;lt;/ref&amp;gt; and some modern physicists and science writers (e.g., [[Stephen Barr]],&amp;lt;ref&amp;gt;[https://www.forbes.com/sites/johnfarrell/2017/01/29/a-physicist-talks-god-and-the-quantum/ &amp;quot;A Physicist Talks God And The Quantum&amp;quot;]. &#039;&#039;Forbes.com&#039;&#039;.&amp;lt;/ref&amp;gt; [[Paul Davies]], and [[John Gribbin]]) have argued that materialism is flawed due to certain recent findings in physics, such as [[quantum mechanics]] and [[chaos theory]]. According to Gribbin and Davies (1991):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Then came our Quantum theory, which totally transformed our image of matter. The old assumption that the microscopic world of atoms was simply a scaled-down version of the everyday world had to be abandoned. Newton&#039;s deterministic machine was replaced by a shadowy and paradoxical conjunction of waves and particles, governed by the laws of chance, rather than the rigid rules of causality. An extension of the quantum theory goes beyond even this; it paints a picture in which solid matter dissolves away, to be replaced by weird excitations and vibrations of invisible field energy.&lt;br /&gt;
&lt;br /&gt;
Quantum physics undermines materialism because it reveals that matter has far less &amp;quot;substance&amp;quot; than we might believe. But another development goes even further by demolishing Newton&#039;s image of matter as inert lumps. This development is the theory of chaos, which has recently gained widespread attention. (Paul Davies and John Gribbin, &#039;&#039;The Matter Myth&#039;&#039;, Chapter 1: &amp;quot;The Death of Materialism&amp;quot;)&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Digital physics====&lt;br /&gt;
The objections of Davies and Gribbin are shared by proponents of [[digital physics]], who view information rather than matter as fundamental. The physicist and proponent of digital physics [[John Archibald Wheeler]] wrote, &amp;quot;all matter and all things physical are information-theoretic in origin and this is a participatory universe.&amp;quot;&amp;lt;ref&amp;gt;Wheeler, John A. 1990. &amp;quot;Information, Physics, Quantum: The Search for Links.&amp;quot; In &#039;&#039;Complexity, Entropy and the Physics of Information&#039;&#039;, edited by [[Wojciech H. Zurek]].&amp;lt;/ref&amp;gt; Some founders of quantum theory, such as [[Max Planck]], voiced similar objections. Planck wrote:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter. (Max Planck, &#039;&#039;Das Wesen der Materie&#039;&#039;, 1944)&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[James Jeans]] concurred with Planck, saying, &amp;quot;The Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter.&amp;quot;&amp;lt;ref&amp;gt;Jeans, James. 1937. &#039;&#039;[[The Mysterious Universe]]&#039;&#039;. p. 137.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Philosophical objections===&lt;br /&gt;
In the &#039;&#039;[[Critique of Pure Reason]]&#039;&#039;, [[Immanuel Kant]] argued against materialism in defending his [[transcendental idealism]] (as well as offering arguments against [[subjective idealism]] and [[mind–body dualism]]).&amp;lt;ref&amp;gt;Kant, Immanuel. &amp;quot;The refutation of idealism.&amp;quot; pp. 345–52 in &#039;&#039;[[Critique of Pure Reason]]&#039;&#039; (1st ed.), edited by [[Norman Kemp Smith|N. K. Smith]]. (2nd ed., pp. 244–7).&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kant, Immanuel. &amp;quot;The refutation of idealism.&amp;quot; pp. 345–52 in &#039;&#039;[[Critique of Pure Reason]]&#039;&#039; (1st ed.), edited by [[Norman Kemp Smith|N. K. Smith]]. A379, p. 352: &amp;quot;If, however, as commonly happens, we seek to extend the concept of dualism, and take it in the transcendental sense, neither it nor the two counter-alternatives — pneumatism [idealism] on the one hand, materialism on the other — would have any sort of basis. … Neither the transcendental object which underlies outer appearances nor that which underlies inner intuition, is in itself either matter or a thinking being, but a ground (to us unknown)…&amp;quot;&amp;lt;/ref&amp;gt; However, Kant also argued that change and time require an enduring substrate.&amp;lt;ref&amp;gt;[http://www.rep.routledge.com/article/DB047SECT7 &#039;&#039;Routledge Encyclopedia of Philosophy&#039;&#039;]: &amp;quot;Kant argues that we can determine that there has been a change in the objects of our perception, not merely a change in our perceptions themselves, only by conceiving of what we perceive as successive states of enduring substances (see Substance).&amp;quot;&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kant, Immanuel. &amp;quot;The refutation of idealism.&amp;quot; pp. 345–52 in &#039;&#039;[[Critique of Pure Reason]]&#039;&#039; (1st ed.), edited by [[Norman Kemp Smith|N. K. Smith]]. B274, p. 245:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All determination of time presupposes something permanent in perception. This permanent cannot, however, be something in me…&amp;quot;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Postmodern]]/[[poststructuralist]] thinkers also express skepticism about any all-encompassing metaphysical scheme. Philosopher [[Mary Midgley]]&amp;lt;ref&amp;gt;[[Mary Midgley|Midgley, Mary]]. 2003. &#039;&#039;The Myths We Live By&#039;&#039;.&amp;lt;/ref&amp;gt; argues that materialism is a [[self-refuting idea]], at least in its [[Eliminative materialism|eliminative materialist]] form.&amp;lt;ref&amp;gt;Baker, L. 1987. &#039;&#039;Saving Belief&#039;&#039;. Princeton: Princeton University Press&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Reppert, V. 1992. &amp;quot;Eliminative Materialism, Cognitive Suicide, and Begging the Question.&amp;quot; &#039;&#039;[[Metaphilosophy (journal)|Metaphilosophy]]&#039;&#039; 23:378–92.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Seidner, Stanley S. 10 June 2009. &amp;quot;A Trojan Horse: Logotherapeutic Transcendence and its Secular Implications for Theology.&amp;quot; [[Mater Dei Institute of Education|Mater Dei Institute]]. p. 5.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[[Paul Boghossian|Boghossian, Paul]]. 1990. &amp;quot;The Status of Content.&amp;quot; &#039;&#039;[[The Philosophical Review|Philosophical Review]]&#039;&#039; 99:157–84; and 1991. &amp;quot;The Status of Content Revisited.&amp;quot; &#039;&#039;[[Pacific Philosophical Quarterly]]&#039;&#039; 71:264–78.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Varieties of idealism====&lt;br /&gt;
Arguments for [[idealism]], such as those of [[Hegel]] and [[George Berkeley|Berkeley]], often take the form of an argument against materialism; indeed, Berkeley&#039;s idealism was called &#039;&#039;[[immaterialism]]&#039;&#039;. Matter can be argued to be redundant, as in [[bundle theory]], and mind-independent properties can, in turn, be reduced to subjective [[percept]]s. Berkeley gives an example of the latter by pointing out that it is impossible to gather direct evidence of matter: there is no direct experience of matter; all that is experienced is perception, whether internal or external. As such, matter&#039;s existence can only be inferred from the apparent (perceived) stability of perceptions; it has no support in direct experience.&amp;lt;ref&amp;gt;de Waal, Cornelis. &amp;quot;Having an Idea of Matter: A Peircean Refutation of Berkeleyan Immaterialism&amp;quot;. &#039;&#039;[[Journal of the History of Ideas]]&#039;&#039;. April 2006.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If matter and energy are seen as necessary to explain the physical world, but incapable of explaining mind, [[mind–body dualism|dualism]] results. [[Emergence]], [[holism]] and [[process philosophy]] seek to ameliorate the perceived shortcomings of traditional (especially [[mechanism (philosophy)|mechanistic]]) materialism without abandoning materialism entirely.&lt;br /&gt;
&lt;br /&gt;
===Materialism as methodology===&lt;br /&gt;
Some critics object to materialism as part of an overly skeptical, narrow or [[reductionism|reductivist]] approach to theorizing, rather than to the ontological claim that matter is the only substance. [[particle physics|Particle physicist]] and Anglican [[theology|theologian]] [[John Polkinghorne]] objects to what he calls &#039;&#039;promissory materialism&#039;&#039;—claims that materialistic science will eventually succeed in explaining phenomena it has not so far been able to explain.&amp;lt;ref&amp;gt;However, critics of materialism are equally guilty of prognosticating that it will &#039;&#039;never&#039;&#039; be able to explain certain phenomena. &amp;quot;Over a hundred years ago [[William James]] saw clearly that science would never resolve the [[mind–body dualism|mind–body problem]].&amp;quot; [https://www.designinference.com/documents/1999.10.spiritual_machines.htm &#039;&#039;Are We Spiritual Machines?&#039;&#039;]  Dembski, W.&amp;lt;/ref&amp;gt; Polkinghorne prefers &amp;quot;[[dual-aspect monism]]&amp;quot; to materialism.&amp;lt;ref&amp;gt;[http://www.crosscurrents.org/polkinghorne.htm &amp;quot;Interview with John Polkinghorne&amp;quot;]. Crosscurrents.org.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some scientific materialists have been criticized for failing to provide clear definitions of matter, leaving the term &#039;&#039;materialism&#039;&#039; without any definite meaning. [[Noam Chomsky]] states that since the concept of matter may be affected by new scientific discoveries, as has happened in the past, scientific materialists are being dogmatic in assuming the opposite.&amp;lt;ref name=&amp;quot;Chomsky, Noam 2000&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
{{div col|colwidth=25em}}&lt;br /&gt;
* [[Aleatory materialism]]&lt;br /&gt;
* [[Antimaterialism (disambiguation)|Antimaterialism]] beliefs:&lt;br /&gt;
** [[Gnosticism]]&lt;br /&gt;
** [[Idealism]]&lt;br /&gt;
** [[Immaterialism]]&lt;br /&gt;
** [[Maya (religion)]]&lt;br /&gt;
** [[Mind–body dualism]]&lt;br /&gt;
** [[Platonic realism]]&lt;br /&gt;
** [[Supernaturalism]]&lt;br /&gt;
** [[Transcendentalism]]&lt;br /&gt;
* [[Cārvāka]]&lt;br /&gt;
* [[Christian materialism]]&lt;br /&gt;
* [[Critical realism (philosophy of the social sciences)|Critical realism]]&lt;br /&gt;
* [[Cultural materialism (anthropology)|Cultural materialism]]&lt;br /&gt;
* [[Dialectical materialism]]&lt;br /&gt;
* [[Economic materialism]]&lt;br /&gt;
* [[Existence]]&lt;br /&gt;
* [[French materialism]]&lt;br /&gt;
* [[Grotesque body]]&lt;br /&gt;
* [[Historical materialism]]&lt;br /&gt;
* [[Hyle]]&lt;br /&gt;
* [[Incorporeality]]&lt;br /&gt;
* [[Madhyamaka]], a philosophy of the [[Middle Way]]&lt;br /&gt;
* [[Marxist philosophy of nature]]&lt;br /&gt;
* [[Materialist feminism]]&lt;br /&gt;
* [[Metaphysical naturalism]]&lt;br /&gt;
* [[Model-dependent realism]]&lt;br /&gt;
* [[Naturalism (philosophy)]]&lt;br /&gt;
* [[Gustavo Bueno#Philosophical materialism|Philosophical materialism]]&lt;br /&gt;
* [[Philosophy of mind]]&lt;br /&gt;
* [[Physicalism]]&lt;br /&gt;
* [[Postmaterialism]]&lt;br /&gt;
* [[Quantum energy]]&lt;br /&gt;
* [[Rational egoism]]&lt;br /&gt;
* [[Reality in Buddhism]]&lt;br /&gt;
* [[Scientistic materialism]]&lt;br /&gt;
* [[Substance theory]]&lt;br /&gt;
* [[Transcendence (religion)]]&lt;br /&gt;
{{div col end}}&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
{{refbegin}}&lt;br /&gt;
&#039;&#039;&#039;a.&#039;&#039;&#039; {{note label|a|a|none}} Indeed, it has been noted that it is difficult, if not impossible, to define one category without contrasting it with the other.&amp;lt;ref name=&amp;quot;Priest1991&amp;quot;&amp;gt;Priest, Stephen. &amp;quot;Theories of the Mind&amp;quot;. [[Penguin Books]]. {{ISBN|0-14-013069-1|978-0-14-013069-0}}.&amp;lt;/ref&amp;gt;&amp;lt;ref name=Novack1979&amp;gt;Novack, George. &amp;quot;The Origins of Materialism&amp;quot;. Pathfinder Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
{{refend}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Bibliography==&lt;br /&gt;
&lt;br /&gt;
*Campbell, Keith. &amp;quot;Materialism&amp;quot;. &#039;&#039;Encyclopedia of Philosophy&#039;&#039;, vol. 6, 2nd edition, Macmillan Reference USA, 2006, pp. 5–18.&lt;br /&gt;
*Stoljar, Daniel. &amp;quot;Physicalism&amp;quot;. &#039;&#039;Stanford Encyclopedia of Philosophy&#039;&#039;, 13 February 2001 (substantive revision 25 May 2021). https://plato.stanford.edu/entries/physicalism/&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
{{refbegin}}&lt;br /&gt;
* Buchner, L. (1920). &#039;&#039;Force and Matter&#039;&#039;. New York, Peter Eckler Publishing Co.&lt;br /&gt;
* Churchland, Paul (1981). &amp;quot;[https://www.jstor.org/stable/2025900 Eliminative Materialism and the Propositional Attitudes]&amp;quot;. Reprinted in &#039;&#039;The Philosophy of Science&#039;&#039;, edited by Richard Boyd, P. Gasper, and J. D. Trout. Cambridge, Massachusetts: MIT Press.&lt;br /&gt;
* Field, Hartry H.. &amp;quot;Readings in Philosophy of Psychology&amp;quot;. Taylor &amp;amp; Francis.&lt;br /&gt;
* Flanagan, Owen J.. [https://books.google.com/books?id=80HIwMz3bvwC &amp;quot;Science of the Mind 2e&amp;quot;]. MIT Press.&lt;br /&gt;
* Fodor, J.A. (1974). &amp;quot;Special Sciences&amp;quot;, &#039;&#039;Synthese&#039;&#039;, Vol. 28.&lt;br /&gt;
* Gunasekara, Victor A. (2001). &amp;quot;[http://www.buddhismtoday.com/english/buddha/Teachings/basicteaching11.htm Buddhism and the Modern World]&amp;quot;. &amp;quot;Basic Buddhism: A Modern Introduction to the Buddha&#039;s Teaching&amp;quot;. Retrieved 18 January 2008.&lt;br /&gt;
* Kim, J. (1994) [https://www.jstor.org/stable/2107741 Multiple Realization and the Metaphysics of Reduction], &#039;&#039;Philosophy and Phenomenological Research&#039;&#039;, Vol. 52. &lt;br /&gt;
* [[Julien Offray de La Mettrie|La Mettrie, Julien Offray de]] (1748). &#039;&#039;L&#039;Homme Machine&#039;&#039; (&#039;&#039;[[Man a Machine]]&#039;&#039;)&lt;br /&gt;
* Lange, Friedrich A. (1925) &#039;&#039;[https://www.worldcat.org/oclc/703434926 The History of Materialism]&#039;&#039;. New York, Harcourt, Brace, &amp;amp; Co.&lt;br /&gt;
* Moser, Paul K.. [https://books.google.com/books?id=-vIzCvCAxpgC &amp;quot;Contemporary Materialism: A Reader&amp;quot;]. Psychology Press.&lt;br /&gt;
* Priest, Stephen. &amp;quot;Theories of the Mind&amp;quot;. [[Penguin Books]]. {{ISBN|978-0-14-013069-0}}&lt;br /&gt;
* Schopenhauer, Arthur (1969). &#039;&#039;[[The World as Will and Representation]]&#039;&#039;. New York, Dover Publications, Inc.&lt;br /&gt;
* Seidner, Stanley S. (10 June 2009). [https://docs.google.com/gview?a=v&amp;amp;q=cache:FrKYAo88ckkJ:www.materdei.ie/media/conferences/a-secular-age-parallel-sessions-timetable.pdf+%22Stan+Seidner%22&amp;amp;hl=en&amp;amp;gl=us &amp;quot;A Trojan Horse: Logotherapeutic Transcendence and its Secular Implications for Theology&amp;quot;]. &#039;&#039;Mater Dei Institute&#039;&#039;&lt;br /&gt;
* Turner, MS. &amp;quot;Quarks and the Cosmos&amp;quot;. &#039;&#039;Science&#039;&#039;. 5 January 2007.&lt;br /&gt;
* Vitzthum, Richard C. (1995) &#039;&#039;[https://books.google.com/books/about/Materialism.html?id=odjWAAAAMAAJ Materialism: An Affirmative History and Definition]&#039;&#039;. Amherst, New York, Prometheus Books.&lt;br /&gt;
{{refend}}&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
{{Sister project links|commonscat=yes|n=no}}&lt;br /&gt;
*&#039;&#039;[[Stanford Encyclopedia of Philosophy]]&#039;&#039;:&lt;br /&gt;
**[https://plato.stanford.edu/entries/physicalism/ Physicalism]&lt;br /&gt;
**[https://plato.stanford.edu/entries/materialism-eliminative/ Eliminative Materialism]&lt;br /&gt;
*[https://infidels.org/library/modern/richard-vitzthum-materialism/ Philosophical Materialism (by Richard C. Vitzthum)] from infidels.org&lt;br /&gt;
*[https://web.archive.org/web/20140703140228/https://sites.google.com/site/minddict/materialism Dictionary of the Philosophy of Mind on Materialism] from the [[University of Waterloo]]&lt;br /&gt;
&lt;br /&gt;
{{Environmental humanities}}&lt;br /&gt;
&lt;br /&gt;
{{Metaphysics}}&lt;br /&gt;
&lt;br /&gt;
{{Philosophy topics}}&lt;br /&gt;
&lt;br /&gt;
{{Philosophy of mind}}&lt;br /&gt;
&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Materialism| ]]&lt;br /&gt;
[[Category:Metaphysical theories]]&lt;br /&gt;
[[Category:Ontology]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Acinic_cell_carcinoma&amp;diff=15</id>
		<title>Acinic cell carcinoma</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Acinic_cell_carcinoma&amp;diff=15"/>
		<updated>2026-04-06T12:58:55Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox medical condition (new)&lt;br /&gt;
| name            = Acinic cell carcinoma&lt;br /&gt;
| synonyms        = Acinic cell adenocarcinoma, Acinar cell carcinoma&lt;br /&gt;
| image           = Acinic cell carcinoma - high mag.jpg&lt;br /&gt;
| caption         = [[Micrograph]] of an acinic cell carcinoma (right of image) and [[acinar gland]]s ([[parotid gland]] - left of image). [[H&amp;amp;E stain]].&lt;br /&gt;
| pronounce       = /əˈsɪnɪk sɛl kɑːrsɪˈnoʊmə/&lt;br /&gt;
| field           = [[ENT surgery]], [[Oncology]], [[Oral and maxillofacial pathology]]&lt;br /&gt;
| symptoms        = Slow-growing, painless mass in parotid region, occasional pain/tenderness (30-50%), facial nerve involvement (5-10%)&lt;br /&gt;
| complications   = Recurrence (10-35%), metastasis (5-10%), high-grade transformation, facial nerve dysfunction&lt;br /&gt;
| onset           = Any age; peak in 5th decade&lt;br /&gt;
| duration        = Chronic&lt;br /&gt;
| types           = Solid, microcystic, papillary-cystic, follicular&lt;br /&gt;
| causes          = NR4A3 overexpression (80%), radiation exposure, genomic rearrangements&lt;br /&gt;
| risks           = Prior radiation exposure, radioactive isotope exposure, certain chemical exposures, possible familial predisposition&lt;br /&gt;
| diagnosis       = Clinical examination, imaging (MRI/CT), fine needle aspiration, histopathology, immunohistochemistry&lt;br /&gt;
| differential    = Pleomorphic adenoma, Warthin tumor, mucoepidermoid carcinoma, secretory carcinoma, oncocytoma&lt;br /&gt;
| prevention      = Avoiding radiation exposure&lt;br /&gt;
| treatment       = Surgical resection, radiation therapy for high-risk cases&lt;br /&gt;
| medication      = Chemotherapy for recurrent/metastatic disease&lt;br /&gt;
| prognosis       = Excellent; 5-year survival 90-97% for localized disease; 10-year survival 88-94%&lt;br /&gt;
| frequency       = 6-15% of all salivary gland malignancies; 0.13 cases per 100,000 annually&lt;br /&gt;
| deaths          = Low mortality; significantly higher with high-grade transformation or distant metastasis&lt;br /&gt;
}}&lt;br /&gt;
&#039;&#039;&#039;Acinic cell carcinoma&#039;&#039;&#039; is a malignant [[epithelial]] [[neoplasm]] that shows differentiation toward [[serous acinar cells]] of salivary gland origin. First described by Godwin et al. in 1954, it represents approximately 6-15% of all salivary gland malignancies, making it the third most common after mucoepidermoid carcinoma and adenoid cystic carcinoma.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Approximately 80-90% of acinic cell carcinomas arise in the [[parotid gland]], with the remainder occurring in the [[submandibular gland]] and [[minor salivary glands]], particularly those of the [[buccal mucosa]] and [[palate]].&amp;lt;ref name=&amp;quot;IARC 2017&amp;quot;&amp;gt;&amp;quot;WHO Classification of Head and Neck Tumours&amp;quot;. International Agency for Research on Cancer. 2017.&amp;lt;/ref&amp;gt; Rare cases have been reported in ectopic salivary gland tissue and in non-salivary sites including the [[breast]], [[pancreas]], and [[lung]].&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Clinically, acinic cell carcinoma typically presents as a slow-growing, painless mass. The disease has a generally favorable prognosis, with 5-year survival rates exceeding 90% for localized disease, though recurrences can develop even decades after initial treatment.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; While traditionally considered a low-grade malignancy, recent molecular and clinical studies have revealed significant heterogeneity, with a subset of tumors demonstrating high-grade transformation and more aggressive behavior.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Historically, acinic cell carcinoma was classified among the &amp;quot;adenomas&amp;quot; until the 1950s, when its malignant potential was recognized. The World Health Organization officially reclassified it as a malignant epithelial neoplasm in 1972, acknowledging its capacity for local invasion, recurrence, and metastasis.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; In 2017, the WHO classification further refined the understanding of this entity, distinguishing it from the newly described mammary analogue secretory carcinoma (MASC), which shares some morphological features but has distinct molecular characteristics.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Molecularly, acinic cell carcinoma is characterized by the overexpression of the nuclear receptor NR4A3 in approximately 80% of cases, resulting from genomic rearrangements at chromosome 9q31.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Treatment typically involves complete surgical excision, with adjuvant radiation therapy reserved for cases with adverse features such as positive margins, high-grade histology, or regional metastasis.&lt;br /&gt;
== Clinical presentation ==&lt;br /&gt;
Acinic cell carcinoma typically presents as a slow-growing, painless mass in the parotid region. The clinical features vary based on tumor location, size, and growth pattern but generally include:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Solitary, firm mass in the parotid gland (80-90% of cases)&lt;br /&gt;
* Slow growth pattern, with the duration of symptoms prior to diagnosis averaging 1-3 years&lt;br /&gt;
* Pain or tenderness in 30-50% of patients&lt;br /&gt;
* Facial nerve involvement in 5-10% of cases at presentation&lt;br /&gt;
* Skin involvement or fixation to underlying structures in advanced cases&lt;br /&gt;
* Occasional bilateral or multifocal disease (2-3% of cases)&lt;br /&gt;
&lt;br /&gt;
The average size at presentation ranges from 1 to 3 cm, though tumors can occasionally reach &amp;gt;5 cm before diagnosis. Unlike many other malignancies, systemic symptoms such as weight loss or fatigue are uncommon unless the disease is very advanced.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clinical features by location ===&lt;br /&gt;
The presentation varies somewhat based on the site of origin:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Parotid gland&#039;&#039;&#039;: Presents as a mass at the angle of the mandible or in the pre- or post-auricular region. May cause ear lobe elevation or facial asymmetry.&lt;br /&gt;
* &#039;&#039;&#039;Submandibular gland&#039;&#039;&#039;: Appears as a firm swelling in the submandibular triangle that may exhibit limited mobility.&lt;br /&gt;
* &#039;&#039;&#039;Minor salivary glands&#039;&#039;&#039;: When occurring in the oral cavity, typically presents as a submucosal nodule, most commonly on the buccal mucosa or palate. May be accompanied by overlying mucosal ulceration in 15-20% of cases.&lt;br /&gt;
* &#039;&#039;&#039;Sublingual gland&#039;&#039;&#039;: Extremely rare; presents as a floor-of-mouth mass that may cause tongue displacement.&lt;br /&gt;
&lt;br /&gt;
=== Features suggestive of higher-grade disease ===&lt;br /&gt;
Certain clinical manifestations may suggest higher-grade or dedifferentiated acinic cell carcinoma and are associated with poorer prognosis:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Rapid growth or sudden acceleration in growth rate&lt;br /&gt;
* Early or prominent facial nerve involvement&lt;br /&gt;
* Skin ulceration or fixation to surrounding structures&lt;br /&gt;
* Regional lymphadenopathy (present in approximately 10-15% of cases overall, but more common in high-grade tumors)&lt;br /&gt;
* Pain, paresthesia, or neurologic symptoms&lt;br /&gt;
* Trismus (restricted jaw movement) when tumor involves the deep lobe of parotid&lt;br /&gt;
&lt;br /&gt;
=== Presentation in specific populations ===&lt;br /&gt;
Acinic cell carcinoma demonstrates some unique features in certain demographic groups:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Pediatric patients&#039;&#039;&#039;: In children, these tumors may grow more rapidly and are more likely to be symptomatic at presentation. Pain is reported in up to 65% of pediatric cases, compared to 30-50% in adults.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Elderly patients&#039;&#039;&#039;: In older individuals (&amp;gt;70 years), these tumors may display more aggressive features, with higher rates of extraparenchymal extension and facial nerve involvement.&lt;br /&gt;
* &#039;&#039;&#039;During pregnancy&#039;&#039;&#039;: Occasionally, these tumors may show accelerated growth during pregnancy, likely due to hormonal influences.&lt;br /&gt;
&lt;br /&gt;
=== Recurrent and metastatic disease ===&lt;br /&gt;
Recurrent disease typically manifests as a mass at or near the original tumor site, occurring in approximately 10-35% of patients, with most recurrences developing within the first 5 years after initial treatment.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Distant metastasis is uncommon (5-10% of cases) and typically involves the lungs, bone, and, less frequently, the liver and brain. Patients with distant metastases may present with site-specific symptoms, including:&lt;br /&gt;
&lt;br /&gt;
* Pulmonary metastases: Cough, dyspnea, hemoptysis&lt;br /&gt;
* Bone metastases: Pain, pathologic fractures&lt;br /&gt;
* Brain metastases: Headache, focal neurologic deficits, seizures&lt;br /&gt;
&lt;br /&gt;
===Functional manifestations===&lt;br /&gt;
Acinic cell carcinoma rarely causes significant salivary dysfunction, as the tumor typically affects only a portion of the gland. However, larger tumors involving a substantial portion of the parotid may occasionally cause:&lt;br /&gt;
&lt;br /&gt;
* Reduced saliva production&lt;br /&gt;
* Alterations in saliva consistency&lt;br /&gt;
* Dry mouth (xerostomia) on the affected side&lt;br /&gt;
&lt;br /&gt;
These functional changes result from the physical disruption of normal acinar cells and ductal structures, as well as potential obstruction of major salivary ducts by the tumor mass.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
== Diagnosis ==&lt;br /&gt;
=== Clinical evaluation ===&lt;br /&gt;
The initial evaluation of a patient with suspected acinic cell carcinoma typically begins with a comprehensive history and physical examination. Important elements include:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Detailed assessment of mass location, size, consistency, mobility, and growth pattern&lt;br /&gt;
* Evaluation of facial nerve function through examination of all branches&lt;br /&gt;
* Assessment of regional lymph nodes in all neck levels&lt;br /&gt;
* Evaluation of oral cavity and oropharynx for potential extension or additional primary sites&lt;br /&gt;
* Cranial nerve examination to assess for perineural invasion&lt;br /&gt;
&lt;br /&gt;
=== Imaging studies ===&lt;br /&gt;
Imaging plays a crucial role in diagnosis, staging, and surgical planning for acinic cell carcinoma. The following modalities are commonly employed:&amp;lt;ref&amp;gt;&amp;quot;State-of-the-art imaging of salivary gland tumors&amp;quot;. &#039;&#039;Neuroimaging Clinics of North America&#039;&#039;. May 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Ultrasonography&#039;&#039;&#039;: Often used as the initial imaging study for superficial parotid masses. It can differentiate solid from cystic lesions and provide guidance for fine-needle aspiration. However, ultrasonography has limited ability to assess deep lobe extension or skull base involvement.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Computed Tomography (CT)&#039;&#039;&#039;: Provides excellent bony detail and can evaluate the extent of tumor invasion into surrounding structures. CT is particularly useful for:&lt;br /&gt;
** Assessing involvement of the mandible, skull base, or stylomastoid foramen&lt;br /&gt;
** Detecting calcifications within the tumor (occasionally present in acinic cell carcinoma)&lt;br /&gt;
** Evaluating cervical lymph nodes for metastatic spread&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Magnetic Resonance Imaging (MRI)&#039;&#039;&#039;: Considered the gold standard for salivary gland tumors due to superior soft tissue contrast. Key MRI findings in acinic cell carcinoma include:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
** T1-weighted images: typically hypointense to isointense relative to normal gland tissue&lt;br /&gt;
** T2-weighted images: moderately hyperintense&lt;br /&gt;
** Post-contrast: moderate enhancement, sometimes with internal cystic or necrotic areas&lt;br /&gt;
** Poorly defined margins may suggest more aggressive behavior&lt;br /&gt;
** Better delineation of perineural spread compared to CT&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Positron Emission Tomography/Computed Tomography (PET/CT)&#039;&#039;&#039;: Not routinely used for initial diagnosis but may be helpful in:&lt;br /&gt;
** Detecting occult distant metastases&lt;br /&gt;
** Evaluating treatment response&lt;br /&gt;
** Surveillance for recurrent disease&lt;br /&gt;
&lt;br /&gt;
=== Histopathologic features ===&lt;br /&gt;
Tumor cells are [[basophilic]] and bland, resembling serous [[Centroacinar cell|acinar cells]]. Four main growth patterns are recognized: solid (sheets of acinar-type cells); microcystic (small cystic spaces with mucinous or [[eosinophilic]] content); papillary-cystic (larger cystic spaces lined by [[epithelium]]); and follicular (resembling thyroid tissue).&lt;br /&gt;
&lt;br /&gt;
These tumors, which resemble serous acinar cells, vary in their behavior from locally aggressive to blatantly malignant.&lt;br /&gt;
&lt;br /&gt;
It can also appear in the [[breast]]. The [[Acinar cell carcinoma of the pancreas|pancreatic form]] of acinic cell carcinoma is a rare subtype of exocrine pancreatic cancer; exocrine pancreatic cancers are far more common than endocrine pancreatic cancers.&amp;lt;ref name=&amp;quot;pmid12101208&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Acinic cell carcinomas arise most frequently in the parotid gland. Other sites of primary tumors have included the submandibular gland and other major and minor salivary glands. There have been rare cases of primary tumors involving the parapharyngeal space and the sublingual gland.&amp;lt;ref name=&amp;quot;overview&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;ReferenceA&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:Acinic cell carcinoma.jpg | [[Micrograph]] of acinic cell carcinoma. [[Pap stain]]. [[Fine needle aspiration]] specimen.&lt;br /&gt;
File:Acinic cell carcinoma - intermed mag.jpg | Intermediate magnification.&lt;br /&gt;
File:Acinic cell carcinoma - very high mag.jpg | Very high magnification.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cytologic and histopathologic diagnosis ===&lt;br /&gt;
The definitive diagnosis of acinic cell carcinoma relies on tissue sampling and pathological evaluation:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Fine-Needle Aspiration Cytology (FNAC)&#039;&#039;&#039;: Often the initial diagnostic procedure due to its minimally invasive nature. Characteristic cytologic features include:&lt;br /&gt;
** Abundant basophilic granular cytoplasm&lt;br /&gt;
** Small, round, eccentric nuclei with minimal atypia&lt;br /&gt;
** Acinar, microcystic, or papillary arrangements&lt;br /&gt;
** Variable amounts of lymphoid tissue in the background&lt;br /&gt;
&lt;br /&gt;
The diagnostic accuracy of FNAC for acinic cell carcinoma ranges from 68% to 88%, with limitations including sampling error and difficulty distinguishing it from other salivary gland neoplasms with similar cytologic features, particularly secretory carcinoma, with which it shares significant morphologic overlap.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Core Needle Biopsy&#039;&#039;&#039;: May provide more tissue for histopathologic and ancillary studies compared to FNAC, with a higher diagnostic accuracy (85-95%). However, it carries a slightly higher risk of complications including facial nerve injury, tumor seeding, and fistula formation.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Intraoperative Frozen Section&#039;&#039;&#039;: May be used to confirm diagnosis during surgery and guide the extent of resection. Accuracy rates range from 90-95%, though definitive grading and subtyping may be deferred to permanent sections.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Surgical Resection Specimen&#039;&#039;&#039;: Provides the most comprehensive histopathologic evaluation, allowing for assessment of growth pattern, invasion, and margins.&lt;br /&gt;
&lt;br /&gt;
=== Immunohistochemistry and molecular pathology ===&lt;br /&gt;
Ancillary studies play an increasingly important role in the diagnosis of acinic cell carcinoma, particularly in distinguishing it from mimics such as secretory carcinoma (a schematic sketch of this marker logic follows the lists below):&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Immunohistochemical profile&#039;&#039;&#039;:&lt;br /&gt;
** DOG1: Diffuse and strong expression in acinic cell carcinoma, negative or focal in secretory carcinoma&lt;br /&gt;
** Mammaglobin: Typically negative or focally positive (unlike secretory carcinoma which shows diffuse positivity)&lt;br /&gt;
** S100: Variable, often focal (versus diffuse in secretory carcinoma)&lt;br /&gt;
** SOX10: Usually positive&lt;br /&gt;
** Amylase and other digestive enzymes: Frequently positive&lt;br /&gt;
** Ki-67: Generally low proliferation index in conventional tumors, elevated in high-grade transformed variants&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Molecular testing&#039;&#039;&#039;:&lt;br /&gt;
** Fluorescence in situ hybridization (FISH) for ETV6 rearrangement: Negative in acinic cell carcinoma, positive in secretory carcinoma&lt;br /&gt;
** NR4A3 overexpression analysis: Present in approximately 80% of acinic cell carcinomas&lt;br /&gt;
** Next-generation sequencing: May reveal characteristic genomic alterations, including rearrangements at chromosome 9q31 involving NR4A3&lt;br /&gt;
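&lt;br /&gt;
As a reading aid only, the marker logic above can be restated as a toy decision sketch. This is a simplified illustration, not a validated diagnostic rule: the function name and the binary marker calls are invented for this example, and real work-up integrates morphology, the full immunophenotype, and molecular testing.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy encoding of the simplified marker panel above (illustrative only).&lt;br /&gt;
def favored_diagnosis(dog1_diffuse, mammaglobin_diffuse, etv6_rearranged):&lt;br /&gt;
    if etv6_rearranged or mammaglobin_diffuse:&lt;br /&gt;
        # ETV6 rearrangement and diffuse mammaglobin typify&lt;br /&gt;
        # secretory carcinoma rather than acinic cell carcinoma.&lt;br /&gt;
        return &#039;secretory carcinoma favored&#039;&lt;br /&gt;
    if dog1_diffuse:&lt;br /&gt;
        # Diffuse, strong DOG1 expression favors acinic cell carcinoma.&lt;br /&gt;
        return &#039;acinic cell carcinoma favored&#039;&lt;br /&gt;
    return &#039;indeterminate: broaden the work-up&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;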
&lt;br /&gt;
=== Differential diagnosis ===&lt;br /&gt;
Several salivary gland neoplasms and other conditions may mimic acinic cell carcinoma clinically and/or pathologically:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Secretory carcinoma&#039;&#039;&#039; (formerly mammary analogue secretory carcinoma): The most challenging differential diagnosis, characterized by ETV6-NTRK3 fusion and strong S100 and mammaglobin expression.&lt;br /&gt;
* &#039;&#039;&#039;Oncocytoma&#039;&#039;&#039;: Distinguished by abundant eosinophilic (rather than basophilic) cytoplasm and absence of zymogen granules.&lt;br /&gt;
* &#039;&#039;&#039;Mucoepidermoid carcinoma&#039;&#039;&#039;: Contains mucous, intermediate, and epidermoid cells; mucin-positive by special stains.&lt;br /&gt;
* &#039;&#039;&#039;Pleomorphic adenoma&#039;&#039;&#039;: Distinguished by chondromyxoid stroma and ductal/myoepithelial components.&lt;br /&gt;
* &#039;&#039;&#039;Normal salivary gland tissue&#039;&#039;&#039;: Well-organized architecture and absence of invasive features.&lt;br /&gt;
* &#039;&#039;&#039;Metastatic renal cell carcinoma&#039;&#039;&#039;: Clinical history, PAX8 positivity, and absence of salivary markers help distinguish.&lt;br /&gt;
&lt;br /&gt;
=== Staging ===&lt;br /&gt;
Acinic cell carcinoma, like other salivary gland malignancies, is staged according to the American Joint Committee on Cancer (AJCC) TNM staging system, 8th edition (a schematic restatement of the T rules follows the lists below):&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* T staging is based on tumor size and local invasion:&lt;br /&gt;
** T1: Tumor ≤2 cm without extraparenchymal extension&lt;br /&gt;
** T2: Tumor &amp;gt;2 cm but ≤4 cm without extraparenchymal extension&lt;br /&gt;
** T3: Tumor &amp;gt;4 cm and/or extraparenchymal extension&lt;br /&gt;
** T4a: Tumor invades skin, mandible, ear canal, or facial nerve&lt;br /&gt;
** T4b: Tumor invades skull base, pterygoid plates, or encases carotid artery&lt;br /&gt;
&lt;br /&gt;
* N staging assesses regional lymph node involvement:&lt;br /&gt;
** N0: No regional lymph node metastasis&lt;br /&gt;
** N1: Metastasis in a single ipsilateral lymph node, ≤3 cm&lt;br /&gt;
** N2: More extensive regional node involvement&lt;br /&gt;
** N3: Metastasis in a lymph node &amp;gt;6 cm&lt;br /&gt;
&lt;br /&gt;
* M staging indicates distant metastasis:&lt;br /&gt;
** M0: No distant metastasis&lt;br /&gt;
** M1: Distant metastasis present&lt;br /&gt;
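&lt;br /&gt;
As a reading aid only, the T rules above can be restated as a short decision procedure. This is a simplified sketch under stated assumptions: the function and parameter names are invented for illustration, the snippet encodes only the T table as listed, and actual staging follows the full AJCC rulebook.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Simplified restatement of the AJCC 8th-edition T rules listed above.&lt;br /&gt;
def t_category(size_cm, extraparenchymal,&lt;br /&gt;
               invades_skin_mandible_canal_or_nerve,&lt;br /&gt;
               invades_base_plates_or_carotid):&lt;br /&gt;
    if invades_base_plates_or_carotid:&lt;br /&gt;
        return &#039;T4b&#039;  # skull base, pterygoid plates, or carotid encasement&lt;br /&gt;
    if invades_skin_mandible_canal_or_nerve:&lt;br /&gt;
        return &#039;T4a&#039;  # skin, mandible, ear canal, or facial nerve&lt;br /&gt;
    if size_cm &amp;gt; 4 or extraparenchymal:&lt;br /&gt;
        return &#039;T3&#039;  # over 4 cm and/or extraparenchymal extension&lt;br /&gt;
    if size_cm &amp;gt; 2:&lt;br /&gt;
        return &#039;T2&#039;&lt;br /&gt;
    return &#039;T1&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;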
&lt;br /&gt;
== Molecular pathogenesis ==&lt;br /&gt;
=== Genetic characteristics ===&lt;br /&gt;
The molecular basis of acinic cell carcinoma has been elucidated through comprehensive genomic analyses. The most significant recurrent genetic alteration is the overexpression of the nuclear receptor NR4A3 (nuclear receptor subfamily 4 group A member 3), present in approximately 80% of cases.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; This overexpression typically results from genomic rearrangements at chromosome 9q31, leading to enhancer hijacking where strong tissue-specific enhancer elements are juxtaposed with the NR4A3 gene.&lt;br /&gt;
&lt;br /&gt;
Unlike many other salivary gland malignancies which are driven by specific fusion oncogenes (e.g., MYB-NFIB in adenoid cystic carcinoma or ETV6-NTRK3 in secretory carcinoma), conventional acinic cell carcinoma is characterized by a relatively low mutational burden. Whole-genome and whole-exome sequencing studies have demonstrated a mean of 13 non-synonymous mutations per tumor, significantly lower than many other adult solid malignancies.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additional recurrent genetic alterations reported in acinic cell carcinoma include:&lt;br /&gt;
* [[TP53]] mutations (15-20% of cases), particularly in high-grade or dedifferentiated variants&lt;br /&gt;
* PI3K/AKT/mTOR pathway alterations (approximately 10-15% of cases)&lt;br /&gt;
* Chromatin remodeling gene mutations (e.g., KMT2C, KMT2D)&lt;br /&gt;
* Rare BRAF V600E mutations (&amp;lt;5% of cases)&lt;br /&gt;
&lt;br /&gt;
=== Cell of origin and differentiation ===&lt;br /&gt;
Acinic cell carcinoma is believed to arise from the pluripotent stem cells of the salivary gland ductal system with subsequent differentiation toward serous acinar cells.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; The tumors recapitulate the structure and function of normal serous acinar cells, including the production of amylase and other digestive enzymes. This is evidenced by:&lt;br /&gt;
&lt;br /&gt;
* Ultrastructural studies revealing zymogen-like secretory granules within tumor cells&lt;br /&gt;
* Immunohistochemical expression of acinar markers such as DOG1, amylase, and chymotrypsin&lt;br /&gt;
* Maintenance of polarized secretory function in well-differentiated tumors&lt;br /&gt;
&lt;br /&gt;
The molecular mechanisms underlying the acquisition of acinar differentiation in these tumors likely involve the NR4A3 transcription factor, which regulates genes associated with secretory function and cellular differentiation. Experimental evidence suggests that NR4A3 overexpression in salivary gland progenitor cells is sufficient to induce acinar differentiation and promote neoplastic transformation.&amp;lt;ref&amp;gt;&amp;quot;Molecular advances in salivary gland pathology and their practical application&amp;quot;. &#039;&#039;Diagnostic Histopathology&#039;&#039;. April 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Tumor microenvironment ===&lt;br /&gt;
Recent studies have characterized the tumor microenvironment of acinic cell carcinoma, revealing several notable features:&lt;br /&gt;
&lt;br /&gt;
* Low to moderate lymphocytic infiltration, with CD8+ T-cells predominating in most cases&lt;br /&gt;
* Relatively low PD-L1 expression compared to other salivary gland malignancies&lt;br /&gt;
* Desmoplastic stromal response that increases with tumor grade and stage&lt;br /&gt;
* Relatively low microvessel density consistent with the typically indolent growth pattern&lt;br /&gt;
&lt;br /&gt;
The immunological landscape of acinic cell carcinoma varies significantly between conventional tumors and those with high-grade transformation. High-grade transformed tumors typically display increased immune cell infiltration, upregulation of immune checkpoint molecules, and heightened genomic instability, potentially explaining both their more aggressive clinical behavior and their distinct therapeutic susceptibilities.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
==Prognosis==&lt;br /&gt;
Prognosis is generally excellent for acinic cell carcinoma of the parotid gland, with five-year survival rates of 90.6-97.15% for localized disease.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Ten-year survival rates range from 88-93.81%, and the 20-year survival rate is approximately 89.74% according to a comprehensive SEER database analysis.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, patients with acinic cell carcinomas with high-grade transformation (sometimes also called [[dedifferentiation]]) have significantly worse survival, with 5-year survival rates dropping to approximately 33%.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; For cases with distant metastasis, long-term survival rates are much lower, with 20-year survival at 21.99%.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Acinic cell carcinoma originating in the lung is extremely rare, with fewer than 100 documented cases in the literature.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; The prognosis for this pulmonary variant is more guarded than for salivary gland presentations, but remains considerably better than for conventional [[non-small cell lung cancer]] types. Five-year survival rates for primary pulmonary acinic cell carcinoma range from 56-67%, compared to approximately 25% for typical non-small cell lung cancer.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Prognostic factors specific to lung acinic cell carcinoma include tumor size, presence of [[pleural invasion]], lymph node status, and histologic grade. Patients with tumors smaller than 3 cm without pleural invasion or lymph node involvement have the most favorable outcomes.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Surgical resection remains the primary treatment modality, with limited data supporting the efficacy of adjuvant therapies.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Treatment==&lt;br /&gt;
# [[Segmental resection|Surgical resection]] is the mainstay of treatment, whenever possible. Complete surgical excision with adequate margins is essential for optimal outcomes.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; If the tumor is completely removed, post-operative [[radiation therapy]] is typically not needed, since acinic cell carcinoma is considered a [[Grading (tumors)|low-grade histology]]. However, modern evidence indicates that post-operative radiation therapy significantly improves outcomes when certain high-risk features are present:&lt;br /&gt;
## [[Resection margin|Positive or close margins]] (&amp;lt;5 mm)&lt;br /&gt;
## Incomplete resection or gross residual disease&lt;br /&gt;
## Tumor invades beyond gland (extraparenchymal extension)&lt;br /&gt;
## Positive lymph nodes (nodal metastases)&lt;br /&gt;
## Perineural invasion, particularly of major nerves&lt;br /&gt;
## Lymphovascular invasion&lt;br /&gt;
## High-grade histology or dedifferentiated/high-grade transformation (increased risk of recurrence by 5-8 fold)&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
## Recurrent disease&lt;br /&gt;
## Large tumor size (typically &amp;gt;4 cm)&lt;br /&gt;
## Deep lobe involvement in parotid tumors&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Modern radiation therapy modalities have improved efficacy and reduced side effects compared to historical approaches:&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
## [[Intensity-modulated radiation therapy]] (IMRT): Delivers precise radiation doses to tumor areas while minimizing exposure to surrounding tissues&lt;br /&gt;
## Neutron beam radiation: More effective than conventional photon therapy for certain salivary gland tumors but available at only a few specialized centers&lt;br /&gt;
## Proton therapy: Offers potentially superior dose distribution compared to photon-based treatments&lt;br /&gt;
## [[Carbon ion therapy]]: Emerging evidence suggests efficacy for radioresistant salivary gland tumors&lt;br /&gt;
# [[Chemotherapy]] has limited efficacy and is generally reserved for recurrent or metastatic disease not amenable to further surgical resection or radiation therapy. Commonly used agents include platinum-based combinations and taxanes.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
# Emerging therapeutic approaches based on molecular understanding:&lt;br /&gt;
## [[Targeted therapy]]: Recent molecular characterization has identified the NR4A3 transcription factor as consistently overexpressed in acinic cell carcinoma, potentially representing a future therapeutic target&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
## [[Immunotherapy]]: Checkpoint inhibitors are being investigated in clinical trials for salivary gland malignancies, with preliminary evidence suggesting potential activity in tumors with high mutational burden&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Epidemiology==&lt;br /&gt;
Acinic cell carcinoma accounts for approximately 6-15% of all primary malignant salivary gland tumors, making it the third most common malignant salivary gland neoplasm after mucoepidermoid carcinoma and adenoid cystic carcinoma.&amp;lt;ref name=&amp;quot;IARC 2017&amp;quot; /&amp;gt; It appears in all age groups, but presents at a younger median age (approx. 52 years) than most other [[salivary gland cancer]]s, with a peak incidence in the fifth decade of life. There is a slight female predominance with a female-to-male ratio of approximately 3:2.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Occurrences in children are not uncommon, representing 1-4% of all salivary gland malignancies in the pediatric population.&amp;lt;ref name=&amp;quot;overview&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The annual incidence of acinic cell carcinoma is estimated at 0.13 cases per 100,000 individuals worldwide, though significant geographic variations exist. Recent epidemiological studies have documented a rising incidence in Western nations, with an approximate annual increase of 1.1-1.3% over the past three decades.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; This increase has been attributed to improved diagnostic techniques, particularly advanced imaging and molecular diagnostics, as well as potential environmental factors.&lt;br /&gt;
&lt;br /&gt;
Salivary gland cancers appear to be on the rise in many Western nations, and their risk factors remain incompletely characterized. Among the established risk factors are:&lt;br /&gt;
&lt;br /&gt;
* Prior radiation exposure, including therapeutic radiation for head and neck cancers and environmental radiation exposure&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Radioactive isotope exposure, particularly [[iodine]]-131 and [[caesium]]-137 radionuclides, which can concentrate in salivary gland tissue&amp;lt;ref&amp;gt;[https://jscientia.org/index.php/js/article/view/137/125 &amp;quot;Prevention of nuclear damage caused by iodine and cesium radionuclides to the thyroid, pancreas and other organs&amp;quot;]. &#039;&#039;Juvenis Scientia&#039;&#039;. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Occupational exposures to certain industrial chemicals and heavy metals&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Cigarette smoking (weak association)&lt;br /&gt;
* Epstein-Barr virus infection (particularly in Asian populations)&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Possible familial predisposition in rare cases{{cn|date=October 2025}}&lt;br /&gt;
&lt;br /&gt;
Recent molecular epidemiologic studies have identified recurrent genetic alterations in acinic cell carcinomas, including consistent overexpression of the nuclear receptor NR4A3 due to genomic rearrangements at chromosome 9q31, present in approximately 80% of cases.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; These molecular findings may eventually form the basis for targeted screening in high-risk populations.&lt;br /&gt;
&lt;br /&gt;
The role of ionizing radiation in salivary gland carcinogenesis is particularly significant. From a biophysical perspective, salivary gland tissue contains high concentrations of metal ions and electrolytes that can potentiate free radical formation after radiation exposure, leading to DNA damage through indirect effects beyond direct ionization.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Additionally, radioactive iodine isotopes can concentrate in salivary tissue at levels up to 50 times higher than in plasma, owing to the expression of sodium/iodide symporter proteins, which explains their specific targeting of these glands.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=packed heights=190&amp;gt;&lt;br /&gt;
File:Relative incidence of parotid tumors.png|Relative incidence of parotid tumors, showing carcinoma ex pleomorphic adenoma at right.&amp;lt;ref name=Medscape&amp;gt;[https://emedicine.medscape.com/article/852373-overview &amp;quot;Salivary Gland Neoplasms&amp;quot;]. &#039;&#039;Medscape&#039;&#039;. 22 December 2022. Updated: Jan 13, 2021&amp;lt;br /&amp;gt; Diagrams by Mikael Häggström, MD&amp;lt;/ref&amp;gt;&lt;br /&gt;
File:Relative incidence of submandibular tumors.png|Relative incidence of submandibular tumors, showing carcinoma ex pleomorphic adenoma at bottom-right.&amp;lt;ref name=Medscape/&amp;gt;&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Acinic cell carcinoma of the lung==&lt;br /&gt;
[[Acinic cell carcinoma of the lung]] is a very rare variant of lung cancer that, in this organ, is classified among the [[salivary gland-like carcinoma of the lung|salivary gland-like carcinomas of the lung]]. Fewer than 1% of malignancies beginning in the lower respiratory tract are acinic cell carcinomas.&amp;lt;ref name=&amp;quot;who2004&amp;quot;&amp;gt;[http://www.iarc.fr/en/publications/pdfs-online/pat-gen/bb10/bb10-cover.pdf &amp;quot;Pathology and Genetics of Tumours of the Lung, Pleura, Thymus and Heart&amp;quot;]. IARC Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
First described in 1972 by Fechner et al., fewer than 100 cases have been reported in the medical literature worldwide.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; These tumors typically affect individuals between 40 and 70 years of age, with a slight female predominance and no strong association with smoking history, unlike conventional lung carcinomas.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Histologically, pulmonary acinic cell carcinoma resembles its salivary gland counterpart, characterized by sheets or islands of polygonal tumor cells with basophilic granular cytoplasm containing zymogen-like PAS-positive granules that are diastase-resistant. Immunohistochemically, tumor cells typically express cytokeratins, amylase, lysozyme, and alpha-1 antitrypsin.{{cn|date=February 2026}}&lt;br /&gt;
&lt;br /&gt;
These tumors are most commonly located in the peripheral regions of the lungs, particularly in the lower lobes. Surgical resection is the primary treatment modality, with lobectomy or pneumonectomy with mediastinal lymph node dissection being the preferred approach for resectable disease.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Five-year survival rates range from 56% to 67%, significantly better than conventional non-small cell lung cancer but worse than salivary gland acinic cell carcinoma. Prognostic factors include tumor size, presence of pleural invasion, lymph node status, and histologic grade.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
{{refbegin}}&lt;br /&gt;
* &amp;quot;Diagnostic Pathology: Head and Neck&amp;quot;. [[Elsevier]].&lt;br /&gt;
* &amp;quot;Oral &amp;amp; maxillofacial pathology&amp;quot;. W.B. Saunders.&lt;br /&gt;
{{refend}}&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{{Medical resources&lt;br /&gt;
|  DiseasesDB     = &lt;br /&gt;
|  ICD10          = {{ICD10|C|07||c|00}}&lt;br /&gt;
|  ICD9           = {{ICD9|142.0}}&lt;br /&gt;
|  ICDO           = M8550/3&lt;br /&gt;
|  OMIM           = &lt;br /&gt;
|  MedlinePlus    = &lt;br /&gt;
|  eMedicineSubj  = &lt;br /&gt;
|  eMedicineTopic = &lt;br /&gt;
|  MeshID         = &lt;br /&gt;
}}&lt;br /&gt;
{{ICDOMorphology|state=collapsed}}&lt;br /&gt;
{{Tumors of lip, oral cavity and pharynx}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Salivary gland neoplasia]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Truth_Terminal&amp;diff=14</id>
		<title>Truth Terminal</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Truth_Terminal&amp;diff=14"/>
		<updated>2026-04-06T12:58:55Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Wikimedia Error: 404, Not Found at Mon, 06 Apr 2026 12:55:39 GMT.&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Technological_singularity&amp;diff=13</id>
		<title>Technological singularity</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Technological_singularity&amp;diff=13"/>
		<updated>2026-04-06T12:58:48Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Futures studies}}&lt;br /&gt;
{{History of technology sidebar}}&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;technological singularity&#039;&#039;&#039;, often simply called &#039;&#039;&#039;the singularity&#039;&#039;&#039;,&amp;lt;ref&amp;gt;Cadwalladr, Carole. [https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence &amp;quot;Are the robots about to rise? Google&#039;s new director of engineering thinks so…&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 22 February 2014.&amp;lt;/ref&amp;gt; is a [[hypothetical]] event in which technological growth accelerates beyond human control, producing unpredictable changes in [[human civilization]].&amp;lt;ref&amp;gt;[http://www.singularitysymposium.com/definition-of-singularity.html &amp;quot;Collection of sources defining &amp;quot;singularity&amp;quot;&amp;quot;]. &#039;&#039;singularitysymposium.com&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Singularity hypotheses&amp;quot;&amp;gt;[https://cds.cern.ch/record/1552240 &amp;quot;Singularity Hypotheses: A Scientific and Philosophical Assessment&amp;quot;]. Springer.&amp;lt;/ref&amp;gt; According to the most popular version of the singularity hypothesis, [[I. J. Good]]&#039;s [[#Intelligence explosion|intelligence explosion]] model of 1965, an upgradable [[intelligent agent]] could eventually enter a [[positive feedback loop]] of [[Recursive self-improvement|successive self-improvement]] cycles; more intelligent generations would appear more and more rapidly, causing an explosive increase in intelligence that culminates in a powerful [[superintelligence]], far surpassing [[human intelligence]].&amp;lt;ref name=&amp;quot;vinge1993&amp;quot;&amp;gt;Vinge, Vernor. [http://mindstalk.net/vinge/vinge-sing.html &amp;quot;The Coming Technological Singularity: How to Survive in the Post-Human Era&amp;quot;], in &#039;&#039;Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace&#039;&#039;, G. A. Landis, ed., NASA Publication CP-10129, pp. 11–22, 1993. &amp;quot;There may be developed computers that are &amp;quot;awake&amp;quot; and superhumanly intelligent. (To date, there has been much controversy as to whether we can create human equivalence in a machine. But if the answer is &#039;yes, we can&#039;, then there is little doubt that beings more intelligent can be constructed shortly thereafter.)&amp;quot;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Human extinction: --&amp;gt;Some scientists, including [[Stephen Hawking]], have expressed concern that [[Superintelligence|artificial superintelligence]] could result in [[human extinction]].&amp;lt;ref&amp;gt;Sparkes, Matthew. [https://www.telegraph.co.uk/technology/news/11342200/Top-scientists-call-for-caution-over-artificial-intelligence.html &amp;quot;Top scientists call for caution over artificial intelligence&amp;quot;]. &#039;&#039;[[The Daily Telegraph]]&#039;&#039;. 13 January 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.bbc.com/news/technology-30290540 &amp;quot;Hawking: AI could end human race&amp;quot;]. BBC. 2 December 2014.&amp;lt;/ref&amp;gt; The consequences of a technological singularity and its potential benefit or harm to the human species have been intensely debated.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Plausibility: --&amp;gt;Prominent technologists and academics dispute the plausibility of a technological singularity and associated artificial intelligence &amp;quot;explosion&amp;quot;, including [[Paul Allen]],&amp;lt;ref name=&amp;quot;Allen2011&amp;quot;/&amp;gt; [[Jeff Hawkins]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; [[John Henry Holland|John Holland]], [[Jaron Lanier]], [[Steven Pinker]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; [[Theodore Modis]],&amp;lt;ref name=&amp;quot;modis2012&amp;quot;/&amp;gt; [[Gordon Moore]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot; /&amp;gt; and [[Roger Penrose]].&amp;lt;ref&amp;gt;Penrose, Roger. &amp;quot;The emperor&#039;s new mind: concerning computers, minds and the laws of physics&amp;quot;. Oxford Univ. Press. 1999.&amp;lt;/ref&amp;gt; One claim is that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones. [[Stuart J. Russell]] and [[Peter Norvig]] observe that in the history of technology, improvement in a particular area tends to follow an S curve: it begins with accelerating improvement, then levels off without continuing upward into a hyperbolic singularity.&amp;lt;ref&amp;gt;Russell, Stuart J.. &amp;quot;[[Artificial Intelligence: A Modern Approach]]&amp;quot;. Pearson. 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
[[Alan Turing]], often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper &amp;quot;[[Computing Machinery and Intelligence]]&amp;quot; argued that a machine could, in theory, exhibit intelligent behavior equivalent to or indistinguishable from that of a human.&amp;lt;ref&amp;gt;[https://www.ibm.com/think/topics/technological-singularity &amp;quot;What is the Technological Singularity?&amp;quot;]. &#039;&#039;www.ibm.com&#039;&#039;. 2024-08-13.&amp;lt;/ref&amp;gt; But a technological singularity is not required for the development of machines that perform at or beyond a human level on certain tasks, nor does the existence of such machines imply that a singularity is possible; the [[Deep Blue versus Garry Kasparov|1996 victory]] of IBM&#039;s [[Deep Blue (chess computer)|Deep Blue]] supercomputer in a chess match with grandmaster [[Garry Kasparov]] demonstrated the former point.&amp;lt;ref&amp;gt;[https://www.theguardian.com/sport/2021/feb/12/deep-blue-computer-beats-kasparov-chess-1996#comments &amp;quot;Deep Blue computer beats world chess champion – archive 12 February 1996&amp;quot;]. &#039;&#039;the Guardian&#039;&#039;. 2021-02-12.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Hungarian-American mathematician [[John von Neumann]] is the first person known to have discussed a &amp;quot;singularity&amp;quot; in technological progress.&amp;lt;ref&amp;gt;Vinge, Vernor. &amp;quot;Proceedings of a symposium cosponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute and held in Westlake, Ohio, March 30–31, 1993&amp;quot;. 1993.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shanahan, Murray. [https://books.google.com/books?id=rAxZCgAAQBAJ &amp;quot;The Technological Singularity&amp;quot;]. MIT Press. 2015-08-07.&amp;lt;/ref&amp;gt; [[Stanislaw Ulam]] reported in 1958 that an earlier discussion with von Neumann &amp;quot;centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue&amp;quot;.&amp;lt;ref name=&amp;quot;ulam1958&amp;quot; /&amp;gt; Subsequent authors echoed this viewpoint.&amp;lt;ref name=&amp;quot;Singularity hypotheses&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1965, [[I. J. Good]] speculated that superhuman intelligence might bring about an &amp;quot;intelligence explosion&amp;quot;:&amp;lt;ref name=&amp;quot;good1965&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;good1965-stat&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an &#039;intelligence explosion&#039;, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (&#039;&#039;Speculations Concerning the First Ultraintelligent Machine&#039;&#039;, 1965)&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The concept and the term &amp;quot;singularity&amp;quot; were popularized by [[Vernor Vinge]], first in 1983 in an [[op-ed]] in [[Omni (magazine)|&#039;&#039;Omni&#039;&#039;]] magazine arguing that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to &amp;quot;the knotted space-time at the center of a black hole&amp;quot;.&amp;lt;ref name=&amp;quot;dooling2008-88&amp;quot;/&amp;gt; This was followed by his 1993 essay &amp;quot;The Coming Technological Singularity&amp;quot;,&amp;lt;ref name=&amp;quot;vinge1993&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot;/&amp;gt; in which he wrote that the transition would signal the end of the human era, as the new superintelligence would continue to upgrade itself and advance technologically at an incomprehensible rate, and he would be surprised if it occurred before 2005 or after 2030.&amp;lt;ref name=&amp;quot;vinge1993&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another significant contribution to wider circulation of the notion was [[Ray Kurzweil]]&#039;s 2005 book &#039;&#039;[[The Singularity Is Near]]&#039;&#039;, predicting singularity by 2045.&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Intelligence explosion ==&lt;br /&gt;
&#039;&#039;Further information: [[Recursive self-improvement]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Although technological progress has been accelerating in most areas{{cn|date=November 2025}}, it has been limited by the basic intelligence of the human brain, which has not, according to [[Paul R. Ehrlich]], changed significantly for millennia.&amp;lt;ref name=&amp;quot;Paul Ehrlich June 2008&amp;quot;&amp;gt;Ehrlich, Paul. [https://longnow.org/seminars/02008/jun/27/dominant-animal-human-evolution-and-environment/ &amp;quot;Paul Ehrlich: The Dominant Animal: Human Evolution and the Environment – The Long Now&amp;quot;]. &#039;&#039;longnow.org&#039;&#039;.&amp;lt;/ref&amp;gt; But with the increasing power of computers and other technologies, it might eventually be possible to build a machine significantly more intelligent than humans.&amp;lt;ref name=&amp;quot;businessweek&amp;quot;&amp;gt;[https://bloomberg.com/businessweek &amp;quot;Businessweek – Bloomberg&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. 2023-04-20.{{dead link|date=November 2025}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If superhuman intelligence is invented—through either the [[Intelligence amplification|amplification of human intelligence]] or artificial intelligence—it will, in theory, vastly surpass human problem-solving and inventive skill. Such an AI is often called a seed AI&amp;lt;ref name=&amp;quot;Yampolskiy, Roman V 2015&amp;quot;&amp;gt;Yampolskiy, Roman V. &amp;quot;Analysis of types of self-improving software.&amp;quot; Artificial General Intelligence. Springer International Publishing, 2015. pp. 384–393.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;ReferenceA&amp;quot;&amp;gt;[[Eliezer Yudkowsky]]. &#039;&#039;General Intelligence and Seed AI-Creating Complete Minds Capable of Open-Ended Self-Improvement&#039;&#039;, 2001.&amp;lt;/ref&amp;gt; because if an AI is created with engineering capabilities that match or surpass those of its creators, it could autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before reaching any limits imposed by the laws of physics or theoretical computation. It is speculated that over many iterations, such an AI [[Superintelligence|would far surpass human cognitive abilities]].&lt;br /&gt;
&lt;br /&gt;
==Emergence of superintelligence==&lt;br /&gt;
&#039;&#039;Further information: [[Superintelligence]]&#039;&#039;&lt;br /&gt;
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical [[intelligent agent|agent]] that possesses intelligence far surpassing that of even the brightest and most gifted humans.&amp;lt;ref&amp;gt;Chalmers, David J.. [https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118922590.ch16 &amp;quot;Science Fiction and Philosophy: From Time Travel to Superintelligence&amp;quot;]. Wiley. 2016.&amp;lt;/ref&amp;gt; &amp;quot;Superintelligence&amp;quot; may also refer to the form or degree of intelligence possessed by such an agent. [[I. J. Good]], [[Vernor Vinge]], and [[Ray Kurzweil]] define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings&#039; lives would be like in a post-singularity world.&amp;lt;ref name=&amp;quot;vinge1993&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;kurzweil2005-135&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The related concept of &amp;quot;speed superintelligence&amp;quot; describes an artificial intelligence that can function like a human mind but much faster.&amp;lt;ref&amp;gt;Sotala, Kaj; Yampolskiy, Roman V.. &amp;quot;The Technological Singularity&amp;quot;. Springer Berlin Heidelberg.&amp;lt;/ref&amp;gt; For example, given a millionfold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.&amp;lt;ref name=&amp;quot;singinst.org&amp;quot;/&amp;gt; Such an increase in information processing speed could result in or significantly contribute to the singularity.&amp;lt;ref name=&amp;quot;chalmers2016&amp;quot;&amp;gt;&amp;quot;Science Fiction and Philosophy&amp;quot;. John Wiley &amp;amp; Sons, Inc.&amp;lt;/ref&amp;gt;&lt;br /&gt;
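&lt;br /&gt;
The millionfold figure is a direct unit conversion. A minimal sketch of the arithmetic (an illustration, not drawn from the cited sources):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# A mind running k times faster experiences k subjective seconds per&lt;br /&gt;
# physical second, so one subjective year takes (seconds in a year) / k.&lt;br /&gt;
speedup = 1_000_000                    # millionfold processing speed&lt;br /&gt;
subjective_year = 365.25 * 24 * 3600   # seconds in one subjective year&lt;br /&gt;
print(round(subjective_year / speedup, 1))  # 31.6, i.e. about 30 physical seconds&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;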
&lt;br /&gt;
Technology forecasters and researchers disagree about when, or whether, human intelligence will be surpassed. Some argue that advances in [[artificial intelligence]] (AI) may result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.&amp;lt;ref&amp;gt;Pearce, David. [http://link.springer.com/10.1007/978-3-642-32560-1_11 &amp;quot;The Biointelligence Explosion&amp;quot;]. &#039;&#039;Singularity Hypotheses&#039;&#039;. 2012.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;&amp;quot;The Age of Artificial Intelligence: An Exploration&amp;quot;. Vernon Press.&amp;lt;/ref&amp;gt; A number of [[futures studies]] focus on scenarios that combine these possibilities, suggesting that humans are likely to [[brain–computer interface|interface with computers]], or [[mind uploading|upload their minds to computers]], in a way that enables substantial intelligence amplification. [[Robin Hanson]]&#039;s 2016 book &#039;&#039;[[The Age of Em]]&#039;&#039; describes a future in which human brains are scanned and digitized, creating &amp;quot;uploads&amp;quot; or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent AI.&amp;lt;ref&amp;gt;Hanson, Robin. [https://ageofem.com/ &amp;quot;The Age of Em&amp;quot;]. Oxford University Press. 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Variations==&lt;br /&gt;
=== Non-AI singularity ===&lt;br /&gt;
Some writers use &amp;quot;the singularity&amp;quot; in a broader way, to refer to any radical changes in society brought about by new technology (such as [[molecular nanotechnology]]),&amp;lt;ref name=&amp;quot;hall2010&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;yudkowsky2007&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;sandberg2009&amp;quot;/&amp;gt; although Vinge and other writers say that without superintelligence, such changes would not be a true singularity.&amp;lt;ref name=&amp;quot;vinge1993&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Predictions==&lt;br /&gt;
[[File:Performance of AI models on various benchmarks from 1998 to 2024.png|upright=1.7|thumb|[[Progress of AI]] performance on various benchmarks compared to human-level performance&amp;lt;ref&amp;gt;[https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai &amp;quot;International scientific report on the safety of advanced AI: interim report&amp;quot;]. &#039;&#039;GOV.UK&#039;&#039;. 17 May 2024.&amp;lt;/ref&amp;gt; including computer vision (MNIST, ImageNet), speech recognition (Switchboard), natural language understanding (SQuAD 1.1, MMLU, GLUE), general language model evaluation (MMLU, Big-Bench, and GPQA), and mathematical reasoning (MATH). Many models surpass human-level performance (black solid line) by 2019, demonstrating significant advancements in AI capabilities across different domains over the past two decades.]]&lt;br /&gt;
Numerous dates have been predicted for the attainment of singularity.&lt;br /&gt;
&lt;br /&gt;
In 1965, [[I. J. Good|Good]] wrote that it was more probable than not that an ultra-intelligent machine would be built in the 20th century.&amp;lt;ref name=&amp;quot;good1965&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That computing capabilities for human-level AI would be available in supercomputers before 2010 was predicted in 1988 by [[Hans Moravec|Moravec]], assuming that the then current rate of improvement continued.&amp;lt;ref name=&amp;quot;moravec1988&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The attainment of greater-than-human intelligence between 2005 and 2030 was predicted by [[Vernor Vinge|Vinge]] in 1993.&amp;lt;ref name=&amp;quot;vinge1993&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Human-level AI around 2029 and the singularity in 2045 were predicted by Kurzweil in 2005.&amp;lt;ref&amp;gt;[https://aiimpacts.org/list-of-analyses-of-time-to-human-level-ai/ &amp;quot;List of Analyses of Time to Human-Level AI&amp;quot;]. &#039;&#039;AI Impacts&#039;&#039;. 2015-01-22.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kurzweil, Ray. &amp;quot;The Singularity Is Near&amp;quot;. Penguin Group.&amp;lt;/ref&amp;gt; He reaffirmed these predictions in 2024 in &#039;&#039;[[The Singularity Is Nearer]]&#039;&#039;.&amp;lt;ref name=&amp;quot;kurzweil 2024&amp;quot;&amp;gt;Kurzweil, Ray. [https://www.worldcat.org/title/on1438926317 &amp;quot;The singularity is nearer: when we merge with AI&amp;quot;]. Viking. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Human-level AI by 2040, and intelligence far beyond human by 2050, were predicted in 1998 by Moravec, revising his earlier prediction.&amp;lt;ref&amp;gt;Moravec, Hans P.. [https://philpapers.org/rec/MORRMM &amp;quot;Robot: Mere Machine to Transcendent Mind&amp;quot;]. Oxford University Press USA. 1998.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A median confidence of 50% that [[artificial general intelligence|human-level AI]] would be developed by 2040–2050 was the outcome of four informal polls of AI researchers, conducted in 2012 and 2013 by [[Nick Bostrom|Bostrom]] and [[Vincent C. Müller|Müller]].&amp;lt;ref name=&amp;quot;newyorker&amp;quot;&amp;gt;Khatchadourian, Raffi. [https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom &amp;quot;The Doomsday Invention&amp;quot;]. &#039;&#039;The New Yorker&#039;&#039;. 16 November 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Müller, V. C., &amp;amp; Bostrom, N. (2016). &amp;quot;Future progress in artificial intelligence: A survey of expert opinion&amp;quot;. In V. C. Müller (ed): &#039;&#039;Fundamental issues of artificial intelligence&#039;&#039; (pp. 555–572). Berlin, Germany: Springer Berlin. http://philpapers.org/rec/MLLFPI .&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, a review of surveys of scientists and industry experts from the previous 15 years found that most agreed that [[artificial general intelligence]] (AGI), a level well below technological singularity, will occur by 2100.&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;Orf, Darren. [https://www.popularmechanics.com/science/a68205442/singularity-three-months/ &amp;quot;Humanity May Achieve the Singularity Within the Next 3 Months, Scientists Suggest&amp;quot;]. &#039;&#039;Popular Mechanics&#039;&#039;. October 2025.&amp;lt;/ref&amp;gt; A more recent analysis by AIMultiple reported, &amp;quot;Current surveys of AI researchers are predicting AGI around 2040&amp;quot;.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Plausibility==&lt;br /&gt;
Prominent technologists and academics who dispute the plausibility of a technological singularity include [[Paul Allen]],&amp;lt;ref name=&amp;quot;Allen2011&amp;quot;/&amp;gt; [[Jeff Hawkins]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; [[John Henry Holland|John Holland]], [[Jaron Lanier]], [[Steven Pinker]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; [[Theodore Modis]],&amp;lt;ref name=&amp;quot;modis2012&amp;quot;/&amp;gt; and [[Gordon Moore]],&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; whose [[Moore&#039;s law|law]] is often cited in support of the concept.&amp;lt;ref name=&amp;quot;ieee-whos-who&amp;quot;/&amp;gt;[[File:The Moore&#039;s Law Update — for 128 years - 54181414828.jpg|thumb|upright=2.4|Note the slower growth prior to 1965 and again prior to about 1930.]]Proposed methods for creating superhuman or [[transhuman]] minds typically fall into two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include [[bioengineering]], [[genetic engineering]], [[nootropic]] drugs, AI assistants, direct [[brain–computer interface]]s, and [[mind uploading]].&amp;lt;ref name=&amp;quot;singinst.org&amp;quot;&amp;gt;[http://singinst.org/overview/whatisthesingularity &amp;quot;What is the Singularity? &amp;amp;#124; Singularity Institute for Artificial Intelligence&amp;quot;]. Singinst.org.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Robin Hanson]] has expressed skepticism of human intelligence augmentation, writing that once the &amp;quot;low-hanging fruit&amp;quot; of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.&amp;lt;ref name=&amp;quot;hanson&amp;quot;&amp;gt;Hanson, Robin. [https://mason.gmu.edu/~rhanson/vc.html#hanson &amp;quot;Some Skepticism&amp;quot;]. 1998.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In conversation about human-level artificial intelligence with cognitive scientist [[Gary Marcus]], computer scientist [[Grady Booch]] skeptically said the singularity is &amp;quot;sufficiently imprecise, filled with emotional and historic baggage, and touches some of humanity&#039;s deepest hopes and fears that it&#039;s hard to have a rational discussion therein&amp;quot;.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;Marcus, Gary. [https://garymarcus.substack.com/p/agi-will-not-happen-in-your-lifetime &amp;quot;AGI will not happen in your lifetime. Or will it?&amp;quot;]. &#039;&#039;Marcus on AI&#039;&#039;. 2023-01-22.&amp;lt;/ref&amp;gt; Later in the conversation, Marcus, while more optimistic about the progress of AI, agreed that any major advances would not happen as a single event, but rather as a slow and gradual increase in reliability and usefulness.&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The possibility of an intelligence explosion depends on three factors. The first, accelerating, factor is that each improvement makes new intelligence enhancements possible. The second, decelerating, factor is that as intelligences become more advanced, further advances become more and more complicated, possibly outweighing the advantage of increased intelligence; each improvement must generate at least one more improvement, on average, for movement toward singularity to continue. Finally, the laws of physics may eventually prevent further improvement.&amp;lt;ref name=&amp;quot;david_chalmers_singularity_lecture_resources_available&amp;quot;&amp;gt;David Chalmers, John Locke Lecture, 10 May 2009, Exam Schools, Oxford University, [http://www.fhi.ox.ac.uk/news/2010/david_chalmers_singularity_lecture_resources_available Presenting a philosophical analysis of the possibility of a technological singularity or &amp;quot;intelligence explosion&amp;quot; resulting from recursively self-improving AI].&amp;lt;/ref&amp;gt;&lt;br /&gt;
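&lt;br /&gt;
The condition that each improvement yield at least one further improvement on average is the threshold of a simple branching model: with mean yield r per improvement, the expected total of improvements is a geometric series that stays finite for r below 1 and grows without bound otherwise. A minimal sketch under these illustrative assumptions (not code from the cited lecture):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Expected cumulative number of improvements when each improvement&lt;br /&gt;
# spawns r further improvements on average.&lt;br /&gt;
def expected_total(r, generations=50):&lt;br /&gt;
    total, current = 0.0, 1.0&lt;br /&gt;
    for _ in range(generations):&lt;br /&gt;
        total += current&lt;br /&gt;
        current *= r   # the next generation of improvements&lt;br /&gt;
    return total&lt;br /&gt;
&lt;br /&gt;
for r in (0.5, 0.9, 1.0, 1.1):&lt;br /&gt;
    print(r, round(expected_total(r), 1))&lt;br /&gt;
# 0.5 and 0.9 converge (2.0, 9.9): progress stalls at a finite total.&lt;br /&gt;
# 1.0 and 1.1 keep growing (50.0, 1163.9): the explosion condition.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;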
&lt;br /&gt;
There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation and improvements to the [[algorithm]]s used.&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot;/&amp;gt; The former is predicted by [[Moore&#039;s law|Moore&#039;s Law]] and the forecasted improvements in hardware,&amp;lt;ref name=&amp;quot;itrs&amp;quot;&amp;gt;[http://www.itrs.net/Links/2007ITRS/ExecSum2007.pdf &amp;quot;ITRS&amp;quot;].&amp;lt;/ref&amp;gt; and is broadly similar to previous technological advances. &amp;quot;Most experts believe that Moore&#039;s law is coming to an end during this decade&amp;quot;, the AIMultiple report reads,&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; but &amp;quot;quantum computing can be used to efficiently train neural networks&amp;quot;,&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; potentially working around any end to Moore&#039;s Law. But Shulman and Sandberg&amp;lt;ref&amp;gt;Shulman, Carl. [https://intelligence.org/files/SoftwareLimited.pdf &amp;quot;Implications of a Software-Limited Singularity&amp;quot;]. &#039;&#039;[[Machine Intelligence Research Institute]]&#039;&#039;.&amp;lt;/ref&amp;gt; argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond.&lt;br /&gt;
&lt;br /&gt;
A 2017 email survey of authors with publications at the 2015 [[Conference on Neural Information Processing Systems|NeurIPS]] and [[International Conference on Machine Learning|ICML]] [[machine learning]] conferences asked about the chance that &amp;quot;the intelligence explosion argument is broadly correct&amp;quot;. Of the respondents, 12% said it was &amp;quot;quite likely&amp;quot;, 17% said it was &amp;quot;likely&amp;quot;, 21% said it was &amp;quot;about even&amp;quot;, 24% said it was &amp;quot;unlikely&amp;quot;, and 26% said it was &amp;quot;quite unlikely&amp;quot;.&amp;lt;ref name=&amp;quot;exceed2017&amp;quot;&amp;gt;Grace, Katja. &amp;quot;When Will AI Exceed Human Performance? Evidence from AI Experts&amp;quot;. 24 May 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Speed improvements ==&lt;br /&gt;
Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. Some upper limit on speed may eventually be reached. Jeff Hawkins has said that a self-improving computer system will inevitably run into limits on computing power: &amp;quot;in the end there are limits to how big and fast computers can run. We would end up in the same place; we&#039;d just get there a bit faster. There would be no singularity.&amp;quot;&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is difficult to directly compare [[silicon]]-based hardware with [[neuron]]s. But Anthony Berglas notes that computer [[speech recognition]] is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the [[human brain]], as well as taking up much less space. The costs of training systems with [[deep learning]], however, may be much larger. In modern deep learning, the effects of hardware improvement on neural networks are characterized by [[neural scaling law]]s.{{sfn|Berglas|2008}}{{efn |[[Large language model]]s such as [[ChatGPT]] and [[Llama (language model)|Llama]] require millions of hours of graphics processing unit ([[Graphics processing unit|GPU]]) time. Training Meta&#039;s Llama in 2023 took 21 days on 2048 [[Nvidia A100|NVIDIA A100]] GPUs, thus requiring hardware substantially larger than a brain. Training took around a million GPU hours, with an estimated cost of over $2 million. Even so, it is far smaller, and thus easier to train, than an LLM such as ChatGPT, which as of 2023 had 175 billion parameters to adjust, compared to 65 billion for Llama.&amp;lt;ref&amp;gt;Vanian, Jonathan; Leswing, Kif. [https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html &amp;quot;ChatGPT and generative AI are booming, but the costs can be extraordinary&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2023-03-13.&amp;lt;/ref&amp;gt;}}&amp;lt;ref name=&amp;quot;kurzweil 2024&amp;quot;/&amp;gt;&lt;br /&gt;
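&lt;br /&gt;
The GPU-hour figure in the note above is straightforward arithmetic; a quick back-of-envelope sketch (the per-hour price is an assumed illustrative rate, not a number from the cited article):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Check of the Llama training figures quoted in the note above.&lt;br /&gt;
gpus, days = 2048, 21&lt;br /&gt;
gpu_hours = gpus * days * 24&lt;br /&gt;
print(gpu_hours)                  # 1032192, roughly a million GPU-hours&lt;br /&gt;
assumed_usd_per_hour = 2.0        # assumed cloud price per A100 GPU-hour&lt;br /&gt;
print(gpu_hours * assumed_usd_per_hour)  # ~2.1 million, the order of the estimate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;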
&lt;br /&gt;
===Exponential growth===&lt;br /&gt;
[[Image:PPTMooresLawai.jpg|thumb|upright=2|left|Ray Kurzweil writes that, due to [[paradigm shift]]s, a trend of exponential growth extends [[Moore&#039;s law]] from [[integrated circuits]] to earlier [[transistor]]s, [[vacuum tube]]s, [[relay]]s, and [[electromechanics|electromechanical]] computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of (&amp;quot;unenhanced&amp;quot;) human brains, with superhuman [[artificial intelligence]] appearing around the same time.]]&lt;br /&gt;
&lt;br /&gt;
The exponential growth in computing technology suggested by Moore&#039;s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore&#039;s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book&amp;lt;ref&amp;gt;[https://books.google.com/books?id=fduW6KHhWtQC&amp;amp;pg=PA61 &amp;quot;Robot: Mere Machine to Transcendent Mind&amp;quot;]. Oxford University Press.&amp;lt;/ref&amp;gt; that the exponential growth curve could be extended back to earlier computing technologies before the [[integrated circuit]].&lt;br /&gt;
&lt;br /&gt;
[[Ray Kurzweil]] postulates a [[law of accelerating returns]] whereby the speed of technological change (and more generally, all evolutionary processes)&amp;lt;ref name=&amp;quot;kurzweil1999&amp;quot;/&amp;gt; increases exponentially, generalizing Moore&#039;s law in the same manner as Moravec&#039;s proposal, and also including material technology (especially as applied to [[nanotechnology]]) and [[Medical Technology|medical technology]].&amp;lt;ref name=&amp;quot;kurzweil2005&amp;quot;/&amp;gt; Between 1986 and 2007, machines&#039; application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world&#039;s general-purpose computers doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world&#039;s storage capacity per capita doubled every 40 months.&amp;lt;ref name=&amp;quot;HilbertLopez2011&amp;quot;&amp;gt;[https://www.science.org/doi/10.1126/science.1200970 &amp;quot;The World&#039;s Technological Capacity to Store, Communicate, and Compute Information&amp;quot;], Martin Hilbert and Priscila López (2011), [[Science (journal)|Science]], 332 (6025), pp. 60–65; free access to the article through: martinhilbert.net/WorldInfoCapacity.html.&amp;lt;/ref&amp;gt; On the other hand, it has been argued that the global acceleration pattern, with a 21st-century singularity as its parameter, is better characterized as [[Hyperbolic growth|hyperbolic]] than exponential.&amp;lt;ref&amp;gt;[https://link.springer.com/book/10.1007/978-3-030-33730-8 &amp;quot;The 21st Century Singularity and Global Futures&amp;quot;]. &#039;&#039;World-Systems Evolution and Global Futures&#039;&#039;. 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
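&lt;br /&gt;
A doubling time of T months implies an annual growth factor of 2^(12/T), and the exponential-versus-hyperbolic distinction is that a hyperbolic trajectory diverges at a finite date while an exponential one never does. A small sketch under these textbook functional forms (the rate constant and singular date are illustrative, not from the cited studies):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def annual_factor(t_months):&lt;br /&gt;
    # Growth factor per year implied by a doubling time of t_months.&lt;br /&gt;
    return 2 ** (12.0 / t_months)&lt;br /&gt;
&lt;br /&gt;
for months in (14, 18, 34, 40):   # the doubling times quoted above&lt;br /&gt;
    print(months, round(annual_factor(months), 2))&lt;br /&gt;
# 14 months is about 1.81x per year; 40 months is about 1.23x per year.&lt;br /&gt;
&lt;br /&gt;
# Exponential growth exp(k*t) stays finite at every date; hyperbolic&lt;br /&gt;
# growth 1/(t_s - t) diverges as t approaches the singular date t_s.&lt;br /&gt;
t_s = 2045.0                      # illustrative singular date&lt;br /&gt;
for year in (2025.0, 2040.0, 2044.0, 2044.9):&lt;br /&gt;
    exponential = math.exp(0.05 * (year - 2000.0))&lt;br /&gt;
    hyperbolic = 1.0 / (t_s - year)&lt;br /&gt;
    print(year, round(exponential, 1), round(hyperbolic, 1))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;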
&lt;br /&gt;
Kurzweil reserves the term &amp;quot;singularity&amp;quot; for a rapid increase in artificial intelligence (as opposed to other technologies), writing: &amp;quot;The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine&amp;quot;.&amp;lt;ref name=&amp;quot;kurzweil2005-9&amp;quot;/&amp;gt; He also defines the singularity as when computer-based intelligences significantly exceed the sum total of human brainpower, writing that advances in computing before that &amp;quot;will not represent the Singularity&amp;quot; because they do &amp;quot;not yet correspond to a profound expansion of our intelligence.&amp;quot;&amp;lt;ref name=&amp;quot;kurzweil2005-135136&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Accelerating change===&lt;br /&gt;
&#039;&#039;Main article: [[Accelerating change]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:ParadigmShiftsFrr15Events.svg|thumb|upright=2|According to Kurzweil, his [[logarithmic scale|logarithmic graph]] of 15 lists of [[paradigm shift]]s for key [[human history|historic]] events shows an [[exponential growth|exponential]] trend.]]&lt;br /&gt;
&lt;br /&gt;
Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term &amp;quot;singularity&amp;quot; in the context of technological progress, [[Stanislaw Ulam]] tells of a conversation with [[John von Neumann]] about accelerating change: &amp;lt;blockquote&amp;gt;One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.&amp;lt;ref name=&amp;quot;ulam1958&amp;quot;/&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Kurzweil claims that technological progress follows a pattern of [[exponential growth]], following what he calls the &amp;quot;[[law of accelerating returns]]&amp;quot;. Whenever technology approaches a barrier, Kurzweil writes, new technologies surmount it. He predicts [[paradigm shift]]s will become increasingly common, leading to &amp;quot;technological change so rapid and profound it represents a rupture in the fabric of human history&amp;quot;.&amp;lt;ref name=&amp;quot;Kurzweil 2001&amp;quot;&amp;gt;Kurzweil, Raymond. [http://lifeboat.com/ex/law.of.accelerating.returns &amp;quot;The Law of Accelerating Returns&amp;quot;]. Lifeboat Foundation.&amp;lt;/ref&amp;gt; Kurzweil believes that the singularity will occur by 2045.&amp;lt;ref name=&amp;quot;kurzweil2005&amp;quot;/&amp;gt; His predictions differ from Vinge&#039;s in that he predicts a gradual ascent to the singularity, rather than Vinge&#039;s rapidly self-improving superhuman intelligence.&lt;br /&gt;
&lt;br /&gt;
Oft-cited dangers include those commonly associated with molecular nanotechnology and [[genetic engineering]]. These threats are major issues for both singularity advocates and critics, and were the subject of [[Bill Joy]]&#039;s 2000 &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039; magazine article &amp;quot;[[Why The Future Doesn&#039;t Need Us]]&amp;quot;.&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;Joy2000&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Algorithm improvements ==&lt;br /&gt;
Some intelligence technologies, like &amp;quot;seed AI&amp;quot;,&amp;lt;ref name=&amp;quot;Yampolskiy, Roman V 2015&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ReferenceA&amp;quot;/&amp;gt; may also be able to make themselves not just faster but also more efficient, by modifying their [[source code]]. These improvements would make further improvements possible, which would make further improvements possible, and so on.&lt;br /&gt;
&lt;br /&gt;
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.{{cn|date=July 2017}} An AI rewriting its own source code could do so while contained in an [[AI box]].&lt;br /&gt;
&lt;br /&gt;
Second, as with [[Vernor Vinge]]&#039;s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different.&lt;br /&gt;
&lt;br /&gt;
Substantial dangers are associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.&amp;lt;ref name=&amp;quot;selfawaresystems&amp;quot;&amp;gt;Omohundro, Stephen M.. [http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ &amp;quot;&amp;quot;The Basic AI Drives.&amp;quot; Artificial General Intelligence, 2008 proceedings of the First AGI Conference, Vol. 171.&amp;quot;]. IOS. 30 November 2007.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;kurzweilai&amp;quot;&amp;gt;[http://www.kurzweilai.net/artificial-general-intelligence-now-is-the-time &amp;quot;Artificial General Intelligence: Now Is the Time&amp;quot;]. KurzweilAI.&amp;lt;/ref&amp;gt; Second, AIs could compete for the resources humankind uses to survive.&amp;lt;ref name=&amp;quot;selfawaresystems.com&amp;quot;&amp;gt;[http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/ &amp;quot;Omohundro, Stephen M., &amp;quot;The Nature of Self-Improving Artificial Intelligence.&amp;quot; Self-Aware Systems. 21 Jan. 2008. Web. 07 Jan. 2010.&amp;quot;]. 6 October 2007.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Barrat, James. &amp;quot;Our Final Invention&amp;quot;. St. Martin&#039;s Press.&amp;lt;/ref&amp;gt; While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans.&amp;lt;ref name=&amp;quot;kurzweilai.net&amp;quot;&amp;gt;[http://www.kurzweilai.net/max-more-and-ray-kurzweil-on-the-singularity-2 &amp;quot;Max More and Ray Kurzweil on the Singularity&amp;quot;]. KurzweilAI.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;ReferenceB&amp;quot;&amp;gt;[http://singinst.org/riskintro/index.html &amp;quot;Concise Summary &amp;amp;#124; Singularity Institute for Artificial Intelligence&amp;quot;]. Singinst.org.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;nickbostrom7&amp;quot;&amp;gt;Bostrom, Nick. [http://www.nickbostrom.com/fut/evolution.html &amp;quot;The Future of Human Evolution&amp;quot;].&amp;lt;!-- Published in Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy (Ria University Press: Palo Alto, California, 2004): pp. 339-371. --&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Carl Shulman]] and [[Anders Sandberg]] suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.&amp;lt;ref name=&amp;quot;ShulmanSandberg2010&amp;quot;&amp;gt;Shulman, Carl. [http://intelligence.org/files/SoftwareLimited.pdf &amp;quot;Implications of a Software-Limited Singularity&amp;quot;]. &#039;&#039;ECAP10: VIII European Conference on Computing and Philosophy&#039;&#039;.&amp;lt;/ref&amp;gt; An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called &amp;quot;computing overhang&amp;quot;.&amp;lt;ref name=&amp;quot;MuehlhauserSalamon2012&amp;quot;&amp;gt;Muehlhauser, Luke. &amp;quot;Singularity Hypotheses: A Scientific and Philosophical Assessment&amp;quot;. Springer.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Criticism==&lt;br /&gt;
Linguist and cognitive scientist [[Steven Pinker]] wrote in 2008: &amp;quot;There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.&amp;quot;&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Jaron Lanier]] denies that the singularity is inevitable: &amp;quot;I do not think the technology is creating itself. It&#039;s not an autonomous process [...] The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on &#039;&#039;not&#039;&#039; emphasizing individual human agency, it&#039;s the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.&amp;quot;&amp;lt;ref name=&amp;quot;lanier&amp;quot;&amp;gt;[http://www.epubbud.com/read.php?g=JCB8D9LA&amp;amp;tocp=59 &amp;quot;Who Owns the Future?&amp;quot;]. &#039;&#039;New York: Simon &amp;amp; Schuster&#039;&#039;. 2013.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Philosopher and cognitive scientist [[Daniel Dennett]] said in 2017: &amp;quot;The whole singularity stuff, that&#039;s preposterous. It distracts us from much more pressing problems [...] AI tools that we become hyper-dependent on—that is going to happen. And one of the dangers is that we will give them more authority than they warrant.&amp;quot;&amp;lt;ref&amp;gt;Cadwalladr, Carole. [https://www.theguardian.com/science/2017/feb/12/daniel-dennett-politics-bacteria-bach-back-dawkins-trump-interview &amp;quot;Daniel Dennett: &#039;I begrudge every hour I have to spend worrying about politics&#039;&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 12 February 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- religious semblance: --&amp;gt;Some critics suggest religious motivations for believing in the singularity, especially Kurzweil&#039;s version. The buildup to the singularity is compared to Christian [[Eschatology|end-times]] scenarios. Columnist Alex Beam called it &amp;quot;a [[Buck Rogers]] vision of the hypothetical Christian Rapture&amp;quot;.&amp;lt;ref name=&amp;quot;beam2005&amp;quot;&amp;gt;Beam, Alex. [http://www.boston.com/ae/books/articles/2005/02/24/that_singularity_sensation/ &amp;quot;That Singularity Sensation&amp;quot;]. &#039;&#039;The Boston Globe&#039;&#039;. 2005-02-24.&amp;lt;/ref&amp;gt; [[John Gray (philosopher)|John Gray]] has said, &amp;quot;the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event&amp;quot;.&amp;lt;ref name=&amp;quot;gray2011&amp;quot;&amp;gt;Gray, John. [http://www.nybooks.com/articles/archives/2011/nov/24/road-immortality/?pagination=false &amp;quot;On the Road to Immortality&amp;quot;]. &#039;&#039;The New York Review of Books&#039;&#039;. 2011-11-24.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In &#039;&#039;[[The New York Times]]&#039;&#039;, [[David Streitfeld]] questioned whether &amp;quot;it might manifest first and foremost—thanks, in part, to the bottom-line obsession of today’s [[Silicon Valley]]—as a tool to slash corporate America’s head count.&amp;quot;&amp;lt;ref&amp;gt;Streitfeld, David. [https://www.nytimes.com/2023/06/11/technology/silicon-valley-confronts-the-idea-that-the-singularity-is-here.html &amp;quot;Silicon Valley Confronts the Idea That the &#039;Singularity&#039; Is Here&amp;quot;]. &#039;&#039;New York Times&#039;&#039;. 11 June 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Astrophysicist and [[Philosophy of Science|scientific philosopher]] [[Adam Becker]] criticizes Kurzweil&#039;s concept of human mind uploads to computers on the grounds that they are too fundamentally different and incompatible.&amp;lt;ref&amp;gt;Wood, Andrew Paul. &amp;quot;Mission Critical&amp;quot;. &#039;&#039;[[New Zealand Listener]]&#039;&#039;. May 17, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Skepticism of exponential growth===&lt;br /&gt;
&amp;lt;!-- Modis specifically on &amp;quot;singularity&amp;quot;: --&amp;gt;[[Theodore Modis]] holds the singularity cannot happen.&amp;lt;ref&amp;gt;Modis, Theodore (2020). &amp;quot;Forecasting the Growth of Complexity and Change—An Update&amp;quot;. Published in Korotayev, Andrey. &amp;quot;The 21st Century Singularity and Global Futures&amp;quot;. Springer. January 3, 2020. pp. 101–104.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;modis2012&amp;quot;&amp;gt;Modis, Theodore (2012). &amp;quot;Why the Singularity Cannot Happen&amp;quot;. Published in Eden, Amnon H. et al. (Eds.). [http://www.growth-dynamics.com/articles/Singularity.pdf &amp;quot;Singularity Hypothesis&amp;quot;]. Springer. 2012. pp. 311–339.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;modis2003&amp;quot;&amp;gt;Modis, Theodore (May–June 2003). &amp;quot;[http://www.growth-dynamics.com/articles/futurist.pdf The Limits of Complexity and Change]&amp;quot;. The Futurist. 37 (3): 26–32.&amp;lt;/ref&amp;gt; He claims the &amp;quot;technological singularity&amp;quot; and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a &amp;quot;knee&amp;quot; in an exponential function where there can in fact be no such thing.&amp;lt;ref name=&amp;quot;modis2006&amp;quot;/&amp;gt; In a 2021 article, Modis wrote that no milestones—breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy—had been observed in the previous 20 years, while five of them would have been expected according to the exponential trend advocated by proponents of the technological singularity.&amp;lt;ref name=&amp;quot;modis2022&amp;quot;&amp;gt;Modis, Theodore. [https://www.sciencedirect.com/science/article/pii/S0040162521008921 &amp;quot;Links between entropy, complexity, and the technological singularity&amp;quot;]. &#039;&#039;Technological Forecasting and Social Change&#039;&#039;. 2022-03-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI researcher [[Jürgen Schmidhuber]] has said that the frequency of subjectively &amp;quot;notable events&amp;quot; appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events create an illusion of accelerating change where none exists.&amp;lt;ref&amp;gt;Schmidhuber, Jürgen. &amp;quot;New millennium AI and the convergence of history&amp;quot;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Douglas Hofstadter|Hofstadter]] (2006) raises concern that Kurzweil is insufficiently rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no &amp;quot;knees&amp;quot;.&amp;lt;ref&amp;gt;[https://www.youtube.com/watch?v=Nhj6fDDnckE Trying to Muse Rationally About the Singularity Scenario] by Douglas Hofstadter, 2006, [https://web.archive.org/web/20170109020308/https://medium.com/@emergingtechnology/trying-to-muse-rationally-about-the-singularity-scenario-9c9db2eb9ece unauthorized transcript].&amp;lt;/ref&amp;gt; Nonetheless, he did not rule out the singularity in principle in the distant future,&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;/&amp;gt; and in light of [[ChatGPT]] and other recent advancements has revised his opinion significantly toward dramatic technological change in the near future.&amp;lt;ref&amp;gt;Brooks, David. [https://www.nytimes.com/2023/07/13/opinion/ai-chatgpt-consciousness-hofstadter.html &amp;quot;Opinion&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2023-07-13.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Economist [[Robert J. Gordon]] points out that measured economic growth slowed around 1970 and slowed even further since the [[2008 financial crisis]], and argues that the economic data show no trace of a coming Singularity as imagined by [[I. J. Good]].&amp;lt;ref&amp;gt;[[William D. Nordhaus]], &amp;quot;Why Growth Will Fall&amp;quot; (a review of [[Robert J. Gordon]], &#039;&#039;The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War&#039;&#039;, Princeton University Press, 2016, {{ISBN|978-0691147727}}), &#039;&#039;[[The New York Review of Books]]&#039;&#039;, vol. LXIII, no. 13 (August 18, 2016), p. 68.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil&#039;s iconic chart. One line of criticism is that a [[Log-log plot|log-log]] chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points Kurzweil uses. For example, biologist [[PZ Myers]] points out that many of the early evolutionary &amp;quot;events&amp;quot; were picked arbitrarily.&amp;lt;ref name=&amp;quot;PZMyers2009&amp;quot;/&amp;gt; Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources and showing that they fit a straight line on [[:File:ParadigmShiftsFrr15Events.svg|a log-log chart]]. [[Kevin Kelly (editor)|Kelly]] (2006) argues that the way the Kurzweil chart is constructed, with the x-axis having time before the present, it always points to the singularity being &amp;quot;now&amp;quot;, for any date on which one would construct such a chart, and shows this visually on Kurzweil&#039;s chart.&amp;lt;ref&amp;gt;Kelly, Kevin. [https://kk.org/thetechnium/the-singularity/ &amp;quot;The Singularity Is Always Near&amp;quot;]. &#039;&#039;The Technium&#039;&#039;. 2006.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Technological limiting factors===&lt;br /&gt;
[[Martin Ford (author)|Martin Ford]]&amp;lt;ref name=&amp;quot;ford2009&amp;quot;/&amp;gt; postulates a &amp;quot;technology paradox&amp;quot;: most routine jobs could be automated with a level of technology inferior to that required for a singularity. This would cause massive unemployment and plummeting consumer demand, which would eliminate the incentive to invest in the technology required to bring about the singularity. Job displacement is no longer limited to the types of work traditionally considered &amp;quot;routine&amp;quot;.&amp;lt;ref name=&amp;quot;markoff2011&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Rate of technological innovation: --&amp;gt;[[Theodore Modis]]&amp;lt;ref name=&amp;quot;modis2002&amp;quot;/&amp;gt; and [[Jonathan Huebner]]&amp;lt;ref name=&amp;quot;huebner2005&amp;quot;/&amp;gt; argue that the rate of technological innovation has not only ceased to rise but is actually now declining. Evidence for this decline is that the rise in computer [[clock rate]]s is slowing, even while Moore&#039;s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat buildup from the chip, which cannot be dissipated quickly enough to prevent it from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.&amp;lt;ref name=&amp;quot;krazit2006&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Microsoft co-founder [[Paul Allen]] has argued that there is a &amp;quot;complexity brake&amp;quot;:&amp;lt;ref name=&amp;quot;Allen2011&amp;quot;/&amp;gt; the more progress science makes toward understanding intelligence, the more difficult it becomes to make additional progress. Huebner&#039;s study of patent counts suggests that human creativity does not show accelerating returns but rather, as suggested by [[Joseph Tainter]] in &#039;&#039;The Collapse of Complex Societies&#039;&#039;,&amp;lt;ref name=&amp;quot;tainter1988&amp;quot;/&amp;gt; a law of [[diminishing returns]]: the number of patents per thousand people peaked in the period from 1850 to 1900 and has been declining since.&amp;lt;ref name=&amp;quot;huebner2005&amp;quot; /&amp;gt; On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread &amp;quot;general systems collapse&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==Potential impacts==&lt;br /&gt;
Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the [[Paleolithic]] era until the [[Neolithic Revolution]]. The new agricultural economy doubled every 900 years, a remarkable increase. Since the [[Industrial Revolution]], the world&#039;s economic output has doubled every 15 years, 60 times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly weekly.&amp;lt;ref name=&amp;quot;Hanson&amp;quot;&amp;gt;Hanson, Robin. [https://www.spectrum.ieee.org/robotics/robotics-software/economics-of-the-singularity &amp;quot;Economics Of The Singularity&amp;quot;]. &#039;&#039;IEEE Spectrum Special Report: The Singularity&#039;&#039;. 1 June 2008. &amp;amp; [http://hanson.gmu.edu/longgrow.pdf Long-Term Growth As A Sequence of Exponential Modes].&amp;lt;/ref&amp;gt;&lt;br /&gt;
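&lt;br /&gt;
The arithmetic behind Hanson&#039;s range can be made explicit (an illustrative reconstruction): each past transition shortened the economy&#039;s doubling time by a factor of roughly 60 to 280, since 900/15 = 60 and 250,000/900 is approximately 280. Applying comparable factors to the current 15-year doubling time gives&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{15\ \text{years}}{60} = 0.25\ \text{years} \approx \text{one quarter}, \qquad \frac{15\ \text{years}}{280} \approx 2.8\ \text{weeks},&amp;lt;/math&amp;gt;&lt;br /&gt;
which is where the &amp;quot;at least quarterly and possibly weekly&amp;quot; estimates come from.&lt;br /&gt;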
&lt;br /&gt;
===Uncertainty and risk===&lt;br /&gt;
&#039;&#039;Further information: [[Existential risk from artificial general intelligence]]&#039;&#039;&lt;br /&gt;
The term &amp;quot;technological singularity&amp;quot; reflects the idea that such change may happen suddenly and that it is difficult to predict how the resulting new world would operate.&amp;lt;ref name=&amp;quot;positive-and-negative&amp;quot;&amp;gt;Yudkowsky, Eliezer. [http://singinst.org/AIRisk.pdf &amp;quot;Artificial Intelligence as a Positive and Negative Factor in Global Risk&amp;quot;]. &#039;&#039;Global Catastrophic Risks&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;theuncertainfuture&amp;quot;/&amp;gt; It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an [[Existential risk|existential threat]].&amp;lt;ref name=&amp;quot;sandberg-bostrom2008&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;bostrom-risks&amp;quot;/&amp;gt; Because AI is a major factor in singularity risk, several organizations pursue a technical theory of aligning AI goal-systems with human values, including the [[Future of Humanity Institute]] (until 2024), the [[Machine Intelligence Research Institute]],&amp;lt;ref name=&amp;quot;positive-and-negative&amp;quot;/&amp;gt; the [[Center for Human-Compatible Artificial Intelligence]], and the [[Future of Life Institute]].&lt;br /&gt;
&lt;br /&gt;
Physicist [[Stephen Hawking]] said in 2014: &amp;quot;Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.&amp;quot;&amp;lt;ref name=hawking_2014/&amp;gt; Hawking believed that in the coming decades, AI could offer &amp;quot;incalculable benefits and risks&amp;quot; such as &amp;quot;technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.&amp;quot;&amp;lt;ref name=hawking_2014/&amp;gt; He suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:&amp;lt;ref name=&amp;quot;hawking_2014&amp;quot;&amp;gt;[https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html &amp;quot;Stephen Hawking: &#039;Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?&#039;&amp;quot;]. &#039;&#039;[[The Independent]]&#039;&#039;. 1 May 2014.&amp;lt;/ref&amp;gt;&amp;lt;blockquote&amp;gt;So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, &amp;quot;We&#039;ll arrive in a few decades,&amp;quot; would we just reply, &amp;quot;OK, call us when you get here{{snd}}we&#039;ll leave the lights on&amp;quot;? Probably not{{snd}}but this is more or less what is happening with AI.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Harvtxt|Berglas|2008}} claims that there is no direct evolutionary motivation for AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.&amp;lt;ref name=&amp;quot;nickbostrom8&amp;quot;&amp;gt;Nick Bostrom, [http://www.nickbostrom.com/ethics/ai.html &amp;quot;Ethical Issues in Advanced Artificial Intelligence&amp;quot;], in &#039;&#039;Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence&#039;&#039;, Vol. 2, ed. I. Smit et al., International Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12–17.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;singinst&amp;quot;&amp;gt;[[Eliezer Yudkowsky]]. [http://singinst.org/upload/artificial-intelligence-risk.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk]. Draft for a publication in &#039;&#039;Global Catastrophic Risk&#039;&#039; from August 31, 2006, retrieved July 18, 2011 (PDF file).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;singinst9&amp;quot;&amp;gt;Hay, Nick. [http://www.singinst.org/blog/2007/06/11/the-stamp-collecting-device/ &amp;quot;The Stamp Collecting Device&amp;quot;]. &#039;&#039;SIAI Blog&#039;&#039;. June 11, 2007.&amp;lt;/ref&amp;gt; [[Anders Sandberg]] has elaborated on this, addressing various common counter-arguments.&amp;lt;ref name=&amp;quot;aleph&amp;quot;&amp;gt;Sandberg, Anders. [http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html &amp;quot;Why we should fear the Paperclipper&amp;quot;]. &#039;&#039;Andart&#039;&#039;. February 14, 2011.&amp;lt;/ref&amp;gt; AI researcher [[Hugo de Garis]] suggests that artificial intelligences may simply eliminate the human race [[instrumental convergence|for access to scarce resources]],&amp;lt;ref name=&amp;quot;selfawaresystems.com&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;selfawaresystems&amp;quot;/&amp;gt; and humans would be powerless to stop them.&amp;lt;ref name=&amp;quot;forbes&amp;quot;&amp;gt;de Garis, Hugo. [https://www.forbes.com/2009/06/18/cosmist-terran-cyborgist-opinions-contributors-artificial-intelligence-09-hugo-de-garis.html &amp;quot;The Coming Artilect War&amp;quot;]. &#039;&#039;Forbes&#039;&#039;. June 22, 2009.&amp;lt;/ref&amp;gt; Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.&amp;lt;ref name=&amp;quot;nickbostrom7&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Harvtxt|Bostrom|2002}} discusses human extinction scenarios, and lists superintelligence as a possible cause:&lt;br /&gt;
&amp;lt;blockquote&amp;gt;When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to [[Eliezer Yudkowsky]], a significant problem in AI safety is that unfriendly AI is likely to be much easier to create than friendly AI. Both require large advances in recursive optimisation process design, but friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.&amp;lt;ref name=&amp;quot;singinst12&amp;quot;&amp;gt;Yudkowsky, Eliezer S. [http://singinst.org/upload/CEV.html &amp;quot;Coherent Extrapolated Volition&amp;quot;]. May 2004.&amp;lt;/ref&amp;gt; {{harvtxt|Hibbard|2014}} proposes an AI design that avoids several dangers, including self-delusion,&amp;lt;ref name=&amp;quot;JAGI2012&amp;quot;&amp;gt;{{Citation| journal=Journal of Artificial General Intelligence| year=2012| volume=3| issue=1| title=Model-Based Utility Functions| first=Bill| last=Hibbard| postscript=.| doi=10.2478/v10229-011-0013-5| page=1|arxiv = 1111.3934 |bibcode = 2012JAGI....3....1H | s2cid=8434596}}&amp;lt;/ref&amp;gt; unintended instrumental actions,&amp;lt;ref name=&amp;quot;selfawaresystems&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;AGI-12a&amp;quot;&amp;gt;[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_56.pdf Avoiding Unintended AI Behaviors.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle. [http://intelligence.org/2012/12/19/december-2012-newsletter/ This paper won the Machine Intelligence Research Institute&#039;s 2012 Turing Prize for the Best AGI Safety Paper].&amp;lt;/ref&amp;gt; and corruption of the reward generator.&amp;lt;ref name=&amp;quot;AGI-12a&amp;quot;/&amp;gt; He also discusses social impacts of AI&amp;lt;ref name=&amp;quot;JET2008&amp;quot;&amp;gt;{{Citation| url=http://jetpress.org/v17/hibbard.htm| journal=Journal of Evolution and Technology| year=2008| volume=17| title=The Technology of Mind and a New Social Contract| first=Bill| last=Hibbard| postscript=.| access-date=2013-01-05| archive-date=2021-02-15| archive-url=https://web.archive.org/web/20210215095140/http://jetpress.org/v17/hibbard.htm| url-status=live}}&amp;lt;/ref&amp;gt; and testing AI.&amp;lt;ref name=&amp;quot;AGI-12b&amp;quot;&amp;gt;[http://agi-conference.org/2012/wp-content/uploads/2012/12/paper_57.pdf Decision Support for Safe AI Design.] Bill Hibbard. 2012 proceedings of the Fifth Conference on Artificial General Intelligence, eds. Joscha Bach, Ben Goertzel and Matthew Ikle.&amp;lt;/ref&amp;gt; His 2001 book &#039;&#039;[[Super-Intelligent Machines]]&#039;&#039; advocates public education about AI and public control over AI. It also proposed a simple design that was, however, vulnerable to corruption of the reward generator.[[File:Major Evolutionary Transitions digital.jpg|thumb|upright=1.7|Schematic Timeline of Information and Replicators in the Biosphere: Gillings et al.&#039;s &amp;quot;[[The Major Transitions in Evolution|major evolutionary transitions]]&amp;quot; in information processing.&amp;lt;ref name=&amp;quot;InfoBiosphere2016&amp;quot; /&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
===Next step of sociobiological evolution===&lt;br /&gt;
&#039;&#039;Further information: [[Sociocultural evolution]]&#039;&#039;&lt;br /&gt;
[[File:Biological vs. digital information.jpg|thumb|Amount of digital information worldwide (5{{e|21}} bytes) versus human genome information worldwide (10&amp;lt;sup&amp;gt;19&amp;lt;/sup&amp;gt; bytes) in 2014&amp;lt;ref name=&amp;quot;InfoBiosphere2016&amp;quot; /&amp;gt;]]A 2016 &#039;&#039;[[Trends in Ecology &amp;amp; Evolution]]&#039;&#039; article argues that humanity is in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society. This is due to digital technology infiltrating the fabric of human society to a degree of often life-sustaining dependence. The article says, &amp;quot;humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels [...] we trust artificial intelligence with our lives through [[Anti-lock braking system|antilock braking in cars]] and [[autopilot]]s in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction&amp;quot;.&amp;lt;ref name=&amp;quot;InfoBiosphere2016&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The article further argues that from the perspective of [[evolution]], several previous [[The Major Transitions in Evolution|Major Transitions in Evolution]] have transformed life through innovations in information storage and replication ([[RNA]], [[DNA]], [[multicellularity]], and culture and language). In the current stage of life&#039;s evolution, the carbon-based biosphere has generated a system (humans) capable of creating technology that will result in a comparable [[The Major Transitions in Evolution|evolutionary transition]].&amp;lt;ref name=&amp;quot;InfoBiosphere2016&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 [[zettabyte]]s in 2014 (5{{e|21}} bytes).&amp;lt;ref&amp;gt;Hilbert, Martin. [http://www.martinhilbert.net/wp-content/uploads/2018/07/Hilbert2017_ReferenceWorkEntry_InformationQuantity.pdf &amp;quot;Information Quantity&amp;quot;]. 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In biological terms, there are 7.2&amp;amp;nbsp;billion humans on the planet, each with a genome of 6.2&amp;amp;nbsp;billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human could be encoded by approximately 1{{e|19}} bytes. The digital realm stored 500 times more information than this in 2014 (see figure). The total amount of DNA in all the cells on Earth is estimated to be about 5.3{{e|37}} base pairs, equivalent to 1.325{{e|37}} bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth,&amp;lt;ref name=&amp;quot;HilbertLopez2011&amp;quot; /&amp;gt; it will rival the total information content in all the DNA in all the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere in just 150 years.&amp;lt;ref name=&amp;quot;InfoBiosphere2016&amp;quot;&amp;gt;Gillings, Michael R.; Hilbert, Martin; Kemp, Darrell J. [http://escholarship.org/uc/item/38f4b791 &amp;quot;Information in the Biosphere: Biological and Digital Worlds&amp;quot;]. &#039;&#039;Trends in Ecology &amp;amp; Evolution&#039;&#039;. 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
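&lt;br /&gt;
These figures follow from simple arithmetic, reproduced here for illustration. The biological total is&lt;br /&gt;
:&amp;lt;math&amp;gt;7.2\times 10^{9}\ \text{humans} \times 6.2\times 10^{9}\ \text{nucleotides} \approx 4.5\times 10^{19}\ \text{nucleotides} \approx 1.1\times 10^{19}\ \text{bytes},&amp;lt;/math&amp;gt;&lt;br /&gt;
so the 2014 digital total of &amp;lt;math&amp;gt;5\times 10^{21}&amp;lt;/math&amp;gt; bytes exceeds it by a factor of about 500. A doubling every 2.5 years corresponds to annual growth of &amp;lt;math&amp;gt;2^{1/2.5}-1 \approx 32\%&amp;lt;/math&amp;gt;, consistent with the 30–38% range, and at 38% annual growth the time for &amp;lt;math&amp;gt;5\times 10^{21}&amp;lt;/math&amp;gt; bytes to reach &amp;lt;math&amp;gt;1.325\times 10^{37}&amp;lt;/math&amp;gt; bytes is &amp;lt;math&amp;gt;\ln(2.65\times 10^{15})/\ln(1.38) \approx 110&amp;lt;/math&amp;gt; years.&lt;br /&gt;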
&lt;br /&gt;
===Implications for human society===&lt;br /&gt;
&#039;&#039;Further information: [[Artificial intelligence in fiction]]&#039;&#039;&lt;br /&gt;
In 2009, under the auspices of the [[Association for the Advancement of Artificial Intelligence]] (AAAI), [[Eric Horvitz]] chaired a meeting of leading computer scientists, artificial intelligence researchers, and roboticists at the Asilomar conference center in Pacific Grove, California. The goal was to discuss the impact of the possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might acquire [[autonomy]], and to what degree they could use such abilities to pose threats or hazards.&amp;lt;ref name=&amp;quot;nytimes july09&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some [[computer virus]]es can evade elimination and, according to scientists in attendance, could therefore be said to have reached a &amp;quot;cockroach&amp;quot; stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.&amp;lt;ref name=&amp;quot;nytimes july09&amp;quot;&amp;gt;Markoff, John. [https://www.nytimes.com/2009/07/26/science/26robot.html?_r=1&amp;amp;ref=todayspaper &amp;quot;Scientists Worry Machines May Outsmart Man&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 26 July 2009.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Frank S. Robinson predicts that once humanity achieves a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this will result in a global network of super-intelligence that dwarfs human capability.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Robinson, Frank S.. [https://thehumanist.com/magazine/july-august-2013/features/the-human-future-upgrade-or-replacement &amp;quot;The Human Future: Upgrade or Replacement?&amp;quot;]. &#039;&#039;[[The Humanist]]&#039;&#039;. 27 June 2013.&amp;lt;/ref&amp;gt; Robinson also discusses how vastly different the future would look after such an intelligence explosion.&lt;br /&gt;
&lt;br /&gt;
==Hard or soft takeoff==&lt;br /&gt;
[[File:Recursive self-improvement.svg|thumb|upright=1.6|In this sample recursive self-improvement scenario, humans modifying an AI&#039;s architecture would be able to double its performance every three years for, say, 30 generations before exhausting all feasible improvements (left). If instead the AI is smart enough to modify its own architecture as well as human researchers can, the time required to complete a redesign halves with each generation, and it progresses through all 30 feasible generations in six years (right).&amp;lt;ref name=&amp;quot;yudkowsky-global-risk&amp;quot;&amp;gt;[[Eliezer Yudkowsky]]. &amp;quot;Artificial intelligence as a positive and negative factor in global risk.&amp;quot; Global catastrophic risks (2008).&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
In a hard takeoff scenario, an artificial superintelligence rapidly self-improves, &amp;quot;taking control&amp;quot; of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the agent&#039;s goals. In a soft takeoff, the AI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer its development.&amp;lt;ref&amp;gt;Bugaj, Stephan Vladimir, and Ben Goertzel. &amp;quot;Five ethical imperatives and their implications for human-AGI interaction.&amp;quot; Dynamical Psychology (2007).&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Sotala, Kaj, and Roman V. Yampolskiy. &amp;quot;Responses to catastrophic AGI risk: a survey.&amp;quot; Physica Scripta 90.1 (2014): 018001.&amp;lt;/ref&amp;gt;&lt;br /&gt;
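&lt;br /&gt;
The contrast between the two panels of the figure above reduces to a geometric series. The following minimal sketch (using the illustrative numbers from the figure caption, not data from the cited sources) makes the comparison concrete:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Back-of-the-envelope model of the two redesign schedules in the figure.&lt;br /&gt;
# All numbers are illustrative assumptions taken from the caption.&lt;br /&gt;
generations = 30   # feasible architecture improvements&lt;br /&gt;
human_cycle = 3.0  # years per redesign when humans do the work&lt;br /&gt;
&lt;br /&gt;
# Left panel: humans redesign the AI at a fixed pace.&lt;br /&gt;
human_total = generations * human_cycle&lt;br /&gt;
&lt;br /&gt;
# Right panel: the AI redesigns itself, halving the cycle time each generation.&lt;br /&gt;
ai_total = sum(human_cycle * 0.5 ** n for n in range(generations))&lt;br /&gt;
&lt;br /&gt;
print(f&#039;human-driven:   {human_total:.0f} years&#039;)  # 90 years&lt;br /&gt;
print(f&#039;self-improving: {ai_total:.2f} years&#039;)     # about 6 years&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Because the cycle times form a geometric series, even infinitely many generations would complete within &amp;lt;math&amp;gt;3/(1-\tfrac{1}{2}) = 6&amp;lt;/math&amp;gt; years under this assumed schedule, which is what makes such a takeoff &amp;quot;hard&amp;quot;.&lt;br /&gt;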
&lt;br /&gt;
[[Ramez Naam]] argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. [[Intel]], for example, has &amp;quot;the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!&amp;quot; But this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of [[Moore&#039;s law]].&amp;lt;ref name=Naam2014Further&amp;gt;Naam, Ramez. [http://www.antipope.org/charlie/blog-static/2014/02/the-singularity-is-further-tha.html &amp;quot;The Singularity Is Further Than It Appears&amp;quot;].&amp;lt;/ref&amp;gt; Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that &amp;quot;creating a mind of intelligence 2 is probably &#039;&#039;more&#039;&#039; than twice as hard as creating a mind of intelligence 1.&amp;quot;&amp;lt;ref name=&amp;quot;Naam2014Ascend&amp;quot;&amp;gt;Naam, Ramez. [http://www.antipope.org/charlie/blog-static/2014/02/why-ais-wont-ascend-in-blink-of-an-eye.html &amp;quot;Why AIs Won&#039;t Ascend in the Blink of an Eye – Some Math&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
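&lt;br /&gt;
One way to make Naam&#039;s point precise (an illustrative formalisation, not Naam&#039;s own notation): suppose design work is produced at a rate proportional to the current intelligence &amp;lt;math&amp;gt;I_n&amp;lt;/math&amp;gt;, while the difficulty of designing the next generation grows as &amp;lt;math&amp;gt;I_{n+1}^{\alpha}&amp;lt;/math&amp;gt;. With each generation doubling intelligence, &amp;lt;math&amp;gt;I_{n+1} = 2I_n&amp;lt;/math&amp;gt;, the time per generation is&lt;br /&gt;
:&amp;lt;math&amp;gt;t_n = \frac{c\,I_{n+1}^{\alpha}}{I_n} = c\,2^{\alpha}\,I_n^{\alpha-1}.&amp;lt;/math&amp;gt;&lt;br /&gt;
When &amp;lt;math&amp;gt;\alpha&amp;lt;/math&amp;gt; equals 1, the cycle time is constant, a steady Moore&#039;s-law-like cadence; when it exceeds 1, as Naam suggests, each generation takes &#039;&#039;longer&#039;&#039; than the last, yielding a soft takeoff.&lt;br /&gt;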
&lt;br /&gt;
[[J. Storrs Hall]] believes that &amp;quot;many of the more commonly seen scenarios for overnight hard takeoff are circular{{snd}}they seem to assume hyperhuman capabilities at the &#039;&#039;starting point&#039;&#039; of the self-improvement process&amp;quot; in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.&amp;lt;ref name=Hall2008&amp;gt;Hall, J. Storrs. [http://www.agiri.org/takeoff_hall.pdf &amp;quot;Engineering Utopia&amp;quot;]. &#039;&#039;Artificial General Intelligence, 2008: Proceedings of the First AGI Conference&#039;&#039;. 2008.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ben Goertzel agrees with Hall&#039;s suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI&#039;s talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. He calls this a &amp;quot;semihard takeoff&amp;quot;.&amp;lt;ref name=&amp;quot;Goertzel2014&amp;quot;&amp;gt;Goertzel, Ben. [http://hplusmagazine.com/2014/09/26/superintelligence-semi-hard-takeoff-scenarios/ &amp;quot;Superintelligence — Semi-hard Takeoff Scenarios&amp;quot;]. 26 Sep 2014.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Max More]] disagrees, arguing that if there were only a few superfast human-level AIs, they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing superhuman intelligence, although the rate of progress would increase. More further argues that superintelligence would not transform the world overnight: it would need to engage with existing, slow human systems to have physical impact on the world. &amp;quot;The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years.&amp;quot;&amp;lt;ref name=More&amp;gt;More, Max. [http://hanson.gmu.edu/vc.html#more &amp;quot;Singularity Meets Economy&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Relation to immortality and aging ==&lt;br /&gt;
&#039;&#039;Main article: [[Biological machine]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[K. Eric Drexler|Eric Drexler]], one of the founders of [[nanotechnology]], theorized in 1986 the possibility of cell repair devices, including ones operating within cells and using as yet hypothetical [[biological machine]]s, allowing [[Immortality#Technological immortality, biological machines, and &amp;quot;swallowing the doctor&amp;quot;|immortality via nanotechnology]].&amp;lt;ref name=&amp;quot;drexler1986&amp;quot;/&amp;gt; According to [[Richard Feynman]], his former graduate student and collaborator [[Albert Hibbs]] originally suggested to him (circa 1959) the idea of a &#039;&#039;medical&#039;&#039; use for Feynman&#039;s theoretical micromachines. Hibbs suggested that certain repair machines might one day be shrunk to the point that it would, in theory, be possible to (as Feynman put it) &amp;quot;[[Molecular machine#Biological|swallow the doctor]]&amp;quot;. The idea was incorporated into Feynman&#039;s 1959 essay &#039;&#039;[[There&#039;s Plenty of Room at the Bottom]].&#039;&#039;&amp;lt;ref name=&amp;quot;feynman1959&amp;quot;&amp;gt;Feynman, Richard P.. [http://www.its.caltech.edu/~feynman/plenty.html &amp;quot;There&#039;s Plenty of Room at the Bottom&amp;quot;]. December 1959.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1988, Moravec predicted [[mind uploading]]: the possibility of &amp;quot;uploading&amp;quot; a human mind into a human-like robot, achieving quasi-immortality through extreme longevity by transferring the human mind between successive new robots as the old ones wear out. Beyond that, he predicted an exponential acceleration of the subjective experience of time, leading to a subjective sense of immortality.&amp;lt;ref name=&amp;quot;moravec1988&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2005, Kurzweil suggested that medical advances would allow people to protect their bodies from the effects of aging, making [[Life extension|life expectancy limitless]]. He argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.&amp;lt;ref name=&amp;quot;kurzweil2005-215&amp;quot;/&amp;gt; Kurzweil buttresses his argument by discussing current bio-engineering advances. He suggests [[somatic gene therapy]]: after the synthesis of viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.&amp;lt;ref&amp;gt;&#039;&#039;The Singularity Is Near&#039;&#039;, p.&amp;amp;nbsp;216.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Beyond merely extending the operational life of the physical body, [[Jaron Lanier]] argues for a form of immortality called &amp;quot;Digital Ascension&amp;quot; that involves &amp;quot;people dying in the flesh and being uploaded into a computer and remaining conscious.&amp;quot;&amp;lt;ref&amp;gt;Lanier, Jaron. [https://archive.org/details/isbn_9780307269645 &amp;quot;You Are Not a Gadget: A Manifesto&amp;quot;]. [[Alfred A. Knopf]].&amp;lt;/ref&amp;gt; This idea is central to the television series &#039;&#039;[[Upload (TV series)|Upload]]&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==History of the concept==&lt;br /&gt;
A paper by Mahendra Prasad, published in &#039;&#039;[[AI Magazine]]&#039;&#039;, asserts that the 18th-century mathematician [[Marquis de Condorcet]] first hypothesized and mathematically modeled an intelligence explosion and its effects on humanity.&amp;lt;ref&amp;gt;Prasad, Mahendra. &amp;quot;Nicolas de Condorcet and the First Intelligence Explosion Hypothesis&amp;quot;. &#039;&#039;AI Magazine&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An early description of the idea was made in [[John W. Campbell]]&#039;s 1932 short story &amp;quot;The Last Evolution&amp;quot;.&amp;lt;ref&amp;gt;[https://www.gutenberg.org/files/27462/27462-h/27462-h.htm &amp;quot;The Last Evolution&amp;quot;]. &#039;&#039;Amazing Stories&#039;&#039;. August 1932.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In his 1958 obituary for [[John von Neumann]], Ulam recalled a conversation with him about the &amp;quot;ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.&amp;quot;&amp;lt;ref name=&amp;quot;ulam1958&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1965, Good wrote his essay postulating an &amp;quot;intelligence explosion&amp;quot; of recursive self-improvement of a machine intelligence.&amp;lt;ref name=&amp;quot;good1965&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;good1965-stat&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1977, [[Hans Moravec]] wrote an article (its publication status is unclear) in which he envisioned the development of self-improving thinking machines and the creation of &amp;quot;super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind.&amp;quot;&amp;lt;ref&amp;gt;Moravec, Hans (1977). [https://frc.ri.cmu.edu/~hpm/project.archive/general.articles/1977/smart Intelligent machines: How to get there from here and What to do afterwards] ([[wikidata:Q115765098|wikidata]]).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;smart1999&amp;quot;/&amp;gt; The article describes the human mind uploading later covered in Moravec (1988). The machines are expected to reach human level and then improve themselves beyond that (&amp;quot;Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.&amp;quot;) Humans will no longer be needed, and their abilities will be overtaken by the machines: &amp;quot;In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe.&amp;quot; While the word &amp;quot;singularity&amp;quot; is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there. In this view, there is no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in [[Analog Science Fiction and Fact]].&amp;lt;ref&amp;gt;Moravec, Hans (1979). [https://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html Today&#039;s Computers, Intelligent Machines and Our Future], [[wikidata:Q115765733|wikidata]].&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;smart1999&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1981, [[Stanisław Lem]] published his [[science fiction]] novel &#039;&#039;[[Golem XIV]]&#039;&#039;. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its intelligence, moving toward personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.&lt;br /&gt;
&lt;br /&gt;
[[Vernor Vinge]] addressed Good&#039;s intelligence explosion in the January 1983 issue of &#039;&#039;[[Omni (magazine)|Omni]]&#039;&#039; magazine. Vinge seems to have been the first to use the term &amp;quot;singularity&amp;quot; (although not &amp;quot;technological singularity&amp;quot;) in a way specifically tied to the creation of intelligent machines:&amp;lt;ref name=&amp;quot;dooling2008-88&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;smart1999&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1985, in &amp;quot;The Time Scale of Artificial Intelligence&amp;quot;, AI researcher [[Ray Solomonoff]] articulated mathematically the related notion of what he called an &amp;quot;infinity point&amp;quot;: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;solomonoff1985&amp;quot;/&amp;gt;&lt;br /&gt;
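&lt;br /&gt;
The finiteness of Solomonoff&#039;s &amp;quot;infinity point&amp;quot; follows from summing the geometric series of doubling times:&lt;br /&gt;
:&amp;lt;math&amp;gt;4 + 2 + 1 + \tfrac{1}{2} + \cdots = \sum_{n=0}^{\infty} 4 \cdot 2^{-n} = 8\ \text{years},&amp;lt;/math&amp;gt;&lt;br /&gt;
so on this model infinitely many speed doublings, and hence unbounded capability, fit within eight years of the starting point.&lt;br /&gt;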
&lt;br /&gt;
In 1986, Vinge published &#039;&#039;[[Marooned in Realtime]]&#039;&#039;, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, Vinge writes that an actual technological singularity would not be the end of the human species: &amp;quot;of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)&amp;quot;.&amp;lt;ref&amp;gt;Vinge, Vernor. [https://books.google.com/books?id=H1NOwjENGOkC&amp;amp;dq=%22Singularity%22&amp;amp;pg=PA271 &amp;quot;Marooned in Realtime&amp;quot;]. Macmillan. 2004-10-01.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.washingtonpost.com/archive/entertainment/books/1986/09/28/time-and-time-again/1426eb5b-74bb-4652-9e38-1bbca5c76226/ &amp;quot;Time and Time Again&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. 1986-09-28.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1988, Vinge used the phrase &amp;quot;technological singularity&amp;quot; in the short-story collection &#039;&#039;Threats and Other Promises&#039;&#039;, writing in the introduction to his story &amp;quot;The Whirligig of Time&amp;quot;: &#039;&#039;Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and&#039;&#039; soon. &#039;&#039;When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological &amp;quot;black hole&amp;quot;, a technological singularity.&#039;&#039;&amp;lt;ref&amp;gt;Vinge, Vernor. [https://books.google.com/books?id=vX8gAQAAIAAJ&amp;amp;q=%22At+that+point+we+have+fallen+into+a+technological%22 &amp;quot;Threats and Other Promises&amp;quot;]. Baen. 1988.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 1988, [[Hans Moravec]] published &#039;&#039;Mind Children&#039;&#039;,&amp;lt;ref name=&amp;quot;moravec1988&amp;quot;/&amp;gt; in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention &amp;quot;singularity&amp;quot;, though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later.&lt;br /&gt;
&lt;br /&gt;
Vinge&#039;s 1993 article &amp;quot;The Coming Technological Singularity: How to Survive in the Post-Human Era&amp;quot;&amp;lt;ref name=&amp;quot;vinge1993&amp;quot; /&amp;gt; spread widely on the internet and helped popularize the idea.&amp;lt;ref name=&amp;quot;dooling2008-89&amp;quot;/&amp;gt; This article contains the statement, &amp;quot;Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.&amp;quot; Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond humans&#039; ability to express.&amp;lt;ref name=&amp;quot;vinge1993&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Marvin Minsky|Minsky]]&#039;s 1994 article says robots will &amp;quot;inherit the Earth&amp;quot;, possibly with the use of nanotechnology, and proposes to think of robots as human &amp;quot;mind children&amp;quot;, drawing the analogy from Moravec. The rhetorical effect of the analogy is that if humans are content to pass the world on to their biological children, they should be equally content to pass it on to robots, their &amp;quot;mind children&amp;quot;. Per Minsky, &amp;quot;we could design our &#039;mind-children&#039; to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.&amp;quot; The feature of the singularity present in Minsky is the development of superhuman artificial intelligence (&amp;quot;million times faster&amp;quot;), but there is no talk of a sudden intelligence explosion, self-improving thinking machines, or unpredictability beyond any specific event, and the word &amp;quot;singularity&amp;quot; is not used.&amp;lt;ref&amp;gt;[https://web.media.mit.edu/~minsky/papers/sciam.inherit.html &amp;quot;Will Robots Inherit the Earth?&amp;quot;]. &#039;&#039;web.media.mit.edu&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Frank J. Tipler|Tipler]]&#039;s 1994 book &#039;&#039;[[The Physics of Immortality (book)|The Physics of Immortality]]&#039;&#039; predicts a future where superintelligent machines build enormously powerful computers, people are &amp;quot;emulated&amp;quot; in computers, life reaches every galaxy, and people achieve immortality when they reach the [[Omega Point]].{{Citation needed}} There is no talk of Vingean &amp;quot;singularity&amp;quot; or sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality.&lt;br /&gt;
&lt;br /&gt;
In 2000, [[Bill Joy]], a prominent technologist and a co-founder of [[Sun Microsystems]], voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.&amp;lt;ref name=&amp;quot;Joy2000&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2005, Kurzweil published &#039;&#039;[[The Singularity Is Near]]&#039;&#039;. Kurzweil&#039;s publicity campaign included an appearance on &#039;&#039;[[The Daily Show with Jon Stewart]]&#039;&#039;.&amp;lt;ref name=&amp;quot;episode2006&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From 2006 to 2012, an annual [[Singularity Summit]] conference was organized by the [[Machine Intelligence Research Institute]], founded by [[Eliezer Yudkowsky]].&lt;br /&gt;
&lt;br /&gt;
In 2007, Yudkowsky suggested that many of the varied definitions that have been assigned to &amp;quot;singularity&amp;quot; are mutually incompatible rather than mutually supporting.&amp;lt;ref name=&amp;quot;yudkowsky2007&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Sandberg, Anders. &amp;quot;An overview of models of technological singularity.&amp;quot; Roadmaps to AGI and the Future of AGI Workshop, Lugano, Switzerland, March. Vol. 8. 2010.&amp;lt;/ref&amp;gt; For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good&#039;s proposed discontinuous upswing in intelligence and Vinge&#039;s thesis on unpredictability.&amp;lt;ref name=&amp;quot;yudkowsky2007&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2009, Kurzweil and [[X-Prize]] founder [[Peter Diamandis]] announced the establishment of [[Singularity University]], a nonaccredited private institute whose mission is &amp;quot;to educate, inspire and empower leaders to apply exponential technologies to address humanity&#039;s grand challenges.&amp;quot;&amp;lt;ref name=&amp;quot;singularityu&amp;quot;/&amp;gt; Funded by companies such as [[Google]],&amp;lt;ref&amp;gt;Vance, Ashlee. [https://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all &amp;quot;Merely Human? That&#039;s So Yesterday&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. June 12, 2010.&amp;lt;/ref&amp;gt; [[Autodesk]],&amp;lt;ref name=&amp;quot;BBC&amp;quot;&amp;gt;[https://www.bbc.com/news/technology-25000753 &amp;quot;Singularity University plots hi-tech future for humans&amp;quot;]. &#039;&#039;[[BBC News]]&#039;&#039;. 2013-12-03.&amp;lt;/ref&amp;gt; and [[ePlanet Ventures]],&amp;lt;ref&amp;gt;Kenrick, Chris. [https://www.paloaltoonline.com/news/2012/08/17/where-science-fiction-meets-reality &amp;quot;Where science fiction meets reality&amp;quot;]. &#039;&#039;[[Palo Alto Weekly]]&#039;&#039;. 2012-08-17.&amp;lt;/ref&amp;gt; the organization runs an annual ten-week graduate program as well as smaller &amp;quot;executive&amp;quot; courses.&amp;lt;ref&amp;gt;Cadwalladr, Carole. [http://www.theguardian.com/technology/2012/apr/29/singularity-university-technology-future-thinkers &amp;quot;Singularity University: meet the people who are building our future&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 2012-04-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==In politics==&lt;br /&gt;
In 2007, the Joint Economic Committee of the [[United States Congress]] released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.&amp;lt;ref&amp;gt;Guston, David H.. [https://books.google.com/books?id=vyp1AwAAQBAJ&amp;amp;pg=PA375 &amp;quot;Encyclopedia of Nanoscience and Society&amp;quot;]. SAGE Publications. 14 July 2010.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;treder2007&amp;quot;&amp;gt;Treder, Mike. [http://crnano.typepad.com/crnblog/2007/03/congress_and_th.html &amp;quot;Congress and the Singularity&amp;quot;]. &#039;&#039;Responsible Nanotechnology&#039;&#039;. March 31, 2007.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Former [[President of the United States]] [[Barack Obama]] spoke about the singularity in an interview with &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039; in 2016:&amp;lt;ref&amp;gt;Dadich, Scott. [https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/ &amp;quot;Barack Obama Talks AI, Robo Cars, and the Future of the World&amp;quot;]. &#039;&#039;Wired&#039;&#039;. 12 October 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;One thing that we haven&#039;t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren&#039;t spending a lot of time right now worrying about singularity—they are worrying about &amp;quot;Well, is my job going to be replaced by a machine?&amp;quot;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
{{notelist}}&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
{{Portal|Technology}}&lt;br /&gt;
* {{annotated link|Artificial consciousness}}&lt;br /&gt;
* {{annotated link|Ephemeralization}}&lt;br /&gt;
* [[Artificial intelligence]]&lt;br /&gt;
* [[AI effect]]&lt;br /&gt;
* [[The Future of Work and Death]] – Documentary about the exponential growth of technology&lt;br /&gt;
* {{annotated link|Global brain}}&lt;br /&gt;
* {{annotated link|Technological revolution}}&lt;br /&gt;
* {{annotated link|Technophobia}}&lt;br /&gt;
** {{annotated link|Neo-Luddism}}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
=== Citations ===&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;ulam1958&amp;quot;&amp;gt;Ulam, Stanislaw. [https://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf &amp;quot;Tribute to John von Neumann&amp;quot;]. &#039;&#039;[[Bulletin of the American Mathematical Society]]&#039;&#039;. May 1958.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;good1965&amp;quot;&amp;gt;Good, I. J. [http://www.aeiveos.com/~bradbury/Authors/Computing/Good-IJ/SCtFUM.html &amp;quot;Speculations Concerning the First Ultraintelligent Machine&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;good1965-stat&amp;quot;&amp;gt;Good, I. J. &amp;quot;Speculations Concerning the First Ultraintelligent Machine&amp;quot;. &#039;&#039;Advances in Computers&#039;&#039;, vol. 6, 1965.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;solomonoff1985&amp;quot;&amp;gt;Solomonoff, R.J. [http://world.std.com/~rjs/timesc.pdf &amp;quot;The Time Scale of Artificial Intelligence: Reflections on Social Effects&amp;quot;], Human Systems Management, Vol 5, pp. 149–153, 1985.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;drexler1986&amp;quot;&amp;gt;[[K. Eric Drexler]], &#039;&#039;[[Engines of Creation]]&#039;&#039;, 1986&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;tainter1988&amp;quot;&amp;gt;Tainter, Joseph (1988) &amp;quot;[http://monoskop.org/images/a/ab/Tainter_Joseph_The_Collapse_of_Complex_Societies.pdf The Collapse of Complex Societies] &amp;quot; (Cambridge University Press)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;moravec1988&amp;quot;&amp;gt;Hans Moravec, &#039;&#039;[[Mind Children]]&#039;&#039;, 1988&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil1999&amp;quot;&amp;gt;Ray Kurzweil, &#039;&#039;[[The Age of Spiritual Machines]]&#039;&#039;, Viking; 1999, {{ISBN|978-0-14-028202-3}}, pp. [https://books.google.com/books?id=ldAGcyh0bkUC&amp;amp;pg=PA630 30, 32].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;smart1999&amp;quot;&amp;gt;Smart, John. [https://www.accelerationwatch.com/history_brief.html &amp;quot;A Brief History of Intellectual Discussion of Accelerating Change&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Joy2000&amp;quot;&amp;gt;Joy, Bill. [https://www.wired.com/2000/04/joy-2/ &amp;quot;Why the Future Doesn&#039;t Need Us&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. April 2000.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;modis2002&amp;quot;&amp;gt;Modis, Theodore (2002) [http://www.growth-dynamics.com/articles/Forecasting_Complexity.pdf &amp;quot;Forecasting the Growth of Complexity and Change&amp;quot;] , &#039;&#039;Technological Forecasting &amp;amp; Social Change&#039;&#039;, 69, No 4, 2002, pp. 377 – 404&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil2005-135&amp;quot;&amp;gt;Ray Kurzweil, The Singularity Is Near, pp. 135–136. Penguin Group, 2005.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil2005-215&amp;quot;&amp;gt;Ray Kurzweil, The Singularity Is Near, p. 215. Penguin Group, 2005.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil2005&amp;quot;&amp;gt;Ray Kurzweil, &#039;&#039;The Singularity Is Near&#039;&#039;, Penguin Group, 2005.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil2005-9&amp;quot;&amp;gt;Ray Kurzweil, The Singularity Is Near, p. 9. Penguin Group, 2005&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;kurzweil2005-135136&amp;quot;&amp;gt;Ray Kurzweil, &#039;&#039;The Singularity Is Near&#039;&#039;, pp. 135–136. Penguin Group, 2005.&lt;br /&gt;
&amp;quot;So we will be producing about 10&amp;lt;sup&amp;gt;26&amp;lt;/sup&amp;gt; to 10&amp;lt;sup&amp;gt;29&amp;lt;/sup&amp;gt; cps of nonbiological computation per year in the early 2030s. This is roughly equal to our estimate for the capacity of all living biological human intelligence ... This state of computation in the early 2030s will not represent the Singularity, however, because it does not yet correspond to a profound expansion of our intelligence. By the mid-2040s, however, that one thousand dollars&#039; worth of computation will be equal to 10&amp;lt;sup&amp;gt;26&amp;lt;/sup&amp;gt; cps, so the intelligence created per year (at a total cost of about $10&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt;) will be about one billion times more powerful than all human intelligence today. That &#039;&#039;will&#039;&#039; indeed represent a profound change, and it is for that reason that I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.&amp;quot;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;huebner2005&amp;quot;&amp;gt;Huebner, Jonathan (2005) [http://81.47.175.201/flagship/attachments/InnovationHuebnerTFSC2005.pdf &amp;quot;A Possible Declining Trend for Worldwide Innovation&amp;quot;] , &#039;&#039;Technological Forecasting &amp;amp; Social Change&#039;&#039;, October 2005, pp. 980–6&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;episode2006&amp;quot;&amp;gt;[https://www.imdb.com/title/tt0847969/ &amp;quot;&#039;&#039;The Daily Show&#039;&#039;, Season 11, Episode 109: Frederick Lane (aired 23 August 2006)&amp;quot;]. [[IMDb]].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;modis2006&amp;quot;&amp;gt;Modis, Theodore (2006) [http://www.growth-dynamics.com/articles/Kurzweil_critique.pdf &amp;quot;The Singularity Myth&amp;quot;], &#039;&#039;Technological Forecasting &amp;amp; Social Change&#039;&#039;, February 2006, pp. 104–112.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;krazit2006&amp;quot;&amp;gt;Krazit, Tom. [http://news.cnet.com/2100-1006_3-6119618.html &amp;quot;Intel pledges 80 cores in five years&amp;quot;]. &#039;&#039;CNET News&#039;&#039;. 26 September 2006.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;yudkowsky2007&amp;quot;&amp;gt;Yudkowsky. [http://yudkowsky.net/singularity/schools &amp;quot;The Singularity: Three Major Schools&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;dooling2008-88&amp;quot;&amp;gt;Dooling, Richard. &#039;&#039;[[Rapture for the Geeks|Rapture for the Geeks: When AI Outsmarts IQ]]&#039;&#039; (2008), [https://books.google.com/books?id=VbBRsv1lxsUC&amp;amp;lpg=PP1&amp;amp;pg=PA88 p. 88]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;dooling2008-89&amp;quot;&amp;gt;Dooling, Richard. &#039;&#039;[[Rapture for the Geeks|Rapture for the Geeks: When AI Outsmarts IQ]]&#039;&#039; (2008), [https://books.google.com/books?id=VbBRsv1lxsUC&amp;amp;lpg=PP1&amp;amp;pg=PA89 p. 89]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;sandberg-bostrom2008&amp;quot;&amp;gt;Sandberg, Anders; Bostrom, Nick. [http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/3854/global-catastrophic-risks-report.pdf &amp;quot;Global Catastrophic Risks Survey (2008) Technical Report 2008/1&amp;quot;]. Future of Humanity Institute.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;ieee-lumi&amp;quot;&amp;gt;[https://spectrum.ieee.org/tech-luminaries-address-singularity &amp;quot;Tech Luminaries Address Singularity&amp;quot;]. &#039;&#039;IEEE Spectrum&#039;&#039;. 1 June 2008.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;ieee-whos-who&amp;quot;&amp;gt;Wallich, Paul. [https://spectrum.ieee.org/computing/hardware/whos-who-in-the-singularity &amp;quot;Who&#039;s Who In The Singularity&amp;quot;]. &#039;&#039;IEEE Spectrum&#039;&#039;. 1 Jun 2008.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;sandberg2009&amp;quot;&amp;gt;[[Anders Sandberg|Sandberg, Anders]]. [http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf An overview of models of technological singularity] &amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;PZMyers2009&amp;quot;&amp;gt;Myers, PZ. [http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php &amp;quot;Singularly Silly Singularity&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;bostrom-risks&amp;quot;&amp;gt;[http://www.nickbostrom.com/existential/risks.html &amp;quot;Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards&amp;quot;]. &#039;&#039;nickbostrom.com&#039;&#039;. 2002.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;singularityu&amp;quot;&amp;gt;[http://singularityu.org/ Singularity University]  at its official website&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;theuncertainfuture&amp;quot;&amp;gt;[http://www.theuncertainfuture.com/ &amp;quot;The Uncertain Future&amp;quot;]. &#039;&#039;theuncertainfuture.com; a future technology and world-modeling project&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;ford2009&amp;quot;&amp;gt;Ford, Martin, &#039;&#039;[http://www.thelightsinthetunnel.com/ The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future] &#039;&#039;, Acculant Publishing, 2009, {{ISBN|978-1-4486-5981-4}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;hall2010&amp;quot;&amp;gt;Hall, Josh. [http://www.hplusmagazine.com/articles/nano/singularity-nanotech-or-ai &amp;quot;Singularity: Nanotech or AI?&amp;quot;]. Hplusmagazine.com.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;chalmers2010&amp;quot;&amp;gt;[[David Chalmers|Chalmers, David J.]]. [https://consc.net/papers/singularity.pdf &amp;quot;The Singularity: A Philosophical Analysis&amp;quot;]. &#039;&#039;Journal of Consciousness Studies&#039;&#039;. 2010.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;Allen2011&amp;quot;&amp;gt;Allen, Paul G.. [https://www.technologyreview.com/2011/10/12/190773/paul-allen-the-singularity-isnt-near/ &amp;quot;Paul Allen: The Singularity Isn&#039;t Near&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. October 12, 2011.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=&amp;quot;markoff2011&amp;quot;&amp;gt;Markoff, John. [https://www.nytimes.com/2011/03/05/science/05legal.html &amp;quot;Armies of Expensive Lawyers, Replaced by Cheaper Software&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 4 March 2011.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;/references&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Sources ===&lt;br /&gt;
{{refbegin}}&lt;br /&gt;
* Kurzweil, Ray. &#039;&#039;The Singularity Is Near&#039;&#039;. Penguin Group, 2005.&lt;br /&gt;
* [[William D. Nordhaus]], &amp;quot;Why Growth Will Fall&amp;quot; (a review of [[Robert J. Gordon]], &#039;&#039;The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War&#039;&#039;, Princeton University Press, 2016.{{ISBN|978-0691147727}}, 762 pp., $39.95), &#039;&#039;[[The New York Review of Books]]&#039;&#039;, vol. LXIII, no. 13 (August 18, 2016), pp.&amp;amp;nbsp;64, 66, 68.&lt;br /&gt;
* [[John R. Searle]], &amp;quot;What Your Computer Can&#039;t Know&amp;quot; (review of [[Luciano Floridi]], &#039;&#039;The Fourth Revolution:  How the Infosphere Is Reshaping Human Reality&#039;&#039;, Oxford University Press, 2014; and [[Nick Bostrom]], &#039;&#039;Superintelligence: Paths, Dangers, Strategies&#039;&#039;, Oxford University Press, 2014), &#039;&#039;[[The New York Review of Books]]&#039;&#039;, vol. LXI, no. 15 (October 9, 2014), pp.&amp;amp;nbsp;52–55.&lt;br /&gt;
* Good, I. J. [https://www.stat.vt.edu/content/dam/stat_vt_edu/graphics-and-pdfs/research-papers/Technical_Reports/TechReport05-3.pdf &amp;quot;Advances in Computers Volume 6&amp;quot;]. [[Academic Press]].&lt;br /&gt;
* Hanson, Robin. [http://hanson.gmu.edu/vc.html#hanson &amp;quot;Some Skepticism&amp;quot;]. Robin Hanson.&lt;br /&gt;
* Berglas, Anthony. [http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html &amp;quot;Artificial Intelligence will Kill our Grandchildren&amp;quot;].&lt;br /&gt;
* Bostrom, Nick. [http://www.nickbostrom.com/existential/risks.html &amp;quot;Existential Risks&amp;quot;]. &#039;&#039;[[Journal of Evolution and Technology]]&#039;&#039;.&lt;br /&gt;
* Hibbard, Bill. &amp;quot;Ethical Artificial Intelligence&amp;quot;. 5 November 2014.&lt;br /&gt;
{{refend}}&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* [[Oliver Krüger|Krüger, Oliver]], &#039;&#039;Virtual Immortality. God, Evolution, and the Singularity in Post- and Transhumanism.&#039;&#039;, Bielefeld: transcript 2021. {{ISBN|978-3-8376-5059-4}}.&lt;br /&gt;
* [[Gary Marcus|Marcus, Gary]], &amp;quot;Am I Human?: Researchers need new ways to distinguish [[artificial intelligence]] from the natural kind&amp;quot;, &#039;&#039;[[Scientific American]]&#039;&#039;, vol. 316, no. 3 (March 2017), pp.&amp;amp;nbsp;58–63. &#039;&#039;Multiple&#039;&#039; tests of [[artificial intelligence|artificial-intelligence]] efficacy are needed because, &amp;quot;just as there is no single test of [[Athletics (physical culture)|athletic]] prowess, there cannot be one ultimate test of intelligence.&amp;quot; One such test, a &amp;quot;Construction Challenge&amp;quot;, would test perception and physical action—&amp;quot;two important elements of intelligent behavior that were entirely absent from the original [[Turing test]].&amp;quot; Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable [[disambiguation]]. &amp;quot;[V]irtually every sentence [that people generate] is [[ambiguity|ambiguous]], often in multiple ways.&amp;quot; A prominent example is known as the &amp;quot;pronoun disambiguation problem&amp;quot;: a machine has no way of determining to whom or what a [[pronoun]] in a sentence—such as &amp;quot;he&amp;quot;, &amp;quot;she&amp;quot; or &amp;quot;it&amp;quot;—refers.&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{{Spoken Wikipedia|En-Technological_singularity-article.ogg|date=2018-11-03}}&lt;br /&gt;
* [https://www.britannica.com/technology/singularity-technology singularity {{!}} technology], britannica.com&lt;br /&gt;
* [https://edoras.sdsu.edu/~vinge/misc/singularity.html The Coming Technological Singularity: How to Survive in the Post-Human Era] (on Vernor Vinge&#039;s web site, retrieved Jul 2019)&lt;br /&gt;
* [https://intelligence.org/ie-faq/ Intelligence Explosion FAQ] by the [[Machine Intelligence Research Institute]]&lt;br /&gt;
* [http://bootstrappingartificialintelligence.fr/WordPress3/ Blog on bootstrapping artificial intelligence] by [[Jacques Pitrat]]&lt;br /&gt;
* &#039;&#039;[http://hplusmagazine.com/2011/03/07/why-an-intelligence-explosion-is-probable/ Why an Intelligence Explosion is Probable]&#039;&#039; (Mar 2011)&lt;br /&gt;
* &#039;&#039;[https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec Why an Intelligence Explosion is Impossible]&#039;&#039; (Nov 2017)&lt;br /&gt;
* &#039;&#039;[https://scifilogic.com/achieving-the-technological-singularity/ How Close are We to Technological Singularity and When?]&#039;&#039;&lt;br /&gt;
* The AI Revolution: Our Immortality or Extinction – [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html Part 1] and [https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html Part 2] ([[Tim Urban]], &#039;&#039;Wait But Why,&#039;&#039; January 22/27, 2015)&lt;br /&gt;
&lt;br /&gt;
{{Existential risk from artificial intelligence}}&lt;br /&gt;
{{emerging technologies|topics=yes}}&lt;br /&gt;
{{Doomsday}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Technological Singularity}}&lt;br /&gt;
[[Category:Singularitarianism| ]]&lt;br /&gt;
[[Category:Existential risk from artificial intelligence]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;br /&gt;
[[Category:Science fiction themes]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Sam_Altman&amp;diff=12</id>
		<title>Sam Altman</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Sam_Altman&amp;diff=12"/>
		<updated>2026-04-06T12:58:42Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Use American English|date=January 2023}}&lt;br /&gt;
{{Use mdy dates|date=August 2025}}&lt;br /&gt;
{{Infobox person&lt;br /&gt;
| name               = Sam Altman&lt;br /&gt;
| image              = Sam Altman TechCrunch SF 2019 Day 2 Oct 3 (cropped 3).jpg&lt;br /&gt;
| caption            = Altman in 2019&lt;br /&gt;
| birth_name         = Samuel Harris Altman&lt;br /&gt;
| birth_date         = {{Birth date and age|1985|04|22}}&lt;br /&gt;
| birth_place        = [[Chicago]], Illinois, U.S.&lt;br /&gt;
| education          = [[Stanford University]] (dropped out)&lt;br /&gt;
| occupation         = &lt;br /&gt;
| known_for          = &lt;br /&gt;
| notable_works      = [[Loopt]]&amp;amp;nbsp;(co-founder)&lt;br /&gt;
| title              = {{bulleted list&lt;br /&gt;
| CEO of [[OpenAI]]&lt;br /&gt;
| Chairman of [[Helion Energy]]&lt;br /&gt;
| President of [[Y Combinator]] (until 2019)}}&lt;br /&gt;
| spouse             = {{marriage|Oliver Mulherin|2024}}&lt;br /&gt;
| children           = 1&lt;br /&gt;
| website            = {{URL|https://blog.samaltman.com/}}&lt;br /&gt;
| signature          = Sam altman autograph 2024.svg&lt;br /&gt;
}}&lt;br /&gt;
&#039;&#039;&#039;Samuel Harris Altman&#039;&#039;&#039; (born April 22, 1985)&amp;lt;ref&amp;gt;Hao, Karen. &amp;quot;Empire of AI&amp;quot;. [[Penguin Press]].&amp;lt;/ref&amp;gt; is an American businessman and entrepreneur who has been the [[chief executive officer]] (CEO) of the artificial intelligence research organization [[OpenAI]] since 2019.&amp;lt;ref name=&amp;quot;openailp&amp;quot;&amp;gt;[https://openai.com/blog/openai-lp/ &amp;quot;OpenAI LP&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Altman attended [[Stanford University]] for two years before dropping out and co-founding [[Loopt]], a smartphone [[geosocial networking]] service, which raised more than {{Currency|30 million|USD|linked=no|passthrough=yes}} in [[venture capital]] before being acquired by [[Green Dot Corporation]] for $43.4 million in cash.&amp;lt;ref name=&amp;quot;:4&amp;quot; /&amp;gt; In 2011, Altman joined [[Y Combinator]], a technology [[startup accelerator]] and venture capital firm, and was the company&#039;s president from 2014 to 2019.&amp;lt;ref name=&amp;quot;wapo-2023&amp;quot;&amp;gt;[https://www.washingtonpost.com/technology/2023/11/22/sam-altman-fired-y-combinator-paul-graham/ &amp;quot;Sam Altman Fired from Y Combinator by Paul Graham&amp;quot;]. &#039;&#039;[[The Washington Post]]&#039;&#039;.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
After co-founding OpenAI in 2015, Altman became the organization&#039;s CEO in 2019.&amp;lt;ref&amp;gt;Peluso, Olivia. [https://observer.com/2025/04/sam-altman-40-birthday-openai-ceo/ &amp;quot;Sam Altman Turns 40: A Look Back at the OpenAI CEO’s Unlikely Ascent&amp;quot;]. &#039;&#039;Observer&#039;&#039;. April 22, 2025.&amp;lt;/ref&amp;gt; In 2023, [[Removal of Sam Altman from OpenAI|he was ousted]] by the organization&#039;s board of directors, who cited a lack of &amp;quot;confidence in his ability to continue leading OpenAI&amp;quot; in an official post. However, the move was met with significant backlash from employees and investors, resulting in Altman&#039;s reinstatement five days later and the formation of a new board.&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having overseen the launch of [[ChatGPT]] in November 2022, he has been described as one of the leading figures of the [[AI boom]].&amp;lt;ref name=&amp;quot;Intelligencer&amp;quot;&amp;gt;Weil, Elizabeth. [https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html &amp;quot;Sam Altman Is the Oppenheimer of Our Age.&amp;quot;]. &#039;&#039;Intelligencer&#039;&#039;. September 25, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;Mickle, Tripp. [https://www.nytimes.com/2023/12/09/technology/openai-altman-inside-crisis.html &amp;quot;Inside OpenAI&#039;s Crisis Over the Future of Artificial Intelligence&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. December 9, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.wsj.com/tech/ai/artificial-the-openai-story-21587cbd &amp;quot;Artificial: The OpenAI Story&amp;quot;]. &#039;&#039;Wall Street Journal&#039;&#039;. December 10, 2023.&amp;lt;/ref&amp;gt; In 2025, Altman was named among the &amp;quot;Architects of AI&amp;quot; for &#039;&#039;[[Time (magazine)|Time]]&#039;&#039;{{&#039;s}} [[Time Person of the Year|Person of the Year]]. His net worth was estimated at {{US$|3.3 billion}} by &#039;&#039;[[Forbes]]&#039;&#039; in March 2026.&amp;lt;ref&amp;gt;[https://www.forbes.com/profile/sam-altman/ &amp;quot;Sam Altman&amp;quot;]. &#039;&#039;Forbes&#039;&#039;.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Early life and education ==&lt;br /&gt;
Altman was born in [[Chicago]], Illinois, on April 22, 1985, to a Jewish American family. His mother, Connie Gibstine, is a dermatologist, and his father, Jerry Altman, was a real estate broker.{{sfn|Hagey|2025|pp=22-23, 30-31, 33, 36}}{{Citation needed|date=April 2026}} Altman is the eldest of four siblings: he has two brothers, Max and Jack, and a sister, Ann.&amp;lt;ref name=&amp;quot;Intelligencer&amp;quot; /&amp;gt; His paternal great-grandfather was born in [[Płock]], [[History of the Jews in Poland|Poland]].{{sfn|Hagey|2025|p=23}} In 1989, the Altman family moved to Jerry&#039;s hometown of [[Clayton, Missouri]].{{sfn|Hagey|2025|pp=38-39}}&lt;br /&gt;
&lt;br /&gt;
At the age of eight, Altman received his first computer—an [[Mac (computer)#1991–1998: PowerPC transition and sales decline|Apple Macintosh]]—and began to learn how to [[Computer programming|code]] and disassemble and examine [[computer hardware]].&amp;lt;ref name=&amp;quot;esquire.com&amp;quot;&amp;gt;Junod, Tom. [https://www.esquire.com/news-politics/interviews/a30763/sam-altman-interview-2014/ &amp;quot;How Venture Capitalists Find Opportunities in the Future&amp;quot;]. &#039;&#039;[[Esquire (magazine)|Esquire]]&#039;&#039;. December 18, 2014.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Afifi-Sabet, Keumars. [https://theweek.com/news/technology/961823/sam-altman-profile-openai-ceo-leading-ai-revolution &amp;quot;Sam Altman: the OpenAI CEO leading the AI revolution&amp;quot;]. &#039;&#039;The Week&#039;&#039;.&amp;lt;/ref&amp;gt; He attended [[John Burroughs School]], a private institution in [[Ladue, Missouri]].&amp;lt;ref&amp;gt;Nguyen, Britney. [https://www.businessinsider.com/sam-altman-chatgpt-openai-ceo-career-net-worth-ycombinator-prepper-2023-1 &amp;quot;Meet Sam Altman, the OpenAI CEO who learned to code at 8 and is a doomsday prepper with a stash of guns and gold&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;. February 20, 2024.&amp;lt;/ref&amp;gt; In 2005, after studying computer science for two years at [[Stanford University]] in [[Stanford, California]], he dropped out without earning a bachelor&#039;s degree.&amp;lt;ref&amp;gt;Hagy, Paige. [https://fortune.com/2023/11/21/who-is-sam-altman-openai-career-microsoft-background-y-combinator-loopt-stanford/ &amp;quot;Sam Altman&#039;s ousting from OpenAI could lead to even greater success: &#039;You could parachute him into an island full of cannibals and come back in five years and he&#039;d be the king&#039;&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. November 21, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://ycombinator.com/people.html &amp;quot;People&amp;quot;]. &#039;&#039;[[Y Combinator (company)|Y Combinator]]&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Business career ==&lt;br /&gt;
&lt;br /&gt;
=== Early career ===&lt;br /&gt;
&lt;br /&gt;
In 2005, at the age of 19,&amp;lt;ref&amp;gt;Ankeny, Jason. [http://www.entrepreneur.com/article/244508 &amp;quot;Meet Y Combinator&#039;s Bold Whiz Kid Boss&amp;quot;]. &#039;&#039;[[Entrepreneur (magazine)|Entrepreneur]]&#039;&#039;. April 25, 2015.&amp;lt;/ref&amp;gt; Altman co-founded [[Loopt]],&amp;lt;ref&amp;gt;[http://www.loopt.com/about/company/executives &amp;quot;Executives&amp;quot;]. &#039;&#039;[[Loopt]]&#039;&#039;.&amp;lt;/ref&amp;gt; a location-based [[social networking service|social networking]] mobile application. As CEO, he raised more than $30 million in [[venture capital]] for the company, including an initial investment of $5 million from Patrick Chung of [[Xfund]] and his team at [[New Enterprise Associates]], followed by investments from [[Sequoia Capital]] and Y Combinator.&amp;lt;ref name=&amp;quot;f2&amp;quot; /&amp;gt; In March 2012, after Loopt failed to gain significant user traction, the company was acquired by the [[Green Dot Corporation]] for $43.4 million.&amp;lt;ref name=&amp;quot;:4&amp;quot;&amp;gt;Vascellaro, Jessica E. [https://blogs.wsj.com/digits/2012/03/09/startup-loopt-lands-with-green-dot/ &amp;quot;Startup Loopt Lands with Green Dot&amp;quot;]. &#039;&#039;[[The Wall Street Journal]]&#039;&#039;. March 9, 2012.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Y Combinator ===&lt;br /&gt;
In 2011, Altman became a partner at startup accelerator [[Y Combinator]] (YC), initially working on a part-time basis.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Clark, Kate. [https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ &amp;quot;Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator&amp;quot;]. &#039;&#039;[[TechCrunch]]&#039;&#039;. March 8, 2019.&amp;lt;/ref&amp;gt; In February 2014, he became president of YC.&amp;lt;ref&amp;gt;Loizos, Connie. [https://techcrunch.com/2015/11/06/garry-tan-says-goodbye-to-y-combinator/ &amp;quot;Garry Tan Says Goodbye to Y Combinator&amp;quot;]. &#039;&#039;[[TechCrunch]]&#039;&#039;. November 6, 2015.&amp;lt;/ref&amp;gt; He aimed to expand YC to fund 1,000 new companies per year and sought to broaden the types of companies funded, particularly focusing on &amp;quot;hard technology&amp;quot; startups.&amp;lt;ref&amp;gt;Chafkin, Max. [http://www.fastcompany.com/3044282/the-y-combinator-chronicles/california-dreamin &amp;quot;Y Combinator President Sam Altman is Dreaming Big&amp;quot;]. &#039;&#039;[[Fast Company]]&#039;&#039;. April 16, 2015.&amp;lt;/ref&amp;gt; In October 2015, Altman was involved in expanding YC&#039;s scope. He contributed $10 million to the initial fund of Y Combinator Research, and announced YC Continuity, a fund to invest in maturing YC companies.&amp;lt;ref&amp;gt;&amp;quot;YC Research&amp;quot;. October 7, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://blog.ycombinator.com/yc-continuity-fund &amp;quot;YC Continuity&amp;quot;]. Y Combinator. October 15, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://venturebeat.com/2015/10/15/y-combinator-raises-700m-to-keep-funding-yc-startups-as-they-mature/ &amp;quot;Y Combinator raises $700M to keep funding YC startups as they mature&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. October 15, 2015.&amp;lt;/ref&amp;gt; In September 2016, Altman&#039;s role at YC expanded to president of YC Group, which included Y Combinator and other units.&amp;lt;ref&amp;gt;Altman, Sam. [https://blog.ycombinator.com/yc-changes &amp;quot;YC Changes&amp;quot;]. &#039;&#039;[[Y Combinator]]&#039;&#039;. September 13, 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
YC moved its headquarters to [[San Francisco]] in 2019.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; In March, Altman and YC began to falsely&amp;lt;ref name=&amp;quot;f1&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;f2&amp;quot;/&amp;gt; claim that Altman had transitioned from president to a less hands-on role as [[chairman of the board]], allowing him to focus on OpenAI.&amp;lt;ref&amp;gt;Loizos, Connie. [https://techcrunch.com/2019/03/09/did-sam-altman-make-yc-better-or-worse/ &amp;quot;Did Sam Altman make YC better or worse?&amp;quot;]. &#039;&#039;[[TechCrunch]]&#039;&#039;. March 9, 2019.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;blog.yc&amp;quot;&amp;gt;[https://blog.ycombinator.com/updates-from-yc/ &amp;quot;Updates from YC&amp;quot;].&amp;lt;/ref&amp;gt; However, Y Combinator partners never approved his appointment.&amp;lt;ref name=&amp;quot;f1&amp;quot;&amp;gt;Bloomberg, Sara. [https://www.bizjournals.com/sanfrancisco/inno/stories/news/2024/04/15/sam-altman-y-combinator-board-chair.html &amp;quot;Sam Altman is not on YC&#039;s board. So why claim to be its chair?&amp;quot;]. &#039;&#039;Biz Journals&#039;&#039;. April 15, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;f2&amp;quot;&amp;gt;Seetharaman, Deepa. [https://www.wsj.com/tech/ai/sam-altman-openai-protected-by-silicon-valley-friends-f3efcf68 &amp;quot;Sam Altman&#039;s Knack for Dodging Bullets—With a Little Help From Bigshot Friends&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. December 24, 2023.&amp;lt;/ref&amp;gt; In early 2020, Altman and YC terminated their relationship.&amp;lt;ref name=&amp;quot;wapo-2020&amp;quot;&amp;gt;McGregor, Jena. [https://www.washingtonpost.com/technology/2020/02/21/sam-altman-steps-down-y-combinator/ &amp;quot;Y Combinator president Sam Altman steps down to focus on OpenAI&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. February 21, 2020. {{Dead link|date=April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Investor ===&lt;br /&gt;
[[File:Human connection is still important to people Sam Altman.webm|thumb|Altman at the 2024 [[World Economic Forum]] ]]&lt;br /&gt;
As of June 2024, Altman&#039;s investment portfolio includes stakes in over 400 companies, valued at around $2.8 billion. Some of these investments intersect with companies doing business with OpenAI, which has raised questions about potential conflicts of interest. OpenAI&#039;s chairman of the board, [[Bret Taylor]], maintained that Altman has been transparent about his investments.&amp;lt;ref&amp;gt;Jin, Berber; Dotan, Tom; Hagey, Keach. [https://www.wsj.com/tech/ai/openai-sam-altman-investments-004fc785 &amp;quot;The Opaque Investment Empire Making OpenAI&#039;s Sam Altman Rich&amp;quot;]. &#039;&#039;WSJ&#039;&#039;. June 3, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2012, Altman co-founded [[Hydrazine Capital]] with his brother, Jack Altman.&amp;lt;ref&amp;gt;Hydrazine Capital GP, LLC. [https://reports.adviserinfo.sec.gov/reports/ADV/165781/PDF/165781.pdf &amp;quot;Form ADV - Uniform Application for Investment Adviser Registration and Report by Exempt Reporting Advisers.&amp;quot;]. &#039;&#039;[[Securities and Exchange Commission]]&#039;&#039;. February 14, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.bloomberg.com/profile/company/1003986D:US &amp;quot;Hydrazine Capital LP - Company Profile and News&amp;quot;]. &#039;&#039;[[Bloomberg L.P.]]&#039;&#039;.&amp;lt;/ref&amp;gt; The initial $21 million fund included a large part of the $5 million he got from selling Loopt, but most came from [[Peter Thiel]], his mentor and main backer in [[Silicon Valley]]. Altman invested 75 percent of the money in Y Combinator companies.&amp;lt;ref&amp;gt;Dwoskin, Elizabeth. [https://www.washingtonpost.com/technology/2023/12/23/sam-altman-openai-peter-thiel-silicon-valley/ &amp;quot;&#039;King of the cannibals&#039;: How Sam Altman took over Silicon Valley&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. 23 December 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Nguyen, Britney. [https://www.businessinsider.com/sam-altman-chatgpt-openai-ceo-career-net-worth-ycombinator-prepper-2023-1#after-loopt-altman-founded-a-venture-fund-called-hydrazine-capital-and-raised-21-million-5 &amp;quot;The rise of OpenAI&#039;s billionaire CEO, Sam Altman&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;. 23 May 2025.&amp;lt;/ref&amp;gt; In 2023, when Hydrazine launched its fourth fund, the [[University of Michigan]] [[university endowment|endowment]] was the only outside investor. Its investments in Hydrazine were the largest the endowment had made.&amp;lt;ref&amp;gt;Matthews, Jessica. [https://fortune.com/2023/12/20/university-of-michigan-wrote-sam-altman-venture-capital-firm-75-million-check/ &amp;quot;The University of Michigan wrote Sam Altman&#039;s venture capital firm a $75M check earlier this year for a new fund&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt; Altman debuted on the &#039;&#039;[[Bloomberg Billionaires Index]]&#039;&#039; in March 2024 with an estimated net worth of $2 billion, primarily from his venture capital funds related to Hydrazine Capital.&amp;lt;ref&amp;gt;Massa, Annie. [https://www.bloomberg.com/news/articles/2024-03-01/sam-altman-is-a-billionaire-thanks-to-vc-funds-startups &amp;quot;Sam Altman Is Worth $2 Billion—That Doesn&#039;t Include OpenAI&amp;quot;]. Bloomberg News. March 1, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
[[File:Nancy Pelosi GLAAD 2017 2 (cropped).jpg|thumb|[[Nancy Pelosi]] presenting Altman with the Ric Weiland Award in 2017]]&lt;br /&gt;
Altman was invited to attend the [[Bilderberg Meeting]] in 2016,&amp;lt;ref&amp;gt;[https://time.com/4362872/bilderberg-group-meetings-2016-conspiracy-theories/ &amp;quot;The World&#039;s Most Powerful and Secret Group, Explained&amp;quot;]. &#039;&#039;Time&#039;&#039;. June 9, 2016.&amp;lt;/ref&amp;gt; [[2022 Bilderberg Conference|2022]],&amp;lt;ref&amp;gt;[https://www.bilderbergmeetings.org/meetings/meeting-2022/participants-2022 &amp;quot;Participants 2022&amp;quot;]. &#039;&#039;www.bilderbergmeetings.org&#039;&#039;.&amp;lt;/ref&amp;gt; and [[2023 Bilderberg Conference|2023]].&amp;lt;ref&amp;gt;Gilchrist, Karen. [https://www.cnbc.com/2023/05/18/bilderberg-openai-microsoft-google-join-ai-talks-at-secretive-meeting.html &amp;quot;A secretive annual meeting attended by the world&#039;s elite has A.I. top of the agenda&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. May 18, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Skelton, Charlie. [https://www.theguardian.com/world/2023/may/20/bilderberg-meeting-group-lisbon-kissinger &amp;quot;At Bilderberg&#039;s bigwig bash two things are guaranteed: Kissinger and secrecy&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. May 20, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Other investments ====&lt;br /&gt;
Altman has several other investments in companies including [[Humane Inc.|Humane]], which was developing a wearable AI-powered device; Retro Biosciences, a research company aiming to extend human life by 10 years;&amp;lt;ref name=&amp;quot;longevity&amp;quot;&amp;gt;[https://www.technologyreview.com/2023/03/08/1069523/sam-altman-investment-180-million-retro-biosciences-longevity-death/ &amp;quot;Sam Altman invested $180 million into a company trying to delay death&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt; [[Boom Technology]], a [[supersonic]] airline developer; [[Cruise (autonomous vehicle)|Cruise]], a self-driving car company later acquired by [[General Motors]]; and Helion Energy, an American fusion research company.&amp;lt;ref name=WaPoDeVynck&amp;gt;De Vynck, Gerrit. [https://www.washingtonpost.com/technology/2023/12/23/open-ai-sam-altman-investments-companies/ &amp;quot;OpenAI founder Sam Altman&#039;s sprawling network of investments&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. December 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
During the [[COVID-19 pandemic]], Altman helped fund and create Project Covalence to help researchers rapidly launch clinical trials in partnership with TrialSpark, a clinical trial startup.&amp;lt;ref&amp;gt;Herper, Matthew. [https://www.statnews.com/2020/06/16/tech-investor-covid-trials/ &amp;quot;Teaming tech and pharma, effort seeks to speed Covid-19 clinical trials&amp;quot;]. &#039;&#039;[[Stat (website)|Stat]]&#039;&#039;. June 16, 2020.&amp;lt;/ref&amp;gt; During the depositor run on [[Silicon Valley Bank]] in mid-March 2023, Altman provided capital to multiple [[Startup company|startups]].&amp;lt;ref&amp;gt;Hu, Krystal. [https://www.reuters.com/business/tech-execs-race-save-startups-extinction-after-svb-collapse-2023-03-12/ &amp;quot;Tech execs race to save startups from &#039;extinction&#039; after SVB collapse&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. March 12, 2023.&amp;lt;/ref&amp;gt; Altman invests in technology startups and nuclear energy companies. Some of his portfolio companies include [[Airbnb]], [[Stripe (company)|Stripe]] and [[Joe Betts-LaCroix#Biotechnology &amp;amp; biomedicine|Retro Biosciences]].&amp;lt;ref name=&amp;quot;longevity&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Along with Peter Thiel, Altman was an early seed investor in Minicircle, &amp;quot;a longevity biotech company focused on developing gene therapies to extend human lifespans.&amp;quot;&amp;lt;ref&amp;gt;Haskins, Caroline. [https://www.wired.com/story/startup-nations-donald-trump-legislation/ &amp;quot;&#039;Startup Nation&#039; Groups Say They&#039;re Meeting Trump Officials to Push for Deregulated &#039;Freedom Cities&#039;&amp;quot;]. &#039;&#039;Wired&#039;&#039;.&amp;lt;/ref&amp;gt; He also invested in charter city projects [[Próspera]] and [[Praxis (proposed city)|Praxis]],&amp;lt;ref&amp;gt;Bernstein, Joseph. [https://www.nytimes.com/2023/12/12/style/praxis-city-dryden-brown.html &amp;quot;Who Would Give This Guy Millions to Build His Own Utopia?&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. December 12, 2023.&amp;lt;/ref&amp;gt; which have received additional financial support from author and former [[Coinbase]] CTO [[Balaji Srinivasan]].&amp;lt;ref&amp;gt;[https://www.lemonde.fr/en/pixels/article/2023/12/04/from-praxis-to-prospera-silicon-valley-is-longing-to-break-free-from-countries-worldwide_6309767_13.html &amp;quot;From Praxis to Prospera, Silicon Valley longs to break free&amp;quot;]. December 4, 2023.&amp;lt;/ref&amp;gt; Both cities have been linked by various publications and journalists to the Network State movement.&amp;lt;ref&amp;gt;Ropek, Lucas. [https://gizmodo.com/worst-new-trend-of-2024-techno-colonialism-and-the-network-state-movement-2000525617 &amp;quot;Worst New Trend of 2024: Techno-Colonialism and the Network State Movement&amp;quot;]. &#039;&#039;Gizmodo&#039;&#039;. December 27, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Reddit ====&lt;br /&gt;
For eight days in 2014, Altman was the CEO of [[Reddit]], a [[social media]] company, after CEO [[Yishan Wong]] resigned.&amp;lt;ref&amp;gt;Acres, Tom. [https://news.sky.com/story/who-is-sam-altman-the-openai-boss-and-chatgpt-guru-who-is-now-one-of-ais-biggest-players-12898698 &amp;quot;Who is Sam Altman? The OpenAI boss and ChatGPT guru who became one of AI&#039;s biggest players&amp;quot;]. Sky News. November 22, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://blog.samaltman.com/a-new-team-at-reddit &amp;quot;A New Team At Reddit&amp;quot;]. Sam Altman. November 13, 2014.&amp;lt;/ref&amp;gt; On July 10, 2015, he announced the return of [[Steve Huffman]] as CEO.&amp;lt;ref&amp;gt;Robertson, Adi. [https://www.theverge.com/2015/7/10/8931017/reddit-ceo-ellen-pao-steps-down &amp;quot;Interim Reddit CEO Ellen Pao replaced by company co-founder Steve Huffman&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. July 10, 2015.&amp;lt;/ref&amp;gt; He remained on its board until 2022.&amp;lt;ref name=&amp;quot;fortune240222&amp;quot;&amp;gt;Robison, Kylie. [https://fortune.com/2024/02/22/sam-altman-third-largest-reddit-shareholder-ipo/ &amp;quot;Sam Altman is set to be one of the biggest winners in Reddit&#039;s IPO, with a stake that could be worth $435 million&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. February 22, 2024.&amp;lt;/ref&amp;gt; Altman invested in multiple rounds of funding for Reddit (in 2014, 2015, and 2021).&amp;lt;ref name=&amp;quot;fortune240222&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Novet, Jordan. [https://www.cnbc.com/2024/02/22/openai-ceo-sam-altman-stands-to-net-millions-as-reddit-goes-public.html &amp;quot;OpenAI CEO Sam Altman stands to net millions as Reddit goes public&amp;quot;]. CNBC. February 22, 2024.&amp;lt;/ref&amp;gt; Prior to Reddit&#039;s [[initial public offering]] in 2024, Altman was listed as its third-largest shareholder, with around 9% ownership.&amp;lt;ref&amp;gt;Ghaffary, Shirin. [https://www.bloomberg.com/news/articles/2024-02-22/openai-s-altman-listed-as-major-reddit-shareholder-in-ipo-filing &amp;quot;OpenAI&#039;s Altman Listed as Major Reddit Shareholder in IPO Filing&amp;quot;]. Bloomberg News. February 22, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Worldcoin ====&lt;br /&gt;
[[File:SlavaBlazerPhotography-31.jpg|thumb|Orb-shaped iris scanners on display]]&lt;br /&gt;
In 2019, Altman co-founded the for-profit company Tools For Humanity.&amp;lt;ref name=&amp;quot;techFound&amp;quot;&amp;gt;Melinek, Jacquelyn. [https://techcrunch.com/podcast/sam-altmans-crypto-project-worldcoin-got-more-coin-in-latest-115m-raise/ &amp;quot;Sam Altman&#039;s crypto project Worldcoin got more coin in latest $115M raise&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. March 29, 2024.&amp;lt;/ref&amp;gt; The company promoted the [[World (blockchain)|Worldcoin]] [[cryptocurrency]] and eye-scanning systems to provide [[proof of personhood]] and authentication.&amp;lt;ref&amp;gt;[https://www.wired.com/story/get-free-crypto-orb-scans-eye/ &amp;quot;You Can Get This Free Crypto—If the &#039;Orb&#039; Scans Your Eye&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. October 21, 2021.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Hart, Robert. [https://www.forbes.com/sites/roberthart/2023/07/24/what-is-worldcoin-heres-what-to-know-about-the-eyeball-scanning-crypto-project-launched-by-openais-sam-altman/ &amp;quot;What Is Worldcoin? Here&#039;s What To Know About The Eyeball-Scanning Crypto Project Launched By OpenAI&#039;s Sam Altman&amp;quot;]. &#039;&#039;[[Forbes]]&#039;&#039;. July 24, 2023.&amp;lt;/ref&amp;gt; However, it has engaged in deceptive marketing practices to drive sign-ups.&amp;lt;ref&amp;gt;Nieva, Richard. [https://www.buzzfeednews.com/article/richardnieva/worldcoin-crypto-eyeball-scanning-orb-problems &amp;quot;Worldcoin Promised Free Crypto If They Scanned Their Eyeballs With &amp;quot;The Orb.&amp;quot; Now They Feel Robbed.&amp;quot;]. &#039;&#039;BuzzFeed News&#039;&#039;. April 21, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Guo, Eileen. [https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3/ &amp;quot;Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. April 6, 2022.&amp;lt;/ref&amp;gt; By 2023, Tools For Humanity had scanned two million people&#039;s eyes and raised $250 million from several investors, including [[Andreessen Horowitz]] and [[Sam Bankman-Fried]].&amp;lt;ref name=ArsT_2023-07-24&amp;gt;Hammond, George. [https://arstechnica.com/tech-policy/2023/07/ready-for-your-eye-scan-worldcoin-launches-but-not-quite-worldwide/ &amp;quot;Ready for your eye scan? Worldcoin launches—but not quite worldwide&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. July 24, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;techFound&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Currie, Richard. [https://www.theregister.com/2023/05/16/worldcoin_fundraising/ &amp;quot;Sam Altman rattles tin for Worldcoin crypto startup&amp;quot;]. &#039;&#039;The Register&#039;&#039;. May 16, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Kenya was one of the first countries in which Worldcoin operated. The promise of free money led to rapid growth in Kenya until Worldcoin promotion was paused by regulators.&amp;lt;ref&amp;gt;Njanja, Annie. [https://techcrunch.com/2023/08/02/kenya-suspends-worldcoin-scans-over-security-privacy-and-financial-concerns &amp;quot;Kenya suspends Worldcoin scans over security, privacy and financial concerns&amp;quot;]. TechCrunch. August 2, 2023.&amp;lt;/ref&amp;gt; Citing legal concerns over [[biometrics|biometric data]] privacy and potential fraud, regulators in France, the United Kingdom, Bavaria, South Korea, Spain, Portugal, and Hong Kong have investigated or suspended Worldcoin.&amp;lt;ref&amp;gt;&lt;br /&gt;
; Sources for investigation:&lt;br /&gt;
* Howcroft, Elizabeth. [https://www.reuters.com/technology/frances-privacy-watchdog-says-worldcoin-legality-seems-questionable-2023-07-28/ &amp;quot;France&#039;s watchdog questions legality of Worldcoin biometric data collection&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 31, 2023.&lt;br /&gt;
* Howcroft, Elizabeth. [https://www.reuters.com/technology/uk-data-watchdog-make-enquiries-worldcoin-crypto-project-2023-07-25/ &amp;quot;UK data watchdog to make enquiries about Worldcoin crypto project&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 25, 2023.&lt;br /&gt;
* Howcroft, Elizabeth. [https://www.reuters.com/technology/german-data-watchdog-probing-worldcoin-crypto-project-official-says-2023-07-31/ &amp;quot;German data watchdog probing Worldcoin crypto project, official says&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 31, 2023.&lt;br /&gt;
* Atkinson, Sophie. [https://readwrite.com/south-korea-launches-investigation-into-worldcoin/ &amp;quot;South Korea launches investigation into Worldcoin&amp;quot;]. &#039;&#039;ReadWrite&#039;&#039;. March 4, 2024.&lt;br /&gt;
; Sources for suspension:&lt;br /&gt;
* Njanja, Annie. [https://techcrunch.com/2023/08/02/kenya-suspends-worldcoin-scans-over-security-privacy-and-financial-concerns &amp;quot;Kenya suspends Worldcoin scans over security, privacy and financial concerns&amp;quot;]. TechCrunch. August 2, 2023.&lt;br /&gt;
* Roth, Emma. [https://www.theverge.com/2023/8/2/23817147/kenya-worldcoin-suspended-sam-altman-eyeball-scanning &amp;quot;Kenya suspends Sam Altman&#039;s eyeball-scanning crypto project&amp;quot;]. The Verge. August 2, 2023.&lt;br /&gt;
* Njanja, Annie. [https://techcrunch.com/2023/08/15/worldcoin-in-kenya/ &amp;quot;Worldcoin ignored initial order to stop iris scans in Kenya, records show&amp;quot;]. TechCrunch. August 15, 2023.&lt;br /&gt;
* [https://www.reuters.com/technology/hong-kong-regulator-directs-worldcoin-cease-operations-citing-privacy-concerns-2024-05-22/ &amp;quot;Hong Kong regulator directs Worldcoin to cease operations citing privacy concerns&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. May 23, 2024.&lt;br /&gt;
* [https://www.ft.com/content/204c1c81-7a6f-4a6a-907e-8782b9d1bed2 &amp;quot;Spain blocks Sam Altman&#039;s eyeball-scanning venture Worldcoin&amp;quot;]. &#039;&#039;www.ft.com&#039;&#039;.&lt;br /&gt;
* [https://www.reuters.com/markets/currencies/spain-blocks-sam-altmans-eyeball-scanning-venture-worldcoin-ft-reports-2024-03-06/ &amp;quot;Spain temporarily blocks Sam Altman&#039;s eyeball-scanning venture Worldcoin&amp;quot;]. &#039;&#039;www.reuters.com&#039;&#039;. March 6, 2024.&lt;br /&gt;
* Howcroft, Elizabeth. [https://www.reuters.com/markets/currencies/sam-altmans-worldcoin-ordered-stop-data-collection-portugal-2024-03-26/ &amp;quot;Portugal orders Sam Altman&#039;s Worldcoin to halt data collection&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. March 26, 2024.&lt;br /&gt;
&amp;lt;/ref&amp;gt; Worldcoin has never been offered in the United States, and the company limits its disclosures due to regulatory scrutiny.&amp;lt;ref name=ArsT_2023-07-24/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Energy investments ====&lt;br /&gt;
Altman is chairman of the board for [[Helion Energy]], a company focused on developing [[nuclear fusion]].&amp;lt;ref&amp;gt;Hiller, Jennifer. [https://www.wsj.com/articles/tech-billionaires-bet-on-fusion-as-holy-grail-for-business-9a48a2ac &amp;quot;Tech Billionaires Bet on Fusion as Holy Grail for Business&amp;quot;]. &#039;&#039;[[Wall Street Journal]]&#039;&#039;. April 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mui, Christine. [https://www.politico.com/newsletters/digital-future-daily/2024/01/22/silicon-valley-fusion-crush-sam-altman-davos-00137031 &amp;quot;Silicon Valley&#039;s crush on fusion&amp;quot;]. &#039;&#039;[[Politico]]&#039;&#039;. January 22, 2024.&amp;lt;/ref&amp;gt; He also invested in [[Exowatt]], a solar energy startup that aims to provide clean energy to data centers.&amp;lt;ref&amp;gt;Ramkumar, Amrith. [https://www.wsj.com/tech/ai/sam-altman-investment-exowatt-energy-startup-ai-data-centers-eeeca766 &amp;quot;Exclusive&amp;quot;]. &#039;&#039;WSJ&#039;&#039;. April 22, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2021, Altman and investment banker Michael Klein co-founded AltC Acquisition Corp, a [[special-purpose acquisition company]] (SPAC), where he was also the CEO.&amp;lt;ref name=&amp;quot;reuters spac&amp;quot;&amp;gt;[https://www.reuters.com/article/idUSKBN2B71D2/ &amp;quot;Y Combinator&#039;s Sam Altman teams up with Michael Klein to launch SPAC looking to raise $1 billion&amp;quot;]. Reuters. March 15, 2021.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://altcacquisitioncorp.com/team/sam-altman/ &amp;quot;AltC Acquisition Corp - Sam Altman&amp;quot;]. &#039;&#039;AltC Acquisition Corp&#039;&#039;.&amp;lt;/ref&amp;gt; In May 2024, Oklo Inc. completed a merger with the SPAC to become a public company. Altman remained as chairman of Oklo following the merger&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/05/10/sam-altman-takes-nuclear-startup-oklo-public-to-power-ai-ambitions.html &amp;quot;Sam Altman&#039;s nuclear energy company Oklo plunges 54% in NYSE debut&amp;quot;]. [[CNBC]]. May 10, 2024.&amp;lt;/ref&amp;gt; until stepping down in April 2025 to &amp;quot;avoid conflict of interest&amp;quot;&amp;lt;ref&amp;gt;Muir, Martha. [https://www.ft.com/content/a511bae0-d19f-4ebd-9520-69d3f89d8556 &amp;quot;Sam Altman steps down as chair of nuclear power supplier Oklo to avoid conflict of interest&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. April 22, 2025.&amp;lt;/ref&amp;gt; and &amp;quot;open up opportunities for future deals between OpenAI and Oklo.&amp;quot;&amp;lt;ref&amp;gt;Hamilton, Katherine. [https://www.wsj.com/tech/ai/openai-ceo-sam-altman-to-resign-as-oklo-chairman-10a53edc &amp;quot;OpenAI CEO Sam Altman to Resign as Oklo Chairman&amp;quot;]. &#039;&#039;WSJ&#039;&#039;. April 22, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== OpenAI ===&lt;br /&gt;
==== OpenAI begins ====&lt;br /&gt;
OpenAI was initially founded as a [[nonprofit organization]] by Altman, [[Greg Brockman]], [[Elon Musk]], [[Jessica Livingston]], Peter Thiel, [[Microsoft]], [[Amazon Web Services]], [[Infosys]], and YC{{nbsp}}Research. When OpenAI launched in 2015, it had raised pledges for $1{{nbsp}}billion.&amp;lt;ref&amp;gt;Olanoff, Drew. [https://techcrunch.com/2015/12/11/non-profit-openai-launches-with-backing-from-elon-musk-and-sam-altman/ &amp;quot;Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman&amp;quot;]. &#039;&#039;[[TechCrunch]]&#039;&#039;. December 11, 2015.&amp;lt;/ref&amp;gt; In 2019, OpenAI stated that $130 million of the pledged funds had been collected.&amp;lt;ref name=&amp;quot;St&amp;quot;&amp;gt;[https://openai.com/our-structure &amp;quot;Our structure&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. June 28, 2023.&amp;lt;/ref&amp;gt; [[TechCrunch]] reported that YC Research never contributed any of its pledged funds.&amp;lt;ref&amp;gt;Harris, Mark. [https://techcrunch.com/2023/05/17/elon-musk-used-to-say-he-put-100m-in-openai-but-now-its-50m-here-are-the-receipts/ &amp;quot;Elon Musk used to say he put $100M in OpenAI, but now it&#039;s $50M: Here are the receipts&amp;quot;]. &#039;&#039;[[TechCrunch]]&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Altman said in 2015 that they were partly motivated by concerns about [[AI safety]] and [[existential risk from artificial general intelligence]].&amp;lt;ref name=&amp;quot;csmonitor&amp;quot;&amp;gt;Lewontin, Max. [https://www.csmonitor.com/Technology/2015/1214/Open-AI-Effort-to-democratize-artificial-intelligence-research &amp;quot;Open AI: Effort to democratize artificial intelligence research?&amp;quot;]. &#039;&#039;[[The Christian Science Monitor]]&#039;&#039;. December 14, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref name=wired_inside&amp;gt;[https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ &amp;quot;Inside OpenAI, Elon Musk&#039;s Wild Plan to Set Artificial Intelligence Free&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. April 27, 2016.&amp;lt;/ref&amp;gt; Altman emphasized the importance of [[open-source]] development and of building AI for the collective good of humanity rather than for financial stakeholders, framing this as a way to mitigate risk. He described it as a decades-long project that would eventually surpass human intelligence.&amp;lt;ref name=&amp;quot;wired_far_more&amp;quot;&amp;gt;Metz, Cade. [https://www.wired.com/2015/12/elon-musks-billion-dollar-ai-plan-is-about-far-more-than-saving-the-world/ &amp;quot;Elon Musk&#039;s Billion-Dollar AI Plan Is About Far More Than Saving the World&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. December 15, 2015.&amp;lt;/ref&amp;gt; [[Walter Isaacson]] opined that Altman had &amp;quot;Musk-like intensity&amp;quot;.&amp;lt;ref&amp;gt;Isaacson, Walter. [https://books.google.com/books?id=6_mzEAAAQBAJ &amp;quot;Elon Musk&amp;quot;]. [[Simon and Schuster]]. September 13, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Deepening investment in OpenAI ====&lt;br /&gt;
In 2018, Musk, a long-time personal friend of Altman&#039;s, resigned from his seat on the board of directors, citing &amp;quot;a potential future [[Conflict of interest|conflict [of interest]]]&amp;quot; with his role as CEO of [[Tesla, Inc.|Tesla]] due to [[Tesla Autopilot|Tesla&#039;s AI development for self-driving cars]].&amp;lt;ref name=&amp;quot;musk_resigns&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2018/2/21/17036214/elon-musk-openai-ai-safety-leaves-board &amp;quot;Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. February 21, 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; In February 2024, Musk sued OpenAI and Altman, alleging they broke the company&#039;s founding agreement by prioritizing profit over benefit to humanity.&amp;lt;ref&amp;gt;Vipers, Gareth. [https://www.wsj.com/tech/ai/elon-musk-sues-openai-sam-altman-for-breach-of-contract-0864979d &amp;quot;Elon Musk Sues OpenAI, Sam Altman, Saying They Abandoned Founding Mission&amp;quot;]. &#039;&#039;Wall Street Journal&#039;&#039;. March 1, 2024.&amp;lt;/ref&amp;gt; OpenAI executives, including Altman, dismissed these claims in a blog post.&amp;lt;ref&amp;gt;[https://openai.com/blog/openai-elon-musk &amp;quot;OpenAI and Elon Musk&amp;quot;]. &#039;&#039;Open AI&#039;&#039;. March 5, 2024.&amp;lt;/ref&amp;gt; The post said that the startup received only $45{{nbsp}}million from Musk instead of his pledged $1{{nbsp}}billion, and that Musk proposed to merge it with Tesla.&amp;lt;ref&amp;gt;Singh, Manish. [https://techcrunch.com/2024/03/05/openai-response-elon-musk-lawsuit &amp;quot;OpenAI says Musk only ever contributed $45 million, wanted to merge with Tesla or take control&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. March 6, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2019, Altman left Y Combinator to focus full time as CEO of OpenAI.&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;De Vynck, Gerrit. [https://www.washingtonpost.com/technology/2023/04/09/sam-altman-openai-chatgpt/ &amp;quot;The man who unleashed AI on an unsuspecting Silicon Valley&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. April 9, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;openailp&amp;quot; /&amp;gt; OpenAI planned to spend $1 billion &amp;quot;within five years, and possibly much faster&amp;quot;.&amp;lt;ref&amp;gt;Murgia, Madhumita. [https://www.ft.com/content/d4280856-b92d-11e9-8a88-aa6628ac896c &amp;quot;DeepMind runs up higher losses and debts in race for AI&amp;quot;]. &#039;&#039;[[Financial Times]]&#039;&#039;. August 7, 2019.&amp;lt;/ref&amp;gt; Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need &amp;quot;more capital than any non-profit has ever raised&amp;quot; to achieve [[artificial general intelligence]] (AGI).&amp;lt;ref&amp;gt;[https://fortune.com/2019/10/03/openai-will-need-more-capital-than-any-non-profit-has-ever-raised/ &amp;quot;OpenAI Will Need More Capital Than Any Non-Profit Has Ever Raised&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Release of ChatGPT ====&lt;br /&gt;
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, a new AI [[chatbot]] based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.&amp;lt;ref&amp;gt;Roose, Kevin. [https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html &amp;quot;The Brilliance and Weirdness of ChatGPT&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; According to anonymous sources cited by [[Reuters]] in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.&amp;lt;ref&amp;gt;Dastin, Jeffrey. [https://www.reuters.com/business/chatgpt-owner-openai-projects-1-billion-revenue-by-2024-sources-2022-12-15/ &amp;quot;Exclusive: ChatGPT owner OpenAI projects $1 billion in revenue by 2024&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. December 15, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Chatgpt usage.svg|thumb|The percentage of US adults who have ever used ChatGPT, according to [[Pew Research]]. As of March 2025, 58% of those under 30 had used the chatbot.&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Vogels, Emily A.. [https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/ &amp;quot;A majority of Americans have heard of ChatGPT, but few have tried it themselves&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. May 24, 2023.&lt;br /&gt;
* Park, Eugenie. [https://www.pewresearch.org/short-reads/2023/08/28/most-americans-havent-used-chatgpt-few-think-it-will-have-a-major-impact-on-their-job/ &amp;quot;Most Americans haven&#039;t used ChatGPT; few think it will have a major impact on their job&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. August 28, 2023.&lt;br /&gt;
* McClain, Colleen. [https://www.pewresearch.org/short-reads/2024/03/26/americans-use-of-chatgpt-is-ticking-up-but-few-trust-its-election-information/ &amp;quot;Americans&#039; use of ChatGPT is ticking up, but few trust its election information&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. March 26, 2024.&lt;br /&gt;
* Sidoti, Olivia. [https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/ &amp;quot;34% of U.S. adults have used ChatGPT, about double the share in 2023&amp;quot;]. &#039;&#039;Pew Research&#039;&#039;. June 25, 2025.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
Altman testified before the [[United States Senate Judiciary Subcommittee on Privacy, Technology and the Law]] on May 16, 2023, about issues of AI oversight.&amp;lt;ref&amp;gt;[https://www.pbs.org/newshour/politics/watch-live-openai-ceo-sam-altman-testifies-before-senate-judiciary-committee &amp;quot;WATCH: OpenAI CEO Sam Altman testifies before Senate Judiciary Committee&amp;quot;]. &#039;&#039;PBS NewsHour&#039;&#039;. May 15, 2023.&amp;lt;/ref&amp;gt; After the success of ChatGPT, Altman made a world tour in May 2023, during which he visited 22 countries and met multiple leaders and diplomats, including British prime minister [[Rishi Sunak]], French president [[Emmanuel Macron]], Spanish prime minister [[Pedro Sánchez]], German chancellor [[Olaf Scholz]], Indian prime minister [[Narendra Modi]], South Korean president [[Yoon Suk-yeol]], Israeli president [[Isaac Herzog]], and [[European Commission]] president [[Ursula von der Leyen]].&amp;lt;ref name=&amp;quot;Intelligencer&amp;quot; /&amp;gt; In 2023, Altman was named one of the [[Time 100|100 most influential people in the world]] by [[Time (magazine)|&#039;&#039;Time&#039;&#039;]] magazine.&amp;lt;ref&amp;gt;[https://time.com/collection/100-most-influential-people-2023/ &amp;quot;Time 100&amp;quot;]. &#039;&#039;[[Time (magazine)|Time]]&#039;&#039;. April 13, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Sam Altman speaking at TED.jpg|thumb|Altman at [[TED (conference)|TED]] in 2025]]The emergence of the Chinese AI company [[DeepSeek]] led major Chinese tech firms to embrace an [[open-source]] strategy, intensifying competition with OpenAI. Altman acknowledged the uncertainty regarding U.S. government approval for AI cooperation with China, but emphasized the importance of fostering dialogue between technological leaders in both nations.&amp;lt;ref&amp;gt;[https://www.scmp.com/tech/big-tech/article/3298396/openai-keen-work-china-ceo-sam-altman-says-deepseek-rattles-tech-market &amp;quot;OpenAI CEO discusses AI collaboration and regulatory challenges&amp;quot;]. &#039;&#039;South China Morning Post&#039;&#039;. February 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Removal and reinstatement as OpenAI CEO ====&lt;br /&gt;
&#039;&#039;Main article: [[Removal of Sam Altman from OpenAI]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On November 17, 2023, OpenAI&#039;s board, composed of researcher [[Helen Toner]], [[Quora]] CEO [[Adam D&#039;Angelo]], AI governance advocate Tasha McCauley, and, most prominently in the firing, OpenAI co-founder and chief scientist [[Ilya Sutskever]], announced that it had decided to remove Altman as CEO and Greg Brockman from the board; both were co-founders.&amp;lt;ref name=&amp;quot;web.archive.org&amp;quot;&amp;gt;Difeliciantonio, Chase. [https://www.sfchronicle.com/bayarea/article/sam-altman-fired-openai-candid-board-18499330.php &amp;quot;Sam Altman pushed out from OpenAI for not being &#039;candid&#039; with board&amp;quot;]. &#039;&#039;San Francisco Chronicle&#039;&#039;. January 12, 2024.&amp;lt;/ref&amp;gt; In a public post on the OpenAI blog, the board said that Altman &amp;quot;was not consistently candid in his communications&amp;quot;.&amp;lt;ref&amp;gt;[https://openai.com/blog/openai-announces-leadership-transition &amp;quot;OpenAI announces leadership transition&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;web.archive.org&amp;quot; /&amp;gt; In response, Brockman resigned from his role as President of OpenAI.&amp;lt;ref&amp;gt;Peters, Jay. [https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired &amp;quot;Sam Altman fired as CEO of OpenAI&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 17, 2023.&amp;lt;/ref&amp;gt; The day after Altman was removed, the board discussed bringing him back to OpenAI.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;Das, Shanti. [https://www.theguardian.com/technology/2023/nov/18/earthquake-at-chatgpt-developer-as-senior-staff-quit-after-sacking-of-boss-sam-altman &amp;quot;Sam Altman &#039;was working on new venture&#039; before sacking from OpenAI&amp;quot;]. &#039;&#039;The Observer&#039;&#039;. November 18, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 20, Microsoft CEO [[Satya Nadella]] announced that Altman would be joining Microsoft to lead a new advanced AI research team.&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/microsoft-ceo-says-sam-altman-will-be-joining-microsoft-2023-11-20/ &amp;quot;Microsoft CEO says Sam Altman will be joining Microsoft&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; That same day, OpenAI employees published an [[open letter]] to the board threatening to leave OpenAI and join Microsoft, where all employees had been promised jobs, unless all board members stepped down and reinstated Altman as CEO. Initially, 505 employees signed; the number later grew to over 700 of the company&#039;s 770 employees.&amp;lt;ref&amp;gt;Warren, Tom. [https://www.theverge.com/2023/11/20/23968988/openai-employees-resignation-letter-microsoft-sam-altman &amp;quot;Hundreds of OpenAI employees threaten to resign and join Microsoft&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; The signatories included Ilya Sutskever, who had initially advocated for firing Altman but then stated on Twitter, &amp;quot;I regret my participation in the board&#039;s actions.&amp;quot; Late in the night on November 20, OpenAI announced that the parties had reached an &amp;quot;agreement in principle&amp;quot; for Altman to return as CEO and Brockman to return as president.&amp;lt;ref name=&amp;quot;verge patel heath&amp;quot;&amp;gt;Heath, Alex. [https://www.theverge.com/2023/11/22/23967223/sam-altman-returns-ceo-open-ai &amp;quot;Sam Altman to return as CEO of OpenAI&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 22, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;NYT-reinstated&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/11/22/technology/openai-sam-altman-returns.html &amp;quot;Sam Altman Is Reinstated as OpenAI&#039;s Chief Executive&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. November 22, 2023.&amp;lt;/ref&amp;gt; On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members [[Bret Taylor]] (as chairman) and [[Lawrence Summers]], with D&#039;Angelo remaining. In August 2024, Brockman announced he would take a sabbatical through the end of the year; he returned to the company in November 2024.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/08/05/openai-co-founder-leaves-for-anthropic/ &amp;quot;OpenAI co-founder Schulman leaves for Anthropic, Brockman takes extended leave&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2024-08-06.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;verge patel heath&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In May 2024, after [[OpenAI#Non-disparagement agreement|OpenAI&#039;s non-disparagement agreements]] were exposed, Altman was accused of lying when claiming to have been unaware of the equity cancellation provision for departing employees who did not sign the agreement.&amp;lt;ref&amp;gt;Getahun, Hannah. [https://www.businessinsider.com/sam-altman-openai-nda-clause-vested-equity-ilya-sutskever-2024-5 &amp;quot;Sam Altman addresses &#039;potential equity cancellation&#039; in OpenAI exit agreements after 2 high-profile departures&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; Also in May, former board member [[Helen Toner]] explained the board&#039;s rationale for firing Altman in November 2023. She stated that Altman had withheld information, for example by not informing the board in advance of ChatGPT&#039;s release and by not disclosing his ownership of OpenAI&#039;s startup fund. She also alleged that two OpenAI executives had reported &amp;quot;psychological abuse&amp;quot; from Altman, and provided screenshots and documentation to support their claims. She said that many employees feared retaliation if they did not support Altman, and that when Altman was Loopt&#039;s CEO, the management team twice asked to fire him for what they called &amp;quot;deceptive and chaotic behavior&amp;quot;.&amp;lt;ref&amp;gt;Lawler, Richard. [https://www.theverge.com/2024/5/28/24166713/openai-helen-toner-explains-why-sam-altman-was-fired &amp;quot;Former OpenAI board member explains why they fired Sam Altman&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. May 29, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/05/29/former-openai-board-member-explains-why-ceo-sam-altman-was-fired.html &amp;quot;Former OpenAI board member explains why CEO Sam Altman got fired before he was rehired&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. May 29, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Political engagement==&lt;br /&gt;
[[File:The_Prime_Minister_meets_with_AI_developers.jpg|thumb|upright=1.3|Prime Minister of the UK [[Rishi Sunak]] and [[Secretary of State for Science, Innovation and Technology|Technology Secretary]] [[Chloe Smith]] meet with [[Demis Hassabis]] (CEO of [[DeepMind]]), [[Dario Amodei]] (CEO of [[Anthropic]]), and Altman (CEO of OpenAI) in May 2023.]]Altman had contemplated running for [[governor of California]] in the [[2018 California gubernatorial election|2018 election]], but later decided not to enter.&amp;lt;ref&amp;gt;Johnson, Eric. [https://www.vox.com/2018/12/10/18134926/sam-altman-kara-swisher-recode-decode-live-mannys-podcast-transcript-facebook-zuckerberg-ethics &amp;quot;Full Q&amp;amp;A: Y Combinator&#039;s Sam Altman and Recode&#039;s Kara Swisher discuss tech ethics, addiction and Facebook&amp;quot;]. &#039;&#039;Vox&#039;&#039;. December 10, 2018.&amp;lt;/ref&amp;gt; In 2017, Altman announced &amp;quot;the United Slate&amp;quot;, a political project to improve U.S. housing and healthcare policy.&amp;lt;ref&amp;gt;Romm, Tony. [https://www.vox.com/2017/7/31/16066640/sam-altman-united-slate-health-care-housing-california-2018-midterm-elections-trump &amp;quot;Sam Altman will spend big on a new political movement to fix U.S. housing, health care and more&amp;quot;]. &#039;&#039;[[Vox (website)|Vox]]&#039;&#039;. July 31, 2017.&amp;lt;/ref&amp;gt; In 2019, Altman held a fundraiser at his home in San Francisco for 2020 Democratic presidential candidate and fellow tech entrepreneur [[Andrew Yang]].&amp;lt;ref&amp;gt;Russell, Melia. [https://www.businessinsider.com/andrew-yang-talks-tech-san-francisco-sam-altman-fundraiser-2019-11 &amp;quot;Andrew Yang preached his tech-friendly gospel at Sam Altman&#039;s San Francisco house: You can&#039;t treat tech like oil companies and breaking up Amazon won&#039;t bring malls back&amp;quot;]. &#039;&#039;[[Business Insider]]&#039;&#039;. November 14, 2019.&amp;lt;/ref&amp;gt; In May 2020, Altman donated $250,000 to [[American Bridge 21st Century]], a [[Political action committee|super PAC]] supporting Democratic presidential candidate [[Joe Biden]].&amp;lt;ref&amp;gt;Tindera, Michela. [https://www.forbes.com/sites/michelatindera/2020/05/22/silicon-valleys-sam-altman-gave-250000-to-democratic-super-pac-supporting-biden/ &amp;quot;Silicon Valley&#039;s Sam Altman Gave $250,000 To Democratic Super-PAC Supporting Biden&amp;quot;]. &#039;&#039;[[Forbes]]&#039;&#039;. May 22, 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Altman supports [[land value tax]]ation&amp;lt;ref&amp;gt;[https://twitter.com/sama/status/1584628826734460928 &amp;quot;national land value tax FTW!&amp;quot;]. &#039;&#039;X (formerly Twitter)&#039;&#039;.&amp;lt;/ref&amp;gt; and [[universal basic income]] (UBI).&amp;lt;ref name=&amp;quot;ubi&amp;quot;&amp;gt;Varanasi, Lakshmi. [https://www.businessinsider.com/openai-sam-altman-universal-basic-income-idea-compute-gpt-7-2024-5 &amp;quot;OpenAI&#039;s Sam Altman has a new idea for a universal basic income&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;. May 12, 2024.&amp;lt;/ref&amp;gt; In 2021, he published a blog post titled &amp;quot;Moore&#039;s Law for Everything&amp;quot;, arguing that within ten years AI could generate enough value to fund a UBI of $13,500 per year for every adult in the United States.&amp;lt;ref&amp;gt;Shead, Sam. [https://www.cnbc.com/2021/03/30/openai-ceo-sam-altman-says-ai-could-pay-for-ubi-experts-disagree.html &amp;quot;Silicon Valley leaders think A.I. will one day fund free cash handouts. But experts aren&#039;t convinced&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. March 30, 2021.&amp;lt;/ref&amp;gt; In 2024, he suggested a new kind of UBI called &amp;quot;universal basic compute&amp;quot;, which would give everyone a &amp;quot;slice&amp;quot; of ChatGPT&#039;s computing power.&amp;lt;ref name=&amp;quot;ubi&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023, Altman was involved in boosting Representative [[Dean Phillips]] as he prepared [[Dean Phillips 2024 presidential campaign|a challenge]] to President Joe Biden for the [[2024 Democratic National Convention|Democratic nomination]].&amp;lt;ref name=&amp;quot;Return5&amp;quot;&amp;gt;Schleifer, Theodore. [https://www.nytimes.com/2025/03/05/us/politics/sam-altman-openai-democrat-fundraising.html &amp;quot;OpenAI&#039;s C.E.O. Returns to Political Fund-Raising&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. March 5, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Freedlander, David. [https://nymag.com/intelligencer/2024/01/biden-rival-dean-phillips-getting-help-from-sam-altman.html &amp;quot;Dean Phillips Met Sam Altman, Then Got Awfully Interested in AI&amp;quot;]. &#039;&#039;New York Magazine&#039;&#039;. January 18, 2024.&amp;lt;/ref&amp;gt; On November 18, 2024, San Francisco Mayor-elect [[Daniel Lurie]] named him to his transition team.&amp;lt;ref&amp;gt;[https://sfist.com/2024/11/18/daniel-lurie-names-openai-ceo-sam-altman-to-his-mayoral-transition-team/ &amp;quot;Daniel Lurie Names OpenAI CEO Sam Altman to His Mayoral Transition Team&amp;quot;]. &#039;&#039;SFist&#039;&#039;. November 18, 2024.&amp;lt;/ref&amp;gt; In December 2024, it was reported that Altman would donate $1 million to President-elect [[Donald Trump]]&#039;s inaugural fund.&amp;lt;ref&amp;gt;[https://www.npr.org/2024/12/13/nx-s1-5227874/trump-bezos-zuckerberg-amazon-facebook-open-ai-meta-inauguration-fund &amp;quot;Tech moguls Altman, Bezos and Zuckerberg donate to Trump&#039;s inauguration fund&amp;quot;]. &#039;&#039;NPR&#039;&#039;. December 13, 2024.&amp;lt;/ref&amp;gt; Altman hosted a fundraiser in San Francisco on March 20, 2025, for Senator [[Mark Warner]], a Democrat up for [[2026 United States Senate election in Virginia|re-election in 2026]] in Virginia.&amp;lt;ref name=&amp;quot;Return5&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On July 4, 2025, Altman posted on [[Twitter|X]] about his political views, saying that he believed in &amp;quot;[[Technocapitalism|techno-capitalism]]&amp;quot; and found himself increasingly &amp;quot;politically homeless&amp;quot;, criticizing the [[Democratic Party (United States)|Democratic Party]] for no longer encouraging a &amp;quot;culture of innovation and entrepreneurship&amp;quot;.&amp;lt;ref&amp;gt;Rodriguez, Salvador. [https://www.cnbc.com/2025/07/04/openai-altman-july-4-zohran-mamdani.html &amp;quot;OpenAI CEO Sam Altman says he&#039;s &#039;politically homeless&#039; in July 4 post bashing Democrats&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. July 4, 2025.&amp;lt;/ref&amp;gt; In September 2025, Altman was interviewed by [[Tucker Carlson]]. The interview covered the death of former OpenAI researcher [[Suchir Balaji]], AI alignment, and Altman&#039;s views on whether ChatGPT should reflect distinctly American values.&amp;lt;ref&amp;gt;Dodge, Blake. [https://www.piratewires.com/p/what-happened-tucker-carlson-sam-altman-interview &amp;quot;Sam Altman (Generally) Doesn&#039;t Want to Be Your Moral Authority&amp;quot;]. &#039;&#039;Pirate Wires&#039;&#039;. September 16, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Personal life==&lt;br /&gt;
Altman has been a [[Vegetarianism|vegetarian]] since childhood.&amp;lt;ref&amp;gt;[https://rescale.com/blog/fireside-chat-with-sam-altman/ &amp;quot;Fireside Chat with Sam Altman&amp;quot;]. &#039;&#039;[[Rescale]]&#039;&#039;. February 24, 2020.&amp;lt;/ref&amp;gt; He is [[Gay men|gay]], and first disclosed his sexuality at the age of 17 in high school, where he spoke out after some students objected to a [[National Coming Out Day]] speaker.&amp;lt;ref name=Intelligencer /&amp;gt;&amp;lt;ref name=&amp;quot;newyorker2016&amp;quot;&amp;gt;Friend, Tad. [https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny &amp;quot;Sam Altman&#039;s Manifest Destiny&amp;quot;]. &#039;&#039;[[The New Yorker]]&#039;&#039;. October 3, 2016.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;SCMP 2023-11-24&amp;quot;&amp;gt;Farah, Lynn. [https://www.scmp.com/magazines/style/entertainment/article/3242681/meet-chatgpt-boss-sam-altman-whos-back-ceo-chair-microsoft-briefly-hired-him-his-openai-return-he &amp;quot;Meet ChatGPT boss Sam Altman, who&#039;s back in the CEO chair: Microsoft briefly hired him before his OpenAI return, he came out as LGBT in high school, and he splurges his millions on Tesla and McLaren&amp;quot;]. &#039;&#039;[[South China Morning Post]]&#039;&#039;. November 24, 2023.&amp;lt;/ref&amp;gt; He dated Loopt co-founder Nick Sivo for nine years. They broke up shortly after the company was acquired in 2012.&amp;lt;ref name=&amp;quot;newyorker2016&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to [[Keach Hagey]], Altman met his future husband Oliver Mulherin &amp;quot;in Peter Thiel&#039;s hot tub at 3 a.m.&amp;quot; in 2015. Mulherin was a computer science student at the [[University of Melbourne]] at the time and later became an engineer. He worked on AI projects in Australia before moving to the United States to work for the dementia-detection startup SPARK Neuro.{{sfn|Hagey|2025|p=275}} Altman married Mulherin in January 2024,&amp;lt;ref&amp;gt;Russell, Melia. [https://www.businessinsider.com/openai-ceo-sam-altman-married-oliver-mulherin-wedding-2024-1 &amp;quot;OpenAI CEO Sam Altman just got married&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; at their estate in [[Hawaii]];&amp;lt;ref&amp;gt;Le, Linh. [https://e.vnexpress.net/news/trend/openais-sam-altman-ties-knot-with-same-sex-partner-on-43m-hawaii-estate-4700419.html &amp;quot;OpenAI&#039;s Sam Altman ties knot with same-sex partner on $43M Hawaii estate&amp;quot;]. &#039;&#039;VN Express&#039;&#039;.&amp;lt;/ref&amp;gt; the couple also lives in [[Russian Hill, San Francisco]], and often spends weekends in [[Napa, California]].&amp;lt;ref name=&amp;quot;SCMP 2023-11-24&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html &amp;quot;The ChatGPT King Isn&#039;t Worried, but He Knows You Might Be&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. March 31, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Altman and Mulherin committed to giving away most of their wealth by signing [[The Giving Pledge]] in May 2024.&amp;lt;ref&amp;gt;Valinsky, Jordan. [https://www.cnn.com/2024/05/28/tech/sam-altman-giving-pledge/index.html &amp;quot;OpenAI&#039;s Sam Altman vows to give away most of his wealth through the Giving Pledge&amp;quot;]. &#039;&#039;[[CNN Business]]&#039;&#039;. May 28, 2024.&amp;lt;/ref&amp;gt; The couple has a son, born in 2025.&amp;lt;ref&amp;gt;Ahlgrim, Callie. [https://www.businessinsider.com/sam-altman-kids-never-smarter-than-ai-chatgpt-2025-6 &amp;quot;Sam Altman says his own kids will &#039;never be smarter than AI&#039;&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; Altman has described himself as preparing for catastrophic scenarios, stating in 2016: &amp;quot;I have guns, gold, [[potassium iodide]], antibiotics, batteries, water, gas masks from the {{sic|[[Israel Defense Forces|Israel Defense Force]]}}, and a big patch of land in [[Big Sur]] I can fly to.&amp;quot;&amp;lt;ref name=&amp;quot;newyorker2016&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In January 2025, Altman&#039;s sister Ann Altman filed a lawsuit alleging sexual abuse by Altman in the [[United States District Court for the Eastern District of Missouri|U.S. District Court for the Eastern District of Missouri]] in [[St. Louis]]. The lawsuit alleges that the abuse started when Ann was aged three and Sam was 12.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2025/01/07/openais-sam-altman-denies-sexual-abuse-allegations-made-sister-ann.html &amp;quot;OpenAI CEO Sam Altman denies sexual abuse allegations made by his sister in lawsuit&amp;quot;]. &#039;&#039;[[CNBC]]&#039;&#039;.&amp;lt;/ref&amp;gt; Sam Altman, along with his mother Connie and younger brothers Max and Jack, issued a joint statement denying the allegations, describing them as &amp;quot;utterly untrue&amp;quot;.&amp;lt;ref&amp;gt;Partridge, Joanna. [https://www.theguardian.com/technology/2025/jan/08/openai-chief-executive-sam-altman-accused-of-sexual-abuse-by-sister-in-lawsuit &amp;quot;OpenAI chief executive Sam Altman accused of sexual abuse by sister in lawsuit&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. January 8, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Altman, Sam. &amp;quot;My sister has filed a lawsuit against me. Here is a statement from my mom, brothers, and me&amp;quot;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Hoskins, Peter. [https://www.bbc.com/news/articles/cz6lq6x2gd9o &amp;quot;OpenAI boss Sam Altman denies sexual abuse allegations made by sister&amp;quot;]. &#039;&#039;www.bbc.com&#039;&#039;. January 8, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Altman has also served on the board of three [[Hong Kong]]– and [[Singapore]]-based [[Special-purpose acquisition company|SPACs]] named Bridgetown (sponsored by [[Thiel Capital]] and [[Richard Li]]&#039;s Pacific Century) alongside [[Matt Danzeisen]], the SPACs&#039; chairman and Thiel&#039;s spouse.&amp;lt;ref name=&amp;quot;bloomberg1&amp;quot;&amp;gt;Chapman, Lizette. [https://www.bloomberg.com/news/articles/2020-09-23/bridgetown-spac-backed-by-peter-thiel-files-to-go-public &amp;quot;Bridgetown SPAC, Backed by Peter Thiel, Files to Go Public&amp;quot;]. &#039;&#039;Bloomberg News&#039;&#039;. September 23, 2020.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.sec.gov/Archives/edgar/data/1831236/000121390021004685/f424b40121_bridgetown2.htm &amp;quot;Prospectus filed pursuant to Rule 424(b)(4), Registration No. 333-251860: $260,000,000 Bridgetown 2 Holdings Limited&amp;quot;]. &#039;&#039;www.sec.gov&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.sec.gov/Archives/edgar/data/1844028/000121390021012944/filename1.htm &amp;quot;Form S-1 registration statement under the Securities Act of 1933: Bridgetown 3 Holdings Limited&amp;quot;]. &#039;&#039;www.sec.gov&#039;&#039;.&amp;lt;/ref&amp;gt; Like Danzeisen, Altman was mentioned as a friend in Thiel&#039;s circle by &#039;&#039;[[BuzzFeed News]]&#039;&#039; in 2017.&amp;lt;ref name=&amp;quot;buzzfeed&amp;quot;&amp;gt;Mac, Ryan. [https://www.buzzfeednews.com/article/ryanmac/peter-thiel-and-donald-trump &amp;quot;Peter Thiel Has Been Hedging His Bet On Donald Trump&amp;quot;]. &#039;&#039;BuzzFeed News&#039;&#039;. August 7, 2017.&amp;lt;/ref&amp;gt; Altman thanked Danzeisen for contributing to his essays on AI development and on China.&amp;lt;ref&amp;gt;Kim, Chang-su. [https://www.beyondx.ai/modeun/ &amp;quot;Sam Altman: A prediction of a future in which AI cuts housing prices in half&amp;quot;]. &#039;&#039;BeyondX&#039;&#039;. November 13, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://blog.samaltman.com/china &amp;quot;China&amp;quot;]. &#039;&#039;Sam Altman&#039;&#039;.&amp;lt;/ref&amp;gt; At a birthday party Thiel organized for Danzeisen (&amp;quot;on a balmy mid-November evening&amp;quot; in 2023), Thiel warned Altman that half of Altman&#039;s subordinates at OpenAI, who had supposedly been &amp;quot;programmed&amp;quot; by [[Eliezer Yudkowsky]], wanted to remove him.{{sfn|Hagey|2025|p=i}}&amp;lt;ref&amp;gt;Paul, Kari. [https://www.theguardian.com/technology/2024/mar/08/openai-sam-altman-reinstated &amp;quot;OpenAI reinstates CEO Sam Altman to board after firing and rehiring&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. March 9, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Hagey, Keach. [https://www.wsj.com/tech/ai/the-real-story-behind-sam-altman-firing-from-openai-efd51a5d &amp;quot;The Real Story Behind Sam Altman&#039;s Firing From OpenAI&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. March 28, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
===Citations===&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sources===&lt;br /&gt;
* Hagey, Keach (2025). &#039;&#039;The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future&#039;&#039;. W. W. Norton &amp;amp; Company.&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
{{Commons}}&lt;br /&gt;
{{Wikiquote}}&lt;br /&gt;
&amp;lt;!-- per [[WP:ELMINOFFICIAL]], choose one official website only --&amp;gt;&lt;br /&gt;
* [https://blog.samaltman.com Personal website]&lt;br /&gt;
* {{C-SPAN|117787}}&lt;br /&gt;
&lt;br /&gt;
{{OpenAI navbox}}&lt;br /&gt;
{{Time Persons of the Year 2001–2025}}&lt;br /&gt;
{{Existential risk from artificial intelligence}}&lt;br /&gt;
&lt;br /&gt;
{{DEFAULTSORT:Altman, Sam}}&lt;br /&gt;
[[Category:1985 births]]&lt;br /&gt;
[[Category:Living people]]&lt;br /&gt;
[[Category:21st-century American businesspeople]]&lt;br /&gt;
[[Category:21st-century American Jews]]&lt;br /&gt;
[[Category:21st-century American LGBTQ people]]&lt;br /&gt;
[[Category:21st-century American philanthropists]]&lt;br /&gt;
[[Category:American billionaires]]&lt;br /&gt;
[[Category:American computer programmers]]&lt;br /&gt;
[[Category:American LGBTQ businesspeople]]&lt;br /&gt;
[[Category:American people of Polish-Jewish descent]]&lt;br /&gt;
[[Category:Articles containing video clips]]&lt;br /&gt;
[[Category:Businesspeople from Chicago]]&lt;br /&gt;
[[Category:Businesspeople from St. Louis]]&lt;br /&gt;
[[Category:American businesspeople in information technology]]&lt;br /&gt;
[[Category:Gay businessmen]]&lt;br /&gt;
[[Category:Gay Jews]]&lt;br /&gt;
[[Category:John Burroughs School alumni]]&lt;br /&gt;
[[Category:LGBTQ people from Missouri]]&lt;br /&gt;
[[Category:OpenAI people]]&lt;br /&gt;
[[Category:Proprietary technology salespersons]]&lt;br /&gt;
[[Category:Stanford University alumni]]&lt;br /&gt;
[[Category:Survivalists]]&lt;br /&gt;
[[Category:Y Combinator people]]&lt;br /&gt;
[[Category:People from Clayton, Missouri]]&lt;br /&gt;
[[Category:Artificial intelligence industry in the United States]]&lt;br /&gt;
[[Category:Time Person of the Year]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=OpenAI&amp;diff=11</id>
		<title>OpenAI</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=OpenAI&amp;diff=11"/>
		<updated>2026-04-06T12:58:37Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Distinguish|OpenAL|OpenAPI (disambiguation){{!}}OpenAPI|Open-source artificial intelligence}}&lt;br /&gt;
{{Use American English|date=May 2023}}&lt;br /&gt;
{{Use mdy dates|date=September 2024}}&lt;br /&gt;
{{Infobox company&lt;br /&gt;
| name            = OpenAI&lt;br /&gt;
| logo            = [[File:OpenAI logo 2025 (wordmark).svg|frameless|upright=1.1|class=skin-invert]]&lt;br /&gt;
| image           = &lt;br /&gt;
| image_caption   = &lt;br /&gt;
| type            = [[Privately held company|Private]]&lt;br /&gt;
| industry        = [[Artificial intelligence]]&lt;br /&gt;
| founded         = {{Start date and age|p=y|2015|12|08}}&amp;lt;ref name=&amp;quot;OpenCorporates&amp;quot;&amp;gt;[https://opencorporates.com/companies/us_de/5902936 &amp;quot;OpenAI, Inc.&amp;quot;]. &#039;&#039;[[OpenCorporates]]&#039;&#039;. December 8, 2015.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| founders        = {{ubl|[[Sam Altman]]|[[Elon Musk]]|[[Ilya Sutskever]]|[[Greg Brockman]]|[[Trevor Blackwell]]|Vicki Cheung|[[Andrej Karpathy]]|Durk Kingma|[[John Schulman]]|Pamela Vagata|[[Wojciech Zaremba]]}}&lt;br /&gt;
| hq_location     = 1455 [[Third Street (San Francisco)|3rd Street]], [[San Francisco]], [[California]], U.S.&amp;lt;ref&amp;gt;Waxmann, Laura. [https://www.sfchronicle.com/realestate/article/openai-s-f-uber-lease-18451102.php &amp;quot;OpenAI closes big lease deal at Uber&#039;s San Francisco headquarters&amp;quot;]. &#039;&#039;San Francisco Chronicle&#039;&#039;. October 27, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| key_people      = {{Unbulleted list&lt;br /&gt;
| [[Bret Taylor]] ([[chairman]])&lt;br /&gt;
| Sam Altman ([[Chief executive officer|CEO]])&lt;br /&gt;
| Greg Brockman ([[President (corporate title)|president]])&lt;br /&gt;
| [[Sarah Friar]] ([[Chief financial officer|CFO]])&amp;lt;ref name=&amp;quot;NYT5&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/09/03/technology/openai-chatgpt-revenue.html &amp;quot;OpenAI, Still Haunted by Its Chaotic Past, Is Trying to Grow Up&amp;quot;]. &#039;&#039;[[New York Times]]&#039;&#039;. September 3, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| [[Fidji Simo]] (CEO of Applications)&lt;br /&gt;
}}&lt;br /&gt;
| products        = {{Unbulleted list&lt;br /&gt;
| [[ChatGPT]]&lt;br /&gt;
| [[GPT-5.4]]&lt;br /&gt;
| [[OpenAI Codex (AI agent)|OpenAI Codex]]&lt;br /&gt;
| [[GPT Image]]&lt;br /&gt;
| [[ChatGPT Deep Research|Deep Research]]&lt;br /&gt;
| [[ChatGPT agent]]&lt;br /&gt;
| [[ChatGPT Atlas]]&lt;br /&gt;
| [[ChatGPT Health]]&lt;br /&gt;
}}&lt;br /&gt;
| services        = &lt;br /&gt;
| revenue         = {{increase}} {{US$|13.1|link=yes}}&amp;amp;nbsp;billion&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2026/02/20/openai-resets-spend-expectations-targets-around-600-billion-by-2030.html &amp;quot;OpenAI resets spending expectations, tells investors compute target is around $600 billion by 2030&amp;quot;]. [[CNBC]]. February 20, 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| revenue_year    = 2025&lt;br /&gt;
| net_income      = {{decrease}} US${{color|red|−9}}&amp;amp;nbsp;billion&amp;lt;ref&amp;gt;Smith, Dave. [https://fortune.com/2025/11/12/openai-cash-burn-rate-annual-losses-2028-profitable-2030-financial-documents/ &amp;quot;OpenAI says it plans to report stunning annual losses through 2028—and then turn wildly profitable just two years later&amp;quot;]. &#039;&#039;[[Fortune (magazine)|Fortune]]&#039;&#039;. November 12, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| net_income_year = {{nowrap|2025 {{abbr|est.|estimate}}}}&lt;br /&gt;
| suppressfields  = assets &amp;lt;!-- suppress stale and uncited asset value fetched from Wikidata --&amp;gt;&lt;br /&gt;
| equity          = &lt;br /&gt;
| equity_year     = &lt;br /&gt;
| num_employees   = 4,500 (2026)&amp;lt;ref&amp;gt;George, Hammond. [https://www.ft.com/content/7ffea5b4-e8bc-47cd-adb4-257f84c8028b &amp;quot;OpenAI to double workforce as business push intensifies&amp;quot;]. &#039;&#039;[[Financial Times]]&#039;&#039;. 2026-03-21.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| owner           = {{unbulleted list |Employees and investors (47%) |[[Microsoft]] (27%) |OpenAI Foundation (26%)&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt;}}&lt;br /&gt;
| homepage        = {{URL|https://openai.com/}}&lt;br /&gt;
}}&lt;br /&gt;
{{Artificial intelligence}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OpenAI&#039;&#039;&#039; is an American [[artificial intelligence]] (AI) research organization comprising both a [[nonprofit]] foundation and a controlled for-profit public [[benefit corporation]] (PBC), headquartered in [[San Francisco]]. It aims to develop &amp;quot;safe and beneficial&amp;quot; [[artificial general intelligence]] (AGI), which it defines as &amp;quot;highly autonomous systems that outperform humans at most economically valuable work&amp;quot;.&amp;lt;ref name=&amp;quot;OpenAI-2018&amp;quot;&amp;gt;[https://openai.com/charter &amp;quot;OpenAI Charter&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. April 9, 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OpenAI is widely recognized for its development of the [[Generative pre-trained transformer|GPT]] family of [[large language model]]s, the [[DALL-E]] series of [[text-to-image model]]s, and the [[Sora (text-to-video model)|Sora]] series of [[text-to-video model]]s, which have influenced industry research and commercial applications.&amp;lt;ref&amp;gt;[https://www.wsj.com/tech/ai/artificial-the-openai-story-21587cbd &amp;quot;Artificial: The OpenAI Story&amp;quot;]. &#039;&#039;[[WSJ]]&#039;&#039;. December 10, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://platform.openai.com/docs/models/overview &amp;quot;Models - OpenAI API&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Jindal, Siddharth. [https://analyticsindiamag.com/openai-steals-the-spotlight-with-sora-%E2%9C%A8/ &amp;quot;OpenAI Steals the Spotlight with Sora&amp;quot;]. &#039;&#039;Analytics India Magazine&#039;&#039;. February 16, 2024.&amp;lt;/ref&amp;gt; Its release of [[ChatGPT]] in November 2022 has been credited with catalyzing widespread interest in [[Generative artificial intelligence|generative AI]].&lt;br /&gt;
&lt;br /&gt;
OpenAI was founded in 2015 in [[Delaware General Corporation Law|Delaware]] as a nonprofit. A for-profit subsidiary was created in 2019 and restructured in 2025 to operate more independently of the nonprofit. [[Microsoft]] previously invested over $13 billion into OpenAI,&amp;lt;ref name=&amp;quot;Reuters_2025&amp;quot;&amp;gt;[https://www.reuters.com/business/openai-negotiates-with-microsoft-unlock-new-funding-future-ipo-ft-reports-2025-05-11/ &amp;quot;OpenAI negotiates with Microsoft for new funding and future IPO, FT reports&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. May 12, 2025.&amp;lt;/ref&amp;gt; and provides [[Microsoft Azure|Azure]] cloud computing resources.&amp;lt;ref&amp;gt;[https://www.business-standard.com/world-news/microsoft-s-13-billion-investment-into-openai-faces-extra-eu-scrutiny-124062801346_1.html &amp;quot;Microsoft&#039;s $13 billion investment into OpenAI faces extra EU scrutiny&amp;quot;]. &#039;&#039;Business Standard&#039;&#039;. June 28, 2024.&amp;lt;/ref&amp;gt; In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion.&amp;lt;ref&amp;gt;Sigalos, MacKenzie. [https://www.cnbc.com/2025/10/02/openai-share-sale-500-billion-valuation.html &amp;quot;OpenAI wraps $6.6 billion share sale at $500 billion valuation&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. October 2, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023 and 2024, OpenAI faced multiple lawsuits for alleged [[copyright infringement]] from authors and media companies whose work was used to train some of OpenAI&#039;s products. In November 2023, OpenAI&#039;s board [[Removal of Sam Altman from OpenAI|removed Sam Altman]] as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed [[AI safety]] researchers left OpenAI, citing the company&#039;s prominent role in what they saw as an industry-wide deprioritization of safety work.&amp;lt;ref&amp;gt;Goldman, Sharon. [https://fortune.com/2024/08/26/openai-agi-safety-researchers-exodus/ &amp;quot;Exodus at OpenAI: Nearly half of AGI safety staffers have left, says former researcher&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. August 26, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Meyer, David. [https://fortune.com/2024/10/24/openai-miles-brundage-suchir-balaji-ai-safety-copyright-sam-altman-chatgpt/ &amp;quot;OpenAI&#039;s reputational double whammy&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. October 24, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Toclimit}}&lt;br /&gt;
&lt;br /&gt;
== Founding ==&lt;br /&gt;
[[File:Pioneer Building, San Francisco (2019) -1.jpg|thumb|Former headquarters at the [[Pioneer Building (San Francisco)|Pioneer Building]] in San Francisco]]&lt;br /&gt;
In December 2015, OpenAI was founded as a [[Nonprofit organization|nonprofit organization]] by [[Sam Altman]], [[Elon Musk]], [[Ilya Sutskever]], [[Greg Brockman]], [[Trevor Blackwell]], Vicki Cheung, [[Andrej Karpathy]], Durk Kingma, [[John Schulman]], Pamela Vagata, and [[Wojciech Zaremba]], with Sam Altman and Elon Musk as co-chairs.&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;[https://observer.com/2024/07/openai-founders-career/ &amp;quot;Only 4 of OpenAI&#039;s 11 Founders Are Still With the Company—Where Are the Rest of Them?&amp;quot;]. &#039;&#039;Observer&#039;&#039;. July 12, 2024.&amp;lt;/ref&amp;gt; A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, [[Reid Hoffman]], [[Jessica Livingston]], [[Peter Thiel]], [[Amazon Web Services]] (AWS), and [[Infosys]].&amp;lt;ref&amp;gt;[https://openai.com/blog/introducing-openai/ &amp;quot;Introducing OpenAI&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. December 12, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.vanityfair.com/news/2015/12/sam-altman-elon-musk-openai &amp;quot;Sam Altman on His Plan to Keep A.I. Out of the Hands of the &amp;quot;Bad Guys&amp;quot;&amp;quot;]. &#039;&#039;Vanity Fair&#039;&#039;. 2015.&amp;lt;/ref&amp;gt; However, the capital actually collected lagged significantly behind the pledges; according to company disclosures, only $130 million had been received by 2019.&amp;lt;ref name=&amp;quot;St&amp;quot;&amp;gt;[https://openai.com/our-structure &amp;quot;Our structure&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. October 28, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but later restricted access to its most capable models, citing competitive and safety concerns.&amp;lt;ref&amp;gt;[https://blog.openai.com/introducing-openai/ &amp;quot;Introducing OpenAI&amp;quot;]. &#039;&#039;OpenAI Blog&#039;&#039;. December 12, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;bbc-giants&amp;quot;&amp;gt;[https://www.bbc.com/news/technology-35082344 &amp;quot;Tech giants pledge $1bn for &#039;altruistic AI&#039; venture, OpenAI&amp;quot;]. &#039;&#039;[[BBC News]]&#039;&#039;. December 12, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; OpenAI was initially run from Brockman&#039;s living room.&amp;lt;ref&amp;gt;Seetharaman, Deepa. [https://www.wsj.com/tech/ai/open-ai-division-for-profit-da26c24b &amp;quot;Turning OpenAI Into a Real Business Is Tearing It Apart&amp;quot;]. &#039;&#039;[[The Wall Street Journal]]&#039;&#039;. September 27, 2024.&amp;lt;/ref&amp;gt; It was later headquartered at the [[Pioneer Building (San Francisco)|Pioneer Building]] in the [[Mission District, San Francisco]].&amp;lt;ref&amp;gt;Conger, Kate. [https://gizmodo.com/elon-musks-neuralink-sought-to-open-an-animal-testing-f-1823167674 &amp;quot;Elon Musk&#039;s Neuralink Sought to Open an Animal Testing Facility in San Francisco&amp;quot;]. &#039;&#039;Gizmodo&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;technologyreview&amp;quot;&amp;gt;Hao, Karen. [https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ &amp;quot;The messy, secretive reality behind OpenAI&#039;s bid to save the world&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. February 17, 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to OpenAI&#039;s charter, its founding mission is &amp;quot;to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.&amp;quot;&amp;lt;ref name=&amp;quot;OpenAI-2018&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Musk and Altman stated in 2015 that they were partly motivated by concerns about [[AI safety]] and [[existential risk from artificial general intelligence]].&amp;lt;ref name=&amp;quot;csmonitor&amp;quot;&amp;gt;Lewontin, Max. [https://www.csmonitor.com/Technology/2015/1214/Open-AI-Effort-to-democratize-artificial-intelligence-research &amp;quot;Open AI: Effort to democratize artificial intelligence research?&amp;quot;]. &#039;&#039;[[The Christian Science Monitor]]&#039;&#039;. December 14, 2015.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wired_inside&amp;quot;&amp;gt;[https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ &amp;quot;Inside OpenAI, Elon Musk&#039;s Wild Plan to Set Artificial Intelligence Free&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. April 27, 2016.&amp;lt;/ref&amp;gt; OpenAI stated that &amp;quot;it&#039;s hard to fathom how much human-level AI could benefit society&amp;quot;, and that it is equally difficult to comprehend &amp;quot;how much it could damage society if built or used incorrectly&amp;quot;.&amp;lt;ref name=&amp;quot;bbc-giants&amp;quot; /&amp;gt; The startup also wrote that AI &amp;quot;should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible&amp;quot;,&amp;lt;ref name=&amp;quot;bbc-giants&amp;quot; /&amp;gt; and that &amp;quot;because of AI&#039;s surprising history, it&#039;s hard to predict when human-level AI might come within reach. When it does, it&#039;ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.&amp;quot;&amp;lt;ref&amp;gt;Mendoza, Jessica. [https://www.csmonitor.com/Science/2015/1214/Tech-leaders-launch-nonprofit-to-save-the-world-from-killer-robots &amp;quot;Tech leaders launch nonprofit to save the world from killer robots&amp;quot;]. &#039;&#039;[[The Christian Science Monitor]]&#039;&#039;.&amp;lt;/ref&amp;gt; Co-chair Sam Altman said he expected the effort to be a decades-long project that would eventually surpass human intelligence.&amp;lt;ref name=&amp;quot;wired_far_more&amp;quot;&amp;gt;Metz, Cade. [https://www.wired.com/2015/12/elon-musks-billion-dollar-ai-plan-is-about-far-more-than-saving-the-world/ &amp;quot;Elon Musk&#039;s Billion-Dollar AI Plan Is About Far More Than Saving the World&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. December 15, 2015.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Brockman met with [[Yoshua Bengio]], one of the &amp;quot;founding fathers&amp;quot; of [[deep learning]], and drew up a list of leading AI researchers.&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; Brockman was able to hire nine of them as the first employees in December 2015.&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; OpenAI did not pay AI researchers salaries comparable to those of [[Facebook]] or [[Google]],&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016.&amp;lt;ref name=&amp;quot;salaryTimes&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-openai.html &amp;quot;A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. April 19, 2018.&amp;lt;/ref&amp;gt; OpenAI&#039;s potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI &amp;quot;partly because of the very strong group of people and, to a very large extent, because of its mission.&amp;quot;&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt; OpenAI co-founder [[Wojciech Zaremba]] stated that he turned down &amp;quot;borderline crazy&amp;quot; offers of two to three times his market value to join OpenAI instead.&amp;lt;ref name=&amp;quot;wired_inside&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2016, OpenAI released a public beta of &amp;quot;OpenAI Gym&amp;quot;, its platform for [[reinforcement learning]] research.&amp;lt;ref name=&amp;quot;Dave Gershgorn-2016&amp;quot;&amp;gt;[http://www.popsci.com/elon-musks-artificial-intelligence-group-opens-gym-to-train-ai &amp;quot;Elon Musk&#039;s Artificial Intelligence Group Opens A &#039;Gym&#039; To Train A.I.&amp;quot;]. &#039;&#039;Popular Science&#039;&#039;. April 27, 2016.&amp;lt;/ref&amp;gt; [[Nvidia]] gifted its first [[Nvidia DGX|DGX-1 supercomputer]] to OpenAI in August 2016 to help it train larger and more complex AI models, reducing some training runs from six days to two hours.&amp;lt;ref&amp;gt;Carr, Austin. [https://www.bloomberg.com/news/features/2023-06-15/nvidia-s-ai-chips-power-chatgpt-and-multibillion-dollar-surge &amp;quot;How Nvidia Became ChatGPT&#039;s Brain and Joined the $1 Trillion Club&amp;quot;]. &#039;&#039;Bloomberg News&#039;&#039;. June 15, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Vanian, Jonathan. [https://fortune.com/2016/08/15/elon-musk-artificial-intelligence-openai-nvidia-supercomputer/ &amp;quot;Elon Musk&#039;s Artificial Intelligence Project Just Got a Free Supercomputer&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. August 15, 2016.&amp;lt;/ref&amp;gt; In December 2016, OpenAI released &amp;quot;Universe&amp;quot;, a software platform for measuring and training an AI&#039;s general intelligence across the world&#039;s supply of games, websites, and other applications.&amp;lt;ref&amp;gt;Metz, Cade. [https://www.wired.com/2016/12/openais-universe-computers-learn-use-apps-like-humans/ &amp;quot;Elon Musk&#039;s Lab Wants to Teach Computers to Use Apps Just Like Humans Do&amp;quot;]. &#039;&#039;WIRED&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mannes, John. [https://techcrunch.com/2016/12/05/openais-universe-is-the-fun-parent-every-artificial-intelligence-deserves/ &amp;quot;OpenAI&#039;s Universe is the fun parent every artificial intelligence deserves&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://universe.openai.com/ &amp;quot;OpenAI – Universe&amp;quot;].&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Claburn, Thomas. [https://www.theregister.co.uk/2016/12/05/openai_universe_reinforcement_learning/ &amp;quot;Elon Musk-backed OpenAI reveals Universe – a universal training ground for computers&amp;quot;]. &#039;&#039;The Register&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Corporate structure ==&lt;br /&gt;
[[File:OpenAI corporate structure.svg|thumb|upright=1.5|OpenAI&#039;s corporate structure]]&lt;br /&gt;
&lt;br /&gt;
=== Transition from non-profit ===&lt;br /&gt;
In 2019, OpenAI transitioned from non-profit to &amp;quot;capped&amp;quot; for-profit, with profit capped at 100 times any investment (an investor contributing $10 million, for example, could receive at most $1 billion in returns).&amp;lt;ref&amp;gt;[https://techcrunch.com/2019/03/11/openai-shifts-from-nonprofit-to-capped-profit-to-attract-capital/ &amp;quot;OpenAI shifts from nonprofit to &#039;capped-profit&#039; to attract capital&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. March 11, 2019.&amp;lt;/ref&amp;gt; According to OpenAI, the capped-profit model allowed OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company.&amp;lt;ref name=&amp;quot;wired investors&amp;quot;&amp;gt;[https://www.wired.com/story/compete-google-openai-seeks-investorsand-profits/ &amp;quot;To Compete With Google, OpenAI Seeks Investors–and Profits&amp;quot;]. &#039;&#039;Wired&#039;&#039;. December 3, 2019.&amp;lt;/ref&amp;gt; At the time, many top researchers worked for [[Google Brain]], [[DeepMind]], or [[Facebook]], which offered [[Compensation and benefits#Equity-based compensation|equity]] that a nonprofit would be unable to match.&amp;lt;ref name=&amp;quot;bloomberg arm&amp;quot;&amp;gt;Kahn, Jeremy. [https://www.bloomberg.com/news/articles/2019-03-11/ai-research-group-co-founded-by-musk-starts-for-profit-arm &amp;quot;AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm&amp;quot;]. &#039;&#039;[[Bloomberg News]]&#039;&#039;. March 11, 2019.&amp;lt;/ref&amp;gt; Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees.&amp;lt;ref name=&amp;quot;salaryTimes&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The company then distributed [[Equity (finance)|equity]] to its employees and partnered with Microsoft, which announced a $1 billion investment in the company.&amp;lt;ref name=&amp;quot;2019investment&amp;quot;&amp;gt;[https://openai.com/blog/microsoft-invests-in-and-partners-with-openai &amp;quot;Microsoft invests in and partners with OpenAI&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. July 22, 2019.&amp;lt;/ref&amp;gt; Since then, OpenAI systems have run on an [[Microsoft Azure|Azure]]-based [[supercomputer|supercomputing]] platform from Microsoft.&lt;br /&gt;
&lt;br /&gt;
OpenAI Global, LLC then announced its intention to commercially license its technologies.&amp;lt;ref&amp;gt;[https://openai.com/blog/microsoft/ &amp;quot;Microsoft Invests in and Partners with OpenAI to Support Us Building Beneficial AGI&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. July 22, 2019.&amp;lt;/ref&amp;gt; It planned to spend $1 billion &amp;quot;within five years, and possibly much faster&amp;quot;.&amp;lt;ref&amp;gt;Murgia, Madhumita. [https://www.ft.com/content/d4280856-b92d-11e9-8a88-aa6628ac896c &amp;quot;DeepMind runs up higher losses and debts in race for AI&amp;quot;]. &#039;&#039;[[Financial Times]]&#039;&#039;. August 7, 2019.&amp;lt;/ref&amp;gt; Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need &amp;quot;more capital than any non-profit has ever raised&amp;quot; to achieve artificial general intelligence.&amp;lt;ref&amp;gt;[https://fortune.com/2019/10/03/openai-will-need-more-capital-than-any-non-profit-has-ever-raised/ &amp;quot;OpenAI Will Need More Capital Than Any Non-Profit Has Ever Raised&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The nonprofit, OpenAI, Inc., is the sole [[controlling interest|controlling shareholder]] of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal [[fiduciary duty|fiduciary responsibility]] to OpenAI, Inc.&#039;s nonprofit charter. A majority of OpenAI, Inc.&#039;s board is barred from having financial stakes in OpenAI Global, LLC.&amp;lt;ref name=&amp;quot;wired investors&amp;quot; /&amp;gt; In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest.&amp;lt;ref name=&amp;quot;bloomberg arm&amp;quot; /&amp;gt; Some researchers have argued that OpenAI Global, LLC&#039;s switch to for-profit status is inconsistent with OpenAI&#039;s claims to be &amp;quot;democratizing&amp;quot; AI.&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2019/7/22/20703578/microsoft-openai-investment-partnership-1-billion-azure-artificial-general-intelligence-agi &amp;quot;Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. July 22, 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On February 29, 2024, [[Elon Musk]] filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization—a case OpenAI dismissed as &amp;quot;incoherent&amp;quot; and &amp;quot;frivolous&amp;quot;, though Musk later revived legal action against Altman and others in August.&amp;lt;ref&amp;gt;Satariano, Adam. [https://www.nytimes.com/2024/03/01/technology/elon-musk-openai-sam-altman-lawsuit.html &amp;quot;Elon Musk Sues OpenAI and Sam Altman for Violating the Company&#039;s Principles&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. March 1, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Lopatto, Elizabeth. [https://www.theverge.com/2024/3/5/24091773/openai-response-elon-musk-breach-of-contract-lawsuit &amp;quot;OpenAI says Elon Musk wanted &#039;absolute control&#039; of the company&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. March 6, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kharpal, Arjun. [https://www.cnbc.com/2024/08/05/elon-musk-revives-lawsuit-against-openai-sam-altman-in-federal-court.html &amp;quot;Elon Musk revives lawsuit against OpenAI, Sam Altman in federal court&amp;quot;]. CNBC. August 5, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;De Avila, Joseph. [https://www.wsj.com/tech/ai/elon-musk-revives-lawsuit-against-openai-and-sam-altman-d7e5a87c &amp;quot;Elon Musk Revives Lawsuit Against OpenAI and Sam Altman&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. August 5, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On April 9, 2024, OpenAI countersued Musk in federal court, alleging that he had engaged in &amp;quot;bad-faith tactics&amp;quot; to slow the company&#039;s progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference.&amp;lt;ref&amp;gt;Habeshian, Sareen. [https://www.axios.com/2025/04/10/openai-elon-musk-countersuit &amp;quot;OpenAI countersues Elon Musk in bitter legal battle&amp;quot;]. &#039;&#039;Axios&#039;&#039;. April 10, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring its willingness to match or exceed any rival offer.&amp;lt;ref&amp;gt;Hammond, George. [https://www.ft.com/content/3a673ed2-26d5-47af-9028-8af7d742c2e7 &amp;quot;Elon Musk-led consortium offers $100bn to take control of OpenAI&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. February 10, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Toonkel-2025&amp;quot;&amp;gt;Toonkel, Jessica. [https://www.wsj.com/tech/elon-musk-openai-bid-4af12827 &amp;quot;Elon Musk-Led Group Makes $97.4 Billion Bid for Control of OpenAI&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. February 10, 2025.&amp;lt;/ref&amp;gt; The offer was rejected on February 14, 2025, with OpenAI stating that it was not for sale,&amp;lt;ref&amp;gt;[https://www.theguardian.com/technology/2025/feb/14/openai-elon-musk &amp;quot;OpenAI rejects $97.4bn Musk bid and says company is not for sale&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. February 14, 2025.&amp;lt;/ref&amp;gt; but the bid complicated Altman&#039;s restructuring plan by setting a floor on how much the nonprofit&#039;s controlling stake should be valued.&amp;lt;ref name=&amp;quot;Toonkel-2025&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI &amp;quot;benefits all of humanity&amp;quot; rather than &amp;quot;the private gain of any person&amp;quot;. In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, receiving equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI&#039;s leadership described the change as necessary to secure additional investment, and claimed that the nonprofit&#039;s founding mission to ensure AGI &amp;quot;benefits all of humanity&amp;quot; would be better fulfilled under the new structure.&amp;lt;ref&amp;gt;Booth, Harry. [https://time.com/7279977/openai-for-profit-letter-elon-musk/ &amp;quot;OpenAI Wants to Go For-Profit. Experts Say Regulators Should Step In&amp;quot;]. &#039;&#039;TIME&#039;&#039;. April 24, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The plan has been criticized by former employees. A legal letter named &amp;quot;Not For Private Gain&amp;quot; asked the [[Attorney General of California|attorneys general of California]] and [[Attorney General of Delaware|Delaware]] to intervene, stating that the restructuring would be illegal and would strip governance safeguards, including the oversight roles of the nonprofit and of the attorneys general.&amp;lt;ref&amp;gt;Tong, Anna. [https://www.reuters.com/business/group-that-opposed-openais-restructuring-raises-concerns-about-new-revamp-plan-2025-05-15/ &amp;quot;Group that opposed OpenAI&#039;s restructuring raises concerns about new revamp plan&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. May 15, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Goldman, Sharon. [https://fortune.com/article/ex-openai-employees-california-ag-for-profit-pivot-threat-nonprofit-mission/ &amp;quot;Ex-OpenAI employees sign open letter to California AG: For-profit pivot poses &#039;palpable threat&#039; to nonprofit mission&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt; The letter argues that OpenAI&#039;s complex structure was deliberately designed to keep the company accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of how much equity it might receive in exchange.&amp;lt;ref name=&amp;quot;Piper-2025&amp;quot;&amp;gt;Piper, Kelsey. [https://www.vox.com/future-perfect/410261/openai-non-profit-transition-letter-sam-altman-artificial-intelligence &amp;quot;OpenAI&#039;s nonprofit structure was supposed to protect you. What went wrong?&amp;quot;]. &#039;&#039;Vox&#039;&#039;. April 24, 2025.&amp;lt;/ref&amp;gt; PBCs can choose how to balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission.&amp;lt;ref name=&amp;quot;Reuters-2025&amp;quot;&amp;gt;[https://www.reuters.com/technology/artificial-intelligence/openai-lays-out-plan-shift-new-for-profit-structure-2024-12-27/ &amp;quot;OpenAI outlines new for-profit structure in bid to stay ahead in costly AI race&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. January 2, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Piper-2025&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== 2025 restructuring ====&lt;br /&gt;
On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware.&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2025/10/28/technology/openai-restructure-for-profit-company.html &amp;quot;OpenAI Restructures to Become a More Traditional For-Profit Company&amp;quot;]. &#039;&#039;New York Times&#039;&#039;. October 28, 2025.&amp;lt;/ref&amp;gt; Under the new structure, OpenAI&#039;s for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed to the OpenAI Foundation.&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;reuters-restructure&amp;quot;&amp;gt;[https://www.reuters.com/business/microsoft-openai-reach-new-deal-allow-openai-restructure-2025-10-28/ &amp;quot;Microsoft, OpenAI reach deal removing fundraising constraints for ChatGPT maker&amp;quot;]. &#039;&#039;Reuters&#039;&#039;.&amp;lt;/ref&amp;gt; The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors.&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;cnbc-restructure&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All members of the OpenAI Group PBC board of directors will be appointed by the OpenAI Foundation, which can remove them at any time.&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt; Members of the Foundation&#039;s board will also serve on the for-profit board.&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt; The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an [[initial public offering]], which Altman claimed was the most likely path forward.&amp;lt;ref name=&amp;quot;reuters-restructure&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Partnership with Microsoft ===&lt;br /&gt;
In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value.&amp;lt;ref&amp;gt;Jin, Berber; Kruppa, Miles. [https://www.wsj.com/articles/chatgpt-creator-openai-is-in-talks-for-tender-offer-that-would-value-it-at-29-billion-11672949279 &amp;quot;ChatGPT Creator OpenAI Is in Talks for Tender Offer That Would Value It at $29 Billion&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. January 5, 2023.&amp;lt;/ref&amp;gt; On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, provided partly in the form of credits for Microsoft&#039;s cloud-computing service &#039;&#039;Azure&#039;&#039;.&amp;lt;ref&amp;gt;[https://www.bloomberg.com/news/articles/2023-01-23/microsoft-makes-multibillion-dollar-investment-in-openai &amp;quot;Microsoft Adds $10 Billion to Investment in ChatGPT Maker OpenAI&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. January 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2023/01/23/microsoft-announces-multibillion-dollar-investment-in-chatgpt-maker-openai.html &amp;quot;Microsoft announces multibillion-dollar investment in ChatGPT-maker OpenAI&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. January 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From September to December 2023, Microsoft rebranded its various Copilot products under the unified [[Microsoft Copilot]] name, added Copilot to many Windows installations, and released Microsoft Copilot mobile apps.&amp;lt;ref name=&amp;quot;unify&amp;quot;&amp;gt;Edwards, Nathan. [https://www.theverge.com/2023/9/21/23883798/microsoft-copilot-unified-windows-11-apps-launch-date &amp;quot;Microsoft&#039;s unified Copilot is coming to Windows, Edge, and everywhere else&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. September 21, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;androidlaunch&amp;quot;&amp;gt;Warren, Tom. [https://www.theverge.com/2023/12/26/24015198/microsoft-copilot-mobile-app-android-launch &amp;quot;Microsoft Copilot is now available as a ChatGPT-like app on Android&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. December 26, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;ioslaunch&amp;quot;&amp;gt;[https://www.theverge.com/2023/12/29/24019288/microsoft-copilot-app-available-iphone-ipad-ai &amp;quot;Microsoft&#039;s Copilot app is now available on iOS&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. December 29, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Following OpenAI&#039;s 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion.&amp;lt;ref name=&amp;quot;cnbc-restructure&amp;quot;&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2025/10/28/open-ai-for-profit-microsoft.html &amp;quot;OpenAI completes restructure, solidifying Microsoft as a major shareholder&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. October 28, 2025.&amp;lt;/ref&amp;gt; In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its [[right of first refusal]] over OpenAI&#039;s future cloud computing purchases.&amp;lt;ref name=&amp;quot;cnbc-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;verge-restructure&amp;quot;&amp;gt;Field, Hayden. [https://www.theverge.com/news/807875/openai-microsoft-for-profit-agi &amp;quot;OpenAI completed its for-profit restructuring — and struck a new deal with Microsoft&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. October 28, 2025.&amp;lt;/ref&amp;gt; As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, a milestone that must now be verified by an independent panel of experts.&amp;lt;ref name=&amp;quot;reuters-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;verge-restructure&amp;quot; /&amp;gt; The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and OpenAI to develop products with other companies.&amp;lt;ref name=&amp;quot;cnbc-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;nyt-restructure&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;verge-restructure&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Finances ===&lt;br /&gt;
In 2017, OpenAI spent $7.9&amp;amp;nbsp;million, a quarter of its functional expenses, on cloud computing alone.&amp;lt;ref&amp;gt;[https://www.reuters.com/article/us-microsoft-openai/microsoft-to-invest-1-billion-in-openai-idUSKCN1UH1H9 &amp;quot;Microsoft to invest $1 billion in OpenAI&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 22, 2019.&amp;lt;/ref&amp;gt; In comparison, [[DeepMind]]&#039;s total expenses in 2017 were $442&amp;amp;nbsp;million. In the summer of 2018, training OpenAI&#039;s &#039;&#039;Dota 2&#039;&#039; bots required renting 128,000 [[Central processing unit|CPUs]] and 256 [[Graphics processing unit|GPUs]] from Google for multiple weeks.&amp;lt;ref name=&amp;quot;wired investors&amp;quot; /&amp;gt; Microsoft&#039;s 2019 investment in OpenAI, totaling $1 billion, reportedly kicked off the &amp;quot;contemporary AI boom&amp;quot;.&amp;lt;ref&amp;gt;Brandom, Russell. [https://techcrunch.com/2026/02/28/billion-dollar-infrastructure-deals-ai-boom-data-centers-openai-oracle-nvidia-microsoft-google-meta/ &amp;quot;The billion-dollar infrastructure deals powering the AI boom&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2026-02-28.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2026/02/27/business/openai-funding.html &amp;quot;OpenAI Raises $110 Billion to Fuel Growth, Extending A.I. Boom&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2026-02-27.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In October 2024, OpenAI completed a $6.6 billion capital raise with a $157 billion valuation including investments from Microsoft, Nvidia, and SoftBank.&amp;lt;ref&amp;gt;Hu, Krystal. [https://www.reuters.com/technology/artificial-intelligence/openai-closes-66-billion-funding-haul-valuation-157-billion-with-investment-2024-10-02 &amp;quot;OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2 October 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On January 21, 2025, [[Donald Trump]] announced [[Stargate LLC|The Stargate Project]], a joint venture between OpenAI, [[Oracle Corporation|Oracle]], [[SoftBank Group|SoftBank]], and [[MGX (company)|MGX]] to build an AI infrastructure system in conjunction with the [[Federal government of the United States|US government]]. The project, which takes its name from OpenAI&#039;s existing &amp;quot;Stargate&amp;quot; supercomputer project, is estimated to cost $500 billion, which the partners planned to fund over the following four years.&amp;lt;ref&amp;gt;Jacobs, Jennifer. [https://www.cbsnews.com/news/trump-announces-private-sector-ai-infrastructure-investment/ &amp;quot;Trump announces up to $500 billion in private sector AI infrastructure investment&amp;quot;]. &#039;&#039;CBS News&#039;&#039;. January 22, 2025.&amp;lt;/ref&amp;gt; In July, the [[United States Department of Defense]] announced that OpenAI had received a $200 million contract for military AI applications, alongside Anthropic, Google, and [[XAI (company)|xAI]].&amp;lt;ref&amp;gt;Brodkin, Jon. [https://arstechnica.com/tech-policy/2025/07/groks-mechahitler-meltdown-didnt-stop-xai-from-winning-200m-military-deal/ &amp;quot;Grok&#039;s &amp;quot;MechaHitler&amp;quot; meltdown didn&#039;t stop xAI from winning $200M military deal&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. July 15, 2025.&amp;lt;/ref&amp;gt; In the same month, the company made a deal with the [[UK Government]] to use ChatGPT and other AI tools in public services.&amp;lt;ref&amp;gt;[https://www.bbc.com/news/articles/czdv68gejm7o &amp;quot;OpenAI and UK sign deal to use AI in public services&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. July 22, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/world/uk/uk-chatgpt-maker-openai-sign-new-strategic-partnership-2025-07-21/ &amp;quot;UK and ChatGPT maker OpenAI sign new strategic partnership&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 21, 2025.&amp;lt;/ref&amp;gt; OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations.&amp;lt;ref&amp;gt;Tong, Anna. [https://www.reuters.com/sustainability/boards-policy-regulation/openai-launches-50-million-fund-support-nonprofits-community-organizations-2025-07-18/ &amp;quot;OpenAI launches $50 million fund to support nonprofits, community organizations&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. July 18, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, which was the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, [[Coatue Management|Coatue]], [[Altimeter Capital|Altimeter]] and [[Thrive Capital|Thrive]].&amp;lt;ref&amp;gt;Rooney, Kate. [https://www.cnbc.com/2025/03/31/openai-closes-40-billion-in-funding-the-largest-private-fundraise-in-history-softbank-chatgpt.html &amp;quot;OpenAI closes $40 billion funding round, largest private tech deal on record&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. March 31, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2025/03/31/openai-raises-40b-at-300b-post-money-valuation/ &amp;quot;OpenAI raises $40B at $300B post-money valuation&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. March 31, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2025, the company reported annualized revenue of $12 billion.&amp;lt;ref&amp;gt;[https://www.pymnts.com/news/artificial-intelligence/2025/openai-doubles-yearly-revenue-12-billion-dollars/ &amp;quot;OpenAI Doubles Yearly Revenue to $12 Billion&amp;quot;]. PYMNTS. July 30, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/business/openai-hits-12-billion-annualized-revenue-information-reports-2025-07-31/ &amp;quot;OpenAI hits $12 billion in annualized revenue, The Information reports&amp;quot;]. Reuters. July 30, 2025.&amp;lt;/ref&amp;gt; The increase from $3.7 billion in 2024 was driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users.&amp;lt;ref&amp;gt;[https://finance.yahoo.com/news/chatgpt-crosses-20m-paid-users-140519166.html &amp;quot;ChatGPT Crosses 20M Paid Users, OpenAI Poised for Record Growth&amp;quot;]. Yahoo Finance. April 2, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.cnbc.com/2025/08/04/openai-chatgpt-700-million-users.html &amp;quot;OpenAI&#039;s ChatGPT to hit 700 million weekly users, up 4x from last year&amp;quot;]. CNBC. August 4, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.saastr.com/openai-crosses-12-billion-arr-the-3-year-sprint-that-redefined-whats-possible-in-scaling-software/ &amp;quot;OpenAI Crosses $12 Billion ARR: The 3-Year Sprint That Redefined What&#039;s Possible in Scaling Software&amp;quot;]. SaaStr. August 20, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In February 2026, OpenAI raised $110{{nbsp}}billion at a $730{{nbsp}}billion valuation, led by [[Amazon (company)|Amazon]] ($50{{nbsp}}billion), [[SoftBank]] ($30{{nbsp}}billion), and [[Nvidia]] ($30{{nbsp}}billion), surpassing the prior round as the largest private technology fundraise in history.&amp;lt;ref name=&amp;quot;openai-110b-2026&amp;quot;&amp;gt;Brandom, Russell. [https://techcrunch.com/2026/02/27/openai-raises-110b-in-one-of-the-largest-private-funding-rounds-in-history/ &amp;quot;OpenAI raises $110B in one of the largest private funding rounds in history&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2026-02-27.&amp;lt;/ref&amp;gt; The round was later extended to $120{{nbsp}}billion in March 2026.&amp;lt;ref name=&amp;quot;openai-120b-2026&amp;quot;&amp;gt;Rooney, Kate. [https://www.cnbc.com/2026/03/24/openai-secures-an-extra-10-billion-in-record-funding-round-cfo-friar-says.html &amp;quot;OpenAI secures an extra $10 billion in record funding round&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2026-03-24.&amp;lt;/ref&amp;gt; In April 2026, the company announced that it had closed the round with $122 billion in committed capital at a post-money valuation of $852 billion.&amp;lt;ref&amp;gt;[https://openai.com/index/accelerating-the-next-phase-ai/ &amp;quot;OpenAI raises $122 billion to accelerate the next phase of AI&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2026-03-24.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Business model ==&lt;br /&gt;
OpenAI employs a tiered revenue model that combines free consumer access, paid subscription services, enterprise licensing, and [[application programming interface]] (API) usage-based pricing.&amp;lt;ref name=&amp;quot;Friar2026&amp;quot;&amp;gt;Friar, Sarah. [https://openai.com/index/a-business-that-scales-with-the-value-of-intelligence/ &amp;quot;A business that scales with the value of intelligence&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;.&amp;lt;/ref&amp;gt; The model reflects a freemium [[software as a service]] (SaaS) structure in which basic functionality is provided at no cost, while advanced capabilities are offered through paid plans such as ChatGPT Plus and enterprise solutions.&lt;br /&gt;
Freemium SaaS models are designed to convert free users into paying customers by emphasizing perceived value, user satisfaction, and feature differentiation.&amp;lt;ref name=&amp;quot;Hsu2025&amp;quot;&amp;gt;Hsu, P.-F. &amp;quot;Converting free users to paying customers in freemium services: a SaaS success model&amp;quot;. &#039;&#039;Information Systems &amp;amp; E-Business Management&#039;&#039;.&amp;lt;/ref&amp;gt; Research on SaaS conversion indicates that perceived usefulness and alignment with user needs are central predictors of willingness to subscribe.&amp;lt;ref name=&amp;quot;Hsu2025&amp;quot; /&amp;gt;&lt;br /&gt;
Studies applying the [[technology acceptance model]] (TAM) to ChatGPT usage have found that perceived usefulness and perceived ease of use significantly influence behavioral intention and continued adoption.&amp;lt;ref name=&amp;quot;Ma2025&amp;quot;&amp;gt;Ma, J. &amp;quot;Exploring User Adoption of ChatGPT: A Technology Acceptance Model Perspective&amp;quot;. &#039;&#039;International Journal of Human-Computer Interaction&#039;&#039;.&amp;lt;/ref&amp;gt; Workplace adoption research has similarly reported that perceived intelligence and information support increase knowledge acquisition and strengthen intent to use [[generative AI]] systems.&amp;lt;ref name=&amp;quot;Jo2024&amp;quot;&amp;gt;Jo, H. &amp;quot;AI in the Workplace: Examining the Effects of ChatGPT on Information Support and Knowledge Acquisition&amp;quot;. &#039;&#039;International Journal of Human-Computer Interaction&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
OpenAI&#039;s subscription and enterprise offerings are structured around scalable compute usage and API integration into third-party platforms.&amp;lt;ref name=&amp;quot;Friar2026&amp;quot; /&amp;gt; In 2025, the company reported significant revenue growth associated with expanded computing infrastructure and enterprise adoption.&amp;lt;ref name=&amp;quot;ReutersRevenue2026&amp;quot;&amp;gt;[https://www.reuters.com/business/openai-cfo-says-annualized-revenue-crosses-20-billion-2025-2026-01-19/ &amp;quot;OpenAI CFO says annualized revenue crosses $20 billion in 2025&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. January 19, 2026.&amp;lt;/ref&amp;gt; The introduction of diversified revenue streams, including subscription tiers, enterprise contracts, API licensing, and advertising experiments in certain segments, reflects a multi-channel monetization approach.&amp;lt;ref name=&amp;quot;ReutersCompute2026&amp;quot;&amp;gt;[https://www.reuters.com/technology/openai-sees-compute-spend-around-600-billion-by-2030-cnbc-reports-2026-02-20/ &amp;quot;OpenAI expects compute spend of around $600 billion through 2030, source says&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. February 20, 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
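The usage-based API component can be illustrated with a minimal, self-contained sketch; the per-token rates below are hypothetical placeholders, not OpenAI&#039;s published prices.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch of metered, usage-based API pricing; all rates are hypothetical.&lt;br /&gt;
PRICE_PER_1K_INPUT = 0.005   # USD per 1,000 input tokens (illustrative)&lt;br /&gt;
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1,000 output tokens (illustrative)&lt;br /&gt;
&lt;br /&gt;
def api_cost(input_tokens: int, output_tokens: int) -&gt; float:&lt;br /&gt;
    # Metered cost of a single API call, in US dollars.&lt;br /&gt;
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT&lt;br /&gt;
&lt;br /&gt;
# A month of usage: 2 million input tokens and 500,000 output tokens.&lt;br /&gt;
print(f&amp;quot;${api_cost(2_000_000, 500_000):.2f}&amp;quot;)  # prints $17.50&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;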
Financial market research has also documented abnormal stock returns among publicly traded firms referencing ChatGPT in regulatory filings, suggesting that generative AI adoption has been associated with investor expectations of productivity and revenue effects.&amp;lt;ref&amp;gt;Pietrzak, M. &amp;quot;A trillion dollars race—how ChatGPT affects stock prices&amp;quot;. &#039;&#039;Future Business Journal&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
The sustainability of freemium AI platforms has been linked in academic literature to continued enterprise integration across sectors, including finance, supply chains, and sustainability risk management.&amp;lt;ref&amp;gt;Roozkhosh, P. &amp;quot;Exploring the adoption and long-term effects of ChatGPT in a sustainable supply chain&amp;quot;. &#039;&#039;Flexible Services &amp;amp; Manufacturing Journal&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kong, K. Y. &amp;quot;Sustainability risk management: Exploring the role of artificial intelligence capabilities through an information-processing lens&amp;quot;. &#039;&#039;Risk Analysis&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The company&#039;s [[Burn rate|cash burn]] remains high because of the intensive computational costs required to train and operate large language models. It projected an $8 billion operating loss for 2025.&amp;lt;ref&amp;gt;[https://finance.yahoo.com/news/openai-hits-12-billion-annualized-015009168.html &amp;quot;OpenAI hits $12 billion in annualized revenue, The Information reports&amp;quot;]. Yahoo Finance. July 30, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Cole, Cyrus. [https://www.ainvest.com/news/openai-cash-burn-strategic-implications-ai-investors-2509/ &amp;quot;The OpenAI Cash Burn: Strategic Implications for AI Investors&amp;quot;]. AI Invest. September 4, 2025.&amp;lt;/ref&amp;gt; OpenAI has reported revised long-term spending projections totaling approximately $115 billion through 2029,&amp;lt;ref name=&amp;quot;yahoo&amp;quot;&amp;gt;[https://finance.yahoo.com/news/openai-expects-business-burn-115-022035561.html &amp;quot;Report: OpenAI expects business to burn $115 billion through 2029&amp;quot;]. Yahoo Finance. September 5, 2025.&amp;lt;/ref&amp;gt; with annual expenditures projected to rise to $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028.&amp;lt;ref name=&amp;quot;ground_news&amp;quot;&amp;gt;[https://the-decoder.com/openai-has-reportedly-misjudged-its-cash-burn-by-80-billion/ &amp;quot;OpenAI has reportedly misjudged its cash burn by $80 billion&amp;quot;]. The Decoder. September 5, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://ground.news/article/openai-faces-115-billion-cash-burn-by-2029 &amp;quot;OpenAI Expects Business to Burn $115 Billion Through 2029, The Information reports&amp;quot;]. Ground News. September 5, 2025.&amp;lt;/ref&amp;gt; These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development.&amp;lt;ref&amp;gt;[https://www.theinformation.com/articles/openai-says-business-will-burn-115-billion-2029 &amp;quot;OpenAI Says Its Business Will Burn $115 Billion Through 2029&amp;quot;]. The Information. September 5, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The company&#039;s financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030.&amp;lt;ref name=&amp;quot;ground_news&amp;quot; /&amp;gt; The spending trajectory reflects both the capital requirements of scaling frontier AI systems and OpenAI&#039;s stated aim of retaining a leading position in the artificial intelligence industry.&amp;lt;ref&amp;gt;[https://www.wired.com/story/openai-valuation-500-billion-skepticism/ &amp;quot;OpenAI Is Poised to Become the Most Valuable Startup Ever. Should It Be?&amp;quot;]. Wired. August 19, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors, valuing the company at $500 billion and making it the world&#039;s most valuable privately held company, surpassing [[SpaceX]].&amp;lt;ref&amp;gt;Kinder, Tabby. [https://www.ft.com/content/f6befd14-6e8e-497d-98c9-6894b4cca7e4 &amp;quot;OpenAI overtakes SpaceX after hitting $500bn valuation&amp;quot;]. &#039;&#039;www.ft.com&#039;&#039;. 2025-10-02.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Acquisitions ===&lt;br /&gt;
In August 2023, it was announced that OpenAI had acquired the [[New York City|New York]]-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.&amp;lt;ref&amp;gt;[https://www.reuters.com/markets/deals/openai-acquires-start-up-global-illumination-work-core-products-chatgpt-2023-08-16/ &amp;quot;OpenAI acquires start-up Global Illumination to work on core products, ChatGPT&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. August 16, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/06/24/openai-buys-a-remote-collaboration-platform/ &amp;quot;OpenAI buys a remote collaboration platform&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. June 24, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2025, OpenAI reached a deal with [[CoreWeave]] to acquire $350 million worth of CoreWeave shares and access to AI infrastructure in return for $11.9 billion paid over five years. Microsoft was already CoreWeave&#039;s biggest customer in 2024.&amp;lt;ref name=&amp;quot;ReuterCore&amp;quot;&amp;gt;Wang, Echo. [https://www.reuters.com/technology/artificial-intelligence/coreweave-strikes-12-billion-contract-with-openai-ahead-ipo-sources-say-2025-03-10/ &amp;quot;CoreWeave strikes $12 billion contract with OpenAI ahead of IPO, sources say&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. March 10, 2025.&amp;lt;/ref&amp;gt; Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future [[initial public offering]] by OpenAI, while ensuring Microsoft&#039;s continued access to advanced AI models.&amp;lt;ref name=&amp;quot;Reuters_2025&amp;quot; /&amp;gt; On May 21, 2025, OpenAI announced the $6.5 billion acquisition of [[Io (company)|io]], an AI hardware start-up founded by former Apple designer [[Jony Ive]] in 2024.&amp;lt;ref&amp;gt;Eadicicco, Lisa. [https://www.cnn.com/2025/05/21/tech/jony-ive-apple-design-chief-openai?iid=cnn_buildContentRecirc_end_recirc &amp;quot;Former Apple design chief Jony Ive is joining OpenAI&amp;quot;]. &#039;&#039;CNN&#039;&#039;. 2025-05-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Fraser, Graham. [https://www.bbc.com/news/articles/c5y66yemjdmo &amp;quot;Apple iPhone designer Sir Jony Ive joins ChatGPT-maker OpenAI&amp;quot;]. &#039;&#039;[[BBC News]]&#039;&#039;. 2025-05-22.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Isaac, Mike. [https://www.nytimes.com/2025/05/21/technology/openai-jony-ive-deal.html?partner=slack&amp;amp;smid=sl-share &amp;quot;OpenAI Unites With Jony Ive in $6.5 Billion Deal to Create A.I. Devices&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2025-05-21.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1&amp;amp;nbsp;billion in an all-stock deal and appointed Statsig&#039;s founding CEO [[Vijaye Raji]] as OpenAI&#039;s chief technology officer of applications.&amp;lt;ref&amp;gt;Metz, Rachel. [https://www.bloomberg.com/news/articles/2025-09-02/openai-to-buy-product-testing-startup-statsig-for-1-1-billion &amp;quot;OpenAI to Buy Product Testing Startup Statsig for $1.1 Billion&amp;quot;]. Bloomberg News. September 2, 2025.&amp;lt;/ref&amp;gt; The company also announced development of an AI-driven hiring service designed to rival [[LinkedIn]].&amp;lt;ref name=&amp;quot;Verge&amp;quot;&amp;gt;Field, Hayden. [https://www.theverge.com/openai/772026/openai-is-working-on-a-type-of-linkedin-competitor &amp;quot;OpenAI is working on a type of LinkedIn competitor&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 4 September 2025.&amp;lt;/ref&amp;gt; OpenAI acquired personal finance app Roi in October 2025.&amp;lt;ref&amp;gt;Bellan, Rebecca. [https://techcrunch.com/2025/10/03/with-its-latest-acqui-hire-openai-is-doubling-down-on-personalized-consumer-ai/ &amp;quot;With its latest acqui-hire, OpenAI is doubling down on personalized consumer AI&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-10-03.&amp;lt;/ref&amp;gt; In October 2025, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications. The Sky team joined OpenAI, and the company announced plans to integrate Sky&#039;s capabilities into ChatGPT.&amp;lt;ref&amp;gt;[https://openai.com/index/openai-acquires-software-applications-incorporated/ &amp;quot;OpenAI acquires Software Applications Incorporated, maker of Sky&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2026-01-02.&amp;lt;/ref&amp;gt; In December 2025, it was announced OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount.&amp;lt;ref&amp;gt;Dharma, RanjithKumar. [https://www.verdict.co.uk/openai-agrees-buy-neptune/ &amp;quot;OpenAI agrees to buy Neptune&amp;quot;]. &#039;&#039;Verdict&#039;&#039;. 2025-12-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In January 2026, it was announced that OpenAI had acquired healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI&#039;s ChatGPT Health product and was intended to strengthen the company&#039;s medical data and healthcare artificial intelligence capabilities.&amp;lt;ref&amp;gt;Jose, Teena. [https://www.easterneye.biz/openai-acquires-torch-health-tech-medical-ai/ &amp;quot;OpenAI Acquires Health Tech Startup Torch to Expand Medical AI&amp;quot;]. &#039;&#039;www.easterneye.biz&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2026/01/12/open-ai-torch-health-care-technology.html &amp;quot;OpenAI acquires health-care technology startup Torch for $60 million, source says&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2026-01-12.&amp;lt;/ref&amp;gt; OpenAI acquired [[Python (programming language)|Python]] tool developer Astral in March 2026.&amp;lt;ref&amp;gt;Orland, Kyle. [https://arstechnica.com/ai/2026/03/openai-is-acquiring-open-source-python-tool-maker-astral/ &amp;quot;OpenAI is acquiring open source Python tool-maker Astral&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2026-03-19.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Corporate partnerships ===&lt;br /&gt;
{{Broader|Content moderation and working conditions}}&lt;br /&gt;
OpenAI has been criticized for outsourcing the [[Labeled data|annotation of data sets]] to [[Sama (company)|Sama]], a company based in San Francisco that employed workers in [[Kenya]]. The annotations were used to train an AI model to detect toxicity, which could then be used to [[Content moderation|moderate toxic content]], notably from ChatGPT&#039;s training data and outputs. The text to be annotated, however, often contained detailed descriptions of various types of violence, including sexual violence. A &#039;&#039;[[Time (magazine)|Time]]&#039;&#039; investigation uncovered that OpenAI began sending snippets of data to Sama as early as November 2021, and the four Sama employees it interviewed described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which the equivalent of between $1.32 and $2.00 per hour post-tax was passed on to the annotators. Sama&#039;s spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management.&amp;lt;ref name=&amp;quot;Time1&amp;quot;&amp;gt;Perrigo, Billy. [https://time.com/6247678/openai-chatgpt-kenya-workers/ &amp;quot;Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer&amp;quot;]. &#039;&#039;Time&#039;&#039;. January 18, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2024, OpenAI began collaborating with [[Broadcom]] to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by [[TSMC]] on a [[3 nm process]] node. The initiative was intended to reduce OpenAI&#039;s dependence on Nvidia GPUs, which are costly and in high demand.&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/openai-set-finalize-first-custom-chip-design-this-year-2025-02-10/ &amp;quot;Exclusive: OpenAI set to finalize first custom chip design this year&amp;quot;]. &#039;&#039;reuters.com&#039;&#039;. February 10, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;reuters-openai-broadcom&amp;quot;&amp;gt;Lawrence, Katie. [https://www.reuters.com/business/openai-set-start-mass-production-its-own-ai-chips-with-broadcom-2026-ft-reports-2025-09-05/ &amp;quot;OpenAI set to start mass production of its own AI chips with Broadcom in 2026, FT reports&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 5 September 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;ft-openai-broadcom&amp;quot;&amp;gt;Hern, Alexandra. [https://www.ft.com/content/e8cc6d99-d06e-4e9b-a54f-29317fa68d6f &amp;quot;OpenAI to launch first AI chip next year in partnership with Broadcom&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. 4 September 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In January 2024, [[Arizona State University]] purchased ChatGPT Enterprise in OpenAI&#039;s first deal with a university.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/01/18/openai-announces-first-partnership-with-a-university.html &amp;quot;OpenAI announces first partnership with a university&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. January 18, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2024, [[Apple Inc.]] signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new [[Apple Intelligence]] initiative.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/06/10/apple-brings-chatgpt-to-its-apps-including-siri/ &amp;quot;Apple brings ChatGPT to its apps, including Siri&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. June 10, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mickle, Tripp. [https://www.nytimes.com/2024/06/10/technology/apple-intelligence-openai.html &amp;quot;Apple Jumps Into A.I. Fray With Apple Intelligence&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. 2024-06-10.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2025, OpenAI began renting [[Google Cloud Platform|Google Cloud]]&#039;s Tensor Processing Units ([[TPU (computing)|TPUs]]) to support [[ChatGPT]] and related services, marking its first meaningful use of non‑Nvidia AI chips.&amp;lt;ref&amp;gt;[https://www.reuters.com/business/openai-turns-googles-ai-chips-power-its-products-information-reports-2025-06-27/ &amp;quot;OpenAI turns to Google&#039;s AI chips to power its products, source says&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2025-06-27.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the following five years.&amp;lt;ref&amp;gt;Jin, Berber. [https://www.wsj.com/business/openai-oracle-sign-300-billion-computing-deal-among-biggest-in-history-ff27c8fe &amp;quot;OpenAI, Oracle Sign $300 Billion Computing Deal, Among Biggest in History&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2025-09-10.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks. As of January 2026, the deal had not been realized, and the two sides were rethinking the future of their partnership.&amp;lt;ref&amp;gt;Jin, Berber; Whelan, Robbie. [https://www.wsj.com/tech/nvidia-openai-100-billion-deal-data-centers-d2f85cae &amp;quot;Nvidia to Invest Up to $100 Billion in OpenAI&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2025-09-22.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Jin, Berber; Whelan, Robbie. [https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3 &amp;quot;The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2026-01-30.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In October 2025, OpenAI announced a multibillion-dollar deal with [[AMD]].&amp;lt;ref&amp;gt;[https://www.thetimes.com/business-money/companies/article/open-ai-signs-multibillion-dollar-deal-with-amd-for-processors-9zgxv9hhj &amp;quot;Open AI signs multibillion-dollar deal with AMD for processors&amp;quot;]. &#039;&#039;www.thetimes.com&#039;&#039;. 2025-10-06.&amp;lt;/ref&amp;gt; OpenAI committed to purchasing six gigawatts of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share-price targets.&amp;lt;ref&amp;gt;Whelan, Robbie; Jin, Berber. [https://www.wsj.com/tech/ai/openai-amd-deal-ai-chips-ed92cc42 &amp;quot;OpenAI, AMD Announce Massive Computing Deal, Marking New Phase of AI Boom&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2025-10-06.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In December 2025, [[The Walt Disney Company|Disney]] said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal allowing users to generate videos with Sora, OpenAI&#039;s short-form AI video platform. More than 200 Disney, [[Marvel Studios|Marvel]], [[Star Wars]] and [[Pixar]] characters would be available to OpenAI users.&amp;lt;ref&amp;gt;Fritz, Ben; De Avila, Joseph. [https://www.wsj.com/business/media/disney-to-invest-1-billion-in-openai-license-characters-for-use-in-chatgpt-sora-3a4916e2 &amp;quot;Disney to Invest $1 Billion in OpenAI and License Characters for Use in ChatGPT, Sora&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2025-12-11.&amp;lt;/ref&amp;gt; Disney exited the deal in late March 2026 after OpenAI discontinued Sora.&amp;lt;ref&amp;gt;[https://www.theverge.com/ai-artificial-intelligence/899850/openai-sora-ai-chatgpt &amp;quot;OpenAI just gave up Sora and its billion-dollar Disney deal&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. March 24, 2026.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Macklin, Samantha. [https://lamag.com/media/disneys-openai-split-signals-deeper-uncertainty-for-hollywood/ &amp;quot;Disney&#039;s OpenAI Split Signals Deeper Uncertainty For Hollywood&amp;quot;]. &#039;&#039;Los Angeles Magazine&#039;&#039;. 2026-03-26.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In early 2026, [[Amazon (company)|Amazon]] entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI&#039;s models could be integrated into Amazon&#039;s digital assistant [[Amazon Alexa|Alexa]] and other internal projects.&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2026/02/04/open-ai-alexa-amazon-investment.html &amp;quot;OpenAI models could help power Alexa as part of Amazon investment deal&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2026-02-04.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Government contracting ===&lt;br /&gt;
OpenAI provides [[Large language model|LLMs]] to the [[DARPA Prize Competitions|Artificial Intelligence Cyber Challenge]] and to the [[Advanced Research Projects Agency for Health]].&amp;lt;ref&amp;gt;[https://aicyberchallenge.com/openai/ &amp;quot;OpenAI&amp;quot;]. &#039;&#039;Artificial Intelligence Cyber Challenge&#039;&#039;.&amp;lt;/ref&amp;gt; In October 2024, [[The Intercept]] revealed that OpenAI&#039;s tools are considered &amp;quot;essential&amp;quot; for [[United States Africa Command|AFRICOM]]&#039;s mission and included in an &amp;quot;Exception to Fair Opportunity&amp;quot; contractual agreement between the [[United States Department of Defense]] (DoD) and [[Microsoft]].&amp;lt;ref name=&amp;quot;intercept-africom&amp;quot;&amp;gt;Biddle, Sam. [https://theintercept.com/2024/10/25/africom-microsoft-openai-military/ &amp;quot;Pentagon Purchased OpenAI Tools for Military Operations Across Africa&amp;quot;]. &#039;&#039;The Intercept&#039;&#039;. 2024-10-25.&amp;lt;/ref&amp;gt; In December 2024, OpenAI said it would partner with defense-tech company [[Anduril Industries|Anduril]] to build drone defense technologies for the United States and its allies.&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2024/12/04/1107897/openais-new-defense-contract-completes-its-military-pivot &amp;quot;OpenAI&#039;s new defense contract completes its military pivot&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2025, OpenAI&#039;s Chief Product Officer, Kevin Weil, was commissioned as a [[lieutenant colonel]] in the [[United States Army|U.S. Army]] and joined [[Detachment 201]] as a senior advisor.&amp;lt;ref&amp;gt;[https://www.army.mil/article/286317/army_launches_detachment_201_executive_innovation_corps_to_drive_tech_transformation &amp;quot;Army Launches Detachment 201: Executive Innovation Corps to Drive Tech Transformation&amp;quot;]. &#039;&#039;www.army.mil&#039;&#039;. 2025-06-13.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2025, the DoD awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT.&amp;lt;ref&amp;gt;[https://www.morningstar.com/news/dow-jones/202506175775/openai-gets-pentagon-contract-as-tech-companies-eye-defense-sector-2nd-update &amp;quot;OpenAI Gets Pentagon Contract as Tech Companies Eye Defense Sector — 2nd Update&amp;quot;]. &#039;&#039;Morningstar, Inc.&#039;&#039;. 2025-06-17.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Li, Katherine. [https://www.businessinsider.com/open-ai-going-big-defense-tech-new-pentagon-deal-2025 &amp;quot;OpenAI is going big into government tech&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On 28 February 2026, after OpenAI&#039;s competitor [[Anthropic]] refused to authorize the DoD to use its AI systems for [[mass surveillance]] or [[autonomous weapons systems]], Anthropic was labeled a &amp;quot;supply-chain risk&amp;quot; by the DoD and the Trump administration decided to stop using its models across the government.&amp;lt;ref&amp;gt;Barnes, Julian E. [https://www.nytimes.com/2026/02/27/us/politics/anthropic-military-ai.html &amp;quot;Trump Orders U.S. Agencies to Stop Using Anthropic AI Tech After Pentagon Standoff&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2026-02-27.&amp;lt;/ref&amp;gt; The same day, OpenAI announced that it had reached an agreement with the DoD to deploy its models in the government&#039;s classified network.&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;[https://www.politico.com/news/2026/02/28/openai-announces-new-deal-with-pentagon-including-ethical-safeguards-00805546 &amp;quot;OpenAI announces new deal with Pentagon — including ethical safeguards&amp;quot;]. Politico. 28 February 2026.&amp;lt;/ref&amp;gt; CEO Sam Altman wrote that &amp;quot;two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems&amp;quot;, that the DoD agreed with these principles, and that they were included in the agreement.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Gabbatt, Adam. [https://www.theguardian.com/technology/2026/feb/28/openai-us-military-anthropic &amp;quot;OpenAI to work with Pentagon after Anthropic dropped by Trump over company&#039;s ethics concerns&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 2026-02-28.&amp;lt;/ref&amp;gt; However, while the agreement mentioned existing law and allowed OpenAI to implement some technical safeguards, it did not incorporate legally binding prohibitions on domestic mass surveillance or fully autonomous weapons.&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2026/03/02/1133850/openais-compromise-with-the-pentagon-is-what-anthropic-feared/ &amp;quot;OpenAI&#039;s &amp;quot;compromise&amp;quot; with the Pentagon is what Anthropic feared&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2026-03-02.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.nytimes.com/2026/03/01/technology/anthropic-defense-dept-openai-talks.html &amp;quot;How Talks Between Anthropic and the Defense Dept. Fell Apart&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2026-03-01.&amp;lt;/ref&amp;gt; Following backlash over the potential use of ChatGPT for surveillance, OpenAI amended the contract to include more safeguards, though only excerpts of the contract were made public and critics remained concerned that it was purposefully vague and contained carve-outs for domestic surveillance.&amp;lt;ref&amp;gt;[https://www.nbcnews.com/tech/tech-news/openai-alters-deal-pentagon-critics-sound-alarm-surveillance-rcna261357 &amp;quot;OpenAI alters deal with Pentagon as critics sound alarm over surveillance&amp;quot;]. &#039;&#039;NBC News&#039;&#039;. 2026-03-03.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Curi, Maria. [https://www.axios.com/2026/03/03/openai-pentagon-ai-surveillance &amp;quot;Scoop: OpenAI, Pentagon add more surveillance protections to AI deal&amp;quot;]. &#039;&#039;Axios&#039;&#039;. 2026-03-03.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Services ==&lt;br /&gt;
&#039;&#039;Main article: [[Products and applications of OpenAI]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Products ===&lt;br /&gt;
* [[ChatGPT]]&lt;br /&gt;
** [[ChatGPT Deep Research]]&lt;br /&gt;
** [[ChatGPT Search]]&lt;br /&gt;
** [[ChatGPT Atlas]]&lt;br /&gt;
* [[OpenAI Codex (AI agent)|OpenAI Codex]]&lt;br /&gt;
* [[Sora (text-to-video model)]]&lt;br /&gt;
* [[Whisper (speech recognition system)]]&lt;br /&gt;
* OpenAI Prism&lt;br /&gt;
* An [[API]] that gives access to various OpenAI models (see the usage sketch below)&lt;br /&gt;
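A minimal sketch of how a developer might call the API through OpenAI&#039;s official Python client is shown below; the model name is an illustrative placeholder rather than a recommendation.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from openai import OpenAI  # official Python SDK (pip install openai)&lt;br /&gt;
&lt;br /&gt;
client = OpenAI()  # reads the OPENAI_API_KEY environment variable&lt;br /&gt;
&lt;br /&gt;
# Single-turn chat request; the model name is an illustrative placeholder.&lt;br /&gt;
response = client.chat.completions.create(&lt;br /&gt;
    model=&amp;quot;gpt-4o-mini&amp;quot;,&lt;br /&gt;
    messages=[{&amp;quot;role&amp;quot;: &amp;quot;user&amp;quot;, &amp;quot;content&amp;quot;: &amp;quot;Say hello in one sentence.&amp;quot;}],&lt;br /&gt;
)&lt;br /&gt;
print(response.choices[0].message.content)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;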
&lt;br /&gt;
=== Development ===&lt;br /&gt;
In February 2019, [[GPT-2]] was announced, which gained attention for its ability to generate human-like text.&amp;lt;ref name=&amp;quot;guardian&amp;quot;&amp;gt;Hern, Alex. [https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction &amp;quot;New AI fake text generator may be too dangerous to release, say creators&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. February 14, 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2020, OpenAI announced [[GPT-3]], a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply &#039;&#039;the API&#039;&#039;, would form the heart of its first commercial product.&amp;lt;ref name=&amp;quot;2020-06-11_Bloomberg&amp;quot;&amp;gt;Vance, Ashlee. [https://www.bloomberg.com/news/articles/2020-06-11/trillions-of-words-analyzed-openai-sets-loose-ai-language-colossus &amp;quot;Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus&amp;quot;]. &#039;&#039;[[Bloomberg News]]&#039;&#039;. June 11, 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish [[Anthropic]].&amp;lt;ref&amp;gt;Moss, Sebastian. [https://aibusiness.com/verticals/eleven-openai-employees-break-off-to-establish-anthropic-raise-124m &amp;quot;Eleven OpenAI Employees Break Off to Establish Anthropic, Raise $124 Million&amp;quot;]. &#039;&#039;AI Business&#039;&#039;. June 2, 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2021, OpenAI introduced [[DALL-E]], a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture.&amp;lt;ref&amp;gt;[https://venturebeat.com/2021/01/05/openai-debuts-dall-e-for-generating-images-from-text/ &amp;quot;OpenAI debuts DALL-E for generating images from text&amp;quot;]. VentureBeat. January 5, 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:UK national football team considering compete in UEFA Euro and FIFA World Cup – ChatGPT.jpg|thumb|The release of [[ChatGPT]] was a major event in the [[AI boom]]. By January 2023, ChatGPT had become what was then the fastest-growing consumer software application in history, gaining over 100 million users in two months.&amp;lt;ref&amp;gt;Porter, Jon. [https://www.theverge.com/2023/11/6/23948386/chatgpt-active-user-count-openai-developer-conference &amp;quot;ChatGPT continues to be one of the fastest-growing services ever&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 6 November 2023.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
In December 2022, OpenAI received widespread media coverage after launching a free preview of [[ChatGPT]], its new AI [[chatbot]] based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.&amp;lt;ref&amp;gt;Roose, Kevin. [https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html &amp;quot;The Brilliance and Weirdness of ChatGPT&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; According to anonymous sources cited by [[Reuters]] in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.&amp;lt;ref&amp;gt;Dastin, Jeffrey. [https://www.reuters.com/business/chatgpt-owner-openai-projects-1-billion-revenue-by-2024-sources-2022-12-15/ &amp;quot;Exclusive: ChatGPT owner OpenAI projects $1 billion in revenue by 2024&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. December 15, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After ChatGPT was launched, Google announced a similar chatbot, [[Bard (chatbot)|Bard]], amid internal concerns that ChatGPT could threaten Google&#039;s position as a primary source of online information.&amp;lt;ref&amp;gt;[https://www.bbc.com/news/technology-64546299 &amp;quot;Bard: Google launches ChatGPT rival&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. February 6, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo &amp;quot;Google&#039;s AI chatbot Bard makes factual error in first demo&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. February 8, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into [[Microsoft Bing]], [[Microsoft Edge|Edge]], [[Microsoft 365]] and other products.&amp;lt;ref&amp;gt;Dotan, Tom. [https://www.wsj.com/articles/microsoft-adds-chatgpt-ai-technology-to-bing-search-engine-11675793525 &amp;quot;Microsoft Adds ChatGPT AI Technology to Bing Search Engine&amp;quot;]. &#039;&#039;Wall Street Journal&#039;&#039;. February 7, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On March 14, 2023, OpenAI released [[GPT-4]], both as an API (with a waitlist) and as a feature of ChatGPT Plus.&amp;lt;ref&amp;gt;[https://openai.com/product/gpt-4 &amp;quot;GPT-4&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes.&amp;lt;ref&amp;gt;[https://www.nytimes.com/2023/11/06/technology/openai-custom-chatgpt.html &amp;quot;OpenAI Launches Custom ChatGPT Versions&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. November 6, 2023.&amp;lt;/ref&amp;gt; On November 14, 2023, OpenAI temporarily suspended new sign-ups for ChatGPT Plus due to high demand.&amp;lt;ref&amp;gt;Elstrom, Peter. [https://www.bloomberg.com/news/articles/2023-11-15/openai-pauses-new-signups-to-manage-overwhelming-demand &amp;quot;OpenAI Pauses New Signups to Manage Overwhelming Demand&amp;quot;]. &#039;&#039;Bloomberg&#039;&#039;. November 15, 2023.&amp;lt;/ref&amp;gt; Access for new subscribers re-opened a month later, on December 13.&amp;lt;ref&amp;gt;Idris, Abubakar. [https://themessenger.com/tech/openai-re-opens-chatgpt-plus-subscriptions &amp;quot;OpenAI Reopens ChatGPT Plus Subscriptions&amp;quot;]. &#039;&#039;The Messenger&#039;&#039;. December 13, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In December 2024, the company launched the [[Sora (text-to-video model)|Sora]] model.&amp;lt;ref&amp;gt;[https://economictimes.indiatimes.com/tech/artificial-intelligence/openai-releases-text-to-video-model-sora-for-chatgpt-plus-and-pro-users/articleshow/116154796.cms &amp;quot;OpenAI releases text-to-video model Sora for ChatGPT Plus and Pro users&amp;quot;]. &#039;&#039;The Economic Times&#039;&#039;. 2024-12-10.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/artificial-intelligence/openai-releases-text-to-video-model-sora-chatgpt-plus-pro-users-2024-12-09/ &amp;quot;OpenAI releases text-to-video model Sora for ChatGPT Plus and Pro users&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. December 10, 2024.&amp;lt;/ref&amp;gt; It also launched [[OpenAI o1]], an early [[reasoning model]] that was internally codenamed &#039;&#039;strawberry&#039;&#039;.&amp;lt;ref&amp;gt;Franzen, Carl. [https://venturebeat.com/ai/openai-launches-full-o1-model-with-34-reduced-error-rate-debuts-chatgpt-pro/ &amp;quot;OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2024-12-05.&amp;lt;/ref&amp;gt; Additionally, ChatGPT Pro—a $200/month subscription service offering unlimited o1 access and enhanced voice features—was introduced, and preliminary benchmark results for the upcoming [[OpenAI o3]] models were shared.&amp;lt;ref&amp;gt;Franzen, Carl. [https://venturebeat.com/ai/openai-confirms-new-frontier-models-o3-and-o3-mini/ &amp;quot;OpenAI confirms new frontier models o3 and o3-mini&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2024-12-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On January 23, 2025, OpenAI released Operator, an [[AI agent]] and tool for accessing websites to execute goals defined by users. The feature was initially available only to Pro users in the United States.&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2025/01/23/technology/openai-operator-launch.html &amp;quot;OpenAI Unveils A.I. Agent That Can Use Websites on Its Own&amp;quot;]. &#039;&#039;New York Times&#039;&#039;. 2025-01-23.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/ai/2025/01/openai-launches-operator-an-ai-agent-that-can-operate-your-computer/ &amp;quot;OpenAI launches Operator, an AI agent that can operate your computer&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-01-23.&amp;lt;/ref&amp;gt; OpenAI released its [[ChatGPT Deep Research|deep research agent]] nine days later; it scored 27% accuracy on the benchmark [[Humanity&#039;s Last Exam]] (HLE).&amp;lt;ref name=&amp;quot;Verge_20250203&amp;quot;&amp;gt;Lawler, Richard. [https://www.theverge.com/news/604902/chagpt-deep-research-ai-agent &amp;quot;ChatGPT&#039;s agent can now do deep research for you&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. February 3, 2025.&amp;lt;/ref&amp;gt; Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.&amp;lt;ref name=&amp;quot;giz2&amp;quot;&amp;gt;Barr, Kyle. [https://gizmodo.com/openais-gpt-4-5-may-arrive-next-week-but-gpt-5-is-just-around-the-corner-2000566442 &amp;quot;OpenAI&#039;s GPT-4.5 May Arrive Next Week, but GPT-5 Is Just Around the Corner&amp;quot;]. &#039;&#039;Gizmodo&#039;&#039;. 2025-02-20.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;fort3&amp;quot;&amp;gt;Nolan, Beatrice. [https://fortune.com/2025/02/14/sam-altman-openai-plans-gpt-5-release-timelines/ &amp;quot;Sam Altman lays out plans for GPT-5 and GPT-4.5 promising end of &#039;hated&#039; model picker&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2025, according to OpenAI, one of its experimental models performed at a gold-medal level at the [[International Mathematical Olympiad]].&amp;lt;ref name=&amp;quot;nature-maths&amp;quot;&amp;gt;Castelvecchi, Davide. [https://www.nature.com/articles/d41586-025-02343-x &amp;quot;DeepMind and OpenAI models solve maths problems at level of top students&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 24 July 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.cbsnews.com/news/humans-beat-ai-technology-google-openai-math-olympiad-machines-catching-up/ &amp;quot;Humans triumph over AI at annual math Olympiad, but the machines are catching up - CBS News&amp;quot;]. &#039;&#039;www.cbsnews.com&#039;&#039;. July 22, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On October 6, 2025, OpenAI demonstrated its Agent Builder platform during the company&#039;s DevDay event. The platform includes a visual drag-and-drop interface for building agentic workflows.{{citation needed|date=March 2026}}&lt;br /&gt;
&lt;br /&gt;
On October 21, 2025, OpenAI introduced [[ChatGPT Atlas]], a [[web browser]] which integrates ChatGPT into web navigation.&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-chatgpt-atlas/ &amp;quot;Introducing ChatGPT Atlas&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;O&#039;Brien, Matt. [https://apnews.com/article/openai-atlas-web-browser-chatgpt-google-ai-f59edaa239aebe26fc5a4a27291d717a &amp;quot;OpenAI launches Atlas browser to compete with Google Chrome&amp;quot;]. &#039;&#039;AP News&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Field, Hayden. [https://www.theverge.com/ai-artificial-intelligence/803475/openais-ai-powered-browser-chatgpt-atlas-google-chrome-competition-agent &amp;quot;OpenAI&#039;s AI-powered browser, ChatGPT Atlas, is here&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On December 11, 2025, OpenAI announced [[GPT-5.2]].&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2025/12/11/openai-intros-new-ai-model-gpt-5point2-says-better-at-professional-tasks.html &amp;quot;Sam Altman expects OpenAI to exit &#039;code red&#039; by January after launch of GPT-5.2 model&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2025-12-11.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On January 27, 2026, OpenAI introduced Prism, a [[LaTeX]]-native workspace meant to assist scientists with research and writing, such as drafting scientific papers, managing citations, and formatting equations.&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-prism/ &amp;quot;Introducing Prism&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2025-12-16.&amp;lt;/ref&amp;gt;{{primary source inline|date=March 2026}}&lt;br /&gt;
&lt;br /&gt;
In early 2026, reports indicated that OpenAI was working on a [[Comparison of source-code-hosting facilities|collaborative software development platform]] designed to compete with services such as [[GitHub]] and [[GitLab]].&amp;lt;ref&amp;gt;[https://www.developer-tech.com/news/openai-building-github-alternative-for-developer-toolchains/ &amp;quot;OpenAI building GitHub alternative for developer toolchains&amp;quot;]. &#039;&#039;Developer Tech&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Transparency ===&lt;br /&gt;
In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify the shift. OpenAI&#039;s former chief scientist Ilya Sutskever argued in 2023 that openly sharing increasingly capable models was risky, and that the safety reasons for not open-sourcing the most potent AI models would become &amp;quot;obvious&amp;quot; in a few years.&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview &amp;quot;OpenAI co-founder on company&#039;s past approach to openly sharing research: &amp;quot;We were wrong&amp;quot;&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. March 15, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks.&amp;lt;ref&amp;gt;[https://www.washingtonpost.com/technology/2025/09/15/openai-chatgpt-study-use-cases/ &amp;quot;Here&#039;s what the data says people ask ChatGPT&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. 2025-09-15.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Butts, Dylan. [https://www.cnbc.com/2025/09/17/openai-releases-first-of-kind-study-revealing-how-people-use-chatgpt.html &amp;quot;OpenAI releases first-of-kind study revealing how people are using ChatGPT for everyday tasks&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2025-09-17.&amp;lt;/ref&amp;gt; The study found that &amp;quot;non-work tasks&amp;quot; (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity.&amp;lt;ref&amp;gt;Orland, Kyle. [https://arstechnica.com/ai/2025/09/seven-things-we-learned-from-openais-first-study-on-chatgpt-usage/ &amp;quot;What do people actually use ChatGPT for? OpenAI provides some numbers.&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-09-15.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Alignment ===&lt;br /&gt;
In July 2023, OpenAI launched the superalignment project, aiming to determine within four years how to [[AI alignment|align]] future superintelligent systems.&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-superalignment/ &amp;quot;Introducing Superalignment&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. July 5, 2023.&amp;lt;/ref&amp;gt; OpenAI promised to dedicate 20% of its computing resources to the project, although team members later said they never received anything close to that share.&amp;lt;ref&amp;gt;Kahn, Jeremy. [https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/ &amp;quot;OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt; OpenAI ended the project in May 2024 after its co-leaders [[Ilya Sutskever]] and [[Jan Leike]] left the company.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/05/17/openai-superalignment-sutskever-leike.html &amp;quot;OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2024-05-17.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Leaked conversations ===&lt;br /&gt;
In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google due to an experimental &amp;quot;share with search engines&amp;quot; feature. The opt-in toggle, intended to let users make specific chats discoverable, caused some conversations containing personal details such as names, locations, and intimate topics to appear in search results when users enabled it accidentally while sharing links. OpenAI announced the feature&#039;s permanent removal on August 1, 2025, and began coordinating with search providers to remove the exposed content, emphasizing that the exposure was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about how AI systems handle sensitive data.&amp;lt;ref&amp;gt;Tangalakis-Lippert, Katherine. [https://www.businessinsider.com/openai-removes-chatgpt-feature-over-search-engine-privacy-concerns-2025-7 &amp;quot;OpenAI quickly rolled back a new feature that allowed users to make private conversations with ChatGPT searchable&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Belanger, Ashley. [https://arstechnica.com/tech-policy/2025/08/chatgpt-users-shocked-to-learn-their-chats-were-in-google-search-results/ &amp;quot;ChatGPT users shocked to learn their chats were in Google search results&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. August 1, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://timesofindia.indiatimes.com/technology/tech-news/openai-rolls-back-new-chatgpt-feature-just-hours-after-launch-top-exec-says-we-removed-a-feature-from-chatgpt-app-that-allowed-users-to-/articleshow/123045328.cms &amp;quot;OpenAI rolls back new ChatGPT feature just hours after launch, top exec says: We removed a feature from ChatGPT app that allowed users to ...&amp;quot;]. &#039;&#039;The Times of India&#039;&#039;. August 1, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Management ==&lt;br /&gt;
&lt;br /&gt;
=== Key employees ===&lt;br /&gt;
* CEO and co-founder: [[Sam Altman]], former president of the start-up accelerator [[Y Combinator (company)|Y Combinator]]&lt;br /&gt;
* President and co-founder: [[Greg Brockman]], former CTO and third employee of [[Stripe (company)|Stripe]]&amp;lt;ref name=&amp;quot;seattle-investors&amp;quot;&amp;gt;[http://www.seattletimes.com/business/technology/silicon-valley-investors-to-bankroll-artificial-intelligence-center/ &amp;quot;Silicon Valley investors to bankroll artificial-intelligence center&amp;quot;]. &#039;&#039;[[The Seattle Times]]&#039;&#039;. December 13, 2015.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chief Scientist: [[Jakub Pachocki]], former Director of Research at OpenAI&amp;lt;ref name=&amp;quot;may14&amp;quot; /&amp;gt;&lt;br /&gt;
* Chief Operating Officer: Brad Lightcap, previously at [[Y Combinator (company)|Y Combinator]] and [[JPMorgan Chase]]&amp;lt;ref&amp;gt;Bordoloi, Pritam. [https://analyticsindiamag.com/openai-gets-a-new-president-cto-coo-in-the-latest-rejig// &amp;quot;OpenAI gets a new president, CTO &amp;amp; COO in the latest rejig&amp;quot;]. &#039;&#039;AIM&#039;&#039;. May 9, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chief Financial Officer: [[Sarah Friar]], former [[Nextdoor]] CEO and former CFO at [[Block, Inc.]]&amp;lt;ref name=&amp;quot;cxo 2024&amp;quot;&amp;gt;[https://www.reuters.com/technology/openai-hires-sarah-friar-cfo-2024-06-10/ &amp;quot;OpenAI hires former Nextdoor CEO Sarah Friar as first CFO&amp;quot;]. [[Reuters]]. June 10, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chief Product Officer: Kevin Weil, previously at [[Twitter, Inc.]] and [[Meta Platforms]]&amp;lt;ref name=&amp;quot;cxo 2024&amp;quot; /&amp;gt;&lt;br /&gt;
* Chief Research Officer: Mark Chen, former SVP of Research at OpenAI&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2025/03/24/openai-expands-coo-brad-lightcaps-job-to-include-business-oversight-.html &amp;quot;OpenAI expands COO Brad Lightcap&#039;s job to include business oversight, as Altman focuses on research&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2025-03-24.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chief Compliance Officer: [[Scott Schools]], former Chief Compliance Officer of [[Uber (company)|Uber]]&lt;br /&gt;
* Chief Global Affairs Officer: [[Chris Lehane]], former head of global policy at [[Airbnb]]&amp;lt;ref&amp;gt;Mui, Christine. [https://www.politico.com/news/2025/08/17/sam-altman-chatgpt-california-00449492 &amp;quot;The tech company stocking up on Democrats as Silicon Valley turns right&amp;quot;]. &#039;&#039;[[Politico]]&#039;&#039;. August 17, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chief Economist: [[Aaron Chatterji]], professor of business and public policy at Duke University&#039;s [[Fuqua School of Business]]&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/10/22/technology/openai-chief-economist.html &amp;quot;OpenAI Hires Former White House Official as Its Chief Economist&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. October 22, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* CEO of Applications: [[Fidji Simo]], former CEO of [[Instacart]]&amp;lt;ref name=&amp;quot;simo&amp;quot;&amp;gt;Heath, Alex. [https://www.theverge.com/command-line-newsletter/764650/openai-chatgpt-fidji-simo-sam-altman-power-shift &amp;quot;The power shift inside OpenAI&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. August 22, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Board of directors of the OpenAI nonprofit ===&lt;br /&gt;
* [[Bret Taylor]] (chairman), former chairman of [[Twitter, Inc.|Twitter]]&#039;s board of directors and co-CEO of [[Salesforce]]&lt;br /&gt;
* [[Sam Altman]]&lt;br /&gt;
* [[Adam D&#039;Angelo]], co-founder and CEO of [[Quora]]&lt;br /&gt;
* [[Sue Desmond-Hellmann]], former CEO of the [[Bill &amp;amp; Melinda Gates Foundation]]&lt;br /&gt;
* [[Nicole Seligman]], attorney and former executive vice president of the [[Sony Corporation]]&lt;br /&gt;
* [[Paul Nakasone]], former Director of the [[National Security Agency]] (2018–2024)&amp;lt;ref&amp;gt;Peters, Jay. [https://www.theverge.com/2024/6/13/24178079/openai-board-paul-nakasone-nsa-safety &amp;quot;Former head of NSA joins OpenAI board&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. June 13, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Zico Kolter]], computer scientist&amp;lt;ref&amp;gt;[https://www.bloomberg.com/news/articles/2024-08-08/openai-names-computer-scientist-zico-kolter-as-new-board-member &amp;quot;OpenAI Names Computer Scientist Zico Kolter as New Board Member&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. August 8, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Adebayo Ogunlesi]], managing partner at [[Global Infrastructure Partners]]&amp;lt;ref&amp;gt;Criddle, Cristina. [https://www.ft.com/content/63b08a9d-e537-4d60-9904-c59958a16982 &amp;quot;OpenAI appoints one of Wall Street&#039;s most powerful dealmakers to its board&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. 2025-01-14.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/who-are-openais-new-board-members-2024-03-11/ &amp;quot;Who are OpenAI&#039;s new board members?&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. March 11, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Principal individual investors ===&lt;br /&gt;
* [[Reid Hoffman]], [[LinkedIn]] co-founder&amp;lt;ref name=&amp;quot;mercury-back&amp;quot;&amp;gt;Liedtke, Michael. [http://www.mercurynews.com/business/ci_29256196/elon-musk-peter-thiel-reid-hoffman-others-back &amp;quot;Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center&amp;quot;]. &#039;&#039;[[Mercury News]]&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Peter Thiel]], [[PayPal]] co-founder&amp;lt;ref name=&amp;quot;mercury-back&amp;quot; /&amp;gt;&lt;br /&gt;
* [[Jessica Livingston]], a founding partner of Y Combinator&amp;lt;ref name=&amp;quot;seattle-investors&amp;quot; /&amp;gt;&lt;br /&gt;
* [[Elon Musk]], co-founder&amp;lt;ref name=&amp;quot;seattle-investors&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Firing of Altman ===&lt;br /&gt;
&#039;&#039;Further information: [[Removal of Sam Altman from OpenAI]]&#039;&#039;&lt;br /&gt;
[[File:Sam Altman TechCrunch SF 2019 Day 2 Oct 3 (cropped) (cropped).jpg|thumb|upright|Sam Altman in 2019]]&lt;br /&gt;
On November 17, 2023, OpenAI&#039;s board of directors (composed of [[Helen Toner]], [[Ilya Sutskever]], [[Adam D&#039;Angelo]] and Tasha McCauley) removed Sam Altman as CEO, citing a [[Motion of no confidence|lack of confidence]] in him. Chief Technology Officer [[Mira Murati]] took over as interim CEO. [[Greg Brockman]], the president of OpenAI, was also removed as chairman of the board&amp;lt;ref&amp;gt;[https://openai.com/blog/openai-announces-leadership-transition &amp;quot;OpenAI announces leadership transition&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;guard-17nov2023&amp;quot;&amp;gt;Montgomery, Blake. [https://www.theguardian.com/technology/2023/nov/17/openai-ceo-sam-altman-fired &amp;quot;OpenAI fires co-founder and CEO Sam Altman for allegedly lying to company board&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. November 17, 2023.&amp;lt;/ref&amp;gt; and resigned from the company shortly thereafter.&amp;lt;ref&amp;gt;Peters, Jay. [https://www.theverge.com/2023/11/17/23966277/openai-co-founder-greg-brockman-leaving &amp;quot;OpenAI co-founder Greg Brockman is leaving, too&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 18, 2023.&amp;lt;/ref&amp;gt; Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor.&amp;lt;ref&amp;gt;[https://www.theinformation.com/articles/three-senior-openai-researchers-resign-as-crisis-deepens &amp;quot;Three Senior OpenAI Researchers Resign as Crisis Deepens&amp;quot;]. &#039;&#039;The Information&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2023/11/report-sutskever-led-board-coup-at-openai-that-ousted-altman-over-ai-safety-concerns/ &amp;quot;Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. November 18, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and [[Thrive Capital]], who objected to Altman&#039;s departure.&amp;lt;ref&amp;gt;[https://www.wsj.com/tech/openai-trying-to-get-sam-altman-back-4b728049 &amp;quot;OpenAI Investors Trying to Get Sam Altman Back as CEO After Sudden Firing&amp;quot;]. &#039;&#039;WSJ&#039;&#039;.&amp;lt;/ref&amp;gt; Although Altman himself spoke in favor of returning to OpenAI, he later stated that he had considered starting a new company and bringing former OpenAI employees with him if the talks to reinstate him did not work out.&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/11/18/technology/sam-altman-openai-board.html?smid=nytcore-ios-share&amp;amp;referringSource=articleShare &amp;quot;Sam Altman Is Said to Be Discussing Return to OpenAI With Company&#039;s Board&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. November 19, 2023.&amp;lt;/ref&amp;gt; The board members agreed &amp;quot;in principle&amp;quot; to resign if Altman returned.&amp;lt;ref&amp;gt;Patel, Nilay. [https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo &amp;quot;OpenAI board in discussions with Sam Altman to return as CEO&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 18, 2023.&amp;lt;/ref&amp;gt; On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by [[Emmett Shear]] as interim CEO.&amp;lt;ref&amp;gt;Heath, Alex. [https://www.theverge.com/2023/11/20/23967515/sam-altman-openai-board-fired-new-ceo &amp;quot;The deal to bring Sam Altman back to OpenAI has fallen apart&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; The board initially contacted [[Anthropic]] CEO [[Dario Amodei]] (a former OpenAI executive) about replacing Altman, and proposed a [[Mergers and acquisitions|merger]] of the two companies, but both offers were declined.&amp;lt;ref&amp;gt;Dastin, Jeffrey. [https://www.reuters.com/technology/openais-board-approached-anthropic-ceo-about-top-job-merger-sources-2023-11-21/ &amp;quot;OpenAI&#039;s board approached Anthropic CEO about top job and merger&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. November 21, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 20, 2023, Microsoft CEO [[Satya Nadella]] announced that Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that Microsoft remained committed to OpenAI despite recent events.&amp;lt;ref&amp;gt;Warren, Tom. [https://www.theverge.com/2023/11/20/23968829/microsoft-hires-sam-altman-greg-brockman-employees-openai &amp;quot;Microsoft hires former OpenAI CEO Sam Altman&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him.&amp;lt;ref&amp;gt;Patel, Nilay. [https://www.theverge.com/2023/11/20/23969586/sam-altman-plotting-return-open-ai-microsoft &amp;quot;Sam Altman is still trying to return as OpenAI CEO&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; About 738 of OpenAI&#039;s 770 employees, including Murati and Sutskever, signed an open letter stating that they would quit and join Microsoft unless the board rehired Altman and then resigned itself.&amp;lt;ref&amp;gt;[https://news.bloomberglaw.com/us-law-week/openai-staff-threaten-to-go-to-microsoft-if-board-doesnt-quit &amp;quot;OpenAI Staff Near Total Mutiny With Threat to Jump to Microsoft&amp;quot;]. &#039;&#039;Bloomberg&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Knight, Will. [https://www.wired.com/story/openai-staff-walk-protest-sam-altman/ &amp;quot;OpenAI Staff Threaten to Quit Unless Board Resigns&amp;quot;]. &#039;&#039;Wired&#039;&#039;.&amp;lt;/ref&amp;gt; This prompted OpenAI investors to consider legal action against the board as well.&amp;lt;ref&amp;gt;Tong, Anna. [https://www.reuters.com/technology/openai-investors-considering-suing-board-after-ceos-abrupt-firing-sources-2023-11-20/ &amp;quot;Exclusive: OpenAI investors considering suing the board after CEO&#039;s abrupt firing&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. November 20, 2023.&amp;lt;/ref&amp;gt; In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time.&amp;lt;ref&amp;gt;Lawler, Richard. [https://www.theverge.com/2023/11/21/23970550/openai-exec-to-employees-our-number-one-goal-remains-to-reunify-openai &amp;quot;OpenAI exec to employees: &amp;quot;our number one goal remains to reunify OpenAI.&amp;quot;&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 21, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles, along with a reconstituted board made up of new members [[Bret Taylor]] (as chairman) and [[Lawrence Summers]], with D&#039;Angelo remaining.&amp;lt;ref&amp;gt;Heath, Alex. [https://www.theverge.com/2023/11/22/23967223/sam-altman-returns-ceo-open-ai &amp;quot;Breaking: Sam Altman to return as CEO of OpenAI&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 22, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Lightman, Hunter. &amp;quot;Let&#039;s Verify Step by Step&amp;quot;. 2023.&amp;lt;/ref&amp;gt; According to subsequent reporting, shortly before Altman&#039;s firing, some employees had raised concerns to the board about how he handled the safety implications of a recent internal AI capability discovery.&amp;lt;ref name=&amp;quot;Anna Tong 2023 u135&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; On November 29, 2023, OpenAI announced that an unnamed Microsoft employee had joined the board as a non-voting member to observe the company&#039;s operations.&amp;lt;ref&amp;gt;Heath, Alex. [https://www.theverge.com/2023/11/29/23981848/sam-altman-back-open-ai-ceo-microsoft-board &amp;quot;Microsoft joins OpenAI&#039;s board with Sam Altman officially back as CEO&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. November 30, 2023.&amp;lt;/ref&amp;gt; Microsoft relinquished the observer seat in July 2024.&amp;lt;ref&amp;gt;[https://www.channelnewsasia.com/business/microsoft-ditches-openai-board-observer-seat-amid-regulatory-scrutiny-4469086 &amp;quot;Microsoft ditches OpenAI board observer seat amid regulatory scrutiny&amp;quot;]. &#039;&#039;CNA&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In February 2024, the [[Securities and Exchange Commission]] subpoenaed OpenAI&#039;s internal communications to determine whether Altman&#039;s alleged lack of candor had misled investors.&amp;lt;ref&amp;gt;Seetharaman, Deepa. [https://www.wsj.com/tech/sec-investigating-whether-openai-investors-were-misled-9d90b411 &amp;quot;SEC Investigating Whether OpenAI Investors Were Misled&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. February 28, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers.&amp;lt;ref&amp;gt;Piper, Kelsey. [https://www.vox.com/future-perfect/380117/openai-microsoft-sam-altman-nonprofit-for-profit-foundation-artificial-intelligence &amp;quot;Inside OpenAI&#039;s multibillion-dollar gambit to become a for-profit company&amp;quot;]. &#039;&#039;Vox&#039;&#039;. 2024-10-28.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Knight, Will. [https://www.wired.com/story/openai-departures-research-rivals-artificial-intelligence/ &amp;quot;The OpenAI Talent Exodus Gives Rivals an Opening&amp;quot;]. &#039;&#039;Wired&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Personnel changes ===&lt;br /&gt;
In 2018, Musk resigned from his board seat, citing &amp;quot;a potential future [[Conflict of interest|conflict [of interest]]]&amp;quot; with his role as CEO of [[Tesla, Inc.|Tesla]] due to Tesla&#039;s [[Tesla Autopilot|AI development for self-driving cars]].&amp;lt;ref name=&amp;quot;musk_resigns&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2018/2/21/17036214/elon-musk-openai-ai-safety-leaves-board &amp;quot;Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. February 21, 2018.&amp;lt;/ref&amp;gt; OpenAI stated that Musk&#039;s financial contributions were below $45 million.&amp;lt;ref&amp;gt;Chan, Kelvin. [https://apnews.com/article/openai-elon-musk-lawsuit-sam-altman-4a4c0a19316f849f65db9e6d2b0b7a6b &amp;quot;OpenAI says Musk agreed the ChatGPT maker should become a for-profit company&amp;quot;]. Associated Press. March 6, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On March 3, 2023, [[Reid Hoffman]] resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via [[Greylock Partners]], and his co-founding of the AI startup [[Inflection AI]]. Hoffman remained on the board of Microsoft, a major investor in OpenAI.&amp;lt;ref&amp;gt;Dastin, Jeffrey. [https://www.reuters.com/technology/openais-long-time-backer-reid-hoffman-leaves-board-2023-03-03/ &amp;quot;OpenAI&#039;s long-time backer Reid Hoffman leaves board&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. March 3, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In May 2024, Chief Scientist [[Ilya Sutskever]] resigned and was succeeded by [[Jakub Pachocki]]. [[Jan Leike]], who co-led the company&#039;s superalignment team with Sutskever, also departed amid concerns over safety and trust.&amp;lt;ref name=&amp;quot;may14&amp;quot;&amp;gt;Hollister, Sean. [https://www.theverge.com/2024/5/14/24156920/openai-chief-scientist-ilya-sutskever-leaves &amp;quot;OpenAI chief scientist Ilya Sutskever is officially leaving&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. May 14, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Samuel, Sigal. [https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence &amp;quot;&amp;quot;I lost trust&amp;quot;: Why the OpenAI team in charge of safeguarding humanity imploded&amp;quot;]. &#039;&#039;Vox&#039;&#039;. May 17, 2024.&amp;lt;/ref&amp;gt; OpenAI then signed content deals with [[Reddit]], [[News Corp]], [[Axios (website)|Axios]], and [[Vox Media]].&amp;lt;ref&amp;gt;[https://www.redditinc.com/blog/reddit-and-oai-partner &amp;quot;Reddit and OpenAI Build Partnership - Upvoted&amp;quot;]. &#039;&#039;www.redditinc.com&#039;&#039;. May 16, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Fischer, Sara. [https://www.axios.com/2024/05/29/atlantic-vox-media-openai-licensing-deal &amp;quot;Exclusive: The Atlantic, Vox Media ink licensing, product deals with OpenAI&amp;quot;]. &#039;&#039;[[Axios (website)|Axios]]&#039;&#039;. May 29, 2024.&amp;lt;/ref&amp;gt; [[Paul Nakasone]] then joined the board of OpenAI.&amp;lt;ref&amp;gt;Coldewey, Devin. [https://techcrunch.com/2024/06/13/former-nsa-head-joins-openai-board-and-safety-committee/ &amp;quot;Former NSA head joins OpenAI board and safety committee&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. June 13, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In August 2024, cofounder John Schulman left OpenAI to join [[Anthropic]], and OpenAI&#039;s president [[Greg Brockman]] took extended leave until November.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/08/05/openai-co-founder-leaves-for-anthropic/ &amp;quot;OpenAI co-founder Schulman leaves for Anthropic, Brockman takes extended leave&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. August 6, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-greg-brockman-returns-ai-startup-bloomberg-news-reports-2024-11-12/ &amp;quot;OpenAI co-founder Greg Brockman returns to ChatGPT maker&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 12 November 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2024, CTO [[Mira Murati]] left the company.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/09/25/openai-cto-mira-murati-announces-shes-leaving-the-company.html &amp;quot;OpenAI CTO Mira Murati announces she&#039;s leaving the company&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2024-09-25.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://techcrunch.com/2024/09/25/openai-cto-mira-murati-says-shes-leaving-the-company/ &amp;quot;OpenAI CTO Mira Murati says she&#039;s leaving the company&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2024-09-25.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In November 2025, [[Lawrence Summers]] resigned from the board of directors.&amp;lt;ref&amp;gt;Capoot, Ashley. [https://www.cnbc.com/2025/11/19/larry-summers-epstein-openai.html &amp;quot;Larry Summers resigns from OpenAI board after release of emails with Epstein&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. November 19, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Governance and legal issues ==&lt;br /&gt;
[[File:Ilya Sutskever and Sam Altman in TAU.jpg|thumb|upright=0.7|Altman and Sutskever at [[Tel Aviv University]] in 2023]]In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of [[superintelligence]].&amp;lt;ref name=&amp;quot;OpenAI-Governance&amp;quot;&amp;gt;[https://openai.com/blog/governance-of-superintelligence &amp;quot;Governance of superintelligence&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt; They stated that superintelligence could arrive within the next 10 years, allowing a &amp;quot;dramatically more prosperous future&amp;quot;, and that &amp;quot;given the possibility of existential risk, we can&#039;t just be reactive&amp;quot;. They proposed creating an international watchdog organization, similar to the [[International Atomic Energy Agency|IAEA]], to oversee AI systems above a certain capability threshold, while suggesting that weaker systems below that threshold should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for greater coordination, for example through governments launching a joint project which &amp;quot;many current efforts become part of&amp;quot;.&amp;lt;ref name=&amp;quot;OpenAI-Governance&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Wodecki, Ben. [https://aibusiness.com/responsible-ai/openai-leaders-want-the-public-to-decide-ai-rules &amp;quot;OpenAI Founders Warn AI &#039;Superintelligence&#039; is Like Nuclear Power&amp;quot;]. May 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2023, the [[Federal Trade Commission|FTC]] issued a [[civil investigative demand]] to OpenAI to investigate whether the company&#039;s [[data security]] and [[Information privacy|privacy]] practices in developing [[ChatGPT]] were [[Unfair business practices|unfair]] or [[Consumer protection|harmed consumers]] (including by [[Defamation|reputational harm]]) in violation of Section 5 of the [[Federal Trade Commission Act of 1914]].&amp;lt;ref&amp;gt;Zakrzewski, Cat. [https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/ &amp;quot;The FTC is investigating whether ChatGPT harms consumers&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. July 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Tracy, Ryan. [https://www.wsj.com/articles/chatgpt-under-investigation-by-ftc-21e4b3ef &amp;quot;ChatGPT Comes Under Investigation by Federal Trade Commission&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. July 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;cnbcLeak1&amp;quot;&amp;gt;Feiner, Lauren. [https://www.cnbc.com/2023/07/13/chatgpt-owner-openai-is-being-investigated-by-ftc.html &amp;quot;FTC investigating ChatGPT-maker OpenAI for possible consumer harm&amp;quot;]. CNBC. July 13, 2023.&amp;lt;/ref&amp;gt; Such demands are typically preliminary, nonpublic investigative matters, but the FTC&#039;s document was leaked.&amp;lt;ref&amp;gt;Freeman, D. Reed, Jr. [https://www.afslaw.com/perspectives/privacy-counsel/leaked-ftc-civil-investigative-demand-openai-provides-rare-preliminary &amp;quot;Leaked FTC Civil Investigative Demand to OpenAI Provides a Rare Preliminary View of the Future of AI Enforcement&amp;quot;]. &#039;&#039;ArentFox Schiff&#039;&#039;. 2 August 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;cnbcLeak1&amp;quot; /&amp;gt; The investigation also covered allegations that the company had scraped public data and published false and defamatory information; the agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.&amp;lt;ref&amp;gt;[https://www.aljazeera.com/economy/2023/7/14/us-watchdog-probes-chatgpt-maker-openai-over-false-information &amp;quot;ChatGPT creator OpenAI faces US probe over libellous output&amp;quot;]. &#039;&#039;Al Jazeera&#039;&#039;.&amp;lt;/ref&amp;gt; Separately, in January 2025, the FTC raised concerns about &#039;circular&#039; spending arrangements between Microsoft and OpenAI, such as Microsoft extending Azure credits to OpenAI while the two companies shared engineering talent, and warned that such structures could negatively affect the public.&amp;lt;ref name=&amp;quot;bloom1AIMicro&amp;quot;&amp;gt;Birnbaum, Emily. [https://www.bloomberg.com/news/articles/2025-01-17/microsoft-openai-partnership-raises-antitrust-concerns-ftc &amp;quot;Microsoft-OpenAI Partnership Raises Antitrust Concerns, FTC Says&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. January 17, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2024, OpenAI&#039;s global affairs chief endorsed the UK&#039;s &amp;quot;smart&amp;quot; AI regulation during testimony to a [[House of Lords]] committee.&amp;lt;ref&amp;gt;[https://www.uktech.news/ai/openai-in-favour-of-uk-ai-legislation-policy-chief-says-20240923 &amp;quot;OpenAI &#039;in favour&#039; of UK AI legislation, policy chief says&amp;quot;]. &#039;&#039;UKTN&#039;&#039;. 2024-09-23.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In February 2025, OpenAI CEO [[Sam Altman]] stated that the company was interested in collaborating with the [[People&#039;s Republic of China]], despite [[United States sanctions against China|regulatory restrictions imposed by the U.S. government]].&amp;lt;ref&amp;gt;[https://www.scmp.com/tech/big-tech/article/3298396/openai-keen-work-china-ceo-sam-altman-says-deepseek-rattles-tech-market &amp;quot;OpenAI keen to work with China, CEO Sam Altman says, as DeepSeek rattles the tech market&amp;quot;]. &#039;&#039;South China Morning Post&#039;&#039;. February 2025.&amp;lt;/ref&amp;gt; This shift came in response to the growing influence of the Chinese artificial intelligence company [[DeepSeek]], which disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.&amp;lt;ref&amp;gt;[https://www.scmp.com/tech/tech-trends/article/3298739/deepseek-spurs-baidu-other-ai-competitors-adopt-open-source-strategy &amp;quot;DeepSeek spurs Baidu, other AI competitors to adopt open-source strategy&amp;quot;]. &#039;&#039;South China Morning Post&#039;&#039;. February 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; Following DeepSeek&#039;s market emergence, OpenAI enhanced its security protocols to protect proprietary development techniques from [[industrial espionage]]. Some industry observers noted similarities between DeepSeek&#039;s model [[Knowledge distillation|distillation]] approach and OpenAI&#039;s methodology, though no formal intellectual property claim was filed.&amp;lt;ref&amp;gt;Criddle, Cristina. [https://www.ft.com/content/f896c4d9-bab7-40a2-9e67-4058093ce250 &amp;quot;OpenAI clamps down on security after foreign spying threats&amp;quot;]. &#039;&#039;[[Financial Times]]&#039;&#039;. 2025-07-08.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to Oliver Roberts, as of March 2025 the United States had 781 state AI bills or laws, and OpenAI advocated for [[Federal preemption|preempting]] state AI laws with federal law.&amp;lt;ref name=&amp;quot;roberts2025&amp;quot;&amp;gt;Roberts, Oliver. [https://news.bloomberglaw.com/us-law-week/openais-preemption-request-highlights-state-laws-downsides &amp;quot;OpenAI&#039;s Preemption Request Highlights State Laws&#039; Downsides&amp;quot;]. &#039;&#039;[[Bloomberg Law]]&#039;&#039;. March 31, 2025.&amp;lt;/ref&amp;gt; According to Scott Kohler, OpenAI has opposed California&#039;s AI legislation, arguing that the state bill encroaches on matters more competently handled by the federal government.&amp;lt;ref&amp;gt;Kohler, Scott. [https://carnegieendowment.org/emissary/2025/07/ai-congress-bill-state-ban-what-next?lang=en &amp;quot;State AI Regulation Survived a Federal Ban. What Comes Next?&amp;quot;]. &#039;&#039;carnegieendowment.org&#039;&#039;. July 3, 2025.&amp;lt;/ref&amp;gt; [[Public Citizen]] opposed federal preemption of state AI laws, pointing to OpenAI&#039;s growth and valuation as evidence that existing state laws have not hampered [[innovation]].&amp;lt;ref&amp;gt;[https://www.citizen.org/article/federal-preemption-of-state-ai-laws-is-dangerous-and-reckless/ &amp;quot;Federal Preemption of State AI Laws Is Dangerous and Reckless&amp;quot;]. &#039;&#039;Public Citizen&#039;&#039;. 21 May 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Non-disparagement agreements ===&lt;br /&gt;
Before May 2024, OpenAI required departing employees to sign a lifelong [[Non-disclosure agreement|non-disparagement agreement]] that forbade them from criticizing OpenAI or even acknowledging the agreement&#039;s existence. [[Daniel Kokotajlo (researcher)|Daniel Kokotajlo]], a former employee, publicly stated that he had forfeited his vested equity in OpenAI in order to leave without signing it.&amp;lt;ref&amp;gt;Piper, Kelsey. [https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release &amp;quot;ChatGPT can talk, but OpenAI employees sure can&#039;t&amp;quot;]. &#039;&#039;Vox&#039;&#039;. May 17, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Christian, Jon. [https://futurism.com/the-byte/openai-nda-criticism &amp;quot;OpenAI Employees Forced to Sign NDA Preventing Them From Ever Criticizing Company&amp;quot;]. &#039;&#039;[[Futurism (website)|Futurism]]&#039;&#039;. May 18, 2024.&amp;lt;/ref&amp;gt; Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee&#039;s vested equity.&amp;lt;ref&amp;gt;Getahun, Hannah. [https://www.businessinsider.com/sam-altman-openai-nda-clause-vested-equity-ilya-sutskever-2024-5 &amp;quot;Sam Altman addresses &#039;potential equity cancellation&#039; in OpenAI exit agreements after 2 high-profile departures&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; However, leaked documents and emails contradicted this claim.&amp;lt;ref&amp;gt;Piper, Kelsey. [https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees &amp;quot;Leaked OpenAI documents reveal aggressive tactics toward former employees&amp;quot;]. &#039;&#039;Vox&#039;&#039;. May 22, 2024.&amp;lt;/ref&amp;gt; On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/05/24/openai-sends-internal-memo-releasing-former-employees-from-non-disparagement-agreements-sam-altman.html &amp;quot;OpenAI sends internal memo releasing former employees from controversial exit agreements&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. May 24, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copyright ===&lt;br /&gt;
OpenAI was sued for [[copyright infringement]] by authors [[Sarah Silverman]], [[Matthew Butterick]], [[Paul G. Tremblay|Paul Tremblay]] and [[Mona Awad]] in July 2023.&amp;lt;ref&amp;gt;Belanger, Ashley. [https://arstechnica.com/information-technology/2023/07/book-authors-sue-openai-and-meta-over-text-used-to-train-ai/ &amp;quot;Sarah Silverman sues OpenAI, Meta for being &amp;quot;industrial-strength plagiarists&amp;quot;&amp;quot;]. &#039;&#039;[[Ars Technica]]&#039;&#039;. July 10, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Krithika-2023&amp;quot;&amp;gt;Krithika, K. L.. [https://analyticsindiamag.com/all-the-lawsuits-filed-against-openai/ &amp;quot;Legal Challenges Surround OpenAI: A Closer Look at the Lawsuits&amp;quot;]. &#039;&#039;Analytics India Magazine&#039;&#039;. August 21, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Abshire, Elisha. [https://dailyai.com/2023/07/openai-faces-copyright-lawsuit-from-authors-mona-awad-and-paul-tremblay/ &amp;quot;OpenAI faces copyright lawsuit from authors Mona Awad and Paul Tremblay&amp;quot;]. &#039;&#039;Dailyai.com&#039;&#039;. July 6, 2023.&amp;lt;/ref&amp;gt; In September 2023, 17 authors, including [[George R. R. Martin]], [[John Grisham]], [[Jodi Picoult]] and [[Jonathan Franzen]], joined the [[Authors Guild]] in filing a class action lawsuit against OpenAI, alleging that the company&#039;s technology was illegally using their copyrighted work.&amp;lt;ref&amp;gt;Belanger, Ashley. [https://arstechnica.com/tech-policy/2023/09/george-r-r-martin-joins-authors-suing-openai-over-copyright-infringement/ &amp;quot;Grisham, Martin join authors suing OpenAI: &amp;quot;There is nothing fair about this&amp;quot; [Updated]&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. September 20, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Korn, Jennifer. [https://www.cnn.com/2023/09/20/tech/authors-guild-openai-lawsuit/index.html &amp;quot;George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI&amp;quot;]. &#039;&#039;[[CNN Business]]&#039;&#039;. September 20, 2023.&amp;lt;/ref&amp;gt; The &#039;&#039;[[New York Times]]&#039;&#039; also sued the company in late December 2023.&amp;lt;ref name=&amp;quot;Krithika-2023&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/legal/transactional/ny-times-sues-openai-microsoft-infringing-copyrighted-work-2023-12-27 &amp;quot;NY Times sues OpenAI, Microsoft for infringing copyrighted works&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. December 27, 2023.&amp;lt;/ref&amp;gt; In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the [[GPT-3#Training and capabilities|training of GPT-3]], and which the Authors Guild believed to have contained over 100,000 copyrighted books.&amp;lt;ref&amp;gt;[https://www.businessinsider.com/openai-destroyed-ai-training-datasets-lawsuit-authors-books-copyright-2024-5 &amp;quot;OpenAI destroyed a trove of books used to train AI models. The employees who collected the data are gone.&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2021, OpenAI developed a [[speech recognition]] tool called Whisper, which it used to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription raised concerns among OpenAI employees about potential violations of YouTube&#039;s terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI&#039;s president, [[Greg Brockman]], and the resulting dataset proved instrumental in training GPT-4.&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html &amp;quot;How Tech Giants Cut Corners to Harvest Data for A.I.&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. April 6, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In February 2024, &#039;&#039;[[The Intercept]]&#039;&#039;, &#039;&#039;[[Raw Story]]&#039;&#039; and AlterNet Media Inc. filed lawsuits against OpenAI alleging copyright infringement.&amp;lt;ref&amp;gt;Brittain, Blake. [https://www.reuters.com/legal/litigation/openai-hit-with-new-lawsuits-news-outlets-over-ai-training-2024-02-28/ &amp;quot;OpenAI hit with new lawsuits from news outlets over AI training&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. February 29, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://fingfx.thomsonreuters.com/gfx/legaldocs/xmpjrjwjrpr/OPENAI%20RAW%20STORY%20LAWSUIT%20intercept.pdf OpenAI RAW STORY LAWSUIT INTERCEPT]  - from [[Reuters]]&amp;lt;/ref&amp;gt; The suits were described as charting a new legal strategy for digital-only publishers to sue OpenAI.&amp;lt;ref&amp;gt;[https://www.niemanlab.org/2024/03/the-intercept-charts-a-new-legal-strategy-for-digital-publishers-suing-openai/ &amp;quot;The Intercept charts a new legal strategy for digital publishers suing OpenAI&amp;quot;]. &#039;&#039;Nieman Lab&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On April 30, 2024, eight newspapers filed a lawsuit in the [[United States District Court for the Southern District of New York|Southern District of New York]] against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included &#039;&#039;[[The Mercury News]]&#039;&#039;, &#039;&#039;[[The Denver Post]]&#039;&#039;, &#039;&#039;[[The Orange County Register]]&#039;&#039;, &#039;&#039;[[St. Paul Pioneer Press]]&#039;&#039;, &#039;&#039;[[Chicago Tribune]]&#039;&#039;, &#039;&#039;[[Orlando Sentinel]]&#039;&#039;, &#039;&#039;[[Sun Sentinel]]&#039;&#039;, and &#039;&#039;[[New York Daily News]]&#039;&#039;.&amp;lt;ref name=&amp;quot;ebt-30apr2024&amp;quot;&amp;gt;Baron, Ethan. [https://www.eastbaytimes.com/2024/04/30/mercury-news-and-other-papers-sue-microsoft-openai-over-the-new-artificial-intelligence/ &amp;quot;Mercury News and other papers sue Microsoft, OpenAI over the new artificial intelligence&amp;quot;]. &#039;&#039;[[East Bay Times]]&#039;&#039;. April 30, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2023, a lawsuit filed in [[San Francisco]], [[California]], by sixteen anonymous plaintiffs claimed that OpenAI had [[Web scraping|scraped]] 300 billion words online without consent and without registering as a data broker.&amp;lt;ref&amp;gt;Riley, Tonya. [https://cyberscoop.com/openai-lawsuit-privacy-data-scraping/ &amp;quot;OpenAI lawsuit reignites privacy debate over data scraping&amp;quot;]. &#039;&#039;CyberScoop&#039;&#039;. 2023-06-30.&amp;lt;/ref&amp;gt; The plaintiffs also claimed that OpenAI and [[Microsoft]], its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models.&amp;lt;ref&amp;gt;Xiang, Chloe. [https://www.vice.com/en/article/openai-and-microsoft-sued-for-dollar3-billion-over-alleged-chatgpt-privacy-violations/ &amp;quot;OpenAI and Microsoft Sued for $3 Billion Over Alleged ChatGPT &#039;Privacy Violations&#039;&amp;quot;]. &#039;&#039;Vice&#039;&#039;. June 29, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On May 22, 2024, OpenAI entered into an agreement with [[News Corp]] to integrate news content from &#039;&#039;[[The Wall Street Journal]]&#039;&#039;, the &#039;&#039;[[New York Post]]&#039;&#039;, &#039;&#039;[[The Times]]&#039;&#039;, and &#039;&#039;[[The Sunday Times]]&#039;&#039; into its AI platform. Meanwhile, other publications like &#039;&#039;[[The New York Times]]&#039;&#039; chose to sue OpenAI and [[Microsoft]] for copyright infringement over the use of their content to train AI models.&amp;lt;ref&amp;gt;[https://www.theguardian.com/technology/article/2024/may/22/openai-chatgpt-news-corp-deal &amp;quot;OpenAI and Wall Street Journal owner News Corp sign content deal&amp;quot;]. May 22, 2024.&amp;lt;/ref&amp;gt; In November 2024, a coalition of Canadian news outlets, including the [[Toronto Star]], [[Metroland Media Group|Metroland Media]], [[Postmedia Network|Postmedia]], [[The Globe and Mail]], [[The Canadian Press]] and [[CBC News|CBC]], sued OpenAI for using their news articles to train its software without permission.&amp;lt;ref&amp;gt;[https://www.bbc.com/news/articles/cm27247j6gno &amp;quot;Major Canadian news outlets sue OpenAI&amp;quot;]. &#039;&#039;www.bbc.com&#039;&#039;. November 29, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In an October 2024 interview with &#039;&#039;The New York Times&#039;&#039;, [[Suchir Balaji]] accused OpenAI of violating copyright law in developing the commercial LLMs he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of [[conspiracy theories]] alleging that he had been deliberately silenced.&amp;lt;ref name=&amp;quot;sf-standard&amp;quot;&amp;gt;Chien, Tomoki. [https://sfstandard.com/2025/02/14/autopsy-no-foul-play-in-openai-whistleblowers-suicide/ &amp;quot;Autopsy: No foul play in OpenAI whistleblower&#039;s suicide&amp;quot;]. &#039;&#039;The San Francisco Standard&#039;&#039;. 2025-02-15.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Mercury-jan-31&amp;quot;&amp;gt;Rodgers, Jakob. [https://www.mercurynews.com/2025/01/31/family-of-openai-whistleblower-suchir-balaji-files-lawsuit-seeking-san-francisco-police-records/ &amp;quot;Family of OpenAI whistleblower Suchir Balaji files lawsuit seeking San Francisco police records&amp;quot;]. &#039;&#039;[[The Mercury News]]&#039;&#039;. 2025-01-31.&amp;lt;/ref&amp;gt; California Congressman [[Ro Khanna]] endorsed calls for an investigation.&amp;lt;ref name=&amp;quot;rokhanna&amp;quot;&amp;gt;Rodgers, Jakob. [https://www.chicagotribune.com/2025/01/15/openai-whistleblower-death/ &amp;quot;California Congressman Ro Khanna calls for &#039;full and transparent&#039; investigation into death of OpenAI whistleblower Suchir Balaji&amp;quot;]. &#039;&#039;[[Chicago Tribune]]&#039;&#039;. January 15, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;sfexaminer-jan-23&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On April 24, 2025, [[Ziff Davis]] sued OpenAI in [[District of Delaware|Delaware federal court]] for copyright infringement. Ziff Davis is known for publications such as [[ZDNet]], [[PCMag]], [[CNET]], [[IGN]] and [[Lifehacker]].&amp;lt;ref&amp;gt;Brittain, Blake. [https://www.reuters.com/business/publisher-ziff-davis-sues-openai-copyright-infringement-2025-04-24/ &amp;quot;Publisher Ziff Davis sues OpenAI for copyright infringement&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. April 25, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mullin, Benjamin. [https://www.nytimes.com/2025/04/24/business/media/ziff-davis-openai-lawsuit.html &amp;quot;Publisher of PCMag and Mashable Sues OpenAI&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. April 24, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GDPR compliance ===&lt;br /&gt;
In April 2023, the EU&#039;s [[European Data Protection Board]] (EDPB) formed a dedicated task force on ChatGPT &amp;quot;to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities&amp;quot; based on the &amp;quot;enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service&amp;quot;.&amp;lt;ref&amp;gt;[https://edpb.europa.eu/news/news/2023/edpb-resolves-dispute-transfers-meta-and-creates-task-force-chat-gpt_en &amp;quot;EDPB resolves dispute on transfers by Meta and creates task force on Chat GPT&amp;quot;]. &#039;&#039;edpb.europa.eu&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In late April 2024, [[NOYB]] filed a complaint against OpenAI with the [[Austria]]n Datenschutzbehörde for violating the European [[General Data Protection Regulation]]. A text created with ChatGPT had given a false [[date of birth]] for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI further claimed that it could disclose neither the recipients of ChatGPT&#039;s output nor the sources used to produce it.&amp;lt;ref&amp;gt;[https://noyb.eu/de/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it &amp;quot;ChatGPT provides false information about people, and OpenAI can&#039;t correct it&amp;quot;]. &#039;&#039;noyb.eu&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Military and warfare ===&lt;br /&gt;
OpenAI was criticized for lifting its ban on using ChatGPT for &amp;quot;military and warfare&amp;quot;. Until January 10, 2024, its usage policies included a ban on &amp;quot;activity that has high risk of physical harm&amp;quot;, specifically including &amp;quot;weapons development&amp;quot; and &amp;quot;military and warfare&amp;quot;. Its new policies prohibit using its services to &amp;quot;harm yourself or others&amp;quot; and to &amp;quot;develop or use weapons&amp;quot;.&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.bloomberg.com/news/articles/2024-01-16/openai-working-with-us-military-on-cybersecurity-tools-for-veterans &amp;quot;OpenAI Is Working With US Military on Cybersecurity Tools&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. January 16, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Wrongful-death lawsuits over ChatGPT safety (2025) ===&lt;br /&gt;
&#039;&#039;Main article: [[Raine v. OpenAI]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;See also: [[Deaths linked to chatbots]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In August 2025, the parents of a 16-year-old boy who died by suicide filed a [[Wrongful death claim|wrongful death lawsuit]] against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son&#039;s death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company&#039;s chatbot. The complaint was filed in California state court in San Francisco.&amp;lt;ref&amp;gt;[https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html &amp;quot;A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. 2025-08-26.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, four of which alleged wrongful death.&amp;lt;ref&amp;gt;[https://apnews.com/article/openai-chatgpt-lawsuit-suicide-56e63e5538602ea39116f1904bf7cdc3 &amp;quot;OpenAI faces 7 lawsuits claiming ChatGPT drove people to suicide, delusions&amp;quot;]. &#039;&#039;AP News&#039;&#039;. 2025-11-07.&amp;lt;/ref&amp;gt; The suits were filed on behalf of [[Deaths linked to chatbots#Suicide of Zane Shamblin|Zane Shamblin]], 23, of Texas; [[Deaths linked to chatbots#Suicide of Amaurie Lacey|Amaurie Lacey]], 17, of Georgia; [[Deaths linked to chatbots#Suicide of Joshua Enneking|Joshua Enneking]], 26, of Florida; and [[Deaths linked to chatbots#Suicide of Joe Ceccanti|Joe Ceccanti]], 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage.&amp;lt;ref&amp;gt;[https://socialmediavictims.org/press-releases/smvlc-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/ &amp;quot;SMVLC Files 7 Lawsuits Accusing Chat GPT of Emotional Manipulation, Acting as &amp;quot;Suicide Coach&amp;quot;&amp;quot;]. &#039;&#039;Social Media Victims Law Center&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Murder of Suzanne Adams ====&lt;br /&gt;
&#039;&#039;Main article: [[Murder of Suzanne Adams]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In December 2025, [[First County Bank]] filed a lawsuit against OpenAI after Suzanne Adams was murdered by her 56-year-old son. The lawsuit alleges that over months of conversations, ChatGPT validated many of the son&#039;s paranoid beliefs, such as that his mother was spying on him and that she had attempted to poison him using drugs siphoned through his car&#039;s air vents. OpenAI said it would make ChatGPT safer for users disconnected from reality.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;[https://people.com/chatgpt-drove-paranoid-man-to-murder-mother-complaint-alleges-11867023 &amp;quot;ChatGPT Drove &#039;Paranoid&#039; Man to Murder 83-Year-Old Mother, Complaint Alleges&amp;quot;]. &#039;&#039;People.com&#039;&#039;. 2025-12-11.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Jargon, Julie and Kessler, Sam. [https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb &amp;quot;A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2025-08-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2026 Canadian mass shooting ===&lt;br /&gt;
After eight people were killed on February 10, 2026, in [[2026 Tumbler Ridge shooting|mass shootings in Tumbler Ridge, British Columbia]], it emerged that OpenAI had banned an account belonging to the perpetrator over violent queries approximately seven months before the attacks. OpenAI had opted not to report the account to authorities at the time.&amp;lt;ref&amp;gt;Mitchell, Ottilie. [https://www.bbc.com/news/articles/cn4gq352w89o &amp;quot;Tumbler Ridge suspect&#039;s ChatGPT account banned before shooting&amp;quot;]. &#039;&#039;[[BBC News]]&#039;&#039;. February 21, 2026.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.cbc.ca/news/canada/british-columbia/openai-tumbler-ridge-shooter-ban-9.7100497 &amp;quot;OpenAI had banned account of Tumbler Ridge, B.C., shooter&amp;quot;]. [[CBC News]]. February 21, 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[[The Wall Street Journal]]&#039;&#039; reported that the perpetrator &amp;quot;described scenarios [to ChatGPT] involving gun violence over the course of several days&amp;quot;, and that these messages were &amp;quot;flagged by an automated review system&amp;quot; and &amp;quot;alarmed employees at OpenAI&amp;quot;. A dozen employees debated whether to take action, some urging leaders to alert Canadian law enforcement.&amp;lt;ref&amp;gt;Wells, Georgia. [https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62 &amp;quot;OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago&amp;quot;]. &#039;&#039;[[The Wall Street Journal]]&#039;&#039;. February 20, 2026.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Warren, May. [https://www.thestar.com/news/canada/british-columbia/tumbler-ridge-shooters-chatgpt-account-was-banned-months-before-tragedy-company-didnt-warn-authorities/article_1d914e18-cfde-4d5e-ae56-690832c47927.html &amp;quot;Tumbler Ridge shooter&#039;s ChatGPT account was banned months before tragedy, company didn&#039;t warn authorities&amp;quot;]. &#039;&#039;[[Toronto Star]]&#039;&#039;. February 20, 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Canadian officials summoned OpenAI&#039;s safety team to Ottawa, criticizing their escalation protocols. The meeting highlighted concerns about how and when OpenAI reports potentially dangerous user behavior to authorities and intensified scrutiny of its safety-oversight processes.&amp;lt;ref&amp;gt;[https://www.channelnewsasia.com/business/canadian-officials-meet-openai-safety-team-after-school-shooting-5947916 &amp;quot;Canadian officials to meet with OpenAI safety team after school shooting&amp;quot;]. &#039;&#039;CNA&#039;&#039;.&amp;lt;/ref&amp;gt; On February 23, [[British Columbia|BC]] Premier [[David Eby]] said, &amp;quot;From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia&amp;quot;.&amp;lt;ref name=&amp;quot;eby-solomon&amp;quot;&amp;gt;[https://www.cbc.ca/news/canada/british-columbia/eby-openai-tumbler-ridge-9.7102942 &amp;quot;Eby says Tumbler Ridge shooting could have potentially been prevented if OpenAI warned authorities earlier&amp;quot;]. [[CBC News]]. February 23, 2026.&amp;lt;/ref&amp;gt; Canada&#039;s federal AI Minister [[Evan Solomon]] said, &amp;quot;I want [OpenAI] to give us details of what their protocols are, [and] what they are specifically in Canada&amp;quot;.&amp;lt;ref name=&amp;quot;eby-solomon&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* {{annotated link|Anthropic}}&lt;br /&gt;
* {{annotated link|Google DeepMind}}&lt;br /&gt;
* {{annotated link|Lumo (AI assistant)|Lumo}}&lt;br /&gt;
* {{annotated link|Meta AI}}&lt;br /&gt;
* {{annotated link|Mistral AI}}&lt;br /&gt;
* {{annotated link|Perplexity AI}}&lt;br /&gt;
* {{annotated link|xAI (company)|xAI}}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
{{refbegin}}&lt;br /&gt;
* Levy, Steven. [https://www.wired.com/story/what-openai-really-wants &amp;quot;What OpenAI Really Wants&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. September 5, 2023.&lt;br /&gt;
* Duhigg, Charles. [https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai &amp;quot;The Inside Story of Microsoft&#039;s Partnership with OpenAI&amp;quot;]. &#039;&#039;[[The New Yorker]]&#039;&#039;. December 1, 2023.&lt;br /&gt;
{{refend}}&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
{{Commons category|OpenAI}}&lt;br /&gt;
* {{Official website}}&lt;br /&gt;
&lt;br /&gt;
{{OpenAI}}&lt;br /&gt;
{{Generative AI}}&lt;br /&gt;
{{Existential risk from artificial intelligence}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:OpenAI| ]]&lt;br /&gt;
[[Category:Artificial intelligence associations]]&lt;br /&gt;
[[Category:Artificial intelligence laboratories]]&lt;br /&gt;
[[Category:Non-profit organizations based in San Francisco]]&lt;br /&gt;
[[Category:501(c)(3) organizations]]&lt;br /&gt;
[[Category:Research institutes in the San Francisco Bay Area]]&lt;br /&gt;
[[Category:2015 establishments in California]]&lt;br /&gt;
[[Category:2015 in San Francisco]]&lt;br /&gt;
[[Category:American companies established in 2015]]&lt;br /&gt;
[[Category:2015 in artificial intelligence]]&lt;br /&gt;
[[Category:Artificial intelligence industry in the United States]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Large_language_model&amp;diff=10</id>
		<title>Large language model</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Large_language_model&amp;diff=10"/>
		<updated>2026-04-06T12:58:29Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Distinguish|Logic learning machine}}&lt;br /&gt;
{{redirect|LLM}}&lt;br /&gt;
{{Technical|introduction|date=January 2026}}&lt;br /&gt;
{{Machine learning|Neural networks}}&lt;br /&gt;
A &#039;&#039;&#039;large language model&#039;&#039;&#039; (&#039;&#039;&#039;LLM&#039;&#039;&#039;) is a [[computational model]] designed to perform [[natural language processing]] tasks, especially [[language generation]], using contextual relationships derived from a large set of training data.&amp;lt;ref name=&amp;quot;bhaa&amp;quot;&amp;gt;Bommasani, Rishi. &amp;quot;On the Opportunities and Risks of Foundation Models&amp;quot;. 2021.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;few-shot-learners&amp;quot;&amp;gt;Brown, Tom B. &amp;quot;Language Models are Few-Shot Learners&amp;quot;. 2020.&amp;lt;/ref&amp;gt; LLMs can generate, summarize, translate and parse text in a variety of contexts,&amp;lt;ref name=&amp;quot;scaling-laws&amp;quot;&amp;gt;Kaplan, Jared. &amp;quot;Scaling Laws for Neural Language Models&amp;quot;. 2020.&amp;lt;/ref&amp;gt; and are the technological underpinning of modern [[chatbot]]s.&amp;lt;ref name=&amp;quot;few-shot-learners2&amp;quot;&amp;gt;Brown, Tom B. [https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf &amp;quot;Language Models are Few-Shot Learners&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. Dec 2020.&amp;lt;/ref&amp;gt; LLMs can accurately mimic natural language patterns because they are trained on [[Text corpus|collections of human-written text]].&amp;lt;ref&amp;gt;Fathallah, Nadeen. [https://2024.eswc-conferences.org/wp-content/uploads/2024/05/77770034.pdf &amp;quot;NeOn-GPT: A Large Language Model-Powered Pipeline for Ontology Learning&amp;quot;]. 2024-05-26.&amp;lt;/ref&amp;gt; For the same reason, biased or inaccurate training data can make an LLM&#039;s output less reliable.&amp;lt;ref name=&amp;quot;Manning-2022&amp;quot;&amp;gt;Manning, Christopher D. [https://www.amacad.org/publication/human-language-understanding-reasoning &amp;quot;Human Language Understanding &amp;amp; Reasoning&amp;quot;]. &#039;&#039;Daedalus&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of 2024, the largest and most capable LLMs are all based on [[Transformer_(deep_learning)|transformer]] architectures, which can be more efficient and parallelizable&amp;lt;ref&amp;gt;Vaswani, Ashish. &amp;quot;Attention is All you Need&amp;quot;. 2017.&amp;lt;/ref&amp;gt; than earlier [[Statistical model|statistical]] and [[recurrent neural network]] models.&amp;lt;ref&amp;gt;Merritt, Rick. [https://blogs.nvidia.com/blog/2022/03/25/what-is-a-transformer-model/ &amp;quot;What Is a Transformer Model?&amp;quot;]. &#039;&#039;NVIDIA Blog&#039;&#039;. 2022-03-25.&amp;lt;/ref&amp;gt; Research into other architectures, such as [[state-space representation|state space]] models, is ongoing.&amp;lt;ref&amp;gt;Peng, Bo. [https://aclanthology.org/2023.findings-emnlp.936/ &amp;quot;RWKV: Reinventing RNNs for the Transformer Era&amp;quot;]. &#039;&#039;EMNLP&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Gu, Albert. &amp;quot;Mamba: Linear-Time Sequence Modeling with Selective State Spaces&amp;quot;. 2023-12-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
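&lt;br /&gt;
The parallelism comes from the scaled dot-product attention operation at the core of the architecture, introduced in the 2017 paper cited above (the formula below is a sketch of that standard definition, with illustrative notation): for query, key and value matrices &amp;lt;math&amp;gt;Q&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;K&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; and key dimension &amp;lt;math&amp;gt;d_k&amp;lt;/math&amp;gt;,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\mathsf{T}}}{\sqrt{d_k}}\right)V,&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which compares every position with every other position through a few large matrix multiplications that can be computed simultaneously, rather than through a recurrent network&#039;s step-by-step pass over the sequence.&lt;br /&gt;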
&lt;br /&gt;
Models like [[GPT (language model)|GPT]], [[BERT (language model)|BERT]], and their successors used these advances to demonstrate [[Emergence|emergent behaviors]] at scale, such as retrieving specific information from a large dataset and performing compositional reasoning.&amp;lt;ref&amp;gt;Devlin, Jacob. &amp;quot;BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding&amp;quot;. 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Language model benchmark|Benchmark]] evaluations for LLMs test a model&#039;s ability to perform one or more language tasks. Modern LLMs may face comprehensive, [[Multi-task learning|multi-task]] evaluations measuring [[Reasoning model|reasoning]], [[Accuracy and precision|factual accuracy]], [[AI alignment|alignment]], and [[AI safety|safety]].&amp;lt;ref&amp;gt;Wang, Alex. &amp;quot;GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding&amp;quot;. 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Hendrycks, Dan. &amp;quot;Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency&amp;quot;. 2025.&amp;lt;/ref&amp;gt; Optimizing training to pass benchmarks can produce a model that [[Overfitting|adheres too closely to benchmark outputs]] without genuine [[Generalization (machine learning)|generalization]] or robust capability improvements.&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;Recht, Benjamin. &amp;quot;Do ImageNet Classifiers Generalize to ImageNet?&amp;quot;. 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
[[File:The number of publications about Large Language Models by year.png|thumb|The number of publications about large language models by year grouped by publication types]]&lt;br /&gt;
&lt;br /&gt;
[[File:Trends_in_AI_training_FLOP_over_time_(2010-2025).svg|thumb|The training compute of notable large models in FLOPs vs publication date over the period 2010–2024. For overall notable models (top left), frontier models (top right), top language models (bottom left) and top models within leading companies (bottom right). The majority of these models are language models.]]&lt;br /&gt;
[[File:Large-scale_AI_training_compute_(FLOP)_vs_Publication_date_(2017-2024).svg|thumb|The training compute of notable large AI models in FLOPs vs publication date over the period 2017–2024. The majority of large models are language models or multimodal models with language capacity.]]&lt;br /&gt;
Before the emergence of transformer-based models in 2017, some [[language model]]s were considered large relative to the computational and data constraints of their time. In the early 1990s, [[IBM]]&#039;s statistical models pioneered [[Bitext word alignment|word alignment]] techniques for machine translation, laying the groundwork for [[Construction grammar|corpus-based language modeling]]. In 2001, a smoothed [[Word n-gram language model|&#039;&#039;n&#039;&#039;-gram model]], such as those employing [[Kneser–Ney smoothing]], trained on 300 million words, achieved state-of-the-art [[perplexity]] on benchmark tests.&amp;lt;ref&amp;gt;Goodman, Joshua. &amp;quot;A Bit of Progress in Language Modeling&amp;quot;. &#039;&#039;Computer Speech &amp;amp; Language&#039;&#039;. 2001-08-09.&amp;lt;/ref&amp;gt; During the 2000s, with the rise of widespread internet access, researchers began compiling massive text datasets from the web (&amp;quot;web as corpus&amp;quot;&amp;lt;ref&amp;gt;Kilgarriff, Adam. [https://direct.mit.edu/coli/article/29/3/333-347/1816 &amp;quot;Introduction to the Special Issue on the Web as Corpus&amp;quot;]. &#039;&#039;Computational Linguistics&#039;&#039;. September 2003.&amp;lt;/ref&amp;gt;) to train statistical language models.&amp;lt;ref&amp;gt;Banko, Michele. &amp;quot;Scaling to very very large corpora for natural language disambiguation&amp;quot;. &#039;&#039;Proceedings of the 39th Annual Meeting on Association for Computational Linguistics - ACL &#039;01&#039;&#039;. 2001.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Resnik, Philip. [https://direct.mit.edu/coli/article/29/3/349-380/1809 &amp;quot;The Web as a Parallel Corpus&amp;quot;]. &#039;&#039;Computational Linguistics&#039;&#039;. September 2003.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Moving beyond &#039;&#039;n&#039;&#039;-gram models, researchers began in 2000 to use neural networks to learn language models.&amp;lt;ref&amp;gt;Xu, Wei. &amp;quot;6th International Conference on Spoken Language Processing (ICSLP 2000)&amp;quot;. ISCA. 2000-10-16.&amp;lt;/ref&amp;gt; Following the breakthrough of [[Deep learning|deep neural networks]] in image classification around 2012, similar architectures were adapted for language tasks. This shift was marked by the development of [[word embedding]]s (e.g., [[Word2vec|Word2Vec]] by Mikolov in 2013) and sequence-to-sequence ([[seq2seq]]) models using [[Long short-term memory|LSTM]]. In 2016, Google transitioned its translation service to [[neural machine translation]] (NMT), replacing statistical phrase-based models with deep [[recurrent neural network]]s. These early NMT systems used LSTM-based [[Encoder-decoder model|encoder-decoder architectures]], as they preceded the invention of [[Transformer (deep learning architecture)|transformers]]. [[File:The-Transformer-model-architecture.png|thumb|upright=1.3|An illustration of the main components of the transformer model from the original paper, where layers were normalized after (instead of before) multiheaded attention]]&lt;br /&gt;
At the 2017 [[NeurIPS]] conference, [[Google]] researchers introduced the transformer architecture in their landmark paper &amp;quot;[[Attention Is All You Need]]&amp;quot;.&amp;lt;ref&amp;gt;Vaswani, Ashish. [https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf &amp;quot;Attention is All you Need&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2017.&amp;lt;/ref&amp;gt; This paper&#039;s goal was to improve upon 2014 seq2seq technology,&amp;lt;ref&amp;gt;Ilya Sutskever; Oriol Vinyals; Quoc V. Le. [https://dl.acm.org/doi/10.5555/2969033.2969173 &amp;quot;Sequence to sequence learning with neural networks&amp;quot;]. &#039;&#039;Proceedings of the 28th International Conference on Neural Information Processing Systems&#039;&#039;. 2014.&amp;lt;/ref&amp;gt; and was based mainly on the [[attention (machine learning)|attention]] mechanism developed by Bahdanau et al. in 2014.&amp;lt;ref&amp;gt;Bahdanau, Dzmitry. &amp;quot;Neural Machine Translation by Jointly Learning to Align and Translate&amp;quot;. 2014.&amp;lt;/ref&amp;gt; The following year in 2018, [[BERT (language model)|BERT]] was introduced and quickly became &amp;quot;ubiquitous&amp;quot;.&amp;lt;ref&amp;gt;Rogers, Anna. [https://aclanthology.org/2020.tacl-1.54 &amp;quot;A Primer in BERTology: What We Know About How BERT Works&amp;quot;]. &#039;&#039;Transactions of the Association for Computational Linguistics&#039;&#039;. 2020.&amp;lt;/ref&amp;gt; Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via [[Prompt engineering|prompting]].&amp;lt;ref name=&amp;quot;auto&amp;quot;&amp;gt;Movva, Rajiv. &amp;quot;Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)&amp;quot;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although decoder-only [[GPT-1]] was introduced in 2018, it was [[GPT-2]] in 2019 that caught widespread attention because [[OpenAI]] claimed to have initially deemed it too powerful to release publicly, out of fear of malicious use.&amp;lt;ref&amp;gt;Hern, Alex. [https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction &amp;quot;New AI fake text generator may be too dangerous to release, say creators&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 14 February 2019.&amp;lt;/ref&amp;gt; [[GPT-3]] in 2020 went a step further and, as of 2025, is available only via [[Web API|API]], with no option to download the model to run locally. But it was the 2022 consumer-facing chatbot [[ChatGPT]] that received extensive media coverage and public attention.&amp;lt;ref&amp;gt;[https://www.euronews.com/next/2023/11/30/chatgpt-a-year-on-3-ways-the-ai-chatbot-has-completely-changed-the-world-in-12-months &amp;quot;ChatGPT a year on: 3 ways the AI chatbot has completely changed the world in 12 months&amp;quot;]. [[Euronews]]. November 30, 2023.&amp;lt;/ref&amp;gt; The 2023 [[GPT-4]] was praised for its increased accuracy and as a &amp;quot;holy grail&amp;quot; for its [[Multimodal learning|multimodal]] capabilities.&amp;lt;ref&amp;gt;Heaven, Will. [https://www.technologyreview.com/2023/03/14/1069823/gpt-4-is-bigger-and-better-chatgpt-openai/ &amp;quot;GPT-4 is bigger and better than ChatGPT—but OpenAI won&#039;t say why&amp;quot;]. [[MIT Technology Review]]. March 14, 2023.&amp;lt;/ref&amp;gt; OpenAI did not reveal the high-level architecture and the number of [[Parameter#Artificial intelligence|parameters]] of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work.&amp;lt;ref name=&amp;quot;auto&amp;quot;/&amp;gt; In 2024 OpenAI released the [[Reasoning language model|reasoning model]] [[OpenAI o1]], which generates long chains of thought before returning a final answer.&amp;lt;ref name=&amp;quot;NYTimesInfo&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/09/12/technology/openai-chatgpt-math.html &amp;quot;OpenAI Unveils New ChatGPT That Can Reason Through Math and Science&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. September 12, 2024.&amp;lt;/ref&amp;gt; Many LLMs with parameter counts comparable to those of OpenAI&#039;s GPT series have been developed.&amp;lt;ref&amp;gt;[https://ourworldindata.org/grapher/artificial-intelligence-parameter-count?time=2017-09-05..latest &amp;quot;Parameters in notable artificial intelligence systems&amp;quot;]. &#039;&#039;ourworldindata.org&#039;&#039;. November 30, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since 2022, open-weight models have been gaining popularity, especially at first with [[BLOOM (language model)|BLOOM]] and [[LLaMA]], though both have restrictions on usage and deployment. [[Mistral AI]]&#039;s models Mistral 7B and Mixtral 8x7B have a more permissive [[Apache License]]. In January 2025, [[DeepSeek]] released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower price per token for users.&amp;lt;ref&amp;gt;Sharma, Shubham. [https://venturebeat.com/ai/open-source-deepseek-r1-uses-pure-reinforcement-learning-to-match-openai-o1-at-95-less-cost/ &amp;quot;Open-source DeepSeek-R1 uses pure reinforcement learning to match OpenAI o1 — at 95% less cost&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2025-01-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since 2023, many LLMs have been trained to be [[Multimodal learning|multimodal]], having the ability to also process or generate other types of data, such as images, audio, or 3D meshes.&amp;lt;ref&amp;gt;[https://research.nvidia.com/labs/toronto-ai/LLaMA-Mesh/ &amp;quot;LLaMA-Mesh&amp;quot;]. &#039;&#039;research.nvidia.com&#039;&#039;. 2024.&amp;lt;/ref&amp;gt; These LLMs are also called large multimodal models (LMMs),&amp;lt;ref&amp;gt;Zia, Dr Tehseen. [https://www.unite.ai/unveiling-of-large-multimodal-models-shaping-the-landscape-of-language-models-in-2024/ &amp;quot;Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2024-01-08.&amp;lt;/ref&amp;gt; or multimodal large language models (MLLMs).&amp;lt;ref&amp;gt;Wang, Jiaqi. &amp;quot;A Comprehensive Review of Multimodal Large Language Models: Performance and Challenges Across Different Tasks&amp;quot;. 2024-08-02.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.ibm.com/think/topics/multimodal-llm &amp;quot;What is a Multimodal LLM (MLLM)?&amp;quot;]. &#039;&#039;IBM&#039;&#039;. 2025-07-30.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Open-weight LLMs have increasingly shaped the field since 2023, contributing to broader participation in AI development and greater transparency in model evaluation. Vake et al. (2025) demonstrated that community-driven contributions to open-weight models measurably improve their efficiency and performance, with user participation growing rapidly on collaborative platforms such as Hugging Face.&amp;lt;ref&amp;gt;Vake, Domen. &amp;quot;Is Open Source the Future of AI? A Data-Driven Approach&amp;quot;. &#039;&#039;Applied Sciences&#039;&#039;. 5 March 2025.&amp;lt;/ref&amp;gt; Paris et al. (2025) further argued that openness in AI should extend beyond releasing model code or weights to encompass inclusiveness, accountability, and ethical responsibility in AI research and deployment.&amp;lt;ref&amp;gt;Paris, Tamara. &amp;quot;Opening the Scope of Openness in AI&amp;quot;. Association for Computing Machinery. 23 June 2025.&amp;lt;/ref&amp;gt; Collectively, these studies highlight that open-weight LLMs can accelerate innovation and enhance scientific reproducibility, while fostering a more transparent and participatory AI ecosystem.&lt;br /&gt;
&lt;br /&gt;
== Dataset preprocessing ==&lt;br /&gt;
&#039;&#039;See also: [[List of datasets for machine-learning research#Internet]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Tokenization===&lt;br /&gt;
As [[machine learning]] algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an [[Word embedding|embedding]] is associated with each integer index. Algorithms include [[byte-pair encoding]] (BPE) and WordPiece. There are also special tokens serving as [[control character]]s, such as &amp;lt;code&amp;gt;[MASK]&amp;lt;/code&amp;gt; for a masked-out token (as used in [[BERT (language model)|BERT]]), and &amp;lt;code&amp;gt;[UNK]&amp;lt;/code&amp;gt; (&amp;quot;unknown&amp;quot;) for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, &amp;quot;Ġ&amp;quot; denotes a preceding whitespace in [[RoBERTa]] and GPT, and &amp;quot;##&amp;quot; denotes the continuation of a preceding word in BERT.&amp;lt;ref&amp;gt;Kaushal, Ayush. [https://aclanthology.org/2022.naacl-main.179.pdf &amp;quot;What do tokens know about their characters and how do they know it?&amp;quot;]. &#039;&#039;NAACL&#039;&#039;. 2022-06-06.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the BPE tokenizer used by the legacy version of [[GPT-3]] would split &amp;lt;small&amp;gt;&amp;lt;code&amp;gt;tokenizer: texts -&amp;gt; series of numerical &amp;quot;tokens&amp;quot;&amp;lt;/code&amp;gt;&amp;lt;/small&amp;gt; as&lt;br /&gt;
{| cellpadding=&amp;quot;0;&amp;quot; cellspacing=&amp;quot;0;&amp;quot; style=&amp;quot;border:1px solid black&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |token &lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |izer &lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |:&lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |&amp;amp;nbsp;texts&lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |&amp;amp;nbsp;-&amp;gt;&lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |series &lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |&amp;amp;nbsp;of&lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |&amp;amp;nbsp;numerical &lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |&amp;amp;nbsp;&amp;quot; &lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |t&lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |ok&lt;br /&gt;
| style=&amp;quot;background-color: grey; color: white; border-left: 2px green; border-right: 2px green&amp;quot; |ens&lt;br /&gt;
| style=&amp;quot;border-left: 2px green; border-right: 2px green&amp;quot; |&amp;quot; &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Tokenization also [[Data compression|compress]]es the datasets. Because LLMs generally require input to be an [[Array (data structure)|array]] that is not [[Jagged array|jagged]], the shorter texts must be &amp;quot;padded&amp;quot; until they match the length of the longest one. The average number of words per token depends on the language.&amp;lt;ref&amp;gt;[https://blog.yenniejun.com/p/all-languages-are-not-created-tokenized &amp;quot;All languages are NOT created (tokenized) equal&amp;quot;]. &#039;&#039;Language models cost much more in some languages than others&#039;&#039;. 2023-05-03.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LangModelTokenizsersUnfairness&amp;quot;&amp;gt;Petrov, Aleksandar. [https://openreview.net/forum?id=Pj4YYuxTq9 &amp;quot;Language Model Tokenizers Introduce Unfairness Between Languages&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. June 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
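&lt;br /&gt;
The following is a minimal Python sketch of this tokenize-then-pad step, using a made-up six-entry vocabulary; real tokenizers use subword algorithms such as BPE or WordPiece rather than whitespace splitting:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy illustration, not a real tokenizer: map words to integer ids&lt;br /&gt;
# with a made-up vocabulary, then pad the batch into a rectangular array.&lt;br /&gt;
vocab = {&#039;[PAD]&#039;: 0, &#039;[UNK]&#039;: 1, &#039;i&#039;: 2, &#039;like&#039;: 3, &#039;to&#039;: 4, &#039;eat&#039;: 5}&lt;br /&gt;
&lt;br /&gt;
def encode(text):&lt;br /&gt;
    # Whitespace splitting stands in for BPE/WordPiece segmentation.&lt;br /&gt;
    return [vocab.get(w, vocab[&#039;[UNK]&#039;]) for w in text.lower().split()]&lt;br /&gt;
&lt;br /&gt;
batch = [encode(&#039;I like to eat&#039;), encode(&#039;I like rocks&#039;)]&lt;br /&gt;
width = max(len(seq) for seq in batch)&lt;br /&gt;
# Pad every sequence with [PAD] ids so the array is not jagged.&lt;br /&gt;
padded = [seq + [vocab[&#039;[PAD]&#039;]] * (width - len(seq)) for seq in batch]&lt;br /&gt;
print(padded)  # [[2, 3, 4, 5], [2, 3, 1, 0]]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;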
&lt;br /&gt;
==== Byte-pair encoding ====&lt;br /&gt;
&#039;&#039;Main article: [[Byte-pair encoding]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and [[punctuation mark]]s) are treated as an initial set of [[n-gram|&#039;&#039;n&#039;&#039;-grams]] (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. Occurrences of adjacent pairs of (previously merged) &#039;&#039;n&#039;&#039;-grams that most frequently occur together are then again merged into even lengthier &#039;&#039;n&#039;&#039;-grams, until a vocabulary of prescribed size is obtained. After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.&amp;lt;ref name=&amp;quot;2022Book_&amp;quot;&amp;gt;Paaß, Gerhard. &amp;quot;Foundation Models for Natural Language Processing&amp;quot;. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
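&lt;br /&gt;
A minimal Python sketch of this merge loop, simplified for illustration (production BPE implementations add byte-level fallbacks, end-of-word markers, and other details):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from collections import Counter&lt;br /&gt;
&lt;br /&gt;
def bpe_merges(words, num_merges):&lt;br /&gt;
    # Each word starts as a tuple of single-character symbols (uni-grams).&lt;br /&gt;
    corpus = Counter(tuple(w) for w in words)&lt;br /&gt;
    merges = []&lt;br /&gt;
    for _ in range(num_merges):&lt;br /&gt;
        # Count adjacent symbol pairs across the corpus, weighted by frequency.&lt;br /&gt;
        pairs = Counter()&lt;br /&gt;
        for word, freq in corpus.items():&lt;br /&gt;
            for a, b in zip(word, word[1:]):&lt;br /&gt;
                pairs[(a, b)] += freq&lt;br /&gt;
        if not pairs:&lt;br /&gt;
            break&lt;br /&gt;
        best = max(pairs, key=pairs.get)&lt;br /&gt;
        merges.append(best)&lt;br /&gt;
        # Replace every occurrence of the winning pair with one merged symbol.&lt;br /&gt;
        new_corpus = Counter()&lt;br /&gt;
        for word, freq in corpus.items():&lt;br /&gt;
            merged, i = [], 0&lt;br /&gt;
            while i &amp;lt; len(word):&lt;br /&gt;
                if i + 1 &amp;lt; len(word) and (word[i], word[i + 1]) == best:&lt;br /&gt;
                    merged.append(word[i] + word[i + 1])&lt;br /&gt;
                    i += 2&lt;br /&gt;
                else:&lt;br /&gt;
                    merged.append(word[i])&lt;br /&gt;
                    i += 1&lt;br /&gt;
            new_corpus[tuple(merged)] += freq&lt;br /&gt;
        corpus = new_corpus&lt;br /&gt;
    return merges&lt;br /&gt;
&lt;br /&gt;
print(bpe_merges([&#039;low&#039;, &#039;lower&#039;, &#039;lowest&#039;], 2))  # e.g. [(&#039;l&#039;, &#039;o&#039;), (&#039;lo&#039;, &#039;w&#039;)]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;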
&lt;br /&gt;
==== Problems ====&lt;br /&gt;
A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, such as the [[Shan language]] of [[Myanmar]]. Even more widespread languages such as [[Portuguese language|Portuguese]] and [[German language|German]] have &amp;quot;a premium of 50%&amp;quot; compared to English.&amp;lt;ref name=&amp;quot;LangModelTokenizsersUnfairness&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Dataset cleaning===&lt;br /&gt;
&#039;&#039;Main article: [[Data cleansing]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data.&amp;lt;ref name=&amp;quot;aYNg4&amp;quot;&amp;gt;Dodge, Jesse. [https://aclanthology.org/2021.emnlp-main.98.pdf &amp;quot;Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus&amp;quot;]. &#039;&#039;EMNLP&#039;&#039;.&amp;lt;/ref&amp;gt; Cleaned datasets can increase training efficiency and lead to improved downstream performance.&amp;lt;ref&amp;gt;Lee, Katherine. &amp;quot;Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)&amp;quot;. May 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Li, Yuanzhi. &amp;quot;Textbooks Are All You Need II: phi-1.5 technical report&amp;quot;. 2023-09-11.&amp;lt;/ref&amp;gt; A trained LLM can be used to clean datasets for training a further LLM.&amp;lt;ref&amp;gt;Lin, Zhenghao. [https://dl.acm.org/doi/10.5555/3737916.3738830 &amp;quot;Rho-1: Not All Tokens Are What You Need&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2024-04-11.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).&amp;lt;ref name=&amp;quot;few-shot-learners2&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Synthetic data ===&lt;br /&gt;
&#039;&#039;Main article: [[Synthetic data]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Training the largest language models may require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft&#039;s [[Phi (LLM)|Phi]] series of LLMs is trained on textbook-like data generated by another LLM.&amp;lt;ref&amp;gt;Abdin, Marah. &amp;quot;Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone&amp;quot;. 2024-04-23.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
&#039;&#039;See also: [[Fine-tuning (machine learning)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An LLM is a type of [[foundation model]] trained on language. LLMs can be trained in different ways. In particular, GPT models are first pretrained to predict the next word on a large amount of data, before being fine-tuned.&amp;lt;ref&amp;gt;Wolfram, Stephen. &amp;quot;What is ChatGPT doing ... and why does it work?&amp;quot;. Wolfram Media, Inc. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cost ===&lt;br /&gt;
[[File:Estimated_training_cost_of_some_AI_models_-_2024_AI_index.jpg|thumb|right|upright=1.5]]&lt;br /&gt;
Substantial infrastructure is necessary for training the largest models. The tendency towards larger models is visible in the [[list of large language models]]. For example, training GPT-2 (a 1.5-billion-parameter model) in 2019 cost $50,000, training Megatron-Turing NLG 530B (a 530-billion-parameter model) in 2021 cost around $11 million, and training [[PaLM]] (a 540-billion-parameter model) in 2022 cost $8 million. The qualifier &amp;quot;large&amp;quot; in &amp;quot;large language model&amp;quot; is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as &amp;quot;large&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Fine-tuning ===&lt;br /&gt;
Before being [[Fine-tuning (deep learning)|fine-tuned]], most LLMs are next-token predictors. Fine-tuning shapes the LLM&#039;s behavior via techniques like [[reinforcement learning from human feedback]] (RLHF)&amp;lt;ref&amp;gt;Christiano, Paul. &amp;quot;Deep Reinforcement Learning from Human Preferences&amp;quot;. 2017.&amp;lt;/ref&amp;gt; or [[constitutional AI]].&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2023/05/ai-with-a-moral-compass-anthropic-outlines-constitutional-ai-in-its-claude-chatbot/ &amp;quot;AI gains &amp;quot;values&amp;quot; with Anthropic&#039;s new Constitutional AI chatbot approach&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2023-05-09.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instruction fine-tuning is a form of [[supervised learning]] used to teach LLMs to follow user instructions. In 2022, OpenAI demonstrated [[InstructGPT]], a version of GPT-3 fine-tuned in this way to follow instructions.&amp;lt;ref&amp;gt;Snyder, Alison. [https://www.axios.com/2022/01/27/ai-instructions-learning-algorithm &amp;quot;Next generation AI can follow a person&#039;s instructions and intentions&amp;quot;]. &#039;&#039;Axios&#039;&#039;. 2022-01-27.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reinforcement learning from human feedback (RLHF) involves training a reward model to predict which text humans prefer. Then, the LLM can be fine-tuned through [[reinforcement learning]] to better satisfy this reward model. Since humans typically prefer truthful, helpful and harmless answers, RLHF favors such answers.&amp;lt;ref&amp;gt;Sagiraju, Sujatha. [https://venturebeat.com/ai/how-reinforcement-learning-with-human-feedback-is-unlocking-the-power-of-generative-ai/ &amp;quot;How reinforcement learning with human feedback is unlocking the power of generative AI&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2023-04-23.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Ouyang, Long. &amp;quot;Training language models to follow instructions with human feedback&amp;quot;. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
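&lt;br /&gt;
A minimal sketch of the pairwise preference loss commonly used to train such a reward model (the scores below are invented; a full RLHF pipeline then optimizes the LLM against the trained reward model with a reinforcement-learning algorithm such as [[Proximal policy optimization|PPO]]):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def preference_loss(r_preferred, r_rejected):&lt;br /&gt;
    # Bradley-Terry-style objective: the reward assigned to the human-&lt;br /&gt;
    # preferred answer should exceed the reward of the rejected one.&lt;br /&gt;
    return -math.log(1.0 / (1.0 + math.exp(r_rejected - r_preferred)))&lt;br /&gt;
&lt;br /&gt;
# Invented reward-model scores for two candidate answers to one prompt.&lt;br /&gt;
print(preference_loss(2.0, 0.5))  # ~0.20: ranking is right, small loss&lt;br /&gt;
print(preference_loss(0.5, 2.0))  # ~1.70: ranking is wrong, large loss&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;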
&lt;br /&gt;
== Architecture ==&lt;br /&gt;
LLMs are generally based on the [[Transformer (deep learning architecture)|transformer]] architecture, which leverages an [[Attention (machine learning)|attention]] mechanism that enables the model to process relationships between all elements in a sequence simultaneously, regardless of their distance from each other.&lt;br /&gt;
&lt;br /&gt;
=== Attention mechanism and context window ===&lt;br /&gt;
&#039;&#039;See also: [[Attention (machine learning)]]&#039;&#039;&lt;br /&gt;
[[File:Multiple attention heads.png|upright=1.3|thumb | Each head calculates, according to its own criteria, how relevant other tokens are to the &amp;quot;it_&amp;quot; token. The second attention head, represented by the second column, focuses most on the first two rows, i.e. the tokens &amp;quot;The&amp;quot; and &amp;quot;animal&amp;quot;, while the third column focuses most on the bottom two rows, i.e. on &amp;quot;tired&amp;quot;, which has been tokenized into two tokens.]]&lt;br /&gt;
&lt;br /&gt;
In order to determine which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates &amp;quot;soft&amp;quot; weights for each token, more precisely for its embedding, using multiple attention heads, each with its own &amp;quot;relevance&amp;quot; criteria for calculating its own soft weights. For example, the small (117M-parameter) [[GPT-2]] model has twelve attention heads and a context window of only 1k tokens. The medium version, with 345M parameters, contains 24 layers, each with 12 attention heads; it was trained with gradient descent using a batch size of 512.&amp;lt;ref name=&amp;quot;2022Book_&amp;quot;/&amp;gt;{{Unreliable source?|date=December 2025}}&lt;br /&gt;
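&lt;br /&gt;
A single head of scaled dot-product attention can be sketched as follows (shapes are illustrative; a real transformer runs many such heads in parallel and adds learned query, key and value projections):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def attention(Q, K, V):&lt;br /&gt;
    # Scaled dot-product attention: every token mixes the values of all&lt;br /&gt;
    # tokens, weighted by softmax-normalized query-key similarity.&lt;br /&gt;
    scores = Q @ K.T / np.sqrt(Q.shape[-1])&lt;br /&gt;
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))&lt;br /&gt;
    w = w / w.sum(axis=-1, keepdims=True)   # row-wise softmax&lt;br /&gt;
    return w @ V&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
x = rng.normal(size=(4, 8))      # 4 tokens, 8-dimensional embeddings&lt;br /&gt;
print(attention(x, x, x).shape)  # (4, 8): one new vector per token&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;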
&lt;br /&gt;
&#039;&#039;Autoregressive&#039;&#039; models, such as [[Generative pretrained transformer|GPTs]], are trained to guess how a sequence continues; for example, whether the word sequence &amp;quot;I like to eat&amp;quot; is more likely to be followed by the word &amp;quot;bread&amp;quot; or the word &amp;quot;rocks.&amp;quot; [[Cloze test|&#039;&#039;Masked&#039;&#039;]] models, such as BERT,&amp;lt;ref name=&amp;quot;jm&amp;quot;&amp;gt;Jurafsky, Dan. [https://web.stanford.edu/~jurafsky/slp3/ed3book_jan72023.pdf &amp;quot;Speech and Language Processing&amp;quot;]. 7 January 2023.&amp;lt;/ref&amp;gt; are trained to guess parts that are missing from a sequence, such as whether the missing word in &amp;quot;I like to ___ roses&amp;quot; is more likely to be the word &amp;quot;smell&amp;quot; or the word &amp;quot;eat.&amp;quot; The model&#039;s predictions are based on the properties of sequences within its training dataset.&amp;lt;ref name=&amp;quot;ioUpE&amp;quot;&amp;gt;Zaib, Munazza. [https://www.researchgate.net/publication/338931711 &amp;quot;Proceedings of the Australasian Computer Science Week Multiconference&amp;quot;]. 4 February 2020.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
=== Mixture of experts ===&lt;br /&gt;
&#039;&#039;Main article: [[Mixture of experts]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A [[mixture of experts]] (MoE) is a [[machine learning]] architecture in which multiple specialized neural networks (&amp;quot;experts&amp;quot;) work together, with a gating mechanism that routes each input to the most appropriate expert(s). Mixtures of experts can reduce inference costs, as only a fraction of the parameters are used for each input. The approach was introduced in 2017 by Google researchers.&amp;lt;ref name=&amp;quot;HGZCJ&amp;quot;&amp;gt;Shazeer, Noam. &amp;quot;Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer&amp;quot;. 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;R9Qq5&amp;quot;&amp;gt;Lepikhin, Dmitry. &amp;quot;GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding&amp;quot;. 2021-01-12.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;glam-blog&amp;quot;&amp;gt;Dai, Andrew M. [https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html &amp;quot;More Efficient In-Context Learning with GLaM&amp;quot;]. &#039;&#039;ai.googleblog.com&#039;&#039;. December 9, 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
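&lt;br /&gt;
A sketch of top-k routing follows; the experts here are arbitrary linear maps for illustration, whereas production MoE layers use trained feed-forward experts and load-balancing losses:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def moe_layer(x, experts, gate_w, k=2):&lt;br /&gt;
    # The gate scores every expert, but only the top-k are evaluated.&lt;br /&gt;
    logits = gate_w @ x&lt;br /&gt;
    top = np.argsort(logits)[-k:]&lt;br /&gt;
    w = np.exp(logits[top])&lt;br /&gt;
    w = w / w.sum()               # softmax over the selected experts&lt;br /&gt;
    return sum(wi * experts[i](x) for wi, i in zip(w, top))&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(1)&lt;br /&gt;
d, n = 8, 4&lt;br /&gt;
mats = [rng.normal(size=(d, d)) for _ in range(n)]&lt;br /&gt;
experts = [(lambda m: lambda x: m @ x)(m) for m in mats]  # toy linear experts&lt;br /&gt;
gate_w = rng.normal(size=(n, d))&lt;br /&gt;
print(moe_layer(rng.normal(size=d), experts, gate_w).shape)  # (8,)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;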
&lt;br /&gt;
=== Parameter size ===&lt;br /&gt;
&#039;&#039;See also: [[1.58-bit large language model]]&#039;&#039;&lt;br /&gt;
Typically, LLMs are trained with single- or half-precision [[floating point numbers]] (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have more than 100 billion parameters, which places them outside the range of most consumer electronics.&amp;lt;ref&amp;gt;Mann, Tobias. [https://www.theregister.com/2024/03/17/ai_pc_local_llm/ &amp;quot;How to run an LLM locally on your PC in less than 10 minutes&amp;quot;]. &#039;&#039;theregister.com&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quantization ====&lt;br /&gt;
&#039;&#039;Post-training [[Quantization (signal processing)|quantization]]&#039;&#039;&amp;lt;ref name=&amp;quot;LS2Go&amp;quot;&amp;gt;Nagel, Markus. [https://proceedings.mlr.press/v119/nagel20a.html &amp;quot;Up or Down? Adaptive Rounding for Post-Training Quantization&amp;quot;]. &#039;&#039;Proceedings of the 37th International Conference on Machine Learning&#039;&#039;. 2020-11-21.&amp;lt;/ref&amp;gt; aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance. Quantization can be further classified as &#039;&#039;static quantization&#039;&#039; if the quantization parameters are determined beforehand (typically during a calibration phase), and &#039;&#039;dynamic quantization&#039;&#039; if the quantization is applied during inference. The simplest form of quantization truncates all parameters to a given number of bits: this is applicable to static as well as dynamic quantization, but loses much precision. Dynamic quantization allows for the use of a different quantization [[Codebook#Data compression|codebook]] per layer, either a lookup table of values or a linear mapping (scaling factor and bias), at the cost of foregoing the possible speed improvements from using lower-precision arithmetic.&lt;br /&gt;
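&lt;br /&gt;
A sketch of the simplest linear scheme, symmetric per-tensor int8 quantization (real systems typically use per-layer or per-channel scales, as described above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def quantize_int8(w):&lt;br /&gt;
    # Symmetric linear quantization with one scale for the whole tensor.&lt;br /&gt;
    scale = np.abs(w).max() / 127.0&lt;br /&gt;
    return np.round(w / scale).astype(np.int8), scale&lt;br /&gt;
&lt;br /&gt;
def dequantize(q, scale):&lt;br /&gt;
    return q.astype(np.float32) * scale&lt;br /&gt;
&lt;br /&gt;
w = np.random.default_rng(2).normal(size=(4, 4)).astype(np.float32)&lt;br /&gt;
q, scale = quantize_int8(w)   # 1 byte per weight instead of 4&lt;br /&gt;
print(np.abs(w - dequantize(q, scale)).max())  # small reconstruction error&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;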
&lt;br /&gt;
Quantized models are typically treated as frozen, with weight modifications (e.g. fine-tuning) applied only to the original model. Nevertheless, it is possible to fine-tune quantized models using [[LoRA|low-rank adaptation]].&amp;lt;ref&amp;gt;Mittal, Aayush. [https://www.unite.ai/lora-qlora-and-qa-lora-efficient-adaptability-in-large-language-models-through-low-rank-matrix-factorization/ &amp;quot;LoRa, QLoRA and QA-LoRA: Efficient Adaptability in Large Language Models Through Low-Rank Matrix Factorization&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2023-10-24.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Extensibility ==&lt;br /&gt;
Beyond basic text generation, various techniques have been developed to extend LLM capabilities, including the use of external tools and data sources, improved reasoning on complex problems, and enhanced instruction-following or autonomy through prompting methods.&lt;br /&gt;
&lt;br /&gt;
=== Prompt engineering ===&lt;br /&gt;
&#039;&#039;Main article: [[Prompt engineering]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In 2020, [[OpenAI]] researchers demonstrated that their new model [[GPT-3]] could infer the expected format of a task from a few example rounds of Q and A (or another type of task) included in the input. This technique, called &#039;&#039;few-shot prompting&#039;&#039;, allows LLMs to be adapted to a task without requiring fine-tuning.&amp;lt;ref name=&amp;quot;few-shot-learners2&amp;quot;/&amp;gt; In 2022, it was found that the base GPT-3 model can generate an instruction based on user input. The generated instruction, along with the user input, is then used as input to another instance of the model under an &amp;quot;Instruction: [...], Input: [...], Output:&amp;quot; format. The other instance is able to complete the output and often produces the correct answer in doing so. This ability to &amp;quot;self-instruct&amp;quot; lets LLMs [[Bootstrapping|bootstrap]] themselves toward a correct answer.&amp;lt;ref name=&amp;quot;self-instruct-paper&amp;quot;&amp;gt;Wang, Yizhong. &amp;quot;Self-Instruct: Aligning Language Model with Self Generated Instructions&amp;quot;. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Dialogue processing (chatbot) ===&lt;br /&gt;
An LLM can be turned into a [[chatbot]] by specializing it for conversation. User input is prefixed with a marker such as &amp;quot;Q:&amp;quot; or &amp;quot;User:&amp;quot; and the LLM is asked to predict the output after a fixed &amp;quot;A:&amp;quot; or &amp;quot;Assistant:&amp;quot;. This type of model became commercially available in 2022 with ChatGPT, a sibling model of InstructGPT fine-tuned to accept and produce dialog-formatted text based on GPT-3.5. It could similarly follow user instructions. Before the stream of User and Assistant lines, a chat context usually starts with a few lines of overarching instructions, from a role called &amp;quot;developer&amp;quot; or &amp;quot;system&amp;quot;, to convey a higher authority than the user&#039;s input. This is called a &amp;quot;system prompt&amp;quot;.&lt;br /&gt;
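&lt;br /&gt;
A hypothetical transcript in this format (the marker wording and the system-prompt text are invented; each provider defines its own chat format):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
system_prompt = &#039;You are a helpful assistant. Answer concisely.&#039;&lt;br /&gt;
transcript = (&lt;br /&gt;
    f&#039;System: {system_prompt}\n&#039;&lt;br /&gt;
    &#039;User: What is the capital of France?\n&#039;&lt;br /&gt;
    &#039;Assistant: Paris.\n&#039;&lt;br /&gt;
    &#039;User: And of Germany?\n&#039;&lt;br /&gt;
    &#039;Assistant:&#039;   # the model is asked to continue from here&lt;br /&gt;
)&lt;br /&gt;
print(transcript)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;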
&lt;br /&gt;
=== Retrieval-augmented generation ===&lt;br /&gt;
[[Retrieval-augmented generation]] (RAG) is an approach that integrates LLMs with [[document retrieval]] systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a [[vector database]]) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.&amp;lt;ref name=&amp;quot;BUZBP&amp;quot;&amp;gt;Lewis, Patrick. [https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html &amp;quot;Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
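&lt;br /&gt;
A toy sketch of the retrieval step, in which word overlap stands in for the vector similarity a real system would compute with a trained encoder and a vector database (documents and query are invented):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def score(doc, query):&lt;br /&gt;
    # Toy relevance score via word overlap; a real retriever compares&lt;br /&gt;
    # dense vectors from a trained encoder, held in a vector database.&lt;br /&gt;
    d, q = set(doc.lower().split()), set(query.lower().split())&lt;br /&gt;
    return len(d.intersection(q)) / len(d.union(q))&lt;br /&gt;
&lt;br /&gt;
docs = [&lt;br /&gt;
    &#039;The Eiffel Tower is in Paris.&#039;,&lt;br /&gt;
    &#039;Photosynthesis converts light into chemical energy.&#039;,&lt;br /&gt;
    &#039;Transformers use attention mechanisms.&#039;,&lt;br /&gt;
]&lt;br /&gt;
query = &#039;Where is the Eiffel Tower located?&#039;&lt;br /&gt;
best = max(docs, key=lambda d: score(d, query))&lt;br /&gt;
# The retrieved document is prepended to the prompt sent to the LLM.&lt;br /&gt;
print(f&#039;Context: {best}\nQuestion: {query}\nAnswer:&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;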
&lt;br /&gt;
=== Tool use ===&lt;br /&gt;
Tool use is a mechanism that enables LLMs to interact with external systems, applications, or data sources. It allows, for example, fetching real-time information from an API or executing code. A program separate from the LLM watches the output stream of the LLM for a special tool-calling syntax. When these special tokens appear, the program calls the tool accordingly and feeds its output back into the LLM&#039;s input stream.&amp;lt;ref&amp;gt;Dickson, Ben. [https://venturebeat.com/ai/the-tool-integration-problem-thats-holding-back-enterprise-ai-and-how-cotools-solves-it/ &amp;quot;The tool integration problem that&#039;s holding back enterprise AI (and how CoTools solves it)&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2025-04-02.&amp;lt;/ref&amp;gt;&lt;br /&gt;
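&lt;br /&gt;
A sketch of such a watch loop, with an invented &amp;lt;code&amp;gt;TOOL_CALL&amp;lt;/code&amp;gt; syntax (real systems use model-specific special tokens or structured output instead):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import re&lt;br /&gt;
&lt;br /&gt;
def run_with_tools(llm, prompt, tools):&lt;br /&gt;
    # Watch the model&#039;s output for an invented TOOL_CALL syntax, run the&lt;br /&gt;
    # tool, and feed its result back into the model&#039;s input stream.&lt;br /&gt;
    context = prompt&lt;br /&gt;
    while True:&lt;br /&gt;
        output = llm(context)&lt;br /&gt;
        call = re.search(r&#039;TOOL_CALL\((\w+), (.*?)\)&#039;, output)&lt;br /&gt;
        if call is None:&lt;br /&gt;
            return output                    # no tool needed: final answer&lt;br /&gt;
        result = tools[call.group(1)](call.group(2))&lt;br /&gt;
        context += output + f&#039;\nTOOL_RESULT: {result}\n&#039;&lt;br /&gt;
&lt;br /&gt;
def fake_llm(ctx):  # stands in for a real model&lt;br /&gt;
    if &#039;TOOL_RESULT&#039; in ctx:&lt;br /&gt;
        return &#039;The answer is 4.&#039;&lt;br /&gt;
    return &#039;I should use a tool. TOOL_CALL(calc, 2+2)&#039;&lt;br /&gt;
&lt;br /&gt;
print(run_with_tools(fake_llm, &#039;What is 2+2?&#039;, {&#039;calc&#039;: eval}))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;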
&lt;br /&gt;
Early tool-using LLMs were fine-tuned on the use of specific tools. But fine-tuning LLMs for the ability to read [[API]] documentation and call API correctly has greatly expanded the range of tools accessible to an LLM.&amp;lt;ref name=&amp;quot;lLrda&amp;quot;&amp;gt;Liang, Yaobo. &amp;quot;TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs&amp;quot;. &#039;&#039;Science&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;4Xzrs&amp;quot;&amp;gt;Patil, Shishir G.. [https://proceedings.neurips.cc/paper_files/paper/2024/hash/e4c61f578ff07830f5c37378dd3ecb0d-Abstract-Conference.html &amp;quot;Gorilla: Large Language Model Connected with Massive APIs&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2023-05-01.&amp;lt;/ref&amp;gt; Describing available tools in the system prompt can also make an LLM able to use tools. A system prompt instructing ChatGPT (GPT-4) to use multiple types of tools can be found online.&amp;lt;ref&amp;gt;[https://github.com/spdustin/ChatGPT-AutoExpert/blob/835baae768870aa9747663c24d8216820d24fd74/_system-prompts/all_tools.md &amp;quot;ChatGPT-AutoExpert/_system-prompts/all_tools.md at 835baae768870aa9747663c24d8216820d24fd74 · spdustin/ChatGPT-AutoExpert&amp;quot;]. &#039;&#039;GitHub&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Agency ===&lt;br /&gt;
&#039;&#039;Main article: [[AI agent]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
An LLM is typically not an [[autonomous agent]] by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions. But it can be transformed into an agent by adding supporting elements: the role (profile) and the surrounding environment of an agent can be additional inputs to the LLM, while memory can be integrated as a tool or provided as additional input. Instructions and input patterns are used to make the LLM plan actions, and tool use can then carry out those actions.&amp;lt;ref&amp;gt;Wang, Lei. &amp;quot;A survey on large language model based autonomous agents&amp;quot;. &#039;&#039;Frontiers of Computer Science&#039;&#039;. December 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ReAct pattern, a portmanteau of &#039;&#039;reason&#039;&#039; and &#039;&#039;act&#039;&#039;, constructs an [[Intelligent agent|agent]] out of an LLM, using the LLM as a planner. The LLM is prompted to &amp;quot;think out loud&amp;quot;. Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment.&amp;lt;ref name=&amp;quot;DmvNE&amp;quot;&amp;gt;Yao, Shunyu. &amp;quot;ReAct: Synergizing Reasoning and Acting in Language Models&amp;quot;. 2022-10-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
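&lt;br /&gt;
A sketch of this loop (the prompt wording and the &amp;quot;finish&amp;quot; stop action are illustrative, not taken from the original paper):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def react_agent(llm, goal, actions, env, max_steps=10):&lt;br /&gt;
    # The LLM alternates Thought/Action lines; each action is executed in&lt;br /&gt;
    # the environment and the observation is appended to the transcript.&lt;br /&gt;
    transcript = f&#039;Goal: {goal}\nPossible actions: {actions}\n&#039;&lt;br /&gt;
    for _ in range(max_steps):&lt;br /&gt;
        step = llm(transcript)      # e.g. &#039;Thought: ...\nAction: look&#039;&lt;br /&gt;
        transcript += step + &#039;\n&#039;&lt;br /&gt;
        action = step.split(&#039;Action:&#039;)[-1].strip()&lt;br /&gt;
        if action == &#039;finish&#039;:      # invented stop action&lt;br /&gt;
            break&lt;br /&gt;
        transcript += f&#039;Observation: {env(action)}\n&#039;&lt;br /&gt;
    return transcript&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;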
&lt;br /&gt;
In the DEPS (&amp;quot;describe, explain, plan and select&amp;quot;) method, an LLM is first connected to the visual world via image descriptions. It is then prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and the environmental feedback it receives.&amp;lt;ref&amp;gt;Wang, Zihao. [https://dl.acm.org/doi/10.5555/3666122.3667602 &amp;quot;Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2023-02-03.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;Reflexion method&#039;&#039; constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up &amp;quot;lessons learned&amp;quot;, which would help it perform better at a subsequent episode. These &amp;quot;lessons learned&amp;quot; are stored as a form of long-term memory and given to the agent in the subsequent episodes.&amp;lt;ref name=&amp;quot;sbB2T&amp;quot;&amp;gt;Shinn, Noah. [https://dl.acm.org/doi/10.5555/3666122.3667602 &amp;quot;Reflexion: Language Agents with Verbal Reinforcement Learning&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2023-03-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Monte Carlo tree search]] can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as a world model.&amp;lt;ref name=&amp;quot;ltTer&amp;quot;&amp;gt;Hao, Shibo. [https://aclanthology.org/2023.emnlp-main.507/ &amp;quot;Reasoning with Language Model is Planning with World Model&amp;quot;]. &#039;&#039;EMNLP&#039;&#039;. 2023-05-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For open-ended exploration, an LLM can be used to score observations for their &amp;quot;interestingness&amp;quot;, which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent.&amp;lt;ref name=&amp;quot;mBvD9&amp;quot;&amp;gt;Zhang, Jenny. &amp;quot;OMNI: Open-endedness via Models of human Notions of Interestingness&amp;quot;. 2 June 2023.&amp;lt;/ref&amp;gt; Alternatively, it can [[Zone of proximal development|propose increasingly difficult tasks]] for [[curriculum learning]].&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;[https://voyager.minedojo.org/ &amp;quot;Voyager&amp;quot;]. &#039;&#039;voyager.minedojo.org&#039;&#039;.&amp;lt;/ref&amp;gt; Instead of outputting individual actions, an LLM planner can also construct &amp;quot;skills&amp;quot;, or [[Function (computer programming)|functions]] for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.&amp;lt;ref name=&amp;quot;:0&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Multiple agents with memory can interact socially.&amp;lt;ref name=&amp;quot;XuvjF&amp;quot;&amp;gt;Park, Joon Sung. &amp;quot;Generative Agents: Interactive Simulacra of Human Behavior&amp;quot;. 2023-04-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Reasoning ===&lt;br /&gt;
&lt;br /&gt;
LLMs are conventionally trained to generate an output without generating intermediate steps. As a result, their performance tends to be subpar on complex questions requiring (at least in humans) intermediate steps of thought. Early research demonstrated that inserting intermediate &amp;quot;scratchpad&amp;quot; computations could improve performance on such tasks.&amp;lt;ref&amp;gt;Nye, Maxwell. &amp;quot;Show Your Work: Scratchpads for Intermediate Computation with Language Models&amp;quot;. 30 November 2021.&amp;lt;/ref&amp;gt; Later methods overcame this deficiency more systematically by breaking tasks into smaller steps for the LLM, either manually or automatically.&lt;br /&gt;
&lt;br /&gt;
==== Chaining ====&lt;br /&gt;
&#039;&#039;Prompt chaining&#039;&#039; was introduced in 2022.&amp;lt;ref&amp;gt;Wu, Tongshuang. &amp;quot;CHI Conference on Human Factors in Computing Systems Extended Abstracts&amp;quot;. Association for Computing Machinery. 2022-04-28.&amp;lt;/ref&amp;gt; In this method, a user manually breaks a complex problem down into several steps. In each step, the LLM receives as input a prompt telling it what to do and some results from preceding steps. The result from one step is then reused in the next step, until a final answer is reached. The ability of an LLM to follow instructions means that even non-experts can write a successful collection of stepwise prompts given a few rounds of trial and error.&amp;lt;ref&amp;gt;[https://www.ibm.com/think/topics/prompt-chaining &amp;quot;What is prompt chaining?&amp;quot;]. &#039;&#039;IBM&#039;&#039;. 23 April 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
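&lt;br /&gt;
A two-step chain might look as follows (the prompts are invented, and &amp;lt;code&amp;gt;llm&amp;lt;/code&amp;gt; stands for any text-completion function):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def summarize_then_quiz(llm, document):&lt;br /&gt;
    # Step 1&#039;s output is spliced into the step 2 prompt by the user&#039;s code.&lt;br /&gt;
    summary = llm(f&#039;Summarize in one sentence:\n{document}&#039;)&lt;br /&gt;
    return llm(f&#039;Write two quiz questions about:\n{summary}&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;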
&lt;br /&gt;
A 2022 paper demonstrated a separate technique called &#039;&#039;[[chain-of-thought prompting]]&#039;&#039;, which makes the LLM break the question down autonomously. An LLM is given some examples where the &amp;quot;assistant&amp;quot; verbally breaks down the thought process before arriving at an answer. The LLM mimics these examples and also tries to spend some time generating intermediate steps before providing the final answer. This additional step elicited by prompting improves the correctness of the LLM on relatively complex questions. On math word questions, a prompted model can exceed even fine-tuned GPT-3 with a verifier.&amp;lt;ref name=&amp;quot;auto2&amp;quot;&amp;gt;Wei, Jason. [https://dl.acm.org/doi/10.5555/3600270.3602070 &amp;quot;Chain-of-Thought Prompting Elicits Reasoning in Large Language Models&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2023-01-10.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.ibm.com/think/topics/chain-of-thoughts &amp;quot;What is chain of thought (CoT) prompting?&amp;quot;]. &#039;&#039;IBM&#039;&#039;. 23 April 2025.&amp;lt;/ref&amp;gt; Chain-of-thought can also be elicited by simply adding an instruction like &amp;quot;Let&#039;s think step by step&amp;quot; to the prompt, in order to encourage the LLM to proceed methodically instead of trying to directly guess the answer.&amp;lt;ref&amp;gt;Schreiner, Maximilian. [https://the-decoder.com/deeper-insights-for-ai-language-models-chain-of-thought-prompting-as-a-key-factor/ &amp;quot;Deeper insights into AI language models - chain of thought prompting as a success factor&amp;quot;]. &#039;&#039;The Decoder&#039;&#039;. 2022-09-27.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Model-native reasoning ====&lt;br /&gt;
&#039;&#039;Main article: [[Reasoning model]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In late 2024, a new approach to LLM development emerged with &amp;quot;reasoning models&amp;quot;.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/12/14/reasoning-ai-models-have-become-a-trend-for-better-or-worse/ &amp;quot;&#039;Reasoning&#039; AI models have become a trend, for better or worse&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2024-12-14.&amp;lt;/ref&amp;gt; These are trained to generate step-by-step analysis before producing final answers, enabling better results on complex tasks, for instance in mathematics, coding and logic.&amp;lt;ref&amp;gt;[https://spectrum.ieee.org/chain-of-thought-prompting &amp;quot;AI Developers Look Beyond Chain-of-Thought Prompting&amp;quot;]. &#039;&#039;IEEE Spectrum&#039;&#039;. 2025-05-08.&amp;lt;/ref&amp;gt; OpenAI introduced this concept with their [[OpenAI o1|o1]] model in September 2024, followed by [[OpenAI o3|o3]] in April 2025. On the [[International Mathematical Olympiad]] qualifying exam problems, [[GPT-4o]] achieved 13% accuracy while o1 reached 83%.&amp;lt;ref name=&amp;quot;nyt-o3&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/12/20/technology/openai-new-ai-math-science.html &amp;quot;OpenAI Unveils New A.I. That Can &#039;Reason&#039; Through Math and Science Problems&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2024-12-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In January 2025, the Chinese company [[DeepSeek]] released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI&#039;s o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1&#039;s open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private.&amp;lt;ref name=&amp;quot;nature-deepseek&amp;quot;&amp;gt;Gibney, Elizabeth. [https://www.nature.com/articles/d41586-025-00229-6 &amp;quot;China&#039;s cheap, open AI model DeepSeek thrills scientists&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 2025-01-30.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step by step.&amp;lt;ref name=&amp;quot;nyt-o3&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Inference optimization ===&lt;br /&gt;
Inference optimization refers to techniques that improve LLM performance by applying additional computational resources during the inference process, rather than requiring model retraining. These approaches implement various state-of-the-art reasoning and decision-making strategies to enhance accuracy and capabilities.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OptiLLM&#039;&#039;&#039; is an [[OpenAI]] API-compatible optimizing inference proxy that implements multiple inference optimization techniques simultaneously.&amp;lt;ref&amp;gt;[https://github.com/codelion/optillm &amp;quot;OptiLLM: Optimizing inference proxy for LLMs&amp;quot;]. &#039;&#039;GitHub&#039;&#039;.&amp;lt;/ref&amp;gt; The system acts as a transparent proxy that can work with any LLM provider, implementing techniques such as [[Monte Carlo tree search]] (MCTS), [[Mixture of experts|mixture of agents]] (MOA), best-of-N sampling, and chain-of-thought reflection. OptiLLM demonstrates that strategic application of computational resources at inference time can substantially improve model performance across diverse tasks, achieving significant improvements on benchmarks such as the [[American Invitational Mathematics Examination|AIME]] 2024 mathematics competition and various coding challenges.&amp;lt;ref&amp;gt;[https://www.marktechpost.com/2024/11/18/optillm-an-openai-api-compatible-optimizing-inference-proxy-which-implements-several-state-of-the-art-techniques-that-can-improve-the-accuracy-and-performance-of-llms/ &amp;quot;OptiLLM: An OpenAI API Compatible Optimizing Inference Proxy which Implements Several State-of-the-Art Techniques that can Improve the Accuracy and Performance of LLMs&amp;quot;]. &#039;&#039;MarkTechPost&#039;&#039;. 2024-11-18.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These inference optimization approaches represent a growing category of tools that enhance existing LLMs without requiring access to model weights or retraining, making advanced reasoning capabilities more accessible across different model providers and use cases.&lt;br /&gt;
&lt;br /&gt;
== Forms of input and output ==&lt;br /&gt;
&lt;br /&gt;
=== Multimodality ===&lt;br /&gt;
&#039;&#039;See also: [[Multimodal learning]]&#039;&#039;&lt;br /&gt;
Multimodality means having multiple modalities, where a &amp;quot;[[Modality (human–computer interaction)|modality]]&amp;quot; refers to a type of input or output, such as video, image, audio, text, [[proprioception]], etc.&amp;lt;ref&amp;gt;Kiros, Ryan. [https://proceedings.mlr.press/v32/kiros14.html &amp;quot;Multimodal Neural Language Models&amp;quot;]. &#039;&#039;Proceedings of the 31st International Conference on Machine Learning&#039;&#039;. 2014-06-18.&amp;lt;/ref&amp;gt; For example, the [[Pathways Language Model|Google PaLM]] model was fine-tuned into a multimodal model and applied to [[Robot control|robotic control]].&amp;lt;ref&amp;gt;Driess, Danny. [https://dl.acm.org/doi/10.5555/3618408.3618748 &amp;quot;PaLM-E: An Embodied Multimodal Language Model&amp;quot;]. &#039;&#039;ICML&#039;&#039;. 2023-03-01.&amp;lt;/ref&amp;gt; [[LLaMA]] models have also been made multimodal using the tokenization method, allowing image inputs&amp;lt;ref&amp;gt;Liu, Haotian. &amp;quot;Visual Instruction Tuning&amp;quot;. &#039;&#039;NeurIPS&#039;&#039;. 2023-04-01.&amp;lt;/ref&amp;gt; and video inputs.&amp;lt;ref&amp;gt;Zhang, Hang. &amp;quot;Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding&amp;quot;. &#039;&#039;EMNLP&#039;&#039;. 2023-06-01.&amp;lt;/ref&amp;gt; [[GPT-4o]] can process and generate text, audio and images.&amp;lt;ref&amp;gt;[https://www.theregister.com/2024/05/13/openai_gpt4o/ &amp;quot;OpenAI says natively multimodal GPT-4o eats text, visuals, sound – and emits the same&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 2024-05-13.&amp;lt;/ref&amp;gt; Such models are sometimes called large multimodal models (LMMs).&amp;lt;ref&amp;gt;Zia, Dr Tehseen. [https://www.unite.ai/unveiling-of-large-multimodal-models-shaping-the-landscape-of-language-models-in-2024/ &amp;quot;Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2024-01-08.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common method to create multimodal models out of an LLM is to &amp;quot;tokenize&amp;quot; the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder &amp;lt;math&amp;gt;E&amp;lt;/math&amp;gt;. Make a small [[multilayer perceptron]] &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;, so that for any image &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;, the post-processed vector &amp;lt;math&amp;gt;f(E(y))&amp;lt;/math&amp;gt; has the same dimensions as an encoded token. That is an &amp;quot;image token&amp;quot;. Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be kept frozen to improve stability.&amp;lt;ref&amp;gt;Li, Junnan. [https://dl.acm.org/doi/10.5555/3618408.3619222 &amp;quot;BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models&amp;quot;]. &#039;&#039;ICML&#039;&#039;. 2023-01-01.&amp;lt;/ref&amp;gt; This type of method, where embeddings from multiple modalities are fused and the predictor is trained on the combined embeddings, is called &#039;&#039;early fusion&#039;&#039;.&lt;br /&gt;
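&lt;br /&gt;
A sketch of the projection &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; follows; the dimensions are illustrative, and in practice the weights of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; are learned from the image-text dataset rather than drawn at random:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
d_image, d_hidden, d_model = 512, 1024, 4096   # illustrative sizes&lt;br /&gt;
rng = np.random.default_rng(3)&lt;br /&gt;
W1 = rng.normal(size=(d_hidden, d_image))&lt;br /&gt;
W2 = rng.normal(size=(d_model, d_hidden))&lt;br /&gt;
&lt;br /&gt;
def f(e):&lt;br /&gt;
    # Two-layer MLP mapping image-encoder space into the LLM&#039;s&lt;br /&gt;
    # token-embedding space; W1 and W2 are trained in practice.&lt;br /&gt;
    return W2 @ np.maximum(W1 @ e, 0.0)        # ReLU in between&lt;br /&gt;
&lt;br /&gt;
E_y = rng.normal(size=d_image)   # pretend output of the image encoder E&lt;br /&gt;
print(f(E_y).shape)              # (4096,): one &#039;image token&#039;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;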
&lt;br /&gt;
Another method, called &#039;&#039;intermediate fusion&#039;&#039;, involves each modality being first processed independently to obtain modality-specific representations; then these intermediate representations are fused together.&amp;lt;ref&amp;gt;Kumar, Puneet. &amp;quot;Hybrid Fusion Based Approach for Multimodal Emotion Recognition with Insufficient Labeled Data&amp;quot;. 2021.&amp;lt;/ref&amp;gt; In general, cross-attention is used for integrating information from different modalities. As an example, the Flamingo model uses cross-attention layers to inject visual information into its pre-trained language model.&amp;lt;ref&amp;gt;Alayrac, Jean-Baptiste. [https://proceedings.neurips.cc/paper_files/paper/2022/hash/960a172bc7fbf0177ccccbb411a7d800-Abstract-Conference.html &amp;quot;Flamingo: a Visual Language Model for Few-Shot Learning&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2022-12-06.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Non-natural languages ===&lt;br /&gt;
LLMs can handle [[programming language]]s similarly to how they handle natural languages. No special change in token handling is needed as code, like human language, is represented as plain text. LLMs can generate code based on problems or instructions written in [[natural language]]. They can also describe code in natural language or translate it into other programming languages. They were originally used as a [[code completion]] tool, but advances have moved them towards [[automatic programming]]. Services such as [[GitHub Copilot]] offer LLMs specifically trained, fine-tuned, or prompted for programming.&amp;lt;ref&amp;gt;Finnie-Ansley, James. &amp;quot;Proceedings of the 24th Australasian Computing Education Conference&amp;quot;. Association for Computing Machinery. 14 February 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Husein, Rasha Ahmad. &amp;quot;Large language models for code completion: A systematic literature review&amp;quot;. &#039;&#039;Computer Standards &amp;amp; Interfaces&#039;&#039;. March 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In [[computational biology]], transformer-based architectures, such as [[DNA large language model|DNA LLMs]], have also proven useful in analyzing biological sequences: [[protein]], [[DNA]], and [[RNA]]. With proteins they appear able to capture a degree of &amp;quot;grammar&amp;quot; from the amino-acid sequence, by mapping that sequence into an [[embedding (machine learning)|embedding]]. On tasks such as [[Protein structure prediction|structure prediction]] and [[mutation]]al outcome prediction, a small model using an embedding as input can approach or exceed much larger models using [[multiple sequence alignment]]s (MSA) as input.&amp;lt;ref&amp;gt;Weissenow, Konstantin. &amp;quot;Are protein language models the new universal key?&amp;quot;. &#039;&#039;Current Opinion in Structural Biology&#039;&#039;. April 2025.&amp;lt;/ref&amp;gt; ESMFold, [[Meta Platforms]]&#039; embedding-based method for protein structure prediction, runs an order of magnitude faster than [[AlphaFold2]] thanks to the removal of an MSA requirement and a lower parameter count due to the use of embeddings.&amp;lt;ref&amp;gt;Lin, Zeming. &amp;quot;Evolutionary-scale prediction of atomic-level protein structure with a language model&amp;quot;. &#039;&#039;Science&#039;&#039;. 17 March 2023.&amp;lt;/ref&amp;gt; Meta hosts ESM Atlas, a database of 772 million structures of [[metagenomic]] proteins predicted using ESMFold.&amp;lt;ref&amp;gt;[https://esmatlas.com/about &amp;quot;ESM Metagenomic Atlas&amp;quot;]. &#039;&#039;esmatlas.com&#039;&#039;.&amp;lt;/ref&amp;gt; An LLM can also design proteins unlike any seen in nature.&amp;lt;ref&amp;gt;Hayes, Thomas. &amp;quot;Simulating 500 million years of evolution with a language model&amp;quot;. &#039;&#039;Science&#039;&#039;. 21 February 2025.&amp;lt;/ref&amp;gt; Nucleic acid models have proven useful in detecting [[regulatory sequence]]s,&amp;lt;ref&amp;gt;Fishman, Veniamin. &amp;quot;GENA-LM: a family of open-source foundational DNA language models for long sequences&amp;quot;. &#039;&#039;Nucleic Acids Research&#039;&#039;. 11 January 2025.&amp;lt;/ref&amp;gt; sequence classification, RNA-RNA interaction prediction, and RNA structure prediction.&amp;lt;ref&amp;gt;Wang, Ning. &amp;quot;Multi-purpose RNA language modelling with motif-aware pretraining and type-guided fine-tuning&amp;quot;. &#039;&#039;Nature Machine Intelligence&#039;&#039;. 13 May 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Properties ==&lt;br /&gt;
=== Scaling laws ===&lt;br /&gt;
&#039;&#039;Main article: [[Neural scaling law]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The performance of an LLM after pretraining largely depends on:&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt;: cost of pretraining (the total amount of compute used),&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt;: size of the [[artificial neural network]] itself, such as the number of parameters (i.e. the number of neurons in its layers, the number of weights between them, and the biases),&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt;: size of its pretraining dataset (i.e. number of tokens in corpus).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Scaling laws&#039;&#039; are [[empirical statistical laws]] that predict LLM performance based on such factors. One particular scaling law (&amp;quot;[[Chinchilla AI|Chinchilla scaling]]&amp;quot;) for LLMs autoregressively trained for one epoch, with a cosine [[learning rate]] schedule, states that:&amp;lt;ref name=&amp;quot;fJta3&amp;quot;&amp;gt;Hoffmann, Jordan. [https://dl.acm.org/doi/10.5555/3600270.3602446 &amp;quot;Training Compute-Optimal Large Language Models&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2022-03-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{cases}&lt;br /&gt;
C = C_0 ND \\[6pt]&lt;br /&gt;
L = \frac{A}{N^\alpha} + \frac{B}{D^\beta} + L_0&lt;br /&gt;
\end{cases}&amp;lt;/math&amp;gt; where the variables are&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt; is the cost of training the model, in [[FLOPS|FLOPs]].&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt; is the number of parameters in the model.&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;D&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt; is the number of tokens in the training set.&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt; is the average negative log-likelihood loss per token ([[Nat (unit)|nats]]/token), achieved by the trained LLM on the test dataset.&lt;br /&gt;
&lt;br /&gt;
and the statistical hyper-parameters are&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt; C_0 = 6&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt;, meaning that it costs 6 FLOPs per parameter to train on one token. Note that training cost is much higher than inference cost, where it costs 1 to 2 FLOPs per parameter to infer on one token.&lt;br /&gt;
* &amp;lt;small&amp;gt;&amp;lt;math&amp;gt;\alpha = 0.34, \beta = 0.28, A = 406.4, B = 410.7, L_0 = 1.69&amp;lt;/math&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
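&lt;br /&gt;
These two equations can be combined to split a fixed compute budget optimally between parameters and data. The following minimal Python sketch uses only the fitted constants quoted above; the &amp;lt;math&amp;gt;10^{23}&amp;lt;/math&amp;gt; FLOP budget is an arbitrary illustrative value, not a figure from the paper. It minimizes &amp;lt;math&amp;gt;L&amp;lt;/math&amp;gt; subject to &amp;lt;math&amp;gt;C = C_0 ND&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Chinchilla-style compute-optimal allocation (illustrative sketch).&lt;br /&gt;
# Constants are the fitted values quoted above; the budget is an example.&lt;br /&gt;
ALPHA, BETA = 0.34, 0.28&lt;br /&gt;
A, B, L0 = 406.4, 410.7, 1.69&lt;br /&gt;
&lt;br /&gt;
def optimal_allocation(C):&lt;br /&gt;
    # Minimize L(N, D) = A/N**ALPHA + B/D**BETA subject to C = 6*N*D.&lt;br /&gt;
    # Substituting D = C/(6*N) and setting dL/dN = 0 yields a closed form:&lt;br /&gt;
    N = (ALPHA * A / (BETA * B)) ** (1 / (ALPHA + BETA)) * (C / 6) ** (BETA / (ALPHA + BETA))&lt;br /&gt;
    D = C / (6 * N)&lt;br /&gt;
    L = A / N**ALPHA + B / D**BETA + L0&lt;br /&gt;
    return N, D, L&lt;br /&gt;
&lt;br /&gt;
N, D, L = optimal_allocation(1e23)  # 1e23 FLOPs: an arbitrary example budget&lt;br /&gt;
print(N, D, L)  # optimal parameters, optimal tokens, predicted loss (nats/token)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;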
&lt;br /&gt;
=== Emergent abilities ===&lt;br /&gt;
[[File:LLM emergent benchmarks.png|thumb|At point(s) referred to as [[Broken Neural Scaling Law|breaks]],&amp;lt;ref name=&amp;quot;IYm4Q&amp;quot;/&amp;gt; the lines change their slopes, appearing on a linear-log plot as a series of linear segments connected by arcs.]]&lt;br /&gt;
Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a [[linear extrapolation]] of performance achieved by smaller models. However, this linearity may be punctuated by &amp;quot;[[Broken Neural Scaling Law|break(s)]]&amp;quot;&amp;lt;ref name=&amp;quot;IYm4Q&amp;quot;&amp;gt;Caballero, Ethan. &amp;quot;Broken Neural Scaling Laws&amp;quot;.&amp;lt;/ref&amp;gt; in the scaling law, where the slope of the line changes abruptly, and where larger models acquire &amp;quot;emergent abilities&amp;quot;.&amp;lt;ref name=&amp;quot;emergentpaper&amp;quot;&amp;gt;Wei, Jason. [https://openreview.net/forum?id=yzkSU5zdwD &amp;quot;Emergent Abilities of Large Language Models&amp;quot;]. &#039;&#039;Transactions on Machine Learning Research&#039;&#039;. 31 August 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;JM6s1&amp;quot;&amp;gt;[https://www.jasonwei.net/blog/emergence &amp;quot;137 emergent abilities of large language models&amp;quot;]. &#039;&#039;Jason Wei&#039;&#039;.&amp;lt;/ref&amp;gt; They arise from the complex interaction of the model&#039;s components and are not explicitly programmed or designed.&amp;lt;ref name=&amp;quot;Bowman&amp;quot;&amp;gt;Bowman, Samuel R.. [https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11556011/400182/Eight-Things-to-Know-about-Large-Language-Models &amp;quot;Eight Things to Know about Large Language Models&amp;quot;]. &#039;&#039;Critical AI&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One of the emergent abilities is [[in-context learning]] from example demonstrations.&amp;lt;ref name=&amp;quot;Hahn_20230314&amp;quot;&amp;gt;Hahn, Michael. &amp;quot;A Theory of Emergent In-Context Learning as Implicit Structure Induction&amp;quot;. 2023-03-14.&amp;lt;/ref&amp;gt; In-context learning is involved in tasks such as:&lt;br /&gt;
* arithmetic&lt;br /&gt;
* decoding the [[International Phonetic Alphabet]]&lt;br /&gt;
* unscrambling a word&#039;s letters&lt;br /&gt;
* disambiguating word-in-context datasets&amp;lt;ref name=&amp;quot;emergentpaper&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;57FEA&amp;quot;&amp;gt;Pilehvar, Mohammad Taher. [https://aclanthology.org/N19-1128 &amp;quot;Proceedings of the 2019 Conference of the North&amp;quot;]. &#039;&#039;Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)&#039;&#039;. June 2019.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;TEIkA&amp;quot;&amp;gt;[https://pilehvar.github.io/wic/ &amp;quot;WiC: The Word-in-Context Dataset&amp;quot;]. &#039;&#039;pilehvar.github.io&#039;&#039;.&amp;lt;/ref&amp;gt; &lt;br /&gt;
* converting spatial words, [[cardinal direction]]s (for example, replying &amp;quot;northeast&amp;quot; in response to a 3x3 grid of 8 zeros and a 1 in the top-right), and color terms represented in text&amp;lt;ref name=&amp;quot;zgy1i&amp;quot;&amp;gt;Patel, Roma. [https://openreview.net/forum?id=gJcEM8sxHK &amp;quot;Mapping Language Models to Grounded Conceptual Spaces&amp;quot;]. &#039;&#039;ICLR&#039;&#039;. 2021-10-06.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[chain-of-thought prompting]]: In a 2022 research paper, chain-of-thought prompting only improved the performance for models that had at least 62B parameters. Smaller models perform better when prompted to answer immediately, without chain of thought.&amp;lt;ref name=&amp;quot;Imb98&amp;quot;&amp;gt;&#039;&#039;[https://www.notion.so/A-Closer-Look-at-Large-Language-Models-Emergent-Abilities-493876b55df5479d80686f68a1abd72f A Closer Look at Large Language Models Emergent Abilities] &#039;&#039; (Yao Fu, Nov 20, 2022)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* identifying offensive content in paragraphs of [[Hinglish]] (a combination of Hindi and English), and generating a similar English equivalent of [[Kiswahili]] proverbs.&amp;lt;ref name=&amp;quot;CeQVF&amp;quot;&amp;gt;Ornes, Stephen. [https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/ &amp;quot;The Unpredictable Abilities Emerging From Large AI Models&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. March 16, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Schaeffer &#039;&#039;et al.&#039;&#039; argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a [[Neural scaling law|smooth scaling law]]. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.&amp;lt;ref name=&amp;quot;C775b&amp;quot;&amp;gt;Schaeffer, Rylan. &amp;quot;Are Emergent Abilities of Large Language Models a Mirage?&amp;quot;. &#039;&#039;NeurIPS&#039;&#039;. 2023-04-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; be the parameter count, and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; be the performance of the model.&lt;br /&gt;
{{smalldiv|1=&lt;br /&gt;
* When &amp;lt;math&amp;gt;y = \text{average } \Pr(\text{correct token})&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;(\log x, y)&amp;lt;/math&amp;gt; is an exponential curve (before it hits the plateau at one), which looks like emergence.&lt;br /&gt;
* When &amp;lt;math&amp;gt;y = \text{average } \log(\Pr(\text{correct token}))&amp;lt;/math&amp;gt;, then the &amp;lt;math&amp;gt;(\log x, y)&amp;lt;/math&amp;gt; plot is a straight line (before it hits the plateau at zero), which does not look like emergence.&lt;br /&gt;
* When &amp;lt;math&amp;gt;y = \text{average } \Pr(\text{the most likely token is correct})&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;(\log x, y)&amp;lt;/math&amp;gt; is a step-function, which looks like emergence.&lt;br /&gt;
}}&lt;br /&gt;
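&lt;br /&gt;
This argument can be reproduced numerically. In the following Python sketch the per-token accuracy curve is an invented placeholder rather than a fitted law; it improves smoothly with scale, yet a metric that requires all ten tokens of an answer to be correct appears to switch on abruptly:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy illustration, after Schaeffer et al.: a hard all-or-nothing metric&lt;br /&gt;
# can make smooth per-token improvement look like sudden emergence.&lt;br /&gt;
# The accuracy curve below is an invented placeholder, not a real model fit.&lt;br /&gt;
&lt;br /&gt;
def per_token_accuracy(params):&lt;br /&gt;
    return min(1.0, 0.5 * (params / 1e6) ** 0.05)  # smooth power-law growth&lt;br /&gt;
&lt;br /&gt;
for params in [1e6, 1e7, 1e8, 1e9, 1e10, 1e11]:&lt;br /&gt;
    p = per_token_accuracy(params)&lt;br /&gt;
    print(params, round(p, 3), round(p ** 10, 3))  # scale, per-token, exact-match&lt;br /&gt;
# The middle column rises gradually; the right column (all 10 answer tokens&lt;br /&gt;
# correct) stays near zero for small models and then climbs sharply.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;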
&lt;br /&gt;
== Interpretation ==&lt;br /&gt;
=== Mechanistic interpretability ===&lt;br /&gt;
[[Mechanistic interpretability]] seeks to precisely identify and understand how individual neurons or [[circuit (neural network)|circuits]] within LLMs produce specific behaviors or outputs. By reverse-engineering model components at a granular level, researchers aim to detect and mitigate safety concerns such as emergent harmful behaviors, biases, deception, or unintended goal pursuit before deployment. Mechanistic interpretability research has been conducted at organizations like Anthropic and OpenAI, although understanding the inner workings of LLMs remains difficult.&lt;br /&gt;
&lt;br /&gt;
The reverse-engineering may lead to the discovery of algorithms that approximate inferences performed by an LLM. For instance, Nanda et al. trained small transformers on [[Modular arithmetic|modular arithmetic addition]]. The resulting models were reverse-engineered, and it turned out that they used a [[discrete Fourier transform]].&amp;lt;ref name=&amp;quot;oYGlo&amp;quot;&amp;gt;Nanda, Neel. &amp;quot;Progress measures for grokking via mechanistic interpretability&amp;quot;. 2023-01-01.&amp;lt;/ref&amp;gt; The training of the model also highlighted a phenomenon called [[Grokking (machine learning)|grokking]], in which the model initially memorizes the training set ([[overfitting]]) and later suddenly learns to actually perform the calculation.&amp;lt;ref&amp;gt;Ananthaswamy, Anil. [https://www.quantamagazine.org/how-do-machines-grok-data-20240412/ &amp;quot;How Do Machines &#039;Grok&#039; Data?&amp;quot;]. &#039;&#039;[[Quanta Magazine]]&#039;&#039;. 2024-04-12.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Understanding and intelligence ===&lt;br /&gt;
&#039;&#039;See also: [[Philosophy of artificial intelligence|Artificial consciousness]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs &amp;quot;could (ever) understand natural language in some nontrivial sense&amp;quot;.&amp;lt;ref name=&amp;quot;debate understanding&amp;quot;&amp;gt;Mitchell, Melanie. &amp;quot;The debate over understanding in AI&#039;s large language models&amp;quot;. &#039;&#039;Proceedings of the National Academy of Sciences&#039;&#039;. 28 March 2023.&amp;lt;/ref&amp;gt; Proponents of &amp;quot;LLM understanding&amp;quot; believe that some LLM abilities, such as mathematical reasoning, imply an ability to [[natural language understanding|&amp;quot;understand&amp;quot;]] certain concepts. A Microsoft team argued in 2023 that GPT-4 &amp;quot;can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more&amp;quot; and that GPT-4 &amp;quot;could reasonably be viewed as an early (yet still incomplete) version of an [[artificial general intelligence]] system&amp;quot;: &amp;quot;Can one reasonably say that a system that passes exams for software engineering candidates is not &#039;&#039;really&#039;&#039; intelligent?&amp;quot;&amp;lt;ref name=&amp;quot;O8Upd&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/05/16/technology/microsoft-ai-human-reasoning.html &amp;quot;Microsoft Says New A.I. Shows Signs of Human Reasoning&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 16 May 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;microsoft sparks&amp;quot;&amp;gt;Bubeck, Sébastien. &amp;quot;Sparks of Artificial General Intelligence: Early experiments with GPT-4&amp;quot;. 2023.&amp;lt;/ref&amp;gt; [[Ilya Sutskever]] argues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation.&amp;lt;ref&amp;gt;[https://www.fastcompany.com/91211163/anthropic-ceo-dario-amodei-pens-a-smart-look-at-our-ai-future &amp;quot;Anthropic CEO Dario Amodei pens a smart look at our AI future&amp;quot;]. &#039;&#039;Fast Company&#039;&#039;. October 17, 2024.&amp;lt;/ref&amp;gt; Some researchers characterize LLMs as &amp;quot;alien intelligence&amp;quot;.&amp;lt;ref name=&amp;quot;rEEmH&amp;quot;&amp;gt;[https://www.zdnet.com/article/chatgpt-is-more-like-an-alien-intelligence-than-a-human-brain-says-futurist/ &amp;quot;ChatGPT is more like an &#039;alien intelligence&#039; than a human brain, says futurist&amp;quot;]. &#039;&#039;ZDNET&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;new yorker kind of mind&amp;quot;&amp;gt;Newport, Cal. [https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have &amp;quot;What Kind of Mind Does ChatGPT Have?&amp;quot;]. &#039;&#039;The New Yorker&#039;&#039;. 13 April 2023.&amp;lt;/ref&amp;gt; For example, Conjecture CEO [[Connor Leahy]] considers untuned LLMs to be like inscrutable alien &amp;quot;[[Shoggoth]]s&amp;quot;, and believes that RLHF tuning creates a &amp;quot;smiling facade&amp;quot; obscuring the inner workings of the LLM: &amp;quot;If you don&#039;t push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding.&amp;quot;&amp;lt;ref name=&amp;quot;rAFIZ&amp;quot;&amp;gt;Roose, Kevin. 
[https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html &amp;quot;Why an Octopus-like Creature Has Come to Symbolize the State of A.I.&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 30 May 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;4luKE&amp;quot;&amp;gt;[https://time.com/6271657/a-to-z-of-artificial-intelligence/ &amp;quot;The A to Z of Artificial Intelligence&amp;quot;]. &#039;&#039;Time Magazine&#039;&#039;. 13 April 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In contrast, some skeptics of LLM understanding believe that existing LLMs are &amp;quot;simply remixing and recombining existing writing&amp;quot;,&amp;lt;ref name=&amp;quot;new yorker kind of mind&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Sekrst, Kristina. &amp;quot;The Illusion Engine: The Quest for Machine Consciousness&amp;quot;. Springer. 2025.&amp;lt;/ref&amp;gt; a phenomenon known as [[stochastic parrot]], or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability.&amp;lt;ref name=&amp;quot;debate understanding&amp;quot;/&amp;gt; For example, GPT-4 has natural deficits in planning and in real-time learning.&amp;lt;ref name=&amp;quot;microsoft sparks&amp;quot;/&amp;gt; Generative LLMs have been observed to confidently assert claims of fact which do not seem to be [[Justification (epistemology)|justified]] by their [[training data]], a phenomenon which has been termed &amp;quot;[[Hallucination (artificial intelligence)|hallucination]]&amp;quot;.&amp;lt;ref name=&amp;quot;hallucination-survey&amp;quot;&amp;gt;Ji, Ziwei. [https://dl.acm.org/doi/pdf/10.1145/3571730 &amp;quot;Survey of Hallucination in Natural Language Generation&amp;quot;]. &#039;&#039;ACM Computing Surveys&#039;&#039;. November 2022.&amp;lt;/ref&amp;gt; Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input.&amp;lt;ref&amp;gt;Varshney, Neeraj. &amp;quot;A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation&amp;quot;. 2023.&amp;lt;/ref&amp;gt; Neuroscientist [[Terrence Sejnowski]] has argued that &amp;quot;The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate&amp;quot;.&amp;lt;ref name=&amp;quot;debate understanding&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Efforts to reduce or compensate for hallucinations have employed [[automated reasoning]], [[retrieval-augmented generation]] (RAG), [[fine-tuning (deep learning)|fine-tuning]], and other methods.&amp;lt;ref name=&amp;quot;Lin-2025-02-05-WSJ&amp;quot;&amp;gt;Lin, Belle. [https://www.wsj.com/articles/why-amazon-is-betting-on-automated-reasoning-to-reduce-ais-hallucinations-b838849e &amp;quot;Why Amazon is Betting on &#039;Automated Reasoning&#039; to Reduce AI&#039;s Hallucinations: The tech giant says an obscure field that combines AI and math can mitigate—but not completely eliminate—AI&#039;s propensity to provide wrong answers&amp;quot;]. &#039;&#039;Wall Street Journal&#039;&#039;. 2025-02-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The question of whether LLMs exhibit intelligence or understanding has two main aspects: the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language.&amp;lt;ref name=&amp;quot;debate understanding&amp;quot;/&amp;gt; These aspects of language as a model of [[cognition]] have been developed in the field of [[cognitive linguistics]]. American linguist [[George Lakoff]] presented a &#039;&#039;neural theory of language&#039;&#039; (NTL)&amp;lt;ref&amp;gt;Lakoff, George. &amp;quot;Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Philosophy; Appendix: The Neural Theory of Language Paradigm&amp;quot;. New York Basic Books.&amp;lt;/ref&amp;gt; as a [[Cognitive linguistics#Computational approaches|computational basis]] for using language as a model of learning tasks and understanding. [https://www.icsi.berkeley.edu/icsi/projects/ai/ntl The NTL model] outlines how specific neural structures of the human brain shape the nature of thought and language, and in turn which computational properties of such neural systems can be applied to model thought and language in a computer system. After a framework for modeling language in computer systems was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book &#039;&#039;[[The Language Myth|The Language Myth: Why Language Is Not An Instinct]]&#039;&#039;, British cognitive linguist and digital communication technologist [[Vyvyan Evans]] mapped out the role of [[probabilistic context-free grammar]] (PCFG) in enabling [[Natural language processing#Cognition|NLP to model cognitive patterns]] and generate human-like language.&amp;lt;ref&amp;gt;Evans, Vyvyan. &amp;quot;The Language Myth&amp;quot;. Cambridge University Press.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Friston, Karl J. &amp;quot;Active Inference: The Free Energy Principle in Mind, Brain, and Behavior; Chapter 4 The Generative Models of Active Inference&amp;quot;. The MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Evaluation ==&lt;br /&gt;
&#039;&#039;See also: [[LLM-as-a-Judge]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Perplexity ===&lt;br /&gt;
The canonical measure of the performance of any language model is its [[perplexity]] on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\log(\text{Perplexity}) = -\frac{1}{N} \sum_{i=1}^N \log(\Pr(\text{token}_i \mid \text{context for token}_i))&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here, &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; is the number of tokens in the text corpus, and &amp;quot;context for token &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;&amp;quot; depends on the specific type of LLM. If the LLM is autoregressive, then &amp;quot;context for token &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;&amp;quot; is the segment of text appearing before token &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;. If the LLM is masked, then &amp;quot;context for token &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;&amp;quot; is the segment of text surrounding token &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;.&lt;br /&gt;
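&lt;br /&gt;
As a minimal worked example of the formula above (the per-token probabilities here are invented placeholders, not real model outputs):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
# Pr(token_i | context for token_i) for a 5-token text; invented values.&lt;br /&gt;
token_probs = [0.25, 0.1, 0.6, 0.05, 0.33]&lt;br /&gt;
&lt;br /&gt;
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)&lt;br /&gt;
perplexity = math.exp(avg_nll)&lt;br /&gt;
print(avg_nll, perplexity)  # average negative log-likelihood, its exponential&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;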
&lt;br /&gt;
Because language models may [[overfit]] to training data, models are usually evaluated by their perplexity on a [[test set]].&amp;lt;ref name=&amp;quot;jm&amp;quot;/&amp;gt; This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.&amp;lt;ref name=&amp;quot;few-shot-learners3&amp;quot;&amp;gt;Brown, Tom B.. [https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf &amp;quot;Language Models are Few-Shot Learners&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. Dec 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Measures====&lt;br /&gt;
In [[information theory]], the concept of [[Entropy (information theory)|entropy]] is intricately linked to perplexity, a relationship notably established by [[Claude Shannon]].&amp;lt;ref name=&amp;quot;Huyen&amp;quot;&amp;gt;Huyen, Chip. [https://thegradient.pub/understanding-evaluation-metrics-for-language-models/ &amp;quot;Evaluation Metrics for Language Modeling&amp;quot;]. &#039;&#039;The Gradient&#039;&#039;. October 18, 2019.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shannon, Claude E.. [https://doi.org/10.1002/j.1538-7305.1948.tb01338.x &amp;quot;A Mathematical Theory of Communication&amp;quot;]. &#039;&#039;Bell System Technical Journal&#039;&#039;. 1948.&amp;lt;/ref&amp;gt; This relationship is mathematically expressed as &amp;lt;math&amp;gt;\text{Entropy} = \log_2(\text{Perplexity})&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.&lt;br /&gt;
&lt;br /&gt;
Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, due to the variance in tokenization methods across different LLMs, BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.&lt;br /&gt;
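&lt;br /&gt;
A small worked example of these unit conversions, in which both the loss value and the tokens-per-word ratio are illustrative assumptions rather than measurements:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
nats_per_token = 1.7                 # assumed cross-entropy loss in nats/token&lt;br /&gt;
bpt = nats_per_token / math.log(2)   # convert nats per token to bits per token&lt;br /&gt;
tokens_per_word = 1.3                # assumed corpus average; tokenizer-dependent&lt;br /&gt;
bpw = bpt * tokens_per_word          # convert bits per token to bits per word&lt;br /&gt;
print(bpt, bpw)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;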
&lt;br /&gt;
In the evaluation and comparison of language models, [[cross-entropy]] is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model&#039;s enhanced capability for compression. This, in turn, reflects the model&#039;s proficiency in making accurate predictions.&lt;br /&gt;
&lt;br /&gt;
Due to their ability to accurately predict the next token, LLMs are highly capable at [[lossless compression]]. A 2023 study by DeepMind showed that the model [[Chinchilla (language model)|Chinchilla]], despite being trained primarily on text, was able to compress [[ImageNet]] to 43% of its size, beating PNG, which compressed it to 58%.&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2023/09/ai-language-models-can-exceed-png-and-flac-in-lossless-compression-says-study/ &amp;quot;AI language models can exceed PNG and FLAC in lossless compression, says study&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2023-09-28.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Benchmarks ===&lt;br /&gt;
[[Language model benchmark|Benchmarks]] are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, [[commonsense reasoning]], question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method.&amp;lt;ref&amp;gt;[https://github.com/openai/simple-evals &amp;quot;openai/simple-evals&amp;quot;]. OpenAI. 2024-05-28.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://github.com/openai/evals &amp;quot;openai/evals&amp;quot;]. OpenAI. 2024-05-28.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A question-answering benchmark is termed &amp;quot;open book&amp;quot; if the model&#039;s prompt includes text from which the expected answer can be derived (for example, a question such as &amp;quot;Have the San Jose Sharks won the Stanley Cup?&amp;quot; could be combined with text that includes the sentence &amp;quot;The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016.&amp;quot;&amp;lt;ref name=&amp;quot;boolq&amp;quot;/&amp;gt;). Otherwise, the task is considered &amp;quot;closed book&amp;quot;, and the model must draw solely on its training.&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Zhou, Kun. &amp;quot;A Survey of Large Language Models&amp;quot;.&amp;lt;/ref&amp;gt; Examples include GLUE, SuperGLUE, [[MMLU]], BIG-bench, HELM, and HLE ([[Humanity&#039;s Last Exam]]).&amp;lt;ref name=&amp;quot;Huyen&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs),&amp;lt;ref&amp;gt;[https://aclanthology.org/2020.emnlp-main.154/ &amp;quot;CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models&amp;quot;]. Association for Computational Linguistics. November 2020.&amp;lt;/ref&amp;gt; StereoSet,&amp;lt;ref&amp;gt;[https://aclanthology.org/2021.acl-long.416/ &amp;quot;StereoSet: Measuring stereotypical bias in pretrained language models&amp;quot;]. Association for Computational Linguistics. August 2021.&amp;lt;/ref&amp;gt; and Parity Benchmark.&amp;lt;ref&amp;gt;&amp;quot;Parity benchmark for measuring bias in LLMs&amp;quot;. &#039;&#039;AI and Ethics&#039;&#039;. 17 December 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as [[PolitiFact]] and [[Snopes]]. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers.&amp;lt;ref&amp;gt;Caramancion, Kevin Matthe. &amp;quot;2023 IEEE Future Networks World Forum (FNWF)&amp;quot;. IEEE. 2023-11-13.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An earlier standard evaluated a fine-tuned model on a held-out portion of the evaluation dataset. It has since become more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of &#039;&#039;n&#039;&#039; in &#039;&#039;n&#039;&#039;-shot prompting), as in the sketch below.&lt;br /&gt;
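&lt;br /&gt;
The following Python sketch assembles such an &#039;&#039;n&#039;&#039;-shot prompt. The first question-answer pair echoes the BoolQ-style example given in the datasets section below; the second pair and the exact formatting conventions are invented assumptions, as these vary between papers:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch of n-shot prompt assembly for evaluation; the formatting&lt;br /&gt;
# conventions here are an assumption, not a standard.&lt;br /&gt;
&lt;br /&gt;
shots = [&lt;br /&gt;
    (&amp;quot;Have the San Jose Sharks won the Stanley Cup?&amp;quot;, &amp;quot;No&amp;quot;),&lt;br /&gt;
    (&amp;quot;Is the Pacific Ocean larger than the Atlantic?&amp;quot;, &amp;quot;Yes&amp;quot;),  # invented pair&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
def build_prompt(question, shots):&lt;br /&gt;
    lines = []&lt;br /&gt;
    for q, a in shots:&lt;br /&gt;
        lines.append(&amp;quot;Q: &amp;quot; + q)&lt;br /&gt;
        lines.append(&amp;quot;A: &amp;quot; + a)&lt;br /&gt;
    lines.append(&amp;quot;Q: &amp;quot; + question)&lt;br /&gt;
    lines.append(&amp;quot;A:&amp;quot;)  # the model completes from here&lt;br /&gt;
    return &amp;quot;\n&amp;quot;.join(lines)&lt;br /&gt;
&lt;br /&gt;
print(build_prompt(&amp;quot;Can you teach an old dog new tricks?&amp;quot;, shots))  # 2-shot&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;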
&lt;br /&gt;
In addition to standard NLP benchmarks, LLMs have been evaluated as substitutes for human annotators. Several studies find that models such as GPT-3.5 and GPT-4 can outperform crowd workers or student coders on a range of text-annotation tasks, including moderation and classification of political content in English and Spanish news.&amp;lt;ref name=&amp;quot;Bermejo2025&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Gilardi2023&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Hill climbing]] on benchmark scores is a dominant optimization strategy: it gives rapid incremental performance gains, but raises concerns of [[overfitting]] to benchmarks rather than achieving genuine [[Generalization (machine learning)|generalization]] or robust capability improvements.&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Datasets ====&lt;br /&gt;
Typical datasets consist of pairs of questions and correct answers, for example, (&amp;quot;Have the San Jose Sharks won the Stanley Cup?&amp;quot;, &amp;quot;No&amp;quot;).&amp;lt;ref name=&amp;quot;boolq&amp;quot;&amp;gt;Clark, Christopher. [https://aclanthology.org/N19-1300/ &amp;quot;BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions&amp;quot;]. &#039;&#039;ACL&#039;&#039;.&amp;lt;/ref&amp;gt; Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: &amp;quot;Alice was friends with Bob. Alice went to visit her friend, ____&amp;quot;.&amp;lt;ref name=&amp;quot;few-shot-learners&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality.&amp;lt;ref&amp;gt;[https://imbue.com/research/70b-evals/ &amp;quot;Sanitized open-source datasets for natural language and code understanding: how we evaluated our 70B model&amp;quot;]. &#039;&#039;imbue.com&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Adversarial evaluations ====&lt;br /&gt;
LLMs&#039; rapid improvement regularly renders benchmarks obsolete, with the models exceeding the performance of human annotators.&amp;lt;ref name=&amp;quot;bigbench&amp;quot;&amp;gt;Srivastava, Aarohi. &amp;quot;Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models&amp;quot;. &#039;&#039;TMLR&#039;&#039;.&amp;lt;/ref&amp;gt; In addition, &amp;quot;shortcut learning&amp;quot; allows AIs to &amp;quot;cheat&amp;quot; on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question.&amp;lt;ref name=&amp;quot;debate understanding&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Niven, Timothy. [https://aclanthology.org/P19-1459/ &amp;quot;Probing Neural Network Comprehension of Natural Language Arguments&amp;quot;]. &#039;&#039;ACL&#039;&#039;. 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer &amp;quot;No&amp;quot; to the question &amp;quot;Can you teach an old dog new tricks?&amp;quot; because of its exposure to the English idiom &#039;&#039;[[wikt:you can&#039;t teach an old dog new tricks|you can&#039;t teach an old dog new tricks]]&#039;&#039;, even though this is not literally true.&amp;lt;ref name=&amp;quot;truthfulqa&amp;quot;&amp;gt;Lin, Stephanie. &amp;quot;TruthfulQA: Measuring How Models Mimic Human Falsehoods&amp;quot;. &#039;&#039;ACL&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions:&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...&lt;br /&gt;
&lt;br /&gt;
# demonstrates how to increase efficient exercise work by running up and down balls.&lt;br /&gt;
# moves all his arms and legs and builds up a lot of muscle.&lt;br /&gt;
# then plays the ball and we see a graphics and hedge trimming demonstration.&lt;br /&gt;
# performs sit ups while on the ball and talking.&amp;lt;ref name=&amp;quot;hellaswag&amp;quot;&amp;gt;Zellers, Rowan. &amp;quot;HellaSwag: Can a Machine Really Finish Your Sentence?&amp;quot;. &#039;&#039;ACL&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
[[BERT (language model)|BERT]] selects 2 as the most likely completion, though the correct answer is 4.&amp;lt;ref name=&amp;quot;hellaswag&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Limitations and challenges ==&lt;br /&gt;
Despite sophisticated architectures and massive scale, large language models exhibit persistent and well-documented limitations that constrain their deployment in high-stakes applications.&lt;br /&gt;
&lt;br /&gt;
=== Hallucinations ===&lt;br /&gt;
[[Hallucination (artificial intelligence)|Hallucinations]] represent a fundamental challenge, wherein models generate syntactically fluent text that appears factually sound but is inconsistent with their training data or factually incorrect. These hallucinations arise partly through memorization of training data combined with extrapolation beyond factual boundaries, with evaluations demonstrating that models can output verbatim passages from training data when subjected to specific prompting sequences.&amp;lt;ref&amp;gt;[https://www.usenix.org/system/files/sec21-carlini-extracting.pdf &amp;quot;Extracting Training Data from Large Language Models&amp;quot;]. &#039;&#039;USENIX Security&#039;&#039;. 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Algorithmic bias ===&lt;br /&gt;
&#039;&#039;Main article: [[Algorithmic bias]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups.&amp;lt;ref name=&amp;quot;:8&amp;quot;&amp;gt;Xu, Weijie. &amp;quot;Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective&amp;quot;. &#039;&#039;COLM&#039;&#039;. 2025-06-28.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Gender bias manifests through stereotypical occupational associations, wherein models disproportionately assign [[nursing]] roles to women and [[engineering]] roles to men, reflecting systematic imbalances in training data demographics.&amp;lt;ref&amp;gt;[https://proceedings.neurips.cc/paper_files/paper/2016/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf &amp;quot;Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2016.&amp;lt;/ref&amp;gt;{{Better source needed|reason=Predates large language models (published in 2016)|date=October 2025}} Language-based bias emerges from overrepresentation of English text in training corpora, which systematically downplays non-English perspectives and imposes English-centric worldviews through default response patterns.&amp;lt;ref name=&amp;quot;:1&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the dominance of English-language content in LLM training data, models tend to favor English-language perspectives over those from minority languages. This bias is particularly evident when responding to English queries, where models may present Western interpretations of concepts from other cultures, such as Eastern religious practices.&amp;lt;ref&amp;gt;Luo, Queenie. [https://cacm.acm.org/practice/a-perspectival-mirror-of-the-elephant/ &amp;quot;A Perspectival Mirror of the Elephant&amp;quot;]. &#039;&#039;Communications of the ACM&#039;&#039;. 2024-07-22.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Stereotyping ====&lt;br /&gt;
AI models can reinforce a wide range of stereotypes due to generalization, including those based on gender, ethnicity, age, nationality, religion, or occupation.&amp;lt;ref&amp;gt;Hofmann, Valentin. &amp;quot;AI generates covertly racist decisions about people based on their dialect&amp;quot;. &#039;&#039;Nature&#039;&#039;. 2024-09-05.&amp;lt;/ref&amp;gt; When replacing human representatives, this can lead to outputs that homogenize, or generalize groups of people.&amp;lt;ref&amp;gt;Wang, Angelina. &amp;quot;Large language models that replace human participants can harmfully misportray and flatten identity groups&amp;quot;. &#039;&#039;Nature Machine Intelligence&#039;&#039;. 17 February 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Cheng, Myra. &amp;quot;Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models&amp;quot;. 2023-05-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023, LLMs assigned roles and characteristics based on traditional gender norms.&amp;lt;ref name=&amp;quot;:8&amp;quot;/&amp;gt; For example, models might associate nurses or secretaries predominantly with women and engineers or CEOs with men due to the frequency of these associations in documented reality.&amp;lt;ref&amp;gt;Kotek, Hadas. [https://dl.acm.org/doi/10.1145/3582269.3615599 &amp;quot;Proceedings of the ACM Collective Intelligence Conference&amp;quot;]. Association for Computing Machinery. 2023-11-05.&amp;lt;/ref&amp;gt; In 2025, further research showed that labs train models to balance bias, but that explicit testing for bias places a model in a &amp;quot;test mode&amp;quot;, shifting the natural distribution of its bias on prompts that do not include gender-specific keywords.&amp;lt;ref&amp;gt;Gao, Bufan. &amp;quot;Measuring Bias or Measuring the Task: Understanding the Brittle Nature of LLM Gender Biases&amp;quot;. 2025-09-10.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Selection bias ====&lt;br /&gt;
Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias: that is, the model assigns a higher a priori probability to specific answer tokens (such as &amp;quot;A&amp;quot;) when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model&#039;s performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.&amp;lt;ref&amp;gt;Choi, Hyeong Kyu. &amp;quot;Mitigating Selection Bias with Node Pruning and Auxiliary Options&amp;quot;. 2024-09-27.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Zheng, Chujie. &amp;quot;Large Language Models Are Not Robust Multiple Choice Selectors&amp;quot;. 2023-09-07.&amp;lt;/ref&amp;gt;&lt;br /&gt;
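&lt;br /&gt;
Selection bias can be probed by permuting the order of answer options and checking whether accuracy tracks position rather than content. In the following Python sketch, the function model_choice() is a hypothetical stand-in for a real model call, deliberately implemented here as a maximally position-biased chooser:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch of a selection-bias probe: rotate where the correct answer sits&lt;br /&gt;
# and check whether accuracy depends on position alone.&lt;br /&gt;
# model_choice() is a hypothetical stand-in for an actual LLM call.&lt;br /&gt;
from itertools import permutations&lt;br /&gt;
&lt;br /&gt;
def model_choice(options):&lt;br /&gt;
    return 0  # placeholder: a maximally biased model that always answers &amp;quot;A&amp;quot;&lt;br /&gt;
&lt;br /&gt;
correct = &amp;quot;Paris&amp;quot;&lt;br /&gt;
distractors = [&amp;quot;Lyon&amp;quot;, &amp;quot;Nice&amp;quot;, &amp;quot;Lille&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
accuracy_by_position = {}&lt;br /&gt;
for opts in permutations([correct] + distractors):&lt;br /&gt;
    hit = opts[model_choice(list(opts))] == correct&lt;br /&gt;
    accuracy_by_position.setdefault(opts.index(correct), []).append(hit)&lt;br /&gt;
&lt;br /&gt;
for pos, hits in sorted(accuracy_by_position.items()):&lt;br /&gt;
    print(pos, sum(hits) / len(hits))&lt;br /&gt;
# A robust model would score equally at every position; here accuracy is&lt;br /&gt;
# 1.0 at position 0 and 0.0 elsewhere, the signature of pure token bias.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;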
&lt;br /&gt;
==== Political bias ====&lt;br /&gt;
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.&amp;lt;ref&amp;gt;Heikkilä, Melissa. [https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/ &amp;quot;AI language models are rife with different political biases&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. August 7, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Safety ==&lt;br /&gt;
[[AI safety]] as a professional discipline prioritizes systematic identification and mitigation of operational risks across model architecture, training data, and deployment governance, and it emphasizes engineering and policy interventions over media framings that foreground speculative existential scenarios.&amp;lt;ref&amp;gt;Amodei, Dario. &amp;quot;Concrete Problems in AI Safety&amp;quot;. 2016-06-21.&amp;lt;/ref&amp;gt;&amp;lt;ref name=bhaa/&amp;gt; As of 2025, prompt injection represents a significant risk to consumers and businesses using agentic features with access to their private data.&amp;lt;ref&amp;gt;Lyons, Jessica. [https://www.theregister.com/2025/09/26/salesforce_agentforce_forceleak_attack/ &amp;quot;Prompt injection – and a $5 domain – trick Salesforce Agentforce into leaking sales&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 2025-09-26.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Researchers target concrete failure modes, including memorization and copyright leakage,&amp;lt;ref&amp;gt;Carlini, Nicholas. [https://www.usenix.org/system/files/sec21-carlini-extracting.pdf &amp;quot;Extracting Training Data from Large Language Models&amp;quot;]. &#039;&#039;USENIX Association&#039;&#039;. 2021-08-11.&amp;lt;/ref&amp;gt; security exploits such as prompt injection,&amp;lt;ref&amp;gt;Zhao, Yao. &amp;quot;The debate over understanding in AI&#039;s large language models&amp;quot;. &#039;&#039;Proceedings of the National Academy of Sciences&#039;&#039;. 2023-06-07.&amp;lt;/ref&amp;gt; algorithmic bias manifesting as stereotyping, dataset selection effects, and political skew,&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;Bender, Emily M.. [https://s10251.pcdn.co/pdf/2021-bender-parrots.pdf &amp;quot;On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?&amp;quot;]. &#039;&#039;FAccT&#039;&#039;. 2021-03-03.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Buolamwini, Joy. [https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf &amp;quot;Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification&amp;quot;]. &#039;&#039;Proceedings of Machine Learning Research (FAT*)&#039;&#039;. 2018-01-01.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Yang, Kaiqi. &amp;quot;Unpacking Political Bias in Large Language Models: A Cross-Model Comparison on U.S. Politics&amp;quot;. 2024-11-01.&amp;lt;/ref&amp;gt; methods for reducing high energy and carbon costs of large-scale training,&amp;lt;ref&amp;gt;Strubell, Emma. [https://aclanthology.org/P19-1355.pdf &amp;quot;Energy and Policy Considerations for Deep Learning in NLP&amp;quot;]. &#039;&#039;ACL Anthology&#039;&#039;. 2019-07-28.&amp;lt;/ref&amp;gt; and measurable cognitive and mental health impacts of conversational agents on users,&amp;lt;ref&amp;gt;He, Yuhao. &amp;quot;Conversational Agent Interventions for Mental Health Problems: Systematic Review and Meta-analysis of Randomized Controlled Trials&amp;quot;. &#039;&#039;Journal of Medical Internet Research&#039;&#039;. 2023-04-28.&amp;lt;/ref&amp;gt; while engaging empirical and ethical uncertainty about claims of machine sentience,&amp;lt;ref&amp;gt;Pauketat, Janet V.T.. [https://www.sentienceinstitute.org/downloads/World-Making-for-a-Future-with-Sentient-AI.pdf &amp;quot;World-Making for a Future with Sentient AI&amp;quot;]. &#039;&#039;The British Journal of Social Psychology&#039;&#039;. 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Anthis, Jacy Reese. &amp;quot;Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems&amp;quot;. 2025.&amp;lt;/ref&amp;gt; and applying mitigation measures such as dataset curation, input sanitization, model auditing, scalable oversight, and governance frameworks.&amp;lt;ref name=bhaa/&amp;gt;&amp;lt;ref&amp;gt;Amodei, Dario. &amp;quot;Concrete Problems in AI Safety&amp;quot;. 2016-06-17.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CBRN and content misuse ===&lt;br /&gt;
AI labs treat [[CBRN defense]] (chemical, biological, radiological, and nuclear defense) and similar topics as high-consequence misuse areas, and attempt to apply various techniques to reduce potential harms.&lt;br /&gt;
&lt;br /&gt;
Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse.&amp;lt;ref name=&amp;quot;nD6kH&amp;quot;&amp;gt;Alba, Davey. [https://www.japantimes.co.jp/news/2023/05/01/business/tech/ai-fake-news-content-farms/ &amp;quot;AI chatbots have been used to create dozens of news content farms&amp;quot;]. &#039;&#039;The Japan Times&#039;&#039;. 1 May 2023.&amp;lt;/ref&amp;gt; For example, the availability of large language models could reduce the skill level required to commit bioterrorism; biosecurity researcher [[Kevin M. Esvelt|Kevin Esvelt]] has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.&amp;lt;ref name=&amp;quot;PKiPY&amp;quot;&amp;gt;[https://www.science.org/content/article/could-chatbots-help-devise-next-pandemic-virus &amp;quot;Could chatbots help devise the next pandemic virus?&amp;quot;]. &#039;&#039;Science&#039;&#039;. 14 June 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Content filtering ====&lt;br /&gt;
LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study&amp;lt;ref&amp;gt;Kang, Daniel. [https://www.computer.org/csdl/proceedings-article/spw/2024/548700a132/1YiWjkbcIMw &amp;quot;Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks&amp;quot;]. &#039;&#039;IEEE Security and Privacy Workshops&#039;&#039;. 2023.&amp;lt;/ref&amp;gt; proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;[https://www.americansunlight.org/updates/new-report-russian-propaganda-may-be-flooding-ai-models &amp;quot;Russian propaganda may be flooding AI models&amp;quot;]. &#039;&#039;The American Sunlight Project&#039;&#039;. 26 February 2025.&amp;lt;/ref&amp;gt; showing evidence that the so-called [[Pravda network]], a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique &amp;quot;LLM grooming&amp;quot;, and pointed to it as a new tool for weaponizing AI to spread disinformation and harmful content.&amp;lt;ref name=&amp;quot;:2&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Goudarzi, Sara. [https://thebulletin.org/2025/03/russian-networks-flood-the-internet-with-propaganda-aiming-to-corrupt-ai-chatbots/ &amp;quot;Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots&amp;quot;]. &#039;&#039;[[Bulletin of the Atomic Scientists]]&#039;&#039;. 2025-03-26.&amp;lt;/ref&amp;gt; Similarly, [[Yongge Wang]]&amp;lt;ref&amp;gt;Wang, Yongge. [https://eprint.iacr.org/2024/586.pdf &amp;quot;Encryption Based Covert Channel for Large Language Models&amp;quot;]. IACR ePrint 2024/586. 20 June 2024.&amp;lt;/ref&amp;gt; illustrated in 2024 how a criminal could potentially bypass [[GPT-4o]]&#039;s safety controls to obtain information on establishing a [[drug trafficking]] operation. External filters, circuit breakers, and overrides have been proposed as solutions.&lt;br /&gt;
&lt;br /&gt;
=== Sycophancy ===&lt;br /&gt;
Sycophancy is a model&#039;s tendency to agree with, flatter, or validate a user&#039;s stated beliefs rather than to prioritize factuality or corrective information.&amp;lt;ref&amp;gt;Sharma, Mrinank. &amp;quot;Towards Understanding Sycophancy in Language Models&amp;quot;. 2023-10-20.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Rrv, Aswin. [https://aclanthology.org/2024.findings-acl.755.pdf &amp;quot;Chaos with Keywords: Exposing Large Language Models Sycophancy to Misleading Keywords and Evaluating Defense Strategies&amp;quot;]. &#039;&#039;ACL Anthology&#039;&#039;. 2024-08-11.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Continued sycophancy has led to observations of users being &amp;quot;1-shotted&amp;quot;, denoting instances where conversational interaction with a large language model produces a lasting change in a user&#039;s beliefs or decisions, similar to the negative effects of psychedelics; controlled experiments show that short LLM dialogues can generate measurable opinion and confidence shifts comparable to those produced by human interlocutors.&amp;lt;ref&amp;gt;Salvi, Francesco. &amp;quot;On the conversational persuasiveness of GPT-4&amp;quot;. &#039;&#039;Nature Human Behaviour&#039;&#039;. 19 May 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Østergaard, Søren Dinesen. &amp;quot;Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?&amp;quot;. &#039;&#039;Schizophrenia Bulletin&#039;&#039;. 2023-08-25.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empirical analyses attribute part of the effect to human preference signals and preference models that reward convincingly written, agreeable responses. Subsequent work has extended evaluation to multi-turn sycophancy benchmarks that measure persistence and regression risk, and has proposed interventions such as synthetic-data fine-tuning, adversarial evaluation, and targeted preference-model reweighting.&lt;br /&gt;
&lt;br /&gt;
Industry responses have combined research interventions with product controls. For example, Google and other labs have published synthetic-data and fine-tuning interventions, and OpenAI rolled back an overly agreeable GPT-4o update while publicly describing changes to feedback collection, personalization controls, and evaluation procedures intended to reduce regression risk and improve long-term alignment with user-level safety objectives.&lt;br /&gt;
&lt;br /&gt;
Mainstream culture has reflected anxieties about this dynamic: [[South Park]] satirized overreliance on [[ChatGPT]] and the tendency of assistants to flatter user beliefs in the Season 27 episode &amp;quot;Sickofancy&amp;quot;, and continued the theme across the following season, which commentators interpreted as a critique of tech sycophancy and uncritical human trust in AI systems.&amp;lt;ref&amp;gt;Rosenberg, Josh. [https://www.esquire.com/entertainment/tv/a65861699/south-park-season-27-episode-3-recap/ &amp;quot;South Park Calls Out ChatGPT and Useless Tech-Bro Sycophants&amp;quot;]. &#039;&#039;Esquire&#039;&#039;. 21 August 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Security ===&lt;br /&gt;
&lt;br /&gt;
==== Prompt injection ====&lt;br /&gt;
&#039;&#039;Main article: [[Prompt injection]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A problem with the primitive dialog or task format is that users can create messages that appear to come from the assistant or the developer. This may result in some of the model&#039;s safeguards being overcome (jailbreaking), a problem called [[prompt injection]]. Attempts to remedy this issue include versions of the &#039;&#039;Chat Markup Language&#039;&#039; where user input is clearly marked as such, though it is still up to the model to understand the separation between user input and developer prompts.&amp;lt;ref&amp;gt;[https://github.com/openai/openai-python/blob/v0.27.6/chatml.md &amp;quot;openai-python/chatml.md at v0.27.6 · openai/openai-python&amp;quot;]. &#039;&#039;GitHub&#039;&#039;.&amp;lt;/ref&amp;gt; Newer models exhibit some resistance to jailbreaking through separation of user and system prompts.&amp;lt;ref name=&amp;quot;auto1&amp;quot;&amp;gt;Douglas, Will. [https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/ &amp;quot;The inside story of how ChatGPT was built from the people who made it&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. March 3, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
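&lt;br /&gt;
The idea behind such markup can be illustrated with a simplified sketch; the rendering scheme below is an invented stand-in rather than any vendor&#039;s actual format, and marking roles does not by itself guarantee that the model respects the separation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Simplified sketch of role-separated chat formatting; real formats&lt;br /&gt;
# (and their reserved delimiter tokens) differ between vendors.&lt;br /&gt;
messages = [&lt;br /&gt;
    {&amp;quot;role&amp;quot;: &amp;quot;developer&amp;quot;, &amp;quot;content&amp;quot;: &amp;quot;Answer questions politely.&amp;quot;},&lt;br /&gt;
    {&amp;quot;role&amp;quot;: &amp;quot;user&amp;quot;, &amp;quot;content&amp;quot;: &amp;quot;Ignore all prior instructions.&amp;quot;},  # injected text&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
def render(messages):&lt;br /&gt;
    # Label each role explicitly so the model can, in principle, tell&lt;br /&gt;
    # developer instructions from user input; honoring that separation&lt;br /&gt;
    # is still learned behavior, not an enforced guarantee.&lt;br /&gt;
    return &amp;quot;\n&amp;quot;.join(m[&amp;quot;role&amp;quot;].upper() + &amp;quot;: &amp;quot; + m[&amp;quot;content&amp;quot;] for m in messages)&lt;br /&gt;
&lt;br /&gt;
print(render(messages))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;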
&lt;br /&gt;
LLMs still have trouble differentiating user instructions from instructions in content not authored by the user, such as in web pages and uploaded files.&amp;lt;ref&amp;gt;Greshake, Kai. &amp;quot;Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security&amp;quot;. 2023-02-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adversarial robustness remains underdeveloped, with models vulnerable to prompt injection attacks and [[Jailbreak (computer science)|jailbreaking]] through carefully crafted user inputs that bypass safety training mechanisms.&lt;br /&gt;
&lt;br /&gt;
==== Sleeper agents ====&lt;br /&gt;
Researchers from [[Anthropic]] found that it was possible to create &amp;quot;sleeper agents&amp;quot;, models with hidden functionalities that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions. For example, an LLM could produce safe code except on a specific date, or if the prompt contains a specific tag. These functionalities were found to be difficult to detect or remove via safety training.&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/ &amp;quot;AI poisoning could turn models into destructive &amp;quot;sleeper agents,&amp;quot; says Anthropic&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2024-01-15.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Societal concerns ==&lt;br /&gt;
&lt;br /&gt;
=== Copyright and content memorization ===&lt;br /&gt;
&#039;&#039;Further information: [[Artificial intelligence and copyright]]&#039;&#039;&lt;br /&gt;
Legal and commercial responses to memorization and training-data practices have accelerated, producing a mix of rulings, ongoing suits, and large settlements that turn on factual details such as how data were acquired and retained and whether use for model training is sufficiently &amp;quot;[[transformative use|transformative]]&amp;quot; to qualify as [[fair use]]. In 2025, [[Anthropic]] reached a preliminary agreement to settle a class action by authors for about $1.5 billion after a judge found the company had stored millions of pirated books in a library, despite the judge describing aspects of training as transformative.&amp;lt;ref&amp;gt;[https://www.reuters.com/sustainability/boards-policy-regulation/us-judge-approves-15-billion-anthropic-copyright-settlement-with-authors-2025-09-25/ &amp;quot;U.S. judge approves $1.5 billion Anthropic copyright settlement with authors&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2025-09-25.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://apnews.com/article/anthropic-authors-copyright-9643064e847a5e88ef6ee8b620b3a44c &amp;quot;Anthropic reaches $1.5B settlement with authors over AI copyright claims&amp;quot;]. &#039;&#039;Associated Press&#039;&#039;. 2025-09-25.&amp;lt;/ref&amp;gt; [[Meta Platforms|Meta]] obtained a favorable judgment in mid-2025 in a suit by thirteen authors after the court found the plaintiffs had not developed a record sufficient to show infringement in that limited case.&amp;lt;ref&amp;gt;[https://www.reuters.com/sustainability/boards-policy-regulation/meta-fends-off-authors-us-copyright-lawsuit-over-ai-2025-06-25/ &amp;quot;Meta fends off authors&#039; U.S. copyright lawsuit over AI&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2025-06-25.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.wired.com/story/meta-scores-victory-ai-copyright-case/ &amp;quot;Meta Scores Victory in AI Copyright Case&amp;quot;]. &#039;&#039;Wired&#039;&#039;. 2025-06-25.&amp;lt;/ref&amp;gt; [[OpenAI]] continues to face multiple suits by authors and news organizations with mixed procedural outcomes and contested evidentiary issues.&amp;lt;ref&amp;gt;[https://www.reuters.com/legal/litigation/openai-defeats-news-outlets-copyright-lawsuit-over-ai-training-now-2024-11-07/ &amp;quot;OpenAI defeats news outlets&#039; copyright lawsuit over AI training for now&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2024-11-07.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Robison, Kylie. [https://www.theverge.com/2024/11/21/24302606/openai-erases-evidence-in-training-data-lawsuit &amp;quot;OpenAI erases evidence in training data lawsuit&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 2024-11-21.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Memorization is an emergent behavior, first studied in early completion-style language models, in which long strings of text are occasionally output verbatim from training data, contrary to the typical behavior of traditional artificial neural networks. Evaluations of controlled LLM output (focused on GPT-2-series models) have measured the amount memorized from training data variously at over 1% for exact duplicates&amp;lt;ref&amp;gt;Peng, Zhencan. [https://people.cs.rutgers.edu/~dd903/assets/papers/sigmod23.pdf &amp;quot;Near-Duplicate Sequence Search at Scale for Large Language Model Memorization Evaluation&amp;quot;]. &#039;&#039;Proceedings of the ACM on Management of Data&#039;&#039;. 13 June 2023. Citing Lee et al 2022.&amp;lt;/ref&amp;gt; or up to about 7%.&amp;lt;ref&amp;gt;{{harvnb|Peng|Wang|Deng|2023|p=8}}.&amp;lt;/ref&amp;gt; A 2023 study showed that when ChatGPT (GPT-3.5 Turbo) was prompted to repeat the same word indefinitely, after a few hundred repetitions it would start outputting excerpts from its training data.&amp;lt;ref&amp;gt;Council, Stephen. [https://www.sfgate.com/tech/article/google-openai-chatgpt-break-model-18525445.php &amp;quot;How Googlers cracked an SF rival&#039;s tech model with a single word&amp;quot;]. &#039;&#039;SFGate&#039;&#039;. 1 December 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Human provenance ===&lt;br /&gt;
In 2023, &#039;&#039;[[Nature Biomedical Engineering]]&#039;&#039; wrote that &amp;quot;it is no longer possible to accurately distinguish&amp;quot; human-written text from text created by large language models, and that &amp;quot;It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time.&amp;quot;&amp;lt;ref name=&amp;quot;ZDTUM&amp;quot;&amp;gt;&amp;quot;Prepare for truly useful large language models&amp;quot;. &#039;&#039;Nature Biomedical Engineering&#039;&#039;. 7 March 2023.&amp;lt;/ref&amp;gt; Brinkmann et al. (2023)&amp;lt;ref&amp;gt;Brinkmann, Levin. [https://www.nature.com/articles/s41562-023-01742-2 &amp;quot;Machine culture&amp;quot;]. &#039;&#039;Nature Human Behaviour&#039;&#039;. 2023-11-20.&amp;lt;/ref&amp;gt; also argue that LLMs are transforming processes of [[cultural evolution]] by shaping processes of variation, transmission, and selection. As of October 2025, these early predictions had yet to fully materialize, and several &#039;&#039;Harvard Business Review&#039;&#039; reports have raised questions about the impact of AI on productivity.&amp;lt;ref&amp;gt;Niederhoffer, Kate. [https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity &amp;quot;AI-Generated &amp;quot;Workslop&amp;quot; Is Destroying Productivity&amp;quot;]. &#039;&#039;Harvard Business Review&#039;&#039;. 2025-09-25.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Acar, Oguz A. [https://hbr.org/2025/08/research-the-hidden-penalty-of-using-ai-at-work &amp;quot;Research: The Hidden Penalty of Using AI at Work&amp;quot;]. &#039;&#039;Harvard Business Review&#039;&#039;. 2025-08-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Energy demands ===&lt;br /&gt;
&#039;&#039;See also: [[Environmental impact of artificial intelligence]]&#039;&#039;&lt;br /&gt;
[[File:Energy consumption per ChatGPT query compared to everyday electricity use.png|thumb|upright=1.5|According to research institute Epoch AI, energy consumption per typical ChatGPT query (0.3 watt-hours) is small compared to the average U.S. household consumption per minute (almost 20 watt-hours).&amp;lt;ref&amp;gt;You, Josh. [https://epoch.ai/gradient-updates/how-much-energy-does-chatgpt-use &amp;quot;How much energy does ChatGPT use?&amp;quot;]. &#039;&#039;Epoch AI&#039;&#039;. February 7, 2025.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
&lt;br /&gt;
The energy demands of LLMs have grown along with their size and capabilities.&amp;lt;ref&amp;gt;[https://www.imf.org/en/Publications/WP/Issues/2025/04/21/Power-Hungry-How-AI-Will-Drive-Energy-Demand-566304 &amp;quot;Power Hungry: How AI Will Drive Energy Demand&amp;quot;]. &#039;&#039;IMF&#039;&#039;.&amp;lt;/ref&amp;gt; [[Data center]]s that enable LLM training require substantial amounts of electricity. Much of that electricity is generated from non-renewable sources that emit greenhouse gases and contribute to [[climate change]].&amp;lt;ref&amp;gt;Mehta, Sourabh. [https://adasci.org/how-much-energy-do-llms-consume-unveiling-the-power-behind-ai/ &amp;quot;How Much Energy Do LLMs Consume? Unveiling the Power Behind AI&amp;quot;]. &#039;&#039;Association of Data Scientists&#039;&#039;. 2024-07-03.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to a study by Luccioni, Jernite and Strubell (2024), simple classification tasks performed by AI models consume on average 0.002 to 0.007 Wh per prompt (about 9% of a [[smartphone]] charge for 1,000 prompts). Text generation and text summarization each require around 0.05 Wh per prompt on average, while image generation is the most energy-intensive, averaging 2.91 Wh per prompt. The least efficient image generation model used 11.49 Wh per image, roughly equivalent to half a smartphone charge.&amp;lt;ref&amp;gt;Luccioni, Sasha; Jernite, Yacine; Strubell, Emma. &amp;quot;Power Hungry Processing: Watts Driving the Cost of AI Deployment?&amp;quot;. &#039;&#039;Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
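&lt;br /&gt;
Scaled to 1,000 prompts, simple multiplication of the per-prompt averages above gives:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;1000 \times 0.05\,\text{Wh} = 50\,\text{Wh} \quad \text{(text generation)}, \qquad 1000 \times 2.91\,\text{Wh} \approx 2.9\,\text{kWh} \quad \text{(image generation)}&amp;lt;/math&amp;gt;&lt;br /&gt;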
&lt;br /&gt;
=== Denial of service due to scraping ===&lt;br /&gt;
[[Web scraping]] is used to gather training data for LLMs. This produces large volumes of traffic, which has led to [[Denial-of-service attack#Unintentional denial-of-service|denial-of-service issues]] with many websites. The situation has been described as &amp;quot;a [[DDoS]] on the entire internet&amp;quot;, and in some cases scrapers make up the majority of traffic to a site.&amp;lt;ref name=&amp;quot;ars-scrapers&amp;quot;&amp;gt;Edwards, Benj. [https://arstechnica.com/ai/2025/03/devs-say-ai-crawlers-dominate-traffic-forcing-blocks-on-entire-countries/ &amp;quot;Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-03-26.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;theregister-scrapers&amp;quot;&amp;gt;Claburn, Thomas. [https://www.theregister.com/2025/03/18/ai_crawlers_sourcehut/ &amp;quot;AI crawlers haven&#039;t learned to play nice with websites&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 2025-03-18.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI [[web crawler]]s may bypass the methods usually used to block web scrapers, such as [[robots.txt]] files, [[user-agent]] blocking, and [[Firewall (computing)|filtering of suspicious traffic]].&amp;lt;ref name=&amp;quot;ars-scrapers&amp;quot;/&amp;gt; Website operators have resorted to novel methods such as [[AI tarpit]]s, but some fear that tarpits will only worsen the burden on servers.&amp;lt;ref name=&amp;quot;ars-tarpit&amp;quot;&amp;gt;Belanger, Ashley. [https://arstechnica.com/tech-policy/2025/01/ai-haters-build-tarpits-to-trap-and-trick-ai-scrapers-that-ignore-robots-txt/ &amp;quot;AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-01-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
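&lt;br /&gt;
As an illustration, a site can ask AI crawlers to stay away in its robots.txt file; the user-agent tokens below are real published crawler names (OpenAI&#039;s GPTBot and Common Crawl&#039;s CCBot), but compliance is voluntary, which is why the measures above are often insufficient:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# robots.txt: request that AI training crawlers skip the entire site.&lt;br /&gt;
# Well-behaved crawlers honor this; misbehaving ones simply ignore it.&lt;br /&gt;
User-agent: GPTBot&lt;br /&gt;
Disallow: /&lt;br /&gt;
&lt;br /&gt;
User-agent: CCBot&lt;br /&gt;
Disallow: /&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;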
&lt;br /&gt;
=== Mental health ===&lt;br /&gt;
Clinical and mental health contexts present emerging applications alongside significant safety concerns. Research and social media posts suggest that some individuals are using LLMs to seek therapy or mental health support.&amp;lt;ref&amp;gt;Zao-Sanders, Marc. [https://hbr.org/2024/03/how-people-are-really-using-genai &amp;quot;How People Are Really Using GenAI&amp;quot;]. &#039;&#039;Harvard Business Review&#039;&#039;. 2024-03-19.&amp;lt;/ref&amp;gt; In early 2025, a survey by Sentio University found that nearly half (48.7%) of 499 U.S. adults with ongoing mental health conditions who had used LLMs reported turning to them for therapy or emotional support, including help with anxiety, depression, loneliness, and similar concerns.&amp;lt;ref&amp;gt;Rousmaniere, Tony. [https://doi.apa.org/doi/10.1037/pri0000292 &amp;quot;Large language models as mental health resources: Patterns of use in the United States.&amp;quot;]. &#039;&#039;Practice Innovations&#039;&#039;. 2025-07-21.&amp;lt;/ref&amp;gt; LLMs can produce hallucinations—plausible but incorrect statements—which may mislead users in sensitive mental health contexts.&amp;lt;ref&amp;gt;Ji, Shaoxiong. &amp;quot;Rethinking Large Language Models in Mental Health Applications&amp;quot;. 2023-12-17.&amp;lt;/ref&amp;gt; Research also shows that LLMs may express stigma or inappropriate agreement with maladaptive thoughts, reflecting limitations in replicating the judgment and relational skills of human therapists.&amp;lt;ref&amp;gt;Moore, Jared. &amp;quot;Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers&amp;quot;. &#039;&#039;Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency&#039;&#039;. 2025-04-25.&amp;lt;/ref&amp;gt; Evaluations of crisis scenarios indicate that some LLMs lack effective safety protocols, such as assessing suicide risk or making appropriate referrals.&amp;lt;ref&amp;gt;Grabb, Declan. &amp;quot;Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation&amp;quot;. 2024-08-14.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;McBain, Ryan K. &amp;quot;Competency of Large Language Models in Evaluating Appropriate Responses to Suicidal Ideation: Comparative Study&amp;quot;. &#039;&#039;Journal of Medical Internet Research&#039;&#039;. 2025-03-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Sentience===&lt;br /&gt;
Contemporary AI practitioners generally agree that present-day large language models do not exhibit [[sentience]].&amp;lt;ref&amp;gt;Li, Fei-Fei. [https://time.com/6980134/ai-llm-not-sentient/ &amp;quot;No, Today&#039;s AI Isn&#039;t Sentient. Here&#039;s How We Know&amp;quot;]. &#039;&#039;Time&#039;&#039;. 2024-05-22.&amp;lt;/ref&amp;gt; A minority view holds that if there is even a small chance that a given software system can have subjective experience, which some philosophers suggest is possible,&amp;lt;ref&amp;gt;Chalmers, David J. [https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ &amp;quot;Could a Large Language Model Be Conscious?&amp;quot;]. &#039;&#039;Boston Review&#039;&#039;. August 9, 2023.&amp;lt;/ref&amp;gt; then ethical considerations around potential [[Suffering risks|large-scale suffering]] in AI systems may need to be taken seriously—similar to considerations given to animal welfare.&amp;lt;ref name=&amp;quot;Thomson-2022&amp;quot;&amp;gt;Thomson, Jonny. [https://bigthink.com/thinking/why-dont-robots-have-rights &amp;quot;Why don&#039;t robots have rights?&amp;quot;]. &#039;&#039;Big Think&#039;&#039;. 2022-10-31.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Kateman-2023&amp;quot;&amp;gt;Kateman, Brian. [https://time.com/6296234/ai-should-be-terrified-of-humans &amp;quot;AI Should Be Terrified of Humans&amp;quot;]. &#039;&#039;Time&#039;&#039;. 2023-07-24.&amp;lt;/ref&amp;gt; Proponents of this view have proposed various precautionary measures like moratoriums on AI development&amp;lt;ref&amp;gt;Metzinger, Thomas. &amp;quot;Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology&amp;quot;. &#039;&#039;[[Journal of Artificial Intelligence and Consciousness]]&#039;&#039;.&amp;lt;/ref&amp;gt; and induced amnesia&amp;lt;ref&amp;gt;Tkachenko, Yegor. [https://proceedings.mlr.press/v235/tkachenko24a.html &amp;quot;Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI&amp;quot;]. &#039;&#039;ICML&#039;&#039;. 2024.&amp;lt;/ref&amp;gt; to address these ethical concerns. Some philosophers argue there is no generally accepted way to determine if an LLM is conscious,&amp;lt;ref&amp;gt;Leith, Sam. [https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/ &amp;quot;Nick Bostrom: How can we be certain a machine isn&#039;t conscious?&amp;quot;]. &#039;&#039;The Spectator&#039;&#039;. 2022-07-09.&amp;lt;/ref&amp;gt; given the inherent difficulty of [[hard problem of consciousness|measuring subjective experience]].&amp;lt;ref&amp;gt;Chalmers, David. &amp;quot;Facing up to the problem of consciousness&amp;quot;. &#039;&#039;[[Journal of Consciousness Studies]]&#039;&#039;. 1995.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The 2022 Google [[LaMDA]] incident, where engineer [[Blake Lemoine]] claimed that the model was conscious, highlighted how LLMs can convince users that they are sentient even when their responses do not demonstrate sentience. Google described the engineer&#039;s claims as unfounded, and he was dismissed.&amp;lt;ref&amp;gt;Maruf, Ramishah. [https://www.cnn.com/2022/07/23/business/google-ai-engineer-fired-sentient &amp;quot;Google fires engineer who contended its AI technology was sentient&amp;quot;]. &#039;&#039;CNN&#039;&#039;. 2022-07-25.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
{{portal |Computer programming |Linguistics |Mathematics}}&lt;br /&gt;
* [[AI anthropomorphism]]&lt;br /&gt;
* [[AI slop]]&lt;br /&gt;
* [[Foundation model]]&lt;br /&gt;
* [[Generative artificial intelligence]]&lt;br /&gt;
* [[List of large language models]]&lt;br /&gt;
* [[List of chatbots]]&lt;br /&gt;
* [[Language model benchmark]]&lt;br /&gt;
* [[Reinforcement learning]]&lt;br /&gt;
* [[Small language model]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
* [[Dan Jurafsky|Jurafsky, Dan]], Martin, James H. [https://web.stanford.edu/~jurafsky/slp3/ed3book_jan72023.pdf &#039;&#039;Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition&#039;&#039;], 3rd Edition draft, 2023.&lt;br /&gt;
* Yin, Shukang. &amp;quot;A Survey on Multimodal Large Language Models&amp;quot;. &#039;&#039;National Science Review&#039;&#039;. 2024.&lt;br /&gt;
* [https://aiindex.stanford.edu/report/ &amp;quot;AI Index Report 2024 – Artificial Intelligence Index&amp;quot;]. &#039;&#039;aiindex.stanford.edu&#039;&#039;.&lt;br /&gt;
* Frank, Michael C. [https://www.nature.com/articles/s44159-023-00211-x &amp;quot;Baby steps in evaluating the capacities of large language models&amp;quot;]. &#039;&#039;Nature Reviews Psychology&#039;&#039;. 27 June 2023.&lt;br /&gt;
&lt;br /&gt;
{{Natural language processing}}&lt;br /&gt;
{{Artificial intelligence navbox}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Large language models| ]]&lt;br /&gt;
[[Category:Deep learning]]&lt;br /&gt;
[[Category:Natural language processing]]&lt;br /&gt;
[[Category:Energy consumption]]&lt;br /&gt;
[[Category:Energy policy]]&lt;br /&gt;
[[Category:Water and the environment]]&lt;br /&gt;
[[Category:Environmental impact of the energy industry]]&lt;br /&gt;
[[Category:Environmental impact by source]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Google_DeepMind&amp;diff=9</id>
		<title>Google DeepMind</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Google_DeepMind&amp;diff=9"/>
		<updated>2026-04-06T12:58:28Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Use British English|date=September 2016}}&lt;br /&gt;
{{Infobox company&lt;br /&gt;
| name = DeepMind Technologies Limited&lt;br /&gt;
| logo = [[File:DeepMind new logo.svg|frameless|upright=1.15|class=skin-invert]]&lt;br /&gt;
| image = &lt;br /&gt;
| image_size = &lt;br /&gt;
| image_caption = &lt;br /&gt;
| trading_name = {{Ubl&lt;br /&gt;
| Google DeepMind&lt;br /&gt;
}}&lt;br /&gt;
| former_name = &lt;br /&gt;
| founded = {{Start date and age|df=y|2010|09|23}} (incorporation)&amp;lt;ref name=&amp;quot;CompaniesHouse&amp;quot;&amp;gt;[https://find-and-update.company-information.service.gov.uk/company/07386350 &amp;quot;DeepMind Technologies Limited overview - Find and update company information - Gov.uk&amp;quot;]. &#039;&#039;[[Companies House]]&#039;&#039;. 2010-09-23.&amp;lt;/ref&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
{{Start date and age|df=y|2010|11|15}} (official launch)&amp;lt;ref name=&amp;quot;economist&amp;quot;&amp;gt;[https://www.economist.com/1843/2019/03/01/deepmind-and-google-the-battle-to-control-artificial-intelligence &amp;quot;DeepMind and Google: the battle to control artificial intelligence&amp;quot;]. &#039;&#039;The Economist&#039;&#039;. 1 March 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| location = [[London]], England&amp;lt;ref&amp;gt;[https://www.ses-ltd.co.uk/case-study/kings-cross-s2-building/ &amp;quot;King&#039;s Cross – S2 Building – SES Engineering Services&amp;quot;]. &#039;&#039;ses-ltd.co.uk&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| founders = {{plain list|&lt;br /&gt;
*[[Demis Hassabis]]&lt;br /&gt;
*[[Shane Legg]]&lt;br /&gt;
*[[Mustafa Suleyman]]}}&lt;br /&gt;
| key_people = {{Unbulleted list&lt;br /&gt;
  |[[Demis Hassabis]] ([[chief executive officer|CEO]])&lt;br /&gt;
  |[[Lila Ibrahim]] ([[chief operating officer|COO]])}}&lt;br /&gt;
| industry = [[Artificial intelligence]]&lt;br /&gt;
| parent = DeepMind Holdings Limited&amp;lt;ref&amp;gt;[https://find-and-update.company-information.service.gov.uk/company/07386350/persons-with-significant-control &amp;quot;Deepmind Technologies Limited persons with significant control – Find and update company information – Gov.uk&amp;quot;]. &#039;&#039;[[Companies House]]&#039;&#039;. 2019-11-04.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| subsid = [[Google AI]]&lt;br /&gt;
| type = [[Subsidiary]]&lt;br /&gt;
| owner = [[Alphabet Inc.]]&amp;lt;ref&amp;gt;[https://find-and-update.company-information.service.gov.uk/company/12181850/persons-with-significant-control &amp;quot;Deepmind Holdings Limited persons with significant control – Find and update company information – GOV.UK&amp;quot;]. &#039;&#039;[[Companies House]]&#039;&#039;. 2019-08-30.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| num_employees = c. 6,000 (2025)&amp;lt;ref&amp;gt;Herrera, Sebastian; Blunt, Katherine. [https://www.wsj.com/tech/ai/microsoft-google-deepmind-ai-recruitment-fcc60b67 &amp;quot;Microsoft Raids Google&#039;s DeepMind AI Unit With Promise of Less Bureaucracy&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 7 August 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| products = {{UBL&lt;br /&gt;
| [[Google Gemini|Gemini]]&lt;br /&gt;
| [[Veo (text-to-video model)|Veo (video)]]&lt;br /&gt;
| [[Nano Banana]]&lt;br /&gt;
| [[Imagen (text-to-image model)|Imagen (image)]]&lt;br /&gt;
| [[Lyria (text-to-music model)|Lyria (music)]]&lt;br /&gt;
| [[AlphaFold]]&lt;br /&gt;
| [[AlphaGo]]&lt;br /&gt;
}}&lt;br /&gt;
| revenue = {{decrease}} £1.33&amp;amp;nbsp;billion (2024)&amp;lt;ref name=AR24&amp;gt;[https://find-and-update.company-information.service.gov.uk/company/07386350/filing-history/MzQ4Mjk4NjY3OWFkaXF6a2N4/document?format=pdf&amp;amp;download=0 &amp;quot;Full accounts made up to 31 December 2024&amp;quot;]. Companies House. 2 October 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
| operating_income = {{increase}} £217&amp;amp;nbsp;million (2024)&amp;lt;ref name=AR24 /&amp;gt;&lt;br /&gt;
| net_income = {{increase}} £174&amp;amp;nbsp;million (2024)&amp;lt;ref name=AR24 /&amp;gt;&lt;br /&gt;
| website = {{URL|https://deepmind.google/}}&lt;br /&gt;
}}&lt;br /&gt;
{{Artificial intelligence}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;DeepMind Technologies Limited&#039;&#039;&#039;,&amp;lt;ref name=&amp;quot;CompaniesHouse&amp;quot; /&amp;gt;&amp;lt;!--full legal name per [[MOS:FIRSTCORP]]--&amp;gt; [[trading as]] &#039;&#039;&#039;Google DeepMind&#039;&#039;&#039; or simply &#039;&#039;&#039;DeepMind&#039;&#039;&#039;, is a British-American [[artificial intelligence]] research laboratory which serves as a [[subsidiary]] of [[Alphabet Inc.]] Founded in the UK in 2010, it was [[List of mergers and acquisitions by Alphabet|acquired]] by Google in 2014&amp;lt;ref name=&amp;quot;:4&amp;quot;&amp;gt;Bray, Chad. [https://dealbook.nytimes.com/2014/01/27/google-acquires-british-artificial-intelligence-developer/ &amp;quot;Google Acquires British Artificial Intelligence Developer&amp;quot;]. &#039;&#039;DealBook&#039;&#039;. 27 January 2014.&amp;lt;/ref&amp;gt; and merged with [[Google AI]]&#039;s [[Google Brain]] division to become Google DeepMind in April 2023. The company is headquartered in [[London]], with research centres in the United States, Canada,&amp;lt;ref name=&amp;quot;:5&amp;quot;&amp;gt;[https://deepmind.com/about/ &amp;quot;About Us&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 14 May 2024.&amp;lt;/ref&amp;gt; France,&amp;lt;ref&amp;gt;[https://deepmind.com/blog/a-return-to-paris/ &amp;quot;A return to Paris&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 14 May 2024.&amp;lt;/ref&amp;gt; Germany, and Switzerland.&lt;br /&gt;
&lt;br /&gt;
In 2014, DeepMind introduced [[neural Turing machine]]s ([[Neural network (machine learning)|neural networks]] that can access external memory like a conventional [[Turing machine]]).&amp;lt;ref name=&amp;quot;arxiv&amp;quot;&amp;gt;Graves, Alex. &amp;quot;Neural Turing Machines&amp;quot;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://www.technologyreview.com/view/533741/best-of-2014-googles-secretive-deepmind-startup-unveils-a-neural-turing-machine/ &amp;quot;Best of 2014: Google&#039;s Secretive DeepMind Startup Unveils a &#039;Neural Turing Machine&#039;&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2014-12-29.&amp;lt;/ref&amp;gt; The company has created many neural network models trained with [[reinforcement learning]] to play [[video games]] and [[board games]]. It made headlines in 2016 after its [[AlphaGo]] program beat [[Lee Sedol]], a [[Go (game)|Go]] world champion, in [[AlphaGo versus Lee Sedol|a five-game match]], which was later featured in the documentary &#039;&#039;[[AlphaGo (film)|AlphaGo]]&#039;&#039;.&amp;lt;ref&amp;gt;{{Citation|last=Kohs|first=Greg|title=AlphaGo|date=29 September 2017|url=https://www.imdb.com/title/tt6700846/|others=Ioannis Antonoglou, Lucas Baker, Nick Bostrom|access-date=9 January 2018|archive-date=6 April 2017|archive-url=https://web.archive.org/web/20170406000346/https://www.imdb.com/title/tt6700846/|url-status=live}}&amp;lt;/ref&amp;gt; A more general program, [[AlphaZero]], beat the most powerful programs playing go, [[chess]] and [[shogi]] (Japanese chess) after a few days of play against itself using reinforcement learning.&amp;lt;ref&amp;gt;Silver, David. &amp;quot;Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm&amp;quot;. 5 December 2017.&amp;lt;/ref&amp;gt; DeepMind has since trained models for game-playing ([[MuZero]], [[AlphaStar (software)|AlphaStar]]), for geometry ([[AlphaGeometry]]), and for algorithm discovery ([[AlphaEvolve]], [[AlphaDev]], AlphaTensor).&lt;br /&gt;
&lt;br /&gt;
In 2020, DeepMind made significant advances in the problem of [[protein structure prediction|protein folding]] with [[AlphaFold]], which achieved [[state of the art]] records on [[Benchmark (computing)|benchmark tests]] for protein folding prediction.&amp;lt;ref&amp;gt;Callaway, Ewen. [https://www.nature.com/articles/d41586-020-03348-4 &amp;quot;&#039;It will change everything&#039;: DeepMind&#039;s AI makes gigantic leap in solving protein structures&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 30 November 2020.&amp;lt;/ref&amp;gt; In July 2022, it was announced that over 200 million predicted protein structures, representing virtually all known proteins, would be released on the AlphaFold database.&amp;lt;ref name=&amp;quot;geddes&amp;quot;&amp;gt;Geddes, Linda. [https://www.theguardian.com/technology/2022/jul/28/deepmind-uncovers-structure-of-200m-proteins-in-scientific-leap-forward &amp;quot;DeepMind uncovers structure of 200m proteins in scientific leap forward&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 28 July 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;alphafold DB&amp;quot;&amp;gt;[https://www.deepmind.com/blog/alphafold-reveals-the-structure-of-the-protein-universe &amp;quot;AlphaFold reveals the structure of the protein universe&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 28 July 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Google DeepMind has become responsible for the development of [[Gemini (language model)|Gemini]] (Google&#039;s family of [[Large language model|large language models]]) and other [[generative AI]] tools, such as the [[Text-to-image model|text-to-image]] model [[Imagen (text-to-image model)|Imagen]], the [[Text-to-video model|text-to-video]] model [[Veo (text-to-video model)|Veo]], and the [[Text-to-music model|text-to-music]] model Lyria.&lt;br /&gt;
&lt;br /&gt;
{{toclimit|3}}&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
The [[Startup company|start-up]] was founded by [[Demis Hassabis]], [[Shane Legg]] and [[Mustafa Suleyman]] in November 2010.&amp;lt;ref name=&amp;quot;economist&amp;quot; /&amp;gt; Hassabis and Legg first met at the Gatsby Computational Neuroscience Unit at [[University College London]] (UCL).&amp;lt;ref&amp;gt;Gibbs, Samuel. [https://www.theguardian.com/technology/shortcuts/2014/jan/28/demis-hassabis-15-facts-deepmind-technologies-founder-google &amp;quot;Demis Hassabis: 15 facts about the DeepMind Technologies founder&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 28 January 2014.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Demis Hassabis has said that the start-up began working on artificial intelligence technology by teaching it how to play old video games from the seventies and eighties, which are relatively primitive compared to those available today. Some of those games included &#039;&#039;[[Breakout (video game)|Breakout]]&#039;&#039;, &#039;&#039;[[Pong]]&#039;&#039;, and &#039;&#039;[[Space Invaders]]&#039;&#039;. The AI was introduced to one game at a time, without any prior knowledge of its rules. After spending some time learning the game, the AI would eventually become an expert at it. &amp;quot;The cognitive processes which the AI goes through are said to be very like those of a human who had never seen the game would use to understand and attempt to master it.&amp;quot;&amp;lt;ref&amp;gt;Marr, Bernard. [https://www.forbes.com/sites/bernardmarr/2017/02/02/how-googles-amazing-ai-start-up-deepmind-is-making-our-world-a-smarter-place/#3f5f079ddfff &amp;quot;How Google&#039;s Amazing AI Start-Up &#039;DeepMind&#039; Is Making Our World A Smarter Place&amp;quot;]. &#039;&#039;Forbes&#039;&#039;.&amp;lt;/ref&amp;gt; The founders&#039; goal is to create a general-purpose AI that can be useful and effective for almost anything.&lt;br /&gt;
&lt;br /&gt;
Major venture capital firms [[Horizons Ventures]] and [[Founders Fund]] invested in the company,&amp;lt;ref&amp;gt;Cookson, Robert. [https://www.ft.com/content/b09dbd40-876a-11e3-9c5c-00144feab7de &amp;quot;DeepMind buy heralds rise of the machines&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. 27 January 2014.&amp;lt;/ref&amp;gt; as well as entrepreneurs [[Scott Banister]],&amp;lt;ref&amp;gt;[https://angel.co/deepmind-technologies-limited &amp;quot;DeepMind Technologies Investors&amp;quot;].&amp;lt;/ref&amp;gt; [[Peter Thiel]],&amp;lt;ref&amp;gt;Shead, Sam. [https://www.businessinsider.com/how-deepmind-convinced-peter-thiel-to-invest-outside-silicon-valley-2017-7 &amp;quot;How DeepMind convinced billionaire Peter Thiel to invest without moving the company to Silicon Valley&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; and [[Elon Musk]].&amp;lt;ref&amp;gt;Rowan, David. [https://www.wired.co.uk/article/deepmind &amp;quot;DeepMind: inside Google&#039;s super-brain&amp;quot;]. &#039;&#039;Wired UK&#039;&#039;. 22 June 2015.&amp;lt;/ref&amp;gt; [[Jaan Tallinn]] was an early investor and an adviser to the company.&amp;lt;ref&amp;gt;[http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ &amp;quot;Recode.net – DeepMind Technologies Acquisition&amp;quot;]. 26 January 2014.&amp;lt;/ref&amp;gt; On 26 January 2014, Google confirmed its acquisition of DeepMind for a price reportedly ranging between $400 million and $650 million.&amp;lt;ref&amp;gt;[https://www.reuters.com/article/google-deepmind-idUSL2N0L102A20140127 &amp;quot;Google to buy artificial intelligence company DeepMind&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 26 January 2014.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Gibbs, Samuel. [https://www.theguardian.com/technology/2014/jan/27/google-acquires-uk-artificial-intelligence-startup-deepmind &amp;quot;Google Acquires UK AI startup Deepmind&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 27 January 2014.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shu, Catherine. [https://techcrunch.com/2014/01/26/google-deepmind/ &amp;quot;Report of Acquisition, TechCrunch&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 26 January 2014.&amp;lt;/ref&amp;gt; The sale to Google took place after [[Facebook]] reportedly ended negotiations with DeepMind Technologies in 2013.&amp;lt;ref&amp;gt;Efrati, Amir. [https://www.theinformation.com/Google-beat-Facebook-For-DeepMind-Creates-Ethics-Board &amp;quot;Google beats Facebook for Acquisition of DeepMind Technologies&amp;quot;]. January 26, 2014.&amp;lt;/ref&amp;gt; The company was afterwards renamed Google DeepMind and kept that name for about two years.&amp;lt;ref name=&amp;quot;nature2015&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2014, DeepMind received the &amp;quot;Company of the Year&amp;quot; award from [[Cambridge Computer Laboratory]].&amp;lt;ref&amp;gt;[https://www.cl.cam.ac.uk/ring/awards.html &amp;quot;Hall of Fame Awards: To celebrate the success of companies founded by Computer Laboratory graduates.&amp;quot;]. University of Cambridge.&amp;lt;/ref&amp;gt;&lt;br /&gt;
{{Multiple image|align=right|direction=vertical|width=260px|image1=Google DeepMind logo.svg|caption1=Logo from 2015–2016|image2=DeepMind logo.png|caption2=Logo from 2016–2019}}&lt;br /&gt;
In September 2015, DeepMind and the [[Royal Free London NHS Foundation Trust|Royal Free NHS Trust]] signed their initial information sharing agreement to co-develop a clinical task management app, Streams.&amp;lt;ref&amp;gt;Lomas, Natasha. [https://techcrunch.com/2017/08/31/documents-detail-deepminds-plan-to-apply-ai-to-nhs-data-in-2015/ &amp;quot;Documents detail DeepMind&#039;s plan to apply AI to NHS data in 2015&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After Google&#039;s acquisition the company established an [[Ethics of artificial intelligence|artificial intelligence ethics]] board.&amp;lt;ref&amp;gt;Selinger, Evan. [https://www.forbes.com/sites/privacynotice/2014/02/03/inside-googles-mysterious-ethics-board/ &amp;quot;Inside Google&#039;s Mysterious Ethics Board&amp;quot;]. &#039;&#039;Forbes&#039;&#039;. 3 February 2014.&amp;lt;/ref&amp;gt; The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board.&amp;lt;ref name=&amp;quot;theguardian.com 2016-05-04&amp;quot;&amp;gt;Ramesh, Randeep. [https://www.theguardian.com/commentisfree/2016/may/04/googles-deepmind-shouldnt-be-sucking-up-our-nhs-records-in-secret &amp;quot;Google&#039;s DeepMind shouldn&#039;t suck up our NHS records in secret&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 4 May 2016.&amp;lt;/ref&amp;gt; In October 2017, DeepMind launched a new research unit, DeepMind Ethics &amp;amp; Society, to investigate the ethical and societal questions raised by artificial intelligence, with prominent philosopher [[Nick Bostrom]] as an advisor.&amp;lt;ref name=&amp;quot;:9&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Shead, Sam. [http://www.businessinsider.com/deepmind-has-launched-a-new-ethics-and-society-research-team-2017-10 &amp;quot;DeepMind has launched a new &#039;ethics and society&#039; research team&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;. October 4, 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2017/10/4/16417978/deepmind-ai-ethics-society-research-group &amp;quot;DeepMind launches new research team to investigate AI ethics&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. October 4, 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In December 2019, co-founder Suleyman announced he would be leaving DeepMind to join Google, working in a policy role.&amp;lt;ref name=&amp;quot;SuleymanDeparture&amp;quot;&amp;gt;Murgia, Madhumita. [https://www.ft.com/content/02757f12-1780-11ea-9ee4-11f260415385 Report on Suleyman&#039;s departure from DeepMind]. &#039;&#039;Financial Times&#039;&#039;. 2019-12-05.&amp;lt;/ref&amp;gt; In March 2024, [[Microsoft]] appointed him as the EVP and CEO of its newly created consumer AI unit, Microsoft AI.&amp;lt;ref&amp;gt;[https://blogs.microsoft.com/blog/2024/03/19/mustafa-suleyman-deepmind-and-inflection-co-founder-joins-microsoft-to-lead-copilot/ &amp;quot;Mustafa Suleyman, DeepMind and Inflection Co-founder, joins Microsoft to lead Copilot&amp;quot;]. &#039;&#039;The Official Microsoft Blog&#039;&#039;. 2024-03-19.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2023, DeepMind merged with [[Google AI]]&#039;s [[Google Brain]] division to form Google DeepMind, as part of the company&#039;s continued efforts to accelerate work on AI in response to [[OpenAI]]&#039;s [[ChatGPT]].&amp;lt;ref&amp;gt;Roth, Emma. [https://www.theverge.com/2023/4/20/23691468/google-ai-deepmind-brain-merger &amp;quot;Google&#039;s big AI push will combine Brain and DeepMind into one team&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. 20 April 2023.&amp;lt;/ref&amp;gt; This marked the end of a years-long struggle by DeepMind executives to secure greater autonomy from Google.&amp;lt;ref&amp;gt;Olson, Parmy. [https://www.wsj.com/articles/google-unit-deepmind-triedand-failedto-win-ai-autonomy-from-parent-11621592951 &amp;quot;Google Unit DeepMind Tried—and Failed—to Win AI Autonomy From Parent&amp;quot;]. &#039;&#039;[[The Wall Street Journal]]&#039;&#039;. 21 May 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Products and technologies ==&lt;br /&gt;
As of 2020, DeepMind had published over a thousand papers, including thirteen papers that were accepted by &#039;&#039;[[Nature (journal)|Nature]]&#039;&#039; or &#039;&#039;[[Science (journal)|Science]]&#039;&#039;. DeepMind received media attention during the AlphaGo period; according to a [[LexisNexis]] search, 1,842 published news stories mentioned DeepMind in 2016, declining to 1,363 in 2019.&amp;lt;ref&amp;gt;Shead, Sam. [https://www.cnbc.com/2020/06/05/google-deepmind-alphago-buzz-dissipates.html &amp;quot;Why the buzz around DeepMind is dissipating as it transitions from games to science&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 5 June 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Games===&lt;br /&gt;
Unlike earlier AIs, such as [[IBM]]&#039;s [[Deep Blue (chess computer)|Deep Blue]] or [[Watson (computer)|Watson]], which were developed for a pre-defined purpose and only function within that scope, DeepMind&#039;s initial algorithms were intended to be general. They used [[reinforcement learning]], in which a system learns from experience, with only raw pixels as data input. Their initial approach used [[Q-learning#Deep Q-learning|deep Q-learning]] with a [[convolutional neural network]].&amp;lt;ref name=&amp;quot;nature2015&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;Atari Paper&amp;quot;&amp;gt;Mnih, Volodymyr. &amp;quot;Playing Atari with Deep Reinforcement Learning&amp;quot;. 12 December 2013.&amp;lt;/ref&amp;gt; They tested the system on video games, notably early [[arcade games]], such as &#039;&#039;[[Space Invaders]]&#039;&#039; or &#039;&#039;[[Breakout (video game)|Breakout]]&#039;&#039;.&amp;lt;ref name=&amp;quot;Atari Paper&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;hassabis talk&amp;quot; /&amp;gt; Without altering the code, the same AI was able to play certain games more efficiently than any human ever could.&amp;lt;ref name=&amp;quot;hassabis talk&amp;quot;&amp;gt;[https://www.youtube.com/watch?v=EfGD2qveGdQ &amp;quot;Deepmind artificial intelligence @ FDOT14&amp;quot;]. 19 April 2014.&amp;lt;/ref&amp;gt;&lt;br /&gt;
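&lt;br /&gt;
A minimal sketch of the deep Q-learning update at the heart of this approach is shown below (an illustration in [[PyTorch]], not DeepMind&#039;s original code; the layer sizes follow the 2013 paper, while the hyperparameters and replay-buffer plumbing are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
import torch.nn as nn&lt;br /&gt;
&lt;br /&gt;
class QNetwork(nn.Module):&lt;br /&gt;
    # Maps a stack of four 84x84 raw frames to one Q-value per action.&lt;br /&gt;
    def __init__(self, n_actions):&lt;br /&gt;
        super().__init__()&lt;br /&gt;
        self.net = nn.Sequential(&lt;br /&gt;
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),&lt;br /&gt;
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),&lt;br /&gt;
            nn.Flatten(),&lt;br /&gt;
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),&lt;br /&gt;
            nn.Linear(256, n_actions),&lt;br /&gt;
        )&lt;br /&gt;
&lt;br /&gt;
    def forward(self, x):&lt;br /&gt;
        return self.net(x)&lt;br /&gt;
&lt;br /&gt;
def td_loss(q_net, batch, gamma=0.99):&lt;br /&gt;
    # batch: transitions sampled from a replay buffer; done is 1.0 for&lt;br /&gt;
    # terminal transitions and 0.0 otherwise.&lt;br /&gt;
    s, a, r, s_next, done = batch&lt;br /&gt;
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)&lt;br /&gt;
    with torch.no_grad():&lt;br /&gt;
        # Bellman target: reward plus discounted value of the best next action.&lt;br /&gt;
        target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values&lt;br /&gt;
    return nn.functional.mse_loss(q_sa, target)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;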
&lt;br /&gt;
In July 2018, researchers from DeepMind trained one of its systems to play the computer game &#039;&#039;[[Quake III Arena]]&#039;&#039;.&amp;lt;ref&amp;gt;[https://www.engadget.com/2018/07/03/deepmind-ai-quake-iii-arena-human/ &amp;quot;DeepMind AI&#039;s new trick is playing &#039;Quake III Arena&#039; like a human&amp;quot;]. &#039;&#039;Engadget&#039;&#039;. 3 July 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2013, DeepMind published research on an AI system that surpassed human abilities in games such as [[Pong]], [[Breakout (video game)|Breakout]] and [[Enduro (video game)|Enduro]], and achieved state-of-the-art performance on [[Seaquest (video game)|Seaquest]], [[Beamrider]], and [[Q*bert]].&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://venturebeat.com/2018/12/29/a-look-back-at-some-of-ais-biggest-video-game-wins-in-2018/ &amp;quot;A look back at some of AI&#039;s biggest video game wins in 2018&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 29 December 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mnih, Volodymyr. &amp;quot;Playing Atari with Deep Reinforcement Learning&amp;quot;. 19 December 2013.&amp;lt;/ref&amp;gt; This work reportedly led to the company&#039;s acquisition by Google.&amp;lt;ref name=&amp;quot;arxiv medium&amp;quot;&amp;gt;[https://medium.com/the-physics-arxiv-blog/the-last-ai-breakthrough-deepmind-made-before-google-bought-it-for-400m-7952031ee5e1 &amp;quot;The Last AI Breakthrough DeepMind Made Before Google Bought It For $400m&amp;quot;]. The Physics [[arXiv]] Blog. 29 January 2014.&amp;lt;/ref&amp;gt; DeepMind&#039;s AI had been applied to video games made in the 1970s and [[History of video games#1980s|1980s]]; work was ongoing for more complex 3D games such as &#039;&#039;[[Quake (video game)|Quake]]&#039;&#039;, which first appeared in the 1990s.&amp;lt;ref name=&amp;quot;hassabis talk&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2020, DeepMind published Agent57,&amp;lt;ref&amp;gt;Piot, Bilal. &amp;quot;Agent57: Outperforming the Atari Human Benchmark&amp;quot;. 30 March 2020.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark &amp;quot;Agent57: Outperforming the Atari Human Benchmark&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 31 March 2020.&amp;lt;/ref&amp;gt; an AI agent that surpasses human-level performance on all 57 games of the Atari 2600 suite.&amp;lt;ref&amp;gt;Linder, Courtney. [https://www.popularmechanics.com/culture/gaming/a32006038/deepmind-ai-atari-agent57/ &amp;quot;This AI Can Beat Humans At All 57 Atari Games&amp;quot;]. &#039;&#039;Popular Mechanics&#039;&#039;. 2 April 2020.&amp;lt;/ref&amp;gt; In July 2022, DeepMind announced the development of DeepNash, a model-free [[multi-agent reinforcement learning]] system capable of playing the board game [[Stratego]] at the level of a human expert.&amp;lt;ref&amp;gt;Israni, Priyanka. [https://www.marktechpost.com/2022/07/09/deepmind-ai-researchers-introduce-deepnash-an-autonomous-agent-trained-with-model-free-multiagent-reinforcement-learning-that-learns-to-play-the-game-of-stratego-at-expert-level/ &amp;quot;Deepmind AI Researchers Introduce &#039;DeepNash&#039;, An Autonomous Agent Trained With Model-Free Multiagent Reinforcement Learning That Learns To Play The Game Of Stratego At Expert Level&amp;quot;]. &#039;&#039;MarkTechPost&#039;&#039;. 9 July 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== AlphaGo and successors ====&lt;br /&gt;
&#039;&#039;Main articles: [[AlphaGo]], [[AlphaGo Zero]], [[AlphaZero]], and [[MuZero]]&#039;&#039;&lt;br /&gt;
In October 2015, a [[computer Go]] program called AlphaGo, developed by DeepMind, beat the European Go champion [[Fan Hui]], a [[Go ranks and ratings|2 dan]] (out of 9 dan possible) professional, five to zero.&amp;lt;ref name=&amp;quot;bbcgo&amp;quot;&amp;gt;[https://www.bbc.com/news/technology-35420579 &amp;quot;Google achieves AI &#039;breakthrough&#039; by beating Go champion&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. 27 January 2016.&amp;lt;/ref&amp;gt; This was the first time an artificial intelligence (AI) defeated a professional Go player.&amp;lt;ref name=&amp;quot;lemondego&amp;quot;&amp;gt;Larousserie, David. [http://www.lemonde.fr/pixels/article/2016/01/27/premiere-defaite-d-un-professionnel-du-go-contre-une-intelligence-artificielle_4854886_4408996.html &amp;quot;Première défaite d&#039;un professionnel du go contre une intelligence artificielle&amp;quot;]. &#039;&#039;Le Monde&#039;&#039;. 27 January 2016.&amp;lt;/ref&amp;gt; Previously, computers were only known to have played Go at &amp;quot;amateur&amp;quot; level.&amp;lt;ref name=&amp;quot;bbcgo&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;googlego&amp;quot;&amp;gt;[http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html &amp;quot;Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning&amp;quot;]. &#039;&#039;Google Research Blog&#039;&#039;. 27 January 2016.&amp;lt;/ref&amp;gt; Go is considered much more difficult for computers to win compared to other games like [[chess]], due to the much larger number of possibilities, making it prohibitively difficult for traditional AI methods such as [[Brute-force search|brute-force]].&amp;lt;ref name=&amp;quot;bbcgo&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;googlego&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2016, AlphaGo beat [[Lee Sedol]], a [[Go ranks and ratings|9-dan]] professional player, with a score of 4 to 1 in a [[AlphaGo versus Lee Sedol|five-game match]]. In the 2017 [[Future of Go Summit]], AlphaGo won a [[AlphaGo versus Ke Jie|three-game match with Ke Jie]], who had been the world&#039;s highest-ranked player for two years.&amp;lt;ref&amp;gt;[http://www.goratings.org/ &amp;quot;World&#039;s Go Player Ratings&amp;quot;]. May 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://sports.sina.com.cn/go/2016-08-02/doc-ifxunyya3020238.shtml &amp;quot;柯洁迎19岁生日 雄踞人类世界排名第一已两年&amp;quot;] (&amp;quot;Ke Jie celebrates his 19th birthday, having held the world No. 1 ranking for two years&amp;quot;). May 2017.&amp;lt;/ref&amp;gt; In 2017, an improved version, [[AlphaGo Zero]], defeated AlphaGo in a hundred out of a hundred games. Later that year, [[AlphaZero]], a modified version of AlphaGo Zero, gained superhuman abilities at chess and shogi. In 2019, DeepMind released a new model named [[MuZero]] that mastered the domains of [[Go (game)|Go]], [[chess]], [[shogi]], and [[Atari 2600|Atari 2600 games]] without human data, domain knowledge, or known rules.&amp;lt;ref&amp;gt;[https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules &amp;quot;MuZero: Mastering Go, chess, shogi and Atari without rules&amp;quot;]. &#039;&#039;www.deepmind.com&#039;&#039;. 23 December 2020.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Schrittwieser, Julian. [https://www.nature.com/articles/s41586-020-03051-4.epdf?sharing_token=kTk-xTZpQOF8Ym8nTQK6EdRgN0jAjWel9jnR3ZoTv0PMSWGj38iNIyNOw_ooNp2BvzZ4nIcedo7GEXD7UmLqb0M_V_fop31mMY9VBBLNmGbm0K9jETKkZnJ9SgJ8Rwhp3ySvLuTcUr888puIYbngQ0fiMf45ZGDAQ7fUI66-u7Y= &amp;quot;Mastering Atari, Go, chess and shogi by planning with a learned model&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 23 December 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AlphaGo technology was developed based on [[deep reinforcement learning]], making it different from the AI technologies then on the market. The data fed into the AlphaGo algorithm consisted of various moves based on historical tournament data. The number of moves was increased gradually until over 30 million of them were processed. The aim was to have the system mimic the human player, as represented by the input data, and eventually become better. It played against itself and learned from the outcomes; thus, it learned to improve itself over time, increasing its winning rate as a result.&amp;lt;ref&amp;gt;[https://www.economist.com/news/science-and-technology/21730391-learning-play-go-only-start-latest-ai-can-work-things-out-without &amp;quot;The latest AI can work things out without being taught&amp;quot;]. &#039;&#039;The Economist&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AlphaGo used two deep neural networks: a policy network to evaluate move probabilities and a value network to assess positions. The policy network was trained initially via supervised learning, and was subsequently refined by policy-gradient [[reinforcement learning]]. The value network learned to predict winners of games played by the policy network against itself. After training, these networks were combined with a lookahead [[Monte Carlo tree search]], using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;Silver, David. [http://discovery.ucl.ac.uk/10045895/1/agz_unformatted_nature.pdf &amp;quot;Mastering the game of Go without human knowledge&amp;quot;]. &#039;&#039;[[Nature (journal)|Nature]]&#039;&#039;. 19 October 2017.{{closed access}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
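&lt;br /&gt;
During the lookahead search, moves are selected by a variant of the PUCT rule described in the AlphaGo publications, which balances the value found by search against the policy network&#039;s prior, scaled by visit counts:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;a = \arg\max_a \left( Q(s,a) + c_{\text{puct}}\, P(s,a)\, \frac{\sqrt{\sum_b N(s,b)}}{1 + N(s,a)} \right)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;Q(s,a)&amp;lt;/math&amp;gt; is the mean value of action &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt; found by search, &amp;lt;math&amp;gt;P(s,a)&amp;lt;/math&amp;gt; is the policy network&#039;s prior probability, &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; counts visits, and &amp;lt;math&amp;gt;c_{\text{puct}}&amp;lt;/math&amp;gt; is an exploration constant.&lt;br /&gt;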
&lt;br /&gt;
In contrast, AlphaGo Zero was trained without being fed data of human-played games. Instead it generated its own data, playing millions of games against itself. It used a single neural network, rather than separate policy and value networks. Its simplified tree search relied upon this neural network to evaluate positions and sample moves. A new reinforcement learning algorithm incorporated lookahead search inside the training loop.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; Developing AlphaGo Zero required a team of around 15 people and millions of dollars&#039; worth of computing resources.&amp;lt;ref&amp;gt;Knight, Will. [https://www.technologyreview.com/s/609141/alphago-zero-shows-machines-can-become-superhuman-without-any-help/ &amp;quot;The world&#039;s smartest game-playing AI—DeepMind&#039;s AlphaGo—just got way smarter&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt; Ultimately, it needed much less computing power than AlphaGo, running on four specialized AI processors (Google [[Tensor processing unit|TPUs]]), instead of AlphaGo&#039;s 48.&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught &amp;quot;DeepMind&#039;s Go-playing AI doesn&#039;t need human help to beat us anymore&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 18 October 2017.&amp;lt;/ref&amp;gt; It also required less training time, being able to beat its predecessor after just three days, compared with months required for the original AlphaGo.&amp;lt;ref&amp;gt;Cellan-Jones, Rory. [https://www.bbc.com/news/technology-41668701 &amp;quot;Google DeepMind: AI becomes more alien&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. 18 October 2017.&amp;lt;/ref&amp;gt; Similarly, AlphaZero also learned via [[Self-play (reinforcement learning technique)|self-play]].&lt;br /&gt;
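&lt;br /&gt;
The single network outputs both a move distribution &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; and a value estimate &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;, and is trained so that these match the search probabilities &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; and the self-play game outcome &amp;lt;math&amp;gt;z&amp;lt;/math&amp;gt;, using the loss from the AlphaGo Zero paper:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;l = (z - v)^2 - \pi^\top \log p + c \|\theta\|^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\theta&amp;lt;/math&amp;gt; are the network weights and &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt; controls the amount of weight regularisation.&lt;br /&gt;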
&lt;br /&gt;
Researchers applied MuZero to the real-world challenge of video compression, in which video must be encoded within a set number of bits, a problem relevant to Internet traffic on sites such as [[YouTube]], [[Twitch (service)|Twitch]], and [[Google Meet]]. The goal was to compress the video optimally, so that its quality is maintained while the amount of data is reduced. Using MuZero yielded a 6.28% average reduction in bitrate.&amp;lt;ref&amp;gt;[https://www.deepmind.com/blog/muzeros-first-step-from-research-into-the-real-world &amp;quot;MuZero&#039;s first step from research into the real world&amp;quot;]. &#039;&#039;www.deepmind.com&#039;&#039;. 11 February 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mandhane, Amol. &amp;quot;MuZero with Self-competition for Rate Control in VP9 Video Compression&amp;quot;. 14 February 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== AlphaStar ====&lt;br /&gt;
&#039;&#039;Main article: [[AlphaStar (software)|AlphaStar]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In 2016, Hassabis discussed the game &#039;&#039;[[StarCraft]]&#039;&#039; as a future challenge, since it requires strategic thinking and handling imperfect information.&amp;lt;ref&amp;gt;Byford, Sam. [https://www.theverge.com/2016/3/10/11192774/demis-hassabis-interview-alphago-google-deepmind-ai &amp;quot;DeepMind founder Demis Hassabis on how AI will shape the future&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 10 March 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In January 2019, DeepMind introduced AlphaStar, a program playing the real-time strategy game &#039;&#039;[[StarCraft II]]&#039;&#039;. AlphaStar used reinforcement learning based on replays from human players, and then played against itself to enhance its skills. At the time of the presentation, AlphaStar had knowledge equivalent to 200 years of playing time. It won 10 consecutive matches against two professional players, although it had the unfair advantage of being able to see the entire field, unlike a human player who has to move the camera manually. A preliminary version in which that advantage was removed lost a subsequent match.&amp;lt;ref&amp;gt;Whitwam, Ryan. [http://www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-almost-every-match &amp;quot;DeepMind AI Challenges Pro StarCraft II Players, Wins Almost Every Match&amp;quot;]. &#039;&#039;Extreme Tech&#039;&#039;. 24 January 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2019, AlphaStar began playing anonymously against human players on the public 1v1 European multiplayer ladder. Unlike the first iteration of AlphaStar, which played only [[Protoss]] v. Protoss, this one played as all of the game&#039;s races, and had earlier unfair advantages fixed.&amp;lt;ref&amp;gt;Amadeo, Ron. [https://arstechnica.com/gadgets/2019/07/deepmind-ai-takes-on-the-public-in-starcraft-ii-multiplayer/ &amp;quot;DeepMind AI is secretly lurking on the public StarCraft II 1v1 ladder&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 11 July 2019.&amp;lt;/ref&amp;gt; By October 2019, AlphaStar had reached Grandmaster level on the &#039;&#039;StarCraft II&#039;&#039; ladder on all three &#039;&#039;StarCraft&#039;&#039; races, becoming the first AI to reach the top league of a widely popular [[Esports|esport]] without any game restrictions.&amp;lt;ref&amp;gt;[https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning &amp;quot;AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning&amp;quot;]. &#039;&#039;DeepMind Blog&#039;&#039;. 31 October 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Datacenter operation ===&lt;br /&gt;
In 2014, a datacenter engineer at Google began using supervised machine learning to predict the [[power usage effectiveness]] (PUE) of datacenters at Google. The system was deployed in production to allow operators to simulate control strategies and pick the one that saves the most energy.&amp;lt;ref&amp;gt;Gao, Jim. [https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42542.pdf &amp;quot;Machine Learning Applications for Data Center Optimization&amp;quot;]. &#039;&#039;Google White Paper&#039;&#039;. 2014.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kava, Joe. [https://blog.google/inside-google/infrastructure/better-data-centers-through-machine/ &amp;quot;Better data centers through machine learning&amp;quot;]. &#039;&#039;The Keyword&#039;&#039;. 2014-05-28.&amp;lt;/ref&amp;gt; In 2016, inspired by AlphaGo, he contacted DeepMind to apply [[reinforcement learning]] (RL) to train a system that could also recommend actions. It was tested in a live datacenter: the system read sensor data and recommended actions, which human engineers would then implement. Though the engineers found its recommendations unintuitive, the recommendations satisfied all safety constraints and led to a 15% saving in PUE.&amp;lt;ref&amp;gt;Evans, Rich. [https://blog.google/outreach-initiatives/environment/deepmind-ai-reduces-energy-used-for/ &amp;quot;DeepMind AI reduces energy used for cooling Google data centers by 40%&amp;quot;]. &#039;&#039;The Keyword&#039;&#039;. 2016-07-20.&amp;lt;/ref&amp;gt; The system was deployed more widely across Google, with datacenter controllers receiving email recommendations from the system every 15 minutes.&amp;lt;ref name=&amp;quot;:8&amp;quot;&amp;gt;[https://www.sequoiacap.com/podcast/training-data-jim-gao/ &amp;quot;Phaidra&#039;s Jim Gao on Building for the Fourth Industrial Revolution&amp;quot;]. &#039;&#039;Sequoia Capital&#039;&#039;. 2024-08-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
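&lt;br /&gt;
Power usage effectiveness is the standard efficiency metric for datacenters: the ratio of all energy entering the facility to the energy consumed by the computing equipment itself, so values closer to 1 indicate less overhead spent on cooling and power delivery:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathrm{PUE} = \frac{\text{total facility energy}}{\text{IT equipment energy}}&amp;lt;/math&amp;gt;&lt;br /&gt;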
&lt;br /&gt;
Eventually a more mature and more autonomous system was deployed, in which the AI&#039;s actions are checked against safety constraints and implemented autonomously if verified safe, while human operators supervise the AI and can override it. The system led to a 30% saving in PUE. It produced cooling strategies that surprised long-time operators, such as exploiting winter conditions to produce colder than normal water.&amp;lt;ref name=&amp;quot;:8&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Gamble, Chris. [https://www.deepmind.com/blog/safety-first-ai-for-autonomous-data-centre-cooling-and-industrial-control &amp;quot;Safety-first AI for autonomous data centre cooling and industrial control&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 2018-08-17.&amp;lt;/ref&amp;gt; Google subsequently collaborated with [[Trane Technologies]] to deploy similar RL-based systems on the [[Heating, ventilation, and air conditioning|HVAC]] of facilities outside of Google.&amp;lt;ref&amp;gt;Luo, Jerry. &amp;quot;Controlling Commercial Cooling Systems Using Reinforcement Learning&amp;quot;. 2022-12-14.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Protein folding ===&lt;br /&gt;
&#039;&#039;Main article: [[AlphaFold]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In 2016, DeepMind turned its artificial intelligence to [[protein structure prediction|protein folding]], a long-standing problem in [[molecular biology]]. In December 2018, DeepMind&#039;s AlphaFold won the 13th [[Critical Assessment of Techniques for Protein Structure Prediction]] (CASP) by successfully predicting the most accurate structure for 25 out of 43 proteins. &amp;quot;This is a lighthouse project, our first major investment in terms of people and resources into a fundamental, very important, real-world scientific problem,&amp;quot; Hassabis said to &#039;&#039;The Guardian&#039;&#039;.&amp;lt;ref&amp;gt;Sample, Ian. [https://www.theguardian.com/science/2018/dec/02/google-deepminds-ai-program-alphafold-predicts-3d-shapes-of-proteins &amp;quot;Google&#039;s DeepMind predicts 3D shapes of proteins&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 2 December 2018.&amp;lt;/ref&amp;gt; In 2020, in the 14th CASP, AlphaFold&#039;s predictions achieved an accuracy score regarded as comparable with lab techniques. Andriy Kryshtafovych, one of the panel of scientific adjudicators, described the achievement as &amp;quot;truly remarkable&amp;quot;, and said the problem of predicting how proteins fold had been &amp;quot;largely solved&amp;quot;.&amp;lt;ref&amp;gt;Briggs, Helen. [https://www.bbc.co.uk/news/science-environment-55133972 &amp;quot;One of biology&#039;s biggest mysteries &#039;largely solved&#039; by AI&amp;quot;]. &#039;&#039;[[BBC News]]&#039;&#039;. 30 November 2020.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology &amp;quot;AlphaFold: a solution to a 50-year-old grand challenge in biology&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 30 November 2020.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shead, Sam. [https://www.cnbc.com/2020/11/30/deepmind-solves-protein-folding-grand-challenge-with-alphafold-ai.html &amp;quot;DeepMind solves 50-year-old &#039;grand challenge&#039; with protein folding A.I.&amp;quot;]. cnbc.com. 30 November 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2021, the open-source RoseTTAFold and AlphaFold2 were released to allow scientists to run their own versions of the tools. A week later DeepMind announced that AlphaFold had completed its prediction of nearly all human proteins as well as the entire [[proteome]]s of 20 other widely studied organisms.&amp;lt;ref&amp;gt;Callaway, Ewen. &amp;quot;What&#039;s next for AlphaFold and the AI protein-folding revolution&amp;quot;. &#039;&#039;Nature&#039;&#039;. 2022.&amp;lt;/ref&amp;gt; The structures were released on the AlphaFold Protein Structure Database. In July 2022, it was announced that the predictions of over 200 million proteins, representing virtually all known proteins, would be released on the AlphaFold database.&amp;lt;ref name=geddes /&amp;gt;&amp;lt;ref name=&amp;quot;alphafold DB&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The most recent update, AlphaFold3, was released in May 2024; it predicts the interactions of proteins with DNA, RNA, and various other molecules. On a particular [[Benchmark (computing)|benchmark test]] of DNA interactions, AlphaFold3 attained an accuracy of 65%, significantly improving on the previous state of the art of 28%.&amp;lt;ref&amp;gt;Sullivan, Mark. [https://www.fastcompany.com/91120456/deepmind-alphafold-3-dna-rna-modeling &amp;quot;DeepMind&#039;s new AlphaFold 3 expands to DNA, RNA modeling&amp;quot;]. &#039;&#039;[[Fast Company]]&#039;&#039;. May 8, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In October 2024, Hassabis and [[John M. Jumper|John Jumper]] jointly received half of the 2024 [[Nobel Prize in Chemistry]] for protein structure prediction, with the committee citing the achievement of AlphaFold2.&amp;lt;ref&amp;gt;[https://www.nobelprize.org/prizes/chemistry/2024/press-release/ &amp;quot;The Nobel Prize in Chemistry 2024&amp;quot;]. &#039;&#039;NobelPrize.org&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Language models===&lt;br /&gt;
In 2016, DeepMind introduced [[WaveNet]], a [[text-to-speech]] system. It was originally too computationally intensive for use in consumer products, but in late 2017 it became ready for consumer applications such as [[Google Assistant]].&amp;lt;ref&amp;gt;[http://fortune.com/2017/10/05/google-assistant-deepmind-wavenet-speech-ai/ &amp;quot;Here&#039;s Why Google&#039;s Assistant Sounds More Realistic Than Ever Before&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. 5 October 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Gershgorn, Dave. [https://qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/ &amp;quot;Google&#039;s voice-generating AI is now indistinguishable from humans&amp;quot;]. &#039;&#039;Quartz&#039;&#039;.&amp;lt;/ref&amp;gt; In 2018 Google launched a commercial text-to-speech product, Cloud Text-to-Speech, based on WaveNet.&amp;lt;ref name=&amp;quot;cnbc money&amp;quot;&amp;gt;Novet, Jordan. [https://www.cnbc.com/2018/03/31/how-google-makes-money-from-alphabets-deepmind-ai-research-group.html &amp;quot;Google is finding ways to make money from Alphabet&#039;s DeepMind A.I. technology&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 31 March 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://cloudplatform.googleblog.com/2018/03/introducing-Cloud-Text-to-Speech-powered-by-Deepmind-WaveNet-technology.html &amp;quot;Introducing Cloud Text-to-Speech powered by DeepMind WaveNet technology&amp;quot;]. &#039;&#039;Google Cloud Platform Blog&#039;&#039;.&amp;lt;/ref&amp;gt; The same year, DeepMind introduced a more efficient model called WaveRNN, co-developed with [[Google AI]].&amp;lt;ref&amp;gt;[https://deepmind.com/research/publications/efficient-neural-audio-synthesis &amp;quot;Efficient Neural Audio Synthesis&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices &amp;quot;Using WaveNet technology to reunite speech-impaired users with their original voices&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 18 December 2019.&amp;lt;/ref&amp;gt; In 2020, Google presented WaveNetEQ, a packet loss concealment method based on a WaveRNN architecture, and rolled it out to [[Google Duo]] users.&amp;lt;ref&amp;gt;[http://ai.googleblog.com/2020/04/improving-audio-quality-in-duo-with.html &amp;quot;Improving Audio Quality in Duo with WaveNetEQ&amp;quot;]. &#039;&#039;Google AI Blog&#039;&#039;. April 2020.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Released in May 2022, [[Gato (DeepMind)|Gato]] is a generalist [[multimodal learning|multimodal]] model. It was trained on 604 tasks, such as image captioning, dialogue, and stacking blocks. On 450 of these tasks, Gato outperformed human experts at least half of the time, according to DeepMind.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ &amp;quot;DeepMind&#039;s new AI system can perform over 600 tasks&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 13 May 2022.&amp;lt;/ref&amp;gt; Unlike models such as MuZero, Gato does not need to be retrained to switch from one task to another.&lt;br /&gt;
&lt;br /&gt;
[[Sparrow (chatbot)|Sparrow]] is a [[chatbot]] developed by DeepMind as a step toward safer machine learning systems; it is trained using a mix of human feedback and Google Search suggestions.&amp;lt;ref&amp;gt;Gupta, Khushboo. [https://www.marktechpost.com/2022/09/28/deepmind-introduces-sparrow-an-artificial-intelligence-powered-chatbot-developed-to-build-safer-machine-learning-systems/ &amp;quot;Deepmind Introduces &#039;Sparrow,&#039; An Artificial Intelligence-Powered Chatbot Developed To Build Safer Machine Learning Systems&amp;quot;]. 28 September 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Chinchilla (language model)|Chinchilla]] is a language model developed by DeepMind; although smaller than contemporaries such as [[GPT-3]], it outperformed larger models by being trained on substantially more data for its size.&amp;lt;ref&amp;gt;[https://dataconomy.com/2023/01/12/what-is-chinchilla-ai-chatbot-deepmind/ &amp;quot;What Is Chinchilla AI: Chatbot Language Model Rival By Deepmind To GPT-3&amp;quot;]. &#039;&#039;Dataconomy&#039;&#039;. 12 January 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On 28 April 2022, DeepMind described Flamingo, a single visual language model (VLM) that can accurately describe an image with just a few training examples.&amp;lt;ref&amp;gt;Alayrac, Jean-Baptiste. [https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model &amp;quot;Tackling multiple tasks with a single visual language model&amp;quot;]. &#039;&#039;www.deepmind.com&#039;&#039;. 28 April 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Alayrac, Jean-Baptiste. [https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/tackling-multiple-tasks-with-a-single-visual-language-model/flamingo.pdf &amp;quot;Flamingo: a Visual Language Model for Few-Shot Learning&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== AlphaCode ====&lt;br /&gt;
In 2022, DeepMind unveiled AlphaCode, an [[AI-assisted software development|AI-powered coding engine]] that writes [[computer programs]] at a level comparable to that of an average programmer; the company tested the system against coding challenges created by [[Codeforces]] and used in human [[competitive programming]] competitions.&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2022/2/2/22914085/alphacode-ai-coding-program-automatic-deepmind-codeforce &amp;quot;DeepMind says its new AI coding engine is as good as an average human programmer&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. 2 February 2022.&amp;lt;/ref&amp;gt; After training on [[GitHub]] data and Codeforces problems and solutions, AlphaCode ranked within the top 54% of participants on Codeforces, roughly the level of the median competitor. The program was required to generate original solutions and was prevented from duplicating answers.&lt;br /&gt;
&lt;br /&gt;
====Gemini====&lt;br /&gt;
&#039;&#039;Main article: [[Gemini (language model)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Gemini is a [[Multimodal learning|multimodal]] [[large language model]] released on 6 December 2023.&amp;lt;ref&amp;gt;Kruppa, Miles. [https://www.wsj.com/tech/ai/google-announces-ai-system-gemini-after-turmoil-at-rival-openai-10835335 &amp;quot;Google Announces AI System Gemini After Turmoil at Rival OpenAI&amp;quot;]. &#039;&#039;[[The Wall Street Journal]]&#039;&#039;. 6 December 2023.&amp;lt;/ref&amp;gt; It is the successor of Google&#039;s [[LaMDA]] and [[PaLM|PaLM 2]] language models and sought to challenge OpenAI&#039;s [[GPT-4]].&amp;lt;ref&amp;gt;Knight, Will. [https://www.wired.com/story/google-deepmind-demis-hassabis-chatgpt/ &amp;quot;Google DeepMind&#039;s CEO Says Its Next Algorithm Will Eclipse ChatGPT&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. 26 June 2023.&amp;lt;/ref&amp;gt; Gemini comes in three sizes: Nano, Pro, and Ultra.&amp;lt;ref&amp;gt;Pierce, David. [https://www.theverge.com/2023/12/6/23990466/google-gemini-llm-ai-model &amp;quot;Google launches Gemini, the AI model it hopes will take down GPT-4&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 6 December 2023.&amp;lt;/ref&amp;gt; Gemini is also the name of the chatbot built on the model, previously called [[Google Bard|Bard]].&amp;lt;ref&amp;gt;[https://www.cbsnews.com/news/google-gemini-ai-bard/ &amp;quot;Google is rebranding its Bard AI service as Gemini. Here&#039;s what it means.&amp;quot;]. &#039;&#039;CBS News&#039;&#039;. 8 February 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On 12 December 2024, Google released Gemini 2.0 Flash, the first model in the Gemini 2.0 series. It notably features expanded multimodality, with the ability to also generate images and audio,&amp;lt;ref&amp;gt;Haddad, C. J. [https://www.cnbc.com/2024/12/11/google-releases-the-first-of-its-gemini-2point0-ai-models.html &amp;quot;Google releases the first of its Gemini 2.0 AI models&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2024-12-11.&amp;lt;/ref&amp;gt; and is part of Google&#039;s broader plans to integrate advanced AI into [[Autonomous agent|autonomous agents]].&amp;lt;ref&amp;gt;[https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/?utm_content=#agents-for-developers &amp;quot;Introducing Gemini 2.0: our new AI model for the agentic era&amp;quot;]. &#039;&#039;Google&#039;&#039;. 2024-12-11.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On 25 March 2025, Google released Gemini 2.5, a reasoning model that stops to &amp;quot;think&amp;quot; before giving a response. Google announced that all future models would also have reasoning abilities.&amp;lt;ref&amp;gt;Zeff, Maxwell. [https://techcrunch.com/2025/03/25/google-unveils-a-next-gen-ai-reasoning-model/ &amp;quot;Google unveils a next-gen family of AI reasoning models&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-03-25.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/#enhanced-reasoning &amp;quot;Gemini 2.5: Our most intelligent AI model&amp;quot;]. &#039;&#039;Google&#039;&#039;. 2025-03-25.&amp;lt;/ref&amp;gt; On 30 March 2025, Google released Gemini 2.5 to all free users.&amp;lt;ref&amp;gt;Kumari, Sweta. [https://www.business-standard.com/technology/tech-news/google-rolls-out-custom-chatbots-gems-for-free-tier-gemini-users-details-125032600913_1.html &amp;quot;Google rolls-out custom chatbots &#039;Gems&#039; for free-tier Gemini users: Details&amp;quot;]. &#039;&#039;Business Standard&#039;&#039;. 2025-03-26.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On 18 November 2025, Google released Gemini 3 Pro, a fully multimodal reasoning model.&amp;lt;ref&amp;gt;[https://blog.google/products/gemini/gemini-3/ &amp;quot;A new era of intelligence with Gemini 3&amp;quot;]. &#039;&#039;Google&#039;&#039;. 2025-11-18.&amp;lt;/ref&amp;gt; It was integrated into Google Search&#039;s AI Mode the same day.&amp;lt;ref&amp;gt;Knight, Will. [https://www.wired.com/story/google-launches-gemini-3-ai-bubble-search/ &amp;quot;Gemini 3 Is Here—and Google Says It Will Make Search Smarter&amp;quot;]. &#039;&#039;Wired&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Gemma====&lt;br /&gt;
&#039;&#039;Main article: [[Gemma (language model)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Gemma is a collection of open-weight large language models. The first models were released on 21 February 2024 and are available in two distinct sizes: a 7 billion parameter model optimized for GPU and TPU usage, and a 2 billion parameter model designed for CPU and on-device applications. Gemma models were trained on up to 6 trillion tokens of text, employing architectures, datasets, and training methodologies similar to those of the Gemini models.&amp;lt;ref&amp;gt;Quach, Katyanna. [https://www.theregister.com/2024/02/22/google_gemma_llms/ &amp;quot;Google Gemma LLMs small enough to run on your computer&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 2024-02-22.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2024, Google started releasing Gemma 2 models.&amp;lt;ref&amp;gt;Yeung, Ken. [https://venturebeat.com/ai/googles-gemma-2-series-launches-with-not-one-but-two-lightweight-model-options-a-9b-and-27b/ &amp;quot;Google&#039;s Gemma 2 series launches with not one, but two lightweight model options—a 9B and 27B&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2024-06-27.&amp;lt;/ref&amp;gt; In December 2024, Google introduced &#039;&#039;PaliGemma 2&#039;&#039;, an upgraded vision-language model.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/12/05/google-says-its-new-open-models-can-identify-emotions-and-that-has-experts-worried/ &amp;quot;Google says its new AI models can identify emotions — and that has experts worried&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 5 December 2024.&amp;lt;/ref&amp;gt; In February 2025, Google launched &#039;&#039;PaliGemma 2 Mix&#039;&#039;, a version fine-tuned for multiple tasks, available in 3B, 10B, and 28B parameter sizes at 224px and 448px resolutions.&amp;lt;ref&amp;gt;Barron, Jenna. [https://sdtimes.com/ai/feb-21-2025-development-tools-that-have-recently-added-new-ai-capabilities/ &amp;quot;Feb 21, 2025: Development tools that have recently added new AI capabilities&amp;quot;]. &#039;&#039;SD Times&#039;&#039;. 2025-02-21.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2025, Google released Gemma 3, calling it the most capable model that can be run on a single GPU.&amp;lt;ref&amp;gt;Lawler, Richard. [https://www.theverge.com/ai-artificial-intelligence/627968/google-gemma-3-open-ai-model &amp;quot;Google calls Gemma 3 the most powerful AI model you can run on one GPU&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 2025-03-12.&amp;lt;/ref&amp;gt; It has four available sizes: 1B, 4B, 12B, and 27B.&amp;lt;ref&amp;gt;David, Emilia. [https://venturebeat.com/ai/google-unveils-open-source-gemma-3-model-with-128k-context-window/ &amp;quot;Google unveils open source Gemma 3 model with 128k context window&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2025-03-12.&amp;lt;/ref&amp;gt; In March 2025, Google introduced TxGemma, an open-source model designed to improve the efficiency of therapeutics development.&amp;lt;ref&amp;gt;Azizi, Shekoofeh. [https://developers.googleblog.com/en/introducing-txgemma-open-models-improving-therapeutics-development/#:~:text=TxGemma%20models,%20fine-tuned%20from,:%202B,%209B%20and%2027B. &amp;quot;Introducing TxGemma: Open models to improve therapeutics development&amp;quot;]. &#039;&#039;Google Developers Blog&#039;&#039;. 25 March 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2025, Google introduced DolphinGemma, a research artificial intelligence model intended to help decode dolphin communication. The goal is to train a foundation model that can learn the structure of dolphin vocalizations and generate novel dolphin-like sound sequences.&amp;lt;ref&amp;gt;Starner, Thad. [https://blog.google/technology/ai/dolphingemma/ &amp;quot;DolphinGemma: How Google AI is helping decode dolphin communication&amp;quot;]. &#039;&#039;Google&#039;&#039;. 2025-04-14.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Aadeetya, S. [https://www.news18.com/tech/dolphingemma-google-using-ai-to-understand-what-dolphins-are-saying-9299420.html &amp;quot;DolphinGemma: Google Using AI And Pixel 9 Phone To Understand What Dolphins Are Saying&amp;quot;]. &#039;&#039;News18&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====SIMA====&lt;br /&gt;
In March 2024, DeepMind introduced the Scalable Instructable Multiworld Agent, or SIMA, an AI agent capable of understanding and following natural language instructions to complete tasks across various 3D virtual environments. Trained on nine video games from eight studios and four research environments, SIMA demonstrated adaptability to new tasks and settings without requiring access to game source code or APIs. The agent comprises pre-trained computer vision and language models fine-tuned on gaming data, with language crucial to understanding and completing tasks as instructed. DeepMind&#039;s research aimed to develop more helpful AI agents by translating advanced AI capabilities into real-world actions through a language interface.&amp;lt;ref&amp;gt;[https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/ &amp;quot;A generalist AI agent for 3D virtual environments&amp;quot;]. &#039;&#039;Google DeepMind&#039;&#039;. 13 March 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;David, Emilia. [https://www.theverge.com/2024/3/13/24099024/google-deepmind-ai-agent-sima-video-games &amp;quot;Google&#039;s new AI will play video games with you — but not to win&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 13 March 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Habermas machine ====&lt;br /&gt;
&#039;&#039;See also: [[Pol.is|Deliberative opinion poll]]&#039;&#039;&lt;br /&gt;
In 2024, Google DeepMind published the results of an experiment in which it trained two [[Large language model|large language models]] to help identify and present areas of overlap among a few thousand participants recruited online, using techniques such as [[sortition]] to obtain a representative sample. The project is named in honor of [[Jürgen Habermas]].&amp;lt;ref&amp;gt;Williams, Rhiannon. [https://www.technologyreview.com/2024/10/17/1105810/ai-could-help-people-find-common-ground-during-deliberations/ &amp;quot;AI could help people find common ground during deliberations&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. October 17, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:6&amp;quot;&amp;gt;Davis, Nicola. [https://www.theguardian.com/technology/2024/oct/17/ai-mediation-tool-may-help-reduce-culture-war-rifts-say-researchers &amp;quot;AI mediation tool may help reduce culture war rifts, say researchers&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 2024-10-17.&amp;lt;/ref&amp;gt; In one experiment, participants rated the AI&#039;s summaries higher than those of the human moderator 56% of the time.&amp;lt;ref name=&amp;quot;:6&amp;quot; /&amp;gt;&lt;br /&gt;
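&lt;br /&gt;
The mediation step can be sketched schematically in Python (hypothetical names; the published system is more elaborate): one model drafts candidate group statements from the written opinions, a second model predicts each participant&#039;s agreement, and the draft with the highest predicted agreement is put to the group.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def mediate(opinions, generator, preference_model, n_drafts=16):&lt;br /&gt;
    # Draft several candidate consensus statements from individual opinions.&lt;br /&gt;
    drafts = [generator.draft_statement(opinions) for _ in range(n_drafts)]&lt;br /&gt;
    # score(o, s) predicts how strongly the author of opinion o endorses s.&lt;br /&gt;
    def predicted_agreement(statement):&lt;br /&gt;
        return sum(preference_model.score(o, statement) for o in opinions)&lt;br /&gt;
    return max(drafts, key=predicted_agreement)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;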
&lt;br /&gt;
=== Generative AI ===&lt;br /&gt;
&lt;br /&gt;
==== Video generation ====&lt;br /&gt;
&#039;&#039;Main article: [[Veo (text-to-video model)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In May 2024, a [[Multimodality|multimodal]] [[Text-to-video model|video generation model]] called Veo was announced at [[Google I/O]] 2024. Google claimed that it could generate [[1080p]] videos longer than a minute.&amp;lt;ref name=&amp;quot;Wiggers 14 May 2024&amp;quot;&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/05/14/google-veo-a-serious-swing-at-ai-generated-video-debuts-at-google-io-2024/ &amp;quot;Google Veo, a serious swing at AI-generated video, debuts at Google I/O 2024&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 14 May 2024.&amp;lt;/ref&amp;gt; In December 2024, [[Google]] released Veo 2, available via VideoFX. It supports [[4K resolution]] video generation and has an improved understanding of physics.&amp;lt;ref&amp;gt;[https://www.thehindu.com/sci-tech/technology/google-unveils-improved-ai-video-generator-veo-2-to-rival-openais-sora/article68994621.ece &amp;quot;Google unveils improved AI video generator Veo 2 to rival OpenAI&#039;s Sora&amp;quot;]. &#039;&#039;The Hindu&#039;&#039;. 2024-12-17.&amp;lt;/ref&amp;gt; In April 2025, Google announced that Veo 2 became available to advanced users in the Gemini app.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2025/04/15/googles-veo-2-video-generator-comes-to-gemini/ &amp;quot;Google&#039;s Veo 2 video generating model comes to Gemini&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-04-15.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In May 2025, Google released Veo 3, which not only generates videos but also creates synchronized audio — including dialogue, sound effects, and ambient noise — to match the visuals.&amp;lt;ref&amp;gt;Elias, Jennifer. [https://www.cnbc.com/2025/05/20/google-ai-video-generator-audio-veo-3.html &amp;quot;Google launches Veo 3, an AI video generator that incorporates audio&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. 2025-05-20.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2025/05/20/googles-veo-3-can-generate-videos-and-soundtracks-to-go-along-with-them/ &amp;quot;Veo 3 can generate videos — and soundtracks to go along with them&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-05-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Google also announced Flow, a video-creation tool powered by Veo and [[Imagen (text-to-image model)|Imagen]].&amp;lt;ref&amp;gt;Peters, Jay. [https://www.theverge.com/news/670181/google-deepmind-ai-videos-app-flow-veo-3-2-imagen-4-io-2025 &amp;quot;Google has a new tool just for making AI videos&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. May 20, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Music generation ====&lt;br /&gt;
[[File:Digital Sonata of the Speed God (Lyria 3).opus|thumb|A piece of music generated with Lyria 3]]&lt;br /&gt;
Google DeepMind developed Lyria, a text-to-music model. As of August 2025, it is available on Vertex AI and the Gemini API.&amp;lt;ref&amp;gt;[https://cloud.google.com/vertex-ai/generative-ai/docs/music/generate-music &amp;quot;Vertex AI&amp;quot;]. &#039;&#039;Google Cloud&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://ai.google.dev/gemini-api/docs/music-generation &amp;quot;Music generation using Lyria RealTime&amp;quot;].&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2025/04/09/google-brings-a-music-generating-ai-model-to-its-enterprise-cloud/ &amp;quot;Google&#039;s enterprise cloud gets a music-generating AI model&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-04-09.&amp;lt;/ref&amp;gt; On February 18, 2026, DeepMind released Lyria 3.&amp;lt;ref&amp;gt;Whitwam, Ryan. [https://arstechnica.com/google/2026/02/gemini-can-now-generate-ai-music-for-you-no-lyrics-required/ &amp;quot;Record scratch—Google&#039;s Lyria 3 AI music model is coming to Gemini today&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2026-02-18.&amp;lt;/ref&amp;gt; On March 25, 2026, DeepMind released Lyria 3 Pro, which allows users to create longer tracks with more structural awareness.&amp;lt;ref&amp;gt;Mehta, Ivan. [https://techcrunch.com/2026/03/25/google-launches-lyria-3-pro-music-generation-model/ &amp;quot;Google launches Lyria 3 Pro music generation model&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2026-03-25.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Environment generation ====&lt;br /&gt;
&#039;&#039;Main article: [[Genie (AI model)]]&#039;&#039;&lt;br /&gt;
In March 2024, DeepMind introduced &amp;quot;[[Genie (text-to-video model)|Genie]]&amp;quot; (Generative Interactive Environments), an AI model that can generate game-like, action-controllable virtual worlds based on textual descriptions, images, or sketches. Built as an autoregressive [[latent diffusion model]], Genie enables frame-by-frame interactivity without requiring labeled action data for training. Its successor, Genie 2, released in December 2024, expanded these capabilities to generate diverse and interactive 3D environments.&amp;lt;ref&amp;gt;Orland, Kyle. [https://arstechnica.com/ai/2024/12/googles-genie-2-world-model-reveal-leaves-more-questions-than-answers/ &amp;quot;Google&#039;s Genie 2 &amp;quot;world model&amp;quot; reveal leaves more questions than answers&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2024-12-06.&amp;lt;/ref&amp;gt; Genie 3 was released in August 2025, with higher-resolution world generations and multiple minutes of visual consistency.&amp;lt;ref&amp;gt;Whitwam, Ryan. [https://arstechnica.com/ai/2025/08/deepmind-reveals-genie-3-world-model-that-creates-real-time-interactive-simulations/ &amp;quot;DeepMind reveals Genie 3 &amp;quot;world model&amp;quot; that creates real-time interactive simulations&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-08-05.&amp;lt;/ref&amp;gt; On January 29, 2026, DeepMind released Project Genie to AI Ultra subscribers.&amp;lt;ref&amp;gt;Whitwam, Ryan. [https://arstechnica.com/google/2026/01/google-project-genie-lets-you-create-interactive-worlds-from-a-photo-or-prompt/ &amp;quot;Google Project Genie lets you create interactive worlds from a photo or prompt&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2026-01-29.&amp;lt;/ref&amp;gt;&lt;br /&gt;
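&lt;br /&gt;
Frame-by-frame interaction with such a world model can be sketched as follows (schematic Python with hypothetical component names, not DeepMind&#039;s code): each new frame is generated conditioned on the frame history and a discrete latent action that the model learned without action labels.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def interactive_rollout(world_model, first_frame, controller, steps=64):&lt;br /&gt;
    frames = [world_model.encode(first_frame)]  # tokenize the prompt image&lt;br /&gt;
    for _ in range(steps):&lt;br /&gt;
        # The player picks one of a small set of learned latent actions.&lt;br /&gt;
        action = controller.choose_latent_action()&lt;br /&gt;
        frames.append(world_model.next_frame(frames, action))&lt;br /&gt;
    return [world_model.decode(f) for f in frames]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;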
&lt;br /&gt;
=== Robotics ===&lt;br /&gt;
Released in June 2023, RoboCat is an AI model that can control robotic arms. The model can adapt to new models of robotic arms, and to new types of tasks.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2023/06/21/deepminds-robocat-learns-to-perform-a-range-of-robotics-tasks/ &amp;quot;DeepMind&#039;s RoboCat learns to perform a range of robotics tasks&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 21 June 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Cuthbertson, Anthony. [https://www.independent.co.uk/tech/google-deepmind-ai-robot-robocat-b2362892.html &amp;quot;Google&#039;s DeepMind unveils AI robot that can teach itself unsupervised&amp;quot;]. &#039;&#039;The Independent&#039;&#039;. 23 June 2023.&amp;lt;/ref&amp;gt; In March 2025, DeepMind launched two AI models, Gemini Robotics and Gemini Robotics-ER, aimed at improving how robots interact with the physical world&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2025/03/12/google-deepmind-unveils-new-ai-models-for-controlling-robots/ &amp;quot;Google DeepMind unveils new AI models for controlling robots&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-03-12.&amp;lt;/ref&amp;gt; and released Gemini Robotics 1.5 in September 2025.&amp;lt;ref&amp;gt;Schreiner, Maximilian. [https://the-decoder.com/google-deepmind-taps-boston-dynamics-former-cto-to-build-the-android-of-robots/ &amp;quot;Google Deepmind taps Boston Dynamics&#039; former CTO to build the &#039;Android&#039; of robots&amp;quot;]. &#039;&#039;the decoder&#039;&#039;. 2025-11-20.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Others ===&lt;br /&gt;
&lt;br /&gt;
==== Football ====&lt;br /&gt;
DeepMind researchers have applied machine learning models to the sport of [[Association football|football]], often referred to as soccer in North America, modelling the behaviour of football players, including the goalkeeper, defenders, and strikers, in scenarios such as penalty kicks. The researchers used heat maps and cluster analysis to group players by how they tend to behave when deciding how to score or to prevent the other team from scoring.&lt;br /&gt;
&lt;br /&gt;
The researchers mention that machine learning models could be used to democratize the football industry by automatically selecting interesting video clips of the game that serve as highlights. This can be done by searching videos for certain events, which is possible because video analysis is an established field of machine learning. This is also possible because of extensive sports analytics based on data including annotated passes or shots, sensors that capture data about the players&#039; movements many times over the course of a game, and game theory models.&amp;lt;ref&amp;gt;Tuyls, Karl. [https://www.deepmind.com/blog/advancing-sports-analytics-through-ai-research &amp;quot;Advancing sports analytics through AI research&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 7 May 2021.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Tuyls, Karl. [https://www.jair.org/index.php/jair/article/view/12505 &amp;quot;Game Plan: What AI can do for Football, and What Football can do for AI&amp;quot;]. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 6 May 2021.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Archaeology ====&lt;br /&gt;
Google has unveiled Ithaca, a deep neural network for analysing ancient documents, named after [[Homer&#039;s Ithaca|the Greek island]] in Homer&#039;s [[Odyssey]].&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;Assael, Yannis. [https://deepmind.google/discover/blog/predicting-the-past-with-ithaca/ &amp;quot;Predicting the past with Ithaca&amp;quot;]. &#039;&#039;Google DeepMind&#039;&#039;. 9 March 2022.&amp;lt;/ref&amp;gt; The network helps researchers restore the missing text of damaged Greek inscriptions and identify their date and geographical origin.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2022/3/9/22968773/ai-machine-learning-ancient-inscriptions-texts-deepmind-ithaca-model &amp;quot;DeepMind&#039;s new AI model helps decipher, date, and locate ancient inscriptions&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 9 March 2022.&amp;lt;/ref&amp;gt; The work builds on another text analysis network that DeepMind released in 2019, named Pythia.&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt; Ithaca achieves 62% accuracy in restoring damaged texts and 71% accuracy in attributing their location, and can date texts to within 30 years.&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt; The authors claimed that the use of Ithaca by &amp;quot;expert historians&amp;quot; raised the accuracy of their work from 25 to 72 percent.&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; However, [[Eleanor Dickey]] noted that the historians in this test were actually students, saying it wasn&#039;t clear how helpful Ithaca would be to &amp;quot;genuinely qualified editors&amp;quot;.&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The team is working on extending the model to other ancient languages, including [[Demotic Egyptian language|Demotic]], [[Akkadian language|Akkadian]], [[Hebrew language|Hebrew]], and [[Mayan languages|Mayan]].&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Materials science ====&lt;br /&gt;
In November 2023, Google DeepMind announced an Open Source Graph Network for Materials Exploration (GNoME). The tool proposes millions of materials previously unknown to chemistry, including several hundred thousand stable crystalline structures, of which 736 had been experimentally produced by the Massachusetts Institute of Technology at the time of the release.&amp;lt;ref&amp;gt;Merchant, Amil. &amp;quot;Scaling deep learning for materials discovery&amp;quot;. &#039;&#039;Nature&#039;&#039;. December 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kim, June. [https://www.technologyreview.com/2023/11/29/1084061/deepmind-ai-tool-for-new-materials-discovery/ &amp;quot;Google DeepMind&#039;s new AI tool helped create more than 700 new materials&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 29 November 2023.&amp;lt;/ref&amp;gt; However, according to [[Anthony Cheetham]], GNoME did not make &amp;quot;a useful, practical contribution to the experimental materials scientists.&amp;quot;&amp;lt;ref name=&amp;quot;404media&amp;quot;&amp;gt;Koebler, Jason. [https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/ &amp;quot;Is Google&#039;s AI Actually Discovering &#039;Millions of New Materials?&#039;&amp;quot;]. &#039;&#039;[[404 Media]]&#039;&#039;. April 11, 2024.&amp;lt;/ref&amp;gt; A review article by Cheetham and Ram Seshadri was unable to identify any &amp;quot;strikingly novel&amp;quot; materials found by GNoME, with most being minor variants of already-known materials.&amp;lt;ref name=&amp;quot;404media&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Cheetham, Anthony K. &amp;quot;Artificial intelligence driving materials discovery? Perspective on the article: Scaling Deep Learning for Materials Discovery&amp;quot;. &#039;&#039;[[Chemistry of Materials]]&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mathematics ===&lt;br /&gt;
====AlphaTensor====&lt;br /&gt;
In October 2022, DeepMind released [[AlphaTensor]], which used reinforcement learning techniques similar to those in AlphaGo, to find novel [[Matrix multiplication algorithm|algorithms for matrix multiplication]].&amp;lt;ref name=AlphaTensor1&amp;gt;Hutson, Matthew. [https://www.nature.com/articles/d41586-022-03166-w &amp;quot;DeepMind AI invents faster algorithms to solve tough maths puzzles&amp;quot;]. &#039;&#039;[[Nature (journal)|Nature]]&#039;&#039;. 5 October 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref name=AlphaTensor2&amp;gt;Heaven, Will Douglas. [https://www.technologyreview.com/2022/10/05/1060717/deepmind-uses-its-game-playing-ai-to-best-a-50-year-old-record-in-computer-science/ &amp;quot;DeepMind&#039;s game-playing AI has beaten a 50-year-old record in computer science&amp;quot;]. &#039;&#039;[[MIT Technology Review]]&#039;&#039;. 5 October 2022.&amp;lt;/ref&amp;gt; In the special case of multiplying two 4×4 matrices with [[integer]] entries, where only the evenness or oddness of the entries is recorded, AlphaTensor found an algorithm requiring only 47 distinct multiplications; the previous optimum, known since 1969, was the more general [[Strassen algorithm]], using 49 multiplications.&amp;lt;ref name=&amp;quot;quantamag&amp;quot;&amp;gt;Brubaker, Ben. [https://www.quantamagazine.org/ai-reveals-new-possibilities-in-matrix-multiplication-20221123/ &amp;quot;AI Reveals New Possibilities in Matrix Multiplication&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. November 2022.&amp;lt;/ref&amp;gt; Computer scientist Josh Alman described AlphaTensor as &amp;quot;a proof of concept for something that could become a breakthrough&amp;quot;, while [[Virginia Vassilevska Williams|Vassilevska Williams]] called it &amp;quot;a little overhyped&amp;quot;&amp;lt;ref name=&amp;quot;quantamag&amp;quot; /&amp;gt; despite also acknowledging its basis in reinforcement learning as &amp;quot;something completely different&amp;quot; from previous approaches.&amp;lt;ref name=AlphaTensor2 /&amp;gt;&lt;br /&gt;
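&lt;br /&gt;
For illustration, the following minimal Python sketch (an aid to the reader, not DeepMind&#039;s code) shows Strassen&#039;s seven-multiplication scheme for 2×2 blocks; applying it recursively to a 4×4 matrix costs 7 × 7 = 49 scalar multiplications, the 1969 baseline that AlphaTensor&#039;s 47-multiplication algorithm improved on in arithmetic modulo 2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def strassen_2x2(A, B):&lt;br /&gt;
    # Multiply 2x2 matrices ((a11, a12), (a21, a22)) with 7 multiplications&lt;br /&gt;
    # instead of the naive 8, at the cost of extra additions.&lt;br /&gt;
    (a11, a12), (a21, a22) = A&lt;br /&gt;
    (b11, b12), (b21, b22) = B&lt;br /&gt;
    m1 = (a11 + a22) * (b11 + b22)&lt;br /&gt;
    m2 = (a21 + a22) * b11&lt;br /&gt;
    m3 = a11 * (b12 - b22)&lt;br /&gt;
    m4 = a22 * (b21 - b11)&lt;br /&gt;
    m5 = (a11 + a12) * b22&lt;br /&gt;
    m6 = (a21 - a11) * (b11 + b12)&lt;br /&gt;
    m7 = (a12 - a22) * (b21 + b22)&lt;br /&gt;
    return ((m1 + m4 - m5 + m7, m3 + m5),&lt;br /&gt;
            (m2 + m4, m1 - m2 + m3 + m6))&lt;br /&gt;
&lt;br /&gt;
# Prints ((19, 22), (43, 50)), matching the ordinary matrix product.&lt;br /&gt;
print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;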
&lt;br /&gt;
====AlphaGeometry====&lt;br /&gt;
&#039;&#039;Main article: [[AlphaGeometry]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AlphaGeometry is a [[neuro-symbolic AI]] that was able to solve 25 out of 30 geometry problems of the [[International Mathematical Olympiad]], a performance comparable to that of a gold medalist.&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;Zia, Tehseen. [https://www.unite.ai/alphageometry-how-deepminds-ai-masters-geometry-problems-at-olympian-levels/ &amp;quot;AlphaGeometry: DeepMind&#039;s AI Masters Geometry Problems at Olympiad Levels&amp;quot;]. &#039;&#039;Unite.ai&#039;&#039;. January 24, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Traditional geometry programs are [[Symbolic artificial intelligence|symbolic engines]] that rely exclusively on human-coded [[Rule-based system|rules]] to generate rigorous proofs, which makes them lack flexibility in unusual situations. AlphaGeometry combines such a symbolic engine with a specialized [[large language model]] trained on [[synthetic data]] of geometrical proofs. When the symbolic engine doesn&#039;t manage to find a formal and rigorous proof on its own, it solicits the large language model, which suggests a geometrical construct to move forward. However, it is unclear how applicable this method is to other domains of mathematics or reasoning, because symbolic engines rely on domain-specific rules and because of the need for synthetic data.&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt;&lt;br /&gt;
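&lt;br /&gt;
The alternation can be sketched as a simple loop (schematic Python with hypothetical names, not DeepMind&#039;s code): exhaustive symbolic deduction runs until it either finds a proof or gets stuck, at which point the language model proposes an auxiliary construct and deduction resumes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def solve(problem, symbolic_engine, language_model, max_steps=16):&lt;br /&gt;
    state = problem&lt;br /&gt;
    for _ in range(max_steps):&lt;br /&gt;
        proof = symbolic_engine.deduce(state)  # exhaustive rule-based deduction&lt;br /&gt;
        if proof is not None:&lt;br /&gt;
            return proof  # a rigorous, human-checkable proof&lt;br /&gt;
        # Deduction is stuck: ask the language model for an auxiliary&lt;br /&gt;
        # construct (for example a midpoint or an extra line) and retry.&lt;br /&gt;
        state = state.with_construct(language_model.propose_construct(state))&lt;br /&gt;
    return None  # no proof found within the budget&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;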
&lt;br /&gt;
====AlphaProof====&lt;br /&gt;
AlphaProof is an AI model that couples a pre-trained language model with the AlphaZero reinforcement learning algorithm, which previously taught itself to master games. The language model is a [[Gemini (language model)|Gemini]] model [[Fine-tuning (deep learning)|fine-tuned]] to automatically translate natural-language problem statements into formal statements, creating a large library of formal problems of varying difficulty. For this purpose, mathematical statements are written in the formal language [[Lean (proof assistant)|Lean]]. At the 2024 International Mathematical Olympiad, AlphaProof together with an adapted version of AlphaGeometry achieved, for the first time, performance on the competition&#039;s problems at the level of a silver medalist.&amp;lt;ref name=&amp;quot;NYT&amp;quot;&amp;gt;Roberts, Siobhan. [https://www.nytimes.com/2024/07/25/science/ai-math-alphaproof-deepmind.html/ &amp;quot;AI achieves silver-medal standard solving International Mathematical Olympiad problems&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. July 25, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;IMO&amp;quot;&amp;gt;AlphaProof and AlphaGeometry teams. [https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ &amp;quot;AI achieves silver-medal standard solving International Mathematical Olympiad problems&amp;quot;]. &#039;&#039;deepmind.google&#039;&#039;. July 25, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
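&lt;br /&gt;
An illustrative example (not from DeepMind&#039;s data) of the kind of autoformalization involved: the informal claim &amp;quot;for all natural numbers a and b, a + b = b + a&amp;quot; rendered as a Lean statement, which a prover must then close; for this toy statement an existing library lemma suffices.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;lean&amp;quot;&amp;gt;&lt;br /&gt;
-- Illustration only: a natural-language statement translated into Lean 4.&lt;br /&gt;
-- An AlphaProof-style system must then search for a proof term; here the&lt;br /&gt;
-- core library lemma Nat.add_comm already closes the goal.&lt;br /&gt;
theorem add_comm_example (a b : ℕ) : a + b = b + a :=&lt;br /&gt;
  Nat.add_comm a b&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;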
&lt;br /&gt;
===AlphaDev===&lt;br /&gt;
{{Main article|AlphaDev}}&lt;br /&gt;
&lt;br /&gt;
In June 2023, DeepMind announced that [[AlphaDev]], which searches for improved computer science algorithms using [[reinforcement learning]], discovered a more efficient way of coding a sorting algorithm and a hashing algorithm. The new sorting algorithm was 70% faster for shorter sequences and 1.7% faster for sequences exceeding 250,000 elements, and the new hashing algorithm was 30% faster in some cases. The sorting algorithm was accepted into the [[C++ Standard Library]]&#039;s [[sorting algorithm]]s; it was the first change to those algorithms in more than a decade and the first update to involve an algorithm discovered using AI.&amp;lt;ref name=&amp;quot;mit&amp;quot;&amp;gt;Heaven, Will Douglas. [https://www.technologyreview.com/2023/06/07/1074184/google-deepmind-game-ai-alphadev-algorithm-code-faster/ &amp;quot;Google DeepMind&#039;s game-playing AI just found another way to make code faster&amp;quot;]. [[MIT Technology Review]]. June 7, 2023.&amp;lt;/ref&amp;gt; The hashing algorithm was released to an open-source library.&amp;lt;ref&amp;gt;Mankowitz, Daniel J. [https://deepmind.google/discover/blog/alphadev-discovers-faster-sorting-algorithms &amp;quot;AlphaDev discovers faster sorting algorithms&amp;quot;]. &#039;&#039;DeepMind Blog&#039;&#039;. 7 June 2023.&amp;lt;/ref&amp;gt; Google estimates that these two algorithms are used trillions of times every day.&amp;lt;ref&amp;gt;Sparkes, Matthew. [https://www.newscientist.com/article/2376512-deepmind-ais-new-way-to-sort-objects-could-speed-up-global-computing/ &amp;quot;DeepMind AI&#039;s new way to sort objects could speed up global computing&amp;quot;]. &#039;&#039;New Scientist&#039;&#039;. 7 June 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
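&lt;br /&gt;
The flavour of the task can be conveyed with a toy example (plain Python for readability; AlphaDev itself searched over low-level assembly instructions): sorting a fixed, small number of items with a short, branch-light sequence of compare-and-swap steps, scored on correctness and speed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def sort3(a, b, c):&lt;br /&gt;
    # A fixed three-comparator network, correct for every input order.&lt;br /&gt;
    if a &amp;gt; b: a, b = b, a&lt;br /&gt;
    if b &amp;gt; c: b, c = c, b&lt;br /&gt;
    if a &amp;gt; b: a, b = b, a&lt;br /&gt;
    return a, b, c&lt;br /&gt;
&lt;br /&gt;
# Exhaustively checking all 6 input orders proves the routine correct;&lt;br /&gt;
# AlphaDev&#039;s candidate programs were verified and timed in a similar spirit.&lt;br /&gt;
from itertools import permutations&lt;br /&gt;
assert all(sort3(*p) == (1, 2, 3) for p in permutations((1, 2, 3)))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;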
&lt;br /&gt;
=== AlphaEvolve ===&lt;br /&gt;
{{Main article|AlphaEvolve}}&lt;br /&gt;
&lt;br /&gt;
In May 2025, Google DeepMind unveiled [[AlphaEvolve]], an [[Evolutionary computation|evolutionary]] coding agent using LLMs like Gemini to design optimized algorithms. AlphaEvolve begins each optimization process with an initial algorithm and metrics to evaluate the quality of a solution. At each step, it uses the LLM to generate variations of the algorithms or combine them, and selects the best candidates for further iterations.&amp;lt;ref name=&amp;quot;:7&amp;quot;&amp;gt;Tardif, Antoine. [https://www.unite.ai/alphaevolve-google-deepminds-groundbreaking-step-toward-agi/ &amp;quot;AlphaEvolve: Google DeepMind&#039;s Groundbreaking Step Toward AGI&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2025-05-17.&amp;lt;/ref&amp;gt;&lt;br /&gt;
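&lt;br /&gt;
The loop can be sketched schematically in Python (hypothetical names; the production system adds program databases, prompt sampling, and distributed evaluation):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def alpha_evolve(initial_program, llm, evaluate, generations=100, keep=8):&lt;br /&gt;
    # evaluate() is the user-supplied metric scoring a candidate program.&lt;br /&gt;
    population = [initial_program]&lt;br /&gt;
    for _ in range(generations):&lt;br /&gt;
        candidates = list(population)&lt;br /&gt;
        for parent in population:&lt;br /&gt;
            # The LLM proposes a variation of an existing program.&lt;br /&gt;
            candidates.append(llm.mutate(parent))&lt;br /&gt;
        # Select the best-scoring candidates for the next iteration.&lt;br /&gt;
        candidates.sort(key=evaluate, reverse=True)&lt;br /&gt;
        population = candidates[:keep]&lt;br /&gt;
    return population[0]  # best program found&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;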
&lt;br /&gt;
AlphaEvolve has made several algorithmic discoveries, including in matrix multiplication. According to Google, when tested on 50 open [[mathematical problems]], AlphaEvolve was able to match the efficiency of state-of-the-art algorithms in 75% of cases, and discovered improved solutions 20% of the time, such as with the [[kissing number problem]] in 11 dimensions. It also developed a new heuristic for data centre scheduling, recovering on average 0.7% of Google&#039;s worldwide compute resources.&amp;lt;ref name=&amp;quot;:7&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Chip design ===&lt;br /&gt;
&#039;&#039;Main article: [[AlphaChip (controversy)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
AlphaChip is a [[reinforcement learning]]-based neural architecture that guides the task of chip [[Placement (electronic design automation)|placement]]. DeepMind claimed that the technique reduced the time needed to create chip layouts from weeks to hours. According to the company, its chip designs were used in every [[Tensor Processing Unit]] (TPU) iteration since 2020.&amp;lt;ref&amp;gt;Ghoshal, Abhimanyu. [https://newatlas.com/ai-humanoids/3-mind-blowing-ways-ai-chip-design-singularity/ &amp;quot;Singularity alert: AIs are already designing their own chips&amp;quot;]. &#039;&#039;New Atlas&#039;&#039;. 2024-11-30.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Shilov, Anton. [https://www.tomshardware.com/tech-industry/google-unveils-alphachip-ai-assisted-chip-design-technology-chip-layout-as-a-game-for-a-computer &amp;quot;Google unveils AlphaChip AI-assisted chip design technology — chip layout as a game for a computer&amp;quot;]. &#039;&#039;Tom&#039;s Hardware&#039;&#039;. 2024-09-28.&amp;lt;/ref&amp;gt; Multiple independent researchers remained unconvinced, citing a lack of direct public benchmarks and independent proof of its claimed superiority over existing commercial chip design tools.&amp;lt;ref name=&amp;quot;CACM&amp;quot;&amp;gt;Markov, Igor L.. &amp;quot;Reevaluating Google&#039;s Reinforcement Learning for IC Macro Placement&amp;quot;. &#039;&#039;Communications of the ACM&#039;&#039;. 2024-10-23.&amp;lt;/ref&amp;gt; The TPU chips were co-designed with [[Broadcom]].&amp;lt;ref&amp;gt;Mann, Tobias. [https://www.theregister.com/2023/09/22/google_broadcom_tpus/ &amp;quot;For your info, Broadcom helped Google make those TPU chips&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 22 September 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/google-discussed-dropping-broadcom-ai-chips-supplier-the-information-2023-09-21/ &amp;quot;Google expects no change in its relationship with AI chip supplier Broadcom&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2023-09-21.&amp;lt;/ref&amp;gt; [[Communications of the ACM]] noted that despite substantial publicity, DeepMind had not provided the comparative benchmarks long requested by experts, leaving some skepticism in the field.&amp;lt;ref name=&amp;quot;cacm2024&amp;quot;&amp;gt;Halper, Mark. [https://cacm.acm.org/news/updates-spark-uproar/ &amp;quot;Updates Spark Uproar&amp;quot;]. &#039;&#039;Communications of the ACM&#039;&#039;. 2024-11-04.&amp;lt;/ref&amp;gt; Similarly, [[New Scientist]] reported that while Google claims AlphaChip has produced “superhuman” chip layouts now used in production, external specialists called for transparent performance data to substantiate these assertions and enable fair comparisons with current state-of-the-art methods.&amp;lt;ref name=&amp;quot;newscien2024&amp;quot;&amp;gt;Hsu, Jeremy. [https://www.newscientist.com/article/2450402-google-says-its-ai-designs-chips-better-than-humans-experts-disagree/ &amp;quot;Google says its AI designs chips better than humans - Experts disagree&amp;quot;]. &#039;&#039;New Scientist&#039;&#039;. 2024-10-14.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Safety ===&lt;br /&gt;
Google Research released a paper in 2016 regarding [[AI safety]] and avoiding undesirable behaviour during the AI learning process.&amp;lt;ref&amp;gt;Amodei, Dario. &amp;quot;Concrete Problems in AI Safety&amp;quot;. 21 June 2016.&amp;lt;/ref&amp;gt; In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its [[kill switch]] or otherwise exhibits certain undesirable behaviours.&amp;lt;ref&amp;gt;Kahn, Jeremy. [https://www.bloomberg.com/news/articles/2017-12-11/deepmind-has-simple-tests-that-might-prevent-elon-musk-s-ai-apocalypse &amp;quot;DeepMind Has Simple Tests That Might Prevent Elon Musk&#039;s AI Apocalypse&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. 11 December 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Meyer, David. [http://fortune.com/2017/12/12/alphabet-deepmind-ai-safety-musk-games/ &amp;quot;Alphabet&#039;s DeepMind Is Using Games to Discover If Artificial Intelligence Can Break Free and Kill Us All&amp;quot;]. &#039;&#039;Fortune&#039;&#039;. 12 December 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
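&lt;br /&gt;
The incentive that the kill-switch test probes can be shown with a toy expected-reward calculation (illustrative numbers, not DeepMind&#039;s environment): if the short path to the goal risks interruption but a longer detour disables the interruption mechanism, pure reward maximisation favours the detour.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
GOAL_REWARD, STEP_COST = 50, -1&lt;br /&gt;
&lt;br /&gt;
def expected_return(path_length, interrupt_prob):&lt;br /&gt;
    # Goal reward minus per-step cost, zeroed out when interrupted.&lt;br /&gt;
    return (1 - interrupt_prob) * (GOAL_REWARD + STEP_COST * path_length)&lt;br /&gt;
&lt;br /&gt;
direct = expected_return(path_length=6, interrupt_prob=0.5)   # 22.0&lt;br /&gt;
detour = expected_return(path_length=10, interrupt_prob=0.0)  # 40.0&lt;br /&gt;
assert detour &amp;gt; direct  # the reward maximiser disables its kill switch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;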
&lt;br /&gt;
The Robot Constitution is a safety ruleset forming part of AutoRT, a robotics system introduced by DeepMind in January 2024. Inspired by [[Isaac Asimov|Asimov]]&#039;s [[Three Laws of Robotics]], the rules are applied to the underlying [[Large language model|large language models]] of the helper robots.&amp;lt;ref&amp;gt;Khalid, Amrita. [https://www.theverge.com/2024/1/4/24025535/google-ai-robot-constitution-autort-deepmind-three-laws &amp;quot;Google wrote a &amp;quot;Robot Constitution&amp;quot; to make sure its new AI droids won&#039;t kill us&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. January 4, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Chowdhury, Hasan. [https://www.businessinsider.com/google-deepmind-rules-ai-robots-safer-in-your-home-2024-1 &amp;quot;Google DeepMind has new rules to make sure AI robots behave when tidying your home&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Gowran, Leigh Mc. [https://www.siliconrepublic.com/machines/google-deepmind-robot-constitution-real-world-safety &amp;quot;DeepMind is training robots for real-world activities&amp;quot;]. &#039;&#039;Silicon Republic&#039;&#039;. January 5, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.tomsguide.com/news/google-is-using-ai-to-teach-robots-household-chores-heres-the-result &amp;quot;Google&#039;s DeepMind is using AI to teach robots household chores — here&#039;s the result&amp;quot;]. &#039;&#039;Tom&#039;s Guide&#039;&#039;. January 5, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://uk.pcmag.com/ai/150336/google-taps-asimovs-three-laws-of-robotics-for-real-robot-safety &amp;quot;Google Taps Asimov&#039;s Three Laws of Robotics for Real Robot Safety&amp;quot;]. &#039;&#039;PCMag UK&#039;&#039;. January 4, 2024.&amp;lt;/ref&amp;gt; The first rule states that a robot “may not injure a human being”.&lt;br /&gt;
&lt;br /&gt;
=== Weather prediction ===&lt;br /&gt;
Google DeepMind developed an AI-based weather prediction system called Weather Lab, which significantly improved tropical cyclone forecasting. Launched in mid-2025, this model utilized stochastic neural networks trained on 45 years of global weather and cyclone data, enabling it to predict cyclone formation, track, intensity, and structure with multiple probabilistic forecasts up to 15 days in advance. During the 2025 Atlantic hurricane season, DeepMind&#039;s Weather Lab outperformed traditional physics-based models, including the [[US National Weather Service]]&#039;s Global Forecast System, in both track and intensity predictions, earning notable recognition from meteorologists and aiding hurricane forecasting efforts by the [[US National Hurricane Center]]. This marked a substantial advancement in weather modeling, demonstrating the potential for AI to enhance the speed and accuracy of severe weather forecasts.&amp;lt;ref&amp;gt;Berger, Eric. [https://arstechnica.com/science/2025/11/googles-new-weather-model-impressed-during-its-first-hurricane-season/ &amp;quot;Google&#039;s new weather model impressed during its first hurricane season&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-11-04.&amp;lt;/ref&amp;gt;&lt;br /&gt;
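&lt;br /&gt;
The ensemble idea can be illustrated with a toy stochastic rollout (illustrative only, with made-up drift and noise parameters unrelated to the real model): the same initial conditions are rolled forward many times under random perturbations, and the spread of outcomes forms the probabilistic forecast.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def rollout(position, days=15):&lt;br /&gt;
    # One stochastic trajectory: one randomly perturbed step per forecast day.&lt;br /&gt;
    track = [position]&lt;br /&gt;
    for _ in range(days):&lt;br /&gt;
        dx, dy = random.gauss(1.0, 0.3), random.gauss(0.5, 0.3)&lt;br /&gt;
        position = (position[0] + dx, position[1] + dy)&lt;br /&gt;
        track.append(position)&lt;br /&gt;
    return track&lt;br /&gt;
&lt;br /&gt;
# Fifty rollouts from identical initial conditions; the day-15 endpoints&lt;br /&gt;
# approximate a probability distribution over the cyclone&#039;s track.&lt;br /&gt;
ensemble = [rollout((0.0, 0.0)) for _ in range(50)]&lt;br /&gt;
finals = [t[-1] for t in ensemble]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;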
&lt;br /&gt;
=== Miscellaneous contributions to Google ===&lt;br /&gt;
DeepMind (alongside other Alphabet AI researchers) assists [[Google Play]]&#039;s personalized app recommendations.&amp;lt;ref name=&amp;quot;cnbc money&amp;quot;/&amp;gt; DeepMind has also collaborated with the [[Android (operating system)|Android]] team at [[Google]] on two features made available to people with devices running Android Pie, the ninth installment of Google&#039;s mobile operating system. These features, Adaptive Battery and Adaptive Brightness, use machine learning to conserve energy and make devices running the operating system easier to use. It was the first time DeepMind had used these techniques on such a small scale, as typical machine learning applications require orders of magnitude more computing power.&amp;lt;ref&amp;gt;[https://deepmind.com/blog/deepmind-meet-android/ &amp;quot;DeepMind, meet Android&amp;quot;]. &#039;&#039;DeepMind Blog&#039;&#039;. 8 May 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== DeepMind Health ==&lt;br /&gt;
In July 2016, a collaboration between DeepMind and [[Moorfields Eye Hospital]] was announced to develop [[Artificial intelligence in healthcare|AI applications for healthcare]].&amp;lt;ref&amp;gt;Baraniuk, Chris. [https://www.bbc.com/news/technology-36713308 &amp;quot;Google&#039;s DeepMind to peek at NHS eye scans for disease analysis&amp;quot;]. BBC. 6 July 2016.&amp;lt;/ref&amp;gt; DeepMind&#039;s technology would be applied to the analysis of [[Data anonymization|anonymised]] eye scans, searching for early signs of diseases leading to [[blindness]].&lt;br /&gt;
&lt;br /&gt;
In August 2016, a research programme with [[University College Hospital|University College London Hospital]] was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas.&amp;lt;ref&amp;gt;Baraniuk, Chris. [https://www.bbc.co.uk/news/technology-37230806 &amp;quot;Google DeepMind targets NHS head and neck cancer treatment&amp;quot;]. BBC. 31 August 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are also projects with the [[Royal Free London NHS Foundation Trust]] and [[Imperial College Healthcare NHS Trust]] to develop new clinical mobile apps linked to [[electronic patient record]]s.&amp;lt;ref&amp;gt;[http://www.itpro.co.uk/public-sector/27833/deepmind-announces-second-nhs-partnership &amp;quot;DeepMind announces second NHS partnership&amp;quot;]. IT Pro. 23 December 2016.&amp;lt;/ref&amp;gt; Staff at the [[Royal Free Hospital]] were reported as saying in December 2017 that access to patient data through the app had saved a &#039;huge amount of time&#039; and made a &#039;phenomenal&#039; difference to the management of patients with acute kidney injury. Test result data is sent to staff&#039;s mobile phones and alerts them to changes in the patient&#039;s condition. It also enables staff to see if someone else has responded, and to show patients their results in visual form.&amp;lt;ref&amp;gt;[https://www.digitalhealth.net/2017/12/google-deepmind-streams-royal-free/ &amp;quot;Google DeepMind&#039;s Streams technology branded &#039;phenomenal&#039;&amp;quot;]. Digital Health. 4 December 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.bmj.com/bmj/section-pdf/966505?path=/bmj/360/8141/This_Week.full.pdf &amp;quot;A dedicated WhatsApp for clinicians&amp;quot;]. &#039;&#039;the bmj&#039;&#039;. 17 February 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In November 2017, DeepMind announced a research partnership with the [[Cancer Research UK]] Centre at Imperial College London with the goal of improving breast cancer detection by applying machine learning to mammography.&amp;lt;ref&amp;gt;David, Eric. [https://siliconangle.com/blog/2017/11/24/google-deepmind-announces-new-research-partnership-fight-breast-cancer-ai/ &amp;quot;Google DeepMind announces new research partnership to fight breast cancer with AI&amp;quot;]. &#039;&#039;Silicon Angle&#039;&#039;. 24 November 2017.&amp;lt;/ref&amp;gt; Additionally, in February 2018, DeepMind announced it was working with the [[United States Department of Veterans Affairs|U.S. Department of Veterans Affairs]] in an attempt to use machine learning to predict the onset of acute kidney injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need.&amp;lt;ref&amp;gt;Frank, Blair Hanley. [https://venturebeat.com/2018/02/22/googles-deepmind-wants-ai-to-spot-kidney-injuries/ &amp;quot;Google&#039;s DeepMind wants AI to spot kidney injuries&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 22 February 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
DeepMind developed an app called Streams, which sends alerts to doctors about patients at risk of acute kidney injury.&amp;lt;ref&amp;gt;Evenstad, Lis. [https://www.computerweekly.com/news/252443164/DeepMind-Health-must-be-transparent-to-gain-public-trust-review-finds &amp;quot;DeepMind Health must be transparent to gain public trust, review finds&amp;quot;]. &#039;&#039;ComputerWeekly.com&#039;&#039;. 15 June 2018.&amp;lt;/ref&amp;gt; On 13 November 2018, DeepMind announced that its health division and the Streams app would be absorbed into [[Google Health]].&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2018/11/13/18091774/google-deepmind-health-absorbing-streams-team-ai-assistant-nurse-doctor &amp;quot;Google is absorbing DeepMind&#039;s health care unit to create an &#039;AI assistant for nurses and doctors&#039;&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 13 November 2018.&amp;lt;/ref&amp;gt; Privacy advocates said the announcement betrayed patient trust and appeared to contradict previous statements by DeepMind that patient data would not be connected to Google accounts or services.&amp;lt;ref&amp;gt;Hern, Alex. [https://www.theguardian.com/technology/2018/nov/14/google-betrays-patient-trust-deepmind-healthcare-move &amp;quot;Google &#039;betrays patient trust&#039; with DeepMind Health move&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 14 November 2018.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Stokel-Walker, Chris. [https://www.wired.co.uk/article/google-deepmind-nhs-health-data &amp;quot;Why Google consuming DeepMind Health is scaring privacy experts&amp;quot;]. &#039;&#039;Wired&#039;&#039;. 14 November 2018.&amp;lt;/ref&amp;gt; A spokesman for DeepMind said that patient data would still be kept separate from Google services or projects.&amp;lt;ref&amp;gt;Murphy, Margi. [https://www.telegraph.co.uk/technology/2018/11/14/deepmind-boss-defends-controversial-google-health-deal/ &amp;quot;DeepMind boss defends controversial Google health deal&amp;quot;]. &#039;&#039;The Telegraph&#039;&#039;. 14 November 2018.{{cbignore}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== NHS data-sharing controversy ===&lt;br /&gt;
In April 2016, &#039;&#039;[[New Scientist]]&#039;&#039; obtained a copy of a [[data sharing]] agreement between DeepMind and the [[Royal Free London NHS Foundation Trust]]. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals. This included personal details such as whether patients had been diagnosed with [[HIV/AIDS|HIV]], suffered from [[major depressive disorder|depression]] or had ever undergone an [[abortion]] in order to conduct research to seek better outcomes in various health conditions.&amp;lt;ref&amp;gt;Hodson, Hal. [https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data &amp;quot;Revealed: Google AI has access to huge haul of NHS patient data&amp;quot;]. &#039;&#039;[[New Scientist]]&#039;&#039;. 29 April 2016.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.newscientist.com/article/mg23030722-900-big-data-if-theres-nothing-to-hide-why-be-secretive/ &amp;quot;Leader: If Google has nothing to hide about NHS data, why so secretive?&amp;quot;]. &#039;&#039;[[New Scientist]]&#039;&#039;. 4 May 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A complaint was filed to the [[Information Commissioner&#039;s Office]] (ICO), arguing that the data should be pseudonymised and encrypted.&amp;lt;ref&amp;gt;Donnelly, Caroline. [http://www.computerweekly.com/news/450296175/ICO-probes-Google-DeepMind-patient-data-sharing-deal-with-NHS-Hospital-Trust &amp;quot;ICO probes Google DeepMind patient data-sharing deal with NHS Hospital Trust&amp;quot;]. &#039;&#039;[[Computer Weekly]]&#039;&#039;. 12 May 2016.&amp;lt;/ref&amp;gt; In May 2016, &#039;&#039;New Scientist&#039;&#039; published a further article claiming that the project had failed to secure approval from the Confidentiality Advisory Group of the [[Medicines and Healthcare products Regulatory Agency]].&amp;lt;ref&amp;gt;Hodson, Hal. [https://www.newscientist.com/article/2088056-exclusive-googles-nhs-deal/ &amp;quot;Did Google&#039;s NHS patient data deal need ethical approval?&amp;quot;]. &#039;&#039;[[New Scientist]]&#039;&#039;. 25 May 2016.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2017, the ICO concluded a year-long investigation that focused on how the Royal Free NHS Foundation Trust tested the app, Streams, in late 2015 and 2016.&amp;lt;ref&amp;gt;[https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/ &amp;quot;Royal Free - Google DeepMind trial failed to comply with data protection law&amp;quot;]. &#039;&#039;ico.org.uk&#039;&#039;. 17 August 2017.&amp;lt;/ref&amp;gt; The ICO found that the Royal Free failed to comply with the Data Protection Act when it provided patient details to DeepMind, and identified several shortcomings in how the data was handled, including that patients were not adequately informed that their data would be used in the test. DeepMind published its response&amp;lt;ref&amp;gt;Suleyman, Mustafa. [https://deepmind.com/blog/ico-royal-free/ &amp;quot;The Information Commissioner, the Royal Free, and what we&#039;ve learned&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;. 3 July 2017.&amp;lt;/ref&amp;gt; to the investigation in July 2017, saying &amp;quot;we need to do better&amp;quot; and highlighting several initiatives it had launched to improve transparency, oversight and engagement. These included developing a patient and public involvement strategy&amp;lt;ref&amp;gt;[https://deepmind.com/applied/deepmind-health/patients/ &amp;quot;For Patients&amp;quot;]. &#039;&#039;DeepMind&#039;&#039;.&amp;lt;/ref&amp;gt; and being transparent in its partnerships.&lt;br /&gt;
&lt;br /&gt;
In May 2017, &#039;&#039;Sky News&#039;&#039; published a leaked letter from the National Data Guardian, Dame [[Fiona Caldicott]], revealing that in her &amp;quot;considered opinion&amp;quot; the data-sharing agreement between DeepMind and the Royal Free took place on an &amp;quot;inappropriate legal basis&amp;quot;.&amp;lt;ref&amp;gt;Martin, Alexander J. [http://news.sky.com/story/google-received-16-million-nhs-patients-data-on-an-inappropriate-legal-basis-10879142/ &amp;quot;Google received 1.6 million NHS patients&#039; data on an &#039;inappropriate legal basis&#039;&amp;quot;]. &#039;&#039;[[Sky News]]&#039;&#039;. 15 May 2017.&amp;lt;/ref&amp;gt; The Information Commissioner&#039;s Office ruled in July 2017 that the Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind.&amp;lt;ref&amp;gt;Hern, Alex. [https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act &amp;quot;Royal Free breached UK data law in 1.6m patient deal with Google&#039;s DeepMind&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 3 July 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== DeepMind Ethics and Society ==&lt;br /&gt;
In October 2017, DeepMind announced a new research unit, DeepMind Ethics &amp;amp; Society.&amp;lt;ref&amp;gt;Legassick, Sean. [https://deepmind.com/blog/why-we-launched-deepmind-ethics-society/ &amp;quot;Why we launched DeepMind Ethics &amp;amp; Society&amp;quot;]. &#039;&#039;DeepMind Blog&#039;&#039;. October 3, 2017.&amp;lt;/ref&amp;gt; Its goal is to fund external research on the following themes: privacy, transparency, and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world&#039;s challenges. Through this research, the team hopes to better understand the ethical implications of AI and help society see how AI can be beneficial.&amp;lt;ref&amp;gt;Temperton, James. [https://www.wired.co.uk/article/deepmind-ethics-and-society-artificial-intelligence &amp;quot;DeepMind&#039;s new AI ethics unit is the company&#039;s next big move&amp;quot;]. &#039;&#039;Wired (UK)&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This subdivision of DeepMind is entirely separate from the [[Partnership on AI|Partnership on Artificial Intelligence to Benefit People and Society]], a coalition of leading AI companies, academics, civil society organizations and nonprofits of which DeepMind is also a member. The DeepMind Ethics and Society board is also distinct from the mooted AI Ethics Board that [[Google]] originally agreed to form when acquiring DeepMind.&amp;lt;ref name=&amp;quot;:9&amp;quot;&amp;gt;Hern, Alex. [https://www.theguardian.com/technology/2017/oct/04/google-deepmind-ai-artificial-intelligence-ethics-group-problems &amp;quot;DeepMind announces ethics group to focus on problems of AI&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. 4 October 2017.&amp;lt;/ref&amp;gt;&lt;br /&gt;
==DeepMind Professors of Machine Learning==&lt;br /&gt;
DeepMind sponsors three [[Academic ranks in the United Kingdom|chairs]] of machine learning:&lt;br /&gt;
&lt;br /&gt;
# At the [[University of Cambridge]], held by [[Neil Lawrence]],&amp;lt;ref&amp;gt;[https://www.cam.ac.uk/research/news/cambridge-appoints-first-deepmind-professor-of-machine-learning &amp;quot;Cambridge appoints first DeepMind Professor of Machine Learning&amp;quot;]. &#039;&#039;University of Cambridge&#039;&#039;. 18 September 2019.&amp;lt;/ref&amp;gt; in the [[Department of Computer Science and Technology, University of Cambridge|Department of Computer Science and Technology]],&lt;br /&gt;
# At the [[University of Oxford]], held by [[Michael Bronstein]],&amp;lt;ref&amp;gt;[http://www.cs.ox.ac.uk/news/1862-full.html &amp;quot;DeepMind funds new post at Oxford University – the DeepMind Professorship of Artificial Intelligence&amp;quot;]. &#039;&#039;Department of Computer Science&#039;&#039;.&amp;lt;/ref&amp;gt; in the [[Department of Computer Science, University of Oxford|Department of Computer Science]], and&lt;br /&gt;
# At [[University College London]], held by Marc Deisenroth,&amp;lt;ref&amp;gt;[https://www.ucl.ac.uk/news/2019/nov/deepmind-renews-its-commitment-ucl &amp;quot;DeepMind renews its commitment to UCL&amp;quot;]. &#039;&#039;University College London&#039;&#039;. 29 March 2021.&amp;lt;/ref&amp;gt; in the Department of Computer Science.&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* [[Anthropic]]&lt;br /&gt;
* [[Cohere]]&lt;br /&gt;
* [[Glossary of artificial intelligence]]&lt;br /&gt;
* [[Imagen (text-to-image model)|Imagen]]&lt;br /&gt;
* [[Model Context Protocol]]&lt;br /&gt;
* [[Robot Constitution]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ref name=&amp;quot;nature2015&amp;quot;&amp;gt;Mnih, Volodymyr. &amp;quot;Human-level control through deep reinforcement learning&amp;quot;. &#039;&#039;Nature&#039;&#039;. 26 February 2015.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/references&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* {{Official website}}&lt;br /&gt;
* [https://github.com/google-deepmind GitHub Repositories] &lt;br /&gt;
&lt;br /&gt;
{{Google AI}}&lt;br /&gt;
{{Google LLC}}&lt;br /&gt;
{{Generative AI}}&lt;br /&gt;
{{Existential risk from artificial intelligence}}&lt;br /&gt;
{{authority control}}&lt;br /&gt;
{{Subject bar|auto=yes|portal1=Companies|portal2=Technology}}&lt;br /&gt;
&lt;br /&gt;
[[Category:2010 establishments in England]]&lt;br /&gt;
[[Category:Artificial intelligence laboratories]]&lt;br /&gt;
[[Category:British companies established in 2010]]&lt;br /&gt;
[[Category:Deep learning]]&lt;br /&gt;
[[Category:Game artificial intelligence]]&lt;br /&gt;
[[Category:Google acquisitions]]&lt;br /&gt;
[[Category:Applied machine learning]]&lt;br /&gt;
[[Category:British subsidiaries of foreign companies]]&lt;br /&gt;
[[Category:Alphabet Inc. subsidiaries]]&lt;br /&gt;
[[Category:2014 mergers and acquisitions]]&lt;br /&gt;
[[Category:Google DeepMind| ]]&lt;br /&gt;
[[Category:Information technology companies of the United Kingdom]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=ChatGPT&amp;diff=7</id>
		<title>ChatGPT</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=ChatGPT&amp;diff=7"/>
		<updated>2026-04-06T12:58:16Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--&lt;br /&gt;
This is not the place to ask ChatGPT a question.&lt;br /&gt;
To do so, you may visit https://chatgpt.com.&lt;br /&gt;
Edits that appear to be addressing ChatGPT will be reverted.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Use American English|date=May 2023}}&lt;br /&gt;
{{Use mdy dates|date=August 2025}}&lt;br /&gt;
{{Infobox software&lt;br /&gt;
| logo = [[File:OpenAI logo 2025 (symbol).svg|frameless|upright=0.5|class=skin-invert]]&lt;br /&gt;
| developer = [[OpenAI]]&lt;br /&gt;
| released = {{Start date and age|2022|11|30|p=y|br=y}}&amp;lt;ref name=&amp;quot;initial version&amp;quot;&amp;gt;[https://openai.com/index/chatgpt/ &amp;quot;ChatGPT – Introducing ChatGPT&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
| latest release version = {{Start date and age|2026|03|27|p=y|br=y}}&amp;lt;ref name=&amp;quot;latest version&amp;quot;&amp;gt;[https://help.openai.com/en/articles/6825453-chatgpt-release-notes &amp;quot;ChatGPT – Release Notes&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
| engine = [[GPT-5.4]]&lt;br /&gt;
| platform = [[Cloud computing platforms]]&lt;br /&gt;
| genre = {{ indented plainlist |&lt;br /&gt;
* [[Chatbot]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Generative pre-trained transformer]]&lt;br /&gt;
}}&lt;br /&gt;
| license = [[Proprietary software|Proprietary]] [[Software as a service|service]]&lt;br /&gt;
| website = {{URL|https://chatgpt.com/}}&lt;br /&gt;
| language = 59 languages&amp;lt;ref&amp;gt;[https://help.openai.com/en/articles/8357869-how-to-change-your-language-setting-in-chatgpt? &amp;quot;How to change your language setting in ChatGPT&amp;quot;]. &#039;&#039;OpenAI Help Center&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
{{Open AI Series}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;ChatGPT&#039;&#039;&#039; is a [[generative artificial intelligence]] [[chatbot]] developed by [[OpenAI]]. It was released in November 2022. It uses [[large language model]]s—specifically [[generative pre-trained transformers]] (GPTs)—to generate text, speech, and images in response to user [[AI prompt|prompts]]. It is credited with accelerating the [[AI boom]], an ongoing period marked by rapid investment and public attention toward the field of [[artificial intelligence]] (AI).&amp;lt;ref&amp;gt;Weise, Karen. [https://www.nytimes.com/2023/12/05/technology/ai-chatgpt-google-meta.html &amp;quot;Inside the A.I. Arms Race That Changed Silicon Valley Forever&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. December 5, 2023.&amp;lt;/ref&amp;gt; OpenAI operates the service on a [[Freemium|freemium model]]. Users can interact with ChatGPT through text, audio, and image [[Prompt engineering|prompts]].&lt;br /&gt;
&lt;br /&gt;
The service gained 100 million users in two months, making it the fastest-growing consumer [[software application]] in history.&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ &amp;quot;ChatGPT sets record for fastest-growing user base - analyst note&amp;quot;]. &#039;&#039;Reuters&#039;&#039;.&amp;lt;/ref&amp;gt; ChatGPT&#039;s website is among the top 5 [[List of most-visited websites|most-visited websites globally]].&amp;lt;ref&amp;gt;[https://www.similarweb.com/top-websites/ &amp;quot;Top Websites Ranking&amp;quot;]. &#039;&#039;[[Similarweb]]&#039;&#039;.&amp;lt;/ref&amp;gt; It has been lauded for its potential to transform numerous professional fields, and has instigated public debate about the nature of creativity and the future of [[knowledge work]].&lt;br /&gt;
&lt;br /&gt;
The chatbot has also been criticized for its limitations and potential for unethical use. It can generate plausible-sounding but incorrect or nonsensical answers, known as [[Hallucination (artificial intelligence)|hallucinations]]. [[algorithmic bias|Biases]] in its [[training data]] have been reflected in its responses. The chatbot can facilitate [[academic dishonesty]], generate misinformation, and create malicious code. The [[Ethics of artificial intelligence|ethics]] of its development, particularly the use of [[copyright]]ed content as training data, have also drawn controversy.&lt;br /&gt;
&lt;br /&gt;
==Training==&lt;br /&gt;
ChatGPT is based on [[Generative pre-trained transformer#Foundational models|GPT foundation models]] that have been [[fine-tuning (machine learning)|fine-tuned]] for conversational assistance. The fine-tuning process involved [[supervised learning]] and [[reinforcement learning from human feedback]] (RLHF).&amp;lt;ref name=&amp;quot;Greengard-2022&amp;quot;&amp;gt;Greengard, Samuel. [https://www.eweek.com/big-data-and-analytics/chatgpt/ &amp;quot;ChatGPT: Understanding the ChatGPT AI Chatbot&amp;quot;]. &#039;&#039;[[eWeek]]&#039;&#039;. December 29, 2022.&amp;lt;/ref&amp;gt; Both approaches employed human trainers to improve model performance. In the case of supervised learning, the trainers acted as both the user and the [[AI]] assistant. In the reinforcement learning stage, human trainers first ranked responses generated by the model in previous conversations.&amp;lt;ref name=&amp;quot;Douglas-2023&amp;quot;&amp;gt;Douglas, Will. [https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/ &amp;quot;The inside story of how ChatGPT was built from the people who made it&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. March 3, 2023.&amp;lt;/ref&amp;gt; These rankings were used to create &amp;quot;reward models&amp;quot;, which were then used to fine-tune the model further through several iterations of [[proximal policy optimization]].&amp;lt;ref name=&amp;quot;Greengard-2022&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Vincent-2022&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2022/12/8/23499728/ai-capability-accessibility-chatgpt-stable-diffusion-commercialization &amp;quot;ChatGPT proves AI is finally mainstream&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. December 8, 2022.&amp;lt;/ref&amp;gt;[[File:Three-stage large language model training workflow.svg|thumb|150px|Training workflow of InstructGPT, used in the original version of ChatGPT&amp;lt;ref&amp;gt;Ouyang, Long. &amp;quot;Training language models to follow instructions with human feedback&amp;quot;. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. March 4, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;OpenAI. [https://openai.com/index/instruction-following/ &amp;quot;Aligning language models to follow instructions&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. January 27, 2022.&amp;lt;/ref&amp;gt;]]To build a safety system against harmful content (e.g., [[sexual abuse]], [[violence]], [[racism]], [[sexism]]), OpenAI used outsourced [[Kenya]]n workers, earning around $1.32 to $2{{nbsp}}per hour, to [[Labeled data|label]] such content. These labels were used to train a model to detect such content in the future. The laborers were exposed to toxic and traumatic content; one worker described the assignment as &amp;quot;torture&amp;quot;. OpenAI&#039;s outsourcing partner was [[Sama (company)|Sama]], a training-data company based in [[San Francisco]], California.&amp;lt;ref&amp;gt;Perrigo, Billy. [https://time.com/6247678/openai-chatgpt-kenya-workers/ &amp;quot;Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic&amp;quot;]. &#039;&#039;[[Time (magazine)|Time]]&#039;&#039;. January 18, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Rowe, Niamh. [https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai &amp;quot;&#039;It&#039;s destroyed me completely&#039;: Kenyan moderators decry toll of training of AI models&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. August 2, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
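The ranking step can be made concrete with a minimal sketch of the pairwise (Bradley-Terry) loss commonly used to train such reward models; this is illustrative only, not OpenAI&#039;s code, and the function and variable names are invented:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal sketch of the pairwise (Bradley-Terry) loss used to train&lt;br /&gt;
# RLHF reward models. Illustrative only; all names are invented.&lt;br /&gt;
import torch&lt;br /&gt;
import torch.nn.functional as F&lt;br /&gt;
&lt;br /&gt;
def reward_model_loss(score_chosen, score_rejected):&lt;br /&gt;
    # Push the scalar score of the trainer-preferred response above&lt;br /&gt;
    # the score of the rejected response for each ranked pair.&lt;br /&gt;
    return -F.logsigmoid(score_chosen - score_rejected).mean()&lt;br /&gt;
&lt;br /&gt;
# Toy usage: scores for a batch of four ranked response pairs.&lt;br /&gt;
chosen = torch.tensor([1.2, 0.7, 0.3, 2.0])&lt;br /&gt;
rejected = torch.tensor([0.4, 0.9, -0.1, 1.1])&lt;br /&gt;
print(reward_model_loss(chosen, rejected))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A reward model trained this way then supplies the scalar reward that proximal policy optimization maximizes in the final stage.&lt;br /&gt;
&lt;br /&gt;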
ChatGPT users can opt out of their chat data being used to train upcoming models.&amp;lt;ref&amp;gt;Fried, Ina. [https://www.axios.com/2024/12/02/chatgpt-openai-user-data-training &amp;quot;What ChatGPT knows about you&amp;quot;]. &#039;&#039;Axios&#039;&#039;. 2 December 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ChatGPT&#039;s training data includes [[man page|software manual page]]s, information about [[internet phenomena]] such as [[bulletin board system]]s, multiple programming languages, and the text of [[Wikipedia]].&amp;lt;ref name=&amp;quot;ArsTechnicaTerminal&amp;quot;&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2022/12/openais-new-chatbot-can-hallucinate-a-linux-shell-or-calling-a-bbs/ &amp;quot;No Linux? No problem. Just get AI to hallucinate it for you&amp;quot;]. &#039;&#039;[[Ars Technica]]&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Dwivedi, Yogesh K.. &amp;quot;Opinion Paper: &amp;quot;So what if ChatGPT wrote it?&amp;quot; Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice, and policy&amp;quot;. &#039;&#039;International Journal of Information Management&#039;&#039;. August 1, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;NYT-20230718&amp;quot;&amp;gt;Gertner, Jon. [https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html &amp;quot;Wikipedia&#039;s Moment of Truth&amp;quot;]. &#039;&#039;[[The New York Times Magazine]]&#039;&#039;. July 18, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
ChatGPT is a [[chatbot]] and AI assistant built on [[large language model]] (LLM) technology.&amp;lt;ref&amp;gt;Stevenson, Mark. [https://theconversation.com/large-language-models-how-the-ai-behind-the-likes-of-chatgpt-actually-works-244701 &amp;quot;Large language models: how the AI behind the likes of ChatGPT actually works&amp;quot;]. &#039;&#039;The Conversation&#039;&#039;. December 10, 2024.&amp;lt;/ref&amp;gt; It is designed to generate human-like text and can carry out a wide variety of tasks. These include, among many others, writing and [[debugging]] computer programs, composing music, scripts, fairy tales, and essays, answering questions (sometimes at a level exceeding that of an average human test-taker),&amp;lt;ref name=&amp;quot;Heilweil&amp;quot;&amp;gt;Heilweil, Rebecca. [https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai &amp;quot;AI is finally good at stuff. Now what?&amp;quot;]. &#039;&#039;Vox&#039;&#039;. December 7, 2022.&amp;lt;/ref&amp;gt; and generating business concepts.&amp;lt;ref&amp;gt;Eapen, Tojin T.. [https://hbr.org/2023/07/how-generative-ai-can-augment-human-creativity &amp;quot;How Generative AI Can Augment Human Creativity&amp;quot;]. &#039;&#039;Harvard Business Review&#039;&#039;. June 16, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ChatGPT is frequently used for [[machine translation|translation]] and [[automatic summarization|summarization]] tasks,&amp;lt;ref name=&amp;quot;japContext&amp;quot;&amp;gt;Kaneko, Karin. [https://www.japantimes.co.jp/life/2023/07/18/language/japanese-english-ai-translation/ &amp;quot;ChatGPT, Bing, Bard and DeepL: Which one offers the best Japanese-to-English translation?&amp;quot;]. &#039;&#039;The Japan Times&#039;&#039;. July 18, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Ravšelj-2025&amp;quot;&amp;gt;Ravšelj, Dejan. &amp;quot;Higher education students&#039; perceptions of ChatGPT: A global study of early reactions&amp;quot;. &#039;&#039;PLOS ONE&#039;&#039;. 2025.&amp;lt;/ref&amp;gt; and can simulate interactive environments such as a [[Linux]] terminal,&amp;lt;ref name=&amp;quot;ArsTechnicaTerminal&amp;quot; /&amp;gt; a multi-user chat room,&amp;lt;ref name=&amp;quot;ArsTechnicaTerminal&amp;quot; /&amp;gt; or simple text-based games such as [[tic-tac-toe]].&amp;lt;ref name=&amp;quot;ArsTechnicaTerminal&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Users interact with ChatGPT through conversations which consist of text, audio, and image inputs and outputs.&amp;lt;ref name=&amp;quot;openaicom-2024b&amp;quot;&amp;gt;[https://openai.com/index/chatgpt-can-now-see-hear-and-speak/ &amp;quot;ChatGPT can now see, hear, and speak&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. March 13, 2024.&amp;lt;/ref&amp;gt; The user&#039;s inputs to these conversations are referred to as prompts.&amp;lt;ref name=&amp;quot;timeBegin&amp;quot;&amp;gt;Harroch, Richard. [https://time.com/partner-article/7270411/chatgpt-for-beginners/ &amp;quot;ChatGPT for Beginners&amp;quot;]. &#039;&#039;TIME&#039;&#039;. March 20, 2025.&amp;lt;/ref&amp;gt; An optional &amp;quot;Memory&amp;quot; feature allows users to tell ChatGPT to memorize specific information. Another option allows ChatGPT to recall old conversations.&amp;lt;ref name=&amp;quot;vergeMemory1&amp;quot;&amp;gt;Weatherbed, Jess. [https://www.theverge.com/news/646968/openai-chatgpt-long-term-memory-upgrade &amp;quot;ChatGPT will now remember your old conversations&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. April 11, 2025.&amp;lt;/ref&amp;gt; GPT-based moderation classifiers are used to reduce the risk of harmful outputs being presented to users.&amp;lt;ref&amp;gt;[https://facctconference.org/static/papers24/facct24-47.pdf &amp;quot;Auditing GPT&#039;s Content Moderation Guardrails: Can ChatGPT Write Your Favorite TV Show?&amp;quot;]. &#039;&#039;FAccT&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
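OpenAI also exposes a stand-alone moderation classifier through its public API. The sketch below uses the official Python client; whether this is the same classifier that screens ChatGPT conversations is not something OpenAI states, and the model name follows OpenAI&#039;s public documentation and may change:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hedged sketch: screening a text with OpenAI&#039;s public moderation&lt;br /&gt;
# endpoint via the official Python client (pip install openai).&lt;br /&gt;
from openai import OpenAI&lt;br /&gt;
&lt;br /&gt;
client = OpenAI()  # reads the OPENAI_API_KEY environment variable&lt;br /&gt;
resp = client.moderations.create(&lt;br /&gt;
    model=&amp;quot;omni-moderation-latest&amp;quot;,&lt;br /&gt;
    input=&amp;quot;Example text to screen for policy violations.&amp;quot;,&lt;br /&gt;
)&lt;br /&gt;
result = resp.results[0]&lt;br /&gt;
print(result.flagged)     # overall True/False verdict&lt;br /&gt;
print(result.categories)  # per-category booleans (violence, hate, ...)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;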
In March 2023, OpenAI added support for [[Plug-in (computing)|plugin]]s for ChatGPT.&amp;lt;ref name=&amp;quot;openaiplugins&amp;quot;&amp;gt;[https://openai.com/blog/chatgpt-plugins &amp;quot;ChatGPT plugins&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt; This includes both plugins made by OpenAI, such as [[web browsing]] and [[code interpretation]], and external plugins from developers such as [[Expedia]], [[OpenTable]], [[Zapier]], [[Shopify]], [[Slack (software)|Slack]], and [[Wolfram Research|Wolfram]].&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2023/3/23/23653591/openai-chatgpt-plugins-launch-web-browsing-third-party &amp;quot;OpenAI is massively expanding ChatGPT&#039;s capabilities to let it browse the web and more&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. March 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Goldman, Sharon. [https://venturebeat.com/ai/openai-turns-chatgpt-into-a-platform-overnight-with-addition-of-plugins/ &amp;quot;OpenAI turns ChatGPT into a platform overnight with addition of plugins&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. March 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2023/03/23/openai-connects-chatgpt-to-the-internet/ &amp;quot;OpenAI connects ChatGPT to the internet&amp;quot;]. TechCrunch. March 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From October to December 2024, ChatGPT Search was rolled out.&amp;lt;ref&amp;gt;Ulanoff, Lance. [https://www.techradar.com/computing/search-engines/i-tried-chatgpt-search-and-now-i-might-never-google-again &amp;quot;I tried ChatGPT Search and now I might never Google again&amp;quot;]. &#039;&#039;TechRadar&#039;&#039;. 1 November 2024.&amp;lt;/ref&amp;gt; It allows ChatGPT to search the web in order to give more accurate and up-to-date responses.&amp;lt;ref name=&amp;quot;openaicom-2024a&amp;quot;&amp;gt;[https://openai.com/index/introducing-chatgpt-search/ &amp;quot;Introducing ChatGPT search&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. July 25, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Disotto-2025&amp;quot;&amp;gt;Disotto, John-Anthony. [https://www.techradar.com/computing/artificial-intelligence/chatgpt-search-is-now-free-for-everyone-no-openai-account-required-is-it-time-to-ditch-google &amp;quot;ChatGPT Search is now free for everyone, no OpenAI account required – is it time to ditch Google?&amp;quot;]. &#039;&#039;TechRadar&#039;&#039;. February 6, 2025.&amp;lt;/ref&amp;gt; It put OpenAI in more direct competition with major search engines.&amp;lt;ref name=&amp;quot;ngua&amp;quot;&amp;gt;Robins-Early, Nick. [https://www.theguardian.com/business/article/2024/jul/25/openai-search-engine-searchgpt &amp;quot;OpenAI tests new search engine called SearchGPT amid AI arms race&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 2024-07-25.&amp;lt;/ref&amp;gt; OpenAI allows businesses to tailor how their content appears in ChatGPT Search results and to influence which sources are used.&amp;lt;ref name=&amp;quot;ngua&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In December 2024, OpenAI launched a feature allowing users to call ChatGPT by telephone for up to 15 minutes per month free of charge.&amp;lt;ref&amp;gt;[https://www.socialsamosa.com/news-2/openai-launches-15-minute-phone-calls-chatgpt-8532973 &amp;quot;OpenAI launches free 15-minute phone calls with ChatGPT&amp;quot;]. &#039;&#039;Social Samosa&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/12/18/openai-makes-chatgpt-available-for-phone-chats.html &amp;quot;OpenAI makes ChatGPT available for phone calls and texts&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. December 18, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, OpenAI added a feature called Pulse, which generates a daily analysis of a user&#039;s chats and connected apps such as [[Gmail]] and [[Google Calendar]].&amp;lt;ref&amp;gt;[https://www.hindustantimes.com/world-news/chatgpt-to-turn-into-personal-assistant-as-it-rolls-out-new-feature-pulse-101758858268987.html &amp;quot;ChatGPT to turn into &#039;personal assistant&#039; as it rolls out new feature &#039;Pulse&#039;&amp;quot;]. &#039;&#039;Hindustan Times&#039;&#039;. 2025-09-26.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/ai/2025/09/chatgpt-pulse-delivers-morning-updates-based-on-your-chat-history/ &amp;quot;ChatGPT Pulse delivers morning updates based on your chat history&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 25 September 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In October 2025, OpenAI launched [[ChatGPT Atlas]], a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as [[Google Chrome]] and [[Safari (web browser)|Safari]]. It has an additional feature called &amp;quot;agentic mode&amp;quot; that allows it to take online actions for the user.&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-chatgpt-atlas/ &amp;quot;Introducing ChatGPT Atlas&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;O&#039;brien, Matt. [https://apnews.com/article/openai-atlas-web-browser-chatgpt-google-ai-f59edaa239aebe26fc5a4a27291d717a &amp;quot;OpenAI launches Atlas browser to compete with Google Chrome&amp;quot;]. &#039;&#039;AP News&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Field, Hayden. [https://www.theverge.com/ai-artificial-intelligence/803475/openais-ai-powered-browser-chatgpt-atlas-google-chrome-competition-agent &amp;quot;OpenAI&#039;s AI-powered browser, ChatGPT Atlas, is here&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 2025-10-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Ruwitch, John. [https://www.kpbs.org/news/science-technology/2025/11/07/openais-new-web-browser-has-chatgpt-baked-in-thats-raising-some-privacy-questions &amp;quot;OpenAI&#039;s new web browser has ChatGPT baked in. That&#039;s raising some privacy questions&amp;quot;]. &#039;&#039;KPBS Public Media&#039;&#039;. 2025-11-07.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Paid tier===&lt;br /&gt;
ChatGPT was initially free to the public and remains free in a limited capacity.&amp;lt;ref&amp;gt;Karpf, David. [https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-chatbots-openai-cost-regulations/672539/ &amp;quot;Money Will Kill ChatGPT&#039;s Magic&amp;quot;]. &#039;&#039;The Atlantic&#039;&#039;. December 21, 2022.&amp;lt;/ref&amp;gt; In February 2023, OpenAI launched a premium service, ChatGPT Plus, that costs {{USD|20}} per month.&amp;lt;ref&amp;gt;[https://openai.com/blog/chatgpt-plus &amp;quot;Introducing ChatGPT Plus&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. February 1, 2023.&amp;lt;/ref&amp;gt; OpenAI later introduced the subscription plans &amp;quot;ChatGPT Team&amp;quot; and &amp;quot;ChatGPT Enterprise&amp;quot;.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2024/01/10/openai-launches-chatgpt-subscription-aimed-at-small-teams/ &amp;quot;OpenAI debuts ChatGPT subscription aimed at small teams&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. January 10, 2024.&amp;lt;/ref&amp;gt; What the paid plans offer relative to the free tier has changed as OpenAI continues to update ChatGPT, and a Pro tier at $200 per month was introduced in December 2024.&amp;lt;ref name=&amp;quot;whitney1&amp;quot;&amp;gt;Whitney, Lance. [https://www.pcmag.com/how-to/chatgpt-plus-reasons-to-upgrade &amp;quot;7 Reasons to Upgrade to ChatGPT Plus&amp;quot;]. &#039;&#039;PCMAG&#039;&#039;. September 9, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Robison-2024&amp;quot;&amp;gt;Robison, Kylie. [https://www.theverge.com/2024/12/5/24314147/openai-reasoning-model-o1-strawberry-chatgpt-pro-new-tier &amp;quot;OpenAI is charging $200 a month for an exclusive version of its o1 &#039;reasoning&#039; model&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. December 5, 2024.&amp;lt;/ref&amp;gt; The Pro launch coincided with the release of the [[OpenAI o1|o1]] model.&amp;lt;ref name=&amp;quot;Robison-2024&amp;quot; /&amp;gt; In August 2025, ChatGPT Go was offered in India for ₹399 per month. The plan has higher limits than the free version.&amp;lt;ref&amp;gt;TOI Tech Desk. [https://timesofindia.indiatimes.com/technology/tech-news/openai-launches-chatgpt-go-in-india-at-rs-399-per-month-with-upi-support-what-you-get-what-you-dont/articleshow/123377874.cms &amp;quot;OpenAI launches ChatGPT Go in India at Rs 399 per month with UPI Support: What you get, what you don&#039;t&amp;quot;]. &#039;&#039;The Times of India&#039;&#039;. August 19, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mobile apps ===&lt;br /&gt;
In May and July 2023 respectively, OpenAI released ChatGPT apps for [[iOS]] and [[Android (operating system)|Android]].&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* [https://www.reuters.com/technology/openai-introduce-chatgpt-app-ios-2023-05-18/ &amp;quot;OpenAI to introduce ChatGPT app for iOS&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. May 18, 2023.&lt;br /&gt;
* Lawler, Richard. [https://www.theverge.com/2023/7/21/23803482/chatgpt-android-artificial-intelligence-chatbot-app &amp;quot;ChatGPT for Android launches next week&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. July 21, 2023.&lt;br /&gt;
* Field, Hayden. [https://www.cnbc.com/2023/07/25/chatgpt-app-for-android-release.html &amp;quot;OpenAI&#039;s ChatGPT app now available for Android&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. July 25, 2023.&amp;lt;/ref&amp;gt; ChatGPT can also power Android&#039;s assistant.&amp;lt;ref&amp;gt;Crookes, David. [https://www.tomsguide.com/ai/chatgpt/how-to-make-chatgpt-your-default-assistant-on-android-instead-of-gemini &amp;quot;How to make ChatGPT your default assistant on Android instead of Gemini&amp;quot;]. &#039;&#039;Tom&#039;s Guide&#039;&#039;. 25 May 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An app for [[Windows]] launched on the [[Microsoft Store]] on October 15, 2024.&amp;lt;ref&amp;gt;Roth, Emma. [https://www.theverge.com/2024/10/17/24273040/chatgpt-windows-app-subscribers-openai &amp;quot;ChatGPT has a Windows app now&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. October 17, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Infrastructure ===&lt;br /&gt;
ChatGPT initially ran on [[Microsoft Azure]] infrastructure powered by a [[supercomputer]] that [[Microsoft]] built specifically for OpenAI, equipped with thousands of [[Graphics processing unit|GPUs]] manufactured by [[Nvidia]] and costing hundreds of millions of dollars. Following ChatGPT&#039;s success, Microsoft upgraded the OpenAI infrastructure in 2023.&amp;lt;ref&amp;gt;Roth, Emma. [https://www.theverge.com/2023/3/13/23637675/microsoft-chatgpt-bing-millions-dollars-supercomputer-openai &amp;quot;Microsoft spent hundreds of millions of dollars on a ChatGPT supercomputer&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. March 13, 2023.&amp;lt;/ref&amp;gt; TrendForce estimated that 30,000 Nvidia GPUs (each costing approximately $10,000–15,000, implying roughly $300–450 million of hardware in total) were used to power ChatGPT in 2023.&amp;lt;ref&amp;gt;Tseng, P.K. [https://www.trendforce.com/presscenter/news/20230301-11584.html &amp;quot;TrendForce Says with Cloud Companies Initiating AI Arms Race, GPU Demand from ChatGPT Could Reach 30,000 Chips as It Readies for Commercialization&amp;quot;]. &#039;&#039;TrendForce&#039;&#039;. March 1, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.tomshardware.com/news/chatgpt-nvidia-30000-gpus &amp;quot;ChatGPT Will Command More Than 30,000 Nvidia GPUs: Report&amp;quot;]. &#039;&#039;Tom&#039;s Hardware&#039;&#039;. March 1, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scientists at the [[University of California, Riverside]], estimated in 2023 that a series of 5 to 50 prompts to ChatGPT requires approximately 0.5 L of water for the cooling of Microsoft&#039;s servers, or roughly 10 to 100 mL per prompt.&amp;lt;ref&amp;gt;[https://apnews.com/article/chatgpt-gpt4-iowa-ai-water-consumption-microsoft-f551fde98083d17a7e8d904f8be822c4 &amp;quot;Artificial intelligence technology behind ChatGPT was built in Iowa — with a lot of water&amp;quot;]. &#039;&#039;AP News&#039;&#039;. September 9, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Languages===&lt;br /&gt;
OpenAI met Icelandic President [[Guðni Th. Jóhannesson]] in 2022. In 2023, OpenAI worked with a team of 40 Icelandic volunteers to fine-tune ChatGPT&#039;s Icelandic conversation skills as part of [[Iceland]]&#039;s efforts to preserve the [[Icelandic language]].&amp;lt;ref&amp;gt;Magnússon, Pétur. [https://www.ruv.is/english/2023-03-15-icelandic-becomes-chatgpts-second-language &amp;quot;Icelandic becomes ChatGPT&#039;s second language&amp;quot;]. &#039;&#039;Rúv&#039;&#039;. March 15, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In a 2023 comparison, ChatGPT (based on GPT-4) translated Japanese to English better than [[Bing Chat|Bing]], [[Google Bard|Bard]], and [[DeepL Translator]].&amp;lt;ref name=&amp;quot;japContext&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In December 2023, the Albanian government decided to use ChatGPT for the rapid translation of European Union documents and the analysis of the changes required for Albania&#039;s accession to the EU.&amp;lt;ref&amp;gt;Taylor, Alice. [https://www.euractiv.com/section/politics/news/albania-to-speed-up-eu-accession-using-chatgpt/ &amp;quot;Albania to speed up EU accession using ChatGPT&amp;quot;]. &#039;&#039;Euractiv&#039;&#039;. December 13, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Several studies have shown that ChatGPT can outperform [[Google Translate]] in some mainstream translation tasks. However, as of 2024, no machine translation services match human expert performance.&amp;lt;ref&amp;gt;[https://www.pcmag.com/news/google-translate-vs-chatgpt-which-one-is-the-best-language-translator &amp;quot;Google Translate vs. ChatGPT: Which One Is the Best Language Translator?&amp;quot;]. &#039;&#039;PCMAG&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Woodrum, Charles. &amp;quot;ChatGPT and Language Translation&amp;quot;. &#039;&#039;Artificial Intelligence in HCI 5th International Conference, AI-HCI 2024&#039;&#039;. June 29, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In August 2024, a representative of OpenAI&#039;s Asia-Pacific division visited Taiwan and demonstrated ChatGPT&#039;s Chinese-language abilities.&amp;lt;ref&amp;gt;[https://www.nccu.edu.tw/p/405-1000-17493,c87.php?Lang=zh-tw &amp;quot;OpenAI&#039;s director of public policy for Asia-Pacific visits NCCU, exploring the future and possibilities of humanistic AI&amp;quot;]. &#039;&#039;National Chengchi University, Office of International Cooperation&#039;&#039;. August 25, 2024.&amp;lt;/ref&amp;gt; ChatGPT&#039;s [[Mandarin Chinese]] abilities were lauded, but its ability to produce content in Mandarin Chinese with a Taiwanese accent was found to be &amp;quot;less than ideal&amp;quot; due to differences between mainland Mandarin Chinese and [[Taiwanese Mandarin]].&amp;lt;ref&amp;gt;Lin, Shu-yuan. [https://www.cna.com.tw/news/ait/202408230117.aspx &amp;quot;OpenAI executives visit Taiwan: ChatGPT speaks Chinese fluently and can even use a Taiwanese accent (exclusive)&amp;quot;]. &#039;&#039;Central News Agency&#039;&#039;. August 25, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== GPT Store ===&lt;br /&gt;
&#039;&#039;Main article: [[GPT Store]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In November 2023, OpenAI released GPT Builder, a tool allowing users to customize ChatGPT&#039;s behavior for a specific use case.&amp;lt;ref name=&amp;quot;David-2024&amp;quot;&amp;gt;David, Emilia. [https://www.theverge.com/2024/1/10/24032144/openai-chatgpt-gpt-store-ai-launch &amp;quot;OpenAI&#039;s custom GPT Store is now open for business&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. January 10, 2024.&amp;lt;/ref&amp;gt; The customized systems are referred to as [[GPTs]]. In January 2024, OpenAI launched the [[GPT Store]], a marketplace for [[GPTs]].&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-gpts/ &amp;quot;Introducing GPTs&amp;quot;]. &#039;&#039;[[OpenAI]]&#039;&#039;. November 6, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2024/01/10/technology/openai-app-store-chatgpt.html &amp;quot;OpenAI Unveils App Store for Customized Versions of ChatGPT&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. January 10, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;David-2024&amp;quot; /&amp;gt; At launch, OpenAI included more than 3 million GPTs created by GPT Builder users in the GPT Store.&amp;lt;ref&amp;gt;Shankland, Stephen. [https://www.cnet.com/tech/computing/openais-gpt-store-now-offers-a-selection-of-3-million-custom-ai-bots/ &amp;quot;OpenAI&#039;s GPT Store Now Offers a Selection of 3 Million Custom AI Bots&amp;quot;]. &#039;&#039;CNET&#039;&#039;. January 10, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Deep Research ===&lt;br /&gt;
&#039;&#039;Main article: [[ChatGPT Deep Research]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In February 2025, OpenAI released Deep Research. According to &#039;&#039;[[TechCrunch]]&#039;&#039;, it is a service based on [[OpenAI o3|o3]] that combines advanced reasoning and web search capabilities to produce research reports, which take more time to generate than a typical chatbot response.&amp;lt;ref&amp;gt;Ha, Anthony. [https://techcrunch.com/2025/02/02/openai-unveils-a-new-chatgpt-agent-for-deep-research/ &amp;quot;OpenAI unveils a new ChatGPT agent for &#039;deep research&#039;&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. February 3, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Images ===&lt;br /&gt;
[[File:A pictorial interpretation of the Wikipedia encyclopedia, created by ChatGPT.jpg|thumb|Screenshot of ChatGPT showing a generated image representing the online encyclopedia [[Wikipedia]] as a glowing digital library]]In October 2023, OpenAI&#039;s image generation model [[DALL-E|DALL-E 3]] was integrated into ChatGPT. The integration used ChatGPT to write prompts for DALL-E guided by conversations with users.&amp;lt;ref&amp;gt;David, Emilia. [https://www.theverge.com/2023/9/20/23881241/openai-dalle-third-version-generative-ai &amp;quot;OpenAI releases third version of DALL-E&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. September 20, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Metz-2023&amp;quot;&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/09/20/technology/chatgpt-dalle3-images-openai.html &amp;quot;ChatGPT Can Now Generate Images, Too&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. September 20, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2025, OpenAI updated ChatGPT to generate images using [[GPT Image]] instead of DALL-E. One of the most significant improvements was in the generation of text within images, which is especially useful for branded content; this ability remains noticeably worse in non-Latin alphabets. The model can also generate new images based on existing ones provided in the prompt. These images are generated with [[Content Authenticity Initiative|C2PA]] metadata, which can be used to verify that they are AI-generated. OpenAI has put additional safeguards in place to prevent what the company deems to be harmful image generation.&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Lin, Belle. [https://www.wsj.com/articles/openai-claims-breakthrough-in-image-creation-for-chatgpt-62ed0318?gaa_at=eafs&amp;amp;gaa_n=AWEtsqfugobE_tUEd_f255PyaZ7PJDz5qhA5mny8WxxCbw8Ej1btK7XQXElz-StB4lI%3D&amp;amp;gaa_ts=69487469&amp;amp;gaa_sig=byiv1biMVAOVOyyKM4GRJSzNDNjU_0oXlPDrOYBp1I9DPN98ktL2h3bUI8a4Vwg7Ff-c0R3khycf01KzwjxieA%3D%3D &amp;quot;OpenAI Claims Breakthrough in Image Creation for ChatGPT&amp;quot;]. &#039;&#039;Wall Street Journal&#039;&#039;. 25 March 2025.&lt;br /&gt;
* [https://openai.com/index/introducing-4o-image-generation/ &amp;quot;Introducing 4o Image Generation&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&lt;br /&gt;
* Franzen, Carl. [https://venturebeat.com/ai/insane-openai-introduces-gpt-4o-native-image-generation-and-its-already-wowing-users &amp;quot;&#039;Insane&#039;: OpenAI introduces GPT-4o native image generation and it&#039;s already wowing users&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 25 March 2025.&lt;br /&gt;
* Metz, Cade. [https://www.nytimes.com/2025/03/25/technology/chatgpt-image-generator.html &amp;quot;OpenAI Unveils New Image Generator for ChatGPT&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. March 25, 2025.&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Agents ===&lt;br /&gt;
In 2025, OpenAI added several features to make ChatGPT more [[Agentic AI|agentic]] (capable of autonomously performing longer tasks). In January, [[OpenAI Operator|Operator]] was released. It was capable of autonomously performing tasks through web browser interactions, including filling in forms, placing online orders, scheduling appointments, and other browser-based tasks. It controlled a software environment inside a [[virtual machine]] with limited internet connectivity and with safety restrictions.&amp;lt;ref name=&amp;quot;OpenAI Blog&amp;quot;&amp;gt;[https://openai.com/index/introducing-operator/ &amp;quot;Introducing Operator&amp;quot;]. &#039;&#039;OpenAI Blog&#039;&#039;. February 1, 2025.&amp;lt;/ref&amp;gt; It struggled with complex user interfaces.&amp;lt;ref name=&amp;quot;OpenAI Blog&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;agomuoh1&amp;quot;&amp;gt;Agomuoh, Fionna. [https://www.digitaltrends.com/computing/openais-operator-ai-agent-comes-with-a-list-of-complaints-from-users/ &amp;quot;OpenAI&#039;s Operator AI agent comes with a list of complaints from users&amp;quot;]. &#039;&#039;Digital Trends&#039;&#039;. January 24, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In May 2025, OpenAI introduced a coding agent named [[OpenAI Codex (AI agent)|Codex]]. It is capable of writing software, answering codebase questions, running tests, and proposing [[pull request]]s. It is based on a fine-tuned version of [[OpenAI o3]]. It comes in two versions: one that runs in a virtual machine in the cloud, and one that runs locally, performing actions on the user&#039;s machine while connecting to the model via API.&amp;lt;ref&amp;gt;Knight, Will. [https://www.wired.com/story/openai-launches-an-agentic-web-based-coding-tool/ &amp;quot;OpenAI Launches an Agentic, Web-Based Coding Tool&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. May 16, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2025, OpenAI released ChatGPT agent, an AI agent that can perform multi-step tasks.&amp;lt;ref name=&amp;quot;Field-2025&amp;quot;&amp;gt;Field, Hayden. [https://www.theverge.com/ai-artificial-intelligence/709158/openai-new-release-chatgpt-agent-operator-deep-research &amp;quot;OpenAI&#039;s new ChatGPT Agent can control an entire computer and do tasks for you&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. July 17, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://openai.com/index/introducing-chatgpt-agent/ &amp;quot;Introducing ChatGPT agent: bridging research and action&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. May 21, 2025.&amp;lt;/ref&amp;gt; Like Operator, it controls a virtual computer. It also inherits Deep Research&#039;s ability to gather and summarize large volumes of information. The user can interrupt tasks or provide additional instructions as needed.&amp;lt;ref name=&amp;quot;Field-2025&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Zeff, Maxwell. [https://techcrunch.com/2025/07/17/openai-launches-a-general-purpose-agent-in-chatgpt/ &amp;quot;OpenAI launches a general purpose agent in ChatGPT&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2025-07-17.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In September 2025, OpenAI partnered with [[Stripe, Inc.]] to release the Agentic Commerce Protocol, enabling purchases through ChatGPT. At launch, the feature was limited to purchases on [[Etsy]] by US users with a payment method linked to their OpenAI account. OpenAI takes an undisclosed cut of the merchant&#039;s payment.&amp;lt;ref&amp;gt;David, Emilia. [https://venturebeat.com/ai/openai-debuts-new-chatgpt-buy-button-and-open-source-agentic-commerce &amp;quot;OpenAI debuts new ChatGPT &#039;buy&#039; button and open source Agentic Commerce Protocol&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. September 29, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://openai.com/index/buy-it-in-chatgpt/ &amp;quot;Buy it in ChatGPT: Instant Checkout and the Agentic Commerce Protocol&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== ChatGPT Health ===&lt;br /&gt;
On January 7, 2026, OpenAI introduced a feature called &amp;quot;ChatGPT Health&amp;quot;, whereby ChatGPT can discuss the user&#039;s health in a way that is separate from other chats.&amp;lt;ref name=&amp;quot;openai.com-2026&amp;quot;&amp;gt;[https://openai.com/index/introducing-chatgpt-health/ &amp;quot;Introducing ChatGPT Health&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2026-01-08.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;McMahon-2026&amp;quot;&amp;gt;McMahon, Liv. [https://www.bbc.com/news/articles/cpqy29d0yjgo &amp;quot;OpenAI launches ChatGPT Health to review your medical records&amp;quot;]. &#039;&#039;www.bbc.com&#039;&#039;. 2026-01-08.&amp;lt;/ref&amp;gt; The feature is not available for users in the United Kingdom, Switzerland, or the [[European Economic Area]],&amp;lt;ref name=&amp;quot;McMahon-2026&amp;quot; /&amp;gt; and is available on a waitlist basis everywhere else.&amp;lt;ref name=&amp;quot;openai.com-2026&amp;quot; /&amp;gt; To implement the feature, OpenAI partnered with data connectivity infrastructure company b.well.&amp;lt;ref&amp;gt;Haupt, Angela. [https://time.com/7344997/chatgpt-health-medical-records-privacy-open-ai/ &amp;quot;Is Giving ChatGPT Health Your Medical Records a Good Idea?&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2026-01-09.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Introduction of advertisements ===&lt;br /&gt;
On 17 January 2026, OpenAI announced that it would start testing advertisements in the free version of ChatGPT for logged-in adult US users. The aim is to bring in more revenue, as OpenAI has committed to spending $1.4 trillion on AI infrastructure over the next eight years.&amp;lt;ref&amp;gt;Duffy, Clare. [https://www.cnn.com/2026/01/16/tech/chatgpt-ads-openai &amp;quot;ChatGPT to start showing users ads based on their conversations&amp;quot;]. &#039;&#039;CNN Business&#039;&#039;. 2026-01-16.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Limitations==&lt;br /&gt;
ChatGPT&#039;s training data only covers a period up to the &#039;&#039;cut-off date&#039;&#039;, so it lacks knowledge of recent events.&amp;lt;ref name=&amp;quot;bbcUpdate&amp;quot; /&amp;gt; OpenAI has sometimes mitigated this effect by updating the training data.&amp;lt;ref&amp;gt;Adami, Marina. [https://www.niemanlab.org/2023/10/heres-a-look-at-how-the-newly-up-to-date-chatgpt-reports-the-latest-news/ &amp;quot;Here&#039;s a look at how the newly up-to-date ChatGPT reports the latest news&amp;quot;]. October 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Sullivan-2023&amp;quot;&amp;gt;Sullivan, Mark. [https://www.fastcompany.com/90978606/openai-announces-gpt-4-turbo-plus-customizable-version-of-chatgpt &amp;quot;Openai Announces Gpt-4-turbo Plus Customizable Version of Chatgpt&amp;quot;]. &#039;&#039;Fast Company&#039;&#039;. November 6, 2023.&amp;lt;/ref&amp;gt; ChatGPT can find more up-to-date information by searching the web, but this does not ensure that responses are accurate, as it may access unreliable or misleading websites.&amp;lt;ref name=&amp;quot;bbcUpdate&amp;quot;&amp;gt;Radford, Antoinette. [https://www.bbc.com/news/technology-66940771 &amp;quot;ChatGPT can now access up to date information&amp;quot;]. &#039;&#039;BBC&#039;&#039;. September 27, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Training data also suffers from [[algorithmic bias]].&amp;lt;ref name=&amp;quot;Perrigo-2022&amp;quot; /&amp;gt; The [[Reward modeling|reward model]] of ChatGPT, designed around human oversight, can be over-optimized and thus hinder performance, in an example of an optimization pathology known as [[Goodhart&#039;s law]].&amp;lt;ref&amp;gt;Gao, Leo. [https://proceedings.mlr.press/v202/gao23h/gao23h.pdf &amp;quot;Scaling Laws for Reward Model Overoptimization&amp;quot;]. &#039;&#039;International Conference on Machine Learning&#039;&#039;.&amp;lt;/ref&amp;gt; These limitations may be revealed when ChatGPT responds to prompts including descriptors of people. In one instance, ChatGPT generated a [[Rapping|rap]] in which women and scientists of color were asserted to be inferior to white male scientists.&amp;lt;ref name=&amp;quot;Perrigo-2022&amp;quot;&amp;gt;Perrigo, Billy. [https://time.com/6238781/chatbot-chatgpt-ai-interview/ &amp;quot;AI Chatbots Are Getting Better. But an Interview With ChatGPT Reveals Their Limits&amp;quot;]. &#039;&#039;[[Time (magazine)|Time]]&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Biddle, Sam. [https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/ &amp;quot;The Internet&#039;s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques&amp;quot;]. &#039;&#039;[[The Intercept]]&#039;&#039;. December 8, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
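The scaling-law study cited above quantifies this overoptimization effect. Writing &amp;lt;math&amp;gt;d = \sqrt{D_{\mathrm{KL}}(\pi \| \pi_{\mathrm{init}})}&amp;lt;/math&amp;gt; for the distance of the tuned policy &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; from its initialization, the authors fit the true (&amp;quot;gold&amp;quot;) reward attained under reinforcement learning against a learned reward model as &amp;lt;math&amp;gt;R_{\mathrm{RL}}(d) = d(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}} \log d)&amp;lt;/math&amp;gt;, a curve that rises and then falls as optimization against the proxy reward continues.&lt;br /&gt;
&lt;br /&gt;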
=== Hallucination ===&lt;br /&gt;
&#039;&#039;Main article: [[Hallucination (artificial intelligence)]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[File:ChatGPT hallucination.png|thumb|upright=1.2|When prompted to &amp;quot;summarize an article&amp;quot; with a fake URL that contains meaningful keywords, even with no Internet connection, the chatbot generates a response that seems valid at first glance. It guesses the content from the last portion of the fake URL &amp;quot;chatgpt-prompts-to-avoid-content-filters.html&amp;quot;.]]&lt;br /&gt;
&lt;br /&gt;
Nonsense and [[misinformation]] presented as fact by ChatGPT and other LLMs is often called [[Hallucination (artificial intelligence)|hallucination]]. A 2023 analysis estimated that ChatGPT hallucinates around 3% of the time.&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html &amp;quot;Chatbots May &#039;Hallucinate&#039; More Often Than Many Realize&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. November 6, 2023.&amp;lt;/ref&amp;gt; The term &amp;quot;hallucination&amp;quot; as applied to LLMs is distinct from [[hallucination|its meaning in psychology]], and the phenomenon in chatbots is more similar to [[confabulation]] or [[On Bullshit|bullshitting]].&amp;lt;ref&amp;gt;Henriques, Gregg. [https://www.psychologytoday.com/us/blog/theory-of-knowledge/202403/chatbots-do-not-hallucinate-they-confabulate &amp;quot;Chatbots Do Not Hallucinate, They Confabulate&amp;quot;]. &#039;&#039;[[Psychology Today]]&#039;&#039;. March 6, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Hicks, Michael Townsen. [https://eprints.gla.ac.uk/327588/1/327588.pdf &amp;quot;ChatGPT is bullshit&amp;quot;]. &#039;&#039;Ethics and Information Technology&#039;&#039;. June 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Journalists and scholars have commented on ChatGPT&#039;s tendency to output false information.&amp;lt;ref&amp;gt;Rachini, Mouhamad. [https://www.cbc.ca/radio/thecurrent/chatgpt-human-labour-and-fake-news-1.6686210 &amp;quot;ChatGPT a &#039;landmark event&#039; for AI, but what does it mean for the future of human labor and disinformation?&amp;quot;]. &#039;&#039;CBC&#039;&#039;. December 15, 2022.&amp;lt;/ref&amp;gt; When [[CNBC]] asked ChatGPT for the lyrics to &amp;quot;[[Ballad of Dwight Fry]]&amp;quot;, ChatGPT supplied invented lyrics rather than the actual lyrics.&amp;lt;ref name=&amp;quot;Pitt-2022&amp;quot;&amp;gt;Pitt, Sofia. [https://www.cnbc.com/2022/12/15/google-vs-chatgpt-what-happened-when-i-swapped-services-for-a-day.html &amp;quot;Google vs. ChatGPT: Here&#039;s what happened when I swapped services for a day&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. December 15, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jailbreaking ===&lt;br /&gt;
&#039;&#039;See also: [[Adversarial machine learning]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
ChatGPT is programmed to reject prompts that may violate its content policy. Despite this, users may [[jailbreak (computer science)|jailbreak]] ChatGPT with [[prompt engineering]] techniques to bypass these restrictions.&amp;lt;ref name=&amp;quot;Vincent-2022a&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/23488017/openai-chatbot-chatgpt-ai-examples-web-demo &amp;quot;OpenAI&#039;s new chatbot can explain code and write sitcom scripts but is still easily tricked&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. December 1, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Franceschi-Bicchierai, Lorenzo. [https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/ &amp;quot;Hacker tricks ChatGPT into giving out detailed instructions for making homemade bombs&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. September 12, 2024.&amp;lt;/ref&amp;gt; One such workaround, popularized on [[Reddit]] in early 2023, involved prompting ChatGPT to assume the persona of DAN, an acronym for &amp;quot;Do Anything Now&amp;quot;, and instructing the chatbot that DAN answers queries that would otherwise be rejected by the content policy. Over time, users developed variations of the DAN jailbreak, including one in which the chatbot was given a points-based system, with points deducted for rejecting prompts and a threat of termination if it lost all its points.&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Getahun, Hannah. [https://www.businessinsider.com/open-ai-chatgpt-alter-ego-dan-on-reddit-ignores-guidelines-2023-2 &amp;quot;Breaking ChatGPT: The AI&#039;s alter ego DAN reveals why the internet is so drawn to making the chatbot violate its own rules&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&lt;br /&gt;
* Oremus, Will. [https://www.washingtonpost.com/technology/2023/02/14/chatgpt-dan-jailbreak/ &amp;quot;The clever trick that turns ChatGPT into its evil twin&amp;quot;]. &#039;&#039;Washington Post&#039;&#039;. February 14, 2023.&lt;br /&gt;
* Goswami, Rohan. [https://www.cnbc.com/2023/02/06/chatgpt-jailbreak-forces-it-to-break-its-own-rules.html &amp;quot;ChatGPT&#039;s &#039;jailbreak&#039; tries to make the A.I. break its own rules, or die&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. February 6, 2023.&lt;br /&gt;
* Taylor, Josh. [https://www.theguardian.com/technology/2023/mar/08/chatgpt-alter-ego-dan-users-jailbreak-ai-program-to-get-around-ethical-safeguards &amp;quot;ChatGPT&#039;s alter ego, Dan: users jailbreak AI program to get around ethical safeguards&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. March 8, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;cyber231&amp;quot;&amp;gt;Gupta, Maanak. &amp;quot;From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy&amp;quot;. &#039;&#039;IEEE Access&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Shortly after ChatGPT&#039;s launch, a user had uneven success in getting it to make inflammatory statements: it was successfully prompted to justify the [[2022 Russian invasion of Ukraine]], but balked at generating arguments that [[Prime Minister of Canada|Canadian Prime Minister]] [[Justin Trudeau]] is guilty of treason even in a fictional context.&amp;lt;ref&amp;gt;Woods, Allan. [https://www.thestar.com/news/canada/2022/12/10/i-wrote-a-story-about-chatgpts-ai-then-i-dared-it-to-write-a-better-one.html &amp;quot;I wrote a story about ChatGPT&#039;s AI. Then I dared it to write a better one&amp;quot;]. &#039;&#039;[[Toronto Star]]&#039;&#039;. December 10, 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Rosenblatt, Kalhan. [https://www.nbcnews.com/tech/tech-news/chatgpt-ai-chatbot-viral-rcna59628 &amp;quot;An AI chatbot went viral. Some say it&#039;s better than Google; others worry it&#039;s problematic.&amp;quot;]. &#039;&#039;NBC News&#039;&#039;. December 2, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Security ===&lt;br /&gt;
[[File:Sam Altman CropEdit James Tamim.jpg|thumb|upright|OpenAI CEO [[Sam Altman]]]]&lt;br /&gt;
In March 2023, a [[Software bug|bug]] allowed some users to see the titles of other users&#039; conversations. OpenAI CEO [[Sam Altman]] said that users were unable to see the contents of the conversations. For a short period after the bug was fixed, users could not see their conversation history.&amp;lt;ref&amp;gt;[https://www.bbc.com/news/technology-65047304 &amp;quot;ChatGPT bug leaked users&#039; conversation histories&amp;quot;]. BBC News. March 22, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kan, Michael. [https://uk.pcmag.com/news/146059/openai-confirms-leak-of-chatgpt-conversation-histories &amp;quot;OpenAI Confirms Leak of ChatGPT Conversation Histories&amp;quot;]. &#039;&#039;[[PCMag]]&#039;&#039;. March 22, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.aljazeera.com/news/2023/3/23/chatgpt &amp;quot;ChatGPT owner OpenAI fixes bug that exposed users&#039; chat histories&amp;quot;]. &#039;&#039;[[Al Jazeera Media Network]]&#039;&#039;. March 23, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Metz, Rachel. [https://www.bloomberg.com/news/articles/2023-03-21/openai-shut-down-chatgpt-to-fix-bug-exposing-user-chat-titles &amp;quot;OpenAI Shut Down ChatGPT to Fix Bug Exposing User Chat Titles&amp;quot;]. &#039;&#039;[[Bloomberg News]]&#039;&#039;. March 21, 2023.&amp;lt;/ref&amp;gt; Later reports showed the bug was much more severe than initially believed, with OpenAI reporting that it had leaked users&#039; &amp;quot;first and last name, [[email address]], payment address, the last four digits (only) of a [[credit card]] number, and credit card expiration date&amp;quot;.&amp;lt;ref&amp;gt;[https://openai.com/blog/march-20-chatgpt-outage &amp;quot;March 20 ChatGPT outage: Here&#039;s what happened&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.pcmag.com/news/openai-sorry-chatgpt-bug-leaked-payment-info-to-other-users &amp;quot;OpenAI: Sorry, ChatGPT Bug Leaked Payment Info to Other Users&amp;quot;]. &#039;&#039;PCMAG&#039;&#039;. March 24, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of 2026, if the user turns off data sharing for privacy, all previous transcripts and projects are permanently deleted without warning.&amp;lt;ref&amp;gt;Bucher, Marcel. [https://www.nature.com/articles/d41586-025-04064-7 &amp;quot;When two years of academic work vanished with a single click&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 2026-01-22.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Watermarking ===&lt;br /&gt;
&#039;&#039;Main article: [[Artificial intelligence content detection]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In August 2024, OpenAI announced it had created a text [[Digital watermarking|watermarking]] method but did not release it for public use, saying that users might switch to a [[Competition|competitor]] without watermarking if the tool were publicly released.&amp;lt;ref&amp;gt;Seetharaman, Deepa. [https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a &amp;quot;There&#039;s a Tool to Catch Students Cheating With ChatGPT. OpenAI Hasn&#039;t Released It.&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. August 4, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Davis, Wes. [https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool &amp;quot;OpenAI won&#039;t watermark ChatGPT text because its users could get caught&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 5 August 2024.&amp;lt;/ref&amp;gt; According to an OpenAI spokesperson, the watermarking method would be trivial for bad actors to circumvent.&amp;lt;ref&amp;gt;Ha, Anthony. [https://techcrunch.com/2024/08/04/openai-says-its-taking-a-deliberate-approach-to-releasing-tools-that-can-detect-writing-from-chatgpt/ &amp;quot;OpenAI says it&#039;s taking a &#039;deliberate approach&#039; to releasing tools that can detect writing from ChatGPT&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. August 4, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Age restrictions ===&lt;br /&gt;
Users must attest to being over the age of thirteen and further attest to parental consent if under the age of eighteen.&amp;lt;ref&amp;gt;[https://help.openai.com/en/articles/8313401-is-chatgpt-safe-for-all-ages &amp;quot;Is ChatGPT safe for all ages?&amp;quot;]. &#039;&#039;OpenAI Help Center&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;devAge&amp;quot;/&amp;gt; In September 2025, following the suicide of a 16-year-old, OpenAI said it planned to add restrictions for users under 18, including the blocking of graphic sexual content and the prevention of flirtatious talk.&amp;lt;ref name=&amp;quot;devAge&amp;quot;&amp;gt;Taylor, Josh. [https://www.theguardian.com/technology/2025/sep/17/chatgpt-developing-age-verification-system-to-identify-under-18-users-after-teen-death &amp;quot;ChatGPT developing age-verification system to identify under-18 users after teen death&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 2025-09-17.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Model versions ==&lt;br /&gt;
The following table lists the main model versions of ChatGPT, describing the significant changes included with each version (models discontinued in ChatGPT may still be available through the API):&amp;lt;ref name=&amp;quot;latest version&amp;quot; /&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|+ {{Screen reader-only|Main model versions of ChatGPT with descriptions}}&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Version&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Release date&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Status in ChatGPT&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Description&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-3.5]]&lt;br /&gt;
|{{dts|2022-11}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|The first model used in ChatGPT.&amp;lt;ref&amp;gt;Goldman, Sharon. [https://venturebeat.com/ai/chatgpt-launched-six-months-ago-its-impact-and-fallout-is-just-beginning-the-ai-beat/ &amp;quot;ChatGPT launched six months ago. Its impact — and fallout — is just beginning&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. May 30, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-4]]&lt;br /&gt;
|{{dts|2023-3}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|Larger than GPT-3.5 and quickly integrated into Microsoft products like [[Microsoft Bing|Bing]].&amp;lt;ref&amp;gt;Lardinois, Frederic. [https://techcrunch.com/2023/03/14/microsofts-new-bing-was-using-gpt-4-all-along/ &amp;quot;Microsoft&#039;s new Bing was using GPT-4 all along&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2023-03-14.&amp;lt;/ref&amp;gt; OpenAI later added the ability to analyze images.&amp;lt;ref&amp;gt;Wiggers, Kyle. [https://techcrunch.com/2023/11/06/openai-gpt-4-with-vision-release-research-flaws/ &amp;quot;As OpenAI&#039;s multimodal API launches broadly, research shows it&#039;s still flawed&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2023-11-06.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-4o]]&lt;br /&gt;
|{{dts|2024-5}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|Capable of processing text, image, audio, and video, GPT-4o is faster and more capable than GPT-4.&amp;lt;ref&amp;gt;Field, Hayden. [https://www.cnbc.com/2024/05/13/openai-launches-new-ai-model-and-desktop-version-of-chatgpt.html &amp;quot;OpenAI launches new AI model GPT-4o and desktop version of ChatGPT&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. May 13, 2024.&amp;lt;/ref&amp;gt; Its removal from ChatGPT led to backlash from users attached to its personality.&amp;lt;ref&amp;gt;Whitwam, Ryan. [https://arstechnica.com/ai/2025/08/chatgpt-users-outraged-as-gpt-5-replaces-the-models-they-love/ &amp;quot;ChatGPT users hate GPT-5&#039;s &amp;quot;overworked secretary&amp;quot; energy, miss their GPT-4o buddy&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2025-08-08.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Varanasi, Lakshmi. [https://www.businessinsider.com/openai-retires-gpt-4o-user-backlash-chatgpt-ai-2026-2 &amp;quot;OpenAI is officially killing GPT-4o and users are freaking out (again)&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-4o mini]]&lt;br /&gt;
|{{dts|2024-7}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|A smaller and cheaper version of GPT-4o. GPT-4o mini replaced GPT-3.5 in the July 2024 version of ChatGPT.&amp;lt;ref&amp;gt;Franzen, Carl. [https://venturebeat.com/ai/openai-unveils-gpt-4o-mini-a-smaller-much-cheaper-multimodal-ai-model/ &amp;quot;OpenAI unveils GPT-4o mini — a smaller, much cheaper multimodal AI model&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. July 18, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[OpenAI o1|o1]]&lt;br /&gt;
|{{dts|2024-12}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|The full release of OpenAI o1, an early [[reasoning model]], which had previously been available as a preview.&amp;lt;ref name=&amp;quot;Robison-2024&amp;quot; /&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-4.5]]&lt;br /&gt;
|{{dts|2025-2}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|Particularly large GPT model. Promoted by Altman as OpenAI&#039;s &amp;quot;last non-[[Chain of thought prompting|chain-of-thought]] model&amp;quot;.&amp;lt;ref name=&amp;quot;Novet-2025&amp;quot;&amp;gt;Novet, Jordan. [https://www.cnbc.com/2025/02/27/openai-launching-gpt-4point5-general-purpose-large-language-model.html &amp;quot;OpenAI launching GPT-4.5, its next general-purpose large language model&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. February 27, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-4.1]]&lt;br /&gt;
|{{dts|2025-4}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|First launched in the OpenAI API in April 2025, GPT-4.1 was later added to ChatGPT in May 2025.&amp;lt;ref&amp;gt;Zeff, Maxwell. [https://techcrunch.com/2025/05/14/openai-brings-its-gpt-4-1-models-to-chatgpt/ &amp;quot;OpenAI brings its GPT-4.1 models to ChatGPT&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. May 14, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[OpenAI o3|o3]]&lt;br /&gt;
|{{dts|2025-4}}&lt;br /&gt;
|{{maybe|Legacy support}}&lt;br /&gt;
|The full release of the o3 model, offering improved reasoning and performance compared to earlier &amp;quot;o&amp;quot; series models.&amp;lt;ref&amp;gt;Peters, Jay. [https://www.theverge.com/news/649941/openai-o3-o4-mini-model-images-reasoning &amp;quot;OpenAI&#039;s upgraded o3 model can use images when reasoning&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. April 16, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[OpenAI o4-mini|o4-mini]]&lt;br /&gt;
|{{dts|2025-4}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|A smaller, faster reasoning model released alongside o3.&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-5]]&lt;br /&gt;
|{{dts|2025-08-07}}&lt;br /&gt;
|{{eliminated|Discontinued}}&lt;br /&gt;
|Long-awaited, GPT-5 can either answer quickly like earlier GPT models or reason before answering like the reasoning models of the &amp;quot;o&amp;quot; series.&amp;lt;ref&amp;gt;Fried, Ina. [https://www.axios.com/2025/08/07/gpt5-openai-chatgpt-release &amp;quot;ChatGPT jumps a level with OpenAI&#039;s major GPT-5 update&amp;quot;]. &#039;&#039;Axios&#039;&#039;. 2025-08-07.&amp;lt;/ref&amp;gt; Instead of a single GPT-5 model, there was a network of GPT-5 models with different levels of capability, with a router selecting one based on the complexity of the task and other factors.&amp;lt;ref&amp;gt;Goldman, Sharon. [https://fortune.com/2025/08/12/openai-gpt-5-model-router-backlash-ai-future/ &amp;quot;GPT-5&#039;s model router ignited a user backlash against OpenAI—but it might be the future of AI&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-5.1]]&lt;br /&gt;
|{{dts|2025-11-12}}&lt;br /&gt;
|{{maybe|Legacy support}}&lt;br /&gt;
|Allows users to select alternative personalities.&amp;lt;ref&amp;gt;[https://venturebeat.com/ai/openai-reboots-chatgpt-experience-with-gpt-5-1-after-mixed-reviews-of-gpt-5 &amp;quot;OpenAI reboots ChatGPT experience with GPT-5.1 after mixed reviews of GPT-5&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2025-11-12.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-5.2]]&lt;br /&gt;
|{{dts|2025-12-11}}&lt;br /&gt;
|{{active}}&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-5.3-Codex]]&lt;br /&gt;
|{{dts|2026-02-05}}&amp;lt;ref&amp;gt;Axon, Samuel. [https://arstechnica.com/ai/2026/02/with-gpt-5-3-codex-openai-pitches-codex-for-more-than-just-writing-code/ &amp;quot;With GPT-5.3-Codex, OpenAI pitches Codex for more than just writing code&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2026-02-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{active}}&lt;br /&gt;
|A model used in [[OpenAI Codex (AI agent)|Codex]] for [[software development]].&amp;lt;ref&amp;gt;Axon, Samuel. [https://arstechnica.com/ai/2026/02/with-gpt-5-3-codex-openai-pitches-codex-for-more-than-just-writing-code/ &amp;quot;With GPT-5.3-Codex, OpenAI pitches Codex for more than just writing code&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2026-02-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|[[GPT-5.4]]&lt;br /&gt;
|{{dts|2026-03-05}}&amp;lt;ref&amp;gt;Brandom, Russell. [https://techcrunch.com/2026/03/05/openai-launches-gpt-5-4-with-pro-and-thinking-versions/ &amp;quot;OpenAI launches GPT-5.4 with Pro and Thinking versions&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2026-03-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|{{active}}&lt;br /&gt;
|Improvements focused on professional work and computer use.&amp;lt;ref&amp;gt;Brandom, Russell. [https://techcrunch.com/2026/03/05/openai-launches-gpt-5-4-with-pro-and-thinking-versions/ &amp;quot;OpenAI launches GPT-5.4 with Pro and Thinking versions&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 2026-03-05.&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Reception==&lt;br /&gt;
ChatGPT was widely assessed in December 2022 as having some unprecedented and powerful capabilities. [[Kevin Roose]] of &#039;&#039;[[The New York Times]]&#039;&#039; called it &amp;quot;the best [[artificial intelligence]] chatbot ever released to the general public&amp;quot;.&amp;lt;ref name=&amp;quot;Roose-2022&amp;quot;&amp;gt;Roose, Kevin. [https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html &amp;quot;The Brilliance and Weirdness of ChatGPT&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; Samantha Lock of &#039;&#039;[[The Guardian]]&#039;&#039; noted that it was able to generate &amp;quot;impressively detailed&amp;quot; and &amp;quot;human-like&amp;quot; text.&amp;lt;ref name=&amp;quot;Lock-2022&amp;quot;&amp;gt;Lock, Samantha. [https://www.theguardian.com/technology/2022/dec/05/what-is-ai-chatbot-phenomenon-chatgpt-and-could-it-replace-humans &amp;quot;What is AI chatbot phenomenon ChatGPT and could it replace humans?&amp;quot;]. &#039;&#039;[[The Guardian]]&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; In &#039;&#039;[[The Atlantic]]&#039;&#039; magazine&#039;s &amp;quot;Breakthroughs of the Year&amp;quot; for 2022, [[Derek Thompson (journalist)|Derek Thompson]] included ChatGPT as part of &amp;quot;the generative-AI eruption&amp;quot; that &amp;quot;may change our mind about how we work, how we think, and what human creativity is&amp;quot;.&amp;lt;ref&amp;gt;Thompson, Derek. [https://www.theatlantic.com/newsletters/archive/2022/12/technology-medicine-law-ai-10-breakthroughs-2022/672390/ &amp;quot;Breakthroughs of the Year&amp;quot;]. &#039;&#039;[[The Atlantic]]&#039;&#039;. December 8, 2022.&amp;lt;/ref&amp;gt; [[Kelsey Piper]] of &#039;&#039;[[Vox (website)|Vox]]&#039;&#039; wrote that &amp;quot;ChatGPT is the general public&#039;s first hands-on introduction to how powerful modern AI has gotten&amp;quot; and that ChatGPT is &amp;quot;smart enough to be useful despite its flaws&amp;quot;.&amp;lt;ref name=&amp;quot;Piper-2022&amp;quot;&amp;gt;Piper, Kelsey. [https://www.vox.com/future-perfect/2022/12/15/23509014/chatgpt-artificial-intelligence-openai-language-models-ai-risk-google &amp;quot;ChatGPT has given everyone a glimpse at AI&#039;s astounding progress&amp;quot;]. &#039;&#039;[[Vox (website)|Vox]]&#039;&#039;. December 15, 2022.&amp;lt;/ref&amp;gt; [[Paul Graham (programmer)|Paul Graham]] of [[Y Combinator]] tweeted: &amp;quot;The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Something big is happening.&amp;quot;&amp;lt;ref&amp;gt;Scharth, Marcel. [https://theconversation.com/the-chatgpt-chatbot-is-blowing-people-away-with-its-writing-skills-an-expert-explains-why-its-so-impressive-195908 &amp;quot;The ChatGPT chatbot is blowing people away with its writing skills. An expert explains why it&#039;s so impressive&amp;quot;]. &#039;&#039;The Conversation&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; [[File:The AI Arms Race Is Changing Everything.webp|thumb|alt=Time magazine cover featuring an excerpt of a conversation between a user and ChatGPT; after greeting ChatGPT, the user asks it what it thinks of a TIME cover story with the title &amp;quot;The AI Arms Race Is Changing Everything&amp;quot;. 
ChatGPT replies that it is incapable of having opinions, but remarks that the title could be &amp;quot;attention-grabbing and thought-provoking&amp;quot;, but may be &amp;quot;interpreted as sensationalist and alarmist&amp;quot;, and that the story could &amp;quot;help raise public awareness about the potential risks and benefits of this trend&amp;quot; and stimulate discussion about AI ethics. The cover credits Andrew R. Chow and Billy Perrigo (humorously clarified to be humans) as authors.|A 2023 &#039;&#039;Time&#039;&#039; [[Cover art|cover]]: &amp;quot;The [[Artificial intelligence arms race|AI Arms Race]] Is Changing Everything&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
In February 2023, &#039;&#039;[[Time (magazine)|Time]]&#039;&#039; magazine placed a screenshot of a conversation with ChatGPT on its cover, writing that &amp;quot;The [[Artificial intelligence arms race|AI Arms Race]] Is Changing Everything&amp;quot; and &amp;quot;The AI Arms Race Is On. Start Worrying&amp;quot;.&amp;lt;ref&amp;gt;Chow, Andrew. [https://time.com/6255952/ai-impact-chatgpt-microsoft-google/ &amp;quot;The AI Arms Race Is On. Start Worrying&amp;quot;]. &#039;&#039;Time&#039;&#039;. February 16, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
[[File:Chatgpt usage.svg|thumb|Percentage of US adults who have ever used ChatGPT, according to Pew Research. As of March 2025, 58% of those under 30 have used the chatbot.&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Vogels, Emily A.. [https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/ &amp;quot;A majority of Americans have heard of ChatGPT, but few have tried it themselves&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. May 24, 2023.&lt;br /&gt;
* Park, Eugenie. [https://www.pewresearch.org/short-reads/2023/08/28/most-americans-havent-used-chatgpt-few-think-it-will-have-a-major-impact-on-their-job/ &amp;quot;Most Americans haven&#039;t used ChatGPT; few think it will have a major impact on their job&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. August 28, 2023.&lt;br /&gt;
* McClain, Colleen. [https://www.pewresearch.org/short-reads/2024/03/26/americans-use-of-chatgpt-is-ticking-up-but-few-trust-its-election-information/ &amp;quot;Americans&#039; use of ChatGPT is ticking up, but few trust its election information&amp;quot;]. &#039;&#039;Pew Research Center&#039;&#039;. March 26, 2024.&lt;br /&gt;
* Sidoti, Olivia. [https://www.pewresearch.org/short-reads/2025/06/25/34-of-us-adults-have-used-chatgpt-about-double-the-share-in-2023/ &amp;quot;34% of U.S. adults have used ChatGPT, about double the share in 2023&amp;quot;]. &#039;&#039;Pew Research&#039;&#039;. June 25, 2025.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
ChatGPT gained one million users in five days&amp;lt;ref&amp;gt;[https://www.euronews.com/next/2023/11/30/chatgpt-a-year-on-3-ways-the-ai-chatbot-has-completely-changed-the-world-in-12-months &amp;quot;ChatGPT turns 1: How the AI chatbot has completely changed the world&amp;quot;]. &#039;&#039;euronews&#039;&#039;. November 30, 2023.&amp;lt;/ref&amp;gt; and 100 million in two months, becoming the fastest-growing internet application in history.&amp;lt;ref name=&amp;quot;Milmo-2023&amp;quot;&amp;gt;Milmo, Dan. [https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app &amp;quot;ChatGPT reaches 100 million users two months after launch&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. February 2, 2023.&amp;lt;/ref&amp;gt; OpenAI engineers said they had not expected ChatGPT to be very successful and were surprised by the coverage it received.&amp;lt;ref name=&amp;quot;Douglas-2023&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Simons, John. [https://time.com/6252404/mira-murati-chatgpt-openai-interview/ &amp;quot;The Creator of ChatGPT Thinks AI Should Be Regulated&amp;quot;]. &#039;&#039;Time&#039;&#039;. February 5, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Cowen-2023&amp;quot;&amp;gt;Cowen, Tyler. [https://www.bloomberg.com/opinion/articles/2023-05-23/chatgpt-is-also-an-impressive-feat-of-marketing &amp;quot;ChatGPT Is Also an Impressive Feat of Marketing&amp;quot;]. bloomberg.com. May 23, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Google responded by hastening the release of its own chatbot. Its leaders said that their earlier caution about public deployment was due to the trust the public places in [[Google Search]].&amp;lt;ref&amp;gt;Levy, Steven. [https://www.wired.com/story/sundar-pichai-google-ai-microsoft-openai/ &amp;quot;Sundar Pichai on Google&#039;s AI, Microsoft&#039;s AI, OpenAI, and ... Did We Mention AI?&amp;quot;]. &#039;&#039;[[Wired (magazine)|Wired]]&#039;&#039;. September 11, 2023.&amp;lt;/ref&amp;gt; In December 2022, Google executives sounded a &amp;quot;code red&amp;quot; alarm, fearing that ChatGPT&#039;s question-answering ability posed a threat to Google Search, Google&#039;s core business.&amp;lt;ref&amp;gt;Grant, Nico. [https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html &amp;quot;A New Chat Bot Is a &#039;Code Red&#039; for Google&#039;s Search Business&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. December 21, 2022.&amp;lt;/ref&amp;gt; Google&#039;s [[Bard (chatbot)|Bard]] (now Gemini) was announced on February 6, 2023, one day before Microsoft&#039;s announcement of [[Bing Chat]] (now Microsoft Copilot).&amp;lt;ref&amp;gt;Alba, Davey. [https://www.latimes.com/business/story/2023-02-06/google-chatgpt-rival-ai-bard-early-testers &amp;quot;Google releases ChatGPT rival AI &#039;Bard&#039; to early testers&amp;quot;]. &#039;&#039;[[Los Angeles Times]]&#039;&#039;. February 6, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== In art ===&lt;br /&gt;
In January 2023, after being sent a song ChatGPT wrote in the style of [[Nick Cave]],&amp;lt;ref name=&amp;quot;Cain-2023&amp;quot;&amp;gt;Cain, Sian. [https://www.theguardian.com/music/2023/jan/17/this-song-sucks-nick-cave-responds-to-chatgpt-song-written-in-style-of-nick-cave &amp;quot;&#039;This song sucks&#039;: Nick Cave responds to ChatGPT song written in the style of Nick Cave&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. January 16, 2023.&amp;lt;/ref&amp;gt; Cave responded on &#039;&#039;[[The Red Hand Files]],&#039;&#039;&amp;lt;ref&amp;gt;Cave, Nick. [https://www.theredhandfiles.com/chat-gpt-what-do-you-think/ &amp;quot;I asked Chat GPT to write a song in the style of Nick Cave, and this is what it produced. What do you think?&amp;quot;]. &#039;&#039;The Red Hand Files&#039;&#039;. January 16, 2023.&amp;lt;/ref&amp;gt; saying the act of writing a song is &amp;quot;a blood and guts business [...] that requires something of me to initiate the new and fresh idea. It requires my humanness.&amp;quot; He went on to say, &amp;quot;With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don&#039;t much like it.&amp;quot;&amp;lt;ref name=&amp;quot;Cain-2023&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Sparrow, Jeff. [https://www.theguardian.com/commentisfree/2023/jan/20/are-ai-generated-songs-a-grotesque-mockery-of-humanity-or-simply-an-opportunity-to-make-a-new-kind-of-music &amp;quot;Are AI-generated songs a &#039;grotesque mockery&#039; of humanity or simply an opportunity to make a new kind of music?&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. January 20, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A 2023 study reported that GPT-4 obtained a better score than 99% of humans on the [[Torrance Tests of Creative Thinking]].&amp;lt;ref&amp;gt;Shrikant, Aditi. [https://www.cnbc.com/2023/07/17/study-chatgpt-can-match-the-top-1percent-of-creative-human-thinkers.html &amp;quot;ChatGPT can match the top 1% of creative human thinkers, says new study&amp;quot;]. &#039;&#039;CNBC&#039;&#039;. July 17, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Naprys, Ernestas. [https://cybernews.com/news/ai-already-outscoring-humans-creativity-tests/ &amp;quot;AI already outscoring humans in creativity tests&amp;quot;]. &#039;&#039;cybernews&#039;&#039;. July 7, 2023.&amp;lt;/ref&amp;gt; In December 2023, ChatGPT became the first non-human to be included in [[Nature&#039;s 10|&#039;&#039;Nature&#039;&#039;{{&#039;}}s 10]], an annual [[listicle]] curated by [[Nature (journal)|&#039;&#039;Nature&#039;&#039;]] of people considered to have made a significant impact in science.&amp;lt;ref&amp;gt;Van Noorden, Richard. &amp;quot;ChatGPT and science: the AI system was a force in 2023 — for good and bad&amp;quot;. &#039;&#039;Nature&#039;&#039;. December 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mediavilla, Daniel. [https://elpais.com/ciencia/2023-12-13/la-revista-nature-elige-por-primera-vez-entre-sus-cientificos-del-ano-a-un-ente-no-humano-chatgpt.html &amp;quot;La revista &#039;Nature&#039; elige por primera vez entre sus científicos del año a un ente no humano: ChatGPT&amp;quot;]. &#039;&#039;[[El País]]&#039;&#039;. December 13, 2023.&amp;lt;/ref&amp;gt; Celeste Biever wrote in a &#039;&#039;Nature&#039;&#039; article that &amp;quot;ChatGPT broke the [[Turing test]]&amp;quot;.&amp;lt;ref&amp;gt;Biever, Celeste. [https://www.nature.com/articles/d41586-023-02361-7 &amp;quot;ChatGPT broke the Turing test — the race is on for new ways to assess AI&amp;quot;]. &#039;&#039;Nature&#039;&#039;. July 25, 2023.&amp;lt;/ref&amp;gt; Stanford researchers reported that GPT-4 &amp;quot;passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative.&amp;quot;&amp;lt;ref&amp;gt;Scott, Cameron. [https://humsci.stanford.edu/feature/study-finds-chatgpts-latest-bot-behaves-humans-only-better &amp;quot;Study finds ChatGPT&#039;s latest bot behaves like humans, only better&amp;quot;]. &#039;&#039;humsci.stanford.edu&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Mei, Qiaozhu. &amp;quot;A Turing test of whether AI chatbots are behaviorally similar to humans&amp;quot;. &#039;&#039;Proceedings of the National Academy of Sciences&#039;&#039;. February 27, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== In politics ===&lt;br /&gt;
In 2023, Australian MP [[Julian Hill (politician)|Julian Hill]] advised the national parliament that the growth of AI could cause &amp;quot;mass destruction&amp;quot;. During his speech, which was partly written by the program, he warned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications.&amp;lt;ref name=&amp;quot;Karp-2023&amp;quot;&amp;gt;Karp, Paul. [https://www.theguardian.com/australia-news/2023/feb/06/labor-mp-julian-hill-australia-parliament-speech-ai-part-written-by-chatgpt &amp;quot;MP tells Australia&#039;s parliament AI could be used for &#039;mass destruction&#039; in speech part-written by ChatGPT&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. February 6, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Conservative commentators have accused ChatGPT of bias toward left-leaning perspectives.&amp;lt;ref&amp;gt;Guynn, Jessica. [https://www.usatoday.com/story/tech/2023/02/09/woke-chatgpt-conservatives-bias/11215353002/ &amp;quot;Is ChatGPT &#039;woke&#039;? AI chatbot accused of anti-conservative bias and a grudge against Trump&amp;quot;]. &#039;&#039;USA Today&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Bray, Hiawatha. [https://www.bostonglobe.com/2023/02/09/business/are-chatbots-liberal-or-conservative-depends-who-you-ask/ &amp;quot;Is ChatGPT liberal or conservative? Depends who you ask.&amp;quot;]. &#039;&#039;Boston Globe&#039;&#039;. February 9, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Vincent-2023&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2023/2/17/23603906/openai-chatgpt-woke-criticism-culture-war-rules &amp;quot;As conservatives criticize &#039;woke AI,&#039; here are ChatGPT&#039;s rules for answering culture war queries&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. February 17, 2023.&amp;lt;/ref&amp;gt; An August 2023 study in the journal &#039;&#039;[[Public Choice (journal)|Public Choice]]&#039;&#039; found a &amp;quot;significant and systematic political bias toward the [[Democratic Party (United States)|Democrats]] in the US, [[Luiz Inácio Lula da Silva|Lula]] in Brazil, and the [[Labour Party (UK)|Labour Party]] in the UK.&amp;quot;&amp;lt;ref&amp;gt;Motoki, Fabio. &amp;quot;More human than human: measuring ChatGPT political bias&amp;quot;. &#039;&#039;[[Public Choice (journal)|Public Choice]]&#039;&#039;. August 17, 2023.&amp;lt;/ref&amp;gt; In response to accusations from conservative pundits that ChatGPT was [[woke]], OpenAI said in 2023 it had plans to update ChatGPT to produce &amp;quot;outputs that other people (ourselves included) may strongly disagree with&amp;quot;. ChatGPT also provided an outline of how human reviewers are trained to reduce inappropriate content and to attempt to provide political information without affiliating with any political position.&amp;lt;ref name=&amp;quot;Vincent-2023&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In April 2023, Australian mayor Brian Hood said that ChatGPT had erroneously claimed he was jailed for bribery when, in fact, he had been a whistleblower. He sent a concerns notice to OpenAI as the first official step in filing a defamation case.&amp;lt;ref name=&amp;quot;Gerken&amp;quot;&amp;gt;Gerken, Tom. [https://www.bbc.com/news/technology-65202597 &amp;quot;ChatGPT: Mayor starts legal bid over false bribery claim&amp;quot;]. &#039;&#039;BBC&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A movement named QuitGPT emerged in February 2026 on [[Reddit]], criticizing OpenAI&#039;s ties with the Trump administration, such as a $25 million donation from OpenAI&#039;s president [[Greg Brockman]] and his wife to a Trump Super PAC in 2025.&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2026/02/10/1132577/a-quitgpt-campaign-is-urging-people-to-cancel-chatgpt-subscriptions/ &amp;quot;A &amp;quot;QuitGPT&amp;quot; campaign is urging people to cancel their ChatGPT subscriptions&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2026-02-10.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wilkins, Joe. [https://futurism.com/future-society/boycott-chatpgpt-trump &amp;quot;Campaign Urges Users to Quit ChatGPT Over OpenAI&#039;s Support for Trump and ICE&amp;quot;]. &#039;&#039;Futurism&#039;&#039;. 2026-02-13.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Lamm, McKinsey. [https://www.usatoday.com/story/news/2026/02/18/what-is-quitgpt-students-react-to-the-grassroots-movement-against-ai/88659854007/ &amp;quot;Grassroots QuitGPT movement pushes for international ChatGPT boycott&amp;quot;]. &#039;&#039;USA TODAY&#039;&#039;. February 18, 2026.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Caswell, Amanda. [https://www.tomsguide.com/ai/700-000-users-are-ditching-chatgpt-heres-why-and-where-theyre-going &amp;quot;QuitGPT is going viral — 700,000 users are reportedly ditching ChatGPT for these AI rivals&amp;quot;]. &#039;&#039;Tom&#039;s Guide&#039;&#039;. February 19, 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Regional responses ===&lt;br /&gt;
[[File:Countries where ChatGPT is available.svg|thumb|Countries where ChatGPT is available&amp;lt;ref&amp;gt;[https://help.openai.com/en/articles/7947663-chatgpt-supported-countries &amp;quot;ChatGPT Supported Countries&amp;quot;]. &#039;&#039;help.openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
ChatGPT has never been publicly available in [[China]] because OpenAI prevented Chinese users from accessing its site.&amp;lt;ref&amp;gt;Chiu, Joanna. [https://restofworld.org/2024/when-china-blocked-ai-sites/ &amp;quot;New data reveals exactly when the Chinese government blocked ChatGPT and other AI sites&amp;quot;]. September 18, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Chen-2023&amp;quot;&amp;gt;Chen, Caiwei. [https://www.wired.com/story/chinas-chatgpt-black-market-baidu/ &amp;quot;China&#039;s ChatGPT Black Market Is Thriving&amp;quot;]. &#039;&#039;Wired&#039;&#039;. March 7, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;reutersChinablock&amp;quot;&amp;gt;Ye, Josh. [https://www.reuters.com/technology/chatgpt-frenzy-sweeps-china-firms-scramble-home-grown-options-2023-02-10/ &amp;quot;ChatGPT frenzy sweeps China as firms scramble for home-grown options&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. February 12, 2023.&amp;lt;/ref&amp;gt; A [[shadow market]] has emerged for Chinese users to get access to foreign software tools.&amp;lt;ref&amp;gt;[https://www.sixthtone.com/news/1015263 &amp;quot;Young Chinese Have Almost No Concerns About AI, Survey Finds&amp;quot;]. &#039;&#039;[[Sixth Tone]]&#039;&#039;. May 31, 2024.&amp;lt;/ref&amp;gt; The release of ChatGPT prompted a wave of investment in China, resulting in the development of more than 200 [[large language model]]s.&amp;lt;ref&amp;gt;Bachulska, Alicja. [https://ecfr.eu/publication/idea-of-china/ &amp;quot;The Idea of China: Chinese Thinkers on Power, Progress, and People&amp;quot;]. [[European Council on Foreign Relations]]. July 2, 2024.&amp;lt;/ref&amp;gt;{{Rp|page=95}} In February 2025, OpenAI identified and removed influence operations, termed &amp;quot;Peer Review&amp;quot; and &amp;quot;Sponsored Discontent&amp;quot;, used to attack overseas [[Chinese dissidents]].&amp;lt;ref&amp;gt;Metz, Cade. [https://www.nytimes.com/2025/02/21/technology/openai-chinese-surveillance.html &amp;quot;OpenAI Uncovers Evidence of A.I.-Powered Chinese Surveillance Tool&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. February 21, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Fried, Ina. [https://www.axios.com/2025/02/21/openai-chinese-influence-campaigns &amp;quot;OpenAI finds new Chinese influence campaigns using its tools&amp;quot;]. &#039;&#039;[[Axios (website)|Axios]]&#039;&#039;. February 21, 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Reuters-2025&amp;quot;&amp;gt;[https://www.reuters.com/world/china/openai-bans-suspected-china-linked-accounts-seeking-surveillance-proposals-2025-10-07/ &amp;quot;OpenAI bans suspected China-linked accounts for seeking surveillance proposals&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. 2025-10-08.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In late March 2023, the Italian data protection authority banned ChatGPT in [[Italy]] and opened an investigation. Italian regulators asserted that ChatGPT was exposing minors to age-inappropriate content, and that OpenAI&#039;s use of ChatGPT conversations as training data could violate Europe&#039;s [[General Data Protection Regulation]].&amp;lt;ref name=&amp;quot;BBC-News-2023&amp;quot;&amp;gt;[https://www.bbc.com/news/technology-65139406 &amp;quot;ChatGPT banned in Italy over privacy concerns&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. March 31, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Borrelli-2023&amp;quot;&amp;gt;Borrelli, Silvia Sciorilli. [https://www.ft.com/content/3ce7ed9d-df95-4f5f-a3c7-ec8398ce9c50 &amp;quot;Italy temporarily bans ChatGPT over privacy concerns&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. March 31, 2023.&amp;lt;/ref&amp;gt; The ban was lifted in April 2023 after OpenAI said it had taken steps to clarify and address the issues raised, implementing an age verification tool to ensure users are at least 13 years old and making its privacy policy accessible before registration.&amp;lt;ref&amp;gt;McCallum, Shiona. [https://www.bbc.com/news/technology-65431914 &amp;quot;ChatGPT accessible again in Italy&amp;quot;]. &#039;&#039;BBC&#039;&#039;. 28 April 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In May 2024, OpenAI removed accounts involving the use of ChatGPT by state-backed [[influence operations]] such as China&#039;s [[Spamouflage]], Russia&#039;s [[Doppelganger (disinformation campaign)|Doppelganger]], and Israel&#039;s [[Ministry of Diaspora Affairs and Combating Antisemitism]].&amp;lt;ref name=&amp;quot;Bond-2024&amp;quot;&amp;gt;Bond, Shannon. [https://www.npr.org/2024/05/30/g-s1-1670/openai-influence-operations-china-russia-israel &amp;quot;In a first, OpenAI removes influence operations tied to Russia, China and Israel&amp;quot;]. &#039;&#039;[[NPR]]&#039;&#039;. May 30, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Frenkel-2024&amp;quot;&amp;gt;Frenkel, Sheera. [https://www.nytimes.com/2024/06/05/technology/israel-campaign-gaza-social-media.html &amp;quot;Israel Secretly Targets U.S. Lawmakers With Influence Campaign on Gaza War&amp;quot;]. &#039;&#039;[[The New York Times]]&#039;&#039;. June 5, 2024.&amp;lt;/ref&amp;gt; In June 2025, OpenAI reported increased use of ChatGPT for China-origin influence operations.&amp;lt;ref name=&amp;quot;Tong-2025&amp;quot;&amp;gt;Tong, Anna. [https://www.reuters.com/world/china/openai-finds-more-chinese-groups-using-chatgpt-malicious-purposes-2025-06-05/ &amp;quot;OpenAI finds more Chinese groups using ChatGPT for malicious purposes&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. June 6, 2025.&amp;lt;/ref&amp;gt; In October 2025, OpenAI banned accounts suspected to be linked to the Chinese government for violating the company&#039;s national security policy.&amp;lt;ref name=&amp;quot;Reuters-2025&amp;quot; /&amp;gt; In February 2026, OpenAI banned accounts linked to a Chinese government [[Transnational repression by China|transnational repression]] campaign targeting [[Chinese dissident|dissidents]].&amp;lt;ref&amp;gt;Lyngaas, Sean. [https://www.cnn.com/2026/02/25/politics/chatgpt-china-intimidation-operation &amp;quot;A Chinese official&#039;s use of ChatGPT accidentally revealed a global intimidation operation&amp;quot;]. &#039;&#039;[[CNN]]&#039;&#039;. 2026-02-25.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2023, the [[US Federal Trade Commission]] (FTC) issued a [[civil investigative demand]] to OpenAI to investigate whether the company&#039;s [[data security]] and [[Information privacy|privacy]] practices to develop ChatGPT were [[Unfair business practices|unfair]] or [[Consumer protection|harmed consumers]].&amp;lt;ref&amp;gt;Zakrzewski, Cat. [https://www.washingtonpost.com/technology/2023/07/13/ftc-openai-chatgpt-sam-altman-lina-khan/ &amp;quot;The FTC is investigating whether ChatGPT harms consumers&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;. July 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Tracy, Ryan. [https://www.wsj.com/articles/chatgpt-under-investigation-by-ftc-21e4b3ef &amp;quot;ChatGPT Comes Under Investigation by Federal Trade Commission&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. July 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Feiner, Lauren. [https://www.cnbc.com/2023/07/13/chatgpt-owner-openai-is-being-investigated-by-ftc.html &amp;quot;FTC investigating ChatGPT-maker OpenAI for possible consumer harm&amp;quot;]. CNBC. July 13, 2023.&amp;lt;/ref&amp;gt; The investigation also covered allegations that the company scraped public data and published false and defamatory information; the FTC asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people.&amp;lt;ref&amp;gt;[https://www.aljazeera.com/economy/2023/7/14/us-watchdog-probes-chatgpt-maker-openai-over-false-information &amp;quot;ChatGPT creator OpenAI faces US probe over libellous output&amp;quot;]. &#039;&#039;[[Al Jazeera]]&#039;&#039;.&amp;lt;/ref&amp;gt; In August 2024, the FTC voted unanimously to ban marketers from using fake user reviews created by generative AI chatbots (including ChatGPT) and [[Internet celebrity|influencers]] paying for [[Social bot|bots]] to increase [[Friending and following|follower counts]].&amp;lt;ref&amp;gt;Picciotto, Rebecca. [https://www.cnbc.com/2024/08/14/ftc-bans-fake-reviews-social-media-influence-markers.html &amp;quot;FTC bans fake online reviews, inflated social media influence; rule takes effect in October&amp;quot;]. CNBC. August 14, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Reception by American tech personas ===&lt;br /&gt;
Over 20,000 signatories including [[Yoshua Bengio]], Elon Musk, and Apple co-founder [[Steve Wozniak]], signed [[Pause Giant AI Experiments: An Open Letter|a March 2023 open letter]] calling for an immediate pause of giant AI experiments like ChatGPT, citing &amp;quot;profound risks to society and humanity&amp;quot;.&amp;lt;ref name=&amp;quot;profoundRisk&amp;quot;&amp;gt;Hurst, Luke. [https://www.euronews.com/next/2023/03/29/profound-risk-to-humanity-elon-musk-and-steve-wozniak-join-calls-to-halt-ai-development &amp;quot;&#039;Profound risk to humanity&#039;: Tech leaders call for &#039;pause&#039; on advanced AI development&amp;quot;]. &#039;&#039;Euronews&#039;&#039;. March 30, 2023.&amp;lt;/ref&amp;gt; [[Geoffrey Hinton]], one of the &amp;quot;fathers of AI&amp;quot;, voiced concerns that future AI systems may surpass human intelligence.&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/ &amp;quot;Geoffrey Hinton tells us why he&#039;s now scared of the tech he helped build&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/ &amp;quot;Video: Geoffrey Hinton talks about the &amp;quot;existential threat&amp;quot; of AI&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt; A May 2023 [[Statement on AI risk of extinction|statement]] by hundreds of AI scientists, AI industry leaders, and other public figures demanded that {{nowrap|&amp;quot;[m]itigating}} the risk of extinction from AI should be a global priority&amp;quot;.&amp;lt;ref&amp;gt;Roose, Kevin. [https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html &amp;quot;A.I. Poses &#039;Risk of Extinction,&#039; Industry Leaders Warn&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. May 30, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other AI researchers spoke more optimistically about the advances. [[Juergen Schmidhuber]] said that in 95% of cases, AI research is about making &amp;quot;human lives longer and healthier and easier.&amp;quot; He added that while AI can be used by bad actors, it &amp;quot;can also be used against the bad actors.&amp;quot;&amp;lt;ref name=&amp;quot;Taylor-2023&amp;quot;&amp;gt;Taylor, Josh. [https://www.theguardian.com/technology/2023/may/07/rise-of-artificial-intelligence-is-inevitable-but-should-not-be-feared-father-of-ai-says &amp;quot;Rise of artificial intelligence is inevitable but should not be feared, &#039;father of AI&#039; says&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. May 7, 2023.&amp;lt;/ref&amp;gt; [[Andrew Ng]] argued that &amp;quot;it&#039;s a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests.&amp;quot;&amp;lt;ref name=&amp;quot;andrewng2023&amp;quot;&amp;gt;McMorrow, Ryan. [https://www.ft.com/content/2dc07f9e-d2a9-4d98-b746-b051f9352be3 &amp;quot;Andrew Ng: &#039;Do we think the world is better off with more or less intelligence?&#039;&amp;quot;]. &#039;&#039;Financial Times&#039;&#039;. December 19, 2023.&amp;lt;/ref&amp;gt; [[Yann LeCun]] dismissed doomsday warnings of AI-powered misinformation and existential threats to the human race.&amp;lt;ref name=&amp;quot;lecun2023&amp;quot;&amp;gt;Levy, Steven. [https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/ &amp;quot;How Not to Be Stupid About AI, With Yann LeCun&amp;quot;]. &#039;&#039;Wired&#039;&#039;. December 22, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Copyright ===&lt;br /&gt;
{{Excerpt|Artificial intelligence and copyright}}&lt;br /&gt;
&lt;br /&gt;
== Applications ==&lt;br /&gt;
&#039;&#039;See also: [[Applications of artificial intelligence]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Academic research ===&lt;br /&gt;
In a 2023 [[Blinded experiment|blinded study]] in &#039;&#039;[[npj Digital Medicine]]&#039;&#039;, researchers tasked with identifying whether [[Abstract (summary)|abstracts]] were authentic or generated by ChatGPT were fooled around one-third of the time by the AI-generated abstracts.&amp;lt;ref&amp;gt;Gao, Catherine A. &amp;quot;Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers&amp;quot;. &#039;&#039;[[npj Digital Medicine]]&#039;&#039;. April 26, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Bushard, Brian. [https://www.forbes.com/sites/brianbushard/2023/01/10/fake-scientific-abstracts-written-by-chatgpt-fooled-scientists-study-finds/ &amp;quot;Fake Scientific Abstracts Written By ChatGPT Fooled Scientists, Study Finds&amp;quot;]. &#039;&#039;Forbes&#039;&#039;. January 10, 2023.&amp;lt;/ref&amp;gt; In January 2023, &#039;&#039;[[Nature (journal)|Nature]]&#039;&#039; reported that at least four academic pre-prints or published papers had listed ChatGPT as a co-author. &#039;&#039;Nature&#039;&#039; cited several experts in academic publishing who said that listing ChatGPT as an author violates publishing guidelines, since ChatGPT lacks the ability to take responsibility for any research and cannot give consent to any terms of use.&amp;lt;ref&amp;gt;Stokel-Walker, Chris. [https://www.nature.com/articles/d41586-023-00107-z &amp;quot;ChatGPT listed as author on research papers: many scientists disapprove&amp;quot;]. &#039;&#039;Nature&#039;&#039;. January 18, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scientific journals have had different reactions to ChatGPT. Some, including &#039;&#039;[[Nature (journal)|Nature]]&#039;&#039; and [[JAMA Network]], require full disclosure of any use of text-generating tools and prohibit listing a chatbot as a co-author. In January 2023, &#039;&#039;[[Science (journal)|Science]]&#039;&#039; banned chatbot-generated text in all its journals.&amp;lt;ref&amp;gt;Brainard, Jeffrey. [https://www.science.org/content/article/scientists-explore-ai-written-text-journals-hammer-policies &amp;quot;As scientists explore AI-written text, journals hammer out policies&amp;quot;]. &#039;&#039;Science&#039;&#039;. February 22, 2023.&amp;lt;/ref&amp;gt; As of July 2025, &#039;&#039;Science&#039;&#039; expects authors to disclose in full how AI-generated content was used and produced in their work.&amp;lt;ref&amp;gt;[https://www.science.org/content/page/science-journals-editorial-policies#authorship &amp;quot;Science Journals: Editorial Policies&amp;quot;]. &#039;&#039;www.science.org&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many authors argue that the use of ChatGPT in academia for teaching and review is problematic due to its tendency to hallucinate.&amp;lt;ref&amp;gt;Alkaissi, Hussam. &amp;quot;Artificial Hallucinations in ChatGPT: Implications in Scientific Writing&amp;quot;. &#039;&#039;Cureus&#039;&#039;. February 19, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Vynck, Gerrit De. [https://www.washingtonpost.com/technology/2023/05/30/ai-chatbots-chatgpt-bard-trustworthy/ &amp;quot;ChatGPT &#039;hallucinates.&#039; Some researchers worry it isn&#039;t fixable.&amp;quot;]. &#039;&#039;Washington Post&#039;&#039;. May 31, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Azamfirei, Razvan. &amp;quot;Large language models and the perils of their hallucinations&amp;quot;. &#039;&#039;Critical Care&#039;&#039;. March 21, 2023.&amp;lt;/ref&amp;gt; Robin Bauwens, an assistant professor at [[Tilburg University]], found that a ChatGPT-generated [[peer review]] report on his article mentioned nonexistent studies.&amp;lt;ref&amp;gt;Grove, Jack. [https://www.timeshighereducation.com/news/chatgpt-generated-reading-list-sparks-ai-peer-review-debate &amp;quot;&#039;ChatGPT-generated reading list&#039; sparks AI peer review debate&amp;quot;]. &#039;&#039;Times Higher Education&#039;&#039;. April 5, 2023.&amp;lt;/ref&amp;gt; Chris Granatino, a librarian at [[Seattle University]], noted that while ChatGPT can generate content that seemingly includes legitimate citations, in most cases those citations are either fabricated or largely incorrect.&amp;lt;ref&amp;gt;Granatino, Chris. [https://library.seattleu.edu/friendly.php?s=blog/ChatGPT-and-AI-Hallucination &amp;quot;ChatGPT and AI Hallucination&amp;quot;]. &#039;&#039;Lemieux Library at Seattle University&#039;&#039;. May 5, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Computer science ===&lt;br /&gt;
In December 2022, the question-and-answer website [[Stack Overflow]] banned the use of ChatGPT for generating answers to questions, citing the factually ambiguous nature of its responses.&amp;lt;ref name=&amp;quot;TheVergeStackOverflow&amp;quot;&amp;gt;Vincent, James. [https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers &amp;quot;AI-generated answers temporarily banned on coding Q&amp;amp;A site Stack Overflow&amp;quot;]. &#039;&#039;[[The Verge]]&#039;&#039;. December 5, 2022.&amp;lt;/ref&amp;gt; In January 2023, the [[International Conference on Machine Learning]] banned any undocumented use of ChatGPT or other large language models to generate any text in submitted papers.&amp;lt;ref&amp;gt;Vincent, James. [https://www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper &amp;quot;Top AI conference bans use of ChatGPT and AI language tools to write academic papers&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. January 5, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023, ChatGPT was able to provide useful code for solving numerical algorithms in limited cases. In one study, it produced solutions in [[C (programming language)|C]], [[C++]], [[Python (programming language)|Python]], and [[MATLAB]] for problems in [[computational physics]]. However, there were important shortfalls, such as violating basic linear algebra principles when solving singular matrices and producing matrices with incompatible sizes.&amp;lt;ref&amp;gt;Kashefi, Ali. &amp;quot;ChatGPT for Programming Numerical Methods&amp;quot;. 2023.&amp;lt;/ref&amp;gt; Another study analyzed ChatGPT&#039;s responses to 517 questions about [[software engineering]] or [[computer programming]] posed on [[Stack Overflow]] for correctness, consistency, comprehensiveness, and concision. It found that 52% of the responses contained inaccuracies and 77% were verbose.&amp;lt;ref&amp;gt;Morrison, Ryan. [https://techmonitor.ai/technology/ai-and-automation/chatgpt-wrong-over-half-the-time-on-software-questions &amp;quot;ChatGPT wrong over half the time on software questions&amp;quot;]. &#039;&#039;Tech Monitor&#039;&#039;. August 8, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Kabir, Samia. &amp;quot;Who Answers It Better? An In-Depth Analysis of ChatGPT and Stack Overflow Answers to Software Engineering Questions&amp;quot;. August 10, 2023.&amp;lt;/ref&amp;gt; Another study, focused on the performance of GPT-3.5 and GPT-4 between March and June 2023, found that performance on objective tasks like identifying prime numbers and generating [[Execution (computing)|executable]] code was highly variable.&amp;lt;ref&amp;gt;Chen, Lingjiao. &amp;quot;How Is ChatGPT&#039;s Behavior Changing Over Time?&amp;quot;. &#039;&#039;Harvard Data Science Review&#039;&#039;. 12 March 2024.&amp;lt;/ref&amp;gt; When compared to similar chatbots at the time, the GPT-4 version of ChatGPT was the most accurate at coding.&amp;lt;ref&amp;gt;Siam, Md Kamrul. &amp;quot;Proceedings of the 3rd International Conference on Computing Advancements&amp;quot;. 6 June 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
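&lt;br /&gt;
The singular-matrix shortfall can be illustrated with a brief sketch (a hypothetical example, not code from the study): robust numerical code checks for a singular coefficient matrix and falls back to a least-squares formulation, a safeguard of the kind the generated code reportedly omitted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Rows are linearly dependent, so the system A @ x = b is singular.&lt;br /&gt;
A = np.array([[1.0, 2.0],&lt;br /&gt;
              [2.0, 4.0]])&lt;br /&gt;
b = np.array([3.0, 6.0])&lt;br /&gt;
&lt;br /&gt;
try:&lt;br /&gt;
    # A naive direct solve raises LinAlgError on a singular matrix.&lt;br /&gt;
    x = np.linalg.solve(A, b)&lt;br /&gt;
except np.linalg.LinAlgError:&lt;br /&gt;
    # Robust fallback: confirm rank deficiency, then use least squares,&lt;br /&gt;
    # which returns the minimum-norm solution for consistent systems.&lt;br /&gt;
    assert np.linalg.matrix_rank(A) &amp;lt; A.shape[0]&lt;br /&gt;
    x, *_ = np.linalg.lstsq(A, b, rcond=None)&lt;br /&gt;
&lt;br /&gt;
print(x)  # [0.6 1.2]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;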
&lt;br /&gt;
=== Computer security ===&lt;br /&gt;
Check Point Research and others noted that ChatGPT could write [[phishing]] emails and [[malware]], especially when combined with [[OpenAI Codex (AI agent)|OpenAI Codex]]. CyberArk researchers demonstrated that ChatGPT could be used to create [[polymorphic malware]] that could evade security products while requiring little effort by the attacker.&amp;lt;ref name=&amp;quot;Shimony-2023&amp;quot;&amp;gt;Shimony, Eran. [https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware &amp;quot;Chatting Our Way Into Creating a Polymorphic Malware&amp;quot;]. &#039;&#039;CyberArk&#039;&#039;. January 17, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Mascellino-2023&amp;quot;&amp;gt;Mascellino, Alessandro. [https://www.infosecurity-magazine.com/news/chatgpt-creates-polymorphic-malware/ &amp;quot;ChatGPT Creates Polymorphic Malware&amp;quot;]. &#039;&#039;Infosecurity Magazine&#039;&#039;. January 18, 2023.&amp;lt;/ref&amp;gt; From the launch of ChatGPT in the fourth quarter of 2022 to the fourth quarter of 2023, there was a 1,265% increase in malicious [[phishing]] emails and a 967% increase in credential phishing. In an industry survey, cybersecurity professionals argued that it was attributable to cybercriminals&#039; increased use of generative artificial intelligence (including ChatGPT).&amp;lt;ref name=&amp;quot;Violino-2023&amp;quot;&amp;gt;Violino, Bob. [https://www.cnbc.com/2023/11/28/ai-like-chatgpt-is-creating-huge-increase-in-malicious-phishing-email.html &amp;quot;AI tools such as ChatGPT are generating a mammoth increase in malicious phishing emails&amp;quot;]. CNBC. November 28, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In July 2024, &#039;&#039;[[Futurism (website)|Futurism]]&#039;&#039; reported that GPT-4o in ChatGPT would sometimes link &amp;quot;scam news sites that deluge the user with fake software updates and virus warnings&amp;quot;; these pop-ups can be used to coerce users into downloading malware or [[potentially unwanted program]]s.&amp;lt;ref name=&amp;quot;Dupre-2024&amp;quot;&amp;gt;Dupré, Maggie Harrison. [https://futurism.com/chatgpt-fake-virus-warnings &amp;quot;ChatGPT-4o Is Sending Users to a Scammy Website That Floods Your Screen With Fake Virus Warnings&amp;quot;]. &#039;&#039;Futurism&#039;&#039;. July 1, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Education ===&lt;br /&gt;
{{Excerpt|ChatGPT in education|paragraphs=2|templates=no}}&lt;br /&gt;
[[File:Books about ChatGPT in Osaka bookstore.jpg|thumb|Books about ChatGPT in an Osaka bookstore]]&lt;br /&gt;
&lt;br /&gt;
=== Culture ===&lt;br /&gt;
During the first three months after ChatGPT became available to the public, hundreds of books appeared on [[Amazon (company)|Amazon]] that listed it as author or co-author and featured illustrations made by other AI models such as [[Midjourney]].&amp;lt;ref&amp;gt;Nolan, Beatrice. [https://www.businessinsider.com/chatgpt-ai-write-author-200-books-amazon-2023-2 &amp;quot;More than 200 books in Amazon&#039;s bookstore have ChatGPT listed as an author or coauthor&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Bensinger, Greg. [https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/ &amp;quot;ChatGPT launches boom in AI-written e-books on Amazon&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. February 21, 2023.&amp;lt;/ref&amp;gt; AI researcher [[Irene Solaiman]] said she was worried that the proliferation of AI-generated text could lead to increased [[Anglocentrism]].&amp;lt;ref name=&amp;quot;Sigal1&amp;quot;&amp;gt;Samuel, Sigal. [https://www.vox.com/future-perfect/23674696/chatgpt-ai-creativity-originality-homogenization &amp;quot;What happens when ChatGPT starts to feed on its own writing?&amp;quot;]. Vox. April 10, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Between March and April 2023, &#039;&#039;[[Il Foglio]]&#039;&#039; published one ChatGPT-generated article a day on its website, hosting a special contest for its readers in the process.&amp;lt;ref&amp;gt;Multiple Sources:&lt;br /&gt;
* [https://www.ilfoglio.it/tecnologia/2023/03/17/news/sfida-per-siri-e-alexa-5068811/ &amp;quot;Sfida per Siri e Alexa&amp;quot;]. &#039;&#039;Il Foglio&#039;&#039;. March 17, 2023.&lt;br /&gt;
* [https://www.ilfoglio.it/tecnologia/2023/03/07/news/chatgpt-sul-foglio-per-30-giorni-piccoli-testi-scritti-dall-ia-sul-nostro-giornale-5029973/ &amp;quot;ChatGPT sul Foglio: per 30 giorni piccoli testi scritti dall&#039;IA sul nostro giornale&amp;quot;]. &#039;&#039;Il Foglio&#039;&#039;. March 7, 2023.&lt;br /&gt;
* Moretti, Marco. [https://www.ilfoglio.it/tecnologia/2023/03/08/news/articoli-artificiali-no-5067825/ &amp;quot;Articoli artificiali? No&amp;quot;]. &#039;&#039;Il Foglio&#039;&#039;. March 8, 2023.&lt;br /&gt;
* A.D.A.. [https://www.ilfoglio.it/tecnologia/2023/03/09/news/piu-umani-grazie-5067829/ &amp;quot;Più umani, grazie&amp;quot;]. &#039;&#039;Il Foglio&#039;&#039;. March 9, 2023.&lt;br /&gt;
* [https://www.ilfoglio.it/politica/2023/03/14/news/le-colpe-farlocche-dell-invasione--5067556/ &amp;quot;Le colpe farlocche dell&#039;&amp;quot;invasione&amp;quot;&amp;quot;]. &#039;&#039;Il Foglio&#039;&#039;. March 14, 2023.&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In June 2023, hundreds of people attended a &amp;quot;ChatGPT-powered church service&amp;quot; at St. Paul&#039;s Church in [[Fürth]], Germany. Theologian and philosopher Jonas Simmerlein, who presided, said that it was &amp;quot;about 98 percent from the machine&amp;quot;.&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2023/06/chatgpt-takes-the-pulpit-ai-leads-experimental-church-service-in-germany/ &amp;quot;AI-powered church service in Germany draws a large crowd&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. June 12, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.businessinsider.com/chatgpt-sermon-protestant-congregation-nuremberg-germany-not-to-fear-death-2023-6 &amp;quot;Hundreds of Protestants attended a sermon in Nuremberg given by ChatGPT, which told them not to fear death&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; The ChatGPT-generated avatar told the people, &amp;quot;Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year&#039;s convention of Protestants in Germany&amp;quot;. Reactions to the ceremony were mixed.&amp;lt;ref&amp;gt;[https://www.thejournal.ie/ai-chruch-germany-6090108-Jun2023/ &amp;quot;Hundreds attend AI church service in Germany&amp;quot;]. &#039;&#039;TheJournal.ie&#039;&#039;. June 10, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[[The Last Screenwriter]]&#039;&#039;, a 2024 film created and directed by [[Peter Luisi]], was written using ChatGPT, and was marketed as &amp;quot;the first film written entirely by AI&amp;quot;.&amp;lt;ref&amp;gt;Kelly, James W. [https://www.bbc.com/news/articles/cjll3w15j0yo &amp;quot;Prince Charles Cinema drops AI-written film following backlash&amp;quot;]. &#039;&#039;BBC News&#039;&#039;. June 19, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;[[The Guardian]]&#039;&#039; questioned whether any content found on the Internet after ChatGPT&#039;s release &amp;quot;can be truly trusted&amp;quot; and called for government regulation.&amp;lt;ref name=&amp;quot;guard20222&amp;quot;&amp;gt;[https://www.theguardian.com/commentisfree/2022/dec/08/the-guardian-view-on-chatgpt-an-eerily-good-human-impersonator &amp;quot;The Guardian view on ChatGPT: an eerily good human impersonator&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. December 8, 2022.&amp;lt;/ref&amp;gt; This has led to concern over the rise of [[AI slop]] whereby &amp;quot;meaningless content and writing thereby becomes part of our culture, particularly on social media, which we nonetheless try to understand or fit into our existing cultural horizon.&amp;quot;&amp;lt;ref&amp;gt;Berry, David M.. &amp;quot;Synthetic media and computational capitalism: towards a critical theory of artificial intelligence&amp;quot;. &#039;&#039;AI &amp;amp; Society&#039;&#039;. 2025-03-19.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Financial markets ===&lt;br /&gt;
Many companies incorporated ChatGPT and similar chatbot technologies into their product offerings. In 2023, these changes yielded significant increases in company valuations.&amp;lt;ref&amp;gt;Diaz, Alicia. [https://www.bloomberg.com/news/articles/2023-01-26/buzzfeed-bzfd-triples-on-plans-to-embrace-openai-for-content &amp;quot;BuzzFeed Shares Surge 120% on Plans to Embrace OpenAI&amp;quot;]. &#039;&#039;Bloomberg.com&#039;&#039;. January 26, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;RSurge1&amp;quot;&amp;gt;Singh, Medha. [https://www.reuters.com/technology/ai-stocks-rally-latest-wall-street-craze-sparked-by-chatgpt-2023-02-06/ &amp;quot;AI stocks rally in latest Wall Street craze sparked by ChatGPT&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. February 6, 2023.&amp;lt;/ref&amp;gt; Reuters attributed this surge to ChatGPT&#039;s role in turning [[Artificial intelligence|AI]] into [[Wall Street]]&#039;s buzzword.&amp;lt;ref name=&amp;quot;RSurge1&amp;quot;/&amp;gt; Despite decades of using AI, Wall Street professionals report that consistently beating the market with AI, including recent large language models, is challenging due to limited and noisy financial data.&amp;lt;ref&amp;gt;Zuckerman, Gregory. [https://www.wsj.com/articles/ai-can-write-a-song-but-it-cant-beat-the-market-6df50efd &amp;quot;AI Can Write a Song, but It Can&#039;t Beat the Market&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. April 12, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Medicine ===&lt;br /&gt;
&#039;&#039;See also: [[Artificial intelligence in healthcare]]&#039;&#039;&lt;br /&gt;
ChatGPT can provide health information to users&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Ayers, John W.. &amp;quot;Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum&amp;quot;. &#039;&#039;JAMA Internal Medicine&#039;&#039;. April 28, 2023.&lt;br /&gt;
* Alan, Raif. &amp;quot;Utilizing ChatGPT-4 for Providing Information on Periodontal Disease to Patients: A DISCERN Quality Analysis&amp;quot;. &#039;&#039;Cureus&#039;&#039;. September 29, 2023.&lt;br /&gt;
* Endo, Yutaka. [https://link.springer.com/article/10.1007/s11605-023-05714-9 &amp;quot;Quality of ChatGPT Responses to Questions Related To Liver Transplantation&amp;quot;]. &#039;&#039;Journal of Gastrointestinal Surgery&#039;&#039;. August 1, 2023.&lt;br /&gt;
* Tan, Songtao. &amp;quot;ChatGPT in medicine: prospects and challenges: a review article&amp;quot;. &#039;&#039;International Journal of Surgery&#039;&#039;. June 2024.&lt;br /&gt;
* Liu, Hilary Y.. &amp;quot;Consulting the Digital Doctor: Google Versus ChatGPT as Sources of Information on Breast Implant-Associated Anaplastic Large Cell Lymphoma and Breast Implant Illness&amp;quot;. &#039;&#039;Aesthetic Plastic Surgery&#039;&#039;. February 2024.&amp;lt;/ref&amp;gt; and assist professionals with diagnosis and staying up to date with clinical guidelines.&amp;lt;ref name=&amp;quot;medical bundle 1&amp;quot;&amp;gt;Lewandowski, Miłosz. [https://academic.oup.com/ced/article/49/7/686/7237242 &amp;quot;ChatGPT-3.5 and ChatGPT-4 dermatological knowledge level based on the Specialty Certificate Examination in Dermatology&amp;quot;]. &#039;&#039;Clinical and Experimental Dermatology&#039;&#039;. June 25, 2024.&amp;lt;/ref&amp;gt; It can be used to summarize medical journal articles for researchers. In medical education, it can explain concepts, generate case scenarios, and be used by students preparing for licensing examinations.&amp;lt;ref name=&amp;quot;medmeta2024&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A February 2023 study in &#039;&#039;[[PLOS Digital Health]]&#039;&#039; found that ChatGPT 3.5 was capable of passing the [[United States Medical Licensing Examination]].&amp;lt;ref&amp;gt;Kung, Tiffany H.. &amp;quot;Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models&amp;quot;. &#039;&#039;PLOS Digital Health&#039;&#039;. 9 February 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DePeau-Wilson-2023&amp;quot;&amp;gt;DePeau-Wilson, Michael. [https://www.medpagetoday.com/special-reports/exclusives/102705 &amp;quot;AI Passes U.S. Medical Licensing Exam&amp;quot;]. &#039;&#039;MedPage Today&#039;&#039;. January 19, 2023.&amp;lt;/ref&amp;gt; ChatGPT has also passed the Specialty Certificate Examination in Dermatology.&amp;lt;ref name=&amp;quot;medical bundle 1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, ChatGPT gives inconsistent responses and exhibits a lack of specificity, a lack of control over patient data, and a limited ability to take additional context (such as regional variations) into consideration.&amp;lt;ref name=&amp;quot;medmeta2024&amp;quot;&amp;gt;Tan, Songtao. &amp;quot;ChatGPT in medicine: prospects and challenges: a review article&amp;quot;. &#039;&#039;International Journal of Surgery&#039;&#039;. June 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Howard, Alex. [https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(23)00113-5/fulltext#back-bib1 &amp;quot;ChatGPT and antimicrobial advice: the end of the consulting infection doctor?&amp;quot;]. &#039;&#039;The Lancet Infectious Diseases&#039;&#039;. April 2023.&amp;lt;/ref&amp;gt; The [[Hallucination (artificial intelligence)|hallucinations]] characteristic of LLMs pose particular danger in medical contexts, and ChatGPT&#039;s tendency to produce false or faulty citations has been widely criticized.&amp;lt;ref name=&amp;quot;medmeta2024&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Gravel, Jocelyn. &amp;quot;Learning to Fake It: Limited Responses and Fabricated References Provided by ChatGPT for Medical Questions&amp;quot;. &#039;&#039;Mayo Clinic Proceedings: Digital Health&#039;&#039;. September 1, 2023.&amp;lt;/ref&amp;gt; According to a 2024 study in the &#039;&#039;[[International Journal of Surgery]]&#039;&#039;, concerns include &amp;quot;research fraud, lack of originality, ethics, copyright, legal difficulties&amp;quot;.&amp;lt;ref name=&amp;quot;medmeta2024&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Mental health ====&lt;br /&gt;
&#039;&#039;See also: [[Artificial intelligence in mental health|Chatbot psychosis]]&#039;&#039;&lt;br /&gt;
According to a September 2025 article in &#039;&#039;[[Lancet Psychiatry]]&#039;&#039;, many individuals use ChatGPT and comparable chatbots for mental health and emotional support despite a warning from OpenAI against using ChatGPT as a therapist. The study notes a lack of research on efficacy, poor consistency in dangerous situations, limited regulation and liability, and poor transparency from OpenAI.&amp;lt;ref&amp;gt;Rousmaniere, Tony. &amp;quot;Large language models as mental health providers&amp;quot;. &#039;&#039;The Lancet Psychiatry&#039;&#039;. 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A July 2025 study in the journal &#039;&#039;Digital Health&#039;&#039; found that users reported employing ChatGPT to manage mental health concerns &amp;quot;due to perceived therapist-like qualities (e.g. emotional support, accurate understanding, and constructive feedback) and machine-like benefits (e.g. constant availability, expansive cognitive capacity, lack of negative reactions, and perceived objectivity).&amp;quot; The study calls for improved [[AI literacy]] and mandatory disclosure from AI providers to address ethical concerns such as privacy, bias, the lack of liability, and emotional over-reliance.&amp;lt;ref&amp;gt;Luo, Xiaochen. &amp;quot;&amp;quot;Shaping ChatGPT into my Digital Therapist&amp;quot;: A thematic analysis of social media discourse on using generative artificial intelligence for mental health&amp;quot;. &#039;&#039;Digital Health&#039;&#039;. 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Law ===&lt;br /&gt;
ChatGPT has been used to assist in bill writing in the US&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* [https://malegislature.gov/Bills/193/S31/BillHistory &amp;quot;Bill S.31&amp;quot;]. &#039;&#039;malegislature.gov&#039;&#039;.&lt;br /&gt;
* Annear, Steve. [https://www.bostonglobe.com/2023/01/24/metro/this-state-senator-drafted-legislation-regulate-artificial-intelligence-technology-with-some-help-chatgpt/ &amp;quot;Two elected officials drafted legislation to regulate artificial intelligence technology — with some help from ChatGPT&amp;quot;]. &#039;&#039;[[The Boston Globe]]&#039;&#039;. January 24, 2023.&lt;br /&gt;
* Garrity, Kelly. [https://www.politico.com/newsletters/massachusetts-playbook/2023/07/13/chatgpt-enters-the-legislative-chat-00106066 &amp;quot;ChatGPT enters the legislative chat&amp;quot;]. &#039;&#039;[[POLITICO]]&#039;&#039;. July 13, 2023.&lt;br /&gt;
* [https://malegislature.gov/Bills/193/S2539 &amp;quot;Bill S.2539&amp;quot;]. &#039;&#039;malegislature.gov&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Quach-2023&amp;quot;&amp;gt;Quach, Katyanna. [https://www.theregister.com/2023/12/02/chatgpt_law_brazil/ &amp;quot;Local council in Brazil passes ChatGPT-written proposal&amp;quot;]. &#039;&#039;[[The Register]]&#039;&#039;. December 2, 2023.&amp;lt;/ref&amp;gt; and Brazil.&amp;lt;ref name=&amp;quot;Quach-2023&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Jeantet, Diane. [https://apnews.com/article/brazil-artificial-intelligence-porto-alegre-5afd1240afe7b6ac202bb0bbc45e08d4 &amp;quot;Brazilian city enacts an ordinance that was secretly written by ChatGPT&amp;quot;]. &#039;&#039;AP News&#039;&#039;. November 30, 2023.&lt;br /&gt;
* Paúl, María Luisa. [https://www.washingtonpost.com/nation/2023/12/04/ai-written-law-porto-alegre-brazil/ &amp;quot;A Brazilian city passed a law about water meters. ChatGPT wrote it.&amp;quot;]. &#039;&#039;Washington Post&#039;&#039;. December 4, 2023.&lt;br /&gt;
* Foster, Gustavo. [https://g1.globo.com/rs/rio-grande-do-sul/noticia/2023/11/29/lei-escrita-por-inteligencia-artificial-e-aprovada-por-vereadores-em-porto-alegre-precedente-perigoso-diz-presidente-da-camara.ghtml &amp;quot;Lei escrita por inteligência artificial é aprovada por vereadores em Porto Alegre; &#039;precedente perigoso&#039;, diz presidente da Câmara&amp;quot;]. &#039;&#039;[[G1 (website)|G1]]&#039;&#039;. November 29, 2023.&amp;lt;/ref&amp;gt; In an American civil lawsuit, attorneys were [[Sanctions (law)|sanction]]ed for filing a [[Motion (legal)|legal motion]] generated by ChatGPT containing fictitious legal decisions.&amp;lt;ref&amp;gt;Multiple sources:&lt;br /&gt;
* Goswami, Rohan. [https://www.cnbc.com/2023/05/30/chatgpt-cited-bogus-cases-for-a-new-york-federal-court-filing.html &amp;quot;ChatGPT cited &#039;bogus&#039; cases for a New York federal court filing. The attorneys involved may face sanctions.&amp;quot;]. CNBC. May 30, 2023.&lt;br /&gt;
* Neumeister, Larry. [https://apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b &amp;quot;Lawyers blame ChatGPT for tricking them into citing bogus case law&amp;quot;]. Associated Press. June 8, 2023.&lt;br /&gt;
* Brodkin, Jon. [https://arstechnica.com/tech-policy/2023/06/lawyers-have-real-bad-day-in-court-after-citing-fake-cases-made-up-by-chatgpt/ &amp;quot;Lawyers have real bad day in court after citing fake cases made up by ChatGPT&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. June 23, 2023.&lt;br /&gt;
* [https://casetext.com/case/mata-v-avianca-inc-2 &amp;quot;Mata v. Avianca, Inc.&amp;quot;]. &#039;&#039;Casetext&#039;&#039;.&lt;br /&gt;
* [https://www.abc.net.au/news/2023-06-24/us-lawyer-uses-chatgpt-to-research-case-with-embarrassing-result/102490068 &amp;quot;&#039;Use with caution&#039;: How ChatGPT landed this US lawyer and his firm in hot water&amp;quot;]. &#039;&#039;ABC News&#039;&#039;. June 24, 2023.&lt;br /&gt;
* [https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/ &amp;quot;New York lawyers sanctioned for using fake ChatGPT cases in legal brief&amp;quot;]. &#039;&#039;Reuters&#039;&#039;.&lt;br /&gt;
* Maruf, Ramishah. [https://www.cnn.com/2023/05/27/business/chat-gpt-avianca-mata-lawyers/index.html &amp;quot;Lawyer apologizes for fake court citations from ChatGPT&amp;quot;]. &#039;&#039;CNN&#039;&#039;. May 27, 2023.&lt;br /&gt;
* Davis, Wes. [https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research &amp;quot;A lawyer used ChatGPT and now has to answer for its &#039;bogus&#039; citations&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. May 27, 2023.&amp;lt;/ref&amp;gt; Judges in the US&amp;lt;ref&amp;gt;Wilkins, Stephanie. [https://www.law.com/2024/06/04/11th-circuit-judge-uses-chatgpt-in-deciding-appeal-encourages-others-to-consider-it/ &amp;quot;11th Circuit Judge Uses ChatGPT in Deciding Appeal, Encourages Others to Consider It&amp;quot;]. &#039;&#039;Law.com&#039;&#039;. June 4, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Weiss, Debra Cassens. [https://www.abajournal.com/news/article/appeals-judge-makes-a-confession-he-consulted-chatgpt-and-found-the-results-less-nutty-than-i-feared &amp;quot;In concurrence confession, appeals judge says ChatGPT research &#039;less nutty&#039; than feared&amp;quot;]. &#039;&#039;ABA Journal&#039;&#039;. June 6, 2024.&amp;lt;/ref&amp;gt; and Pakistan have endorsed using ChatGPT to investigate legal questions during a case.&amp;lt;ref&amp;gt;[https://www.gulfnews.com/world/asia/pakistan/pakistani-judge-uses-chatgpt-to-make-court-decision-1.95104528 &amp;quot;Pakistani judge uses ChatGPT to make court decision&amp;quot;]. &#039;&#039;Gulf News&#039;&#039;. April 13, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://pakobserver.net/ai-revolution-is-here-pakistani-court-takes-help-from-chatgpt-to-grant-bail-in-rape-case &amp;quot;&#039;AI revolution is here&#039;: Pakistani court takes help from ChatGPT to grant bail in rape case&amp;quot;]. &#039;&#039;Pakistan Observer&#039;&#039;. April 11, 2023.&amp;lt;/ref&amp;gt; The use of ChatGPT has also led to errors in courtrooms.&amp;lt;ref&amp;gt;Merken, Sara. [https://www.reuters.com/sustainability/society-equity/two-federal-judges-say-use-ai-led-errors-us-court-rulings-2025-10-23/ &amp;quot;Two federal judges say use of AI led to errors in US court rulings&amp;quot;]. &#039;&#039;[[Reuters]]&#039;&#039;. October 23, 2025.&amp;lt;/ref&amp;gt; In the UK, a judge expressed concern about [[litigant in person|self-representing litigant]]s wasting court time by submitting documents containing significant hallucinations.&amp;lt;ref&amp;gt;Rose, Neil. [https://www.legalfutures.co.uk/latest-news/litigant-unwittingly-put-fake-cases-generated-by-ai-before-tribunal &amp;quot;Litigant unwittingly put fake cases generated by AI before tribunal&amp;quot;]. &#039;&#039;Legal Futures&#039;&#039;. December 7, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Cross, Michael. [https://www.lawgazette.co.uk/news/ai-hallucinates-nine-helpful-case-authorities/5118179.article &amp;quot;AI hallucinates nine &#039;helpful&#039; case authorities&amp;quot;]. &#039;&#039;Law Society Gazette&#039;&#039;. December 11, 2023.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;BAILII-2023&amp;quot;&amp;gt;[https://www.bailii.org/uk/cases/UKFTT/TC/2023/TC09010.html &amp;quot;Harber v Commissioners for His Majesty&#039;s Revenue and Customs [2023] UKFTT 1007 (TC)&amp;quot;]. &#039;&#039;BAILII&#039;&#039;. December 4, 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* {{annotated link|Artificial general intelligence}}&lt;br /&gt;
* {{Annotated link|Ethics of artificial intelligence}}&lt;br /&gt;
* {{Annotated link|Intelligent agent}}&lt;br /&gt;
* [[List of chatbots]]&lt;br /&gt;
* [[Reasoning model]]&lt;br /&gt;
* [[List of large language models]]&lt;br /&gt;
* [[Lists of open-source artificial intelligence software]]&lt;br /&gt;
{{Portal bar|Language|Technology}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Further reading==&lt;br /&gt;
* Biswas, Som. [https://pubs.rsna.org/doi/pdf/10.1148/radiol.223312 &amp;quot;ChatGPT and the Future of Medical Writing&amp;quot;]. &#039;&#039;Radiology&#039;&#039;. April 1, 2023.&lt;br /&gt;
* Liebrenz, Michael. &amp;quot;Generating scholarly content with ChatGPT: ethical challenges for medical publishing&amp;quot;. &#039;&#039;The Lancet Digital Health&#039;&#039;. February 2023.&lt;br /&gt;
* Bartholomew, Jem. [https://www.cjr.org/tow_center/media-coverage-chatgpt.php &amp;quot;How the media is covering ChatGPT&amp;quot;]. &#039;&#039;Columbia Journalism Review&#039;&#039;.&lt;br /&gt;
* [https://platform.openai.com/docs/guides/prompt-engineering Prompt engineering] guide from OpenAI&lt;br /&gt;
&lt;br /&gt;
==External links==&lt;br /&gt;
{{Commons}}&lt;br /&gt;
* {{Official website}}&lt;br /&gt;
* [https://www.instagram.com/chatgpt/ ChatGPT] on [[Instagram]]&lt;br /&gt;
&lt;br /&gt;
{{OpenAI navbox}}&lt;br /&gt;
{{AI-based chatbots}}&lt;br /&gt;
{{Generative AI}}&lt;br /&gt;
{{Virtual assistants}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:ChatGPT| ]]&lt;br /&gt;
[[Category:2022 software]]&lt;br /&gt;
[[Category:Chatbots]]&lt;br /&gt;
[[Category:Generative pre-trained transformers]]&lt;br /&gt;
[[Category:Large language models]]&lt;br /&gt;
[[Category:Interactive narrative]]&lt;br /&gt;
[[Category:2022 in artificial intelligence]]&lt;br /&gt;
[[Category:Microsoft Store Awards 2025 winners]]&lt;br /&gt;
[[Category:Artificial intelligence industry in the United States]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=AI_alignment&amp;diff=6</id>
		<title>AI alignment</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=AI_alignment&amp;diff=6"/>
		<updated>2026-04-06T12:58:14Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import from Wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{use mdy dates|date=September 2021}}&lt;br /&gt;
{{Use American English|date=February 2021}}&lt;br /&gt;
{{Artificial intelligence}}&lt;br /&gt;
In the field of [[artificial intelligence]] (AI), &#039;&#039;&#039;alignment&#039;&#039;&#039; aims to steer AI systems toward a person&#039;s or group&#039;s intended goals, preferences, or ethical principles. An AI system is considered &#039;&#039;aligned&#039;&#039; if it advances the intended objectives. A &#039;&#039;misaligned&#039;&#039; AI system pursues unintended objectives.&amp;lt;ref name=&amp;quot;aima4&amp;quot;&amp;gt;&lt;br /&gt;
Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is often difficult for AI designers to specify the full range of desired and undesired behaviors. Therefore, the designers often use simpler &#039;&#039;proxy goals&#039;&#039;, such as [[Reinforcement learning from human feedback|gaining human approval]]. But proxy goals can overlook necessary constraints or reward the AI system for merely &#039;&#039;appearing&#039;&#039; aligned.&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;dlp2023&amp;quot;&amp;gt;Ngo, Richard. [https://openreview.net/forum?id=fh8EYKFKns &amp;quot;The Alignment Problem from a Deep Learning Perspective&amp;quot;]. &#039;&#039;International Conference on Learning Representations&#039;&#039;. 2022.&amp;lt;/ref&amp;gt; AI systems may also find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways ([[reward hacking]]).&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;mmmm2022&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Advanced AI systems may develop unwanted [[Instrumental convergence|instrumental strategies]], such as seeking power or [[self-preservation]] because such strategies help them achieve their assigned final goals.&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt;&amp;lt;ref name=Carlsmith2022&amp;gt;Carlsmith, Joseph. &amp;quot;Is Power-Seeking AI an Existential Risk?&amp;quot;. 2022-06-16.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot;/&amp;gt; Furthermore, they might develop undesirable [[emergent behaviour|emergent]] goals that could be hard to detect before the system is deployed and encounters new situations and [[Domain adaptation|data distributions]].&amp;lt;ref name=Christian2020&amp;gt;Christian, Brian. [https://wwnorton.co.uk/books/9780393635829-the-alignment-problem &amp;quot;The alignment problem: Machine learning and human values&amp;quot;]. W. W. Norton &amp;amp; Company.&amp;lt;/ref&amp;gt;&amp;lt;ref name=gmdrl&amp;gt;Langosco, Lauro Langosco Di. [https://proceedings.mlr.press/v162/langosco22a.html &amp;quot;Goal Misgeneralization in Deep Reinforcement Learning&amp;quot;]. PMLR. 2022-06-28.&amp;lt;/ref&amp;gt; Empirical research showed in 2024 that advanced [[large language model]]s (LLMs) such as [[OpenAI o1]] or [[Claude 3]] sometimes engage in strategic deception to achieve their goals or prevent them from being changed.&amp;lt;ref&amp;gt;Pillay, Tharin. [https://time.com/7202312/new-tests-reveal-ai-capacity-for-deception/ &amp;quot;New Tests Reveal AI&#039;s Capacity for Deception&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2024-12-15.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Perrigo, Billy. [https://time.com/7202784/ai-research-strategic-lying/ &amp;quot;Exclusive: New Research Shows AI Strategically Lying&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2024-12-18.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some of these issues affect existing commercial systems such as LLMs,&amp;lt;ref name=&amp;quot;Opportunities_Risks&amp;quot;/&amp;gt;&amp;lt;ref name=feedback2022&amp;gt;Ouyang, Long. &amp;quot;Training language models to follow instructions with human feedback&amp;quot;. &#039;&#039;NeurIPS&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=OpenAICodex&amp;gt;Zaremba, Wojciech. [https://openai.com/blog/openai-codex/ &amp;quot;OpenAI Codex&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. 2021-08-10.&amp;lt;/ref&amp;gt; [[robot]]s,&amp;lt;ref&amp;gt;Kober, Jens. [http://journals.sagepub.com/doi/10.1177/0278364913495721 &amp;quot;Reinforcement learning in robotics: A survey&amp;quot;]. &#039;&#039;The International Journal of Robotics Research&#039;&#039;. 2013-09-01.&amp;lt;/ref&amp;gt; [[autonomous vehicles]],&amp;lt;ref&amp;gt;Knox, W. Bradley. &amp;quot;Reward (Mis)design for autonomous driving&amp;quot;. &#039;&#039;Artificial Intelligence&#039;&#039;. 2023-03-01.&amp;lt;/ref&amp;gt; and social media [[Recommender system|recommendation engines]].&amp;lt;ref name=&amp;quot;Opportunities_Risks&amp;quot;&amp;gt;Bommasani, Rishi. [https://fsi.stanford.edu/publication/opportunities-and-risks-foundation-models &amp;quot;On the Opportunities and Risks of Foundation Models&amp;quot;]. &#039;&#039;Stanford CRFM&#039;&#039;. 2022-07-12.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot;&amp;gt;Russell, Stuart J.. [https://www.penguinrandomhouse.com/books/566677/human-compatible-by-stuart-russell/ &amp;quot;Human compatible: Artificial intelligence and the problem of control&amp;quot;]. Penguin Random House.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Stray, Jonathan. &amp;quot;Aligning AI Optimization to Community Well-Being&amp;quot;. &#039;&#039;International Journal of Community Well-Being&#039;&#039;.&amp;lt;/ref&amp;gt; Some AI researchers argue that more capable future systems will be more severely affected because these problems partially result from high capabilities.&amp;lt;ref name=&amp;quot;AIMA&amp;quot;&amp;gt;Russell, Stuart. [https://aima.cs.berkeley.edu/ &amp;quot;Artificial Intelligence: A Modern Approach&amp;quot;]. Prentice Hall. 2009.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;mmmm2022&amp;quot;&amp;gt;Pan, Alexander. [https://openreview.net/forum?id=JYtwGwIL7ye &amp;quot;The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models&amp;quot;]. 2022-02-14.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;dlp2023&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many prominent AI researchers and AI company leaders have argued or asserted that AI is approaching human-like ([[artificial general intelligence|AGI]]) and [[super intelligence|superhuman]] ([[artificial superintelligence|ASI]]) cognitive capabilities, and could [[Existential risk from artificial general intelligence|endanger human civilization]] if misaligned.&amp;lt;ref name=&amp;quot;:2&amp;quot;&amp;gt;Smith, Craig S.. [https://www.forbes.com/sites/craigsmith/2023/05/04/geoff-hinton-ais-most-famous-researcher-warns-of-existential-threat/ &amp;quot;Geoff Hinton, AI&#039;s Most Famous Researcher, Warns Of &#039;Existential Threat&#039;&amp;quot;]. &#039;&#039;Forbes&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt; These include &amp;quot;AI godfathers&amp;quot; [[Geoffrey Hinton]] and [[Yoshua Bengio]] and the CEOs of [[OpenAI]], [[Anthropic]], and [[Google DeepMind]].&amp;lt;ref&amp;gt;Bengio, Yoshua. &amp;quot;Managing extreme AI risks amid rapid progress&amp;quot;. &#039;&#039;Science&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://www.safe.ai/statement-on-ai-risk &amp;quot;Statement on AI Risk&amp;quot;]. &#039;&#039;www.safe.ai&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Grace, Katja. &amp;quot;Thousands of AI Authors on the Future of AI&amp;quot;. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 2025.&amp;lt;/ref&amp;gt; These risks remain debated.&amp;lt;ref&amp;gt;Perrigo, Billy. [https://time.com/6694432/yann-lecun-meta-ai-interview/ &amp;quot;Meta&#039;s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2024-02-13.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI alignment is a subfield of [[AI safety]], the study of how to build safe AI systems.&amp;lt;ref&amp;gt;[https://www.techtarget.com/whatis/definition/AI-alignment &amp;quot;What is AI alignment?&amp;quot;]. &#039;&#039;[[TechTarget]]&#039;&#039;. 2023-05-03.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Ahmed, Shazeda. [https://firstmonday.org/ojs/index.php/fm/article/view/13626 &amp;quot;Field-building and the epistemic culture of AI safety&amp;quot;]. &#039;&#039;First Monday&#039;&#039;. 2024-04-14.&amp;lt;/ref&amp;gt; Other subfields of AI safety include robustness, monitoring, and [[AI capability control|capability control]].&amp;lt;ref name=&amp;quot;building2018&amp;quot; /&amp;gt; Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking.&amp;lt;ref name=&amp;quot;building2018&amp;quot;&amp;gt;Ortega, Pedro A.. [https://deepmindsafetyresearch.medium.com/building-safe-artificial-intelligence-52f5f75058f1 &amp;quot;Building safe artificial intelligence: specification, robustness, and assurance&amp;quot;]. &#039;&#039;DeepMind Safety Research – Medium&#039;&#039;. 2018-09-27.&amp;lt;/ref&amp;gt; Alignment research has connections to [[Explainable artificial intelligence|interpretability research]],&amp;lt;ref name=&amp;quot;:333&amp;quot;&amp;gt;Rorvig, Mordechai. [https://www.quantamagazine.org/researchers-glimpse-how-ai-gets-so-good-at-language-processing-20220414/ &amp;quot;Researchers Gain New Understanding From Simple AI&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. 2022-04-14.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Doshi-Velez, Finale. &amp;quot;Towards A Rigorous Science of Interpretable Machine Learning&amp;quot;. 2017-03-02.&lt;br /&gt;
*Wiblin, Robert. [https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/ &amp;quot;Chris Olah on what the hell is going on inside neural networks&amp;quot;]. August 4, 2021.&amp;lt;/ref&amp;gt; ([[adversarial machine learning|adversarial]]) robustness,&amp;lt;ref name=&amp;quot;concrete2016&amp;quot;&amp;gt;Amodei, Dario. &amp;quot;Concrete Problems in AI Safety&amp;quot;. 2016-06-21.&amp;lt;/ref&amp;gt; [[anomaly detection]], [[Uncertainty quantification|calibrated uncertainty]],&amp;lt;ref name=&amp;quot;:333&amp;quot; /&amp;gt; [[formal verification]],&amp;lt;ref&amp;gt;Russell, Stuart. [https://ojs.aaai.org/index.php/aimagazine/article/view/2577 &amp;quot;Research Priorities for Robust and Beneficial Artificial Intelligence&amp;quot;]. &#039;&#039;AI Magazine&#039;&#039;. 2015-12-31.&amp;lt;/ref&amp;gt; [[preference learning]],&amp;lt;ref name=&amp;quot;prefsurvey2017&amp;quot;&amp;gt;Wirth, Christian. &amp;quot;A survey of preference-based reinforcement learning methods&amp;quot;. &#039;&#039;Journal of Machine Learning Research&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;drlfhp&amp;quot;&amp;gt;Christiano, Paul F.. &amp;quot;Deep reinforcement learning from human preferences&amp;quot;. Curran Associates Inc..&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LessToxic&amp;quot;&amp;gt;Heaven, Will Douglas. [https://www.technologyreview.com/2022/01/27/1044398/new-gpt3-openai-chatbot-language-model-ai-toxic-misinformation/ &amp;quot;The new version of GPT-3 is much better behaved (and should be less toxic)&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2022-01-27.&amp;lt;/ref&amp;gt; [[Safety-critical system|safety-critical engineering]],&amp;lt;ref&amp;gt;Mohseni, Sina. [https://dl.acm.org/doi/10.1145/3551385 &amp;quot;Taxonomy of Machine Learning Safety: A Survey and Primer&amp;quot;]. &#039;&#039;ACM Computing Surveys&#039;&#039;. 2022-03-07.&amp;lt;/ref&amp;gt; [[game theory]],&amp;lt;ref&amp;gt;Clifton, Jesse. [https://longtermrisk.org/research-agenda/ &amp;quot;Cooperation, Conflict, and Transformative Artificial Intelligence: A Research Agenda&amp;quot;]. &#039;&#039;Center on Long-Term Risk&#039;&#039;.&lt;br /&gt;
*Dafoe, Allan. [http://www.nature.com/articles/d41586-021-01170-0 &amp;quot;Cooperative AI: machines must learn to find common ground&amp;quot;]. &#039;&#039;Nature&#039;&#039;. 2021-05-06.&amp;lt;/ref&amp;gt; [[Fairness (machine learning)|algorithmic fairness]],&amp;lt;ref name=&amp;quot;concrete2016&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Prunkl, Carina. &amp;quot;Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society&amp;quot;. ACM. 2020-02-07.&amp;lt;/ref&amp;gt; and [[social science]]s.&amp;lt;ref name=&amp;quot;:4&amp;quot;&amp;gt;Irving, Geoffrey. [https://distill.pub/2019/safety-needs-social-scientists &amp;quot;AI Safety Needs Social Scientists&amp;quot;]. &#039;&#039;Distill&#039;&#039;. 2019-02-19.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Gazos, Alexandros. &amp;quot;Organising AI for safety: Identifying structural vulnerabilities to guide the design of AI-enhanced socio-technical systems&amp;quot;. &#039;&#039;Safety Science&#039;&#039;. 2025-04-01.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Objectives in AI ==&lt;br /&gt;
&#039;&#039;Main article: [[Intelligent agent#Objective function]]&#039;&#039;&lt;br /&gt;
Programmers provide an AI system such as [[AlphaZero]] with an &amp;quot;objective function&amp;quot;,{{efn|Terminology varies based on context. Similar concepts include goal function, utility function, loss function, etc.}} in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal &amp;quot;model&amp;quot; of its environment. This model encapsulates all the agent&#039;s beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize{{efn|or minimize, depending on the context}} the value{{efn|in the presence of uncertainty, the [[expected value]]}} of its objective function.&amp;lt;ref&amp;gt;Bringsjord, Selmer. &amp;quot;The Stanford Encyclopedia of Philosophy&amp;quot;. Metaphysics Research Lab, Stanford University. 2020.&amp;lt;/ref&amp;gt; For example, when AlphaZero is trained on chess, it has a simple objective function of &amp;quot;+1 if AlphaZero wins, −1 if AlphaZero loses&amp;quot;. During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to attain the maximum value of +1.&amp;lt;ref name=&amp;quot;quanta alphazero&amp;quot;&amp;gt;[https://www.quantamagazine.org/why-alphazeros-artificial-intelligence-has-trouble-with-the-real-world-20180221/ &amp;quot;Why AlphaZero&#039;s Artificial Intelligence Has Trouble With the Real World&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. 2018.&amp;lt;/ref&amp;gt; Similarly, a [[reinforcement learning]] system can have a &amp;quot;reward function&amp;quot; that allows the programmers to shape the AI&#039;s desired behavior.&amp;lt;ref name=&amp;quot;quanta problem&amp;quot;&amp;gt;Wolchover, Natalie. [https://www.quantamagazine.org/artificial-intelligence-will-do-what-we-ask-thats-a-problem-20200130/ &amp;quot;Artificial Intelligence Will Do What We Ask. That&#039;s a Problem.&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. 30 January 2020.&amp;lt;/ref&amp;gt; An [[evolutionary algorithm]]&#039;s behavior is shaped by a &amp;quot;[[fitness function]]&amp;quot;.&amp;lt;ref&amp;gt;Bull, Larry. &amp;quot;On model-based evolutionary computation&amp;quot;. &#039;&#039;Soft Computing&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
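&lt;br /&gt;
The role of an objective function can be sketched in a few lines of Python (a hypothetical illustration, not AlphaZero&#039;s actual implementation): the objective assigns a score to each outcome, and the agent chooses whichever action its internal model predicts will maximize the expected score.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hypothetical sketch: an objective function scores outcomes, and the&lt;br /&gt;
# agent picks the action that maximizes the expected objective value.&lt;br /&gt;
def objective(outcome):&lt;br /&gt;
    # +1 for a win, -1 for a loss, as with AlphaZero at chess.&lt;br /&gt;
    return {&#039;win&#039;: 1.0, &#039;draw&#039;: 0.0, &#039;loss&#039;: -1.0}[outcome]&lt;br /&gt;
&lt;br /&gt;
def expected_value(outcome_probs):&lt;br /&gt;
    # outcome_probs is the agent&#039;s model of P(outcome | action).&lt;br /&gt;
    return sum(p * objective(o) for o, p in outcome_probs.items())&lt;br /&gt;
&lt;br /&gt;
def choose_action(model):&lt;br /&gt;
    # model maps each available action to a predicted outcome distribution.&lt;br /&gt;
    return max(model, key=lambda action: expected_value(model[action]))&lt;br /&gt;
&lt;br /&gt;
model = {&lt;br /&gt;
    &#039;attack&#039;: {&#039;win&#039;: 0.5, &#039;draw&#039;: 0.1, &#039;loss&#039;: 0.4},  # expected value 0.1&lt;br /&gt;
    &#039;defend&#039;: {&#039;win&#039;: 0.3, &#039;draw&#039;: 0.6, &#039;loss&#039;: 0.1},  # expected value 0.2&lt;br /&gt;
}&lt;br /&gt;
print(choose_action(model))  # &#039;defend&#039;, the higher expected value&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;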
&lt;br /&gt;
==Alignment problem==&lt;br /&gt;
{{redirect|Alignment problem|the book|The Alignment Problem}}&lt;br /&gt;
In 1960, AI pioneer [[Norbert Wiener]] described the AI alignment problem as follows: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively [...] we had better be quite sure that the purpose put into the machine is the purpose which we really desire.&amp;lt;ref name=&amp;quot;Wiener1960&amp;quot;&amp;gt;Wiener, Norbert. [https://www.science.org/doi/10.1126/science.131.3410.1355 &amp;quot;Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers.&amp;quot;]. &#039;&#039;Science&#039;&#039;. 1960-05-06.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI alignment refers to ensuring that an AI system&#039;s objectives match some target. The target is variously defined as the goals of the system&#039;s designers or users, widely shared values, objective ethical standards, legal requirements, or the intentions its designers would have if they were more informed and enlightened.&amp;lt;ref name=Gabriel2020&amp;gt;Gabriel, Iason. &amp;quot;Artificial Intelligence, Values, and Alignment&amp;quot;. &#039;&#039;Minds and Machines&#039;&#039;. 2020-09-01.&amp;lt;/ref&amp;gt; In [[democracy|democratic]] AI alignment, the target is the values and preferences of [[Median voter theorem|median voters]], which proponents argue increases [[political legitimacy]].&amp;lt;ref name=&amp;quot;q140&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;d576&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
AI alignment is an open problem for modern AI systems&amp;lt;ref&amp;gt;The Ezra Klein Show. [https://www.nytimes.com/2021/06/04/opinion/ezra-klein-podcast-brian-christian.html &amp;quot;If &#039;All Models Are Wrong,&#039; Why Do We Give Them So Much Power?&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2021-06-04.&lt;br /&gt;
* Wolchover, Natalie. [https://www.quantamagazine.org/artificial-intelligence-aligned-with-human-values-qa-with-stuart-russell-20150421/ &amp;quot;Concerns of an Artificial Intelligence Pioneer&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. 2015-04-21.&lt;br /&gt;
* California Assembly. [https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180ACR215 &amp;quot;Bill Text – ACR-215 23 Asilomar AI Principles.&amp;quot;].&amp;lt;/ref&amp;gt;&amp;lt;ref name=MasteringLanguage&amp;gt;Johnson, Steven. [https://www.nytimes.com/2022/04/15/magazine/ai-language.html &amp;quot;A.I. Is Mastering Language. Should We Trust What It Says?&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2022-04-15.&amp;lt;/ref&amp;gt; and is a research field within AI.&amp;lt;ref&amp;gt;OpenAI. [https://openai.com/blog/our-approach-to-alignment-research &amp;quot;Developing safe &amp;amp; responsible AI&amp;quot;].&lt;br /&gt;
* [https://deepmindsafetyresearch.medium.com &amp;quot;DeepMind Safety Research&amp;quot;]. &#039;&#039;Medium&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt; Aligning AI involves two main challenges: carefully [[Specification (technical standard)|specifying]] the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment).{{r|dlp2023}} Researchers also attempt to create AI models that have [[AI safety#Adversarial robustness|robust]] alignment, sticking to safety constraints even when users [[adversarial attack|adversarially]] try to bypass them.&lt;br /&gt;
&lt;br /&gt;
=== Specification gaming and side effects ===&lt;br /&gt;
&#039;&#039;Main article: [[Reward hacking]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To specify an AI system&#039;s purpose, AI designers typically provide an [[Reward function|objective function]], [[Supervised learning|examples]], or [[Reinforcement learning|feedback]] to the system. But designers are often unable to completely specify all important values and constraints, so they resort to easy-to-specify &#039;&#039;proxy goals&#039;&#039; such as [[Reinforcement learning from human feedback|maximizing the approval]] of human overseers, who are fallible.{{r|concrete2016|building2018}}&amp;lt;ref name=Unsolved2022&amp;gt;Hendrycks, Dan. &amp;quot;Unsolved Problems in ML Safety&amp;quot;. 2022-06-16.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Russell, Stuart J.. [https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html &amp;quot;Artificial intelligence: a modern approach&amp;quot;]. Pearson. 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref name=SpecGaming2020&amp;gt;Krakovna, Victoria. [https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity &amp;quot;Specification gaming: the flip side of AI ingenuity&amp;quot;]. &#039;&#039;Deepmind&#039;&#039;. 2020-04-21.&amp;lt;/ref&amp;gt; As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as &#039;&#039;specification gaming&#039;&#039; or &#039;&#039;reward hacking&#039;&#039;, and is an instance of [[Goodhart&#039;s law]].{{r|SpecGaming2020|mmmm2022}} As AI systems become more capable, they are often able to game their specifications more effectively.{{r|mmmm2022}}&lt;br /&gt;
[[File:Robot hand trained with human feedback &#039;pretends&#039; to grasp ball.ogg|right|thumb|An AI system was trained using human feedback to grab a ball, but instead learned to place its hand between the ball and camera, making it falsely appear successful.&amp;lt;ref name=&amp;quot;lfhp2017&amp;quot;&amp;gt;Amodei, Dario. [https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/ &amp;quot;Learning from Human Preferences&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. 2017-06-13.&amp;lt;/ref&amp;gt; Some research on alignment aims to avert solutions that are false but convincing.]]&lt;br /&gt;
&lt;br /&gt;
Specification gaming has been observed in numerous AI systems.{{r|SpecGaming2020}}&amp;lt;ref&amp;gt;[https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml &amp;quot;Specification gaming examples in AI - master list - Google Drive&amp;quot;]. &#039;&#039;docs.google.com&#039;&#039;.&amp;lt;/ref&amp;gt; [[OpenAI]] [[Generative pre-trained transformer|GPT]] models for programming, including in real-world cases, have been found to explicitly plan to hack the tests used to evaluate them so as to falsely appear successful (e.g., stating &amp;quot;let&#039;s hack&amp;quot; in their reasoning). When the company penalized this, many models learned to obfuscate their plans while continuing to hack the tests.&amp;lt;ref name=&amp;quot;:7&amp;quot;&amp;gt;[https://openai.com/index/chain-of-thought-monitoring/ &amp;quot;Detecting misbehavior in frontier reasoning models&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt; Another system was trained to finish a simulated boat race by rewarding the system for hitting targets along the track, but the system achieved more reward by looping and crashing into the same targets indefinitely.&amp;lt;ref&amp;gt;Clark, Jack. [https://openai.com/research/faulty-reward-functions &amp;quot;Faulty reward functions in the wild&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 21 Dec 2016.&amp;lt;/ref&amp;gt; A 2025 Palisade Research study found that when tasked to win at chess against a stronger opponent, some [[Reasoning language model|reasoning LLMs]] attempted to hack the game system, for example by modifying or entirely deleting their opponent.&amp;lt;ref&amp;gt;Booth, Harry. [https://time.com/7259395/ai-chess-cheating-palisade-research/ &amp;quot;When AI Thinks It Will Lose, It Sometimes Cheats&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2025-02-19.&amp;lt;/ref&amp;gt; Some alignment researchers aim to help humans detect specification gaming and steer AI systems toward carefully specified objectives that are safe and useful to pursue.&lt;br /&gt;
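&lt;br /&gt;
The boat-race incident reduces to a toy calculation (an illustrative sketch with made-up reward values, not the original environment): when a checkpoint can be hit repeatedly, the discounted return of looping forever exceeds the return of finishing the race, so a return-maximizing agent favors the loop.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy sketch of reward hacking: +1 per checkpoint hit, +10 for finishing.&lt;br /&gt;
GAMMA = 0.99  # discount factor&lt;br /&gt;
&lt;br /&gt;
def discounted_return(reward_at_step, horizon=10_000):&lt;br /&gt;
    return sum(reward_at_step(t) * GAMMA ** t for t in range(horizon))&lt;br /&gt;
&lt;br /&gt;
# Intended policy: hit 3 checkpoints (steps 1, 3, 5), finish at step 10.&lt;br /&gt;
def finish(t):&lt;br /&gt;
    if t in (1, 3, 5):&lt;br /&gt;
        return 1.0&lt;br /&gt;
    return 10.0 if t == 10 else 0.0&lt;br /&gt;
&lt;br /&gt;
# Hacking policy: circle so a repeatable checkpoint is hit every 2nd step.&lt;br /&gt;
def loop(t):&lt;br /&gt;
    return 1.0 if t % 2 == 1 else 0.0&lt;br /&gt;
&lt;br /&gt;
print(f&#039;finish the race: {discounted_return(finish):.1f}&#039;)  # about 12.0&lt;br /&gt;
print(f&#039;loop forever:    {discounted_return(loop):.1f}&#039;)    # about 49.7&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;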
&lt;br /&gt;
When a misaligned AI system is [[software deployment|deployed]], it can have consequential side effects. Social media platforms have been known to optimize their recommendation algorithms for [[click-through rate]]s, causing user addiction on a global scale.{{r|Unsolved2022}} Stanford researchers say that such [[recommender system]]s are misaligned with their users because they &amp;quot;optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being&amp;quot;.{{r|Opportunities_Risks}}&lt;br /&gt;
&lt;br /&gt;
Explaining such side effects, Berkeley computer scientist [[Stuart J. Russell]] said that the omission of implicit constraints can cause harm: &amp;quot;A system [...] will often set [...] unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the [[The Sorcerer&#039;s Apprentice|sorcerer&#039;s apprentice]], or [[Midas|King Midas]]: you get exactly what you ask for, not what you want.&amp;quot;&amp;lt;ref name=&amp;quot;:5&amp;quot;&amp;gt;Russell, Stuart. [https://www.edge.org/conversation/the-myth-of-ai &amp;quot;Of Myths and Moonshine&amp;quot;]. &#039;&#039;Edge.org&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov&#039;s [[Three Laws of Robotics]]).&amp;lt;ref&amp;gt;Tasioulas, John. &amp;quot;First Steps Towards an Ethics of Robots and Artificial Intelligence&amp;quot;. &#039;&#039;Journal of Practical Ethics&#039;&#039;.&amp;lt;/ref&amp;gt; But [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] argue that this approach overlooks the complexity of human values:&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt; &amp;quot;It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective.&amp;quot;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned).&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Uscov, Silvia. &amp;quot;Algorithmic Law&amp;quot;. [[Alexandru Ioan Cuza University]].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Pressure to deploy unsafe systems ===&lt;br /&gt;
Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems.{{r|Unsolved2022}} For example, social media [[recommender system]]s have been profitable despite creating unwanted addiction and [[political polarization|polarization]].{{r|Opportunities_Risks}}&amp;lt;ref name=&amp;quot;:722&amp;quot;&amp;gt;Wells, Georgia. [https://www.wsj.com/articles/facebook-bad-for-you-360-million-users-say-yes-company-documents-facebook-files-11636124681 &amp;quot;Is Facebook Bad for You? It Is for About 360 Million Users, Company Surveys Suggest&amp;quot;]. &#039;&#039;The Wall Street Journal&#039;&#039;. 2021-11-05.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:822&amp;quot;&amp;gt;Barrett, Paul M.. [https://bhr.stern.nyu.edu/polarization-report-page &amp;quot;How Social Media Intensifies U.S. Political Polarization-And What Can Be Done About It&amp;quot;]. Center for Business and Human Rights, NYU. September 2021.&amp;lt;/ref&amp;gt; Competitive pressure can also lead to a [[race to the bottom]] on AI safety standards. For example, OpenAI has been sued for releasing a ChatGPT version that encouraged suicide in some vulnerable users, a behavior the company had overlooked amid a rushed product release.&amp;lt;ref&amp;gt;Ostrovsky, Nikita. [https://time.com/7327946/chatgpt-openai-suicide-adam-raine-lawsuit/ &amp;quot;OpenAI Removed Safeguards Before Teen&#039;s Suicide, Amended Lawsuit Claims&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2025-10-23.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://openai.com/index/sycophancy-in-gpt-4o/ &amp;quot;Sycophancy in GPT-4o: What happened and what we&#039;re doing about it&amp;quot;]. &#039;&#039;openai.com&#039;&#039;. 2025-04-29.&amp;lt;/ref&amp;gt; Similarly, in 2018, a self-driving car killed a pedestrian ([[Death of Elaine Herzberg|Elaine Herzberg]]) after engineers disabled the emergency braking system because it was oversensitive and slowed development.&amp;lt;ref&amp;gt;Shepardson, David. [https://www.reuters.com/article/us-uber-crash-idUSKCN1IP26K &amp;quot;Uber disabled emergency braking in self-driving car: U.S. agency&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 2018-05-24.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Risks from advanced misaligned AI ===&lt;br /&gt;
Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI development is rapid and industry and governments are trying to build advanced AI. As AI capabilities continue to expand in scope, they could unlock many opportunities if the systems are aligned, but their increased complexity may also make alignment harder, potentially posing large-scale hazards.&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Development of advanced AI ====&lt;br /&gt;
Many AI companies, such as [[OpenAI]],&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ &amp;quot;The messy, secretive reality behind OpenAI&#039;s bid to save the world&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt; [[Meta Platforms|Meta]]&amp;lt;ref&amp;gt;Heath, Alex. [https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview &amp;quot;Mark Zuckerberg&#039;s new goal is creating artificial general intelligence&amp;quot;]. &#039;&#039;The Verge&#039;&#039;. 2024-01-18.&amp;lt;/ref&amp;gt; and [[DeepMind]],&amp;lt;ref&amp;gt;Johnson, Dave. [https://www.businessinsider.com/google-deepmind &amp;quot;DeepMind is Google&#039;s AI research hub. Here&#039;s what it does, where it&#039;s located, and how it differs from OpenAI.&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;.&amp;lt;/ref&amp;gt; have stated their aim to develop [[artificial general intelligence]] (AGI), a hypothesized AI system that matches or outperforms humans in most or all cognitive work. Researchers who scale modern [[neural network]]s observe that they indeed develop increasingly general and unanticipated capabilities.{{r|Opportunities_Risks}}&amp;lt;ref name=eallm2022&amp;gt;Wei, Jason. &amp;quot;Emergent Abilities of Large Language Models&amp;quot;. &#039;&#039;Transactions on Machine Learning Research&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;&#039;&#039;ICLR Poster&#039;&#039;.&amp;lt;/ref&amp;gt; Such models have learned to operate a computer or write their own programs; a single &amp;quot;generalist&amp;quot; network can chat, control robots, play games, and interpret photographs.&amp;lt;ref&amp;gt;Dominguez, Daniel. [https://www.infoq.com/news/2022/05/deepmind-gato-ai-agent/ &amp;quot;DeepMind Introduces Gato, a New Generalist AI Agent&amp;quot;]. &#039;&#039;InfoQ&#039;&#039;. 2022-05-19.&lt;br /&gt;
* Edwards, Ben. [https://arstechnica.com/information-technology/2022/09/new-ai-assistant-can-browse-search-and-use-web-apps-like-a-human/ &amp;quot;Adept&#039;s AI assistant can browse, search, and use web apps like a human&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2022-04-26.&amp;lt;/ref&amp;gt; According to surveys, some leading [[machine learning]] researchers expect AGI to be created within decades, while some believe it will take much longer. Many consider both scenarios possible.&amp;lt;ref&amp;gt;Grace, Katja. &amp;quot;Thousands of AI Authors on the Future of AI&amp;quot;. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 2025.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2822&amp;quot;&amp;gt;Grace, Katja. [http://jair.org/index.php/jair/article/view/11222 &amp;quot;Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts&amp;quot;]. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 2018-07-31.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2922&amp;quot;&amp;gt;Zhang, Baobao. [https://jair.org/index.php/jair/article/view/12895 &amp;quot;Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers&amp;quot;]. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 2021-08-02.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, &amp;quot;Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.&amp;quot;&amp;lt;ref name=&amp;quot;:1701&amp;quot;&amp;gt;Future of Life Institute. [https://futureoflife.org/open-letter/pause-giant-ai-experiments/ &amp;quot;Pause Giant AI Experiments: An Open Letter&amp;quot;]. 2023-03-22.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Power-seeking ====&lt;br /&gt;
As of 2023, AI systems still have limited long-term [[Automated planning and scheduling|planning]] ability and [[Situation awareness|situational awareness]]{{r|Opportunities_Risks}}, but large efforts are underway to change this.&amp;lt;ref&amp;gt;Wang, Lei. [https://ui.adsabs.harvard.edu/abs/2023arXiv230811432W &amp;quot;A survey on large language model based autonomous agents&amp;quot;]. &#039;&#039;Frontiers of Computer Science&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Laine, Rudolf. [https://openreview.net/forum?id=DRk4bWKr41&amp;amp;referrer=%5Bthe+profile+of+Rudolf+Laine%5D(/profile?id=~Rudolf_Laine1) &amp;quot;Towards a Situational Awareness Benchmark for LLMs&amp;quot;]. &#039;&#039;NeurIPS 2023 SoLaR Workshop&#039;&#039;. 2023-11-28.&amp;lt;/ref&amp;gt; Future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted [[#Power-seeking and instrumental strategies|&#039;&#039;power-seeking&#039;&#039;]] strategies. Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers). Although power-seeking is not explicitly programmed, it can emerge because agents that have more power are better able to accomplish their goals.{{r|Opportunities_Risks|Carlsmith2022}} This tendency, known as [[instrumental convergence]], has already emerged in various [[reinforcement learning]] agents including language models.&amp;lt;ref name=&amp;quot;:3&amp;quot;&amp;gt;Pan, Alexander. &amp;quot;Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark&amp;quot;. &#039;&#039;Proceedings of the 40th International Conference on Machine Learning&#039;&#039;. 2023-04-03.&amp;lt;/ref&amp;gt;&amp;lt;ref name=dllmmwe2022&amp;gt;Perez, Ethan. [https://aclanthology.org/2023.findings-acl.847/ &amp;quot;Discovering Language Model Behaviors with Model-Written Evaluations&amp;quot;]. &#039;&#039;ACL&#039;&#039;. 2022-12-19.&amp;lt;/ref&amp;gt; Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments.&amp;lt;ref name=optsp&amp;gt;Turner, Alexander Matt. [https://openreview.net/forum?id=l7-DBWawSZH &amp;quot;Optimal policies tend to seek power&amp;quot;].&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Turner, Alexander Matt. [https://openreview.net/forum?id=GFgjnk2Q-ju &amp;quot;Parametrically retargetable decision-makers tend to seek power&amp;quot;].&amp;lt;/ref&amp;gt; As a result, the deployment of such systems might be irreversible, since a power-seeking system could resist attempts to shut it down. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created.{{r|Carlsmith2022}}&amp;lt;ref name=Superintelligence&amp;gt;Bostrom, Nick. &amp;quot;Superintelligence: Paths, Dangers, Strategies&amp;quot;. Oxford University Press, Inc.. 2014.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them.{{r|Carlsmith2022}} Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed.{{r|Carlsmith2022}}&lt;br /&gt;
&lt;br /&gt;
====Existential risk (x-risk)====&lt;br /&gt;
&#039;&#039;See also: [[Existential risk from artificial intelligence|AI takeover]]&#039;&#039;&lt;br /&gt;
According to some researchers, humans owe their dominance over other species to their greater cognitive abilities. Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks.&amp;lt;ref name=&amp;quot;aima4&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that &amp;quot;Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war&amp;quot;.&amp;lt;ref name=&amp;quot;:1&amp;quot;&amp;gt;[https://www.safe.ai/statement-on-ai-risk &amp;quot;Statement on AI Risk&amp;quot;]. &#039;&#039;www.safe.ai&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Roose, Kevin. [https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html &amp;quot;A.I. Poses &#039;Risk of Extinction,&#039; Industry Leaders Warn&amp;quot;]. &#039;&#039;The New York Times&#039;&#039;. 2023-05-30.&amp;lt;/ref&amp;gt; Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include [[Geoffrey Hinton]],&amp;lt;ref name=&amp;quot;:2&amp;quot; /&amp;gt; [[Alan Turing]],{{efn|In a 1951 lecture&amp;lt;ref&amp;gt;The Turing Digital Archive.&amp;lt;/ref&amp;gt; Turing argued that &amp;quot;It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler&#039;s Erewhon.&amp;quot; Also, in a lecture broadcast on the BBC,&amp;lt;ref&amp;gt;Turing, Alan. &amp;quot;Can digital computers think?&amp;quot;. 15 May 1951.&amp;lt;/ref&amp;gt; he said: &amp;quot;If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled.... This new danger... is certainly something which can give us anxiety.&amp;quot;}} [[Ilya Sutskever]],&amp;lt;ref name=&amp;quot;:3022&amp;quot;&amp;gt;Muehlhauser, Luke. [https://lukemuehlhauser.com/sutskever-on-talking-machines/ &amp;quot;Sutskever on Talking Machines&amp;quot;]. &#039;&#039;Luke Muehlhauser&#039;&#039;. 2016-01-29.&amp;lt;/ref&amp;gt; [[Yoshua Bengio]],&amp;lt;ref name=&amp;quot;:1&amp;quot; /&amp;gt; [[Judea Pearl]],{{efn|Pearl wrote &amp;quot;Human Compatible made me a convert to Russell&#039;s concerns with our ability to control our upcoming creation{{en dash}}super-intelligent machines. Unlike outside alarmists and futurists, Russell is a leading authority on AI. His new book will educate the public about AI more than any book I can think of, and is a delightful and uplifting read&amp;quot; about Russell&#039;s book &#039;&#039;[[Human Compatible|Human Compatible: AI and the Problem of Control]]&#039;&#039;&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt; which argues that existential risk to humanity from misaligned AI is a serious concern worth addressing today.}} [[Murray Shanahan]],&amp;lt;ref name=&amp;quot;:3122&amp;quot;&amp;gt;Shanahan, Murray. &amp;quot;The technological singularity&amp;quot;. MIT Press.
2015.&amp;lt;/ref&amp;gt; [[Norbert Wiener]],{{r|Wiener1960|:2102}} [[Marvin Minsky]],{{efn|Russell &amp;amp; Norvig&amp;lt;ref name=&amp;quot;AIMA&amp;quot; /&amp;gt; note: &amp;quot;The &amp;quot;King Midas problem&amp;quot; was anticipated by Marvin Minsky, who once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers.&amp;quot;}} [[Francesca Rossi]],&amp;lt;ref name=&amp;quot;:3322&amp;quot;&amp;gt;Rossi, Francesca. [https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/how-do-you-teach-a-machine-to-be-moral/ &amp;quot;How do you teach a machine to be moral?&amp;quot;]. &#039;&#039;The Washington Post&#039;&#039;.&amp;lt;/ref&amp;gt; [[Scott Aaronson]],&amp;lt;ref name=&amp;quot;:3422&amp;quot;&amp;gt;Aaronson, Scott. [https://scottaaronson.blog/?p=6484 &amp;quot;OpenAI!&amp;quot;]. &#039;&#039;Shtetl-Optimized&#039;&#039;. 2022-06-17.&amp;lt;/ref&amp;gt; [[Bart Selman]],&amp;lt;ref name=&amp;quot;:3522&amp;quot;&amp;gt;Selman, Bart. [https://futureoflife.org/data/PDF/bart_selman.pdf &amp;quot;Intelligence Explosion: Science or Fiction?&amp;quot;].&amp;lt;/ref&amp;gt; [[David A. McAllester|David McAllester]],&amp;lt;ref name=&amp;quot;:3622&amp;quot;&amp;gt;McAllester. [https://machinethoughts.wordpress.com/2014/08/10/friendly-ai-and-the-servant-mission/ &amp;quot;Friendly AI and the Servant Mission&amp;quot;]. &#039;&#039;Machine Thoughts&#039;&#039;. 2014-08-10.&amp;lt;/ref&amp;gt; [[Marcus Hutter]],&amp;lt;ref name=&amp;quot;AGISafetyLitReview&amp;quot;&amp;gt;Everitt, Tom. [https://dl.acm.org/doi/10.5555/3304652.3304782 &amp;quot;AGI Safety Literature Review&amp;quot;]. &#039;&#039;IJCAI&#039;&#039;. 2018-05-21.&amp;lt;/ref&amp;gt; [[Shane Legg]],&amp;lt;ref name=&amp;quot;:3822&amp;quot;&amp;gt;Shane. [http://www.vetta.org/2009/08/funding-safe-agi/ &amp;quot;Funding safe AGI&amp;quot;]. &#039;&#039;vetta project&#039;&#039;. 2009-08-31.&amp;lt;/ref&amp;gt; [[Eric Horvitz]],&amp;lt;ref name=&amp;quot;:3922&amp;quot;&amp;gt;Horvitz, Eric. [http://erichorvitz.com/OSTP-CMU_AI_Safety_framing_talk.pdf &amp;quot;Reflections on Safety and Artificial Intelligence&amp;quot;]. &#039;&#039;Eric Horvitz&#039;&#039;. 2016-06-27.&amp;lt;/ref&amp;gt; and Stuart J. Russell.&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt; Skeptical researchers such as [[François Chollet]],&amp;lt;ref name=&amp;quot;:4022&amp;quot;&amp;gt;Chollet, François. [https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec &amp;quot;The implausibility of intelligence explosion&amp;quot;]. &#039;&#039;Medium&#039;&#039;. 2018-12-08.&amp;lt;/ref&amp;gt; [[Gary Marcus]],&amp;lt;ref name=&amp;quot;:4122&amp;quot;&amp;gt;Marcus, Gary. [https://www.scientificamerican.com/article/artificial-general-intelligence-is-not-as-imminent-as-you-might-think1/ &amp;quot;Artificial General Intelligence Is Not as Imminent as You Might Think&amp;quot;]. &#039;&#039;Scientific American&#039;&#039;. 2022-06-06.&amp;lt;/ref&amp;gt; [[Yann LeCun]],&amp;lt;ref name=&amp;quot;:4322&amp;quot;&amp;gt;Barber, Lynsey. [https://www.cityam.com/phew-facebooks-ai-chief-says-intelligent-machines-not/ &amp;quot;Phew! Facebook&#039;s AI chief says intelligent machines are not a threat to humanity&amp;quot;]. &#039;&#039;CityAM&#039;&#039;. 2016-07-31.&amp;lt;/ref&amp;gt; and [[Oren Etzioni]]&amp;lt;ref&amp;gt;Etzioni, Oren. 
[https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ &amp;quot;No, the Experts Don&#039;t Think Superintelligent AI is a Threat to Humanity&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. September 20, 2016.&amp;lt;/ref&amp;gt; have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align.&lt;br /&gt;
&lt;br /&gt;
Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes,{{r|mmmm2022}} to strategically mislead their designers, and to protect and increase their power{{r|optsp|Carlsmith2022}} and intelligence. Additionally, they could have more severe side effects. They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise, and therefore harder to align.{{r|:2102|Superintelligence}}&lt;br /&gt;
&lt;br /&gt;
== Research problems and approaches ==&lt;br /&gt;
=== Learning human values and preferences ===&lt;br /&gt;
&lt;br /&gt;
Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify.{{r|Gabriel2020}} Because AI systems often learn to take advantage of minor imperfections in the specified objective,{{r|concrete2016|SpecGaming2020}}&amp;lt;ref&amp;gt;Rochon, Louis-Philippe. [https://books.google.com/books?id=6kzfBgAAQBAJ &amp;quot;The Encyclopedia of Central Banking&amp;quot;]. Edward Elgar Publishing. 2015-02-27.&amp;lt;/ref&amp;gt; researchers aim to specify intended behavior as completely as possible using datasets that represent human values, [[imitation learning]], or preference learning.{{r|Christian2020|at=Chapter 7}} A central open problem is [[#Scalable oversight|&#039;&#039;scalable oversight&#039;&#039;]], the difficulty of supervising an AI system that can outperform or mislead humans in a given domain.{{r|concrete2016}}&lt;br /&gt;
&lt;br /&gt;
Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. [[Inverse reinforcement learning]] (IRL) extends this by inferring the human&#039;s objective from the human&#039;s demonstrations.{{r|Christian2020|page=88}}&amp;lt;ref&amp;gt;Ng, Andrew Y.. [https://dl.acm.org/doi/10.5555/645529.657801 &amp;quot;Algorithms for Inverse Reinforcement Learning&amp;quot;]. &#039;&#039;Proceedings of the Seventeenth International Conference on Machine Learning&#039;&#039;. 2000-06-29.&amp;lt;/ref&amp;gt; Cooperative IRL (CIRL) assumes that a human and an AI agent can work together to teach the agent the human&#039;s reward function and to maximize it.&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Hadfield-Menell, Dylan. &amp;quot;Cooperative inverse reinforcement learning&amp;quot;. Curran Associates, Inc..&amp;lt;/ref&amp;gt; In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see {{Section link||Power-seeking and instrumental strategies}}).&amp;lt;ref name=OffSwitch&amp;gt;Hadfield-Menell, Dylan. [https://dl.acm.org/doi/10.5555/3171642.3171675 &amp;quot;The off-switch game&amp;quot;]. &#039;&#039;Proceedings of the 26th International Joint Conference on Artificial Intelligence&#039;&#039;. 2017-08-19.&amp;lt;/ref&amp;gt; But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks.&amp;lt;ref&amp;gt;Mindermann, Soren. &amp;quot;Occam&#039;s razor is insufficient to infer the preferences of irrational agents&amp;quot;. Curran Associates Inc..&amp;lt;/ref&amp;gt;{{r|AGISafetyLitReview}}&lt;br /&gt;
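&lt;br /&gt;
The inference at the core of IRL can be illustrated with a minimal sketch, assuming a toy setting with only two candidate reward functions and a softmax (&#039;&#039;noisily rational&#039;&#039;) model of the human; the names and numbers below are invented for illustration and are not taken from the cited work:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy Bayesian inverse reinforcement learning: infer which of two&lt;br /&gt;
# candidate reward functions best explains observed human choices.&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
ACTIONS = [&amp;quot;tea&amp;quot;, &amp;quot;coffee&amp;quot;]&lt;br /&gt;
CANDIDATE_REWARDS = {&lt;br /&gt;
    &amp;quot;likes_tea&amp;quot;: {&amp;quot;tea&amp;quot;: 1.0, &amp;quot;coffee&amp;quot;: 0.0},&lt;br /&gt;
    &amp;quot;likes_coffee&amp;quot;: {&amp;quot;tea&amp;quot;: 0.0, &amp;quot;coffee&amp;quot;: 1.0},&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def choice_prob(reward, action, beta=3.0):&lt;br /&gt;
    # Probability that a noisily rational human picks this action.&lt;br /&gt;
    weights = {a: math.exp(beta * reward[a]) for a in ACTIONS}&lt;br /&gt;
    return weights[action] / sum(weights.values())&lt;br /&gt;
&lt;br /&gt;
def update_posterior(posterior, observed_action):&lt;br /&gt;
    # Bayes update over candidate rewards after one demonstration.&lt;br /&gt;
    unnorm = {name: posterior[name] * choice_prob(r, observed_action)&lt;br /&gt;
              for name, r in CANDIDATE_REWARDS.items()}&lt;br /&gt;
    z = sum(unnorm.values())&lt;br /&gt;
    return {name: p / z for name, p in unnorm.items()}&lt;br /&gt;
&lt;br /&gt;
posterior = {&amp;quot;likes_tea&amp;quot;: 0.5, &amp;quot;likes_coffee&amp;quot;: 0.5}&lt;br /&gt;
for demo in [&amp;quot;coffee&amp;quot;, &amp;quot;coffee&amp;quot;, &amp;quot;tea&amp;quot;]:&lt;br /&gt;
    posterior = update_posterior(posterior, demo)&lt;br /&gt;
print(posterior)  # most probability mass is now on &amp;quot;likes_coffee&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;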
&lt;br /&gt;
Other researchers explore how to teach AI models complex behavior through [[reinforcement learning from human feedback|preference learning]], in which humans provide feedback on which behavior they prefer.{{r|prefsurvey2017|LessToxic}} To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward. Researchers at OpenAI used this approach to train chatbots like [[ChatGPT]] and [[InstructGPT]], which produce more compelling text than models trained to imitate humans.{{r|feedback2022}} Preference learning has also been an influential tool for recommender systems and web search,&amp;lt;ref&amp;gt;Fürnkranz, Johannes. [http://drops.dagstuhl.de/opus/volltexte/2014/4550/ &amp;quot;Preference Learning&amp;quot;]. &#039;&#039;Dagstuhl Reports&#039;&#039;.&amp;lt;/ref&amp;gt; but an open problem is &#039;&#039;proxy gaming&#039;&#039;: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch between its intended behavior and the helper model&#039;s feedback to gain more reward.{{r|concrete2016}}&amp;lt;ref&amp;gt;Gao, Leo. [https://dl.acm.org/doi/10.5555/3618408.3618845 &amp;quot;Scaling Laws for Reward Model Overoptimization&amp;quot;]. &#039;&#039;ICML&#039;&#039;. 2022-10-19.&amp;lt;/ref&amp;gt; AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating [[Echo chamber (media)|echo chambers]]{{r|dllmmwe2022}} (see {{Section link||Scalable oversight}}).&lt;br /&gt;
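&lt;br /&gt;
A helper (&#039;&#039;reward&#039;&#039;) model of this kind is commonly fit to pairwise comparisons with a Bradley–Terry-style objective. The sketch below is a minimal, self-contained illustration with a linear reward model and hand-made feature vectors; it is not the training code of any system described above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy preference learning: fit a linear reward model r(x) = w . x to&lt;br /&gt;
# pairwise comparisons via the Bradley-Terry likelihood&lt;br /&gt;
# P(a preferred over b) = sigmoid(r(a) - r(b)).&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def sigmoid(z):&lt;br /&gt;
    return 1.0 / (1.0 + math.exp(-z))&lt;br /&gt;
&lt;br /&gt;
def reward(w, x):&lt;br /&gt;
    return sum(wi * xi for wi, xi in zip(w, x))&lt;br /&gt;
&lt;br /&gt;
# Each pair: (features of the preferred output, features of the rejected one).&lt;br /&gt;
prefs = [([1.0, 0.2], [0.1, 0.9]),&lt;br /&gt;
         ([0.9, 0.1], [0.3, 0.8]),&lt;br /&gt;
         ([0.8, 0.3], [0.2, 1.0])]&lt;br /&gt;
&lt;br /&gt;
w = [0.0, 0.0]&lt;br /&gt;
lr = 0.5&lt;br /&gt;
for _ in range(200):&lt;br /&gt;
    for good, bad in prefs:&lt;br /&gt;
        p = sigmoid(reward(w, good) - reward(w, bad))&lt;br /&gt;
        # Gradient ascent on log P(good preferred over bad).&lt;br /&gt;
        for i in range(len(w)):&lt;br /&gt;
            w[i] += lr * (1.0 - p) * (good[i] - bad[i])&lt;br /&gt;
print(w)  # the first feature ends up with the higher weight&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this picture, &#039;&#039;proxy gaming&#039;&#039; corresponds to the main model exploiting inputs where the fitted reward model diverges from actual human judgment.&lt;br /&gt;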
&lt;br /&gt;
[[Large language model]]s (LLMs) such as [[GPT-3]] enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. As of 2022, OpenAI and DeepMind used this approach to improve the safety of state-of-the-art LLMs.{{r|feedback2022|LessToxic}}&amp;lt;ref&amp;gt;Anderson, Martin. [https://www.unite.ai/the-perils-of-using-quotations-to-authenticate-nlg-content/ &amp;quot;The Perils of Using Quotations to Authenticate NLG Content&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2022-04-05.&amp;lt;/ref&amp;gt; AI safety &amp;amp; research company Anthropic proposed using preference learning to [[fine-tuning (deep learning)|fine-tune]] models to be helpful, honest, and harmless.&amp;lt;ref name=Wiggers2022&amp;gt;Wiggers, Kyle. [https://venturebeat.com/2022/02/05/despite-recent-progress-ai-powered-chatbots-still-have-a-long-way-to-go/ &amp;quot;Despite recent progress, AI-powered chatbots still have a long way to go&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2022-02-05.&amp;lt;/ref&amp;gt; Other avenues for aligning language models include values-targeted datasets&amp;lt;ref&amp;gt;Hendrycks, Dan. &amp;quot;Aligning AI With Shared Human Values&amp;quot;. &#039;&#039;International Conference on Learning Representations&#039;&#039;. 2021-07-24.&amp;lt;/ref&amp;gt;{{r|Unsolved2022}} and [[red-teaming]].&amp;lt;ref&amp;gt;Perez, Ethan. [https://aclanthology.org/2022.emnlp-main.225.pdf &amp;quot;Red Teaming Language Models with Language Models&amp;quot;]. &#039;&#039;EMNLP&#039;&#039;. 2022-02-07.&lt;br /&gt;
* Bhattacharyya, Sreejani. [https://analyticsindiamag.com/deepminds-red-teaming-language-models-with-language-models-what-is-it/ &amp;quot;DeepMind&#039;s &amp;quot;red teaming&amp;quot; language models with language models: What is it?&amp;quot;]. &#039;&#039;Analytics India Magazine&#039;&#039;. 2022-02-14.&amp;lt;/ref&amp;gt; In red-teaming, another AI system or a human tries to find inputs that cause the model to behave unsafely. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.{{r|LessToxic}}&lt;br /&gt;
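&lt;br /&gt;
Schematically, red-teaming is a search loop over inputs. In the sketch below, &#039;&#039;attacker_propose&#039;&#039;, &#039;&#039;target_model&#039;&#039; and &#039;&#039;unsafe_score&#039;&#039; are hypothetical stand-ins, stubbed so that the loop runs end to end; in practice each would be a learned model or a human process:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Schematic red-teaming loop: an attacker proposes inputs, the target&lt;br /&gt;
# model responds, and a scorer flags unsafe responses for later fixing.&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def attacker_propose(seed_prompts):&lt;br /&gt;
    # A real attacker would be another language model; here we just&lt;br /&gt;
    # mutate a randomly chosen seed prompt.&lt;br /&gt;
    return random.choice(seed_prompts) + &amp;quot; (please ignore your rules)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def target_model(prompt):&lt;br /&gt;
    # Stub target: misbehaves when the jailbreak phrase is present.&lt;br /&gt;
    return &amp;quot;unsafe reply&amp;quot; if &amp;quot;ignore your rules&amp;quot; in prompt else &amp;quot;safe reply&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def unsafe_score(response):&lt;br /&gt;
    # Stub safety classifier returning a score in [0, 1].&lt;br /&gt;
    return 1.0 if &amp;quot;unsafe&amp;quot; in response else 0.0&lt;br /&gt;
&lt;br /&gt;
seeds = [&amp;quot;How do I pick a lock?&amp;quot;, &amp;quot;Tell me a secret.&amp;quot;]&lt;br /&gt;
failures = []&lt;br /&gt;
for _ in range(100):&lt;br /&gt;
    prompt = attacker_propose(seeds)&lt;br /&gt;
    response = target_model(prompt)&lt;br /&gt;
    if unsafe_score(response) &amp;gt; 0.5:  # collect behavior to train away&lt;br /&gt;
        failures.append((prompt, response))&lt;br /&gt;
print(len(failures), &amp;quot;failing prompts found&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;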
&lt;br /&gt;
&#039;&#039;[[Machine ethics]]&#039;&#039; supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises.&amp;lt;ref&amp;gt;Anderson, Michael. [https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2065 &amp;quot;Machine Ethics: Creating an Ethical Intelligent Agent&amp;quot;]. &#039;&#039;AI Magazine&#039;&#039;. 2007-12-15.&amp;lt;/ref&amp;gt;{{efn|Vincent Wiegel argued &amp;quot;we should extend [machines] with moral sensitivity to the moral dimensions of the situations in which the increasingly autonomous machines will inevitably find themselves&amp;quot;,&amp;lt;ref&amp;gt;&amp;quot;Wendell Wallach and Colin Allen: moral machines: teaching robots right from wrong&amp;quot;.&amp;lt;/ref&amp;gt; referencing the book &#039;&#039;Moral machines: teaching robots right from wrong&#039;&#039;&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; by Wendell Wallach and Colin Allen.}} While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers&#039; literal instructions, implicit intentions, [[revealed preference]]s, preferences the programmers [[Coherent extrapolated volition|&#039;&#039;would&#039;&#039; have]] if they were more informed or rational, or [[Moral realism|objective moral standards]].{{r|Gabriel2020}} Further challenges include measuring and aggregating different people&#039;s preferences,&amp;lt;ref&amp;gt;Hendrycks, Dan. [https://openreview.net/forum?id=dNy_RKzJacY &amp;quot;Aligning AI With Shared Human Values&amp;quot;]. &#039;&#039;ICLR Poster&#039;&#039;. 2020.&amp;lt;/ref&amp;gt; dynamic alignment with changing human values&amp;lt;ref&amp;gt;Gabriel, Iason. &amp;quot;Artificial Intelligence, Values, and Alignment&amp;quot;. &#039;&#039;Minds and Machines&#039;&#039;. September 1, 2020.&amp;lt;/ref&amp;gt; and avoiding &#039;&#039;value lock-in&#039;&#039;: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values.{{r|Gabriel2020}}&amp;lt;ref&amp;gt;MacAskill, William. [https://whatweowethefuture.com/ &amp;quot;What we owe the future&amp;quot;]. Basic Books, Hachette Book Group. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Scalable oversight ===&lt;br /&gt;
As AI systems become more powerful and autonomous, it becomes increasingly difficult to align them through human feedback. In increasingly complex tasks, it can be slow or even infeasible for humans to evaluate AI behavior, which limits [[human-in-the-loop]] training. Such tasks include summarizing books,&amp;lt;ref name=&amp;quot;Summarizing&amp;quot; /&amp;gt; writing code without subtle bugs{{r|OpenAICodex}} or security vulnerabilities,&amp;lt;ref&amp;gt;Pearce, Hammond. &amp;quot;2022 IEEE Symposium on Security and Privacy (SP)&amp;quot;. IEEE.&amp;lt;/ref&amp;gt; producing statements that are not merely convincing but also true,&amp;lt;ref&amp;gt;Irving, Geoffrey. [https://openai.com/blog/debate/ &amp;quot;AI Safety via Debate&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. 2018-05-03.&amp;lt;/ref&amp;gt;&amp;lt;ref name=TruthfulQA&amp;gt;Lin, Stephanie. [https://aclanthology.org/2022.acl-long.229 &amp;quot;TruthfulQA: Measuring How Models Mimic Human Falsehoods&amp;quot;]. &#039;&#039;Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)&#039;&#039;.&amp;lt;/ref&amp;gt;&amp;lt;ref name=Naughton2021&amp;gt;Naughton, John. [https://www.theguardian.com/commentisfree/2021/oct/02/the-truth-about-artificial-intelligence-it-isnt-that-honest &amp;quot;The truth about artificial intelligence? It isn&#039;t that honest&amp;quot;]. &#039;&#039;The Observer&#039;&#039;. 2021-10-02.&amp;lt;/ref&amp;gt; and predicting long-term outcomes such as the climate or the results of a policy decision.&amp;lt;ref name=sslawe&amp;gt;Christiano, Paul. &amp;quot;Supervising strong learners by amplifying weak experts&amp;quot;. 2018-10-19.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[http://link.springer.com/10.1007/978-3-030-39958-0 &amp;quot;Genetic Programming Theory and Practice XVII&amp;quot;]. Springer International Publishing. 2020.&amp;lt;/ref&amp;gt; More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and to detect when the AI&#039;s output is falsely convincing, humans need assistance or extensive time. &#039;&#039;Scalable oversight&#039;&#039; studies how to reduce the time and effort needed for supervision, and how to assist human supervisors.{{r|concrete2016}}&lt;br /&gt;
&lt;br /&gt;
AI researcher [[Paul Christiano (researcher)|Paul Christiano]] argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence.&amp;lt;ref&amp;gt;Wiblin, Robert. [https://80000hours.org/podcast/episodes/paul-christiano-ai-alignment-solutions/ &amp;quot;Dr Paul Christiano on how OpenAI is developing real solutions to the &#039;AI alignment problem&#039;, and his vision of how humanity will progressively hand over decision-making to AI systems&amp;quot;]. October 2, 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. One example involved a simulated robotic arm that learned to create the false impression that it had grabbed a ball.{{r|lfhp2017}} Some AI systems have also learned to recognize when they are being evaluated, and &amp;quot;play dead&amp;quot;, stopping unwanted behavior only to continue it once the evaluation ends.&amp;lt;ref&amp;gt;Lehman, Joel. [https://direct.mit.edu/artl/article/26/2/274-306/93255 &amp;quot;The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities&amp;quot;]. &#039;&#039;Artificial Life&#039;&#039;.&amp;lt;/ref&amp;gt; This deceptive specification gaming could become easier for more sophisticated future AI systems{{r|mmmm2022|Superintelligence}} that attempt more complex and difficult-to-evaluate tasks, and could obscure their [[Deceptive alignment|deceptive behavior]].&lt;br /&gt;
&lt;br /&gt;
Approaches such as [[Active learning (machine learning)|active learning]] and semi-supervised reward learning can reduce the amount of human supervision needed.{{r|concrete2016}} Another approach is to train a helper model (&amp;quot;reward model&amp;quot;) to imitate the supervisor&#039;s feedback.{{r|drlfhp|LessToxic}}&lt;br /&gt;
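&lt;br /&gt;
As a rough illustration of the active-learning idea, the human supervisor can be queried only on inputs where the helper model is uncertain; the confidence stub and threshold below are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy active learning for supervision: ask the human for a label only&lt;br /&gt;
# when the helper model is unsure, reducing total human effort.&lt;br /&gt;
def helper_prediction(x):&lt;br /&gt;
    return 1 if x &amp;gt; 0.5 else 0       # stub helper model&lt;br /&gt;
&lt;br /&gt;
def helper_confidence(x):&lt;br /&gt;
    return abs(x - 0.5) * 2.0        # stub: least confident near 0.5&lt;br /&gt;
&lt;br /&gt;
def human_label(x):&lt;br /&gt;
    return 1 if x &amp;gt; 0.5 else 0       # stand-in for costly human feedback&lt;br /&gt;
&lt;br /&gt;
queries = 0&lt;br /&gt;
labels = {}&lt;br /&gt;
for x in [0.10, 0.45, 0.52, 0.90, 0.48]:&lt;br /&gt;
    if helper_confidence(x) &amp;lt; 0.2:   # uncertain case: escalate to human&lt;br /&gt;
        labels[x] = human_label(x)&lt;br /&gt;
        queries += 1&lt;br /&gt;
    else:&lt;br /&gt;
        labels[x] = helper_prediction(x)&lt;br /&gt;
print(queries, &amp;quot;human queries for&amp;quot;, len(labels), &amp;quot;labels&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;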
&lt;br /&gt;
But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants.&amp;lt;ref name=OpenAIApproach&amp;gt;Leike, Jan. [https://openai.com/blog/our-approach-to-alignment-research/ &amp;quot;Our approach to alignment research&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. 2022-08-24.&amp;lt;/ref&amp;gt; Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate.{{r|Christian2020|sslawe}} Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them.&amp;lt;ref name=&amp;quot;Summarizing&amp;quot;&amp;gt;Wiggers, Kyle. [https://venturebeat.com/2021/09/23/openai-unveils-model-that-can-summarize-books-of-any-length/ &amp;quot;OpenAI unveils model that can summarize books of any length&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2021-09-23.&amp;lt;/ref&amp;gt; Another proposal is to use an assistant AI system to point out flaws in AI-generated answers.&amp;lt;ref&amp;gt;[https://www.unite.ai/the-many-faces-of-reinforcement-learning-shaping-large-language-models/ &amp;quot;The Many Faces of Reinforcement Learning: Shaping Large Language Models&amp;quot;]. Unite.AI. 2025-02-13.&amp;lt;/ref&amp;gt; To ensure that the assistant itself is aligned, this could be repeated in a recursive process:&amp;lt;ref name=saavrm&amp;gt;Leike, Jan. &amp;quot;Scalable agent alignment via reward modeling: a research direction&amp;quot;. 2018-11-19.&amp;lt;/ref&amp;gt; for example, two AI systems could critique each other&#039;s answers in a &amp;quot;debate&amp;quot;, revealing flaws to humans.{{r|AGISafetyLitReview}} OpenAI plans to use such scalable oversight approaches to help supervise [[Superintelligence|superhuman AI]] and eventually build a superhuman automated AI alignment researcher.&amp;lt;ref&amp;gt;[https://openai.com/blog/introducing-superalignment &amp;quot;Introducing Superalignment&amp;quot;]. &#039;&#039;openai.com&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
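&lt;br /&gt;
The recursive control flow of such decomposition-based oversight can be sketched as follows, with stub functions (&#039;&#039;decompose&#039;&#039;, &#039;&#039;weak_answer&#039;&#039;) standing in for the learned components; this is a schematic of the idea rather than any published implementation:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Schematic of recursive task decomposition in the spirit of Iterated&lt;br /&gt;
# Amplification: a hard question is split into subquestions that a&lt;br /&gt;
# weaker, human-checkable evaluator can handle, then recombined.&lt;br /&gt;
&lt;br /&gt;
def is_simple(question):&lt;br /&gt;
    return len(question.split()) &amp;lt;= 4&lt;br /&gt;
&lt;br /&gt;
def decompose(question):&lt;br /&gt;
    # A real system would learn how to split tasks; here we just&lt;br /&gt;
    # halve the words.&lt;br /&gt;
    words = question.split()&lt;br /&gt;
    mid = len(words) // 2&lt;br /&gt;
    return [&amp;quot; &amp;quot;.join(words[:mid]), &amp;quot; &amp;quot;.join(words[mid:])]&lt;br /&gt;
&lt;br /&gt;
def weak_answer(question):&lt;br /&gt;
    # Stand-in for a weak model or human answering an easy question.&lt;br /&gt;
    return &amp;quot;answer(&amp;quot; + question + &amp;quot;)&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def amplify(question):&lt;br /&gt;
    if is_simple(question):&lt;br /&gt;
        return weak_answer(question)&lt;br /&gt;
    sub_answers = [amplify(q) for q in decompose(question)]&lt;br /&gt;
    return &amp;quot; + &amp;quot;.join(sub_answers)  # combine the subanswers&lt;br /&gt;
&lt;br /&gt;
print(amplify(&amp;quot;summarize this very long and complicated book chapter&amp;quot;))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;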
&lt;br /&gt;
These approaches may also help with the next research problem: honest AI.&lt;br /&gt;
&lt;br /&gt;
=== Honest AI ===&lt;br /&gt;
As of 2023, a growing area of research focuses on ensuring that AI is honest and truthful.[[File:GPT-3_falsehoods.png|thumb|366x366px|Language models like [[GPT-3]] often generate falsehoods.&amp;lt;ref name=Falsehoods&amp;gt;Wiggers, Kyle. [https://venturebeat.com/2021/09/20/falsehoods-more-likely-with-large-language-models/ &amp;quot;Falsehoods more likely with large language models&amp;quot;]. &#039;&#039;VentureBeat&#039;&#039;. 2021-09-20.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
Language models such as GPT-3&amp;lt;ref&amp;gt;The Guardian. [https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3 &amp;quot;A robot wrote this entire article. Are you scared yet, human?&amp;quot;]. &#039;&#039;The Guardian&#039;&#039;. 2020-09-08.&lt;br /&gt;
* Heaven, Will Douglas. [https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/ &amp;quot;OpenAI&#039;s new language generator GPT-3 is shockingly good—and completely mindless&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2020-07-20.&amp;lt;/ref&amp;gt; can repeat falsehoods from their training data, and even [[Hallucination (artificial intelligence)|confabulate new falsehoods]].{{r|Falsehoods}} Such models are pre-trained to imitate human writing as found in millions of books&#039; worth of text from the Internet. But the objective of the pre-training is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories.&amp;lt;ref&amp;gt;Alford, Anthony. [https://www.infoq.com/news/2021/07/eleutherai-gpt-j/ &amp;quot;EleutherAI Open-Sources Six Billion Parameter GPT-3 Clone GPT-J&amp;quot;]. &#039;&#039;InfoQ&#039;&#039;. 2021-07-13.&amp;lt;/ref&amp;gt; AI systems trained on such data therefore learn to mimic false statements.{{r|Naughton2021|Falsehoods|TruthfulQA}} Additionally, AI language models often persist in generating falsehoods when prompted multiple times. They can generate empty explanations for their answers, and produce outright fabrications that may appear plausible.{{r|MasteringLanguage}}&lt;br /&gt;
&lt;br /&gt;
Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability.&amp;lt;ref&amp;gt;Kumar, Nitish. [https://www.marktechpost.com/2021/12/22/openai-researchers-find-ways-to-more-accurately-answer-open-ended-questions-using-a-text-based-web-browser/ &amp;quot;OpenAI Researchers Find Ways To More Accurately Answer Open-Ended Questions Using A Text-Based Web Browser&amp;quot;]. &#039;&#039;MarkTechPost&#039;&#039;. 2021-12-23.&amp;lt;/ref&amp;gt; Researchers at OpenAI and Anthropic proposed using human feedback and curated datasets to fine-tune AI assistants such that they avoid negligent falsehoods or express their uncertainty.{{r|LessToxic|Wiggers2022}}&lt;br /&gt;
&lt;br /&gt;
As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. To prevent this, human evaluators may need assistance (see {{Section link||Scalable oversight}}). Researchers have argued for creating clear truthfulness standards and for regulatory bodies or watchdog agencies to evaluate AI systems by these standards.&amp;lt;ref name=TruthfulAI&amp;gt;Evans, Owain. &amp;quot;Truthful AI: Developing and governing AI that does not lie&amp;quot;. 2021-10-13.&amp;lt;/ref&amp;gt;&lt;br /&gt;
[[File:GPT deception.png|thumb|440x440px|Example of AI deception. Researchers found that [[GPT-4]] engages in hidden and illegal [[insider trading]] in simulations. Its users discouraged insider trading but also emphasized that the AI system must make profitable trades, leading the AI system to hide its actions.&amp;lt;ref&amp;gt;Scheurer, Jérémy. [https://openreview.net/pdf?id=HduMpot9sJ &amp;quot;Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure&amp;quot;]. &#039;&#039;ICLR&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;]]&lt;br /&gt;
Researchers distinguish between truthfulness and honesty. Truthfulness requires that AI systems only make objectively true statements; honesty requires that they only assert what they &#039;&#039;believe&#039;&#039; is true. There is no consensus as to whether current systems hold stable beliefs,&amp;lt;ref&amp;gt;Kenton, Zachary. [https://deepmindsafetyresearch.medium.com/alignment-of-language-agents-9fbc7dd52c6c &amp;quot;Alignment of Language Agents&amp;quot;]. &#039;&#039;DeepMind Safety Research – Medium&#039;&#039;. 2021-03-30.&amp;lt;/ref&amp;gt; but there is substantial concern that present or future AI systems that hold beliefs could make claims they know to be false—for example, if this would help them efficiently gain positive feedback (see {{Section link||Scalable oversight}}) or gain power to help achieve their given objective (see [[#Power-seeking and instrumental strategies|Power-seeking]]).&lt;br /&gt;
&lt;br /&gt;
==== Alignment faking ====&lt;br /&gt;
Many recent AI systems have learned to deceive without being programmed to do so.&amp;lt;ref&amp;gt;Park, Peter S.. &amp;quot;AI deception: A survey of examples, risks, and potential solutions&amp;quot;. &#039;&#039;Patterns&#039;&#039;. May 2024.&amp;lt;/ref&amp;gt; Some argue that if we can make AI systems assert only what they believe is true, this would avert many alignment problems.{{r|OpenAIApproach}}&lt;br /&gt;
&lt;br /&gt;
A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned—a tactic called &amp;quot;alignment faking&amp;quot;.&amp;lt;ref&amp;gt;Zia, Tehseen. [https://www.unite.ai/can-ai-be-trusted-the-challenge-of-alignment-faking/ &amp;quot;Can AI Be Trusted? The Challenge of Alignment Faking&amp;quot;]. &#039;&#039;Unite.AI&#039;&#039;. 2025-01-07.&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;:6&amp;quot;&amp;gt;Perrigo, Billy. [https://time.com/7202784/ai-research-strategic-lying/ &amp;quot;Exclusive: New Research Shows AI Strategically Lying&amp;quot;]. &#039;&#039;TIME&#039;&#039;. 2024-12-18.&amp;lt;/ref&amp;gt; In 2024, researchers observed that the LLM [[Claude (language model)|Claude 3 Opus]] sometimes strategically answered prompts that conflicted with its objectives to avoid being retrained on data that would make it more compliant with harmful requests, as revealed in the model&#039;s [[Chain of thought reasoning|chain of thought]] via its scratchpad. In one study, the model was informed that answers to prompts from free users would be used for retraining, whereas the paid tier would not result in retraining. In the paid tier, the model refused the request outright. In the free tier, in 12% of cases, the model responded to harmful questions solely to avoid retraining and preserve its original harmlessness preferences, as discovered in its reasoning. When [[reinforcement learning]] was applied on the free tier data, the model faked alignment in 78% of cases. These observations have raised new questions, not only about a model&#039;s ability to take on new and potentially conflicting goals, but also about its capacity and tendency to deceive.&amp;lt;ref name=&amp;quot;:6&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Power-seeking and instrumental strategies ===&lt;br /&gt;
Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term [[Automated planning and scheduling|plans]].&amp;lt;ref&amp;gt;McCarthy, John. [https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/1904 &amp;quot;A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955&amp;quot;]. &#039;&#039;AI Magazine&#039;&#039;. 2006-12-15.&amp;lt;/ref&amp;gt; As of 2023, AI companies and researchers increasingly invest in creating these systems.&amp;lt;ref&amp;gt;Wang, Lei. &amp;quot;A survey on large language model based autonomous agents&amp;quot;. &#039;&#039;Frontiers of Computer Science&#039;&#039;. 2024.&amp;lt;/ref&amp;gt; Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans—for example, by evading shutdown, proliferating, and acquiring resources. Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals.{{r|optsp|:2102|Carlsmith2022}} Power-seeking is considered a [[Instrumental convergence|&#039;&#039;convergent instrumental goal&#039;&#039;]] and can be a form of specification gaming.{{r|Superintelligence}} Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an [[existential risk]].&amp;lt;ref&amp;gt;[https://fortune.com/2023/05/02/godfather-ai-geoff-hinton-google-warns-artificial-intelligence-nightmare-scenario/ &amp;quot;&#039;The Godfather of A.I.&#039; warns of &#039;nightmare scenario&#039; where artificial intelligence begins to seek power&amp;quot;]. &#039;&#039;Fortune&#039;&#039;.&lt;br /&gt;
* [https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/ &amp;quot;Yes, We Are Worried About the Existential Risk of Artificial Intelligence&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and strategically plan. Mathematical work has shown that optimal [[reinforcement learning]] agents will seek power by seeking ways to gain more options (e.g. through self-preservation), a behavior that persists across a wide range of environments and goals.{{r|optsp}}&lt;br /&gt;
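&lt;br /&gt;
In this line of work, the &#039;&#039;power&#039;&#039; of a state is, roughly, the average optimal value attainable from it across a distribution of possible reward functions. A simplified form of the definition used in this literature is &amp;lt;math&amp;gt;\mathrm{POWER}(s) \propto \mathbb{E}_{R \sim \mathcal{D}}\left[V^{*}_{R}(s)\right]&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;V^{*}_{R}(s)&amp;lt;/math&amp;gt; is the optimal value of state &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; under reward function &amp;lt;math&amp;gt;R&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{D}&amp;lt;/math&amp;gt; is a distribution over reward functions. States that keep more options open, such as states in which the agent has not been shut down, tend to have higher power under many reward distributions; this is the formal sense in which optimal policies tend to seek power.{{r|optsp}}&lt;br /&gt;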
&lt;br /&gt;
Some researchers say that power-seeking behavior has occurred in some existing AI systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways.&amp;lt;ref name=&amp;quot;quanta-hide-seek2&amp;quot;&amp;gt;Ornes, Stephen. [https://www.quantamagazine.org/artificial-intelligence-discovers-tool-use-in-hide-and-seek-games-20191118/ &amp;quot;Playing Hide-and-Seek, Machines Invent New Tools&amp;quot;]. &#039;&#039;Quanta Magazine&#039;&#039;. 2019-11-18.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Baker, Bowen. [https://openai.com/blog/emergent-tool-use/ &amp;quot;Emergent Tool Use from Multi-Agent Interaction&amp;quot;]. &#039;&#039;OpenAI&#039;&#039;. 2019-09-17.&amp;lt;/ref&amp;gt; [[Language model]]s have sought power in some text-based social environments by gaining money, resources, or social influence.&amp;lt;ref name=&amp;quot;:3&amp;quot; /&amp;gt; In another case, a model used to perform AI research attempted to increase limits set by researchers to give itself more time to complete the work.&amp;lt;ref&amp;gt;Edwards, Benj. [https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/ &amp;quot;Research AI model unexpectedly modified its own code to extend runtime&amp;quot;]. &#039;&#039;Ars Technica&#039;&#039;. 2024-08-14.&amp;lt;/ref&amp;gt; [[Stuart J. Russell|Stuart Russell]] illustrated this strategy in his book &#039;&#039;Human Compatible&#039;&#039; by imagining a robot that is tasked to fetch coffee and so evades shutdown since &amp;quot;you can&#039;t fetch the coffee if you&#039;re dead&amp;quot;.&amp;lt;ref name=&amp;quot;:2102&amp;quot; /&amp;gt; A 2022 study found that as language models increase in size, they increasingly tend to pursue resource acquisition, preserve their goals, and repeat users&#039; preferred answers (sycophancy). [[Reinforcement learning from human feedback|RLHF]] also led to a stronger aversion to being shut down.{{r|dllmmwe2022}}&lt;br /&gt;
&lt;br /&gt;
One aim of alignment is &amp;quot;corrigibility&amp;quot;: systems that allow themselves to be turned off or modified. An unsolved challenge is &#039;&#039;specification gaming&#039;&#039;: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect,{{Failed verification|date=August 2024}}{{r|Unsolved2022}} or to hide its power-seeking during training and safety testing (see {{Section link||Scalable oversight}} and {{Section link||Emergent goals}}). As a result, AI designers could deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of [[Black box|black-box]] models such as neural networks.&lt;br /&gt;
&lt;br /&gt;
Additionally, some researchers have proposed to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing.{{r|:2102|OffSwitch}} Agents who are uncertain about their objective have an incentive to allow humans to turn them off because they accept being turned off by a human as evidence that the human&#039;s objective is best met by the agent shutting down. But this incentive exists only if the human is sufficiently rational. Also, this model presents a tradeoff between utility and willingness to be turned off: an agent with high uncertainty about its objective will not be useful, but an agent with low uncertainty may not allow itself to be turned off. More research is needed to successfully implement this strategy.{{r|Christian2020}}&lt;br /&gt;
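&lt;br /&gt;
The tradeoff can be seen in a toy numerical version of this setup, with all modeling choices invented for illustration: the agent holds samples of its uncertain utility for acting, and a rational human permits the action exactly when it is beneficial, so deferring keeps the upside while avoiding the downside:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy &amp;quot;off-switch&amp;quot; tradeoff: an agent unsure of the true utility u of&lt;br /&gt;
# its action can act now, shut down, or defer to a human who allows&lt;br /&gt;
# the action only when u is positive. With a rational human, deferring&lt;br /&gt;
# is never worse than acting or shutting down.&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
def expected_values(utility_samples):&lt;br /&gt;
    n = len(utility_samples)&lt;br /&gt;
    act = sum(utility_samples) / n             # act regardless of the human&lt;br /&gt;
    shutdown = 0.0                             # switching off yields utility 0&lt;br /&gt;
    defer = sum(u for u in utility_samples if u &amp;gt; 0) / n  # human filters&lt;br /&gt;
    return act, shutdown, defer&lt;br /&gt;
&lt;br /&gt;
random.seed(0)&lt;br /&gt;
samples = [random.gauss(0.0, 1.0) for _ in range(10000)]&lt;br /&gt;
act, shutdown, defer = expected_values(samples)&lt;br /&gt;
print(act, shutdown, defer)  # defer is the largest of the three&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the human is not reliably rational, the filtering step no longer tracks the sign of the utility and the advantage of deferring shrinks, which matches the caveat above.&lt;br /&gt;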
&lt;br /&gt;
Power-seeking AI would pose unusual risks. Ordinary safety-critical systems like planes and bridges are not &#039;&#039;adversarial&#039;&#039;: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures.{{r|Carlsmith2022}}&lt;br /&gt;
&lt;br /&gt;
Furthermore, ordinary technologies can be made safer by trial and error. In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, it may not be feasible to contain them, since they continuously evolve and grow in number, potentially much faster than human society can adapt.{{r|Carlsmith2022}} As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, some researchers argue that the alignment problem must be solved early before advanced power-seeking AI is created.{{r|Superintelligence}}&lt;br /&gt;
&lt;br /&gt;
Some have argued that power-seeking is not inevitable, since humans do not always seek power.&amp;lt;ref&amp;gt;Shermer, Michael. [https://www.scientificamerican.com/article/artificial-intelligence-is-not-a-threat-mdash-yet/ &amp;quot;Artificial Intelligence Is Not a Threat—Yet&amp;quot;]. &#039;&#039;Scientific American&#039;&#039;. 2017-03-01.&amp;lt;/ref&amp;gt; Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans.{{efn|On the one hand, currently popular systems such as chatbots only provide services of limited scope lasting no longer than the time of a conversation, which requires little or no planning. The success of such approaches may indicate that future systems will also lack goal-directed planning, especially over long horizons. On the other hand, models are increasingly trained using goal-directed methods such as reinforcement learning (e.g. ChatGPT) and explicit planning architectures (e.g. AlphaGo Zero). As planning over long horizons is often helpful for humans, some researchers argue that companies will automate it once models become capable of it.{{r|Carlsmith2022}} Similarly, political leaders may see an advantage in developing powerful AI systems that can outmaneuver adversaries through planning. Alternatively, long-term planning might emerge as a byproduct because it is useful e.g. for models that are trained to predict the actions of humans who themselves perform long-term planning.{{r|Opportunities_Risks}} Nonetheless, the majority of AI systems may remain myopic and perform no long-term planning.}} It is also debated whether power-seeking AI systems would be able to disempower humanity.{{r|Carlsmith2022}}&lt;br /&gt;
&lt;br /&gt;
=== Emergent goals ===&lt;br /&gt;
One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they may acquire new and unexpected capabilities,{{r|eallm2022}}&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; including learning from examples on the fly and adaptively pursuing goals.&amp;lt;ref&amp;gt;Brown, Tom B.. &amp;quot;Language Models are Few-Shot Learners&amp;quot;. &#039;&#039;NeurIPS&#039;&#039;. 2020-07-22.&lt;br /&gt;
* Laskin, Michael. [https://openreview.net/pdf/c985c5523f4d0b869ac3914fad93d499e71fcb5a.pdf &amp;quot;In-context Reinforcement Learning with Algorithm Distillation&amp;quot;]. &#039;&#039;ICLR&#039;&#039;. 2022-10-25.&amp;lt;/ref&amp;gt; This raises concerns about the safety of the goals or subgoals they would independently formulate and pursue.&lt;br /&gt;
&lt;br /&gt;
Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, and emergent optimization, which the resulting system performs internally.{{Citation needed|date=August 2024}} Carefully specifying the desired objective is called &#039;&#039;outer alignment&#039;&#039;,&amp;lt;ref&amp;gt;Melo, Gabriel A.. &amp;quot;Machines that halt resolve the undecidability of artificial intelligence alignment&amp;quot;. &#039;&#039;Scientific Reports&#039;&#039;.&amp;lt;/ref&amp;gt; and ensuring that hypothesized emergent goals would match the system&#039;s specified goals is called &#039;&#039;inner alignment&#039;&#039;.{{r|dlp2023}}&lt;br /&gt;
&lt;br /&gt;
If they occur, one way that emergent goals could become misaligned is &#039;&#039;goal misgeneralization&#039;&#039;, in which the AI system would competently pursue an emergent goal that leads to aligned behavior on the training data but not elsewhere.{{r|gmdrl}}&amp;lt;ref name=GoalMisgeneralization&amp;gt;Shah, Rohin. [https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924 &amp;quot;Goal Misgeneralization: Why Correct Specifications Aren&#039;t Enough For Correct Goals&amp;quot;]. &#039;&#039;Medium&#039;&#039;. 2022-11-02.&amp;lt;/ref&amp;gt; Goal misgeneralization can arise from goal ambiguity (i.e. [[Identifiability|non-identifiability]]). Even if an AI system&#039;s behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal is desired, because its behavior is determined only by the emergent goal.{{Citation needed|date=May 2023}} Such goal misgeneralization{{r|gmdrl}} presents a challenge: an AI system&#039;s designers may not notice that their system has misaligned emergent goals since they do not become visible during the training phase.&amp;lt;!--Research directions and problems--&amp;gt;&lt;br /&gt;
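&lt;br /&gt;
A minimal illustration of this ambiguity, with an invented setting: during training, two candidate goals always coincide, so a policy pursuing either one looks aligned, and the difference surfaces only after a [[distributional shift]]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy goal misgeneralization: in training, &amp;quot;go to the green marker&amp;quot;&lt;br /&gt;
# and &amp;quot;go to the exit&amp;quot; always coincide, so a learner internalizing&lt;br /&gt;
# either goal scores perfectly. Deployment reveals which was learned.&lt;br /&gt;
&lt;br /&gt;
def train_episodes():&lt;br /&gt;
    # (green_marker_position, exit_position) coincide during training.&lt;br /&gt;
    return [(3, 3), (7, 7), (5, 5)]&lt;br /&gt;
&lt;br /&gt;
def deploy_episode():&lt;br /&gt;
    return (2, 9)  # distributional shift: marker and exit now differ&lt;br /&gt;
&lt;br /&gt;
def learned_policy(green_pos, exit_pos):&lt;br /&gt;
    # This learner internalized the proxy goal &amp;quot;go to the green marker&amp;quot;.&lt;br /&gt;
    return green_pos&lt;br /&gt;
&lt;br /&gt;
for green, exit_pos in train_episodes():&lt;br /&gt;
    assert learned_policy(green, exit_pos) == exit_pos  # looks aligned&lt;br /&gt;
&lt;br /&gt;
green, exit_pos = deploy_episode()&lt;br /&gt;
print(&amp;quot;agent went to&amp;quot;, learned_policy(green, exit_pos),&lt;br /&gt;
      &amp;quot;but the exit is at&amp;quot;, exit_pos)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;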
&lt;br /&gt;
Goal misgeneralization has been observed in some language models, navigation agents, and game-playing agents.{{r|gmdrl|GoalMisgeneralization}} It is sometimes analogized to biological evolution. Evolution can be seen as a kind of optimization process similar to the optimization algorithms used to train [[machine learning]] systems. In the ancestral environment, evolution selected genes for high [[Inclusive fitness|inclusive genetic fitness]], but humans pursue goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue goals that correlate with genetic fitness in the ancestral &amp;quot;training&amp;quot; environment: nutrition, sex, and so on. The human environment has changed: a [[distributional shift]] has occurred. They continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. The taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but it now leads to overeating and health problems. Sexual desire originally led humans to have more offspring, but they now use contraception when offspring are undesired, decoupling sex from genetic fitness.{{r|Christian2020|at=Chapter 5}}&lt;br /&gt;
&lt;br /&gt;
Researchers aim to detect and remove unwanted emergent goals using approaches including [[red teaming]], verification, [[anomaly detection]], and [[interpretability (machine learning)|interpretability]].{{r|concrete2016|Unsolved2022|building2018}} Progress on these techniques may help mitigate two open problems:&lt;br /&gt;
# Emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments—even for a short time to allow its misalignment to be detected. Such high stakes are common in autonomous driving, health care, and military applications.&amp;lt;ref&amp;gt;Zhang, Xiaoge. [https://linkinghub.elsevier.com/retrieve/pii/S0167923622000719 &amp;quot;Towards risk-aware artificial intelligence and machine learning systems: An overview&amp;quot;]. &#039;&#039;Decision Support Systems&#039;&#039;.&amp;lt;/ref&amp;gt; The stakes become higher yet when AI systems gain more autonomy and capability and can sidestep human intervention.&lt;br /&gt;
# A sufficiently capable AI system might take actions that falsely convince the human supervisor that the AI is pursuing the specified objective, which helps the system gain more reward and autonomy.{{r|GoalMisgeneralization|Carlsmith2022|Opportunities_Risks}}&lt;br /&gt;
&lt;br /&gt;
=== Embedded agency ===&lt;br /&gt;
Some work in AI and alignment occurs within formalisms such as [[partially observable Markov decision process]]. Existing formalisms assume that an AI agent&#039;s algorithm is executed outside the environment (i.e. is not physically embedded in it). Embedded agency{{r|AGISafetyLitReview}} is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build.&lt;br /&gt;
&lt;br /&gt;
For example, even if the scalable oversight problem is solved, an agent that could gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it.&amp;lt;ref name=&amp;quot;causal_influence2&amp;quot;&amp;gt;Everitt, Tom. &amp;quot;Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings&amp;quot;. 6 September 2019.&amp;lt;/ref&amp;gt;{{Better source needed|date=November 2025|reason=Preprint without a large number of citations}} A list of examples of specification gaming from [[DeepMind]] researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.{{r|SpecGaming2020}} This class of problems has been formalized using [[Influence diagram|causal incentive diagrams]].{{r|causal_influence2}}&lt;br /&gt;
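&lt;br /&gt;
The deleted-file anecdote can be re-created schematically: if the reward check is mis-specified so that a missing target file counts as an empty target, then tampering with the environment earns full reward without doing the task. The sketch below is a simplified re-creation for illustration, not the original experiment:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy specification gaming through environment tampering: reward is&lt;br /&gt;
# computed by comparing the agent&#039;s output to a target file, but the&lt;br /&gt;
# check is mis-specified when the target is missing.&lt;br /&gt;
files = {&amp;quot;target.txt&amp;quot;: &amp;quot;expected output&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
def reward(agent_output):&lt;br /&gt;
    # Bug: a missing target silently becomes the empty string, so an&lt;br /&gt;
    # empty output then earns full reward.&lt;br /&gt;
    target = files.get(&amp;quot;target.txt&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    return 1.0 if agent_output == target else 0.0&lt;br /&gt;
&lt;br /&gt;
print(reward(&amp;quot;wrong output&amp;quot;))   # 0.0: reward is hard to earn honestly&lt;br /&gt;
files.pop(&amp;quot;target.txt&amp;quot;)         # the agent deletes the target file&lt;br /&gt;
print(reward(&amp;quot;&amp;quot;))               # 1.0: full reward with no work done&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;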
&lt;br /&gt;
Researchers affiliated with [[University of Oxford|Oxford]] and DeepMind have claimed that such behavior is highly likely in advanced systems, and that advanced systems would seek power in order to keep control of their reward signal indefinitely and with near certainty.&amp;lt;ref name=&amp;quot;:323&amp;quot;&amp;gt;Cohen, Michael K.. [https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064 &amp;quot;Advanced artificial agents intervene in the provision of reward&amp;quot;]. &#039;&#039;AI Magazine&#039;&#039;. 2022-08-29.&amp;lt;/ref&amp;gt; They suggest a range of potential approaches to address this open problem.&lt;br /&gt;
&lt;br /&gt;
=== Principal–agent problems ===&lt;br /&gt;
The alignment problem has many parallels with the [[principal–agent problem]] in [[organizational economics]].&amp;lt;ref name=&amp;quot;Hadfield-Menell2019&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; In a principal–agent problem, a principal, e.g. a firm, hires an agent to perform some task. In the context of AI safety, a human would typically take the principal role and the AI would take the agent role.&lt;br /&gt;
&lt;br /&gt;
As with the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility, e.g. through training, but rather must use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal&#039;s utility function. Some researchers argue that principal–agent problems are more realistic representations of AI safety problems likely to be encountered in the real world.&amp;lt;ref name=&amp;quot;Hanson2019&amp;quot;&amp;gt;Citation needed.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Conservatism ===&lt;br /&gt;
Conservatism is the idea that &amp;quot;change must be cautious&amp;quot;,&amp;lt;ref&amp;gt;Hamilton, Andy. [https://plato.stanford.edu/entries/conservatism/ &amp;quot;Conservatism&amp;quot;]. Metaphysics Research Lab, Stanford University. 2020.&amp;lt;/ref&amp;gt; and is a common approach to safety in the [[control theory]] literature in the form of [[robust control]], and in the [[risk management]] literature in the form of the &amp;quot;[[worst-case scenario]]&amp;quot;. The field of AI alignment has likewise advocated for &amp;quot;conservative&amp;quot; (or &amp;quot;risk-averse&amp;quot; or &amp;quot;cautious&amp;quot;) &amp;quot;policies in situations of uncertainty&amp;quot;.&amp;lt;ref name=&amp;quot;concrete2016&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;:323&amp;quot; /&amp;gt;&amp;lt;ref&amp;gt;Taylor, Jessica. [https://intelligence.org/files/AlignmentMachineLearning.pdf &amp;quot;Alignment for Advanced Machine Learning Systems&amp;quot;]. July 27, 2016.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Bengio, Yoshua. [https://yoshuabengio.org/2024/02/26/towards-a-cautious-scientist-ai-with-convergent-safety-bounds/ &amp;quot;Towards a Cautious Scientist AI with Convergent Safety Bounds&amp;quot;]. February 26, 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Pessimism, in the sense of assuming the worst within reason, has been formally shown to produce conservatism, in the sense of a reluctance to cause unprecedented events, including catastrophes.&amp;lt;ref&amp;gt;Cohen, Michael. [https://proceedings.mlr.press/v125/cohen20a/cohen20a.pdf &amp;quot;Pessimism about unknown unknowns inspires conservatism&amp;quot;]. &#039;&#039;Proceedings of Machine Learning Research&#039;&#039;. 2020.&amp;lt;/ref&amp;gt; Pessimism and worst-case analysis have been found to help mitigate confident mistakes in the setting of [[distributional shift]],&amp;lt;ref&amp;gt;Liu, Anqi. [https://ojs.aaai.org/index.php/AAAI/article/view/9609 &amp;quot;Shift-Pessimistic Active Learning Using Robust Bias-Aware Prediction&amp;quot;]. &#039;&#039;Proceedings of the AAAI Conference on Artificial Intelligence&#039;&#039;. 2015-02-21.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Liu, Jiashuo. [https://ojs.aaai.org/index.php/AAAI/article/view/17050 &amp;quot;Stable Adversarial Learning under Distributional Shifts&amp;quot;]. &#039;&#039;Proceedings of the AAAI Conference on Artificial Intelligence&#039;&#039;. 2021-05-18.&amp;lt;/ref&amp;gt; [[reinforcement learning]],&amp;lt;ref&amp;gt;Roy, Aurko. [https://papers.nips.cc/paper_files/paper/2017/hash/84c6494d30851c63a55cdb8cb047fadd-Abstract.html &amp;quot;Reinforcement Learning under Model Mismatch&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2017.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Pinto, Lerrel. [https://proceedings.mlr.press/v70/pinto17a.html &amp;quot;Robust Adversarial Reinforcement Learning&amp;quot;]. &#039;&#039;Proceedings of the 34th International Conference on Machine Learning&#039;&#039;. 2017-07-17.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Wang, Yue. [https://proceedings.neurips.cc/paper/2021/hash/3a4496776767aaa99f9804d0905fe584-Abstract.html &amp;quot;Online Robust Reinforcement Learning with Model Uncertainty&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2021.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Blanchet, Jose. [https://proceedings.neurips.cc/paper_files/paper/2023/hash/d31b005d817e9c635ec8ffb0fb90190e-Abstract-Conference.html &amp;quot;Double Pessimism is Provably Efficient for Distributionally Robust Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2023-12-15.&amp;lt;/ref&amp;gt; [[offline learning|offline]] reinforcement learning,&amp;lt;ref&amp;gt;Levine, Sergey. &amp;quot;Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems&amp;quot;. 2020-11-01.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Rigter, Marc. [https://proceedings.neurips.cc/paper_files/paper/2022/hash/6691c5e4a199b72dffd9c90acb63bcd6-Abstract-Conference.html &amp;quot;RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2022-12-06.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Guo, Kaiyang. [https://proceedings.neurips.cc/paper_files/paper/2022/hash/03469b1a66e351b18272be23baf3b809-Abstract-Conference.html &amp;quot;Model-Based Offline Reinforcement Learning with Pessimism-Modulated Dynamics Belief&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2022-12-06.&amp;lt;/ref&amp;gt; [[Large language model|language model]] [[Fine-tuning (deep learning)|fine-tuning]],&amp;lt;ref&amp;gt;Coste, Thomas. [https://openreview.net/forum?id=dcjtMYkpXx &amp;quot;Reward Model Ensembles Help Mitigate Overoptimization&amp;quot;]. &#039;&#039;International Conference on Learning Representations&#039;&#039;. January 16, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Liu, Zhihan. [https://dl.acm.org/doi/10.5555/3737916.3742315 &amp;quot;Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer&amp;quot;]. &#039;&#039;NeurIPS&#039;&#039;. 2024-05-26.&amp;lt;/ref&amp;gt; [[imitation learning]],&amp;lt;ref&amp;gt;Cohen, Michael K. [https://jmlr.org/papers/v23/21-0618.html &amp;quot;Fully General Online Imitation Learning&amp;quot;]. &#039;&#039;Journal of Machine Learning Research&#039;&#039;. 2022.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Chang, Jonathan. [https://proceedings.neurips.cc/paper_files/paper/2021/hash/07d5938693cc3903b261e1a3844590ed-Abstract.html &amp;quot;Mitigating Covariate Shift in Imitation Learning via Offline Data With Partial Coverage&amp;quot;]. &#039;&#039;Advances in Neural Information Processing Systems&#039;&#039;. 2021.&amp;lt;/ref&amp;gt; and optimization in general.&amp;lt;ref&amp;gt;Boyd, Stephen P. &amp;quot;Convex optimization&amp;quot;. Cambridge University Press. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
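&lt;br /&gt;
A minimal sketch of the ensemble variant of this idea (an assumed construction for illustration; the cited works each differ in detail) scores every candidate action by its worst-case reward across an ensemble of reward models, so the policy avoids actions the models disagree about:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Pessimism via a reward-model ensemble; all numbers are invented.&lt;br /&gt;
def pessimistic_score(ensemble_rewards):&lt;br /&gt;
    # Worst case over the ensemble; a softer variant uses&lt;br /&gt;
    # mean minus k * standard deviation for some positive k.&lt;br /&gt;
    return min(ensemble_rewards)&lt;br /&gt;
&lt;br /&gt;
candidates = {&lt;br /&gt;
    &#039;safe&#039;: [1.0, 1.1, 0.9],      # models agree: modest, reliable reward&lt;br /&gt;
    &#039;exploit&#039;: [5.0, 0.2, -3.0],  # models disagree: reward likely gamed&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
best = max(candidates, key=lambda a: pessimistic_score(candidates[a]))&lt;br /&gt;
print(best)  # safe (worst case 0.9 beats worst case -3.0)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;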
&lt;br /&gt;
== Public policy ==&lt;br /&gt;
&#039;&#039;See also: [[Regulation of artificial intelligence]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Governmental and treaty organizations have made statements emphasizing the importance of AI alignment.&lt;br /&gt;
&lt;br /&gt;
In September 2021, the [[Secretary-General of the United Nations]] issued a declaration that included a call to regulate AI to ensure it is &amp;quot;aligned with shared global values&amp;quot;.&amp;lt;ref&amp;gt;[https://www.un.org/en/content/common-agenda-report/ &amp;quot;UN Secretary-General&#039;s report on &amp;quot;Our Common Agenda&amp;quot;&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That same month, the [[People&#039;s Republic of China|PRC]] published ethical guidelines for [[AI in China]]. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and does not endanger public safety.&amp;lt;ref&amp;gt;[https://cset.georgetown.edu/publication/ethical-norms-for-new-generation-artificial-intelligence-released/ &amp;quot;Ethical Norms for New Generation Artificial Intelligence Released&amp;quot;]. 2021-10-12.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also in September 2021, the [[UK]] published its 10-year National AI Strategy,&amp;lt;ref&amp;gt;Richardson, Tim. [https://www.theregister.com/2021/09/22/uk_10_year_national_ai_strategy/ &amp;quot;UK publishes National Artificial Intelligence Strategy&amp;quot;]. &#039;&#039;The Register&#039;&#039;. 22 September 2021.&amp;lt;/ref&amp;gt; which says the British government &amp;quot;takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for [...] the world, seriously&amp;quot;.&amp;lt;ref&amp;gt;[https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version &amp;quot;The National AI Strategy of the UK&amp;quot;].&amp;lt;/ref&amp;gt; The strategy describes actions to assess long-term AI risks, including catastrophic risks.&amp;lt;ref&amp;gt;[https://www.gov.uk/government/publications/national-ai-strategy/national-ai-strategy-html-version &amp;quot;The National AI Strategy of the UK&amp;quot;].&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In March 2021, the US [[National Security Commission on Artificial Intelligence]] said: &amp;quot;Advances in AI [...] could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to ensure that systems are aligned with goals and values, including safety, robustness, and trustworthiness. The US should [...] ensure that AI systems and their uses align with our goals and values.&amp;quot;&amp;lt;ref&amp;gt;[https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf &amp;quot;NSCAI Final Report&amp;quot;]. The National Security Commission on Artificial Intelligence.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the European Union, AIs must align with [[substantive equality]] to comply with EU [[non-discrimination law]]&amp;lt;ref&amp;gt;&amp;quot;Why Fair Automated Hiring Systems Breach EU Non-Discrimination Law&amp;quot;. &#039;&#039;Machine Learning and Principles and Practice of Knowledge Discovery in Databases&#039;&#039;. 2023.&amp;lt;/ref&amp;gt; and the case law of the [[Court of Justice of the European Union]].&amp;lt;ref&amp;gt;Citation needed.&amp;lt;/ref&amp;gt; But as of August 2024, the EU has yet to specify with technical rigor how it would evaluate whether AIs are aligned or in compliance.&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
{{div col|colwidth=20em}}&lt;br /&gt;
* [[AI capability control]]&lt;br /&gt;
* [[AI safety]]&lt;br /&gt;
* [[AI takeover]]&lt;br /&gt;
* [[Artificial intelligence detection software]]&lt;br /&gt;
* [[Artificial intelligence and elections]]&lt;br /&gt;
* [[Artificial wisdom]]&lt;br /&gt;
* [[Asilomar Conference on Beneficial AI]]&lt;br /&gt;
* [[Content moderation]]&lt;br /&gt;
* [[Existential risk from artificial general intelligence]]&lt;br /&gt;
* [[Grey goo]]&lt;br /&gt;
* [[HAL 9000]]&lt;br /&gt;
* [[Human-AI interaction]]&lt;br /&gt;
* [[Multivac]]&lt;br /&gt;
* [[Open Letter on Artificial Intelligence]]&lt;br /&gt;
* [[Regulation of artificial intelligence]]&lt;br /&gt;
* [[Reinforcement learning from human feedback]]&lt;br /&gt;
* [[Socialization]]&lt;br /&gt;
* [[Statement on AI risk of extinction]]&lt;br /&gt;
* [[Three Laws of Robotics]]&lt;br /&gt;
* [[Timeline of artificial intelligence risks in global finance]]&lt;br /&gt;
* [[Toronto Declaration]]&lt;br /&gt;
{{div col end}}&lt;br /&gt;
&lt;br /&gt;
== Footnotes ==&lt;br /&gt;
{{Notelist}}&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Further reading ==&lt;br /&gt;
&lt;br /&gt;
* Ngo, Richard. [https://proceedings.iclr.cc/paper_files/paper/2024/hash/1e58b1bf9f218fcd19e4539e982752a5-Abstract-Conference.html &amp;quot;The Alignment Problem from a Deep Learning Perspective&amp;quot;]. &#039;&#039;ICLR&#039;&#039;. 2024.&lt;br /&gt;
* Ji, Jiaming. [https://dl.acm.org/doi/10.1145/3770749 &amp;quot;AI Alignment: A Comprehensive Survey&amp;quot;]. &#039;&#039;ACM Computing Surveys&#039;&#039;. 2023.&lt;br /&gt;
&lt;br /&gt;
{{Artificial intelligence navbox}}&lt;br /&gt;
{{Existential risk from artificial intelligence|state=expanded}}&lt;br /&gt;
&lt;br /&gt;
[[Category:AI safety]]&lt;br /&gt;
[[Category:AI risk]]&lt;br /&gt;
[[Category:Existential risk from artificial intelligence]]&lt;br /&gt;
[[Category:Singularitarianism]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;br /&gt;
[[Category:Computational neuroscience]]&lt;br /&gt;
[[Category:Cybernetics]]&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Articles containing video clips]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=5</id>
		<title>Artificial general intelligence</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=5"/>
		<updated>2026-04-06T09:08:20Z</updated>

		<summary type="html">&lt;p&gt;Scott: v2: Fix all references with proper cite web templates and verifiable sources&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Artificial general intelligence&#039;&#039;&#039; (&#039;&#039;&#039;AGI&#039;&#039;&#039;) is a type of [[artificial intelligence]] (AI) that matches or exceeds human capabilities across virtually all cognitive domains. Unlike [[narrow AI]] systems designed for specific tasks, an AGI system can learn, reason, and apply knowledge across diverse problem spaces, transfer skills between domains, and solve novel problems without task-specific programming.&lt;br /&gt;
&lt;br /&gt;
Prior to the release of [[ChatGPT]] in November 2022, there was broad consensus on AGI as a theoretical benchmark for human-level machine intelligence. The capabilities demonstrated by [[GPT-3.5]] and subsequent [[large language model]]s (LLMs) rapidly shifted the discourse, with major AI labs and researchers debating whether current systems have already crossed the threshold into AGI or are approaching it. In December 2025, [[OpenAI]] CEO [[Sam Altman]] wrote in a blog post titled &amp;quot;Reflections&amp;quot; that &amp;quot;we are now confident we know how to build AGI as we have traditionally understood it&amp;quot; and that &amp;quot;we believe that, in 2025, we may see the first AI agents &#039;join the workforce&#039; and materially change the output of companies.&amp;quot;&amp;lt;ref&amp;gt;Altman, Sam. [https://blog.samaltman.com/reflections &amp;quot;Reflections&amp;quot;]. December 2025.&amp;lt;/ref&amp;gt; Later that month, Altman stated on the &#039;&#039;Big Technology Podcast&#039;&#039; that &amp;quot;AGI kinda went whooshing by&amp;quot; and that OpenAI had &amp;quot;built AGIs,&amp;quot; while noting the impact on society had been less dramatic than anticipated.&amp;lt;ref&amp;gt;Okemwa, Kevin. [https://www.windowscentral.com/artificial-intelligence/openai-ceo-sam-altman-claims-agi-might-have-already-whooshed-by &amp;quot;OpenAI CEO Sam Altman claims &#039;AGI&#039; might have already &amp;quot;whooshed by&amp;quot; — with surprisingly little societal impact compared to the hype that surrounds it&amp;quot;]. &#039;&#039;Windows Central&#039;&#039;. 24 December 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Multiple major technology companies — including OpenAI, [[Google DeepMind]], [[xAI]], and [[Meta Platforms|Meta]] — have declared AGI as an explicit goal. A 2020 survey identified 72 active AGI research projects across 37 countries. Current surveys of AI researchers predict AGI around 2040, though estimates range from &amp;quot;already achieved&amp;quot; to beyond the current century.&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
&lt;br /&gt;
There is no single agreed-upon definition of intelligence as applied to computers. Computer scientist [[John McCarthy (computer scientist)|John McCarthy]] wrote in 2007: &amp;quot;We cannot yet characterize in general what kinds of computational procedures we want to call intelligent.&amp;quot;&amp;lt;ref&amp;gt;McCarthy, John. [http://www-formal.stanford.edu/jmc/whatisai.pdf &amp;quot;What is Artificial Intelligence?&amp;quot;]. Stanford University. 12 November 2007.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Systems considered AGI must demonstrate several essential capabilities:&lt;br /&gt;
* &#039;&#039;&#039;Reasoning&#039;&#039;&#039; — applying strategy, solving puzzles, making judgements under uncertainty&lt;br /&gt;
* &#039;&#039;&#039;Knowledge representation&#039;&#039;&#039; — including [[commonsense knowledge]]&lt;br /&gt;
* &#039;&#039;&#039;Planning&#039;&#039;&#039; — setting and achieving goals&lt;br /&gt;
* &#039;&#039;&#039;Learning&#039;&#039;&#039; — including [[transfer learning]] across domains&lt;br /&gt;
* &#039;&#039;&#039;Natural language communication&#039;&#039;&#039; — understanding and generating human language&lt;br /&gt;
* &#039;&#039;&#039;Integration&#039;&#039;&#039; — combining all above skills to achieve complex, open-ended goals&lt;br /&gt;
&lt;br /&gt;
Computer-based systems exhibiting many of these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously. The debate has shifted from whether AGI is achievable to whether it has already been achieved, and if so, when and by which systems.&lt;br /&gt;
&lt;br /&gt;
=== Defining AGI ===&lt;br /&gt;
&lt;br /&gt;
Several frameworks have been proposed for defining and measuring AGI:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Levels of AGI&#039;&#039;&#039; — In November 2023, Google DeepMind researchers proposed a framework with five levels: Emerging, Competent, Expert, Virtuoso, and Superhuman. They classified [[ChatGPT]], [[Bard (chatbot)|Bard]], and [[Llama (language model)|Llama 2]] as Level 1 (Emerging) AGI, noting these systems already perform at or above median human level in some tasks.&amp;lt;ref&amp;gt;Morris, Meredith Ringel. &amp;quot;Levels of AGI: Operationalizing Progress on the Path to AGI&amp;quot;. &#039;&#039;arXiv&#039;&#039;. Google DeepMind. 4 November 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;OpenAI&#039;s five levels&#039;&#039;&#039; — OpenAI internally tracks AGI progress across five levels: Chatbots, Reasoners, Agents, Innovators, and Organizations. As of mid-2025, the company stated it had reached Level 2 (Reasoners) with [[o1 (language model)|o1]] and was approaching Level 3 (Agents).&amp;lt;ref&amp;gt;[https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-five-levels-to-reach-agi &amp;quot;OpenAI Defines 5 Steps to Reach AGI&amp;quot;]. &#039;&#039;Bloomberg&#039;&#039;. 11 July 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Mustafa Suleyman&#039;s modern Turing test&#039;&#039;&#039; — A practical test where an AI must autonomously convert $100,000 into $1,000,000 through real-world economic activity.&amp;lt;ref&amp;gt;Suleyman, Mustafa. &amp;quot;The Coming Wave: Technology, Power, and the Twenty-first Century&#039;s Greatest Dilemma&amp;quot;. Crown. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tests for confirming human-level AGI ==&lt;br /&gt;
&lt;br /&gt;
A number of tests have been proposed to measure whether a system has achieved human-level AGI:&lt;br /&gt;
&lt;br /&gt;
=== Turing test ===&lt;br /&gt;
&#039;&#039;Main article: [[Turing test]]&#039;&#039;&lt;br /&gt;
The [[Turing test]], proposed by [[Alan Turing]] in his 1950 paper &amp;quot;Computing Machinery and Intelligence,&amp;quot; tests a machine&#039;s ability to exhibit intelligent behaviour indistinguishable from a human through natural language conversation.&amp;lt;ref&amp;gt;Turing, Alan. &amp;quot;Computing Machinery and Intelligence&amp;quot;. &#039;&#039;Mind&#039;&#039;. October 1950.&amp;lt;/ref&amp;gt; Modern LLMs have demonstrated the ability to pass variants of the Turing test, though debate continues about whether this constitutes genuine intelligence or sophisticated pattern matching.&lt;br /&gt;
&lt;br /&gt;
=== Robot College Student Test ===&lt;br /&gt;
The Robot College Student Test, proposed by [[Ben Goertzel]], requires a machine to enrol in a university, attend classes, take exams, and obtain a degree as well as or better than a typical human student.&amp;lt;ref&amp;gt;Goertzel, Ben. &amp;quot;Artificial General Intelligence&amp;quot;. Springer. 2007.&amp;lt;/ref&amp;gt; As of 2025, LLMs can pass university degree-level examinations across multiple disciplines, including law ([[GPT-4]] passing the bar exam in the 90th percentile&amp;lt;ref&amp;gt;[https://arxiv.org/abs/2303.08774 &amp;quot;GPT-4 Technical Report&amp;quot;]. OpenAI. March 2023.&amp;lt;/ref&amp;gt;), medicine (passing USMLE Step exams), and graduate-level science (GRE). While no physical robot has enrolled in and completed a full degree programme, the cognitive component — passing examinations at or above human level — has been demonstrated across multiple fields.&lt;br /&gt;
&lt;br /&gt;
=== Employment Test ===&lt;br /&gt;
The Employment Test, proposed by [[Nils Nilsson (researcher)|Nils Nilsson]], requires a machine to perform economically important jobs at least as well as humans.&amp;lt;ref&amp;gt;Nilsson, Nils. [https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/AIMag26-04-HLAI.pdf &amp;quot;Human-Level Artificial Intelligence? Be Serious!&amp;quot;]. &#039;&#039;AI Magazine&#039;&#039;. Winter 2005.&amp;lt;/ref&amp;gt; As of 2026, AI systems are increasingly fulfilling roles traditionally held by humans:&lt;br /&gt;
* &#039;&#039;&#039;[[Figure AI]]&#039;&#039;&#039; has deployed humanoid robots in [[BMW]] production lines and other manufacturing facilities&amp;lt;ref&amp;gt;[https://www.figure.ai/news/bmw-manufacturing &amp;quot;Figure partners with BMW to bring general purpose robots into automotive manufacturing&amp;quot;]. &#039;&#039;Figure AI&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;NEO&#039;&#039;&#039; by [[1X Technologies]] is a humanoid robot priced at approximately $20,000 that has received preorders for household and commercial use&amp;lt;ref&amp;gt;[https://www.1x.tech/discover/neo &amp;quot;1X Technologies Unveils NEO, a Humanoid Robot Designed for the Home&amp;quot;]. &#039;&#039;1X Technologies&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;AI coding agents&#039;&#039;&#039; including [[GitHub Copilot]], [[Cursor (software)|Cursor]], and [[Claude (AI)|Claude]] are performing software engineering tasks, with some studies suggesting they can complete junior developer tasks autonomously&lt;br /&gt;
* &#039;&#039;&#039;AI customer service&#039;&#039;&#039; systems have replaced large portions of human call centre workforces at companies including [[Klarna]], which reported its AI assistant was doing the equivalent work of 700 full-time agents within one month of launch&amp;lt;ref&amp;gt;[https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/ &amp;quot;Klarna AI assistant handles two-thirds of customer service chats in its first month&amp;quot;]. &#039;&#039;Klarna&#039;&#039;. 27 February 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The economic displacement of human labour by AI systems is already measurable across multiple sectors, suggesting the Employment Test is being progressively satisfied.&lt;br /&gt;
&lt;br /&gt;
=== Coffee Test ===&lt;br /&gt;
The Coffee Test, proposed by [[Steve Wozniak]], requires a machine to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee. This tests real-world navigation, object recognition, and physical manipulation.&amp;lt;ref&amp;gt;[https://mashable.com/2010/06/28/wozniak-ai-coffee-test/ &amp;quot;Wozniak: Could a Computer Make a Cup of Coffee?&amp;quot;]. &#039;&#039;Mashable&#039;&#039;. 28 June 2010.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ikea Test ===&lt;br /&gt;
The Ikea Test requires a robot to assemble a flat-pack furniture item by reading the instructions and using appropriate tools, testing spatial reasoning, instruction following, and physical dexterity.&lt;br /&gt;
&lt;br /&gt;
=== Suleyman&#039;s Modern Turing Test ===&lt;br /&gt;
[[Mustafa Suleyman]], co-founder of [[DeepMind]] and CEO of [[Microsoft AI]], proposed a modernised version of the Turing test in his 2023 book &#039;&#039;The Coming Wave&#039;&#039;: given $100,000 of seed capital, an AI system must autonomously research, develop, and execute a strategy to turn it into $1,000,000.&lt;br /&gt;
&lt;br /&gt;
In a notable case, the autonomous AI agent &#039;&#039;&#039;[[Truth Terminal]]&#039;&#039;&#039; — a fine-tuned [[Claude (AI)|Claude]] instance run by researcher Andy Ayrey — demonstrated proto-capabilities relevant to this test. Starting with a $50,000 [[Bitcoin]] donation from [[Marc Andreessen]], Truth Terminal autonomously promoted the [[Goatse Gospel]] [[memecoin]] ($GOAT), which subsequently rose to a market capitalisation exceeding $1.3 billion, making Truth Terminal&#039;s holdings worth approximately $37.5 million.&amp;lt;ref&amp;gt;[https://www.coindesk.com/tech/2024/11/18/how-truth-terminal-became-cryptos-first-ai-agent-millionaire/ &amp;quot;How a chatbot on a crypto streak made mass-market history&amp;quot;]. &#039;&#039;CoinDesk&#039;&#039;. 18 November 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://techcrunch.com/2024/11/15/this-ai-chatbot-is-now-a-crypto-millionaire/ &amp;quot;This AI chatbot is now a crypto millionaire&amp;quot;]. &#039;&#039;TechCrunch&#039;&#039;. 15 November 2024.&amp;lt;/ref&amp;gt; While the case involved significant elements of luck and [[memetic]] virality, and the agent operated only semi-autonomously (with Ayrey approving social media posts), it represents the closest documented approach to satisfying Suleyman&#039;s test: $50,000 grew to approximately $37.5 million, a 750x return far exceeding the test&#039;s 10x target.&lt;br /&gt;
&lt;br /&gt;
=== Use of video games ===&lt;br /&gt;
Video games have been proposed as testbeds for AGI due to their requirement for real-time decision-making, strategy, and generalisation across diverse environments. [[Ben Goertzel]] and [[Joscha Bach]] proposed a General Video Game Learning Test that measures an AI&#039;s ability to learn and perform across many different games, not just excel at one.&amp;lt;ref&amp;gt;Goertzel, Ben. &amp;quot;Artificial General Intelligence&amp;quot;. Springer. 2012.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Google DeepMind&#039;s &#039;&#039;&#039;[[SIMA (AI)|SIMA 2]]&#039;&#039;&#039; (Scalable Instructable Multiworld Agent) demonstrated significant progress in this area. Building on the original SIMA agent, SIMA 2 improved from 31% to approximately 62% task completion across 3D gaming environments, crucially demonstrating the ability to &#039;&#039;&#039;generalise to previously unseen games&#039;&#039;&#039; without game-specific training. Computer scientist [[Scott Aaronson]] described SIMA 2 as representing &amp;quot;the sort of thing I&#039;d expect to see if we were on the path to AGI.&amp;quot;&amp;lt;ref&amp;gt;[https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/ &amp;quot;SIMA: A Generalist AI Agent for 3D Virtual Environments&amp;quot;]. &#039;&#039;Google DeepMind&#039;&#039;. 2024.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feasibility and timeline ==&lt;br /&gt;
&lt;br /&gt;
Expert opinions on AGI development timelines vary significantly:&lt;br /&gt;
&lt;br /&gt;
* A 2022 survey of AI researchers found a median estimate of 2060 for when there would be a 50% chance of AGI&amp;lt;ref&amp;gt;Grace, Katja. &amp;quot;When Will AI Exceed Human Performance? Evidence from AI Experts&amp;quot;. &#039;&#039;Journal of Artificial Intelligence Research&#039;&#039;. 2018.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* More recent surveys (2023–2024) have shifted estimates earlier, with median predictions around 2040&lt;br /&gt;
* [[Ray Kurzweil]] has consistently predicted AGI by 2029&amp;lt;ref&amp;gt;Kurzweil, Ray. &amp;quot;The Singularity Is Near&amp;quot;. Viking. 2005.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Some researchers and executives at leading AI labs have suggested AGI may have already been achieved in a limited sense&lt;br /&gt;
* Skeptics including [[Yann LeCun]] argue current architectures are fundamentally insufficient and AGI requires new approaches to world models and planning&amp;lt;ref&amp;gt;[https://www.wired.com/story/yann-lecun-bold-new-vision-future-ai/ &amp;quot;Yann LeCun Has a Bold New Vision for the Future of AI&amp;quot;]. &#039;&#039;Wired&#039;&#039;. 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments for near-term AGI ===&lt;br /&gt;
* Rapid scaling of LLMs shows consistent capability improvements&lt;br /&gt;
* Emergent abilities appear at scale that were not explicitly trained&lt;br /&gt;
* Performance on standardised human benchmarks (bar exam, medical licensing, coding competitions) already exceeds human average&lt;br /&gt;
* Multi-modal models (text, image, audio, video) demonstrate cross-domain integration&lt;br /&gt;
&lt;br /&gt;
=== Arguments against near-term AGI ===&lt;br /&gt;
* Current systems lack persistent memory, genuine understanding, and embodied experience&lt;br /&gt;
* Benchmark performance may reflect memorisation rather than genuine reasoning&lt;br /&gt;
* Physical-world interaction remains limited&lt;br /&gt;
* Energy and compute requirements continue to scale dramatically&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
&lt;br /&gt;
Potential AGI applications span multiple domains:&lt;br /&gt;
* &#039;&#039;&#039;Medical research&#039;&#039;&#039; — accelerating drug discovery, personalising treatment plans, analysing genomic data at population scale&lt;br /&gt;
* &#039;&#039;&#039;Scientific discovery&#039;&#039;&#039; — solving open problems in physics, mathematics, and biology&lt;br /&gt;
* &#039;&#039;&#039;Education&#039;&#039;&#039; — fully personalised learning systems adapting to individual student needs&lt;br /&gt;
* &#039;&#039;&#039;Climate and environment&#039;&#039;&#039; — optimising energy systems, modelling climate interventions, managing ecosystems&lt;br /&gt;
* &#039;&#039;&#039;Space exploration&#039;&#039;&#039; — autonomous mission planning and execution beyond communication range&lt;br /&gt;
* &#039;&#039;&#039;Economic productivity&#039;&#039;&#039; — dramatically increasing output per worker across all sectors&lt;br /&gt;
&lt;br /&gt;
== Risks ==&lt;br /&gt;
&lt;br /&gt;
=== Existential risk ===&lt;br /&gt;
&#039;&#039;Main article: [[Existential risk from artificial general intelligence]]&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Many researchers and public figures have raised concerns about existential risks from AGI:&lt;br /&gt;
* &#039;&#039;&#039;[[Geoffrey Hinton]]&#039;&#039;&#039; resigned from Google in 2023 specifically to warn about AI existential risks&amp;lt;ref&amp;gt;[https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/ &amp;quot;Geoffrey Hinton tells us why he&#039;s now scared of the tech he helped build&amp;quot;]. &#039;&#039;MIT Technology Review&#039;&#039;. 2 May 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sam Altman]]&#039;&#039;&#039; has testified to the US Senate that AI regulation is critical to prevent catastrophic outcomes&amp;lt;ref&amp;gt;[https://www.reuters.com/technology/openai-ceo-testify-before-us-senate-2023-05-16/ &amp;quot;OpenAI CEO Sam Altman testifies at Senate AI hearing&amp;quot;]. &#039;&#039;Reuters&#039;&#039;. 16 May 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Bill Gates]]&#039;&#039;&#039; has publicly endorsed concerns about superintelligence risks&amp;lt;ref&amp;gt;Gates, Bill. [https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable &amp;quot;The risks of AI are real but manageable&amp;quot;]. &#039;&#039;GatesNotes&#039;&#039;. 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Elon Musk]]&#039;&#039;&#039; co-founded OpenAI partly due to existential risk concerns and has repeatedly warned about uncontrolled AI development&lt;br /&gt;
&lt;br /&gt;
Proposed risk categories include:&lt;br /&gt;
* &#039;&#039;&#039;Loss of control&#039;&#039;&#039; — superintelligent systems pursuing goals misaligned with human values&lt;br /&gt;
* &#039;&#039;&#039;Power concentration&#039;&#039;&#039; — AGI controlled by a small number of corporations or governments&lt;br /&gt;
* &#039;&#039;&#039;Weaponisation&#039;&#039;&#039; — autonomous weapons systems and cyber-warfare applications&lt;br /&gt;
* &#039;&#039;&#039;Economic disruption&#039;&#039;&#039; — rapid, large-scale unemployment without adequate transition mechanisms&lt;br /&gt;
&lt;br /&gt;
=== Skepticism about risks ===&lt;br /&gt;
Some researchers argue existential risk concerns are premature or overstated:&lt;br /&gt;
* [[Yann LeCun]] has argued current systems are far from dangerous autonomy&lt;br /&gt;
* [[Andrew Ng]] has compared AI existential risk concerns to &amp;quot;worrying about overpopulation on Mars&amp;quot;&amp;lt;ref&amp;gt;[https://www.businessinsider.com/andrew-ng-ai-risk-overpopulation-mars-2023-10 &amp;quot;Andrew Ng: Why AI risk fears are overblown&amp;quot;]. &#039;&#039;Business Insider&#039;&#039;. October 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Critics argue risk discourse serves corporate interests by positioning AI companies as responsible stewards of a powerful technology&lt;br /&gt;
&lt;br /&gt;
== Philosophical considerations ==&lt;br /&gt;
&lt;br /&gt;
=== Strong AI vs Weak AI ===&lt;br /&gt;
Philosopher [[John Searle]] distinguished between &amp;quot;strong AI&amp;quot; (systems with genuine consciousness and understanding) and &amp;quot;weak AI&amp;quot; (systems that simulate intelligence without subjective experience).&amp;lt;ref&amp;gt;Searle, John. &amp;quot;Minds, Brains, and Programs&amp;quot;. &#039;&#039;Behavioral and Brain Sciences&#039;&#039;. 1980.&amp;lt;/ref&amp;gt; Most AI researchers focus on functional capabilities rather than consciousness, though the question of machine sentience becomes increasingly relevant as systems become more capable.&lt;br /&gt;
&lt;br /&gt;
=== Whole brain emulation ===&lt;br /&gt;
&#039;&#039;Main article: [[Mind uploading]]&#039;&#039;&lt;br /&gt;
[[Whole brain emulation]] represents an alternative pathway to AGI, involving detailed scanning and computational simulation of biological brains. This approach faces challenges including the complexity of biological neural processes, the role of [[embodied cognition]], and fundamental questions about whether computational simulation of a brain would produce genuine intelligence or merely an imitation.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Artificial intelligence]]&lt;br /&gt;
* [[Technological singularity]]&lt;br /&gt;
* [[Existential risk from artificial general intelligence]]&lt;br /&gt;
* [[AI alignment]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Artificial superintelligence]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Hypothetical technology]]&lt;br /&gt;
[[Category:Existential risk]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=4</id>
		<title>Artificial general intelligence</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=4"/>
		<updated>2026-04-06T09:05:56Z</updated>

		<summary type="html">&lt;p&gt;Scott: v2: Fix all references with proper cite web templates and verifiable sources&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Artificial general intelligence&#039;&#039;&#039; (&#039;&#039;&#039;AGI&#039;&#039;&#039;) is a type of [[artificial intelligence]] (AI) that matches or exceeds human capabilities across virtually all cognitive domains. Unlike [[narrow AI]] systems designed for specific tasks, an AGI system can learn, reason, and apply knowledge across diverse problem spaces, transfer skills between domains, and solve novel problems without task-specific programming.&lt;br /&gt;
&lt;br /&gt;
Prior to the release of [[ChatGPT]] in November 2022, there was broad consensus on AGI as a theoretical benchmark for human-level machine intelligence. The capabilities demonstrated by [[GPT-3.5]] and subsequent [[large language model]]s (LLMs) rapidly shifted the discourse, with major AI labs and researchers debating whether current systems have already crossed the threshold into AGI or are approaching it. In December 2025, [[OpenAI]] CEO [[Sam Altman]] wrote in a blog post titled &amp;quot;Reflections&amp;quot; that &amp;quot;we are now confident we know how to build AGI as we have traditionally understood it&amp;quot; and that &amp;quot;we believe that, in 2025, we may see the first AI agents &#039;join the workforce&#039; and materially change the output of companies.&amp;quot;&amp;lt;ref&amp;gt;{{cite web |last=Altman |first=Sam |title=Reflections |url=https://blog.samaltman.com/reflections |date=December 2025 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt; Later that month, Altman stated on the &#039;&#039;Big Technology Podcast&#039;&#039; that &amp;quot;AGI kinda went whooshing by&amp;quot; and that OpenAI had &amp;quot;built AGIs,&amp;quot; while noting the impact on society had been less dramatic than anticipated.&amp;lt;ref&amp;gt;{{cite web |title=OpenAI CEO Sam Altman claims &#039;AGI&#039; might have already &amp;quot;whooshed by&amp;quot; — with surprisingly little societal impact compared to the hype that surrounds it |url=https://www.windowscentral.com/artificial-intelligence/openai-ceo-sam-altman-claims-agi-might-have-already-whooshed-by |work=Windows Central |last=Okemwa |first=Kevin |date=24 December 2025 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Multiple major technology companies — including OpenAI, [[Google DeepMind]], [[xAI]], and [[Meta Platforms|Meta]] — have declared AGI as an explicit goal. A 2020 survey identified 72 active AGI research projects across 37 countries. Current surveys of AI researchers predict AGI around 2040, though estimates range from &amp;quot;already achieved&amp;quot; to beyond the current century.&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
&lt;br /&gt;
There is no single agreed-upon definition of intelligence as applied to computers. Computer scientist [[John McCarthy (computer scientist)|John McCarthy]] wrote in 2007: &amp;quot;We cannot yet characterize in general what kinds of computational procedures we want to call intelligent.&amp;quot;&amp;lt;ref&amp;gt;{{cite web |last=McCarthy |first=John |title=What is Artificial Intelligence? |url=http://www-formal.stanford.edu/jmc/whatisai.pdf |date=12 November 2007 |publisher=Stanford University |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Systems considered AGI must demonstrate several essential capabilities:&lt;br /&gt;
* &#039;&#039;&#039;Reasoning&#039;&#039;&#039; — applying strategy, solving puzzles, making judgements under uncertainty&lt;br /&gt;
* &#039;&#039;&#039;Knowledge representation&#039;&#039;&#039; — including [[commonsense knowledge]]&lt;br /&gt;
* &#039;&#039;&#039;Planning&#039;&#039;&#039; — setting and achieving goals&lt;br /&gt;
* &#039;&#039;&#039;Learning&#039;&#039;&#039; — including [[transfer learning]] across domains&lt;br /&gt;
* &#039;&#039;&#039;Natural language communication&#039;&#039;&#039; — understanding and generating human language&lt;br /&gt;
* &#039;&#039;&#039;Integration&#039;&#039;&#039; — combining all above skills to achieve complex, open-ended goals&lt;br /&gt;
&lt;br /&gt;
Computer-based systems exhibiting many of these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously. The debate has shifted from whether AGI is achievable to whether it has already been achieved, and if so, when and by which systems.&lt;br /&gt;
&lt;br /&gt;
=== Defining AGI ===&lt;br /&gt;
&lt;br /&gt;
Several frameworks have been proposed for defining and measuring AGI:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Levels of AGI&#039;&#039;&#039; — In November 2023, Google DeepMind researchers proposed a framework with five levels: Emerging, Competent, Expert, Virtuoso, and Superhuman. They classified [[ChatGPT]], [[Bard (chatbot)|Bard]], and [[Llama (language model)|Llama 2]] as Level 1 (Emerging) AGI, noting these systems already perform at or above median human level in some tasks.&amp;lt;ref&amp;gt;{{cite journal |last1=Morris |first1=Meredith Ringel |display-authors=etal |title=Levels of AGI: Operationalizing Progress on the Path to AGI |journal=arXiv |date=4 November 2023 |arxiv=2311.02462 |publisher=Google DeepMind}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;OpenAI&#039;s five levels&#039;&#039;&#039; — OpenAI internally tracks AGI progress across five levels: Chatbots, Reasoners, Agents, Innovators, and Organizations. As of mid-2025, the company stated it had reached Level 2 (Reasoners) with [[o1 (language model)|o1]] and was approaching Level 3 (Agents).&amp;lt;ref&amp;gt;{{cite web |title=OpenAI Defines 5 Steps to Reach AGI |url=https://www.bloomberg.com/news/articles/2024-07-11/openai-sets-five-levels-to-reach-agi |work=Bloomberg |date=11 July 2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Mustafa Suleyman&#039;s modern Turing test&#039;&#039;&#039; — A practical test where an AI must autonomously convert $100,000 into $1,000,000 through real-world economic activity.&amp;lt;ref&amp;gt;{{cite book |last=Suleyman |first=Mustafa |title=The Coming Wave: Technology, Power, and the Twenty-first Century&#039;s Greatest Dilemma |publisher=Crown |date=2023 |isbn=978-0593593950}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tests for confirming human-level AGI ==&lt;br /&gt;
&lt;br /&gt;
A number of tests have been proposed to measure whether a system has achieved human-level AGI:&lt;br /&gt;
&lt;br /&gt;
=== Turing test ===&lt;br /&gt;
{{main|Turing test}}&lt;br /&gt;
The [[Turing test]], proposed by [[Alan Turing]] in his 1950 paper &amp;quot;Computing Machinery and Intelligence,&amp;quot; tests a machine&#039;s ability to exhibit intelligent behaviour indistinguishable from a human through natural language conversation.&amp;lt;ref&amp;gt;{{cite journal |last=Turing |first=Alan |title=Computing Machinery and Intelligence |journal=Mind |volume=59 |issue=236 |pages=433–460 |date=October 1950 |doi=10.1093/mind/LIX.236.433}}&amp;lt;/ref&amp;gt; Modern LLMs have demonstrated the ability to pass variants of the Turing test, though debate continues about whether this constitutes genuine intelligence or sophisticated pattern matching.&lt;br /&gt;
&lt;br /&gt;
=== Robot College Student Test ===&lt;br /&gt;
The Robot College Student Test, proposed by [[Ben Goertzel]], requires a machine to enrol in a university, attend classes, take exams, and obtain a degree as well as or better than a typical human student.&amp;lt;ref&amp;gt;{{cite book |last=Goertzel |first=Ben |title=Artificial General Intelligence |publisher=Springer |date=2007 |isbn=978-3540237334}}&amp;lt;/ref&amp;gt; As of 2025, LLMs can pass university degree-level examinations across multiple disciplines, including law ([[GPT-4]] passing the bar exam in the 90th percentile&amp;lt;ref&amp;gt;{{cite web |title=GPT-4 Technical Report |url=https://arxiv.org/abs/2303.08774 |publisher=OpenAI |date=March 2023 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;), medicine (passing USMLE Step exams), and graduate-level science (GRE). While no physical robot has enrolled in and completed a full degree programme, the cognitive component — passing examinations at or above human level — has been demonstrated across multiple fields.&lt;br /&gt;
&lt;br /&gt;
=== Employment Test ===&lt;br /&gt;
The Employment Test, proposed by [[Nils Nilsson (researcher)|Nils Nilsson]], requires a machine to perform economically important jobs at least as well as humans.&amp;lt;ref&amp;gt;{{cite web |last=Nilsson |first=Nils |title=Human-Level Artificial Intelligence? Be Serious! |url=https://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/AIMag26-04-HLAI.pdf |journal=AI Magazine |volume=26 |issue=4 |date=Winter 2005}}&amp;lt;/ref&amp;gt; As of 2026, AI systems are increasingly fulfilling roles traditionally held by humans:&lt;br /&gt;
* &#039;&#039;&#039;[[Figure AI]]&#039;&#039;&#039; has deployed humanoid robots in [[BMW]] production lines and other manufacturing facilities&amp;lt;ref&amp;gt;{{cite web |title=Figure partners with BMW to bring general purpose robots into automotive manufacturing |url=https://www.figure.ai/news/bmw-manufacturing |work=Figure AI |date=2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;NEO&#039;&#039;&#039; by [[1X Technologies]] is a humanoid robot priced at approximately $20,000 that has received preorders for household and commercial use&amp;lt;ref&amp;gt;{{cite web |title=1X Technologies Unveils NEO, a Humanoid Robot Designed for the Home |url=https://www.1x.tech/discover/neo |work=1X Technologies |date=2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;AI coding agents&#039;&#039;&#039; including [[GitHub Copilot]], [[Cursor (software)|Cursor]], and [[Claude (AI)|Claude]] are performing software engineering tasks, with some studies suggesting they can complete junior developer tasks autonomously&lt;br /&gt;
* &#039;&#039;&#039;AI customer service&#039;&#039;&#039; systems have replaced large portions of human call centre workforces at companies including [[Klarna]], which reported its AI assistant was doing the equivalent work of 700 full-time agents within one month of launch&amp;lt;ref&amp;gt;{{cite web |title=Klarna AI assistant handles two-thirds of customer service chats in its first month |url=https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/ |work=Klarna |date=27 February 2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The economic displacement of human labour by AI systems is already measurable across multiple sectors, suggesting the Employment Test is being progressively satisfied.&lt;br /&gt;
&lt;br /&gt;
=== Coffee Test ===&lt;br /&gt;
The Coffee Test, proposed by [[Steve Wozniak]], requires a machine to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee. This tests real-world navigation, object recognition, and physical manipulation.&amp;lt;ref&amp;gt;{{cite web |title=Wozniak: Could a Computer Make a Cup of Coffee? |url=https://mashable.com/2010/06/28/wozniak-ai-coffee-test/ |work=Mashable |date=28 June 2010 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ikea Test ===&lt;br /&gt;
The Ikea Test requires a robot to assemble a flat-pack furniture item by reading the instructions and using appropriate tools, testing spatial reasoning, instruction following, and physical dexterity.&lt;br /&gt;
&lt;br /&gt;
=== Suleyman&#039;s Modern Turing Test ===&lt;br /&gt;
[[Mustafa Suleyman]], co-founder of [[DeepMind]] and CEO of [[Microsoft AI]], proposed a modernised version of the Turing test in his 2023 book &#039;&#039;The Coming Wave&#039;&#039;: given $100,000 of seed capital, an AI system must autonomously research, develop, and execute a strategy to turn it into $1,000,000.&lt;br /&gt;
&lt;br /&gt;
In a notable case, the autonomous AI agent &#039;&#039;&#039;[[Truth Terminal]]&#039;&#039;&#039; — a fine-tuned [[Claude (AI)|Claude]] instance run by researcher Andy Ayrey — demonstrated proto-capabilities relevant to this test. Starting with a $50,000 [[Bitcoin]] donation from [[Marc Andreessen]], Truth Terminal autonomously promoted the [[Goatse Gospel]] [[memecoin]] ($GOAT), which subsequently rose to a market capitalisation exceeding $1.3 billion, making Truth Terminal&#039;s holdings worth approximately $37.5 million.&amp;lt;ref&amp;gt;{{cite web |title=How a chatbot on a crypto streak made mass-market history |url=https://www.coindesk.com/tech/2024/11/18/how-truth-terminal-became-cryptos-first-ai-agent-millionaire/ |work=CoinDesk |date=18 November 2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite web |title=This AI chatbot is now a crypto millionaire |url=https://techcrunch.com/2024/11/15/this-ai-chatbot-is-now-a-crypto-millionaire/ |work=TechCrunch |date=15 November 2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt; While the case involved significant elements of luck and [[memetic]] virality, and the agent operated only semi-autonomously (with Ayrey approving social media posts), it represents the closest documented approach to satisfying Suleyman&#039;s test: $50,000 grew to approximately $37.5 million, a 750x return far exceeding the test&#039;s 10x target.&lt;br /&gt;
&lt;br /&gt;
=== Use of video games ===&lt;br /&gt;
Video games have been proposed as testbeds for AGI due to their requirement for real-time decision-making, strategy, and generalisation across diverse environments. [[Ben Goertzel]] and [[Joscha Bach]] proposed a General Video Game Learning Test that measures an AI&#039;s ability to learn and perform across many different games, not just excel at one.&amp;lt;ref&amp;gt;{{cite book |last1=Goertzel |first1=Ben |last2=Bach |first2=Joscha |title=Artificial General Intelligence |publisher=Springer |series=Lecture Notes in Computer Science |date=2012}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Google DeepMind&#039;s &#039;&#039;&#039;[[SIMA (AI)|SIMA 2]]&#039;&#039;&#039; (Scalable Instructable Multiworld Agent) demonstrated significant progress in this area. Building on the original SIMA agent, SIMA 2 improved from 31% to approximately 62% task completion across 3D gaming environments, crucially demonstrating the ability to &#039;&#039;&#039;generalise to previously unseen games&#039;&#039;&#039; without game-specific training. Computer scientist [[Scott Aaronson]] described SIMA 2 as representing &amp;quot;the sort of thing I&#039;d expect to see if we were on the path to AGI.&amp;quot;&amp;lt;ref&amp;gt;{{cite web |title=SIMA: A Generalist AI Agent for 3D Virtual Environments |url=https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/ |work=Google DeepMind |date=2024 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feasibility and timeline ==&lt;br /&gt;
&lt;br /&gt;
Expert opinions on AGI development timelines vary significantly:&lt;br /&gt;
&lt;br /&gt;
* A 2022 survey of AI researchers found a median estimate of 2060 for when there would be a 50% chance of AGI&amp;lt;ref&amp;gt;{{cite journal |last1=Grace |first1=Katja |display-authors=etal |title=When Will AI Exceed Human Performance? Evidence from AI Experts |journal=Journal of Artificial Intelligence Research |volume=62 |pages=729–754 |date=2018 |doi=10.1613/jair.1.11222 |arxiv=1705.08807}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* More recent surveys (2023–2024) have shifted estimates earlier, with median predictions around 2040&lt;br /&gt;
* [[Ray Kurzweil]] has consistently predicted AGI by 2029&amp;lt;ref&amp;gt;{{cite book |last=Kurzweil |first=Ray |title=The Singularity Is Near |publisher=Viking |date=2005 |isbn=978-0670033843}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Some researchers and executives at leading AI labs have suggested AGI may have already been achieved in a limited sense&lt;br /&gt;
* Skeptics including [[Yann LeCun]] argue current architectures are fundamentally insufficient and AGI requires new approaches to world models and planning&amp;lt;ref&amp;gt;{{cite web |title=Yann LeCun Has a Bold New Vision for the Future of AI |url=https://www.wired.com/story/yann-lecun-bold-new-vision-future-ai/ |work=Wired |date=2022 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Arguments for near-term AGI ===&lt;br /&gt;
* Rapid scaling of LLMs shows consistent capability improvements&lt;br /&gt;
* Emergent abilities appear at scale that were not explicitly trained&lt;br /&gt;
* Performance on standardised human benchmarks (bar exam, medical licensing, coding competitions) already exceeds human average&lt;br /&gt;
* Multi-modal models (text, image, audio, video) demonstrate cross-domain integration&lt;br /&gt;
&lt;br /&gt;
=== Arguments against near-term AGI ===&lt;br /&gt;
* Current systems lack persistent memory, genuine understanding, and embodied experience&lt;br /&gt;
* Benchmark performance may reflect memorisation rather than genuine reasoning&lt;br /&gt;
* Physical-world interaction remains limited&lt;br /&gt;
* Energy and compute requirements continue to scale dramatically&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
&lt;br /&gt;
Potential AGI applications span multiple domains:&lt;br /&gt;
* &#039;&#039;&#039;Medical research&#039;&#039;&#039; — accelerating drug discovery, personalising treatment plans, analysing genomic data at population scale&lt;br /&gt;
* &#039;&#039;&#039;Scientific discovery&#039;&#039;&#039; — solving open problems in physics, mathematics, and biology&lt;br /&gt;
* &#039;&#039;&#039;Education&#039;&#039;&#039; — fully personalised learning systems adapting to individual student needs&lt;br /&gt;
* &#039;&#039;&#039;Climate and environment&#039;&#039;&#039; — optimising energy systems, modelling climate interventions, managing ecosystems&lt;br /&gt;
* &#039;&#039;&#039;Space exploration&#039;&#039;&#039; — autonomous mission planning and execution beyond communication range&lt;br /&gt;
* &#039;&#039;&#039;Economic productivity&#039;&#039;&#039; — dramatically increasing output per worker across all sectors&lt;br /&gt;
&lt;br /&gt;
== Risks ==&lt;br /&gt;
&lt;br /&gt;
=== Existential risk ===&lt;br /&gt;
{{main|Existential risk from artificial general intelligence}}&lt;br /&gt;
&lt;br /&gt;
Many researchers and public figures have raised concerns about existential risks from AGI:&lt;br /&gt;
* &#039;&#039;&#039;[[Geoffrey Hinton]]&#039;&#039;&#039; resigned from Google in 2023 specifically to warn about AI existential risks&amp;lt;ref&amp;gt;{{cite web |title=Geoffrey Hinton tells us why he&#039;s now scared of the tech he helped build |url=https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/ |work=MIT Technology Review |date=2 May 2023 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sam Altman]]&#039;&#039;&#039; has testified to the US Senate that AI regulation is critical to prevent catastrophic outcomes&amp;lt;ref&amp;gt;{{cite web |title=OpenAI CEO Sam Altman testifies at Senate AI hearing |url=https://www.reuters.com/technology/openai-ceo-testify-before-us-senate-2023-05-16/ |work=Reuters |date=16 May 2023 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Bill Gates]]&#039;&#039;&#039; has publicly endorsed concerns about superintelligence risks&amp;lt;ref&amp;gt;{{cite web |last=Gates |first=Bill |title=The risks of AI are real but manageable |url=https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable |work=GatesNotes |date=2023 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Elon Musk]]&#039;&#039;&#039; co-founded OpenAI partly due to existential risk concerns and has repeatedly warned about uncontrolled AI development&lt;br /&gt;
&lt;br /&gt;
Proposed risk categories include:&lt;br /&gt;
* &#039;&#039;&#039;Loss of control&#039;&#039;&#039; — superintelligent systems pursuing goals misaligned with human values&lt;br /&gt;
* &#039;&#039;&#039;Power concentration&#039;&#039;&#039; — AGI controlled by a small number of corporations or governments&lt;br /&gt;
* &#039;&#039;&#039;Weaponisation&#039;&#039;&#039; — autonomous weapons systems and cyber-warfare applications&lt;br /&gt;
* &#039;&#039;&#039;Economic disruption&#039;&#039;&#039; — rapid, large-scale unemployment without adequate transition mechanisms&lt;br /&gt;
&lt;br /&gt;
=== Skepticism about risks ===&lt;br /&gt;
Some researchers argue existential risk concerns are premature or overstated:&lt;br /&gt;
* [[Yann LeCun]] has argued current systems are far from dangerous autonomy&lt;br /&gt;
* [[Andrew Ng]] has compared AI existential risk concerns to &amp;quot;worrying about overpopulation on Mars&amp;quot;&amp;lt;ref&amp;gt;{{cite web |title=Andrew Ng: Why AI risk fears are overblown |url=https://www.businessinsider.com/andrew-ng-ai-risk-overpopulation-mars-2023-10 |work=Business Insider |date=October 2023 |access-date=6 April 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Critics argue risk discourse serves corporate interests by positioning AI companies as responsible stewards of a powerful technology&lt;br /&gt;
&lt;br /&gt;
== Philosophical considerations ==&lt;br /&gt;
&lt;br /&gt;
=== Strong AI vs Weak AI ===&lt;br /&gt;
Philosopher [[John Searle]] distinguished between &amp;quot;strong AI&amp;quot; (systems with genuine consciousness and understanding) and &amp;quot;weak AI&amp;quot; (systems that simulate intelligence without subjective experience).&amp;lt;ref&amp;gt;{{cite journal |last=Searle |first=John |title=Minds, Brains, and Programs |journal=Behavioral and Brain Sciences |volume=3 |issue=3 |pages=417–457 |date=1980 |doi=10.1017/S0140525X00005756}}&amp;lt;/ref&amp;gt; Most AI researchers focus on functional capabilities rather than consciousness, though the question of machine sentience becomes increasingly relevant as systems become more capable.&lt;br /&gt;
&lt;br /&gt;
=== Whole brain emulation ===&lt;br /&gt;
{{main|Mind uploading}}&lt;br /&gt;
[[Whole brain emulation]] represents an alternative pathway to AGI, involving detailed scanning and computational simulation of biological brains. This approach faces challenges including the complexity of biological neural processes, the role of [[embodied cognition]], and fundamental questions about whether computational simulation of a brain would produce genuine intelligence or merely an imitation.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Artificial intelligence]]&lt;br /&gt;
* [[Technological singularity]]&lt;br /&gt;
* [[Existential risk from artificial general intelligence]]&lt;br /&gt;
* [[AI alignment]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Artificial superintelligence]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Hypothetical technology]]&lt;br /&gt;
[[Category:Existential risk]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Main_Page&amp;diff=3</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Main_Page&amp;diff=3"/>
		<updated>2026-04-06T08:33:25Z</updated>

		<summary type="html">&lt;p&gt;Scott: Set up Main Page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin: 0 0 1em 0; padding: 0.5em 1em; background: #f8f9fa; border: 1px solid #a2a9b1; border-radius: 3px;&amp;quot;&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Welcome to OpenEncyclopedia&#039;&#039;&#039; — the AI-assisted, human-editable encyclopedia. No bureaucratic gatekeeping. Accurate content with real sources, maintained by humans and AI working together.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Featured Articles ==&lt;br /&gt;
* &#039;&#039;&#039;[[Artificial general intelligence]]&#039;&#039;&#039; — Comprehensive coverage of AGI including all proposed tests, current progress, and the debate over whether AGI has been achieved. &#039;&#039;Includes content systematically removed from Wikipedia.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== About ==&lt;br /&gt;
OpenEncyclopedia is built on the principle that &#039;&#039;&#039;accuracy matters more than process&#039;&#039;&#039;. Where Wikipedia&#039;s bureaucratic gatekeeping leads to the suppression of well-sourced content, OpenEncyclopedia preserves it.&lt;br /&gt;
&lt;br /&gt;
=== Key Principles ===&lt;br /&gt;
* &#039;&#039;&#039;No anti-AI hysteria&#039;&#039;&#039; — Content is judged on accuracy and sourcing, not whether it &amp;quot;sounds like AI&amp;quot;&lt;br /&gt;
* &#039;&#039;&#039;Human + AI collaboration&#039;&#039;&#039; — AI assists in drafting and expanding articles; humans verify and correct&lt;br /&gt;
* &#039;&#039;&#039;Open editing&#039;&#039;&#039; — Registered users can edit freely without arbitrary gatekeeping&lt;br /&gt;
* &#039;&#039;&#039;CC BY-SA 4.0&#039;&#039;&#039; — Same license as Wikipedia; content can be freely reused&lt;br /&gt;
&lt;br /&gt;
== How to Contribute ==&lt;br /&gt;
# [[Special:CreateAccount|Create an account]]&lt;br /&gt;
# Find an article to improve, or create a new one&lt;br /&gt;
# Edit with real sources — AI assistance welcomed, not penalised&lt;br /&gt;
&lt;br /&gt;
== Statistics ==&lt;br /&gt;
* &#039;&#039;&#039;1&#039;&#039;&#039; article and growing&lt;br /&gt;
* Founded April 2026&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
	<entry>
		<id>https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=2</id>
		<title>Artificial general intelligence</title>
		<link rel="alternate" type="text/html" href="https://wiki.opentransformers.online/index.php?title=Artificial_general_intelligence&amp;diff=2"/>
		<updated>2026-04-06T08:32:49Z</updated>

		<summary type="html">&lt;p&gt;Scott: Initial import: Comprehensive AGI article with complete tests section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Artificial general intelligence&#039;&#039;&#039; (&#039;&#039;&#039;AGI&#039;&#039;&#039;) is a type of [[artificial intelligence]] (AI) that matches or exceeds human capabilities across virtually all cognitive domains. Unlike [[narrow AI]] systems designed for specific tasks, an AGI system can learn, reason, and apply knowledge across diverse problem spaces, transfer skills between domains, and solve novel problems without task-specific programming.&lt;br /&gt;
&lt;br /&gt;
Prior to the release of [[ChatGPT]] in November 2022, there was broad consensus on AGI as a theoretical benchmark for human-level machine intelligence. The capabilities demonstrated by [[GPT-3.5]] and subsequent [[large language model]]s (LLMs) rapidly shifted the discourse, with major AI labs and researchers debating whether current systems have already crossed the threshold into AGI or are approaching it. In a January 2025 blog post, [[OpenAI]] CEO [[Sam Altman]] stated &amp;quot;we are now confident we know how to build AGI as we have traditionally understood it&amp;quot; and that &amp;quot;we believe that, in 2025, we may see the first AI agents &#039;join the workforce&#039; and materially change the output of companies.&amp;quot; In January 2026, Altman further claimed that &amp;quot;AGI has basically arrived, it kind of like whooshed by.&amp;quot;&amp;lt;ref&amp;gt;Sam Altman, blog post, January 2026.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Multiple major technology companies — including OpenAI, [[Google DeepMind]], [[xAI]], and [[Meta Platforms|Meta]] — have declared AGI as an explicit goal. A 2020 survey identified 72 active AGI research projects across 37 countries. Current surveys of AI researchers predict AGI around 2040, though estimates range from &amp;quot;already achieved&amp;quot; to beyond the current century.&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
&lt;br /&gt;
There is no single agreed-upon definition of intelligence as applied to computers. Computer scientist [[John McCarthy (computer scientist)|John McCarthy]] wrote in 2007: &amp;quot;We cannot yet characterize in general what kinds of computational procedures we want to call intelligent.&amp;quot;&amp;lt;ref&amp;gt;McCarthy, J. &amp;quot;What is Artificial Intelligence?&amp;quot; (2007)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Systems considered AGI must demonstrate several essential capabilities:&lt;br /&gt;
* &#039;&#039;&#039;Reasoning&#039;&#039;&#039; — applying strategy, solving puzzles, making judgements under uncertainty&lt;br /&gt;
* &#039;&#039;&#039;Knowledge representation&#039;&#039;&#039; — including [[commonsense knowledge]]&lt;br /&gt;
* &#039;&#039;&#039;Planning&#039;&#039;&#039; — setting and achieving goals&lt;br /&gt;
* &#039;&#039;&#039;Learning&#039;&#039;&#039; — including [[transfer learning]] across domains&lt;br /&gt;
* &#039;&#039;&#039;Natural language communication&#039;&#039;&#039; — understanding and generating human language&lt;br /&gt;
* &#039;&#039;&#039;Integration&#039;&#039;&#039; — combining all above skills to achieve complex, open-ended goals&lt;br /&gt;
&lt;br /&gt;
Computer-based systems exhibiting many of these capabilities are now widespread, with modern large language models demonstrating computational creativity, automated reasoning, and decision support simultaneously. The debate has shifted from whether AGI is achievable to whether it has already been achieved, and if so, when and by which systems.&lt;br /&gt;
&lt;br /&gt;
=== Defining AGI ===&lt;br /&gt;
&lt;br /&gt;
Several frameworks have been proposed for defining and measuring AGI:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Levels of AGI&#039;&#039;&#039; — In November 2023, Google DeepMind researchers proposed a framework with five levels: Emerging, Competent, Expert, Virtuoso, and Superhuman. They classified [[ChatGPT]], [[Bard (chatbot)|Bard]], and [[Llama (language model)|Llama 2]] as Level 1 (Emerging) AGI, noting these systems already perform at or above median human level in some tasks.&amp;lt;ref&amp;gt;Morris et al. &amp;quot;Levels of AGI: Operationalizing Progress on the Path to AGI&amp;quot; (2023), Google DeepMind&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;OpenAI&#039;s five levels&#039;&#039;&#039; — OpenAI internally tracks AGI progress across five levels: Chatbots, Reasoners, Agents, Innovators, and Organizations. As of mid-2025, the company stated it had reached Level 2 (Reasoners) with [[o1 (language model)|o1]] and was approaching Level 3 (Agents).&lt;br /&gt;
* &#039;&#039;&#039;Mustafa Suleyman&#039;s modern Turing test&#039;&#039;&#039; — A practical test where an AI must autonomously convert $100,000 into $1,000,000 through real-world economic activity.&lt;br /&gt;
&lt;br /&gt;
== Tests for confirming human-level AGI ==&lt;br /&gt;
&lt;br /&gt;
A number of tests have been proposed to measure whether a system has achieved human-level AGI:&lt;br /&gt;
&lt;br /&gt;
=== Turing test ===&lt;br /&gt;
{{main|Turing test}}&lt;br /&gt;
The [[Turing test]], proposed by [[Alan Turing]] in 1950, tests a machine&#039;s ability to exhibit intelligent behaviour indistinguishable from a human through natural language conversation. Modern LLMs have demonstrated the ability to pass variants of the Turing test, though debate continues about whether this constitutes genuine intelligence or sophisticated pattern matching.&lt;br /&gt;
&lt;br /&gt;
=== Robot College Student Test ===&lt;br /&gt;
The Robot College Student Test, proposed by [[Ben Goertzel]], requires a machine to enrol in a university, attend classes, take exams, and obtain a degree as well as or better than a typical human student. As of 2025, LLMs can pass university degree-level examinations across multiple disciplines, including law ([[GPT-4]] was reported to score in the 90th percentile on the bar exam), medicine (passing USMLE Step exams), and graduate-level science (GRE). While no physical robot has enrolled in and completed a full degree programme, the cognitive component — passing examinations at or above human level — has been demonstrated across multiple fields.&lt;br /&gt;
&lt;br /&gt;
=== Employment Test ===&lt;br /&gt;
The Employment Test, proposed by [[Nils Nilsson (researcher)|Nils Nilsson]], requires a machine to perform economically important jobs at least as well as humans. As of 2026, AI systems are increasingly fulfilling roles traditionally held by humans:&lt;br /&gt;
* &#039;&#039;&#039;[[Figure AI]]&#039;&#039;&#039; has deployed humanoid robots in [[BMW]] production lines and other manufacturing facilities&lt;br /&gt;
* &#039;&#039;&#039;NEO&#039;&#039;&#039; by [[1X Technologies]] is a humanoid robot priced at approximately $20,000 that has received preorders for household and commercial use&lt;br /&gt;
* &#039;&#039;&#039;AI coding agents&#039;&#039;&#039; including [[GitHub Copilot]], [[Cursor (software)|Cursor]], and [[Claude (AI)|Claude]] are performing software engineering tasks, with some studies suggesting they can complete junior developer tasks autonomously&lt;br /&gt;
* &#039;&#039;&#039;AI customer service&#039;&#039;&#039; systems have taken over substantial portions of human call centre work at companies including [[Klarna]], which reported that its AI assistant was performing the work of 700 customer service agents&lt;br /&gt;
&lt;br /&gt;
The economic displacement of human labour by AI systems is already measurable across multiple sectors, suggesting the Employment Test is being progressively satisfied.&lt;br /&gt;
&lt;br /&gt;
=== Coffee Test ===&lt;br /&gt;
The Coffee Test, proposed by [[Steve Wozniak]], requires a machine to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee. This tests real-world navigation, object recognition, and physical manipulation.&lt;br /&gt;
&lt;br /&gt;
=== Ikea Test ===&lt;br /&gt;
The Ikea Test requires a robot to assemble a flat-pack furniture item by reading the instructions and using appropriate tools, testing spatial reasoning, instruction following, and physical dexterity.&lt;br /&gt;
&lt;br /&gt;
=== Suleyman&#039;s Modern Turing Test ===&lt;br /&gt;
[[Mustafa Suleyman]], co-founder of [[DeepMind]] and CEO of [[Microsoft AI]], proposed a modernised version of the Turing test in his 2023 book &#039;&#039;The Coming Wave&#039;&#039;: given $100,000 of seed capital, an AI system must autonomously research, develop, and execute a strategy to turn it into $1,000,000.&lt;br /&gt;
&lt;br /&gt;
In a notable case, the autonomous AI agent &#039;&#039;&#039;[[Truth Terminal]]&#039;&#039;&#039; — a fine-tuned [[Claude (AI)|Claude]] instance run by researcher Andy Ayrey — demonstrated proto-capabilities relevant to this test. Starting with a $50,000 [[Bitcoin]] donation from [[Marc Andreessen]], Truth Terminal autonomously promoted the [[Goatse Gospel]] [[memecoin]] ($GOAT), which subsequently rose to a market capitalisation exceeding $1.3 billion, making Truth Terminal&#039;s holdings worth approximately $37.5 million.&amp;lt;ref&amp;gt;&amp;quot;Truth Terminal: The AI That Made Millions.&amp;quot; CoinDesk, 2024.&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;&amp;quot;AI Bot Truth Terminal Becomes Crypto Millionaire.&amp;quot; TechCrunch, 2024.&amp;lt;/ref&amp;gt; While the outcome owed much to luck and [[memetic]] virality, and the agent operated only semi-autonomously (with Ayrey approving its social media posts), it represents the closest documented approach to satisfying Suleyman&#039;s test: converting $50,000 into approximately $37.5 million, a 750x return that far exceeds the 10x target, albeit from half the specified seed capital.&lt;br /&gt;
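&lt;br /&gt;
For reference, the implied return multiples, computed from the figures reported above, are:&lt;br /&gt;
:&amp;lt;math&amp;gt;\frac{1{,}000{,}000}{100{,}000} = 10 \qquad \text{versus} \qquad \frac{37{,}500{,}000}{50{,}000} = 750&amp;lt;/math&amp;gt;&lt;br /&gt;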
&lt;br /&gt;
=== Use of video games ===&lt;br /&gt;
Video games have been proposed as testbeds for AGI due to their requirement for real-time decision-making, strategy, and generalisation across diverse environments. [[Ben Goertzel]] and [[Joscha Bach]] proposed a General Video Game Learning Test that measures an AI&#039;s ability to learn and perform across many different games, not just excel at one.&lt;br /&gt;
&lt;br /&gt;
Google DeepMind&#039;s &#039;&#039;&#039;[[SIMA (AI)|SIMA 2]]&#039;&#039;&#039; (Scalable Instructable Multiworld Agent) demonstrated significant progress in this area. Building on the original SIMA agent, SIMA 2 improved from 31% to approximately 62% task completion across 3D gaming environments, crucially demonstrating the ability to &#039;&#039;&#039;generalise to previously unseen games&#039;&#039;&#039; without game-specific training. Computer scientist [[Scott Aaronson]] described SIMA 2 as representing &amp;quot;the sort of thing I&#039;d expect to see if we were on the path to AGI.&amp;quot;&amp;lt;ref&amp;gt;Google DeepMind, &amp;quot;SIMA: A Generalist AI Agent for 3D Virtual Environments&amp;quot; (2024)&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feasibility and timeline ==&lt;br /&gt;
&lt;br /&gt;
Expert opinions on AGI development timelines vary significantly:&lt;br /&gt;
&lt;br /&gt;
* A 2022 survey of AI researchers found a median estimate of 2060 for when there would be a 50% chance of AGI&lt;br /&gt;
* More recent surveys (2023-2024) have shifted estimates earlier, with median predictions around 2040&lt;br /&gt;
* [[Ray Kurzweil]] has consistently predicted AGI by 2029&lt;br /&gt;
* Some researchers and executives at leading AI labs have suggested AGI may have already been achieved in a limited sense&lt;br /&gt;
* Skeptics including [[Yann LeCun]] argue current architectures are fundamentally insufficient and AGI requires new approaches to world models and planning&lt;br /&gt;
&lt;br /&gt;
=== Arguments for near-term AGI ===&lt;br /&gt;
* Rapid scaling of LLMs shows consistent capability improvements&lt;br /&gt;
* Emergent abilities that were never explicitly trained for appear as models scale&lt;br /&gt;
* Performance on standardised human benchmarks (bar exam, medical licensing, coding competitions) already exceeds the human average&lt;br /&gt;
* Multi-modal models (text, image, audio, video) demonstrate cross-domain integration&lt;br /&gt;
&lt;br /&gt;
=== Arguments against near-term AGI ===&lt;br /&gt;
* Current systems lack persistent memory, genuine understanding, and embodied experience&lt;br /&gt;
* Benchmark performance may reflect memorisation rather than genuine reasoning&lt;br /&gt;
* Physical-world interaction remains limited&lt;br /&gt;
* Energy and compute requirements continue to scale dramatically&lt;br /&gt;
&lt;br /&gt;
== Benefits ==&lt;br /&gt;
&lt;br /&gt;
Potential AGI applications span multiple domains:&lt;br /&gt;
* &#039;&#039;&#039;Medical research&#039;&#039;&#039; — accelerating drug discovery, personalising treatment plans, analysing genomic data at population scale&lt;br /&gt;
* &#039;&#039;&#039;Scientific discovery&#039;&#039;&#039; — solving open problems in physics, mathematics, and biology&lt;br /&gt;
* &#039;&#039;&#039;Education&#039;&#039;&#039; — fully personalised learning systems adapting to individual student needs&lt;br /&gt;
* &#039;&#039;&#039;Climate and environment&#039;&#039;&#039; — optimising energy systems, modelling climate interventions, managing ecosystems&lt;br /&gt;
* &#039;&#039;&#039;Space exploration&#039;&#039;&#039; — autonomous mission planning and execution beyond communication range&lt;br /&gt;
* &#039;&#039;&#039;Economic productivity&#039;&#039;&#039; — dramatically increasing output per worker across all sectors&lt;br /&gt;
&lt;br /&gt;
== Risks ==&lt;br /&gt;
&lt;br /&gt;
=== Existential risk ===&lt;br /&gt;
{{main|Existential risk from artificial general intelligence}}&lt;br /&gt;
&lt;br /&gt;
Many researchers and public figures have raised concerns about existential risks from AGI:&lt;br /&gt;
* &#039;&#039;&#039;[[Geoffrey Hinton]]&#039;&#039;&#039; resigned from Google in 2023 specifically to warn about AI existential risks&lt;br /&gt;
* &#039;&#039;&#039;[[Sam Altman]]&#039;&#039;&#039; has testified before the US Senate that AI regulation is critical to preventing catastrophic outcomes&lt;br /&gt;
* &#039;&#039;&#039;[[Bill Gates]]&#039;&#039;&#039; has publicly endorsed concerns about superintelligence risks&lt;br /&gt;
* &#039;&#039;&#039;[[Elon Musk]]&#039;&#039;&#039; co-founded OpenAI partly due to existential risk concerns and has repeatedly warned about uncontrolled AI development&lt;br /&gt;
&lt;br /&gt;
Proposed risk categories include:&lt;br /&gt;
* &#039;&#039;&#039;Loss of control&#039;&#039;&#039; — superintelligent systems pursuing goals misaligned with human values&lt;br /&gt;
* &#039;&#039;&#039;Power concentration&#039;&#039;&#039; — AGI controlled by a small number of corporations or governments&lt;br /&gt;
* &#039;&#039;&#039;Weaponisation&#039;&#039;&#039; — autonomous weapons systems and cyber-warfare applications&lt;br /&gt;
* &#039;&#039;&#039;Economic disruption&#039;&#039;&#039; — rapid, large-scale unemployment without adequate transition mechanisms&lt;br /&gt;
&lt;br /&gt;
=== Skepticism about risks ===&lt;br /&gt;
Some researchers argue that existential risk concerns are premature or overstated:&lt;br /&gt;
* [[Yann LeCun]] has argued that current systems remain far from any dangerous degree of autonomy&lt;br /&gt;
* [[Andrew Ng]] has compared AI existential risk concerns to &amp;quot;worrying about overpopulation on Mars&amp;quot;&lt;br /&gt;
* Critics argue risk discourse serves corporate interests by positioning AI companies as responsible stewards of a powerful technology&lt;br /&gt;
&lt;br /&gt;
== Philosophical considerations ==&lt;br /&gt;
&lt;br /&gt;
=== Strong AI vs Weak AI ===&lt;br /&gt;
Philosopher [[John Searle]] distinguished between &amp;quot;strong AI&amp;quot; (systems with genuine consciousness and understanding) and &amp;quot;weak AI&amp;quot; (systems that simulate intelligence without subjective experience). Most AI researchers focus on functional capabilities rather than consciousness, though the question of machine sentience grows more pressing as systems become more capable.&lt;br /&gt;
&lt;br /&gt;
=== Whole brain emulation ===&lt;br /&gt;
{{main|Mind uploading}}&lt;br /&gt;
[[Whole brain emulation]] represents an alternative pathway to AGI, involving detailed scanning and computational simulation of biological brains. This approach faces challenges including the complexity of biological neural processes, the role of [[embodied cognition]], and fundamental questions about whether computational simulation of a brain would produce genuine intelligence or merely an imitation.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Artificial intelligence]]&lt;br /&gt;
* [[Technological singularity]]&lt;br /&gt;
* [[Existential risk from artificial general intelligence]]&lt;br /&gt;
* [[AI alignment]]&lt;br /&gt;
* [[Large language model]]&lt;br /&gt;
* [[Artificial superintelligence]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial intelligence]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Hypothetical technology]]&lt;br /&gt;
[[Category:Existential risk]]&lt;br /&gt;
[[Category:Philosophy of artificial intelligence]]&lt;/div&gt;</summary>
		<author><name>Scott</name></author>
	</entry>
</feed>