{"id":3461,"date":"2025-07-22T16:17:04","date_gmt":"2025-07-22T08:17:04","guid":{"rendered":"https:\/\/www.rzautoassembly.com\/?p=3461"},"modified":"2025-07-22T16:17:42","modified_gmt":"2025-07-22T08:17:42","slug":"human-level-ai-is-not-inevitable-we-have-the-power-to-change-course","status":"publish","type":"post","link":"https:\/\/www.rzautoassembly.com\/fi\/human-level-ai-is-not-inevitable-we-have-the-power-to-change-course\/","title":{"rendered":"Human-Level AI Is Not Inevitable. We Have the Power to Change Course"},"content":{"rendered":"<h3><a href=\"https:\/\/www.rzautoassembly.com\/fi\/product\/epson-robot\/\"><img fetchpriority=\"high\" decoding=\"async\" class=\"size-medium wp-image-3463 aligncenter\" src=\"https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4-300x228.png.webp\" alt=\"\" width=\"300\" height=\"228\" srcset=\"https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4-300x228.png.webp 300w, https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4-1024x779.png.webp 1024w, https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4-768x584.png.webp 768w, https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4-16x12.png.webp 16w, https:\/\/www.rzautoassembly.com\/wp-content\/smush-webp\/2025\/07\/\u975e\u6807\u81ea\u52a8\u5316\u8bbe\u5907\u5e7f\u544a\u521b\u610f-221-4.png.webp 1136w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><\/h3>\n<p>\u201cTechnology happens because it is possible,\u201d OpenAI CEO Sam Altman told the\u00a0New York Times\u00a0in 2019, echoing Robert Oppenheimer, the father of the atomic bomb. 
It\u2019s a Silicon Valley mantra: technology marches forward, unstoppable.<\/p>\n<p>Another common belief in tech circles is that artificial general intelligence (AGI)\u2014human-level AI\u2014will lead to one of two fates: a post-scarcity utopia or human extinction. For countless species, humans spelled doom not because we were stronger, but because we were smarter and more coordinated. Extinction was often a byproduct of our goals, not a deliberate act. AGI would be a new species\u2014one that might outthink or outnumber us, seeing humanity as an obstacle (like an anthill in a dam\u2019s path) or a resource (like factory-farmed animals).<\/p>\n<p>Altman and other top AI lab leaders acknowledge that AI-driven extinction is a real risk\u2014joining hundreds of researchers and public figures in this concern. Given this, a simple question arises: Should we build a technology that could kill us if it fails?<\/p>\n<p>The most common reply is: \u201cAGI is inevitable.\u201d It\u2019s too useful to ignore\u2014the \u201clast invention humanity will ever need,\u201d as a colleague of Alan Turing put it. And if we don\u2019t build it, someone else will\u2014less responsibly. A Silicon Valley ideology called \u201ceffective accelerationism\u201d (e\/acc) even claims AGI is inevitable due to thermodynamics and \u201ctechnocapital\u201d: \u201cThe engine can\u2019t be stopped. Progress only moves forward.\u201d<\/p>\n<p>But this is a myth. Technology isn\u2019t a force of nature\u2014it\u2019s a product of human choices, shaped by incentives, values, and action. History proves we\u2019ve reined in powerful technologies before.<\/p>\n<p>We\u2019ve Regulated Powerful Technologies Before<\/p>\n<p>Fearing risks, biologists banned then regulated recombinant DNA experiments in the 1970s. Human cloning has been technically possible for over a decade, yet no one has done it; the only scientist who genetically engineered humans was imprisoned. 
Nuclear power, despite its carbon-free potential, faces strict regulations due to catastrophe fears.<\/p>\n<p>Even nuclear weapons\u2014now seemingly inescapable\u2014were a \u201ccontingent\u201d creation. The U.S. built them in 1945 partly because of a false belief that Germany was racing to do the same. Historian Philip Zelikow notes: \u201cIf the U.S. hadn\u2019t built the atomic bomb in WWII, it\u2019s unclear if it ever would have been built.\u201d Later, Reagan and Gorbachev nearly agreed to eliminate all nukes; while that failed, global stockpiles are now under 20% of their 1986 peak, thanks to international agreements.<\/p>\n<p>Climate action offers another example. Fossil fuels have massive economic incentives, yet advocacy shifted public opinion and accelerated decarbonization. Extinction Rebellion\u2019s 2019 protests pushed the UK to declare a climate emergency. The Sierra Club\u2019s \u201cBeyond Coal\u201d campaign closed a third of U.S. coal plants in five years, dropping U.S. per capita emissions below 1913 levels.<\/p>\n<p>AGI Is Not Inevitable\u2014We Control the Timeline<\/p>\n<p>Regulating AGI is easier than decarbonization. Fossil fuels power 82% of global energy; we don\u2019t depend on hypothetical AGI. Slowing AGI development wouldn\u2019t stop us from using existing AI for medicine, climate, or other critical needs.<\/p>\n<p>Capitalists love AGI\u2014it could cut workers out of the loop\u2014but governments care about more than profits: employment, stability, democracy. They aren\u2019t prepared for a world with mass technological unemployment. And while capital often wins, it doesn\u2019t always. As one OpenAI safety researcher noted, politicians like AOC or Josh Hawley could \u201cderail\u201d unchecked AI progress.<\/p>\n<p>AGI boosters claim it\u2019s \u201cimminent,\u201d but timelines matter. We had the computing power to train GPT-2 over a decade before OpenAI did\u2014we just didn\u2019t see the point. 
Today, top labs race so fiercely they skip safety measures their own teams recommend. A \u201csafety tax\u201d slows progress, and no lab wants to fall behind. But this is a choice, not a law of nature.<\/p>\n<p>Governments could change this. AGI requires massive supercomputers and specialized chips\u2014resources controlled by a small, regulated industry. \u201cCompute governance\u201d could halt unchecked training runs over a certain threshold (e.g., $100 million per run) without stifling smaller innovators. International treaties could share AI benefits while preventing reckless scaling\u2014just as the Montreal Protocol fixed the ozone layer, or the Non-Proliferation Treaty curbed nuclear spread.<\/p>\n<p>The Public Doesn\u2019t Want AGI<\/p>\n<p>When polled, most Americans oppose superhuman AI. As AI becomes more common, opposition grows. Boosters dismiss this as \u201cneo-Luddism,\u201d but their \u201cinevitability\u201d talk is a dodge: they don\u2019t want to argue their case in public, because they\u2019d lose.<\/p>\n<p>AGI\u2019s allure is strong, but its risks are existential. We need a global effort to resist it. Technology doesn\u2019t \u201chappen\u201d\u2014people make it happen. And we can choose not to.<\/p>","protected":false},"excerpt":{"rendered":"<p>\u201cTechnology happens because it is possible,\u201d OpenAI CEO Sam Altman told the\u00a0New York Times\u00a0in 2019, echoing Robert Oppenheimer, the father of the atomic bomb. 
It\u2019s a Silicon Valley mantra: technology marches forward, unstoppable. Another common belief in tech circles is that artificial general intelligence (AGI)\u2014human-level AI\u2014will lead to one of two fates: a post-scarcity utopia [\u2026]<\/p>","protected":false},"author":1,"featured_media":3462,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1,124],"tags":[],"class_list":["post-3461","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","category-technology"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/posts\/3461","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/comments?post=3461"}],"version-history":[{"count":0,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/posts\/3461\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/media\/3462"}],"wp:attachment":[{"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/media?parent=3461"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/categories?post=3461"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.rzautoassembly.com\/fi\/wp-json\/wp\/v2\/tags?post=3461"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}