
Our Evolutionary Past Can Teach Us about AI’s Future

2023-11-11 02:17:37

As artificial intelligence advances, experts have warned about its potential to cause human extinction. Exactly how this might come about is a matter of speculation—but it’s not hard to see that intelligent robots could build more of themselves, improve on their own designs and pursue their own interests. And that could be a threat to humanity.

Last week, an AI Safety Summit was held at Bletchley Park in the U.K. It sought to address some of the threats associated with the most advanced AI technologies, among them “loss of control” risks—the possibility that such systems might become independent.

It’s worth asking what we can predict about such scenarios based on things we already know. Machines able to act independently and upgrade their own designs would be subject to the same evolutionary laws as bacteria, animals and plants. Thus evolution has a lot to teach us about how AI might develop—and how to ensure humans survive its rise.

A first lesson is that, in the long run, there are no free lunches. Unfortunately, that means we can’t expect AI to produce a hedonistic paradise where every human need is met by robot servants. Most organisms live close to the edge of survival, eking out an existence as best they can. Many humans today do live more comfortable and prosperous lives, but evolutionary history suggests that AI could disrupt this. The fundamental reason is competition.

This is an argument that traces back to Darwin, and applies more widely than just to AI. However, it’s easily illustrated using an AI-based scenario. Imagine we have two future AI-run nation-states where humans no longer make significant economic contributions. One slavishly devotes itself to meeting every hedonistic need of its human population. The other puts less energy into its humans and focuses more on acquiring resources and improving its technology. The latter would become more powerful over time. It might take over the first one. And eventually, it might decide to dispense with its humans altogether. The example does not have to be a nation-state for this argument to work; the key thing is the competition. One takeaway from such scenarios is that humans should try to keep their economic relevance. In the long run, the only way to ensure our survival is to actively work toward it ourselves.

Another insight is that evolution is incremental. We can see this in major past innovations such as the evolution of multicellularity. For most of Earth’s history, life consisted mainly of single-celled organisms. Environmental conditions were unsuitable for large multicellular organisms due to low oxygen levels. However, even when the environment became more friendly, the world was not suddenly filled with redwoods and whales and humans. Building a complex structure like a tree or a mammal requires many capabilities, including elaborate gene regulatory networks and cellular mechanisms for adhesion and communication. These arose bit by bit over time.

AI is also likely to advance incrementally. Rather than a pure robot civilization springing up de novo, it’s more likely that AI will integrate itself into things that already exist in our world. The resulting hybrid entities could take many forms; imagine, for example, a company with a human owner but machine-based operations and research. Among other things, arrangements like this would lead to extreme inequality among humans, as owners would profit from their control of AI, while those without such control would become unemployed and impoverished.

Such hybrids are also likely to be where the immediate threat to humanity lies. Some have argued that the “robots take over the world” scenario is overblown because AI will not intrinsically have a desire to dominate. That may be true. However, humans certainly do—and this could be a big part of what they would contribute to a collaboration with machines. With all this in mind, perhaps another principle for us to adopt is that AI should not be allowed to exacerbate inequality in our society.

Contemplating all this may leave one wondering if humans have any long-term prospects at all. Another observation from the history of life on Earth is that major innovations allow life to occupy new niches. Multicellularity evolved in the oceans and enabled novel ways of making a living there. For animals, these included burrowing through sediments and new kinds of predation. This opened up new food options and allowed animals to diversify, eventually leading to the riot of shapes and lifestyles that exist today. Crucially, the creation of new niches does not mean all the old ones go away. After animals and plants evolved, bacteria and other single-celled organisms persisted. Today, some of them do similar things to what they did before (and indeed are central to the functioning of the biosphere). Others have profited from new opportunities such as living in the guts of animals.

Hopefully some possible futures include an ecological niche for humans. After all, some things that humans need (such as oxygen and organic food), machines do not. Maybe we can convince them to go out into the solar system to mine the outer planets and harvest the sun’s energy. And leave the Earth to us.

But we may need to act quickly. A final lesson from the history of biological innovations is that what happens in the beginning matters. The evolution of multicellularity led to the Cambrian explosion, a period more than 500 million years ago when large multicellular animals appeared in great diversity. Many of these early animals went extinct without descendants. Because the ones that survived went on to found major groupings of animals, what happened in this era determined much about the biological world of today. It has been argued that many paths were possible in the Cambrian, and that the world we ended up with was not foreordained. If the development of AI is like that, then now is the time when we have maximum leverage to steer events.

Steering events, however, requires specifics. It is all well and good to have general principles like "humans should maintain an economic role" and "AI should not exacerbate inequality." The challenge is to turn those into specific regulations regarding the development and use of AI. We'll need to do that despite the fact that computer scientists themselves don't know how AI will progress over the next 10 years, much less over the long term. And we'll also need to apply the regulations we come up with relatively consistently across the world. All of this will require us to act with more coherence and foresight than we've demonstrated when dealing with other existential problems such as climate change.

It seems like a tall order. But then again, four or five million years ago, no one would have suspected that our small-brained, relatively apelike ancestors would evolve into something that can sequence genomes and send probes to the edge of the solar system. With luck, maybe we’ll rise to the occasion again.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
