
If AI Becomes Conscious, Here’s How We Can Tell

2023-08-29 16:30:17

Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”.

Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were?

To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.

The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”.

Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds.

Nature reached out to two of the major technology firms involved in advancing AI — Microsoft and Google. A spokesperson for Microsoft said that the company’s development of AI is centred on assisting human productivity in a responsible way, rather than replicating human intelligence. What’s clear since the introduction of GPT-4 — the most advanced version of ChatGPT released publicly — “is that new methodologies are required to assess the capabilities of these AI models as we explore how to achieve the full potential of AI to benefit society as a whole”, the spokesperson said. Google did not respond.

What is consciousness?

One of the challenges in studying consciousness in AI is defining what it means to be conscious. Peters says that for the purposes of the report, the researchers focused on ‘phenomenal consciousness’, otherwise known as the subjective experience. This is the experience of being — what it’s like to be a person, an animal or an AI system (if one of them does turn out to be conscious).

There are many neuroscience-based theories that describe the biological basis of consciousness. But there is no consensus on which is the ‘right’ one. To create their framework, the authors therefore used a range of these theories. The idea is that if an AI system functions in a way that matches aspects of many of these theories, then there is a greater likelihood that it is conscious.

They argue that this is a better approach for assessing consciousness than simply putting a system through a behavioural test — say, asking ChatGPT whether it is conscious, or challenging it and seeing how it responds. That’s because AI systems have become remarkably good at mimicking humans.

The group’s approach, which the authors describe as theory-heavy, is a good way to go, according to neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK. What it highlights, however, “is that we need more precise, well-tested theories of consciousness”, he says.

A theory-heavy approach

To develop their criteria, the authors assumed that consciousness relates to how systems process information, irrespective of what they are made of — be it neurons, computer chips or something else. This approach is called computational functionalism. They also assumed that neuroscience-based theories of consciousness, which are studied through brain scans and other techniques in humans and animals, can be applied to AI.

On the basis of these assumptions, the team selected six of these theories and extracted from them a list of consciousness indicators. One of them — the global workspace theory — asserts, for example, that humans and other animals use many specialized systems, also called modules, to perform cognitive tasks such as seeing and hearing. These modules work independently, but in parallel, and they share information by integrating into a single system. A person would evaluate whether a particular AI system displays an indicator derived from this theory, Long says, “by looking at the architecture of the system and how the information flows through it”.
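The global-workspace idea described above can be caricatured in a few lines of code: independent specialized modules each process the same input, and a single workspace integrates and broadcasts their outputs. This is a purely illustrative toy sketch; the module names and classes below are invented for this example and are not part of the authors' framework.

```python
# Toy sketch of the global-workspace pattern: specialized modules work
# independently, and a single workspace integrates their outputs into
# one shared broadcast. Illustrative only, not the paper's methodology.

class Module:
    """A specialized processor, e.g. vision or hearing."""
    def __init__(self, name):
        self.name = name

    def process(self, stimulus):
        # Each module extracts its own view of the same stimulus.
        return f"{self.name}:{stimulus}"

class GlobalWorkspace:
    """Integrates module outputs into a single state all modules can read."""
    def __init__(self, modules):
        self.modules = modules
        self.broadcast = None

    def step(self, stimulus):
        # Conceptually the modules run in parallel; here we simply
        # collect each module's output for the same stimulus.
        outputs = [m.process(stimulus) for m in self.modules]
        # Integration step: fuse the outputs and broadcast the result.
        self.broadcast = " | ".join(outputs)
        return self.broadcast

workspace = GlobalWorkspace([Module("vision"), Module("hearing")])
print(workspace.step("bell"))  # vision:bell | hearing:bell
```

Evaluating an indicator derived from this theory, as Long describes, would amount to asking whether a real system's architecture contains something playing the role of the shared, broadcasting workspace.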

Seth is impressed with the transparency of the team’s proposal. “It’s very thoughtful, it’s not bombastic and it makes its assumptions really clear,” he says. “I disagree with some of the assumptions, but that’s totally fine, because I might well be wrong.”

The authors say that the paper is far from a final take on how to assess AI systems for consciousness, and that they want other researchers to help refine their methodology. But it’s already possible to apply the criteria to existing AI systems. The report evaluates, for example, large language models such as ChatGPT, and finds that this type of system arguably has some of the indicators of consciousness associated with global workspace theory. Ultimately, however, the work does not suggest that any existing AI system is a strong candidate for consciousness — at least not yet.

This article is reproduced with permission and was first published on August 24, 2023.
