
How AI Could Take Over Elections—And Undermine Democracy

2023-06-13 10:52:36

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research. It has been modified by the writers for Scientific American and may be different from the original.

Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Here’s the scenario Altman might have had in mind: Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

How Clogger would work

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate messages that become increasingly more likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, Chess and many video games better than any human have used reinforcement learning.
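The trial-and-error loop described above can be illustrated with a minimal sketch. This is not any real campaign system – it is a toy epsilon-greedy bandit, the simplest form of reinforcement learning, where the learner mostly sends whichever message variant has worked best so far but occasionally experiments with others. The variant names and the `get_feedback` callback are hypothetical stand-ins for whatever engagement signal such a machine would observe.

```python
import random

def epsilon_greedy(variants, get_feedback, rounds=1000, epsilon=0.1):
    """Learn by trial and error which message variant works best:
    with probability epsilon explore a random variant, otherwise
    exploit the one with the highest observed average reward."""
    counts = {v: 0 for v in variants}    # times each variant was sent
    totals = {v: 0.0 for v in variants}  # cumulative reward per variant

    def mean(v):
        return totals[v] / counts[v] if counts[v] else 0.0

    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(variants)        # explore
        else:
            choice = max(variants, key=mean)        # exploit
        reward = get_feedback(choice)  # e.g. 1 if the recipient engaged, else 0
        counts[choice] += 1
        totals[choice] += reward

    return max(variants, key=mean)
```

After enough rounds, the variant with the highest true engagement rate dominates – the same feedback dynamic, vastly scaled up and personalized, that the essay attributes to Clogger.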

And last, over the course of a campaign, Clogger’s messages could evolve to take into account your responses to prior dispatches and what it has learned about changing others’ minds. Clogger would carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The nature of AI

Three more features – or bugs – are worth noting.

First, the messages that Clogger sends may or may not be political. The machine’s only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have considered.

One possibility is sending likely opposition voters information about their nonpolitical passions – in sports or entertainment – to bury the political messaging they receive. Another possibility is sending off-putting messages – for example, incontinence advertisements – timed to coincide with opponents’ messaging. And another is manipulating voters’ social media groups to give the sense that their family, neighbors and friends support its candidate.

Second, Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because its objective is to change your vote, not to provide accurate information.

Finally, because it is a black box type of artificial intelligence, people would have no way to know what strategies it uses.

Clogocracy

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. Dogger, and the winner would be the client of the more effective machine.

The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties. In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – would have occurred.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did – Clogger and Dogger don’t care about policy views – the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. The president’s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests or even the president’s own ideology.

Avoiding Clogocracy

It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, competitive pressures would make their use almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.

Enhanced privacy protection would help. Clogger would depend on access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers deny the machine would make it less effective.

Another solution lies with election commissions. They could try to ban or severely regulate these machines. There’s a fierce debate about whether such “replicant” speech, even if it’s political in nature, can be regulated. The U.S.’s extreme free speech tradition leads many prominent academics to say it cannot.

But there is no reason to automatically extend the First Amendment’s protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that James Madison’s views in 1789 were intended to apply to AI.

European Union regulators are moving in this direction. Policymakers revised the European Parliament’s draft of its Artificial Intelligence Act to designate “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny.

One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.

This would be like the advertising disclaimer requirements – “Paid for by the Sam Jones for Congress Committee” – but modified to reflect its AI origin: “This AI-generated ad was paid for by the Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when it is a bot speaking to them, and they should know why, as well.

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people’s many buttons.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

This article was originally published on The Conversation. Read the original article.
