
Social Media Algorithms Warp How People Learn from Each Other


The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

People’s daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.

People are increasingly interacting with others in social media environments where algorithms control the flow of social information they see. Algorithms determine in part which messages, which people and which ideas social media users see.

On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms. I’m a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information “PRIME,” for prestigious, in-group, moral and emotional information.
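As a rough illustration of what "designed to amplify engagement" means in practice, the sketch below scores posts purely by behavioral engagement signals and ranks the feed by that score. It is a minimal, hypothetical example; the field names and weights are assumptions made up for illustration, not any platform's actual ranking system.

```python
# Minimal, hypothetical sketch of engagement-driven feed ranking.
# Field names and weights are illustrative assumptions, not a real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    clicks: int    # historical click count
    shares: int    # historical share count
    comments: int  # historical comment count

def engagement_score(post: Post) -> float:
    """Score a post purely by how much engagement it attracts."""
    return 1.0 * post.clicks + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    """Show higher-engagement posts first, regardless of what drives the engagement."""
    return sorted(posts, key=engagement_score, reverse=True)
```

Because nothing in such a scoring rule distinguishes *why* a post attracts engagement, content that taps PRIME biases tends to rise to the top of the feed.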

In our evolutionary past, biases to learn from PRIME information were very advantageous: Learning from prestigious individuals is efficient because these people are successful and their behavior can be copied. Paying attention to people who violate moral norms is important because sanctioning them helps the community maintain cooperation.

But what happens when PRIME information becomes amplified by algorithms and some people exploit algorithm amplification to promote themselves? Prestige becomes a poor signal of success because people can fake prestige on social media. Newsfeeds become oversaturated with negative and moral information so that there is conflict rather than cooperation.

The interaction of human psychology and algorithm amplification leads to dysfunction because social learning supports cooperation and problem-solving, but social media algorithms are designed to increase engagement. We call this mismatch functional misalignment.

Why it matters

One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people start to form incorrect perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people begin to think that their political in-group and out-group are more sharply divided than they really are. Such “false polarization” might be an important source of greater political conflict.

Functional misalignment can also lead to greater spread of misinformation. A recent study suggests that people who are spreading political misinformation leverage moral and emotional information – for example, posts that provoke moral outrage – in order to get people to share it more. When algorithms amplify moral and emotional information, misinformation gets included in the amplification.

What other research is being done

In general, research on this topic is in its infancy, but there are new studies emerging that examine key components of algorithm-mediated social learning. Some studies have demonstrated that social media algorithms clearly amplify PRIME information.

Whether this amplification leads to offline polarization is hotly contested at the moment. A recent experiment found evidence that Meta’s newsfeed increases polarization, but another experiment, conducted in collaboration with Meta, found no evidence that exposure to the algorithmic Facebook newsfeed increased polarization.

More research is needed to fully understand the outcomes that emerge when humans and algorithms interact in feedback loops of social learning. Social media companies have most of the needed data, and I believe that they should give academic researchers access to it while also balancing ethical concerns such as privacy.

What’s next

A key question is what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases. My research team is working on new algorithm designs that increase engagement while also penalizing PRIME information. We argue that this might maintain the user activity that social media platforms seek while also making people’s social perceptions more accurate.
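One way to picture this idea is a scoring rule that keeps an engagement term but subtracts a penalty for PRIME-like content. The sketch below is purely illustrative; its feature names, the notion of a "PRIME intensity" classifier and the penalty weight are assumptions for the example, not the designs my team is actually building.

```python
# Hypothetical sketch: trade engagement against over-representation of PRIME content.
# Feature names and the penalty weight are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ScoredPost:
    predicted_engagement: float  # e.g., output of an engagement model, in [0, 1]
    prime_intensity: float       # e.g., estimated moral/emotional loading, in [0, 1]

def adjusted_score(post: ScoredPost, prime_penalty: float = 0.5) -> float:
    """Keep an engagement term but penalize PRIME-heavy content."""
    return post.predicted_engagement - prime_penalty * post.prime_intensity

def rank_feed(posts: list[ScoredPost]) -> list[ScoredPost]:
    """Rank the feed by the adjusted score rather than raw engagement."""
    return sorted(posts, key=adjusted_score, reverse=True)
```

The penalty weight controls the trade-off: setting it to zero recovers a purely engagement-driven ranker, while larger values push the feed away from PRIME-heavy content at some cost in raw engagement.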

This article was originally published on The Conversation. Read the original article.
