this post was submitted on 16 Oct 2024
“But when you conflate those two usages, it becomes dubious, because it's suggesting that these are posts coming from real humans, when, in fact, it's maybe getting posted by a real human, but it's not written by a real human,” Lewis told me. “It's written and generated by an AI system. The lines start to get really blurry, and that's where I think ethical questions do come to the foreground. I think that it would be wise for anyone looking to work with them to maybe ask for expanded definitions around what they mean by ‘authentic’ here.”
In another video demo, Impact shows how a fake organization named “Pro-Democracy” can share a video in support of Kamala Harris with users and ask them to share it to TikTok alongside an AI-generated caption.
“These AI tools are so new that we don’t yet have clear norms surrounding when it’s acceptable to use AI in the democratic process,” Josh A. Goldstein, a research fellow at Georgetown University's Center for Security and Emerging Technology, said when 404 Media showed him the Pro-Democracy demo video. “If AI can help someone articulate a view they truly hold, it could empower people who might not otherwise participate and increase involvement in civic discourse. But there are also risks. People may become overly reliant on AI models and passively share AI-generated content that they haven’t checked themselves.”
The “Impact platform” has two sides. There’s an app for “supporters (participants),” and a separate app for “coordinators/campaigners/stakeholders/broadcasters (initiatives),” according to the overview document.
Supporters download the app and provide “onboarding data” which “is used by Impact’s AI to (1) Target and (2) Personalize the action requests” that are sent to them. Supporters connect to initiatives by entering a provided code, and these action requests are sent as push notifications, the document explains.
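Impact has not published how its targeting works; the document only says onboarding data is used to (1) target and (2) personalize action requests. A minimal sketch of what that flow could look like, with all names and fields hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Supporter:
    name: str
    interests: set[str]   # hypothetical onboarding data
    initiative_code: str  # the code entered to connect to an initiative

def target_supporters(supporters, initiative_code, topic):
    """(1) Target: pick supporters in the initiative whose onboarding
    interests match the action's topic."""
    return [s for s in supporters
            if s.initiative_code == initiative_code and topic in s.interests]

def personalize_request(supporter, action_summary):
    """(2) Personalize: address the push notification to the supporter."""
    return f"{supporter.name}, {action_summary}"

supporters = [
    Supporter("Dana", {"foreign-policy"}, "IMPACT1"),
    Supporter("Lee", {"climate"}, "IMPACT1"),
]
for s in target_supporters(supporters, "IMPACT1", "foreign-policy"):
    print(personalize_request(s, "respond to the tweet about the ICJ ruling."))
```

This is only an illustration of the two steps the document names, not Impact's actual code.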
“Initiatives,” on the other hand, “have access to an advanced, AI-assisted dashboard for managing supporters and actions.”
In the Stop Anti-Semitism demo, Thielen directs supporters to this tweet, about a July 19 International Court of Justice Advisory Opinion that Israel’s presence in the occupied Palestinian territories is illegal and should end, a position the court also expressed in 2004.
In the Impact demo video Thielen doesn’t instruct supporters to correct any misinformation in the tweet and instead asks supporters to “provide additional context and set the record straight.”
Specifically, it gives supporters the following “talking points.”
“Think of these as the core substance of the response that you want,” Thielen says in the video, and explains that some of the responses that will be AI-generated based on those talking points may include just one of them, more than one, or a synthesis of several.
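Impact hasn't disclosed how it mixes talking points into replies; per Thielen's description, each generated response covers one point, several, or a synthesis of them. A hedged sketch of that sampling step (talking-point text here is placeholder):

```python
import random

talking_points = [
    "The ICJ issued a similar advisory opinion in 2004.",
    "Advisory opinions are non-binding.",
    "Encourage readers to do more research.",
]

def sample_points(points, rng):
    """Each reply draws a random non-empty subset of the talking points,
    so replies overlap in substance without being identical."""
    k = rng.randint(1, len(points))
    return rng.sample(points, k)

rng = random.Random(42)
chosen = sample_points(talking_points, rng)
# The chosen subset would presumably then be handed to a language model
# as the "core substance" the generated reply must cover.
```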
In the “additional context” box Thielen writes that the target audience should be “People who have been seeing a lot of misinformation about Israel and the war online, and find themselves increasingly sympathetic to Gaza. Encourage them to do more research.”
Impact then generates a “seed” for each supporter. “This is what makes the messages all appear to be coming from different perspectives and angles.”
An example of one seed shown in the demo reads: “Informative and calm, longer, providing historical context, link to reputable sources.”
“Frustrated and urgent, medium, highlighting double standards, use caps for emphasis,” reads the seed for another supporter. The demo video also shows the push notification each supporter would receive based on their seed, as well as the “Draft message” Impact asks them to share. According to the video, this supporter’s push notification would read: “Dana, respond to the tweet about the ICJ ruling on Israel. Add context and correct any misinformation.”
The draft message for this user reads:
“Where’s the ICJ ruling on Hamas? The court’s history of anti-Semitism is CLEAR. So much misinformation out there is warping public opinion. Before jumping to conclusions, DO YOUR RESEARCH. The ICJ has ZERO jurisdiction over Israel anyway!”
“Meme-like, very short, pointing out hypocrisy, include trending hashtag,” another seed says. The generated draft message based on that seed is: “ICJ ruling on Israel but silent on Hamas? 🤔 Make it make sense. #DoubleStandards.”
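Judging only from the three seed examples shown in the demo, each seed appears to bundle a tone, a length, an angle, and a style directive, which are combined with the shared talking points to prompt the model. Impact's actual prompt format is unknown; this is a hypothetical reconstruction:

```python
from dataclasses import dataclass

@dataclass
class Seed:
    tone: str    # e.g. "informative and calm", "meme-like"
    length: str  # e.g. "longer", "medium", "very short"
    angle: str   # e.g. "providing historical context"
    style: str   # e.g. "use caps for emphasis", "include trending hashtag"

    def render(self):
        return f"{self.tone}, {self.length}, {self.angle}, {self.style}"

def build_prompt(seed, talking_points, target_tweet):
    """Combine the per-supporter seed with the shared talking points into
    one generation prompt, so each drafted reply differs in voice but
    covers the same substance."""
    return (
        f"Reply to: {target_tweet}\n"
        f"Tone/length/angle/style: {seed.render()}\n"
        f"Cover these talking points: {'; '.join(talking_points)}"
    )

seed = Seed(
    tone="informative and calm",
    length="longer",
    angle="providing historical context",
    style="link to reputable sources",
)
prompt = build_prompt(seed, ["The ICJ issued a similar opinion in 2004."],
                      "tweet about the ICJ advisory opinion")
```

Varying only the seed while holding the talking points fixed would produce exactly the effect the demo describes: many differently voiced replies pushing one consistent narrative.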
“The goal is to create a well-rounded yet consistent narrative in a way that makes it easy for your supporters to just tap ‘copy,’ paste this in, and then they’re good to go,” Thielen says in the video.
When I asked Thielen why the demo showed Impact directing users to flood a factual tweet with replies trying to undermine it, he said that he did not give the specifics of the demo a lot of thought.
“That was just me being lazy,” he told me. “I just typed ‘Israel’ into Twitter search and clicked on the top thing without looking at it.”
Twitter’s “platform manipulation and spam policy” states that “You may not use X's services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience or platform manipulation defenses on X.” Twitter also says that prohibited behavior includes “coordinated activity, that attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting.” However, it’s unclear if what Impact proposes would violate Twitter’s policy, which also states that “coordinating with others to express ideas, viewpoints, support, or opposition towards a cause,” is not a violation of this policy.
“Coordinated groups of people can show up and help, or coordinated groups of people can show up and harass,” Shapiro said. “We don't think coordination is in any way a bad thing. We think it’s a great thing, because you can get stuff done, and if you're doing good, truthful things, then I don't see any problems.”
Twitter did not respond to a request for comment.
“If social media users aren’t transparent about their own AI use, others may lose trust in online forums as it becomes harder to distinguish human writing from synthetic prose,” Goldstein said in response to the Pro-Democracy demo video.
“I think astroturfing is a great way of phrasing it, and brigading as well,” Lewis said. “It also shows it's going to continue to siphon off who has the ability to use these types of tools by who is able to pay for them. The people with the ability to actually generate this seemingly organic content are ironically the people with the most money. So I can see the discourse shifting towards the people with the money to shift it in a specific direction.”