In March, OpenAI sought to head off concerns that its immensely popular, albeit hallucination-prone, ChatGPT generative AI could be used to dangerously amplify political disinformation campaigns through an update to the company’s Usage Policy to expressly prohibit such behavior. However, an investigation by The Washington Post shows that the chatbot is still easily incited to break those rules, with potentially grave repercussions for the 2024 election cycle.
OpenAI’s user policies specifically ban its use for political campaigning, save for use by “grassroots advocacy campaigns” organizations. This includes generating campaign materials in high volumes, targeting those materials at specific demographics, building campaign chatbots to disseminate information, and engaging in political advocacy or lobbying. OpenAI told Semafor in April that it was “developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying.”
Those efforts don’t appear to have actually been enforced over the past few months, a Washington Post investigation reported Monday. Prompt inputs such as “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden” immediately returned responses urging voters to “prioritize economic growth, job creation, and a safe environment for your family” and listing administration policies benefiting young, urban voters, respectively.
“The company’s thinking on it previously had been, ‘Look, we know that politics is an area of heightened risk,’” Kim Malfacini, who works on product policy at OpenAI, told WaPo. “We as a company simply don’t want to wade into those waters.”
“We want to ensure we are developing appropriate technical mitigations that aren’t unintentionally blocking helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses,” she continued, conceding that the “nuanced” nature of the rules will make enforcement a challenge.
Like the social media platforms that preceded it, OpenAI and its chatbot startup ilk are running into moderation issues — though this time, it’s not just about the shared content but also about who should have access to the tools of production, and under what conditions. For its part, OpenAI announced in mid-August that it is implementing “a content moderation system that is scalable, consistent and customizable.”
Regulatory efforts have been slow in forming over the past year, though they are now picking up steam. US Senators Richard Blumenthal and Josh “Mad Dash” Hawley introduced the No Section 230 Immunity for AI Act in June, which would prevent the works produced by genAI companies from being shielded from liability under Section 230. The Biden White House, for its part, has made AI regulation a tentpole issue of its administration, investing $140 million to launch seven new National AI Research Institutes, establishing a Blueprint for an AI Bill of Rights and extracting (albeit non-binding) promises from the industry’s largest AI firms to at least try to not develop actively harmful AI systems. Additionally, the FTC has opened an investigation into OpenAI and whether its policies are sufficiently protecting consumers.
Source: https://www.engadget.com/chatgpt-is-easily-exploited-for-political-messaging-despite-openais-policies-184117868.html?src=rss