Artificial intelligence companies have been at the forefront of developing transformative technology. Now they are also engaged in a race to determine the limits of AI use in a year of major elections around the world.
Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent misuse of its tools in elections, partly by banning their use to create chatbots that impersonate real people or institutions. In recent weeks, Google also said it would limit its AI chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, promised to better label AI-generated content on its platforms so voters could more easily identify what information was real and what was fake.
On Friday, Anthropic, another major AI start-up, joined its peers in barring its technology from being applied to political campaigns or lobbying. In a blog post, the company, which makes the chatbot Claude, said it would warn or suspend any user who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.
“The history of AI deployment has also been full of surprises and unexpected impacts,” the company said. “We expect to see surprising uses for AI systems in 2024 – uses that were not anticipated by their own developers.”
The efforts are part of a push by AI companies to get a handle on a technology they popularized as billions of people head to the polls. At least 83 elections are expected around the world this year, the largest number for at least the next 24 years, according to Anchor Change, a consulting firm. In recent weeks, people have voted in Taiwan, Pakistan and Indonesia, and India, the world’s largest democracy, is scheduled to hold general elections in the spring.
How effective the restrictions on AI tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used in political campaigns to produce text, sounds and images, blurring fact and fiction and raising questions about whether voters can tell which content is genuine.
AI-generated content has already appeared in US political campaigns, leading to regulatory and legal pushback. Some state legislators are drafting bills to regulate AI-generated political content.
Last month, New Hampshire residents received robocall messages discouraging them from voting in the state primaries, in a voice that was most likely artificially generated to sound like President Biden’s. The Federal Communications Commission outlawed such calls last week.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said at the time.
AI tools have also led to misleading or deceptive portrayals of politicians and political subjects in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s elections, used an AI voice to declare victory while in jail.
In one of the most consequential election cycles in memory, the misinformation and deception that AI can create could be devastating to democracy, experts said.
“We’re behind the eight ball here,” said Oren Etzioni, a University of Washington professor specializing in artificial intelligence and founder of True Media, a nonprofit that works to identify online misinformation in political campaigns. “We need the tools to respond to this in real time.”
Anthropic said in its announcement on Friday that it was planning tests to identify how well its Claude chatbot could generate biased or misleading content related to political candidates, political issues and election administration. These “red team” tests, which are often used to break through a technology’s safeguards to better identify its vulnerabilities, will also explore how the AI responds to harmful queries, such as prompts asking about voter-suppression tactics.
In the coming weeks, Anthropic is also launching a trial that redirects US users with voting-related queries to authoritative sources of voting information, such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI models were not trained well enough to reliably provide real-time facts about specific elections.
Similarly, OpenAI said last month that it planned to label AI-generated images as well as provide people with voting information through ChatGPT.
“Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “They are also groundbreaking, and we will continue to evolve our approach as we learn more about how our tools are used.”
(The New York Times sued OpenAI and its partner Microsoft in December, claiming copyright infringement of news content related to the AI system.)
Synthesia, a start-up with an AI video generator that has been linked to disinformation campaigns, also bans the use of the technology for “news-like content”, including false, polarizing, divisive or misleading content. Alexandru Voica, Synthesia’s head of corporate affairs and policy, said the company has improved the systems it uses to detect misuse of its technology.
Stability AI, a start-up with an image-generation tool, said it had prohibited the use of its technology for illegal or unethical purposes, worked to prevent the generation of unsafe images and applied an inconspicuous watermark to the images it generates.
The biggest tech companies have weighed in as well. Last week, Meta said it was collaborating with other companies on technical standards to help identify when content was generated with artificial intelligence. Ahead of EU parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially deceptive manipulated content and require users to label realistic AI creations.
Google said in December that it would also require video creators and all election advertisers on YouTube to disclose digitally altered or generated content. The company said it was preparing for the 2024 elections by restricting its AI tools like Bard from answering certain election-related questions.
“Like any emerging technology, AI presents new opportunities as well as challenges,” Google said. AI can help fight abuse, the company added, “but we’re also preparing for how it could change the misinformation landscape.”