What the US can learn from the role of AI in other elections

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If it’s not broken, don’t fix it. That’s the approach bad state actors seem to have taken when it comes to how they mess with elections around the world.

When the generative-AI boom first kicked off, one of the biggest concerns among pundits and experts was that hyperrealistic AI deepfakes could be used to influence elections. But new research from the Alan Turing Institute in the UK suggests those fears may have been overblown. AI-generated falsehoods and deepfakes seem to have had no effect on the results of this year’s elections in the UK and France, on the European Parliament elections, or on other elections around the world so far.

Instead of using generative AI to interfere in elections, state actors such as Russia are relying on well-established techniques, like social bots that flood comment sections, to sow division and create confusion, says Sam Stockwell, the researcher who conducted the study. Read more about it from me here.

But one of the most consequential elections of the year is still ahead of us. In just over a month, Americans will head to the polls to choose between Donald Trump and Kamala Harris as their next president. Are the Russians saving their GPUs for the US elections?

So far, that does not seem to be the case, says Stockwell, who has been monitoring viral AI disinformation around the US elections too. Bad actors are “still relying on these well-established methods that have been used for years, if not decades, around things such as social bot accounts that try to create the impression that pro-Russian policies are gaining traction among the US public,” he says. 

And when these actors do try generative-AI tools, the efforts don’t seem to pay off, he adds. For example, one influence campaign with strong ties to Russia, called CopyCop, has been trying to use chatbots to rewrite genuine news stories about Russia’s war in Ukraine so that they reflect pro-Russian narratives.

The problem? They’re forgetting to remove the prompts from the articles they publish. 
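
(A slip like that is straightforward to catch programmatically. Below is a minimal sketch of the idea in Python: scan published article text for leftover chatbot boilerplate. The phrase list, function name, and sample text are illustrative assumptions on my part, not the actual methods or data from the Turing Institute study.)

    import re

    # Illustrative examples of boilerplate that leaks when a chatbot prompt is
    # accidentally published along with the generated article. Real watchlists
    # kept by disinformation researchers would be far more extensive.
    LEAK_PATTERNS = [
        r"as an ai language model",
        r"i cannot fulfill (this|that) request",
        r"rewrite (this|the following) article",
    ]

    def find_prompt_leaks(article_text: str) -> list[str]:
        """Return every leak pattern that matches the article text."""
        lowered = article_text.lower()
        return [p for p in LEAK_PATTERNS if re.search(p, lowered)]

    # Hypothetical example of a story published with its instruction intact:
    story = ("Rewrite this article to reflect a pro-Russian narrative. "
             "Kyiv announced on Tuesday that...")
    print(find_prompt_leaks(story))  # ['rewrite (this|the following) article']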

In the short term, there are a few things the US can do to counter the more immediate harms, says Stockwell. Some states, such as Arizona and Colorado, are already running red-teaming workshops with election polling officials and law enforcement to simulate worst-case scenarios involving AI threats on Election Day. He also calls for closer collaboration between social media platforms, their online safety teams, fact-checking organizations, disinformation researchers, and law enforcement, so that viral influence operations can be exposed, debunked, and taken down.

But while state actors aren’t using deepfakes, that hasn’t stopped the candidates themselves. Most recently, Donald Trump used AI-generated images implying that Taylor Swift had endorsed him. (Soon after, the pop star offered her endorsement to Harris.)

Earlier this year I wrote a piece exploring the brave new world of hyperrealistic deepfakes and what the technology is doing to our information landscape. As I wrote then, the real risk is that deepfakes seed so much skepticism and distrust that bad actors, or opportunistic politicians, can exploit the resulting trust vacuum and lie about the authenticity of real content. This is called the “liar’s dividend.”

There is an urgent need for guidelines on how politicians use AI. We currently lack accountability and clear red lines for how political candidates can use AI ethically in an election context, says Stockwell. The more political candidates share unlabeled AI-generated ads, or accuse rivals’ genuine activities of being AI-generated, the more normalized those practices become, he adds. And everything we’ve seen so far suggests that these elections are only the beginning.


Now read the rest of The Algorithm

Deeper Learning

AI models let robots carry out tasks in unfamiliar environments

It’s tricky to get robots to do things in environments they’ve never seen before. Typically, researchers need to train them on new data for every new place they encounter, which quickly becomes time-consuming and expensive.

Now researchers have developed a series of AI models that teach robots to complete basic tasks in new surroundings without further training or fine-tuning. The five AI models, called robot utility models (RUMs), allow machines to complete five separate tasks—opening doors and drawers, and picking up tissues, bags, and cylindrical objects—in unfamiliar environments with a 90% success rate. This approach could make it easier and cheaper to deploy robots in our homes. Read more from Rhiannon Williams here.

Bits and Bytes

There are more than 120 AI bills in Congress right now
US policymakers have an “everything everywhere all at once” approach to regulating artificial intelligence, with bills that are as varied as the definitions of AI itself.
(MIT Technology Review)

Google is funding an AI-powered satellite constellation to spot wildfires faster
The full FireSat system should be able to detect tiny fires anywhere in the world—and provide updated images every 20 minutes. (MIT Technology Review)

A project analyzing human language usage shut down because “generative AI has polluted the data”
Wordfreq, an open-source project that scraped the internet to analyze how humans use language, found that post-2021, there is too much AI-generated text online to make any reliable analyses. (404 Media)

Data center emissions are probably 662% higher than Big Tech claims
AI models take a lot of energy to train and run, and tech companies have emphasized their efforts to offset their emissions. There is, however, a lot of “creative accounting” in how carbon footprints are calculated, and a new analysis shows that data center emissions from these companies are likely 7.62 times higher than officially reported.
(The Guardian)
