ChatGPT creator OpenAI has laid out its plan to prevent its technology from being used to spread misinformation and interfere with elections.
More than 50 countries will head to the polls this year in what is being touted as the biggest-ever demonstration of democracy. However, the recent rise of generative artificial intelligence has prompted fears that the technology could pose a threat to free and fair elections.
One of the main concerns relates to deepfake images that can be created using tools like OpenAI’s Dall-E. These can manipulate existing images or generate entirely new depictions of politicians in compromising situations.
“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” stated an OpenAI blog post addressing AI and elections.
“Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
OpenAI said it had drawn together members of its safety systems, threat intelligence, legal, engineering and policy teams in order to investigate and address any potential abuse of its technology.
The San Francisco-based company already has measures in place to prevent its Dall-E tool from accepting requests to generate images of real people; other AI startups, however, lack such guardrails.
“Making pictures of Trump getting arrested while waiting for Trump’s arrest,” Eliot Higgins (@EliotHiggins) wrote on Twitter in March 2023, sharing AI-generated images of the former US president.
OpenAI chief executive Sam Altman has previously said that he was “nervous” about the threat generative AI poses to election integrity, testifying before Congress in May that it could be used in a novel way to spread “one-on-one interactive disinformation”.
Among the new measures is a feature that directs ChatGPT users in the US to the site CanIVote.org in order to offer authoritative information about voting.
OpenAI is also developing new ways to identify AI-generated images, working with the Coalition for Content Provenance and Authenticity to add provenance icons to images created by its tools.
Deepfake content has already been used in attempts to influence elections: in the build-up to Slovakia’s elections last year, AI-generated audio circulated that appeared to compromise one of the candidates.
The UK Labour Party has also been the target of deepfake audio, with a fake recording of party leader Keir Starmer saying “I f***ing hate Liverpool” surfacing in October.
A recent survey by the comparison site Finder.com found that only 1.5 per cent of people in the UK could correctly identify deepfake clips of celebrities and politicians.