Bill would strengthen protections against "deepfakes" seeking to influence elections

How AI and deepfakes are changing politics

ST. PAUL, Minn. — A proposal at the Capitol would boost penalties in a new law regulating "deepfakes" by barring candidates convicted of using the artificial intelligence technology to influence elections from holding office.

The statute approved last session prohibits using AI-generated content — manipulated photos, videos and audio — if it's created without the consent of the person depicted and with the intent of hurting a candidate or influencing an election, within 90 days of an election. The new language would clarify when the conduct is a crime: within 90 days of a general election and within 30 days of political parties' nominating conventions and state, local and presidential primaries.

A candidate or other person found guilty of misusing deepfakes in this way would be disqualified from being elected to certain positions; the offense would be added to the list of violations outlined in the Fair Campaign Practices Act.

"This is a new frontier for all of us. We are really grappling with technology that you're looking and seeing and hearing something that did not happen from somebody who did not do it," said Sen. Erin May Quade, DFL-Apple Valley, the bill's author. "There has never been a time in our life where we could look at something and be so sure we're looking at the real thing, and it absolutely is not. It tests our sense of reality in a way that's really troubling."

Just seven other states have laws regulating deepfakes, according to a Bloomberg report. The initial measure passed nearly unanimously in the state Legislature this past spring.

But Republicans on a Senate panel Thursday expressed concern over the new proposal and withheld their support as committee members voted to advance it to subsequent committees for vetting. Among their reservations was the provision that would prohibit convicted candidates from seeking elected office.

Sen. Mark Koran, R-North Branch, and Sen. Andrew Mathews, R-Princeton, also said a push to strengthen the law is happening too soon, just months after the initial measure passed.

"We haven't even had an election to test how the law that we applied last year went, and I think we really should see, 'Did we do a fair enough job? Did we have issues with it? Was there something that still fell in the cracks?'" Mathews said.

The proposal would also make clear that media outlets and platforms, including TV, radio, social platforms and websites, that disseminate the deepfakes are not violating the law simply by airing or publishing political ads that the law targets.

It has the support of DFL Secretary of State Steve Simon, who said AI can harm and corrode faith in election systems when used by people who deliberately want to mislead voters. New Hampshire's attorney general is investigating an AI-generated robocall impersonating President Joe Biden that urged voters not to show up to the polls during the state's presidential primary election last month.

"These tools have, in recent weeks, been used to create deep fakes that attempt to misinform voters as states across the country hold their presidential nomination primaries," Simon wrote in a letter to lawmakers. "I commend the legislature for taking the step last year to prohibit the dissemination of such deep fakes in Minnesota and support the further strengthening of that provision."

If approved by both chambers, the new language would take effect in July.

