The state of New Hampshire faces a deepfake scandal as AI-generated robocalls impersonating President Joe Biden advise citizens not to vote in the Jan. 23 primary. The calls, allegedly aimed at meddling in the 2024 presidential election, have prompted investigations by the state attorney general's office. Meanwhile, a separate deepfake audio scandal emerged involving an AI-generated imitation of Manhattan Democratic leader Keith Wright trash-talking a fellow Democratic Assembly member. Experts highlight the challenges of detecting deepfakes and recommend caution when engaging with media from unknown sources.
In a concerning development, New Hampshire residents faced a deepfake scandal over the weekend as AI-generated robocalls impersonating President Joe Biden advised citizens not to vote in the upcoming Jan. 23 primary. The calls, believed to be part of an attempt to meddle in the 2024 presidential election, have prompted investigations by the state attorney general's office. The AI-generated messages featured a warning against voting, claiming it would support the Republicans in their quest to re-elect Donald Trump.
The New Hampshire attorney general's office issued a statement denouncing the robocalls as misinformation, urging voters to disregard the content entirely. However, the source of the calls remains unidentified, and investigations are ongoing.
In a separate incident, a deepfake audio scandal unfolded in Manhattan involving Keith Wright, a prominent Democratic leader. The AI-generated audio imitated Wright's voice, engaging in trash talk against a fellow Democratic Assembly member, Inez Dickens. Although some immediately recognized the audio as fabricated, at least one political insider was momentarily convinced of its authenticity.
Manhattan Democrat and former City Council Speaker Melissa Mark-Viverito revealed that the fake was briefly perceived as real, underscoring how difficult such audio manipulations can be to detect. The use of deepfake audio in political contexts highlights the growing sophistication of AI-generated content and the attendant risks of misinformation and manipulation.
Experts note that bad actors may prefer audio deepfakes over visual ones because audiences have become more adept at spotting visual manipulations. While there is currently no universal method for detecting or deterring deepfakes, caution is advised when engaging with media from unknown or dubious sources, particularly when extraordinary claims are involved. The incidents in New Hampshire and Manhattan underscore the need for vigilance in the face of evolving AI capabilities for generating convincing fake content.
(TRISTAN GREENE, COINTELEGRAPH, 2024)