AI has made it easy to create convincing fake explicit content, leaving victims of malicious deepfakes with few avenues for recourse.
Significance: The ease of generating realistic and harmful imagery means that anyone, from high school students to renowned celebrities, can become a victim of these damaging deepfakes.
In the absence of comprehensive federal legal protection — with only a handful of state laws addressing the issue — those impacted may be left to grapple with enduring consequences affecting their mental well-being and reputation.
For average individuals, particularly those without a substantial following, protecting themselves is a formidable challenge, futurist and generative AI expert Bernard Marr told Bombay Rocks.
Targets of this harassment can ask social media companies to take down posts or report the accounts spreading the content, but there is no effective or structured way to address such situations, says Srijan Kumar, an assistant professor at Georgia Tech specializing in AI.
Latest developments: A bipartisan group of senators recently introduced a bill aimed at holding individuals accountable for disseminating “digital forgery.”
“Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit deepfakes is very real,” emphasized Judiciary Committee Chair Sen. Dick Durbin (D-Ill.) and ranking member Sen. Lindsey Graham (R-S.C.) in a joint statement.
“Victims have lost their jobs, and may suffer ongoing depression or anxiety.”
If enacted, the legislation would establish a civil remedy for identifiable victims.
The proposed legislation came a day before senators grilled Big Tech CEOs on Capitol Hill over the exploitation of children on social media.
Reading between the lines: Victims of AI-generated harmful images often have limited legal options, Mary Anne Franks, president of the Cyber Civil Rights Initiative, told Bombay Rocks.