People can use generative AI to create fake videos, audio, or images, known as deepfakes, by altering original media. Although this technology can be fun and educational, deepfakes are often used to harm someone’s reputation, commit identity fraud, or spread misinformation and disinformation.
At the Legalweek conference taking place in New York in 2025, experts will analyze methods to prevent the spread of misinformation and deepfakes. This blog post discusses the legal concerns raised by deepfakes and how society can build defenses against them.
What Are Deepfakes?
Deepfakes are fabricated videos, images, or audio clips, created with AI techniques, that look realistic at first glance. In simple terms, videos of reporters presenting forged information, phone calls using a cloned voice, and fake footage of celebrities or politicians are all examples of deepfakes.
There are three primary types of deepfakes:
- Lip Sync: Editing a video or audio clip so that someone appears to say particular words or phrases they never uttered.
- Face Swap: Replacing one person’s face with a different person’s face in a video or a photo.
- Puppet Technique: Manipulating an individual’s body in a video to make it look like they are moving in ways they did not.
Even though some deepfakes are harmless, many are created with malicious intent, such as:
- Purposefully creating false advertisements, fake news, or other misinformation.
- Accusing people falsely.
- Scamming by impersonating people’s voices.
- Creating non-consensual intimate images and videos.
The Legal Trouble with Deepfakes
1. Defamation
Falsifying someone’s image through a deepfake can hurt their reputation and prompt them to file a lawsuit against its creator. The difficulty arises when there is no clear way to prove who created the defamatory video.
2. Privacy Concerns
A common and serious issue is the harm done to a victim’s privacy when their voice or image is used in a deepfake video without consent. In many cases, this leaves the deepfake’s subject suffering psychologically.
3. Impersonation and Fraud
Deepfake technology is dangerous in the wrong hands: it allows the impersonation of trusted people to trick others into handing over confidential information or money. For instance, a UK energy company lost close to $250,000 after a fraudster used a deepfake voice to impersonate its CEO.
4. Intellectual Property Infringement
Employing AI to duplicate someone’s image and voice can easily infringe their intellectual property rights. Prominent figures such as Tom Hanks have publicly complained about AI ads impersonating their likeness without permission.
5. Political and Voting Deception
Deepfake videos can mislead voters and distort elections. This has led states such as California and Texas to enact legislation banning deepfake videos intended to deceive voters.
6. AI and the Hollywood Disagreement
Hollywood actors and screenwriters are deeply concerned about the influence of AI. During the 2023 Hollywood labor strikes, AI-generated deepfakes became a controversial topic: actors and writers feared that studios would use AI to replicate their likenesses without consent, threatening their jobs and reputations.
Responses from the Government and the Law
To deal with the risk of deepfakes and AI misinformation, the US government is implementing AI and deepfake mitigative strategies at different levels. A few of the actions taken include:
FTC Regulations
The FTC has been advancing new measures to prevent scams that use deepfakes. FTC Chair Lina M. Khan stated, “Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale.” With the rise of AI voice fraud and other scams, there is a pressing need for regulations that protect citizens against impersonation.
The DEEPFAKES Accountability Act
This proposed legislation would require AI-created audio and video recordings to be identified as such and would set penalties for the malicious use of deepfakes.
State Legislation
Several US states have begun passing legislation aimed at mitigating the most harmful forms of computer-generated content manipulation.
- Texas S.B. 751 and California AB 730 prohibit the use of electoral deepfakes.
- California AB 602, Virginia SB 1736, and Georgia S.B. 337 make it an offense to create non-consensual pornographic deepfakes.
- New York S6829A allows legal action to be brought for the unauthorized use of deepfakes.
- Federal Action: The US Congress is drafting new regulations that would make the use of AI-generated deepfakes for fraud, defamation, and electoral impersonation illegal.
Corporate Policies
Social media and technology companies have put out policies for identifying and removing harmful deepfake content. Facebook, for instance, launched the Deepfake Detection Challenge, which seeks to improve the development of detection tools.
Research Findings: A deepfake study published on ResearchGate notes that the United States has no federal statutes that deal with deepfake dangers.
How to Combat Deepfakes in Our Legal System
To tackle the deepfake problem, legal and technology experts suggest several strategies:
1. AI Tools to Detect Deepfakes
New AI tools are being developed to spot manipulated videos, images, and other deepfake content.
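One signal that simple detectors look at is frequency-domain statistics: some AI-generated images exhibit unusual high-frequency artifacts. The sketch below is a toy heuristic only, not a production detector (real systems rely on trained deep networks); the function names and the fixed cutoff are illustrative assumptions:

```python
import numpy as np

def high_frequency_ratio(image):
    """Fraction of the image's spectral energy that lies above a radial
    frequency cutoff. Unusual values can be one (weak) hint that an
    image was synthesized -- an illustrative heuristic, nothing more."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image.astype(float))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    cutoff = min(h, w) / 4  # arbitrary cutoff for illustration
    high_energy = spectrum[radius > cutoff].sum()
    return high_energy / spectrum.sum()

# Demo: a noisy image carries far more high-frequency energy
# than a smooth gradient.
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))
print(high_frequency_ratio(noise) > high_frequency_ratio(smooth))
```

A real detection pipeline would feed many such signals (or raw frames) into a trained classifier rather than thresholding a single statistic.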
2. Digital Watermarking
Adding a watermark to an image or audio clip helps mark that content as authentic or AI-generated. The DEEPFAKES Accountability Act seeks to resolve authenticity issues by requiring digital watermarks on AI-created content. This is one of several possible solutions.
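As a toy illustration of the idea, the sketch below hides a short bit sequence in the least significant bits of an image's pixels (classic LSB watermarking). The function names are hypothetical, and real provenance schemes use far more robust techniques, such as cryptographically signed metadata:

```python
import numpy as np

def embed_watermark(image, bits):
    """Write each watermark bit into the least significant bit of one
    pixel (LSB watermarking). Returns a new array; the visual change
    is imperceptible because only the lowest bit is touched."""
    flat = image.flatten().astype(np.uint8)  # flatten() copies
    if len(bits) > flat.size:
        raise ValueError("watermark longer than image capacity")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | (b & 1)  # clear then set lowest bit
    return flat.reshape(image.shape)

def extract_watermark(image, n_bits):
    """Read the watermark back out of the least significant bits."""
    flat = image.flatten()
    return [int(p & 1) for p in flat[:n_bits]]

# Demo: stamp an 8-bit mark into a small grayscale image and recover it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(img, mark)
print(extract_watermark(stamped, 8))  # recovers the embedded bits
```

LSB marks are fragile (re-encoding or resizing destroys them), which is why proposals like the DEEPFAKES Accountability Act point toward standardized, tamper-evident disclosure rather than ad-hoc pixel tricks.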
3. Deepening the Violations and Penalty Regulations
A more serious approach should be taken to sanction people who create malicious deepfakes. Moreover, social media networks should also be held accountable for disseminating them.
4. Spread Awareness
Everyone should be able to recognize a deepfake and understand that not everything can be taken at face value. Media companies should work side by side with schools to improve digital literacy.
5. Cross-Disciplinary Cooperation
Cooperation between government institutions, tech companies, and law firms is crucial to establishing the appropriate framework for AI content and its ethical usage.
Legalweek New York 2025: Addressing AI Challenges
Legalweek New York will be held from 24–27 March 2025 at the Hilton Midtown in New York City. It is among the most important conferences for the legal field, assembling over a thousand lawyers, information technology specialists, and policymakers.
The event will tackle the most current issues in AI technology and its implications in data protection, compliance, risk management, and new legal treatment of technologies. Here are some of the most exciting AI-related sessions we are looking forward to:
- The State of AI in the Legal Profession: A discussion about the implementation of AI in legal activities.
- AI-Driven Tools for Discovery and Case Strategy Development: Particular emphasis on search queries and complex legal actions using AI.
- Recent Developments in Technology and Law: Investigating the Influence of AI on the Law Industry.
- Women, Influence & Power in Law: AI is Here: Leading Discussion on AI Legislation – Focus on Leadership and Governance.
- Operation Safe Spaces: Concerns surrounding AI and Data Privacy. Analyzing the relationship between AI and security.
Through participation in this event, US legal experts will be able to anticipate issues caused by AI technology.
Final Thoughts
The misuse of AI-generated deepfakes and misinformation is a problem that needs to be dealt with as soon as possible. One purpose of Legalweek New York 2025 is to help shape a legal framework for the challenges posed by artificial intelligence.
Aeren LPO will participate in this event to demonstrate its willingness to deal with the perils associated with AI content.
By staying engaged and informed, legal specialists in the United States have the chance to actively defend truth and trust in the digital world.