At 12:00 a.m., Sarah Kang, an international student in Texas, received phone calls from Korea filled with worried messages from her parents after a scammer threatened them with a fabricated story that she had been kidnapped. Police vehicles stormed her apartment in Houston while she was visiting family in Dallas, and chaos erupted the next morning as she realized someone had fabricated her identity. The same scammers had used a deepfake to imitate her voice, leading Sarah’s parents to push for a diplomatic investigation because they believed their daughter had been assaulted and was being held hostage.
“I was so shocked, I didn’t understand anything that was going on. I woke up to so many missed calls,” said Sarah.
Sarah is just one victim of identity theft through online deepfakes, which generate manipulated videos mimicking a person, often a celebrity, from Obama to Selena Gomez. Still, artificial intelligence in voice and facial recognition has positive uses, whether as an assistive tool for those with speech impediments or as an aid in virtual learning. Artificial intelligence will continue improving with each generation, and it is already becoming an ingrained part of everyday life. However, when it comes to voice-recognition artificial intelligence, safeguards must be considered to protect citizens’ identities.
Artificial intelligence in voice recognition has birthed a new source for identity theft crime.
In the United States, the proportion of identity fraud involving AI deepfakes has increased from just 0.2% to 0.4%, whereas printed forgeries have dropped to 0% (Department of Homeland Security). This is alarming, as identity theft crimes are shifting to an advanced digital form. The ability to recreate voices using deepfakes is not limited to popular celebrities; more commonly, the safety of ordinary citizens is affected by the lack of regulation surrounding artificial intelligence. Proponents of AI have championed its benefits in people’s daily lives, such as making difficult tasks like reading over reports easier through AI summaries. However, the risk of stolen identity must be guarded against, especially in the most severe cases: the use of deepfakes and other AI impersonation platforms to create nonconsensual pornography and other sexual adult material. Such platforms have not addressed the issue of consent, and they continue to exploit women’s identities for monetary gain rather than protecting the safety of citizens.
Artificial intelligence is causing harm not only through identity theft but also by complicating copyright law. Copyright laws have long protected the creative work of artists, allowing their work to remain unique and enabling the prosecution of plagiarism in all its forms. However, artificial intelligence is complicating this protection by using artists’ voices and generating art without the original artists’ consent. Artificial intelligence has been used to generate songs with the voices of popular artists such as Drake or the Weeknd without their permission, and AI software such as AIVA can generate lyrics and musical beats for users.
This creates conflict: instead of fans supporting their artists, they support someone else’s song that takes advantage of a popular singer’s voice. It also marginalizes smaller artists and their rights to their own work as artificial intelligence comes to the forefront of music and the fine arts. In fact, courts have ruled that works generated by artificial intelligence are exempt from copyright law, creating conflict with the protection of artists’ creative liberties (Merken).
To prioritize the safety of citizens while also promoting the efficient use of artificial intelligence in voice recognition, a solution that considers privacy needs to be implemented. President Biden and his administration were the first to sign an executive order on artificial intelligence safeguards. While this order is a step toward protecting the privacy of citizens, additional monitoring is needed. Law enforcement and the courts will need to begin assessing cases of digital crime and set precedents that maintain a balance between free enterprise in artificial intelligence and the privacy of citizens. Initiatives should also support real-time verification and academic efforts to produce AI-based classifiers that can detect the presence of deepfakes.
There is no doubt that the future is one with artificial intelligence, whether in the form of advanced algorithms and machine learning that automate repetitive tasks or in creative uses such as generating music beats. However, artificial intelligence, especially in voice creation and recognition, will need to be held to a higher degree of scrutiny. If deepfakes continue to amass an internet presence and endanger the security of citizens, it is the duty of the government to address these concerns and place additional safeguards on artificial intelligence.
Works Cited
Department of Homeland Security. “Increasing Threat of Deep Fake Identities – Homeland Security.” Department of Homeland Security, www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf. Accessed 16 Nov. 2023.
Merken, Sara. “More Judges, Lawyers Confront Pitfalls of Artificial Intelligence.” Reuters, Thomson Reuters, 16 June 2023, www.reuters.com/legal/transactional/more-judges-lawyers-confront-pitfalls-artificial-intelligence-2023-06-16/.