New York City Mayor Eric Adams has attracted attention for using artificial intelligence (AI) to make robocalls in languages he doesn’t actually speak. The mayor has used AI software to generate calls in Mandarin, Yiddish, and other languages to inform residents about city hiring events. While some find this amusing, others, like Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, criticize the practice as unethical. Cahn describes it as the mayor making “deep fakes” of himself at taxpayers’ expense.
Although amusing on the surface, the incident raises important questions about the future of AI and its potential for misuse. Spotify already offers an AI feature that translates podcasts into other languages in the original podcaster’s voice, and companies such as ElevenLabs can convert spoken content into another language while replicating the original speaker’s voice. While these advances may make information more accessible, they also open the door to impersonating politicians and other public figures. As deepfake technology evolves, there is concern that individuals could use AI to make public figures appear to say things they never actually said, eroding trust in information and public discourse.
In an era of abundant information and rapid technological advancement, it is becoming increasingly difficult to discern what is real and what is manipulated. The incident with Mayor Adams highlights the potential dangers of AI misuse. As the technology continues to evolve, it is crucial to address its ethical implications and to establish guidelines and safeguards against the deceptive use of AI.