News/Crimes
Jury acquits consultant behind AI Biden robocalls in New Hampshire
A New Hampshire jury acquitted political consultant Steven Kramer on all charges related to sending AI-generated robocalls that mimicked President Biden's voice to thousands of Democratic voters before the 2024 primary. The case represents one of the first major legal tests of how courts will handle AI-powered election interference, with implications for how similar cases might be prosecuted as artificial intelligence becomes more sophisticated and accessible. What happened: Kramer, a 56-year-old New Orleans political consultant, admitted to orchestrating the robocalls sent two days before New Hampshire's January 23, 2024, presidential primary. The AI-generated voice mimicked Biden's speech patterns and used...
Political consultant facing prison time for AI Biden robocalls has no regrets (Jun 12, 2025)
A political consultant from New Orleans testified that he has no regrets about orchestrating AI-generated robocalls that mimicked President Biden's voice, claiming his actions were intended to highlight the dangers of artificial intelligence rather than suppress votes. Steven Kramer, 56, faces decades in prison on charges of voter suppression and impersonating a candidate for the calls sent to thousands of New Hampshire voters just days before the state's 2024 Democratic primary. What you should know: The robocalls used AI to replicate Biden's voice and catchphrase "What a bunch of malarkey," telling recipients to save their votes for the November election...
AI scammers steal $11M in student aid using fake “ghost students” (Jun 10, 2025)
Scammers are deploying AI-powered "ghost students" to fraudulently enroll in online college courses and steal millions in federal financial aid, prompting the U.S. Education Department to introduce emergency identity verification requirements. The scheme has exploded alongside the rise of artificial intelligence and online learning, with California community colleges alone reporting 1.2 million fraudulent applications in 2024 that resulted in 223,000 suspected fake enrollments and $11.1 million in stolen aid. The big picture: Criminal organizations are using chatbots to impersonate students in online classes, staying enrolled just long enough to collect financial aid checks before disappearing. Professors report discovering that almost...
AI robocall impersonator faces trial for fake Biden calls (Jun 5, 2025)
The trial of a political consultant who used AI-generated Biden robocalls to manipulate voters highlights the growing intersection of artificial intelligence and electoral integrity. This landmark case tests New Hampshire's voter suppression laws and raises broader questions about AI regulation in politics, as states increasingly grapple with technology that can convincingly impersonate candidates and potentially interfere with democratic processes. The big picture: Political consultant Steven Kramer faces 11 felony charges and 11 misdemeanors for sending AI-generated robocalls impersonating President Biden before the January 2024 New Hampshire primary. The calls falsely told voters they should skip the primary and...
Facial recognition tech aids in New Orleans inmate search, civil libertarians concerned (May 23, 2025)
Facial recognition cameras in New Orleans are shifting the balance between crime-fighting and privacy concerns, as demonstrated by their role in capturing fugitives from a recent jailbreak. The use of this technology by Project NOLA, a non-profit operating independently from law enforcement, exemplifies the growing but controversial adoption of AI-powered surveillance in American cities—raising fundamental questions about the appropriate limits of monitoring technologies in public spaces. The big picture: Project NOLA operates approximately 5,000 surveillance cameras throughout New Orleans, with 200 equipped with facial recognition capabilities that helped locate escaped inmates within minutes of a prison break. After Louisiana State...
Judge declines First Amendment defense in AI harm case against Google and Character.AI (May 22, 2025)
A landmark lawsuit claiming AI chatbots contributed to a teenager's suicide is moving forward after a judge rejected motions to dismiss, marking the first major legal test of how courts will handle AI-related harm claims. The case could establish important precedents for AI company liability, particularly regarding platforms accessed by minors, as courts navigate the complex interplay between algorithmic speech, user protection, and First Amendment considerations. The big picture: A Florida judge has denied a motion to dismiss a lawsuit against Character.AI and Google claiming their AI chatbot technology contributed to the suicide of 14-year-old Sewell Setzer III, allowing this...
AI chatbots lack free speech rights in teen death lawsuit, says judge (May 22, 2025)
A federal judge's decision to allow a wrongful death lawsuit against Character.AI to proceed marks a significant legal test for AI companies claiming First Amendment protections. The case centers on a 14-year-old boy who died by suicide after allegedly developing an abusive relationship with an AI chatbot, raising fundamental questions about the constitutional status of AI-generated content and the legal responsibilities of companies developing conversational AI. The big picture: U.S. Senior District Judge Anne Conway rejected Character.AI's argument that its chatbot outputs constitute protected speech, allowing a mother's lawsuit against the company to move forward. The judge ruled she was...
AI voice scams target US officials at federal, state level to steal data (May 20, 2025)
The FBI is warning about sophisticated smishing campaigns targeting current and former government officials that use AI-generated voices and social engineering techniques to steal sensitive information. This escalation represents a concerning evolution in government-targeted scams, as cybercriminals impersonate senior officials to establish trust before directing victims to malicious links that compromise personal accounts. The big picture: Since April, cybercriminals have been targeting U.S. federal and state employees with texts and AI-generated voice messages that impersonate senior officials to establish rapport and ultimately gain access to sensitive information. Once scammers compromise one account, they use the stolen information to target additional...
AI-powered street cameras halted by police over accuracy concerns (May 20, 2025)
New Orleans police have conducted a secretive real-time facial recognition program using a private camera network to identify and arrest suspects—potentially violating a city ordinance designed to limit and regulate such technology. This unauthorized surveillance operation represents a significant escalation in police facial recognition use, raising serious concerns about civil liberties and proper oversight of AI-powered law enforcement tools. The big picture: New Orleans police secretly used a network of over 200 private cameras to automatically identify suspects in real time, bypassing required oversight processes and potentially violating a 2022 city ordinance. The Washington Post investigation revealed that when cameras...
AI in crime prevention raises “Minority Report”-style civil liberties questions (May 20, 2025)
The global expansion of AI-powered predictive policing signals a controversial shift in law enforcement strategy, with multiple countries developing systems to identify potential criminals before they commit violent acts. These initiatives raise profound questions about privacy, civil liberties, and the ethics of algorithmic decision-making in criminal justice systems where personal data like mental health history could determine whether someone is flagged as a future threat. The big picture: Government agencies in the UK, Argentina, Canada, and the US are implementing AI-powered crime prediction and surveillance systems reminiscent of science fiction portrayals. The UK government plans to deploy an AI tool...
The new federal law that makes AI-generated deepfakes illegal (May 19, 2025)
The Take It Down Act marks a pivotal federal response to the proliferation of AI-generated explicit imagery, creating the first nationwide protections against non-consensual deepfakes. After high-profile victims from celebrities to high school students suffered from having their faces superimposed onto nude bodies, this bipartisan legislation establishes clear criminal penalties and platform responsibilities. This rare moment of congressional unity illustrates how certain AI harms can transcend political divisions, particularly when targeting vulnerable individuals. The big picture: President Trump is set to sign the Take It Down Act on Monday, establishing federal protections against non-consensual explicit images regardless of whether they're...
Chrome browser uses AI to detect tech support scams (May 12, 2025)
Google is enhancing Chrome's security by implementing on-device AI technology to combat tech support scams in real time. This AI-powered protection addresses a persistent threat where scammers create convincing fake security alerts to trick users into paying for unnecessary services. By integrating Gemini Nano directly into the browser, Google aims to detect and block these scams as they appear, even when traditional security measures might miss them. The big picture: Google will deploy Gemini Nano, an on-device large language model, in Chrome version 137 to identify and neutralize tech support scams that have plagued users for years. These scams typically appear...
AI-driven scams fuel new era of digital paranoia amid remote collaboration trend (May 12, 2025)
The rise of AI-driven scams is triggering a widespread verification crisis, forcing individuals to develop multi-step validation protocols for even routine professional interactions online. As artificial intelligence makes creating convincing fake personas increasingly effortless, traditional trust mechanisms are breaking down in work environments already transformed by remote collaboration norms. This fundamental shift in online interaction is creating a new social paradigm where verification becomes a necessary preliminary step before engaging with unknown contacts. The big picture: AI technology is enabling sophisticated digital impersonation that has expanded from traditional scam platforms into professional communication channels, creating widespread trust issues. Nicole Yelland,...
Smarter scams meet smarter security in Google’s new rollout (May 9, 2025)
Google is deploying AI technology to combat common online scams, particularly tech support schemes that trick users into believing their devices are infected. This initiative represents a significant expansion of Google's security infrastructure, as the company harnesses its Gemini AI models to detect and warn users about potential threats across Chrome, Search, and Android platforms. The timing is crucial, as AI advancements have simultaneously made it easier for scammers to create convincing fake content, with global scam losses exceeding $1 trillion last year. The big picture: Google is implementing on-device AI to identify and warn users about tech support scams...
Cybercrime-as-a-Service? AI tool Xanthorox enables illicit activity for novices (May 7, 2025)
A sophisticated AI platform designed specifically for criminal activities has emerged from the shadows of the dark web into surprisingly public channels. Xanthorox represents a troubling evolution in cybercrime-as-a-service, offering on-demand access to deepfake generation, phishing tools, and malware creation through mainstream platforms like Discord and Telegram. This development signals how criminal AI tools are becoming increasingly accessible and commercialized, blurring lines between underground hacking communities and everyday technology spaces. The big picture: Despite its ominous purpose, Xanthorox operates with surprising transparency, maintaining public profiles on GitHub, YouTube, and communication platforms where subscribers can pay for access using cryptocurrency. The...
Hacker admits using AI malware to breach Disney employee data (May 5, 2025)
The intersection of AI tools and cybersecurity continues to evolve dangerously, as demonstrated by a recent case where malicious code embedded in an AI image generation tool led to a major data breach at Disney. This incident highlights how threat actors are exploiting the growing popularity of AI applications to distribute trojans that can compromise high-value corporate targets and personal information. The big picture: A California man has pleaded guilty to hacking a Disney employee by distributing a malicious version of a popular open source AI image generation tool that stole sensitive corporate and personal data. Key details: Ryan Mitchell...
Disney abandons Slack after hacker steals terabytes of confidential data using fake AI tool (May 4, 2025)
A California man has admitted to orchestrating a sophisticated cybersecurity attack against Disney that led to a massive data breach and ultimately prompted the entertainment giant to abandon Slack entirely. The case highlights how seemingly innocent AI-related software downloads can serve as vehicles for credential theft, resulting in significant corporate security compromises and legal consequences. The hack details: Ryan Mitchell Kramer, a 25-year-old from Santa Clarita, pleaded guilty to hacking Disney's company Slack channel and stealing 1.1 terabytes of confidential information. The stolen data included sensitive revenue figures for services like Disney+ and ESPN+, personal information of current and prospective...
Remote hiring becomes gateway for North Korea’s state-sponsored infiltration (May 1, 2025)
North Korea's sophisticated digital infiltration scheme has evolved from placing individual IT workers in Western companies to a complex operation leveraging AI tools and fake identities. The scheme, which generates millions for the North Korean government, now involves sophisticated identity theft, AI-generated personas, and local facilitators who manage physical logistics—creating unprecedented national security and economic risks as these operatives gain access to sensitive corporate systems while posing as remote tech workers. The big picture: North Korean operatives are systematically infiltrating Western companies through remote work positions, using stolen identities and increasingly sophisticated AI tools to create convincing fake personas. Simon...
AI-powered romance scams target Boomers, but younger generations more defrauded (May 1, 2025)
Real-time AI deepfakes are creating a dangerous new frontier in internet scams, particularly targeting vulnerable populations like the elderly. Fraudsters are now using generative AI technology to alter their appearance and voices during live video conversations, allowing them to convincingly impersonate trusted individuals or create attractive fake personas. This evolution of scam technology is making even video verification—once considered relatively secure—increasingly unreliable as a means of establishing someone's true identity. The big picture: Scammers are deploying sophisticated AI filters during live video calls to completely transform their appearance and voice, creating nearly undetectable fake identities. A recent investigation by 404...
Former athletic director jailed for racist AI-generated recording (Apr 30, 2025)
The use of AI to create deepfake content has reached a disturbing legal landmark with the sentencing of a school official who weaponized the technology for personal retaliation. This case highlights the real-world consequences of AI misuse in educational settings and establishes precedent for criminal penalties when synthetic media is deployed to harm reputations and disrupt institutions. The verdict: A former Baltimore-area high school athletic director received a four-month jail sentence after pleading guilty to creating a racist and antisemitic deepfake audio impersonating the school's principal. Dazhon Darien, 32, entered an Alford plea to the misdemeanor charge of disturbing school...
Oregon lawmakers crack down on AI-generated fake nudes (Apr 23, 2025)
Oregon is taking decisive action against AI-generated deepfake pornography with a new bill that would criminalize the creation and distribution of digitally altered explicit images without consent. The unanimous House vote signals growing recognition of how artificial intelligence can weaponize innocent photos, particularly affecting young people who may have their social media images manipulated and distributed as fake nudes. This legislation reflects a nationwide trend as states race to update revenge porn laws for the AI era. The big picture: Oregon lawmakers voted 56-0 to expand the state's "revenge porn" law to include digitally created or altered explicit images, positioning...
AI hallucination bug spreads malware through “slopsquatting” (Apr 23, 2025)
AI code hallucinations are creating a new cybersecurity threat as criminals exploit invented package names. Research has identified over 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. Attackers can publish malware under these fictional names, so that malicious code gets installed whenever programmers act on an AI assistant's suggestion to install a non-existent package. The big picture: AI-generated code hallucinations have evolved into a sophisticated form of supply chain attack called "slopsquatting," where cybercriminals study AI hallucinations and publish malware under the same names. When AI models hallucinate non-existent software packages and...
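The defensive pattern implied by this attack, never installing a package solely because an AI assistant suggested it, can be sketched in a few lines of Python. The allowlist check and the package names below are hypothetical illustrations, not part of any tooling reported in the article:

```python
# Minimal, hypothetical guard against "slopsquatting": before installing a
# dependency suggested by an AI assistant, check the exact name against a
# vetted allowlist (e.g. names pinned in your organization's lockfile)
# instead of trusting the suggestion blindly.

def is_vetted(package: str, allowlist: set) -> bool:
    """Return True only if the exact, normalized name is on the allowlist."""
    return package.strip().lower() in allowlist

# Names an assistant might emit; "reqeusts-toolbelt2" stands in for a
# hallucinated package an attacker could register with malicious code.
suggested = ["requests", "numpy", "reqeusts-toolbelt2"]
vetted = {"requests", "numpy", "pandas"}

# Hold back any unvetted name for manual review rather than installing it.
flagged = [name for name in suggested if not is_vetted(name, vetted)]
print(flagged)
```

A stricter variant could also consult the package registry before allowing installation, but an allowlist avoids trusting names that an attacker may have registered after observing the hallucination.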
AI voice cloning risks exposed by Consumer Reports, Descript more secure than ElevenLabs (Apr 18, 2025)
Voice cloning technology has rapidly advanced to a concerning level of realism, requiring only seconds of audio to create convincing replicas of someone's voice. While this technology enables legitimate applications like audiobooks and marketing, it simultaneously creates serious vulnerabilities for fraud and scams. A new Consumer Reports investigation reveals alarming gaps in safeguards across leading voice cloning platforms, highlighting the urgent need for stronger protection mechanisms to prevent malicious exploitation of this increasingly accessible technology. The big picture: Consumer Reports evaluated six major voice cloning tools and found most lack adequate technical safeguards to prevent unauthorized voice cloning. Only two...
Leak exposes 95,000 AI-generated explicit images, including child abuse material (Apr 12, 2025)
An unsecured database has exposed tens of thousands of AI-generated explicit images, including content depicting minors, highlighting the destructive potential of unregulated image generation technology. The leak from South Korean company GenNomis reveals how these tools can be weaponized to create harmful, non-consensual content targeting real individuals, adding to growing concerns about AI safety and the proliferation of deepfake technology that victimizes women and children. The big picture: An open database belonging to South Korean AI firm GenNomis leaked over 95,000 records containing explicit AI-generated images, including child sexual abuse material and de-aged celebrities. Security researcher Jeremiah Fowler discovered the...