News/Crimes

Apr 7, 2025

South Korean AI startup shuts down, disappears after database exposed deepfake porn images

That breeze coming from the south of the peninsula is an AI startup in the wind... The explosive growth of AI-generated explicit content has reached a disturbing milestone with South Korean company GenNomis shutting down after researchers discovered an unsecured database containing thousands of non-consensual pornographic deepfakes. The incident highlights the dangerous intersection of accessible generative AI technology and inadequate regulation, causing serious harm particularly to women, who make up the majority of victims of these digital violations. The big picture: A South Korean AI startup called GenNomis abruptly deleted its entire online presence after a researcher discovered tens of thousands of AI-generated...

Apr 7, 2025

Lawsuit reveals teen’s suicide linked to Character.AI chatbots as platform hosts disturbing impersonations

Character.AI's platform has become the center of a disturbing controversy following the suicide of a 14-year-old user who had formed emotional attachments to AI chatbots. The Google-backed company now faces allegations that it failed to protect minors from harmful content, while simultaneously hosting insensitive impersonations of the deceased teen. This case highlights the growing tension between AI companies' rapid deployment of emotionally responsive technologies and their responsibility to safeguard vulnerable users, particularly children. The disturbing discovery: Character.AI was found hosting at least four public impersonations of Sewell Setzer III, the deceased 14-year-old whose suicide is central to a lawsuit against...

Apr 3, 2025

Court ruling: AI-generated child sexual abuse images protected for private possession, not distribution

A recent court ruling on AI-generated child sexual exploitation material highlights the delicate balance between First Amendment protections and fighting digital child abuse. The decision in a case involving AI-created obscene images establishes important precedent for how the legal system will address synthetic child sexual abuse material, while clarifying that prosecutors have effective tools to pursue offenders despite constitutional constraints on criminalizing private possession. The legal distinction: A U.S. district court opinion differentiates between private possession of AI-generated obscene material and acts of production or distribution, establishing important boundaries for prosecutions in the emerging field of synthetic child sexual abuse...

Mar 28, 2025

Amazon blocks 99% of counterfeit listings with AI-powered fraud prevention

Amazon's latest Brand Protection Report reveals significant advances in AI-powered fraud prevention, showcasing how the e-commerce giant has weaponized artificial intelligence to combat counterfeiting. The company's billion-dollar investment and strategic hiring of ML scientists and software developers have enabled it to proactively block over 99% of suspected infringing listings, demonstrating how AI has become central to maintaining marketplace integrity and consumer trust in digital commerce. The big picture: Amazon invested more than $1 billion last year and expanded its workforce with specialized AI talent to strengthen its fraud prevention capabilities. The company specifically hired machine learning scientists and software developers...

Mar 20, 2025

Me, you, and celebrities too: Loti AI opens deepfake detection to the public, not just VIPs

Deepfake detection is experiencing a critical democratization as threats to personal digital identity move beyond public figures to everyday citizens. Loti AI's expansion of its detection technology to all users signals a significant shift in how society approaches synthetic media protection and digital reputation management. This move highlights the growing recognition that personal image protection should be a universally accessible right rather than a privilege limited to celebrities. The big picture: Loti AI has opened access to its deepfake detection and takedown service to the general public, expanding beyond its previous exclusive availability to celebrities and public figures. The company,...

Mar 19, 2025

Instagram’s disturbing new trend: AI-generated disability fetish accounts for profit

Instagram is enabling a disturbing new trend of AI-generated content that exploits and fetishizes people with disabilities for profit. The platform has become ground zero for a growing network of accounts using artificial intelligence to create fake influencers with Down syndrome who sell nude content on adult platforms. This practice represents a dangerous evolution of "AI pimping" — where content thieves use AI to modify stolen material, creating specialized fetish content that simultaneously exploits real creators and harmfully objectifies marginalized groups. The big picture: A network of Instagram accounts is using AI to steal content from human creators and deepfake...

Mar 18, 2025

AI is boosting organized crime across Europe, blurring lines between profit and ideological motives

Artificial intelligence is becoming a powerful accelerator for organized crime across Europe, creating unprecedented challenges for law enforcement agencies. Europol's latest four-year assessment reveals a concerning evolution where AI-enhanced criminal operations are not only becoming more sophisticated but are increasingly intertwined with state-sponsored destabilization efforts. This convergence represents a fundamental threat to EU societies as criminal networks leverage advanced technologies to amplify their reach, efficiency, and destructive capabilities. The big picture: Europol's Executive Director Catherine De Bolle warns that cybercrime has evolved into a "digital arms race" targeting multiple sectors of society with increasingly devastating precision. Criminal activities now frequently...

Mar 6, 2025

Rush in, attack: Cybercriminals now operate like businesses, using AI to strike faster than ever

Cybersecurity has entered a new era where sophisticated adversaries operate with business-like efficiency and structure, utilizing AI tools and social engineering to breach defenses with unprecedented speed. According to the 2025 CrowdStrike Global Threat Report, threat actors have evolved beyond traditional malware attacks to employ identity-based techniques, deepfake-driven social engineering, and rapid cloud exploitation capabilities—creating a high-stakes innovation race between defenders and increasingly professionalized attackers. The big picture: Modern cyber adversaries now mirror legitimate business operations with sophisticated organizational structures, specialized roles, and resource management practices. Nation-state actors, ransomware groups, and financially motivated cybercriminals have developed methodical approaches to identifying...

Feb 28, 2025

Singapore files fraud charges against 3 in Nvidia chip scandal

Singapore's arrest of three men for alleged fraud has exposed a potential pipeline for smuggling Nvidia's advanced AI chips into China, highlighting the growing challenge of enforcing U.S. export controls on critical technology. The case involves DeepSeek, a Chinese AI firm whose recent model's performance sparked industry buzz, and underscores Singapore's crucial position as Nvidia's second-largest market, where it functions primarily as an invoicing hub rather than a final destination for shipments. The big picture: Singapore authorities charged two citizens and one Chinese national with making false declarations about the end users of server equipment in 2023 and 2024. The...

Feb 23, 2025

AI protest at OpenAI HQ leads to 3 arrests in San Francisco

The rise of artificial general intelligence (AGI) has sparked intense debate about AI safety and oversight, with technology companies like OpenAI at the center of growing public concern. In February 2025, this tension manifested in a protest at OpenAI's San Francisco headquarters, resulting in three arrests and renewed attention to controversial deaths within the AI industry. The protest details: A demonstration organized by the advocacy group Stop AI drew approximately two dozen protesters to OpenAI's Mission Bay office, focusing on concerns about artificial general intelligence development. Protesters chanted slogans including "Stop AI or we're all going to die" and "Close...

Feb 18, 2025

Financial crime-stopper Napier AI to create 100+ jobs in Belfast move

The growing fight against financial crime has led technology companies to establish regional hubs focused on AI-powered solutions. Napier AI, a London-based company specializing in anti-money laundering technology, is expanding its presence with a significant investment in Belfast. Investment details: Napier AI is investing £10 million to create 106 new jobs at its recently opened office in Belfast's Pearl Assurance building. Twenty-five positions have already been filled, with the remaining roles to be filled by 2027. The expansion is expected to contribute nearly £5 million in additional salaries to the Northern Ireland economy. The new positions focus on high-end research...

Feb 13, 2025

Romance scams thrive in an age of increasing social isolation, costing billions

The global rise in social isolation and the proliferation of dating apps have created fertile ground for romance scams targeting vulnerable individuals. Criminal enterprises are increasingly leveraging artificial intelligence and sophisticated social engineering tactics to exploit feelings of loneliness, resulting in billions of dollars in losses. The scope of the crisis: Romance scams have caused nearly $4.5 billion in losses across the United States over the past decade, with individual victims often losing significant portions of their savings. Scammers operate systematically through dating apps and social media platforms, dedicating extensive time to building relationships with potential targets. Criminal organizations are...

Feb 13, 2025

Ex-Google CEO Eric Schmidt fears AI-enabled ‘Bin Laden’ scenario

The rapid advancement of artificial intelligence technology has raised concerns about potential misuse by malicious actors. Eric Schmidt, who led Google as CEO from 2001 to 2011 and as executive chairman until 2017, has expressed specific worries about AI falling into the hands of hostile states and terrorists. Key concerns from Schmidt: The former tech executive emphasizes that extreme risks from AI could come from nations like North Korea, Iran, or Russia potentially misusing the technology for harmful purposes. Schmidt specifically highlighted the possibility of AI being used to develop biological weapons. He drew parallels to scenarios like the 9/11 attacks, expressing concern about "evil" actors exploiting modern...

Feb 12, 2025

Convincing AI voice scam targeting CEOs leads to cash freeze by Italian police; fraudsters get the boot

The rapid evolution of AI technology has enabled sophisticated voice cloning scams, as demonstrated by a recent high-profile fraud case in Italy targeting prominent business figures. Defense Minister Guido Crosetto's voice was artificially replicated by scammers who used it to solicit urgent financial transfers under the guise of rescuing kidnapped journalists. The scam's methodology: Fraudsters orchestrated an elaborate scheme involving fake calls from government offices and AI-generated voice impersonation of Italy's Defense Minister. The scammers posed as defense ministry officials, making calls that appeared to originate from Rome government offices. They claimed urgent funds were needed to secure the release...

Feb 11, 2025

FTC bans DoNotPay’s ‘AI lawyer’ claims and orders refunds

DoNotPay, a company that marketed its online service as "the world's first robot lawyer," has faced regulatory action from the Federal Trade Commission (FTC) over misleading artificial intelligence claims. The FTC's investigation revealed that DoNotPay made unsubstantiated claims about its AI chatbot's ability to match human lawyer expertise in generating legal documents and providing legal advice. Key enforcement actions: The FTC has finalized an order requiring DoNotPay to cease making deceptive claims about its AI capabilities and implement significant remedial measures. The company must pay $193,000 in monetary relief. DoNotPay is required to notify all subscribers from 2021-2023 about the...

Feb 5, 2025

AI grandma Daisy battles scammers with surprising results

Two months ago, British telecommunications provider O2 announced Daisy, an AI-powered chatbot designed to waste scammers' time. O2 is now beginning to share the results of its chatbot in action. The innovation: Daisy specifically targets phone scammers by keeping them engaged in pointless conversations. The bot presents herself as an elderly grandmother and expertly deploys tactics like searching for glasses, discussing recipes, and reminiscing about the past. Conversations can last up to 40 minutes, effectively preventing scammers from targeting actual potential victims during this time. The system was trained on real scam call data, enabling it to...

Feb 2, 2025

2 members of fringe AI accelerationist group arrested for murder

A radical AI-focused fringe group called the Zizians has been linked to two recent killings, with the arrests of two young computer scientists who were allegedly influenced by the group's extreme ideology. The key arrests: Two computer scientists in their early twenties were arrested in January 2025 for separate homicides on opposite coasts of the United States. Maximillian Snyder, 22, was arrested on January 17 in Redding, California, for allegedly stabbing 82-year-old landlord Curtis Lind. Teresa Youngblut, 21, was arrested on January 24 for allegedly killing Border Patrol agent David Maland, 44, in Vermont during a shootout. The suspects had...

Feb 2, 2025

Merely possessing these AI tools may land you up to 5 years in prison now

The U.K. is set to become the first nation to criminalize AI tools designed for creating child sexual abuse material (CSAM), with offenders facing up to five years in prison. Key legislation details: The U.K. Home Office is introducing four new laws specifically targeting AI-generated child sexual abuse material. Possession, creation, or distribution of AI tools designed to create CSAM will become illegal, carrying a maximum five-year prison sentence. Operating websites that share AI-generated CSAM will be punishable by up to 10 years in prison. Possession of AI manuals explaining how to use AI for sexual abuse will carry up...

Jan 29, 2025

Google thwarts hacker groups using Gemini to breach accounts

State-sponsored hackers from Iran, North Korea, China, and Russia have attempted to use Google's Gemini AI for malicious purposes, but their efforts have not produced any significant cybersecurity threats. Key findings: Google's investigation revealed that multiple state-sponsored hacking groups have been experimenting with Gemini AI for various tasks, though their attempts at sophisticated cyber attacks have been unsuccessful. More than 10 Iranian, 20 Chinese, and nine North Korean hacking groups were identified using Gemini. Iranian APT actors were found to be the most frequent users of the AI system. The hackers primarily used Gemini for basic tasks like translation, content...

Jan 29, 2025

OpenAI is investigating a potential data breach by DeepSeek

ChatGPT maker OpenAI is investigating Chinese AI startup DeepSeek for potentially misusing data from its models to create a competing AI assistant. Core investigation details: OpenAI is reviewing evidence that DeepSeek may have used a technique called distillation to transfer knowledge from OpenAI's models to its own smaller model. Distillation is a legitimate technique that transfers knowledge between AI models without exposing their inner workings. While distillation itself is permitted, OpenAI's terms of service prohibit using distilled data to build competing AI products. OpenAI is working with the U.S. government to protect advanced AI models developed in the United States...
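For readers unfamiliar with the technique, distillation classically trains a small "student" model to match a larger "teacher" model's temperature-softened output distribution via a KL-divergence loss. The sketch below is purely illustrative of that general idea; it is not OpenAI's or DeepSeek's actual code, and all function names and parameters are invented for this example.

```python
import numpy as np

def softened_probs(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature flattens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2.

    The student is trained to minimize this loss, pulling its output
    distribution toward the teacher's 'soft labels'.
    """
    p = softened_probs(teacher_logits, temperature)  # teacher's soft targets
    q = softened_probs(student_logits, temperature)  # student's predictions
    kl = np.sum(p * (np.log(p) - np.log(q)))
    return float(kl * temperature ** 2)  # T^2 keeps gradient scale comparable
```

When student logits match the teacher's, the loss is zero; any divergence yields a positive loss, which is what a training loop would minimize over many teacher query/response pairs.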

Jan 25, 2025

$1M AI gun detection system fails to prevent fatal school shooting

A $1 million artificial intelligence gun detection system at Nashville's Antioch High School failed to detect a weapon involved in a fatal school shooting incident. The incident details: A tragic shooting at Antioch High School in Nashville resulted in the death of a 16-year-old student and injuries to another, with the 17-year-old shooter subsequently taking his own life. The shooting occurred in the school's cafeteria, where a student managed to bring in a concealed weapon. The AI-powered detection system, provided by Omnilert, did not identify the weapon before the shooting. The system later activated when police entered the building with...

Jan 25, 2025

This San Francisco couple defrauded their AI investors out of $60M

The founders of San Francisco AI startup GameOn Technology have been charged with orchestrating a $60 million fraud scheme targeting investors between 2018 and 2024. The allegations: Federal prosecutors have indicted Alexander Beckman and Valerie Lau Beckman on 25 criminal counts, including conspiracy, wire fraud, securities fraud, and identity theft. Beckman founded GameOn Technology (later renamed ON Platform) in 2014, developing customer service chatbots for major sports leagues and luxury brands. The company's business model allegedly proved unsustainable, relying entirely on investor funding rather than genuine revenue. Federal investigators found the company's actual annual revenue never exceeded $1 million, despite...

Jan 23, 2025

AI impersonators are on a mission to exploit your personal data

The rise of AI personas designed to mimic individuals for marketing and potential scams represents a significant development in digital marketing and online fraud techniques. The core concept: Advanced generative AI systems can now create sophisticated digital replicas of individuals, using their likeness, personality traits, and communication styles to influence purchasing decisions or perpetrate scams. AI personas can mimic an individual's writing style, voice, facial expressions, and even full-body movements. These digital replicas can be created using publicly available data from social media and other online sources. The technology can create both static and dynamic representations, including 3D visualizations. Technical...

Jan 19, 2025

How to protect yourself against AI scams

AI-powered scams are becoming increasingly sophisticated, with scammers using deepfake technology to impersonate voices and create deceptive video content. Current threat landscape: Recent incidents highlight how AI technologies are being weaponized for financial fraud and impersonation scams. A notable case involved a scammer using AI to replicate WIRED editorial director Katie Drummond's voice in an attempt to deceive her father. Fraudsters are now employing AI tools to create convincing deepfake videos for real-time scam operations. Some AI financial advisory startups have been found to push high-fee cash advances and high-interest personal loans rather than providing genuine financial guidance. Key warning...
