(In)Secure Digest – NY Edition: Brewery Shutdown, Insider Scams, AI Risks
23.12.2025

2025 turned out to be a busy year for information security incidents and industry news. As tradition dictates, on the eve of the New Year we asked our Leading Analyst, Sergio Bertoni, to share his personal shortlist of the most memorable breaches of the year.

Nothing Ventured, Nothing Gained

What happened: A hacker attempted to bribe a BBC employee to gain access to the company’s internal infrastructure.

How it happened: In July 2025, Joe Tidy, a BBC journalist covering cybersecurity, received a message on Signal from a user calling himself Syndicate. The sender offered Joe a way to “make some money”: provide paid access to his work laptop so corporate data could be encrypted, giving the attacker a foothold in the BBC’s infrastructure.

In return, Syndicate promised 15% of the ransom he planned to demand. He also assured the journalist of full anonymity and claimed he had successfully pulled off similar insider schemes before.

During the exchange, the attacker repeatedly pushed the journalist to run a specific script. Tidy later wrote about the incident, suggesting he may have been mistaken for a BBC security engineer. While he was pursuing the story and deliberately stalling, his phone was flooded with multi-factor authentication prompts. The attacker may have used a technique similar to MFA fatigue, also known as notification bombing, as a form of psychological pressure.

The journalist did not take the bait. He contacted the internal security team, and as a precautionary measure his access to the corporate network was temporarily suspended.
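
For readers curious how the notification bombing described above can be spotted, here is a minimal, hypothetical detection sketch: it counts MFA push prompts per account in a sliding window and flags a burst as possible bombing. The window length, threshold, and account name are illustrative assumptions, not details from the BBC case.

```python
from collections import defaultdict, deque
import time

# Illustrative values; a real IdP or SIEM rule would be tuned per environment.
WINDOW_SECONDS = 120  # sliding window length
MAX_PROMPTS = 5       # more prompts than this in the window looks like bombing

_prompts = defaultdict(deque)  # account -> timestamps of recent MFA prompts

def record_mfa_prompt(account: str, now: float | None = None) -> bool:
    """Record one MFA push prompt; return True if it looks like MFA fatigue."""
    now = time.time() if now is None else now
    recent = _prompts[account]
    recent.append(now)
    # Drop prompts that have fallen out of the sliding window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    return len(recent) > MAX_PROMPTS

if __name__ == "__main__":
    # Simulate a burst of prompts five seconds apart against one account.
    for i in range(8):
        if record_mfa_prompt("alice", now=float(i * 5)):
            print(f"ALERT: possible MFA bombing after prompt {i + 1}")
            break
```

In practice such an alert would feed an automated response, for example temporarily blocking push approvals for the targeted account, not unlike the BBC's precautionary suspension of the journalist's access.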

CrowdStrike was less fortunate. On November 20, the Scattered Lapsus$ Hunters group claimed they had compromised the company’s infrastructure and published screenshots of internal systems as proof. CrowdStrike denied an external breach, stating that the screenshots had been leaked by an employee.

According to media reports, the attackers negotiated with the insider to provide not only screenshots but also direct infrastructure access for USD 25,000. However, CrowdStrike’s security team detected suspicious activity in advance and revoked the employee’s access.

Sergio Bertoni comments: Reading about these incidents once again brings to mind a familiar problem. Perimeter defenses may be strong against external threats, but a malicious insider can bypass them entirely. In practice, insiders often pose a greater risk than external attackers. They are already inside the organization, have legitimate access, can earn colleagues’ trust, and quietly test security controls for weaknesses.

Laziness Is the Engine of Innovation

What happened: Public-sector employees found a creative way to cheat a time-tracking system.

How it happened: Municipal workers in a major Asian city collected salaries for months without showing up for work. They exploited weaknesses in a facial recognition–based attendance system.

Employees printed photos of their colleagues and used them as masks. One person could clock in for several others simply by holding up a printed face in front of the terminal.

In October 2025, a concerned citizen reported the scheme and submitted CCTV footage from a camera installed above the attendance terminal. How the footage was obtained remains unclear, but the information was passed to supervisory authorities, who promised a response by year-end.

Sergio Bertoni comments: Automation is powerful only when it is well designed and applied where it truly makes sense. Blindly deploying modern technologies, including AI, to optimize processes or replace humans is risky. There will always be someone clever enough to comply only with the formal rules enforced by a machine in order to gain personal benefits. In this case, a traditional security guard might have been more effective than an advanced facial recognition system.

Grok, Is This Real?

What happened: An AI-generated image brought rail traffic to a halt.

How it happened: In December 2025, a magnitude 3.3 earthquake struck a European region. While assessing the aftermath, local authorities came across a widely shared image of a supposedly destroyed bridge. Officials quickly dispatched a repair crew and halted or delayed 32 trains.

When engineers arrived, they discovered the image circulating online had been generated by AI.

According to law enforcement reports, throughout 2025 criminals increasingly used publicly available photos and videos to generate AI-based fake content. These deepfakes depicted people allegedly held hostage, pleading for help and urging relatives to pay ransom. Such materials were sent directly to family members and often timed to coincide with real missing-person cases to increase credibility.

Sergio Bertoni comments: Deepfakes have reached critical mass. Their volume has grown significantly, and their quality now raises serious concerns. Major platforms began requiring AI content labeling back in 2024. In several jurisdictions, publishing deliberately false information, including deepfakes, is now a criminal offense. Mandatory labeling of AI-generated content is quickly becoming a global norm.

Progress in (Un)Security

What happened: In 2025, OpenAI repeatedly found itself at the center of controversial security incidents.

How it happened: On November 26, the developer of ChatGPT disclosed a data leak affecting both OpenAI itself and organizations using its API.

The breach occurred at a third-party contractor, the analytics platform Mixpanel. On November 9, Mixpanel detected unauthorized access to parts of its infrastructure and confirmed the exfiltration of OpenAI API user data, including email addresses, usernames, and approximate locations derived from browser data.

Mixpanel notified OpenAI immediately; even so, the AI provider terminated the partnership and apologized to affected customers.

Earlier, in July 2025, personal ChatGPT conversations, including chats containing sensitive corporate information, appeared in Google search results. The cause was an optional setting on shared chat links that allowed the conversations to be indexed by search engines. While the feature worked as designed, users later complained that its privacy implications had not been communicated clearly.

Sergio Bertoni comments: Businesses have long understood the risks of data leakage through AI. AI improves productivity, but when a company relies on public AI services, all user data is transferred to the vendor. That data may surface in responses to other users or be exposed if a contractor or developer is compromised. On-premises LLMs mitigate external leakage but introduce new challenges, such as access control. Organizations must ensure sensitive data is visible only to authorized teams. AI agents capable of performing actions in IT systems create additional risks if granted excessive permissions. New technologies solve old problems but inevitably create new ones.
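
To make the access-control point above concrete, here is a small, hypothetical sketch of permission-aware retrieval for an on-premises LLM: every document carries its own access list, and the retrieval layer filters by the requesting user's groups before anything reaches the model's prompt. The corpus, group names, and relevance check are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_groups: set = field(default_factory=set)  # groups that may read it

# Hypothetical corpus: each document carries its own ACL.
CORPUS = [
    Document("Q3 revenue forecast: ...", {"finance"}),
    Document("Office relocation FAQ: ...", {"finance", "engineering", "hr"}),
]

def retrieve_for_user(query: str, user_groups: set) -> list:
    """Return only documents the requesting user is authorized to see.

    Enforcing the check in the retrieval layer means unauthorized text
    never enters the LLM prompt, so the model cannot leak it.
    """
    return [
        doc.text
        for doc in CORPUS
        if doc.allowed_groups & user_groups      # any overlap grants access
        and query.lower() in doc.text.lower()    # toy relevance match
    ]

if __name__ == "__main__":
    print(retrieve_for_user("revenue", {"engineering"}))  # -> [] (denied)
    print(retrieve_for_user("revenue", {"finance"}))      # -> forecast text
```

The same principle applies to AI agents: scope their credentials to the minimum set of systems and actions they need, rather than letting an agent inherit a human operator's full permissions.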

Loneliness Is a Dangerous Motivator

What happened: An accountant embezzled company funds to donate to streamers and pay for host clubs.

How it happened: An employee identified as Zhou struggled with loneliness and turned to interactive livestreams offering emotional conversations in exchange for tips. One streamer gained her trust by promising long-term care and affection.

Mounting expenses led to debt, the collapse of her family finances, and divorce. Seeking stronger emotional engagement, she began visiting host clubs, where similar services came at a much higher cost.

Over three months, Zhou transferred approximately 4.5 million yuan from company accounts. She is now in custody on embezzlement charges.

Sergio Bertoni comments: Security teams often expect insider fraud to follow rational patterns. In reality, people are not always driven by logic or greed. Emotional distress, loneliness, and dissatisfaction can also motivate misconduct. Insider risk programs must take this human factor into account.

The Beer Supply Collapse

What happened: Hackers disrupted operations at Japanese beverage producer Asahi.

How it happened: On September 29, 2025, Asahi disclosed a cyberattack that halted production and distribution at its domestic facilities.

As a result, flagship products nearly disappeared from store shelves. Given a daily output of millions of bottles, the financial impact was severe.

The Qilin ransomware group claimed responsibility and reported the theft of more than 9,000 files totaling 27 GB.

An internal investigation confirmed attackers gained access through network equipment, deployed ransomware, and accessed employee workstations. More than 1.5 million records containing personal and contact information were potentially exposed.

As of the latest update, the company operates at roughly 10% of its former capacity. Production remains disrupted, and staff rely on manual processes, taking orders by phone and fax.

Sergio Bertoni comments: No organization is fully immune to cyberattacks. Every major incident is a reason to reassess security readiness and disaster recovery plans and to test them in practice. Holiday periods are particularly risky due to reduced vigilance and staffing. Attacks during these periods often cause greater business damage, especially for consumer-facing companies.

Before stepping away for the holidays, ensure on-call coverage is assigned, incident response plans are rehearsed, and security controls are properly configured.

Merry Christmas and Happy New Year!
