Massive AI Data Leak & Apple Trade Secret Theft
27.08.2025

In this weekly report we cover two incidents. The AI model Grok leaked over 370,000 user chats, exposing sensitive data through its public “Share” feature. Meanwhile, Apple has sued ex-employee Chen Shi for allegedly stealing trade secrets and confidential documents before joining a competitor. Let’s dive into the details!

AI has once again become a source of data leaks. This time, Grok, a language model developed by xAI, exposed more than 370,000 user conversations. Many of the disclosed conversations contained sensitive information, such as health records, personal details, passwords, and uploaded documents. The conversations were published on Grok’s website and became available to search engines such as Google, Bing, and DuckDuckGo.

The issue stemmed from the platform’s “Share” feature. When users chose to share a conversation, clicking the “Share” button generated a unique URL. However, instead of keeping the link private, the chat was published on Grok’s website, making it visible to search engines and therefore accessible to anyone. Although user login details were not displayed, the content of the conversations often included sensitive information and documents, which could easily be misused if accessed by third parties.
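To make the failure mode concrete, here is a minimal sketch of how a share endpoint can keep shared pages out of search engines. The route, function names, and base URL are hypothetical illustrations, not xAI's actual implementation: the key point is that an unguessable link alone is not enough once the page is publicly served without anti-indexing signals.

```python
# Hypothetical sketch: keeping shared-chat pages out of search indexes.
# Route names and the base URL are illustrative, not Grok's real code.
import secrets

SHARE_PATH = "/share"  # hypothetical route prefix for shared chats


def make_share_url(base: str = "https://chat.example.com") -> str:
    """Generate an unguessable share link (a capability URL)."""
    token = secrets.token_urlsafe(16)
    return f"{base}{SHARE_PATH}/{token}"


def share_page_headers() -> dict:
    """Response headers a shared-chat page should carry.

    The X-Robots-Tag header tells crawlers not to index or follow the
    page -- the kind of signal whose absence lets public share links
    end up in search results.
    """
    return {
        "Content-Type": "text/html; charset=utf-8",
        "X-Robots-Tag": "noindex, nofollow",
        "Cache-Control": "private, no-store",
    }


def robots_txt() -> str:
    """Belt and braces: also disallow the share prefix in robots.txt."""
    return f"User-agent: *\nDisallow: {SHARE_PATH}/\n"
```

A random token makes the URL hard to guess, but only the `noindex` signals (plus the robots.txt rule) stop crawlers from indexing a page once the link appears anywhere public.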

Most users were unaware that the feature worked this way. The incident highlights, once again, the urgent need for stronger safeguards around information shared with AI systems. On the one hand, developers should design not only new features but also comprehensive security guardrails; otherwise, any protective measures can be easily bypassed. On the other hand, sensitive information must be protected here and now. There have been many cases in which AI leaked not only personal information but also commercial secrets and other highly valuable data.

If you want to ensure the safety of AI use, start by reading our article about the risks and best practices for working with neural networks. Companies can address these challenges with the help of data loss prevention (DLP) solutions, which prevent the exposure of sensitive data when users try to input confidential information into a chatbot through a browser. These systems provide immediate benefits for companies that have already implemented AI-powered solutions, and they lay the groundwork for broader AI security measures that will keep data safe in the long run.
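The idea of intercepting sensitive text before it reaches a chatbot can be sketched with a simple rule-based filter. The patterns and policy below are illustrative assumptions, not SearchInform's detection engine; production DLP systems combine many more techniques (content labels, fingerprinting, OCR, context analysis).

```python
# Minimal sketch of DLP-style filtering applied to a chatbot prompt
# before submission. The rule set is hypothetical and deliberately small.
import re

# Simple detectors for common sensitive-data types (illustrative only).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.I),
}


def check_prompt(text: str) -> list[str]:
    """Return the names of the sensitive-data rules the text matches."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]


def allow_submission(text: str) -> bool:
    """Block the message if any rule fires.

    A real DLP deployment would also log the event and alert the
    security team instead of silently dropping the input.
    """
    return not check_prompt(text)
```

Even this toy version shows the shape of the control: the check runs on the user's side of the browser boundary, so confidential content never leaves the organization.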

Another major story is emerging from Apple, which has filed a lawsuit against a former employee, Chen Shi. According to court documents, Shi is accused of stealing trade secrets, including design and development documentation, internal specifications, and product roadmaps. The investigation also alleges that he held a series of one-on-one meetings with colleagues to gather additional details about other ongoing projects.

Court filings provide further background. Shi worked at Apple as a Sensor System Architect between January 2020 and June 2025. Before starting his role, he signed a Confidentiality and Intellectual Property Agreement. In early June, Shi announced his departure from the company, telling Apple he planned to visit his parents and had no future employment plans. However, investigators say he later contacted a prospective employer, promising to “collect as much information as possible—will share with you later,” and allegedly used this as leverage during salary negotiations.

The investigation found that Shi accessed confidential files and downloaded 63 documents from the Box cloud storage system, later transferring them to a personal USB drive. He is also accused of contacting colleagues in other departments under the pretense of networking or exploring new career directions, holding 33 such meetings in his final month at Apple.

Apple claims Shi violated his confidentiality and intellectual property agreement. In response, Shi’s current employer, OPPO, stated: “We have found no evidence establishing any connection between these allegations and the employee’s conduct during his employment at OPPO.”


This case is just one of many examples of intellectual property theft. In numerous cases, confidential data is stolen through USB drives or personal cloud storage accounts. To prevent such breaches, organizations need a strong Data Loss Prevention (DLP) solution.
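One basic building block of such protection is scanning what lands on removable media. The sketch below is a simplified illustration under stated assumptions (plain-text files, fixed confidentiality markers); it is not Risk Monitor's implementation, which covers far more channels and file formats.

```python
# Illustrative sketch: flag files on a removable-media mount point whose
# content carries a confidentiality label. Markers are hypothetical.
from pathlib import Path

CONFIDENTIAL_MARKERS = ("CONFIDENTIAL", "INTERNAL ONLY")


def scan_removable_media(mount_point: str) -> list[str]:
    """Return paths under the mount point whose content carries a label."""
    flagged = []
    for path in Path(mount_point).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than crash the scan
        if any(marker in text for marker in CONFIDENTIAL_MARKERS):
            flagged.append(str(path))
    return flagged
```

In practice, a DLP agent would run such checks at copy time (and block the transfer), rather than scanning the drive after the fact, and would rely on content classification instead of literal markers.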

SearchInform has developed Risk Monitor, a next-generation DLP system that provides control over nearly all business communication channels, including web browsers, email, USB drives, cloud storage services, and printers. In addition, Risk Monitor integrates with popular business collaboration platforms such as Microsoft 365, Google Docs, and Amazon S3.

Our Next-Gen DLP system delivers comprehensive data protection. It classifies information, marks files with special labels according to content sensitivity, and blocks both accidental and intentional leaks. The solution also includes a wide range of forensic and analytical tools designed to boost the effectiveness of your security team.

Start your free 30-day trial now!


Subscribe to get helpful articles and white papers. We discuss industry trends and give advice on how to deal with data leaks and cyber incidents.