I’m a cybersecurity professional, here’s why I’m preparing for an AI data breach

A robot hand touching a locked digital shield blocking a human from accessing data (Image credit: Blue Planet Studio/Shutterstock)

Recently, OpenAI acknowledged a security breach at a third-party data analytics vendor that led to the exposure of some of its API users’ personal information, including email addresses, names, and browser details.

On its own, the incident underscores the continuing risks of supply chain targeting and third-party data exposure. Beyond that, however, it serves as a potential shot across the bow for the cybersecurity community and the broader public.

Director of Threat Intelligence at LastPass.

Treasure trove of data

AI companies are a treasure trove of data. This goes beyond the data the models are trained on, or even the intellectual property embedded in the technology itself: AI companies can be viewed, much like Cloud Service Providers (CSPs), as repositories for a massive amount and variety of customer-provided data.


As we saw in the late 2010s, nation-states and other threat actors increased their targeting of CSPs to maximize their return on investment, and it is only a matter of time until we see a major breach of one of the AI companies and the accompanying exposure of personal and proprietary data.

The data is too attractive, and threat actors are too capable.

This isn’t to take anything away from the security programs at these companies. On the contrary, there is no doubt that the security programs, particularly among the most advanced firms that draw the greatest interest from threat actors, are world-class, incredibly well-resourced, and well-operated. But it’s the classic asymmetry: defenders need to be right all the time, while attackers only need to be right once.

Secure by design

To be clear, this isn’t even taking into consideration the recent security issues identified within Moltbook after it was rapidly adopted in the last few weeks, including major vulnerabilities independently discovered by both Wiz, as captured in their excellent blog post, and Jameson O’Reilly which were highlighted by 404 Media.


While Moltbook is the focus of these recent reports, the issues arising from insecure development of AI tools - especially as the capabilities and technology proliferate - are much larger and more distressing, and they deserve their own analysis.

These issues go back to an overarching emphasis on speed of implementation, an overreliance on vibe coding, and a fundamental failure to follow the “secure-by-design” mantra, all of which create security issues that threat actors will most certainly leverage. But again, that’s another topic… back to the issue at hand.

What makes a potential large-scale breach of a major AI firm so unique is the variety and sensitivity of the data. Many companies don’t even realize some of their most sensitive data may have already been shared via their employees.

According to a study earlier this year from Harmonic, 45.4% of companies’ sensitive data submissions into AI apps came from personal accounts, and Varonis found that 99% of organizations have sensitive data exposed to AI tools, including unsanctioned apps.

Combine this with the deeply personal information individuals are sharing with AI chatbots, from questions that have later been used in criminal cases to therapy-like conversations about mental health.

The potential for extortion and blackmail becomes a concern as well, particularly among those who may feel pressure to avoid going to therapists or reporting mental health concerns, such as those in intelligence, first responders, or the military.

People view AI chatbots as a safe place to share their thoughts and questions while maintaining a sense of anonymity, when this may not be the case, particularly over the long term.

Enforcing robust AI

I raise these concerns not to be a naysayer or a Cassandra, but in hopes of preparing the larger AI customer base for the inevitable so that they can take the appropriate steps now before something happens.

This means examining their risk appetite, be it personal, professional, or organizational, for what they are willing to share with AI and allow to be stored in perpetuity on third-party servers that are viewed as rich targets. In short, users should examine what sensitive data, if any, they are comfortable sharing with an external organization.

For companies, which often have data classification policies, this is easier to do. For personal users, this can be more difficult. Once this examination is complete, it means taking steps to adjust behavior, again either personal or organizational, to align with that risk appetite.

This may mean developing, implementing, and (most importantly) enforcing robust AI use policies within your company. This may also mean researching chatbots before leveraging them for asking personal and/or sensitive questions that you may not want to have out in the open in the event of a large breach.

Major breach

AI and its continuing rapid development obviously have some amazing and wonderful implications for companies and individuals alike. But these companies’ place as highly prized targets for advanced threat actors means it is almost certainly just a matter of time until a major breach occurs.

It is best for users to consider now what data they would not want exposed in the event of a major breach, and to refrain from submitting it in the first place.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

