From AI to Identity: 5 Things Black Hat 2024 Taught Us About Modern Data Security

Aug 15, 2024
Scott Solomon

At Black Hat 2024, data security, identity, and AI were top-of-mind topics as organizations grapple with the challenge of integrating AI technologies like Microsoft Copilot while maintaining robust data security and identity access controls. There is a clear tension: the desire to leverage cutting-edge AI technologies versus the reality of the security risks they introduce.

We break down the AI issues and more below with our 5 key insights from Black Hat 2024.

1. The AI Conundrum: Balancing Innovation with Security

There was widespread agreement that AI security is fundamentally data security. Organizations are eager to adopt AI technologies but remain wary of the risks, particularly in the absence of strong data visibility. The fear that a single compromised account or an unregulated AI training process could lead to a significant data breach, a compliance violation, or misaligned AI outputs is real. This risk pushes many to reconsider deploying AI tools without proper safeguards, including data sanitization processes and strict access controls.

One of the most pressing issues discussed was the inadvertent feeding of sensitive information, such as Intellectual Property (IP) or Personally Identifiable Information (PII), into AI models. If such data is used to train AI, it can lead to unintended consequences, such as the generation of outputs that expose this sensitive information. 

With this in mind, another recurring concern was the risk posed when a threat actor gains access to a user account and uses Copilot to extract sensitive information like IP or PII at scale.

There is also an ongoing concern about the long-term implications of sensitive data being embedded into AI models. Once data is used in training, it can be challenging—if not impossible—to fully extract it from the model. This permanence creates a lasting risk, especially if the data was not intended to be included in the first place. As AI becomes more integrated into business processes, organizations must be vigilant in ensuring that their data security practices evolve to meet these new challenges.

Additionally, Nvidia illustrated this with a case study involving an AI-driven customer service chatbot, where an insecure plugin led to unauthorized data access. This example underscored the critical importance of secure integration when implementing plugins in AI systems.

In summary, there were four AI security use cases we heard repeatedly.

  • Ensuring Copilot and Other AI Assistants Don’t Access Sensitive Data: Organizations want to prevent generative AI tools from interacting with regulated or sensitive data, thereby mitigating the risk of accidental data leaks.
  • Cleaning Up Training Data: Before using data to train or fine-tune AI models, it’s crucial to ensure that it’s free from regulated data or PII. This step is essential for maintaining compliance and protecting sensitive information.
  • Controlling AI Output: There’s a growing demand for mechanisms that ensure AI-generated outputs are properly labeled and controlled, particularly when they involve sensitive or regulated data.
  • AI Inventory: Many organizations expressed a desire to simply know where AI is being used within their systems. Having an inventory of AI tools and applications can help in managing and securing these technologies more effectively.
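To make the "cleaning up training data" use case concrete, here is a minimal sketch of scrubbing PII from text before it reaches a training pipeline. The regex patterns and placeholder labels are illustrative assumptions; a production pipeline would rely on a dedicated classification engine rather than regexes alone.

```python
import re

# Hypothetical patterns for a few common PII types (illustrative only).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched PII with a typed placeholder before training."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(scrub(record))
# → Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```

Keeping a typed placeholder (rather than deleting the match outright) preserves sentence structure for the model while removing the sensitive value itself.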

For more on Cyera’s approach to securing AI tools, check out our recent webinar, “From Artificial to Intelligent: Securing Data for AI.”

2. MFA Implementation: The Decision Tree Dilemma

Multi-Factor Authentication (MFA) is a cornerstone of modern cybersecurity, but its implementation in large enterprises is far from straightforward. Many organizations operate with a complex decision tree, where some users are mandated to use MFA, while others are not. This inconsistent approach often stems from a lack of context about the users and the types of records they can access.

There’s a growing recognition that understanding user context and their data access patterns could simplify the MFA decision-making process. With better insights into who is accessing what, organizations can streamline their MFA policies, ensuring that those with access to sensitive information are adequately protected. This, at the end of the day, is an identity access challenge. 
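The idea of replacing a sprawling decision tree with a single data-driven rule can be sketched as follows. The `User` model, field names, and threshold are hypothetical, purely to show the shape of the policy: anyone who can reach sensitive records or share externally gets MFA.

```python
from dataclasses import dataclass

# Hypothetical user model; a real deployment would pull these attributes
# from an identity provider and a data security platform.
@dataclass
class User:
    name: str
    sensitive_records: int   # count of sensitive records the user can access
    external_sharing: bool   # whether the user can share data outside the org

def requires_mfa(user: User, threshold: int = 0) -> bool:
    """One context-based rule instead of a per-group decision tree."""
    return user.sensitive_records > threshold or user.external_sharing

print(requires_mfa(User("analyst", sensitive_records=1200, external_sharing=False)))
# → True
print(requires_mfa(User("kiosk", sensitive_records=0, external_sharing=False)))
# → False
```

The point is that once data-access context is available, the MFA policy collapses to a short, auditable predicate rather than a maze of per-department exceptions.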

3. Identity and External Sharing with Microsoft 365

At Black Hat we saw that Microsoft 365 remains a vital tool for collaboration, but external sharing of records presents a significant security challenge. Security teams are caught between the need to maintain operational efficiency and the necessity of protecting sensitive information. While it’s technically possible to lock down external sharing, doing so can disrupt business processes and hinder collaboration.

To manage this, many security teams resort to custom PowerShell scripts to track shared files and domains. However, there’s a clear need for a more efficient solution—an “easy button” that can identify who has access to files, how they are being shared, and with whom. 
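As a stand-in for those custom scripts, here is a minimal sketch of flagging external shares, assuming a sharing report has already been exported to CSV (the column names and the `INTERNAL_DOMAINS` allow-list are illustrative assumptions, not a Microsoft 365 API).

```python
import csv
import io

INTERNAL_DOMAINS = {"example.com"}  # hypothetical corporate domains

def external_shares(report_csv: str):
    """Yield (file, recipient) pairs shared outside the allow-listed domains."""
    for row in csv.DictReader(io.StringIO(report_csv)):
        domain = row["shared_with"].rsplit("@", 1)[-1].lower()
        if domain not in INTERNAL_DOMAINS:
            yield row["file"], row["shared_with"]

report = """file,shared_with
roadmap.xlsx,alice@example.com
roadmap.xlsx,bob@oldconsultants.com
budget.docx,carol@divested-unit.net
"""
for name, recipient in external_shares(report):
    print(name, recipient)
```

Even a simple allow-list check like this surfaces the stale external grants described next, such as a former consulting firm's domain still appearing in the report.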

A common talking point was the scenario in which external users, such as former consulting firms or divested business units, still have access to sensitive files. The fact that these external users are often overlooked when identity access controls are updated highlights a significant security gap.

While this isn’t a new issue, emerging data security technologies like Cyera’s Identity Module are bringing it to the forefront. Organizations are increasingly focused on ensuring that external users are promptly removed from access lists when their association with the company ends, preventing potential data leaks.

4. The Struggle with Legacy Data Classification Tools

The consensus at Black Hat was clear: legacy data classification tools have not lived up to expectations. Security executives voiced frustrations about the time it takes to realize value from these tools, the complexity of deployments, and the high maintenance overhead. In many cases, support has been lacking, and the cost has outweighed the return on investment.

This dissatisfaction is driving organizations to seek alternatives that offer quicker deployments, easier maintenance, and better support. The demand is for solutions that can deliver immediate value, reducing the time and effort required to secure sensitive data. 

5. The Path Forward

The insights from Black Hat 2024 underscore the complexities that modern enterprises face in securing their data while embracing new technologies like AI. Organizations are seeking data security solutions that provide context, streamline processes, and deliver immediate value. As AI, identity, and data security continue to converge, the focus will increasingly be on solutions that can safeguard sensitive information without hindering innovation.

For organizations looking to stay ahead of the curve, the time to act is now. Whether it’s improving your data security posture, investigating identity access and MFA issues, finding better ways to safely share information with Microsoft 365, or securing AI initiatives, the decisions made today will shape the security landscape of tomorrow.

Interested in how Cyera can help address these challenges? Request a demo today.