GUIDE
Secure Your Web Apps: Practical Fixes for the Top 5 Vulnerabilities.
GUIDE
Secure Your Web Apps: Practical Fixes for the Top 5 Vulnerabilities.

A CISO's View of AI and Supply Chain Risks

A CISO's View of AI and Supply Chain Risks
8:22

As a CISO, I’m increasingly concerned about the new wave of cyber risks—particularly the vulnerabilities lurking in third-party code, much of which is now being generated by AI. This challenge is further compounded by the widespread integration of large language models (LLMs) into software. I’m not alone in raising these concerns, and our latest CISO Perspectives Report captures and quantifies them perfectly.

According to our survey of CISOs and other security leaders, 68% of them are concerned about third-party software supply chain risk, while an identical 68% view the secure deployment of generative AI as a critical priority. At first glance, these appear to be two separate, monumental challenges. But the critical insight—and the urgent reality for every security leader—is that they are not separate at all. They are the same problem, converging into a new, more complex threat vector.

The genAI and LLMs that we are rushing to integrate into our products and workflows are the new frontier of the digital supply chain. As our State of LLM Security Report highlights, most genAI capabilities rely on third-party models, platforms, and data sets to power their AI capabilities. When your developers call an API from a major AI provider, they are integrating a third-party component into your stack—one that is far more complex and opaque than a traditional software library.

This isn't just another vendor relationship. The data from our LLM security research reveals that these new "intelligent" supply chain components are uniquely perilous. Our pentests of LLM applications show the highest proportion of serious vulnerabilities (32%) of any asset type we test. Worse, they have the lowest remediation rate for these serious issues, at just 21%, often because fixes depend on the third-party model provider.

The convergence of AI and the supply chain demands a fundamental evolution of our security strategy. We can no longer treat vendor risk and AI security as separate disciplines. We must scrutinize these intelligent components with a new level of rigor, moving beyond a defensive posture to an offensive, proactive mindset that continuously tests and validates every link in our increasingly intelligent supply chain.

Digital Supply Chain Risk: The Number One CISO Concern

The CISO Perspectives Report makes it clear that the digital supply chain is our primary battleground, and by at least one measure, the biggest CISO worry. It’s not a theoretical threat; 73% of executives reported receiving at least one notification of a software supply chain vulnerability in the past year. This has rightly put security leaders on high alert.

In response, CISOs are moving from trust to verification. Eighty-three percent of organizations now face formal requirements to demonstrate the security of their vendor ecosystems. The methods for doing so are becoming more robust, with reviewing pentesting results from software vendors (48%) and reviewing their vulnerability remediation policy and SLA (46%) topping the list of security measures.

As CISO of Cobalt, I see this in my own work, and in conversations with my peers. The corporate perimeter is no longer the network firewall; it's our vendor list. Every third-party tool is a potential entry point. The question now is: how do we handle the newest, most complex, and potentially most dangerous vendor on that list—our AI provider?

GenAI: The Supply Chain Risk Multiplier

Our CISO survey shows leaders are acutely aware of genAI's dual nature. While it promises innovation, third-party software remains the top concern (66%), and AI-enabled features and LLMs are now the second-highest concern at 46%. Leaders worry about how attackers can leverage AI to evade defenses and how AI-assisted development tools may introduce hidden flaws and vulnerabilities.

These concerns are not unfounded. Our LLM-focused research provides the hard data to back up these fears. The "black box" nature of third-party AI models creates a troubling remediation gap. The rapid 19-day average remediation time for the few resolved serious LLM issues suggests teams are only fixing the easy, internally-controlled bugs, while complex vulnerabilities dependent on third-party model providers persist. This confirms that the supply chain dependency is a major blocker to securing AI.

This is the heart of the issue. We are embedding third-party AI services into our core operations, and our own research proves these components are introducing serious vulnerabilities that organizations are struggling to fix. It is the classic supply chain security problem, amplified by the speed, complexity, and opacity of AI.

From Worry to Action: Pentesting as the Linchpin

Faced with these converging threats, CISOs are indicating an overwhelming strategic shift toward proactive defense. A remarkable 88% of security leaders now consider pentesting a vital component of their overall security strategy.

This is being applied directly to the supply chain, with 58% now requiring third-party pentest reports to validate security for customers, and 49% planning to use pentesting specifically to identify software supply chain vulnerabilities in the next 24 months.

The value of this human-led, offensive approach is most evident when securing AI. Our CISO survey shows leaders are most worried about data-centric threats like sensitive information disclosure. However, our LLM pentesting finds that the most common gateways for these attacks are often classic vulnerabilities. SQL Injection was the number one finding in our LLM pentests at 19.4%, a traditional flaw that can be exploited through a novel AI interface. This proves that only expert, human-led pentesting can uncover these nuanced, multi-stage attack paths that automated scanners will miss. It provides the crucial context needed to make informed risk-acceptance decisions.

I believe you cannot secure what you don't understand. Pentesting moves the conversation from abstract fears about AI to concrete, actionable intelligence about exploitable vulnerabilities. This is how we take back control.

Recommendations for Security Leaders

The CISO Perspectives Report concludes with forward-looking advice. As security leaders, I urge you to build on these recommendations with a clear action plan to address the converged risk of AI and the supply chain.

  1. Redefine vendor risk for the AI era: Mandate transparency from your vendors about their use of GenAI models and data practices. Update your third-party risk assessments with specific questions about genAI. Treat your AI providers as your most critical vendors.
  2. Integrate security into AI development: Embed security into your AI life cycle from day one. This requires a mandated, deep collaboration between your security and AI/ML development teams to ensure security is foundational, not an afterthought.
  3. Adopt a programmatic, offensive security approach: This is the most critical step. Move beyond ad-hoc tests to a structured program that secures your applications, infrastructure, and cloud environments. Make human-led pentesting a non-negotiable part of both procurement and development. Use frequent, narrow-scope pentests to secure the software you build, and rigorous assessments to validate the software you buy.

Innovating Fearlessly in the Age of Intelligent Supply Chains

By treating AI as the newest, most critical component of our supply chain and applying a rigorous, proactive, and offensive security mindset, we can manage the associated risks.

Our job as security leaders is to enable the business to innovate safely. In 2025, that means leading the charge to understand and mitigate these converged threats, turning security into a competitive advantage.

The data tells a compelling story. I invite you to download the full CISO Perspectives Report to explore the data and insights for yourself. Let's continue this conversation and work together to build a more resilient future.

CISO Perspectives Report 2025

Back to Blog
About Andrew Obadiaru
Andrew Obadiaru is the Chief Information Security Officer at Cobalt. In this role Andrew is responsible for maintaining the confidentiality, integrity, and availability of Cobalt's systems and data. Prior to joining Cobalt, Andrew was the Head of Information Security for BBVA USA Corporate Investment banking, where he oversaw the creation and execution of Cyber Security Strategy. Andrew has 20+ years in the security and technology space, with a history of managing and mitigating risk across changing technologies, software, and diverse platforms. More By Andrew Obadiaru