Occasionally, when walking through the corridors where senior management resides, one might overhear conversations that include phrases like, “Where can we save money?” or “Cybersecurity is just a cost center—can we reduce these expenses?” On the surface, these are valid questions. After all, senior executives are tasked with ensuring the company’s profitability and keeping shareholders content. Cutting costs in areas that don’t directly generate revenue might seem like a logical step toward achieving those goals.

Yes, cybersecurity can involve significant financial outlays—substantial investment in personnel, technology, and training. However, what often goes unacknowledged is that the costs of a large-scale security breach are exponentially higher in terms of immediate financial loss and long-term damage to the company’s reputation, customer trust, and regulatory standing. A security incident could result in millions in penalties, legal fees, and lost business, while intangible costs, such as brand damage, are even harder to quantify.

Thus, senior management should ask not simply, “How can we reduce cybersecurity costs?” but rather, “What is the likelihood of an enterprise-wide security incident, and what would the consequences be?” Answering this question is far from straightforward. It requires a deep understanding of the company’s risk landscape, the evolving threat environment, and the potential vulnerabilities across all business operations. Cybersecurity isn’t just a cost center; it’s a vital element of business continuity and risk management.


Challenge 1 – Know your environment

A thorough understanding of the entire IT/OT/IoT landscape is essential to assess the likelihood of a security incident. This means having a detailed grasp of how the technological environment is structured, the specific technologies in use, the layout of the network architecture, the placement of security controls, and the physical and virtual locations of servers and data. For a small business, these questions are relatively easy to answer. With fewer assets and a simpler infrastructure, it’s easier to maintain a clear and updated view of the organization’s systems. However, the complexity skyrockets when dealing with a larger enterprise—especially those with over 10,000 employees or 10,000+ connected assets.

As the number of employees, devices, and systems increases, the challenge of maintaining visibility into the entire IT and OT environment grows significantly. With scale comes complexity. More devices, systems, applications, and networks introduce more potential points of failure or vulnerability. For instance, with the rise of the IoT, enterprises now face an influx of connected devices that might not even be managed under traditional IT oversight yet still present substantial security risks. A company’s attack surface expands exponentially as more assets come online, and managing these risks requires a granular and up-to-date understanding of the entire digital ecosystem.

Suppose you were to walk into your IT department and request a comprehensive, accurate diagram of the current network architecture that clearly outlines these elements. You might be surprised by what you find. Many organizations struggle to maintain an up-to-date, detailed map of their technology environment. Often, these diagrams are outdated, missing critical elements, or overly simplified. Systems evolve rapidly, new devices are added, applications change, and networks expand, but documentation frequently lags. The reality is that in a fast-moving organization, ensuring the accuracy of this information can be a monumental task.

Without an accurate, real-time view of your environment, how can you effectively gauge the likelihood of a security incident?

This makes the case for a more proactive and strategic approach to network visibility and asset management. Organizations must prioritize maintaining an accurate and dynamic view of their entire IT/OT/IoT environment. This might involve leveraging advanced monitoring tools, adopting automated asset discovery solutions, or investing in real-time network mapping technologies. Static documentation is insufficient in larger enterprises, where the digital landscape constantly shifts. Businesses need real-time insights into their infrastructure to manage risk and anticipate potential security incidents effectively. The question isn’t just about having a diagram but about whether that diagram is reflective of reality at any given moment. Without this, determining the likelihood of a security incident becomes an exercise in guesswork.
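The gap between documented and actual infrastructure can be made concrete with a simple comparison. The sketch below, in Python, checks a documented inventory against what discovery tooling actually observes and flags the drift in both directions; all hostnames and addresses are illustrative, and a real implementation would feed the "discovered" side from automated scanning or passive monitoring rather than a hard-coded set.

```python
# Sketch: flag drift between a documented asset inventory and what is
# actually observed on the network. All names and addresses below are
# illustrative placeholders, not from any real environment.

documented = {
    "10.0.1.10": "erp-db-01",
    "10.0.1.11": "web-frontend-01",
    "10.0.2.20": "plc-line-3",       # OT asset
}

# In practice this would come from an automated discovery tool;
# here it is hard-coded for the sake of the example.
discovered = {
    "10.0.1.10": "erp-db-01",
    "10.0.2.20": "plc-line-3",
    "10.0.3.99": "unknown-camera",   # IoT device nobody documented
}

def inventory_drift(documented, discovered):
    """Return assets present on only one side of the comparison."""
    undocumented = {ip: h for ip, h in discovered.items() if ip not in documented}
    unobserved = {ip: h for ip, h in documented.items() if ip not in discovered}
    return undocumented, unobserved

undocumented, unobserved = inventory_drift(documented, discovered)
print("Undocumented (shadow) assets:", undocumented)
print("Documented but not observed:", unobserved)
```

Even a toy comparison like this surfaces the two failure modes that matter: shadow assets that expand the attack surface unnoticed, and stale documentation describing systems that no longer exist.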


Challenge 2 – Know your cyber adversaries

Equally important to understanding your IT/OT/IoT environment is a deep awareness and understanding of your adversaries and the tactics, techniques, and procedures (TTPs) they employ. Cyber threats are not generic; they are often highly targeted and sophisticated, tailored to exploit specific vulnerabilities in an organization’s infrastructure. Understanding the specific threats that apply to your environment—whether from nation-state actors, cybercriminal groups, hacktivists, or insider threats—is crucial to building an effective defense. However, this task is far from simple.

The cyber threat landscape continually evolves, and adversaries are becoming more innovative and aggressive, driven by the potential for high financial rewards or strategic gains.

This is not a one-time exercise. Cyber adversaries are constantly evolving, learning from past failures and adapting to new defenses at an ever-increasing pace. They are organized, with highly skilled teams focused on identifying vulnerabilities, developing new attack vectors, and fine-tuning their methods to bypass even the most advanced security measures. These adversaries often operate like professional organizations, leveraging cutting-edge tools and strategies to compromise environments. Many of them work collaboratively, sharing information across underground networks to refine their attacks further, which makes the challenge of defending against them even more daunting.

To put this in perspective, over the last five years, more than 15,000 new vulnerabilities have been discovered annually. This staggering number highlights the sheer volume of potential weaknesses that could be exploited in any given environment. With such a high frequency of newly discovered vulnerabilities, it’s not a question of if but when your systems will be exposed to a vulnerability that adversaries can exploit. The larger your technological footprint—whether that involves IT infrastructure, operational technology, or Internet of Things devices—the greater the likelihood that one or more of these vulnerabilities will apply to your environment. This creates a constant game of cat and mouse, where defenders must work tirelessly to patch and protect, while attackers only need to find one weak link.

But it’s not just the number of vulnerabilities that matters; it’s also about the relevance and context of those vulnerabilities to your specific environment. Certain vulnerabilities may be critical for one company but irrelevant for another. This is why understanding your potential adversaries’ TTPs is so essential. For example, if your organization operates in the financial sector, you will likely face specific threats from financially motivated cybercriminals employing phishing campaigns, ransomware, or insider trading schemes. In contrast, critical infrastructure companies may be targeted by nation-state actors using advanced persistent threats (APTs) designed to sabotage or disrupt operations.

Moreover, while a vulnerability may be well-known, adversaries’ tactics to exploit it can shift rapidly. Threat actors frequently experiment with new delivery mechanisms, evade detection through novel approaches, and adapt to changes in security practices. Staying ahead of these shifts requires constant vigilance, intelligence gathering, and a dynamic approach to security operations. This underscores the importance of a continuously evolving defense strategy. Businesses must integrate threat intelligence into their cybersecurity frameworks, mapping known vulnerabilities to the adversaries most likely to exploit them.
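One way to operationalize that mapping is to cross-reference tracked vulnerabilities against both the techniques your threat intelligence attributes to likely adversaries and the components actually deployed in your environment. The sketch below assumes this structure; the CVE identifiers are deliberate placeholders, while the ATT&CK technique IDs (T1566 Phishing, T1133 External Remote Services, T0831 Manipulation of Control) are real but chosen purely for illustration.

```python
# Sketch: prioritize vulnerabilities by intersecting them with adversary
# techniques and the deployed technology stack. CVE numbers are
# placeholders; ATT&CK IDs are real but illustratively chosen.

vulnerabilities = [
    {"cve": "CVE-XXXX-0001", "component": "mail-gateway",  "techniques": ["T1566"]},  # phishing
    {"cve": "CVE-XXXX-0002", "component": "scada-hmi",     "techniques": ["T0831"]},  # ICS manipulation
    {"cve": "CVE-XXXX-0003", "component": "vpn-appliance", "techniques": ["T1133"]},  # remote services
]

# Techniques our threat intelligence attributes to our likely adversaries.
adversary_techniques = {"T1566", "T1133"}

# Components actually present in our environment.
deployed = {"mail-gateway", "vpn-appliance"}

def prioritize(vulns, adversary_techniques, deployed):
    """Keep only vulnerabilities that are both deployed here and in
    scope for the adversaries we expect to face."""
    return [
        v["cve"] for v in vulns
        if v["component"] in deployed
        and adversary_techniques.intersection(v["techniques"])
    ]

print(prioritize(vulnerabilities, adversary_techniques, deployed))
```

The point of the filter is the one made above: out of 15,000+ new vulnerabilities a year, only the intersection of "exists in my stack" and "my adversaries exploit it" deserves immediate attention.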

Security teams must also adopt a mindset of constant adaptation, updating defenses as adversaries update their attack methods.


Challenge 3 – Life cycle management

Technology inherently follows a specific life cycle, influenced by factors such as usage, technological advancements, and financial considerations. The typical life cycle for IT-based technology ranges from 4 to 6 years, while OT can remain in place for over 20 years. From an investment perspective, choosing technology with a longer life cycle can seem like a sound strategy. It allows organizations to amortize the cost over a more extended period, reduce the frequency of disruptive replacements, and potentially gain more return on investment (ROI). However, adversaries are keenly aware of this and understand how to exploit the vulnerabilities that arise from prolonged use of the same technology.

The longer a technology is used, the more familiar it becomes to cyber adversaries. They understand that organizations with long life cycles tend to rely on the same systems for extended periods, often with limited updates and changes. This creates an attractive target. Over time, attackers can study the technology, reverse-engineer its components, and uncover vulnerabilities the original developers or security teams might have missed. The more time adversaries invest in understanding how a system works, the more adept they become at identifying and exploiting its weaknesses.

Moreover, as technology ages, it becomes increasingly difficult to maintain up-to-date security measures. Regular patches and updates may help mitigate vulnerabilities for IT-based systems, but even then, the protection is only as good as the system’s architecture allows. As the years go by, new attacks emerge—ransomware, APTs, and zero-day exploits—that older technologies may not be designed to withstand. The challenge is even greater for OT systems, which have much longer life cycles and are often designed with reliability rather than security as the priority. These systems were often built in a pre-cybersecurity era and do not include basic protections such as encryption or authentication mechanisms, leaving them especially vulnerable to exploitation.

Adversaries know that replacing outdated technology is not always an easy or quick solution for organizations. There are several reasons for this. First, replacing large-scale systems, particularly in OT environments, can be prohibitively expensive. Upgrading infrastructure may require significant capital investment, planning, and downtime, which many companies are hesitant to undertake, particularly in industries where operational continuity is critical, such as manufacturing, energy, and healthcare. Additionally, certain legacy systems are deeply integrated into business operations, making them difficult to replace without causing widespread disruptions. For example, OT systems controlling critical infrastructure or industrial processes may be indispensable and lack accessible alternatives.

Adversaries take advantage of this reluctance to replace or upgrade aging technology. They know that even if vulnerabilities are discovered, organizations may delay addressing them due to the complexities of replacing or retrofitting existing systems. This gives cybercriminals ample opportunity to exploit these weaknesses. They may dedicate resources to developing customized attack techniques specifically tailored to these older technologies, knowing that organizations may be unable to make timely upgrades or deploy comprehensive security measures.

Furthermore, adversaries understand the trade-offs companies face between operational efficiency and security. Organizations that prioritize longevity in their technology investments often do so at the expense of cybersecurity preparedness. When companies opt for long life cycles to maximize ROI, they inadvertently allow adversaries to exploit aging systems. Over time, these systems may become a veritable “playground” for cybercriminals who have honed their skills on outdated and poorly defended technology.

In industries where OT technology is meant to last decades, the pace of technological change in cybersecurity becomes a critical factor. Even if the OT system was considered secure at the time of its implementation, it becomes less so over time as cyber threats evolve and outpace the static defenses built into older systems. This gap between technological longevity and cybersecurity evolution creates a growing risk. What might have once been a secure system can become the weakest link in an organization’s security posture as adversaries leverage more advanced and evolving tactics.

All of these factors—extended life cycles, adversary knowledge, and the complexity of replacing outdated technology—significantly influence the likelihood of a security incident. As systems age, organizations must contend with the increasing probability that they will be targeted by adversaries who have spent years studying and learning to break into these environments. The longer an organization waits to address aging infrastructure, the greater the risk that a security breach will occur, and the more damaging that breach could be in terms of financial loss, operational disruption, and reputational harm.

Therefore, while selecting technology with a long life cycle may make sense from an investment standpoint, it must be balanced with a proactive cybersecurity strategy. This includes implementing compensating controls, regular assessments, retrofitting older systems with modern security measures, and staying vigilant against emerging threats. Simply relying on technology’s durability is not enough. Organizations must actively manage the increasing risks of prolonged use, ensuring that their security posture evolves alongside the threats they face.
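A first step toward the "regular assessments" mentioned above can be as simple as checking asset age against expected life cycles. The sketch below uses the rough windows from the text (4 to 6 years for IT, 20+ for OT) to flag assets that have outlived their expected life span and therefore warrant compensating controls or replacement planning; the asset names and ages are hypothetical.

```python
# Sketch: flag assets that have exceeded their expected life cycle,
# using the rough ranges from the text (up to 6 years for IT, up to
# 20 years for OT). Asset names and ages are hypothetical.

EXPECTED_LIFECYCLE_YEARS = {"IT": 6, "OT": 20}

assets = [
    {"name": "core-switch", "type": "IT", "age_years": 7},
    {"name": "hr-laptops",  "type": "IT", "age_years": 3},
    {"name": "turbine-plc", "type": "OT", "age_years": 22},
]

def lifecycle_flags(assets):
    """Return names of assets older than their expected life cycle."""
    return [
        a["name"] for a in assets
        if a["age_years"] > EXPECTED_LIFECYCLE_YEARS[a["type"]]
    ]

print(lifecycle_flags(assets))
```

In a real program, age would also be weighted by vendor support status and exposure, but even this crude threshold check turns "our technology is aging" from a vague worry into a concrete list to act on.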


Challenge 4 – Make, buy, or ally

At some point, every senior manager will face the critical question: “Should we handle it in-house, or should we outsource it?” This question becomes even more relevant and pressing with the current war on talent, particularly in specialized fields like cybersecurity and IT. Companies grapple with a shortage of skilled professionals, driving many to consider outsourcing as a potential solution to fill these talent gaps. But, as attractive as outsourcing may seem in the short term, it raises a fundamental question about security: Does outsourcing increase or decrease the likelihood of a security incident?

While outsourcing can provide immediate access to expertise and alleviate some operational burdens, it also comes with significant risks. I think outsourcing, especially when poorly managed, actually increases the likelihood of a security incident. One primary reason for this is that outsourcing inherently expands the attack surface. When you involve a third-party provider, you are extending your organization’s digital footprint and handing over critical aspects of your infrastructure or data to an external entity. This means that part of your attack surface—essentially the sum of all the points where an unauthorized user could attempt to enter or extract data—is outside your direct control. In other words, the more external parties involved, the more potential vulnerabilities exist in your security architecture.

Moreover, outsourcing often comes with the added challenge of relinquishing control. When you outsource to a third-party provider, you rely on them to maintain the same standards of security that you would expect from your internal team. However, this is not always the case. Some service providers may focus more on optimizing their profits than on providing robust, long-term security solutions. When the outsourcing provider feels that the client is not actively monitoring their efforts or holding them accountable, they may cut corners by delaying updates, skimping on security measures, or deprioritizing certain tasks. This creates an environment where security gaps can emerge over time, and by the time these gaps are discovered, it may be too late to prevent an incident.

Another problem with outsourcing is the potential for misalignment in policies and objectives. When you outsource, you often have to conform to the provider’s way of working, their policies, and their operational processes. This can lead to mismatches in how security is prioritized or implemented. The provider may have different views on what constitutes an acceptable risk or may not be as responsive to emerging threats as your internal team would be. Ultimately, you lose some degree of control over critical security decisions, increasing the chance of a breach.

This is why co-sourcing can be a more effective strategy in many cases. Co-sourcing involves working alongside a partner who complements your existing team and technology rather than fully outsourcing control. The key distinction between co-sourcing and outsourcing is that, with co-sourcing, you stay in control. You dictate the security policies, risk management strategies, and operational procedures, while the co-sourcing partner provides the expertise and support to implement them effectively. This gives you the best of both worlds: access to specialized talent and resources without sacrificing control over your security framework.

In a co-sourcing arrangement, you and your partner work collaboratively, ensuring that your security posture remains aligned with your organization’s goals and that there is accountability on both sides. The partner supports your existing technology stack, helping you optimize and secure it, but the policy and decision-making authority remains with you. This level of control helps mitigate the risks associated with outsourcing because you are not handing over the keys to your kingdom. Instead, you bring expertise while overseeing your systems, processes, and security measures.

The benefits of co-sourcing go beyond just control. By having a trusted partner with specific knowledge of your chosen technology stack, you can ensure that security solutions are tailored to your unique environment. The co-sourcing partner is there to augment your capabilities, not replace them, which means they are more likely to provide customized solutions that fit your specific needs, as opposed to the one-size-fits-all approach some outsourcing providers may take.

Additionally, with a co-sourcing strategy, there is typically better communication and integration between the external partner and your internal team. This fosters a more robust relationship where both parties are invested in the security program’s success. The co-sourcing partner becomes an extension of your own security team, working together to detect and address vulnerabilities before they can be exploited. This collaborative approach can lead to faster response times, more proactive threat management, and a greater likelihood of preventing security incidents altogether.


Conclusion

Answering the question, “What is the likelihood of an enterprise-wide security incident?” is anything but simple. Numerous unknown variables are at play—ranging from the constantly evolving threat landscape to the unique vulnerabilities within an organization’s infrastructure. Factors such as the sophistication of potential adversaries, the complexity of the IT/OT/IoT environment, the effectiveness of security measures, and even employee behavior all contribute to this uncertainty. Predicting with precision how likely an attack is requires an in-depth understanding of internal and external factors, as well as recognition that these elements are constantly shifting.

However, despite the inherent complexity, answering this question with a constructive and comprehensive approach is possible. It will take serious effort but is achievable through a structured risk assessment process. This involves evaluating the current security posture, identifying critical assets, mapping potential threat vectors, and incorporating threat intelligence data to understand what adversaries are most likely to target and how they might do it. In addition, it requires ongoing monitoring and regular reassessments as the threat landscape changes rapidly. Implementing frameworks like MITRE ATT&CK or NIST’s Cybersecurity Framework can help organizations assess and quantify the likelihood of a security breach more methodically. While you may never be able to eliminate all uncertainties, this approach allows for a more informed risk estimate.
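At its simplest, the structured risk assessment described above reduces each scenario to an estimated likelihood and impact and ranks the products. The sketch below shows that likelihood-times-impact scoring on a 1–5 scale; the scenarios and scores are illustrative placeholders, not recommendations, and a mature program would replace the point estimates with ranges informed by threat intelligence.

```python
# Sketch: minimal likelihood-times-impact risk ranking, in the spirit
# of a structured risk assessment. Scenarios and 1-5 scores below are
# illustrative placeholders only.

scenarios = [
    {"name": "ransomware on IT estate", "likelihood": 4, "impact": 5},
    {"name": "OT process disruption",   "likelihood": 2, "impact": 5},
    {"name": "phishing-led data theft", "likelihood": 5, "impact": 3},
]

def rank_risks(scenarios):
    """Score each scenario as likelihood * impact, highest first."""
    return sorted(
        ((s["name"], s["likelihood"] * s["impact"]) for s in scenarios),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, score in rank_risks(scenarios):
    print(f"{score:2d}  {name}")
```

Crude as it is, this kind of ranking gives senior management exactly what the opening question demands: a defensible, comparable estimate of likelihood and consequence per scenario, rather than a single unanswerable number for the whole enterprise.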

On the other hand, answering the question, “What would be the consequences of an enterprise-wide security incident?” is significantly easier. In most scenarios, the immediate and visible impact is a company-wide shutdown. Critical business operations halt, and IT teams are thrown into emergency response mode, frantically working to restore essential services and contain the damage. Whether it’s a ransomware attack, a data breach, or a distributed denial-of-service (DDoS) incident, the consequences typically include significant operational disruption, financial losses, potential legal and regulatory penalties, and long-term damage to the company’s reputation.

In such situations, the organization often experiences a ripple effect: business continuity is compromised, customers are affected, supply chains may be disrupted, and there’s usually a loss of trust from stakeholders. The recovery process can take weeks or even months for large enterprises, with long-lasting financial and reputational impacts. Unlike the ambiguity in predicting the likelihood of an incident, the consequences are often predictable and severe, making it crucial for organizations to invest in proactive measures to mitigate these risks in the first place.