Universities and hospitals across the Netherlands have recently been hit by a wave of Distributed Denial of Service (DDoS) attacks. These incidents are a stark reminder that DDoS attacks, like most cyber risks, are not a fleeting issue but an enduring challenge in the digital landscape. Fortunately, the cybersecurity industry has long acknowledged this reality and offers a range of sophisticated DDoS mitigation services designed to absorb, deflect, or otherwise neutralize such attacks, ensuring continuity of operations even in the face of substantial threats.

This raises a thought-provoking question: when organizations leverage these advanced mitigation services, does a DDoS attack become merely a nuisance rather than a critical threat? The answer depends largely on the adversary's attack method. Fundamentally, DDoS attacks fall into two primary categories: volumetric attacks and connection-oriented attacks.

  1. Volumetric Attacks:
    These attacks are designed to overwhelm network resources by flooding them with massive volumes of traffic. An NTP (Network Time Protocol) amplification attack, in which small spoofed queries trigger far larger responses directed at the victim, is a classic example and one of the most common volumetric threats (a back-of-the-envelope sizing sketch follows this list).
  2. Connection-Oriented Attacks:
    These focus on exhausting system resources or server capacity by initiating numerous seemingly legitimate connections, eventually denying service to legitimate users.
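
To make the volumetric threat concrete, here is a minimal sizing sketch. The amplification factor and the attacker's uplink are illustrative assumptions; published figures for the legacy NTP monlist command run into the hundreds of bytes returned per byte sent.

```python
# Back-of-the-envelope sizing for an NTP amplification attack.
# Both constants are illustrative assumptions, not measured values.

AMPLIFICATION_FACTOR = 200   # assumed bytes reflected per byte of spoofed query
ATTACKER_UPLINK_MBPS = 100   # assumed bandwidth the attacker controls

reflected_mbps = ATTACKER_UPLINK_MBPS * AMPLIFICATION_FACTOR
print(f"Attacker sends {ATTACKER_UPLINK_MBPS} Mbps of spoofed queries")
print(f"Victim receives roughly {reflected_mbps / 1000:.0f} Gbps of replies")
# -> Victim receives roughly 20 Gbps of replies
```

The asymmetry is the whole point: a modest attacker budget is multiplied by the reflectors, which is why this class of attack is both common and well understood by mitigation providers.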

When dealing with well-known, well-defined DDoS attacks such as NTP amplification, the situation tends to be manageable. Anti-DDoS service providers, drawing on extensive databases and threat intelligence, can quickly recognize these attack signatures and act to neutralize the threat. They use techniques such as traffic filtering, rate limiting, and signature-based detection to strip malicious traffic streams efficiently. This effectiveness stems from their ability to anticipate and prepare for familiar attack vectors, having likely encountered them many times before; their experience lets them deploy preconfigured mitigations tailored to such threats, minimizing the impact on targeted organizations.
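
As a minimal illustration of signature-based filtering, the sketch below flags packets matching the classic NTP amplification signature: UDP traffic from source port 123 carrying unusually large payloads. It uses Scapy; the capture filename and the size threshold are illustrative assumptions.

```python
# A minimal signature-based filter for NTP amplification traffic.
# Assumes a capture file named "attack.pcap" (illustrative); legitimate
# NTP responses are small (~48 bytes), while monlist amplification
# replies are far larger, so payload size from udp/123 is the signature.
from scapy.all import rdpcap, IP, UDP

SUSPECT_THRESHOLD = 468  # bytes of UDP payload; illustrative cutoff

packets = rdpcap("attack.pcap")
suspects = [
    pkt for pkt in packets
    if IP in pkt and UDP in pkt
    and pkt[UDP].sport == 123                       # NTP source port
    and len(pkt[UDP].payload) >= SUSPECT_THRESHOLD  # oversized reply
]

print(f"{len(suspects)} of {len(packets)} packets match the signature")
for pkt in suspects[:5]:  # small sample for the analyst
    print(pkt[IP].src, "->", pkt[IP].dst, len(pkt[UDP].payload), "bytes")
```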

However, it is essential to note that not all DDoS attacks fall into the "known and predictable" category. More sophisticated or novel attack methods, especially within the connection-oriented category, may require deeper analysis and a more dynamic response strategy. This underscores the importance of not only relying on the capabilities of mitigation providers but also fostering a robust internal incident response capability that can adapt to the unique characteristics of an ongoing attack.

This raises a critical question: How can organizations effectively prepare for more advanced and sophisticated DDoS attacks? Unlike traditional, well-known attacks that mitigation providers can easily detect and address, advanced DDoS attacks often involve complex strategies, multi-vector approaches, and evolving tactics.

Drawing on my professional experience in the field, I have identified four essential steps organizations must take to enhance their preparedness and resilience against these advanced threats. These steps ensure a comprehensive approach, combining technical, operational, and strategic measures to safeguard critical systems and maintain business continuity. By following these best practices, organizations can move beyond reactive measures and adopt a proactive stance, effectively reducing the potential impact of even the most sophisticated DDoS campaigns.


1 – Design to scale up/down the web application or service itself

Modern web applications and services derive their power and value from leveraging cutting-edge technologies and providing seamless user experiences. These advancements, however, come with inherent dependencies on network bandwidth, computing resources, and other infrastructure components that can become scarce during a DDoS attack. While it is common practice to design infrastructure with elasticity—scaling up and down in response to workload demands—the question remains: Is that flexibility truly sufficient when faced with a high-impact DDoS attack?

During a DDoS attack, where resource consumption is artificially and maliciously driven to unsustainable levels, an effective strategy is to implement feature scalability. This involves temporarily scaling down or disabling non-essential features of the web application or service. By adopting this approach, organizations can:

  1. Preserve Core Functionality: Ensuring that critical features and services remain operational and accessible to users despite constrained resources.
  2. Enhance User Experience: Minimizing disruption for end-users, as they can continue interacting with the application or service without being significantly impacted by the attack.
  3. Optimize Resource Allocation: Redirecting limited resources to support essential operations rather than overloading the system with secondary functions.
  4. Demonstrate Business Resilience: Maintaining service continuity under adverse conditions reinforces trust and reliability among customers and stakeholders.

Implementing feature scalability as part of a DDoS response strategy is not merely a technical decision but a sound business practice. By enabling applications to operate in a “minimal viable mode” during attacks, organizations can safeguard their reputation, reduce downtime, and maintain a degree of operational normalcy. Ultimately, the ability to dynamically adapt your service offerings during a crisis is a hallmark of effective business continuity planning. It demonstrates both foresight and resilience in the face of ever-evolving threats.
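
As a sketch of what feature scalability could look like in code, the snippet below models a hypothetical feature-flag registry that sheds non-essential features when the DDoS runbook declares an attack. The feature names and the trigger are invented for illustration; a real deployment would wire this into a feature-flag service or load-shedding middleware.

```python
# A hypothetical feature-flag registry for "minimal viable mode".
# Feature names and the attack trigger are illustrative inventions.

FEATURES = {
    "checkout":           {"essential": True,  "enabled": True},
    "search":             {"essential": True,  "enabled": True},
    "recommendations":    {"essential": False, "enabled": True},
    "live_chat":          {"essential": False, "enabled": True},
    "analytics_tracking": {"essential": False, "enabled": True},
}

def enter_minimal_viable_mode(features: dict) -> list[str]:
    """Disable every non-essential feature and return what was shed."""
    shed = []
    for name, flags in features.items():
        if not flags["essential"] and flags["enabled"]:
            flags["enabled"] = False
            shed.append(name)
    return shed

# Triggered by the DDoS runbook, e.g. when mitigation is engaged.
disabled = enter_minimal_viable_mode(FEATURES)
print("Shed features:", ", ".join(disabled))
# -> Shed features: recommendations, live_chat, analytics_tracking
```

The design choice worth noting is that essentiality is decided ahead of time, so the decision under pressure is a single switch rather than a debate mid-incident.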


2 – Know the traffic profile of the application/service

When responding to a DDoS attack, the top priority is clear: mitigating the attack to restore normalcy and protect operational continuity. However, successful mitigation hinges on one critical prerequisite—knowing what “good” looks like. Without a thorough understanding of the application’s normal traffic profile, distinguishing legitimate activity from malicious traffic becomes overwhelming and nearly impossible. This, however, is easier said than done. Establishing and maintaining an accurate traffic baseline presents several challenges, including:

  1. Frequent Updates to Application Functionality:
    Modern applications and services are often dynamic, evolving to meet user demands and incorporate new features. Each time functionality is added, modified, or removed, the baseline documentation must be updated to reflect these changes. This process is time-consuming but essential to ensure the baseline remains accurate.
  2. Limited Control Over External Users:
    When the application or service caters to external users, the complexity increases. External users operate on diverse hardware and software environments beyond the organization’s control. These variations introduce additional variability to traffic patterns, making it more challenging to establish a reliable baseline.
  3. Impact of Component Upgrades and Updates:
    Even if the application or service remains unchanged, the underlying infrastructure—such as servers, network equipment, and software components—is likely to be updated or upgraded over time. These changes can subtly alter the application or service’s traffic signature, necessitating continuous monitoring and adjustment of the baseline.

To address these challenges, organizations must adopt a proactive and structured approach:

  • Automate Baseline Monitoring: Invest in tools and technologies for real-time traffic analysis and anomaly detection (see the sketch after this list). These systems can adapt dynamically to changes in the application environment, reducing the manual burden on teams.
  • Integrate Change Management Processes: Ensure that a formal review and update of traffic baselines accompanies updates to application functionality or infrastructure components. Embedding this practice within change management protocols can streamline the process.
  • Collaborate with External Stakeholders: For applications serving external users, establish clear communication channels and guidelines to understand their environments and usage patterns better. Consider leveraging behavioral analytics to identify legitimate traffic from these diverse sources.
  • Conduct Regular Baseline Reviews: Schedule periodic evaluations of traffic baselines to ensure they remain aligned with the application or service’s current state. These reviews should be part of your broader DDoS readiness strategy.
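
To illustrate the simplest form of automated baseline monitoring, the sketch below keeps a rolling window of requests-per-minute counts and flags any minute that deviates more than three standard deviations from the window mean. The window size and threshold are illustrative; production systems would track richer dimensions (per endpoint, per geography) and use more robust statistics.

```python
# A minimal rolling-baseline detector for requests per minute.
# Window size and the 3-sigma threshold are illustrative choices.
from collections import deque
from statistics import mean, stdev

WINDOW_MINUTES = 60    # how much recent history defines "normal"
SIGMA_THRESHOLD = 3.0  # deviation beyond this many sigmas is anomalous

window = deque(maxlen=WINDOW_MINUTES)

def observe(rpm: int) -> bool:
    """Record one minute of traffic; return True if it deviates from baseline."""
    anomalous = False
    if len(window) >= 10:  # need some history before judging
        mu, sigma = mean(window), stdev(window)
        anomalous = sigma > 0 and abs(rpm - mu) > SIGMA_THRESHOLD * sigma
    window.append(rpm)
    return anomalous

# Simulated feed: thirty quiet minutes, then a sudden flood.
feed = [1000 + (i % 50) for i in range(30)] + [25000]
for minute, rpm in enumerate(feed):
    if observe(rpm):
        print(f"minute {minute}: {rpm} req/min deviates from the baseline")
# -> minute 30: 25000 req/min deviates from the baseline
```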

Ultimately, knowing what “good” looks like is not just a technical necessity but a cornerstone of effective DDoS mitigation planning. By prioritizing baseline accuracy and implementing measures to address its dynamic nature, organizations can enhance their resilience and response effectiveness in the face of evolving DDoS threats.


3 – Ability to capture network traffic

When facing a DDoS attack, one of the most critical questions to answer is, “What type of DDoS attack are we dealing with?” This seemingly simple question can often be challenging to resolve, even when you have a detailed understanding of your application’s or service’s standard traffic profile. Without the ability to capture and analyze network traffic effectively, identifying the attack type becomes significantly more complex—introducing operational and technical challenges that can hinder timely mitigation. Some of the challenges of capturing and analyzing network traffic are:

  1. Network Traffic Capture Is Not Always Feasible:
    • On-Premise Traffic: While capturing traffic on-premise might seem straightforward, practical issues often arise when using a SPAN (Switched Port Analyzer) port or similar mirroring method. For instance, capturing and processing traffic at high bandwidths (e.g., 1 Gbps or more) introduces resource constraints that can overwhelm analysis workstations. It is essential to recognize that a saturated 1 Gbps link generates approximately 7.5 GB of capture data per minute (the sizing sketch after this list works through the arithmetic), quickly creating massive volumes of information to process and analyze.
    • Cloud Provider Traffic: In contrast, capturing network traffic at a cloud provider adds an entirely new layer of difficulty. Most cloud environments do not offer native options for packet-level traffic capture at the same depth as on-premise infrastructure. This limitation can blind organizations to the underlying traffic patterns during an attack, making identification and mitigation significantly harder.
  2. Resource Constraints on Analysis Workstations:
    Even if you successfully capture traffic, the ability to analyze it depends heavily on the resources of the workstation or system used for inspection. Many standard workstations struggle to handle such high volumes of data efficiently, especially in real-time scenarios.
  3. Filtering Noise from Legitimate Traffic:
    Another challenge is the complexity of filtering legitimate (or “known good”) traffic from malicious activity. Depending on the tools used (e.g., PCAP viewers), this can become a labor-intensive and time-consuming task, particularly in environments with high traffic volumes or diverse user activity.
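
To put those volumes in perspective, here is a quick sizing calculation; the link speeds and capture duration are illustrative, and per-packet pcap header overhead is ignored.

```python
# Quick sizing of packet-capture volume at full line rate.
# Link speeds and duration are illustrative; pcap per-packet
# metadata overhead is ignored for simplicity.

CAPTURE_MINUTES = 10

for gbps in (1, 10, 40):
    bytes_per_second = gbps * 1e9 / 8           # line rate in bytes/s
    gb_per_minute = bytes_per_second * 60 / 1e9
    total_gb = gb_per_minute * CAPTURE_MINUTES
    print(f"{gbps:>2} Gbps: {gb_per_minute:>5.1f} GB/min, "
          f"{total_gb:>6.0f} GB over {CAPTURE_MINUTES} minutes")
# At 1 Gbps this reproduces the 7.5 GB/min figure cited above.
```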

To address these challenges and enhance response capabilities, organizations should adopt a strategic and resource-conscious approach:

  1. Invest in Scalable Packet Capture Solutions:
    Leverage high-performance packet capture tools capable of handling multi-gigabit traffic without dropping packets. Consider hardware-accelerated solutions or cloud-native monitoring tools designed to work seamlessly in hybrid environments.
  2. Utilize Cloud-Specific Monitoring Tools:
    Since packet-level traffic capture may not be feasible in cloud environments, invest in cloud provider-native monitoring and logging tools, such as flow logs, application-level logs, and DDoS detection services. These tools can provide valuable insights into traffic anomalies without requiring full packet captures.
  3. Automate Traffic Analysis:
    Implement automated tools that use machine learning or signature-based detection to analyze traffic in real-time. These tools can more efficiently distinguish malicious traffic patterns from legitimate traffic than manual processes.
  4. Enhance Workstation and Storage Capabilities:
    Ensure that workstations used for traffic analysis have sufficient processing power, memory, and storage capacity to handle large PCAP files. Supplement this with fast-access storage systems and advanced analytics software.
  5. Integrate Baseline Profiles into Filtering Tools:
    Build and maintain updated traffic baselines that can be directly integrated into filtering tools, making it easier to identify anomalies and eliminate known-good traffic during analysis (a minimal example follows this list).
  6. Adopt a Layered Monitoring Strategy:
    Use a combination of on-premise and cloud-based monitoring tools to view your traffic patterns comprehensively. A multi-layered approach reduces blind spots and enhances the accuracy of attack-type identification.
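
As a minimal example of folding a baseline into a filtering step, the sketch below strips traffic from known-good prefixes out of a capture before ranking the remaining top talkers. The prefix list, the capture filename, and the choice of Scapy are all illustrative assumptions.

```python
# Strip known-good prefixes from a capture, then rank remaining talkers.
# The prefix list and "attack.pcap" filename are illustrative.
from collections import Counter
from ipaddress import ip_address, ip_network
from scapy.all import rdpcap, IP

KNOWN_GOOD = [ip_network(p) for p in ("10.0.0.0/8", "192.0.2.0/24")]

def is_known_good(addr: str) -> bool:
    ip = ip_address(addr)
    return any(ip in net for net in KNOWN_GOOD)

talkers = Counter()
for pkt in rdpcap("attack.pcap"):
    if IP in pkt and not is_known_good(pkt[IP].src):
        talkers[pkt[IP].src] += 1

for src, count in talkers.most_common(10):  # top suspect sources
    print(f"{src:>15}  {count} packets")
```

In practice, the known-good list would come straight from the baseline work described in step 2, so the two capabilities reinforce each other.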

4 – Being ready to face a DDoS attack

The capability to capture and analyze network traffic, combined with a clear understanding of the application's or service's "good" traffic profile, is only part of the equation for effective DDoS mitigation. The next critical step is ensuring that your DDoS analysts are fully equipped to use these tools and interpret the data accurately. Without skilled personnel, even the most advanced tools and well-defined baselines can fail to deliver their intended value.

Empowering Analysts Through Training and Preparation

  1. Comprehensive Tool Training:
    Analysts must be thoroughly trained in the tools used for traffic capture and analysis. This includes:
    • Understanding tool functionality and features.
    • Knowing how to apply filters and run diagnostics efficiently.
    • Learning to correlate traffic patterns with known DDoS signatures and anomalies.
  2. Baseline Familiarity:
    Training must extend beyond tool usage to include deep familiarity with the application’s or service’s “good” traffic profile. Analysts need to:
    • Recognize normal traffic patterns, including peak usage times and legitimate fluctuations.
    • Identify key metrics and thresholds that indicate deviations from normal behavior.
    • Understand how legitimate traffic might change over time due to user behavior, seasonal trends, or business cycles.
  3. Scenario-Based Training and Simulations:
    DDoS attack scenarios can vary widely in scale, complexity, and tactics. To prepare analysts for these realities:
    • Conduct regular simulations and rehearsals replicating various DDoS attack scenarios, from volumetric floods to sophisticated application-layer attacks.
    • Incorporate real-world complexities, such as mixed legitimate and malicious traffic, to test analytical precision (a small log-generator sketch follows this list).
    • Evaluate analysts’ decision-making under pressure, ensuring they can quickly identify attack types and recommend mitigation steps.
  4. Documentation and Knowledge Sharing:
    Develop and maintain up-to-date documentation that consolidates:
    • Operational procedures for traffic capture and analysis tools.
    • Case studies of past DDoS incidents, including traffic patterns, mitigation strategies, and lessons learned.
    • A repository of “good” traffic baselines for different applications, services, or periods.
  5. Cross-Team Collaboration:
    Ensure that DDoS analysts collaborate with other stakeholders, such as network engineers, application developers, and cloud operations teams. This fosters a shared understanding of how infrastructure, application changes, and user behavior influence traffic profiles.
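
To support such rehearsals, the sketch below generates a synthetic request log that mixes steady baseline traffic with a flood from spoofed sources, keeping the labels aside as an answer key for scoring analysts. The rates, address ranges, and format are invented for illustration; real drills would replay captured traffic or use a purpose-built traffic generator.

```python
# Generate a labeled synthetic request log for analyst drills.
# Rates, address ranges, and format are illustrative inventions.
import random

random.seed(7)  # reproducible drill data

def synth_log(minutes: int = 20, attack_start: int = 10):
    """Return (minute, source_ip, label) records; labels are the answer key."""
    records = []
    for minute in range(minutes):
        # Steady legitimate traffic from a small, stable user population.
        for _ in range(random.randint(90, 110)):
            records.append((minute, f"198.51.100.{random.randint(1, 50)}", "legit"))
        # High-rate flood from widely spread spoofed sources.
        if minute >= attack_start:
            for _ in range(2000):
                src = f"{random.randint(1, 223)}.{random.randint(0, 255)}.0.1"
                records.append((minute, src, "attack"))
    return records

log = synth_log()
print(f"{len(log)} records generated; analysts see (minute, source), "
      "while labels stay with the exercise lead for scoring")
```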