3 Overlooked Cybersecurity Breaches

Here are three of the worst breaches of 2022, the attacker tactics and techniques behind them, and the security controls that can provide effective enterprise protection against them.

#1: 2 RaaS Attacks in 13 Months

Ransomware as a service (RaaS) is a type of attack in which the ransomware software and infrastructure are leased out to attackers. These services can be purchased on the dark web from other threat actors and ransomware gangs. Common purchasing plans include buying the entire tool, using the existing infrastructure while paying per infection, or letting other attackers perform the service while sharing revenue with them.

In this attack, the threat actor was one of the most prevalent ransomware groups, specializing in access via third parties, while the targeted company was a medium-sized retailer with dozens of sites in the United States.

The threat actors used ransomware as a service to breach the victim’s network. They were able to exploit third-party credentials to gain initial access, progress laterally, and ransom the company, all within mere minutes.

The swiftness of this attack was unusual. In most RaaS cases, attackers stay in the network for weeks or months before demanding ransom. What is particularly interesting about this attack is that the company was ransomed in minutes, with no need for discovery or weeks of lateral movement.

A log investigation revealed that the attackers targeted servers that no longer existed on the network. As it turns out, the victim had been breached and ransomed 13 months before this second ransomware attack. The first attacker group then monetized that earlier attack not only through the ransom they obtained, but also by selling the company’s network information to the second ransomware group.

In the 13 months between the two attacks, the victim changed its network and removed servers, but the new attackers were not aware of these architectural modifications. The scripts they developed were designed for the previous network map. This also explains how they were able to attack so quickly – they had plenty of information about the network. The main lesson here is that ransomware attacks can be repeated by different groups, especially if the victim pays well.

“RaaS attacks such as this one are a good example of how full visibility allows for early alerting. A global, converged, cloud-native SASE platform that supports all edges, like Cato Networks, provides complete network visibility into network events that are invisible to other providers or may go under the radar as benign events. Being able to fully contextualize the events allows for early detection and remediation,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

#2: The Critical Infrastructure Attack on Radiation Alert Networks

Attacks on critical infrastructure are becoming more common and more dangerous. Breaches of water supply plants, sewage systems, and other such infrastructure could put millions of residents at risk of a humanitarian crisis. These infrastructures are also becoming more vulnerable, and OSINT attack surface management tools like Shodan and Censys make such vulnerabilities easy to find.
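
As an illustration of how easy this discovery has become, the Shodan command-line client can enumerate exposed industrial endpoints in seconds. The one-liner below is a sketch, not from the original article; it assumes the Shodan CLI is installed and initialized with an API key, and counts hosts answering on TCP 502, the Modbus port common in industrial control systems:

# Sketch: count Internet-exposed hosts listening on the Modbus ICS port
shodan count port:502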

In 2021, two hackers were suspected of targeting radiation alert networks. Their attack relied on two insiders who worked for a third party. These insiders disabled the radiation alert systems, significantly debilitating their ability to monitor radiation levels. The attackers were then able to delete critical software and disable radiation gauges (which are part of the infrastructure itself).

“Unfortunately, scanning for vulnerable systems in critical infrastructure is easier than ever. While many such organizations have multiple layers of security, they are still using point solutions to try and defend their infrastructure rather than one system that can look holistically at the full attack lifecycle. Breaches are never just a phishing problem, or a credentials problem, or a vulnerable system problem – they are always a combination of multiple compromises performed by the threat actor,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

#3: The Three-Step Ransomware Attack That Started with Phishing

The third attack is also a ransomware attack. This time, it consisted of three steps:

1. Infiltration – The attacker was able to gain access to the network through a phishing attack. The victim clicked on a link that generated a connection to an external site, which resulted in the download of the payload.

2. Network activity – In the second phase, the attackers moved laterally through the network for two weeks. During this time, they collected admin passwords and used in-memory, fileless malware. Then, on New Year’s Eve, they performed the encryption. This date was chosen because it was (correctly) assumed the security team would be off on vacation.

3. Exfiltration – Finally, the attackers exfiltrated the data out of the network.

In addition to these three main steps, additional sub-techniques were employed during the attack, and the victim’s point security solutions were unable to block it.

“A multiple choke point approach, one that looks horizontally (so to speak) at the attack rather than as a set of vertical, disjointed issues, is the way to enhance detection, mitigation and prevention of such threats. Contrary to popular belief, the attacker needs to be right many times, and the defenders only need to be right once. The underlying technologies to implement a multiple choke point approach are full network visibility via a cloud-native backbone, and a single-pass security stack that’s based on ZTNA,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

How Do Security Point Solutions Stack Up?

It is common for security professionals to succumb to the “single point of failure” fallacy. However, cyber attacks are sophisticated events that rarely involve just one tactic or technique as the cause of the breach. Therefore, an all-encompassing outlook is required to successfully mitigate them. Security point solutions address single points of failure: these tools can identify risks, but they will not connect the dots, which can, and has, led to breaches.

Here’s What to Watch Out for in the Coming Months

Based on ongoing security research, the Cato Networks Security Team has identified two additional vulnerabilities and exploit attempts that they recommend including in your upcoming security plans:

1. Log4j

While Log4j made its debut as early as December of 2021, the noise it’s making hasn’t died down. Log4j is still being used by attackers to exploit systems, as not all organizations have been able to patch their Log4j vulnerabilities or to detect Log4j attacks through what is known as “virtual patching.” The team recommends prioritizing Log4j mitigation.
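
If you are still triaging exposure, a quick log sweep can help. The following one-liner is a minimal sketch, assuming nginx access logs under /var/log/nginx/ (adjust the path for your web server); it flags common Log4Shell-style JNDI lookup strings, though obfuscated variants (e.g., nested ${lower:...} lookups) require additional patterns:

# Sketch: flag Log4Shell-style probe strings such as ${jndi:ldap://...}
grep -riE '\$\{jndi:(ldap|ldaps|rmi|dns|iiop|http)' /var/log/nginx/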

2. Misconfigured Firewalls and VPNs

Security solutions like firewalls and VPNs have become access points for attackers. Patching them has become increasingly difficult, especially in the era of architecture cloudification and remote work. It is recommended to pay close attention to these components, as they are increasingly vulnerable.

How to Minimize Your Attack Surface and Gain Visibility into the Network

To reduce the attack surface, security professionals need visibility into their networks. Visibility relies on three pillars:

  • Actionable information – that can be used to mitigate attacks
  • Reliable information – that minimizes the number of false positives
  • Timely information – to ensure mitigation happens before the attack has an impact

Once an organization has complete visibility into the activity on its network, it can contextualize the data; decide whether the observed activity should be allowed, denied, monitored, restricted, or handled by any other action; and then enforce this decision. All these elements must be applied to every entity, be it a user, device, or cloud app, at all times and everywhere. That is what SASE is all about.

Source:
https://thehackernews.com/2023/02/3-overlooked-cybersecurity-breaches.html

VMware Security Solutions Advisories VMSA-2021-0002

Advisory ID: VMSA-2021-0002
CVSSv3 Range: 5.3-9.8
Issue Date: 2021-02-23
Updated On: 2021-02-23 (Initial Advisory)
CVE(s): CVE-2021-21972, CVE-2021-21973, CVE-2021-21974
Synopsis: VMware ESXi and vCenter Server updates address multiple security vulnerabilities (CVE-2021-21972, CVE-2021-21973, CVE-2021-21974)

1. Impacted Products
  • VMware ESXi
  • VMware vCenter Server (vCenter Server)
  • VMware Cloud Foundation (Cloud Foundation)
2. Introduction

Multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) were privately reported to VMware. Updates are available to remediate these vulnerabilities in affected VMware products.

3a. VMware vCenter Server updates address remote code execution vulnerability in the vSphere Client (CVE-2021-21972)

Description

The vSphere Client (HTML5) contains a remote code execution vulnerability in a vCenter Server plugin. VMware has evaluated the severity of this issue to be in the Critical severity range with a maximum CVSSv3 base score of 9.8.

Known Attack Vectors

A malicious actor with network access to port 443 may exploit this issue to execute commands with unrestricted privileges on the underlying operating system that hosts vCenter Server. 
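
As a quick, non-intrusive check, public analyses of CVE-2021-21972 describe the vulnerable vROPs plugin endpoint as reachable without authentication on affected builds. The probe below is a sketch based on those write-ups (vcenter.example.com is a placeholder); an unauthenticated response such as 405 suggests the endpoint is exposed, while 401/403 or 404 suggests the patch or workaround is in place:

# Sketch: probe the publicly reported plugin endpoint (do not POST data)
curl -k -s -o /dev/null -w '%{http_code}\n' \
  https://vcenter.example.com/ui/vropspluginui/rest/services/uploadova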

Resolution

To remediate CVE-2021-21972, apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21972 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

The affected vCenter Server plugin for vROPs is available in all default installations. vROPs does not need to be present for this endpoint to be available. Follow the workarounds KB to disable it.

Acknowledgements

VMware would like to thank Mikhail Klyuchnikov of Positive Technologies for reporting this issue to us.

Response Matrix:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCenter Server | 7.0 | Any | CVE-2021-21972 | 9.8 | Critical | 7.0 U1c | KB82374 | None |
| vCenter Server | 6.7 | Any | CVE-2021-21972 | 9.8 | Critical | 6.7 U3l | KB82374 | None |
| vCenter Server | 6.5 | Any | CVE-2021-21972 | 9.8 | Critical | 6.5 U3n | KB82374 | None |

Impacted Product Suites that Deploy Response Matrix 3a Components:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cloud Foundation (vCenter Server) | 4.x | Any | CVE-2021-21972 | 9.8 | Critical | 4.2 | KB82374 | None |
| Cloud Foundation (vCenter Server) | 3.x | Any | CVE-2021-21972 | 9.8 | Critical | 3.10.1.2 | KB82374 | None |

3b. ESXi OpenSLP heap-overflow vulnerability (CVE-2021-21974)

Description

OpenSLP as used in ESXi has a heap-overflow vulnerability. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 8.8.

Known Attack Vectors

A malicious actor residing within the same network segment as ESXi who has access to port 427 may be able to trigger a heap overflow in the OpenSLP service, resulting in remote code execution.

Resolution

To remediate CVE-2021-21974, apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21974 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

[1] Per the Security Configuration Guides for VMware vSphere, VMware now recommends disabling the OpenSLP service in ESXi if it is not used. For more information, see our blog posting: https://blogs.vmware.com/vsphere/2021/02/evolving-the-vmware-vsphere-security-configuration-guides.html

[2] KB82705 documents the steps to consume the ESXi hot patch asynchronously on top of the latest VMware Cloud Foundation (VCF) supported ESXi build.
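
For reference, the workaround steps commonly documented in KB76372 stop the SLP service and block its firewall ruleset on the ESXi host. The sketch below reflects those published steps; verify against the KB itself before applying:

# Run in the ESXi host shell (per KB76372; confirm in the KB first)
esxcli system slp stats get                        # verify the SLP service is not in use
/etc/init.d/slpd stop                              # stop the SLP daemon
esxcli network firewall ruleset set -r CIMSLP -e 0 # block the CIMSLP firewall ruleset
chkconfig slpd off                                 # keep slpd disabled across reboots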

Acknowledgements

VMware would like to thank Lucas Leong (@_wmliang_) of Trend Micro’s Zero Day Initiative for reporting this issue to us.

Response Matrix:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [1] ESXi | 7.0 | Any | CVE-2021-21974 | 8.8 | Important | ESXi70U1c-17325551 | KB76372 | None |
| [1] ESXi | 6.7 | Any | CVE-2021-21974 | 8.8 | Important | ESXi670-202102401-SG | KB76372 | None |
| [1] ESXi | 6.5 | Any | CVE-2021-21974 | 8.8 | Important | ESXi650-202102101-SG | KB76372 | None |

Impacted Product Suites that Deploy Response Matrix 3b Components:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [1] Cloud Foundation (ESXi) | 4.x | Any | CVE-2021-21974 | 8.8 | Important | 4.2 | KB76372 | None |
| [1] Cloud Foundation (ESXi) | 3.x | Any | CVE-2021-21974 | 8.8 | Important | [2] KB82705 | KB76372 | None |

3c. VMware vCenter Server updates address SSRF vulnerability in the vSphere Client (CVE-2021-21973)

Description

The vSphere Client (HTML5) contains an SSRF (Server Side Request Forgery) vulnerability due to improper validation of URLs in a vCenter Server plugin. VMware has evaluated the severity of this issue to be in the Moderate severity range with a maximum CVSSv3 base score of 5.3.

Known Attack Vectors

A malicious actor with network access to port 443 may exploit this issue by sending a POST request to the vCenter Server plugin, leading to information disclosure.

Resolution

To remediate CVE-2021-21973, apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21973 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

The affected vCenter Server plugin for vROPs is available in all default installations. vROPs does not need to be present for this endpoint to be available. Follow the workarounds KB to disable it.

Acknowledgements

VMware would like to thank Mikhail Klyuchnikov of Positive Technologies for reporting this issue to us.

Response Matrix:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vCenter Server | 7.0 | Any | CVE-2021-21973 | 5.3 | Moderate | 7.0 U1c | KB82374 | None |
| vCenter Server | 6.7 | Any | CVE-2021-21973 | 5.3 | Moderate | 6.7 U3l | KB82374 | None |
| vCenter Server | 6.5 | Any | CVE-2021-21973 | 5.3 | Moderate | 6.5 U3n | KB82374 | None |

Impacted Product Suites that Deploy Response Matrix 3c Components:

| Product | Version | Running On | CVE Identifier | CVSSv3 | Severity | Fixed Version | Workarounds | Additional Documentation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cloud Foundation (vCenter Server) | 4.x | Any | CVE-2021-21973 | 5.3 | Moderate | 4.2 | KB82374 | None |
| Cloud Foundation (vCenter Server) | 3.x | Any | CVE-2021-21973 | 5.3 | Moderate | 3.10.1.2 | KB82374 | None |

4. References

VMware ESXi 7.0 ESXi70U1c-17325551
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u1c.html

VMware ESXi 6.7 ESXi670-202102401-SG
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202102001.html

VMware ESXi 6.5 ESXi650-202102101-SG
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/6.5/rn/esxi650-202102001.html

VMware vCloud Foundation 4.2
Downloads and Documentation:
https://docs.vmware.com/en/VMware-Cloud-Foundation/4.2/rn/VMware-Cloud-Foundation-42-Release-Notes.html


VMware vCloud Foundation 3.10.1.2
Downloads and Documentation:
https://docs.vmware.com/en/VMware-Cloud-Foundation/3.10.1/rn/VMware-Cloud-Foundation-3101-Release-Notes.html


vCenter Server 7.0.1 Update 1
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC70U1C&productId=974
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u1c-release-notes.html

vCenter Server 6.7 U3l
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC67U3L&productId=742&rPId=57171
https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html

vCenter Server 6.5 U3n
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC65U3N&productId=614&rPId=60942
https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-vcenter-server-65u3n-release-notes.html

Mitre CVE Dictionary Links:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21972
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21973
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21974

FIRST CVSSv3 Calculator:
CVE-2021-21972: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CVE-2021-21973: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
CVE-2021-21974: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

5. Change Log

2021-02-23 VMSA-2021-0002
Initial security advisory.

6. Contact

E-mail list for product security notifications and announcements:

https://lists.vmware.com/cgi-bin/mailman/listinfo/security-announce

This Security Advisory is posted to the following lists:  

security-announce@lists.vmware.com  

bugtraq@securityfocus.com  

fulldisclosure@seclists.org 

E-mail: security@vmware.com

PGP key at:

https://kb.vmware.com/kb/1055

VMware Security Advisories

https://www.vmware.com/security/advisories

VMware Security Response Policy

https://www.vmware.com/support/policies/security_response.html

VMware Lifecycle Support Phases

https://www.vmware.com/support/policies/lifecycle.html

VMware Security & Compliance Blog  

https://blogs.vmware.com/security

Source:
https://www.vmware.com/security/advisories/VMSA-2021-0002.html

Is Once-Yearly Pen Testing Enough for Your Organization?

Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.

There are two main reasons why regular pen testing is necessary for secure web application development:

  • Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
  • Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.

How Often Should You Pentest?

Many organizations, big and small, have a once-a-year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to test more frequently?

The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.

You may need more frequent pen testing if:

You Have an Agile or Continuous Release Cycle

Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.

If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.

To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.

Your Web Applications Are Business-Critical

Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.

For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.

To avoid this scenario, it’s important to ensure that your web applications are always available and secure.

Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.

Your Web Applications Are Customer-Facing

If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.

Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.

Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.

To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.

You Are in a High-Risk Industry

Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.

If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.

You Don’t Have Internal Security Operations or a Pen Testing Team

This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.

Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.

Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.

Depending on the size and complexity of your organization, you may need to pen test every month or even every week.

You Are Focused on Mergers or Acquisitions

During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.

M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.

In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”

To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.

The Importance of Continuous Pen Testing

While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.

There are two main types of pen testing: time-boxed and continuous.

Traditional pen testing is done on a set schedule, such as once a year. This type of pen testing is no longer enough in today’s world, as businesses rely more on their web applications.

Continuous pen testing is the process of continuously scanning your systems for vulnerabilities. This allows you to identify and fix vulnerabilities before they can be exploited by attackers. Continuous pen testing allows you to find and fix security issues as they happen instead of waiting for a periodic assessment.

Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.
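
To make this concrete, here is a minimal sketch of what continuous scanning can look like in practice. It is not Outpost24’s product; it assumes Docker and the OWASP ZAP baseline scanner, with staging.example.com as a placeholder target, and could run from cron or a CI pipeline on every deploy:

# Passive baseline scan of a staging site; writes an HTML report to the
# current directory via the mounted /zap/wrk volume.
docker run --rm -v "$(pwd):/zap/wrk" -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com -r zap-report.html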

Pen testing as a service models are where continuous pen testing shines. Outpost24’s PTaaS (Penetration-Testing-as-a-Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is always up-to-date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.

  • Combines manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
  • Provides comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more. This means you can be confident that your web applications are secure against the latest threats.
  • Is cost-effective: With Outpost24, you only pay for the services you need. This makes it more affordable to conduct continuous pen testing, even for small businesses.

The Bottom Line

Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.

A once-a-year pen testing cycle may be enough for some organizations, but for most, it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.

Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.

Source:
https://thehackernews.com/2023/01/is-once-yearly-pen-testing-enough-for.html

What wireless cards and USB broadband modems are supported on SonicWall firewalls and access points?

Description

The latest version of SonicOS firmware provides support for a wide variety of USB and Hotspot devices and wireless service providers as listed below.

Resolution

Broadband Devices

USA & Canada
Gen7 5G/4G/LTE – SonicOS 7.0

| Card Region | Operator | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| USA | AT&T | Nighthawk 5G Mobile Hotspot Pro MR 5100 | 5G | Hotspot | 7.0.1 | No |
| USA | AT&T | NightHawk LTE MR1100 | 4G/LTE | Hotspot | 7.0.0 | No |
| USA | AT&T | Global Modem USB800 | 4G/LTE | USB | 7.0.1 | Yes |
| USA | AT&T | iPhone 11 Pro | 4G/LTE | Hotspot | 7.0.0 | No |
| USA | AT&T | iPhone 12 Pro | 4G/LTE | Hotspot | 7.0.1 | No |
| USA | Verizon | M2100 | 5G | Hotspot | 7.0.1 | Yes |
| USA | Verizon | M1000 | 5G | Hotspot | 7.0.0 | Yes |
| USA | Verizon | Orbic Speed | 4G/LTE | Hotspot | 7.0.0 | No |
| USA | Verizon | MiFi Global U620L | 4G/LTE | USB | 7.0.0 | Yes |
| USA | Sprint | Netstick | 4G/LTE | USB | 7.0.0 | No |
| USA | Sprint | Franklin U772 | 4G/LTE | USB | 7.0.0 | No |
| USA | T-Mobile | M2000 | 4G/LTE | Hotspot | 7.0.1 | No |
| USA | T-Mobile | Link Zone2 | 4G/LTE | Hotspot | 7.0.0 | Yes |

Gen6/Gen6.5 4G/LTE – SonicOS 6.x

| Card Region | Operator | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| USA | AT&T | Global Modem USB800 | 4G/LTE | USB | 6.5.4.5 | Yes |
| USA | AT&T | Velocity (ZTE MF861) | 4G/LTE | USB | 6.5.3.1 | Yes |
| USA | AT&T | Beam (Netgear AC340U)² | 4G/LTE | USB | 5.9.0.1 | Yes |
| USA | AT&T | Momentum (Sierra Wireless 313U) | 4G/LTE | USB | 5.9.0.0 | Yes |
| USA | Verizon | MiFi Global U620L | 4G/LTE | USB | 6.5.0.0 | Yes |
| USA | Verizon | Novatel 551L | 4G/LTE | USB | 6.2.4.2 | Yes |
| USA | Verizon | Pantech UML290 | 4G/LTE | USB | 5.9.0.0 | No |
| USA | Sprint | Franklin U772 | 4G/LTE | USB | 6.5.3.1 | No |
| USA | Sprint | Netgear 341U | 4G/LTE | USB | 6.2.2.0 | Yes |
| Canada | Rogers | AirCard (Sierra Wireless 330U) | 4G/LTE | USB | 5.9.0.0 | No |

Gen5 3G¹ – SonicOS 5.x

| Card Region | Operator | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| USA | AT&T | Velocity (Option GI0461) | 3G | USB | 5.8.1.1 | No |
| USA | AT&T | Mercury (Sierra Wireless C885) | 3G | USB | 5.3.0.1 | No |
| USA | Verizon | Pantech UMW190 | 3G | USB | 5.9.0.0 | No |
| USA | Verizon | Novatel USB760 | 3G | USB | 5.3.0.1 | No |
| USA | Verizon | Novatel 727 | 3G | USB | 5.3.0.1 | No |
| USA | Sprint | Novatel U760 | 3G | USB | 5.3.0.1 | No |
| USA | Sprint | Novatel 727U | 3G | USB | 5.3.0.1 | No |
| USA | Sprint | Sierra Wireless 598U | 3G | USB | 5.8.1.1 | No |
| USA | T-Mobile | Rocket 3.0 (ZTE MF683) | 3G | USB | 5.9.0.0 | Yes |
| Canada | Bell | Novatel 760 | 3G | USB | 5.3.1.0 | No |

International

Gen7 5G/4G/LTE – SonicOS 7.0

| Card Region | Manufacturer | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| Worldwide | Huawei | E6878-870 | 5G | Hotspot | 7.0.0 | No |
| Worldwide | Huawei | E8372H³ | 4G/LTE | USB | 7.0.0 | No |
| Worldwide | Huawei | E8201 | 4G/LTE | USB | 7.0.0 | No |
| Worldwide | Huawei | E3372 | 4G/LTE | USB | 7.0.0 | No |
| Worldwide | ZTE | MF833U | 4G/LTE | USB | 7.0.0 | Yes |
| Worldwide | ZTE | MF825C | 4G/LTE | USB | 7.0.0 | Yes |
| Worldwide | ZTE | MF79S | 4G/LTE | USB | 7.0.0 | Yes |

Gen6/Gen6.5 4G/LTE – SonicOS 6.x

| Card Region | Manufacturer | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| Worldwide | Huawei | E8372 (Telstra 4GX) | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | Huawei | E3372 | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | Huawei | E3372h (-608 variant)⁴ | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | Huawei | E3372s (-608 variant)⁴ | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | Huawei | E398 (Kyocera 5005) | 4G/LTE | USB | 5.9.0.2 | Yes |
| Worldwide | Huawei | E3276s | 4G/LTE | USB | No | Yes |
| Worldwide | D-Link | DWM-221 | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | D-Link | DWM-222 A1 | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | ZTE | MF825 | 4G/LTE | USB | 6.5.3.1 | Yes |
| Worldwide | ZTE | MF832G | 4G/LTE | USB | No | Yes |
| Worldwide | ZTE | MF79S | 4G/LTE | USB | No | Yes |

Gen5 3G¹ – SonicOS 5.x

| Card Region | Manufacturer | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| Worldwide | Huawei | E353⁵ | 3G | USB | 5.9.0.2 | Yes |
| Worldwide | Huawei | K4605 | 3G | USB | 5.9.0.2 | Yes |
| Worldwide | Huawei | EC169C | 3G | USB | 5.9.0.7 | No |
| Worldwide | Huawei | E180 | 3G | USB | 5.9.0.1 | No |
| Worldwide | Huawei | E182 | 3G | USB | 5.9.0.0 | No |
| Worldwide | Huawei | K3715 | 3G | USB | 5.9.0.0 | No |
| Worldwide | Huawei | E1750 | 3G | USB | 5.8.0.2 | No |
| Worldwide | Huawei | E176G | 3G | USB | 5.3.0.1 | No |
| Worldwide | Huawei | E220 | 3G | USB | 5.3.0.1 | No |
| Worldwide | Huawei | EC122 | 3G | USB | 5.9.0.0 | No |

LTE Cellular Extender – SonicOS 6.x

| Card Region | Manufacturer | Name | Generation | Type | SonicOS Version | SonicWave |
| --- | --- | --- | --- | --- | --- | --- |
| Worldwide | Accelerated | 6300-CX LTE router | 4G/LTE | SIM | 6.5.0.0 | No |


¹ Cellular network operators around the world are announcing plans to discontinue 3G services starting as early as December 2020. Therefore, LTE or 5G WWAN devices should be used for new deployments, and existing 3G deployments should be upgraded to LTE or 5G in preparation for the imminent discontinuation of 3G services.
² Refer to the AT&T 340U article for more info.
³ Multiple variations of the Huawei card: E8371h-153, E8372h-155, and E8372h-510.
⁴ Huawei modems E3372h and E3372s have been released in multiple variants (i.e., -608, -153, -607, -517, -511) and with different protocols. At the moment, SonicOS does not support the Huawei proprietary protocol, so all variants using a non-standard or proprietary protocol are either not supported or require the ISP to provide a PPP APN type.
⁵ The Huawei E353 modem is not compatible with the SOHO 250. Also note that it is not an LTE card.
For customers outside of the 90-day warranty support period, an active SonicWall 8×5 or 24×7 Dynamic Support agreement allows you to keep your network security up-to-date by providing access to the latest firmware updates. You can manage all services, including Dynamic Support and firmware downloads, for any of your registered appliances at mysonicwall.com.

Source:
https://www.sonicwall.com/support/knowledge-base/what-wireless-cards-and-usb-broadband-modems-are-supported-on-firewalls-and-access-points/170505473051240/

Port Forwarding configured using SonicOS API

Description

This article describes the steps involved in creating policies using SonicOS APIs that will let you access internal devices or servers behind the SonicWall firewall.

Cause

By default, SonicWall does not allow inbound traffic that is not part of a session initiated by an internal device on the network. This is done to protect the devices in the internal network from malicious access. If required, certain parts of the network can be opened to external access, for example web servers, Exchange servers, and so on.

To open the network, we need to specify an access rule from the external network to the internal network and a NAT Policy so we direct traffic only to the intended device.

With APIs, this can be achieved at scale: for example, you can create multiple access rules and NAT policies with one command, with all the attributes specified in JSON objects.

Resolution

Manually opening ports / enabling port forwarding to allow traffic from the Internet to a server behind the SonicWall using the SonicOS API involves the following steps:

Step 1: Enabling the API module.

Step 2: Getting into Swagger.

Step 3: Logging in to the SonicWall with the API.

Step 4: Creating address objects and service objects with the API.

Step 5: Creating a NAT policy with the API.

Step 6: Creating access rules with the API.

Step 7: Committing all the configuration changes made with APIs.

Step 8: Logging out of the SonicWall with the API.

Scenario Overview

The following walk-through details allowing TCP 3389 from the Internet to a terminal server on the local network. Once the configuration is complete, Internet users can RDP into the terminal server using the WAN IP address. Although the examples below show the LAN zone and TCP 3389, they can apply to any zone and any port that is required.

Step 1: Enabling the API Module

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu.
  3. Click Appliance | Base Settings.
  4. Under Base Settings, search for SonicOS API.
  5. Click Enable SonicOS API.
  6. Click Enable RFC-2617 HTTP Basic Access authentication.

[Screenshot: Enabling the API in SonicOS]

Step 2: Getting into Swagger

  1. Click on the Manage tab.
  2. Scroll down to find API.
  3. Click on the link https://sonicos-api.sonicwall.com.
  4. Swagger will prepopulate your SonicWall’s IP, management port, and firmware version so it can give you a list of applicable APIs.

 NOTE: All the APIs required for configuring port forwarding are listed in this article.

[Screenshot: API on the SonicOS]

Step 3: Logging in to the SonicWall with the API

  1. curl -k -i -u "admin:password" -X POST https://192.168.168.168:443/api/sonicos/auth

      "admin:password" – Replace this with your SonicWall’s username:password.

      https://192.168.168.168:443/ – Replace this with your SonicWall’s public or private IP address.

The command output should contain the string: "success": true

[Screenshot: Login with API]

 NOTE: You are free to choose Swagger, Postman, Git Bash, or any application that allows API calls. If you are using a Linux-based operating system, you can execute cURL from the terminal. For this article, I am using Git Bash on Windows.

Step 4: Creating Address Objects and Service Objects with the API

  1. curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Private\",\"zone\":\"LAN\",\"host\":{\"ip\":\"192.168.168.10\"}}}}" && curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Public\",\"zone\":\"WAN\",\"host\":{\"ip\":\"1.1.1.1\"}}}}"

 OR

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @add.Json

@add.Json is a file with the following information:

{
  "address_objects": [
    {
      "ipv4": {
        "name": "Term Server Private",
        "zone": "LAN",
        "host": {
          "ip": "192.168.168.10"
        }
      }
    },
    {
      "ipv4": {
        "name": "Term Server Public",
        "zone": "WAN",
        "host": {
          "ip": "1.1.1.1"
        }
      }
    }
  ]
}

Output of the first command, where we passed the address object data on the command line instead of creating a separate file:

[Screenshot]

Output of the second command, where we used the file @add.Json instead of specifying the data on the command line:

[Screenshot]

 TIP: If you are creating only one address object, the first command should be sufficient; if you are creating multiple address objects, use the second command.

 CAUTION: The add.Json file is saved on my desktop, and hence I was able to call it in the command. If you have created the JSON file in a different location, make sure you execute the command from that location.

2. Adding a service object:

 curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/service-objects" -H "accept: application/json" -H "Content-Type: application/json" -d @serviceobj.Json

https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

@serviceobj.Json is a file that contains the attributes of the service object:

{
  "service_object": {
    "name": "Terminal Server 3389",
    "TCP": {
      "begin": 3389,
      "end": 3389
    }
  }
}

Output of the command:

[Screenshot]

 3. Committing the changes made to the SonicWall: We need to do this to be able to use the address objects and service objects we just created in a NAT policy and an access rule.

  curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

  https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

Output of the command:

[Screenshot]

Step 5: Creating a NAT Policy with the API

1. curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/nat-policies/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @natpolicy.Json

   https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

   @natpolicy.Json is a file that contains the attributes of the NAT policy:

{
  "nat_policies": [
    {
      "ipv4": {
        "name": "Inbound NAT 3389",
        "enable": true,
        "comment": "",
        "inbound": "X1",
        "outbound": "any",
        "source": {
          "any": true
        },
        "translated_source": {
          "original": true
        },
        "destination": {
          "name": "Term Server Public"
        },
        "translated_destination": {
          "name": "Term Server Private"
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "translated_service": {
          "original": true
        }
      }
    }
  ]
}

Output of the command:

[Screenshot]

Step 6: Creating Access Rules with the API

1. curl -k -X POST "https://192.168.168.168:443/api/sonicos/access-rules/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @accessrule.Json

https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

@accessrule.Json is a file that contains the attributes of the access rule:

{
  "access_rules": [
    {
      "ipv4": {
        "name": "Inbound 3389",
        "enable": true,
        "from": "WAN",
        "to": "LAN",
        "action": "allow",
        "source": {
          "address": {
            "any": true
          },
          "port": {
            "any": true
          }
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "destination": {
          "address": {
            "name": "Term Server Public"
          }
        }
      }
    }
  ]
}

Output of the command:

[Screenshot]

Step 7: Committing All the Configuration Changes Made with APIs

1. We already committed the address objects and service objects in Step 4. In this step, we commit the NAT policy and the access rule to the SonicWall’s configuration:

curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

We have only used the POST method in most of the API calls for this article because we are only adding things to the configuration; there are other methods like GET, DELETE, and PUT. I recommend going through https://sonicos-api.sonicwall.com for more API commands.

Step 8: Logging Out of the SonicWall with the API

1. It is recommended to log out of the SonicWall via the API once the desired configuration is committed.

         curl -k -i -u "admin:password" -X DELETE https://192.168.168.168:443/api/sonicos/auth

         https://192.168.168.168:443 – Replace this with the IP of the SonicWall.

         "admin:password" – The actual username and password for the SonicWall.

Output of the command:

[Screenshot]

 CAUTION: If you skip the commit in Step 7 and execute the command in Step 8, you will lose all the configuration changes made in the current session.

Summary: We have successfully configured port forwarding using the SonicOS API, allowing a user on the Internet to access a terminal server behind the firewall on port 3389.
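
For repeatable deployments, the calls above can be chained into a single script. This is a minimal sketch that mirrors the article’s commands (same placeholder IP and credentials, and the add.Json, serviceobj.Json, natpolicy.Json, and accessrule.Json files created earlier); as in the article, only the login and logout calls pass credentials:

#!/bin/sh
# Sketch: end-to-end port-forwarding configuration via the SonicOS API
FW="https://192.168.168.168:443/api/sonicos"
curl -k -u "admin:password" -X POST "$FW/auth"        # Step 3: log in
curl -k -X POST "$FW/address-objects/ipv4" -H "Content-Type: application/json" -d @add.Json
curl -k -X POST "$FW/service-objects" -H "Content-Type: application/json" -d @serviceobj.Json
curl -k -X POST "$FW/config/pending"                  # commit objects (Step 4)
curl -k -X POST "$FW/nat-policies/ipv4" -H "Content-Type: application/json" -d @natpolicy.Json
curl -k -X POST "$FW/access-rules/ipv4" -H "Content-Type: application/json" -d @accessrule.Json
curl -k -X POST "$FW/config/pending"                  # commit policy and rule (Step 7)
curl -k -u "admin:password" -X DELETE "$FW/auth"      # Step 8: log out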

 NOTE: It is always recommended to use a client VPN for RDP connections; this article is just an example.

Source:
https://www.sonicwall.com/support/knowledge-base/port-forwarding-with-sonicos-api-using-postman-and-curl/190224162643523/

What FQDNs and IPs are used by SonicWall products to update their services?

Description

This article lists the Fully Qualified Domain Names (FQDNs) in use by SonicWall for its licensing and security services.

Resolution

SonicWall firewalls:

  • lm2.sonicwall.com – Registration information/licensing.
  • licensemanager.sonicwall.com – Registration information/licensing for older firewalls.
  • software.sonicwall.com – Software, firmware, NetExtender, GVC.
  • responder.global.sonicwall.com – Probe target.
  • clientmanager.sonicwall.com – Client CF enforcement download.
  • policymanager.sonicwall.com – Global Security Client.
  • convert.global.sonicwall.com – Preference processor server.
  • geodnsd.global.sonicwall.com – Used for flow reporting and GeoIP.
  • webcfs00.global.sonicwall.com – Content filter server.
  • webcfs01.global.sonicwall.com – Content filter server.
  • webcfs02.global.sonicwall.com – Content filter server.
  • webcfs03.global.sonicwall.com – Content filter server.
  • webcfs04.global.sonicwall.com – Content filter server.
  • webcfs05.global.sonicwall.com – Content filter server.
  • webcfs06.global.sonicwall.com – Content filter server.
  • webcfs07.global.sonicwall.com – Content filter server.
  • webcfs08.global.sonicwall.com – Content filter server.
  • webcfs10.global.sonicwall.com – Content filter server.
  • webcfs11.global.sonicwall.com – Content filter server.
  • gcsd.global.sonicwall.com – Cloud antivirus and status.
  • sig2.sonicwall.com – Signature updates.
  • sigserver.global.sonicwall.com – Signature updates for older firewalls.
  • lmdashboard.global.sonicwall.com – License manager dashboard.
  • appreports.global.sonicwall.com – App reports server.
  • sonicsandbox.global.sonicwall.com – Default Capture ATP server (west coast), UDP 2259 and HTTPS (TCP 443).
  • sonicsandboxmia.global.sonicwall.com – East coast Capture ATP server, UDP 2259 and HTTPS (TCP 443).
  • utmgbdata.global.sonicwall.com – Map info URL domain.
  • cfssupport.sonicwall.com – View rating of a website.
  • cloudtt.global.sonicwall.com – Zero Touch provisioning
  • eprs2.global.sonicwall.com (204.212.170.36, 204.212.170.11, 204.212.170.10) – Content Filter Client servers.
  • wsdl.mysonicwall.com  – Automatic preference backups and firmware downloads.
  • sonicsandbox.global.sonicwall.com
  • sonicsandboxmia.global.sonicwall.com
  • sonicsandboxams.global.sonicwall.com
  • sonicsandboxfra.global.sonicwall.com
  • sonicsandboxtko.global.sonicwall.com

    This information can also be found in the Tech Support Report (TSR). More information about the TSR can be found in the following article:
    How to Download Tech Support Files (TSR, EXP, Logs) From SonicWall UTM Firewalls
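
Where an upstream device filters by IP rather than FQDN, the names above can be resolved on a schedule to keep rules current. A minimal sketch (hostnames taken from the list above; addresses vary by region and over time, so re-run regularly):

# Resolve a few licensing/signature FQDNs to their current IPs
for h in lm2.sonicwall.com software.sonicwall.com sig2.sonicwall.com; do
  printf '%s: ' "$h"; dig +short "$h" | tr '\n' ' '; echo
done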

Capture Client software:

  • captureclient-36.sonicwall.com
  • captureclient.sonicwall.com
  • sonicwall.sentinelone.net (S1 agent)
  • software.sonicwall.com (software package updates)
  • sonicsandbox.global.sonicwall.com (Capture ATP- Applicable for Capture Client Advanced License)

SonicWall CSC:

  • For SanJose Colo

    FQDN: cloudgms.sonicwall.com
    Zero Touch FQDN: cloudtt.global.sonicwall.com
    IP: 4.16.47.168, 4.16.47.188

  • For AWS Colo

    FQDN: cscma.sonicwall.com
    Zero Touch FQDN: cscmatt.global.sonicwall.com
    IP: 34.211.138.110, 52.37.12.168, 52.89.82.203, 52.11.92.114

  • For AMS Colo

    FQDN: cloudgmsams.sonicwall.com
    Zero Touch FQDN: cloudttams.global.sonicwall.com
    IP: 213.244.188.168, 213.244.188.188

  • For AWS-FRA Colo

    FQDN: cscmafra.sonicwall.com
    Zero Touch FQDN: cscmafratt.global.sonicwall.com, cscmafratta.global.sonicwall.com
    IP: 18.197.234.66, 18.197.234.59

SonicWall NSM:

  • For Oregon AWS Colo

    FQDN: nsm-uswest.sonicwall.com (Use it in GMS settings under Administration Page)
    Zero Touch FQDN: nsm-uswest-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page)
    IP: 13.227.130.81, 13.227.130.63, 3.227.130.69, 13.227.130.12, 52.39.29.75, 44.233.105.101, 44.227.248.206

  • For AWS-FRA Colo

    FQDN: nsm-eucentral.sonicwall.com (Use it in GMS settings under Administration Page)
    Zero Touch FQDN: nsm-eucentral-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page)
    IP: 13.227.130.70, 13.227.130.69, 13.227.130.15, 13.227.130.92, 18.156.16.24, 18.157.240.148, 3.127.176.56

Source:
https://www.sonicwall.com/support/knowledge-base/what-fqdn-s-and-ip-s-are-used-by-sonicwall-products-to-update-their-services/170503941664663/

Helping build a safer Internet by measuring BGP RPKI Route Origin Validation

The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

If the question needs asking, you might suspect we can’t. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.

Why RPKI is necessary to secure Internet routing

The Internet is a network of independently-managed networks, called Autonomous Systems (ASes). To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.

BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.

An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:

  • Denial-of-Service (DoS) through traffic blackholing or redirection,
  • Impersonation attacks to eavesdrop on communications,
  • Machine-in-the-Middle exploits to modify the exchanged data and subvert reputation-based filtering systems.

Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.

Such an incident happened on June 24, 2019. Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.

Traffic misdirections like these, either unintentional or intentional, are not uncommon. The Internet Society’s MANRS (Mutually Agreed Norms for Routing Security) initiative estimated that in 2020 alone there were over 3,000 route leaks and hijacks, and new occurrences can be observed every day through Cloudflare Radar.

The most prominent proposals to secure BGP routing, standardized by the IETF, focus on validating the origin of the advertised routes using Resource Public Key Infrastructure (RPKI) and verifying the integrity of the paths with BGPsec. Specifically, RPKI (defined in RFC 7115) relies on a Public Key Infrastructure to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.

RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and routing networks to perform an RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.

The two phases of RPKI adoption: signing origins and validating origins

RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records thereby attesting to be the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in RFC 6483).
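
Conceptually, a ROA is a signed statement binding a prefix, a maximum prefix length, and an authorized origin AS. The record below is purely illustrative, using documentation address space and a private-use AS number:

Prefix: 203.0.113.0/24    Max Length: /24    Authorized Origin: AS64500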

With ROV, a BGP route received by a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route with RPKI records found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.

One issue with RPKI is the fact that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort, and a lack of broader adoption disincentivizes ASes from committing the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare’s isbgpsafeyet.com are promoting good Internet citizenship among network operators and making the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on isbgpsafeyet.com.

Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important to evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.

Measuring ROAs is straightforward since ROA data is readily available from RPKI repositories. Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the RouteViews and RIPE RIS routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs¹.
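
For a single prefix-origin pair, the validation state implied by published ROAs can also be checked through public APIs. As a sketch, RIPEstat’s rpki-validation data call can be queried as follows (substitute your own origin AS and prefix; endpoint per RIPEstat’s public documentation):

curl -s 'https://stat.ripe.net/data/rpki-validation/data.json?resource=AS13335&prefix=1.1.1.0/24'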

Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.

Measuring ROV deployment

Although we do not have direct access to the configuration of everyone’s BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS².

Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.

While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.

If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish whether ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations of an anycast network, taking advantage of its direct interconnections to the measurement vantage points, as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS³.

Note that it’s also important that the IP address of the RPKI-invalid origin should not be covered by a less specific prefix for which there is a valid or unknown RPKI route, otherwise even if an AS filters invalid RPKI routes its users would still be able to find a route to that IP.

The measurement technique described here is the one implemented by Cloudflare’s isbgpsafeyet.com website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.

The isbgpsafeyet.com website itself doesn’t submit any data back to Cloudflare, but recently we started measuring whether end users’ browsers can successfully connect to invalid RPKI origins when ROV is present. We use the same mechanism as is used for global performance data⁴. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it’s RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user’s ISP uses ROV.
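
You can reproduce this check from a command line. A minimal sketch using the same two test hostnames (if your ISP enforces ROV, the second request should time out or fail to connect):

# The valid origin should always respond; the invalid one should be
# unreachable from networks that drop RPKI-invalid routes.
curl -sS -o /dev/null -w 'valid:   %{http_code}\n' https://valid.rpki.cloudflare.com
curl -sS -o /dev/null --connect-timeout 10 -w 'invalid: %{http_code}\n' https://invalid.rpki.cloudflare.com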

This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.

The state of global ROV deployment

The figure below shows the raw number of ROV probe requests per hour during October 2022 to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com. In total, we observed 69.7 million successful probes from 41,531 ASNs.

Based on APNIC’s estimates of the number of end users per ASN, our weighted⁵ analysis covers 96.5% of the world’s Internet population. As expected, the number of requests follows a diurnal pattern, which reflects established user behavior in daily and weekly Internet activity⁶.

We can also see that the number of successful requests to valid.rpki.cloudflare.com (gray line) closely follows the number of sessions that issued at least one request (blue line), which works as a smoke test for the correctness of our measurements.

As we don’t store the IP addresses that contribute measurements, we don’t have any way to count individual clients and large spikes in the data may introduce unwanted bias. We account for that by capturing those instants and excluding them.

Overall, we estimate that out of the four billion Internet users, only 261 million (6.5%) are protected by BGP Route Origin Validation, but the true state of global ROV deployment is more subtle than this.

The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.

Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).

ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.

In the Netherlands, Denmark, Switzerland, or the United States, adoption appears mostly driven by their larger ASes, while in Greece or Yemen it’s the smaller ones that are adopting ROV.

The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.

Most ASes either don’t validate at all, or have close to 100% adoption, which is what we’d intuitively expect. However, it’s interesting to observe that there are small numbers of ASes all across the scale. ASes that exhibit partial RPKI-invalid drop rate compared to total requests may either implement ROV partially (on some, but not all, of their BGP routers), or appear as dropping RPKI invalids due to ROV deployment by other ASes in their upstream path.

To estimate the number of users protected by ROV we only considered ASes with an observed adoption above 95%, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.

If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the 261 million users currently protected by ROV according to the above criteria (686 ASes).
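
A sketch of that thresholding step, assuming a per-AS table of observed drop rates and user estimates (the ASNs, rates, and populations below are hypothetical):

```python
# Hypothetical per-AS measurements: (ASN, fraction of RPKI-invalid
# probes dropped, estimated user population behind that AS).
as_stats = [
    (64496, 1.00, 1_200_000),
    (64497, 0.40, 3_500_000),  # partial deployment: users not counted
    (64498, 0.97, 800_000),
]

THRESHOLD = 0.95  # only near-complete deployments count as protecting users

protected_ases = [a for a in as_stats if a[1] >= THRESHOLD]
protected_users = sum(users for _, _, users in protected_ases)
print(f"{len(protected_ases)} ASes, {protected_users:,} protected users")
```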

Looking back at the country adoption map, one would perhaps expect the number of protected users to be larger. But worldwide ROV deployment is still mostly partial, missing larger ASes, or both. This becomes even more clear when compared with the next map, plotting just the fraction of fully protected users.

To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.

112 million Internet users are protected by 111 ASes from the United States with comprehensive ROV deployments. Conversely, more than twice as many ASes from countries making up the European Union have fully deployed ROV, but end up covering only half as many users. This can be reasonably explained by end user ASes being more likely to operate within a single country rather than span multiple countries.

Conclusion

Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don’t traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.

In other words, the methodology used focuses on ROV adoption by end user networks (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the “blast radius” of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.

As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and Cloudflare Radar will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.

When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.

Worldwide we observe a very low effective coverage of just 6.5% over the measured ASes, corresponding to 261 million end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.

1https://rpki.cloudflare.com/
2Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. “Are we there yet? On RPKI’s deployment and security.” Cryptology ePrint Archive (2016).
3Geoff Huston. “Measuring ROAs and ROV”. https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/
4Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.
5Probe requests are weighted by AS size as calculated from Cloudflare’s worldwide HTTP traffic.
6Quan, Lin, John Heidemann, and Yuri Pradkin. “When the Internet sleeps: Correlating diurnal networks with external factors.” In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.

Source :
https://blog.cloudflare.com/rpki-updates-data/

Microsoft 365 network connectivity test tool

The Microsoft 365 network connectivity test tool is located at https://connectivity.office.com. It’s an adjunct tool to the network assessment and network insights available in the Microsoft 365 admin center under the Health | Connectivity menu.

 Important

It’s important to sign in to your Microsoft 365 tenant as all test reports are shared with your administrator and uploaded to the tenant while you are signed in.

Connectivity test tool.

 Note

The network connectivity test tool supports tenants in WW Commercial but not GCC Moderate, GCC High, DoD or China.

Network insights in the Microsoft 365 Admin Center are based on regular in-product measurements for your Microsoft 365 tenant, aggregated each day. In comparison, network insights from the Microsoft 365 network connectivity test are run locally in the tool.

In-product testing is limited, and running tests local to the user collects more data, resulting in deeper insights. Network insights in the Microsoft 365 Admin Center will show that there’s a networking problem at a specific office location. The Microsoft 365 connectivity test can help to identify the root cause of that problem and provide a targeted performance improvement action.

We recommend using these insights together: network quality status can be assessed for each office location in the Microsoft 365 Admin Center, and more specific findings can be obtained after deploying tests based on the Microsoft 365 connectivity test.

What happens at each test step

Office location identification

When you click the Run test button, we show the running test page and identify the office location. You can type in your location by city, state, and country, or choose to have it detected for you. If you opt for detection, the tool requests the latitude and longitude from the web browser and limits the accuracy to 300 meters by 300 meters before use. It’s not necessary to identify the location more accurately than the building to measure network performance.

JavaScript tests

After office location identification, we run a TCP latency test in JavaScript and we request data from the service about in-use and recommended Microsoft 365 service front door servers. When these tests are completed, we show them on the map and in the details tab where they can be viewed before the next step.

Download the advanced tests client application

Next, we start the download of the advanced tests client application. We rely on the user to launch the client application, and the user must also have the .NET 6.0 Runtime installed.

There are two parts to the Microsoft 365 network connectivity test: the web site https://connectivity.office.com and a downloadable Windows client application that runs advanced network connectivity tests. Most of the tests require the application to be run. It will populate results back into the web page as it runs.

You’ll be prompted to download the advanced client test application from the web site after the web browser tests have completed. Open and run the file when prompted.

Advanced tests client application.

Start the advanced tests client application

Once the client application starts, the web page will update to show this result. Test data will start to flow back to the web page. The page updates each time new data is received, and you can review the data as it arrives.

Advanced tests completed and test report upload

When the tests are completed, the web page and the advanced tests client will both show that. If the user is signed in, the test report will be uploaded to the customer’s tenant.

Sharing your test report

The test report requires authentication to your Microsoft 365 account. Your administrator selects how you can share your test report. The default settings allow for sharing of your reports with other users within your organization, and the ReportID link is not available. Reports expire by default after 90 days.

Sharing your report with your administrator

If you’re signed in when a test report is generated, the report is shared with your administrator.

Sharing with your Microsoft account team, support or other personnel

Test reports (excluding any personal identification) are shared with Microsoft employees. This sharing is enabled by default and can be disabled by your administrator in the Health | Network Connectivity page in the Microsoft 365 Admin Center.

Sharing with other users who sign in to the same Microsoft 365 tenant

You can choose users to share your report with. Being able to choose is enabled by default, but it can be disabled by your administrator.

Sharing a link to your test results with a user.

You can share your test report with anyone by providing access to a ReportID link. This link generates a URL that you can send to someone so that they can bring up the test report without signing in. This sharing is disabled by default and must be enabled by your administrator.

Sharing a link to your test results.

Network Connectivity Test Results

The results are shown in the Summary and Details tabs. The summary tab shows a map of the detected network perimeter and a comparison of the network assessment to other Microsoft 365 customers nearby. It also allows for sharing of the test report. Here’s what the summary results view looks like:

Network connectivity test tool summary results.

Here’s an example of the details tab output. On the details tab we show a green circle check mark if the result compared favorably. We show a red triangle exclamation point if the result exceeded a threshold, indicating a network insight. The following sections describe each of the details tab results rows and explain the thresholds used for network insights.

Network connectivity test tool example test results.

Your location information

This section shows test results related to your location.

Your location

The user location is detected from the user’s web browser. It can also be typed in at the user’s choice. It’s used to identify network distances to specific parts of the enterprise network perimeter. Only the city from this location detection and the distance to other network points are saved in the report.

The user office location is shown on the map view.

Network egress location (the location where your network connects to your ISP)

We identify the network egress IP address on the server side. Location databases are used to look up the approximate location for the network egress. These databases are typically accurate for about 90% of IP addresses. If the location looked up from the network egress IP address isn’t accurate, this would lead to a false result. To validate whether this error is occurring for a specific IP address, you can use publicly accessible IP address location web sites to compare against your actual location.

Your distance from the network egress location

We determine the distance from that location to the office location. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers) since that is likely to increase the TCP latency by more than 25 ms and may affect user experience.
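
The distance check itself is simple spherical geometry; a sketch using the standard haversine formula, with made-up coordinates (the 500-mile threshold is the one stated above):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in miles.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * asin(sqrt(a))  # 3959 ≈ Earth's radius in miles

office = (47.61, -122.33)  # hypothetical office location
egress = (40.71, -74.01)   # hypothetical network egress location

distance = haversine_miles(*office, *egress)
if distance > 500:
    print(f"Network insight: egress is {distance:.0f} miles from the office")
```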

The map shows the network egress location in relation to the user office location indicating the network backhaul inside of the enterprise WAN.

Implement local and direct network egress from user office locations to the Internet for optimal Microsoft 365 network connectivity. Improvements to local and direct egress are the best way to address this network insight.

Proxy server information

We identify whether proxy server(s) are configured on the local machine to pass Microsoft 365 network traffic in the Optimize category. We identify the distance from the user office location to the proxy servers.

The distance is tested first by ICMP ping. If that fails, we test with TCP ping and finally we look up the proxy server IP address in an IP address location database. We show a network insight if the proxy server is further than 500 miles (800 kilometers) away from the user office location.

Virtual private network (VPN) you use to connect to your organization

This test detects if you’re using a VPN to connect to Microsoft 365. A passing result will show if you have no VPN, or if you have a VPN with recommended split tunnel configuration for Microsoft 365.

VPN Split Tunnel

Each Optimize category route for Exchange Online, SharePoint Online, and Microsoft Teams is tested to see if it’s tunneled on the VPN. A split-out workload avoids the VPN entirely. A tunneled workload is sent over the VPN. A selective tunneled workload has some routes sent over the VPN and some split out. A passing result will show if all workloads are split out or selective tunneled.

Customers in your metropolitan area with better performance

Network latency between the user office location and the Exchange Online service is compared to other Microsoft 365 customers in the same metro area. A network insight is shown if 10% or more of customers in the same metro area have better performance. This means their users will have better performance in the Microsoft 365 user interface.

This network insight is generated on the basis that all users in a city have access to the same telecommunications infrastructure and the same proximity to Internet circuits and Microsoft’s network.

Time to make a DNS request on your network

This shows the DNS server configured on the client machine that ran the tests. It might be a DNS Recursive Resolver server; however, this is uncommon. It’s more likely to be a DNS forwarder server, which caches DNS results and forwards any uncached DNS requests to another DNS server.

This is provided for information only and does not contribute to any network insight.

Your distance from and/or time to connect to a DNS recursive resolver

The in-use DNS Recursive Resolver is identified by making a specific DNS request and then asking the DNS Name Server for the IP Address that it received the same request from. This IP Address is the DNS Recursive Resolver and it will be looked up in IP Address location databases to find the location. The distance from the user office location to the DNS Recursive Resolver server location is then calculated. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers).
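
The exact probe the tool uses isn’t spelled out here, but the same idea can be reproduced with a public “whoami” DNS record: Google’s authoritative servers answer this name with the address of whichever resolver forwarded your query to them. A sketch using the dnspython package:

```python
import dns.resolver  # pip install dnspython

# The authoritative server embeds, in the TXT answer, the IP address of
# the recursive resolver that actually delivered the query to it.
answer = dns.resolver.resolve("o-o.myaddr.l.google.com", "TXT")
for record in answer:
    print("Resolver seen by the authoritative server:", record.to_text())
```

The returned address can then be geolocated and compared to the office location, exactly as described above.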

The location looked up from the network egress IP Address may not be accurate and this would lead to a false result from this test. To validate if this error is occurring for a specific IP Address, you can use publicly accessible network IP Address location web sites.

This network insight will specifically impact the selection of the Exchange Online service front door. To address this insight, local and direct network egress should be a prerequisite, and the DNS Recursive Resolver should then be located close to that network egress.

Exchange Online

This section shows test results related to Exchange Online.

Exchange service front door location

The in-use Exchange service front door is identified in the same way that Outlook identifies it, and we measure the network TCP latency from the user location to it. The TCP latency is shown, and the in-use Exchange service front door is compared to the list of best service front doors for the current location. This is shown as a network insight if one of the best Exchange service front door(s) isn’t in use.

Not using one of the best Exchange service front door(s) could be caused by network backhaul before the corporate network egress in which case we recommend local and direct network egress. It could also be caused by use of a remote DNS recursive resolver server in which case we recommend aligning the DNS recursive resolver server with the network egress.

We calculate a potential improvement in TCP latency (ms) to the Exchange service front door. This is done by taking the tested network latency from the user office location and subtracting the network latency from the current location to the closest Exchange service front door. The difference represents the potential opportunity for improvement.

Best Exchange service front door(s) for your location

This lists the best Exchange service front door locations by city for your location.

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the Exchange service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

SharePoint Online

This section shows test results related to SharePoint Online and OneDrive.

The service front door location

The in-use SharePoint service front door is identified in the same way that the OneDrive client does and we measure the network TCP latency from the user office location to it.

Download speed

We measure the download speed for a 15 Mb file from the SharePoint service front door. The result is shown in megabytes per second to indicate what size file in megabytes can be downloaded from SharePoint or OneDrive in one second. The number should be similar to one tenth of the minimum circuit bandwidth in megabits per second. For example, if you have a 100 Mbps internet connection, you may expect about 10 megabytes per second (10 MBps).
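
As a rough illustration of the measurement and the tenth-of-bandwidth rule of thumb, the sketch below times an HTTP download and reports megabytes per second (the URL is a placeholder, not the SharePoint front door the tool actually measures against):

```python
import time
import urllib.request

# Placeholder; the real test downloads from the SharePoint front door.
URL = "https://example.com/testfile-15mb.bin"

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=60) as resp:
    size_bytes = len(resp.read())
elapsed = time.perf_counter() - start

print(f"Downloaded {size_bytes / 1e6:.1f} MB at "
      f"{size_bytes / 1e6 / elapsed:.2f} MBps")

circuit_mbps = 100  # e.g. a 100 Mbps circuit...
print(f"Rule of thumb: expect about {circuit_mbps / 10:.0f} MBps")
```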

Buffer bloat

During the 15 Mb download we measure the TCP latency to the SharePoint service front door. This is the latency under load, and it’s compared to the latency when not under load. The increase in latency when under load is often attributable to consumer network device buffers being loaded (or bloated). A network insight is shown for any bloat of 100 ms or more.
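
A simplified way to observe the same effect is to compare TCP connect latency when the link is idle with the latency while a large download is running (the host is a placeholder; the tool measures against the SharePoint front door during its own download):

```python
import socket
import statistics
import time

HOST = "example.com"  # placeholder measurement target

def tcp_connect_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        return (time.perf_counter() - start) * 1000

def median_latency(samples: int = 5) -> float:
    return statistics.median(tcp_connect_ms(HOST) for _ in range(samples))

idle = median_latency()
# ... start a large download in a background thread here ...
loaded = median_latency()

bloat = loaded - idle
print(f"Idle {idle:.0f} ms, loaded {loaded:.0f} ms, bloat {bloat:.0f} ms")
if bloat >= 100:
    print("Network insight: buffer bloat of 100 ms or more")
```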

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the SharePoint service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

Microsoft Teams

This section shows test results related to Microsoft Teams.

Media connectivity (audio, video, and application sharing)

This tests for UDP connectivity to the Microsoft Teams service front door. If this is blocked, then Microsoft Teams may still work using TCP, but audio and video will be impaired. Read more about these UDP network measurements, which also apply to Microsoft Teams, at Media Quality and Network Connectivity Performance in Skype for Business Online.

Packet loss

Shows the UDP packet loss measured in a 10-second test audio call from the client to the Microsoft Teams service front door. This should be lower than 1.00% for a pass.

Latency

Shows the measured UDP latency, which should be lower than 100ms.

Jitter

Shows the measured UDP jitter, which should be lower than 30ms.

Connectivity

We test for HTTP connectivity from the user office location to all of the required Microsoft 365 network endpoints. These are published at https://aka.ms/o365ip. A network insight is shown for any required network endpoints that cannot be connected to.

Connectivity may be blocked by a proxy server, a firewall, or another network security device on the enterprise network perimeter. Connectivity to TCP port 80 is tested with an HTTP request, and connectivity to TCP port 443 is tested with an HTTPS request. If there’s no response, the FQDN is marked as a failure. If there’s an HTTP response code 407, the FQDN is marked as a failure. If there’s an HTTP response code 403, then we check the Server attribute of the response, and if it appears to be a proxy server, we mark this as a failure. You can simulate the tests we perform with the Windows command-line tool curl.exe.
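
The pass/fail rules above can be approximated in a few lines of Python; this sketch probes a single hypothetical endpoint rather than the full published list, and the “proxy” check on the Server header is a simplification:

```python
import requests

def probe(url: str) -> str:
    # No response, a 407, or a 403 served by an apparent proxy server
    # all count as failures, per the rules described above.
    try:
        resp = requests.get(url, timeout=10, allow_redirects=False)
    except requests.RequestException:
        return "fail: no response"
    if resp.status_code == 407:
        return "fail: HTTP 407 (proxy authentication required)"
    if (resp.status_code == 403
            and "proxy" in resp.headers.get("Server", "").lower()):
        return "fail: HTTP 403 from an apparent proxy server"
    return "pass"

# One endpoint for illustration; the full list is at https://aka.ms/o365ip.
print(probe("https://outlook.office365.com"))
```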

We test the SSL certificate at each required Microsoft 365 network endpoint that is in the optimize or allow category as defined at https://aka.ms/o365ip. If any test does not find a Microsoft SSL certificate, then the encrypted network connection must have been intercepted by an intermediary network device. A network insight is shown for any intercepted encrypted network endpoints.

Where an SSL certificate is found that isn’t provided by Microsoft, we show the FQDN for the test and the in-use SSL certificate owner. This SSL certificate owner may be a proxy server vendor, or it may be an enterprise self-signed certificate.
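
A sketch of the interception check using Python’s standard ssl module; treating any certificate whose subject organization isn’t Microsoft as a possible interception is a simplification of the rule described above:

```python
import socket
import ssl

def subject_org(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the subject as a tuple of RDN tuples.
    fields = dict(field for rdn in cert["subject"] for field in rdn)
    return fields.get("organizationName", "")

org = subject_org("outlook.office365.com")
if "Microsoft" not in org:
    print(f"Possible TLS interception: certificate subject is {org!r}")
else:
    print(f"Certificate subject organization: {org!r}")
```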

Network path

This section shows the results of an ICMP traceroute to the Exchange Online service front door, the SharePoint Online service front door, and the Microsoft Teams service front door. It’s provided for information only and there’s no associated network insight. Three traceroutes are provided: one to outlook.office365.com, one to the customer’s SharePoint front end (or to microsoft.sharepoint.com if one was not provided), and one to world.tr.teams.microsoft.com.

Connectivity reports

When you are signed in you can review previous reports that you have run. You can also share them or delete them from the list.

Reports.

Network health status

This shows any significant health issues with Microsoft’s global network, which might impact Microsoft 365 customers.

Network health status.

Testing from the Command Line

We provide a command line executable that can be used by your remote deployment and execution tools and run the same tests as are available in the Microsoft 365 network connectivity test tool web site.

The command line test tool can be downloaded here: Command Line Tool

You can run it by double-clicking the executable in Windows File Explorer, or you can start it from a command prompt, or you can schedule it with Task Scheduler.

The first time you launch the executable, you will be prompted to accept the end user license agreement (EULA) before testing is performed; to accept the EULA, type ‘y’ and press Enter in the command line window when prompted. If you have already read and accepted the EULA, you can instead create an empty file called Microsoft-365-Network-Connectivity-Test-EULA-accepted.txt in the current working directory of the executable process when it is launched.

The executable accepts the following command line parameters (a sample invocation follows the list):

  • -h to show a link to this help documentation
  • -testlist <test> Specifies tests to run. By default only basic tests are run. Valid test names include: all, dnsConnectivityPerf, dnsResolverIdentification, bufferBloat, traceroute, proxy, vpn, skype, connectivity, networkInterface
  • -filepath <filedir> Directory path of test result files. Allowed value is absolute or relative path of an accessible directory
  • -city <city> For the city, state, and country fields the specified value will be used if provided. If not provided, then Windows Location Services (WLS) will be queried. If WLS fails, the location will be detected from the machine’s network egress
  • -state <state>
  • -country <country>
  • -proxy <account> <password> Proxy account name and password can be provided if you require a proxy to access the Internet
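
Putting the parameters together, a sample invocation might look like the following; the flag values (and the quoting of multi-word values) are illustrative assumptions, not documented syntax:

```
Microsoft.Connectivity.Test.exe -testlist all -city Seattle -state Washington -country "United States" -filepath C:\NetTests
```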

Results

Results are written to a JSON file in a folder called TestResults, which is created in the current working directory of the process unless it already exists. The filename format for the output is connectivity_test_result_YYYY-MM-DD-HH-MM-SS.json. The results are in JSON nodes that match the output shown on the web page for the Microsoft 365 network connectivity test tool web site. A new result file is created each time you run it, and the standalone executable does not upload results to your Microsoft tenant for viewing in the Admin Center Network Connectivity pages. Front door codes, longitudes, and latitudes are not included in the result file.
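
Since the JSON schema isn’t documented here, the sketch below simply loads the newest result file from the TestResults folder and lists its top-level nodes:

```python
import json
from pathlib import Path

results_dir = Path("TestResults")
# Filenames embed YYYY-MM-DD-HH-MM-SS, so lexicographic order is
# also chronological order.
latest = max(results_dir.glob("connectivity_test_result_*.json"))

with latest.open(encoding="utf-8") as fh:
    report = json.load(fh)

print(f"Report: {latest.name}")
for node in report:
    print(" -", node)
```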

Launching from Windows File Explorer

You can simply double-click the executable to start the testing; a command prompt window will appear.

Launching from the Command Prompt

On a CMD.EXE command prompt window you can type the path and name of the executable to run it. The filename is Microsoft.Connectivity.Test.exe

Launching from Windows Task Scheduler

In Windows Task Scheduler you can add a task to launch the standalone test executable. You should specify the current working directory of the task to be where you have created the EULA accepted file since the executable will block until the EULA is accepted. You cannot interactively accept the EULA if the process is started in the background with no console.

More details on the standalone executable

The command-line tool uses Windows Location Services to find the user’s city, state, and country information for determining some distances. If Windows Location Services is disabled in the Control Panel, user-location-based assessments will be blank. In Windows Settings, “Location services” must be on and “Let desktop apps access your location” must also be on.

The command-line tool will attempt to install the .NET Framework if it is not already installed. It will also download the main testing executable from the Microsoft 365 network connectivity test tool and launch that.

Test using the Microsoft Support and Recovery Assistant

Microsoft Support and Recovery Assistant (Assistant) automates all the steps required to execute the command-line version of the Microsoft 365 network connectivity test tool on a user’s machine and creates a report similar to the one created by the web version of the connectivity test tool. Note: the Assistant runs the command-line version of the Microsoft 365 network connectivity test tool to produce the same JSON result file, but the JSON file is converted into .CSV file format.

Download and Run the Assistant Here

Viewing Test Results

Reports can be accessed in the following ways:

The reports will be available on the screen below once the Assistant has finished scanning the user’s machine. To access these reports, click the “View log” option.

Microsoft Support and Recovery Assistant wizard.

Connectivity test results and Telemetry data are collected and uploaded to the uploadlogs folder. To access this folder, use one of the following methods:

  • Open Run (Windows logo key + R), enter %localappdata%\saralogs\uploadlogs, and press Enter as follows:
Run dialog for locating output.
  • In File Explorer, type C:\Users\<UserName>\AppData\Local\saralogs\uploadlogs and press Enter as follows:
Windows Explorer Address Bar for output.

Note: <UserName> is the user’s Windows profile name. To view the information about the test results and telemetry, double-click and open the files.

Windows Explorer SARA Output Files.

Types of result files

Microsoft Support and Recovery Assistant creates two files:

  1. Network Connectivity Report (CSV) This report runs the raw JSON file against a rule engine to make sure defined thresholds are being met; if they are not met, a “warning” or “error” is displayed in the output column of the CSV file. You can view the NetworkConnectivityReport.csv file to be informed about any detected issues or defects. Please see What happens at each test step for details on each test and the thresholds for warnings.
  2. Network Connectivity Scan Report (JSON) This file provides the raw output test results from the command-line version of the Microsoft 365 network connectivity test tool (MicrosoftConnectivityTest.exe).

FAQ

Here are answers to some of our frequently asked questions.

What is required to run the advanced test client?

The advanced test client requires .NET 6.0 Runtime. If you run the advanced test client without that installed you will be directed to the .NET 6.0 installer page. Be sure to install from the Run desktop apps column for Windows. Administrator permissions on the machine are required to install .NET 6.0 Runtime.

The advanced test client uses SignalR to communicate with the web page. For this you must ensure that TCP port 443 connectivity to connectivity.service.signalr.net is open. This URL isn’t published in the https://aka.ms/o365ip list because that connectivity isn’t required for a Microsoft 365 client application user.

What is Microsoft 365 service front door?

The Microsoft 365 service front door is an entry point on Microsoft’s global network where Office clients and services terminate their network connection. For an optimal network connection to Microsoft 365, it’s recommended that your network connection is terminated into the closest Microsoft 365 front door in your city or metro.

 Note

Microsoft 365 service front door has no direct relationship to the Azure Front Door Service product available in the Azure marketplace.

What is the best Microsoft 365 service front door?

A best Microsoft 365 service front door (formerly known as an optimal service front door) is one that is closest to your network egress, generally in your city or metro area. Use the Microsoft 365 network performance tool to determine location of your in-use Microsoft 365 service front door and the best service front door(s). If the tool determines your in-use front door is one of the best ones, then you should expect great connectivity into Microsoft’s global network.

What is an internet egress location?

The internet egress location is the location where your network traffic exits your enterprise network and connects to the Internet. This is also identified as the location where you have a Network Address Translation (NAT) device and usually where you connect with an Internet Service Provider (ISP). If you see a long distance between your location and your internet egress location, this may indicate significant WAN backhaul.

Network connectivity in the Microsoft 365 Admin Center

Microsoft 365 network performance insights

Microsoft 365 network assessment

Microsoft 365 Network Connectivity Location Services

Source :
https://learn.microsoft.com/en-us/Microsoft-365/Enterprise/office-365-network-mac-perf-onboarding-tool?view=o365-worldwide

LockBit 3.0 ‘Black’ attacks and leaks reveal wormable capabilities and tooling

Reverse-engineering reveals close similarities to BlackMatter ransomware, with some improvements

A postmortem analysis of multiple incidents in which attackers eventually launched the latest version of LockBit ransomware (known variously as LockBit 3.0 or ‘LockBit Black’) revealed the tooling used by at least one affiliate. Sophos’ Managed Detection and Response (MDR) team has observed both ransomware affiliates and legitimate penetration testers use the same collection of tooling over the past three months.

Leaked data about LockBit that showed the backend controls for the ransomware also seems to indicate that the creators have begun experimenting with scripting that would allow the malware to “self-spread” using Windows Group Policy Objects (GPO) or the tool PSExec. This would potentially make it easier for the malware to move laterally and infect computers without the need for affiliates to know how to take advantage of these features themselves, potentially speeding up the time it takes them to deploy the ransomware and encrypt targets.

A reverse-engineering analysis of the LockBit functionality shows that the ransomware has carried over most of its functionality from LockBit 2.0 and adopted new behaviors that make it more difficult for researchers to analyze. For instance, in some cases it now requires the affiliate to use a 32-character ‘password’ in the command line of the ransomware binary when launched, or else it won’t run, though not all the samples we looked at required the password.

We also observed that the ransomware runs with LocalServiceNetworkRestricted permissions, so it does not need full Administrator-level access to do its damage (supporting observations of the malware made by other researchers).

Most notably, we’ve observed (along with other researchers) that many LockBit 3.0 features and subroutines appear to have been lifted directly from BlackMatter ransomware.

Is LockBit 3.0 just ‘improved’ BlackMatter?

Other researchers previously noted that LockBit 3.0 appears to have adopted (or heavily borrowed) several concepts and techniques from the BlackMatter ransomware family.

We dug into this ourselves, and found a number of similarities which strongly suggest that LockBit 3.0 reuses code from BlackMatter.

Anti-debugging trick

BlackMatter and LockBit 3.0 use a specific trick to conceal their internal function calls from researchers. In both cases, the ransomware loads/resolves a Windows DLL from its hash tables, which are based on ROT13.

It will try to get pointers to the functions it needs by searching the PEB (Process Environment Block) of the module. It will then look for a specific binary data marker in the code (0xABABABAB) at the end of the heap; if it finds this marker, it means someone is debugging the code, and it doesn’t save the pointer, so the ransomware quits.

After these checks, it will create a special stub for each API it requires. There are five different types of stubs that can be created (randomly). Each stub is a small piece of shellcode that performs API hash resolution on the fly and jumps to the API address in memory. This makes reversing with a debugger more difficult.
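
To illustrate hash-based API resolution in general terms, here is a conceptual Python sketch built on the common rotate-right-13 accumulator; it is not LockBit’s exact constants, hash function, or stub shellcode:

```python
def ror13_hash(name: bytes) -> int:
    # Rotate-right-13 accumulator over each byte of an export name,
    # the classic construction behind many API-hashing schemes.
    h = 0
    for b in name:
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # ROR 13 on 32 bits
        h = (h + b) & 0xFFFFFFFF
    return h

# The binary carries only hashes, so API names never appear in it.
wanted = {ror13_hash(b"VirtualAlloc"), ror13_hash(b"CreateThread")}

# Export names as they would be found by walking the PEB/export table.
exports = [b"CloseHandle", b"VirtualAlloc", b"CreateThread"]
resolved = [e for e in exports if ror13_hash(e) in wanted]
print(resolved)  # [b'VirtualAlloc', b'CreateThread']
```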

Screenshot of disassembler code
LockBit’s 0xABABABAB marker

SophosLabs has put together a CyberChef recipe for decoding these stub shellcode snippets.

Output of a CyberChef recipe
The first stub, as an example (decoded with CyberChef)

Obfuscation of strings

Many strings in both LockBit 3.0 and BlackMatter are obfuscated, resolved during runtime by pushing the obfuscated strings onto the stack and decrypting with an XOR function. In both LockBit and BlackMatter, the code to achieve this is very similar.
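
The effect of such a routine can be shown with a toy Python equivalent; the one-byte XOR key below is illustrative, while the real samples push the encrypted bytes onto the stack and decrypt them in place:

```python
def xor_decode(blob: bytes, key: int) -> str:
    # Strings sit encrypted in the binary and are decoded at runtime.
    return bytes(b ^ key for b in blob).decode()

KEY = 0x5A  # illustrative key, not taken from a real sample
encrypted = bytes(c ^ KEY for c in b"cmd.exe")  # as stored in the binary
print(xor_decode(encrypted, KEY))  # -> cmd.exe
```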

Screenshot of disassembler code
BlackMatter’s string obfuscation (image credit: Chuong Dong)

Georgia Tech student Chuong Dong analyzed BlackMatter and showed this feature on his blog, with the screenshot above.

Screenshot of disassembler code
LockBit’s string obfuscation, in comparison

By comparison, LockBit 3.0 has adopted a string obfuscation method that looks and works in a very similar fashion to BlackMatter’s function.

API resolution

LockBit uses exactly the same implementation as BlackMatter to resolve API calls, with one exception: LockBit adds an extra step in an attempt to conceal the function from debuggers.

Screenshot of disassembler code
BlackMatter’s dynamic API resolution (image credit: Chuong Dong)

The array of calls performs precisely the same function in LockBit 3.0.

Screenshot of disassembler code
LockBit’s dynamic API resolution

Hiding threads

Both LockBit and BlackMatter hide threads using the NtSetInformationThread function, with the parameter ThreadHideFromDebugger. As you can probably guess, this means that the debugger doesn’t receive events related to this thread.
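
The technique is widely documented, and analysts sometimes reproduce it to verify that their tooling copes with it. A minimal, Windows-only Python sketch via ctypes (0x11 is the documented ThreadHideFromDebugger information class):

```python
import ctypes

THREAD_HIDE_FROM_DEBUGGER = 0x11  # ThreadHideFromDebugger

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32")

# Once this call succeeds, the current thread no longer delivers debug
# events, so breakpoints set in it stop firing.
status = ntdll.NtSetInformationThread(
    kernel32.GetCurrentThread(), THREAD_HIDE_FROM_DEBUGGER, None, 0)
print(f"NtSetInformationThread returned 0x{status & 0xFFFFFFFF:08X}")
```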

Screenshot of disassembler code
LockBit employs the same ThreadHideFromDebugger feature as an evasion technique

Printing

LockBit, like BlackMatter, sends ransom notes to available printers.

Screenshot of disassembler code
LockBit can send its ransom notes directly to printers, as BlackMatter can do

Deletion of shadow copies

Both ransomware families sabotage the infected computer’s ability to recover from file encryption by deleting the Volume Shadow Copy files.

LockBit calls the IWbemLocator::ConnectServer method to connect to the local ROOT\CIMV2 namespace and obtain a pointer to an IWbemServices object, which is eventually used to call IWbemServices::ExecQuery to execute the WQL query.
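
Python’s wmi package wraps the same COM plumbing, so the mechanism can be demonstrated read-only; the sketch below connects to ROOT\CIMV2 and runs a WQL query that merely lists shadow copies (the exact query string used by the ransomware is not reproduced here):

```python
import wmi  # pip install wmi; wraps IWbemLocator/IWbemServices over COM

# ConnectServer to ROOT\CIMV2 followed by ExecQuery, as described above,
# but used here only to enumerate shadow copies, not delete them.
conn = wmi.WMI(namespace=r"root\cimv2")
for copy in conn.query("SELECT * FROM Win32_ShadowCopy"):
    print(copy.ID, copy.InstallDate)
```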

Screenshot of disassembler code
BlackMatter code for deleting shadow copies (image credit: Chuong Dong)

LockBit’s method of doing this is identical to BlackMatter’s implementation, except that it adds a bit of string obfuscation to the subroutine.

Screenshot of disassembler code
LockBit’s deletion of shadow copies

Enumerating DNS hostnames

Both LockBit and BlackMatter enumerate hostnames on the network by calling NetShareEnum.
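
A benign reproduction of the same call through pywin32, which wraps the NetShareEnum Win32 API (the host name is a placeholder):

```python
import win32net  # pip install pywin32

HOST = "FILESERVER01"  # placeholder target host

# Information level 1 returns each share's name, type, and remark.
shares, total, resume = win32net.NetShareEnum(HOST, 1)
for share in shares:
    print(share["netname"], "-", share["remark"])
```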

Screenshot of disassembler code
BlackMatter calls NetShareEnum() to enumerate hostnames… (image credit: Chuong Dong)

In the source code for LockBit, the function looks like it has been copied, verbatim, from BlackMatter.

Screenshot of disassembler code
…as does LockBit

Determining the operating system version

Both ransomware strains use identical code to check the OS version – even using the same return codes (although this is a natural choice, since the return codes are hexadecimal representations of the version number).

Screenshot of disassembler code
BlackMatter’s code for checking the OS version (image credit: Chuong Dong)
Screenshot of disassembler code
LockBit’s OS enumeration routine

Configuration

Both ransomware families contain embedded configuration data inside their binary executables. We noted that LockBit decodes its config in a similar way to BlackMatter, albeit with some small differences.

For instance, BlackMatter saves its configuration in the .rsrc section, whereas LockBit stores it in .pdata.

Screenshot of disassembler code
BlackMatter’s config decryption routine (image credit: Chuong Dong)

And LockBit uses a different linear congruential generator (LCG) algorithm for decoding.

Screenshot of disassembler code
LockBit’s config decryption routine

Some researchers have speculated that the close relationship between the LockBit and BlackMatter code indicates that one or more of BlackMatter’s coders were recruited by LockBit, that LockBit bought the BlackMatter codebase, or that the two groups’ developers collaborated. As we noted in our white paper on multiple attackers earlier this year, it’s not uncommon for ransomware groups to interact, either inadvertently or deliberately.

Either way, these findings are further evidence that the ransomware ecosystem is complex and fluid. Groups reuse, borrow, or steal each other’s ideas, code, and tactics as it suits them. And, as the LockBit 3.0 leak site (containing, among other things, a bug bounty and a reward for “brilliant ideas”) suggests, that gang in particular is not averse to paying for innovation.

LockBit tooling mimics what legitimate pentesters would use

Another aspect of the way LockBit 3.0’s affiliates are deploying the ransomware shows that they’re becoming very difficult to distinguish from the work of a legitimate penetration tester – aside from the fact that legitimate penetration testers, of course, have been contracted by the targeted company beforehand, and are legally allowed to perform the pentest.

The tooling we observed the attackers using included a package from GitHub called Backstab. The primary function of Backstab is, as the name implies, to sabotage the tooling that analysts in security operations centers use to monitor for suspicious activity in real time. The utility uses Microsoft’s own Process Explorer driver (signed by Microsoft) to terminate protected anti-malware processes and disable EDR utilities. Both Sophos and other researchers have observed LockBit attackers using Cobalt Strike, which has become a nearly ubiquitous attack tool among ransomware threat actors, and directly manipulating Windows Defender to evade detection.

Further complicating the parentage of LockBit 3.0 is the fact that we also encountered attackers using a password-locked variant of the ransomware, called lbb_pass.exe, which has also been used by attackers that deploy REvil ransomware. This may suggest that there are threat actors affiliated with both groups, or that threat actors not affiliated with LockBit have taken advantage of the leaked LockBit 3.0 builder. At least one group, BlooDy, has reportedly used the builder, and if history is anything to go by, more may follow suit.

LockBit 3.0 attackers also used a number of publicly available tools and utilities that are now commonplace among ransomware threat actors, including the anti-hooking utility GMER, a tool called AV Remover published by antimalware company ESET, and a number of PowerShell scripts designed to remove Sophos products from computers where Tamper Protection has either never been enabled, or has been disabled by the attackers after they obtained the credentials to the organization’s management console.

We also saw evidence the attackers used a tool called Netscan to probe the target’s network, and of course, the ubiquitous password-sniffer Mimikatz.

Incident response makes no distinction

Because these utilities are in widespread use, MDR and Rapid Response treat them all equally – as though an attack is underway – and immediately alert the targets when they’re detected.

We found the attackers took advantage of less-than-ideal security measures in place on the targeted networks. As we mentioned in our Active Adversaries Report on multiple ransomware attackers, the lack of multifactor authentication (MFA) on critical internal logins (such as management consoles) permits an intruder to use tooling that can sniff or keystroke-capture administrators’ passwords and then gain access to that management console.

It’s safe to assume that experienced threat actors are at least as familiar with Sophos Central and other console tools as the legitimate users of those consoles, and they know exactly where to go to weaken or disable the endpoint protection software. In fact, in at least one incident involving a LockBit threat actor, we observed them downloading files which, from their names, appeared to be intended to remove Sophos protection: sophoscentralremoval-master.zip and sophos-removal-tool-master.zip. So protecting those admin logins is among the most critically important steps admins can take to defend their networks.

For a list of IOCs associated with LockBit 3.0, please see our GitHub.

Acknowledgments

Sophos X-Ops acknowledges the collaboration of Colin Cowie, Gabor Szappanos, Alex Vermaning, and Steeve Gaudreault in producing this report.

Source :
https://news.sophos.com/en-us/2022/11/30/lockbit-3-0-black-attacks-and-leaks-reveal-wormable-capabilities-and-tooling/

The Art of Cyber War: Sun Tzu and Cybersecurity

Weighing the lessons of Sun Tzu and how they apply to cybersecurity.

Sun Tzu sought to revolutionize the way war was fought. That’s saying quite a bit, since he was born in 544 BCE and lived during an era when most wars were little more than gruesome bludgeoning events between two or more groups armed with axes, clubs, and sharp sticks.

While not much information about Sun Tzu’s life has survived, we know he was employed by the then-ruler of the Kingdom of Wu, in the lower Yangtze region of what is now eastern China. He was a Chinese general and philosopher who envisioned the psychological aspects of war, which was a completely original approach to armed conflict in ancient China.

Many historians believe Sun Tzu’s book was intended to help his colleagues engage in the many regional conflicts they faced. Today, Sun Tzu’s The Art of War is a bestseller that has transcended some 2,500 years and hundreds of wars. The book has become a kind of Rosetta Stone of military theory, cited by theorists and translated well beyond the battlefield to gain prevalence in business schools worldwide and now cybersecurity.

The Art of Cyberwar: preparation.

Adapting Sun Tzu’s many well-known quotes to cybersecurity is pretty straightforward. We looked for three that could best describe important aspects of cybersecurity: preparation, planning and knowledge. For preparation, we settled on a re-quote of this well-known warning:

Cyber warfare is of vital importance to any company. It is a matter of life and death, a road to safety or ruin.

Despite his military background, Sun Tzu claimed that direct fighting was not the best way to win battles. But when fighting was necessary, it was wise to carefully prepare for every possibility. That’s the lesson commonly ignored by companies who, after a severe breach, found themselves fined, shamed and scorned because they neglected their network security and failed to protect themselves from attackers. To prepare, we not only need the most advanced technology possible, but we must also train the workforce and make cybersecurity everyone’s business.

The Art of Cyberwar: planning.

In the realm of planning, we considered how the “art” is also a source of wisdom for attackers:

Where we intend to fight must not be made known. Force the enemy to prepare against possible attacks from several different points and cause them to spread their defenses in many directions; the numbers we shall have to face at any given moment will be proportionately few.

This re-quote relates to other stratagems where Sun Tzu urges his generals to never underestimate their enemies and to plan for all possibilities. The same goes for cyber attackers: they will pick the easy battles to ensure they have the upper hand. Therefore, as we engage our defense, it is wise to plan our defenses as though we have already been targeted and breached.

The Art of Cyberwar: knowledge.

Sun Tzu guides us away from making rash emotional decisions by emphasizing the importance of knowledge. He suggested that leaders gain as much knowledge as possible when preparing for battle, but not to limit themselves to the enemy’s strengths and weaknesses.

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

This bit of advice is a direct quote and accurately describes how cybersecurity should operate. Businesses must maximize the power of threat intelligence by giving IT teams the means to analyze real-time analytics and transform every scrap of data into actionable insights. IT teams should also be empowered to consider everything that could happen and assess the best course of action before, during and after a breach.

Explore and learn about the Art of Cyber War.

War theorists have long-standing debates about categorizing the preparation and execution of military activity. General Carl von Clausewitz stands next to Sun Tzu as one of the best-known and most respected thinkers on the subject. Paraphrasing from Clausewitz’s book Vom Kriege (On War), published in 1832: the preparation for war is scientific, but the conduct of battle is artistic. As a science, we study logistics, technology and other elements depending on need. As an art, we rely on individual talent and grit to exploit opportunities that increase the likelihood of victory. Clausewitz also believed that war belonged to the province of social life, as do all conflicts of great human interest.

Cyberwar also fits these definitions. For instance, consider business activity as a combination of science, art and social life. As businesses compete in the marketplace, they carefully analyze the competition, create ways to appeal to audiences and press for social engagement and interaction. Shouldn’t we apply the same level of attention and resources for our cybersecurity? We think Sun Tzu would rub his beard and nod profoundly.

Cyberattacks for this year already eclipse the full-year totals from 2017, 2018 and 2019, according to the mid-year update to the 2022 SonicWall Cyber Threat Report. And new attack vectors are coming online every day. Without adequate preparation, planning and knowledge, companies and their customers are at a high risk of falling victim to devastating cyberattacks.

Explore and learn about the art and understand the science. Book your seat for MINDHUNTER 11, “The Art of Cyber War,” and learn from experts on how you can keep your company safe in the coming cyberbattles.

Source :
https://blog.sonicwall.com/en-us/2022/11/the-art-of-cyber-war-sun-tzu-and-cybersecurity/