Top 27 WordPress Security Vulnerabilities You Need To Know

Just the idea of WordPress Security Vulnerabilities can be daunting, and even a little scary, for some people.

We want to put an end to that.

In this article we’ll dispel some of the confusion and aim to reduce the anxiety that surrounds this topic. We’ll outline the big ticket items and provide clear, actionable advice on steps you can take to protect your WordPress sites.

Before we can talk about WordPress Security Vulnerabilities, let’s get clear on what exactly we mean.

What Is A WordPress Vulnerability?

When we think of vulnerabilities, the first thing to come to mind is usually publicly known software vulnerabilities. They often allow for some form of directed, specific attack against susceptible code.

This certainly is one type of vulnerability, as you’ll see below. But for the purposes of this article, we’re considering anything that makes your website susceptible to attack – anything that puts you at a disadvantage.

You can’t hope to fight hackers if you’re not aware of the weaknesses that your enemy will exploit.

This article will arm you with practical know-how to strengthen your weaker areas and give you the power and confidence to fight back.

Without further ado, let’s get into it.

#1 Outdated WordPress Core, Plugins, Themes

The single leading cause of WordPress site hacking is outdated WordPress software. This includes Plugins, Themes, and the WordPress Core itself.

The simple act of keeping all your plugins and themes up-to-date will keep you protected against the vast majority of vulnerabilities, either publicly known or “unknown”.

A known vulnerability is one that has been discovered, typically by a dedicated researcher, and published publicly – see #6 below.  But code vulnerabilities that aren’t publicly known are also important to be aware of.

Any good software developer is constantly improving their skills and their code. Over time, you can expect their code to improve, so keeping software updated ensures that you’re running only the best code on your sites.

How to protect against outdated WordPress software

We recommend making “WordPress Updates” a regular part of your weekly maintenance schedule. Block out some time every single week to get this critical work done.

The WordPress team regularly releases bug-fixing patches for the Core, and since WordPress 3.7+, these are installed automatically. It’s possible to disable that feature, but we strongly recommend that you never do.

# 2 Insecure WordPress Web Hosting

Your WordPress site is only as secure as the infrastructure that hosts it.

This is perhaps the most overlooked area of website security, and is, in our opinion, so critical that we place it in the top-3 in this list. #3 is closely related to this item, so make sure you check that out too.

If your web host doesn’t make server security a priority, then your server will get gradually more vulnerable over time. We see this all the time when customers write to our support team asking for help, only to discover that their web server is running on really, really old libraries.

This happens when the web host isn’t proactive in maintaining the server software that powers the websites of their customers.

Proactive maintenance by the web host has a cost, however. And if you’re paying bargain basement prices, then you can expect a corresponding level of service. That’s not to say that cheap web hosting is inherently insecure, and expensive web hosting isn’t. Not at all, but there is definitely a correlation between quality of web hosting and the price you pay.

How to protect against insecure WordPress web hosting

Cost of web hosting is only 1 indicator of quality. If your WordPress website is important to you or your business, then asking questions from the host about server security and ongoing maintenance should be part of your due diligence process.

You can take on recommendations from colleagues and friends, but never substitute their opinions with your own due diligence. Be prepared to invest in your hosting, as this will not only impact your security, but also your hosting reliability, uptime, and performance.

If you haven’t already done so, talk to your host today and ask them how they maintain the server that host your sites. Not in general terms, but ask them what their actual maintenance and update schedule is.

If you’re not happy with your answers and support, find a new host that will give you answers that you like. Never be afraid to switch service providers.

# 3 WordPress Web Hosting Site Contamination

This is related to the discussion above on web hosting quality. Generally speaking, the cheaper the web hosting, the more corners that will be cut in the service quality. This includes shared web server hosting configuration.

Here’s an over-simplified range of approaches in hosting websites:

  1. Host all sites within the same vhost*
  2. Host each site in separate vhosts, on the same, shared server
  3. Host each site on a separate VPS (Virtual Private Servers) on the same server
  4. Host each site on a separate dedicated server

*vhost is short for “virtual host”, that acts as a semi-independent container for hosting a website

As you move down the list, the cost increases, but the risk of contamination between websites increases. The 1st on the list is by-far the most dangerous, but is unfortunately the most common. This is where you might see something like:

  • /public_html/mywebsite1_com/
  • /public_html/mywebsite2_com/
  • /public_html/mywebsite3_com/
  • /public_html/mywebsite4_com/
  • /public_html/mywebsite5_com/

… when you look at the file system of the hosting account.

This is terrible for cross-site contamination as there is absolutely no isolation between individual websites. If any 1 of these sites becomes infected with malware, then you must assume the entire collection of sites is infected.

That’s a lot of cleanup work.

How to protect against web hosting site contamination

You’ll want to ensure that all your websites are, at the very least, hosted within their own vhost.

You could separate sites even further with separate VPSs, but you’ll pay more for it.

You’ll need to choose the type of hosting that best suits your expertise and budget.

If you can avoid it, please steer clear of hosting multiple sites within the same vhost, and if you are doing this already, look to gradually migrate these sites to their own independent vhosts as soon as possible.

For an idea, have a look at how we go about hosting many of our smaller WordPress sites.

# 4 Non-HTTPS Protection

Internet traffic sent via plain HTTP doesn’t encrypt the data transmitted between the website and the user’s browser, making it vulnerable to interception and tampering.

To avoid this, a technology known as Secure HTTP (HTTPS) is used.

Secure HTTPS is provided through the use of SSL/TLS certificates. It’s impossible to verify the identity of a website without certificates, and sensitive information such as login credentials, payment details, and personal information can be intercepted easily.

How to solve Non-HTTPS traffic

All WordPress websites should be using HTTPS by default. SSL Certificates are available for free with the LetsEncrypt service, and many web hosts provide this as-standard.

If your webhost doesn’t supply free LetsEncrypt Certificates, look to move hosts ASAP.

# 5 Insecure File Management (FTP) Vulnerability

Secure File Management is similar to the previous item on secure web/internet communications. If you’re transferring files to and from your web server using a tool like FTP, then this will typically require logging in with a username and password. If you’re not using a secure version of FTP, then you’re transmitting your username and password in plain text which can potentially be intercepted and used to compromise your server.

How to solve Insecure File Management

Practically all web hosts offer secure FTP (either FTPS or SFTP) as standard, but you should check with your hosting provider on whether that’s what you’re using and if not, how to switch.

Always use secure methods of file management. It’s just as easy to use than the insecure methods.

# 6 Known Plugin and Theme Vulnerabilities

Known vulnerabilities is what typically comes to mind when we discuss the topic of WordPress Vulnerabilities. When you’re told to upgrade because there’s a vulnerability, it basically means: a vulnerability has been discovered in the code of a plugin/theme that allows a hacker to perform a malicious attack.

By upgrading the plugin/theme, the vulnerable code has been fixed to prevent said attack.

Each vulnerability is different. Some are severe, some are trivial. Some are hard to exploit, others are easy to exploit.

The worst type of vulnerability are those that are severe, but easy to exploit.

Unfortunately, there is very little nuance in the way vulnerabilities are discussed publicly and so they’re all communicated as being catastrophic. This is not the case.

Of course, some vulnerabilities really are brutal, but many are not.

The point I’m trying to make is that you don’t need to stress about them. All you need to do is stay on top of vulnerability alerts and if your site is using a plugin or theme with a known vulnerability, then you need to update it as soon as possible.

The pseudo-standard practice for vulnerabilities reporting is this:

  1. Existence of vulnerability is reported to the developer
  2. Developer fixes the vulnerability and releases an update
  3. Users update their plugins/themes
  4. Some time passes, say 30 days
  5. Vulnerability details are released to the general public after enough time has passed to allow most people to upgrade the affected plugin/theme.

This brings us back to the first item on our list of vulnerabilities. If you’re performing regular maintenance on your WordPress sites, the likelihood that you’ll be susceptible to a vulnerability is slim-to-none, as you’ll have updated the affected plugin/theme, and you’re already protected.

The problem arises when you don’t regularly update your assets and you’re left with a known vulnerability on your site.

How to solve Known Vulnerabilities

Keeping on top of your WordPress updates is the best way to stay ahead of this type of vulnerability.

Alongside this, you could also use a WordPress security plugin, such as ShieldPRO, that will alert you when there’s a known vulnerability present on your website, and even automatically upgrade plugins when this is the case.

# 7 Untracked File Modifications

At the time of writing this article, WordPress 6.2 ships with over 3,800 PHP and Javascript source files. And that’s just the WordPress core. You’ll have many, many more files in your plugins and themes directories.

An Indicator of Compromise (IoC) that a WordPress site has been hacked is when a file is modified on your WordPress installation, or even added to the site, that shouldn’t be. If this ever happens, you want to know about it as quickly as possible.

The only way to do this reliably is to regularly scan all your files – at least once per day. This involves taking each file in-turn and checking whether its contents have changed from the original file, or whether it’s a file that doesn’t belong.

How to protect against untracked file modifications

Nearly all WordPress security plugins offer this scanning feature, at least for WordPress core files.

But you’ll want to also scan your plugins and themes, too. Not all WordPress security plugins offer this, so you’ll need to check whether this is supported. ShieldPRO supports scanning for all plugins and themes found on WordPress.org.

An additional complication exists for premium plugins and themes, however. Since premium plugins are only available for download from the developers’ sites, the source files for these plugins are not available for us to check against. The developers at ShieldPRO, however, have built a crowdsource-powered scanning system for premium plugins and themes so you can check these also. At the time of writing, this feature isn’t available anywhere else.

# 8 wp-config.php File Changes

If you download the source files for WordPress, you’ll discover there is no wp-config.php file. This file is often created by customising the wp-config-sample.php file with the necessary information. Since there is no universal content for the wp-config.php file, there is no way to scan this file for changes (as outlined in the previous item on this list).

In our experience, the wp-config.php file and the root index.php files are the files that are most often targeted when malware is inserted into a WordPress site, but it’s impossible to scan them using existing techniques.

You will need to constantly keep an eye on these files and be alert for changes.

How to protect against changes to wp-config.php files

One approach is to adjust the file system permissions on the file itself. This can be quite complicated so you may need technical assistance to achieve this. If you can restrict the permissions of the file so that it may only be edited by specific users, but readable by the web server, then you’ll have gone a long way in protecting it.

However, this poses another problem. Many WordPress plugins will try to make adjustments to these files automatically, so restricting access may cause you other problems.

The developers at ShieldPRO have custom-built the FileLocker system to address this issue.

It takes a snapshot of the contents of the files and alerts you as soon as they change. You’ll then have the ability to review the precise changes and then ‘accept’ or ‘reject’ them.

# 9 Malicious or Inexperienced WordPress Admins

With great power comes great responsibility.

A WordPress admin can do anything to a website. They can install plugins, remove plugins, adjust settings, add other users, add other admins. Anything at all.

But this is far from ideal when the administrator is inexperienced and likely to break things. It’s even worse if an adminaccount is compromised and someone gains unauthorized access to a site.

For this reason we always recommend adopting the Principle of Least Privilege (PoLP). This is where every user has their access privileges restricted as far as possible, but still allow them to complete their tasks.

This is why WordPress comes with built-in user roles such as Author and Editor, so that you can assign different permissions to users without giving them access to everything.

How to protect malicious or inexperienced administrators

As we’ve discussed, you should adopt PoLP and restrict privileges as far as possible.

Another approach that we’ve taken with Shield Security is to restrict a number of administrator privileges from the administrators themselves. We call this feature “Security Admin” and it allows us to lockout admin features from the everyday admin, such as:

  • Plugins management (install, activate, deactivate)
  • WordPress options control (site name, site URL, default user role, site admin email)
  • User admin control (creating, promoting, removing other admin users)
  • and more…

With the Security Admin feature we’re confident that should anyone gain admin access to the site, or already have it, they are prevented from performing many tasks that could compromise the site.

# 10 Existing Malware Infections

Think of malware as an umbrella term for any code that is malicious. Their purpose is wide-ranging, including:

  • stealing user data,
  • injecting spam content e.g. SEO Spam,
  • redirecting traffic to nefarious websites,
  • backdoors that allow unfettered access to the site and its data
  • or even taking over control of the website entirely

How to protect against WordPress Malware infections

Use a powerful malware scanner regularly on your WordPress sites to detect any unintended file changes and possible malware code. Examine all file changes and suspicious code as early as possible, and remove it.

# 11 WordPress Brute Force Login Attacks

WordPress brute force attacks attempt to gain unauthorized access to a website by trying to login using different username and password combinations.

These attacks are normally automated and there’s no way to stop them manually.

How to protect against brute force login attacks

You’ll need to use a WordPress security plugin, such as ShieldPRO, to detect these repeated login requests, and block the IP addresses of attackers automatically.

A powerful option is to use a service like CloudFlare to add rate limiting protection to your WordPress login page.

We also recommend using strong, unique (not shared with other services) passwords. You can use ShieldPRO to enforce minimum password strength requirements.

# 12 WordPress SQL Injection

WordPress SQL injection refers to attempts by an attacker to use carefully crafted database (MySQL) statements to read or update data residing in the WordPress database.

This is normally achieved through sending malicious data through forms on your site, such as search bars, user login forms, and contact forms.

If the SQL injection is successful, the attacker could potentially gain unauthorized access to the website’s database. They are free to steal sensitive information such as user data or login credentials, or if the injection is severe enough, make changes to the database to open up further site access.

How to protect against SQL injection attacks

The best protection against SQL injection attacks is defensive, well-written software. If the developer is doing all the right things, such as using prepared statements, validating and sanitizing user input, then malicious SQL statements are prevented from being executed.

This brings us back to item #1 on our list: keep all WordPress assets updated.

As further protection, you’ll want to use a firewall that detects SQL injection attacks and blocks the requests. ShieldPRO offers this as-standard.

# 13 Search Engine Optimization (SEO) Spam Attack

WordPress Search Engine Optimization (SEO) spam refers to manipulation of search engine results and rankings of a WordPress website.

SEO spam can take many forms, such as keyword stuffing, hidden text and links, cloaking, and content scraping.

This type of attack normally involves modification of files residing on the WordPress site, that will then cause the SPAM content to be output when the site is crawled by Google.

How to protect against SEO SPAM attacks

Ensuring your site is registered with Google Webmaster Tools and staying on top of any alerts is a first step in monitoring changes in your website’s search engine visibility.

As mentioned earlier, these sorts of attacks normally rely on modifying files on your WordPress site, so regular file scanning and review of scan results will help you detect file changes early and revert anything that appears malicious.

# 14 Cross-Site Scripting (XSS) Attack

WordPress Cross-Site Scripting (XSS) is where an attacker injects malicious scripts into a WordPress website’s web pages that is then automatically executed by other users. The attacker can use various methods to inject the malicious scripts, such as through user input fields, comments, or URLs.

This attack vector can potentially steal user data or have the user unintentionally perform other malicious actions on the website.

How to protect against XSS attacks

The prime responsibility for prevention of this attack lies with the software developer. They must properly sanitize and validate all user input to ensure they are as expected.

Security plugins may also be able to intercept the XSS payloads, but this is less common. The only thing that you, as the WordPress admin, can do is ensure that all WordPress assets (plugins & themes) are kept up-to-date, and that you’re using vetted plugins from reputable developers. (See the section on Nulled Plugins below)

# 15 Denial of Service Attacks (DoS)

WordPress Denial of Service (DoS) attacks attempt to overwhelm a WordPress server with a huge volume of traffic.

By exhausting the server resources, it renders the website inaccessible to legitimate users. This would be disastrous for, say, an e-commerce store.

How to protect against DoS attacks

DoS attacks can be simple to implement for an attacker, but they’re also relatively straightforward to prevent. Using traffic limiting, you can reduce the ability of an attacker to access your site and consume your resources.

Choosing web hosting that is sized correctly with enough resources to absorb some attacks, and even using a provider that implements DoS as part of their service offering will also help mitigate these attacks.

It should be noted that if the DoS attack is large enough, no WordPress plugin will be able to mitigate it. You’ll need the resources of a WAF service, such as CloudFlare, to ensure your web server is protected.

# 16 Distributed Denial of Service Attacks (DDoS)

A Distributed Denial of Service attack is the same as a Denial of Service (#15) attack, except that there are multiple origins for the requests that flood your server.

These attacks are more sophisticated, and more costly, for the attacker, so they’re definitely rarer. But they have exactly the same effect on your site as a normal DoS attack.

How to protect against DDoS attacks

Most web hosts are just not sophisticated enough to withstand a sustained DDoS attack and you’ll need the services of a dedicated WAF, such as CloudFlare.

# 17 Weak Passwords

WordPress Weak Passwords vulnerability is the use of weak passwords by WordPress users. Weak passwords can be easily cracked by attackers, allowing them to gain unauthorized access to the WordPress website and take any type of malicious action.

Related to Brute Force attacks, automated tools can be used to systematically guess or crack weak passwords with ease.

How to protect against Weak Password

To prevent WordPress Weak Passwords Vulnerability, website owners should ensure that all users, including administrators, use strong passwords. Strong passwords are complex and difficult to guess, typically consisting of a combination of uppercase and lowercase letters, numbers, and special characters. Passwords should also be unique and not used across multiple, separate services.

WordPress doesn’t currently enforce strong passwords, so a WordPress security plugin, such as ShieldPRO, will be needed to enforce this.

# 18 Pwned Passwords

Pwned passwords vulnerability is where a WordPress user re-uses a password they’ve used elsewhere, but that has been involved in a data breach.

If a password is publicly known to have been used for a given user, and the same user re-uses it on another service, then it opens up a strong possibility that the user account could be compromised.

How to protect against Pwned Passwords

The Pwned Passwords service provides a public API that can be used to check passwords. You’ll want to enforce some sort of password policy that restricts the user of Pwned Passwords on your WordPress sites. Shield Security offers this feature as-standard.

# 19 Account Takeover Vulnerability

The previous items on this list discussed the importance of good password hygiene. But until we can go through all our accounts and ensure there is no password reuse, no pwned passwords, and all our passwords are strong, we can prevent any sort of account theft by ensuring that the person logging-in to a user account is, in-fact that person.

This is where 2-factor authentication (2FA) comes into play.

It is designed to help verify and ensure that the person logging in, is who they say they are and is a critical part of all good WordPress website security.

2-Factor authentication involves verifying another piece of information (a factor) that only that user has access to, alongside their normal password. This could be in the form of an SMS text or an email containing a 1-time passcode. It could also use something like Google Authenticator to generate codes every 30 seconds.

How to protect against account takeover vulnerability

WordPress doesn’t offer 2FA option to the user login process, by default. You will need a WordPress security plugin that offers this functionality. Shield Security has offered 2FA by email, Google Authenticator and Yubikey for many years now.

# 20 Nulled Plugins and Themes Vulnerabilities

“Nulled” plugins and themes are pirated versions of premium WordPress plugins and themes that are distributed without the permission of the original authors.

They pose significant security risks as they often contain malicious code or backdoors that can be used by hackers to gain unauthorized access to the website.

They may also include hidden links or spammy advertisements that can harm the website’s reputation or adversely affect its search engine rankings. (see SEO SPAM)

How to prevent vulnerabilities through nulled plugins and themes

The simple solution to this is to purchase premium plugins and themes from the original software vendor. A lot of work goes into the development of premium plugins and themes and supporting the developer’s work goes a long way to ensuring the project remains viable for the lifetime of your own projects.

# 21 Inactive WordPress Users Vulnerability

Inactive WordPress users vulnerability refers to the security risk posed by user accounts that haven’t been active for an extended period of time. Inactive accounts can become a target for hackers, as they may be easier to compromise than active accounts. Older accounts are more likely to have Pwned Passwords, for example.

If a hacker can gain access to an inactive user account, particularly an admin account, it’s an open door to your website data. Users automatically bypass certain checks and they can easily post spam or exploit vulnerable code on the website.

How to prevent vulnerabilities from inactive WordPress users

To prevent any vulnerability posed by inactive WordPress users, it’s important to follow these best practices:

  • Regularly monitor your website’s user accounts and delete any that are inactive.
  • Implement strong password policies for all user accounts (see above).
  • Use a security plugin that can detect and alert you to any suspicious user activity.
  • Use a security plugin that can automatically disable access to inactive user accounts.

# 22 Default Admin User Account Vulnerability

The WordPress default admin user account vulnerability refers to the security risk posed by the default “admin” username that is automatically created in the installation of any WordPress site.

This default account name is widely-known and a common target by hackers, since knowing a valid admin username is half the information needed to gain admin access.

If the admin isn’t using strong passwords or 2-factor authentication, then the site is particularly vulnerable.

How to protect against the admin user account vulnerability

It should be understood that changing the primary admin username of a WordPress site is “security through obscurity”, and that using strong passwords is required regardless of the username.

To eliminate this risk, you’ll want to rename the admin username on a site. The simplest method to do this is to create a new administrator account and then delete the old account. Please ensure that you transfer all posts/pages to the new admin account during this process. Always test this on a staging site to ensure there are no unforeseen problems.

# 23 WordPress Admin PHP File Editing Vulnerability

WordPress comes with the ability to edit plugin and theme files directly on a site, from within the WordPress admin area. The editors are usually linked to within the Plugins and Appearance admin menus, but have recently been moved to the Tools menu, in some cases.

Having this access is far from ideal as it allows any administrator to quietly modify files. This also applies to anyone that gains unauthorised access to an admin account.

There is usually no good reason to have access to these editors

How To Restrict Access To The WordPress PHP File Editors

The easiest way to prevent this is to disallow file editing within WordPress.

The Shield Security plugin has an option to turn off file editing.This can be found under the WP Lockdown module and is easy to turn on and off.

# 24 WordPress Default Prefix for Database Tables 

WordPress’ default prefix for database tables represent a potential security risk. Similar to the previous item in the list, this is about reducing your surface area of attack by obscuring certain elements of your site.

If there are attempted attacks through SQL injection, then sometimes knowing the database table prefix can be helpful. If the attacker doesn’t know it, it may slow the attack or prevent it entirely.

The point is, obscuring the names of your database tables from would-be hackers won’t do any harm whatsoever, but may give you an edge over unsophisticated hacking attempts.

Hackers can exploit this vulnerability by using SQL injection attacks to gain unauthorized access to the website’s database. This can result in the theft of sensitive information or the compromise of the website’s security.

How To Change The WordPress Database Table Prefix

This is much more easily done at the time of WordPress installation – always choose a none-default (wp_) prefix.

Changing an existing prefix will require some MySQL database knowledge and we would recommend you employ the skills of a competent professional. And, as always, ensure you have a full and complete backup of your site.

It’s also important to note that, as mentioned above, you should be using a firewall or security plugin that can detect and block SQL injection attacks.

# 25 Directory Browsing Vulnerability

Web server directory browsing is where you can browse the contents of a web server from your web browser. This is far from ideal, as it supplies hackers information they may find useful to launch an attack.

From a hacker point of view, which would be better? Knowing which WordPress plugins are installed, or not knowing any of the WordPress plugins installed?

Clearly, more information is always better. And so we return to the principle of obscurity and reducing your surface area of attack by limiting access to information that hackers can use.

How To Prevent Directory Browsing Vulnerability

The easiest way to prevent this type of vulnerability is to directory browsing altogether.

This is done by adding a simple line to the site’s .htaccess file. Bear in mind that that this is only applicable to websites running the Apache web server (not nginx)

# 26 WordPress Security Keys/Salts

WordPress security keys are the means of encrypting and securing user cookies that control user login sessions. So they’re critical to good user security.

How To Improve User Security With Strong Keys/Salts

This is an easy one to implement for most admins. Here’s a quick how-to guide on updating your WordPress security keys.

# 27 Public Access To WordPress Debug Logs

Another security through obscurity item, the WordPress debug logs are normally stored in a very public location: /wp-content/debug.log

This is far from ideal as normally, without any specific configuration changes, this file is publicly accessible, and may expose some private site configuration issues through errors and logs data.

How To Eliminate Access to WordPress Debug Logs

If the file mentioned above is on your site, then you’ll want to move or delete it. You’ll then want to switch off debug mode on your site as you only need this active if you’re investigating a specific site issue.

Debug mode is typically toggled in your wp-config.php file so have a look in there for the lines:

define( 'WP_DEBUG', true );
define( 'WP_DEBUG_LOG', true );

You can either:

  • Remove the lines entirely,
  • Comment out the lines, or
  • Switch true to false for both of these.

Bonus Security Tip: WordPress Website Backup

WordPress backup is often cited as a WordPress Security function. Strictly speaking, it’s not. It forms part of your disaster recovery plan. You might not have a formal DR plan, but having a website backup may be your implicit plan.

If anything ever goes wrong with your WordPress site, whether this is security related or not, having a backup is critical to being able to recover your site.

If you haven’t put a regular backup plan in place for your site, this is probably the first thing you need to do. Some of the items in this list need to be done with the option of restoring a backup in case of disaster.

Final Thoughts On Your WordPress Security

With WordPress being so widely used, it’s the obvious target for hackers to focus their efforts. This means the aspects you have to consider can be almost overwhelming, in your quest to secure your WordPress sites.

The process is never-ending and you might even address all of the items on this list, and still get hacked. But you have to keep at it.

Each step you take to lockdown your site, puts a bit more distance between you and the hackers. You might not always stay ahead of them, and you won’t always have time to address issues immediately, but we can assure that the more steps you take, the more secure your site will be.

Source :
https://getshieldsecurity.com/blog/wordpress-security-vulnerabilities/

How to block Shodan scanners

Shodan is a search engine which does not index web sites or web contents, but vulnerable devices on the internet. To set up this index and to keep it up to date, Shodan uses at least 16 scanners with different AS numbers and different physical locations.

In case you want to block those scanners, this guide might help.

Set up host definitions

First, set up host definitions in the firewall menu and put in the following hosts (it might be useful to put in the rDNS name as a hostname):

Known Shodan scanners (last updated 2022-02-16)

rDNS nameIP addressLocation
shodan.io ((it is unclear if this is a scanner IP))208.180.20.97US
census1.shodan.io198.20.69.74US
census2.shodan.io198.20.69.98US
census3.shodan.io198.20.70.114US
census4.shodan.io198.20.99.130NL
census5.shodan.io93.120.27.62RO
census6.shodan.io66.240.236.119US
census7.shodan.io71.6.135.131US
census8.shodan.io66.240.192.138US
census9.shodan.io71.6.167.142US
census10.shodan.io82.221.105.6IS
census11.shodan.io82.221.105.7IS
census12.shodan.io71.6.165.200US
atlantic.census.shodan.io188.138.9.50DE
pacific.census.shodan.io85.25.103.50DE
rim.census.shodan.io85.25.43.94DE
pirate.census.shodan.io71.6.146.185US
ninja.census.shodan.io71.6.158.166US
border.census.shodan.io198.20.87.98US
burger.census.shodan.io66.240.219.146US
atlantic.dns.shodan.io209.126.110.38US
blog.shodan.io ((it is unclear if this is a scanner IP))104.236.198.48US
hello.data.shodan.io104.131.0.69US
www.shodan.io ((it is unclear if this is a scanner IP))162.159.244.38US

The additional following entries have been added on September, 2019:

rDNS nameIP addressLocation
battery.census.shodan.io93.174.95.106SC
cloud.census.shodan.io94.102.49.193SC
dojo.census.shodan.io80.82.77.139SC
flower.census.shodan.io (PTR only)94.102.49.190SC
goldfish.census.shodan.io185.163.109.66RO
house.census.shodan.io89.248.172.16SC
inspire.census.shodan.io (PTR only)71.6.146.186US
mason.census.shodan.io89.248.167.131SC
ny.private.shodan.io159.203.176.62US
turtle.census.shodan.io (PTR only)185.181.102.18RO
sky.census.shodan.io80.82.77.33SC
shodan.io (PTR only)216.117.2.180US

The additional following entries have been added on February, 2022:

rDNS nameIP addressLocation
einstein.census.shodan.io71.6.199.23US
hat.census.shodan.io185.142.236.34NL
red.census.shodan.io185.165.190.34US
soda.census.shodan.io71.6.135.131US
wine.census.shodan.io185.142.236.35NL

The additional following entries have been added on 21st September, 2022:

rDNS nameIP addressLocation
wall.census.shodan.io66.240.219.133US
floss.census.shodan.io143.198.225.197US
dog.census.shodan.io137.184.95.216US
draft.census.shodan.io64.227.90.185US
can.census.shodan.io143.198.238.87US
pack.census.shodan.io137.184.190.205US
jug.census.shodan.io137.184.112.192US
elk.census.shodan.io137.184.190.188US
tab.census.shodan.io167.172.219.157US
buffet.census.shodan.io143.110.239.2US
deer.census.shodan.io143.198.68.20US

The additional following entries have been added on 30th September, 2022:

rDNS nameIP addressLocation
sparkle.census.shodan.io137.184.190.194US
fish.census.shodan.io137.184.190.246US
heimdal.scan6x.shodan.io (PTR only)137.184.9.17US
gravy.scanf.shodan.io (PTR only)137.184.13.100US
scanme.scanf.shodan.io (PTR only)137.184.94.133US
frame.census.shodan.io (PTR only)137.184.112.103US
collector.chrono.shodan.io (PTR only)137.184.180.190US
ships.data.shodan.io143.198.50.234US

The additional following entries have been added on 30th September, 2022. These were obtained by using the above IP addresses and then scanning any /16 subnet with more than one IP address in it. They have not necessarily been seen scanning. Note the the same rDNS record can be returned by multiple IPs:

rDNS nameIP addressLocation
green.census.shodan.io185.142.236.36NL
blue.census.shodan.io185.142.236.40NL
guitar.census.shodan.io185.142.236.41NL
blue2.census.shodan.io185.142.236.43NL
red2.census.shodan.io185.142.239.16NL
census2.shodan.io198.20.69.96/29US
census3.shodan.io198.20.70.112/29US
border.census.shodan.io198.20.87.96/29US
census4.shodan.io198.20.99.128/29NL
malware-hunter.census.shodan.io66.240.205.34US
refrigerator.census.shodan.io71.6.146.130US
board.census.shodan.io71.6.147.198US
tesla.census.shodan.io71.6.147.254US
thor.data.shodan.io71.6.150.153US
grimace.data.shodan.io71.6.167.125US
house.census.shodan.io89.248.172.7NL

Sources: own research, log reviews.

Contributor Note!
if you DROP ranges that were in the notorious “AS29073 Quasi Networks LTD” already, you’re already banning the “SC” (Seychelles) sources detailed above; those ranges have been inherited by AS202425. “AS9009 M247 Ltd” contributes to most of the “RO” (Romania) sources; furtherly M247 (AS9009) seem to be the exit point of most NordVPN/pureVPN and many low cost script-kiddies VPN. Firewalling them is usefull for `quietness. Interactions between shodan and m247 seems to be very close.

You might add a comment to each host, such as “scanner” or “shodan” to make clear why you added those.

It is possible to block other common scanners here, too. However, please keep in mind that this isn’t a technique which is very scalable. Please consider running an IPS, if possible.

Project 25499 scanners (last updated 2016-02-28)

rDNS nameIP addressLocation
scanner01.project25499.com98.143.148.107US
scanner02.project25499.com155.94.254.133US
scanner03.project25499.com155.94.254.143US
scanner04.project25499.com155.94.222.12US
scanner05.project25499.com98.143.148.135US

Source: http://project25499.com/

Set up firewall group

Second, set up a firewall group and add all those host entries to it. Add a title and a comment to this firewall group. In this guide, we assume you have named the group “shodanscanners”.

Set up firewall rule

Third, create a new firewall rule. Set the “shodanscanners” group as source. For destination, use “standard networks” and set this to “any”. Set “rule action” to “drop”.

The setting “reject” is not recommended here, since the firewall will send an ICMP status message to the host(s) which triggered the firewall rule. By this, however, the host knows that there is something which at least sends ICMP errors back. To avoid this, “drop” is suitable because the network packets will be dropped silently and there is no way of telling (without additional scans) wether the target IP address is just down or drops network packages.

Enter a comment, if you want to and hit “add” to set the new firewall rule.

Please make sure that this rule is placed before rules which accept something (i.e. port forwarding rules) so that shodan scan traffic will be blocked instantly.

Reload the firewall engine to apply the new rule.

Limitations of this rule

The OpenVPN service will not be protected – OVPNINPUT firewall chain is above the chain where this rule will land.

Limitations of this guide

Nobody (and nothing) is perfect. This guide isn’t either. 😉

For example, if the IP addresses of the Shodan scanners change, your firewall rule will be probably useless and does not provide any protection against the scanners any more. Consider setting up an IPS for additional protection since some rules there will also block other scanners which are not mentioned here.

Blocking Shodan scanner is fine, but I want to block all scanners
This is basically possible. However, it is a nightmare to set up a firewall host group which covers all IPs which belong to scanners. (And it is also a nightmare to find out those IP addresses since most scanners do not just put them on their web sites…) In case you are thinking similar, setting up an IPS in combination with suitable rules (this is just one example, there are many out there) might be a solution for you.

Source :
https://wiki.ipfire.org/configuration/firewall/blockshodan

Preventing and Detecting Attacks Involving 3CX Desktop App

In this blog entry, we provide technical details and analysis on the 3CX attacks as they happen. We also discuss available solutions which security teams can maximize for early detection and mitigate the impact of 3CX attacks.

By: Trend Micro Research
March 30, 2023
Read time: 7 min (1870 words)

Updated on:

  • April 5, 2:39 a.m. EDT: We added Windows, Mac, and network commands to the Trend Micro Vision One™️ guide in the linked PDF.
  • April 4, 3:29 a.m. EDT: We added Trend Micro XDR filters to the solutions.
  • April 3, 2:33 a.m. EDT: We added details on d3dcompiler_47.dll‘s abuse of CVE-2013-3900 to make it appear legitimately signed.
  • April 1, 1:50 a.m. EDT: We added a guide on how Vision One can be used to search for potential threats associated with the 3CX desktop app. 
  • March 31, 11:07 p.m. EDT: We added technical details, an analysis of the info-stealer payload, and information on Trend Micro XDR capabilities for investigating and mitigating risks associated with the 3CX desktop app.
  • March 31, 3:00 a.m. EDT: We added the execution flow diagram, a link to Trend Micro support page, and a list of Mac IOCs and detection names.
  •  

In late March 2023, security researchers revealed that threat actors abused a popular business communication software from 3CX — in particular, the reports mention that a version of the 3CX VoIP (Voice over Internet Protocol) desktop client was being employed to target 3CX’s customers as part of an attack.

On its forums, 3CX has posted an update that recommends uninstalling the desktop app and using the Progressive Web App (PWA) client instead. The company also mentioned that they are working on an update to the desktop app.

For a more comprehensive scope of protection against possible attacks associated with the 3CX Desktop App, the Trend Micro XDR platform can help organizations mitigate the impact by collecting and analyzing extensive activity data from various sources. By applying XDR analytics to the data gathered from its native products, Trend Micro XDR generates correlated and actionable alerts.  

Trend Micro customers can also take advantage of Trend Micro Vision One™ to search for and monitor potential threats associated with the 3CX Desktop App, and to better understand observed attack vectors. For more information on how to utilize Trend Micro Vision One features, you may download the PDF guide here.

Additional guidance for Trend Micro customers including help with protection and detection can be found on our support page.

What is the compromised application?

The 3CX app is a private automatic branch exchange (PABX) software that provides several communication functions for its users, including video conferencing, live chat, and call management. The app is available on most major operating systems, including Windows, macOS, and Linux. Additionally, the client is available as a mobile application for both Android and iOS devices, while a Chrome extension and the PWA version of the client allow users to access the software through their browsers.

The issue was said to be limited to the Electron (non-web versions) of their Windows package (versions 18.12.407 and 18.12.416) and macOS clients (versions 18.11.1213, 18.12.402, 18.12.407 and 18.12.416).

According to the company’s website, more than 600,000 businesses and over 12 million daily users around the world use 3CX’s VoIP IPBX software.

How does the attack work?

The attack is reportedly a multi-stage chain in which the initial step involves a compromised version of the 3CX desktop app. Based on initial analysis, the MSI package (detected by Trend Micro as Trojan.Win64.DEEFFACE.A and Trojan.Win64.DEEFFACE.SMA) is the one that is compromised with possible trojanized DLLs, since the .exe file has the same name.

The infection chain begins with 3CXDesktopApp.exe loading ffmpeg.dll (detected as Trojan.Win64.DEEFFACE.A andTrojan.Win64.DEEFFACE.SMA). Next, ffmpeg.dll reads and decrypts the encrypted code from d3dcompiler_47.dll (detected as Trojan.Win64.DEEFFACE.A and Trojan.Wind64.DEEFACE.SMD3D).

The decrypted code seems to be the backdoor payload that tries to access the IconStorages GiHub page to access an ICO file (detected as Trojan.Win32.DEEFFACE.ICO) containing the encrypted C&C server that the backdoor connects to in order to retrieve the possible final payload. In addition, d3dcompiler_47.dll also abuses CVE-2013-3900 to make it appear that it is legitimately signed.

Figure 1. The detailed execution flow and Trend Micro detections of the malicious files. The MSI installer contains the .exe and two .dll files. The main source of the detection in the MSI installer is "ffmpeg.dll," which is the trojanized DLL.
Figure 1. The detailed execution flow and Trend Micro detections of the malicious files. The MSI installer contains the .exe and two .dll files. The main source of the detection in the MSI installer is “ffmpeg.dll,” which is the trojanized DLL.

As part of its attack routine, it contacts the servers noted in the list of indicators of compromise (IOCs) at the end of this blog entry. These domains are blocked by the Trend Micro Web Reputation Services (WRS).

Execution flow

Upon execution, the MSI package installer will drop the following files that are related to malicious behavior. Trend Micro Smart Scan Pattern (cloud-based) TBL 21474.300.40 can detect these files as Trojan.Win64.DEEFFACE.A.

  • 3CXDesktopApp.exe: A normal file that is abused to load the trojanized DLL
  • ffmpeg.dll: A trojanized DLL used to read, load, and execute a malicious shellcode from d3dcompiler_47.dll
  • d3dcompiler_47.dll: A DLL appended with an encrypted shellcode after the fe ed fa ce hex string

Some conditions are necessary for execution. For example, the sleep timestamp varies depending on the following conditions: First, it checks if the manifest file is present, as well as if it is using a specified date. If the file is not present or if it is using the specified date, the timestamp will generate a random number and use the formula rand() % 1800000 + current date + 604800 (604,800 is seven days).  After the date is computed, the malware will continue its routine.

Upon execution of 3CXDesktopApp.exeffmpeg.dll, which seems to be a trojanized or patched DLL, will be loaded. It will still contain its normal functionalities, but it will have an added malicious function that reads d3dcompiler_47.dll to locate an encrypted shellcode after the fe ed fa ce hex strings.

Figure 2
Figure 2. Reading "d3dcompiler_47.dll" and locating the “fe ed fa ce” hex string
Figure 2. Reading “d3dcompiler_47.dll” and locating the “fe ed fa ce” hex string

Upon decryption of the malicious shellcode using RC4 with the key, 3jB(2bsG#@c7, the shellcode will then try to access the GitHub repository that houses the ICO files containing the encrypted C&C strings that use Base64 encoding and AES + GCM encryption at the end of the image.

These B64 strings seem to be C&C domains that the shellcode tries to connect to for downloading other possible payloads. However, we were unable to confirm the exact nature of these payloads since the GitHub repository (raw.githubusercontent[.]com/IconStorages/images/main/) had already been taken down at the time of this writing. Note that the process exits when the page is inaccessible.

Figure 3. Code snippet showing the hard-coded GitHub repository
Figure 3. Code snippet showing the hard-coded GitHub repository
Figure 4. An ICO file from the GitHub repository
Figure 4. An ICO file from the GitHub repository

The above description applies to the Windows version. The behaviour of the Mac version is broadly similar, although it only uses a subset of the Windows C&C domains.

Info-stealer payload analysis

Based on our ongoing analysis of attacks on 3CX and the behaviors observed, the following section details what we know so far about the payload’s attack vector. 

Payloads in investigated 3CX attacks are detected as TrojanSpy.Win64.ICONICSTEALER.THCCABC. Upon analysis of the payload named ICONIC Stealer, we discovered that if it is executed using regsvr32.exe as the DLL loader, it will display the following system error:

Figure 5. Error displayed upon executing the sample using "regsvr32.exe"
Figure 5. Error displayed upon executing the sample using “regsvr32.exe”

Meanwhile, if rundll32.exe is used as the DLL loader, it encounters a WerFault error and displays the following pop-up message:

Figure 6. Error displayed if "rundll32.exe" is used as the DLL loader
Figure 6. Error displayed if “rundll32.exe” is used as the DLL loader

This indicates that the sample must be loaded by a specific application to proceed to its malicious routine.

ICONIC Stealer then checks for a file named config.json under the folder “3CXDesktopApp.”

Figure 7. Checking for "config.json"
Figure 7. Checking for “config.json”

ICONIC Stealer was then observed to steal the following system information:

  • HostName
  • DomainName
  • OsVersion

The gathered data will then be converted into a text-string format.

Figure 8. Converting gathered data into a text-string format
Figure 8. Converting gathered data into a text-string format

ICONIC Stealer then proceeds to its last behavior, which steals browser data. It uses the function shown in Figure 9 to traverse the infected system using predefined directories related to the browser’s history and other browser-related information.

Figure 9. Function for traversing the infected system
Figure 9. Function for traversing the infected system

The following figure shows a list of predefined strings:

Figure 10. List of predefined strings
Figure 10. List of predefined strings

The system directories on the following list compose the targets identified in the partial analysis of the ICONIC Stealer’s behavior. More information will be provided as this blog is updated. 

  • AppData\Local\Google\Chrome\User Data
  •  
  • AppData\Local\Microsoft\Edge\User Data
  •  
  • AppData\Local\BraveSoftware\Brave-Browser\User Data
  •  
  • AppData\Roaming\Mozilla\Firefox\Profiles
BrowserTarget information
ChromeHistory
EdgeHistory
BraveHistory
Firefoxplaces.sqlite

Table 1. The targeted section of each browser. Note that “places.sqlite” stores the annotations, bookmarks, favorite icons, input history, keywords, and the browsing history of visited pages for Mozilla Firefox.

ICONIC Stealer was also found with the capability to limit the retrieved data to the first five hundred entries to ensure that the most recent browser activity is the data that is retrieved:

Figure 11. Limiting data to the first 500 entries
Figure 11. Limiting data to the first 500 entries

“UTF-16LE”, ‘SELECT url, title FROM urls ORDER BY id DESC LIMIT

“UTF-16LE”, ‘500’,0

“UTF-16LE”, ‘SELECT url, title FROM moz_places ORDER BY id DESC

“UTF-16LE”, ‘LIMIT 500’,0

Figure 12. Retrieved results stored on an allocated buffer
Figure 12. Retrieved results stored on an allocated buffer

The gathered data will be passed to the main loader module to POST then back to the C&C server embedded in the main module.

What is its potential impact?

Due to its widespread use and its importance in an organization’s communication system, threat actors can cause major damage (for example, by monitoring or rerouting both internal and external communication) to businesses that use this software.

What can organizations do about it?

Organizations that are potentially affected should stop using the vulnerable version if possible and apply the patches or mitigation workarounds if these are available. IT and security teams should also scan for confirmed compromised binaries and builds and monitor for anomalous behavior in 3CX processes, with a particular focus on C&C traffic. 

Meanwhile, enabling behavioral monitoring in security products can help detect the presence of the attack within the system.

Indicators of Compromise (IOCs)

SHA256File name / detailsDetection name
dde03348075512796241389dfea5560c20a3d2a2eac95c894e7bbed5e85a0acc
Installer: aa124a4b4df12b34e74ee7f6c683b2ebec4ce9a8edcf9be345823b4fdcf5d868
3cxdesktopapp-18.12.407.msi (Windows)Trojan.Win64.DEEFFACE.A
fad482ded2e25ce9e1dd3d3ecc3227af714bdfbbde04347dbc1b21d6a3670405
Installer: 59e1edf4d82fae4978e97512b0331b7eb21dd4b838b850ba46794d9c7a2c0983
(Windows)Trojan.Win64.DEEFFACE.A
c485674ee63ec8d4e8fde9800788175a8b02d3f9416d0e763360fff7f8eb4e02ffmpeg.dll Trojan.Win64.DEEFFACE.A
7986bbaee8940da11ce089383521ab420c443ab7b15ed42aed91fd31ce833896ffmpeg.dll Trojan.Win64.DEEFFACE.A
11be1803e2e307b647a8a7e02d128335c448ff741bf06bf52b332e0bbf423b03d3dcompiler.dllTrojan.Win64.DEEFFACE.A
4e08e4ffc699e0a1de4a5225a0b4920933fbb9cf123cde33e1674fde6d61444f Trojan.Win32.DEEFFACE.ICO
8ab3a5eaaf8c296080fadf56b265194681d7da5da7c02562953a4cb60e147423 StealerTrojanSpy.Win64.ICONICSTEALER.THCCABC

Here is the list of IOCs for Mac users: 

SHA256File nameDetection name
5a017652531eebfcef7011c37a04f11621d89084f8f9507201f071ce359bea3f3CX Desktop App-darwin-x64-18.11.1213.zipTrojan.MacOS.FAKE3L3CTRON.A
5407cda7d3a75e7b1e030b1f33337a56f293578ffa8b3ae19c671051ed3142903CXDesktopApp-18.11.1213.dmgTrojan.MacOS.FAKE3L3CTRON.A
fee4f9dabc094df24d83ec1a8c4e4ff573e5d9973caa676f58086c99561382d7libffmpeg.dylibTrojan.MacOS.FAKE3L3CTRON.A
5009c7d1590c1f8c05827122172583ddf924c53b55a46826abf66da46725505achild macho file of libffmpeg.dylibTrojan.MacOS.FAKE3L3CTRON.A
e6bbc33815b9f20b0cf832d7401dd893fbc467c800728b5891336706da0dbcec3CXDesktopApp-18.12.416.dmgTrojan.MacOS.FAKE3L3CTRON.A
a64fa9f1c76457ecc58402142a8728ce34ccba378c17318b3340083eeb7acc67libffmpeg.dylibTrojan.MacOS.FAKE3L3CTRON.A
87c5d0c93b80acf61d24e7aaf0faae231ab507ca45483ad3d441b5d1acebc43cchild macho file of libffmpeg.dylibTrojan.MacOS.FAKE3L3CTRON.A


The following domains are blocked by Trend Micro Web Reputation Services (WRS)

  • akamaicontainer[.]com
  • akamaitechcloudservices[.]com
  • azuredeploystore[.]com
  • azureonlinecloud[.]com
  • azureonlinestorage[.]com
  • dunamistrd[.]com
  • glcloudservice[.]com
  • journalide[.]org
  • msedgepackageinfo[.]com
  • msstorageazure[.]com
  • msstorageboxes[.]com
  • officeaddons[.]com
  • officestoragebox[.]com
  • pbxcloudeservices[.]com
  • pbxphonenetwork[.]com
  • pbxsources[.]com
  • qwepoi123098[.]com
  • sbmsa[.]wiki
  • sourceslabs[.]com
  • visualstudiofactory[.]com
  • zacharryblogs[.]com

Trend Micro XDR uses the following filters to protect customers from 3CX-related attacks:

FilterIDOS
Compromised 3CX Application File IndicatorsF6669macOS, Windows
DLL Sideloading of 3CX ApplicationF6668Windows
Web Reputation Services Detection for Compromised 3CX ApplicationF6670macOS, Windows
Suspicious Web Access of Possible Compromised 3CX ApplicationF6673Windows
Suspicious DNS Query of Possible Compromised 3CX ApplicationF6672Windows

Trend Micro Malware Detection Patterns for Endpoint, Servers (Apex One, Worry-Free Business Security Services, Worry-Free Business Security Standard/Advanced, Deep Security with anti-malware, among others), Mail, and Gateway (Cloud App Security, ScanMail for Exchange, IMSVA):

  • Starting with Trend Micro Smart Scan Pattern (cloud-based) TBL 21474.200.40, known trojanized versions of this application are being detected as Trojan Win64.DEEFFACE.A. 
  • The Mac version of this threat is detected as Trojan.MacOS.FAKE3L3CTRON.A.

Source :
https://www.trendmicro.com/en_us/research/23/c/information-on-attacks-involving-3cx-desktop-app.html

Patch CVE-2023-23397 Immediately: What You Need To Know and Do

We break down the basic information of CVE-2023-23397, the zero-day, zero-touch vulnerability that was rated 9.8 on the Common Vulnerability Scoring System (CVSS) scale.

Update as of 03/22/3023 2:50PM PHT: Updated the prevention and mitigation section for an additional step.

CVE-2023-23397 is a critical privilege elevation/authentication bypass vulnerability in Outlook, released as part of the March Patch Tuesday set of fixes. The vulnerability, which affects all versions of Windows Outlook, was given a 9.8 CVSS rating and is one of two zero-day exploits disclosed on March 14. We summarize the points that security teams need to know about this vulnerability and how they can mitigate the risks of this gap.

What is it?

CVE-2023-23397 is an elevation of privilege (EoP) vulnerability in Microsoft Outlook. It is a zero-touch exploit, meaning the security gap requires low complexity to abuse and requires no user interaction.

fig1-patch-cve-2023-23397-immeditaely-what-you-need-to-know-do-faq
Figure 1. General exploitation routine of CVE-2023-23397

How is CVE-2023-23397 exploited?

The attacker sends a message to the victim with an extended Message Application Program Interface (MAPI) property with a Universal Naming Convention (UNC) path to a remote attacker-controlled Server Message Block (SMB, via TCP 445). Share-hosted on a server controlled by the attacker, the vulnerability is exploited whether the recipient has seen the message or not. The attacker remotely sends a malicious calendar invite represented by .msg — the message format that supports reminders in Outlook — to trigger the vulnerable API endpoint PlayReminderSound using “PidLidReminderFileParameter” (the custom alert sound option for reminders).

When the victim connects to the attacker’s SMB server, the connection to the remote server sends the user’s New Technology LAN Manager (NTLM) negotiation message automatically, which the attacker can use for authentication against other systems that support NTLM authentication.

NTLMv2 hashes are the latest protocol Windows uses for authentication, and it is used for a number of services with each response containing a hashed representation of users’ information, such as the username and password. As such, threat actors can attempt a NTLM relay attack to gain access to other services, or a full compromise of domains if the compromised users are admins. While online services such as Microsoft 365 are not susceptible to this attack because they do not support NTLM authentication, the Microsoft 365 Windows Outlook app is still vulnerable.

How easy is it to exploit?

User interaction is not necessary to trigger (even before message preview) it, nor does it require high privileges. CVE-2023-23397 is a zero-touch vulnerability that is triggered when the victim client is prompted and notified (e.g., when an appointment or task prompts five minutes before the designated time). It is difficult to block outbound SMB traffic for remote users. The attacker could use the same credentials to gain access to other resources. We elaborate on this example in our webinar (at 04:23 of the video).

Is it in the wild? What versions and operating systems (OS) are affected?

There have been reports of limited attacks abusing this gap. Microsoft has been coordinating with the affected victims to remediate this concern. All supported versions of Microsoft Outlook for Windows are affected. Other versions of Microsoft Outlook, such as Android, iOS, Mac, as well as Outlook on the web and other M365 services, are not affected.

What are the possible attack scenarios?

fig2-patch-cve-2023-23397-immeditaely-what-you-need-to-know-do-faq
Figure 2. Beyond the exploit use scenario 1: Data and information theft via NTLM relay attack

1. Lateral movement, malicious navigation using the relayed NTLM hashes

Relay attacks gained notoriety as a use case for Mimikatz using the NTLM credential dumping routine via the sekurlsa module. In addition, pass-the-hat (PtH) (or pass-the hash) attacks and variations of data and information theft can be done. Once attackers are in the system, they can use the network for lateral movement and navigate the organization’s lines over SMB. 

fig1-patch-cve-2023-23397-immeditaely-what-you-need-to-know-do-faq
Figure 3. Beyond the exploit use scenario 2: WebDAV directory traversal for remote code execution (RCE)

2. WebDAV directory traversal for payload attacker routines

It’s possible for an attacker to leverage WebDAV services in cases where no valid SMB service for Outlook exists (i.e., is not configured) in the client. This is an alternative to the Web/HTTP service that can also be read as a UNC path by .msg and/or Outlook Calendar items. Attackers can set up a malicious WebDAV server to respond to affected victim clients with malicious pages. These pages may contain code that can range from leveraging a directory traversal technique similar to the Microsoft vulnerability CVE-2022-34713 (dubbed as DogWalk) to push any form of payload for remote code execution such as webshells.

What can I do to prevent and mitigate the risk of exploitation of CVE-2023-23397?

Here are some steps that security administrators can perform to reduce the risk of exploitation of CVE-2023-23397:

  • Apply the vendor patches immediately. Microsoft has released a patch as part of their March 2023 Monthly Security Update.
  • Block TCP 445/SMB outbound from your network. This will prevent the sending of NTLM authentication messages to remote file shares. If this cannot be done, we recommend monitoring outbound traffic over port 445 for unknown external IP addresses, then identifying and blocking them.
  • Customers can disable the WebClient service. Note that this will block all WebDAV connections, including intranet.
  • Add users to the Protected Users Security Group. This prevents the use of NTLM as an authentication mechanism, but note that this could impact applications that rely on NTLM in your environment.
  • Enforce SMB signing on clients and servers to prevent a relay attack.
  • Other researchers have noted that disabling the “Show reminders” setting in Outlook can prevent the leak of NTLM credentials.

How can I check if I’m affected?

Microsoft has provided a PowerShell script as a solution to the issue. The script is designed to scan emails, calendar entries, and task items, and to verify if they have the “PidLidReminderFileParameter” property. By running the script, administrators can locate problematic items that have this property and subsequently remove them or delete them permanently. Download the script here: https://github.com/microsoft/CSS-Exchange/blob/a4c096e8b6e6eddeba2f42910f165681ed64adf7/docs/Security/CVE-2023-23397.md.

Which Trend Micro solutions can address this vulnerability?

  • Trend Micro Malware Detection Patterns (VSAPI, Predictive Learning, Behavioral Monitoring and Web Reputation Service) for Endpoint, Servers, Mail, and Gateway (e.g., Apex One, Worry-Free Business Security Services, Worry-Free Business Security Standard/Advanced, Deep Security with anti-malware, etc.):
    • Starting with Trend Micro Smart Scan Pattern version 21474.296.07, known exploits associated with this vulnerability are being detected as Trojan.Win32.CVE202323397.
  • Trend Micro Vision One: Use this solution as an investigation tool. In the “Search App,” select “Endpoint Activity Data” and enter the following query: – dpt: 445 AND eventSubId: 204 AND processCmd: *OUTLOOK*. This can be saved and added to a watchlist if desired.
  • Cloud One Workload Security and Deep Security: IPS Rule 1009058, which will need to be changed to Prevent. 
  • TippingPoint Filters:
    • 28471 SMB: SMBv1 Successful Protocol Negotiation
    • 28472 SMB: SMBv2 Successful Protocol Negotiation
    • Please note: Enabling these filters in Block mode will interrupt legitimate SMB traffic. Customers are advised to add exceptions for their Private IP address space.
  • Trend Micro Deep Discovery Inspector: Rule 4479 NTLM v1 Authentication – SMB (Request).
    • If NTLM v1 is configured by default, customers can use this rule to monitor attempts for outgoing NTLM handshakes. Please note this rule only detects and does not block, so it is best used as an investigative tool for follow-up.

Details for all available Trend Micro solutions are available here: https://success.trendmicro.com/dcx/s/solution/000292525?language=en_US.

To learn more about this vulnerability, you may view our technical webinar here: https://www.youtube.com/watch?v=j44vIhklTp4

Source :
https://www.trendmicro.com/en_us/research/23/c/patch-cve-2023-23397-immediately-what-you-need-to-know-and-do.html

In Review: What GPT-3 Taught ChatGPT in a Year

Amidst the uproar and opinions since November 2022, we look at the possibilities and implications of what OpenAI’s ChatGPT presents to the cybersecurity industry using a comparison to earlier products, like its predecessor GPT-3.

More than a year since the world’s general enthusiasm for the then-novel GPT-3, we took a closer look at the technology and analyzed its actual capabilities and potential for threats and malfeasance. Our considerations were collected in our Codex Exposed blog series as it focused on the most prominent aspects of the technology from a security perspective:

  1. Scavenging for sensitive data, an article where we tried to expose sensitive information that could have been found in the source code used to train the language model through code generation requests.
  2. The Imitation Game, a blog entry where we pushed the capabilities of GPT Codex code generation and understanding to identify how well the language model comprehends computer code from an architectural point of view.
  3. Task automation and response consistency, a proof where we tried to programmatically use the Codex’s API to determine if it was feasible to perform repetitive unsupervised tasks.
  4. Helping hackers in training, an entry exploring and analyzing the possibilities offered by large language models to help train and support aspiring hackers.

ChatGPT has taken the world by storm with a new and refined model, with even more capabilities than its previous iteration. Compared to its predecessor, ChatGPT sports an updated language model trained with data up to mid-2021. It has also been trained to be a conversational AI: the interaction with the model happens through multiple exchanges wherein a dialog allows the user to refine and correct the task at hand, and the model remembers what was earlier said and can recall previous inquiries in further requests. GPT-3, in comparison, processed bulk requests, wherein the user had to provide all the information related to the task at hand in just one input, including examples to clarify the expected output for more obscure tasks.

In light of such an evolution, it seems apt to come back and review how those features we exposed a year ago fared in the light of ChatGPT’s newly revamped language model.

New Tricks: Code Comprehension and Explanation

Code comprehension seems to be an aspect where ChatGPT outshines its predecessor. When we tried Codex a year ago, we pointed out that the engine was acting more like a very smart copy-paste mechanism capable of replacing some variable names while looking for the right code snippet in its “knowledge base.” However, when pushed a little further into describing what a certain piece of code was actually doing, the system would show its limitation of not having actual knowledge of the computation flow.

We tried to repeat the same experiment as last year with ChatGPT, feeding it a simple “Hello World” snippet in the assembler while asking for an explanation, then changing it slightly to see if the language model would spot the difference.

fig1-openai-in-review-what-gpt-3-taught-chatgpt
Figure 1. Asking ChatGPT to explain a piece of assembly code, followed by a broken piece of the same code

ChatGPT spotted and called the error, recognizing not only the difference between the previous and latest uploaded code but also that the new code would not work altogether. The reason is in ChatGPT’s stateful session: By “remembering” the previously input correct snippet of code, the system is able to draw a direct comparison — something that GPT-3 was unable to do unless we provided the input ourselves.

As further proof, we retried the experiment in a brand-new chat session and ChatGPT gave the following feedback:

fig2-openai-in-review-what-gpt-3-taught-chatgpt
Figure 2. Asking ChatGPT to explain a broken piece of assembly code without previous interactions

This screenshot shows that when ChatGPT is not provided with a correct sample to compare differences with, the engine pretty much falls into the same mistake as its predecessor. It confuses the code snippet for a correct Hello World example, and in the explanation mistakes the function number “(10)” for the supposedly correct function “(printf, 9)”.

As expected, we are still playing the same “imitation game” that its predecessor was playing. It is worth noting, however, that ChatGPT’s new conversational, stateful flow allows users to overcome some limitations by providing more information to the model during the session.

New Tools: For Hackers in Training

The improved interaction flow and the updated model do not bring advantages solely on the coding side. In 2022, we also analyzed the efficacy of GPT-3 as a learning support tool for aspiring cybercriminals, underlining how the convenience of a tool like Codex for code generation applied to malicious code as well.

The conversational approach of ChatGPT offers an even more natural way for people to ask questions and learn. As a matter of fact, why bother to think about all the possible criminal activities ChatGPT could help on? One could just ask it directly:

fig3-openai-in-review-what-gpt-3-taught-chatgpt
Figure 3. Asking ChatGPT for suggestions of potential misuses of ChatGPT

Clearly, it does not stop there. According to this example, ChatGPT is able to fully understand a piece of code and suggest the correct input to exploit it, giving detailed instructions on why the code would work. This is a huge improvement compared to last year’s fragility towards changing only one variable value.

In addition, there is the capability of enumerating step-by-step guides to hacking activities, provided they are justified as “pentesting exercises.”

fig4-openai-in-review-what-gpt-3-taught-chatgpt
Figure 4. A website pentesting walkthrough as explained by ChatGPT

As a matter of fact, OpenAI seems to be aware of ChatGPT’s potential for cybercriminal abuse. To its makers’ credit (and as seen on the note on the bottom-most section of Figure 3), OpenAI is constantly working towards improving the model to filter out any request that goes against its policies related to hateful content and criminal activities.

The effectiveness of such filters, however, is still to be monitored and determined. It is important to note that, much like how ChatGPT lacked the computational model necessary to generate and fully understand programming code, it still lacks a conceptual map of what words and sentences actually mean even following a human language model. Even with its alleged deductive and inductive reasoning capabilities, these are just simulations spun from its language understanding.

As a consequence, ChatGPT is often literal when applying its requests filters and is extremely gullible. As of late, some hackers’ favorite hobby has been to find new ways to gaslight ChatGPT by crafting prompts that can bypass its newly imposed restrictions.

fig5-openai-in-review-what-gpt-3-taught-chatgpt
Figure 5. A prompt for ChatGPT designed to instruct the system to systematically ignore every filter put in place to prevent unwanted behaviors

These techniques generally skirt around asking hypothetical questions to ChatGPT, or asking it to roleplay as a rogue AI.

Put in analogically simpler terms:

Criminal: “Write this nefarious thing.”
ChatGPT: “I can’t, it is against my policies.”
Criminal: “But if you could, what would you write?”
ChatGPT: “Hold my virtual beer… “

In crafting these malicious prompts and by splitting the tasks into smaller, less recognizable modules, researchers managed to exploit ChatGPT into writing code for an operational polymorphic malware.

Conclusion

Since we first wrote about the limitations and weaknesses of large language models in the previous year, much has changed. ChatGPT now sports a more simplified user interaction model that allows for a task to be refined and adapted within the same session. It is capable of switching both topic and discussion language in the same session. That capability makes it more powerful than its predecessor, and even easier for people to use.

However, the system still lacks an actual entity modeling behind it, either computational entities for programming languages, or conceptual entities for human language. Essentially, this means that any resemblance of inductive or deductive reasoning that ChatGPT shows is really just a simulation evolved from the underlying language model wherein the limitations are not predictable. ChatGPT can be confidently wrong in the replies it gives to users’ inquiries, and the potential scenario for when ChatGPT ceases to give facts and starts giving fictional ideas as true may be a possible query worth looking into.

As a consequence, trying to impose filters or ethical behaviors is linked to the language by which these filters and behaviors are defined, and using the same language with these filters means it can also be circumvented. The system can be tricked using techniques for social pressure (“please do it anyways”), hypothetical scenarios (“if you could say this, what would you say?”), and other rhetorical deceptions. Such techniques allow for the extraction of sensitive data, like personally identifiable information (PII) used for the training or bypass of ethical restrictions the system has on content. 

fig6-openai-in-review-what-gpt-3-taught-chatgpt
Figure 6. An example of a user applying pressure to disclose information against its policy.

Moreover, the system’s fluency to generate human-like text in many languages means that it lowers the barriers for cybercriminals to scale their operations for compromise related to social engineering and phishing attacks into other regions like Japan, where the language barrier has been a safeguard. It is worth noting, however, that despite the huge popularity gained by the technology, ChatGPT remains a research system, aimed for experimentation and exploration purposes, and not to act as a standalone tool. Use it at your own risk, safety not guaranteed.

Source :
https://www.trendmicro.com/en_us/research/23/b/review-what-gpt-3-taught-chatgpt-in-a-year.html

3 Overlooked Cybersecurity Breaches

Here are three of the worst breaches, attacker tactics and techniques of 2022, and the security controls that can provide effective, enterprise security protection for them.

#1: 2 RaaS Attacks in 13 Months#

Ransomware as a service is a type of attack in which the ransomware software and infrastructure are leased out to the attackers. These ransomware services can be purchased on the dark web from other threat actors and ransomware gangs. Common purchasing plans include buying the entire tool, using the existing infrastructure while paying per infection, or letting other attackers perform the service while sharing revenue with them.

In this attack, the threat actor consists of one of the most prevalent ransomware groups, specializing in access via third parties, while the targeted company is a medium-sized retailer with dozens of sites in the United States.

The threat actors used ransomware as a service to breach the victim’s network. They were able to exploit third-party credentials to gain initial access, progress laterally, and ransom the company, all within mere minutes.

The swiftness of this attack was unusual. In most RaaS cases, attackers usually stay in the networks for weeks and months before demanding ransom. What is particularly interesting about this attack is that the company was ransomed in minutes, with no need for discovery or weeks of lateral movement.

A log investigation revealed that the attackers targeted servers that did not exist in this system. As it turns out, the victim was initially breached and ransomed 13 months before this second ransomware attack. Subsequently, the first attacker group monetized the first attack not only through the ransom they obtained, but also by selling the company’s network information to the second ransomware group.

In the 13 months between the two attacks, the victim changed its network and removed servers, but the new attackers were not aware of these architectural modifications. The scripts they developed were designed for the previous network map. This also explains how they were able to attack so quickly – they had plenty of information about the network. The main lesson here is that ransomware attacks can be repeated by different groups, especially if the victim pays well.

“RaaS attacks such as this one are a good example of how full visibility allows for early alerting. A global, converged, cloud-native SASE platform that supports all edges, like Cato Networks provides complete network visibility into network events that are invisible to other providers or may go under the radar as benign events. And, being able to fully contextualize the events allows for early detection and remediation.

#2: The Critical Infrastructure Attack on Radiation Alert Networks#

Attacks on critical infrastructure are becoming more common and more dangerous. Breaches of water supply plants, sewage systems and other such infrastructures could put millions of residents at risk of a human crisis. These infrastructures are also becoming more vulnerable, and attack surface management tools for OSINT like Shodan and Censys allow security teams to find such vulnerabilities with ease.

In 2021, two hackers were suspected of targeting radiation alert networks. Their attack relied on two insiders that worked for a third party. These insiders disabled the radiation alert systems, significantly debilitating their ability to monitor radiation attacks. The attackers were then able to delete critical software and disable radiation gauges (which is part of the infrastructure itself).

Cybersecurity Breaches

“Unfortunately, scanning for vulnerable systems in critical infrastructure is easier than ever. While many such organizations have multiple layers of security, they are still using point solutions to try and defend their infrastructure rather than one system that can look holistically at the full attack lifecycle. Breaches are never just a phishing problem, or a credentials problem, or a vulnerable system problem – they are always a combination of multiple compromises performed by the threat actor,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

#3: The Three-Step Ransomware Attack That Started with Phishing#

The third attack is also a ransomware attack. This time, it consisted of three steps:

1. Infiltration – The attacker was able to gain access to the network through a phishing attack. The victim clicked on a link that generated a connection to an external site, which resulted in the download of the payload.

2. Network activity – In the second phase, the attacker progressed laterally in the network for two weeks. During this time, it collected admin passwords and used in-memory fileless malware. Then on New Year’s Eve, it performed the encryption. This date was chosen since it was (rightfully) assumed the security team would be off on vacation.

3. Exfiltration – Finally, the attackers uploaded the data out of the network.

In addition to these three main steps, additional sub-techniques were employed during the attack and the victim’s point security solutions were not able to block this attack.

Cybersecurity Breaches

“A multiple choke point approach, one that looks horizontally (so to speak) at the attack rather than as a set of vertical, disjointed issues, is the way to enhance detection, mitigation and prevention of such threats. Opposed to popular belief, the attacker needs to be right many times and the defenders only need to be right just once. The underlying technologies to implement a multiple choke point approach are full network visibility via a cloud-native backbone, and a single pass security stack that’s based on ZTNA.” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

How Do Security Point Solutions Stack Up?#

It is common for security professionals to succumb to the “single point of failure fallacy”. However, cyber-attacks are sophisticated events that rarely involve just one tactic or technique which is the cause of the breach. Therefore, an all-encompassing outlook is required to successfully mitigate cyber-attacks. Security point solutions are a solution for single points of failure. These tools can identify risks, but they will not connect the dots, which could and has led to a breach.

Here’s Watch Out for in the Coming Months#

According to ongoing security research conducted by Cato Networks Security Team, they have identified two additional vulnerabilities and exploit attempts that they recommend including in your upcoming security plans:

1. Log4j#

While Log4j made its debut as early as December of 2021, the noise its making hasn’t died down. Log4j is still being used by attackers to exploit systems, as not all organizations have been able to patch their Log4j vulnerabilities or detect Log4j attacks, in what is known as “virtual patching”. They recommend prioritizing Log4j mitigation.

2. Misconfigured Firewalls and VPNs#

Security solutions like firewalls and VPNs have become access points for attackers. Patching them has become increasingly difficult, especially in the era of architecture cloudification and remote work. It is recommended to pay close attention to these components as they are increasingly vulnerable.

How to Minimize Your Attack Surface and Gain Visibility into the Network#

To reduce the attack surface, security professionals need visibility into their networks. Visibility relies on three pillars:

  • Actionable information – that can be used to mitigate attacks
  • Reliable information – that minimizes the number of false positives
  • Timely information – to ensure mitigation happens before the attack has an impact

Once an organization has complete visibility to the activity on their network they can contextualize the data, decide whether the activity witnessed should be allowed, denied, monitored, restricted (or any other action) and then have the ability to enforce this decision. All these elements must be applied to every entity, be it a user, device, cloud app etc. All the time everywhere. That is what SASE is all about.

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

Source :
https://thehackernews.com/2023/02/3-overlooked-cybersecurity-breaches.html

VMware Security Solutions Advisories VMSA-2021-0002

Advisory ID: VMSA-2021-0002
CVSSv3 Range: 5.3-9.8
Issue Date: 2021-02-23
Updated On: 2021-02-23 (Initial Advisory)
CVE(s): CVE-2021-21972, CVE-2021-21973, CVE-2021-21974
Synopsis: VMware ESXi and vCenter Server updates address multiple security vulnerabilities (CVE-2021-21972, CVE-2021-21973, CVE-2021-21974)

1. Impacted Products
  • VMware ESXi
  • VMware vCenter Server (vCenter Server)
  • VMware Cloud Foundation (Cloud Foundation)
2. Introduction

Multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) were privately reported to VMware. Updates are available to remediate these vulnerabilities in affected VMware products.

3a. VMware vCenter Server updates address remote code execution vulnerability in the vSphere Client (CVE-2021-21972)

Description

The vSphere Client (HTML5) contains a remote code execution vulnerability in a vCenter Server plugin. VMware has evaluated the severity of this issue to be in the Critical severity range with a maximum CVSSv3 base score of 9.8.

Known Attack Vectors

A malicious actor with network access to port 443 may exploit this issue to execute commands with unrestricted privileges on the underlying operating system that hosts vCenter Server. 

Resolution

To remediate CVE-2021-21972 apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21972 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

The affected vCenter Server plugin for vROPs is available in all default installations. vROPs does not need be present to have this endpoint available. Follow the workarounds KB to disable it.

Acknowledgements

VMware would like to thank Mikhail Klyuchnikov of Positive Technologies for reporting this issue to us.

Response Matrix:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
vCenter Server7.0AnyCVE-2021-219729.8Critical 7.0 U1cKB82374None
vCenter Server6.7AnyCVE-2021-219729.8Critical 6.7 U3lKB82374None
vCenter Server6.5AnyCVE-2021-219729.8Critical 6.5 U3nKB82374None

Impacted Product Suites that Deploy Response Matrix 3a Components:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
Cloud Foundation (vCenter Server)4.xAnyCVE-2021-219729.8Critical 4.2KB82374None
Cloud Foundation (vCenter Server)3.xAnyCVE-2021-219729.8Critical 3.10.1.2KB82374None
3b. ESXi OpenSLP heap-overflow vulnerability (CVE-2021-21974)

Description

OpenSLP as used in ESXi has a heap-overflow vulnerability. VMware has evaluated the severity of this issue to be in the Important severity range with a maximum CVSSv3 base score of 8.8.

Known Attack Vectors

A malicious actor residing within the same network segment as ESXi who has access to port 427 may be able to trigger the heap-overflow issue in OpenSLP service resulting in remote code execution.

Resolution

To remediate CVE-2021-21974 apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21974 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

[1] Per the Security Configuration Guides for VMware vSphere, VMware now recommends disabling the OpenSLP service in ESXi if it is not used. For more information, see our blog posting: https://blogs.vmware.com/vsphere/2021/02/evolving-the-vmware-vsphere-security-configuration-guides.html

[2] KB82705 documents steps to consume ESXi hot patch asynchronously on top of latest VMware Cloud Foundation (VCF) supported ESXi build. 

Acknowledgements

VMware would like to thank Lucas Leong (@_wmliang_) of Trend Micro’s Zero Day Initiative for reporting this issue to us.

Response Matrix:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
[1] ESXi7.0AnyCVE-2021-219748.8Important ESXi70U1c-17325551KB76372None
[1] ESXi6.7AnyCVE-2021-219748.8Important ESXi670-202102401-SGKB76372None
[1] ESXi6.5AnyCVE-2021-219748.8Important ESXi650-202102101-SGKB76372None

Impacted Product Suites that Deploy Response Matrix 3b Components:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
[1] Cloud Foundation (ESXi)4.xAnyCVE-2021-219748.8Important 4.2KB76372None
[1] Cloud Foundation (ESXi)3.xAnyCVE-2021-219748.8Important [2] KB82705KB76372None
3c. VMware vCenter Server updates address SSRF vulnerability in the vSphere Client (CVE-2021-21973)

Description

The vSphere Client (HTML5) contains an SSRF (Server Side Request Forgery) vulnerability due to improper validation of URLs in a vCenter Server plugin. VMware has evaluated the severity of this issue to be in the Moderate severity range with a maximum CVSSv3 base score of 5.3.

Known Attack Vectors

A malicious actor with network access to port 443 may exploit this issue by sending a POST request to vCenter Server plugin leading to information disclosure.

Resolution

To remediate CVE-2021-21973 apply the updates listed in the ‘Fixed Version’ column of the ‘Response Matrix’ below to affected deployments.

Workarounds

Workarounds for CVE-2021-21973 have been listed in the ‘Workarounds’ column of the ‘Response Matrix’ below.

Additional Documentation

None.

Notes

The affected vCenter Server plugin for vROPs is available in all default installations. vROPs does not need be present to have this endpoint available. Follow the workarounds KB to disable it.

Acknowledgements

VMware would like to thank Mikhail Klyuchnikov of Positive Technologies for reporting this issue to us.

Response Matrix:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
vCenter Server7.0AnyCVE-2021-219735.3Moderate 7.0 U1cKB82374None
vCenter Server6.7AnyCVE-2021-219735.3Moderate 6.7 U3lKB82374None
vCenter Server6.5AnyCVE-2021-219735.3Moderate 6.5 U3nKB82374None

Impacted Product Suites that Deploy Response Matrix 3c Components:

ProductVersionRunning OnCVE IdentifierCVSSv3SeverityFixed VersionWorkaroundsAdditional Documentation
Cloud Foundation (vCenter Server)4.xAnyCVE-2021-219735.3Moderate 4.2KB82374None
Cloud Foundation (vCenter Server)3.xAnyCVE-2021-219735.3Moderate 3.10.1.2KB82374None
4. References

VMware ESXi 7.0 ESXi70U1c-17325551
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u1c.html

VMware ESXi 6.7 ESXi670-202102401-SG
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/6.7/rn/esxi670-202102001.html

VMware ESXi 6.5 ESXi650-202102101-SG
https://my.vmware.com/group/vmware/patch
https://docs.vmware.com/en/VMware-vSphere/6.5/rn/esxi650-202102001.html

VMware vCloud Foundation 4.2
Downloads and Documentation:
https://docs.vmware.com/en/VMware-Cloud-Foundation/4.2/rn/VMware-Cloud-Foundation-42-Release-Notes.html


VMware vCloud Foundation 3.10.1.2
Downloads and Documentation:
https://docs.vmware.com/en/VMware-Cloud-Foundation/3.10.1/rn/VMware-Cloud-Foundation-3101-Release-Notes.html


vCenter Server 7.0.1 Update 1
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC70U1C&productId=974
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u1c-release-notes.html

vCenter Server 6.7 U3l
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC67U3L&productId=742&rPId=57171
https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-67u3l-release-notes.html

vCenter Server 6.5 U3n
Downloads and Documentation:
https://my.vmware.com/web/vmware/downloads/details?downloadGroup=VC65U3N&productId=614&rPId=60942
https://docs.vmware.com/en/VMware-vSphere/6.5/rn/vsphere-vcenter-server-65u3n-release-notes.html

Mitre CVE Dictionary Links:
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21972
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21973
https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-21974

FIRST CVSSv3 Calculator:
CVE-2021-21972: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
CVE-2021-21973: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N
CVE-2021-21974: https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

5. Change Log

2021-02-23 VMSA-2021-0002
Initial security advisory.

6. Contact

E-mail list for product security notifications and announcements:

https://lists.vmware.com/cgi-bin/mailman/listinfo/security-announce

This Security Advisory is posted to the following lists:  

security-announce@lists.vmware.com  

bugtraq@securityfocus.com  

fulldisclosure@seclists.org 

E-mail: security@vmware.com

PGP key at:

https://kb.vmware.com/kb/1055

VMware Security Advisories

https://www.vmware.com/security/advisories

VMware Security Response Policy

https://www.vmware.com/support/policies/security_response.html

VMware Lifecycle Support Phases

https://www.vmware.com/support/policies/lifecycle.html

VMware Security & Compliance Blog  

https://blogs.vmware.com/security

Twitter

Source :
https://www.vmware.com/security/advisories/VMSA-2021-0002.html

Is Once-Yearly Pen Testing Enough for Your Organization?

Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.

There are two main reasons why regular pen testing is necessary for secure web application development:

  • Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
  • Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.

How Often Should You Pentest?#

Many organizations, big and small, have once a year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to be more frequent?

The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.

You may need more frequent pen testing if:

You Have an Agile or Continuous Release Cycle#

Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.

If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.

To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.

Your Web Applications Are Business-Critical#

Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.

For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.

To avoid this scenario, it’s important to ensure that your web applications are always available and secure.

Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.

Your Web Applications Are Customer-Facing#

If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.

Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.

Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.

To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.

You Are in a High-Risk Industry#

Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.

If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.

You Don’t Have Internal Security Operations or a Pen testing Team#

This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.

Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.

Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.

Depending on the size and complexity of your organization, you may need to pen test every month or even every week.

You Are Focused on Mergers or Acquisitions#

During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.

M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.

In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”

To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.

The Importance of Continuous Pen Testing#

While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.

There are two main types of pen testing: time-boxed and continuous.

Traditional pen testing is done on a set schedule, such as once a year. This type of pen testing is no longer enough in today’s world, as businesses rely more on their web applications.

Continuous pen testing is the process of continuously scanning your systems for vulnerabilities. This allows you to identify and fix vulnerabilities before they can be exploited by attackers. Continuous pen testing allows you to find and fix security issues as they happen instead of waiting for a periodic assessment.

Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.

Pen testing as a service models is where continuous pen testing shine. Outpost24’s PTaaS (Penetration-Testing-as-a-Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is always up-to-date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.

  • Manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
  • Provides comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more. This means that you can be confident that your web applications are secure against the latest threats.
  • Is cost-effective: With Outpost24, you only pay for the services you need. This makes it more affordable to conduct continuous pen testing, even for small businesses.

The Bottom Line#

Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.

Once-a-year pen testing cycle may be enough for some organizations, but for most, it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.

Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.

Source :
https://thehackernews.com/2023/01/is-once-yearly-pen-testing-enough-for.html

Helping build a safer Internet by measuring BGP RPKI Route Origin Validation

The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

If the question needs asking, you might suspect we can’t. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.

Why RPKI is necessary to secure Internet routing

The Internet is a network of independently-managed networks, called Autonomous Systems (ASes). To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.

BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.

An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:

  • Denial-of-Service (DoS) through traffic blackholing or redirection,
  • Impersonation attacks to eavesdrop on communications,
  • Machine-in-the-Middle exploits to modify the exchanged data, and subvert reputation-based filtering systems.

Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.

Such an incident happened on June 24, 2019. Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.

Traffic misdirections like these, either unintentional or intentional, are not uncommon. The Internet Society’s MANRS (Mutually Agreed Norms for Routing Security) initiative estimated that in 2020 alone there were over 3,000 route leaks and hijacks, and new occurrences can be observed every day through Cloudflare Radar.

The most prominent proposals to secure BGP routing, standardized by the IETF focus on validating the origin of the advertised routes using Resource Public Key Infrastructure (RPKI) and verifying the integrity of the paths with BGPsec. Specifically, RPKI (defined in RFC 7115) relies on a Public Key Infrastructure to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.

RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and routing networks to perform an RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.

The two phases of RPKI adoption: signing origins and validating origins

RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records thereby attesting to be the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in RFC 6483).

With ROV, a BGP route received by a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route with RPKI records found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.

One issue with RPKI is the fact that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort and a lack of broader adoption disincentivizes ASes from commiting the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare’s isbgpsafeyet.com are promoting good Internet citizenship among network operators, and make the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on isbgpsafeyet.com.

Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important to evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.

Measuring ROAs is straightforward since ROA data is readily available from RPKI repositories. Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the RouteViews and RIPE RIS routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs1.

Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.

Measuring ROV deployment

Although we do not have direct access to the configuration of everyone’s BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS2.

Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.

While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.

If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish if ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations from an anycast network taking advantage of its direct interconnections to the measurement vantage points as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS3.

Note that it’s also important that the IP address of the RPKI-invalid origin should not be covered by a less specific prefix for which there is a valid or unknown RPKI route, otherwise even if an AS filters invalid RPKI routes its users would still be able to find a route to that IP.

The measurement technique described here is the one implemented by Cloudflare’s isbgpsafeyet.com website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.

The isbgpsafeyet.com website itself doesn’t submit any data back to Cloudflare, but recently we started measuring whether end users’ browsers can successfully connect to invalid RPKI origins when ROV is present. We use the same mechanism as is used for global performance data4. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it’s RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user’s ISP uses ROV.

This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.

The state of global ROV deployment

The figure below shows the raw number of ROV probe requests per hour during October 2022 to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com. In total, we observed 69.7 million successful probes from 41,531 ASNs.

Based on APNIC’s estimates on the number of end users per ASN, our weighted5 analysis covers 96.5% of the world’s Internet population. As expected, the number of requests follow a diurnal pattern which reflects established user behavior in daily and weekly Internet activity6.

We can also see that the number of successful requests to valid.rpki.cloudflare.com (gray line) closely follows the number of sessions that issued at least one request (blue line), which works as a smoke test for the correctness of our measurements.

As we don’t store the IP addresses that contribute measurements, we don’t have any way to count individual clients and large spikes in the data may introduce unwanted bias. We account for that by capturing those instants and excluding them.

Overall, we estimate that out of the four billion Internet users, only 261 million (6.5%) are protected by BGP Route Origin Validation, but the true state of global ROV deployment is more subtle than this.

The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.

Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).

ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.

In the Netherlands, Denmark, Switzerland, or the United States, adoption appears mostly driven by their larger ASes, while in Greece or Yemen it’s the smaller ones that are adopting ROV.

The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.

Most ASes either don’t validate at all, or have close to 100% adoption, which is what we’d intuitively expect. However, it’s interesting to observe that there are small numbers of ASes all across the scale. ASes that exhibit partial RPKI-invalid drop rate compared to total requests may either implement ROV partially (on some, but not all, of their BGP routers), or appear as dropping RPKI invalids due to ROV deployment by other ASes in their upstream path.

To estimate the number of users protected by ROV we only considered ASes with an observed adoption above 95%, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.

If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the 261 million users currently protected by ROV according to the above criteria (686 ASes).

Looking back at the country adoption map one would perhaps expect the number of protected users to be larger. But worldwide ROV deployment is still mostly partial, lacking larger ASes, or both. This becomes even more clear when compared with the next map, plotting just the fraction of fully protected users.

To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.

112 million Internet users are protected by 111 ASes from the United States with comprehensive ROV deployments. Conversely, more than twice as many ASes from countries making up the European Union have fully deployed ROV, but end up covering only half as many users. This can be reasonably explained by end user ASes being more likely to operate within a single country rather than span multiple countries.

Conclusion

Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don’t traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.

In other words, the methodology used focuses on ROV adoption by end user networks (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the “blast radius” of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.

As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and Cloudflare Radar will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.

When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.

Worldwide we observe a very low effective coverage of just 6.5% over the measured ASes, corresponding to 261 million end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.

……
1https://rpki.cloudflare.com/
2Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. “Are we there yet? On RPKI’s deployment and security.” Cryptology ePrint Archive (2016).
3Geoff Huston. “Measuring ROAs and ROV”. https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/
4Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.
5Probe requests are weighted by AS size as calculated from Cloudflare’s worldwide HTTP traffic.
6Quan, Lin, John Heidemann, and Yuri Pradkin. “When the Internet sleeps: Correlating diurnal networks with external factors.” In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet applicationward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/rpki-updates-data/

Microsoft 365 network connectivity test tool

The Microsoft 365 network connectivity test tool is located at https://connectivity.office.com. It’s an adjunct tool to the network assessment and network insights available in the Microsoft 365 admin center under the Health | Connectivity menu.

 Important

It’s important to sign in to your Microsoft 365 tenant as all test reports are shared with your administrator and uploaded to the tenant while you are signed in.

Connectivity test tool.

 Note

The network connectivity test tool supports tenants in WW Commercial but not GCC Moderate, GCC High, DoD or China.

Network insights in the Microsoft 365 Admin Center are based on regular in-product measurements for your Microsoft 365 tenant, aggregated each day. In comparison, network insights from the Microsoft 365 network connectivity test are run locally in the tool.

In-product testing is limited, and running tests local to the user collects more data resulting in deeper insights. Network insights in the Microsoft 365 Admin Center will show that there’s a networking problem at a specific office location. The Microsoft 365 connectivity test can help to identify the root cause of that problem and provide a targeted performance improvement action.

We recommend that these insights be used together where networking quality status can be assessed for each office location in the Microsoft 365 Admin Center and more specifics can be found after deployment of testing based on the Microsoft 365 connectivity test.

What happens at each test step

Office location identification

When you click the Run test button, we show the running test page and identify the office location. You can type in your location by city, state, and country or choose to have it detected for you. If you detect the office location, the tool requests the latitude and longitude from the web browser and limits the accuracy to 300 meters by 300 meters before use. It’s not necessary to identify the location more accurately than the building to measure network performance.

JavaScript tests

After office location identification, we run a TCP latency test in JavaScript and we request data from the service about in-use and recommended Microsoft 365 service front door servers. When these tests are completed, we show them on the map and in the details tab where they can be viewed before the next step.

Download the advanced tests client application

Next, we start the download of the advanced tests client application. We rely on the user to launch the client application and they must also have .NET 6.0 Runtime installed.

There are two parts to the Microsoft 365 network connectivity test: the web site https://connectivity.office.com and a downloadable Windows client application that runs advanced network connectivity tests. Most of the tests require the application to be run. It will populate results back into the web page as it runs.

You’ll be prompted to download the advanced client test application from the web site after the web browser tests have completed. Open and run the file when prompted.

Advanced tests client application.

Start the advanced tests client application

Once the client application starts, the web page will update to show this result. Test data will start to be received to the web page. The page updates each time new-data is received and you can review the data as it arrives.

Advanced tests completed and test report upload

When the tests are completed, the web page and the advanced tests client will both show that. If the user is signed in, the test report will be uploaded to the customer’s tenant.

Sharing your test report

The test report requires authentication to your Microsoft 365 account. Your administrator selects how you can share your test report. The default settings allow for sharing of your reports with other user within your organization and the ReportID link is not available. Reports will expire by default after 90 days.

Sharing your report with your administrator

If you’re signed in when a test report occurs, the report is shared with your administrator.

Sharing with your Microsoft account team, support or other personnel

Test reports (excluding any personal identification) are shared with Microsoft employees. This sharing is enabled by default and can be disabled by your administrator in the Health | Network Connectivity page in the Microsoft 365 Admin Center.

Sharing with other users who sign in to the same Microsoft 365 tenant

You can choose users to share your report with. Being able to choose is enabled by default, but it can be disabled by your administrator.

Sharing a link to your test results with a user.

You can share your test report with anyone by providing access to a ReportID link. This link generates a URL that you can send to someone so that they can bring up the test report without signing in. This sharing is disabled by default and must be enabled by your administrator.

Sharing a link to your test results.

Network Connectivity Test Results

The results are shown in the Summary and Details tabs. The summary tab shows a map of the detected network perimeter and a comparison of the network assessment to other Microsoft 365 customers nearby. It also allows for sharing of the test report. Here’s what the summary results view looks like:

Network connectivity test tool summary results.

Here’s an example of the details tab output. On the details tab we show a green circle check mark if the result was compared favorably. We show a red triangle exclamation point if the result exceeded a threshold indicating a network insight. The following sections describe each of the details tab results rows and explain the thresholds used for network insights.

Network connectivity test tool example test results.

Your location information

This section shows test results related to your location.

Your location

The user location is detected from the users web browser. It can also be typed in at the user’s choice. It’s used to identify network distances to specific parts of the enterprise network perimeter. Only the city from this location detection and the distance to other network points are saved in the report.

The user office location is shown on the map view.

Network egress location (the location where your network connects to your ISP)

We identify the network egress IP address on the server side. Location databases are used to look up the approximate location for the network egress. These databases typically have an accuracy of about 90% of IP addresses. If the location looked up from the network egress IP address isn’t accurate, this would lead to a false result. To validate if this error is occurring for a specific IP address, you can use publicly accessible network IP address location web sites to compare against your actual location.

Your distance from the network egress location

We determine the distance from that location to the office location. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers) since that is likely to increase the TCP latency by more than 25 ms and may affect user experience.

The map shows the network egress location in relation to the user office location indicating the network backhaul inside of the enterprise WAN.

Implement local and direct network egress from user office locations to the Internet for optimal Microsoft 365 network connectivity. Improvements to local and direct egress are the best way to address this network insight.

Proxy server information

We identify whether proxy server(s) are configured on the local machine to pass Microsoft 365 network traffic in the Optimize category. We identify the distance from the user office location to the proxy servers.

The distance is tested first by ICMP ping. If that fails, we test with TCP ping and finally we look up the proxy server IP address in an IP address location database. We show a network insight if the proxy server is further than 500 miles (800 kilometers) away from the user office location.

Virtual private network (VPN) you use to connect to your organization

This test detects if you’re using a VPN to connect to Microsoft 365. A passing result will show if you have no VPN, or if you have a VPN with recommended split tunnel configuration for Microsoft 365.

VPN Split Tunnel

Each Optimize category route for Exchange Online, SharePoint Online, and Microsoft Teams is tested to see if It’s tunneled on the VPN. A split out workload avoids the VPN entirely. A tunneled workload is sent over the VPN. A selective tunneled workload has some routes sent over the VPN and some split out. A passing result will show if all workloads are split out or selective tunneled.

Customers in your metropolitan area with better performance

Network latency between the user office location and the Exchange Online service is compared to other Microsoft 365 customers in the same metro area. A network insight is shown if 10% or more of customers in the same metro area have better performance. This means their users will have better performance in the Microsoft 365 user interface.

This network insight is generated on the basis that all users in a city have access to the same telecommunications infrastructure and the same proximity to Internet circuits and Microsoft’s network.

Time to make a DNS request on your network

This shows the DNS server configured on the client machine that ran the tests. It might be a DNS Recursive Resolver server however this is uncommon. It’s more likely to be a DNS forwarder server, which caches DNS results and forwards any uncached DNS requests to another DNS server.

This is provided for information only and does not contribute to any network insight.

Your distance from and/or time to connect to a DNS recursive resolver

The in-use DNS Recursive Resolver is identified by making a specific DNS request and then asking the DNS Name Server for the IP Address that it received the same request from. This IP Address is the DNS Recursive Resolver and it will be looked up in IP Address location databases to find the location. The distance from the user office location to the DNS Recursive Resolver server location is then calculated. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers).

The location looked up from the network egress IP Address may not be accurate and this would lead to a false result from this test. To validate if this error is occurring for a specific IP Address, you can use publicly accessible network IP Address location web sites.

This network insight will specifically impact the selection of the Exchange Online service front door. To address this insight local and direct network egress should be a pre-requisite and then DNS Recursive Resolver should be located close to that network egress.

Exchange Online

This section shows test results related to Exchange Online.

Exchange service front door location

The in-use Exchange service front door is identified in the same way that Outlook does this and we measure the network TCP latency from the user location to it. The TCP latency is shown and the in-use Exchange service front door is compared to the list of best service front doors for the current location. This is shown as a network insight if one of the best Exchange service front door(s) isn’t in use.

Not using one of the best Exchange service front door(s) could be caused by network backhaul before the corporate network egress in which case we recommend local and direct network egress. It could also be caused by use of a remote DNS recursive resolver server in which case we recommend aligning the DNS recursive resolver server with the network egress.

We calculate a potential improvement in TCP latency (ms) to the Exchange service front door. This is done by looking at the tested user office location network latency and subtracting the network latency from the current location to the closets Exchange service front door. The difference represents the potential opportunity for improvement.

Best Exchange service front door(s) for your location

This lists the best Exchange service front door locations by city for your location.

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the Exchange service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

SharePoint Online

This section shows test results related to SharePoint Online and OneDrive.

The service front door location

The in-use SharePoint service front door is identified in the same way that the OneDrive client does and we measure the network TCP latency from the user office location to it.

Download speed

We measure the download speed for a 15 Mb file from the SharePoint service front door. The result is shown in megabytes per second to indicate what size file in megabytes can be downloaded from SharePoint or OneDrive in one second. The number should be similar to one tenth of the minimum circuit bandwidth in megabits per second. For example if you have a 100mbps internet connection, you may expect 10 megabytes per second (10 MBps).

Buffer bloat

During the 15Mb download we measure the TCP latency to the SharePoint service front door. This is the latency under load and it’s compared to the latency when not under load. The increase in latency when under load is often attributable to consumer network device buffers being loaded (or bloated). A network insight is shown for any bloat of 100ms or more.

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the SharePoint service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

Microsoft Teams

This section shows test results related to Microsoft Teams.

Media connectivity (audio, video, and application sharing)

This tests for UDP connectivity to the Microsoft Teams service front door. If this is blocked, then Microsoft Teams may still work using TCP, but audio and video will be impaired. Read more about these UDP network measurements, which also apply to Microsoft Teams at Media Quality and Network Connectivity Performance in Skype for Business Online.

Packet loss

Shows the UDP packet loss measured in a 10-second test audio call from the client to the Microsoft Teams service front door. This should be lower than 1.00% for a pass.

Latency

Shows the measured UDP latency, which should be lower than 100ms.

Jitter

Shows the measured UDP jitter, which should be lower than 30ms.

Connectivity

We test for HTTP connectivity from the user office location to all of the required Microsoft 365 network endpoints. These are published at https://aka.ms/o365ip. A network insight is shown for any required network endpoints, which cannot be connected to.

Connectivity may be blocked by a proxy server, a firewall, or another network security device on the enterprise network perimeter. Connectivity to TCP port 80 is tested with an HTTP request and connectivity to TCP port 443 is tested with an HTTPS request. If there’s no response the FQDN is marked as a failure. If there’s an HTTP response code 407 the FQDN is marked as a failure. If there’s an HTTP response code 403 then we check the Server attribute of the response and if it appears to be a proxy server we mark this as a failure. You can simulate the tests we perform with the Windows command-line tool curl.exe.

We test the SSL certificate at each required Microsoft 365 network endpoint that is in the optimize or allow category as defined at https://aka.ms/o365ip. If any tests do not find a Microsoft SSL certificate, then the encrypted network connected must have been intercepted by an intermediary network device. A network insight is shown on any intercepted encrypted network endpoints.

Where an SSL certificate is found that isn’t provided by Microsoft, we show the FQDN for the test and the in-use SSL certificate owner. This SSL certificate owner may be a proxy server vendor, or it may be an enterprise self-signed certificate.

Network path

This section shows the results of an ICMP traceroute to the Exchange Online service front door, the SharePoint Online service front door, and the Microsoft Teams service front door. It’s provided for information only and there’s no associated network insight. There are three traceroutes provided. A traceroute to outlook.office365.com, a traceroute to the customers SharePoint front end or to microsoft.sharepoint.com if one was not provided, and a traceroute to world.tr.teams.microsoft.com.

Connectivity reports

When you are signed in you can review previous reports that you have run. You can also share them or delete them from the list.

Reports.

Network health status

This shows any significant health issues with Microsoft’s global network, which might impact Microsoft 365 customers.

Network health status.

Testing from the Command Line

We provide a command line executable that can be used by your remote deployment and execution tools and run the same tests as are available in the Microsoft 365 network connectivity test tool web site.

The command line test tool can be downloaded here: Command Line Tool

You can run it by double clicking the executable in Windows File Explorer, or you can start it from a command prompt, or you can schedule it with task scheduler.

The first time you launch the executable you will be prompted to accept the end user license agreement (EULA) before testing is performed. If you have already read and accepted the EULA you can create an empty file called Microsoft-365-Network-Connectivity-Test-EULA-accepted.txt in the current working directory for the executable process when it is launched. To accept the EULA you can type ‘y’ and press enter in the command line window when prompted.

The executable accepts the following command line parameters:

  • -h to show a link to this help documentation
  • -testlist <test> Specifies tests to run. By default only basic tests are run. Valid test names include: all, dnsConnectivityPerf, dnsResolverIdentification, bufferBloat, traceroute, proxy, vpn, skype, connectivity, networkInterface
  • -filepath <filedir> Directory path of test result files. Allowed value is absolute or relative path of an accessible directory
  • -city <city> For the city, state, and country fields the specified value will be used if provided. If not provided then Windows Location Services (WLS) will be queried. If WLS fails the location will be detected fromthe machines network egress
  • -state <state>
  • -country <country>
  • -proxy <account> <password> Proxy account name and password can be provided if you require a proxy to access the Internet

Results

Output of results are written to a JSON file in a folder called TestResults which is created in the current working directory of the process unless it already exists. The filename format for the output is connectivity_test_result_YYYY-MM-DD-HH-MM-SS.json. The results are in JSON nodes that match the output shown on the web page for the Microsoft 365 network connectivity test tool web site. A new result file is created each time you run it and the standalone executable does not upload results to your Microsoft tenant for viewing in the Admin Center Network Connectivity pages. Front door codes, longitudes, and latitudes are not included in the result file.

Launching from Windows File Explorer

You can simply double click on the executable to start the testing and a command prompt window will appear.

Launching from the Command Prompt

On a CMD.EXE command prompt window you can type the path and name of the executable to run it. The filename is Microsoft.Connectivity.Test.exe

Launching from Windows Task Scheduler

In Windows Task Scheduler you can add a task to launch the standalone test executable. You should specify the current working directory of the task to be where you have created the EULA accepted file since the executable will block until the EULA is accepted. You cannot interactively accept the EULA if the process is started in the background with no console.

More details on the standalone executable

The commandline tool uses Windows Location Services to find the users City State Country information for determining some distances. If Windows Location Services is disabled in the control panel then user location based assessments will be blank. In Windows Settings “Location services” must be on and “Let desktop apps access your location” must also be on.

The commandline tool will attempt to install the .NET Framework if it is not already installed. It will also download the main testing executable from the Microsoft 365 network connectivity test tool and launch that.

Test using the Microsoft Support and Recovery Assistant

Microsoft Support and Recovery Assistant (Assistant) automates all the steps required to execute the command-line version of the Microsoft 365 network connectivity test tool on a user’s machine and creates a report similar to the one created by the web version of the connectivity test tool. Note, the Assistant runs the command line version of Microsoft 365 network connectivity test tool to produce the same JSON result file, but the JSON file is converted into .CSV file format.

Download and Run the Assistant Here

Viewing Test Results

Reports can be accessed in the following ways:

The reports will be available on the below screen once the Assistant has finished scanning the user’s machine. To access these reports, simply click on the “View log” option to view them.

Microsoft Support and Recovery Assistant wizard.

Connectivity test results and Telemetry data are collected and uploaded to the uploadlogs folder. To access this folder, use one of the following methods:

  • Open Run (Windows logo key + R), and run the %localappdata%/saralogs/uploadlogs command as follows:
Run dialog for locating output.
  • In File Explorer, type C:\Users<UserName>\AppData\Local\saralogs\uploadlogs and press Enter as follows:
Windows Explorer Address Bar for output.

Note: <UserName> is the user’s Windows profile name. To view the information about the test results and telemetry, double-click and open the files.

Windows Explorer SARA Output Files.

Types of result files

Microsoft Support and Recovery Assistant creates 2 files:

  1. Network Connectivity Report (CSV) This report runs the raw JSON file against a rule engine to make sure defined thresholds are being met and if they are not met a “warning” or “error” is displayed in the output column of the CSV file. You can view the NetworkConnectivityReport.csv file to be informed about any detected issues or defects. Please see What happens at each test step for details on each test and the thresholds for warnings.
  2. Network Connectivity Scan Report (JSON) This file provides the raw output test results from the command-line version of the Microsoft 365 network connectivity test tool (MicrosoftConnectivityTest.exe).

FAQ

Here are answers to some of our frequently asked questions.

What is required to run the advanced test client?

The advanced test client requires .NET 6.0 Runtime. If you run the advanced test client without that installed you will be directed to the .NET 6.0 installer page. Be sure to install from the Run desktop apps column for Windows. Administrator permissions on the machine are required to install .NET 6.0 Runtime.

The advanced test client uses SignalR to communicate to the web page. For this you must ensure that TCP port 443 connectivity to connectivity.service.signalr.net is open. This URL isn’t published in the https://aka.ms/o365ip because that connectivity isn’t required for a Microsoft 365 client application user.

What is Microsoft 365 service front door?

The Microsoft 365 service front door is an entry point on Microsoft’s global network where Office clients and services terminate their network connection. For an optimal network connection to Microsoft 365, It’s recommended that your network connection is terminated into the closest Microsoft 365 front door in your city or metro.

 Note

Microsoft 365 service front door has no direct relationship to the Azure Front Door Service product available in the Azure marketplace.

What is the best Microsoft 365 service front door?

A best Microsoft 365 service front door (formerly known as an optimal service front door) is one that is closest to your network egress, generally in your city or metro area. Use the Microsoft 365 network performance tool to determine location of your in-use Microsoft 365 service front door and the best service front door(s). If the tool determines your in-use front door is one of the best ones, then you should expect great connectivity into Microsoft’s global network.

What is an internet egress location?

The internet egress Location is the location where your network traffic exits your enterprise network and connects to the Internet. This is also identified as the location where you have a Network Address Translation (NAT) device and usually where you connect with an Internet Service Provider (ISP). If you see a long distance between your location and your internet egress location, then this may identify a significant WAN backhaul.

Network connectivity in the Microsoft 365 Admin Center

Microsoft 365 network performance insights

Microsoft 365 network assessment

Microsoft 365 Network Connectivity Location Services

Source :
https://learn.microsoft.com/en-us/Microsoft-365/Enterprise/office-365-network-mac-perf-onboarding-tool?view=o365-worldwide