In this guide, I’m going to show you how to make Google Chrome the default browser using Group Policy (GPO). This guide applies to Windows Server 2012, 2016, 2019, and 2022, as well as Windows 8/10/11.
To do this, there are several steps you’ll need to complete. It’s not as simple as creating a single GPO and applying it to a target computer.
This guide assumes you’ve already implemented Google Chrome Enterprise and are already managing Google Chrome browsers at an enterprise level. If not, follow step 1 first.
Step 1: (Optional) Import Google Chrome .ADMX Template Files
Once downloaded, extract the bundle and open the Configuration subfolder.
In the admx folder, copy the chrome.admx file to your desktop.
Then, still inside the admx folder, open your language subfolder (en-US) and copy the chrome.adml file to your desktop as well.
Next, RDP to your Domain Controller. Copy those two extracted files to the desktop of your DC.
Browse to C:\Windows\PolicyDefinitions and drag in the chrome.admx file.
In the C:\Windows\PolicyDefinitions\en-US folder, drag in the chrome.adml file.
Now that you’ve copied in the necessary Group Policy files to manage your Google Chrome browsers, install Chrome Enterprise from here.
I used PDQ Deploy to push this out to all computers, but for testing you can simply install it on your PC.
Step 2: Create a new Group Policy Object
Log into your Domain Controller and open Group Policy Management. Right-click Group Policy Objects > New. Give it a helpful name like “Chrome Default Browser”.
Right-click the new policy > Edit. Then expand Computer Configuration > Policies > Administrative Templates > Google > Google Chrome. Double-click the “Set Google Chrome as Default Browser” setting and switch it to Enabled.
You’ll notice in the Help text of that setting that this only works for Windows 7. For Windows 8 and later, you will need to define a default file associations XML file.
Step 3: Deploy File Associations File
The next step is to download a “default file associations” sample file, place it on a network share, and then configure another group policy.
You can either place the file on a network share accessible to everyone, or use PowerShell or PDQ Deploy/SCCM to push the file to a set location on each computer.
For this example, I put the file in a network share like this: \\server01\fileshare01\chromedefault.xml
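If you don’t already have a sample file, here is a minimal sketch of what chromedefault.xml might contain. ChromeHTML is the ProgId registered by a standard system-level Chrome install; adjust the list of extensions and protocols to suit your environment:

<?xml version="1.0" encoding="UTF-8"?>
<DefaultAssociations>
  <!-- Map web file types and protocols to Google Chrome -->
  <Association Identifier=".htm" ProgId="ChromeHTML" ApplicationName="Google Chrome" />
  <Association Identifier=".html" ProgId="ChromeHTML" ApplicationName="Google Chrome" />
  <Association Identifier="http" ProgId="ChromeHTML" ApplicationName="Google Chrome" />
  <Association Identifier="https" ProgId="ChromeHTML" ApplicationName="Google Chrome" />
</DefaultAssociations>

An easy way to generate a complete file for your environment is to set the associations manually on a reference machine, export them with Dism /Online /Export-DefaultAppAssociations:C:\DefaultAssoc.xml (the path is just an example), and then trim the output down to the entries you care about.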
Step 4: Edit Chrome Browser GPO to include path to XML
Next, open up Group Policy Management from your DC again. Edit your new “Chrome Default Browser” policy.
Navigate to Computer Configuration > Policies > Administrative Templates > Windows Components > File Explorer.
Locate the “Set a default associations configuration file” policy. Edit it, and use the path from step 3.
Click Apply and OK once complete.
Step 5: Update GPO and Test
Next, you need to apply this GPO to a target OU or computer. I always recommend moving a test computer from Active Directory Users & Computers into a test OU to prevent breaking any production systems.
Locate the OU > right-click > Link an existing GPO > Choose the new “Chrome Default Browser” GPO.
Once the computer has been moved into the test OU, and you’ve applied the policy to that same OU, run the following command on the computer to update the policy:
gpupdate /force
Then, sign out. The default browser will not be switched until after you log out.
To confirm it’s working properly, search Windows for “Default Apps” on your computer and switch the default browser to Edge. Then, sign out and sign back in. If all goes well, you can open Default Apps again and see that the default web browser has switched back to Google Chrome!
Wrapping Up
Hopefully this guide helped you force change the default web browser to Google Chrome for your company!
The Traffic Light Protocol (TLP) was created in order to facilitate greater sharing of information. TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. It employs five official marking options to indicate expected sharing boundaries to be applied by the recipient(s). TLP only has five marking options; any designations not listed in this standard are not considered valid by FIRST.
TLP provides a simple and intuitive schema for indicating when and how sensitive information can be shared, facilitating more frequent and effective collaboration. TLP is not a “control marking” or classification scheme. TLP was not designed to handle licensing terms, handling and encryption rules, and restrictions on action or instrumentation of information. TLP labels and their definitions are not intended to have any effect on freedom of information or “sunshine” laws in any jurisdiction.
TLP is optimized for ease of adoption, human readability and person-to-person sharing; it may be used in automated sharing exchanges, but is not optimized for that use.
TLP is distinct from the Chatham House Rule (when a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.), but may be used in conjunction if it is deemed appropriate by participants in an information exchange.
The source is responsible for ensuring that recipients of TLP information understand and can follow TLP sharing guidance.
If a recipient needs to share the information more widely than indicated by the original TLP designation, they must obtain explicit permission from the original source.
TLP:RED Not for disclosure, restricted to participants only.
Sources may use TLP:RED when information cannot be effectively acted upon without significant risk for the privacy, reputation, or operations of the organizations involved. For the eyes and ears of individual recipients only, no further.
Recipients may not share TLP:RED information with any parties outside of the specific exchange, meeting, or conversation in which it was originally disclosed. In the context of a meeting, for example, TLP:RED information is limited to those present at the meeting. In most circumstances, TLP:RED should be exchanged verbally or in person.
TLP:AMBER+STRICT Limited disclosure, restricted to participants’ organization.
Sources may use TLP:AMBER+STRICT when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organization.
Recipients may share TLP:AMBER+STRICT information only with members of their own organization on a need-to-know basis to protect their organization and prevent further harm.
TLP:AMBER Limited disclosure, restricted to participants’ organization and its clients (see Terminology Definitions).
Sources may use TLP:AMBER when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organizations involved. Note that TLP:AMBER+STRICT should be used to restrict sharing to the recipient organization only.
Recipients may share TLP:AMBER information with members of their own organization and its clients on a need-to-know basis to protect their organization and its clients and prevent further harm.
TLP:GREEN Limited disclosure, restricted to the community.
Sources may use TLP:GREEN when information is useful to increase awareness within their wider community.
Recipients may share TLP:GREEN information with peers and partner organizations within their community, but not via publicly accessible channels. Unless otherwise specified, TLP:GREEN information may not be shared outside of the cybersecurity or cyber defense community.
TLP:CLEAR Disclosure is not limited.
Sources may use TLP:CLEAR when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release.
Recipients may share this information without restriction. Information is subject to standard copyright rules.
Community
Under TLP, a community is a group who share common goals, practices, and informal trust relationships. A community can be as broad as all cybersecurity practitioners in a country (or in a sector or region).
Organization
Under TLP, an organization is a group who share a common affiliation by formal membership and are bound by common policies set by the organization. An organization can be as broad as all members of an information sharing organization, but rarely broader.
Clients
Under TLP, clients are those people or entities that receive cybersecurity services from an organization. Clients are by default included in TLP:AMBER so that the recipients may share information further downstream in order for clients to take action to protect themselves. For teams with national responsibility, this definition includes stakeholders and constituents. Note: CISA considers “clients” to be stakeholders and constituents that have a legal agreement with CISA.
How to use TLP in email
TLP-designated email correspondence should indicate the TLP color of the information in the Subject line and in the body of the email, prior to the designated information itself. The TLP color must be in capital letters: TLP:RED, TLP:AMBER+STRICT, TLP:AMBER, TLP:GREEN, or TLP:CLEAR.
How to use TLP in documents
TLP-designated documents should indicate the TLP color of the information in the header and footer of each page. To avoid confusion with existing control marking schemes, it is advisable to right-justify TLP designations. The TLP color should appear in capital letters and in 12 point type or greater. Note: TLP 2.0 has changed the color coding of TLP:RED to accommodate individuals with low vision.
This article describes the steps involved in creating policies using SonicOS APIs that will let you access internal devices or servers behind the SonicWall firewall.
Cause
By default, SonicWall does not allow inbound traffic that is not part of a session initiated by an internal device on the network. This is done to protect the devices on the internal network from malicious access. If required, certain parts of the network can be opened to external access, for example web servers, Exchange servers, and so on.
To open the network, we need to specify an access rule from the external network to the internal network and a NAT policy so that traffic is directed only to the intended device.
With APIs, this can be achieved at scale: for example, you can create multiple access rules and NAT policies with one command, with all the attributes specified as JSON objects.
Resolution
Manually opening ports / enabling port forwarding to allow traffic from the Internet to a server behind the SonicWall using the SonicOS API involves the following steps:
Step 1: Enabling the API module.
Step 2: Getting into Swagger.
Step 3: Logging in to the SonicWall with the API.
Step 4: Creating Address Objects and Service Objects with the API.
Step 5: Creating a NAT Policy with the API.
Step 6: Creating Access Rules with the API.
Step 7: Committing all the configuration changes made with the API.
Step 8: Logging out of the SonicWall with the API.
Scenario Overview
The following walk-through details allowing TCP 3389 from the Internet to a terminal server on the local network. Once the configuration is complete, Internet users can RDP into the terminal server using the WAN IP address. Although the examples below show the LAN zone and TCP 3389, they can apply to any zone and any port that is required.
Click on the link https://sonicos-api.sonicwall.com.
Swagger will prepopulate your SonicWall’s IP, management port, and firmware version so it can give you a list of applicable APIs.
NOTE: All the APIs required for configuring port forwarding are listed in this article.
Step 3: Log in to the SonicWall with the API:
curl -k -i -u "admin:password" -X POST https://192.168.168.168:443/api/sonicos/auth
"admin:password" – Replace this with your SonicWall’s username:password.
https://192.168.168.168:443/ – Replace this with your SonicWall’s public or private IP address.
The command output should contain the string: "success": true
NOTE: You are free to use Swagger, Postman, Git Bash, or any application that allows API calls. If you are using a Linux-based operating system, you can execute cURL from the terminal. For this article, I am using Git Bash on Windows.
Step 4: Create Address Objects and Service Objects with the API:
curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Private\",\"zone\":\"LAN\",\"host\":{\"ip\":\"192.168.168.10\"}}}}" && curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Public\",\"zone\":\"WAN\",\"host\":{\"ip\":\"1.1.1.1\"}}}}"
Output of the first command, where we passed the address object data on the command line instead of creating a separate file:
Output of the second command, where we used a file (referenced with @add) instead of specifying the data on the command line:
TIP: If you are creating only one address object, the first command should be sufficient; if you are creating multiple address objects, the second command should be used.
CAUTION: I have the add.Json file saved to my desktop, which is why I could reference it in the command. If you created the JSON file in a different location, make sure you execute the command from that location (or specify the full path to the file).
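For reference, this is what the file-based form of the call looks like; the -d @add.Json argument tells cURL to read the request body from the file instead of the command line (a sketch, assuming add.Json sits in the directory you run the command from):

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @add.Json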
https://192.168.168.168:443 – Replace that with the IP of the SonicWall
@serviceobj.Json is a file that contains the Attributes of the service object:
{
  "service_object": {
    "name": "Terminal Server 3389",
    "TCP": {
      "begin": 3389,
      "end": 3389
    }
  }
}
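With serviceobj.Json saved, the service object can be created by posting the file. The command below is a sketch; the service-objects endpoint path is an assumption and should be confirmed against the Swagger documentation for your firmware:

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/service-objects" -H "accept: application/json" -H "Content-Type: application/json" -d @serviceobj.Json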
Output of the command:
3. Committing the changes made to the SonicWall: We need to do this to be able to use the Address Objects and service objects that we just created to make a NAT Policy and an Access Rule.
curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"
https://192.168.168.168:443 – Replace that with the IP of the SonicWall
Step 5: Creating a NAT Policy with the API:
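The original screenshots for this step are not reproduced here, so the following is only a rough sketch of what the NAT policy call might look like. The endpoint path and every field name below (inbound, outbound, translated_source, translated_destination, translated_service, and so on) are assumptions based on the pattern of the access rule schema in Step 6 and must be verified against the Swagger definitions before use:

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/nat-policies/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @natpolicy.Json

A hypothetical natpolicy.Json for this scenario would translate traffic arriving on the WAN IP on port 3389 to the internal terminal server:

{
  "nat_policies": [
    {
      "ipv4": {
        "name": "Inbound 3389 NAT",
        "enable": true,
        "inbound": "X1",
        "outbound": "any",
        "source": { "any": true },
        "translated_source": { "original": true },
        "destination": { "name": "Term Server Public" },
        "translated_destination": { "name": "Term Server Private" },
        "service": { "name": "Terminal Server 3389" },
        "translated_service": { "original": true }
      }
    }
  ]
}

https://192.168.168.168:443 – Replace that with the IP of the SonicWall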
Step 6: Creating Access Rules with the API:
@accessrule.Json is a file that contains the attributes of the access rule:
{
  "access_rules": [
    {
      "ipv4": {
        "name": "Inbound 3389",
        "enable": true,
        "from": "WAN",
        "to": "LAN",
        "action": "allow",
        "source": {
          "address": {
            "any": true
          },
          "port": {
            "any": true
          }
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "destination": {
          "address": {
            "name": "Term Server Public"
          }
        }
      }
    }
  ]
}
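With the file in place, the rule can be created by posting it to the access rules endpoint. This is a sketch; the exact endpoint path is an assumption and should be confirmed in Swagger:

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/access-rules/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @accessrule.Json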
Output of the command:
Step 7: Committing all the configuration changes made with the API:
1. We have already committed the address objects and service objects in Step 4. In this step we are committing the NAT policy and the access rule to the SonicWall’s configuration:
curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"
https://192.168.168.168:443 – Replace that with the IP of the SonicWall
We have used only the POST method in most of the API calls in this article because we are only adding things to the configuration; there are other methods like GET, DELETE, and PUT. I recommend that you go through https://sonicos-api.sonicwall.com for more API commands.
Step 8: Log out of the SonicWall with the API:
1. It is recommended to log out from the SonicWall via API once the desired configuration is committed.
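The logout call mirrors the login from Step 3, but uses the DELETE method against the same auth endpoint (a sketch; substitute your credentials and the firewall’s IP as before):

curl -k -i -u "admin:password" -X DELETE https://192.168.168.168:443/api/sonicos/auth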
https://192.168.168.168:443 – Replace that with the IP of the SonicWall
"admin:password" – Replace this with the actual username and password for the SonicWall.
Output of the command:
CAUTION: If you skip the commit in Step 7 and execute the logout command in Step 8, you will lose all the configuration changes made in the current session.
Summary: We have successfully configured port forwarding so that a user on the Internet can access a terminal server behind the firewall on port 3389 using the SonicOS API.
NOTE: It is always recommended to use a client VPN for RDP connections; this article is just an example.
FQDN: cloudgmsams.sonicwall.com Zero Touch FQDN: cloudttams.global.sonicwall.com IP: 213.244.188.168, 213.244.188.188
For AWS-FRA Colo
FQDN: cscmafra.sonicwall.com Zero Touch FQDN: cscmafratt.global.sonicwall.com, cscmafratta.global.sonicwall.com IP: 18.197.234.66, 18.197.234.59
SonicWall NSM:
For Oregon AWS Colo
FQDN: nsm-uswest.sonicwall.com (Use it in GMS settings under Administration Page) Zero Touch FQDN: nsm-uswest-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page) IP: 13.227.130.81, 13.227.130.63, 3.227.130.69, 13.227.130.12, 52.39.29.75, 44.233.105.101, 44.227.248.206
For AWS-FRA Colo
FQDN: nsm-eucentral.sonicwall.com (Use it in GMS settings under Administration Page) Zero Touch FQDN: nsm-eucentral-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page) IP: 13.227.130.70, 13.227.130.69, 13.227.130.15, 13.227.130.92, 18.156.16.24, 18.157.240.148, 3.127.176.56
An internet outage can have major consequences for a digital business, especially when it happens during peak usage times and on holidays. Outages can lead to revenue loss, complaints, and customer churn.
Of course, internet outages regularly impact companies across all verticals, including some of the largest internet companies in the world. And they can happen when you least expect them.
Read on to learn about some of the most impactful internet outages to date and some steps you can take to keep your business out of harm’s way.
Historical Internet Outages You Need to Know About
1. Amazon Web Services
Amazon Web Services (AWS) experienced a major outage in December 2021, lasting for several hours. The outage impacted operations for many leading businesses, including Netflix, Disney, Spotify, DoorDash, and Venmo.
Amazon blames the outage on an automation error causing multiple systems to act abnormally. The outage also prevented users from accessing some cloud services.
This outage proved that even the largest and most trusted cloud providers are susceptible to downtime.
2. Facebook
Facebook also suffered a major outage in 2021, leaving billions of users unable to access its services, including its main social network, Instagram, and WhatsApp.
According to Facebook, the cause of the outage was a configuration change on its backbone routers responsible for transmitting traffic across its data centers. The outage lasted roughly six hours, an eternity for a social network.
3. Fastly
Cloud service provider Fastly had its network go down in June 2021, taking down several sizeable global news websites, including the New York Times and CNN. It also impacted retailers like Target and Amazon, and several other organizations.
The outage resulted from a faulty software update, stemming from a misconfiguration, causing disruptions across multiple servers.
4. British Airways
British Airways experienced a massive IT failure in 2017 during one of the busiest travel weekends in the United Kingdom.
This event created a nightmare scenario for the organization and its customers. Altogether, it grounded 672 flights and stranded tens of thousands of customers.
According to the company, the outage ensued when an engineer disconnected the data center’s power supply. A massive power surge came next, bringing the business’s network down in the process.
5. Google
Google had a major service outage in 2020. It only lasted about forty-five minutes, but it still impacted users worldwide.
Services including Gmail, YouTube, and Google Calendar all crashed. So did Google Home apps. The outage also impacted third-party applications using Google for authentication.
The issue happened due to inadequate storage capacity for the company’s authentication services.
6. Dyn
Undoubtedly, one of the biggest distributed denial of service (DDoS) attacks in history occurred in 2016 against Dyn, which was a major DNS provider.
The attack occurred in three waves, overwhelming the company’s servers. As a result, many internet users were unable to access partnering platforms like Twitter, Spotify, and Netflix.
7. Verizon
Verizon also experienced a notable internet outage. While it lasted only about an hour, the company saw a sharp drop in traffic volume. Naturally, many customers complained about the loss of service.
At first, the company reported the incident was the result of someone cutting fiber cables. However, it was unrelated and turned out to be a “software issue” during routine network maintenance activities.
8. Microsoft
Another major internet outage occurred at Microsoft when its Azure service went under in December 2021. Azure’s Active Directory service crashed for about ninety minutes.
Compared to some other outages, this one was relatively small. Nonetheless, it prevented users from signing in to Microsoft services such as Office 365. Although applications remained online, users couldn’t access them, making this a major productivity killer for many organizations worldwide.
9. Comcast
There was an internet outage at Comcast in November 2021, which happened when its San Francisco backbone shut down for about two hours.
Following the outage, a broader issue occurred, spanning multiple U.S. cities, including hubs like Philadelphia and Chicago. Several thousand customers lost service, leaving them unable to access basic network functionality during the height of the pandemic.
10. Akamai Edge DNS
Akamai, a global content delivery provider, experienced an outage with its DNS service in 2021. The Akamai outage resulted from a faulty software configuration update activating a bug in its Secure Edge Content Delivery Network.
In a similar fashion to other attacks against service providers, Akamai’s outage caused widespread damage. Other websites—including American Airlines, Fox News, and Steam—all experienced performance issues following the incident.
11. Cox Communications
Cox Communications reported a major internet outage in March 2022, impacting nearly seven thousand customers in the Las Vegas region.
The problem resulted from an NV Energy backhoe damaging a transmission line and triggering a power event. The surge caused cable modems to reset, and many customers tried to reconnect simultaneously. As a result, it took several hours for service to resume.
12. Slack
The recent Slack outage in January 2021 created havoc for distributed workers who rely on the platform for communication and collaboration.
The platform’s outage impacted organizations across the US, UK, Germany, Japan, and India, with interruptions occurring for about two and a half hours. Slack says the issue came from scaling problems on the AWS Transit Gateway, which couldn’t accommodate a spike in traffic.
Best Practices for Avoiding Internet Outages
At the end of the day, there’s nothing you can do to prevent outages entirely, especially if your business relies on multiple third-party systems. Eventually, your company or a partner will experience some level of service disruption. It’s best to plan for them and, where possible, enable systems to ‘fail gracefully.’
As part of your resiliency planning, here are some steps to mitigate damage, maximize uptime, and keep your organization safe, along with some best practices to help you avoid disruptions from network and connectivity issues.
Set Up a Backup Internet Solution
It’s impossible to protect your business from local internet outages completely. They can stem from issues like local construction, service disruptions, and more.
Consider setting up a backup internet solution as a workaround, so you never lose connectivity. For example, you may choose to combine broadband with a wireless failover solution.
Consider a Multi-Cloud Strategy
If your business is in the cloud, it’s a good idea to explore a multi-cloud strategy. By spreading your workloads across multiple cloud providers, you can prevent cloud service disruptions from knocking your digital applications offline. This approach can also improve uptime and resiliency.
Use Website Performance and Availability Monitoring
One of the best ways to protect your business is to use website performance and availability monitoring. It provides real-time visibility into how end users are interacting with and experiencing your website.
A robust website performance and availability monitoring solution can provide actionable insights into the health and stability of your website. As a result, you can track uptime and performance over time and troubleshoot issues when they occur.
The Pingdom Approach to Website Performance Monitoring
SolarWinds® Pingdom® provides real-time and historical end-user experience monitoring, giving your team deep visibility from a single pane of glass. With Pingdom, it’s possible to protect against the kind of outages that put your company in the headlines for the wrong reasons.
This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.
Today we are incredibly excited to announce that Wordfence is launching an entirely free vulnerability database API and web interface, available for commercial use by hosting companies, security organizations, threat analysts, security researchers, and the WordPress user community. This is part of a larger project known as Wordfence Intelligence Community Edition, which we are launching today.
This year at Blackhat in Las Vegas, Wordfence launched Wordfence Intelligence, an enterprise product providing organizations with data feeds derived from the attack telemetry we receive from Wordfence users. We did this with one goal in mind: to further secure the Web by enabling enterprises and network defenders with the ability to implement our threat intelligence in a way that will better secure their infrastructure and customers. Wordfence Intelligence includes malware signatures, IP threat feeds and a malware hash feed to enable enterprises to deploy our data at the network and server level.
Wordfence Intelligence Community Edition is a set of data available free for the community to use, and it includes an enterprise quality vulnerability database, and an API that provides a full up-to-date download in JSON format, completely free with no registration required. We are investing heavily in this database by growing the team, maintaining and curating the existing data, and adding new vulnerabilities as soon as they are discovered.
There is no delay on how quickly we add vulnerabilities to this free database. As soon as a vulnerability is disclosed, we add it. There is also no limitation on the use of this data, other than an attribution requirement for vulnerabilities sourced from MITRE, and an attribution requirement for our own vulnerabilities. Each vulnerability record includes the data you need to provide this attribution on your user interface.
Our hope is that hosting companies, software developers and security providers will turn this data into free and commercial security products that will improve the security of the WordPress community. By giving the data away for free, and allowing commercial use, we are acting as a catalyst for innovation in the vulnerability scanning space. Individual developers no longer have an expensive barrier to entry if they want to implement a new kind of vulnerability scanning software for the community. It is our hope that this database will foster innovation in the WordPress security space and improve the security of the WordPress community as a whole.
Wordfence Intelligence Community Edition has the stated goal of uplifting the research community and raising the profile of talented security researchers who make valuable contributions to our community, and who make us all safer. To this end, we are launching with security researcher profile pages, a security researcher leaderboard, and each vulnerability will link to the relevant researcher who discovered the vulnerability. We will also be adding the ability for researchers to edit their own profile page so that they can add links to their resume or personal website. Expect this in the coming weeks.
We will be launching web hooks in the coming weeks that will proactively and programmatically alert users and applications to the release of a new vulnerability. This provides real-time awareness of a new vulnerability, and makes the time between announcement and mitigation of a new vulnerability approach zero.
Defiant Inc and the Wordfence team are investing heavily in this vulnerability database. We are actively recruiting talented security analysts to triage inbound vulnerabilities, and we are recruiting researchers to discover new vulnerabilities in WordPress core, plugins and themes.
Yesterday evening I sat down with Chloe Chamberland, head of product for Wordfence Intelligence, in our studio in Centennial, Colorado, to chat about this exciting product that she and her team are launching today. Here is the conversation.
That concludes the executive summary portion of this post. The rest of this post is written by Chloe Chamberland who heads up the Wordfence Intelligence product. Chloe describes Wordfence Intelligence Community Edition and the vulnerability database and API in more detail. I’d like to extend my congratulations and thanks to Chloe and her team, our security analysts who worked so hard on creating the data in this database, and continue to do so, and to our engineering team for this launch.
~Mark Maunder – Wordfence Founder & CEO.
Introducing Wordfence Intelligence Community Edition
Wordfence Intelligence Community Edition is a threat intelligence data platform which currently consists of an incredibly comprehensive database of WordPress vulnerabilities. We’ve designed this platform with vulnerability researchers, site owners, and security analysts in mind. Each vulnerability has been manually curated by our team of vulnerability analysts and has been populated using historical data from the CVE list, Google fu’ing, and many other vulnerability sources. Each vulnerability record contains details such as the CVSS score, CWE type, a description of the vulnerability, affected software components, the original researcher(s), and more.
Our goal is to provide site owners with as much information as they need to effectively secure their WordPress websites, while also providing security analysts and researchers the information they need to monitor the WordPress threat landscape so they can respond to threats in a timely manner and provide their insights back to the community.
The Wordfence Intelligence Community Edition vulnerability database currently contains over 8,000 unique vulnerability records covering nearly 10,000 vulnerabilities across WordPress core, themes, and plugins. Over the coming months we will continue to actively develop and release features that will enrich the experience of users accessing and using the platform.
We will continue to populate historical vulnerability data while also ensuring we have the most comprehensive and current vulnerability database on the market for the community to use.
Key Features of Wordfence Intelligence Community Edition
Overview of Attack Data Targeting WordPress Sites
On the dashboard of Wordfence Intelligence Community Edition, users can see insights into attack volume targeting WordPress websites. This includes the total number of login attacks and exploit attempts the Wordfence firewall has blocked and the total number of malware sightings the Wordfence scanner and our incident response team have observed, along with the top 10 attacking IP addresses in the past 24 hours, the top 10 unique WordPress vulnerabilities being targeted in the past 24 hours, and the top 5 generic vulnerability types being targeted in the past 24 hours, in addition to their attack volume. This data can be used to make more informed decisions on the threats faced by WordPress site owners for better risk mitigation. It can also be used to enhance security research in the WordPress space.
Select Vulnerabilities Enriched with Attack Data
Select vulnerabilities in the database are enriched with data on the attack volume targeting those particular vulnerabilities in the past 24 hours. This gives unparalleled insight into the threat landscape for WordPress, providing site owners, analysts, and security researchers with current information on the most attacked WordPress vulnerabilities.
Researcher Hall of Fame & Leaderboard
All researchers credited with discoveries in our database are in our Researcher Hall of Fame with their total vulnerability count for the past 30 days and for all time. Researchers can see their all time and 30 day ranking compared to other researchers in the field. Researchers who want to be higher up on the leaderboard will need to find and responsibly disclose more vulnerabilities than their fellow researchers. We hope that this will create a friendly competition to encourage more vulnerability research that in turn makes the WordPress ecosystem more secure.
Individual Researcher Vulnerability Finds All in One Place
Each researcher has their own unique page that lists the total number of vulnerabilities they have discovered in the past 30 days and all time, along with a list of all the vulnerability finds attributed to that researcher. This page can be shared with anyone, from prospective employers who may want to see an individual’s previous research to friends and family a researcher may want to show their work to. Whatever the purpose, this was designed so researchers can keep all of their vulnerability discoveries in one central place.
If you’re a researcher, and your page is missing some of your vulnerability discoveries, please make sure to fill out our vulnerability submissions form here. Any vulnerability reported to us will receive a CVE ID and we will gladly assign CVE IDs to any older discoveries you may have already in our database upon request.
Wordfence Scan Results Enhanced
The Wordfence scanner will now provide a link to the Wordfence Intelligence Community Edition Vulnerability Database’s applicable record when a vulnerability has been detected on a site. This can be used to obtain more information about a vulnerability so that site owners can make informed decisions on how to proceed with remediating any given vulnerability. In most cases the solution is to update to a newer, patched version; however, in cases where a plugin or theme has been closed and there is no patch available, this information will help guide decision making when assessing a site’s risk.
It takes a community.
That is why we are calling this Wordfence Intelligence Community Edition. A vast majority of the vulnerabilities in our database are from independent researchers and other organizations conducting security research on WordPress plugins, themes, and core. Without them and their dedicated work finding and responsibly disclosing vulnerabilities, there would be no database of WordPress vulnerabilities to catalog and there would not be nearly as many patches, or opportunities to secure WordPress websites, available to site owners. That’s why we will make sure finding information about vulnerabilities is as easy as possible and researchers get the credit they deserve with Wordfence Intelligence Community Edition.
As we continue to evolve this platform, we will keep this at the forefront of our minds and ensure we continue to deliver a product that will help make the WordPress ecosystem more secure and have a positive impact on the community of security researchers working to make this possible.
In return, we would like to ask the community to help us in making sure this remains the best resource for the community. If you’d like to add any additional details to our vulnerability records or have vulnerabilities you have discovered that should be added to the database, we hope that you’ll reach out to us so we can further improve the database that will remain accessible to all.
A Gift to the Community.
As part of this launch, we have made the vulnerability data feed from Wordfence Intelligence completely free to access. The feed contains a complete dump of the vulnerabilities and related data in our database. You can find the documentation on what is included in this API and how to query it here. You are more than welcome to implement this data in whatever way you would like, commercially and personally. We hope that by making this accessible to everyone, we can create a more secure WordPress ecosystem and a better platform for researchers to get the credit they deserve.
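As a quick illustration of how simple consuming the feed is, the snippet below pulls the full JSON dump with cURL and counts the records with jq (the production feed URL shown is an assumption; confirm the exact endpoint against the documentation linked above before building on it):

curl -s "https://www.wordfence.com/api/intelligence/v2/vulnerabilities/production" -o wordfence-vulnerabilities.json
jq 'length' wordfence-vulnerabilities.json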
This is just the beginning. Stay tuned, and make sure you are signed up for our mailing list, for more exciting things to come!
I would like to say a huge congratulations and special thank you to everyone on the Wordfence team that made Wordfence Intelligence Community Edition come to life: from our threat intelligence team, who processed and manually created thousands of vulnerability records over a period of several months, to our engineering and QA teams, who developed and tested this incredible platform. Without your dedicated work, we would not be able to make the online WordPress community a more secure place for all.
The Wordfence Threat Intelligence team has been tracking exploits targeting a Critical Severity Arbitrary File Upload vulnerability in YITH WooCommerce Gift Cards Premium, a plugin with over 50,000 installations according to the vendor.
The vulnerability, reported by security researcher Dave Jong and publicly disclosed on November 22, 2022, impacts plugin versions up to and including 3.19.0 and allows unauthenticated attackers to upload executable files to WordPress sites running a vulnerable version of the plugin. This allows attackers to place a back door, obtain Remote Code Execution, and take over the site.
All Wordfence customers, including Wordfence Premium, Care, and Response customers as well as Wordfence free users, are protected against exploits targeting this vulnerability by the Wordfence firewall’s built-in file upload rules which prevent the upload of files with known dangerous extensions, files containing executable PHP code, and known malicious files.
We highly recommend updating to the latest version of the plugin, which is 3.21.0 at the time of this writing.
We were able to reverse engineer the exploit based on attack traffic and a copy of the vulnerable plugin and are providing information on its functionality as this vulnerability is already being exploited in the wild and a patch has been available for some time.
The issue lies in the import_actions_from_settings_panel function which runs on the admin_init hook.
Since admin_init runs for any page in the /wp-admin/ directory, it is possible to trigger functions that run on admin_init as an unauthenticated attacker by sending a request to /wp-admin/admin-post.php.
Since the import_actions_from_settings_panel function also lacks a capability check and a CSRF check, it is trivial for an attacker to simply send a request containing a page parameter set to yith_woocommerce_gift_cards_panel, a ywgc_safe_submit_field parameter set to importing_gift_cards, and a payload in the file_import_csv file parameter.
Since the function also does not perform any file type checks, any file type including executable PHP files can be uploaded.
These attacks may appear in your logs as unexpected POST requests to wp-admin/admin-post.php from unknown IP addresses. Additionally, we have observed the following payloads which may be useful in determining whether your site has been compromised. Note that we are providing normalized hashes (hashes of the file with all extraneous whitespace removed):
kon.php/1tes.php – this file loads a copy of the “marijuana shell” file manager in memory from a remote location at shell[.]prinsh[.]com and has a normalized sha256 hash of 1a3babb9ac0a199289262b6acf680fb3185d432ed1e6b71f339074047078b28c
b.php – this file is a simple uploader with a normalized sha256 hash of 3c2c9d07da5f40a22de1c32bc8088e941cea7215cbcd6e1e901c6a3f7a6f9f19
admin.php – this file is a password-protected backdoor and has a normalized sha256 hash of 8cc74f5fa8847ba70c8691eb5fdf8b6879593459cfd2d4773251388618cac90d
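If you want to compare a suspicious file on your own server against the normalized hashes above, a rough approximation from a shell is shown below (it strips all whitespace rather than only extraneous whitespace, so treat it as a quick check rather than a definitive match; replace suspicious-file.php with your own path):

tr -d '[:space:]' < suspicious-file.php | sha256sum

You can also search your web server access logs for the unexpected POST requests mentioned above, for example (access.log is a placeholder for your log path):

grep 'POST /wp-admin/admin-post.php' access.log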
Although we’ve seen attacks from more than a hundred IPs, the vast majority of attacks were from just two IP addresses:
103.138.108.15, which sent out 19604 attacks against 10936 different sites and 188.66.0.135, which sent 1220 attacks against 928 sites.
The majority of attacks occurred the day after the vulnerability was disclosed, but attacks have been ongoing, with another peak on December 14, 2022. As this vulnerability is trivial to exploit and provides full access to a vulnerable website, we expect attacks to continue well into the future.
Recommendations
If you are running a vulnerable version of YITH WooCommerce Gift Cards Premium, that is, any version up to and including 3.19.0, we strongly recommend updating to the latest version available. While the Wordfence firewall does provide protection against malicious file uploads even for free users, attackers may still be able to cause nuisance issues by abusing the vulnerable functionality in less critical ways.
If you believe your site has been compromised as a result of this vulnerability or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both of these products include hands-on support in case you need further assistance. If you have any friends or colleagues who are using this plugin, please share this announcement with them and encourage them to update to the latest patched version of YITH WooCommerce Gift Cards Premium as soon as possible.
Threat actors continue to adapt to the latest technologies, practices, and even data privacy laws—and it’s up to organizations to stay one step ahead by implementing strong cybersecurity measures and programs.
Here’s a look at how cybercrime will evolve in 2023 and what you can do to secure and protect your organization in the year ahead.
With the rapid modernization and digitization of supply chains come new security risks. Gartner predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains—this is a three-fold increase from 2021. Previously, these types of attacks weren’t even likely to happen because supply chains weren’t connected to the internet. But now that they are, supply chains need to be secured properly.
The introduction of new technology around software supply chains means there are likely security holes that have yet to be identified, but are essential to uncover in order to protect your organization in 2023.
If you’ve introduced new software supply chains to your technology stack, or plan to do so sometime in the next year, then you must integrate updated cybersecurity configurations. Employ people and processes that have experience with digital supply chains to ensure that security measures are implemented correctly.
It should come as no surprise that with the increased use of smartphones in the workplace, mobile devices are becoming a greater target for cyber-attack. In fact, cyber-crimes involving mobile devices have increased by 22% in the last year, according to the Verizon Mobile Security Index (MSI) 2022 with no signs of slowing down in advance of the new year.
As hackers home in on mobile devices, SMS-based authentication has inevitably become less secure. Even the seemingly most secure companies can be vulnerable to mobile device hacks. Case in point: several major companies, including Uber and Okta, were impacted by security breaches involving one-time passcodes in the past year alone.
This calls for the need to move away from relying on SMS-based authentication, and instead to multifactor authentication (MFA) that is more secure. This could include an authenticator app that uses time-sensitive tokens, or more direct authenticators that are hardware or device-based.
Organizations need to take extra precautions to prevent attacks that begin with the frontline by implementing software that helps verify user identity. According to the World Economic Forum’s 2022 Global Risks Report, 95% of cybersecurity incidents are due to human error. This fact alone emphasizes the need for a software procedure that decreases the chance of human error when it comes to verification. Implementing a tool like Specops’ Secure Service Desk helps reduce vulnerabilities from socially engineered attacks that are targeting the help desk, enabling a secure user verification at the service desk without the risk of human error.
As more companies opt for cloud-based activities, cloud security—any technology, policy, or service that protects information stored in the cloud—should be a top priority in 2023 and beyond. Cyber criminals become more sophisticated and evolve their tactics as technologies evolve, which means cloud security is essential as you rely on it more frequently in your organization.
The most reliable safeguard against cloud-based cybercrime is a zero trust philosophy. The main principle behind zero trust is to automatically verify everything—and essentially not trust anyone without some type of authorization or inspection. This security measure is critical when it comes to protecting data and infrastructure stored in the cloud from threats.
Ransomware attacks continue to increase at an alarming rate. Data from Verizon shows a 13% increase in ransomware breaches year-over-year. Ransomware attacks have also become increasingly targeted — sectors such as healthcare and food and agriculture are just the latest industries to be victims, according to the FBI.
With the rise in ransomware threats comes the increased use of Ransomware-as-a-Service (RaaS). This growing phenomenon is when ransomware criminals lease out their infrastructure to other cybercriminals or groups. RaaS kits make it even easier for threat actors to deploy their attacks quickly and affordably, which is a dangerous combination to combat for anyone leading the cybersecurity protocols and procedures. To increase protection against threat actors who use RaaS, enlist the help of your end-users.
End-users are your organization’s frontline against ransomware attacks, but they need the proper training to ensure they’re protected. Make sure your cybersecurity procedures are clearly documented and regularly practiced so users can stay aware and vigilant against security breaches. Employing backup measures like password policy software, MFA whenever possible, and email-security tools in your organization can also mitigate the onus on end-user cybersecurity.
Data privacy laws are getting stricter—get ready
We can’t talk about cybersecurity in 2023 without mentioning data privacy laws. With new data privacy laws set to go into effect in several states over the next year, now is the time to assess your current procedures and systems to make sure they comply. These new state-specific laws are just the beginning; companies would be wise to review their compliance as more states are likely to develop new privacy laws in the years to come.
Data privacy laws often require changes to how companies store and process data, and implementing these new changes might open you up to additional risk if they are not handled carefully. Ensure your organization adheres to proper cybersecurity protocols, including zero trust, as mentioned above.
The Wordfence Threat Intelligence team continually monitors trends in the attack data we collect. Occasionally an unusual trend will arise from this data, and we have spotted one such trend standing out over the Thanksgiving holiday in the U.S. and the first weekend in December. Attack attempts have spiked for vulnerabilities in two plugins.
The larger spikes have been from attempts to exploit an arbitrary file upload vulnerability in Kaswara Modern VC Addons <= version 3.0.1, for which a rule was added to the Wordfence firewall and made available to Wordfence Premium, Wordfence Care, and Wordfence Response users on April 21, 2021, then released to users of Wordfence Free on May 21, 2021. The other vulnerability is an arbitrary file upload and arbitrary file deletion vulnerability in the Adning Advertising plugin, versions <= 1.5.5, with our firewall rule added on June 25, 2020 and made available to free users on July 25, 2020.
One thing that makes these spikes interesting is the fact that they are occurring over holidays and weekends. The first spike began on November 24, 2022, which was the Thanksgiving holiday in the United States. This spike lasted for three days. The second spike looked a little different, starting on Saturday, December 3, 2022, dropping on Sunday, and finishing with its peak on Monday. These spikes serve as an important reminder that malicious actors are aware that website administrators are not paying as close attention to their sites on holidays and weekends. This makes holidays and weekends a desirable time for attacks to be attempted.
During these spikes, exploit attempts have been observed against the Kaswara vulnerability on 1,969,494 websites, and on 1,075,458 sites against the Adning vulnerability. In contrast, the normal volume of sites with exploit attempts being blocked is an average of 256,700 for the Kaswara vulnerability, and 374,801 for the Adning vulnerability.
The Kaswara Modern VC Addons plugin had more than 10,000 installations at the time the vulnerability was disclosed on April 21, 2021, and has since been closed without a patch being released. As long as this plugin is installed, it leaves the site vulnerable to attacks that make it possible for unauthenticated attackers to upload malicious files that could ultimately lead to a full site takeover, because the ability to upload PHP files to servers hosting WordPress makes remote code execution possible. Any WordPress website administrators who are still using the plugin should immediately remove it and replace it with a suitable alternative if the functionality is still required for the site, even if you are protected by the Wordfence firewall, as the plugin has not been maintained and may contain other issues. We estimate that about 8,000 WordPress users are still impacted by a vulnerable version, making them an easy target.
The Adning Advertising plugin had more than 8,000 users when our Threat Intelligence team performed our initial investigation of the vulnerability on June 24, 2020. After some analysis, we found two vulnerabilities in the plugin: one that would allow an unauthenticated attacker to upload arbitrary files, also leading to easy site takeover, and an unauthenticated arbitrary file deletion vulnerability that could just as easily be used for complete site compromise by deleting the wp-config.php file. After we notified the plugin’s author of the vulnerabilities, they quickly worked to release a patched version within 24 hours. Any users of the Adning Advertising plugin should immediately update to the latest version, currently 1.6.3, but version 1.5.6 is the minimum version that includes the patch. We estimate that about 680 WordPress users are still impacted by a vulnerable version of this plugin.
The key takeaway from these attack attempts is to make sure your website components are kept up to date with the latest security updates. When a theme or plugin, or even the WordPress core, has an update available, it should be updated as soon as safely possible for the website. Leaving unpatched vulnerabilities on the website opens a website up to possible attack.
Cyber Observables
The following are the common observables we have logged in these exploit attempts. If any of these are observed on a website or in logs, it is an indication that one of these vulnerabilities has been exploited. The IP addresses listed are specifically from the spikes we have seen over the Thanksgiving holiday and the first weekend in December.
Kaswara
Top ten IPs
40.87.107.73
65.109.128.42
65.21.155.174
65.108.251.64
5.75.244.31
65.109.137.44
65.21.247.31
49.12.184.76
5.75.252.228
5.75.252.229
Common Uploaded Filenames
There were quite a few variations of randomly named six-letter filenames, two are referenced below, but each one observed used the .zip extension.
Top User-Agents
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36
Amazon CloudFront
Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36
Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2656.18 Safari/537.36
Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
Mozilla/5.0 (X11; Ubuntu; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2919.83 Safari/537.36
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2762.73 Safari/537.36
Adning
Top Ten IPs
65.109.128.42
65.108.251.64
65.21.155.174
5.75.244.31
65.109.137.44
65.21.247.31
5.75.252.229
65.109.138.122
40.87.107.73
49.12.184.76
Common Uploaded Filenames
Most observed exploit attempts against the Adning plugin appeared to be nothing more than probing for the vulnerability, but in one instance the following filename was observed as a payload.
In this post we discussed two vulnerabilities whose exploit attempts have spiked over the past two weekends. Removing or updating vulnerable plugins is always the best solution, but a Web Application Firewall like the one provided by Wordfence is important to block exploit attempts and can even protect your site from attacks targeting unknown vulnerabilities. The Wordfence firewall protects all Wordfence users, including Wordfence Free, Wordfence Premium, Wordfence Care, and Wordfence Response, against these vulnerabilities. Even with this protection in place, these vulnerabilities are serious as they can lead to full site takeover; the Kaswara Modern VC Addons plugin should be removed immediately, and the Adning Advertising plugin should be updated immediately.
The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?
If the question needs asking, you might suspect we can’t. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.
Why RPKI is necessary to secure Internet routing
The Internet is a network of independently-managed networks, called Autonomous Systems (ASes). To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.
BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.
An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:
Denial-of-Service (DoS) through traffic blackholing or redirection,
Impersonation attacks to eavesdrop on communications,
Machine-in-the-Middle exploits to modify the exchanged data, and subvert reputation-based filtering systems.
Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.
Such an incident happened on June 24, 2019. Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.
The most prominent proposals to secure BGP routing, standardized by the IETF, focus on validating the origin of the advertised routes using Resource Public Key Infrastructure (RPKI) and verifying the integrity of the paths with BGPsec. Specifically, RPKI (defined in RFC 7115) relies on a Public Key Infrastructure to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.
RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and networks to perform RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.
The two phases of RPKI adoption: signing origins and validating origins
RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records, thereby attesting that it is the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in RFC 6483).
With ROV, a BGP route received from a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route whose RPKI records are found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.
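To make that decision procedure concrete, here is a minimal sketch of RFC 6811-style origin validation in Python. The ROAs, prefixes, and ASNs are purely illustrative, and real routers receive validated ROA payloads from relying-party software (for example over the RTR protocol) rather than from a hard-coded list.

# Minimal sketch of RFC 6811-style Route Origin Validation (illustrative data only).
from ipaddress import ip_network

# Each ROA: (authorized prefix, maxLength, authorized origin ASN)
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    """Return 'valid', 'invalid', or 'not-found' for a received route."""
    route = ip_network(prefix)
    covering = [(p, max_len, asn) for p, max_len, asn in ROAS
                if route.version == p.version and route.subnet_of(p)]
    if not covering:
        return "not-found"   # no ROA covers this route
    for p, max_len, asn in covering:
        if origin_asn == asn and route.prefixlen <= max_len:
            return "valid"   # matching origin within the authorized maxLength
    return "invalid"         # covered by ROAs, but none match

# A validating router typically accepts 'valid' and 'not-found' routes and rejects 'invalid' ones.
print(rov_state("192.0.2.0/24", 64500))     # valid
print(rov_state("192.0.2.0/25", 64500))     # invalid: more specific than maxLength allows
print(rov_state("192.0.2.0/24", 64501))     # invalid: wrong origin AS
print(rov_state("198.51.100.0/24", 64500))  # not-found: no covering ROA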
One issue with RPKI is that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort, and a lack of broader adoption disincentivizes ASes from committing the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare's isbgpsafeyet.com are promoting good Internet citizenship among network operators, and make the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on isbgpsafeyet.com.
Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important for evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.
Measuring ROAs is straightforward since ROA data is readily available from RPKI repositories. Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the RouteViews and RIPE RIS routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs [1].
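As an illustration of that kind of query, the sketch below downloads a public export of validated ROAs and tallies them by address family. The export URL and the "roas"/"prefix" field names are assumptions based on commonly published ROA feeds (Cloudflare publishes one at rpki.cloudflare.com); adjust them to whichever feed you use. Comparing those prefixes against the routed prefix-origin pairs seen in RouteViews or RIPE RIS then yields the coverage figure.

# Sketch: tally validated ROAs by address family from a public ROA export.
# The URL and JSON field names ("roas", "prefix") are assumptions; verify them
# against the feed you actually use before relying on the output.
import json
from urllib.request import urlopen

EXPORT_URL = "https://rpki.cloudflare.com/rpki.json"  # assumed export location

with urlopen(EXPORT_URL) as resp:
    roas = json.load(resp).get("roas", [])

ipv4 = sum(1 for roa in roas if ":" not in roa["prefix"])
ipv6 = len(roas) - ipv4
print(f"{ipv4} IPv4 ROAs, {ipv6} IPv6 ROAs")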
Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.
Measuring ROV deployment
Although we do not have direct access to the configuration of everyone's BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS [2].
Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.
While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.
If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish whether ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations of an anycast network, taking advantage of its direct interconnections to the measurement vantage points as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS [3].
Note that it's also important that the IP address of the RPKI-invalid origin is not covered by a less specific prefix for which there is a valid or unknown RPKI route; otherwise, even if an AS filters invalid RPKI routes, its users would still be able to find a route to that IP.
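A quick way to sanity-check that caveat is to look at every announced prefix covering the test address together with its RPKI state; if any covering route is valid or unknown, the invalid origin would still be reachable. The prefixes and states below are illustrative only.

# Sketch: confirm the RPKI-invalid test origin is not rescued by a less specific route.
from ipaddress import ip_address, ip_network

# Illustrative announcements covering the test address and their RPKI states.
announcements = {
    ip_network("203.0.113.0/24"): "invalid",   # the deliberately invalid test prefix
    # ip_network("203.0.112.0/22"): "valid",   # a covering route like this would defeat the test
}

target = ip_address("203.0.113.1")
usable = [p for p, state in announcements.items()
          if target in p and state in ("valid", "not-found")]

if usable:
    print("Test defeated: target still reachable via", usable)
else:
    print("Test is sound: only the RPKI-invalid route covers the target")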
The measurement technique described here is the one implemented by Cloudflare’s isbgpsafeyet.com website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.
The isbgpsafeyet.com website itself doesn't submit any data back to Cloudflare, but recently we started measuring whether end users' browsers can successfully connect to invalid RPKI origins when ROV is present. We use the same mechanism as is used for global performance data [4]. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it's RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user's ISP uses ROV.
This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.
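The sketch below mimics that browser-side check from the command line: it requests both hostnames and infers whether the local network drops RPKI-invalid routes. It is a simplified stand-in for the real in-browser measurement, and the timeout value is an arbitrary choice.

# Sketch: probe the two test hostnames to infer whether the local ISP enforces ROV.
import socket
import urllib.error
import urllib.request

def reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if an HTTPS request to the URL completes."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, socket.timeout, OSError):
        return False

valid_ok = reachable("https://valid.rpki.cloudflare.com/")
invalid_ok = reachable("https://invalid.rpki.cloudflare.com/")

if valid_ok and not invalid_ok:
    print("RPKI-invalid origin unreachable: your ISP appears to drop invalid routes (ROV).")
elif valid_ok and invalid_ok:
    print("Both origins reachable: your ISP does not appear to enforce ROV.")
else:
    print("Control request failed: result inconclusive (local network issue?).")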
The state of global ROV deployment
The figure below shows the raw number of ROV probe requests per hour during October 2022 to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com. In total, we observed 69.7 million successful probes from 41,531 ASNs.
Based on APNIC's estimates of the number of end users per ASN, our weighted [5] analysis covers 96.5% of the world's Internet population. As expected, the number of requests follows a diurnal pattern which reflects established user behavior in daily and weekly Internet activity [6].
We can also see that the number of successful requests to valid.rpki.cloudflare.com (gray line) closely follows the number of sessions that issued at least one request (blue line), which works as a smoke test for the correctness of our measurements.
As we don't store the IP addresses that contribute measurements, we don't have any way to count individual clients, and large spikes in the data may introduce unwanted bias. We account for that by identifying those periods and excluding them.
Overall, we estimate that, out of the four billion Internet users, only 261 million (6.5%) are protected by BGP Route Origin Validation, but the true state of global ROV deployment is more subtle than this.
The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.
Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).
ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.
In the Netherlands, Denmark, Switzerland, and the United States, adoption appears to be driven mostly by their larger ASes, while in Greece and Yemen it is the smaller ones that are adopting ROV.
The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.
Most ASes either don’t validate at all, or have close to 100% adoption, which is what we’d intuitively expect. However, it’s interesting to observe that there are small numbers of ASes all across the scale. ASes that exhibit partial RPKI-invalid drop rate compared to total requests may either implement ROV partially (on some, but not all, of their BGP routers), or appear as dropping RPKI invalids due to ROV deployment by other ASes in their upstream path.
To estimate the number of users protected by ROV we only considered ASes with an observed adoption above 95%, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.
If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the 261 million users currently protected by ROV according to the above criteria (686 ASes).
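The counting itself is straightforward once per-AS drop rates and user estimates are in hand. The sketch below shows the aggregation with the 95% threshold applied; the ASNs, drop rates, and user counts are hypothetical placeholders, not measured values.

# Sketch: count users behind ASes whose RPKI-invalid drop rate is at least 95%.
THRESHOLD = 0.95

# (ASN, observed fraction of invalid.rpki probes dropped, estimated users behind the AS)
as_stats = [
    ("AS64500", 0.99, 3_000_000),
    ("AS64501", 0.40, 8_000_000),    # partial deployment: not counted as protected
    ("AS64502", 1.00, 1_200_000),
    ("AS64503", 0.00, 15_000_000),
]

protected = [(asn, users) for asn, drop, users in as_stats if drop >= THRESHOLD]
protected_users = sum(users for _, users in protected)
print(f"{len(protected)} ASes with comprehensive ROV, covering {protected_users:,} users")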
Looking back at the country adoption map, one might expect the number of protected users to be larger. But worldwide, ROV deployment is still either partial within individual ASes, missing from the larger ASes, or both. This becomes even clearer when compared with the next map, plotting just the fraction of fully protected users.
To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.
In the United States, 112 million Internet users are protected by the 111 ASes with comprehensive ROV deployments. Conversely, more than twice as many ASes from countries making up the European Union have fully deployed ROV, but they end up covering only half as many users. This can be reasonably explained by end user ASes being more likely to operate within a single country than to span multiple countries.
Conclusion
Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don’t traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.
In other words, the methodology used focuses on ROV adoption by end user networks (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the “blast radius” of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.
As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and Cloudflare Radar will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.
When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.
Worldwide we observe a very low effective coverage of just 6.5% over the measured ASes, corresponding to 261 million end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.
Footnotes
[1] https://rpki.cloudflare.com/
[2] Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. "Are we there yet? On RPKI's deployment and security." Cryptology ePrint Archive (2016).
[3] Geoff Huston. "Measuring ROAs and ROV." https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/
[4] Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.
[5] Probe requests are weighted by AS size as calculated from Cloudflare's worldwide HTTP traffic.
[6] Quan, Lin, John Heidemann, and Yuri Pradkin. "When the Internet sleeps: Correlating diurnal networks with external factors." In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.