How to Diagnose High Admin-Ajax Usage on Your WordPress Site

Salman Ravoof, January 8, 2024

Ajax is a JavaScript-based web technology that helps you build dynamic, interactive websites. WordPress uses Ajax to power many of its core admin features, such as auto-saving posts, user session management, and notifications.

By default, WordPress directs all Ajax calls through the admin-ajax.php file located in the site’s /wp-admin directory.

Numerous simultaneous Ajax requests can lead to high admin-ajax.php usage, considerably slowing down both the server and the website. It’s one of the most common problems on unoptimized WordPress sites. Typically, it manifests as a slow website or an HTTP 5xx error (mostly 504 or 502 errors).

In this article, you’ll learn about WordPress’ admin-ajax.php file, how it works, its benefits and drawbacks, and how you can diagnose and fix the high admin-ajax.php usage issue.

Ready to go? Let’s roll out!

What Is the admin-ajax.php File?

The admin-ajax.php file contains all the code for routing Ajax requests on WordPress. Its primary purpose is to establish a connection between the client and the server using Ajax. WordPress uses it to refresh the page’s contents without reloading it, thus making it dynamic and interactive to the users.

A basic overview of how Admin Ajax works on WordPress

Since the WordPress core already uses Ajax to power its various backend features, you can use the same functions to work with Ajax in your own code. All you need to do is register an action, point it to your site’s admin-ajax.php file, and define how you want it to return the value. You can set it to return HTML, JSON, or even XML.
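To make that concrete, here's a minimal sketch of registering a custom Ajax action in a plugin or a theme's functions.php. The action name my_live_search and its nonce are hypothetical examples for this sketch; the hooks and helpers are standard WordPress APIs.

```php
<?php
// Register the handler for logged-in users and, via nopriv, for visitors.
// "my_live_search" is a hypothetical action name used for this sketch.
add_action( 'wp_ajax_my_live_search', 'my_live_search_handler' );
add_action( 'wp_ajax_nopriv_my_live_search', 'my_live_search_handler' );

function my_live_search_handler() {
    // Reject requests that don't carry the nonce generated on the page
    // with wp_create_nonce( 'my_live_search' ).
    check_ajax_referer( 'my_live_search', 'security' );

    $term  = sanitize_text_field( $_POST['term'] ?? '' );
    $query = new WP_Query( array( 's' => $term, 'posts_per_page' => 5 ) );

    // Return the matching post titles as JSON; this call also exits.
    wp_send_json_success( wp_list_pluck( $query->posts, 'post_title' ) );
}
```

The client side then sends a POST request to /wp-admin/admin-ajax.php with action=my_live_search, and WordPress routes it to the handler above.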

admin-ajax.php file in WordPress

As per WordPress Trac, the admin-ajax.php file first appeared in WordPress 2.1. It’s also referred to as Ajax Admin in the WordPress development community.

Checking Ajax usage in MyKinsta dashboard

The chart above only shows the number of admin-ajax.php requests, not where they might be coming from, but it’s a great way to see when spikes occur. You can combine it with the other techniques mentioned in this post to narrow down the primary cause.

Checking the number of admin-ajax.php requests in Chrome DevTools

You can also use Chrome DevTools to see how many requests are being sent to admin-ajax.php. Check the Timings tab under the Network section to find out how long these requests take to process.

As for the exact reason behind high admin-ajax.php usage, there are two main causes: one on the frontend and the other on the backend. We’ll discuss both below.


How to Debug High admin-ajax.php Usage on WordPress

Third-party plugins are one of the most common reasons behind high admin-ajax.php usage. Typically, this issue is seen on the site’s frontend and shows up frequently in speed test reports.

But plugins aren’t the only culprit here: themes, the WordPress core, the web server, and even a DDoS attack can also be behind high Admin Ajax usage.

Let’s explore them in more detail.

How to Determine the Origin of High admin-ajax.php Usage for Plugins and Themes

Ajax-powered plugins in WordPress.org repository

Ajax is often used by WordPress developers to create dynamic and interactive plugins and themes. Some popular examples include adding features such as live search, product filters, infinite scroll, dynamic shopping cart, and chat box.

Just because a plugin uses Ajax doesn’t mean that it’ll slow down your site.

Viewing the admin-ajax.php request in WebPageTest report

Usually, Admin Ajax loads towards the end of the page load. Also, you can set Ajax requests to load asynchronously, so they can have little to no effect on the page’s perceived performance.

As you can see in the WebPageTest report above, admin-ajax.php loads towards the end of the requests queue, but it still takes up 780 ms. That’s a lot of time for just one request.

GTmetrix report indicating a serious admin-ajax.php usage spike

When developers don’t implement Ajax properly on WordPress, it can lead to drastic performance issues. The above GTmetrix report is a perfect example of such behavior.

You can also use GTmetrix to dig into an individual request’s POST and response data and use that to pinpoint what’s causing the issue.

To do that, go to the GTmetrix report’s Waterfall tab, then find and click the POST admin-ajax.php item. You’ll see three tabs for this request: Headers, Post, and Response.

POST admin-ajax.php request’s Headers data

Checking out the request’s Post and Response tabs will give you hints about the reasons behind the performance issue. For this site, the clues are in the Response tab.

POST admin-ajax.php request’s Response data

You can see that part of the response has something to do with an input tag with id set to “fusion-form-nonce-656”.

A quick search of this clue will lead you to ThemeFusion’s website, the creators of the Avada theme. Hence, you can conclude that the request originates from the theme or one of the plugins bundled with it.

In such a case, you must first ensure that the Avada theme and all its related plugins are fully updated. If that doesn’t help, you can try disabling the theme to see whether the requests disappear.

Unlike disabling a plugin, disabling a theme isn’t feasible in most scenarios. Hence, try optimizing the theme to remove any bottlenecks. You can also reach out to the theme’s support team to see if they can suggest a better solution.

Testing another slow website in GTmetrix revealed similar issues with the Visual Composer page builder and Notification Bar plugins.

Another POST admin-ajax.php request’s Response data
POST admin-ajax.php request’s Post data

Thankfully, if you cannot resolve an issue with a plugin, you most likely have many alternative plugins available to try out. For example, when it comes to page builders, you could also try Beaver Builder or Elementor.


How to Determine the Origin of High admin-ajax.php Usage Manually

Sometimes, the Post and Response data presented in speed test reports aren’t as clear and straightforward, and finding the origin of high admin-ajax.php usage isn’t as easy. In such cases, you can always do it the old-school way.

Disable all your site’s plugins, clear your site’s cache (if any), and then run a speed test again. If admin-ajax.php is still present, then the most likely culprit is the theme. But if it’s nowhere to be found, reactivate each plugin one by one, running a speed test each time. By process of elimination, you’ll zero in on the issue’s origin.
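If you’d rather not toggle plugins by hand, one quick, easily reversible trick (a sketch, not a Kinsta feature) is a must-use plugin that empties the active-plugins list at read time:

```php
<?php
/*
 * Plugin Name: Disable All Plugins (debugging sketch)
 * Drop this file into wp-content/mu-plugins/ to deactivate every regular
 * plugin at once while you test. It filters the option at read time, so
 * deleting the file restores your previous set of active plugins.
 */
add_filter( 'option_active_plugins', '__return_empty_array' );
```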

Tip: Using a staging environment (e.g. Kinsta’s staging environment) is a great way to run tests on your site without affecting your live site. Once you’ve determined the cause and fixed the issue in the staging environment, you can push the changes to your live site.

Diagnosing Backend Server Issues with admin-ajax.php

The second most common reason for high admin-ajax.php usage is the WordPress Heartbeat API generating frequent Ajax calls, leading to high CPU usage on the server. Typically, this happens because many users are logged into the WordPress dashboard at once, so you won’t see it show up in speed tests.

By default, the Heartbeat API polls the admin-ajax.php file every 15 seconds while you’re editing a post or page, powering features such as autosave. If you’re using a shared hosting server, you don’t have many server resources dedicated to your site, so leaving an editing tab open for a significant time can rack up a lot of Admin Ajax requests.

For example, when you’re writing or editing posts, a single user alone can generate 240 requests in an hour!

Frequent autosave admin-ajax.php requests

That’s a lot of requests on the backend with just one user. Now imagine a site where there are multiple editors logged in concurrently. Such a site can rack up Ajax requests rapidly, generating high CPU usage.

That was the situation discovered by DARTDrones when the company was preparing its WooCommerce site for an expected surge in traffic following an appearance on Shark Tank.

Before being featured on the television show, the DARTDrones site was receiving over 4,100 admin-ajax.php calls a day with only 2,000 unique visitors. That’s a disproportionately high requests-to-visits ratio.

Heavy admin-ajax.php usage on dartdrones.com

Investigators noticed the /wp-admin referrer URL and correctly determined the root cause. These requests were due to DARTDrones’ admins and editors updating the site frequently in anticipation of the show.

WordPress partially addressed this Heartbeat API issue long ago. For instance, you can reduce the frequency of requests generated by the Heartbeat API on hosts with limited resources, and the API suspends itself after one hour of keyboard/mouse/touch inactivity.
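If you’d like to dial Heartbeat down yourself, here’s a minimal sketch using the standard heartbeat_settings filter; the 60-second value is just an example (WordPress clamps it to a supported range):

```php
<?php
// Slow the Heartbeat API from its default polling interval to once a
// minute, reducing the number of admin-ajax.php calls per editing session.
add_filter( 'heartbeat_settings', function ( $settings ) {
    $settings['interval'] = 60; // seconds between heartbeats
    return $settings;
} );
```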

Info: If you are using WP Rocket, Heartbeat Control is now a built-in feature rather than a standalone plugin.

High Traffic Due to a DDoS Attack or Spam Bots

Overwhelming your site with a DDoS attack or spam bots can also lead to high admin-ajax.php usage. However, such an attack doesn’t necessarily target Admin Ajax specifically; the extra requests are just collateral damage.

If your site is under a DDoS attack, your priority should be to get it behind a robust CDN/WAF like Cloudflare or Sucuri. Every hosting plan with Kinsta includes free Cloudflare integration and Kinsta CDN, which can help you offload your website’s resources to a large extent.

To learn more about how you can protect your websites from malicious attacks like these, you can refer to our in-depth guide on how to stop a DDoS attack.

Summary

WordPress uses Ajax in its Heartbeat API to implement many of its core features. However, it can lead to increased load times if not used correctly, typically because of a high frequency of requests to the admin-ajax.php file.

In this article, you learned the various causes for high admin-ajax.php usage, how to diagnose what’s responsible for this symptom, and how you can go about fixing it. In most cases, following this guide should get your site back up and running smoothly in no time.

However, in some cases, upgrading to a server with more resources is the only viable solution, especially for demanding use cases such as ecommerce and membership sites. If you’re running such a site, consider upgrading to a managed WordPress host that’s experienced with these types of performance issues.

If you’re still struggling with high admin-ajax.php usage on your WordPress site, let us know in the comments section.



Salman Ravoof

Salman Ravoof is a self-taught web developer, writer, creator, and a huge admirer of Free and Open Source Software (FOSS). Besides tech, he’s excited by science, philosophy, photography, arts, cats, and food. Learn more about him on his website, and connect with Salman on Twitter.

Source :
https://kinsta.com/blog/admin-ajax-php/

DDoS threat report for 2023 Q4

09/01/2024
Omer Yoachimik – Jorge Pacheco

Welcome to the sixteenth edition of Cloudflare’s DDoS Threat Report. This edition covers DDoS trends and key findings for the fourth and final quarter of the year 2023, complete with a review of major trends throughout the year.

What are DDoS attacks?

DDoS attacks, or distributed denial-of-service attacks, are a type of cyber attack that aims to disrupt websites and online services for users, making them unavailable by overwhelming them with more traffic than they can handle. They are similar to car gridlocks that jam roads, preventing drivers from getting to their destination.

There are three main types of DDoS attacks that we will cover in this report. The first is an HTTP request intensive DDoS attack that aims to overwhelm HTTP servers with more requests than they can handle to cause a denial of service event. The second is an IP packet intensive DDoS attack that aims to overwhelm in-line appliances such as routers, firewalls, and servers with more packets than they can handle. The third is a bit-intensive attack that aims to saturate and clog the Internet link causing that ‘gridlock’ that we discussed. In this report, we will highlight various techniques and insights on all three types of attacks.

Previous editions of the report can be found here, and are also available on our interactive hub, Cloudflare Radar. Cloudflare Radar showcases global Internet traffic, attacks, and technology trends and insights, with drill-down and filtering capabilities for zooming in on insights of specific countries, industries, and service providers. Cloudflare Radar also offers a free API allowing academics, data sleuths, and other web enthusiasts to investigate Internet usage across the globe.

To learn how we prepare this report, refer to our Methodologies.

Key findings

  1. In Q4, we observed a 117% year-over-year increase in network-layer DDoS attacks, and overall increased DDoS activity targeting retail, shipment and public relations websites during and around Black Friday and the holiday season.
  2. In Q4, DDoS attack traffic targeting Taiwan registered a 3,370% growth, compared to the previous year, amidst the upcoming general election and reported tensions with China. The percentage of DDoS attack traffic targeting Israeli websites grew by 27% quarter-over-quarter, and the percentage of DDoS attack traffic targeting Palestinian websites grew by 1,126% quarter-over-quarter — as the military conflict between Israel and Hamas continues.
  3. In Q4, there was a staggering 61,839% surge in DDoS attack traffic targeting Environmental Services websites compared to the previous year, coinciding with the 28th United Nations Climate Change Conference (COP 28).

For an in-depth analysis of these key findings and additional insights that could redefine your understanding of current cybersecurity challenges, read on!

Illustration of a DDoS attack

Hyper-volumetric HTTP DDoS attacks

2023 was the year of uncharted territories. DDoS attacks reached new heights — in size and sophistication. The wider Internet community, including Cloudflare, faced a persistent and deliberately engineered campaign of thousands of hyper-volumetric DDoS attacks at never before seen rates.

These attacks were highly complex and exploited an HTTP/2 vulnerability. Cloudflare developed purpose-built technology to mitigate the vulnerability’s effect and worked with others in the industry to responsibly disclose it.

As part of this DDoS campaign, in Q3 our systems mitigated the largest attack we’ve ever seen — 201 million requests per second (rps). That’s almost 8 times larger than our previous 2022 record of 26 million rps.

Largest HTTP DDoS attacks as seen by Cloudflare, by year

Growth in network-layer DDoS attacks

After the hyper-volumetric campaign subsided, we saw an unexpected drop in HTTP DDoS attacks. Overall in 2023, our automated defenses mitigated over 5.2 million HTTP DDoS attacks consisting of over 26 trillion requests. That averages out to 594 HTTP DDoS attacks and 3 billion mitigated requests every hour.

Despite these astronomical figures, the amount of HTTP DDoS attack requests actually declined by 20% compared to 2022. This decline was not just annual but was also observed in 2023 Q4 where the number of HTTP DDoS attack requests decreased by 7% YoY and 18% QoQ.

On the network-layer, we saw a completely different trend. Our automated defenses mitigated 8.7 million network-layer DDoS attacks in 2023. This represents an 85% increase compared to 2022.

In 2023 Q4, Cloudflare’s automated defenses mitigated over 80 petabytes of network-layer attacks. On average, our systems auto-mitigated 996 network-layer DDoS attacks and 27 terabytes every hour. The number of network-layer DDoS attacks in 2023 Q4 increased by 175% YoY and 25% QoQ.

HTTP and Network-layer DDoS attacks by quarter

DDoS attacks increase during and around COP 28

In the final quarter of 2023, the landscape of cyber threats witnessed a significant shift. While the Cryptocurrency sector was initially leading in terms of the volume of HTTP DDoS attack requests, a new target emerged as a primary victim. The Environmental Services industry experienced an unprecedented surge in HTTP DDoS attacks, with these attacks constituting half of all its HTTP traffic. This marked a staggering 618-fold increase compared to the previous year, highlighting a disturbing trend in the cyber threat landscape.

This surge in cyber attacks coincided with COP 28, which ran from November 30th to December 12th, 2023. The conference was a pivotal event, signaling what many considered the ‘beginning of the end’ for the fossil fuel era. It was observed that in the period leading up to COP 28, there was a noticeable spike in HTTP attacks targeting Environmental Services websites. This pattern wasn’t isolated to this event alone.

Looking back at historical data, particularly during COP 26 and COP 27, as well as other UN environment-related resolutions or announcements, a similar pattern emerges. Each of these events was accompanied by a corresponding increase in cyber attacks aimed at Environmental Services websites.

In February and March 2023, significant environmental events like the UN’s resolution on climate justice and the launch of the United Nations Environment Programme’s Freshwater Challenge potentially heightened the profile of environmental websites, possibly correlating with an increase in attacks on these sites.

This recurring pattern underscores the growing intersection between environmental issues and cyber security, a nexus that is increasingly becoming a focal point for attackers in the digital age.

DDoS attacks and Iron Swords

It’s not just UN resolutions that trigger DDoS attacks. Cyber attacks, and particularly DDoS attacks, have long been a tool of war and disruption. We witnessed an increase in DDoS attack activity in the Ukraine-Russia war, and now we’re also witnessing it in the Israel-Hamas war. We first reported the cyber activity in our report Cyber attacks in the Israel-Hamas war, and we continued to monitor the activity throughout Q4.

Operation “Iron Swords” is the military offensive launched by Israel against Hamas following the Hamas-led 7 October attack. During this ongoing armed conflict, we continue to see DDoS attacks targeting both sides.

DDoS attacks targeting Israeli and Palestinian websites, by industry

Relative to each region’s traffic, the Palestinian territories was the second most attacked region by HTTP DDoS attacks in Q4. Over 10% of all HTTP requests towards Palestinian websites were DDoS attacks, a total of 1.3 billion DDoS requests, representing a 1,126% increase QoQ. 90% of these DDoS attacks targeted Palestinian Banking websites. Another 8% targeted Information Technology and Internet platforms.

Top attacked Palestinian industries

Similarly, our systems automatically mitigated over 2.2 billion HTTP DDoS requests targeting Israeli websites. While 2.2 billion represents a decrease compared to the previous quarter and year, it did amount to a larger percentage out of the total Israel-bound traffic. This normalized figure represents a 27% increase QoQ but a 92% decrease YoY. Notwithstanding the larger amount of attack traffic, Israel was the 77th most attacked region relative to its own traffic. It was also the 33rd most attacked by total volume of attacks, whereas the Palestinian territories was 42nd.

Of those Israeli websites attacked, Newspaper & Media were the main target — receiving almost 40% of all Israel-bound HTTP DDoS attacks. The second most attacked industry was the Computer Software industry. The Banking, Financial Institutions, and Insurance (BFSI) industry came in third.

Top attacked Israeli industries

On the network layer, we see the same trend. Palestinian networks were targeted by 470 terabytes of attack traffic — accounting for over 68% of all traffic towards Palestinian networks. Surpassed only by China, this figure placed the Palestinian territories as the second most attacked region in the world, by network-layer DDoS attack, relative to all Palestinian territories-bound traffic. By absolute volume of traffic, it came in third. Those 470 terabytes accounted for approximately 1% of all DDoS traffic that Cloudflare mitigated.

Israeli networks, though, were targeted by only 2.4 terabytes of attack traffic, placing it as the 8th most attacked country by network-layer DDoS attacks (normalized). Those 2.4 terabytes accounted for almost 10% of all traffic towards Israeli networks.

Top attacked countries

When we turned the picture around, we saw that 3% of all bytes that were ingested in our Israeli-based data centers were network-layer DDoS attacks. In our Palestinian-based data centers, that figure was significantly higher — approximately 17% of all bytes.

On the application layer, we saw that 4% of HTTP requests originating from Palestinian IP addresses were DDoS attacks, and almost 2% of HTTP requests originating from Israeli IP addresses were DDoS attacks as well.

Main sources of DDoS attacks

In the third quarter of 2022, China was the largest source of HTTP DDoS attack traffic. However, since the fourth quarter of 2022, the US took the first place as the largest source of HTTP DDoS attacks and has maintained that undesirable position for five consecutive quarters. Similarly, our data centers in the US are the ones ingesting the most network-layer DDoS attack traffic — over 38% of all attack bytes.

HTTP DDoS attacks originating from China and the US by quarter

Together, China and the US account for a little over a quarter of all HTTP DDoS attack traffic in the world. Brazil, Germany, Indonesia, and Argentina account for the next twenty-five percent.

Top source of HTTP DDoS attacks

These large figures usually correspond to large markets. For this reason, we also normalize the attack traffic originating from each country against its total outbound traffic. When we do this, small island nations or smaller-market countries from which a disproportionate amount of attack traffic originates often top the list. In Q4, 40% of Saint Helena’s outbound traffic was HTTP DDoS attacks, placing it at the top. Following the ‘remote volcanic tropical island’, Libya came in second and Eswatini (formerly Swaziland) third. Argentina and Egypt follow in fourth and fifth place.

Top source of HTTP DDoS attacks with respect to each country’s traffic

On the network layer, Zimbabwe came in first place: almost 80% of all traffic we ingested in our Zimbabwe-based data center was malicious. Paraguay came in second and Madagascar third.

Top source of Network-layer DDoS attacks with respect to each country’s traffic

Most attacked industries

By volume of attack traffic, Cryptocurrency was the most attacked industry in Q4. Over 330 billion HTTP requests targeted it. This figure accounts for over 4% of all HTTP DDoS traffic for the quarter. The second most attacked industry was Gaming & Gambling. These industries are known for being coveted targets and attract a lot of traffic and attacks.

Top industries targeted by HTTP DDoS attacks

On the network layer, the Information Technology and Internet industry was the most attacked — over 45% of all network-layer DDoS attack traffic was aimed at it. Following far behind were the Banking, Financial Services and Insurance (BFSI), Gaming & Gambling, and Telecommunications industries.

Top industries targeted by Network-layer DDoS attacks

To change perspectives, here too, we normalized the attack traffic by the total traffic for a specific industry. When we do that, we get a different picture.

Top attacked industries by HTTP DDoS attacks, by region

We already mentioned in the beginning of this report that the Environmental Services industry was the most attacked relative to its own traffic. In second place was the Packaging and Freight Delivery industry, which is interesting because of its timely correlation with online shopping during Black Friday and the winter holiday season. Purchased gifts and goods need to get to their destination somehow, and it seems as though attackers tried to interfere with that. On a similar note, DDoS attacks on retail companies increased by 16% compared to the previous year.

Top industries targeted by HTTP DDoS attacks with respect to each industry’s traffic

On the network layer, Public Relations and Communications was the most targeted industry — 36% of its traffic was malicious. This too is very interesting given its timing. Public Relations and Communications companies are usually linked to managing public perception and communication. Disrupting their operations can have immediate and widespread reputational impacts which becomes even more critical during the Q4 holiday season. This quarter often sees increased PR and communication activities due to holidays, end-of-year summaries, and preparation for the new year, making it a critical operational period — one that some may want to disrupt.

Top industries targeted by Network-layer DDoS attacks with respect to each industry’s traffic

Most attacked countries and regions

Singapore was the main target of HTTP DDoS attacks in Q4. Over 317 billion HTTP requests, 4% of all global DDoS traffic, were aimed at Singaporean websites. The US followed closely in second and Canada in third. Taiwan came in as the fourth most attacked region — amidst the upcoming general elections and the tensions with China. Taiwan-bound attacks in Q4 traffic increased by 847% compared to the previous year, and 2,858% compared to the previous quarter. This increase is not limited to the absolute values. When normalized, the percentage of HTTP DDoS attack traffic targeting Taiwan relative to all Taiwan-bound traffic also significantly increased. It increased by 624% quarter-over-quarter and 3,370% year-over-year.

Top targeted countries by HTTP DDoS attacks

While China came in as the ninth most attacked country by HTTP DDoS attacks, it’s the number one most attacked country by network-layer attacks. 45% of all network-layer DDoS traffic that Cloudflare mitigated globally was China-bound. The rest of the countries were so far behind that it is almost negligible.

Top targeted countries by Network-layer DDoS attacks

When normalizing the data, Iraq, Palestinian territories, and Morocco take the lead as the most attacked regions with respect to their total inbound traffic. What’s interesting is that Singapore comes up as fourth. So not only did Singapore face the largest amount of HTTP DDoS attack traffic, but that traffic also made up a significant amount of the total Singapore-bound traffic. By contrast, the US was second most attacked by volume (per the application-layer graph above), but came in the fiftieth place with respect to the total US-bound traffic.

Top targeted countries by HTTP DDoS attacks with respect to each country’s traffic

Similar to Singapore, but arguably more dramatic, China is both the number one most attacked country by network-layer DDoS attack traffic, and also with respect to all China-bound traffic. Almost 86% of all China-bound traffic was mitigated by Cloudflare as network-layer DDoS attacks. The Palestinian territories, Brazil, Norway, and again Singapore followed with large percentages of attack traffic.

Top targeted countries by Network-layer DDoS attacks with respect to each country’s traffic

Attack vectors and attributes

The majority of DDoS attacks are short and small relative to Cloudflare’s scale. However, unprotected websites and networks can still suffer disruption from short and small attacks without proper inline automated protection — underscoring the need for organizations to be proactive in adopting a robust security posture.

In 2023 Q4, 91% of attacks ended within 10 minutes, 97% peaked below 500 megabits per second (Mbps), and 88% never exceeded 50 thousand packets per second (pps).

Two out of every 100 network-layer DDoS attacks lasted more than an hour and exceeded 1 gigabit per second (Gbps). One out of every 100 attacks exceeded 1 million packets per second. Furthermore, the number of network-layer DDoS attacks exceeding 100 million packets per second increased by 15% quarter-over-quarter.

DDoS attack stats you should know

One of those large attacks was a Mirai-botnet attack that peaked at 160 million packets per second. The packet per second rate was not the largest we’ve ever seen. The largest we’ve ever seen was 754 million packets per second. That attack occurred in 2020, and we have yet to see anything larger.

This more recent attack, though, was unique in its bits per second rate. This was the largest network-layer DDoS attack we’ve seen in Q4. It peaked at 1.9 terabits per second and originated from a Mirai botnet. It was a multi-vector attack, meaning it combined multiple attack methods. Some of those methods included UDP fragments flood, UDP/Echo flood, SYN Flood, ACK Flood, and TCP malformed flags.

This attack targeted a known European Cloud Provider and originated from over 18 thousand unique IP addresses that are assumed to be spoofed. It was automatically detected and mitigated by Cloudflare’s defenses.

This goes to show that even the largest attacks end very quickly. Previous large attacks we’ve seen ended within seconds — underlining the need for an in-line automated defense system. Though still rare, attacks in the terabit range are becoming more and more prominent.

1.9 Terabit per second Mirai DDoS attack

The use of Mirai-variant botnets is still very common. In Q4, almost 3% of all attacks originated from Mirai. Of all attack methods, though, DNS-based attacks remain the attackers’ favorite. Together, DNS Floods and DNS Amplification attacks account for almost 53% of all attacks in Q4. SYN Floods follow in second place and UDP floods in third. We’ll cover the two DNS attack types here; you can learn more about UDP and SYN floods in our Learning Center.

DNS floods and amplification attacks

DNS floods and DNS amplification attacks both exploit the Domain Name System (DNS), but they operate differently. DNS is like a phone book for the Internet, translating human-friendly domain names like “www.cloudflare.com” into numerical IP addresses that computers use to identify each other on the network.

Simply put, DNS-based DDoS attacks compromise the method computers and servers use to identify one another, causing an outage or disruption without actually ‘taking down’ a server. For example, a server may be up and running, but if its DNS server is down, clients can’t resolve its address and will experience the outage all the same.

A DNS flood attack bombards a DNS server with an overwhelming number of DNS queries, usually from a DDoS botnet. The sheer volume of queries can overwhelm the DNS server, making it difficult or impossible for it to respond to legitimate queries. This can result in the aforementioned service disruptions, delays, or even an outage for those trying to access the websites or services that rely on the targeted DNS server.

On the other hand, a DNS amplification attack involves sending a small query with a spoofed IP address (the address of the victim) to a DNS server. The trick here is that the DNS response is significantly larger than the request. The server then sends this large response to the victim’s IP address. By exploiting open DNS resolvers, the attacker can amplify the volume of traffic sent to the victim, leading to a much more significant impact. This type of attack not only disrupts the victim but also can congest entire networks.
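To make the amplification concrete with purely illustrative numbers (not figures from this report): if a spoofed query of roughly 60 bytes elicits a response of roughly 3,000 bytes, the amplification factor is 3,000 / 60 = 50x, so an attacker who can send just 1 Gbps of queries through open resolvers can direct on the order of 50 Gbps of response traffic at the victim.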

In both cases, the attacks exploit the critical role of DNS in network operations. Mitigation strategies typically include securing DNS servers against misuse, implementing rate limiting to manage traffic, and filtering DNS traffic to identify and block malicious requests.

Top attack vectors

Amongst the emerging threats we track, we recorded a 1,161% increase in ACK-RST Floods as well as a 515% increase in CLDAP floods, and a 243% increase in SPSS floods, in each case as compared to last quarter. Let’s walk through some of these attacks and how they’re meant to cause disruption.

Top emerging attack vectors

ACK-RST floods

An ACK-RST Flood exploits the Transmission Control Protocol (TCP) by sending numerous ACK and RST packets to the victim. This overwhelms the victim’s ability to process and respond to these packets, leading to service disruption. The attack is effective because each ACK or RST packet prompts a response from the victim’s system, consuming its resources. ACK-RST Floods are often difficult to filter since they mimic legitimate traffic, making detection and mitigation challenging.

CLDAP floods

CLDAP (Connectionless Lightweight Directory Access Protocol) is a variant of LDAP (Lightweight Directory Access Protocol). It’s used for querying and modifying directory services running over IP networks. CLDAP is connectionless, using UDP instead of TCP, making it faster but less reliable. Because it uses UDP, there’s no handshake requirement, which lets attackers spoof the source IP address and exploit the protocol as a reflection vector. In these attacks, small queries are sent with a spoofed source IP address (the victim’s IP), causing servers to send large responses to the victim, overwhelming it. Mitigation involves filtering and monitoring unusual CLDAP traffic.

SPSS floods

An SPSS (Source Port Service Sweep) flood is a network attack method that involves sending packets from numerous random or spoofed source ports to various destination ports on a targeted system or network. The aim of this attack is two-fold: first, to overwhelm the victim’s processing capabilities, causing service disruptions or network outages; and second, to scan for open ports and identify vulnerable services. The flood is achieved by sending a large volume of packets, which can saturate the victim’s network resources and exhaust the capacities of its firewalls and intrusion detection systems. To mitigate such attacks, it’s essential to leverage in-line automated detection capabilities.

Cloudflare is here to help – no matter the attack type, size, or duration

Cloudflare’s mission is to help build a better Internet, and we believe that a better Internet is one that is secure, performant, and available to all. No matter the attack type, the attack size, the attack duration or the motivation behind the attack, Cloudflare’s defenses stand strong. Since we pioneered unmetered DDoS Protection in 2017, we’ve made and kept our commitment to make enterprise-grade DDoS protection free for all organizations alike — and of course, without compromising performance. This is made possible by our unique technology and robust network architecture.

It’s important to remember that security is a process, not a single product or flip of a switch. On top of our automated DDoS protection systems, we offer comprehensive bundled features such as firewall, bot detection, API protection, and caching to bolster your defenses. Our multi-layered approach optimizes your security posture and minimizes potential impact. We’ve also put together a list of recommendations to help you optimize your defenses against DDoS attacks, and you can follow our step-by-step wizards to secure your applications and prevent DDoS attacks. And, if you’d like to benefit from our easy-to-use, best-in-class protection against DDoS and other attacks on the Internet, you can sign up, for free, at cloudflare.com. If you’re under attack, register or call the cyber emergency hotline number shown here for a rapid response.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/ddos-threat-report-2023-q4/

Prevent spam user registration in WordPress: 2024 guide

JANUARY 20, 2024 BY PAUL G.

Spam registrations are common on WordPress websites. WordPress is the most popular content management system in the world, with over 60 percent of the CMS market, which makes it a prime target for scammers. It’s also, unfortunately, easy to create fake user accounts on the platform, requiring only an account name, email address, and password, all things spammers can simply invent.

Fake registrations can cause extensive issues, such as hogging resources, spreading malware, and creating an unmanageable user base. 

WordPress doesn’t have default functionality to combat spam user registrations, but the good news is that plugins like Shield Security PRO can fill the gap. Let’s take a look at some strategies for preventing spam user registrations.

Introduction to spam registrations in WordPress 

WordPress spam registrations occur when spammers create accounts on sites without any intention of using them for authentic purposes. Typically, spammers use automated programs or bots to create these accounts. Spammers may also use bots and spam accounts for phishing purposes, trying to acquire sensitive information from users and webmasters to compromise their security.

Website owners often underestimate the harm spam registrations can cause. These range from immediate annoyances to long-term security problems and data distortion. 

For example, spam registrations can clog your inbox, causing surges of email notifications informing you of fake sign-ups for your website. Processing and deleting these emails and accounts without getting rid of legitimate users is time-consuming and challenging. 

Spam registrations can also overload server resources, affecting performance. Spam bots can make frequent login attempts, using up your bandwidth and making your website run slower for legitimate users. 

There can also be some considerable long-term consequences. Users may tire of spam comments and stop interacting with your content. You may also struggle to analyse user data, distorting your view of how your site is functioning. This can lead to security vulnerabilities and damage your site’s SEO. 

Strategies to prevent WordPress user registration spam 

This section covers various strategies and techniques that you can implement to prevent new user registration spam and improve the overall security of your WordPress site.

Install a WordPress security plugin

The first strategy is to install a WordPress security plugin. Choosing the right security plugin not only helps prevent spam registrations on your WordPress site, but it also gives you access to a wide range of security features.

Shield Security PRO is the best plugin for improving the overall security of your WordPress site. The plugin’s key features include bad bot detection and blocking, invisible CAPTCHA codes, human and bot spam prevention, traffic rate limiting, and malware scanning. 

A screenshot of Shield Security PRO’s feature comparison. 


Disable WordPress registration

Using a plugin like ShieldPRO is the best choice to ensure the ongoing security of your WordPress site. However, there are also manual methods you can employ to help prevent user registration spam.

Disabling user registration in WordPress is one strategy. This approach eliminates the problem of spam signups entirely. You could try this option if you don’t need to collect user information, run a website with limited resources, or simply want to provide audiences with information for free.

The steps to disable registration on your WordPress site are as follows: 

  1. From the WordPress dashboard, go to Settings > General.
  2. Next, go to “Membership” and uncheck the “Anyone can register” box.
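If you prefer code, here’s a one-line sketch of the same change; users_can_register is the option behind that checkbox, and you’d run this once from a small plugin or similar:

```php
<?php
// Programmatic equivalent of unchecking "Anyone can register".
update_option( 'users_can_register', 0 );
```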

It’s worth considering that this technique prevents you from collecting visitor details, which stops you from building email lists or marketing directly to your audience. It also reduces personalisation opportunities and limits community building. 

Add CAPTCHA to your user registration form

You can also try adding CAPTCHA to your user registration form. This prevents automated spam registrations by identifying bots before they can create accounts. 

Various forms of CAPTCHA are available for your site, including the following (a minimal integration sketch follows the list):

  • reCAPTCHA: Google reCAPTCHA is a free service that combines text and images in a user-friendly interface, designed to weed out bots.
  • hCAPTCHA: hCAPTCHA is a free service that uses images and action-based tests to identify bots. This service is customisable and prioritises user privacy.
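As an illustration of how such a service hooks into WordPress, here’s a minimal sketch of wiring Google reCAPTCHA v2 into the default registration form. RECAPTCHA_SITE_KEY and RECAPTCHA_SECRET are hypothetical constants you’d define with your own keys; register_form and registration_errors are standard WordPress hooks.

```php
<?php
// Render the reCAPTCHA widget on the registration form.
add_action( 'register_form', function () {
    echo '<script src="https://www.google.com/recaptcha/api.js" async defer></script>';
    echo '<div class="g-recaptcha" data-sitekey="' . esc_attr( RECAPTCHA_SITE_KEY ) . '"></div>';
} );

// Verify the response with Google before letting the registration through.
add_filter( 'registration_errors', function ( $errors ) {
    $response = wp_remote_post( 'https://www.google.com/recaptcha/api/siteverify', array(
        'body' => array(
            'secret'   => RECAPTCHA_SECRET,
            'response' => $_POST['g-recaptcha-response'] ?? '',
        ),
    ) );
    $body = json_decode( wp_remote_retrieve_body( $response ), true );
    if ( empty( $body['success'] ) ) {
        $errors->add( 'captcha_failed', 'CAPTCHA verification failed. Please try again.' );
    }
    return $errors;
} );
```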

ShieldPRO’s AntiBot Detection Engine (ADE) avoids the need to use CAPTCHA at all. Since the plugin automatically detects and blocks bots, there’s no reason to test your visitors for signs of nuts and bolts. 

Implement geoblocking

You can also try geoblocking, a security method that limits website access to specific regions. It works by filtering IP addresses by location, only letting specific IPs enter the site. 

Geoblocking prevents spam from regions known for high levels of malicious activity. However, it also comes with drawbacks. For example, it can cause false positives, blocking legitimate site users just because they’re in the wrong country, and spammers can bypass it with proxies and VPNs.
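For illustration, here’s a minimal geoblocking sketch that assumes the site sits behind Cloudflare with IP geolocation enabled (Cloudflare then adds a CF-IPCountry header to each request); the blocked-country list is a placeholder:

```php
<?php
// Reject registrations from a placeholder list of ISO country codes,
// using the CF-IPCountry header supplied by Cloudflare.
add_filter( 'registration_errors', function ( $errors ) {
    $blocked = array( 'XX', 'YY' ); // hypothetical country codes
    $country = strtoupper( $_SERVER['HTTP_CF_IPCOUNTRY'] ?? '' );
    if ( in_array( $country, $blocked, true ) ) {
        $errors->add( 'geo_blocked', 'Registration is not available in your region.' );
    }
    return $errors;
} );
```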

Fortunately, ShieldPRO’s automated IP blocking technology more accurately and effectively stops spam users by blocking them after a specified number of offences. It detects malicious activity regardless of the traffic’s origin. 

Require manual approval for user registration

Manual user approvals can also mitigate spam registrations, offering significant benefits. The approach drastically reduces the chances of bot sign-up while also permitting you to collect legitimate user details. 

Drawbacks include the time-intensive nature of this method and the lack of scalability for larger WordPress sites. You may need to hire multiple full-time operatives to manage website administration, which can get pricey, fast.

Turn on email activation for user registration

Email activation for user registration is another popular technique to guard against spam registrations. It works by getting users to click a link in their email account to verify their details. 

Screenshot of Shield Security PRO’s email verification settings.

Shield Security PRO features a built-in email-checking feature. This tests to see if the email has a valid structure and is registered to a legitimate domain. It also checks if there are any mail exchange records for the domain, and determines if the email address goes to a disposable domain. These checks help to flag fake and temporary email addresses in user registrations. 
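To illustrate the kinds of checks described above (a rough sketch, not Shield’s actual implementation), WordPress’ is_email() can validate the address structure and PHP’s checkdnsrr() can confirm the domain publishes mail exchange (MX) records; the disposable-domain list here is a stub:

```php
<?php
// Layered email vetting: structure, MX records, then a disposable-domain
// blocklist. A sketch only; real plugins use far larger, curated lists.
function email_passes_basic_checks( $email ) {
    if ( ! is_email( $email ) ) {
        return false; // malformed address
    }
    $domain = strtolower( substr( strrchr( $email, '@' ), 1 ) );
    if ( ! checkdnsrr( $domain, 'MX' ) ) {
        return false; // domain has no mail exchange records
    }
    $disposable = array( 'mailinator.com', 'guerrillamail.com' ); // stub list
    return ! in_array( $domain, $disposable, true );
}
```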

Block spam IP addresses

One of the primary ways Shield Security PRO works is by blocking malicious IP addresses once they’ve behaved badly enough to qualify as a bot. There is no single action an IP address can take on your site that proves it’s a bot. However, certain patterns of behaviour give bots away clear as day.

“When you look at the activity as a whole,” says Paul Goodchild, creator of Shield Security PRO, “a bot’s activity on a site is clearly distinguishable from human users.”

The plugin then uses this clear indication as a signal to block the IP address entirely, stopping malicious activity in its tracks. The plugin also uses CrowdSec technology to minimise the risk of false positives and enable as many legitimate sign-ups as possible. 
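The underlying idea can be sketched in a few lines. This toy version counts offences per IP in a transient and blocks once a threshold is crossed; Shield itself persists offences in its own tables and scores many more signals:

```php
<?php
// Toy threshold-based IP blocking: each detected offence bumps a per-IP
// counter; crossing the limit blocks the request with a 403.
function record_offence( $ip, $limit = 10 ) {
    $key   = 'bot_offences_' . md5( $ip );
    $count = (int) get_transient( $key ) + 1;
    set_transient( $key, $count, DAY_IN_SECONDS );
    if ( $count >= $limit ) {
        wp_die( 'Access denied.', 'Blocked', array( 'response' => 403 ) );
    }
}
```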

Secure your WordPress site with ShieldPRO today 

The damaging impact of spam user registrations can be substantial. It can cause clogged inboxes, distorted user analytics, and server overload. The long-term consequences are diminished website SEO, reputational damage, and security vulnerabilities due to phishing and malware. 

Fortunately, there are various methods to prevent spam user registrations on WordPress websites. The most effective option is to use a plugin like Shield Security PRO, which keeps malicious bots off your website. Since most spam user registrations come from bots, this means you can rest a lot easier.

Try ShieldPRO on your WordPress sites today with a 14-day money-back guarantee. Install it to maximise your WordPress security and get some well-earned peace of mind.


Source :
https://getshieldsecurity.com/blog/stop-spam-registrations-wordpress/

SonicWall SonicOS 7.1.1 FAQ

01/11/2024

Description

This article helps answer frequently asked questions regarding SonicOS 7.1.1.

Q. What is SonicOS 7.1.1?
A. SonicOS 7.1.1 is the feature release available on all Gen 7 firewalls which brings in new capabilities around security, content filtering, integrations and virtual platforms.


Q. Will we be able to manage SonicOS 7.1 from NSM 2.3.5?
A. NSM 2.3.5 will not support SonicOS 7.1. The support for SonicOS 7.1 will be available from NSM 2.4.0, which will be released early next year (2024). Please read the following article on NSM Compatibility with SonicOS 7.1.


Q. What are the new features available on SonicOS 7.1.1?
A. The major features implemented in SonicOS 7.1.1 are DNS Filtering, reputation-based content filtering, Wi-Fi 6 access-point management, Network Access Control (NAC) integration with Aruba ClearPass, NSv bootstrapping, auto-update firmware and some other enhancements with storage and user interface (UI) for ease of use.


Q. How can existing firewall customers running SonicOS 7 upgrade/migrate to SonicOS 7.1.1?
A. You can upgrade the firewall to SonicOS 7.1 on box without using a migration tool.


Q. How can existing firewall customers running SonicOS 6.5 and previous versions upgrade to SonicOS 7.1.1?
A. Users will be required to make use of our Secure Upgrade Program to upgrade their existing hardware models to Gen 7. They will then need to migrate their settings to the new firewall running SonicOS 7.1.1.
Learn more about the Secure Upgrade Program


Q. Are there any new features in 7.1.1 that will require new licenses?
A. The DNS Filtering feature is a licensed feature that will be available as an a la carte license for Gen 7 firewalls without the APSS bundle.


Q. Do I need any additional licensing if I already have the APSS license available on my current Gen 7 firewall?
A. No.


Q. Can I perform a firmware/OS upgrade on my existing NSv NGFW running SonicOS 7.1.1?
A. Please refer to this article when upgrading your firewall: How can I upgrade SonicOS Firmware?


Q. Is there any change in behavior with regard to Policy Mode with 7.1.1?
A. There is no change in behavior with regard to Policy Mode in SonicOS 7.1.1. The NSv 270, 470, and 870 will continue to support both Global and Policy Mode. The NSsp 15700 will continue to support only Policy Mode.


Q. What is CFS 5.0? How does it differ from CFS 4.0?
A. Content Filtering Service 5.0 extends the categories offered by CFS 4.0: SonicOS 7.0.1 supported 64 categories, and that number has been increased to 89. Content Filtering 5.0 also brings performance improvements along with reputation-based blocking.


Q. What is upgrade behavior when a user upgrades from SonicOS 7.0.1 to SonicOS 7.1.1 with regard to CFS policies?
A. There will be no impact on the existing CFS policies; however, as CFS 5.0 brings in reputation-based blocking, users will be required to configure the CFS policies with the new reputation parameter in CFS 5.0. Please refer to this upgrade article.


Q. Can we downgrade the firewall from SonicOS 7.1 to SonicOS 7.0?
A. The downgrade of firmware from SonicOS 7.1 to SonicOS 7.0 is not supported. Please refer to this article when upgrading your firewall.


Q. What is DNS Filtering? How is it different from the current DNS capabilities in SonicOS 7.0.1?
A. DNS Filtering inspects the DNS traffic in real time and provides the ability to block threats and access to malicious websites. DNS Filtering blocks threats before they can reach your network. The DNS security capabilities on 7.0.1 include DNS Tunnel Detection and DNS Sinkholes. Please read DNS Security to understand them in detail.


Q. What is the upgrade behavior when users upgrade from SonicOS 7.0.1 to SonicOS 7.1.1 with regard to DNS proxy and sink-holing?
A. The upgrade from SonicOS 7.0.1 to SonicOS 7.1.1 has no impact on the DNS proxy and sinkhole behavior that existed prior to the upgrade.


Q. What does the NAC integration feature do?
A. SonicWall Next-Generation Firewalls (NGFWs) provide a RESTful threat API that integrates with Aruba ClearPass for network access control (NAC). ClearPass can pass security context vectors, including Source IP, Source MAC, User ID, User Role, Domain, Device Category, Device Family, Device Name, OS Type, Hostname, and Health Posture, to SonicWall NGFWs to enforce real-time rules based on device type, OS, and device health posture at every point of control. When an alert is generated on a client machine, ClearPass can share it with SonicWall NGFWs, triggering a range of predetermined, policy-based actions from quarantine to blocking.


Q. Does this NAC integration feature work with any NAC providers?
A. No, this NAC integration only works with Aruba ClearPass.


Q. Which access point models can I integrate with firewalls running SonicOS 7.1.1?
A. With the launch of SonicOS 7.1.1, users will now also be able to integrate and manage Wi-Fi 6 APs like 621, 641 and 681.


Q. How can I automate NSv deployment using the bootstrapping feature? Which platforms support this feature?
A. Bootstrapping helps with automated NSv deployments, and token-based registration will help ease the bootstrapping process. KVM already supported bootstrapping in SonicOS 7.0.1. With the launch of 7.1.1, other platforms like VMware, Hyper-V, AWS, and Azure will also support bootstrapping.


Q. How is the bootstrapping process different between private cloud and public cloud?
A. The bootstrapping process is not different between private cloud and public cloud. SonicOS supports bootstrapping on AWS, Azure, VMware, KVM and Hyper-V.


Q. What are the new parameters that will be stored in secondary storage modules with the launch of 7.1.1?
A. TSRs, settings exports (.exp), PCAPs, threat logs, and AppFlow logs will be stored in the secondary storage module as part of SonicOS 7.1.1.


Q. Will the new features available in SonicOS 7.1.1 be available in the Capture Threat Assessment (CTA) report?
A. During the launch, the new features in SonicOS 7.1.1 will not be included in the CTA report.


Q. Are the new features available on NSM?
A. Yes. The upcoming NSM version 2.4 is planned to support the new features on SonicOS 7.1.1.


Q. Can I manage SonicOS 7.1.1 on the previous versions of NSM (prior to 2.4)?
A. You can upgrade the SonicOS version to 7.1.1, but the new features which are part of 7.1.1 will not be available on NSM versions prior to 2.4.


Q. What are the best practices to be followed on SonicOS 7.1.1?
A. Please follow the best practices when upgrading the firewall from SonicOS 7.0.1 to SonicOS 7.1 documented here.

The migration tool is not required for the configuration migration from SonicOS 7.0 to SonicOS 7.1. Any customer migrating from Gen 6 to SonicOS 7.1 would need to upgrade to SonicOS 7.0.1 using the migration tool and then migrate to SonicOS 7.1.

DNS Filtering is the first line of defense and works independently of Content Filtering Services (CFS). Please follow the admin guides for seamless configuration with best practices.

Q. What is the new website for URL rating and reputation lookup with CFS 5.0?
A. https://cfssupportapi.global.sonicwall.com/


Q. How can I check the URL rating on the firewall UI?
A. Device > Diagnostics > URL Rating Request Tool


Q. What is the performance impact of enabling the new SonicOS 7.1 features on an existing firewall?
A. We do not expect there to be any impact on the performance of an existing firewall because of new features.


Q. Can DNS proxy 4to4 and 4to6 features work alongside DNS filtering? Can this be accomplished by adding an additional DNS proxy-only rule alongside a DNS filtering rule for X0 Interface? If so, what will take precedence/priority?
A. DNS rules give the choice of either proxy or filtering on a single rule. When proxy is enabled, Client 4to4 or 4to6 DNS queries can be proxied. When DNS filtering is enabled, only Client 4to4 Requests DNS queries will be proxied and filtered.
  • While DNS proxies will process both DNS TCP and DNS UDP, DNS filtering is only for DNS UDP.
  • Both proxy and filtering DNS rules can be stacked; the most specific match will be applied, and the lookup precedence/priority is top-down.
  • To have DNS proxy 4to6 alongside DNS filtering, the proxy rule must explicitly have the source zone and address of the 4to6 clients for the traffic to hit the rule and the policy to be applied.


Q. Can DNS Filtering be applied on custom zones or is it restricted to default zones, LAN, DMZ and WLAN?
A. DNS Filtering can be applied to LAN, DMZ and WLAN zones as well as custom zones with Trusted, Public and Wireless Security Types.


Q. How long does a cache entry last before we request a category for a specific domain again?
A. The cache entry of a domain would depend on the TTL of the domain. 


Q. Are there plans to support DNS over TLS and DNS over HTTPS?
A. Yes. DNS over TLS and DNS over HTTPS will be available in a future release.


Q. Will the DNS Filtering license be included with any existing bundle or does the customer need to buy it separately?
A. DNS Filtering will be available with APSS and there will be a la carte SKUs for EPSS, TPSS and HW only.

Q. What happens to the WNM managed access-point when the firewall is upgraded to SonicOS 7.1?

A. Please note that if you have 600 series access points on the network connected to a WLAN zone of a firewall running 7.0.x and managed by WNM, after the update to 7.1 the access points will be acquired by the firewall, and the WNM settings will not be carried over. Please "Disable SonicPoint/SonicWave management" on the WLAN zone for seamless management.


Source :
https://www.sonicwall.com/support/knowledge-base/sonicos-7-1-1-faq/231212121859137/

What is the minimum recommended length of twisted pair copper cable that I should use with my SonicWall firewalls’ HA ports for high availability?

Description

What is the minimum recommended length of twisted pair copper cable that I should use with my SonicWall firewalls’ HA ports for high availability?

Resolution

At present, there is no officially published minimum length for a twisted pair cable from IEEE or ANSI, though there is information about maximum lengths for twisted pair cable.

However, customers sometimes want to use a cable that is only several inches long for this, because it looks neat and tidy and it’s one less cable to string through a rackmount cable channel. Using such a short cable often causes problems: customers have reported issues where the firewalls appear to lock up, and they can neither be managed nor pass traffic.

This occurs because the extremely short HA cable disrupts the transmission of the HA heartbeats. In an HA pair, when the idle unit does not receive heartbeats for the configured interval and time threshold, it goes active. However, if the other unit in the HA pair is still active, both units’ interfaces will compete for the same addressing, creating an IP conflict on every interface of both firewalls. This prevents the devices from being managed, and it also prevents them from passing traffic.

With regards to cable length, the following needs to be considered:

1.) Crosstalk

Crosstalk is when a signal sent on one circuit interferes with another signal sent on a separate but adjacent circuit. This is usually caused by circuits being close together. With ethernet cabling, this effect is reduced by twisting the circuit pairs. This reduces the circuits’ ability to interfere with one another while traveling the length of the copper media. With an extremely short cable, there is usually not enough twisted pair to prevent crosstalk interference.

2.) Return Loss

Return loss is essentially the loss of a signal’s power when it is returned or reflected by a discontinuity in the cabling (i.e., a point in the transmission line where the signal cannot conduct fully to the next leg of the pathway). It is desirable to have a high level of return loss (i.e., little reflectivity). Low return loss can be caused by problems at the termination point of the cable, or by a device which is in line with the transmission pathway. A shorter cable presents a potential for lower return loss, because there is less wire to degrade the reflection of signals.

3.) Cable Quality

The quality of cabling varies from vendor to vendor, depending upon how accurate the manufacturing equipment is. Some vendors do not twist their cabling as efficiently as others do, and some have lower-quality crimps. Cables crimped by hand often suffer greatly by comparison to manufactured cables, as one can only be so precise with a hand-crimping tool. The most common problem with custom cables is a loss of twisting near the termination point of the cable. Most vendors who make cables less than half a meter in length do not have those cables certified by any standards body.


For high availability, SonicWall support recommends using a patch or crossover (NSA units have MDIX autosensing capabilities on their interfaces) cable which is no shorter than 1 meter in length (about three feet). There are many posted discussions on this topic available to read online; however, this post from a Fluke Networks employee at forums.bicsi.org sums up these discussions very well.

http://forums.bicsi.org/Topic2210-4-1.aspx#bm2215
—-
“If you are talking specifically about patch cords, then 0.5 m is the implied minimum length in ANSI/TIA/EIA-568-B.2-1 for a certified patch cord. That’s because the math for the limit lines really does not work below this. In fact, getting a certified patch cord of 0.5 m is going to be tricky. Many vendors only offer a certified patch cord of 1.0 m or longer.”

Source :
https://www.sonicwall.com/support/knowledge-base/what-is-the-minimum-recommended-length-of-twisted-pair-copper-cable-that-i-should-use-with-my-sonic/170505905452401/

Black Basta-Affiliated Water Curupira’s Pikabot Spam Campaign

By: Shinji Robert Arasawa, Joshua Aquino, Charles Steven Derion, Juhn Emmanuel Atanque, Francisrey Joshua Castillo, John Carlo Marquez, Henry Salcedo, John Rainier Navato, Arianne Dela Cruz, Raymart Yambot, Ian Kenefick
January 09, 2024
Read time: 8 min (2105 words)

A threat actor we track under the Intrusion set Water Curupira (known to employ the Black Basta ransomware) has been actively using Pikabot, a loader malware with similarities to Qakbot, in spam campaigns throughout 2023.

Pikabot is a type of loader malware that was actively used in spam campaigns by a threat actor we track under the Intrusion set Water Curupira in the first quarter of 2023, followed by a break at the end of June that lasted until the start of September 2023. Other researchers have previously noted its strong similarities to Qakbot, the latter of which was taken down by law enforcement in August 2023. An increase in the number of phishing campaigns related to Pikabot was recorded in the last quarter of 2023, coinciding with the takedown of Qakbot — hinting at the possibility that Pikabot might be a replacement for the latter (with DarkGate being another temporary replacement in the wake of the takedown).

Pikabot’s operators ran phishing campaigns, targeting victims via its two components — a loader and a core module — which enabled unauthorized remote access and allowed the execution of arbitrary commands through an established connection with their command-and-control (C&C) server. Pikabot is a sophisticated piece of multi-stage malware with a loader and core module within the same file, as well as a decrypted shellcode that decrypts another DLL file from its resources (the actual payload).

In general, Water Curupira conducts campaigns for the purpose of dropping backdoors such as Cobalt Strike, leading to Black Basta ransomware attacks (coincidentally, Black Basta also returned to operations in September 2023). The threat actor conducted several DarkGate spam campaigns and a small number of IcedID campaigns in the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Pikabot, which gains initial access to its victim’s machine through spam emails containing an archive or a PDF attachment, exhibits the same behavior and campaign identifiers as Qakbot.

Figure 1. Our observations from the infection chain based on Trend’s investigation

Initial access via email

The malicious actors who send these emails employ thread-hijacking, a technique where malicious actors use existing email threads (possibly stolen from previous victims) and create emails that look like they were meant to be part of the thread to trick recipients into believing that they are legitimate. Using this technique increases the chances that potential victims would select malicious links or attachments. Malicious actors send these emails using addresses (created either through new domains or free email services) with names that can be found in original email threads hijacked by the malicious actor. The email contains most of the content of the original thread, including the email subject, but adds a short message on top directing the recipient to open the email attachment.

This attachment is either a password-protected archive ZIP file containing an IMG file or a PDF file. The malicious actor includes the password in the email message. Note that the name of the file attachment and its password vary for each email.

Figure 2. Sample email with a malicious ZIP attachment
Figure 3. Sample email with a malicious PDF attachment

The emails containing PDF files have a shorter message telling the recipient to check or view the email attachment.

The first stage of the attack

The attached archive contains a heavily obfuscated JavaScript (JS) with a file size amounting to more than 100 KB. Once executed by the victim, the script will attempt to execute a series of commands using conditional execution.

Figure 4. Files extracted from the attached archive (.zip or .img)
Figure 5. Deobfuscated JS command

The script attempts command execution using cmd.exe. If this initial attempt is unsuccessful, the script proceeds with the following steps: It echoes a designated string to the console and tries to ping a specified target using the same string. In case the ping operation fails, the script employs Curl.exe to download the Pikabot payload from an external server, saving the file in the system’s temporary directory.

Subsequently, the script will retry the ping operation. If the retry is also unsuccessful, it uses rundll32.exe to execute the downloaded Pikabot payload (now identified as a .dll file) with “Crash” as the export parameter. The sequence of commands concludes by exiting the script with the specified exit code, ciCf51U2FbrvK.

We were able to observe another attack chain where the malicious actors implemented a more straightforward attempt to deliver the payload. As before, similar phishing techniques were performed to trick victims into downloading and executing malicious attachments. In this case, password-protected archive attachments were deployed, with the password contained in the body of the email.

However, instead of a malicious script, an IMG file was extracted from the attachment. This file contained two additional files — an LNK file posing as a Word document and a DLL file, which turned out to be the Pikabot payload extracted straight from the email attachment:

Figure 6. The content of the IMG file

Contrary to the JS file observed earlier, this chain maintained its straightforward approach even during the execution of the payload.

Once the victim is lured into executing the LNK file, rundll32.exe will be used to run the Pikabot DLL payload using an export parameter, “Limit”.

The content of the PDF file is disguised to look like a file hosted on Microsoft OneDrive to convince the recipient that the attachment is legitimate. Its primary purpose is to trick victims into accessing the PDF file content, which is a link to download malware.

Figure 7. Malicious PDF file disguised to look like a OneDrive attachment; note the misspelling of the word “Download”

When the user selects the download button, it will attempt to access a malicious URL, then proceed to download a malicious JS file (possibly similar to the previously mentioned JS file).

The delivery of the Pikabot payload via PDF attachment is a more recent development, emerging only in the fourth quarter of 2023.

We discovered an additional variant of the malicious downloader that employed obfuscation methods involving array usage and manipulation:

Figure 8. Elements of array “_0x40ee” containing download URLs and JS methods used for further execution

Nested functions employed array manipulation methods using “push” and “shift,” introducing complexity to the code’s structure and concealing its flow to hinder analysis. The presence of multiple download URLs, the dynamic creation of random directories using the mkdir command, and the use of Curl.exe, as observed in the preceding script, are encapsulated within yet another array. 

The JavaScript will run multiple commands in an attempt to retrieve the malicious payload from different external websites using Curl.exe, subsequently storing it in a random directory created using mkdir.

Figure 9. Payload retrieval commands using curl.exe

The rundll32.exe file will continue to serve as the execution mechanism for the payload, incorporating its export parameter.

Figure 10. Payload execution using rundll32.exe

The Pikabot payload

We analyzed the DLL file extracted from the archive shown in Figure 6 and found it to be a 32-bit DLL file with 1515 exports. Calling its export function “Limit”, the file will decrypt and execute a shellcode that identifies whether the process is being debugged by calling the Windows API NtQueryInformationProcess twice, with the flag 0x7 (ProcessDebugPort) on the first call and 0x1F (ProcessDebugFlags) on the second call. This shellcode also decrypts another DLL file that it loads into memory and then eventually executes.

Figure 11. The shellcode calling the entry point of the decrypted DLL file

The decrypted DLL file will execute another anti-analysis routine by loading incorrect libraries and other junk to detect sandboxes. This routine seems to be copied from a certain GitHub article.

Security/Virtual Machine/Sandbox DLL files | Real DLL files      | Fake DLL files
cmdvrt.32.dll                              | kernel32.dll        | NetProjW.dll
cmdvrt.64.dll                              | networkexplorer.dll | Ghofr.dll
cuckoomon.dll                              | NlsData0000.dll     | fg122.dll
pstorec.dll                                |                     |
avghookx.dll                               |                     |
avghooka.dll                               |                     |
snxhk.dll                                  |                     |
api_log.dll                                |                     |
dir_watch.dll                              |                     |
wpespy.dll                                 |                     |
Table 1. The DLL files loaded to detect sandboxes

After performing the anti-analysis routine, the malware loads a set of PNG images from its resources section which contains an encrypted chunk of the core module and then decrypts them. Once the core payload has been decrypted, the Pikabot injector creates a suspended process (%System%\SearchProtocolHost) and injects the core module into it. The injector uses indirect system calls to hide its injection.

Figure 12. Loading the PNG images to build the core module

Resolving the necessary APIs is among the malware’s initial actions. Using a hash of each API (0xF4ACDD8, 0x03A5AF65E, and 0xB1D50DE4), Pikabot uses two functions to obtain the addresses of the three necessary APIs: GetProcAddress, LoadLibraryA, and HeapFree. This process is done by looking through kernel32.dll exports. The rest of the used APIs are resolved using GetProcAddress with decrypted strings. Other pertinent strings are also decrypted during runtime before they are used.

Figure 13. Harvesting the GetProcAddress and LoadLibrary API

The Pikabot core module checks the system’s languages and stops its execution if the language is any of the following:

  • Russian (Russia)
  • Ukrainian (Ukraine)

It will then ensure that only one instance of itself is running by creating a hard-coded mutex, {A77FC435-31B6-4687-902D-24153579C738}.
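
For illustration, the named-mutex trick is easy to reproduce. Below is a minimal, benign PowerShell sketch of the same single-instance check — our own snippet, not Pikabot code; only the GUID (the hard-coded mutex name above) comes from the sample.

# Benign sketch of a named-mutex single-instance check.
$createdNew = $false
$mutex = New-Object -TypeName System.Threading.Mutex -ArgumentList $true, '{A77FC435-31B6-4687-902D-24153579C738}', ([ref]$createdNew)
if (-not $createdNew) {
    # Another process already owns the mutex, so a second instance exits here.
    Write-Output 'Mutex already exists; another instance is running.'
    exit
}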

The next stage of the core module involves obtaining details about the victim’s system and forwarding them to a C&C server. The collected data uses a JSON format, with every data item using the wsprintfW function to fill its position. The stolen data will look like the image in Figure 14, which shows the collected information before encryption:

Figure 14. Stolen information in JSON format before encryption

Pikabot seems to have a binary version and a campaign ID. The keys 0fwlm4g and v2HLF5WIO are present in the JSON data, with the latter seemingly being a campaign ID.

The malware creates a named pipe and uses it to temporarily store the additional information gathered by creating the following processes: 

  • whoami.exe /all
  • ipconfig.exe /all
  • netstat.exe -aon

Each piece of information returned will be encrypted before the execution of the process.

A list of running processes on the system will also be gathered and encrypted by calling CreateToolHelp32Snapshot and listing processes through Process32First and Process32Next.

Once all the information is gathered, it will be sent to one of the following IP addresses appended with the specific URL, cervicobrachial/oIP7xH86DZ6hb?vermixUnintermixed=beatersVerdigrisy&backoff=9zFPSr: 

  • 70[.]34[.]209[.]101:13720
  • 137[.]220[.]55[.]190:2223
  • 139[.]180[.]216[.]25:2967
  • 154[.]61[.]75[.]156:2078
  • 154[.]92[.]19[.]139:2222
  • 158[.]247[.]253[.]155:2225
  • 172[.]233[.]156[.]100:13721

However, as of writing, these sites are inaccessible.

C&C servers and impact

As previously mentioned, Water Curupira conducts campaigns to drop backdoors such as Cobalt Strike, which leads to Black Basta ransomware attacks. It is this potential association with a sophisticated type of ransomware such as Black Basta that makes Pikabot campaigns particularly dangerous.

The threat actor also conducted several DarkGate spam campaigns and a small number of IcedID campaigns during the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Lastly, we have observed distinct clusters of Cobalt Strike beacons with over 70 C&C domains leading to Black Basta, dropped via campaigns conducted by this threat actor.

Security recommendations

To avoid falling victim to various online threats such as phishing, malware, and scams, users should stay vigilant when it comes to emails they receive. The following are some best practices in user email security:

  • Always hover over embedded links with the pointer to learn where the link leads.
  • Check the sender’s identity. Unfamiliar email addresses, mismatched email and sender names, and spoofed company emails are signs that the sender has malicious intent.
  • If the email claims to come from a legitimate company, verify both the sender and the email content before downloading attachments or selecting embedded links.
  • Keep operating systems and all pieces of software updated with the latest patches.
  • Regularly back up important data to an external and secure location. This ensures that even if you fall victim to a phishing attack, you can restore your information.

A multilayered approach can help organizations guard possible entry points into their system (endpoint, email, web, and network). Security solutions can detect malicious components and suspicious behavior, which can help protect enterprises.  

  • Trend Vision One™ provides multilayered protection and behavior detection, which helps block questionable behavior and tools before ransomware can do any damage. 
  • Trend Cloud One™ – Workload Security protects systems against both known and unknown threats that exploit vulnerabilities. This protection is made possible through techniques such as virtual patching and machine learning.  
  • Trend Micro™ Deep Discovery™ Email Inspector employs custom sandboxing and advanced analysis techniques to effectively block malicious emails, including phishing emails that can serve as entry points for ransomware.  
  • Trend Micro Apex One™ offers next-level automated threat detection and response against advanced concerns such as fileless threats and ransomware, ensuring the protection of endpoints.
     

Indicators of Compromise (IOCs)

The indicators of compromise for this blog entry can be found here.

Source :
https://www.trendmicro.com/it_it/research/24/a/a-look-into-pikabot-spam-wave-campaign.html

Stopping bot traffic: A guide for WordPress websites

DECEMBER 18, 2023 BY PAUL G.

When you picture your website visitors, you most likely picture a person sitting at a desk, or perhaps scrolling on their phone. However, not all your site’s visitors are flesh and bone; many are in fact bots, running automated tasks. 

Although some of these bots are legitimate, others can put your site at risk, so it’s important to take appropriate security measures. This article will take you through the ways bots interact with your site, give you some insights on the risks of leaving bad bots unchecked, and take you through how Shield Security PRO can help protect your site. 

What are WordPress bots?

Before we dive into how to protect your WordPress site from bad bots, let’s take a step back and talk about bots in general. Put simply, a bot is software that runs an automated task. 

Many of the bots that visit your website are perfectly fine – and, indeed, there are many good bots that you want to visit your site. For example, search engine crawlers automatically evaluate the value of your site’s content to determine its rank in search results.

However, there are also bots out there designed with nefarious purposes in mind. In the next section, we will look at good vs. bad bots in more detail so you know which ones you need to look out for. 

It’s worth remembering: One of the key challenges in cybersecurity is giving both good bots and human users a positive experience on your site, without enabling malicious bots to wreak havoc and compromise your security.

Good bots vs. bad bots

You may be surprised to learn that there are several kinds of good bots out there that should be perfectly welcome on your website. We mentioned search engine crawlers earlier, but they’re just one form of friendly bot that could visit your site. Others include:

  • Uptime monitoring bots: These collect performance data so you can see how well your site is doing 
  • SEO tracking bots: Many sites looking to improve their search engine ranks use analytics software to evaluate results. Tracking bots collect the data reflected in your key performance indicators.
  • Translation bots: These assist with language translation by automatically translating content to another language, helping viewers understand what your web pages are about.
  • AI Bots: AI companies use site crawlers to train their AI systems, particularly in terms of language learning. 

Some types of bad bots include: 

  • Comment spam bots: These are bots that automatically leave irrelevant comments on your site, often advertising another product or service, and generating links to that site. 
  • Brute force bots: Some cybercriminals use bots to perform brute force attacks in order to guess login credentials and gain access to restricted information. 
  • Probing bots: These are bots that simply probe your site for vulnerabilities – you can think of them as casing the joint. If they find any, they make a note so attackers can come back and exploit those vulnerabilities later. 

All of these can sap your resources and make you more vulnerable to major cyber security threats. The right cyber security approach will allow good bots to do their thing without leaving the door open to the baddies. 

Real-world Examples: How bad bots put your website at risk

Left unchecked, bad bots can damage your business in both the short and long term. They can drain your resources and increase your vulnerability to hacking attempts. Bots may flood your contact forms and comment sections with spam, which clutters your site and damages your credibility.

One example of what happens when bots are left to run wild on a site is the Dunkin Donuts attack in 2015. Hackers used a technique called “credential stuffing” to gain access to and steal money from customer accounts: bots take compromised passwords obtained from previous breaches and use them to log in to customer accounts, stealing data and card details.


According to a lawsuit filed against Dunkin, the coffee shop’s parent company failed to address the attacks, despite warnings from developers to do so. While the company never accepted or denied responsibility for the hacks, it agreed to a $650,000 settlement.

This illustrates that the stakes can get very high, especially when you’re handling sensitive information. Blocking bad bots from your website protects your business, your customers and your reputation, by restricting access to your site and data.

Bots are a drain on your site’s resources

Even if bots don’t put you in direct financial harm, they will still consume your site’s resources. 

An example of this is the case of Geeks2you, where bots were used to attempt to gain access to their servers. Monitoring software discovered more than 8,000 failed login attempts, and at least another 5,000 each hour after the attack was discovered.

While it was extremely hard for the attackers to actually get into the server (thanks to the company’s excellent password policy), with at least two hacking attempts every second, the attack ate into resources and rapidly degraded the site’s responsiveness for legitimate visitors.

This demonstrates the harmful impact bots can have, even just for failed attempts to hack a site. Users can be robbed of a pleasant experience, sites can load slowly, images may not look right, and on-page features may fail. This can damage your reputation and cause you to lose valuable traffic. 

Bottom line: At a minimum, bad bots hog your resources and drag down your site’s performance. 

Your Solution: The AntiBot Detection Engine

When it comes to stopping bot traffic, you need to find a technological solution that can filter out the bad and leave you with the good. This is where Shield Security PRO comes in. 

The AntiBot Detection Engine, or ADE, works to distinguish between good bots, bad bots, and human users based on the behaviour of each visitor on the site. It can also distinguish fake web crawlers from true web crawlers. 

The way the technology does this is with “bot signals” it watches for when visitors interact with the site. (We’ll take a closer look at how the ADE does this in the next section.) 

Shield Security PRO displays bot signals logged and bot likelihood for blocked IP addresses.

When a user crosses the threshold of acceptable suspicious activity, Shield Security PRO automatically blocks their IP address and stops them from being able to access your site.

Spotting bot behaviour: login attempts 

One example of bad bot behaviour the ADE is designed to spot is excessive login attempts. Shield Security PRO can detect and capture login bots that can slow down your site and cause harm going forward. It does this by penalising visitors who use a valid username but the wrong password, as well as those who try to log in without a username or with a username that doesn’t exist.

Legitimate users might get their username and password wrong once in a while, but their behaviour is still going to be easy to distinguish from bots, especially when you look at their actions across the site as a whole. 

“Bots are just computer programs,” said Paul Goodchild, creator of Shield Security PRO, “They perform a limited number of tasks, such as login attempts, comment SPAM, and probing to trigger 404 errors.

“When you look at all these actions collectively,” Goodchild continued, “it looks nothing like normal human activity. The ADE acts as a ‘bot watcher’, looking at all requests collectively to sort the bots from the people.”

All-sides defence with Shield Security PRO 

ADE and bad bot blocking are core features of Shield Security PRO, but they’re also just a couple of the plugin’s features designed to keep your site safe and secure. For example, the security plugin has a comprehensive dashboard that allows you to see the current state of your website at a glance.  

Screenshot of Shield Security PRO’s security dashboard.

Other functionalities that help Shield Security PRO protect your site include:

  • DoS protection with traffic rate limiting: This essentially limits the rate at which traffic can access a network or web service, stopping it from being overwhelmed. DoS attacks aim to overwhelm a system’s resources, ultimately slowing or shutting down the site.
  • Malware detection and vulnerability scanning: These are essential to your website’s safety, and identify and mitigate potential threats to your system. Our technology offers real-time protection and firewalls, scans for patterns or signs of existing malware, and identifies flaws and weak points in your defence.
  • Login protection for WooCommerce and other WordPress plugins: Shield Security PRO allows you to set up strong password requirements and two-factor authentication, keeping site access secure. You can also set customizable login attempt limits to further protect your site from malicious access attempts. 

Cybersecurity is most effective when you tackle it from all sides. The Shield Security PRO plugin kicks bad bots and suspicious visitors off your site and helps you detect any threats that do manage to sneak through. 

Banish bots from your site with Shield Security PRO 

If you let bad bots have unlimited access to your site, you’re taking a serious risk. Bad bots can increase your chances of hacking and data loss, as well as hog server resources and slow your site down. Both of these can damage your reputation as well as your bottom line.

Site owners can take action and protect their websites with a bad-bot blocking plugin like Shield Security PRO. The ADE efficiently identifies bad bots and blocks their IP addresses so they can’t bring their nefarious plans to fruition.

Don’t delay, get started with Shield Security PRO and kick bad bots off your site today for instant peace of mind.


Source :
https://getshieldsecurity.com/blog/stopping-bot-traffic-guide-wordpress-websites/

Configuring DFSR to a Static Port – The rest of the story

By Ned Pyle
Published Apr 04 2019 02:39 PM

First published on TechNet on Jul 16, 2009
Ned-san here again. Customers frequently call us about configuring their servers to listen over specific network ports. This is usually to satisfy firewall rules – more on this later. A port in TCP/IP is simply an endpoint to communication between computers. Some are reserved, some are well-known, and the rest are simply available to any application to use. Today I will explain the network communication done through all facets of DFSR operation and administration. Even if you don’t care about firewalls and ports, this should shed some light on DFSR networking in general, and may save you skull sweat someday.

DFSR and RPC

Plenty of Windows components support hard-coding to exclusive ports, and at a glance, DFSR is no exception. By running the DFSRDIAG STATICRPC command against the DFSR servers you force them to listen on whatever port you like for file replication:

[screenshot]
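
If you want to try this yourself, here is a hedged example of that command; the port 55555 and the member name SERVER02 are illustrative values, not requirements:

dfsrdiag staticrpc /port:55555 /member:SERVER02

Setting /port:0 switches the member back to dynamic port selection.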

Many Windows RPC applications use the Endpoint Mapper (EPM) component for these types of client-server operations. It’s not a requirement though; an RPC application is free to declare its own port and only listen on that one, with a client that is hard-coded to contact that port only. The dynamic range of ports is 1025-5000 in Windows Server 2003 and older, and 49152-65535 in Windows Vista and later. DFSR uses EPM.

Update 3/3/2011 (nice catch Walter)

As you have probably found, we later noticed a bug in DFSR on Win2008 and Win2008 R2 DCs (only – not member servers) where the service would always send-receive on port 5722. This article was done before that and doesn’t reflect it. Read more on this here:

http://support.microsoft.com/default.aspx?scid=kb;EN-US;832017

http://blogs.technet.com/b/askds/archive/2010/05/14/friday-mail-sack-it-s-about-to-get-re…
All of the below is accurate for non-DCs

By setting the port, you are telling EPM to always respond with the same port instead of one within the dynamic range. So when DFSR contacts the other server, it only needs to use two ports:

[screenshot]

So with a Netmon 3.3 capture, it will look something like this when the DFSR service starts up:

1. The local computer opens a dynamic client port and connects to EPM on the remote computer, asking for connectivity to DFSR.

[screenshot]

2. That remote computer responds with a port that the local computer can connect to for DFSR communication. Because I have statically assigned port 55555, the remote computer will always respond with this port.

[screenshot]

3. The local computer then opens a new client port and binds to that RPC port on the remote server, where the DFSR service is actually listening. At this point two DFSR servers can replicate files between each other.

[screenshot]

The Rest of the Story

If it’s that easy, why the blog post? Because there’s much more DFSR than just the RPC replication port. To start, your DFSR servers need to be able to contact DC’s. To do that, they need name resolution. And they will need to use Kerberos. And the management tools will need DRS API connectivity to the DC’s. There will also need to be SMB connectivity to create replicated folders and communicate with the Service Control Manager to manipulate DFSR. And all of the above also need the dynamic client ports available outbound through the firewall to allow that communication. So now that’s:

  • EPM port 135 (inbound on remote DFSR servers and DC’s)
  • DFSR port (inbound on remote DFSR servers)
  • SMB port 445 (inbound on remote DFSR servers)
  • DNS port 53 (inbound on remote DNS servers)
  • LDAP port 389 (inbound on remote DC’s)
  • Kerberos port 88 (inbound on remote DC’s)
  • Ports 1025-5000 or 49152-65535 (outbound, Win2003 and Win2008 respectively – and inbound on remote DC’s).
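
As a side note, on newer systems you could script these inbound openings with PowerShell. This is only a hedged sketch (the NetSecurity cmdlets ship with Windows Server 2012 and later, not the Win2003/Win2008 systems discussed here), run on the DFSR member itself; 55555 is the illustrative static DFSR port used in this post:

# Open EPM, SMB, and the static DFSR port inbound on a DFSR member.
foreach ($port in 135, 445, 55555) {
    New-NetFirewallRule -DisplayName "DFSR TCP $port" -Direction Inbound -Protocol TCP -LocalPort $port -Action Allow
}

The DNS, LDAP, and Kerberos ports in the list above are opened on the DNS servers and DC's, not on the DFSR member.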

Let’s see this in action. Here I gathered a Netmon 3.3 capture of configuring a new replication group:

  • Server-01 – IP 10.10.0.101 – DC/DNS
  • Server-02 – IP 10.10.0.102 – DFSR
  • Server-03 – IP 10.10.0.103 – DFSR
  • Server-04 – IP 10.10.0.104 – Computer running the DFSMGMT.MSC snap-in

1. First the snap-in gets name resolution for the DC from my management computer (local port 51562 to remote port 53):

[screenshot]

2. Then it contacts the DC – the EPM is bound (local port 49199 to remote port 135) and a dynamic port is negotiated so that the client knows which port on which to talk to the DC (port 49156).

[screenshot]

3. Having connected to the DC through RPC to DRS (a management API), it then returns information about the domain and other things needed by the snap-in.

[screenshot]

4. The snap-in then performs an LDAP query to the DC to locate the DFSR-GlobalSettings container in that domain so that it can read in any new Replication Groups (local port 49201 to remote port 389).

[screenshot]

5. The snap-in performs LDAP and DNS queries to get the names of the computers being selected for replication:

[screenshot]

6. The DFSR service must be verified (is it installed? Is it running?) This requires a Kerberos CIFS (SMB) request to the DC as well as an SMB connection to the DFSR servers – this is actually a ‘named pipe’ operation over remote port 445, where RPC uses SMB as a transport:

[screenshots]

7. The Replicated Folders are created (or verified to exist) on the DFSR servers – I called mine ‘testrf’. This uses SMB again from the snap-in computer to the DFSR server, over remote port 445:

[screenshot]

8. The snap-in will write all the configuration data through LDAP over remote port 389 against the DC. This creates all the AD objects and attributes, creates the topology, writes to each DFSR computer object, etc. There are quite a few frames here so I will just highlight a bit of it:

[screenshot]

9. If you wait for AD replication to complete and the DFSR servers to poll for changes, you will see the DFSR servers request configuration info through LDAP, and then start working normally on their static RPC port 55555 – just like I showed at the beginning of this post above.

DCOM and WMI

All of the things I’ve discussed are guaranteed needs in order to use DFSR. For the most part you don’t have to have too many remote ports open on the DFSR server itself. However, if you want to use tools like DFSRDIAG.EXE and WMIC.EXE remotely against a DFSR server, or have a remote DFSR server generate ‘Diagnostic Health Reports’, there is more to do.

DFSR utilizes Windows Management Instrumentation as its ‘quasi-API’. When tools like DFS Management are run to generate health reports, or DFSRDIAG POLLAD is targeted against a remote server, you are actually using DCOM and WMI to tell the targeted server to perform actions on your behalf.

There is no mechanism to control which RPC port DCOM/WMI will listen on as there is for DFSR and other services. At service startup DCOM/WMI will pick the next available dynamic RPC port. This means in theory that you would have to open the entire range of dynamic ports for the target OS: 1025-5000 (Win2003) or 49152-65535 (Win2008).

For example, here I am running DFSRDIAG POLLAD /MEM:2008-02 to force that server to poll its DC for configuration changes. Note the listening port that I am talking to on the DFSR server (hint – it’s not 55555):

[screenshots]

And in my final example, here I am running the DFS Management snap-in and requesting a diagnostic health report. Note again how we use DCOM/WMI/RPC and do not connect directly to the DFSR service; again this requires that we have all those inbound dynamic ports open on the DFSR server:

[screenshot]

Wrap Up

So is it worth it to try and use a static replication port? Maybe. If you don’t plan on directly administering a DFSR server and just need it talking to its DC, its DNS server, and its replication partners, you can definitely keep the number of ports used quite low. But if you ever want to communicate directly with it as an administrator, you will need quite a few holes punched through your firewall.

That is, unless you are using IPSEC tunnels through your Firewalls like we recommend. 🙂

– Ned ‘Honto’ Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/configuring-dfsr-to-a-static-port-the-rest-of-the-story/ba-p/396746

What Is DFS Replication and How to Configure It?

Updated: May 23, 2023
By: NAKIVO Team

File shares are used in organizations to allow users to access and exchange files. If the number of file shares is large, it may be difficult to manage them because mapping many shared resources to each user’s computer takes time and effort. If the configuration of one file share changes, you need to update shared drive mappings for all users using this share. In this case, DFS can help you optimize the hierarchy of shared folders to streamline administration and the use of shared resources.

This blog post explains DFS configuration and how to set up DFS replication in Windows Server 2019.


What Is DFS and How It Works

A Distributed File System (DFS) is a logical organization that transparently groups existing file shares on multiple servers into a structured hierarchy. This hierarchy can be accessed using a single share on a DFS server.
A DFS file share can be replicated across multiple file servers in different locations to optimize server load and increase access speed to shared files. In this case, a user can access a file share on a server that is closest to them. DFS is intended to simplify access to shared files.

Using a DFS namespace server

DFS uses the Server Message Block (SMB) protocol, which is also known as the Common Internet File System (CIFS). Microsoft’s implementation of DFS doesn’t work with other file sharing protocols like NFS or HDFS. However, you can connect multiple SMB shares configured on NAS devices and Linux machines using Samba to your DFS server running on Windows Server. DFS consists of server and client components.

You can configure one DFS share that includes multiple file shares and connect users to this single file share using a unified namespace. When users connect to this file share using a single path, they see a tree structure of shared folders (as they are subfolders of the main share) and can access all needed file shares transparently. Underlying physical file servers hosting file shares are abstracted from the namespace used to access shares. DFS namespaces and DFS replication are the two main components used for DFS functioning.

What is a DFS namespace?

A DFS namespace is a virtual folder that contains links to shared folders stored on different file servers. DFS namespaces can be organized in different ways depending on business needs. They can be organized by geographical location, organization units, a combination of multiple parameters, etc. You can configure multiple namespaces on a DFS server. A DFS namespace can be standalone or domain-based.

DFS namespace and folder targets
  • A standalone DFS namespace stores configuration information and metadata locally on a root server in the system registry. A path to access the root namespace starts with the root server name. A standalone DFS namespace is located only on one server and is not fault-tolerant. If the root server is unavailable, the entire DFS namespace is unavailable. You can use this option if you don’t have an Active Directory domain configured (when using a Workgroup).
  • A domain-based DFS namespace stores its configuration in Active Directory. A path to access a root namespace starts with the domain name. You can store a domain-based DFS namespace on multiple servers to increase the namespace availability. This approach allows you to provide fault tolerance and load balancing across servers. Using domain-based DFS namespaces is recommended.

A namespace consists of the root, links (folders), and folder targets.

  • The namespace root is the starting point of a DFS namespace tree. Depending on the type, a namespace can look like this:

\\ServerName\RootName (a standalone namespace)

\\DomainName\RootName (a domain-based namespace)

  • A namespace server is a physical server (or a VM) that hosts a DFS namespace. A namespace server can be a regular server with the DFS role installed or a domain controller.
  • A folder is a link in a DFS namespace that points to a target folder containing content for user access. There are also folders without targets used for organizing the structure.
  • A folder target is a link to a shared file resource located on a particular file server and available via the UNC path (Universal Naming Convention). A folder target is associated with the folder in a DFS namespace, for example, \\FS2\TestShare on the FS2 server. A folder target is what users need to access files.

One folder target can be a link to a single folder or multiple folders (if these folders are located on two different servers and are synchronized/replicated with each other). For example, a user needs to access \\DFS-server01\TestShare\Doc but depending on the user’s location, the user is redirected to a shared folder \\FS01\Doc or \\FS02\Doc.

The DFS tree structure includes the following components:

  • DFS root, which is a DFS server on which the DFS service is running
  • DFS links, which are links pointing to network shares used in DFS
  • DFS targets, which are real network shares to which DFS links point

What is DFS replication?

DFS replication is a feature used to duplicate existing data by replicating copies of that data to multiple locations. Physical file shares can be synchronized with each other at two or more locations.

An important feature of DFS replication is that the replication of a file starts only after that file has been closed. For this reason, DFS replication is not suitable for replicating databases, given that databases have files opened during the operation of a database management system. DFS replication supports multi-master replication technology, and any member of a replication group can change data that is then replicated.

DFS replication group is a group of servers participating in the replication of one or multiple replication folders. A replicated folder is synchronized between all members of the replication group.

DFS replication group

DFS replication uses a special Remote Differential Compression algorithm that allows DFS to detect changes and copy only changed blocks of files instead of copying all data. This approach allows you to save time and reduce replication traffic over the network.

DFS replication is performed asynchronously. There can be a delay between writing changes to the source location and replicating those changes to the target location.

DFS Replication topologies

There are two main DFS replication topologies:

  • Hub and spoke. This topology requires at least three replication members: one which acts as a hub and two others act as spokes. This technique is useful if you have a central source originating data (hub) and you need to replicate this data to multiple locations (spokes).
  • Full mesh. Each member of a replication group replicates data to each group member. Use this technique if you have 10 members or less in a replication group.

What are the requirements for DFS?

The main requirement is using Windows Server 2008 Datacenter or Enterprise editions, Windows Server 2012, or a newer Windows Server version. It is better to use Windows Server 2016 or Windows Server 2019 nowadays.

NTFS must be the file system used to store shared files on Windows Server hosts.

If you use domain-based namespaces, all servers of a DFS replication group must belong to one Active Directory forest.

How to Set Up DFS in Your Windows Environment

You need to prepare at least two servers. In this example, we use two machines running Windows Server 2019, one of which is an Active Directory domain controller:

  • Server01-dc.domain1.local is a domain controller.
  • Server02.domain1.local is a domain member.

This is because configuring DFS in a domain environment has advantages compared to Workgroup, as explained above. The domain name is domain1.local in our case. If you use a domain, don’t forget to configure Active Directory backup.

Enable the DFS roles

First of all, you need to enable the DFS roles in Windows Server 2019.

  1. Open Server Manager.
  2. Click Add Roles and Features in Server Manager.
  3. Select Role-based or feature-based installation in the Installation type screen of the Add Roles and Features wizard.
  4. In the Server Selection screen, make sure your current server (which is a domain controller in our case) is selected. Click Next at each step of the wizard to continue.
  5. Select server roles. Select DFS Namespaces and DFS Replication, as shown in the screenshot below.
Setting up DFS in Windows Server 2019 – installing DFS roles
  6. In the Features screen, you can leave settings as is.
  7. Check your configuration in the confirmation screen and if everything is correct, click Install.
  8. Wait for a while until the installation process is finished and then close the window.

DFS Namespace Setup

Create at least one shared folder on any server that is a domain member. In this example, we create a shared folder on our domain controller. The folder name is shared01 (D:\DATA\shared01).

Creating a shared folder

  1. Right-click a folder and, in the context menu, hit Properties.
  2. On the Sharing tab of the folder properties window, click Share.
  3. Share the folder with Domain users and set permissions. We use Read/Write permissions in this example.
  4. Click Share to finish. Then you can close the network sharing options window.
Sharing a folder in Windows Server 2019 to set up DFS

Now the share is available at this address:

\\server01-dc\shared01
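
If you prefer scripting, the same share and permissions can be created with PowerShell. A hedged one-liner (the SmbShare module ships with Windows Server 2012 and later; names follow this article's example, and ChangeAccess corresponds to Read/Write):

New-SmbShare -Name "shared01" -Path "D:\DATA\shared01" -ChangeAccess "domain1\Domain Users"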

Creating a DFS namespace

Let’s create a DFS namespace to link shared folders in a namespace.

  • Press Win+R and run dfsmgmt.msc to open the DFS Management window. You can also run this command in the Windows command line (CMD).

As an alternative, you can click Start > Windows Administrative Tools > DFS Management.

  • In the DFS Management section, click New Namespace.
How to configure DFS namespaces
  • The New Namespace Wizard opens in a new window.
  1. Namespace Server. Enter a server name. If you are not sure that the name is correct, click Browse, enter a server name and click Check Names. In this example, we enter the name of our domain controller (server01-dc). Click Next at each step of the wizard to continue.
Adding a DFS namespace server
  2. Namespace Name and Settings. Enter a name for a namespace, for example, DFS-01. Click Edit Settings.
Entering a name for a DFS namespace

Pay attention to the local path of a shared folder. Change this path if needed. We use the default path in our example (C:\DFSRoots\DFS-01).

  3. You need to configure access permissions for network users. Click Use custom permissions and hit Customize.
Configuring access permissions for a shared folder on a DFS namespace server
  4. We grant all permissions for domain users (Full Control). Click Add, select Domain Users, select the appropriate checkboxes, and hit OK to save settings.
Configuring permissions for a shared folder
  5. Namespace type. Select the type of namespace to create. We select Domain-based namespace and select the Enable Windows Server 2008 mode checkbox. Select this checkbox for better compatibility if the functional level of your domain is Windows Server 2008 while your namespace servers run Windows Server 2016 or Windows Server 2019.

It is recommended that you use a Domain-based namespace due to advantages such as high DFS namespace availability by using multiple namespace servers and transferring namespaces to other servers.

Selecting a domain-based namespace for DFS configuration
  6. Review Settings. Review settings and, if everything is correct, click Create.
Reviewing configuration to finish DFS namespace setup
  7. Confirmation. The window view in case of success is displayed in the screenshot below. The namespace creation has finished. Click Close.
A DFS namespace has been created
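
As an alternative to the wizard, the DFSN PowerShell module can create the same domain-based namespace. A minimal sketch, assuming the DFS Namespaces role is installed and the local folder C:\DFSRoots\DFS-01 has already been shared as DFS-01 on server01-dc:

# DomainV2 corresponds to a domain-based namespace in Windows Server 2008 mode.
New-DfsnRoot -Path "\\domain1.local\DFS-01" -TargetPath "\\server01-dc\DFS-01" -Type DomainV2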

Adding a new folder to a namespace

Now we need to add a new folder into the existing namespace. We are adding a folder on the same server, which is a domain controller, but this method is applicable for all servers within a domain.

  1. Open the DFS management window by running dfsmgmt.msc as we did before. Perform the following actions in the DFS management window.
  2. In the left pane, expand a namespace tree and select a namespace (\\domain1.local\DFS-01\ in our case).
  3. In the right pane (the Actions pane), click New Folder.
  4. In the New Folder window, enter a folder name, for example, Test-Folder to link the DFS folder and a shared folder created before. Click Add.
Adding a new folder into a DFS namespace
  5. Enter the path to the existing folder. We use \\server01-dc\shared01 in this example. You can click Browse and select a folder. Click OK to save the path to the folder target.
Adding a folder target

The folder target has been added.

  6. Click OK to save settings and close the New Folder window.
A folder target has been added

Now you can access the shared folder by entering the network address in the address bar of Windows Explorer:

\\server01-dc\dfs-01\Test-Folder

You should enter a path in the format:

\\DomainName\DFS-NameSpace\

Accessing a shared folder in Windows Explorer
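
The same folder and folder target can also be added from PowerShell instead of the New Folder dialog. A minimal sketch using the DFSN module, with names from this article's example:

New-DfsnFolder -Path "\\domain1.local\DFS-01\Test-Folder" -TargetPath "\\server01-dc\shared01"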

How to Configure DFS Replication

We need to configure the second server to replicate data. The name of the second server is Server02 and this server is added to the domain1.local domain in this example. Add your second server to a domain if you have not done this operation before.
Install the DFS roles on the second server, as we did for the first one. As an alternative to the Add Roles wizard, you can run these two commands in PowerShell to install the DFS Replication and DFS Namespaces roles:

Install-WindowsFeature -Name "FS-DFS-Replication" -IncludeManagementTools

Install-WindowsFeature -Name "FS-DFS-Namespace" -IncludeManagementTools

How to set up DFS roles in PowerShell

Create a folder for replicated data, for example, D:\Replication.

We are going to use this folder to receive data replicated from the folder created on the first server earlier.

Share this folder (D:\Replication) on the second server and configure access permissions the same way as for the previous shared folder. In this example, we share the folder with Domain Users and grant Read/Write permissions.

Sharing a folder on the second server

The network path is \\server02\replication in this example after sharing this folder. To check the network path to the folder, you can right-click the folder name and open the Sharing tab.

Let’s go back to the domain controller (server01-dc) and open the DFS Management window.

In the left pane of the DFS Management window, expand the tree and select the folder created before (Test-Folder in this case).

Click Add Folder Target in the Actions pane located in the top right corner of the window.

The New Folder Target window appears. Enter the network path of the folder that was created on the second server before:

\\Server02\Replication

Click OK to save settings and close the window.

Adding a new folder target to configure Windows DFS replication

A notification message is displayed:

A replication group can be used to keep these folder targets synchronized. Do you want to create a replication group?

Click Yes.

A notification message is displayed when creating a DFS replication group

Wait until the configuration process is finished.

As a result, you should see the Replicate Folder Wizard window. Perform the next steps in the wizard window.

Check the replication group name and replicated folder name. Click Next to continue.

Entering a replication group name and replication folder name

Check folder paths in the Replication Eligibility screen.

Checking paths of shared folders

Select the primary member from the drop-down list. In this example, the primary member is Server01-dc. Data from the primary member is replicated to other folders that are a part of the DFS namespace.

Selecting a primary member when configuring DFS replication

Select the topology of connections for replication.

Full mesh is the recommended option when using a DFS replication group with less than ten servers. We use Full mesh to replicate changes made on one server to other servers.

The No Topology option can be used if you want to create a custom topology after finishing the wizard.

The Hub and spoke option is inactive (grayed out) because we use fewer than three servers.

Selecting a full mesh topology to configure DFS replication

Configure replication group schedule and bandwidth. There are two options:

  • Replicate continuously using the specified bandwidth. Replication is performed as soon as possible. You can allocate bandwidth. Continuous replication of data that changes extensively can consume a lot of network bandwidth. To avoid a negative impact on other processes using the network, you can limit bandwidth for DFS replication. Keep in mind that hard disk load can be high.
  • Replicate during the specified days and times. You can configure a schedule to perform DFS replication on custom days and at custom times. Use this option if you don’t always need the latest version of the replicated data in the target folders.

We select the first option in our example.

Setting up DFS replication group schedule

Review settings for your DFS replication group. If everything is correct, click Create.

Reviewing settings for a DFS replication group before finishing configuration

View the DFS replication configuration status on the Confirmation screen. You should see the Success status for all tasks, as shown in the screenshot below. Click Close to close the wizard window.

A DFS replication group has been created successfully

A notification message about the replication delay is displayed. Read the message and hit OK.

A notification message about DFS replication delay

DFS replication has been configured. Open the shared folder from which data must be replicated initially, write a file to that network folder, and check whether the new data is replicated to the second folder on the other server. Keep in mind that open files are not replicated until they are closed and the changes are saved to disk. In a few moments, you should see a replica of the file in the target folder.
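
You can run the same check from PowerShell. A quick sketch using the example paths from this walkthrough; the file name and the 30-second wait are arbitrary illustrations of the initial replication delay:

# Write a test file to the source folder
'replication test' | Out-File '\\server01-dc\shared01\repl-test.txt'

# Give DFSR a moment, then check the target folder on the second server
Start-Sleep -Seconds 30
Test-Path '\\server02\replication\repl-test.txt'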

Using filters for DFS Replication

Use file filters to select the file types you don’t want to replicate. Some applications can create temporary files and replicating them wastes network bandwidth, loads hard disk drives, consumes additional storage space in the target folder, and increases overall time to replicate data. You can exclude the appropriate file types from DFS replication by using filters.

To configure filters, perform the following steps in the DFS Management window:

  1. Expand the Replication tree in the navigation pane and select the needed DFS replication group folder name (domain1.local\dfs-01\Test-folder in our case).
  2. Select the Replicated Folders tab.
  3. Select the needed folder, right-click the folder name and hit Properties. Alternatively, you can select the folder and click Properties in the Actions pane.
  4. Set the filtered file types by using masks in the folder properties window. In this example, files matching the rule are excluded from replication:

~*, *.bak, *.tmp

You can also filter subfolders, for example, exclude Temp subfolders from DFS replication.

Configuring DFS replication filters
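
On Windows Server 2012 R2 and later, the DFSR PowerShell module can apply the same filters. A sketch using the group and folder names from this example (adjust them to your configuration):

# Exclude temporary files and Temp subfolders from replication
Set-DfsReplicatedFolder -GroupName 'domain1.local\dfs-01\Test-Folder' `
    -FolderName 'Test-Folder' `
    -FileNameToExclude '~*', '*.bak', '*.tmp' `
    -DirectoryNameToExclude 'Temp'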

Staging location

There can be a conflict when two or more users save changes to a file before these changes are replicated. The most recent change takes precedence, and older versions of the changed file are moved to the Conflict and Deleted folder. This issue can occur when replication is slow and files are large (the amount of changed data is high), so that transferring the changed data takes longer than the interval between users’ writes to the file.

Staging folders act as a cache for new and changed files that are ready to be replicated from source folders to target folders. The staging location is used for files that exceed a certain size. Staging works as a queue of files awaiting replication and ensures that files can be transferred without being affected by further changes during the transfer.

Another aspect of configuring staging folders is performance optimization. DFS replication can consume additional CPU and disk resources, and it can slow down or even stop if the staging quota is too small for your tasks. The recommended staging quota is at least the total size of the 32 largest files in the replicated folder.
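
A quick way to estimate this value, assuming the D:\Replication folder used in this example:

# Sum the sizes of the 32 largest files to estimate the recommended staging quota
Get-ChildItem 'D:\Replication' -Recurse -File |
    Sort-Object Length -Descending |
    Select-Object -First 32 |
    Measure-Object -Property Length -Sum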

You can edit staging folder properties for DFS Replication in the DFS Management window:

  1. Select a replication group in the left pane of the DFS Management window.
  2. Select the Memberships tab.
  3. Select the needed replication folder, right-click the folder, and hit Properties.
  4. Select the Staging tab in the Properties window.
  5. Edit the staging path and quota according to your needs.

Configuring DFS staging location

Saved changes are not applied immediately. New staging settings must replicate to all DFS servers within the domain; the time this takes depends on Active Directory Domain Services replication latency and the polling interval of the servers (5 minutes or more). A server reboot is not required.
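
The same settings can be changed with the DFSR PowerShell module. A sketch assuming the names used in this article; the 4096 MB quota is only an illustration:

# Set the staging quota for one member of the replication group
Set-DfsrMembership -GroupName 'domain1.local\dfs-01\Test-Folder' `
    -FolderName 'Test-Folder' `
    -ComputerName 'Server02' `
    -StagingPathQuotaInMB 4096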

DFS Replication vs. Backup

Don’t confuse DFS replication of data in shared folders with data backup. DFS replication makes copies of data on different servers, but if unwanted changes are written to a file on one server, these changes are replicated to the other servers. As a result, you don’t have a recovery point: the file has been overwritten with unwanted changes on all servers, and you cannot use it for recovery in case of failure. This threat is especially relevant in the case of a ransomware attack.

Use NAKIVO Backup & Replication to protect data stored on your physical Windows Server machines including data stored in shared folders. The product also supports Hyper-V VM backup and VMware VM backup at the host level for effective protection.


Conclusion

Distributed File System (DFS) can significantly simplify shared resource management for administrators and make access to shared folders more convenient for end users. DFS provides transparent links to shared folders located on different servers.

DFS namespaces and DFS replication are the two main features that you can configure in the DFS Management window after installing the appropriate Windows Server roles. Opt for configuring DFS in a domain environment rather than in a workgroup, because an Active Directory domain provides advantages such as high availability and flexibility.

Source :
https://www.nakivo.com/blog/configure-dfs-replication-for-windows-server/

Manually Clearing the ConflictAndDeleted Folder in DFSR

By Ned Pyle
Published Apr 04 2019 01:30 PM

First published on TechNet on Oct 06, 2008
Ned here again. Today I’m going to talk about a couple of scenarios we run into with the ConflictAndDeleted folder in DFSR. These are real quick and dirty, but they may save you a call to us someday.

Scenario 1: We need to empty out the ConflictAndDeleted folder in a controlled manner as part of regular administration (i.e. we just lowered quota and we want to reclaim that space).

Scenario 2: The ConflictAndDeleted folder quota is not being honored due to an error condition and the folder is filling the drive.

Let’s walk through these now.

Emptying the folder normally

It’s possible to clean up the ConflictAndDeleted folder through the DFSMGMT.MSC and SERVICES.EXE snap-ins, but it’s disruptive and kind of gross (you could lower the quota, wait for AD replication, wait for DFSR polling, and then restart the DFSR service). A much faster and slicker way is to call the WMI method CleanupConflictDirectory from the command-line or a script:

1.  Open a CMD prompt as an administrator on the DFSR server.
2.  Get the GUID of the Replicated Folder you want to clean:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderconfig get replicatedfolderguid,replicatedfoldername

(This is all one line, wrapped)

Example output:

Command output listing replicated folder GUIDs and names

3.  Then call the CleanupConflictDirectory method:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='<RF GUID>'" call cleanupconflictdirectory

Example output with a sample GUID:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='70bebd41-d5ae-4524-b7df-4eadb89e511e'" call cleanupconflictdirectory

Command output confirming the CleanupConflictDirectory call

4.  At this point the ConflictAndDeleted folder will be empty and the ConflictAndDeletedManifest.xml will be deleted.
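
On newer systems, the same two steps can be done with the CIM cmdlets in PowerShell. A sketch against the same WMI namespace and classes, using the sample GUID from above:

# List replicated folder GUIDs and names
Get-CimInstance -Namespace 'root\microsoftdfs' -ClassName 'DfsrReplicatedFolderConfig' |
    Select-Object ReplicatedFolderGuid, ReplicatedFolderName

# Call the cleanup method for one replicated folder
Get-CimInstance -Namespace 'root\microsoftdfs' -ClassName 'DfsrReplicatedFolderInfo' -Filter "ReplicatedFolderGuid='70bebd41-d5ae-4524-b7df-4eadb89e511e'" |
    Invoke-CimMethod -MethodName 'CleanupConflictDirectory'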

Emptying the ConflictAndDeleted folder when in an error state

We’ve also seen a few cases where the ConflictAndDeleted quota was not being honored at all. In every single one of those cases, the customer had recently had hardware problems (specifically with their disk system) where files had become corrupt and the disk was unstable – even after repairing the disk (at least to the best of their knowledge), the ConflictAndDeleted folder quota was not being honored by DFSR.

Here’s where quota is set:

The ConflictAndDeleted folder quota setting

Usually when we see this problem, the ConflictAndDeletedManifest.XML file has grown to hundreds of MB in size. When you try to open the file in an XML parser or in Internet Explorer, you will receive an error like "The XML page cannot be displayed" or a message that there is an error at a particular line. This is because the file is invalid in some section (a damaged element, scrambled data, etc.).

To fix this issue:

  1. Follow steps 1-4 from above. This may clean the folder as well as update DFSR to say that cleaning has occurred. We always want to try doing things the ‘right’ way before we start hacking.
  2. Stop the DFSR service.
  3. Delete the contents of the ConflictAndDeleted folder manually (with explorer.exe or DEL).
  4. Delete the ConflictAndDeletedManifest.xml file.
  5. Start the DFSR service back up.
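
Steps 2-5 can be scripted. A minimal sketch, assuming the replicated folder root is D:\Replication (DfsrPrivate is a hidden system folder under the replicated folder):

# Stop the DFSR service before touching its private folders
Stop-Service -Name 'DFSR'

# Remove the conflict contents and the damaged manifest
Remove-Item 'D:\Replication\DfsrPrivate\ConflictAndDeleted\*' -Recurse -Force
Remove-Item 'D:\Replication\DfsrPrivate\ConflictAndDeletedManifest.xml' -Force

# Start the service again
Start-Service -Name 'DFSR'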

For a bit more info on conflict and deletion handling in DFSR, take a look at:

Staging folders and Conflict and Deleted folders (TechNet)
DfsrConflictInfo Class (MSDN)

Until next time…

– Ned “Unhealthy love for DFSR” Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/manually-clearing-the-conflictanddeleted-folder-in-dfsr/ba-p/395711