Stopping bot traffic: A guide for WordPress websites


When you picture your website visitors, you most likely imagine a person sitting at a desk, or perhaps scrolling on their phone. However, not all of your site’s visitors are flesh and blood; many are, in fact, bots running automated tasks. 

Although some of these bots are legitimate, others can put your site at risk, so it’s important to take appropriate security measures. This article will take you through the ways bots interact with your site, give you some insight into the risks of leaving bad bots unchecked, and show you how Shield Security PRO can help protect your site. 

What are WordPress bots?

Before we dive into how to protect your WordPress site from bad bots, let’s take a step back and talk about bots in general. Put simply, a bot is software that runs an automated task. 

Many of the bots that visit your website are perfectly fine – and, indeed, there are many good bots that you want to visit your site. For example, search engine crawlers automatically evaluate the value of your site’s content to determine its rank in search results.

However, there are also bots out there designed with nefarious purposes in mind. In the next section, we will look at good vs. bad bots in more detail so you know which ones you need to look out for. 

It’s worth remembering: One of the key challenges in cybersecurity is giving both good bots and human users a positive experience on your site, without enabling malicious bots to wreak havoc and compromise your security.

Good bots vs. bad bots

You may be surprised to learn that there are several kinds of good bots out there that should be perfectly welcome on your website. We mentioned search engine crawlers earlier, but they’re just one form of friendly bot that could visit your site. Others include:

  • Uptime monitoring bots: These collect performance data so you can see how well your site is doing 
  • SEO tracking bots: Many sites looking to improve their search engine ranks use analytics software to evaluate results. Tracking bots collect the data reflected in your key performance indicators.
  • Translation bots: These assist with language translation by automatically translating content to another language, helping viewers understand what your web pages are about.
  • AI bots: AI companies use site crawlers to train their AI systems, particularly for language learning. 

Some types of bad bots include: 

  • Comment spam bots: These are bots that automatically leave irrelevant comments on your site, often advertising another product or service, and generating links to that site. 
  • Brute force bots: Some cybercriminals use bots to perform brute force attacks in order to guess login credentials and gain access to restricted information. 
  • Probing bots: These are bots that simply probe your site for vulnerabilities – you can think of them as casing the joint. If they find any, they make a note so attackers can come back and exploit those vulnerabilities later. 

All of these can sap your resources and make you more vulnerable to major cyber security threats. The right cyber security approach will allow good bots to do their thing without leaving the door open to the baddies. 
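To make the brute-force and probing patterns above concrete, here is a minimal Python sketch that counts failed logins and 404 probes per IP address. The thresholds and signal names are invented for this example and aren’t taken from any particular product:

```python
from collections import defaultdict

# Toy per-IP counters for two common bad-bot signals:
# repeated failed logins (brute force) and repeated 404s (probing).
# The limits below are illustrative only.
FAILED_LOGIN_LIMIT = 5
NOT_FOUND_LIMIT = 20

failed_logins = defaultdict(int)
not_found_hits = defaultdict(int)

def record_event(ip, event):
    """Record one request event; return True once the IP should be blocked."""
    if event == "failed_login":
        failed_logins[ip] += 1
    elif event == "404":
        not_found_hits[ip] += 1
    return (failed_logins[ip] >= FAILED_LOGIN_LIMIT
            or not_found_hits[ip] >= NOT_FOUND_LIMIT)
```

A real plugin would also expire counters over time so a legitimate user’s occasional typo doesn’t accumulate forever.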

Real-world Examples: How bad bots put your website at risk

Left unchecked, bad bots can damage your business in both the short and long term. They can drain your resources and increase your vulnerability to hacking attempts. Bots may flood your contact forms and comment sections with spam, which clutters your site and damages your credibility.

One example of what can happen when bots run wild on a site is the 2015 attack on Dunkin Donuts. Hackers used a technique called “credential stuffing”, in which bots replay username and password pairs compromised in previous breaches against the login form, to gain access to customer accounts and steal money, data, and card details.
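Credential stuffing works because people reuse passwords that have already leaked, which is why many sites now reject any password found in a breach corpus. A toy Python sketch of that check, using a made-up breached-password set:

```python
# Hypothetical breach corpus for illustration; real sites query services
# holding hundreds of millions of leaked passwords.
BREACHED_PASSWORDS = {"123456", "password", "qwerty"}

def password_allowed(password):
    """Reject any password that appears in the known-breach corpus."""
    return password not in BREACHED_PASSWORDS
```

Refusing breached passwords blunts credential stuffing even when the bot supplies an otherwise valid username.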


According to a lawsuit filed against Dunkin, the coffee chain’s parent company failed to address the attacks despite warnings from developers. While the company neither accepted nor denied responsibility for the hacks, it agreed to a $650,000 settlement. 

This illustrates that the stakes can get very high, especially when you’re handling sensitive information. Blocking bad bots from your website protects your business, your customers, and your reputation by restricting access to your site and data.

Bots are a drain on your site’s resources

Even if bots don’t put you in direct financial harm, they will still consume your site’s resources. 

An example of this is the case of Geeks2you, where attackers used bots to try to gain access to the company’s servers. Monitoring software discovered over 8,000 failed login attempts, plus at least another 5,000 every hour after the attack was discovered. 

While the bots had little chance of actually getting into the server (thanks to the company’s excellent password policy), at a rate of at least two hacking attempts every second, the attack ate into resources and rapidly degraded the site’s responsiveness for legitimate visitors. 

This demonstrates the harmful impact bots can have, even just for failed attempts to hack a site. Users can be robbed of a pleasant experience, sites can load slowly, images may not look right, and on-page features may fail. This can damage your reputation and cause you to lose valuable traffic. 

Bottom line: At a minimum, bad bots hog your resources and drag down your site’s performance. 

Your Solution: The AntiBot Detection Engine

When it comes to stopping bot traffic, you need to find a technological solution that can filter out the bad and leave you with the good. This is where Shield Security PRO comes in. 

The AntiBot Detection Engine, or ADE, works to distinguish between good bots, bad bots, and human users based on the behaviour of each visitor on the site. It can also distinguish fake web crawlers from true web crawlers. 

The way the technology does this is with “bot signals” it watches for when visitors interact with the site. (We’ll take a closer look at how the ADE does this in the next section.) 

Shield Security PRO displays bot signals logged and bot likelihood for blocked IP addresses.

When a user crosses the threshold of acceptable suspicious activity, Shield Security PRO automatically blocks their IP address and stops them from being able to access your site.

Spotting bot behaviour: login attempts 

One example of bad bot behaviour the ADE is designed to spot is excessive login attempts. Shield Security PRO can detect and capture login bots that slow down your site and cause harm going forward. It does this by penalising visitors who use a valid username with the wrong password, as well as those who try to log in without a username or with a username that doesn’t exist. 

Legitimate users might get their username and password wrong once in a while, but their behaviour is still going to be easy to distinguish from bots, especially when you look at their actions across the site as a whole. 

“Bots are just computer programs,” said Paul Goodchild, creator of Shield Security PRO. “They perform a limited number of tasks, such as login attempts, comment SPAM, and probing to trigger 404 errors.

“When you look at all these actions collectively,” Goodchild continued, “it looks nothing like normal human activity. The ADE acts as a ‘bot watcher’, looking at all requests collectively to sort the bots from the people.”
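As a rough illustration of the “look at all requests collectively” idea, here is a Python sketch that scores bot signals against a blocking threshold. The signal names, weights, and threshold are all invented for this sketch; the real ADE’s internals aren’t published here:

```python
# Hypothetical weights per bot signal; an IP accumulates signals as it
# interacts with the site, and the combined score decides blocking.
SIGNAL_WEIGHTS = {
    "failed_login": 30,
    "comment_spam": 40,
    "404_probe": 10,
    "fake_crawler": 75,
}
BLOCK_THRESHOLD = 100

def bot_score(signals):
    """Sum the weights of every signal a visitor has triggered."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def should_block(signals):
    return bot_score(signals) >= BLOCK_THRESHOLD
```

A single 404 never blocks anyone, but a fake crawler that also fails a login crosses the threshold, which is the “collective” behaviour Goodchild describes.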

All-sides defence with Shield Security PRO 

The ADE and bad bot blocking are core features of Shield Security PRO, but they’re just two of the plugin’s many tools designed to keep your site safe and secure. For example, the security plugin has a comprehensive dashboard that allows you to see the current state of your website at a glance.  

Screenshot of Shield Security PRO’s security dashboard.

Other functionalities that help Shield Security PRO protect your site include:

  • DoS protection with traffic rate limiting: DoS attacks aim to overwhelm a system’s resources, ultimately slowing or shutting down the site. Rate limiting caps the rate at which traffic can access a network or web service, stopping it from being overwhelmed. 
  • Malware detection and vulnerability scanning: These are essential to your website’s safety, and identify and mitigate potential threats to your system. Our technology offers real-time protection and firewalls, scans for patterns or signs of existing malware, and identifies flaws and weak points in your defence.
  • Login protection for WooCommerce and other WordPress plugins: Shield Security PRO allows you to set up strong password requirements and two-factor authentication, keeping site access secure. You can also set customizable login attempt limits to further protect your site from malicious access attempts. 
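The rate limiting behind DoS protection is often implemented as a token bucket: each client gets a budget of requests that refills over time. A minimal Python sketch (capacity and refill rate are illustrative values, not Shield Security PRO’s):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `capacity` requests per client,
    refilled at `rate` tokens per second. The clock is injectable so the
    behaviour is easy to test."""

    def __init__(self, capacity=10, rate=1.0, now=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.now = now
        self.tokens = capacity
        self.last = now()

    def allow(self):
        """Spend one token if available; otherwise reject the request."""
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A server would keep one bucket per client IP, so a flood from one address is throttled without affecting other visitors.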

Cybersecurity is most effective when you tackle it from all sides. The Shield Security PRO plugin kicks bad bots and suspicious visitors off your site and helps you detect any threats that do manage to sneak through. 

Banish bots from your site with Shield Security PRO 

If you let bad bots have unlimited access to your site, you’re taking a serious risk. Bad bots can increase your chances of hacking and data loss, as well as hog server resources and slow your site down. Both of these can damage your reputation as well as your bottom line.

Site owners can take action and protect their websites with a bad-bot blocking plugin like Shield Security PRO. The ADE efficiently identifies bad bots and blocks their IP addresses so they can’t bring their nefarious plans to fruition.

Don’t delay, get started with Shield Security PRO and kick bad bots off your site today for instant peace of mind.


If you’re curious about ShieldPRO and would like to explore the powerful features for protecting your WordPress sites, click here to get started today. (14-day satisfaction guarantee!)

You’ll get all PRO features, including AI Malware Scanning, WP Config File Protection, Plugin and Theme File Guard, import/export, exclusive customer support, and so much more.

Source :

Configuring DFSR to a Static Port – The rest of the story

By Ned Pyle
Published Apr 04 2019 02:39 PM

First published on TechNet on Jul 16, 2009
Ned-san here again. Customers frequently call us about configuring their servers to listen over specific network ports. This is usually to satisfy firewall rules – more on this later. A port in TCP/IP is simply an endpoint to communication between computers. Some are reserved, some are well-known, and the rest are simply available to any application to use. Today I will explain the network communication done through all facets of DFSR operation and administration. Even if you don’t care about firewalls and ports, this should shed some light on DFSR networking in general, and may save you skull sweat someday.


Plenty of Windows components support hard-coding to exclusive ports, and at a glance, DFSR is no exception. By running the DFSRDIAG STATICRPC command against the DFSR servers you force them to listen on whatever port you like for file replication:


Many Windows RPC applications use the Endpoint Mapper (EPM) component for these types of client-server operations. It’s not a requirement, though; an RPC application is free to declare its own port and only listen on that one, with a client that is hard-coded to contact that port only. When EPM is used, it hands out ports from the dynamic range: 1025-5000 in Windows Server 2003 and older, and 49152-65535 in Windows Vista and later. DFSR uses EPM.

Update 3/3/2011 (nice catch Walter)

As you have probably found, we later noticed a bug in DFSR on Win2008 and Win2008 R2 DCs (only – not member servers) where the service would always send-receive on port 5722. This article was done before that and doesn’t reflect it. Read more on this in Microsoft KB article 832017.
All of the below is accurate for non-DCs.

By setting the port, you are telling EPM to always respond with the same port instead of one within the dynamic range. So when DFSR contacted the other server, it would only need to use two ports:


So with a Netmon 3.3 capture, it will look something like this when the DFSR service starts up:

1. The local computer opens a dynamic client port and connects to EPM on the remote computer, asking for connectivity to DFSR.


2. That remote computer responds with a port that the local computer can connect to for DFSR communication. Because I have statically assigned port 55555, the remote computer will always respond with this port.


3. The local computer then opens a new client port and binds to that RPC port on the remote server, where the DFSR service is actually listening. At this point two DFSR servers can replicate files between each other.
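The three-step startup sequence can be mimicked with a toy Python model. This only sketches the mapper’s logic – real EPM speaks MS-RPC over TCP 135, which this does not implement – using the static port 55555 from the example above:

```python
# Toy model of the RPC Endpoint Mapper exchange. A service either pins
# a static port (like DFSRDIAG STATICRPC does for DFSR) or is handed
# the next free port from the dynamic range.
DYNAMIC_RANGE = range(49152, 65536)  # Vista/Win2008 and later

class EndpointMapper:
    def __init__(self):
        self.registrations = {}  # service name -> listening port

    def register(self, service, port=None):
        """Static registration pins the port; otherwise allocate the
        first free dynamic port."""
        if port is None:
            used = set(self.registrations.values())
            port = next(p for p in DYNAMIC_RANGE if p not in used)
        self.registrations[service] = port
        return port

    def lookup(self, service):
        """What EPM tells a client that asks 'which port is DFSR on?'"""
        return self.registrations[service]

epm = EndpointMapper()
epm.register("DFSR", port=55555)  # the static port from this post
```

With the static registration in place, every `lookup("DFSR")` returns 55555, which is why the client only ever needs EPM’s port plus one fixed port.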


The Rest of the Story

If it’s that easy, why the blog post? Because there’s much more to DFSR than just the RPC replication port. To start, your DFSR servers need to be able to contact DCs. To do that, they need name resolution. And they will need to use Kerberos. And the management tools will need DRS API connectivity to the DCs. There will also need to be SMB connectivity to create replicated folders and communicate with the Service Control Manager to manipulate DFSR. And all of the above also need the dynamic client ports available outbound through the firewall to allow that communication. So now that’s:

  • EPM port 135 (inbound on remote DFSR servers and DC’s)
  • DFSR port (inbound on remote DFSR servers)
  • SMB port 445 (inbound on remote DFSR servers)
  • DNS port 53 (inbound on remote DNS servers)
  • LDAP port 389 (inbound on remote DC’s)
  • Kerberos port 88 (inbound on remote DC’s)
  • Ports 1025-5000 or 49152-65535 (outbound, Win2003 and Win2008 respectively – and inbound on remote DC’s).
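The checklist above can be written down as data for firewall planning. A small Python sketch (the port numbers come from the list in this post; the structure itself is just a planning aid, not firewall syntax):

```python
# Inbound ports required per remote role when DFSR is pinned to a
# static port (55555 in this post's example). Dynamic range shown is
# the Win2008 ephemeral range.
DYNAMIC_RANGE = list(range(49152, 65536))

INBOUND_PORTS = {
    "dfsr_server": [135, 55555, 445],                      # EPM, DFSR, SMB
    "dns_server": [53],                                    # DNS
    "domain_controller": [135, 389, 88] + DYNAMIC_RANGE,   # EPM, LDAP, Kerberos, dynamic
}

def ports_for(role):
    """Sorted, de-duplicated inbound ports needed for a role."""
    return sorted(set(INBOUND_PORTS[role]))
```

Note how the DC still needs the whole dynamic range inbound even when DFSR itself is static – that’s the point Ned makes in the rest of the post.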

Let’s see this in action. Here I gathered a Netmon 3.3 capture of configuring a new replication group:

  • Server-01 – IP – DC/DNS
  • Server-02 – IP – DFSR
  • Server-03 – IP – DFSR
  • Server-04 – IP – Computer running the DFSMGMT.MSC snap-in

1. First the snap-in gets name resolution for the DC from my management computer (local port 51562 to remote port 53):


2. Then it contacts the DC – the EPM is bound (local port 49199 to remote port 135) and a dynamic port is negotiated so that the client knows which port on which to talk to the DC (port 49156).


3. Having connected to the DC through RPC to DRS (a management API), it then returns information about the domain and other things needed by the snap-in.


4. The snap-in then performs an LDAP query to the DC to locate the DFSR-GlobalSettings container in that domain so that it can read in any new Replication Groups (local port 49201 to remote port 389).


5. The snap-in performs LDAP and DNS queries to get the names of the computers being selected for replication:


6. The DFSR service must be verified (is it installed? Is it running?) This requires a Kerberos CIFS (SMB) request to the DC as well as an SMB connection to the DFSR servers – this is actually a ‘named pipe’ operation over remote port 445, where RPC uses SMB as a transport:


7. The Replicated Folders are created (or verified to exist) on the DFSR servers – I called mine ‘testrf’. This uses SMB again from the snap-in computer to the DFSR server, over remote port 445:


8. The snap-in will write all the configuration data through LDAP over remote port 389 against the DC. This creates all the AD objects and attributes, creates the topology, writes to each DFSR computer object, etc. There are quite a few frames here so I will just highlight a bit of it:


9. If you wait for AD replication to complete and the DFSR servers to poll for changes, you will see the DFSR servers request configuration info through LDAP, and then start working normally on their static RPC port 55555 – just like I showed at the beginning of this post above.


All of the things I’ve discussed are guaranteed needs in order to use DFSR. For the most part you don’t have to have too many remote ports open on the DFSR server itself. However, if you want to use tools like DFSRDIAG.EXE and WMIC.EXE remotely against a DFSR server, or have a remote DFSR server generate ‘Diagnostic Health Reports’, there is more to do.

DFSR utilizes Windows Management Instrumentation as its ‘quasi-API’. When tools like DFS Management are run to generate health reports, or DFSRDIAG POLLAD is targeted against a remote server, you are actually using DCOM and WMI to tell the targeted server to perform actions on your behalf.

There is no mechanism to control which RPC port DCOM/WMI will listen on as there is for DFSR and other services. At service startup DCOM/WMI will pick the next available dynamic RPC port. This means, in theory, that you would have to open the entire range of dynamic ports for the target OS: 1025-5000 (Win2003) or 49152-65535 (Win2008).

For example, here I am running DFSRDIAG POLLAD /MEM:2008-02 to force that server to poll its DC for configuration changes. Note the listening port that I am talking to on the DFSR server (hint – it’s not 55555):


And in my final example, here I am running the DFS Management snap-in and requesting a diagnostic health report. Note again how we use DCOM/WMI/RPC and do not connect directly to the DFSR service; again this requires that we have all those inbound dynamic ports open on the DFSR server:


Wrap Up

So is it worth it to try and use a static replication port? Maybe. If you don’t plan on directly administering a DFSR server and just need it talking to its DC, its DNS server, and its replication partners, you can definitely keep the number of ports used quite low. But if you ever want to communicate directly with it as an administrator, you will need quite a few holes punched through your firewall.

That is, unless you are using IPSEC tunnels through your Firewalls like we recommend. 🙂

– Ned ‘Honto’ Pyle

Source :

What Is DFS Replication and How to Configure It?

Updated: May 23, 2023

File shares are used in organizations to allow users to access and exchange files. If the number of file shares is large, it may be difficult to manage them because mapping many shared resources to each user’s computer takes time and effort. If the configuration of one file share changes, you need to update shared drive mappings for all users using this share. In this case, DFS can help you optimize the hierarchy of shared folders to streamline administration and the use of shared resources.

This blog post explains DFS configuration and how to set up DFS replication in Windows Server 2019.



What Is DFS and How It Works

A Distributed File System (DFS) is a logical organization that transparently groups existing file shares on multiple servers into a structured hierarchy. This hierarchy can be accessed using a single share on a DFS server.
A DFS file share can be replicated across multiple file servers in different locations to optimize server load and increase access speed to shared files. In this case, a user can access a file share on a server that is closest to them. DFS is intended to simplify access to shared files.

Using a DFS namespace server

DFS uses the Server Message Block (SMB) protocol, which is also known as the Common Internet File System (CIFS). Microsoft’s implementation of DFS doesn’t work with other file sharing protocols like NFS or HDFS. However, you can connect multiple SMB shares configured on NAS devices and Linux machines using Samba to your DFS server running on Windows Server. DFS consists of server and client components.

You can configure one DFS share that includes multiple file shares and connect users to this single file share using a unified namespace. When users connect to this file share using a single path, they see a tree structure of shared folders (as they are subfolders of the main share) and can access all needed file shares transparently. Underlying physical file servers hosting file shares are abstracted from the namespace used to access shares. DFS namespaces and DFS replication are the two main components used for DFS functioning.

What is a DFS namespace?

A DFS namespace is a virtual folder that contains links to shared folders stored on different file servers. DFS namespaces can be organized in different ways depending on business needs. They can be organized by geographical location, organization units, a combination of multiple parameters, etc. You can configure multiple namespaces on a DFS server. A DFS namespace can be standalone or domain-based.

DFS namespace and folder targets
  • A standalone DFS namespace stores configuration information and metadata locally on a root server in the system registry. A path to access the root namespace starts with the root server name. A standalone DFS namespace is located on only one server and is not fault-tolerant: if the root server is unavailable, the entire DFS namespace is unavailable. You can use this option if you don’t have an Active Directory domain configured (when using a Workgroup).
  • A domain-based DFS namespace stores its configuration in Active Directory. A path to access the root namespace starts with the domain name. You can store a domain-based DFS namespace on multiple servers to increase the namespace availability. This approach allows you to provide fault tolerance and load balancing across servers. Using domain-based DFS namespaces is recommended.

A namespace consists of the root, links (folders), and folder targets.

  • The namespace root is the starting point of the DFS namespace tree. Depending on the type, a namespace path can look like this:

\\ServerName\RootName (a standalone namespace)

\\DomainName\RootName (a domain-based namespace)

  • A namespace server is a physical server (or a VM) that hosts a DFS namespace. A namespace server can be a regular server with the DFS role installed or a domain controller.
  • A folder is a link in a DFS namespace that points to a target folder containing content for user access. There are also folders without targets, used for organizing the structure.
  • A folder target is a link to a shared file resource located on a particular file server and available via a UNC (Universal Naming Convention) path. A folder target is associated with a folder in the DFS namespace, for example, \\FS2\TestShare on the FS2 server. A folder target is what users need to access files.

A folder can point to a single target or to multiple targets (when the underlying folders are located on different servers and are synchronized/replicated with each other). For example, a user requests \\DFS-server01\TestShare\Doc and, depending on the user’s location, is redirected to the shared folder \\FS01\Doc or \\FS02\Doc.
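Here is a Python sketch of how a namespace path is formed and how a folder with two targets might resolve by client site. The server and share names follow the examples above; the site-selection rule is simplified to a lookup table for illustration:

```python
# Build a namespace root path; the same format covers both standalone
# (server name) and domain-based (domain name) namespaces.
def namespace_root(domain_or_server, root_name):
    return rf"\\{domain_or_server}\{root_name}"

# Hypothetical folder with two replicated targets; the client is sent
# to the target registered for its site.
FOLDER_TARGETS = {
    "Doc": {"SiteA": r"\\FS01\Doc", "SiteB": r"\\FS02\Doc"},
}

def resolve(folder, client_site):
    """Pick the folder target for the client's site."""
    return FOLDER_TARGETS[folder][client_site]
```

The user only ever sees the namespace path; which physical server answers is decided behind the scenes.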

The DFS tree structure includes the following components:

  • DFS root, which is a DFS server on which the DFS service is running
  • DFS links, which are links pointing to network shares used in DFS
  • DFS targets, which are real network shares to which DFS links point

What is DFS replication?

DFS replication is a feature that keeps copies of existing data synchronized across multiple locations. Physical file shares can be synchronized with each other at two or more locations.

An important feature of DFS replication is that the replication of a file starts only after that file has been closed. For this reason, DFS replication is not suitable for replicating databases, given that databases have files opened during the operation of a database management system. DFS replication supports multi-master replication technology, and any member of a replication group can change data that is then replicated.

A DFS replication group is a group of servers participating in the replication of one or multiple replicated folders. A replicated folder is synchronized between all members of the replication group.

DFS replication group

DFS replication uses a special Remote Differential Compression algorithm that allows DFS to detect changes and copy only changed blocks of files instead of copying all data. This approach allows you to save time and reduce replication traffic over the network.
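The changed-block idea can be illustrated with fixed-size block checksums. Note this is a simplification: real RDC chooses variable-size chunks with a rolling hash and uses recursive signatures, but the principle of transferring only blocks whose checksums differ is the same:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size so the example is easy to follow

def block_signatures(data):
    """Checksum each fixed-size block of a file's bytes."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old, new):
    """Indices of blocks the sender must actually transmit."""
    old_sig, new_sig = block_signatures(old), block_signatures(new)
    return [i for i, sig in enumerate(new_sig)
            if i >= len(old_sig) or sig != old_sig[i]]
```

Changing four bytes in the middle of a file means shipping one block rather than the whole file, which is where the bandwidth savings come from.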

DFS replication is performed asynchronously. There can be a delay between writing changes to the source location and replicating those changes to the target location.

DFS Replication topologies

There are two main DFS replication topologies:

  • Hub and spoke. This topology requires at least three replication members: one acts as a hub and the others act as spokes. This technique is useful if you have a central source originating data (the hub) and you need to replicate this data to multiple locations (the spokes).
  • Full mesh. Each member of a replication group replicates data to every other group member. Use this technique if you have 10 or fewer members in a replication group.
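A quick way to see why full mesh is recommended only for smaller groups is to count the one-way replication connections each topology creates (each replicating pair syncs in both directions):

```python
# Connection counts for the two topologies, counting one-way
# replication connections (two per replicating pair).
def hub_and_spoke_connections(members):
    # every spoke pairs with the hub only
    return 2 * (members - 1)

def full_mesh_connections(members):
    # every member pairs with every other member
    return members * (members - 1)
```

Hub and spoke grows linearly with membership, while full mesh grows quadratically – at 10 members that is already 90 connections, which is why larger groups usually avoid it.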

What are the requirements for DFS?

The main requirement is Windows Server 2008 Datacenter or Enterprise edition, Windows Server 2012, or a newer Windows Server version. It is better to use Windows Server 2016 or Windows Server 2019 nowadays.

NTFS must be the file system used to store shared files on Windows Server hosts.

If you use domain-based namespaces, all servers of a DFS replication group must belong to one Active Directory forest.

How to Set Up DFS in Your Windows Environment

You need to prepare at least two servers. In this example, we use two machines running Windows Server 2019, one of which is an Active Directory domain controller:

  • Server01-dc.domain1.local is a domain controller.
  • Server02.domain1.local is a domain member.

This is because configuring DFS in a domain environment has advantages compared to Workgroup, as explained above. The domain name is domain1.local in our case. If you use a domain, don’t forget to configure Active Directory backup.

Enable the DFS roles

First of all, you need to enable the DFS roles in Windows Server 2019.

  1. Open Server Manager.
  2. Click Add Roles and Features in Server Manager.
  3. Select Role-based or feature-based installation in the Installation type screen of the Add Roles and Features wizard.
  4. In the Server Selection screen, make sure your current server (which is a domain controller in our case) is selected. Click Next at each step of the wizard to continue.
  5. Select server roles. Select DFS Namespaces and DFS Replication, as shown in the screenshot below.
Setting up DFS in Windows Server 2019 – installing DFS roles
  6. In the Features screen, you can leave settings as is.
  7. Check your configuration in the confirmation screen and if everything is correct, click Install.
  8. Wait for a while until the installation process is finished and then close the window.

DFS Namespace Setup

Create at least one shared folder on any server that is a domain member. In this example, we create a shared folder on our domain controller. The folder name is shared01 (D:\DATA\shared01).

Creating a shared folder

  1. Right-click a folder and, in the context menu, hit Properties.
  2. On the Sharing tab of the folder properties window, click Share.
  3. Share the folder with Domain users and set permissions. We use Read/Write permissions in this example.
  4. Click Share to finish. Then you can close the network sharing options window.
Sharing a folder in Windows Server 2019 to set up DFS

Now the share is available at this address:


Creating a DFS namespace

Let’s create a DFS namespace to link shared folders in a namespace.

  • Press Win+R and run dfsmgmt.msc to open the DFS Management window. You can also run this command in the Windows command line (CMD).

As an alternative, you can click Start > Windows Administrative Tools > DFS Management.

  • In the DFS Management section, click New Namespace.
How to configure DFS namespaces
  • The New Namespace Wizard opens in a new window.
  1. Namespace Server. Enter a server name. If you are not sure that the name is correct, click Browse, enter a server name and click Check Names. In this example, we enter the name of our domain controller (server01-dc). Click Next at each step of the wizard to continue.
Adding a DFS namespace server
  2. Namespace Name and Settings. Enter a name for a namespace, for example, DFS-01. Click Edit Settings.
Entering a name for a DFS namespace

Pay attention to the local path of a shared folder. Change this path if needed. We use the default path in our example (C:\DFSRoots\DFS-01).

  3. You need to configure access permissions for network users. Click Use custom permissions and hit Customize.
Configuring access permissions for a shared folder on a DFS namespace server
  4. We grant all permissions for domain users (Full Control). Click Add, select Domain Users, select the appropriate checkboxes, and hit OK to save settings.
Configuring permissions for a shared folder
  5. Namespace type. Select the type of namespace to create. We select Domain-based namespace and select the Enable Windows Server 2008 mode checkbox. Select this checkbox if the functional level of your domain is Windows Server 2008 when you use Windows Server 2016 or Windows Server 2019 for better compatibility.

It is recommended that you use a Domain-based namespace due to advantages such as high DFS namespace availability by using multiple namespace servers and transferring namespaces to other servers.

Selecting a domain-based namespace for DFS configuration
  6. Review Settings. Review settings and, if everything is correct, click Create.
Reviewing configuration to finish DFS namespace setup
  7. Confirmation. The window view in case of success is displayed in the screenshot below. The namespace creation has finished. Click Close.
A DFS namespace has been created
A DFS namespace has been created

Adding a new folder to a namespace

Now we need to add a new folder into the existing namespace. We are adding a folder on the same server, which is a domain controller, but this method is applicable for all servers within a domain.

  1. Open the DFS Management window by running dfsmgmt.msc as we did before. Perform the following actions in the DFS Management window.
  2. In the left pane, expand the namespace tree and select a namespace (\\domain1.local\DFS-01\ in our case).
  3. In the right pane (the Actions pane), click New Folder.
  4. In the New Folder window, enter a folder name, for example, Test-Folder. This DFS folder will be linked to the shared folder created before. Click Add.
Adding a new folder into a DFS namespace
  5. Enter the path to the existing folder. We use \\server01-dc\shared01 in this example. You can click Browse and select a folder. Click OK to save the path to the folder target.
Adding a folder target

The folder target has been added.

  6. Click OK to save the settings and close the New Folder window.
A folder target has been added

Now you can access the shared folder by entering the network address in the address bar of Windows Explorer:


You should enter a path in the format:


Accessing a shared folder in Windows Explorer

How to Configure DFS Replication

We need to configure the second server to replicate data. The name of the second server is Server02, and it has been added to the domain1.local domain in this example. Add your second server to the domain if you have not done so yet.

First, install the DFS roles on the second server, as we did for the first one. As an alternative to the Add Roles wizard, you can use PowerShell. Run these two commands to install the DFS Replication and DFS Namespaces roles:

Install-WindowsFeature -Name "FS-DFS-Replication" -IncludeManagementTools

Install-WindowsFeature -Name "FS-DFS-Namespace" -IncludeManagementTools

How to set up DFS roles in PowerShell

Create a folder for replicated data on the second server, for example, D:\Replication.

We are going to use this folder to replicate the data from the folder created on the first server earlier.

Share this folder (D:\Replication) on the second server and configure access permissions the same way as for the previous shared folder. In this example, we share the folder with Domain Users and grant Read/Write permissions.

Sharing a folder on the second server

After sharing this folder, the network path is \\server02\replication in this example. To check the network path, right-click the folder name, select Properties, and open the Sharing tab.

Let’s go back to the domain controller (server01-dc) and open the DFS Management window.

In the left pane of the DFS Management window, expand the tree and select the DFS folder created before (Test-Folder in this case).

Click Add Folder Target in the Actions pane located in the top right corner of the window.

The New Folder Target window appears. Enter the network path of the folder that was created on the second server before:


Click OK to save settings and close the window.

Adding a new folder target to configure Windows DFS replication

A notification message is displayed:

A replication group can be used to keep these folder targets synchronized. Do you want to create a replication group?

Click Yes.

A notification message is displayed when creating a DFS replication group

Wait until the configuration process is finished.

As a result, you should see the Replicate Folder Wizard window. Perform the next steps in the wizard window.

Check the replication group name and replicated folder name. Click Next to continue.

Entering a replication group name and replication folder name

Check folder paths in the Replication Eligibility screen.

Checking paths of shared folders

Select the primary member from the drop-down list. In this example, the primary member is Server01-dc. Data from the primary member is replicated to the other folders that are part of the DFS namespace.

Selecting a primary member when configuring DFS replication

Select the topology of connections for replication.

Full mesh is the recommended option for a DFS replication group with ten or fewer servers. We use Full mesh so that changes made on one server are replicated to all other servers.

The No Topology option can be used if you want to create a custom topology after finishing the wizard.

The Hub and spoke option is inactive (grayed out) because it requires three or more servers, and we use fewer than three.

Selecting a full mesh topology to configure DFS replication

Configure replication group schedule and bandwidth. There are two options:

  • Replicate continuously using the specified bandwidth. Replication is performed as soon as possible. You can allocate bandwidth. Continuous replication of data that changes extensively can consume a lot of network bandwidth. To avoid a negative impact on other processes using the network, you can limit bandwidth for DFS replication. Keep in mind that hard disk load can be high.
  • Replicate during the specified days and times. You can configure a schedule to perform DFS replication on custom days and at custom times. Use this option if target folders don’t always need to contain the latest version of the replicated data.

We select the first option in our example.

Setting up DFS replication group schedule

Review settings for your DFS replication group. If everything is correct, click Create.

Reviewing settings for a DFS replication group before finishing configuration

View the DFS replication configuration status on the Confirmation screen. You should see the Success status for all tasks, as displayed in the screenshot below. Click Close to close the wizard window.

A DFS replication group has been created successfully

A notification message about the replication delay is displayed. Read the message and hit OK.

A notification message about DFS replication delay

DFS replication has been configured. Open the shared folder from which data must be replicated initially, write a file to that network folder, and check whether the new data is replicated to the second folder on the other server. Don’t forget that open files are not replicated until they are closed and the changes are saved to disk. In a few moments, you should see a replica of the file in the target folder.

Using filters for DFS Replication

Use file filters to select the file types you don’t want to replicate. Some applications can create temporary files and replicating them wastes network bandwidth, loads hard disk drives, consumes additional storage space in the target folder, and increases overall time to replicate data. You can exclude the appropriate file types from DFS replication by using filters.

To configure filters, perform the following steps in the DFS Management window:

  1. Expand the Replication tree in the navigation pane and select the needed DFS replication group folder name (domain1.local\dfs-01\Test-folder in our case).
  2. Select the Replicated Folders tab.
  3. Select the needed folder, right-click the folder name and hit Properties. Alternatively, you can select the folder and click Properties in the Actions pane.
  4. Set the filtered file types by using masks in the folder properties window. In this example, files matching the rule are excluded from replication:

~*, *.bak, *.tmp

You can also filter subfolders, for example, exclude Temp subfolders from DFS replication.

Configuring DFS replication filters
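For illustration, the effect of such masks can be sketched in Python with the standard fnmatch module. The file names and the is_excluded helper below are made up for the example; DFSR applies these filters internally, not via this code.

```python
from fnmatch import fnmatch

# DFSR-style file filter masks, as in the example above
masks = ["~*", "*.bak", "*.tmp"]

def is_excluded(filename):
    """Return True if the file name matches any filter mask.
    Lowercasing both sides makes the match case-insensitive on any OS."""
    return any(fnmatch(filename.lower(), m.lower()) for m in masks)

files = ["report.docx", "~$report.docx", "data.BAK", "scratch.tmp", "notes.txt"]
excluded = [f for f in files if is_excluded(f)]
print(excluded)  # ['~$report.docx', 'data.BAK', 'scratch.tmp']
```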

Staging location

There can be a conflict when two or more users save changes to a file before those changes are replicated. The most recent change wins, and older versions of the changed file are moved to the Conflict and Deleted folder. This issue can happen when replication is slow and files are large (the amount of changed data is high), that is, when the time needed to transfer the changed data is longer than the interval between users’ writes to the file.

Staging folders act as a cache for new and changed files that are ready to be replicated from source folders to target folders. The staging location is intended for files that exceed a certain file size. Staging is used as a queue to store files that must be replicated and ensure that changes can be replicated without worrying about changes to them during the transfer process.

Another aspect of configuring staging folders is performance optimization. DFS Replication can consume additional CPU and disk resources, slow down, or even stop if the staging quota is too small for your tasks. The recommended size of the staging quota is equal to the size of the 32 largest files in the replicated folder.

You can edit staging folder properties for DFS Replication in the DFS Management window:

  1. Select a replication group in the left pane of the DFS Management window.
  2. Select the Memberships tab.
  3. Select the needed replication folder, right-click the folder, and hit Properties.
  4. Select the Staging tab in the Properties window.
  5. Edit the staging path and quota according to your needs.
Configuring DFS staging location

Saved changes are not applied immediately. New staging settings must be replicated across all DFS servers within a domain. Time depends on Active Directory Domain Services replication latency and the polling interval of servers (5 minutes or more). Server reboot is not required.

DFS Replication vs. Backup

Don’t confuse DFS Replication of data in shared folders with data backup. DFS Replication makes copies of data on different servers, but if unwanted changes are written to a file on one server, those changes are replicated to the other servers. As a result, you don’t have a recovery point: the file has been overwritten with unwanted changes on all servers, and you cannot use any of the copies for recovery in case of failure. This is exactly what happens in a ransomware attack.

Use NAKIVO Backup & Replication to protect data stored on your physical Windows Server machines including data stored in shared folders. The product also supports Hyper-V VM backup and VMware VM backup at the host level for effective protection.

1 Year of Free Data Protection: NAKIVO Backup & Replication


Deploy in 2 minutes and protect virtual, cloud, physical and SaaS data. Backup, replication, instant recovery options.



Distributed File System (DFS) can significantly simplify shared resources management for administrators and make accessing shared folders more convenient for end-users. DFS makes transparent links to shared folders located on different servers.

DFS namespaces and DFS replication are the two main features that you can configure in the DFS Management window after installing the appropriate Windows Server roles. Opt for configuring DFS in a domain environment rather than in a workgroup, because an Active Directory domain provides advantages such as high availability and flexibility.

Source :

Manually Clearing the ConflictAndDeleted Folder in DFSR

By Ned Pyle
Published Apr 04 2019 01:30 PM

First published on TechNet on Oct 06, 2008
Ned here again. Today I’m going to talk about a couple of scenarios we run into with the ConflictAndDeleted folder in DFSR. These are real quick and dirty, but they may save you a call to us someday.

Scenario 1: We need to empty out the ConflictAndDeleted folder in a controlled manner as part of regular administration (i.e. we just lowered quota and we want to reclaim that space).

Scenario 2: The ConflictAndDeleted folder quota is not being honored due to an error condition and the folder is filling the drive.

Let’s walk through these now.

Emptying the folder normally

It’s possible to clean up the ConflictAndDeleted folder through the DFSMGMT.MSC and SERVICES.EXE snap-ins, but it’s disruptive and kind of gross (you could lower the quota, wait for AD replication, wait for DFSR polling, and then restart the DFSR service). A much faster and slicker way is to call the WMI method CleanupConflictDirectory from the command-line or a script:

1.  Open a CMD prompt as an administrator on the DFSR server.
2.  Get the GUID of the Replicated Folder you want to clean:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderconfig get replicatedfolderguid,replicatedfoldername

(This is all one line, wrapped)

Example output:


3.  Then call the CleanupConflictDirectory method:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='<RF GUID>'" call cleanupconflictdirectory

Example output with a sample GUID:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='70bebd41-d5ae-4524-b7df-4eadb89e511e'" call cleanupconflictdirectory

4.  At this point the ConflictAndDeleted folder will be empty and the ConflictAndDeletedManifest.xml will be deleted.

Emptying the ConflictAndDeleted folder when in an error state

We’ve also seen a few cases where the ConflictAndDeleted quota was not being honored at all. In every single one of those cases, the customer had recently had hardware problems (specifically with their disk system) where files had become corrupt and the disk was unstable – even after repairing the disk (at least to the best of their knowledge), the ConflictAndDeleted folder quota was not being honored by DFSR.

Here’s where quota is set:


Usually when we see this problem, the ConflictAndDeletedManifest.XML file has grown to hundreds of MB in size. When you try to open the file in an XML parser or in Internet Explorer, you will receive an error like “The XML page cannot be displayed” or a parsing error that points at a specific line. This is because the file is invalid at some section (a damaged element, scrambled data, etc.).

To fix this issue:

  1. Follow steps 1-4 from above. This may clean the folder as well as update DFSR to say that cleaning has occurred. We always want to try doing things the ‘right’ way before we start hacking.
  2. Stop the DFSR service.
  3. Delete the contents of the ConflictAndDeleted folder manually (with explorer.exe or DEL).
  4. Delete the ConflictAndDeletedManifest.xml file.
  5. Start the DFSR service back up.

For a bit more info on conflict and deletion handling in DFSR, take a look at:

Staging folders and Conflict and Deleted folders (TechNet)
DfsrConflictInfo Class (MSDN)

Until next time…

– Ned “Unhealthy love for DFSR” Pyle

Source :

The ConflictAndDeleted folder size may exceed its configured limitation

Article 02/23/2023

In this article

  1. Symptoms
  2. Cause
  3. Resolution
  4. Status

This article provides a resolution for the issue that the ConflictAndDeleted folder size may exceed its configured limitation.

Applies to:   Windows Server 2016, Windows Server 2012 R2
Original KB number:   951010


Symptoms

In Windows Server, the size of the ConflictAndDeleted folder may exceed its configured limitation. By default, the limitation of the ConflictAndDeleted folder is 660 megabytes (MB). When this problem occurs, the ConflictAndDeleted folder may exhaust available disk space on the volume on which the folder resides. Additionally, the Distributed File System (DFS) Replication service cannot replicate any files.


Cause

This problem occurs because the ConflictAndDeletedManifest.xml file is corrupted. This file stores information about the current contents of the ConflictAndDeleted folder. The DFS Replication service writes to the ConflictAndDeletedManifest.xml file when files are added or removed from the ConflictAndDeleted folder.


Resolution

To resolve this problem, use WMIC commands to delete the contents of the ConflictAndDeleted folder and the ConflictAndDeletedManifest.xml file. Run the WMIC commands in a Command Prompt window (cmd.exe). To clean up the ConflictAndDeleted folder content of a replicated folder, run the following command:


wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfoldername='<ReplicatedFolderName>'" call cleanupconflictdirectory


In this command, <ReplicatedFolderName> represents the name of the replicated folder.

To clean up the ConflictAndDeleted folder content of all of the replicated folders in a replication group, enter the following command:


wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicationgroupname='<ReplicationGroupName>'" call cleanupconflictdirectory


In this command, <ReplicationGroupName> represents the name of the replication group.


If you have not run a WMIC command on the computer before, a short pause occurs while the computer installs WMIC.

Depending on the size of the ConflictAndDeleted folder, this process may take a few minutes. The process empties the ConflictAndDeleted folder and reduces or deletes the ConflictAndDeletedManifest.xml file.


If any conflicts or deletions occur while cleanupconflictdirectory runs, the information that is related to those conflicts or deletions remains in the ConflictAndDeleted folder and the ConflictAndDeletedManifest.xml file when the process finishes. After the cleanup, the file is much smaller, and the total size of the ConflictAndDeleted folder is less than the quota maximum mark.


Status

Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the “Applies to” section.

Source :

Over 100 WordPress Repository Plugins Affected by Shortcode-based Stored Cross-Site Scripting

István Márton
December 12, 2023

On August 14, 2023, the Wordfence Threat Intelligence team began a research project to find Stored Cross-Site Scripting (XSS) via Shortcode vulnerabilities in WordPress repository plugins. This type of vulnerability enables threat actors with contributor-level permissions or higher to inject malicious web scripts into pages using plugin shortcodes, which will execute whenever a victim accesses the injected page. We found over 100 vulnerabilities across 100 plugins which affect over 6 million sites. You can find the complete chart of affected plugins below.

All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those still using the free version of our plugin, are protected by the Wordfence firewall’s built-in Cross-Site Scripting protection against any exploits targeting this type of vulnerability.

Why are these vulnerabilities so common?

By a general definition, shortcodes are unique macro codes added by plugin developers to dynamically and automatically generate content. Developers can use shortcode attributes to optionally add settings, making the content even more dynamic and providing more options for users.

It is important to note that shortcodes are typically used in the post content on WordPress sites, and the post content is sanitized by WordPress core functionality before being saved to the database.

Developers might assume that since WordPress core sanitizes post content, the attributes used in shortcodes are also sanitized and secure. However, the wp_kses_post() sanitization function only sanitizes complete HTML elements.

These vulnerabilities occur when the value provided in the shortcode attribute is output in dynamically generated content within the attributes of an HTML element. In such cases, the value specified in the shortcode contains only HTML element attributes, which are not sanitized during the save of a post. As mentioned earlier, the sanitize function only sanitizes complete HTML tags.

An example shortcode containing an HTML tag sanitized by the wp_kses_post() function:

[custom_link class="<p onmouseover='alert(/XSS/)'>Click Here!</p>"]
In this case, wp_kses_post() checks and sanitizes the entire <p> tag and its attributes.

An example shortcode not sanitized by the wp_kses_post() function:
[custom_link class="' onmouseover='alert(/XSS/)'"]
As there is no HTML tag in this case, the wp_kses_post() function does not check or sanitize anything.

Note: The above explanation demonstrates the usage of cross-site scripting within HTML attributes as it is the most common scenario, but the same problem applies to JS variable values, which will be equally vulnerable if not properly escaped.

Even the WordPress security handbook says the following about escaping output:

“Most WordPress functions properly prepare the data for output, and additional escaping is not needed.”

After reading this, developers might reasonably assume that the shortcode attributes are sanitized and secure. However, as demonstrated in the above example, there are exceptions.

Vulnerability Summary from Wordfence Intelligence

Plugin Name | Plugin Slug | CVE | Affected Versions / Patched Version
VK Filter Search | vk-filter-search | CVE-2023-5705 | <=
Telephone Number Linker | telephone-number-linker | CVE-2023-5743 | <= 1.2
Tab Ultimate | tabs-pro | CVE-2023-5667 | <= 1.31.4
Ibtana – WordPress Website Builder | ibtana-visual-editor | CVE-2023-6684 | <=
Featured Image Caption | featured-image-caption | CVE-2023-5669 | <=
Reusable Text Blocks | reusable-text-blocks | CVE-2023-5745 | <= 1.5.3
Font Awesome More Icons | font-awesome-more-icons | CVE-2023-5232 | <= 3.5
Podcast Subscribe Buttons | podcast-subscribe-buttons | CVE-2023-5308 | <=
Slick Contact Forms | slick-contact-forms | CVE-2023-5468 | <= 1.3.7
LiteSpeed Cache | litespeed-cache | CVE-2023-4372 | <= 5.65.7
Theme Switcha – Easily Switch Themes for Development and Testing | theme-switcha | CVE-2023-5614 | <=
WordPress Charts | wp-charts | CVE-2023-5062 | <= 0.7.0
EasyRotator for WordPress – Slider Plugin | easyrotator-for-wordpress | CVE-2023-5742 | <= 1.0.14
Leaflet Map | leaflet-map | CVE-2023-5050 | <=
Bitly’s WordPress Plugin | wp-bitly | CVE-2023-5577 | <= 2.7.1
SEO Slider | seo-slider | CVE-2023-5707 | <=
CallRail Phone Call Tracking | callrail-phone-call-tracking | CVE-2023-5051 | <=
iframe | iframe | CVE-2023-4919 | <= 4.64.7
Feeds for YouTube (YouTube video, channel, and gallery plugin) | feeds-for-youtube | CVE-2023-4841 | <=
Instagram for WordPress | instagram-for-wordpress | CVE-2023-5357 | <= 2.1.6
Awesome Weather Widget | awesome-weather | CVE-2023-4944 | <= 3.0.2
FareHarbor for WordPress | fareharbor | CVE-2023-5252 | <=
Shortcode Menu | shortcode-menu | CVE-2023-5565 | <= 3.2
Modal Window – create popup modal window | modal-window | CVE-2023-5161 | <=
Sponsors | wp-sponsors | CVE-2023-5662 | <= 3.5.0
Gift Up Gift Cards for WordPress and WooCommerce | gift-up | CVE-2023-5703 | <=
Bellows Accordion Menu | bellows-accordion-menu | CVE-2023-5164 | <=
TCD Google Maps | tcd-google-maps | CVE-2023-5128 | <= 1.8
Super Testimonials | super-testimonial | CVE-2023-5613 | <= 2.93.0
SlimStat Analytics | wp-slimstat | CVE-2023-4597 | <=
WP Font Awesome | wp-font-awesome | CVE-2023-5127 | <= 1.7.9
Advanced Menu Widget | advanced-menu-widget | CVE-2023-5085 | <= 0.4.1
Comments by Startbit | facebook-comment-by-vivacity | CVE-2023-5295 | <= 1.4
BSK PDF Manager | bsk-pdf-manager | CVE-2023-5110 | <=
Video PopUp | video-popup | CVE-2023-4962 | <=
Privacy Policy Generator, Terms & Conditions Generator WordPress Plugin : WPLegalPages | wplegalpages | CVE-2023-4968 | <=
WP Responsive header image slider | responsive-header-image-slider | CVE-2023-5334 | <= 3.2.1
Interact: Embed A Quiz On Your Site | interact-quiz-embed | CVE-2023-5659 | <=
WDContactFormBuilder | contact-form-builder | CVE-2023-5048 | <= 1.0.72
Widget Responsive for Youtube | youtube-widget-responsive | CVE-2023-5063 | <=
TM WooCommerce Compare & Wishlist | tm-woocommerce-compare-wishlist | CVE-2023-5230 | <= 1.1.7
Pop ups, WordPress Exit Intent Popup, Email Pop Up, Lightbox Pop Up, Spin the Wheel, Contact Form Builder – Poptin | poptin | CVE-2023-4961 | <=
WhatsApp Share Button | whatsapp | CVE-2023-5668 | <= 1.0.1
Delete Me | delete-me | CVE-2023-5126 | <= 3.03.1
WP MapIt | wp-mapit | CVE-2023-5658 | <= 2.7.1
iframe forms | iframe-forms | CVE-2023-5073 | <= 1.0
Newsletter – Send awesome emails from WordPress | newsletter | CVE-2023-4772 | <=
Theme Blvd Shortcodes | theme-blvd-shortcodes | CVE-2023-5338 | <= 1.6.8
Social Feed – All social media in one place | add-facebook | CVE-2023-5661 | <=
WS Facebook Like Box Widget | ws-facebook-likebox | CVE-2023-4963 | <= 5.0
Garden Gnome Package | garden-gnome-package | CVE-2023-5664 | <=
Social Sharing Plugin – Social Warfare | social-warfare | CVE-2023-4842 | <=
Skype Legacy Buttons | skype-online-status | CVE-2023-5615 | <= 3.1
Simple Cloudflare Turnstile – CAPTCHA Alternative | simple-cloudflare-turnstile | CVE-2023-5135 | <=
Booster for WooCommerce | woocommerce-jetpack | CVE-2023-4945 | <=
Simple Shortcodes | smpl-shortcodes | CVE-2023-5566 | <= 1.0.20
Font Awesome Integration | font-awesome-integration | CVE-2023-5233 | <= 5.0
Giveaways and Contests by RafflePress – Get More Website Traffic, Email Subscribers, and Social Followers | rafflepress | CVE-2023-5049 | <=
ImageMapper | imagemapper | CVE-2023-5507 | <= 1.2.6
Accordion | accordions-wp | CVE-2023-5666 | <= 2.62.7
GEO my WordPress | geo-my-wp | CVE-2023-5467 | <=
Related Products for WooCommerce | woo-related-products-refresh-on-reload | CVE-2023-5234 | <=
Live Chat with Facebook Messenger | wp-facebook-messenger | CVE-2023-5740 | <= 1.0
Contact form Form For All – Easy to use, fast, 37 languages. | formforall | CVE-2023-5337 | <= 1.2
JQuery Accordion Menu Widget | jquery-vertical-accordion-menu | CVE-2023-4890 | <= 3.1.2
Blog Filter – Advanced Post Filtering with Categories Or Tags, Post Portfolio Gallery, Blog Design Template, Post Layout | blog-filter | CVE-2023-5291 | <=
WordPress Social Login | wordpress-social-login | CVE-2023-4773 | <= 3.0.4
QR Code Tag | qr-code-tag | CVE-2023-5567 | <= 1.0
Buzzsprout Podcasting | buzzsprout-podcasting | CVE-2023-5335 | <=
Drop Shadow Boxes | drop-shadow-boxes | CVE-2023-5469 | <=
Carousel, Recent Post Slider and Banner Slider | spice-post-slider | CVE-2023-5362 | <= 2.02.1
Weather Atlas Widget | weather-atlas | CVE-2023-5163 | <=
Contact Form – Custom Builder, Payment Form, and More | powr-pack | CVE-2023-5741 | <= 2.1.0
MapPress Maps for WordPress | mappress-google-maps-for-wordpress | CVE-2023-4840 | <=
Media Library Assistant | media-library-assistant | CVE-2023-4716 | <= 3.103.11
Google Maps Plugin by Intergeo | intergeo-maps | CVE-2023-4887 | <= 2.3.2
SendPress Newsletters | sendpress | CVE-2023-5660 | <=
Magic Action Box | magic-action-box | CVE-2023-5231 | <= 2.17.2
Embed Calendly | embed-calendly-scheduling | CVE-2023-4995 | <= 3.63.7
Team Showcase | team-showcase | CVE-2023-5639 | <= 2.12.2
Horizontal scrolling announcement | horizontal-scrolling-announcement | CVE-2023-5001 | <= 9.2
WP Post Columns | wp-post-columns | CVE-2023-5708 | <= 2.2
Font Awesome 4 Menus | font-awesome-4-menus | CVE-2023-4718 | <= 4.7.0
Advanced Custom Fields: Extended | acf-extended | CVE-2023-5292 | <=
Options for Twenty Seventeen | options-for-twenty-seventeen | CVE-2023-5162 | <=
Etsy Shop | etsy-shop | CVE-2023-5470 | <=
Copy Anything to Clipboard | copy-the-code | CVE-2023-5086 | <=
Email Encoder – Protect Email Addresses and Phone Numbers | email-encoder-bundle | CVE-2023-4599 | <=
Advanced iFrame | advanced-iframe | CVE-2023-4775 | <= 2023.82023.9
WP Mailto Links – Protect Email Addresses | wp-mailto-links | CVE-2023-5109 | <=
Booster for WooCommerce | woocommerce-jetpack | CVE-2023-5638 | <=
Ziteboard Online Whiteboard | ziteboard-online-whiteboard | CVE-2023-5076 | <=
Simple Like Page Plugin | simple-facebook-plugin | CVE-2023-4888 | <=
CPO Shortcodes | cpo-shortcodes | CVE-2023-5704 | <= 1.5.0
WCFM Marketplace – Best Multivendor Marketplace for WooCommerce | wc-multivendor-marketplace | CVE-2023-4960 | <=
Connect Matomo (WP-Matomo, WP-Piwik) | wp-piwik | CVE-2023-4774 | <=
Very Simple Google Maps | very-simple-google-maps | CVE-2023-5744 | <=
Contact Form by FormGet – Best Form Builder Plugin for WordPress | formget-contact-form | CVE-2023-5125 | <= 5.5.5
Professional Social Sharing Buttons, Icons & Related Posts – Shareaholic | shareaholic | CVE-2023-4889 | <=

Security recommendations for developers

We recommend using one of the built-in WordPress escaping functions before outputting user data. WordPress has a number of functions that can be used for different situations. You can read more about these functions in the WordPress developer documentation on securing output.

Technical Analysis #1

A general but fictional shortcode will be used to demonstrate a shortcode XSS vulnerability, focusing only on the most important details.

Let’s take an example where shortcode attributes are used as HTML attributes.

The vulnerable shortcode function:

function custom_link_shortcode( $atts, $content ) {
    $atts = shortcode_atts( array(
        'class' => 'custom-link', // default class value
        'href'  => '#', // default href value
    ), $atts );
    $output = '<a class="' . $atts['class'] . '" href="' . $atts['href'] . '">' . $content . '</a>';
    return $output;
}
add_shortcode( 'custom_link', 'custom_link_shortcode' );

Let’s take a look at an example where the following shortcode is used in the post content:
[custom_link class='my-custom-class']Link Text[/custom_link]

As a result, the following link will be displayed in the post:

<a class="my-custom-class" href="#">Link Text</a>

In this case, the class attribute of the shortcode is used and outputted in the class attribute of the <a> HTML tag.

The Exploit

Now, let’s take a look at a threat actor that wants to inject malicious web scripts into a post using the plugin’s shortcode. To accomplish this, the attacker needs to leave the specified HTML attribute, which in the example is the “class” attribute and add an additional malicious HTML attribute after.

Here’s an exploit example:
[custom_link class='" onmouseover="alert(/XSS/)']Link Text[/custom_link]

With the payload above, the following link will be displayed in the post:

<a class="" onmouseover="alert(/XSS/)" href="#">Link Text</a>

The first double quotation mark provided in the shortcode’s “class” attribute closes the “class” HTML attribute within the <a> tag. After that the “onmouseover” HTML attribute containing a malicious script is added to the <a> tag. This means that whenever a user mouses over the rendered shortcode, a prompt with “XSS” would appear on the screen.

The Solution

To make the shortcode secure, escaping functions must be used. This prevents user-supplied input from breaking out of the original “class” HTML attribute, because any quotes that could close the attribute are escaped.

Let’s make the example shortcode code secure:

function custom_link_shortcode( $atts, $content ) {
    $atts = shortcode_atts( array(
        'class' => 'custom-link', // default class value
        'href'  => '#', // default href value
    ), $atts );
    $output = '<a class="' . esc_attr( $atts['class'] ) . '" href="' . esc_url( $atts['href'] ) . '">' . $content . '</a>';
    return $output;
}
add_shortcode( 'custom_link', 'custom_link_shortcode' );

The “class” data is an attribute, so it is recommended to use the esc_attr() function there.
The “href” data is a url, which is an attribute that has more specific requirements, so it is recommended to use the esc_url() function there.

The above two functions make the shortcode completely secure against Cross-Site Scripting.

If the attacker tries to add a malicious shortcode using the patched functionality, it will result in the following link, which no longer contains executable JavaScript:

<a class="&quot; onmouseover=&quot;alert(/XSS/)" href="#">Link Text</a>
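Outside WordPress, the same escaping idea can be sketched in Python using the standard html.escape() function as a stand-in for esc_attr(); render_link is a hypothetical helper written for this example, not WordPress code.

```python
from html import escape

def render_link(cls, href="#", text="Link Text"):
    """Build the anchor tag with attribute values escaped: with quote=True,
    html.escape turns quotes into entities, so a payload cannot close the
    class attribute and smuggle in its own attributes."""
    return '<a class="%s" href="%s">%s</a>' % (
        escape(cls, quote=True),
        escape(href, quote=True),
        text,
    )

payload = '" onmouseover="alert(/XSS/)'
print(render_link(payload))
# <a class="&quot; onmouseover=&quot;alert(/XSS/)" href="#">Link Text</a>
```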

Technical Analysis #2

Next, let’s look at an example where shortcode attributes are used as JS variable values.

The vulnerable shortcode function assigns shortcode attributes to JS variables:

function custom_js_color_variable_shortcode( $atts ) {
    $atts = shortcode_atts( array(
        'color' => 'red', // default color value
    ), $atts );
    $output = '<script>' . 'let color="' . $atts['color'] . '";' . '</script>';
    return $output;
}
add_shortcode( 'custom_js_color_variable', 'custom_js_color_variable_shortcode' );

Here’s an example where the following shortcode is used in the post content:
[custom_js_color_variable color='blue']

As a result, the following script with a variable setting for “color” will be displayed in the post:

<script>let color="blue";</script>

The Exploit

Now, we’ll try to exploit the shortcode:
[custom_js_color_variable color='"; alert(/XSS/); let more="']

As a result, the following script will be displayed in the post:

<script>let color=""; alert(/XSS/); let more="";</script>

The Solution

Let’s make the example shortcode code secure:

function custom_js_color_variable_shortcode( $atts ) {
    $atts = shortcode_atts( array(
        'color' => 'red', // default color value
    ), $atts );
    $output = '<script>' . 'let color="' . esc_js( $atts['color'] ) . '";' . '</script>';
    return $output;
}
add_shortcode( 'custom_js_color_variable', 'custom_js_color_variable_shortcode' );

The “color” data is a JS variable, so it is recommended to use the esc_js() function.

The following script will be displayed in the post if the attacker tries using the same malicious shortcode:

<script>let color="&quot;; alert(/XSS/); let more=&quot;";</script>
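A self-contained sketch of the relevant behavior, using a rough stand-in for esc_js() (the core function in wp-includes/formatting.php also handles single quotes, slashes, and line endings):

```php
<?php
// Rough stand-in for the double-quote handling of WordPress's esc_js(),
// for illustration only.
function esc_js_demo( $text ) {
    return htmlspecialchars( $text, ENT_COMPAT, 'UTF-8' );
}

// The malicious "color" attribute from the exploit above.
$color = '"; alert(/XSS/); let more="';

// The injected double quotes become &quot;, so the attacker-supplied
// string stays inside the JS string literal instead of closing it.
echo '<script>let color="' . esc_js_demo( $color ) . '";</script>';
```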


In this blog post, we have detailed Stored Shortcode-Based XSS vulnerabilities within several WordPress repository plugins. These vulnerabilities allow authenticated threat actors with contributor-level permissions or higher to inject malicious web scripts into pages, which then execute when a user visits an affected page. As with all XSS vulnerabilities, a malicious payload could be used to perform actions as an administrator, including adding new malicious administrator users to the site, embedding backdoors in plugin and theme files, and redirecting users to malicious sites.

We encourage WordPress users to verify that their sites are updated to the latest patched version of each impacted plugin. For unpatched plugins that have been closed by the security team, we recommend that WordPress users delete the affected plugin and look for an alternative solution.

All Wordfence users, including those running Wordfence Premium, Wordfence Care, and Wordfence Response, as well as sites still running the free version of Wordfence, are fully protected against this type of vulnerability.

If you know someone who uses any of these plugins on their site, we recommend sharing this advisory with them to ensure their site remains secure, as this type of vulnerability poses a significant risk.

For security researchers looking to disclose vulnerabilities responsibly and obtain a CVE ID, you can submit your findings to Wordfence Intelligence and potentially earn a spot on our leaderboard.

Did you know that Wordfence has a Bug Bounty Program? We’ve recently increased our bounties by 6.25x until December 20th, 2023, with our bounties for the most critical vulnerabilities reaching $10,000 USD! If you’re an aspiring or current vulnerability researcher, click here to sign up.



Ubiquiti UniFi Network Application 8.0.7


UniFi Network Application 8.0.7 adds support for Radio Manager, WireGuard VPN Client, and Site Overview, and improves the Port Manager section by adding an overview of all ports and the VLAN Viewer.

Radio Manager

The new Radios page provides an overview of the Access Point radios and their configuration, statistics, and performance.

  • Filter Devices – Show all APs or only specific devices.
  • Filter Bands – Use the filters to display only certain bands or MIMO, e.g. 5 GHz or 3×3.
  • Bulk Edit – Change the radio configuration on multiple APs at the same time.

Improved Port Manager

The new Ports page provides an overview of all ports across your devices.

  • Filter Ports – Use the filters to display only certain ports, e.g. only PoE or SFP ports.
  • Filter Devices – Show all ports or only ports on a specific device.
  • Insights – View and compare statistics between ports on the same device.

The VLAN port management has been redesigned to improve UX when managing VLANs.

  • Native VLAN / Network – Used for untagged traffic, i.e. not tagged with a VLAN ID. Previously this option was called ‘Primary Network’.
  • Tagged VLAN Management – Used for traffic tagged with a VLAN ID. Previously this option was called ‘Traffic Restriction’.
  • Allow All – Configured VLANs are automatically tagged (allowed) on the port.
  • Block All – All tagged VLANs are blocked (not allowed) on the port.
  • Custom – Specify which VLANs are tagged (allowed) on the port. Any VLAN that is not specified is blocked.

When adding a new VLAN, it is automatically tagged (allowed) on the port when using ‘Allow All’. If ‘Custom’ is used, the new VLAN needs to be manually added to the port.

VLAN Viewer

Provides an easy way to see Native and Tagged VLANs across your devices.

  • Native VLAN Assignment – This shows which VLAN ID is set as native.
  • VLAN Tagging – Shows which VLANs are tagged, blocked, or native.
  • Search for VLANs using the VLAN name, ID, or subnet.

WireGuard VPN Client

Allows you to connect your UniFi Gateway to a VPN service provider and send internet traffic from devices over the VPN. Uploading a file and manual configuration are both supported.

Site Overview

Provides an overview of all sites used on UniFi Network Applications managing multiple sites.

  • UniFi Devices – See how many devices are connected to each site.
  • Client Devices – See how many WiFi/wired clients and guests are connected to each site.
  • Insight – See which sites have offline devices and critical notifications.

Client Connections

The System Log now provides much more detail on client connections, such as the connection time and data usage.


  • Improved Port Manager.
  • Added all ports overview.
  • Added VLAN Viewer.
  • Improved VLAN port management UX.
  • Added Site Overview.
  • Added ability to select which networks Suspicious Activity is enabled on.
  • Added sorting feature for IP Groups.
  • Added ability to allow opening predefined firewall rules.
  • Improved validation for Prefix ID in Virtual Network settings.
  • Improved empty MAC whitelist validation in Port Manager.
  • Improved validation for DHCP options.
  • Improved DHCP Server TFTP Server field validation.
  • Improved Traffic Rule IP Address validation.
  • Improved Firewall Rules UX.
  • Improved Security Settings UX.
  • Improved Global Network Settings UX.
  • Enabled auto upgrade for UXG-Pro after the adoption is completed.
  • Remove LTE Failover WAN from IPTV Options.
  • Show the local language in the Language dropdown.
  • Prevent provisioning more Layer 3 static routes than UniFi switches can support.
  • Routes that are over the limit at the time of upgrade will be marked as Paused.
  • This does not mean that total static route support on Layer 3 UniFi switches is decreased, instead, UX is improved to prevent configuration of routes that are not functional.


  • Added WireGuard VPN Client.
  • Added messaging to create traffic routes after creating VPN Clients. This applies to the VPN Client feature, not adding clients to VPN Servers.
  • Added validation in VPN Server settings when the port overlaps with a Port Forwarding rule.
  • Added IP/Hostname override option for OpenVPN and WireGuard VPN Servers.
  • This adds a custom hostname or IP address to the configuration file used by clients.
  • This option is useful if the UniFi Gateway is behind NAT or is using a dynamically assigned IP address.
  • Added validation for Local IP in IPsec Site-to-Site VPN settings.
  • Automatically remove Site-to-Site Auto IPsec configuration if the adopted gateway doesn’t support it.
  • Improved Site-to-Site VPN validations.
  • Improved configuration file generation time for OpenVPN Servers.
  • Increased OpenVPN and WireGuard VPN Client limit from 5 to 8. This applies to the VPN Client feature, not VPN users connecting to VPN Servers.
  • Remove the PPTP Server if the adopted gateway doesn’t support it.

Clients and Devices

  • Added PoE power cycle option to the device side panel.
  • Added confirmation message when configuring Network Overrides.
  • Improved UniFi Devices page performance on larger setups.
  • Improved System Logs for client connections.
  • Locked the first column for Devices/Clients pages when scrolling horizontally.
  • Client hostnames (if present) are now shown in the side panel overview.
  • Moved filters to the left side in the Device and Client pages.


  • Added Radio Manager.
  • Added ability to enable Professional installer toggle for Consoles.
  • Improved adding clients to MAC Address Filters.
  • Improved actionable feedback when Outdoor Mode is enabled.
  • Removed Global AP Settings; use Radio Manager for bulk editing instead.
  • Collapse RF Scan tab by default in the AP device panel.
  • Changed WiFi Experience to TX retries for APs in their device panel.
  • Enhanced voucher printing options.


  • Fixed an issue where some UniFi devices were incorrectly shown on the Client Devices page or not shown at all.
  • As a result of this fix, unmanaged non-network UniFi devices (e.g. UniFi Protect camera) may appear again as offline devices.
  • These offline devices will be removed automatically based on the Data Retention settings.
  • Automatic removal is an automated, periodic process that will run for several minutes after updating. Manual removal is also possible.
  • Fixed an issue where clients that had been blocked and then removed from the block list still couldn’t connect until the next AP provision.
  • Fixed incorrect channel width for BeaconHD/U6-Extender.
  • Fixed an issue where Virtual Network usable hosts were incorrectly calculated.
  • Fixed missing ISP names in internet-related notifications.
  • Fixed rare gateway adoption issues via Layer 3.
  • Fixed an issue where WiFiman speed test results were not shown.
  • Fixed issue where WAN configuration is not populated when moving a gateway device to a new site.
  • Fixed an issue where CGNAT IP addresses were incorrectly marked as public IPs for Site Magic.
  • Fixed invalid connected client count for In-Wall APs.
  • Fixed unmanaged Network devices not shown on Client and Device pages in rare cases.
  • Fixed an issue where the Console would appear offline in rare cases.
  • Fixed sorting when there are multiple pages.
  • Fixed an issue where Voice VLAN settings are not effective when all VLANs are auto-allowed on switch ports.
  • Fixed an issue where Lock to AP is not disabled when removing an AP.
  • Fixed an issue where RADIUS profiles couldn’t be disabled when using a WireGuard VPN Server.
  • Fixed rare gateway configuration error.

Additional information

  • Create a backup before upgrading your UniFi Network Application in the event any issues are encountered.
  • See the UniFi Network Server Help Center article for more information on self-hosting a server.
  • UniFi Network Application 7.5 and newer requires MongoDB 3.6 (up to 4.4) and Java 17.

UniFi Network Native Application for UniFi OS

A specific application version that is only compatible with the UDM and UDR (running UniFi OS 3.1.6 or newer).

  • The UniFi OS update uses the application version that is required for your console.
  • The manual update process via SSH requires you to use the compatible package. Incompatible packages will be rejected on installation.
  • Older UniFi OS versions (before UniFi OS 3.1.6) on the UDM and UDR still use regular UniFi Network Application for UniFi OS.

Checksums

MD5:
b6a4fc86282e114c3a683ee9b43b4fde *UniFi-installer.exe
93413c6edc8d2bc44034b5086fa06fd7 *UniFi-Network-Server.dmg
6f10183dc78bf6d36290309cee8b6714 *
f8e3a81f533d5bedb110afc61695846f *unifi_sysvinit_all.deb
f6303e22d7c66102558db1dfeba678a7 *unifi-uos_sysvinit.deb
b86b7b88ab650bf1de1a337f2f65d712 *unifi-native_sysvinit.deb
601df32736f41e40a80a3e472450a3e1 *unifi_sh_api

SHA256:
SHA256(UniFi-installer.exe)= 193e309725a24a9dc79ac8115ad6ec561e466d0f871bc10c1d48a1c1631e2cfd
SHA256(UniFi-Network-Server.dmg)= eb6160c6763f884fbb73df03ecdbc67381ad3cd06f037b227c399a7b33a29c0f
SHA256( b409eb13d666d3afbf6f299650f0ee929a45da0ce4206ffe804a72d097f19f36
SHA256(unifi_sysvinit_all.deb)= 4221d7a0f8ce66c58a4f71b70ba6f32e16310429d3fe8165bf0f47bbdb6401a6
SHA256(unifi-uos_sysvinit.deb)= fafdfa57fc5b324e8fc0959b4127e3aafa10f1e4cfdf34c91af6a366033c1937
SHA256(unifi-native_sysvinit.deb)= 3bfd0e985d099fe9bc99578b82548269cf5d65f77e354c714b6b49194e5cd368
SHA256(unifi_sh_api)= 1791685039ea795970bcc7a61eec854058e3e6fc13c52770e31e20f3beb622eb

Download links

UniFi Network Application for Windows

UniFi Network Application for macOS

UniFi Network Application for Debian/Ubuntu

UniFi Network Application for UniFi OS

UniFi Network Native Application for UniFi OS

UniFi Network Application for unsupported Unix/Linux distros *** DIY / Completely unsupported ***

unifi_sh_api (shell library)


Critical Unauthenticated Remote Code Execution Found in Backup Migration Plugin

Alex Thomas
December 11, 2023

Wordfence just launched its bug bounty program. Through December 20th 2023, all researchers will earn 6.25x our normal bounty rates when Wordfence handles responsible disclosure for our Holiday Bug Extravaganza! The researcher who reported this vulnerability was awarded $2,751.00! Register as a researcher and submit your vulnerabilities today! 🎁

On November 8th, 2023, Wordfence launched a Bug Bounty Program to help support our mission in securing the web. In only a month’s time, we have had over 270 vulnerability researchers register and submit almost 130 vulnerabilities!

On December 5th, 2023, shortly after the launch of our Holiday Bug Extravaganza, we received a submission for a PHP Code Injection vulnerability in Backup Migration, a WordPress plugin with over 90,000 active installations. This vulnerability makes it possible for unauthenticated threat actors to inject and execute arbitrary PHP code on WordPress sites that use this plugin.

We quickly released a firewall rule to protect Wordfence Premium, Wordfence Care, and Wordfence Response customers on December 6, 2023. Sites still running the free version of Wordfence will receive the same protection 30 days later, on January 5, 2024.

We contacted the BackupBliss team, makers of the Backup Migration plugin, on the same day we released our firewall rule. After providing full disclosure details, the team released a patch just hours later. Kudos to the BackupBliss team for an incredibly swift response and patch.

We urge users to update their sites with the latest patched version of Backup Migration, which is version 1.3.8 at the time of this writing, immediately.

Vulnerability Summary from Wordfence Intelligence

Description: Backup Migration <= 1.3.7 backup-backup Unauthenticated Remote Code Execution
Affected Plugin: Backup Migration
Plugin Slug: backup-backup
Affected Versions: <= 1.3.7
CVE ID: CVE-2023-6553
Pending CVSS Score: 9.8 (Critical)
Researcher/s: Nex Team
Fully Patched Version: 1.3.8
Bounty Award: $2,751.00

The Backup Migration plugin for WordPress is vulnerable to Remote Code Execution in all versions up to, and including, 1.3.7 via the /includes/backup-heart.php file. This is due to an attacker being able to control the values passed to an include, and subsequently leverage that to achieve remote code execution. This makes it possible for unauthenticated threat actors to easily execute code on the server.

Technical Analysis

Line 118 within the /includes/backup-heart.php file used by the Backup Migration plugin attempts to include bypasser.php from the BMI_INCLUDES directory. The BMI_INCLUDES directory is defined by concatenating BMI_ROOT_DIR with the includes string on line 64. However, note that BMI_ROOT_DIR is defined via the content-dir HTTP header on line 62.

This means that BMI_ROOT_DIR is user-controllable. By submitting a specially crafted request, threat actors can leverage this issue to include arbitrary, malicious PHP code and execute arbitrary commands on the underlying server in the security context of the WordPress instance.
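The advisory does not reproduce the plugin's exact code, but the pattern it describes boils down to something like the following sketch (bmi_include_path() is a hypothetical helper used here purely for illustration):

```php
<?php
// Simplified sketch of the flaw: the root of the include path is derived
// from the client-supplied "content-dir" HTTP header, so the attacker
// chooses which file ends up in the subsequent include.
function bmi_include_path( array $headers ) {
    // Falls back to the plugin directory when the header is absent.
    $root = isset( $headers['content-dir'] ) ? $headers['content-dir'] : __DIR__;
    return $root . '/includes/bypasser.php';
}

// An attacker-supplied header steers the include anywhere the server can
// read, e.g. an uploaded file or (with allow_url_include) a remote URL.
echo bmi_include_path( array( 'content-dir' => '/tmp/attacker-upload' ) );
```

The fix for this class of bug is never to build include paths from request data; the path should be a constant derived from the plugin's own location on disk.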

Disclosure Timeline

December 5, 2023 – We receive the submission of the PHP Code Injection vulnerability in Backup Migration via the Wordfence Bug Bounty Program.
December 6, 2023 – We validate the report and confirm the proof-of-concept exploit.
December 6, 2023 – We initiate contact with the plugin developer and send over the full disclosure details. The vendor acknowledges the report and begins working on a fix. A fully patched version of the plugin, 1.3.8, is released.


In this blog post, we detailed a critical PHP Code Injection vulnerability within the Backup Migration plugin affecting versions 1.3.7 and earlier. This vulnerability allows unauthenticated threat actors to inject arbitrary PHP code, resulting in a full site compromise. The vulnerability has been fully addressed in version 1.3.8 of the plugin.

We urge WordPress users to verify that their sites are updated to the latest patched version of Backup Migration.

Wordfence users running Wordfence Premium, Wordfence Care, and Wordfence Response have been protected against this vulnerability as of December 6, 2023. Users still using the free version of Wordfence will receive the same protection on January 5, 2024.

If you know someone who uses this plugin on their site, we recommend sharing this advisory with them to ensure their site remains secure, as this vulnerability poses a significant risk.



The Ultimate Guide to Password Best Practices: Guarding Your Digital Identity

Dirk Schrader
Published: November 14, 2023
Updated: November 24, 2023

In the wake of escalating cyber-attacks and data breaches, the ubiquitous advice of “don’t share your password” is no longer enough. Passwords remain the primary keys to our most important digital assets, so following password security best practices is more critical than ever. Whether you’re securing email, networks, or individual user accounts, following password best practices can help protect your sensitive information from cyber threats.

Read this guide to explore password best practices that should be implemented in every organization — and learn how to protect vulnerable information while adhering to better security strategies.

The Secrets of Strong Passwords

A strong password is your first line of defense when it comes to protecting your accounts and networks. Implement these standard password creation best practices when thinking about a new password:

  • Complexity: Ensure your passwords contain a mix of uppercase and lowercase letters, numbers, and special characters. It should be noted that composition rules, such as lowercase, symbols, etc. are no longer recommended by NIST — so use at your own discretion.
  • Length: Longer passwords are generally stronger — and usually, length trumps complexity. Aim for at least 12 characters.
  • Unpredictability: Avoid using common phrases or patterns. Avoid using easily guessable information like birthdays or names. Instead, create unique strings that are difficult for hackers to guess.


Combining these factors makes passwords harder to guess. For instance, if a password is 8 characters long and includes uppercase letters, lowercase letters, numbers and special characters, the total possible combinations would be (26 + 26 + 10 + 30)^8. This astronomical number of possibilities makes it exceedingly difficult for an attacker to guess the password.
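That figure can be checked directly (the 30 special characters is the same rough assumption as in the paragraph above):

```php
<?php
// Possible combinations for an 8-character password drawn from
// uppercase + lowercase + digits + ~30 special characters.
$alphabet_size = 26 + 26 + 10 + 30; // = 92
$combinations  = pow( $alphabet_size, 8 );

// Roughly 5.13e+15 (about 5 quadrillion) candidates to brute-force.
printf( "%.2e possible 8-character passwords\n", $combinations );
```

Each extra character multiplies that count by another 92, which is why length matters so much.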

Of course, given NIST’s updated guidance on passwords, the best approach to effective password security is using a password manager — this solution will not only help create and store your passwords, but it will automatically reject common, easy-to-guess passwords (those included in password dumps). Password managers greatly increase security against the following attack types.
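The generation side of such a tool is simple to sketch: draw characters uniformly from a large alphabet using a cryptographically secure source (PHP's random_int()). This is an illustrative sketch, not any particular product's implementation:

```php
<?php
// Minimal sketch of a password generator, as a password manager might use.
function generate_password( $length = 16 ) {
    $alphabet = 'abcdefghijklmnopqrstuvwxyz'
              . 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
              . '0123456789'
              . '!@#$%^&*()-_=+';
    $max      = strlen( $alphabet ) - 1;
    $password = '';
    for ( $i = 0; $i < $length; $i++ ) {
        // random_int() is a CSPRNG, unlike rand()/mt_rand().
        $password .= $alphabet[ random_int( 0, $max ) ];
    }
    return $password;
}

echo generate_password( 20 );
```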

Password-Guessing Attacks

Understanding the techniques that adversaries use to guess user passwords is essential for password security. Here are some of the key attacks to know about:

Brute-Force Attack

In a brute-force attack, an attacker systematically tries every possible combination of characters until the correct password is found. This method is time-consuming but can be effective if the password is weak.

Strong passwords help thwart brute force attacks because they increase the number of possible combinations an attacker must try, making it unlikely they can guess the password within a reasonable timeframe.

Dictionary Attack

A dictionary attack is a type of brute-force attack in which an adversary uses a list of common words, phrases and commonly used passwords to try to gain access.

Unique passwords are essential to thwarting dictionary attacks because attackers rely on common words and phrases. Using a password that isn’t a dictionary word or a known pattern significantly reduces the likelihood of being guessed. For example, the string “Xc78dW34aa12!” is not in the dictionary or on the list of commonly used passwords, making it much more secure than something generic like “password.”

Dictionary Attack with Character Variations

In some dictionary attacks, adversaries use standard words but also try common character substitutions, such as replacing ‘a’ with ‘@’ or ‘e’ with ‘3’. For example, in addition to trying to log on using the word “password”, they might also try the variant “p@ssw0rd”.

Choosing complex and unpredictable passwords is necessary to thwart these attacks. By using unique combinations and avoiding easily guessable patterns, you make it challenging for attackers to guess your password.
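To see why such substitutions offer little protection, here is a sketch of how an attacker mechanically expands a wordlist with common replacements (leet_variants() is an illustrative helper, not a real cracking tool):

```php
<?php
// Expand one dictionary word into its common "leetspeak" variants.
function leet_variants( $word ) {
    $subs     = array( 'a' => '@', 'e' => '3', 'o' => '0', 's' => '$' );
    $variants = array( $word );
    foreach ( $subs as $from => $to ) {
        // For each substitution, also derive it from every variant so far.
        foreach ( $variants as $v ) {
            if ( strpos( $v, $from ) !== false ) {
                $variants[] = str_replace( $from, $to, $v );
            }
        }
    }
    return array_unique( $variants );
}

// "password" yields only a handful of variants ("p@ssword", "passw0rd",
// "p@ssw0rd", "pa$$w0rd", ...), all of which an attacker tries in seconds.
print_r( leet_variants( 'password' ) );
```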

How Password Managers Enhance Security

Password managers are indispensable for securely storing and organizing your passwords. These tools offer several key benefits:

  • Security: Password managers store passwords and enter them for you, eliminating the need for users to remember them all. All users need to remember is the master password for their password manager tool. Therefore, users can use long, complex passwords as recommended by best practices without worrying about forgetting their passwords or resorting to insecure practices like writing passwords down or reusing the same password for multiple sites or applications.
  • Password generation: Password managers can generate a strong and unique password for user accounts, eliminating the need for individuals to come up with them.
  • Encryption: Password managers encrypt password vaults, ensuring the safety of data — even if it is compromised.
  • Convenience: Password managers enable users to easily access passwords across multiple devices.

When selecting a password manager, it’s important to consider your organization’s specific needs, such as support for the platforms you use, price, ease of use and vendor breach history. Conduct research and read reviews to identify the one that best aligns with your organization’s requirements. Some noteworthy options include Netwrix Password Secure, LastPass, Dashlane, 1Password and Bitwarden.

How Multifactor Authentication (MFA) Adds an Extra Layer of Security

Multifactor authentication strengthens security by requiring two or more forms of verification before granting access. Specifically, you need to provide at least two of the following authentication factors:

  • Something you know: The classic example is your password.
  • Something you have: Usually this is a physical device like a smartphone or security token.
  • Something you are: This is biometric data like a fingerprint or facial recognition.

MFA renders a stolen password worthless, so implement it wherever possible.

Password Expiration Management

Password expiration policies play a crucial role in maintaining strong password security, and using a password manager that creates strong passwords also influences how often passwords need to change. If you do not use a password manager yet, implement a strategy to check all passwords within your organization: with the rise in data breaches, the password lists used in brute-force attacks (such as the well-known rockyou.txt and its variations) are constantly growing, and online services can check whether a certain password has already been exposed. Here’s what users should know about password security best practices related to password expiration:

  • Follow policy guidelines: Adhere to your organization’s password expiration policy. This includes changing your password when prompted and selecting a new, strong password that meets the policy’s requirements.
  • Set reminders: If your organization doesn’t enforce password expiration via notifications, set your own reminders to change your password when it’s due. Regularly check your email or system notifications for prompts.
  • Avoid obvious patterns: When changing your password, refrain from using variations of the previous one or predictable patterns like “Password1,” “Password2” and so on.
  • Report suspicious activity: If you notice any suspicious account activity or unauthorized password change requests, report them immediately to your organization’s IT support service or helpdesk.
  • Be cautious with password reset emails: Best practice for good password security means being aware of scams. If you receive an unexpected email prompting you to reset your password, verify its authenticity. Phishing emails often impersonate legitimate organizations to steal your login credentials.

Password Security and Compliance

Compliance standards require password security and password management best practices as a means to safeguard data, maintain privacy and prevent unauthorized access. Here are a few of the laws that require password security:

  • HIPAA (Health Insurance Portability and Accountability Act): HIPAA mandates that healthcare organizations implement safeguards to protect electronic protected health information (ePHI), which includes secure password practices.
  • PCI DSS (Payment Card Industry Data Security Standard): PCI DSS requires organizations that handle payment card data on their website to implement strong access controls, including password security, to protect cardholder data.
  • GDPR (General Data Protection Regulation): GDPR requires organizations that store or process the data of EU residents to implement appropriate security measures to protect personal data. Password security is a fundamental aspect of data protection under GDPR.
  • FERPA (Family Educational Rights and Privacy Act): FERPA governs the privacy of student education records. It includes requirements for securing access to these records, which involves password security.

Organizations subject to these compliance standards need to implement robust password policies and password security best practices. Failure to do so can result in steep fines and other penalties.

There are also voluntary frameworks that help organizations establish strong password policies. Two of the most well known are the following:

  • NIST Cybersecurity Framework: The National Institute of Standards and Technology (NIST) provides guidelines and recommendations, including password best practices, to enhance cybersecurity.
  • ISO 27001: ISO 27001 is an international standard for information security management systems (ISMSs). It includes requirements related to password management as part of its broader security framework.

Password Best Practices in Action

Now, let’s put these password security best practices into action with an example:

Suppose your name is John Doe and your birthday is December 10, 1985. Instead of using “JohnDoe121085” as your password (which is easily guessable), follow these good password practices:

  • Create a long, unique (and unguessable) password, such as: “M3an85DJ121!”
  • Store it in a trusted password manager.
  • Enable multi-factor authentication whenever available.

10 Password Best Practices

If you are looking to strengthen your security, follow these password best practices:

  • Remove hints or knowledge-based authentication: NIST recommends not using knowledge-based authentication (KBA), such as questions like “What town were you born in?” but instead, using something more secure, like two-factor authentication.
  • Encrypt passwords: Protect passwords with encryption both when they are stored and when they are transmitted over networks. This makes them useless to any hacker who manages to steal them.
  • Avoid clear text and reversible forms: Users and applications should never store passwords in clear text or any form that could easily be transformed into clear text. Ensure your password management routine does not use clear text (like in an XLS file).
  • Choose unique passwords for different accounts: Don’t use the same, or even variations, of the same passwords for different accounts. Try to come up with unique passwords for different accounts.
  • Use a password manager: This can help select new passwords that meet security requirements, send reminders of upcoming password expiration, and help update passwords through a user-friendly interface.
  • Enforce strong password policies: Implement and enforce strong password policies that include minimum length and complexity requirements, along with a password history rule to prevent the reuse of previous passwords.
  • Update passwords when needed: Check your passwords regularly and update them when a check indicates exposure, especially after data breaches, to minimize the risk of unauthorized access.
  • Monitor for suspicious activity: Continuously monitor your accounts for suspicious activity, including multiple failed login attempts, and implement account lockouts and alerts to mitigate threats.
  • Educate users: Conduct or partake in regular security awareness training to learn about password best practices, phishing threats, and the importance of maintaining strong, unique passwords for each account.
  • Implement password expiration policies: Enforce password expiration policies that require password changes at defined circumstances to enhance security.
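As an illustration of the “encrypt passwords” and “avoid clear text” points above, PHP's built-in password API stores only a salted, one-way hash, so even a stolen database does not reveal the original passwords:

```php
<?php
// Store a salted hash instead of the password itself.
// PASSWORD_DEFAULT currently selects bcrypt and may change in future PHP
// releases; the hash string records the algorithm and salt used.
$hash = password_hash( 'CorrectHorseBatteryStaple!', PASSWORD_DEFAULT );

// Verification re-hashes the candidate with the stored salt and compares.
var_dump( password_verify( 'CorrectHorseBatteryStaple!', $hash ) ); // bool(true)
var_dump( password_verify( 'wrong-guess', $hash ) );                // bool(false)
```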

How Netwrix Can Help

Adhering to password best practices is vital to safeguarding sensitive information and preventing unauthorized access.

Netwrix Password Secure provides advanced capabilities for monitoring password policies, detecting and responding to suspicious activity and ensuring compliance with industry regulations. With features such as real-time alerts, comprehensive reporting and a user-friendly interface, it empowers organizations to proactively identify and address password-related risks, enforce strong password policies, and maintain strong security across their IT environment.


In a world where cyber threats are constantly evolving, adhering to password management best practices is essential to safeguard your digital presence. First and foremost, create a strong and unique password for each system or application — remember that using a password manager makes it much easier to adhere to this critical best practice. In addition, implement multifactor authentication whenever possible to thwart any attacker who manages to steal your password. By following the guidelines, you can enjoy a safer online experience and protect your valuable digital assets.

Dirk Schrader

Dirk Schrader is a Resident CISO (EMEA) and VP of Security Research at Netwrix. A 25-year veteran in IT security with certifications as CISSP (ISC²) and CISM (ISACA), he works to advance cyber resilience as a modern approach to tackling cyber threats. Dirk has worked on cybersecurity projects around the globe, starting in technical and support roles at the beginning of his career and then moving into sales, marketing and product management positions at both large multinational corporations and small startups. He has published numerous articles about the need to address change and vulnerability management to achieve cyber resilience.


PSA: Critical POP Chain Allowing Remote Code Execution Patched in WordPress 6.4.2

WordPress 6.4.2 was released today, on December 6, 2023. It includes a patch for a POP chain introduced in version 6.4 that, combined with a separate Object Injection vulnerability, could result in a Critical-Severity vulnerability allowing attackers to execute arbitrary PHP code on the site.

We urge all WordPress users to update to 6.4.2 immediately, as this issue could allow full site takeover if another vulnerability is present.

Technical Analysis

We’ve written about Object Injection vulnerabilities in the past, and the primary reason most Object Injection vulnerabilities are difficult to exploit is the lack of useful POP chains.

The problem here resides in the WP_HTML_Token class, which was introduced in WordPress 6.4 and is used to improve HTML parsing in the block editor. It includes a __destruct magic method that is automatically executed after PHP has processed the request. This __destruct method uses call_user_func to execute the function passed in through the on_destroy property, accepting the bookmark_name property as an argument:

public function __destruct() {
    if ( is_callable( $this->on_destroy ) ) {
        call_user_func( $this->on_destroy, $this->bookmark_name );
    }
}

Since an attacker able to exploit an Object Injection vulnerability would have full control over the on_destroy and bookmark_name properties, they can use this to execute arbitrary code on the site to easily gain full control.
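To make the mechanics concrete, here is a minimal, standalone PHP sketch. The GadgetToken class is hypothetical, not WordPress code; it merely mirrors the shape of WP_HTML_Token's __destruct to show how unserializing attacker-controlled data ends in an arbitrary call_user_func:

```php
<?php
// Standalone sketch of the gadget mechanics; GadgetToken is a
// hypothetical class mirroring the shape of WP_HTML_Token's __destruct.
class GadgetToken {
    public $on_destroy;
    public $bookmark_name;

    public function __destruct() {
        // Whatever callable ended up in on_destroy is invoked with
        // bookmark_name as its argument when the object is destroyed.
        if ( is_callable( $this->on_destroy ) ) {
            call_user_func( $this->on_destroy, $this->bookmark_name );
        }
    }
}

// A serialized payload an attacker could feed into a vulnerable
// unserialize() call elsewhere in the codebase. Here on_destroy is the
// harmless 'strlen', but it could just as easily name a dangerous
// function, with bookmark_name as its argument.
$payload = 'O:11:"GadgetToken":2:{s:10:"on_destroy";s:6:"strlen";s:13:"bookmark_name";s:5:"hello";}';
$obj = unserialize( $payload );
// When $obj is destroyed at the end of the request, __destruct runs
// call_user_func( 'strlen', 'hello' ).
```

This is why the POP chain matters: the Object Injection flaw only has to exist somewhere that calls unserialize() on attacker input; the gadget class supplies the dangerous behavior.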

While WordPress Core currently does not have any known Object Injection vulnerabilities, they are rampant in plugins and themes. The presence of an easy-to-exploit POP chain in WordPress Core substantially increases the danger level of any Object Injection vulnerability.

The patch is very simple:

public function __wakeup() {
    throw new \LogicException( __CLASS__ . ' should never be unserialized' );
}

The newly added __wakeup method ensures that any serialized object with the WP_HTML_Token class throws an error as soon as it is unserialized, preventing the __destruct function from executing.
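The guard works because __wakeup fires during unserialize() itself, before a deserialized object with attacker-chosen properties can ever reach __destruct. A minimal sketch of the pattern, using a hypothetical GuardedToken class rather than the actual WordPress one:

```php
<?php
// Sketch of the hardening pattern applied in the 6.4.2 patch, using a
// hypothetical GuardedToken class. __wakeup runs during unserialize(),
// so the exception aborts deserialization before __destruct can matter.
class GuardedToken {
    public function __wakeup() {
        throw new \LogicException( __CLASS__ . ' should never be unserialized' );
    }
}

$blocked = false;
try {
    unserialize( 'O:12:"GuardedToken":0:{}' );
} catch ( \LogicException $e ) {
    // Deserialization was rejected outright.
    $blocked = true;
}
```

Constructing the object normally (new GuardedToken()) is unaffected; only the unserialization path is closed off.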

We have released a firewall rule to protect Wordfence Premium, Wordfence Care, and Wordfence Response users. Wordfence free users will receive the same protection in 30 days, on January 5, 2024.


In today’s PSA, we analyzed a patch for a potentially critical issue in WordPress 6.4-6.4.1 that could allow attackers to take advantage of any Object Injection vulnerability present in any plugin to execute code. While most sites should automatically update to WordPress 6.4.2, we strongly recommend manually checking your site to ensure that it is updated.

We recommend sharing this advisory with everyone you know who uses WordPress, as this is a potentially critical issue that could lead to complete site takeover.

Did you know that Wordfence has a Bug Bounty Program? We’ve recently increased our bounties by 6.25x until December 20th, 2023, with our bounties for the most critical vulnerabilities reaching $10,000 USD! If you’re an aspiring or current vulnerability researcher, click here to sign up.

