Enhancing RFC-compliance for message header from addresses

06/02/2024

Hornetsecurity is implementing an update to enhance email security by enforcing checks on the “Header-From” value in emails, as per RFC 5322 standards.
This initiative is driven by several key reasons:

  1. Preventing Email Delivery Issues: Historically, not enforcing the validity of the originator email address has led to emails being accepted by our system but ultimately rejected by the final destination, especially with most customers now using cloud email service providers that enforce stricter validation.
  2. Enhanced Protection Against Spoofed Emails: By strictly validating the “Header-From” value, we aim to significantly reduce the risk of email spoofing.
  3. Enhancing Email Authentication for DKIM/DMARC Alignment: By enforcing RFC 5322 compliance in the “Header-From” field, we can ensure better alignment with DKIM and DMARC standards, thereby significantly improving the security and authenticity of email communications.

The cause of malformed “From” headers often stems from incorrect email server configurations by the sender or from bugs in scripts or other applications. Our new protocol aims to rectify these issues, ensuring that all emails passing through our system are fully compliant with established standards, thus improving the overall security and reliability of email communications.

Implementation Timeline

  • Stage 1 (Starting 4 March 2024): 1-5% of invalid emails will be rejected.
  • Stage 2 (Second week): 30% rejection rate.
  • Stage 3 (Third week): 60% rejection rate.
  • Final Stage (By the end of the fourth week): 100% rejection rate.

Impact Assessment

Extensive testing over the past six months indicates that the impact on legitimate email delivery is expected to be minimal. However, email administrators should be prepared for potential queries from users experiencing email rejections.

Handling Rejections

When an email is rejected due to a malformed “Header-From”, the sender will receive a bounce-back message with the error “510 5.1.7 malformed Header-From according to RFC 5322”. This message indicates that the email did not meet the necessary header standards.

Identifying Affected Emails

Email administrators can identify affected emails in the Hornetsecurity Control Panel (https://cp.hornetsecurity.com) using the following steps:

  1. Navigate to ELT in the Hornetsecurity Control Panel.
  2. Select your tenant in the top right field.
  3. Choose a date range for your search. A shorter range will yield quicker results.
  4. Click in the “Search” text box, select the “Msg ID” parameter, and type in “hfromfailed” (exact string).
  5. Press ENTER to perform the search.

When email administrators identify emails affected by the “Header-From” checks in Email Live Tracking (ELT), they should promptly verify that the sending email application or server is configured to comply with RFC 5322 standards. This will help maintain email flow integrity.


Defining Exceptions

In implementing the new “Header-From” checks, Hornetsecurity recognizes the need for flexibility in certain cases. Therefore, we have provisioned for the definition of exceptions to these checks.

This section details how to set up these exceptions and the timeline for their deprecation:

Configuring Exceptions

  1. Accessing the Control Panel: Log in to the Hornetsecurity Control Panel at https://cp.hornetsecurity.com.
  2. Navigating to the Compliance Filter: Open the Compliance Filter section in the Control Panel.
  3. Creating Exception Rules: Within the Compliance Filter, you can create rules that define exceptions to the “Header-From” checks. These rules should be based on the envelope sender address.
  4. Applying the Exceptions: Once defined, these exceptions will allow certain emails to bypass the strict “Header-From” checks.

Timeline for Deprecation of Exceptions applied to the new Header-From checks

  • Initial Implementation: The ability to define exceptions is available as part of the initial rollout of the “Header-From” checks.
  • Deprecation Date: These exception provisions are set to be deprecated by the end of June 2024.

The provision for exceptions is intended as a temporary measure to facilitate a smoother transition to the new protocol. By June 2024, all email senders will have had sufficient time to align their email systems with RFC 5322 standards. Deprecating the exceptions is a step towards ensuring full compliance and maximizing the security benefits of the “Header-From” checks.

Conclusion

The enhancement of our RFC-compliance is a significant step toward securing email communications. Adherence to these standards will collectively reduce risks associated with email. For further assistance or clarification, please reach out to our support team at support@hornetsecurity.com.


Invalid “Header From” Examples:

  • From: <>
    Blank addresses are problematic as they cause issues in scenarios requiring a valid email address, such as allow and deny lists.
  • From: John Doe john.doe@hornetsecurity.com
    Non-compliant with RFC standards. The email address must be enclosed in angle brackets (< and >) when accompanied by a display name.
  • From: "John Doe" <john.doe@hornetsecurity.com> (Peter's cousin)
    While technically RFC-compliant, such formats are often rejected by M365 unless explicit exceptions are configured. We do accept certain email addresses with comments.
  • From: John, Doe <john.doe@hornetsecurity.com>
    Non-compliant with RFC standards. A display name containing a comma must be enclosed in double quotes.
  • From: "John Doe <john.doe@hornetsecurity.com>"
    Non-compliant with RFC standards. The entire “From” value is incorrectly enclosed in double quotation marks, which is not allowed.
  • From: "John Doe <john.doe@hornetsecurity.com>" john.doe@hornetsecurity.com
    Non-compliant with RFC standards. The display name is present, but the email address is not correctly enclosed in angle brackets.
  • From: "John Doe"<john.doe@hornetsecurity.com>
    Non-compliant with RFC standards due to the absence of white-space between the display name and the email address.
  • From: "Nested Brackets" <<info@hornetsecurity.com>
    Nested angle brackets are not allowed in the “addr-spec” part of the email address.
  • From: Peter Martin <e14011>
    Non-compliant with RFC standards. The domain part of the email address (“addr-spec”) is missing.
  • From: "News" <news.@hornetsecurity.com>
    Non-compliant with RFC standards. The local part of the email address must not end with a dot.
  • Missing “From” header altogether
    A “From” header is mandatory in emails. The absence of this header is a clear violation of RFC standards.

Valid “Header From” Examples:

  • From: john.doe@hornetsecurity.com
    RFC-compliant.
  • From: <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: "Doe, John" <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: "John Doe" <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: < john.doe@hornetsecurity.com >
    RFC-compliant, but not recommended because of the spaces between the email address and the angle brackets.
  • From: John Doe <john.doe@hornetsecurity.com>
    Acceptable, although it is recommended to enclose the display name in double quotes if it contains any white-space.
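
To pre-check “From” values programmatically, the following PowerShell sketch may help. It is our own illustration (not a Hornetsecurity tool) and relies on the .NET MailAddress parser, which is looser than strict RFC 5322 in places, so treat it as a first-pass check against the tables above rather than a definitive validator.

# Rough Header-From sanity check using the .NET MailAddress parser.
# Note: this parser accepts some forms that strict RFC 5322 rejects, and vice versa.
$samples = @(
    'john.doe@hornetsecurity.com',
    '"John Doe" <john.doe@hornetsecurity.com>',
    '<>'
)
foreach ($from in $samples) {
    try {
        $parsed = [System.Net.Mail.MailAddress]::new($from)
        "PARSED  : $from (address: $($parsed.Address))"
    }
    catch {
        "REJECTED: $from"
    }
}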

Source :
https://support.hornetsecurity.com/hc/en-us/articles/22036971529617-Enhancing-RFC-compliance-for-message-header-from-addresses

Configuring DFSR to a Static Port – The rest of the story

By Ned Pyle
Published Apr 04 2019 02:39 PM

First published on TechNet on Jul 16, 2009
Ned-san here again. Customers frequently call us about configuring their servers to listen over specific network ports. This is usually to satisfy firewall rules – more on this later. A port in TCP/IP is simply an endpoint to communication between computers. Some are reserved, some are well-known, and the rest are simply available to any application to use. Today I will explain the network communication done through all facets of DFSR operation and administration. Even if you don’t care about firewalls and ports, this should shed some light on DFSR networking in general, and may save you skull sweat someday.

DFSR and RPC

Plenty of Windows components support hard-coding to exclusive ports, and at a glance, DFSR is no exception. By running the DFSRDIAG STATICRPC command against the DFSR servers you force them to listen on whatever port you like for file replication:

[screenshot]
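
For reference, the syntax is along these lines (55555 is just an example port; the cmdlet shown is the later equivalent from the DFSR PowerShell module on Windows Server 2012 R2 and newer, and the DFSR service must be restarted for a port change to take effect):

# Pin the DFSR replication port on a member server (example values).
dfsrdiag StaticRPC /Port:55555 /Member:SERVER-02

# Later PowerShell equivalent (DFSR module, Windows Server 2012 R2+);
# restart the DFSR service on that server afterward.
Set-DfsrServiceConfiguration -ComputerName "SERVER-02" -RPCPort 55555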

Many Windows RPC applications use the Endpoint Mapper (EPM) component for these types of client-server operations. It’s not a requirement though; an RPC application is free to declare its own port and only listen on that one, with a client that is hard-coded to contact that port only. When EPM is used, the server listens on a port drawn from the dynamic range: 1025-5000 in Windows Server 2003 and older, and 49152-65535 in Windows Vista and later. DFSR uses EPM.

Update 3/3/2011 (nice catch Walter)

As you have probably found, we later noticed a bug in DFSR on Win2008 and Win2008 R2 DCs (only – not member servers) where the service would always send-receive on port 5722. This article was done before that and doesn’t reflect it. Read more on this here:

http://support.microsoft.com/default.aspx?scid=kb;EN-US;832017

http://blogs.technet.com/b/askds/archive/2010/05/14/friday-mail-sack-it-s-about-to-get-re…
All of the below is accurate for non-DCs

By setting the port, you are telling EPM to always respond with the same port instead of one within the dynamic range. So when DFSR contacts the other server, it only needs to use two ports:

[screenshot]

So with a Netmon 3.3 capture, it will look something like this when the DFSR service starts up:

1. The local computer opens a dynamic client port and connects to EPM on the remote computer, asking for connectivity to DFSR.

[screenshot]

2. That remote computer responds with a port that the local computer can connect to for DFSR communication. Because I have statically assigned port 55555, the remote computer will always respond with this port.

[screenshot]

3. The local computer then opens a new client port and binds to that RPC port on the remote server, where the DFSR service is actually listening. At this point two DFSR servers can replicate files between each other.

[screenshot]
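
On current systems you can sanity-check this reachability without a capture. A small sketch (Test-NetConnection ships with Windows 8/Windows Server 2012 and later; the server name and port are the examples from this post):

# Confirm a partner can reach the Endpoint Mapper and the pinned DFSR port.
Test-NetConnection -ComputerName "SERVER-02" -Port 135     # EPM
Test-NetConnection -ComputerName "SERVER-02" -Port 55555   # static DFSR port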

The Rest of the Story

If it’s that easy, why the blog post? Because there’s much more to DFSR than just the RPC replication port. To start, your DFSR servers need to be able to contact DCs. To do that, they need name resolution. And they will need to use Kerberos. And the management tools will need DRS API connectivity to the DCs. There will also need to be SMB connectivity to create replicated folders and communicate with the Service Control Manager to manipulate DFSR. And all of the above also need the dynamic client ports available outbound through the firewall to allow that communication. So now that’s:

  • EPM port 135 (inbound on remote DFSR servers and DCs)
  • DFSR port (inbound on remote DFSR servers)
  • SMB port 445 (inbound on remote DFSR servers)
  • DNS port 53 (inbound on remote DNS servers)
  • LDAP port 389 (inbound on remote DCs)
  • Kerberos port 88 (inbound on remote DCs)
  • Ports 1025-5000 or 49152-65535 (outbound, Win2003 and Win2008 respectively – and inbound on remote DCs).
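
On current Windows Server versions, the inbound side of the first two items could be opened with rules along these lines. This is a sketch using the example port from this post, not a complete ruleset for every environment:

# Allow EPM and the pinned DFSR replication port inbound on a DFSR member.
New-NetFirewallRule -DisplayName "RPC Endpoint Mapper (DFSR)" `
    -Direction Inbound -Protocol TCP -LocalPort 135 -Action Allow
New-NetFirewallRule -DisplayName "DFSR static replication port" `
    -Direction Inbound -Protocol TCP -LocalPort 55555 -Action Allow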

Let’s see this in action. Here I gathered a Netmon 3.3 capture of configuring a new replication group:

  • Server-01 – IP 10.10.0.101 – DC/DNS
  • Server-02 – IP 10.10.0.102 – DFSR
  • Server-03 – IP 10.10.0.103 – DFSR
  • Server-04 – IP 10.10.0.104 – Computer running the DFSMGMT.MSC snap-in

1. First the snap-in gets name resolution for the DC from my management computer (local port 51562 to remote port 53):

[screenshot]

2. Then it contacts the DC – the EPM is bound (local port 49199 to remote port 135) and a dynamic port is negotiated so that the client knows which port to use when talking to the DC (port 49156).

[screenshot]

3. Having connected to the DC through RPC to DRS (a management API), it then returns information about the domain and other things needed by the snap-in.

[screenshot]

4. The snap-in then performs an LDAP query to the DC to locate the DFSR-GlobalSettings container in that domain so that it can read in any new Replication Groups (local port 49201 to remote port 389).

[screenshot]

5. The snap-in performs LDAP and DNS queries to get the names of the computers being selected for replication:

[screenshot]

6. The DFSR service must be verified (Is it installed? Is it running?). This requires a Kerberos CIFS (SMB) request to the DC as well as an SMB connection to the DFSR servers – this is actually a ‘named pipe’ operation over remote port 445, where RPC uses SMB as a transport:

[screenshots]

7. The Replicated Folders are created (or verified to exist) on the DFSR servers – I called mine ‘testrf’. This uses SMB again from the snap-in computer to the DFSR server, over remote port 445:

[screenshot]

8. The snap-in will write all the configuration data through LDAP over remote port 389 against the DC. This creates all the AD objects and attributes, creates the topology, writes to each DFSR computer object, etc. There are quite a few frames here so I will just highlight a bit of it:

[screenshot]

9. If you wait for AD replication to complete and the DFSR servers to poll for changes, you will see the DFSR servers request configuration info through LDAP, and then start working normally on their static RPC port 55555 – just like I showed at the beginning of this post above.

DCOM and WMI

All of the things I’ve discussed are guaranteed needs in order to use DFSR. For the most part you don’t have to have too many remote ports open on the DFSR server itself. However, if you want to use tools like DFSRDIAG.EXE and WMIC.EXE remotely against a DFSR server, or have a remote DFSR server generate ‘Diagnostic Health Reports’, there is more to do.

DFSR utilizes Windows Management Instrumentation as its ‘quasi-API’. When tools like DFS Management are run to generate health reports, or DFSRDIAG POLLAD is targeted against a remote server, you are actually using DCOM and WMI to tell the targeted server to perform actions on your behalf.

There is no mechanism to control which RPC port DCOM/WMI will listen on, as there is for DFSR and other services. At service startup, DCOM/WMI will pick the next available dynamic RPC port. This means, in theory, that you would have to open the entire range of dynamic ports for the target OS: 1025-5000 (Win2003) or 49152-65535 (Win2008).

For example, here I am running DFSRDIAG POLLAD /MEM:2008-02 to force that server to poll its DC for configuration changes. Note the listening port that I am talking to on the DFSR server (hint – it’s not 55555):

[screenshots]

And in my final example, here I am running the DFS Management snap-in and requesting a diagnostic health report. Note again how we use DCOM/WMI/RPC and do not connect directly to the DFSR service; again this requires that we have all those inbound dynamic ports open on the DFSR server:

[screenshot]

Wrap Up

So is it worth it to try and use a static replication port? Maybe. If you don’t plan on directly administering a DFSR server and just need it talking to its DC, its DNS server, and its replication partners, you can definitely keep the number of ports used quite low. But if you ever want to communicate directly with it as an administrator, you will need quite a few holes punched through your firewall.

That is, unless you are using IPSEC tunnels through your firewalls, like we recommend. 🙂

– Ned ‘Honto’ Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/configuring-dfsr-to-a-static-port-the-rest-of-the-story/ba-p/396746

What Is DFS Replication and How to Configure It?

Updated: May 23, 2023
By: NAKIVO Team

File shares are used in organizations to allow users to access and exchange files. If the number of file shares is large, it may be difficult to manage them because mapping many shared resources to each user’s computer takes time and effort. If the configuration of one file share changes, you need to update shared drive mappings for all users using this share. In this case, DFS can help you optimize the hierarchy of shared folders to streamline administration and the use of shared resources.

This blog post explains DFS configuration and how to set up DFS replication in Windows Server 2019.


What Is DFS and How It Works

A Distributed File System (DFS) is a logical organization that transparently groups existing file shares on multiple servers into a structured hierarchy. This hierarchy can be accessed using a single share on a DFS server.
A DFS file share can be replicated across multiple file servers in different locations to optimize server load and increase access speed to shared files. In this case, a user can access a file share on a server that is closest to them. DFS is intended to simplify access to shared files.

Using a DFS namespace server

DFS uses the Server Message Block (SMB) protocol, which is also known as the Common Internet File System (CIFS). Microsoft’s implementation of DFS doesn’t work with other file sharing protocols like NFS or HDFS. However, you can connect multiple SMB shares configured on NAS devices and Linux machines using Samba to your DFS server running on Windows Server. DFS consists of server and client components.

You can configure one DFS share that includes multiple file shares and connect users to this single file share using a unified namespace. When users connect to this file share using a single path, they see a tree structure of shared folders (as they are subfolders of the main share) and can access all needed file shares transparently. Underlying physical file servers hosting file shares are abstracted from the namespace used to access shares. DFS namespaces and DFS replication are the two main components used for DFS functioning.

What is a DFS namespace?

A DFS namespace is a virtual folder that contains links to shared folders stored on different file servers. DFS namespaces can be organized in different ways depending on business needs. They can be organized by geographical location, organization units, a combination of multiple parameters, etc. You can configure multiple namespaces on a DFS server. A DFS namespace can be standalone or domain-based.

DFS namespace and folder targets
  • A standalone DFS namespace stores configuration information and metadata locally on a root server in the system registry. The path to access the root namespace starts with the root server name. A standalone DFS namespace is located only on one server and is not fault-tolerant. If the root server is unavailable, the entire DFS namespace is unavailable. You can use this option if you don’t have an Active Directory domain configured (when using a Workgroup).
  • A domain-based DFS namespace stores its configuration in Active Directory. The path to access a root namespace starts with the domain name. You can store a domain-based DFS namespace on multiple servers to increase the namespace availability. This approach provides fault tolerance and load balancing across servers. Using domain-based DFS namespaces is recommended.

A namespace consists of the root, links (folders), and folder targets.

  • The namespace root is the starting point of a DFS namespace tree. Depending on the type, a namespace path can look like this:

\\ServerName\RootName (a standalone namespace)

\\DomainName\RootName (a domain-based namespace)

  • A namespace server is a physical server (or a VM) that hosts a DFS namespace. A namespace server can be a regular server with the DFS role installed or a domain controller.
  • A folder is a link in a DFS namespace that points to a target folder containing content for user access. There are also folders without targets, which are used for organizing the structure.
  • A folder target is a link to a shared file resource located on a particular file server and available via a UNC (Universal Naming Convention) path. A folder target is associated with a folder in a DFS namespace, for example, \\FS2\TestShare on the FS2 server. A folder target is what users need to access files.

A folder can point to a single folder target or to multiple folder targets (when the underlying folders are located on different servers and are synchronized/replicated with each other). For example, a user needs to access \\DFS-server01\TestShare\Doc, but depending on the user’s location, the user is redirected to the shared folder \\FS01\Doc or \\FS02\Doc.

The DFS tree structure includes the following components:

  • DFS root, which is a DFS server on which the DFS service is running
  • DFS links, which are links pointing to network shares used in DFS
  • DFS targets, which are real network shares to which DFS links point

What is DFS replication?

DFS replication is a feature used to duplicate existing data by replicating copies of that data to multiple locations. Physical file shares can be synchronized with each other at two or more locations.

An important feature of DFS replication is that the replication of a file starts only after that file has been closed. For this reason, DFS replication is not suitable for replicating databases, given that databases have files opened during the operation of a database management system. DFS replication supports multi-master replication technology, and any member of a replication group can change data that is then replicated.

A DFS replication group is a group of servers participating in the replication of one or multiple replicated folders. A replicated folder is synchronized between all members of the replication group.

DFS replication group

DFS replication uses the Remote Differential Compression (RDC) algorithm, which allows DFS to detect changes and copy only the changed blocks of files instead of copying all data. This approach saves time and reduces replication traffic over the network.

DFS replication is performed asynchronously. There can be a delay between writing changes to the source location and replicating those changes to the target location.

DFS Replication topologies

There are two main DFS replication topologies:

  • Hub and spoke. This topology requires at least three replication members: one acts as the hub and the other two act as spokes. This technique is useful if you have a central source originating data (the hub) and you need to replicate this data to multiple locations (the spokes).
  • Full mesh. Each member of a replication group replicates data to every other group member. Use this technique if you have 10 or fewer members in a replication group.

What are the requirements for DFS?

The main requirement is using Windows Server 2008 Datacenter or Enterprise edition, Windows Server 2012, or a newer Windows Server version. It is better to use Windows Server 2016 or Windows Server 2019 nowadays.

NTFS must be the file system used to store shared files on Windows Server hosts.

If you use domain-based namespaces, all servers of a DFS replication group must belong to one Active Directory forest.

How to Set Up DFS in Your Windows Environment

You need to prepare at least two servers. In this example, we use two machines running Windows Server 2019, one of which is an Active Directory domain controller:

  • Server01-dc.domain1.local is a domain controller.
  • Server02.domain1.local is a domain member.

This is because configuring DFS in a domain environment has advantages compared to Workgroup, as explained above. The domain name is domain1.local in our case. If you use a domain, don’t forget to configure Active Directory backup.

Enable the DFS roles

First of all, you need to enable the DFS roles in Windows Server 2019.

  1. Open Server Manager.
  2. Click Add Roles and Features in Server Manager.
  3. Select Role-based or feature-based installation in the Installation type screen of the Add Roles and Features wizard.
  4. In the Server Selection screen, make sure your current server (which is a domain controller in our case) is selected. Click Next at each step of the wizard to continue.
  5. Select server roles: select DFS Namespaces and DFS Replication, as shown in the screenshot below.
Setting up DFS in Windows Server 2019 – installing DFS roles
  6. In the Features screen, you can leave the settings as they are.
  7. Check your configuration in the confirmation screen and, if everything is correct, click Install.
  8. Wait until the installation process is finished and then close the window.

DFS Namespace Setup

Create at least one shared folder on any server that is a domain member. In this example, we create a shared folder on our domain controller. The folder name is shared01 (D:\DATA\shared01).

Creating a shared folder

  1. Right-click a folder and, in the context menu, hit Properties.
  2. On the Sharing tab of the folder properties window, click Share.
  3. Share the folder with Domain users and set permissions. We use Read/Write permissions in this example.
  4. Click Share to finish. Then you can close the network sharing options window.
Sharing a folder in Windows Server 2019 to set up DFS

Now the share is available at this address:

\\server01-dc\shared01

Creating a DFS namespace

Let’s create a DFS namespace to link shared folders in a namespace.

  • Press Win+R and run dfsmgmt.msc to open the DFS Management window. You can also run this command in the Windows command line (CMD).

As an alternative, you can click Start > Windows Administrative Tools > DFS Management.

  • In the DFS Management section, click New Namespace.
How to configure DFS namespaces
  • The New Namespace Wizard opens in a new window.
  1. Namespace Server. Enter a server name. If you are not sure that the name is correct, click Browse, enter a server name and click Check Names. In this example, we enter the name of our domain controller (server01-dc). Click Next at each step of the wizard to continue.
Adding a DFS namespace server
  2. Namespace Name and Settings. Enter a name for a namespace, for example, DFS-01. Click Edit Settings.
Entering a name for a DFS namespace

Pay attention to the local path of a shared folder. Change this path if needed. We use the default path in our example (C:\DFSRoots\DFS-01).

  3. Configure access permissions for network users: click Use custom permissions and hit Customize.
Configuring access permissions for a shared folder on a DFS namespace server
  4. We grant all permissions for domain users (Full Control). Click Add, select Domain Users, select the appropriate checkboxes, and hit OK to save settings.
Configuring permissions for a shared folder
  5. Namespace type. Select the type of namespace to create. We select Domain-based namespace and select the Enable Windows Server 2008 mode checkbox. Select this checkbox for better compatibility if the functional level of your domain is Windows Server 2008 or higher.

It is recommended that you use a Domain-based namespace due to advantages such as high DFS namespace availability by using multiple namespace servers and transferring namespaces to other servers.

Selecting a domain-based namespace for DFS configuration
  6. Review Settings. Review the settings and, if everything is correct, click Create.
Reviewing configuration to finish DFS namespace setup
  7. Confirmation. The window displayed on success is shown in the screenshot below. The namespace creation has finished. Click Close.
A DFS namespace has been created

Adding a new folder to a namespace

Now we need to add a new folder into the existing namespace. We are adding a folder on the same server, which is a domain controller, but this method is applicable for all servers within a domain.

  1. Open the DFS management window by running dfsmgmt.msc as we did before. Perform the following actions in the DFS management window.
  2. In the left pane, expand a namespace tree and select a namespace (\\domain1.local\DFS-01\ in our case).
  3. In the right pane (the Actions pane), click New Folder.
  4. In the New Folder window, enter a folder name, for example, Test-Folder. This folder will link to the shared folder created before. Click Add.
Adding a new folder into a DFS namespace
  5. Enter the path to the existing folder. We use \\server01-dc\shared01 in this example. You can click Browse and select a folder. Click OK to save the path to the folder target.
Adding a folder target

The folder target has been added.

  6. Click OK to save settings and close the New Folder window.
A folder target has been added

Now you can access the shared folder by entering the network address in the address bar of Windows Explorer:

\\server01-dc\dfs-01\Test-Folder

You should enter a path in the format:

\\DomainName\DFS-NameSpace\

Accessing a shared folder in Windows Explorer
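
As an alternative to the snap-in, the DFSN PowerShell module (Windows Server 2012 and later) can create the same namespace and folder. This sketch uses the names from this walkthrough and assumes the DFS-01 root share already exists on the namespace server:

# Create a domain-based namespace root (DomainV2 = Windows Server 2008 mode).
New-DfsnRoot -Path "\\domain1.local\DFS-01" `
    -TargetPath "\\server01-dc\DFS-01" -Type DomainV2

# Add a folder that points at the existing file share.
New-DfsnFolder -Path "\\domain1.local\DFS-01\Test-Folder" `
    -TargetPath "\\server01-dc\shared01"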

How to Configure DFS Replication

We need to configure the second server to replicate data. The name of the second server is Server02 and this server is added to the domain1.local domain in this example. Add your second server to a domain if you have not done this operation before.
Install the DFS roles, as we did for the first server. As an alternative method, you can use PowerShell instead of the Add Roles wizard. Run these two commands in PowerShell to install DFS replication and DFS namespace roles.

Install-WindowsFeature -Name "FS-DFS-Replication" -IncludeManagementTools

Install-WindowsFeature -Name "FS-DFS-Namespace" -IncludeManagementTools

These commands install the DFS Replication and DFS Namespaces roles on the second server.

How to set up DFS roles in PowerShell

Create a folder for replicated data on the second server, for example, D:\Replication.

We are going to use this folder to store data replicated from the folder created on the first server.

Share this folder (D:\Replication) on the second server and configure access permissions the same way as for the previous shared folder. In this example, we share the folder with Domain Users and grant Read/Write permissions.

Sharing a folder on the second server

The network path is \\server02\replication in this example after sharing this folder. To check the network path to the folder, you can right-click the folder name and open the Sharing tab.

Let’s go back to the domain controller (server01-dc) and open the DFS Management window.

In the left pane of the DFS Management window, expand the tree and select the folder created before (Test-Folder in this case).

Click Add Folder Target in the Actions pane located in the top right corner of the window.

The New Folder Target window appears. Enter the network path of the folder that was created on the second server before:

\\Server02\Replication

Click OK to save settings and close the window.

Adding a new folder target to configure Windows DFS replication

A notification message is displayed:

A replication group can be used to keep these folder targets synchronized. Do you want to create a replication group?

Click Yes.

A notification message is displayed when creating a DFS replication group

Wait until the configuration process is finished.

As a result, you should see the Replicate Folder Wizard window. Perform the next steps in the wizard window.

Check the replication group name and replicated folder name. Click Next to continue.

Entering a replication group name and replication folder name

Check folder paths in the Replication Eligibility screen.

Checking paths of shared folders

Select the primary member from the drop-down list. In this example, the primary member is Server01-dc. Data from the primary member is replicated to the other members of the replication group during initial replication.

Selecting a primary member when configuring DFS replication

Select the topology of connections for replication.

Full mesh is the recommended option when a DFS replication group has fewer than ten servers. We use Full mesh to replicate changes made on one server to the other servers.

The No Topology option can be used if you want to create a custom topology after finishing the wizard.

The Hub and spoke option is inactive (grayed out) because we use fewer than three servers.

Selecting a full mesh topology to configure DFS replication

Configure replication group schedule and bandwidth. There are two options:

  • Replicate continuously using the specified bandwidth. Replication is performed as soon as possible. You can allocate bandwidth. Continuous replication of data that changes extensively can consume a lot of network bandwidth. To avoid a negative impact on other processes using the network, you can limit bandwidth for DFS replication. Keep in mind that hard disk load can be high.
  • Replicate during the specified days and times. You can configure the schedule to perform DFS replication at the custom date and time. You can use this option if you don’t need to always have the last version of replicated data in target folders.

We select the first option in our example.

Setting up DFS replication group schedule

Review settings for your DFS replication group. If everything is correct, click Create.

Reviewing settings for a DFS replication group before finishing configuration

View the DFS replication configuration status on the Confirmation screen. You should see the Success status for all tasks as displayed on the screenshot below. Click Close to close the wizard window.

A DFS replication group has been created successfully

A notification message about the replication delay is displayed. Read the message and hit OK.

A notification message about DFS replication delay

DFS replication has been configured. Open the shared folder from which data must be replicated initially. Write a file to that network folder and check whether the new data is replicated to the second folder on the other server. Don’t forget that open files are not replicated until they are closed after saving changes to disk. In a few moments, you should see a replica of the file in the target folder.
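
The same replication setup can be scripted with the DFSR PowerShell module (Windows Server 2012 R2 and later). The following sketch uses the paths from this example; the group name here is arbitrary, and linking folder targets into the namespace is still done as shown above:

# Create the replication group, folder, members, and a two-way connection.
New-DfsReplicationGroup -GroupName "RG-Test-Folder"
New-DfsReplicatedFolder -GroupName "RG-Test-Folder" -FolderName "Test-Folder"
Add-DfsrMember -GroupName "RG-Test-Folder" -ComputerName "server01-dc","server02"
Add-DfsrConnection -GroupName "RG-Test-Folder" `
    -SourceComputerName "server01-dc" -DestinationComputerName "server02"

# Point each member at its local content path; server01-dc seeds the data.
Set-DfsrMembership -GroupName "RG-Test-Folder" -FolderName "Test-Folder" `
    -ComputerName "server01-dc" -ContentPath "D:\DATA\shared01" `
    -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "RG-Test-Folder" -FolderName "Test-Folder" `
    -ComputerName "server02" -ContentPath "D:\Replication" -Force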

Using filters for DFS Replication

Use file filters to select the file types you don’t want to replicate. Some applications can create temporary files and replicating them wastes network bandwidth, loads hard disk drives, consumes additional storage space in the target folder, and increases overall time to replicate data. You can exclude the appropriate file types from DFS replication by using filters.

To configure filters, perform the following steps in the DFS Management window:

  1. Expand the Replication tree in the navigation pane and select the needed DFS replication group folder name (domain1.local\dfs-01\Test-folder in our case).
  2. Select the Replicated Folders tab.
  3. Select the needed folder, right-click the folder name and hit Properties. Alternatively, you can select the folder and click Properties in the Actions pane.
  4. Set the filtered file types by using masks in the folder properties window. In this example, files matching the rule are excluded from replication:

~*, *.bak, *.tmp

You can also filter subfolders, for example, exclude Temp subfolders from DFS replication.

Configuring DFS replication filters
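
If you prefer scripting, the same filters can be set with the DFSR PowerShell module. A sketch using the group and folder names from this example:

# Exclude temporary file types and Temp subfolders from replication.
Set-DfsReplicatedFolder -GroupName "domain1.local\dfs-01\Test-folder" `
    -FolderName "Test-Folder" `
    -FileNameToExclude "~*","*.bak","*.tmp" `
    -DirectoryNameToExclude "Temp"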

Staging location

There can be a conflict when two or more users save changes to a file before these changes are replicated. The most recent changes take precedence for replication, and older versions of changed files are moved to the Conflict and Deleted folder. This issue can occur when replication is slow and files are large (the amount of changed data is high), so that the time needed to transfer the changed data is longer than the interval between users saving changes to the file.

Staging folders act as a cache for new and changed files that are ready to be replicated from source folders to target folders. The staging location is intended for files that exceed a certain file size. Staging is used as a queue to store files that must be replicated and ensure that changes can be replicated without worrying about changes to them during the transfer process.

Another aspect of configuring staging folders is performance optimization. DFS replication can consume additional CPU and disk resources, slow down, and even stop if the staging quota is too small for your tasks. The recommended size of the staging quota is at least the combined size of the 32 largest files in the replicated folder.
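
One way to estimate that value is to sum the sizes of the 32 largest files in the content path. A minimal sketch (the path is the example content folder from this walkthrough):

# Recommended staging quota ~= combined size of the 32 largest files, in MB.
$path = "D:\DATA\shared01"
$bytes = (Get-ChildItem -Path $path -Recurse -File |
    Sort-Object -Property Length -Descending |
    Select-Object -First 32 |
    Measure-Object -Property Length -Sum).Sum
[math]::Ceiling($bytes / 1MB)

# The result can then be applied with the DFSR module, for example:
# Set-DfsrMembership -GroupName "domain1.local\dfs-01\Test-folder" `
#     -FolderName "Test-Folder" -ComputerName "server02" `
#     -StagingPathQuotaInMB <value> -Force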

You can edit staging folder properties for DFS Replication in the DFS Management window:

  1. Select a replication group in the left pane of the DFS Management window.
  2. Select the Memberships tab.
  3. Select the needed replication folder, right-click the folder, and hit Properties.
  4. Select the Staging tab in the Properties window.
  5. Edit the staging path and quota according to your needs.
Configuring DFS staging location

Saved changes are not applied immediately. New staging settings must be replicated across all DFS servers within a domain. Time depends on Active Directory Domain Services replication latency and the polling interval of servers (5 minutes or more). Server reboot is not required.

DFS Replication vs. Backup

Don’t confuse DFS replication of data in shared folders with data backup. DFS replication makes copies of data on different servers, but if unwanted changes are written to a file on one server, these changes are replicated to the other servers. As a result, you don’t have a recovery point: the file has been overwritten with the unwanted changes on all servers, and you cannot use it for recovery in case of failure. The same threat is present in case of a ransomware attack.

Use NAKIVO Backup & Replication to protect data stored on your physical Windows Server machines including data stored in shared folders. The product also supports Hyper-V VM backup and VMware VM backup at the host level for effective protection.


Conclusion

Distributed File System (DFS) can significantly simplify shared resources management for administrators and make accessing shared folders more convenient for end-users. DFS makes transparent links to shared folders located on different servers.

DFS namespaces and DFS replication are two main features that you can configure in the DFS Management window after installing the appropriate Windows server roles. Opt for configuring DFS in a domain environment rather than in a Workgroup environment because there are many advantages, such as high availability and flexibility in an Active Directory domain.

Source :
https://www.nakivo.com/blog/configure-dfs-replication-for-windows-server/

Manually Clearing the ConflictAndDeleted Folder in DFSR

By Ned Pyle
Published Apr 04 2019 01:30 PM

First published on TechNet on Oct 06, 2008
Ned here again. Today I’m going to talk about a couple of scenarios we run into with the ConflictAndDeleted folder in DFSR. These are real quick and dirty, but they may save you a call to us someday.

Scenario 1: We need to empty out the ConflictAndDeleted folder in a controlled manner as part of regular administration (i.e. we just lowered quota and we want to reclaim that space).

Scenario 2: The ConflictAndDeleted folder quota is not being honored due to an error condition and the folder is filling the drive.

Let’s walk through these now.

Emptying the folder normally

It’s possible to clean up the ConflictAndDeleted folder through the DFSMGMT.MSC and SERVICES.MSC snap-ins, but it’s disruptive and kind of gross (you could lower the quota, wait for AD replication, wait for DFSR polling, and then restart the DFSR service). A much faster and slicker way is to call the WMI method CleanupConflictDirectory from the command line or a script:

1.  Open a CMD prompt as an administrator on the DFSR server.
2.  Get the GUID of the Replicated Folder you want to clean:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderconfig get replicatedfolderguid,replicatedfoldername

(This is all one line, wrapped)

Example output:

[screenshot]

3.  Then call the CleanupConflictDirectory method:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='<RF GUID>'" call cleanupconflictdirectory

Example output with a sample GUID:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='70bebd41-d5ae-4524-b7df-4eadb89e511e'" call cleanupconflictdirectory
[screenshot]

4.  At this point the ConflictAndDeleted folder will be empty and the ConflictAndDeletedManifest.xml will be deleted.
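
On servers where PowerShell is available, the same two WMI calls can be made like this (a sketch; replace the folder name with your own):

# List replicated folder GUIDs and names (step 2).
Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrReplicatedFolderConfig |
    Select-Object ReplicatedFolderGuid, ReplicatedFolderName

# Invoke CleanupConflictDirectory for one replicated folder (step 3).
$rf = Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrReplicatedFolderInfo |
    Where-Object { $_.ReplicatedFolderName -eq "MyFolder" }   # your folder name
$rf.CleanupConflictDirectory()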

Emptying the ConflictAndDeleted folder when in an error state

We’ve also seen a few cases where the ConflictAndDeleted quota was not being honored at all. In every single one of those cases, the customer had recently had hardware problems (specifically with their disk system) where files had become corrupt and the disk was unstable – even after repairing the disk (at least to the best of their knowledge), the ConflictAndDeleted folder quota was not being honored by DFSR.

Here’s where quota is set:

[screenshot]

Usually when we see this problem, the ConflictAndDeletedManifest.XML file has grown to hundreds of MB in size. When you try to open the file in an XML parser or in Internet Explorer, you will receive an error like “The XML page cannot be displayed” or a message that there is an error at a particular line. This is because the file is invalid at some section (a damaged element, scrambled data, etc.).

To fix this issue:

  1. Follow steps 1-4 from above. This may clean the folder as well as update DFSR to say that cleaning has occurred. We always want to try doing things the ‘right’ way before we start hacking.
  2. Stop the DFSR service.
  3. Delete the contents of the ConflictAndDeleted folder manually (with explorer.exe or DEL).
  4. Delete the ConflictAndDeletedManifest.xml file.
  5. Start the DFSR service back up.

For a bit more info on conflict and deletion handling in DFSR, take a look at:

Staging folders and Conflict and Deleted folders (TechNet)
DfsrConflictInfo Class (MSDN)

Until next time…

– Ned “Unhealthy love for DFSR” Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/manually-clearing-the-conflictanddeleted-folder-in-dfsr/ba-p/395711

NSA and CISA Red and Blue Teams Share Top Ten Cybersecurity Misconfigurations

Release Date: October 05, 2023
Alert Code: AA23-278A

A plea for network defenders and software manufacturers to fix common problems.

EXECUTIVE SUMMARY

The National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) are releasing this joint cybersecurity advisory (CSA) to highlight the most common cybersecurity misconfigurations in large organizations, and detail the tactics, techniques, and procedures (TTPs) actors use to exploit these misconfigurations.

Through NSA and CISA Red and Blue team assessments, as well as through the activities of NSA and CISA Hunt and Incident Response teams, the agencies identified the following 10 most common network misconfigurations:

  1. Default configurations of software and applications
  2. Improper separation of user/administrator privilege
  3. Insufficient internal network monitoring
  4. Lack of network segmentation
  5. Poor patch management
  6. Bypass of system access controls
  7. Weak or misconfigured multifactor authentication (MFA) methods
  8. Insufficient access control lists (ACLs) on network shares and services
  9. Poor credential hygiene
  10. Unrestricted code execution

These misconfigurations illustrate (1) a trend of systemic weaknesses in many large organizations, including those with mature cyber postures, and (2) the importance of software manufacturers embracing secure-by-design principles to reduce the burden on network defenders:

  • Properly trained, staffed, and funded network security teams can implement the known mitigations for these weaknesses.
  • Software manufacturers must reduce the prevalence of these misconfigurations—thus strengthening the security posture for customers—by incorporating secure-by-design and -default principles and tactics into their software development practices.[1]

NSA and CISA encourage network defenders to implement the recommendations found within the Mitigations section of this advisory—including the following—to reduce the risk of malicious actors exploiting the identified misconfigurations.

  • Remove default credentials and harden configurations.
  • Disable unused services and implement access controls.
  • Update regularly and automate patching, prioritizing patching of known exploited vulnerabilities.[2]
  • Reduce, restrict, audit, and monitor administrative accounts and privileges.

NSA and CISA urge software manufacturers to take ownership of improving the security outcomes of their customers by embracing secure-by-design and -default tactics, including:

  • Embedding security controls into product architecture from the start of development and throughout the entire software development lifecycle (SDLC).
  • Eliminating default passwords.
  • Providing high-quality audit logs to customers at no extra charge.
  • Mandating MFA, ideally phishing-resistant, for privileged users and making MFA a default rather than opt-in feature.[3]

Download the PDF version of this report: PDF, 660 KB

TECHNICAL DETAILS

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 13, and the MITRE D3FEND™ cybersecurity countermeasures framework.[4],[5] See the Appendix: MITRE ATT&CK tactics and techniques section for tables summarizing the threat actors’ activity mapped to MITRE ATT&CK tactics and techniques, and the Mitigations section for MITRE D3FEND countermeasures.

For assistance with mapping malicious cyber activity to the MITRE ATT&CK framework, see CISA and MITRE ATT&CK’s Best Practices for MITRE ATT&CK Mapping and CISA’s Decider Tool.[6],[7]

Overview

Over the years, the following NSA and CISA teams have assessed the security posture of many network enclaves across the Department of Defense (DoD); Federal Civilian Executive Branch (FCEB); state, local, tribal, and territorial (SLTT) governments; and the private sector:

  • Depending on the needs of the assessment, NSA Defensive Network Operations (DNO) teams feature capabilities from Red Team (adversary emulation), Blue Team (strategic vulnerability assessment), Hunt (targeted hunt), and/or Tailored Mitigations (defensive countermeasure development).
  • CISA Vulnerability Management (VM) teams have assessed the security posture of over 1,000 network enclaves. CISA VM teams include Risk and Vulnerability Assessment (RVA) and CISA Red Team Assessments (RTA).[8] The RVA team conducts remote and onsite assessment services, including penetration testing and configuration review. RTA emulates cyber threat actors in coordination with an organization to assess the organization’s cyber detection and response capabilities.
  • CISA Hunt and Incident Response teams conduct proactive and reactive engagements, respectively, on organization networks to identify and detect cyber threats to U.S. infrastructure.

During these assessments, NSA and CISA identified the 10 most common network misconfigurations, which are detailed below. These misconfigurations (non-prioritized) are systemic weaknesses across many networks.

Many of the assessments were of Microsoft® Windows® and Active Directory® environments. This advisory provides details about, and mitigations for, specific issues found during these assessments, and so mostly focuses on these products. However, it should be noted that many other environments contain similar misconfigurations. Network owners and operators should examine their networks for similar misconfigurations even when running other software not specifically mentioned below.

1. Default Configurations of Software and Applications

Default configurations of systems, services, and applications can permit unauthorized access or other malicious activity. Common default configurations include:

  • Default credentials
  • Default service permissions and configuration settings

Default Credentials

Many software manufacturers release commercial off-the-shelf (COTS) network devices (which provide user access via applications or web portals) that contain predefined default credentials for their built-in administrative accounts.[9] Malicious actors and assessment teams regularly abuse default credentials by:

  • Finding credentials with a simple web search [T1589.001] and using them [T1078.001] to gain authenticated access to a device.
  • Resetting built-in administrative accounts [T1098] via predictable forgotten passwords questions.
  • Leveraging default virtual private network (VPN) credentials for internal network access [T1133].
  • Leveraging publicly available setup information to identify built-in administrative credentials for web applications and gaining access to the application and its underlying database.
  • Leveraging default credentials on software deployment tools [T1072] for code execution and lateral movement.

In addition to devices that provide network access, printers, scanners, security cameras, conference room audiovisual (AV) equipment, voice over internet protocol (VoIP) phones, and internet of things (IoT) devices commonly contain default credentials that can be used for easy unauthorized access to these devices as well. Further compounding this problem, printers and scanners may have privileged domain accounts loaded so that users can easily scan documents and upload them to a shared drive or email them. Malicious actors who gain access to a printer or scanner using default credentials can use the loaded privileged domain accounts to move laterally from the device and compromise the domain [T1078.002].

Default Service Permissions and Configuration Settings

Certain services may have overly permissive access controls or vulnerable configurations by default. Additionally, even if the providers do not enable these services by default, malicious actors can easily abuse these services if users or administrators enable them.

Assessment teams regularly find the following:

  • Insecure Active Directory Certificate Services
  • Insecure legacy protocols/services
  • Insecure Server Message Block (SMB) service

Insecure Active Directory Certificate Services

Active Directory Certificate Services (ADCS) is a feature used to manage Public Key Infrastructure (PKI) certificates, keys, and encryption inside of Active Directory (AD) environments. ADCS templates are used to build certificates for different types of servers and other entities on an organization’s network.

Malicious actors can exploit ADCS and/or ADCS template misconfigurations to manipulate the certificate infrastructure into issuing fraudulent certificates and/or escalate user privileges to domain administrator privileges. These certificates and domain escalation paths may grant actors unauthorized, persistent access to systems and critical data, the ability to impersonate legitimate entities, and the ability to bypass security measures.

Assessment teams have observed organizations with the following misconfigurations:

  • ADCS servers running with web-enrollment enabled. If web-enrollment is enabled, unauthenticated actors can coerce a server to authenticate to an actor-controlled computer, which can relay the authentication to the ADCS web-enrollment service and obtain a certificate [T1649] for the server’s account. These fraudulent, trusted certificates enable actors to use adversary-in-the-middle techniques [T1557] to masquerade as trusted entities on the network. The actors can also use the certificate for AD authentication to obtain a Kerberos Ticket Granting Ticket (TGT) [T1558.001], which they can use to compromise the server and usually the entire domain.
  • ADCS templates where low-privileged users have enrollment rights, and the enrollee supplies a subject alternative name. Misconfiguring various elements of ADCS templates can result in domain escalation by unauthorized users (e.g., granting low-privileged users certificate enrollment rights, allowing requesters to specify a subjectAltName in the certificate signing request [CSR], not requiring authorized signatures for CSRs, granting FullControl or WriteDacl permissions to users). Malicious actors can use a low-privileged user account to request a certificate with a particular Subject Alternative Name (SAN) and gain a certificate where the SAN matches the User Principal Name (UPN) of a privileged account.

Note: For more information on known escalation paths, including PetitPotam NTLM relay techniques, see: Domain Escalation: PetitPotam NTLM Relay to ADCS Endpoints and Certified Pre-Owned, Active Directory Certificate Services.[10],[11],[12]

Insecure legacy protocols/services

Many vulnerable network services are enabled by default, and assessment teams have observed them enabled in production environments. Specifically, assessment teams have observed Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBT-NS), which are Microsoft Windows components that serve as alternate methods of host identification. If these services are enabled in a network, actors can use spoofing, poisoning, and relay techniques [T1557.001] to obtain domain hashes, system access, and potential administrative system sessions. Malicious actors frequently exploit these protocols to compromise entire Windows environments.

Malicious actors can spoof an authoritative source for name resolution on a target network by responding to passing traffic, effectively poisoning the service so that target computers will communicate with an actor-controlled system instead of the intended one. If the requested system requires identification/authentication, the target computer will send the user’s username and hash to the actor-controlled system. The actors then collect the hash and crack it offline to obtain the plain text password [T1110.002].
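
As one illustrative hardening step on a single host, LLMNR and NBT-NS can be disabled with registry settings such as the following; in a domain, the equivalent Group Policy (“Turn off multicast name resolution”) is usually preferable:

# Disable LLMNR (the policy key may not exist yet).
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" `
    -Name "EnableMulticast" -Value 0 -Type DWord

# Disable NBT-NS on each interface (2 = disable NetBIOS over TCP/IP).
Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces" |
    ForEach-Object { Set-ItemProperty -Path $_.PSPath -Name "NetbiosOptions" -Value 2 }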

Insecure Server Message Block (SMB) service

The Server Message Block service is a Windows component primarily for file sharing. Its default configuration, including in the latest version of Windows, does not require signing network messages to ensure authenticity and integrity. If SMB servers do not enforce SMB signing, malicious actors can use machine-in-the-middle techniques, such as NTLM relay. Further, malicious actors can combine a lack of SMB signing with the name resolution poisoning issue (see above) to gain access to remote systems [T1021.002] without needing to capture and crack any hashes.
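
For illustration, requiring SMB signing on a Windows server is a one-line configuration change; test before broad rollout, as clients that cannot sign will fail to connect:

# Require signed SMB sessions on this server, then verify the setting.
Set-SmbServerConfiguration -RequireSecuritySignature $true -Force
Get-SmbServerConfiguration | Select-Object RequireSecuritySignature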

2. Improper Separation of User/Administrator Privilege

Administrators often assign multiple roles to one account. These accounts have access to a wide range of devices and services, allowing malicious actors to move through a network quickly with one compromised account without triggering lateral movement and/or privilege escalation detection measures.

Assessment teams have observed the following common account separation misconfigurations:

  • Excessive account privileges
  • Elevated service account permissions
  • Non-essential use of elevated accounts

Excessive Account Privileges

Account privileges are intended to control user access to host or application resources, limiting access to sensitive information and enforcing a least-privilege security model. When account privileges are overly permissive, users can see and/or do things they should not be able to, which increases an organization's risk exposure and attack surface.

Expanding organizations can undergo numerous changes in account management, personnel, and access requirements. These changes commonly lead to privilege creep—the granting of excessive access and unnecessary account privileges. Through the analysis of topical and nested AD groups, a malicious actor can find a user account [T1078] that has been granted privileges exceeding its need-to-know or least-privilege function. Extraneous access provides easy avenues for unauthorized access to data and resources and for escalation of privileges in the targeted domain.
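Defenders can run the same nested-group analysis before an attacker does. The following Python sketch, assuming the third-party ldap3 package and placeholder DNs and credentials, expands nested membership of a privileged group using the LDAP_MATCHING_RULE_IN_CHAIN matching rule (OID 1.2.840.113556.1.4.1941), which walks group nesting server-side.

    # Sketch: enumerate accounts that are transitively members of Domain
    # Admins, expanding nested groups the way an attacker would. Requires the
    # third-party ldap3 package; DNs and credentials are placeholders.
    from ldap3 import Server, Connection, SUBTREE

    server = Server("ldaps://dc.example.local")  # hypothetical domain controller
    conn = Connection(server, user="EXAMPLE\\auditor", password="...", auto_bind=True)

    group_dn = "CN=Domain Admins,CN=Users,DC=example,DC=local"
    # The :1.2.840.113556.1.4.1941: rule makes memberOf match through nesting.
    flt = f"(&(objectClass=user)(memberOf:1.2.840.113556.1.4.1941:={group_dn}))"
    conn.search("DC=example,DC=local", flt, SUBTREE, attributes=["sAMAccountName"])

    for entry in conn.entries:
        print("Transitive Domain Admin:", entry.sAMAccountName)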

Elevated Service Account Permissions

Applications often operate using user accounts to access resources. These user accounts, which are known as service accounts, often require elevated privileges. When a malicious actor compromises an application or service using a service account, they will have the same privileges and access as the service account.

Malicious actors can exploit elevated service permissions within a domain to gain unauthorized access and control over critical systems. Service accounts are enticing targets for malicious actors because such accounts are often granted elevated permissions within the domain due to the nature of the service, and because access to use the service can be requested by any valid domain user. Due to these factors, kerberoasting—a form of credential access achieved by cracking service account credentials—is a common technique used to gain control over service account targets [T1558.003].
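A quick way to gauge kerberoasting exposure is to list user accounts with a servicePrincipalName set, since any authenticated user can request service tickets for those accounts and attempt to crack them offline. A minimal sketch follows, again assuming the third-party ldap3 package with placeholder connection details; flagged accounts warrant extra password length and rotation.

    # Sketch: list kerberoastable accounts (users with an SPN). Requires the
    # third-party ldap3 package; server, credentials, and base DN are
    # placeholders.
    from ldap3 import Server, Connection, SUBTREE

    server = Server("ldaps://dc.example.local")
    conn = Connection(server, user="EXAMPLE\\auditor", password="...", auto_bind=True)

    flt = "(&(objectCategory=person)(objectClass=user)(servicePrincipalName=*))"
    conn.search("DC=example,DC=local", flt, SUBTREE,
                attributes=["sAMAccountName", "servicePrincipalName"])

    for entry in conn.entries:
        print(entry.sAMAccountName, "->", entry.servicePrincipalName)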

Non-Essential Use of Elevated Accounts

IT personnel use domain administrator and other administrator accounts for system and network management due to their inherent elevated privileges. When an administrator account is logged into a compromised host, a malicious actor can steal the account's credentials and an AD-generated authentication token [T1528] and use them, with the elevated permissions, to move throughout the domain [T1550.001]. Using an elevated account for normal day-to-day, non-administrative tasks increases the account's exposure and, therefore, its risk of compromise and its risk to the network.

Malicious actors prioritize obtaining valid domain credentials upon gaining access to a network. Authentication using valid domain credentials allows the execution of secondary enumeration techniques to gain visibility into the target domain and AD structure, including discovery of elevated accounts and where the elevated accounts are used [T1087].

Targeting elevated accounts (such as domain administrator or system administrators) performing day-to-day activities provides the most direct path to achieve domain escalation. Systems or applications accessed by the targeted elevated accounts significantly increase the attack surface available to adversaries, providing additional paths and escalation options.

After obtaining initial access via an account with administrative permissions, an assessment team compromised a domain in under a business day. The team first gained initial access to the system through phishing [T1566], by which they enticed the end user to download [T1204] and execute malicious payloads. The targeted end-user account had administrative permissions, enabling the team to quickly compromise the entire domain.

3. Insufficient Internal Network Monitoring

Some organizations do not optimally configure host and network sensors for traffic collection and end-host logging. These insufficient configurations could lead to undetected adversarial compromise. Additionally, improper sensor configurations limit the traffic collection capability needed for enhanced baseline development and detract from timely detection of anomalous activity.

Assessment teams have exploited insufficient monitoring to gain access to assessed networks. For example:

  • An assessment team observed an organization with host-based monitoring, but no network monitoring. Host-based monitoring informs defensive teams about adverse activities on individual hosts, while network monitoring informs them about adverse activities traversing hosts [TA0008]. In this example, the organization could identify infected hosts but could not identify where the infection was coming from, and thus could not stop future lateral movement and infections.
  • An assessment team gained persistent deep access to a large organization with a mature cyber posture. The organization did not detect the assessment team’s lateral movement, persistence, and command and control (C2) activity, including when the team attempted noisy activities to trigger a security response. For more information on this activity, see CSA CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks.[13]

4. Lack of Network Segmentation

Network segmentation separates portions of the network with security boundaries. A lack of network segmentation leaves no security boundaries between the user, production, and critical system networks, and allows an actor who has compromised a resource on the network to move laterally across a variety of systems uncontested. It additionally leaves organizations significantly more vulnerable to potential ransomware attacks and post-exploitation techniques.

Lack of segmentation between IT and operational technology (OT) environments places OT environments at risk. For example, assessment teams have often gained access to OT networks—despite prior assurance that the networks were fully air gapped, with no possible connection to the IT network—by finding special purpose, forgotten, or even accidental network connections [T1199].

5. Poor Patch Management

Vendors release patches and updates to address security vulnerabilities. Poor patch management and network hygiene practices often enable adversaries to discover open attack vectors and exploit critical vulnerabilities. Poor patch management includes:

  • Lack of regular patching
  • Use of unsupported operating systems (OSs) and outdated firmware

Lack of Regular Patching

Failure to apply the latest patches can leave a system open to compromise from publicly available exploits. Due to their ease of discovery—via vulnerability scanning [T1595.002] and open source research [T1592]—and exploitation, these systems are immediate targets for adversaries. Allowing critical vulnerabilities to remain on production systems without applying their corresponding patches significantly increases the attack surface. Organizations should prioritize patching known exploited vulnerabilities in their environments.[2]
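One lightweight way to operationalize that prioritization is to cross-reference scanner output against CISA's Known Exploited Vulnerabilities (KEV) catalog. The Python sketch below assumes the third-party requests package and uses CISA's published JSON feed URL at the time of writing; the observed CVE set is a placeholder for your own scan results.

    # Sketch: check CVEs from your own scan output against CISA's KEV catalog
    # to prioritize patching. Requires the third-party requests package.
    import requests

    KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
               "known_exploited_vulnerabilities.json")

    observed = {"CVE-2019-18935", "CVE-2021-44228", "CVE-2022-27924"}  # placeholder

    catalog = requests.get(KEV_URL, timeout=30).json()
    kev_ids = {vuln["cveID"] for vuln in catalog["vulnerabilities"]}

    for cve in sorted(observed & kev_ids):
        print(f"{cve} is in the KEV catalog: patch first")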

Assessment teams have observed threat actors exploiting many CVEs in public-facing applications [T1190], including:

  • CVE-2019-18935 in an unpatched instance of Telerik® UI for ASP.NET running on a Microsoft IIS server.[14]
  • CVE-2021-44228 (Log4Shell) in an unpatched VMware® Horizon server.[15]
  • CVE-2022-24682, CVE-2022-27924, and CVE-2022-27925 chained with CVE-2022-37042, or CVE-2022-30333 in an unpatched Zimbra® Collaboration Suite.[16]

Use of Unsupported OSs and Outdated Firmware

Using software or hardware that is no longer supported by the vendor poses a significant security risk because new and existing vulnerabilities are no longer patched. Malicious actors can exploit vulnerabilities in these systems to gain unauthorized access, compromise sensitive data, and disrupt operations [T1210].

Assessment teams frequently observe organizations using unsupported Windows operating systems without security updates MS17-010 and MS08-067. These updates, released years ago, address critical remote code execution vulnerabilities.[17],[18]

6. Bypass of System Access Controls

A malicious actor can bypass system access controls by compromising alternate authentication methods in an environment. If a malicious actor can collect hashes in a network, they can use the hashes to authenticate using non-standard means, such as pass-the-hash (PtH) [T1550.002]. By mimicking accounts without the clear-text password, an actor can expand and fortify their access without detection. Kerberoasting is also one of the most time-efficient ways to elevate privileges and move laterally throughout an organization’s network.

7. Weak or Misconfigured MFA Methods

Misconfigured Smart Cards or Tokens

Some networks (generally government or DoD networks) require accounts to use smart cards or tokens. Multifactor requirements can be misconfigured so that the password hashes for accounts never change. Even though the password itself is no longer used (because the smart card or token is required instead), there is still a password hash for the account that can be used as an alternative credential for authentication. If the password hash never changes, then once a malicious actor obtains an account's password hash [T1111], the actor can use it via the PtH technique for as long as that account exists.

Lack of Phishing-Resistant MFA

Some forms of MFA are vulnerable to phishing, “push bombing” [T1621], exploitation of Signaling System 7 (SS7) protocol vulnerabilities, and/or “SIM swap” techniques. These attempts, if successful, may allow a threat actor to gain access to MFA authentication credentials or bypass MFA and access the MFA-protected systems. (See CISA’s Fact Sheet Implementing Phishing-Resistant MFA for more information.)[3]

For example, assessment teams have used voice phishing to convince users to provide missing MFA information [T1598]. In one instance, an assessment team knew a user’s main credentials, but their login attempts were blocked by MFA requirements. The team then masqueraded as IT staff and convinced the user to provide the MFA code over the phone, allowing the team to complete their login attempt and gain access to the user’s email and other organizational resources.

8. Insufficient ACLs on Network Shares and Services

Data shares and repositories are primary targets for malicious actors. Network administrators may improperly configure ACLs to allow unauthorized users to access sensitive or administrative data on shared drives.

Actors can use commands, open source tools, or custom malware to look for shared folders and drives [T1135]; a minimal enumeration sketch for defenders follows the examples below.

  • In one compromise, a team observed actors use the net share command—which displays information about shared resources on the local computer—and the ntfsinfo command to search network shares on compromised computers. In the same compromise, the actors used a custom tool, CovalentStealer, which is designed to identify file shares on a system, categorize the files [T1083], and upload the files to a remote server [TA0010].[19],[20]
  • Ransomware actors have used the SoftPerfect® Network Scanner, netscan.exe—which can ping computers [T1018], scan ports [T1046], and discover shared folders—and SharpShares to enumerate accessible network shares in a domain.[21],[22]
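For defenders who want to reproduce that visibility check, the Python sketch below lists the shares a given low-privileged account can see, which is the same enumeration tools like SharpShares automate. It assumes the third-party impacket package; the host and credentials are placeholders.

    # Sketch: enumerate the shares one account can see on one host. Requires
    # the third-party impacket package; host and credentials are placeholders.
    from impacket.smbconnection import SMBConnection

    conn = SMBConnection("fileserver.example.local", "fileserver.example.local")
    conn.login("auditor", "...", domain="EXAMPLE")  # hypothetical low-privileged account

    for share in conn.listShares():
        # shi1_netname is a NUL-terminated field in the SHARE_INFO_1 structure.
        print("Visible share:", share["shi1_netname"][:-1])
    conn.close()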

Malicious actors can then collect and exfiltrate the data from the shared drives and folders. They can then use the data for a variety of purposes, such as extortion of the organization or as intelligence when formulating intrusion plans for further network compromise. Assessment teams routinely find sensitive information on network shares [T1039] that could facilitate follow-on activity or provide opportunities for extortion. Teams regularly find drives containing cleartext credentials [T1552] for service accounts, web applications, and even domain administrators.

Even when further access is not directly obtained from credentials in file shares, there can be a treasure trove of information for improving situational awareness of the target network, including the network’s topology, service tickets, or vulnerability scan data. In addition, teams regularly identify sensitive data and PII on shared drives (e.g., scanned documents, social security numbers, and tax returns) that could be used for extortion or social engineering of the organization or individuals.

9. Poor Credential Hygiene

Poor credential hygiene facilitates threat actors in obtaining credentials for initial access, persistence, lateral movement, and other follow-on activity, especially if phishing-resistant MFA is not enabled. Poor credential hygiene includes:

  • Easily crackable passwords
  • Cleartext password disclosure

Easily Crackable Passwords

Easily crackable passwords are passwords that a malicious actor can guess within a short time using relatively inexpensive computing resources. The presence of easily crackable passwords on a network generally stems from insufficient password length (i.e., shorter than 15 characters) and randomness (i.e., passwords that are not unique or are easily guessed). This is often due to lax requirements for passwords in organizational policies and user training. A policy that only requires short and simple passwords leaves user passwords susceptible to password cracking. Organizations should provide or allow employee use of password managers to enable the generation and easy use of secure, random passwords for each account.
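Some rough keyspace arithmetic shows why length dominates. The guess rate below is an assumption (plausible for fast, unsalted hashes such as NTLM on GPU hardware; slow, salted hashes reduce it by orders of magnitude), but the conclusion is robust: an 8-character random password falls in under a day, while a 15-character one does not fall in any realistic timeframe.

    # Sketch: worst-case time to exhaust a random password's keyspace.
    GUESSES_PER_SECOND = 100e9      # assumed offline cracking rate (tune this)
    SECONDS_PER_YEAR = 31_557_600   # Julian year

    def years_to_exhaust(alphabet_size: int, length: int) -> float:
        """Worst-case years to exhaust the keyspace of a random password."""
        return (alphabet_size ** length) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

    for length in (8, 12, 15):
        years = years_to_exhaust(95, length)  # 95 printable ASCII characters
        print(f"random password, length {length}: {years:.2e} years to exhaust")

    # Length 8 is exhausted in well under a day; length 15 takes ~1.5e11 years.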

Often, when a credential is obtained, it is a hash (a one-way cryptographic transformation) of the password and not the password itself. Although some hashes can be used directly with PtH techniques, many must be cracked to obtain usable credentials. The cracking process takes the captured hash of the user's plaintext password and leverages dictionary wordlists and rulesets, often drawing on databases of billions of previously compromised passwords, in an attempt to find the matching plaintext password [T1110.002].

One of the primary ways to crack passwords is with the open source tool, Hashcat, combined with password lists obtained from publicly released password breaches. Once a malicious actor has access to a plaintext password, they are usually limited only by the account’s permissions. In some cases, the actor may be restricted or detected by advanced defense-in-depth and zero trust implementations as well, but this has been a rare finding in assessments thus far.

Assessment teams have cracked password hashes for NTLM users, Kerberos service account tickets, NetNTLMv2, and PFX stores [T1555], enabling the team to elevate privileges and move laterally within networks. In 12 hours, one team cracked over 80% of all users’ passwords in an Active Directory, resulting in hundreds of valid credentials.

Cleartext Password Disclosure

Storing passwords in cleartext is a serious security risk. A malicious actor with access to files containing cleartext passwords [T1552.001] could use these credentials to log into the affected applications or systems under the guise of a legitimate user. Accountability is lost in this situation as any system logs would record valid user accounts accessing applications or systems.

Malicious actors search for text files, spreadsheets, documents, and configuration files in hopes of obtaining cleartext passwords. Assessment teams frequently discover cleartext passwords, allowing them to quickly escalate the emulated intrusion from the compromise of a regular domain user account to that of a privileged account, such as a Domain or Enterprise Administrator. A common tool used for locating cleartext passwords is the open source tool, Snaffler.[23]
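The following is a deliberately small, Snaffler-like sweep in Python to illustrate the idea. The regex, file extensions, and UNC path are illustrative assumptions, and a real review process should expect and triage false positives.

    # Sketch: sweep likely config and text files for credential-looking lines.
    # Patterns and paths are illustrative; tune for your environment.
    import re
    from pathlib import Path

    PATTERN = re.compile(r"(password|passwd|pwd)\s*[:=]\s*\S+", re.IGNORECASE)
    EXTENSIONS = {".txt", ".ini", ".cfg", ".conf", ".xml", ".ps1", ".bat", ".config"}

    def scan(root: str) -> None:
        for path in Path(root).rglob("*"):
            if path.suffix.lower() not in EXTENSIONS or not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file; skip
            for lineno, line in enumerate(text.splitlines(), start=1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: possible credential")

    scan(r"\\fileserver\share")  # hypothetical UNC path to review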

10. Unrestricted Code Execution

If unverified programs are allowed to execute on hosts, a threat actor can run arbitrary, malicious payloads within a network.

Malicious actors often execute code after gaining initial access to a system. For example, after a user falls for a phishing scam, the actor usually convinces the victim to run code on their workstation to gain remote access to the internal network. This code is usually an unverified program that has no legitimate purpose or business reason for running on the network.

Assessment teams and malicious actors frequently leverage unrestricted code execution in the form of executables, dynamic link libraries (DLLs), HTML applications, and macros (scripts used in office automation documents) [T1059.005] to establish initial access, persistence, and lateral movement. In addition, actors often use scripting languages [T1059] to obscure their actions [T1027.010] and bypass allowlisting—where organizations restrict applications and other forms of code by default and only allow those that are known and trusted. Further, actors may load vulnerable drivers and then exploit the drivers’ known vulnerabilities to execute code in the kernel with the highest level of system privileges to completely compromise the device [T1068].
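The default-deny decision at the heart of allowlisting is simple to illustrate. The Python sketch below checks a file's SHA-256 digest against an approved set; real application control products also validate digital signatures and publisher attributes, as the mitigations later in this advisory recommend, and the allowlist entry shown is a placeholder (the digest of an empty file).

    # Sketch: the core default-deny decision of hash-based allowlisting.
    import hashlib
    from pathlib import Path

    ALLOWLIST = {
        # SHA-256 digests of approved binaries; this one (the empty-file
        # digest) is only a format placeholder.
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def is_allowed(path: str) -> bool:
        # Everything not explicitly listed is denied by default.
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        return digest in ALLOWLIST

    candidate = r"C:\Users\Public\tool.exe"  # hypothetical downloaded binary
    print(("allow" if is_allowed(candidate) else "block"), candidate)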

MITIGATIONS

Network Defenders

NSA and CISA recommend network defenders implement the recommendations that follow to mitigate the issues identified in this advisory. These mitigations align with the Cross-Sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST) as well as with the MITRE ATT&CK Enterprise Mitigations and MITRE D3FEND frameworks.

The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. Visit CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.[24]

Mitigate Default Configurations of Software and Applications
Misconfiguration: Default configurations of software and applications
Recommendations for network defenders:
  • Modify the default configuration of applications and appliances before deployment in a production environment [M1013],[D3-ACH]. Refer to hardening guidelines provided by the vendor and related cybersecurity guidance (e.g., DISA’s Security Technical Implementation Guides (STIGs) and configuration guides).[25],[26],[27]

Misconfiguration: Default configurations of software and applications: default credentials
Recommendations for network defenders:
  • Change or disable vendor-supplied default usernames and passwords of services, software, and equipment when installing or commissioning [CPG 2.A]. When resetting passwords, enforce the use of “strong” passwords (i.e., passwords that are more than 15 characters and random [CPG 2.B]) and follow hardening guidelines provided by the vendor, DISA STIGs, NSA, and/or NIST [M1027],[D3-SPP].[25],[26],[28],[29]

Misconfiguration: Default service permissions and configuration settings: insecure Active Directory Certificate Services
Recommendations for network defenders:
  • Ensure the secure configuration of ADCS implementations. Regularly update and patch the controlling infrastructure (e.g., for CVE-2021-36942), employ monitoring and auditing mechanisms, and implement strong access controls to protect the infrastructure.
  • If not needed, disable web enrollment on ADCS servers. See Microsoft: Uninstall-AdcsWebEnrollment (ADCSDeployment) for guidance.[30]
  • If web enrollment is needed on ADCS servers:
      • Enable Extended Protection for Authentication (EPA) for Certification Authority Web Enrollment by choosing the “Required” option. For guidance, see Microsoft: KB5021989: Extended Protection for Authentication.[31]
      • Enable “Require SSL” on the ADCS server.
      • Disable NTLM on all ADCS servers. For guidance, see Microsoft: Network security: Restrict NTLM: NTLM authentication in this domain and Network security: Restrict NTLM: Incoming NTLM traffic.[32],[33]
      • Disable SAN for UPN mapping. For guidance, see Microsoft: How to disable the Subject Alternative Name for UPN mapping. Instead, smart card authentication can use the altSecurityIdentities attribute for explicit, more secure mapping of certificates to accounts.[34]
  • Review all permissions on the ADCS templates on applicable servers. Restrict enrollment rights to only those users or groups that require them. Disable the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag from templates to prevent users from supplying and editing sensitive security settings within these templates. Enforce manager approval for requested certificates. Remove FullControl, WriteDacl, and Write property permissions from low-privileged groups, such as domain users, to certificate template objects.

Misconfiguration: Default service permissions and configuration settings: insecure legacy protocols/services
Recommendations for network defenders:
  • Determine if LLMNR and NetBIOS are required for essential business operations.
  • If not required, disable LLMNR and NetBIOS in local computer security settings or by group policy.

Misconfiguration: Default service permissions and configuration settings: insecure SMB service
Recommendations for network defenders:
  • Require SMB signing for both SMB client and server on all systems.[25] This should prevent certain adversary-in-the-middle and pass-the-hash techniques. For more information on SMB signing, see Microsoft: Overview of Server Message Block Signing.[35] Note: Beginning in Microsoft Windows 11 Insider Preview Build 25381, Windows requires SMB signing for all communications.[36]
Mitigate Improper Separation of User/Administrator Privilege
Misconfiguration: Improper separation of user/administrator privilege (excessive account privileges, elevated service account permissions, and non-essential use of elevated accounts)
Recommendations for network defenders:
  • Implement authentication, authorization, and accounting (AAA) systems [M1018] to limit actions users can perform, and review logs of user actions to detect unauthorized use and abuse. Apply least privilege principles to user accounts and groups, allowing only the performance of authorized actions.
  • Audit user accounts and remove those that are inactive or unnecessary on a routine basis [CPG 2.D]. Limit the ability of user accounts to create additional accounts.
  • Restrict use of privileged accounts to perform general tasks, such as accessing email and browsing the Internet [CPG 2.E],[D3-UAP]. See NSA Cybersecurity Information Sheet (CSI) Defend Privileges and Accounts for more information.[37]
  • Limit the number of users within the organization with an identity and access management (IAM) role that has administrator privileges. Strive to reduce all permanent privileged role assignments, and conduct periodic entitlement reviews on IAM users, roles, and policies.
  • Implement time-based access for privileged accounts. For example, the just-in-time access method provisions privileged access when needed and can support enforcement of the principle of least privilege (as well as the Zero Trust model) by setting network-wide policy to automatically disable admin accounts at the Active Directory level. As needed, individual users can submit requests through an automated process that enables access to a system for a set timeframe. In cloud environments, just-in-time elevation is also appropriate and may be implemented using per-session federated claims or privileged access management tools.
  • Restrict domain users from being in the local administrator group on multiple systems.
  • Run daemonized applications (services) with non-administrator accounts when possible.
  • Only configure service accounts with the permissions necessary for the services they control to operate.
  • Disable unused services and implement ACLs to protect services.
Mitigate Insufficient Internal Network Monitoring
Misconfiguration: Insufficient internal network monitoring
Recommendations for network defenders:
  • Establish a baseline of applications and services, and routinely audit their access and use, especially for administrative activity [D3-ANAA]. For instance, administrators should routinely audit the access lists and permissions of all web applications and services [CPG 2.O],[M1047]. Look for suspicious accounts, investigate them, and remove accounts and credentials as appropriate, such as accounts of former staff.[39]
  • Establish a baseline that represents an organization’s normal traffic activity, network performance, host application activity, and user behavior; investigate any deviations from that baseline [D3-NTCD],[D3-CSPP],[D3-UBA].[40]
  • Use auditing tools capable of detecting privilege and service abuse opportunities on systems within an enterprise and correct them [M1047].
  • Implement a security information and event management (SIEM) system to provide log aggregation, correlation, querying, visualization, and alerting from network endpoints, logging systems, endpoint detection and response (EDR) systems, and intrusion detection systems (IDS) [CPG 2.T],[D3-NTA].
Mitigate Lack of Network Segmentation
Misconfiguration: Lack of network segmentation
Recommendations for network defenders:
  • Implement next-generation firewalls to perform deep packet filtering, stateful inspection, and application-level packet inspection [D3-NTF]. Deny or drop improperly formatted traffic that is incongruent with application-specific traffic permitted on the network. This practice limits an actor’s ability to abuse allowed application protocols. The practice of allowlisting network applications does not rely on generic ports as filtering criteria, enhancing filtering fidelity. For more information on application-aware defenses, see NSA CSI Segment Networks and Deploy Application-Aware Defenses.[41]
  • Engineer network segments to isolate critical systems, functions, and resources [CPG 2.F],[D3-NI]. Establish physical and logical segmentation controls, such as virtual local area network (VLAN) configurations and properly configured access control lists (ACLs) on infrastructure devices [M1030]. These devices should be baselined and audited to prevent access to potentially sensitive systems and information. Leverage properly configured Demilitarized Zones (DMZs) to reduce service exposure to the Internet.[42],[43],[44]
  • Implement separate Virtual Private Cloud (VPC) instances to isolate essential cloud systems. Where possible, implement Virtual Machines (VMs) and Network Function Virtualization (NFV) to enable micro-segmentation of networks in virtualized environments and cloud data centers. Employ secure VM firewall configurations in tandem with macro segmentation.
Mitigate Poor Patch Management
Misconfiguration: Poor patch management: lack of regular patching
Recommendations for network defenders:
  • Ensure organizations implement and maintain an efficient patch management process that enforces the use of up-to-date, stable versions of OSs, browsers, and software [M1051],[D3-SU].[45]
  • Update software regularly by employing patch management for externally exposed applications, internal enterprise endpoints, and servers. Prioritize patching known exploited vulnerabilities.[2]
  • Automate the update process as much as possible and use vendor-provided updates. Consider using automated patch management tools and software update tools.
  • Where patching is not possible due to limitations, segment networks to limit exposure of the vulnerable system or host.

Misconfiguration: Poor patch management: use of unsupported OSs and outdated firmware
Recommendations for network defenders:
  • Evaluate the use of unsupported hardware and software and discontinue their use as soon as possible. If discontinuing is not possible, implement additional network protections to mitigate the risk.[45]
  • Patch the Basic Input/Output System (BIOS) and other firmware to prevent exploitation of known vulnerabilities.
Mitigate Bypass of System Access Controls
Misconfiguration: Bypass of system access controls
Recommendations for network defenders:
  • Limit credential overlap across systems to prevent credential compromise and reduce a malicious actor’s ability to move laterally between systems [M1026],[D3-CH]. Implement a method for monitoring non-standard logon events through host log monitoring [CPG 2.G].
  • Implement an effective and routine patch management process. Mitigate PtH techniques by applying patch KB2871997 to Windows 7 and newer versions to limit default access of accounts in the local administrator group [M1051],[D3-SU].[46]
  • Enable the PtH mitigations to apply User Account Control (UAC) restrictions to local accounts upon network logon [M1052],[D3-UAP].
  • Deny domain users the ability to be in the local administrator group on multiple systems [M1018],[D3-UAP].
  • Limit workstation-to-workstation communications. All workstation communications should occur through a server to prevent lateral movement [M1018],[D3-UAP].
  • Use privileged accounts only on systems requiring those privileges [M1018],[D3-UAP]. Consider using dedicated Privileged Access Workstations for privileged accounts to better isolate and protect them.[37]
Mitigate Weak or Misconfigured MFA Methods
MisconfigurationRecommendations for Network Defenders
Weak or misconfigured MFA methods: Misconfigured smart cards or tokens In Windows environments:Disable the use of New Technology LAN Manager (NTLM) and other legacy authentication protocols that are susceptible to PtH due to their use of password hashes [M1032],[D3-MFA]. For guidance, see Microsoft: Network security Restrict NTLM in this domain – Windows Security | Microsoft Learn and Network security Restrict NTLM Incoming NTLM traffic – Windows Security.[32],[33]Use built-in functionality via Windows Hello for Business or Group Policy Objects (GPOs) to regularly re-randomize password hashes associated with smartcard-required accounts. Ensure that the hashes are changed at least as often as organizational policy requires passwords to be changed [M1027],[D3-CRO]. Prioritize upgrading any environments that cannot utilize this built-in functionality.As a longer-term effort, implement cloud-primary authentication solution using modern open standards. See CISA’s Secure Cloud Business Applications (SCuBA) Hybrid Identity Solutions Architecture for more information.[47] Note: this document is part of CISA’s Secure Cloud Business Applications (SCuBA) project, which provides guidance for FCEB agencies to secure their cloud business application environments and to protect federal information that is created, accessed, shared, and stored in those environments. Although tailored to FCEB agencies, the project’s guidance is applicable to all organizations.[48]
Weak or misconfigured MFA methods: Lack of phishing-resistant MFAEnforce phishing-resistant MFA universally for access to sensitive data and on as many other resources and services as possible [CPG 2.H].[3],[49]
Mitigate Insufficient ACLs on Network Shares and Services
Misconfiguration: Insufficient ACLs on network shares and services
Recommendations for network defenders:
  • Implement secure configurations for all storage devices and network shares that grant access to authorized users only.
  • Apply the principle of least privilege to important information resources to reduce the risk of unauthorized data access and manipulation.
  • Apply restrictive permissions to files and directories, and prevent adversaries from modifying ACLs [M1022],[D3-LFP].
  • Set restrictive permissions on files and folders containing sensitive private keys to prevent unintended access [M1022],[D3-LFP].
  • Enable the Windows Group Policy security setting “Do Not Allow Anonymous Enumeration of Security Account Manager (SAM) Accounts and Shares” to limit users who can enumerate network shares.
Mitigate Poor Credential Hygiene
Misconfiguration: Poor credential hygiene: easily crackable passwords
Recommendations for network defenders:
  • Follow National Institute of Standards and Technology (NIST) guidelines when creating password policies to enforce use of “strong” passwords that cannot be cracked [M1027],[D3-SPP].[29] Consider using password managers to generate and store passwords.
  • Do not reuse local administrator account passwords across systems. Ensure that passwords are “strong” and unique [CPG 2.B],[M1027],[D3-SPP].
  • Use “strong” passphrases for private keys to make cracking resource intensive.
  • Do not store credentials within the registry in Windows systems. Establish an organizational policy that prohibits password storage in files.
  • Ensure adequate password length (ideally 25+ characters) and complexity requirements for Windows service accounts, and implement passwords with periodic expiration on these accounts [CPG 2.B],[M1027],[D3-SPP]. Use Managed Service Accounts, when possible, to manage service account passwords automatically.

Misconfiguration: Poor credential hygiene: cleartext password disclosure
Recommendations for network defenders:
  • Implement a review process for files and systems to look for cleartext account credentials. When credentials are found, remove, change, or encrypt them [D3-FE]. Conduct periodic scans of server machines using automated tools to determine whether sensitive data (e.g., personally identifiable information, protected health information) or credentials are stored.
  • Weigh the risk of storing credentials in password stores and web browsers. If system, software, or web browser credential disclosure is of significant concern, technical controls, policy, and user training may prevent storage of credentials in improper locations.
  • Store hashed passwords using Committee on National Security Systems Policy (CNSSP)-15 and Commercial National Security Algorithm Suite (CNSA) approved algorithms.[50],[51]
  • Consider using group Managed Service Accounts (gMSAs) or third-party software to implement secure password-storage applications.
Mitigate Unrestricted Code Execution
Misconfiguration: Unrestricted code execution
Recommendations for network defenders:
  • Enable system settings that prevent the ability to run applications downloaded from untrusted sources.[52]
  • Use application control tools that restrict program execution by default, also known as allowlisting [D3-EAL]. Ensure that the tools examine digital signatures and other key attributes, rather than just relying on filenames, especially since malware often attempts to masquerade as common Operating System (OS) utilities [M1038]. Explicitly allow certain .exe files to run, while blocking all others by default.
  • Block or prevent the execution of known vulnerable drivers that adversaries may exploit to execute code in kernel mode. Validate driver block rules in audit mode to ensure stability prior to production deployment [D3-OSM].
  • Constrain scripting languages to prevent malicious activities, audit script logs, and restrict scripting languages that are not used in the environment [D3-SEA]. See joint Cybersecurity Information Sheet: Keeping PowerShell: Security Measures to Use and Embrace.[53]
  • Use read-only containers and minimal images, when possible, to prevent the running of commands.
  • Regularly analyze border and host-level protections, including spam-filtering capabilities, to ensure their continued effectiveness in blocking the delivery and execution of malware [D3-MA]. Assess whether HTML Application (HTA) files are used for business purposes in your environment; if HTAs are not used, remap the default program for opening them from mshta.exe to notepad.exe.

Software Manufacturers

NSA and CISA recommend software manufacturers implement the recommendations in Table 11 to reduce the prevalence of misconfigurations identified in this advisory. These mitigations align with tactics provided in the joint guide Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default. NSA and CISA strongly encourage software manufacturers to apply these recommendations to ensure their products are secure “out of the box” and do not require customers to spend additional resources making configuration changes, performing monitoring, and conducting routine updates to keep their systems secure.[1]

Misconfiguration: Default configurations of software and applications
Recommendations for software manufacturers:
  • Embed security controls into product architecture from the start of development and throughout the entire SDLC by following best practices in NIST’s Secure Software Development Framework (SSDF), SP 800-218.[54]
  • Provide software with security features enabled “out of the box,” accompanied by “loosening” guides instead of hardening guides. Loosening guides should explain the business risk of decisions in plain, understandable language.

Misconfiguration: Default configurations of software and applications: default credentials
Recommendations for software manufacturers:
  • Eliminate default passwords: Do not provide software with default passwords that are universally shared. To eliminate default passwords, require administrators to set a “strong” password [CPG 2.B] during installation and configuration.

Misconfiguration: Default configurations of software and applications: default service permissions and configuration settings
Recommendations for software manufacturers:
  • Consider the user experience consequences of security settings: Each new setting increases the cognitive burden on end users and should be assessed in conjunction with the business benefit it derives. Ideally, a setting should not exist; instead, the most secure setting should be integrated into the product by default. When configuration is necessary, the default option should be broadly secure against common threats.

Misconfiguration: Improper separation of user/administrator privilege (excessive account privileges, elevated service account permissions, and non-essential use of elevated accounts)
Recommendations for software manufacturers:
  • Design products so that the compromise of a single security control does not result in compromise of the entire system. For example, ensuring that user privileges are narrowly provisioned by default and ACLs are employed can reduce the impact of a compromised account. Also, software sandboxing techniques can quarantine a vulnerability to limit compromise of an entire application.
  • Automatically generate reports for:
      • Administrators of inactive accounts. Prompt administrators to set a maximum inactive time and automatically suspend accounts that exceed that threshold.
      • Administrators of accounts with administrator privileges, and suggest ways to reduce privilege sprawl.
  • Automatically alert administrators of infrequently used services and provide recommendations for disabling them or implementing ACLs.

Misconfiguration: Insufficient internal network monitoring
Recommendations for software manufacturers:
  • Provide high-quality audit logs to customers at no extra charge. Audit logs are crucial for detecting and escalating potential security incidents. They are also crucial during an investigation of a suspected or confirmed security incident. Consider best practices such as providing easy integration with a security information and event management (SIEM) system with application programming interface (API) access that uses coordinated universal time (UTC), standard time zone formatting, and robust documentation techniques.

Misconfiguration: Lack of network segmentation
Recommendations for software manufacturers:
  • Ensure products are compatible with and tested in segmented network environments.

Misconfiguration: Poor patch management: lack of regular patching
Recommendations for software manufacturers:
  • Take steps to eliminate entire classes of vulnerabilities by embedding security controls into product architecture from the start of development and throughout the SDLC by following best practices in NIST’s SSDF, SP 800-218.[54] Pay special attention to:
      • Following secure coding practices [SSDF PW 5.1]. Use memory-safe programming languages where possible, parameterized queries, and web template languages.
      • Conducting code reviews [SSDF PW 7.2, RV 1.2] against peer coding standards, checking for backdoors, malicious content, and logic flaws.
      • Testing code to identify vulnerabilities and verify compliance with security requirements [SSDF PW 8.2].
  • Ensure that published CVEs include the root cause or common weakness enumeration (CWE) to enable industry-wide analysis of software security design flaws.

Misconfiguration: Poor patch management: use of unsupported OSs and outdated firmware
Recommendations for software manufacturers:
  • Communicate the business risk of using unsupported OSs and firmware in plain, understandable language.

Misconfiguration: Bypass of system access controls
Recommendations for software manufacturers:
  • Provide sufficient detail in audit records to detect bypass of system controls, and provide queries to monitor audit logs for traces of such suspicious activity (e.g., for when an essential step of an authentication or authorization flow is missing).

Misconfiguration: Weak or misconfigured MFA methods: misconfigured smart cards or tokens
Recommendations for software manufacturers:
  • Fully support MFA for all users, making MFA the default rather than an opt-in feature. Utilize threat modeling for authentication assertions and alternate credentials to examine how they could be abused to bypass MFA requirements.

Misconfiguration: Weak or misconfigured MFA methods: lack of phishing-resistant MFA
Recommendations for software manufacturers:
  • Mandate MFA, ideally phishing-resistant, for privileged users and make MFA a default rather than an opt-in feature.[3]

Misconfiguration: Insufficient ACLs on network shares and services
Recommendations for software manufacturers:
  • Enforce the use of ACLs, with default ACLs that allow only the minimum access needed, along with easy-to-use tools to regularly audit and adjust ACLs to the minimum access needed.

Misconfiguration: Poor credential hygiene: easily crackable passwords
Recommendations for software manufacturers:
  • Allow administrators to configure a password policy consistent with NIST’s guidelines; do not require counterproductive restrictions such as enforcing character types or the periodic rotation of passwords.[29]
  • Allow users to use password managers to effortlessly generate and use secure, random passwords within products.

Misconfiguration: Poor credential hygiene: cleartext password disclosure
Recommendations for software manufacturers:
  • Salt and hash passwords using a secure hashing algorithm with high computational cost to make brute force cracking more difficult.

Misconfiguration: Unrestricted code execution
Recommendations for software manufacturers:
  • Support execution controls within operating systems and applications “out of the box” by default, at no extra charge for all customers, to limit malicious actors’ ability to abuse functionality or launch unusual applications without administrator or informed user approval.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, NSA and CISA recommend exercising, testing, and validating your organization’s security program against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. NSA and CISA recommend testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 12–Table 21).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze your detection and prevention technologies’ performance.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA and NSA recommend continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

LEARN FROM HISTORY

The misconfigurations described above are all too common in assessments and the techniques listed are standard ones leveraged by multiple malicious actors, resulting in numerous real network compromises. Learn from the weaknesses of others and implement the mitigations above properly to protect the network, its sensitive information, and critical missions.

WORKS CITED

[1]   Joint Guide: Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default (2023), https://www.cisa.gov/sites/default/files/2023-06/principles_approaches_for_security-by-design-default_508c.pdf
[2]   CISA, Known Exploited Vulnerabilities Catalog, https://www.cisa.gov/known-exploited-vulnerabilities-catalog
[3]   CISA, Implementing Phishing-Resistant MFA, https://www.cisa.gov/sites/default/files/publications/fact-sheet-implementing-phishing-resistant-mfa-508c.pdf
[4]   MITRE, ATT&CK for Enterprise, https://attack.mitre.org/versions/v13/matrices/enterprise/
[5]   MITRE, D3FEND, https://d3fend.mitre.org/
[6]   CISA, Best Practices for MITRE ATT&CK Mapping, https://www.cisa.gov/news-events/news/best-practices-mitre-attckr-mapping
[7]   CISA, Decider Tool, https://github.com/cisagov/Decider/
[8]   CISA, Cyber Assessment Fact Sheet, https://www.cisa.gov/sites/default/files/publications/VM_Assessments_Fact_Sheet_RVA_508C.pdf
[9]   Joint CSA: Weak Security Controls and Practices Routinely Exploited for Initial Access, https://media.defense.gov/2022/May/17/2002998718/-1/-1/0/CSA_WEAK_SECURITY_CONTROLS_PRACTICES_EXPLOITED_FOR_INITIAL_ACCESS.PDF
[10]  Microsoft KB5005413: Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS), https://support.microsoft.com/en-us/topic/kb5005413-mitigating-ntlm-relay-attacks-on-active-directory-certificate-services-ad-cs-3612b773-4043-4aa9-b23d-b87910cd3429
[11]  Raj Chandel, Domain Escalation: PetitPotam NTLM Relay to ADCS Endpoints, https://www.hackingarticles.in/domain-escalation-petitpotam-ntlm-relay-to-adcs-endpoints/
[12]  SpecterOps – Will Schroeder, Certified Pre-Owned, https://posts.specterops.io/certified-pre-owned-d95910965cd2
[13]  CISA, CSA: CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-059a
[14]  Joint CSA: Threat Actors Exploit Progress Telerik Vulnerabilities in Multiple U.S. Government IIS Servers, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-074a
[15]  Joint CSA: Iranian Government-Sponsored APT Actors Compromise Federal Network, Deploy Crypto Miner, Credential Harvester, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-320a
[16]  Joint CSA: Threat Actors Exploiting Multiple CVEs Against Zimbra Collaboration Suite, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-228a
[17]  Microsoft, How to verify that MS17-010 is installed, https://support.microsoft.com/en-us/topic/how-to-verify-that-ms17-010-is-installed-f55d3f13-7a9c-688c-260b-477d0ec9f2c8
[18]  Microsoft, Microsoft Security Bulletin MS08-067 – Critical Vulnerability in Server Service Could Allow Remote Code Execution (958644), https://learn.microsoft.com/en-us/security-updates/SecurityBulletins/2008/ms08-067
[19]  Joint CSA: Impacket and Exfiltration Tool Used to Steal Sensitive Information from Defense Industrial Base Organization, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-277a
[20]  CISA, Malware Analysis Report: 10365227.r1.v1, https://www.cisa.gov/sites/default/files/2023-06/mar-10365227.r1.v1.clear_.pdf
[21]  Joint CSA: #StopRansomware: BianLian Ransomware Group, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-136a
[22]  CISA Analysis Report: FiveHands Ransomware, https://www.cisa.gov/news-events/analysis-reports/ar21-126a
[23]  Snaffler, https://github.com/SnaffCon/Snaffler
[24]  CISA, Cross-Sector Cybersecurity Performance Goals, https://www.cisa.gov/cross-sector-cybersecurity-performance-goals
[25]  Defense Information Systems Agency (DISA), Security Technical Implementation Guides (STIGs), https://public.cyber.mil/stigs/
[26]  NSA, Network Infrastructure Security Guide, https://media.defense.gov/2022/Jun/15/2003018261/-1/-1/0/CTR_NSA_NETWORK_INFRASTRUCTURE_SECURITY_GUIDE_20220615.PDF
[27]  NSA, Actively Manage Systems and Configurations, https://media.defense.gov/2019/Sep/09/2002180326/-1/-1/0/Actively%20Manage%20Systems%20and%20Configurations.docx%20-%20Copy.pdf
[28]  NSA, Cybersecurity Advisories & Guidance, https://www.nsa.gov/cybersecurity-guidance
[29]  National Institute of Standards and Technologies (NIST), NIST SP 800-63B: Digital Identity Guidelines: Authentication and Lifecycle Management, https://csrc.nist.gov/pubs/sp/800/63/b/upd2/final
[30]  Microsoft, Uninstall-AdcsWebEnrollment, https://learn.microsoft.com/en-us/powershell/module/adcsdeployment/uninstall-adcswebenrollment
[31]  Microsoft, KB5021989: Extended Protection for Authentication, https://support.microsoft.com/en-au/topic/kb5021989-extended-protection-for-authentication-1b6ea84d-377b-4677-a0b8-af74efbb243f
[32]  Microsoft, Network security: Restrict NTLM: NTLM authentication in this domain, https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-ntlm-authentication-in-this-domain
[33]  Microsoft, Network security: Restrict NTLM: Incoming NTLM traffic, https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-incoming-ntlm-traffic
[34]  Microsoft, How to disable the Subject Alternative Name for UPN mapping, https://learn.microsoft.com/en-us/troubleshoot/windows-server/windows-security/disable-subject-alternative-name-upn-mapping
[35]  Microsoft, Overview of Server Message Block signing, https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/overview-server-message-block-signing
[36]  Microsoft, SMB signing required by default in Windows Insider, https://aka.ms/SmbSigningRequired
[37]  NSA, Defend Privileges and Accounts, https://media.defense.gov/2019/Sep/09/2002180330/-1/-1/0/Defend%20Privileges%20and%20Accounts%20-%20Copy.pdf
[38]  NSA, Advancing Zero Trust Maturity Throughout the User Pillar, https://media.defense.gov/2023/Mar/14/2003178390/-1/-1/0/CSI_Zero_Trust_User_Pillar_v1.1.PDF
[39]  NSA, Continuously Hunt for Network Intrusions, https://media.defense.gov/2019/Sep/09/2002180360/-1/-1/0/Continuously%20Hunt%20for%20Network%20Intrusions%20-%20Copy.pdf
[40]  Joint CSI: Detect and Prevent Web Shell Malware, https://media.defense.gov/2020/Jun/09/2002313081/-1/-1/0/CSI-DETECT-AND-PREVENT-WEB-SHELL-MALWARE-20200422.PDF
[41]  NSA, Segment Networks and Deploy Application-aware Defenses, https://media.defense.gov/2019/Sep/09/2002180325/-1/-1/0/Segment%20Networks%20and%20Deploy%20Application%20Aware%20Defenses%20-%20Copy.pdf
[42]  Joint CSA: NSA and CISA Recommend Immediate Actions to Reduce Exposure Across all Operational Technologies and Control Systems, https://media.defense.gov/2020/Jul/23/2002462846/-1/-1/0/OT_ADVISORY-DUAL-OFFICIAL-20200722.PDF
[43]  NSA, Stop Malicious Cyber Activity Against Connected Operational Technology, https://media.defense.gov/2021/Apr/29/2002630479/-1/-1/0/CSA_STOP-MCA-AGAINST-OT_UOO13672321.PDF
[44]  NSA, Performing Out-of-Band Network Management, https://media.defense.gov/2020/Sep/17/2002499616/-1/-1/0/PERFORMING_OUT_OF_BAND_NETWORK_MANAGEMENT20200911.PDF
[45]  NSA, Update and Upgrade Software Immediately, https://media.defense.gov/2019/Sep/09/2002180319/-1/-1/0/Update%20and%20Upgrade%20Software%20Immediately.docx%20-%20Copy.pdf
[46]  Microsoft, Microsoft Security Advisory 2871997: Update to Improve Credentials Protection and Management, https://learn.microsoft.com/en-us/security-updates/SecurityAdvisories/2016/2871997
[47]  CISA, Secure Cloud Business Applications Hybrid Identity Solutions Architecture, https://www.cisa.gov/sites/default/files/2023-03/csso-scuba-guidance_document-hybrid_identity_solutions_architecture-2023.03.22-final.pdf
[48]  CISA, Secure Cloud Business Applications (SCuBA) Project, https://www.cisa.gov/resources-tools/services/secure-cloud-business-applications-scuba-project
[49]  NSA, Transition to Multi-factor Authentication, https://media.defense.gov/2019/Sep/09/2002180346/-1/-1/0/Transition%20to%20Multi-factor%20Authentication%20-%20Copy.pdf
[50]  Committee on National Security Systems (CNSS), CNSS Policy 15, https://www.cnss.gov/CNSS/issuances/Policies.cfm
[51]  NSA, NSA Releases Future Quantum-Resistant (QR) Algorithm Requirements for National Security Systems, https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3148990/nsa-releases-future-quantum-resistant-qr-algorithm-requirements-for-national-se/
[52]  NSA, Enforce Signed Software Execution Policies, https://media.defense.gov/2019/Sep/09/2002180334/-1/-1/0/Enforce%20Signed%20Software%20Execution%20Policies%20-%20Copy.pdf
[53]  Joint CSI: Keeping PowerShell: Security Measures to Use and Embrace, https://media.defense.gov/2022/Jun/22/2003021689/-1/-1/0/CSI_KEEPING_POWERSHELL_SECURITY_MEASURES_TO_USE_AND_EMBRACE_20220622.PDF
[54]  NIST, NIST SP 800-218: Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities, https://csrc.nist.gov/publications/detail/sp/800-218/final

Disclaimer of Endorsement

The information and opinions contained in this document are provided “as is” and without any warranties or guarantees. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the United States Government, and this guidance shall not be used for advertising or product endorsement purposes.

Trademarks

Active Directory, Microsoft, and Windows are registered trademarks of Microsoft Corporation.
MITRE ATT&CK is a registered trademark and MITRE D3FEND is a trademark of The MITRE Corporation.
SoftPerfect is a registered trademark of SoftPerfect Proprietary Limited Company.
Telerik is a registered trademark of Progress Software Corporation.
VMware is a registered trademark of VMware, Inc.
Zimbra is a registered trademark of Synacor, Inc.

Purpose

This document was developed in furtherance of the authoring cybersecurity organizations’ missions, including their responsibilities to identify and disseminate threats, and to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all appropriate stakeholders.

Contact

Cybersecurity Report Feedback: CybersecurityReports@nsa.gov
General Cybersecurity Inquiries: Cybersecurity_Requests@nsa.gov 
Defense Industrial Base Inquiries and Cybersecurity Services: DIB_Defense@cyber.nsa.gov
Media Inquiries / Press Desk: 443-634-0721, MediaRelations@nsa.gov 

To report suspicious activity contact CISA’s 24/7 Operations Center at report@cisa.gov or (888) 282-0870. When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact.

Appendix: MITRE ATT&CK Tactics and Techniques

See Table 12–Table 21 for all referenced threat actor tactics and techniques in this advisory.

Technique TitleIDUse
Active Scanning: Vulnerability ScanningT1595.002Malicious actors scan victims for vulnerabilities that be exploited for initial access.
Gather Victim Host InformationT1592Malicious actors gather information on victim client configurations and/or vulnerabilities through vulnerabilities scans and searching the web.
Gather Victim Identity Information: CredentialsT1589.001Malicious actors find default credentials through searching the web.
Phishing for InformationT1598Malicious actors masquerade as IT staff and convince a target user to provide their MFA code over the phone to gain access to email and other organizational resources.
Technique TitleIDUse
External Remote ServicesT1133Malicious actors use default credentials for VPN access to internal networks.
Valid Accounts: Default AccountsT1078.001Malicious actors gain authenticated access to devices by finding default credentials through searching the web.Malicious actors use default credentials for VPN access to internal networks, and default administrative credentials to gain access to web applications and databases.
Exploit Public-Facing ApplicationT1190Malicious actors exploit CVEs in Telerik UI, VM Horizon, Zimbra Collaboration Suite, and other applications for initial access to victim organizations.
PhishingT1566Malicious actors gain initial access to systems by phishing to entice end users to download and execute malicious payloads.
Trust RelationshipT1199Malicious actors gain access to OT networks despite prior assurance that the networks were fully air gapped, with no possible connection to the IT network, by finding special purpose, forgotten, or even accidental network connections.
Technique TitleIDUse
Software Deployment ToolsT1072Malicious actors use default or captured credentials on software deployment tools to execute code and move laterally.
User ExecutionT1204Malicious actors gain initial access to systems by phishing to entice end users to download and execute malicious payloads or to run code on their workstations.
Command and Scripting InterpreterT1059Malicious actors use scripting languages to obscure their actions and bypass allowlisting.
Command and Scripting Interpreter: Visual BasicT1059.005Malicious actors use macros for initial access, persistence, and lateral movement.
Table 15: Persistence

Technique Title | ID | Use
Account Manipulation | T1098 | Malicious actors reset built-in administrative accounts via predictable, forgotten password questions.
Table 16: Privilege Escalation

Technique Title | ID | Use
Valid Accounts | T1078 | Malicious actors analyze topical and nested Active Directory groups to find privileged accounts to target.
Valid Accounts: Domain Accounts | T1078.002 | Malicious actors obtain loaded domain credentials from printers and scanners and use them to move laterally from the network device.
Exploitation for Privilege Escalation | T1068 | Malicious actors load vulnerable drivers and then exploit their known vulnerabilities to execute code in the kernel with the highest level of system privileges to completely compromise the device.
Table 17: Defense Evasion

Technique Title | ID | Use
Obfuscated Files or Information: Command Obfuscation | T1027.010 | Malicious actors often use scripting languages to obscure their actions.
Table 18: Credential Access

Technique Title | ID | Use
Adversary-in-the-Middle | T1557 | Malicious actors force a device to communicate through actor-controlled systems, so they can collect information or perform additional actions.
Adversary-in-the-Middle: LLMNR/NBT-NS Poisoning and SMB Relay | T1557.001 | Malicious actors execute spoofing, poisoning, and relay techniques if Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS), and Server Message Block (SMB) services are enabled in a network.
Brute Force: Password Cracking | T1110.002 | Malicious actors capture user hashes and leverage dictionary wordlists and rulesets to extract cleartext passwords.
Credentials from Password Stores | T1555 | Malicious actors gain access to and crack credentials from PFX stores, enabling elevation of privileges and lateral movement within networks.
Multi-Factor Authentication Interception | T1111 | Malicious actors can obtain password hashes for accounts enabled for MFA with smart cards or tokens and use the hash via PtH techniques.
Multi-Factor Authentication Request Generation | T1621 | Malicious actors use “push bombing” against non-phishing-resistant MFA to induce “MFA fatigue” in victims, gaining access to MFA authentication credentials or bypassing MFA, and accessing the MFA-protected system.
Steal Application Access Token | T1528 | Malicious actors can steal administrator account credentials and the authentication token generated by Active Directory when the account is logged into a compromised host.
Steal or Forge Authentication Certificates | T1649 | Unauthenticated malicious actors coerce an ADCS server to authenticate to an actor-controlled server, and then relay that authentication to the web certificate enrollment application to obtain a trusted illegitimate certificate.
Steal or Forge Kerberos Tickets: Golden Ticket | T1558.001 | Malicious actors who have obtained authentication certificates can use the certificate for Active Directory authentication to obtain a Kerberos TGT.
Steal or Forge Kerberos Tickets: Kerberoasting | T1558.003 | Malicious actors obtain and abuse valid Kerberos TGTs to elevate privileges and laterally move throughout an organization’s network.
Unsecured Credentials: Credentials in Files | T1552.001 | Malicious actors find cleartext credentials that organizations or individual users store in spreadsheets, configuration files, and other documents.
Table 19: Discovery

Technique Title | ID | Use
Account Discovery | T1087 | Malicious actors with valid domain credentials enumerate the AD to discover elevated accounts and where they are used.
File and Directory Discovery | T1083 | Malicious actors use commands, such as net share, open source tools, such as SoftPerfect Network Scanner, or custom malware, such as CovalentStealer, to discover and categorize files. Malicious actors search for text files, spreadsheets, documents, and configuration files in hopes of obtaining desired information, such as cleartext passwords.
Network Share Discovery | T1135 | Malicious actors use commands, such as net share, open source tools, such as SoftPerfect Network Scanner, or custom malware, such as CovalentStealer, to look for shared folders and drives.
Table 20: Lateral Movement

Technique Title | ID | Use
Exploitation of Remote Services | T1210 | Malicious actors can exploit OS and firmware vulnerabilities to gain unauthorized network access, compromise sensitive data, and disrupt operations.
Remote Services: SMB/Windows Admin Shares | T1021.002 | If SMB signing is not enforced, malicious actors can use name resolution poisoning to access remote systems.
Use Alternate Authentication Material: Application Access Token | T1550.001 | Malicious actors with stolen administrator account credentials and AD authentication tokens can use them to operate with elevated permissions throughout the domain.
Use Alternate Authentication Material: Pass the Hash | T1550.002 | Malicious actors collect hashes in a network and authenticate as a user without having access to the user’s cleartext password.
Table 21: Collection

Technique Title | ID | Use
Data from Network Shared Drive | T1039 | Malicious actors find sensitive information on network shares that could facilitate follow-on activity or provide opportunities for extortion.

Source :
https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-278a

What network ports are used by Synology DSM services?

Last updated: Aug 10, 2023

Details

The operations of DSM services require specific ports to be opened to ensure normal functionality. In this article, you can find the network ports and protocols required by DSM services for operations.

Resolution

Setup Utilities

Type | Port Number | Protocol
Synology Assistant | 9999, 9998, 9997 | UDP

Backup

Type | Port Number | Protocol
Active Backup for Business | 5510 (Synology NAS) [1]; 443 (vCenter Server and ESXi host), 902 (ESXi host), 445 (SMB for Hyper-V host), 5985 (HTTP for Hyper-V host), 5986 (HTTPS for Hyper-V host) | TCP
Data Replicator, Data Replicator II, Data Replicator III | 9999, 9998, 9997, 137, 138, 139, 445 | TCP
DSM 5.2 Data Backup, rsync, Shared Folder Sync, Remote Time Backup | 873, 22 (if encrypted over SSH) | TCP
Hyper Backup (destination) | 6281 (remote Synology NAS), 22 (rsync with transfer encryption enabled), 873 (rsync without transfer encryption) | TCP
Hyper Backup Vault | 6281; for DSM 7.0 or above: 5000 (HTTP), 5001 (HTTPS) | TCP
DSM 5.2 Archiving Backup | 6281 | TCP
LUN Backup | 3260 (iSCSI), 873, 22 (if encrypted over SSH) | TCP
Snapshot Replication | 5566 (Advanced LUNs and shared folders), 3261 (Legacy Advanced LUNs) | TCP

Download

Type | Port Number | Protocol
BT | For DSM 2.0.1 or above: 16881; for DSM 2.0.1-3.0401 or below: 6890-6999 | TCP/UDP
eMule | 4662 | TCP
eMule | 4672 | UDP

Web Applications

Type | Port Number | Protocol
DSM | 5000 (HTTP), 5001 (HTTPS) | TCP
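
As a quick aside (not part of the original Synology KB): from a Windows client you can check whether a given DSM port is reachable with PowerShell's built-in Test-NetConnection cmdlet. A minimal sketch, assuming a NAS reachable under the hypothetical hostname your-nas:

# Check whether the DSM HTTPS port (5001) answers on the NAS
Test-NetConnection -ComputerName 'your-nas' -Port 5001

If TcpTestSucceeded comes back False, a firewall or a changed service port is the usual suspect.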

Mail Service

Type | Port Number | Protocol
IMAP | 143 | TCP
IMAP over SSL/TLS | 993 | TCP
POP3 | 110 | TCP
POP3 over SSL/TLS | 995 | TCP
SMTP | 25 | TCP
SMTP-SSL | 465 | TCP
SMTP-TLS | 587 | TCP

File Transferring

Type | Port Number | Protocol
AFP | 548 | TCP
CIFS/SMB | smbd: 139 (netbios-ssn), 445 (microsoft-ds) | TCP/UDP
CIFS/SMB | nmbd: 137, 138 | UDP
FTP, FTP over SSL, FTP over TLS | 21 (command), 20 (data connection in Active Mode), 1025-65535 (data connection in Passive Mode) [2] | TCP
iSCSI | 3260, 3263, 3265 | TCP
NFS | 111, 892, 2049 | TCP/UDP
TFTP | 69 | UDP
WebDAV | 5005, 5006 (HTTPS) | TCP

Packages

Type | Port Number | Protocol
Audio Station | 1900 (UDP), 5000 (HTTP), 5001 (HTTPS), 5353 (Bonjour service), 6001-6010 (AirPlay control/timing) | TCP/UDP
C2 Identity Edge Server | 389 (LDAP), 7712 (HTTP), 8864 | TCP
C2 Identity Edge Server | 53 | UDP
Central Management System | 5000 (HTTP), 5001 (HTTPS) | TCP
CIFS Scale-out Cluster | 49152-49252 | TCP/UDP
CIFS Scale-out Cluster | 17909, 17913, 19998, 24007, 24008, 24009-24045, 38465-38501, 4379 | TCP
Cloud Station | 6690 | TCP
DHCP Server | 53, 67, 68 | TCP/UDP
DNS Server | 53 (named) | TCP/UDP
LDAP Server (formerly Directory Server) | 389 (LDAP), 636 (LDAP with SSL) | TCP
Download Station | 5000 (HTTP), 5001 (HTTPS) | TCP
File Station | 5000 (HTTP), 5001 (HTTPS) | TCP
Hybrid Share | 50051 (catalog), 443 (API), 4222 (NATS) | TCP
iTunes Server | 3689 | TCP
Log Center (syslog server) | 514 (additional port can be added) | TCP/UDP
Logitech® Media Server | 3483, 9002 | TCP
MailPlus Server | 1344, 4190, 5000 (HTTP), 5001 (HTTPS), 5252, 8500-8520, 8893, 9526-9529, 10025, 10465, 10587, 11211, 11332-11334, 12340, 24245, 24246 | TCP
MailPlus web client | 5000 (HTTP), 5001 (HTTPS) | TCP
Mail Station | 80 (HTTP), 443 (HTTPS) | TCP
Media Server | 1900 (UPnP), 50001 (content browsing), 50002 (content streaming) | TCP/UDP
Migration Assistant | 7400-7499 (DRBD), 22 (SSH) [3] | TCP
Note Station | 5000 (HTTP), 5001 (HTTPS) | TCP
Photo Station, Web Station | 80 (HTTP), 443 (HTTPS) | TCP
Presto File Server | 3360, 3361 | TCP/UDP
Proxy Server | 3128 | TCP
RADIUS Server | 1812, 18120 | UDP
SMI-S Provider | 5988 (HTTP), 5989 (HTTPS) | TCP
Surveillance Station | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Calendar | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology CardDAV Server | 8008 (HTTP), 8443 (HTTPS) | TCP
Synology Chat | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Contacts | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Directory Server | 88 (Kerberos), 389 (LDAP), 464 (Kerberos password change) | TCP/UDP
Synology Directory Server | 135 (RPC Endpoint Mapper), 636 (LDAP SSL), 1024 (RPC), 3268 (LDAP GC), 3269 (LDAP GC SSL), 49152 (RPC) [4], 49300-49320 (RPC) | TCP
Synology Drive Server | 80 (link sharing), 443 (link sharing), 5000 (HTTP), 5001 (HTTPS), 6690 (file syncing/backup) | TCP
Synology High Availability (HA) | 123 (NTP), ICMP, 5000 (HTTP), 5001 (HTTPS), 1234, 9997, 9998, 9999 (Synology Assistant), 874, 5405, 5406, 7400-7999 (HA) | TCP/UDP
Synology Moments | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Photos | 5000 (HTTP), 5001 (HTTPS) | TCP
Video Station | 1900 (UDP), 5000 (HTTP), 5001 (HTTPS), 9025-9040, 5002, 5004, 65001 (for using the HDHomeRun network tuner) | TCP/UDP
Virtual Machine Manager | 2379-2382 (cluster network), ICMP, 3260-3265 (iSCSI), 5000 (HTTP), 5001 (HTTPS), 5566 (replication), 16509, 16514, 30200-30300, 5900-5999 (QEMU), 2385 (Redis Server) | TCP
VPN Server (OpenVPN) | 1194 | UDP
VPN Server (PPTP) | 1723 | TCP
VPN Server (L2TP/IPSec) | 500, 1701, 4500 | UDP

Mobile Applications

Type | Port Number | Protocol
DS audio | 5000 (HTTP), 5001 (HTTPS) | TCP
DS cam | 5000 (HTTP), 5001 (HTTPS) | TCP
DS cloud | 6690 | TCP
DS file | 5000 (HTTP), 5001 (HTTPS) | TCP
DS finder | 5000 (HTTP), 5001 (HTTPS) | TCP
DS get | 5000 (HTTP), 5001 (HTTPS) | TCP
DS note | 5000 (HTTP), 5001 (HTTPS) | TCP
DS photo | 80 (HTTP), 443 (HTTPS) | TCP
DS video | 5000 (HTTP), 5001 (HTTPS) | TCP
MailPlus | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Drive | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Moments | 5000 (HTTP), 5001 (HTTPS) | TCP
Synology Photos | 5000 (HTTP), 5001 (HTTPS) | TCP

Peripheral Equipment

Type | Port Number | Protocol
Bonjour | 5353 | UDP
LPR | 515 | UDP
Network Printer (IPP)/CUPS | 631 | TCP
Network MFP | 3240-3259 | TCP
UPS | 3493 | TCP

System

Type | Port Number | Protocol
LDAP | 389, 636 (SLAPD) | TCP
MySQL | 3306 | TCP
NTP | 123 | UDP
Resource Monitor/SNMP | 161 | TCP/UDP
SSH/SFTP | 22 | TCP
Telnet | 23 | TCP
WS-Discovery | 3702 | UDP
WS-Discovery | 5357 (Nginx) | TCP

Notes:

  1. For the backup destination of Synology NAS, Hyper-V, or physical Windows/Linux/macOS devices.
  2. The default range varies according to your Synology product models.
  3. For the SSH service that runs on a customized port, make sure the port is accessible.
  4. Only Synology Directory Server version 4.10.18-0300 requires port 49152.

Source :
https://kb.synology.com/en-global/DSM/tutorial/What_network_ports_are_used_by_Synology_services

Enable Remote Desktop (Windows 10, 11, Windows Server)

Last Updated: June 22, 2023 by Robert Allen

In this guide, you will learn how to enable Remote Desktop on Windows 10, 11, and Windows Server. I’ll also show you how to enable RDP using PowerShell and Group Policy.

Tip: Use a remote desktop connection manager to manage multiple remote desktop connections. You can organize your desktops and servers into groups for easy access.

In the diagram below, my admin workstation is PC1. I’m going to enable RDP on PC2, PC3, and Server1 so that I can remotely connect to them. RDP uses TCP port 3389. You can change the RDP listening port by modifying the registry.
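
Since the paragraph above mentions changing the listening port via the registry, here is a minimal hedged sketch of what that looks like in PowerShell. The port 3390 is just an example value; you must restart the Remote Desktop Services service (and allow the new port through the firewall) for the change to take effect.

# Change the RDP listening port to 3390 (run from an elevated prompt)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'PortNumber' -Value 3390
# Restart Remote Desktop Services so the new port takes effect
Restart-Service -Name TermService -Force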

Enable Remote Desktop on Windows 10

In this example, I’m going to enable remote desktop on PC2 that is running windows 10.

Step 1. Enable Remote Desktop

Right-click the Start menu and select System.

Under Related settings, click on Remote desktop.

Click the slider button to enable remote desktop.

You will get a popup to confirm that you want to enable Remote Desktop. Click Confirm.

Next, click on Advanced settings.

Make sure “Require computers to use Network Level Authentication to connect” is selected.

This setting forces the user to authenticate before a remote desktop session starts. It adds a layer of security and prevents unauthorized remote connections.

Step 2. Select User Accounts

The next step is to ensure only specific accounts can use RDP.

By default, only members of the local administrators group will be allowed to connect using remote desktop.

To add or remove user accounts, click on “Select users that can remotely access this PC”.

To add a user click the Add button and search for the username.

In this example, I’m going to add a user Adam A. Anderson.

Tip. I recommend creating a domain group to allow RDP access. This will make it easier to manage and audit RDP access.
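
As a hedged sketch of that tip: once the domain group exists (the group name CONTOSO\RDP-Users below is hypothetical), you can add it to the local Remote Desktop Users group with PowerShell instead of clicking through the dialog:

# Add an example domain group to the local Remote Desktop Users group
Add-LocalGroupMember -Group 'Remote Desktop Users' -Member 'CONTOSO\RDP-Users'

Members of Remote Desktop Users can connect over RDP without being local administrators.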

That was the last step; remote desktop is now enabled.

Let’s test the connection.

From PC1 I open Remote Desktop Connection and enter PC2.

I am prompted to enter credentials.

Success!

I now have a remote desktop connection to PC2.

In the screenshot below you can see I’m connected via console to PC1 and I have a remote desktop connection open to PC2.

Enable Remote Desktop on Windows 11

In this example, I’ll enable remote desktop on my Windows 11 computer (PC3).

Step 1. Enable Remote Desktop

Click on search.

Enter “remote desktop” and click on “Remote desktop settings”.

Click the slider to enable remote desktop. You will get a popup to confirm.

Click the down arrow and verify “Require devices to use Network Level Authentication to connect” is enabled.

Remote Desktop is now enabled. In the next step, you will select which users are allowed to use remote desktop.

Step 2. Remote Desktop Users

By default, only members of the local administrators group can use remote desktop. To add additional users, follow these steps.

Click on “Remote Desktop users”.

Click on Add, then search for or enter a user to add. In this example, I’ll add the user adam.reed.

Now I’ll test if remote desktop is working.

From my workstation PC1, I’ll create a remote desktop connection to PC3 (Windows 11).

Enter the password to connect.

The connection is good!

You can see in the screenshot below I’m on the console of PC1 and I have a remote desktop connection to PC3 that is running Windows 11.

Enable Remote Desktop on Windows Server

In this example, I’ll enable remote desktop on Windows Server 2022.

Step 1. Enable Remote Desktop.

Right-click the Start menu and select System.

On the settings screen, under Related settings, click on “Remote desktop”.

Click the slider button to enable remote desktop.

You will get a popup to confirm that you want to enable Remote Desktop. Click Confirm.

Click on Advanced settings.

Make sure “Require computers to use Network Level Authentication to connect” is enabled.

Remote desktop is now enabled; the next step is to select users that can remotely access the PC.

Step 2. Select User Accounts

By default, only members of the local administrators group will be allowed to connect using remote desktop.

To add additional users, click on “Select users that can remotely access this PC”.

Next, click Add, then enter or search for users to add. In this example, I’ll add the user robert.allen. Click OK.

Now I’ll test if remote desktop is working on my Windows 2022 server.

From my workstation (PC2) I open the Remote Desktop Connection client, enter srv-vm1, and click Connect. I enter my username and password and click OK.

Awesome, it works!

I’ve established a remote session to my Windows 2022 server from my Windows 10 computer.

PowerShell Enable Remote Desktop

To enable Remote Desktop using PowerShell, use the command below. This will enable RDP on the local computer.

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -name "fDenyTSConnections" -value 0
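
One caveat that is easy to miss (standard Windows behavior, not specific to this guide): flipping fDenyTSConnections does not open the Windows Firewall the way the GUI slider does. A minimal companion step, assuming the built-in firewall rule group is present:

# Allow RDP through Windows Firewall using the built-in rule group
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'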

You can use the below PowerShell command to check if remote desktop is enabled.


if ((Get-ItemProperty "hklm:\System\CurrentControlSet\Control\Terminal Server").fDenyTSConnections -eq 0) { write-host "RDP is Enabled" } else { write-host "RDP is NOT enabled" }

To enable remote desktop remotely, you can use the Invoke-Command cmdlet. This requires PowerShell remoting to be enabled; check out my article on remote PowerShell for more details.

In this example, I’ll enable remote desktop on the remote computer PC2.

Invoke-Command -ComputerName pc2 -ScriptBlock {Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0}

Group Policy Configuration to allow RDP

If you need to enable and manage the remote desktop settings on multiple computers, then you should use Group Policy or Intune.

Follow the steps below to create a new GPO.

Step 1. Create a new GPO

Open the Group Policy Management console and right-click the OU or root domain to create a new GPO.

In this example, I’m going to create a new GPO on my ADPPRO Computers OU; this OU contains all my client computers.

Give the GPO a name.

Edit the GPO and browse to the following policy setting:

Computer Configuration -> Policies -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Connections

Enable the policy setting “Allow users to connect remotely by using Remote Desktop Services”.

That is the only policy setting that needs to be enabled to allow remote desktop.

Step 2. Update Computer GPO

The GPO policies will auto-refresh on remote computers every 90 minutes.

To manually update Group Policy on a computer, run the gpupdate command.
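
For example (a hedged sketch; Invoke-GPUpdate comes from the GroupPolicy RSAT module, and the computer name PC2 is just an example):

# Refresh Group Policy immediately on the local computer
gpupdate /force
# Or trigger a remote refresh from your admin workstation
Invoke-GPUpdate -Computer 'PC2' -Force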

When remote desktop is managed with Group Policy, the settings will be greyed out. This gives you consistent settings across all your computers. It also prevents users or the helpdesk from modifying the settings.

That’s a wrap.

I just showed you several ways to enable remote desktop on Windows computers. If you are using Active Directory with domain-joined computers, then enabling RDP via Group Policy is the best option.

Source :
https://activedirectorypro.com/enable-remote-desktop-windows/

PHD Virtual Backup 6.0

By Vladan SEGET | Last Updated: June 28, 2023

PHD Virtual Backup 6.0 – Backup, Restore, Replication and Instant Recovery. PHD Virtual has released a new version of its backup software for VMware vSphere environments. PHD Virtual Backup 6.0 comes with several completely new features, specific to virtualized environments. In this review I’ll focus on those new features rather than on the installation process, which is fairly simple. This review contains images, most of which can be clicked and enlarged to see all the details from the UI.

Now, first something that I was not aware of. Even though I work as a consultant, I must say I focus most of the time on the technical side of the solution I’m implementing and leave the commercial (licensing) part to vendors or resellers. But with this review I would like to point out that PHD Virtual Backup 6.0 is licensed on a per-host basis. Not per CPU socket like some vendors, and not per site like others. As a result, its price is a fraction of the cost of competitive alternatives.

Introduction of PHD Virtual Backup and Recovery 6.0

PHD Virtual Backup 6.0 comes with quite a few new features that I will try to cover in my review. One of them is Instant Recovery, which enables you to run a VM directly from a backup location and initiate Storage vMotion from within VMware vSphere to move the VM back to your SAN.

But PHD Virtual goes even further by developing a proprietary function to initiate the move of the VM, called PHD Motion. What is it? It’s an alternative for SMB users who do not have the VMware Enterprise or Enterprise Plus licenses, which include Storage vMotion.

PHD Motion does not require VMware’s Storage vMotion in order to work. It leverages multiple streams, intelligent data restore, and direct storage recovery to copy the running state of a VM back to the SAN while the VM still runs in the sandbox at the storage location. Therefore, it is much faster at moving the data back to production than Storage vMotion.

The delta changes to the VM are maintained in another, separate temporary location. So the final switch back to the SAN happens fairly quickly, since only the deltas between the VM running from the backup and the VM located back on the SAN are copied. Only a small planned downtime (about the time of a VM reboot) is necessary.

Installation of the Software

PHD Virtual Backup 6.0

The installation takes about 5 minutes: just deploy the OVF into vCenter and configure the network interface, storage, and that’s it. Pretty cool!

One of the differences from the previous version of PHD Virtual Backup is the Instant Recovery configuration tab, since this feature was just introduced in PHD Virtual Backup 6.0.

The Instant Recovery feature is available for Virtual Full backups only. The full/incremental backup types are not currently supported for instant recovery, so if you select the full/incremental option, you will see that the Instant Recovery option isn’t available. Use the Virtual Full option when configuring your backup jobs to take advantage of Instant Recovery.

PHD Virtual Backup and Replication 6.0 – if you choose the full/incremental backup type, Instant VM Recovery isn't currently supported.

PHD Virtual backup 6.0 – Replication of VMs.

Replication – This feature requires at least one PHD VBA installed and configured with access to both environments, but if you will be using replication in larger environments, you may need additional PHD VBAs. For instance, one PHD VBA deployed at the primary site would be configured to run regular backups of your VMs, while a second PHD VBA could be deployed to the DR site, configured to replicate VMs from the primary site to the secondary location.

The replication of VMs is functionality that is very useful for DR plans. You can also configure replication within the same site and choose a different datastore (and ESXi host) as a destination. This is my case, because I wanted to test this function and my lab doesn’t have two different locations.

The replication job works so that only the first replica is a full copy. PHD VM replication takes data from existing backups and replicates it to a cold standby VM. After the VM is initially created during the first pass, PHD uses its own logic to transfer only the changes since the previous run.

You can see the first and second jobs, when finished, on the image below. The latter took only 51 s.

PHD Virtual Backup 6.0 - Replication Jobs

Testing Failover – After the replica VM is created, you have the option to test each replica to validate your standby environment or to fail over to your replicated VMs. There is a Start Test button to proceed.

PHD Virtual 6.0 - testing failover button

What happens during the test? First, another snapshot of the replica VM is created. This is only to have the ability to get back to the state before the test. See the image below.

PHD Virtual Backup 6.0 - Testing the Replication with the Failover Test Button

This second snapshot is deleted the moment you’re done testing that failover VM (you have verified that the application is working, etc.). The VM is powered off and rolled back to the state it was in prior to testing mode.

So when you click the Stop Test button (the button text changes), the replica status changes back to STANDBY; click the Refresh button once again to refresh the UI.

If you lose your primary site, you can go to the PHD console at the DR site and fail over the VMs which have been replicated there. You can recover your production environment by starting those replicated VMs. And when you run your production (or at least the most critical VMs) from the DR site, because you no longer have a failover site, you should consider starting to back up those VMs in failover mode. It will be helpful when failing back to the primary site, once the damage there gets repaired.

Why would one have to start doing backups as soon as the VMs are in a failover state? Here is a quick quote from the manual:

When ending Failover, any changes made to the replica VM will be lost the next time replication runs. To avoid losing changes, be sure to fail back the replica VM (backup and restore) to a primary site prior to ending Failover mode.

I can only highly recommend reading the manual, where you’ll find all the step-by-step procedures and details. In this review I can’t provide all those step-by-step procedures. The manual is a PDF file of very good quality, with many screenshots and walkthrough guides. In addition, there are some nice FAQs which were certainly created as a result of feedback from customer sites. One of them, for example, is a FAQ for increasing backup storage, with the step-by-step procedure following. Nice.

You can see the possibility to end the failover test with the Stop Test button.

PHD Virtual Backup 6.0 - end failover test.

Seeding – If you have a huge amount of data to replicate to the DR site, you can seed the VM data before configuring the replication process. Seeding is a process where you pre-populate the VMs at the DR site first. This can be done with removable USB drives or a small NAS device. When the seeding is complete, you can start creating the replication jobs to move only the subsequent changes.

In fact, the seeding process is fairly simple. Here is the outline: first, create a full backup of the VMs > copy those backups to a NAS or USB drive for transport > go to the DR site, deploy a PHD VBA, and add the data you brought with you as a replication datastore > create and run a replication job to replicate all the VMs from the NAS (USB) to your DR site > remove the replication datastore and the NAS, and create the replication job where you specify the primary site datastore as the source. Only the small, incremental changes will then be replicated and sent over the WAN.

PHD Virtual Backup 6.0 – File level Recovery

File-level recovery is probably the most-used feature in virtual environments when it comes to console manipulations, I think, since you (or your users) more frequently need a file restored than a whole VM restored after a crash or corruption.

I’ve covered the FLR process in the 5.1 version, which involved creating an iSCSI target and then mounting the volume as an additional disk in Computer Management, but the option was greatly simplified in PHD Virtual Backup 6.0. In fact, when you run the assistant, you now have a choice between creating an iSCSI target and creating a Windows share. I took the Create Windows share option.

All the backup/recovery/replication tasks are done through assistants. The task is composed of just a few steps: first select the recovery point > then create a Windows share (or iSCSI target) > and mount this share to finally be able to copy-paste the files that need to be restored from within that particular VM.

The process is fast and direct. It takes a few clicks to get the files back to the user’s VM. You can see part of the process on the images at left and below.

PHD Virtual Backup and Replication 6.0 - file level restore final shot - you can than easily copy paste the files you need

PHD Virtual Backup 6.0 – Instant VM Recovery and PHD Motion – as said at the beginning of my review, PHD Virtual Backup 6.0 has the ability to run VMs directly from the backup location.

Instant VM Recovery works out of the box without any need to set up the temporary storage location, but if needed, the location for temporary changes can be changed from the defaults. There is usually no need to do so.

You can do it in Configuration > Instant VM Recovery.

There is a choice between an attached virtual disk and the VBA’s backup storage.

PHD Virtual Backup 6.0 - configuration of temporary storage location for Instant VM recovery

Then we can have a look at how the Instant VM Recovery option works. Let’s start by selecting the recovery point we want to use. An XP VM which I backed up earlier will do. Right-click the point in time from which you want to recover (usually the latest), and choose Recover.

PHD Virtual Backup 6.0 - Instant VM Recovery

At the next screen there are many options. I checked Power On VM after recovery and Recover using original storage and network settings from backup. This way the VM is up and running with network connectivity as soon as possible. I also checked the option to Automatically start PHD Motion Seeding, which starts copying the VM back to my SAN.

When the copy finishes I’ll receive a confirmation e-mail. Note that you also have the possibility to schedule this task.

PHD Virtual Backup 6.0 - Instant VM recovery and PHD Motion

On the next image you can see the final screen before you hit the Submit button. You can make changes there if you want.

PHD Virtual Backup 6.0 - Instant VM recovery and PHD Motion

The VM is registered in my vCenter and started from the backup location. One minute later my VM was up, running from the temporary storage created by PHD Virtual Backup 6.0 – the temporary storage I configured earlier when setting up the software.

You can see on the image below which tasks are performed by PHD Virtual backup 6.0 in the background.

PHD Virtual Backup 6.0 - Instant VM Recovery with PHD Motion

So, we have Instant VM Recovery tested and our VM is up and running. Now there are two options, depending on whether you have Storage vMotion licensed or not.

With VMware Storage vMotion – If that’s the case, you can initiate a Storage vMotion from the temporary datastore created by PHD Virtual back to a datastore located on your SAN.

When the migration completes, open the PHD Console and click Instant VM Recovery. In the Current tab, select the VM that you migrated and click End Instant Recovery to remove the VM from the list.

Using PHD Motion – If you don’t have Storage vMotion, you can use PHD Motion. How does it work? Let’s see. You remember that in the assistant launching Instant VM Recovery, we selected the option to start PHD Motion seeding.

This option starts copying the whole VM back to the datastore on the SAN (in my case, the FreeNAS datastore). I checked the option to automatically start PHD Motion seeding when setting up the job, remember?

You can see it in the properties of the VM running in Instant VM Recovery mode. On the image below you can see the temporary datastore (PHDIR-423…….) and the VM’s final destination datastore (the FreeNAS datastore).

PHD Virtual Backup 6.0 - Instant VM Recovery and PHD Motion

This process takes some time. So when you go back to the PHD Virtual console and choose the Instant VM Recovery menu option > Current tab, you’ll see that Complete PHD Motion is grayed out. That’s because the above-mentioned copy hasn’t finished. It really doesn’t matter, since you (or your users) can still work and use the VM.

PHD Virtual Backup 6.0 - Instant VM Recovery and PHD Motion

And you can see on the image below that when the seeding process has finished, the Complete PHD Motion button becomes active. (In fact, the software drops you an e-mail when the seeding process has finished copying.)

PHD Virtual Backup 6.0 - PHD Motion

And then, after a few minutes, the VM disappears from this tab. The process has finished copying the deltas and the VM can be powered back on. It’s definitely a time saver, and when no Storage vMotion licenses are available (in SMBs), this solution can cut the downtime quite impressively. The History tab shows you the details.

PHD Virtual Backup 6.0 - Instant VM recovery with PHD Motion

PHD Virtual Backup 6.0 – The E-mail Reporting Capabilities.

PHD Virtual Backup 6.0 has the ability to report on backup/replication job success (or failure). Its configuration is simpler now than in the previous release, since there is a big Test button to send a test e-mail. I didn’t have any issues after entering the information for my e-mail server, but in case you’re using different ports or you’re behind a firewall, this option is certainly very useful.

PHD Virtual Backup 6.0 - E-mail Reporting Capabilities

In v6, PHD made the email reports WAY more attractive. They have a great job summary and lots of great information in a nicely formatted chart that shows details for each VM and each virtual disk. They even color-code errors and warnings. Very cool.

PHD Virtual Backup 6.0 - E-mail reports

PHD Exporter

PHD Virtual Backup 6.0 also has a few tools bundled within the software suite which can be useful. PHD Exporter is one of them. This application can help when you need to archive VMs with data. Usually you would install this software on a physical Windows server which has a tape library attached. It’s great because you can schedule existing backups to be exported as compressed OVF files. So if you ever had to recover from an archive, you wouldn’t even need PHD to do the recovery.

The tool connects to the location where the backups are stored and, through internal processing, extracts those backup files to be stored temporarily in a location that you configure during setup – it’s called the staging location. Usually it’s local storage. Then the files are sent to tape for archiving purposes.

Through the console you configure export jobs where the VM backups are exported to the staging location.

PHD Exporter - Tool to export backups to Tape for archiving purposes

PHD Virtual Backup 6.0 is Application Aware Backup Solution

PHD Virtual Backup 6.0 can make transactionally consistent backups of MS Exchange with the possibility to truncate the logs. Log truncation is supported for Microsoft Exchange running on Windows Server 2003 64-bit SP2 and later and Windows Server 2008 R2 SP1 and later.

When an application-aware backup is started, PHD Guest Tools initiates the quiesce process and an application-consistent VSS snapshot is created on the VM. The backup process continues and writes the data to the backup store while this snapshot exists on disk. When the backup process completes, post-backup processing options are executed and the VSS snapshot is removed from the guest virtual machine.

PHD Virtual Backup 6.0 provides a small agent called PHD Guest Tools, which is installed inside the VM. This application performs the necessary application-aware functions, including Exchange log truncation. Additionally, you can add your own scripts to perform tasks for other applications. Scripts can be added before and after a snapshot, and after a backup completes. So it looks like they’ve got all the bases covered for when you might want to execute something of your own. I’ve tested with an Exchange 2010 VM and it worked great!

I was nicely surprised by the deduplication performance at the destination datastore. Here is a screenshot from the dashboard where you can see that the dedupe ratio is 33:1, with 1.4 TB of space saved.

PHD Virtual Backup 6.0 - The dashboard

During the few days I had the chance and time to play with the solution in my lab, I did not have to look in the manual often, but if you plan on using the replication feature with several remote sites, I highly recommend reading the manual, which is, as I already told you, of good quality.

PHD Virtual Backup 6.0 provides many features that are useful and deliver real value for VMware admins. Replication and Instant Recovery are features which are becoming a necessity for providing a short RTO.

PHD Virtual Backup 6.0 is an agentless backup solution (except for VMs which need application-aware backups) which doesn’t use physical hardware, but runs as a virtual appliance with 1 CPU and 1 GB of RAM. This backup software solution can certainly have its place in today’s virtualized infrastructures running VMware vSphere.

Please note that this review was sponsored by PHD Virtual.

Source :
https://www.vladan.fr/phd-virtual-backup-6-0/

Qnap QuTS hero h5.1.0 | Release Notes

QuTS hero h5.1.0
2023-05-29

QuTS hero h5.1.0 brings many important new features to further enhance security, improve performance, and boost productivity for your QNAP NAS. You can now log in with more secure verification methods, delegate administrative tasks to general users, and centrally manage NAS devices via AMIZ Cloud. You can also benefit from smarter disk migration, smoother file browsing and search in File Station, more powerful SMB signing and file sharing, more convenient storage pool expansion, and much more. See What’s New to learn about main features and Other Changes to learn about other features, enhancements, and changes.

We also include fixes for reported issues and provide information about known issues. For details, see Fixed and Known Issues. You should also see Important Notes before updating QuTS hero.

What’s New

Storage pool expansion by adding disks to an existing RAID group

Users can now expand a storage pool by adding disks to expand an existing RAID group within the pool. When expanding the RAID group, users can also migrate the RAID group to a different RAID type.

To use this function, go to Storage & Snapshots > Storage > Storage/Snapshots, select a storage pool, click Manage > Storage Pool > Action > Expand Pool to open the Expand Storage Pool Wizard, and then select Add new disk(s) to an existing RAID group.

Support for SMB multichannel

Users can now allow SMB 3.x clients to establish multiple network connections simultaneously to an SMB file share. Multichannel can increase the network performance by aggregating network bandwidth over multiple NICs and mitigating network disruption by increasing network fault tolerance.

To enable SMB multichannel, go to Control Panel > Network & File Services > Win/Mac/NFS/WebDAV > Microsoft Networking, and then select Enable SMB Multichannel.

SMB multichannel is only supported on the following clients using SMB 3.0 or later:

  • Windows 8.1 and later
  • Windows Server 2012 and later
  • macOS Big Sur 11.3.1 and later
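
As a hedged aside (not from the QNAP release notes): on a Windows 8.1 or later client, you can verify that multichannel connections were actually established to the NAS using the built-in SmbShare cmdlets, for example while copying a file to a share:

# List established SMB multichannel connections from this client
Get-SmbMultichannelConnection
# Show which client NICs are multichannel-capable (RSS/RDMA)
Get-SmbClientNetworkInterface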

AES-128-GMAC algorithm support for SMB signing

QuTS hero h5.1.0 now supports the Advanced Encryption Standard (AES) Galois Message Authentication Code (GMAC) cipher suite for SMB signing. SMB signing can use this algorithm to encode and decode using 128-bit keys and can automatically negotiate this method when connecting to a client device that also supports the same algorithm standard.

To enable SMB signing, go to Control Panel > Network & File Services > Win/Mac/NFS/WebDAV > Microsoft Networking > Advanced Settings, and then configure the SMB signing settings. Make sure that you set the highest SMB version to SMB 3.

Delegated Administration for better organization flexibility and productivity

In modern organizations, IT administrators are often overwhelmed by a sheer number of tasks and responsibilities. QuTS hero h5.1.0 now supports Delegated Administration, which allows administrators to delegate various roles to general users, so that they can perform routine tasks, control their data, manage system resources, and monitor device status even when IT administrators are not available. You can choose from a wide range of roles, including System Management, Application Management, Backup Management, Shared Folder Management, and many more. To ensure system security, we recommend only granting permissions that are essential for performing required tasks.

This feature not only helps reduce the workloads of administrators but also greatly enhances productivity and flexibility for your organization. You can also easily view the roles currently assigned to each user and change their roles anytime according to your needs. To configure these settings, go to Control Panel > Privilege > Delegated Administration. To learn more about Delegated Administration, check QuTS hero h5.1.0 User Guide.

2-step verification and passwordless login for enhanced account security

QuTS hero now supports passwordless login, which replaces your password with a more secure verification method. Instead of entering a password, you can scan a QR code or approve a login request with your mobile device to verify your identity. QuTS hero now also supports more verification methods for 2-step verification. In addition to a security code (TOTP), you can also choose to scan a QR code, approve a login request, or enter an online verification code to add an extra layer of security to protect your NAS account.

To configure these settings, go to the NAS desktop, click your username on the taskbar, and then select Login and Security. You can download and install QNAP Authenticator from App Store or Google Play and pair this mobile app with your NAS to secure your NAS account. Note that you cannot use 2-step verification and passwordless login at the same time.

Centralized NAS management with AMIZ Cloud

You can now add the NAS to an organization when setting up the myQNAPcloud service for your NAS. This allows organization administrators to remotely access, manage, and monitor various system resources on the NAS via AMIZ Cloud, a central cloud management platform designed for QNAP devices.

To manage the NAS via AMIZ Cloud, you must enable AMIZ Cloud Agent in myQNAPcloud. This utility communicates with AMIZ Cloud and collects the data of various resources on your device for analytics purposes, without any personally identifiable information.

Automatic disk replacement with Predictive Migration before potential failure

Predictive Migration is a major improvement over the original Predictive S.M.A.R.T. Migration feature in Storage & Snapshots. This upgrade now allows users to specify multiple trigger events that prompt the system to automatically replace a disk before it fails.

Besides S.M.A.R.T. warnings, users can also specify trigger events from other monitoring systems such as Western Digital Device Analytics (WDDA), IronWolf Health Management (IHM), DA Drive Analyzer, and SSD estimated remaining life. When a specified trigger event occurs—for example, a disk’s WDDA status changes to “Warning” or the SSD estimated remaining life reaches 3%—the system automatically replaces the disk and migrates all its data to a spare disk. This process protects your data better and is safer than manually initiating a full RAID rebuild after the disk fails.

To configure Predictive Migration, go to Storage & Snapshots > Global Settings > Disk Health.

Lists of recent files in File Station for easier file browsing

With the new Recent Files feature in File Station, you can now easily locate files that were recently uploaded, opened, or deleted. These three folders are conveniently grouped together under the Recent File folder at the upper left portion of File Station.

File content search in File Station with Qsirch integration

The original search function in File Station could only search for file names of a specific file type. With the integration of Qsirch into File Station, you can now search for file content using keywords, and also search for multiple file types using these keywords at the same time. To use this feature, you need to install Qsirch, an app that can index the files on your device and greatly facilitate your file search.

Other Changes

Control Panel

  • Users can now configure an individual folder to inherit permissions from its parent folder or to remove the inherited permissions anytime. Users can also make a folder extend its permissions to all its subfolders and files. To configure permission inheritance on a folder, go to Control Panel > Privilege > Shared Folders, and then click the Edit Shared Folder Permissions icon under Action.
  • Added additional specification information for memory slots in Control Panel > System Status > Hardware Information.
  • Changed the behavior and the description of certain permission settings as we do not recommend using the default administrator account “admin”.
  • Optimized the process of restoring the LDAP database.
  • The “Network Recycle Bin” feature has been renamed to “Recycle Bin” in Network & File Services.
  • The automatic firmware update settings have been streamlined with the following changes:
    – The selectable options for automatic firmware updates have been greatly simplified. Users now select one of three firmware types to automatically update their system with: quality updates, critical updates, or latest updates.
    – “Security updates” are now “critical updates”. Critical updates include security fixes as well as critical system issue fixes.
    – “Quality updates” now include security fixes and critical issue fixes in addition to bug fixes.
    – “Feature updates” are now “latest updates” and include quality and critical updates in addition to new features, enhancements, and bug fixes.
    – Update notifications no longer need to be enabled separately for each firmware type. Notifications are now either enabled or disabled for all firmware types.
  • The time interval for observing successive failed login attempts can now be configured to be between 0 and 600 minutes. Moreover, a time interval of 0 minutes means that failed login attempts are never reset.
  • You can now include more information from account profiles when importing and exporting user accounts.
  • You can now select the direction to append the custom header for the reverse proxy rule.
  • Users can now edit and enable or disable existing power schedules in Control Panel > System > Power > Power Schedule. Previously, users could only add or remove power schedules.

Desktop & Login

  • You can now log out of your account on all devices, browsers, and applications at once. To use this feature, go to the desktop, click your username on the taskbar, and then go to Login and Security > Password.
  • Added an icon on the top-right corner of the desktop to indicate whether the device has enabled myQNAPcloud and been associated with a QNAP ID or whether the device has joined AMIZ Cloud.
  • Users can now save their QuTS hero login credentials in their web browser. To enhance the security of your QuTS hero user account, we recommend enabling 2-step verification.

App Center

  • Users can now configure a schedule for automatic installations of app updates.

File Station

  • Added prompt banners to remind users to turn on related browsing functions for multimedia files.
  • Enhanced the Background Tasks display UI.
  • Improved File Station performance and enhanced file browsing experience.

Help Center

  • Redesigned the user interface of Help Center for a better user experience.

Initialization

  • You can now purchase licenses during QuTScloud installation.

iSCSI & Fibre Channel

  • Added a new settings page for managing default iSCSI CHAP authentication settings, which you can use for multiple iSCSI targets. You can find these settings in iSCSI & Fibre Channel > Global Settings > Default iSCSI CHAP. When creating or editing a target, you can choose to use the default CHAP settings or configure unique settings for the target.
  • Added the client umask feature to assign default permissions for existing and new files and folders.
  • When creating an iSCSI target, you can now select the network interfaces that an iSCSI target will use for data transmission. Previously, users could only do so after the target was created.

Network & Virtual Switch

  • Network & Virtual Switch can now record event logs when the system identifies conflicting IP addresses between the local device and another device on the same network.
  • Users can now configure the MAC address when creating or modifying a virtual switch.
  • When selecting the system default gateway automatically, you can now configure the checking target by specifying the domain name or IP address.

NFS

  • NFS service now supports both NFSv4 and NFSv4.1 protocols.
  • Users can now set rpcbind to assign fixed ports to RPC services. Make sure that you configure the firewall rules accordingly to allow connections only on the fixed ports.

PHP System Module

  • Updated the built-in PHP version to 8.2.0.

Resource Monitor

  • Resource Monitor now displays the space used by files created from Qsync file versioning.

SAMBA

  • Updated Samba to version 4.15.
  • You can now aggregate up to 50 shared folders on a Windows network.

Storage & Snapshots

  • Added support for disk failure prediction from ULINK’s DA Drive Analyzer. Registered users of DA Drive Analyzer can now also monitor disk failure prediction statuses in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Added support for Seagate dual-actuator disks. These disks appear with a “Seagate DA” tag in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Added support for Western Digital Device Analytics (WDDA) for Western Digital (WD) disks. To view WDDA information, go to Storage & Snapshots > Storage > Disks/VJBOD > Disks, select a WD disk, and click Health > View Details.
  • Improved the “Enable Read Acceleration” feature so that it not only improves the read performance of new files added to a shared folder (starting in QuTS hero h5.0.1), but also improves the read performance of existing files (starting in QuTS hero h5.1.0). This feature can be enabled for shared folders after upgrading from QuTS hero h5.0.0 or earlier to QuTS hero h5.0.1 or later.
  • Increased the maximum number of disks in RAID-TP from 16 to 24.
  • Redesigned the presentation of disk information into tabular format for enhanced user experience, now viewable in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Renamed the function “Replace & Detach” to “Replace” and added the option for users to choose whether to designate the replaced disk as a spare disk or to detach it from the system.
  • You can now select up to 24 disks for a single RAID-TP group.
  • Encrypted LUNs are now supported in VJBOD, SnapSync, Snapshot Replica, and snapshot import/export operations.
  • Improved the user interface on various snapshot-related screens.
  • Users can now change the destination IP address in Snapshot Replica jobs.
  • Added a new window that automatically appears when you insert new disks and helps you decide what to do with them. You can also access this window any time by going to Storage & Snapshots > Storage > Disks/VJBOD > Disks > More > Manage Free Disks.
  • After rebuilding a RAID group with a spare disk, the failed disk’s slot becomes reserved for a spare disk. To free up this slot for other purposes, go to Storage & Snapshots > Storage > Disks/VJBOD > Disks, select the disk slot, and click Action > Free Up Spare Disk Slot.
  • Users can now enable and disable QNAP SSD Antiwear Leveling (QSAL) on an existing SSD storage pool any time. Richer information is also available for QSAL-enabled pools, including replacement priority recommendation and charts showing the remaining capacity and life of the SSDs in the pool. To configure QSAL or view QSAL information, go to Storage & Snapshots > Storage > Storage/Snapshots, click an SSD storage pool, and then click Manage > QSAL.

System

  • You now need to enter a verification code when resetting your password if you forgot your password. This extra step helps enhance your account security.

Important Note

  • In QuTS hero h5.0.1 or earlier, users can no longer create new VJBOD disks from a remote NAS if the remote NAS is running QuTS hero h5.1.0 or later. If there are existing VJBOD disk connections to the remote NAS before it is updated to QuTS hero h5.1.0 or later, these VJBOD disks are unaffected and remain operational after the update. In QuTS hero h5.1.0 or later, users can still create VJBOD disks from a remote NAS running QuTS hero h5.0.1 or earlier.
  • Removed support for CO Video.


Source :
https://www.qnap.com/en/release-notes/quts_hero/overview/h5.1.0

Fixing WSUS – When the Best Defense is a Good Offense

By Johan Arwidmark / April 12, 2018

This week started pretty rough, with a ton of customers reaching out to our team having WSUS issues. Everything from the “traditional” CPU and memory spikes to severe network traffic over port 8530 to the WSUS/SUP server. Basically, clients downloading massive amounts of info, some customers reporting up to 700 MB per endpoint.

Note #1: One ongoing issue right now seems to be that the next Windows version’s updates contain a ton of metadata, causing a massive headache for WSUS admins. See below for scripts to help clean up the mess and to perform needed maintenance tasks. Also, if you are missing some info here, let me know. I’m @jarwidmark on Twitter.

WARNING: Whatever solution you pick for the maintenance of your WSUS/SUP server, ensure that you do not sync your WSUS/SUP during the maintenance process!

WSUS Housekeeping

Until Microsoft replaces WSUS with something better, you have to do some housekeeping for WSUS to behave. Your mileage is going to vary, but you simply have to keep the WSUS database in shape, as well as decline unused updates. Here are a few resources that can help when WSUS goes bad.
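
Before diving into the community scripts below, note that WSUS also ships a built-in cleanup cmdlet. A minimal sketch (this is roughly the Server Cleanup Wizard in cmdlet form, not a replacement for the deeper SQL maintenance covered later):

# Run the built-in WSUS cleanup against the local server (UpdateServices module)
Get-WsusServer | Invoke-WsusServerCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CleanupObsoleteComputers -CompressUpdates -CleanupUnneededContentFiles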

Update:
The network traffic from WSUS can also be heavy due to an outdated Microsoft Compatibility Appraiser version on the machines. See this KB:

Unexpected high network bandwidth consumption when clients scan for updates from local WSUS server
http://support.microsoft.com/en-us/help/4163525/high-bandwidth-use-when-clients-scan-for-updates-from-local-wsus-serve

I have also published a PowerShell script you can run, either via remote PowerShell, or via the “Run Script” feature in ConfigMgr:

Checking the Microsoft Compatibility Appraiser version to prevent unwanted network traffic
https://deploymentresearch.com/666/Checking-the-Microsoft-Compatibility-Appraiser-version-to-prevent-unwanted-network-traffic

Step 1 – Buy you some time

When all 8 CPUs on your site server are constantly at 95-100 percent, there is little room for any admin work or cleanup. So make sure to throttle CPU on the WsusPool application pool to give yourself some working room.

WsusPool application pool.
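
A hedged sketch of that throttle step, assuming the default WsusPool name; the 30 percent limit is just an example value, and the Throttle action requires IIS 8.5 (Server 2012 R2) or later:

# Throttle the WsusPool instead of letting it eat the whole box
Import-Module WebAdministration
Set-ItemProperty 'IIS:\AppPools\WsusPool' -Name cpu.limit -Value 30000      # units are 1/1000th of a percent, so 30000 = 30%
Set-ItemProperty 'IIS:\AppPools\WsusPool' -Name cpu.action -Value Throttle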

Here is a good write-up of the preceding steps.

ConfigMgr Software Update Point: Out-of-Control App Pool
http://www.windowsmanagementexperts.com/configmgr-software-update-point-out-of-control-app-pool/configmgr-software-update-point-out-of-control-app-pool.htm

Step 2 – More application pool settings, and the WSUS web.config file

The next step is to configure everything else in the application pool, together with the web.config file. I was lazy, so I “borrowed” some settings from Sherry’s post below and added them to a PowerShell script: http://github.com/DeploymentResearch/DRFiles/blob/master/Scripts/Invoke-WSUSConfiguration.ps1

The script came from a series of ConfigMgr Configuration Items posted by Sherry Kissinger 

WSUS Administration, WSUSPool, web.config, settings enforcement via Configuration Items
http://www.mnscug.org/blogs/sherry-kissinger/512-wsus-administration-wsuspool-web-config-settings-enforcement-via-configuration-items    

Step 3 – Decline weird stuff

Use any or all of the listed solutions to get rid of junk in your WSUS database:

Tip: Before starting to run decline scripts (PowerShell, SQL, etc.), make sure your SUSDB is not heavily fragmented. Use the Maintenance Solution from Ola Hallengren to optimize the SUSDB indexes: http://ola.hallengren.com/
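
A hedged sketch of kicking off that index optimization when WSUS runs on the Windows Internal Database; the named pipe below is the standard WID instance on Server 2012 and later, and the sketch assumes you deployed Ola's MaintenanceSolution.sql into SUSDB (adjust the server name if SUSDB lives on a full SQL Server):

# Rebuild/reorganize SUSDB indexes with Ola Hallengren's IndexOptimize
sqlcmd -S "np:\\.\pipe\MICROSOFT##WID\tsql\query" -d SUSDB -Q "EXECUTE dbo.IndexOptimize @Databases = 'SUSDB'"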

Optional Speed Tip: If you don’t mind going totally unsupported, you can create additional indexes in the WSUS database that speed up the cleanup dramatically. More info here: http://kickthatcomputer.wordpress.com/2017/08/15/wsus-delete-obsolete-updates, a great post by Scott Williams (@ip1). Again, not supported by Microsoft, so don’t blame me if something happens 🙂 Fun fact: In my environment that change made the deletions go 30 times faster!!!

Here is a copy of the “code”: http://github.com/DeploymentResearch/DRFiles/blob/master/Scripts/Create-WSUS-Index.sql
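
For reference, the two custom indexes in question are commonly published as follows; this sketch assumes WID, so adjust the instance name if you run full SQL Server:

# The two commonly published custom indexes that speed up obsolete update
# deletion - unsupported, run at your own risk (see the warning above).
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance '\\.\pipe\MICROSOFT##WID\tsql\query' -Database 'SUSDB' -Query @"
CREATE NONCLUSTERED INDEX nclLocalizedPropertyID ON dbo.tbLocalizedPropertyForRevision (LocalizedPropertyID);
CREATE NONCLUSTERED INDEX nclSupercededUpdateID ON dbo.tbRevisionSupersedesUpdate (SupersededUpdateID);
"@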

Decline weird stuff #1 – Fully Automate Software Update Maintenance in Configuration Manager

As the title implies, a script that automates software update maintenance, including cleanup, optimization, and more. Written by Bryan Dam (@bdam555).
https://damgoodadmin.com/2017/11/05/fully-automate-software-update-maintenance-in-cm/

Update April 17, 2018: Bryan recently updated the script to support standalone WSUS too; below you find a sample syntax for that:

.\Invoke-DGASoftwareUpdateMaintenance.ps1 -UpdateListOutputFile .\UpdateListOutputFile.csv -StandaloneWSUS WSUS01 -RunCleanUpWizard -DeclineSuperseded -DeclineByTitle @('*Itanium*','*ia64*','*Beta*') -DeclineByPlugins -Force

Decline weird stuff #2 – SQL Cleanup scripts

Some shiny SQL scripts from Paul Salwey (@psalwey).

Especially check out the WSUSSQLMaintenance_4_DeclineUpdates_XML_Lengthover5000.sql one. I had not seen that before.

http://drive.google.com/drive/folders/11dNPRZgqlultZql7rVHZZm3Dom8eKlVJ

Tip on usage (a scripted version of this order follows the list):

  1. Reindex
  2. Obsolete script
  3. Superseded script
  4. XML script
  5. Reindex again
  6. Reboot server
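
If you want to run that order end to end, here is a minimal sketch. The file names are assumptions based on the naming pattern in the shared folder, so adjust them to match what you actually download:

# Run the SQL maintenance scripts in the recommended order against SUSDB.
Import-Module SqlServer
$Instance = '\\.\pipe\MICROSOFT##WID\tsql\query'   # WID default - adjust for a full SQL instance
$Scripts = @(
    '.\WSUSSQLMaintenance_1_Reindex.sql',                             # 1. Reindex
    '.\WSUSSQLMaintenance_2_RemoveObsoleteUpdates.sql',               # 2. Obsolete script
    '.\WSUSSQLMaintenance_3_DeclineSuperseded.sql',                   # 3. Superseded script
    '.\WSUSSQLMaintenance_4_DeclineUpdates_XML_Lengthover5000.sql',   # 4. XML script
    '.\WSUSSQLMaintenance_1_Reindex.sql'                              # 5. Reindex again
)
foreach ($Script in $Scripts) {
    Invoke-Sqlcmd -ServerInstance $Instance -Database 'SUSDB' -InputFile $Script
}
Restart-Computer -WhatIf   # 6. Reboot the server - remove -WhatIf when you mean it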

Tip #1: If you have a lot of obsolete updates (script 2), consider using the alternate version below that runs in batches and also shows the total number of updates. The script is from Scott Williams (see Resource #6 further down this post); I just added a comment on where to change the batch numbers.

http://github.com/DeploymentResearch/DRFiles/blob/master/Scripts/WSUSSQLMaintenance_2_RemoveObsoleteUpdates_BatchVersion.sql

Tip #2: If you just want to quickly see how many obsolete updates you have, use this script:

http://github.com/DeploymentResearch/DRFiles/blob/master/Scripts/Get-WSUSObsoleteUpdatesNumber.sql
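
If you prefer not to download anything, a similar number can be approximated directly, since the built-in cleanup relies on the spGetObsoleteUpdatesToCleanup stored procedure; a sketch, assuming WID:

# Count obsolete updates by counting the rows the cleanup proc returns.
Import-Module SqlServer
$Rows = Invoke-Sqlcmd -ServerInstance '\\.\pipe\MICROSOFT##WID\tsql\query' -Database 'SUSDB' `
    -Query 'EXEC spGetObsoleteUpdatesToCleanup'
Write-Output ("Obsolete updates: {0}" -f @($Rows).Count)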

Tip #3: Benjamin Reynolds (@SqlBenjamin) of Microsoft has put together a combination of the speed-boosting indexes and a more optimized version of the obsolete updates cleanup, and Steve Thompson (@Steve_TSQL) has it all explained and published here: http://stevethompsonmvp.wordpress.com/2018/05/01/enhancing-wsus-database-cleanup-performance-sql-script/

Decline weird stuff #3 – Decline Updates Script by Jeff Carreon

In the same post as the SQL script to view updates with large metadata (in the “Additional Resources” section further down this post), you will find a great decline-updates script by Jeff Carreon (@jeffctangsoo10). It’s kind of hidden if you don’t look carefully, so here is a direct link:

https://www.tcsmug.org/images/carryon/Run-DeclineUpdate-CleanupV5.zip

By default the script runs in “What-If” mode ($TrialRun set to $True). Here is the syntax to run it in declining mode, without sending an email report:

.\Run-DeclineUpdate-CleanupV3.ps1 -Servers CM01 -TrialRun:$false -EmailReport:$false

Decline weird stuff #4 – WSUS Automated Maintenance (Formerly Adamj Clean-WSUS)

I have not personally tested this one, but the community seems to like it quite a bit. A cleanup and DB script from Adam Marshall (@Adamj_1).

http://community.spiceworks.com/scripts/show/2998-wsus-automated-maintenance-formerly-adamj-clean-wsus

Additional Resources

Here are some additional resources that I found useful:

Resource #1 – Script to view updates with large metadata

Here is another contribution from Sherry’s team. This SQL script was put together by Jeff Carreon after working with Microsoft support on a WSUS performance issue. Very shiny.

The script is used to identify and measure the metadata that the clients are downloading; it tells you which articles (fancy word for update metadata) are deployable and the size of each article.

What’s SUP???

http://mnscug.org/blogs/jeff-carreon/513-what-s-sup

Resource #2 – The complete guide to Microsoft WSUS and Configuration Manager SUP maintenance

Info from Microsoft. The title is a bit misleading, since it’s not actually a complete guide. But there is still lots of good info.

http://blogs.technet.microsoft.com/configurationmgr/2016/01/26/the-complete-guide-to-microsoft-wsus-and-configuration-manager-sup-maintenance/

Resource #3 – Clients cannot report Scan Results back to WSUS

During the day, Matthew Krause (@MatthewT_Krause) also provided info on an issue he was having: quite a few clients, 75 percent out of 6,500, were not reporting their scan results back to WSUS. Basically, the server got overloaded with IIS 500 errors as the clients kept trying to report scan results, failing, and then trying again. In the WindowsUpdate.log on the clients, they found an “invalid parameter” error with the sub-message Message:parameters.InstalledNonLeafUpdateIDs (see below).

[Image: WindowsUpdate.log on a client failing to report back scan results.]

So if you are running into the non-leaf error message, one solution that proved to work was increasing the maxInstalledPrerequisites value in the WSUS Web.config file and then doing an IIS reset. This change got 90% of the clients reporting scan results back within one day in that environment.

Change WSUS Web.config from:

<add key="maxInstalledPrerequisites" value="400"/>

to:

<add key="maxInstalledPrerequisites" value="800"/>
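
If you need to roll this change out to more than one box, the edit itself is easy to script. A minimal sketch, assuming the default WSUS installation path; back up the file first, and note that you may need to adjust permissions on it:

# Bump maxInstalledPrerequisites in the WSUS ClientWebService Web.config.
$WebConfig = 'C:\Program Files\Update Services\WebServices\ClientWebService\Web.config'
Copy-Item $WebConfig "$WebConfig.bak"                 # keep a backup
[xml]$Config = Get-Content $WebConfig
$Node = $Config.configuration.appSettings.add | Where-Object { $_.key -eq 'maxInstalledPrerequisites' }
$Node.value = '800'
$Config.Save($WebConfig)
iisreset                                              # restart IIS so the change takes effect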

Resource #4 – Optimizing WSUS with Configuration Manager, via Adaptiva

Good WSUS overview article with a few technical tricks in it. Written by Matt Tinney (@mnt2556) from Windows Management Experts.

https://insights.adaptiva.com/2018/wsus-configuration-manager/

Resource #5 – Unleash WSUS performance, via Pawel Jarosz

Here is another read I found useful.

Simon says – unleash WSUS performance

http://paweljarosz.wordpress.com/2018/03/23/simon-says-unleash-wsus-performance

Resource #6 – WSUS Delete Obsolete Updates, via Scott Williams

Yet another useful resource. Written by Scott Williams (@ip1).

WSUS Delete Obsolete Updates
http://kickthatcomputer.wordpress.com/2017/08/15/wsus-delete-obsolete-updates

That’s all for now,

Happy Deployment / Johan

Source:
https://www.deploymentresearch.com/fixing-wsus-when-the-best-defense-is-a-good-offense/