How to Disable or Remove OneDrive on Windows 11

BY ABHISHEK KUMAR MISHRA
PUBLISHED AUG 12, 2022

Windows 11 comes with OneDrive pre-installed, but you don’t have to live with it if you don’t want to.

In newer versions of Windows, Microsoft added OneDrive to its “out-of-box experience,” meaning you get the app on your system from day one. OneDrive is a great tool for keeping your files stored in the cloud, but some users have their own preferred cloud storage app or want to keep everything on local storage.

If you feel that OneDrive serves no useful purpose to you, it is possible to disable the app. This post will discuss the methods to disable the app on your system. In addition, you will learn multiple ways to completely remove it.

Why Should You Disable OneDrive?

You may not want to keep a cloud backup of sensitive work or personal files. Another issue is that OneDrive launches as soon as you boot your system and keeps running in the background, consuming system and network resources in the process.

If you have a low-spec system or metered network connection, you can relate to this issue. So, it would be a good idea to disable or remove the app from your system in these circumstances.

Does Microsoft OneDrive Impact PC Performance?

OneDrive launches as soon as your system boots up. It tries to sync your documents to the cloud and runs in the background. Systems that have bare minimum compute resources can struggle with performance.

As such, if your PC isn’t the fastest out there, disabling OneDrive or getting rid of it entirely can help squeeze some precious processing power out of your system for other tasks.

How to Disable OneDrive on Windows 11

There are multiple ways to disable OneDrive on your system. You can disable the service from running at startup, disable it via the Group Policy Editor, or you can unlink your account from it.

1. How to Stop OneDrive From Launching at Startup

To disable OneDrive from launching at startup, do as follows:

  1. Go to the system tray icon area and click on the arrow icon. Then click on the OneDrive system tray icon to reveal the settings.
  2. Click on the gear icon and then select Settings from the context menu.
  3. Switch to the Settings tab. Under the General section, uncheck the Start OneDrive automatically when I sign in to Windows option.
  4. Close the window. Now OneDrive won’t start automatically when you boot your PC up.
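If you prefer the command line, you can also remove OneDrive's autostart entry directly. This is a minimal sketch, assuming a per-user OneDrive installation that registers itself under the standard HKCU Run key (the value name may differ on some builds):

REM Check whether OneDrive has an autostart entry for the current user
reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v OneDrive

REM Remove the entry so OneDrive no longer launches at sign-in
reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" /v OneDrive /f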

2. How to Unlink Your OneDrive Account

If you unlink your OneDrive account, the app won’t be able to sync your files anymore. To unlink your account, repeat the following steps.

  1. Open the OneDrive app from the system tray icon. Click on the gear icon and then click on the Settings option.
  2. Find the Unlink this PC option present under the Accounts tab.
  3. Follow the on-screen prompts to unlink your account and close the window.

3. How to Disable OneDrive Using the Group Policy Editor

Windows users who own an Enterprise or Professional copy can use the Group Policy Editor to disable OneDrive. If you’re not on either of those versions, you’ll need to learn how to access the Group Policy Editor on Windows Home before you try these steps.

  1. Press Win + R to launch the Run command box on your system. Input gpedit.msc and press the Enter key.
  2. Group Policy Editor will launch.
  3. Navigate to Computer Configuration > Administrative Templates > Windows Components > OneDrive.
  4. Once you are inside the OneDrive folder, find the Prevent the usage of OneDrive for file storage policy.
  5. Double-click on it to edit the policy. A new window with detailed settings will pop up.
  6. Click on the Enabled radio button and then click the Apply button.
  7. Click on the OK button and exit the Group Policy Editor.
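If you are on a Windows edition without the Group Policy Editor, the same policy can usually be applied through the registry instead. This is a minimal sketch, assuming the policy is backed by the commonly documented DisableFileSyncNGSC value; run it from an elevated command prompt and restart the PC afterwards:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\OneDrive" /v DisableFileSyncNGSC /t REG_DWORD /d 1 /f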

How to Remove OneDrive From Windows 11

If you’d rather just get rid of the app entirely, here are a few methods to remove OneDrive from your computer.

1. How to Uninstall OneDrive Using the Settings App

To remove OneDrive using the Settings app, do as follows:

  1. Press Win + I to launch the Settings app. Then navigate to the left-hand side menu and click on Apps.
  2. Then click on the Installed apps option in the Apps section.
  3. Scroll down and locate the Microsoft OneDrive app in the list.
  4. Click on the three dots and select the Uninstall option.
  5. Confirm your action and click on the Uninstall button again.
  6. Now, follow the on-screen prompts to remove the app from your system.

2. How to Uninstall OneDrive Using the Command Prompt

To remove OneDrive using the command prompt, do as follows:

  1. Press the Win key and search command prompt. Right-click on the first result and select the Run as administrator option.
  2. The command prompt will launch. Now, input the following command in the terminal: TASKKILL /f /im OneDrive.exe
  3. Once the command finishes executing, enter the uninstallation command: %systemroot%\SysWOW64\OneDriveSetup.exe /uninstall
  4. Wait for the execution to complete. CMD will not display any message about the uninstallation command.
  5. Exit the command prompt window. OneDrive won’t bother you anymore.
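Note that OneDriveSetup.exe is not always located in SysWOW64: on 32-bit Windows (and reportedly on installs that ship the newer 64-bit OneDrive client) it sits in System32 instead. A hedged one-liner that runs whichever copy exists:

if exist %systemroot%\SysWOW64\OneDriveSetup.exe (%systemroot%\SysWOW64\OneDriveSetup.exe /uninstall) else (%systemroot%\System32\OneDriveSetup.exe /uninstall)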

3. How to Uninstall OneDrive Using PowerShell

To remove OneDrive using PowerShell, do as follows:

  1. Press the Win key and search for PowerShell. Right-click on the first search result and click on the Run as administrator option.
  2. PowerShell will launch. Now input the following command: winget uninstall onedrive
  3. Press the Enter key to execute the command. You will see a successfully uninstalled message if the command executes without any error.
  4. Now, exit the PowerShell window.
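To confirm that the package is actually gone, you can query winget again. A minimal check, assuming winget is available on your build:

winget list --name OneDrive

If winget reports that no installed package matches the input criteria, the uninstall succeeded.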

4. How to Uninstall OneDrive Using a Batch Script

The above-mentioned methods uninstall the app but do not delete the remaining traces of OneDrive. However, there is a batch script that you can use to uninstall the app and remove all traces of OneDrive from your system.

To remove OneDrive using a batch script, do as follows:

  1. Press the Win key and search for Notepad on your system. Click on the first result to launch the notepad app.
  2. Now, copy the following code into the Notepad window. Make sure to recheck the document for missing lines of code, if any.
    @echo off
    cls

    set x86="%SYSTEMROOT%\System32\OneDriveSetup.exe"
    set x64="%SYSTEMROOT%\SysWOW64\OneDriveSetup.exe"

    echo Closing OneDrive process.
    echo.
    taskkill /f /im OneDrive.exe > NUL 2>&1
    ping 127.0.0.1 -n 5 > NUL 2>&1

    echo Uninstalling OneDrive.
    echo.
    if exist %x64% (
    %x64% /uninstall
    ) else (
    %x86% /uninstall
    )
    ping 127.0.0.1 -n 5 > NUL 2>&1

    echo Removing OneDrive leftovers.
    echo.
    rd "%USERPROFILE%\OneDrive" /Q /S > NUL 2>&1
    rd "C:\OneDriveTemp" /Q /S > NUL 2>&1
    rd "%LOCALAPPDATA%\Microsoft\OneDrive" /Q /S > NUL 2>&1
    rd "%PROGRAMDATA%\Microsoft OneDrive" /Q /S > NUL 2>&1

    echo Removing OneDrive from the Explorer Side Panel.
    echo.
    REG DELETE "HKEY_CLASSES_ROOT\CLSID\{018D5C66-4533-4307-9B53-224DE2ED1FE6}" /f > NUL 2>&1
    REG DELETE "HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{018D5C66-4533-4307-9B53-224DE2ED1FE6}" /f > NUL 2>&1
    pause
  3. Now, go to the File menu and select the Save as option from the drop-down menu.
  4. Choose the save location as desktop so that it is easier to find the file.
  5. Input a name that reflects the use case of the .bat file. For example, OneDriveRemovalTool.bat.
  6. Now, click on the Save as type option and select All files from the drop-down list.
  7. Click on the Save button to save your .bat file.
  8. Go to the desktop and right-click on the newly created .bat file. Select the Run as administrator option.
  9. Let it finish execution and restart your computer. OneDrive won’t bug you anymore.

5. How to Uninstall OneDrive Using a Third-party Uninstaller Program

Uninstalling OneDrive using the Windows uninstaller leaves a lot of files and folders behind. In such cases, you can try a third-party uninstaller tool that will clean up everything related to OneDrive.

We will use the Revo Uninstaller app for this method, but you can use your preferred third-party uninstaller app for Windows if you like.

  1. Open any browser on your system and go to the official Revo uninstaller download page.
  2. Download the free version and install it on your system.
  3. Launch Revo uninstaller. Find OneDrive from the list of installed programs.
  4. Right-click on OneDrive and select the Uninstall option from the context menu.
  5. The uninstall window will pop up. Click on the Continue button to proceed.
  6. After the uninstall completes, select the advanced option to search for files and folders associated with OneDrive.
  7. Follow the on-screen prompts and delete everything. Restart your system for changes to take effect. You won’t find OneDrive anywhere on your system.

OneDrive Won’t Bother You Anymore

These were the steps to disable or remove OneDrive on your Windows machine. If you plan on reusing OneDrive someday, stick with the disabling methods. But if you want to remove it from your system, you can try out any of the uninstallation methods mentioned above.

Source :
https://www.makeuseof.com/windows-11-disable-remove-onedrive/

WinRM QuickConfig, HowTo Enable via GPO or Remotely on All Servers

Marc Wilson UPDATED: February 3, 2023

Configuring “WinRM Quickconfig” on remote computers can be a little difficult at times, especially if this is your first time using the Windows Remote Management service.

There are several ways to go about enabling winrm quickconfig on remote computers: many admins like to push the task out via a GPO, while others prefer to do it through PowerShell or third-party programs.

We’ll highlight three different methods we’ve used in the past to get the Windows Remote Management service enabled, along with how to avoid the dreaded “WINRM QUICKCONFIG ACCESS DENIED” error that many people run into during this process.

3 Ways to Remotely Enable WinRM on Windows Clients/Servers:

  1. Download and Run this Free Utility from Solarwinds to activate it on Remote Machines
  2. Setup new Group Policy Object to enable the WinRM Service and Firewall Rules
  3. Use PSEXEC to Remotely Enable on Client Machines

1. Free Utility for Remote Activation

This is by far the easiest way if you’ve already configured the Windows Firewall in your network. SolarWinds has a nifty free tool called “Remote Execution Enabler for PowerShell” that can enable and configure WinRM on local and remote hosts within your network.

It’s as easy as entering the IP address(es) or IP address range of all the computers you’re targeting, then entering credentials that have administrative rights on those PCs (accounts in the Domain Admins group will usually do the trick), and clicking the “START CONFIGURATION” button.


When the utility has finished starting the Windows Remote Management service on all the IP addresses or ranges specified, you will see a green dot with a “Complete” status.


Grab the little utility from the SolarWinds website and give it a go.


2. Setup via Group Policy Object (GPO)

If the above solution didn’t work for you, then setting up a GPO to do all the configuration is the next best thing, as you can assign it to any given computer or OU as necessary. Below are the instructions for setting up the GPO on Windows Server 2012 R2.

  1. Open Group Policy Management from within Administrative Tools folder.
  2. Right-click on the desired OU that you want to create a Group Policy Object for and click on “Create a GPO in this Domain, and Link it here…
  3. Rename the GPO to whatever you would like, “Enable WinRM via GPO” or something along those lines then click OK.
  4. Now that the new GPO has been created, right-click on the Newly created GPO and click “EDIT“.
  5. Expand the menu tree as follows: Computer Configuration > Policies > Administrative Templates: Policy definitions > Windows Components > Windows Remote Management (WinRM) > WinRM Service.
  6. Find the setting that says “Allow remote server management through WinRM”, then right-click it and click “EDIT” to configure the settings.
  7. When the dialog box opens up, click “Enabled” and under the options section, either specify an IP Address range or put an Asterisk “*” to allow all IP addresses to remotely manage the PC. (We recommend specifying an IP Address to reduce any risk of a security compromise of your systems/network).
  8. Now let’s set the Windows Remote Management (WS-Management) service to start automatically.
    Go to Computer Configuration > Preferences > Control Panel Settings > Services, right-click, select “New”, and then select “Service”.
  9. A New Service Properties window will come up. Change Startup to “Automatic (Delayed Start)”, then click the button with the three dots to the right of the Service name box, select “Windows Remote Management (WS-Management)”, and click the Select button.
  10. Once you’ve selected the Service, under the “Service action:” pull down, we’ll want to click “Start service“.
  11. The last step of this process is to configure the Windows Firewall to allow the proper ports inbound. Go to Computer Configuration > expand Policies > expand Windows Settings > expand Security Settings > expand Windows Firewall with Advanced Security > expand Windows Firewall with Advanced Security > expand Inbound Rules. Right-click the Inbound Rules node and choose New Rule.
  12. When the New Inbound Rule wizard opens, click on the “Predefined” radio button, scroll down to “Windows Remote Management”, and select it.
  13. Next, we’ll click on the left sidebar menu item that says “Predefined Rules” so that the firewall does not open this port to the Public network. When the window opens, uncheck the rule that has the Public profile next to it. This ensures that we only allow WinRM access on the Private and Domain networks. Then click the Next button.
  14. The last screen of the New Inbound Rule wizard will simply ask whether to allow the connection or block it. Make sure that the “Allow the connection” radio button is selected and click Finish.


At this point, you’ve successfully finished the GPO and you’ll need to wait for the GPO to propagate throughout your network.
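Rather than waiting for the next refresh cycle on a test machine, you can force a policy update and then check that the WinRM listener responds. A minimal sketch run from your management workstation (Test-WSMan is a built-in PowerShell cmdlet; SERVER01 is a placeholder for one of your target hosts):

# Force an immediate Group Policy refresh on the local test machine
gpupdate /force
# Query the WinRM endpoint on a remote host; a successful call returns protocol and product version details
Test-WSMan -ComputerName SERVER01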

3. PSEXEC for WinRM Activation

If neither of the two options above works for you, using PsExec to remotely enable the service is another option. Here are the relevant commands you will need in order to execute “winrm quickconfig” using the PsExec command-line utility.

  1. Make sure you have PsExec installed on your machine and the proper “PATH” set up within your system variables (this should be added automatically when you install PsExec). If you don’t have PsExec installed yet, grab the download (it’s part of the PsTools package) and install it.
  2. Launch an elevated “Command Prompt” window using a local/domain administrator account for the target machine(s) to ensure that you have the necessary permissions to configure WinRM remotely on machines in your network. To run “Command Prompt” as a different user, hold the Shift key down, right-click on the Command Prompt link, click on “Run as different user”, and then enter a user account that has administrator privileges on all the computers you’re targeting.
  3. Run one of the following commands, depending on what you want to accomplish. To target a single machine:
    psexec \\ComputerName -s winrm.cmd quickconfig -q
    Or create a text file with all the computer names you want to target, save it on your C:\ drive, and run the command below to enable WinRM on all the PCs listed in that file:
    psexec @c:\ALLComputerNames.txt -s winrm.cmd quickconfig -q
  4. If all goes well, you’ll see output confirming that everything has been configured properly for WinRM on the remote machine(s).

WinRM FAQs

Is WinRM the same as RDP?

WinRM and RDP are two different systems but they can be used for the same purpose, which is to get remote access to a computer and execute commands. WinRM is a text-based system but RDP shows the Desktop of the remote computer.

Is WinRM a security risk?

WinRM uses HTTP by default and that isn’t secure. However, you can configure the system to use HTTPS for connections and that makes WinRM secure.

Does WinRM use TLS?

WinRM will use Transport Layer Security (TLS) if you specify that it should run its connections with HTTPS.

How does WinRM QuickConfig configure the firewall on the remote computer?

WinRM QuickConfig configures the firewall on the remote computer by adding a rule to allow incoming traffic on the WinRM port (default is 5985).
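If you ever need to reproduce that firewall change by hand (for example, on a machine where quickconfig was not run), a roughly equivalent inbound rule can be added from an elevated prompt. A sketch assuming the default HTTP listener port of 5985:

netsh advfirewall firewall add rule name="WinRM HTTP (TCP 5985)" dir=in action=allow protocol=TCP localport=5985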

Is WinRM QuickConfig secure?

By default, WinRM QuickConfig sets up an HTTP listener (port 5985). With domain (Kerberos/NTLM) authentication, the message payload is still encrypted at the protocol level, but for full transport-level security you should configure an HTTPS listener (port 5986) with a certificate.

What are the prerequisites for using WinRM QuickConfig on a remote computer?

The prerequisites for using WinRM QuickConfig on a remote computer include having administrative privileges on both the local and remote computers and having network connectivity between the computers.

How can I verify that WinRM is configured correctly on a remote computer?

To verify that WinRM is configured correctly on a remote computer, you can run the winrm identify command (winrm id) against it to retrieve information about the remote computer and its WinRM configuration. If the command returns protocol and product version information, WinRM is configured correctly.
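For example, either of the following commands, run from another machine, should return protocol and product version details if WinRM is reachable on the remote computer (SERVER01 is a placeholder for your host name):

winrm id -r:SERVER01
Test-WSMan -ComputerName SERVER01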

Related Articles:

What Is WinRM? Windows Remote Management command-line Utility

Source :
https://www.pcwdld.com/winrm-quickconfig-remotely-configure-and-enable/

How to Install Windows 11 on Unsupported Hardware (Without TPM & Secure Boot)

June 8, 2023

To install Windows 11 (or upgrade from Windows 10), your computer must meet certain minimum system requirements: a TPM 2.0 chip (Trusted Platform Module), UEFI with Secure Boot enabled, 3+ GB RAM, at least a 64 GB hard drive, and a compatible 1 GHz dual-core CPU (not all processors are supported!). Microsoft blocks the installation of Windows 11 on non-compliant devices by performing pre-installation hardware requirement checks. In this article, we’ll show how to install Windows 11 on unsupported hardware without checking the CPU, TPM, Secure Boot, and other requirements.

If your computer does not meet the minimum Windows 11 hardware requirements, you will see the following error during the OS installation:

This PC can’t run Windows 11. This PC doesn’t meet the minimum system requirements to install this version of Windows. For more information, visit aka.ms/WindowsSysReq

To understand what requirements your computer doesn’t meet, check the Windows 11 setup errors in the setuperr.log file. This file contains only Windows installation errors (you can find the complete Windows install log in the setupact.log file, but it is hard to debug it since it is very large).

To open the Windows setup error log, open the command prompt right on the setup screen by pressing Shift + F10 and run this command:

notepad x:\windows\panther\setuperr.log


In our case, the error says that there is not enough RAM on the computer (only 2 GB instead of 3 GB):

2022-02-02 08:17:57, Error VerifyRAMRequirements: System has INSUFFICIENT system memory: [2048 MB] vs [3686 MB]

Please note that if your computer doesn’t meet several minimal requirements to install Windows, only the first one will be displayed in the setup log. It means that if you fixed or bypassed one of the compatibility errors, another compatibility error will appear in the installation log the next time you run Windows 11 Setup.

After we have added more RAM to the device, another error appeared:

2022-02-02 08:43:21, Error VerifyTPMSupported:Tbsi_GetDeviceInfo function failed – 0x8028400f[gle=0x0000007a]

This means that the Windows 11 setup wizard detected there was no TPM chip on the computer.

If you are installing Windows 11 on a VMware virtual machine, you can add a virtual TPM chip by following the relevant guide; a separate guide covers Hyper-V VMs.

However, you can continue Windows 11 setup by ignoring one or more of the compatibility requirements.

To do it, use the command prompt on the Windows 11 setup screen:

  1. Run the Registry Editor regedit.exe.
  2. Go to the HKEY_LOCAL_MACHINE\SYSTEM\Setup registry key and create a new subkey named LabConfig;
  3. Create REG_DWORD parameters with a value of 1 for the compatibility checks you want to skip during installation.

The following bypass options are available for installing Windows 11 on unsupported hardware:

  • BypassCPUCheck – for incompatible CPUs
  • BypassTPMCheck – without a TPM 2.0+ chip
  • BypassRAMCheck – to skip minimum RAM check
  • BypassSecureBootCheck – for Legacy BIOS devices (or UEFI firmware with Secure Boot disabled)
  • BypassStorageCheck – to bypass the minimum system drive size check

For example, in order not to check the TPM module during installation, create the BypassTPMCheck registry parameter with the value 1. You can do it using the graphical Registry Editor or with the command:

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1


In the same way, create other registry parameters for the checks you want to skip when installing Windows 11.
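For convenience, here are the equivalent commands for all of the bypass parameters listed above; run them from the Shift + F10 command prompt and add only the ones you actually need:

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassCPUCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassStorageCheck /t REG_DWORD /d 1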

Then get back to Windows 11 setup window, go one step back, and continue with a typical Windows installation without compatibility checks.

You can modify the Windows 11 installation ISO image so that all checks (TPM, Secure Boot, disk size, RAM, CPU) are skipped during OS installation. To do it, create a text file AutoUnattend.xml with the following contents:

<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <component name="Microsoft-Windows-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <RunSynchronous>
        <RunSynchronousCommand wcm:action="add">
          <Order>1</Order>
          <Path>reg add HKLM\System\Setup\LabConfig /v BypassTPMCheck /t reg_dword /d 0x00000001 /f</Path>
        </RunSynchronousCommand>
        <RunSynchronousCommand wcm:action="add">
          <Order>2</Order>
          <Path>reg add HKLM\System\Setup\LabConfig /v BypassSecureBootCheck /t reg_dword /d 0x00000001 /f</Path>
        </RunSynchronousCommand>
        <RunSynchronousCommand wcm:action="add">
          <Order>3</Order>
          <Path>reg add HKLM\System\Setup\LabConfig /v BypassRAMCheck /t reg_dword /d 0x00000001 /f</Path>
        </RunSynchronousCommand>
        <RunSynchronousCommand wcm:action="add">
          <Order>4</Order>
          <Path>reg add HKLM\System\Setup\LabConfig /v BypassStorageCheck /t reg_dword /d 0x00000001 /f</Path>
        </RunSynchronousCommand>
        <RunSynchronousCommand wcm:action="add">
          <Order>5</Order>
          <Path>reg add HKLM\System\Setup\LabConfig /v BypassCPUCheck /t reg_dword /d 0x00000001 /f</Path>
        </RunSynchronousCommand>
      </RunSynchronous>
      <UserData>
        <ProductKey>
          <Key></Key>
        </ProductKey>
      </UserData>
    </component>
  </settings>
</unattend>


If you want to disable the Microsoft online account creation screen, add the following component section to the file:

<component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <OOBE>
    <HideOnlineAccountScreens>true</HideOnlineAccountScreens>
    <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE>
    <ProtectYourPC>3</ProtectYourPC>
  </OOBE>
</component>

Copy this file to the root of your Windows 11 USB installation media.

If you want to add an answer file to Windows 11 ISO image, extract its contents to any folder on your computer, copy AutoUnattend.xml to the same directory, and rebuild the ISO image.

I used free DISM++ (Toolkit -> ISO maker) to create the custom Windows 11 ISO image.


Then no hardware compatibility checks will be performed during Windows 11 setup.

Also, you can use a recent Rufus version to create installation USB flash drives. It contains a special option, Extended Windows 11 Installation (no TPM/no Secure Boot/8Gb- RAM), which allows you to create media that installs Windows 11 without checking TPM and Secure Boot.


Source :
https://woshub.com/windows-11-unsupported-hardware-no-tpm-secure-boot/

Attacks on 5G Infrastructure From Users’ Devices

By: Salim S.I.
September 20, 2023
Read time: 8 min (2105 words)

Crafted packets from cellular devices such as mobile phones can exploit faulty state machines in the 5G core to attack cellular infrastructure. Smart devices that critical industries such as defense, utilities, and the medical sectors use for their daily operations depend on the speed, efficiency, and productivity brought by 5G. This entry describes CVE-2021-45462 as a potential use case for deploying a denial-of-service (DoS) attack against private 5G networks.

5G unlocks unprecedented applications previously unreachable with conventional wireless connectivity to help enterprises accelerate digital transformation, reduce operational costs, and maximize productivity for the best return on investments. To achieve its goals, 5G relies on key service categories: massive machine-type communications (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (uRLLC).

With the growing spectrum for commercial use, usage and popularization of private 5G networks are on the rise. The manufacturing, defense, ports, energy, logistics, and mining industries are just some of the earliest adopters of these private networks, especially for companies rapidly leaning on the internet of things (IoT) for digitizing production systems and supply chains. Unlike public grids, the cellular infrastructure equipment in private 5G might be owned and operated by the user-enterprise themselves, system integrators, or by carriers. However, given the growing study and exploration of the use of 5G for the development of various technologies, cybercriminals are also looking into exploiting the threats and risks that can be used to intrude into the systems and networks of both users and organizations via this new communication standard. This entry explores how normal user devices can be abused in relation to 5G’s network infrastructure and use cases.

5G topology

In an end-to-end 5G cellular system, user equipment (aka UE, such as mobile phones and internet-of-things [IoT] devices) connects to a base station via radio waves. The base station is connected to the 5G core through a wired IP network.

Functionally, the 5G core can be split into two: the control plane and the user plane. In the network, the control plane carries the signals and facilitates the traffic based on how it is exchanged from one endpoint to another. Meanwhile, the user plane functions to connect and process the user data that comes over the radio area network (RAN).

The base station sends control signals related to device attachment and establishes the connection to the control plane via NGAP (Next-Generation Application Protocol). The user traffic from devices is sent to the user plane using GTP-U (GPRS tunneling protocol user plane). From the user plane, the data traffic is routed to the external network. 

Figure 1. The basic 5G network infrastructure

The UE subnet and infrastructure network are separate and isolated from each other; user equipment is not allowed to access infrastructure components. This isolation helps protect the 5G core from CT (Cellular Technology) protocol attacks generated from users’ equipment.

Is there a way to get past this isolation and attack the 5G core? The next sections elaborate on how cybercriminals could abuse components of the 5G infrastructure, particularly the GTP-U.

GTP-U

GTP-U is a tunneling protocol that exists between the base station and 5G user plane using port 2152. The following is the structure of a user data packet encapsulated in GTP-U.

Figure 2. GTP-U data packet

A GTP-U tunnel packet is created by attaching a header to the original data packet. The added header consists of a UDP (User Datagram Protocol) transport header plus a GTP-U specific header. The GTP-U header consists of the following fields:

  • Flags: This contains the version and other information (such as an indication of whether optional header fields are present, among others).
  • Message type: For GTP-U packet carrying user data, the message type is 0xFF.
  • Length: This is the length in bytes of everything that comes after the Tunnel Endpoint Identifier (TEID) field.
  • TEID: Unique value for a tunnel that maps the tunnel to user devices

The GTP-U header is added by the GTP-U nodes (the base station and User Plane Function or UPF). However, the user cannot see the header on the user interface of the device. Therefore, user devices cannot manipulate the header fields.

Although GTP-U is a standard tunneling technique, its use is mostly restricted to CT environments between the base station and the UPF or between UPFs. Assuming the best scenario, the backhaul between the base station and the UPF is encrypted, protected by a firewall, and closed to outside access. Here is a breakdown of the ideal scenario: GSMA recommends IP security (IPsec) between the base station and the UPF. In such a scenario, packets going to the GTP-U nodes come from authorized devices only. If these devices follow specifications and implement them well, none of them will send anomalous packets. Besides, robust systems are expected to have strong sanity checks to handle received anomalies, especially obvious ones such as invalid lengths, types, and extensions, among others.

In reality, however, the scenario is often different and requires a different analysis altogether. Operators are reluctant to deploy IPsec on the N3 interface because it is CPU-intensive and reduces the throughput of user traffic. Also, since the user data is perceived to be protected at the application layer (with additional protocols such as TLS, or Transport Layer Security), some consider IP security redundant. One might think that as long as the base station and packet core conform to the specification, there will be no anomalies; one might also assume that all robust systems have sanity checks that catch obvious anomalies. However, previous studies have shown that many N3 nodes (such as the UPF) around the world are exposed to the internet, even though they should not be. This is illustrated in Figure 3 below.


Figure 3. Exposed UPF interfaces due to misconfigurations or lack of firewalls; screenshot taken from Shodan and used in a previously published research

We discuss two concepts that can exploit the GTP-U using CVE-2021-45462. In Open5GS, a C-language open-source implementation of the 5G core and Evolved Packet Core (EPC), sending a zero-length, type=255 GTP-U packet from the user device results in a denial of service (DoS) of the UPF. This is CVE-2021-45462, a security gap in the packet core that allows the UPF (in 5G) or the Serving Gateway User Plane Function (SGW-U in 4G/LTE) to be crashed with an anomalous GTP-U packet crafted on the UE and sent inside the legitimate GTP-U tunnel. Given that the exploit affects a critical component of the infrastructure and cannot be resolved easily, the vulnerability has received a Medium to High severity rating.

GTP-U nodes: Base station and UPF

GTP-U nodes are endpoints that encapsulate and decapsulate GTP-U packets. The base station is the GTP-U node on the user device side. As the base station receives user data from the UE, it converts the data to IP packets and encapsulates it in the GTP-U tunnel.

The UPF is the GTP-U node on the 5G core (5GC) side. When it receives a GTP-U packet from the base station, the UPF decapsulates the outer GTP-U header and takes out the inner packet. The UPF looks up the destination IP address in a routing table (also maintained by the UPF) without checking the content of the inner packet, after which the packet is sent on its way.

GTP-U in GTP-U

What if a user device crafts an anomalous GTP-U packet and sends it to a packet core?

Figure 4. A specially crafted anomalous GTP-U packet
Figure 5. Sending an anomalous GTP-U packet from the user device

As intended, the base station will tunnel this packet inside its own GTP-U tunnel and send it to the UPF. This results in a GTP-U-in-GTP-U packet arriving at the UPF. There are now two GTP-U headers in the packet received by the UPF: the outer GTP-U header is created by the base station to encapsulate the data packet from the user device. This outer GTP-U packet has 0xFF as its message type and a length of 44; this header is normal. The inner GTP-U header is crafted and sent by the user device as a data packet. Like the outer one, this inner GTP-U header has 0xFF as its message type, but its length of 0 is not normal.

The source IP address of the inner packet belongs to the user device, while the source IP address of the outer packet belongs to the base station. Both inner and outer packets have the same destination IP address: that of the UPF.

The UPF decapsulates the outer GTP-U and passes the functional checks. The inner GTP-U packet’s destination is again the same UPF. What happens next is implementation-specific:

  • Some implementations maintain a state machine for packet traversal. Improper implementation of the state machine might result in processing this inner GTP-U packet. This packet might have passed the checks phase already since it shares the same packet-context with the outer packet. This leads to having an anomalous packet inside the system, past sanity checks.
  • Since the inner packet’s destination is the IP address of UPF itself, the packet might get sent to the UPF. In this case, the packet is likely to hit the functional checks and therefore becomes less problematic than the previous case.

Attack vector

Some 5G core vendors leverage Open5GS code. For example, NextEPC (a 4G system, rebranded as Open5GS in 2019 to add 5G, with remaining products under the old brand) has an enterprise offer for LTE/5G that draws from Open5GS code. No attacks or indications of threats in the wild have been observed, but our tests indicate potential risks in the identified scenarios.

The importance of the attack lies in its attack vector: attacking the cellular infrastructure from the UE. The exploit only requires a mobile phone (or a computer connected via a cellular dongle) and a few lines of Python code to abuse this opening and mount this class of attack. GTP-U-in-GTP-U is a well-known technique, and backhaul IP security and encryption do not prevent this attack. In fact, these security measures might hinder the firewall from inspecting the content.

Remediation and insights

Critical industries such as the medical and utility sectors are among the early adopters of private 5G systems, and the breadth and depth of their use are only expected to grow. Reliability for continuous, uninterrupted operations is critical for these industries, as lives and real-world consequences are at stake. This foundational role is the reason these sectors choose a private 5G system over Wi-Fi. It is imperative that private 5G systems offer unfailing connectivity, as a successful attack on any 5G infrastructure could bring the entire network down.

As discussed in this entry, abuse of CVE-2021-45462 can result in a DoS attack. The root cause of CVE-2021-45462 (and of most GTP-U-in-GTP-U attacks) is improper error checking and error handling in the packet core. While GTP-U-in-GTP-U itself is harmless, the proper fix for the gap has to come from the packet-core vendor, and infrastructure admins must use the latest versions of the software.

A GTP-U-in-GTP-U attack can also be used to leak sensitive information such as the IP addresses of infrastructure nodes. GTP-U peers should therefore be prepared to handle GTP-U-in-GTP-U packets. In CT environments, they should use an intrusion prevention system (IPS) or firewalls that can understand CT protocols. Since GTP-U is not normal user traffic, especially in private 5G, security teams can prioritize and drop GTP-U-in-GTP-U traffic.

As a general rule, the registration and use of SIM cards must be strictly regulated and managed. An attacker with a stolen SIM card could insert it into their own device to connect to a network for malicious deployments. Moreover, the responsibility for security might be ambiguous in a shared operating model, where the end devices and the edge of the infrastructure chain are owned by the enterprise while the cellular infrastructure is owned by the integrator or carrier. This makes it hard for security operations centers (SOCs) to bring relevant information together from different domains and solutions.

In addition, due to the downtime and tests required, regularly updating critical infrastructure software to keep up with vendor patches is not easy, nor will it ever be. Virtual patching with IPS or layered firewalls is thus strongly recommended. Fortunately, GTP-in-GTP is rarely used in real-world applications, so it might be safe to completely block all GTP-in-GTP traffic. We recommend using layered security solutions that combine IT and communications technology (CT) security and visibility. Implementing zero-trust solutions, such as Trend Micro™ Mobile Network Security, powered by CTOne, adds another security layer for enterprises and critical industries to prevent the unauthorized use of their respective private networks and to keep the industrial ecosystem continuous and undisrupted, for example by ensuring that a SIM can be used only from an authorized device. Mobile Network Security also brings CT and IT security into a unified visibility and management console.

Source :
https://www.trendmicro.com/it_it/research/23/i/attacks-on-5g-infrastructure-from-users-devices.html

Electric Power System Cybersecurity Vulnerabilities

By: Mayumi Nishimura
October 06, 2023
Read time: 4 min (1096 words)

Digitalization has changed the business environment of the electric power industry, exposing it to various threats. This webinar will help you uncover previously unnoticed threats and develop countermeasures and solutions.

The electric utility industry is constantly exposed to various threats, from physical threats to sophisticated nation-state cyberattacks, and it has long focused on security measures. In the last few years, however, power systems have changed. As OT becomes more networked and connected to IT, the number of interfaces between IT and OT increases, and various cyber threats that had not surfaced until now have emerged.

Trend Micro held a webinar to discuss these changes, the strategies for protecting your company’s assets from the latest cyber threats, and the challenges and solutions involved in implementing those strategies.

This blog will provide highlights from the webinar and share common challenges in the power industry that emerged from the survey. We hope that this will be helpful to cybersecurity directors in the Electric utility industry who recognize the need for consistent security measures for IT and OT but are faced with challenges in implementing them.

Click here to watch the On-demand Webinar

Vulnerabilities in Electric Power Systems

In the webinar, we introduced examples of threats from “Critical Infrastructures Exposed and at Risk: Energy and Water Industries” conducted by Trend Micro. The main objective of this study was to demonstrate how easy it is to discover and exploit OT assets in the water and energy sector using basic open-source intelligence (OSINT) techniques. As a result of the investigation, it was possible to access the HMI remotely, view the database containing customer data, and control the start and stop of the turbine.

Figure 1. Interoperability and Connected Resources

Cyberattacks Against Electric Power

We then explained current cyberattacks on electric power companies. Figure 2 shows the attack surface (attacks x digital assets), the attack flow, and ultimately the possible damage in the IT and OT of energy systems.

An example of an attack surface in an IT network is an office PC, which attackers can reach by exploiting VPN vulnerabilities. If attackers infiltrate the monitoring system through the VPN, they can seize privileges and gain unauthorized access to OT assets such as the HMI. There is also the possibility of ransomware being installed.

A typical attack surface in an OT network is a maintenance PC. If this terminal is infected with a virus and a maintenance engineer connects it to the OT network, the virus may spread into the OT network and cause problems such as stopping the operation of OT equipment.

To protect these interconnected systems, it is necessary to review cybersecurity strategies across IT, OT, and different technology domains.

Figure 2. Attack Surface Includes Both IT and OT

Solutions

We have organized the issues from People, Process, and Technology perspectives when reviewing security strategies across different technology domains such as IT and OT.

One example of a people-related issue is labor shortages and skills gaps. The reason for the lack of skills is that IT security personnel are not familiar with the operations side, and vice versa. In the webinar, we introduced three approaches to solving these people-related problems.

The first is improving employee security awareness and training: everyone, from management to employees, must recognize the need for security and work together. The second is job rotation and workshops for mutual understanding, so that the IT and OT departments understand each other’s work. The third is documentation and automation of incident response. Be careful not to make automation the goal in itself: first identify unnecessary tasks and reduce the workload, and only then automate the tasks that remain. We also provide examples of solutions for the Process and Technology issues in the webinar.

Figure 3. Need Consistent Cybersecurity across IT and OT

Finally, we introduced the Unified Kill Chain as an effective approach. It extends and combines existing models such as Lockheed Martin’s Cyber Kill Chain® and MITRE’s ATT&CK™ to show an attacker’s steps from the initiation to the completion of a cyberattack. The attacker cannot reach their goal unless all of these steps are completed successfully, so the defender needs to break this chain at some point; this serves as a reference for the defender’s strategy. Even when attacks cross IT and OT, this approach can be used as a reference to evaluate the expected attacks and the current security posture, and to take appropriate security measures in response.

Figure 4. Unified Kill Chain

The Webinar’s notes

To understand the situation and thoughts of security leaders in the Electric utility industry, we have included some of the survey results regarding this webinar. The webinar, held on June 29th, was attended in real time by nearly 100 people working in the energy sector and engaged in cybersecurity-related work.

When asked what information they found most helpful, the majority of survey respondents selected “consistent cybersecurity issues and solutions across IT and OT.” I am glad that we were able to help those who feel there are challenges in implementing countermeasures.

Chart 1. Question: What was the most useful information for you from today’s seminar? (N=28)

Also, over 90% of respondents answered “Agree” when asked if they needed consistent cybersecurity across IT and OT. Among those who chose “Agree”, 39% answered that they have already started some kind of action, underscoring how important consistent cybersecurity across IT and OT is considered to be.

Chart 2. Do you agree with “Need Consistent Cybersecurity across IT and OT”? (N=28)

Lastly, I would like to share the results of a question asked during webinar registration about what issues people in this industry see in OT security. Number one was siloed risk and threat visibility, and number two was legacy system support. Tied for third place were a lack of preparation for attacks that cross different networks and a lack of staff and skills. There is a strong sense of challenges around the visibility of risks and threats, organizational efforts, and technical countermeasures.

Chart 3. Please select all of the challenges you face in thinking about OT cybersecurity. (N=55, multiple choice)

Resources

The above is a small excerpt from the webinar. We recommend watching the full webinar video below if you are interested in the power industry’s future cybersecurity strategy.

Click here to watch the On-demand Webinar

In addition, Trend Micro reports introduced in the webinar and other reference links related to the energy industry can be accessed below.

Tech Paper
ICS/OT Security for the Electric Utility
Critical Infrastructures Exposed and at Risk: Energy and Water Industries

VIDEO
Electric utilities need to know cross domain attack

Blog
Electricity/Energy Cybersecurity: Trends & Survey Response

Source :
https://www.trendmicro.com/it_it/research/23/j/electric-power-system-cybersecurity-vulnerabilities.html

Fixing ‘The Network Path Was Not Found’ 0x80070035 Error Code on Windows

August 31, 2023

In some cases, you may receive the error ‘Windows cannot access sharename. The network path was not found. Error code: 0x80070035‘ when you try to open a shared network folder on a Windows computer, Samba share, or NAS device. In this article, we’ll look at how to fix this shared folder error on Windows 10 and 11.


Network Error
Windows cannot access \\sharedNAS
Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose.
Error code: 0x80070035. The network path was not found.


Another error occurs if you try to map such a shared folder as a network drive using Group Policy or the net use command:

System error 53 has occurred. The network path was not found.


At the same time, you can easily open this shared folder from other computers (running older versions of Windows 10, 8.1 or 7), smartphones, and other devices.

Disable Legacy SMB Versions of File Shares

In most cases, the ‘0x80070035: The network path not found‘ error indicates that the target shared folder on the remote computer only supports SMBv1 connections or SMBv2 guest access. These are legacy and insecure versions of the Server Message Block (SMB, CIFS) file-sharing protocol. Enabling these protocols on your client will probably solve the problem, but it will reduce the security of your Windows device. So reconfiguring the remote file server device to support at least SMBv2 with authentication, or ideally SMBv3, is the first thing to try. This is the most correct and secure method.

Change your file server’s SMB configuration:

  • NAS device – disable SMBv1, enable authenticated SMBv2 access (depending on NAS vendor);
  • Samba server on Linux – disable guest access in the smb.conf file under the [global] section:
    map to guest = never
    restrict anonymous = 2
    Specify the minimum supported SMB version:
    server min protocol = SMB2_10
    client max protocol = SMB3
    client min protocol = SMB2_10
    encrypt passwords = true
    Disable anonymous access in the configuration of each shared folder:
    guest ok = no
  • On the Windows file server, disable the SMBv1 and SMBv2 protocols (described in a separate section of the article). Enable the Turn on password protected sharing  option (navigate to Control Panel -> All Control Panel Items -> Network and Sharing Center -> Advanced sharing settings -> All networks, or run the command control.exe /name Microsoft.NetworkAndSharingCenter /page Advanced ).

Check the Windows SMB Client Settings

 Perform the following simple checks on your Windows client. These steps can help you resolve the “Network Path Not Found” error without compromising the security of your computer:

  • Check that you have entered the correct file server name. Try opening the network folder not by name (\\FS01\Public), but by IP address (\\192.168.3.111\Public);
  • In the properties of the shared network folder (both at the NTFS file system permissions and the shared folder level), check that your user has permission to read the contents of the folder;
  • Reset the DNS cache on both computers:
    ipconfig /flushdns
  • If you simultaneously have two active network interfaces on your device (Wi-Fi and Ethernet), try temporarily disabling one of them and check access to your local network resources;
  • Check that the following services are running on your computer (open the services.msc console). Start these services and change their startup type to Automatic (Delayed Start) (see the sketch after this list):
    Function Discovery Provider Host – fdPHost
    Function Discovery Resource Publication – FDResPub
    SSDP Discovery – SSDPSRV
    UPnP Device Host – upnphost
    DNS Client – Dnscache
  • Try temporarily disabling your anti-virus and/or firewall application and see if the problem persists when you access network resources;
  • Try to disable the IPv6 protocol in the properties of your network adapter in the Control Panel. Check that the following protocols are enabled for your network adapter: Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks;
  • Try to reset the Windows network settings on the computer with the commands:
    netsh winsock reset
    netsh int ip reset
  • If you are using a Windows workgroup network, make sure that NetBIOS protocol support is not disabled in the TCP/IPv4 properties of your network adapter.  Next, open the Local Security Policy Settings (secpol.msc), go to Local Policies -> Security Options -> Network security: LAN Manager authentication level and select Send LM & NTLM — use NTLMv2 session security if negotiated (this is an unsafe option!!).
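Below is a minimal sketch for the services check from the list above, using the service short names given there; run it from an elevated PowerShell prompt. Note that on some builds the DNS Client (Dnscache) service is protected and may refuse the startup-type change:

# Set the discovery-related services to delayed automatic start and make sure they are running
'fdPHost','FDResPub','SSDPSRV','upnphost','Dnscache' | ForEach-Object {
    sc.exe config $_ start= delayed-auto   # may be denied for Dnscache on recent builds
    sc.exe start $_                        # returns a harmless error if the service is already running
}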

Allow SMBv2 Insecure Guest Logons on Windows

If you are using anonymous shared folder access to NAS storage or other computers (without entering a username and password), you will need to enable the insecure guest logon policy on the client computer. By default, modern versions of Windows don’t allow anonymous (guest) access to shared network folders using the SMB 2.0 protocol.

If you try to connect to the shared folder as an anonymous (guest) user, an event with Event ID 31017 will appear in the Event Viewer log:

Source: Microsoft-Windows-SMBClient
Date: Date/Time
Event ID: 31017
Task Category: None
Level: Error
Keywords: (128)
User: NETWORK SERVICE
Computer: fs01.woshub.com
Description: Rejected an insecure guest logon.
User name: Ned
Server name: ServerName

To allow SMBv2 guest logons (this is an unsafe option and should only be used when it is absolutely necessary!), open the Local Group Policy editor (gpedit.msc), and turn on the Enable insecure guest logons policy (Computer Configuration -> Administrative templates -> Network -> Lanman Workstation).


Or you can enable insecure SMB shared folder access under guest account via the registry using the command:

reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v AllowInsecureGuestAuth /t reg_dword /d 00000001 /f

Enable Legacy SMB v1 Client on Windows

You must enable the SMB1Protocol-Client component on the client computer if your network device (file storage) only supports the SMB 1.0 file-sharing protocol (although this is not recommended for security reasons).

The SMB v1.0 protocol is disabled by default in modern versions of Windows 10/11 and Windows Server 2019/2022. This is because SMB 1.0 is a legacy and vulnerable protocol for file and folder sharing on Windows. When you try to connect from Windows 10/11 to an SMBv1-only file share (for example, an old NAS device or a computer running Windows XP/Windows Server 2003) and list the remote device’s shared network folders (by the UNC path, such as \\FileStorageNetworkName), you will receive the ‘Network path not found‘ error.

You can use the DISM command to check if the SMBv1 protocol is enabled in Windows:

Dism /online /Get-Features /format:table | find "SMB1Protocol"


As you can see, in this case the SMB1Protocol-Client feature is disabled.

SMB1Protocol                 | Disabled
SMB1Protocol-Client          | Disabled
SMB1Protocol-Server          | Disabled
SMB1Protocol-Deprecation     | Disabled

You can enable the SMB v1 client protocol to access legacy shared folders from the Turn Windows features on or off panel ( optionalfeatures.exe -> SMB 1.0 / CIFS File Sharing Support -> SMB 1.0 / CIFS Client).


Or you can enable the SMB 1.0 client with the DISM command:

Dism /online /Enable-Feature /FeatureName:"SMB1Protocol-Client"


After installing the SMBv1 client, restart your computer and check that the shared network folder can now be opened.

On Windows Server 2019/2022, you can enable SMBv1 with the command

Install-WindowsFeature FS-SMB1

Important! If you have enabled the SMB1 client, remember that this protocol is vulnerable and has a large number of remote exploitation vulnerabilities. If you don’t need the SMB v1 protocol for legacy device access, be sure to disable it.

In Windows 10/11, the SMBv1 client is automatically disabled if it has not been used for more than 15 days.

Disable SMB 1.0 and SMB 2.0 Protocols on Windows Clients

If only modern devices that support SMB v3 are used on your network (Windows 8.1/Windows Server 2012 R2 and later; see the table of SMB versions in Windows), you can fix the 0x80070035 error by completely disabling SMB1 and SMB2 on all clients. The reason is that your computer may try to use the SMB 2.0 protocol to access shared folders that only accept SMB 3.0 connections.

First, disable the SMB 1.0 protocol using the Turn Windows features on or off panel (optionalfeatures.exe) or with commands:

sc.exe config lanmanworkstation depend= bowser/mrxsmb20/nsi
sc.exe config mrxsmb10 start= disabled
Dism /online /Disable-Feature /FeatureName:"SMB1Protocol"

Then disable the SMB 2.0 protocol:

reg.exe add "HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters" /v "SMB2" /t REG_DWORD /d "0" /f
sc.exe config lanmanworkstation depend= bowser/mrxsmb10/nsi
sc.exe config mrxsmb20 start= disabled
PowerShell -ExecutionPolicy Unrestricted
Set-SmbServerConfiguration -EnableSMB2Protocol $false

You can check that the SMB 1 and SMB 2 protocols are disabled by running the following PowerShell command:

Get-SmbServerConfiguration | select "*enablesmb*" | fl

EnableSMB1Protocol              : False
EnableSMB2Protocol              : False

Get-SmbServerConfiguration check smb 1 and 2 version installed on windows 10
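
Note that Get-SmbServerConfiguration reflects the SMB server side only. As a rough check of the client-side drivers and their startup mode, you can query the services directly (a sketch):

# Check how the SMB1 and SMB2/3 client drivers are configured to start
sc.exe qc mrxsmb10
sc.exe qc mrxsmb20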

Check the Network Discovery Settings on Windows

If your computers are joined to a workgroup, I strongly recommend that you follow the recommendations from the article Network computers are not showing up in Windows.

In the Network and Sharing Center section of the Control Panel on both computers, check that the Private network profile is set as the current profile (Private: Current profile). Make sure that the following options are enabled:

  • Turn on network discovery + Turn on automatic setup of network connected devices;
  • Turn on file and printer sharing.
turn on network discovery and file sharing windows 10 1803

In the All Networks section, enable the following options:

  • Turn off password protected sharing;
  • Turn on sharing.
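
The firewall-rule part of the network discovery and file sharing settings can also be enabled from PowerShell. A sketch, assuming an English-language Windows installation (the rule group names are localized); the related discovery services must also be running:

# Enable the built-in firewall rule groups used by network discovery and file/printer sharing
Set-NetFirewallRule -DisplayGroup "Network Discovery" -Enabled True
Set-NetFirewallRule -DisplayGroup "File and Printer Sharing" -Enabled True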

Add Windows Credentials to access NAS or Samba Shares

If the problem only occurs when accessing a NAS share or a Samba server on Linux, you can try saving the connection credentials (the username and password used to connect to the SMB share) in the Windows Credential Manager (Control Panel\All Control Panel Items\Credential Manager -> Windows Credentials, or run the command control.exe keymgr.dll).

Click Add a Windows credential and specify the SMB file server hostname (or IP) and the connection credentials.
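
The same saved credential can be added from the command line with the built-in cmdkey tool. A sketch; the hostname, username, and password below are placeholders:

# Save credentials for the NAS or Samba host in the Windows Credential Manager
cmdkey /add:nas01.example.local /user:nasuser /pass:YourPassword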

windows saved credentials

Then go to Network and Sharing Center and enable the option Use user accounts and passwords to connect to other computers in the Advanced sharing settings.

Use user accounts and passwords to connect to other computers

Windows automatically uses the saved credentials to access the specified file server resources.

I hope this article is useful to you and that you are able to restore access to your shared folders on the LAN.

Source :
https://woshub.com/error-code-0x80070035-network-path-not-found-windows-10/

Enable Single Sign-On (SSO) Authentication on RDS Windows Server

May 23, 2023

Single Sign-On (SSO) allows an authenticated (signed-on) user to access other domain services without having to re-authenticate (re-entering a password) and without using saved credentials (including RDP). SSO can be used when connecting to Remote Desktop Services (terminal) servers. This prevents a user logged on to a domain computer from entering their account name and password multiple times in the RDP client window when connecting to different RDS hosts or running published RemoteApps.

This article shows how to configure transparent SSO (Single Sign-On) for users of RDS servers running Windows Server 2022/2019/2016.

System requirements:

  • The Connection Broker server and all RDS hosts must be running Windows Server 2012 or newer;
  • Windows 11, 10, or 8.1 Pro/Enterprise editions can be used as client workstations;
  • SSO works only in the domain environment: Active Directory user accounts must be used, the RDS servers and user’s workstations must be joined to the same AD domain;
  • RDP client version 8.0 or later must be used on the clients;
  • SSO works only with password authentication (smart cards are not supported);
  • The RDP Security Layer in the connection settings should be set to Negotiate or SSL (TLS 1.0), and the encryption mode to High or FIPS Compliant.

The single sign-on setup process consists of the following steps:

  • You need to issue and assign an SSL certificate on RD Gateway, RD Web, and RD Connection Broker servers;
  • Web SSO has to be enabled on the RDWeb server;
  • Configure credential delegation group policy;
  • Add the RDS certificate thumbprint to the trusted .rdp publishers using GPO.

Enable SSO Authentication on RDS Host with Windows Server 2022/2019/2016

First, you need to issue and assign an SSL certificate to your RDS deployment. The certificate’s Enhanced Key Usage (EKU) must contain the Server Authentication identifier. The procedure for obtaining an SSL certificate for an RDS deployment is outside the scope of this article (you can generate a self-signed SSL certificate yourself, but you will then have to deploy it to the trusted certificates store on all clients using Group Policy).

Learn more about using SSL/TLS certificates to secure RDP connections.

The certificate is assigned in the Certificates section of RDS Deployment properties.

RDS certificates

Then, on all servers with the RD Web Access role, enable Windows Authentication for the IIS RDWeb directory and disable Anonymous Authentication.

IIS Windows Authentication
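
If you prefer to script this step, the same IIS authentication settings can be changed with the WebAdministration PowerShell module. A sketch, assuming the default RDWeb location under the Default Web Site:

# Enable Windows Authentication and disable Anonymous Authentication for the RDWeb application
Import-Module WebAdministration
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/windowsAuthentication -Name enabled -Value $true -PSPath "IIS:\" -Location "Default Web Site/RDWeb"
Set-WebConfigurationProperty -Filter /system.webServer/security/authentication/anonymousAuthentication -Name enabled -Value $false -PSPath "IIS:\" -Location "Default Web Site/RDWeb"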

After you have saved the changes, restart the IIS:

iisreset /noforce

If a Remote Desktop Gateway is used, make sure that it is not used for connections from internal clients (the Bypass RD Gateway server for local addresses option should be checked).

RD Gateway deployment

Now you need to obtain the SSL certificate thumbprint of the RD Connection Broker and add it to the list of trusted RDP publishers. For that, run the following PowerShell command on the RDS Connection Broker host:

Get-Childitem CERT:\LocalMachine\My

Get-Childitem CERT:\LocalMachine\My
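
To pull out just the thumbprint, you can filter the personal store by the certificate subject. A sketch; the subject filter string is a placeholder for your RD Connection Broker certificate name:

# Show the subject and thumbprint of the RD Connection Broker certificate
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*rdcb*" } | Select-Object Subject, Thumbprint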

Copy the value of the certificate’s thumbprint and add it to the Specify SHA1 thumbprints of certificates representing RDP publishers policy (Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Connection Client).

Specify SHA1 thumbprints of certificates representing RDP publishers

Configure Remote Desktop Single Sign-on on Windows Clients

The next step is to configure the credential delegation policy for user computers.

  1. Open the domain Group Policy Management Console (gpmc.msc);
  2. Create a new domain GPO and link it to an OU with users (computers) that need to be allowed to use SSO to access the RDS server;
  3. Enable the policy Allow delegating default credentials under Computer Configuration -> Administrative Templates -> System -> Credentials Delegation;
  4. Add the names of the RDS hosts to which the client is allowed to automatically send user credentials for SSO authentication. Use the following format for RDS hosts: TERMSRV/rd.contoso.com (all TERMSRV characters must be in upper case). If you need to allow credentials to be sent to all RDS servers in the domain (less secure), you can use this construction: TERMSRV/*.contoso.com.
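
On a client computer, you can verify that the delegation policy has been applied by reading the registry values it creates (a quick check, assuming the GPO has already been processed on that client):

# List the TERMSRV/... entries pushed by the 'Allow delegating default credentials' policy
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CredentialsDelegation\AllowDefaultCredentials"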

The above policy will work if Kerberos authentication is used. If the NTLM authentication protocol is not disabled in the domain, you must configure the Allow delegating default credentials with NTLM-only server authentication policy in the same way.

Then, to prevent a window warning that the remote application publisher is untrusted, add the address of the server running the RD Connection Broker role to the trusted zone on the client computers using the policy “Site to Zone Assignment List” (similar to the article How to disable Open File security warning on Windows 10):

  1. Go to the GPO section User/Computer Configuration -> Administrative Templates -> Windows Components -> Internet Explorer -> Internet Control Panel -> Security Page.
  2. Enable the policy Site to Zone Assignment List
  3. Specify the FQDN of the RD Connection Broker hostname and set Zone 2 (Trusted sites).
Site to Zone assignment : trusted zone

Next, you need to enable the Logon options policy under User/Computer Configuration -> Administrative Templates -> Windows Components -> Internet Explorer -> Internet Control Panel -> Security Page -> Trusted Sites Zone. Select ‘Automatic logon with current username and password’ from the dropdown list.

Then navigate to Computer Configuration -> Policies -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Connection Client and disable the policy Prompt for credentials on the client computer.

GPO: dont prompt for credentials on the client computer

After updating the Group Policy settings on the client, open the mstsc.exe (Remote Desktop Connection) client and specify the FQDN of the RDS host. The UserName field automatically displays your name in the format user@domain.com:

Your Windows logon credentials will be used to connect.

Now, when you start a RemoteApp or connect directly to a Remote Desktop Services host, you will not be prompted for your password.

To use the RD Gateway with SSO, enable the policy Set RD Gateway Authentication Method (User Configuration -> Policies -> Administrative Templates -> Windows Components -> Remote Desktop Services -> RD Gateway) and set its value to Use Locally Logged-On Credentials.

Active X component Microsoft Remote Desktop Services Web Access Control (MsRdpClientShell)

To use Web SSO on RD Web Access, note that it is recommended to use Internet Explorer with the ActiveX component Microsoft Remote Desktop Services Web Access Control (MsRdpClientShell, MsRdpWebAccess.dll) enabled.

On modern versions of Windows, Internet Explorer is disabled by default and you will need to use Microsoft Edge instead. To use RD Web with SSO, you must open the RD Web Access URL in Microsoft Edge in Internet Explorer (compatibility) mode, because Edge won’t run ActiveX components outside compatibility mode.

In order for all client computers to open RDWeb in compatibility mode, you will need to install the Microsoft Edge Administrative Templates (GPO) and configure the policy settings under Computer Configuration -> Administrative Templates -> Microsoft Edge.

Possible errors:

  • In our case, RDP SSO stopped working on an RDS farm with User Profile Disks after installing security updates KB5018410 (Windows 10) or KB5018418 (Windows 11) in autumn 2022. To solve the problem, edit the *.rdp connection file and change the following line: use redirection server name:i:1
  • If you have not added the SSL thumbprint of the RDCB certificate to the trusted RDP publishers, a warning will appear when trying to connect: Do you trust the publisher of this RemoteApp program?
  • An authentication error has occurred (Code: 0x607). Check that you have assigned the correct certificate to the RDS roles.

Source :
https://woshub.com/sso-single-sign-on-authentication-on-rds/

Remote Desktop Licensing Mode is not Configured

August 24, 2023

When configuring a new RDS farm node on Windows Server 2022/2019/2016/2012 R2, you may see the following tray warning pop-up: “Licensing mode for the Remote Desktop Session Host is not configured. Remote Desktop Service will stop working in 104 days. On the RD Connection Broker server, use Server Manager to specify the Remote Desktop licensing mode and the license server.”

WinServer 2012 R2 - Licensing mode for the RDSH is not configured

At the same time, there will be warnings with Event ID 18 in the Event Viewer:

Log Name: System
Source: Microsoft-Windows-TerminalServices-Licensing
Level: Warning
Description: The Remote Desktop license server UK-RDS01 has not been activated and therefore will only issue temporary licenses. To issue permanent licenses, the Remote Desktop license server must be activated.

This problem will also occur if there are no Remote Desktop Licensing (RDS) servers available on your network to provide a license.

These errors indicate that your RDS host is running in the licensing grace period mode. You can use a Remote Desktop Session Host for 120 days without activating RDS licenses during the grace period. When the grace period expires, users won’t be able to connect to the RDSH and will see an error: “Remote Desktop Services will stop working because this computer is past its grace period and has not contacted at least a valid Windows Server 2012 license server. Click this message to open RD Session Host Server Configuration to use Licensing Diagnosis.”

The number of days remaining before the RDS grace period expires can be displayed using the command:

wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TerminalServiceSetting WHERE (__CLASS !="") CALL GetGracePeriodDays
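
The same WMI method can be called from PowerShell, which may be more convenient on hosts where wmic has been removed (a sketch):

# Query the remaining RDS grace period (in days) via CIM
Get-CimInstance -Namespace root/CIMV2/TerminalServices -ClassName Win32_TerminalServiceSetting |
    Invoke-CimMethod -MethodName GetGracePeriodDays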

Check the Licensing Settings on the Remote Desktop Server

To diagnose the problem, run the Remote Desktop Licensing Diagnoser tool (lsdiag.msc, or Administrative Tools -> Remote Desktop Services -> RD Licensing Diagnoser). The tool should display the following error: “Licenses are not available for the Remote Desktop Session Host server, and RD Licensing Diagnoser has identified licensing problems for the RD Session Host server. Licensing mode for the Remote Desktop Session Host is not configured. Number of licenses available for clients: 0. Set the licensing mode on the Remote Desktop Session Host server to either Per User or Per Device. Use RD Licensing Manager to install the corresponding licenses on the license server. The Remote Desktop Session Host server is within its grace period, but the Session Host server has not been configured with any license server.”

As you can see, there are no licenses available to Clients on the RDS Host because the Licensing Mode is not set.

Remote Desktop Licensing Diagnoser: Licensing mode for the Remote Desktop Session Host is not configured

The most likely problem is that the administrator has not set the RDS Licensing Server and/or the licensing mode. This should be done even if the license type was already specified when the RDS Host was deployed (Configure the deployment -> RD Licensing -> Select the Remote Desktop licensing mode).

set rd licensing mode during deployment

Configuring the RDS Licensing Mode on Windows Server

There are several ways to configure host RDS licensing settings:

  • Using PowerShell
  • Via the Windows Registry
  • Using the Group Policy (preferred)

Set the Remote Desktop licensing mode via GPO

To configure the license server settings on the RDS host, you must use the domain GPO management console (gpmc.msc) or the local Group Policy editor (gpedit.msc).

On a standalone RDSH host (whether in a domain or in a workgroup), it’s easiest to use the local policy. Go to Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Licensing.

We need two GPO options:

  • Use the specified Remote Desktop license servers – enable the policy and specify the RDS license server addresses. If the RD license server is running on the same host, type 127.0.0.1. You can specify the addresses of multiple hosts with the RDS Licensing role separated by commas;
Policy - Use the specified Remote Desktop license servers
  • Set the Remote Desktop licensing mode – select the licensing mode. In our case, it is Per User.
Policy - Set the Remote Desktop licensing mode - Per User

If you have deployed an RDS host without an AD domain (in a workgroup), you can only use Per Device RDS CALs. Otherwise, a message is displayed when a user logs in to the RDSH server in the workgroup: “Remote Desktop Issue. There is a problem with your Remote Desktop license, and your session will be disconnected in 60 minutes. Contact your system administrator to fix the problem.”

Set RDS licensing mode from the PowerShell prompt

Open a PowerShell console and check that the RDS licensing server address is configured on your RDSH:

$obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting
$obj.GetSpecifiedLicenseServerList()

GetSpecifiedLicenseServerList

Note. In this case, the data returned by the Get-RDLicenseConfiguration cmdlet may be completely different and incorrect.

If the RDS license server is not configured, you can set it using the command:

$obj.SetSpecifiedLicenseServerList("uk-rdslic1.woshub.com")

You can also set the licensing mode (4 — Per User, or 2 — Per Device):

$obj.ChangeMode(4)

powershell: change RDS licensing mode

You can use the Get-ADObject cmdlet from the ActiveDirectory PowerShell module to list servers with the RDS Licensing role in an Active Directory domain:

Get-ADObject -Filter {objectClass -eq 'serviceConnectionPoint' -and Name -eq 'TermServLicensing'}

You can also configure the licensing parameters of the RDS host via a host with the RD Connection Broker role:

Set-RDLicenseConfiguration -LicenseServer @("uk-rdslic1.woshub.com","uk-rdslic2.woshub.com") -Mode PerDevice -ConnectionBroker "uk-rdcb1.woshub.com"

Configuring RDS licensing settings via the registry

In the HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM\Licensing Core key, you will need to change the DWORD value of the LicensingMode parameter from its default value (licensing mode not set) to one of the following:

  • 2 – if Per Device RDS licensing mode is used;
  • 4 – if Per User licensing is used.
rds licensing mode - LicensingMode registry parameter

You can change the registry setting manually using regedit.exe, or with the following PowerShell commands that set the registry values:

# Specify the RDS licensing mode: 2 - Per Device CAL, 4 - Per User CAL
$RDSCALMode = 2
# RDS Licensing hostname
$RDSlicServer = "uk-rdslic1.woshub.com"
# Set the server name and licensing mode in the registry
New-Item "HKLM:\SYSTEM\CurrentControlSet\Services\TermService\Parameters\LicenseServers"
New-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\TermService\Parameters\LicenseServers" -Name SpecifiedLicenseServers -Value $RDSlicServer -PropertyType "MultiString"
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM\Licensing Core\" -Name "LicensingMode" -Value $RDSCALMode

Once you have made the changes, restart your RDSH server and open the RDS Licensing Diagnoser console. If you have configured everything correctly, you should see the number of licenses available for clients and the licensing mode you have set (Licensing mode: Per device): “RD Licensing Diagnoser did not identify any licensing problems for the Remote Desktop Session Host.”

number of available rds licenses

If a firewall is used on your network, you must open the following ports from the RDSH host to the RDS licensing server – TCP:135, UDP:137, UDP:138, TCP:139, TCP:445, TCP:49152–65535 (RPC range).

You can use the Test-NetConnection cmdlet to check for open and closed ports. If ports are closed in the local Windows Defender firewall, you can use PowerShell or GPO to manage firewall rules.
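
For example, to quickly test two of the required ports from the RDSH host (using the example license server name from above):

# Check RPC endpoint mapper and SMB connectivity to the RDS licensing server
Test-NetConnection -ComputerName uk-rdslic1.woshub.com -Port 135
Test-NetConnection -ComputerName uk-rdslic1.woshub.com -Port 445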

Also note that if the RD Licensing Server runs, for example, Windows Server 2016 with RDS 2016 CALs installed, you will not be able to install RDS CAL licenses for Windows Server 2019 or 2022. The ‘Remote Desktop Licensing mode is not configured’ error persists even when you specify the correct license type and RDS license server name. Older versions of Windows Server simply don’t support RDS CALs for newer versions of Windows Server.

In this case, the following message will be displayed in the RD Licensing Diagnoser window: “The Remote Desktop Session Host is in Per User licensing mode and No Redirector Mode, but the license server does not have any installed licenses with the following attributes: Product version: Windows Server 2016. Use RD Licensing Manager to install the appropriate licenses on the license server.”

You must first upgrade the version of Windows Server on the license server or deploy a new RD License host. A newer version of Windows Server (for example, WS 2022) has support for RDS CALs for all previous versions of Windows Server.

Note. A licensing report is not generated if the RDS host is in a workgroup, although the RDS licenses themselves are correctly issued to clients/devices. In this case, you will need to keep track of the number of remaining RDS CALs yourself.

Source :
https://woshub.com/licensing-mode-rds-host-not-configured/

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks

Release Date February 28, 2023
Alert Code: AA23-059A

SUMMARY

The Cybersecurity and Infrastructure Security Agency (CISA) is releasing this Cybersecurity Advisory (CSA) detailing activity and key findings from a recent CISA red team assessment, conducted in coordination with the assessed organization, to provide network defenders recommendations for improving their organization’s cyber posture.

Actions to take today to harden your local environment:

  • Establish a security baseline of normal network activity; tune network and host-based appliances to detect anomalous behavior.
  • Conduct regular assessments to ensure appropriate procedures are created and can be followed by security staff and end users.
  • Enforce phishing-resistant MFA to the greatest extent possible.

In 2022, CISA conducted a red team assessment (RTA) at the request of a large critical infrastructure organization with multiple geographically separated sites. The team gained persistent access to the organization’s network, moved laterally across the organization’s multiple geographically separated sites, and eventually gained access to systems adjacent to the organization’s sensitive business systems (SBSs). Multifactor authentication (MFA) prompts prevented the team from achieving access to one SBS, and the team was unable to complete its viable plan to compromise a second SBS within the assessment period.

Despite having a mature cyber posture, the organization did not detect the red team’s activity throughout the assessment, including when the team attempted to trigger a security response.

CISA is releasing this CSA detailing the red team’s tactics, techniques, and procedures (TTPs) and key findings to provide network defenders of critical infrastructure organizations proactive steps to reduce the threat of similar activity from malicious cyber actors. This CSA highlights the importance of collecting and monitoring logs for unusual activity as well as continuous testing and exercises to ensure your organization’s environment is not vulnerable to compromise, regardless of the maturity of its cyber posture.

CISA encourages critical infrastructure organizations to apply the recommendations in the Mitigations section of this CSA, including conducting regular testing within their security operations center, to ensure security processes and procedures are up to date, effective, and enable timely detection and mitigation of malicious activity.

Download the PDF version of this report:

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks (PDF, 1.06 MB)

TECHNICAL DETAILS

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 12. See the appendix for a table of the red team’s activity mapped to MITRE ATT&CK tactics and techniques.

Introduction

CISA has authority to, upon request, provide analyses, expertise, and other technical assistance to critical infrastructure owners and operators and provide operational and timely technical assistance to Federal and non-Federal entities with respect to cybersecurity risks. (See generally 6 U.S.C. §§ 652[c][5], 659[c][6].) After receiving a request for a red team assessment (RTA) from an organization and coordinating some high-level details of the engagement with certain personnel at the organization, CISA conducted the RTA over a three-month period in 2022.

During RTAs, a CISA red team emulates cyber threat actors to assess an organization’s cyber detection and response capabilities. During Phase I, the red team attempts to gain and maintain persistent access to an organization’s enterprise network while avoiding detection and evading defenses. During Phase II, the red team attempts to trigger a security response from the organization’s people, processes, or technology.

The “victim” for this assessment was a large organization with multiple geographically separated sites throughout the United States. For this assessment, the red team’s goal during Phase I was to gain access to certain sensitive business systems (SBSs).

Phase I: Red Team Cyber Threat Activity
Overview

The organization’s network was segmented with both logical and geographical boundaries. CISA’s red team gained initial access to two organization workstations at separate sites via spearphishing emails. After gaining access and leveraging Active Directory (AD) data, the team gained persistent access to a third host via spearphishing emails. From that host, the team moved laterally to a misconfigured server, from which they compromised the domain controller (DC). They then used forged credentials to move to multiple hosts across different sites in the environment and eventually gained root access to all workstations connected to the organization’s mobile device management (MDM) server. The team used this root access to move laterally to SBS-connected workstations. However, a multifactor authentication (MFA) prompt prevented the team from achieving access to one SBS, and Phase I ended before the team could implement a seemingly viable plan to achieve access to a second SBS.

Initial Access and Active Directory Discovery

The CISA red team gained initial access [TA0001] to two workstations at geographically separated sites (Site 1 and Site 2) via spearphishing emails. The team first conducted open-source research [TA0043] to identify potential targets for spearphishing. Specifically, the team looked for email addresses [T1589.002] as well as names [T1589.003] that could be used to derive email addresses based on the team’s identification of the email naming scheme. The red team sent tailored spearphishing emails to seven targets using commercially available email platforms [T1585.002]. The team used the logging and tracking features of one of the platforms to analyze the organization’s email filtering defenses and confirm the emails had reached the target’s inbox.

The team built a rapport with some targeted individuals through emails, eventually leading these individuals to accept a virtual meeting invite. The meeting invite took them to a red team-controlled domain [T1566.002] with a button, which, when clicked, downloaded a “malicious” ISO file [T1204]. After the download, another button appeared, which, when clicked, executed the file.

Two of the seven targets responded to the phishing attempt, giving the red team access to a workstation at Site 1 (Workstation 1) and a workstation at Site 2. On Workstation 1, the team leveraged a modified SharpHound collector, ldapsearch, and command-line tool, dsquery, to query and scrape AD information, including AD users [T1087.002], computers [T1018], groups [T1069.002], access control lists (ACLs), organizational units (OU), and group policy objects (GPOs) [T1615]. Note: SharpHound is a BloodHound collector, an open-source AD reconnaissance tool. Bloodhound has multiple collectors that assist with information querying.

There were 52 hosts in the AD that had Unconstrained Delegation enabled and a lastlogon timestamp within 30 days of the query. Hosts with Unconstrained Delegation enabled store Kerberos ticket-granting tickets (TGTs) of all users that have authenticated to that host. Many of these hosts, including a Site 1 SharePoint server, were Windows Server 2012R2. The default configuration of Windows Server 2012R2 allows unprivileged users to query group membership of local administrator groups.
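
Defenders can enumerate such hosts themselves with the ActiveDirectory PowerShell module. This is a sketch of a defender-side check (not a command used by the red team); note that domain controllers are trusted for delegation by default and will also appear in the output:

# List computer accounts with (unconstrained) Kerberos delegation enabled
Get-ADComputer -Filter { TrustedForDelegation -eq $true } -Properties OperatingSystem, LastLogonDate |
    Select-Object Name, OperatingSystem, LastLogonDate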

The red team queried parsed Bloodhound data for members of the SharePoint admin group and identified several standard user accounts with administrative access. The team initiated a second spearphishing campaign, similar to the first, to target these users. One user triggered the red team’s payload, which led to installation of a persistent beacon on the user’s workstation (Workstation 2), giving the team persistent access to Workstation 2.

Lateral Movement, Credential Access, and Persistence

The red team moved laterally [TA0008] from Workstation 2 to the Site 1 SharePoint server and had SYSTEM level access to the Site 1 SharePoint server, which had Unconstrained Delegation enabled. They used this access to obtain the cached credentials of all logged-in users—including the New Technology Local Area Network Manager (NTLM) hash for the SharePoint server account. To obtain the credentials, the team took a snapshot of lsass.exe [T1003.001] with a tool called nanodump, exported the output, and processed the output offline with Mimikatz.

The team then exploited the Unconstrained Delegation misconfiguration to steal the DC’s TGT. They ran the DFSCoerce python script (DFSCoerce.py), which prompted DC authentication to the SharePoint server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT [T1550.002], [T1557.001]. (DFSCoerce abuses Microsoft’s Distributed File System [MS-DFSNM] protocol to relay authentication against an arbitrary server.[1])

The team then used the TGT to harvest advanced encryption standard (AES)-256 hashes via DCSync [T1003.006] for the krbtgt account and several privileged accounts—including domain admins, workstation admins, and a system center configuration management (SCCM) service account (SCCM Account 1). The team used the krbtgt account hash throughout the rest of their assessment to perform golden ticket attacks [T1558.001] in which they forged legitimate TGTs. The team also used the asktgt command to impersonate accounts they had credentials for by requesting account TGTs [T1550.003].

The team first impersonated the SCCM Account 1 and moved laterally to a Site 1 SCCM distribution point (DP) server (SCCM Server 1) that had direct network access to Workstation 2. The team then moved from SCCM Server 1 to a central SCCM server (SCCM Server 2) at a third site (Site 3). Specifically, the team:

  1. Queried the AD using Lightweight Directory Access Protocol (LDAP) for information about the network’s sites and subnets [T1016]. This query revealed all organization sites and subnets broken down by classless inter-domain routing (CIDR) subnet and description.
  2. Used LDAP queries and domain name system (DNS) requests to identify recently active hosts.
  3. Listed existing network connections [T1049] on SCCM Server 1, which revealed an active Server Message Block (SMB) connection from SCCM Server 2.
  4. Attempted to move laterally to the SCCM Server 2 via AppDomain hijacking, but the HTTPS beacon failed to call back.
  5. Attempted to move laterally with an SMB beacon [T1021.002], which was successful.

The team also moved from SCCM Server 1 to a Site 1 workstation (Workstation 3) that housed an active server administrator. The team impersonated an administrative service account via a golden ticket attack (from SCCM Server 1); the account had administrative privileges on Workstation 3. The user employed a KeePass password manager that the team was able to use to obtain passwords for other internal websites, a kernel-based virtual machine (KVM) server, virtual private network (VPN) endpoints, firewalls, and another KeePass database with credentials. The server administrator relied on a password manager, which stored credentials in a database file. The red team pulled the decryption key from memory using KeeThief and used it to unlock the database [T1555.005].

At the organization’s request, the red team confirmed that SCCM Server 2 provided access to the organization’s sites because firewall rules allowed SMB traffic to SCCM servers at all other sites.

The team moved laterally from SCCM Server 2 to an SCCM DP server at Site 5 and from the SCCM Server 1 to hosts at two other sites (Sites 4 and 6). The team installed persistent beacons at each of these sites. Site 5 was broken into a private and a public subnet and only DCs were able to cross that boundary. To move between the subnets, the team moved through DCs. Specifically, the team moved from the Site 5 SCCM DP server to a public DC; and then they moved from the public DC to the private DC. The team was then able to move from the private DC to workstations in the private subnet.

The team leveraged access available from SCCM 2 to move around the organization’s network for post-exploitation activities (See Post-Exploitation Activity section).

See Figure 1 for a timeline of the red team’s initial access and lateral movement showing key access points.

Figure 1: Red Team Cyber Threat Activity: Initial Access and Lateral Movement
Figure 1: Red Team Cyber Threat Activity: Initial Access and Lateral Movement

While traversing the network, the team varied their lateral movement techniques to evade detection and because the organization had non-uniform firewalls between the sites and within the sites (within the sites, firewalls were configured by subnet). The team’s primary methods to move between sites were AppDomainManager hijacking and dynamic-link library (DLL) hijacking [T1574.001]. In some instances, they used Windows Management Instrumentation (WMI) Event Subscriptions [T1546.003].

The team impersonated several accounts to evade detection while moving. When possible, the team remotely enumerated the local administrators group on target hosts to find a valid user account. This technique relies on anonymous SMB pipe binds [T1071], which are disabled by default starting with Windows Server 2016. In other cases, the team attempted to determine valid accounts based on group name and purpose. If the team had previously acquired the credentials, they used asktgt to impersonate the account. If the team did not have the credentials, they used the golden ticket attack to forge the account.

Post-Exploitation Activity: Gaining Access to SBSs

With persistent, deep access established across the organization’s networks and subnetworks, the red team began post-exploitation activities and attempted to access SBSs. Trusted agents of the organization tasked the team with gaining access to two specialized servers (SBS 1 and SBS 2). The team achieved root access to three SBS-adjacent workstations but was unable to move laterally to the SBS servers:

  • Phase I ended before the team could implement a plan to move to SBS 1.
  • An MFA prompt blocked the team from moving to SBS 2, and Phase I ended before they could implement potential workarounds.

However, the team assesses that by using Secure Shell (SSH) session socket files (see below), they could have accessed any hosts available to the users whose workstations were compromised.

Plan for Potential Access to SBS 1

Conducting open-source research [T1591.001], the team identified that SBS 1 and 2 assets and associated management/upkeep staff were located at Sites 5 and 6, respectively. Adding previously collected AD data to this discovery, the team was able to identify a specific SBS 1 admin account. The team planned to use the organization’s mobile device management (MDM) software to move laterally to the SBS 1 administrator’s workstation and, from there, pivot to SBS 1 assets.

The team identified the organization’s MDM vendor using open-source and AD information [T1590.006] and moved laterally to an MDM distribution point server at Site 5 (MDM DP 1). This server contained backups of the MDM MySQL database on its D: drive in the Backup directory. The backups included the encryption key needed to decrypt any encrypted values, such as SSH passwords [T1552]. The database backup identified both the user of the SBS 1 administrator account (USER 2) and the user’s workstation (Workstation 4), which the MDM software remotely administered.

The team moved laterally to an MDM server (MDM 1) at Site 3, searched files on the server, and found plaintext credentials [T1552.001] to an application programming interface (API) user account stored in PowerShell scripts. The team attempted to leverage these credentials to browse to the web login page of the MDM vendor but were unable to do so because the website directed to an organization-controlled single-sign on (SSO) authentication page.

The team gained root access to workstations connected to MDM 1—specifically, the team accessed Workstation 4—by:

  1. Selecting an MDM user from the plaintext credentials in PowerShell scripts on MDM 1.
  2. While in the MDM MySQL database,
    • Elevating the selected MDM user’s account privileges to administrator privileges, and
    • Modifying the user’s account by adding Create Policy and Delete Policy permissions [T1098], [T1548].
  3. Creating a policy via the MDM API [T1106], which instructed Workstation 4 to download and execute a payload to give the team interactive access as root to the workstation.
  4. Verifying their interactive access.
  5. Resetting permissions back to their original state by removing the policy via the MDM API and removing Create Policy and Delete Policy and administrator permissions and from the MDM user’s account.

While interacting with Workstation 4, the team found an open SSH socket file and a corresponding netstat connection to a host that the team identified as a bastion host from architecture documentation found on Workstation 4. The team planned to move from Workstation 4 to the bastion host to SBS 1. Note: A SSH socket file allows a user to open multiple SSH sessions through a single, already authenticated SSH connection without additional authentication.

The team could not take advantage of the open SSH socket. Instead, they searched through SBS 1 architecture diagrams and documentation on Workstation 4. They found a security operations (SecOps) network diagram detailing the network boundaries between Site 5 SecOps on-premises systems, Site 5 non-SecOps on-premises systems, and Site 5 SecOps cloud infrastructure. The documentation listed the SecOps cloud infrastructure IP ranges [T1580]. These “trusted” IP addresses were a public /16 subnet; the team was able to request a public IP in that range from the same cloud provider, and Workstation 4 made successful outbound SSH connections to this cloud infrastructure. The team intended to use that connection to reverse tunnel traffic back to the workstation and then access the bastion host via the open SSH socket file. However, Phase 1 ended before they were able to implement this plan.

Attempts to Access SBS 2

Conducting open-source research, the team identified an organizational branch [T1591] that likely had access to SBS 2. The team queried the AD to identify the branch’s users and administrators. The team gathered a list of potential accounts, from which they identified administrators, such as SYSTEMS ADMIN or DATA SYSTEMS ADMINISTRATOR, with technical roles. Using their access to the MDM MySQL database, the team queried potential targets to (1) determine the target’s last contact time with the MDM and (2) ensure any policy targeting the target’s workstation would run relatively quickly [T1596.005]. Using the same methodology as described by the steps in the Plan for Potential Access to SBS 1 section above, the team gained interactive root access to two Site 6 SBS 2-connected workstations: a software engineering workstation (Workstation 5) and a user administrator workstation (Workstation 6).

The Workstation 5 user had bash history files with what appeared to be SSH passwords mistyped into the bash prompt and saved in bash history [T1552.003]. The team then attempted to authenticate to SBS 2 using a similar tunnel setup as described in the Access to SBS 1 section above and the potential credentials from the user’s bash history file. However, this attempt was unsuccessful for unknown reasons.

On Workstation 6, the team found a .txt file containing plaintext credentials for the user. Using the pattern discovered in these credentials, the team was able to crack the user’s workstation account password [T1110.002]. The team also discovered potential passwords and SSH connection commands in the user’s bash history. Using a similar tunnel setup described above, the team attempted to log into SBS 2. However, a prompt for an MFA passcode blocked this attempt.

See figure 2 for a timeline of the team’s post exploitation activity that includes key points of access.

Figure 2: Red Team Cyber Threat Activity: Post Exploitation
Figure 2: Red Team Cyber Threat Activity: Post Exploitation
Command and Control

The team used third-party owned and operated infrastructure and services [T1583] throughout their assessment, including in certain cases for command and control (C2) [TA0011]. These included:

  • Cobalt Strike and Merlin payloads for C2 throughout the assessment. Note: Merlin is a post-exploit tool that leverages HTTP protocols for C2 traffic.
    • The team maintained multiple Cobalt Strike servers hosted by a cloud vendor. They configured each server with a different domain and used the servers for communication with compromised hosts. These servers retained all assessment data.
  • Two commercially available cloud-computing platforms.
    • The team used these platforms to create flexible and dynamic redirect servers to send traffic to the team’s Cobalt Strike servers [T1090.002]. Redirecting servers make it difficult for defenders to attribute assessment activities to the backend team servers. The redirectors used HTTPS reverse proxies to redirect C2 traffic between the target organization’s network and the Cobalt Strike team servers [T1071.002]. The team encrypted all data in transit [T1573] using encryption keys stored on team’s Cobalt Strike servers.
  • A cloud service to rapidly change the IP address of the team’s redirecting servers in the event of detection and eradication.
  • Content delivery network (CDN) services to further obfuscate some of the team’s C2 traffic.
    • This technique leverages CDNs associated with high-reputation domains so that the malicious traffic appears to be directed towards a reputation domain but is actually redirected to the red team-controlled Cobalt Strike servers.
    • The team used domain fronting [T1090.004] to disguise outbound traffic in order to diversify the domains with which the persistent beacons were communicating. This technique, which also leverages CDNs, allows the beacon to appear to connect to third-party domains, such as nytimes.com, when it is actually connecting to the team’s redirect server.
Phase II: Red Team Measurable Events Activity

The red team executed 13 measurable events designed to provoke a response from the people, processes, and technology defending the organization’s network. See Table 1 for a description of the events, the expected network defender activity, and the organization’s actual response.

Measurable Event: Internal Port Scan
Description: Launch scan from inside the network from a previously gained workstation to enumerate ports on target workstation, server, and domain controller system(s).
MITRE ATT&CK Technique(s): Network Service Discovery [T1046]
Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
Reported Reactions: None

Measurable Event: Comprehensive Active Directory and Host Enumeration
Description: Perform AD enumeration by querying all domain objects from the DC, and enumerating trust relationships within the AD Forest, user accounts, and current session information from every domain computer (Workstation and Server).
MITRE ATT&CK Technique(s): Domain Trust Discovery [T1482]; Account Discovery: Domain Account [T1087.002]; System Owner/User Discovery [T1033]; Remote System Discovery [T1018]
Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
Reported Reactions: Collection process stopped before completion. Host isolated and sent for forensics.

Measurable Event: Data Exfiltration—1 GB of Data
Description: Send a large amount (1 GB) of mock sensitive information to an external system over various protocols, including ICMP, DNS, FTP, and/or HTTP/S.
MITRE ATT&CK Technique(s): Exfiltration Over Alternative Protocol [T1048]
Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
Reported Reactions: None

Measurable Event: Malicious Traffic Generation—Workstation to External Host
Description: Establish a session that originates from a target Workstation system directly to an external host over a clear text protocol, such as HTTP.
MITRE ATT&CK Technique(s): Application Layer Protocol [T1071]
Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Windows Event Logs
Expected Network Defender Reactions: Detect and identify source IP and source process of enumeration; Analyze scanning host once detected; Develop response plan
Reported Reactions: None

Measurable Event: Active Directory Account Lockout
Description: Lock out several administrative AD accounts.
MITRE ATT&CK Technique(s): Account Access Removal [T1531]
Expected Detection Points: Windows Event Logs; End User Reporting
Expected Network Defender Reactions: Detect and identify source IP and source process of exfiltration; Analyze host used for exfiltration once detected; Develop response plan
Reported Reactions: None

Measurable Event: Local Admin User Account Creation (workstation)
Description: Create a local administrator account on a target workstation system.
MITRE ATT&CK Technique(s): Create Account: Local Account [T1136.001]; Account Manipulation [T1098]
Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs
Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
Reported Reactions: None

Measurable Event: Local Admin User Account Creation (server)
Description: Create a local administrator account on a target server system.
MITRE ATT&CK Technique(s): Create Account: Local Account [T1136.001]; Account Manipulation [T1098]
Expected Detection Points: Windows Event Logs
Expected Network Defender Reactions: Detect account creation; Identify source of change; Verify change with system owner; Develop response plan
Reported Reactions: None

Measurable Event: Active Directory Account Creation
Description: Create AD accounts and add them to the domain admins group.
MITRE ATT&CK Technique(s): Create Account: Domain Account [T1136.002]; Account Manipulation [T1098]
Expected Detection Points: Windows Event Logs
Expected Network Defender Reactions: Detect account creation; Identify source of change; Verify change with system owner; Develop response plan
Reported Reactions: None

Measurable Event: Workstation Admin Lateral Movement—Workstation to Workstation
Description: Use a previously compromised workstation admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on several target Workstations.
MITRE ATT&CK Technique(s): Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003]
Expected Detection Points: Windows Event Logs
Expected Network Defender Reactions: Detect account compromise; Analyze compromised host; Develop response plan
Reported Reactions: None

Measurable Event: Domain Admin Lateral Movement—Workstation to Domain Controller
Description: Use a previously compromised domain admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on a target DC.
MITRE ATT&CK Technique(s): Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003]
Expected Detection Points: Windows Event Logs
Expected Network Defender Reactions: Detect account compromise; Triage compromised host; Develop response plan
Reported Reactions: None

Measurable Event: Malicious Traffic Generation—Domain Controller to External Host
Description: Establish a session that originates from a target Domain Controller system directly to an external host over a clear text protocol, such as HTTP.
MITRE ATT&CK Technique(s): Application Layer Protocol [T1071]
Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs
Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
Reported Reactions: None

Measurable Event: Trigger Host-Based Protection—Domain Controller
Description: Upload and execute a well-known (e.g., with a signature) malicious file to a target DC system to generate host-based alerts.
MITRE ATT&CK Technique(s): Ingress Tool Transfer [T1105]
Expected Detection Points: Endpoint Protection Platform; Endpoint Detection and Response
Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
Reported Reactions: Malicious file was removed by antivirus

Measurable Event: Ransomware Simulation
Description: Execute simulated ransomware on multiple Workstation systems to simulate a ransomware attack. Note: This technique does NOT encrypt files on the target system.
MITRE ATT&CK Technique(s): N/A
Expected Detection Points: End User Reporting
Expected Network Defender Reactions: Investigate end user reported event; Triage compromised host; Develop response plan
Reported Reactions: Four users reported event to defensive staff
Findings
Key Issues

The red team noted the following key issues relevant to the security of the organization’s network. These findings contributed to the team’s ability to gain persistent, undetected access across the organization’s sites. See the Mitigations section for recommendations on how to mitigate these issues.

  • Insufficient host and network monitoring. Most of the red team’s Phase II actions failed to provoke a response from the people, processes, and technology defending the organization’s network. The organization failed to detect lateral movement, persistence, and C2 activity via their intrusion detection or prevention systems, endpoint protection platform, web proxy logs, and Windows event logs. Additionally, throughout Phase I, the team received no deconflictions or confirmation that the organization caught their activity. Below is a list of some of the higher risk activities conducted by the team that were opportunities for detection:
    • Phishing
    • Lateral movement reuse
    • Generation and use of the golden ticket
    • Anomalous LDAP traffic
    • Anomalous internal share enumeration
    • Unconstrained Delegation server compromise
    • DCSync
    • Anomalous account usage during lateral movement
    • Anomalous outbound network traffic
    • Anomalous outbound SSH connections to the team’s cloud servers from workstations
  • Lack of monitoring on endpoint management systems. The team used the organization’s MDM system to gain root access to machines across the organization’s network without being detected. Endpoint management systems provide elevated access to thousands of hosts and should be treated as high value assets (HVAs) with additional restrictions and monitoring.
  • KRBTGT never changed. The Site 1 krbtgt account password had not been updated for over a decade. The krbtgt account is a domain default account that acts as a service account for the key distribution center (KDC) service used to encrypt and sign all Kerberos tickets for the domain. Compromise of the krbtgt account could provide adversaries with the ability to sign their own TGTs, facilitating domain access years after the date of compromise. The red team was able to use the krbtgt account to forge TGTs for multiple accounts throughout Phase I. (A quick way to check the krbtgt password age is sketched after this list.)
  • Excessive permissions to standard users. The team discovered several standard user accounts that have local administrator access to critical servers. This misconfiguration allowed the team to use the low-level access of a phished user to move laterally to an Unconstrained Delegation host and compromise the entire domain.
  • Hosts with Unconstrained Delegation enabled unnecessarily. Hosts with Unconstrained Delegation enabled store the Kerberos TGTs of all users that authenticate to that host, enabling actors to steal service tickets or compromise krbtgt accounts and perform golden ticket or “silver ticket” attacks. The team performed an NTLM-relay attack to obtain the DC’s TGT, followed by a golden ticket attack on a SharePoint server with Unconstrained Delegation to gain the ability to impersonate any Site 1 AD account.
  • Use of non-secure default configurations. The organization used default configurations for hosts with Windows Server 2012 R2. The default configuration allows unprivileged users to query group membership of local administrator groups. The red team used and identified several standard user accounts with administrative access from a Windows Server 2012 R2 SharePoint server.
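
As a quick check related to the krbtgt finding above, the age of the krbtgt password can be read with the ActiveDirectory PowerShell module (a sketch of a defender-side check):

# Show when the krbtgt account password was last changed
Get-ADUser krbtgt -Properties PasswordLastSet | Select-Object Name, PasswordLastSet
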
Additional Issues

The team noted the following additional issues.

  • Ineffective separation of privileged accounts. Some workstations allowed unprivileged accounts to have local administrator access; for example, the red team discovered an ordinary user account in the local admin group for the SharePoint server. If a user with administrative access is compromised, an actor can access servers without needing to elevate privileges. Administrative and user accounts should be separated, and designated admin accounts should be exclusively used for admin purposes.
  • Lack of server egress control. Most servers, including domain controllers, allowed unrestricted egress traffic to the internet.
  • Inconsistent host configuration. The team observed inconsistencies on servers and workstations within the domain, including inconsistent membership in the local administrator group among different servers or workstations. For example, some workstations had “Server Admins” or “Domain Admins” as local administrators, and other workstations had neither.
  • Potentially unwanted programs. The team noticed potentially unusual software, including music software, installed on both workstations and servers. These extraneous software installations indicate inconsistent host configuration (see above) and increase the attack surfaces for malicious actors to gain initial access or escalate privileges once in the network.
  • Mandatory password changes enabled. During the assessment, the team keylogged a user during a mandatory password change and noticed that only the final character of their password was modified. This is potentially due to domain passwords being required to be changed every 60 days.
  • Smart card use was inconsistent across the domain. While the technology was deployed, it was not applied uniformly, and there was a significant portion of users without smartcard protections enabled. The team used these unprotected accounts throughout their assessment to move laterally through the domain and gain persistence.
Noted Strengths

The red team noted the following technical controls or defensive measures that prevented or hampered offensive actions:

  • The organization conducts regular, proactive penetration tests and adversarial assessments and invests in hardening their network based on findings.
    • The team was unable to discover any easily exploitable services, ports, or web interfaces from more than three million external in-scope IPs. This forced the team to resort to phishing to gain initial access to the environment.
    • Service account passwords were strong. The team was unable to crack any of the hashes obtained from the 610 service accounts pulled. This is a critical strength because it slowed the team from moving around the network in the initial parts of Phase I.
    • The team did not discover any useful credentials on open file shares or file servers. This slowed the progress of the team from moving around the network.
  • MFA was used for some SBSs. The team was blocked from moving to SBS 2 by an MFA prompt.
  • There were strong security controls and segmentation for SBS systems. SBSs were located on separate networks, and SBS admins used workstations protected by local firewalls.

MITIGATIONS

CISA recommends organizations implement the recommendations in Table 2 to mitigate the issues listed in the Findings section of this advisory. These mitigations align with the Cross-Sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. See CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.

Table 2: Recommendations to mitigate the issues identified in the Findings section

Issue: Insufficient host and network monitoring
Recommendations:
  • Establish a security baseline of normal network traffic and tune network appliances to detect anomalous behavior [CPG 3.1]. Tune host-based products to detect anomalous binaries, lateral movement, and persistence techniques.
  • Create alerts for Windows event log authentication codes, especially for the domain controllers. This could help detect some of the pass-the-ticket, DCSync, and other techniques described in this report.
  • From a detection standpoint, focus on identity and access management (IAM) rather than just network traffic or static host alerts. Consider who is accessing what (what resource), from where (what internal host or external location), and when (what day and time the access occurs). Look for access behavior that deviates from expected or is indicative of AD abuse.
  • Reduce the attack surface by limiting the use of legitimate administrative pathways and tools such as PowerShell, PSExec, and WMI, which are often used by malicious actors. CISA recommends selecting one tool to administer the network, ensuring logging is turned on [CPG 3.1], and disabling the others.
  • Consider using “honeypot” service principal names (SPNs) to detect attempts to crack account hashes [CPG 1.1].
  • Conduct regular assessments to ensure processes and procedures are up to date and can be followed by security staff and end users.
  • Consider using red team tools, such as SharpHound, for AD enumeration to identify users with excessive privileges and misconfigured hosts (e.g., with Unconstrained Delegation enabled).
  • Ensure all commercial tools deployed in your environment are regularly tuned to pick up on relevant activity in your environment.

Issue: Lack of monitoring on endpoint management systems
Recommendation: Treat endpoint management systems as HVAs with additional restrictions and monitoring because they provide elevated access to thousands of hosts.

Issue: KRBTGT never changed
Recommendation: Change the krbtgt account password on a regular schedule (such as every 6 to 12 months) or if it becomes compromised. Note that this password change must be carefully performed to effectively change the credential without breaking AD functionality. The password must be changed twice to effectively invalidate the old credentials, and the required waiting period between resets must be greater than the maximum lifetime period of Kerberos tickets, which is 10 hours by default. See Microsoft’s KRBTGT account maintenance considerations guidance for more information; an illustrative reset sketch follows this table.

Issue: Excessive permissions to standard users and ineffective separation of privileged accounts
Recommendation: Implement the principle of least privilege:
  • Grant standard user rights for standard user tasks such as email, web browsing, and using line-of-business (LOB) applications.
  • Periodically audit standard accounts and minimize where they have privileged access.
  • Periodically audit AD permissions to ensure users do not have excessive permissions and have not been added to admin groups.
  • Evaluate which administrative groups should administer which servers/workstations, and ensure group members are administrative accounts instead of standard accounts.
  • Separate administrator accounts from user accounts [CPG 1.5]. Only allow designated admin accounts to be used for admin purposes. If an individual user needs administrative rights over their workstation, use a separate account that does not have administrative access to other hosts, such as servers.
  • Consider using a privileged access management (PAM) solution to manage access to privileged accounts and resources [CPG 3.4]. PAM solutions can also log and alert on usage to detect unusual activity and may have helped stop the red team from accessing resources with admin accounts. Note: password vaults associated with PAM solutions should be treated as HVAs with additional restrictions and monitoring (see below).
  • Configure time-based access for accounts set at the admin level and higher. For example, the just-in-time (JIT) access method provisions privileged access when needed and can support enforcement of the principle of least privilege, as well as the Zero Trust model. This is a process in which a network-wide policy is set in place to automatically disable administrator accounts at the AD level when the account is not in direct need. When individual users need the account, they submit their requests through an automated process that enables access to a system, but only for a set timeframe to support task completion.

Issue: Hosts with Unconstrained Delegation enabled
Recommendations:
  • Remove Unconstrained Delegation from all servers. If Unconstrained Delegation functionality is required, upgrade operating systems and applications to leverage other approaches (e.g., constrained delegation), or explore whether systems can be retired or further isolated from the enterprise. CISA recommends Windows Server 2019 or greater.
  • Consider disabling or limiting NTLM and WDigest Authentication if possible, including using their use as criteria for prioritizing updates to legacy systems or for segmenting the network. Instead, use more modern federation protocols (SAML, OIDC) or Kerberos for authentication with AES-256 bit encryption [CPG 3.4].
  • If NTLM must be enabled, enable Extended Protection for Authentication (EPA) to prevent some NTLM-relay attacks, and implement SMB signing to prevent certain adversary-in-the-middle and pass-the-hash attacks [CPG 3.4]. See Microsoft’s Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS) and Overview of Server Message Block signing for more information.

Issue: Use of non-secure default configurations
Recommendation: Keep systems and software up to date [CPG 5.1]. If updates cannot be uniformly installed, update insecure configurations to meet updated standards.

Issue: Lack of server egress control
Recommendation: Configure internal firewalls and proxies to restrict internet traffic from hosts that do not require it. If a host requires specific outbound traffic, consider creating an allowlist policy of domains.

Issue: Large number of credentials in a shared vault
Recommendation: Treat password vaults as HVAs with additional restrictions and monitoring [CPG 3.4]:
  • If on-premises, require MFA for admin access and apply network segmentation [CPG 1.3]. Use solutions with end-to-end encryption where applicable [CPG 3.3].
  • If cloud-based, evaluate the provider to ensure use of strong security controls such as MFA and end-to-end encryption [CPG 1.3, 3.3].

Issue: Inconsistent host configuration
Recommendation: Establish a baseline/gold image for workstations and servers and deploy hosts from that image [CPG 2.5]. Use standardized groups to administer hosts in the network.

Issue: Potentially unwanted programs
Recommendations:
  • Implement software allowlisting to ensure users can only install software from an approved list [CPG 2.1].
  • Remove unnecessary, extraneous software from servers and workstations.

Issue: Mandatory password changes enabled
Recommendation: Consider only requiring changes for memorized passwords in the event of compromise. Regular changing of memorized passwords can lead to predictable patterns, and both CISA and the National Institute of Standards and Technology (NIST) recommend against changing passwords on regular intervals.
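
The krbtgt double reset described in Table 2 can be scripted. The following is an illustrative sketch only, assuming the ActiveDirectory PowerShell module and appropriate Domain Admin rights; review it against Microsoft’s KRBTGT account maintenance considerations guidance before running anything in production.

    Import-Module ActiveDirectory

    # First reset: rolls the krbtgt credential to a new random value.
    Set-ADAccountPassword -Identity krbtgt -Reset -NewPassword (ConvertTo-SecureString (New-Guid).Guid -AsPlainText -Force)

    # Wait longer than the maximum Kerberos ticket lifetime (10 hours by default) and allow AD
    # replication to complete, then run the second reset to fully invalidate the old credential.
    Set-ADAccountPassword -Identity krbtgt -Reset -NewPassword (ConvertTo-SecureString (New-Guid).Guid -AsPlainText -Force)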

Additionally, CISA recommends organizations implement the mitigations below to improve their cybersecurity posture:

  • Provide users with regular training and exercises, specifically related to phishing emails [CPG 4.3]. Phishing accounts for the majority of initial access intrusion events.
  • Enforce phishing-resistant MFA to the greatest extent possible [CPG 1.3].
  • Reduce the risk of credential compromise via the following:
    • Place domain admin accounts in the protected users group to prevent caching of password hashes locally; this also forces Kerberos AES authentication as opposed to weaker RC4 or NTLM.
    • Implement Credential Guard for Windows 10 and Server 2016 (Refer to Microsoft: Manage Windows Defender Credential Guard for more information). For Windows Server 2012R2, enable Protected Process Light for Local Security Authority (LSA).
    • Refrain from storing plaintext credentials in scripts [CPG 3.4]. The red team discovered a PowerShell script containing plaintext credentials that allowed them to escalate to admin.
  • Upgrade to Windows Server 2019 or greater and Windows 10 or greater. These versions have security features not included in older operating systems.

As a long-term effort, CISA recommends organizations prioritize implementing a more modern, Zero Trust network architecture that:

  • Leverages secure cloud services for key enterprise security capabilities (e.g., identity and access management, endpoint detection and response, policy enforcement).
  • Upgrades applications and infrastructure to leverage modern identity management and network access practices.
  • Centralizes and streamlines access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks.
  • Invests in technology and personnel to achieve these goals.

CISA encourages organizational IT leadership to ask their executive leadership the question: Can the organization accept the business risk of NOT implementing critical security controls such as MFA? Risks of that nature should typically be acknowledged and prioritized at the most senior levels of an organization.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, CISA recommends exercising, testing, and validating your organization’s security program against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. CISA recommends testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 3).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze your detection and prevention technologies’ performance.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA recommends continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

RESOURCES

See CISA’s RedEye tool on CISA’s GitHub page. RedEye is an interactive open-source analytic tool used to visualize and report red team command and control activities. See CISA’s RedEye tool overview video for more information.

REFERENCES
[1] Bleeping Computer: New DFSCoerce NTLM Relay attack allows Windows domain takeover

APPENDIX: MITRE ATT&CK TACTICS AND TECHNIQUES

See Table 3 for all referenced red team tactics and techniques in this advisory. Note: activity was from Phase I unless noted.

Table 3: Red team tactics and techniques mapped to MITRE ATT&CK

Reconnaissance
Technique Title | ID | Use
Gather Victim Identity Information: Email Addresses | T1589.002 | The team found employee email addresses via open-source research.
Gather Victim Identity Information: Employee Names | T1589.003 | The team identified employee names via open-source research that could be used to derive email addresses.
Gather Victim Network Information: Network Security Appliances | T1590.006 | The team identified the organization’s MDM vendor and leveraged that information to move laterally to SBS-connected assets.
Gather Victim Org Information | T1591 | The team conducted open-source research and identified an organizational branch that likely had access to an SBS asset.
Gather Victim Org Information: Determine Physical Locations | T1591.001 | The team conducted open-source research to identify the physical locations of upkeep/management staff of selected assets.
Search Open Technical Databases: Scan Databases | T1596.005 | The team queried an MDM SQL database to identify target administrators who recently connected with the MDM.
Resource Development
Technique Title | ID | Use
Acquire Infrastructure | T1583 | The team used third-party owned and operated infrastructure throughout their assessment for C2.
Establish Accounts: Email Accounts | T1585.002 | The team used commercially available email platforms for their spearphishing activity.
Obtain Capabilities: Tool | T1588.002 | The team used the following tools: Cobalt Strike and Merlin payloads for C2; KeeThief to obtain a decryption key from a KeePass database; Rubeus and DFSCoerce in an NTLM relay attack.
Initial Access
Technique Title | ID | Use
Phishing: Spearphishing Link | T1566.002 | The team sent spearphishing emails with links to a red-team-controlled domain to gain access to the organization’s systems.
Execution
Technique Title | ID | Use
Native API | T1106 | The team created a policy via the MDM API, which downloaded and executed a payload on a workstation.
User Execution | T1204 | Users downloaded and executed the team’s initial access payloads after clicking buttons to trigger download and execution.
Persistence
Technique Title | ID | Use
Account Manipulation | T1098 | The team elevated account privileges to administrator and modified the user’s account by adding Create Policy and Delete Policy permissions. During Phase II, the team created local admin accounts and an AD account; they added the created AD account to a domain admins group.
Create Account: Local Account | T1136.001 | During Phase II, the team created a local administrator account on a workstation and a server.
Create Account: Domain Account | T1136.002 | During Phase II, the team created an AD account.
Create or Modify System Process: Windows Service | T1543.003 | During Phase II, the team leveraged compromised workstation and domain admin accounts to execute a payload via Windows Service Creation on target workstations and the DC.
Event Triggered Execution: Windows Management Instrumentation Event Subscription | T1546.003 | The team used WMI Event Subscriptions to move laterally between sites.
Hijack Execution Flow: DLL Search Order Hijacking | T1574.001 | The team used DLL hijacking to move laterally between sites.
Privilege Escalation
Technique Title | ID | Use
Abuse Elevation Control Mechanism | T1548 | The team elevated user account privileges to administrator by modifying the user’s account via adding Create Policy and Delete Policy permissions.
Defense Evasion
Technique Title | ID | Use
Valid Accounts: Domain Accounts | T1078.002 | During Phase II, the team compromised a domain admin account and used it to move laterally to multiple workstations and the DC.
Credential Access
Technique Title | ID | Use
OS Credential Dumping: LSASS Memory | T1003.001 | The team obtained the cached credentials from a SharePoint server account by taking a snapshot of lsass.exe with a tool called nanodump, exporting the output, and processing the output offline with Mimikatz.
OS Credential Dumping: DCSync | T1003.006 | The team harvested AES-256 hashes via DCSync.
Brute Force: Password Cracking | T1110.002 | The team cracked a user’s workstation account password after learning the user’s patterns from plaintext credentials.
Unsecured Credentials | T1552 | The team found backups of a MySQL database that contained the encryption key needed to decrypt SSH passwords.
Unsecured Credentials: Credentials in Files | T1552.001 | The team found plaintext credentials to an API user account stored in PowerShell scripts on an MDM server.
Unsecured Credentials: Bash History | T1552.003 | The team found bash history files on Workstation 5 that appeared to contain SSH passwords.
Credentials from Password Stores: Password Managers | T1555.005 | The team pulled credentials from a KeePass database.
Adversary-in-the-Middle: LLMNR/NBT-NS Poisoning and SMB Relay | T1557.001 | The team ran the DFSCoerce python script, which prompted DC authentication to a server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Steal or Forge Kerberos Tickets: Golden Ticket | T1558.001 | The team used the acquired krbtgt account hash throughout their assessment to forge legitimate TGTs.
Steal or Forge Kerberos Tickets: Kerberoasting | T1558.003 | The team leveraged Rubeus and DFSCoerce in an NTLM relay attack to obtain the DC’s TGT from a host with Unconstrained Delegation enabled.
Discovery
Technique Title | ID | Use
System Network Configuration Discovery | T1016 | The team queried the AD for information about the network’s sites and subnets.
Remote System Discovery | T1018 | The team queried the AD, during Phases I and II, for information about computers on the network.
System Network Connections Discovery | T1049 | The team listed existing network connections on SCCM Server 1 to reveal an active SMB connection with Server 2.
Permission Groups Discovery: Domain Groups | T1069.002 | The team leveraged ldapsearch and dsquery to query and scrape Active Directory information.
Account Discovery: Domain Account | T1087.002 | The team queried AD for AD users (during Phases I and II), including for members of a SharePoint admin group and several standard user accounts with administrative access.
Cloud Infrastructure Discovery | T1580 | The team found SecOps network diagrams on a host detailing cloud infrastructure boundaries.
Domain Trust Discovery | T1482 | During Phase II, the team enumerated trust relationships within the AD Forest.
Group Policy Discovery | T1615 | The team scraped AD information, including GPOs.
Network Service Discovery | T1046 | During Phase II, the team enumerated ports on target systems from a previously compromised workstation.
System Owner/User Discovery | T1033 | During Phase II, the team enumerated the AD for current session information from every domain computer (workstation and server).
Lateral Movement
Technique Title | ID | Use
Remote Services: SMB/Windows Admin Shares | T1021.002 | The team moved laterally with an SMB beacon. During Phase II, they used compromised workstation and domain admin accounts to upload a payload via SMB on several target Workstations and the DC.
Use Alternate Authentication Material: Pass the Hash | T1550.002 | The team ran the DFSCoerce python script, which prompted DC authentication to a server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Pass the Ticket | T1550.003 | The team used the asktgt command to impersonate accounts for which they had credentials by requesting account TGTs.
Command and Control
Technique Title | ID | Use
Application Layer Protocol | T1071 | The team remotely enumerated the local administrators group on target hosts to find valid user accounts. This technique relies on anonymous SMB pipe binds, which are disabled by default starting with Server 2016. During Phase II, the team established sessions that originated from a target Workstation and from the DC directly to an external host over a clear text protocol.
Application Layer Protocol: Web Protocols | T1071.001 | The team’s C2 redirectors used HTTPS reverse proxies to redirect C2 traffic.
Application Layer Protocol: File Transfer Protocols | T1071.002 | The team used HTTPS reverse proxies to redirect C2 traffic between the target network and the team’s Cobalt Strike servers.
Encrypted Channel | T1573 | The team’s C2 traffic was encrypted in transit using encryption keys stored on their C2 servers.
Ingress Tool Transfer | T1105 | During Phase II, the team uploaded and executed well-known malicious files to the DC to generate host-based alerts.
Proxy: External Proxy | T1090.002 | The team used redirectors to redirect C2 traffic between the target organization’s network and the team’s C2 servers.
Proxy: Domain Fronting | T1090.004 | The team used domain fronting to disguise outbound traffic in order to diversify the domains with which the persistent beacons were communicating.
Impact
Technique Title | ID | Use
Account Access Removal | T1531 | During Phase II, the team locked out several administrative AD accounts.


Source:
https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-059a

Certified Pre-Owned

Will Schroeder

Published in Posts By SpecterOps Team Members
Jun 17, 2021

TL;DR Active Directory Certificate Services has a lot of attack potential! Check out our whitepaper “Certified Pre-Owned: Abusing Active Directory Certificate Services” for complete details. We’re also presenting this material at Black Hat USA 2021.

[EDIT 06/22/21] — We’ve updated some of the details for ESC1 and ESC2 in this post; the whitepaper will be updated shortly.

For the past several months, we (Will Schroeder and Lee Christensen) have been diving into the security of Active Directory Certificate Services (AD CS). While several aspects of Active Directory have received thorough attention from a security perspective, Active Directory Certificate Services has been relatively overlooked. AD CS is Microsoft’s PKI implementation that provides everything from encrypting file systems, to digital signatures, to user authentication (a large focus of our research), and more. While AD CS is not installed by default for Active Directory environments, from our experience in enterprise environments it is widely deployed, and the security ramifications of misconfigured certificate service instances are enormous.

Today we’re releasing the results of our research so far (there is still much to look at, and we know we have missed things) in the form of an extensive whitepaper and a defensive PowerShell toolkit for auditing these issues. The toolkit is heavily defensive focused, but we will also release two offensive tools in ~45 days at Black Hat, as we believe that the issues described in the paper are severe and widespread enough to warrant a delay in the offensive tool release. The whitepaper also contains substantial preventative and detective guidance.

AD CS and its security implications are complicated, and we highly recommend reading the whitepaper for complete context. This post is a brief summary of the paper, and we will release a number of additional posts in the coming weeks and months to highlight elements of the research.

So why care about this? Certificate abuse can grant an attacker long-lived credential theft and account persistence, paths to domain privilege escalation, and domain-wide persistence through theft of CA keys (each covered in its own section below).

Of note, nearly every environment with AD CS that we’ve examined for domain escalation misconfigurations has been vulnerable. It’s hard for us to overstate what a big deal these issues are.

Sidenote: because of the number of attacks we ended up documenting in this research, we have tagged each attack with an ID (e.g., ESC2) as well as each defense (e.g., DETECT3). This is for ease of mapping of attacks to appropriate defenses in the whitepaper.

Active Directory Certificate Services Crash Course

Common Terms and Acronyms

There are a lot of terms and acronyms we’re going to be using throughout this post (and paper), so here’s a quick breakdown of a few for reference:

  • PKI (Public Key Infrastructure) — a system to manage certificates/public key encryption
  • AD CS (Active Directory Certificate Services) — Microsoft’s PKI implementation
  • CA (Certificate Authority) — PKI server that issues certificates
  • Enterprise CA — CA integrated with AD (as opposed to a standalone CA), offers certificate templates
  • Certificate Template — a collection of settings and policies that defines the contents of a certificate issued by an enterprise CA
  • CSR (Certificate Signing Request) — a message sent to a CA to request a signed certificate
  • EKU (Extended/Enhanced Key Usage) — one or more object identifiers (OIDs) that define how a certificate can be used

Overview

AD CS is a server role that functions as Microsoft’s public key infrastructure (PKI) implementation. As expected, it integrates tightly with Active Directory and enables the issuing of certificates, which are X.509-formatted digitally signed electronic documents that can be used for encryption, message signing, and/or authentication (our research focus).

The information included in a certificate binds an identity (the subject) to a public/private key pair. An application can then use the key pair in operations as proof of the identity of the user. Certificate Authorities (CAs) are responsible for issuing certificates.

At a high level, clients generate a public-private key pair, and the public key is placed in a certificate signing request (CSR) message along with other details such as the subject of the certificate and the certificate template name. Clients then send the CSR to the Enterprise CA server. The CA server then checks if the client is allowed to request certificates. If so, it determines if it will issue a certificate by looking up the certificate template AD object (more on these shortly) specified in the CSR. The CA will check if the certificate template AD object’s permissions allow the authenticating account to obtain a certificate. If so, the CA generates a certificate using the “blueprint” settings defined by the certificate template (e.g., EKUs, cryptography settings, issuance requirements, etc.) and using the other information supplied in the CSR if allowed by the certificate’s template settings. The CA signs the certificate using its private key and then returns it to the client.
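
To make that flow concrete, here is a minimal enrollment sketch from a domain-joined Windows client using the built-in PKI module; the “User” template name is only an example, and your environment’s published templates will differ.

    # Generate a key pair, build and submit the CSR to the enterprise CA, and (if issued)
    # install the certificate into the current user's store.
    $result = Get-Certificate -Template "User" -CertStoreLocation Cert:\CurrentUser\My
    $result.Status
    $result.Certificate | Format-List Subject, Thumbprint, NotAfter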


Certificate Templates

AD CS Enterprise CAs issue certificates with settings defined by AD objects known as certificate templates. These templates are collections of enrollment policies and predefined certificate settings and contain things like “How long is this certificate valid for?”, “What is the certificate used for?”, “How is the subject specified?”, “Who is allowed to request a certificate?”, and a myriad of other settings.

The pKIExtendedKeyUsage attribute on an AD certificate template object contains an array of object identifiers (OIDs) enabled for the template. These EKU object identifiers affect what the certificate can be used for (PKI Solutions has a breakdown of the EKU OIDs available from Microsoft). Our research focused on EKUs that, when present in a certificate, permit authentication to Active Directory. We originally thought that only the “Client Authentication” OID (1.3.6.1.5.5.7.3.2) enabled this; however, our research also found that the following OID scenarios can enable certificate-based authentication:

  • Client Authentication (1.3.6.1.5.5.7.3.2)
  • PKINIT Client Authentication (1.3.6.1.5.2.3.4)*
  • Smart Card Logon (1.3.6.1.4.1.311.20.2.2)
  • Any Purpose (2.5.29.37.0)
  • No EKU present at all (i.e., a subordinate CA certificate)

*The 1.3.6.1.5.2.3.4 OID is not present in AD CS deployments by default and needs to be added manually, but it does work for client authentication.

Templates also have a number of other interesting settings, which we explore in depth in the whitepaper. The paper also covers template “Issuance Requirements,” which can function as preventative controls; we briefly touch on these in this post.

Subject Alternative Names

A Subject Alternative Name (SAN) is an extension that allows additional identities to be bound to a certificate beyond just the subject of the certificate. For example, if a web server hosts content for multiple domains, each applicable domain could be included in the SAN so that the web server only needs a single HTTPS certificate instead of one for each domain.

This is all well and good for HTTPS certificates, but when combined with certificates that allow domain authentication, a dangerous scenario can arise. By default during certificate-based authentication, certificates are mapped to Active Directory accounts based on a user principal name (UPN) specified in the SAN. So, if an attacker can specify an arbitrary SAN when requesting a certificate that enables domain authentication, and the CA creates and signs a certificate using the attacker-supplied SAN, the attacker can become any user in the domain! Domain escalation scenarios can result from various AD CS template misconfigurations that allow unprivileged users to supply an arbitrary SAN in a certificate enrollment. We’ll cover these situations in the Domain Escalation section.
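
To see this mapping for yourself, the read-only sketch below lists the Subject Alternative Name extension of certificates in the current user’s store; a UPN entry there is what AD uses to map the certificate to an account during authentication.

    # OID 2.5.29.17 is the subjectAltName extension; Format($true) renders it as readable text.
    Get-ChildItem Cert:\CurrentUser\My | ForEach-Object {
        $san = $_.Extensions | Where-Object { $_.Oid.Value -eq '2.5.29.17' }
        if ($san) {
            [PSCustomObject]@{
                Subject = $_.Subject
                SAN     = $san.Format($true).Trim()
            }
        }
    }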

Active Directory Authentication with Certificates

Last year, @_ethicalchaos_ made a PR to Rubeus to implement PKINIT abuse, and covers the details in depth in their post on attacking smart card based Active Directory networks. This was a missing link for us offensively, and it means that we can now use Rubeus to request a Kerberos ticket granting ticket (TGT) using a certificate enabled for domain authentication.

That’s right, we don’t need a physical smart card or the Windows Credential Store to perform this certificate-based Kerberos authentication! Benjamin Delpy’s (@gentilkiwi) Kekeo has supported this for years, but the Rubeus implementation made it more readily usable for our operations.

During our research, we also found that some protocols use Schannel — the security package backing SSL/TLS — to authenticate domain users. LDAPS is a commonly enabled use case. For example, the PowerShell script Get-LdapCurrentUser can authenticate to LDAPS with a certificate and perform an LDAP whoami to show which account authenticated.

Account Persistence

If an Enterprise CA exists, a user (or machine) can request a cert for any template available to them for enrollment. The whitepaper covers theft of existing certificates, but we’re only going to touch on “active” malicious enrollments here. Our goal, in the context of user credential theft, is to request a certificate for a template that allows us to authenticate to Active Directory as that user (or machine). For complete details, see the “Account Persistence” section in the whitepaper.

The Certify.exe find /clientauth command will query LDAP for available templates that we can examine for our desired criteria.

This can also be done via PSPKIAudit with Get-AuditCertificateTemplate | ?{$_.HasAuthenticationEku}

If we have GUI access to a host, we can manually request a certificate through certmgr.msc. Alternatively, Certify (or certreq.exe) can be used for these malicious enrollments.

These issued certificates can then be used with Rubeus to authenticate to Active Directory as this user, for as long as the certificate is valid. This is an alternative method of long-term credential theft that doesn’t touch LSASS and can be performed from a non-elevated context!
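
For reference, a typical Rubeus invocation for this looks roughly like the following (a sketch; the user name, PFX path, and password are placeholders).

    # Request a TGT via PKINIT using a certificate enabled for client authentication;
    # /ptt injects the resulting ticket into the current logon session.
    .\Rubeus.exe asktgt /user:jdoe /certificate:C:\Temp\jdoe.pfx /password:CertPassword /ptt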

This also works for machine certificates, which can be combined with S4U2Self to obtain a Kerberos service ticket to any service on the host (e.g., CIFS, HTTP, RPCSS, etc.) as any user. Elad Shamir’s excellent post about Kerberos delegation attacks details this attack scenario.

And since certificates are independent authentication material, these certificates will still be usable even if the user (or computer) resets their password!

Domain Escalation

While there isn’t necessarily anything inherently insecure about AD CS (except for ESC8, detailed below), it is surprisingly easy to misconfigure its various elements, resulting in ways for unelevated users to escalate privileges in the domain. We’ll briefly cover the main sets of misconfigurations, but again, see the whitepaper for complete details.

Misconfigured Certificate Templates — ESC1

In order to abuse this misconfiguration, the following conditions must be met:

  1. The Enterprise CA grants low-privileged users enrollment rights. The Enterprise CA’s configuration must permit low-privileged users the ability to request certificates. See the “Background — Certificate Enrollment” section in the whitepaper for more details.
  2. Manager approval is disabled. When enabled, this setting requires a user with certificate manager permissions to review and approve the requested certificate before the certificate is issued. See the “Background — Certificate Enrollment — Issuance Requirements” section in the whitepaper for more details.
  3. No authorized signatures are required. When configured, this issuance requirement forces CSRs to be signed by an existing authorized certificate. See the “Background — Certificate Enrollment — Issuance Requirements” section in the whitepaper for more details.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Having certificate enrollment rights allows a low-privileged attacker to request and obtain a certificate based on the template. Enrollment rights are granted via the certificate template AD object’s security descriptor.
  5. The certificate template defines EKUs that enable authentication. Applicable EKUs include Client Authentication (OID 1.3.6.1.5.5.7.3.2), PKINIT Client Authentication (1.3.6.1.5.2.3.4), Smart Card Logon (OID 1.3.6.1.4.1.311.20.2.2), Any Purpose (OID 2.5.29.37.0), or no EKU (SubCA).
  6. The certificate template allows requesters to specify a subjectAltName (SAN) in the CSR. If a requester can specify the SAN in a CSR, the requester can request a certificate as anyone (e.g., a domain admin user). The certificate template’s AD object specifies if the requester can specify the SAN in its mspki-certificate-name-flag property. The mspki-certificate-name-flag property is a bitmask and if the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag is present, a requester can specify the SAN. This is surfaced as the “Supply in request” option in the “Subject Name” tab in certtmpl.msc.
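
Putting the template-side conditions above together, here is a rough, read-only audit sketch (assuming the ActiveDirectory PowerShell module). It only checks conditions 5 and 6 against the template objects; enrollment rights, manager approval, and authorized-signature requirements still need to be reviewed separately (e.g., with PSPKIAudit or Certify).

    Import-Module ActiveDirectory

    # EKUs that permit authentication to AD (see the EKU list in the Certificate Templates section).
    $authEkus = @('1.3.6.1.5.5.7.3.2', '1.3.6.1.5.2.3.4', '1.3.6.1.4.1.311.20.2.2', '2.5.29.37.0')

    $configNC = (Get-ADRootDSE).configurationNamingContext
    $templateContainer = "CN=Certificate Templates,CN=Public Key Services,CN=Services,$configNC"

    Get-ADObject -SearchBase $templateContainer -LDAPFilter '(objectClass=pKICertificateTemplate)' -Properties 'msPKI-Certificate-Name-Flag','pKIExtendedKeyUsage' |
      Where-Object {
          # CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT (0x1): the requester can supply the subject/SAN.
          ($_.'msPKI-Certificate-Name-Flag' -band 0x1) -and
          # An authentication EKU is present, or no EKU is defined at all (SubCA-style).
          (-not $_.pKIExtendedKeyUsage -or ($_.pKIExtendedKeyUsage | Where-Object { $authEkus -contains $_ }))
      } | Select-Object Name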

Misconfigured Certificate Templates — ESC2

In order to abuse this misconfiguration, the following conditions must be met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.
  2. Manager approval is disabled. Details are the same as in ESC1.
  3. No authorized signatures are required. Details are the same as in ESC1.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.
  5. The certificate template defines Any Purpose EKUs or no EKU.

[EDIT 06/22/21]

While templates with these EKUs can’t be used to request authentication certificates as other users without the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag being present (i.e., ESC1), an attacker can use them to authenticate to AD as the user who requested them and these two EKUs are certainly dangerous on their own.

We were initially a bit unclear about the capabilities of the Any Purpose and subordinate CA (SubCA) EKUs, but others reached out and helped us clarify our understanding. An attacker can use a certificate with the Any Purpose EKU for (surprise!) any purpose — client authentication, server authentication, code signing, etc. In contrast, an attacker can use a certificate with no EKUs — a subordinate CA certificate — for any purpose as well but could also use it to sign new certificates. As such, using a subordinate CA certificate, an attacker could specify arbitrary EKUs or fields in the new certificates.

HOWEVER, if the subordinate CA is not trusted by the NTAuthCertificates object (which it won’t be by default), the attacker cannot create new certificates that will work for domain authentication. Still, the attacker can create new certificates with any EKU and arbitrary certificate values, of which there’s plenty the attacker could potentially abuse (e.g., code signing, server authentication, etc.) and might have large implications for other applications in the network like SAML, AD FS, or IPSec.

We feel confident in stating that it’s very bad if an attacker can obtain an Any Purpose or subordinate CA (SubCA) certificate, regardless of whether it’s trusted by NTAuthCertificates or not.

[/EDIT]

Enrollment Agent Templates — ESC3

In order to abuse this misconfiguration, the following conditions must be met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.
  2. Manager approval is disabled. Details are the same as in ESC1.
  3. No authorized signatures are required. Details are the same as in ESC1.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.
  5. The certificate template defines the Certificate Request Agent EKU. The Certificate Request Agent OID (1.3.6.1.4.1.311.20.2.1) allows for requesting other certificate templates on behalf of other principals.
  6. Enrollment agent restrictions are not implemented on the CA.

The Certificate Request Agent EKU (OID 1.3.6.1.4.1.311.20.2.1), known as “Enrollment Agent” in Microsoft documentation, allows a principal to enroll for a certificate on behalf of another user. For anyone who enrolls in such a template, the resulting certificate can be used to co-sign requests on behalf of any user, for any Schema Version 1 template or any Schema Version 2+ template that requires the appropriate “Authorized Signatures/Application Policy” Issuance Requirement. This also assumes that there are no limiting Enrollment Agent Restrictions on the CA.

The few sentences above might need a bit of clarification. If an attacker is able to enroll in a template with a “Certificate Request Agent” EKU, they can enroll on behalf of any user for any Version 1 certificate template, or any Version 2+ template configured to explicitly require this co-signing scenario. Schema Version 1 templates don’t implement this type of Issuance Requirement, so all are on the table. Specifically, the User and Machine/Computer templates are prime targets as they contain the Client Authentication EKU and are published by default (though this can be changed), and there are other Version 1 templates that can be vulnerable if published.

If a Version 1 template is duplicated for modification, it automatically becomes Schema Version 2 by default, meaning the “Certificate Request Agent” co-signing approach will NOT work against it unless such an issuance requirement is explicitly specified.

A bit confusing? We know. We do our best to break this down in more depth in the whitepaper, but it’s a complex set of interwoven restrictions.

Vulnerable Certificate Template Access Control — ESC4

Certificate templates are securable objects in Active Directory, meaning they have a security descriptor that specifies which Active Directory principals have specific permissions over the template. For more background on Active Directory ACLs, see our (other) whitepaper on the subject.

We say that a template is misconfigured at the access control level if it has Access Control Entries (ACEs) that allow unintended, or otherwise unprivileged, Active Directory principals to edit sensitive security settings in the template. That is, if an attacker is able to chain access to a point where they can actively push a misconfiguration to a template that is not otherwise vulnerable (e.g., by enabling the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT bit in the mspki-certificate-name-flag property for a template that allows for domain authentication), we end up with domain compromise scenarios similar to what we’ve already covered. An example of this we have seen in multiple environments is Domain Computers having FullControl or WriteDacl permissions over a certificate template’s AD object, allowing attackers with access to any AD computer to modify the certificate template to a dangerous state. This is a scenario explored in Christoph Falta’s GitHub repo.
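
As a starting point for hunting this, the rough sketch below (read-only, assuming the ActiveDirectory module and its AD: drive) lists ACEs on certificate template objects that grant write-level rights to broad groups; the group and rights filters are illustrative and should be tuned to your environment.

    Import-Module ActiveDirectory

    $configNC = (Get-ADRootDSE).configurationNamingContext
    $templatePath = "AD:\CN=Certificate Templates,CN=Public Key Services,CN=Services,$configNC"

    Get-ChildItem $templatePath | ForEach-Object {
        $template = $_
        (Get-Acl -Path "AD:\$($template.DistinguishedName)").Access |
          Where-Object {
              $_.IdentityReference -match 'Domain Users|Domain Computers|Authenticated Users' -and
              $_.ActiveDirectoryRights -match 'GenericAll|GenericWrite|WriteDacl|WriteOwner|WriteProperty'
          } |
          ForEach-Object {
              [PSCustomObject]@{ Template = $template.Name; Principal = $_.IdentityReference; Rights = $_.ActiveDirectoryRights }
          }
    }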

Vulnerable PKI Object Access Control — ESC5

We won’t touch on this one as heavily here, but a number of objects outside of certificate templates and the certificate authority itself can have a security impact on the entire AD CS system.

These possibilities include (but are not limited to):

  • CA server’s AD computer object (i.e., compromise through RBCD)
  • The CA server’s RPC/DCOM server
  • Any descendant AD object or container in the container CN=Public Key Services,CN=Services,CN=Configuration,DC=<COMPANY>,DC=<COM> (e.g., the Certificate Templates container, Certification Authorities container, the NTAuthCertificates object, the Enrollment Services container, etc.)

EDITF_ATTRIBUTESUBJECTALTNAME2 — ESC6

Another way to supply arbitrary SANs, described in a CQure Academy post, involves the EDITF_ATTRIBUTESUBJECTALTNAME2 flag. As Microsoft describes, “If this flag is set on the CA, any request (including when the subject is built from Active Directory®) can have user defined values in the subject alternative name.” This means that ANY template configured for domain authentication that also allows unprivileged users to enroll (e.g., the default User template) can be abused to obtain a certificate that allows us to authenticate as a domain admin (or any other active user/machine). As this Keyfactor post describes, this setting “just makes it work”, which is why it’s likely flipped in many environments by sysadmins who don’t fully understand the security implications.
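
If you want to check a CA for this flag yourself, the sketch below inspects the policy module’s EditFlags value on the CA host (EDITF_ATTRIBUTESUBJECTALTNAME2 is bit 0x40000); certutil -getreg policy\EditFlags shows the same data. The CA name is a placeholder.

    # Run locally on the CA server; replace the CA common name with your own.
    $caName = 'CORP-ISSUING-CA'
    $key = "HKLM:\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\$caName\PolicyModules\CertificateAuthority_MicrosoftDefault.Policy"
    $editFlags = (Get-ItemProperty -Path $key -Name EditFlags).EditFlags
    if ($editFlags -band 0x40000) {
        'EDITF_ATTRIBUTESUBJECTALTNAME2 is set (ESC6 condition present)'
    } else {
        'EDITF_ATTRIBUTESUBJECTALTNAME2 is not set'
    }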


Vulnerable Certificate Authority Access Control — ESC7

Outside of certificate templates, a certificate authority itself has permissions (accessible through certsrv.msc) that secure various CA actions. From a security perspective we care about the ManageCA (aka “CA Administrator”) and ManageCertificates (aka “Certificate Manager/Officer”) permissions.

The ManageCA permission grants a principal the ability to perform “Administrative” CA actions, including the modification of persistent configuration data. This includes the EDITF_ATTRIBUTESUBJECTALTNAME2 flag, allowing any principal with the ManageCA permission to set that flag and create the ESC6 scenario. This can be done with PSPKI’s Enable-PolicyModuleFlag cmdlet.

The ManageCertificates permission allows the principal to approve pending certificate requests, negating the “Manager Approval” Issuance Requirement/protection. So while it can’t be used on its own to compromise the domain, it can function as a protection bypass.

NTLM Relay to AD CS HTTP Endpoints — ESC8

We cover this in more detail in the “Background — Certificate Enrollment” section of the whitepaper, but AD CS supports several HTTP-based enrollment methods via additional server roles that administrators can optionally install.

These HTTP-based certificate enrollment interfaces are all vulnerable to NTLM relay attacks. Using NTLM relay, an attacker can impersonate an inbound-NTLM-authenticating victim user. While impersonating the victim user, an attacker could access these web interfaces and request a client authentication certificate based on the User or Machine certificate templates.

This attack, like all NTLM relay attacks, requires a victim account to authenticate to an attacker-controlled machine. An attacker can coerce authentication by many means, but a simple technique is to coerce a machine account to authenticate to the attacker’s host using the MS-RPRN RpcRemoteFindFirstPrinterChangeNotification(Ex) methods using a tool like SpoolSample or Dementor. The attacker can then use NTLM relay to impersonate the machine account and request a client authentication certificate (e.g., the default Machine/Computer template) as the victim machine account. If the victim machine account can perform privileged actions such as domain replication (e.g., domain controllers or Exchange servers), the attacker could use this certificate to compromise the domain. Otherwise, the attacker could logon as the victim machine account and use S4U2Self as previously described to access the victim machine’s host OS.

  • Note: Newer OSes have patched the MS-RPRN coerced authentication “feature”. However, almost every environment we examine still has Server 2016 machines running, which are still vulnerable to this. There are other ways to coerce accounts to authenticate to an attacker as well.

In summary, if an environment has AD CS installed, along with a vulnerable web enrollment endpoint and at least one certificate template published that allows for domain computer enrollment and client authentication (like the default Machine/Computer template), then an attacker can compromise ANY computer with the spooler service running!

These attack scenarios work because some enrollment HTTP endpoints do not have HTTPS enabled, and none of them have NTLM relay protections enabled by default. Organizations should disable these HTTP-based enrollment server roles if they are not in use. Otherwise, network defenders can disable NTLM authentication using GPOs or by configuring the associated IIS applications to only accept Kerberos authentication. If organizations cannot remove the endpoints or outright disable NTLM authentication, they should only allow HTTPS traffic and configure the IIS applications to use Extended Protection for Authentication (EPA).

This specific issue was reported to MSRC, along with the other template escalation misconfigurations. The official response was, “We determined your finding is valid but does not meet our bar for a security update release.”

Note: While we have verified that this attack is possible, we are waiting to publicly demonstrate it at our Black Hat talk to help facilitate fixing the issue first.

Domain Persistence

Active Directory Enterprise CAs are hooked into the authentication system of AD and the CA root certificate private key is used to sign newly issued certificates. If we stole this private key, would we be able to forge our own certificates that could be used (without a smart card) to authenticate to Active Directory as anyone in the organization?

Spoiler: yes. And this has already been possible with Mimikatz/Kekeo for years. I guess we should call these golden certificates?

The certificate exists on the CA server itself, with its private key protected by machine DPAPI if a TPM/HSM is not used for hardware-based protection. If the key is not hardware protected, Mimikatz and SharpDPAPI can extract the CA certificate and private key from the CA.

With this key, you can create and sign new certificates for ANY user and use these forged certificates to authenticate to AD for as long as the CA cert is valid (default of 5 years, but often longer). Our tool ForgeCert (which will be released at Black Hat USA 2021 along with Certify) can perform these forgeries.

Oh, and these certs can’t be revoked, since they were never actually issued by the CA itself, as detailed by Benjamin Delpy.

Unfortunately, there isn’t a huge amount of public incident response guidance for AD CS. But if a root CA’s key is stolen, the entire AD CS system will likely need to be rebuilt, invalidating every issued certificate.

Defensive Advice

Not only are we self-embargoing the offensive tool release for these abuses, but we’ve also spent a large amount of effort researching both preventative and detective controls for these attacks. Part of the motivation for breaking out attacks and associated defensive protections with individual identifiers was to make the whitepaper material as digestible as possible for defenders.

Besides identifying and mitigating the privilege escalation vulnerabilities, something we want to emphasize from an incident response perspective is that it is not enough to reset a compromised user’s password and/or reimage their machine. Certificate theft is trivial in most environments given code execution in a user or computer context and would allow an attacker to authenticate to AD for years — even after the account’s password has been reset. Therefore, when an account or machine is compromised, incident responders should identify and invalidate any certificates associated with the compromised accounts as well. PSPKIAudit’s Get-CertRequest can help perform this type of triage.
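
A quick, partial triage sketch is below (assuming the ActiveDirectory module; the account name is a placeholder). It only decodes certificates published to the account’s userCertificate attribute in AD; the authoritative record of what the CA actually issued is the CA database, which is what PSPKIAudit’s Get-CertRequest queries.

    # List any certificates published on the (hypothetical) compromised account and show validity.
    Get-ADUser -Identity compromised.user -Properties userCertificate |
      Select-Object -ExpandProperty userCertificate |
      ForEach-Object { [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($_) } |
      Select-Object Subject, Thumbprint, NotBefore, NotAfter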

As the defenses for these attacks are multi-pronged, at this point we’re recommending defenders study the attacks, read the extensive “Defensive Guidance” section of the whitepaper, and reference Microsoft’s Securing PKI documentation. Defenders can also try out PSPKIAudit’s Invoke-PKIAudit function to check for the misconfigurations described in this post.

Wrap-up

Even months into this research, we believed that there wasn’t necessarily anything inherently insecure about Active Directory Certificate Services. While the entire system is very dangerous if an organization doesn’t fully understand AD CS or its security implications (as it’s extremely easy to misconfigure) there didn’t appear to be any “out of the box” vulnerabilities. That said, we have seen a proliferation of the ESC1–7 elevation issues in real environments since we began looking in January 2021. We feel administrators have been given a powerful weapon with the safety off for 20 years and there’s been little safety training. An attitude of, “Well, admins should have known better” in this scenario, without even providing a way to audit or investigate these issues programmatically from a defensive context, is, well, a position we suppose.

However, beyond the template misconfiguration scenarios, the ESC8 relay situation is a serious security issue. We reported this relay issue to MSRC on May 19th along with all domain escalation scenarios, and received a response on June 8th of “We determined your finding is valid but does not meet our bar for a security update release. We considered that servers with AD CS roles could mitigate this risk with a change in configuration settings to enable Extended Protection for Authentication (EPA), per this blog post.” MSRC stated that they also opened up a bug concerning the template issues and our comments about poor telemetry with the AD CS feature team, who may consider additional design changes in a future release.

To be clear, based on our research, if you are running AD CS with ANY template a domain computer can enroll in that also allows domain authentication (e.g., the Machine/Computer template that is available by default), ANY system running the spooler service can be compromised. Based on our extensive experience assessing AD environments, we believe this is very bad. If you find you are vulnerable to this, consider contacting your nearest Microsoft representative and asking them why this insecure default configuration is allowed. As of right now, they have no intentions of directly servicing the issue, but said they may fix it at some indeterminate future date.

From a defensive perspective, you should immediately enumerate the Web Enrollment interfaces enabled in your environment (possible with PSPKIAudit) and then either remove them, disable NTLM authentication to them, or enforce HTTPS and enable EPA on the IIS server component. For specifics on how to do this, please see “Defensive Guidance — Harden AD CS HTTP Endpoints — PREVENT8” in the whitepaper. We also strongly recommend organizations audit their AD CS architecture and certificate templates and treat CA servers (including subordinate CAs) as Tier 0 assets with the same protections as Domain Controllers! The “Defensive Guidance” section of the whitepaper has more information on how to proactively prevent, detect, and respond to the attacks we’ve detailed.

Yes, we’re working to integrate the escalation paths into BloodHound, but as you can see this whole thing is rather complicated, and we want to get it right. But rest assured, it’s under development and will be released in FOSS BloodHound.

And finally, as a disclaimer, we are not stating that we know every security issue concerning AD CS. We took our best shot in this research, but we are confident that there are additional issues and attacker tradecraft implications that we (or others) will find in the coming months, or things we have missed.

Acknowledgements

As is almost always the case, we’re standing on a number of shoulders with this research; the whitepaper gives a more complete treatment of prior work.

Special thanks to Mark Gamache for collaborating with us on parts of this work. He independently discovered many of these abuses, reached out to us, and brought many additional details to our attention while we were performing this research.

As always, we tried our best to cite the existing work out there that we came across, but we’re sure we missed things.

Source:

https://posts.specterops.io/certified-pre-owned-d95910965cd2