The complete guide to WSUS and Configuration Manager SUP maintenance

This article addresses some common questions about WSUS maintenance for Configuration Manager environments.

Original product version:   Windows Servers, Windows Server Update Services, Configuration Manager
Original KB number:   4490644

Introduction

Questions about WSUS maintenance are often along the lines of "How should I properly run this maintenance in a Configuration Manager environment?" or "How often should I run it?" It's not uncommon for conscientious Configuration Manager administrators to be unaware that WSUS maintenance should be run at all. Most of us just set up WSUS servers because a WSUS server is a prerequisite for a software update point (SUP). Once the SUP is set up, we close the WSUS console and pretend it doesn't exist. Unfortunately, neglecting WSUS maintenance can cause problems for Configuration Manager clients and for the overall performance of the WSUS/SUP server.

With the understanding that this maintenance needs to be done, you may wonder what maintenance you need to do and how often. The answer is that you should perform monthly maintenance. Maintenance is easy and doesn't take long for WSUS servers that have been well maintained from the start. However, if it has been some time since WSUS maintenance was last done, the first cleanup may be more difficult or time consuming. It will be much easier and faster in subsequent months.

Maintain WSUS while supporting Configuration Manager current branch version 1906 and later versions

If you are using Configuration Manager current branch version 1906 or a later version, we recommend that you enable the WSUS Maintenance options in the software update point configuration at the top-level site to automate the cleanup procedures after each synchronization. These options effectively handle all cleanup operations described in this article, except backup and reindexing of the WSUS database. You should still automate backup and reindexing of the WSUS database on a schedule.

Screenshot of the WSUS Maintenance options in Software Update Point Components Properties window.

For more information about software update maintenance in Configuration Manager, see Software updates maintenance.

Important considerations

 Note

If you are utilizing the maintenance features that have been added in Configuration Manager, version 1906, you don’t need to consider these items since Configuration Manager handles the cleanup after each synchronization.

  1. Before you start the maintenance process, read all of the information and instructions in this article.
  2. When using WSUS along with downstream servers, WSUS servers are added from the top down, but should be removed from the bottom up. When syncing or adding updates, they go to the upstream WSUS server first, then replicate down to the downstream servers. When performing a cleanup and removing items from WSUS servers, you should start at the bottom of the hierarchy.
  3. WSUS maintenance can be performed simultaneously on multiple servers in the same tier. When doing so, ensure that one tier is done before moving onto the next one. The cleanup and reindex steps described below should be run on all WSUS servers, regardless of whether they are a replica WSUS server or not. For more information about determining if a WSUS server is a replica, see Decline superseded updates.
  4. Ensure that SUPs don't sync during the maintenance process, as syncing may cause a loss of some work already done. Check the SUP sync schedule and temporarily set it to manual during this process.
    Screenshot of the Enable synchronization on a schedule setting.
  5. If you have multiple SUPs of the primary site or central administration site (CAS) which don't share the SUSDB, consider the WSUS server that syncs with the first SUP on the site as residing in a tier below the site. For example, my CAS site has two SUPs:
    • The one named New syncs with Microsoft Update, it would be my top tier (Tier1).
    • The server named 2012 syncs with New, and it would be considered in the second tier. It can be cleaned up at the same time I would do all my other Tier2 servers, such as my primary site’s single SUP.
    Screenshot of the two example SUPs.

Perform WSUS maintenance

The basic steps necessary for proper WSUS maintenance include:

  1. Back up the WSUS database
  2. Create custom indexes
  3. Reindex the WSUS database
  4. Decline superseded updates
  5. Run the WSUS Server Cleanup Wizard

Back up the WSUS database

Back up the WSUS database (SUSDB) by using the desired method. For more information, see Create a Full Database Backup.

Create custom indexes

This process is optional but recommended; it greatly improves performance during subsequent cleanup operations.

If you are using Configuration Manager current branch version 1906 or a later version, we recommend that you use Configuration Manager to create the indexes. To create the indexes, configure the Add non-clustered indexes to the WSUS database option in the software update point configuration for the top-most site.

Screenshot of the Add non-clustered indexes to the WSUS database option under WSUS Maintenance tab.

If you use an older version of Configuration Manager or standalone WSUS servers, follow these steps to create custom indexes in the SUSDB database. For each SUSDB, it’s a one-time process.

  1. Make sure that you have a backup of the SUSDB database.
  2. Use SQL Management Studio to connect to the SUSDB database, in the same manner as described in the Reindex the WSUS database section.
  3. Run the following script against SUSDB to create two custom indexes:

    -- Create custom index in tbLocalizedPropertyForRevision
    USE [SUSDB]

    CREATE NONCLUSTERED INDEX [nclLocalizedPropertyID] ON [dbo].[tbLocalizedPropertyForRevision]
    (
         [LocalizedPropertyID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    -- Create custom index in tbRevisionSupersedesUpdate
    CREATE NONCLUSTERED INDEX [nclSupercededUpdateID] ON [dbo].[tbRevisionSupersedesUpdate]
    (
         [SupersededUpdateID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    If custom indexes have been previously created, running the script again results in an error similar to the following one:

    Msg 1913, Level 16, State 1, Line 4
    The operation failed because an index or statistics with name 'nclLocalizedPropertyID' already exists on table 'dbo.tbLocalizedPropertyForRevision'.

Reindex the WSUS database

To reindex the WSUS database (SUSDB), use the Reindex the WSUS Database T-SQL script.

The steps to connect to SUSDB and perform the reindex differ depending on whether SUSDB is running in SQL Server or Windows Internal Database (WID). To determine where SUSDB is running, check the value of the SqlServerName registry entry on the WSUS server, located under the HKEY_LOCAL_MACHINE\Software\Microsoft\Update Services\Server\Setup subkey.

If the value contains just the server name or server\instance, SUSDB is running on a SQL Server. If the value includes the string ##SSEE or ##WID, SUSDB is running in WID, as shown:

Screenshot of SqlServerName-SSEE.
Screenshot of SqlServerName-WID.
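As a quick illustration (not part of the original article), the WID-versus-SQL-Server decision above boils down to a simple string check on the SqlServerName registry value. The helper function name below is hypothetical:

```python
def susdb_location(sql_server_name: str) -> str:
    """Classify where SUSDB runs, based on the SqlServerName registry value.

    A value containing ##WID or ##SSEE indicates Windows Internal Database;
    a plain server or server\\instance name indicates full SQL Server.
    """
    if "##WID" in sql_server_name or "##SSEE" in sql_server_name:
        return "WID"
    return "SQL Server"

# Examples of each registry value shape:
print(susdb_location(r"MICROSOFT##WID"))          # WID (Windows Server 2012 or later)
print(susdb_location(r"MSSQL$MICROSOFT##SSEE"))   # WID (older than Windows Server 2012)
print(susdb_location(r"SQLSERVER01\SUSDBINST"))   # named SQL Server instance
```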

If SUSDB was installed on WID

If SUSDB was installed on WID, SQL Server Management Studio Express must be installed locally to run the reindex script. The WID version, and therefore the pipe name used to connect, depends on the OS version.

After installing SQL Server Management Studio Express, launch it, and enter the server name to connect to:

  • If the OS is Windows Server 2012 or later versions, use \\.\pipe\MICROSOFT##WID\tsql\query.
  • If the OS is older than Windows Server 2012, enter \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query.

For WID, if errors similar to the following occur when attempting to connect to SUSDB using SQL Server Management Studio (SSMS), try launching SSMS using the Run as administrator option.

Screenshot of the Cannot connect to server error.

If SUSDB was installed on SQL Server

If SUSDB was installed on full SQL Server, launch SQL Server Management Studio and enter the name of the server (and instance if needed) when prompted.

 Tip

Alternatively, a utility called sqlcmd can be used to run the reindex script. For more information, see Reindex the WSUS Database.

Running the script

To run the script in either SQL Server Management Studio or SQL Server Management Studio Express, select New Query, paste the script in the window, and then select Execute. When it's finished, a Query executed successfully message is displayed in the status bar, and the Results pane contains messages about which indexes were rebuilt.

Screenshot of executing the SQL statement.
Screenshot of the successful log.

Decline superseded updates

Decline superseded updates in the WSUS server to help clients scan more efficiently. Before declining updates, ensure that the superseding updates are deployed, and that superseded ones are no longer needed. Configuration Manager includes a separate cleanup, which allows it to expire superseded updates based on specified criteria. For more information, see the following articles:

The following SQL query can be run against the SUSDB database to quickly determine the number of superseded updates. If the number of superseded updates is higher than 1500, it can cause various software-update-related issues on both the server and client sides.


-- Find the number of superseded updates
Select COUNT(UpdateID) from vwMinimalUpdate where IsSuperseded=1 and Declined=0

If you are using Configuration Manager current branch version 1906 or a later version, we recommend that you automatically decline the superseded updates by enabling the Decline expired updates in WSUS according to supersedence rules option in the software update point configuration for the top-most site.

Screenshot of the Decline expired updates in WSUS according to supersedence rules option under WSUS Maintenance tab.

When you use this option, you can see how many updates were declined by reviewing the WsyncMgr.log file after the synchronization process finishes. If you use this option, you don't need to use the script described later in this section (either by running it manually or by setting up a task to run it on a schedule).

If you are using standalone WSUS servers or an older version of Configuration Manager, you can manually decline superseded updates by using the WSUS console. Alternatively, you can use the following PowerShell script: copy it and save it as a script file named Decline-SupersededUpdatesWithExclusionPeriod.ps1.

 Note

This script is provided as is. It should be fully tested in a lab before you use it in production. Microsoft makes no guarantees regarding the use of this script in any way. Always run the script with the -SkipDecline parameter first, to get a summary of how many superseded updates will be declined.

If Configuration Manager is set to Immediately expire superseded updates (see below), the PowerShell script can be used to decline all superseded updates. It should be done on all autonomous WSUS servers in the Configuration Manager/WSUS hierarchy.

Screenshot of the Immediately expire superseded updates options under Supersedence Rules tab.

You don’t need to run the PowerShell script on WSUS servers that are set as replicas, such as secondary site SUPs. To determine whether a WSUS server is a replica, check the Update Source settings.

Screenshot of the Update Source and Proxy Server option.

If updates are not configured to be immediately expired in Configuration Manager, the PowerShell script must be run with an exclusion period that matches the Configuration Manager setting for number of days to expire superseded updates. In this case, it would be 60 days since SUP component properties are configured to wait two months before expiring superseded updates:

Screenshot of the months to expire superseded updates.
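To restate the arithmetic: the exclusion period passed to the script is the supersedence wait time converted to days. This is a hedged sketch (not from the original article), assuming 30-day months, which matches the article's two-months-equals-60-days example:

```python
def exclusion_period_days(months: int, days_per_month: int = 30) -> int:
    """Convert the SUP 'months to wait before expiring superseded updates'
    setting into the day count passed to -ExclusionPeriod.
    Assumes 30-day months, per the article's two months = 60 days example."""
    return months * days_per_month

print(exclusion_period_days(2))  # 60, matching the two-month example
print(exclusion_period_days(0))  # 0, when updates expire immediately
```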

The following command lines illustrate the various ways that the PowerShell script can be run (if the script is being run on the WSUS server, LOCALHOST can be used in place of the actual SERVERNAME):


Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530 -SkipDecline

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530 -ExclusionPeriod 60

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -UseSSL -Port 8531

Running the script with -SkipDecline and -ExclusionPeriod 60 to gather information about updates on the WSUS server and how many updates could be declined:

Screenshot of the Windows PowerShell window running SkipDecline and ExclusionPeriod 60.

Running the script with -ExclusionPeriod 60, to decline superseded updates older than 60 days:

Screenshot of the Windows PowerShell window with ExclusionPeriod 60 running.

The output and progress indicators are displayed while the script is running. Note the SupersededUpdates.csv file, which will contain a list of all updates that are declined by the script:

Screenshot of the Windows PowerShell output and progress indicator.

 Note

If issues occur when attempting to use the above PowerShell script to decline superseded updates, see the section Running the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script times out when connecting to the WSUS server, or a 401 error occurs while running for troubleshooting steps.

After superseded updates have been declined, for best performance, SUSDB should be reindexed again. For related information, see Reindex the WSUS database.

Run the WSUS Server Cleanup Wizard

WSUS Server Cleanup Wizard provides options to clean up the following items:

  • Unused updates and update revisions (also known as Obsolete updates)
  • Computers not contacting the server
  • Unneeded update files
  • Expired updates
  • Superseded updates

In a Configuration Manager environment, the Computers not contacting the server and Unneeded update files options are not relevant because Configuration Manager manages software update content and devices, unless either the Create all WSUS reporting events or Create only WSUS status reporting events option is selected under Software Update Sync Settings. If you have one of these options configured, you should consider automating the WSUS Server Cleanup Wizard to perform cleanup of these two options.

If you are using Configuration Manager current branch version 1906 or a later version, enabling the Decline expired updates in WSUS according to supersedence rules option handles declining of Expired updates and Superseded updates based on the supersedence rules that are specified in Configuration Manager. Enabling the Remove obsolete updates from the WSUS database option in Configuration Manager current branch version 1906 handles the cleanup of Unused updates and update revisions (Obsolete updates). It’s recommended to enable these options in the software update point configuration on the top-level site to allow Configuration Manager to clean up the WSUS database.

Screenshot of the Remove obsolete updates from the WSUS database option.

If you’ve never cleaned up obsolete updates from WSUS database before, this task may time out. You can review WsyncMgr.log for more information, and manually run the SQL script that is specified in HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out once, which would allow subsequent attempts from Configuration Manager to run successfully. For more information about WSUS cleanup and maintenance in Configuration Manager, see the docs.

For standalone WSUS servers, or if you are using an older version of Configuration Manager, it is recommended that you run the WSUS Server Cleanup Wizard periodically. If the WSUS Server Cleanup Wizard has never been run and WSUS has been in production for a while, the cleanup may time out. In that case, reindex the database first (steps 2 and 3 above), then run the cleanup with only the Unused updates and update revisions option checked.

If you have never run the WSUS Server Cleanup Wizard, the cleanup of Unused updates and update revisions may require a few passes. If it times out, run it again until it completes, and then run each of the other options one at a time. Lastly, make a full pass with all options checked. If timeouts continue to occur, see the SQL Server alternative in HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out. It may take multiple hours or days for the Server Cleanup Wizard or the SQL alternative to run to completion.

The WSUS Server Cleanup Wizard runs from the WSUS console. It is located under Options, as shown here:

Screenshot of the WSUS Server Cleanup Wizard location page.

For more information, see Use the Server Cleanup Wizard.

Screenshot of the WSUS Server Cleanup Wizard start page.

When the cleanup finishes, it reports the number of items it has removed. If you do not see this information returned on your WSUS server, it is safe to assume that the cleanup timed out. In that case, you will need to start it again or use the SQL alternative.

Screenshot of the WSUS Server Cleanup Wizard when finished.

After superseded updates have been declined, for best performance, SUSDB should be reindexed again. See the Reindex the WSUS database section for related information.

Troubleshooting

HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out

There are two different options here:

  1. Reinstall WSUS with a fresh database. There are a number of caveats related to this, including length of initial sync, and full client scans against SUSDB, versus differential scans.
  2. Ensure you have a backup of the SUSDB database, then run a reindex. When that completes, run the following script in SQL Server Management Studio or SQL Server Management Studio Express. After it finishes, follow all of the above instructions for running maintenance. This last step is necessary because the spDeleteUpdate stored procedure only removes unused updates and update revisions.

 Note

Before you run the script, follow the steps in The spDeleteUpdate stored procedure runs slowly to improve the performance of the execution of spDeleteUpdate.


DECLARE @var1 INT
DECLARE @msg nvarchar(100)

CREATE TABLE #results (Col1 INT)
INSERT INTO #results(Col1) EXEC spGetObsoleteUpdatesToCleanup

DECLARE WC CURSOR FOR
    SELECT Col1 FROM #results

OPEN WC
FETCH NEXT FROM WC INTO @var1
WHILE (@@FETCH_STATUS > -1)
BEGIN
    SET @msg = 'Deleting ' + CONVERT(varchar(10), @var1)
    RAISERROR(@msg, 0, 1) WITH NOWAIT
    EXEC spDeleteUpdate @localUpdateID = @var1
    FETCH NEXT FROM WC INTO @var1
END

CLOSE WC
DEALLOCATE WC

DROP TABLE #results

Running the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script times out when connecting to the WSUS server, or a 401 error occurs while running

If errors occur when you attempt to use the PowerShell script to decline superseded updates, an alternative SQL script can be run against SUSDB.

  1. If Configuration Manager is used along with WSUS, check Software Update Point Component Properties > Supersedence Rules to see how quickly superseded updates expire, such as immediately or after X months. Make a note of this setting.
    Screenshot of the Supersedence Rules.
  2. If you haven’t backed up the SUSDB database, do so before proceeding further.
  3. Use SQL Server Management Studio to connect to SUSDB.
  4. Run the following query. The number 90 in the line DECLARE @thresholdDays INT = 90 should correspond with the Supersedence Rules from step 1 of this procedure: the number of days that aligns with the number of months configured in Supersedence Rules. If updates are set to expire immediately, set the value of @thresholdDays in the SQL query to zero.

    -- Decline superseded updates in SUSDB; alternative to Decline-SupersededUpdatesWithExclusionPeriod.ps1
    DECLARE @thresholdDays INT = 90 -- Specify the number of days between today and the release date for which the superseded updates must not be declined (i.e., updates older than 90 days). This should match configuration of supersedence rules in SUP component properties, if ConfigMgr is being used with WSUS.
    DECLARE @testRun BIT = 0 -- Set this to 1 to test without declining anything.
    -- There shouldn't be any need to modify anything after this line.
    DECLARE @uid UNIQUEIDENTIFIER
    DECLARE @title NVARCHAR(500)
    DECLARE @date DATETIME
    DECLARE @userName NVARCHAR(100) = SYSTEM_USER
    DECLARE @count INT = 0

    DECLARE DU CURSOR FOR
        SELECT MU.UpdateID, U.DefaultTitle, U.CreationDate
        FROM vwMinimalUpdate MU
        JOIN PUBLIC_VIEWS.vUpdate U ON MU.UpdateID = U.UpdateId
        WHERE MU.IsSuperseded = 1
            AND MU.Declined = 0
            AND MU.IsLatestRevision = 1
            AND MU.CreationDate < DATEADD(dd, -@thresholdDays, GETDATE())
        ORDER BY MU.CreationDate

    PRINT 'Declining superseded updates older than ' + CONVERT(NVARCHAR(5), @thresholdDays) + ' days.' + CHAR(10)

    OPEN DU
    FETCH NEXT FROM DU INTO @uid, @title, @date
    WHILE (@@FETCH_STATUS > -1)
    BEGIN
        SET @count = @count + 1
        PRINT 'Declining update ' + CONVERT(NVARCHAR(50), @uid) + ' (Creation Date ' + CONVERT(NVARCHAR(50), @date) + ') - ' + @title + ' ...'
        IF @testRun = 0
            EXEC spDeclineUpdate @updateID = @uid, @adminName = @userName, @failIfReplica = 1
        FETCH NEXT FROM DU INTO @uid, @title, @date
    END

    CLOSE DU
    DEALLOCATE DU

    PRINT CHAR(10) + 'Attempted to decline ' + CONVERT(NVARCHAR(10), @count) + ' updates.'
  5. To check progress, monitor the Messages tab in the Results pane.

What if I find out I needed one of the updates that I declined?

If you decide you need one of these declined updates in Configuration Manager, you can get it back in WSUS by right-clicking the update, and selecting Approve. Change the approval to Not Approved, and then resync the SUP to bring the update back in.

Screenshot of the WSUS Approve Updates screen.

If the update is no longer in WSUS, it can be imported from the Microsoft Update Catalog, if it hasn’t been expired or removed from the catalog.

Screenshot shows how to import updates in WSUS.

Automating WSUS maintenance

 Note

If you are using Configuration Manager version 1906 or a later version, automate the cleanup procedures by enabling the WSUS Maintenance options in the software update point configuration of the top-level site. These options handle all cleanup operations that are performed by the WSUS Server Cleanup Wizard. However, you should still automatically back up and reindex the WSUS database on a schedule.

WSUS maintenance tasks can be automated, assuming that a few requirements are met first.

  1. If you have never run WSUS cleanup, you need to do the first two cleanups manually. Run your second manual cleanup 30 days after your first, since it takes 30 days for some updates and update revisions to age out. There are specific reasons why you don't want to automate until after your second cleanup: your first cleanup will probably run longer than normal, so you can't use it to judge how long this maintenance will normally take. The second cleanup is a much better indicator of what is normal for your machines. This matters because you need to figure out roughly how long each step takes as a baseline (I also like to add about 30 minutes of wiggle room) so that you can determine the timing for your schedule.
  2. If you have downstream WSUS servers, you will need to perform maintenance on them first, and then do the upstream servers.
  3. To schedule the reindex of the SUSDB, you will need a full version of SQL Server. Windows Internal Database (WID) doesn't have the capability of scheduling a maintenance task through SQL Server Management Studio Express. That said, in cases where WID is used, you can use the Task Scheduler with SQLCMD mentioned earlier. If you go this route, it's important that you don't sync your WSUS servers/SUPs during this maintenance period! If you do, it's possible your downstream servers will just end up resyncing all of the updates you just attempted to clean out. I schedule this overnight before my morning sync, so I have time to check on it before my sync runs.
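To make the baseline-plus-buffer idea from item 1 concrete, here's a small illustrative sketch (not part of the original article) that chains task start times from measured durations plus a 30-minute buffer; the function name and example durations are hypothetical:

```python
from datetime import datetime, timedelta

def plan_start_times(first_start, measured_minutes, buffer_minutes=30):
    """Given (task, measured duration in minutes) pairs, compute staggered
    start times so each task begins only after the previous task's
    measured duration plus a safety buffer has elapsed."""
    starts = []
    t = first_start
    for name, duration in measured_minutes:
        starts.append((name, t))
        t = t + timedelta(minutes=duration + buffer_minutes)
    return starts

plan = plan_start_times(
    datetime(2024, 1, 7, 1, 0),          # 1:00 AM on a first Sunday
    [("Cleanup", 30), ("Reindex", 20)],  # durations from your second manual run
)
for name, start in plan:
    print(name, start.strftime("%I:%M %p"))
# Cleanup starts at 1:00 AM; Reindex at 2:00 AM (30 min run + 30 min buffer)
```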

Needed/helpful links:

WSUS cleanup script


[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")` 
 | out-null 
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer(); 
$cleanupScope = new-object Microsoft.UpdateServices.Administration.CleanupScope; 
$cleanupScope.DeclineSupersededUpdates = $true        
$cleanupScope.DeclineExpiredUpdates = $true 
$cleanupScope.CleanupObsoleteUpdates = $true 
$cleanupScope.CompressUpdates = $true 
#$cleanupScope.CleanupObsoleteComputers = $true 
$cleanupScope.CleanupUnneededContentFiles = $true 
$cleanupManager = $wsus.GetCleanupManager(); 
$cleanupManager.PerformCleanup($cleanupScope);

Setting up the WSUS Cleanup task in Task Scheduler

 Note

As mentioned previously, if you are using Configuration Manager current branch version 1906 or a later version, automate the cleanup procedures by enabling the WSUS Maintenance options in the software update point configuration of the top-level site. For standalone WSUS servers or older versions of Configuration Manager, you can continue to use the following steps.

The Weekend Scripter blog post mentioned in the previous section contains basic directions and troubleshooting for this step. However, I’ll walk you through the process in the following steps.

  1. Open Task Scheduler and select Create a Task. On the General tab, set the name of the task and the user that you want to run the PowerShell script as (most people use a service account). Select Run whether a user is logged on or not, and then add a description if you wish.
    Screenshot of the WSUS Create a task screen.
  2. Under the Actions tab, add a new action and specify the program/script you want to run. In this case, we need to use PowerShell and point it to the PS1 file we want it to run. You can use the WSUS cleanup script. The commented-out lines are the cleanup options that Configuration Manager current branch version 1906 already performs; uncomment them if you are using standalone WSUS or an older version of Configuration Manager. If you would like a log, you can modify the last line of the script as follows:

    [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
    $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
    $cleanupScope = new-object Microsoft.UpdateServices.Administration.CleanupScope;
    # $cleanupScope.DeclineSupersededUpdates = $true # Performed by CM1906
    # $cleanupScope.DeclineExpiredUpdates = $true # Performed by CM1906
    # $cleanupScope.CleanupObsoleteUpdates = $true # Performed by CM1906
    $cleanupScope.CompressUpdates = $true
    $cleanupScope.CleanupObsoleteComputers = $true
    $cleanupScope.CleanupUnneededContentFiles = $true
    $cleanupManager = $wsus.GetCleanupManager();
    $cleanupManager.PerformCleanup($cleanupScope) | Out-File C:\WSUS\WsusClean.txt;

    You'll get an FYI/warning in Task Scheduler when you save. You can ignore this warning.
    Screenshot shows WSUS add a line of script to start the task.
  3. On the Triggers tab, set your schedule for once a month, or on any schedule you want. Again, you must ensure that you don't sync your WSUS during the entire cleanup and reindex time.
    Screenshot shows Set the WSUS Edit Trigger for the task.
  4. Set any other conditions or settings you would like to tweak as well. When you save the task, you may be prompted for credentials of the Run As user.
  5. You can also use these steps to configure the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script to run every three months. I usually set this script to run before the other cleanup steps, but only after I have run it manually and ensured it completed successfully. I run it at 12:00 AM on the first Sunday every three months.

Setting up the SUSDB reindex for WID using SQLCMD and Task Scheduler

  1. Save the Reindex the WSUS database script as a .sql file (for example, SUSDBMaint.sql).
  2. Create a basic task and give it a name:
    Screenshot of the WSUS Create Basic Task Wizard screen.
  3. Schedule this task to start about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run, and I am going to give it another 30 minutes before starting my reindex. It means I would schedule this task for every first Sunday at 2:00 AM, as shown here:
    Screenshot shows set the frequency for that task in the Create Basic Task Wizard.
  4. Select the action to Start a program. In the Program/script box, type the following command. The file specified after the -i parameter is the path to the SQL script you saved in step 1. The file specified after the -o parameter is where you would like the log to be placed. Here's an example:

    "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLCMD.exe" -S \\.\pipe\Microsoft##WID\tsql\query -i C:\WSUS\SUSDBMaint.sql -o c:\WSUS\reindexout.txt

    Screenshot shows how the script should look in the Create Basic Task Wizard.
  5. You'll get a warning, similar to the one you got when creating the cleanup task. Select Yes to accept the arguments, and then select Finish to apply:
    Screenshot of the Task Scheduler confirmation popup window.
  6. You can test the script by forcing it to run and reviewing the log for errors. If you run into issues, the log will tell you why. Usually if it fails, the account running the task doesn’t have appropriate permissions or the WID service isn’t started.

Setting up a basic Scheduled Maintenance Task in SQL for non-WID SUSDBs

 Note

You must be a sysadmin in SQL Server to create or manage maintenance plans.

  1. Open SQL Server Management Studio and connect to your WSUS instance. Expand Management, right-click Maintenance Plans, and then select New Maintenance Plan. Give your plan a name.
    Screenshot of the typed name for your WSUS maintenance plan.
  2. Select subplan1 and then ensure your Toolbox is in context:
    Screenshot to ensure your Toolbox is in context.
  3. Drag and drop the task Execute T-SQL Statement Task:
    Screenshot of the Execute T-SQL Statement Task option.
  4. Right-click it and select Edit. Copy and paste the WSUS reindex script, and then select OK:
    Screenshot to Copy and paste the WSUS reindex script.
  5. Schedule this task to run about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run, and I am going to give it another 30 minutes before starting the reindex. It means I would schedule this task to run every first Sunday at 2:00 AM.
    Screenshot of the WSUS New Job Schedule screen.
  6. While creating the maintenance plan, consider adding a backup of the SUSDB into the plan as well. I usually back up first, then reindex. It may add more time to the schedule.

Putting it all together

When running maintenance in a hierarchy, the WSUS cleanup should be done from the bottom of the hierarchy up. However, when using the script to decline superseded updates, run it from the top down. Declining superseded updates is really a type of addition to an update rather than a removal; you're actually adding a type of approval.

Since a sync can’t be done during the actual cleanup, it’s suggested to schedule/complete all tasks overnight. Then check on their completion via the logging the following morning, before the next scheduled sync. If something failed, maintenance can be rescheduled for the next night, once the underlying issue is identified and resolved.

These tasks may run faster or slower depending on the environment, and timing of the schedule should reflect that. Hopefully they are faster since my lab environment tends to be a bit slower than a normal production environment. I am a bit aggressive on the timing of the decline scripts. If Tier2 overlaps Tier3 by a few minutes, it will not cause a problem because my sync isn’t scheduled to run.

Not syncing keeps the declines from accidentally flowing into my Tier3 replica WSUS servers from Tier2. I did give myself extra time between the Tier3 decline and the Tier3 cleanup since I definitely want to make sure the decline script finishes before running my cleanup.

It brings up a common question: Since I’m not syncing, why shouldn’t I run all of the cleanups and reindexes at the same time?

The answer is that you probably could, but I wouldn’t. With this staggered schedule, if my coworker across the globe needs to run a sync, I minimize the risk of orphaned updates in WSUS, and I can reschedule any failed step to run to completion the next night.

Time      Tasks
12:00 AM  Tier1 decline
12:15 AM  Tier2 decline
12:30 AM  Tier3 decline
1:00 AM   Tier3 WSUS cleanup
2:00 AM   Tier3 reindex; Tier2 WSUS cleanup
3:00 AM   Tier1 cleanup; Tier2 reindex
4:00 AM   Tier1 reindex

 Note

If you’re using Configuration Manager current branch version 1906 or a later version to perform WSUS Maintenance, Configuration Manager performs the cleanup after synchronization using the top-down approach. In this scenario, you can schedule the WSUS database backup and reindexing jobs to run before the configured sync schedule without worrying about any of the other steps, because Configuration Manager will handle everything else.

For more information about SUP maintenance in Configuration Manager, see the following articles:

Phishing attack abuses Microsoft Azure, Google Sites to steal crypto

A new large-scale phishing campaign targeting Coinbase, MetaMask, Kraken, and Gemini users is abusing Google Sites and Microsoft Azure Web App to create fraudulent sites.

These phishing pages are promoted through comments posted to legitimate sites by a network of bots controlled by the threat actors. Posting links to phishing pages on various legitimate sites aims to increase traffic and boost the malicious site’s search engine rankings.

Furthermore, because the phishing sites are hosted in Microsoft and Google services, they aren’t flagged by automated moderator systems, allowing promotional messages to stay in the comment section for longer.

Comment containing multiple links to phishing pages (Netskope)

The new campaign was spotted by analysts at Netskope, who noted that this tactic has allowed some of the fraudulent sites to appear as the first result in Google Search.

Even worse, as shown below, Google has also included the phishing pages as featured snippets, giving them the highest exposure possible in the search results.

The first result for the given search term (Netskope)

Abusing legitimate services

Google Sites is a free web page creation tool, part of Google’s online service suite, allowing users to create websites and host them on Google Cloud or other providers.

Similarly, Microsoft’s Azure Web Apps is a platform helping users create, deploy, and manage web applications and websites.

Both services are trusted by internet security tools and offer competitive pricing and high availability, making them a good option for creating phishing pages.

The crooks in the campaign seen by Netskope created sites that mimicked Metamask, Coinbase, Gemini, and Kraken, targeting people’s wallets and their assets.

The sites are just landing pages; visitors are redirected to the actual phishing sites when they click on the “login” buttons.

Landing page for Kraken phishing (Netskope)

Targeting wallets and services

The phishing campaign is currently attempting to steal MetaMask wallets and credentials for crypto exchanges, such as Coinbase, Kraken, and Gemini.

The MetaMask phishing site attempts to steal the user’s password and wallet’s secret recovery phrase (seed phrase). This information allows the threat actor to import the wallet on their own devices and drain the contents.

MetaMask phishing site asking for the seed phrase (Netskope)

For the crypto exchange phishing pages, the threat actors attempt to steal the victims’ login credentials.

In all four cases, users who enter their credentials are redirected to a fake 2FA (two-factor authentication) page that requests the victim to provide their phone number.

After the victim enters the code, the websites generate a fake error alleging unauthorized activity and authentication problems, prompting the victim to click on an “Ask Expert” button.

Bogus error message served to victims (Netskope)

This takes the victims to an online chat page where a scammer pretending to be a customer support agent promises to solve the problem by directing the victim to install the TeamViewer remote access tool.

The remote access is likely to allow the threat actors to retrieve the multi-factor authentication codes required to log in to the exchanges with the stolen credentials.

Don’t get phished

When attempting to log in to a crypto exchange, always make sure you are on the platform’s official website and not on a clone.

Users of locally installed cryptocurrency wallets, such as MetaMask, Phantom, and TrustWallet, should never share their recovery phrase on any website, regardless of the reason.

It is also important to remember that Google Ads can be abused, and Google Search SEO can be manipulated, so the ranking of the results shouldn’t be seen as a guarantee of safety.

Finally, protect your cryptocurrency exchange accounts with MFA and keep most of your crypto investments on cold wallets that are much more challenging to hack.

Source :
https://www.bleepingcomputer.com/news/security/phishing-attack-abuses-microsoft-azure-google-sites-to-steal-crypto/

Microsoft Teams outage also takes down Microsoft 365 services

What initially started like a minor Microsoft Teams outage has also taken down multiple Microsoft 365 services with Teams integration, including Exchange Online, Windows 365, and Office Online.

“We’ve received reports of users being unable to access Microsoft Teams or leverage any features,” the company revealed on its official Microsoft 365 Status Twitter account more than 8 hours ago.

Two hours later, Redmond said the issue causing the connection problems was a recent deployment that featured a broken connection to an internal storage service.

However, Teams was not the only product impacted by the outage since users also began reporting failures to connect to various Microsoft 365 services.

Microsoft confirmed the issues, saying that the subsequent Microsoft 365 outage only affected services that come with Teams integration.

“We’ve identified downstream impact to multiple Microsoft 365 services with Teams integration, such as Microsoft Word, Office Online and SharePoint Online,” Microsoft explained.


As the company further detailed on its Microsoft 365 Service health status page, affected customers experienced issues with one or more of the following services:

  • Microsoft Teams (Access, chat, and meetings)
  • Exchange Online (Delays sending mail)
  • Microsoft 365 Admin center (Inability to access)
  • Microsoft Word within multiple services (Inability to load)
  • Microsoft Forms (Inability to use via Teams)
  • Microsoft Graph API (Any service relying on this API may be affected)
  • Office Online (Microsoft Word access issues)
  • SharePoint Online (Microsoft Word access issues)
  • Project Online (Inability to access)
  • PowerPlatform and PowerAutomate (Inability to create an environment with a database)
  • Autopatches within Microsoft Managed Desktop
  • Yammer (Impact to Yammer experiments)
  • Windows 365 (Unable to provision Cloud PCs)

After redirecting traffic to a healthy service to mitigate the impact, Redmond said its telemetry indicates that Microsoft Teams functionality started to recover.

“Service availability has mostly recovered with only a few service features still requiring attention,” Microsoft added on the service health status page and on Twitter two hours ago, at 4 AM EST.

“We’ll continue to monitor the service as new regions enter business hours to ensure the service health does not fluctuate while the remaining actions are completed.”

Source :
https://www.bleepingcomputer.com/news/microsoft/microsoft-teams-outage-also-takes-down-microsoft-365-services/

5 Key Things We Learned from CISOs of Smaller Enterprises Survey

New survey reveals lack of staff, skills, and resources driving smaller teams to outsource security.

As business begins its return to normalcy (however “normal” may look), CISOs at small and medium-size enterprises (500 to 10,000 employees) were asked to share their cybersecurity challenges and priorities, and their responses were compared with those of a similar survey from 2021.

Here are the 5 key things we learned from 200 responses:

— Remote Work Has Accelerated the Use of EDR Technologies

In 2021, 52% of CISOs surveyed were relying on endpoint detection and response (EDR) tools. This year that number has leapt to 85%. In contrast, last year 45% were using network detection and response (NDR) tools, while this year just 6% employ NDR. Compared to 2021, double the number of CISOs and their organizations are seeing the value of extended detection and response (XDR) tools, which combine EDR with integrated network signals. This is likely due to the increase in remote work, which is more difficult to secure than when employees work within the company’s network environment.

— 90% of CISOs Use an MDR Solution

There is a massive skills gap in the cybersecurity industry, and CISOs are under increasing pressure to recruit internally. Especially in small security teams where additional headcount is not the answer, CISOs are turning to outsourced services to fill the void. In 2021, 47% of CISOs surveyed relied on a Managed Security Services Provider (MSSP), while 53% were using a managed detection and response (MDR) service. This year, just 21% are using an MSSP, and 90% are using MDR.

— Overlapping Threat Protection Tools are the #1 Pain Point for Small Teams

The majority (87%) of companies with small security teams struggle to manage and operate their threat protection products. Among these companies, 44% struggle with overlapping capabilities, while 42% struggle to visualize the full picture of an attack when it occurs. These challenges are intrinsically connected, as teams find it difficult to get a single, comprehensive view with multiple tools.

— Small Security Teams Are Ignoring More Alerts

Small security teams are giving less attention to their security alerts. Last year 14% of CISOs said they look only at critical alerts, while this year that number jumped to 21%. In addition, organizations are increasingly letting automation take the wheel. Last year, 16% said they ignore automatically remediated alerts, and this year that’s true for 34% of small security teams.

— 96% of CISOs Are Planning to Consolidate Security Platforms

Almost all CISOs surveyed have consolidation of security tools on their to-do lists, compared to 61% in 2021. Not only does consolidation reduce the number of alerts – making it easier to prioritize and view all threats – respondents believe it will stop them from missing threats (57%), reduce the need for specific expertise (56%), and make it easier to correlate findings and visualize the risk landscape (46%). XDR technologies have emerged as the preferred method of consolidation, with 63% of CISOs calling it their top choice.

Download 2022 CISO Survey of Small Cyber Security Teams to see all the results.

Source :
https://thehackernews.com/2022/07/5-key-things-we-learned-from-cisos-of.html

Windows Autopatch has arrived!

The public anticipation surrounding Windows Autopatch has been building since we announced it in April. Fortunately for all, the wait is over. We are pleased to announce that this service is now generally available for customers with Windows Enterprise E3 and E5 licenses. Microsoft will continue to release updates on the second Tuesday of every month, and Autopatch now helps streamline updating operations and create new opportunities for IT pros.

Want to share the excitement? Watch this video to learn how Autopatch can improve security and productivity across your organization:

https://www.youtube-nocookie.com/embed/yut19JoreUo

What Is Autopatch?

In case you missed the public preview announcement, Windows Autopatch automates the updating of Windows 10/11, Microsoft Edge, and Microsoft 365 software. Essentially, Microsoft engineers use the Windows Update for Business client policies and deployment service tools on your behalf. The service creates testing rings and monitors rollouts, pausing and even rolling back changes where possible.

Windows Autopatch is a service that uses the Windows Update for Business solutions on your behalf.

The Autopatch documentation gets more granular if you want to learn more. If you have questions, our engineers have created a dedicated community to answer questions that are more specific than those covered in our FAQ (which is updated regularly).

Getting started with Autopatch

To start enrolling devices:

  • Find the Windows Autopatch entry in the Tenant Administration blade of the Microsoft Endpoint Manager admin center.
  • Select Tenant enrollment.
  • Select the check box to agree to the terms and conditions and select Agree.
  • Select Enroll.

Follow along with this how-to video for more detailed instructions on enrolling devices into the Autopatch service:

https://www.youtube-nocookie.com/embed/GI9_mXEbd24

Microsoft FastTrack Specialists are also available to help customers with more than 150 eligible licenses work through the Windows Autopatch technical prerequisites described in the documentation. Sign in to https://fasttrack.microsoft.com with a valid Azure ID to learn more and submit a request for assistance, or contact your Microsoft account team.

Working with Autopatch

Once you’ve enrolled devices into Autopatch, the service does most of the work. But through the Autopatch blade in Microsoft Endpoint Manager, you can fine-tune ring membership, access the service health dashboard, generate reports, and file support requests. The reporting capabilities will grow more robust as the service matures. For even more information on how to use Autopatch, see the resources sidebar on the Windows Autopatch community.

Increase confidence with Autopatch

The idea of delegating this kind of responsibility may give some IT administrators pause. Changing systems in any way can cause hesitation, but unpatched software can leave gaps in protection, and by keeping Windows and Microsoft 365 apps updated you get all the value of new features designed to enhance creativity and collaboration.

Because the Autopatch service has such a broad footprint, and pushes updates around the clock, we are able to detect potential issues among an incredibly diverse array of hardware and software configurations. This means that an issue that may have an impact on your portfolio could be detected and resolved before ever reaching your estate. And as the service expands and grows, the ability to detect issues will get more robust.

Microsoft invests resources into rigorous testing and validation of our releases. We want to give you the confidence to act. We have a record of 99.6%[1] app compatibility with our updates, and an App Assure team that has your back, at no additional cost for eligible customers, should you encounter an application compatibility issue.

In some organizations, where update deployment rings are already in place and the update process is robust, the appetite for this kind of automation may not be as strong. In talking to customers, we’re learning how to evolve the Autopatch service to meet more use cases and deliver more value, and we are excited about some of the developments that will be announced in this blog in the upcoming months.

What’s ahead for Autopatch

One announcement we can make is that Windows Autopatch will support updating of Windows 365 cloud PCs. We’ll be covering this enhancement in Windows in the Cloud on July 14, and that special episode will be available on demand on the Windows IT Pro YouTube channel later this month, so be sure to subscribe to the channel for updates.


We love hearing from you. Over the past months, we have met with some of you, received feedback in our Windows Autopatch community, and heard from you during our ‘Ask Microsoft Anything’ event. We are working hard on addressing asks and improving the service, so please keep sharing feedback.

Please note that we have an evergreen FAQ page here and you can learn more about how Windows Autopatch works in our docs.

Microsoft Mechanics, who have been doing an incredible deep dive into update management, will be talking about Autopatch and endpoint management in a future episode, so be sure to subscribe to their channel, too.

Of course, if you subscribe to the Windows Autopatch blog you’ll get notified about these events and all the excitement moving forward.

Source :
https://techcommunity.microsoft.com/t5/windows-it-pro-blog/windows-autopatch-has-arrived/ba-p/3570119

Spectre and Meltdown Attacks Against OpenSSL

The OpenSSL Technical Committee (OTC) was recently made aware of several potential attacks against the OpenSSL libraries which might permit information leakage via the Spectre attack [1]. Although there are currently no known exploits for the Spectre attacks identified, it is plausible that some of them might be exploitable.

Local side channel attacks, such as these, are outside the scope of our security policy, however the project generally does introduce mitigations when they are discovered. In this case, the OTC has decided that these attacks will not be mitigated by changes to the OpenSSL code base. The full reasoning behind this is given below.

The Spectre attack vector, while applicable everywhere, is most important for code running in enclaves because it bypasses the protections offered. Example enclaves include, but are not limited to:

The reasoning behind the OTC’s decision to not introduce mitigations for these attacks is multifold:

  • Such issues do not fall under the scope of our defined security policy. Even though we often apply mitigations for such issues we do not mandate that they are addressed.
  • Maintaining code with mitigations in place would be significantly more difficult. Most potentially vulnerable code is extremely non-obvious, even to experienced security programmers. It would thus be quite easy to introduce new attack vectors or fix existing ones unknowingly. The mitigations themselves obscure the code which increases the maintenance burden.
  • Automated verification and testing of the attacks is necessary but not sufficient. We do not have automated detection for this family of vulnerabilities and if we did, it is likely that variations would escape detection. This does not mean we won’t add automated checking for issues like this at some stage.
  • These problems are fundamentally a bug in the hardware. The software running on the hardware cannot be expected to mitigate all such attacks. Some of the in-CPU caches are completely opaque to software and cannot be easily flushed, making software mitigation quixotic. However, the OTC recognises that fixing hardware is difficult and in some cases impossible.
  • Some kernels and compilers can provide partial mitigation. Specifically, several common compilers have introduced code generation options addressing some of these classes of vulnerability:
    • GCC has the -mindirect-branch, -mfunction-return, and -mindirect-branch-register options
    • LLVM has the -mretpoline option
    • MSVC has the /Qspectre option

  1. Nicholas Mosier, Hanna Lachnitt, Hamed Nemati, and Caroline Trippel, “Axiomatic Hardware-Software Contracts for Security,” in Proceedings of the 49th ACM/IEEE International Symposium on Computer Architecture (ISCA), 2022.

Posted by OpenSSL Technical Committee May 13th, 2022 12:00 am

Source :
https://www.openssl.org/blog/blog/2022/05/13/spectre-meltdown/

Microsoft: Windows Server 2012 reaches end of support in October 2023

Microsoft has reminded customers that Windows Server 2012/2012 R2 will reach its extended end-of-support (EOS) date next year, on October 10, 2023.

Released in October 2012, Windows Server 2012 has entered its tenth year of service and reached its mainstream support end date over three years ago, on October 9, 2018.

Redmond also revealed today that Microsoft SQL Server 2012, the company’s relational database management system, will be retired on July 12, 2022, ten years after its release in May 2012.

Once EOS is reached, Microsoft will stop providing technical support and bug fixes for newly discovered issues that may impact the usability or stability of servers running the two products.

“Microsoft recommends customers migrate applications and workloads to Azure to run securely. Azure SQL Managed Instance is fully managed and always updated (PaaS),” the company said.

“Customers can also lift-and-shift to Azure Virtual Machines, including Azure Dedicated Host, Azure VMware Solution, and Azure Stack (Hub, HCI, Edge), to get three additional years of extended security updates at no cost.”

What are the options?

Microsoft advises admins who want to keep their servers running and still receiving bug fixes and security updates to upgrade to Windows Server 2019 and SQL Server 2019.

Redmond had also reminded admins in July 2021 that Windows Server 2012 and SQL Server 2012 would reach their extended support end dates in two years, urging them to upgrade as soon as possible to avoid compliance and security gaps.

“We understand that SQL Server and Windows Server run many business-critical applications that may take more time to modernize,” Microsoft said.

“Customers that cannot meet the end of support deadline and have Software Assurance or subscription licenses under an enterprise agreement enrollment will have the option to buy Extended Security Updates to get three more years of security updates for SQL Server 2012, and Windows Server 2012 and 2012 R2.”

Regarding the pricing scheme for Extended Security Updates, Microsoft says that they will carry a cost only for on-premises deployments:

  • In Azure: Customers running SQL Server 2012 and Windows Server 2012 and 2012 R2 in Azure will get Extended Security Updates for free.
  • On-premises: Customers with active Software Assurance or subscription licenses can purchase Extended Security Updates annually for 75 percent of the license cost of the latest version of SQL Server or Windows Server for the first year, 100 percent of the license cost for the second year, and 125 percent of the license cost for the third year.
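To make the on-premises pricing concrete, here is a quick back-of-the-envelope calculation using a hypothetical license cost of $1,000 (the dollar figure is illustrative, not from Microsoft):

```shell
# Hypothetical example: total three-year ESU cost for a $1,000 license
# (75% in year one, 100% in year two, 125% in year three, per the schedule above).
license=1000
esu_total=$(( license * 75 / 100 + license * 100 / 100 + license * 125 / 100 ))
echo "$esu_total"   # prints 3000
```

In other words, three years of on-premises ESUs add up to roughly 300 percent of the current license price, which is part of why Microsoft pushes migration or upgrade instead.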

Additional information regarding eligibility requirements and onboarding details is available on the Extended Security Updates frequently asked questions page.

SQL Server 2008/R2 and Windows Server 2008/R2 Extended Security Updates (ESUs) will also reach their end of support on July 12, 2022, and January 10, 2023, respectively.

Customers who will require additional time to upgrade servers may re-host them on Azure for an additional year of free Extended Security Updates (ESUs).

Source :
https://www.bleepingcomputer.com/news/microsoft/microsoft-windows-server-2012-reaches-end-of-support-in-october-2023/

Azure powers rapid deployment of private 4G and 5G networks

As the cloud continues to expand into a ubiquitous and highly distributed fabric, a new breed of application is emerging: Modern Connected Applications. We define these new offerings as network-intelligent applications at the edge, powered by 5G, and enabled by programmable interfaces that give developer access to network resources. Along with internet of things (IoT) and real-time AI, 5G is enabling this new app paradigm, unlocking new services and business models for enterprises, while accelerating their network and IT transformation.

At Mobile World Congress this year, Microsoft announced a significant step towards helping enterprises in this journey: Azure Private 5G Core, available as a part of the Azure private multi-access edge compute (MEC) solution. Azure Private 5G Core enables operators and system integrators (SIs) to provide a simple, scalable, and secure deployment of private 4G and 5G networks on small footprint infrastructure, at the enterprise edge.

This blog dives a little deeper into the fundamentals of the service and highlights some extensions that enterprises can leverage to gain more visibility and control over their private network. It also includes a use case of an early deployment of Azure Kubernetes Services (AKS) on an edge platform, leveraged by the Azure Private 5G Core to rapidly deploy such networks.

Building simple, scalable, and secure private networks

Azure Private 5G Core dramatically simplifies the deployment and operation of private networks. With just a few clicks, organizations can deploy a customized set of selectable 5G core functions, radio access network (RAN), and applications on a small edge-compute platform, at thousands of locations. Built-in automation delivers security patches, assures compliance, and performs audits and reporting. Enterprises benefit from a consistent management experience and improved service assurance experience, with all logs and metrics from cloud to edge available for viewing within Azure dashboards.

Enterprises need the highest level of security to connect their mission critical operations. Azure Private 5G Core makes this possible by natively integrating into a broad range of Azure capabilities. With Azure Arc, we provide seamless and secure connectivity from an on-premises edge platform into the Azure cloud. With Azure role-based access control (RBAC), administrators can author policies and define privileges that will allow an application to access all necessary resources. Likewise, users can be given appropriate access to manage all resources in a resource group, such as virtual machines, websites, and subnets. Our Zero Trust security frameworks are integrated from devices to the cloud to keep users and data secure. And our complete, “full-stack” solution (hardware, host and guest operating system, hypervisor, AKS, packet core, IoT Edge Runtime for applications, and more) meets standard Azure privacy and compliance benchmarks in the cloud and on the enterprise edge, meaning that data privacy requirements are adhered to in each geographic region.

Deploying private 5G networks in minutes

Microsoft partner Inventec is a leading design manufacturer of enterprise-class technology solutions like laptops, servers, and wireless communication products. The company has been quick to see the potential benefit in transforming its own world-class manufacturing sites into 5G smart factories to fully utilize the power of AI and IoT.

In a compelling example of rapid private 5G network deployment, Inventec recently installed our Azure private MEC solution in their Taiwan smart factory. It took only 56 minutes to fully deploy the Azure Private 5G Core and connect it to 5G access points that served multiple 5G endpoints—a significant reduction from the months that enterprises have come to expect. Azure Private 5G Core leverages Azure Arc and Azure Kubernetes Service on-prem to provide security and manageability for the entire core network stack. Figures 1 and 2 below show snapshots from the trial.

Figure 1: Screenshot of logs with time stamps showing start and completion of the core network deployment.

Figure 2: Screenshot from the trial showing one access point successfully connected to seven endpoints.

Inventec is developing applications for manufacturing use-cases that leverage private 5G networks and Microsoft’s Azure Private 5G Core. Examples of these high-value MEC use cases include Automatic Optical Inspection (AOI), facial recognition, and security surveillance systems.

Extending enterprise control and visibility from the 5G core

Through close integration with other elements of the Azure private MEC solution, our Azure Private 5G Core essentially acts as an enterprise “control point” for private wireless networks. Through comprehensive APIs, the Azure Private 5G Core can extend visibility into the performance of connected network elements, simplify the provisioning of subscriber identity modules (SIMs) for end devices, secure private wireless deployments, and offer 5G connectivity between cloud services (like IoT Hub) and associated on-premises devices.

Figure 3: Azure Private 5G Core is a central control point for private wireless networks.

Customers, developers, and partners are finding value today with a number of early integrations with both Azure and third-party services that include:

  • Plug and play RAN: Azure private MEC offers a choice of 4G or 5G Standalone radio access network (RAN) partners that integrate directly with the Azure Private 5G Core. By integrating RAN monitoring with the Azure Private 5G Core, RAN performance can be made visible through the Azure management portal. Our RAN partners are also onboarding their Element Management System (EMS) and Service Management and Orchestrator (SMO) products to Azure, simplifying the deployment processes and have a framework for closed-loop radio performance automation.
  • Azure Arc managed edge: The Azure Private 5G Core takes advantage of the security and reliability capabilities of Azure Arc-enabled Azure Kubernetes Service running on Azure Stack Edge Pro. These include policy definitions with Azure Policy for Kubernetes, simplified access to AKS clusters for High Availability with Cluster Connect and fine-grained identity and access management with Azure RBAC. 
  • Device and Profile Management: Azure Private 5G Core APIs integrate with SIM management services to securely provision the 5G devices with appropriate profiles. In addition, integration with Azure IoT Hub enables unified management of all connected IoT devices across an enterprise and provides a message hub for IoT telemetry data. 
  • Localized ISV MEC applications: Low-latency MEC applications benefit from running side-by-side with core network functions on the common (Azure private MEC) edge-compute platform. By integrating tightly with the Azure Private 5G Core using Azure Resource Manager APIs, third-party applications can configure network resources and devices. Applications offered by partners are available in, and deployable from the Azure Marketplace.

It’s easy to get started with Azure private MEC

As innovative use cases for private wireless networks continue to develop and industry 4.0 transformation accelerates, we welcome ISVs, platform partners, operators, and SIs to learn more about Azure private MEC.

  • Application ISVs interested in deploying their industry or horizontal solutions on Azure should begin by onboarding their applications to Azure Marketplace.
  • Platform partners, operators, and SIs interested in partnering with Microsoft to deploy or integrate with private MEC can get started by reaching out to the Azure private MEC Team.

Microsoft is committed to helping organizations innovate from the cloud, to the edge, and to space—offering the platform and ecosystem strong enough to support the vision and vast potential of 5G. As the cloud continues to expand and a new breed of modern connected apps at the edge emerges, the growth and transformation opportunities for enterprises will be profound. Learn more about how Microsoft is helping developers embrace 5G.

Source :
https://azure.microsoft.com/en-us/blog/azure-powers-rapid-deployment-of-private-4g-and-5g-networks/

Simplify and centralize network security management with Azure Firewall Manager

We are excited to share that Azure Web Application Firewall (WAF) policy and Azure DDoS Protection plan management in Microsoft Azure Firewall Manager is now generally available.

With an increasing need to secure cloud deployments through a Zero Trust approach, the ability to manage network security policies and resources in one central place is a key security measure.

You can now centrally manage Azure Web Application Firewall (WAF) to provide Layer 7 application security to your application delivery platforms, Azure Front Door and Azure Application Gateway, across your networks and subscriptions. You can also configure DDoS Protection Standard to protect your virtual networks from Layer 3 and Layer 4 attacks.

Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.

Azure Web Application Firewall is a cloud-native web application firewall (WAF) service that provides powerful protection for web apps from common hacking techniques such as SQL injection and security vulnerabilities such as cross-site scripting.

Azure DDoS Protection Standard provides enhanced Distributed Denial-of-Service (DDoS) mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. 

Utilizing both WAF policies and DDoS protection in your network provides multi-layered protection across all your essential workloads and applications.

WAF policy and DDoS Protection plan management are additions to Azure Firewall management in Azure Firewall Manager.

Centrally protect your application delivery platforms using WAF policies 

In Azure Firewall Manager, you can now manage and protect your Azure Front Door or Application Gateway deployments by associating WAF policies, at scale. This allows you to view all your key deployments in one central place, alongside Azure Firewall deployments and DDoS Protection plans.
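
The same association can be scripted outside the portal. The following Azure CLI sketch (with hypothetical resource names `demo-rg`, `demo-waf-policy`, and `demo-appgw`, and assuming the Application Gateway already exists) creates a WAF policy and attaches it to an Application Gateway; it is a configuration sketch, not the Firewall Manager portal flow itself.

```shell
# Create a WAF policy (hypothetical names; requires the Azure CLI and a login).
az network application-gateway waf-policy create \
  --resource-group demo-rg \
  --name demo-waf-policy

# Look up the policy's resource ID.
POLICY_ID=$(az network application-gateway waf-policy show \
  --resource-group demo-rg --name demo-waf-policy --query id -o tsv)

# Associate the policy with an existing Application Gateway.
az network application-gateway update \
  --resource-group demo-rg \
  --name demo-appgw \
  --set firewallPolicy.id="$POLICY_ID"
```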

Figure 1: Associating a WAF policy to an Azure Front Door

Upgrade from WAF configuration to WAF policy

In addition, the platform allows administrators to upgrade from a WAF config to WAF policies for Application Gateways by selecting the service and then Upgrade from WAF configuration. This allows for a more seamless migration to WAF policies, which support WAF policy settings, managed rulesets, exclusions, and disabled rule-groups.

Note that everything previously configured in an Application Gateway WAF configuration can also be done through a WAF policy.

Figure 2: Upgrading a WAF configuration to WAF policy

Manage DDoS Protection plans for your virtual networks

You can enable DDoS Protection Standard on your virtual networks listed in Azure Firewall Manager, across subscriptions and regions. This allows you to see which virtual networks have Azure Firewall and/or DDoS protection in a single place.
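
Enabling the protection can also be done with the Azure CLI. The sketch below (hypothetical names `demo-rg`, `demo-ddos-plan`, and `demo-vnet`; assumes the virtual network already exists) creates a DDoS protection plan and enables it on a virtual network, which is equivalent to the Firewall Manager toggle described above.

```shell
# Create a DDoS protection plan (hypothetical names; requires the Azure CLI).
az network ddos-protection create \
  --resource-group demo-rg \
  --name demo-ddos-plan

# Enable the plan on an existing virtual network.
az network vnet update \
  --resource-group demo-rg \
  --name demo-vnet \
  --ddos-protection true \
  --ddos-protection-plan demo-ddos-plan
```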

Figure 3: Enabling DDoS Protection Standard on a virtual network in Azure Firewall Manager

View and create WAF policies and DDoS Protection Plans in Azure Firewall Manager

You can view and create WAF policies and DDoS Protection Plans from the Azure Firewall Manager experience, alongside Azure Firewall policies.

In addition, you can import existing WAF policies to create a new WAF policy, so you do not need to start from scratch if you want to maintain similar settings.

Figure 4: View of Web Application Firewall Policies in Azure Firewall Manager
Figure 5: View of DDoS Protection Plans in Azure Firewall Manager

Monitor your overall network security posture

Azure Firewall Manager provides monitoring of your overall network security posture. Here, you can easily see which virtual networks and virtual hubs are protected by Azure Firewall, a third-party security provider, or DDoS Protection Standard. This overview can help you identify and prioritize any security gaps that are in your Azure environment, across subscriptions or for the whole tenant.

Figure 6: Monitoring page in Azure Firewall Manager

Coming soon, you’ll also be able to view your Application Gateway and Azure Front Door monitors, for a full network security overview.

Learn more

To learn more about these features in Azure Firewall Manager, visit the Manage Web Application Firewall policies tutorial, WAF on Application Gateway documentation, and WAF on Azure Front Door documentation. For DDoS information, visit the Configure Azure DDoS Protection Plan using Azure Firewall Manager tutorial and Azure DDoS Protection documentation.

To learn more about Azure Firewall Manager, please visit the Azure Firewall Manager home page.

Source :
https://azure.microsoft.com/en-us/blog/simplify-and-centralize-network-security-management-with-azure-firewall-manager/

An Analysis of Azure Managed Identities Within Serverless Environments

We examine Azure’s Managed Identities service, developers’ go-to feature for managing secrets and credentials, and assess its security capabilities in a threat model.

Authentication and authorization play crucial parts when securing resources. Authentication verifies that the service or user accessing the secured resources has provided valid credentials, while authorization makes sure that they have sufficient permissions for the request itself.

Broken Access Control is listed among the OWASP Top 10 most prevalent web application issues for both 2017 and 2021, and we have previously written about the importance of secrets management used for authentication. Broken access control occurs when an unauthorized user can access, modify, delete, or perform actions within an application or system outside the set permissions or policies, whether malicious or unintended. It has become the number one concern on the list, and in this article, we discuss Azure’s Managed Identities service inside the cloud service provider (CSP) as a way to tackle this issue.

Managing system and user identities

Managed Identities for Azure resources allows applications to authenticate to certain services available within the CSP. This is done by providing the cloud application a token used for service authentication. We distinguish between two types of managed identities: system-assigned identities and user-assigned identities. A system-assigned identity is tied one-to-one to its resource, which means that different user roles can’t be applied to the same resource. User-assigned identities solve this problem, and we can think of them as user roles.

Figure 1. Usage of Managed Identities


For instance, suppose we want to use an Azure storage account within a serverless application for saving our application records, and we decide to use a system-assigned identity for this purpose.

This practically means:

  • Enable a managed identity inside the serverless function
  • Grant the serverless function the necessary permissions to access the storage account
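
These two steps can be scripted with the Azure CLI. The sketch below uses hypothetical names (`demo-rg`, `demo-func`, `demostorage`) and a commonly used data-plane role; the actual role depends on what the function needs to do.

```shell
# Turn on the system-assigned managed identity for the function app
# and capture its service principal ID (hypothetical names).
PRINCIPAL_ID=$(az functionapp identity assign \
  --resource-group demo-rg \
  --name demo-func \
  --query principalId -o tsv)

# Look up the storage account's resource ID to scope the role assignment.
STORAGE_ID=$(az storage account show \
  --resource-group demo-rg --name demostorage --query id -o tsv)

# Grant the identity blob data access on the storage account.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role "Storage Blob Data Contributor" \
  --scope "$STORAGE_ID"
```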

Figure 2. Enabling managed identities in a serverless function


After that, we can start using the managed identity for authentication to the storage account. In the following sections, we will look at how the managed identities interface is technically implemented within the serverless environment and the corresponding security implications based on our recent research.

Managing identities in the serverless environment

To make this work, the serverless environment runs a special .NET application process named “dotnet TokenServiceContainer.dll.” This process listens on localhost port 8081 and accepts HTTP requests. The endpoint for requesting a token is http://localhost:8081/msi/token, and the required parameters specify the API version used and the resource identifier for which the token is requested. Optionally, a “client_id” parameter is supplied when a token for a user-assigned identity is requested. The request also needs a specific X-IDENTITY-HEADER header, whose value is present in the IDENTITY_HEADER or MSI_SECRET environment variable.
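
Inside the function app, such a token request is a plain HTTP GET against the local service. The Python sketch below constructs that request from the pieces described above; the helper name, the default API version, and the resource URL in the comment are our own assumptions, not part of the documented interface.

```python
import os
import urllib.parse
import urllib.request

# Local token service endpoint run by the serverless environment.
TOKEN_ENDPOINT = "http://localhost:8081/msi/token"

def build_msi_token_request(resource, api_version="2019-08-01", client_id=None):
    """Build the HTTP request a function app sends to the local token service.

    `client_id` is only needed when requesting a token for a
    user-assigned identity. The default api_version is an assumption.
    """
    params = {"api-version": api_version, "resource": resource}
    if client_id:
        params["client_id"] = client_id
    url = TOKEN_ENDPOINT + "?" + urllib.parse.urlencode(params)
    # The secret proving the caller runs inside the app is exposed via an
    # environment variable (IDENTITY_HEADER, or MSI_SECRET on older stacks).
    secret = os.environ.get("IDENTITY_HEADER") or os.environ.get("MSI_SECRET", "")
    return urllib.request.Request(url, headers={"X-IDENTITY-HEADER": secret})
```

Sending the request with `urllib.request.urlopen` inside a function app would return the JSON bearer-token response; outside that environment nothing listens on port 8081, so this sketch only constructs the request.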

After the token service receives this request, it delegates the request to an endpoint within the CSP (another service) that provides the requested token. That endpoint is publicly available and is part of the *.identity.azure.net subdomain, which varies with the region of the serverless application. Because the endpoint is publicly accessible by design, the service requires authentication, which is done using an X.509 client certificate. This certificate is unique to the specific application ID (meaning the serverless function has a one-to-one pairing of certificate and app ID) and is valid for 180 days. If the request is successful, it returns a JSON response with a bearer token valid for one day.

Figure 3. Managed identities inside serverless environments


From that perspective, the security standard is high, which is expected from a CSP service. However, there is one hidden danger: the certificate itself, which can be exposed through leaked environment variables.

The Managed Service Identity (MSI) certificate is part of the encrypted container context, which can be downloaded from the URL specified in the CONTAINER_START_CONTEXT_SAS_URI environment variable and decrypted using the CONTAINER_ENCRYPTION_KEY variable. Once the certificate is leaked, it can be used to obtain the token outside the scope of CSP services and successfully used against the publicly available service endpoint, as if the request came from the CSP service.

Threat model and scenario

Figure 4. PoC of getting token using leaked environmental variables from Managed Identity service


At this point, we should emphasize that to abuse the obtained token, a malicious actor must first leak these environment variables, and the identity must have a role assigned for the requested resource; the prerequisites are that managed identities are enabled and a role is set for the application. This means there are no default permissions unless they are explicitly granted within the CSP settings.

However, as this example of a potential compromise through leaked environment variables on a Linux endpoint shows, storing sensitive information in environment variables is not a secure approach, because they are by default inherited by child processes. Considering that this information is available inside the environment itself and that the certificate contains everything needed, the endpoint for getting the token effectively becomes publicly usable: a threat actor can obtain the authentication token outside of the CSP’s service and gain all the permissions of the original identity.

In this example, the token provider service within the serverless environment runs under a different user, yet the client certificate is not restricted to that user, for example as a file readable only by that user. This allows a compromised serverless function to leak the certificate and obtain the access token from the external service. And while the unauthorized user can’t get additional privileges beyond what the function has, this is enough to conduct activities inside the environment that can have a range of damaging effects. By moving the client certificate into the security boundary of the token service user and setting its access permissions to read-only for that user, the CSP could guarantee that even in case of a compromise, the client certificate could not be leaked and used outside the CSP service without additional lateral movement.

The security chain is only as strong as its weakest link. And while CSP services are not inherently insecure, small design weaknesses combined with improper user configurations can lead to bigger, more damaging consequences. Design applications, environments, and all their related variables with security in mind. If possible, avoid storing sensitive information in environment variables. Following best security practices, such as applying the principle of least privilege, helps to mitigate the consequences of a breach.

Source :
https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/an-analysis-of-azure-managed-identities-within-serverless-environments