Why Organizations Should Adopt Wi-Fi 6 Now

With its new SonicWave 641 and SonicWave 681 access points, SonicWall has combined the security and performance benefits of Wi-Fi 6 with our simplified management and industry-leading TCO.

Organizations are evolving — some more quickly, others more reluctantly. But over the past three years, the pace of change for everyone has accelerated to hyperspeed.

In early 2020, very few people could have foreseen the changes that were about to be unleashed on the world. And even fewer could have successfully predicted the long-term impact that COVID-19 would have on the way the world’s eight billion people live and work.

Prior to the pandemic, only about 2% of employees worked remotely. By May 2020, that number had risen to 70%, according to the Society for Human Resource Management. This pivot was possible because organizations were able to adjust their infrastructure to meet new working demands — and wireless technology played an important part in this solution.

The importance of wireless technology goes far beyond simply enabling employees to work remotely. According to one study, 87% of organizations believe that adopting advanced wireless capabilities can be a competitive advantage, because it allows them to innovate and increase agility. And 86% of networking executives believe advanced wireless will soon transform their organization.

But wireless technology impacts more than just how we work: It has changed the way we shop, watch movies, listen to music, navigate in our cars, and spend time with family and friends (some of whom may be half a world away). And every one of us expects a good experience every single time we use wireless. That’s a tall order, especially given the sheer number of existing devices and the ever-growing amount of bandwidth being consumed.

The need for high-performing, secure wireless technology has never been greater — and Wi-Fi 6 is a massive next step toward this reality. SonicWall’s SonicWave 641 and SonicWave 681 access points provide the combination of performance and security that we all demand.

What is Wi-Fi 6?

Wi-Fi 6, also known as 802.11ax, is the successor to 802.11ac Wave 2, or Wi-Fi 5. While the primary goal of Wi-Fi 6 is to enhance throughput in complex environments, there are additional benefits:

  • OFDMA’s multi-user support can make Wi-Fi 6 access points more efficient than Wi-Fi 5’s single-user OFDM. This results in lower latency.
  • Wi-Fi 6 utilizes WPA3, which provides advanced security features to enable more robust authentication.
  • BSS coloring marks traffic on a shared frequency so an access point can determine whether that frequency is available for use. The result is less interference and more consistent service in complex environments.
  • Target Wake Time (TWT) allows devices to determine how often to wake to send or receive data, improving battery life.
  • Wi-Fi 6’s multi-user, multiple input, multiple output (or MU-MIMO) supports multiple users within a single network environment. This allows multiple users to upload and download data at the same time, resulting in less wait time and faster network speed.

Some of these features are designed to improve performance, while some are designed to improve security. Any one of them can make a positive difference in an organization’s wireless network. Combined, however, the feature improvements provided by Wi-Fi 6 can create a significant wireless network advancement for any organization.

SonicWave 641 and SonicWave 681

SonicWall’s SonicWave 641 and SonicWave 681 are Wi-Fi 6 access points that deliver wireless performance and security superior to those of the 802.11ac standard.

But there are additional benefits available with the SonicWave 641 and SonicWave 681, such as SonicWall Capture Security Center, a scalable cloud security management system that helps you control assets and defend your entire network against cyberattacks.

SonicWave 600 series APs also integrate with Wireless Network Manager, an intuitive centralized network management system that leverages the cloud to make it easy to manage complex wireless and security environments with a single-pane-of-glass management portal.

WiFi Planner is a site-survey tool that allows you to optimally design and deploy a wireless network to get maximum coverage with the fewest number of APs, resulting in a lower TCO.

And the SonicExpress mobile app allows you to easily register and use the Wireless Network Manager to set up, manage and monitor SonicWall wireless appliances.

A strong wireless network is not a “nice to have” — it’s a necessity. What today’s organizations require is the high performance and security of the SonicWave 641 and SonicWave 681 access points.

To learn more about the SonicWave 641 and SonicWave 681 access points, as well as SonicWall’s entire wireless portfolio, visit www.sonicwall.com/wireless.

Source :
https://blog.sonicwall.com/en-us/2022/08/why-organizations-should-adopt-wi-fi-6-now/

Ten Cybersecurity Books for Your Late Summer Reading List

While you probably aren’t headed back to school this fall, that doesn’t mean it’s not a great time to hit the books.

August 9 is National Book Lovers Day. While there’s really no bad time for a good book, we know it’s often hard to find space in your schedule to stop and read. If this is you, we’ve put together ten compelling reasons to get back into the habit — including two that were released just this past year.

The Hacker and the State: Cyberattacks and The New Normal of Geopolitics
Ben Buchanan, 2020
In the recently released mid-year update to the 2022 SonicWall Cyber Threat Report, we outline the growing role the geopolitical environment plays in cybercrime and cybersecurity. In “The Hacker and the State: Cyberattacks and The New Normal of Geopolitics,” author Ben Buchanan explores how the world’s superpowers use cyberattacks in a relentless struggle for dominance.

Women Know Cyber: 100 Fascinating Females Fighting Cybercrime
Steve Morgan, 2019
Women are still underrepresented in cybersecurity, but their numbers — as well as their mark on the industry — are growing. This book outlines the contributions of 100 women from every corner of cybersecurity, including government digital forensics, corporate risk assessment, law and more, and argues that encouraging and recruiting women will be key to closing the cybersecurity skills gap.

American Kingpin: The Epic Hunt for the Criminal Mastermind Behind the Silk Road 
Nick Bilton, 2018
Detailing the saga of the notorious Dark Web destination for hacking tools, drugs, forged passports and more, “American Kingpin: The Epic Hunt for the Criminal Mastermind Behind the Silk Road” is endlessly compelling. It follows founder Ross Ulbricht on his journey from boy-next-door programmer, to head of a sprawling illegal empire, to fugitive and captive, and tracks the growth and legacy of the Silk Road.

The Wires of War: Technology and the Global Struggle for Power
Jacob Helberg, October 2021
There’s a high-stakes global cyberwar brewing between Western democracies and authoritarian regimes — and the latter have a major advantage. Author Jacob Helberg headed efforts to combat misinformation and foreign influence at Google from 2016 to 2020, and “The Wires of War” draws upon this experience to expose the various means used to destabilize nations. In it, he explains why we’re fighting enemies of freedom both over the information we receive and how we receive it, as well as what’s at stake if democratic nations lose this war.

Click Here to Kill Everybody: Security and Survival in a Hyperconnected World
Bruce Schneier, 2018
As we’ve detailed numerous times before, smart devices aren’t necessarily, well, smart. As the world increases its reliance on internet-connected devices, author Bruce Schneier argues, the risks from bad actors will continue to increase in tandem — and if cybersecurity measures don’t keep up, the results could be fatal.

This Is How They Tell Me The World Ends
Nicole Perlroth, 2021
For years, the U.S. government became a major collector of zero-days. But when that cache was compromised, these vulnerabilities fell into the hands of cybercriminals and hostile nations. In her book, “This Is How They Tell Me the World Ends,” author Nicole Perlroth gives a journalistic account of how these vulnerabilities could endanger our democracy, our infrastructure and our lives.

Inside Jobs: Why Insider Risk Is the Biggest Cyber Threat You Can’t Ignore
Joe Payne, Jadee Hanson, Mark Wojtasiak, 2020
While greater access and collaboration are necessary for modern organizations, they bring with them greater risk — not just from cybercriminals, but also from employees and business partners. “Inside Jobs: Why Insider Risk is the Biggest Cyber Threat You Can’t Ignore” details the main types of insider risk, and provides ways to combat them without hampering productivity.

The Art of Invisibility: The World’s Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data
Kevin Mitnick, 2019
Kevin Mitnick was once the FBI’s most wanted hacker. In his recent book, “The Art of Invisibility,” he uses what he learned through years of successfully sneaking into networks to offer readers tips on how to be invisible in a world where privacy is a vanishing commodity: everything from smart Wi-Fi usage to password protection and more. While you may already be familiar with some of the guidance offered, Mitnick’s experience, as well as his account of how we got here in the first place, make this well worth a read.

The Smartest Person in the Room: The Root Cause and New Solution for Cybersecurity
Christian Espinosa, 2021
Having the best cybersecurity tools to protect your organization is only one piece of the puzzle. In “The Smartest Person in the Room,” cybersecurity expert Christian Espinosa outlines the extent to which your cybersecurity team impacts your ability to protect your organization — and offers ways to help upskill even your most intelligent employees.

Cybersecurity Is Everybody’s Business: Solve the Security Puzzle for Your Small Business and Home
Scott N. Schober, 2019
Not all cybersecurity professionals work in a SOC or safeguard huge enterprises — many work to defend millions of small organizations or home offices. If this is you (or someone you know), you know how challenging it can be to find cybersecurity information geared to your security environment. In his most recent book, Scott Schober, author of “Hacked Again,” explains why small businesses are becoming cybercriminals’ biggest targets, and what they can do to protect against threats like identity theft, phishing and ransomware.

Happy Book Lovers Day, and happy reading!

Source :
https://blog.sonicwall.com/en-us/2022/08/ten-cybersecurity-books-for-your-late-summer-reading-list/

CISA Warns of Active Exploitation of Palo Alto Networks’ PAN-OS Vulnerability

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added a security flaw impacting Palo Alto Networks PAN-OS to its Known Exploited Vulnerabilities Catalog, based on evidence of active exploitation.

The high-severity vulnerability, tracked as CVE-2022-0028 (CVSS score: 8.6), is a URL filtering policy misconfiguration that could allow an unauthenticated, remote attacker to carry out reflected and amplified TCP denial-of-service (DoS) attacks.


“If exploited, this issue would not impact the confidentiality, integrity, or availability of our products,” Palo Alto Networks said in an alert. “However, the resulting denial-of-service (DoS) attack may help obfuscate the identity of the attacker and implicate the firewall as the source of the attack.”

The weakness impacts the following product versions and has been addressed as part of updates released this month –

  • PAN-OS 10.2 (version < 10.2.2-h2)
  • PAN-OS 10.1 (version < 10.1.6-h6)
  • PAN-OS 10.0 (version < 10.0.11-h1)
  • PAN-OS 9.1 (version < 9.1.14-h4)
  • PAN-OS 9.0 (version < 9.0.16-h3), and
  • PAN-OS 8.1 (version < 8.1.23-h1)

The networking equipment maker said it discovered the vulnerability after being notified that susceptible firewall appliances from different vendors, including Palo Alto Networks, were being used as part of an attempted reflected denial-of-service (RDoS) attack.

In light of active exploitation, customers of affected products are advised to apply the relevant patches to mitigate potential threats. Federal Civilian Executive Branch (FCEB) agencies are mandated to update to the latest version by September 12, 2022.

Source :
https://thehackernews.com/2022/08/cisa-warns-of-active-exploitation-of.html

The complete guide to WSUS and Configuration Manager SUP maintenance

This article addresses some common questions about WSUS maintenance for Configuration Manager environments.

Original product version:   Windows Servers, Windows Server Update Services, Configuration Manager
Original KB number:   4490644

Introduction

Questions are often along the lines of “How should I properly run this maintenance in a Configuration Manager environment?” or “How often should I run this maintenance?” It’s not uncommon for conscientious Configuration Manager administrators to be unaware that WSUS maintenance should be run at all. Most of us just set up WSUS servers because it’s a prerequisite for a software update point (SUP). Once the SUP is set up, we close the WSUS console and pretend it doesn’t exist. Unfortunately, it can be problematic for Configuration Manager clients, and the overall performance of the WSUS/SUP server.

With the understanding that this maintenance needs to be done, you’re wondering what maintenance you need to do and how often you need to be doing it. The answer is that you should perform monthly maintenance. Maintenance is easy and doesn’t take long for WSUS servers that have been well maintained from the start. However, if it has been some time since WSUS maintenance was done, the cleanup may be more difficult or time consuming the first time. It will be much easier or faster in subsequent months.

Maintain WSUS while supporting Configuration Manager current branch version 1906 and later versions

If you are using Configuration Manager current branch version 1906 or later versions, we recommend that you enable the WSUS Maintenance options in the software update point configuration at the top-level site to automate the cleanup procedures after each synchronization. This effectively handles all cleanup operations described in this article except backup and reindexing of the WSUS database; you should still automate the backup and reindexing of the WSUS database on a schedule.

Screenshot of the WSUS Maintenance options in Software Update Point Components Properties window.

For more information about software update maintenance in Configuration Manager, see Software updates maintenance.

Important considerations

 Note

If you are utilizing the maintenance features that have been added in Configuration Manager, version 1906, you don’t need to consider these items since Configuration Manager handles the cleanup after each synchronization.

  1. Before you start the maintenance process, read all of the information and instructions in this article.
  2. When using WSUS along with downstream servers, WSUS servers are added from the top down, but should be removed from the bottom up. When syncing or adding updates, they go to the upstream WSUS server first, then replicate down to the downstream servers. When performing a cleanup and removing items from WSUS servers, you should start at the bottom of the hierarchy.
  3. WSUS maintenance can be performed simultaneously on multiple servers in the same tier. When doing so, ensure that one tier is done before moving onto the next one. The cleanup and reindex steps described below should be run on all WSUS servers, regardless of whether they are a replica WSUS server or not. For more information about determining if a WSUS server is a replica, see Decline superseded updates.
  4. Ensure that SUPs don’t sync during the maintenance process, as it may cause a loss of some work already done. Check the SUP sync schedule and temporarily set it to manual during this process.
    Screenshot of the Enable synchronization on a schedule setting.
  5. If you have multiple SUPs of the primary site or central administration site (CAS) which don’t share the SUSDB, consider the WSUS server that syncs with the first SUP on the site as residing in a tier below the site. For example, my CAS has two SUPs:
    • The one named New syncs with Microsoft Update, it would be my top tier (Tier1).
    • The server named 2012 syncs with New, and it would be considered in the second tier. It can be cleaned up at the same time I would do all my other Tier2 servers, such as my primary site’s single SUP.
    Screenshot of the two example SUPs.

Perform WSUS maintenance

The basic steps necessary for proper WSUS maintenance include:

  1. Back up the WSUS database
  2. Create custom indexes
  3. Reindex the WSUS database
  4. Decline superseded updates
  5. Run the WSUS Server Cleanup Wizard

Back up the WSUS database

Back up the WSUS database (SUSDB) by using the desired method. For more information, see Create a Full Database Backup.
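As one possible approach, here is a minimal PowerShell sketch using the Backup-SqlDatabase cmdlet from the SqlServer module. The instance name and backup path are placeholders to adjust for your environment; for a WID-hosted SUSDB, point -ServerInstance at the pipe name described later in this article.

# Minimal sketch: back up SUSDB with the SqlServer PowerShell module.
# 'SQLSERVER\INSTANCE' and the backup path are placeholders.
Import-Module SqlServer
Backup-SqlDatabase -ServerInstance 'SQLSERVER\INSTANCE' -Database 'SUSDB' -BackupFile 'D:\Backups\SUSDB.bak'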

Create custom indexes

This process is optional but recommended; it greatly improves performance during subsequent cleanup operations.

If you are using Configuration Manager current branch version 1906 or a later version, we recommend that you use Configuration Manager to create the indexes. To create the indexes, configure the Add non-clustered indexes to the WSUS database option in the software update point configuration for the top-most site.

Screenshot of the Add non-clustered indexes to the WSUS database option under WSUS Maintenance tab.

If you use an older version of Configuration Manager or standalone WSUS servers, follow these steps to create custom indexes in the SUSDB database. For each SUSDB, it’s a one-time process.

  1. Make sure that you have a backup of the SUSDB database.
  2. Use SQL Management Studio to connect to the SUSDB database, in the same manner as described in the Reindex the WSUS database section.
  3. Run the following script against SUSDB to create two custom indexes:

     -- Create custom index in tbLocalizedPropertyForRevision
     USE [SUSDB]

     CREATE NONCLUSTERED INDEX [nclLocalizedPropertyID] ON [dbo].[tbLocalizedPropertyForRevision]
     (
          [LocalizedPropertyID] ASC
     ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

     -- Create custom index in tbRevisionSupersedesUpdate
     CREATE NONCLUSTERED INDEX [nclSupercededUpdateID] ON [dbo].[tbRevisionSupersedesUpdate]
     (
          [SupersededUpdateID] ASC
     ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

     If custom indexes have been previously created, running the script again results in an error similar to the following one:

     Msg 1913, Level 16, State 1, Line 4
     The operation failed because an index or statistics with name 'nclLocalizedPropertyID' already exists on table 'dbo.tbLocalizedPropertyForRevision'.

Reindex the WSUS database

To reindex the WSUS database (SUSDB), use the Reindex the WSUS Database T-SQL script.

The steps to connect to SUSDB and perform the reindex differ, depending on whether SUSDB is running in SQL Server or Windows Internal Database (WID). To determine where SUSDB is running, check the value of the SQLServerName registry entry on the WSUS server, located under the HKEY_LOCAL_MACHINE\Software\Microsoft\Update Services\Server\Setup subkey.

If the value contains just the server name or server\instance, SUSDB is running on a SQL Server. If the value includes the string ##SSEE or ##WID in it, SUSDB is running in WID, as shown:

Screenshot of SqlServerName-SSEE.
Screenshot of SqlServerName-WID.
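If you'd rather check this programmatically, here is a minimal PowerShell sketch that reads the registry entry described above and reports where SUSDB is running:

# Read the SqlServerName value described above to determine where SUSDB runs.
$key = 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup'
$name = (Get-ItemProperty -Path $key -Name SqlServerName).SqlServerName

if ($name -match '##WID|##SSEE') {
    Write-Output "SUSDB is running in Windows Internal Database (WID): $name"
} else {
    Write-Output "SUSDB is running on SQL Server: $name"
}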

If SUSDB was installed on WID

If SUSDB was installed on WID, SQL Server Management Studio Express must be installed locally to run the reindex script.

After installing SQL Server Management Studio Express, launch it and enter the server name to connect to:

  • If the OS is Windows Server 2012 or later versions, use \\.\pipe\MICROSOFT##WID\tsql\query.
  • If the OS is older than Windows Server 2012, enter \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query.

For WID, if errors similar to the following occur when attempting to connect to SUSDB using SQL Server Management Studio (SSMS), try launching SSMS using the Run as administrator option.

Screenshot of the Cannot connect to server error.

If SUSDB was installed on SQL Server

If SUSDB was installed on full SQL Server, launch SQL Server Management Studio and enter the name of the server (and instance if needed) when prompted.

 Tip

Alternatively, a utility called sqlcmd can be used to run the reindex script. For more information, see Reindex the WSUS Database.
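For example, assuming the reindex script has been saved to C:\WSUS\SUSDBMaint.sql (a path chosen purely for illustration) and SUSDB runs in WID on Windows Server 2012 or later, the invocation might look like this:

sqlcmd -S \\.\pipe\MICROSOFT##WID\tsql\query -i C:\WSUS\SUSDBMaint.sql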

Running the script

To run the script in either SQL Server Management Studio or SQL Server Management Studio Express, select New Query, paste the script in the window, and then select Execute. When it’s finished, a Query executed successfully message is displayed in the status bar, and the Results pane contains messages about which indexes were rebuilt.

Screenshot of executing the SQL statement.
Screenshot of the successful log.

Decline superseded updates

Decline superseded updates in the WSUS server to help clients scan more efficiently. Before declining updates, ensure that the superseding updates are deployed, and that the superseded ones are no longer needed. Configuration Manager includes a separate cleanup, which allows it to expire superseded updates based on specified criteria.

The following SQL query can be run against the SUSDB database, to quickly determine the number of superseded updates. If the number of superseded updates is higher than 1500, it can cause various software update related issues on both the server and client sides.

-- Find the number of superseded updates
Select COUNT(UpdateID) from vwMinimalUpdate where IsSuperseded=1 and Declined=0
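If you check this number regularly, a small PowerShell wrapper can save a trip to SQL Server Management Studio. A minimal sketch, assuming the SqlServer module is installed and using a placeholder instance name:

# Run the superseded-updates count query above and warn past the 1500 threshold.
Import-Module SqlServer
$query = 'SELECT COUNT(UpdateID) AS SupersededCount FROM vwMinimalUpdate WHERE IsSuperseded = 1 AND Declined = 0'
$result = Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'SUSDB' -Query $query

if ($result.SupersededCount -gt 1500) {
    Write-Warning "SUSDB contains $($result.SupersededCount) superseded updates - consider declining them."
}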

If you are using Configuration Manager current branch version 1906 or a later version, we recommend that you automatically decline the superseded updates by enabling the Decline expired updates in WSUS according to supersedence rules option in the software update point configuration for the top-most site.

Screenshot of the Decline expired updates in WSUS according to supersedence rules option under WSUS Maintenance tab.

When you use this option, you can see how many updates were declined by reviewing the WsyncMgr.log file after the synchronization process finishes. If you use this option, you don’t need to use the script described later in this section (either by manually running it or by setting up a task to run it on a schedule).

If you are using standalone WSUS servers or an older version of Configuration Manager, you can manually decline superseded updates by using the WSUS console, or by running the Decline-SupersededUpdatesWithExclusionPeriod.ps1 PowerShell script. Copy and save the script as a Decline-SupersededUpdatesWithExclusionPeriod.ps1 script file.

 Note

This script is provided as is. It should be fully tested in a lab before you use it in production. Microsoft makes no guarantees regarding the use of this script in any way. Always run the script with the -SkipDecline parameter first, to get a summary of how many superseded updates will be declined.

If Configuration Manager is set to Immediately expire superseded updates (see below), the PowerShell script can be used to decline all superseded updates. It should be done on all autonomous WSUS servers in the Configuration Manager/WSUS hierarchy.

Screenshot of the Immediately expire superseded updates options under Supersedence Rules tab.

You don’t need to run the PowerShell script on WSUS servers that are set as replicas, such as secondary site SUPs. To determine whether a WSUS server is a replica, check the Update Source settings.

Screenshot of the Update Source and Proxy Server option.

If updates are not configured to be immediately expired in Configuration Manager, the PowerShell script must be run with an exclusion period that matches the Configuration Manager setting for number of days to expire superseded updates. In this case, it would be 60 days since SUP component properties are configured to wait two months before expiring superseded updates:

Screenshot of the months to expire superseded updates.

The following command lines illustrate the various ways that the PowerShell script can be run (if the script is being run on the WSUS server, LOCALHOST can be used in place of the actual SERVERNAME):

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530 -SkipDecline

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530 -ExclusionPeriod 60

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -Port 8530

Decline-SupersededUpdatesWithExclusionPeriod.ps1 -UpdateServer SERVERNAME -UseSSL -Port 8531

Running the script with -SkipDecline and -ExclusionPeriod 60 gathers information about updates on the WSUS server, including how many updates could be declined:

Screenshot of the Windows PowerShell window running SkipDecline and ExclusionPeriod 60.

Running the script with -ExclusionPeriod 60 declines superseded updates older than 60 days:

Screenshot of the Windows PowerShell window with ExclusionPeriod 60 running.

The output and progress indicators are displayed while the script is running. Note the SupersededUpdates.csv file, which will contain a list of all updates that are declined by the script:

Screenshot of the Windows PowerShell output and progress indicator.

 Note

If issues occur when attempting to use the above PowerShell script to decline superseded updates, see the section Running the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script times out when connecting to the WSUS server, or a 401 error occurs while running for troubleshooting steps.

After superseded updates have been declined, for best performance, SUSDB should be reindexed again. For related information, see Reindex the WSUS database.

Run the WSUS Server Cleanup Wizard

WSUS Server Cleanup Wizard provides options to clean up the following items:

  • Unused updates and update revisions (also known as Obsolete updates)
  • Computers not contacting the server
  • Unneeded update files
  • Expired updates
  • Superseded updates

In a Configuration Manager environment, the Computers not contacting the server and Unneeded update files options are not relevant because Configuration Manager manages software update content and devices, unless the Create all WSUS reporting events or Create only WSUS status reporting events option is selected under Software Update Sync Settings. If you have one of these options configured, you should consider automating the WSUS Server Cleanup Wizard to clean up these two items.

If you are using Configuration Manager current branch version 1906 or a later version, enabling the Decline expired updates in WSUS according to supersedence rules option handles declining of Expired updates and Superseded updates based on the supersedence rules that are specified in Configuration Manager. Enabling the Remove obsolete updates from the WSUS database option in Configuration Manager current branch version 1906 handles the cleanup of Unused updates and update revisions (Obsolete updates). It’s recommended to enable these options in the software update point configuration on the top-level site to allow Configuration Manager to clean up the WSUS database.

Screenshot of the Remove obsolete updates from the WSUS database option.

If you’ve never cleaned up obsolete updates from the WSUS database before, this task may time out. You can review WsyncMgr.log for more information, and manually run the SQL script that is specified in HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out once, which would allow subsequent attempts from Configuration Manager to run successfully. For more information about WSUS cleanup and maintenance in Configuration Manager, see Software updates maintenance.

For standalone WSUS servers, or if you are using an older version of Configuration Manager, it is recommended that you run the WSUS Server Cleanup Wizard periodically. If the WSUS Server Cleanup Wizard has never been run and WSUS has been in production for a while, the cleanup may time out. In that case, perform step 2 (create custom indexes) and step 3 (reindex) first, then run the cleanup with only the Unused updates and update revisions option checked.

If you have never run the WSUS Server Cleanup Wizard, running the cleanup with Unused updates and update revisions may require a few passes. If it times out, run it again until it completes, and then run each of the other options one at a time. Lastly, make a full pass with all options checked. If timeouts continue to occur, see the SQL Server alternative in HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out. It may take multiple hours or days for the Server Cleanup Wizard or SQL alternative to run to completion.

The WSUS Server Cleanup Wizard runs from the WSUS console. It is located under Options, as shown here:

Screenshot of the WSUS Server Cleanup Wizard location page.

For more information, see Use the Server Cleanup Wizard.

Screenshot of the WSUS Server Cleanup Wizard start page.

When the cleanup finishes, it reports the number of items it has removed. If you do not see this information returned on your WSUS server, it is safe to assume that the cleanup timed out. In that case, you will need to start it again or use the SQL alternative.

Screenshot of the WSUS Server Cleanup Wizard when finished.

After superseded updates have been declined, for best performance, SUSDB should be reindexed again. See the Reindex the WSUS database section for related information.

Troubleshooting

HELP! My WSUS has been running for years without ever having maintenance done and the cleanup wizard keeps timing out

There are two different options here:

  1. Reinstall WSUS with a fresh database. There are a number of caveats related to this approach, including the length of the initial sync, and full client scans against SUSDB versus differential scans.
  2. Ensure you have a backup of the SUSDB database, then run a reindex. When that completes, run the following script in SQL Server Management Studio or SQL Server Management Studio Express. After it finishes, follow all of the above instructions for running maintenance. This last step is necessary because the spDeleteUpdate stored procedure only removes unused updates and update revisions.

 Note

Before you run the script, follow the steps in The spDeleteUpdate stored procedure runs slowly to improve the performance of the execution of spDeleteUpdate.

DECLARE @var1 INT
DECLARE @msg nvarchar(100)

CREATE TABLE #results (Col1 INT)
INSERT INTO #results(Col1) EXEC spGetObsoleteUpdatesToCleanup

DECLARE WC Cursor
FOR
SELECT Col1 FROM #results

OPEN WC
FETCH NEXT FROM WC
INTO @var1
WHILE (@@FETCH_STATUS > -1)
BEGIN
    SET @msg = 'Deleting ' + CONVERT(varchar(10), @var1)
    RAISERROR(@msg, 0, 1) WITH NOWAIT
    EXEC spDeleteUpdate @localUpdateID = @var1
    FETCH NEXT FROM WC INTO @var1
END

CLOSE WC
DEALLOCATE WC

DROP TABLE #results

Running the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script times out when connecting to the WSUS server, or a 401 error occurs while running

If errors occur when you attempt to use the PowerShell script to decline superseded updates, an alternative SQL script can be run against SUSDB.

  1. If Configuration Manager is used along with WSUS, check Software Update Point Component Properties > Supersedence Rules to see how quickly superseded updates expire, such as immediately or after X months. Make a note of this setting.
    Screenshot of the Supersedence Rules.
  2. If you haven’t backed up the SUSDB database, do so before proceeding further.
  3. Use SQL Server Management Studio to connect to SUSDB.
  4. Run the following query. The number 90 in the line that includes DECLARE @thresholdDays INT = 90 should correspond with the Supersedence Rules from step 1 of this procedure, and the correct number of days that aligns with the number of months that is configured in Supersedence Rules. If this is set to expire immediately, the value in the SQL query for @thresholdDays should be set to zero.

     -- Decline superseded updates in SUSDB; alternative to Decline-SupersededUpdatesWithExclusionPeriod.ps1
     DECLARE @thresholdDays INT = 90 -- Specify the number of days between today and the release date for which the superseded updates must not be declined (i.e., updates older than 90 days). This should match the configuration of supersedence rules in SUP component properties, if ConfigMgr is being used with WSUS.
     DECLARE @testRun BIT = 0 -- Set this to 1 to test without declining anything.
     -- There shouldn't be any need to modify anything after this line.
     DECLARE @uid UNIQUEIDENTIFIER
     DECLARE @title NVARCHAR(500)
     DECLARE @date DATETIME
     DECLARE @userName NVARCHAR(100) = SYSTEM_USER
     DECLARE @count INT = 0

     DECLARE DU CURSOR FOR
         SELECT MU.UpdateID, U.DefaultTitle, U.CreationDate
         FROM vwMinimalUpdate MU
         JOIN PUBLIC_VIEWS.vUpdate U ON MU.UpdateID = U.UpdateId
         WHERE MU.IsSuperseded = 1
             AND MU.Declined = 0
             AND MU.IsLatestRevision = 1
             AND MU.CreationDate < DATEADD(dd, -@thresholdDays, GETDATE())
         ORDER BY MU.CreationDate

     PRINT 'Declining superseded updates older than ' + CONVERT(NVARCHAR(5), @thresholdDays) + ' days.' + CHAR(10)

     OPEN DU
     FETCH NEXT FROM DU INTO @uid, @title, @date

     WHILE (@@FETCH_STATUS > -1)
     BEGIN
         SET @count = @count + 1
         PRINT 'Declining update ' + CONVERT(NVARCHAR(50), @uid) + ' (Creation Date ' + CONVERT(NVARCHAR(50), @date) + ') - ' + @title + ' ...'
         IF @testRun = 0
             EXEC spDeclineUpdate @updateID = @uid, @adminName = @userName, @failIfReplica = 1
         FETCH NEXT FROM DU INTO @uid, @title, @date
     END

     CLOSE DU
     DEALLOCATE DU

     PRINT CHAR(10) + 'Attempted to decline ' + CONVERT(NVARCHAR(10), @count) + ' updates.'
  5. To check progress, monitor the Messages tab in the Results pane.

What if I find out I needed one of the updates that I declined?

If you decide you need one of these declined updates in Configuration Manager, you can get it back in WSUS by right-clicking the update, and selecting Approve. Change the approval to Not Approved, and then resync the SUP to bring the update back in.

Screenshot of the WSUS Approve Updates screen.

If the update is no longer in WSUS, it can be imported from the Microsoft Update Catalog, if it hasn’t been expired or removed from the catalog.

Screenshot shows how to import updates in WSUS.

Automating WSUS maintenance

 Note

If you are using Configuration Manager version 1906 or a later version, automate the cleanup procedures by enabling the WSUS Maintenance options in the software update point configuration of the top-level site. These options handle all cleanup operations that are performed by the WSUS Server Cleanup Wizard. However, you should still automatically back up and reindex the WSUS database on a schedule.

WSUS maintenance tasks can be automated, assuming that a few requirements are met first.

  1. If you have never run WSUS cleanup, you need to do the first two cleanups manually. Your second manual cleanup should be run 30 days after your first, since it takes 30 days for some updates and update revisions to age out. There are specific reasons for not automating until after your second cleanup: your first cleanup will probably run longer than normal, so you can’t judge how long this maintenance will normally take. The second cleanup is a much better indicator of what is normal for your machines. This is important because you need to figure out roughly how long each step takes as a baseline (I also like to add about 30 minutes of wiggle room) so that you can determine the timing for your schedule.
  2. If you have downstream WSUS servers, you will need to perform maintenance on them first, and then do the upstream servers.
  3. To schedule the reindex of the SUSDB, you will need a full version of SQL Server. Windows Internal Database (WID) doesn’t have the capability of scheduling a maintenance task through SQL Server Management Studio Express. That said, in cases where WID is used, you can use the Task Scheduler with SQLCMD mentioned earlier. If you go this route, it’s important that you don’t sync your WSUS servers/SUPs during this maintenance period! If you do, it’s possible your downstream servers will just end up resyncing all of the updates you just attempted to clean out. I schedule this overnight before my AM sync, so I have time to check on it before my sync runs.

Needed/helpful links:

WSUS cleanup script

[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")` 
 | out-null 
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer(); 
$cleanupScope = new-object Microsoft.UpdateServices.Administration.CleanupScope; 
$cleanupScope.DeclineSupersededUpdates = $true        
$cleanupScope.DeclineExpiredUpdates = $true 
$cleanupScope.CleanupObsoleteUpdates = $true 
$cleanupScope.CompressUpdates = $true 
#$cleanupScope.CleanupObsoleteComputers = $true 
$cleanupScope.CleanupUnneededContentFiles = $true 
$cleanupManager = $wsus.GetCleanupManager(); 
$cleanupManager.PerformCleanup($cleanupScope);

Setting up the WSUS Cleanup task in Task Scheduler

 Note

As mentioned previously, if you are using Configuration Manager current branch version 1906 or a later version, automate the cleanup procedures by enabling the WSUS Maintenance options in the software update point configuration of the top-level site. For standalone WSUS servers or older versions of Configuration Manager, you can continue to use the following steps.

I’ll walk you through the process of setting up the cleanup task in the following steps.

  1. Open Task Scheduler and select Create a Task. On the General tab, set the name of the task and the user that you want to run the PowerShell script as (most people use a service account). Select Run whether a user is logged on or not, and then add a description if you wish.
    Screenshot of the WSUS Create a task screen.
  2. Under the Actions tab, add a new action and specify the program/script you want to run. In this case, we need to use PowerShell and point it to the PS1 file we want it to run. You can use the WSUS cleanup script. The version below comments out the cleanup options that Configuration Manager current branch version 1906 already performs; you can uncomment them if you are using standalone WSUS or an older version of Configuration Manager. If you would like a log, you can modify the last line of the script as follows:

     [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
     $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
     $cleanupScope = new-object Microsoft.UpdateServices.Administration.CleanupScope;
     # $cleanupScope.DeclineSupersededUpdates = $true # Performed by CM1906
     # $cleanupScope.DeclineExpiredUpdates = $true # Performed by CM1906
     # $cleanupScope.CleanupObsoleteUpdates = $true # Performed by CM1906
     $cleanupScope.CompressUpdates = $true
     $cleanupScope.CleanupObsoleteComputers = $true
     $cleanupScope.CleanupUnneededContentFiles = $true
     $cleanupManager = $wsus.GetCleanupManager();
     $cleanupManager.PerformCleanup($cleanupScope) | Out-File C:\WSUS\WsusClean.txt;

     You’ll get an FYI/warning in Task Scheduler when you save. You can ignore this warning.
    Screenshot shows WSUS add a line of script to start the task.
  3. On the Triggers tab, set your schedule for once a month or on any schedule you want. Again, you must ensure that you don’t sync your WSUS during the entire cleanup and reindex time.
    Screenshot shows Set the WSUS Edit Trigger for the task.
  4. Set any other conditions or settings you would like to tweak as well. When you save the task, you may be prompted for credentials of the Run As user.
  5. You can also use these steps to configure the Decline-SupersededUpdatesWithExclusionPeriod.ps1 script to run every three months. I usually set this script to run before the other cleanup steps, but only after I have run it manually and ensured it completed successfully. I run at 12:00 AM on the first Sunday every three months.
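If you prefer to script the task creation instead of clicking through the UI, here is a sketch using the ScheduledTasks cmdlets available on Windows Server 2012 and later. The script path, schedule, and service account are placeholders; note that New-ScheduledTaskTrigger has no monthly option, so a four-week interval is used as an approximation:

# Register the cleanup task from PowerShell. Paths, schedule, and the
# service account below are placeholders - adjust for your environment.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\WSUS\WsusCleanup.ps1'
# No -Monthly switch exists, so approximate "monthly" with a 4-week interval.
$trigger = New-ScheduledTaskTrigger -Weekly -WeeksInterval 4 -DaysOfWeek Sunday -At 1am
Register-ScheduledTask -TaskName 'WSUS Monthly Cleanup' -Action $action -Trigger $trigger -User 'DOMAIN\svc-wsus' -Password 'REDACTED' -RunLevel Highest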

Setting up the SUSDB reindex for WID using SQLCMD and Task Scheduler

  1. Save the Reindex the WSUS database script as a .sql file (for example, SUSDBMaint.sql).
  2. Create a basic task and give it a name:
    Screenshot of the WSUS Create Basic Task Wizard screen.
  3. Schedule this task to start about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run and I am going to give it another 30 minutes before starting my reindex. It means I would schedule this task for every first Sunday at 2:00 AM, as shown here:
    Screenshot shows set the frequency for that task in the Create Basic Task Wizard.
  4. Select the action to Start a program. In the Program/script box, type the following command. The file specified after the -i parameter is the path to the SQL script you saved in step 1. The file specified after the -o parameter is where you would like the log to be placed. Here’s an example:

     "C:\Program Files\Microsoft SQL Server\110\Tools\Binn\SQLCMD.exe" -S \\.\pipe\Microsoft##WID\tsql\query -i C:\WSUS\SUSDBMaint.sql -o c:\WSUS\reindexout.txt

    Screenshot shows how the script should look in the Create Basic Task Wizard.
  5. You’ll get a warning, similar to the one you got when creating the cleanup task. Select Yes to accept the arguments, and then select Finish to apply:
    Screenshot of the Task Scheduler confirmation popup window.
  6. You can test the script by forcing it to run and reviewing the log for errors. If you run into issues, the log will tell you why. Usually if it fails, the account running the task doesn’t have appropriate permissions or the WID service isn’t started.

Setting up a basic Scheduled Maintenance Task in SQL for non-WID SUSDBs

 Note

You must be a sysadmin in SQL Server to create or manage maintenance plans.

  1. Open SQL Server Management Studio and connect to your WSUS instance. Expand Management, right-click Maintenance Plans, and then select New Maintenance Plan. Give your plan a name.
    Screenshot of the typed name for your WSUS maintenance plan.
  2. Select subplan1 and then ensure your Toolbox is in context:
    Screenshot to ensure your Toolbox is in context.
  3. Drag and drop the task Execute T-SQL Statement Task:
    Screenshot of the Execute T-SQL Statement Task option.
  4. Right-click it and select Edit. Copy and paste the WSUS reindex script, and then select OK:
    Screenshot to Copy and paste the WSUS reindex script.
  5. Schedule this task to run about 30 minutes after you expect your cleanup to finish running. My cleanup is running at 1:00 AM every first Sunday. It takes about 30 minutes to run, and I am going to give it another 30 minutes before starting reindex. It means I would schedule this task to run every first Sunday at 2:00 AM.
    Screenshot of the WSUS New Job Schedule screen.
  6. While creating the maintenance plan, consider adding a backup of the SUSDB into the plan as well. I usually back up first, then reindex. It may add more time to the schedule.

Putting it all together

When running it in a hierarchy, the WSUS cleanup run should be done from the bottom of the hierarchy up. However, when using the script to decline superseded updates, the run should be done from the top down. Declining superseded updates is really a type of addition to an update rather than a removal. You’re actually adding a type of approval in this case.

Since a sync can’t be done during the actual cleanup, it’s suggested to schedule/complete all tasks overnight. Then check on their completion via the logging the following morning, before the next scheduled sync. If something failed, maintenance can be rescheduled for the next night, once the underlying issue is identified and resolved.

These tasks may run faster or slower depending on the environment, and timing of the schedule should reflect that. Hopefully they are faster since my lab environment tends to be a bit slower than a normal production environment. I am a bit aggressive on the timing of the decline scripts. If Tier2 overlaps Tier3 by a few minutes, it will not cause a problem because my sync isn’t scheduled to run.

Not syncing keeps the declines from accidentally flowing into my Tier3 replica WSUS servers from Tier2. I did give myself extra time between the Tier3 decline and the Tier3 cleanup since I definitely want to make sure the decline script finishes before running my cleanup.

This brings up a common question: Since I’m not syncing, why shouldn’t I run all of the cleanups and reindexes at the same time?

The answer is that you probably could, but I wouldn’t. If my coworker across the globe needs to run a sync, with this schedule I would minimize the risk of orphaned updates in WSUS. And I can schedule it to rerun to completion the next night.

Time       Tier 1 tasks   Tier 2 tasks   Tier 3 tasks
12:00 AM   Decline
12:15 AM                  Decline
12:30 AM                                 Decline
1:00 AM                                  WSUS Cleanup
2:00 AM                   WSUS Cleanup   Reindex
3:00 AM    Cleanup        Reindex
4:00 AM    Reindex

 Note

If you’re using Configuration Manager current branch version 1906 or a later version to perform WSUS Maintenance, Configuration Manager performs the cleanup after synchronization using the top-down approach. In this scenario, you can schedule the WSUS database backup and reindexing jobs to run before the configured sync schedule without worrying about any of the other steps, because Configuration Manager will handle everything else.

For more information about SUP maintenance in Configuration Manager, see the Configuration Manager documentation.

Fake DDoS Pages On WordPress Sites Lead to Drive-By-Downloads

It’s not uncommon for users to experience “DDoS Protection” pages when casually browsing the web. These DDoS protection pages are typically associated with browser checks performed by WAF/CDN services which verify if the site visitor is, in fact, a human or is part of a Distributed Denial of Service (DDoS) attack or other unwanted bot.

Under normal circumstances, DDoS pages usually don’t affect users much — they simply perform a check or request a skill testing question in order to proceed to the desired webpage. However, a recent surge in JavaScript injections targeting WordPress sites has resulted in fake DDoS protection prompts which lead victims to download remote access trojan malware.

Bots prompt usage of DDoS protection

The web is absolutely rife with bots and crawlers. Estimates of how much of total web traffic is bots vary, but most put it at anywhere between 25% and 45% of all traffic!

Bots themselves are automated queries to websites done by computer programs. Some bots are good and actually essential for the functioning of the web as we know it today. These include crawlers such as GoogleBot, BingBot, and Baidu Spider which scan and index content from webpages so that they can be discovered during search.

Bad bots, on the other hand, make up an even greater portion of web traffic. These include DDoS traffic, scrapers gobbling up email addresses to send spam, bots attempting to find vulnerable websites to compromise, content stealers, and more.

Furthermore, bots eat up bandwidth on websites, causing increased hosting costs and disruption of meaningful website visitor statistics. The gradual increase of bad bot traffic has prompted many websites to deter or otherwise block them entirely, resulting in the nearly ubiquitous usage of DDoS prevention services and CAPTCHAs on websites.

CAPTCHA for human verification

Although a nuisance, these browser verification checks are essential to deterring unwanted and malicious traffic from legitimate websites.

Sucuri Firewall Block Page

Fake DDoS protection prompts used to serve RATs

Unfortunately, attackers have begun leveraging these familiar security assets in their own malware campaigns. We recently discovered a malicious JavaScript injection affecting WordPress websites which results in a fake CloudFlare DDoS protection popup.

Fake DDoS protection prompt

Since these types of browser checks are so common on the web many users wouldn’t think twice before clicking this prompt to access the website they’re trying to visit. However, the prompt actually downloads a malicious .iso file onto the victim’s computer.

Malicious .iso downloaded from fake DDoS prompt

This is followed by a new message coaxing the user into opening the file in order to obtain a verification code to access the website:

Verification code request

The .iso file does in fact contain a verification code so as to not disrupt the ruse

What most users do not realise is that this file is in fact a remote access trojan, currently flagged by 13 security vendors at the time of writing this article.

The VirusTotal report for this malicious file

We reached out to our good friend Jerome Segura at MalwareBytes to see what happens to unfortunate victims’ Windows computers when they install this malware onto their endpoint devices:

“This is NetSupport RAT. It has been linked to FakeUpdates/SocGholish and typically used to check victims before ransomware rollout. The ISO file contains a shortcut disguised as an executable that runs powershell from another text file.

It also installs RaccoonStealer and drops the following payloads:

https://www.virustotal.com/gui/file/4d24b359176389301c14a92607b5c26b8490c41e7e3a2abbc87510d1376f4a87/detection

https://www.virustotal.com/gui/file/299472f1d7e227f31ef573758452e9a57da2e3f30f3160c340b09451b032f8f3?nocache=1

After that, just about anything can happen depending on the victim.”

– Jerome Segura

Screenshot courtesy of Jerome Segura

The infected computer could be used to pilfer social media or banking credentials, detonate ransomware, or even entrap the victim into a nefarious “slave” network, extort the computer owner, and violate their privacy — all depending on what the attackers decide to do with the compromised device.

A look at the malware itself

When malicious actors aim to infect endpoint devices, they need a distribution network. Quite often this takes the form of malicious or phishing emails sent to potential victims’ inboxes. In this case, however, the remote access trojans are distributed through hacked WordPress websites.

So, how does this WordPress malware actually work and what does it look like?

Most prominently we see three short lines of malicious code affecting the following file: ./wp-includes/js/jquery/jquery.min.js

Malicious code found in jquery.min.js

We have also seen instances of this very same malware injected into the active theme file of the victim’s WordPress website. In any event, the files it is appended onto will load once the site is opened up in the browser, prompting the download of the malicious remote access trojan.
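One simple way to spot this kind of tampering is to compare the hashes of core files against known-good copies, which is essentially what a file integrity monitor automates. A minimal PowerShell sketch, with both paths being placeholders:

# Compare a site's jquery.min.js against a known-good baseline copy.
$baseline = Get-FileHash -Algorithm SHA256 'C:\baseline\jquery.min.js'
$current  = Get-FileHash -Algorithm SHA256 'C:\site\wp-includes\js\jquery\jquery.min.js'

if ($current.Hash -ne $baseline.Hash) {
    Write-Warning 'jquery.min.js differs from the known-good copy - inspect it for injected code.'
}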

Located at adogeevent[.]com is a heavily obfuscated JavaScript sample containing the payload:

Payload in heavily obfuscated JavaScript


This JavaScript then communicates with a second malicious domain which loads more JavaScript that initiates the download prompt for the malicious .iso file: hxxps://confirmation-process[.]at/fortest/parsez[.]php?base=www.REDACTED.com&full=https://www.REDACTED.com/?v11

Malicious .iso file


We can see the reference to the security_install.iso file here (hosted at a free Austrian file sharing service free[.]files[.]cnow[.]at), as well as a second malicious .msi file which also contains malware — although, in this case, it is commented out:

Second malicious .msi file

How to protect your site from infection

This case is a great example of both the importance of website security — and the importance of remaining vigilant when browsing the web. It’s not just SEO rankings or website reputations that are on the line, but the very security and privacy of everyone who visits your website. Malicious actors will take whatever avenues are available to them to compromise computers and push their malware onto unsuspecting victims.

Remote Access Trojans (RATs) are regarded as one of the worst types of infections that can affect a computer as it gives the attackers full control over the device. At that point, the victim is at their mercy. Website owners and visitors alike must take any and all precautions to protect themselves.

Here are a number of key steps you can take to mitigate risk from this infection.

Website owners:

  • Keep all software on your website up to date
  • Use strong passwords
  • Use 2FA on your administrative panel
  • Place your website behind a firewall service
  • Employ file integrity monitoring

Regular website visitors:

  • Make sure your computer is running a robust antivirus program
  • Place 2FA on all important logins (such as your bank, social media)
  • Practice good browsing habits; don’t open strange files!
  • Keep your browser and all software on your computer updated/patched
  • Use a script blocker in your browser (advanced)

If you think that your website has been infected by this malware or you want to protect your website against infection, we’re always happy to help.

Source :
https://blog.sucuri.net/2022/08/fake-ddos-pages-on-wordpress-lead-to-drive-by-downloads.html

An encrypted ZIP file can have two correct passwords — here’s why

Password-protected ZIP archives are common means of compressing and sharing sets of files—from sensitive documents to malware samples to even malicious files (i.e. phishing “invoices” in emails).

But, did you know it is possible for an encrypted ZIP file to have two correct passwords, with both producing the same outcome when the ZIP is extracted?

A ZIP file with two passwords

Arseniy Sharoglazov, a cybersecurity researcher at Positive Technologies, shared over the weekend a simple experiment where he produced a password-protected ZIP file called x.zip.

The password Sharoglazov picked for encrypting his ZIP was a pun on the 1987 hit that’s become a popular tech meme:

Nev1r-G0nna-G2ve-Y8u-Up-N5v1r-G1nna-Let-Y4u-D1wn-N8v4r-G5nna-D0sert-You

But the researcher demonstrated that when extracting x.zip using a completely different password, he received no error messages.

In fact, using the different password resulted in successful extraction of the ZIP, with original contents intact:

pkH8a0AqNbHcdw8GrmSp

Two different passwords for same ZIP file result in successful extraction (Sharoglazov)

BleepingComputer was able to successfully reproduce the experiment using different ZIP programs. We used both p7zip (7-Zip equivalent for macOS) and another ZIP utility called Keka.

Like the researcher’s ZIP archive, ours was created with the aforementioned longer password, and with AES-256 encryption mode enabled.

While the ZIP was encrypted with the longer password, using either password extracted the archive successfully.

How’s this possible?

Responding to Sharoglazov’s demo, a curious reader, Rafa, raised an important question: “How????”

Twitter user Unblvr seems to have figured out the mystery:


When producing password-protected ZIP archives with AES-256 mode enabled, the ZIP format uses the PBKDF2 algorithm, which hashes the password provided by the user if that password is too long. By too long, we mean longer than 64 bytes (characters), explains the researcher.

Instead of the user’s chosen password (in this case “Nev1r-G0nna-G2ve-…”), this newly calculated hash becomes the actual password to the file.

When the user attempts to extract the file and enters a password that is longer than 64 bytes (“Nev1r-G0nna-G2ve-…”), the user’s input will once again be hashed by the ZIP application and compared against the correct password (which is now itself a hash). A match leads to a successful file extraction.

The alternative password used in this example (“pkH8a0AqNbHcdw8GrmSp”) is in fact the ASCII representation of the longer password’s SHA-1 hash.

SHA-1 checksum of “Nev1r-G0nna-G2ve-…” = 706b4838613041714e62486364773847726d5370.

This checksum when converted to ASCII produces: pkH8a0AqNbHcdw8GrmSp
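
For readers who want to check this themselves, below is a minimal Python sketch of the conversion. The printed values are the figures reported above, not something computed independently here:

    import hashlib

    # The researcher's original long password (71 bytes, i.e., more than 64)
    long_password = ("Nev1r-G0nna-G2ve-Y8u-Up-N5v1r-G1nna-"
                     "Let-Y4u-D1wn-N8v4r-G5nna-D0sert-You")

    # SHA-1 digest in hex; per the article this is 706b4838...6d5370
    digest_hex = hashlib.sha1(long_password.encode()).hexdigest()
    print(digest_hex)

    # Each pair of hex digits is an ASCII code, so decoding the raw
    # digest bytes yields the short alternative password
    print(bytes.fromhex(digest_hex).decode("ascii"))  # pkH8a0AqNbHcdw8GrmSp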

Note, however, that when encrypting or decrypting a file, the hashing process only occurs if the length of the password is greater than 64 characters.

In other words, shorter passwords will not be hashed at either stage of compressing or decompressing the ZIP.

This is why, when picking the long “Nev1r-G0nna-G2ve-…” string as the password at the encryption stage, the actual password being set by the ZIP program is effectively the SHA-1 hash of this string.

At the decryption stage, if you enter “Nev1r-G0nna-G2ve-…,” it will be hashed and compared against the previously stored password (which is the SHA-1 hash). However, entering the shorter “pkH8a0AqNbHcdw8GrmSp” password at the decryption stage will have the application directly compare this value to the stored password (which is, again, the SHA-1 hash).

The HMAC collisions subsection of the PBKDF2 article on Wikipedia offers more technical insight for interested readers.

“PBKDF2 has an interesting property when using HMAC as its pseudo-random function. It is possible to trivially construct any number of different password pairs with collisions within each pair,” notes the entry.

“If a supplied password is longer than the block size of the underlying HMAC hash function, the password is first pre-hashed into a digest, and that digest is instead used as the password.”

But, the fact that there are now two possible passwords to the same ZIP does not represent a security vulnerability, “as one still must know the original password in order to generate the hash of the password,” the entry further explains.
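
This property is easy to demonstrate with Python’s hashlib, whose pbkdf2_hmac function behaves the same way. Here is a minimal sketch, with an illustrative salt and iteration count rather than the ZIP format’s actual parameters:

    import hashlib

    long_pw = (b"Nev1r-G0nna-G2ve-Y8u-Up-N5v1r-G1nna-"
               b"Let-Y4u-D1wn-N8v4r-G5nna-D0sert-You")

    # HMAC-SHA1 pre-hashes keys longer than its 64-byte block size, so the
    # raw SHA-1 digest of the long password acts as an equivalent password
    # (per the article, these digest bytes read as b"pkH8a0AqNbHcdw8GrmSp")
    short_pw = hashlib.sha1(long_pw).digest()

    salt, iterations = b"illustrative-salt", 1000
    k1 = hashlib.pbkdf2_hmac("sha1", long_pw, salt, iterations)
    k2 = hashlib.pbkdf2_hmac("sha1", short_pw, salt, iterations)
    assert k1 == k2  # both passwords derive the identical key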

Arriving at a perfect password

An interesting key aspect to note here is that the ASCII representation of a SHA-1 hash is not necessarily alphanumeric.

In other words, let’s assume we had chosen the following password for our ZIP file during this experiment. The password is longer than 64 bytes:

Bl33pingC0mputer-Sh0w-M3-H0W-t0-pR0Duc3-an-eNcRyPT3D-ZIP-File-in-the-simplest-way

Its SHA-1 checksum comes out to be: bd0b8c7ab2bf5934574474fb403e3c0a7e789b61

And the ASCII representation of this checksum looks like a gibberish set of bytes, not nearly as elegant as the alternative password generated by the researcher for his experiment:

ASCII representation of the SHA-1 hash of the Bl33pingC0mputer… password

BleepingComputer asked Sharoglazov how he was able to pick a password whose SHA-1 checksum has an ASCII representation that is a clean, alphanumeric string.

“That’s why hashcat was used,” the researcher tells BleepingComputer.

By using a slightly modified version of the open source password recovery tool, hashcat, the researcher generated variations of the “Never Gonna Give You Up…” string using alphanumeric characters until he arrived at a perfect password.

“I tested Nev0rNev1rNev2r and so on… And I found the password I need.”

And, that’s how Sharoglazov arrived at a password that roughly reads like “Never Gonna Give You Up…,” but the ASCII representation of its SHA-1 checksum is one neat alphanumeric string.
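
Conceptually, the search looks something like the sketch below. The template and the number of varied positions are simplified for illustration; a fully alphanumeric 20-byte digest is roughly a one-in-two-trillion event per candidate, which is why a GPU-driven tool like hashcat is needed in practice:

    import hashlib
    import string
    from itertools import product

    ALPHANUMERIC = (string.ascii_letters + string.digits).encode()

    # Hypothetical, shortened template: vary the digits in the phrase until
    # the raw SHA-1 digest contains only alphanumeric bytes
    TEMPLATE = "Nev{}r-G{}nna-G{}ve-Y{}u-Up"

    for digits in product(string.digits, repeat=4):
        candidate = TEMPLATE.format(*digits)
        digest = hashlib.sha1(candidate.encode()).digest()
        if all(b in ALPHANUMERIC for b in digest):
            print(candidate, "->", digest.decode("ascii"))
            break
    # Note: with only 10**4 candidates, this toy loop will almost
    # certainly find nothing; it only illustrates the search logic.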

For most users, creating a password-protected ZIP file with a password of their choice should be sufficient, and that is all they need to know.

But should you decide to get adventurous, this experiment provides a peek into one of the many mysteries surrounding encrypted ZIPs, like having two passwords to your guarded secret.

Source :
https://www.bleepingcomputer.com/news/security/an-encrypted-zip-file-can-have-two-correct-passwords-heres-why/

Enhance Security and Control Access to Critical Assets with Network Segmentation

Before COVID-19, most corporate employees worked in offices, using computers connected to the internal network. Once users connected to these internal networks, they typically had access to all the data and applications without many restrictions. Network architects designed flat internal networks where the devices in the network connected with each other directly or through a router or a switch.

But while flat networks are fast to implement and have fewer bottlenecks, they’re extremely vulnerable — once compromised, attackers are free to move laterally across the internal network.

Designing flat networks at a time when all the trusted users were on the internal networks might have been simpler and more efficient. But times have changed: Today, 55% of those surveyed say they work more hours remotely than at the physical office. Due to the rapid evolution of the way we work, corporations must now contend with:

  • Multiple network perimeters at headquarters, in remote offices and in the cloud
  • Applications and data scattered across different cloud platforms and data centers
  • Users who expect the same level of access to internal networks while working remotely

While this is a complex set of issues, there is a solution. Network segmentation, when implemented properly, can unflatten the network, allowing security admins to compartmentalize internal networks and provide granular user access.

What is network segmentation?

The National Institute of Standards and Technology (NIST) offers the following definition for network segmentation: “Splitting a network into sub-networks; for example, by creating separate areas on the network which are protected by firewalls configured to reject unnecessary traffic. Network segmentation minimizes the harm of malware and other threats by isolating it to a limited part of the network.”

The main principle of segmentation is making sure that each segment is protected from the other, so that if a breach does occur, it is limited to only a portion of the network. Segmentation should be applied to all entities in the IT environment, including users, workloads, physical servers, virtual machines, containers, network devices and endpoints.

Connections between these entities should be allowed only after their identities have been verified and proper access rights have been established. The approach of segmenting with granular and dynamic access is also known as Zero Trust Network Access (ZTNA).

As shown in Figure 1, instead of a network with a single perimeter, inside which entities across the network are freely accessible, a segmented network environment features smaller network zones with firewalls separating them.

Achieving network segmentation

Implementing segmentation may seem complex, and figuring out the right place to start might seem intimidating. But by following these steps, it can be achieved rather painlessly.

1. Understand and Visualize

Network admins need to map all the subnets and virtual local area networks (VLANs) on the corporate networks. Visualizing the current environment provides a lot of value right away in understanding both how to and what to segment.

At this step, network and security teams also need to work together to see where security devices such as firewalls, IPS and network access controls are deployed in the corporate network. An accurate map of the network and a complete inventory of security systems will help tremendously in creating efficient segments.

2. Segment and Create Policies

The next step in the process is to create the segments themselves: Large subnets or zones should be segmented, monitored and protected with granular access policies. Segments can be configured based on a variety of categories, including geo-location, corporate departments, server farms, data centers and cloud platforms.

After defining segments, create security policies and access-control rules between those segments. These policies can be created and managed using firewalls, VLANs or secure mobile access devices. In most cases, security admins can simply use existing firewalls or secure mobile access solutions to segment and create granular policies. It’s best for administrators to ensure that segments and policies are aligned with business processes.
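
As a conceptual illustration of what such policies boil down to, here is a minimal Python sketch of a default-deny, zone-to-zone allow-list. The zone and service names are hypothetical, and a real deployment would express this in the firewall’s own rule language:

    from typing import NamedTuple

    class Rule(NamedTuple):
        src_zone: str
        dst_zone: str
        service: str

    # Hypothetical allow-list; any traffic not listed is denied by default
    ALLOW_RULES = {
        Rule("branch-office", "erp-servers", "https"),
        Rule("engineering", "git-servers", "ssh"),
        Rule("remote-users", "vpn-gateway", "tls"),
    }

    def is_allowed(src: str, dst: str, service: str) -> bool:
        """Default-deny check between network segments."""
        return Rule(src, dst, service) in ALLOW_RULES

    print(is_allowed("branch-office", "erp-servers", "https"))  # True
    print(is_allowed("branch-office", "git-servers", "ssh"))    # False: lateral path denied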

3. Monitor and Enforce Policies

After creating segments and policies, take some time to monitor the traffic patterns between those segments. The first time the security policies are enforced, they may disrupt regular business functions. So it’s best to apply policies in non-blocking or alert mode and monitor for false positives or other network errors.

Next, it’s time to enforce the policies. Once the individual policies are pushed, each segment is protected from cyber attackers’ lateral movements and from internal users trying to reach resources they are not authorized to use. It’s a good idea to continuously monitor and apply new policies as needed whenever there are changes to networks, applications or user roles.

Policy-based segmentation: A way forward for distributed networks

What today’s enterprises require is a way to deliver granular policy enforcement to multiple segments within the network. Through segmentation, companies can protect critical digital assets against lateral attacks and provide secure access to remote workforces.

The good news is that, with the power and flexibility of a next-generation firewall (NGFW) and with other technologies such as secure mobile access and ZTNA solutions, enterprises can safeguard today’s distributed networks by enforcing policy-based segmentation.

SonicWall’s award-winning hardware and advanced technologies include NGFWs, Secure Mobile Access and Cloud Edge Secure Access. These solutions are designed to allow any network — from small businesses to large enterprises, from the data center to the cloud — to segment and achieve greater protection with SonicWall.

Source :
https://blog.sonicwall.com/en-us/2022/06/enhance-security-and-control-access-to-critical-assets-with-network-segmentation/

Oil and Gas Cybersecurity: Recommendations Part 3

The oil and gas industry continues to be a prime target for threat actors who want to disrupt operations and wreak havoc. In part two, we discussed various threats that can affect an oil and gas company, including ransomware, DNS tunneling, and zero-day exploits. For the final installment of the series, we’ll investigate the APT33 case study—a group generally considered to be responsible for many spear-phishing campaigns targeting the oil industry and its supply chain. We’ll also lay out several recommendations to better strengthen the cybersecurity framework of oil and gas companies.

APT33: a case study

The group APT33 is known to target the oil supply chain, the aviation industry, and military and defense companies. Our team observed that the group has had some limited success in infecting targets related to oil, the U.S. military, and U.S. national security. In 2019, we found that the group infected a U.S. company providing support services to national security.

APT33 has also compromised oil companies in Europe and Asia. A large oil company with a presence in the U.K. and India had concrete APT33-related infections in the fall of 2018. Some of the IP addresses of the oil company communicated with the C&C server times-sync.com, which hosted a so-called Powerton C&C server from October to December 2018, and then again in 2019. A computer server in India owned by a European oil company communicated with a Powerton C&C server used by APT33 for at least three weeks in November and December 2019. We also observed that a large U.K.-based company offering specialized services to oil refineries and petrochemical installations was likely compromised by APT33 in the fall of 2018.

Read more: Obfuscated APT33 C&Cs Used for Narrow Targeting

Table 1. Known job offering campaigns of APT33

APT33’s best-known infection technique has been using social engineering through emails. It has been using the same type of lure for several years: a spear-phishing email containing a job opening offer that may look quite legitimate. There have been campaigns involving job openings in the oil and aviation industries.

The email contains a link to a malicious .hta file, which would attempt to download a PowerShell script. This would then download additional malware from APT33 so that the group could gain persistence in the target network. Table 1 lists some of the campaigns we were able to recover from data based on feedback from the Trend Micro™ Smart Protection Network™ infrastructure. The company names in the campaigns are not necessarily targets in the campaign, but they are usually part of the social lure used in the campaigns.

Figure 1. PHP mailer script probably used by APT33. The script was hosted on the personal website of a European senator who had a seat on his nation’s defense committee.

The job opening social engineering lures are used for a reason: Some of the targets actually get legitimate email notifications about job openings for the same companies used in the spear-phishing emails. This means that APT33 has some knowledge of what their targets are receiving from legitimate sources.

APT33 is known to be related to the destructive malware called StoneDrill and is possibly related to attacks involving Shamoon, although we don’t have solid evidence for the latter.

Besides the relatively aggressive attacks of APT33 on the supply chain, we found that APT33 has been using several C&C domains, listed in Table 2, for small botnets composed of about a dozen bots each. It appears that APT33 has taken special care to make tracking more difficult.

The C&C domains are hosted on cloud-hosted proxies. These proxies relay URL requests from the infected bots to back-ends at shared web servers that may host thousands of legitimate domains. These back-ends are protected with special software that detects unusual probing from researchers. The back-ends report bot data back to a dedicated aggregator and bot control server on a dedicated IP address. The APT33 actors connect to these aggregators via a private VPN with exit nodes that are changed frequently. Using these VPN connections, the APT33 actors issue commands and retrieve data from the bots.

Figure 2. Schema showing the multiple obfuscation layers used by APT33

Regarding APT33, we were able to track private VPN exit nodes for more than a year. We could cross-relate the exit nodes with admin connections to servers controlled by APT33. It appears that these private VPN exit nodes are also used for reconnaissance of networks that are relevant to the supply chain of the oil industry. More concretely, we witnessed IP addresses that we believe are under the control of APT33 doing reconnaissance on the networks of an oil exploration company in the Middle East, an oil company in the U.S., and military hospitals in the Middle East.

Table 2. IP addresses associated with a few private VPN exit nodes connected to APT33

Table 2 shows a list of IP addresses that have been used by APT33. The IP addresses are likely to have been used for a longer time than the time frames indicated in the table. The data can be used to determine whether an organization was on the radar of APT33 for, say, reconnaissance or concrete compromises.

Security recommendations

Here are several general tips that may help companies in the oil and gas industry combat threat actors:

  • Perform data integrity checks
    While there may not be an immediate need for encrypting all data communications in an oil and gas company, there is some merit in taking steps to ensure data integrity. For example, for the information coming from the different sensors at oil production sites, the risk of tampering with oil production can be reduced by at least making sure that all data communication is signed. This can greatly decrease the risk of man-in-the-middle attacks where sensor values could be changed or where a third party could alter or inject commands without authorization. (A minimal signing sketch follows this list.)
  • Implement DNSSEC
    We have noticed that many oil and gas companies don’t have Domain Name System Security Extensions (DNSSEC) implemented. DNSSEC means digitally signing the DNS records of a domain name at the authoritative nameserver with a private key. DNS resolvers can check whether DNS records are properly signed.
  • Lock down domain names
    Domain names can potentially be taken over by a malicious actor, for example, through an unauthorized change in the DNS settings. To prevent this, it is important to use only a DNS service provider that requires two-factor authentication for any changes in the DNS settings of the domains of an organization.
  • Monitor SSL certificates
    For the protection of a brand name and for early warnings of possible upcoming attacks, it is important to monitor newly created SSL certificates that have certain keywords in the Common Name field.
  • Look out for business email compromise
    Protection against business email compromise (BEC) is possible through spam filtering, user training for spotting suspicious emails, and AI techniques that will recognize the writing styles of individuals in the company.
  • Require at least two-factor authentication for webmail
    A webmail hostname might get DNS-hijacked or hacked because of a vulnerability in the webmail software. And webmail can also be attacked with credential-phishing attacks; a well-prepared credential-phishing attack can be quite convincing. The risk of using webmail can be greatly reduced by requiring two-factor authentication (preferably with a physical key) and corporate VPNs for webmail access.
  • Hold employee training sessions for security awareness
    It is important to have regular training sessions for all employees. These sessions may include awareness training on credential phishing, spear phishing, social media use, data management, privacy policies, protecting intellectual property, and physical security.
  • Monitor for data leaks
    Watermarks make it easier to find leaked documents since the company can constantly monitor for these specific marks. Some companies specialize in finding leaked data and compromised credentials; through active monitoring for leaks, potential damage to the company can be mitigated earlier.
  • Keep VPN software up to date
    Several weaknesses in VPN software have been found in recent years. For various reasons, some companies do not update their VPN software immediately after patches become available. This is particularly dangerous since APT actors start to probe for vulnerable VPN servers (including those of oil companies) as soon as a vulnerability becomes public.
  • Review the security settings of cloud services
    Cloud services can boost efficiency and reduce cost, but companies sometimes forget to effectively use all security measures offered by cloud services. Some services help companies with cloud infrastructure security.
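
Picking up the first recommendation above, here is a minimal Python sketch of signing sensor readings so that tampering is detectable. Key provisioning and transport are out of scope, and all names and values are hypothetical:

    import hashlib
    import hmac
    import json

    SHARED_KEY = b"key-provisioned-per-sensor"  # hypothetical per-sensor key

    def sign_reading(reading: dict) -> dict:
        payload = json.dumps(reading, sort_keys=True).encode()
        signed = dict(reading)
        signed["mac"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return signed

    def verify_reading(signed: dict) -> bool:
        reading = {k: v for k, v in signed.items() if k != "mac"}
        payload = json.dumps(reading, sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(signed.get("mac", ""), expected)

    msg = sign_reading({"sensor": "wellhead-12", "pressure_kpa": 5170})
    print(verify_reading(msg))   # True
    msg["pressure_kpa"] = 100    # a man-in-the-middle alters the value
    print(verify_reading(msg))   # False: the MAC no longer matches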

To learn more about digital threats that the oil and gas industry faces, download our comprehensive research here.

Source :
https://www.trendmicro.com/en_us/research/22/h/oil-gas-cybersecurity-recommendations-part-3.html

Oil and Gas Cybersecurity: Threats Part 2

The Russia-Ukraine war has posed threats to the oil and gas industry. Our team even uncovered several alleged attacks perpetrated by various groups during research in March 2022. In part one, we showed how a typical oil and gas company works and why it can be susceptible to cyberattacks. We also explained different threats that can disrupt its operation.

In part two, let’s continue identifying threats that pose great risk to an oil and gas company.

Threats

  • Ransomware
    Ransomware remains a serious threat to oil and gas companies. Targeting individuals using ransomware is fairly easy for cybercriminals, even for those with a lower level of computer knowledge. The easiest business model consists of subscribing to ransomware-as-a-service (RaaS) offers on underground cybercrime marketplaces. Any fraudster can buy such a service and start delivering ransomware to thousands of individuals’ computers by using exploit kits or spam emails.

    During our research, we found that a U.S. oil and natural gas company was hit by ransomware, infecting three computers and its cloud backups. The computers that were targeted contained essential data for the company, and the estimated total loss was more than US$30 million. While we do not have additional details on this case, we believe the attackers did plan this attack carefully and were able to target a few strategic computers rather than hitting the company with a massive infection.

    Read more: Cuba Ransomware Group’s New Variant Found Using Optimized Infection Techniques
  • Malware
    Various kinds of malware serve different purposes, functioning and communicating differently between the infected computers and the C&C servers. Compromising a network and planting malware inside it is just the initial stage for attackers. For several reasons, the malware can be detected after a while, or even deleted automatically by an antivirus or security solution.

    To avoid being kicked off the network when the only available access is via their malware, attackers generally choose to update their malware regularly. And if possible, they use different malware families so that they have more than one way to access the compromised network.
  • Webshells
    Webshells are tiny files, generally written in PHP, ASP, or JavaScript, that have been fraudulently uploaded to a web server belonging to a targeted entity. An attacker just needs to browse to one to get access to the web server. Most webshells provide file upload and download operations, a command line (shell), and database dumping.

    Threat actors sometimes utilize webshells to ease their operations. They can use webshells to:
    • Download or upload files to the compromised web server;
    • Run other tools (such as credential stealers);
    • Maintain persistence on the compromised infrastructure;
    • Bounce to other servers and move on with more compromises; or
    • Steal information.
  • Cookies
    Cookies are small files sent from web servers and stored in the browser of an internet user. They serve different legitimate purposes, such as allowing a browser to know if the user is logged in or not (as in the case of authentication cookies) or storing stateful information (like items in shopping carts).

    Some variants of the backdoor BKDR64_RGDOOR used cookies to handle communications between the malware and its C&C server. They used the string “RGSESSIONID=” followed by encrypted content. Careful cookie field monitoring in HTTP traffic can help detect this kind of activity.
  • DNS tunnelling
    The most common way for malware to communicate with its C&C server is by using the HTTP or HTTPS protocol. However, some attackers make their malware communicate via DNS tunnelling. In this context, DNS tunnelling exploits the DNS protocol to transmit data between the malware and its controller, via DNS queries and response packets.

    The DNS client software (the malware) sends data, generally encoded in some way, prepended as the hostname of the DNS query (a short sketch of this encoding follows this list).
  • Email as communication channel
    An APT attacker might want to use this method mostly for two reasons: email services, especially external online services, might be less monitored than other services in the compromised network, and it might provide an additional level of anonymity depending on the email service provider that is used.
  • Zero-day exploits
    More often than not, attackers use known exploits and only use zero-day exploits when really necessary. It doesn’t take much effort to compromise most networks, gain access and exfiltrate information with standard malware and tools.

    The Stuxnet case is a solid and interesting example, using four different zero-day exploits. No other known attack has been seen exploiting so many unpatched and unknown vulnerabilities — it has shown an extraordinary level of sophistication.

    Two years before Stuxnet, another malware from the Equation group was using two of the four zero-day exploits that Stuxnet used. The Equation group targeted many different sectors, including oil and gas, energy, and nuclear research. It showed advanced technical capabilities, including infecting the hard drive firmware of several major hard drive manufacturers, which had seemed impossible without the firmware source code.
  • Mobile phone malware
    There has been an increase in the use of mobile phone malware in recent years. It is typically used for cybercrime, but can also be utilized for espionage.

    The Reaper threat actor has developed Android malware, which we detect as AndroidOS_KevDroid. This malware has several functionalities, including starting a video or audio recording, downloading the address book from the compromised phone, fetching specific files, and reading SMS messages and other information from the phone.

    The MuddyWater APT group has used several variants of Android malware (AndroidOS_Mudwater.HRX, AndroidOS_HiddenApp.SAB, AndroidOS_Androrat.AXM, and .AXMA) posing as legitimate applications. These malware variants can completely take control of an Android phone, spread infecting links via SMS, and steal contacts, SMS messages, screenshots, and call logs.
  • Bluetooth
    Bluetooth can also be exploited by threat actors. One of the most interesting recent discoveries in this regard is the USB Bluetooth Harvester. It is very uncommon, but it highlights the need for organizations to stay up to date on threat actor developments.
  • Cloud services
    Attackers can use legitimate cloud services to render the traffic between malware and the C&C server undetectable. For example, the Slub malware has been used for APT attacks. While it hasn’t affected the industry just yet, it bears mentioning because its use of GitHub (a software development platform) and Slack (a messaging service) for C&C communication can easily be copied by other threat actors.
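
To make the DNS tunnelling mechanism described above concrete, here is a minimal Python sketch of the encoding step (the domain is hypothetical). The long, random-looking subdomains it produces are exactly what defenders should watch for in DNS logs:

    import base64

    C2_DOMAIN = "attacker-controlled.example"  # hypothetical C&C domain

    def to_dns_queries(data: bytes, chunk_len: int = 32) -> list:
        # Base32 keeps the payload within the character set allowed in hostnames
        encoded = base64.b32encode(data).decode().rstrip("=").lower()
        return [encoded[i:i + chunk_len] + "." + C2_DOMAIN
                for i in range(0, len(encoded), chunk_len)]

    # Each query smuggles one chunk out; DNS responses can carry commands back
    for query in to_dns_queries(b"host=ws-041;user=jsmith"):
        print(query)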

In the final installment of our series, we’ll look at APT33—a group generally considered responsible for many spear-phishing campaigns targeting the oil industry and its supply chain. We’ll also discuss recommendations that oil and gas companies can utilize to further improve their cybersecurity.

To learn more about digital threats that the oil and gas industry faces, download our comprehensive research here.

Source :
https://www.trendmicro.com/en_us/research/22/h/oil-gas-cybersecurity-threats-part-2.html

Oil and Gas Cybersecurity: Industry Overview Part 1

The oil and gas industry is no stranger to major cybersecurity attacks that attempt to disrupt operations and services. Most of the best understood attacks against the oil industry are initial attempts to break into the corporate networks of oil companies.

Geopolitical tensions can cause major changes not only in physical space, but also in cyberspace. In March 2022, our researchers observed several alleged cyberattacks perpetrated by different groups. It has now become more important than ever to identify potential threats that may disrupt oil and gas companies, especially in these times when tensions are high.

Our survey also found that oil and gas companies have experienced supply disruptions due to cyberattacks. On average, a disruption lasted six days, with financial damage of approximately US$3.3 million. And because disruptions in the oil and gas industry tend to last longer, the damage it suffers tends to be much larger, too.

It is important to take an in-depth look at cyberattacks that can disrupt oil and gas companies because they affect operations and profit in a major way. By looking closer at the infrastructure of an oil and gas company and identifying threats that can disrupt operation, a company can seal off loopholes and improve its cybersecurity framework.

The Infrastructure of a Typical Oil and Gas Company

An oil and gas company’s product chain usually has three parts—upstream, midstream, and downstream. Processes related to oil exploration and production are called upstream; midstream refers to the transportation and storage of crude oil through pipelines, trains, ships, or trucks; and downstream covers the production of end products. Cyber risks are present in all three categories, but for midstream and upstream, there are few publicly documented incidents.

Generally, an oil company has production sites, where crude oil is extracted from wells; tank farms, where oil is stored temporarily; and a transportation system to bring the crude oil to a refinery. Transportation may include pipelines, trains, and ships. After processing in the refinery, different end products like diesel fuel, gasoline, and jet fuel are transported to tank farms, and the products are later shipped to customers.

A gas company also typically has production sites and a transportation system such as railroads, ships, and pipelines. However, it needs compressor stations where the natural gas is compressed before transport. The natural gas is then transported to another plant that separates the different hydrocarbon components of the natural gas, like LPG and cooking gas.

The intricate processes of oil and gas companies mean they require constant monitoring to ensure optimal performance measurement, performance improvement, quality control and safety.

Monitoring metrics include temperature, pressure, chemical composition, and detection of leaks. Some oil and gas production sites are in very remote locations where the weather can be extreme. For these sites, communication of the monitored metrics over the air, fixed (optic or copper) lines, or satellite is important. The systems of an oil and gas company are typically controlled by software and can be compromised by an attacker.

Threats

There are several threats that oil and gas companies should be aware of. The biggest threats to the industry are those that have a direct negative impact on the production of their end products. In addition, espionage is something that such companies need to defend themselves against, too.

In our in-depth research, the expert team at Trend Micro identified the following threats that can compromise oil and gas companies:

  • Sabotage
    In the context of the oil and gas industry, sabotage can be done by changing the behavior of software, deleting or wiping specific content to disrupt company activity, or deleting or wiping as much content as possible on every accessible machine.

    Some examples of these kinds of sabotage operations have been reported broadly, the most famous being the Stuxnet case. Stuxnet was a piece of self-replicating malware that contained a very targeted and specific payload. Most infections of the worm were in Iran and analysis revealed that it was designed to exclusively target the centrifuge in the uranium enrichment facility of the Natanz Nuclear Plant in the country.
  • Insider threat
    In most cases, an insider is a disgruntled employee seeking revenge or wanting to make easy money by selling valuable data to competitors. This person can sabotage operations. They can alter data to create problems, delete or destroy data from corporate servers or shared project folders, steal intellectual property, and leak sensitive documents to third parties.

    Defense against insider threats is very complex since insiders generally have access to a lot of data. An insider also does not need months to know the internal network of the company — the insider probably already knows the inner workings of the organization.
  • Espionage and data theft
    Data theft and espionage can be the starting point of a larger destructive attack. Attackers often need specific information before attempting further action. Obtaining sensitive data like well drilling techniques, data on suspected oil and gas reserves, and special recipes for premium products can also translate to monetary gain for attackers.
  • DNS hijacking
    DNS hijacking is a form of data theft used by advanced attackers. The objective is to gain access to the corporate VPN network or corporate emails of governments and companies. We have seen several oil companies being targeted by advanced attackers who probably have certain geopolitical goals in mind.

    In DNS hijacking, the DNS settings of a domain name are modified by an unauthorized third party. The third party can, for instance, add an entry to the zone file of a domain or alter the resolution of one or more of the existing hostnames. The simplest things the attacker can do are committing vandalism (defacement), leaving a message on the hijacked website, and making the website unavailable. This will usually be noticed quickly, and the result may just be reputational damage.
  • Attacks on Webmail and Corporate VPN Servers
    While webmail and file-sharing services have become vital tools for accessing emails and important documents on the go, these services can also increase a company’s attack surface.

    For instance, a webmail hostname might get DNS-hijacked or hacked because of a vulnerability in the webmail software. Webmail and file-sharing and collaboration platforms can also be compromised in credential-phishing attacks.

    A well-prepared credential-phishing attack can be quite convincing, as when an actor registers a domain name that resembles the legitimate webmail hostname, creates a valid SSL certificate, and chooses the targets within an organization carefully. The risk of webmail and third-party file-sharing services can be greatly reduced by requiring two-factor authentication (preferably with a physical key) and corporate VPN access to these services.
  • Data leaks
    Data leaks have always been problematic. But the oil and gas industry is more susceptible to these threats because leaked information can be quite beneficial to a competitor. Data leaks can also cause substantial damage to a company’s reputation.

    During our research, we easily found dozens of sensitive documents related to the oil industry online. One way of finding these documents is by using specially crafted Google queries, called Google Dorks (illustrative examples follow this list).

    Another way to find such content is to hunt for data on public services like Pastebin, an online service that allows anyone to copy and paste any text-based content and store it there, privately or publicly. Another source of data is public sandboxes meant for the analysis of suspicious files. Users can mistakenly send legitimate documents to these sandboxes for analysis. Once uploaded, these documents can be parsed or downloaded by third parties.
  • External emails
    In general, emails are well-protected inside companies. However, external emails cannot be controlled the same way. Employees regularly send emails to external addresses, hence some sensitive internal content ends up outside the company’s purview. Even worse, sensitive information can be copied to unsecured backup systems or stored locally on personal computers without standard corporate security protocols, which makes it easier for attackers to get hold of the information. Once a computer is compromised, an attacker can get the emails and use them in different ways to harm a company. For example, an actor could leak them on public servers or services like Pastebin.
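
As purely illustrative examples of the Google Dork idea mentioned above (these are generic patterns, not the actual queries used in the research, and the domain is hypothetical), such a query combines file-type and keyword operators to surface documents that were never meant to be public:

    filetype:pdf "internal use only" "drilling report"
    filetype:xls site:example-oilco.com "password"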

In part two of our series, we look at additional threats that can compromise oil and gas companies, such as ransomware, malware, DNS tunneling, and zero-day exploits.

To learn more about digital threats that the oil and gas industry faces, download our comprehensive research here.

Source :
https://www.trendmicro.com/en_us/research/22/h/oil-gas-cybersecurity-part-1.html