Critical PHP flaw exposes QNAP NAS devices to RCE attacks

QNAP has warned customers today that some of its Network Attached Storage (NAS) devices (with non-default configurations) are vulnerable to attacks that would exploit a three-year-old critical PHP vulnerability allowing remote code execution.

“A vulnerability has been reported to affect PHP versions 7.1.x below 7.1.33, 7.2.x below 7.2.24, and 7.3.x below 7.3.11. If exploited, the vulnerability allows attackers to gain remote code execution,” QNAP explained in a security advisory released today.

“To secure your device, we recommend regularly updating your system to the latest version to benefit from vulnerability fixes.”

The Taiwanese hardware vendor has already patched the security flaw (CVE-2019-11043) for some operating system versions exposed to attacks (QTS 5.0.1.2034 build 20220515 or later and QuTS hero h5.0.0.2069 build 20220614 or later).

However, the bug affects a wide range of devices running:

  • QTS 5.0.x and later
  • QTS 4.5.x and later
  • QuTS hero h5.0.x and later
  • QuTS hero h4.5.x and later
  • QuTScloud c5.0.x and later

QNAP customers who want to update their NAS devices to the latest firmware automatically need to log on to QTS, QuTS hero, or QuTScloud as administrator and click the “Check for Update” button under Control Panel > System > Firmware Update.

You can also manually upgrade your device after downloading the update from the QNAP website at Support > Download Center.

QNAP devices targeted by ransomware

Today’s warning comes after the NAS maker warned its customers on Thursday to secure their devices against active attacks deploying DeadBolt ransomware payloads.

BleepingComputer also reported over the weekend that ech0raix ransomware has started targeting vulnerable QNAP NAS devices again, according to sample submissions on the ID Ransomware platform and multiple reports from users who had their systems encrypted.

Until QNAP issues more details on ongoing attacks, the infection vector used in these new DeadBolt and ech0raix campaigns remains unknown.

While QNAP is working on patching the CVE-2019-11043 PHP vulnerability in all vulnerable firmware versions, making sure your device is not exposed to the Internet is an easy way to block incoming attacks.

As QNAP has advised in the past, users with Internet-exposed NAS devices should take the following measures to prevent remote access:

  • Disable the Port Forwarding function of the router: Go to the management interface of your router, check the Virtual Server, NAT, or Port Forwarding settings, and disable the port forwarding setting of the NAS management service port (ports 8080 and 443 by default).
  • Disable the UPnP function of the QNAP NAS: Go to myQNAPcloud on the QTS menu, click the “Auto Router Configuration,” and unselect “Enable UPnP Port forwarding.”

QNAP also provides detailed info on how to toggle off remote SSH and Telnet connections, change the system port number, change device passwords, and enable IP and account access protection to further secure your device.


Update June 22, 08:45 EDT: After this story was published, QNAP’s PSIRT team updated the original advisory and told BleepingComputer that devices with default configurations are not impacted by CVE-2019-11043.

Also, QNAP said that the DeadBolt ransomware attacks are targeting devices running older system software (released between 2017 and 2019).

For CVE-2019-11043, described in QSA-22-20, to affect our users, there are some prerequisites that need to be met, which are:

  1. nginx is running, and
  2. php-fpm is running.

As we do not have nginx in our software by default, QNAP NAS are not affected by this vulnerability in their default state. If nginx is installed by the user and running, then the update provided with QSA-22-20 should be applied as soon as possible to mitigate associated risks.

We are updating our security advisory QSA-22-20 to reflect the facts stated above. Again we would like to point out that most QNAP NAS users are not affected by this vulnerability since its prerequisites are not met. The risk only exists when there is user-installed nginx present in the system.

We have also updated the story to reflect the new information provided by QNAP.

Source :
https://www.bleepingcomputer.com/news/security/critical-php-flaw-exposes-qnap-nas-devices-to-rce-attacks/

NSA shares tips on securing Windows devices with PowerShell

The National Security Agency (NSA) and cybersecurity partner agencies issued an advisory today recommending that system administrators use PowerShell to prevent and detect malicious activity on Windows machines.

PowerShell is frequently used in cyberattacks, mostly in the post-exploitation stage, but the security capabilities embedded in Microsoft’s automation and configuration tool can also help defenders with forensics, improve incident response, and automate repetitive tasks.

The NSA and cybersecurity centers in the U.S. (CISA), New Zealand (NZ NCSC), and the U.K. (NCSC-UK) have created a set of recommendations for using PowerShell to mitigate cyber threats instead of removing or disabling it, which would lower defensive capabilities.

“Blocking PowerShell hinders defensive capabilities that current versions of PowerShell can provide, and prevents components of the Windows operating system from running properly. Recent versions of PowerShell with improved capabilities and options can assist defenders in countering abuse of PowerShell.”

Lower risk for abuse

Reducing the risk of threat actors abusing PowerShell requires leveraging capabilities in the framework such as PowerShell remoting, which does not expose plain-text credentials when executing commands remotely on Windows hosts.

Administrators should be aware that enabling this feature on private networks automatically adds a new rule in Windows Firewall that permits all connections.

Customizing Windows Firewall to allow connections only from trusted endpoints and networks helps reduce an attacker’s chance for successful lateral movement.
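One hedged way to sketch that customization uses the built-in NetSecurity cmdlets; the rule group name and subnet below are placeholders, and rule display names vary by system and locale:

    # Hedged sketch: restrict the WinRM firewall rules to one trusted subnet.
    # "Windows Remote Management" and 10.0.10.0/24 are placeholder values;
    # verify the rule names on your system with Get-NetFirewallRule first.
    Get-NetFirewallRule -DisplayGroup "Windows Remote Management" |
        Get-NetFirewallAddressFilter |
        Set-NetFirewallAddressFilter -RemoteAddress "10.0.10.0/24"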

For remote connections, the agencies advise using the Secure Shell protocol (SSH), supported in PowerShell 7, to add the convenience and security of public-key authentication (a short SSH remoting sketch follows this list):

  • remote connections don’t need HTTPS with SSL certificates
  • no need for Trusted Hosts, as required when remoting over WinRM outside a domain
  • secure remote management over SSH without a password for all commands and connections
  • PowerShell remoting between Windows and Linux hosts
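As a rough sketch of what SSH-based remoting looks like in PowerShell 7 (the host names and user name below are placeholders):

    # Hedged sketch: PowerShell 7 remoting over SSH instead of WinRM.
    # server01.example.com and "admin" are placeholder values.
    Enter-PSSession -HostName server01.example.com -UserName admin

    # Run one command on several hosts over SSH:
    Invoke-Command -HostName server01.example.com, server02.example.com `
        -UserName admin -ScriptBlock { Get-Service sshd }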

Another recommendation is to restrict PowerShell operations with the help of AppLocker or Windows Defender Application Control (WDAC), setting the tool to function in Constrained Language Mode (CLM) and thus denying operations outside the policies defined by the administrator.

“Proper configuration of WDAC or AppLocker on Windows 10+ helps to prevent a malicious actor from gaining full control over a PowerShell session and the host.”
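A quick way to verify that a policy has placed a session in Constrained Language Mode is to inspect the session’s language mode:

    # Returns "FullLanguage" in an unrestricted session and
    # "ConstrainedLanguage" when a WDAC/AppLocker policy is enforced.
    $ExecutionContext.SessionState.LanguageMode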

Detecting malicious PowerShell use

Recording PowerShell activity and monitoring the logs are two recommendations that could help administrators find signs of potential abuse.

The NSA and its partners propose turning on features like Deep Script Block Logging (DSBL), Module Logging, and Over-the-Shoulder transcription (OTS).

The first two enable building a comprehensive database of logs that can be used to look for suspicious or malicious PowerShell activity, including hidden actions and the commands and scripts used in the process.

With OTS, administrators get records of every PowerShell input or output, which could help determine an attacker’s intentions in the environment.
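As a local-testing sketch, the first two features map to well-known policy registry keys; in production these settings are normally deployed through Group Policy rather than set by hand, and the commands below must run elevated:

    # Hedged sketch: enable Deep Script Block Logging and Module Logging
    # on a single machine; Group Policy is the usual deployment path.
    $base = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell"

    New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
    Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name EnableScriptBlockLogging -Value 1

    New-Item -Path "$base\ModuleLogging\ModuleNames" -Force | Out-Null
    Set-ItemProperty -Path "$base\ModuleLogging" -Name EnableModuleLogging -Value 1
    # Log all modules; narrow this list to reduce noise.
    Set-ItemProperty -Path "$base\ModuleLogging\ModuleNames" -Name "*" -Value "*"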

Administrators can use the table below to check the features that various PowerShell versions provide to help enable better defenses in their environments:

Security features present in PowerShell versions

The document the NSA released today states that “PowerShell is essential to secure the Windows operating system,” particularly the newer versions that did away with previous limitations.

When properly configured and managed, PowerShell can be a reliable tool for system maintenance, forensics, automation, and security.

The full document, titled “Keeping PowerShell: Security Measures to Use and Embrace” is available here [PDF].

Source :
https://www.bleepingcomputer.com/news/security/nsa-shares-tips-on-securing-windows-devices-with-powershell/

Real IT Pros Reveal Their Homelab Secrets

For many years, a home IT lab was a “requirement” for any budding IT Pro – you needed a place to test new software and learn. In some ways this requirement has lessened with the rise of cloud computing, but many of our great DOJO contributors continue to use a home lab setup. In this article, we’ll hear from them: what their setups are, why they built them, the choices they made along the way, and what they plan for the future.

Andy Syrewicze

Altaro/Hornetsecurity Technical Evangelist – Microsoft MVP

Why do you have a lab?

The main reason I’ve always maintained a lab is to keep my skills current. Not only does my lab allow me to fill knowledge gaps in existing technologies I work with, but it allows me to test new features, or work with other technologies I’ve never worked with before. In doing this I can make sure I’m effective with and knowledgeable about current and emerging technologies. Plus… it’s just fun as well =)

How did I source my home lab?

I researched other commonly used home lab equipment on the web, paired that with my working knowledge of the hardware industry, and settled on commodity SuperMicro gear that was cost-effective yet had some of the features I was looking for. Other bits and pieces I picked up over the years as needed. For example, I’ve recently been doing some work with Azure site-to-site VPNs and as such purchased a Ubiquiti firewall capable of pairing with an Azure VPN gateway.

What’s your setup?

I have a 2-node hyper-converged cluster that is running either Storage Spaces Direct, Azure Stack HCI, or VMware vSAN at any given time.

Currently, each node has:

  • 1 x 6-core Intel Xeon CPU
  • 32GB of Memory (Soon to be upgraded to 64GB)
  • 4 x 1TB HDDs for Capacity Storage
  • 2 x 500GB NVMEs for Cache
  • 1 x 250GB SSD for the host Operating System disk
  • 1 x Intel i350 1Gbps Quad Port Ethernet Adapter for management and compute traffic
  • 1 x Dual port 10Gbps Mellanox Connect-X 3 for east/west storage traffic

Additionally, my physical lab has:

  • 1 Cyberpower UPS with about 1-hour runtime in case of power outages
  • 1 ReadyNAS 316 for backup storage with 4 x 1TB HDDs
  • 1 Ubiquiti UDM Pro for firewalling and layer-3 routing
  • 2 Ubiquiti WAPs for Wireless access in the house
  • 2 NetGear ProSAFE switches wired in a redundant capacity

On top of that, I do pair some Azure cloud resources with my lab and send private traffic over my site-to-site VPN between my UDM-Pro and my Azure vNet. Services running in the cloud include:

  • 1 x IaaS VM with AD Domain Services running on it
  • 1 x storage account for Azure Files storage
  • 1 x storage account for blob offsite backup storage
  • 1 x container in Azure container instance running a Minecraft Server for my son and his friends (HIGHLY critical workload I know…)
  • Some basic Azure Arc services (been slowly working on this over the last few months)

What services do you run and how do they interact with each other?

I mostly run virtualized workloads on the on-prem cluster. This is typically VMs, but I’ve started tinkering a bit with containers and Azure Kubernetes Service. The cluster also runs VMs for AD/DNS, DHCP, Backup/DR, File-Service and a few other critical gaming workloads for the end-users in the house! The cloud resources also have backup AD/DNS components, file storage, and offsite storage for the on-prem backups. I also use Azure for the occasional large VM that I don’t have the resources on-prem to run.

What do you like and don’t like about your setup?

I’ll start with the positive. I really like that my lab is hyper-converged as well as hybrid-cloud, with resources in Azure accessed via VPN.

There are two things I’d like to change about my setup:

  • More memory for the compute nodes. When running VMware vSAN, vSAN itself and vCenter (required for vSAN) consume about 48GB of memory. This doesn’t leave much memory left over for VMs. Thankfully S2D and Azure Stack HCI don’t have this issue. Either way, memory is my next upgrade coming soon.
  • Upgraded Mellanox Cards. Don’t get me wrong, the Connect-X 3s were amazing for their time, but they are starting to get quite outdated. More recent Connect-X cards would be preferred and better supported, but there certainly is a cost associated with them.

What does your roadmap look like?

As mentioned above I’m likely to add more memory soon, and potentially upgrade my storage NICs. Additionally, I’d like to add a 3rd node at some point but that is quite a ways down the line.

Any horror stories to share?

Not really. I had one situation where I was away from the house on a work trip and the cluster rebooted due to an extended power outage. The OpenSM service, which runs the subnet manager for the storage network between the direct-connected Mellanox cards, didn’t start, so the storage network never came online. This meant that the core services never came online for the house. Thankfully, the VPN to Azure remained online and things in the house were able to use my Azure IaaS hosted Domain Controller for DNS resolution until I got home.

Eric Siron

Senior System Administrator – Microsoft MVP

You may know Eric as a long-time DOJO contributor whose first articles for this site were written on stone tablets. He knows more about the inner workings of Hyper-V than anyone else I know.

All the technical articles that I write depend on first-hand tests and screenshots. My home lab provides the platform that I need while risking no production systems or proprietary data. Like the small business audience that I target, I have a small budget and long refresh cycles. It contained no cutting-edge technology when I originally obtained it, and it has fallen further behind in its four years of use. However, it still serves its purpose admirably.

Component Selection Decisions

Tight budgets lead to hard choices. Besides the cost constraint, I had to consider that my design needed to serve as a reproducible model. That ruled out perfectly viable savings approaches such as secondhand, refurbished, or clearance equipment. So, I used only new, commonly available, and inexpensive parts.

Architectural Design Decisions

Even on a budget, I believe that organizations need a strong computing infrastructure. To meet that goal, I designed a failover cluster with shared storage. As most of the items that I used now have superior alternatives at a similar or lower price, I will list only generic descriptions:

  • 2x entry-level tower server-class computers with out-of-band module
    • 16 GB RAM
    • 2x small internal drives
    • 2x 2-port gigabit adapters
    • 1 port on each adapter for virtual networks
    • 1 port on each adapter for SMB and iSCSI
  • 1x entry-level tower server-class computer (as shared storage)
    • 8 GB RAM
    • 4x large internal drives
    • 2 additional gigabit adapters for SMB and iSCSI
  • 24-port switch
  • Battery backup

All the technical articles that I have written in the last few years involved this lab build in some fashion.

Lab Configuration and Usage

Since the first day, I have used essentially the same configuration.

The two towers with an out-of-band module run Windows Server with Hyper-V and belong to a cluster. Each one hosts one of the lab’s domain controllers on mirrored internal storage.

The single tower with the large drive set acts as shared storage for the cluster. The drives are configured in a RAID-5. Also, because this is a lab, it contains virtual machine backups.

I generally do not integrate cloud services with my lab, primarily because a lot of small businesses do not yet have a purpose for integration between on-premises servers and public clouds. I do use basic services that enhance the administrative quality of life without straining the budget, such as Azure Active Directory.

Lab Maintenance, Management, and Monitoring

Whenever possible and practical, I use PowerShell to manage my lab. When graphical tools provide better solutions, I use a mix of Windows Admin Center and the traditional MMC tools (Hyper-V Manager, Failover Cluster Manager, Active Directory Users and Computers, etc.). For monitoring, I use Nagios with alerts through a personal e-mail account. I back up my virtual machines with Altaro VM Backup.

Aside from Altaro, none of the tools that I use in the lab requires additional license purchases. For instance, I do not use any System Center products. I believe that this practice best matches my audience’s uses and constraints. Most paid tools are too expensive, too complex, too resource-hungry, and require too much maintenance of their own to justify use in small businesses.

I only reformat drives for operating system upgrades. The in-place upgrade has become more viable through the years, but I still see no reward for the risk. On general principle, I do not reload operating systems as a fix for anything less than drive failures or ransomware. Once I feel that Windows Server 2022 has had enough testing by others, these hosts will undergo their third ever reformat.

Pros and Cons of this Lab

Overall, this lab satisfies me. A few of the reasons that I like it:

  • Low cost
  • Stability
  • Acceptable performance for typical small business daily functions
  • Good balance of performance and capacity
  • Ability to test the most common operations for a Microsoft-centric shop

Things that I would improve:

  • The storage performs well enough for a regular small business, but I’m an impatient administrator
  • Memory
  • Network adapter capabilities

Theresa Miller

Principal Technologist at Cohesity and Microsoft MVP

Why do you have a lab?

I have had various forms of home labs over the years for varying reasons. In fact, when I built my home, I made sure my house was hard-wired for internet, which shows how long I have been in the technology industry. At the time, hard wiring was the only way to distribute the internet to all the rooms in your home; unlike today, where we have wireless and Wi-Fi extender options to help with network stability, Wi-Fi extending to places like the outdoors, and additional security features. Back to the question at hand: what do I use it for? My home lab is what enables me to put forth the IT community work that I have done. This includes having the tech to create training courses, blogging, speaking at events and more. So, when and why did I decide to get a home lab? I decided to get a home lab over 8 years ago and continue to use every evolution of it for this function: educating myself and others.

How did I source my home lab?

Initially, my home lab was sourced from end-of-life equipment that my employer allowed employees to wipe the storage on, but eventually I transitioned to sourcing my hardware through a side business I have had for over 8 years. Purchasing a single Dell PowerEdge server, I was able to virtualize all of the servers I needed to run Active Directory and any necessary Windows servers at the time. Beyond that, my IT community involvement has allowed me to enjoy the appropriate software licensing needed to support such an environment.

Over time my home lab has changed, my hardware became end-of-life and what was once set up in my basement lab is now hosted in the Azure Cloud. Yep, I decommissioned my hardware and switched to cloud.

What were your considerations and decision points for what you decided to purchase?

The transition to the cloud came from the fact that it had become a challenge to deal with end-of-life hardware and ever-evolving hardware requirements that left my equipment outdated for the latest software. Not only did it become time-consuming to manage, but it also became too costly.

What’s your setup?

My setup today is in the Azure cloud, so the only hardware I have in my home is my internet router and the Eero Wi-Fi extenders needed to ensure network reliability. I find that running all cloud keeps my backend infrastructure up to date. For storage, I leverage Azure managed disks (block-level storage volumes managed by Azure) on the servers I need, keeping resource consumption low.

What services do you run and how do they interact with each other?

My minimal environment consists of a Windows VM with Active Directory deployed, the Azure DNS service, and one additional basic VM that changes depending on the technology I am testing. The basic VM can sometimes grow to multiple VMs if the project software being deployed requires it. In that scenario, I may also have SQL Server deployed if that’s required. I try to keep the deployment simple but keep the core foundational elements in place as needed, and wipe systems as needed. How do I manage all of this? I leverage cost management services that notify me if I hit the threshold that I am willing to pay. At that point I may need to make some decisions around which systems must stay online and what I can shut down, or whether I want to pay more that month.

What do you like and don’t like about your setup?

I am really happy with my setup since I moved to a cloud model, because maintaining the hardware, including the cost of electricity, became time-consuming. While the costs of cloud virtual machines keep me from having a large-scale deployment, I am OK with that. It’s fun to tear down and bring online what I need when I am looking to try something new with technology.

What does your roadmap look like?

My roadmap is strictly focused on what technology to try out next, and I find that I make these decisions based on technology that I cross paths with that is interesting in that moment. It could be something new, or something that has been around for some time that I may need to dive deeper into for a project or just for new learning and sharing.

Any horror stories to share?

I don’t have any horror stories to share when it comes to my home lab. I have adapted as needed from on-premises hardware in my home to a cloud model that has allowed me to be agile and keep my learning and technology sharing ongoing.

Paul Schnackenburg

Finally, here are some words from me. IT Consultant & DOJO editor.

If you’re starting out in IT today, you probably don’t realize the importance of having a home IT lab setup. But when the cloud was just a faint promise, if you wanted to practice on your own, further your skills or try something out, you had to have your own hardware to do it on. Early on I used VMware Workstation to spin up VMs, but there are limitations on what you can fit, especially when you need multiple VMs running simultaneously, and 15 years ago RAM was a lot more expensive (and came with a lot fewer GB) than it is today.

After some years I realized that I needed separate machines to practice setting up Hyper-V clusters, Live Migration etc., so I bought the first parts of my setup back in 2012, starting with three “servers”. I couldn’t justify the cost of real servers, so I got desktop-class motherboards, Intel i5 CPUs and 32 GB of RAM for three servers. One became a storage server, running Windows Server 2012 as an iSCSI target (again, I didn’t have the budget for a real iSCSI SAN), and the other two became VM hosting nodes in the cluster. Connectivity came from Intel 4-port 1 Gb/s NICs, offering decent bandwidth between nodes. A few years later I added two more nodes and a separate domain controller PC. The backend storage for Hyper-V VM disks was changed over to an SMB 3 file server once Hyper-V supported this. All throughout this time, I was writing articles on Hyper-V and System Center for various outlets, and this setup served as my test bed for several different applications and systems. From an “investment” point of view, it made perfect sense to have these systems in place.

I also worked as a part-time teacher, and because we were only given “hand me down” hardware for the first few years of Hyper-V and VMware becoming mainstream and part of the curriculum, I opted to house the servers on a desk in our hardware lab. That way my students could experiment with Live Migration etc., and through my own VPN connection to the boxes, I could access the cluster after hours to test new software and write articles.

In early 2016 this cluster was three nodes and one storage server, but two things happened. First, Windows Server 2016 offered a new option – Storage Spaces Direct (S2D) – so I outfitted all four servers with two 1 TB HDDs and two 120 GB SSDs (small by today’s standards, but this was eight years ago). These were all consumer grade (again – budget) and wouldn’t have been supported for production, especially not connected to desktop-class hardware, but they did allow me (and my students) to explore S2D and VM High Availability.

The other thing that happened was that Chelsio – makers of high-end Remote Direct Memory Access (RDMA) / iWARP 10/25/40 Gb/s Ethernet hardware – offered me some NICs in exchange for writing a few reviews. So, two nodes in the cluster were outfitted with a two-port 40 Gb/s card, and the other two with a two-port 10 Gb/s card. Initially, I did testing with the cabling running directly between two nodes, but this didn’t allow for a full, four-node cluster, so I purchased a Dell X4012, a 12-port 10 Gb/s switch. The two 10 Gb/s NICs used two cables each for a total bandwidth of 20 Gb/s, while the 40 Gb/s NICs came with “spider” cables with a 40 Gb/s interface at the server end and four 10 Gb/s cables connected to the switch for a total bandwidth of 40 Gb/s. This was ample for the S2D configuration and gave blazing-fast Live Migrations, storage traffic and other East-West flows.

Dell X4012 10Gb/s switch

In late 2020 I left the teaching job, so the whole cluster was mothballed in my home office for 1 ½ years, and over the last month I’ve been resurrecting it (after purchasing an Ikea bookshelf to hold it all). Currently, it’s running Windows Server 2022 Datacenter. Each upgrade has been a complete wipe and reinstall of Windows Server (desktop experience; Server Core is just too hard to troubleshoot).

Trying to revive this old hardware has taught me two things: first, the “fun” of wrestling with misbehaving (or just plain old) hardware was a lot more attractive when I was younger; and second, the cloud is SO much better for this stuff. Hence my home lab was mothballed for so long and I didn’t really miss it.

I use Windows Admin Center to manage it all, and I’ll also use various Azure cloud services for backup etc. to test them out.

My only “horror story” (apart from all the silly, day-to-day mistakes we all make) was during the wipe and reinstall to Windows Server 2019, when I used the wrong product key and ended up with four Windows Server Standard nodes – which don’t support Storage Spaces Direct.

What’s your Homelab Setup (and do you even need one)?

As you can see, home labs come in many shapes and sizes. If you’re a budding IT Pro today and you’re wondering if a home lab is right for you, consider the use cases it would fulfil very carefully. I see some trainers and IT Pros opting for laptops with large amounts of storage and memory and virtualizing everything on a single PC – certainly that covers many use cases. But if your employers are still mostly on-premises and supporting server clusters is still part of your daily life, nothing beats having two or three physical cluster nodes to test and troubleshoot. Expect to pay a few thousand US dollars (or the equivalent in your currency), and balance the extra cost of “real” servers against the savings – but added time investment – of building your own PCs.

If you’re considering setting up a machine or two for your home lab, I have the following recommendations: select cases that allow for upgrades and changes in the future, as you never know what you’ll need to install and test. Don’t spend money on expensive, server-grade hardware unless you have to – your home lab is unlikely to be mission-critical. Go for fewer nodes; it’s easy to fit a cost-effective machine today with 64, 128 or even more GB of RAM, giving you plenty of space for running VMs. And use SSDs (or NVMe) for all storage if you can afford it; using HDDs is just too slow.

And don’t forget the power of hosting your lab in the cloud, making it easy to rebuild and scale up and down, with a lower initial cost but a monthly subscription to keep an eye on instead.

Source :
https://www.altaro.com/hyper-v/it-pros-homelab-secrets/

Top 10 PowerShell Tasks in Exchange Online

Today, there is no question that IT admins are busier than ever, juggling multiple tasks and responsibilities. These include managing and administering Exchange email services, both on-premises and in the cloud. Exchange Online is an extremely popular solution for organizations to host mail services as many businesses have migrated email and file storage to the public cloud. PowerShell is a great scripting language that allows admins to make the best use of their time by automating common tasks and day-to-day activities.

Why use PowerShell?

Before considering PowerShell specifically in the context of Exchange Online, why should admins consider using PowerShell in general? Today, PowerShell has quickly become one of the most popular and fully-featured scripting languages. Many software vendors are developing and releasing their own PowerShell modules, allowing admins to control, configure, and manage many different solutions across the board with the familiar PowerShell syntax.

IT admins, especially Windows admins, are familiar with PowerShell as version 1.0 was released in 2006 for Windows Server 2003, Windows XP SP2, and Windows Vista. In addition, Windows PowerShell is included in modern Windows Server and client operating systems, with the newer PowerShell Core as an optional download.

PowerShell is both familiar and understandable for many admins, given its verb-noun constructs and very human-readable syntax. However, even for non-developers, writing simple PowerShell one-liner scripts can significantly reduce the number of manual tasks performed daily.

PowerShell is also very extensible. As mentioned, third-party software vendors can write their own PowerShell snap-ins and modules to integrate into the PowerShell framework, allowing PowerShell to be customized to work with many different software solutions. Third-party vendors are not the only ones that have extensively used PowerShell modules and cmdlets. Most modern Microsoft software and cloud solutions have their own PowerShell modules, allowing for seamless automation, including configuration and management.

What is Exchange Online (EXO)?

Microsoft Exchange Online (EXO) is a hosted unified messaging solution that provides email, calendaring, contacts, and task management from a wide range of devices. Exchange Online is a modern counterpart to the traditional Exchange on-premises solutions organizations have used for decades. In addition, Exchange Online can leverage modern Microsoft technologies, including Azure Active Directory. With Exchange Online’s Azure integration, organizations have the tools needed to support the modern hybrid workforce worldwide.

Exchange Online is the email component included in an Office 365 or Microsoft 365 subscription. However, you can purchase Exchange Online services without the other components of Office/Microsoft 365. With Exchange Online, you retain control over the messaging services offered to your users.

Microsoft Exchange Online PowerShell

Exchange Online includes the ability to administer, configure, and manage your Exchange Online environment using PowerShell. In addition, Exchange Online PowerShell provides many robust cmdlets allowing administrators to automate many common tasks.

The Exchange Online PowerShell V2 module is the latest iteration and release of the Exchange Online module and provides modern features, such as the ability to work with multi-factor authentication (MFA). With MFA, organizations can greatly bolster the security of their PowerShell sessions by requiring more than one authentication factor, such as a one-time code delivered via an authenticator app or text message.

Automated Configuration and Benefits of Exchange Online PowerShell

IT admins may ask why they would want to use PowerShell instead of simply using the GUI, which is familiar and does most of what they want to do. When performing specific tasks one time or only a few times during a day on one object, the GUI tools are well suited to carry out these tasks and are quite efficient at carrying out a single job or a few tasks in an ad-hoc way. However, there are multiple reasons why you would use PowerShell instead of the Exchange Online GUI management tools. These include:

  • Bulk operations
  • Data filtering
  • Data piping

Bulk operations

GUI management tools do not scale well when dealing with tasks that may need to be performed on multiple users or other objects. Also, what if you need to carry out specific tasks on hundreds of objects on a schedule? GUI management tools are not suited for doing this. For example, can you imagine manually changing an attribute on hundreds of Exchange Online users through the GUI? It would be extremely time-consuming and not very efficient.

When needing to perform bulk operations on multiple objects, PowerShell is much better suited at doing this than the Exchange Online GUI. For example, when manually changing values and attributes on an object numerous times through a GUI, there is a high likelihood a mistake can be made. However, if you use PowerShell to make the changes, the actions are repeated precisely each time the code updates the object, eliminating mistakes due to human error.

Making changes using a PowerShell script on hundreds of users might take minutes or less, whereas making the same changes manually through the GUI might take hours. This can save many hours of manual labor on low-level administrative tasks.
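As a hedged illustration (the 30-day retention value here is arbitrary), a single pipeline can apply one change to every mailbox in the tenant:

    # Hedged example: one bulk change across all mailboxes.
    # Doing this by hand in the GUI would take hours and invite typos.
    Get-Mailbox -ResultSize Unlimited |
        Set-Mailbox -RetainDeletedItemsFor 30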

Data filtering

One of the main reasons to use PowerShell with Exchange Online is its data filtering capability. PowerShell is a powerful object-oriented scripting language that can pull out objects and filter data in ways that may not be available in the Exchange Online Management GUI.

When you think about it, GUI tools only allow filtering by the specific criteria built into the GUI tool or management console. If the specific filter you need is not available, you can’t see the information in the way you need it displayed. In addition, GUI tools generally do not provide IT admins with the filtering and data extraction capabilities of command-line tools and scripting languages.

With the filtering capabilities built into PowerShell for Exchange Online, IT admins can query and filter data as needed. PowerShell is an object-oriented scripting language that can return various data objects. For example, let’s say you want to get the archivestatus attribute from all your user mailboxes. You could do that with a simple PowerShell one-liner as follows:

  • get-mailbox | select name, archivestatus

With Exchange Online PowerShell, getting the value of any mailbox attribute is the same as following this simple syntax shown above. Now, things get more interesting by piping returned values and data into other PowerShell cmdlets.

Data piping

Another powerful capability of data filtering with PowerShell is to take the data returned from a data query with a filter and then pipe the return into another PowerShell command. This simple feature contained natively in PowerShell allows querying for specific matching objects such as mailboxes and then doing something with those returned objects, such as running another Exchange Online PowerShell cmdlet on them.

A very simple example of piping your return data into another PowerShell cmdlet is a simple “out-file” cmdlet. It allows you to export your returned data to a simple text file.

  • get-mailbox | select name, archivestatus | out-file c:\archivestatus.txt

But you can do anything you want with the pipe from a Get-Mailbox, Get-User, or other PowerShell “get” command. You can think of the workflow like this: you query for a specific list of objects that match the filter criteria you have specified, then feed that set of matching objects into another PowerShell cmdlet.

Manually Configuring Exchange Online PowerShell

To get started using Exchange Online PowerShell cmdlets, you first need to install the required PowerShell modules. The Exchange Online PowerShell module is one of several modules that fall under the umbrella of services contained in Microsoft 365. As mentioned earlier, the Exchange Online service can be purchased as a standalone product or included with the mail services offered by Microsoft 365.

Each of the Microsoft 365 services has its own PowerShell modules, including:

  • Azure Active Directory (Azure AD)
  • Exchange Online
  • SharePoint Online
  • Skype for Business Online
  • Teams

If you are explicitly working with Exchange Online (EXO), two modules are needed to interact with the low-level Azure AD user objects and the Exchange Online mailboxes:

  • Azure Active Directory (Azure AD) PowerShell – Allows querying the Azure Active Directory environment: users, attributes, etc.
  • Exchange Online PowerShell – Allows querying and performing critical tasks at the mailbox level for users with Exchange Online mailboxes

Let’s see how to install both of these PowerShell modules for specifically interacting with Exchange Online via PowerShell.

Azure Active Directory (Azure AD)

First, we are going to install the AzureAD PowerShell module. As a note, it does not matter whether you install the AzureAD module or the ExchangeOnline module first. To install the module, run the following cmdlet:

  • Install-Module AzureAD

Accept the warning message displayed regarding the untrusted repository by typing “Y.” Learn more about the AzureAD PowerShell module cmdlet reference here: AzureAD Module | Microsoft Docs.

Installing AzureAD PowerShell module using Windows Terminal

Installing Exchange Online PowerShell Module

Now, installing the Exchange Online PowerShell module is the same process. To install it, run the following cmdlet:

  • Install-Module ExchangeOnlineManagement
Installing the ExchangeOnlineManagement PowerShell module

Accept the warning message displayed regarding the untrusted repository by typing “Y.” For details on using the Exchange Online Management PowerShell module, see Microsoft’s Exchange Online PowerShell documentation here: Exchange Online PowerShell | Microsoft Docs.

Controlling access to Exchange Online PowerShell

By default, all accounts you create in Microsoft 365 can connect to and use Exchange Online PowerShell. However, IT admins can use Exchange Online PowerShell to enable or disable a user’s ability to use Exchange Online PowerShell in the environment.

As a security note, just because a user can connect to Exchange Online PowerShell, it does not give them administrator access. A user’s permissions in Exchange Online are defined by the built-in role-based access control (RBAC) used by Exchange Online.

Using the Exchange Online PowerShell cmdlets shown below, Exchange administrators can enable or disable users’ access to Exchange Online PowerShell.

  • Disable Exchange Online PowerShell – Set-User -Identity myuser@mydomain.com -RemotePowerShellEnabled $false
  • Enable Exchange Online PowerShell – Set-User -Identity myuser@mydomain.com -RemotePowerShellEnabled $true

To enable or disable Exchange Online PowerShell for multiple users based on a user attribute, you can also use the filtering and piping features discussed above. To enable Exchange Online PowerShell for users with a specific title, like “Manager,” you can do the following:

  • $managers = Get-User -ResultSize unlimited -Filter "(RecipientType -eq 'UserMailbox') -and (Title -like 'Manager*')"
  • $managers | foreach {Set-User -Identity $_.WindowsEmailAddress -RemotePowerShellEnabled $true}

Connecting to Exchange Online PowerShell with Basic Authentication

If you search for connecting to Exchange Online PowerShell, you will see references to basic authentication and modern authentication. To follow best practices, don’t attempt to use Basic Authentication any longer. All organizations at this point need to be switching to modern authentication with MFA enabled.

There is an additional reason. Microsoft is deprecating Basic Authentication access to Exchange Online on October 1, 2022. With this announcement, starting on October 1, 2022, they will begin disabling Basic Authentication for Outlook, EWS, RPS, POP, IMAP, and EAS protocols in Exchange Online. SMTP Auth will also be disabled if it is not being used. Read the official announcement here.

If you want to use the older Exchange Online Remote connection using Basic Authentication, you can view those instructions from Microsoft here. Again, note this method will be deprecated later this year.

Connecting to Exchange Online PowerShell with Modern Authentication

To connect to Exchange Online, use the Exchange Online PowerShell V2 module (installation shown above) to connect to your Exchange Online environment. The EXO PowerShell V2 module uses modern authentication and works with multi-factor authentication (MFA) for securing your Exchange Online PowerShell environment.

To connect to your Exchange Online environment, you need to import the ExchangeOnlineManagement module and then use the Connect-ExchangeOnline cmdlet.

  • Import-Module ExchangeOnlineManagement
  • Connect-ExchangeOnline -ShowProgress $true
Connecting to Exchange Online using the Connect-ExchangeOnline cmdlet

It will bring up the login box to log into your Office/Microsoft 365 account, taking advantage of any MFA configured for the account.

Logging into Exchange Online with the Exchange Online PowerShell management module

The Top 10 Most Common Tasks in Exchange Online PowerShell

Now that we have installed the Exchange Online PowerShell module, what are some common tasks we can accomplish using Exchange Online PowerShell? Let’s take a look at the following:

  1. Getting Migration information
  2. Getting mailboxes
  3. Viewing mailbox statistics
  4. Increasing deleted items retention
  5. Enable Mailbox Audit Logging
  6. Identify inactive mailboxes
  7. Identify mailboxes enabled with forwarding
  8. Setting mailbox autoreply configuration
  9. Assigning roles to users
  10. Identifying ActiveSync devices

1. Getting Migration Information

You may be migrating users from one Exchange server, such as an on-premises deployment, to another (Exchange Online). The Get-MigrationUser cmdlet is a great command to check the status of a migration batch used to migrate users in batches.

  • Get-MigrationUser -BatchId Marketing | Get-MigrationUserStatistics
Using the Get-MigrationUser

2. Getting Mailboxes

One of the most basic tasks an Exchange admin needs to carry out is getting information about mailboxes. The go-to cmdlet for this is Get-Mailbox. It is generally piped into other cmdlets to pull mailboxes meeting specific filters and then perform configuration on those mailboxes.
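For instance, a minimal query might look like this (the property selection is just illustrative):

    # List every mailbox with a couple of commonly needed properties.
    Get-Mailbox -ResultSize Unlimited |
        Select-Object DisplayName, PrimarySmtpAddress, RecipientTypeDetails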

Using the Get-Mailbox cmdlet to get mailbox information in Exchange Online

3. Viewing mailbox statistics

A common task of Exchange admins is keeping an eye on the size of mailboxes in the environment, so these do not become unwieldy. The Get-MailboxStatistics cmdlet returns a mailbox’s size, the number of messages it contains, and the last time it was accessed.

  • Get-MailboxStatistics -identity <username>
Using the Get-MailboxStatistics cmdlet in Exchange Online to get mailbox information

4. Increasing deleted items retention

By default, Exchange Online is configured to retain deleted items for 14 days. However, this limit can be increased easily for users using the Exchange Online PowerShell module cmdlet Set-Mailbox.

  • Set-Mailbox -Identity "John Doe" -RetainDeletedItemsFor 30
The Set-Mailbox cmdlet allows configuring many aspects of the user mailbox in Exchange Online

5. Enable Mailbox Audit Logging

Even though audit logging is on by default for all organizations in Microsoft 365, only users with E5 licenses will return mailbox audit log events in audit log searches. If you want to retrieve audit log events for users without an E5 license, PowerShell is a great way to do that. You can use the Exchange Online PowerShell cmdlet one-liner:

  • Set-Mailbox -Identity <mailbox> -AuditEnabled $true
Using the Set-Mailbox cmdlet to turn on the AuditEnabled flag

6. Identify inactive mailboxes

Using a combination of Exchange Online PowerShell cmdlets and a simple foreach loop, we can see when each user last logged into their mailbox.

  • Get-Mailbox -ResultSize Unlimited | Foreach {Get-MailboxStatistics -Identity $_.UserPrincipalName | Select DisplayName, LastLogonTime}
Getting the last logon time using Exchange Online PowerShell

7. Identify mailboxes enabled with forwarding

What if you want to identify mailboxes that have a forwarding address configured, perhaps because these have not been documented? You can easily do this with another useful Exchange Online PowerShell one-liner:

  • Get-Mailbox -ResultSize Unlimited | where {$_.ForwardingAddress -ne $Null} | select DisplayName,ForwardingAddress

8. Setting mailbox autoreply configuration

A user may forget to set their autoreply configuration before going away on vacation, or there may be a need to set the autoreply on a user mailbox for other reasons. You can easily accomplish this using PowerShell, eliminating the need to log in as that user and do it interactively in Outlook.

To do this, you can use the Set-MailboxAutoReplyConfiguration cmdlet. It allows setting both an internal message and an external message for the mailbox.
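A hedged example might look like this (the identity and message text are placeholder values):

    # Enable autoreply on a mailbox without logging in as the user.
    # The identity and message bodies below are placeholders.
    Set-MailboxAutoReplyConfiguration -Identity "john.doe@mydomain.com" `
        -AutoReplyState Enabled `
        -InternalMessage "I am out of the office until Monday." `
        -ExternalMessage "I am currently unavailable and will reply on my return."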

Setting autoreply messages using PowerShell

9. Manage roles for groups

Using the New-ManagementRoleAssignment cmdlet, you can assign a management role to a management role group, management role assignment policy, user, or universal security group.

  • New-ManagementRoleAssignment -Role "Mail Recipients" -SecurityGroup "Tier 2 Help Desk"
Assigning management roles using the New-ManagementRoleAssignment cmdlet

10. Identifying ActiveSync Devices

Identifying and seeing ActiveSync devices in use in the organization can easily be accomplished with Exchange Online PowerShell using the Get-MobileDevice cmdlet.
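For example (again, the property selection is illustrative):

    # List mobile/ActiveSync devices known to the organization.
    Get-MobileDevice -ResultSize Unlimited |
        Select-Object FriendlyName, DeviceModel, DeviceOS, UserDisplayName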

Getting mobile devices paired with Exchange Online Users


The Future is Automated

Many organizations are now migrating and hosting their mail services in the cloud. Exchange Online provides businesses with a great way to host their mail services in Microsoft’s cloud infrastructure, either as a standalone subscription or part of their Office/Microsoft 365 subscription.

While Exchange admins can undoubtedly use the GUI management tools for daily tasks, Exchange Online PowerShell provides a great way to carry out everyday tasks much more quickly and efficiently through automation. The Exchange Online PowerShell module is easy to install, and it provides quick time to value by allowing Exchange admins to easily query and configure multiple objects in their Exchange Online environments.

Used in automated processes, Exchange Online PowerShell allows Exchange admins to carry out mundane low-level tasks consistently, in a way that helps eliminate human error.

Source :
https://www.altaro.com/hyper-v/10-tasks-online-powershell/

Over a Dozen Flaws Found in Siemens’ Industrial Network Management System

Cybersecurity researchers have disclosed details about 15 security flaws in Siemens SINEC network management system (NMS), some of which could be chained by an attacker to achieve remote code execution on affected systems.

“The vulnerabilities, if exploited, pose a number of risks to Siemens devices on the network including denial-of-service attacks, credential leaks, and remote code execution in certain circumstances,” industrial security company Claroty said in a new report.

The shortcomings in question — tracked from CVE-2021-33722 through CVE-2021-33736 — were addressed by Siemens in version V1.0 SP2 Update 1 as part of patches shipped on October 12, 2021.

“The most severe could allow an authenticated remote attacker to execute arbitrary code on the system, with system privileges, under certain conditions,” Siemens noted in an advisory at the time.

Siemens vulnerabilities

Chief among the weaknesses is CVE-2021-33723 (CVSS score: 8.8), which allows for privilege escalation to an administrator account and could be combined with CVE-2021-33722 (CVSS score: 7.2), a path traversal flaw, to execute arbitrary code remotely.

Another notable flaw relates to a case of SQL injection (CVE-2021-33729, CVSS score: 8.8) that could be exploited by an authenticated attacker to execute arbitrary commands in the local database.

“SINEC is in a powerful central position within the network topology because it requires access to the credentials, cryptographic keys, and other secrets granting it administrator access in order to manage devices in the network,” Claroty’s Noam Moshe said.

“From an attacker’s perspective carrying out a living-off-the-land type of attack where legitimate credentials and network tools are abused to carry out malicious activity, access to, and control of, SINEC puts an attacker in prime position for: reconnaissance, lateral movement, and privilege escalation.”

Source :
https://thehackernews.com/2022/06/over-dozen-flaws-found-in-siemens.html

Frequency Throttling Side Channel Software Guidance for Cryptography Implementations

In modern processors, the time to execute a given set of instructions may vary depending on many factors. Two important factors in these variations are the number of cycles and the CPU frequency.

For developers implementing cryptographic algorithms, Intel recommends selecting instructions whose execution time is data-independent in order to mitigate timing side channels due to cycle differences. Intel has provided guidance for developing constant-time/constant-cycle code in Intel’s Guidelines for Mitigating Timing Side Channels Against Cryptographic Implementations.

This document provides software guidance for mitigating timing side channels due to CPU frequency behavior. Power management algorithms in most modern processors, including Intel® processors, provide mechanisms to enforce electrical parameters (such as power and current) to remain below specified limits. CPU frequency throttling is triggered when one of these limits is reached, which results in CPU frequency changes regardless of whether Intel® Turbo Boost Technology is enabled or not. Such frequency changes and derived behavior may be correlated with information being processed by the CPU, and it may be possible to infer parts of the information through sophisticated analysis of the frequency change behavior. The guidance in this document is based on Intel’s information and understanding at the time of writing. However, as with other security guidance, Intel’s guidance is subject to change as the threat landscape evolves or new information becomes available.

Frequency Throttling Side Channel

When CPUs process data, transistors are switched on and off depending on the data being processed. Switching transistors uses energy. Consequently, running the same workload with different data may change the CPU’s power consumption. This physical property may lead to malicious actors correlating the system’s reported power consumption with possible secret data being processed on the system. Refer to Intel’s Running Average Power Limit Energy Reporting technical article for more details on Intel’s mitigations.

The CPU power management unit routinely calculates the running averaged electrical parameters during the past time window and compares them against the power management reactive limits. If any of the limits are exceeded, the power management algorithm will trigger CPU throttling and adjust the maximal allowed frequency accordingly. As a result, there is an inverse correlation between the average throttling frequency and the power consumption before frequency throttling: a workload with higher power consumption before throttling tends to run at a lower average throttled frequency, and vice versa. Furthermore, since the power consumption of a workload may be correlated with the data being processed, the throttling frequency may also be correlated with the data, which becomes a frequency side channel. The CPU frequency change also causes a difference in the execution time of the workload and results in a timing side channel.

Figure 1: Power management reactive limits throttling converts power differences to frequency/timing differences

Figure 1 explains the side channel using an illustrative example. Figure 1(a) shows the same program executed twice with the input data 1 (in blue) and data 2 (in orange), respectively. Assuming the program is a constant-cycle implementation, the number of cycles with data 1 and data 2 is the same (c1 = c2). On the other hand, the power consumption of processing data 1 and data 2 might be different. Without loss of generality, we assume processing data 1 consumes higher power (p1) compared to that of data 2 (p2). When neither p1 nor p2 exceeds the default power limit (or any other reactive limit), there is no throttling, and the frequency stays at f_default when the program is running. As a result, the execution time of the program is the same with either data 1 or data 2.

Assuming the total power consumption exceeds the power limit due to increased system power consumption (such as when stressor code starts to run in parallel with the function) or a reduction of the power limit, frequency throttling occurs. As shown in Figure 1(c), since processing data 1 consumes more power than data 2, the averaged throttled frequency of data 1 (freq1) will become lower than that of data 2 (freq2) to ensure the power limit is satisfied. Of course, both throttling frequencies are lower than f_default. Therefore, even if the number of execution cycles of the program is still data-invariant, the throttling frequency, and hence execution time, becomes data-variant. An attacker may utilize this side channel to extract secret data (such as cryptographic keys) from a constant-time cryptography implementation, since a typical constant-time implementation ensures only constant-cycle execution, and data-dependent variations in CPU frequency will result in data-dependent code execution time.

Power Management Reactive Limits

Intel® processors have several reactive limits related to power management, such as Running Average Power Limit (RAPL) and Voltage Regulator Thermal Design Current Limit (VR-TDC).

Running Average Power Limit (RAPL)

RAPL is a feature of the Intel power management architecture that caps the power consumption of the system. When the configured power limit is exceeded, the CPU is forced to run at a lower frequency, maximizing performance while still meeting the power limit. Intel currently provides multiple power limit capabilities, including package-level and platform-level power limits. Ring 0 software can configure both the running average window and the power limit of each capability through model-specific registers (MSRs), such as MSR_PKG_POWER_LIMIT for the package-level power limit. Refer to Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3 Section 14.10 “Platform Specific Power Management Support” for more details.
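
As an illustration of the MSR interface (this example is ours, not part of Intel’s guidance), the following sketch reads the raw MSR_PKG_POWER_LIMIT value on Linux. The register address 0x610 comes from the Intel SDM; the choice of CPU 0, and the assumption that the msr kernel module is loaded and the script runs as root, are ours:

<?php
// Minimal sketch: read the raw MSR_PKG_POWER_LIMIT register (address 0x610
// per the Intel SDM) for CPU 0 via Linux's /dev/cpu/*/msr interface.
// Assumes the msr kernel module is loaded and root privileges.
$fh = fopen('/dev/cpu/0/msr', 'rb');
if ($fh === false) {
    exit("Cannot open /dev/cpu/0/msr (msr module loaded? running as root?)\n");
}
fseek($fh, 0x610);             // the MSR address doubles as the file offset
$raw = fread($fh, 8);          // MSRs are 64 bits wide
fclose($fh);
$value = unpack('P', $raw)[1]; // 'P' = unsigned 64-bit, little-endian
printf("MSR_PKG_POWER_LIMIT = 0x%016x\n", $value);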

Voltage Regulator Thermal Design Current Limit (VR-TDC)

VR-TDC is a power management feature of the Intel power management architecture. It enforces a current limit, specified in amperes, that preserves the electrical constraints of the voltage regulator (VR). The control algorithm monitors the Exponential Moving Average (EMA) of the current, also measured in amperes, by reading measurements from the VR. As with other such control algorithms, the budget is evaluated over a given time window. If the limit is hit, the processor reduces the CPU frequency (frequency throttling) to keep the current below the limit.

Related Issues

Intel released microcode updates (MCUs) in IPU 2020.2 and IPU 2021.2 in response to the Running Average Power Limit energy reporting vulnerabilities (CVE-2020-8694 and CVE-2020-8695). None of these MCUs mitigate the frequency throttling side channel. In the Software Guidance for Cryptographic Implementations section of this article, Intel provides software guidance for mitigating frequency throttling side channels against cryptographic implementations. Intel recommends that cryptographic library and application developers refer to the suggested methods in this article to assess and harden their code against the frequency throttling side channel (also known as “Hertzbleed”).

Software Guidance for Cryptographic Implementations

This section provides cryptographic application and library developers² with guidance to assess the risk and reduce the impact of frequency throttling side channels on cryptographic implementations. Remember that the root cause of the frequency throttling side channel is the power side channel, whose mitigation has been researched extensively. This document is not intended to provide comprehensive solutions that mitigate the frequency throttling side channel for all cryptographic implementations; instead, it provides recommendations to help cryptography developers evaluate the risks and harden their software implementations against this side channel. Intel recommends that cryptographic implementations follow existing guidance for developing constant-cycle code, as described in Intel’s Guidelines for Mitigating Timing Side Channels Against Cryptographic Implementations and the Data Operand Independent Timing Instruction Set Architecture Guidance.

Conditions of the Attack

Cryptographic implementations may be vulnerable to frequency throttling side channels when all of the following conditions are met. If one or more of these prerequisites is not satisfied, the cryptographic implementation should not be impacted by this type of side channel.

Check the list of conditions below to assess your implementation’s risk based on the nature of the implementation and your threat model. 

The Cryptographic Implementation is Vulnerable to Power Side Channel Attack

The power side channel is the fundamental root cause of the frequency throttling side channel. An implementation that is vulnerable to the frequency throttling side channel must satisfy all the prerequisites of a physical power side channel attack, except that the attacker does not need physical access to measure power. The prerequisites for a malicious actor to exploit the physical power side channel include, but are not limited to:

  • The ability to repeatedly initiate cryptographic operations with the same secret key to collect enough data. 
  • For block ciphers, the ability to read the input/output or inter-round state of the block cipher primitives. Note that the input/output is not necessarily the plaintext or ciphertext. One example is the counter (CTR) mode of a block cipher, where the input to the block cipher is the concatenation of the nonce and the counter.

Ability to Ensure Victim Execution Hits the CPU’s Reactive Limits

For the CPU frequency to become correlated with secret data, a reactive limit needs to be hit while the victim workload is running. There are several techniques an attacker may use to meet this necessary condition.

  • An attacker may run multiple instances of the same victim workload (using the same secret data) on multiple cores to increase the power consumption of the package (to hit the limit) and increase the Signal-to-Noise Ratio (SNR)³.
  • An attacker may run stressor workloads in parallel with the victim workload to increase package power consumption. 
  • An attacker with ring 0 privilege can reduce the limits through reactive limit configuration interfaces (for example, MSRs) to ensure the victim workload hits the limit.

Ability to Monitor Frequency Change or Related CPU Behaviors with Sufficient Resolution

The attacker needs to be able to sample the CPU frequency while the victim workload is running, or else observe the execution time of the victim workload with sufficient resolution to identify data-dependent differences in the measured information.
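
To make this condition concrete (this example is ours, not Intel’s): on Linux, the cpufreq subsystem exposes per-core frequency readings through sysfs, which an unprivileged local process can typically poll. A minimal sampler sketch, with an arbitrary 1 ms interval:

<?php
// Illustrative sampler: poll core 0's current frequency via sysfs while a
// victim workload runs. scaling_cur_freq reports kilohertz on Linux.
for ($i = 0; $i < 10; $i++) {
    $khz = (int) file_get_contents(
        '/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq'
    );
    printf("sample %d: %d kHz\n", $i, $khz);
    usleep(1000); // 1 ms between samples (arbitrary choice)
}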

Software Implementation

The following guidance can help developers mitigate the frequency throttling side channel. Additionally, this guidance can be used as a defense-in-depth mechanism even when not all conditions for the frequency throttling side channel are met.

Applying Effective Solutions Against Power Side Channel

Most of the proven software-based countermeasures against power side channels on cryptographic primitives will also be effective against the frequency throttling side channel. For example, mitigations that reduce the power SNR of single instructions are effective against both the physical power side channel and the frequency throttling side channel, since the SNR of the averaged power consumption (and of the timing due to throttling) during the running average window is also reduced. Other techniques, such as software-based masking [1] [2], should also be effective against frequency throttling side channels.

Note that countermeasures that randomize the execution order of instructions, while effective at making trace alignment and identification of points of interest harder for the physical power side channel, are less effective at mitigating the frequency throttling side channel. This is because reordering instructions at cycle granularity is unlikely to change the averaged power consumption over an averaging time window of milliseconds or longer.

For cryptographic applications, an example of a generic countermeasure against the power side channel is key refresh. One of the necessary conditions for a power side channel attack is the ability to repeatedly initiate cryptographic operations with the same sensitive key. If the secret key is refreshed before enough traces can be collected, it becomes harder for the attacker to fully deduce the secret. The key refresh interval may be based on time (for example, refresh every few hours) or on data volume (for example, the amount of data encrypted with the same key). If the implementor is uncertain which threshold to use, the lowest threshold that meets performance and design requirements should be selected. Note that the practicality of key refresh depends on the specific cryptographic use case (for example, key refresh is typically not applicable to disk encryption).
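
One possible shape for a volume-based refresh policy is sketched below in PHP using libsodium’s AEAD API. The class name and the 1 GiB threshold are our own illustrative choices, and a real deployment would also need to re-establish each new key with the communicating peer:

<?php
// Illustrative only: rotate the symmetric key after a configured volume of
// data has been encrypted under it, bounding how many traces an attacker
// can collect per key. Requires PHP 7.2+ with the sodium extension.
class RekeyingCipher
{
    private $key;
    private $bytesUsed = 0;
    private $maxBytesPerKey;

    public function __construct($maxBytesPerKey = 1073741824) // 1 GiB, example threshold
    {
        $this->maxBytesPerKey = $maxBytesPerKey;
        $this->key = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
    }

    public function encrypt($plaintext, $ad = '')
    {
        // Refresh the key once the data-volume threshold would be exceeded.
        if ($this->bytesUsed + strlen($plaintext) > $this->maxBytesPerKey) {
            $this->key = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
            $this->bytesUsed = 0;
        }
        $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
        $this->bytesUsed += strlen($plaintext);
        $ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
            $plaintext, $ad, $nonce, $this->key
        );
        return array($nonce, $ciphertext);
    }
}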

Avoiding Unnecessary Exposure of Reactive Limit Configuration Interfaces

As stated in the Conditions of the Attack section, if an adversary has access to certain hardware interfaces (for example, MSR_PKG_POWER_LIMIT), they may configure and reduce the reactive limits to make it easier for the victim workload to trigger frequency throttling. To reduce the attack surface, privileged software (for example, a hypervisor or ring-0 software) that has access to these interfaces should avoid unnecessarily exposing them to untrusted entities (for example, guest VMs or ring-3 software). If there is a business need to expose these interfaces, the designers of the privileged software should be aware of the potential security implications.

Restricting Correlation of Frequency Change or Related Behaviors

Another common countermeasure against side channel attacks is to jam the channel with noise to deter the attacker from deducing the secret. As the side channel in this attack is the frequency change (or behaviors derived from it), noise can be injected into the frequency transitions or into the timing information.

One method to do this is to leverage the inherent noise during cryptographic application calls. The cryptographic library provider or cryptographic application provider may restrict the maximum size of the plaintext/ciphertext allowed per API invocation, so that more invocations of the API are needed to process the same amount of data, which introduces more inherent noise. 
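
A minimal sketch of such a cap (the 16 KiB chunk size is an arbitrary example, not a vendor-recommended value):

<?php
// Hypothetical wrapper: cap the plaintext size accepted per encryption call,
// forcing large messages through many smaller API invocations, each of which
// contributes its own call-to-call noise. Requires the sodium extension.
const MAX_CHUNK_BYTES = 16384; // example cap only

function encrypt_chunked($plaintext, $key)
{
    $out = array();
    foreach (str_split($plaintext, MAX_CHUNK_BYTES) as $chunk) {
        $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
        $out[] = array($nonce, sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
            $chunk, '', $nonce, $key
        ));
    }
    return $out;
}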

Beyond capping input sizes, a cryptography implementor may proactively inject random noise into the cryptographic operations to increase timing variation. To implement this countermeasure, the developer may add dummy instructions that introduce sufficient power or latency variation. The dummy instructions must be independent of the secret data used by the cryptographic functions. For example, timing variation can be introduced with a loop of instructions that runs for a random number of iterations. In addition, any power variation induced by the dummy instructions may also increase the entropy of the frequency transitions. To help ensure every frequency change is affected by noise, Intel recommends injecting some noise during the running time window of any reactive limit that may be leveraged by an attacker. One possible way to balance the trade-off between security and performance is to combine this scheme with the key refresh countermeasure described in the Applying Effective Solutions Against Power Side Channel section, increasing the time needed for a successful attack beyond a key lifetime that is acceptable for your implementation.
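
A sketch of the dummy-loop idea follows; the function name and the iteration bound are ours, and the random count must come from a source unrelated to the secret:

<?php
// Illustrative noise injection: run a dummy loop with a random,
// secret-independent iteration count around the sensitive operation so the
// observable timing no longer tracks the secret alone.
function with_random_jitter(callable $cryptoOp)
{
    $iterations = random_int(0, 4096); // must NOT be derived from the secret
    $sink = 0;
    for ($i = 0; $i < $iterations; $i++) {
        $sink ^= $i; // cheap dummy work
    }
    return $cryptoOp();
}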

Steps to Take to Protect Your Code

Cryptographic library and application providers are advised to take the following steps to assess and protect their code:

  1. Assess whether the implementation is impacted based on the threat model and the necessary conditions of the attack.
  2. If the cryptographic implementation is impacted and mitigation is needed:
    1. Apply generic countermeasures against the power side channel on the cryptography primitive level (for example, masking) or cryptography application level (for example, key refresh).
    2. Restrict correlation of frequency change or related behaviors. Examples include restricting maximum input data size per invocation and injecting random delay noise.
  3. For privileged software or hypervisors, avoid unnecessary exposure of reactive limit configuration interfaces to untrusted entities.

References

  1. S. Mangard, E. Oswald, and T. Popp, Power Analysis Attacks: Revealing the Secrets of Smart Cards. Springer, 2007.
  2. E. Prouff and M. Rivain, “Masking against Side-Channel Attacks: A Formal Security Proof,” in Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT), 2013.

Footnotes

  1. For simplicity, in this document, we use power to indicate any electrical parameter that exhibits similar behavior.
  2. The definition of application is the software that utilizes cryptographic library primitives and owns/manages cryptography keys. The definition of a library is the software that provides cryptography primitives but does not own the key. Instead, the cryptographic library relies on the application to provide the cryptography key to use.
  3. Here, signal is the secret-correlated power consumption, and noise is the secret-independent power consumption.

Source :
https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/technical-documentation/frequency-throttling-side-channel-guidance.html

Hertzbleed Attack

Hertzbleed is a new family of side-channel attacks: frequency side channels. In the worst case, these attacks can allow an attacker to extract cryptographic keys from remote servers that were previously believed to be secure.

Hertzbleed takes advantage of our experiments showing that, under certain circumstances, the dynamic frequency scaling of modern x86 processors depends on the data being processed. This means that, on modern processors, the same program can run at a different CPU frequency (and therefore take a different wall time) when computing, for example, 2022 + 23823 compared to 2022 + 24436.

Hertzbleed is a real, and practical, threat to the security of cryptographic software. We have demonstrated how a clever attacker can use a novel chosen-ciphertext attack against SIKE to perform full key extraction via remote timing, despite SIKE being implemented as “constant time”.

Research Paper

The Hertzbleed paper will appear in the 31st USENIX Security Symposium (Boston, 10–12 August 2022) with the following title:

  • Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86

A preprint is available for download on the Hertzbleed website.

The paper is the result of a collaboration between researchers at the University of Texas at Austin, the University of Illinois Urbana-Champaign, and the University of Washington.

Questions and Answers

Am I affected by Hertzbleed?

Likely, yes.

Intel’s security advisory states that all Intel processors are affected. We experimentally confirmed that several Intel processors are affected, including desktop and laptop models from the 8th to the 11th generation Core microarchitecture.

AMD’s security advisory states that several of their desktop, mobile and server processors are affected. We experimentally confirmed that AMD Ryzen processors are affected, including desktop and laptop models from the Zen 2 and Zen 3 microarchitectures.

Other processor vendors (e.g., ARM) also implement frequency scaling in their products and were made aware of Hertzbleed. However, we have not confirmed if they are, or are not, affected by Hertzbleed.

What is the impact of Hertzbleed?

First, Hertzbleed shows that on modern x86 CPUs, power side-channel attacks can be turned into (even remote!) timing attacks—lifting the need for any power measurement interface. The cause is that, under certain circumstances, periodic CPU frequency adjustments depend on the current CPU power consumption, and these adjustments directly translate to execution time differences (as 1 hertz = 1 cycle per second).

Second, Hertzbleed shows that, even when implemented correctly as constant time, cryptographic code can still leak via remote timing analysis. The result is that current industry guidelines for how to write constant-time code (such as Intel’s) are insufficient to guarantee constant-time execution on modern processors.

Is there an assigned CVE for Hertzbleed?

Yes. Hertzbleed is tracked under CVE-2022-23823 and CVE-2022-24436 in the Common Vulnerabilities and Exposures (CVE) system.

Is Hertzbleed a bug?

No. The root cause of Hertzbleed is dynamic frequency scaling, a feature of modern processors, used to reduce power consumption (during low CPU loads) and to ensure that the system stays below power and thermal limits (during high CPU loads).

When did you disclose Hertzbleed?

We disclosed our findings, together with proof-of-concept code, to Intel, Cloudflare and Microsoft in Q3 2021 and to AMD in Q1 2022. Intel originally requested our findings be held under embargo until May 10, 2022. Later, Intel requested a significant extension of that embargo, and we coordinated with them on publicly disclosing our findings on June 14, 2022.

Do Intel and AMD plan to release microcode patches to mitigate Hertzbleed?

No. To our knowledge, Intel and AMD do not plan to deploy any microcode patches to mitigate Hertzbleed. However, Intel provides guidance to mitigate Hertzbleed in software. Cryptographic developers may choose to follow Intel’s guidance to harden their libraries and applications against Hertzbleed. For more information, we refer to the official security advisories (Intel and AMD).

Why did Intel ask for a long embargo, considering they are not deploying patches?

Ask Intel.

Is there a workaround?

Technically, yes. However, it has a significant system-wide performance impact.

In most cases, a workload-independent workaround to mitigate Hertzbleed is to disable frequency boost. Intel calls this feature “Turbo Boost”, and AMD calls it “Turbo Core” or “Precision Boost”. Disabling frequency boost can be done either through the BIOS or at runtime via the frequency scaling driver; on Linux, for example, this is typically exposed through sysfs knobs such as /sys/devices/system/cpu/intel_pstate/no_turbo (Intel) or /sys/devices/system/cpu/cpufreq/boost (AMD), depending on the driver in use. In our experiments, when frequency boost was disabled, the frequency stayed fixed at the base frequency during workload execution, preventing leakage via Hertzbleed. However, this is not a recommended mitigation strategy as it will significantly impact performance. Moreover, on some custom system configurations (with reduced power limits), data-dependent frequency updates may occur even when frequency boost is disabled.

What is SIKE?

SIKE (Supersingular Isogeny Key Encapsulation) is a decade-old, widely studied key encapsulation mechanism. It is currently a finalist in NIST’s Post-Quantum Cryptography competition. It has multiple industrial implementations and was the subject of an in-the-wild deployment experiment. Among its claimed advantages is a “well-understood” side-channel posture. You can find author names, implementations, talks, studies, articles, security analyses, and more about SIKE on its official website.

What is a key encapsulation mechanism?

A key encapsulation mechanism is a protocol used to securely exchange a symmetric key using asymmetric (public-key) cryptography.

How did Cloudflare and Microsoft mitigate the attack on SIKE?

Both Cloudflare and Microsoft deployed the mitigation suggested by De Feo et al. (who, while our paper was under the long Intel embargo, independently re-discovered how to exploit anomalous 0s in SIKE for power side channels). The mitigation consists of validating, before decapsulation, that the ciphertext consists of a pair of linearly independent points of the correct order. The mitigation adds a decapsulation performance overhead of 5% for CIRCL and of 11% for PQCrypto-SIDH.

Is my constant-time cryptographic library affected?

Affected? Likely yes. Vulnerable? Maybe.

Your constant-time cryptographic library might be vulnerable if it is susceptible to secret-dependent power leakage, and this leakage extends to enough operations to induce secret-dependent changes in CPU frequency. Future work is needed to systematically study which cryptosystems can be exploited via the new Hertzbleed side channel.

Can I use the Hertzbleed logo?

Yes. The Hertzbleed logo is free to use under a CC0 license.

  • Download logo: SVG, PNG
  • Download logo with text: SVG, PNG

We know some of you don’t really like vulnerability logos, and we hear you. However, we really like our logo (and hope you do too!).

Did you release the source code of the Hertzbleed attack?

Yes, for full reproducibility. You can find the source code of all the experiments from our paper at the link: https://github.com/FPSG-UIUC/hertzbleed

Source :
https://www.hertzbleed.com/

Horde Webmail – Remote Code Execution via Email

A webmail application enables organizations to host a centralized, browser-based email client for their members. Typically, users log into the webmail server with their email credentials, then the webmail server acts as a proxy to the organization’s email server and allows authenticated users to view and send emails.

With so much trust being placed in webmail servers, they naturally become a highly interesting target for attackers. A sophisticated adversary who compromises a webmail server can intercept every sent and received email, access password-reset links and sensitive documents, impersonate personnel, and steal the credentials of all users logging into the webmail service.

This blog post discusses a vulnerability that the Sonar R&D team discovered in Horde Webmail. The vulnerability allows an attacker to fully take over an instance as soon as a victim opens an email the attacker sent. At the time of writing, no official patch is available.


Impact

The discovered code vulnerability (CVE-2022-30287) allows an authenticated user of a Horde instance to execute arbitrary code on the underlying server. 

The vulnerability can be exploited with a single GET request, which can be triggered via Cross-Site Request Forgery (CSRF). For this, an attacker can craft a malicious email that includes an external image which, when rendered, exploits the vulnerability without further interaction from the victim: the only requirement is that the victim opens the malicious email.

The vulnerability exists in the default configuration and can be exploited with no knowledge of a targeted Horde instance. We confirmed that it exists in the latest version. The vendor has not released a patch at the time of writing. 

Another side-effect of this vulnerability is that the clear-text credentials of the victim triggering the exploit are leaked to the attacker. The adversary could then use them to gain access to even more services of an organization. This is demonstrated in our video:

https://www.youtube.com/watch?v=pDXos77YHpc


Technical details

In the following sections, we go into detail about the root cause of this vulnerability and how attackers could exploit it.


Background – Horde Address Book configuration

Horde Webmail allows users to manage contacts. From the web interface, they can add, delete and search contacts. Administrators can configure where these contacts should be stored and create multiple address books, each backed by a different backend server and protocol.

The following snippet is an excerpt from the default address book configuration file and shows the default configuration for an LDAP backend:

turba/config/backends.php

$cfgSources['personal_ldap'] = array(
   // Disabled by default
   'disabled' => true,
   'title' => _("My Address Book"),
   'type' => 'LDAP',
   'params' => array(
       'server' => 'localhost',
       'tls' => false,
        // …

As can be seen, this LDAP configuration is added to an array of available address book backends stored in the $cfgSources array. The configuration itself is a key/value array containing entries used to configure the LDAP driver.

CVE-2022-30287 – Lack of type checking in Factory class

When a user interacts with an endpoint related to contacts, they are expected to send a string identifying the address book they want to use. Horde then fetches the corresponding configuration from the $cfgSources array and manages the connection to the address book backend.

The following code snippet demonstrates typical usage of this pattern:

turba/merge.php

require_once __DIR__ . '/lib/Application.php';
Horde_Registry::appInit('turba');

$source = Horde_Util::getFormData('source');
// …
$mergeInto = Horde_Util::getFormData('merge_into');
$driver = $injector->getInstance('Turba_Factory_Driver')->create($source);
// …
$contact = $driver->getObject($mergeInto);

The code snippet above shows how the parameter $source is received and passed to the create() method of the Turba_Factory_Driver. Turba is the name of the address book component of Horde.

Things start to become interesting when looking at the create() method:

turba/lib/Factory/Driver.php

 51     public function create($name, $name2 = '', $cfgSources = array())
 52     {
 53     // …
 57         if (is_array($name)) {
 58             ksort($name);
 59             $key = md5(serialize($name));
 60             $srcName = $name2;
 61             $srcConfig = $name;
 62         } else {
 63             $key = $name;
 64             $srcName = $name;
 65             if (empty($cfgSources[$name])) {
 66                 throw new Turba_Exception(sprintf(_("The address book \"%s\" does not exist."), $name));
 67             }
 68             $srcConfig = $cfgSources[$name];
 69         }

On line 57, the type of the $name parameter is checked. This parameter corresponds to the previously shown $source parameter. If it is an array, it is used directly as a configuration by assigning it to the $srcConfig variable. If it is a string, the global $cfgSources array is accessed with it and the corresponding configuration is fetched.

This behavior is interesting to an attacker because Horde expects a well-behaved user to send a string, which leads to a trusted configuration being used. However, no type check is in place that stops an attacker from sending an array as the parameter, thereby supplying an entirely attacker-controlled configuration.

Some lines of code later, the create() method dynamically instantiates a driver class using values from the attacker-controlled array:

turba/lib/Factory/Driver.php

$class = 'Turba_Driver_' . ucfirst(basename($srcConfig['type']));
// …
$driver = new $class($srcName, $srcConfig['params']);

With this level of control, an attacker can choose to instantiate an arbitrary address book driver and has full control over the parameters passed to it, such as the host, username, password, and file paths.
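
For illustration, a type check along the following lines would close this particular hole. The helper is hypothetical, not an official Horde patch (none was available at the time of writing):

<?php
// Hypothetical hardening, not an official Horde patch: only accept string
// identifiers that name an entry in the trusted $cfgSources array.
function resolveSourceConfig($name, array $cfgSources)
{
    if (!is_string($name) || !isset($cfgSources[$name])) {
        throw new InvalidArgumentException('Unknown address book source.');
    }
    return $cfgSources[$name];
}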


Instantiating a driver that enables an attacker to execute arbitrary code

The next step for an attacker would be to inject a driver configuration that enables them to execute arbitrary code on the Horde instance they are targeting.

We discovered that Horde supports connecting to an IMSP server, which uses a protocol that was drafted in 1995 but never finalized as it was superseded by the ACAP protocol. When connecting to this server, Horde fetches various entries. Some of these entries are interpreted as PHP serialized objects and are then unserialized. 

The following code excerpt from the _read() method of the IMSP driver class shows how the existence of a __members entry is checked. If it exists, it is deserialized:

turba/lib/Driver/Imsp.php

if (!empty($temp['__members'])) {
    $tmembers = @unserialize($temp['__members']);
}

Due to the presence of viable PHP Object Injection gadgets discovered by Steven Seeley, an attacker can force Horde to deserialize malicious objects that lead to arbitrary code execution.
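
As defense in depth, the deserialization itself could also be constrained. Since PHP 7, unserialize() accepts an allowed_classes option that prevents objects from being instantiated at all; a sketch (not Horde’s code):

<?php
// Sketch: forbid object instantiation during unserialization so gadget
// chains cannot be triggered; only scalars and arrays are reconstructed.
$tmembers = @unserialize($temp['__members'], array('allowed_classes' => false));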


Exploiting the vulnerability via CSRF

By default, Horde blocks any images in HTML emails that don’t have a data: URI. An attacker can bypass this restriction by using the HTML tags <picture> and <source>. A <picture> tag allows developers to specify multiple image sources that are loaded depending on the viewport dimensions of the user visiting the site. The following example bypasses the blocking of external images:

<picture>
  <source media="(min-width:100px)" srcset="../../?EXPLOIT">
  <img src="blocked.jpg" alt="Exploit image" style="width:auto;">
</picture>

Patch

At the time of writing, no official patch is available. As Horde seems to be no longer actively maintained, we recommend considering alternative webmail solutions.

Timeline

Date         Action
2022-02-02   We report the issue to the vendor and inform them about our 90-day disclosure policy.
2022-02-17   We ask for a status update.
2022-03-02   Horde releases a fix for a different issue we reported previously and acknowledges this report.
2022-05-03   We inform the vendor that the 90-day disclosure deadline has passed.


Summary

In this blog post, we described a vulnerability that allows an attacker to take over a Horde webmail instance simply by sending an email to a victim and having the victim read the email. 

The vulnerability occurs in PHP code, a language that typically uses dynamic typing. In this case, a security-sensitive branch was entered when a user-controlled variable was of type array. We highly discourage developers from making security decisions based on the type of a variable, as language-specific quirks are easy to miss.

Source :
https://blog.sonarsource.com/horde-webmail-rce-via-email/

Atlassian fixes Confluence zero-day widely exploited in attacks

Atlassian has released security updates to address a critical zero-day vulnerability in Confluence Server and Data Center actively exploited in the wild to backdoor Internet-exposed servers.

The zero-day (CVE-2022-26134) affects all supported versions of Confluence Server and Data Center and allows unauthenticated attackers to gain remote code execution on unpatched servers.

Since it was disclosed as an actively exploited bug, the Cybersecurity and Infrastructure Security Agency (CISA) has also added it to its ‘Known Exploited Vulnerabilities Catalog’, requiring federal agencies to block all internet traffic to Confluence servers on their networks.

The company has now released patches and advises all customers to upgrade their appliances to versions 7.4.17, 7.13.7, 7.14.3, 7.15.2, 7.16.4, 7.17.4, and 7.18.1, which contain a fix for this flaw.

“We strongly recommend upgrading to a fixed version of Confluence as there are several other security fixes included in the fixed versions of Confluence,” Atlassian said.

Admins who cannot immediately upgrade their Confluence installs can also use a temporary workaround to mitigate the CVE-2022-26134 security bug by updating some JAR files on their Confluence servers, following Atlassian’s detailed instructions.

Widely exploited in ongoing attacks

The security vulnerability was discovered by cybersecurity firm Volexity over the Memorial Day weekend during an incident response.

While analyzing the incident, Volexity discovered that the zero-day was used to install a BEHINDER JSP web shell allowing the threat actors to execute commands on the compromised server remotely.

They also deployed a China Chopper web shell and a simple file upload tool as backups to maintain access to the hacked server.

Volexity threat analysts added that they believe multiple threat actors from China are using CVE-2022-26134 exploits to hack into Internet-exposed and unpatched Confluence servers.

The company also released a list of IP addresses used in the attacks and some Yara rules to identify web shell activity on potentially breached Confluence servers.

“The targeted industries/verticals are quite widespread. This is a free-for-all where the exploitation seems coordinated,” Volexity President Steven Adair revealed today.

“It is clear that multiple threat groups and individual actors have the exploit and have been using it in different ways.

“Some are quite sloppy and others are a bit more stealth. Loading class files into memory and writing JSP shells are the most popular we have seen so far.”

A similar Atlassian Confluence remote code execution vulnerability was exploited in the wild in September 2021 to install cryptomining malware after a PoC exploit was publicly shared online.

Source :
https://www.bleepingcomputer.com/news/security/atlassian-fixes-confluence-zero-day-widely-exploited-in-attacks/

Novartis says no sensitive data was compromised in cyberattack

Pharmaceutical giant Novartis says no sensitive data was compromised in a recent cyberattack by the Industrial Spy data-extortion gang.

Industrial Spy is a hacking group that runs an extortion marketplace where they sell data stolen from compromised organizations.

Yesterday, the hacking group began selling data allegedly stolen from Novartis on their Tor extortion marketplace for $500,000 in bitcoins.

The threat actors claim that the data is related to RNA and DNA-based drug technology and tests from Novartis and was stolen “directly from the laboratory environment of the manufacturing plant.”

Novartis data sold on the Industrial Spy extortion marketplace (Source: BleepingComputer)

The data being sold consists of 7.7 MB of PDF files, which all have a timestamp of 2/25/2022 04:26, likely when the data was stolen.

As the amount of data for sale is minimal, it is not clear if this is all the threat actors stole or if they have further data to sell later.

BleepingComputer emailed Novartis to confirm the attack and theft of data and received the following statement.

“Novartis is aware of this matter. We have thoroughly investigated it and we can confirm that no sensitive data has been compromised. We take data privacy and security very seriously and have implemented industry standard measures in response to these kind of threats to ensure the safety of our data.” – Novartis.

Novartis declined to answer any further questions about the breach, when it occurred, and how the threat actors gained access to their data.

Industrial Spy is also known to use ransomware in attacks, but there is no evidence that devices were encrypted during the Novartis incident.

Source :
https://www.bleepingcomputer.com/news/security/novartis-says-no-sensitive-data-was-compromised-in-cyberattack/