Adobe Acrobat may block antivirus tools from monitoring PDF files

Security researchers found that Adobe Acrobat is trying to block security software from having visibility into the PDF files it opens, creating a security risk for users.

Adobe’s product checks whether components from 30 security products are loaded into its processes and likely blocks them, essentially denying them the ability to monitor for malicious activity.

Flagging incompatible AVs

For a security tool to work, it needs visibility into all processes on the system, which is achieved by injecting dynamic-link libraries (DLLs) into software products launching on the machine.

PDF files have been abused in the past to execute malware on the system. One method is to add a command in the ‘OpenAction’ section of the document to run PowerShell commands for malicious activity, explain the researchers at cybersecurity company Minerva Labs.

“Since March of 2022 we’ve seen a gradual uptick in Adobe Acrobat Reader processes attempting to query which security product DLLs are loaded into it by acquiring a handle of the DLL” – Minerva Labs

According to a report this week, the list has grown to include 30 DLLs from security products of various vendors. Among the more popular ones with consumers are Bitdefender, Avast, Trend Micro, Symantec, Malwarebytes, ESET, Kaspersky, F-Secure, Sophos, Emsisoft.

Querying the system is done with ‘libcef.dll’, a Chromium Embedded Framework (CEF) Dynamic Link Library used by a wide variety of programs.

While the Chromium DLL comes with a short list of components to be blacklisted because they cause conflicts, vendors using it can make modifications and add any DLL they want.

Chromium’s list of hardcoded DLLs (source: Minerva Labs)

The researchers explain that “libcef.dll is loaded by two Adobe processes: AcroCEF.exe and RdrCEF.exe”, so both processes check the system for components of the same security products.

Looking closer at what happens with the DLLs injected into Adobe processes, Minerva Labs found that Adobe checks if the bBlockDllInjection value under the registry key ‘SOFTWARE\Adobe\Adobe Acrobat\DC\DLLInjection\’ is set to 1. If so, it will prevent antivirus software’s DLLs from being injected into processes.

It is worth noting that the registry key’s value when Adobe Reader runs for the first time is ‘0’ and that it can be modified at any time.
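
For admins who want to see how their own installation is configured, the value can be read from PowerShell. This is a minimal sketch based on the registry path reported by Minerva Labs; whether the key sits under HKLM or HKCU may vary by installation, so both hives are treated as assumptions here:

    # Read the bBlockDllInjection value described by Minerva Labs.
    # Assumption: the key may live under HKLM or HKCU depending on the install.
    $paths = @(
        'HKLM:\SOFTWARE\Adobe\Adobe Acrobat\DC\DLLInjection',
        'HKCU:\SOFTWARE\Adobe\Adobe Acrobat\DC\DLLInjection'
    )

    foreach ($path in $paths) {
        if (Test-Path $path) {
            $value = (Get-ItemProperty -Path $path -Name 'bBlockDllInjection' -ErrorAction SilentlyContinue).bBlockDllInjection
            "{0} -> bBlockDllInjection = {1}" -f $path, $value
        }
    }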

“With the registry key name bBlockDllInjection, and looking at the cef documentation, we can assume that the blacklisted DLLs are designated to be unloaded” – Minerva Labs

According to Minerva Labs researcher Natalie Zargarov, the default value for the registry key is set to ‘1’ – indicating active blocking. This setting may depend on the operating system or the Adobe Acrobat version installed, as well as other variables on the system.

In a post on the Citrix forums on March 28, a user complaining about Sophos AV errors on a machine with an Adobe product installed said that Adobe “suggested to disable DLL-injection for Acrobat and Reader.”

Adobe responding to Citrix user experiencing errors on machine with Sophos AV

Working on the problem

Replying to BleepingComputer, Adobe confirmed that users have reported experiencing issues due to DLL components from some security products being incompatible with Adobe Acrobat’s usage of the CEF library.

“We are aware of reports that some DLLs from security tools are incompatible with Adobe Acrobat’s usage of CEF, a Chromium based engine with a restricted sandbox design, and may cause stability issues” – Adobe

The company added that it is currently working with these vendors to address the problem and “to ensure proper functionality with Acrobat’s CEF sandbox design going forward.”

Minerva Labs researchers argue that Adobe chose a solution that solves compatibility problems but introduces a real attack risk by preventing security software from protecting the system.

BleepingComputer has contacted Adobe with further questions to clarify the conditions under which the DLL blocking occurs and will update the article once we have the information.

Source :
https://www.bleepingcomputer.com/news/security/adobe-acrobat-may-block-antivirus-tools-from-monitoring-pdf-files/

7-zip now supports Windows ‘Mark-of-the-Web’ security feature

7-zip has finally added support for the long-requested ‘Mark-of-the-Web’ Windows security feature, providing better protection from malicious downloaded files.

When you download documents and executables from the web, Windows adds a special ‘Zone.Identifier’ alternate data stream to the file, called the Mark-of-the-Web (MoTW).

This identifier tells Windows and supported applications that the file was downloaded from another computer or the Internet and, therefore, could be a risk to open.

When you attempt to open a downloaded file, Windows will check if a MoTW exists and, if so, display additional warnings to the user, asking if they are sure they wish to run the file.
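
The same check can be done from PowerShell, which can read the alternate data stream directly. A minimal sketch, with the file path used purely as an example:

    # List alternate data streams on a downloaded file and show the MoTW contents.
    # The path below is a placeholder for any downloaded file.
    $file = "$env:USERPROFILE\Downloads\setup.exe"

    Get-Item -Path $file -Stream *                     # shows :$DATA plus Zone.Identifier if present
    Get-Content -Path $file -Stream Zone.Identifier    # typically contains [ZoneTransfer] and ZoneId=3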

Launching a downloaded executable containing a MoTW
Source: BleepingComputer

 Microsoft Office will also check for the Mark-of-the-Web, and if found, it will open documents in Protected View, with the file in read-only mode and macros disabled.

Word document opened in Protected View
Source: BleepingComputer

To check if a downloaded file has the Mark-of-the-Web, you can right-click on it in Windows Explorer and open its properties.

If the file contains a MoTW, you will see a message at the bottom stating, “This file came from another computer and might be blocked to help protect this computer.”

File property indicator for the Mark-of-the-Web
Source: BleepingComputer

If you trust the file and its source, you can put a check in the ‘Unblock‘ box and click on the ‘Apply‘ button, which will remove the MoTW from the file.

Furthermore, running the file for the first time and allowing it to open will also remove the MoTW, so warnings are not shown in the future.
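
The ‘Unblock’ checkbox also has a command-line equivalent in the built-in Unblock-File PowerShell cmdlet, which simply removes the Zone.Identifier stream. A short sketch; the paths are examples:

    # Remove the Mark-of-the-Web from a single trusted file.
    Unblock-File -Path "$env:USERPROFILE\Downloads\report.docx"

    # Or unblock everything in a folder you have already vetted.
    Get-ChildItem -Path "$env:USERPROFILE\Downloads\vetted" -Recurse -File | Unblock-File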

7-zip adds support for Mark-of-the-Web

7-zip is one of the most popular archiving programs in the world, but, until now, it lacked support for Mark-of-the-Web.

This meant that if you downloaded an archive from the Internet and extracted it with 7-zip, the Mark-of-the-Web would not propagate to the extracted files, and Windows would not treat the extracted files as risky.

For example, if you downloaded a ZIP file containing a Word document, the ZIP file would have a MoTW, but the extracted Word document would not. Therefore, Microsoft Office would not open the file in Protected View.

Over the years, numerous security researchers, developers, and engineers have requested that the 7-Zip developer, Igor Pavlov, add the security feature to his archiving utility.

Tweet by SwiftOnSecurity

Pavlov said he doesn’t like the feature as it adds extra overhead to the program.

“The overhead for that property (additional Zone Identifier stream for each file) is not good in some cases,” explained Pavlov in a 7-zip bug report.

However, this all changed last week after Pavlov added a new setting in 7-zip 22.00 that lets you propagate MoTW streams from downloaded archives to their extracted files.

To enable this setting, search for and open the ‘7-Zip File Manager,’ and when it opens, click on Tools and then Options. Under the 7-Zip tab, you will now see a new option titled ‘Propagate Zone.Id stream’ and the ability to set it to ‘No,’ ‘Yes,’ or ‘For Office files.’

Set this option to ‘Yes’ (or to ‘For Office files,’ which is less secure) and then press the OK button.

New Propagate Zone.Id stream in 7-Zip
Source: BleepingComputer

With this setting enabled, when you download an archive and extract its files, the Mark-of-the-Web will also propagate to the extracted files.

With this additional security, Windows will now prompt you as to whether you wish to run downloaded files and Microsoft Office will open documents in Protected View, offering increased security.

To take advantage of this new feature, you can download 7-zip 22.00 from 7-zip.org.

Source :
https://www.bleepingcomputer.com/news/microsoft/7-zip-now-supports-windows-mark-of-the-web-security-feature/

Critical PHP flaw exposes QNAP NAS devices to RCE attacks

QNAP has warned customers today that some of its Network Attached Storage (NAS) devices (with non-default configurations) are vulnerable to attacks that would exploit a three-year-old critical PHP vulnerability allowing remote code execution.

“A vulnerability has been reported to affect PHP versions 7.1.x below 7.1.33, 7.2.x below 7.2.24, and 7.3.x below 7.3.11. If exploited, the vulnerability allows attackers to gain remote code execution,” QNAP explained in a security advisory released today.

“To secure your device, we recommend regularly updating your system to the latest version to benefit from vulnerability fixes.”
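
For reference, the affected version ranges boil down to a simple comparison. The function below is a hypothetical helper, not a QNAP tool; it simply mirrors the ranges quoted in the advisory:

    # Hypothetical helper mirroring the affected ranges from QNAP's advisory
    # for CVE-2019-11043 (PHP-FPM remote code execution).
    function Test-VulnerablePhpVersion {
        param([string]$PhpVersion)

        $v = [version]$PhpVersion
        ($v -ge [version]'7.1.0' -and $v -lt [version]'7.1.33') -or
        ($v -ge [version]'7.2.0' -and $v -lt [version]'7.2.24') -or
        ($v -ge [version]'7.3.0' -and $v -lt [version]'7.3.11')
    }

    Test-VulnerablePhpVersion '7.3.10'   # True  - in the affected range
    Test-VulnerablePhpVersion '7.3.11'   # False - already fixed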

The Taiwanese hardware vendor has already patched the security flaw (CVE-2019-11043) for some operating system versions exposed to attacks (QTS 5.0.1.2034 build 20220515 or later and QuTS hero h5.0.0.2069 build 20220614 or later).

However, the bug affects a wide range of devices running:

  • QTS 5.0.x and later
  • QTS 4.5.x and later
  • QuTS hero h5.0.x and later
  • QuTS hero h4.5.x and later
  • QuTScloud c5.0.x and later

QNAP customers who want to update their NAS devices to the latest firmware automatically need to log on to QTS, QuTS hero, or QuTScloud as administrator and click the “Check for Update” button under Control Panel > System > Firmware Update.

You can also manually upgrade your device after downloading the update from the QNAP website under Support > Download Center.

QNAP devices targeted by ransomware

Today’s warning comes after the NAS maker warned its customers on Thursday to secure their devices against active attacks deploying DeadBolt ransomware payloads.

BleepingComputer also reported over the weekend that ech0raix ransomware has started targeting vulnerable QNAP NAS devices again, according to sample submissions on the ID Ransomware platform and multiple reports from users who had their systems encrypted.

Until QNAP issues more details on ongoing attacks, the infection vector used in these new DeadBolt and ech0raix campaigns remains unknown.

While QNAP is working on patching the CVE-2019-11043 PHP vulnerability in all vulnerable firmware versions, you should ensure that your device is not exposed to the Internet, which is an easy way to block incoming attacks.

As QNAP has advised in the past, users with Internet-exposed NAS devices should take the following measures to prevent remote access:

  • Disable the Port Forwarding function of the router: Go to the management interface of your router, check the Virtual Server, NAT, or Port Forwarding settings, and disable the port forwarding setting of the NAS management service port (ports 8080 and 443 by default).
  • Disable the UPnP function of the QNAP NAS: Go to myQNAPcloud on the QTS menu, click the “Auto Router Configuration,” and unselect “Enable UPnP Port forwarding.”

QNAP also provides detailed info on how to toggle off remote SSH and Telnet connections, change the system port number, change device passwords, and enable IP and account access protection to further secure your device.


Update June 22, 08:45 EDT: After this story was published, QNAP’s PSIRT team updated the original advisory and told BleepingComputer that devices with default configurations are not impacted by CVE-2019-11043.

Also, QNAP said that the Deadbolt ransomware attacks are targeting devices running older system software (released between 2017 and 2019).

For CVE-2019-11043, described in QSA-22-20, to affect our users, there are some prerequisites that need to be met, which are:

  1. nginx is running, and
  2. php-fpm is running.

As we do not have nginx in our software by default, QNAP NAS are not affected by this vulnerability in their default state. If nginx is installed by the user and running, then the update provided with QSA-22-20 should be applied as soon as possible to mitigate associated risks.

We are updating our security advisory QSA-22-20 to reflect the facts stated above. Again we would like to point out that most QNAP NAS users are not affected by this vulnerability since its prerequisites are not met. The risk only exists when there is user-installed nginx present in the system.

 We have also updated the story to reflect the new information provided by QNAP.

Source :
https://www.bleepingcomputer.com/news/security/critical-php-flaw-exposes-qnap-nas-devices-to-rce-attacks/

Microsoft reveals cause behind this week’s Microsoft 365 outage

Microsoft has revealed that this week’s Microsoft 365 worldwide outage was caused by an infrastructure power outage that led to traffic management servicing failovers in multiple regions.

Starting on Monday, June 20, at 11:00 PM UTC, customers began experiencing and reporting several issues while trying to access and use Microsoft 365 services.

According to Microsoft, problems encountered during the incident included delays and failures when accessing some Microsoft 365 services.

Customer reports also shared info on continuous re-login requests, emails not getting delivered after being stuck in queues, and the inability to access Exchange Online mailboxes despite trying all available connection methods.

The affected services included the Microsoft Teams communication platform, the Exchange Online hosted email platform, SharePoint Online, Universal Print, and the Graph API.

Microsoft’s response while investigating the root cause behind the outage also brought to light shortcomings in how the company shares new incident-related info with customers.

Even though Microsoft told customers they could find out more about this incident from the admin center under EX394347 and MO394389, user reports suggest that those incident tickets were not showing up, effectively keeping the customers in the dark.

16-hour-long incident caused by power failure

More than 16 hours after the first signs of the outage were detected, on Tuesday, June 21, at 3:27 PM UTC, Microsoft said in an update to the MO394389 service alert sent to customers that the root cause was an infrastructure power loss.

“An infrastructure power outage necessitated failing over Microsoft 365 traffic management servicing users primarily in Western Europe,” the company explained.

“This action failed to properly complete, leading to functional delays and access failures for several Microsoft 365 services.”

The outage was most severe for customers in Western Europe. Still, the impact extended to “a small percentage” of users throughout EMEA (Europe, the Middle East, and Africa), North America, and the Asia-Pacific regions.

Redmond also refuted reports that a separate outage affecting the company’s Outlook on the web service was linked to this incident.

“We’ve confirmed from our updated service monitoring that all services remain healthy following the targeted restarts,” Microsoft added.

“Additionally, we completed our investigation into the potential remaining impact to Outlook on the web and confirmed that this is a known issue which is unrelated to this event.”

On Tuesday, Cloudflare was also hit by a massive outage that affected over a dozen data centers and hundreds of major online platforms and services.

Cloudflare revealed that the incident was caused by a configuration error while implementing a change that would have otherwise increased its network’s resilience.

Source :
https://www.bleepingcomputer.com/news/microsoft/microsoft-reveals-cause-behind-this-week-s-microsoft-365-outage/

NSA shares tips on securing Windows devices with PowerShell

The National Security Agency (NSA) and cybersecurity partner agencies issued an advisory today recommending that system administrators use PowerShell to prevent and detect malicious activity on Windows machines.

PowerShell is frequently used in cyberattacks, leveraged mostly in the post-exploitation stage, but the security capabilities embedded in Microsoft’s automation and configuration tool can also help defenders in their forensics efforts, improve incident response, and automate repetitive tasks.

The NSA and cyber security centres in the U.S. (CISA), New Zealand (NZ NCSC), and the U.K. (NCSC-UK) have created a set of recommendations for using PowerShell to mitigate cyber threats instead of removing or disabling it, which would lower defensive capabilities.

“Blocking PowerShell hinders defensive capabilities that current versions of PowerShell can provide, and prevents components of the Windows operating system from running properly. Recent versions of PowerShell with improved capabilities and options can assist defenders in countering abuse of PowerShell”

Lower risk for abuse

Reducing the risk of threat actors abusing PowerShell requires leveraging capabilities in the framework such as PowerShell remoting, which does not expose plain-text credentials when executing commands remotely on Windows hosts.

Administrators should be aware that enabling this feature on private networks automatically adds a new rule in Windows Firewall that permits all connections.

Customizing Windows Firewall to allow connections only from trusted endpoints and networks helps reduce an attacker’s chance for successful lateral movement.
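
As a hedged illustration, the built-in NetSecurity cmdlets can narrow the scope of the remoting firewall rules; the rule display group and subnet below are example values to adapt to your environment:

    # Limit inbound Windows Remote Management (WinRM) traffic to a trusted
    # management subnet instead of any address. The subnet is an example.
    Set-NetFirewallRule -DisplayGroup 'Windows Remote Management' -RemoteAddress '10.0.10.0/24'

    # Verify the resulting address scope.
    Get-NetFirewallRule -DisplayGroup 'Windows Remote Management' | Get-NetFirewallAddressFilter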

For remote connections, the agencies advise using the Secure Shell protocol (SSH), supported in PowerShell 7, to add the convenience and security of public-key authentication (see the sketch after this list):

  • remote connections don’t need HTTPS with SSL certificates
  • no need for Trusted Hosts, as required when remoting over WinRM outside a domain
  • secure remote management over SSH without a password for all commands and connections
  • PowerShell remoting between Windows and Linux hosts
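
A minimal connection sketch with PowerShell 7 SSH remoting, assuming OpenSSH is already configured on the target; the hostname, user name and key path are placeholders:

    # PowerShell 7: remote over SSH instead of WinRM (host, user and key are examples).
    $session = New-PSSession -HostName 'server01.example.com' -UserName 'admin' -KeyFilePath '~/.ssh/id_ed25519'

    # Run a command remotely, then drop into an interactive prompt if needed.
    Invoke-Command -Session $session -ScriptBlock { Get-Process | Select-Object -First 5 }
    Enter-PSSession -Session $session

    Remove-PSSession $session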

Another recommendation is to restrict PowerShell operations with the help of AppLocker or Windows Defender Application Control (WDAC), setting the tool to function in Constrained Language Mode (CLM) and thus denying operations outside the policies defined by the administrator.

“Proper configuration of WDAC or AppLocker on Windows 10+ helps to prevent a malicious actor from gaining full control over a PowerShell session and the host”
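
A quick way to confirm that a session actually landed in Constrained Language Mode is to query the session's language mode, as in this short sketch:

    # Returns FullLanguage on an unrestricted host and ConstrainedLanguage when a
    # WDAC/AppLocker policy is enforcing CLM for this session.
    $ExecutionContext.SessionState.LanguageMode

    if ($ExecutionContext.SessionState.LanguageMode -ne 'ConstrainedLanguage') {
        Write-Warning 'This session is not running in Constrained Language Mode.'
    }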

Detecting malicious PowerShell use

Recording PowerShell activity and monitoring the logs are two recommendations that could help administrators find signs of potential abuse.

The NSA and its partners propose turning on features like Deep Script Block Logging (DSBL), Module Logging, and Over-the-Shoulder transcription (OTS).

The first two enable building a comprehensive database of logs that can be used to look for suspicious or malicious PowerShell activity, including hidden actions and the commands and scripts used in the process.

With OTS, administrators get records of every PowerShell input or output, which could help determine an attacker’s intentions in the environment.
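
These features are normally rolled out through Group Policy; the registry-based sketch below touches the same policy keys and is meant only as an illustration for a single test machine (the transcript path is an example):

    # Enable Deep Script Block Logging, Module Logging and transcription via the
    # PowerShell policy keys (normally configured through Group Policy).
    $base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell'

    New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
    Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name EnableScriptBlockLogging -Value 1

    New-Item -Path "$base\ModuleLogging" -Force | Out-Null
    New-Item -Path "$base\ModuleLogging\ModuleNames" -Force | Out-Null
    Set-ItemProperty -Path "$base\ModuleLogging" -Name EnableModuleLogging -Value 1
    New-ItemProperty -Path "$base\ModuleLogging\ModuleNames" -Name '*' -Value '*' -PropertyType String -Force | Out-Null   # log all modules

    New-Item -Path "$base\Transcription" -Force | Out-Null
    Set-ItemProperty -Path "$base\Transcription" -Name EnableTranscripting -Value 1
    Set-ItemProperty -Path "$base\Transcription" -Name OutputDirectory -Value 'C:\PSTranscripts'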

Administrators can use the table below to check which features various PowerShell versions provide to help enable better defenses in their environment:

Security features present in PowerShell versions

The document the NSA released today states that “PowerShell is essential to secure the Windows operating system,” particularly the newer versions that did away with previous limitations.

When properly configured and managed, PowerShell can be a reliable tool for system maintenance, forensics, automation, and security.

The full document, titled “Keeping PowerShell: Security Measures to Use and Embrace,” is available here [PDF].

Source :
https://www.bleepingcomputer.com/news/security/nsa-shares-tips-on-securing-windows-devices-with-powershell/

Real IT Pros Reveal Their Homelab Secrets

For many years, a home IT lab was a “requirement” for any budding IT Pro – you needed a place to test out new software and learn. In some ways, this requirement has lessened with the rise of cloud computing, but many of our great DOJO contributors continue to use a home lab setup. In this article, we’ll hear from them: what their setup is, why they built it, the choices they made along the way, and what they plan for the future.

Andy Syrewicze

Altaro/Hornetsecurity Technical Evangelist – Microsoft MVP

Why do you have a lab?

The main reason I’ve always maintained a lab is to keep my skills current. Not only does my lab allow me to fill knowledge gaps in existing technologies I work with, but it allows me to test new features, or work with other technologies I’ve never worked with before. In doing this I can make sure I’m effective with and knowledgeable about current and emerging technologies. Plus… it’s just fun as well =)

How did I source my home lab?

I researched other commonly used home lab equipment on the web, paired that with my working knowledge of the hardware industry, and settled on commodity SuperMicro gear that was cost-effective yet had some of the features I was looking for. Other bits and pieces I picked up over the years as needed. For example, I’ve recently been doing some work with Azure site-to-site VPNs and as such purchased a Ubiquiti firewall capable of pairing with an Azure VPN gateway.

What’s your setup?

I have a 2-node hyper-converged cluster that is running either Storage Spaces Direct, Azure Stack HCI, or VMware vSAN at any given time.

Currently, each node has:

  • 1 x 6-core Intel Xeon CPU
  • 32GB of Memory (Soon to be upgraded to 64GB)
  • 4 x 1TB HDDs for Capacity Storage
  • 2 x 500GB NVMEs for Cache
  • 1 x 250GB SSD for the host Operating System disk
  • 1 x Intel i350 1Gbps Quad Port Ethernet Adapter for management and compute traffic
  • 1 x Dual port 10Gbps Mellanox Connect-X 3 for east/west storage traffic

Additionally, my physical lab has:

  • 1 Cyberpower UPS with about 1-hour runtime in case of power outages
  • 1 ReadyNAS 316 for backup storage with 4 x 1TB HDDs
  • 1 Ubiquiti UDM Pro for firewalling and layer-3 routing
  • 2 Ubiquiti WAPs for Wireless access in the house
  • 2 NetGear ProSAFE switches wired in a redundant capacity

On top of that, I do pair some Azure cloud resources with my lab and send private traffic over my site-to-site VPN between my UDM-Pro and my Azure vNet. Services running in the cloud include:

  • 1 x IaaS VM with AD Domain Services running on it
  • 1 x storage account for Azure Files storage
  • 1 x storage account for blob offsite backup storage
  • 1 x container in Azure container instance running a Minecraft Server for my son and his friends (HIGHLY critical workload I know…)
  • Some basic Azure ARC services (Been slowly working on this over the last few months)

What services do you run and how do they interact with each other?

I mostly run virtualized workloads on the on-prem cluster. This is typically VMs, but I’ve started tinkering a bit with containers and Azure Kubernetes Service. The cluster also runs VMs for AD/DNS, DHCP, Backup/DR, File-Service and a few other critical gaming workloads for the end-users in the house! The cloud resources also have backup AD/DNS components, file storage, and offsite storage for the on-prem backups. I also use Azure for the occasional large VM that I don’t have the resources on-prem to run.

What do you like and dislike about your setup?

I’ll start with the positive. I really like that my lab is hyper-converged as well as hybrid-cloud, in that it uses resources in Azure accessed via VPN.

There are two things I’d like to change about my setup:

  • More memory for the compute nodes. When running VMware VSAN, VSAN itself and vCenter (required for VSAN) consume about 48GB of memory. This doesn’t leave much memory left over for VMs. Thankfully S2D and Azure Stack HCI don’t have this issue. Either way, memory is my next upgrade, coming soon.
  • Upgraded Mellanox Cards. Don’t get me wrong, the Connect-X 3s were amazing for their time, but they are starting to get quite outdated. More recent Connect-X cards would be preferred and better supported, but there certainly is a cost associated with them.

What does your roadmap look like?

As mentioned above I’m likely to add more memory soon, and potentially upgrade my storage NICs. Additionally, I’d like to add a 3rd node at some point but that is quite a ways down the line.

Any horror stories to share?

Not really. I had one situation where I was away from the house on a work trip and the cluster rebooted due to an extended power outage. The OpenSM service, which manages the subnet for the storage network between the direct-connected Mellanox cards, didn’t start, so the storage network never came online. This meant that the core services never came online for the house. Thankfully, the VPN to Azure remained online and things in the house were able to use my Azure IaaS-hosted Domain Controller for DNS resolution until I got home.

Eric Siron

Senior System Administrator – Microsoft MVP

You may know Eric as a long-time DOJO contributor whose first articles for this site were written on stone tablets. He knows more about the inner workings of Hyper-V than anyone else I know.

All the technical articles that I write depend on first-hand tests and screenshots. My home lab provides the platform that I need while risking no production systems or proprietary data. Like the small business audience that I target, I have a small budget and long refresh cycles. It contained no cutting-edge technology when I originally obtained it, and it has fallen further behind in its four years of use. However, it still serves its purpose admirably.

Component Selection Decisions

Tight budgets lead to hard choices. Besides the cost constraint, I had to consider that my design needed to serve as a reproducible model. That ruled out perfectly viable savings approaches such as secondhand, refurbished, or clearance equipment. So, I used only new, commonly available, and inexpensive parts.

Architectural Design Decisions

Even on a budget, I believe that organizations need a strong computing infrastructure. To meet that goal, I designed a failover cluster with shared storage. As most of the items that I used now have superior alternatives at a similar or lower price, I will list only generic descriptions:

  • 2x entry-level tower server-class computers with out-of-band module
    • 16 GB RAM
    • 2x small internal drives
    • 2x 2-port gigabit adapters
    • 1 port on each adapter for virtual networks
    • 1 port on each adapter for SMB and iSCSI
  • 1x entry-level tower server-class computer (as shared storage)
    • 8 GB RAM
    • 4x large internal drives
    • 2 additional gigabit adapters for SMB and iSCSI
  • 24-port switch
  • Battery backup

All the technical articles that I have written in the last few years involved this lab build in some fashion.

Lab Configuration and Usage

Since the first day, I have used essentially the same configuration.

The two towers with an out-of-band module run Windows Server with Hyper-V and belong to a cluster. Each one hosts one of the lab’s domain controllers on mirrored internal storage.

The single tower with the large drive set acts as shared storage for the cluster. The drives are configured in a RAID-5. Also, because this is a lab, it contains virtual machine backups.

I generally do not integrate cloud services with my lab, primarily because a lot of small businesses do not yet have a purpose for integration between on-premises servers and public clouds. I do use basic services that enhance the administrative quality of life without straining the budget, such as Azure Active Directory.

Lab Maintenance, Management, and Monitoring

Whenever possible and practical, I use PowerShell to manage my lab. When graphical tools provide better solutions, I use a mix of Windows Admin Center and the traditional MMC tools (Hyper-V Manager, Failover Cluster Manager, Active Directory Users and Computers, etc.). For monitoring, I use Nagios with alerts through a personal e-mail account. I back up my virtual machines with Altaro VM Backup.

Aside from Altaro, none of the tools that I use in the lab requires additional license purchases. For instance, I do not use any System Center products. I believe that this practice best matches my audience’s uses and constraints. Most paid tools are too expensive, too complex, too resource-hungry, and require too much maintenance of their own to justify use in small businesses.

I only reformat drives for operating system upgrades. The in-place upgrade has become more viable through the years, but I still see no reward for the risk. On general principle, I do not reload operating systems as a fix for anything less than drive failures or ransomware. Once I feel that Windows Server 2022 has had enough testing by others, these hosts will undergo their third ever reformat.

Pros and Cons of this Lab

Overall, this lab satisfies me. A few of the reasons that I like it:

  • Low cost
  • Stability
  • Acceptable performance for typical small business daily functions
  • Good balance of performance and capacity
  • Ability to test the most common operations for a Microsoft-centric shop

Things that I would improve:

  • The storage performs well enough for a regular small business, but I’m an impatient administrator
  • Memory
  • Network adapter capabilities

Theresa Miller

Principal Technologist at Cohesity and Microsoft MVP

Why do you have a lab?

I have had various forms of home labs over the years for varying reasons. In fact, when I built my home, I made sure my house was hard-wired for internet, which shows how long I have been in the technology industry. At the time, hard wiring was the only way to distribute the internet to all the rooms in your home; unlike today, where we have wireless and Wi-Fi extender options to help with network stability, Wi-Fi extending to places like the outdoors, and additional security features. Back to the question at hand: what do I use it for? My home lab is what enables me to put forth the IT community work that I have done. This includes having the tech to create training courses, blogging, speaking at events and more. So, when and why did I decide to get a home lab? I decided to get one over 8 years ago and continue to use every evolution of my home lab for this function: educating myself and others.

How did I source my home lab?

Initially, my home lab was sourced from end-of-life equipment that my employer allowed employees to wipe the storage on, but eventually I transitioned to sourcing my hardware through a side business I have had for over 8 years. Purchasing a single Dell PowerEdge server, I was able to virtualize all of the servers I needed to run Active Directory and any other Windows servers required at the time. Beyond that, my IT community involvement has allowed me to enjoy the appropriate software licensing needed to support such an environment.

Over time my home lab has changed; my hardware became end-of-life and what was once set up in my basement lab is now hosted in the Azure Cloud. Yep, I decommissioned my hardware and switched to the cloud.

What were your considerations and decision points for what you decided to purchase?

The transition to the cloud came from the fact that it had become a challenge to deal with end-of-life hardware and ever-evolving hardware requirements that left older gear unable to run the latest software. Not only did it become time-consuming to manage, but it also became too costly.

What’s your setup?

My setup today is in the Azure cloud, so the only hardware I have in my home is my internet router and the Eero Wi-Fi extenders needed to ensure network reliability. I find that running everything in the cloud keeps my backend infrastructure up to date. For storage, I leverage Azure managed disks (block-level storage volumes managed by Azure) on the servers that need them, keeping resource consumption low.

What services do you run and how do they interact with each other?

My minimal environment consists of a Windows VM with Active Directory deployed, the Azure DNS service, and one additional basic VM that changes depending on the technology I am testing. The basic VM can sometimes grow to multiple VMs if the project software being deployed requires it. In that scenario, I may also have SQL Server deployed if that’s required. I try to keep the deployment simple but keep the core foundational elements in place as needed, and wipe systems as needed. How do I manage all of this? I leverage cost management services that notify me if I hit the threshold that I am willing to pay. At that point I may need to make some decisions around which systems must stay online and what I can shut down, or whether I want to pay more that month.

What do you like and dislike about your setup?

I am really happy with my setup since I moved to a cloud model, because maintaining the hardware, including the cost of electricity, became time-consuming. While the costs of the cloud virtual machines keep me from having a large-scale deployment, I am OK with that. It’s fun to tear down and bring online what I need when I am looking to try something new with technology.

What does your roadmap look like?

My roadmap is strictly focused on what technology to try out next, and I find that I make these decisions based on technology that I cross paths with that is interesting in that moment. It could be something new, or something that has been around for some time that I may need to dive deeper into for a project or just for new learning and sharing.

Any horror stories to share?

I don’t have any horror stories to share when it comes to my home lab. I have adapted as needed from on-premises hardware in my home to a cloud model that has allowed me to be agile and keep my learning and technology sharing ongoing.

Paul Schnackenburg

Finally, here are some words from me, IT Consultant & DOJO editor.

If you’re starting out in IT today, you probably don’t realize the importance of having a home IT lab setup. But when the cloud was just a faint promise, if you wanted to practice on your own, further your skills or try something out, you had to have your own hardware to do it on. Early on I used VMware Workstation to spin up VMs, but there are limitations on what you can fit, especially when you need multiple VMs running simultaneously, and 15 years ago RAM was a lot more expensive (and came with a lot less GB) than it is today.

After some years I realized that I needed separate machines to practice setting up Hyper-V clusters, Live Migration etc., so I bought the first parts of my set-up back in 2012, starting with three “servers”. I couldn’t justify the cost of real servers, so I got desktop-class motherboards, Intel i5 CPUs and 32 GB of RAM for three servers. One became a storage server, running Windows Server 2012 as an iSCSI target (again, I didn’t have the budget for a real iSCSI SAN), and the other two became VM hosting nodes in the cluster. Connectivity came from Intel 4-port 1 Gb/s NICs, offering decent bandwidth between nodes. A few years later I added two more nodes and a separate domain controller PC. The backend storage for Hyper-V VM disks was changed over to an SMB 3 file server as Hyper-V now supported this. All throughout this time, I was writing articles on Hyper-V and System Center for various outlets and this setup served as my test bed for several different applications and systems. From an “investment” point of view, it made perfect sense to have these systems in place.

I also worked as a part-time teacher and, because we were only given “hand me down” hardware for the first few years of Hyper-V and VMware becoming mainstream and part of the curriculum, I opted to house the servers on a desk in our hardware lab. That way my students could experiment with Live Migration etc., and through my own VPN connection to the boxes I could access the cluster after hours to test new software apps and write articles.

In early 2016 this cluster was three nodes and one storage server, but two things happened – Windows Server 2016 offered a new option – Storage Spaces Direct (S2D) and I outfitted all four servers with two 1 TB HDDs and two 120 GB SSDs (small by today’s standard, but this is now eight years ago). These were all consumer grade (again – budget) and wouldn’t have been supported for production, especially not connected to desktop-class hardware but they did allow me (and my students) to explore S2D and VM High Availability.

The other thing that happened was that Chelsio – makers of high-end Remote Direct Memory Access (RDMA) / iWarp 10/25/40 Gb/s Ethernet hardware offered me some NICs in exchange for writing a few reviews. So, two nodes in the cluster were outfitted with a two-port 40 Gb/s card, and the other two with a two-port 10 Gb/s card. Initially, I did testing with the cabling running directly between two nodes, but this didn’t allow for a full, four-node cluster so I purchased a Dell X4012, 12 port 10 Gb/s switch. The two 10 Gb/s NICs used two cables each for a total bandwidth of 20 Gb/s, while the 40 Gb/s NICs came with “spider” cables with a 40 Gb/s interface in the server end, and four 10 Gb/s cables connected to the switches for a total bandwidth of 40 Gb/s. This was ample for the S2D configuration and gave blazing-fast Live Migrations, storage traffic and other East-West flows.

Dell X4012 10Gb/s switch

In late 2020 I left the teaching job so the whole cluster was mothballed in my home office for 1 ½ years and over the last month I’ve been resurrecting it (after purchasing an Ikea bookshelf to hold it all). Currently, it’s running Windows Server 2022 Datacenter. Each upgrade has been a complete wipe and reinstall of Windows Server (desktop experience, Server Core is just too hard to troubleshoot).

Trying to revive this old hardware has taught me two things – first, the “fun” of misbehaving (or just plain old) hardware to wrestle with was a lot more attractive when I was younger, and the cloud is SO much better for this stuff. Hence my home lab was mothballed for so long and I didn’t really miss it.

I use Windows Admin Center to manage it all, and I’ll also use various Azure cloud services for backup etc. to test them out.

My only “horror story” (apart from all the silly, day-to-day mistakes we all make) is during the wipe and reinstall to Windows Server 2019, using the wrong product key and ending up with four Windows Server Standard nodes – which don’t support Storage Spaces Direct.

What’s your Homelab Setup (and do you even need one)?

As you can see, home labs come in many shapes and sizes. If you’re a budding IT Pro today and you’re wondering if a home lab is right for you, consider the use cases it would fulfil for you very carefully. I see some trainers and IT Pros opting for laptops with large amounts of storage and memory and virtualizing everything on a single PC – certainly that covers many use cases. But if your employers are still mostly on-premises and supporting server clusters is still part of your daily life, nothing beats having two or three physical cluster nodes to test and troubleshoot. Expect to pay a few thousand US dollars (or the equivalent in your currency) and balance the extra cost of “real” servers against the cost savings, but added time investment, of building your own PCs.

If you’re considering setting up a machine or two for your home lab, I have the following recommendations: select cases that allow for upgrades and changes in the future, as you never know what you’ll need to install and test. Don’t spend money on expensive, server-grade hardware unless you have to – your home lab is unlikely to be mission-critical. Go for fewer nodes; it’s easy to fit a cost-effective machine today with 64, 128 or even more GB of RAM, giving you plenty of space for running VMs. And use SSDs (or NVMe) for all storage if you can afford it; HDDs are just too slow.

And don’t forget the power of hosting your lab in the cloud, making it easy to rebuild and scale up and down, with a lower initial cost but a monthly subscription cost instead to keep an eye on.

Source :
https://www.altaro.com/hyper-v/it-pros-homelab-secrets/

Creating the Perfect Homelab for VMware Admins

Working in infrastructure has been a blast since I went down that route many years ago. One of the most enjoyable things in this line of work is learning about cool tech and playing around with it in a VMware homelab project for instance. Running a homelab involves sacrificing some of your free time and dedicating it to learning and experimenting.

Now, it is obvious that learning without a purpose is a tricky business as motivation tends to fade quite quickly. For that reason, it is best to work towards a goal and use your own hardware to conduct a VMware homelab project that will get you a certification, material to write interesting blogs, automate things in your home or follow a learning path to aim for a specific job or a different career track. When interviewing for engineering roles, companies are receptive to candidates that push the envelope to sharpen their skills and don’t fear investing time and money to get better.

This article is a bit different than usual as we, at Altaro, decided to have a bit of fun! We asked our section editors, authors, as well as third-party authors to talk about their homelabs. We set a rough structure regarding headlines to keep things consistent but we also wanted to leave freedom to the authors as VMware homelab projects are all different and serve a range of specific purposes.

Brandon Lee

In my honest opinion, it is one of the best investments in the learning and career goals I have made – a home lab. However, as the investment isn’t insignificant, why would I recommend owning and running a home lab environment? What do you use it for? What considerations should you make when purchasing equipment and servers?

Around ten years ago, I decided that having my own personal learning environment and sandbox would benefit all the projects and learning goals I had in mind. So, the home lab was born! Like many IT admins out there, my hobby and my full-time job are geeking out on technology. So, I wanted to have access at home to the same technologies, applications, and server software I use in my day job.

Why do you have a lab?

Like many, I started with a “part-time” VMware homelab project running inside VMware Workstation. So, the first hardware I purchased was a Dell Precision workstation with 32 gigs of memory. Instead of running vSphere on top of the hardware, I ran VMware Workstation. I believe this may have been before the VMUG Advantage subscription was available, or at least before I knew about it.

I would advise anyone thinking of owning and operating a home lab to start small. Running a lab environment inside VMware Workstation, Hyper-V, Virtualbox, or another solution is a great way to get a feel for the benefits of using a home lab environment. It may also be that a few VMs running inside VMware Workstation or another workstation-class hypervisor is all you need.

For my purposes, the number of workloads and technologies I wanted to play around with outgrew what I was able to do inside VMware Workstation. So, after a few years of running VMware Workstation on several other workstation-class machines, I decided to invest in actual servers. The great thing about a home lab is you are only constrained in its design by your imagination (and perhaps funds). Furthermore, unlike production infrastructure, you can redesign and repurpose along the way as you see fit. As a result, the home lab can be very fluid for your needs.

What’s your setup?

I have written quite a bit about my home lab environment, detailing hardware and software. However, I am a fan of Supermicro servers for the hardware side of things. I have found the Supermicro kits to be very stable, affordable, and many are supported on VMware’s HCL for installing vSphere, etc.

Enclosure

  • Sysracks 27U server enclosure

Servers

I have the following models of Supermicro servers:

  • (4) Supermicro SYS-5028D-TN4T
    • Mini tower form factor
    • (3) are in a vSAN cluster
    • (1) is used as a standalone host in other testing
  • (1) SYS-E301-9D-8CN8TP
    • Mini 1-U (actually 1.5 U) form factor
    • This host is used as another standalone host for various testing and nested labs

Networking

  • Cisco SG350-28 – Top of rack switch for 1 gig connectivity with (4) 10 gig SFP ports
  • Ubiquiti – Edgeswitch 10 Gig, TOR for Supermicro servers
  • Cisco SG300-20 – Top of rack IDF

Storage

  • VMFS datastores running on consumer-grade NVMe drives
  • vSAN datastore running on consumer-grade NVMe drives, (1) disk group per server
  • Synology Diskstation 1621xs+ – 30 TB of useable space

In terms of license requirements, I cannot stress enough how incredible the VMUG Advantage subscription is for obtaining real software licensing to run VMware solutions. It is arguably the most “bang for your buck” in terms of software you will purchase in your VMware homelab project. For around $200 (you can find coupons most of the year), you can access the full suite of VMware solutions, including vSphere, NSX-T, VMware Horizon, vRealize Automation, vRealize Operations, etc.

The VMUG Advantage subscription is how I started with legitimate licensing in the VMware home lab environment and have maintained a VMUG Advantage subscription ever since. You can learn more about the VMUG advantage subscription here: » VMUG Advantage Membership.

I used Microsoft Evaluation center licensing for Windows, suitable for 180 days, generally long enough for most of my lab scenarios.

What software am I running?

The below list is only an excerpt, as there are too many items, applications, and solutions to list. As I mentioned, my lab is built on top of VMware solutions. In it, I have the following running currently:

  • vSphere 7.0 Update 3d with latest updates
  • vCenter Server 7.0 U3d with the latest updates
  • vSAN 7.0 Update 3
  • vRealize Operations Manager
  • vRealize Automation
  • vRealize Network Insight
  • VMware NSX-T
  • Currently using Windows Server 2022 templates
  • Linux templates are Ubuntu Server 21.10 and 20.04

Nested vSphere labs:

  • Running vSAN nested labs with various configurations
    • Running vSphere with Tanzu with various containers on top of Tanzu
    • Running Rancher Kubernetes clusters

Do I leverage the cloud?

Even though I have a VMware homelab project, I do leverage the cloud. For example, I have access to AWS and Azure and often use these to build out PoC environments and services between my home lab and the cloud to test real-world scenarios for hybrid cloud connectivity for clients and learning purposes.

What does your roadmap look like?

I am constantly looking at new hardware and better equipment across the board on the hardware roadmap. It would be nice to get 25 gig networking in the lab environment at some point in the future. Also, I am looking at new Supermicro models with the refreshed Ice Lake Xeon-D processors.

On the software/solutions side, I am on a continuous path to learning new coding and DevOps skills, including new Infrastructure-as-Code solutions. Also, Kubernetes is always on my radar, and I continue to use the home lab to learn new Kubernetes skills. I want to continue building new Kubernetes solutions with containerized workloads in the home lab environment, which is on the agenda this year in the lab environment.

Any horror stories to share?

One of the more memorable homelab escapades involved accidentally wiping out an entire vSAN datastore as I had mislabeled two of my Supermicro servers. So, when I reloaded two of the servers, I realized I had rebuilt the wrong servers. Thankfully, I am the CEO, CIO, and IT Manager of the home lab environment, and I had backups of my VMs 😊.

I like to light up my home lab server rack

One of the recent additions to the VMware homelab project this year has been the addition of LED lights. I ran LED light strips along the outer edge of my server rack and can change the color via remote or have the lights cycle through different colors on a timer. You can check out a walkthrough of my home lab environment (2022 edition with lights) here: VMware Home Lab Tour 2022 Edition Server Room with LED lights at night! A geek’s delight! – YouTube

Rack servers for my VMware homelab project

Xavier Avrillier

VMware | DOJO Author & Section Editor

http://vxav.fr

Why do you have a lab?

When I started my career in IT, I didn’t have any sort of lab and relied exclusively on the environment I had at work to learn new things and play around with tech. This got me started with running virtual machines in VMware Workstation at home, but computers back then (10 years ago) didn’t come with 16GB of RAM as standard, so I had to get crafty with resources.

When studying to take the VCP exam, things started to get a bit frustrating, as running a vCenter with just 2 vSphere nodes on 16 GB of RAM is cumbersome (and slow). At this point, I got lucky enough that I could use a fairly good test environment at work to delay the inevitable and managed to get the certification without investing a penny in hardware or licenses.

I then changed employers and started technical writing, so I needed capacity to play around with, and resources pile up fast when you add vSAN, NSX, SRM and other VMware products into the mix. For that reason, I decided to get myself a homelab that would be dedicated to messing around. I started with Intel NUC mini-PCs like many of us and then moved to a more solid Dell rack server that I am currently running.

I decided to go the second-hand route as it was so much cheaper, and I don’t really care about official support; newer software usually works unless it’s on dinosaur hardware. I got a great deal on a Dell R430. My requirements were pretty easy, as I basically needed lots of cores, memory, a fair amount of storage and an out-of-band card for when I’m not at home and need to perform power actions on it.

What’s your setup?

I am currently running my cluster labs nested on the R430 and run workloads natively in VMs when possible. For instance, I have the DC, NSX Manager, VCD, and vCenter running in VMs on the physical host, but I have a nested vSAN cluster with NSX-T networking managed by this same vCenter server. This is the most consolidated way I could think of while still offering flexibility.

  • Dell R430
  • VMware vSphere ESXi 7 Update 3
  • 2 x Intel Xeon E5-2630 v3 (2 x 8 pCores @2.40GHz)
  • 128GB of RAM
  • 6 x 300GB 15K rpm in RAID 5 (1.5TB raw)
  • PERC H730 mini
  • Dual 550W power supply (only one connected)
  • iDRAC 8 enterprise license
  • I keep the firmware up to date with Dell OME running in a VM in a workstation on my laptop that I fire up every now and again (when I have nothing better to do).

On the side, I also have a Gigabyte mini-pc running. That one is installed with an Ubuntu server with K3s running on it (Kubernetes). I use it to run a bunch of home automation stuff that are managed by ArgoCD in a private Github repository (GitOps), that way I can track my change through commits and pull requests. I also use it for CAPV to quickly provision Kubernetes (and Tanzu TCE) clusters in my lab.

  • Gigabyte BSi3-6100
  • Ubuntu 20.04 LTS
  • Core i3 6th gen
  • 8GB of ram

I also have an old Synology DS115j NAS (Network Attached Storage) that participates in the home automation stuff. It is also a target for vCenter backups and, via Altaro VM Backup, for a few VMs I don’t want to have to rebuild. It’s only 1TB, but I am currently considering my options to replace it with a more powerful model with more storage.

Network-wise, all the custom stuff happens nested with OpnSense and NSX-T; I try to keep my home network as simple as possible if I don’t need to complicate it any further.

I currently don’t leverage any cloud services on a daily basis but I spin up the odd instance or cloud service now and again to check out new features or learn about new tech in general.

I try to keep my software and firmware as up-to-date as possible. However, it tends to depend on what I’m currently working on or interested in. I haven’t touched my Horizon install in a while but I am currently working with my NSX-T + ALB + VCD + vSAN setup to deploy a Kubernetes cluster with Cluster API.

VMware homelab project architecture

What do you like and dislike about your setup?

I like that I have a great deal of flexibility by having a pool of resources that I can consume with nested installs or native VMs. I can scratch projects and start over easily.

However, I slightly underestimated storage requirements and 1.5TB is proving a bit tricky as I have to really keep an eye on it to avoid filling it up. My provisioning ratio is currently around 350% so I don’t want to hit the 100% used space mark. And finding spare 15K SAS disks isn’t as easy as I’d hope.

What does your roadmap look like?

As mentioned, I’m reaching a point where storage can become a bottleneck as interoperable VMware products require more and more resources (NSX-T + ALB + Tanzu + VCD …). I could add a couple of disks but that would only add 600GB of storage and I’ll have to find 15K rpm 300GB disks with caddies so not an easy find. For that reason, I’m considering getting a NAS that I can then use as NFS or iSCSI storage backend with SSDs.

Things I am currently checking out include VMware Cloud Director with NSX-T and ALB integration and Kubernetes on top of all that. I’d also like to get in touch with CI/CD pipelines and other cloud-native stuff.

Any horror stories to share?

The latest to date was my physical ESXi host running on a consumer-grade USB key plugged in the internal USB port, which got fried (the USB key) after a few months of usage. My whole environment was running on this host and I had no backup then. But luckily, I was able to reinstall it on a new USB key (plugged in the external port) and re-register all my resources one by one manually.

Also, note that I am incredibly ruthless with my home lab. I only turn it on when needed. So, when I am done with it, none of that proper shutdown sequence, thanks very much. I trigger the shut down of the physical host from vCenter which takes care of stopping the VMs, sometimes I even push the actual physical button (yes there’s one). While I haven’t nuked anything that way somehow, I would pay to see my boss’s face should I stop production hypervisors with the button!

Ivo Beerens

https://www.ivobeerens.nl/

Why do you have a lab?

The home lab is mainly used for learning, testing new software versions, and automating new image releases. Back when I started down this journey, my first home lab dated from the Novell NetWare 3.11 era; I acquired it using my own money, with no employer subvention 😊

My main considerations and decision points for what I decided to purchase were low noise, low power consumption for running 24×7, room for PCI-Express cards and NVMe support.

What’s your setup?

From a hardware standpoint, computing power is handled by two Shuttle barebone machines with the following specifications:

    • 500 W Plus Silver PSU
    • Intel Core i7 8700 with 6 cores and 12 threads
    • 64 GB memory
    • Samsung 970 EVO 1 TB m.2
    • 2 x 1 GbE Network cards
    • Both barebones are running the latest VMware vSphere version.

In terms of storage, I opted for a separate QNAP TS-251+ NAS with two Western Digital (WD) Red 8 TB disks in a RAID-1 configuration. The barebone machines have NVMe drives with no RAID protection.

The bulk of my workloads are hosted on VMware vSphere, and for the VDI solution I run VMware Horizon with Windows 10/11 VDIs. Cloud-wise, I use an Azure Visual Studio subscription for testing IaaS and Azure Virtual Desktop services.

I manage the environments by automating as much as possible using Infrastructure as Code (IaC). I automated the installation process of almost every part so I can start over from scratch whenever I want.

What do you like and dislike about your setup?

I obviously really enjoy the flexibility that automation brings to the table. However, the lack of resources (max 128 GB) can sometimes be a limiting factor. I also miss having remote management boards such as HPE iLO or Dell iDRAC, or a KVM switch, to facilitate hardware operations.

What does your roadmap look like?

I currently have plans in the works to upgrade to a 10 GbE switch and to bump the memory to 128 GB per barebone.

Paolo Valsecchi

https://nolabnoparty.com/

Why do you have a lab?

I am an IT professional and I often find myself in the situation of implementing new products and configurations without having the right knowledge or tested procedures at hand. Since it is a bad idea to experiment with things directly on production environments, having a lab is the ideal solution to learn, study, and practice new products or test new configurations without the hassle of messing up critical workloads.

Because I’m also a blogger, I study and test procedures to publish them on my blog. This required a better test environment than what I had. Since my computer didn’t have enough resources to allow complex deployments, in 2015 I decided to invest some money and build my own home lab.

It was clear that the ideal lab was not affordable due to its high cost. For that reason, I decided to start with a minimal set of equipment that I could extend later. It took a while to find a configuration that met the requirements, but after extensive research on the Internet and comparisons with other lab setups, I was finally able to complete the design.

My requirements for the lab were simple: low power consumption, cost-effective hardware, acceptable performance, at least two nodes, external storage, compatibility with the platforms I use, and a compact component size.

What’s your setup?

Although my lab still meets my requirements, it is starting to become a little obsolete. My current lab setup is the following:

  • PROD Servers: 3 x Supermicro X11SSH-L4NF
    • Intel Xeon E3-1275v5
    • 64GB RAM
    • 2TB WD Red
  • DR Server: Intel NUC NUC8i3BEH
    • Intel Core i3-8109U
    • 32GB RAM
    • Kingston SA1000M8 240G SSD A1000
  • Storage PROD: Synology DS918
    • 12TB WD Red RAID5
    • 250GB read/write cache
    • 16GB RAM
  • Storage Backup: Synology DS918
    • 12TB WD Red RAID5
    • 8GB RAM
  • Storage DR: Synology DS119j + 3TB WD Red
  • Switch: Cisco SG350-28
  • Router: Ubiquiti USG
  • UPS: APC 1400

The lab is currently composed of a three-node cluster running VMware vSphere 7.0.2 with vSAN as the main storage. The physical shared storage devices are configured with RAID 5 and connected to vSphere or to the backup services via NFS or dedicated LUNs.

The installed Windows Server VMs run version 2016 or 2019, while the Linux VMs span different distributions and versions.

My lab runs different services, such as:

  • VMware vSphere and vSAN
  • Active Directory, ADFS, Office 365 sync
  • VMware Horizon
  • Different backup solutions (at least 6 different products including Altaro)

In terms of cloud services, I use cloud object storage (S3 and S3-compatible) solutions for backup purposes. I also use Azure to manage services such as Office 365, Active Directory and MFA. Due to high costs, workloads running on AWS or Azure are only created on demand for specific tests.

I try to keep the software up to date with in-place upgrades, except for Windows Server, which I always reinstall. Only once did I have to wipe the lab due to a hardware failure.

What do you like and dislike about your setup?

With my current setup, I’m able to run the workloads I need and do my tests. Let’s say I’m satisfied with my lab, but…

The vSAN capacity disks are not SSDs (only the cache is), the RAM installed on each host is limited to 64 GB, and the network speed is 1 GbE. These constraints affect performance and limit the number of running machines, which demand ever more resources.

What does your roadmap look like?

To enhance my lab, the replacement of HDDs with SSDs is the first step in my roadmap. Smaller physical servers to better fit in my room as well as a 10 Gbps network would be the icing on the cake. Unfortunately, this means replacing most of the installed hardware in my lab.

Any horror stories to share?

After moving my lab from my former company to my house, the air conditioning I relied on during those first days was not great, and a hot summer proved fatal to my hardware… the storage device holding all my backups failed and a lot of important VMs were lost. A pity, since just a few days earlier I had deleted those same VMs from the lab. I spent weeks re-creating all the VMs! I now have a better cooling system and a stronger backup strategy (3-2-1!)

Mayur Parmar

Why do you have a lab?

I use my home lab primarily for testing various products and exploring new features and functionality that I’d never played with before. This greatly helps me learn about a product as well as test it.

I decided to go for a Home Lab 4 years ago because of the complete flexibility and control you have over your own environment. You can easily (or not) deploy, configure and manage things yourself. I bought my Dell Workstation directly from Dell by customizing its configuration according to my needs and requirements.

The first thing I considered was whether it should be bare metal with rack servers, network switches and storage devices, or simply nested virtualization inside VMware Workstation. I went for the nested virtualization route for flexibility and convenience and sized the hardware resources according to what I needed at the time.

What’s your setup?

My home lab is pretty simple: it is made up of a Dell workstation, a TP-Link switch and a portable hard drive.

Dell Workstation:

  • Dell Precision Tower 5810
  • Intel Xeon E5-2640v4 10 Core processor
  • 96 GB of DDR4 Memory
  • 2 x 1 TB SSDs
  • 2 TB portable hard drive
  • Windows 10 with VMware Workstation

At the moment I run a variety of VMs such as nested ESXi hosts, AD/DNS, backup software, a mail server and a number of Windows and Linux boxes. Because all the VMs run on VMware Workstation, no additional network configuration is required, as the VMs can interact with each other on virtual networks.

Since my home lab is on VMware Workstation, I have the flexibility to keep up-to-date versions as well as older ones, for instance to test and compare features. Because it runs in VMware Workstation, I often get to wipe out and recreate the complete setup. Whenever newer versions are released, I always upgrade to try out the new features.

What do you like and dislike about your setup?

I like the flexibility VMware Workstation gives me to set things up easily and scratch them just as easily.

On the other hand, there are a number of things I can’t explore, such as setting up solutions directly on the physical server, working on firmware, configuring storage and RAID levels, configuring networking and routing, and so on.

What does your roadmap look like?

Since I bought my Dell Workstation, I constantly keep an eye on the resources to avoid running out of capacity. In the near future, I plan to continue with that trend but I am considering buying a new one to extend the capacity.

However, I am currently looking at buying a NAS device to provide shared storage capacity to the compute node(s). While I don’t use any just now, my future home lab may include cloud services at some point.

Any horror stories to share?

A few mistakes I’ve made in the home lab include failing to create DNS records before deploying a solution, a messed-up vCenter upgrade that required deploying new vCenter Servers, and a failed Standard Switch to Distributed Switch migration that caused a network outage and forced me to reset the whole networking stack.

Simon Cranney

https://esxsi.com/

Why do you have a lab?

A couple of years ago I stood up my first proper VMware home lab project. I had messed about with running VMware Workstation on a gaming PC in the past, but this time I wanted something I could properly get my teeth into and have a VMware vSphere home lab without resource contention.

Prior to this, I had no home lab. Many people who are fortunate enough to work in large enterprise infrastructure environments may be able to fly under the radar and play about with technologies on work hardware. I can neither confirm nor deny whether this was something I used to do! But hey, learning and testing new technologies benefits the company in the long run.

What’s your setup?

Back to the current VMware home lab then: I had a budget in mind, so I ended up going with a pair of Intel NUC boxes, each with 32 GB RAM and a 1 TB PCIe NVMe SSD.

The compute and storage are used to run a fairly basic VMware vSphere home lab setup. I have a vCenter Server as you’d expect, a 2-node vSAN cluster, and vRealize Operations Manager, with a couple of Windows VMs running Active Directory and some different applications depending on what I’m working on at any given point in time.

My VMware home lab licenses are all obtained free of charge through the VMware vExpert program but there are other ways of accessing VMware home lab licenses such as through the VMUG Advantage membership or even the vSphere Essentials Plus Kit. If you are building a VMware home lab though, why not blog about it and shoot for the VMware vExpert application?

In terms of networking, I’ve put in a little more effort! It’s slightly out of scope here, but in a nutshell:

  • A mini rack with the Ubiquiti UniFi Dream Machine Pro
  • A UniFi PoE switch
  • A number of UniFi Access Points providing full house and garden coverage

I separate out homelab and trusted devices onto an internal network, partner and guest devices onto an external network, and smart devices or those that like to listen onto a separate IoT network. Each network is backed by a different VLAN and associated firewall rules.

What do you like and dislike about your setup?

Being 8th generation, the Intel NUC boxes caused me some pain when upgrading to vSphere 7. I used the Community Network Driver for ESXi Fling and played about with adding some USB network adapters to build out distributed switches.

I’m also fortunate enough to be running a VMware SD-WAN (VeloCloud) Edge device, which plugs directly into my works docking station and optimizes my corporate network traffic for things like Zoom and Teams calls.

What does your roadmap look like?

In the future, I’d like to connect my VMware home lab project to some additional cloud services, predominantly in AWS. This will allow me to deep dive into technologies like VMware Tanzu, by getting hands-on with the deployment and configuration.

Whilst VMware Hands-on Labs are an excellent resource, like many techies I do find that the material sticks and resonates more when I have had to figure out integrations and fixes in a real-life environment. I hope you found my setup interesting. I’d love to hear in the comments section if you’re running VMware Tanzu in your home lab and from any other UniFi fans!

Get More Out of Your Homelab

It is always fun to discuss home labs and discover how your peers do it. It’s a great way to share “tips and tricks” and to learn from the successes and failures of others. Hardware is expensive, and so are electricity, the space to store it, and so on.


For these reasons and many others, you should ask yourself a few questions before even looking at home lab options to better steer your research towards something that will fit your needs:

  • Do I need hardware, cloud services or both? On-premises hardware means investing a chunk of money at the beginning, but it also means you are in total control of the budget, as electricity will be the only variable from then on. On the other hand, cloud services let you pay only for what you use. They can be very expensive, but they can also be economical under the right circumstances. Also, some of you will only require Azure services because it’s your job, while I, for one, couldn’t run VMware Cloud Director, NSX-T and ALB in the cloud.
  • Do you have limited space or noise constraints? Rack and tower servers are cool, but they are bulky and loud. A large number of IT professionals went for small, passive and silent mini-PCs such as the Intel NUC, which grew in popularity after William Lam from VMware endorsed it and network drivers for USB adapters were released as Flings. These small form factor machines are great and offer pretty good performance with i3, i5 or i7 processors. You can get a bunch of them to build a cluster that won’t use up much energy and won’t make a peep.
  • Nested or Bare-Metal? Another question that is often asked is if you should run everything bare-metal. I personally like the flexibility of nested setups but it’s also because I don’t have the room for a rack at home (and let’s face it, I would get bad looks!). However, as you saw in this blog, people go for one or the other for various reasons and you will have to find yours.
  • What do you want to get out of it? If you are in the VMware dojo, you are most likely interested in testing VMware products, meaning vSphere will probably be your go-to platform, in which case you will have to think about licenses. Sure, you can use evaluation licenses, but you’ll have to start over every 60 days, which is not ideal at all. The vExpert program and the VMUG Advantage program are your best bets in this arena. On the other hand, if you are only playing with open-source software, you can install Kubernetes, OpenStack or KVM on bare metal, for instance, and you won’t have to pay for anything.
  • How many resources do you need? This question goes hand in hand with the next one. Playing around with vSphere, vCenter or vSAN won’t set you back that much, but if you want to get into Cloud Director, Tanzu, NSX-T and the like, you will find that they eat up CPU, memory and storage for breakfast. So, look at the resource requirements for the products you want to test in order to get a rough idea of what you will need.
  • What is your budget? Now the tough question: how much do you want to spend, both on hardware and on energy (which links back to small form factor machines)? It is important to set yourself a budget and not just start buying stuff for the sake of it (unless you have the funds). Home lab setups are expensive and, while you might get a 42U rack full of servers for cheap on the second-hand market, your energy bill will skyrocket. On the other hand, a very cheap setup will still cost you a certain amount of money, yet you may not get much out of it due to hardware limitations. So set yourself a budget and try to find the sweet spot.
  • Check compatibility: Again, don’t jump in guns blazing at the first offer. Double-check that the hardware is compatible with whatever you want to evaluate. Sure, it is likely to work even if it isn’t in the VMware HCL, but it is always worth it to do your research to look for red flags before buying.

Those are only a few key points I could think of but I’d be happy to hear about yours in the comments!

Is a VMware Homelab Worth it?

We think that getting a home lab is definitely worth it. While the money aspect might seem daunting at first, investing in a home lab is investing in yourself. The wealth of knowledge you can get from a 16-core/128GB server is light-years away from what you get running VMware Workstation on your 8-core/16GB laptop. Even though running products in a lab isn’t real-life experience, it might be the differentiating factor that gets you that dream job you’ve been after. And once you get it, the $600 you spent on that home lab will feel like money well spent with a great ROI!

VMware Homelab Alternatives

However, if your objective is to learn about VMware products in a guided way and you are not ready to buy a home lab just yet for whatever reason, fear not, online options are there for you! You can always start with the VMware Hands-on Labs (HOL), which offer a large number of learning paths where you can get to grips with most of the products sold by the company. Many of them you couldn’t even test in your home lab anyway (especially the cloud ones like Carbon Black or Workspace ONE). Head over to https://pathfinder.vmware.com/v3/page/hands-on-labs and register for Hands-on Labs to start learning instantly.

The other option for running a home lab on the cheap is to install VMware Workstation on your local machine, if you have enough resources. In almost 100% of cases, this is the first step before moving to a more serious and expensive setup.

To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly and securely back up and replicate your virtual machines. We work hard to continually give our customers confidence in their backup strategy.

Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.

What Homelab Set Up is Right for You?

I think we will all agree that our work doesn’t fit within the traditional 9-to-5 as keeping our skills up is also part of the job and it can’t always be done on company time. Sometimes we’ll be too busy or it might just be that we want to learn about something that has nothing to do with the company’s business. Home labs aren’t limited to VMware or Azure infrastructure and what your employer needs. You can put them to good use by running overkill wifi infrastructures or by managing your movie collection with an enterprise-grade and highly resilient setup that many SMBs would benefit from. The great thing about it is that it is useful on a practical and personal level while also being good fun (if you’re a nerd like me).

Gathering testimonies about VMware homelab projects and discussing each other’s setups has been a fun and very interesting exercise. It is also beneficial to see what is being done out there and to identify ways to improve and optimize our own setups. I now know that I need an oversized shared storage device in my home (this point will be argued)!

Now we would love to hear about your VMware homelab project that you run at home, let’s have a discussion in the comments section!

Source :
https://www.altaro.com/vmware/perfect-homelab-vmware/

Top 10 PowerShell Tasks in Exchange Online

Today, there is no question that IT admins are busier than ever, juggling multiple tasks and responsibilities. These include managing and administering Exchange email services, both on-premises and in the cloud. Exchange Online is an extremely popular solution for organizations to host mail services as many businesses have migrated email and file storage to the public cloud. PowerShell is a great scripting language that allows admins to make the best use of their time by automating common tasks and day-to-day activities.

Why use PowerShell?

Before considering PowerShell specifically in the context of Exchange Online, why should admins consider using PowerShell in general? Today, PowerShell has quickly become one of the most popular and fully-featured scripting languages. Many software vendors are developing and releasing their own PowerShell modules, allowing admins to control, configure, and manage many different solutions across the board with the familiar PowerShell syntax.

IT admins, especially Windows admins, are familiar with PowerShell as version 1.0 was released in 2006 for Windows Server 2003, Windows XP SP2, and Windows Vista. In addition, Windows PowerShell is included in modern Windows Server and client operating systems, with the newer PowerShell Core as an optional download.

PowerShell is both familiar and understandable for many admins, given its verb-noun constructs and very human-readable syntax. However, even for non-developers, writing simple PowerShell one-liner scripts can significantly reduce the number of manual tasks performed daily.
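
As a minimal illustration (the cmdlets below are generic Windows PowerShell, not Exchange-specific), a one-liner like this replaces clicking through a management console just to list every stopped service on a machine:

  • Get-Service | Where-Object {$_.Status -eq 'Stopped'} | Select-Object Name, DisplayName   # list every stopped service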

PowerShell is also very extensible. As mentioned, third-party software vendors can write their own PowerShell snap-ins and modules to integrate into the PowerShell framework, allowing PowerShell to be customized to work with many different software solutions. Third-party vendors are not the only ones that make extensive use of PowerShell modules and cmdlets. Most modern Microsoft software and cloud solutions have their own PowerShell modules, allowing for seamless automation, including configuration and management.

What is Exchange Online (EXO)?

Microsoft Exchange Online (EXO) is a hosted unified messaging solution that provides email, calendaring, contacts, and task management from a wide range of devices. Exchange Online is a modern counterpart to the traditional Exchange on-premises solutions organizations have used for decades. In addition, Exchange Online can leverage modern Microsoft technologies, including Azure Active Directory. With Exchange Online’s Azure integration, organizations have the tools needed to support the modern hybrid workforce worldwide.

Exchange Online is the email component included in an Office 365 or Microsoft 365 subscription. However, you can purchase Exchange Online services without the other components of Office/Microsoft 365. With Exchange Online, you retain control over the messaging services offered to your users.

Microsoft Exchange Online PowerShell

Exchange Online includes the ability to administer, configure, and manage your Exchange Online environment using PowerShell. In addition, Exchange Online PowerShell provides many robust cmdlets that allow administrators to automate many common tasks.

The Exchange Online PowerShell V2 module is the latest iteration and release of the Exchange Online module and provides modern features, such as the ability to work with multi-factor authentication (MFA). With MFA, organizations can greatly bolster the security of their PowerShell sessions by requiring more than one authentication factor, such as a one-time code delivered via an authenticator app or text message.

Automated Configuration and Benefits of Exchange Online PowerShell

IT admins may ask why they would want to use PowerShell instead of simply using the GUI that is familiar and does most of what they want to do. When performing a specific task one time, or only a few times a day on a single object, the GUI tools are well suited to the job and quite efficient for ad-hoc work. However, there are multiple reasons why you would use PowerShell instead of the Exchange Online GUI management tools. These include:

  • Bulk operations
  • Data filtering
  • Data piping

Bulk operations

GUI management tools do not scale well when dealing with tasks that may need to be performed on multiple users or other objects. Also, what if you need to carry out specific tasks on hundreds of objects on a schedule? GUI management tools are not suited for doing this. For example, can you imagine manually changing an attribute on hundreds of Exchange Online users through the GUI? It would be extremely time-consuming and not very efficient.

When needing to perform bulk operations on multiple objects, PowerShell is much better suited at doing this than the Exchange Online GUI. For example, when manually changing values and attributes on an object numerous times through a GUI, there is a high likelihood a mistake can be made. However, if you use PowerShell to make the changes, the actions are repeated precisely each time the code updates the object, eliminating mistakes due to human error.

Making changes using a PowerShell script on hundreds of users might take minutes or less, whereas making the same changes manually through the GUI might take hours. This can save many hours of manual labor on low-level administrative tasks.
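
As a rough sketch of a bulk change (the CSV path, column name, and attribute value here are invented purely for illustration), a single pipeline can update hundreds of mailboxes from a list:

  • Import-Csv C:\users.csv | ForEach-Object { Set-Mailbox -Identity $_.UserPrincipalName -CustomAttribute1 'Migrated' }   # stamp an attribute on every user in the CSV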

Data filtering

One of the most compelling reasons to use PowerShell with Exchange Online is its data filtering capability. PowerShell is a powerful object-oriented scripting language that can pull out objects and filter data in ways that may not be available in the Exchange Online management GUI.

When you think about it, GUI tools only allow filtering by the specific criteria built into the GUI tool or management console. If the specific filter you need is not available, you can’t see the information in the way you need it displayed. In addition, GUI tools generally do not provide IT admins with the filtering and data extraction capabilities of command-line tools and scripting languages.

With the filtering capabilities built into PowerShell for Exchange Online, IT admins can query and filter data as needed. PowerShell is an object-oriented scripting language that can return various data objects. For example, let’s say you want to get the archivestatus attribute from all your user mailboxes. You could do that with a simple PowerShell one-liner as follows:

  • get-mailbox | select name, archivestatus

With Exchange Online PowerShell, getting the value of any mailbox attribute is the same as following this simple syntax shown above. Now, things get more interesting by piping returned values and data into other PowerShell cmdlets.

Data piping

Another powerful capability of data filtering with PowerShell is to take the data returned from a data query with a filter and then pipe the return into another PowerShell command. This simple feature contained natively in PowerShell allows querying for specific matching objects such as mailboxes and then doing something with those returned objects, such as running another Exchange Online PowerShell cmdlet on them.

A very simple example of piping returned data into another PowerShell cmdlet is the “Out-File” cmdlet, which lets you export the returned data to a plain text file.

  • get-mailbox | select name, archivestatus | out-file c:\archivestatus.txt

But you can do anything you want with the pipe from Get-Mailbox, Get-User, or any other PowerShell “get” command. You can think of the workflow like this: you query for the objects that match the filter criteria you have specified, and then feed that set of matching objects into another PowerShell cmdlet.
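
As a quick sketch (hiding shared mailboxes from the address list is purely an illustrative change here, not a recommendation), piping a filtered “get” into a “set” cmdlet applies the change to every matching object:

  • Get-Mailbox -RecipientTypeDetails SharedMailbox | Set-Mailbox -HiddenFromAddressListsEnabled $true   # act on every shared mailbox returned by the query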

Manually Configuring Exchange Online PowerShell

To get started with the Exchange Online PowerShell cmdlets, you first need to install the required PowerShell modules. The Exchange Online PowerShell module is one of several modules that fall under the umbrella of services contained in Microsoft 365. As mentioned earlier, the Exchange Online service can be purchased as a standalone product or included with the mail services offered by Microsoft 365.

Each of the Microsoft 365 services has its own PowerShell modules, including:

  • Azure Active Directory (Azure AD)
  • Exchange Online
  • SharePoint Online
  • Skype for Business Online
  • Teams

If you are explicitly working with Exchange Online (EXO), two modules are needed to interact with the low-level Azure AD user objects and the Exchange Online mailboxes:

  • Azure Active Directory (Azure AD) PowerShell – Allows querying the Azure Active Directory environment for users, attributes, and so on
  • Exchange Online PowerShell – Allows querying and performing critical tasks at the mailbox level for users with Exchange Online mailboxes

Let’s see how to install both of these PowerShell modules for specifically interacting with Exchange Online via PowerShell.

Azure Active Directory (Azure AD)

First, we are going to install the AzureAD PowerShell module. As a note, it does not matter whether you install the AzureAD module or the ExchangeOnline module first. To install the module, run the following cmdlet:

  • Install-Module AzureAD
  • Accept the warning message displayed regarding the untrusted repository by typing “Y.” Learn more about AzureAD PowerShell module cmdlet reference here: AzureAD Module | Microsoft Docs.
Installing AzureAD PowerShell module using Windows Terminal

Installing Exchange Online PowerShell Module

Now, installing the Exchange Online PowerShell module follows the same process. To install the Exchange Online PowerShell module, run the following cmdlet:

  • Install-Module ExchangeOnlineManagement
Installing the ExchangeOnlineManagement PowerShell module

Accept the warning message displayed regarding the untrusted repository by typing “Y.” For details on using the Exchange Online Management PowerShell, look at Microsoft’s Exchange Online PowerShell documentation here: Exchange Online PowerShell | Microsoft Docs.

Controlling user access to Exchange Online PowerShell

By default, all accounts you create in Microsoft 365 can connect to and use Exchange Online PowerShell. However, IT admins can use Exchange Online PowerShell to enable or disable a user’s ability to use Exchange Online PowerShell in the environment.

As a security note, the ability to connect to Exchange Online PowerShell does not by itself give a user administrator access. A user’s permissions in Exchange Online are defined by the built-in role-based access control (RBAC) used by Exchange Online.

Using the Exchange Online PowerShell cmdlets shown below, Exchange administrators can enable or disable users’ access to Exchange Online PowerShell.

  • Disable Exchange Online PowerShell – Set-User -Identity myuser@mydomain.com -RemotePowerShellEnabled $false
  • Enable Exchange Online PowerShell – Set-User -Identity myuser@mydomain.com -RemotePowerShellEnabled $true

To enable or disable Exchange Online PowerShell for multiple users based on a user attribute, you can also use the filtering and piping features discussed above. For example, to enable Exchange Online PowerShell for users with a specific title, like “Manager,” you can do the following:

  • $managers = Get-User -ResultSize unlimited -Filter "(RecipientType -eq 'UserMailbox') -and (Title -like 'Manager*')"
  • $managers | foreach {Set-User -Identity $_.WindowsEmailAddress -RemotePowerShellEnabled $true}

Connecting to Exchange Online PowerShell with Basic Authentication

If you search for how to connect to Exchange Online PowerShell, you will see references to both basic authentication and modern authentication. To follow best practices, don’t attempt to use Basic Authentication any longer. All organizations at this point need to be switching to modern authentication with MFA enabled.

There is an additional reason: Microsoft is deprecating Basic Authentication access to Exchange Online on October 1, 2022. Starting on that date, Microsoft will begin disabling Basic Authentication for the Outlook, EWS, RPS, POP, IMAP, and EAS protocols in Exchange Online. SMTP Auth will also be disabled if it is not being used. Read the official announcement here.

If you want to use the older Exchange Online Remote connection using Basic Authentication, you can view those instructions from Microsoft here. Again, note this method will be deprecated later this year.

Connecting to Exchange Online PowerShell with Modern Authentication

To connect to Exchange Online, use the Exchange Online PowerShell V2 module (installation shown above) to connect to your Exchange Online environment. The EXO PowerShell V2 module uses modern authentication and works with multi-factor authentication (MFA) for securing your Exchange Online PowerShell environment.

To connect to your Exchange Online environment, you need to import the ExchangeOnlineManagement module and then use the Connect-ExchangeOnline cmdlet.

  • Import-Module ExchangeOnlineManagement
  • Connect-ExchangeOnline -ShowProgress $true
Connecting to Exchange Online using the Connect-ExchangeOnline cmdlet

This brings up the login box for your Office/Microsoft 365 account and lets you take advantage of any MFA configured for the account.

Logging into Exchange Online with the Exchange Online PowerShell management module
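
You can also optionally pass your admin account up front so the sign-in prompt is pre-filled, and it is good practice to close the session when you are finished (the UPN below is just a placeholder):

  • Connect-ExchangeOnline -UserPrincipalName admin@yourtenant.onmicrosoft.com
  • Disconnect-ExchangeOnline -Confirm:$false   # close the session when done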

The Top 10 Most Common Tasks in Exchange Online PowerShell

Now that we have installed the Exchange Online PowerShell module, what are some common tasks we can accomplish using Exchange Online PowerShell? Let’s take a look at the following:

  1. Getting Migration information
  2. Getting mailboxes
  3. Viewing mailbox statistics
  4. Increasing deleted items retention
  5. Enable Mailbox Audit Logging
  6. Identify inactive mailboxes
  7. Identify mailboxes enabled with forwarding
  8. Setting mailbox autoreply configuration
  9. Assigning roles to users
  10. Identifying ActiveSync devices

1. Getting Migration Information

You may be migrating users from one Exchange environment, such as on-premises Exchange Server, to another (Exchange Online). The Get-MigrationUser cmdlet is a great way to check the status of the users in a migration batch.

  • Get-MigrationUser -BatchId Marketing | Get-MigrationUserStatistics
Using the Get-MigrationUser cmdlet

2. Getting Mailboxes

One of the most basic tasks an Exchange admin needs to carry out is getting information about mailboxes, and the most basic cmdlet for this use case is Get-Mailbox. Get-Mailbox is generally piped into other cmdlets: you pull the mailboxes that meet specific filters and then perform configuration on the mailboxes the query returns.

Using the Get-Mailbox cmdlet to get mailbox information in Exchange Online
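
For example (a sketch; the office name and selected properties are invented for illustration), you can narrow the query with a filter and return only the columns you care about:

  • Get-Mailbox -ResultSize Unlimited -Filter "Office -eq 'Berlin'" | Select DisplayName, PrimarySmtpAddress   # only mailboxes in the Berlin office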

3. Viewing mailbox statistics

A common task for Exchange admins is keeping an eye on the size of mailboxes in the environment so they do not become unwieldy. The Get-MailboxStatistics cmdlet returns a mailbox’s size, the number of messages it contains, and the last time it was accessed.

  • Get-MailboxStatistics -identity <username>
Using the Get-MailboxStatistics cmdlet in Exchange Online to get mailbox information
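
To turn this into a quick report across the whole tenant (a sketch; add filters as needed for your environment), you can pipe all mailboxes into Get-MailboxStatistics:

  • Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics | Select DisplayName, TotalItemSize, ItemCount   # size report for every mailbox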

4. Increasing deleted items retention

By default, Exchange Online is configured to retain deleted items for 14 days. However, this limit can easily be increased for users with the Exchange Online PowerShell cmdlet Set-Mailbox.

  • Set-Mailbox -Identity "John Doe" -RetainDeletedItemsFor 30
The Set-Mailbox cmdlet allows configuring many aspects of the user mailbox in Exchange Online
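
To apply the same setting in bulk rather than per user (a sketch; 30 days is just an example value), pipe the mailboxes you want to change into Set-Mailbox:

  • Get-Mailbox -ResultSize Unlimited | Set-Mailbox -RetainDeletedItemsFor 30   # extend deleted item retention for every mailbox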

5. Enable Mailbox Audit Logging

Even though audit logging is on by default for all organizations in Microsoft 365, only users with E5 licenses will return mailbox audit log events in audit log searches. If you want to retrieve audit log events for users without an E5 license, PowerShell is a great way to do that. You can use the Exchange Online PowerShell cmdlet one-liner:

  • Set-Mailbox -Identity <mailbox> -AuditEnabled $true
Using the Set-Mailbox cmdlet to turn on the AuditEnabled flag
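
To enable auditing across all user mailboxes instead of one at a time (a sketch that simply scopes the change to user mailboxes), you can combine Get-Mailbox with Set-Mailbox:

  • Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox | Set-Mailbox -AuditEnabled $true   # turn on mailbox auditing for every user mailbox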

6. Identify inactive mailboxes

Using a combination of Exchange Online PowerShell cmdlets and a simple foreach loop, we can see when each user last logged into their mailbox.

  • Get-Mailbox -ResultSize Unlimited | Foreach {Get-MailboxStatistics -Identity $_.UserPrincipalName | Select DisplayName, LastLogonTime}
Getting the last logon time using Exchange Online PowerShell
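
Building on that, here is a rough sketch for flagging mailboxes with no logon in the last 90 days (the threshold is arbitrary, and mailboxes that have never been logged into will also be returned):

  • Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics | Where {$_.LastLogonTime -lt (Get-Date).AddDays(-90)} | Select DisplayName, LastLogonTime   # likely inactive mailboxes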

7. Identify mailboxes enabled with forwarding

What if you want to identify mailboxes that have a forwarding address configured, for example because forwarding was never documented? You can easily do this with another useful Exchange Online PowerShell one-liner:

  • Get-Mailbox -ResultSize Unlimited | where {$_.ForwardingAddress -ne $null} | select DisplayName, ForwardingAddress
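
Note that forwarding can also be configured through the ForwardingSmtpAddress property (for example, forwarding to an external address), so it is worth checking that property as well; a quick sketch:

  • Get-Mailbox -ResultSize Unlimited | where {$_.ForwardingSmtpAddress -ne $null} | select DisplayName, ForwardingSmtpAddress   # mailboxes forwarding to an SMTP address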

8. Setting mailbox autoreply configuration

A user may forget to set their autoreply configuration. If they go away on vacation or if there is a need to set the autoreply on a user mailbox for other reasons, you can easily accomplish this using PowerShell. It eliminates the need to log in as that user and do this interactively in Outlook.

To do this, you can use the Set-MailboxAutoReplyConfiguration cmdlet. It allows setting both an internal message and an external message for the mailbox.
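
A minimal sketch of the cmdlet in action (the identity and message text are placeholders):

  • Set-MailboxAutoReplyConfiguration -Identity "John Doe" -AutoReplyState Enabled -InternalMessage "I am out of the office this week." -ExternalMessage "Thank you for your email. I will reply when I return."   # turn on internal and external auto-replies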

Setting autoreply messages using PowerShell

9. Manage roles for groups

Using the New-ManagementRoleAssignment cmdlet, you can assign a management role to a management role group, management role assignment policy, user, or universal security group.

  • New-ManagementRoleAssignment -Role "Mail Recipients" -SecurityGroup "Tier 2 Help Desk"
Assigning management roles using the New-ManagementRoleAssignment cmdlet
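
As a quick follow-up check (a sketch using the same role group name as above), you can list what has been assigned to a role group:

  • Get-ManagementRoleAssignment -RoleAssignee "Tier 2 Help Desk"   # show the roles assigned to the group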

10. Identifying ActiveSync Devices

Identifying and seeing ActiveSync Devices in use in the organization can easily be accomplished with Exchange Online PowerShell using the Get-MobileDevice cmdlet.
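
A couple of example queries (the user name is a placeholder, and the selected properties are just a useful subset):

  • Get-MobileDevice -ResultSize Unlimited | Select UserDisplayName, FriendlyName, DeviceOS   # tenant-wide inventory of synced devices
  • Get-MobileDevice -Mailbox "John Doe" | Select FriendlyName, DeviceOS, DeviceModel   # devices paired with a single mailbox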

Getting mobile devices paired with Exchange Online Users

To properly protect your Hyper-V virtual machines, use Altaro VM Backup to securely backup and replicate your virtual machines. We work hard perpetually to give our customers confidence in their Hyper-V backup strategy.

To keep up to date with the latest Hyper-V best practices, become a member of the DOJO | Hyper-V now (it’s free).

The Future is Automated

Many organizations are now migrating and hosting their mail services in the cloud. Exchange Online provides businesses with a great way to host their mail services in Microsoft’s cloud infrastructure, either as a standalone subscription or part of their Office/Microsoft 365 subscription.

While Exchange admins can undoubtedly use the GUI management tools for daily tasks, Exchange Online PowerShell provides a great way to carry out everyday tasks much more quickly and efficiently through automation. The Exchange Online PowerShell module is easy to install. In addition, it provides quick time to value by allowing Exchange admins to easily query and configure multiple objects in their Exchange Online environments.

Used in automated processes, Exchange Online PowerShell allows Exchange admins to carry out tasks consistently and in a way that helps to eliminate human error from mundane low-level tasks.

Source :
https://www.altaro.com/hyper-v/10-tasks-online-powershell/

Cisco says it won’t fix zero-day RCE in end-of-life VPN routers

Cisco advises owners of end-of-life Small Business RV routers to upgrade to newer models after disclosing a remote code execution vulnerability that will not be patched.

The vulnerability is tracked as CVE-2022-20825 and has a CVSS severity rating of 9.8 out of 10.0.

According to a Cisco security advisory, the flaw exists due to insufficient user input validation of incoming HTTP packets on the impacted devices.

An attacker could exploit it by sending a specially crafted request to the web-based management interface, resulting in command execution with root-level privileges.

Impact and remediation

The vulnerability impacts four Small Business RV Series models, namely the RV110W Wireless-N VPN Firewall, the RV130 VPN Router, the RV130W Wireless-N Multifunction VPN Router, and the RV215W Wireless-N VPN Router.

This vulnerability only affects devices with the web-based remote management interface enabled on WAN connections.

While the remote management feature is not enabled in the default configuration, brief searches using Shodan found exposed devices.

To determine whether remote management is enabled, admins should log in to the web-based management interface, navigate to “Basic Settings > Remote Management,” and verify the state of the relevant check box.

Cisco states that they will not be releasing a security update to address CVE-2022-20825 as the devices are no longer supported. Furthermore, there are no mitigations available other than to turn off remote management on the WAN interface, which should be done regardless for better overall security.

Users are advised to apply the configuration changes until they migrate to Cisco Small Business RV132W, RV160, or RV160W Routers, which the vendor actively supports.

Cisco warned last year that admins should upgrade to newer models after disclosing that it would not fix a critical vulnerability in the Universal Plug-and-Play (UPnP) service.

This week, Cisco patched a critical vulnerability in Cisco Secure Email that could allow attackers to bypass authentication and log in to the web management interface of the Cisco email gateway.

Source :
https://www.bleepingcomputer.com/news/security/cisco-says-it-won-t-fix-zero-day-rce-in-end-of-life-vpn-routers/

The More You Know, The More You Know You Don’t Know

A Year in Review of 0-days Used In-the-Wild in 2021

Posted by Maddie Stone, Google Project Zero

This is our third annual year in review of 0-days exploited in-the-wild [2020, 2019]. Each year we’ve looked back at all of the detected and disclosed in-the-wild 0-days as a group and synthesized what we think the trends and takeaways are. The goal of this report is not to detail each individual exploit, but instead to analyze the exploits from the year as a group, looking for trends, gaps, lessons learned, successes, etc. If you’re interested in the analysis of individual exploits, please check out our root cause analysis repository.

We perform and share this analysis in order to make 0-day hard. We want it to be more costly, more resource intensive, and overall more difficult for attackers to use 0-day capabilities. 2021 highlighted just how important it is to stay relentless in our pursuit to make it harder for attackers to exploit users with 0-days. We heard over and over and over about how governments were targeting journalists, minoritized populations, politicians, human rights defenders, and even security researchers around the world. The decisions we make in the security and tech communities can have real impacts on society and our fellow humans’ lives.

We’ll provide our evidence and process for our conclusions in the body of this post, and then wrap it all up with our thoughts on next steps and hopes for 2022 in the conclusion. If digging into the bits and bytes is not your thing, then feel free to just check out the Executive Summary and Conclusion.

Executive Summary

2021 included the detection and disclosure of 58 in-the-wild 0-days, the most ever recorded since Project Zero began tracking in mid-2014. That’s more than double the previous maximum of 28 detected in 2015 and especially stark when you consider that there were only 25 detected in 2020. We’ve tracked publicly known in-the-wild 0-day exploits in this spreadsheet since mid-2014.

While we often talk about the number of 0-day exploits used in-the-wild, what we’re actually discussing is the number of 0-day exploits detected and disclosed as in-the-wild. And that leads into our first conclusion: we believe the large uptick in in-the-wild 0-days in 2021 is due to increased detection and disclosure of these 0-days, rather than simply increased usage of 0-day exploits.

With this record number of in-the-wild 0-days to analyze we saw that attacker methodology hasn’t actually had to change much from previous years. Attackers are having success using the same bug patterns and exploitation techniques and going after the same attack surfaces. Project Zero’s mission is “make 0day hard”. 0-day will be harder when, overall, attackers are not able to use public methods and techniques for developing their 0-day exploits. When we look over these 58 0-days used in 2021, what we see instead are 0-days that are similar to previous & publicly known vulnerabilities. Only two 0-days stood out as novel: one for the technical sophistication of its exploit and the other for its use of logic bugs to escape the sandbox.

So while we recognize the industry’s improvement in the detection and disclosure of in-the-wild 0-days, we also acknowledge that there’s a lot more improving to be done. Having access to more “ground truth” of how attackers are actually using 0-days shows us that they are able to have success by using previously known techniques and methods rather than having to invest in developing novel techniques. This is a clear area of opportunity for the tech industry.

We had so many more data points in 2021 to learn about attacker behavior than we’ve had in the past. Having all this data, though, has left us with even more questions than we had before. Unfortunately, attackers who actively use 0-day exploits do not share the 0-days they’re using or what percentage of 0-days we’re missing in our tracking, so we’ll never know exactly what proportion of 0-days are currently being found and disclosed publicly.

Based on our analysis of the 2021 0-days we hope to see the following progress in 2022 in order to continue taking steps towards making 0-day hard:

  1. All vendors agree to disclose the in-the-wild exploitation status of vulnerabilities in their security bulletins.
  2. Exploit samples or detailed technical descriptions of the exploits are shared more widely.
  3. Continued concerted efforts on reducing memory corruption vulnerabilities or rendering them unexploitable.
  4. Launch mitigations that will significantly impact the exploitability of memory corruption vulnerabilities.

A Record Year for In-the-Wild 0-days

2021 was a record year for in-the-wild 0-days. So what happened?

bar graph showing the number of in-the-wild 0-day detected per year from 2015-2021. The totals are taken from this tracking spreadsheet: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708

Is it that software security is getting worse? Or is it that attackers are using 0-day exploits more? Or has our ability to detect and disclose 0-days increased? When looking at the significant uptick from 2020 to 2021, we think it’s mostly explained by the latter. While we believe there has been a steady growth in interest and investment in 0-day exploits by attackers in the past several years, and that security still needs to urgently improve, it appears that the security industry’s ability to detect and disclose in-the-wild 0-day exploits is the primary explanation for the increase in observed 0-day exploits in 2021.

While we often talk about “0-day exploits used in-the-wild”, what we’re actually tracking are “0-day exploits detected and disclosed as used in-the-wild”. There are more factors than just the use that contribute to an increase in that number, most notably: detection and disclosure. Better detection of 0-day exploits and more transparently disclosed exploited 0-day vulnerabilities is a positive indicator for security and progress in the industry.

Overall, we can break down the uptick in the number of in-the-wild 0-days into:

  • More detection of in-the-wild 0-day exploits
  • More public disclosure of in-the-wild 0-day exploitation

More detection

In the 2019 Year in Review, we wrote about the “Detection Deficit”. We stated “As a community, our ability to detect 0-days being used in the wild is severely lacking to the point that we can’t draw significant conclusions due to the lack of (and biases in) the data we have collected.” In the last two years, we believe that there’s been progress on this gap.

Anecdotally, we hear from more people that they’ve begun working more on detection of 0-day exploits. Quantitatively, while a very rough measure, we’re also seeing the number of entities credited with reporting in-the-wild 0-days increasing. It stands to reason that if the number of people working on trying to find 0-day exploits increases, then the number of in-the-wild 0-day exploits detected may increase.

A bar graph showing the number of distinct reporters of 0-day in-the-wild vulnerabilities per year for 2019-2021. 2019: 9, 2020: 10, 2021: 20. The data is taken from: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708
a line graph showing how many in-the-wild 0-days were found by their own vendor per year from 2015 to 2021. 2015: 0, 2016: 0, 2017: 2, 2018: 0, 2019: 4, 2020: 5, 2021: 17. Data comes from: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708

We’ve also seen the number of vendors detecting in-the-wild 0-days in their own products increasing. Whether or not these vendors were previously working on detection, vendors seem to have found ways to be more successful in 2021. Vendors likely have the most telemetry and overall knowledge and visibility into their products so it’s important that they are investing in (and hopefully having success in) detecting 0-days targeting their own products. As shown in the chart above, there was a significant increase in the number of in-the-wild 0-days discovered by vendors in their own products. Google discovered 7 of the in-the-wild 0-days in their own products and Microsoft discovered 10 in their products!

More disclosure

The second reason why the number of detected in-the-wild 0-days has increased is due to more disclosure of these vulnerabilities. Apple and Google Android (we differentiate “Google Android” rather than just “Google” because Google Chrome has been annotating their security bulletins for the last few years) first began labeling vulnerabilities in their security advisories with the information about potential in-the-wild exploitation in November 2020 and January 2021 respectively. When vendors don’t annotate their release notes, the only way we know that a 0-day was exploited in-the-wild is if the researcher who discovered the exploitation comes forward. If Apple and Google Android had not begun annotating their release notes, the public would likely not know about at least 7 of the Apple in-the-wild 0-days and 5 of the Android in-the-wild 0-days. Why? Because these vulnerabilities were reported by “Anonymous” reporters. If the reporters didn’t want credit for the vulnerability, it’s unlikely that they would have gone public to say that there were indications of exploitation. That is 12 0-days that wouldn’t have been included in this year’s list if Apple and Google Android had not begun transparently annotating their security advisories.

bar graph that shows the number of Android and Apple (WebKit + iOS + macOS) in-the-wild 0-days per year. The bar graph is split into two color: yellow for Anonymously reported 0-days and green for non-anonymous reported 0-days. 2021 is the only year with any anonymously reported 0-days. 2015: 0, 2016: 3, 2018: 2, 2019: 1, 2020: 3, 2021: Non-Anonymous: 8, Anonymous- 12. Data from: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708

Kudos and thank you to Microsoft, Google Chrome, and Adobe who have been annotating their security bulletins for transparency for multiple years now! And thanks to Apache who also annotated their release notes for CVE-2021-41773 this past year.

In-the-wild 0-days in Qualcomm and ARM products were annotated as in-the-wild in Android security bulletins, but not in the vendor’s own security advisories.

It’s highly likely that in 2021, there were other 0-days that were exploited in the wild and detected, but vendors did not mention this in their release notes. In 2022, we hope that more vendors start noting when they patch vulnerabilities that have been exploited in-the-wild. Until we’re confident that all vendors are transparently disclosing in-the-wild status, there’s a big question of how many in-the-wild 0-days are discovered, but not labeled publicly by vendors.

New Year, Old Techniques

We had a record number of “data points” in 2021 to understand how attackers are actually using 0-day exploits. A bit surprisingly, though, there was nothing new among all this data. 0-day exploits are considered one of the most advanced attack methods an actor can use, so it would be easy to conclude that attackers must be using special tricks and attack surfaces. But instead, the 0-days we saw in 2021 generally followed the same bug patterns, attack surfaces, and exploit “shapes” previously seen in public research. Once 0-day is truly hard, we’d expect that, to be successful, attackers would have to find new bug classes of vulnerabilities in new attack surfaces using never-before-seen exploitation methods. In general, that wasn’t what the data showed us this year. With two exceptions (described below in the iOS section) out of the 58, everything we saw was pretty “meh” or standard.

Out of the 58 in-the-wild 0-days for the year, 39, or 67% were memory corruption vulnerabilities. Memory corruption vulnerabilities have been the standard for attacking software for the last few decades and it’s still how attackers are having success. Out of these memory corruption vulnerabilities, the majority also stuck with very popular and well-known bug classes:

  • 17 use-after-free
  • 6 out-of-bounds read & write
  • 4 buffer overflow
  • 4 integer overflow

In the next sections we’ll dive into each major platform that we saw in-the-wild 0-days for this year. We’ll share the trends and explain why what we saw was pretty unexceptional.

Chromium (Chrome)

Chromium had a record high number of 0-days detected and disclosed in 2021 with 14. Out of these 14, 10 were renderer remote code execution bugs, 2 were sandbox escapes, 1 was an infoleak, and 1 was used to open a webpage in Android apps other than Google Chrome.

The 14 0-day vulnerabilities were in the following components:

When we look at the components targeted by these bugs, they’re all attack surfaces seen before in public security research and previous exploits. If anything, there are a few fewer DOM bugs and more bugs targeting other browser components such as IndexedDB and WebGL than previously. 13 out of the 14 Chromium 0-days were memory corruption bugs. Similar to last year, most of those memory corruption bugs are use-after-free vulnerabilities.

A couple of the Chromium bugs were even similar to previous in-the-wild 0-days. CVE-2021-21166 is an issue in ScriptProcessorNode::Process() in webaudio where insufficient locking allows buffers to be accessed from both the main thread and the audio rendering thread at the same time. CVE-2019-13720 is an in-the-wild 0-day from 2019; it was a vulnerability in ConvolverHandler::Process() in webaudio where insufficient locking likewise allowed a buffer to be accessed from both the main thread and the audio rendering thread at the same time.

CVE-2021-30632 is another Chromium in-the-wild 0-day from 2021. It’s a type confusion in the TurboFan JIT in Chromium’s JavaScript engine, v8, where TurboFan fails to deoptimize code after a property map is changed. CVE-2021-30632 in particular deals with code that stores global properties. CVE-2020-16009 was also an in-the-wild 0-day that was due to TurboFan failing to deoptimize code after map deprecation.

WebKit (Safari)

Prior to 2021, Apple had only acknowledged 1 publicly known in-the-wild 0-day targeting WebKit/Safari, and that was due to sharing by an external researcher. In 2021 there were 7. This makes it hard for us to assess trends or changes since we don’t have historical samples to go off of. Instead, we’ll look at 2021’s WebKit bugs in the context of other Safari bugs not known to be in-the-wild and other browser in-the-wild 0-days.

The 7 in-the-wild 0-days targeted the following components:

The one semi-surprise is that no DOM bugs were detected and disclosed. In previous years, vulnerabilities in the DOM engine have generally made up 15-20% of the in-the-wild browser 0-days, but none were detected and disclosed for WebKit in 2021.

It would not be surprising if attackers are beginning to shift to other modules, like third party libraries or things like IndexedDB. The modules may be more promising to attackers going forward because there’s a better chance that the vulnerability may exist in multiple browsers or platforms. For example, the webaudio bug in Chromium, CVE-2021-21166, also existed in WebKit and was fixed as CVE-2021-1844, though there was no evidence it was exploited in-the-wild in WebKit. The IndexedDB in-the-wild 0-day that was used against Safari in 2021, CVE-2021-30858, was very, very similar to a bug fixed in Chromium in January 2020.

Internet Explorer

Since we began tracking in-the-wild 0-days, Internet Explorer has had a pretty consistent number of 0-days each year. 2021 actually tied 2016 for the most in-the-wild Internet Explorer 0-days we’ve ever tracked even though Internet Explorer’s market share of web browser users continues to decrease.

Bar graph showing the number of Internet Explorer itw 0-days discovered per year from 2015-2021. 2015: 3, 2016: 4, 2017: 3, 2018: 1, 2019: 3, 2020: 2, 2021: 4. Data from: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708

So why are we seeing so little change in the number of in-the-wild 0-days despite the change in market share? Internet Explorer is still a ripe attack surface for initial entry into Windows machines, even if the user doesn’t use Internet Explorer as their Internet browser. While the number of 0-days stayed pretty consistent to what we’ve seen in previous years, the components targeted and the delivery methods of the exploits changed. 3 of the 4 0-days seen in 2021 targeted the MSHTML browser engine and were delivered via methods other than the web. Instead they were delivered to targets via Office documents or other file formats.

The four 0-days targeted the following components:

For CVE-2021-26411, targets of the campaign initially received a .mht file, which prompted the user to open it in Internet Explorer. Once it was opened in Internet Explorer, the exploit was downloaded and run. CVE-2021-33742 and CVE-2021-40444 were delivered to targets via malicious Office documents.

CVE-2021-26411 and CVE-2021-33742 follow two common memory corruption bug patterns: a use-after-free, where a user-controlled callback runs in between two actions using an object and the user frees the object during that callback, and a buffer overflow.
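
To illustrate the first of those patterns, here is a minimal, hypothetical C++ sketch (not the Internet Explorer code): an object is used, a script-controlled callback runs in between, the callback frees the object, and the code then touches the stale pointer.

    // Callback-driven use-after-free: the object dies during a callback
    // that runs between two uses of it.
    #include <cstdio>
    #include <functional>

    struct Element {
        int value = 42;
        void FirstAction()  { printf("first action, value=%d\n", value); }
        void SecondAction() { printf("second action, value=%d\n", value); }  // stale read
    };

    void DoOperation(Element* elem, const std::function<void()>& user_callback) {
        elem->FirstAction();
        user_callback();       // attacker-controlled script runs here...
        elem->SecondAction();  // BUG: no check that 'elem' is still alive
    }

    int main() {
        Element* elem = new Element();
        DoOperation(elem, [&]() {
            delete elem;       // ...and frees the object mid-operation
        });
        // A correct implementation would hold a strong reference across
        // the callback or re-validate the object before the second use.
        return 0;
    }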

The exploit chain built around CVE-2021-40444 used a few different vulnerabilities, but the one within MSHTML meant that as soon as the Office document was opened the payload would run: a CAB file was downloaded, decompressed, and a function from a DLL within that CAB was executed. Unlike the previous two MSHTML bugs, this was a logic error in URL parsing rather than a memory corruption bug.

Windows

Windows is the platform where we’ve seen the most change in components targeted compared with previous years. However, this shift has been in progress for a few years and was predicted with the end-of-life of Windows 7 in 2020, which is why it’s still not especially novel.

In 2021 there were 10 Windows in-the-wild 0-days targeting 7 different components:

The number of different components targeted is the shift from past years. For example, in 2019, 75% of Windows 0-days targeted Win32k, while in 2021 Win32k made up only 20% of the Windows 0-days. This was expected and predicted because 6 out of 8 of the 0-days that targeted Win32k in 2019 did not target the latest release of Windows 10 at the time; they targeted older versions. With Windows 10, Microsoft began dedicating more and more resources to locking down the attack surface of Win32k, so as those older versions hit end-of-life, Win32k becomes a less and less attractive attack surface.

Similar to the many Win32k vulnerabilities seen over the years, the two 2021 Win32k in-the-wild 0-days are due to custom user callbacks: the user calls functions that change the state of an object during the callback, and Win32k does not correctly handle those changes. CVE-2021-1732 is a type confusion vulnerability due to a user callback in xxxClientAllocWindowClassExtraBytes which leads to out-of-bounds read and write. If NtUserConsoleControl is called during the callback, a flag is set in the window structure to signal that a field is an offset into the kernel heap. xxxClientAllocWindowClassExtraBytes doesn’t check this and writes that field as a user-mode pointer without clearing the flag. The first in-the-wild 0-day detected and disclosed in 2022, CVE-2022-21882, exists because CVE-2021-1732 was not completely fixed: the attackers found a way to bypass the original patch and still trigger the vulnerability. CVE-2021-40449 is a use-after-free in NtGdiResetDC due to the object being freed during the user callback.
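
The same callback trick can corrupt state without freeing anything. The userspace C++ sketch below is a loose, hypothetical analogy to the CVE-2021-1732 description above (not Win32k code): the callback flips a flag that changes how a field must be interpreted, but the caller writes the field under its stale, pre-callback assumption.

    // Callback changes the meaning of a field; the caller never re-checks.
    #include <cstdint>
    #include <cstdio>
    #include <functional>

    struct WindowObject {
        bool extra_is_offset = false;  // true once storage moves to privileged heap
        uint64_t extra_bytes = 0;      // raw pointer OR offset, depending on the flag
    };

    void AllocExtraBytes(WindowObject* wnd, const std::function<void()>& user_callback) {
        // Caller assumes 'extra_bytes' will hold a user-mode pointer...
        user_callback();  // ...but the callback can invalidate that assumption.

        // BUG: the flag is not re-checked, so a user-chosen value ends up
        // in a field that privileged code will now treat as a heap offset.
        wnd->extra_bytes = 0x41414141;
        printf("extra_is_offset=%d extra_bytes=0x%llx\n",
               wnd->extra_is_offset, (unsigned long long)wnd->extra_bytes);
    }

    int main() {
        WindowObject wnd;
        AllocExtraBytes(&wnd, [&]() {
            // Equivalent of calling NtUserConsoleControl during the callback:
            wnd.extra_is_offset = true;
        });
        return 0;
    }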

iOS/macOS

As discussed in the “More disclosure” section above, 2021 was the first full year that Apple annotated their release notes with in-the-wild status of vulnerabilities. 5 iOS in-the-wild 0-days were detected and disclosed this year. The first publicly known macOS in-the-wild 0-day (CVE-2021-30869) was also found. In this section we’re going to discuss iOS and macOS together because: 1) the two operating systems include similar components and 2) the sample size for macOS is very small (just this one vulnerability).

Bar graph: number of macOS and iOS in-the-wild 0-days discovered per year. macOS: 0 every year except 2021 (1). iOS: 2015: 0, 2016: 2, 2017: 0, 2018: 2, 2019: 0, 2020: 3, 2021: 5. Data from: https://docs.google.com/spreadsheets/d/1lkNJ0uQwbeC1ZTRrxdtuPLCIl7mlUreoKfSIgajnSyY/edit#gid=2129022708

The 5 total iOS and macOS in-the-wild 0-days targeted 3 different attack surfaces:

These attack surfaces are not novel. IOMobileFrameBuffer has been a target of public security research for many years. For example, the Pangu Jailbreak from 2016 used CVE-2016-4654, a heap buffer overflow in IOMobileFrameBuffer. IOMobileFrameBuffer manages the screen’s frame buffer. For iPhone 11 (A13) and below, IOMobileFrameBuffer was a kernel driver. Beginning with A14, it runs on a coprocessor, the DCP. It’s a popular attack surface because historically it’s been accessible from sandboxed apps. In 2021 there were two in-the-wild 0-days in IOMobileFrameBuffer. CVE-2021-30807 is an out-of-bounds read and CVE-2021-30883 is an integer overflow, both common memory corruption vulnerabilities. In 2022, we already have another in-the-wild 0-day in IOMobileFrameBuffer, CVE-2022-22587.
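
As a generic illustration of those bug classes (not the IOMobileFrameBuffer code), the C++ sketch below shows how an unchecked 32-bit size calculation can wrap around, producing an undersized allocation that a later copy overruns.

    // Integer overflow -> undersized buffer -> out-of-bounds writes.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    void CopyRecords(const uint8_t* data, uint32_t count, uint32_t record_size) {
        // BUG: 32-bit multiplication can wrap, e.g. count=0x01000001 and
        // record_size=0x100 gives total=0x100 instead of ~4GB.
        uint32_t total = count * record_size;
        std::vector<uint8_t> out(total);

        // The loop trusts 'count', so when the multiplication wrapped it
        // writes far beyond 'out'. A correct version checks for overflow
        // (or does the math in 64 bits) before allocating.
        for (uint32_t i = 0; i < count; ++i) {
            std::memcpy(out.data() + static_cast<size_t>(i) * record_size,
                        data + static_cast<size_t>(i) * record_size,
                        record_size);
        }
    }

    int main() {
        // Benign call; the dangerous case is attacker-controlled sizes
        // reaching a parser or driver interface.
        std::vector<uint8_t> input(16, 0xAB);
        CopyRecords(input.data(), 4, 4);
        return 0;
    }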

One iOS 0-day and the macOS 0-day both exploited vulnerabilities in the XNU kernel and both vulnerabilities were in code related to XNU’s inter-process communication (IPC) functionality. CVE-2021-1782 exploited a vulnerability in mach vouchers while CVE-2021-30869 exploited a vulnerability in mach messages. This is not the first time we’ve seen iOS in-the-wild 0-days, much less public security research, targeting mach vouchers and mach messages. CVE-2019-6625 was exploited as a part of an exploit chain targeting iOS 11.4.1-12.1.2 and was also a vulnerability in mach vouchers.

Mach messages have also been a popular target for public security research. In 2020 there were two in-the-wild 0-days also in mach messages: CVE-2020-27932 & CVE-2020-27950. This year’s CVE-2021-30869 is a pretty close variant of 2020’s CVE-2020-27932. Tielei Wang and Xinru Chi actually presented on this vulnerability at zer0con 2021 in April 2021. In their presentation, they explained that they found it while doing variant analysis on CVE-2020-27932. Tielei Wang explained via Twitter that they had found the vulnerability in December 2020 and had noticed it was fixed in beta versions of iOS 14.4 and macOS 11.2, which is why they presented it at Zer0Con. The in-the-wild exploit only targeted macOS 10, but used the same exploitation technique as the one presented.

The two FORCEDENTRY exploits (CVE-2021-30860 and the sandbox escape) were the only exploits that made us all go “wow!” this year. For CVE-2021-30860, the integer overflow in CoreGraphics, it was because:

  1. For years we’ve all heard about how attackers are using 0-click iMessage bugs and finally we have a public example, and
  2. The exploit was an impressive work of art.

The sandbox escape (CVE requested, not yet assigned) was impressive because it’s one of the few times we’ve seen a sandbox escape in-the-wild that uses only logic bugs, rather than the standard memory corruption bugs.

For CVE-2021-30860, the vulnerability itself wasn’t especially notable: a classic integer overflow within the JBIG2 parser of the CoreGraphics PDF decoder. The exploit, though, was described by Samuel Groß & Ian Beer as “one of the most technically sophisticated exploits [they]’ve ever seen”. Their blogpost shares all the details, but the highlight is that the exploit uses the logical operators available in JBIG2 to build NAND gates, which are in turn used to build its own computer architecture. The attackers then wrote the rest of the exploit to run on that custom architecture. From their blogpost:

Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It’s not as fast as Javascript, but it’s fundamentally computationally equivalent.

The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It’s pretty incredible, and at the same time, pretty terrifying.
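
To make the underlying idea concrete, here is a toy C++ sketch, entirely our own illustration and not the FORCEDENTRY code: because NAND is functionally complete, a stream of simple bit operations can be composed into arbitrary logic, for example a one-bit full adder that, chained 64 times, gives the 64-bit adder the quote describes.

    // NAND is functionally complete: NOT, AND, OR, XOR and a full adder
    // built from nothing but NAND.
    #include <cstdio>

    bool NAND(bool a, bool b) { return !(a && b); }
    bool NOT(bool a)          { return NAND(a, a); }
    bool AND(bool a, bool b)  { return NOT(NAND(a, b)); }
    bool OR(bool a, bool b)   { return NAND(NOT(a), NOT(b)); }
    bool XOR(bool a, bool b)  { return OR(AND(a, NOT(b)), AND(NOT(a), b)); }

    // One-bit full adder; chaining 64 of these yields a 64-bit adder.
    void FullAdder(bool a, bool b, bool carry_in, bool* sum, bool* carry_out) {
        *sum = XOR(XOR(a, b), carry_in);
        *carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)));
    }

    int main() {
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b)
                for (int c = 0; c <= 1; ++c) {
                    bool sum, carry;
                    FullAdder(a, b, c, &sum, &carry);
                    printf("%d + %d + %d = carry %d, sum %d\n", a, b, c, carry, sum);
                }
        return 0;
    }

The exploit did the equivalent with over 70,000 JBIG2 segment commands, as the quote above describes, rather than with a compiler and a CPU.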

This is an example of what making 0-day exploitation hard could look like: attackers having to develop a new and novel way to exploit a bug and that method requires lots of expertise and/or time to develop. This year, the two FORCEDENTRY exploits were the only 0-days out of the 58 that really impressed us. Hopefully in the future, the bar has been raised such that this will be required for any successful exploitation.

Android

There were 7 Android in-the-wild 0-days detected and disclosed this year. Prior to 2021 there had only been 1 and it was in 2019: CVE-2019-2215. Like WebKit, this lack of data makes it hard for us to assess trends and changes. Instead, we’ll compare it to public security research.

The 7 Android 0-days targeted the following components:

5 of the 7 0-days from 2021 targeted GPU drivers. This is actually not that surprising when we consider the evolution of the Android ecosystem as well as recent public security research into Android. The Android ecosystem is quite fragmented: many different kernel versions, different manufacturer customizations, etc. If an attacker wants a capability against “Android devices”, they generally need to maintain many different exploits to have a decent percentage of the Android ecosystem covered. However, if the attacker chooses to target the GPU kernel driver instead of another component, they will only need to have two exploits since most Android devices use 1 of 2 GPUs: either the Qualcomm Adreno GPU or the ARM Mali GPU.

Public security research mirrored this choice in the last couple of years as well. When developing full exploit chains (for defensive purposes) to target Android devices, Guang Gong, Man Yue Mo, and Ben Hawkes all chose to attack the GPU kernel driver for local privilege escalation. Seeing the in-the-wild 0-days also target the GPU was more of a confirmation than a revelation. Of the 5 0-days targeting GPU drivers, 3 were in the Qualcomm Adreno driver and 2 in the ARM Mali driver.

The two non-GPU driver 0-days (CVE-2021-0920 and CVE-2021-1048) targeted the upstream Linux kernel. Unfortunately, these 2 bugs shared a singular characteristic with the Android in-the-wild 0-day seen in 2019: all 3 were previously known upstream before their exploitation in Android. While the sample size is small, it’s still quite striking to see that 100% of the known in-the-wild Android 0-days that target the kernel are bugs that actually were known about before their exploitation.

The vulnerability now referred to as CVE-2021-0920 was actually found in September 2016 and discussed on the Linux kernel mailing lists. A patch was even developed back in 2016, but it didn’t end up being submitted. The bug was finally fixed in the Linux kernel in July 2021 after the detection of the in-the-wild exploit targeting Android. The patch then made it into the Android security bulletin in November 2021.

CVE-2021-1048 remained unpatched in Android for 14 months after it was patched in the Linux kernel. The Linux kernel was only vulnerable to the issue for a few weeks, but due to Android patching practices, those few weeks became almost a year of exposure for some Android devices. If an Android OEM synced to the upstream kernel, then they likely were patched against the vulnerability at some point. But many devices, such as recent Samsung devices, had not and thus were left vulnerable.

Microsoft Exchange Server

In 2021, there were 5 in-the-wild 0-days targeting Microsoft Exchange Server. This is the first time any Exchange Server in-the-wild 0-days have been detected and disclosed since we began tracking in-the-wild 0-days. The first four (CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065) were all disclosed and patched at the same time and used together in a single operation. The fifth (CVE-2021-42321) was patched on its own in November 2021. CVE-2021-42321 was demonstrated at Tianfu Cup and then discovered in-the-wild by Microsoft. While no other in-the-wild 0-days were disclosed as part of the chain with CVE-2021-42321, the attackers would have required at least another 0-day for successful exploitation since CVE-2021-42321 is a post-authentication bug.

Of the four Exchange in-the-wild 0-days used in the first campaign, CVE-2021-26855, which is also known as “ProxyLogon”, is the only one that’s pre-auth. CVE-2021-26855 is a server side request forgery (SSRF) vulnerability that allows unauthenticated attackers to send arbitrary HTTP requests as the Exchange server. The other three vulnerabilities were post-authentication. For example, CVE-2021-26858 and CVE-2021-27065 allowed attackers to write arbitrary files to the system. CVE-2021-26857 is a remote code execution vulnerability due to a deserialization bug in the Unified Messaging service. This allowed attackers to run code as the privileged SYSTEM user.
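
For readers less familiar with SSRF, the hypothetical C++ sketch below (not Exchange code, with the network call stubbed out) shows the core logic error: a front end forwards a client-supplied target using the server’s own privileges, without validating or authenticating it.

    // Server-side request forgery in miniature: the server fetches
    // whatever target the client names, with the server's own access.
    #include <cstdio>
    #include <string>

    // Stand-in for "send this request from the server itself".
    std::string FetchAsServer(const std::string& url) {
        printf("server-side request to: %s\n", url.c_str());
        return "<response>";
    }

    std::string HandleProxyRequest(const std::string& client_supplied_target) {
        // BUG: no authentication and no allow-list on the target, so an
        // unauthenticated client can make the server talk to internal
        // endpoints (or to itself) on its behalf.
        return FetchAsServer(client_supplied_target);
    }

    int main() {
        // An attacker points the "backend" at an internal-only endpoint.
        HandleProxyRequest("https://localhost:444/internal/admin");
        return 0;
    }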

For the second campaign, CVE-2021-42321, like CVE-2021-26858, is a post-authentication RCE vulnerability due to insecure deserialization. It seems that while attempting to harden Exchange, Microsoft inadvertently introduced another deserialization vulnerability.

While there were a significant number of 0-days in Exchange detected and disclosed in 2021, it’s important to remember that they were all used as 0-days in only two different campaigns. This is an example of why we don’t suggest using the number of 0-days in a product as a metric to assess the security of a product. Requiring attackers to use four 0-days to have success is preferable to an attacker needing only one 0-day to successfully gain access.

While this is the first time Exchange in-the-wild 0-days have been detected and disclosed since Project Zero began our tracking, this is not unexpected. In 2020 there was n-day exploitation of Exchange Servers. Whether this was the first year attackers began 0-day exploitation of Exchange or the first year defenders began detecting it, this is not an unexpected evolution and we’ll likely see it continue into 2022.

Outstanding Questions

While there has been progress on detection and disclosure, that progress has shown just how much work there still is to do. The more data we gained, the more questions arose about biases in detection, what we’re missing and why, and the need for more transparency from both vendors and researchers.

Until the day that attackers decide to happily share all their exploits with us, we can’t fully know what percentage of 0-days are publicly known about. However, when we pull together our expertise as security researchers and anecdotes from others in the industry, it paints a picture of some of the data we’re very likely missing. From that, these are some of the key questions we’re asking ourselves as we move into 2022:

Where are the [x] 0-days?

Despite the number of 0-days found in 2021, there are key targets missing from the 0-days discovered. For example, we know that messaging applications like WhatsApp, Signal, Telegram, etc. are targets of interest to attackers, and yet only one messaging app 0-day was found this past year, in this case in iMessage. Since we began tracking in mid-2014, the total is two: a WhatsApp 0-day in 2019 and this iMessage 0-day found in 2021.

Along with messaging apps, there are other platforms/targets we’d expect to see 0-days targeting, yet there are no or very few public examples. For example, since mid-2014 there’s only one in-the-wild 0-day each for macOS and Linux. There are no known in-the-wild 0-days targeting cloud, CPU vulnerabilities, or other phone components such as the WiFi chip or the baseband.

This leads to the question of whether these 0-days are absent due to lack of detection, lack of disclosure, or both.

Do some vendors have no known in-the-wild 0-days because they’ve never been found or because they don’t publicly disclose?

Unless a vendor has told us that they will publicly disclose exploitation status for all vulnerabilities in their platforms, we, the public, don’t know if the absence of an annotation means that there is no known exploitation of a vulnerability or if there is, but the vendor is just not sharing that information publicly. Thankfully, this question has a pretty clear solution: all device and software vendors agreeing to publicly disclose when there is evidence to suggest that a vulnerability in their product is being exploited in-the-wild.

Are we seeing the same bug patterns because that’s what we know how to detect?

As we described earlier in this report, all the 0-days we saw in 2021 had similarities to previously seen vulnerabilities. This leads us to wonder whether or not that’s actually representative of what attackers are using. Are attackers actually having success exclusively using vulnerabilities in bug classes and components that are already public? Or are we detecting all these 0-days with known bug patterns because that’s what we know how to detect? Public security research would suggest that yes, attackers are still able to have success using vulnerabilities in known components and bug classes the majority of the time. But we’d still expect to see a few novel and unexpected vulnerabilities in the grouping. We posed this question back in the 2019 year-in-review and it still lingers.

Where are the spl0itz?

To successfully exploit a vulnerability there are two key pieces that make up that exploit: the vulnerability being exploited, and the exploitation method (how that vulnerability is turned into something useful).

Unfortunately, this report could only really analyze one of these components: the vulnerability. Out of the 58 0-days, only 5 have an exploit sample publicly available. Discovered in-the-wild 0-days are the failure case for attackers and a key opportunity for defenders to learn what attackers are doing and make it harder, more time-intensive, and more costly to do it again. Yet without the exploit sample or a detailed technical write-up based upon the sample, we can only focus on fixing the vulnerability rather than also mitigating the exploitation method. This means that attackers are able to continue to use their existing exploit methods rather than having to go back to the design and development phase to build a new exploitation method. While acknowledging that sharing exploit samples can be challenging (we have that challenge too!), we hope in 2022 there will be more sharing of exploit samples or detailed technical write-ups so that we can come together to use every possible piece of information to make it harder for the attackers to exploit more users.

As an aside, if you have an exploit sample that you’re willing to share with us, please reach out. Whether it’s sharing with us and having us write a detailed technical description and analysis or having us share it publicly, we’d be happy to work with you.

Conclusion

Looking back on 2021, what comes to mind is “baby steps”. We can see clear industry improvement in the detection and disclosure of 0-day exploits. But the better detection and disclosure has highlighted other opportunities for progress. As an industry we’re not making 0-day hard. Attackers are having success using vulnerabilities similar to what we’ve seen previously and in components that have previously been discussed as attack surfaces. The goal is to force attackers to start from scratch each time we detect one of their exploits: they’re forced to discover a whole new vulnerability, they have to invest the time in learning and analyzing a new attack surface, and they must develop a brand new exploitation method. While we made distinct progress in detection and disclosure, that progress has shown us areas where we can continue to improve.

While this all may seem daunting, the promising part is that we’ve done it before: we have made clear progress on previously daunting goals. In 2019, we discussed the large detection deficit for 0-day exploits, and two years later more than double that number were detected and disclosed. So while there is still plenty more work to do, it’s a tractable problem. There are concrete steps that the tech and security industries can take to make even more progress:

  1. Make it an industry standard behavior for all vendors to publicly disclose when there is evidence to suggest that a vulnerability in their product is being exploited.
  2. Vendors and security researchers sharing exploit samples or detailed descriptions of the exploit techniques.
  3. Continued concerted efforts on reducing memory corruption vulnerabilities or rendering them unexploitable.

Through 2021 we continually saw the real-world impacts of the use of 0-day exploits against users and entities. Amnesty International, the Citizen Lab, and others highlighted over and over how governments were using commercial surveillance products against journalists, human rights defenders, and government officials. We saw many enterprises scrambling to remediate and protect themselves from the Exchange Server 0-days. And we even learned of peer security researchers being targeted by North Korean government hackers. While the majority of people on the planet do not need to worry about their own personal risk of being targeted with 0-days, 0-day exploitation still affects us all. These 0-days tend to have an outsized impact on society, so we need to continue doing whatever we can to make it harder for attackers to be successful in these attacks.

2021 showed us we’re on the right track and making progress, but there’s plenty more to be done to make 0-day hard.

Source:
https://googleprojectzero.blogspot.com/2022/04/the-more-you-know-more-you-know-you.html