In this report, we highlight the notable email threats of 2021, including over 33.6 million high-risk email threats (representing a 101% increase from 2020’s numbers) that we’ve detected using the Trend Micro Cloud App Security platform.
Email is an integral cog in the digital transformation machine. This was especially true in 2021, when organizations found themselves trying to keep business operations afloat in the middle of a pandemic that has forever changed how people work. At a time when the workplace had already largely shifted from offices to homes, malicious actors continued to favor email as a low-effort yet high-impact attack vector to disseminate malware.
Email is popular among cybercriminals not only for its simplicity but also for its efficacy. In fact, 74.1% of the total threats blocked by Trend Micro in 2021 were email threats. Meanwhile, the 2021 Internet Crime Report by the FBI’s Internet Crime Complaint Center (IC3) states that there was “an unprecedented increase in cyberattacks and malicious cyber activity” last year, with business email compromise (BEC) among the top incidents.
In this report, we discuss the notable email threats of 2021 based on the data that we’ve gathered using the Trend Micro™ Cloud App Security™, a security solution that supplements the preexisting security features in email and collaboration platforms.
Malware detections surge as attacks become more elaborate, targeted
The Trend Micro Cloud App Security solution detected and thwarted a total of 3,315,539 malware files in 2021. More urgently, this number represents a whopping 196% increase from 2020’s numbers. There were also huge spikes in both known and unknown malware detections in 2021, at 133.8% and 221%, respectively.
Cybercriminals worked overtime in 2021 to deliver malware through malicious emails using advanced tactics and social engineering lures. In January, we saw how Emotet sent spam emails that used hexadecimal and octal representations of IP addresses for detection evasion in its delivery of malware such as TrickBot and Cobalt Strike.
In May last year, we reported on Panda Stealer, an information stealer that targets cryptocurrency wallets and credentials via spam emails. We also shared an update on APT-C-36 (aka Blind Eagle), an advanced persistent threat (APT) group targeting South American entities using a spam campaign that used fraudulent emails impersonating Colombia’s national directorate of taxes and customs and even fake infidelity email lures.
QAKBOT operators also resumed their spam campaign in late 2021 after an almost three-month hiatus and abused hijacked email threads to lead victims to both QAKBOT and the SquirrelWaffle malware loader.
Meanwhile, ransomware detections continued to decline in 2021, a consistent trend that we have been seeing in previous years. Last year, the Trend Micro Cloud App Security solution detected and blocked 101,215 ransomware files — a 43.4% decrease compared to 2020’s detections.
The reason behind this continuing decline is possibly twofold: One, unlike legacy ransomware that focuses on the quantity of victims, modern ransomware focuses on waging highly targeted and planned attacks to yield bigger profits. Since today’s ransomware actors no longer abide by the spray-and-pray ransomware model, the number of attacks is no longer as massive as what we witnessed in ransomware’s early days. We identified the other reason in our year-end roundup report: it’s possible that ransomware detections are down because our cybersecurity solutions continue to block an increasing number of ransomware affiliate tools each year, including TrickBot and BazarLoader. This could have prevented ransomware attacks from being successfully executed on victim environments.
Known, unknown, and overall credential phishing attacks rose in 2021
Based on Trend Micro Cloud App Security data, 6,299,883 credential phishing attacks were detected and blocked in 2021, a 15.2% overall increase. As in the previous year, the number of known credential phishing attacks detected was greater than that of unknown ones; this year, however, the rate of increase stands at a staggering 72.8%.
When comparing 2020’s and 2021’s numbers, we saw an 8.4% increase in the number of detections for known credential phishing links, while a 30% growth was observed in the number of detections for unknown credential phishing links.
Abnormal Security noted the increase in overall credential phishing attacks in a 2021 report, stating that credential phishing accounted for 73% of all advanced threats the company analyzed.
We have also documented the rise in credential phishing attacks from previous years. In fact, in the first half of 2019, the Trend Micro Cloud App Security solution detected and blocked 2.4 million credential phishing attacks alone.
BEC’s small numbers bring big business losses
The Trend Micro Cloud App Security solution intercepted a total of 283,859 BEC attacks in 2021. Compared with 2020’s BEC detections, this number represents a 10.61% decrease. Interestingly, BEC attacks detected using Writing Style DNA increased by 82.7% this year, while those blocked using the antispam engine decreased by 38.59%.
Overall, BEC numbers have consistently been on a downward trend since 2020. But the reduction in BEC victims doesn’t equate to a dip in cybercriminal profits. According to the FBI’s IC3, BEC accounted for US$2.4 billion in adjusted losses for both businesses and consumers in 2021. According to the same organization, BEC losses have reached over US$43 billion between June 2016 and December 2021 for both domestic and international incidents.
We have also observed how BEC actors continuously tweak their tactics for ill gain. In August last year, our telemetry showed a gradual increase in BEC detections. Upon investigation, we discovered that instead of impersonating company executives and upper management personnel, this BEC-related email campaign impersonated and targeted ordinary employees for money transfers and bank payroll account changes.
Covid-19 lures, cybercriminal campaigns behind massive jump in phishing numbers
The Trend Micro Cloud App Security solution data shows that a total of 16,451,166 phishing attacks were detected and blocked in 2021. This is a 137.6% growth from 2020’s phishing numbers.
Compared with last year’s numbers, we saw a significant jump in phishing attacks detected via spam count this year: a whopping 596% increase, to be specific. Meanwhile, we observed a notable 15.26% increase in credential phishing count compared to last year.
These high numbers reflect organizations’ sentiments about phishing attacks. According to a survey in an Osterman Research report titled “How to Reduce the Risk of Phishing and Ransomware,” organizations were “concerned” or “extremely concerned” about phishing attempts making their way to end users and employees failing to spot phishing and social engineering attacks before accessing a link or attachment.
While the majority of Covid-19-related phishing emails and sites were launched in 2020, cybercriminals continued to exploit the global pandemic for financial gain in 2021. Last year, Mexico-based medical laboratory El Chopo shared that a fraudulent website that looked identical to the company’s had been launched. On that website, users could schedule a vaccination appointment after paying MXN2,700 (approximately US$130). To make the fake website appear credible, the malicious actors behind it added fake contact information such as email addresses and social media pages that victims could use for inquiries.
Early last year, we reported on a wave of phishing emails that pretended to be coming from national postal systems. This campaign attempted to steal credit card numbers from 26 countries. We also investigated a spear-phishing campaign that used Pegasus spyware-related emails to lead victims into downloading a file stealer. This campaign targeted high-ranking political leaders, activists, and journalists in 11 countries.
Protect emails, endpoints, and cloud-based services and apps from attacks with Trend Micro Cloud App Security
Organizations should consider a comprehensive multilayered security solution such as Trend Micro Cloud App Security. It supplements the preexisting security features in email and collaboration platforms like Microsoft 365 and Google Workspace (formerly known as G Suite) by using machine learning (ML) to analyze and detect any suspicious content in the message body and attachments of an email. It also acts as a second layer of protection after emails and files have passed through Microsoft 365 or Gmail’s built-in security.
Trend Micro Cloud App Security uses technologies such as sandbox malware analysis, document exploit detection, and file, email, and web reputation technologies to detect malware hidden in Microsoft 365 or PDF documents. It provides data loss prevention (DLP) and advanced malware protection for Box, Dropbox, Google Drive, SharePoint Online, OneDrive for Business, and Salesforce while also enabling consistent DLP policies across multiple cloud-based applications. It also offers seamless integration with an organization’s existing cloud setup, preserving full user and administrator functionality, providing direct cloud-to-cloud integration through vendor APIs, and minimizing the need for additional resources by assessing threat risks before sandbox malware analysis.
Trend Micro Cloud App Security stands on the cutting edge of email and software-as-a-service (SaaS) security, offering ML-powered features that combat two of the primary email-based threats: BEC and credential phishing. Writing Style DNA can help determine if an email is legitimate by using ML to check a user’s writing style based on past emails and then comparing suspicious emails against it. Computer vision, on the other hand, combines image analysis and ML to check branded elements, login forms, and other site content. It then pools this information with site reputation elements and optical character recognition (OCR) to check for fake and malicious sites — all while reducing instances of false positives to detect credential phishing email.
This security solution also comes with an option to rescan historical URLs in users’ email metadata and perform continued remediation (automatically taking configured actions or restoring quarantined messages) using newer patterns updated by Web Reputation Services.
This is a significant option since users’ email metadata might include undetected suspicious or dangerous URLs that have only recently been discovered. The examination of such metadata is thus an important part of forensic investigations that help determine if your email service has been affected by attacks. This solution also officially supports the Time-of-Click Protection feature to protect Exchange Online users against potential risks when they access URLs in incoming email messages.
Trend Micro Cloud App Security also comes with the advanced and extended security capabilities of Trend Micro XDR, providing investigation, detection, and response across your endpoints, email, and servers.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA), along with the Coast Guard Cyber Command (CGCYBER), on Thursday released a joint advisory warning of continued attempts on the part of threat actors to exploit the Log4Shell flaw in VMware Horizon servers to breach target networks.
“Since December 2021, multiple threat actor groups have exploited Log4Shell on unpatched, public-facing VMware Horizon and [Unified Access Gateway] servers,” the agencies said. “As part of this exploitation, suspected APT actors implanted loader malware on compromised systems with embedded executables enabling remote command-and-control (C2).”
In one instance, the adversary is said to have been able to move laterally inside the victim network, obtain access to a disaster recovery network, and collect and exfiltrate sensitive law enforcement data.
Log4Shell, tracked as CVE-2021-44228 (CVSS score: 10.0), is a remote code execution vulnerability affecting the Apache Log4j logging library that’s used by a wide range of consumers and enterprise services, websites, applications, and other products.
Successful exploitation of the flaw could enable an attacker to send a specially-crafted command to an affected system, enabling the actors to execute malicious code and seize control of the target.
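In practice, the crafted command is usually a JNDI lookup string planted in any field that the vulnerable server ends up logging, such as HTTP headers or usernames. As a rough, hypothetical illustration of one common triage step (not CISA's or VMware's tooling), the Python sketch below greps a log file for JNDI-style lookup strings, including one simple nested-obfuscation form; the pattern and file name are examples only, and real scanners use far more exhaustive rules.

```python
import re
import sys

# Matches plain ${jndi:...} lookups and simple nested obfuscations such as
# ${${lower:j}ndi:...}. Illustrative only; not an exhaustive detection rule.
JNDI_PATTERN = re.compile(r"\$\{(?:\$\{[^}]*\}|j)ndi:", re.IGNORECASE)

def scan_log(path: str) -> None:
    """Print every line in a log file that contains a JNDI-style lookup string."""
    with open(path, "r", errors="replace") as handle:
        for number, line in enumerate(handle, start=1):
            if JNDI_PATTERN.search(line):
                print(f"{path}:{number}: possible Log4Shell probe: {line.strip()}")

if __name__ == "__main__":
    # Example invocation: python scan_log4shell.py /var/log/example-app.log
    scan_log(sys.argv[1])
```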
Based on information gathered as part of two incident response engagements, the agencies said that the attackers weaponized the exploit to drop rogue payloads, including PowerShell scripts and a remote access tool dubbed “hmsvc.exe” that’s equipped with capabilities to log keystrokes and deploy additional malware.
“The malware can function as a C2 tunneling proxy, allowing a remote operator to pivot to other systems and move further into a network,” the agencies noted, adding it also offers a “graphical user interface (GUI) access over a target Windows system’s desktop.”
The PowerShell scripts, observed in the production environment of a second organization, facilitated lateral movement, enabling the APT actors to implant loader malware containing executables that include the ability to remotely monitor a system’s desktop, gain reverse shell access, exfiltrate data, and upload and execute next-stage binaries.
Furthermore, the adversarial collective leveraged CVE-2022-22954, a remote code execution vulnerability in VMware Workspace ONE Access and Identity Manager that came to light in April 2022, to deliver the Dingo J-spy web shell.
Ongoing Log4Shell-related activity even after more than six months suggests that the flaw is of high interest to attackers, including state-sponsored advanced persistent threat (APT) actors, who have opportunistically targeted unpatched servers to gain an initial foothold for follow-on activity.
According to cybersecurity company ExtraHop, Log4j vulnerabilities have been subjected to relentless scanning attempts, with financial and healthcare sectors emerging as an outsized market for potential attacks.
“Log4j is here to stay, we will see attackers leveraging it again and again,” IBM-owned Randori said in an April 2022 report. “Log4j is buried deep into layers and layers of shared third-party code, leading us to the conclusion that we’ll see instances of the Log4j vulnerability being exploited in services used by organizations that use a lot of open source.”
Security researchers found that Adobe Acrobat is trying to block security software from having visibility into the PDF files it opens, creating a security risk for the users.
Adobe’s product checks whether components from 30 security products are loaded into its processes and likely blocks them, essentially preventing them from monitoring for malicious activity.
Flagging incompatible AVs
For a security tool to work, it needs visibility into all processes on the system, which is achieved by injecting dynamic-link libraries (DLLs) into software products launching on the machine.
PDF files have been abused in the past to execute malware on the system. One method is to add a command in the ‘OpenAction’ section of the document to run PowerShell commands for malicious activity, explain the researchers at cybersecurity company Minerva Labs.
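For context, ‘OpenAction’ is a key in the PDF catalog that tells the reader what to do the moment the document is opened, which is why it is a favorite spot for triggering embedded JavaScript or launch actions. As a very crude, hypothetical triage aid (simple substring matching on the raw bytes rather than real PDF parsing, and keeping in mind that object streams are often compressed so keywords may not appear in plain text), a Python sketch could look like this:

```python
# Tokens that frequently show up in weaponized PDFs. Their presence alone is
# not proof of malice, only a reason to inspect the file more closely.
SUSPICIOUS_TOKENS = [b"/OpenAction", b"/AA", b"/JavaScript", b"/JS", b"/Launch", b"/EmbeddedFile"]

def triage_pdf(path):
    """Return the suspicious tokens found in the raw bytes of a PDF file."""
    with open(path, "rb") as handle:
        data = handle.read()
    return [token.decode() for token in SUSPICIOUS_TOKENS if token in data]

if __name__ == "__main__":
    hits = triage_pdf("sample.pdf")  # hypothetical file name
    print("Suspicious tokens:", ", ".join(hits) if hits else "none found")
```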
“Since March of 2022 we’ve seen a gradual uptick in Adobe Acrobat Reader processes attempting to query which security product DLLs are loaded into it by acquiring a handle of the DLL” – Minerva Labs
According to a report this week, the list has grown to include 30 DLLs from security products of various vendors. Among the more popular ones with consumers are Bitdefender, Avast, Trend Micro, Symantec, Malwarebytes, ESET, Kaspersky, F-Secure, Sophos, Emsisoft.
Querying the system is done with ‘libcef.dll’, a Chromium Embedded Framework (CEF) Dynamic Link Library used by a wide variety of programs.
While the Chromium DLL comes with a short list of components to be blacklisted because they cause conflicts, vendors using it can make modifications and add any DLL they want.
Chromium’s list of hardcoded DLLs, source: Minerva Labs
The researchers explain that “libcef.dll is loaded by two Adobe processes: AcroCEF.exe and RdrCEF.exe” so both products are checking the system for components of the same security products.
Looking closer at what happens with the DLLs injected into Adobe processes, Minerva Labs found that Adobe checks whether the bBlockDllInjection value under the registry key ‘SOFTWARE\Adobe\Adobe Acrobat\DC\DLLInjection\’ is set to 1. If so, it will prevent antivirus software’s DLLs from being injected into its processes.
It is worth noting that the registry key’s value when Adobe Reader runs for the first time is ‘0’ and that it can be modified at any time.
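A quick way to see how your own installation is configured is to read that value directly. The Python sketch below assumes the key lives under HKEY_LOCAL_MACHINE, as the quoted path suggests; the hive and exact path may vary by Acrobat version, so treat it as an illustration rather than a definitive check.

```python
import winreg

# Path as reported by Minerva Labs; adjust if your Acrobat version differs.
KEY_PATH = r"SOFTWARE\Adobe\Adobe Acrobat\DC\DLLInjection"

def read_dll_injection_flag(hive=winreg.HKEY_LOCAL_MACHINE):
    """Return the bBlockDllInjection value, or None if the key or value is absent."""
    try:
        with winreg.OpenKey(hive, KEY_PATH) as key:
            value, _value_type = winreg.QueryValueEx(key, "bBlockDllInjection")
            return value
    except FileNotFoundError:
        return None

if __name__ == "__main__":
    flag = read_dll_injection_flag()
    # 1 means Acrobat will block security product DLLs, 0 means it will not.
    print("bBlockDllInjection =", flag)
```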
“With the registry key name dBlockDllInjection, and looking at the cef documentation, we can assume that the blacklisted DLLs are designated to be unloaded” – Minerva Labs
According to Minerva Labs researcher Natalie Zargarov, the default value for the registry key is set to ‘1’ – indicating active blocking. This setting may depend on the operating system or the Adobe Acrobat version installed, as well as other variables on the system.
In a post on the Citrix forums on March 28, a user complaining about Sophos AV errors due to having an Adobe product installed said that the company “suggested to disable DLL-injection for Acrobat and Reader.”
Adobe responding to Citrix user experiencing errors on machine with Sophos AV
Working on the problem
Replying to BleepingComputer, Adobe confirmed that users have reported experiencing issues due to DLL components from some security products being incompatible with Adobe Acrobat’s usage of the CEF library.
“We are aware of reports that some DLLs from security tools are incompatible with Adobe Acrobat’s usage of CEF, a Chromium based engine with a restricted sandbox design, and may cause stability issues” – Adobe
The company added that it is currently working with these vendors to address the problem and “to ensure proper functionality with Acrobat’s CEF sandbox design going forward.”
Minerva Labs researchers argue that Adobe chose a solution that solves compatibility problems but introduces a real attack risk by preventing security software from protecting the system.
BleepingComputer has contacted Adobe with further questions about the conditions under which the DLL blocking occurs and will update the article once we have the information.
7-zip has finally added support for the long-requested ‘Mark-of-the-Web’ Windows security feature, providing better protection from malicious downloaded files.
When you download documents and executables from the web, Windows adds a special ‘Zone.Id’ alternate data stream to the file called the Mark-of-the-Web (MoTW).
This identifier tells Windows and supported applications that the file was downloaded from another computer or the Internet and, therefore, could be a risk to open.
When you attempt to open a downloaded file, Windows will check if a MoTW exists and, if so, display additional warnings to the user, asking if they are sure they wish to run the file.
Launching a downloaded executable containing a MoTW Source: BleepingComputer
Microsoft Office will also check for the Mark-of-the-Web, and if found, it will open documents in Protected View, with the file in read-only mode and macros disabled.
Word document opened in Protected View Source: BleepingComputer
To check if a downloaded file has the Mark-of-the-Web, you can right-click on it in Windows Explorer and open its properties.
If the file contains a MoTW, you will see a message at the bottom stating, “This file came from another computer and might be blocked to help protect this computer.”
File property indicator for the Mark-of-the-Web Source: BleepingComputer
If you trust the file and its source, you can put a check in the ‘Unblock‘ box and click on the ‘Apply‘ button, which will remove the MoTW from the file.
Furthermore, running the file for the first time and allowing it to open will also remove the MoTW, so warnings are not shown in the future.
7-zip adds support for Mark-of-the-Web
7-zip is one of the most popular archiving programs in the world, but, until now, it lacked support for Mark-of-the-Web.
This meant that if you downloaded an archive from the Internet and extracted it with 7-zip, the Mark-of-the-Web would not propagate to the extracted files, and Windows would not treat the extracted files as risky.
For example, if you downloaded a ZIP file containing a Word document, the ZIP file would have a MoTW, but the extracted Word document would not. Therefore, Microsoft Office would not open the file in Protected View.
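On NTFS, the Mark-of-the-Web is simply a small ‘Zone.Identifier’ alternate data stream attached to the file, so it can be read and copied like any other stream. The Python sketch below is a rough illustration of what “propagating” the mark means; the file paths are hypothetical, and this is not how 7-zip implements the feature internally.

```python
from pathlib import Path

def read_motw(path):
    """Return the contents of a file's Zone.Identifier stream, or None if it has no MoTW."""
    try:
        return Path(path + ":Zone.Identifier").read_text()
    except OSError:
        return None

def propagate_motw(archive, extracted_file):
    """Copy the Mark-of-the-Web from a downloaded archive onto an extracted file (NTFS only)."""
    motw = read_motw(archive)
    if motw is not None:
        Path(extracted_file + ":Zone.Identifier").write_text(motw)

if __name__ == "__main__":
    # A typical stream reads: [ZoneTransfer] / ZoneId=3, where 3 is the Internet zone.
    propagate_motw(r"C:\Users\me\Downloads\report.zip", r"C:\Temp\report.docx")
    print(read_motw(r"C:\Temp\report.docx"))
```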
Over the years, numerous security researchers, developers, and engineers have requested that the 7-Zip developer, Igor Pavlov, add the security feature to his archiving utility.
Pavlov said he doesn’t like the feature as it adds extra overhead to the program.
“The overhead for that property (additional Zone Identifier stream for each file) is not good in some cases,” explained Pavlov in a 7-zip bug report.
However, this all changed last week after Pavlov added a new setting in 7-zip 22.00 that enables you to propagate MoTW streams from downloaded archives to their extracted files.
To enable this setting, search for and open the ‘7-Zip File Manager,’ and when it opens, click on Tools and then Options. Under the 7-Zip tab, you will now see a new option titled ‘Propagate Zone.Id stream’ and the ability to set it to ‘No,’ ‘Yes,’ or ‘For Office files.’
Set this option to ‘Yes’ (or to ‘For Office files,’ which is less secure) and then press the OK button.
New Propagate Zone.Id stream in 7-Zip Source: BleepingComputer
With this setting enabled, when you download an archive and extract its files, the Mark-of-the-Web will also propagate to the extracted files.
With this additional security, Windows will now prompt you as to whether you wish to run downloaded files and Microsoft Office will open documents in Protected View, offering increased security.
To take advantage of this new feature, you can download 7-zip 22.0 from 7-zip.org.
QNAP has warned customers today that some of its Network Attached Storage (NAS) devices (with non-default configurations) are vulnerable to attacks that would exploit a three-year-old critical PHP vulnerability allowing remote code execution.
“A vulnerability has been reported to affect PHP versions 7.1.x below 7.1.33, 7.2.x below 7.2.24, and 7.3.x below 7.3.11. If exploited, the vulnerability allows attackers to gain remote code execution,” QNAP explained in a security advisory released today.
“To secure your device, we recommend regularly updating your system to the latest version to benefit from vulnerability fixes.”
The Taiwanese hardware vendor has already patched the security flaw (CVE-2019-11043) for some operating system versions exposed to attacks (QTS 5.0.1.2034 build 20220515 or later and QuTS hero h5.0.0.2069 build 20220614 or later).
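For reference, CVE-2019-11043 is a php-fpm bug that is only reachable through certain nginx configurations, and the advisory expresses the affected versions per PHP branch. The hypothetical Python helper below only checks whether a version string falls inside those branch ranges; it says nothing about whether the vulnerable nginx/php-fpm setup is actually in use on a device.

```python
def is_vulnerable_php(version):
    """Return True if a PHP version string falls in the ranges named in the advisory:
    7.1.x below 7.1.33, 7.2.x below 7.2.24, and 7.3.x below 7.3.11."""
    first_fixed = {(7, 1): 33, (7, 2): 24, (7, 3): 11}
    major, minor, patch = (int(part) for part in version.split(".")[:3])
    fixed_patch = first_fixed.get((major, minor))
    return fixed_patch is not None and patch < fixed_patch

if __name__ == "__main__":
    for candidate in ("7.2.20", "7.3.11", "7.4.0"):
        status = "in the affected range" if is_vulnerable_php(candidate) else "not in the affected range"
        print(candidate, status)
```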
However, the bug affects a wide range of devices running:
QTS 5.0.x and later
QTS 4.5.x and later
QuTS hero h5.0.x and later
QuTS hero h4.5.x and later
QuTScloud c5.0.x and later
QNAP customers who want to automatically update their NAS devices to the latest firmware need to log on to QTS, QuTS hero, or QuTScloud as administrator and click the “Check for Update” button under Control Panel > System > Firmware Update.
You can also manually upgrade your device after downloading the update on the QNAP website from Support > Download Center.
QNAP devices targeted by ransomware
Today’s warning comes after the NAS maker warned its customers on Thursday to secure their devices against active attacks deploying DeadBolt ransomware payloads.
BleepingComputer also reported over the weekend that ech0raix ransomware has started targeting vulnerable QNAP NAS devices again, according to sample submissions on the ID Ransomware platform and multiple reports from users who had their systems encrypted.
Until QNAP issues more details on ongoing attacks, the infection vector used in these new DeadBolt and ech0raix campaigns remains unknown.
While QNAP is working on patching the CVE-2019-11043 PHP vulnerability in all vulnerable firmware versions, you should ensure that your device is not exposed to Internet access as an easy way to block incoming attacks.
As QNAP has advised in the past, users with Internet-exposed NAS devices should take the following measures to prevent remote access:
Disable the Port Forwarding function of the router: Go to the management interface of your router, check the Virtual Server, NAT, or Port Forwarding settings, and disable the port forwarding setting of the NAS management service port (ports 8080 and 443 by default).
Disable the UPnP function of the QNAP NAS: Go to myQNAPcloud on the QTS menu, click the “Auto Router Configuration,” and unselect “Enable UPnP Port forwarding.”
QNAP also provides detailed info on how to toggle off remote SSH and Telnet connections, change the system port number, change device passwords, and enable IP and account access protection to further secure your device.
Update June 22, 08:45 EDT: After this story was published, QNAP’s PSIRT team updated the original advisory and told BleepingComputer that devices with default configurations are not impacted by CVE-2019-11043.
Also, QNAP said that the Deadbolt ransomware attacks are targeting devices running older system software (released between 2017 and 2019).
For CVE-2019-11043, described in QSA-22-20, to affect our users, there are some prerequisites that need to be met, which are:
nginx is running, and
php-fpm is running.
As we do not have nginx in our software by default, QNAP NAS are not affected by this vulnerability in their default state. If nginx is installed by the user and running, then the update provided with QSA-22-20 should be applied as soon as possible to mitigate associated risks.
We are updating our security advisory QSA-22-20 to reflect the facts stated above. Again we would like to point out that most QNAP NAS users are not affected by this vulnerability since its prerequisites are not met. The risk only exists when there is user-installed nginx present in the system.
We have also updated the story to reflect the new information provided by QNAP.
Working in infrastructure has been a blast since I went down that route many years ago. One of the most enjoyable things in this line of work is learning about cool tech and playing around with it in a VMware homelab project for instance. Running a homelab involves sacrificing some of your free time and dedicating it to learning and experimenting.
Now, it is obvious that learning without a purpose is a tricky business as motivation tends to fade quite quickly. For that reason, it is best to work towards a goal and use your own hardware to conduct a VMware homelab project that will earn you a certification, give you material for interesting blog posts, automate things in your home, or support a learning path toward a specific job or a different career track. When interviewing for engineering roles, companies are receptive to candidates who push the envelope to sharpen their skills and don’t fear investing time and money to get better.
This article is a bit different than usual as we, at Altaro, decided to have a bit of fun! We asked our section editors, authors, as well as third-party authors to talk about their homelabs. We set a rough structure regarding headlines to keep things consistent but we also wanted to leave freedom to the authors as VMware homelab projects are all different and serve a range of specific purposes.
In my honest opinion, a home lab is one of the best investments in my learning and career goals that I have made. However, as the investment isn’t insignificant, why would I recommend owning and running a home lab environment? What do you use it for? What considerations should you make when purchasing equipment and servers?
Around ten years ago, I decided that having my own personal learning environment and sandbox would benefit all the projects and learning goals I had in mind. So, the home lab was born! Like many IT admins out there, my hobby and my full-time job are geeking out on technology. So, I wanted to have access at home to the same technologies, applications, and server software I use in my day job.
Why do you have a lab?
Like many, I started with a “part-time” VMware homelab project running inside VMware Workstation. So, the first hardware I purchased was a Dell Precision workstation with 32 gigs of memory. Instead of running vSphere on top of the hardware, I ran VMware Workstation. I believe this may have been before the VMUG Advantage subscription was available, or at least before I knew about it.
I would advise anyone thinking of owning and operating a home lab to start small. Running a lab environment inside VMware Workstation, Hyper-V, Virtualbox, or another solution is a great way to get a feel for the benefits of using a home lab environment. It may also be that a few VMs running inside VMware Workstation or another workstation-class hypervisor is all you need.
For my purposes, the number of workloads and technologies I wanted to play around with outgrew what I was able to do inside VMware Workstation. So, after a few years of running VMware Workstation on several other workstation-class machines, I decided to invest in actual servers. The great thing about a home lab is you are only constrained in its design by your imagination (and perhaps funds). Furthermore, unlike production infrastructure, you can redesign and repurpose along the way as you see fit. As a result, the home lab can be very fluid for your needs.
What’s your setup?
I have written quite a bit about my home lab environment, detailing hardware and software. On the hardware side of things, I am a fan of Supermicro servers. I have found the Supermicro kits to be very stable and affordable, and many are supported on VMware’s HCL for installing vSphere, etc.
Enclosure
Sysracks 27U server enclosure
Servers
I have the following models of Supermicro servers:
(4) Supermicro SYS-5028D-TN4T
Mini tower form factor
(3) are in a vSAN cluster
(1) is used as a standalone host in other testing
(1) SYS-E301-9D-8CN8TP
Mini 1-U (actually 1.5 U) form factor
This host is used as another standalone host for various testing and nested labs
Networking
Cisco SG350-28 – Top of rack switch for 1 gig connectivity with (4) 10 gig SFP ports
Ubiquiti – Edgeswitch 10 Gig, TOR for Supermicro servers
Cisco SG300-20 – Top of rack IDF
Storage
VMFS datastores running on consumer-grade NVMe drives
vSAN datastore running on consumer-grade NVMe drives, (1) disk group per server
Synology Diskstation 1621xs+ – 30 TB of useable space
In terms of license requirements, I cannot stress enough how incredible the VMUG Advantage subscription is for obtaining real software licensing to run VMware solutions. It is arguably the most “bang for your buck” in terms of software you will purchase in your VMware homelab project. For around $200 (you can find coupons most of the year), you can access the full suite of VMware solutions, including vSphere, NSX-T, VMware Horizon, vRealize Automation, vRealize Operations, etc.
The VMUG Advantage subscription is how I started with legitimate licensing in the VMware home lab environment and have maintained a VMUG Advantage subscription ever since. You can learn more about the VMUG advantage subscription here: » VMUG Advantage Membership.
I used Microsoft Evaluation Center licensing for Windows, which is valid for 180 days, generally long enough for most of my lab scenarios.
What software am I running?
The below list is only an excerpt, as there are too many items, applications, and solutions to list. As I mentioned, my lab is built on top of VMware solutions. In it, I have the following running currently:
vSphere 7.0 Update 3d with latest updates
vCenter Server 7.0 U3d with the latest updates
vSAN 7.0 Update 3
vRealize Operations Manager
vRealize Automation
vRealize Network Insight
VMware NSX-T
Currently using Windows Server 2022 templates
Linux templates are Ubuntu Server 21.10 and 20.04
Running Gitlab and Jenkins for CI/CD
I have a CI/CD pipeline process that I use to keep VM templates updated with the latest builds
Running vSAN nested labs with various configurations
Running vSphere with Tanzu with various containers on top of Tanzu
Running Rancher Kubernetes clusters
Do I leverage the cloud?
Even though I have a VMware homelab project, I do leverage the cloud. For example, I have access to AWS and Azure and often use these to build out PoC environments and services between my home lab and the cloud to test real-world scenarios for hybrid cloud connectivity for clients and learning purposes.
What does your roadmap look like?
I am constantly looking at new hardware and better equipment across the board on the hardware roadmap. It would be nice to get 25 gig networking in the lab environment at some point in the future. Also, I am looking at new Supermicro models with the refreshed Ice Lake Xeon-D processors.
On the software/solutions side, I am on a continuous path to learning new coding and DevOps skills, including new Infrastructure-as-Code solutions. Also, Kubernetes is always on my radar, and I continue to use the home lab to learn new Kubernetes skills. I want to continue building new Kubernetes solutions with containerized workloads in the home lab environment, which is on the agenda this year in the lab environment.
Any horror stories to share?
One of the more memorable homelab escapades involved accidentally wiping out an entire vSAN datastore because I had mislabeled two of my Supermicro servers. So, when I reloaded two of the servers, I realized I had rebuilt the wrong ones. Thankfully, I am the CEO, CIO, and IT Manager of the home lab environment, and I had backups of my VMs.
I like to light up my home lab server rack
One of the recent additions to the VMware homelab project this year has been LED lights. I ran LED light strips along the outer edge of my server rack and can change the color via remote or have the lights cycle through different colors on a timer. You can check out a walkthrough of my home lab environment (2022 edition with lights) here: VMware Home Lab Tour 2022 Edition Server Room with LED lights at night! A geek’s delight! – YouTube
When I started my career in IT, I didn’t have any sort of lab and relied exclusively on the environment I had at work to learn new things and play around with tech. This got me started with running virtual machines in VMware Workstation at home, but computers back then (10 years ago) didn’t commonly come with 16GB of RAM, so I had to get crafty with resources.
When studying for the VCP exam, things started to get a bit frustrating, as running a vCenter with just two vSphere nodes on 16 GB of RAM is cumbersome (and slow). At that point, I was lucky enough to be able to use a fairly good test environment at work to delay the inevitable, and I managed to get the certification without investing a penny in hardware or licenses.
I then changed employers and started technical writing, so I needed capacity to play around with, and resources pile up fast when you add vSAN, NSX, SRM, and other VMware products into the mix. For that reason, I decided to get myself a homelab that would be dedicated to messing around. I started with Intel NUC mini-PCs like many of us and then moved to a more solid Dell rack server that I am currently running.
I decided to go the second-hand route as it was so much cheaper, and I don’t really care about official support; newer software usually works unless it’s running on dinosaur hardware. I got a great deal on a Dell R430. My requirements were pretty easy, as I basically needed lots of cores, plenty of memory, a fair amount of storage, and an out-of-band card for when I’m not at home and need to perform power actions on it.
What’s your setup?
I am currently running my cluster labs nested on the R430, and I run workloads natively in VMs when possible. For instance, the DC, NSX Manager, VCD, and vCenter run in VMs on the physical host, but I have a nested vSAN cluster with NSX-T networking managed by that same vCenter Server. This is the most consolidated way I could think of while still offering flexibility.
Dell R430
VMware vSphere ESXi 7 Update 3
2 x Intel Xeon E5-2630 v3 (2 x 8 pCores @2.40GHz)
128GB of RAM
6 x 300GB 15K rpm in RAID 5 (1.5TB usable)
PERC H730 mini
Dual 550W power supply (only one connected)
iDRAC 8 enterprise license
I keep the firmware up to date with Dell OME running in a VM in a workstation on my laptop that I fire up every now and again (when I have nothing better to do).
On the side, I also have a Gigabyte mini-PC running. That one is installed with Ubuntu Server with K3s (Kubernetes) running on it. I use it to run a bunch of home automation stuff that is managed by ArgoCD from a private GitHub repository (GitOps); that way I can track my changes through commits and pull requests. I also use it for CAPV to quickly provision Kubernetes (and Tanzu TCE) clusters in my lab.
Gigabyte BSi3-6100
Ubuntu 20.04 LTS
Core i3 6th gen
8GB of ram
I also have an old Synology DS115j NAS (Network Attached Storage) that participates in the home automation stuff. It is also a target for vCenter backups and, using Altaro VM Backup, for a few VMs I don’t want to have to rebuild. It’s only 1TB, but I am currently considering my options to replace it with a more powerful model with more storage.
Network-wise, all the custom stuff happens nested with OpnSense and NSX-T; I try to keep my home network as simple as possible and avoid complicating it any further than needed.
I currently don’t leverage any cloud services on a daily basis but I spin up the odd instance or cloud service now and again to check out new features or learn about new tech in general.
I try to keep my software and firmware as up-to-date as possible. However, it tends to depend on what I’m currently working on or interested in. I haven’t touched my Horizon install in a while but I am currently working with my NSX-T + ALB + VCD + vSAN setup to deploy a Kubernetes cluster with Cluster API.
“VMware homelab project architecture”
What do you like and don’t like about your setup?
I like that I have a great deal of flexibility by having a pool of resources that I can consume with nested installs or native VMs. I can scratch projects and start over easily.
However, I slightly underestimated storage requirements, and 1.5TB is proving a bit tricky as I have to really keep an eye on it to avoid filling it up. My provisioning ratio is currently around 350%, so I don’t want to hit the 100% used space mark. And finding spare 15K SAS disks isn’t as easy as I’d hoped.
What does your roadmap look like?
As mentioned, I’m reaching a point where storage can become a bottleneck as interoperable VMware products require more and more resources (NSX-T + ALB + Tanzu + VCD …). I could add a couple of disks but that would only add 600GB of storage and I’ll have to find 15K rpm 300GB disks with caddies so not an easy find. For that reason, I’m considering getting a NAS that I can then use as NFS or iSCSI storage backend with SSDs.
Things I am currently checking out include VMware Cloud Director with NSX-T and ALB integration and Kubernetes on top of all that. I’d also like to get in touch with CI/CD pipelines and other cloud-native stuff.
Any horror stories to share?
The latest to date: my physical ESXi host was running on a consumer-grade USB key plugged into the internal USB port, and the USB key got fried after a few months of usage. My whole environment was running on this host and I had no backup at the time. Luckily, I was able to reinstall it on a new USB key (plugged into the external port) and re-register all my resources one by one manually.
Also, note that I am incredibly ruthless with my home lab, as I only turn it on when needed. When I am done with it, none of that proper shutdown sequence, thanks very much: I trigger the shutdown of the physical host from vCenter, which takes care of stopping the VMs, and sometimes I even push the actual physical button (yes, there’s one). While I haven’t nuked anything that way somehow, I would pay to see my boss’s face should I stop production hypervisors with the button!
Ivo Beerens
https://www.ivobeerens.nl/
Why do you have a lab?
The home lab is mainly used for learning, testing new software versions, and automating new image releases. Back when I started down this journey, my first home lab was in the Novell NetWare 3.11 era, which I acquired using my own money, with no employer subvention.
My main considerations when deciding what to purchase were low noise, low power consumption for running 24x7, room for PCI-Express cards, and NVMe support.
What’s your setup?
From a hardware standpoint, computing power is handled by two Shuttle barebone machines with the following specifications:
500 W Plus Silver PSU
Intel Core i7 8700 with 6 cores and 12 threads
64 GB memory
Samsung 970 EVO 1 TB m.2
2 x 1 GbE Network cards
Both barebones are running the latest VMware vSphere version.
In terms of storage, I opted for a separate QNAP TS-251+ NAS with two Western Digital (WD) Red 8 TB disks in a RAID-1 configuration. The barebone machines have NVMe drives with no RAID protection.
The bulk of my workloads are hosted on VMware vSphere, and for the VDI solution, I run VMware Horizon with Windows 10/11 VDIs. Cloud-wise, I use an Azure Visual Studio subscription for testing IaaS and Azure Virtual Desktop services.
I manage the environments by automating as much as possible using Infrastructure as Code (IaC). I automated the installation process of almost every part so I can start over from scratch whenever I want.
What do you like and don’t like about your setup?
I obviously really enjoy the flexibility that automation brings to the table. However, the limited resources (128 GB of memory at most) can sometimes be a limiting factor. I also miss having remote management boards such as HPE iLO, Dell iDRAC, or a KVM switch to facilitate hardware operations.
What does your roadmap look like?
I currently have plans in the works to upgrade to a 10 GbE switch and bump the memory to 128GB per barebone.
Paolo Valsecchi
https://nolabnoparty.com/
Why do you have a lab?
I am an IT professional and I often find myself in the situation of implementing new products and configurations without having the right knowledge or tested procedures at hand. Since it is a bad idea to experiment with things directly on production environments, having a lab is the ideal solution to learn, study, and practice new products or test new configurations without the hassle of messing up critical workloads.
Because I’m also a blogger, I study and test procedures to publish them on my blog. This required a better test environment than what I had. Since my computer didn’t have enough resources to allow complex deployments, in 2015 I decided to invest some money and build my own home lab.
It was clear that the ideal lab was not affordable due to high costs. For that reason, I decided to start with a minimum set of equipment to extend later. It took a while before finding the configuration that met the requirements. After extensive research on the Internet, I was finally able to complete the design by comparing other lab setups.
My requirements for the lab were simple: Low power, cost-effective hardware, acceptable performance, at least two nodes, one external storage, compatibility with the platforms I use, and components size.
What’s your setup?
Despite my lab still meeting my requirements, it is starting to be a little bit obsolete now. My current lab setup is the following:
PROD Servers: 3 x Supermicro X11SSH-L4NF
Intel Xeon E3-1275v5
64GB RAM
2TB WD Red
DR Server: Intel NUC NUC8i3BEH
Intel Core i3-8109U
32GB RAM
Kingston SA1000M8 240G SSD A1000
Storage PROD: Synology DS918
12TB WD Red RAID5
250GB read/write cache
16GB RAM
Storage Backup: Synology DS918
12TB WD Red RAID5
8GB RAM
Storage DR: Synology DS119j + 3TB WD Red
Switch: Cisco SG350-28
Router: Ubiquiti USG
UPS: APC 1400
The lab is currently composed of a three-node cluster running VMware vSphere 7.0.2 with vSAN as the main storage. Physical shared storage devices are configured with RAID 5 and connected to vSphere or backup services via NFS or dedicated LUNs.
Installed Windows Servers run version 2016 or 2019, while Linux VMs belong to different distributions and their versions may vary.
My lab runs different services, such as:
VMware vSphere and vSAN
Active Directory, ADFS, Office 365 sync
VMware Horizon
Different backup solutions (at least 6 different products including Altaro)
In terms of Cloud service, I use cloud object storage (S3 and S3-compatible) solutions for backup purposes. I also use Azure to manage services such as Office 365, Active Directory and MFA. Due to high costs, workloads running on AWS or Azure are just created on-demand and for specific tests.
I try to keep the software always up to date with in-place upgrades, except for Windows Server, which I always reinstall. Only once did I have to wipe the lab due to hardware failure.
What do you like and don’t like about your setup?
With my current setup, I’m able to run the workloads I need and do my tests. Let’s say I’m satisfied with my lab, but…
The vSAN disks are not SSDs (only the cache is), the RAM installed on each host is limited to 64GB, and the network speed is 1 Gbps. These constraints affect the performance and the number of running machines, which demand ever more resources.
What does your roadmap look like?
To enhance my lab, the replacement of HDDs with SSDs is the first step in my roadmap. Smaller physical servers to better fit in my room as well as a 10 Gbps network would be the icing on the cake. Unfortunately, this means replacing most of the installed hardware in my lab.
Any horror stories to share?
After moving my lab from my former company to my house, the original air conditioning system in use during the very first days was not so good, and a hot summer was fatal to my hardware: the storage with all my backups failed, losing a lot of important VMs. Unfortunately, I had deleted those very VMs from the lab just days before. I spent weeks re-creating them all! I now have a better cooling system and a stronger backup strategy (3-2-1!).
I use my Home LAB primarily for testing various products to explore new features and functionality that I’d never played with before. This greatly helps me in learning about the product as well as testing it.
I decided to go for a Home Lab 4 years ago because of the complete flexibility and control you have over your own environment. You can easily (or not) deploy, configure and manage things yourself. I bought my Dell Workstation directly from Dell by customizing its configuration according to my needs and requirements.
The first thing I considered was whether it should be bare metal with Rack servers, Network Switches and Storage devices or simply nested virtualization inside VMware Workstation. I went for the nested virtualization route for flexibility and convenience and sized the hardware resources according to what I needed at the time.
What’s your setup?
My home lab is pretty simple, it is made up of a Dell Workstation, a TP link switch and a Portable hard drive.
Dell Workstation:
Dell Precision Tower 5810
Intel Xeon E5-2640v4 10 Core processor
96 GB of DDR4 Memory
2x1TB of SSDs
2 TB of Portable hard drive
Windows 10 with VMware Workstation
At the moment I run a variety of VMs such as ESXi hosts, AD-DNS, backup software, a mail server, and a number of Windows and Linux boxes. Because all VMs run on VMware Workstation, no additional network configuration is required, as all VMs can interact with each other on virtual networks.
Since my Home LAB is on VMware Workstation, it gives me the flexibility to keep up-to-date versions as well as older versions to test and compare features, for instance. Because it runs in VMware Workstation, I often get to wipe out and recreate the complete setup. Whenever newer versions are released, I always upgrade to try out new features.
What do you like and don’t like about your setup?
I like the flexibility VMware Workstation gives me to set things up easily and scratch them just as easily.
On the other hand, there are a number of things I can’t explore, such as setting up solutions directly on the physical server, working on firmware, configuring storage and RAID levels, configuring networking and routing, and so on.
What does your roadmap look like?
Since I bought my Dell Workstation, I constantly keep an eye on the resources to avoid running out of capacity. In the near future, I plan to continue with that trend but I am considering buying a new one to extend the capacity.
However, I am currently looking at buying a NAS device to provide shared storage capacity to the compute node(s). While I don’t use any just now, my future home lab may include cloud services at some point.
Any horror stories to share?
A couple of mistakes I made in the home lab include failing to create DNS records before deploying a solution, a messed-up vCenter upgrade that required deploying new vCenter Servers, and a failed Standard Switch to Distributed Switch migration that caused a network outage and required resetting the whole networking stack.
Simon Cranney
https://esxsi.com/
Why do you have a lab?
A couple of years ago I stood up my first proper VMware home lab project. I had messed about with running VMware Workstation on a gaming PC in the past, but this time I wanted something I could properly get my teeth into and have a VMware vSphere home lab without resource contention.
Prior to this, I had no home lab. Many people who are fortunate enough to work in large enterprise infrastructure environments may be able to fly under the radar and play about with technologies on work hardware. I cannot confirm nor deny if this was something I used to do! But hey, learning and testing new technologies benefits the company in the long run.
What’s your setup?
Back to the current VMware home lab then, I had a budget in mind so ended up going with a pair of Intel NUC boxes. Each with 32 GB RAM and a 1 TB PCIe NVMe SSD.
The compute and storage are used to run a fairly basic VMware vSphere home lab setup. I have a vCenter Server as you’d expect, a 2-node vSAN cluster, and vRealize Operations Manager, with a couple of Windows VMs running Active Directory and some different applications depending on what I’m working on at any given point in time.
My VMware home lab licenses are all obtained free of charge through the VMware vExpert program but there are other ways of accessing VMware home lab licenses such as through the VMUG Advantage membership or even the vSphere Essentials Plus Kit. If you are building a VMware home lab though, why not blog about it and shoot for the VMware vExpert application?
In terms of networking, I’ve put in a little more effort! Slightly out of scope here, but in a nutshell:
mini rack with the Ubiquiti UniFi Dream Machine Pro
UniFi POE switch
And a number of UniFi Access Points providing full house and garden coverage
I separate out homelab and trusted devices onto an internal network, partner and guest devices onto an external network, and smart devices or those that like to listen onto a separate IoT network. Each network is backed by a different VLAN and associated firewall rules.
What do you like and don’t like about your setup?
Being 8th Generation, the Intel NUC boxes caused me some pain when upgrading to vSphere 7. I used the Community Network Driver for ESXi Fling and played about adding some USB network adapters to build out distributed switches.
I’m also fortunate enough to be running a VMware SD-WAN (VeloCloud) Edge device, which plugs directly into my works docking station and optimizes my corporate network traffic for things like Zoom and Teams calls.
What does your roadmap look like?
In the future, I’d like to connect my VMware home lab project to some additional cloud services, predominantly in AWS. This will allow me to deep dive into technologies like VMware Tanzu, by getting hands-on with the deployment and configuration.
Whilst VMware Hands-on Labs are an excellent resource, like many techies I do find that the material sticks and resonates more when I have had to figure out integrations and fixes in a real-life environment. I hope you found my setup interesting. I’d love to hear in the comments section if you’re running VMware Tanzu in your home lab and from any other UniFi fans!
Get More Out of Your Homelab
It is always fun to discuss home labs and discover how your peers do it. It’s a great way to share “tips and tricks” and to learn from the success and failures of others. Hardware is expensive and so is electricity, real estate to store it and so on.
Learn how to design on a budget for the VMware homelab building process
For these reasons and many others, you should ask yourself a few questions before even looking at home lab options to better steer your research towards something that will fit your needs:
Do I need hardware, Cloud services or both? On-premise hardware involves investing a chunk of money at the beginning but it means you are in total control of the budget as electricity will be the only variable from now on. On the other hand, cloud services will let you pay for only what you use. It can be very expensive but it could also be economical under the right circumstances. Also, some of you will only require Azure services because it’s your job, while I couldn’t run VMware Cloud Director, NSX-T and ALB in the cloud.
Do you have limited space or noise constraints? Rack and tower servers are cool, but they are bulky and loud. A large number of IT professionals went for small, passive, and silent mini-PCs such as the Intel NUC. These grew in popularity after William Lam from VMware endorsed them and network drivers for the USB adapters were released as Flings. These small form factor machines are great and offer pretty good performance with i3, i5, or i7 processors. You can get a bunch of these to build a cluster that won’t use up much energy and won’t make a peep.
Nested or Bare-Metal? Another question that is often asked is if you should run everything bare-metal. I personally like the flexibility of nested setups but it’s also because I don’t have the room for a rack at home (and let’s face it, I would get bad looks!). However, as you saw in this blog, people go for one or the other for various reasons and you will have to find yours.
What do you want to get out of it? If you are in the VMware dojo, you most likely are interested in testing VMware products. Meaning vSphere will probably be your go-to platform. In which case you will have to think about licenses. Sure, you can use evaluation licenses but you’ll have to start over every 60 days, not ideal at all. The vExpert program and the VMUG advantage program are your best bets in this arena. On the other hand, if you are only playing with Open-source software you can install Kubernetes, OpenStack or KVM on bare metal for instance and you won’t have to pay for anything.
How many resources do you need? This question goes hand in hand with the next one. While playing around with vSphere, vCenter, or vSAN won’t set you back that much, if you want to get into Cloud Director, Tanzu, NSX-T, and the like, you will find that they literally eat up CPU, memory, and storage for breakfast. So, try to look at the resource requirements for the products you want to test in order to get a rough idea of what you will need.
What is your budget? Now the tough question: how much do you want to spend, both on hardware and on energy (which links back to small form factor machines)? It is important to set yourself a budget and not just start buying stuff for the sake of it (unless you have the funds). Home lab setups are expensive and, while you might get a 42U rack full of servers for cheap on the second-hand market, your energy bill will skyrocket. On the other hand, a very cheap setup will still cost you a certain amount of money, but you may not get anything out of it due to hardware limitations. So set yourself a budget and try to find the sweet spot.
Check compatibility: Again, don’t jump in guns blazing at the first offer. Double-check that the hardware is compatible with whatever you want to evaluate. Sure, it is likely to work even if it isn’t in the VMware HCL, but it is always worth it to do your research to look for red flags before buying.
Those are only a few key points I could think of but I’d be happy to hear about yours in the comments!
Is a VMware Homelab Worth it?
We think that getting a home lab is definitely worth it. While the money aspect might seem daunting at first, investing in a home lab is investing in yourself. The wealth of knowledge you can get from 16 cores/128GB servers is lightyears away from running VMware Workstation on your 8 cores/16GB laptop. Even though running products in a lab isn’t real-life experience, this might be the differentiating factor that gets you that dream job you’ve been after. And once you get it, the $600 you spent for that home lab will feel like money well spent with a great ROI!
VMware Homelab Alternatives
However, if your objective is to learn about VMware products in a guided way and you are not ready to buy a home lab just yet for whatever reason, fear not: online options are there for you! You can always start with the VMware Hands-on Labs (HOL), which offer a large number of learning paths where you can get to grips with most of the products sold by the company. In fact, many of them couldn’t even be tested in a home lab (especially the cloud ones like Carbon Black or Workspace ONE). Head over to https://pathfinder.vmware.com/v3/page/hands-on-labs and register for the Hands-on Labs to start learning instantly.
The other option to run a home lab for cheap is to install VMware Workstation on your local machine if you have enough resources. This is, in almost all cases, the first step before moving to a more serious and expensive setup.
To protect your VMware environment, Altaro offers the ultimate VMware backup service to quickly and securely back up and replicate your virtual machines. We work hard around the clock to give our customers confidence in their backup strategy.
Plus, you can visit our VMware blog to keep up with the latest articles and news on VMware.
What Homelab Set Up is Right for You?
I think we will all agree that our work doesn’t fit within the traditional 9-to-5, as keeping our skills up is also part of the job and it can’t always be done on company time. Sometimes we’ll be too busy, or we might simply want to learn about something that has nothing to do with the company’s business. Home labs aren’t limited to VMware or Azure infrastructure and what your employer needs. You can put them to good use by running an overkill Wi-Fi infrastructure or by managing your movie collection with an enterprise-grade, highly resilient setup that many SMBs would benefit from. The great thing about it is that it is useful on a practical and personal level while also being good fun (if you’re a nerd like me).
Gathering testimonials about VMware homelab projects and discussing each other’s setups has been a fun and very interesting exercise. It is also beneficial to see what is being done out there and to identify ways to improve and optimize our own setups. I now know that I need an oversized shared storage device in my home (this will be argued)!
Now we would love to hear about the VMware homelab project you run at home. Let’s have a discussion in the comments section!
A Year in Review of 0-days Used In-the-Wild in 2021
Posted by Maddie Stone, Google Project Zero
This is our third annual year in review of 0-days exploited in-the-wild [2020, 2019]. Each year we’ve looked back at all of the detected and disclosed in-the-wild 0-days as a group and synthesized what we think the trends and takeaways are. The goal of this report is not to detail each individual exploit, but instead to analyze the exploits from the year as a group, looking for trends, gaps, lessons learned, successes, etc. If you’re interested in the analysis of individual exploits, please check out our root cause analysis repository.
We perform and share this analysis in order to make 0-day hard. We want it to be more costly, more resource intensive, and overall more difficult for attackers to use 0-day capabilities. 2021 highlighted just how important it is to stay relentless in our pursuit to make it harder for attackers to exploit users with 0-days. We heard over and over and over about how governments were targeting journalists, minoritized populations, politicians, human rights defenders, and even security researchers around the world. The decisions we make in the security and tech communities can have real impacts on society and our fellow humans’ lives.
We’ll provide our evidence and process for our conclusions in the body of this post, and then wrap it all up with our thoughts on next steps and hopes for 2022 in the conclusion. If digging into the bits and bytes is not your thing, then feel free to just check out the Executive Summary and Conclusion.
Executive Summary
2021 included the detection and disclosure of 58 in-the-wild 0-days, the most ever recorded since Project Zero began tracking in mid-2014. That’s more than double the previous maximum of 28 detected in 2015 and especially stark when you consider that there were only 25 detected in 2020. We’ve tracked publicly known in-the-wild 0-day exploits in this spreadsheet since mid-2014.
While we often talk about the number of 0-day exploits used in-the-wild, what we’re actually discussing is the number of 0-day exploits detected and disclosed as in-the-wild. And that leads into our first conclusion: we believe the large uptick in in-the-wild 0-days in 2021 is due to increased detection and disclosure of these 0-days, rather than simply increased usage of 0-day exploits.
With this record number of in-the-wild 0-days to analyze we saw that attacker methodology hasn’t actually had to change much from previous years. Attackers are having success using the same bug patterns and exploitation techniques and going after the same attack surfaces. Project Zero’s mission is “make 0day hard”. 0-day will be harder when, overall, attackers are not able to use public methods and techniques for developing their 0-day exploits. When we look over these 58 0-days used in 2021, what we see instead are 0-days that are similar to previous & publicly known vulnerabilities. Only two 0-days stood out as novel: one for the technical sophistication of its exploit and the other for its use of logic bugs to escape the sandbox.
So while we recognize the industry’s improvement in the detection and disclosure of in-the-wild 0-days, we also acknowledge that there’s a lot more improving to be done. Having access to more “ground truth” of how attackers are actually using 0-days shows us that they are able to have success by using previously known techniques and methods rather than having to invest in developing novel techniques. This is a clear area of opportunity for the tech industry.
We had so many more data points in 2021 to learn about attacker behavior than we’ve had in the past. Having all this data, though, has left us with even more questions than we had before. Unfortunately, attackers who actively use 0-day exploits do not share the 0-days they’re using or what percentage of 0-days we’re missing in our tracking, so we’ll never know exactly what proportion of 0-days are currently being found and disclosed publicly.
Based on our analysis of the 2021 0-days we hope to see the following progress in 2022 in order to continue taking steps towards making 0-day hard:
All vendors agree to disclose the in-the-wild exploitation status of vulnerabilities in their security bulletins.
Exploit samples or detailed technical descriptions of the exploits are shared more widely.
Continued concerted efforts on reducing memory corruption vulnerabilities or rendering them unexploitable. Launch mitigations that will significantly impact the exploitability of memory corruption vulnerabilities.
A Record Year for In-the-Wild 0-days
2021 was a record year for in-the-wild 0-days. So what happened?
Is it that software security is getting worse? Or is it that attackers are using 0-day exploits more? Or has our ability to detect and disclose 0-days increased? When looking at the significant uptick from 2020 to 2021, we think it’s mostly explained by the latter. While we believe there has been a steady growth in interest and investment in 0-day exploits by attackers in the past several years, and that security still needs to urgently improve, it appears that the security industry’s ability to detect and disclose in-the-wild 0-day exploits is the primary explanation for the increase in observed 0-day exploits in 2021.
While we often talk about “0-day exploits used in-the-wild”, what we’re actually tracking are “0-day exploits detected and disclosed as used in-the-wild”. There are more factors than just the use that contribute to an increase in that number, most notably: detection and disclosure. Better detection of 0-day exploits and more transparent disclosure of exploited 0-day vulnerabilities are positive indicators for security and progress in the industry.
Overall, we can break down the uptick in the number of in-the-wild 0-days into:
More detection of in-the-wild 0-day exploits
More public disclosure of in-the-wild 0-day exploitation
More detection
In the 2019 Year in Review, we wrote about the “Detection Deficit”. We stated “As a community, our ability to detect 0-days being used in the wild is severely lacking to the point that we can’t draw significant conclusions due to the lack of (and biases in) the data we have collected.” In the last two years, we believe that there’s been progress on this gap.
Anecdotally, we hear from more people that they’ve begun working more on detection of 0-day exploits. Quantitatively, while a very rough measure, we’re also seeing the number of entities credited with reporting in-the-wild 0-days increasing. It stands to reason that if the number of people working on trying to find 0-day exploits increases, then the number of in-the-wild 0-day exploits detected may increase.
We’ve also seen the number of vendors detecting in-the-wild 0-days in their own products increasing. Whether or not these vendors were previously working on detection, vendors seem to have found ways to be more successful in 2021. Vendors likely have the most telemetry and overall knowledge and visibility into their products so it’s important that they are investing in (and hopefully having success in) detecting 0-days targeting their own products. As shown in the chart above, there was a significant increase in the number of in-the-wild 0-days discovered by vendors in their own products. Google discovered 7 of the in-the-wild 0-days in their own products and Microsoft discovered 10 in their products!
More disclosure
The second reason why the number of detected in-the-wild 0-days has increased is due to more disclosure of these vulnerabilities. Apple and Google Android (we differentiate “Google Android” rather than just “Google” because Google Chrome has been annotating their security bulletins for the last few years) first began labeling vulnerabilities in their security advisories with the information about potential in-the-wild exploitation in November 2020 and January 2021 respectively. When vendors don’t annotate their release notes, the only way we know that a 0-day was exploited in-the-wild is if the researcher who discovered the exploitation comes forward. If Apple and Google Android had not begun annotating their release notes, the public would likely not know about at least 7 of the Apple in-the-wild 0-days and 5 of the Android in-the-wild 0-days. Why? Because these vulnerabilities were reported by “Anonymous” reporters. If the reporters didn’t want credit for the vulnerability, it’s unlikely that they would have gone public to say that there were indications of exploitation. That is 12 0-days that wouldn’t have been included in this year’s list if Apple and Google Android had not begun transparently annotating their security advisories.
Kudos and thank you to Microsoft, Google Chrome, and Adobe who have been annotating their security bulletins for transparency for multiple years now! And thanks to Apache who also annotated their release notes for CVE-2021-41773 this past year.
In-the-wild 0-days in Qualcomm and ARM products were annotated as in-the-wild in Android security bulletins, but not in the vendor’s own security advisories.
It’s highly likely that in 2021, there were other 0-days that were exploited in the wild and detected, but vendors did not mention this in their release notes. In 2022, we hope that more vendors start noting when they patch vulnerabilities that have been exploited in-the-wild. Until we’re confident that all vendors are transparently disclosing in-the-wild status, there’s a big question of how many in-the-wild 0-days are discovered, but not labeled publicly by vendors.
New Year, Old Techniques
We had a record number of “data points” in 2021 to understand how attackers are actually using 0-day exploits. A bit surprising to us though, out of all those data points, there was nothing new amongst all this data. 0-day exploits are considered one of the most advanced attack methods an actor can use, so it would be easy to conclude that attackers must be using special tricks and attack surfaces. But instead, the 0-days we saw in 2021 generally followed the same bug patterns, attack surfaces, and exploit “shapes” previously seen in public research. Once “0-day is hard”, we’d expect that, to be successful, attackers would have to find new bug classes of vulnerabilities in new attack surfaces using never-before-seen exploitation methods. In general, that wasn’t what the data showed us this year. With two exceptions (described below in the iOS section) out of the 58, everything we saw was pretty “meh” or standard.
Out of the 58 in-the-wild 0-days for the year, 39 (67%) were memory corruption vulnerabilities. Memory corruption vulnerabilities have been the standard for attacking software for the last few decades, and it’s still how attackers are having success. Out of these memory corruption vulnerabilities, the majority also stuck with very popular and well-known bug classes (a minimal sketch of the dominant class, use-after-free, follows the list):
17 use-after-free
6 out-of-bounds read & write
4 buffer overflow
4 integer overflow
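To make the dominant bug class concrete, here is a deliberately minimal, hedged C sketch of a use-after-free, not taken from any of the exploits above: an object is freed on one code path while a stale pointer to it remains live, so a later attacker-influenced allocation can reuse the freed memory before the stale pointer is used. Running it is undefined behavior by design; it exists only to show the three-step free/reuse/use shape.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative object: a function pointer makes the bug exploitable. */
struct message {
    void (*on_close)(struct message *);
    char body[64];
};

static struct message *current;            /* dangles after free_message() */

static void free_message(void) {
    free(current);                          /* memory released...           */
    /* BUG: current is not cleared, so the pointer now dangles. */
}

static void attacker_heap_reuse(void) {
    /* A later allocation of the same size may land on the freed chunk,
     * so attacker-chosen bytes overlap the stale struct message. */
    char *spray = malloc(sizeof(struct message));
    if (spray)
        memset(spray, 'A', sizeof(struct message));   /* leaked on purpose */
}

static void use_message(void) {
    /* BUG: dereferences the dangling pointer; on_close may now be
     * attacker-controlled data (0x4141... after the spray above). */
    if (current && current->on_close)
        current->on_close(current);
}

int main(void) {
    current = calloc(1, sizeof(struct message));
    free_message();          /* 1: free  */
    attacker_heap_reuse();   /* 2: reuse */
    use_message();           /* 3: use   */
    return 0;
}
```

Real-world instances are far less obvious than this, but the underlying lifetime mistake is the same.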
In the next sections we’ll dive into each major platform that we saw in-the-wild 0-days for this year. We’ll share the trends and explain why what we saw was pretty unexceptional.
Chromium (Chrome)
Chromium had a record high number of 0-days detected and disclosed in 2021 with 14. Out of these 14, 10 were renderer remote code execution bugs, 2 were sandbox escapes, 1 was an infoleak, and 1 was used to open a webpage in Android apps other than Google Chrome.
The 14 0-day vulnerabilities were in the following components:
When we look at the components targeted by these bugs, they’re all attack surfaces seen before in public security research and previous exploits. If anything, there are a few fewer DOM bugs and more bugs targeting other browser components, like IndexedDB and WebGL, than previously. 13 out of the 14 Chromium 0-days were memory corruption bugs. Similar to last year, most of those memory corruption bugs are use-after-free vulnerabilities.
A couple of the Chromium bugs were even similar to previous in-the-wild 0-days. CVE-2021-21166 is an issue in ScriptProcessorNode::Process() in webaudio where there are insufficient locks such that buffers are accessible in both the main thread and the audio rendering thread at the same time. CVE-2019-13720 is an in-the-wild 0-day from 2019. It was a vulnerability in ConvolverHandler::Process() in webaudio where there were also insufficient locks such that a buffer was accessible in both the main thread and the audio rendering thread at the same time.
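As a rough illustration of that bug pattern (insufficient locking, not the actual Chromium code), the hedged C sketch below shares a heap buffer between a “main” thread and a “render” thread: the main thread frees and replaces the buffer while the render thread is still reading it, because neither side takes the lock that should guard the pointer.

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative only: a buffer shared by two threads with no locking. */
static float *shared_buf;
static size_t shared_len = 128;
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER; /* never taken: the bug */

static void *render_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        /* BUG: reads shared_buf without holding buf_lock; the pointer may be
         * freed or reallocated by the main thread mid-iteration. */
        float sum = 0.0f;
        for (size_t j = 0; j < shared_len; j++)
            sum += shared_buf[j];
        (void)sum;
    }
    return NULL;
}

int main(void) {
    shared_buf = calloc(shared_len, sizeof(float));
    pthread_t t;
    pthread_create(&t, NULL, render_thread, NULL);

    for (int i = 0; i < 1000; i++) {
        /* BUG: frees and replaces the buffer without holding buf_lock,
         * racing with the reader above (use-after-free on the old buffer). */
        float *next = calloc(shared_len, sizeof(float));
        free(shared_buf);
        shared_buf = next;
    }

    pthread_join(t, NULL);
    free(shared_buf);
    return 0;
}
```

Compile with -pthread; the data race is the point, not a defect of the sketch.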
CVE-2021-30632 is another Chromium in-the-wild 0-day from 2021. It’s a type confusion in the TurboFan JIT in Chromium’s JavaScript engine, V8, where TurboFan fails to deoptimize code after a property map is changed. CVE-2021-30632 in particular deals with code that stores global properties. CVE-2020-16009 was also an in-the-wild 0-day that was due to TurboFan failing to deoptimize code after map deprecation.
WebKit (Safari)
Prior to 2021, Apple had only acknowledged 1 publicly known in-the-wild 0-day targeting WebKit/Safari, and that was due to the sharing by an external researcher. In 2021 there were 7. This makes it hard for us to assess trends or changes since we don’t have historical samples to go off of. Instead, we’ll look at 2021’s WebKit bugs in the context of other Safari bugs not known to be in-the-wild and other browser in-the-wild 0-days.
The 7 in-the-wild 0-days targeted the following components:
The one semi-surprise is that no DOM bugs were detected and disclosed. In previous years, vulnerabilities in the DOM engine have generally made up 15-20% of the in-the-wild browser 0-days, but none were detected and disclosed for WebKit in 2021.
It would not be surprising if attackers are beginning to shift to other modules, like third party libraries or things like IndexedDB. The modules may be more promising to attackers going forward because there’s a better chance that the vulnerability may exist in multiple browsers or platforms. For example, the webaudio bug in Chromium, CVE-2021-21166, also existed in WebKit and was fixed as CVE-2021-1844, though there was no evidence it was exploited in-the-wild in WebKit. The IndexedDB in-the-wild 0-day that was used against Safari in 2021, CVE-2021-30858, was very, very similar to a bug fixed in Chromium in January 2020.
Internet Explorer
Since we began tracking in-the-wild 0-days, Internet Explorer has had a pretty consistent number of 0-days each year. 2021 actually tied 2016 for the most in-the-wild Internet Explorer 0-days we’ve ever tracked even though Internet Explorer’s market share of web browser users continues to decrease.
So why are we seeing so little change in the number of in-the-wild 0-days despite the change in market share? Internet Explorer is still a ripe attack surface for initial entry into Windows machines, even if the user doesn’t use Internet Explorer as their Internet browser. While the number of 0-days stayed pretty consistent to what we’ve seen in previous years, the components targeted and the delivery methods of the exploits changed. 3 of the 4 0-days seen in 2021 targeted the MSHTML browser engine and were delivered via methods other than the web. Instead they were delivered to targets via Office documents or other file formats.
The four 0-days targeted the following components:
For CVE-2021-26411, targets of the campaign initially received a .mht file, which prompted the user to open it in Internet Explorer. Once it was opened in Internet Explorer, the exploit was downloaded and run. CVE-2021-33742 and CVE-2021-40444 were delivered to targets via malicious Office documents.
CVE-2021-26411 and CVE-2021-33742 were two common memory corruption bug patterns: a use-after-free (a user-controlled callback runs in between two actions using an object, and the user frees the object during that callback) and a buffer overflow.
There were a few different vulnerabilities used in the exploit chain that used CVE-2021-40444, but the one within MSHTML was that as soon as the Office document was opened the payload would run: a CAB file was downloaded, decompressed, and then a function from within a DLL in that CAB was executed. Unlike the previous two MSHTML bugs, this was a logic error in URL parsing rather than a memory corruption bug.
Windows
Windows is the platform where we’ve seen the most change in components targeted compared with previous years. However, this shift has generally been in progress for a few years and was predicted with the end-of-life of Windows 7 in 2020, which is why it’s still not especially novel.
In 2021 there were 10 Windows in-the-wild 0-days targeting 7 different components:
The number of different components targeted is the shift from past years. For example, in 2019 75% of Windows 0-days targeted Win32k while in 2021 Win32k only made up 20% of the Windows 0-days. The reason that this was expected and predicted was that 6 out of 8 of those 0-days that targeted Win32k in 2019 did not target the latest release of Windows 10 at that time; they were targeting older versions. With Windows 10 Microsoft began dedicating more and more resources to locking down the attack surface of Win32k so as those older versions have hit end-of-life, Win32k is a less and less attractive attack surface.
Similar to the many Win32k vulnerabilities seen over the years, the two 2021 Win32k in-the-wild 0-days are due to custom user callbacks. The user calls functions that change the state of an object during the callback and Win32k does not correctly handle those changes. CVE-2021-1732 is a type confusion vulnerability due to a user callback in xxxClientAllocWindowClassExtraBytes which leads to out-of-bounds read and write. If NtUserConsoleControl is called during the callback a flag is set in the window structure to signal that a field is an offset into the kernel heap. xxxClientAllocWindowClassExtraBytes doesn’t check this and writes that field as a user-mode pointer without clearing the flag. The first in-the-wild 0-day detected and disclosed in 2022, CVE-2022-21882, is due to CVE-2021-1732 actually not being fixed completely. The attackers found a way to bypass the original patch and still trigger the vulnerability. CVE-2021-40449 is a use-after-free in NtGdiResetDC due to the object being freed during the user callback.
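A stripped-down, hedged C sketch of that general pattern (state changed during a user callback and never re-checked; purely illustrative, not Windows code) might look like this: the callback flips the object into “offset” mode, and the caller then writes a raw user pointer into a field the rest of the code will treat as a trusted offset.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative object: 'extra' is either a raw user pointer or, once
 * FLAG_OFFSET is set, an offset into a kernel-side allocation. */
#define FLAG_OFFSET 0x1u

struct window {
    uint32_t  flags;
    uintptr_t extra;
};

typedef void (*user_callback_t)(struct window *);

/* Stands in for the state change a user can trigger during the callback:
 * the object switches into "offset" mode. */
static void switch_to_offset_mode(struct window *w) {
    w->flags |= FLAG_OFFSET;
    w->extra = 0x100;                    /* now an offset, not a pointer */
}

static void alloc_extra_bytes(struct window *w, user_callback_t cb,
                              uintptr_t user_ptr) {
    cb(w);            /* user-controlled callback runs and mutates state */

    /* BUG: the flag set inside the callback is never re-checked, so a raw
     * user pointer is written into a field the rest of the code now
     * interprets as a trusted offset -> type confusion. */
    w->extra = user_ptr;
}

int main(void) {
    struct window w = {0};
    alloc_extra_bytes(&w, switch_to_offset_mode, (uintptr_t)0x41414141u);
    printf("flags=%#x extra=%#lx (offset-mode field holds a raw pointer)\n",
           w.flags, (unsigned long)w.extra);
    return 0;
}
```

The fix for this class is to re-validate (or snapshot) any state the callback can change before acting on it, which is essentially what the real patches do.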
iOS/macOS
As discussed in the “More disclosure” section above, 2021 was the first full year that Apple annotated their release notes with in-the-wild status of vulnerabilities. 5 iOS in-the-wild 0-days were detected and disclosed this year. The first publicly known macOS in-the-wild 0-day (CVE-2021-30869) was also found. In this section we’re going to discuss iOS and macOS together because: 1) the two operating systems include similar components and 2) the sample size for macOS is very small (just this one vulnerability).
The 5 total iOS and macOS in-the-wild 0-days targeted 3 different attack surfaces:
These attack surfaces are not novel. IOMobileFrameBuffer has been a target of public security research for many years. For example, the Pangu Jailbreak from 2016 used CVE-2016-4654, a heap buffer overflow in IOMobileFrameBuffer. IOMobileFrameBuffer manages the screen’s frame buffer. For iPhone 11 (A13) and below, IOMobileFrameBuffer was a kernel driver. Beginning with A14, it runs on a coprocessor, the DCP. It’s a popular attack surface because historically it’s been accessible from sandboxed apps. In 2021 there were two in-the-wild 0-days in IOMobileFrameBuffer. CVE-2021-30807 is an out-of-bounds read and CVE-2021-30883 is an integer overflow, both common memory corruption vulnerabilities. In 2022, we already have another in-the-wild 0-day in IOMobileFrameBuffer, CVE-2022-22587.
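For reference, the integer overflow class usually looks something like the hedged C sketch below (an illustration of the bug class, not Apple’s code): an attacker-controlled count multiplied by an element size wraps around in 32-bit arithmetic, so the allocation ends up far smaller than the copy that follows.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustration of the integer-overflow bug class. */
struct sample { uint32_t fields[16]; };   /* 64 bytes per element */

struct sample *load_samples(uint32_t count, const uint8_t *data,
                            size_t data_len) {
    /* BUG: the size is computed in 32 bits, so a large count
     * (e.g. 0x04000001) wraps to a tiny value and the buffer is
     * undersized (0x04000001 * 64 wraps to 64 bytes). */
    uint32_t alloc_size = count * (uint32_t)sizeof(struct sample);
    struct sample *out = malloc(alloc_size);
    if (!out)
        return NULL;

    /* The copy still trusts 'count', so it can run far past the
     * undersized allocation: a heap buffer overflow. */
    size_t want = (size_t)count * sizeof(struct sample);
    if (want > data_len)
        want = data_len;
    memcpy(out, data, want);
    return out;
}

int main(void) {
    /* Benign call only; the bug triggers with a huge 'count'. */
    uint8_t data[64] = {0};
    struct sample *ok = load_samples(1, data, sizeof(data));
    free(ok);
    return 0;
}
```

Computing the size in a wide type (or checking for overflow before multiplying) closes this class of bug.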
One iOS 0-day and the macOS 0-day both exploited vulnerabilities in the XNU kernel and both vulnerabilities were in code related to XNU’s inter-process communication (IPC) functionality. CVE-2021-1782 exploited a vulnerability in mach vouchers while CVE-2021-30869 exploited a vulnerability in mach messages. This is not the first time we’ve seen iOS in-the-wild 0-days, much less public security research, targeting mach vouchers and mach messages. CVE-2019-6625 was exploited as a part of an exploit chain targeting iOS 11.4.1-12.1.2 and was also a vulnerability in mach vouchers.
Mach messages have also been a popular target for public security research. In 2020 there were two in-the-wild 0-days also in mach messages: CVE-2020-27932 & CVE-2020-27950. This year’s CVE-2021-30869 is a pretty close variant of 2020’s CVE-2020-27932. Tielei Wang and Xinru Chi actually presented on this vulnerability at zer0con 2021 in April 2021. In their presentation, they explained that they found it while doing variant analysis on CVE-2020-27932. Tielei Wang explained via Twitter that they had found the vulnerability in December 2020 and had noticed it was fixed in beta versions of iOS 14.4 and macOS 11.2, which is why they presented it at zer0con. The in-the-wild exploit only targeted macOS 10, but used the same exploitation technique as the one presented.
The two FORCEDENTRY exploits (CVE-2021-30860 and the sandbox escape) were the only times that made us all go “wow!” this year. For CVE-2021-30860, the integer overflow in CoreGraphics, it was because:
For years we’ve all heard about how attackers are using 0-click iMessage bugs and finally we have a public example, and
The exploit was an impressive work of art.
The sandbox escape (CVE requested, not yet assigned) was impressive because it’s one of the few times we’ve seen a sandbox escape in-the-wild that uses only logic bugs, rather than the standard memory corruption bugs.
For CVE-2021-30860, the vulnerability itself wasn’t especially notable: a classic integer overflow within the JBIG2 parser of the CoreGraphics PDF decoder. The exploit, though, was described by Samuel Groß & Ian Beer as “one of the most technically sophisticated exploits [they]’ve ever seen”. Their blogpost shares all the details, but the highlight is that the exploit uses the logical operators available in JBIG2 to build NAND gates which are used to build its own computer architecture. The exploit then writes the rest of its exploit using that new custom architecture. From their blogpost:
Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It’s not as fast as Javascript, but it’s fundamentally computationally equivalent.
The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It’s pretty incredible, and at the same time, pretty terrifying.
This is an example of what making 0-day exploitation hard could look like: attackers having to develop a new and novel way to exploit a bug and that method requires lots of expertise and/or time to develop. This year, the two FORCEDENTRY exploits were the only 0-days out of the 58 that really impressed us. Hopefully in the future, the bar has been raised such that this will be required for any successful exploitation.
Android
There were 7 Android in-the-wild 0-days detected and disclosed this year. Prior to 2021 there had only been 1 and it was in 2019: CVE-2019-2215. Like WebKit, this lack of data makes it hard for us to assess trends and changes. Instead, we’ll compare it to public security research.
The 7 Android 0-days targeted the following components:
5 of the 7 0-days from 2021 targeted GPU drivers. This is actually not that surprising when we consider the evolution of the Android ecosystem as well as recent public security research into Android. The Android ecosystem is quite fragmented: many different kernel versions, different manufacturer customizations, etc. If an attacker wants a capability against “Android devices”, they generally need to maintain many different exploits to have a decent percentage of the Android ecosystem covered. However, if the attacker chooses to target the GPU kernel driver instead of another component, they will only need to have two exploits since most Android devices use 1 of 2 GPUs: either the Qualcomm Adreno GPU or the ARM Mali GPU.
Public security research mirrored this choice in the last couple of years as well. When developing full exploit chains (for defensive purposes) to target Android devices, Guang Gong, Man Yue Mo, and Ben Hawkes all chose to attack the GPU kernel driver for local privilege escalation. Seeing the in-the-wild 0-days also target the GPU was more of a confirmation rather than a revelation. Of the 5 0-days targeting GPU drivers, 3 were in the Qualcomm Adreno driver and 2 in the ARM Mali driver.
The two non-GPU driver 0-days (CVE-2021-0920 and CVE-2021-1048) targeted the upstream Linux kernel. Unfortunately, these 2 bugs shared a singular characteristic with the Android in-the-wild 0-day seen in 2019: all 3 were previously known upstream before their exploitation in Android. While the sample size is small, it’s still quite striking to see that 100% of the known in-the-wild Android 0-days that target the kernel are bugs that actually were known about before their exploitation.
CVE-2021-1048 remained unpatched in Android for 14 months after it was patched in the Linux kernel. The Linux kernel was actually only vulnerable to the issue for a few weeks, but due to Android patching practices, that few weeks became almost a year for some Android devices. If an Android OEM synced to the upstream kernel, then they likely were patched against the vulnerability at some point. But many devices, such as recent Samsung devices, had not and thus were left vulnerable.
Microsoft Exchange Server
In 2021, there were 5 in-the-wild 0-days targeting Microsoft Exchange Server. This is the first time any Exchange Server in-the-wild 0-days have been detected and disclosed since we began tracking in-the-wild 0-days. The first four (CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065) were all disclosed and patched at the same time and used together in a single operation. The fifth (CVE-2021-42321) was patched on its own in November 2021. CVE-2021-42321 was demonstrated at Tianfu Cup and then discovered in-the-wild by Microsoft. While no other in-the-wild 0-days were disclosed as part of the chain with CVE-2021-42321, the attackers would have required at least another 0-day for successful exploitation since CVE-2021-42321 is a post-authentication bug.
Of the four Exchange in-the-wild 0-days used in the first campaign, CVE-2021-26855, which is also known as “ProxyLogon”, is the only one that’s pre-auth. CVE-2021-26855 is a server side request forgery (SSRF) vulnerability that allows unauthenticated attackers to send arbitrary HTTP requests as the Exchange server. The other three vulnerabilities were post-authentication. For example, CVE-2021-26858 and CVE-2021-27065 allowed attackers to write arbitrary files to the system. CVE-2021-26857 is a remote code execution vulnerability due to a deserialization bug in the Unified Messaging service. This allowed attackers to run code as the privileged SYSTEM user.
For the second campaign, CVE-2021-42321, like CVE-2021-26858, is a post-authentication RCE vulnerability due to insecure deserialization. It seems that while attempting to harden Exchange, Microsoft inadvertently introduced another deserialization vulnerability.
While there were a significant number of 0-days in Exchange detected and disclosed in 2021, it’s important to remember that they were all used as 0-days in only two different campaigns. This is an example of why we don’t suggest using the number of 0-days in a product as a metric to assess the security of a product. Requiring attackers to use four 0-days to have success is preferable to an attacker needing only one 0-day to successfully gain access.
While this is the first time Exchange in-the-wild 0-days have been detected and disclosed since Project Zero began our tracking, this is not unexpected. In 2020 there was n-day exploitation of Exchange Servers. Whether this was the first year that attackers began the 0-day exploitation or if this was the first year that defenders began detecting the 0-day exploitation, this is not an unexpected evolution and we’ll likely see it continue into 2022.
Outstanding Questions
While there has been progress on detection and disclosure, that progress has shown just how much work there still is to do. The more data we gained, the more questions arose about biases in detection, what we’re missing and why, and the need for more transparency from both vendors and researchers.
Until the day that attackers decide to happily share all their exploits with us, we can’t fully know what percentage of 0-days are publicly known about. However when we pull together our expertise as security researchers and anecdotes from others in the industry, it paints a picture of some of the data we’re very likely missing. From that, these are some of the key questions we’re asking ourselves as we move into 2022:
Where are the [x] 0-days?
Despite the number of 0-days found in 2021, there are key targets missing from the 0-days discovered. For example, we know that messaging applications like WhatsApp, Signal, Telegram, etc. are targets of interest to attackers, and yet only one messaging app 0-day was found this past year, in this case in iMessage. Since we began tracking in mid-2014, the total is two: a WhatsApp 0-day in 2019 and this iMessage 0-day found in 2021.
Along with messaging apps, there are other platforms/targets we’d expect to see 0-days targeting, yet there are no or very few public examples. For example, since mid-2014 there’s only one in-the-wild 0-day each for macOS and Linux. There are no known in-the-wild 0-days targeting cloud, CPU vulnerabilities, or other phone components such as the WiFi chip or the baseband.
This leads to the question of whether these 0-days are absent due to lack of detection, lack of disclosure, or both?
Do some vendors have no known in-the-wild 0-days because they’ve never been found or because they don’t publicly disclose?
Unless a vendor has told us that they will publicly disclose exploitation status for all vulnerabilities in their platforms, we, the public, don’t know if the absence of an annotation means that there is no known exploitation of a vulnerability or if there is, but the vendor is just not sharing that information publicly. Thankfully this question is something that has a pretty clear solution: all device and software vendors agreeing to publicly disclose when there is evidence to suggest that a vulnerability in their product is being exploited in-the-wild.
Are we seeing the same bug patterns because that’s what we know how to detect?
As we described earlier in this report, all the 0-days we saw in 2021 had similarities to previously seen vulnerabilities. This leads us to wonder whether or not that’s actually representative of what attackers are using. Are attackers actually having success exclusively using vulnerabilities in bug classes and components that are previously public? Or are we detecting all these 0-days with known bug patterns because that’s what we know how to detect? Public security research would suggest that yes, attackers are still able to have success with using vulnerabilities in known components and bug classes the majority of the time. But we’d still expect to see a few novel and unexpected vulnerabilities in the grouping. We posed this question back in the 2019 year-in-review and it still lingers.
Where are the spl0itz?
To successfully exploit a vulnerability there are two key pieces that make up that exploit: the vulnerability being exploited, and the exploitation method (how that vulnerability is turned into something useful).
Unfortunately, this report could only really analyze one of these components: the vulnerability. Out of the 58 0-days, only 5 have an exploit sample publicly available. Discovered in-the-wild 0-days are the failure case for attackers and a key opportunity for defenders to learn what attackers are doing and make it harder, more time-intensive, more costly, to do it again. Yet without the exploit sample or a detailed technical write-up based upon the sample, we can only focus on fixing the vulnerability rather than also mitigating the exploitation method. This means that attackers are able to continue to use their existing exploit methods rather than having to go back to the design and development phase to build a new exploitation method. While acknowledging that sharing exploit samples can be challenging (we have that challenge too!), we hope in 2022 there will be more sharing of exploit samples or detailed technical write-ups so that we can come together to use every possible piece of information to make it harder for the attackers to exploit more users.
As an aside, if you have an exploit sample that you’re willing to share with us, please reach out. Whether it’s sharing with us and having us write a detailed technical description and analysis or having us share it publicly, we’d be happy to work with you.
Conclusion
Looking back on 2021, what comes to mind is “baby steps”. We can see clear industry improvement in the detection and disclosure of 0-day exploits. But the better detection and disclosure has highlighted other opportunities for progress. As an industry we’re not making 0-day hard. Attackers are having success using vulnerabilities similar to what we’ve seen previously and in components that have previously been discussed as attack surfaces. The goal is to force attackers to start from scratch each time we detect one of their exploits: they’re forced to discover a whole new vulnerability, they have to invest the time in learning and analyzing a new attack surface, and they must develop a brand new exploitation method. And while we made distinct progress in detection and disclosure, it has shown us areas where that can continue to improve.
While this all may seem daunting, the promising part is that we’ve done it before: we have made clear progress on previously daunting goals. In 2019, we discussed the large detection deficit for 0-day exploits, and 2 years later more than double were detected and disclosed. So while there is still plenty more work to do, it’s a tractable problem. There are concrete steps that the tech and security industries can take to make even more progress:
Make it an industry standard behavior for all vendors to publicly disclose when there is evidence to suggest that a vulnerability in their product is being exploited,
Vendors and security researchers sharing exploit samples or detailed descriptions of the exploit techniques.
Continued concerted efforts on reducing memory corruption vulnerabilities or rendering them unexploitable.
Through 2021 we continually saw the real world impacts of the use of 0-day exploits against users and entities. Amnesty International, the Citizen Lab, and others highlighted over and over how governments were using commercial surveillance products against journalists, human rights defenders, and government officials. We saw many enterprises scrambling to remediate and protect themselves from the Exchange Server 0-days. And we even learned of peer security researchers being targeted by North Korean government hackers. While the majority of people on the planet do not need to worry about their own personal risk of being targeted with 0-days, 0-day exploitation still affects us all. These 0-days tend to have an outsized impact on society so we need to continue doing whatever we can to make it harder for attackers to be successful in these attacks.
2021 showed us we’re on the right track and making progress, but there’s plenty more to be done to make 0-day hard.
At Wordfence, we see large amounts of threat actor data, and often that data tells unexpected stories. Taking a look at just the top five attacking IP addresses over a 30 day period, you might be surprised to find out where these attacks are originating, and what they are doing. When most people hear about threat actors, they think about countries like Russia, China, and North Korea. In reality, attacks originate from all over the world, with the top five attackers we have tracked over the past 30 days coming from Australia, Germany, the United States, Ukraine, and Finland.
The purpose of these attacks is nearly as varied as their locations. Each of the top five malicious IP addresses was found to be attempting unauthorized access to websites or file systems. In sixth place was an IP address that was attempting brute force attacks, but the remaining malicious IP addresses in the top ten were all found to be attempting malicious access by other means. Several of the addresses were seen scanning for vulnerabilities, downloading or uploading files, accessing web shells, and even viewing or writing custom wp-config.php files. While one of the malicious indicators was consistent across all of the top five IP addresses, there are also some actions that were unique to a specific attack source.
IP Threat #1 Originating From Australia
The IP address found in Australia, 20.213.156.164, which is owned by Microsoft, may seem like the most surprising one to make this list, let alone first on the list. In a 30 day period, we tracked 107,569,810 requests from this single IP address out of Sydney. The threat actor using this IP was primarily attempting to open potential web shells on victims’ websites which could indicate that the attacker was looking for left-over webshells from other attackers’ successful exploits.
This is a common technique for threat actors, as it can be automated and does not require actively uploading their own shells and backdoors to a potential victim’s website. This could help the attacker save time and money instead of launching their own attack campaign to compromise servers.
The following is an example of a request the offending IP tried to make to access a known shell. It was blocked by the Wordfence firewall.
IP Threat #2 Originating From Germany
The German IP address, 217.160.145.62, may have a tracked attack quantity that is around 35% lower than the Sydney IP address, with only 70,752,527 tracked events, but its actions are much more varied. In fact, this IP address triggered four different web application firewall (WAF) rules, including attempts to upload zip files to the attacked websites. This is a common action performed as a first step to get malicious files onto the server. There were also attempts to exploit a remote code execution (RCE) vulnerability in the Tatsu Builder plugin, and access the wp-config.php file from a web-visible location.
[Image: Sample of an exploit targeting the Tatsu Builder plugin vulnerability from this IP address.]
IP Threat #3 Originating From The United States
The attacks originating from the IP address 20.29.48.70 in the United States were slightly lower in quantity than those from Germany, with 54,020,587 detected events. The logged events are similar to those found coming from Australia. Searching for previously installed shells and backdoors appears to be the main purpose of these attacks as well. It’s important to note that this does not indicate that a backdoor is actually present on the site. This is just a method attackers use in hopes of landing on a webshell that had been installed previously by another attacker to save time and resources. One filename we saw the IP address attempting to access is commonly used to serve spam or redirect to potentially malicious e-commerce websites.
[Image: Example of a pharma website that was the end result of a redirect chain.]
IP Threat #4 Originating From Ukraine
The attacks starting in Ukraine come from the IP address 194.38.20.161, and their purpose is different from what we see from the IP addresses in the other entries in the top five. The majority of the 51,293,613 requests appear to be checking for jQuery upload capabilities on the affected websites. This is done with a web request that attempts to upload a JPEG image file. Once attackers know an upload is possible, they can upload malicious files that range from spam to backdoors, and everything in between.
IP Threat #5 Originating From Finland
Rounding out our top five with only 44,954,492 registered events is the IP address 65.108.195.44 from Helsinki, Finland. This one also attempts to access web shells and backdoors. The majority of requests from this IP address seem to be accessing previously uploaded malicious files, rather than trying to exploit vulnerabilities or activate code that was added to otherwise legitimate files, such as the example below.
[Image: The s_e.php file sample in its raw form: a file this IP was trying to access.]
One Thing in Common: All IPs Made it on to the Wordfence IP Blocklist
While the threat actors behind these IP addresses may have tried a variety of methods to gain control of these WordPress sites, one thing all these IP addresses have in common is that their attempts were blocked by the Wordfence Network and made their way onto the Wordfence IP Blocklist, a Premium feature of Wordfence.
This means that due to the volume of attacks these IP addresses were initiating they ended up on the Wordfence Real-Time IP blocklist, which prevents these IP addresses from accessing your site in the first place.
Conclusion
While the top five locations may not be commonly thought of as locations that web attacks may originate from, these are areas where computers and the internet are common. Wherever you have both of these, you will have attack origins. What is not as surprising is that despite widely varied locations for attackers, the methods they use are typically common and often predictable. Hosting accounts that threat actors use to launch attacks can live anywhere in the world while a threat actor themselves may be in an entirely different location.
By knowing how an attacker thinks, and the methods they use, we can defend against their attacks. These top five offenders averaged more than 10 million access attempts per day in the reviewed period, but having a proper web application firewall with Wordfence in place meant the attackers had no chance of accomplishing their goals.
All Wordfence users with the Wordfence Web Application Firewall active, including Wordfence free customers, are protected against the types of attacks seen from these IP addresses, and the vulnerabilities they may be attempting to exploit. If you believe your site has been compromised as a result of these attacks or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both these products include hands-on support in case you need further assistance.
On June 16, 2022, the Wordfence Threat Intelligence team noticed a back-ported security update in Ninja Forms, a WordPress plugin with over one million active installations. As with all security updates in WordPress plugins and themes, our team analyzed the plugin to determine the exploitability and severity of the vulnerability that had been patched.
We uncovered a code injection vulnerability that made it possible for unauthenticated attackers to call a limited number of methods in various Ninja Forms classes, including a method that unserialized user-supplied content, resulting in Object Injection. This could allow attackers to execute arbitrary code or delete arbitrary files on sites where a separate POP chain was present.
There is evidence to suggest that this vulnerability is being actively exploited in the wild, and as such we are alerting our users immediately to the presence of this vulnerability.
This flaw has been fully patched in versions 3.0.34.2, 3.1.10, 3.2.28, 3.3.21.4, 3.4.34.2, 3.5.8.4, and 3.6.11. WordPress appears to have performed a forced automatic update for this plugin, so your site may already be using one of the patched versions. Nonetheless, we strongly recommend ensuring that your site has been updated to one of the patched versions as soon as possible, since automatic updates are not always successful.
Wordfence Premium, Wordfence Care, and Wordfence Response customers received a rule on June 16, 2022 to protect against active exploitation of this vulnerability. Wordfence users still using the free version will receive the same protection on July 16, 2022. Regardless of your protection status with Wordfence, you can update the plugin on your site to one of the patched versions to avoid exploitation.
Description: Code Injection
Affected Plugin: Ninja Forms Contact Form – The Drag and Drop Form Builder for WordPress
Plugin Slug: ninja-forms
Plugin Developer: Saturday Drive
Affected Versions: 3.6-3.6.10, 3.5-3.5.8.3, 3.4-3.4.34.1, 3.3-3.3.21.3, 3.2-3.2.27, 3.1-3.1.9, 3.0-3.0.34.1
CVE ID: Pending
CVSS Score: 9.8 (Critical)
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Fully Patched Versions: 3.0.34.2, 3.1.10, 3.2.28, 3.3.21.4, 3.4.34.2, 3.5.8.4, 3.6.11
Ninja Forms is a popular WordPress plugin designed to enhance WordPress sites with easily customizable forms. One feature of Ninja Forms is the ability to add “Merge Tags” to forms that auto-populate values from other areas of WordPress, like post IDs and logged-in users’ names. Unfortunately, this functionality had a flaw that made it possible to call various Ninja Forms classes, which could be used for a wide range of exploits targeting vulnerable WordPress sites.
Without providing too many details on the vulnerability, the Merge Tag functionality performs an is_callable() check on a supplied Merge Tag. When a callable class and method are supplied as a Merge Tag, the function is called and the code executed. These Merge Tags can be supplied by unauthenticated users due to the way the NF_MergeTags_Other class handles Merge Tags.
We determined that this could lead to a variety of exploit chains due to the various classes and functions that the Ninja Forms plugin contains. One potentially critical exploit chain in particular involves the use of the NF_Admin_Processes_ImportForm class to achieve remote code execution via deserialization, though there would need to be another plugin or theme installed on the site with a usable gadget.
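The plugin itself is PHP, but the shape of the flaw can be sketched in C as a hedged illustration (the names and table below are hypothetical, not the plugin’s actual code): user input selects which handler runs, and the only gate is “does this handler exist and is it callable?”, not “should an unauthenticated user be allowed to call it?”. A privileged routine registered in the same table therefore becomes reachable from untrusted input.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: dispatching on a user-supplied "tag" name. */
typedef void (*tag_handler)(const char *arg);

static void tag_post_id(const char *arg)   { printf("post id: %s\n", arg); }
static void tag_user_name(const char *arg) { printf("user: %s\n", arg); }
/* A privileged routine never meant to be reachable from user input
 * (stands in for the unserialize/import path). */
static void import_form(const char *arg)   { printf("importing: %s\n", arg); }

static const struct { const char *name; tag_handler fn; } handlers[] = {
    { "post_id",     tag_post_id },
    { "user_name",   tag_user_name },
    { "import_form", import_form },     /* exposed by accident */
};

static void render_merge_tag(const char *name, const char *arg) {
    for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
        /* BUG: "it exists and is callable" is the only check performed. */
        if (strcmp(handlers[i].name, name) == 0) {
            handlers[i].fn(arg);
            return;
        }
    }
}

int main(void) {
    /* Both calls succeed; the second reaches privileged functionality. */
    render_merge_tag("post_id", "42");
    render_merge_tag("import_form", "attacker-controlled serialized blob");
    return 0;
}
```

The defensive pattern is an allowlist of handlers that are explicitly safe for unauthenticated callers, rather than a callability check.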
As we learn more about the exploit chains attackers are using to exploit this vulnerability, we will update this post.
Conclusion
In today’s post, we detailed a critical vulnerability in Ninja Forms Contact Form that allows unauthenticated attackers to call a limited number of methods in various Ninja Forms classes, including one that unserializes user-supplied content. This can be used to completely take over a WordPress site. There is evidence to suggest that this vulnerability is being actively exploited.
This flaw has been fully patched in versions 3.0.34.2, 3.1.10, 3.2.28, 3.3.21.4, 3.4.34.2, 3.5.8.4, and 3.6.11. It appears as though WordPress may have performed a forced update so your site may already be on one of the patched versions. Nonetheless, we strongly recommend ensuring that your site has been updated to one of the patched versions as soon as possible.
Wordfence Premium, Wordfence Care, and Wordfence Response customers received a rule on June 16, 2022 to protect against active exploitation of this vulnerability. Wordfence users still using the free version will receive the same protection on July 16, 2022. Regardless of your protection status with Wordfence, you can update the plugin on your site to one of the patched versions to avoid exploitation.
If you believe your site has been compromised as a result of this vulnerability or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both these products include hands-on support in case you need further assistance.
If you know a friend or colleague who is using this plugin on their site, we highly recommend forwarding this advisory to them to help keep their sites protected, as this is a serious vulnerability that can lead to complete site takeover.
Special thanks to Ramuel Gall, a Wordfence Threat Analyst, for his work reverse engineering the vulnerability’s patches to develop a working Proof of Concept and for his contributions to this post.
Hertzbleed is a new family of side-channel attacks: frequency side channels. In the worst case, these attacks can allow an attacker to extract cryptographic keys from remote servers that were previously believed to be secure.
Hertzbleed takes advantage of our experiments showing that, under certain circumstances, the dynamic frequency scaling of modern x86 processors depends on the data being processed. This means that, on modern processors, the same program can run at a different CPU frequency (and therefore take a different wall time) when computing, for example, 2022 + 23823 compared to 2022 + 24436.
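A rough way to picture the effect is to time the same fixed instruction sequence over two different operand values and compare wall-clock durations: if the data drawn through the ALU changes power consumption enough to change the CPU’s frequency, the runtimes diverge. The hedged C sketch below only illustrates the measurement idea; it is not the paper’s benchmark, and this toy scalar loop would almost certainly not show a measurable difference in practice, since the real experiments rely on carefully constructed, power-hungry workloads run to a thermal steady state.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Times a fixed number of iterations of the same operation on 'operand'
 * and returns the elapsed wall-clock seconds (illustrative only). */
static double time_workload(uint64_t operand) {
    struct timespec start, end;
    volatile uint64_t acc = 0;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (uint64_t i = 0; i < 200000000ULL; i++)
        acc += 2022 + operand;        /* same instructions, different data */
    clock_gettime(CLOCK_MONOTONIC, &end);

    (void)acc;
    return (double)(end.tv_sec - start.tv_sec) +
           (double)(end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void) {
    /* The two operands echo the example in the text (and the CVE numbers). */
    printf("operand 23823: %.3f s\n", time_workload(23823));
    printf("operand 24436: %.3f s\n", time_workload(24436));
    return 0;
}
```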
Hertzbleed is a real, and practical, threat to the security of cryptographic software. We have demonstrated how a clever attacker can use a novel chosen-ciphertext attack against SIKE to perform full key extraction via remote timing, despite SIKE being implemented as “constant time”.
Research Paper
The Hertzbleed paper will appear in the 31st USENIX Security Symposium (Boston, 10–12 August 2022) with the following title:
Hertzbleed: Turning Power Side-Channel Attacks Into Remote Timing Attacks on x86
Intel’s security advisory states that all Intel processors are affected. We experimentally confirmed that several Intel processors are affected, including desktop and laptop models from the 8th to the 11th generation Core microarchitecture.
AMD’s security advisory states that several of their desktop, mobile and server processors are affected. We experimentally confirmed that AMD Ryzen processors are affected, including desktop and laptop models from the Zen 2 and Zen 3 microarchitectures.
Other processor vendors (e.g., ARM) also implement frequency scaling in their products and were made aware of Hertzbleed. However, we have not confirmed if they are, or are not, affected by Hertzbleed.
What is the impact of Hertzbleed?
First, Hertzbleed shows that on modern x86 CPUs, power side-channel attacks can be turned into (even remote!) timing attacks—lifting the need for any power measurement interface. The cause is that, under certain circumstances, periodic CPU frequency adjustments depend on the current CPU power consumption, and these adjustments directly translate to execution time differences (as 1 hertz = 1 cycle per second).
Second, Hertzbleed shows that, even when implemented correctly as constant time, cryptographic code can still leak via remote timing analysis. The result is that current industry guidelines for how to write constant-time code (such as Intel’s) are insufficient to guarantee constant-time execution on modern processors.
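For context, “constant time” conventionally means code like the hedged C sketch below: no secret-dependent branches or memory accesses, only an instruction sequence that is identical for every secret value. Hertzbleed’s point is that even this style of code can leak, because the power drawn by those identical instructions still depends on the data they process, and power can feed back into frequency and therefore into timing.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Conventional constant-time selection: returns a if choice is 1, b if 0,
 * without branching on the secret 'choice'. */
static uint32_t ct_select(uint32_t choice, uint32_t a, uint32_t b) {
    uint32_t mask = (uint32_t)0 - (choice & 1u);  /* 0x00000000 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);
}

/* Constant-time comparison: no early exit, so the instruction sequence does
 * not depend on where (or whether) the secret buffers differ. */
static uint32_t ct_equal(const uint8_t *x, const uint8_t *y, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(x[i] ^ y[i]);
    return diff == 0;
}

int main(void) {
    uint8_t a[4] = {1, 2, 3, 4}, b[4] = {1, 2, 3, 4};
    printf("select: %u, equal: %u\n", ct_select(1, 7, 9), ct_equal(a, b, 4));
    return 0;
}
```

Under the classic timing model this code leaks nothing; under a data-dependent frequency model, the values flowing through it can still matter.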
Is there an assigned CVE for Hertzbleed?
Yes. Hertzbleed is tracked under CVE-2022-23823 and CVE-2022-24436 in the Common Vulnerabilities and Exposures (CVE) system.
Is Hertzbleed a bug?
No. The root cause of Hertzbleed is dynamic frequency scaling, a feature of modern processors, used to reduce power consumption (during low CPU loads) and to ensure that the system stays below power and thermal limits (during high CPU loads).
When did you disclose Hertzbleed?
We disclosed our findings, together with proof-of-concept code, to Intel, Cloudflare and Microsoft in Q3 2021 and to AMD in Q1 2022. Intel originally requested our findings be held under embargo until May 10, 2022. Later, Intel requested a significant extension of that embargo, and we coordinated with them on publicly disclosing our findings on June 14, 2022.
Do Intel and AMD plan to release microcode patches to mitigate Hertzbleed?
No. To our knowledge, Intel and AMD do not plan to deploy any microcode patches to mitigate Hertzbleed. However, Intel provides guidance to mitigate Hertzbleed in software. Cryptographic developers may choose to follow Intel’s guidance to harden their libraries and applications against Hertzbleed. For more information, we refer to the official security advisories (Intel and AMD).
Why did Intel ask for a long embargo, considering they are not deploying patches?
Ask Intel.
Is there a workaround?
Technically, yes. However, it has a significant system-wide performance impact.
In most cases, a workload-independent workaround to mitigate Hertzbleed is to disable frequency boost. Intel calls this feature “Turbo Boost”, and AMD calls it “Turbo Core” or “Precision Boost”. Disabling frequency boost can be done either through the BIOS or at runtime via the frequency scaling driver. In our experiments, when frequency boost was disabled, the frequency stayed fixed at the base frequency during workload execution, preventing leakage via Hertzbleed. However, this is not a recommended mitigation strategy as it will significantly impact performance. Moreover, on some custom system configurations (with reduced power limits), data-dependent frequency updates may occur even when frequency boost is disabled.
What is SIKE?
SIKE (Supersingular Isogeny Key Encapsulation) is a decade-old, widely studied key encapsulation mechanism. It is currently a finalist in NIST’s Post-Quantum Cryptography competition. It has multiple industrial implementations and was the subject of an in-the-wild deployment experiment. Among its claimed advantages is a “well-understood” side-channel posture. You can find author names, implementations, talks, studies, articles, security analyses, and more about SIKE on its official website.
What is a key encapsulation mechanism?
A key encapsulation mechanism is a protocol used to securely exchange a symmetric key using asymmetric (public-key) cryptography.
How did Cloudflare and Microsoft mitigate the attack on SIKE?
Both Cloudflare and Microsoft deployed the mitigation suggested by De Feo et al. (who, while our paper was under the long Intel embargo, independently re-discovered how to exploit anomalous 0s in SIKE for power side channels). The mitigation consists of validating, before decapsulation, that the ciphertext consists of a pair of linearly independent points of the correct order. The mitigation adds a decapsulation performance overhead of 5% for CIRCL and of 11% for PQCrypto-SIDH.
Is my constant-time cryptographic library affected?
Affected? Likely yes. Vulnerable? Maybe.
Your constant-time cryptographic library might be vulnerable if it is susceptible to secret-dependent power leakage, and this leakage extends to enough operations to induce secret-dependent changes in CPU frequency. Future work is needed to systematically study which cryptosystems can be exploited via the new Hertzbleed side channel.
Can I use the logo?
Yes. The Hertzbleed logo is free to use under a CC0 license.