The Wordfence Threat Intelligence team has been monitoring a sudden increase in attack attempts targeting Kaswara Modern WPBakery Page Builder Addons. This ongoing campaign attempts to take advantage of an arbitrary file upload vulnerability, tracked as CVE-2021-24284, which was previously disclosed and has never been patched in the now-closed plugin. Because the plugin was closed without a patch, all versions are affected. The vulnerability can be used to upload malicious PHP files to an affected website, leading to code execution and complete site takeover. Once they’ve established a foothold, attackers can also inject malicious JavaScript into files on the site, among other malicious actions.
All Wordfence customers have been protected from this attack campaign by the Wordfence Firewall since May 21, 2021, with Wordfence Premium, Care, and Response customers having received the firewall rule 30 days earlier, on April 21, 2021. Even though Wordfence provides protection against this vulnerability, we strongly recommend removing Kaswara Modern WPBakery Page Builder Addons completely and finding an alternative as soon as possible, as it is unlikely the plugin will ever receive a patch for this critical vulnerability. We are currently protecting over 1,000 websites that still have the plugin installed, and we estimate that between 4,000 and 8,000 websites in total still do.
Over the course of this campaign, we have blocked an average of 443,868 attack attempts per day against the network of sites that we protect. Please be aware that while 1,599,852 unique sites were targeted, a majority of those sites were not running the vulnerable plugin.
Description: Arbitrary File Upload/Deletion and Other
Affected Plugin: Kaswara Modern WPBakery Page Builder Addons
Plugin Slug: kaswara
Affected Versions: <= 3.0.1
CVE ID: CVE-2021-24284
CVSS Score: 10.0 (Critical)
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H
Fully Patched Version: No available patch
Indicators of Attack
The majority of the attacks we have seen send a POST request to /wp-admin/admin-ajax.php using the uploadFontIcon AJAX action found in the plugin to upload a file to the impacted website. Your logs may show a query string similar to the following on these events:
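/wp-admin/admin-ajax.php?action=uploadFontIcon

The exact request format varies from attacker to attacker, but the uploadFontIcon action parameter is the common marker to look for in access logs.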
We have observed 10,215 attacking IP addresses, with the vast majority of exploit attempts coming from these top ten IPs:
217.160.48.108 with 1,591,765 exploit attempts blocked
5.9.9.29 with 898,248 exploit attempts blocked
2.58.149.35 with 390,815 exploit attempts blocked
20.94.76.10 with 276,006 exploit attempts blocked
20.206.76.37 with 212,766 exploit attempts blocked
20.219.35.125 with 187,470 exploit attempts blocked
20.223.152.221 with 102,658 exploit attempts blocked
5.39.15.163 with 62,376 exploit attempts blocked
194.87.84.195 with 32,890 exploit attempts blocked
194.87.84.193 with 31,329 exploit attempts blocked
Indicators of Compromise
Based on our analysis of the attack data, a majority of attackers are attempting to upload a zip file named a57bze8931.zip. When the upload succeeds, a single file named a57bze8931.php is extracted into the /wp-content/uploads/kaswara/icons/ directory. The malicious file has an MD5 hash of d03c3095e33c7fe75acb8cddca230650. This file is an uploader under the attacker’s control; with it, a malicious actor can continue uploading files to the compromised website.
The indicators observed in these attacks also include signs of the NDSW trojan, which injects code into otherwise legitimate JavaScript files and redirects site visitors to malicious websites. The presence of this string in your JavaScript files is a strong indication that your site has been infected with NDSW:
;if(ndsw==
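For site owners who want to check for these indicators, a minimal scanning sketch along the following lines can help. It assumes a default WordPress layout under /var/www/html (adjust the path for your environment), and any hit should be treated as a reason for a full incident review rather than proof on its own:

import hashlib
from pathlib import Path

# Indicators described above
MALICIOUS_MD5 = "d03c3095e33c7fe75acb8cddca230650"
NDSW_MARKER = b";if(ndsw=="

WP_ROOT = Path("/var/www/html")  # assumed WordPress root; adjust as needed

def md5_of(path: Path) -> str:
    digest = hashlib.md5()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# 1. PHP files in the Kaswara icons upload directory are suspicious on their own.
icons_dir = WP_ROOT / "wp-content" / "uploads" / "kaswara" / "icons"
if icons_dir.is_dir():
    for php_file in icons_dir.rglob("*.php"):
        note = " (matches known uploader hash)" if md5_of(php_file) == MALICIOUS_MD5 else ""
        print(f"[!] PHP file in icons directory: {php_file}{note}")

# 2. The NDSW marker string inside JavaScript files suggests injected redirect code.
for js_file in WP_ROOT.rglob("*.js"):
    try:
        if NDSW_MARKER in js_file.read_bytes():
            print(f"[!] Possible NDSW injection: {js_file}")
    except OSError:
        pass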
Some additional filenames that attackers are attempting to upload include:
[xxx]_young.zip, where [xxx] varies and typically consists of three characters (e.g., svv_young.zip)
inject.zip
king_zip.zip
null.zip
plugin.zip
What Should I Do If I Use This Plugin?
All Wordfence users, including Free, Premium, Care, and Response, are protected from exploits targeting this vulnerability. However, at this time the plugin has been closed, and the developer has not been responsive regarding a patch. The best option is to fully remove the Kaswara Modern WPBakery Page Builder Addons plugin from your WordPress website.
If you know a friend or colleague who is using this plugin on their site, we highly recommend forwarding this advisory to them to help keep their sites protected, as this is a serious vulnerability that can lead to complete site takeover.
If you believe your site has been compromised as a result of this vulnerability or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both of these products include hands-on support in case you need further assistance.
Microsoft uncovered a vulnerability in macOS that could allow specially crafted code to escape the App Sandbox and run unrestricted on the system. We shared these findings with Apple through Coordinated Vulnerability Disclosure (CVD) via Microsoft Security Vulnerability Research (MSVR) in October 2021. A fix for this vulnerability, now identified as CVE-2022-26706, was included in the security updates released by Apple on May 16, 2022. Microsoft shares the vulnerability disclosure credit with another researcher, Arsenii Kostromin (0x3c3e), who discovered a similar technique independently.
We encourage macOS users to install these security updates as soon as possible. We also want to thank the Apple product security team for their responsiveness in fixing this issue.
The App Sandbox is Apple’s access control technology that application developers must adopt to distribute their apps through the Mac App Store. Essentially, an app’s processes are constrained by customizable rules, such as the ability to read or write specific files. The App Sandbox also restricts the processes’ access to system resources and user data to minimize the impact or damage if the app becomes compromised. However, we found that specially crafted code could bypass these rules. An attacker could take advantage of this sandbox escape vulnerability to gain elevated privileges on the affected device or execute malicious commands like installing additional payloads.
We found the vulnerability while researching potential ways to run and detect malicious macros in Microsoft Office on macOS. For backward compatibility, Microsoft Word can read or write files with a “~$” prefix. Our findings revealed that it was possible to escape the sandbox by leveraging macOS’s Launch Services to run an open --stdin command on a specially crafted Python file with the said prefix.
Our research shows that even the built-in, baseline security features in macOS could still be bypassed, potentially compromising system and user data. Therefore, collaboration between vulnerability researchers, software vendors, and the larger security community remains crucial to helping secure the overall user experience. This includes responsibly disclosing vulnerabilities to vendors.
In addition, insights from this case study not only enhance our protection technologies, such as Microsoft Defender for Endpoint, but they also help strengthen the security strategies of software vendors and the computing landscape at large. This blog post thus provides details of our research and overviews of similar sandbox escape vulnerabilities reported by other security researchers that helped enrich our analysis.
How macOS App Sandbox works
In a nutshell, macOS apps can specify sandbox rules for the operating system to enforce on themselves. The App Sandbox restricts system calls to an allowed subset, and the said system calls can be allowed or disallowed based on files, objects, and arguments. Simply put, the sandbox rules are a defense-in-depth mechanism that dictates the kind of operations an application can or can’t do, regardless of the type of user running it. Examples of such operations include:
the kind of files an application can or can’t read or write;
whether the application can access specific resources such as the camera or the microphone, and;
whether the application is allowed to perform inbound or outbound network connections.
Figure 1. Illustration of a sandboxed app, from the App Sandbox documentation (photo credit: Apple)
Therefore, the App Sandbox is a useful tool for all macOS developers in providing baseline security for their applications, especially for those that have large attack surfaces and run user-provided code. One example of these applications is Microsoft Office.
Sandboxing Microsoft Office in macOS
Attackers have targeted Microsoft Office in their attempts to gain a foothold on devices and networks. One of their techniques is abusing Office macros, which they use in social engineering attacks to trick users into downloading malware and other payloads.
On Windows systems, Microsoft Defender Application Guard for Office helps secure Microsoft Office against such macro abuse by isolating the host environment using Hyper-V. With this feature enabled, an attacker must first be equipped with a Hyper-V guest-to-host vulnerability to affect the host system—a very high bar compared to simply running a macro. Without a similar isolation technology and default setting on macOS, Office must rely on the operating system’s existing mitigation strategies. Currently, the most promising technology is the macOS App Sandbox.
Viewing the Microsoft sandbox rules is quite straightforward with the codesign utility. Figure 2 below shows the truncated sandbox rules for Microsoft Word:
Figure 2. Viewing the Microsoft Word sandbox rules with the codesign utility
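For readers who want to inspect an app’s rules themselves, a small helper along these lines shells out to codesign and prints the entitlements blob, which is where the sandbox configuration shown above lives. The app path is only an example, and the exact codesign flags can differ slightly between macOS versions:

import subprocess

def dump_entitlements(app_path: str) -> str:
    """Return the entitlements of a signed macOS app as text."""
    result = subprocess.run(
        ["codesign", "-d", "--entitlements", "-", app_path],
        capture_output=True,
        text=True,
        check=True,
    )
    # codesign writes the entitlements to stdout and signing details to stderr
    return result.stdout

print(dump_entitlements("/Applications/Microsoft Word.app"))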
One of the rules dictates the kind of files the application is allowed to read or write. As seen in the screenshot of the syntax below, Word is allowed to read or write files with filenames that start with the “~$” prefix. The reason for this rule is rooted in the way Office works internally and remains intact for backward compatibility.
Figure 3. File read and write sandbox rule for Microsoft Word
Despite the security restrictions imposed by the App Sandbox’s rules on applications, it’s possible for attackers to bypass the said rules and let malicious code “escape” the sandbox and execute arbitrary commands on an affected device. This code could be hidden in a specially crafted Word macro, which, as mentioned earlier, is one of the attackers’ preferred entry points.
For example, in 2018, MDSec reported a vulnerability in Microsoft Office on macOS that could allow an attacker to bypass the App Sandbox. As explained in their blog post, MDSec’s proof-of-concept (POC) exploit took advantage of the fact that Word could drop files with arbitrary contents to arbitrary directories (even after passing traditional permission checks), as long as these files’ filenames began with a “~$” prefix. This bypass was relatively straightforward: have a specially crafted macro drop a .plist file in the user’s LaunchAgents directory.
The LaunchAgents directory is a well-known persistence mechanism in macOS. PLIST files that adhere to a specific structure describe (that is, contain the metadata of) macOS launch agents initiated by the launchd process when a user signs in. Since these launch agents will be the children of launchd, they won’t inherit the sandbox rules enforced onto Word, and therefore will be out of the Office sandbox.
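To illustrate the mechanism (this is not MDSec’s actual POC), the property list a launch agent needs is very small. The sketch below writes a harmless one with Python’s plistlib, using a hypothetical label and a do-nothing program:

import plistlib

agent = {
    "Label": "com.example.demo-agent",      # hypothetical identifier
    "ProgramArguments": ["/usr/bin/true"],   # command launchd runs for this agent
    "RunAtLoad": True,                       # start it as soon as it is loaded
}

# Dropping a file like this into ~/Library/LaunchAgents is what provides persistence.
with open("com.example.demo-agent.plist", "wb") as plist_file:
    plistlib.dump(agent, plist_file)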
Shortly after the above vulnerability was reported, Microsoft deployed a fix that denied file writes to the LaunchAgents directory and other folders with similar implications. The said disclosure also prompted us to look into different possible sandbox escapes in Microsoft Word and other applications.
Exploring Launch Services as a means of escaping the sandbox
In 2020, several blog posts described a generic sandbox escape vulnerability in macOS’s /usr/bin/open utility, a command commonly used to launch files, folders, and applications just as if a user double-clicked them. While open is a handy command, it doesn’t create child processes on its own. Instead, it performs an inter-process communication (IPC) with the macOS Launch Services, whose logic is implemented in the context of the launchd process. Launch Services then performs the heavy lifting by resolving the handler and launching the right app. Since launchd creates the process, it’s not restricted by the caller’s sandbox, similar to how MDSec’s POC exploit worked in 2018.
However, using open for sandbox escape purposes isn’t trivial because the destination app must be registered within Launch Services. This means that, for example, one couldn’t run files like osascript outside the sandbox using open. Our internal offensive security team therefore decided to reassess the open utility for sandbox escape purposes and use it in a larger end-to-end attack simulation.
Our obvious first attempt in creating a POC exploit was to create a macro that launches a shell script with the Terminal app. Surprisingly, the POC didn’t work because files dropped from within the sandboxed Word app were automatically given the extended attribute com.apple.quarantine (the same one used by Safari to keep track of internet-downloaded files, as well as by Gatekeeper to block malicious files from executing), and Terminal simply refused to run files with that attribute. We also tried using Python scripts, but the Python app had similar issues running files having the said attribute.
Our second attempt was to use application extensibility features. For example, Terminal would run the default macOS shell (zsh), which would then run arbitrary commands from files like ~/.zshenv before running its own command line. This meant that dropping a .zshenv file in the user’s home directory and launching the Terminal app would cause the sandbox escape. However, due to Word’s sandbox rules, dropping a .zshenv file wasn’t straightforward, as the rules only allowed an application to write to files that begin with the “~$” prefix.
However, there is an interesting way of writing such a file indirectly. macOS ships with an application called Archive Utility that is responsible for extracting archive files (such as ZIP files). Such archives are extracted without any user interaction, and the files inside an archive are extracted into the same directory as the archive itself. Therefore, our second POC worked as follows:
Prepare the payload by creating a .zshenv file with arbitrary commands and placing it in a ZIP file. Encode the ZIP file contents in a Word macro and drop those contents into a file named “~$exploit.zip” in the user’s home directory.
Launch Archive Utility with the open command on the “~$exploit.zip” file. Archive Utility runs outside the sandbox (Launch Services starts it as a child of launchd rather than of the sandboxed caller) and is therefore permitted to create files with arbitrary names. By default, Archive Utility extracts files next to the archive itself, in this case the user’s home directory, so this step successfully creates a .zshenv file with arbitrary contents there.
Launch the Terminal app with the open command. Since Terminal hosts zsh, and zsh runs the commands in the planted .zshenv file, those commands execute outside the Word sandbox.
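The same three steps can be sketched in plain Python with a harmless payload. In the real POC they were performed from inside the sandboxed Word macro, which is what made the escape meaningful, and Apple has since changed Archive Utility’s extraction location (as described below), so this flow no longer escapes the sandbox:

import os
import subprocess
import zipfile

home = os.path.expanduser("~")
archive_path = os.path.join(home, "~$exploit.zip")  # filename satisfies the "~$" rule

# Step 1: build a ZIP whose only member is a .zshenv with a benign command.
with zipfile.ZipFile(archive_path, "w") as archive:
    archive.writestr(".zshenv", 'echo "zshenv executed"\n')

# Step 2: let Archive Utility, running outside the sandbox, extract it next to the archive.
subprocess.run(["open", archive_path], check=True)

# Step 3: launching Terminal starts zsh, which sources ~/.zshenv on startup.
subprocess.run(["open", "-a", "Terminal"], check=True)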
Figure 4. Preparing a Word macro with our sandbox escape for an internal Red Team operation
Perception Point’s CVE-2021-30864
In October 2021, Perception Point published a blog post that discussed a similar (and, in our opinion, more elegant) finding. In the said post, Perception Point released details about their sandbox escape (now identified as CVE-2021-30864), which used the following facts:
Every sandboxed process had its own container directory that’s used as a “scratch space.” The sandboxed process could write arbitrary files, including arbitrary filenames, to that directory unrestricted.
The open command had an interesting --env option that could set or override arbitrary environment variables for the launched app.
Therefore, Perception Point’s POC exploit was cleverly simple:
Drop a .zshenv file in the container directory. This was allowed because sandbox rules weren’t enforced on that directory.
Launch Terminal with the open command but use the --env option to override the HOME environment variable to point to the container directory. This made zsh consider the user’s home directory to be the container directory, and run commands from the planted .zshenv file.
Apple has since patched the vulnerability Perception Point reported in the latest version of macOS, Monterey. While we could still create the “~$exploit.zip” file in the user’s home directory, using open to launch the Archive Utility on the ZIP file now resulted in it being extracted to the Downloads folder. While this is an interesting behavior, we could no longer use it for sandbox escape purposes.
Final exploit attempt: Revisiting the ‘open’ command
After discovering that Apple had fixed both variants that abuse .zshenv, we decided to examine all the command-line options of the open command. Soon after, we came across the following:
Figure 5. The --stdin option in the open utility as presented by its manual entry
As mentioned earlier, we couldn’t run Python with a dropped .py file since Python refuses to run files with the “com.apple.quarantine” extended attribute. We also considered abusing the PYTHONSTARTUP environment variable, but Apple’s fix to CVE-2021-30864 apparently prevented that option, too. However, --stdin bypassed the “com.apple.quarantine” extended attribute restriction, as there was no way for Python to know that the contents from its standard input originated from a quarantined file.
Our POC exploit thus became simply as follows:
Drop a “~$exploit.py” file with arbitrary Python commands.
Run open --stdin='~$exploit.py' -a Python, which runs the Python app with our dropped file serving as its standard input. Python happily runs our code, and since it’s a child process of launchd, it isn’t bound to Word’s sandbox rules.
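As a rough end-to-end illustration of those two steps with a harmless payload (again, in the actual POC the file drop happens from inside the sandboxed Word process, and the technique has since been patched):

import os
import subprocess

payload_path = os.path.expanduser("~/~$exploit.py")  # the "~$" prefix is what Word's rules allow

# Step 1: drop a Python file with a benign command.
with open(payload_path, "w") as payload:
    payload.write('print("running outside the sandbox")\n')

# Step 2: Launch Services starts Python as a child of launchd and feeds it the file
# on stdin, so the quarantine attribute on the dropped file never comes into play.
subprocess.run(["open", f"--stdin={payload_path}", "-a", "Python"], check=True)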
Figure 6. Sample minimal POC exploit code
We also came up with a version that’s short enough to be a Twitter post:
Figure 7. “Tweetable” POC exploit
Detecting App Sandbox escapes with Microsoft Defender for Endpoint
Since our initial discovery of leveraging Launch Services in macOS for generic sandbox escapes, we have been using our POC exploits in Red Team operations to emulate end-to-end attacks against Microsoft Defender for Endpoint, improve its capabilities, and challenge our detections. Shortly after our Red Team used our first POC exploit, our Blue Team members used it to train artificial intelligence (AI) models to detect our exploit not only in Microsoft Office but also on any app used for a similar Launch Services-based sandbox escape.
After we learned of Perception Point’s technique and created our own new exploit technique (the Python POC), our Red Team saw another opportunity to fully test our own detection durability. Indeed, the same set of detection rules that handled our first sandbox escape vulnerability still turned out to be durable—even before the vulnerability related to our second POC exploit was patched.
Figure 8. Microsoft Defender for Endpoint detecting Office sandbox escape
For Defender for Endpoint customers, such detection durability feeds into the product’s threat and vulnerability management capabilities, which allows them to quickly discover, prioritize, and remediate misconfigurations and vulnerabilities—including those affecting non-Windows devices—through a unified security console.
Although there is a greater awareness of cybersecurity threats than ever before, it is becoming increasingly difficult for IT departments to get their security budgets approved. Security budgets seem to shrink each year and IT pros are constantly being asked to do more with less. Even so, the situation may not be hopeless. There are some things that IT pros can do to improve the chances of getting their security budgets approved.
Presenting the Problem in a Compelling Way
If you want to get your proposed security budget approved, you will need to present security problems in a compelling way. While those who are in charge of the organization’s finances are likely aware of the need for good security, they have probably also seen enough examples of “a security solution in search of a problem” to make them skeptical of security spending requests. If you want to persuade those who control the money, then you will need to convince them of three things:
You are trying to protect against a real issue that presents a credible threat to the organization’s wellbeing.
Your proposed solution will be effective and isn’t just a “new toy for the IT department to play with.”
Your budget request is both realistic and justified.
Use Data to Your Advantage
One of the best ways to convince those who are in charge that there is a credible cyber threat against the organization is to provide them with quantifiable metrics. Don’t resort to gathering statistics from the Internet. Your organization’s financial staff is probably smart enough to know that most of those statistics are manufactured by security companies who are trying to sell a product or service. Instead, gather your own metrics from inside your organization by using tools that are freely available for download.
Specops for example, offers a free Password Auditor that can generate reports demonstrating the effectiveness of your organization’s password policy and existing password security vulnerabilities. This free tool can also help you to identify other vulnerabilities, such as accounts that are using passwords that are known to have been leaked or passwords that do not adhere to compliance standards or industry best practices.
Example of Specops Password Auditor results in an Active Directory environment
Of course, this is just one of the many free security tools that are available for download. In any case, it is important to use metrics from within your own organization to demonstrate the fact that the security problem that you are trying to solve is real.
Highlight What a Solution Would Do
Once you demonstrate the problem to those who are in charge of the organization’s finances, do not make the mistake of leaving them guessing as to how you are planning on solving the problem. Be prepared to clearly explain what tools you are planning on using, and how those tools will solve the problem that you have demonstrated.
It’s a good idea to use visuals to demonstrate the practicality of your proposed solution. Be sure to explain how the problem is solved in non-technical language and enhance your argument with examples that are specific to your organization.
Estimated Time of Implementation and Seeing Results
We have probably all heard horror stories of IT projects that have gone off the rails. Organizations sometimes spend millions of dollars and invest years of planning into IT projects that never ultimately materialize. That being the case, it is important to set everyone’s mind at ease by showing them exactly how long it will take to get your proposed solution up and running and then how much additional time will be needed in order to achieve the desired result.
When you are making these projections, be careful to be realistic and not to make promises based on an overly ambitious implementation schedule. You should also be prepared to explain how you arrived at your projection. Keep in mind upcoming projects, company-wide goals, and fiscal year ideals when factoring in timing.
Demonstrate the Estimated Savings
Although security is of course a concern for most organizations, those who are in charge of an organization’s finances typically want to see some sort of return on investment. As such, it is important to consider how your proposed solution might save the company money. A few ideas might include:
Saving the IT department time, thereby reducing the number of overtime hours worked
Avoiding a regulatory penalty that could cost the organization a lot of money
Bringing down insurance premiums because data is being better protected
Of course, these are just ideas. Every situation is different, and you will need to consider how your security project can produce a return on investment given your own unique circumstances. It is important to include a cost-saving element for clarity’s sake, even if that simply means citing the average cost of a data breach in your industry.
Show You’ve Done Your Homework with a Pricing Comparison
As you pitch your proposed solution, stakeholders are almost certain to ask whether there might be a less expensive product that would accomplish your objectives. As such, it’s important to spend some time researching the solutions offered by competing vendors. Here are a few things that you should be prepared to demonstrate:
The total cost for implementing each potential solution (this may include licensing, labor, support, and hardware costs)
Why you are proposing a particular solution even if it is not the least expensive
What you might be giving up by choosing the cheapest vendor, if your proposed solution is also the least expensive
What each vendor offers relative to the others
A Few Quick Tips
As you make your budgetary pitch, keep in mind that those to whom you are presenting likely have a limited understanding of IT concepts. Avoid using unnecessary technical jargon and be prepared to clearly explain key concepts, but without sounding condescending in the process.
It’s also smart to anticipate any questions that may be asked of you and have answers to those questions ready to go. This is especially true if there is a particular question that makes you a little bit uncomfortable.
Present your information clearly, confidently, and concisely (i.e., make it quick!) so you can make your case without wasting time.
In June 2022, we reported on the largest HTTPS DDoS attack that we’ve ever mitigated: a 26 million request per second (rps) attack, the largest on record. Our systems automatically detected and mitigated this attack and many more. Since then, we have been tracking this botnet, which we’ve called “Mantis”, and the attacks it has launched against almost a thousand Cloudflare customers.
Cloudflare WAF/CDN customers are protected against HTTP DDoS attacks including Mantis attacks. Please refer to the bottom of this blog for additional guidance on how to best protect your Internet properties against DDoS attacks.
Have you met Mantis?
We named the botnet that launched the 26M rps DDoS attack “Mantis” after the mantis shrimp: small but very powerful. Mantis shrimps, also known as “thumb-splitters”, are less than 10 cm in length, but their claws are so powerful that they can generate a shock wave with a force of 1,500 Newtons at speeds of 83 km/h from a standing start. Similarly, the Mantis botnet operates a small fleet of approximately 5,000 bots, but with them it can generate a massive force, and it is responsible for the largest HTTP DDoS attacks we have ever observed.
The Mantis botnet was able to generate the 26M HTTPS requests per second attack using only 5,000 bots. I’ll repeat that: 26 million HTTPS requests per second using only 5,000 bots. That’s an average of 5,200 HTTPS rps per bot. Generating 26 million HTTP requests per second is hard enough without the extra overhead of establishing a secure connection, but Mantis did it over HTTPS. HTTPS DDoS attacks are more expensive in terms of required computational resources because of the higher cost of establishing a secure TLS encrypted connection. This stands out and highlights the unique strength behind this botnet.
As opposed to “traditional” botnets that are formed of Internet of Things (IoT) devices such as DVRs, CCTV cameras, or smoke detectors, Mantis uses hijacked virtual machines and powerful servers. This means that each bot has far more computational resources, resulting in this combined thumb-splitting strength.
Mantis is the next evolution of the Meris botnet. The Meris botnet relied on MikroTik devices, but Mantis has branched out to include a variety of VM platforms and supports running various HTTP proxies to launch attacks. The name Mantis was chosen to be similar to “Meris” to reflect its origin, and also because this evolution hits hard and fast. Over the past few weeks, Mantis has been especially active directing its strengths towards almost 1,000 Cloudflare customers.
Who is Mantis attacking?
In our recent DDoS attack trends report, we talked about the increasing number of HTTP DDoS attacks. In the past quarter, HTTP DDoS attacks increased by 72%, and Mantis has surely contributed to that growth. Over the past month, Mantis has launched over 3,000 HTTP DDoS attacks against Cloudflare customers.
When we take a look at Mantis’ targets, we can see that the top attacked industry was Internet & Telecommunications, with 36% of attack share. In second place was the News, Media & Publishing industry, followed by Gaming and Finance.
When we look at where these companies are located, we can see that over 20% of the DDoS attacks targeted US-based companies and over 15% targeted Russia-based companies, while Turkey, France, Poland, Ukraine, and other countries each accounted for less than five percent.
How to protect against Mantis and other DDoS attacks
Cloudflare’s automated DDoS protection system leverages dynamic fingerprinting to detect and mitigate DDoS attacks. The system is exposed to customers as the HTTP DDoS Managed Ruleset. The ruleset is enabled and applying mitigation actions by default, so if you haven’t made any changes, there is no action for you to take — you are protected. You can also review our guides Best Practices: DoS preventive measures and Responding to DDoS attacks for additional tips and recommendations on how to optimize your Cloudflare configurations.
A group of academics from the New Jersey Institute of Technology (NJIT) has warned of a novel technique that could be used to defeat anonymity protections and identify a unique website visitor.
“An attacker who has complete or partial control over a website can learn whether a specific target (i.e., a unique individual) is browsing the website,” the researchers said. “The attacker knows this target only through a public identifier, such as an email address or a Twitter handle.”
The cache-based targeted de-anonymization attack is a cross-site leak that involves the adversary leveraging a service such as Google Drive, Dropbox, or YouTube to privately share a resource (e.g., image, video, or a YouTube playlist) with the target, followed by embedding the shared resource into the attack website.
This can be achieved by, say, privately sharing the resource with the target using the victim’s email address or the appropriate username associated with the service and then inserting the leaky resource using an <iframe> HTML tag.
In the next step, the attacker tricks the victim into visiting the malicious website and clicking on the aforementioned content, causing the shared resource to be loaded as a pop-under window (as opposed to a pop-up) or a browser tab — a method that’s been used by advertisers to sneakily load ads.
This exploit page, as it’s rendered by the target’s browser, is used to determine if the visitor can access the shared resource, successful access indicating that the visitor is indeed the intended target.
The attack, in a nutshell, aims to unmask the users of a website under the attacker’s control by connecting the list of accounts tied to those individuals with their social media accounts or email addresses through a piece of shared content.
In a hypothetical scenario, a bad actor could share a video hosted on Google Drive with a target’s email address, and follow it up by inserting this video in the lure website. Thus, when visitors land on the portal, a successful loading of the video could be used as a yardstick to infer whether the victim is among them.
The attacks, which are practical to exploit across desktop and mobile systems with multiple CPU microarchitectures and different web browsers, are made possible by means of a cache-based side channel that’s used to glean if the shared resource has been loaded and therefore distinguish between targeted and non-targeted users.
Put differently, the idea is to observe the subtle timing differences that arise when the shared resource is being accessed by the two sets of users, which, in turn, occurs due to differences in the time it takes to return an appropriate response from the web server depending on the user’s authorization status.
The attacks also take into account a second set of differences on the client-side that happens when the web browser renders the relevant content or error page based on the response received.
“There are two main causes for differences in the observed side channel leakages between targeted and non-targeted users – a server-side timing difference and a client-side rendering difference,” the researchers said.
While most popular platforms such as those from Google, Facebook, Instagram, LinkedIn, Twitter, and TikTok were found susceptible, one notable service that’s immune to the attack is Apple iCloud.
It’s worth pointing out that the de-anonymization method banks on the prerequisite that the targeted user is already logged in to the service. As mitigations, the researchers have released a browser extension called Leakuidator+ that’s available for Chrome, Firefox, and Tor browsers.
To counter the timing and rendering side channels, website owners are recommended to design web servers to return their responses in constant time, irrespective of whether the user is provisioned to access the shared resource, and make their error pages as similar as possible to the content pages to minimize the attacker-observable differences.
“As an example, if an authorized user was going to be shown a video, the error page for the non-targeted user should also be made to show a video,” the researchers said, adding websites should also be made to require user interaction before rendering content.
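To make the server-side recommendation concrete, one way to approximate constant-time responses is to pad every request to a fixed time budget before replying. The sketch below is a minimal, framework-agnostic illustration; the deadline value and handler names are placeholders, and padding reduces (but does not by itself eliminate) the client-side rendering differences described above:

import time

RESPONSE_DEADLINE = 0.25  # seconds; a placeholder time budget

def constant_time_response(handler, request):
    """Run the real handler, then sleep so authorized and unauthorized
    requests take roughly the same wall-clock time to answer."""
    start = time.monotonic()
    response = handler(request)  # may or may not pass the authorization check
    elapsed = time.monotonic() - start
    if elapsed < RESPONSE_DEADLINE:
        time.sleep(RESPONSE_DEADLINE - elapsed)
    return response

# Example: wrap the view that serves the shared resource
# response = constant_time_response(serve_shared_resource, incoming_request)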
“Knowing the precise identity of the person who is currently visiting a website can be the starting point for a range of nefarious targeted activities that can be executed by the operator of that website.”
The findings arrive weeks after researchers from the University of Hamburg, Germany, demonstrated that mobile devices leak identifying information such as passwords and past holiday locations via Wi-Fi probe requests.
In a related development, MIT researchers last month revealed the root cause behind a website fingerprinting attack as not due to signals generated by cache contention (aka a cache-based side channel) but rather due to system interrupts, while showing that interrupt-based side channels can be used to mount a powerful website fingerprinting attack.
New survey reveals lack of staff, skills, and resources driving smaller teams to outsource security.
As business begins its return to normalcy (however “normal” may look), CISOs at small and medium-size enterprises (500 to 10,000 employees) were asked to share their cybersecurity challenges and priorities, and their responses were compared with those of a similar survey from 2021.
Here are the 5 key things we learned from 200 responses:
1 — Remote Work Has Accelerated the Use of EDR Technologies
In 2021, 52% of CISOs surveyed were relying on endpoint detection and response (EDR) tools. This year that number has leapt to 85%. In contrast, last year 45% were using network detection and response (NDR) tools, while this year just 6% employ NDR. Compared to 2021, double the number of CISOs and their organizations are seeing the value of extended detection and response (XDR) tools, which combine EDR with integrated network signals. This is likely due to the increase in remote work, which is more difficult to secure than when employees work within the company’s network environment.
2 — 90% of CISOs Use an MDR Solution
There is a massive skills gap in the cybersecurity industry, and CISOs are under increasing pressure to recruit internally. Especially in small security teams where additional headcount is not the answer, CISOs are turning to outsourced services to fill the void. In 2021, 47% of CISOs surveyed relied on a Managed Security Services Provider (MSSP), while 53% were using a managed detection and response (MDR) service. This year, just 21% are using an MSSP, and 90% are using MDR.
3 — Overlapping Threat Protection Tools are the #1 Pain Point for Small Teams
The majority (87%) of companies with small security teams struggle to manage and operate their threat protection products. Among these companies, 44% struggle with overlapping capabilities, while 42% struggle to visualize the full picture of an attack when it occurs. These challenges are intrinsically connected, as teams find it difficult to get a single, comprehensive view with multiple tools.
4 — Small Security Teams Are Ignoring More Alerts
Small security teams are giving less attention to their security alerts. Last year 14% of CISOs said they look only at critical alerts, while this year that number jumped to 21%. In addition, organizations are increasingly letting automation take the wheel. Last year, 16% said they ignore automatically remediated alerts, and this year that’s true for 34% of small security teams.
5 — 96% of CISOs Are Planning to Consolidate Security Platforms
Almost all CISOs surveyed have consolidation of security tools on their to-do lists, compared to 61% in 2021. Not only does consolidation reduce the number of alerts – making it easier to prioritize and view all threats – respondents believe it will stop them from missing threats (57%), reduce the need for specific expertise (56%), and make it easier to correlate findings and visualize the risk landscape (46%). XDR technologies have emerged as the preferred method of consolidation, with 63% of CISOs calling it their top choice.
The OpenSSL Technical Committee (OTC) was recently made aware of several potential attacks against the OpenSSL libraries which might permit information leakage via the Spectre attack [1]. Although there are currently no known exploits for the Spectre attacks identified, it is plausible that some of them might be exploitable.
Local side channel attacks, such as these, are outside the scope of our security policy; however, the project generally does introduce mitigations when they are discovered. In this case, the OTC has decided that these attacks will not be mitigated by changes to the OpenSSL code base. The full reasoning behind this is given below.
The Spectre attack vector, while applicable everywhere, is most important for code running in enclaves because it bypasses the protections offered. Example enclaves include, but are not limited to:
The reasoning behind the OTC’s decision to not introduce mitigations for these attacks is multifold:
Such issues do not fall under the scope of our defined security policy. Even though we often apply mitigations for such issues, we do not mandate that they be addressed.
Maintaining code with mitigations in place would be significantly more difficult. Most potentially vulnerable code is extremely non-obvious, even to experienced security programmers. It would thus be quite easy to introduce new attack vectors or fix existing ones unknowingly. The mitigations themselves obscure the code which increases the maintenance burden.
Automated verification and testing of the attacks is necessary but not sufficient. We do not have automated detection for this family of vulnerabilities and if we did, it is likely that variations would escape detection. This does not mean we won’t add automated checking for issues like this at some stage.
These problems are fundamentally a bug in the hardware. The software running on the hardware cannot be expected to mitigate all such attacks. Some of the in-CPU caches are completely opaque to software and cannot be easily flushed, making software mitigation quixotic. However, the OTC recognises that fixing hardware is difficult and in some cases impossible.
Some kernels and compilers can provide partial mitigation. Specifically, several common compilers have introduced code generation options addressing some of these classes of vulnerability:
GCC has the -mindirect-branch, -mfunction-return and -mindirect-branch-register options
LLVM has the -mretpoline option
MSVC has the /Qspectre option
[1] Nicholas Mosier, Hanna Lachnitt, Hamed Nemati, and Caroline Trippel, “Axiomatic Hardware-Software Contracts for Security,” in Proceedings of the 49th ACM/IEEE International Symposium on Computer Architecture (ISCA), 2022.
Posted by the OpenSSL Technical Committee on May 13, 2022.
The National Institute of Standards and Technology (NIST) has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks. Note: the term “post-quantum cryptography” is often referred to as “quantum-resistant cryptography” and includes “cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by either a CRQC [cryptanalytically relevant quantum computer] or classical computer.” (See the National Security Memorandum on Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems for more information.)
Although NIST will not publish the new post-quantum cryptographic standard for use by commercial products until 2024, CISA and NIST strongly recommend organizations start preparing for the transition now by following the Post-Quantum Cryptography Roadmap, which includes:
Inventorying your organization’s systems for applications that use public-key cryptography (a small spot-check sketch follows this list).
Testing the new post-quantum cryptographic standard in a lab environment; however, organizations should wait until the official release to implement the new standard in a production environment.
Creating a plan for transitioning your organization’s systems to the new cryptographic standard that includes:
Performing an interdependence analysis, which should reveal issues that may impact the order of systems transition;
Decommissioning old technology that will become unsupported upon publication of the new standard; and
Ensuring validation and testing of products that incorporate the new standard.
Creating acquisition policies regarding post-quantum cryptography. This process should include:
Setting new service levels for the transition.
Surveying vendors to determine possible integration into your organization’s roadmap and to identify needed foundational technologies.
Alerting your organization’s IT departments and vendors about the upcoming transition.
Educating your organization’s workforce about the upcoming transition and providing any applicable training.
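As a starting point for the inventory step above, a small script can record which public-key algorithms your externally reachable TLS endpoints currently present. The sketch below uses Python’s standard ssl module plus the third-party cryptography package; the hostnames are placeholders, and a real inventory would also need to cover code signing, VPNs, SSH, internal PKI, and application-level uses of public-key cryptography:

import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, ed25519, rsa

HOSTS = ["intranet.example.com", "mail.example.com"]  # placeholder inventory targets

def certificate_key_type(host: str, port: int = 443) -> str:
    """Fetch the server certificate and describe its public-key algorithm."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size}"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECDSA ({key.curve.name})"
    if isinstance(key, ed25519.Ed25519PublicKey):
        return "Ed25519"
    return type(key).__name__

for host in HOSTS:
    try:
        print(f"{host}: {certificate_key_type(host)}")
    except OSError as err:
        print(f"{host}: unreachable ({err})")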
For additional guidance and background, CISA and NIST strongly encourage users and administrators to review:
Microsoft says that a recently spotted Windows worm has been found on the networks of hundreds of organizations from various industry sectors.
The malware, dubbed Raspberry Robin, spreads via infected USB devices, and it was first spotted in September 2021 by Red Canary intelligence analysts.
Cybersecurity firm Sekoia also observed it using QNAP NAS devices as command-and-control (C2) servers in early November [PDF], while Microsoft said it found malicious artifacts linked to this worm created in 2019.
Redmond’s findings align with those of Red Canary’s Detection Engineering team, which also detected this worm on the networks of multiple customers, some of them in the technology and manufacturing sectors.
Although Microsoft observed the malware connecting to addresses on the Tor network, the threat actors are yet to exploit the access they gained to their victims’ networks.
This is in spite of the fact that they could easily escalate their attacks given that the malware can bypass User Account Control (UAC) on infected systems using legitimate Windows tools.
Microsoft shared this info in a private threat intelligence advisory shared with Microsoft Defender for Endpoint subscribers and seen by BleepingComputer.
Raspberry Robin worm infection flow (Red Canary)
Abuses legitimate Windows tools to infect new devices
As already mentioned, Raspberry Robin is spreading to new Windows systems via infected USB drives containing a malicious .LNK file.
Once the USB device is attached and the user clicks the link, the worm spawns a msiexec process using cmd.exe to launch a malicious file stored on the infected drive.
It infects new Windows devices, communicates with its command and control servers (C2), and executes malicious payloads using several legitimate Windows utilities:
fodhelper (a trusted binary for managing features in Windows settings),
msiexec (command line Windows Installer component),
and odbcconf (a tool for configuring ODBC drivers).
“While msiexec.exe downloads and executes legitimate installer packages, adversaries also leverage it to deliver malware,” Red Canary researchers explained.
“Raspberry Robin uses msiexec.exe to attempt external network communication to a malicious domain for C2 purposes.”
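For defenders who collect process-creation telemetry, a rough hunting sketch like the one below (not an official detection rule) can surface that behavior. It assumes events have been exported to a CSV with hypothetical process_name and command_line columns, and some hits will be benign, since msiexec legitimately fetches installers from the network in some environments:

import csv
import re

URL_PATTERN = re.compile(r"https?://", re.IGNORECASE)

def suspicious_msiexec_events(csv_path: str):
    """Yield process-creation events where msiexec.exe references an external URL."""
    with open(csv_path, newline="", encoding="utf-8") as events:
        for row in csv.DictReader(events):
            name = row.get("process_name", "").lower()
            command_line = row.get("command_line", "")
            if name == "msiexec.exe" and URL_PATTERN.search(command_line):
                yield row

for event in suspicious_msiexec_events("process_events.csv"):  # placeholder export path
    print(event["command_line"])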
Security researchers who spotted Raspberry Robin in the wild are yet to attribute the malware to a threat group and are still working on finding its operators’ end goal.
However, Microsoft has tagged this campaign as high-risk, given that the attackers could download and deploy additional malware within the victims’ networks and escalate their privileges at any time.
Shadow IT refers to the practice of users deploying unauthorized technology resources in order to circumvent their IT department. Users may resort to using shadow IT practices when they feel that existing IT policies are too restrictive or get in the way of them being able to do their jobs effectively.
An old school phenomenon
Shadow IT is not new. There have been countless examples of widespread shadow IT use over the years. In the early 2000s, for example, many organizations were reluctant to adopt Wi-Fi for fear that it could undermine their security efforts. However, users wanted the convenience of wireless device usage and often deployed wireless access points without the IT department’s knowledge or consent.
The same thing happened when the iPad first became popular. IT departments largely prohibited iPads from being used with business data because of the inability to apply group policy settings and other security controls to the devices. Even so, users often ignored IT and used iPads anyway.
Of course, IT pros eventually figured out how to secure iPads and Wi-Fi and eventually embraced the technology. However, shadow IT use does not always come with a happy ending. Users who engage in shadow IT use can unknowingly do irreparable harm to an organization.
Even so, the problem of shadow IT use continues to this day. If anything, shadow IT use has increased over the last several years. In 2021 for example, Gartner found that between 30% and 40% of all IT spending (in a large enterprise) goes toward funding shadow IT.
Shadow IT is on the rise in 2022
Remote work post-pandemic
One reason for the rise in shadow IT use is remote work. When users are working from home, it is easier for them to escape the notice of the IT department than it might be if they were to try using unauthorized technology from within the corporate office. A study by Core found that remote work stemming from COVID requirements increased shadow IT use by 59%.
Tech is getting simpler for end-users
Another reason for the increase in shadow IT is the fact that it is easier than ever for a user to circumvent the IT department. Suppose for a moment that a user wants to deploy a particular workload, but the IT department denies the request.
A determined user can simply use their corporate credit card to set up a cloud account. Because this account exists as an independent tenant, IT will have no visibility into the account and may not even know that it exists. This allows the user to run their unauthorized workload with total impunity.
In fact, a 2020 study found that 80% of workers admitted to using unauthorized SaaS applications. This same study also found that the average company’s shadow IT cloud could be 10X larger than the company’s sanctioned cloud usage.
Know your own network
Given the ease with which a user can deploy shadow IT resources, it is unrealistic for IT to assume that shadow IT isn’t happening or that they will be able to detect it. As such, the best strategy may be to educate users about the risks posed by shadow IT. A user who has a limited IT background may inadvertently introduce security risks by engaging in shadow IT. According to a Forbes Insights report, 60% of companies do not include shadow IT in their threat assessments.
Similarly, shadow IT use can expose an organization to regulatory penalties. In fact, it is often compliance auditors – not the IT department – who end up being the ones to discover shadow IT use.
Of course, educating users alone is not sufficient to stop shadow IT use. There will always be users who choose to ignore the warnings. Likewise, giving in to users’ demands for particular technologies might not always be in the organization’s best interests either. After all, there is no shortage of poorly written or outdated applications that could pose a significant threat to your organization, never mind applications that are known for spying on users.
The zero-trust solution to Shadow IT
One of the best options for dealing with shadow IT threats may be to adopt zero trust. Zero-trust is a philosophy in which nothing in your organization is automatically assumed to be trustworthy. User and device identities must be proven each time that they are used to access a resource.
There are many different aspects to a zero-trust architecture, and each organization implements zero trust differently. Some organizations, for instance, use conditional access policies to control access to resources. That way, an organization isn’t just granting a user unrestricted access to a resource, but rather is considering how the user is trying to access the resource. This may involve setting up restrictions around the user’s geographic location, device type, time of day, or other factors, as in the toy evaluation sketched below.
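The following is a deliberately simplified illustration of that idea, not any vendor’s policy engine; the allowed countries, working hours, and compliance flag are placeholders:

from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    user: str
    country: str
    device_compliant: bool
    local_time: time

ALLOWED_COUNTRIES = {"US", "CA"}        # placeholder policy values
WORK_HOURS = (time(7, 0), time(19, 0))  # placeholder working-hours window

def evaluate(request: AccessRequest) -> bool:
    """Grant access only when location, device posture, and time of day all check out."""
    in_hours = WORK_HOURS[0] <= request.local_time <= WORK_HOURS[1]
    return request.country in ALLOWED_COUNTRIES and request.device_compliant and in_hours

print(evaluate(AccessRequest("jdoe", "US", True, time(9, 30))))   # True
print(evaluate(AccessRequest("jdoe", "US", False, time(23, 0))))  # False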
Zero-trust at the helpdesk
One of the most important things that an organization can do with regard to implementing zero trust is to better secure its helpdesk. Most organizations’ help desks are vulnerable to social engineering attacks.
When a user calls and requests a password reset, the helpdesk technician assumes that the user is who they claim to be, when in reality, the caller could actually be a hacker who is trying to use a password reset request as a way of gaining access to the network. Granting password reset requests without verifying user identities goes against everything that zero trust stands for.