DDoS threat report for 2023 Q4

09/01/2024
Omer Yoachimik – Jorge Pacheco

Welcome to the sixteenth edition of Cloudflare’s DDoS Threat Report. This edition covers DDoS trends and key findings for the fourth and final quarter of the year 2023, complete with a review of major trends throughout the year.

What are DDoS attacks?

DDoS attacks, or distributed denial-of-service attacks, are a type of cyber attack that aims to disrupt websites and online services by overwhelming them with more traffic than they can handle, making them unavailable to users. They are similar to traffic gridlock that jams roads, preventing drivers from reaching their destinations.

There are three main types of DDoS attacks that we will cover in this report. The first is an HTTP request-intensive DDoS attack that aims to overwhelm HTTP servers with more requests than they can handle to cause a denial of service event. The second is an IP packet-intensive DDoS attack that aims to overwhelm in-line appliances such as routers, firewalls, and servers with more packets than they can handle. The third is a bit-intensive attack that aims to saturate and clog the Internet link, causing the ‘gridlock’ we discussed. In this report, we will highlight various techniques and insights on all three types of attacks.
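To make these three categories concrete, here is a minimal, hypothetical sketch (not Cloudflare tooling; the thresholds are invented) that labels a traffic sample by which resource it exhausts: requests per second, packets per second, or bits per second.

```python
def classify_ddos(rps: float, pps: float, bps: float,
                  rps_limit: float, pps_limit: float, bps_limit: float) -> list:
    """Return which of the three capacity limits a traffic sample exceeds."""
    findings = []
    if rps > rps_limit:
        findings.append("HTTP request-intensive (server request-handling bound)")
    if pps > pps_limit:
        findings.append("IP packet-intensive (appliance packet-processing bound)")
    if bps > bps_limit:
        findings.append("bit-intensive (link bandwidth bound)")
    return findings

# Example: 2M requests/s against a server rated for 100k requests/s.
print(classify_ddos(rps=2e6, pps=5e5, bps=1e9,
                    rps_limit=1e5, pps_limit=1e6, bps_limit=1e10))
```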

Previous editions of the report can be found here, and are also available on our interactive hub, Cloudflare Radar. Cloudflare Radar showcases global Internet traffic, attacks, and technology trends and insights, with drill-down and filtering capabilities for zooming in on insights of specific countries, industries, and service providers. Cloudflare Radar also offers a free API allowing academics, data sleuths, and other web enthusiasts to investigate Internet usage across the globe.

To learn how we prepare this report, refer to our Methodologies.

Key findings

  1. In Q4, we observed a 117% year-over-year increase in network-layer DDoS attacks, and overall increased DDoS activity targeting retail, shipment and public relations websites during and around Black Friday and the holiday season.
  2. In Q4, DDoS attack traffic targeting Taiwan registered a 3,370% growth, compared to the previous year, amidst the upcoming general election and reported tensions with China. The percentage of DDoS attack traffic targeting Israeli websites grew by 27% quarter-over-quarter, and the percentage of DDoS attack traffic targeting Palestinian websites grew by 1,126% quarter-over-quarter — as the military conflict between Israel and Hamas continues.
  3. In Q4, there was a staggering 61,839% surge in DDoS attack traffic targeting Environmental Services websites compared to the previous year, coinciding with the 28th United Nations Climate Change Conference (COP 28).

For an in-depth analysis of these key findings and additional insights that could redefine your understanding of current cybersecurity challenges, read on!

Illustration of a DDoS attack

Hyper-volumetric HTTP DDoS attacks

2023 was the year of uncharted territories. DDoS attacks reached new heights — in size and sophistication. The wider Internet community, including Cloudflare, faced a persistent and deliberately engineered campaign of thousands of hyper-volumetric DDoS attacks at never-before-seen rates.

These attacks were highly complex and exploited an HTTP/2 vulnerability. Cloudflare developed purpose-built technology to mitigate the vulnerability’s effect and worked with others in the industry to responsibly disclose it.

As part of this DDoS campaign, in Q3 our systems mitigated the largest attack we’ve ever seen — 201 million requests per second (rps). That’s almost 8 times larger than our previous 2022 record of 26 million rps.

Largest HTTP DDoS attacks as seen by Cloudflare, by year

Growth in network-layer DDoS attacks

After the hyper-volumetric campaign subsided, we saw an unexpected drop in HTTP DDoS attacks. Overall in 2023, our automated defenses mitigated over 5.2 million HTTP DDoS attacks consisting of over 26 trillion requests. That averages out to 594 HTTP DDoS attacks and 3 billion mitigated requests every hour.
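The hourly averages follow directly from the annual totals; a quick sanity check:

```python
HOURS_PER_YEAR = 365 * 24                      # 8,760 hours in 2023
attacks_per_hour = 5.2e6 / HOURS_PER_YEAR      # 5.2M attacks over the year
requests_per_hour = 26e12 / HOURS_PER_YEAR     # 26 trillion requests over the year
print(f"{attacks_per_hour:.0f} attacks/hour")            # ~594
print(f"{requests_per_hour / 1e9:.1f}B requests/hour")   # ~3.0 billion
```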

Despite these astronomical figures, the volume of HTTP DDoS attack requests actually declined by 20% compared to 2022. The decline was not only annual: in 2023 Q4, the number of HTTP DDoS attack requests decreased by 7% YoY and 18% QoQ.

At the network layer, we saw a completely different trend. Our automated defenses mitigated 8.7 million network-layer DDoS attacks in 2023, an 85% increase compared to 2022.

In 2023 Q4, Cloudflare’s automated defenses mitigated over 80 petabytes of network-layer attacks. On average, our systems auto-mitigated 996 network-layer DDoS attacks and 27 terabytes of attack traffic every hour. The number of network-layer DDoS attacks in 2023 Q4 increased by 175% YoY and 25% QoQ.

HTTP and Network-layer DDoS attacks by quarter

DDoS attacks increase during and around COP 28

In the final quarter of 2023, the landscape of cyber threats witnessed a significant shift. While the Cryptocurrency sector was initially leading in terms of the volume of HTTP DDoS attack requests, a new target emerged as a primary victim. The Environmental Services industry experienced an unprecedented surge in HTTP DDoS attacks, with these attacks constituting half of all its HTTP traffic. This marked a staggering 618-fold increase compared to the previous year, highlighting a disturbing trend in the cyber threat landscape.

This surge in cyber attacks coincided with COP 28, which ran from November 30th to December 12th, 2023. The conference was a pivotal event, signaling what many considered the ‘beginning of the end’ for the fossil fuel era. It was observed that in the period leading up to COP 28, there was a noticeable spike in HTTP attacks targeting Environmental Services websites. This pattern wasn’t isolated to this event alone.

Looking back at historical data, particularly during COP 26 and COP 27, as well as other UN environment-related resolutions or announcements, a similar pattern emerges. Each of these events was accompanied by a corresponding increase in cyber attacks aimed at Environmental Services websites.

In February and March 2023, significant environmental events like the UN’s resolution on climate justice and the launch of the United Nations Environment Programme’s Freshwater Challenge potentially heightened the profile of environmental websites, possibly correlating with an increase in attacks on these sites.

This recurring pattern underscores the growing intersection between environmental issues and cyber security, a nexus that is increasingly becoming a focal point for attackers in the digital age.

DDoS attacks and Iron Swords

It’s not just UN resolutions that trigger DDoS attacks. Cyber attacks, and particularly DDoS attacks, have long been a tool of war and disruption. We witnessed an increase in DDoS attack activity in the Ukraine-Russia war, and now we’re also witnessing it in the Israel-Hamas war. We first reported the cyber activity in our report Cyber attacks in the Israel-Hamas war, and we continued to monitor the activity throughout Q4.

Operation “Iron Swords” is the military offensive launched by Israel against Hamas following the Hamas-led 7 October attack. During this ongoing armed conflict, we continue to see DDoS attacks targeting both sides.

DDoS attacks targeting Israeli and Palestinian websites, by industry

Relative to each region’s traffic, the Palestinian territories was the second most attacked region by HTTP DDoS attacks in Q4. Over 10% of all HTTP requests towards Palestinian websites were DDoS attacks, a total of 1.3 billion DDoS requests — representing a 1,126% increase QoQ. 90% of these DDoS attacks targeted Palestinian banking websites, and another 8% targeted Information Technology and Internet platforms.

Top attacked Palestinian industries

Similarly, our systems automatically mitigated over 2.2 billion HTTP DDoS requests targeting Israeli websites. While 2.2 billion represents a decrease compared to the previous quarter and year, it amounted to a larger percentage of total Israel-bound traffic. This normalized figure represents a 27% increase QoQ but a 92% decrease YoY. Notwithstanding the larger amount of attack traffic, Israel was only the 77th most attacked region relative to its own traffic. It was also the 33rd most attacked by total volume of attacks, whereas the Palestinian territories was 42nd.

Of those Israeli websites attacked, Newspaper & Media were the main target — receiving almost 40% of all Israel-bound HTTP DDoS attacks. The second most attacked industry was the Computer Software industry. The Banking, Financial Institutions, and Insurance (BFSI) industry came in third.

Top attacked Israeli industries

On the network layer, we saw the same trend. Palestinian networks were targeted by 470 terabytes of attack traffic — accounting for over 68% of all traffic towards Palestinian networks. Surpassed only by China, this figure placed the Palestinian territories as the second most attacked region in the world by network-layer DDoS attacks, relative to all Palestinian territories-bound traffic. By absolute volume of attack traffic, it came in third. Those 470 terabytes accounted for approximately 1% of all DDoS traffic that Cloudflare mitigated.

Israeli networks, by contrast, were targeted by only 2.4 terabytes of attack traffic, placing Israel as the eighth most attacked country by network-layer DDoS attacks (normalized). Those 2.4 terabytes accounted for almost 10% of all traffic towards Israeli networks.

Top attacked countries

When we turned the picture around, we saw that 3% of all bytes that were ingested in our Israeli-based data centers were network-layer DDoS attacks. In our Palestinian-based data centers, that figure was significantly higher — approximately 17% of all bytes.

On the application layer, we saw that 4% of HTTP requests originating from Palestinian IP addresses were DDoS attacks, and almost 2% of HTTP requests originating from Israeli IP addresses were DDoS attacks as well.

Main sources of DDoS attacks

In the third quarter of 2022, China was the largest source of HTTP DDoS attack traffic. However, since the fourth quarter of 2022, the US has held first place as the largest source of HTTP DDoS attacks, maintaining that undesirable position for five consecutive quarters. Similarly, our data centers in the US are the ones ingesting the most network-layer DDoS attack traffic — over 38% of all attack bytes.

HTTP DDoS attacks originating from China and the US by quarter

Together, China and the US account for a little over a quarter of all HTTP DDoS attack traffic in the world. Brazil, Germany, Indonesia, and Argentina account for the next twenty-five percent.

Top source of HTTP DDoS attacks

These large figures usually correspond to large markets. For this reason, we also normalize the attack traffic originating from each country by dividing it by that country’s total outbound traffic. When we do this, the top spots are often taken by small island nations or smaller markets from which a disproportionate amount of attack traffic originates. In Q4, 40% of Saint Helena’s outbound traffic was HTTP DDoS attack traffic — placing it at the top. Following the ‘remote volcanic tropical island’, Libya came in second and Eswatini (formerly known as Swaziland) third. Argentina and Egypt follow in fourth and fifth place.
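The normalization itself is simple division. Here is a toy illustration with made-up numbers, showing how a tiny market like Saint Helena can top a list that absolute volumes would never surface it on:

```python
# (attack_requests, total_outbound_requests) per country -- invented figures
outbound = {
    "Saint Helena": (4_000, 10_000),
    "Large market": (9_000_000, 900_000_000),
}
for country, (attack, total) in outbound.items():
    print(f"{country}: {attack / total:.0%} of outbound traffic is attack traffic")
# Saint Helena: 40% ...   Large market: 1% ...
```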

Top source of HTTP DDoS attacks with respect to each country’s traffic

On the network layer, Zimbabwe came in first place. Almost 80% of all traffic we ingested in our Zimbabwe-based data center was malicious. Paraguay came in second, and Madagascar third.

Top source of Network-layer DDoS attacks with respect to each country’s traffic

Most attacked industries

By volume of attack traffic, Cryptocurrency was the most attacked industry in Q4. Over 330 billion HTTP requests targeted it. This figure accounts for over 4% of all HTTP DDoS traffic for the quarter. The second most attacked industry was Gaming & Gambling. These industries are known for being coveted targets and attract a lot of traffic and attacks.

Top industries targeted by HTTP DDoS attacks

On the network layer, the Information Technology and Internet industry was the most attacked — over 45% of all network-layer DDoS attack traffic was aimed at it. Following far behind were the Banking, Financial Services and Insurance (BFSI), Gaming & Gambling, and Telecommunications industries.

Top industries targeted by Network-layer DDoS attacks

Here too, to change perspective, we normalized the attack traffic by the total traffic of each industry. When we do that, we get a different picture.

Top attacked industries by HTTP DDoS attacks, by region

We already mentioned in the beginning of this report that the Environmental Services industry was the most attacked relative to its own traffic. In second place was the Packaging and Freight Delivery industry, which is interesting because of its timely correlation with online shopping during Black Friday and the winter holiday season. Purchased gifts and goods need to get to their destination somehow, and it seems as though attackers tried to interfere with that. On a similar note, DDoS attacks on retail companies increased by 16% compared to the previous year.

Top industries targeted by HTTP DDoS attacks with respect to each industry’s traffic

On the network layer, Public Relations and Communications was the most targeted industry — 36% of its traffic was malicious. This too is very interesting given its timing. Public Relations and Communications companies are usually linked to managing public perception and communication. Disrupting their operations can have immediate and widespread reputational impacts, which become even more critical during the Q4 holiday season. This quarter often sees increased PR and communication activities due to holidays, end-of-year summaries, and preparation for the new year, making it a critical operational period — one that some may want to disrupt.

Top industries targeted by Network-layer DDoS attacks with respect to each industry’s traffic

Most attacked countries and regions

Singapore was the main target of HTTP DDoS attacks in Q4. Over 317 billion HTTP requests, 4% of all global DDoS traffic, were aimed at Singaporean websites. The US followed closely in second and Canada in third. Taiwan came in as the fourth most attacked region — amidst the upcoming general elections and the tensions with China. Taiwan-bound attack traffic in Q4 increased by 847% compared to the previous year, and 2,858% compared to the previous quarter. This increase is not limited to absolute values: when normalized, the percentage of HTTP DDoS attack traffic targeting Taiwan relative to all Taiwan-bound traffic also increased significantly — by 624% quarter-over-quarter and 3,370% year-over-year.

Top targeted countries by HTTP DDoS attacks

While China came in as the ninth most attacked country by HTTP DDoS attacks, it’s the number one most attacked country by network-layer attacks. 45% of all network-layer DDoS traffic that Cloudflare mitigated globally was China-bound. The rest of the countries were so far behind that their shares are almost negligible.

Top targeted countries by Network-layer DDoS attacks

When normalizing the data, Iraq, the Palestinian territories, and Morocco take the lead as the most attacked regions with respect to their total inbound traffic. What’s interesting is that Singapore comes up as fourth. So not only did Singapore face the largest amount of HTTP DDoS attack traffic, but that traffic also made up a significant share of total Singapore-bound traffic. By contrast, the US was second most attacked by volume (per the application-layer graph above), but came in fiftieth place with respect to total US-bound traffic.

Top targeted countries by HTTP DDoS attacks with respect to each country’s traffic

Similar to Singapore, but arguably more dramatic, China is both the most attacked country by absolute network-layer DDoS attack traffic and the most attacked with respect to all China-bound traffic. Almost 86% of all China-bound traffic was mitigated by Cloudflare as network-layer DDoS attacks. The Palestinian territories, Brazil, Norway, and again Singapore followed with large percentages of attack traffic.

Top targeted countries by Network-layer DDoS attacks with respect to each country’s traffic

Attack vectors and attributes

The majority of DDoS attacks are short and small relative to Cloudflare’s scale. However, unprotected websites and networks can still suffer disruption from short and small attacks without proper inline automated protection — underscoring the need for organizations to be proactive in adopting a robust security posture.

In 2023 Q4, 91% of attacks ended within 10 minutes, 97% peaked below 500 megabits per second (Mbps), and 88% never exceeded 50 thousand packets per second (pps).

Two out of every 100 network-layer DDoS attacks lasted more than an hour and exceeded 1 gigabit per second (Gbps). One out of every 100 attacks exceeded 1 million packets per second. Furthermore, the number of network-layer DDoS attacks exceeding 100 million packets per second increased by 15% quarter-over-quarter.

DDoS attack stats you should know

One of those large attacks was a Mirai-botnet attack that peaked at 160 million packets per second. The packet per second rate was not the largest we’ve ever seen. The largest we’ve ever seen was 754 million packets per second. That attack occurred in 2020, and we have yet to see anything larger.

This more recent attack, though, was unique in its bits-per-second rate. It was the largest network-layer DDoS attack we saw in Q4, peaking at 1.9 terabits per second and originating from a Mirai botnet. It was a multi-vector attack, meaning it combined multiple attack methods, including UDP fragment floods, UDP/Echo floods, SYN floods, ACK floods, and TCP floods with malformed flags.

This attack targeted a known European Cloud Provider and originated from over 18 thousand unique IP addresses that are assumed to be spoofed. It was automatically detected and mitigated by Cloudflare’s defenses.

This goes to show that even the largest attacks end very quickly. Previous large attacks we’ve seen ended within seconds — underlining the need for an in-line automated defense system. Though still rare, attacks in the terabit range are becoming more and more prominent.

1.9 Terabit per second Mirai DDoS attack

The use of Mirai-variant botnets is still very common. In Q4, almost 3% of all attacks originated from Mirai. Of all attack methods, though, DNS-based attacks remained the attackers’ favorite. Together, DNS floods and DNS amplification attacks accounted for almost 53% of all attacks in Q4. SYN floods followed in second and UDP floods in third. We’ll cover the two DNS attack types here, and you can visit the hyperlinks to learn more about UDP and SYN floods in our Learning Center.

DNS floods and amplification attacks

DNS floods and DNS amplification attacks both exploit the Domain Name System (DNS), but they operate differently. DNS is like a phone book for the Internet, translating human-friendly domain names like “www.cloudflare.com” into numerical IP addresses that computers use to identify each other on the network.

Simply put, DNS-based DDoS attacks compromise the mechanism computers and servers use to identify one another, causing an outage or disruption without actually ‘taking down’ a server. For example, a server may be up and running, but if the DNS server is down, clients won’t be able to connect to it and will experience it as an outage.

A DNS flood attack bombards a DNS server with an overwhelming number of DNS queries, usually generated by a DDoS botnet. The sheer volume of queries can overwhelm the DNS server, making it difficult or impossible for it to respond to legitimate queries. This can result in the aforementioned service disruptions, delays, or even an outage for those trying to access the websites or services that rely on the targeted DNS server.

On the other hand, a DNS amplification attack involves sending a small query with a spoofed IP address (the address of the victim) to a DNS server. The trick here is that the DNS response is significantly larger than the request. The server then sends this large response to the victim’s IP address. By exploiting open DNS resolvers, the attacker can amplify the volume of traffic sent to the victim, leading to a much more significant impact. This type of attack not only disrupts the victim but also can congest entire networks.
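To see why reflection pays off for attackers, an analyst can measure the amplification factor, the ratio of response size to query size, against a resolver they control. Below is a hedged sketch using the dnspython library; the resolver address and record type are placeholders for a lab setup.

```python
import dns.message   # pip install dnspython
import dns.query

# Build a small query and send it over UDP to a resolver you control.
query = dns.message.make_query("example.com", "TXT")
response = dns.query.udp(query, "192.0.2.53", timeout=3)  # placeholder resolver IP

q_len = len(query.to_wire())
r_len = len(response.to_wire())
print(f"query={q_len} bytes, response={r_len} bytes, "
      f"amplification={r_len / q_len:.1f}x")
```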

In both cases, the attacks exploit the critical role of DNS in network operations. Mitigation strategies typically include securing DNS servers against misuse, implementing rate limiting to manage traffic, and filtering DNS traffic to identify and block malicious requests.
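As one concrete form of that rate limiting, here is a minimal token-bucket sketch (not a Cloudflare implementation; the rate and burst values are illustrative) that caps how many queries each source IP may issue per second while still allowing short bursts:

```python
import time
from collections import defaultdict

RATE, BURST = 20.0, 40.0   # sustained queries/second and bucket depth (illustrative)
buckets = defaultdict(lambda: [BURST, time.monotonic()])  # ip -> [tokens, last_seen]

def allow_query(src_ip: str) -> bool:
    """Spend one token per query; refill tokens at RATE per second."""
    tokens, last = buckets[src_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens >= 1.0:
        buckets[src_ip] = [tokens - 1.0, now]
        return True
    buckets[src_ip] = [tokens, now]
    return False  # over the limit: drop or challenge the query

print(allow_query("198.51.100.7"))  # True until the bucket drains
```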

Top attack vectors

Among the emerging threats we track, we recorded a 1,161% increase in ACK-RST floods, a 515% increase in CLDAP floods, and a 243% increase in SPSS floods, each compared to the previous quarter. Let’s walk through some of these attacks and how they’re meant to cause disruption.

Top emerging attack vectors

ACK-RST floods

An ACK-RST Flood exploits the Transmission Control Protocol (TCP) by sending numerous ACK and RST packets to the victim. This overwhelms the victim’s ability to process and respond to these packets, leading to service disruption. The attack is effective because each ACK or RST packet prompts a response from the victim’s system, consuming its resources. ACK-RST Floods are often difficult to filter since they mimic legitimate traffic, making detection and mitigation challenging.

CLDAP floods

CLDAP (Connectionless Lightweight Directory Access Protocol) is a variant of LDAP (Lightweight Directory Access Protocol). It’s used for querying and modifying directory services running over IP networks. CLDAP is connectionless, using UDP instead of TCP, making it faster but less reliable. Because it uses UDP, there is no handshake, which lets attackers spoof the source IP address and exploit CLDAP servers as reflection vectors. In these attacks, small queries are sent with a spoofed source IP address (the victim’s IP), causing servers to send large responses to the victim, overwhelming it. Mitigation involves filtering and monitoring unusual CLDAP traffic.

SPSS floods

An SPSS (Source Port Service Sweep) flood is a network attack method that involves sending packets from numerous random or spoofed source ports to various destination ports on a targeted system or network. The aim of this attack is two-fold: first, to overwhelm the victim’s processing capabilities, causing service disruptions or network outages, and second, to scan for open ports and identify vulnerable services. The flood is achieved by sending a large volume of packets, which can saturate the victim’s network resources and exhaust the capacities of its firewalls and intrusion detection systems. To mitigate such attacks, it’s essential to leverage in-line automated detection capabilities.

Cloudflare is here to help – no matter the attack type, size, or duration

Cloudflare’s mission is to help build a better Internet, and we believe that a better Internet is one that is secure, performant, and available to all. No matter the attack type, the attack size, the attack duration or the motivation behind the attack, Cloudflare’s defenses stand strong. Since we pioneered unmetered DDoS Protection in 2017, we’ve made and kept our commitment to make enterprise-grade DDoS protection free for all organizations alike — and of course, without compromising performance. This is made possible by our unique technology and robust network architecture.

It’s important to remember that security is a process, not a single product or the flip of a switch. On top of our automated DDoS protection systems, we offer comprehensive bundled features such as firewall, bot detection, API protection, and caching to bolster your defenses. Our multi-layered approach optimizes your security posture and minimizes potential impact. We’ve also put together a list of recommendations to help you optimize your defenses against DDoS attacks, and you can follow our step-by-step wizards to secure your applications and prevent DDoS attacks. And, if you’d like to benefit from our easy-to-use, best-in-class protection against DDoS and other attacks on the Internet, you can sign up — for free! — at cloudflare.com. If you’re under attack, register or call the cyber emergency hotline number shown here for a rapid response.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source:
https://blog.cloudflare.com/ddos-threat-report-2023-q4/

How AI and Automation Help E-Commerce Scale

WRITTEN BY THE CLOUDINARY TEAM FEB-07-2023 7 MIN READ

Post-pandemic, consumer reliance on online shopping remains steady, meaning e-commerce businesses need to continue to adopt new technologies to scale their business operations. 

Digital Asset Management (DAM) software can make it easier for creators to store, search, and organize their assets. Unfortunately, legacy DAM solutions are no longer sufficient to manage large volumes of product-related content. After all, using ‘old school’ DAM software requires a large staff who can manually optimize media and customize experiences for their audience—a practice that goes against agile methodology.

Staying competitive in today’s e-commerce environment requires brands to harness the power of AI and the efficiency of automation. A business using AI can quickly match audiences to relevant products and edit assets on the fly, creating more convenient and personalized shopping experiences. On the back-end, automation simplifies asset management, saving time and resources while increasing sales efficiency and marketing effectiveness.

Harnessing New Technology to Grow E-commerce

During the pandemic, the US saw a 50% increase in e-commerce sales. This rapid shift to online shopping forced many businesses to find new asset management solutions. The right tool saves time for creative teams by taking on the labor involved in cropping, tagging, recoloring, background removal, and numerous other tedious tasks. AI tools can also automate higher-level functions, performing object recognition and asset categorization and efficiently organizing even legacy datasets.

Together, these tools free up a marketing team to address more strategic concerns, like finding opportunities to generate interest across new sales channels and touchpoints. 

E-commerce activity generates a lot of data that can be used for discovery. However, creators and developers can’t use what they can’t access, and studies show that 73% of data is never used for analytics. This wasted data is more than just lost revenue: Storing and transmitting data is expensive and also poses environmental concerns. To optimize asset delivery and extract the most valuable data from e-commerce activity, businesses must enhance their DAM tools with AI and automation.

AI and Automation for Scaling

Let’s look at how AI and automation can help an e-commerce business achieve greater customer satisfaction, higher revenue, lower costs, happier employees, and more efficient and agile business operations.

Marketing

Many websites use cookies to track their customers’ buying patterns and enable personalized product recommendations. AI can analyze this information so that we can use it to automate outreach and customize customer campaigns and newsletters.

Effective tools can provide extensible APIs to automate DAM and target specific user segments and devices. For example, Cloudinary’s Admin API lets you retrieve and manipulate asset metadata as part of an automated pipeline. In conjunction with Cloudinary’s object detection tools, it’s a powerful tool to modernize legacy databases.
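As a rough illustration of that kind of pipeline, the sketch below (Python SDK) lists assets under a hypothetical legacy-catalog/ folder with the Admin API and requests automatic tagging for each. The google_tagging add-on name and the 0.7 confidence threshold are assumptions; substitute whichever categorization add-on your account has enabled.

```python
import cloudinary
import cloudinary.api

cloudinary.config(cloud_name="demo", api_key="KEY", api_secret="SECRET")  # placeholders

# Page through legacy assets and ask Cloudinary to auto-tag each one.
legacy = cloudinary.api.resources(type="upload", prefix="legacy-catalog/", max_results=50)
for asset in legacy["resources"]:
    cloudinary.api.update(
        asset["public_id"],
        categorization="google_tagging",  # assumed add-on; use the one you have enabled
        auto_tagging=0.7,                 # apply tags scored at or above 0.7
    )
```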

Product

Most companies offer flexible return policies to stay competitive in a market where customers cannot appraise a product in person before purchase. It’s expensive to provide the customer with this freedom—product returns cost companies millions of dollars annually. 

One of the most common reasons customers return products is because they feel they’ve received something different than what they saw before purchase, which could occur if the product page had insufficient photos or poor-quality images. For an e-commerce retailer, saving money by taking fewer photos is a false economy; a loss of revenue and the cost of processing returns can offset any savings.

AI-powered content creation helps ensure customers are happy with their purchases. For example, Cloudinary’s image and video transformation API provides a suite of tools to generate high-quality derivative assets from a small number of product images. Suppose you’re selling a sweater in a range of colors: Cloudinary’s image transformation API enables us to recolor a photo of it, so the product team only needs to photograph it once.
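A hedged sketch of that recoloring flow, using Cloudinary’s URL-based transformations from the Python SDK: the public ID catalog/sweater and the color list are placeholders, and the replace_color effect swaps the garment’s predominant color so one photo shoot can serve every colorway.

```python
import cloudinary
from cloudinary import CloudinaryImage

cloudinary.config(cloud_name="demo")  # placeholder cloud name

for color in ("maroon", "navy", "olive"):
    # 60 is a color-similarity tolerance; tune it per product photo.
    url = CloudinaryImage("catalog/sweater").build_url(
        effect=f"replace_color:{color}:60")
    print(color, "->", url)
```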

AI is also a powerful tool for matching visitors to the products they’re most likely to buy. By combining in-session user behavior patterns with cookies, an AI-based system can recommend appropriately sized clothing that matches the customer’s style.

Then, when a potential buyer is matched to a product, we can use AI-powered tools to generate interest. For example, on Mazda’s purchase page, customers can apply 3D model transformation functions to create a 360-degree view of their vehicle build with all the personalized upgrade options and the color they’ve selected.

AI also enables customers to preview personalized products. If a clothing retailer offers the option to add a custom inscription or design, for example, then an AI-powered displacement map can show what the final product will look like much more clearly than a simple overlay.

We can implement much of this functionality with a tool like Cloudinary’s content-aware object detection add-on. When used alongside the AI-powered background removal tool, we can generate and edit image assets for any context. For instance, consider an automotive manufacturer with a database of automotive add-ons. An AI could analyze image assets and apply smart tags to categorize product options. If the manufacturer offers numerous upgrade options across a range of a dozen or more vehicles, this will save a lot of time and work. The technology can even help with cleaning up legacy databases and regaining control over lost or mislabeled assets.

Customer Service

A well-organized asset database also creates happier customers. If visitors to our storefront have access to a search field or chatbot for queries, we can combine this data with the user behavior data we collected earlier and compare it against our meticulously and automatically tagged and organized product catalog.

As we integrate AI tools more deeply into our supply chain, we can also expect more efficient fulfillment as we optimize for customer preference, location, and even local weather. For example, we can integrate Cloudinary-managed assets with Next.js Middleware in Netlify to find out where visitors are located and inject shipping information. If customers find the status updates useful, they’re more likely to become repeat buyers.

AI also helps build customer trust. AI-powered tools can automatically synchronize sales across multiple devices, identify high-risk transactions, and offer discounts to loyal customers more intelligently than rule-based implementations would. We can even use virtual assistants to handle administrative tasks that impact the end-user experience. 

For example, AI can help a storefront become more responsive by determining which media assets should be cached locally in a Content Delivery Network (CDN) or by identifying the most routine customer queries and offloading them to automated chatbots. An apparel storefront can provide a more bespoke experience by offering AI-powered fit and sizing assistance or even suggestions for complementary wardrobe choices.
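The caching half of that decision can be as simple as ranking assets by recent demand. A toy sketch (not a Cloudinary feature; the figures are invented) follows:

```python
# Requests observed per asset over the last hour -- invented numbers.
request_counts = {"hero.jpg": 90_000, "sku-123.jpg": 12_000, "sku-999.jpg": 40}
CACHE_SLOTS = 2  # how many assets we choose to pin at the CDN edge

to_cache = sorted(request_counts, key=request_counts.get, reverse=True)[:CACHE_SLOTS]
print("pin at the edge:", to_cache)  # ['hero.jpg', 'sku-123.jpg']
```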

When a customer decides to purchase, AI can help us ensure we’ve minimized human error in the inventory handling and fulfillment stages. If our product has a loyal following, we can keep customers engaged by providing AI-optimized, up-to-date stock arrival notifications.

If we allow end users to create their own content, such as photos in product reviews (or if we’re using AI to pull from external content stores), we should use a tool like Cloudinary’s asset moderation. Depending on the type and volume of content, we can configure these add-ons to flag content for manual or automatic review or a combination of both. For instance, we might want to automatically reject some content, such as low-quality images or images that have not been anonymized. Other content might need human approval, such as automatically smart-tagged product images. 
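A hedged sketch of routing user-submitted review photos through moderation at upload time: the aws_rek value (the Amazon Rekognition moderation add-on) is an assumption, and moderation="manual" would queue the asset for human review instead.

```python
import cloudinary
import cloudinary.uploader

cloudinary.config(cloud_name="demo", api_key="KEY", api_secret="SECRET")  # placeholders

# Flag the upload for automatic moderation; swap in "manual" for human review.
result = cloudinary.uploader.upload("review-photo.jpg", moderation="aws_rek")
print(result.get("moderation"))  # moderation status metadata for the asset
```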

Sales

To be competitive in sales within a digital ecosystem, you often need to analyze trends in external data. AI tools help us stay competitive with comprehensive industry monitoring and analysis. Rather than manually searching for a competitive edge, we can feed raw data into our models and expect better insights—notably, often without needing to perform the tedious process of data normalization.

Another common necessity of e-commerce businesses, complex integrations, can break continuity between upstream and downstream portions of the sales pipeline, especially when integrating legacy applications. This can create extra work and delays for the sales team, who either have to troubleshoot integrations or rely on support or developer teams to make changes. AI-powered automation can solve this issue and create a more extensible and easy-to-use pipeline for the sales team.

Financial Processing

In an e-commerce business, payroll, accounting, and invoicing are all digital (and often cloud-first) processes. This makes them ideally suited to administrative automation and AI.

Cloudinary’s broad set of integrations enables Cloudinary-managed assets to be deployed through commercial platforms, like Adobe Commerce (formerly Magento) or Salesforce. We get the benefits of the financial tooling of top e-commerce and marketing frameworks while delivering quality, relevant content that’s been automatically curated by asset management technologies.

Ride the E-Commerce Wave

To grow an e-commerce business in a cloud-first world, you need the help of cutting-edge technologies. In the DAM space, AI can make the difference between a digital storefront that needs constant manual labor to stay effective and an e-commerce business that’s ready to ride the wave of internet commerce. To start integrating AI into your business plan, visit Cloudinary today.

Source:
https://cloudinary.com/blog/how-ai-and-automation-help-e-commerce-scale

SonicWall SonicOS 7.1.1 FAQ

01/11/2024

Description

This article helps answer frequently asked questions regarding SonicOS 7.1.1.

Q. What is SonicOS 7.1.1?
A. SonicOS 7.1.1 is a feature release, available on all Gen 7 firewalls, that brings new capabilities around security, content filtering, integrations, and virtual platforms.


Q. Will we be able to manage SonicOS 7.1 from NSM 2.3.5?
A. NSM 2.3.5 will not support SonicOS 7.1. The support for SonicOS 7.1 will be available from NSM 2.4.0, which will be released early next year (2024). Please read the following article on NSM Compatibility with SonicOS 7.1.


Q. What are the new features available on SonicOS 7.1.1?
A. The major features implemented in SonicOS 7.1.1 are DNS Filtering, reputation-based content filtering, Wi-Fi 6 access-point management, Network Access Control (NAC) integration with Aruba ClearPass, NSv bootstrapping, auto-update firmware and some other enhancements with storage and user interface (UI) for ease of use.


Q. How can existing firewall customers running SonicOS 7 upgrade/migrate to SonicOS 7.1.1?
A. You can upgrade the firewall to SonicOS 7.1 on-box, without using a migration tool.


Q. How can existing firewall customers running SonicOS 6.5 and previous versions upgrade to SonicOS 7.1.1?
A. Users will be required to make use of our Secure Upgrade Program to upgrade their existing hardware models to Gen 7. They will then need to migrate their settings to the new firewall running SonicOS 7.1.1.
Learn more about the Secure Upgrade Program


Q. Are there any new features in 7.1.1 that will require new licenses?
A. The DNS Filtering feature is a licensed feature that will be available as an a la carte license for Gen 7 firewalls without the APSS bundle.


Q. Do I need any additional licensing if I already have the APSS license available on my current Gen 7 firewall?
A. No.


Q. Can I perform a firmware/OS upgrade on my existing NSv NGFW running SonicOS 7.1.1?
A. The downgrade of firmware from SonicOS 7.1 to SonicOS 7.0 is not supported. Please refer to this article when upgrading your firewall: How can I upgrade SonicOS Firmware?


Q. Is there any change in behavior with regard to Policy Mode with 7.1.1?
A. There is no change in behavior with regard to Policy Mode in SonicOS 7.1.1. The NSv 270, 470 and 870 will continue to support both Global and Policy Mode. The NSsp 15700 will continue to support only Policy Mode.


Q. What is CFS 5.0? How does it differ from CFS 4.0?
A. Content Filtering Service 5.0 extends the categories available in CFS 4.0: SonicOS 7.0.1 supported 64 categories, and that number has been increased to 89. CFS 5.0 also brings performance improvements along with reputation-based blocking.


Q. What is upgrade behavior when a user upgrades from SonicOS 7.0.1 to SonicOS 7.1.1 with regard to CFS policies?
A. There will be no impact on existing CFS policies. However, since CFS 5.0 introduces reputation-based blocking, users will be required to configure their CFS policies with the new reputation parameter in CFS 5.0. Please refer to this upgrade article.


Q. Can we downgrade the firewall from SonicOS 7.1 to SonicOS 7.0?
A. The downgrade of firmware from SonicOS 7.1 to SonicOS 7.0 is not supported. Please refer to this article when upgrading your firewall.


Q. What is DNS Filtering? How is it different from the current DNS capabilities in SonicOS 7.0.1?
A. DNS Filtering inspects the DNS traffic in real time and provides the ability to block threats and access to malicious websites. DNS Filtering blocks threats before they can reach your network. The DNS security capabilities on 7.0.1 include DNS Tunnel Detection and DNS Sinkholes. Please read DNS Security to understand them in detail.


Q. What is the upgrade behavior when users upgrade from SonicOS 7.0.1 to SonicOS 7.1.1 with regard to DNS proxy and sink-holing?
A. The upgrade from SonicOS 7.0.1 to SonicOS 7.1.1 has no impact on the DNS proxy and sinkhole behavior that existed prior to the SonicOS 7.1 upgrade.


Q. What does the NAC integration feature do?
A. SonicWall Next-Generation Firewalls (NGFWs) provide a RESTful threat API that integrates with Aruba ClearPass for network access control (NAC). ClearPass can pass security context vectors, including Source IP, Source MAC, User ID, User Role, Domain, Device Category, Device Family, Device Name, OS Type, Hostname, and Health Posture, to SonicWall NGFWs, which can then enforce real-time rules based on device type, OS, and device health posture at every point of control. When an alert is generated on a client machine, ClearPass can share it with SonicWall NGFWs, triggering a range of predetermined, policy-based actions from quarantine to blocking.


Q. Does this NAC integration feature work with any NAC providers?
A. No, this NAC integration only works with Aruba ClearPass.


Q. Which access point models can I integrate with firewalls running SonicOS 7.1.1?
A. With the launch of SonicOS 7.1.1, users will now also be able to integrate and manage Wi-Fi 6 access points such as the SonicWave 621, 641 and 681.


Q. How can I automate NSv deployment using the bootstrapping feature? Which platforms support this feature?
A. Bootstrapping helps with automated NSv deployments, and token-based registration eases the bootstrapping process. KVM already supported bootstrapping in SonicOS 7.0.1. With the launch of 7.1.1, other platforms, including VMware, Hyper-V, AWS and Azure, will also support bootstrapping.


Q. How is the bootstrapping process different between private cloud and public cloud?
A. The bootstrapping process is not different between private cloud and public cloud. SonicOS supports bootstrapping on AWS, Azure, VMware, KVM and Hyper-V.


Q. What are the new parameters that will be stored in secondary storage modules with the launch of 7.1.1?
A. TSR, EXP, PCAP, threat logs and AppFlow logs will be stored in the secondary storage module as part of SonicOS 7.1.1.


Q. Will the new features available in SonicOS 7.1.1 be available in the Capture Threat Assessment (CTA) report?
A. During the launch, the new features in SonicOS 7.1.1 will not be included in the CTA report.


Q. Are the new features available on NSM?
A. Yes. The upcoming NSM version 2.4 is planned to support the new features on SonicOS 7.1.1.


Q. Can I manage SonicOS 7.1.1 on the previous versions of NSM (prior to 2.4)?
A. You can upgrade the SonicOS version to 7.1.1, but the new features which are part of 7.1.1 will not be available on NSM versions prior to 2.4.


Q. What are the best practices to be followed on SonicOS 7.1.1?
A. Please follow the best practices documented here when upgrading the firewall from SonicOS 7.0.1 to SonicOS 7.1.

The migration tool is not required for the configuration migration from SonicOS 7.0 to SonicOS 7.1. Any customer migrating from Gen 6 to SonicOS 7.1 would need to upgrade to SonicOS 7.0.1 using the migration tool and then migrate to SonicOS 7.1.

DNS Filtering is the first line of defense and works independently of Content Filtering Services (CFS). Please follow the admin guides for seamless configuration with best practices.
 
Q. What is the new website for URL rating and reputation lookup with CFS 5.0?
A. https://cfssupportapi.global.sonicwall.com/


Q. How can I check the URL rating on the firewall UI?
A. Device > Diagnostics > URL Rating Request Tool


Q. What is the performance impact of enabling the new SonicOS 7.1 features on an existing firewall?
A. We do not expect there to be any impact on the performance of an existing firewall because of new features.


Q. Can DNS proxy 4to4 and 4to6 features work alongside DNS filtering? Can this be accomplished by adding an additional DNS proxy-only rule alongside a DNS filtering rule for X0 Interface? If so, what will take precedence/priority?
A. DNS rules give the choice of either proxy or filtering on a single rule. When proxy is enabled, client 4to4 or 4to6 DNS queries can be proxied. When DNS filtering is enabled, only client 4to4 DNS queries will be proxied and filtered.
  • While DNS proxy processes both DNS TCP and DNS UDP, DNS filtering applies only to DNS UDP.
  • Both proxy and filtering DNS rules can be stacked; the most specific match will be applied, and the lookup precedence/priority is top-down (see the sketch below).
  • To have DNS proxy 4to6 alongside DNS filtering, the proxy rule must explicitly specify the source zone and address of the 4to6 clients for the traffic to hit the rule and the policy to be applied.
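Here is a generic first-match sketch (not SonicOS code; zones and networks are invented) of that top-down lookup: the more specific proxy rule is placed before the broader filtering rule, so 4to6 clients hit the proxy rule first.

```python
import ipaddress

# Ordered rule table: evaluated top-down, first match wins.
rules = [
    {"name": "proxy-4to6",  "zone": "LAN", "net": "10.1.2.0/24", "action": "proxy"},
    {"name": "filter-rest", "zone": "LAN", "net": "0.0.0.0/0",   "action": "filter"},
]

def match_rule(zone: str, src_ip: str):
    for rule in rules:
        if (rule["zone"] == zone and
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["net"])):
            return rule["name"], rule["action"]
    return None

print(match_rule("LAN", "10.1.2.5"))   # ('proxy-4to6', 'proxy')
print(match_rule("LAN", "10.9.9.9"))   # ('filter-rest', 'filter')
```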


Q. Can DNS Filtering be applied on custom zones or is it restricted to default zones, LAN, DMZ and WLAN?
A. DNS Filtering can be applied to LAN, DMZ and WLAN zones as well as custom zones with Trusted, Public and Wireless Security Types.


Q. How long does a cache entry last before we request a category for a specific domain again?
A. The lifetime of a cache entry for a domain depends on the TTL of the domain.
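In other words, the cached category is reused until the domain’s DNS TTL elapses, after which the rating is requested again. A generic sketch (not SonicOS code) of that behavior:

```python
import time

cache = {}  # domain -> (category, expires_at)

def get_category(domain: str, ttl: int, lookup) -> str:
    entry = cache.get(domain)
    if entry and entry[1] > time.monotonic():
        return entry[0]                      # fresh: served from cache
    category = lookup(domain)                # expired or absent: re-rate the domain
    cache[domain] = (category, time.monotonic() + ttl)
    return category

print(get_category("example.com", ttl=300, lookup=lambda d: "News"))
```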


Q. Are there plans to support DNS over TLS and DNS over HTTPS?
A. Yes. DNS over TLS and DNS over HTTPS will be available in a future release.


Q. Will the DNS Filtering license be included with any existing bundle or does the customer need to buy it separately?
A. DNS Filtering will be available with APSS and there will be a la carte SKUs for EPSS, TPSS and HW only.

Q. What happens to the WNM managed access-point when the firewall is upgraded to SonicOS 7.1?

A. Please note that if you have 600 series access points on the network connected to a WLAN zone of a firewall running 7.0.x and managed by WNM, the access points will be acquired by the firewall after the update to 7.1, and none of the WNM settings will be carried over. Please “Disable SonicPoint/SonicWave management” on the WLAN zone for seamless management.

Source:
https://www.sonicwall.com/support/knowledge-base/sonicos-7-1-1-faq/231212121859137/

Black Basta-Affiliated Water Curupira’s Pikabot Spam Campaign

By: Shinji Robert Arasawa, Joshua Aquino, Charles Steven Derion, Juhn Emmanuel Atanque, Francisrey Joshua Castillo, John Carlo Marquez, Henry Salcedo, John Rainier Navato, Arianne Dela Cruz, Raymart Yambot, Ian Kenefick
January 09, 2024
Read time: 8 min (2105 words)

A threat actor we track under the Intrusion set Water Curupira (known to employ the Black Basta ransomware) has been actively using Pikabot, a loader malware with similarities to Qakbot, in spam campaigns throughout 2023.

Pikabot is a type of loader malware that was actively used in spam campaigns by a threat actor we track under the Intrusion set Water Curupira in the first quarter of 2023, followed by a break at the end of June that lasted until the start of September 2023. Other researchers have previously noted its strong similarities to Qakbot, the latter of which was taken down by law enforcement in August 2023. An increase in the number of phishing campaigns related to Pikabot was recorded in the last quarter of 2023, coinciding with the takedown of Qakbot — hinting at the possibility that Pikabot might be a replacement for the latter (with DarkGate being another temporary replacement in the wake of the takedown).

Pikabot’s operators ran phishing campaigns, targeting victims via its two components — a loader and a core module — which enabled unauthorized remote access and allowed the execution of arbitrary commands through an established connection with their command-and-control (C&C) server. Pikabot is a sophisticated piece of multi-stage malware with a loader and core module within the same file, as well as a decrypted shellcode that decrypts another DLL file from its resources (the actual payload).

In general, Water Curupira conducts campaigns for the purpose of dropping backdoors such as Cobalt Strike, leading to Black Basta ransomware attacks (coincidentally, Black Basta also returned to operations in September 2023). The threat actor conducted several DarkGate spam campaigns and a small number of IcedID campaigns in the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Pikabot, which gains initial access to its victim’s machine through spam emails containing an archive or a PDF attachment, exhibits the same behavior and campaign identifiers as Qakbot.

Figure 1. Our observations from the infection chain based on Trend’s investigation

Initial access via email

The malicious actors who send these emails employ thread-hijacking, a technique where malicious actors use existing email threads (possibly stolen from previous victims) and create emails that look like they were meant to be part of the thread to trick recipients into believing that they are legitimate. Using this technique increases the chances that potential victims would select malicious links or attachments. Malicious actors send these emails using addresses (created either through new domains or free email services) with names that can be found in original email threads hijacked by the malicious actor. The email contains most of the content of the original thread, including the email subject, but adds a short message on top directing the recipient to open the email attachment.

This attachment is either a password-protected ZIP archive containing an IMG file, or a PDF file. The malicious actor includes the password in the email message. Note that the name of the file attachment and its password vary for each email.

Figure 2. Sample email with a malicious ZIP attachment
Figure 3. Sample email with a malicious PDF attachment

The emails containing PDF files have a shorter message telling the recipient to check or view the email attachment.

The first stage of the attack

The attached archive contains a heavily obfuscated JavaScript (JS) file more than 100 KB in size. Once executed by the victim, the script attempts to run a series of commands using conditional execution.

Figure 4. Files extracted from the attached archive (.zip or .img)
Figure 5. Deobfuscated JS command

The script attempts command execution using cmd.exe. If this initial attempt is unsuccessful, the script proceeds with the following steps: It echoes a designated string to the console and tries to ping a specified target using the same string. In case the ping operation fails, the script employs Curl.exe to download the Pikabot payload from an external server, saving the file in the system’s temporary directory.

Subsequently, the script will retry the ping operation. If the retry is also unsuccessful, it uses rundll32.exe to execute the downloaded Pikabot payload (now identified as a .dll file) with “Crash” as the export parameter. The sequence of commands concludes by exiting the script with the specified exit code, ciCf51U2FbrvK.

We were able to observe another attack chain where the malicious actors implemented a more straightforward attempt to deliver the payload. As before, similar phishing techniques were performed to trick victims into downloading and executing malicious attachments. In this case, password-protected archive attachments were deployed, with the password contained in the body of the email.

However, instead of a malicious script, an IMG file was extracted from the attachment. This file contained two additional files — an LNK file posing as a Word document and a DLL file, which turned out to be the Pikabot payload extracted straight from the email attachment:

Figure 6. The content of the IMG file

Contrary to the JS file observed earlier, this chain maintained its straightforward approach even during the execution of the payload.

Once the victim is lured into executing the LNK file, rundll32.exe will be used to run the Pikabot DLL payload using an export parameter, “Limit”.

The content of the PDF file is disguised to look like a file hosted on Microsoft OneDrive to convince the recipient that the attachment is legitimate. Its primary purpose is to trick victims into accessing the PDF file content, which is a link to download malware.

Figure 7. Malicious PDF file disguised to look like a OneDrive attachment; note the misspelling of the word “Download”

When the user selects the download button, it will attempt to access a malicious URL, then proceed to download a malicious JS file (possibly similar to the previously mentioned JS file).

The delivery of the Pikabot payload via PDF attachment is a more recent development, emerging only in the fourth quarter of 2023.

We discovered an additional variant of the malicious downloader that employed obfuscation methods involving array usage and manipulation:

Figure 8. Elements of array “_0x40ee” containing download URLs and JS methods used for further execution

Nested functions employed array manipulation methods using “push” and “shift,” introducing complexity to the code’s structure and concealing its flow to hinder analysis. The presence of multiple download URLs, the dynamic creation of random directories using the mkdir command, and the use of Curl.exe, as observed in the preceding script, are encapsulated within yet another array. 

The JavaScript will run multiple commands in an attempt to retrieve the malicious payload from different external websites using Curl.exe, subsequently storing it in a random directory created using mkdir.

Figure 9. Payload retrieval commands using curl.exe

The rundll32.exe file will continue to serve as the execution mechanism for the payload, incorporating its export parameter.

Figure 10. Payload execution using rundll32.exe

The Pikabot payload

We analyzed the DLL file extracted from the archive shown in Figure 6 and found it to be a 32-bit DLL file with 1,515 exports. Calling its export function “Limit”, the file will decrypt and execute a shellcode that checks whether the process is being debugged by calling the Windows API NtQueryInformationProcess twice, with the flag 0x7 (ProcessDebugPort) on the first call and 0x1F (ProcessDebugFlags) on the second. This shellcode also decrypts another DLL file that it loads into memory and then eventually executes.

Figure 11. The shellcode calling the entry point of the decrypted DLL file

The decrypted DLL file will execute another anti-analysis routine by loading incorrect libraries and other junk to detect sandboxes. This routine seems to be copied from a certain GitHub article.

Security/Virtual Machine/Sandbox DLL files    Real DLL files        Fake DLL files
cmdvrt.32.dll                                 kernel32.dll          NetProjW.dll
cmdvrt.64.dll                                 networkexplorer.dll   Ghofr.dll
cuckoomon.dll                                 NlsData0000.dll       fg122.dll
pstorec.dll
avghookx.dll
avghooka.dll
snxhk.dll
api_log.dll
dir_watch.dll
wpespy.dll

Table 1. The DLL files loaded to detect sandboxes

After performing the anti-analysis routine, the malware loads a set of PNG images from its resources section which contains an encrypted chunk of the core module and then decrypts them. Once the core payload has been decrypted, the Pikabot injector creates a suspended process (%System%\SearchProtocolHost) and injects the core module into it. The injector uses indirect system calls to hide its injection.

Figure 12. Loading the PNG images to build the core module

Resolving the necessary APIs is among the malware’s initial actions. Using a hash of each API (0xF4ACDD8, 0x03A5AF65E, and 0xB1D50DE4), Pikabot uses two functions to obtain the addresses of the three APIs it needs: GetProcAddress, LoadLibraryA, and HeapFree. This process is done by looking through kernel32.dll exports. The rest of the APIs used are resolved using GetProcAddress with decrypted strings. Other pertinent strings are also decrypted during runtime before they are used.

Figure 13. Harvesting the GetProcAddress and LoadLibrary API
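For analysts, resolving hash-based API references like these usually means hashing every export name in kernel32.dll and matching against the malware’s constants. The sketch below shows the shape of that workflow using the pefile library; the hash routine is a deliberate placeholder (the report does not specify Pikabot’s algorithm), so substitute the routine recovered from the sample.

```python
import pefile  # pip install pefile

def placeholder_hash(name: bytes) -> int:
    """Stand-in hash (djb2-style); NOT Pikabot's actual algorithm."""
    h = 5381
    for b in name:
        h = ((h * 33) + b) & 0xFFFFFFFF
    return h

# Hash constants as quoted in the report.
TARGETS = {0xF4ACDD8, 0x03A5AF65E, 0xB1D50DE4}

pe = pefile.PE(r"C:\Windows\System32\kernel32.dll")
for exp in pe.DIRECTORY_ENTRY_EXPORT.symbols:
    if exp.name and placeholder_hash(exp.name) in TARGETS:
        print(exp.name.decode())  # candidate API behind one of the hashes
```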

The Pikabot core module checks the system’s languages and stops its execution if the language is any of the following:

  • Russian (Russia)
  • Ukrainian (Ukraine)

It will then ensure that only one instance of itself is running by creating a hard-coded mutex, {A77FC435-31B6-4687-902D-24153579C738}.
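
This hard-coded mutex can double as a host-based indicator. As a minimal sketch (our own example, not from the original analysis), the PowerShell below tries to open the mutex; if the call succeeds, a Pikabot instance may already be running on the host:

# Hypothetical triage check: try to open Pikabot's hard-coded mutex.
# If the malware created it in the Global namespace, prefix 'Global\'.
$mutexName = '{A77FC435-31B6-4687-902D-24153579C738}'
try {
    $mutex = [System.Threading.Mutex]::OpenExisting($mutexName)
    Write-Output "Mutex found - a Pikabot instance may be running."
    $mutex.Dispose()
} catch [System.Threading.WaitHandleCannotBeOpenedException] {
    Write-Output "Mutex not found - no running instance detected."
}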

The next stage of the core module involves obtaining details about the victim’s system and forwarding them to a C&C server. The collected data is assembled in JSON format, with every data item filled into its position using the wsprintfW function. The stolen data will look like the image in Figure 14, which shows the collected information before encryption:

Figure 14. Stolen information in JSON format before encryption

Pikabot seems to have a binary version and a campaign ID. The keys 0fwlm4g and v2HLF5WIO are present in the JSON data, with the latter seemingly being a campaign ID.

The malware creates a named pipe and uses it to temporarily store the additional information gathered by creating the following processes: 

  • whoami.exe /all
  • ipconfig.exe /all
  • netstat.exe -aon

Each piece of information returned is encrypted before the execution of the next process.

A list of processes running on the system is also gathered and encrypted by calling CreateToolhelp32Snapshot and enumerating the processes through Process32First and Process32Next.
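
Defenders can hunt for this recon pattern. The sketch below is our own illustration, not part of the original analysis; it assumes process-creation auditing with command-line logging (Event ID 4688) is enabled, and simply searches recent Security events for the three commands listed above:

# Hypothetical hunt: look for the whoami/ipconfig/netstat recon trio
# in recent process-creation events (requires command-line auditing).
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4688 } -MaxEvents 5000 |
    Where-Object { $_.Message -match 'whoami(\.exe)?\s+/all|ipconfig(\.exe)?\s+/all|netstat(\.exe)?\s+-aon' } |
    Select-Object TimeCreated, Id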

Once all the information is gathered, it is sent to one of the following IP addresses, with the specific URL path cervicobrachial/oIP7xH86DZ6hb?vermixUnintermixed=beatersVerdigrisy&backoff=9zFPSr appended:

  • 70[.]34[.]209[.]101:13720
  • 137[.]220[.]55[.]190:2223
  • 139[.]180[.]216[.]25:2967
  • 154[.]61[.]75[.]156:2078
  • 154[.]92[.]19[.]139:2222
  • 158[.]247[.]253[.]155:2225
  • 172[.]233[.]156[.]100:13721

However, as of writing, these sites are inaccessible.
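
These indicators can still be useful for retrospective hunting. As a minimal sketch (our own, not from the original analysis), the following PowerShell flags any established TCP connection to the listed addresses; the defanging brackets are removed for matching:

# Hypothetical IOC sweep: flag live TCP connections to the listed C&C IPs.
$c2 = @('70.34.209.101', '137.220.55.190', '139.180.216.25',
        '154.61.75.156', '154.92.19.139', '158.247.253.155',
        '172.233.156.100')
Get-NetTCPConnection -State Established -ErrorAction SilentlyContinue |
    Where-Object { $c2 -contains $_.RemoteAddress } |
    Select-Object LocalPort, RemoteAddress, RemotePort, OwningProcess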

C&C servers and impact

As previously mentioned, Water Curupira conducts campaigns to drop backdoors such as Cobalt Strike, which lead to Black Basta ransomware attacks. It is this potential association with a sophisticated ransomware family like Black Basta that makes Pikabot campaigns particularly dangerous.

The threat actor also conducted several DarkGate spam campaigns and a small number of IcedID campaigns during the early weeks of the third quarter of 2023, but has since pivoted exclusively to Pikabot.

Lastly, we have observed distinct clusters of Cobalt Strike beacons, involving over 70 C&C domains that lead to Black Basta, dropped via campaigns conducted by this threat actor.

Security recommendations

To avoid falling victim to various online threats such as phishing, malware, and scams, users should stay vigilant when it comes to emails they receive. The following are some best practices in user email security:

  • Always hover over embedded links with the pointer to learn where the link leads.
  • Check the sender’s identity. Unfamiliar email addresses, mismatched email and sender names, and spoofed company emails are signs that the sender has malicious intent.
  • If the email claims to come from a legitimate company, verify both the sender and the email content before downloading attachments or selecting embedded links.
  • Keep operating systems and all pieces of software updated with the latest patches.
  • Regularly back up important data to an external and secure location. This ensures that even if you fall victim to a phishing attack, you can restore your information.

A multilayered approach can help organizations guard the possible entry points into their systems (endpoint, email, web, and network). Security solutions that can detect malicious components and suspicious behavior help protect enterprises.

  • Trend Vision One™ provides multilayered protection and behavior detection, which helps block questionable behavior and tools before ransomware can do any damage. 
  • Trend Cloud One™ – Workload Security protects systems against both known and unknown threats that exploit vulnerabilities. This protection is made possible through techniques such as virtual patching and machine learning.  
  • Trend Micro™ Deep Discovery™ Email Inspector employs custom sandboxing and advanced analysis techniques to effectively block malicious emails, including phishing emails that can serve as entry points for ransomware.  
  • Trend Micro Apex One™ offers next-level automated threat detection and response against advanced concerns such as fileless threats and ransomware, ensuring the protection of endpoints.

Indicators of Compromise (IOCs)

The indicators of compromise for this blog entry can be found here.

Source :
https://www.trendmicro.com/it_it/research/24/a/a-look-into-pikabot-spam-wave-campaign.html

Trend Micro Defends FIFA World Cup from Cyber Threats

By: Jon Clay
January 11, 2024
Read time: 4 min (970 words)

Trend Micro collaborates with INTERPOL to defend FIFA World Cup by preventing attacks & mitigating risks to fight against the rising threat of cybercrime.

The FIFA World Cup, which concluded in December 2022, generated enormous online engagement from millions of fans around the world. The remarkable penalty shootout that decided the final was a trending topic on social media and in headline news. Before and during the event, while fans were celebrating and betting on their favorite teams, cybercriminals were taking advantage of the occasion to deploy spam and scams. Law enforcement, and in particular INTERPOL, stepped up and tapped its gateway partners to look out for and report the cyberthreats surrounding the 2022 FIFA World Cup.

Trend Micro helped by proactively monitoring our global threat intelligence, which revealed many malicious websites and scams before and during the event. For example, we saw websites disguised as ticketing systems for the 2022 FIFA World Cup, as well as many survey scams. We shared this information with INTERPOL, supporting its goal of preventing attacks and mitigating the risks posed by fraudsters targeting the event. Furthermore, through our global threat intelligence, we monitored detections of malicious websites and files originating from Qatar as INTERPOL worked closely with the country to prevent cybercriminals and malicious actors from disrupting the sporting event.

Let’s look a bit deeper into the different cyber threats we discovered and shared with INTERPOL, besides blocking them for our customers.

Malicious Websites found throughout 2022

Figure 1: Trend Micro detections of malicious sites bearing keywords of “FIFA” and “World Cup”
Figure 2: Top affected countries of malicious sites related to FIFA World Cup
Figure 3: Timeline of FIFA World Cup Cyberthreats

Fake Ticketing System

Given the millions of potential victims, it is no wonder that cybercriminals created dubious sites selling tickets to the 2022 FIFA World Cup, tricking users into entering their personal information and credit card details in phishing attempts. We observed a few sites, such as fifa-ticketssales[.]com and prime-ticketssales[.]com, both imitating the FIFA World Cup ticketing page, with one showing an unbelievable number of sold tickets and remaining seats. We also identified scammers’ contact details, such as phone numbers and email addresses; some of these phone numbers were linked to other scam sites, as it is typical for scammers to reuse phone numbers.

Figure 4: Fake site selling FIFA World Cup tickets
Figure 5: Questionable number of tickets sold, used as a lure for users

Fake Live Streaming

Cybercriminals created several fake streaming sites to lure victims into clicking on them. We identified around 40 unique domains that hosted fake streams of the FIFA World Cup. Example sites include watchvsportstv[.]com/2022-FIFA-WORLD-CUP-FINAL, sportshdlivetv[.]com/FIFA-WORLD-CUP-FINAL, and istream2watch[.]stream/video/fifa-world-cup. Based on our analysis of these fake live-streaming pages, the user is redirected to websites with subscription forms or premium-access requests that lure them into subscribing and paying. Among the top countries detected were Brazil, the Philippines, and Malaysia.

Survey Scams

Survey scams are relentless, and scammers have been using them for a long time; one example we reported on is covered at https://www.theregister.com/2012/03/23/pinterest_attracts_scammers/. While the FIFA World Cup 2022 was ongoing, especially as the semi-finals and final approached, we observed malicious sites hosting survey scams that offered 50GB of free mobile data. We identified more than 40 IP addresses or servers hosting the scam sites, mostly registered under Chinese names and hosted under Google LLC. These survey scams trick users with the promise of 50GB of free mobile data for faster video streaming or a free mobile network, luring them into entering their phone number and personal information. In the end, victims incur charges without realizing it is a scam, and their personal information may be used in future spam or scam campaigns. Additionally, these pages mostly redirect to fake dating sites that require and harvest email addresses, which allows spammers to include them in their next wave of spam.

Figure 6: FIFA World Cup survey scam that offers free mobile data
Figure 7: The scam requests a phone number, which may lead to unwanted charges
Figure 8: The page claims the offer was successful but requires the user to share it on WhatsApp, thus propagating the survey scam
Figure 9: Survey scam common web page title

Crypto scamming and malicious app

Based on external reports, crypto scammers also leveraged the sporting event. We observed scam sites such as cristiano-binance[.]xyz, binance[.]supply, football-blnance[.]com, football-binance[.]com, birance[.]online, and birance[.]site that lure users into clicking a “Connect wallet” button that compromises the account. We also observed a malicious app, an Android RAT reported at https://twitter.com/ESETresearch/status/1596222440996311040 and https://blog.cyble.com/2022/12/09/threat-actors-targeting-fans-amid-fifa-world-cup-fever/, called “Kora 442” and distributed via the malicious site kora442[.]com. It lured users into downloading the app “kora442.apk” by promising live and exclusive broadcasts of the 2022 FIFA World Cup. Example hashes are 2299d4e4ba3e9c2643ee876bb45d6a976362ce3c, c66564b7f66f22ac9dd2e7a874c6874a5bb43a26, 9c904c821edaff095e833ee342aedfcaac337e04, and 60b1da6905857073c4c46e7e964699d9c7a74ec7. The package name is com.app.projectappkora, and we detect it as AndroidOS_DummyColl.HRX. It steals information from the infected device and sends it to the command-and-control (C&C) server.

Figure 10: Fraudulent site enabling potential hijacking of crypto accounts
Figure 11: Malicious mobile app site with download request
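
If you need to check a downloaded sample against the hashes above, a quick PowerShell sketch like the following (our illustration; the file path is hypothetical) computes a SHA-1 and compares it to the list:

# Hypothetical check of a suspicious APK against the published SHA-1 hashes.
$knownBad = @('2299d4e4ba3e9c2643ee876bb45d6a976362ce3c',
              'c66564b7f66f22ac9dd2e7a874c6874a5bb43a26',
              '9c904c821edaff095e833ee342aedfcaac337e04',
              '60b1da6905857073c4c46e7e964699d9c7a74ec7')
$hash = (Get-FileHash -Path '.\kora442.apk' -Algorithm SHA1).Hash.ToLower()
if ($knownBad -contains $hash) { 'Match: known Kora 442 sample' } else { 'No match' }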

Trend Micro’s mission has always been to make the world safe for exchanging digital information, and our support of INTERPOL and the 2022 FIFA World Cup gave us an opportunity to do exactly that. We’re proud of our continued support of INTERPOL, whether it is helping with investigations of cybercriminals or helping with a major worldwide sporting event. Our 34 years of experience in proactively identifying new threats and attacks and protecting users against them will continue, and we look forward to more engagements with law enforcement and the organizations managing these events.

Source :
https://www.trendmicro.com/it_it/research/24/a/trend-micro-defends-fifa-world-cup-from-cyber-threats.html

Forward Momentum: Key Learnings From Trend Micro’s Security Predictions for 2024

By: Trend Micro
December 06, 2023
Read time: 4 min (971 words)

In this blog entry, we discuss predictions from Trend Micro’s team of security experts about the drivers of change that will figure prominently in 2024.

Digital transformations in the year ahead will be led by organizations pursuing a pioneering edge from the integration of emergent technologies. Advances in cloud technology, artificial intelligence and machine learning (AI/ML), and Web3 are poised to reshape the threat landscape, giving it new frontiers outside the purview of traditional defenses. However, these technological developments are only as efficient as the IT structures that support them. In 2024, business leaders will have to take measures to ensure that their organization’s systems and processes are equipped to stay in step with these modern solutions — not to mention the newfound security challenges that come with implementing and securing them.

As the new year draws closer, decision-makers will need to stay on top of key trends and priority areas in enterprise cybersecurity if they are to make room for growth and fend off any upcoming threats along their innovation journey. In this blog entry, we discuss predictions from Trend Micro’s team of security experts about the drivers of change that will figure prominently next year.

Misconfigurations will allow cybercriminals to scale up their attacks using cloud-native worms

Enterprises should come into 2024 prepared to ensure that their cloud resources can’t be turned against them in “living-off-the-cloud” attacks. Security teams need to closely monitor cloud environments in anticipation of cyberattacks that, tailored with worming capabilities, can also abuse cloud misconfigurations to gain a foothold in their targets and use rootkits for persistence. Cloud technologies like containerized applications are especially at risk as once infected, these can serve as a launchpad from which attackers can spread malicious payloads to other accounts and services. Given their ability to infect multiple containers at once, leverage vulnerabilities at scale, and automate various tasks like reconnaissance, exploitation, and achieving persistence, worms will endure as a prominent tactic among cybercriminals next year.

AI-generated media will give rise to more sophisticated social engineering scams

The gamut of use cases for generative AI will be a boon not only for enterprises but also for fraudsters seeking new ways of profiteering in 2024. Though they’re often behind the curve when it comes to new technologies, expect cybercriminals — swayed by the potential of lucrative pay — to incorporate AI-generated lures as part of their upgraded social engineering attacks. Notably, despite the shutdown of malicious large language model (LLM) tool WormGPT, similar tools could still emerge from the dark web. In the interim, cybercriminals will also continue to find other ways to circumvent the limitations of legitimate AI tools available online. In addition to their use of digital impostors that combine various AI-powered tools in emerging threats like virtual kidnapping, we predict that malicious actors will resort specifically to voice cloning in more targeted attacks.

The rising tide of data poisoning will be a scourge on ML models under training

Integrating machine-learning (ML) models into their operations promises to be a real game changer for businesses that are banking on the potential of these models to supercharge innovation and productivity. As we step into 2024, attempts to corrupt the training data of these models will start gaining ground. Threat actors will likely carry out these attacks by taking advantage of a model’s data-collection phase or by compromising its data storage or data pipeline infrastructure. Specialized models using focused datasets will also be more vulnerable to data poisoning than LLMs and generative AI models trained on extensive datasets, which will prompt security practitioners to pay closer attention to the risks associated with tapping into external resources for ML training data.

Attackers will take aim at software supply chains through their CI/CD pipelines

Software supply chains will have a target on their back in 2024, as cybercriminals will aim to infiltrate them through their continuous integration and delivery (CI/CD) systems. For example, despite their use in expediting software development, components and code sourced from third-party libraries and containers are not without security risks, such as lacking thorough security audits, containing malicious or outdated components, or harboring overlooked vulnerabilities that could open the door to code-injection attacks. The call for developers to be wary of anything sourced from third parties will therefore remain relevant next year. Similarly, to safeguard the resilience of critical software development pipelines and weed out bugs in the coming year, DevOps practitioners should exercise caution and conduct routine scans of any external code they plan to use.

New extortion schemes and criminal gangs will be built around the blockchain

Whereas public blockchains are hardened by continuous cyberattacks, the same can’t be said of their permissioned counterparts because of the latter’s centralized nature. This lack of hard-won resilience will drive malicious actors to develop new extortion business models specific to private blockchains next year. In such extortion operations, criminals could use stolen keys to insert malicious data or modify existing records on the blockchain and then demand a payoff to stay mum on the attack. Threat actors can also strong-arm their victims into paying the ransom by wresting control of enough nodes to encrypt an entire private blockchain. As for criminal groups, we predict that 2024 will see the debut of the first criminal organizations running entirely on blockchains with smart contracts or decentralized autonomous organizations (DAOs).

Countering future cyberthreats

Truly transformative technologies inevitably cross the threshold into standard business operations. But as they make that transition from novel to industry norm, newly adopted tools and solutions require additional layers of protection if they are to contribute to an enterprise’s expansion. So long as their security stance is anchored on preparedness and due diligence, organizations stand to reap the benefits from a growing IT stack without exposing themselves to unnecessary risks. To learn more about the key security considerations and challenges that lie ahead for organizations and end users, read our report, “Critical Scalability: Trend Micro Security Predictions for 2024.”

Source :
https://www.trendmicro.com/it_it/research/23/l/forward-momentum–key-learnings-from-trend-micro-s-security-pred.html

Stopping bot traffic: A guide for WordPress websites

DECEMBER 18, 2023 BY PAUL G.

When you picture your website visitors, you most likely picture a person sitting at a desk, or perhaps scrolling on their phone. However, not all your site’s visitors are flesh and bone; many are in fact bots, running automated tasks. 

Although some of these bots are legitimate, others can put your site at risk, so it’s important to take appropriate security measures. This article will take you through the ways bots interact with your site, give you some insights on the risks of leaving bad bots unchecked, and take you through how Shield Security PRO can help protect your site. 

What are WordPress bots?

Before we dive into how to protect your WordPress site from bad bots, let’s take a step back and talk about bots in general. Put simply, a bot is software that runs an automated task. 

Many of the bots that visit your website are perfectly fine – and, indeed, there are many good bots that you want to visit your site. For example, search engine crawlers automatically evaluate the value of your site’s content to determine its rank in search results.

However, there are also bots out there designed with nefarious purposes in mind. In the next section, we will look at good vs. bad bots in more detail so you know which ones you need to look out for. 

It’s worth remembering: One of the key challenges in cybersecurity is giving both good bots and human users a positive experience on your site, without enabling malicious bots to wreak havoc and compromise your security.

Good bots vs. bad bots

You may be surprised to learn that there are several kinds of good bots out there that should be perfectly welcome on your website. We mentioned search engine crawlers earlier, but they’re just one form of friendly bot that could visit your site. Others include:

  • Uptime monitoring bots: These collect performance data so you can see how well your site is doing 
  • SEO tracking bots: Many sites looking to improve their search engine ranks use analytics software to evaluate results. Tracking bots collect the data reflected in your key performance indicators.
  • Translation bots: These assist with language translation by automatically translating content to another language, helping viewers understand what your web pages are about.
  • AI Bots: AI companies use site crawlers to train their AI systems, particularly in terms of language learning. 

Some types of bad bots include: 

  • Comment spam bots: These are bots that automatically leave irrelevant comments on your site, often advertising another product or service, and generating links to that site. 
  • Brute force bots: Some cybercriminals use bots to perform brute force attacks in order to guess login credentials and gain access to restricted information. 
  • Probing bots: These are bots that simply probe your site for vulnerabilities – you can think of them as casing the joint. If they find any, they make a note so attackers can come back and exploit those vulnerabilities later. 

All of these can sap your resources and make you more vulnerable to major cyber security threats. The right cyber security approach will allow good bots to do their thing without leaving the door open to the baddies. 

Real-world Examples: How bad bots put your website at risk

Left unchecked, bad bots can damage your business in both the short and long term. They can drain your resources and increase your vulnerability to hacking attempts. Bots may flood your contact forms and comment sections with spam, which clutters your site and damages your credibility.

One example of what can happen when bots run wild on a site is the Dunkin Donuts attack in 2015. Hackers used a technique called “credential stuffing” to gain access to customer accounts and steal money from them: bots took compromised passwords obtained from previous breaches and used them to log in to customer accounts, stealing data and card details.


According to a lawsuit filed against Dunkin, the coffee shop’s parent company failed to address the attacks despite warnings from developers. While the company never denied or accepted responsibility for the hacks, it agreed to a $650,000 settlement.

This illustrates that the stakes can get very high, especially when you’re handling sensitive information. Blocking bad bots from your website protects your business, your customers and your reputation, by restricting access to your site and data.

Bots are a drain on your site’s resources

Even if bots don’t put you in direct financial harm, they will still consume your site’s resources. 

An example of this is the case of Geeks2you, whose servers were targeted by bots attempting to gain access. Monitoring software discovered more than 8,000 failed login attempts, and at least another 5,000 each hour after the attack was discovered.

While it was extremely hard for the attackers to actually get into the server (thanks to the company’s excellent password policy), with at least two hacking attempts every second, the attack ate into resources and rapidly degraded the site’s responsiveness to legitimate visitors.

This demonstrates the harmful impact bots can have, even just for failed attempts to hack a site. Users can be robbed of a pleasant experience, sites can load slowly, images may not look right, and on-page features may fail. This can damage your reputation and cause you to lose valuable traffic. 

Bottom line: At a minimum, bad bots hog your resources and drag down your site’s performance. 

Your Solution: The AntiBot Detection Engine

When it comes to stopping bot traffic, you need to find a technological solution that can filter out the bad and leave you with the good. This is where Shield Security PRO comes in. 

The AntiBot Detection Engine, or ADE, works to distinguish between good bots, bad bots, and human users based on the behaviour of each visitor on the site. It can also distinguish fake web crawlers from true web crawlers. 

The way the technology does this is with “bot signals” it watches for when visitors interact with the site. (We’ll take a closer look at how the ADE does this in the next section.) 

Shield Security PRO displays bot signals logged and bot likelihood for blocked IP addresses.

When a user crosses the threshold of acceptable suspicious activity, Shield Security PRO automatically blocks their IP address and stops them from being able to access your site.

Spotting bot behaviour: login attempts 

One example of bad bot behaviour the ADE is designed to spot is excessive login attempts. Shield Security PRO can detect and capture login bots that can slow down your site and cause harm going forward. It does this by penalising visitors who use a valid username with the wrong password, as well as those who try to log in without a username or with a username that doesn’t exist.

Legitimate users might get their username and password wrong once in a while, but their behaviour is still going to be easy to distinguish from bots, especially when you look at their actions across the site as a whole. 

“Bots are just computer programs,” said Paul Goodchild, creator of Shield Security PRO, “They perform a limited number of tasks, such as login attempts, comment SPAM, and probing to trigger 404 errors.

“When you look at all these actions collectively,” Goodchild continued, “it looks nothing like normal human activity. The ADE acts as a ‘bot watcher’, looking at all requests collectively to sort the bots from the people.”

All-sides defence with Shield Security PRO 

ADE and bad bot blocking are core features of Shield Security PRO, but they’re also just a couple of the plugin’s features designed to keep your site safe and secure. For example, the security plugin has a comprehensive dashboard that allows you to see the current state of your website at a glance.  

Screenshot of Shield Security PRO’s security dashboard.

Other functionalities that help Shield Security PRO protect your site include:

  • DoS protection with traffic rate limiting: This essentially limits the rate at which traffic can access a network or web service, stopping it from being overwhelmed. DoS attacks aim to overwhelm a system’s resources, ultimately slowing or shutting down the site. 
  • Malware detection and vulnerability scanning: These are essential to your website’s safety, and identify and mitigate potential threats to your system. Our technology offers real-time protection and firewalls, scans for patterns or signs of existing malware, and identifies flaws and weak points in your defence.
  • Login protection for WooCommerce and other WordPress plugins: Shield Security PRO allows you to set up strong password requirements and two-factor authentication, keeping site access secure. You can also set customizable login attempt limits to further protect your site from malicious access attempts. 

Cybersecurity is most effective when you tackle it from all sides. The Shield Security PRO plugin kicks bad bots and suspicious visitors off your site and helps you detect any threats that do manage to sneak through. 

Banish bots from your site with Shield Security PRO 

If you let bad bots have unlimited access to your site, you’re taking a serious risk. Bad bots can increase your chances of hacking and data loss, as well as hog server resources and slow your site down. Both of these can damage your reputation as well as your bottom line.

Site owners can take action and protect their websites with a bad-bot blocking plugin like Shield Security PRO. The ADE efficiently identifies bad bots and blocks their IP addresses so they can’t bring their nefarious plans to fruition.

Don’t delay, get started with Shield Security PRO and kick bad bots off your site today for instant peace of mind.


Source :
https://getshieldsecurity.com/blog/stopping-bot-traffic-guide-wordpress-websites/

Configuring DFSR to a Static Port – The rest of the story

By Ned Pyle
Published Apr 04 2019 02:39 PM

First published on TechNet on Jul 16, 2009
Ned-san here again. Customers frequently call us about configuring their servers to listen over specific network ports. This is usually to satisfy firewall rules – more on this later. A port in TCP/IP is simply an endpoint to communication between computers. Some are reserved, some are well-known, and the rest are simply available to any application to use. Today I will explain the network communication done through all facets of DFSR operation and administration. Even if you don’t care about firewalls and ports, this should shed some light on DFSR networking in general, and may save you skull sweat someday.

DFSR and RPC

Plenty of Windows components support hard-coding to exclusive ports, and at a glance, DFSR is no exception. By running the DFSRDIAG STATICRPC command against the DFSR servers you force them to listen on whatever port you like for file replication:

[Screenshot 1]
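
For reference, the command syntax looks something like this (using the port 55555 from this post, with one of the lab member names below standing in for yours):

DFSRDIAG StaticRPC /Port:55555 /Member:Server-02

Run it against each replication member you want pinned to that port; setting /Port:0 reverts the member to dynamic ports.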

Many Windows RPC applications use the Endpoint Mapper (EPM) component for these types of client-server operations. It’s not a requirement though; an RPC application is free to declare its own port and only listen on that one, with a client that is hard-coded to contact that port only. Applications that use EPM are handed a port from the dynamic range, which is 1025-5000 in Windows Server 2003 and older, and 49152-65535 in Vista and later. DFSR uses EPM.
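
As a side note not in the original post: on Vista/Win2008 and later you can verify the dynamic range in use on a given machine with netsh:

netsh int ipv4 show dynamicport tcp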

Update 3/3/2011 (nice catch Walter)

As you have probably found, we later noticed a bug in DFSR on Win2008 and Win2008 R2 DCs (only – not member servers) where the service would always send-receive on port 5722. This article was done before that and doesn’t reflect it. Read more on this here:

http://support.microsoft.com/default.aspx?scid=kb;EN-US;832017

http://blogs.technet.com/b/askds/archive/2010/05/14/friday-mail-sack-it-s-about-to-get-re…
All of the below is accurate for non-DCs

By setting the port, you are telling EPM to always respond with the same port instead of one within the dynamic range. So when DFSR contacted the other server, it would only need to use two ports:

[Screenshot 2]

So with a Netmon 3.3 capture, it will look something like this when the DFSR service starts up:

1. The local computer opens a dynamic client port and connects to EPM on the remote computer, asking for connectivity to DFSR.

[Screenshot 3]

2. That remote computer responds with a port that the local computer can connect to for DFSR communication. Because I have statically assigned port 55555, the remote computer will always respond with this port.

[Screenshot 4]

3. The local computer then opens a new client port and binds to that RPC port on the remote server, where the DFSR service is actually listening. At this point two DFSR servers can replicate files between each other.

[Screenshot 5]

The Rest of the Story

If it’s that easy, why the blog post? Because there’s much more DFSR than just the RPC replication port. To start, your DFSR servers need to be able to contact DC’s. To do that, they need name resolution. And they will need to use Kerberos. And the management tools will need DRS API connectivity to the DC’s. There will also need to be SMB connectivity to create replicated folders and communicate with the Service Control Manager to manipulate DFSR. And all of the above also need the dynamic client ports available outbound through the firewall to allow that communication. So now that’s:

  • EPM port 135 (inbound on remote DFSR servers and DC’s)
  • DFSR port (inbound on remote DFSR servers)
  • SMB port 445 (inbound on remote DFSR servers)
  • DNS port 53 (inbound on remote DNS servers)
  • LDAP port 389 (inbound on remote DC’s)
  • Kerberos port 88 (inbound on remote DC’s)
  • Ports 1025-5000 or 49152-65535 (outbound, Win2003 and Win2008 respectively – and inbound on remote DC’s).
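
To give a concrete flavor of the firewall side, here is a minimal sketch (ours, not from the original post) that opens the static DFSR port inbound using the NetSecurity PowerShell module; on the Windows Server 2003/2008-era systems discussed here, you would use netsh or the firewall MMC instead:

# Minimal sketch: allow the static DFSR port (55555 in this post) inbound.
# Requires the NetSecurity module (Windows Server 2012 / Windows 8 and later).
New-NetFirewallRule -DisplayName 'DFSR static RPC 55555' `
    -Direction Inbound -Protocol TCP -LocalPort 55555 -Action Allow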

Let’s see this in action. Here I gathered a Netmon 3.3 capture of configuring a new replication group:

  • Server-01 – IP 10.10.0.101 – DC/DNS
  • Server-02 – IP 10.10.0.102 – DFSR
  • Server-03 – IP 10.10.0.103 – DFSR
  • Server-04 – IP 10.10.0.104 – Computer running the DFSMGMT.MSC snap-in

1. First the snap-in gets name resolution for the DC from my management computer (local port 51562 to remote port 53):

[Screenshot 6]

2. Then it contacts the DC – the EPM is bound (local port 49199 to remote port 135) and a dynamic port is negotiated so that the client knows which port on which to talk to the DC (port 49156).

[Screenshot 7]

3. Having connected to the DC through RPC to DRS (a management API), it then returns information about the domain and other things needed by the snap-in.

[Screenshot 8]

4. The snap-in then performs an LDAP query to the DC to locate the DFSR-GlobalSettings container in that domain so that it can read in any new Replication Groups (local port 49201 to remote port 389).

[Screenshot 9]

5. The snap-in performs LDAP and DNS queries to get the names of the computers being selected for replication:

[Screenshot 10]

6. The DFSR service must be verified (is it installed? Is it running?) This requires a Kerberos CIFS (SMB) request to the DC as well as an SMB connection to the DFSR servers – this is actually a ‘named pipe’ operation over remote port 445, where RPC uses SMB as a transport:

[Screenshots 11-13]

7. The Replicated Folders are created (or verified to exist) on the DFSR servers – I called mine ‘testrf’. This uses SMB again from the snap-in computer to the DFSR server, over remote port 445:

[Screenshot 14]

8. The snap-in will write all the configuration data through LDAP over remote port 389 against the DC. This creates all the AD objects and attributes, creates the topology, writes to each DFSR computer object, etc. There are quite a few frames here so I will just highlight a bit of it:

[Screenshot 15]

9. If you wait for AD replication to complete and the DFSR servers to poll for changes, you will see the DFSR servers request configuration info through LDAP, and then start working normally on their static RPC port 55555 – just like I showed at the beginning of this post above.

DCOM and WMI

All of the things I’ve discussed are guaranteed needs in order to use DFSR. For the most part you don’t have to have too many remote ports open on the DFSR server itself. However, if you want to use tools like DFSRDIAG.EXE and WMIC.EXE remotely against a DFSR server, or have a remote DFSR server generate ‘Diagnostic Health Reports’, there is more to do.

DFSR utilizes Windows Management Instrumentation as its ‘quasi-API’. When tools like DFS Management are run to generate health reports, or DFSRDIAG POLLAD is targeted against a remote server, you are actually using DCOM and WMI to tell the targeted server to perform actions on your behalf.

There is no mechanism to control which RPC port DCOM/WMI will listen on, as there is for DFSR and other services. At service startup, DCOM/WMI will pick the next available dynamic RPC port. This means, in theory, that you would have to open the entire range of dynamic ports for the target OS: 1025-5000 (Win2003) or 49152-65535 (Win2008).

For example, here I am running DFSRDIAG POLLAD /MEM:2008-02 to force that server to poll its DC for configuration changes. Note the listening port that I am talking to on the DFSR server (hint – it’s not 55555):

[Screenshots 16-18]

And in my final example, here I am running the DFS Management snap-in and requesting a diagnostic health report. Note again how we use DCOM/WMI/RPC and do not connect directly to the DFSR service; again this requires that we have all those inbound dynamic ports open on the DFSR server:

[Screenshot 19]

Wrap Up

So is it worth it to try to use a static replication port? Maybe. If you don’t plan on directly administering a DFSR server and just need it talking to its DC, its DNS server, and its replication partners, you can definitely keep the number of ports used quite low. But if you ever want to communicate directly with it as an administrator, you will need quite a few holes punched through your firewall.

That is, unless you are using IPSEC tunnels through your Firewalls like we recommend. 🙂

– Ned ‘Honto’ Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/configuring-dfsr-to-a-static-port-the-rest-of-the-story/ba-p/396746

What Is DFS Replication and How to Configure It?

Updated: May 23, 2023
By: NAKIVO Team

File shares are used in organizations to allow users to access and exchange files. If the number of file shares is large, it may be difficult to manage them because mapping many shared resources to each user’s computer takes time and effort. If the configuration of one file share changes, you need to update shared drive mappings for all users using this share. In this case, DFS can help you optimize the hierarchy of shared folders to streamline administration and the use of shared resources.

This blog post explains DFS configuration and how to set up DFS replication in Windows Server 2019.


What Is DFS and How It Works

A Distributed File System (DFS) is a logical organization that transparently groups existing file shares on multiple servers into a structured hierarchy. This hierarchy can be accessed using a single share on a DFS server.
A DFS file share can be replicated across multiple file servers in different locations to optimize server load and increase access speed to shared files. In this case, a user can access a file share on a server that is closest to them. DFS is intended to simplify access to shared files.

Using a DFS namespace server

DFS uses the Server Message Block (SMB) protocol, which is also known as the Common Internet File System (CIFS). Microsoft’s implementation of DFS doesn’t work with other file sharing protocols like NFS or HDFS. However, you can connect multiple SMB shares configured on NAS devices and Linux machines using Samba to your DFS server running on Windows Server. DFS consists of server and client components.

You can configure one DFS share that includes multiple file shares and connect users to this single file share using a unified namespace. When users connect to this file share using a single path, they see a tree structure of shared folders (as they are subfolders of the main share) and can access all needed file shares transparently. Underlying physical file servers hosting file shares are abstracted from the namespace used to access shares. DFS namespaces and DFS replication are the two main components used for DFS functioning.

What is a DFS namespace?

A DFS namespace is a virtual folder that contains links to shared folders stored on different file servers. DFS namespaces can be organized in different ways depending on business needs. They can be organized by geographical location, organization units, a combination of multiple parameters, etc. You can configure multiple namespaces on a DFS server. A DFS namespace can be standalone or domain-based.

DFS namespace and folder targets
  • A standalone DFS namespace stores configuration information and metadata locally on the root server in the system registry. The path used to access the root namespace starts with the root server name. A standalone DFS namespace resides on only one server and is not fault-tolerant: if the root server is unavailable, the entire DFS namespace is unavailable. You can use this option if you don’t have an Active Directory domain configured (when using a Workgroup).
  • A domain-based DFS namespace stores its configuration in Active Directory. The path used to access the root namespace starts with the domain name. You can host a domain-based DFS namespace on multiple servers to increase namespace availability, which provides fault tolerance and load balancing across servers. Using domain-based DFS namespaces is recommended.

A namespace consists of the root, links (folders), and folder targets.

  • The namespace root is the starting point of a DFS namespace tree. Depending on the type, a namespace path can look like this:

\\ServerName\RootName (a standalone namespace)

\\DomainName\RootName (a domain-based namespace)

  • A namespace server is a physical server (or a VM) that hosts a DFS namespace. A namespace server can be a regular server with the DFS role installed or a domain controller.
  • A folder is a link in a DFS namespace that points to a target folder containing content for user access. There are also folders without targets, used for organizing the structure.
  • A folder target is a link to a shared file resource located on a particular file server and available via a UNC path (Universal Naming Convention). A folder target is associated with a folder in a DFS namespace, for example, \\FS2\TestShare on the FS2 server. A folder target is what users access to reach files.

One folder target can be a link to a single folder or multiple folders (if these folders are located on two different servers and are synchronized/replicated with each other). For example, a user needs to access \\DFS-server01\TestShare\Doc but depending on the user’s location, the user is redirected to a shared folder \\FS01\Doc or \\FS02\Doc.

The DFS tree structure includes the following components:

  • DFS root, which is a DFS server on which the DFS service is running
  • DFS links, which are links pointing to network shares used in DFS
  • DFS targets, which are real network shares to which DFS links point

What is DFS replication?

DFS replication is a feature used to duplicate existing data by replicating copies of that data to multiple locations. Physical file shares can be synchronized with each other at two or more locations.

An important feature of DFS replication is that the replication of a file starts only after that file has been closed. For this reason, DFS replication is not suitable for replicating databases, given that databases have files opened during the operation of a database management system. DFS replication supports multi-master replication technology, and any member of a replication group can change data that is then replicated.

A DFS replication group is a group of servers participating in the replication of one or multiple replicated folders. A replicated folder is synchronized between all members of the replication group.

DFS replication group

DFS replication uses a special Remote Differential Compression algorithm that allows DFS to detect changes and copy only changed blocks of files instead of copying all data. This approach allows you to save time and reduce replication traffic over the network.

DFS replication is performed asynchronously. There can be a delay between writing changes to the source location and replicating those changes to the target location.

DFS Replication topologies

There are two main DFS replication topologies:

  • Hub and spoke. This topology requires at least three replication members: one which acts as a hub and two others act as spokes. This technique is useful if you have a central source originating data (hub) and you need to replicate this data to multiple locations (spokes).
  • Full mesh. Each member of a replication group replicates data to every other group member. Use this technique if you have ten members or fewer in a replication group.

What are the requirements for DFS?

The main requirement is Windows Server 2008 Datacenter or Enterprise edition, Windows Server 2012, or a newer Windows Server version; nowadays it is better to use Windows Server 2016 or Windows Server 2019.

NTFS must be the file system used to store shared files on Windows Server hosts.

If you use domain-based namespaces, all servers of a DFS replication group must belong to one Active Directory forest.

How to Set Up DFS in Your Windows Environment

You need to prepare at least two servers. In this example, we use two machines running Windows Server 2019, one of which is an Active Directory domain controller:

  • Server01-dc.domain1.local is a domain controller.
  • Server02.domain1.local is a domain member.

This is because configuring DFS in a domain environment has advantages compared to Workgroup, as explained above. The domain name is domain1.local in our case. If you use a domain, don’t forget to configure Active Directory backup.

Enable the DFS roles

First of all, you need to enable the DFS roles in Windows Server 2019.

  1. Open Server Manager.
  2. Click Add Roles and Features in Server Manager.
  3. Select Role-based or feature-based installation in the Installation type screen of the Add Roles and Features wizard.
  4. In the Server Selection screen, make sure your current server (which is a domain controller in our case) is selected. Click Next at each step of the wizard to continue.
  5. Select server roles. Select DFS Namespaces and DFS Replication, as explained in the screenshot below.
Setting up DFS in Windows Server 2019 – installing DFS roles
  6. In the Features screen, you can leave settings as is.
  7. Check your configuration in the confirmation screen and if everything is correct, click Install.
  8. Wait for a while until the installation process is finished and then close the window.

DFS Namespace Setup

Create at least one shared folder on any server that is a domain member. In this example, we create a shared folder on our domain controller. The folder name is shared01 (D:\DATA\shared01).

Creating a shared folder

  1. Right-click a folder and, in the context menu, hit Properties.
  2. On the Sharing tab of the folder properties window, click Share.
  3. Share the folder with Domain users and set permissions. We use Read/Write permissions in this example.
  4. Click Share to finish. Then you can close the network sharing options window.
Sharing a folder in Windows Server 2019 to set up DFS

Now the share is available at this address:

\\server01-dc\shared01

Creating a DFS namespace

Let’s create a DFS namespace to link shared folders in a namespace.

  • Press Win+R and run dfsmgmt.msc to open the DFS Management window. You can also run this command in the Windows command line (CMD).

As an alternative, you can click Start > Windows Administrative Tools > DFS Management.

  • In the DFS Management section, click New Namespace.
How to configure DFS namespaces
  • The New Namespace Wizard opens in a new window.
  1. Namespace Server. Enter a server name. If you are not sure that the name is correct, click Browse, enter a server name and click Check Names. In this example, we enter the name of our domain controller (server01-dc). Click Next at each step of the wizard to continue.
Adding a DFS namespace server
  2. Namespace Name and Settings. Enter a name for a namespace, for example, DFS-01. Click Edit Settings.
Entering a name for a DFS namespace

Pay attention to the local path of a shared folder. Change this path if needed. We use the default path in our example (C:\DFSRoots\DFS-01).

  3. You need to configure access permissions for network users. Click Use custom permissions and hit Customize.
Configuring access permissions for a shared folder on a DFS namespace server
  4. We grant all permissions for domain users (Full Control). Click Add, select Domain Users, select the appropriate checkboxes, and hit OK to save settings.
Configuring permissions for a shared folder
  5. Namespace type. Select the type of namespace to create. We select Domain-based namespace and select the Enable Windows Server 2008 mode checkbox. Select this checkbox for better compatibility if the functional level of your domain is Windows Server 2008 while you are running Windows Server 2016 or Windows Server 2019.

It is recommended that you use a Domain-based namespace due to advantages such as high DFS namespace availability by using multiple namespace servers and transferring namespaces to other servers.

Selecting a domain-based namespace for DFS configuration
  6. Review Settings. Review settings and, if everything is correct, click Create.
Reviewing configuration to finish DFS namespace setup
  7. Confirmation. The window view in case of success is displayed in the screenshot below. The namespace creation has finished. Click Close.
A DFS namespace has been created

Adding a new folder to a namespace

Now we need to add a new folder into the existing namespace. We are adding a folder on the same server, which is a domain controller, but this method is applicable for all servers within a domain.

  1. Open the DFS management window by running dfsmgmt.msc as we did before. Perform the following actions in the DFS management window.
  2. In the left pane, expand a namespace tree and select a namespace (\\domain1.local\DFS-01\ in our case).
  3. In the right pane (the Actions pane), click New Folder.
  4. In the New Folder window, enter a folder name, for example, Test-Folder to link the DFS folder and a shared folder created before. Click Add.
Adding a new folder into a DFS namespace
  5. Enter the path to the existing folder. We use \\server01-dc\shared01 in this example. You can click Browse and select a folder. Click OK to save the path to the folder target.
Adding a folder target

The folder target has been added.

  6. Click OK to save settings and close the New Folder window.
A folder target has been added

Now you can access the shared folder by entering the network address in the address bar of Windows Explorer:

\\server01-dc\dfs-01\Test-Folder

You should enter a path in the format:

\\DomainName\DFS-NameSpace\

Accessing a shared folder in Windows Explorer
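
If you prefer scripting over the snap-in, the DFSN PowerShell module (available on Windows Server 2012 and later) can create the same namespace and folder. This is a sketch using the names from this walkthrough, not a step from the original GUI procedure:

# Sketch: create the domain-based namespace and a folder with a target.
# Assumes the DFS Namespaces role and the DFSN module are installed.
New-DfsnRoot -Path '\\domain1.local\DFS-01' -TargetPath '\\server01-dc\DFS-01' -Type DomainV2
New-DfsnFolder -Path '\\domain1.local\DFS-01\Test-Folder' -TargetPath '\\server01-dc\shared01'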

How to Configure DFS Replication

We need to configure the second server to replicate data. The name of the second server is Server02 and this server is added to the domain1.local domain in this example. Add your second server to a domain if you have not done this operation before.
Install the DFS roles, as we did for the first server. As an alternative method, you can use PowerShell instead of the Add Roles wizard. Run these two commands in PowerShell to install DFS replication and DFS namespace roles.

Install-WindowsFeature -Name "FS-DFS-Replication" -IncludeManagementTools

Install-WindowsFeature -Name "FS-DFS-Namespace" -IncludeManagementTools

These commands install the DFS Replication and DFS Namespaces roles on the second server.

How to set up DFS roles in PowerShell

Create a folder for replicated data, for example, D:\Replication

We are going to use this folder to replicate data from the first folder created on the first server before.

Share this folder (D:\Replication) on the second server and configure access permissions the same way as for the previous shared folder. In this example, we share the folder with Domain Users and grant Read/Write permissions.

Sharing a folder on the second server

The network path is \\server02\replication in this example after sharing this folder. To check the network path to the folder, you can right-click the folder name and open the Sharing tab.

Let’s go back to the domain controller (server01-dc) and open the DFS Management window.

In the left pane of the DFS Management window, expand the tree and select the DFS folder created before (Test-Folder in this case).

Click Add Folder Target in the Actions pane located in the top right corner of the window.

The New Folder Target window appears. Enter the network path of the folder that was created on the second server before:

\\Server02\Replication

Click OK to save settings and close the window.

Adding a new folder target to configure Windows DFS replication

A notification message is displayed:

A replication group can be used to keep these folder targets synchronized. Do you want to create a replication group?

Click Yes.

A notification message is displayed when creating a DFS replication group

Wait until the configuration process is finished.

As a result, you should see the Replicate Folder Wizard window. Perform the next steps in the wizard window.

Check the replication group name and replicated folder name. Click Next to continue.

Entering a replication group name and replication folder name

Check folder paths in the Replication Eligibility screen.

Checking paths of shared folders

Select the primary member from the drop-down list. In this example, the primary member is Server01-dc. Data from the primary member is replicated to other folders that are a part of the DFS namespace.

Selecting a primary member when configuring DFS replication

Select the topology of connections for replication.

Full mesh is the recommended option when using a DFS replication group with less than ten servers. We use Full mesh to replicate changes made on one server to other servers.

The No Topology option can be used if you want to create a custom topology after finishing the wizard.

The Hub and spoke option is inactive (grayed out) because we are using fewer than three servers.

Selecting a full mesh topology to configure DFS replication

Configure replication group schedule and bandwidth. There are two options:

  • Replicate continuously using the specified bandwidth. Replication is performed as soon as possible. You can allocate bandwidth. Continuous replication of data that changes extensively can consume a lot of network bandwidth. To avoid a negative impact on other processes using the network, you can limit bandwidth for DFS replication. Keep in mind that hard disk load can be high.
  • Replicate during the specified days and times. You can configure the schedule to perform DFS replication at the custom date and time. You can use this option if you don’t need to always have the last version of replicated data in target folders.

We select the first option in our example.

Setting up DFS replication group schedule

Review settings for your DFS replication group. If everything is correct, click Create.

Reviewing settings for a DFS replication group before finishing configuration

View the DFS replication configuration status on the Confirmation screen. You should see the Success status for all tasks as displayed on the screenshot below. Click Close to close the wizard window.

A DFS replication group has been created successfully

A notification message about the replication delay is displayed. Read the message and hit OK.

A notification message about DFS replication delay
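For reference, the whole wizard sequence can also be scripted with the DFSR PowerShell module. The sketch below mirrors this example: the group name follows the wizard’s default naming, and the content path on the first server (D:\Shared) is a hypothetical placeholder, because that path depends on how the first shared folder was created. By default, a new replication group replicates continuously at full bandwidth, which matches the schedule option selected above.

# Create the group, the replicated folder, and add both members.
New-DfsReplicationGroup -GroupName "domain1.local\dfs-01\Test-Folder" |
  New-DfsReplicatedFolder -FolderName "Test-Folder" |
  Add-DfsrMember -ComputerName "Server01-dc","Server02"

# Add-DfsrConnection creates connections in both directions by default,
# which is a full mesh for two members.
Add-DfsrConnection -GroupName "domain1.local\dfs-01\Test-Folder" -SourceComputerName "Server01-dc" -DestinationComputerName "Server02"

# Point each member at its local folder; Server01-dc is the primary member.
# D:\Shared is a hypothetical path for the folder shared on the first server.
Set-DfsrMembership -GroupName "domain1.local\dfs-01\Test-Folder" -FolderName "Test-Folder" -ComputerName "Server01-dc" -ContentPath "D:\Shared" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "domain1.local\dfs-01\Test-Folder" -FolderName "Test-Folder" -ComputerName "Server02" -ContentPath "D:\Replication" -Force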

DFS replication has been configured. Open the shared folder from which data must be replicated initially, write a file to that network folder, and check whether the new data is replicated to the second folder on the other server. Don’t forget that open files are not replicated until they are closed and their changes are saved to disk. In a few moments, you should see a replica of the file in the target folder.
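To verify replication from the command line, you can check the replication backlog between the two members; an empty backlog means that all changes have been delivered. A quick sketch using this example’s names:

# List files waiting to be replicated from Server01-dc to Server02.
Get-DfsrBacklog -GroupName "domain1.local\dfs-01\Test-Folder" -FolderName "Test-Folder" -SourceComputerName "Server01-dc" -DestinationComputerName "Server02"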

Using filters for DFS Replication

Use file filters to select the file types you don’t want to replicate. Some applications create temporary files, and replicating them wastes network bandwidth, increases hard disk load, consumes additional storage space in the target folder, and increases the overall time needed to replicate data. You can exclude the appropriate file types from DFS replication by using filters.

To configure filters, perform the following steps in the DFS Management window:

  1. Expand the Replication tree in the navigation pane and select the needed DFS replication group folder name (domain1.local\dfs-01\Test-folder in our case).
  2. Select the Replicated Folders tab.
  3. Select the needed folder, right-click the folder name and hit Properties. Alternatively, you can select the folder and click Properties in the Actions pane.
  4. Set the filtered file types by using masks in the folder properties window. In this example, files matching the rule are excluded from replication:

~*, *.bak, *.tmp

You can also filter subfolders, for example, exclude Temp subfolders from DFS replication.

Configuring DFS replication filters
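The same filters can be configured with PowerShell: Set-DfsReplicatedFolder accepts lists of file masks and subfolder names to exclude. A sketch using this example’s names:

# Exclude temporary file types and Temp subfolders from replication.
Set-DfsReplicatedFolder -GroupName "domain1.local\dfs-01\Test-Folder" -FolderName "Test-Folder" -FileNameToExclude "~*","*.bak","*.tmp" -DirectoryNameToExclude "Temp"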

Staging location

There can be a conflict when two or more users save changes to a file before those changes are replicated. The most recent change takes precedence, and older versions of the changed file are moved to the Conflict and Deleted folder. This situation typically arises when replication is slow and the amount of changed data is large, so that transferring the changes takes longer than the interval between users’ writes to the file.

Staging folders act as a cache for new and changed files that are ready to be replicated from source folders to target folders; files that exceed a certain size are staged before transfer. Staging works as a queue of files awaiting replication and ensures that a file can be transferred consistently even if it is modified during the transfer process.

Another reason to tune staging folders is performance optimization. If the staging quota is too small for your tasks, DFS replication consumes additional CPU and disk resources, slows down, and can even stop. The recommended staging quota is at least the total size of the 32 largest files in the replicated folder.
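A quick way to estimate that recommended quota is to sum the sizes of the 32 largest files in the replicated folder. A PowerShell sketch (the path is this example’s folder on the second server):

# Find the 32 largest files in the replicated folder.
$largest = Get-ChildItem -Path "D:\Replication" -Recurse -File |
  Sort-Object -Property Length -Descending |
  Select-Object -First 32

# Their total size in MB is a reasonable minimum staging quota.
[math]::Round((($largest | Measure-Object -Property Length -Sum).Sum) / 1MB)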

You can edit staging folder properties for DFS Replication in the DFS Management window:

  1. Select a replication group in the left pane of the DFS Management window.
  2. Select the Memberships tab.
  3. Select the needed replication folder, right-click the folder, and hit Properties.
  4. Select the Staging tab in the Properties window.
  5. Edit the staging path and quota according to your needs.
Configuring DFS staging location

Saved changes are not applied immediately: new staging settings must propagate to all DFS servers within the domain. How long this takes depends on Active Directory Domain Services replication latency and the polling interval of the servers (5 minutes or more). A server reboot is not required.
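Staging settings can also be changed per member with PowerShell, and Update-DfsrConfigurationFromAD makes the members poll Active Directory immediately instead of waiting for the next cycle. A sketch (the 8192 MB quota is an arbitrary example value):

# Set a new staging quota for one member; repeat for each member as needed.
Set-DfsrMembership -GroupName "domain1.local\dfs-01\Test-Folder" -FolderName "Test-Folder" -ComputerName "Server02" -StagingPathQuotaInMB 8192 -Force

# Trigger an immediate AD configuration poll on both members.
Update-DfsrConfigurationFromAD -ComputerName "Server01-dc","Server02"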

DFS Replication vs. Backup

Don’t confuse DFS replication of data in shared folders with data backup. DFS replication makes copies of data on different servers, but if unwanted changes are written to a file on one server, those changes are replicated to the other servers. As a result, you have no recovery point: the file has been overwritten with unwanted changes on all servers, and you cannot use any of the copies for recovery in case of failure. This is exactly the threat posed by a ransomware attack.

Use NAKIVO Backup & Replication to protect data stored on your physical Windows Server machines, including data stored in shared folders. The product also supports Hyper-V VM backup and VMware VM backup at the host level for effective protection.


Conclusion

Distributed File System (DFS) can significantly simplify shared resource management for administrators and make accessing shared folders more convenient for end users. DFS provides transparent links to shared folders located on different servers.

DFS namespaces and DFS replication are the two main features that you can configure in the DFS Management window after installing the appropriate Windows Server roles. Opt for configuring DFS in an Active Directory domain environment rather than in a Workgroup, as a domain offers advantages such as high availability and greater flexibility.

Source :
https://www.nakivo.com/blog/configure-dfs-replication-for-windows-server/

Manually Clearing the ConflictAndDeleted Folder in DFSR

By Ned Pyle
Published Apr 04 2019 01:30 PM

First published on TechNet on Oct 06, 2008
Ned here again. Today I’m going to talk about a couple of scenarios we run into with the ConflictAndDeleted folder in DFSR. These are real quick and dirty, but they may save you a call to us someday.

Scenario 1: We need to empty out the ConflictAndDeleted folder in a controlled manner as part of regular administration (i.e. we just lowered quota and we want to reclaim that space).

Scenario 2: The ConflictAndDeleted folder quota is not being honored due to an error condition and the folder is filling the drive.

Let’s walk through these now.

Emptying the folder normally

It’s possible to clean up the ConflictAndDeleted folder through the DFSMGMT.MSC and SERVICES.EXE snap-ins, but it’s disruptive and kind of gross (you could lower the quota, wait for AD replication, wait for DFSR polling, and then restart the DFSR service). A much faster and slicker way is to call the WMI method CleanupConflictDirectory from the command-line or a script:

1.  Open a CMD prompt as an administrator on the DFSR server.
2.  Get the GUID of the Replicated Folder you want to clean:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderconfig get replicatedfolderguid,replicatedfoldername

(This is all one line, wrapped)

Example output (screenshot): the command lists the GUID and name of each replicated folder.

3.  Then call the CleanupConflictDirectory method:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='<RF GUID>'" call cleanupconflictdirectory

Example output with a sample GUID:

WMIC.EXE /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='70bebd41-d5ae-4524-b7df-4eadb89e511e'" call cleanupconflictdirectory

Screenshot: example output of the cleanupconflictdirectory call.

4.  At this point the ConflictAndDeleted folder will be empty and the ConflictAndDeletedManifest.xml will be deleted.
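If you prefer PowerShell, the same two WMI calls look like this; the namespace and class names are the ones used by the WMIC commands above, and the GUID is the same sample value:

# Step 2: list replicated folder GUIDs and names.
Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrReplicatedFolderConfig |
  Select-Object ReplicatedFolderGuid, ReplicatedFolderName

# Step 3: call CleanupConflictDirectory on the chosen replicated folder.
$rf = Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrReplicatedFolderInfo -Filter "ReplicatedFolderGuid='70bebd41-d5ae-4524-b7df-4eadb89e511e'"
$rf.CleanupConflictDirectory()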

Emptying the ConflictAndDeleted folder when in an error state

We’ve also seen a few cases where the ConflictAndDeleted quota was not being honored at all. In every single one of those cases, the customer had recently had hardware problems (specifically with their disk system) where files had become corrupt and the disk was unstable – even after repairing the disk (at least to the best of their knowledge), the ConflictAndDeleted folder quota was not being honored by DFSR.

Here’s where quota is set:

Screenshot: the ConflictAndDeleted quota setting in the replicated folder’s properties.

Usually when we see this problem, the ConflictAndDeletedManifest.XML file has grown to hundreds of MB in size. When you try to open the file in an XML parser or in Internet Explorer, you will receive an error like “The XML page cannot be displayed” or a message that there is an error at a specific line. This is because the file is invalid at some section (a damaged element, scrambled data, etc.).

To fix this issue, follow the steps below (a scripted sketch of steps 2-5 follows the list):

  1. Follow steps 1-4 from above. This may clean the folder as well as update DFSR to say that cleaning has occurred. We always want to try doing things the ‘right’ way before we start hacking.
  2. Stop the DFSR service.
  3. Delete the contents of the ConflictAndDeleted folder manually (with explorer.exe or DEL).
  4. Delete the ConflictAndDeletedManifest.xml file.
  5. Start the DFSR service back up.
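Here is a scripted sketch of steps 2-5. The DfsrPrivate folder is hidden and is located under the replicated folder by default, so the path shown here is an assumption; adjust it to your replicated folder’s location:

# Step 2: stop the DFS Replication service.
Stop-Service -Name DFSR

# Steps 3-4: delete the ConflictAndDeleted contents and the manifest.
# Default location: <replicated folder>\DfsrPrivate (hidden).
$dfsrPrivate = "D:\Replication\DfsrPrivate"
Remove-Item -Path "$dfsrPrivate\ConflictAndDeleted\*" -Recurse -Force
Remove-Item -Path "$dfsrPrivate\ConflictAndDeletedManifest.xml" -Force

# Step 5: start the service back up.
Start-Service -Name DFSR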

For a bit more info on conflict and deletion handling in DFSR, take a look at:

Staging folders and Conflict and Deleted folders (TechNet)
DfsrConflictInfo Class (MSDN)

Until next time…

– Ned “Unhealthy love for DFSR” Pyle

Source :
https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/manually-clearing-the-conflictanddeleted-folder-in-dfsr/ba-p/395711
