How WordPress Can Help Scale Your Business

September 12, 2023 by Paul G.

Having a robust online presence is not just an option but a necessity for businesses looking to scale. While there are numerous platforms that offer varying degrees of customization and functionality, WordPress stands out as a versatile Content Management System (CMS) that transcends its initial design as a platform for bloggers. WordPress has evolved into an incredibly powerful tool that can assist in the growth and management of your business, whether you’re a start-up or an established enterprise.

From its SEO capabilities to its eCommerce solutions, WordPress offers a range of features designed to make your business more efficient, reachable, and scalable. This article will delve into the multiple ways WordPress can be your business’s best friend, helping you navigate the complex maze of scaling effectively. So, if you’re contemplating how to take your business to the next level without getting tangled in the complexities of coding or spending a fortune, read on.

Why Choose WordPress for Business

When it comes to setting up and managing an online business, the platform you choose serves as the backbone of your operations. WordPress emerges as a leading choice for several compelling reasons:

1. Open-Source Platform With a Large Community
One of the most appealing aspects of WordPress is that it’s an open-source platform, meaning you have the freedom to customize and modify your website as you see fit. This also means that there is a large community of developers continually contributing to its improvement. You’re never alone when you have a problem; chances are someone else has faced the same issue and found a solution that they’ve shared.

2. High Level of Customization and Scalability
WordPress offers an almost limitless array of customization options. With thousands of themes and plugins available, you can tailor the appearance and functionality of your website to perfectly match your brand and business objectives. This high degree of customization extends to supporting different currencies, enabling you to appeal to an international customer base effortlessly.

3. User-Friendly Interface
WordPress is designed to be used by people who may not have any coding experience. The platform is intuitive, making it easy to add content, make updates, and manage various aspects of your site without needing specialized technical knowledge.

4. SEO Capabilities
Search engine optimization (SEO) is critical for any business aiming for long-term success. WordPress is coded to be SEO-friendly right out of the box. Additionally, there are several SEO plugins available that can help you optimize your content and improve your rankings further.

5. Cost-Effective
Starting a website with WordPress can be incredibly cost-efficient. The platform itself is free, and many high-quality themes and plugins are available at no cost. While there may be some expenses, such as specialized plugins or a premium hosting service, these costs are generally lower than developing a custom website from scratch.

6. eCommerce Ready
For businesses looking to sell products or services online, WordPress seamlessly integrates with eCommerce solutions like WooCommerce. This allows for easy inventory management, payment gateway integration, and functionalities like printing shipping labels directly from your dashboard.

By choosing WordPress as your business platform, you’re not just creating a website—you’re building a scalable, customizable, and efficient business operation that can grow with you. With its blend of user-friendly design, SEO capabilities, and versatile functionality, WordPress proves to be a strong ally in achieving your business goals. Keep reading as we delve into these aspects in greater detail, starting with the platform’s unmatched flexibility and customization options.

Flexibility and Customization

One of the most significant advantages of using WordPress is its unparalleled flexibility and customization options. Whether you’re in the healthcare sector, the food and beverage industry, or running an eCommerce store, WordPress has you covered. With its array of specialized themes and plugins tailored to business needs, you can establish a strong online presence that aligns with your brand and business goals.

Themes

Themes offer the first layer of customization. Designed specifically for various business sectors, they provide built-in functionalities like portfolios, customer testimonials, and eCommerce features. You can establish your visual brand identity effortlessly, without writing a single line of code.

Plugins

Moving on to plugins, they are the true workhorses of WordPress customization. WordPress has tens of thousands of plugins in its directory, and these handy additions can add virtually any functionality you can imagine. Whether you need an appointment booking system, a members-only section, or automated marketing solutions, there’s a plugin for that. Some plugins even allow you to handle multiple currencies, making your website more accommodating for international customers.
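
If you prefer working from the terminal, WP-CLI, the official WordPress command-line tool, can search the plugin directory and install from it directly. A minimal sketch (the search term is illustrative; wordfence is the slug of the Wordfence security plugin):

wp plugin search "appointment booking" --per-page=5 --fields=name,slug
wp plugin install wordfence --activate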

By combining the right themes and plugins, WordPress allows you unparalleled control over how your website looks and functions. This isn’t just advantageous for you as the business owner; it also dramatically enhances the user experience. Your customers can interact with a platform that is both visually appealing and highly functional, meeting their needs no matter where they are in the world or what currency they prefer to use.

Security

Having a secure website is a non-negotiable for businesses. Luckily, WordPress takes security seriously and offers a multitude of features to help you protect your online assets. For starters, the platform releases regular updates to address known security vulnerabilities, ensuring that you are always running the most secure version possible.

Change reporting is another powerful feature provided by WordPress security plugins, allowing you to monitor real-time changes on your site. Any unauthorized modifications can trigger alerts, enabling you to take quick action. Additionally, many plugins offer malware scanning, which continuously scans your site’s files to detect malicious code and potential threats.

Intrusion prevention mechanisms are also commonly found in WordPress security solutions. These tools can block suspicious IP addresses, limit login attempts, and even implement two-factor authentication to add an extra layer of protection to your site.

While no system can guarantee 100% security, WordPress comes close by offering a range of robust features that work together to minimize risks. By taking advantage of these tools, you’re not just protecting your website; you’re safeguarding your business reputation and the trust of your customers. However, it’s crucial to remember that users also bear the responsibility for keeping themes and plugins updated, as outdated software can pose security risks.
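
Because outdated software is one of the most common ways in, much of that upkeep can be scripted. A minimal sketch using WP-CLI, assuming it is installed on your server:

wp core verify-checksums    # confirm core files have not been tampered with
wp core update
wp plugin update --all && wp theme update --all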

SEO Capabilities

Visibility is crucial for any online business, and WordPress shines when it comes to search engine optimization (SEO). The platform is designed with built-in SEO features that allow for custom permalinks, meta descriptions, and image alt text, making it easier for search engines to read and index your site.

Additionally, SEO plugins like Yoast and All in One SEO can further enhance your optimization efforts. These plugins help you target specific keywords and improve content readability. Site speed, an important SEO factor, can be optimized by choosing a quality hosting service like HostDash.
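
Two of the basics mentioned here, clean permalinks and an SEO plugin, take seconds to set up with WP-CLI. A sketch, assuming WP-CLI is available (wordpress-seo is Yoast’s plugin slug):

wp rewrite structure '/%postname%/' --hard
wp plugin install wordpress-seo --activate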

WordPress themes are also generally responsive, adapting to various screen sizes, which is vital for mobile optimization—a significant factor in search rankings. Analytics plugins offer insights into your site’s performance, and local SEO can be easily managed for businesses operating in specific geographic locations.

Whether you want to target a global or local audience, WordPress has the tools and setup to help you achieve your specific SEO goals. By leveraging WordPress’s SEO features, you set the stage for better visibility, increased customer engagement, and, ultimately, business growth.

eCommerce Solutions

In today’s digital age, having an eCommerce capability is often essential for business growth. WordPress makes this transition smooth and simple. Through its seamless integration with WooCommerce and other eCommerce plugins, WordPress allows businesses to set up an online store effortlessly.

WooCommerce Integration

WooCommerce is the go-to eCommerce plugin for WordPress users, enabling a wide range of functionalities, from inventory management to payment gateway integration. The setup is straightforward, allowing even those with minimal technical expertise to launch an online store.
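
As a sketch of how quick that setup can be, the commands below install WooCommerce with WP-CLI and set the store currency (the EUR value is illustrative; WooCommerce keeps its currency in the woocommerce_currency option):

wp plugin install woocommerce --activate
wp option update woocommerce_currency EUR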

Payment and Currency Flexibility

One of the benefits of using WordPress for eCommerce is the range of payment options available. Whether your customers prefer credit card payments, PayPal, or digital wallets, WordPress has you covered. Some plugins even support transactions in multiple currencies, which is ideal for businesses looking to serve an international clientele.

Shipping Solutions

Shipping is a critical component of any eCommerce operation. WordPress simplifies this aspect as well, with options for calculating real-time shipping costs and even printing shipping labels directly from your dashboard.

Content Management

Managing content effectively is at the heart of any successful online business. WordPress makes this task simple and intuitive. Built originally as a blogging platform, WordPress has advanced content management capabilities that extend far beyond just text-based posts. It supports a wide range of media types, including images, videos, and audio files, allowing you to create a rich, multimedia experience for your visitors.

One of the standout features is the built-in editor, which provides a user-friendly interface for creating and formatting your content. This editor allows for real-time previews so you can see how changes will look before they go live. Beyond the visual aspects, WordPress enables easy content organization through categories, tags, and custom taxonomies. You can also schedule posts in advance, freeing you from having to manually update content and allowing you to focus on other aspects of your business.
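
Scheduling works outside the editor too. A hedged WP-CLI example, with an illustrative title and date, that queues a post to publish automatically:

wp post create --post_type=post --post_title='Autumn promotion' --post_status=future --post_date='2023-10-02 09:00:00'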

Even more appealing is how WordPress content management intersects with other functionalities. You can easily link blog posts to specific products in your online store or incorporate SEO best practices directly into your content using plugins. All these features work in tandem to make your site not just a promotional tool, but a comprehensive platform for customer engagement and business growth.

Scalability

As your business grows, you need a platform that can grow with you, and WordPress excels in this aspect. The platform allows you to scale up or down easily based on your business needs. Whether you’re adding new products, launching a subscription service, or expanding into new markets, WordPress remains stable and functional. Furthermore, it’s essential to choose a hosting plan that can adapt as you grow. The right host will offer various server resources and hosting plans that can be modified to meet your increasing requirements, ensuring that scaling up doesn’t become a bottleneck for your business.

Analytics and Reporting

Data is vital in understanding how your business is performing, and WordPress allows for seamless integration with analytics tools like Google Analytics. With just a few clicks, you can have access to a wealth of information ranging from visitor demographics to behavior patterns. WordPress also offers plugins that can help you monitor key performance indicators (KPIs). By keeping an eye on these metrics, you can gain valuable insights into customer behavior, which in turn can inform your business strategies and help you make data-driven decisions.

Conclusion

In sum, WordPress isn’t just a platform for bloggers; it’s a comprehensive tool for businesses of all sizes. Its open-source nature, scalability, and a vast array of customization options make it a compelling choice for entrepreneurs looking to build an online presence without breaking the bank. With robust security measures, SEO capabilities, and integrated eCommerce solutions, WordPress offers a well-rounded package that can adapt to your evolving business needs.

Whether you’re looking to attract a global audience, keep your site secure, or gain valuable insights through analytics, WordPress provides the tools you need to not just survive, but thrive in the competitive digital landscape.

Source:
https://getshieldsecurity.com/blog/how-wordpress-can-help-scale-your-business/

Top 5 Security Misconfigurations Causing Data Breaches in 2023

Edward Kost
updated May 15, 2023

Security misconfigurations are a common and significant cybersecurity issue that can leave businesses vulnerable to data breaches. According to the latest data breach investigation report by IBM and the Ponemon Institute, the average cost of a breach has reached US$4.35 million. Many data breaches are caused by avoidable errors like security misconfiguration. By following the tips in this article, you could identify and address a security error that might otherwise cost you millions of dollars in damages.

Learn how UpGuard can help you detect data breach risks >

What is a Security Misconfiguration?

A security misconfiguration occurs when a system, application, or network device’s settings are not correctly configured, leaving it exposed to potential cyber threats. This could be due to default configurations left unchanged, unnecessary features enabled, or permissions set too broadly. Hackers often exploit these misconfigurations to gain unauthorized access to sensitive data, launch malware attacks, or carry out phishing attacks, among other malicious activities.

What Causes Security Misconfigurations?

Security misconfigurations can result from various factors, including human error, lack of awareness, and insufficient security measures. For instance, employees might configure systems without a thorough understanding of security best practices, or security teams might overlook crucial security updates due to the growing complexity of cloud services and infrastructures.

Additionally, the rapid shift to remote work during the pandemic has increased the attack surface for cybercriminals, making it more challenging for security teams to manage and monitor potential vulnerabilities.

List of Common Types of Security Misconfigurations Facilitating Data Breaches

Some common types of security misconfigurations include:

1. Default Settings

With the rise of cloud solutions such as Amazon Web Services (AWS) and Microsoft Azure, companies increasingly rely on these platforms to store and manage their data. However, using cloud services also introduces new security risks, such as the potential for misconfigured settings or unauthorized access.

A prominent example of insecure default software settings that could have facilitated a significant breach is the Microsoft Power Apps data leak incident of 2021. By default, Power Apps portal data feeds were set to be accessible to the public.

Unless developers specified for OData feeds to be set to private, virtually anyone could access the backend databases of applications built with Power Apps. UpGuard researchers located the exposure and notified Microsoft, who promptly addressed the leak. UpGuard’s detection helped Microsoft avoid a large-scale breach that could have potentially compromised 38 million records.

Read this whitepaper to learn how to prevent data breaches >

2. Unnecessary Features

Enabling features or services not required for a system’s operation can increase its attack surface, making it more vulnerable to threats. Some examples of unnecessary product features include remote administration tools, file-sharing services, and unused network ports. To mitigate data breach risks, organizations should conduct regular reviews of their systems and applications to identify and disable or remove features that are not necessary for their operations.

Additionally, organizations should practice the principle of least functionality, ensuring that systems are deployed with only the minimal set of features and services required for their specific use case.
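
On a Linux host, such a review can start with a few standard commands. A minimal sketch (cups.service is just an example of a service you may not need):

ss -tlnp                                     # list services listening on TCP ports
systemctl list-unit-files --state=enabled    # review everything enabled at boot
systemctl disable --now cups.service         # disable and stop an unneeded service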

3. Insecure Permissions

Overly permissive access controls can allow unauthorized users to access sensitive data or perform malicious actions. To address this issue, organizations should implement the principle of least privilege, granting users the minimum level of access necessary to perform their job functions. This can be achieved through proper role-based access control (RBAC) configurations and regular audits of user privileges. Additionally, organizations should ensure that sensitive data is appropriately encrypted both in transit and at rest, further reducing the risk of unauthorized access.
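
At the filesystem level, overly broad permissions are easy to audit with standard tools. A minimal sketch (paths are illustrative):

find /var/www -type f -perm -o+w    # flag world-writable files
find /var/www -type d -perm -o+w    # flag world-writable directories
chmod o-w /var/www/uploads/export.csv    # revoke world write on an offender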

4. Outdated Software

Failing to apply security patches and updates can expose systems to known vulnerabilities. To protect against data breaches resulting from outdated software, organizations should have a robust patch management program in place. This includes regularly monitoring for available patches and updates, prioritizing their deployment based on the severity of the vulnerabilities being addressed, and verifying the successful installation of these patches.

Additionally, organizations should consider implementing automated patch management solutions and vulnerability scanning tools to streamline the patching process and minimize the risk of human error.
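
On a single Debian or Ubuntu server, the monitoring step of such a program can be as simple as the sketch below; larger fleets would hand this to a dedicated patch management tool:

apt-get update
apt list --upgradable    # review pending patches and their versions
apt-get upgrade -y       # apply them, ideally on a regular schedule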

5. Insecure API Configurations

APIs that are not adequately secured can allow threat actors to access sensitive information or manipulate systems. API misconfigurations, like the one that led to T-Mobile’s 2023 data breach, are becoming more common. As more companies move their services to the cloud, securing these APIs and preventing the data leaks they facilitate is becoming a bigger challenge.

To mitigate the risks associated with insecure API configurations, organizations should implement strong authentication and authorization mechanisms, such as OAuth 2.0 or API keys, to ensure only authorized clients can access their APIs. Additionally, organizations should conduct regular security assessments and penetration testing to identify and remediate potential vulnerabilities in their API configurations.

Finally, adopting a secure software development lifecycle (SSDLC) and employing API security best practices, such as rate limiting and input validation, can help prevent data breaches stemming from insecure APIs.
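
A quick external sanity check of those authentication and rate-limiting controls can be done with curl. A sketch against a hypothetical endpoint, expecting 401 or 403 without credentials, and 429 once rate limiting kicks in:

curl -s -o /dev/null -w '%{http_code}\n' https://api.example.com/v1/customers
for i in $(seq 1 50); do curl -s -o /dev/null -w '%{http_code}\n' https://api.example.com/v1/customers; done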

Learn how UpGuard protects against third-party breaches >

How to Avoid Security Misconfigurations Impacting Your Data Breach Resilience

To protect against security misconfigurations, organizations should:

1. Implement a Comprehensive Security Policy

Implement a cybersecurity policy covering all system and application configuration aspects, including guidelines for setting permissions, enabling features, and updating software.

2. Implement a Cyber Threat Awareness Program

An essential security measure that should accompany the remediation of security misconfigurations is employee threat awareness training. Among respondents who recently suffered cloud security breaches, 55% identified human error as the primary cause.

With your employees equipped to correctly respond to common cybercrime tactics that precede data breaches, such as social engineering attacks and social media phishing attacks, your business could avoid a security incident should threat actors find and exploit an overlooked security misconfiguration.

Phishing attacks involve tricking individuals into revealing sensitive information that could be used to compromise an account or facilitate a data breach. During these attacks, threat actors target account login credentials, credit card numbers, and even phone numbers to exploit multi-factor authentication (MFA).

Learn the common ways MFA can be exploited >

Phishing attacks are becoming increasingly sophisticated, with cybercriminals using automation and other tools to target large numbers of individuals. 

Here’s an example of a phishing campaign where a hacker has built a fake login page to steal a customer’s banking credentials. As you can see, the fake login page looks almost identical to the actual page, and an unsuspecting eye will not notice anything suspicious.

Real Commonwealth Bank login page.
Fake Commonwealth Bank login page.

Because password reuse is a common habit amongst the general population, phishing campaigns could involve fake login pages for social media websites, such as LinkedIn, popular websites like Amazon, and even SaaS products. Hackers implementing such tactics hope the same credentials are used for logging into banking websites.

Cyber threat awareness training is the best defense against phishing, the most common attack vector leading to data breaches and ransomware attacks.

Because small businesses often lack the resources and expertise of larger companies, they usually don’t have the budget for additional security programs like awareness training. This is why, according to a recent report, 61% of small and medium-sized businesses experienced at least one cyber attack in the past year, and 40% experienced eight or more attacks.

Luckily, with the help of ChatGPT, small businesses can implement an internal threat awareness program at a fraction of the cost. Industries at a heightened risk of suffering a data breach, such as healthcare, should especially prioritize awareness of the cyber threat landscape.

Learn how to implement an internal cyber threat awareness campaign >

3. Use Multi-Factor Authentication

Use MFA and strong access management controls to limit unauthorized access to sensitive systems and data.

Previously compromised passwords are often used to hack into accounts. MFA adds additional authentication protocols to the login process, making it difficult to compromise an account even if hackers get their hands on a stolen password.

4. Use Strong Access Management Controls

Identity and Access Management (IAM) systems ensure users only have access to the data and applications they need to do their jobs and that permissions are revoked when an employee leaves the company or changes roles.

The 2023 Thales Data Threat Report found that 28% of respondents considered IAM the most effective data security control for preventing personal data compromise.

5. Keep All Software Patched and Updated

Keep all environments up-to-date by promptly applying patches and updates. Consider patching a “golden image” and deploying it across your environment. Perform regular scans and audits to identify potential security misconfigurations and missing patches.

An attack surface monitoring solution, such as UpGuard, can detect vulnerable software versions that have been impacted by zero-days and other known security flaws.

6. Deploy Security Tools

Deploy security tools, such as intrusion detection and prevention systems (IDPS) and security information and event management (SIEM) solutions, to monitor and respond to potential threats.

It’s also essential to implement tools that defend against tactics often used to complement data breach attempts. One example is the DDoS attack, where a server is flooded with fake traffic to force it offline, allowing hackers to exploit security misconfigurations during the chaos of the resulting downtime.

Another important security tool is a data leak detection solution for discovering compromised account credentials published on the dark web. These credentials, if exploited, allow hackers to compress the data breach lifecycle, making these events harder to detect and intercept.

Data leaks compressing the data breach lifecycle.

Learn how to detect and prevent data leaks >

7. Implement a Zero-Trust Architecture

One of the main ways that companies can protect themselves from cloud-related security threats is by implementing a Zero Trust security architecture. This approach assumes all requests for access to resources are potentially malicious and, therefore, require additional verification before granting access.

Learn how to implement a Zero-Trust Architecture >

A Zero-Trust approach to security assumes that all users, devices, and networks are untrustworthy until proven otherwise.

8. Develop a Repeatable Hardening Process

Establish a process that can be easily replicated to ensure consistent, secure configurations across production, development, and QA environments. Use different passwords for each environment and automate the process for efficient deployment. Be sure to address IoT devices in the hardening process. 

These devices tend to be secured with only their default factory passwords, making them easy to hijack into botnets used to launch DDoS attacks.

9. Implement a Secure Application Architecture

Design your application architecture to obfuscate general access to sensitive resources using the principle of network segmentation.

Learn more about network segmentation >

Cloud infrastructure has become a significant cybersecurity issue in the last decade. Barely a month goes by without a major security breach at a cloud service provider or a large corporation using cloud services.

10. Maintain a Structured Development Cycle

Facilitate security testing during development by adhering to a well-organized development process. Following cybersecurity best practices this early in the development process sets the foundation for a resilient security posture that will protect your data even as your company scales.

Implement a secure software development lifecycle (SSDLC) that incorporates security checkpoints at each stage of development, including requirements gathering, design, implementation, testing, and deployment. Additionally, train your development team in secure coding practices and encourage a culture of security awareness to help identify and remediate potential vulnerabilities before they make their way into production environments.

11. Review Custom Code

If using custom code, employ a static code security scanner before integrating it into the production environment. These scanners can automatically analyze code for potential vulnerabilities and compliance issues, reducing the risk of security misconfigurations.

Additionally, have security professionals conduct manual reviews and dynamic testing to identify issues that may not be detected by automated tools. This combination of automated and manual testing ensures that custom code is thoroughly vetted for security risks before deployment.

12. Utilize a Minimal Platform

Remove unused features, insecure frameworks, and unnecessary documentation, samples, or components from your platform. Adopt a “lean” approach to your software stack by only including components that are essential for your application’s functionality.

This reduces the attack surface and minimizes the chances of security misconfigurations. Furthermore, keep an inventory of all components and their associated security risks to better manage and mitigate potential vulnerabilities.

13. Review Cloud Storage Permissions

Regularly examine permissions for cloud storage, such as S3 buckets, and incorporate security configuration updates and reviews into your patch management process. This process should be a standard inclusion across all cloud security measures. Ensure that access controls are properly configured to follow the principle of least privilege, and encrypt sensitive data both in transit and at rest.

Implement monitoring and alerting mechanisms to detect unauthorized access or changes to your cloud storage configurations. By regularly reviewing and updating your cloud storage permissions, you can proactively identify and address potential security misconfigurations, thereby enhancing your organization’s data breach resilience.
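
With the AWS CLI, the core of such a review fits in a few read-only commands. A sketch with a hypothetical bucket name:

aws s3api get-public-access-block --bucket example-bucket    # confirm public access is blocked
aws s3api get-bucket-acl --bucket example-bucket             # review who holds grants
aws s3api get-bucket-encryption --bucket example-bucket      # confirm encryption at rest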

How UpGuard Can Help

UpGuard’s IP monitoring feature monitors all IP addresses associated with your attack surface for security issues, misconfigurations, and vulnerabilities. UpGuard’s attack surface monitoring solution can also identify common misconfigurations and security issues shared across your organization and its subsidiaries, including the exposure of WordPress usernames, vulnerable server versions, and a range of attack vectors facilitating first- and third-party data breaches.

UpGuard’s Risk Profile feature displays security vulnerabilities associated with end-of-life software.

To further expand its mitigation of data breach threat categories, UpGuard offers a data leak detection solution that scans ransomware blogs on the dark web for compromised credentials and any leaked data that could help hackers breach your network and sensitive resources.

UpGuard’s ransomware blog detection feature.

Source:
https://www.upguard.com/blog/security-misconfigurations-causing-data-breaches

Introducing Wordfence CLI: A High Performance Malware Scanner Built for the Command Line

Matt Barry, August 24, 2023

Today, we are incredibly excited to announce the launch of Wordfence CLI: an open source, high performance malware scanner built for the command-line. With Wordfence CLI you can detect malware and other indicators of compromise on a host system by running an extremely fast scanner that is at home in the Linux command line environment. This provides site owners, security administrators, operations teams, and security focused organizations more performance and flexibility in malware detection.

While the Wordfence plugin continues to provide industry leading security with its Web Application Firewall, 2-Factor Authentication, IP Blocklist, Malware Scanner, and other security features, Wordfence CLI can be used to provide a second layer of detection for malware or provide an option for those who choose not to utilize a security plugin.

Wordfence CLI does not provide the firewall, two-factor authentication, brute force protection and other security features that the Wordfence Free and Paid plugin provides. Wordfence CLI is purely focused on high performance, scalable and scriptable malware detection.

Wordfence CLI is for the following customers:

  • Individual site owners comfortable on the Linux command line, who choose to run (or schedule) high performance malware scans on the command line instead of using the malware scanning built into the Wordfence plugin.
  • Site cleaners who need a high performance malware scanner to scan a large number of files as part of remediation.
  • Developers providing hosting to several customers who want to configure high performance scans in the Linux environment.
  • Hosting companies small and large that want to parallelize scanning across thousands or millions of hosts, fully utilizing all available CPU cores and IO throughput.
  • Operations teams in any organization who are looking for a highly configurable command line scanner that can slot right in to a comprehensive, scheduled and scripted security policy.

Wordfence CLI aims to provide the fastest PHP malware scanner in the world with the highest detection rate, in a scriptable tool that can work in concert with other tools and utilities in the Linux command line environment.

What is Wordfence CLI?

Malware Detection Designed with Performance in Mind

Under the hood, Wordfence CLI is a multi-process malware scanner written in Python. It’s designed to have low memory overhead while being able to utilize multiple cores for scanning large filesystems for malware. We’ve opted to use libpcre over Python’s existing regex libraries for speed and compatibility with our signature set.

From some of our own benchmarks, we’ve seen ~324 files per second and approximately 13 megabytes scanned per second using 16 workers on an AMD Ryzen 7 1700 with 8 cores, utilizing our full commercial signature set of over 5,000 malware signatures. That is approximately 46 gigabytes per hour on modest hardware.

Here are some examples of Wordfence CLI in action.

Performing a basic scan of a single directory in a file system:

wordfence scan --output-path /home/wordfence/wordfence-cli.csv /var/www

This will recursively scan files in the /var/www directory and write the results of the scan in CSV format to /home/wordfence/wordfence-cli.csv. A scan like this could be scheduled using a cron job to be performed daily, which would be similar to how the Wordfence plugin performs scans. Additionally, we can use other utilities like find to select which files we want to scan using Wordfence CLI:

find /var/www/ -cmin -60 -type f -print0 | wordfence scan --output-path /home/wordfence/wordfence-cli.csv

In this example, we can find which files have been changed within the last hour and pipe those from the find command to Wordfence CLI for scanning. It is recommended that you use ctime over mtime and atime as changing the ctime of a file requires root access to the file system. mtime and atime can be arbitrarily set by the file owner using the touch command.

We don’t recommend solely scanning recently changed files on your file system. We frequently add new malware signatures to Wordfence CLI, and we therefore recommend periodically performing a full scan of your filesystem.
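
For those periodic full scans, a crontab entry is one scheduling option. A sketch that runs daily at 03:00 (paths are illustrative, and cron’s minimal environment may require the absolute path to the wordfence binary):

0 3 * * * /usr/local/bin/wordfence scan --output-path /home/wordfence/wordfence-cli.csv /var/www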

Flexibility at Your Fingertips

One key benefit of Wordfence CLI is flexibility. The tool comes with many options that enable users to utilize the output of the scan in various ways.

Some of these options include the ability to:

  • Format output in various ways like CSV, TSV, human readable, and more.
  • Choose the number of workers based on available CPUs, which can increase the speed and performance of a scan.
  • Include or skip certain files and directories in a scan.
  • Look for all malware signature matches in each file, or stop scanning a file as soon as malware is found (the default).
  • Include or exclude specific signatures from a scan.
  • And much more.

For more information on all of the options available, we recommend reviewing our help documentation at https://www.wordfence.com/help/wordfence-cli/, or downloading Wordfence CLI and running wordfence scan --help
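
As one combined example, the worker option above can be paired with the CSV output shown earlier. We are assuming the --workers flag name here; confirm it against wordfence scan --help for your installed version:

wordfence scan --workers 8 --output-path /home/wordfence/wordfence-cli.csv /var/www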

How Wordfence CLI Licensing Works

Wordfence CLI comes in two primary license types, Wordfence CLI Free and Wordfence CLI Commercial.

Wordfence CLI Free is free for individual use and cannot be used in a commercial setting. The free version uses our Free Signature Set, which is a smaller set of signatures appropriate for entry-level malware detection. Wordfence CLI Free is a great way to get familiar with the tool and to conduct quick scans.

Wordfence CLI Commercial includes our Commercial Signature Set of over 5,000 malware signatures, and can be used in any commercial setting. We release new malware signatures in real-time to our commercial customers. For a sense of scale, our team has released over 100 new malware signatures in the past four months.

Wordfence CLI Commercial includes product support from our world-class Customer Support Engineers.

Wordfence CLI Commercial is available in four pricing tiers:

  • CLI-100 can be used to scan up to 100 unique sites, at just $299 per year.
  • CLI-1,000 can be used to scan up to 1,000 different sites, at just $950 per year.
  • CLI-10,000 can be used to scan up to 10,000 different sites, at just $2,950 per year.
  • CLI-Enterprise, which is tailored to any organization or enterprise use case where the number of sites to be scanned exceeds 10,000. Please contact us at presales@wordfence.com if you are interested in this option.

We trust that users will self-select into the appropriate CLI tier based on the number of sites they need to scan within the license year. You can sign up for a Wordfence CLI free license, or purchase a Wordfence CLI Commercial license at: https://www.wordfence.com/products/wordfence-cli

Contributing to Open Source

Wordfence was founded on a commitment to building and maintaining open source software, and Wordfence CLI is no different. This is why we’ve decided to release the Wordfence CLI application under the GPLv3 license. You can clone the repository here:

https://github.com/wordfence/wordfence-cli/

We’ve also included documentation about how to install, configure, and run Wordfence CLI here:

https://www.wordfence.com/help/wordfence-cli/

Come see us at WordCamp US!

Wordfence is a proud Admin level sponsor at WordCamp US in Maryland this year. Join us in celebrating our launch of Wordfence CLI by stopping by our booth and saying hi! We’ll be there 8AM – 5PM tomorrow (Friday) and 8AM – 3:30PM on Saturday. We’ll have team members from Engineering, Threat Intelligence, Customer Service, Operations, and Security who will be happy to answer any questions you have about the launch of Wordfence CLI. We can also help with any questions about our current product lineup which includes Wordfence Premium, Wordfence Care, and Wordfence Response along with Wordfence Intelligence. If the rumors are true, we might even be teaching the public how to pick locks, and you might have the opportunity to win your own lock picking set if you can crack it.

Source:
https://www.wordfence.com/blog/2023/08/wordfence-cli/

New SEC Cybersecurity Rules: What You Need to Know

By: Greg Young, Trend Micro
August 03, 2023

The US Securities and Exchange Commission (SEC) recently adopted rules regarding mandatory cybersecurity disclosure. Explore what this announcement means for you and your organization.

On July 26, 2023, the US Securities and Exchange Commission (SEC) adopted rules regarding mandatory cybersecurity disclosure. What does this mean for you and your organization? As I understand them, here are the major takeaways that cybersecurity and business leaders need to know:

Who does this apply to?

The rules announced apply only to registrants of the SEC, i.e., companies filing documents with the US SEC. Not surprisingly, this isn’t limited to attacks on assets located within the US, so incidents concerning SEC registrant companies’ assets in other countries are in scope. This scope also, not surprisingly, does not include the government, companies not subject to SEC reporting (i.e., privately held companies), and other organizations.

Breach notification for these others will be the subject of separate compliance regimes, which will hopefully, at some point in time, be harmonized and/or unified to some degree with the SEC reporting.

Advice for security leaders: be aware that these new rules could require “double reporting,” such as for publicly traded critical infrastructure companies. Having multiple compliance regimes, however, is not new for cybersecurity.

What are the general disclosure requirements?

Some pundits have said “four days after an incident” but that’s not quite correct. The SEC says that “material breaches” must be reported “four business days after a registrant determines that a cybersecurity incident is material.”

We’ve hit the first squishy bit: materiality. Directing companies to disclose material events shouldn’t be necessary, but there’s a mixed record of companies determining materiality in public company operations. But what kind of cybersecurity incident would be likely to be important to a reasonable investor?

We’ve seen giant breaches that paradoxically did not move stock prices, and minor breaches that did the opposite. I’m clearly on the side of compliance and disclosure, but I recognize it is a gray area. Recently we saw some companies that had the MOVEit vulnerability exploited but had no data loss. Should they report? But in some cases, their response to the vulnerability was in the millions: how about then? I expect and hope there will be further guidance.

Advice for security leaders: monitor the breach investigation and the analysis of materiality. Security leaders won’t often make that call but should give guidance and continuous updates to the CxOs who are responsible.

The second squishy bit is that the requirement is the reporting should be made four days after determining the incident is material. So not four days after the incident, but after the materiality determination. I understand why it was structured this way, as a small indicator of compromise must be followed up before understanding the scope and nature of a breach, including whether a breach has occurred at all. But this does give a window to some of the foot-dragging for disclosure we’ve unfortunately seen, including product companies with vulnerabilities.

Advice for security leaders: make management aware of the four-day reporting requirement and monitor the clock once the material line is crossed or identified.

Are there extensions?

There are, but not because you need more time. Instead “The disclosure may be delayed if the United States Attorney General determines that immediate disclosure would pose a substantial risk to national security or public safety and notifies the Commission of such determination in writing.” Note that it specifically states that the Attorney General (AG) makes that determination, and the AG communicates this to the SEC. There could be some delegation of this authority within the Department of Justice in the future, but today it is the AG.

How does it compare to other countries and compliance regimes?

Breach and incident reporting and disclosure is not new, and the concept of reporting material events is already commonplace around the world. GDPR breach reporting is 72 hours; HHS HIPAA requires notice not later than 60 days, and 90 days to individuals affected; and the UK Financial Conduct Authority (FCA) has breach reporting requirements. Canada has draft legislation in Bill C-26 that looks at mandatory reporting through the lens of critical industries, which includes verticals such as banking and telecoms but not public companies. Many of the world’s financial oversight bodies do not require breach notification for public companies in the exchanges they are responsible for.

Advice to security leaders: consider the new SEC rules as clarification and amplification of existing reporting requirements for material events rather than a new regime or something that is harsher or different to other geographies.

Is breach reporting the only new rule?

No, I’ve only focused on incident reporting in this post. There are a few more. The two most noteworthy ones are:

  • Regulation S-K Item 106, requiring registrants to “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents.”
  • Also specified is that annual 10-Ks “describe the board of directors’ oversight of risks from cybersecurity threats and management’s role and expertise in assessing and managing material risks from cybersecurity threats.”

Bottom line

SEC mandatory reporting for material cybersecurity events was already a requirement under the general reporting requirements; however, the timelines and nature of the reporting are getting real and have a ticking four-day timer on them.

Stepping back from the rules, the importance of visibility and continuous monitoring are the real takeaways. Time to detection can’t be at the speed of your least experienced analyst. Platform means unified visibility rather than a wall of consoles. Finding and stopping breaches means internal visibility must include a rich array of telemetry, and that it be continuously monitored.

Many SEC registrants have operations outside the US, and that means visibility needs to include threat intelligence that is localized to other geographies. These new SEC rules show more than ever that cyber risk is business risk.

Source:
https://www.trendmicro.com/en_us/research/23/h/sec-cybersecurity-rules-2023.html

Cybersecurity Threat 1H 2023 Brief with Generative AI

By: Trend Micro
August 08, 2023

How generative AI influenced threat trends in 1H 2023

A lot can change in cybersecurity over the course of just six months, not least in criminal marketplaces. In the first half of 2023, the rapid expansion of generative AI began to be felt in scams such as virtual kidnapping, and purpose-built criminal tools like WormGPT and FraudGPT are now being marketed. The use of AI empowers adversaries to carry out more sophisticated attacks and poses a new set of challenges. The good news is that the same technology can also be used to empower security teams to work more effectively.

As we analyze the major events and patterns observed during this time, we uncover critical insights that can help businesses stay ahead of risk and prepare for the challenges that lie ahead in the second half of the year.

AI-Driven Tools in Cybercrime

The adoption of AI in organizations has increased significantly, offering numerous benefits. However, cybercriminals are also harnessing the power of AI to carry out attacks more efficiently.

As detailed in a Trend research report in June, virtual kidnapping is a relatively new and concerning type of imposter scam. The scammer extorts their victims by tricking them into believing they are holding a friend or family member hostage. In reality, it is AI technology known as a “deepfake,” which enables the fraudster to impersonate the real voice of the “hostage” whilst on the phone. Audio harvested from their social media posts will typically be used to train the AI model.

However, it is generative AI that’s playing an increasingly important role earlier on in the attack chain—by accelerating what would otherwise be a time-consuming process of selecting the right victims. To find those most likely to pay up when confronted with traumatic content, threat groups can use generative AI like ChatGPT to filter large quantities of potential victim data, fusing it with geolocation and advertising analytics. The result is a risk-based scoring system that can show scammers at a glance where they should focus their attacks.

This isn’t just theory. Virtual kidnapping scams are already happening. The bad news is that generative AI could be leveraged to make such attacks even more automated and effective in the future. An attacker could generate a script via ChatGPT and then convert it to the hostage’s voice using a deepfake text-to-speech app.

Of course, virtual kidnapping is just one of a growing number of scams that are continually being refined and improved by threat actors. Pig butchering is another type of investment fraud where the victim is befriended online, sometimes on romance sites, and then tricked into depositing their money into fictitious cryptocurrency schemes. It’s feared that these fraudsters could use ChatGPT and similar tools to improve their conversational techniques and perhaps even shortlist victims most likely to fall for the scams.

What to expect

The emergence of generative AI tools enables cybercriminals to automate and improve the efficiency of their attacks. The future may witness the development of AI-driven threats like DDoS attacks, wipers, and more, increasing the sophistication and scale of cyberattacks.

One area of concern is the use of generative AI to select victims based on extensive data analysis. This capability allows cybercriminals to target individuals and organizations with precision, maximizing the impact of their attacks.

Fighting back

Fortunately, security experts like Trend are also developing AI tools to help customers mitigate such threats. Trend pioneered the use of AI and machine learning for cybersecurity—embedding the technology in products as far back as 2005. From those early days of spam filtering, we began developing models designed to detect and block unknown threats more effectively.

Trend’s defense strategy

Most recently, we began leveraging generative AI to enhance security operations. Companion is a cybersecurity assistant designed to automate repetitive tasks and thereby free up time-poor analysts to focus on high-value tasks. It can also help to fill skills gaps by decoding complex scripts, triaging and recommending actions, and explaining and contextualizing alerts for SecOps staff.

What else happened in 1H 2023?

Ransomware: Adapting and Growing

Ransomware attacks are becoming more sophisticated, with threat actors leveraging AI-enabled tools to automate their malicious activities. One new player on the scene, Mimic, has abused legitimate search tools to identify and encrypt specific files for maximum impact. Meanwhile, the Royal ransomware group has expanded its targets to include Linux platforms, signaling an escalation in their capabilities.

According to Trend data, ransomware groups have been targeting finance, IT, and healthcare industries the most in 2023. From January 1 to July 17, 2023, there have been 219, 206, and 178 successful compromises of victims in these industries, respectively.

Our research findings revealed that ransomware groups are collaborating more frequently, leading to lower costs and increased market presence. Some groups are showing a shift in motivation, with recent attacks resembling those of advanced persistent threat (APT) groups. To combat these evolving threats, organizations need to implement a “shift left” strategy, fortifying their defenses to prevent threats from gaining access to their networks in the first place.

Vulnerabilities: Paring Down Cyber Risk Index

While the Cyber Risk Index (CRI) has dropped to a moderate range, the threat landscape remains concerning. Threat actors are exploiting smaller platforms, as seen with Clop ransomware targeting MOVEit and compromising government agencies. New top-level domains from Google pose risks for concealing malicious URLs. Connected cars create new avenues for hackers. Proactive cyber risk management is crucial.

Campaigns: Evading Detection and Expanding Targets

Malicious actors are continually updating their tactics, techniques, and procedures (TTPs) to evade detection and cast a wider net for victims. APT34, for instance, used DNS-based communication combined with legitimate SMTP mail traffic to bypass security policies. Meanwhile, Earth Preta has shifted its focus to target critical infrastructure and key institutions using hybrid techniques to deploy malware.

Persistent threats like the APT41 subgroup Earth Longzhi have resurfaced with new techniques, targeting firms in multiple countries. These campaigns take a coordinated approach to cyber espionage, and businesses must remain vigilant against such attacks.

To learn more about Trend’s 2023 Midyear Cybersecurity Report, please visit: https://www.trendmicro.com/vinfo/us/security/research-and-analysis/threat-reports/roundup/stepping-ahead-of-risk-trend-micro-2023-midyear-cybersecurity-threat-report

Source:
https://www.trendmicro.com/en_us/research/23/h/cybersecurity-threat-2023-generative-ai.html

The Journey to Zero Trust with Industry Frameworks

By: Alifiya Sadikali, Trend Micro
August 09, 2023

Discover the core principles and frameworks of Zero Trust, NIST 800-207 guidelines, and best practices when implementing CISA’s Zero Trust Maturity Model.

With the growing number of devices connected to the internet, traditional security measures are no longer enough to keep your digital assets safe. To protect your organization from digital threats, it’s crucial to establish strong security protocols and take proactive measures to stay vigilant.

What is Zero Trust?

Zero Trust is a cybersecurity philosophy based on the premise that threats can arise internally and externally. With Zero Trust, no user, system, or service should automatically be trusted, regardless of its location within or outside the network. Providing an added layer of security to protect sensitive data and applications, Zero Trust only grants access to authenticated and authorized users and devices. And in the event of a data breach, compartmentalizing access to individual resources limits potential damage.

Your organization should consider Zero Trust as a proactive security strategy to better protect its data and assets.

The pillars of Zero Trust

At its core, Zero Trust is built on a few fundamental principles:

  • Verify explicitly. Only grant access once the user or device has been explicitly authenticated and verified. By doing so, you can ensure that only those with a legitimate need to access your organization’s resources can do so.
  • Least privilege access. Only give users access to the resources they need to do their job and nothing more. Limiting access in this way prevents unauthorized access to your organization’s data and applications.
  • Assume breach. Act as if a compromise to your organization’s security has occurred. Take steps to minimize the damage, including monitoring for unusual activity, limiting access to sensitive data, and ensuring that backups are up-to-date and secure.
  • Microsegmentation. Divide your organization’s network into smaller, more manageable segments and apply security controls to each segment individually. This reduces the risk of a breach spreading from one part of your network to another.
  • Security automation. Use tools and technologies to automate the process of monitoring, detecting, and responding to security threats. This ensures that your organization’s security is always up-to-date and can react quickly to new threats and vulnerabilities.

A Zero Trust approach is a proactive and effective way to protect your organization’s data and assets from cyber-attacks and data breaches. By following these core principles, your organization can minimize the risk of unauthorized access, reduce the impact of a breach, and ensure that your organization’s security is always up-to-date and effective.

The role of NIST 800-207 in Zero Trust

NIST 800-207 is a Zero Trust architecture standard developed by the National Institute of Standards and Technology. It provides guidelines and best practices for organizations adopting Zero Trust to manage and mitigate cybersecurity risks.

Designed to be flexible and adaptable for a variety of organizations and industries, the framework supports the customization of cybersecurity plans to meet their specific needs. Its implementation can help organizations improve their cybersecurity posture and protect against cyber threats.

One of the most important recommendations of NIST 800-207 is to establish a policy engine, policy administrator, and policy enforcement point. This will help ensure consistent policy enforcement and that access is granted only to those who need it.

Another critical recommendation is conducting continuous monitoring and having real-time risk-based decision-making capabilities. This can help you quickly identify and respond to potential threats.

Additionally, it is essential to understand and map dependencies among assets and resources. This will help you ensure your security measures are appropriately targeted based on potential vulnerabilities.

Finally, NIST recommends replacing traditional paradigms, such as implicit trust in assets or entities, with a “never trust, always verify” methodology. Adopting this approach can better protect your organization’s assets and resources from internal and external threats.

CISA’s Zero Trust Maturity Model

The Zero Trust Maturity Model (ZTMM), developed by CISA, provides a comprehensive framework for assessing an organization’s Zero Trust posture. This model covers critical areas including:

  • Identity management: To implement a Zero Trust strategy, it is important to begin with identity. This involves continuously verifying, authenticating, and authorizing any entity before granting access to corporate resources. To achieve this, comprehensive visibility is necessary.
  • Devices, networks, applications: To maintain Zero Trust, use endpoint detection and response capabilities to detect threats and keep track of device assets, network connections, application configurations, and vulnerabilities. Continuously assess and score device security posture and implement risk-informed authentication protocols to ensure only trusted devices, networks and applications can access sensitive data and enterprise systems.
  • Data and governance: To maximize security, implement prevention, detection, and response measures for identity, devices, networks, IoT, and cloud. Monitor legacy protocols and device encryption status. Apply Data Loss Prevention and access control policies based on risk profiles.
  • Visibility and analytics: Zero Trust strategies cannot succeed within silos. By collecting data from various sources within an organization, organizations can gain a complete view of all entities and resources. This data can be analyzed through threat intelligence, generating reliable and contextualized alerts. By tracking broader incidents connected to the same root cause, organizations can make informed policy decisions and take appropriate response actions.
  • Automation and orchestration: To effectively automate security responses, it is important to have access to comprehensive data that can inform the orchestration of systems and manage permissions. This includes identifying the types of data being protected and the entities that are accessing it. By doing so, it ensures that there is proper oversight and security throughout the development process of functions, products, and services.

By thoroughly evaluating these areas, your organization can identify potential vulnerabilities in its security measures and take prompt action to improve its overall cybersecurity posture. CISA's ZTMM offers a holistic approach to security that will enable your organization to remain vigilant against potential threats.

Implementing Zero Trust with Trend Vision One

Trend Vision One aligns to industry frameworks and best practices, including NIST and CISA, offering coverage from prevention to extended detection and response across all pillars of Zero Trust.

Trend Vision One is an innovative solution that empowers organizations to identify vulnerabilities, monitor potential threats, and evaluate risks in real time, enabling informed decisions regarding access control. Its open platform approach enables seamless integration with third-party partner ecosystems, including IAM, Vulnerability Management, Firewall, BAS, and SIEM/SOAR vendors, providing a comprehensive and unified source of truth for risk assessment within your current security framework. Additionally, Trend Vision One is interoperable with SWG, CASB, and ZTNA, and includes Attack Surface Management and XDR, all within a single console.

Conclusion

CISOs today understand that the journey towards achieving Zero Trust is a gradual process that requires careful planning, step-by-step implementation, and a shift in mindset towards proactive security and cyber risk management. By understanding the core principles of Zero Trust and utilizing the guidelines provided by NIST and CISA to operationalize Zero Trust with Trend Vision One, you can ensure that your organization’s cybersecurity measures are strong and can adapt to the constantly changing threat landscape.

To read more thought leadership and research about Zero Trust, click here.

Source :
https://www.trendmicro.com/en_us/research/23/h/industry-zero-trust-frameworks.html

8 Essential Tips for Data Protection and Cybersecurity in Small Businesses

Michelle Quill — June 6, 2023

Small businesses are often targeted by cybercriminals due to their lack of resources and security measures. Protecting your business from cyber threats is crucial to avoid data breaches and financial losses.

Why is cyber security so important for small businesses?

Small businesses are particularly vulnerable to cyberattacks, which can result in financial loss, data breaches, and damage to IT equipment. To protect your business, it's important to implement strong cybersecurity measures.

Here are some tips to help you get started:

One important aspect of data protection and cybersecurity for small businesses is controlling access to customer lists. It’s important to limit access to this sensitive information to only those employees who need it to perform their job duties. Additionally, implementing strong password policies and regularly updating software and security measures can help prevent unauthorized access and protect against cyber attacks. Regular employee training on cybersecurity best practices can also help ensure that everyone in the organization is aware of potential threats and knows how to respond in the event of a breach.

When it comes to protecting customer credit card information in small businesses, there are a few key tips to keep in mind. First and foremost, it’s important to use secure payment processing systems that encrypt sensitive data. Additionally, it’s crucial to regularly update software and security measures to stay ahead of potential threats. Employee training and education on cybersecurity best practices can also go a long way in preventing data breaches. Finally, having a plan in place for responding to a breach can help minimize the damage and protect both your business and your customers.

Small businesses are often exposed to cyber attacks, making data protection and cybersecurity crucial. One area of particular concern is your company’s banking details. To protect this sensitive information, consider implementing strong passwords, two-factor authentication, and regular monitoring of your accounts. Additionally, educate your employees on safe online practices and limit access to financial information to only those who need it. Regularly backing up your data and investing in cybersecurity software can also help prevent data breaches.

Small businesses are often at high risk of cyber attacks due to their limited resources and lack of expertise in cybersecurity. To protect sensitive data, it is important to implement strong passwords, regularly update software and antivirus programs, and limit access to confidential information.

It is also important to have a plan in place in case of a security breach, including steps to contain the breach and notify affected parties. By taking these steps, small businesses can better protect themselves from cyber threats and ensure the safety of their data.

Protecting your small business from cyber threats and data breaches is crucial in today's digital age. One of the most important steps is to educate your employees on cybersecurity best practices, such as using strong passwords and avoiding suspicious emails or links.

It’s also important to regularly update your software and systems to ensure they are secure and protected against the latest threats. Additionally, implementing multi-factor authentication and encrypting sensitive data can add an extra layer of protection. Finally, having a plan in place for responding to a cyber-attack or data breach can help minimize the damage and get your business back on track as quickly as possible.

Small businesses are vulnerable to cyber-attacks and data breaches, which can have devastating consequences. To protect your business, it's important to implement strong cybersecurity measures. This includes using strong passwords, regularly updating software and systems, and training employees on how to identify and avoid phishing scams.

It’s also important to have a data backup plan in place and to regularly test your security measures to ensure they are effective. By taking these steps, you can help protect your business from cyber threats and safeguard your valuable data.

To protect against cyber threats, it’s important to implement strong data protection and cybersecurity measures. This can include regularly updating software and passwords, using firewalls and antivirus software, and providing employee training on safe online practices. Additionally, it’s important to have a plan in place for responding to a cyber attack, including backing up data and having a designated point person for handling the situation.

In today’s digital age, small businesses must prioritize data protection and cybersecurity to safeguard their operations and reputation. With the rise of remote work and cloud-based technology, businesses are more vulnerable to cyber attacks than ever before. To mitigate these risks, it’s crucial to implement strong security measures for online meetings, advertising, transactions, and communication with customers and suppliers. By prioritizing cybersecurity, small businesses can protect their data and prevent unauthorized access or breaches.

Here are 8 essential tips for data protection and cybersecurity in small businesses.

8 Essential Tips for Data Protection and Cybersecurity in Small Businesses

1. Train Your Employees on Cybersecurity Best Practices

Your employees are the first line of defense against cyber threats. It’s important to train them on cybersecurity best practices to ensure they understand the risks and how to prevent them. This includes creating strong passwords, avoiding suspicious emails and links, and regularly updating software and security systems. Consider providing regular training sessions and resources to keep your employees informed and prepared.

2. Use Strong Passwords and Two-Factor Authentication

One of the most basic yet effective ways to protect your business from cyber threats is to use strong passwords and two-factor authentication. Encourage your employees to use complex passwords that include a mix of letters, numbers, and symbols, and to avoid using the same password for multiple accounts. Two-factor authentication adds an extra layer of security by requiring a second form of verification, such as a code sent to a mobile device, before granting access to an account. This can help prevent unauthorized access even if a password is compromised.
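
As an illustration of what that second factor computes under the hood, here is a sketch of a time-based one-time password (RFC 6238) in TypeScript on Node.js. It assumes the shared secret is already raw bytes (real apps typically store it Base32-encoded) and is a teaching sketch, not a hardened implementation:

import { createHmac } from "node:crypto";

// Compute the current 6-digit TOTP code for a shared secret (RFC 6238).
function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  const counter = BigInt(Math.floor(Date.now() / 1000 / stepSeconds));
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter);
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // RFC 4226 dynamic truncation
  const code =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];
  return String(code % 10 ** digits).padStart(digits, "0");
}

// A login flow compares the user's submitted code against totp(secret),
// typically also accepting the previous/next time step for clock drift.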

3. Keep Your Software and Systems Up to Date

One of the easiest ways for cybercriminals to gain access to your business’s data is through outdated software and systems. Hackers are constantly looking for vulnerabilities in software and operating systems, and if they find one, they can exploit it to gain access to your data. To prevent this, make sure all software and systems are kept up-to-date with the latest security patches and updates. This includes not only your computers and servers but also any mobile devices and other connected devices used in your business. Set up automatic updates whenever possible to ensure that you don’t miss any critical security updates.

4. Use Antivirus and Anti-Malware Software

Antivirus and anti-malware software are essential tools for protecting your small business from cyber threats. These programs can detect and remove malicious software, such as viruses, spyware, and ransomware before they can cause damage to your systems or steal your data. Make sure to install reputable antivirus and anti-malware software on all devices used in your business, including computers, servers, and mobile devices. Keep the software up-to-date and run regular scans to ensure that your systems are free from malware.

5. Back Up Your Data Regularly

One of the most important steps you can take to protect your small business from data loss is to back up your data regularly. This means creating copies of your important files and storing them in a secure location, such as an external hard drive or cloud storage service. In the event of a cyber-attack or other disaster, having a backup of your data can help you quickly recover and minimize the impact on your business. Make sure to test your backups regularly to ensure that they are working properly and that you can restore your data if needed.

6. Carry out a risk assessment

Small businesses are especially vulnerable to cyber attacks, making it crucial to prioritize data protection and cybersecurity. One important step is to assess potential risks that could compromise your company's networks, systems, and information. By identifying and analyzing possible threats, you can develop a plan to address security gaps and protect your business from harm.

For small businesses, data protection and cybersecurity are crucial. To start, conduct a thorough risk assessment to identify where and how your data is stored, who has access to it, and potential threats. If you use cloud storage, consult with your provider to assess risks. Determine the potential impact of breaches and establish risk levels for different events. By taking these steps, you can better protect your business from cyber threats.

7. Limit access to sensitive data

One effective strategy is to limit access to critical data to only those who need it. This reduces the risk of a data breach and makes it harder for malicious insiders to gain unauthorized access. To ensure accountability and clarity, create a plan that outlines who has access to what information and what their roles and responsibilities are. By taking these steps, you can help safeguard your business against cyber threats.
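
One lightweight way to encode "only those who need it" is a deny-by-default role map. A minimal sketch follows; the roles and data sets are hypothetical examples, not a prescription:

// Deny-by-default access to sensitive data sets, keyed by role.
type Role = "sales" | "accounting" | "support";

const allowedData: Record<Role, string[]> = {
  sales: ["customer-list"],
  accounting: ["banking-details", "card-records"],
  support: ["customer-list"],
};

function canAccess(role: Role, dataSet: string): boolean {
  // Anything not explicitly granted is refused.
  return allowedData[role]?.includes(dataSet) ?? false;
}

// canAccess("sales", "banking-details") -> false

Written down this way, the map itself doubles as the plan of who has access to what.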

8. Use a firewall

For Small businesses, it’s important to protect the system from cyber attacks by making data protection and reducing cybersecurity risk. One effective measure is implementing a firewall, which not only protects hardware but also software. By blocking or deterring viruses from entering the network, a firewall provides an added layer of security. It’s important to note that a firewall differs from an antivirus, which targets software affected by a virus that has already infiltrated the system.

Small businesses can take steps to protect their data and ensure cybersecurity. One important step is to install a firewall and keep it updated with the latest software or firmware. Regularly checking for updates can help prevent potential security breaches.

Conclusion

Small businesses are particularly vulnerable to cyber attacks, so it’s important to take steps to protect your data. One key tip is to be cautious when granting access to your systems, especially to partners or suppliers. Before granting access, make sure they have similar cybersecurity practices in place. Don’t hesitate to ask for proof or to conduct a security audit to ensure your data is safe.

Source :
https://onlinecomputertips.com/support-categories/networking/tips-for-cybersecurity-in-small-businesses/

Part 2: Rethinking cache purge with a new architecture

21/06/2023

In Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation, we outlined the importance of cache invalidation and the difficulties of purging caches, how our existing purge system was designed and performed, and we gave a high level overview of what we wanted our new Cache Purge system to look like.

It’s been a while since we published the first blog post and it’s time for an update on what we’ve been working on. In this post we’ll be talking about some of the architecture improvements we’ve made so far and what we’re working on now.

Cache Purge end to end

We touched on the high level design of what we called the “coreless” purge system in part 1, but let’s dive deeper into what that design encompasses by following a purge request from end to end:

Step 1: Request received locally

An API request to Cloudflare is routed to the nearest Cloudflare data center and passed to an API Gateway worker. This worker looks at the request URL to see which service it should be sent to and forwards the request to the appropriate upstream backend. Most endpoints of the Cloudflare API are currently handled by centralized services, so the API Gateway worker is often just proxying requests to the nearest "core" data center, which has its own gateway services to handle authentication, authorization, and further routing. But for endpoints which aren't handled centrally, the API Gateway worker must handle authentication and route authorization itself, and then proxy to an appropriate upstream. For cache purge requests, that upstream is a Purge Ingest worker in the same data center.
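
In Workers terms, the gateway's job looks roughly like the sketch below. The path check and the service binding name are illustrative placeholders, not Cloudflare's actual routing table:

// Illustrative API Gateway worker: purge requests stay local,
// everything else proxies to a core data center.
export default {
  async fetch(request: Request, env: { PURGE_INGEST: Fetcher }): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname.endsWith("/purge_cache")) {
      // Authenticated and authorized at the edge, then handed off locally.
      return env.PURGE_INGEST.fetch(request);
    }
    // Centralized endpoints still go to a core data center's gateway.
    return fetch(new Request(`https://core.example.internal${url.pathname}${url.search}`, request));
  },
};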

Step 2: Purges tested locally

The Purge Ingest worker evaluates the purge request to make sure it is processable. It scans the URLs in the body of the request to see if they're valid, then attempts to purge the URLs from the local data center's cache. This concept of local purging was a new step introduced with the coreless purge system, allowing us to capitalize on existing logic already used in every data center.
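
The validity scan itself can be as simple as attempting to parse each entry. A sketch of that first check follows; the body shape is a placeholder, not the real request schema:

// Illustrative first pass in a Purge Ingest worker: keep only
// well-formed http(s) URLs from the request body.
async function validPurgeUrls(request: Request): Promise<string[]> {
  const body = (await request.json()) as { files?: string[] };
  return (body.files ?? []).filter((candidate) => {
    try {
      const parsed = new URL(candidate);
      return parsed.protocol === "http:" || parsed.protocol === "https:";
    } catch {
      return false; // not parseable as a URL, so it can't be purged
    }
  });
}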

By leveraging the same ownership checks our data centers use to serve a zone's normal traffic on the URLs being purged, we can determine if those URLs are even cacheable by the zone. Currently more than 50% of the URLs we're asked to purge can't be cached by the requesting zones, either because they don't own the URLs (e.g. a customer asking us to purge https://cloudflare.com) or because the zone's settings for the URL prevent caching (e.g. the zone has a "bypass" cache rule that matches the URL). All such purges are superfluous and shouldn't be processed further, so we filter them out and avoid broadcasting them to other data centers, freeing up resources to process more legitimate purges.

On top of that, generating the cache key for a file isn’t free; we need to load zone configuration options that might affect the cache key, apply various transformations, et cetera. The cache key for a given file is the same in every data center though, so when we purge the file locally we now return the generated cache key to the Purge Ingest worker and broadcast that key to other data centers instead of making each data center generate it themselves.

Step 3: Purges queued for broadcasting

[Diagram: a purge request reaches a small data center; the Purge Ingest worker forwards it to a Purge Queue worker in a Tier 1 data center]

Once the local purge is done the Purge Ingest worker forwards the purge request with the cache key obtained from the local cache to a Purge Queue worker. The queue worker is a Durable Object worker using its persistent state to hold a queue of purges it receives and pointers to how far along in the queue each data center in our network is in processing purges.

The queue is important because it allows us to automatically recover from a number of scenarios such as connectivity issues or data centers coming back online after maintenance. Having a record of all purges since an issue arose lets us replay those purges to a data center and “catch up”.

But Durable Objects are globally unique, so having one manage all global purges would have just moved our centrality problem from a core data center to wherever that Durable Object was provisioned. Instead we have dozens of Durable Objects in each region, and the Purge Ingest worker looks at the load balancing pool of Durable Objects for its region and picks one (often in the same data center) to forward the request to. The Durable Object will write the purge request to its queue and immediately loop through all the data center pointers and attempt to push any outstanding purges to each.
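
A heavily simplified sketch of that queue-and-pointers pattern as a Durable Object follows; the real system's batching, retries, and per-data-center pointer bookkeeping are elided:

// Sketch of a purge queue Durable Object: append to persistent
// storage, then (elsewhere) push outstanding entries to each data center.
export class PurgeQueue {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const purge = await request.json();
    const seq = (await this.state.storage.get<number>("seq")) ?? 0;
    await this.state.storage.put(`purge:${seq}`, purge); // durable queue entry
    await this.state.storage.put("seq", seq + 1);
    // A real implementation now loops over per-data-center pointers
    // and attempts delivery of everything past each pointer.
    return new Response("queued", { status: 202 });
  }
}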

While benchmarking our performance we found our particular workload exhibited a “goldilocks zone” of throughput to a given Durable Object. On script startup we have to load all sorts of data like network topology and data center health–then refresh it continuously in the background–and as long as the Durable Object sees steady traffic it stays active and we amortize those startup costs. But if you ask a single Durable Object to do too much at once like send or receive too many requests, the single-threaded runtime won’t keep up. Regional purge traffic fluctuates a lot depending on local time of day, so there wasn’t a static quantity of Durable Objects per region that would let us stay within the goldilocks zone of enough requests to each to keep them active but not too many to keep them efficient. So we built load monitoring into our Durable Objects, and a Regional Autoscaler worker to aggregate that data and adjust load balancing pools when we start approaching the upper or lower edges of our efficiency goldilocks zone.

Step 4: Purges broadcast globally

[Diagram: across multiple regions, a Durable Object sends purges to fanout workers in other regions; each fanout worker distributes them to the small data centers in its region]

Once a purge request is queued by a Purge Queue worker it needs to be broadcast to the rest of Cloudflare’s data centers to be carried out by their caches. The Durable Objects will broadcast purges directly to all data centers in their region, but when broadcasting to other regions they pick a Purge Fanout worker per region to take care of their region’s distribution. The fanout workers manage queues of their own as well as pointers for all of their region’s data centers, and in fact they share a lot of the same logic as the Purge Queue workers in order to do so. One key difference is fanout workers aren’t Durable Objects; they’re normal worker scripts, and their queues are purely in memory as opposed to being backed by Durable Object state. This means not all queue worker Durable Objects are talking to the same fanout worker in each region. Fanout workers can be dropped and spun up again quickly by any metal in the data center because they aren’t canonical sources of state. They maintain queues and pointers for their region but all of that info is also sent back downstream to the Durable Objects who persist that data themselves, reliably.

But what does the fanout worker get us? Cloudflare has hundreds of data centers all over the world, and as we mentioned above we benefit from keeping the number of incoming and outgoing requests for a Durable Object fairly low. Sending purges to a fanout worker per region means each Durable Object only has to make a fraction of the requests it would if it were broadcasting to every data center directly, which means it can process purges faster.

On top of that, occasionally a request will fail to get where it was going and require retransmission. When this happens between data centers in the same region it’s largely unnoticeable, but when a Durable Object in Canada has to retry a request to a data center in rural South Africa the cost of traversing that whole distance again is steep. The data centers elected to host fanout workers have the most reliable connections in their regions to the rest of our network. This minimizes the chance of inter-regional retries and limits the latency imposed by retries to regional timescales.

The introduction of the Purge Fanout worker was a massive improvement to our distribution system, reducing our end-to-end purge latency by 50% on its own and increasing our throughput threefold.

Current status of coreless purge

We are proud to say our new purge system has been in production serving purge by URL requests since July 2022, and the results in terms of latency improvements are dramatic. In addition, flexible purge requests (purge by tag/prefix/host and purge everything) share and benefit from the new coreless purge system’s entrypoint workers before heading to a core data center for fulfillment.

The reason flexible purge isn’t also fully coreless yet is because it’s a more complex task than “purge this object”; flexible purge requests can end up purging multiple objects–or even entire zones–from cache. They do this through an entirely different process that isn’t coreless compatible, so to make flexible purge fully coreless we would have needed to come up with an entirely new multi-purge mechanism on top of redesigning distribution. We chose instead to start with just purge by URL so we could focus purely on the most impactful improvements, revamping distribution, without reworking the logic a data center uses to actually remove an object from cache.

This is not to say that the flexible purges haven’t benefited from the coreless purge project. Our cache purge API lets users bundle single file and flexible purges in one request, so the API Gateway worker and Purge Ingest worker handle authorization, authentication and payload validation for flexible purges too. Those flexible purges get forwarded directly to our services in core data centers pre-authorized and validated which reduces load on those core data center auth services. As an added benefit, because authorization and validity checks all happen at the edge for all purge types users get much faster feedback when their requests are malformed.

Next steps

While coreless cache purge has come a long way since the part 1 blog post, we’re not done. We continue to work on reducing end-to-end latency even more for purge by URL because we can do better. Alongside improvements to our new distribution system, we’ve also been working on the redesign of flexible purge to make it fully coreless, and we’re really excited to share the results we’re seeing soon. Flexible cache purge is an incredibly popular API and we’re giving its refresh the care and attention it deserves.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/rethinking-cache-purge-architecture/

Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation

14/05/2022

There is a famous quote attributed to a Netscape engineer: “There are only two difficult problems in computer science: cache invalidation and naming things.” While naming things does oddly take up an inordinate amount of time, cache invalidation shouldn’t.

In the past we’ve written about Cloudflare’s incredibly fast response times, whether content is cached on our global network or not. If content is cached, it can be served from a Cloudflare cache server, which are distributed across the globe and are generally a lot closer in physical proximity to the visitor. This saves the visitor’s request from needing to go all the way back to an origin server for a response. But what happens when a webmaster updates something on their origin and would like these caches to be updated as well? This is where cache “purging” (also known as “invalidation”) comes in.

Customers thinking about setting up a CDN and caching infrastructure consider questions like:

  • How do different caching invalidation/purge mechanisms compare?
  • How many times a day/hour/minute do I expect to purge content?
  • How quickly can the cache be purged when needed?

This blog will discuss why invalidating cached assets is hard, what Cloudflare has done to make it easy (because we care about your experience as a developer), and the engineering work we’re putting in this year to make the performance and scalability of our purge services the best in the industry.

What makes purging difficult also makes it useful

(i) Scale
The first thing that complicates cache invalidation is doing it at scale. With data centers in over 270 cities around the globe, our most popular users’ assets can be replicated at every corner of our network. This also means that a purge request needs to be distributed to all data centers where that content is cached. When a data center receives a purge request, it needs to locate the cached content to ensure that subsequent visitor requests for that content are not served stale/outdated data. Requests for the purged content should be forwarded to the origin for a fresh copy, which is then re-cached on its way back to the user.

This process repeats for every data center in Cloudflare's fleet. And due to Cloudflare's massive network, maintaining this consistency when certain data centers may be unreachable or offline is what makes purging at scale difficult.

Making sure that every data center gets the purge command and remains up-to-date with its content logs is only part of the problem. Getting the purge request to data centers quickly so that content is updated uniformly is the next reason why cache invalidation is hard.  

(ii) Speed
When purging an asset, race conditions abound. Requests for an asset can happen at any time, and may not follow a pattern of predictability. Content can also change unpredictably. Therefore, when content changes and a purge request is sent, it must be distributed across the globe quickly. If purging an individual asset, say an image, takes too long, some visitors will be served the new version, while others are served outdated content. This data inconsistency degrades user experience, and can lead to confusion as to which version is the “right” version. Websites can sometimes even break in their entirety due to this purge latency (e.g. by upgrading versions of a non-backwards compatible JavaScript library).

Purging at speed is also difficult when combined with Cloudflare’s massive global footprint. For example, if a purge request is traveling at the speed of light between Tokyo and Cape Town (both cities where Cloudflare has data centers), just the transit alone (no authorization of the purge request or execution) would take over 180ms on average based on submarine cable placement. Purging a smaller network footprint may reduce these speed concerns while making purge times appear faster, but does so at the expense of worse performance for customers who want to make sure that their cached content is fast for everyone.

(iii) Scope
The final thing that makes purge difficult is making sure that only the unneeded web assets are invalidated. Maintaining a cache is important for egress cost savings and response speed. Webmasters’ origins could be knocked over by a thundering herd of requests, if they choose to purge all content needlessly. It’s a delicate balance of purging just enough: too much can result in both monetary and downtime costs, and too little will result in visitors receiving outdated content.

At Cloudflare, what to invalidate in a data center is often dictated by the type of purge. Purge everything, as you could probably guess, purges all cached content associated with a website. Purge by prefix purges content based on a URL prefix. Purge by hostname can invalidate content based on a hostname. Purge by URL or single file purge focuses on purging specified URLs. Finally, Purge by tag purges assets that are marked with Cache-Tag headers. These markers offer webmasters flexibility in grouping assets together. When a purge request for a tag comes into a data center, all assets marked with that tag will be invalidated.

With that overview in mind, the remainder of this blog will focus on putting each element of invalidation together to benchmark the performance of Cloudflare’s purge pipeline and provide context for what performance means in the real-world. We’ll be reviewing how fast Cloudflare can invalidate cached content across the world. This will provide a baseline analysis for how quick our purge systems are presently, which we will use to show how much we will improve by the time we launch our new purge system later this year.

How does purge work currently?

In general, purge takes the following route through Cloudflare’s data centers.

  • A purge request is initiated via the API or UI. This request specifies how our data centers should identify the assets to be purged. This can be accomplished via cache-tag header(s), URL(s), entire hostnames, and much more (see the request sketch after this list).
  • The request is received by any Cloudflare data center and is identified to be a purge request. It is then routed to a Cloudflare core data center (a set of a few data centers responsible for network management activities).
  • When a core data center receives it, the request is processed by a number of internal services that (for example) make sure the request is being sent from an account with the appropriate authorization to purge the asset. Following this, the request gets fanned out globally to all Cloudflare data centers using our distribution service.
  • When received by a data center, the purge request is processed and all assets with the matching identification criteria are either located and removed, or marked as stale. These stale assets are not served in response to requests and are instead re-pulled from the origin.
  • After being pulled from the origin, the response is written to cache again, replacing the purged version.
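
For reference, the first step of this flow, as issued by a customer, looks roughly like the sketch below. The zone ID and token are placeholders, and Cloudflare's API documentation remains the authoritative reference for the request shape:

// Sketch: purge a single URL through the Cloudflare API (placeholders).
// Run in an ES module context so top-level await is available.
const zoneId = "YOUR_ZONE_ID";
const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`,
  {
    method: "POST",
    headers: {
      Authorization: "Bearer YOUR_API_TOKEN",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ files: ["https://example.com/styles.css"] }),
  },
);
console.log(await res.json()); // success flag plus any errors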

Now let’s look at this process in practice. Below we describe Cloudflare’s purge benchmarking that uses real-world performance data from our purge pipeline.

Benchmarking purge performance design

In order to understand how performant Cloudflare’s purge system is, we measured the time it took from sending the purge request to the moment that the purge is complete and the asset is no longer served from cache.  

In general, the process of measuring purge speeds involves: (i) ensuring that a particular piece of content is cached, (ii) sending the command to invalidate the cache, (iii) simultaneously checking our internal system logs for how the purge request is routed through our infrastructure, and (iv) measuring when the asset is removed from cache (first miss).

This process measures how quickly cache is invalidated from the perspective of an average user.
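
We use internal logs for precision, but you can approximate the "first miss" half of this from the outside by watching the cf-cache-status response header. A rough sketch:

// Rough external approximation: after issuing a purge, poll the asset
// until the edge stops answering from cache (cf-cache-status != HIT).
async function timeUntilPurged(assetUrl: string): Promise<number> {
  const start = Date.now();
  for (;;) {
    const res = await fetch(assetUrl);
    if (res.headers.get("cf-cache-status") !== "HIT") {
      return Date.now() - start; // first non-HIT response ends the clock
    }
    await new Promise((resolve) => setTimeout(resolve, 50)); // brief pause
  }
}

This only measures a single vantage point and includes your own network latency, which is exactly why the benchmarks below rely on internal logging instead.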

  • Clock starts
    As noted above, in this experiment we're using sampled RUM data from our purge systems. The goal of this experiment is to benchmark current data for how long it can take to purge an asset on Cloudflare across different regions. Once the asset was cached in a region on Cloudflare, we identify when a purge request is received for that asset. At that same instant, the clock started for this experiment. We include in this time any retries that we needed to make (due to data centers missing the initial purge request) to ensure that the purge was done consistently across our network. The clock continues as the request transits our purge pipeline (data center > core > fanout > purge from all data centers).
  • Clock stops
    What caused the clock to stop was the purged asset being removed from cache, meaning that the data center is no longer serving the asset from cache to visitors' requests. Our internal logging measures the precise moment that the cache content has been removed or expired, and from that data we were able to determine the following benchmarks for our purge types in various regions.

Results

We’ve divided our benchmarks in two ways: by purge type and by region.

We singled out Purge by URL because it identifies a single target asset to be purged. While that asset can be stored in multiple locations, the amount of data to be purged is strictly defined.

We’ve combined all other types of purge (everything, tag, prefix, hostname) together because the amount of data to be removed is highly variable. Purging a whole website or by assets identified with cache tags could mean we need to find and remove a multitude of content from many different data centers in our network.

Secondly, we segmented our benchmark measurements by region, and specifically confined them to particular data center servers in each region because we were concerned about clock skew between different data centers. Limiting the test to the same cache servers means that even if there was skew, they'd all be skewed in the same way.

We took the latency from the representative data centers in each of the following regions and the global latency. Data centers were not evenly distributed in each region, but in total represent about 90 different cities around the world:  

  • Africa
  • Asia Pacific Region (APAC)
  • Eastern Europe (EEUR)
  • Eastern North America (ENAM)
  • Oceania
  • South America (SA)
  • Western Europe (WEUR)
  • Western North America (WNAM)

The global latency numbers represent the purge data from all Cloudflare data centers in over 270 cities globally. In the results below, global latency numbers may be larger than the regional numbers because they represent all of our data centers instead of only a regional portion, so outliers and retries might have an outsized effect.

Below are the results for how quickly our current purge pipeline was able to invalidate content by purge type and region. All times are represented in seconds and divided into P50, P75, and P99 quantiles; "P50" means that 50% of the purges completed at the indicated latency or faster.

Purge By URL

Region    P50     P75     P99
AFRICA    0.95s   1.94s   6.42s
APAC      0.91s   1.87s   6.34s
EEUR      0.84s   1.66s   6.30s
ENAM      0.85s   1.71s   6.27s
OCEANIA   0.95s   1.96s   6.40s
SA        0.91s   1.86s   6.33s
WEUR      0.84s   1.68s   6.30s
WNAM      0.87s   1.74s   6.25s
GLOBAL    1.31s   1.80s   6.35s

Purge Everything, by Tag, by Prefix, by Hostname

Region    P50       P75     P99
AFRICA    1.42s     1.93s   4.24s
APAC      1.30s     2.00s   5.11s
EEUR      1.24s     1.77s   4.07s
ENAM      1.08s     1.62s   3.92s
OCEANIA   1.16s     1.70s   4.01s
SA        1.25s     1.79s   4.106s
WEUR      1.19s     1.73s   4.04s
WNAM      0.9995s   1.53s   3.83s
GLOBAL    1.57s     2.32s   5.97s

A general note about these benchmarks — the data represented here was taken from over 48 hours (two days) of RUM purge latency data in May 2022. If you are interested in how quickly your content can be invalidated on Cloudflare, we suggest you test our platform with your website.

Those numbers are good and much faster than most of our competitors. Even in the worst case, we see the time from when you tell us to purge an item to when it is removed globally is less than seven seconds. In most cases, it’s less than a second. That’s great for most applications, but we want to be even faster. Our goal is to get cache purge to as close as theoretically possible to the speed of light limit for a network our size, which is 200ms.

Intriguingly, LEO satellite networks may be able to provide even lower global latency than fiber optics because of the straightness of the paths between satellites that use laser links. We’ve done calculations of latency between LEO satellites that suggest that there are situations in which going to space will be the fastest path between two points on Earth. We’ll let you know if we end up using laser-space-purge.

Just as we have with network performance, we are going to relentlessly measure our cache performance as well as the cache performance of our competitors. We won’t be satisfied until we verifiably are the fastest everywhere. To do that, we’ve built a new cache purge architecture which we’re confident will make us the fastest cache purge in the industry.

Our new architecture

Through the end of 2022, we will continue this blog series, incrementally showing how we will become the fastest, most scalable purge system in the industry. We will continue to update you on how our purge system is developing and benchmark our data along the way.

Getting there will involve rearchitecting and optimizing our purge service, which hasn’t received a systematic redesign in over a decade. We’re excited to do our development in the open, and bring you along on our journey.

So what do we plan on updating?

Introducing Coreless Purge

The first version of our cache purge system was designed on top of a set of central core services including authorization, authentication, request distribution, and filtering among other features that made it a high-reliability service. These core components had ultimately become a bottleneck in terms of scale and performance as our network continues to expand globally. While most of our purge dependencies have been containerized, the message queue used was still running on bare metals, which led to increased operational overhead when our system needed to scale.

Last summer, we built a proof of concept for a completely decentralized cache invalidation system using in-house tech – Cloudflare Workers and Durable Objects. Using Durable Objects as a queuing mechanism gives us the flexibility to scale horizontally by adding more Durable Objects as needed and can reduce time to purge with quick regional fanouts of purge requests.

In the new purge system, we're ripping out the reliance on core data centers and moving all of that functionality to every data center; we're calling it coreless purge.

Here’s a general overview of how coreless purge will work:

  • A purge request will be initiated via the API or UI. This request will specify how we should identify the assets to be purged.
  • The request will be routed to the nearest Cloudflare data center where it is identified to be a purge request and be passed to a Worker that will perform several of the key functions that currently occur in the core (like authorization, filtering, etc).
  • From there, the Worker will pass the purge request to a Durable Object in the data center. The Durable Object will queue all the requests and broadcast them to every data center when they are ready to be processed.
  • When the Durable Object broadcasts the purge request to every data center, another Worker will pass the request to the service in the data center that will invalidate the content in cache (executes the purge).

We believe this re-architecture of our system built by stringing together multiple services from the Workers platform will help improve both the speed and scalability of the purge requests we will be able to handle.

Conclusion

We’re going to spend a lot of time building and optimizing purge because, if there’s one thing we learned here today, it’s that cache invalidation is a difficult problem but those are exactly the types of problems that get us out of bed in the morning.

If you want to help us optimize our purge pipeline, we’re hiring.


Source :
https://blog.cloudflare.com/part1-coreless-purge/

All the way up to 11: Serve Brotli from origin and Introducing Compression Rules

23/06/2023

Throughout Speed Week, we have talked about the importance of optimizing performance. Compression plays a crucial role by reducing file sizes transmitted over the Internet. Smaller file sizes lead to faster downloads, quicker website loading, and an improved user experience.

Take household cleaning products as a real-world example. It is estimated "a typical bottle of cleaner is 90% water and less than 10% actual valuable ingredients". Removing 90% of a typical 500ml bottle of household cleaner reduces the weight from 600g to 60g. This reduction means only a 60g parcel, with instructions to rehydrate on receipt, needs to be sent. Extrapolated across the gallons of product shipped, this weight reduction soon becomes a huge shipping saving for businesses, not to mention the environmental impact.

This is how compression works. The sender compresses the file to its smallest possible size, and then sends the smaller file with instructions on how to handle it when received. By reducing the size of the files sent, compression ensures the amount of bandwidth needed to send files over the Internet is a lot less. Where files are stored in expensive cloud providers like AWS, reducing the size of files sent can directly equate to significant cost savings on bandwidth.

Smaller file sizes are also particularly beneficial for end users with limited Internet connections, such as mobile devices on cellular networks or users in areas with slow network speeds.

Cloudflare has always supported compression in the form of Gzip. Gzip is a widely used compression algorithm that has been around since 1992 and provides file compression for all Cloudflare users. However, in 2013 Google introduced Brotli which supports higher compression levels and better performance overall. Switching from gzip to Brotli results in smaller file sizes and faster load times for web pages. We have supported Brotli since 2017 for the connection between Cloudflare and client browsers. Today we are announcing end-to-end Brotli support for web content: support for Brotli compression, at the highest possible levels, from the origin server to the client.

If your origin server supports Brotli, turn it on, crank up the compression level, and enjoy the performance boost.

Brotli compression to 11

Brotli has 12 levels of compression ranging from 0 to 11, with 0 providing the fastest compression speed but the lowest compression ratio, and 11 offering the highest compression ratio but requiring more computational resources and time. During our initial implementation of Brotli five years ago, we identified that compression level 4 offered the balance between bytes saved and compression time without compromising performance.

Since 2017, Cloudflare has been using a maximum compression of Brotli level 4 for all compressible assets based on the end user’s “accept-encoding” header. However, one issue was that Cloudflare only requested Gzip compression from the origin, even if the origin supported Brotli. Furthermore, Cloudflare would always decompress the content received from the origin before compressing and sending it to the end user, resulting in additional processing time. As a result, customers were unable to fully leverage the benefits offered by Brotli compression.

Old world

With Cloudflare now fully supporting Brotli end to end, customers will start seeing our updated accept-encoding header arriving at their origins. Once available, customers can transfer, cache, and serve heavily compressed Brotli files directly to us, all the way up to the maximum level of 11. This will help reduce latency and bandwidth consumption. If the end user device does not support Brotli compression, we will automatically decompress the file and serve it either in its decompressed format or as a Gzip-compressed file, depending on the Accept-Encoding header.

Full end-to-end Brotli compression support

End user cannot support Brotli compression
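
In header terms, the new behavior looks roughly like this illustrative exchange (paths are placeholders):

Cloudflare to origin (Brotli now advertised upstream):

  GET /app.css HTTP/1.1
  Accept-Encoding: br, gzip

Origin to Cloudflare, passed through unchanged to Brotli-capable browsers:

  HTTP/1.1 200 OK
  Content-Encoding: br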

Customers can implement Brotli compression at their origin by referring to the appropriate online materials. For example, customers using NGINX can implement Brotli by following this tutorial and setting compression at level 11 within the nginx.conf configuration file as follows:

brotli on;
brotli_comp_level 11;
brotli_static on;
brotli_types text/plain text/css application/javascript application/x-javascript text/xml 
application/xml application/xml+rss text/javascript image/x-icon 
image/vnd.microsoft.icon image/bmp image/svg+xml;

Cloudflare will then serve these assets to the client at the exact same compression level (11) for files matching the configured brotli_types. This means any SVG or BMP images will be sent to the client compressed at Brotli level 11.

Testing

We applied compression against a simple CSS file, measuring the impact of various compression algorithms and levels. Our goal was to identify potential improvements that users could experience by optimizing compression techniques. These results can be seen in the following table:

Test                                              Size (bytes)   % reduction of original file (higher is better)
Uncompressed response (no compression used)       2,747          n/a
Cloudflare default Gzip compression (level 8)     1,121          59.21%
Cloudflare default Brotli compression (level 4)   1,110          59.58%
Compressed with max Gzip level (level 9)          1,121          59.21%
Compressed with max Brotli level (level 11)       909            66.94%

By compressing with Brotli at level 11, users are able to reduce their file sizes by 19% compared to the best Gzip compression level. Additionally, the strongest Brotli compression level is around 18% smaller than the default level used by Cloudflare. This highlights a significant size reduction achieved by utilizing Brotli compression, particularly at its highest levels, which can lead to improved website performance, faster page load times, and an overall reduction in egress fees.
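
You can reproduce a comparison like this locally with Node's built-in zlib module; point it at any text asset you have on hand (exact byte counts will differ from the table above):

// Compare gzip and Brotli output sizes using Node's built-in zlib.
import { readFileSync } from "node:fs";
import { brotliCompressSync, constants, gzipSync } from "node:zlib";

const input = readFileSync("styles.css"); // any local text asset

const gzipMax = gzipSync(input, { level: 9 });
const brotliMax = brotliCompressSync(input, {
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
});

console.log(`original:  ${input.length} bytes`);
console.log(`gzip -9:   ${gzipMax.length} bytes`);
console.log(`brotli 11: ${brotliMax.length} bytes`);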

To take advantage of higher end-to-end compression rates, the following Cloudflare proxy features need to be disabled:

  • Email Obfuscation
  • Rocket Loader
  • Server Side Excludes (SSE)
  • Mirage
  • HTML Minification – JavaScript and CSS can be left enabled.
  • Automatic HTTPS Rewrites

This is due to Cloudflare needing to decompress and access the body to apply the requested settings. Alternatively a customer can disable these features for specific paths using Configuration Rules.

If any of these rewrite features are enabled, your origin can still send Brotli compression at higher levels. However, we will decompress, apply the Cloudflare feature(s) enabled, and recompress on the fly using Cloudflare’s default Brotli level 4 or Gzip level 8 depending on the user’s accept-encoding header.

For browsers that do not accept Brotli compression, we will continue to decompress and send responses either Gzipped or uncompressed.

Implementation

The initial step towards implementing Brotli from the origin involved constructing a decompression module that could be integrated into Cloudflare's software stack. It allows us to efficiently convert the compressed bits received from the origin into the original, uncompressed file. This step was crucial, as numerous features, such as Email Obfuscation, as well as Cloudflare Workers customers, rely on accessing the body of a response to apply customizations.

We integrated the decompressor into the core reverse web proxy of Cloudflare. This integration ensured that all Cloudflare products and features could access Brotli decompression effortlessly. It also allowed our Cloudflare Workers team to incorporate Brotli directly into Cloudflare Workers, letting Workers customers interact with responses returned in Brotli or pass them through to the end user unmodified.

Introducing Compression rules – Granular control of compression to end users

By default, Cloudflare compresses certain content types based on the Content-Type header of the file. Today we are also announcing Compression Rules for our Enterprise customers. With Compression Rules, you gain enhanced control over Cloudflare's compression capabilities, enabling you to customize how and which content Cloudflare compresses to optimize your website's performance.

For example, by using Cloudflare's Compression Rules for .ktx files, customers can optimize the delivery of textures in WebGL applications, enhancing the overall user experience. Enabling compression minimizes bandwidth usage and ensures that WebGL applications load quickly and smoothly, even when dealing with large and detailed textures.

Alternatively, customers can disable compression or specify a preference for how we compress. Another example could be an infrastructure company that only wants to support Gzip for its IoT devices but allows Brotli compression for all other hostnames.

Compression Rules use the same filters that our other rules products are built on, with the added fields of Media Type and Extension type, allowing you to easily specify the content you wish to compress.

Deprecating the Brotli toggle

Brotli has been supported by some web browsers since 2016, and Cloudflare added Brotli support in 2017. As with all new web technologies, Brotli was initially an unknown, so we gave customers the ability to selectively enable or disable Brotli via the API and our UI.

Now that Brotli has matured and is supported by all major browsers, we plan to enable Brotli on all zones by default in the coming months, mirroring the Gzip behavior we currently support, and to remove the toggle from our dashboard. If browsers do not support Brotli, Cloudflare will continue to serve their accepted encoding types, such as Gzip or uncompressed, and Enterprise customers will still be able to use Compression Rules to granularly control how we compress data towards their users.

The future of web compression

We’ve seen great adoption and great performance for Brotli as the new compression technique for the web. Looking forward, we are closely following trends and new compression algorithms such as zstd as a possible next-generation compression algorithm.

At the same time, we’re looking to improve Brotli directly where we can. One development that we’re particularly focused on is shared dictionaries with Brotli. Whenever you compress an asset, you use a “dictionary” that helps the compression to be more efficient. A simple analogy of this is typing OMW into an iPhone message. The iPhone will automatically translate it into On My Way using its own internal dictionary.

OMW → On My Way

This internal dictionary has turned three typed characters into nine characters (including spaces), saving the user six characters of typing, which equals performance benefits for users.

By default, the Brotli RFC defines a static dictionary that both clients and origin servers use. The static dictionary was designed to be general purpose and apply to everyone, keeping its size modest while still generating good compression results. However, what if an origin could generate a bespoke dictionary tailored to a specific website? For example, a Cloudflare-specific dictionary would allow us to compress the words and phrases that appear repeatedly on our site, such as the word "Cloudflare". The bespoke dictionary would be designed to compress these as heavily as possible, and the browser, using the same dictionary, would be able to translate them back.
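
Browsers don't expose custom Brotli dictionaries yet, but Node's zlib deflate supports the same shared-dictionary idea and makes the mechanics easy to see. A toy sketch, with a made-up dictionary:

// Toy demonstration of the shared-dictionary idea using zlib deflate:
// both sides must hold the same dictionary, as the proposal requires.
import { deflateSync, inflateSync } from "node:zlib";

const dictionary = Buffer.from("Cloudflare compresses content for Cloudflare users");
const text = Buffer.from("Cloudflare compresses content for Cloudflare users every day.");

const withDict = deflateSync(text, { dictionary });
const withoutDict = deflateSync(text);
console.log(`${withoutDict.length} bytes without a dictionary`);
console.log(`${withDict.length} bytes with a shared dictionary`);

// Decompression needs the identical dictionary to reconstruct the text.
const roundTrip = inflateSync(withDict, { dictionary });
console.log(roundTrip.equals(text)); // true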

A new proposal by the Web Incubator CG aims to make exactly this possible, allowing you to specify your own dictionaries that browsers can use, so websites can optimize compression further. We're excited about contributing to this proposal and plan on publishing our research soon.

Try it now

Compression Rules are available now, with end-to-end Brotli being rolled out over the coming weeks. This allows you to improve performance, reduce bandwidth, and granularly control how Cloudflare handles compression for your end users.



Source :
https://blog.cloudflare.com/this-is-brotli-from-origin/