Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation

14/05/2022

There is a famous quote attributed to a Netscape engineer: “There are only two difficult problems in computer science: cache invalidation and naming things.” While naming things does oddly take up an inordinate amount of time, cache invalidation shouldn’t.

In the past we’ve written about Cloudflare’s incredibly fast response times, whether content is cached on our global network or not. If content is cached, it can be served from Cloudflare cache servers, which are distributed across the globe and are generally much closer in physical proximity to the visitor. This saves the visitor’s request from needing to go all the way back to an origin server for a response. But what happens when a webmaster updates something on their origin and would like these caches to be updated as well? This is where cache “purging” (also known as “invalidation”) comes in.

Customers thinking about setting up a CDN and caching infrastructure consider questions like:

  • How do different caching invalidation/purge mechanisms compare?
  • How many times a day/hour/minute do I expect to purge content?
  • How quickly can the cache be purged when needed?

This blog will discuss why invalidating cached assets is hard, what Cloudflare has done to make it easy (because we care about your experience as a developer), and the engineering work we’re putting in this year to make the performance and scalability of our purge services the best in the industry.

What makes purging difficult also makes it useful

(i) Scale
The first thing that complicates cache invalidation is doing it at scale. With data centers in over 270 cities around the globe, our most popular users’ assets can be replicated at every corner of our network. This also means that a purge request needs to be distributed to all data centers where that content is cached. When a data center receives a purge request, it needs to locate the cached content to ensure that subsequent visitor requests for that content are not served stale/outdated data. Requests for the purged content should be forwarded to the origin for a fresh copy, which is then re-cached on its way back to the user.

This process repeats for every data center in Cloudflare’s fleet. And given Cloudflare’s massive network, maintaining this consistency when certain data centers are unreachable or offline is what makes purging at scale difficult.

Making sure that every data center gets the purge command and remains up-to-date with its content logs is only part of the problem. Getting the purge request to data centers quickly so that content is updated uniformly is the next reason why cache invalidation is hard.  

(ii) Speed
When purging an asset, race conditions abound. Requests for an asset can happen at any time, and may not follow a pattern of predictability. Content can also change unpredictably. Therefore, when content changes and a purge request is sent, it must be distributed across the globe quickly. If purging an individual asset, say an image, takes too long, some visitors will be served the new version, while others are served outdated content. This data inconsistency degrades user experience, and can lead to confusion as to which version is the “right” version. Websites can sometimes even break in their entirety due to this purge latency (e.g. by upgrading versions of a non-backwards compatible JavaScript library).

Purging at speed is also difficult when combined with Cloudflare’s massive global footprint. For example, if a purge request is traveling at the speed of light between Tokyo and Cape Town (both cities where Cloudflare has data centers), just the transit alone (no authorization of the purge request or execution) would take over 180ms on average based on submarine cable placement. Purging a smaller network footprint may reduce these speed concerns while making purge times appear faster, but does so at the expense of worse performance for customers who want to make sure that their cached content is fast for everyone.

(iii) Scope
The final thing that makes purge difficult is making sure that only the unneeded web assets are invalidated. Maintaining a cache is important for egress cost savings and response speed. Webmasters’ origins could be knocked over by a thundering herd of requests if they purge all content needlessly. It’s a delicate balance of purging just enough: too much can result in both monetary and downtime costs, and too little will result in visitors receiving outdated content.

At Cloudflare, what to invalidate in a data center is often dictated by the type of purge. Purge everything, as you could probably guess, purges all cached content associated with a website. Purge by prefix purges content based on a URL prefix. Purge by hostname can invalidate content based on a hostname. Purge by URL or single file purge focuses on purging specified URLs. Finally, Purge by tag purges assets that are marked with Cache-Tag headers. These markers offer webmasters flexibility in grouping assets together. When a purge request for a tag comes into a data center, all assets marked with that tag will be invalidated.
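
For concreteness, here is roughly what those purge types look like against the Cloudflare API’s zone purge_cache endpoint. This is a sketch, not production code: the zone ID and API token are placeholders, and tag, hostname, and prefix purges are Enterprise features.

// purge.js: a sketch of the main purge modes against the Cloudflare API.
// ZONE_ID and API_TOKEN are placeholders for your own credentials.
const ZONE_ID = '<zone-id>';
const API_TOKEN = '<api-token>';

async function purge(body) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/purge_cache`,
    {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${API_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    },
  );
  return res.json();
}

// Purge everything cached for the zone
await purge({ purge_everything: true });
// Purge by URL (single file purge)
await purge({ files: ['https://example.com/styles/main.css'] });
// Purge by tag, hostname, or prefix (Enterprise-only)
await purge({ tags: ['product-images'] });
await purge({ hosts: ['images.example.com'] });
await purge({ prefixes: ['example.com/blog/'] });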

With that overview in mind, the remainder of this blog will focus on putting each element of invalidation together to benchmark the performance of Cloudflare’s purge pipeline and provide context for what performance means in the real world. We’ll be reviewing how fast Cloudflare can invalidate cached content across the world. This will provide a baseline analysis for how quick our purge systems are presently, which we will use to show how much we will improve by the time we launch our new purge system later this year.

How does purge work currently?

In general, purge takes the following route through Cloudflare’s data centers.

  • A purge request is initiated via the API or UI. This request specifies how our data centers should identify the assets to be purged. This can be accomplished via cache-tag header(s), URL(s), entire hostnames, and much more.
  • The request is received by any Cloudflare data center and identified as a purge request. It is then routed to a Cloudflare core data center (a set of a few data centers responsible for network management activities).
  • When a core data center receives it, the request is processed by a number of internal services that (for example) make sure the request is being sent from an account with the appropriate authorization to purge the asset. Following this, the request gets fanned out globally to all Cloudflare data centers using our distribution service.
  • When received by a data center, the purge request is processed and all assets with the matching identification criteria are either located and removed, or marked as stale. These stale assets are not served in response to requests and are instead re-pulled from the origin.
  • After being pulled from the origin, the response is written to cache again, replacing the purged version.

Now let’s look at this process in practice. Below we describe Cloudflare’s purge benchmarking that uses real-world performance data from our purge pipeline.

Benchmarking purge performance design

In order to understand how performant Cloudflare’s purge system is, we measured the time it took from sending the purge request to the moment that the purge is complete and the asset is no longer served from cache.  

In general, the process of measuring purge speeds involves: (i) ensuring that a particular piece of content is cached, (ii) sending the command to invalidate the cache, (iii) simultaneously checking our internal system logs for how the purge request is routed through our infrastructure, and (iv) measuring when the asset is removed from cache (first miss).

This process measures how quickly cache is invalidated from the perspective of an average user.
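
For steps (i), (ii), and (iv), an external approximation of this measurement might look like the sketch below: it polls the asset’s CF-Cache-Status response header after purging. The asset URL is a placeholder, it reuses the hypothetical purge() helper sketched earlier, and it measures a single vantage point rather than our internal, network-wide logs.

// measure-purge.js: a rough, single-vantage-point purge latency check.
const ASSET_URL = 'https://example.com/images/logo.png'; // placeholder

async function cacheStatus() {
  const res = await fetch(ASSET_URL);
  return res.headers.get('CF-Cache-Status'); // e.g. HIT, MISS, EXPIRED
}

// (i) Prime the cache, then confirm the asset is served from cache
await fetch(ASSET_URL);
if ((await cacheStatus()) !== 'HIT') throw new Error('asset is not cached');

// (ii) Send the purge request and start the clock
const start = Date.now();
await purge({ files: [ASSET_URL] });

// (iv) Poll until the asset is no longer served from cache (first miss)
while ((await cacheStatus()) === 'HIT') {
  await new Promise((resolve) => setTimeout(resolve, 50));
}
console.log(`purge observed after ${Date.now() - start}ms`);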

  • Clock starts
    As noted above, this experiment uses sampled RUM data from our purge systems. The goal is to benchmark how long it can take to purge an asset on Cloudflare across different regions. Once the asset was cached in a region on Cloudflare, we identified when a purge request was received for that asset. At that same instant, the clock started for this experiment. We include in this time any retries we needed to make (due to data centers missing the initial purge request) to ensure that the purge was applied consistently across our network. The clock continues as the request transits our purge pipeline (data center > core > fanout > purge from all data centers).
  • Clock stops
    The clock stopped when the purged asset was removed from cache, meaning that the data center was no longer serving the asset from cache in response to visitors’ requests. Our internal logging measures the precise moment that the cached content has been removed or expired, and from that data we were able to determine the following benchmarks for our purge types in various regions.

Results

We’ve divided our benchmarks in two ways: by purge type and by region.

We singled out Purge by URL because it identifies a single target asset to be purged. While that asset can be stored in multiple locations, the amount of data to be purged is strictly defined.

We’ve combined all other types of purge (everything, tag, prefix, hostname) together because the amount of data to be removed is highly variable. Purging a whole website or by assets identified with cache tags could mean we need to find and remove a multitude of content from many different data centers in our network.

Second, we segmented our benchmark measurements by region, and we confined the benchmarks to specific data center servers in each region because we were concerned about clock skew between different data centers. Limiting the test to the same cache servers means that even if there was skew, they’d all be skewed in the same way.

We took the latency from representative data centers in each of the following regions, as well as the global latency. Data centers were not evenly distributed across regions, but in total they represent about 90 different cities around the world:

  • Africa
  • Asia Pacific Region (APAC)
  • Eastern Europe (EEUR)
  • Eastern North America (ENAM)
  • Oceania
  • South America (SA)
  • Western Europe (WEUR)
  • Western North America (WNAM)

The global latency numbers represent the purge data from all Cloudflare data centers in over 270 cities globally. In the results below, the global latency numbers may be larger than the regional numbers because they represent all of our data centers instead of only a regional portion, so outliers and retries can have an outsized effect.

Below are the results for how quickly our current purge pipeline was able to invalidate content by purge type and region. All times are represented in seconds and divided into P50, P75, and P99 quantiles: “P50” means that 50% of the purges completed at the indicated latency or faster.
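
(As a quick illustration of how such quantiles are computed from raw samples, here is a minimal nearest-rank sketch in JavaScript; the sample values are made up, and this is not our internal tooling.)

// percentile.js: nearest-rank quantile over raw latency samples (seconds)
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  // smallest value such that at least p% of the samples are <= it
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latencies = [0.61, 0.84, 0.92, 1.10, 1.45, 2.03, 6.31];
console.log(percentile(latencies, 50)); // 1.1  (P50)
console.log(percentile(latencies, 75)); // 2.03 (P75)
console.log(percentile(latencies, 99)); // 6.31 (P99)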

Purge By URL

Region  | P50   | P75   | P99
------- | ----- | ----- | -----
AFRICA  | 0.95s | 1.94s | 6.42s
APAC    | 0.91s | 1.87s | 6.34s
EEUR    | 0.84s | 1.66s | 6.30s
ENAM    | 0.85s | 1.71s | 6.27s
OCEANIA | 0.95s | 1.96s | 6.40s
SA      | 0.91s | 1.86s | 6.33s
WEUR    | 0.84s | 1.68s | 6.30s
WNAM    | 0.87s | 1.74s | 6.25s
GLOBAL  | 1.31s | 1.80s | 6.35s

Purge Everything, by Tag, by Prefix, by Hostname

Region  | P50     | P75   | P99
------- | ------- | ----- | ------
AFRICA  | 1.42s   | 1.93s | 4.24s
APAC    | 1.30s   | 2.00s | 5.11s
EEUR    | 1.24s   | 1.77s | 4.07s
ENAM    | 1.08s   | 1.62s | 3.92s
OCEANIA | 1.16s   | 1.70s | 4.01s
SA      | 1.25s   | 1.79s | 4.106s
WEUR    | 1.19s   | 1.73s | 4.04s
WNAM    | 0.9995s | 1.53s | 3.83s
GLOBAL  | 1.57s   | 2.32s | 5.97s

A general note about these benchmarks — the data represented here was taken from over 48 hours (two days) of RUM purge latency data in May 2022. If you are interested in how quickly your content can be invalidated on Cloudflare, we suggest you test our platform with your website.

Those numbers are good, and much faster than most of our competitors. Even in the worst case, the time from when you tell us to purge an item to when it is removed globally is less than seven seconds. In most cases, it’s less than a second. That’s great for most applications, but we want to be even faster. Our goal is to get cache purge as close as theoretically possible to the speed-of-light limit for a network our size, which is 200ms.

Intriguingly, LEO satellite networks may be able to provide even lower global latency than fiber optics because of the straightness of the paths between satellites that use laser links. We’ve done calculations of latency between LEO satellites that suggest that there are situations in which going to space will be the fastest path between two points on Earth. We’ll let you know if we end up using laser-space-purge.

Just as we have with network performance, we are going to relentlessly measure our cache performance as well as the cache performance of our competitors. We won’t be satisfied until we verifiably are the fastest everywhere. To do that, we’ve built a new cache purge architecture which we’re confident will make us the fastest cache purge in the industry.

Our new architecture

Through the end of 2022, we will continue this blog series, incrementally showing how we will become the fastest, most scalable purge system in the industry. We will continue to update you on how our purge system is developing and benchmark our data along the way.

Getting there will involve rearchitecting and optimizing our purge service, which hasn’t received a systematic redesign in over a decade. We’re excited to do our development in the open, and bring you along on our journey.

So what do we plan on updating?

Introducing Coreless Purge

The first version of our cache purge system was built on top of a set of central core services, including authorization, authentication, request distribution, and filtering, among other features that made it a high-reliability service. These core components ultimately became a bottleneck in terms of scale and performance as our network continued to expand globally. While most of our purge dependencies have been containerized, the message queue used was still running on bare metal, which added operational overhead when our system needed to scale.

Last summer, we built a proof of concept for a completely decentralized cache invalidation system using in-house tech – Cloudflare Workers and Durable Objects. Using Durable Objects as a queuing mechanism gives us the flexibility to scale horizontally by adding more Durable Objects as needed and can reduce time to purge with quick regional fanouts of purge requests.

In the new purge system, we’re ripping out the reliance on core data centers and moving all of that functionality to every data center. We’re calling it Coreless Purge.

Here’s a general overview of how coreless purge will work:

  • A purge request will be initiated via the API or UI. This request will specify how we should identify the assets to be purged.
  • The request will be routed to the nearest Cloudflare data center, where it is identified as a purge request and passed to a Worker that will perform several of the key functions that currently occur in the core (like authorization and filtering).
  • From there, the Worker will pass the purge request to a Durable Object in the data center. The Durable Object will queue all the requests and broadcast them to every data center when they are ready to be processed.
  • When the Durable Object broadcasts the purge request to every data center, another Worker will pass the request to the service in the data center that will invalidate the content in cache (executes the purge).

We believe this re-architecture, built by stringing together multiple services from the Workers platform, will improve both the speed and the scalability of the purge requests we can handle.
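
To make that flow concrete, here is a highly simplified sketch of the Worker-to-Durable-Object relay described above. It is illustrative only, not our production code: the PURGE_QUEUE binding, the PURGE_TOKEN secret, and the broadcast step are all assumptions.

// coreless-purge-sketch.js: illustrative only, not Cloudflare's production code.
// Assumes a Durable Object binding named PURGE_QUEUE and a PURGE_TOKEN secret.
export default {
  async fetch(request, env) {
    // The Worker takes over checks that used to run in the core
    // (authorization, filtering, validation) before queuing the purge.
    if (request.headers.get('Authorization') !== `Bearer ${env.PURGE_TOKEN}`) {
      return new Response('unauthorized', { status: 401 });
    }

    // Hand the request to a Durable Object acting as the local queue.
    const id = env.PURGE_QUEUE.idFromName('local-queue');
    return env.PURGE_QUEUE.get(id).fetch(request);
  },
};

export class PurgeQueue {
  constructor(state, env) {
    this.state = state;
  }

  async fetch(request) {
    const purgeRequest = await request.json();

    // Queue the request durably so it survives restarts...
    await this.state.storage.put(`purge:${Date.now()}`, purgeRequest);

    // ...then broadcast it to every data center. The fan-out itself is
    // hand-waved here; in practice it is a broadcast to peer services
    // that execute the purge locally.
    await broadcastToAllDataCenters(purgeRequest);
    return new Response('queued', { status: 202 });
  }
}

// Placeholder for the global fan-out step described in the post.
async function broadcastToAllDataCenters(purgeRequest) { /* illustrative no-op */ }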

Conclusion

We’re going to spend a lot of time building and optimizing purge because, if there’s one thing we learned here today, it’s that cache invalidation is a difficult problem, and those are exactly the types of problems that get us out of bed in the morning.

If you want to help us optimize our purge pipeline, we’re hiring.

Source:
https://blog.cloudflare.com/part1-coreless-purge/

All the way up to 11: Serve Brotli from origin and Introducing Compression Rules

23/06/2023

Throughout Speed Week, we have talked about the importance of optimizing performance. Compression plays a crucial role by reducing file sizes transmitted over the Internet. Smaller file sizes lead to faster downloads, quicker website loading, and an improved user experience.

Take household cleaning products as a real world example. It is estimated that “a typical bottle of cleaner is 90% water and less than 10% actual valuable ingredients”. Removing 90% of a typical 500ml bottle of household cleaner reduces the weight from 600g to 60g. This reduction means only a 60g parcel, with instructions to rehydrate on receipt, needs to be sent. Extrapolated across the gallons of product shipped every day, this weight reduction soon becomes a huge shipping saving for businesses, not to mention the environmental impact.

This is how compression works. The sender compresses the file to its smallest possible size, and then sends the smaller file with instructions on how to handle it when received. By reducing the size of the files sent, compression ensures the amount of bandwidth needed to send files over the Internet is a lot less. Where files are stored in expensive cloud providers like AWS, reducing the size of files sent can directly equate to significant cost savings on bandwidth.

Smaller file sizes are also particularly beneficial for end users with limited Internet connections, such as mobile devices on cellular networks or users in areas with slow network speeds.

Cloudflare has always supported compression in the form of Gzip. Gzip is a widely used compression algorithm that has been around since 1992 and provides file compression for all Cloudflare users. However, in 2013 Google introduced Brotli which supports higher compression levels and better performance overall. Switching from gzip to Brotli results in smaller file sizes and faster load times for web pages. We have supported Brotli since 2017 for the connection between Cloudflare and client browsers. Today we are announcing end-to-end Brotli support for web content: support for Brotli compression, at the highest possible levels, from the origin server to the client.

If your origin server supports Brotli, turn it on, crank up the compression level, and enjoy the performance boost.

Brotli compression to 11

Brotli has 12 levels of compression, ranging from 0 to 11, with 0 providing the fastest compression speed but the lowest compression ratio, and 11 offering the highest compression ratio but requiring more computational resources and time. During our initial implementation of Brotli five years ago, we identified that compression level 4 offered the best balance between bytes saved and compression time without compromising performance.

Since 2017, Cloudflare has been using a maximum compression of Brotli level 4 for all compressible assets based on the end user’s “accept-encoding” header. However, one issue was that Cloudflare only requested Gzip compression from the origin, even if the origin supported Brotli. Furthermore, Cloudflare would always decompress the content received from the origin before compressing and sending it to the end user, resulting in additional processing time. As a result, customers were unable to fully leverage the benefits offered by Brotli compression.

Old world

With Cloudflare now fully supporting Brotli end to end, customers will start seeing our updated Accept-Encoding header arriving at their origins. Once available, customers can transfer, cache, and serve heavily compressed Brotli files directly to us, all the way up to the maximum level of 11. This will help reduce latency and bandwidth consumption. If the end user device does not support Brotli compression, we will automatically decompress the file and serve it either in its decompressed format or as a Gzip-compressed file, depending on the Accept-Encoding header.

Full end-to-end Brotli compression support

End user cannot support Brotli compression
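
As a rough illustration of what this negotiation looks like from the origin’s side, here is a minimal Node.js sketch that serves a pre-compressed Brotli asset whenever the incoming Accept-Encoding header (for example, from Cloudflare) advertises br. The file names are made up, and a real origin (or the NGINX setup below) would handle this automatically.

// origin-negotiation.js: minimal sketch of Brotli content negotiation at an origin.
const http = require('node:http');
const fs = require('node:fs');

http.createServer((req, res) => {
  const accepts = req.headers['accept-encoding'] || '';
  if (/\bbr\b/.test(accepts)) {
    // The client understands Brotli: serve the asset pre-compressed
    // at level 11 (e.g. generated once at build time).
    res.writeHead(200, {
      'Content-Type': 'text/css',
      'Content-Encoding': 'br',
      'Vary': 'Accept-Encoding',
    });
    fs.createReadStream('style.css.br').pipe(res);
  } else {
    // Fall back to the uncompressed asset.
    res.writeHead(200, { 'Content-Type': 'text/css', 'Vary': 'Accept-Encoding' });
    fs.createReadStream('style.css').pipe(res);
  }
}).listen(8080);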

Customers can implement Brotli compression at their origin by referring to the appropriate online materials. For example, customers using NGINX can implement Brotli by following this tutorial and setting compression to level 11 within the nginx.conf configuration file as follows:

# Enable Brotli compression (requires the ngx_brotli module)
brotli on;
# Compression level 0-11; 11 gives the highest ratio but is the slowest
brotli_comp_level 11;
# Serve pre-compressed .br files from disk when they exist
brotli_static on;
# Additional MIME types to compress
brotli_types text/plain text/css application/javascript application/x-javascript text/xml
    application/xml application/xml+rss text/javascript image/x-icon
    image/vnd.microsoft.icon image/bmp image/svg+xml;

Cloudflare will then serve these assets to the client at the exact same compression level (11) for the file types matching brotli_types. This means any SVG or BMP images will be sent to the client compressed at Brotli level 11.

Testing

We applied compression against a simple CSS file, measuring the impact of various compression algorithms and levels. Our goal was to identify potential improvements that users could experience by optimizing compression techniques. These results can be seen in the following table:

Test                                            | Size (bytes) | % reduction of original file (higher is better)
----------------------------------------------- | ------------ | -----------------------------------------------
Uncompressed response (no compression used)     | 2,747        | n/a
Cloudflare default Gzip compression (level 8)   | 1,121        | 59.21%
Cloudflare default Brotli compression (level 4) | 1,110        | 59.58%
Compressed with max Gzip level (level 9)        | 1,121        | 59.21%
Compressed with max Brotli level (level 11)     | 909          | 66.94%

By compressing with Brotli at level 11, users can reduce their file sizes by 19% compared to the best Gzip compression level. Additionally, the strongest Brotli compression level produces files around 18% smaller than the default level used by Cloudflare. This highlights a significant size reduction achieved by utilizing Brotli compression, particularly at its highest levels, which can lead to improved website performance, faster page load times, and an overall reduction in egress fees.
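
You can reproduce a comparison like this locally with Node.js’s built-in zlib module. A sketch (the input file name is a placeholder, and exact byte counts will vary with the asset):

// compare-compression.js: rough local reproduction of the test above.
const zlib = require('node:zlib');
const fs = require('node:fs');

const input = fs.readFileSync('style.css'); // any compressible text asset

const results = {
  'gzip level 9': zlib.gzipSync(input, { level: 9 }),
  'brotli level 4': zlib.brotliCompressSync(input, {
    params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 4 },
  }),
  'brotli level 11': zlib.brotliCompressSync(input, {
    params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
  }),
};

console.log(`original: ${input.length} bytes`);
for (const [name, buf] of Object.entries(results)) {
  const saved = (100 * (1 - buf.length / input.length)).toFixed(2);
  console.log(`${name}: ${buf.length} bytes (${saved}% reduction)`);
}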

To take advantage of higher end-to-end compression rates, the following Cloudflare proxy features need to be disabled:

  • Email Obfuscation
  • Rocket Loader
  • Server Side Excludes (SSE)
  • Mirage
  • HTML Minification – JavaScript and CSS can be left enabled.
  • Automatic HTTPS Rewrites

This is due to Cloudflare needing to decompress and access the body to apply the requested settings. Alternatively a customer can disable these features for specific paths using Configuration Rules.

If any of these rewrite features are enabled, your origin can still send Brotli compression at higher levels. However, we will decompress, apply the Cloudflare feature(s) enabled, and recompress on the fly using Cloudflare’s default Brotli level 4 or Gzip level 8 depending on the user’s accept-encoding header.

For browsers that do not accept Brotli compression, we will continue to decompress and send either Gzip-compressed or uncompressed responses.

Implementation

The initial step towards implementing Brotli from the origin involved constructing a decompression module that could be integrated into Cloudflare’s software stack. It allows us to efficiently convert the compressed bits received from the origin into the original, uncompressed file. This step was crucial, as numerous features, such as Email Obfuscation and Cloudflare Workers, rely on accessing the body of a response to apply customizations.

We integrated the decompressor into Cloudflare’s core reverse web proxy. This integration ensured that all Cloudflare products and features could access Brotli decompression effortlessly. It also allowed our Cloudflare Workers team to incorporate Brotli directly into Cloudflare Workers, so Workers customers can interact with responses returned in Brotli or pass them through to the end user unmodified.

Introducing Compression Rules – Granular control of compression for end users

By default, Cloudflare compresses certain content types based on the Content-Type header of the file. Today we are also announcing Compression Rules for our Enterprise customers. With Compression Rules, you gain enhanced control over Cloudflare’s compression capabilities, enabling you to customize how and which content Cloudflare compresses to optimize your website’s performance.

For example, by using Cloudflare’s Compression Rules for .ktx files, customers can optimize the delivery of textures in WebGL applications, enhancing the overall user experience. Enabling compression minimizes bandwidth usage and ensures that WebGL applications load quickly and smoothly, even when dealing with large and detailed textures.

Alternatively, customers can disable compression or specify a preference for how we compress. For example, an infrastructure company might want to support only Gzip for its IoT devices but allow Brotli compression for all other hostnames.

Compression Rules use the same filters that our other rules products are built on, with the added fields of Media Type and Extension, allowing you to easily specify which content you wish to compress.

Deprecating the Brotli toggle

Brotli has been supported by some web browsers since 2016, and Cloudflare first offered Brotli support in 2017. As with all new web technologies, Brotli was relatively unknown at the time, so we gave customers the ability to selectively enable or disable Brotli via the API and our UI.

Now that Brotli has matured and is supported by all major browsers, we plan to enable Brotli on all zones by default in the coming months, mirroring the Gzip behavior we currently support and removing the toggle from our dashboard. If a browser does not support Brotli, Cloudflare will continue to serve its accepted encoding types, such as Gzip or uncompressed responses, and Enterprise customers will still be able to use Compression Rules to granularly control how we compress data for their users.

The future of web compression

We’ve seen great adoption and great performance for Brotli as the new compression technique for the web. Looking forward, we are closely following trends and new compression algorithms such as zstd as a possible next-generation compression algorithm.

At the same time, we’re looking to improve Brotli directly where we can. One development that we’re particularly focused on is shared dictionaries with Brotli. Whenever you compress an asset, you use a “dictionary” that helps the compression to be more efficient. A simple analogy of this is typing OMW into an iPhone message. The iPhone will automatically translate it into On My Way using its own internal dictionary.

OMW → On My Way

This internal dictionary has taken three characters and expanded them into nine characters (including spaces). The internal dictionary has saved six characters of typing, which equals performance benefits for users.

By default, the Brotli RFC defines a static dictionary that both clients and origin servers use. The static dictionary was designed to be general purpose and apply to everyone, balancing the size of the dictionary against the compression results it can generate. However, what if an origin could generate a bespoke dictionary tailored to a specific website? For example, a Cloudflare-specific dictionary would allow us to compress the words and phrases that appear repeatedly on our site, such as the word “Cloudflare”. The bespoke dictionary would be designed to compress this as heavily as possible, and a browser using the same dictionary would be able to translate it back.

A new proposal by the Web Incubator CG aims to do just that, allowing you to specify your own dictionaries that browsers can use, enabling websites to optimize compression further. We’re excited about contributing to this proposal and plan on publishing our research soon.

Try it now

Compression Rules are available now, with end-to-end Brotli being rolled out over the coming weeks, allowing you to improve performance, reduce bandwidth, and granularly control how Cloudflare handles compression for your end users.

Source:
https://blog.cloudflare.com/this-is-brotli-from-origin/

Speeding up your (WordPress) website is a few clicks away

22/06/2023

Every day, website visitors spend far too much time waiting for websites to load in their browsers. This waiting is partially due to browsers not knowing which resources are critically important so they can prioritize them ahead of less-critical resources. In this blog we will outline how millions of websites across the Internet can improve their performance by specifying which critical content loads first with Cloudflare Workers and what Cloudflare will do to make this easier by default in the future.

Popular Content Management Systems (CMS) like WordPress have made attempts to influence website resource priority, for example through techniques like lazy loading images. When done correctly, the results are magical. Performance is optimized between the CMS and browser without needing to implement any changes or coding new prioritization strategies. However, we’ve seen that these default priorities have opportunities to improve greatly.

In this blog, co-authored with Google’s Patrick Meenan, we will explain where the opportunities exist to improve website performance, how to check if a specific site can improve performance, and provide a small JavaScript snippet which can be used with Cloudflare Workers to do this optimization for you.

What happens when a browser receives the response?

Before we dive into where the opportunities are to improve website performance, let’s take a step back to understand how browsers load website assets by default.

After the browser sends an HTTP request to a server, it receives an HTTP response containing information like status codes, headers, and the requested content. The browser carefully analyzes the response’s status code and response headers to ensure proper handling of the content.

Next, the browser processes the content itself. For HTML responses, the browser extracts important information from the <head> section of the HTML, such as the page title, stylesheets, and scripts. Once this information is parsed, the browser moves on to the response <body> which has the actual page content. During this stage, the browser begins to present the webpage to the visitor.

If the response includes additional resources like CSS, JavaScript, or other third-party content, the browser may need to fetch and integrate them into the webpage. Typically, browsers like Google Chrome delay loading images until after the resources in the HTML <head> have loaded; this is also known as “blocking” the render of the webpage. However, developers can override this blocking behavior using fetch priority or other methods to boost other content’s priority in the browser. By adjusting an important image’s fetch priority, it can be loaded earlier, which can lead to significant improvements in crucial performance metrics like LCP (Largest Contentful Paint).
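
In markup, that override is a single attribute. A minimal illustration (the file names are made up):

<!-- Hint the browser to fetch the likely LCP image early -->
<img src="hero.jpg" fetchpriority="high" width="1200" height="600" alt="Hero">
<!-- Defer a below-the-fold image instead -->
<img src="footer-banner.jpg" loading="lazy" width="1200" height="200" alt="Banner">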

Images are so central to web pages that they have become an essential element in measuring website performance from Core Web Vitals. LCP measures the time it takes for the largest visible element, often an image, to be fully rendered on the screen. Optimizing the loading of critical images (like LCP images) can greatly enhance performance, improving the overall user experience and page performance.

But here’s the challenge – a browser may not know which images are the most important for the visitor experience (like the LCP image) until rendering begins. If the developer can identify the LCP image or critical elements before they reach the browser, their priority can be increased at the server to boost website performance instead of waiting for the browser to naturally discover the critical images.

In our Smart Hints blog, we describe how Cloudflare will soon be able to automatically prioritize content on behalf of website developers, but what happens if there’s a need to optimize the priority of the images right now? How do you know if a website is in a suboptimal state and what can you do to improve?

Using Cloudflare, developers should be able to improve image performance with heuristics that identify likely-important images before the browser parses them so these images can have increased priority and be loaded sooner.

Identifying Image Priority opportunities

Just increasing the fetch priority of all images won’t help if they are lazy-loaded or not critical/LCP images. Lazy-loading is a method that developers use to improve the initial load of a webpage that includes numerous out-of-view elements. For example, on Instagram, when you continually scroll down the application to see more images, it only makes sense to load those images when the user arrives at them; otherwise, the page load would be needlessly delayed by the browser eagerly loading these out-of-view images. Instead, the highest priority should be given to the LCP image in the viewport to improve performance.

So developers are left in a situation where they need to know which images are on users’ screens/viewports to increase their priority and which are off their screens to lazy-load them.

Recently, we’ve seen attempts to influence image priority on behalf of developers. For example, by default in WordPress 5.5, all images with an IMG tag and defined aspect ratios were directed to be lazy-loaded. While there are plugins and other methods WordPress developers can use to boost the priority of LCP images, lazy-loading all images in a default manner without knowing which are LCP images can cause artificial delays in website performance (they’re working on this though, and have partially resolved this for block themes).

So how do we identify the LCP image and other critical assets before they get to the browser?

To evaluate the opportunity to improve image performance, we turned to the HTTP Archive. Out of the approximately 22 million desktop pages tested in February 2023, 46% had an LCP element with an IMG tag, meaning that for page load metrics, LCP involved an image about half the time. Among these desktop pages, 8.5 million had the image in the static HTML delivered with the page, indicating a total potential improvement opportunity of approximately 39% of the desktop pages within the dataset.

In the case of mobile pages, out of the ~28.5 million tested, 40% had an LCP element as an IMG tag. Among these mobile pages, 10.3 million had the image in the static HTML delivered with the page, suggesting a potential improvement opportunity in around 36% of the mobile pages within the dataset.

However, as previously discussed, prioritizing an image won’t be effective if the image is lazy-loaded, because the directives are contradictory. In the dataset, approximately 1.8 million LCP desktop images and 2.4 million LCP mobile images were lazy-loaded.

Therefore, across the Internet, the opportunity to improve image performance covers about 30% of pages that have an LCP image in the original HTML markup that wasn’t lazy-loaded, and with a more advanced Cloudflare Worker, the additional 9% of lazy-loaded LCP images can also be improved by removing the lazy-load attribute.

If you’d like to determine which element on your website serves as the LCP element so you can increase its priority or remove any lazy-loading, you can use browser developer tools, or speed tests like WebPageTest or Cloudflare Observatory.
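
You can also check programmatically: browsers expose LCP candidates through the PerformanceObserver API, and a quick snippet pasted into the devtools console will log them:

// Log each LCP candidate as the page renders; the last entry reported
// (before any user input) corresponds to the final LCP element.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate:', Math.round(entry.startTime), entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });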

39% of desktop images seems like a lot of opportunity to improve image performance. So the next question is how can Cloudflare determine the LCP image across our network and automatically prioritize them?

Image Index

We thought that how soon the LCP image showed up in the HTML would serve as a useful indicator. So we analyzed the HTTP Archive dataset to see where the cumulative percentage of LCP images are discovered based on their position in the HTML, including lazy-loaded images.

We found that approximately 25% of the pages had the LCP image as the first image in the HTML (around 10% of all pages). Another 25% had the LCP image as the second image. WordPress seemed to arrive at a similar conclusion and recently released a development to remove the default lazy-load attribute from the first image on block themes, but there are opportunities to go further.

Our analysis revealed that implementing a straightforward rule like “do not lazy-load the first four images,” either through the browser, a content management system (CMS), or a Cloudflare Worker could address approximately 75% of the issue of lazy-loading LCP images (example Worker below).

Ignoring small images

In trying to find other ways to identify likely LCP images we next turned to the size of the image. To increase the likelihood of getting the LCP image early in the HTML, we looked into ignoring “small” images as they are unlikely to be big enough to be a LCP element. We explored several sizes and 10,000 pixels (less than 100×100) was a pretty reliable threshold that didn’t skip many LCP images and avoided a good chunk of the non-LCP images.

By ignoring small images (<10,000px), we found that the first image became the LCP image in approximately 30-34% of cases. Adding the second image increased this percentage to 56-60% of pages.

Therefore, to improve image priority, a potential approach could involve assigning a higher priority to the first four “not-small” images.

Chrome 114 Image Prioritization Experiment

An experiment running in Chrome 114 does exactly what we described above. Within the browser there are a few different prioritization knobs to play with that aren’t web-exposed, so we have the opportunity to assign a “medium” priority to images that we want to boost automatically (directly controlling priority with fetchpriority lets you set high or low). This will let us move the images ahead of other images, async scripts, and parser-blocking scripts late in the body, but still keep the boosted image priority below any high-priority requests, particularly dynamically-injected blocking scripts.

We are experimenting with boosting the priority of varying numbers of images (2, 5, and 10) and with allowing one of those medium-priority images to load at a time during Chrome’s “tight” mode (when it is loading the render-blocking resources in the head) to increase the likelihood that the LCP image will be available when the first paint is done.

The data is still coming in and no “ship” decisions have been made yet, but the early results are very promising, improving the LCP time across the entire web for all arms of the experiment (not by massive amounts, but moving the metrics of the whole web is notoriously difficult).

How to use Cloudflare Workers to boost performance

Now that we’ve seen that there is a large opportunity across the Internet for helping prioritize images for performance and how to identify images on individual pages that are likely LCP images, the question becomes, what would the results be of implementing a network-wide rule that could boost image priority from this study?

We built a test worker and deployed it on some WordPress test sites with our friends at Rocket.net, a WordPress hosting platform focused on performance. This worker boosts the priority of the first four images while removing the lazy-load attribute, if present. When deployed we saw good performance results and the expected image prioritization.

export default {
  async fetch(request) {
    const response = await fetch(request);

    // Only rewrite HTML responses; pass everything else through untouched
    const contentType = response.headers.get('Content-Type');
    if (!contentType || !contentType.includes('text/html')) {
      return response;
    }

    return transformResponse(response);
  },
};

function transformResponse(response) {
  // Create an HTMLRewriter instance and define the image transformation logic
  const rewriter = new HTMLRewriter()
    .on('img', new ImageElementHandler());

  // HTMLRewriter rewrites the body as it streams through, so the
  // transformed response is returned without buffering the full HTML
  return rewriter.transform(response);
}
 
class ImageElementHandler {
  constructor() {
    this.imageCount = 0;
    this.processedImages = new Set();
  }
 
  element(element) {
    const imgSrc = element.getAttribute('src');
 
    // Check if the image is small based on Chrome's criteria
    if (imgSrc && this.imageCount < 4 && !this.processedImages.has(imgSrc) && !isImageSmall(element)) {
      element.removeAttribute('loading');
      element.setAttribute('fetchpriority', 'high');
      this.processedImages.add(imgSrc);
      this.imageCount++;
    }
  }
}
 
function isImageSmall(element) {
  // Check if the element has width and height attributes
  const width = element.getAttribute('width');
  const height = element.getAttribute('height');
 
  // If width or height is 0, or width * height < 10000, consider the image as small
  if ((width && parseInt(width, 10) === 0) || (height && parseInt(height, 10) === 0)) {
    return true;
  }
 
  if (width && height) {
    const area = parseInt(width, 10) * parseInt(height, 10);
    if (area < 10000) {
      return true;
    }
  }
 
  return false;
}

When testing the Worker, we saw that the default image priority was boosted to “high” for the first four images, while the fifth image remained at “low.” This resulted in an LCP in the “good” range in a speed test. While this initial test is not a dispositive indicator that the Worker will boost performance in every situation, the results are promising, and we look forward to continuing to experiment with this idea.

While we’ve experimented with WordPress sites to illustrate the issues and potential performance benefits, this issue is present across the Internet.

Website owners can help us experiment with the Worker above to improve the priority of images on their websites or edit it to be more specific by targeting likely LCP elements. Cloudflare will continue experimenting using a very similar process to understand how to safely implement a network-wide rule to ensure that images are correctly prioritized across the Internet and performance is boosted without the need to configure a specific Worker.

Automatic Platform Optimization

Cloudflare’s Automatic Platform Optimization (APO) is a plugin for WordPress which allows Cloudflare to deliver your entire WordPress site from our network ensuring consistent, fast performance for visitors. By serving cached sites, APO can improve performance metrics. APO does not currently have a way to prioritize images over other assets to improve browser render metrics or dynamically rewrite HTML, techniques we’ve discussed in this post. Although this presents a potential opportunity for future development, it requires thorough testing to ensure safe and reliable support.

In the future we’ll look to include the techniques discussed today as part of APO; in the meantime, we recommend using Snippets (and Experiments) to test the code example above and see the performance impact on your website.

Get in touch!

If you are interested in using the JavaScript above, we recommend testing with Workers or using Cloudflare Snippets. We’d love to hear what your results were. Get in touch via social media and share your experiences.

Source:
https://blog.cloudflare.com/speeding-up-your-website-in-a-few-clicks/

A step-by-step guide to transferring domains to Cloudflare

23/06/2023

Transferring your domains to a new registrar isn’t something you do every day, and getting any step of the process wrong could mean downtime and disruption. That’s why this Speed Week we’ve prepared a domain transfer checklist. We want to empower anyone to quickly transfer their domains to Cloudflare Registrar, without worrying about missing any steps along the way or being left with any unanswered questions.

Domain Transfer Checklist

Confirm eligibility

  • Confirm you want to use Cloudflare’s nameservers: We built our registrar specifically for customers who want to use other Cloudflare products. This means domains registered with Cloudflare can only use our nameservers. If your domain requires non-Cloudflare nameservers then we’re not the right registrar for you.
  • Confirm Cloudflare supports your domain’s TLD: You can view the full list of TLDs we currently support here. Note: we plan to support .dev and .app by mid-July 2023.
  • Confirm your domain is not a premium domain or internationalized domain name (IDN): Cloudflare currently does not support premium domains or internationalized domain names (Unicode).
  • Confirm your domain hasn’t been registered or transferred in the past 60 days: ICANN rules prohibit a domain from being transferred if it has been registered or previously transferred within the last 60 days.
  • Confirm your WHOIS Registrant contact information hasn’t been updated in the past 60 days: ICANN rules also prohibit a domain from being transferred if the WHOIS Registrant contact information was modified in the past 60 days.

Before you transfer

  • Gather your credentials for your current registrar: Make sure you have your credentials for your current registrar. It’s possible you haven’t logged in for many years and you may have to reset your password.
  • Make note of your current DNS settings: When transferring your domain, Cloudflare will automatically scan your DNS records, but you’ll want to capture your current settings in case there are any issues. If your current provider supports it, you can use the standard BIND zone file format to export your records.
  • Remove WHOIS privacy (if necessary): In most cases, domains may be transferred even if WHOIS privacy services have been enabled. However, some registrars may prohibit the transfer if the WHOIS privacy service has been enabled.
  • Disable DNSSEC: You can disable DNSSEC by removing the DS record at your current DNS host and disabling DNSSEC in the Cloudflare dashboard.
  • Renew your domain if up for renewal in the next 15 days: If your domain is up for renewal, you’ll need to renew it with your current registrar before initiating a transfer to Cloudflare.
  • Unlock the domain: Registrars include a lightweight safeguard to prevent unauthorized users from starting domain transfers – often called a registrar or domain lock. This lock prevents any other registrar from attempting to initiate a transfer. Only the registrant can enable or disable this lock, typically through the administration interface of the registrar.
  • Sign up for Cloudflare: If you don’t already have a Cloudflare account, you can sign up here.
  • Add your domain to Cloudflare: You can add a new domain to your Cloudflare account by following these instructions.
  • Add a valid credit card to your Cloudflare account: If you haven’t already added a payment method to your Cloudflare dashboard billing profile, you’ll be prompted to add one when you add your domain.
  • Review DNS records at Cloudflare: Once you’ve added your domain, review the DNS records that Cloudflare automatically configured with what you have at your current registrar to make sure nothing was missed.
  • Change your DNS nameservers to Cloudflare: In order to transfer your domain, your nameservers will need to be set to Cloudflare.
  • (optional) Configure Cloudflare Email Routing: If you’re using email forwarding, ensure that you follow this guide to migrate to Cloudflare Email Routing.
  • Wait for your DNS changes to propagate: Registrars can take up to 24 hours to process nameserver updates. You will receive an email when Cloudflare has confirmed that these changes are in place. You can’t proceed with transferring your domain until this process is complete.

Initiating and confirming transfer process

  • Request an authorization code: Cloudflare needs to confirm with your old registrar that the transfer flow is authorized. To do that, your old registrar will provide an authorization code to you. This code is often referred to as an authorization code, auth code, authinfo code, or transfer code. You will need to input that code to complete your transfer to Cloudflare. We will use it to confirm the transfer is authentic.
  • Initiate your transfer to Cloudflare: Visit the Transfer Domains section of your Cloudflare dashboard. Here you’ll be presented with any domains available for transfer. If your domain isn’t showing, ensure you completed all the preceding steps. If you have, review the list on this page to see if any apply to your domain.
  • Review the transfer price: When you transfer a domain, you are required by ICANN to pay to extend its registration by one year from the expiration date. You will not be billed at this step. Cloudflare will only bill your card when you input the auth code and confirm the contact information at the conclusion of your transfer request.
  • Input your authorization code: In the next page, input the authorization code for each domain you are transferring.
  • Confirm or input your contact information: In the final stage of the transfer process, input the contact information for your registration. Cloudflare Registrar redacts this information by default but is required to collect the authentic contact information for this registration.
  • Approve the transfer with Cloudflare: Once you have requested your transfer, Cloudflare will begin processing it, and send a Form of Authorization (FOA) email to the registrant, if the information is available in the public WHOIS database. The FOA is what authorizes the domain transfer.
  • Approve the transfer with your previous registrar: After this step, your previous registrar will also email you to confirm your request to transfer. Most registrars will include a link to confirm the transfer request. If you follow that link, you can accelerate the transfer operation. If you do not act on the email, the registrar can wait up to five days to process the transfer to Cloudflare. You may also be able to approve the transfer from within your current registrar dashboard.
  • Follow your transfer status in your Cloudflare dashboard: Your domain transfer status will be viewable under Account Home > Overview > Domain Registration for your domain.

After you transfer

Source:
https://blog.cloudflare.com/a-step-by-step-guide-to-transferring-domains-to-cloudflare/

Introducing the Cloudflare Radar Internet Quality Page

23/06/2023

Internet connections are most often marketed and sold on the basis of “speed”, with providers touting the number of megabits or gigabits per second that their various service tiers are supposed to provide. This marketing has largely been successful, as most subscribers believe that “more is better”. Furthermore, many national broadband plans in countries around the world include specific target connection speeds. However, even with a high speed connection, gamers may encounter sluggish performance, while video conference participants may experience frozen video or audio dropouts. Speeds alone don’t tell the whole story when it comes to Internet connection quality.

Additional factors like latency, jitter, and packet loss can significantly impact end user experience, potentially leading to situations where higher speed connections actually deliver a worse user experience than lower speed connections. Connection performance and quality can also vary based on usage – measured average speed will differ from peak available capacity, and latency varies under loaded and idle conditions.

The new Cloudflare Radar Internet Quality page

A little more than three years ago, as residential Internet connections were strained because of the shift towards working and learning from home due to the COVID-19 pandemic, Cloudflare announced the speed.cloudflare.com speed test tool, which enabled users to test the performance and quality of their Internet connection. Within the tool, users can download the results of their individual test as a CSV, or share the results on social media. However, there was no aggregated insight into Cloudflare speed test results at a network or country level to provide a perspective on connectivity characteristics across a larger population.

Today, we are launching these long-missing aggregated connection performance and quality insights on Cloudflare Radar. The new Internet Quality page provides both country and network (autonomous system) level insight into Internet connection performance (bandwidth) and quality (latency, jitter) over time. (Your Internet service provider is likely an autonomous system with its own autonomous system number (ASN), and many large companies, online platforms, and educational institutions also have their own autonomous systems and associated ASNs.) The insights we are providing are presented across two sections: the Internet Quality Index (IQI), which estimates average Internet quality based on aggregated measurements against a set of Cloudflare and third-party targets, and Connection Quality, which presents peak/best case connection characteristics based on speed.cloudflare.com test results aggregated over the previous 90 days. (Details on our approach to the analysis of this data are presented below.)

Users may note that individual speed test results, as well as the aggregate speed test results presented on the Internet Quality page will likely differ from those presented by other speed test tools. This can be due to a number of factors including differences in test endpoint locations (considering both geographic and network distance), test content selection, the impact of “rate boosting” by some ISPs, and testing over a single connection vs. multiple parallel connections. Infrequent testing (on any speed test tool) by users seeking to confirm perceived poor performance or validate purchased speeds will also contribute to the differences seen in the results published by the various speed test platforms.

And as we announced in April, Cloudflare has partnered with Measurement Lab (M-Lab) to create a publicly-available, queryable repository for speed test results. M-Lab is a non-profit third-party organization dedicated to providing a representative picture of Internet quality around the world. M-Lab produces and hosts the Network Diagnostic Tool, which is a very popular network quality test that records millions of samples a day. Given their mission to provide a publicly viewable, representative picture of Internet quality, we chose to partner with them to provide an accurate view of your Internet experience and the experience of others around the world using openly available data.

Connection speed & quality data is important

While most advertisements for fixed broadband and mobile connectivity tend to focus on download speeds (and peak speeds at that), there’s more to an Internet connection, and the user’s experience with that Internet connection, than that single metric. In addition to download speeds, users should also understand the upload speeds that their connection is capable of, as well as the quality of the connection, as expressed through metrics known as latency and jitter. Getting insight into all of these metrics provides a more well-rounded view of a given Internet connection, or in aggregate, the state of Internet connectivity across a geography or network.

The concept of download speed is fairly well understood as a measure of performance. However, it is important to note that the average download speeds experienced by a user during common Web browsing activities, which often involve the parallel retrieval of multiple smaller files from multiple hosts, can differ significantly from peak download speeds, where the user is downloading a single large file (such as a video or software update), which allows the connection to reach maximum performance. The bandwidth (speed) available for upload is sometimes mentioned in ISP advertisements, but doesn’t receive much attention. (And depending on the type of Internet connection, there’s often a significant difference between the available upload and download speeds.) However, the importance of upload came to the forefront in 2020 as video conferencing tools saw a surge in usage as both work meetings and school classes shifted to the Internet during the COVID-19 pandemic. To share your audio and video with other participants, you need sufficient upload bandwidth, and this issue was often compounded by multiple people sharing a single residential Internet connection.

Latency is the time it takes data to move through the Internet, and is measured in the number of milliseconds that it takes a packet of data to go from a client (such as your computer or mobile device) to a server, and then back to the client. In contrast to speed metrics, lower latency is preferable. This is especially true for use cases like online gaming where latency can make a difference between a character’s life and death in the game, as well as video conferencing, where higher latency can cause choppy audio and video experiences, but it also impacts web page performance. The latency metric can be further broken down into loaded and idle latency. The former measures latency on a loaded connection, where bandwidth is actively being consumed, while the latter measures latency on an “idle” connection, when there is no other network traffic present. (These specific loaded and idle definitions are from the device’s perspective, and more specifically, from the speed test application’s perspective. Unless the speed test is being performed directly from a router, the device/application doesn’t have insight into traffic on the rest of the network.) Jitter is the average variation found in consecutive latency measurements, and can be measured on both idle and loaded connections. A lower number means that the latency measurements are more consistent. As with latency, Internet connections should have minimal jitter, which helps provide more consistent performance.
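To make these definitions concrete, here is a small worked example. The post doesn’t spell out the exact jitter formula its tooling uses, so this sketch assumes the common definition (the mean absolute difference between consecutive round-trip measurements); all sample values are made up:

samples = [24.1, 26.8, 23.5, 31.2, 24.9]  # hypothetical round-trip times, in milliseconds

latency = sum(samples) / len(samples)  # average round-trip time

# Jitter: average variation between consecutive measurements
diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
jitter = sum(diffs) / len(diffs)

print(f"latency = {latency:.1f} ms, jitter = {jitter:.1f} ms")  # 26.1 ms, 5.0 ms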

Our approach to data analysis

The Internet Quality Index (IQI) and Connection Quality sections get their data from two different sources, providing two different (albeit related) perspectives. Under the hood they share some common principles, though.

IQI builds upon the mechanism we already use to regularly benchmark ourselves against other industry players. It is based on end user measurements against a set of Cloudflare and third-party targets, meant to represent a pattern that has become very common in the modern Internet, where most content is served from distribution networks with points of presence spread throughout the world. For this reason, and by design, IQI will show worse results for regions and Internet providers that rely on international (rather than peering) links for most content.

IQI is also designed to reflect the traffic load most commonly associated with web browsing, rather than more intensive use. This, and the chosen set of measurement targets, effectively biases the numbers towards what end users experience in practice (where latency plays an important role in how fast things can go).

For each metric covered by IQI, and for each ASN, we calculate the 25th percentile, median, and 75th percentile at 15 minute intervals. At the country level and above, the three calculated numbers for each ASN visible from that region are independently aggregated. This aggregation takes the estimated user population of each ASN into account, biasing the numbers away from networks that source a lot of automated traffic but have few end users.
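The post doesn’t publish the exact aggregation formula, but a population-weighted average of each ASN’s percentile values is consistent with the description above. A minimal sketch in Python, with made-up ASNs, medians, and user counts:

# Hypothetical per-ASN median latencies (ms) and estimated user populations.
# The precise weighting Radar applies isn't specified; this sketch assumes
# a simple population-weighted average, computed independently per percentile.
asn_stats = {
    "AS64500": {"median_latency": 22.0, "users": 4_000_000},
    "AS64501": {"median_latency": 55.0, "users": 250_000},
    "AS64502": {"median_latency": 18.0, "users": 9_500_000},
}

total_users = sum(a["users"] for a in asn_stats.values())
country_median = sum(
    a["median_latency"] * a["users"] / total_users for a in asn_stats.values()
)
print(f"{country_median:.1f} ms")  # dominated by the high-population networks, as intended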

The Connection Quality section gets its data from the Cloudflare Speed Test tool, which exercises a user’s connection in order to see how well it is able to perform. It measures against the closest Cloudflare location, providing a good balance of realistic results and network proximity to the end user. We have a presence in 285 cities around the world, allowing us to be pretty close to most users.

Similar to the IQI, we calculate the 25th percentile, median, and 75th percentile for each ASN. But here these three numbers are immediately combined using an operation called the trimean — a single number meant to balance the best connection quality that most users have, with the best quality available from that ASN (users may not subscribe to the best available plan for a number of reasons).
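The trimean itself is a standard statistic (Tukey’s trimean): the median counted twice, averaged with the 25th and 75th percentiles. A minimal sketch, with illustrative speed samples for a single ASN:

import numpy as np

def trimean(samples):
    # Tukey's trimean: (Q1 + 2 * median + Q3) / 4. It balances the typical
    # user (the median) against the best and worst plans in the ASN (the quartiles).
    q25, median, q75 = np.percentile(samples, [25, 50, 75])
    return (q25 + 2 * median + q75) / 4

# Hypothetical download speeds (Mbps) from one ASN's tests
speeds = [12, 25, 48, 95, 110, 300, 310, 720]
print(round(trimean(speeds), 1))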

Because users may choose to run a speed test for different motives at different times, and also because we take privacy very seriously and don’t record any personally identifiable information along with test results, we aggregate at 90-day intervals to capture as much variability as we can.

At the country level and above, the calculated trimean for each ASN in that region is aggregated. This, again, takes the estimated user population of each ASN into account, biasing the numbers away from networks that have few end users but which may still have technicians using the Cloudflare Speed Test to assess the performance of their network.

The new Internet Quality page includes three views: Global, country-level, and autonomous system (AS). In line with the other pages on Cloudflare Radar, the country-level and AS pages show the same data sets, differing only in their level of aggregation. Below, we highlight the various components of the Internet Quality page.

Global

The top section of the global (worldwide) view includes time series graphs of the Internet Quality Index metrics aggregated at a continent level. The time frame shown in the graphs is governed by the selection made in the time frame drop down at the upper right of the page, and at launch, data for only the last three months is available. For users interested in examining a specific continent, clicking on the other continent names in the legend removes them from the graph. Although continent-level aggregation is rather coarse, it still provides some insight into regional Internet quality around the world.

Further down the page, the Connection Quality section presents a choropleth map, with countries shaded according to the values of the speed, latency, or jitter metric selected from the drop-down menu. Hovering over a country displays a label with the country’s name and metric value, and clicking on the country takes you to the country’s Internet Quality page. Note that in contrast to the IQI section, the Connection Quality section always displays data aggregated over the previous 90 days.

Country-level

Within the country-level page (using Canada as an example in the figures below), the country’s IQI metrics over the selected time frame are displayed. These time series graphs show the median bandwidth, latency, and DNS response time within a shaded band bounded by the 25th and 75th percentiles, and represent the average expected user experience across the country, as discussed in the Our approach to data analysis section above.

Below that is the Connection Quality section, which provides a summary view of the country’s measured upload and download speeds, as well as latency and jitter, over the previous 90 days. The colored wedges in the Performance Summary graph are intended to illustrate aggregate connection quality at a glance, with an “ideal” connection having larger upload and download wedges and smaller latency and jitter wedges. Hovering over the wedges displays the metric’s value, which is also shown in the table to the right of the graph.

Below that, the Bandwidth and Latency/Jitter histograms illustrate the bucketed distribution of upload and download speeds, and latency and jitter measurements. In some cases, the speed histograms may show a noticeable bar at 1 Gbps, or the latency/jitter histograms a noticeable bar at 1000 ms (1 second). The presence of such a bar indicates that there is a set of measurements with values greater than the 1 Gbps/1000 ms maximum histogram values.

Autonomous system level

Within the upper-right section of the country-level page, a list of the top five autonomous systems within the country is shown. Clicking on an ASN takes you to the Performance page for that autonomous system. For others not displayed in the top five list, you can use the search bar at the top of the page to search by autonomous system name or number. The graphs shown within the AS level view are identical to those shown at a country level, but obviously at a different level of aggregation. You can find the ASN that you are connected to from the My Connection page on Cloudflare Radar.

Exploring connection performance & quality data

Digging into the IQI and Connection Quality visualizations can surface some interesting observations, including characterizing Internet connections, and the impact of Internet disruptions, including shutdowns and network issues. We explore some examples below.

Characterizing Internet connections

Verizon FiOS is a residential fiber-based Internet service available to customers in the United States. Fiber-based Internet services (as opposed to cable-based, DSL, dial-up, or satellite) will generally offer symmetric upload and download speeds, and the FiOS plans page shows this to be the case, offering 300 Mbps (upload & download), 500 Mbps (upload & download), and “1 Gig” (Verizon claims average wired speeds between 750-940 Mbps download / 750-880 Mbps upload) plans. Verizon carries FiOS traffic on AS701 (labeled UUNET due to a historical acquisition), and in looking at the bandwidth histogram for AS701, several things stand out. The first is a rough symmetry in upload and download speeds. (A cable-based Internet service provider, in contrast, would generally show a wide spread of download speeds, but have upload speeds clustered at the lower end of the range.) Another is the peaks around 300 Mbps and 750 Mbps, suggesting that the 300 Mbps and “1 Gig” plans may be more popular than the 500 Mbps plan. It is also clear that there are a significant number of test results with speeds below 300 Mbps. This is due to several factors: one is that Verizon also carries lower speed non-FiOS traffic on AS701, while another is the erratic nature of in-home WiFi, which often means that the speeds achieved on a test will be lower than the purchased service level.

Traffic shifts drive latency shifts

On May 9, 2023, the government of Pakistan ordered the shutdown of mobile network services in the wake of protests following the arrest of former Prime Minister Imran Khan. Our blog post covering this shutdown looked at the impact from a traffic perspective. Within the post, we noted that autonomous systems associated with fixed broadband networks saw significant increases in traffic when the mobile networks were shut down – that is, some users shifted to using fixed networks (home broadband) when mobile networks were unavailable.

Examining IQI data after the blog post was published, we found that the impact of this traffic shift was also visible in our latency data. As can be seen in the shaded area of the graph below, the shutdown of the mobile networks resulted in the median latency dropping about 25% as usage shifted from higher latency mobile networks to lower latency fixed broadband networks. An increase in latency is visible in the graph when mobile connectivity was restored on May 12.

Bandwidth shifts as a potential early warning sign

On April 4, UK mobile operator Virgin Media suffered several brief outages. In examining the IQI bandwidth graph for AS5089, the ASN used by Virgin Media (formerly branded as NTL), indications of a potential problem are visible several days before the outages occurred, as median bandwidth dropped by about a third, from around 35 Mbps to around 23 Mbps. The outages are visible in the circled area in the graph below. Published reports indicate that the problems lasted into April 5, in line with the lower median bandwidth measured through mid-day.

Submarine cable issues cause slower browsing

On June 5, Philippine Internet provider PLDT tweeted an advisory that noted “One of our submarine cable partners confirms a loss in some of its internet bandwidth capacity, and thus causing slower Internet browsing.” IQI latency and bandwidth graphs for AS9299, a primary ASN used by PLDT, show clear shifts starting around 06:45 UTC (14:45 local time). Median bandwidth dropped by half, from 17 Mbps to 8 Mbps, while median latency increased by 75% from 37 ms to around 65 ms. 75th percentile latency also saw a significant increase, nearly tripling from 63 ms to 180 ms coincident with the reported submarine cable issue.

Conclusion

Making network performance and quality insights available on Cloudflare Radar supports Cloudflare’s mission to help build a better Internet. However, we’re not done yet – we have more enhancements planned. These include making data available at a more granular geographical level (such as state and possibly city), incorporating AIM scores to help assess Internet quality for specific types of use cases, and embedding the Cloudflare speed test directly on Radar using the open source JavaScript module.

In the meantime, we invite you to use speed.cloudflare.com to test the performance and quality of your Internet connection, share any country or AS-level insights you discover on social media (tag @CloudflareRadar on Twitter or @radar@cloudflare.social on Mastodon), and explore the underlying data through the M-Lab repository or the Radar API.


Source :
https://blog.cloudflare.com/introducing-radar-internet-quality-page/

10 Useful Code Snippets for WordPress Users

LAST UPDATED: APRIL 26, 2023

We know that plugins can be used to extend the functionality of WordPress. But what if you could do some of the smaller things in WordPress without installing them? Say, you dislike the admin bar at the top and wish to eliminate it? Yes, that can be accomplished by means of code snippets for WordPress.

Basically, code snippets for WordPress are used to accomplish certain actions that might otherwise require a dedicated smaller plugin. Such code snippets are placed in one of your theme’s files (generally the functions.php file of your theme).


In this article, we will share ten very useful code snippets for WordPress users. We’ll then wrap it up by explaining how you can add these code snippets to your WordPress site by using a free plugin.

10 Useful Code Snippets for WordPress Users 📚

  1. Allow Contributors to Upload Images
  2. Show Popular Posts Without Plugins
  3. Disable Search in WordPress
  4. Protect Your Site from Malicious Requests
  5. Paginate Your Site Without Plugins
  6. Disable the Admin Bar
  7. Show Post Thumbnails in RSS Feed
  8. Change the Author Permalink Structure
  9. Automatically Link to Twitter Usernames in Content
  10. Create a PayPal Donation Shortcode

Word of Caution! ✋

As you might have guessed, code snippets for WordPress, while really useful, tend to alter the default functionality. There can be a small margin of error with each snippet. Generally, such issues tend to arise due to incompatible plugins and/or themes and tend to disappear once you eliminate the said theme/plugin or decide not to use the said snippet.

However, to be on the safer side, be very sure to take proper backups of your WordPress website before making any changes by means of snippets. Also, if you encounter any errors or performance issues, roll back your site and check for any plugin or theme incompatibilities.

Now, on to the code snippets for WordPress users!

1. Allow Contributors to Upload Images

By default, WordPress does not permit contributor accounts to upload images. You can, of course, promote that particular account to Author or Editor, which will give them the rights to upload and modify images. However, it will also grant them additional rights, such as the ability to publish their own articles (as opposed to submitting them for review).

This particular code snippet allows contributor accounts to upload images to their articles, without granting them any additional privileges or rights. Paste it in the functions.php file of your theme:

add_action('admin_init', 'allow_contributor_uploads');
function allow_contributor_uploads() {
     $contributor = get_role('contributor');
     // Grant the capability only if the role exists and doesn't have it yet
     if ( $contributor && !$contributor->has_cap('upload_files') ) {
          $contributor->add_cap('upload_files');
     }
}

2. Show Popular Posts Without Plugins

This one is a little trickier. However, if you are not too keen on installing an extra plugin to showcase popular posts (say, you have limited server memory or disk space), use the following snippets.

Paste the following in functions.php:

// Count a view each time a single post is displayed
function count_post_visits() {
    if( is_single() ) {
        global $post;
        $views = get_post_meta( $post->ID, 'my_post_viewed', true );
        if( $views == '' ) {
            update_post_meta( $post->ID, 'my_post_viewed', '1' );
        } else {
            $views_no = intval( $views );
            update_post_meta( $post->ID, 'my_post_viewed', ++$views_no );
        }
    }
}
add_action( 'wp_head', 'count_post_visits' );

Thereafter, paste the following wherever in your template files you wish to display the popular posts:

$popular_posts_args = array(
    'posts_per_page' => 3,
    'meta_key' => 'my_post_viewed',
    'orderby' => 'meta_value_num',
    'order' => 'DESC'
);
$popular_posts_loop = new WP_Query( $popular_posts_args );
while( $popular_posts_loop->have_posts() ):
    $popular_posts_loop->the_post();
    // Output each post however your theme needs, e.g.:
    the_title( '<h4>', '</h4>' );
endwhile;
wp_reset_postdata(); // restore the main query's post data

3. Disable Search in WordPress

The search feature of WordPress has been around for a long time. However, if your website does not need it, or you do not want users to “search” through your website for some reason, you can use this code snippet.

Essentially, it is a custom function that simply nullifies the search feature. Not just the search bar in your sidebar or the menu, but the entire concept of native WP search is gone. Why can this be useful? Again, it can help if you are running your website on a low-spec server and do not have content that needs to be searched (perhaps you aren’t running a blog).

Again, add this to the functions.php file:

function fb_filter_query( $query, $error = true ) {
    if ( is_search() ) {
        $query->is_search = false;
        $query->query_vars['s'] = false;
        $query->query['s'] = false;
        // Serve a 404 page instead of search results
        if ( $error == true )
            $query->is_404 = true;
    }
}
add_action( 'parse_query', 'fb_filter_query' );
// create_function() was removed in PHP 8, so use a built-in helper instead
add_filter( 'get_search_form', '__return_null' );

4. Protect Your Site from Malicious Requests

There are various ways to secure your website. You can install a security plugin, turn on a firewall or opt for a free feature such as Jetpack Protect that blocks brute force attacks on your website.

The following code snippet, once placed in your functions.php file, rejects URL requests from logged-in non-administrator users that are suspiciously long or contain common attack strings:

global $user_ID;
if( $user_ID ) {
    if( !current_user_can('administrator') ) {
        // Block overlong request URIs and URIs containing common attack strings
        if ( strlen($_SERVER['REQUEST_URI']) > 255 ||
            stripos($_SERVER['REQUEST_URI'], "eval(") !== false ||
            stripos($_SERVER['REQUEST_URI'], "CONCAT") !== false ||
            stripos($_SERVER['REQUEST_URI'], "UNION+SELECT") !== false ||
            stripos($_SERVER['REQUEST_URI'], "base64") !== false ) {
                @header("HTTP/1.1 414 Request-URI Too Long");
                @header("Status: 414 Request-URI Too Long");
                @header("Connection: Close");
                exit;
        }
    }
}

5. Paginate Your Site Without Plugins

Good pagination is very useful for allowing users to browse through your website, rather than relying on bare “previous” and “next” links. This is where another one of our code snippets for WordPress comes into play – it adds proper numbered pagination to your content.

Paste this in the template file where you want the pagination to appear:

global $wp_query;
$total = $wp_query->max_num_pages;
// only bother with the rest if we have more than 1 page!
if ( $total > 1 )  {
     // get the current page
     if ( !$current_page = get_query_var('paged') )
          $current_page = 1;
     // structure of "format" depends on whether we're using pretty permalinks
     $format = empty( get_option('permalink_structure') ) ? '&page=%#%' : 'page/%#%/';
     echo paginate_links(array(
          'base' => get_pagenum_link(1) . '%_%',
          'format' => $format,
          'current' => $current_page,
          'total' => $total,
          'mid_size' => 4,
          'type' => 'list'
     ));
}

6. Disable the Admin Bar

The WordPress Admin Bar provides handy links to several key functions such as the ability to add new posts and pages, etc. However, if you find no use for it and wish to remove it, simply paste the following code snippet to your functions.php file:

// Remove the admin bar from the front end
add_filter( 'show_admin_bar', '__return_false' );

7. Show Post Thumbnails in RSS Feed

If you wish to show post thumbnail images in your blog’s RSS feed, the following code snippet for WordPress can be useful.

Place it in your functions.php file:

// Prepend the post thumbnail to each item in the RSS feed
function wpfme_feed_post_thumbnail($content) {
    global $post;
    if( has_post_thumbnail($post->ID) ) {
        $content = '<p>' . get_the_post_thumbnail($post->ID) . '</p>' . $content;
    }
    return $content;
}
add_filter('the_excerpt_rss', 'wpfme_feed_post_thumbnail');
add_filter('the_content_feed', 'wpfme_feed_post_thumbnail');

8. Change the Author Permalink Structure

By default, WordPress shows author profiles as yoursite.com/author/name. However, you can change it to anything that you like, such as yoursite.com/writer/name.

The following code snippet needs to be pasted into the functions.php file. It changes the author permalink base to “profile”:

add_action('init', 'cng_author_base');
function cng_author_base() {
    global $wp_rewrite;
    $author_slug = 'profile'; // change slug name
    $wp_rewrite->author_base = $author_slug;
}
// Afterwards, visit Settings » Permalinks once to flush the rewrite rules.

9. Automatically Link to Twitter Usernames in Content

This is especially useful if you are running a website that focuses a lot on Twitter (perhaps a viral content site). The following code snippet for functions.php converts all @ mentions in your content to links to the respective Twitter profiles.

For example, an @happy mention in your content will be converted to a link to the Twitter account “twitter.com/happy” (“happy” being the username):

function content_twitter_mention($content) {
    // Match @username (preceded by a non-word character) and link it to the profile
    return preg_replace('/([^a-zA-Z0-9-_&])@([0-9a-zA-Z_]+)/', "$1<a href=\"https://twitter.com/$2\" target=\"_blank\" rel=\"nofollow\">@$2</a>", $content);
}
add_filter('the_content', 'content_twitter_mention');
add_filter('comment_text', 'content_twitter_mention');

10. Create a PayPal Donation Shortcode

If you are using the PayPal Donate function to accept donations from your website’s visitors, you can use this code snippet to create a shortcode, and thus make donating easier. First, paste the following in your functions.php file:

function donate_shortcode( $atts, $content = null ) {
    global $post;
    // Note: WordPress lowercases shortcode attribute names, so use 'onhover' rather than 'onHover'
    extract( shortcode_atts( array(
        'account' => 'your-paypal-email-address',
        'for'     => $post->post_title,
        'onhover' => '',
    ), $atts ) );
    if( empty($content) ) $content = 'Make A Donation';
    return '<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_xclick&business=' . $account . '&item_name=' . urlencode( 'Donation for ' . $for ) . '" title="' . esc_attr( $onhover ) . '">' . $content . '</a>';
}
add_shortcode('donate', 'donate_shortcode');

Then, you can easily use the [donate] shortcode, such as:

[donate]My Text Here[/donate]


How to Add Code Snippets? 🤔

As mentioned with each code snippet, you just need to add the said snippet to the required file. Mostly, you would only need to add code snippets to the functions.php file (in some cases, it can differ).

However, what if you are just not comfortable editing your theme’s files? If that is the case, have no fear. The Code Snippets plugin can help you out!

Code Snippets

Author(s): Code Snippets Pro

Current Version: 3.4.2

Last Updated: July 5, 2023

Ratings: 94% | Installs: 800,000+ | Requires WP: 5.0+

It is a simple plugin that lets you add code snippets to your functions.php without any manual file editing. It treats code snippets as individual plugins of their own – you add the code and hit save … and the rest is handled by the Code Snippets plugin.

Once you activate the plugin, you will find a Snippets menu right under “Plugins.” Head to Snippets » Add New:

Code Snippets for WordPress

Add a name for your snippet, paste the snippet in the code area, and then provide a description for your own reference. Once done, activate the snippet and you’re good to go! Even if you change the theme, the code snippet remains functional.


This way, you can add and delete code snippets as if they were posts or pages without having to edit theme files at all.

Source :
https://themeisle.com/blog/code-snippets-for-wordpress/#gref

How to use Inspect Element in Chrome, Safari, and Firefox

By Bryce Emley · June 20, 2023

Hero image showing the Inspect Element feature in Chrome

There’s a powerful tool hiding in your browser: Inspect Element.

Right-click on any webpage, click Inspect, and you’ll see the innards of that site: its source code, the images and CSS that form its design, the fonts and icons it uses, the JavaScript code that powers animations, and more. You can see how long the site takes to load, how much bandwidth it used while downloading, and the exact colors used in its text.


Or, you could use it to change anything you want on the page.

Inspect Element is a perfect way to learn what makes the web tick, figure out what’s broken on your sites, mock up what a color and font change would look like, and keep yourself from having to Photoshop out private details in screenshots. Here’s how to use Inspect Element—your browser’s secret superpower—to do all the above and more.

Most web browsers include an Inspect Element tool, while Microsoft’s Edge browser includes a similar set of Developer Tools. This tutorial focuses on Inspect Element tools for Google Chrome, Mozilla Firefox, and Apple Safari, but most of the features work the same in other browsers like Brave.


Why should I use Inspect Element?

Screenshot showing the writer using Inspect Element

If you’ve never peeked at a website’s code out of curiosity, you might wonder why you should learn how to use Inspect Element. Below are just a few reasons why different roles can benefit from learning this trick of the trade. 

  • Designer: Want to preview how a site design would look on mobile? Or want to see how a different shade of green would look on a sign-up button? You can do both in seconds with Inspect Element.
  • Marketer: Curious what keywords competitors use in their site headers, or want to see if your site’s loading too slow for Google’s PageSpeed test? Inspect Element can show both.
  • Writer: Tired of blurring out your name and email in screenshots? With Inspect Element, you can change any text on a webpage in a second.
  • Support agent: Need a better way to tell developers what needs to be fixed on a site? Inspect Element lets you make a quick example change to show what you’re talking about.
  • Web developer: Need to look for broken code, compare layouts, or make live edits to a page? Inspect Element does that, too.

For these and dozens of other use cases, Inspect Element is a handy tool to keep around. For now, let’s see how to use the main Elements tab to tweak a webpage on your own.

How to inspect elements with Google Chrome

There are a few ways to access Google Chrome Inspect Element. Just open a website you want to try editing (to follow along with this tutorial, open the Zapier blog post What is AI?), then open the Inspect Element tool in one of these three ways:

  • Method 1: Right-click anywhere on the webpage, and at the very bottom of the menu that pops up, click Inspect.
  • Method 2: Click the three-dot menu icon on the far right of your Google Chrome toolbar, click More Tools, then select Developer Tools.
  • Method 3: Prefer keyboard shortcuts? Press command + option + I on a Mac, or Ctrl + Shift + C on a PC to open Inspect Element without clicking anything.

Once you take your preferred route to opening the Developer Tools pane, by default, it will show the Elements tab—that’s the famed Inspect Element tool we’ve been looking for.

If you want to change the orientation of the Inspect Element pane, click the three vertical dots on the top-right side of the Inspect Element pane near the “X” (which you’d click to close the pane). Now, you’ll see options to move the pane to the bottom, left, or right side of your browser or to open the pane in a completely separate window (undock view).

Screenshot of the writer showing how to change the orientation of the Inspect Element pane

For this tutorial, let’s dock the pane on the right side of our browser window to give us more space to work. You can make the Developer Tools panel wider or narrower by hovering over the left-side border. Once the resize cursor appears, drag the pane left to widen it or right to narrow it.

How to inspect elements with Firefox

To get to Inspect Element on Firefox, like Chrome, you have three options.

  • Method 1: Right-click anywhere on the page and click Inspect at the bottom of the menu.
  • Method 2: Click the hamburger menu (three horizontal lines at the top-right corner of the window), select More tools, then click Web Developer Tools.
  • Method 3: The keyboard shortcut on Firefox is command + option + I for Macs and Control + Shift + C for PCs.

The Element pane in Firefox likes to pop up at the bottom of the window, which doesn’t give you much room to work with. To move that pane to the side and free up more room, click the three-dot menu (next to the “X” in the top-right corner of the pane) and click Dock to Right (or left, if you prefer).

Screenshot of the writer navigating to Dock to Right

If you like, you can also move the pane into a separate window in this menu. You can also expand the pane further or narrow it by hovering over the edge until your cursor changes, and then drag it to the left or right.

How to inspect elements with Safari

To launch Inspect Element with Safari, you’ll need to activate the developer capabilities in the advanced settings first. Here’s how.

  1. Click the Safari dropdown in the top navigation bar above the Safari window, and then click Preferences.
  2. Navigate to Advanced, and check the box at the bottom of the window next to Show Develop menu in menu bar. Close the window.
  3. Now, you should be able to right-click anywhere on the page and click Inspect Element to open the Elements pane.
  4. The pane should appear along the bottom of your window. To move it to a side alignment and give yourself a little more space to look at the code, click the Dock to right of window (or left of window) option in the top-left corner of the pane, next to the “X.”

I prefer right, but you can easily switch this to the other side or detach the pane into its own separate window if you prefer. To make the pane wider or narrower, just hover over the edge until the cursor changes to the dragger, then drag to move the edge.

Tools you can access through Inspect Element (+ tutorials)

Now that we’re in Inspect Element, there’s an array of useful tools at our fingertips that we can use to make any site look exactly how we want. For this tutorial, we’ll focus on the Search, Elements, and Emulation tabs. These aren’t the only useful tools Inspect Element opens up—not by a long shot—but they’re extremely helpful ones that beginners can start putting to use right away.

Note that, for simplicity, I’ll be using Chrome to demonstrate, but the instructions should be essentially the same for all three browsers.

Find anything on a site with Inspect Element Search

Wondering what goes into your favorite sites? Search is your best tool for that, aside from reading a site’s entire source code.

You can open the default Elements view, press Ctrl + F or command + F, and search through the source code. But the full Search tool will also let you search through every file on a page, helping you find text inside CSS and JavaScript files or locate an icon image you need for an article.

To get started, open Zapier’s blog article on “What is AI?” in Chrome, then open Inspect Element, click the three-dot menu, and select Search. The Search tab will appear on the bottom half of the Developer Tools pane.

Screenshot of the writer navigating to Search

In the search field, you can type anything—anything—that you want to find on this webpage, and it will appear in this pane. Let’s see how we can use this.

Type meta name into the search field, press Enter, and you’ll immediately see every occurrence of “meta name” in the code on this page. Now, you can see this page’s metadata, the SEO keywords it’s targeting, and whether or not it’s configured to let Google index it for search. That’s an easy way to see what your competitors are targeting—and to make sure you didn’t mess anything up on your site.

Screenshot the writer searching meta name
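You can script the same kind of metadata sweep outside the browser, too. A hypothetical sketch in Python (the URL is a placeholder for whatever page you want to audit):

import urllib.request
from html.parser import HTMLParser

# List every <meta name=...> tag on a page: the same information
# the devtools Search tab surfaces when you search for "meta name".
class MetaLister(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs:
                print(attrs["name"], "=", attrs.get("content", ""))

html = urllib.request.urlopen("https://www.example.com/").read().decode("utf-8", "replace")
MetaLister().feed(html)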

Search is an effective tool for designers as well since you can search by color, too. Type #ff4a00 into the search field and press Enter (and make sure to check the box beside Ignore case to see all of the results). You should now see every time the color #ff4a00, Zapier’s shade of orange, appears in this site’s CSS and HTML files. Then, just click the line that reads “color: #ff4a00;” to jump to that line in the site’s HTML and tweak it on your own (something we’ll look at in the next section).

Screenshot of the writer searching for the color: #ff4a00

This is a handy way for designers to make sure a site is following their brand’s style guide. With the Search tool, designers can easily check the CSS of a webpage to see if a color is applied to the wrong element, if an incorrect font family is used on a webpage, or if you’re still using your old color somewhere on your site.
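That check can also be automated. As a hypothetical example (the stylesheet URL is a placeholder, and every palette color except Zapier’s #ff4a00 is made up), a few lines of Python can flag any hex color in a stylesheet that isn’t in an approved palette:

import re
import urllib.request

# Approved brand colors; only #ff4a00 is taken from this article,
# the rest are placeholders for a real style guide.
BRAND_PALETTE = {"#ff4a00", "#201515", "#fffdf9"}

css = urllib.request.urlopen("https://www.example.com/styles/main.css").read().decode()
found = {c.lower() for c in re.findall(r"#[0-9a-fA-F]{6}", css)}

for color in sorted(found - BRAND_PALETTE):
    print("off-palette color:", color)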

The Search tool is also the perfect way to communicate with developers better since you can show them exactly where you’ve found a mistake or exactly what needs changing. Just tell them the line number where the problem exists, and you’ll get your fix that much quicker.

Or you can change the webpage yourself with Elements, the core part of Chrome’s Developer Tools.

Change anything with Elements

Front-end developers use the Inspect Element tool every day to modify the appearance of a webpage and experiment with new ideas—and you can, too. Inspect Element lets you tweak the appearance and content of a webpage by adding temporary edits to the site’s CSS and HTML files.

Once you close or reload the page, your changes will be gone; you’ll only see the changes on your computer and aren’t actually editing the real website itself. That way, you can feel free to experiment and change anything—and then copy and save the very best changes to use later.

Let’s see what we can do with it.

Click the Elements tab in the Developer Tools pane—and if you want more room, tap your Esc key to close the search box you had open before. You should see the HTML for this page—now you know how the sausage gets made.

Screenshot of the writer highlighting a part line of the  Elements tab in the Developer Tools pane

In the top-left corner of the Developer pane, you’ll see an icon of a mouse on top of a square. Click it, then you can select any element on the page you want to change. So let’s change some things!

Change the text on a webpage

Ever wanted to change text on a site? Perhaps to see how a new tagline would look on your homepage or to take your email address off a Gmail screenshot? Now you can.

  1. Click the icon of a mouse cursor on a square in the top-left corner of the pane.
  2. Click any text on the page (like the copy on our “What is AI?” blog), which will correspond with a blue highlight over the related code.
  3. Double-click the highlighted text in the Developer Tools pane (not the text on the live page) to turn it into an editable text field.
  4. Type anything you want in this text field (“Auri is a genius” should work just fine), and press Enter.

Voila! You’ve just (temporarily) changed the text on the webpage.

Screenshot of the writer highlighting the changed text

Refresh the page, and everything will go back to normal.

Fun? Let’s try another way to change some things on this page by closing out of the Developer pane altogether. You can then highlight any part of the live webpage you want to edit, then right-click it and hit Inspect.

Screenshot of the writer navigating to Inspect

When your Developer Tools pane opens, it should automatically highlight that sentence. Pretty neat, huh? It’s the little things that count.

Now that we’ve selected a sentence to change on our blog, let’s change how it looks.

Change the color and font of elements

On the lower half of the Developer Tools pane, you’ll see a sub-pane with a few additional tabs that allow you to change how this text looks on the page. Let’s get started on the Styles tab.

Screenshot of the writer highlighting a specific line of code for "What is AI"

You may notice that some things are crossed out. This means that these styles are not active for the element we’ve selected, so changing these values will have no effect. 

Let’s try changing something.  

  1. Look through the code for the “font-size” field and click into it. Let’s change it from 34px to 42px.
  2. Now scroll down to “color” and change it to Zapier’s signature #ff4a00.
  3. This will look a bit cramped, so let’s finish by changing the “line-height” to 44px.
  4. Now check the blog post to see the difference.

Now let’s try something really cool.

Change element states

Want to see how a button or link will look once someone interacts with it? Inspect Element can show that, too, with force element state tools. You can see how the element will look once a visitor hovers over the element (hover state), selects the element (focus state), and/or has clicked that link (visited state).

As with the other examples, you’ll need to click the mouse cursor/box icon. For this example, we’ll select the “Artificial Intelligence (AI)” tag on the “What is AI” article to try a color change. 

  1. In the Developer Tools pane, right-click on that code in the Elements tab, hover over Force state, and click the :active option. Do this one more time, but click the :hover option this time.
  2. That will change the button’s background to black, which is what happens when you hover over the button on the live site.
  3. Now, change the “background-color” value to #ff4a00.
  4. You should instantly be able to see what the new hover color will look like.

Try experimenting—change the :hover color, then uncheck :hover in the right-click menu and drag your mouse over the button to see the new button color.

Change images

You can easily change images on a webpage with Inspect Element, too. Using the same “What is AI?” blog post as an example, let’s replace the orange solid color background on the “Power your automation with AI” button with a dramatic photo of a solar flare from NASA.

  1. First, copy this link to the image: https://c1.staticflickr.com/9/8314/7931831962_7652860bae_b.jpg
  2. Open Inspect Element on the orange background of the “Power your automation with AI” button and look for the “background-color” code in the pane.
  3. Click the “background-color” value and replace the color code with url(), pasting the URL you copied inside the parentheses.
  4. This should automatically replace that boring single-color background with a flashy new image.

Note: You can also change a photo to a GIF or a video—all you need is a link to the file, and you can add it in.


Editing text is handy, swapping out images is fun, and changing colors and styles just might help you quickly mock up the changes you want made to your site. But how will that new tagline and button design look on mobile?

That’s where Emulation comes in—it’s where everything we’ve reviewed so far can be applied even further. Let’s see how.

Test a site on any device with Emulation

Everything has to be responsive today. Websites are no longer only viewed on computers—they’re more likely than ever to be viewed on a phone, tablet, TV, or just about any other type of screen. You should always keep that in mind when creating new content and designs.

Emulation is a great tool to approximate how websites will look to users across various devices, browsers, and even locations. Though this does not replace actual testing on a variety of devices and browsers, it’s a great start.

In the Developer Tools pane, you’ll notice a little phone icon in the top-left corner. Click it. This should change the page into a tiny, phone-styled page with a menu at the top to change the size.

Screenshot of the writer using Emulation

Resize the small browser to see how things look if you were browsing on a tablet, phone, or even smaller screen. Or, click the menu at the top to select default device sizes like Surface Duo or iPhone 12 Pro—let’s go ahead and select the latter.

The webpage screen should shrink down to the device’s size, and you can zoom in a bit by clicking the percentage dropdown next to the dimensions.

If you change the device preset to “Responsive,” you can enlarge the view by dragging the right edge of the webpage emulation right. See what happens? Dragging the screen along the grid allows you to see how the webpage will change as the screen size changes. You can even toggle portrait and landscape views by clicking the little rotation icon at the end of the top menu.

Play around with the other devices to see how the webpage and screen resolution changes. All of the other developer tools we’ve gone over so far will also react to the device view. 

Emulate mobile device sensors

When you start interacting with a device preview, you may notice that your mouse now appears as a little circle on the webpage. This allows you to interact with the page as if you’re on your mobile device.

If you click while dragging the page down, it doesn’t highlight text like it normally would in your browser—it drags the screen down like you’re on a touchscreen device. Using this view, you can see how large touch zones are on a webpage. This means you can see which buttons, icons, links, or other elements are easily touchable with the finger.

You can even make your browser act like a phone. Press your Esc key to open the Search pane in Inspect Element again, and this time click the three-dot menu at the top-right. Select More tools and then Sensors to get four new tools: Location, Orientation, Touch, and Emulate Idle Detector state.

  • Touch lets you choose whether the circular touch cursor (which acts like a finger rather than a normal mouse pointer) is forced on or left device-specific.
  • Orientation lets you interact with motion-sensitive websites, such as online games that let you move things by moving your phone. 
  • Location lets you pretend you’re in a different location.
  • Emulate Idle Detector state allows you to toggle between different idle user conditions.

Let’s try viewing this site from Berlin. Just click the dropdown and select the city—nothing changes, right?

This is because there isn’t content on this page that changes based on your location. If you change the coordinates on a site like Groupon.com that uses your location to show localized content, though, you would get different results. Go to Google.com from a different location, and you may see a new Google logo for a holiday in another country, or at least get the results in a different language.

Emulation is a great way to put yourself in your user’s shoes and consider what the user may be seeing on your webpage—and it’s a fun way to explore the international web.

Emulate mobile networks

You can also see what it’s like to browse a site on different networks—perhaps to see if your site will load even if your users are on a slower 3G network.

To give it a try, click the three-dot menu in the top-right corner of the pane, hover over More tools, and select Network conditions.

Screenshot of the writer navigating to Network conditions

There, you can choose from fast or slow 3G, or offline to see how the page works without internet. Or, click Add… to include your own profile (perhaps add 56 Kbps to test dial-up internet). Now, reload the page, and you’ll see just how long it’d take for the site to load on a slow connection—and how the site looks while it’s loading. That’ll show why it’s worth making your site load faster on slow connections.

Screenshot of the writer navigating to Fast 3G
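For a rough sense of what those throttled profiles mean, back-of-the-envelope arithmetic helps: transfer time is roughly page size in bits divided by bandwidth. A sketch with an illustrative page weight (real loads also depend on latency and parallel connections):

# Rough transfer-time estimate: size (bits) / bandwidth (bits per second).
page_bytes = 2_500_000  # a ~2.5 MB page, typical of a media-heavy site

for label, bits_per_second in [("56 Kbps dial-up", 56_000),
                               ("Slow 3G (~400 Kbps)", 400_000),
                               ("100 Mbps broadband", 100_000_000)]:
    seconds = page_bytes * 8 / bits_per_second
    print(f"{label}: ~{seconds:.0f} s")  # dial-up: about six minutes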

You can also change your user agent—uncheck Use browser default in the User agent field and select, say, Firefox — Mac to see if the site changes its rendering for other browsers on different devices. That’s also a handy hack to make webpages load even if they claim they only work in a different browser.

This is by no means a complete list of things you can do with Inspect Element. As you start exploring, you’ll see many more features. My advice: click all the buttons.


This article was originally published in January 2015 by Auri Pope. The most recent update was in June 2023.

Source :
https://zapier.com/blog/inspect-element-tutorial/

Content Delivery Networks (CDNs)

Article • 02/17/2023

This article applies to Microsoft 365 Enterprise.

CDNs help keep Microsoft 365 fast and reliable for end users. Cloud services like Microsoft 365 use CDNs to cache static assets closer to the browsers requesting them to speed up downloads and reduce perceived end user latency. The information in this topic will help you learn about Content Delivery Networks (CDNs) and how they’re used by Microsoft 365.

What exactly is a CDN?

A CDN is a geographically distributed network consisting of proxy and file servers in datacenters connected by high-speed backbone networks. CDNs are used to reduce latency and load times for a specified set of files and objects in a web site or service. A CDN may have many thousands of endpoints for optimal servicing of incoming requests from any location.

CDNs are commonly used to provide faster downloads of generic content for a web site or service such as JavaScript files, icons and images, and can also provide private access to user content such as files in SharePoint Online document libraries, streaming media files, and custom code.

CDNs are used by most enterprise cloud services. Cloud services like Microsoft 365 have millions of customers downloading a mix of proprietary content (such as emails) and generic content (such as icons) at one time. It’s more efficient to put images everyone uses, like icons, as close to the user’s computer as possible. It isn’t practical for every cloud service to build CDN datacenters that store this generic content in every metropolitan area, or even in every major Internet hub around the world, so some of these CDNs are shared.

How do CDNs make services work faster?

Downloading common objects like site images and icons over and over again can take up network bandwidth that can be better used for downloading important personal content, like email or documents. Because Microsoft 365 uses an architecture that includes CDNs, the icons, scripts, and other generic content can be downloaded from servers closer to client computers, making the downloads faster. This means faster access to your personal content, which is securely stored in Microsoft 365 datacenters.

CDNs help to improve cloud service performance in several ways:

  • CDNs shift part of the network and file download burden away from the cloud service, freeing up cloud service resources for serving user content and other services by reducing the need to serve requests for static assets.
  • CDNs are purpose built to provide low-latency file access by implementing high performance networks and file servers, and by leveraging updated network protocols such as HTTP/2 with highly efficient compression and request multiplexing.
  • CDN networks use many globally distributed endpoints to make content available as close as possible to users.

The Microsoft 365 CDN

The built-in Microsoft 365 Content Delivery Network (CDN) allows Microsoft 365 administrators to provide better performance for their organization’s SharePoint Online pages by caching static assets closer to the browsers requesting them, which helps to speed up downloads and reduce latency. The Microsoft 365 CDN uses the HTTP/2 protocol for improved compression and download speeds.

 Note

The Microsoft 365 CDN is only available to tenants in the Production (worldwide) cloud. Tenants in the US Government, China and Germany clouds do not currently support the Microsoft 365 CDN.

The Microsoft 365 CDN is composed of multiple CDNs that allow you to host static assets in multiple locations, or origins, and serve them from global high-speed networks. Depending on the kind of content you want to host in the Microsoft 365 CDN, you can add public origins, private origins or both.

Microsoft 365 CDN conceptual diagram.

Content in public origins within the Microsoft 365 CDN is accessible anonymously, and can be accessed by anyone who has URLs to hosted assets. Because access to content in public origins is anonymous, you should only use them to cache non-sensitive generic content such as JavaScript files, scripts, icons and images. The Microsoft 365 CDN is used by default for downloading generic resource assets like the Microsoft 365 client applications from a public origin.

Private origins within the Microsoft 365 CDN provide private access to user content such as SharePoint Online document libraries, sites and proprietary images. Access to content in private origins is secured with dynamically generated tokens so it can only be accessed by users with permissions to the original document library or storage location. Private origins in the Microsoft 365 CDN can only be used for SharePoint Online content, and you can only access assets through redirection from your SharePoint Online tenant.

The Microsoft 365 CDN service is included as part of your SharePoint Online subscription.

For more information about how to use the Microsoft 365 CDN, see Use the Microsoft 365 content delivery network with SharePoint Online.

To watch a series of short videos that provide conceptual and HOWTO information about using the Microsoft 365 CDN, visit the SharePoint Developer Patterns and Practices YouTube channel.

Other Microsoft CDNs

Although not a part of the Microsoft 365 CDN, you can use these CDNs in your Microsoft 365 tenant for access to SharePoint development libraries, custom code and other purposes that fall outside the scope of the Microsoft 365 CDN.

Azure CDN

 Note

Beginning in Q3 2020, SharePoint Online will begin caching videos on the Azure CDN to support improved video playback and reliability. Popular videos will be streamed from the CDN endpoint closest to the user. This data will remain within the Microsoft Purview boundary. This is a free service for all tenants and it does not require any customer action to configure.

You can use the Azure CDN to deploy your own CDN instance for hosting custom web parts, libraries and other resource assets, which allows you to apply access keys to your CDN storage and exert greater control over your CDN configuration. Use of the Azure CDN isn’t free, and requires an Azure subscription.

For more information on how to configure an Azure CDN instance, see Quickstart: Integrate an Azure storage account with Azure CDN.

For an example of how the Azure CDN can be used to host SharePoint web parts, see Deploy your SharePoint client-side web part to Azure CDN.

For information about the Azure CDN PowerShell module, see Manage Azure CDN with PowerShell.

Microsoft Ajax CDN

Microsoft’s Ajax CDN is a read-only CDN that offers many popular development libraries, including jQuery (and its associated libraries), ASP.NET Ajax, Bootstrap, Knockout.js, and others.

To include these scripts in your project, simply replace any references to these publicly available libraries with references to the CDN address instead of including it in your project itself. For example, use the following code to link to jQuery:

<script src="https://ajax.aspnetcdn.com/ajax/jquery-2.1.1.js"></script>

For more information about how to use the Microsoft Ajax CDN, see Microsoft Ajax CDN.

How does Microsoft 365 use content from a CDN?

Regardless of what CDN you configure for your Microsoft 365 tenant, the basic data retrieval process is the same.

  1. Your client (a browser or Office client application) requests data from Microsoft 365.
  2. Microsoft 365 either returns the data directly to your client or, if the data is part of a set of content hosted by the CDN, redirects your client to the CDN URL.
     a. If the data is already cached in a public origin, your client downloads the data directly from the nearest CDN location to your client.
     b. If the data is already cached in a private origin, the CDN service checks your Microsoft 365 user account’s permissions on the origin. If you have permissions, SharePoint Online dynamically generates a custom URL composed of the path to the asset in the CDN and two access tokens, and returns the custom URL to your client. Your client then downloads the data directly from the nearest CDN location to your client using the custom URL.
  3. If the data isn’t cached at the CDN, the CDN node requests the data from Microsoft 365 and then caches the data for a period of time after your client downloads the data.

The CDN figures out the closest datacenter to the user’s browser and, using redirection, downloads the requested data from there. CDN redirection is quick, and can save users a lot of download time.
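To make the public-origin case concrete, here is a hedged sketch: asset URLs in a public origin are rewritten to the publiccdn.sharepointonline.com host. The tenant host name and asset path below are invented for illustration, and the exact URL format may vary by configuration:

HTML

<!-- Original location of the asset in the tenant (hypothetical): -->
<!-- https://contoso.sharepoint.com/sites/marketing/SiteAssets/logo.png -->

<!-- The same asset served through a Microsoft 365 CDN public origin
     (illustrative format; the host is publiccdn.sharepointonline.com): -->
<img src="https://publiccdn.sharepointonline.com/contoso.sharepoint.com/sites/marketing/SiteAssets/logo.png"
     alt="Company logo">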

How should I set up my network so that CDNs work best with Microsoft 365?

Minimizing latency between clients on your network and CDN endpoints is the key consideration for ensuring optimal performance. Follow the best practices outlined in Managing Microsoft 365 endpoints to make sure your network configuration permits client browsers to access the CDN directly, rather than routing CDN traffic through central proxies, which introduces unnecessary latency.

You can also read Microsoft 365 Network Connectivity Principles to understand the concepts behind optimizing Microsoft 365 network performance.

Is there a list of all the CDNs that Microsoft 365 uses?

The CDNs in use by Microsoft 365 are always subject to change, and in many cases multiple CDN partners are configured in case one is unavailable. The primary CDNs used by Microsoft 365 are:

CDN | Company | Usage | Link
Microsoft 365 CDN | Microsoft Azure | Generic assets in public origins, SharePoint user content in private origins | Microsoft Azure CDN
Azure CDN | Microsoft | Custom code, SharePoint Framework solutions | Microsoft Azure CDN
Microsoft Ajax CDN (read only) | Microsoft | Common libraries for Ajax, jQuery, ASP.NET, Bootstrap, Knockout.js, etc. | Microsoft Ajax CDN

What performance gains does a CDN provide?

There are many factors involved in measuring specific differences in performance between data downloaded directly from Microsoft 365 and data downloaded from a specific CDN, such as your location relative to your tenant and to the nearest CDN endpoint, the number of assets on a page that are served by the CDN, and transient changes in network latency and bandwidth. However, a simple A/B test can help to show the difference in download time for a specific file.

The following screenshots illustrate the difference in download speed between the native file location in Microsoft 365 and the same file hosted on the Microsoft Ajax Content Delivery Network, using the popular jQuery library as the test file. The screenshots are from the Network tab in the Internet Explorer 11 developer tools; to bring up this view, press F12 in Internet Explorer and select the Network tab, which is symbolized by a Wi-Fi icon.

Screenshot of F12 Network.

The first screenshot shows the library served from the master page gallery on the SharePoint Online site itself. The time it took to load the library is 1.51 seconds.

Screenshot of load time 1.51s.

The second screenshot shows the same file delivered by Microsoft’s CDN. This time the latency is around 496 milliseconds, a large improvement that shaves roughly a whole second off the total time to download the object.

Screenshot of load time in 496 ms.
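If you’d rather measure than eyeball screenshots, the browser’s Resource Timing API can run the same A/B comparison from any modern developer console. A minimal sketch; both URLs are hypothetical placeholders for your two file locations:

HTML

<script>
  // Compare how long the same file takes to download from two locations.
  // Both URLs below are hypothetical placeholders.
  const urls = [
    'https://contoso.sharepoint.com/SiteAssets/jquery-2.1.1.js',
    'https://ajax.aspnetcdn.com/ajax/jquery-2.1.1.js'
  ];
  // 'no-cors' lets the requests complete even without CORS headers;
  // the Resource Timing API still records an overall duration.
  Promise.all(urls.map(u => fetch(u, { mode: 'no-cors', cache: 'no-store' })))
    .then(() => {
      for (const u of urls) {
        const entry = performance.getEntriesByName(u)[0];
        if (entry) console.log(u, Math.round(entry.duration) + ' ms');
      }
    });
</script>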

Is my data safe?

We take great care to protect the data that runs your business. Data stored in the Microsoft 365 CDN is encrypted both in transit and at rest, and access to data in the Microsoft 365 SharePoint CDN is secured by Microsoft 365 user permissions and token authorization. Requests for data in the Microsoft 365 SharePoint CDN must be referred (redirected) from your Microsoft 365 tenant or an authorization token won’t be generated.

To ensure that your data remains secure, we recommend that you never store user content or other sensitive data in a public CDN. Because access to data in a public CDN is anonymous, public CDNs should only be used to host generic content such as web script files, icons, images and other non-sensitive assets.

 Note

3rd party CDN providers may have privacy and compliance standards that differ from the commitments outlined by the Microsoft 365 Trust Center. Data cached through the CDN service may not conform to the Microsoft Data Processing Terms (DPT), and may be outside of the Microsoft 365 Trust Center compliance boundaries.

For in-depth information about privacy and data protection, see the documentation published by each Microsoft 365 CDN provider.

How can I secure my network with all these 3rd party services?

Using an extensive set of partner services allows Microsoft 365 to scale, meet availability requirements, and enhance the user experience. The 3rd party services Microsoft 365 leverages include both certificate revocation lists, such as crl.microsoft.com or sa.symcb.com, and CDNs, such as r3.res.outlook.com. Every CDN FQDN generated by Microsoft 365 is a custom FQDN for Microsoft 365. If you’re sent to an FQDN at the request of Microsoft 365, you can be assured that the CDN provider controls the FQDN and the underlying content at that location.

For customers that want to segregate requests destined for a Microsoft 365 datacenter from requests that are destined for a 3rd party, we’ve written up guidance on Managing Microsoft 365 endpoints.

Is there a list of all the FQDNs that leverage CDNs?

The list of FQDNs that leverage CDNs, and how they use them, changes over time. Refer to our published Microsoft 365 URLs and IP address ranges page for the latest FQDNs that leverage CDNs.

You can also use the Microsoft 365 IP Address and URL Web service to request the current Microsoft 365 URLs and IP address ranges formatted as CSV or JSON.

Can I use my own CDN and cache content on my local network?

We’re continually looking for new ways to support our customers’ needs and are currently exploring the use of caching proxy solutions and other on-premises CDN solutions.

Although it isn’t a part of the Microsoft 365 CDN, you can also use the Azure CDN for hosting custom web parts, libraries and other resource assets, which allows you to apply access keys to your CDN storage and exert greater control over your CDN configuration. Use of the Azure CDN isn’t free, and requires an Azure subscription. For more information on how to configure an Azure CDN instance, see Quickstart: Integrate an Azure storage account with Azure CDN.

I’m using Azure ExpressRoute for Microsoft 365, does that change things?

Azure ExpressRoute for Microsoft 365 provides a dedicated connection to Microsoft 365 infrastructure that is segregated from the public internet. This means that clients will still need to connect over non-ExpressRoute connections to connect to CDNs and other Microsoft infrastructure that isn’t explicitly included in the list of services supported by ExpressRoute. For more information about how to route specific traffic such as requests destined for CDNs, see Implementing ExpressRoute for Microsoft 365.

Can I use CDNs with SharePoint Server on-premises?

Using CDNs generally only makes sense in a SharePoint Online context, and they should be avoided with SharePoint Server. The advantages around geographic location don’t hold when the server is already located on-premises or geographically close to your users. Additionally, if a site is used over an internal network connection without Internet access, the browser can’t retrieve the CDN files, and pages that depend on them will break. Otherwise, you should use a CDN if a stable one is available for the libraries and files your site needs.

See also

Microsoft 365 Network Connectivity Principles

Assessing Microsoft 365 network connectivity

Managing Microsoft 365 endpoints

Microsoft 365 URLs and IP address ranges

Use the Microsoft 365 content delivery network with SharePoint Online

Microsoft Trust Center

Tune Microsoft 365 performance

Source :
https://learn.microsoft.com/en-us/microsoft-365/enterprise/content-delivery-networks?view=o365-worldwide

Site structure: the ultimate guide

3 May 2023

Your site needs to have a defined structure because, without it, it’ll just be a random collection of pages and blog posts. Your users need this structure to navigate on your site, to click from one page to another. Google also uses the structure of your site to determine what content is important and what is less relevant. This guide tells you everything you need to know about site structure.

What is site structure, and why is it important?

Site structure refers to organizing and arranging a website’s pages and content. It defines the information hierarchy within the site and serves as a roadmap for search engine crawlers. A well-structured site facilitates easy navigation, enhances user experience, and helps search engines like Google understand and effectively index the site’s content. This, in turn, can improve the site’s performance by making it easier for users to find and engage with the content. Ultimately, an optimized site structure helps achieve higher rankings, more traffic, and better conversion rates.

Importance for usability

The structure of your website significantly impacts the user experience (UX) for your visitors. If visitors can’t find the products and information they’re looking for, they’re unlikely to become regular visitors or customers. In other words, you should help them navigate your site. A good site structure will help with this.

Navigating should be easy. You need to categorize and link your posts and products so they are easy to find. New visitors should be able to grasp what you’re writing about or selling instantly.

Importance of your site structure for SEO

A solid site structure vastly improves your chances of ranking in search engines. There are three main reasons for this:

a. It helps Google ‘understand’ your site

The way you structure your site will give Google vital clues about where to find the most valuable content on your site. It helps search engines understand what your site is mainly about or what you’re selling. A decent site structure also enables search engines to find and index content quickly. A good structure should, therefore, lead to a higher ranking in Google.

b. It prevents you from competing with yourself

On your site, you might have blog posts that are quite similar. If, for example, you write a lot about SEO, you could have multiple blog posts about site structure, each covering a different aspect. Consequently, Google won’t be able to tell which of these pages is the most important, so you’ll be competing with your own content for high rankings. You should let Google know which page you consider most important. To do this, you need a good internal linking and taxonomy structure, so all those pages can work for you instead of against you.

c. It deals with changes on your website

The products you sell in your shop will likely evolve, and so will the content you write. You’ll probably add new product lines as old stock sells out, or write new articles that make old ones redundant. You don’t want Google to show outdated products or deleted blog posts, so you need to deal with these kinds of changes in the structure of your site.

Are you struggling with setting up your site’s structure? Don’t know the best strategy to link from one post to another? Check out our Site structure training, part of the Yoast SEO academy. Access to Yoast SEO academy is included in the price of Yoast SEO Premium. Before you know it, you’ll be able to improve your rankings by creating the best structure for your site!

How to set up the structure of your site

So, how do you construct a solid site structure? First, we’ll look at an ideal site structure and then explain how to achieve this for your site.

What’s an ideal site structure?

Let’s start by looking at an ideal situation: How should you organize your site if you’re starting from scratch? We think a well-organized website looks like a pyramid with several levels:

  1. Homepage
  2. Categories (or sections)
  3. Subcategories (only for larger sites)
  4. Individual pages and posts

The homepage should be at the top. Then, you have some sections or category pages beneath it. You should be able to file your content under one of these categories. You can divide these sections or categories into subcategories if your site is larger. Beneath your categories or subcategories are your pages and posts.

ideal site structure
An ideal site structure looks like a pyramid. On top, you’ll find the homepage and, right below, the main sections or categories, possibly followed by subcategories. On the ground, you’ll find all the individual posts and pages.
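Translated into markup, the pyramid maps naturally onto nested navigation. A minimal sketch, with category names invented for illustration:

HTML

<nav>
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/shoes/">Shoes</a> <!-- category -->
      <ul>
        <li><a href="/shoes/boots/">Boots</a></li> <!-- subcategory -->
        <li><a href="/shoes/heels/">Heels</a></li>
      </ul>
    </li>
    <li><a href="/clothing/">Clothing</a></li> <!-- another category -->
  </ul>
</nav>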

Your homepage

On top of the pyramid is the homepage. Your homepage should act as a navigation hub for your visitors. Among other things, that means you should link to your most important pages from your homepage. By doing this:

  1. Your visitors are more likely to end up on the pages you want them to end up on;
  2. You show Google that these pages are important.

Further down this article, we’ll help you determine which pages are essential to your business.

Be careful not to link to too many pages from your homepage, as that creates clutter, and a cluttered homepage doesn’t guide your visitors anywhere. If you want to optimize your homepage further, there are many other things you can do. Read our article on homepage SEO to find out what.

In addition to having a well-structured homepage, it’s also important to create a clear navigation path on your site. Your site-wide navigation consists of two main elements: the menu and the breadcrumbs.

The menu

First, let’s take a look at the menu. The website menu is the most common aid for navigation on your website, and you want to make the best possible use of it. Visitors use your menu to find things on your website. It helps them understand the structure of your website. That’s why the main categories on your site should all have a place in the menu on your homepage.

Furthermore, putting everything in just one menu isn’t always necessary. If you have a big site with lots of categories, a single menu may clutter your website and make your main menu a poor reflection of the rest of your site. Where it makes sense, creating a second menu is perfectly fine.

For instance, eBay has one menu at the top of the page – also called the top bar menu – and, in addition to that, a main menu. This top bar menu links to important pages that aren’t categories in the shop, like pages that relate to the visitor’s account on the site. The main menu reflects the most important product categories on eBay.

ebay's top menu with a colorful logo, links to various sections on the site and a big search bar
eBay has multiple ways to start navigating from the homepage

Finally, just like on your homepage, you shouldn’t add too many links to your menu. They will become less valuable for your users and search engines if you do.

Read about optimizing your website’s menu here, or enroll in our site structure training that includes many examples!

Adding breadcrumbs to your pages can make your site’s structure even clearer. Breadcrumbs are clickable links, usually at the top of a page or post. Breadcrumbs reflect the structure of your site. They help visitors determine where they are on your site. They improve your site’s user experience and SEO, as you can read in our guide on breadcrumbs.

You can use one of the many breadcrumb plugins for your WordPress site. You can also use our Yoast SEO plugin, as we’ve implemented a breadcrumb functionality in our plugin as well.
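For reference, a breadcrumb trail typically combines visible links with BreadcrumbList structured data from schema.org. Yoast SEO outputs this for you, so the hand-written sketch below (with invented URLs) is only meant to show the shape:

HTML

<nav aria-label="Breadcrumb">
  <a href="https://example.com/">Home</a> »
  <a href="https://example.com/shoes/">Shoes</a> » Boots
</nav>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Shoes", "item": "https://example.com/shoes/" },
    { "@type": "ListItem", "position": 3, "name": "Boots" }
  ]
}
</script>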

Taxonomies

WordPress uses so-called taxonomies to group content; other CMSs have similar systems. The word ‘taxonomy’ is a fancy term for a group of things — website pages, in this case — that have something in common. This is convenient because people looking for more information on the same topic can find similar articles more easily. You can group content in different ways. The default taxonomies in WordPress are categories and tags.

Categories

You should divide your site’s blog posts or products into several categories. If these categories grow too big, you should divide these categories into subcategories to clear things up again. For example, if you have a clothing store and sell shoes, you can divide this category into subcategories: ‘boots’, ‘heels’, and ‘flats’. These subcategories contain products, in this case, shoes, of that specific type.

Adding this hierarchy and categorizing your pages helps your user and Google make sense of every page you write. Add your main categories to your site’s menu when implementing your category structure.

Read more: Using category and tag pages for SEO »

Tags

Your site’s structure will also benefit from adding tags. The difference between a category and a tag mostly concerns structure. Categories are hierarchical: you can have subcategories and even sub-subcategories. Tags, however, don’t have that hierarchy. Tags say: “Hey, this article or product has a certain property that might interest a visitor.” Think of it like this: categories are the table of contents of your website, and tags are the index. A tag for the online clothing store mentioned above could be a brand, for instance, Timberland.

Keep reading: What is the difference between tags and categories? »

Try not to create too many tags. You’re not structuring anything if you add a new unique tag to every post or article. Ensure each tag is used at least twice, and your tags group articles that genuinely belong together.

Some WordPress themes display tags with each post, but some don’t. Ensure your tags are available to visitors somewhere, preferably at the bottom of your article or in the sidebar. Google isn’t the only one that likes tags: they are useful for visitors wanting to read more about the same topic.

Read on: Tagging posts properly for users and SEO »

Contextual internal linking

Site structure is all about grouping and linking the content on your site. Until now, we mostly discussed so-called classifying links: links on your homepage, navigation, and taxonomies. On the other hand, contextual links are internal links within the copy on your pages that refer to other pages within your site. For a link to be contextual, the page you link to should be relevant for someone reading the current page. If you look at the previous paragraph, for instance, we link to a post about tagging, so people can learn more about it if they’re interested.

Your most important pages are often very relevant to mention on several pages across your site, so you’ll link to them most often. Just remember that it’s not only the page you’re linking to that matters; the context of the link is important as well.

Google uses the context of your links to gather information about the page you’re linking to. It always uses the anchor text (or link text) to understand what the page you’re linking to is about. But the anchor text isn’t the only thing Google looks at. Nowadays, it also considers the content around the link to gather extra information. Google is becoming better at recognizing related words and concepts. Adding links from a meaningful context allows Google to value and rank your pages properly. Yoast SEO Premium makes internal linking a breeze by automatically suggesting relevant content from your site to link to.
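In practice, the value of a contextual link often comes down to the anchor text and the sentence around it. A small before-and-after sketch with an invented URL:

HTML

<!-- Weak: the anchor text tells readers and Google nothing about the target. -->
<p>We wrote about site structure. <a href="https://example.com/site-structure/">Click here</a>.</p>

<!-- Better: descriptive anchor text inside a relevant sentence. -->
<p>A clear hierarchy is the foundation of
  <a href="https://example.com/site-structure/">a well-organized site structure</a>.</p>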

Contextual linking for blogs

For blogs, you should write extensively on the topics you want to rank for. Write some main articles (your cornerstone articles) and various posts about subtopics of that topic. Then link from these related posts to your cornerstone articles, and from the cornerstone articles back to the related posts. This way, you’ll ensure that your most important pages get both the most links and the most relevant ones.

The following metaphor might help you understand this principle:

Imagine you’re looking at a map of a state or country. You’ll probably see many small towns and some bigger cities, all interconnected somehow. You’ll notice that small towns often have roads leading to the big cities. Those cities are your cornerstones, receiving the most links. The small towns are your posts on more specific topics. Some roads (links) lead to these smaller towns, but not as many as to the big cities.

internal links metaphor roads

Keep on reading: Internal linking why and how »

Contextual linking opportunities for online shops

Contextual internal linking works differently on an online store with very few to no pages that are exclusively meant to inform. You don’t explore a specific topic on your product pages: you’re selling a product. Therefore, on product pages, you mostly want to keep people on a page and convince them to buy the product. Consequently, contextual linking is far less prominent in this context. You generally shouldn’t add contextual links to your product descriptions because it could lead to people clicking away from the page.

There are just a couple of meaningful ways of adding contextual links to product pages for your ecommerce SEO (one of these is sketched in the markup after this list):

  1. link from a product bundle page to the individual products
  2. a ‘related items’ or ‘compare with similar items’ section
  3. a ‘customers also bought’ section
  4. a ‘product bundles’ or ‘frequently bought together’ section.
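A bare-bones sketch of what such a ‘customers also bought’ block could look like in markup; the product names and URLs are invented:

HTML

<section aria-label="Customers also bought">
  <h2>Customers also bought</h2>
  <ul>
    <li><a href="/shoes/accessories/leather-care-kit/">Leather care kit</a></li>
    <li><a href="/shoes/accessories/wool-socks/">Wool socks</a></li>
  </ul>
</section>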

Learn all about setting up a great (internal linking) structure for your online store with our Site structure training, part of our Yoast SEO academy training subscription. We’ve included lots of examples from real websites!

Landing pages

Landing pages are the pages you want your audience to find when they search for specific keywords you’ve optimized for. For instance, we want people who search for ‘free SEO training’ to end up on the page about our free training called ‘SEO for beginners’. You need to approach the content of your most important landing pages differently than your regular pages.

Here, we’ll discuss two types of landing pages: cornerstone pages and product landing pages. Both are pages you’d like people to land on from the search engines, but they require quite different approaches. But first, we’ll briefly go into search intent, because you have to know what your audience is really looking for.

Search intent

When setting up your site structure, you must consider search intent. It’s about what you think people are looking for when they enter a query into a search engine. What do people want to find? And: what do they expect to find?

Consider the different possibilities in search intent, as you might want to cater to several types on your site. Are people just looking for an answer to a question or a definition? Are they comparing products before purchase? Or do they intend to buy something right away? This is often reflected in the type of query they make. You can also use Google’s search results to create great content that fits someone’s needs.

Once you have an idea of the search intent, it’s essential to make sure your landing page fits it. Pages can answer multiple search intents, but you need a clear view of the intent behind at least your most important pages.

Read all about search intent and why it’s important for SEO.

Cornerstone content pages

Cornerstone articles are the most important informational articles on your website. Their focus is to provide the best and most complete information on a particular topic; their main goal is not to sell products.

Because of this focus, we usually think of blogs when discussing cornerstone content. Of course, that doesn’t mean it can only be a blog post. All different kinds of websites have cornerstone articles! Rule of thumb: if an article brings everything you know about a broad topic together, it’s a cornerstone content article.

This article explains what cornerstone content is and how to create it. Want to set up your cornerstone content strategy? Our Internal linking SEO workout makes the cornerstone content approach easy to implement!

Product landing pages

Product landing pages significantly differ from cornerstone articles. The latter are lengthy, whereas product landing pages shouldn’t be that long. Rather than complete articles, they should be focused. These pages only need to show what your visitors need to know to be convinced. They don’t need to hold all the information.

You want to rank with these pages, meaning they need content. Enough content for Google to understand what the page is about and what keyword it should rank for. Where cornerstone articles could be made up of thousands of words, a couple of hundred could be enough for product landing pages. The main focus of the content should be on your products.

Michiel listed all the essentials of your product landing page here.

Maintaining your site structure

Structuring or restructuring your content isn’t always a top priority among everything you have to do. Especially when you blog a lot or add other content regularly, it might feel like a chore. Although it isn’t always fun, you must do it, or your website will become messy. To prevent that from happening, fix your site structure and keep an eye on it while adding new content. Site structure should be part of your long-term SEO strategy.

When your business goal or website changes, your menu must also change. Planning things visually will pay off when you start thinking about restructuring your site. Make a flowchart.

Start with your new menu, one or two levels deep, and see where the pages you’ve created over the years fit in. You’ll find that some pages are still valid but no longer seem relevant to your menu. No problem: just be sure to link to them on related pages and in your sitemaps so that Google and your visitors can still find them. The flowchart will also show you any gaps in the site structure.

Read more: Optimizing your website menu »

Rethink your taxonomy

Creating an overview of your categories, subcategories, and products or posts will also help you to rethink your site’s taxonomy. This could be a simple spreadsheet, but you can use more visual tools like LucidChart or MindNode.

Do your product categories and subcategories provide a logical overview of your product range or your posts and pages? Perhaps you’ve noticed somewhere down the line that one category has been far more successful than others, or you wrote many blog posts on one subject and very few on others.

If one category grows much larger than others, your site’s pyramid could be thrown off balance. Think about splitting this category into different categories. But, if some product lines end up much smaller than others, you might want to merge them. Don’t forget to redirect the ones you delete.

If you have built your HTML sitemap manually, update that sitemap after changing your site structure. In the far more likely event that you have an XML sitemap, re-submit it to Google Search Console.
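If you do maintain an HTML sitemap by hand, it’s simply a page of grouped links that mirrors your structure. A minimal sketch with invented URLs:

HTML

<h1>Sitemap</h1>
<h2>Shoes</h2>
<ul>
  <li><a href="/shoes/boots/">Boots</a></li>
  <li><a href="/shoes/heels/">Heels</a></li>
</ul>
<h2>Blog</h2>
<ul>
  <li><a href="/blog/site-structure-guide/">Site structure: the ultimate guide</a></li>
</ul>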

Keep reading: The structure of a growing blog »

Clean up outdated content

You might be able to update and republish some outdated articles to make them relevant again. If an article is outdated, but no one reads it anyway, you might delete it. This could clean up your site nicely.

In that case, you should know never to delete a page or article without thinking it through. If Google can’t find a page it expects, your site serves a 404 error page. Both the search engine and your visitor will see an error message saying the page doesn’t exist, which is a bad experience and, thus, bad for your SEO.

Be smart about this! You need to redirect the URL of the page you’re deleting properly so your user (and Google) lands on a different page that is relevant to them. That could even improve your SEO!

Got some old content to clean up on your site? Sort out hidden pages and dead ends in four easy steps with our orphaned content SEO workout, available in Yoast SEO Premium.

Avoid keyword cannibalization

Your website is about a specific topic, which could be quite broad or rather specific. While adding content, you should be aware of keyword cannibalization. If you optimize your articles for keywords that are all too similar, you’ll be devouring your chances of ranking in Google. If you optimize different articles for similar key terms, you’ll be competing with yourself, making both pages rank lower.

You’ll have some work to do if you suffer from keyword cannibalization. In short, you should research the performance of your content and probably merge and redirect some of it. When merging posts, we recommend creating a new draft by cloning one of the original posts with the free Yoast Duplicate Post plugin. This allows you to work on your merged post without making these changes to a live post. Read the guide by Joost to learn more about keyword cannibalization and how to fix it.

Feeling a bit overwhelmed by all this advice? Yoast SEO has some handy tools to make internal linking so much easier.

Yoast SEO’s text link counter visualizes your links so you can optimize them. It shows the internal links in a post and the internal links to a post. This tool can enhance your site structure by improving the links between your related posts. Make sure your cornerstones get the most (relevant) links! You can identify your cornerstones by finding them in the column with the pyramid icon.

Quickly see which posts have internal links pointing to them with the text link counter in Yoast SEO

Yoast SEO Premium helps you with your internal linking as well. Our internal linking suggestions tool will show you which articles are related to the one you’re writing, so you can easily link to them: just by dragging the link into your editor!

internal linking suggestions in Yoast SEO sidebar
The internal linking suggestions even include other content types

Moreover, our tool allows you to indicate which articles you consider cornerstone content on your site. Those articles will be shown at the top of the internal linking suggestions. You’ll never forget to link to them again.

Read on: How to use Yoast SEO for your cornerstone content strategy »

The importance of site structure

As we have seen, there are several reasons why site structure is important. A good site structure helps both your visitors and Google navigate your site. It makes it easier to implement changes and prevents competing with your content. So use the tips and pointers in this guide to check and improve your site structure. That way, you’ll stay on top and keep your website from growing out of control!

Want to improve your site structure but don’t know where to start? Get Yoast SEO Premium and get loads of helpful tools and guidance, including free access to Yoast SEO Academy, our Site structure training, and our SEO workouts!

Keep on reading: WordPress SEO: The definitive guide to higher rankings for WordPress sites »

Marieke van de Rakt

Marieke was head of strategy and CEO at Yoast. After the sale of Yoast to Newfold Digital in 2021, she stepped back, and as of 2023 she is no longer active at Yoast. Together with her husband Joost, Marieke actively invests in and advises several startups through their company Emilia Capital.

Source :
https://yoast.com/site-structure-the-ultimate-guide/

What is on-page SEO?

28 June 2023

In SEO, there are on-page factors and off-page factors. On-page SEO factors are aspects of your website that you can optimize for better search rankings. It’s about improving things like your technical set-up, your content, and how user-friendly your site is. In this post, we’ll explain all about on-page SEO and how it differs from off-page SEO, and we’ll talk about some on-page optimizations that can help you to rank better.

On-page and off-page SEO: what’s the difference?

Every SEO strategy is focused on ranking as high as possible in the search engines. To do this, we all try to design and develop a website that Google’s algorithm — and people! — will love. That’s basically what SEO is about. The factors in Google’s algorithm can be divided into two categories that will determine the ranking of your website: on-page factors and off-page factors.

On-page SEO factors all have to do with elements on your website. For instance, things you work on to improve your E-E-A-T also fall in this category. Some of the most important on-page SEO factors are:

  • Your site set-up and technical features, site speed in particular
  • The quality of your content and use of keywords
  • How you use additional media, such as images and videos
  • Your site structure and internal linking
  • Structured data and search appearance
  • Your URL structure
  • User experience

Meanwhile, off-page SEO looks at what happens away from your website. Some off-page SEO factors include:

  • Relevant links from other websites leading to your site
  • Social media activity
  • Business and map listings
  • External marketing activities

Pro tip: Find out more about on-page SEO with our front-end SEO inspector! You can use the front-end inspector tool in Yoast SEO Premium to explore the SEO data, metadata and schema output for pages on your site. It’s a great way to get to grips with your on-page SEO.

Importance of on-page SEO

On-page SEO consists of all the elements of SEO that you have control over. If you own a website, you can control the technical issues and the quality of your content. We believe you should be able to tackle all of these factors as they’re in your own hands. Remember: if you create an excellent website, it will start ranking.

Focusing on on-page SEO will also increase the chance that your off-page SEO strategy will be successful. Link building with a crappy site is very tough. Nobody wants to link to poorly written articles or sites that don’t work correctly.

How to optimize on-page SEO factors

1. Make sure search engines can crawl and index your site

If you’re unfamiliar with crawlability and indexing, here’s a quick explanation of what they are and what they have to do with Google. To show your page in the search results, Google must first know about that page: it has to be indexed by Google, meaning the page has been stored in their index. For that to be possible, you must make sure you’re not blocking Google from indexing your post or your whole site. So check that you’re not unintentionally doing that (we still see this happening!), and ensure your site is indexed.

Although this isn’t technically a ranking factor, getting your site into the search results requires it, so we thought it should be included here.
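A quick place to look is the robots meta tag in a page’s head. This is a generic illustration, not a Yoast-specific setting:

HTML

<!-- If this tag is present, Google will drop the page from its index: -->
<meta name="robots" content="noindex">

<!-- Pages that should rank need the tag removed, or an explicit: -->
<meta name="robots" content="index, follow">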

2. Invest time in creating quality content based on solid keyword research

Why do you think people visit your site? Most likely because it contains the information they’re looking for. Therefore it’s essential to write excellent content that corresponds with their needs. Search engines like Google also read your text. Which site ranks highest for a specific search term is primarily based on a website’s content. That’s why your content should be informative, easy to read, and focused on the right keywords that your audience uses.

Aside from creating quality content, you must remove or remedy low-quality pages. So-called thin content can harm your SEO. Take time to find these pages regularly and improve, merge, or remove them to keep your content in good shape.

Learn about writing high-quality content in our Ultimate guide to SEO copywriting, or take our SEO copywriting training course.

3. Improve your site speed

A significant on-page ranking factor is site speed. Users don’t want to wait for pages to load, so Google tends to rank fast-loading sites higher. If you’re unsure how fast (or slow) your site is, check out your Core Web Vital scores using the report in Google Search Console. This helpful tool will point out areas where your site speed can be improved so you know what to work on.

If you’re tech-savvy, you can probably handle this on your own. If you’re unsure where to start, our Technical SEO training can help you.
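As a starting point, a few speed wins live directly in your HTML. A minimal sketch with placeholder file names; real improvements usually also involve server and asset optimization:

HTML

<!-- Open connections early to third-party origins you'll fetch from: -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<!-- Preload the stylesheet that blocks the first paint: -->
<link rel="preload" href="/css/main.css" as="style">

<!-- Don't block HTML parsing while scripts download: -->
<script src="/js/app.js" defer></script>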

4. Get your site structure and internal linking right

A good site structure helps Google (and users) understand your site and navigate your content. And when it comes to building that site structure, internal linking is the way to do it. Firstly, channel many internal links to your most important content; we call those pages cornerstone content. Secondly, tidy up pages that aren’t getting many (if any) internal links; those pages are what we refer to as orphaned content. Decide whether to improve those pages and add more internal links pointing to them, or remove them altogether.

Yoast SEO Premium has two SEO workouts to help you improve your site structure and internal linking. Using the workouts can help you to make big improvements quickly, so give them a go!

5. Optimize your use of images and videos

You’ll want to include images on your site to make it attractive, and maybe some videos too. Doing that wrong can harm your SEO, but doing it correctly comes with some SEO benefits.

High-quality images are usually large files that can slow your site down, and that’s a problem. Using smaller image files and giving them descriptive names, captions, and alt tags will benefit your SEO. Plus, there are additional benefits. For one, you’ll make your site more accessible, which helps a wider audience. For another, your images will have a chance of ranking in the Google Image search results. Read more about these topics in our posts about image SEO and alt tags.
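Putting those recommendations together, an image tag tuned for SEO might look like this sketch (the file name, alt text, and dimensions are invented):

HTML

<!-- Descriptive file name and alt text, explicit dimensions to avoid
     layout shift, and lazy loading for images below the fold: -->
<img src="/images/brown-leather-boots.jpg"
     alt="Pair of brown leather boots on a wooden floor"
     width="800" height="600" loading="lazy">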

Adding videos to your site is a bit more complicated than images. And ranking your videos on Google (or YouTube) comes with its own set of challenges. We’ve got a great series of posts all about video SEO, if you’d like to learn more about optimizing in this area. There’s also a dedicated Yoast Video SEO plugin, if you’re serious about getting your videos ranking.

6. Create a persuasive search appearance

How your site looks in the search results is vital for SEO. While the search results aren’t part of your site, the things you do to optimize your search appearance are. Therefore, we consider SEO titles, meta descriptions, and structured data part of on-page SEO.

Optimize your SEO title and meta description, and then add structured data for results that stand out

Optimizing the text for your search snippets is fairly straightforward. Adding structured data can be trickier. Good to know: Yoast SEO can help with all these tasks. With checks and previews to help you, getting your SEO titles and meta descriptions right couldn’t be easier. And when it comes to structured data, Yoast does all the hard work for you — all you need to do is select the content type and fill in the blanks.
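For orientation, the raw ingredients behind a search snippet look roughly like this in a page’s head. Yoast SEO generates and manages these for you; the title, description, and schema values below are invented:

HTML

<head>
  <title>What is on-page SEO? • Example Blog</title>
  <meta name="description" content="Learn what on-page SEO is, how it differs from off-page SEO, and which factors you can optimize.">
  <!-- Structured data, output as JSON-LD: -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What is on-page SEO?"
  }
  </script>
</head>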

7. Make your URLs SEO-friendly

A well-crafted URL structure helps your on-page SEO — it’s like giving your web pages a good road map. Think of it as a friendly address that guides search engines and invites users to explore your content. Creating SEO-friendly URLs makes it easier for humans and search engines to understand what your page is all about. Opt for concise and descriptive URLs that include relevant keywords, as they provide a clear signpost. A clean and organized URL structure enhances navigation, making it easier for everyone to understand your website. Don’t forget to keep it short and readable.
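Side by side, the difference is easy to see; both URLs are invented for illustration:

HTML

<!-- Hard to read, no signpost for users or search engines: -->
<!-- https://example.com/index.php?id=8472&cat=3 -->

<!-- Concise, descriptive, and keyword-rich: -->
<!-- https://example.com/shoes/boots/ -->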

8. Design an excellent user experience

The last thing we want to mention is user experience. Simply put, users need to understand your website easily. They should be able to find what they want in a heartbeat. They should know where to click and how to navigate through your site. And your site should be fast! A beautifully designed website is nice, but you should make it your top priority to create a user-friendly website first.

If you want to learn more about combining SEO and UX to get more people to your site, we’d advise you to look at our other articles on user experience. Or check out our all-around SEO training course.

To conclude

We’ve talked about the most important on-page SEO factors. First, ensure that your website works correctly and that your technical SEO is up to par. Secondly, create content that is user-centered and focused on the right keywords. Thirdly, work on the usability and speed of your site to help users and search engines find their way around your website.

As these factors are all a part of your site, you can work on them to ensure your on-page SEO is top-notch! That being said, do remember to also work on your off-page SEO. Although you may not have total control over these factors, you can still put some effort into creating that exposure on other sites too!

Read more: What is off-page SEO? »

Edwin Toonen

Edwin is a strategic content specialist. Before joining Yoast, he spent years honing his skill at The Netherlands’ leading web design magazine.

Source :
https://yoast.com/what-is-onpage-seo/