How to Promote Your Blog Without Social Media?

How do you promote your blog without social media? The most reliable way to boost blog traffic is search engine optimization (SEO): writing great content that people want to read, creating high-quality images for your posts, and making sure your site loads quickly. Still, you need to actively promote your blog to attract more visitors, traffic, and clicks. Most webmasters ask themselves, "How do I get people to read my blog?" The answer is straightforward: follow the tips below. In this post, seobase explains how to promote your blog without social media, how to get your blog noticed, and how to get traffic to your website without social media.

Comment and Engage With Other Blogs.

There are compelling ways to boost your blog without using social media, and commenting on and engaging with other blogs is key to getting your blog noticed. Some web admins publish blog posts and overlook that community is essential to achieving their goals.

One of the most effective ways to promote your blog without social media is to visit and read other bloggers’ content, then comment and engage with it. All you need to do is visit other blogs and leave thoughtful comments; as a result, your fellow bloggers might return the favor on your blog. You can also share links to their posts or ask questions on forums.


Write Authentic Blog Content.

To promote your blog posts without social media, you need some more creative ideas; one of them is unique content. Writing unique, authentic content attracts readers, and catchy headlines and optimized blog content draw even more of them. More readers mean more traffic, more clicks, and more conversions to customers. Also, do not overlook including the right keywords. Learn how to set up a successful SEO keyword strategy.

If you want to write authentic blog content, think about what you would say to a friend who was writing a similar post. This helps you avoid sounding too much like everyone else. When you start writing, structure the content so readers can find answers to their queries without having to leave your site quickly or look for another blog. To learn how to promote your blog without social media using unique content, check out new content ideas.


Share Your Blog Posts on Pinterest.

Suppose you don’t use social media platforms, or don’t yet have social media accounts to support your content promotion. Here’s a vital and valuable tip for getting traffic to your website without social media: share your posts on Pinterest to boost your blog.

Just make sure you use the right hashtags and keywords so people can find your content easily. Pinterest users engage with niche topics, which helps get your blog noticed, improves your website’s position, and can lift your rank in Google’s SERPs. No worries: there are plenty of places to promote your blog without social media.


Write Guest Posts.

If you write guest posts, you not only gain exposure for your own website but also help others by contributing to their sites. This is one of the easiest ways to generate new business leads and get people to read your blog. As a result, you will get traffic to your website without social media.

If you’re using guest posts to get your blog noticed and promote it without social media, you may face one problem: getting people from your guest post to your website is a bit challenging. According to Backlinko, one industry study found that the average guest post brings in only 50 visitors. To solve this, use Guest Post Bonuses: with a Guest Post Bonus, you give readers and webmasters a reason to visit your website. Read Why Everyone Ignores Your Guest Post Outreach Email.

Start a Podcast.

A podcast is an audio show that usually features interviews with experts in various fields. You can record episodes yourself or use services such as Blubrry.com. Once your episodes are ready, you need to find a platform to host them. Several options are available, including iTunes, SoundCloud, Stitcher, Google Play Music, and more.

A podcast may be one of the best places to promote your blog. The podcast bonus strategy is similar to the Guest Post Bonus strategy discussed above: instead of creating a reward for each guest post, you create a set of bonuses for each podcast you appear on as a guest.

Furthermore, you can announce these rewards through email campaigns. But how do you start? Follow these steps to implement a successful podcast strategy:

  • First, create content that your podcast listeners will care about.
  • Then, assign podcast rewards to what you’ll talk about in the podcast.
  • After that, pitch it to the podcast host.
  • Finally, host your rewards-section landing page at a URL that’s easy to remember and type.


How to Promote Your Blog Without Social Media: Conclusion.

Blogging without social media is not very popular, and few bloggers try it. However, it is a very successful strategy for boosting your blog, and there are a lot of places to promote your blog that no one has touched yet. Competition in social media blog marketing is tough and fierce, and despite its outstanding results, it takes a lot of time and effort to earn a high ranking on Google.

So, at some point, you have to find alternative ways to get traffic to your website and promote an article or blog without social media. seobase always offers solutions that make it easier for websites to rank on Google. However, always remember to keep your blog SEO-optimized. You can use the best online SEO tools from seobase to constantly improve your blog, measure your SEO strategy’s success and effectiveness, and make your blog posts friendly to Google’s algorithms.

Source :
https://seobase.com/how-to-promote-your-blog-without-social-media/

#StopRansomware: Vice Society

Summary

Actions to take today to mitigate cyber threats from ransomware:

• Prioritize and remediate known exploited vulnerabilities.
• Train users to recognize and report phishing attempts.
• Enable and enforce multifactor authentication.

Note: This joint Cybersecurity Advisory (CSA) is part of an ongoing #StopRansomware effort to publish advisories for network defenders that detail various ransomware variants and ransomware threat actors. These #StopRansomware advisories include recently and historically observed tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) to help organizations protect against ransomware. Visit stopransomware.gov to see all #StopRansomware advisories and to learn more about other ransomware threats and no-cost resources.

The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) are releasing this joint CSA to disseminate IOCs and TTPs associated with Vice Society actors identified through FBI investigations as recently as September 2022. The FBI, CISA, and the MS-ISAC have recently observed Vice Society actors disproportionately targeting the education sector with ransomware attacks.

Over the past several years, the education sector, especially kindergarten through twelfth grade (K-12) institutions, has been a frequent target of ransomware attacks. Impacts from these attacks have included restricted access to networks and data, delayed exams, canceled school days, and unauthorized access to and theft of personal information regarding students and staff. The FBI, CISA, and the MS-ISAC anticipate attacks may increase as the 2022/2023 school year begins and criminal ransomware groups perceive opportunities for successful attacks. School districts with limited cybersecurity capabilities and constrained resources are often the most vulnerable; however, the opportunistic targeting often seen with cyber criminals can still put school districts with robust cybersecurity programs at risk. K-12 institutions may be seen as particularly lucrative targets due to the amount of sensitive student data accessible through school systems or their managed service providers.

The FBI, CISA, and the MS-ISAC encourage organizations to implement the recommendations in the Mitigations section of this CSA to reduce the likelihood and impact of ransomware incidents.

Download the PDF version of this report: pdf, 521 KB

Technical Details

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 11. See MITRE ATT&CK for Enterprise for all referenced tactics and techniques.

Vice Society is an intrusion, exfiltration, and extortion hacking group that first appeared in summer 2021. Vice Society actors do not use a ransomware variant of unique origin. Instead, the actors have deployed versions of Hello Kitty/Five Hands and Zeppelin ransomware, but may deploy other variants in the future.

Vice Society actors likely obtain initial network access through compromised credentials or by exploiting internet-facing applications [T1190]. Prior to deploying ransomware, the actors spend time exploring the network, identifying opportunities to increase accesses, and exfiltrating data [TA0010] for double extortion, a tactic whereby actors threaten to publicly release sensitive data unless a victim pays a ransom. Vice Society actors have been observed using a variety of tools, including SystemBC, PowerShell Empire, and Cobalt Strike, to move laterally. They have also used “living off the land” techniques targeting the legitimate Windows Management Instrumentation (WMI) service [T1047] and tainting shared content [T1080].

Vice Society actors have been observed exploiting the PrintNightmare vulnerability (CVE-2021-1675 and CVE-2021-34527) to escalate privileges [T1068]. To maintain persistence, the criminal actors have been observed leveraging scheduled tasks [T1053], creating undocumented autostart Registry keys [T1547.001], and pointing legitimate services to their custom malicious dynamic link libraries (DLLs) through a tactic known as DLL side-loading [T1574.002]. Vice Society actors attempt to evade detection through masquerading their malware and tools as legitimate files [T1036], using process injection [T1055], and likely use evasion techniques to defeat automated dynamic analysis [T1497]. Vice Society actors have been observed escalating privileges, then gaining access to domain administrator accounts, and running scripts to change the passwords of victims’ network accounts to prevent the victim from remediating.

Indicators of Compromise (IOCs)

Email Addresses
v-society.official@onionmail[.]org
ViceSociety@onionmail[.]org
OnionMail email accounts in the format of [First Name][Last Name]@onionmail[.]org
TOR Address
http://vsociethok6sbprvevl4dlwbqrzyhxcxaqpvcqt5belwvsuxaxsutyad[.]onion
IP Addresses for C2 (with confidence level)
5.255.99[.]59 (high confidence)
5.161.136[.]176 (medium confidence)
198.252.98[.]184 (medium confidence)
194.34.246[.]90 (low confidence)
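As a minimal sketch of how a defender might use the IOCs above, the following Node.js snippet scans individual log lines for the listed C2 IPs and OnionMail addresses. Note the published IOCs are defanged with “[.]”, so they are refanged before matching; the function names here are illustrative, not part of any standard tool.

```javascript
// Hypothetical IOC matcher for the addresses listed above.
// IOCs are published defanged ("[.]"), so refang before matching.
const C2_IPS = [
  '5.255.99[.]59',
  '5.161.136[.]176',
  '198.252.98[.]184',
  '194.34.246[.]90',
].map((ip) => ip.replace(/\[\.\]/g, '.'));

// Matches any local-part at onionmail.org, per the format noted above.
const ONIONMAIL_RE = /[A-Za-z0-9._-]+@onionmail\.org/g;

// Return every IOC hit found in a single log line.
function findIocs(line) {
  const hits = [];
  for (const ip of C2_IPS) {
    if (line.includes(ip)) hits.push(ip);
  }
  for (const addr of line.match(ONIONMAIL_RE) || []) {
    hits.push(addr);
  }
  return hits;
}
```

In practice you would feed each boundary-log line through `findIocs` and alert on any non-empty result.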

See Table 1 for file hashes obtained from FBI incident response investigations in September 2022.

Table 1: File Hashes as of September 2022

MD5
fb91e471cfa246beb9618e1689f1ae1d

SHA1
a0ee0761602470e24bcea5f403e8d1e8bfa29832
3122ea585623531df2e860e7d0df0f25cce39b21
41dc0ba220f30c70aea019de214eccd650bc6f37
c9c2b6a5b930392b98f132f5395d54947391cb79

MITRE ATT&CK TECHNIQUES

Vice Society actors have used ATT&CK techniques, similar to Zeppelin techniques, listed in Table 2.

Table 2: Vice Society Actors ATT&CK Techniques for Enterprise

Initial Access
• Exploit Public-Facing Application (T1190): Vice Society actors exploit vulnerabilities in internet-facing systems to gain access to victims’ networks.
• Valid Accounts (T1078): Vice Society actors obtain initial network access through compromised valid accounts.

Execution
• Windows Management Instrumentation (WMI) (T1047): Vice Society actors leverage WMI, a native Windows administration feature, as a means of “living off the land” to execute malicious commands.
• Scheduled Task/Job (T1053): Vice Society actors have used malicious files that create component task schedule objects, which are often meant to register a specific task to autostart on system boot. This facilitates recurring execution of their code.

Persistence
• Modify System Process (T1543.003): Vice Society actors encrypt Windows Operating functions to preserve compromised system functions.
• Registry Run Keys/Startup Folder (T1547.001): Vice Society actors have employed malicious files that create an undocumented autostart Registry key to maintain persistence after boot/reboot.
• DLL Side-Loading (T1574.002): Vice Society actors may directly side-load their payloads by planting their own DLL, then invoking a legitimate application that executes the payload within that DLL. This serves as both a persistence mechanism and a means to masquerade actions under legitimate programs.

Privilege Escalation
• Exploitation for Privilege Escalation (T1068): Vice Society actors have been observed exploiting the PrintNightmare vulnerability (CVE-2021-1675 and CVE-2021-34527) to escalate privileges.

Defense Evasion
• Masquerading (T1036): Vice Society actors may attempt to manipulate features of the files they drop in a victim’s environment to mask the files or make them appear legitimate.
• Process Injection (T1055): Vice Society artifacts have been analyzed to reveal the ability to inject code into legitimate processes for evading process-based defenses. This tactic has other potential impacts, including the ability to escalate privileges or gain additional accesses.
• Sandbox Evasion (T1497): Vice Society actors may have included sleep techniques in their files to hinder common reverse engineering or dynamic analysis.

Lateral Movement
• Taint Shared Content (T1080): Vice Society actors may deliver payloads to remote systems by adding content to shared storage locations such as network drives.

Exfiltration
• Exfiltration (TA0010): Vice Society actors are known for double extortion, a second attempt to force a victim to pay by threatening to expose sensitive information if the victim does not pay a ransom.

Impact
• Data Encrypted for Impact (T1486): Vice Society actors have encrypted data on target systems or on large numbers of systems in a network to interrupt availability of system and network resources.
• Account Access Removal (T1531): Vice Society actors run a script to change passwords of victims’ email accounts.

Mitigations

The FBI and CISA recommend organizations, particularly the education sector, establish and maintain strong liaison relationships with the FBI Field Office in their region and their regional CISA Cybersecurity Advisor. The location and contact information for FBI Field Offices and CISA Regional Offices can be located at www.fbi.gov/contact-us/field-offices and www.cisa.gov/cisa-regions, respectively. Through these partnerships, the FBI and CISA can assist with identifying vulnerabilities to academia and mitigating potential threat activity. The FBI and CISA further recommend that academic entities review and, if needed, update incident response and communication plans that list actions an organization will take if impacted by a cyber incident.

The FBI, CISA, and the MS-ISAC recommend network defenders apply the following mitigations to limit potential adversarial use of common system and network discovery techniques and to reduce the risk of compromise by Vice Society actors:

Preparing for Cyber Incidents

  • Maintain offline backups of data, and regularly test backup and restoration. By instituting this practice, an organization ensures it will not be severely interrupted or left with irretrievable data.
  • Ensure all backup data is encrypted, immutable (i.e., cannot be altered or deleted), and covers the entire organization’s data infrastructure. Ensure your backup data is not already infected.
  • Review the security posture of third-party vendors and those interconnected with your organization. Ensure all connections between third-party vendors and outside software or hardware are monitored and reviewed for suspicious activity.
  • Implement listing policies for applications and remote access that only allow systems to execute known and permitted programs under an established security policy.
  • Document and monitor external remote connections. Organizations should document approved solutions for remote management and maintenance, and immediately investigate if an unapproved solution is installed on a workstation.
  • Implement a recovery plan to maintain and retain multiple copies of sensitive or proprietary data and servers in a physically separate, segmented, and secure location (i.e., hard drive, storage device, the cloud).

Identity and Access Management

  • Require all accounts with password logins (e.g., service account, admin accounts, and domain admin accounts) to comply with National Institute of Standards and Technology (NIST) standards for developing and managing password policies.
    • Use longer passwords consisting of at least 8 and no more than 64 characters;
    • Store passwords in hashed format using industry-recognized password managers;
    • Add password user “salts” to shared login credentials;
    • Avoid reusing passwords;
    • Implement multiple failed login attempt account lockouts;
    • Disable password “hints”;
    • Refrain from requiring password changes more frequently than once per year unless a password is known or suspected to be compromised.
      Note: NIST guidance suggests favoring longer passwords instead of requiring regular and frequent password resets. Frequent password resets are more likely to result in users developing password “patterns” cyber criminals can easily decipher.
    • Require administrator credentials to install software.
  • Require phishing-resistant multifactor authentication for all services to the extent possible, particularly for webmail, virtual private networks, and accounts that access critical systems.
  • Review domain controllers, servers, workstations, and active directories for new and/or unrecognized accounts.
  • Audit user accounts with administrative privileges and configure access controls according to the principle of least privilege. 
  • Implement time-based access for accounts set at the admin level and higher. For example, the Just-in-Time (JIT) access method provisions privileged access when needed and can support enforcement of the principle of least privilege (as well as the Zero Trust model). This is a process where a network-wide policy is set in place to automatically disable admin accounts at the Active Directory level when the account is not in direct need. Individual users may submit their requests through an automated process that grants them access to a specified system for a set timeframe when they need to support the completion of a certain task.
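The password-length and lockout recommendations above can be sketched in code. This is a hypothetical illustration of the policy, not an implementation from the advisory; the function names and the lockout threshold are assumptions.

```javascript
// Hypothetical sketch of the NIST-style length rule above:
// at least 8 and at most 64 characters. This checks length only;
// passwords must still be stored salted and hashed.
function passwordLengthOk(password) {
  return password.length >= 8 && password.length <= 64;
}

// Simple counter for the "multiple failed login attempt account
// lockouts" recommendation; the default threshold is an assumption.
function makeLockout(maxAttempts = 5) {
  const failures = new Map();
  return {
    recordFailure(user) {
      failures.set(user, (failures.get(user) || 0) + 1);
    },
    isLockedOut(user) {
      return (failures.get(user) || 0) >= maxAttempts;
    },
  };
}
```

In a real system these checks would live in your identity provider or directory policy rather than application code.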

Protective Controls and Architecture

  • Segment networks to prevent the spread of ransomware. Network segmentation can help prevent the spread of ransomware by controlling traffic flows between—and access to—various subnetworks and by restricting adversary lateral movement.
  • Identify, detect, and investigate abnormal activity and potential traversal of the indicated ransomware with a networking monitoring tool. To aid in detecting the ransomware, implement a tool that logs and reports all network traffic, including lateral movement activity on a network. Endpoint detection and response (EDR) tools are particularly useful for detecting lateral connections as they have insight into common and uncommon network connections for each host.
  • Install, regularly update, and enable real time detection for antivirus software on all hosts.
  • Secure and closely monitor remote desktop protocol (RDP) use.
    • Limit access to resources over internal networks, especially by restricting RDP and using virtual desktop infrastructure. If RDP is deemed operationally necessary, restrict the originating sources and require MFA to mitigate credential theft and reuse. If RDP must be available externally, use a VPN, virtual desktop infrastructure, or other means to authenticate and secure the connection before allowing RDP to connect to internal devices. Monitor remote access/RDP logs, enforce account lockouts after a specified number of attempts to block brute force campaigns, log RDP login attempts, and disable unused remote access/RDP ports.

Vulnerability and Configuration Management

  • Keep all operating systems, software, and firmware up to date. Timely patching is one of the most efficient and cost-effective steps an organization can take to minimize its exposure to cybersecurity threats. Organizations should prioritize patching of vulnerabilities on CISA’s Known Exploited Vulnerabilities catalog.
  • Disable unused ports.
  • Consider adding an email banner to emails received from outside your organization.
  • Disable hyperlinks in received emails.
  • Disable command-line and scripting activities and permissions. Privilege escalation and lateral movement often depend on software utilities running from the command line. If threat actors are not able to run these tools, they will have difficulty escalating privileges and/or moving laterally.
  • Ensure devices are properly configured and that security features are enabled.
  • Disable ports and protocols that are not being used for a business purpose (e.g., RDP Transmission Control Protocol Port 3389).
  • Restrict Server Message Block (SMB) Protocol within the network to only access servers that are necessary, and remove or disable outdated versions of SMB (i.e., SMB version 1). Threat actors use SMB to propagate malware across organizations.


REPORTING

The FBI is seeking any information that can be shared, to include boundary logs showing communication to and from foreign IP addresses, a sample ransom note, communications with Vice Society actors, Bitcoin wallet information, decryptor files, and/or a benign sample of an encrypted file.

The FBI, CISA, and the MS-ISAC strongly discourage paying ransom as payment does not guarantee victim files will be recovered. Furthermore, payment may also embolden adversaries to target additional organizations, encourage other criminal actors to engage in the distribution of ransomware, and/or fund illicit activities. Regardless of whether you or your organization have decided to pay the ransom, the FBI and CISA urge you to promptly report ransomware incidents to a local FBI Field Office, or to CISA at report@cisa.gov or (888) 282-0870. SLTT government entities can also report to the MS-ISAC (SOC@cisecurity.org or 866-787-4722).

DISCLAIMER

The information in this report is being provided “as is” for informational purposes only. The FBI, CISA, and the MS-ISAC do not endorse any commercial product or service, including any subjects of analysis. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply endorsement, recommendation, or favoring by the FBI, CISA, or the MS-ISAC.

Revisions

September 6, 2022: Initial Version

Source :
https://www.cisa.gov/uscert/ncas/alerts/aa22-249a

How To Improve Cumulative Layout Shift (CLS) on WordPress

What is Cumulative Layout Shift (CLS) and Why it Matters?

The Cumulative Layout Shift is a Core Web Vitals metric that measures how visually stable a page is. Visual stability is judged by how many unexpected layout shifts occur while the user is not interacting with the page. Every time content shifts without user input (not because you clicked a link, for example), it counts as a layout shift.

The sum of all these shifts determines the Cumulative Layout Shift score.
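In code, that accumulation looks roughly like the sketch below, which mirrors the browser’s layout-shift entries: each entry carries a shift score (`value`), and shifts that follow recent user input are excluded. The function name is illustrative; production code would use a library such as web-vitals.

```javascript
// Minimal sketch of CLS accumulation: sum the "value" of every
// layout-shift entry that wasn't triggered by recent user input.
function accumulateCls(entries) {
  let cls = 0;
  for (const entry of entries) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
    }
  }
  return cls;
}

// In a browser you would feed real entries from a PerformanceObserver:
// new PerformanceObserver((list) => {
//   console.log('CLS so far:', accumulateCls(list.getEntries()));
// }).observe({ type: 'layout-shift', buffered: true });
```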

How many times have you been reading an article, and the content moved down because of some new ads? Or, have you ever tried to click on a button and ended up clicking on another link because a new big image suddenly pushed the content down?

All these examples are layout shifts — they’re caused by some elements on the page being unstable and changing their position on the page.
Elements change their position due to different reasons. For instance, a new image or an ad loading above-the-fold (at the top of the page) forces some content to go down and occupy another section of the page.

You can imagine how annoying this experience is for users.

That’s why Cumulative Layout Shift is one of the three Core Web Vitals metrics assessing a page’s user experience. Alongside Largest Contentful Paint and First Input Delay, CLS will roll out as part of the Page Experience ranking factor in June 2021.

The Cumulative Layout Shift accounts for 15% of the PageSpeed score, and it’s a highly relevant metric both for user experience and for the new ranking factor. Therefore, it could also affect your SEO performance.

What’s a Good CLS Score


As we explained, CLS is the sum of all the unexpected content shifts occurring on the page. Depending on that sum, your CLS grade will either pass Google’s assessment or not.

A good CLS score should be equal to or less than 0.1.

A CLS score between 0.1 and 0.25 means that the performance “needs improvement”.

The grade is “poor” if CLS is more than 0.25.
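These thresholds translate directly into a small classifier; this sketch simply encodes the three bands described above (the function name is illustrative):

```javascript
// Google's CLS bands as described above: <= 0.1 is "good",
// up to 0.25 "needs improvement", anything higher is "poor".
function gradeCls(score) {
  if (score <= 0.1) return 'good';
  if (score <= 0.25) return 'needs improvement';
  return 'poor';
}
```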

How to Find and Measure the Cumulative Layout Shift (CLS)

There are several ways to measure the CLS score, using both Lab and Field data tools. Here are the most popular tools you can use:

You can better understand the difference between Lab and Field Data in our dedicated post on PageSpeed Insights.

Let’s see how PageSpeed Insights and Search Console can help you find and measure Cumulative Layout Shift.

Measuring and Finding CLS with PageSpeed Insights

PageSpeed Insights is one of the best tools to measure and find Cumulative Layout Shift.

The tool provides you with the CLS score from the Lab and Field data so that you can measure both controlled and user data.

PageSpeed Insights also shows you the potential elements causing a layout shift.

The example below shows a bad score for CLS, both for the Field and Lab Data:

PageSpeed score - Bad CLS grade

Jumping to the Diagnostics area, you can find what is causing the issue under the “Avoid large layout shifts” section.

In this case, there’s only one element affecting CLS. It’s a preformatted text included on the page to explain how to measure CLS in JavaScript:

Avoid large layout shifts - PageSpeed Insights

As you can see, PSI gives you the score related to each element so that you can understand how much every element contributes to the overall CLS score.

If you have more than one element listed in this section, you should start fixing the ones that contribute the most to the bad grade. It’s likely that by fixing the most relevant issues, you’ll get a score good enough to pass Google’s assessment.

Note: Let’s say that you get a bad Field grade and a good Lab one — yes, it could happen! In that case, you should make sure you get a good score for CLS in the Field Data, too. Keep in mind that Field Data is related to the real user experience.
As for CLS, Lab Data only considers layout shifts during page load. On the other hand, Field Data counts all the layout shifts during the whole visit, from entry until the page is closed. Therefore, it’s more representative of your site than the Lab Data. It’s no coincidence that Google uses Field Data for search rankings.

Measuring and Finding CLS on Search Console

In addition to measuring the CLS score and finding the elements causing layout shifts, you should analyze the sitewide performance.

The Core Web Vitals report in Search Console is the best way to find all the URLs affected by the same issue. For instance, in the Mobile or Desktop report’s Details tab, you may read “CLS issue: more than 0.1 (mobile)”.

By clicking on the specific row, you’ll find a list of URLs that need to be fixed for the same reason.

Search Console Report - CLS issues

Once you solve the issue, it will be easy to validate the fix for all the pages affected by the same problem.

Let’s now understand what causes a bad CLS score and how you can solve it.

What Factors Affect CLS and Cause a Poor Score

The most common causes of a poor Cumulative Layout Shift grade on WordPress are:

  • Images and videos without dimensions
  • Ads, embeds, and iframes without dimensions
  • Web Fonts causing Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT)
  • Actions waiting for a network response before updating DOM (especially for ads)
  • Dynamically injected content (e.g., animations).

Keep in mind that CLS has the most significant impact on mobile, the most critical and challenging device class for performance optimization. There are several reasons, from a smaller viewport to slower mobile networks and a weaker central processing unit (CPU).

Images and Videos Without Dimensions

Images and videos without dimensions are a common cause for a layout shift.

If you don’t specify the width and height attributes, the browser doesn’t know how much space to allocate while loading these elements, and the space it reserves likely won’t be enough. As a result, once these elements are fully loaded, they take up more space than expected, and the content already displayed shifts.

You can solve this issue by including image dimensions on images and video elements in different ways. We’ve got you covered in the dedicated section!

Ads, Embeds, and Iframes Without Dimensions

The same “dimension” issue goes for ads, embeds, and iframes. Once again, not reserving enough space makes these dynamic elements push down the content already displayed. Therefore, new layout shifts will occur on the page.

You can manage this problem by assigning fixed dimensions to ads and reserving space for these elements with specific tactics.

Web Fonts Causing Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT)

Web fonts can cause layout shifts, plus a pretty unpleasant user experience while the page renders; both issues come down to how slowly the fonts load. You might face two different problems: Flash of Unstyled Text (FOUT) or Flash of Invisible Text (FOIT).

On the one hand, you could see the text on the page with a “not-so-good” style (FOUT). It’s because the custom font takes a bit to load. In the meantime, you’ll see the fallback font. Once the custom font is ready, it will replace the fallback one. You’ll then see the font changing on the page — and the content will inevitably shift.

On the other hand, you could wait a bit before seeing any text displayed. It’s because the custom font is still being loaded (FOIT). You’ll see the content on the page only after the custom fonts have been rendered. Once fully loaded, that content might cause a layout shift.

The main way to solve this issue is to preload fonts, as you’ll read in a minute.
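As a minimal markup sketch of font preloading, the snippet below fetches a custom font early and uses `font-display: swap` to avoid FOIT; the font file path and family name are placeholders, not values from this article.

```html
<!-- Preload the custom font so it's fetched early; the href is a
     placeholder. The crossorigin attribute is required for font preloads. -->
<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>

<style>
  /* font-display: swap shows fallback text immediately (avoids FOIT);
     preloading shortens the window in which a FOUT swap can occur. */
  @font-face {
    font-family: "MyFont";
    src: url("/fonts/my-font.woff2") format("woff2");
    font-display: swap;
  }
</style>
```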

Actions Waiting for a Network Response Before Updating DOM & Content Injected on the Page

Animations and dynamic content injected on the page — such as banners, ads, or Instagram feeds —  can also cause a layout shift. Once again, it’s because there’s not enough space reserved for such elements.

At this point, you know how essential it is to allocate space for the elements that engage users and shouldn’t ruin the user experience.

Let’s see how to fix these problems.

You can read our in-depth and detailed guide, or you can jump to the video that shows how to improve CLS with WP Rocket!

How to Fix a Cumulative Layout Shift of More Than 0.25 or 0.1 on Mobile and Desktop

If you want to avoid large layout shifts on your WordPress site, here’s how you can reduce a bad CLS score (with and without plugins):

1. Include Width and Height Size Attributes on Images and Video Elements

2. Preload Fonts (And Optimize Them)

3. Manage Space and Size for Ad Slots

4. Manage Space for Embeds and Iframes

5. Manage Dynamic Content

6. Prefer the CSS Transform Property for Animations

By going over each point, you’ll understand how to fix the Search Console status “CLS issue: more than 0.25” or “CLS issue: more than 0.1”, both from mobile and/or desktop.

🚀 For each of our recommendations, you’ll find a piece of information about its performance impact — from low to high. The higher the impact, the higher the chance that the Cumulative Layout Shift grade will improve after following that specific recommendation.

Some best practices to avoid large layout shifts don’t include a specific solution — they’re more about managing space well for ads and other crucial elements.

1. Include Width and Height Size Attributes on Images and Video Elements

Performance impact: high 🚀🚀🚀

One of the simplest ways to fix CLS is to include the width and height attributes on your images and video elements in your WordPress CMS:

Setting image dimensions fixes CLS

WordPress adds image dimensions by default, so this issue should already be handled automatically.

In case you’re facing any issue, keep in mind that WP Rocket automatically adds any missing “width” and “height” values to images.

You only have to select the “Add missing image dimensions” option in the Media tab. Fast and straightforward as that!

You can easily add missing image dimensions.

Another way to solve this issue is to take advantage of the CSS aspect ratio boxes and let the browsers set the default ratio of images.

Simply put, you should include the width or the height attribute and set the aspect ratio using CSS. The browser will figure out the missing attribute and get the image dimensions before rendering the page. By doing so, it will allocate the space needed while the image is loading. As a result, the content won’t move around, and layout shifts will be avoided.

It’s helpful information to keep in mind because many plugins, such as YouTube video embed ones, use aspect-ratio on their output.
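As a rough sketch of this approach (the class name and ratio are illustrative, not from WordPress core):

```html
<style>
  /* Reserve space with CSS aspect-ratio so the browser can compute the
     missing dimension and allocate the box before the image loads. */
  img.hero {
    width: 100%;
    height: auto;
    aspect-ratio: 16 / 9;
  }
</style>
<img class="hero" src="hero.jpg" alt="Hero image">
```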

Don’t forget about responsive images! You can use the srcset attribute:

<img
width="1000"
height="1000"
src="puppy-1000.jpg"
srcset="puppy-1000.jpg 1000w, puppy-2000.jpg 2000w, puppy-3000.jpg 3000w"
alt="Puppy with balloons"
/>

Source: Google

Thanks to srcset, the browser can choose between a set of images and related sizes. Keep in mind that all the images in the set should use the same aspect ratio.

By including image dimensions, you’ll serve images with the correct size and address the PageSpeed Insights opportunity.

2. Preload Fonts (And Optimize Them)

Performance impact: low (high only if the site has large text) 🚀

As we explained, if web fonts don’t load fast, they cause a terrible user experience and affect the CLS grade.

As a best practice for avoiding layout shifts, you should preload fonts.

By preloading fonts, you’ll tell the browser to load the fonts as one of the top-priority resources. When rendering the page, the browser will load the fonts as fast as possible. As a result, the browser will likely include the fonts in the first meaningful paint — that’s when the page’s primary content is fully loaded and displayed. In that case, no layout shift will occur.

You can add rel=preload to the key web fonts:

<link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>

Did you know that you can easily preload fonts with WP Rocket? In the dedicated tab, you only have to include the URLs of the font files to be preloaded:

Preload tab - Preload fonts feature

Please note that it’s useful to enable this WP Rocket option only if you have not activated the Remove Unused CSS feature (File optimization tab). If RUCSS is activated, you don’t need to activate the Preload fonts option.

By preloading fonts, you’ll address the “Ensure text remains visible during Webfont load” PageSpeed Insight recommendation.

There’s more to this point. To prevent any FOIT and FOUT issues, you should combine rel=preload (or the WP Rocket feature) with the CSS descriptor font-display: optional.

The CSS font-display descriptor determines how font files are downloaded and displayed by the browser.
With font-display: optional, the browser will download and cache the font files to make them immediately available for rendering. So, even though font-display has several values, optional is the one you should use.
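Assuming a hypothetical font file, a minimal @font-face rule using this descriptor could look like this:

```html
<style>
  /* font-display: optional uses the custom font only if it's immediately
     available (e.g. cached); otherwise the fallback stays — no swap, no shift. */
  @font-face {
    font-family: "MyFont";            /* placeholder name */
    src: url("/fonts/myfont.woff2") format("woff2");
    font-display: optional;
  }
</style>
```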

Another useful tip to reduce the FOUT issue is to add font-display: swap; to any @font-face rules that are missing the font-display property. WP Rocket can help you do it if you minify/combine CSS files.

There are other ways to load fonts faster:

Convert the icon fonts to SVG. Font icons take a while to load and don’t help accessibility. There’s no reason to use them. Using SVGs, the icons will render faster, and you will load exactly the assets you need.

Make multiple font formats available. By doing so, the browsers will pick the compatible format and only load its font. Here is some information about font formats that you may find helpful:

  1. Avoid TTF. It’s usually 10–20% heavier than WOFF.
  2. Use SVG for Safari. It’s usually a bit smaller than WOFF.
  3. Use WOFF2 for modern browsers. It’s the smallest size – around 30% smaller than WOFF and SVG.
  4. Implement WOFF as a fallback when SVG or WOFF2 can’t be used.
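As a sketch, a single @font-face rule can list several formats (the file paths are placeholders); the browser downloads only the first format it supports:

```html
<style>
  @font-face {
    font-family: "MyFont";
    src: url("/fonts/myfont.woff2") format("woff2"),  /* modern browsers */
         url("/fonts/myfont.woff") format("woff");    /* fallback */
  }
</style>
```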

Host your fonts locally or use a CDN to cache them. You’ll avoid any delay and deliver fonts faster.

Optimize your fonts to make them as small and fast as possible. As for Google Fonts, did you know that WP Rocket automatically takes care of them?

By applying these recommendations, you’ll optimize your fonts and avoid several layout shifts. You’ll address the PSI recommendation: “Ensure text remains visible during webfont load” on your WordPress site.

3. Manage Space and Size for Ad Slots

Performance impact: high 🚀🚀🚀

There are several best practices to avoid any layout shift for ads:

  • Assign fixed dimensions to the ads so that you’ll reserve enough space for the ads to be loaded.
  • Reserve the biggest possible space for ads. Historical data comes in handy for assessing the best dimensions for each ad slot.
  • Keep every space reserved for ads that have not been displayed. In other words, you shouldn’t collapse any area on the viewport. You could rather include a placeholder or a fallback element.
  • Place non-sticky ads in the middle of the page — in any case, far from the top of the viewport.
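A minimal way to reserve space for an ad slot, assuming a standard 300×250 unit (the class name is illustrative):

```html
<style>
  /* Reserve the slot's expected size so the loaded ad doesn't push
     the surrounding content down. */
  .ad-slot {
    min-width: 300px;
    min-height: 250px;
  }
</style>
<div class="ad-slot"><!-- ad injected here --></div>
```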

The Delay JavaScript Execution feature provided by WP Rocket can help you control dynamic content above the fold like Google Ads. The feature can be used to stop dynamic interaction, content injection (ads), and dynamic class changes until there is an interaction on the page.

Once again, you’ll address the “Serve images with correct dimensions” PSI recommendation. The same goes for the next section.

4. Manage Space for Embeds and Iframes

Performance impact: high 🚀🚀🚀

The recommendations for managing embeds and iframes are similar to the ones for ads.

In particular, you should precompute enough space for such elements. Once again, historical data can be useful to understand how much space you should reserve.

A placeholder or fallback element is an excellent solution for managing an unknown embed size.
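For example, giving an iframe explicit dimensions (the URL is a placeholder) lets the browser reserve its space before the embed loads:

```html
<!-- Explicit width/height let the browser allocate the box up front. -->
<iframe src="https://example.com/embed"
        width="560" height="315"
        loading="lazy"
        title="Embedded video"></iframe>
```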

5. Manage Dynamic Content

Performance impact: high 🚀🚀🚀

Dynamic content such as banners can also affect Cumulative Layout Shift. That’s why you should avoid displaying new content unless it’s triggered by user interaction. As you know, CLS counts only the layout shifts that occur while users are not interacting with the page.

As explained in the “Manage Space and Size for Ad Slots” section, you can take advantage of the Delay JavaScript Execution option provided by WP Rocket to control dynamic content above the fold.

By managing dynamic content, you’ll take care of the following PageSpeed recommendations:

  • Avoid large layout shifts
  • Avoid enormous network payloads.

6. Prefer the CSS Transform Property for Animations

Performance impact: low 🚀

The last best practice to ensure visual stability is to take care of animations. You can use the CSS transform property, which doesn’t trigger any layout changes.
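A minimal sketch of the idea (the class names are illustrative): animating transform runs on the compositor and doesn’t trigger layout, unlike animating properties such as top or left.

```html
<style>
  .panel {
    transition: transform 0.3s ease-out;
  }
  /* Moves the element without triggering layout or shifting other content. */
  .panel.open {
    transform: translateX(200px);
  }
</style>
```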

You’ll address the “Avoid non-composited animations” PageSpeed recommendation.

Source :
https://wp-rocket.me/google-core-web-vitals-wordpress/improve-cumulative-layout-shift/

How To Improve First Input Delay (FID) on WordPress

Table of Contents

What is First Input Delay (FID) and Why it Matters?

First Input Delay is a Core Web Vitals metric and measures how long it takes for the browser to respond to the user’s first interaction with a page — that is, clicking on a link, tapping on a button, or interacting with another element.

Let’s say that you land on a site and click on a link. Of course, you would expect the page to answer as soon as possible, right? Unfortunately, this is not always the case. For instance, you may click on a link, and nothing happens for a while — it’s because the browser is not able to process the user’s request immediately.

In more technical words, it’s because the browser’s main thread is processing another request and can’t respond to the user’s request. Quite often, the requests that keep the browser busy are related to processing JavaScript files. We’ll go over the JS files in the next section and explain how to fix the main issues.

So, back to you: you click on a link and keep waiting for something to happen on the page… Pretty annoying, isn’t it?

That’s why FID is part of the Core Web Vitals metrics and helps measure a page’s user experience.

Unlike the other two Core Web Vitals metrics, FID can only be measured in the field — after all, it’s all about users’ interaction.

For this reason, some tools, such as Lighthouse, can’t measure the First Input Delay and use Total Blocking Time as a proxy. TBT is a lab metric that also measures interactivity and responsiveness (without user interaction). As long as the TBT score is good, the FID grade should also be fine.

TBT accounts for 25% of the overall PageSpeed Insights score. It’s the highest weight, and only LCP has the same one. By improving TBT performance, you’ll likely improve your page speed grade and the FID performance.

What’s a good FID score

First Input Delay

As for the other Core Web Vitals, scores are divided into three buckets: Good, Needs improvement, and Poor.

A good FID score means a First Input Delay of 100 milliseconds or less.

The score “needs improvement” if it’s between 100 and 300 milliseconds.

On the other hand, a “poor” score is beyond 300 milliseconds.

What’s a Good Total Blocking Time Score

You may wonder if the same score buckets apply to the Total Blocking Time. The TBT score is slightly different — as the metric itself is. The key difference is that TBT measures interactivity without user input. That’s why it can be calculated as lab data.

Total Blocking Time measures how long the page would be “blocked” from responding to user input such as a keyboard press, screen tap, or mouse click — without requiring any actual user interaction. The sum of all the “blocked” periods determines the TBT score.

To be a bit more technical, TBT measures the sum of the blocking periods between First Contentful Paint (when the first content is rendered on the page) and Time to Interactive (when the page becomes fully interactive).

As usual, Total Blocking Time includes three buckets:

Good – 300 milliseconds or less

Needs improvement – between 300 and 600 milliseconds

Poor – over 600 milliseconds.

In short: both FID and TBT capture a page’s interactivity and responsiveness — even though FID is based on actual user interaction, whereas TBT requires no user input. For this reason, both metrics are similar in terms of the improvements needed.

What’s The Maximum Potential First Input Delay

You may have come across the Maximum Potential First Input Delay and wondered what its relationship with FID is.

The Maximum Potential First Input Delay measures the maximum delay between the user’s interaction and the browser’s response.

It’s the worst-case scenario based on the duration of the longest task after the First Contentful Paint — that’s when the first part of a content’s page is displayed on the screen, and you can start interacting with the page.

Once you can click on a link or tap a button (First Contentful Paint accomplished), you can measure how long it takes for the page to respond to your request while the longest task is running. The longest task’s length is the Maximum Potential First Input Delay.

By measuring the Maximum Potential First Input Delay, you’ll know how long users will wait when interacting with the page after seeing the first content.

How to Measure the First Input Delay

Being a field metric, First Input Delay can only be measured by a few tools:

You can also measure FID in JavaScript by using the Event Timing API.

You can measure the proxy metric, Total Blocking Time, on these other tools:

Measuring FID and TBT with PageSpeed Insights

PageSpeed Insights gives you the easiest way to measure the First Input Delay score on a page basis, as well as the Total Blocking Time:

PageSpeed Insights scores - FID and TBT grades

Measuring FID with Search Console

If you want to assess your site’s sitewide FID performance, you should take a look at the Core Web Vitals report in Search Console. The report is based on the Chrome User Experience Report.

By choosing either the mobile or desktop report, you can identify the FID performance’s potential issues and dive deeper into the URLs affected by the same problem — for instance, FID Issue: longer than 100 ms.

Search Console report - FID issues

What factors affect FID and cause a slow score on WordPress

At this point, you can guess that FID is mainly impacted by JavaScript execution. When the browser is busy dealing with heavy JavaScript files, it can’t process other requests, including the users’ ones.

As a result, interactivity is poor; JavaScript execution times are longer; the main thread is busy and blocked. In short, the page can’t respond to user inputs or interactions.

We’ll see different ways to fix these issues.

Since JavaScript is the key to improving FID, you should be aware that having many plugins — especially the JavaScript-based ones — could also affect the First Input Delay grade. You should avoid any unnecessary JavaScript execution on the pages where the plugin is not needed. It’s also important to remove any plugin that is not essential.

Heavy WordPress themes can also affect the First Input Delay grade. They have more JS files, complex layouts, and inefficient styles that keep the main thread busy — and, therefore, affect the FID performance.

That’s why the less complexity the themes have, the better. And that’s also why the tendency now is to simplify everything: layouts, animations, more native JS use vs. relying on complex libraries.

We’ll go over the actions to improve FID in the next section. If you prefer, you can first watch the video that shows how to optimize FID with WP Rocket!

How to Reduce the First Input Delay Longer Than 100 ms or 300 ms on Mobile and Desktop

Improving how the browser deals with JavaScript execution reduces the First Input Delay on WordPress and enhances the FID score.

The goal is to make the process faster and smoother so that interactivity and responsiveness can get better.

If your FID grade has any issues, in the Core Web Vitals report on Search Console you’ll read “FID issue: longer than 100ms” or “FID issue: longer than 300ms“. The issue can be from mobile and/or desktop.

There are several ways to optimize the First Input Delay grade on WordPress:

Let’s see in detail what actions you should take and what’s the performance impact.

1. Defer JavaScript

Performance Impact: high

To optimize JavaScript execution, you should defer JavaScript files.

By deferring JavaScript files, these render-blocking resources will be loaded after the browser has rendered the most relevant content — that is, the content needed to let users interact with the page.

As a result, the loading time will improve, as well as the FID grade.

Once you have identified the JS resources to defer, you should add the defer attribute to the JavaScript files. The browser will then know which files to defer until the page rendering is complete.

Here’s an example of the defer attribute:

<script defer src="/example-js-script"></script>

You can easily manage the JavaScript files’ deferring with WP Rocket and its Load JavaScript Deferred feature.

You’ll find this option in the File optimization tab. You’ll also be able to exclude specific JS files from being deferred — in case you need this option due to any conflict.

File optimization Tab - Load JavaScript deferred

You’ll address the “Eliminate render-blocking resources” and “Reduce the impact of third-party code” PageSpeed recommendations — even though the JS render-blocking resources issues don’t stop here.

Keep reading to learn what other actions you should implement.

2. Remove Unused JavaScript

Performance Impact: medium

You can also tackle the JS issues by removing the unused JavaScript files that slow down loading time and make interactivity worse.

Unused JS files are the JavaScript resources not essential for rendering the page or not useful at all. Yet, these files are in the code, so you should manage them. Examples of unused JS files are the third-party JavaScript files such as the analytics and ads tracking codes.

The PageSpeed Insights report shows you the list of the unused JS files you should take care of:

List of unused Javascript files- PageSpeed Insights Report

You have two options to tackle unused JavaScript files:

1. Load the JavaScript files only when needed.
You can use plugins such as Perfmatters and Assets Cleanup to load these files only when needed. The execution of JavaScript files should be disabled in any other case. As an additional tip, you may ask your theme and plugin developers to ensure that only the needed assets are loaded when their respective features are used.

2. Delay the JavaScript files.
Delaying JavaScript resources means that the JavaScript files won’t be loaded until the first user interaction (e.g., scrolling, clicking a button). In other words, no JS files will be loaded unless there’s user interaction. It’s important to keep in mind that not all the scripts from the PageSpeed recommendation list, like the one included above, can be safely delayed.
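As a rough sketch of the delaying technique (the script URL and event list are assumptions, not WP Rocket’s implementation), a small inline script can wait for the first interaction before injecting a non-essential script:

```html
<script>
  // Load a non-essential script only after the first user interaction.
  var events = ["click", "scroll", "keydown", "touchstart"];
  function loadDelayedScript() {
    var s = document.createElement("script");
    s.src = "/analytics.js"; // placeholder for a delayed third-party script
    document.head.appendChild(s);
    // Stop listening once the script has been injected.
    events.forEach(function (e) { removeEventListener(e, loadDelayedScript); });
  }
  events.forEach(function (e) {
    addEventListener(e, loadDelayedScript, { passive: true });
  });
</script>
```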

An additional advantage of delaying JavaScript is that Lighthouse won’t detect the delayed JS files. As a result, the tool won’t include any of these resources in the “Remove unused JavaScript” recommendation.

To be clear: you shouldn’t delay JS files to solve this PSI recommendation. You’ll find more information about the main reason why you should delay JS in the next point. However, it’s useful for you to know as an added value for improving your PSI score.

So, how can you delay JavaScript files? You can use a free plugin such as Flying Scripts.

On the other hand, you can take advantage of WP Rocket and its Delay JavaScript execution feature. The File optimization tab allows you to delay the JavaScript execution in a few clicks.

File optimization tab - Delay JavaScript execution

Lastly, here’s a list of other plugins that can help you minimize unused JS. We recommend using them carefully:

Removing unused JavaScript files will address the specific PageSpeed recommendation listed in the report. You’ll also address the “Eliminate render-blocking resources” and “Reduce JavaScript execution time” recommendations.



3. Delay JS Execution Time Until User Interaction

Performance impact: very high

You can optimize JavaScript resources and prioritize the scripts needed for interaction by delaying the JS files and their execution until user interaction.

In other words, no JavaScript files will be loaded until user interaction, such as clicking a button or scrolling the content.

As explained above, you should delay all the JavaScript files that affect loading time and interaction for no reason, such as the unused JS files included in the previous section.

As you already know, you have different options to delay JavaScript files. You can use a free plugin such as Flying Scripts or take advantage of the Delay JavaScript execution feature option provided by WP Rocket — more details above.

This is how you’ll address the “Reduce JavaScript execution time” PSI recommendation.

4. Minify JS

Performance impact: low

Another effective way to manage the JavaScript execution time goes through the minification of JavaScript files.

By minifying JS files, you’ll remove any comments, line breaks, and white spaces included in the code. The goal is to make the files smaller so they load faster.

Minification can be time-consuming, and there’s always the risk of breaking something. For these reasons, it’s best to use a minification tool such as WP Rocket, the easiest way to minify JS files in a few clicks. You only have to enable the Minify JavaScript files option in the File optimization tab.

File optimization tab - Minifying JS files

You’ll address the following PageSpeed Insights recommendations:

  • Minify JS
  • Avoid enormous network payloads.

5. Remove (or Reduce) Unused CSS

Performance impact: medium

As explained in the LCP section, removing or reducing unused CSS helps improve loading time — therefore, it improves interactivity and the FID metric.

WP Rocket offers a powerful feature that allows you to tackle unused CSS in one click. You only need to enable the option below, and the plugin will remove the unused CSS included in the HTML of the page.

By enabling this feature, you’ll easily address the “Reduce unused CSS” recommendation.

6. Async or Defer CSS

Performance impact: medium

The main thread work can have a significant impact on interactivity and FID performance. That’s why one of the PSI recommendations is “Minimize main thread work.” To address the issue and improve FID time, you should defer or async the CSS files.

In the Defer JavaScript section, you read why defer is essential to allow the browser to focus only on the resources essential to page rendering. The same goes for the CSS files that are render-blocking resources.

Since stylesheets don’t support a defer attribute, one option is to load the non-critical CSS files asynchronously by preloading them and switching the rel attribute once they’re loaded:

<link rel="preload" href="/example-styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="/example-styles.css"></noscript>

And here’s another 2-step process to make the CSS render-blocking resources load asynchronously:

  1. Extract and inline the Critical Path CSS (CPCSS) using an available generator tool like this one.
  2. Load the rest of the classes asynchronously by applying the following pattern.

If you’re looking for more detailed information, we recommend you read the dedicated Google resource.

An extra tip to keep in mind is to avoid placing large non-critical CSS code in the <head> of the page.

If you’re looking for a faster and easier way to quickly take care of both critical and non-critical CSS, WP Rocket can help you. Under Optimize CSS delivery, our cache plugin offers the Load CSS asynchronously option that defers non-critical CSS and inline critical CSS.

You’ll remove all the render-blocking CSS resources by enabling the option in the File Optimization tab:

Please note that if you have already enabled the Remove Unused CSS option (RUCSS), you can’t choose this option — simply because you don’t need it. WP Rocket is already optimizing CSS files at its best. We recommend optimizing CSS Delivery only in case RUCSS is not working for you.

By implementing these actions, you’ll take care once again of the “Eliminate render-blocking resources” PageSpeed Insights recommendations. You’ll also address the  “Avoid chaining critical requests” recommendation.

7. Compress Text Files

Performance impact: high

As you can guess at this point, compression is something you need to take care of. It goes without saying that “Enable text compression” is one of the PSI recommendations that apply to FID times.

By compressing files and reducing their size, the server will send them over faster, and the browser will load these resources quicker.

The most common compression formats are Gzip and Brotli. Brotli is the most recommended format right now. You can read more about Brotli and GZIP in our dedicated article.

The easiest way to enable Gzip compression on WordPress is using a plugin. You can choose between different options, from the Enable Gzip Compression plugin to WP Rocket, which includes GZIP compression by default. Keep in mind that some hosts enable GZIP compression automatically.

8. Break up Long Tasks

Performance impact: high

As we explained at the beginning of the article, when the main thread is busy and blocked, the FID grade is negatively affected, and the page can’t respond to user inputs or interactions.

The main thread gets blocked by long tasks that the browser can’t interrupt — that is, any task running longer than 50 ms. While such a task is running, the page can’t respond to user input, and responsiveness suffers.

To solve this issue, you should split long-running scripts into smaller chunks that can be run in less than 50ms.
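As a minimal sketch of this technique (the function and parameter names are illustrative), a long loop can be split into small batches that yield back to the main thread between batches:

```javascript
// Process a large array in small batches, yielding to the browser's main
// thread between batches so no single task runs much longer than ~50 ms.
function processInChunks(items, process, chunkSize) {
  let i = 0;
  (function doChunk() {
    const end = Math.min(i + chunkSize, items.length);
    while (i < end) process(items[i++]);          // synchronous batch
    if (i < items.length) setTimeout(doChunk, 0); // yield, then continue
  })();
}
```

Between the setTimeout calls, the browser is free to handle user input, so the page stays responsive even while the work completes.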

content-visibility is a powerful CSS property that can help boost rendering performance by enabling the user agent to skip an element’s rendering work until it is needed.

You can improve your load performance by applying content-visibility: auto; contain-intrinsic-size: 1px 5000px; to elements where you want to delay the paint. Don’t forget the second part: contain-intrinsic-size reserves an estimated size for the skipped element and prevents some usability issues, such as scrollbar jumps.
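Putting the two declarations from the paragraph above together (the selector is illustrative):

```html
<style>
  /* Skip rendering work for off-screen sections; contain-intrinsic-size
     reserves an estimated 5000px height so scrollbars don't jump. */
  .below-fold {
    content-visibility: auto;
    contain-intrinsic-size: 1px 5000px;
  }
</style>
```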

Currently, this CSS property works only in Chrome and most Chromium-based browsers.

Lastly, we recommend reading two resources on this topic: one about long tasks and one about optimizing intensive JavaScript.

Source :
https://wp-rocket.me/google-core-web-vitals-wordpress/improve-first-input-delay/

How To Improve Largest Contentful Paint (LCP) on WordPress

Table of Contents

What is Largest Contentful Paint (LCP) and Why it Matters?

The Largest Contentful Paint (LCP) is a Core Web Vitals metric and measures how long it takes for the largest element to become visible in the viewport. Until the LCP element loads, you’ll see almost nothing on the page. That’s because the LCP element is always above the fold — at the top of the page.

LCP is usually an image or a text block. However, it could also be a video or an animation. The LCP element can differ between mobile and desktop.

As you can guess, the largest element is the most relevant one for user experience and perceived load speed.

Think about it: if the page’s biggest element doesn’t load fast, your user experience won’t be good. You would look at an almost blank page, waiting for the page to load… You could even leave the site! When browsing, a few seconds can make a huge difference.

That’s why Google has included LCP as one of the Core Web Vitals metrics — the metrics measuring how great your user experience is.

In June, LCP will roll out as part of the new ranking factor, the Page Experience Signal. For this reason, Largest Contentful Paint matters not only for delivering a great user experience but also for improving your SEO performance.

What’s more, LCP accounts for 25% of the overall PageSpeed Insights score. By improving its performance, you’ll likely improve your page speed grade.

In short, LCP is one of the most important performance metrics right now.

What’s a Good LCP Score

largest-contentful-paint

A good score means that LCP is 2.5 seconds or less. If so, the page will get the green score and pass the assessment.

If LCP is between 2.5 and 4.0 s, the score “needs improvement” — you’ll get an orange grade.

If LCP is more than 4 seconds, the score is “poor”, and you need to fix it as soon as possible.

Let’s now see how we can find the LCP element.

How to Find and Measure the Largest Contentful Paint

There are several ways to find and measure the LCP element, both with Lab and Field Data. Here are the most popular tools you can use to test LCP.

Lab Data tools:

Field Data tools:

You can learn more about the difference between Lab and Field Data in our post on PageSpeed Insights.

Let’s go over some of the easiest and most effective tools: PageSpeed Insights and Search Console.

Measuring and Finding LCP with PageSpeed Insights

PageSpeed Insights is the easiest way to measure and find the LCP element.

After testing your URL’s page, you’ll get the LCP grade in the Lab Data and the Field Data (if available). You’ll also get the PageSpeed recommendations to improve your performance.

Go to the Diagnostics area and look for the “Largest Contentful Paint element” section. You’ll discover what the LCP element is for the page you’re analyzing.

Let’s go over a couple of examples and make things easier to understand.

The LCP example From Mobile

Let’s analyze a WP Rocket blog page from mobile.

We test the performance of the page in the tool and go to the Diagnostics area. The LCP element is the H1, which is the blog post title:

LCP-mobile-example

As long as the main title loads fast, the page will deliver an excellent user experience.

LCP-mobile-example

The LCP example From Desktop

Let’s now take a look at the same URL from the desktop.

We go again to the Diagnostics Area and look for the LCP element. Not surprisingly, LCP is not the same as for mobile. For desktop, the LCP is an image:

LCP-element-desktop

More precisely, it’s the image of the post.

LCP-desktop-example

Since PageSpeed Insights is a page-based tool, we recommend that you run several tests for different pages.

For an overall analysis, you should also take advantage of the Search Console.

Measuring and Finding LCP on Search Console

You can assess your sitewide performance and detect any issues with the Core Web Vitals report in Search Console.

You’ll be able to access performance reports for both mobile and desktop:

Once you open the report, you’ll see how your site’s pages perform according to each threshold: good, needs improvement, and poor.

Core Web Vitals report mobile tab - Search Console

The Search Console report groups each Core Web Vital’s performance by status, issue type, and URL.

For example, you might have some URLs not performing well for LCP. If so, in the report, you may read “LCP issue: longer than 2.5 s (mobile)”.

By clicking on the row related to that specific issue, you’ll land on a page that provides the list of URLs that need to be fixed. While these URLs are an example, they can give you a pretty accurate idea of what’s going on.

It will be pretty easy to find the URL pattern and move forward with the fixing and the validation.

What Factors Affect LCP and Cause a Slow Score

For WordPress sites, three factors affect LCP: slow server response times, render-blocking JavaScript and CSS, and slow resource load time.

Slow Server Response Times

The browser makes a request to the server, but the server takes too long to send the content requested. Since the browser doesn’t receive the content quickly enough, it takes a while to get something rendered on your screen. As a result, load time is not great. The LCP score gets affected.

You’ll fix the issue by improving your Time to First Byte, using a CDN, and establishing third-party connections early.

Render-blocking JavaScript and CSS

The browser makes a request and gets the content from the server. At this point, the browser will render the content and show it, right? Not so fast.

To render any content, the browser has to analyze (or parse) the HTML of the page and turn it into a structure it can work with — that’s the DOM tree. After that, the content is rendered and fully displayed — unless some scripts and stylesheets block the HTML parsing. These scripts and stylesheets are the render-blocking resources.

As a result of this block, the parsing is delayed. Once again, the content you requested takes a bit before being loaded. The LCP performance gets affected again.

You’ll tackle these issues by deferring and removing unused JS files. Don’t worry! You’ll find all the information you need in the next section.

Slow Resource Load Times

Other files can also cause poor performance and a bad user experience: images, videos, and block-level elements that depend on CSS and JavaScript files.

As you already know, LCP is related to the elements at the top of the page. And this issue comes up precisely when these files are rendered above-the-fold and take too long to load. As a result, loading time and LCP are affected once again.

You’ll manage the resource load times by optimizing images, minifying and compressing CSS, JS, HTML files, and preloading critical assets.

The bottom line: how fast the browser receives and renders the content requested determines the LCP score.

Let’s understand how to fix all these issues in detail.

If you prefer, you can first watch the video that shows how to improve LCP with WP Rocket.

How to Reduce a Largest Contentful Paint Longer Than 2.5 s or 4 s on Mobile and Desktop

Here are ten ways to improve the Largest Contentful Paint performance and fix the Search Console statuses “LCP issue: longer than 2.5 s” and “LCP issue: longer than 4 s”, both from mobile and/or desktop.

1. Improve the Time to First Byte and Reduce Server Response Time

2. Use a CDN

3. Defer JavaScript

4. Remove Unused JavaScript

5. Defer Non-Critical CSS, Inline Critical CSS, and Remove Unused CSS

6. Minify CSS and JS Files

7. Optimize Your Images

8. Compress Text Files

9. Use Preload for Critical Assets

10. Establish Third-party Connections Early

Let’s see them in detail.

🚀 For each suggestion, you’ll find a note about its performance impact — from low to high. The higher the impact, the higher the chance that the Largest Contentful Paint score will improve after following that specific recommendation.

1. Improve the Time to First Byte and Reduce Server Response Time

Performance Impact: high 🚀🚀🚀

One of the main reasons for a bad LCP is a slow server response time.

You can measure your server response time by looking at the Time to First Byte (TTFB).

Every time you want to consume any piece of content, the browser sends a request to the server. The TTFB measures how long it takes for the browser to receive the first byte of content from the server.

By improving your TTFB, you’ll improve your server response time and the LCP score.

Please note that a good TTFB should be under 200 ms — you can quickly check this metric by testing your site’s URL on WebPageTest.

WebPageTest example
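If you want a quick sanity check without leaving the browser, the Navigation Timing API exposes the same metric. Here is a minimal sketch, assuming a modern browser that supports Navigation Timing Level 2; paste it into the page or run the body in the DevTools console:

```html
<script>
  // The first (and only) "navigation" entry describes the page load itself.
  const [nav] = performance.getEntriesByType('navigation');
  // responseStart marks when the first byte arrived from the server (TTFB).
  console.log('TTFB:', Math.round(nav.responseStart), 'ms');
</script>
```

The value is measured relative to the start of the navigation, so it roughly corresponds to what WebPageTest reports as Time to First Byte.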

There are two ways to fix a bad server time:

1. Enable Page Caching

By enabling page caching, your site’s pages will be stored as HTML files on the server after the page is loaded for the first time. As a result, the content will be displayed faster. It’s an easy and effective way to improve TTFB.

You can also choose one of the top WordPress hosting providers that include a server-level caching option.

WP Rocket can easily take care of page caching with no effort from your side.

A dedicated tab will allow you to enable mobile caching and set the options you prefer. WP Rocket enables 80% of web performance best practices automatically. So, if you’re in doubt, you’re covered anyway!

Cache tab

2. Choose a Fast Server Hosting Service

Fast hosting can make a huge difference in performance. And maybe it’s time to upgrade your hosting plan!

First of all, your hosting provider should have servers close to the majority of your users. The closer your users are to the server, the faster the data will be sent.

You should also choose the right server host type. A dedicated hosting server will ensure the fastest performance. Take into consideration how much traffic your site gets, and make your decision.

By enabling caching and choosing a fast hosting, you’ll take care of the following PageSpeed Insights recommendations:

  • Reduce server response times (TTFB)
  • Serve static assets with an efficient cache policy.

2. Use a CDN

Performance Impact: medium 🚀🚀

A CDN helps you reduce the length of time between the user request and the server response. This amount of time is the latency. The back and forth between the browser request and the server response is the round trip time (RTT).

If your users are located far from the server’s location, it could take a while before all the assets (e.g., images, JS and CSS files, videos) are sent. Latency and RTT will be high and will affect loading time and the LCP score.

You already saw how the location of your server could affect your site’s performance.

A CDN solves the issue thanks to a global network of servers. No matter where your users are located, every time they request a page, they will receive the assets from the closest server. Simple as that.

RocketCDN is the best way to let your users access your site quickly and easily.

If you want to know more about the CDN benefits and the different types, you can read our article.

Choosing a CDN will help you address the following PageSpeed recommendations:

  • Serve static assets with an efficient cache policy
  • Enable text compression.

Please note that a CDN will address such recommendations only if properly configured. The default options might not be enough to improve performance as expected.

3. Defer JavaScript

Performance Impact: high 🚀🚀🚀

Render-blocking resources like JavaScript files are one of the main causes of a bad LCP score.

Deferring the JavaScript files will help you tackle the issue. In other words, you’ll change the priority of the JS files being loaded.

Remember? The browser parses the HTML, builds the DOM tree, and then renders the page — unless there is any blocking resource to slow the process down.

By deferring JavaScript, the browser will process and load the JS files only after parsing the HTML document and building the DOM tree. Since there won’t be anything to block the process, rendering will be much faster — and the LCP metric will improve.

You can add the defer attribute to the JavaScript files so that the browser can detect the resources to defer. The browser will analyze the HTML and build the DOM tree with no interruption.

Here’s an example of the defer attribute:

<script defer src="/example-js-script"></script>

The easiest way to manage the JavaScript resources is to take advantage of WP Rocket and its Load Javascript Deferred feature.

You can choose this option in the File optimization tab. What’s more, you can easily exclude specific JS files from being deferred — in case the defer feature conflicts with any file.

File optimization Tab - Load JavaScript deferred

You’ll address the “Eliminate render-blocking resources” PSI recommendation in a few clicks — even though the render-blocking resources issues don’t stop here.

Let’s move to the next point about tackling render-blocking resources.

4. Remove Unused JavaScript

Performance Impact: medium 🚀🚀

Another way to eliminate the render-blocking resources is to remove the JavaScript resources that are not used. They may not be used for two reasons:

  • They’re not used anymore on your site. They’re still in the code but are completely useless.
  • They aren’t included in the above-the-fold content. Therefore, they’re non-critical for building the DOM tree. Yet, these files have a reason to be there (e.g., Google Analytics tracking code).

You can find the list of the unused JS files in the PageSpeed report in the “Remove unused Javascript” section:

List of unused Javascript files - PageSpeed Insights Report

There are two ways to solve this issue in WordPress:

1. Load the JavaScript files only when needed.
For instance, the files should be loaded only on the pages that need that specific file — in any other case, the execution of JS should be disabled. You can take care of this aspect with plugins such as Perfmatters and Assets Cleanup.

2. Delay the JavaScript files.
The JavaScript files won’t be loaded until the first user interaction (e.g., scrolling, clicking a button). If there’s no user interaction, the JS files won’t be loaded at all.
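Conceptually, delaying a script until the first interaction can be sketched like this. This is a hypothetical inline snippet with a placeholder /stats.js file; plugins like WP Rocket and Flying Scripts implement a more robust version of the same idea:

```html
<script>
  // Load the (placeholder) /stats.js file only after the first user interaction.
  const loadDelayedScript = () => {
    const s = document.createElement('script');
    s.src = '/stats.js'; // hypothetical analytics script
    document.body.appendChild(s);
    // Clean up the remaining listeners so the script is injected only once.
    ['scroll', 'click', 'keydown', 'touchstart'].forEach((evt) =>
      removeEventListener(evt, loadDelayedScript));
  };
  ['scroll', 'click', 'keydown', 'touchstart'].forEach((evt) =>
    addEventListener(evt, loadDelayedScript, { once: true, passive: true }));
</script>
```

If the user never interacts with the page, the script is never fetched at all, which is exactly why delayed files drop out of the “Remove unused JavaScript” audit.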

By delaying JavaScript, the JS files won’t be detected by Lighthouse or listed in the “Remove unused Javascript files” recommendation — even though not all the scripts in the PageSpeed recommendation list can be safely delayed.
For instance, the Google Analytics tracking code is usually included in this PageSpeed Insights recommendation. If you delay the JS files, the Google Analytics JS file won’t be reported anymore.

Note: Delaying JS files doesn’t have the purpose of solving this PSI recommendation per se. However, it works well in addressing this PageSpeed audit and improving your LCP score. So, it’s good for you to know.

So, how can you delay JS resources? You have different options.

If you’re looking for a free plugin to delay JavaScript files, you can use Flying Scripts.

Another way to safely tackle any unused JavaScript is to take advantage of WP Rocket! The plugin allows you to delay the JavaScript execution in a few clicks from the File optimization tab. You can easily add the URLs you want to exclude from delaying execution:

File optimization tab - Delay JavaScript execution

As we mentioned, by removing unused Javascript files, you’ll address the specific PageSpeed recommendation. Overall, you’ll work towards “Eliminate render-blocking resources” and “Reduce JavaScript execution time”.

Your LCP grade will get another boost.

5. Defer Non-Critical CSS, Inline Critical CSS, and Remove Unused CSS

Performance Impact: medium 🚀🚀

As with the JS files, you should also defer non-critical CSS — all the files that aren’t relevant for rendering the page. In other words, you should change the priority of these files, too.

They will load after the browser has rendered the most relevant content on the page.

While deferring the CSS files, you should also inline critical CSS — the above-the-fold resources that need to load as fast as possible. This means identifying the critical CSS (or Critical Path CSS) and inlining it in the HTML structure.

If you want to implement both actions on WordPress, here’s what the process looks like:

  1. First, extract and inline the Critical Path CSS (CPCSS) using one of the available generator tools. You can find one here.
  2. Then, load the rest of the CSS asynchronously.
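As a reference, the asynchronous loading pattern documented by Google looks like this (with /styles.css as a placeholder stylesheet):

```html
<head>
  <!-- Inlined Critical Path CSS for the above-the-fold content -->
  <style>/* critical CSS rules go here */</style>
  <!-- Preload the full stylesheet, then switch it to a stylesheet once fetched -->
  <link rel="preload" href="/styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <!-- Fallback for browsers with JavaScript disabled -->
  <noscript><link rel="stylesheet" href="/styles.css"></noscript>
</head>
```

The preload hint fetches the file without blocking rendering; the onload handler then promotes it to a regular stylesheet once it has arrived.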

You can read more about the process in the dedicated Google resource.

Another tip is to avoid placing large non-critical CSS code in the <head> of the page.

If you want to take care of both critical and non-critical CSS quickly, you can take advantage of WP Rocket’s features!

An effective way to tackle CSS is to remove unused CSS. WP Rocket can also help you to do it easily, thanks to its dedicated feature. You only need to enable the Remove unused CSS option, and the plugin will remove the unused CSS from the HTML of each page.

Remove Unused CSS or load CSS asynchronously - Source: WP Rocket

By implementing these actions, you’ll address the “Eliminate render-blocking resources”, “Reduce unused CSS”, and “Avoid chaining critical requests” PageSpeed Insights recommendations.

6. Minify CSS and JS Files

Performance Impact: low 🚀

Another effective way to optimize Largest Contentful Paint is to minify CSS and JS files.

Minification comes down to making your code more compact: removing the white space, line breaks, and comments included in it. As a result, minification reduces CSS and JS file sizes and makes them load faster.
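To illustrate, here’s the same hypothetical CSS rule before and after minification — the browser reads both identically:

```html
<!-- Before minification: readable, but heavier -->
<style>
  .hero {
    color: #222222;
    margin: 0 auto; /* centers the block */
  }
</style>

<!-- After minification: same rule, no whitespace or comments -->
<style>.hero{color:#222;margin:0 auto}</style>
```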

It sounds easy, but the reality is more complicated. It’s not always simple to minify both file types and be sure you’ve excluded all the right resources — especially if you’re not a developer. Either way, it’s time-consuming.

The easiest and most effective way to take care of minification is to use a plugin like WP Rocket.

In the file optimization tab, you’ll have the opportunity to minify both CSS and JavaScript files.

 File optimization tab - Minifying CSS and JS files

You’ll address the following PageSpeed Insights recommendations:

  • Minify CSS
  • Minify JS
  • Avoid enormous network payloads.

7. Optimize Your Images

Performance Impact: high 🚀🚀🚀

Optimizing images is another relevant way to fix a bad LCP score.

Images are often the LCP element from mobile or desktop. By improving their loading time, you’ll boost the Largest Contentful Paint performance.

Here’s what you can do to fix any performance issues about images.

Compress and resize your images. You should reduce the file size without losing quality. The smaller the image dimensions are, the faster the loading time will be.

To be clear: if you use a tool to optimize your images on the desktop, you will only optimize the original size. The images that you upload on WordPress won’t be resized. As you may know, in WordPress, there are different image sizes. Unless you use a proper image optimization plugin, you won’t optimize anything for performance.

For optimizing a few images, you could use a tool like ImageOptim. On the other hand, if you want to optimize more images and their thumbnails in bulk, Imagify is the perfect solution. You’ll reduce your images’ weight without sacrificing their quality. You’ll save plenty of time!

Convert your images into next-gen formats. Overall, Google recommends the WebP format. And that’s why all WordPress image optimizer plugins now include the option to convert images to WebP. Other formats you may take into account are JPEG 2000 and JPEG XR. These formats are smaller than JPEG, PNG, and GIF and help improve performance.

Use responsive images. You shouldn’t serve the same image size to desktop and mobile. For instance, if the desktop image is large, the mobile image should be medium-sized.
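In plain HTML, responsive images rely on the srcset and sizes attributes. Here is a sketch with hypothetical file names, letting the browser pick the smallest candidate that fits the viewport:

```html
<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     width="800" height="450"
     alt="Post hero image">
```

Small screens download the 400-pixel-wide candidate, while large or high-DPI screens get a bigger one, so mobile users never pay for desktop-sized files.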

Page builders like Elementor allow users to upload different image sizes according to the device. Mobile image optimization is pretty essential, and the mobile score matters the most. Don’t underestimate its impact on your LCP grade!

Exclude the LCP element from lazy-loading. While overall lazy-load helps improve loading time, it can make the LCP score worse, especially when the LCP element is an image and gets lazy-loaded. That’s why excluding the LCP element from lazy-load and displaying it directly in the HTML of the page is an excellent way to optimize the LCP score.
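Here’s a sketch of that markup, with hypothetical file names; note that fetchpriority is a hint currently supported mainly in Chromium-based browsers:

```html
<!-- LCP hero image: loaded eagerly, never lazy-loaded -->
<img src="hero.jpg" alt="Hero"
     loading="eager" fetchpriority="high"
     width="1200" height="600">

<!-- Below-the-fold images can still be lazy-loaded safely -->
<img src="gallery-1.jpg" alt="Gallery photo"
     loading="lazy" width="600" height="400">
```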

Use a static image instead of a slider. Sliders can be very heavy to load because of their code. On the other hand, a static image served with plain HTML is lighter and faster.

By optimizing your images, you’ll address the following PageSpeed Insights audits:

  • Serve images in next-gen formats
  • Properly size images
  • Efficiently encode images
  • Avoid enormous network payloads.

8. Compress Text Files

Performance Impact: high 🚀🚀🚀

You should also compress text files such as HTML, CSS, or JavaScript resources.

Compression means “zipping” your files into a smaller, lighter format so that they load faster. Once you reduce their size, the transfer between browser and server will be quicker, and the browser will be able to load these resources faster. Load time and LCP will improve.

You can use compression formats such as GZIP and Brotli. GZIP is supported by most hosts; Brotli is more performant and now generally recommended. Learn more about GZIP vs. Brotli in our blog post.

You can easily enable GZIP compression on WordPress by using a plugin. You can choose between different options, from the straightforward Enable Gzip Compression plugin to WP Rocket, which automatically includes GZIP compression. Also, some hosts enable GZIP compression by default.

Either way, you’ll address the “Enable text compression” PageSpeed recommendation.

9. Use Preload for Critical Assets (Such as the Largest Contentful Paint Image)

Performance Impact: low 🚀

At this point, you know how critical the above-the-fold assets are for a good performance score. These critical resources can be fonts, images, videos, CSS, or JavaScript files.

To improve your LCP score, you should always make the critical assets load as fast as possible.

So, you may be wondering how to preload the Largest Contentful Paint image.

The Preload option comes in handy. It tells the browser to prioritize loading these resources. In other words, preload keeps the browser from discovering these critical files (including the LCP image) much later than it should.

You can include rel="preload" in the code:

<link rel="preload" as="script" href="script.js">
<link rel="preload" as="style" href="style.css">
<link rel="preload" as="image" href="img.png">
<link rel="preload" as="video" href="vid.webm" type="video/webm">
<link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>

Source: Google

For preloading the LCP image, you can also use a plugin like Perfmatters.

If you need to preload fonts, you can take advantage of the WP Rocket feature (only if you have not enabled the Remove Unused CSS feature):

Preload tab - Preload fonts feature

You can read more about the best practices for web font preloading in our dedicated article.

By using preload for critical assets, you’ll address the “Preload key requests” PageSpeed recommendation.

10. Establish Third-party Connections Early

Performance Impact: low 🚀

Making the third-party connections faster is an additional way to optimize your LCP performance.

You should use the Preconnect option.

Let’s say that there’s a CSS or JS file requested from a third-party, such as Facebook or Google Analytics. The browser will request the external resource.

If enabled, the Preconnect option tells the browser to establish a connection with the external domain as fast as possible. The browser will then handle the request in parallel with the ongoing rendering process.

You can include rel="preconnect" in your code:

<link rel="preconnect" href="https://example.com">

As an alternative, you can use a plugin such as Perfmatters.

Since some browsers may not support preconnect, it’s always best to implement dns-prefetch as a fallback technique. You’ll then resolve DNS lookups faster. In other words, the external files will load more quickly, especially on mobile networks.

You can add rel="dns-prefetch" to your code — as a separate tag from the preconnect one:

<head>
 …
<link rel="preconnect" href="https://example.com">
<link rel="dns-prefetch" href="https://example.com">
</head>

WP Rocket’s Preload tab allows you to prefetch the DNS requests. You only have to specify the external hosts to be prefetched:

Preload tab - Prefetch DNS requests

By establishing third-party connections early, you’ll improve the Time to First Byte and the server response time. You’ll also address the “Preconnect to required origins” PageSpeed recommendation.

Start Optimizing Your LCP Score on WordPress Today

You should now understand why Largest Contentful Paint is essential for performance and user experience and how you can improve its score. By applying all these optimization techniques, you’ll enhance the LCP grade on your WordPress site.

Not a WP Rocket customer yet? Save yourself time and let WP Rocket do the job for you. WP Rocket is the easiest way to get an excellent LCP score. You don’t even have to touch any settings! WP Rocket automatically applies 80% of web performance best practices, and you’ll see an immediate, visible improvement in your Core Web Vitals scores.

What’s more, you’ll stop managing multiple web performance plugins. You will only need WP Rocket to boost your LCP grade — no technical knowledge required, we promise!

Do you want to take a look first? Then watch our video and learn how to improve Largest Contentful Paint with WP Rocket in a few clicks!

  • First Input Delay (FID): What’s the First Input Delay, and what’s its impact on UX and SEO performance? On this page, you’ll learn what FID is, how you can test it, and what factors affect its grade. You’ll also understand how to fix a bad score from mobile and desktop (FID longer than 100 or 300 ms) and improve your WordPress performance. Everything in plain English!
  • Cumulative Layout Shift (CLS): Wondering what Cumulative Layout Shift means and what’s its impact on your UX and SEO performance? Keep reading! You’ll find out what CLS is, how you can test it, and what factors affect its score. You’ll discover how to avoid large layout shifts and fix a bad grade from mobile and desktop (Cumulative Layout Shift more than 0.1 or 0.25). Everything in plain English!

    Source :
    https://wp-rocket.me/google-core-web-vitals-wordpress/improve-largest-contentful-paint/

Google Core Web Vitals for WordPress: How to Test and Improve Them

Table of Contents

Heard about this new Google Core Web Vitals project but not sure how it connects to your WordPress site? Or maybe you have no idea what the Core Web Vitals project is and why it matters for WordPress?

Either way, this post is going to cover everything you need to know about Core Web Vitals and WordPress. We’ll tell you what they are, how to test them, and how to improve your site’s scores to create a better user experience and maybe even boost your search rankings in 2021 and beyond.

What Are Google Core Web Vitals?

Core Web Vitals are a new initiative from Google designed to measure and improve user experience on the web. Instead of focusing on generic metrics like how long it takes your entire website to load, Core Web Vitals focus on how your WordPress site’s performance connects to delivering a high-quality user experience.

Users care about how fast they can start interacting with a page. That’s precisely what the Core Web Vitals metrics aim to measure.

Currently, there are three Core Web Vitals: Largest Contentful Paint (loading performance), Cumulative Layout Shift (visual stability), and First Input Delay (interactivity).

According to Google, these metrics are the most important ones for providing a great user experience.

If you think that these names are confusing, and if you tend to mix one metric with another, don’t worry! We’ll explain each metric in the easiest way. We want you to understand what each Core Web Vital means and its impact on user experience.

It’s the first step for improving the scores and your overall SEO and WordPress performance.

Explaining Largest Contentful Paint (LCP)

Largest Contentful Paint

Largest Contentful Paint (LCP) measures how long it takes for the most meaningful content on your site to load – that’s usually your site’s hero section or featured image.

According to Google, how long it takes for a page’s main content to load affects how quickly users perceive your site to load.

Practical example: you land on a page and don’t see the top image fully displayed right away. You would be annoyed, right? You might even think about leaving the page right away. That’s why the Largest Contentful Paint metric is closely related to user experience — more than the overall site’s loading time.

The LCP “element” is different for each site, and it’s also different between the mobile and desktop versions of your site. Sometimes the LCP element could be an image, while other times, it could just be text. You’ll get a clear example in the section on how to test and measure Core Web Vitals.

If you’re wondering what a good LCP time is, here are Google’s thresholds:

  • Good – Less than or equal to 2.5 seconds
  • Needs Improvement – Less than or equal to 4.0 seconds
  • Poor – More than 4.0 seconds.

On a side note: LCP is very similar to First Contentful Paint (FCP), another metric included in PageSpeed Insights.

The key difference is that LCP measures when the “main” content loads, while FCP measures just when the “first” content loads — which could be a splash screen or loading indicator, a less relevant element for user experience.
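If you’re curious which element the browser is treating as the LCP candidate on a given page, you can log it yourself with the PerformanceObserver API (supported in Chromium-based browsers):

```html
<script>
  // Logs each LCP candidate as the page loads; the last one logged
  // is the element that determines the final LCP score.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate:', entry.element,
                  Math.round(entry.startTime), 'ms');
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
</script>
```

The buffered flag replays entries that occurred before the observer was registered, so the snippet works even when added late in the page.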

Explaining Cumulative Layout Shift (CLS)

Cumulative Layout Shift

The Cumulative Layout Shift (CLS) measures how much your site’s content “shifts” or “moves around” as it loads.

Practical example: you’re about to click on a link or CTA, and you can’t because the content has just shifted down as the page loaded. That’s a layout shift, and it makes for a terrible user experience. The same goes for accidentally clicking the wrong button because late-loading content caused it to shift.

Or, have you ever been on a news website where the content in the article keeps shifting around as the site loads ads, and you are unable to keep reading? That’s a layout shift, too.

You can see for yourself how cumulative layout shift is super annoying for users and how it results in a poor experience.

Here’s how Google defines the CLS scores:

  • Good – Less than or equal to 0.1
  • Needs Improvement – Less than or equal to 0.25
  • Poor – More than 0.25.

Explaining First Input Delay (FID)

First Input Delay

First Input Delay (FID) measures the time between when a user interacts with something on your page (e.g., clicking a button or a link) and when their browser can begin processing that event.

Practical example: if you click on a button to expand an accordion section, how long does it take for your site to respond to that and show the content?

First Input Delay is probably the most complicated metric to understand and optimize for, also because it’s heavily affected by JavaScript.

Let’s say that you land on a site from mobile and click on a link, but you don’t get an immediate response. It could be because your phone is busy processing a large JavaScript file from that site.

Here’s how Google defines FID scores:

  • Good – Less than or equal to 100 ms
  • Needs Improvement – Less than or equal to 300 ms
  • Poor – More than 300 ms.

Do Core Web Vitals Affect SEO as a Ranking Factor?

In June 2021, Google will start using Core Web Vitals as a ranking factor – therefore, these metrics could affect your SEO performance.

Core Web Vitals will be part of the new Page experience signals, together with HTTPS-security, safe-browsing, mobile-friendliness, and intrusive interstitial guidelines.

Core Web Vitals will affect both mobile and desktop organic results, as well as whether or not your site appears in Top Stories. Previously, your site needed to use AMP to appear in Top Stories. That will no longer be the case when Google rolls out the change, but your site will need to meet specific minimum Core Web Vitals scores to appear in Top Stories.

What’s more, it seems like all Core Web Vitals metrics need to be met to improve organic ranking. And the Core Web Vitals score for noindexed pages may matter, too.

In short: if you care about your SEO performance, improving your Core Web Vital scores is now mandatory.

How to Test & Measure Core Web Vitals on WordPress

You can test and measure the Core Web Vitals with all of Google’s tools for web developers, from PageSpeed Insights to the Chrome DevTools, the CrUX Report, and much more.

As you can see in the image below, Google’s tools measure all three metrics — except for Chrome DevTools and Lighthouse.

These two tools use the Total Blocking Time as a proxy for the First Input Delay. That’s because FID can only be measured with real user data (Field Data), whereas Lighthouse only provides Lab Data.

Google’s tools to measure Core Web Vitals

If you prefer using another performance tool, you should know that both GTmetrix and WebPageTest have started to use the Lighthouse performance score.

Keep in mind that both tools only provide you with the Largest Contentful Paint and the Cumulative Layout Shift scores.

The reason is always the same: the First Input Delay can only be measured with real user interaction, and these tools rely on the Lighthouse Lab Data.

Let’s now go over two of the most popular tools: PageSpeed Insights and Search Console. The first one helps you detect individual page issues; the other allows you to diagnose sitewide problems.

How to Test and Measure the Core Web Vitals with PageSpeed Insights

The easiest way to test your site’s pages against Core Web Vitals is via Google PageSpeed Insights.

Google’s tool provides data on all three metrics and gives specific recommendations to improve their performance.

The Diagnostics section will become your best ally to get a better score!

Just plug in your site’s URL, and you’ll see Core Web Vitals metrics in both the Field Data (based on the CrUX report) and the Lab Data (based on Lighthouse 6.0).

The Core Web Vitals metrics are marked with a blue ribbon – as long as you get it, you meet the threshold required by Google.

PageSpeed score for the WP Rocket homepage

You should keep in mind some notes:

  • The Core Web Vitals scores can slightly differ between the Field and Lab Data. In the screenshot above, LCP is 1.8 s according to the Field Data and 2.2 s in the Lab Data. That’s normal, and it depends on how data is collected.
  • Not having any Field Data when running your test is not an issue; it just means there’s not enough real user data available. It doesn’t impact your score because PageSpeed Insights uses the Lab Data for the page speed score. If you’re wondering what happens with the First Input Delay, which isn’t included in the Lab Data, you’ll get your answer in a few lines!
  • Always check both the mobile and desktop results. Your Core Web Vitals metrics will differ between the two. Keep in mind that the mobile score is the most relevant and the most challenging.

Let’s now look at how you can use PageSpeed Insights to identify the Core Web Vitals elements that need improvement.

Discovering the Largest Contentful Paint Element with PageSpeed Insights

As we explained, the LCP score measures how long it takes for the most meaningful element to become visible to your visitors.

To discover your site’s Largest Contentful Paint element, scroll down to the Diagnostics section and expand the Largest Contentful Paint element tab.

There, Google will display the HTML for the element that it’s using to measure LCP.

For example, on the desktop version of the WordPress.org homepage, the LCP element is an image:

The LCP element from the desktop – PageSpeed Insights

However, on the mobile version of the site, the LCP element is the subheading text:

The LCP element from the mobile – PageSpeed Insights

Discovering the Cumulative Layout Shift Elements with PageSpeed Insights

Quick recap: the Cumulative Layout Shift deals with how your site loads and whether or not your content “moves around” as new content is loaded.

To find the individual elements on your site that are “shifting” and affecting your score, go to the Avoid large layout shifts section in the Diagnostics area:

The CLS elements – PageSpeed Insights

Discovering First Input Delay and Total Blocking Time with PageSpeed Insights

First Input Delay is about user interaction, remember? That is, how long it takes the page to respond after a user interacts with an element such as a link or a button.

That’s why FID is based on actual user data, and you won’t find its score in the Lab Data. As we explained, you’ll only see FID times in the Field Data section — and only if the CrUX report has collected enough data.

In the Lab Data, Total Blocking Time (TBT) replaces First Input Delay.

Total Blocking Time replaces First Input Delay in the Lab Data.

As long as you improve your Total Blocking Time, you’ll likely improve the FID score.

If you have a bad TBT score, you should go to the Minimize third-party usage section in the Diagnostics area.

Here, you'll see what you can minimize in terms of third-party usage. It's one of the main performance issues you need to solve – unless it's already solved and listed under the “Passed audits” section, as you can see below:

Minimize third-party usage recommendation – PageSpeed Insights

How to Read the Core Web Vitals Report on Search Console

If you want to diagnose issues with your site as a whole, you should use the Core Web Vitals report in Google Search Console.

The report is based on an aggregate of real users’ data from CrUX. For this reason, it can take a while to surface new issues. That’s why the Lab Data from Lighthouse is always valuable.

That said, the Core Web Vitals report is great to identify the groups of pages that require attention – both for desktop and mobile.

The Core Web Vitals report in Search Console – Overview

Once you open the report, you’ll find a Details tab that groups the URL performance by several criteria:

  • Status (Poor or Needs improvement)
  • Metric type (e.g., CLS issue: more than 0.25 (desktop))
  • URL group (the list of URLs with similar performance).

Once you have fixed the URLs that needed improvement, you’ll be able to click on the Validation column and move forward with the “Validate Fix” option. Keep in mind that the validation process takes up to two weeks, so be patient!

The Core Web Vitals report in Search Console – Details tab

How to Measure Core Web Vitals with Chrome Extensions

If you’re looking for a useful Chrome Extension, you could choose Web Vitals.

It gives you the Core Web Vital scores for any page you’re browsing:

Web Vitals Chrome extension

You may also want to try CORE Serp Vitals, which shows you the Core Web Vitals results directly on the SERP. Remember that you need to enter a Chrome UX Report API key to let the extension work.

CORE Serp Vitals Chrome extension

How to Improve Core Web Vitals on WordPress

Now for the critical question — if you aren’t currently meeting Google’s recommendations for the three Core Web Vitals metrics, how can you optimize your WordPress site to improve your Core Web Vitals scores?

The strategies are different for each metric. Most optimizations involve implementing WordPress performance best practices, though with a few points of emphasis, which is why choosing a good WordPress caching plugin will do much of the work for you.

Watch the video to understand how to optimize your Core Web Vitals, and keep reading to learn more about it.

How to Improve Largest Contentful Paint on WordPress

Largest Contentful Paint is the most straightforward metric to optimize, as the work is pretty much entirely standard WordPress performance best practice:

  1. Set up page caching. Page caching speeds up how quickly your server can respond and reduces the server response times (TTFB). Did you know that WP Rocket enables this automatically?
  2. Optimize browser caching. You should set the right option for the static files that your browser keeps in its cache. By doing so, you’ll address the “Serve static assets with an efficient cache policy” PageSpeed Insights recommendation. Guess what? WP Rocket enables the optimal expiration length automatically.
  3. Optimize your images. A lot of times, your LCP element will be an image. Optimizing your images will speed up your site and address PageSpeed recommendations such as “Properly size images”, “Defer offscreen images”, “Serve images in next-gen formats”, and “Efficiently encode images”. You can use Imagify to optimize WordPress images automatically.
  4. Optimize your code. Loading unnecessary CSS or JavaScript files before your main content will slow down the loading time. You can fix this by eliminating render-blocking resources on your WordPress site. You should also minify CSS and Javascript files and remove unused CSS. Optimizing your code will help you address the “Avoid chaining critical requests” PageSpeed recommendation. Once again, you’ll get most of the job done by setting these optimizations up in the File Optimization tab in WP Rocket.
  5. Use server-level compression. Using Gzip or Brotli compression will reduce your site’s file size, which speeds up LCP and addresses the “Enable text compression” recommendation. WP Rocket automatically enables Gzip compression.
  6. Use preconnect for important resources. Preconnect lets you establish important third-party connections early and addresses the “Preload key requests” and “Preconnect to required origins” recommendations. You can learn more in our tutorial.
  7. Use a content delivery network (CDN) for global audiences. If you have a global audience, a CDN can speed up your LCP time for visitors around the world. It’s another effective way to reduce the Time to First Byte (TTFB). You can use our RocketCDN service.

The easiest way to implement most of these best practices is to use WP Rocket. WP Rocket will automatically apply page caching and server-level compression as soon as you activate it. It also includes other features to help you optimize your site’s code and performance, all of which improve your LCP time.
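
To see why item 5 (server-level compression) matters, here is a quick standard-library demonstration of how well repetitive HTML-like text compresses; the sample markup is made up, but the effect is representative of real HTML, CSS, and JavaScript:

```python
import gzip

# HTML, CSS, and JS are full of repeated tags and identifiers,
# so they compress very well -- which is why Gzip/Brotli helps so much.
html = ('<div class="post"><p>Hello, Core Web Vitals!</p></div>' * 200).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)

print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes "
      f"({ratio:.1%} of original)")
```

Brotli typically does a little better than Gzip on text, but either one shrinks the bytes your visitors must download before the LCP element can render.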

Source :
https://wp-rocket.me/google-core-web-vitals-wordpress/

Basic Authentication Deprecation in Exchange Online – September 2022 Update

One month from today, we’re going to start to turn off basic auth for specific protocols in Exchange Online for customers who use them.

Since our first announcement nearly three years ago, we’ve seen millions of users move away from basic auth, and we’ve disabled it in millions of tenants to proactively protect them.

We’re not done yet though, and unfortunately usage isn’t yet at zero. Despite that, we will start to turn off basic auth for several protocols in tenants where it has not previously been disabled.

Starting October 1st, we will start to randomly select tenants and disable basic authentication access for MAPI, RPC, Offline Address Book (OAB), Exchange Web Services (EWS), POP, IMAP, Exchange ActiveSync (EAS), and Remote PowerShell. We will post a message to the Message Center 7 days prior, and we will post Service Health Dashboard notifications to each tenant on the day of the change.

We will not be disabling or changing any settings for SMTP AUTH.

If you have removed your dependency on basic auth, this will not affect your tenant or users. If you have not (or are not sure), check the Message Center for the latest data in the usage reports we have been sending monthly since October 2021. The data for August 2022 will be sent within the first few days of September.

What If You Are Not Ready for This Change?

We recognize that unfortunately there are still many tenants unprepared for this change. Despite multiple blog posts, Message Center posts, interruptions of service, and coverage via tweets, videos, conference presentations and more, some customers are still unaware this change is coming. There are also many customers aware of the deadline who simply haven’t done the necessary work to avoid an outage.

Our goal with this effort has only ever been to protect your data and accounts from the increasing number of attacks we see that are leveraging basic auth.

However, we understand that email is a mission-critical service for many of our customers and turning off basic auth for many of them could potentially be very impactful.

One-Time Re-Enablement

Today we are announcing an update to our plan: a one-time re-enablement option for customers who are unaware of, or not ready for, this change.

When we turn off basic auth after October 1st, all customers will be able to use the self-service diagnostic to re-enable basic auth for any protocols they need, once per protocol. Details on this process are below.

Once this diagnostic is run, basic auth will be re-enabled for those protocol(s). Selected protocol(s) will stay enabled for basic auth use until end of December 2022. During the first week of calendar year 2023, those protocols will be disabled for basic auth use permanently, and there will be no possibility of using basic auth after that.

Avoiding Disruption

If you already know you need more time and wish to avoid the disruption of having basic auth disabled, you can run the diagnostic during the month of September, and when October comes, we will not disable basic auth for the protocol(s) you specify. We will disable basic auth for any non-opted-out protocols, but you will be able to re-enable them (until the end of the year) by following the steps below if you later decide you need those too.

In other words – if you do not want basic for a specific protocol or protocols disabled in October, you can use the same self-service diagnostic in the month of September. Details on this process below.

Diagnostic Options

Thousands of customers have already used the self-service diagnostic we discussed in earlier blog posts (here and here) to re-enable basic auth for a protocol that had been turned off, or to tell us not to include them in our proactive protection expansion program. We’re using this same diagnostic again, but the workflow is changing a little.

Today, we have archived all prior re-enable and opt-out requests. If you have previously opted out or re-enabled basic for some protocol, you’ll need to follow the steps below during the month of September to indicate you want us to leave something enabled for basic auth after Oct 1.

To invoke the self-service diagnostic, you can go directly to the basic auth self-help diagnostic by simply clicking on this button (it’ll bring up the diagnostic in the Microsoft 365 admin center if you’re a tenant Global Admin):

Or you can open the Microsoft 365 admin center and click the green Help & support button in the lower right-hand corner of the screen.

When you click the button, you enter our self-help system. Here you can enter the phrase “Diag: Enable Basic Auth in EXO”.

Customers with tenants in the Government Community Cloud (GCC) are unable to use the self-service diagnostic covered here. Those tenants may opt out by following the process contained in the Message Center post sent to their tenant today. If GCC customers need to re-enable a protocol following the Oct 1st deadline they will need to open a support ticket.

Opting Out

During the month of September 2022, the diagnostic will offer only the option to opt out. By submitting your opt-out request during September, you are telling us that you do not want us to disable basic auth for a protocol or protocols during October. Please understand that we will be disabling basic auth for all tenants permanently in January 2023, regardless of their opt-out status.

The diagnostic will show a version of the dialog below, and you can re-run it for multiple protocols. It might look a bit different if some protocols have already been disabled. Note too that protocols are not removed from the list as you opt out, but rest assured that (unless you receive an error) we will receive the request.

Re-Enabling Basic for protocols

Starting October 1, the diagnostic will only allow you to re-enable basic auth for a protocol it was disabled for.

If you did not opt-out during September, and we disabled basic for a protocol you later realize you need, you can use this to re-enable it.

Within an hour (usually much sooner) after you run the diagnostics and ask us to re-enable basic for a protocol, basic auth will start to work again.

At this point, we have to remind you that by re-enabling basic for a protocol, you are leaving your users and data vulnerable to security risks, and that we have customers suffering from basic auth-based attacks every single day (but you know that already).

Starting January 1, 2023, the self-serve diagnostic will no longer be available, and basic auth will soon thereafter be disabled for all protocols.

Summary of timelines and actions

Please see the following flow chart to help illustrate the changes and actions that you might need to take:

Blocking Basic Authentication Yourself

If you re-enable basic auth for a protocol because you need some extra time, and afterward no longer need it, you can block it yourself instead of waiting for us to do it in January 2023. The quickest and most effective way to do this is to use Authentication Policies, which block basic auth connections at the first point of contact to Exchange Online.

Just go into the Microsoft 365 admin center, navigate to Settings > Org Settings > Modern Authentication, and uncheck the boxes to block basic auth for all protocols you no longer need (these checkboxes will do nothing once we block basic auth for a protocol permanently, and we’ll remove them some time after January 2023).

Reporting Web Service Endpoint

For those of you using the Reporting Web Service REST endpoint to get access to Message Tracking Logs and more, we’re also announcing today that this service will continue to have basic auth enabled until Dec 31st for all customers; no opt-out or re-enablement is required. And we’re pleased to be able to provide the long-awaited guidance for this right here.

EOP/SCC PowerShell

Basic authentication will remain enabled until Dec 31st, 2022. Customers need to migrate to certificate-based authentication. Follow the instructions here: App-only authentication.

One Other Basic Authentication Related Update

We’re adding a new capability to Microsoft 365 to help our customers avoid the risks posed by basic authentication. This new feature changes the default behavior of Office applications to block sign-in prompts using basic authentication. With this change, if users try to open Office files on servers that only use basic authentication, they won’t see any basic authentication sign-in prompts. Instead, they’ll see a message that the file has been blocked because it uses a sign-in method that may be insecure.

You can read more about this great new feature here: Basic authentication sign-in prompts are blocked by default in Microsoft 365 Apps.

Office Team is looking for customers to opt-in to their Private Preview Program for this feature. Please send them an email if you are interested in signing up: basicauthdeprmailer@microsoft.com.

Summary

This effort has taken three years from initial communication until now, and even that has not been enough time to ensure that all customers know about this change and take all necessary steps. IT and change can be hard, and the pandemic changed priorities for many of us, but everyone wants the same thing: better security for their users and data.

Our customers are important to us, and we do not want to see them breached, or disrupted. It’s a fine balance but we hope this final option will allow the remaining customers using Basic auth to finally get rid of it.

The end of 2022 will see us collectively reach that goal, to Improve Security – Together.

The Exchange Team

Source :
https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-deprecation-in-exchange-online-september/ba-p/3609437

A CISO’s Ultimate Security Validation Checklist

If you’re heading out of the office on a well-deserved vacation, are you certain the security controls you have in place will let you rest easy while you’re away? More importantly – do you have the right action plan in place for a seamless return?

Whether you’re on the way out of – or back to – the office, our Security Validation Checklist can help make sure your security posture is in good shape.

1. Check the logs and security events of your key critical systems. Stay up-to-date on recent activities. Check for changes – and attempted changes – and any potential indicators of compromise. Planning to be gone for longer than a week? Designate a team member to perform a weekly review in your absence, reducing the chances of a critical event going undetected.

2. Check for any new security vulnerabilities that were identified while you were away. Use your preferred scanning tool or check one of the regularly updated databases, such as CVE Details.

3. Investigate failures of critical components and the reasons behind them. If remediation is needed, create an action plan to address the immediate issues and prevent repeated failures in the future.

4. Review whether there were any key changes to your products and their corresponding security controls. While now isn’t the time to implement major changes to your EDR, SIEM system, or other corresponding solutions, do make sure you’re aware of any updates that were made in your absence. Once you’re back – and able to monitor the impact on your overall security posture – you can make larger-scale changes to your controls.

5. Check with HR for any relevant changes. Did any new employees join the company and therefore need access to specific systems? Conversely, did any employees leave and need their credentials revoked? Were there any other incidents or red flags that require your attention?

6. Be aware of new business orientations. Did the organization introduce any new services or products that expanded the potential attack surface? For instance, did a new website or mobile app go live, or was a new version of a software product rolled out? Make sure your team is up to speed on the latest changes.

7. Check your password policies. Password policies shouldn’t be dependent on your vacation status, but as you work through this security checklist, take the opportunity to make sure policies are appropriately protecting the organization. Consider reviewing length, complexity, and special character requirements, as well as expiration and re-use policies.

8. Review firewall configurations. With many security experts recommending a review of firewall configurations every three to six months, now is an opportune time for an audit. Review network traffic filtering rules, configuration parameters, and authorized administrators, among other settings, to make sure you’re using the appropriate configurations.

There are plenty of tools that can help work through this checklist – but do you have all the resources needed to make sure everything will be addressed?

If you need help automating and standardizing your processes – or making sure critical vulnerabilities aren’t slipping through the cracks – Automated Security Validation can help. With real-time visibility, complete attack surface management, and actual exploitation measures – not just simulations – it provides what you need to rest easy while you’re away. And when you get back? Risk-based remediation plans help you create your roadmap for keeping your organization protected.

When you’re back, we’ve got your back. To learn more about protecting your security posture with Automated Security Validation, request a demo of the Pentera platform.

Source :
https://thehackernews.com/2022/08/a-cisos-ultimate-security-validation.html

Use this Identity Checklist to secure your M365 tenant

Securing a Microsoft 365 tenant must start with identity.

Protecting identities is a fundamental part of Zero Trust and it’s the first “target” that most attackers look for. We used to say that attackers hack their way in, now we say they log in, using bought, found or stolen/phished credentials. This article will show you why MFA is so important and how to implement advanced security features in Azure AD such as PIM, Password protection, Conditional Access policies (also a strong part of Zero Trust), auditing and more.

Below is the first chapter from our free Microsoft 365 Security Checklist eBook. The Microsoft 365 Security Checklist shows you all the security settings and configurations you need to know for each M365 license to properly secure your environment. Download the full eBook and checklist spreadsheet.

Multi-Factor Authentication

It should be no surprise that we start with identity; it’s the new security perimeter, the new firewall, and a strong identity equals strong security. The first step to take here is implementing Multi-Factor Authentication (MFA). It’s free for all Office / Microsoft tenants. If you want to use Conditional Access (CA) to enforce it (rather than just enabling users “in bulk”), you need Azure AD Premium P1+ licensing. A username and a simple password are no longer adequate to protect your business (they never were; we just never had a simple, affordable, easy-to-use alternative).

Hand-in-hand with MFA you need user training. If your business is relying on users doing the right thing when they get the prompt on their phone – they MUST also know that if they get a prompt when they’re NOT logging in anywhere, they must click Block / No / Reject.

To enable MFA on a per-user basis, go to aad.portal.azure.com, login as an administrator, click Azure Active Directory – Security – MFA and click on the blue link “Additional cloud-based MFA settings”.

Additional MFA settings

There are two parts (tabs) on this page. Under “service settings” you should: disable app passwords (a workaround for legacy clients that don’t support MFA, which shouldn’t be necessary in 2022); optionally add trusted public IP addresses so that users aren’t prompted in the corporate office (both we and Microsoft recommend against this setting); disable Call and Text message to phone as verification methods; and review the “remember MFA on trusted devices” setting (1-365 days), for which Microsoft recommends either using CA policies to manage sign-in frequency or setting it to 90 days. Phone call / text message MFA are not strong authentication methods and should not be used unless there’s no other choice.

On the user’s tab you can enable MFA for individual users or click bulk update and upload a CSV file with user accounts.

If you have AAD Premium P1, it’s better to use a CA policy to enforce MFA, it’s more flexible and the MFA settings page will eventually be retired.

Enforcing MFA with a Conditional Access Policy

A few words of caution: enabling MFA for all your administrators is a given today. Seriously, if you aren’t requiring every privileged account to use MFA (or 2FA / passwordless, see below), stop reading and go and do that right now. Yes, it’s an extra step and yes, you’ll get push back, but there’s just no excuse; it’s simply unprofessional and you don’t belong in IT if you’re not using it. For what it is worth, I’ve been using Azure MFA for over seven years and require it for administrators at my clients, no exceptions.

Enabling MFA for all users is also incredibly important but takes some planning. You may have some users who refuse to run the Microsoft Authenticator app on their personal phone – ask for it to be put in their hiring contract. You need to train them as to why MFA is being deployed, what to do, both for authentic logins and malicious ones. Furthermore, you need to have a smooth process for enrolling new users and offboarding people who are leaving.

You should also strongly consider creating separate (cloud-only) accounts for administrators. They don’t require a license, and it separates administrative actions from the day-to-day work of a person who only administers your tenant occasionally (or use PIM, Chapter 10).

MFA protects you against 99.9% of identity-based attacks but it’s not un-phishable. Stronger alternatives include biometrics such as Windows Hello for Business (WHFB) and 2FA hardware keys which bring you closer to the ultimate in identity security: passwordless.

Legacy Authentication

However, it’s not enough to enable MFA for all administrators and users; the bad guys can still get in with no MFA prompt in sight. The reason is that Office 365 still supports legacy protocols that don’t support modern authentication / MFA. You need to disable these, but you can’t just turn them off: first check whether there are legitimate applications / workflows / scripts that use any of them. Go to aad.portal.azure.com, login as a Global Administrator, click Azure Active Directory – Monitoring – Sign-in logs. Change the time to the last one month, click Add filters, then click Client app and then None Selected; in the drop-down, pick all 13 checkboxes under Legacy Authentication Clients and click Apply.

Filtering Azure AD Sign-in logs for legacy authentication

This will show you all the logins over the last month that used any of the legacy protocols. If you get a lot of results, add a filter for Status and add Success to filter out password stuffing attacks that failed. Make sure you check the four different tabs for interactive / non-interactive, service principals and managed identity sign-ins.
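
If you export the sign-in log (for example via the Microsoft Graph sign-ins API) rather than filtering in the portal, a small script can do the same triage. This is a sketch: the `clientAppUsed` and `status.errorCode` fields match the Graph sign-in schema, but the set of legacy client labels and the sample records below are illustrative:

```python
# Common legacy-auth client labels seen in Azure AD sign-in logs;
# treat the exact strings as illustrative of what your export contains.
LEGACY_CLIENTS = {
    "Exchange ActiveSync", "IMAP4", "POP3", "Authenticated SMTP",
    "MAPI Over HTTP", "Exchange Web Services", "AutoDiscover",
    "Exchange Online PowerShell", "Other clients",
}

def legacy_sign_ins(records):
    """Keep only successful sign-ins that used a legacy client app."""
    return [
        r for r in records
        if r.get("clientAppUsed") in LEGACY_CLIENTS
        and r.get("status", {}).get("errorCode", 1) == 0  # 0 = success
    ]

# Hypothetical exported records:
sample = [
    {"userPrincipalName": "printer@contoso.com",
     "clientAppUsed": "Authenticated SMTP",
     "status": {"errorCode": 0}},
    {"userPrincipalName": "alice@contoso.com",
     "clientAppUsed": "Browser",
     "status": {"errorCode": 0}},
]
hits = legacy_sign_ins(sample)
print([r["userPrincipalName"] for r in hits])  # ['printer@contoso.com']
```

Filtering on successful sign-ins mirrors the portal tip above: it strips out failed password-stuffing noise so you only chase real legacy usage.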

You’ll now need to investigate the logins. In my experience there will be some users who are using Android / Apple mail on smartphones; point them to the free Outlook app instead (Apple mail can be configured to use modern authentication). There’s also likely to be line-of-business (LOB) applications and printers / scanners that send emails via Office 365, so you’ll need updates for these. Alternatively, you can use another email service for these such as smtp2go.

Once you have eliminated all legitimate legacy authentication protocol usage, you can disable it in two ways, and it’s best to use both: create a Conditional Access policy based on the new template to block it, and also go to admin.microsoft.com, Settings – Org settings – Services – Modern authentication, and turn off basic authentication protocols.

Disable legacy authentication protocols in the M365 Admin Center

Break Glass accounts

Create at least one, preferably two break glass accounts, also known as emergency access accounts. These accounts are exempted from MFA, all CA policies and PIM (see below) and have very long (40 characters+), complex passwords. They’re only used if AAD MFA is down, for example, to gain access to your tenant to temporarily disable MFA or a similar setting, depending on the outage.

A second part to this is that you want to be notified if these accounts are ever used. One way to do this is to send your Azure AD sign-in logs to Azure Monitor (also known as Log Analytics), with instructions here. Another option is to use Microsoft Sentinel (which is built on top of Log Analytics) and create an Analytics rule.
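
Whichever log pipeline you use, the detection logic itself is trivial; the sketch below (with hypothetical break-glass UPNs) shows the kind of check an alert rule performs on incoming sign-in events:

```python
# Hypothetical break-glass account UPNs -- substitute your own.
BREAK_GLASS = {
    "bg-admin1@contoso.onmicrosoft.com",
    "bg-admin2@contoso.onmicrosoft.com",
}

def break_glass_alerts(sign_ins):
    """Return any sign-in events involving a break-glass account.
    These accounts should never be used in normal operations,
    so any hit should page someone immediately."""
    return [s for s in sign_ins
            if s.get("userPrincipalName", "").lower() in BREAK_GLASS]

events = [
    {"userPrincipalName": "alice@contoso.onmicrosoft.com"},
    {"userPrincipalName": "bg-admin1@contoso.onmicrosoft.com"},
]
alerts = break_glass_alerts(events)
print(len(alerts))  # 1
```

In practice you would express the same condition as a Log Analytics query in a Sentinel Analytics rule, but the rule is just this membership test over the sign-in stream.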

Microsoft Sentinel alert rule when a Break Glass account is used

Security Defaults

If yours is a very small business, with few requirements for flexibility, the easiest way to set up Azure AD with MFA for everyone, plus several other security features enabled, is to turn on Security Defaults. Note that you can’t have break-glass accounts or other service accounts with Security Defaults, as there’s no way to configure exceptions. Go to Properties for your Azure AD tenant, scroll to the bottom, and click on Manage Security defaults; here you can enable and disable it.

Privileged Identity Management

It’s worth investing in Azure Active Directory (AAD) Premium P2 for your administrator’s accounts and enabling Privileged Identity Management (PIM). This means their accounts are ordinary user accounts who are eligible to elevate their privileges to whatever administrator type they are assigned (see Chapter 10).

If you’re not using PIM, create dedicated admin accounts in AAD only. Don’t sync these accounts from on-premises but enforce MFA and strong passwords. Since they won’t be used for day-to-day work, they won’t require an M365 license.

Password Protection

After MFA, your second most important step is banning bad passwords. You’re probably aware that we’ve trained users to come up with bad passwords over the last few decades with “standard” policies (at least 8 characters, uppercase, lowercase, special character and numbers), which results in P@ssw0rd1 and, when they’re forced to change it every 30 days, P@ssw0rd2. Both NIST in the US and GCHQ in the UK now recommend allowing (but not enforcing) the use of upper / lowercase etc., not mandating frequent password changes, and instead checking the password at the time of creation against a list of known, common bad passwords and blocking those. In Microsoft’s world that’s called Password protection, which is enabled for cloud accounts by default. There’s a global list of about 2000 passwords (and their variants) that Microsoft maintains, based on passwords they find in dumps, and you should add (up to 1000) company-specific words (brands, locations, C-suite people’s names, local sports teams, etc.) for your organization.
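
To illustrate why matching “variants” matters, here is a toy version of banned-password checking. The substitution table and word list are made up for the example, and Azure AD's real algorithm does considerably more (fuzzy matching and scoring), but the core idea is the same:

```python
# Toy model: undo common character substitutions and strip trailing
# digits, then compare against the banned list. Azure AD Password
# Protection's real algorithm is more sophisticated; this is just the idea.
SUBSTITUTIONS = str.maketrans("@$01!3", "asoiie")  # @->a, $->s, 0->o, etc.

# A hypothetical banned list: Microsoft's global words plus
# company-specific additions (brand names, locations, ...).
BANNED = {"password", "contoso", "seattle"}

def is_banned(candidate: str) -> bool:
    normalized = candidate.lower().rstrip("0123456789")
    normalized = normalized.translate(SUBSTITUTIONS)
    return normalized in BANNED

print(is_banned("P@ssw0rd1"))    # True  -- normalizes to "password"
print(is_banned("C0ntoso2024"))  # True  -- normalizes to "contoso"
```

This is why P@ssw0rd1 and P@ssw0rd2 buy you nothing: both collapse to the same banned word once the obvious substitutions are undone.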

You find Password protection in the AAD portal – Security – Authentication Methods.

Password protection settings

Remember, you don’t have to add common passwords to the list, they’re already managed by Microsoft, just add company / region specific words that your staff are likely to use.

If you’re syncing accounts from Active Directory on-premises to AAD, you should also extend Password protection to your DCs. It involves the installation of an agent on each DC, a proxy agent, and a reboot of each DC.

Continuous Access Evaluation

This feature has been in preview for quite some time but is now in general availability. Before Continuous Access Evaluation (CAE), when you disabled a user’s account, or they changed location (from the office to a public Wi-Fi for example) it could be up to one hour before their state was re-evaluated and new policies applied, or they were blocked from accessing services. With CAE, this time is much shorter, in most cases in the order of a few minutes. It’s turned on by default for all tenants (unless you were part of the preview and intentionally disabled it). Another benefit of CAE is that tokens are now valid for 28 hours, letting people keep working during a shorter Azure AD outage. You can disable CAE in a CA policy, but it’s not recommended.

Conditional Access policies

We’ve mentioned Conditional Access (CA) policies several times already, as they’re a crucial component of strong identity security and Zero Trust. Unlike other recommendations, there isn’t a one-size-fits-all set of CA policies we can give you; however, at a minimum you should have policies for:

  • Require MFA for admins (see MFA above)
  • Require MFA for users (see MFA above)
  • Require MFA for Azure management
  • Block legacy authentication (see MFA above)
  • Require compliant or Hybrid AAD joined device for admins
  • Require compliant or Hybrid AAD joined device for users
  • Block access to M365 from outside your country
  • Require MFA for risky sign-ins (if you have AAD Premium P2)
  • Require password change for high-risk users (if you have AAD Premium P2)

This is all going to be much easier going forward with the new policy templates for identity and devices. Go to Azure AD – Security – Conditional Access – New policy – Create a new policy from templates. Another step to take is creating a system for managing the lifecycle of your policies: there’s an API for backing up and updating policies that you can access in several ways, including PowerShell. There’s even a tutorial for setting up a backup system using a Logic App.
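
As a sketch of that backup idea, the following Python saves each policy to its own JSON file. It assumes the Microsoft Graph Conditional Access policies endpoint and an already-acquired bearer token (acquiring the token, e.g. via MSAL, isn’t shown):

```python
# Sketch: back up Conditional Access policies to JSON files, one per
# policy. Assumes the Microsoft Graph endpoint below and a bearer
# token obtained separately (e.g. via MSAL); not production code.
import json
import re
import urllib.request
from pathlib import Path

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

def fetch_policies(token: str) -> list[dict]:
    """Return the 'value' array of policies from the Graph response."""
    req = urllib.request.Request(
        GRAPH_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

def save_policies(policies: list[dict], outdir: Path) -> list[Path]:
    """Write each policy to <displayName>.json, sanitizing the name."""
    outdir.mkdir(parents=True, exist_ok=True)
    written = []
    for policy in policies:
        name = re.sub(r"[^\w-]+", "_", policy.get("displayName", policy["id"]))
        path = outdir / f"{name}.json"
        path.write_text(json.dumps(policy, indent=2))
        written.append(path)
    return written
```

Run on a schedule, this gives you versionable snapshots you can diff before and after policy changes.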

Conditional Access policy templates for identity

A common question is whether policies are evaluated in priority order. They aren’t: all policies are processed together for a particular sign-in, from a specific device and location to an individual application. If multiple policies require different controls (MFA + compliant device), all controls must be satisfied for access. And if policies conflict on the access decision (block vs grant), block wins.
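
Those combination rules can be expressed as a small evaluation function. This is just a sketch of the logic described above, not Azure AD’s actual engine:

```python
# Sketch of how multiple matching CA policies combine for one sign-in.
# Not Azure AD's real engine; it encodes just the two rules above:
# any "block" wins, otherwise ALL required controls must be satisfied.
def evaluate(policies: list[dict], satisfied_controls: set[str]) -> str:
    """Each policy is e.g. {"access": "block"} or
    {"access": "grant", "controls": {"mfa", "compliantDevice"}}."""
    if any(p["access"] == "block" for p in policies):
        return "blocked"
    required: set[str] = set()
    for p in policies:
        required |= p.get("controls", set())
    return "granted" if required <= satisfied_controls else "challenge"
```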

To get you started, here are step-by-step instructions for a policy blocking access to M365 from outside your country, appropriate for most small and medium businesses that operate in only one or a few countries. Keep in mind that travelling staff may be caught out by this, so make sure the policy aligns with business objectives. Be aware that this won’t stop every attack: a VPN or TOR exit node can make it appear as if the attacker is in your country, but it’s one extra step they must take. Remember, you don’t have to run faster than the Fancy Bear, just faster than the other companies around you.

Start by going to Azure AD – Security – Conditional Access – Named locations, click +Countries location, and call the location Blocked countries. Leave Determine location by IP address selected; a newer feature uses GPS location from the Microsoft Authenticator app, which will be more accurate once all your users are on Azure AD MFA (and can therefore be located via GPS). Click the box next to Name to select all countries, then find the one(s) you need to allow login from and deselect them, and click Create.

Creating a Named Location for a Conditional Access Policy

Go to Azure AD – Security – Conditional Access – New policy – Create new policy, and give your policy a name that clearly describes what it does and adheres to your naming standard. Click All Users…, Include All users, and Exclude your Break Glass accounts.

Click on No cloud apps… and select All cloud apps. Select 0 conditions… and click Not configured under Locations. Pick Selected locations under Include and select your newly created location. Finally, under Access controls – Grant, click 0 controls selected and then Block access.
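
The resulting policy logic boils down to a simple check. In this sketch the country code stands in for Azure AD’s IP (or GPS) geolocation, and the allowed set is whatever you deselected from the named location:

```python
# Sketch of the sign-in decision the policy above produces. The
# country code would come from Azure AD's IP geolocation (or GPS);
# here it's just a parameter.
ALLOWED_COUNTRIES = {"AU"}  # the countries you deselected

def sign_in_allowed(country_code: str) -> bool:
    # The named location contains every country EXCEPT the allowed
    # ones, and the policy blocks sign-ins from that location.
    in_blocked_location = country_code not in ALLOWED_COUNTRIES
    return not in_blocked_location
```

The double negative is deliberate: you build a location of everything you want blocked, then block it, rather than building an allow-list policy.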

CA policies can either be in Report-only mode, where you can review reports of what they would have blocked and the controls they would have enforced, or be turned on / off. Report-only is handy for making sure you don’t get fired for accidentally locking everyone out, but turn this policy on as soon as possible.

Conditional Access policy to block logins from outside Australia

A common question is: how can I control how often users are prompted for MFA or asked to sign in again? While it might be counterintuitive, the default in Azure AD is a rolling window of 90 days. Remember, if you change a user’s password, block a non-compliant device, or disable an account (plus any number of other CA policies you have in place that might affect the security posture of the session), re-authentication is required automatically. Don’t prompt users for authentication when nothing has changed: if you do it too frequently, they’re more likely to approve a malicious login.

Branding Log-on Pages

While in the Azure AD portal, click Company branding and add a company-specific Sign-in page background image (1920x1080px) and a Banner logo (280x60px). Note that these files have to be small (300 KB and 10 KB respectively), so you may have to do some fancy compression. This isn’t just a way to make users feel at home when they see a login page. In most cases, when attackers send phishing emails to harvest credentials, they send users to a fake login page that looks like the generic Office 365 one, not your customized one, which is another clue that should alert your users to the danger. Also, Windows Autopilot doesn’t work unless you have customized AAD branding.

Edit Azure AD Company Branding images

Self Service Password Reset

The benefit of Self Service Password Reset (SSPR) is lowering the load on your help desk from managing password resets for users. Once it’s enabled, users must register several ways of being identified for a reset: mobile app notification / code, email (non-Office 365), mobile or office phone call, or security questions (not available to administrators; you can also create custom questions). If you’re synchronizing user accounts from AD to Azure AD, take care in setting up SSPR, as changed passwords must be written back from the cloud to AD.

Configuring Self Service Password Reset in Azure AD

Unified Auditing

Not restricted to security, but nevertheless a fundamental building block, is auditing across Microsoft 365. Go to the Microsoft 365 Defender portal and find Audit in the left-hand menu (it’s almost at the end). If for some reason unified auditing isn’t enabled in your tenant, a yellow banner will give you a button to turn it on (it’s on by default for new tenants). Once it’s enabled, click the Audit retention policies tab and create a policy for your tenant. You want to ensure you have logs to investigate if there’s a breach, and you want them kept for as long as possible.

With Business Premium you get a maximum of 90 days of retention, and Microsoft 365 E5 gives you one year, but make sure to create a policy to set this rather than rely on the default policy (which you can’t see). Give the policy a name and a description, and add all the record types, one by one. The policy will then apply to all users (including newly created ones) for all activities. Only use the Users option when you want a specific policy for a particular user. Give the policy a priority: 1 is the highest and 10,000 is the lowest.

Create an audit retention policy for maximum retention

Integrating applications into Azure AD

One of the most powerful but often overlooked features (at least in SMBs) is the ability to use Azure AD to publish applications to your users. Users can go to myapps.microsoft.com (or office.com) and see tiles for all the applications they have access to. But there’s more to the story. Say, for example, you have a shared corporate Twitter account that a few executives and marketing staff should have access to. Instead of sharing a password among them all, and having to remember to reset it if someone leaves the organization, you can create a security group in AAD, add the relevant users, and link Twitter to the group; they’ll automatically have access without knowing the account’s password. There are many more actions you can take here to simplify access and secure the management of applications; here’s more information.

Azure AD Connect

If you’re synchronizing accounts from Active Directory to Azure Active Directory (AAD), check the configuration of AAD Connect and make sure you’re not replicating an entire domain or forest to AAD. There’s no reason that service accounts etc. should be exposed in both directories; start the AAD Connect wizard on the server where it’s installed and double-check that only the relevant OUs are synchronized. One other thing to note here is that any machine running Azure AD Connect should be treated with the same care (in terms of security) as a domain controller. This is because AAD Connect requires the same level of access as AD itself and can read password hashes. Making sure security best practices for access, patching, etc. are followed to the letter on the system running AAD Connect is critically important.

The M365 Identity Checklist

Work through the Identity checklist.
 
Enable MFA for administrators
Enable MFA for users
Create cloud-only administrator accounts for privileged users / occasional administrators
Disable app passwords
(Configure trusted IPs)
Disable text message MFA
Disable phone call MFA
Remember MFA trusted devices 90 days
Train staff in using MFA correctly
Use Windows Hello where possible
Use FIDO2 / 2FA keys where possible
Investigate legacy authentication protocol usage in AAD Sign-in logs
Block legacy authentication with CA Policy
Block legacy authentication in M365 Admin Center
Create two Break Glass accounts and exempt them from MFA, CA policies, etc.
Configure alerting if a Break glass account is used
Enable Security Defaults in AAD (consider the limitations)
Enable PIM (AAD Premium P2) for all admin users
Add organization-specific words to Password protection
Deploy Password protection in AD on-premises
CA Policy Require MFA for admins
CA Policy Require MFA for users
CA Policy Require MFA for Azure management
CA Policy Block legacy authentication
CA Policy Require compliant or Hybrid AAD joined device for admins
CA Policy Require compliant or Hybrid AAD joined device for users
CA Policy Block access to M365 from outside your country
Require MFA for risky sign-ins (requires AAD Premium P2)
Require password change for high-risk users (requires AAD Premium P2)
Create custom branding logos and text in Azure AD
Enable and configure Self Service Password Reset, including password writeback
Check that Unified Auditing is enabled
Define audit retention policies (90 or 365 days)
Integrate applications into Azure AD

Download the Excel template to use with your team >

Go Further than Identity to Protect your M365 Tenant

There you have it: the most important steps to take to make sure your users’ identities are kept secure, and therefore that your tenant and its data are safeguarded too. Keen to learn and do more?

The Microsoft 365 Security Checklist has another nine chapters of security recommendations each with its own checklist for:

  • Email
  • Teams
  • SharePoint
  • Applications
  • Endpoint Manager
  • Information Protection
  • Secure Score
  • Business Premium
  • Microsoft 365 Enterprise E5

Download the full Microsoft 365 Security Checklist eBook and checklist template >

Source :
https://www.altaro.com/microsoft-365/identity-checklist-m365-tenant/

Password Security and the Internet of Things (IoT)

The Internet of Things (IoT) is here, and we’re using it for everything from getting instant answers to random trivia questions to screening visitors at the door. According to Gartner, we were expected to use more than 25 billion internet-connected devices by the end of 2021. But as our digital lives have become more convenient, we might not yet have considered the risks involved with using IoT devices.

How can you keep yourself secure in today’s IoT world, where hackers aim to outsmart your smart home? First, we’ll look at how hackers infiltrate the IoT, and then at what you can do right now to make sure the IoT is working for you, not against you.

How hackers are infiltrating the Internet of Things

While we’ve become comfortable asking voice assistants to give us the weather forecast while we prep our dinners, hackers have been figuring out how to commandeer our IoT devices for cyber attacks. Here are just a few examples of how cyber criminals are already infiltrating the IoT.

Gaining access to and control of your camera

Have you ever seen someone with a sticker covering the camera on their laptop or smartphone? There’s a reason for that. Hackers have been known to gain access to these cameras and spy on people. This has become an even more serious problem in recent years, as people have been relying on videoconferencing to safely connect with friends and family, participate in virtual learning, and attend telehealth appointments during the pandemic. Cameras now often come with an indicator light that lets you know whether they’re being used. It’s a helpful protective measure, but not a failsafe one.

Using voice assistants to obtain sensitive information

According to Statista, 132 million Americans used a digital voice assistant once a month in 2021. Like any IoT gadget, however, they can be vulnerable to attack. According to Ars Technica, academic researchers have discovered that the Amazon Echo can be forced to take commands from itself, which opens the door to major mischief in a smart home. Once an attacker has compromised an Echo, they can use it to unlock doors, make phone calls and unauthorized purchases, and control any smart home appliances that the Echo manages.

Many bad actors prefer the quiet approach, however, slipping in undetected and stealing information. They can piggyback on a voice assistant’s privileged access to a victim’s online accounts or other IoT gadgets and make off with any sensitive information they desire. With the victim being none the wiser, the attackers can use that information to commit identity fraud or stage even more ambitious cyber crimes.

Hacking your network and launching a ransomware attack

Any device that is connected to the internet, whether it’s a smart security system or even a smart fridge, can be used in a cyber attack. Bad actors know that most people aren’t keeping their IoT gadgets’ software up to date in the same way they do their computers and smartphones, so they take advantage of that false sense of security. Once cyber criminals have gained access to an IoT device, they can go after other devices on the same network. (This is because most home networks are designed to trust devices that are already connected to them.) When these malicious actors are ready, they can launch a ransomware attack that brings your entire digital life to a halt – unless you agree to fork over a hefty sum in bitcoin, that is.

Using bots to launch a DDoS attack

Although most people never notice it, hackers can and do infect IoT devices with malware en masse, gaining control over them in the process. Having turned these zombie IoT devices into bots, the hackers then collectively use them to stage what’s called a botnet attack on their target of choice. This form of assault is especially popular for launching distributed denial of service (DDoS) attacks, in which all the bots in a botnet collectively flood a target with network requests until it buckles and goes offline.

How you can keep your Internet of Things gadgets safe from hackers

So how can you protect your IoT devices from these determined hackers? Fortunately, you can take back control by becoming just a little more cyber smart. Here are a few ways to keep your IoT gadgets safe from hackers:

  • Never use the default settings on your IoT devices. Although IoT devices are designed to be plug-and-play so you can start enjoying them right away, their default settings are often not nearly as secure as they should be. With that in mind, set up a unique username and strong password combination before you start using any new IoT technology. While you’re at it, see if there’s an option to encrypt the traffic to and from your IoT device. If there is, turn it on.
  • Keep your IoT software up to date. Chances are, you regularly install the latest software updates on your computer and phone. Hackers are counting on you to leave your IoT gadgets unpatched, running outdated software with vulnerabilities they can exploit, so be sure to keep the software on your IoT devices up to date as well.
  • Practice good password hygiene. We all slip into bad password habits from time to time – it’s only human – but they put our IoT security at risk. With this in mind, avoid re-using passwords and be sure to set unique, strong passwords on each of your IoT devices. Update those passwords from time to time, too. Don’t store your passwords in a browser, and don’t share them via email. A password manager can help you securely store and share your passwords, so hackers never have a chance to snatch them.
  • Use secure, password-protected WiFi. Cyber criminals are notorious for sneaking onto open, insecure WiFi networks. Once they’re connected, they can spy on any internet activity that happens over those networks, steal login credentials, and launch cyber attacks if they feel like it. For this reason, make sure that you and your IoT devices only use secure, password-protected WiFi.
  • Use multi-factor authentication as an extra layer of protection. Multi-factor authentication (MFA) gives you extra security on top of all the other measures mentioned above. It asks you to provide one more credential, or factor, in addition to your password to confirm you are who you say you are. If you have MFA enabled and a hacker tries to log in as you, you’ll get a notification that a login attempt is in progress. Whenever you have the option to enable MFA on an account or technology, take advantage of it.
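
As a practical aid to the password-hygiene advice above, a unique, strong password per device can be generated with Python’s standard-library `secrets` module (the device names here are illustrative):

```python
# Sketch: generate a unique, strong password per IoT device with the
# standard library's `secrets` module. Device names are illustrative.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    """Cryptographically random password; far more entropy than a
    typical 8-character complexity-rule password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per device, ready to store in a password manager.
passwords = {name: generate_password()
             for name in ("camera", "doorbell", "thermostat")}
```

Generate one per device, paste each into the device’s settings, and let your password manager remember them so you never have to.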

Protect your Internet of Things devices with smart password security

The IoT is making our lives incredibly convenient, but that convenience can be a little too seductive at times. It’s easy to forget that smart home devices, harmless-looking and helpful as they are, can be targeted in cyber attacks just like our computers and phones. Hackers are counting on you to leave your IoT gadgets unprotected so they can use them to launch damaging attacks. By following these smart IoT security tips, you can have the best of both worlds, enjoying your smart life and better peace of mind at the same time.

Learn how LastPass Premium helps you strengthen your password security.

Source :
https://blog.lastpass.com/2022/08/password-security-and-the-iot/