Best WordPress Database Cleanup Methods and Plugins to Boost Site Speed (2023 Guide)

Last updated on Apr 6th, 2023 | 14 min

As your website grows and you create more content, your WordPress database can become cluttered with unnecessary data, which can slow down your site and create security risks.

In this guide, we will discuss the best WordPress database cleanup methods and plugins to boost site speed and performance.


Why WordPress database cleanup is necessary

Your WordPress database stores all your website data, including posts, pages, comments, media files, and user information. As your website grows, your database can become bloated with unnecessary data, which can:

  • Lead to slower page load times;
  • Use up more server resources;
  • Slow down backups and maintenance tasks;
  • Impact caching effectiveness;
  • Slow down search functionality.

Maintaining a clean WordPress database is an effective step not only toward better website speed and performance but also:

  • Improved website security
  • Improved website backup and recovery
  • Reduced disk space usage
  • Less strain on your servers


8 manual cleanup techniques for a faster WordPress database (via phpMyAdmin)

Before performing any database cleanup, it’s essential to perform a backup. This ensures you have a copy of your website’s data in case anything goes wrong during the cleanup process.

To create a backup, you can use a plugin or access cPanel and use the built-in backup tool. It’s important to save the backup in a secure location (such as cloud storage) so you can restore your website easily.

Generally, the manual approach requires access to your WordPress dashboard and phpMyAdmin – a web-based application for managing MySQL databases.

To access the phpMyAdmin interface, log in from your web hosting control panel or through a plugin like WP phpMyAdmin. Please note that all SQL commands shared below use the standard "wp_" table prefix. Change it to match the one used by your database.

SQL tab

Note: Manual WordPress database cleanup requires some technical knowledge. If you don’t feel confident running SQL queries, we recommend you go with any of the tried-and-tested plugins. Jump to the list.


1. Optimize database tables

In general, you can optimize all database tables, but some may require more attention than others. Here are some tips on how to identify the tables that need optimization:

  • Look for tables that are frequently updated: Tables such as wp_posts, wp_comments, and wp_usermeta are good candidates for optimization.
  • Check for tables with large sizes: Large tables, such as wp_options and wp_postmeta, may benefit from optimization to improve website performance.
  • Identify tables with overhead: Overhead is the amount of space in a table that is used but not required. Tables with a high amount of overhead may need optimization.
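
To see which tables are large or carry overhead, you can query the table status from the SQL tab; the Data_free column reports reclaimable overhead in bytes:

SHOW TABLE STATUS LIKE 'wp_%';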

To optimize tables, select your database from the list on the left in phpMyAdmin, then check the box next to each table you want to optimize.

From the “With selected” drop-down menu, select “Optimize table.” Click on the “Go” button to start the optimization process.

Post overhead
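
If you’d rather work from the SQL tab, the same cleanup can be run as a single query; a sketch using the default table names:

OPTIMIZE TABLE wp_posts, wp_postmeta, wp_comments, wp_options;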



2. Delete unused data

Unused data can accumulate over time, leading to a cluttered database. This includes unused themes and plugins, media files, posts and pages, and tags.

Unused Themes and Plugins
Theme and plugin files live on your server’s file system, so SQL alone won’t delete them; it can only remove the leftover database entries they leave behind. To clear leftover theme option rows, run the following SQL command in phpMyAdmin:

DELETE FROM wp_options WHERE option_name LIKE 'template_%' OR option_name LIKE 'stylesheet_%';

Caution: a command often shared for “unused plugins,” DELETE FROM wp_options WHERE option_name = 'active_plugins';, does not delete unused plugins. It deactivates every active plugin on your site, so avoid it.

For a complete removal, log in to your WordPress dashboard and select the “Appearance” or “Plugins” options for inactive themes and plugins, respectively.

WordPress dashboard

From there, you can select and delete them (in the example below, we’re deleting an inactive WordPress theme).

Delete WordPress theme

To delete inactive WordPress plugins completely, we suggest you follow our latest step-by-step guide.


Unused Media Files
To delete unused media files:

  1. Log into phpMyAdmin and select your WordPress database from the list of databases.
  2. Click on the “wp_posts” table to open it.
  3. Look for rows with a “post_type” value of “attachment.” These are media files.

    Attachment post type
     
  4. To find unused media files, look for rows where the “post_parent” column is set to a value of “0”. This means the media file is not attached to any post or page on your site.
  5. Delete the row associated with the file.

To remove unattached media files in one hit, use the following command in the SQL tab. Review the list from step 4 first: an unattached file can still be referenced in your content (as a logo or in a gallery, for example), so unattached doesn’t always mean unused.

DELETE FROM wp_posts WHERE post_type = 'attachment' AND post_parent = 0;


Unused Posts and Pages

To delete unused posts and pages via phpMyAdmin, you can use the following SQL commands:

  1. For unused posts:

         DELETE FROM wp_posts WHERE post_type = 'post' AND post_status = 'draft';

  2. For unused pages:

         DELETE FROM wp_posts WHERE post_type = 'page' AND post_status = 'draft';
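
If you have already reviewed the trash and want to purge it in the same pass, here’s a sketch; note that this is irreversible:

         DELETE FROM wp_posts WHERE post_status = 'trash';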

Alternatively, you can delete unused posts and pages inside the WordPress dashboard in a few simple steps:

  1. Log in to your WordPress dashboard.
  2. Go to the “Posts” or “Pages” section, depending on which content type you want to remove.
  3. Look for any posts or pages that you no longer need and want to delete, and click on the checkbox next to them.
  4. Select “Move to Trash” from the “Bulk Actions” drop-down menu and click on the “Apply” button.
  5. To permanently delete the posts or pages, go to the “Trash” section and perform a bulk action “Delete Permanently,” and confirm with “Apply.”
Bulk delete post


Unused Tags
To remove unused tags in phpMyAdmin, you can use a combination of SQL queries.

Run the following query to identify all tags that are not associated with any posts or content:

SELECT * FROM wp_terms AS t 
LEFT JOIN wp_term_taxonomy AS tt ON t.term_id = tt.term_id 
WHERE tt.taxonomy = 'post_tag' 
AND tt.count = 0;

This query will list all the tags that are not associated with any posts or content. Make sure that the list contains only the tags you want to delete.

To delete these tags, run the following query. It filters on the post_tag taxonomy (so empty categories and other terms are untouched) and removes the matching wp_term_taxonomy rows as well, leaving no orphaned entries behind:

DELETE t, tt FROM wp_terms AS t
INNER JOIN wp_term_taxonomy AS tt ON t.term_id = tt.term_id
WHERE tt.taxonomy = 'post_tag' AND tt.count = 0;

This query will delete all the tags that have a count of zero, i.e., those that are not associated with any posts or content.


Removing unused tags can also be done through your WordPress dashboard.

  1. Go to the “Posts” section and click “Tags” from the menu on the left-hand side.
  2. Look for any tags that you no longer need and click on the checkbox next to the tag that you want to delete.
  3. Select “Delete” from the “Bulk Actions” drop-down menu.
  4. Click the “Apply” button to delete the selected tag.
Bulk delete tags

If the tag you want to delete is still associated with any posts, you will need to remove it from those posts first. To do this: 

  1. Click on the tag you want to remove. 
  2. Check the list of posts that use the tag.
  3. Click on each post that uses the tag and remove the tag by clicking on the “X.” 
     
Remove tag from post

Once the tag is removed from all the associated posts, go back to the “Tags” section and repeat the steps from above.


3. Remove spam comments

In phpMyAdmin, run the following query to identify all comments that have been marked as spam:

SELECT * FROM wp_comments WHERE comment_approved = 'spam';

Once you have confirmed the list, you can delete these comments by running the following query:

DELETE FROM wp_comments WHERE comment_approved = 'spam';
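
Deleted comments can leave orphaned rows behind in wp_commentmeta. A follow-up sketch to clear metadata whose parent comment no longer exists:

DELETE FROM wp_commentmeta WHERE comment_id NOT IN (SELECT comment_id FROM wp_comments);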

To remove spam comments via the WordPress dashboard, follow these steps:

  1. Go to the “Comments” section and select “Spam” from the “All Comments” drop-down menu.
  2. Look for any spam comments you want to delete and click on the checkbox next to them.
  3. Select “Delete Permanently” from the “Bulk Actions” drop-down menu.
  4. Click on the “Apply” button to confirm the removal.
Delete spam comment


4. Remove unapproved comments

Similarly to the spam comments, run the following query to identify all comments that have not been approved. Note the quotes around the 0: comment_approved is a text column, and comparing it to a bare number would also match non-numeric values such as ‘spam’:

SELECT * FROM wp_comments WHERE comment_approved = '0';

Then, you can delete these comments by running the following query:

DELETE FROM wp_comments WHERE comment_approved = '0';

Alternatively, navigate to your WordPress dashboard. In the “Comments” section, select “Pending” or “Unapproved” from the “All Comments” drop-down menu.

  1. Look for any unapproved comments you want to delete and click on the checkbox.
  2. Select “Move to Trash” from the “Bulk Actions” drop-down menu.
  3. Click the “Apply” button to move the selected unapproved comments to the trash.
Delete unapproved comment

To permanently delete the unapproved comments, go to the “Trash” section and perform the bulk action “Delete Permanently” for all the unapproved comments you want gone.

Click on the “Apply” button to finish the process.


5. Remove post revisions

To delete post revisions, enter the following command in the SQL query box and click Go to execute:

DELETE FROM wp_posts WHERE post_type = 'revision';
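
Deleting revisions this way can strand related rows in wp_postmeta. A follow-up sketch to remove metadata whose parent post no longer exists:

DELETE pm FROM wp_postmeta AS pm
LEFT JOIN wp_posts AS p ON p.ID = pm.post_id
WHERE p.ID IS NULL;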

For more granular control, the safest alternative is to use a plugin from our recommendations list.


6. Remove old shortcodes

Identifying old shortcodes in WordPress can be challenging, especially if you have a large number of pages or posts on your site. However, there are ways to identify which shortcodes are old and no longer used on your site:

  • Check the theme and plugin documentation: If you are using a theme or a plugin that came with shortcodes, check their documentation to see if any of the shortcodes have been deprecated.

Note: We recommend using a plugin if you have a large number of posts. Going through each post to identify and delete a shortcode can prove more time-consuming than expected.
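
You can also search the database directly for posts that still contain a given shortcode. A sketch, assuming a hypothetical [old_shortcode] tag; replace it with the shortcode you’re hunting for:

SELECT ID, post_title FROM wp_posts WHERE post_content LIKE '%[old_shortcode%';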

Once you identify which shortcodes are no longer needed, go to your WordPress dashboard. 

  1. Go to the “Pages” or “Posts” section and select the page or post where the old shortcode is used.
  2. Switch to the “Text” or “HTML” editor mode for the page or post and search for the old shortcode.
  3. Once you have located the old shortcode, delete it and save the changes.
  4. Repeat this process for each page or post where the old shortcode is used.
Delete shortcode from post


7. Remove pingbacks and trackbacks

Pingbacks and trackbacks are two methods that WordPress uses to notify you when another website links to your content. Both methods are designed to help you manage your incoming links and engage with other bloggers and website owners.

However, they can also be a source of spam and unwanted notifications, which is why it’s important to manage them properly or disable them altogether.

In phpMyAdmin, run the following query to identify all comments that have the comment type of “pingback” or “trackback”:

SELECT * FROM wp_comments WHERE comment_type = 'pingback' OR comment_type = 'trackback';

Delete these comments by running the following query:

DELETE FROM wp_comments WHERE comment_type = 'pingback' OR comment_type = 'trackback';

You could also use the built-in comment management system in your WordPress dashboard:

  1. Click on “Comments” in the left-hand menu.
  2. You will see a list of comments, including pingbacks and trackbacks. Check the boxes next to the comments you want to delete.
  3. Click the “Bulk Actions” drop-down menu and select “Move to Trash.”
  4. Click on the “Apply” button.


8. Remove transients

Transients are temporary pieces of data used to cache information and speed up your website’s performance. However, if they’re not cleaned out on a regular basis, expired transients can pile up in the wp_options table and start harming your speed instead. Here’s how to remove transients in a WordPress database cleanup:

1. via phpMyAdmin
Log into your phpMyAdmin. Then, select your WordPress database and click on the SQL tab. In the SQL window, enter the following command to delete all transients:

DELETE FROM wp_options WHERE option_name LIKE '_transient_%';
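
The pattern above covers regular transients (including their _transient_timeout_ rows) but not site-wide transients. On a single-site install those also live in wp_options, so a follow-up sketch looks like this; on multisite, they’re stored in the wp_sitemeta table instead:

DELETE FROM wp_options WHERE option_name LIKE '_site_transient_%';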

2. via WP-CLI
If you’re comfortable using the command line, log in to your server and open a terminal window. Then, navigate to your WordPress directory and run the following command to delete all transients:

wp transient delete --all

Note: When you remove transients, they will be recreated the next time they are needed. Therefore, it’s a good idea to regularly clean up your database to maintain a fast and efficient website.


5 best plugins for WordPress database cleanup

Advantages of using plugins for database cleanup

Manually cleaning up your database can be a time-consuming and tedious task, especially if you’re not familiar with SQL queries.

Fortunately, there are many plugins available that can automate the process for you. They can help you:

  • quickly identify and remove unnecessary data;
  • reduce the risk of errors;
  • ensure your database is clean and optimized.


Is it safe to use plugins for database cleanup?

Using a plugin for database cleanup is generally safe as long as you choose a reputable plugin and follow best practices. Flip through our list of the best WordPress plugins for database optimization.


1. WP-Optimize

WP-Optimize is a popular plugin that can remove unnecessary data, such as old post revisions, spam comments, and unused tags. It can also optimize your database tables and remove expired transients. One of the standout features of WP-Optimize is the ability to schedule automatic cleanups, so you don’t have to remember to do it manually. The plugin is easy to use and has a user-friendly interface.

WP Optimize

Pros:

  • Scheduled automatic optimization and cleanup
  • Easy to use interface
  • Comprehensive cleanup options

Cons:

  • Some features are only available in the pro version
  • May not be suitable for large databases


2. WP Sweep

WP Sweep can remove unused, orphaned, and duplicated data, as well as optimize your database tables. The plugin is lightweight and easy to use, with a simple interface. WP Sweep also includes a preview function, so you can see what data will be deleted before you confirm the cleanup.

WP Sweep

Pros:

  • Lightweight and easy to use
  • Preview function to see what data will be deleted
  • Ability to clean up specific types of data

Cons:

  • May not be suitable for large databases
  • Some features are only available in the pro version


3. Advanced Database Cleaner

Advanced Database Cleaner comes in both free and paid versions, with the paid version offering more features and functionality. The plugin is designed to help you clean up and optimize your database by removing unnecessary data, such as post revisions, spam comments, and unused tables.

Advanced database cleanup

One of the standout features of Advanced Database Cleaner is its ability to schedule automatic cleanups. The plugin also allows you to create custom queries to clean up specific parts of your database.

Pros:

  • Free version available
  • Schedule automatic cleanups
  • Create custom queries for targeted cleaning

Cons:

  • Some features only available in the pro version
  • Can be more complex to use than some other plugins


4. WP DBManager

WP DBManager offers a range of features, including database backup and optimization options, as well as the ability to repair and restore your database. 

WP DBManager

One of the standout features of WP DBManager is its easy-to-use interface, which makes it simple to perform database maintenance tasks. The plugin also includes a range of optimization options to help you speed up your site, such as removing spam comments and cleaning up post revisions.

Pros:

  • Backup and optimization options
  • Easy-to-use interface
  • Optimization options to speed up your site

Cons:

  • Some users have reported issues with the backup functionality
  • Not as customizable as some other plugins
     

5. WP Reset

WP Reset is a powerful WordPress plugin that helps users quickly and easily reset their website to its default settings. Users can take a snapshot of their website to quickly restore it to a specific previous state, and can also rely on its efficient database cleanup tools and emergency recovery script.

WP Reset

Pros:

  • Easy to use and user-friendly interface perfect for non-technical users
  • Time-saving website resetting
  • Snapshots protect against losing important data during the reset process

Cons:

  • Some advanced features are only available in the paid version

Best practices for choosing a plugin for WordPress database cleanup

When choosing a plugin for database cleanup, there are a few things to keep in mind.

First, look for a plugin that is regularly updated and has a large user base. This can help ensure that the plugin is compatible with the latest version of WordPress and is free from bugs and security vulnerabilities.

Second, look for a plugin that has a good reputation and positive reviews.

Finally, choose a plugin that meets your specific needs. Some plugins are designed to clean up specific types of data, while others offer more comprehensive cleaning and optimization options.


Tips for maintaining a clean and fast database

Regularly Scheduled Cleanups

One of the best ways to ensure your WordPress database remains clean is to schedule regular cleanups. Depending on the size of your website and how frequently you update content, you may need to schedule cleanups more frequently.

For websites with a lot of traffic and frequent content updates, it’s recommended to schedule cleanups on a weekly or bi-weekly basis. For smaller websites, a monthly or bi-monthly cleanup may suffice.


Monitor Database Size

It’s important to keep an eye on your database size to determine how frequently you need to schedule cleanups. You can use plugins like WP-Optimize, WP-Sweep, or Advanced Database Cleaner to monitor your database size and set up automatic cleanups. Alternatively, you can monitor your database size using cPanel.


Frequently Asked Questions

Why is my WP database so big?

Your WordPress database may be big because of accumulated data over time. This includes post revisions, spam comments, and unused data like media files, themes, and plugins. Additionally, some plugins may create their own tables in the database, which can also contribute to its size.


Does WordPress store everything in the database?

No, not everything is stored in the database. WordPress stores content like posts, pages, and comments in the database, but media files like images and videos are stored on your server’s file system. Plugins and themes may also store data outside of the database.


What happens if I accidentally delete important data from my database?

If you accidentally delete important data from your database, it may result in errors or a broken website. This is why it’s vital to always back up your database before performing any cleanup. You can then use the backup to restore any accidentally deleted data.


What happens if I delete my WordPress database?

Deleting your WordPress database will result in a broken website. Without the database, WordPress won’t be able to access any content, comments, or settings, and your website won’t function properly.


Take it away

A clean WordPress database is essential for optimal website speed, performance, and security.

There are several cleanup techniques you can use to keep your database lean: back it up first, then use phpMyAdmin to optimize your database tables and remove unused data like themes, plugins, media files, posts, and tags.

There are also several plugins available that can help you with WordPress database cleanup, including WP-Optimize, WP-Sweep, and Advanced Database Cleaner. When choosing a plugin, it’s important to consider its features, ease of use, and reliability.

By scheduling regular cleanups and monitoring your database size, you can ensure that your website remains fast, secure, and optimized for optimal user experience. Prioritize a database cleanup today and reap the benefits tomorrow.

Source :
https://nitropack.io/blog/post/wordpress-database-cleanup-guide

Back/Forward Cache: What It Is and How to Use It to Serve Content Immediately

Last updated on Feb 20th, 2023 | 7 min

Imagine this…

A user is browsing your website. They go to your product page. Then to your pricing page. Then back to your product page as they forgot to check if you offer that specific feature. Finally, they navigate forward to your pricing page and finish their order. 

As it turns out, it’s a pretty common scenario. 

Chrome usage data shows that 1 in 10 (10%) navigations on desktop and 1 in 5 (20%) on mobile are either back or forward.

Truly spectacular numbers. 

But…

The more important thing is – how can you guarantee that after navigating back and forward to your pages, they load immediately? 

Enter back/forward cache (or bfcache).

In the following lines, you will learn everything about bfcache and how to implement it to improve speed and perceived performance.

Spoiler alert: it’s easier than you think. 


What is the back/forward cache?

Bfcache is a feature that allows browsers to create and store a snapshot of an already visited web page in memory, so the next time a visitor navigates back or forward to it, the browser can display it immediately.

The whole behind-the-scenes process looks like this…

When a visitor requests to load a specific page, the browser goes through the following process:

  1. Establishes a connection with the server
  2. Downloads and parses the information
  3. Constructs the Document Object Model (DOM) and CSS Object Model (CSSOM)
  4. Renders the content
  5. Makes the page interactive
Browser loading a web page


If the back/forward cache isn’t enabled for the specific page, it means that every time you leave it and then navigate back to it, the browser will have to go through the whole 5-step process. 

And that takes time. 

On the contrary, with bfcache enabled, the browser “freezes” the page with all of its resources, so the next time you re-visit it, the browser won’t need to waste time rebuilding and will be able to load it instantly. 

The following video from Addy Osmani illustrates best how fast a web page loads with and without bfcache:

Video: https://www.youtube.com/watch?v=_me7_7C6Drs


As you can see from the video, the loading time is almost non-existent. On top of that, bfcache will reduce your visitors’ data usage as they won’t have to re-download the same resources repeatedly. 

And while all of these benefits sound incredible, a certain question might still bother you:

I already have an HTTP cache set up for my website. Do I need bfcache as well? 

Here’s the answer…
 

What is the difference between bfcache and HTTP cache?

Put simply, bfcache is a snapshot of the entire page stored in memory (including the JavaScript heap), whereas the HTTP cache stores only previously requested resources. 

And as Google claims:

“…it’s quite rare that all requests required to load a page can be fulfilled from the HTTP cache…”


Not all resources are allowed to be cached in the HTTP Cache. For instance, some sites don’t cache the HTML document itself, but only the resources. As a result, every time a visitor loads a specific page, the browser needs to re-download the document. 

Another reason back/forward cache can be faster is the difference between in-memory and disk caching. 

It’s true that loading resources from the disk cache (HTTP cache) can be much faster than requesting them over the network. But there’s an extra boost from not even having to read them from disk, instead fetching the entire page directly from the browser’s memory. 
 

What browsers support the back/forward cache?

All of them – Chrome, Safari, Firefox, Opera, and Edge:

Bfcache browser support

The truth is, back/forward cache isn’t a new concept. Safari added support for this feature back in 2009, and Firefox has supported it since version 1.5.

Edge and Chrome were the latest to join the party, with the former introducing bfcache in 2020 and the latter following a year later. 

Now that you know that all major browsers support it, let’s see how you can check if your page is served from the bfcache. 


How can I check if my site can be served from the back/forward cache?

The best thing about back/forward cache is that it just works in the majority of cases because browsers automatically do all the work for you.

In some cases, however, your pages will not be restored by the bfcache. 

The easiest way to check if everything works correctly is to run a PageSpeed Insights audit. 


Using Google PageSpeed Insights

Since the release of Lighthouse v10, there’s been a new PSI audit called “Page prevented back/forward cache restoration.” 

The audit will fail if the page you tested cannot be restored from bfcache for any reason. When you click on the warning, a drop-down menu opens with a list of reasons and the frame(s) that caused the issue.

Failure reasons are separated into three categories:

  • Actionable: You can fix these issues to enable caching.
  • Pending Support: Chrome doesn’t support these features yet, so they prevent caching. However, once supported, Chrome removes these limitations.
  • Not Actionable: You can’t fix these issues on this page. Something that is outside the page’s control prevents caching.
Page prevented back/forward cache restoration warning

Using Chrome DevTools

Another option is to use Chrome’s Developer Tools, following these steps:

1. Open Chrome DevTools on the page you want to test:

How to open Chrome DevTools

2. Navigate to Application > Cache > Back/forward cache:

How to open bfcache settings in Chrome DevTools

3. Click Test back/forward cache

Test back/forward cache in Chrome Devtools

If bfcache works on your page, you’ll see this message:

Page eligible for bfcache

If not, you will see a list of issues:

Page ineligible for bfcache


Now that you know how to test it, let’s see how you can optimize your pages for bfcache and fix PSI’s warning. 
 

How to fix the “Page prevented back/forward cache restoration” warning in PageSpeed Insights

Even if you don’t see the warning, meaning your page is eligible for bfcache, it’s good to know that it won’t stay there indefinitely.

That’s why it’s crucial to know how to optimize for back/forward cache.

Here are some best practices you can use to make it as likely as possible that browsers bfcache your pages:

1. Avoid using the unload event 

The most surefire way to optimize for bfcache is to avoid using the unload event at all costs. 

The unload event fires when the user navigates away from the page (by clicking on a link, submitting a form, closing the browser window, etc.).

On desktop, Chrome and Firefox consider a page ineligible for bfcache if it uses the unload event. Safari, on the other hand, will cache some pages that fire the unload event listener, but to reduce potential breakage, it will not run it when a user is navigating away.

On mobile, Chrome and Safari will cache a page that uses the event, but Firefox won’t. 

In general, avoid using the unload event and instead go for the pagehide event. Otherwise, you’re risking slowing down your site, and your code won’t even run most of the time in Chrome and Safari. 

Also, there’s an ongoing discussion among browser vendors about deprecating unload altogether.
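
Here’s a minimal sketch of the swap; pagehide fires in every case where unload would, and its event.persisted flag tells you whether the page is a bfcache candidate:

// Avoid: an unload listener can make the page ineligible for bfcache.
// window.addEventListener('unload', () => { /* cleanup */ });

// Prefer: pagehide fires whenever the user navigates away.
window.addEventListener('pagehide', (event) => {
  if (event.persisted) {
    // The page may be entering the back/forward cache.
  }
  // Run cleanup here, e.g., send analytics with navigator.sendBeacon().
});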
 

2. Be careful with adding beforeunload listeners

It’s ok to use beforeunload events in Chrome and Safari, but keep in mind that Firefox will flag your pages as ineligible for bfcache. 

However, there are legitimate use cases for the beforeunload event, unlike the unload event. One example is when you must caution the user about losing unsaved changes if they exit the page. It’s advisable to attach beforeunload event listeners only when there are unsaved changes and to remove them promptly after saving those changes.
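
A sketch of that attach-only-when-needed pattern; markUnsaved() and markSaved() are hypothetical hooks your app would call as its form state changes:

const beforeUnloadListener = (event) => {
  // Triggers the browser's confirmation dialog before leaving the page.
  event.preventDefault();
  return (event.returnValue = 'You have unsaved changes.');
};

function markUnsaved() {
  window.addEventListener('beforeunload', beforeUnloadListener);
}

function markSaved() {
  window.removeEventListener('beforeunload', beforeUnloadListener);
}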
 

3. Use Cache-Control: no-store only with information-sensitive pages

If a page contains sensitive information and caching is inappropriate, then Cache-Control: no-store should be used to prevent it from being eligible for bfcache. On the other hand, if a page doesn’t contain sensitive information and always requires up-to-date content, Cache-Control: no-cache or Cache-Control: max-age=0 can be used. These directives prompt the browser to revalidate the content before serving it and don’t impact a page’s eligibility for bfcache.
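
For example, a page that must always be revalidated but should remain bfcache-eligible could respond with:

Cache-Control: no-cache

while a page holding sensitive information would respond with:

Cache-Control: no-store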
 

4. Update sensitive data after bfcache restore

The bfcache isn’t supposed to work for pages that contain sensitive data. For instance, when a user signs out of a website on a public computer, the next user shouldn’t be able to sign back in just by hitting the back button. 

To achieve that, it’s a good practice to update the page after a pageshow event if event.persisted is true.

Here’s a sketch adapted from the example on web.dev:

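window.addEventListener('pageshow', (event) => {
  if (event.persisted) {
    // The page was restored from the bfcache:
    // refresh any sensitive or time-critical data here.
  }
});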


5. Avoid window.opener references

Whenever possible, use rel="noopener" instead of window.opener references. If your site opens windows and controls them through window.postMessage() or direct window references, neither the opened window nor the opener will be eligible for bfcache.

6. Always close connections and disconnect observers during the pagehide and freeze events

When a page is stored in the bfcache, all of its JavaScript tasks are paused, and they resume as soon as the page is taken out of the cache.

If these tasks only access APIs isolated to the current page, there won’t be any problems. 

However, if these tasks are connected to APIs that are also accessible from other pages in the same origin, then they may prevent code in other tabs from running properly.

If that’s the case, some browsers will not put the page in bfcache. Common examples include pages with an open IndexedDB connection, an in-progress fetch() or XMLHttpRequest, or an open WebSocket or WebRTC connection.

The best thing you can do is to permanently close connections and remove or disconnect observers during pagehide or freeze events if your page uses any of these APIs. By doing this, the browser can cache the page without worrying about other open tabs being affected.
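
Here’s a sketch of that pattern, assuming a hypothetical WebSocket connection held in a module-level socket variable:

let socket = new WebSocket('wss://example.com/updates');

window.addEventListener('pagehide', () => {
  // Close the shared connection so the browser can safely bfcache the page.
  if (socket) {
    socket.close();
    socket = null;
  }
});

window.addEventListener('pageshow', (event) => {
  if (event.persisted && !socket) {
    // Restored from the bfcache: re-open the connection.
    socket = new WebSocket('wss://example.com/updates');
  }
});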
 

Key Takeaways

For something handled by browsers, we’ve covered a lot of information. 

So here are the key takeaways from this article:

  • Bfcache allows browsers to create and store a snapshot of an already visited web page in memory, making subsequent back/forward navigations load instantly. 
  • The benefits of your page being served from the bfcache include reduced data usage, better perceived performance, improved Core Web Vitals, and user experience.
  • The difference between bfcache and HTTP cache is that the former stores a snapshot of the whole page while the latter stores only previously requested resources. Also, with bfcache the content is restored from the browser’s memory, while with the HTTP cache it is read from disk. 
  • All major browsers support back/forward cache. 
  • You can check if a particular page is eligible for back/forward caching using PageSpeed Insights or Chrome DevTools.
  • To optimize your pages for bfcache and fix the “Page prevented back/forward cache restoration” warning, do the following:
  • Avoid using the unload event
  • Be careful with adding beforeunload listeners
  • Use Cache-Control: no-store only with information-sensitive pages
  • Avoid window.opener references
  • Always close connections and disconnect observers during the pagehide and freeze event

As always, don’t forget to test. Back/forward cache is a great feature, but remember that not every page should be eligible for it. Your visitors’ experience should always be a first priority. 

Source :
https://nitropack.io/blog/post/back-forward-cache

Google Introduces Passwordless Secure Sign-In with Passkeys for Google Accounts

May 03, 2023 Ravie Lakshmanan

Almost five months after Google added support for passkeys to its Chrome browser, the tech giant has begun rolling out the passwordless solution across Google Accounts on all platforms.

Passkeys, backed by the FIDO Alliance, are a more secure way to sign in to apps and websites without having to use a traditional password. Users authenticate by simply unlocking their computer or mobile device with their biometrics (e.g., fingerprint or facial recognition) or a local PIN.

“And, unlike passwords, passkeys are resistant to online attacks like phishing, making them more secure than things like SMS one-time codes,” Google noted.

Passkeys, once created, are locally stored on the device, and are not shared with any other party. This also obviates the need for setting up two-factor authentication, as it proves that “you have access to your device and are able to unlock it.”

Users also have the choice of creating passkeys for every device they use to log in to their Google Account. That said, a passkey created on one device will be synced to all the user’s other devices running the same operating system platform (i.e., Android, iOS/macOS, or Windows), provided they are signed in to the same account. Viewed in that light, passkeys are not truly interoperable.

It’s worth pointing out that both Google Password Manager and iCloud Keychain use end-to-end encryption to keep synced passkeys private. Syncing also prevents users from getting locked out should they lose access to their devices and makes it easier to upgrade from one device to another.

Passwordless Secure Sign-In with Passkeys

Additionally, users can sign in on a new device or temporarily use a different device by selecting the option to “use a passkey from another device,” which then uses the phone’s screen lock and proximity to approve a one-time sign-in.

“The device then verifies that your phone is in proximity using a small anonymous Bluetooth message and sets up an end-to-end encrypted connection to the phone through the internet,” the company explained.

“The phone uses this connection to deliver your one-time passkey signature, which requires your approval and the biometric or screen lock step on the phone. Neither the passkey itself nor the screen lock information is sent to the new device.”

While this may be the “beginning of the end of the password,” the company said it intends to continue to support existing login methods like passwords and two-factor authentication for the foreseeable future.

Google is also recommending that users do not create passkeys on devices that are shared with others, a move that could effectively undermine all its security protections.


Source :
https://thehackernews.com/2023/05/google-introduces-passwordless-secure.html

Ubiquiti UniFi Network – UniFi Cloud Adoption (Layer 3)

Updated on May 5, 2023

Layer 3 adoption is the process of adopting a UniFi device to a remote UniFi Network Application. This is only recommended for advanced users, or those adopting devices to the UniFi Cloud Console. 

We highly recommend that users refer to Device Adoption for standard device adoption.

L3 Adoption Methods

For layer 3 adoption, your UniFi Network Application and connected devices must have internet access.

UniFi Network Mobile App

The Cloud Console can leverage your UniFi Network Mobile App (iOS / Android) to provide the easiest L3 adoption experience. 

  1. Refer to our UniFi Device LED Status guide to ensure the device is in a factory-default state.
  2. Connect your mobile device to the same local network as your UniFi device. 
  3. Open your UniFi Network Mobile App and connect to the site you want to adopt your device to.
  4. Your device should appear for adoption.

DHCP Option 43

This option leverages your DHCP server to inform your UniFi device of the location of your remote Network Application host. Those with a UniFi Gateway can easily accomplish this by entering the IP address of the remote Network Application in the Option 43 Application Host Address field, located in the Network Settings.

For those using a third-party gateway or DHCP server, we recommend consulting your manufacturer’s documentation to learn more.

DNS

You’ll need to configure your DNS server to resolve ‘unifi’ to your remote UniFi Network Application host.

There are two methods of specifying the Network Application host:

SSH

  1. Make sure your device is in a factory-default state. You can refer to our UniFi Device LED Status guide. 
  2. SSH into the device. You may refer to our guide on how to Login with SSH.
  3. Issue the following command: set-inform http://ip-of-host:8080/inform
  4. The UniFi device will now show up for adoption and can be treated as a standard L2 adoption.

Migrating From Another Network Application

A Layer 3 migration is useful for moving devices from a current Network application to a new Cloud Console. See Backups and Migration for more information.

Source :
https://help.ui.com/hc/en-us/articles/204909754

NIST Password Guidelines

Joe Dibley

Published: November 14, 2022

Updated: March 17, 2023

What are NIST Password Guidelines?

Since 2014, the National Institute of Standards and Technology (NIST), a U.S. federal agency, has issued guidelines for managing digital identities via Special Publication 800-63B. The latest revision (rev. 3) was released in 2017 and has been updated as recently as 2019. Revision 4 was made available for comment and review; however, revision 3 is still the standard as of the time of this blog post.

Section 5.1.1 – Memorized Secrets provides recommendations for requirements around how users may create new passwords or make password changes, including guidelines around issues such as password strength. Special Publication 800-63B also covers verifiers (software, websites, network directory services, etc.) that validate and handle passwords during authentication and other processes.


Not all organizations must adhere to NIST guidelines. However, many follow NIST password policy recommendations even when it’s not required because they provide a good foundation for sound digital identity management. Indeed, strong password security helps companies block many cyberattacks, including brute force attacks like credential stuffing and dictionary attacks. In addition, mitigating identity-related security risks helps organizations ensure compliance with a wide range of regulations, such as HIPAA, FISMA and SOX.

Quick List of NIST Password Guidelines

This blog explains many NIST password guidelines in detail, but here’s a quick list:

  • User-generated passwords should be at least 8 characters in length.
  • Machine-generated passwords should be at least 6 characters in length.
  • Users should be able to create passwords at least 64 characters in length.
  • All ASCII/Unicode characters should be allowed, including emojis and spaces.
  • Stored passwords should be hashed and salted, and never truncated.
  • Prospective passwords should be compared against password breach databases and rejected if there’s a match.
  • Passwords should not expire.
  • Users should be prevented from using sequential characters (e.g., “1234”) or repeated characters (e.g., “aaaa”).
  • Two-factor authentication (2FA) should not use SMS for codes.
  • Knowledge-based authentication (KBA), such as “What was the name of your first pet?”, should not be used.
  • Users should be allowed 10 failed password attempts before being locked out of a system or service.
  • Passwords should not have hints.
  • Complexity requirements — like requiring special characters, numbers or uppercase letters — should not be used.
  • Context-specific words, such as the name of the service or the individual’s username, should not be permitted.

You probably noticed that some of these recommendations represent a departure from previous assumptions and standards. For example, NIST has removed complexity requirements like special characters in passwords; this change was made in part because users find ways to circumvent stringent complexity requirements. Instead of struggling to remember complex passwords and risking getting locked out, they may write their passwords down and leave them near physical computers or servers. Or they simply recycle old passwords based on dictionary words by making minimal changes during password creation, such as incrementing a number at the end.
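
To make the quick list concrete, here’s a minimal sketch of a verifier-side check aligned with these guidelines; isBreachedPassword() is a hypothetical helper that would query a breach corpus such as the Have I Been Pwned? database:

async function validatePassword(password, username) {
  // NIST 800-63B: minimum 8 characters; support a maximum of at least 64.
  if (password.length < 8) return 'Use at least 8 characters.';
  if (password.length > 64) return 'Use at most 64 characters.'; // this sketch caps at the 64-char minimum maximum

  // Reject runs of repeated characters (e.g., "aaaa"); sequential
  // runs like "1234" would need a similar dedicated check.
  if (/(.)\1{3}/.test(password)) return 'Avoid repeated characters.';

  // Reject context-specific words, such as the username.
  if (username && password.toLowerCase().includes(username.toLowerCase())) {
    return 'The password must not contain your username.';
  }

  // Reject known-compromised passwords (hypothetical helper).
  if (await isBreachedPassword(password)) {
    return 'This password appears in a known breach. Please pick another.';
  }

  // No complexity rules, hints, or expiration checks, per NIST.
  return null;
}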

NIST Guidelines

Now let’s explore the NIST guidelines in more detail.

Password length & processing

Length has long been considered a crucial factor for password security. NIST now recommends a password policy that requires all user-created passwords to be at least 8 characters in length, and all machine-generated passwords to be at least 6 characters in length. Additionally, users should be allowed to create passwords up to at least 64 characters long.

Verifiers should no longer truncate any passwords during processing. Passwords should be hashed and salted, with the full password hash stored.

Also the recommended NIST account lockout policy is to allow users at least 10 attempts at entering their password before being locked out.

Accepted characters

All ASCII characters, including the space character, should be supported in passwords. NIST specifies that Unicode characters, such as emojis, should be accepted as well.

Users should be prevented from using sequential characters (e.g., “1234”), repeated characters (e.g., “aaaa”) and simple dictionary words.

Commonly used & breached passwords

Passwords that are known to be commonly used or compromised should not be permitted. For example, you should disallow passwords in lists from breaches (such as the Have I Been Pwned? database, which contains 570+ million passwords from breaches), previously used passwords, well-known commonly used passwords, and context-specific passwords (e.g., the name of the service).

When a user attempts to use a password that fails this check, a message should be displayed asking them for a different password and providing an explanation for why their previous entry was rejected.

Reduced complexity & password expiration

As explained earlier in the blog, previous password complexity requirements have led to less secure human behavior, instead of the intended effect of tightening security. With that in mind, NIST recommends reduced complexity requirements, which includes removing requirements for special characters, numbers, uppercase characters, etc.

A related recommendation for reducing insecure human behavior is to eliminate password expiration.

No more hints or knowledge-based authentication (KBA)

Although password hints were intended to help users to create more complex passwords, users often choose hints that practically give away their passwords. Accordingly, NIST recommends not allowing password hints.

NIST also recommends not using knowledge-based authentication (KBA), such as questions like “What was the name of your first pet?”

Password managers & two-factor authentication (2FA)

To account for the growing popularity of password managers, users should be able to paste passwords.

SMS is no longer considered a secure option for 2FA. Instead, a one-time code provider, such as Google Authenticator or Okta Verify, should be used.

How Netwrix Can Help

Netwrix offers several solutions specifically designed to streamline and strengthen access and password management:

  • Netwrix Password Policy Enforcer makes it easy to create strong yet flexible password policies that enhance security and compliance without hurting user productivity or burdening helpdesk and IT teams.
  • Netwrix Password Reset enables users to safely unlock their own accounts and reset or change their own passwords, right from their web browser. This self-service functionality dramatically reduces user frustration and productivity losses while slashing helpdesk call volume.

FAQ

What is NIST Special Publication 800-63B?

NIST’s Digital Identity Guidelines (Special Publication 800-63B) provides reliable recommendations for identity and access management, including effective password policies.

Why does NIST recommend reducing password complexity requirements?

While requiring complex passwords makes them more difficult for attackers to crack, it also makes passwords harder for users to remember. To avoid frustrating lockouts, users tend to respond with behaviors like writing down their credentials on a sticky note by their desk or choosing to periodically reuse the same (or nearly the same) password — which increase security risks. Accordingly, NIST now recommends less stringent complexity requirements.

Source :
https://blog.netwrix.com/2022/11/14/nist-password-guidelines/

RSA Report: Cybersecurity is National Security

The role of government in stopping supply chain attacks and other threats to our way of life.

By Amber Wolff
April 26, 2023

How governments play a vital role in developing regulations, stopping supply chain attacks, and diminishing other threats to our way of life.

While new issues are always emerging in the world of cybersecurity, some have been present since the beginning, such as what role cybersecurity should play in government operations and, conversely, what role government should play in cybersecurity. The answer to this question continues to shift and evolve over time, but each new leap in technology introduces additional considerations. As we move into the AI era, how can government best keep citizens safe without constraining innovation and the free market — and how can the government use its defensive capabilities to retain an edge in the conflicts of tomorrow?

The day’s first session, “Cybersecurity and Military Defense in an Increasingly Digital World,” offered a deep dive into the latter question. Over the past 20 years, military conflicts have moved from involving just Land, Air and Sea to also being fought in Space and Cyber. While superior technology has given us an upper hand in previous conflicts, in some areas our allies — and our adversaries — are catching up or even surpassing us. In each great technological leap, companies and countries alike ascend and recede, and to keep our edge in the conflicts of the future, the U.S. will need to shed complacency, develop the right policies, move toward greater infrastructure security and tap the capabilities of the private sector.

SonicWall in particular is well-positioned to work with the federal government and the military. For years, we’ve helped secure federal agencies and defense deployments against enemies foreign and domestic, and have worked to shorten and simplify the acquisition and procurement process. Our list of certifications includes FIPS 140-2, Common Criteria, DoDIN APL, Commercial Solutions for Classified (CSfC), USGv6, IPv6 and TAA, among others. And our wide range of certified solutions has been used in a number of government use cases, such as globally distributed networks in military deployments and federal agencies, tip-of-the-spear and hub-and-spoke deployments, defense-in-depth layered firewall strategies and more.

Because Zero Trust is just as important for federal agencies as it is for private sector organizations, SonicWall offers the SMA 1000, which offers Zero Trust Network Architecture that complies with federal guidelines, including the DoDIN APL, FIPS and CSfC, as well as the U.S. National Cybersecurity Strategy.

This new strategy was at the center of the day’s next session. In “The National Cyber Strategy as Roadmap to a Secure Cyber Future,” panelists outlined this strategic guidance, which was released just two months ago and offered a roadmap for how the U.S. should protect its digital ecosystem against malicious criminal and nation-state actors. The guidance consists of five pillars, all of which SonicWall is in accord with:

  • Pillar One: Defend Critical Infrastructure
    SonicWall offers several security solutions that align with Pillar One, including firewalls, intrusion prevention, VPN, advanced threat protection, email security, Zero-Trust network access and more. We’re also working to align with and conform to NIST SSDF and NIST Zero Trust Architecture standards.
  • Pillar Two: Disrupt and Dismantle Threat Actors

SonicWall uses its Email Security to disrupt and mitigate the most common ransomware vector: Phishing. And in 2022 alone, we helped defend against 493.3 million ransomware attacks.

  • Pillar Three: Shape Market Forces to Drive Security and Resilience

This pillar shifts liability from end users to software providers that ignore best practices, ship insecure or vulnerable products, or integrate unvetted or unsafe third-party software. As part of our efforts to align with the NIST SSDF, we’re implementing a Software Bill of Materials (SBOM).

  • Pillar Four: Invest in a Resilient Future

Given CISA’s prominence in this guidance, any regulations created will likely include threat emulation testing and will likely be mapped to threat techniques, such as MITRE ATT&CK. SonicWall Capture Client (our EDR solution) is powered by SentinelOne, which has been a participant in the MITRE ATT&CK evaluations since 2018 and was a top performer in the 2022 Evaluations.

  • Pillar Five: Forge International Partnerships to Pursue Shared Goals

An international company, SonicWall recognizes the importance of international partnerships and works to comply with global regulations such as GDPR, HIPAA, PCI-DSS and more. By sharing threat intelligence and collaborating on mitigation strategies, we work with governments and the rest of the cybersecurity community to pursue shared cybersecurity goals.

And with the continued rise in cybercrime, realizing these goals has never been more important. In “The State of Cybersecurity: Year in Review,” Mandiant CEO Kevin Mandia summarized findings from the 1,163 intrusions his company investigated in 2022. The good news, Mandia said, is that we’re detecting threats faster. In just ten years, we’ve gone from averaging 200 days to notice there’s a problem, to just 16 days currently — but at the same time, an increase in the global median dwell time for ransomware shows there’s still work to be done.

Mandia also outlined the evolution of how cybercriminals enter networks: from Unix platforms to Windows-based attacks, and from phishing to spearphishing to exploiting vulnerabilities, bringing patch management once again to the fore.

Deep within the RSAC Sandbox, where today’s defenders learn, play and test their skills, panelists convened to discuss how to stop attackers’ relentless attempts to shift left. “Software Supply Chain: Panel on Threat Intel, Trends, Mitigation Strategies” explained that while the use of third-party components increases agility, it comes with tremendous risk. More than 96% of software organizations rely on third-party code, 90% of which consists of open source, but the developers of this software are frequently single individuals or small groups who may not have time to incorporate proper security, or even know how. Our current strategy of signing at the end isn’t enough, panelists argued; to truly ensure safety, signing should be done throughout the process (otherwise known as “sign at the station”).

Israel provides an example of how a country can approach the issue of software supply chain vulnerability — among other things, the country has created a GitHub and browser extension allowing developers to check packages for malicious code — but much work would need to be done to implement the Israel model in the U.S. AI also provides some hope, but given its current inability to reliably detect malicious code, we’re still a long way from being able to rely on it. In the meantime, organizations will need to rely on tried-and-true solutions such as SBOMs to help guard against supply chain attacks in the near future.

But while AI has tremendous potential to help defenders, it also has terrible potential to aid attackers. In “ChatGPT: A New Generation of Dynamic Machine-Based Attacks,” the speakers highlighted ways that attackers are using the new generation of AI technology to dramatically improve social engineering attempts, expand their efforts to targets in new areas, and even write ransomware and other malicious code. In real time, the speakers demonstrated the difference between previous phishing emails and phishing generated by ChatGPT, including the use of more natural language, the ability to instantly access details about the target and the ability to imitate a leader or colleague trusted by the victim with a minimum of effort. These advancements will lead to a sharp increase in victims of phishing attacks, as well as things like Business Email Compromise.

And while there are guardrails in place to help prevent ChatGPT from being used maliciously, they can be circumvented with breathtaking ease. With the simple adjustment of a prompt, the speakers demonstrated, ransomware and other malicious code can be generated. While this code isn’t functional on its own, it’s just one or two simple adjustments away — and this capability could be used to rapidly increase the speed with which attacks are launched.

These capabilities are especially concerning given the rise in state-sponsored attacks. In “State of the Hack 2023: NSA’s Perspective,” NSA Director of Cybersecurity Rob Joyce addressed a packed house regarding the NSA’s work to prevent the increasing wave of nation-state threats. The two biggest nation-state threats to U.S. cybersecurity continue to be Russia and China, with much of the Russian effort centering around the U.S.’ assistance in the Russia/Ukraine conflict.

As we detailed in our SonicWall 2023 Cyber Threat Report, since the beginning of the conflict, attacks by Russia’s military and associated groups have driven a massive spike in cybercrime in Ukraine. The good news, Joyce said, is that Russia is currently in intelligence-gathering mode when it comes to the U.S., and is specifically taking care not to release large-scale NotPetya-type attacks. But Russia also appears to be playing the long game, and is showing no signs of slowing or scaling back their efforts.

China also appears to be biding its time, but unlike Russia, whose efforts appear to be focused on traditional military dominance, China is seeking technological dominance. Exploitation by China has increased so much that we’ve become numb to it, Joyce argued. And since these nation-state sponsored attackers don’t incur much reputational damage for their misdeeds, they’ve become increasingly brazen in their attacks. China has gone so far as to require any citizen who finds a zero-day to pass the details to the government, and it hosts competitions for building exploits and finding vulnerabilities. The country is also making efforts to influence international tech standards in an attempt to tip the scales in its favor for years to come.

The 2023 RSA Conference has offered a wealth of information on a wide variety of topics, but it will soon draw to a close. Thursday is the last day to visit the SonicWall booth (#N-5585 in Moscone North) and enjoy demos and presentations on all of our latest technology. Don’t head home without stopping by — and don’t forget to check back for the conclusion of our RSAC 2023 coverage!

Source :
https://blog.sonicwall.com/en-us/2023/04/rsa-report-cybersecurity-is-national-security/

CrowdStrike Falcon Platform Detects and Prevents Active Intrusion Campaign Targeting 3CXDesktopApp Customers

Note: Content from this post first appeared in r/CrowdStrike

We will continue to update on this dynamic situation as more details become available. CrowdStrike’s Intelligence team is in contact with 3CX.

On March 29, 2023, CrowdStrike observed unexpected malicious activity emanating from a legitimate, signed binary, 3CXDesktopApp — a softphone application from 3CX. The malicious activity includes beaconing to actor-controlled infrastructure, deployment of second-stage payloads, and, in a small number of cases, hands-on-keyboard activity. 

The CrowdStrike Falcon® platform has behavioral preventions and atomic indicator detections targeting the abuse of 3CXDesktopApp. In addition, CrowdStrike® Falcon OverWatch™ helps customers stay vigilant against hands-on-keyboard activity.

CrowdStrike customers can log into the customer support portal and follow the latest updates in Trending Threats & Vulnerabilities: Intrusion Campaign Targeting 3CX Customers.

The 3CXDesktopApp is available for Windows, macOS, Linux and mobile. At this time, activity has been observed on both Windows and macOS.

CrowdStrike Intelligence has assessed there is suspected nation-state involvement by the threat actor LABYRINTH CHOLLIMA. CrowdStrike Intelligence customers received an alert this morning on this active intrusion. 

Get fast and easy protection with built-in threat intelligence — request a free trial of CrowdStrike Falcon® Pro today

CrowdStrike Falcon Detection and Protection

The CrowdStrike Falcon platform protects customers from this attack, with coverage that uses both behavior-based indicators of attack (IOAs) and detections based on indicators of compromise (IOCs), targeting malicious behaviors associated with 3CXDesktopApp on both macOS and Windows.

Customers should ensure that prevention policies are properly configured with Suspicious Processes enabled.

Figure 1. CrowdStrike’s indicator of attack (IOA) identifies and blocks the malicious behavior in macOS

Figure 2. CrowdStrike’s indicator of attack (IOA) identifies and blocks the malicious behavior in Windows

Hunting in the CrowdStrike Falcon Platform

Falcon Discover

CrowdStrike Falcon® Discover customers can use the following links (US-1 | US-2 | EU | Gov) to look for the presence of 3CXDesktopApp in their environment.

Falcon Insight customers can assess if the 3CXDesktopApp is running in their environment with the following query:

Event Search — Application Search

event_simpleName IN (PeVersionInfo, ProcessRollup2) FileName IN ("3CXDesktopApp.exe", "3CX Desktop App")
| stats dc(aid) as endpointCount by event_platform, FileName, SHA256HashData

Falcon Long Term Repository — Application Search

#event_simpleName=/^(PeVersionInfo|ProcessRollup2)$/ AND (event_platform=Win ImageFileName=/\\3CXDesktopApp\.exe$/i) OR (event_platform=Mac ImageFileName=/\/3CX\sDesktop\sApp/i)
| ImageFileName = /.+(\\|\/)(?<FileName>.+)$/i
| groupBy([event_platform, FileName, SHA256HashData], function=count(aid, distinct=true, as=endpointCount))

Atomic Indicators

Beaconing to any of the following domains should be considered an indication of malicious activity.

akamaicontainer[.]com
akamaitechcloudservices[.]com
azuredeploystore[.]com
azureonlinecloud[.]com
azureonlinestorage[.]com
dunamistrd[.]com
glcloudservice[.]com
journalide[.]org
msedgepackageinfo[.]com
msstorageazure[.]com
msstorageboxes[.]com
officeaddons[.]com
officestoragebox[.]com
pbxcloudeservices[.]com
pbxphonenetwork[.]com
pbxsources[.]com
qwepoi123098[.]com
sbmsa[.]wiki
sourceslabs[.]com
visualstudiofactory[.]com
zacharryblogs[.]com

CrowdStrike Falcon® Insight customers, regardless of retention period, can search for the presence of these domains in their environment spanning back one year using Indicator Graph: US-1 | US-2 | EU | Gov.

Event Search — Domain Search

event_simpleName=DnsRequest DomainName IN (akamaicontainer.com, akamaitechcloudservices.com, azuredeploystore.com, azureonlinecloud.com, azureonlinestorage.com, dunamistrd.com, glcloudservice.com, journalide.org, msedgepackageinfo.com, msstorageazure.com, msstorageboxes.com, officeaddons.com, officestoragebox.com, pbxcloudeservices.com, pbxphonenetwork.com, pbxsources.com, qwepoi123098.com, sbmsa.wiki, sourceslabs.com, visualstudiofactory.com, zacharryblogs.com)
| stats dc(aid) as endpointCount, earliest(ContextTimeStamp_decimal) as firstSeen, latest(ContextTimeStamp_decimal) as lastSeen by DomainName
| convert ctime(firstSeen) ctime(lastSeen)

Falcon LTR — Domain Search

#event_simpleName=DnsRequest
| in(DomainName, values=[akamaicontainer.com, akamaitechcloudservices.com, azuredeploystore.com, azureonlinecloud.com, azureonlinestorage.com, dunamistrd.com, glcloudservice.com, journalide.org, msedgepackageinfo.com, msstorageazure.com, msstorageboxes.com, officeaddons.com, officestoragebox.com, pbxcloudeservices.com, pbxphonenetwork.com, pbxsources.com, qwepoi123098.com, sbmsa.wiki, sourceslabs.com, visualstudiofactory.com, zacharryblogs.com])
| groupBy([DomainName], function=([count(aid, distinct=true, as=endpointCount), min(ContextTimeStamp, as=firstSeen), max(ContextTimeStamp, as=lastSeen)]))
| firstSeen := firstSeen * 1000 | formatTime(format="%F %T.%L", field=firstSeen, as="firstSeen")
| lastSeen := lastSeen * 1000 | formatTime(format="%F %T.%L", field=lastSeen, as="lastSeen")
| sort(endpointCount, order=desc)
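
For hunting the same indicators in third-party tooling (see recommendation 4 below), a rough Python sketch along the following lines can sweep an exported DNS log for the IOC domains listed above. The one-record-per-line log format is an assumption; adapt the parsing to your resolver's actual export.

# Sketch: sweep an exported DNS log for the 3CX campaign IOC domains.
# Assumes a plain-text export with one record per line; adjust the
# parsing to match your resolver's actual log format.
import sys

IOC_DOMAINS = {
    "akamaicontainer.com", "akamaitechcloudservices.com",
    "azuredeploystore.com", "azureonlinecloud.com",
    "azureonlinestorage.com", "dunamistrd.com", "glcloudservice.com",
    "journalide.org", "msedgepackageinfo.com", "msstorageazure.com",
    "msstorageboxes.com", "officeaddons.com", "officestoragebox.com",
    "pbxcloudeservices.com", "pbxphonenetwork.com", "pbxsources.com",
    "qwepoi123098.com", "sbmsa.wiki", "sourceslabs.com",
    "visualstudiofactory.com", "zacharryblogs.com",
}

def scan(log_path: str) -> None:
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line_no, line in enumerate(log, start=1):
            lowered = line.lower()
            for domain in IOC_DOMAINS:
                # Substring match also catches queries for subdomains.
                if domain in lowered:
                    print(f"{log_path}:{line_no}: hit for {domain}: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)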

File Details

SHA256: dde03348075512796241389dfea5560c20a3d2a2eac95c894e7bbed5e85a0acc
Operating System: Windows
Installer SHA256: aa124a4b4df12b34e74ee7f6c683b2ebec4ce9a8edcf9be345823b4fdcf5d868
FileName: 3cxdesktopapp-18.12.407.msi

SHA256: fad482ded2e25ce9e1dd3d3ecc3227af714bdfbbde04347dbc1b21d6a3670405
Operating System: Windows
Installer SHA256: 59e1edf4d82fae4978e97512b0331b7eb21dd4b838b850ba46794d9c7a2c0983
FileName: 3cxdesktopapp-18.12.416.msi

SHA256: 92005051ae314d61074ed94a52e76b1c3e21e7f0e8c1d1fdd497a006ce45fa61
Operating System: macOS
Installer SHA256: 5407cda7d3a75e7b1e030b1f33337a56f293578ffa8b3ae19c671051ed314290
FileName: 3CXDesktopApp-18.11.1213.dmg

SHA256: b86c695822013483fa4e2dfdf712c5ee777d7b99cbad8c2fa2274b133481eadb
Operating System: macOS
Installer SHA256: e6bbc33815b9f20b0cf832d7401dd893fbc467c800728b5891336706da0dbcec
FileName: 3cxdesktopapp-latest.dmg
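
If you want to check an installer you already have on disk against the hashes above, computing its SHA-256 locally is enough. Here is a minimal sketch using Python's standard hashlib; the hash set below is the first SHA256 column of the table, and everything else (paths, output format) is illustrative.

# Sketch: compute a file's SHA-256 and compare it to the published
# trojanized-file hashes (first SHA256 column of the table above).
import hashlib
import sys

KNOWN_BAD_SHA256 = {
    "dde03348075512796241389dfea5560c20a3d2a2eac95c894e7bbed5e85a0acc",  # 3cxdesktopapp-18.12.407.msi
    "fad482ded2e25ce9e1dd3d3ecc3227af714bdfbbde04347dbc1b21d6a3670405",  # 3cxdesktopapp-18.12.416.msi
    "92005051ae314d61074ed94a52e76b1c3e21e7f0e8c1d1fdd497a006ce45fa61",  # 3CXDesktopApp-18.11.1213.dmg
    "b86c695822013483fa4e2dfdf712c5ee777d7b99cbad8c2fa2274b133481eadb",  # 3cxdesktopapp-latest.dmg
}

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large installers don't load fully into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "MATCHES a known-bad hash" if sha256_of(path) in KNOWN_BAD_SHA256 else "no match"
        print(f"{path}: {verdict}")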

Recommendations 

The current recommendations for all CrowdStrike customers are:

  1. Locate the presence of 3CXDesktopApp software in your environment by using the queries outlined above.
  2. Ensure Falcon is deployed to applicable systems. 
  3. Ensure “Suspicious Processes” is enabled in applicable Prevention Policies.
  4. Hunt for historical presence of atomic indicators in third-party tooling (if available).


Source :
https://www.crowdstrike.com/blog/crowdstrike-detects-and-prevents-active-intrusion-campaign-targeting-3cxdesktopapp-customers/

3 Overlooked Cybersecurity Breaches

Here are three of the worst breaches of 2022, the attacker tactics and techniques behind them, and the security controls that can provide effective enterprise protection against them.

#1: 2 RaaS Attacks in 13 Months

Ransomware as a service (RaaS) is a type of attack in which the ransomware software and infrastructure are leased out to attackers. These services can be purchased on the dark web from other threat actors and ransomware gangs. Common purchasing plans include buying the entire tool, using the existing infrastructure while paying per infection, or letting other attackers perform the service while sharing revenue with them.

In this attack, the threat actor was one of the most prevalent ransomware groups, specializing in access via third parties, while the targeted company was a medium-sized retailer with dozens of sites in the United States.

The threat actors used ransomware as a service to breach the victim’s network. They were able to exploit third-party credentials to gain initial access, progress laterally, and ransom the company, all within mere minutes.

The swiftness of this attack was unusual. In most RaaS cases, attackers stay in the network for weeks or months before demanding ransom. What is particularly interesting about this attack is that the company was ransomed in minutes, with no need for discovery or weeks of lateral movement.

A log investigation revealed that the attackers targeted servers that no longer existed in this network. As it turns out, the victim had been breached and ransomed 13 months before this second ransomware attack. The first attacker group then monetized the first attack not only through the ransom it obtained, but also by selling the company’s network information to the second ransomware group.

In the 13 months between the two attacks, the victim changed its network and removed servers, but the new attackers were not aware of these architectural modifications. The scripts they developed were designed for the previous network map. This also explains how they were able to attack so quickly – they had plenty of information about the network. The main lesson here is that ransomware attacks can be repeated by different groups, especially if the victim pays well.

“RaaS attacks such as this one are a good example of how full visibility allows for early alerting. A global, converged, cloud-native SASE platform that supports all edges, like Cato Networks, provides complete visibility into network events that are invisible to other providers or may go under the radar as benign events. Being able to fully contextualize those events allows for early detection and remediation.”

#2: The Critical Infrastructure Attack on Radiation Alert Networks

Attacks on critical infrastructure are becoming more common and more dangerous. Breaches of water supply plants, sewage systems, and similar infrastructure could put millions of residents at risk of a humanitarian crisis. These infrastructures are also becoming more vulnerable, and OSINT-driven attack surface tools like Shodan and Censys make such vulnerabilities easy to find.

In 2021, two hackers were suspected of targeting radiation alert networks. Their attack relied on two insiders who worked for a third party. These insiders disabled the radiation alert systems, significantly debilitating their ability to monitor radiation levels. The attackers were then able to delete critical software and disable radiation gauges, which are part of the infrastructure itself.


“Unfortunately, scanning for vulnerable systems in critical infrastructure is easier than ever. While many such organizations have multiple layers of security, they are still using point solutions to try and defend their infrastructure rather than one system that can look holistically at the full attack lifecycle. Breaches are never just a phishing problem, or a credentials problem, or a vulnerable system problem – they are always a combination of multiple compromises performed by the threat actor,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

#3: The Three-Step Ransomware Attack That Started with Phishing

The third attack is also a ransomware attack. This time, it consisted of three steps:

1. Infiltration – The attacker was able to gain access to the network through a phishing attack. The victim clicked on a link that generated a connection to an external site, which resulted in the download of the payload.

2. Network activity – In the second phase, the attacker moved laterally through the network for two weeks, collecting admin passwords and using in-memory, fileless malware. Then, on New Year’s Eve, they performed the encryption, a date chosen on the (correct) assumption that the security team would be off on vacation.

3. Exfiltration – Finally, the attackers exfiltrated the data from the network.

In addition to these three main steps, additional sub-techniques were employed during the attack, and the victim’s point security solutions were unable to block it.


“A multiple choke point approach, one that looks horizontally (so to speak) at the attack rather than as a set of vertical, disjointed issues, is the way to enhance detection, mitigation and prevention of such threats. Contrary to popular belief, the attacker needs to be right many times, while the defenders need to be right only once. The underlying technologies to implement a multiple choke point approach are full network visibility via a cloud-native backbone, and a single-pass security stack that’s based on ZTNA,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.

How Do Security Point Solutions Stack Up?

It is common for security professionals to succumb to the “single point of failure” fallacy. But cyberattacks are sophisticated events that rarely come down to a single tactic or technique. Therefore, an all-encompassing outlook is required to successfully mitigate them. Security point solutions address single points of failure: they can identify risks, but they will not connect the dots, and that gap can lead (and has led) to breaches.

Here’s What to Watch Out for in the Coming Months

In its ongoing security research, the Cato Networks Security Team has identified two additional vulnerability and exploitation trends it recommends factoring into your upcoming security plans:

1. Log4j

While Log4j made its debut as early as December 2021, the noise it’s making hasn’t died down. Attackers are still using Log4j to exploit systems, because not all organizations have been able to patch their Log4j vulnerabilities or detect Log4j attacks through “virtual patching.” The team recommends prioritizing Log4j mitigation.
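
One widely documented way to locate vulnerable Log4j copies is to scan Java archives for the JndiLookup.class file abused by Log4Shell (CVE-2021-44228). The following rough Python sketch illustrates the idea; it only flags the presence of the class, not exploitability, and does not descend into nested archives.

# Sketch: recursively scan .jar/.war/.ear files for JndiLookup.class,
# the class abused by Log4Shell (CVE-2021-44228). Presence flags a copy
# of log4j-core worth patching; it does not confirm exploitability.
import pathlib
import sys
import zipfile

def scan_archives(root: str) -> None:
    for archive in pathlib.Path(root).rglob("*"):
        if archive.suffix not in {".jar", ".war", ".ear"}:
            continue
        try:
            with zipfile.ZipFile(archive) as zf:
                if any(name.endswith("JndiLookup.class") for name in zf.namelist()):
                    print(f"possible log4j-core: {archive}")
        except (zipfile.BadZipFile, OSError):
            # Skip unreadable or corrupt archives rather than aborting the scan.
            continue

if __name__ == "__main__":
    scan_archives(sys.argv[1] if len(sys.argv) > 1 else ".")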

2. Misconfigured Firewalls and VPNs

Security solutions like firewalls and VPNs have become access points for attackers. Patching them has become increasingly difficult, especially in the era of architecture cloudification and remote work. It is recommended to pay close attention to these components as they are increasingly vulnerable.

How to Minimize Your Attack Surface and Gain Visibility into the Network

To reduce the attack surface, security professionals need visibility into their networks. Visibility relies on three pillars:

  • Actionable information – that can be used to mitigate attacks
  • Reliable information – that minimizes the number of false positives
  • Timely information – to ensure mitigation happens before the attack has an impact

Once an organization has complete visibility into the activity on its network, it can contextualize the data, decide whether the observed activity should be allowed, denied, monitored, or restricted (or any other action), and then enforce that decision. All of these elements must be applied to every entity, be it a user, device, or cloud app, all the time and everywhere. That is what SASE is all about.


Source :
https://thehackernews.com/2023/02/3-overlooked-cybersecurity-breaches.html

The Definitive Browser Security Checklist

Security stakeholders have come to realize that the prominent role the browser has in the modern corporate environment requires a re-evaluation of how it is managed and protected. Not long ago, web-borne risks were still addressed by a patchwork of endpoint, network, and cloud solutions; it is now clear that the partial protection those solutions provide is no longer sufficient. Therefore, more and more security teams are turning to the emerging category of purpose-built browser security platforms as the answer to the browser’s security challenges.

However, as this security solution category is still relatively new, there is not yet an established set of browser security best practices, nor common evaluation criteria. LayerX, the User-First Browser Security Platform, is addressing security teams’ needs with a downloadable Browser Security Checklist that guides readers through the essentials of choosing the best solution and provides an actionable checklist to use during the evaluation process.

The Browser is the Most Important Work Interface and the Most Targeted Attack Surface

The browser has become the core workspace in the modern enterprise. On top of being the gateway to sanctioned SaaS apps and other non-corporate web destinations, the browser is the intersection point between cloud/web environments and physical or virtual endpoints. This makes the browser both a target for multiple types of attacks and a potential source of unintentional data leakage.

Some of these attacks, such as exploitation of browser vulnerabilities or drive-by downloads of malicious files, have been around for more than a decade. Others have gained momentum recently alongside the steep rise in SaaS adoption, like social engineering users with phishing webpages. Yet others leverage the evolution of web page technology to launch sophisticated, hard-to-detect modifications and abuses of browser features that capture and exfiltrate sensitive data.

Browser Security 101 – What is It That We Need to Protect?

Browser security can be divided into two different groups: preventing unintended data exposure and protection against various types of malicious activity.

From the data protection aspect, such a platform enforces policies that ensure sensitive corporate data is not shared or downloaded in an insecure manner from sanctioned apps, nor uploaded from managed devices to non-corporate web destinations.
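
To make that concrete, here is a generic illustration of what such a data-protection rule can look like. This is a hypothetical sketch, not LayerX's (or any vendor's) actual policy engine or syntax; the field names and categories are invented for illustration.

# Illustrative sketch of a browser data-protection policy rule — not any
# vendor's real API. A rule matches the browsing context of an upload or
# download and returns an enforcement action.
from dataclasses import dataclass

@dataclass(frozen=True)
class BrowserEvent:
    user: str
    device_managed: bool
    destination: str        # e.g. "sanctioned_saas", "personal_webmail"
    action: str             # e.g. "upload", "download", "paste"
    data_sensitive: bool

def evaluate(event: BrowserEvent) -> str:
    """Return 'allow', 'block', or 'monitor' for a browser event."""
    # Sensitive corporate data may only flow to sanctioned SaaS apps.
    if event.data_sensitive and event.destination != "sanctioned_saas":
        return "block"
    # Downloads to unmanaged (BYOD / third-party) devices get monitored.
    if event.action == "download" and not event.device_managed:
        return "monitor"
    return "allow"

print(evaluate(BrowserEvent("jdoe", True, "personal_webmail", "upload", True)))  # -> block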

From the threat protection aspect, such a platform detects and prevents three types of attacks:

  • Attacks that target the browser itself, with the purpose of compromising the host device or the data that resides within the browser application itself, such as cookies and passwords.
  • Attacks that utilize the browser via compromised credentials to access corporate data that resides in both sanctioned and unsanctioned SaaS applications.
  • Attacks that leverage the modern web page as an attack vector to target users’ passwords, via a wide range of phishing methods or through malicious modification of browser features.

How to Choose the Right Solution

What should you focus on when choosing the browser security solution for your environment? What are the practical implications of the differences between the various offerings? How should deployment methods, the solution’s architecture, or user privacy be weighed in the overall consideration? How should threats and risks be prioritized?

As we’ve said before – unlike with other security solutions, you can’t just ping one of your peers and ask what he or she is doing. Browser security is new, and the wisdom of the crowd is yet to be formed. In fact, there’s an excellent chance that your peers are now struggling with the very same questions you are.

The Definitive Browser Security Platform Checklist – What It Is and How to Use It

The checklist (download it here) breaks down the high-level “browser security” headline into small, digestible chunks of the concrete needs that must be solved. These are brought to the reader in four pillars – deployment, user experience, security functionalities, and user privacy. For each pillar, there is a short description of its browser context and a more detailed explanation of its capabilities.

The most significant pillar in terms of scope is, of course, the security functionalities one, which is divided into five sub-sections. Since, in most cases, this pillar is the initial driver for pursuing a browser security platform in the first place, it’s worth going over them in more detail:

Browser Security Deep Dive

The need for a browser security platform typically arises from one of the following:

— Attack Surface Management: Proactive reduction of the browser’s exposure to various types of threats, eliminating adversaries’ ability to carry them out.

— Zero Trust Access: Hardening the authentication requirements to ensure that the username and password were indeed provided by the legitimate user and were not compromised.

— SaaS Monitoring and Protection: 360° visibility into all users’ activity and data usage within sanctioned and unsanctioned apps, as well as other non-corporate web destinations, while safeguarding corporate data from compromise or loss.

— Protection Against Malicious Web Pages: Real-time detection and prevention of all the malicious tactics adversaries embed in the modern web page, including credential phishing, downloading of malicious files and data theft.

— Secure 3rd Party Access and BYOD: Enablement of secure access to corporate web resources from unmanaged devices of both the internal workforce as well as external contractors and service providers.

This list enables anyone to easily identify the objective of their browser security platform search and determine the capabilities required to fulfill it.

The Checklist – A Straightforward Evaluation Shortcut

The most important and actionable part of the guide is the concluding checklist, which provides, for the first time, a concise summary of all the essential capabilities a browser security platform should provide. This checklist makes the evaluation process easier than ever: simply test the solutions you’ve shortlisted against it and see which one scores highest. Once you have them all lined up, you can make an informed decision based on the needs of your environment as you understand them.

Download the checklist here.


Source :
https://thehackernews.com/2023/01/the-definitive-browser-security.html

Is Once-Yearly Pen Testing Enough for Your Organization?

Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.

There are two main reasons why regular pen testing is necessary for secure web application development:

  • Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
  • Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.

How Often Should You Pentest?

Many organizations, big and small, have a once-a-year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to test more often?

The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.

You may need more frequent pen testing if:

You Have an Agile or Continuous Release Cycle

Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.

If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.

To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.

Your Web Applications Are Business-Critical

Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.

For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.

To avoid this scenario, it’s important to ensure that your web applications are always available and secure.

Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.

Your Web Applications Are Customer-Facing

If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.

Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.

Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.
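
To make the XSS risk concrete, here is a minimal illustration (not from the article) of the pattern pen testers look for, sketched as a pair of Flask handlers: the first reflects user input into HTML unescaped, the second applies escaping. The route names and markup are invented for the example.

# Minimal reflected-XSS illustration. The first handler echoes user
# input into HTML unescaped; the second escapes it before reflection.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search-vulnerable")
def search_vulnerable():
    query = request.args.get("q", "")
    # BUG (deliberate): attacker-controlled input is reflected into the
    # page unescaped, so ?q=<script>alert(1)</script> runs in the browser.
    return f"<h1>Results for {query}</h1>"

@app.route("/search-fixed")
def search_fixed():
    query = request.args.get("q", "")
    # escape() neutralizes HTML metacharacters before reflection.
    return f"<h1>Results for {escape(query)}</h1>"

if __name__ == "__main__":
    app.run()  # local demonstration only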

To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.

You Are in a High-Risk Industry

Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.

If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.

You Don’t Have Internal Security Operations or a Pen Testing Team

This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.

Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.

Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.

Depending on the size and complexity of your organization, you may need to pen test every month or even every week.

You Are Focused on Mergers or Acquisitions

During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.

M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.

In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”

To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.

The Importance of Continuous Pen Testing

While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.

There are two main types of pen testing: time-boxed and continuous.

Traditional pen testing is done on a set schedule, such as once a year. This type of pen testing is no longer enough in today’s world, as businesses rely more on their web applications.

Continuous pen testing is the process of continuously scanning your systems for vulnerabilities. This allows you to identify and fix vulnerabilities before they can be exploited by attackers. Continuous pen testing allows you to find and fix security issues as they happen instead of waiting for a periodic assessment.
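
Real continuous testing is delivered by a platform, but the core loop is easy to picture. As a toy Python sketch, the following re-checks a set of URLs for basic security headers on a fixed schedule; the target list and interval are placeholders, and an actual continuous pen test covers far more than response headers.

# Toy sketch of a continuously scheduled security check: re-test a set
# of URLs for basic security headers on a fixed interval. A real
# continuous pen testing program covers far more than response headers.
import time
import urllib.request

TARGETS = ["https://example.com"]          # placeholder URLs
REQUIRED_HEADERS = ["Content-Security-Policy", "Strict-Transport-Security"]
INTERVAL_SECONDS = 24 * 60 * 60            # placeholder: once a day

def check(url: str) -> None:
    with urllib.request.urlopen(url, timeout=10) as response:
        missing = [h for h in REQUIRED_HEADERS if h not in response.headers]
    if missing:
        print(f"{url}: missing {', '.join(missing)}")
    else:
        print(f"{url}: required headers present")

if __name__ == "__main__":
    while True:
        for target in TARGETS:
            try:
                check(target)
            except OSError as err:
                print(f"{target}: check failed ({err})")
        time.sleep(INTERVAL_SECONDS)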

Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.

Pen-testing-as-a-service models are where continuous pen testing shines. Outpost24’s PTaaS (Penetration Testing as a Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is always up to date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.

  • Manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
  • Comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more, so you can be confident that your web applications are secure against the latest threats.
  • Cost-effectiveness: With Outpost24, you only pay for the services you need. This makes continuous pen testing affordable, even for small businesses.

The Bottom Line

Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.

A once-a-year pen testing cycle may be enough for some organizations, but for most, it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.

Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.


Source :
https://thehackernews.com/2023/01/is-once-yearly-pen-testing-enough-for.html
