Database Cleanup and Optimisation: A Quick Guide for WordPress Users

FEBRUARY 23, 2024
BY PAUL G.

Much like a well-oiled machine, your WordPress site requires regular maintenance to ensure peak performance and security. Without it, you can end up with a disorganised and bloated database, which can affect your site’s speed and leave it vulnerable to online threats.

But fear not! The solution lies in a simple yet often overlooked aspect of website management – database cleanup and optimisation.

In this comprehensive guide, we’ll show you how a little housekeeping can not only give your site the speed boost it desperately needs, but also strengthen its security against lurking threats. From manual tweaks to security plugins like Shield Security Pro, you’ll learn how to cleanse your digital space efficiently!

Understanding WordPress database health: Why it matters

Over time, a WordPress database accumulates old and unused data. Think of this as digital clutter – rows upon rows of data that are no longer in use but still take up space. This includes old post revisions, trashed items, spam comments, and data left by uninstalled plugins.

This clutter doesn’t just take up digital space. Every time your website performs a task, your server has to sift through everything. This creates an unnecessary workload that slows down your site, affecting user experience and potentially harming your SEO rankings.

Regular database maintenance ensures seamless website performance and significantly lowers the risk of malware infection. Since malware often hides in the clutter, a clean and optimised database is less vulnerable to attacks.

Here’s what to do to make sure your database is well-maintained:

  • Regular backups: Before any cleanup, ensure you have a recent backup. It’s your safety net in case something goes wrong.
  • Routine scanning for malware: Use reliable tools like the Shield Security Pro plugin for regular scans. Catching and removing malware early can prevent more significant issues down the line.
  • Removing unused themes and plugins: Inactive themes and plugins are not just dead weight; they’re potential security risks. Regularly clean them out.
  • Spam comment cleanup: Spam comments bloat your database and can harm your site’s credibility. Regularly purging them is crucial.

Optimising database tables

Table optimisation, or defragmentation, is about removing excess data from your site’s data tables. Think of it like organising a messy bookshelf so you can find books faster. It rearranges the data to use space more efficiently, improving performance. This process is important for larger websites, where data operations can become significantly slower over time.
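Before optimising, you can see which tables actually carry reclaimable space by querying MySQL’s information_schema. A minimal sketch, assuming your database is named wordpress (substitute your own database name):

-- Lists tables with unreclaimed space (overhead), largest first.
SELECT table_name, data_free
FROM information_schema.tables
WHERE table_schema = 'wordpress'  -- hypothetical database name
  AND data_free > 0
ORDER BY data_free DESC;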

Popular plugins like WP-Optimize, WP-DBManager, and Advanced Database Cleaner offer a user-friendly way to handle database optimisation without needing deep technical expertise. They help automate the cleanup process, ensuring your WordPress site remains speedy and efficient.

Backing up your site before initiating cleanup

Before you dive into the nitty-gritty of database cleanup, you must always back up your site first.

While database cleanup aims to remove only redundant data, the process isn’t infallible. There’s always a risk, however small, that something might go wrong. In such cases, a backup is your quick ticket to recovery, allowing you to restore your site to its pre-cleanup state with minimal fuss.

Here are some scenarios where backups save the day:

  • User error: Sometimes, the biggest threat to your website can be accidental mishaps, like deleting important files or making erroneous changes.
  • Platform and plugin updates: Updates are essential for security and performance, but they can occasionally lead to compatibility issues or data loss.
  • Security breaches: In the unfortunate event of a hack or malware infection, a backup can be vital in restoring your site to a secure state.

The good news is that backing up your WordPress site can be made easy via backup plugins like WP-Staging, UpdraftPlus, BackupBuddy, and VaultPress (Jetpack Backup). These tools automate the process, ensuring that your site is regularly backed up without requiring manual intervention every time.

How often should you back up?

The frequency of backups should reflect how often your site is updated. For a dynamic site with daily changes, a daily backup is ideal. However, for smaller sites with less frequent updates, weekly or even monthly backups might suffice. 

The key is to never skip backups altogether. It’s a small effort that can save a lot of time and stress in the long run.

Manual cleanup vs. plugin-assisted optimisation

When it comes to optimising your WordPress database, you have two primary approaches: manual cleanup or using plugins. While manual cleanup requires more technical know-how, it also gives you more precise control over the optimisation process.

Manual cleanup

  1. Before any changes, ensure you have a recent backup of your WordPress site, including the database. This step is non-negotiable and serves as your safety net.
  2. Log in to your database using phpMyAdmin, which is typically available through your web hosting control panel.
  3. In phpMyAdmin, select your WordPress database from the list on the left. 
Accessing the WordPress database via phpMyAdmin.
  4. You’ll see a list of all the tables in your database. Check the tables you want to optimise (or select all).
  5. From the drop-down menu, select Optimize table. This will defragment the selected tables and can improve performance.
Defragmenting (optimising) the WordPress database table via phpMyAdmin.
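This drop-down action corresponds to the OPTIMIZE TABLE statement, which you can also run as a query from the SQL tab introduced in the next step. A minimal sketch, assuming the default wp_ prefix and a few core tables:

-- Equivalent to the Optimize table drop-down action.
-- On InnoDB tables, MySQL rebuilds the table and prints a note that the
-- engine doesn't support optimize; the rebuild still happens.
OPTIMIZE TABLE wp_posts, wp_postmeta, wp_comments, wp_options;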
  6. Navigate to the SQL tab.
Opening the SQL window in phpMyAdmin to run SQL queries.
  7. WordPress saves every change you make to your posts as revisions. This can lead to a bloated database. To delete these post revisions, run this command:
DELETE FROM wp_posts WHERE post_type = 'revision';

Make sure to change the wp_ table prefix to the prefix you or your hosting provider set up during installation.

  8. To delete spam comments, run the following SQL command:
DELETE FROM wp_comments WHERE comment_approved = 'spam';
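Spam comments can also leave rows behind in the wp_commentmeta table. If you want to clear that metadata too, a hedged sketch is to run a query like the following before the DELETE above, while the spam comments still exist to join against:

-- Removes metadata rows attached to comments still marked as spam.
DELETE cm FROM wp_commentmeta cm
INNER JOIN wp_comments c ON c.comment_ID = cm.comment_id
WHERE c.comment_approved = 'spam';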
  9. In WordPress, ‘trash’ is a post status used for content that has been moved to the trash but not yet permanently deleted. To empty the trash and permanently delete what’s in it, run this query:
DELETE FROM wp_posts WHERE post_status = 'trash';
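Deleting revisions and trashed posts can leave orphaned rows in wp_postmeta. As a follow-up, a minimal sketch that removes metadata whose parent post no longer exists:

-- Removes postmeta rows that point at posts which have been deleted.
DELETE pm FROM wp_postmeta pm
LEFT JOIN wp_posts p ON p.ID = pm.post_id
WHERE p.ID IS NULL;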
  10. Transients are used to speed up WordPress by caching data that is expensive to compute or retrieve. They are typically temporary and can be safely removed:
DELETE FROM wp_options WHERE option_name LIKE '%\_transient\_%';
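Note that the query above removes all transients, including ones still in use – WordPress will simply regenerate them as needed. If you’d rather remove only expired transients, a more conservative sketch (assuming the default wp_ prefix; site-wide transients use the _site_transient_ prefix and would need the same treatment) pairs each transient with its timeout row:

-- Deletes only transients whose timeout has already passed.
DELETE a, b FROM wp_options a
INNER JOIN wp_options b
  ON b.option_name = CONCAT('_transient_timeout_', SUBSTRING(a.option_name, 12))
WHERE a.option_name LIKE '\_transient\_%'
  AND a.option_name NOT LIKE '\_transient\_timeout\_%'
  AND b.option_value < UNIX_TIMESTAMP();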
  11. Unused tags can be removed with a query like:
DELETE wt, wtt FROM wp_terms wt INNER JOIN wp_term_taxonomy wtt ON wt.term_id = wtt.term_id WHERE wtt.taxonomy = 'post_tag' AND wtt.count = 0;

This command deletes all tags (from both wp_terms and wp_term_taxonomy tables) that are not assigned to any posts on your WordPress website.

  12. After cleaning up, it’s a good idea to check the database for any errors. Select your database and use the Check table or Repair table options if needed.
Checking and repairing database tables via phpMyadmin.
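If you prefer SQL, these menu options map to the CHECK TABLE and REPAIR TABLE statements. A minimal sketch – note that REPAIR TABLE only applies to MyISAM (and a few other) storage engines; InnoDB tables can be checked but are repaired by other means:

-- Check core tables for errors.
CHECK TABLE wp_posts, wp_comments;
-- Repair only if CHECK TABLE reported a problem (MyISAM tables only).
REPAIR TABLE wp_comments;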
  13. You might have tables from old plugins that are no longer used – many plugins leave tables behind even when the plugin itself has been removed from the site. Review these tables and delete them if they’re not needed. Be very careful with this step, as deleting the wrong table can cause issues with your site. If you’re in doubt, you can always reach out to the plugin developers to ensure you’re deleting the right items.

Plugin-assisted optimisation

For WordPress users who prefer a more straightforward, less technical approach to database optimisation, plugin-assisted methods are a game-changer. These tools offer:

  • Ease of use: Plugins provide a simple interface for tasks that would otherwise require technical expertise. They’re designed to be intuitive and accessible, even for those with minimal technical background.
  • Automation: Many WordPress plugins can operate in the background, performing routine cleanups and optimisations without your constant oversight. This automation saves time and ensures regular maintenance is carried out.
  • Less technical involvement: By automating database optimisation, you free up valuable time and resources to focus on other areas of your business, like content creation, marketing, and customer engagement.

Among the most notable plugins for database optimisation are:

  • WP-Optimize: This popular plugin cleans your database, compresses images, and caches your site, making it a comprehensive tool for site optimisation.
WP-Optimize download page
  • WP-DBManager: Known for its database backup, repair, and optimisation features, WP-DBManager is a solid choice for those looking to maintain their database’s health.

WP-DBManager download page

  • Advanced Database Cleaner: If you’re looking for a plugin that goes beyond basic cleanup, this tool helps you get rid of orphaned items and old revisions with ease.
Advanced Database Cleaner download page.

If you want even more plugin options, check out our post on the best backup plugins for WordPress websites.

Setting up and using database optimisation plugins

For the sake of this tutorial, we’re going to be using the WP-Optimize plugin:

  1. From your WordPress admin dashboard, go to Plugins > Add New Plugin.
Adding a new plugin in WordPress.
  2. Search for the WP-Optimize plugin and click Install Now, then Activate.
Installing the WP-Optimize plugin.
  3. A new icon for WP-Optimize will appear in the left-hand menu of your WordPress dashboard. Click on it and go through the plugin settings to configure the optimisation tasks you want to automate, like spam comment cleanup, post revision removal, and database table optimisation.
Configuring the WP-Optimize plugin settings.
  4. Many plugins offer the option to schedule regular cleanups. Setting this up ensures your database remains optimised without manual intervention.
Setting up automatic database cleanup using WP-Optimize.

Choosing the right plugin

When selecting a plugin, especially those designed to modify or remove data, it’s necessary to pick one that is well-reviewed and regularly updated. Check the plugin’s ratings, user reviews, and update history to ensure reliability and compatibility with your version of WordPress.

By choosing the right plugin, you can significantly reduce the effort required in database maintenance, ensuring your WordPress site remains optimised, fast, and secure with minimal hassle.

Using Shield Security Pro for malware removal

When it comes to safeguarding your WordPress site against malware, Shield Security PRO stands out as a robust solution. Its advanced malware scanning and removal tool is specifically designed to protect your website by detecting and eliminating malicious code.

Features of Shield Security PRO’s malware scanner

  • Comprehensive scanning: The malware scanner in Shield Security PRO thoroughly examines your files, looking for patterns that indicate malware infiltration. This proactive approach ensures that even the most cunningly hidden malware is identified.
  • Detailed reporting: When it detects malware, the plugin creates a detailed report alerting you to the affected files. This feature allows you to download and inspect these files closely, giving you a clear understanding of the nature and extent of the malware.
  • Automatic repair option: For those seeking a hands-off approach, Shield Security PRO offers an automatic repair feature. This functionality enables the plugin to edit and remove suspicious code autonomously, saving you the time and effort of manual intervention.
  • Customisable repair settings: You have the flexibility to set automatic repairs for core files, plugin files, theme files, or all three, depending on your preference and website structure.

While automatic repair is a convenient feature, it’s essential to use it wisely, especially if you regularly modify your WordPress files. In such cases, automatic repairs might unintentionally alter your customisations. Therefore, if you often tweak your WordPress code, manual inspection and repair might be more suitable.

Regardless of whether you choose automatic repairs or prefer to handle file fixes manually, the key advantage of Shield Security PRO’s malware scanning lies in its prompt detection. Fast identification of malicious data is crucial in preventing it from causing significant damage to your site.

Enhance your WordPress site’s performance & security with Shield Security PRO

Regular WordPress database cleanups and optimisations are necessary for maintaining a high-performing, secure website. 

While manual database maintenance is certainly an option, it can be time-consuming and requires a certain level of technical expertise. This is where plugins come into play, offering a simpler, more efficient solution. By automating key aspects of the maintenance process, these tools significantly reduce the workload on website owners.

Shield Security PRO is designed to address both the performance and security needs of your WordPress site. It features advanced vulnerability and malware scanning capabilities, which play a vital role in identifying and removing unused data and potentially dangerous elements from your site.

Don’t let the health and security of your WordPress site take a backseat. With Shield Security PRO, you have a powerful tool at your disposal to keep your site running smoothly and securely. 

Download Shield Security PRO today, and take the first step towards a faster, safer, and more efficient WordPress experience!

Source :
https://getshieldsecurity.com/blog/clean-wordpress-database/

SonicWall: How can I set up CFS policies with LDAP and SSO to restrict Internet access on CFS?

02/20/2024

Description

This article explains how to integrate the Content Filtering Service (CFS) with LDAP (with Single Sign-On) using SonicOS 7.0.1 or older.

A restricted user group in Active Directory is imported to the SonicWall, and users in that group are given restricted web access, while the Full Access user group has full or partial access to websites.

Resolution

  1. Enable Content Filtering Service from Policy | Security Services | Content Filter.
  2. Navigate to Profile Objects | Content Filter and access the Profile Objects tab. Create a new Content Filter Profile and set Allow/Block for each category according to your needs.
  3. Make sure to enable HTTPS Content Filtering. This option is disabled by default.
  4. Create another Content Filter Profile as a Restricted Access CFS policy for the Restricted user group. Click Add, and add a policy for the Restricted group with most of the categories enabled (depending on what should be blocked).
  5. Create a Full Access CFS policy for the Full Access user group. Add a second policy for the Full Access group with certain categories (or all categories) enabled, depending on what should be allowed.



 Configuring LDAP on SonicWall

For more information about how to enable LDAP on SonicWall, please refer to the link below:

https://www.sonicwall.com/support/knowledge-base/how-to-integrate-ldap-active-directory-user-authentication/170707170351983/
  1. Navigate to the Users | Settings page. In the Authentication method for login drop-down list, select LDAP + Local Users and click Configure.
  2. On the Settings tab of the LDAP Configuration window, configure the following fields. 

    • Name or IP address: the IP address of the LDAP server
    • Port Number: 389 (default LDAP port)
    • Server timeout (seconds): 10 (default)
    • Overall operation timeout (minutes): 5 (default)
    • Select Give login name/location in tree
  3. On the Login/Bind tab, select Give login name/location in tree. Set the admin user and password used to access your LDAP server.
  4. On the Schema tab, set LDAP Schema to Microsoft Active Directory.
  5. On the Directory tab, configure the following fields.
    • Primary domain: The user domain used by your LDAP implementation.
    • User tree for login to server: The location in the tree of the user specified on the Settings tab.
    • Click Auto-configure. (This will populate the Trees containing users and Trees containing user groups fields by scanning through the directories in search of all trees that contain user objects.)

  6. On the LDAP Test tab, test LDAP connectivity to make sure that the communication is successful.


Importing Groups from LDAP to the SonicWall unit

  1. Navigate to Users | Local Groups.
  2. Click Import from LDAP.

  3. Click Configure for the group that was imported from LDAP.
  4. Go to the CFS Policy tab, select the appropriate CFS policy from the drop-down, and click OK.

Configuring Single Sign-On Method on SonicWall 

For more information about how to install the SSO Agent and enable SSO on SonicWall, please refer to the link below:

https://www.sonicwall.com/support/knowledge-base/how-can-i-install-single-sign-on-sso-software-and-configure-the-sso-feature/170505740046553/
  1. Navigate to Users | Settings.
  2. For the Single-sign-on method, select SonicWall SSO Agent.
  3. Click the Configure button. The SSO configuration page is displayed.
  4. Under the Settings tab, click the Add button to add the IP address of the workstation that has the SSO agent running.
    • Click the Add button; the settings window is displayed.
    • In the Host Name or IP Address field, enter the name or IP address of the workstation on which the SonicWall SSO Agent is installed.
    • In Port Number, enter the port number of the workstation on which the SonicWall SSO Agent is installed. The default port is 2258.
    • In the Shared Key field, enter the shared key that you created or generated in the SonicWall SSO Agent.
      The shared key must match exactly. Re-enter the shared key in the Confirm Shared Key field.
      Click Apply.
  5. Once the SSO Agent is successfully added, a green status light is shown under the Authentication Agent Settings.
  6. Click the Test tab. The Test Authentication Agent Settings page displays.
  7. Select the Check agent connectivity radio button then click the Test button. This will test communication with the authentication agent. If the SonicWall security appliance can connect to the agent, you will see the message Agent is ready.

  8. Select the Check user radio button, enter the IP address of a workstation in the Workstation IP address field, then click Test. This will test whether the agent is properly configured to identify the user logged into a workstation.

     NOTE: Performing tests on this page applies any changes that have been made.
     TIP: If you receive the messages Agent is not responding or Configuration error, check your settings and perform these tests again.
  9. When you are finished, click OK.


Enabling CFS for the LAN Zone and applying Imported LDAP Group

 CAUTION: It is not recommended to make this change in a production environment, because these changes are instant and can affect all the computers on the LAN. It is best to schedule downtime before proceeding further.

  1. Navigate to Network | Zones and click the Configure button for the LAN zone.
  2. Check the Enforce Content Filtering Service box and select the default CFS policy from the drop-down.

How to TEST

  • Log out from the Windows domain computer, log back in with a user from either the full access or restricted access groups, and check whether the policy is enforced correctly for that user.


Source :
https://www.sonicwall.com/support/knowledge-base/how-can-i-setup-cfs-policies-with-ldap-and-sso-to-restrict-internet-access-on-cfs/170505721991619/

Windows 11 KB5034765 won’t install, taskbar issues, and explorer.exe crashes

By Mayank Parmar -February 19, 2024

You’re not alone if you have issues with Windows 11 KB5034765. The February 2024 security update for Windows 11 causes File Explorer to crash when rebooting the system, and some have found it’s causing the taskbar to disappear. Additionally, many users are having problems installing the Windows 11 February 2024 update.

Microsoft sources have confirmed to Windows Latest that the company is aware of an issue that causes the taskbar to crash or disappear briefly after installing KB5034765. I’m told the company has already rolled out a fix. This means some of you should be able to see the taskbar again after reinstalling the patch (remove and install it again).

But that’s not all. The February 2024 update has other problems, too. In our tests, we observed that the Windows 11 KB5034765 update repeatedly fails to install with errors 0x800f0922, 0x800f0982, and 0x80070002.

Multiple users told me that when they tried to install the security patch, everything seemed fine at first. The update downloads and asks for a restart, but during the installation, Windows Update stops and reports a problem. It tries a few more times and then goes back to the desktop without updating.

KB5034765 is not installing, but there’s a fix

Windows 11 January 2024 Update fails with 0x80070002 | Image Courtesy: WindowsLatest.com

Our device also attempted the “rollback” after successfully downloading the February 2024 cumulative update, but the process was stuck on the following screen for ten minutes:

  • Something didn’t go as planned. No need to worry—undoing changes. Please keep your computer on.

I tried a few things to fix it. For example, I removed programs that didn’t come with Windows, cleared the Windows Update cache, and used the Windows Update troubleshooter. None of these solutions worked.

However, there’s some good news. It looks like we can successfully install KB5034765 by deleting a hidden folder named $WinREAgent. There are multiple ways to locate and delete this folder from a Windows 11 installation, and you can choose your preferred one:

  • Method 1: Run Disk Cleanup as an administrator, select the system drive, and check the boxes for “Temporary files” and other relevant options. Finally, click “OK” to remove the system files, including Windows Update files. This will delete unnecessary files within $WinREAgent.
  • Method 2: Open File Explorer and open the system drive, but make sure you’ve turned on view hidden items from folder settings. Locate $WinREAgent and remove it from the system.
  • Method 3: Open Command Prompt as Administrator, and run the following command: rmdir /S /Q C:\$WinREAgent

Windows Update causes File Explorer to crash on reboot

Some PC owners are also running into another problem that causes the File Explorer to crash when rebooting or shutting down the system.

This issue was previously observed in Windows 11’s January 2024 optional update, and it seems to have slipped into the mandatory security patch.

The error message indicates an application error with explorer.exe, mentioning a specific memory address and stating, “The memory could not be written”.

“The instruction at 0x00007FFB20563ACa referenced memory at 0x0000000000000024. The memory could not be written. Click on OK to terminate the program,” the error message titled “explorer.exe – Application Error” reads.

explorer.exe crashes with a referenced memory error when rebooting

This issue seems to persist regardless of various troubleshooting efforts. Users have tried numerous fixes, including running the System File Checker tool (sfc /scannow), testing their RAM with Windows’ built-in tool and memtest86+, and even performing a clean installation of the latest Windows 11 version.

Despite these efforts, the error remains.

Interestingly, a common factor among affected users is the presence of a controller accessory, such as an Xbox 360 controller for Windows, connected to the PC. This connection has been observed, but it’s unclear if it directly contributes to the problem.

Microsoft’s release notes for the KB5034765 update mentioned a fix for an issue where explorer.exe could stop responding when a PC with a controller accessory attached is restarted or shut down.

However, despite this so-called official fix, users report that the problem still occurs, and it’s not possible to manually fix it.

Windows 11 taskbar crashes or disappears after the patch

As mentioned at the outset, the Windows 11 KB5034765 update causes the taskbar to disappear or crash when you reboot or turn on the device.

Taskbar is missing/disappeared in Windows 11 virtual machine after new update | Image Courtesy: WindowsLatest.com

According to my sources, Microsoft has already patched the issue via server-side update, but if your taskbar or quick settings like Wi-Fi still disappear, try the following steps:

  1. Open Settings, go to the Windows Update section and click Update History. On the Windows Update history page, click Uninstall updates, locate KB5034765 and click Uninstall.
  2. Confirm your decision, click Uninstall again, and reboot the system.
  3. Go to Settings > Windows Update and check for updates to reinstall the security patch.

In most cases, the above steps shouldn’t be necessary, as the server-side update will apply to your device automatically.

About The Author

Mayank Parmar

Mayank Parmar is Windows Latest’s owner, Editor-in-Chief and entrepreneur. Mayank has been in tech journalism for over seven years and has written on various topics, but he is mostly known for his well-researched work on Microsoft’s Windows. His articles and research work have been referred to by CNN, Business Insider, Forbes, Fortune, CBS Interactive, Microsoft and many others over the years.

Source :
https://www.windowslatest.com/2024/02/19/windows-11-kb5034765-wont-install-and-causes-other-issues-but-theres-a-fix/

Lineage OS Changelog 28 – Fantastic Fourteen, Amazing Applications, Undeniable User-Experience

WRITTEN ON FEBRUARY 14, 2024 BY NOLEN JOHNSON (NPJOHNSON)

21 – Finally old enough to drink (at least in the US)!

Hey y’all! Welcome back!

We’re a bit ahead of schedule this year, we know normally you don’t expect to hear from us until April-ish.

This was largely thanks to some new faces around the scene, some old faces stepping up to the plate, and several newly appointed Project Directors!

With all that said, we have been working extremely hard since Android 14’s release last October to port our features to this new version of Android. Thanks to our hard work adapting to Google’s largely UI-based changes in Android 12/13, and Android 14’s dead-simple device bring-up requirements, we were able to rebase our changes onto Android 14 much more efficiently.

This lets us spend some much overdue time on our apps suite! Applications such as Aperture had their features and UX improved significantly, while many of our aging apps such as Jelly, Dialer, Contacts, Messaging, LatinIME (Keyboard), and Calculator got near full redesigns that bring them into the Material You era!

…and last but not least, yet another new app landed in our apps suite! Don’t get used to it though, or maybe do, we’re not sure yet.

Now, let’s remind everyone about versioning conventions – To match AOSP’s versioning conventions, and due to the fact it added no notable value to the end-user, we dropped our subversion from a branding perspective.

As Android has moved onto the quarterly maintenance release model, this release will be “LineageOS 21”, not 21.0 or 21.1 – though worry not – we are based on the latest and greatest Android 14 version, QPR1.

Additionally, to you developers out there – any repository that is not core-platform, or isn’t expected to change in quarterly maintenance releases will use branches without subversions – e.g., lineage-21 instead of lineage-21.0.

New Features!

  • Security patches from January 2023 to February 2024 have been merged to LineageOS 18.1 through 21.
  • Glimpse of Us: We now have a shining new app, Glimpse! It will become the default gallery app starting from LineageOS 21
  • An extensive list of applications were heavily improved or redesigned:
    • Aperture: A touch of Material You, new video features, and more!
    • Calculator: Complete Material You redesign
    • Contacts: Design adjustments for Material You
    • Dialer: Large cleanups and code updates, Material You and bugfixes
    • Eleven: Some Material You design updates
    • Jelly: Refreshed interface, Material You and per-website location permissions
    • LatinIME: Material You enhancements, spacebar trackpad, fixed number row
    • Messaging: Design adjustments for Material You
  • A brand new boot animation by our awesome designer Vazguard!
  • SeedVault and Etar have both been updated to their newest respective upstream version.
  • WebView has been updated to Chromium 120.0.6099.144.
  • We have further developed our side pop-out expanding volume panel.
  • Our Updater app should now install A/B updates much faster (thank Google!)
  • We have contributed even more changes and improvements back upstream to the FOSS Etar calendar app we integrated some time back!
  • We have contributed even more changes and improvements back upstream to the Seedvault backup app.
  • Android TV builds still ship with an ad-free Android TV launcher, unlike Google’s ad-enabled launcher – most Android TV Google Apps packages now have options to use the Google ad-enabled launcher or our ad-restricted version.
  • Our merge scripts have been largely overhauled, greatly simplifying the Android Security Bulletin merge process, as well as making supporting devices like Pixel devices that have full source releases much more streamlined.
  • Our extract utilities can now extract from OTA images and factory images directly, further simplifying monthly security updates for maintainers on devices that receive security patches regularly.
  • LLVM has been fully embraced, with builds now defaulting to using LLVM bin-utils and optionally, the LLVM integrated assembler. For those of you with older kernels, worry not, you can always opt-out.
  • A global Quick Settings light mode has been developed so that this UI element matches the device’s theme.
  • Our Setup Wizard has seen adaptation for Android 14, with improved styling, more seamless transitions, and significant amounts of legacy code being stripped out.
  • The developer-kit (e.g. Radxa 0, Banana Pi B5, ODROID C4, Jetson X1) experience has been heavily improved, with UI elements and settings that aren’t related to their more restricted hardware feature-set being hidden or tailored!

Amazing Applications!

Calculator

calculator

Our Calculator app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:

  • Code cleanup
  • Reworked UI components to look more modern
  • Added support for Material You
  • Fixed some bugs

Glimpse

glimpse

We’ve been working on a new gallery app, called Glimpse, which will replace Gallery2, the AOSP default gallery app.

Thanks to developers SebaUbuntu, luca020400 and LuK1337 who started the development, together with the help of designer Vazguard.

We focused on a clean, simple and modern-looking UI, designed around Material You’s guidelines, making sure all the features that you would expect from a gallery app are there.

It’ll be available on all devices starting from LineageOS 21.

Aperture

This has been the first year for this new application and we feel it has been received well by the community. As promised, we have continued to improve it and add new features, while keeping up with Google’s changes to the CameraX library (even helping them fix some bugs found on some of our maintained devices). We’d like to also thank the community for their work on translations, especially since Aperture strings changed quite often this year.

Here’s a quick list of some of the new features and improvements since the last update:

  • Added a better dialog UI to ask the user for location permissions when needed
  • UI will now rotate to follow the device orientation
  • Added Material You support
  • Improved QR code scanner, now with support for Wi-Fi and Wi-Fi Easy Connect™ QR codes
  • Added support for Google Assistant voice actions
  • Added photo and video mirroring (flipping) options
  • Audio can be muted while recording a video
  • Better error handling, including when no camera is available
  • Added configurable volume button gestures
  • The app will now warn you if the device overheats and is now able to automatically stop recording if the device temperature is too high
  • Added an information chip on top of the viewfinder to show some useful information, like low battery or disabled microphone
  • Added some advanced video processing settings (noise reduction, sharpening, etc.)
  • You can now set the flash to torch mode in photo mode by long-pressing the flash button
  • Added support for HDR video recording

Jelly

jelly

Our browser app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:

  • Code cleanup
  • Reworked UI components to look more modern
  • Added support for Material You
  • Fixed some bugs regarding downloading files
  • Added Brave as a search engine and suggestions provider
  • Dropped Google encrypted search engine, as Google defaults to HSTS now
  • Baidu suggestion provider now uses HTTPS
  • Implemented per-website location permissions

Dialer, Messaging, and Contacts

Dialer

Since AOSP deprecated the Dialer, we have taken over the code base and done heavy cleanups, updating it to newer standards (AndroidX) and redesigning it:

  • Code cleanup
  • Changed to using Material You design
  • Proper dark and light themes
  • Several bugfixes, specifically with number lookups and the contact list

While Messaging was also deprecated by AOSP, at least the Contacts app was not. Nonetheless, we gave both of them an overhaul, making them follow the system colors and look more integrated.

Careful Commonization

Several of our developers have worked hard on SoC-specific common kernels to base on that can be merged on a somewhat regular basis to pull in the latest features/security patches to save maintainers additional effort.

Go check them out and consider basing your device kernels on them!

Supported SoCs right now are:

SoC (system-on-chip) | Kernel Version | Android Version
Qualcomm MSM8996 | 3.18 | 11
Qualcomm MSM8998/MSM8996 | 4.4 | 13
Qualcomm SDM845 | 4.9 | 13
Qualcomm SM8150 | 4.14 | 13
Qualcomm SDM660 | 4.19 | 13
Qualcomm SM8250 | 4.19 | 13
Qualcomm SM8350 | 5.4 | 13
Qualcomm SM8450 | 5.10 | 13 – Coming soon!
Qualcomm SM8550 | 5.15 | 13

Additionally, many legacy devices require interpolating libraries that we colloquially refer to as “shims” – these have long been device and maintainer managed, but this cycle we have decided to commonize them to make the effort easier on everyone and not duplicate effort!

You can check it out here and contribute shims that you think other devices may need or add additional components to additional shims and compatibility layers provided via Gerrit!

Deprecations

Overall, we feel that the 21 branch has reached feature and stability parity with 20 and is ready for initial release.

For the first time in many cycles, all devices that shipped LineageOS 19.1 were either promoted or dropped by the maintainer by the time of this blog post, so LineageOS 19.1 was retired naturally. As such, no new device submissions targeting the 19.1 branch will be able to ship builds (you can still apply and fork your work to the organization, though!).

LineageOS 18.1 builds were still not deprecated this year, as Google’s somewhat harsh requirement of BPF support in all Android 12+ devices’ kernels meant that a significant number of our legacy devices on the build roster would have died.

LineageOS 18.1 is still on a feature freeze, with each device building monthly, shortly after the Android Security Bulletin is merged for that month.

We will allow new LineageOS 18.1 submissions to be forked to the organization, but we no longer will allow newly submitted LineageOS 18.1 devices to ship.

LineageOS 21 will launch building for a decent selection of devices, with additional devices to come as they are marked as both Charter compliant and ready for builds by their maintainer.

Upgrading to LineageOS 21

To upgrade, please follow the upgrade guide for your device by clicking on it here and then on “Upgrade to a higher version of LineageOS”.

If you’re coming from an unofficial build, you need to follow the good ole’ install guide for your device, just like anyone else looking to install LineageOS for the first time. These can be found at the same place here by clicking on your device and then on “Installation”.

Please note that if you’re currently on an official build, you DO NOT need to wipe your device, unless your device’s wiki page specifically dictates otherwise, as is needed for some devices with massive changes, such as a repartition.

Download portal

While it has been in the making for quite a while and already released a year ago, it’s still news in regards to this blog post. Our download portal has been redesigned and also gained a few functional improvements:

  • Dark mode
  • Downloads of additional images (shown for all devices but not used on all of them, read the instructions to know which ones you need for your device’s installation!)
  • Verifying downloaded files (see here) – if you go with any download not obtained from us, you can still verify it was originally signed by us and thus untampered with

Wiki

The LineageOS Wiki has also been expanded throughout the year and now offers, in addition to the known and tested instructions for all supported devices, some improvements:

  • The device overview allows filtering for various attributes you might be interested in a device (please note: choosing a device only based on that list still does not guarantee any device support beyond the point of when you chose it)
  • The device overview now lists variants of a device and other known marketing names in a more visible way, also allowing for different device information and instructions per variant to be shown
  • The installation instructions have been paginated, giving users less chance to skip a section involuntarily

In addition to that, we’d like to take this time to remind users to follow the instructions on their device’s respective wiki page. Given the complexity introduced by AOSP changes like System-As-Root, A/B Partition Scheme, Dynamic Partitions, and most recently Virtual A/B found on the Pixel 5 and other devices launching with Android 11, the instructions many of you are used to following from memory are either no longer valid or are missing very critical steps. As of 16.0, maintainers have been expected to run through the full instructions and verify they work on their devices. The LineageOS Wiki was recently further extended, and maintainers were given significantly more options to customize their device’s specific installation, update, and upgrade instructions.

Developers, Developers, Developers

Or, in this case, maintainers, maintainers, maintainers. We want your device submissions!

If you’re a developer and would like to submit your device for officials, it’s easier than ever. Just follow the instructions here.

The above also applies to people looking to bring back devices that were at one point official but are no longer supported – seriously – even if it’s not yet completely compliant, submit it! Maybe we can help you complete it.

After you submit, within generally a few weeks, but in most cases a week, you’ll receive some feedback on your device submission; and if it’s up to par, you’ll be invited to our communications instances and your device will be forked to LineageOS’s official repositories.

Don’t have the knowledge to maintain a device, but want to contribute to the platform? We have lots of other things you can contribute to. For instance, our apps suite is always looking for new people to help improve them, or you can contribute to the wiki by adding more useful information & documentation. Gerrit is always open for submissions! Once you’ve contributed a few things, send an email to devrel(at)lineageos.org detailing them, and we’ll get you in the loop.

Also, if you sent a submission via Gmail over the last few months, due to infrastructural issues, some of them didn’t make it to us, so please resend them!

Generic Targets

We’ve talked about these before, but these are important, so we will cover them again.

Though we’ve had buildable generic targets since 2019, to make LineageOS more accessible to developers, and really anyone interested in giving LineageOS a try, we’ve documented how to use them in conjunction with the Android Emulator/Android Studio!

Additionally, similar targets can now be used to build GSI in mobile, Android TV configurations, and Android Automotive (we’ll talk more about this later) making LineageOS more accessible than ever to devices using Google’s Project Treble. We won’t be providing official builds for these targets, due to the fact the user experience varies entirely based on how well the device manufacturer complied with Treble’s requirements, but feel free to go build them yourself and give it a shot!

Please note that Android 12 (and by proxy Android 13/14) diverged GSI and Emulator targets. Emulator targets reside in lineage_sdk_$arch, while GSI targets reside in lineage_gsi_$arch.

Translations

Bilingual? Trilingual? Anything-lingual?

If you think you can help translate LineageOS to a different language, jump over to our wiki and have a go! If your language is not supported natively in Android, reach out to us on Crowdin and we’ll take the necessary steps to include your language. For instance, LineageOS is the first Android custom distribution that has complete support for the Welsh (Cymraeg) language thanks to its community of translators.

Please, contribute to translations only if you are reasonably literate in the target language; poor translations waste both our time and yours.

Build roster

Added 21 devices

Device name | Wiki | Maintainers | Moved from
ASUS Zenfone 5Z (ZS620KL) | Z01R | rohanpurohit, Jackeagle, ThEMarD | 20
Banana Pi M5 (Tablet) | m5_tab | npjohnson, stricted | 20
Essential PH-1 | mata | haggertk, intervigil, npjohnson, rashed | 20
F(x)tec Pro¹ X | pro1x | BadDaemon, bgcngm, mccreary, npjohnson, qsnc, tdm | 20
F(x)tec Pro¹ | pro1 | BadDaemon, bgcngm, intervigil, mccreary, npjohnson, tdm | 20
Fairphone 4 | FP4 | mikeioannina | 20
Google Pixel 2 XL | taimen | chrmhoffmann, Eamo5, npjohnson, jro1979 | 20
Google Pixel 2 | walleye | chrmhoffmann, Eamo5, npjohnson, jro1979 | 20
Google Pixel 3 XL | crosshatch | razorloves, cdesai, intervigil, mikeioannina | 20
Google Pixel 3 | blueline | razorloves, cdesai, intervigil, mikeioannina | 20
Google Pixel 3a XL | bonito | cdesai, mikeioannina, npjohnson | 20
Google Pixel 3a | sargo | cdesai, mikeioannina, npjohnson | 20
Google Pixel 4 XL | coral | cdesai, Eamo5, mikeioannina, npjohnson | 20
Google Pixel 4 | flame | cdesai, Eamo5, mikeioannina, npjohnson | 20
Google Pixel 4a 5G | bramble | aleasto, mikeioannina | 20
Google Pixel 4a | sunfish | PeterCxy, cdesai, mikeioannina | 20
Google Pixel 5 | redfin | aleasto, mikeioannina | 20
Google Pixel 5a | barbet | aleasto, mikeioannina | 20
Google Pixel 6 Pro | raven | mikeioannina | 20
Google Pixel 6 | oriole | mikeioannina | 20
Google Pixel 6a | bluejay | mikeioannina | 20
Google Pixel 7 Pro | cheetah | mikeioannina, npjohnson | 20
Google Pixel 7 | panther | mikeioannina, neelc | 20
Google Pixel 7a | lynx | mikeioannina, niclimcy | 20
Google Pixel 8 Pro | husky | mikeioannina |
Google Pixel 8 | shiba | mikeioannina |
Google Pixel Fold | felix | mikeioannina |
Google Pixel Tablet | tangorpro | LuK1337, mikeioannina, npjohnson, neelc | 20
Google Pixel XL | marlin | npjohnson, electimon | 20
Google Pixel | sailfish | npjohnson, electimon | 20
HardKernel ODROID-C4 (Tablet) | odroidc4_tab | npjohnson, stricted | 20
LG G5 (International) | h850 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG G5 (T-Mobile) | h830 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG G5 (US Unlocked) | rs988 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG G6 (EU Unlocked) | h870 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG G6 (T-Mobile) | h872 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG G6 (US Unlocked) | us997 | aleasto, AShiningRay, npjohnson, ROMSG, x86cpu | 20
LG V20 (AT&T) | h910 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (GSM Unlocked – DirtySanta) | us996d | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (GSM Unlocked) | us996 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (Global) | h990 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (Sprint) | ls997 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (T-Mobile) | h918 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V20 (Verizon) | vs995 | aleasto, AShiningRay, npjohnson, ROMSG, xxseva44, x86cpu | 20
LG V30 (Unlocked) / LG V30 (T-Mobile) | joan | lifehackerhansol, SGCMarkus | 20
Motorola edge 20 pro | pstar | npjohnson, SGCMarkus | 20
Motorola edge 20 | berlin | npjohnson, SGCMarkus | 20
Motorola edge 2021 | berlna | SyberHexen | 20
Motorola edge 30 | dubai | themard, sb6596, Demon000 | 20
Motorola edge s / Motorola moto g100 | nio | dianlujitao | 20
Motorola moto g200 5G / Motorola Edge S30 | xpeng | themard, rogers2602 | 20
Motorola moto g32 | devon | Dhina17, mikeioannina | 20
Motorola moto g42 | hawao | Dhina17, mikeioannina | 20
Motorola moto g52 | rhode | Dhina17, mikeioannina | 20
Motorola moto g6 plus | evert | jro1979 | 20
Motorola moto g7 play | channel | SyberHexen, deadman96385, erfanoabdi, npjohnson | 20
Motorola moto g7 plus | lake | jro1979, npjohnson | 20
Motorola moto g7 power | ocean | SyberHexen, erfanoabdi, npjohnson | 20
Motorola moto g7 | river | erfanoabdi, npjohnson, SyberHexen | 20
Motorola moto x4 | payton | erfanoabdi, ThEMarD, electimon | 20
Motorola moto z2 force / Motorola moto z (2018) | nash | erfanoabdi, npjohnson, qsnc | 20
Motorola moto z3 play | beckham | jro1979 | 20
Motorola moto z3 | messi | npjohnson | 20
Motorola one action | troika | Stricted, npjohnson | 20
Motorola one vision / Motorola p50 | kane | Stricted, npjohnson | 20
Nokia 6.1 (2018) | PL2 | npjohnson, theimpulson | 20
Nokia 6.1 Plus | DRG | npjohnson, theimpulson | 20
Nubia Mini 5G | TP1803 | ArianK16a, npjohnson | 20
OnePlus 11 5G | salami | bgcngm |
OnePlus 5 | cheeseburger | trautamaki | 20
OnePlus 5T | dumpling | trautamaki, qsnc | 20
OnePlus 6 | enchilada | LuK1337 | 20
OnePlus 6T | fajita | EdwinMoq | 20
OnePlus 7 Pro | guacamole | LuK1337, Tortel | 20
OnePlus 7 | guacamoleb | shantanu-sarkar | 20
OnePlus 7T Pro | hotdog | qsnc | 20
OnePlus 7T | hotdogb | LuK1337 | 20
OnePlus 8 Pro | instantnoodlep | LuK1337 | 20
OnePlus 8 | instantnoodle | jabashque | 20
OnePlus 8T | kebab | LuK1337 | 20
OnePlus 9 Pro | lemonadep | LuK1337, bgcngm, mikeioannina | 20
OnePlus 9 | lemonade | mikeioannina, tangalbert919, ZVNexus | 20
OnePlus 9R | lemonades | mikeioannina | 20
OnePlus 9RT | martini | mikeioannina | 20
OnePlus Nord | avicii | MajorP93, KakatkarAkshay | 20
Radxa Zero (Tablet) | radxa0_tab | bgcngm, npjohnson, stricted | 20
Razer Phone 2 | aura | mikeioannina, npjohnson | 20
Razer Phone | cheryl | mikeioannina, npjohnson | 20
Samsung Galaxy Tab A7 10.4 2020 (LTE) | gta4l | chrmhoffmann | 20
Samsung Galaxy Tab A7 10.4 2020 (Wi-Fi) | gta4lwifi | chrmhoffmann | 20
Samsung Galaxy Tab S5e (LTE) | gts4lv | bgcngm, LuK1337 | 20
Samsung Galaxy Tab S5e (Wi-Fi) | gts4lvwifi | LuK1337, bgcngm | 20
Sony Xperia 1 II | pdx203 | hellobbn | 20
Sony Xperia 1 III | pdx215 | hellobbn | 20
Sony Xperia 10 Plus | mermaid | LuK1337 | 20
Sony Xperia 10 | kirin | LuK1337 | 20
Sony Xperia 5 II | pdx206 | kyasu, hellobbn | 20
Sony Xperia 5 III | pdx214 | kyasu, hellobbn | 20
Sony Xperia XA2 Plus | voyager | LuK1337 | 20
Sony Xperia XA2 Ultra | discovery | LuK1337 | 20
Sony Xperia XA2 | pioneer | LuK1337, Stricted, cdesai | 20
Xiaomi Mi 5 | gemini | bgcngm, ikeramat | 20
Xiaomi Mi 5s Plus | natrium | LuK1337 | 20
Xiaomi Mi 6 | sagit | ArianK16a | 20
Xiaomi Mi 8 Explorer Edition | ursa | bgcngm | 20
Xiaomi Mi 8 Pro | equuleus | bgcngm | 20
Xiaomi Mi 8 | dipper | infrag | 20
Xiaomi Mi 9 SE | grus | SebaUbuntu | 20
Xiaomi Mi CC 9 / Xiaomi Mi 9 Lite | pyxis | ceracz | 20
Xiaomi Mi CC9 Meitu Edition | vela | 0xCAFEBABE | 20
Xiaomi Mi MIX 2 | chiron | mikeioannina | 20
Xiaomi Mi MIX 2S | polaris | bgcngm | 20
Xiaomi Mi MIX 3 | perseus | bgcngm, rtx4d | 20
Xiaomi Poco F1 | beryllium | bgcngm, warabhishek | 20
Xiaomi Redmi 3S / Xiaomi Redmi 3X / Xiaomi Redmi 4 (India) / Xiaomi Redmi 4X / Xiaomi Redmi Note 5A Prime / Xiaomi Redmi Y1 Prime | Mi8937 | 0xCAFEBABE | 20
Xiaomi Redmi 4A / Xiaomi Redmi 5A / Xiaomi Redmi Note 5A Lite / Xiaomi Redmi Y1 Lite | Mi8917 | 0xCAFEBABE | 20
Xiaomi Redmi 8 / Xiaomi Redmi 8A / Xiaomi Redmi 8A Dual | Mi439 | 0xCAFEBABE | 20

Added 20 devices

Device name | Wiki | Maintainers | Moved from
10.or G | G | kardebayan |
ASUS ZenFone 8 | sake | ZVNexus, Demon000, DD3Boh | 19.1
ASUS Zenfone Max Pro M1 | X00TD | Vivekachooz | 19.1
BQ Aquaris X Pro | bardockpro | Quallenauge, jmpfbmx | 18.1
BQ Aquaris X | bardock | Quallenauge, jmpfbmx | 18.1
Banana Pi M5 (Android TV) | m5 | stricted |
Dynalink TV Box 4K (2021) | wade | npjohnson, bgcngm, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto | 19.1
Fairphone 3 / Fairphone 3+ | FP3 | dk1978, teamb58 | 19.1
Google ADT-3 | deadpool | npjohnson, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto | 19.1
HardKernel ODROID-C4 (Android TV) | odroidc4 | stricted |
Motorola one fusion+ / Motorola one fusion+ (India) | liber | William, Hasaber8 | 19.1
Motorola one zoom | parker | Hasaber8 | 19.1
Nubia Play 5G / Nubia Red Magic 5G Lite | nx651j | Cyborg2017 |
Nubia Red Magic 5G (Global) / Nubia Red Magic 5G (China) / Nubia Red Magic 5S (Global) / Nubia Red Magic 5S (China) | nx659j | DD3Boh |
Nubia Red Magic Mars | nx619j | Cyborg2017 |
Nubia Red Magic | nx609j | Cyborg2017 |
Nubia Z17 | nx563j | BeYkeRYkt, Cyborg2017 | 19.1
Nubia Z18 Mini | nx611j | Cyborg2017 | 19.1
Nubia Z18 | nx606j | Cyborg2017 |
OnePlus Nord N200 | dre | tangalbert919 | 19.1
Radxa Zero (Android TV) | radxa0 | bgcngm, npjohnson, stricted |
SHIFT SHIFT6mq | axolotl | amartinz, joey, mikeioannina | 19.1
Samsung Galaxy A52 4G | a52q | Simon1511 | 19.1
Samsung Galaxy A52s 5G | a52sxq | Simon1511 |
Samsung Galaxy A72 | a72q | Simon1511 | 19.1
Samsung Galaxy A73 5G | a73xq | Simon1511 |
Samsung Galaxy F62 / Samsung Galaxy M62 | f62 | Linux4 |
Samsung Galaxy M52 5G | m52xq | Simon1511 |
Samsung Galaxy Note 9 | crownlte | baddar90 | 17.1
Samsung Galaxy Note10 | d1 | Linux4 | 19.1
Samsung Galaxy Note10+ 5G | d2x | Linux4 | 19.1
Samsung Galaxy Note10+ | d2s | Linux4 | 19.1
Samsung Galaxy S10 5G | beyondx | Linux4 | 19.1
Samsung Galaxy S10 | beyond1lte | Linux4 | 19.1
Samsung Galaxy S10+ | beyond2lte | Linux4 | 19.1
Samsung Galaxy S10e | beyond0lte | Linux4 | 19.1
Samsung Galaxy S9 | starlte | baddar90 | 17.1
Samsung Galaxy S9+ | star2lte | baddar90 | 17.1
Samsung Galaxy Tab A 8.0 (2019) | gtowifi | lifehackerhansol |
Samsung Galaxy Tab S6 Lite (LTE) | gta4xl | haggertk, Linux4 | 19.1
Samsung Galaxy Tab S6 Lite (Wi-Fi) | gta4xlwifi | Linux4, haggertk | 19.1
Sony Xperia XZ2 Compact | xz2c | dtrunk90 | 19.1
Sony Xperia XZ2 Premium | aurora | dtrunk90 | 19.1
Sony Xperia XZ2 | akari | dtrunk90 | 19.1
Sony Xperia XZ3 | akatsuki | dtrunk90 | 19.1
Walmart onn. TV Box 4K (2021) | dopinder | npjohnson, bgcngm, stricted, webgeek1234, deadman96385, trautamaki, luca020400, aleasto |
Xiaomi 11 Lite 5G NE / Xiaomi 11 Lite NE 5G / Xiaomi Mi 11 LE | lisa | ItsVixano | 19.1
Xiaomi Mi 10T / Xiaomi Mi 10T Pro / Xiaomi Redmi K30S Ultra | apollon | Ramisky, SebaUbuntu | 19.1
Xiaomi Mi 10T Lite 5G / Xiaomi Mi 10i 5G / Xiaomi Redmi Note 9 Pro 5G | gauguin | Hridaya, Lynnrin | 19.1
Xiaomi Mi 11 Lite 5G | renoir | ArianK16a | 19.1
Xiaomi Mi 11 Pro | mars | Flower Sea |
Xiaomi Mi 11i / Xiaomi Redmi K40 Pro / Xiaomi Redmi K40 Pro+ / Xiaomi Mi 11X Pro | haydn | AdarshGrewal, erfanoabdi | 19.1
Xiaomi Mi 9T / Xiaomi Redmi K20 (China) / Xiaomi Redmi K20 (India) | davinci | ArianK16a | 17.1
Xiaomi Mi A1 | tissot | abhinavgupta371 | 19.1
Xiaomi POCO F2 Pro / Xiaomi Redmi K30 Pro | lmi | SebaUbuntu | 19.1
Xiaomi POCO F3 / Xiaomi Redmi K40 / Xiaomi Mi 11X | alioth | SahilSonar, SebaUbuntu, althafvly | 19.1
Xiaomi POCO M2 Pro / Xiaomi Redmi Note 9S / Xiaomi Redmi Note 9 Pro (Global) / Xiaomi Redmi Note 9 Pro (India) / Xiaomi Redmi Note 9 Pro Max / Xiaomi Redmi Note 10 Lite | miatoll | dereference23, ItsVixano | 19.1
Xiaomi POCO X3 NFC | surya | Shimitar, TheStrechh | 19.1
Xiaomi POCO X3 Pro | vayu | SebaUbuntu | 19.1
Xiaomi Redmi 7 / Xiaomi Redmi Y3 | onclite | Dhina17 | 19.1
Xiaomi Redmi 9 | lancelot | surblazer |
Xiaomi Redmi Note 10 Pro / Xiaomi Redmi Note 10 Pro (India) / Xiaomi Redmi Note 10 Pro Max (India) | sweet | basamaryan, danielml3 |
Xiaomi Redmi Note 10S / Xiaomi Redmi Note 10S NFC / Xiaomi Redmi Note 10S Latin America | rosemary | surblazer |
Xiaomi Redmi Note 7 Pro | violet | jashvakharia, raghavt20 | 16.0
Xiaomi Redmi Note 9 | merlinx | surblazer, bengris32 |
ZUK Z2 Plus | z2_plus | DD3Boh | 19.1

Added 18.1 devices

Device name | Wiki | Maintainers | Moved from
Google Nexus 7 2013 (LTE, Repartitioned) | debx | npjohnson, surblazer, Elektroschmock, hpnightowl, ROMSG |
Motorola moto z | griffin | erfanoabdi, npjohnson | 17.1

Source :
https://lineageos.org/Changelog-28/

Part 8: See How Customers Are Unlocking the Power of Hybrid Cloud with VMware Cloud on AWS

Ruchi Tandon
September 11, 2023

Looking to rapidly migrate to the cloud? Scale cost-effectively and strengthen disaster recovery? You’re not alone. Here’s how organizations are unlocking the power of hybrid cloud with VMware Cloud on AWS. In this blog, let’s dive into a collection of compelling customer stories that offer a glimpse into the impactful experiences of our customers with VMware Cloud on AWS. Also, check out Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, and Part 7 of this blog series for more customer stories across various use cases.

USE CASE: DATA CENTER MIGRATION

Kingston University

Kingston University accelerates its consciously hybrid cloud strategy by migrating to VMware Cloud on AWS

Kingston University is a prestigious higher education institution based in London. It offers courses across various disciplines from undergraduate to postgraduate level and prides itself on producing the most sought-after graduates in the country.

The university needed flexible and agile technology that enables it to respond rapidly to changing requirements. The IT team at the university also wanted to support the university’s transition to more sustainable and energy efficient solutions. Additionally, the university’s ageing on-premises data centers were complex to manage and maintain. The infrastructure refresh cycle was problematically variable, there was growing technical debt, and a need for more scalability. Also, the IT team wanted to move to the cloud and at the same time run some resource-intensive workloads on premises.

By migrating their workloads to VMware Cloud on AWS, the IT team at Kingston University was able to reduce the data center footprint by 90%. They bulk migrated 200 virtual machines (VMs) with zero downtime in just three weeks. In total, 650 of the 750 VMs were moved to the cloud by March 2023.

Using VMware Cloud on AWS, the IT team at the university gained flexibility to pivot quickly on where to host applications to ensure best performance and cost benefits. They also enabled micro-segmentation to increase workload-level security using VMware NSX Distributed Firewall with VMware Cloud on AWS. The team was also able to accelerate application security and networking for the university using VMware Aria Operations for Networks with VMware Cloud on AWS.

Here is what Kingston University shared about their experience of using this hybrid cloud service:

We chose VMware technology so we could use our existing skills and wouldn’t need to reconfigure servers. We already use VMware technology, so we knew we could achieve a more seamless migration to cloud. We set out to remove the barriers to innovation, and with support from Xtravirt and VMware, we’re free to explore everything the cloud has to offer.

– Daniel Bolton, Head of Technical Services, Kingston University

Check out this case study to learn more about Kingston University’s experience with VMware Cloud on AWS.

Quality Bicycle Products

Quality Bicycle Products Accelerates its Path to the Cloud by Migrating to VMware Cloud on AWS

Founded in 1981 and headquartered in Bloomington, Minnesota (USA), Quality Bicycle Products (QBP) is North America’s largest distributor of bicycles, accessories, and parts, and an industry leader in distribution, education, advocacy, and product innovation. As a certified B Corporation, QBP is committed to environmental protection and is working toward full carbon neutrality by 2030.

Early last year, QBP’s colocation provider decided to close down, leaving the company in the lurch and forcing its IT team to move the entire production environment to a new location within 14 months. After evaluating several options, including engaging with another colocation provider facility, QBP decided to migrate to the public cloud, keeping its long-term vision in mind.

QBP wanted to modernize their stack in the cloud within this short timeframe and also wanted to migrate their core applications without rearchitecting or refactoring them. Additionally, as a certified B Corporation, QBP aims to be carbon neutral by 2030 and wanted to work with providers like VMware that have similar goals.

With VMware Cloud on AWS, the IT team at QBP was able to migrate 300 virtual machines (VMs) in under 14 months without refactoring or rearchitecting them. This included their workhorse enterprise resource planning (ERP), warehouse management systems and e-commerce applications. A key business impact was providing their developers with cutting-edge tools and the ability to work on frequent and faster pilot projects while keeping their traditional apps as-is using VMware Cloud on AWS. The IT team at QBP was able to achieve a stable environment to better optimize resources, improve innovation and delivery times. They also gained more clarity on TCO with the ability to produce reports & graphs at CFO level and make better decisions.

Here is what QBP has to say about their experience of using the product: 

“VMware Cloud on AWS was the perfect way to accelerate our move to the cloud. We could move existing applications without refactoring or rearchitecting them and have native cloud services live on the same infrastructure. Being able to offer our developers those cutting-edge tools and keep our traditional apps was a huge win for us.”
– Joe Van Ert, Systems Architect, Quality Bicycle Products

Read Quality Bicycle Products’ experience with VMware Cloud on AWS here and watch their video here.

USE CASE: DATA CENTER EXTENSION

East London NHS Foundation Trust

East London NHS Foundation Trust Scales its IT Infrastructure to Meet Rising Demand for Healthcare Services

The East London NHS Foundation Trust (ELFT) is a National Health Service trust that provides an array of mental health and community health services in the East London region of the United Kingdom.

The team at ELFT wanted to meet rising demand for healthcare services and to address the unexpected, overwhelming stress the COVID-19 pandemic placed on their systems. They also needed a robust platform for delivering innovative solutions that improve patient and service-user care – for example, patients accessing and managing their own records – along with the right digital tools to make a difference in how care is delivered.

To meet the above challenges, the team at ELFT chose VMware Cloud on AWS to migrate to the cloud, using the additional capacity to significantly scale its IT infrastructure, virtualize its networks, and implement a robust disaster recovery solution.

After a few weeks of planning together with VMware teams, they migrated swiftly and smoothly to the cloud. Some benefits were immediately noticeable, such as a faster login experience for clinicians and hospital staff post-migration. The team also put a robust disaster recovery solution in place by activating VMware Cloud Disaster Recovery features on VMware Cloud on AWS, and enabled VMware NSX features to create virtualized network devices, including switches and routers.

“The way VMware supported us when looking at different cloud platforms, different options, attending calls with us where needed or even face-to-face meetings if required, they really went that extra mile to make sure we progressed to that platform.”

– James Slaven, Chief Technology Officer – East London NHS Foundation Trust

Read ELFT’s detailed story here and watch their video here.

École Polytechnique Fédérale de Lausanne

École Polytechnique Fédérale de Lausanne (EPFL) Supports Research Dynamism with Anything as a Service Approach

The École Polytechnique Fédérale de Lausanne (EPFL) is a research institute based in Lausanne, Switzerland, specializing in physical sciences and engineering. It comprises 11,000 students, 350 faculty and 6,000 staff. It is home to over 500 laboratories and research groups, each at the forefront of science and technology.

EPFL needed to change how it provided IT services to researchers: provisioning had to be fast, easy, and reliable, and it had to abide by Switzerland’s data sovereignty laws. Furthermore, management tasked the IT team with being able to provision a new virtual machine within 15 minutes.

With these goals in mind, the IT team at EPFL chose VMware products. EPFL already used VMware Cloud on AWS for disaster recovery purposes; next, it developed a VMware-based private cloud that made better use of its existing resources. Doing so helped the team meet the goal of 15-minute provisioning for a new virtual machine (VM), and the number of VMs in use has doubled since deployment.

The IT team at EPFL was able to provide compute, storage, and networking, along with load balancing and flexibility, using VMware Cloud Foundation; protect sensitive research work with VMware NSX Distributed IDS/IPS; and establish the necessary management control using VMware Aria Automation.

Here is what EPFL has to say about their experience of using the product: 

“From an IT infrastructure perspective, we want to provide researchers with everything they need. They should be able to start their research on day one.”
– Philippe Morel, Director of IT Operations and Infrastructure, EPFL

Read EPFL’s story in detail here and watch their video here.

Route Mobile

Route Mobile Migrates to VMware Cloud on AWS for the Fastest Transition and Lowest Cost to Cloud-based Infrastructure

Route Mobile (RML) is a leading cloud communication platform provider, catering to enterprises, over-the-top (OTT) players, and mobile network operators (MNOs). RML’s enterprise communication services include smart solutions in messaging, voice, email, SMS filtering, analytics, and monetization. The company is headquartered in Mumbai, India, with a global presence in Asia Pacific, the Middle East, Africa, Europe, and North America.

RML faced several challenges that prompted its move to the cloud. Scarce skilled manpower, reduced efficiency, and rising infrastructure and maintenance costs were obstructing growth. Besides eliminating dependency on third-party and private hosting services, RML also needed maximum uptime, greater agility, and less manual intervention to improve customer experience and satisfaction. Additionally, it needed a solution with a lower carbon footprint and better ROI than its existing platform.

The team at RML chose to migrate to the cloud with VMware Cloud on AWS and gained multiple benefits as a result. They ensured 100% application availability for RML’s clients’ customers, achieved a 20% increase in market share, and project revenue growth of 30% year over year. Additionally, the team was able to host the business applications without any reskilling of staff or downtime, and put in place an agile, consumption-based infrastructure with strong operational control, achieving cost efficiencies and ease of deployment on a scalable platform.

Here is what Route Mobile has to say about their experience of using the product: 

“Upgrading to a scalable platform like VMware Cloud on AWS supported by robust infrastructure, Route Mobile is able to support the business focus on product expansion, accelerated adoption of digital communication updates and solutions and omni channel platform capabilities to create deeper customer engagement.”
– Ramesh Helaiya, CTO, Route Mobile

Check out RML’s success story to learn more about their experience of using VMware Cloud on AWS.

City of Potsdam

City of Potsdam Advances Digital Education in Schools by Expanding Data Center to the Public Cloud with VMware Cloud on AWS

Situated in the southwest of Berlin, Potsdam is the capital and largest city of the German state of Brandenburg. Surrounded by lakes, rivers and forests, the city is an inspiring place to live, study and work.

But during COVID-19 school closures, the city needed to accelerate the rollout of urgently needed devices across the German school system, manage those devices in compliance with EU GDPR requirements, and absorb the device-management workload for all schools with a resource-constrained IT team. So the City of Potsdam decided to advance the schools’ digital infrastructure by expanding its data center to the public cloud.

They used VMware Cloud on AWS to set up a GDPR-compliant hybrid cloud infrastructure in just a few weeks, minimizing deployment time with a flexible, scalable, easy-to-manage solution. The City of Potsdam was also able to provide a modern educational experience – including digital services – for 25,000 students and 5,000 educators by using VMware Cloud on AWS and Workspace ONE Unified Endpoint Management.

Here is what the City of Potsdam has to say about their experience of using the product: 

“With VMware Cloud on AWS and Workspace ONE UEM, we have a cloud foundation which accommodates adjustments and additions as our digital learning platform expands and matures.”
– Mathias Horezky, Head of IT Infrastructure and Service, City of Potsdam

Read their customer story here in detail.

What’s Next:

Don’t wait any longer – start your hybrid cloud journey today with VMware Cloud on AWS. You can sign up for the free trial of VMware Cloud on AWS and try out the migration yourself, free for the first 30 days. If you are interested in estimating your TCO savings, also check out the VMware Cloud on AWS TCO Calculator. And please check out the additional resources below to learn more:

Resources for VMware Cloud on AWS

Ruchi Tandon

Ruchi is a Senior Product Marketing Manager for VMware Cloud on AWS at VMware Inc. With 14+ years of strong technology, data, and marketing background, Ruchi brings deep experience in…

Source :
https://blogs.vmware.com/cloud/2023/09/11/part-8-see-how-customers-are-unlocking-the-power-of-hybrid-cloud-with-vmware-cloud-on-aws/

Part 7: See How Customers are Accelerating Cloud Transformation with VMware Cloud on AWS

Ruchi Tandon
August 25, 2022

Looking to rapidly migrate to the cloud? Scale cost-effectively and strengthen disaster recovery? You’re not alone. Here’s how organizations are unlocking the power of hybrid cloud with VMware Cloud on AWS.

As we welcome back customers, partners, colleagues, and friends of VMware in person once again at VMware Explore 2022, one thing is unchanged – the impact that VMware Cloud on AWS has had on our customers’ cloud migration journeys.

In this blog, I want to highlight some of the recent customer stories and share our customers’ experiences with VMware Cloud on AWS. Also, check out Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6 of this blog series for more customer stories across various use cases.

Schibsted Media Group

Schibsted Moves to the Cloud to Support Rapid Expansion and Gain Competitive Advantage

Schibsted Media Group, or Schibsted, a leading media corporation in Scandinavia, wanted to create a unified digital platform for its 55+ brands portfolio, allowing each company to scale operations easily. To support their rapid growth, Schibsted’s team also knew that having a cloud strategy in place would be beneficial when acquiring new companies. It would also significantly reduce time and resources otherwise spent managing multiple vendors and local data centers.

With VMware Cloud on AWS, the team at Schibsted was able to shut down 350 on-premises servers and migrate traditional workloads and legacy software to the cloud faster than expected. Their enterprise systems running apps such as Newspilot, SAP, HR systems, and a variety of advertising platforms now all run on VMware Cloud on AWS. Working with VMware, Schibsted has achieved considerable cost savings compared to on-premises data centers and hopes to save more on operating costs with every new acquisition in the future.

Here is what Schibsted has to say about their experience of using the service: 

“We have a public cloud strategy, and traditional workloads are now running on VMware Cloud on AWS. It is a scalable platform that we are taking full advantage of to become cloud-native. We were one of the first customers in the Nordics who started using it.”

– Ken Sivertsen, Cloud Infrastructure Architect, Schibsted Enterprise Technology

Check out the case study to learn more about their experience of using VMware Cloud on AWS.

Lotte

Lotte Moves to the Cloud for Future-Readiness

When several divisions of Lotte merged into a single corporate entity in 2018 – the year of its 70th anniversary – they decided to embark on a digital transformation (DX) journey to enhance synergies and encourage business growth. After merging, Lotte found it challenging to ensure smooth business operations and a good employee experience because of silos between the merged departments, which had the potential to impact business development. Moreover, Lotte had been operating an on-premises VDI environment on Windows 7. When Windows 7 support ended, there was an urgent need to move to Windows 10 to strengthen Lotte’s VDI resources and improve the system more broadly.

Lotte decided to use VMware Cloud on AWS because it offered the agility and flexibility of the cloud with a proven track record and made migration easy. With the help of VMware and partnering with DXC Technology Japan, and AWS Japan, Lotte has now migrated 4,000 VDI units to VMware Cloud on AWS. Doing so has improved employee experience and maximized business profitability. It has also positioned them well for future expansion and helped them reduce the time needed for infrastructure maintenance and operations.

Here is what Lotte has to say about their experience of using the product: 

“We’re currently running VMware Cloud on AWS VDI environment alongside our on-premises environment and there is no difference between them. The user experience is virtually unchanged and everyone finds it easy to use.”

– Mr. Hisaaki Ogata, Senior Manager of ICT Strategic Division, Lotte Corporation

Read their customer story here in detail.

KDDI Corporation

KDDI Corporation Innovates with Modern Applications

Keeping its agile mindset front and center, KDDI Corporation needed a go-to-market strategy to deploy its applications and deliver new services more rapidly than before. Their developers wanted the ability to concentrate on app development above all things. They also wanted monitoring, log collection, and security features integrated into the platform. In addition, the IT team at KDDI expected their environments to become more and more complex as KDDI deployed applications at the edge and in the cloud.

Solutions like VMware Cloud on AWS and Project Pacific helped KDDI achieve consistent control over such complex environments with a single portal.

Here is what KDDI Corporation has to say about their experience of using the product:

“Going forward, our environments will become more and more complex as we deploy applications at the edge and in the cloud. So, we are looking to VMware Cloud on AWS and Project Pacific to help us achieve consistent control over such complex environments with a single portal.” 

– Takeshi Maehara, General Manager, KDDI Corporation

Learn more about KDDI’s story here.

State of Louisiana

State of Louisiana Unifies IT Service Delivery, Improves Medicare Enrollment, and Rapidly Responds to Disasters with VMware Cloud on AWS

Reforming government is a constant process requiring continuous innovation, creativity, and vigilance, including the technology on which government operates. For the State of Louisiana, that meant embarking on a statewide initiative to transform security and modernize data center operations. The goal: Take IT from legacy mainframes to cloud-based, mobile-ready application delivery. Louisiana decided to partner with VMware to modernize its data centers, transform digital workspaces for users, and move toward a common operating model that spans both private and public clouds.

To extend its on-premises data centers and easily migrate application workloads to the public cloud, the state decided to use VMware Cloud on AWS. With VMware SDDC software running on the AWS cloud, the state can seamlessly integrate with the public cloud and scale easily while leveraging existing VMware skill sets. It can also use familiar tools, such as vRealize Suite and NSX, to extend intelligent operations and micro-segmentation to the public cloud, helping keep its environment manageable and secure. As Louisiana adopts a public cloud-first strategy to reduce costs further, it will use VMware Cloud on AWS to evolve into DevOps methodologies and become an even more efficient broker of IT services.

Here is what the State of Louisiana has to say about their experience of using the product:

“VMware Cloud on AWS will help us take advantage of the elasticity of public cloud, giving us workload portability, a platform for next-gen apps, and easy access to AWS services.”

– Michael Allison, CTO, State of Louisiana

Learn more about State of Louisiana’s experience of using VMware Cloud on AWS in this case study

GuideOne

VMware Cloud on AWS Enables Cloud-Native Capabilities Without Increasing IT Budget For GuideOne

GuideOne, an insurance firm in the United States with over 600 employees and more than $500 million in annual revenue, maintains an environment of 16 ESXi hosts and 800 virtual machines (VMs).

The organization faced several challenges that prompted its investment in VMware Cloud on AWS. GuideOne supported its workloads with on-premises hardware but wanted to move to the cloud to avoid the headaches and costs associated with managing that deployment. The organization also wanted the cost and capability benefits of cloud computing while minimizing the likelihood of outages, delays, and cost overruns that could occur when migrating legacy workloads to the public cloud.

The move to VMware Cloud on AWS produced some great results: 

  • Eliminated 40% of its data center footprint and reduced power costs
  • Reallocated resources to strategic IT initiatives
  • Invested in its own employees and avoided costs of recruiting cloud-native skills
  • Avoided hardware expansion and refresh costs
  • Avoided costly application rearchitecture
  • Flattened IT budget while providing business with new capabilities
  • Enabled a more responsive and compliant security environment

Here is what GuideOne has to say about their experience of using the product:

“[VMware Cloud on AWS] is a quick way of getting into the cloud. You don’t have to do as much QA when it comes to switching over the workloads because you are doing it at the hypervisor level, and you’re really only worried about performance and latency.”

– IT Director, GuideOne

Read more about their experience here.

A Global Financial Firm

VMware Cloud on AWS Provides a Frictionless Path to Capital and Operational Cost Savings for a Global Financial Services Firm

A global financial services firm headquartered in the United States, with over 10,000 employees and more than $3 billion in annual revenue, now maintains three software-defined data centers (SDDCs) with a total of 42 hosts and roughly 800 virtual machines (VMs).

Prior to investing in VMware Cloud on AWS, the organization relied on outsource vendors to maintain its data centers. When the contract was up, the organization could not easily switch providers and did not want to reinvest in building a new data center. In an on-premises environment, the organization was also limited to inefficient disaster recovery processes, which hindered development teams. Additionally, a portfolio of 150 applications, many of which were legacy applications, meant unnecessary maintenance and operations costs. The organization was struggling to modernize its application portfolio due to the speed of service of the vendors managing its on-premises environment.

The business decision-makers were also wary of upcoming data center deadlines.

The move to VMware Cloud on AWS produced some great results: 

  • Retired on-premises data center and reduced annual operating costs by 59%
  • Avoided costly infrastructure refreshes, saving ~$10M
  • Reduced downtime
  • Improved IT agility
  • Modernized application portfolio, saving $200K in annual spend
  • Improved business resilience across 35 offices during the pandemic by being in the cloud

Here is what this Financial Services organization has to say about their experience of using the product:

“Modern applications require modern infrastructure. So today we’re upscaling, we’re new-skilling, and we’re reskilling. I’ve been trying to retire apps my whole time here and was not able to until we moved to the cloud [with VMware Cloud on AWS].”

– Associate Director of Cloud Infrastructure, Financial Services organization

Read more about their experience here.

So don’t wait any longer. Start your cloud migration and application modernization journey with VMware Cloud on AWS. If you are interested in finding out how much you could save, try the VMware Cloud on AWS TCO Calculator. To learn more about VMware Cloud on AWS, here are some learning resources. Or you can get started now by purchasing the service online.

Resources for VMware Cloud on AWS

Ruchi Tandon

Ruchi is a Senior Product Marketing Manager for VMware Cloud on AWS at VMware Inc. With 14+ years of strong technology, data, and marketing background, Ruchi brings deep experience in…

Source :
https://blogs.vmware.com/cloud/2022/08/25/part-7-see-how-customers-are-accelerating-cloud-transformation-with-vmware-cloud-on-aws/

Reflecting on the GDPR to celebrate Privacy Day 2024

26/01/2024
Emily Hancock

10 min read

This post is also available in Deutsch, Français, 日本語 and Nederlands.

Just in time for Data Privacy Day 2024 on January 28, the EU Commission is calling for evidence to understand how the EU’s General Data Protection Regulation (GDPR) has been functioning now that we’re nearing the 6th anniversary of the regulation coming into force.

We’re so glad they asked, because we have some thoughts. And what better way to celebrate privacy day than by discussing whether the application of the GDPR has actually done anything to improve people’s privacy?

The answer is, mostly yes, but in a couple of significant ways – no.

Overall, the GDPR is rightly seen as the global gold standard for privacy protection. It has served as a model for what data protection practices should look like globally, it enshrines data subject rights that have been copied across jurisdictions, and when it took effect, it created a standard for the kinds of privacy protections people worldwide should be able to expect and demand from the entities that handle their personal data. On balance, the GDPR has definitely moved the needle in the right direction for giving people more control over their personal data and in protecting their privacy.

In a couple of key areas, however, we believe the way the GDPR has been applied to data flowing across the Internet has done nothing for privacy and in fact may even jeopardize the protection of personal data. The first area where we see this is with respect to cross-border data transfers. Location has become a proxy for privacy in the minds of many EU data protection regulators, and we think that is the wrong result. The second area is an overly broad interpretation of what constitutes “personal data” by some regulators with respect to Internet Protocol or “IP” addresses. We contend that IP addresses should not always count as personal data, especially when the entities handling IP addresses have no ability on their own to tie those IP addresses to individuals. This is important because the ability to implement a number of industry-leading cybersecurity measures relies on the ability to do threat intelligence on Internet traffic metadata, including IP addresses.  

Location should not be a proxy for privacy

Fundamentally, good data security and privacy practices should be able to protect personal data regardless of where that processing or storage occurs. Nevertheless, the GDPR is based on the idea that legal protections should attach to personal data based on the location of the data – where it is generated, processed, or stored. Articles 44 to 49 establish the conditions that must be in place in order for data to be transferred to a jurisdiction outside the EU, with the idea that even if the data is in a different location, the privacy protections established by the GDPR should follow the data. No doubt this approach was influenced by political developments around government surveillance practices, such as the revelations in 2013 of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and that intelligence agencies were scooping up data from choke points on the Internet. And once the GDPR took effect, many data regulators in the EU were of the view that as a result of the GDPR’s restrictions on cross-border data transfers, European personal data simply could not be processed in the United States in a way that would be consistent with the GDPR.

This issue came to a head in July 2020, when the European Court of Justice (CJEU), in its “Schrems II” decision [1], invalidated the EU-US Privacy Shield adequacy standard and questioned the suitability of the EU standard contractual clauses (a mechanism entities can use to ensure that GDPR protections are applied to EU personal data even if it is processed outside the EU). The ruling in some respects left data protection regulators with little room to maneuver on questions of transatlantic data flows. But while some regulators were able to view the Schrems II ruling in a way that would still allow for EU personal data to be processed in the United States, other data protection regulators saw the decision as an opportunity to double down on their view that EU personal data cannot be processed in the US consistent with the GDPR, therefore promoting the misconception that data localization should be a proxy for data protection.

In fact, we would argue that the opposite is the case. From our own experience and according to recent research [2], we know that data localization threatens an organization’s ability to achieve integrated management of cybersecurity risk and limits an entity’s ability to employ state-of-the-art cybersecurity measures that rely on cross-border data transfers to make them as effective as possible. For example, Cloudflare’s Bot Management product only increases in accuracy with continued use on the global network: it detects and blocks traffic coming from likely bots before feeding back learnings to the models backing the product. A diversity of signal and scale of data on a global platform is critical to help us continue to evolve our bot detection tools. If the Internet were fragmented – preventing data from one jurisdiction being used in another – more and more signals would be missed. We wouldn’t be able to apply learnings from bot trends in Asia to bot mitigation efforts in Europe, for example. And if the ability to identify bot traffic is hampered, so is the ability to block those harmful bots from services that process personal data.

The need for industry-leading cybersecurity measures is self-evident, and it is not as if data protection authorities don’t realize this. If you look at any enforcement action brought against an entity that suffered a data breach, you see data protection regulators insisting that the impacted entities implement ever more robust cybersecurity measures in line with the obligation GDPR Article 32 places on data controllers and processors to “develop appropriate technical and organizational measures to ensure a level of security appropriate to the risk”, “taking into account the state of the art”. In addition, data localization undermines information sharing within industry and with government agencies for cybersecurity purposes, which is generally recognized as vital to effective cybersecurity.

In this way, while the GDPR itself lays out a solid framework for securing personal data to ensure its privacy, the application of the GDPR’s cross-border data transfer provisions has twisted and contorted the purpose of the GDPR. It’s a classic example of not being able to see the forest for the trees. If the GDPR is applied in such a way as to elevate the priority of data localization over the priority of keeping data private and secure, then the protection of ordinary people’s data suffers.

Applying data transfer rules to IP addresses could lead to balkanization of the Internet

The other key way in which the application of the GDPR has been detrimental to the actual privacy of personal data is related to the way the term “personal data” has been defined in the Internet context – specifically with respect to Internet Protocol or “IP” addresses. A world where IP addresses are always treated as personal data and therefore subject to the GDPR’s data transfer rules is a world that could come perilously close to requiring a walled-off European Internet. And as noted above, this could have serious consequences for data privacy, not to mention that it likely would cut the EU off from any number of global marketplaces, information exchanges, and social media platforms.

This is a bit of a complicated argument, so let’s break it down. As most of us know, IP addresses are the addressing system for the Internet. When you send a request to a website, send an email, or communicate online in any way, IP addresses connect your request to the destination you’re trying to access. These IP addresses are the key to making sure Internet traffic gets delivered to where it needs to go. As the Internet is a global network, this means it’s entirely possible that Internet traffic – which necessarily contains IP addresses – will cross national borders. Indeed, the destination you are trying to access may well be located in a different jurisdiction altogether. That’s just the way the global Internet works. So far, so good.
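
To make that addressing step concrete, here is a minimal Python sketch – the hostname is a placeholder, and any public site behaves the same way – showing that resolving a destination yields the IP addresses your packets are actually sent to, addresses that may sit in any jurisdiction:

```python
# Minimal sketch: DNS resolution turns a destination name into IP addresses.
# Your request then carries those addresses, wherever in the world they point.
import socket

host = "example.com"  # placeholder destination

for family, _, _, _, sockaddr in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
    ip = sockaddr[0]
    print(f"{host} resolves to {ip}")
```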

But if IP addresses are considered personal data, then they are subject to data transfer restrictions under the GDPR. And with the way those provisions have been applied in recent years, some data regulators were getting perilously close to saying that IP addresses cannot transit jurisdictional boundaries if it meant the data might go to the US. The EU’s recent approval of the EU-US Data Privacy Framework established adequacy for US entities that certify to the framework, so these cross-border data transfers are not currently an issue. But if the Data Privacy Framework were to be invalidated as the EU-US Privacy Shield was in the Schrems II decision, then we could find ourselves in a place where the GDPR is applied to mean that IP addresses ostensibly linked to EU residents can’t be processed in the US, or potentially not even leave the EU.

If this were the case, then providers would have to start developing Europe-only networks to ensure IP addresses never cross jurisdictional boundaries. But how would people in the EU and US communicate if EU IP addresses can’t go to the US? Would EU citizens be restricted from accessing content stored in the US? It’s an application of the GDPR that would lead to an absurd result – one surely not intended by its drafters. And yet, in light of the Schrems II case and the way the GDPR has been applied, here we are.

A possible solution would be to consider that IP addresses are not always “personal data” subject to the GDPR. In 2016 – even before the GDPR took effect – the Court of Justice of the European Union (CJEU) established the view in Breyer v. Bundesrepublik Deutschland that even dynamic IP addresses, which change with every new connection to the Internet, constituted personal data if an entity processing the IP address could link the IP addresses to an individual. While the court’s decision did not say that dynamic IP addresses are always personal data under European data protection law, that’s exactly what EU data regulators took from the decision, without considering whether an entity actually has a way to tie the IP address to a real person [3].

The question of when an identifier qualifies as “personal data” is again before the CJEU: In April 2023, the lower EU General Court ruled in SRB v EDPS [4] that transmitted data can be considered anonymised and therefore not personal data if the data recipient does not have any additional information reasonably likely to allow it to re-identify the data subjects and has no legal means available to access such information. The appellant – the European Data Protection Supervisor (EDPS) – disagrees. The EDPS, who mainly oversees the privacy compliance of EU institutions and bodies, is appealing the decision and arguing that a unique identifier should qualify as personal data if that identifier could ever be linked to an individual, regardless of whether the entity holding the identifier actually had the means to make such a link.

If the lower court’s common-sense ruling holds, one could argue that IP addresses are not personal data when those IP addresses are processed by entities like Cloudflare, which have no means of connecting an IP address to an individual. If IP addresses are then not always personal data, then IP addresses will not always be subject to the GDPR’s rules on cross-border data transfers.

Although it may seem counterintuitive, having a standard whereby an IP address is not necessarily “personal data” would actually be a positive development for privacy. If IP addresses can flow freely across the Internet, then entities in the EU can use non-EU cybersecurity providers to help them secure their personal data. Advanced Machine Learning/predictive AI techniques that look at IP addresses to protect against DDoS attacks, prevent bots, or otherwise guard against personal data breaches will be able to draw on attack patterns and threat intelligence from around the world to the benefit of EU entities and residents. But none of these benefits can be realized in a world where IP addresses are always personal data under the GDPR and where the GDPR’s data transfer rules are interpreted to mean IP addresses linked to EU residents can never flow to the United States.
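
One concrete way an entity can ensure it has no means of connecting an IP address to an individual is to truncate addresses before storing them. The sketch below is illustrative only – the mask sizes are assumptions, and it does not describe Cloudflare’s practice or any regulator’s prescribed method:

```python
# Sketch: zero the host portion of an IP address before storage, so the
# retained value can no longer single out an individual subscriber.
# Mask sizes (/24 for IPv4, /48 for IPv6) are illustrative assumptions.
import ipaddress

def truncate_ip(ip_str: str) -> str:
    ip = ipaddress.ip_address(ip_str)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(truncate_ip("203.0.113.77"))      # -> 203.0.113.0
print(truncate_ip("2001:db8::abcd:1"))  # -> 2001:db8::
```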

Keeping privacy in focus

On this Data Privacy Day, we urge EU policy makers to look closely at how the GDPR is working in practice, and to take note of the instances where the GDPR is applied in ways that place privacy protections above all other considerations – even appropriate security measures mandated by the GDPR’s Article 32 that take into account the state of the art of technology. When this happens, it can actually be detrimental to privacy. If taken to the extreme, this formulaic approach would not only negatively impact cybersecurity and data protection, but even put into question the functioning of the global Internet infrastructure as a whole, which depends on cross-border data flows. So what can be done to avert this?

First, we believe EU policymakers could adopt guidelines (if not legal clarification) for regulators that IP addresses should not be considered personal data when they cannot be linked by an entity to a real person. Second, policymakers should clarify that the GDPR’s application should be considered with the cybersecurity benefits of data processing in mind. Building on the GDPR’s existing recital 49, which rightly recognizes cybersecurity as a legitimate interest for processing, personal data that needs to be processed outside the EU for cybersecurity purposes should be exempted from GDPR restrictions to international data transfers. This would avoid some of the worst effects of the mindset that currently views data localization as a proxy for data privacy. Such a shift would be a truly pro-privacy application of the GDPR.

[1] Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems.
[2] Swire, Peter; Kennedy-Mayo, DeBrae; Bagley, Andrew; Modak, Avani; Krasser, Sven; Bausewein, Christoph: Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures (2023).
[3] Several decisions by European data protection authorities – the Austrian DSB (December 2021), the French CNIL (February 2022) and the Italian Garante (June 2022) – analyzing the use of Google Analytics have rejected the relative approach used in the Breyer case and considered that an IP address should always be treated as personal data. Only the decision issued by the Spanish AEPD (December 2022) followed the Breyer interpretation. See also paragraphs 109 and 136 of the Guidelines by Supervisory Authorities for Tele-Media Providers, DSK (2021).
[4] Single Resolution Board v EDPS, Court of Justice of the European Union, April 2023.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/reflecting-on-the-gdpr-to-celebrate-privacy-day-2024/

Thanksgiving 2023 security incident

01/02/2024
Matthew Prince John Graham-Cumming Grant Bourzikas

11 min read

On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and on Sunday, November 26, we brought in CrowdStrike’s Forensic team to perform their own independent analysis.

Yesterday, CrowdStrike completed its investigation, and we are publishing this blog post to talk about the details of this security incident.

We want to emphasize to our customers that no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. No services were implicated, and no changes were made to our global network systems or configuration. This is the promise of a Zero Trust architecture: it’s like bulkheads in a ship where a compromise in one system is limited from compromising the whole organization.

From November 14 to 17, a threat actor did reconnaissance and then accessed our internal wiki (which uses Atlassian Confluence) and our bug database (Atlassian Jira). On November 20 and 21, we saw additional access indicating they may have come back to test access to ensure they had connectivity.

They then returned on November 22 and established persistent access to our Atlassian server using ScriptRunner for Jira, gained access to our source code management system (which uses Atlassian Bitbucket), and tried, unsuccessfully, to access a console server that had access to a data center in São Paulo, Brazil, that Cloudflare had not yet put into production.

They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023. All threat actor access and connections were terminated on November 24 and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.

(Throughout this blog post all dates and times are UTC.)

Even though we understand the operational impact of the incident to be extremely limited, we took this incident very seriously because a threat actor had used stolen credentials to get access to our Atlassian server and accessed some documentation and a limited amount of source code. Based on our collaboration with colleagues in the industry and government, we believe that this attack was performed by a nation state attacker with the goal of obtaining persistent and widespread access to Cloudflare’s global network.

“Code Red” Remediation and Hardening Effort

On November 24, after the threat actor was removed from our environment, our security team pulled in all the people they needed across the company to investigate the intrusion and ensure that the threat actor had been completely denied access to our systems, and to ensure we understood the full extent of what they accessed or tried to access.

Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”. The focus was strengthening, validating, and remediating any control in our environment to ensure we are secure against future intrusion and to validate that the threat actor could not gain access to our environment. Additionally, we continued to investigate every system, account and log to make sure the threat actor did not have persistent access and that we fully understood what systems they had touched and which they had attempted to access.

CrowdStrike performed an independent assessment of the scope and extent of the threat actor’s activity, including a search for any evidence that they still persisted in our systems. CrowdStrike’s investigation provided helpful corroboration and support for our investigation, but did not bring to light any activities that we had missed. This blog post outlines in detail everything we and CrowdStrike uncovered about the activity of the threat actor.

The only production systems the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold. Because of that, we decided a huge effort was needed to further harden our security protocols to prevent the threat actor from being able to get that foothold had we overlooked something from our log files.

Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in. Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort: we rotated every production credential (more than 5,000 individual credentials), physically segmented test and staging systems, performed forensic triage on 4,893 systems, and reimaged and rebooted every machine in our global network, including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).

The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

We also looked for software packages that hadn’t been updated, user accounts that might have been created, and unused active employee accounts; we went searching for secrets that might have been left in Jira tickets or source code, examined and deleted all HAR files uploaded to the wiki in case they contained tokens of any sort. Whenever in doubt, we assumed the worst and made changes to ensure anything the threat actor was able to access would no longer be in use and therefore no longer be valuable to them.
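
HAR files are plain JSON, which is why they so often carry live credentials: every request and response header is captured verbatim. A hedged sketch of the kind of pre-upload check described above might look like the following – the header names are the usual suspects, not an exhaustive rule set:

```python
# Sketch: flag HAR files that contain credential-bearing headers before
# they are attached to a wiki page or ticket. HAR is plain JSON.
import json
import sys

SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

def har_findings(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        har = json.load(f)
    findings = []
    for entry in har.get("log", {}).get("entries", []):
        url = entry.get("request", {}).get("url", "")
        for section in ("request", "response"):
            for header in entry.get(section, {}).get("headers", []):
                if header.get("name", "").lower() in SENSITIVE_HEADERS:
                    findings.append(f"{section} header {header['name']!r} in {url}")
    return findings

if __name__ == "__main__":
    for hit in har_findings(sys.argv[1]):
        print(hit)
```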

Every member of the team was encouraged to point out areas the threat actor might have touched, so we could examine log files and determine the extent of the threat actor’s access. By including such a large number of people across the company, we aimed to leave no stone unturned looking for evidence of access or changes that needed to be made to improve security.

The immediate “Code Red” effort ended on January 5, but work continues across the company around credential management, software hardening, vulnerability management, additional alerting, and more.

Attack timeline

The attack started in October with the compromise of Okta, but the threat actor only began targeting our systems using those credentials from the Okta compromise in mid-November.

The following timeline shows the major events:

October 18 – Okta compromise

We’ve written about this before but, in summary, we were (for the second time) the victim of a compromise of Okta’s systems, which resulted in a threat actor gaining access to a set of credentials. All of these credentials were meant to be rotated.

Unfortunately, we failed to rotate one service token and three service account credentials (out of thousands) that were leaked during the Okta compromise.

One was a Moveworks service token that granted remote access into our Atlassian system. The second credential was a service account used by the SaaS-based Smartsheet application that had administrative access to our Atlassian Jira instance. The third was a Bitbucket service account used to access our source code management system, and the fourth was a credential for an AWS environment that had no access to the global network and no customer or sensitive data.

The one service token and three accounts were not rotated because it was mistakenly believed they were unused. This was incorrect, and it was how the threat actor first got into our systems and gained persistence in our Atlassian products. Note that this was in no way an error on the part of Atlassian, AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.

November 14 09:22:49 – threat actor starts probing

Our logs show that the threat actor started probing and performing reconnaissance of our systems beginning on November 14, looking for a way to use the credentials and what systems were accessible. They attempted to log into our Okta instance and were denied access. They attempted access to the Cloudflare Dashboard and were denied access.

Additionally, the threat actor accessed an AWS environment that is used to power the Cloudflare Apps marketplace. This environment was segmented with no access to global network or customer data. The service account to access this environment was revoked, and we validated the integrity of the environment.

November 15 16:28:38 – threat actor gains access to Atlassian services

The threat actor successfully accessed Atlassian Jira and Confluence on November 15 using the Moveworks service token to authenticate through our gateway, and then they used the Smartsheet service account to gain access to the Atlassian suite. The next day they began looking for information about the configuration and management of our global network, and accessed various Jira tickets.

The threat actor searched the wiki for terms like “remote access”, “secret”, “client-secret”, “openconnect”, “cloudflared”, and “token”. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages).

The threat actor accessed Jira tickets about vulnerability management, secret rotation, MFA bypass, network access, and even our response to the Okta incident itself.

The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems – password resets, remote access, configuration, and our use of Salt – but they did not target customer data or customer configurations.

November 16 14:36:37 – threat actor creates an Atlassian user account

The threat actor used the Smartsheet credential to create an Atlassian account that looked like a normal Cloudflare user. They added this user to a number of groups within Atlassian so that they’d have persistent access to the Atlassian environment should the Smartsheet service account be removed.

November 17 14:33:52 to November 20 09:26:53 – threat actor takes a break from accessing Cloudflare systems

During this period, the attacker took a break from accessing our systems (apart from apparently briefly testing that they still had access) and returned just before Thanksgiving.

November 22 14:18:22 – threat actor gains persistence

Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, a widely used tool and framework that red teams and attackers use to enable “C2” (command and control) connectivity, gaining persistent and stealthy access to the computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.

This allowed them continuous access to the Atlassian server, which they used to attempt lateral movement. With this access, the threat actor attempted to gain access to a non-production console server in our São Paulo, Brazil data center due to a non-enforced ACL. The access was denied, and they were not able to access any of the global network.

Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904). Of those 120, they used the Atlassian Bitbucket git archive feature on 76 repositories to download them to the Atlassian server, and even though we were not able to confirm whether or not those archives had been exfiltrated, we decided to treat them as if they had been.

The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and our use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets which were rotated immediately even though they were strongly encrypted themselves.

We focused particularly on these 76 source code repositories to look for embedded secrets, (secrets stored in the code were rotated), vulnerabilities and ways in which an attacker could use them to mount a subsequent attack. This work was done as a priority by engineering teams across the company as part of “Code Red”.

As a SaaS company, we’ve long believed that our source code itself is not as precious as the source code of software companies that distribute software to end users. In fact, we’ve open sourced a large amount of our source code and speak openly through our blog about algorithms and techniques we use. So our focus was not on someone having access to the source code, but whether that source code contained embedded secrets (such as a key or token) and vulnerabilities.
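
A sweep for embedded secrets like the one described above can start very simply: walk a checkout and flag lines that match common credential patterns. The patterns in this hedged sketch are illustrative assumptions, not the actual rules any Cloudflare team used:

```python
# Sketch: scan a repository checkout for lines matching common
# credential patterns. Patterns are illustrative, not exhaustive.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic token assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_repo(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {label}")

scan_repo(".")  # point at a cloned repository
```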

November 23 – Discovery and threat actor access termination begins

Our security team was alerted to the threat actor’s presence at 16:00 and deactivated the Smartsheet service account 35 minutes later. 48 minutes later the user account created by the threat actor was found and deactivated. Here’s the detailed timeline for the major actions taken to block the threat actor once the first alert was raised.

15:58 – The threat actor adds the Smartsheet service account to an administrator group.
16:00 – Automated alert about the change at 15:58 to our security team.
16:12 – Cloudflare SOC starts investigating the alert.
16:35 – Smartsheet service account deactivated by Cloudflare SOC.
17:23 – The threat actor-created Atlassian user account is found and deactivated.
17:43 – Internal Cloudflare incident declared.
21:31 – Firewall rules put in place to block the threat actor’s known IP addresses.
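
The 16:00 alert that set this timeline in motion is a good example of a high-value, low-noise detection: watch the audit stream for accounts being added to administrator groups. The sketch below is hypothetical – the event shape and group names are assumptions, not Atlassian’s actual audit schema:

```python
# Hypothetical sketch: flag audit-log events where an account is added
# to an administrator group. Event fields and group names are assumed.
import json

ADMIN_GROUPS = {"site-admins", "jira-administrators", "confluence-administrators"}

def review_events(lines):
    for line in lines:
        event = json.loads(line)
        if (event.get("action") == "group_member_added"
                and event.get("group") in ADMIN_GROUPS):
            yield (f"ALERT: {event.get('actor')} added {event.get('member')} "
                   f"to {event['group']} at {event.get('timestamp')}")

sample = ['{"action": "group_member_added", "group": "site-admins", '
          '"actor": "svc-smartsheet", "member": "svc-smartsheet", '
          '"timestamp": "2023-11-23T15:58:00Z"}']
for alert in review_events(sample):
    print(alert)
```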

November 24 – Sliver removed; all threat actor access terminated

10:44 – Last known threat actor activity.
11:59 – Sliver removed.

Throughout this timeline, the threat actor tried to access a myriad of other systems at Cloudflare but failed because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools.

To be clear, we saw no evidence whatsoever that the threat actor got access to our global network, data centers, SSL keys, customer databases or configuration information, Cloudflare Workers deployed by us or customers, AI models, network infrastructure, or any of our datastores like Workers KV, R2 or Quicksilver. Their access was limited to the Atlassian suite and the server on which our Atlassian runs.

A large part of our “Code Red” effort was understanding what the threat actor got access to and what they tried to access. By looking at logging across systems we were able to track attempted access to our internal metrics, network configuration, build system, alerting systems, and release management system. Based on our review, none of their attempts to access these systems were successful. Independently, CrowdStrike performed an assessment of the scope and extent of the threat actor’s activity, which did not bring to light activities that we had missed and concluded that the last evidence of threat activity was on November 24 at 10:44.

We are confident that between our investigation and CrowdStrike’s, we fully understand the threat actor’s actions and that they were limited to the systems on which we saw their activity.

Conclusion

This was a security incident involving a sophisticated actor, likely a nation-state, who operated in a thoughtful and methodical manner. The efforts we have taken ensure that the ongoing impact of the incident was limited and that we are well-prepared to fend off any sophisticated attacks in the future. This required the efforts of a significant number of Cloudflare’s engineering staff, and, for over a month, this was the highest priority at Cloudflare. The entire Cloudflare team worked to ensure that our systems were secure, the threat actor’s access was understood, to remediate immediate priorities (such as mass credential rotation), and to build a plan of long-running work to improve our overall security based on areas for improvement discovered during this process.

We are incredibly grateful to everyone at Cloudflare who responded quickly over the Thanksgiving holiday to conduct an initial analysis and lock out the threat actor, and all those who contributed to this effort. It would be impossible to name everyone involved, but their long hours and dedicated work made it possible to undertake an essential review and change of Cloudflare’s security while keeping our global network running and our customers’ service running.

We are grateful to CrowdStrike for having been available immediately to conduct an independent assessment. Now that their final report is complete, we are confident in our internal analysis and remediation of the intrusion and are making this blog post available.

IOCs
Below are the Indicators of Compromise (IOCs) that we saw from this threat actor. We are publishing them so that other organizations, and especially those that may have been impacted by the Okta breach, can search their logs to confirm the same threat actor did not access their systems.

Indicator         | Indicator Type | SHA256                                                           | Description
193.142.58[.]126  | IPv4           | N/A                                                              | Primary threat actor infrastructure, owned by M247 Europe SRL (Bucharest, Romania)
198.244.174[.]214 | IPv4           | N/A                                                              | Sliver C2 server, owned by OVH SAS (London, England)
idowall[.]com     | Domain         | N/A                                                              | Infrastructure serving Sliver payload
jvm-agent         | Filename       | bdd1a085d651082ad567b03e5186d1d46d822bb7794157ab8cce95d850a3caaf | Sliver payload
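
Acting on these IOCs can be as simple as sweeping log files for the indicators. Here is a hedged Python sketch; the log path is a placeholder, and the defanged bracketed dots in the published indicators are restored before matching:

```python
# Sketch: search plain-text logs for the IOCs published above.
from pathlib import Path

IOCS = [
    "193.142.58.126",   # primary threat actor infrastructure
    "198.244.174.214",  # Sliver C2 server
    "idowall.com",      # domain serving Sliver payload
    "jvm-agent",        # Sliver payload filename
]

def search_logs(log_dir: str) -> None:
    for log_file in Path(log_dir).rglob("*.log"):
        with open(log_file, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                for ioc in IOCS:
                    if ioc in line:
                        print(f"{log_file}:{lineno}: matched {ioc}")

search_logs("/var/log")  # adjust to wherever your logs live
```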

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/thanksgiving-2023-security-incident

AnyDesk says hackers breached its production servers, reset passwords

By Lawrence Abrams
February 2, 2024

AnyDesk confirmed today that it suffered a recent cyberattack that allowed hackers to gain access to the company’s production systems. BleepingComputer has learned that source code and private code signing keys were stolen during the attack.

AnyDesk is a remote access solution that allows users to remotely access computers over a network or the internet. The program is very popular with enterprises, which use it for remote support or to access colocated servers.

The software is also popular among threat actors who use it for persistent access to breached devices and networks.

The company reports having 170,000 customers, including 7-Eleven, Comcast, Samsung, MIT, NVIDIA, SIEMENS, and the United Nations.

AnyDesk hacked

In a statement shared with BleepingComputer late Friday afternoon, AnyDesk says they first learned of the attack after detecting indications of an incident on their production servers. 

After conducting a security audit, they determined their systems were compromised and activated a response plan with the help of cybersecurity firm CrowdStrike.

AnyDesk did not share details on whether data was stolen during the attack. However, BleepingComputer has learned that the threat actors stole source code and code signing certificates.

The company also confirmed ransomware was not involved but didn’t share too much information about the attack other than saying their servers were breached, with the advisory mainly focusing on how they responded to the incident.

As part of their response, AnyDesk says they have revoked security-related certificates and remediated or replaced systems as necessary. They also reassured customers that AnyDesk was safe to use and that there was no evidence of end-user devices being affected by the incident.

“We can confirm that the situation is under control and it is safe to use AnyDesk. Please ensure that you are using the latest version, with the new code signing certificate,” AnyDesk said in a public statement.

While the company says that no authentication tokens were stolen, out of caution, AnyDesk is revoking all passwords to their web portal and suggests changing your password if you use it on other sites.

“AnyDesk is designed in a way which session authentication tokens cannot be stolen. They only exist on the end user’s device and are associated with the device fingerprint. These tokens never touch our systems,” AnyDesk told BleepingComputer in response to our questions about the attack.

“We have no indication of session hijacking as to our knowledge this is not possible.”

The company has already begun replacing stolen code signing certificates, with Günter Born of BornCity first reporting that they are using a new certificate in AnyDesk version 8.0.8, released on January 29th. The only listed change in the new version is that the company switched to a new code signing certificate and will revoke the old one soon.

BleepingComputer looked at previous versions of the software, and the older executables were signed under the name ‘philandro Software GmbH’ with serial number 0dbf152deaf0b981a8a938d53f769db8. The new version is now signed under ‘AnyDesk Software GmbH,’ with a serial number of 0a8177fcd8936a91b5e0eddf995b0ba5, as shown below.
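
If you want to verify which certificate your installed copy is signed with, here is a minimal sketch for Windows that calls PowerShell’s Get-AuthenticodeSignature cmdlet from Python; the executable path is an assumption, so adjust it for your install.

```python
import subprocess

# Read the Authenticode signer's serial number with PowerShell's
# Get-AuthenticodeSignature cmdlet. The install path below is an assumption.
EXE = r"C:\Program Files (x86)\AnyDesk\AnyDesk.exe"

result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"(Get-AuthenticodeSignature '{EXE}').SignerCertificate.SerialNumber"],
    capture_output=True, text=True, check=True,
)

serial = result.stdout.strip().lower()
print("Signer serial:", serial)

# Per the article: the revoked 'philandro Software GmbH' certificate has
# serial 0dbf152deaf0b981a8a938d53f769db8; the new 'AnyDesk Software GmbH'
# certificate has serial 0a8177fcd8936a91b5e0eddf995b0ba5.
if serial == "0dbf152deaf0b981a8a938d53f769db8":
    print("Signed with the old certificate - update to 8.0.8 or later.")
```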

Signed AnyDesk 8.0.6 (left) vs AnyDesk 8.0.8 (right)
Source: BleepingComputer

Certificates are usually not invalidated unless they have been compromised, such as being stolen in attacks or publicly exposed.

While AnyDesk has not shared when the breach occurred, Born reported that AnyDesk suffered a four-day outage starting on January 29th, during which the company disabled the ability to log in to the AnyDesk client.

“my.anydesk II is currently undergoing maintenance, which is expected to last for the next 48 hours or less,” reads the AnyDesk status message page.

“You can still access and use your account normally. Logging in to the AnyDesk client will be restored once the maintenance is complete.”

Yesterday, access was restored, allowing users to log in to their accounts, but AnyDesk did not provide any reason for the maintenance in the status updates.

However, AnyDesk has confirmed to BleepingComputer that this maintenance is related to the cybersecurity incident.

It is strongly recommended that all users switch to the new version of the software, as the old code signing certificate will soon be revoked.

Furthermore, while AnyDesk says that passwords were not stolen in the attack, the threat actors did gain access to production systems, so it is strongly advised that all AnyDesk users change their passwords. If they use their AnyDesk password on other sites, it should be changed there as well.

Every week, it feels like we learn of a new breach at a well-known company.

Last night, Cloudflare disclosed that they were hacked on Thanksgiving using authentication keys stolen during last year’s Okta cyberattack.

Last week, Microsoft also revealed that they were hacked by Russian state-sponsored hackers named Midnight Blizzard, who also attacked HPE in May.


Source :
https://www.bleepingcomputer.com/news/security/anydesk-says-hackers-breached-its-production-servers-reset-passwords/

Does the WiFi channel matter? A guide to which channel you should choose.

SEPTEMBER 20, 2022 BY MARK B

When you’re having trouble getting good performance from your wireless router or access point, the first setting that people usually change is the WiFi channel. And it makes sense, considering that the current one may be just a bit ‘too crowded’ – change the number, save, and the WiFi speed should come back to life, right?

It is possible to see an increase in throughput, but you should never change the settings blindly, hoping that something may stick. I admit I was guilty of doing just that some time ago, but the concept behind WiFi channels doesn’t need to be mystifying. So let’s have a look at what they are, their relationship with the channel bandwidth, and which settings are suitable for your network.


What is a WiFi channel?

I am sure that most of you are familiar with the 2.4GHz and 5GHz radio bands, but you need to understand that they’re not fixed frequency points; instead, they’re more like a spectrum of frequencies. The 2.4GHz band ranges from 2,402MHz to 2,483MHz and, when you tune to a specific frequency within this spectrum, you are essentially selecting a WiFi channel for your data transmission.

2.4GHz channels – 20MHz channel bandwidth.

For example, channel 1 is associated with 2,412MHz (the range is between 2,401 and 2,423MHz), channel 2 is 2,417MHz (2,406 to 2,428MHz), channel 7 is 2,442MHz (2,431 to 2,453MHz), and channel 14 is 2,484MHz (2,473 to 2,495MHz). As you can see, there is some overlap in the frequency ranges of certain channels, but we’ll talk more about that in a minute. The 5GHz radio band spans from 5,035MHz to 5,980MHz.

This means that channel 36 is associated with 5,180MHz (the range between 5,170 and 5,190MHz), channel 40 is 5,200MHz (between 5,190 and 5,210MHz), and channel 44 with 5,220MHz (the range between 5,210 and 5,230MHz). Now, let’s talk about overlapping and non-overlapping channels.
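
To make the mapping concrete, here is a minimal sketch of the channel-to-centre-frequency arithmetic behind the numbers above; the function names are just mine for illustration.

```python
def centre_mhz_24(channel: int) -> int:
    """Centre frequency in MHz for a 2.4GHz channel (1-14)."""
    # Channels 1-13 sit 5MHz apart starting from 2,412MHz; 14 is a special case.
    return 2484 if channel == 14 else 2407 + 5 * channel

def centre_mhz_5(channel: int) -> int:
    """Centre frequency in MHz for a 5GHz channel (e.g. 36, 40, 44)."""
    return 5000 + 5 * channel

print(centre_mhz_24(1), centre_mhz_24(7), centre_mhz_24(14))  # 2412 2442 2484
print(centre_mhz_5(36), centre_mhz_5(44))                     # 5180 5220
```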

Overlapping vs non-overlapping channels

If you have a look at the channel representation that I put together for the 2.4GHz frequency band using the 20MHz WiFi channel bandwidth, you can see that three channels stand out from the others. Channels 1, 6 and 11 are non-overlapping, and you can see from the graph that if your APs use these channels, they’re far less prone to interference.

5GHz – Channel allocation.

To get an even better idea, have a look at the graph representing the 5GHz channels and the way they’re grouped to create larger channel bandwidths. We talked about the two main types of interference, co-channel and adjacent-channel interference, when we analyzed the best channel bandwidth to use for the 5GHz band. The idea is that when devices use the same channel, they are forced to take turns, slowing down the network.

But it’s also possible for adjacent channels to bleed into each other, adding noise to the data and rendering the WiFi connection unusable. That’s why most people suggest keeping a narrower channel bandwidth and using non-overlapping channels if there are lots of APs in the area (which are not properly adjusted by a system admin).
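
To see the overlap rule in code, here is a small sketch, assuming the commonly quoted figure that a 20MHz transmission occupies roughly 22MHz of spectrum around its centre.

```python
def overlaps_24(ch_a: int, ch_b: int, width_mhz: int = 22) -> bool:
    """True when two 2.4GHz channels' transmissions bleed into each other."""
    centre = lambda ch: 2484 if ch == 14 else 2407 + 5 * ch
    return abs(centre(ch_a) - centre(ch_b)) < width_mhz

print(overlaps_24(1, 3))   # True  - adjacent channels interfere
print(overlaps_24(1, 6))   # False - the classic non-overlapping trio
print(overlaps_24(6, 11))  # False
```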

Changing the channel, but not the channel bandwidth

We already know that changing the channel bandwidth has a significant impact on WiFi performance: in a crowded environment, a 20MHz or 40MHz channel will deliver far more stable (although lower) throughput on the 5GHz frequency band.

Multiple wireless access points.

But what happens when we change the WiFi channel while keeping the same channel bandwidth? Again, it depends on whether you’re switching from an overlapping to a non-overlapping channel, because doing so may yield a noticeable increase in performance (just keep an eye on the available channels, because the wider the channel bandwidth, the fewer non-overlapping channels will be available for you to use). In the ideal scenario, where there is no interference, moving from one channel to another within the same bandwidth shouldn’t really make much of a difference in terms of data transfer rate.

Auto or manual WiFi channel selection?

Wireless routers and access points usually have the WiFi channel selection set to auto, which is why you may see your neighbors’ channels change annoyingly often. Every time they restart the router/AP, or there’s a power outage, the channel may change to whichever is the least crowded at that moment.

Abundance of wireless access points.

If you choose yours manually, you will have to keep up with the changes to neighboring WiFi networks, which is why it’s a good idea to keep the WiFi channel on your AP on auto as well. If we’re talking about an office or a large enterprise network, it’s obviously better to have full control over how the network behaves, so manual selection is the better choice.
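
If you want a quick snapshot of your own neighborhood before choosing manually, here is a rough sketch that assumes a Linux machine with NetworkManager installed: it counts nearby networks per channel via nmcli and picks the quietest of the non-overlapping 2.4GHz channels.

```python
import subprocess
from collections import Counter

# Terse nmcli output: one channel number per line for every visible network.
scan = subprocess.run(
    ["nmcli", "-t", "-f", "CHAN", "dev", "wifi", "list"],
    capture_output=True, text=True, check=True,
)

occupancy = Counter(int(ch) for ch in scan.stdout.split() if ch.isdigit())
quietest = min([1, 6, 11], key=lambda ch: occupancy.get(ch, 0))
print("Networks per channel:", dict(occupancy))
print("Least crowded non-overlapping 2.4GHz channel:", quietest)
```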

When should you use DFS channels?

DFS stands for Dynamic Frequency Selection, and it refers to frequencies that are usually reserved for military use or for radar systems (such as weather or airport equipment), which means the rules can differ from country to country. So make sure to check whether you’re allowed to use certain channels (especially if you got your wireless router or AP from abroad) before you get a knock on your door. It also goes without saying that you won’t be able to use these channels if you live near an airport.

Engenius EWS850AP access point.

That being said, the main benefit of using DFS channels is that you are no longer impacted by interference from your neighbors’ WiFi. But do be aware that, depending on the router, there is a high chance that if it detects a nearby radar using the same frequency, it will automatically switch to another WiFi channel.

Also, there is another problem that I have often encountered: not that many client devices will actually connect to a WiFi network that uses DFS channels, so you may find that while your PC and smartphone continue to have access to the Internet, pretty much every other smart or IoT device drops the connection.
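
If you’re unsure which channels your hardware treats as DFS, here is a hedged sketch for Linux that parses the output of iw list for channels flagged with radar detection; the exact output format varies by driver and iw version, so treat it as a starting point rather than a guarantee.

```python
import re
import subprocess

# Channels that require radar detection are the DFS ones; iw typically prints
# them as lines like "* 5260 MHz [52] (... radar detection)".
output = subprocess.run(["iw", "list"], capture_output=True, text=True).stdout

dfs_channels = sorted({
    int(match.group(1))
    for match in re.finditer(r"MHz \[(\d+)\].*radar detection", output)
})
print("DFS channels reported by the adapter:", dfs_channels)
```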

Source :
https://www.mbreviews.com/does-the-wifi-channel-matter/
