On February 15th, 2024, during our second Bug Bounty Extravaganza, we received a submission for an authenticated SQL Injection vulnerability in Tutor LMS, a WordPress plugin with more than 80,000 active installations. This vulnerability can be leveraged to extract sensitive data from the database, such as password hashes.
Props to Muhammad Hassham Nagori who discovered and responsibly reported this vulnerability through the Wordfence Bug Bounty Program. This researcher earned a bounty of $625.00 for this discovery during our Bug Bounty Program Extravaganza. Our mission is to Secure the Web, which is why we are investing in quality vulnerability research and collaborating with researchers of this caliber through our Bug Bounty Program. We are committed to making the WordPress ecosystem more secure, which ultimately makes the entire web more secure.
All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those using the free version of our plugin, are protected against any exploits targeting this vulnerability by the Wordfence firewall’s built-in SQL Injection protection.
We contacted Themeum on February 22, 2024, and received a response on February 23, 2024. After providing full disclosure details, the developer released a patch on March 11, 2024. We would like to commend Themeum for their prompt response and timely patch.
We urge users to update their sites with the latest patched version of Tutor LMS, which is version 2.6.2, as soon as possible.
The Tutor LMS – eLearning and online course solution plugin for WordPress is vulnerable to time-based SQL Injection via the question_id parameter in all versions up to, and including, 2.6.1 due to insufficient escaping on the user supplied parameter and lack of sufficient preparation on the existing SQL query. This makes it possible for authenticated attackers, with subscriber/student access or higher, to append additional SQL queries into already existing queries that can be used to extract sensitive information from the database.
Technical Analysis
Tutor LMS is a WordPress plugin which includes many features, such as a course builder, quiz and assignment types, dashboard, payment and WooCommerce integration, and a lot of other add-ons.
Unfortunately, insecure implementation of the plugin’s Q&A questions query functionality allows for SQL injection. Examining the code reveals that the plugin uses the get_qa_questions() function in the Utils class to query Q&A questions, where the id can be specified with the ‘question_id’ parameter.
This function is called in several view files. In some cases, the ‘question_id’ GET input value is sanitized in the view file:
However, there are also cases where it is not sanitized:
$question_id = Input::get( 'question_id' );
The get_qa_questions() function contains the following code:
$question_clause = $question_id ? ' AND _question.comment_ID=' . $question_id : '';
Here we can see that no sanitization function is applied to the question ID in the get_qa_questions() function, even though this ID is always meant to be an integer.
Typically, the prepare() function would parameterize and escape the SQL query for safe execution in WordPress, thereby providing protection against SQL injection attacks. In this instance, however, the $question_clause value is not passed as a parameter; it is simply appended to the query as a string. This means that prepare() will not actually escape the data being passed to the SQL query, making it possible to break out of the current query and inject new queries to extract data.
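The difference is easy to demonstrate outside WordPress. The following minimal Python sketch uses SQLite as a stand-in for the plugin's actual PHP/$wpdb code (the table and column names are illustrative, not the plugin's schema); it shows how an "ID" concatenated into the SQL string lets attacker input alter the query, while a bound parameter is always treated as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (comment_ID INTEGER, secret TEXT)")
conn.execute("INSERT INTO comments VALUES (1, 'public'), (2, 'hidden')")

def query_concatenated(question_id):
    # Vulnerable: the "ID" is appended to the SQL string, like $question_clause.
    sql = "SELECT secret FROM comments WHERE comment_ID = " + question_id
    return [row[0] for row in conn.execute(sql)]

def query_parameterized(question_id):
    # Safe: the value is bound as a parameter and never interpreted as SQL.
    sql = "SELECT secret FROM comments WHERE comment_ID = ?"
    return [row[0] for row in conn.execute(sql, (question_id,))]

payload = "1 OR 1=1"                 # attacker-controlled "question_id"
print(query_concatenated(payload))   # injection succeeds: returns every row
print(query_parameterized(payload))  # the payload is treated as data: no match
```

The same principle applies to $wpdb: only values passed as prepare() placeholders are escaped; anything concatenated into the query string bypasses that protection entirely.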
Union-Based SQL injection is not possible due to the structure of the query, which means an attacker would need to use a Time-Based blind approach to extract information from the database. This means that they would need to use SQL CASE statements along with the SLEEP() command while observing the response time of each request to steal information from the database. This is an intricate, yet frequently successful method to obtain information from a database when exploiting SQL Injection vulnerabilities.
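To make the time-based technique concrete, here is a simplified Python sketch of the character-by-character extraction loop. The oracle() function and SECRET value are stand-ins for the real timing side channel; in an actual attack, each guess would be one HTTP request carrying a CASE/SLEEP payload, and a slow response would signal a correct guess:

```python
import string

# Simulated time-based blind oracle. In a real attack, the attacker would send
# a payload along the lines of:
#   1 AND (SELECT CASE WHEN SUBSTRING(target,1,1)='a' THEN SLEEP(3) ELSE 0 END)
# and treat a slow HTTP response as "True". Here a local function stands in
# for that side channel so the extraction logic is runnable offline.
SECRET = "s3cret"  # hypothetical value the attacker wants to recover

def oracle(position, guess):
    """Return True when the 'response was slow', i.e. the guess was correct."""
    return position < len(SECRET) and SECRET[position] == guess

def extract_secret(max_len=32, alphabet=string.ascii_lowercase + string.digits):
    recovered = ""
    for pos in range(max_len):
        for ch in alphabet:
            if oracle(pos, ch):   # one "request" per candidate character
                recovered += ch
                break
        else:
            break                 # no character matched: end of the value
    return recovered

print(extract_secret())  # recovers the secret one character at a time
```

Even at one request per candidate character, this loop recovers a value in a practical number of requests, which is why time-based blind injection remains effective despite being slow.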
Wordfence Firewall
The following graphic demonstrates the steps an attacker might take to exploit the vulnerability, and the point at which the Wordfence firewall would block the attempt.
The Wordfence firewall rule detects the malicious SQL query and blocks the request.
Disclosure Timeline
February 15, 2024 – We receive the submission of the SQL Injection vulnerability in Tutor LMS via the Wordfence Bug Bounty Program.
February 22, 2024 – We validate the report and confirm the proof-of-concept exploit.
February 22, 2024 – We initiate contact with the plugin vendor, asking that they confirm the inbox for handling the discussion.
February 23, 2024 – The vendor confirms the inbox for handling the discussion.
February 24, 2024 – We send over the full disclosure details. The vendor acknowledges the report and begins working on a fix.
March 11, 2024 – The fully patched version of the plugin, 2.6.2, is released.
Conclusion
In this blog post, we detailed a SQL Injection vulnerability within the Tutor LMS plugin affecting versions 2.6.1 and earlier. This vulnerability allows authenticated threat actors to inject malicious SQL queries to steal sensitive information from the database. The vulnerability has been fully addressed in version 2.6.2 of the plugin.
We encourage WordPress users to verify that their sites are updated to the latest patched version of Tutor LMS.
All Wordfence users, including those running Wordfence Premium, Wordfence Care, and Wordfence Response, as well as sites running the free version of Wordfence, are fully protected against this vulnerability.
If you know someone who uses this plugin on their site, we recommend sharing this advisory with them to ensure their site remains secure, as this vulnerability poses a significant risk.
This article explains how to integrate the Content Filtering Service with LDAP (with Single Sign-On) using SonicOS 7.0.1 or older.
A restricted user group from Active Directory is imported to the SonicWall, and users in that group are given restricted web access, while the Full Access user group has full or partial access to websites.
Resolution
1. Enable Content Filtering Service from Policy | Security Services | Content Filter.
2. Navigate to Profile Objects | Content Filter and access the Profile Objects tab. Create a new Content Filter Profile and allow/block each category according to your needs.
3. Make sure to enable HTTPS Content Filtering. This option is disabled by default.
4. Create another Content Filter Profile as a Restricted Access CFS policy for the Restricted user group. Click Add and add a policy for the Restricted group with most of the categories enabled (depending on what should be blocked).
5. Create a Full Access CFS policy for the Full Access user group. Add a second policy for the Full Access group with certain categories (or all categories) enabled, depending on what should be allowed.
Configuring LDAP on SonicWall
For more information about how to enable LDAP on the SonicWall, please refer to the link below.
Navigate to Users | Settings page, in the Authentication method for login drop-down list, select LDAP + Local Users and click Configure.
On the Settings tab of the LDAP Configuration window, configure the following fields.
Name or IP address: IP address of the LDAP server
Port Number: 389 (default LDAP port)
Server timeout (seconds): 10 (default)
Overall operation timeout (minutes): 5 (default)
Select Give login name/location in tree
On the Login/Bind tab, select Give login name/location in tree, and set the admin user name and password used to access your LDAP server.
On the Schema tab, set LDAP Schema to Microsoft Active Directory.
On the Directory tab, configure the following fields.
Primary domain:The user domain used by your LDAP implementation.
User tree for login to server: The location in the tree of the user specified on the Settings tab.
Click Auto-configure. (This will populate the Trees containing users and Trees containing user groups fields by scanning through the directories in search of all trees that contain user objects.)
On the LDAP Test tab, Test LDAP connectivity to make sure that the communication is successful.
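As a complement to the LDAP Test tab, you can sanity-check basic TCP reachability to the directory server from any host with a short Python script. This is a generic sketch (the server address below is a placeholder); it only confirms that the LDAP port is open, not that the bind credentials or schema settings are valid:

```python
import socket

def ldap_port_reachable(host, port=389, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address): check the default LDAP port on the server
# configured on the Settings tab.
print(ldap_port_reachable("192.0.2.10", 389, timeout=1))
```

If this returns False while the LDAP Test tab also fails, the problem is likely network-level (routing, firewall rules, or the LDAP service not listening) rather than a credential or schema issue.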
Importing Groups from LDAP to the SonicWall unit
Navigate to Users | Local Groups.
Click Import from LDAP
Click Configure for the Group that is imported from LDAP.
Go to the CFS Policy tab, select the appropriate CFS policy from the drop-down, and click OK.
Configuring Single Sign-On Method on SonicWall
For more information about how to enable the SSO Agent and enable SSO on the SonicWall, please refer to the link below.
Navigate to Users | Settings.
In the Single-sign-on method field, select SonicWall SSO Agent and click Configure. The SSO configuration page is displayed.
Under the Settings tab, click the Add button to add the workstation that has the SSO agent running; the Settings window is displayed.
In the HostName or IP Address field, enter the name or IP address of the workstation on which SonicWall SSO Agent is installed.
In the Port Number field, enter the port number of the workstation on which SonicWall SSO Agent is installed. The default port is 2258.
In the Shared Key field, enter the shared key that you created or generated in the SonicWall SSO Agent. The shared key must match exactly. Re-enter the shared key in the Confirm Shared Key field. Click Apply.
Once the SSO Agent is successfully added, a green status light is shown under the Authentication Agent Settings.
Click Test tab. The Test Authentication Agent Settings page displays.
Select the Check agent connectivity radio button then click the Test button. This will test communication with the authentication agent. If the SonicWall security appliance can connect to the agent, you will see the message Agent is ready.
Select the Check user radio button, enter the IP address of a workstation in the Workstation IP address field, then click Test. This will test whether the agent is properly configured to identify the user logged into a workstation.
NOTE: Performing tests on this page applies any changes that have been made. TIP: If you receive the messages Agent is not responding or Configuration error, check your settings and perform these tests again.
When you are finished, click OK.
Enabling CFS for the LAN Zone and applying Imported LDAP Group
CAUTION: This change is not recommended in a production environment, because these changes take effect instantly and can affect all the computers on the LAN. It is best to schedule downtime before proceeding further.
Navigate to Network | Zones and click the Configure button for the LAN zone.
Check the Enforce Content Filtering Service box and select the default CFS policy from the drop-down.
How to TEST
Log out of the Windows domain computer, log back in with a user from either the full access or restricted access group, and check whether the policy is enforced correctly for that user.
You’re not alone if you have issues with Windows 11’s KB5034765. The February 2024 security update for Windows 11 causes File Explorer to crash when rebooting the system, and some have found it causes the taskbar to disappear. Additionally, many users are having problems installing the update.
Microsoft sources have confirmed to Windows Latest that the company is aware of an issue that causes the taskbar to crash or disappear briefly after installing KB5034765. I’m told the company has already rolled out a fix. This means some of you should be able to see the taskbar again after reinstalling the patch (remove and install it again).
But that’s not all. The February 2024 update has other problems, too. In our tests, we observed that the Windows 11 KB5034765 update repeatedly fails to install with errors 0x800f0922, 0x800f0982, and 0x80070002.
Multiple users told me that when they tried to install the security patch, everything seemed fine at first: the update downloads and asks for a restart. But during installation, Windows Update stops and reports a problem. It retries a few times and then returns to the desktop without updating.
KB5034765 is not installing, but there’s a fix
Our device also attempted the “rollback” after successfully downloading the February 2024 cumulative update, but the process was stuck on the following screen for ten minutes:
Something didn’t go as planned. No need to worry—undoing changes. Please keep your computer on.
I tried a few things to fix it. For example, I removed programs that didn’t come with Windows, cleared the Windows Update cache, and used the Windows Update troubleshooter. None of these solutions worked.
However, there’s some good news. It looks like we can successfully install KB5034765 by deleting a hidden folder named $WinREAgent. There are multiple ways to locate and delete this folder from a Windows 11 installation; choose your preferred one:
Method 1: Run Disk Cleanup as an administrator, select the system drive, and check the boxes for “Temporary files” and other relevant options. Finally, click “OK” to remove the system files, including Windows Update files. This will delete unnecessary files within $WinREAgent.
Method 2: Open File Explorer and open the system drive, but make sure you’ve turned on view hidden items from folder settings. Locate $WinREAgent and remove it from the system.
Method 3: Open Command Prompt as Administrator, and run the following command: rmdir /S /Q C:\$WinREAgent
Windows Update causes File Explorer to crash on reboot
Some PC owners are also running into another problem that causes File Explorer to crash when rebooting or shutting down the system.
The error message indicates an application error with explorer.exe, mentioning a specific memory address and stating, “The memory could not be written”.
“The instruction at 0x00007FFB20563ACa referenced memory at 0x0000000000000024. The memory could not be written. Click on OK to terminate the program,” the error message titled “explorer.exe – Application Error” reads.
This issue seems to persist regardless of various troubleshooting efforts. Users have tried numerous fixes, including running the System File Checker tool (sfc /scannow), testing their RAM with Windows’ built-in tool and memtest86+, and even performing a clean installation of the latest Windows 11 version.
Despite these efforts, the error remains.
Interestingly, a common factor among affected users is the presence of a controller accessory, such as an Xbox 360 controller for Windows, connected to the PC. This connection has been observed, but it’s unclear if it directly contributes to the problem.
Microsoft’s release notes for the KB5034765 update mentioned a fix for an issue where explorer.exe could stop responding when a PC with a controller accessory attached is restarted or shut down.
However, despite this so-called official fix, users report that the problem still occurs, and it’s not possible to manually fix it.
Windows 11 taskbar crashes or disappears after the patch
As mentioned at the outset, the Windows 11 KB5034765 update causes the taskbar to disappear or crash when you reboot or turn on the device.
According to my sources, Microsoft has already patched the issue via server-side update, but if your taskbar or quick settings like Wi-Fi still disappear, try the following steps:
Open Settings, go to the Windows Update section and click Update History. On the Windows Update history page, click Uninstall updates, locate KB5034765 and click Uninstall.
Confirm your decision, click Uninstall again, and reboot the system.
Go to Settings > Windows Update and check for updates to reinstall the security patch.
In most cases, though, the above steps are unnecessary, as the server-side update will automatically apply to your device.
Mayank Parmar is Windows Latest’s owner, Editor-in-Chief and entrepreneur. Mayank has been in tech journalism for over seven years and has written on various topics, but he is mostly known for his well-researched work on Microsoft’s Windows. His articles and research have been referenced by CNN, Business Insider, Forbes, Fortune, CBS Interactive, Microsoft and many others over the years.
WRITTEN ON FEBRUARY 14, 2024 BY NOLEN JOHNSON (NPJOHNSON)
21 – Finally old enough to drink (at least in the US)!
Hey y’all! Welcome back!
We’re a bit ahead of schedule this year, we know normally you don’t expect to hear from us until April-ish.
This was largely thanks to some new faces around the scene, some old faces stepping up to the plate, and several newly appointed Project Directors!
With all that said, we have been working extremely hard since Android 14’s release last October to port our features to this new version of Android. Thanks to our hard work adapting to Google’s largely UI-based changes in Android 12/13, and Android 14’s dead-simple device bring-up requirements, we were able to rebase our changes onto Android 14 much more efficiently.
This lets us spend some much overdue time on our apps suite! Applications such as Aperture had their features and UX improved significantly, while many of our aging apps such as Jelly, Dialer, Contacts, Messaging, LatinIME (Keyboard), and Calculator got near full redesigns that bring them into the Material You era!
…and last but not least, yet another new app landed in our apps suite! Don’t get used to it though, or maybe do, we’re not sure yet.
Now, let’s remind everyone about versioning conventions: to match AOSP’s versioning conventions, and because the subversion added no notable value to the end-user, we dropped it from a branding perspective.
As Android has moved onto the quarterly maintenance release model, this release will be “LineageOS 21”, not 21.0 or 21.1 – though worry not – we are based on the latest and greatest Android 14 version, QPR1.
Additionally, to you developers out there – any repository that is not core-platform, or isn’t expected to change in quarterly maintenance releases will use branches without subversions – e.g., lineage-21 instead of lineage-21.0.
New Features!
Security patches from January 2023 to February 2024 have been merged to LineageOS 18.1 through 21.
Glimpse of Us: We now have a shining new app, Glimpse! It will become the default gallery app starting from LineageOS 21
An extensive list of applications were heavily improved or redesigned:
Aperture: A touch of Material You, new video features, and more!
Calculator: Complete Material You redesign
Contacts: Design adjustments for Material You
Dialer: Large cleanups and code updates, Material You and bugfixes
Eleven: Some Material You design updates
Jelly: Refreshed interface, Material You and per-website location permissions
LatinIME: Material You enhancements, spacebar trackpad, fixed number row
Messaging: Design adjustments for Material You
A brand new boot animation by our awesome designer Vazguard!
SeedVault and Etar have both been updated to their newest respective upstream version.
WebView has been updated to Chromium 120.0.6099.144.
We have further developed our side pop-out expanding volume panel.
Our Updater app should now install A/B updates much faster (thank Google!)
We have contributed even more changes and improvements back upstream to the FOSS Etar calendar app we integrated some time back!
We have contributed even more changes and improvements back upstream to the Seedvault backup app.
Android TV builds still ship with an ad-free Android TV launcher, unlike Google’s ad-enabled launcher – most Android TV Google Apps packages now have options to use the Google ad-enabled launcher or our ad-restricted version.
Our merge scripts have been largely overhauled, greatly simplifying the Android Security Bulletin merge process, as well as making supporting devices like Pixel devices that have full source releases much more streamlined.
Our extract utilities can now extract from OTA images and factory images directly, further simplifying monthly security updates for maintainers on devices that receive security patches regularly.
LLVM has been fully embraced, with builds now defaulting to using LLVM bin-utils and optionally, the LLVM integrated assembler. For those of you with older kernels, worry not, you can always opt-out.
A global Quick Settings light mode has been developed so that this UI element matches the device’s theme.
Our Setup Wizard has seen adaptation for Android 14, with improved styling, more seamless transitions, and significant amounts of legacy code being stripped out.
The developer-kit (e.g. Radxa 0, Banana Pi B5, ODROID C4, Jetson X1) experience has been heavily improved, with UI elements and settings that aren’t related to their more restricted hardware feature-set being hidden or tailored!
Amazing Applications!
Calculator
Our Calculator app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:
Code cleanup
Reworked UI components to look more modern
Added support for Material You
Fixed some bugs
Glimpse
We’ve been working on a new gallery app, called Glimpse, which will replace Gallery2, the AOSP default gallery app.
Thanks to developers SebaUbuntu, luca020400 and LuK1337 who started the development, together with the help of designer Vazguard.
We focused on a clean, simple and modern-looking UI, designed around Material You’s guidelines, making sure all the features that you would expect from a gallery app are there.
It’ll be available on all devices starting from LineageOS 21.
Aperture
This has been the first year for this new application and we feel it has been received well by the community. As promised, we have continued to improve it and add new features, while keeping up with Google’s changes to the CameraX library (even helping them fix some bugs found on some of our maintained devices). We’d like to also thank the community for their work on translations, especially since Aperture strings changed quite often this year.
Here’s a quick list of some of the new features and improvements since the last update:
Added a better dialog UI to ask the user for location permissions when needed
UI will now rotate to follow the device orientation
Added Material You support
Improved QR code scanner, now with support for Wi-Fi and Wi-Fi Easy Connect™ QR codes
Added support for Google Assistant voice actions
Added photo and video mirroring (flipping) options
Audio can be muted while recording a video
Better error handling, including when no camera is available
Added configurable volume button gestures
The app will now warn you if the device overheats and is now able to automatically stop recording if the device temperature is too high
Added an information chip on top of the viewfinder to show some useful information, like low battery or disabled microphone
Added some advanced video processing settings (noise reduction, sharpening, etc.)
You can now set the flash to torch mode in photo mode by long-pressing the flash button
Added support for HDR video recording
Jelly
Our browser app has received a UI refresh, bringing it in sync with the rest of our app suite, as well as a few new features:
Code cleanup
Reworked UI components to look more modern
Added support for Material You
Fixed some bugs regarding downloading files
Added Brave as a search engine and suggestions provider
Dropped Google encrypted search engine, as Google defaults to HSTS now
Baidu suggestion provider now uses HTTPS
Implemented per-website location permissions
Dialer, Messaging, and Contacts
Since AOSP deprecated the Dialer, we have taken over the code base, done heavy cleanups, updated it to newer standards (AndroidX), and redesigned it:
Code cleanup
Changed to using Material You design
Proper dark and light themes
Several bugfixes, specifically with number lookups and the contact list
While Messaging was also deprecated by AOSP, the Contacts app at least was not. Nonetheless, we gave both of them an overhaul and made them follow the system colors and look more integrated.
Careful Commonization
Several of our developers have worked hard on SoC-specific common kernels to base on that can be merged on a somewhat regular basis to pull in the latest features/security patches to save maintainers additional effort.
Go check them out and consider basing your device kernels on them!
Additionally, many legacy devices require interpolating libraries that we colloquially refer to as “shims” – these have long been device and maintainer managed, but this cycle we have decided to commonize them to make the effort easier on everyone and not duplicate effort!
You can check it out here and contribute shims that you think other devices may need or add additional components to additional shims and compatibility layers provided via Gerrit!
Deprecations
Overall, we feel that the 21 branch has reached feature and stability parity with 20 and is ready for initial release.
For the first time in many cycles, all devices that shipped LineageOS 19.1 were either promoted or dropped by the maintainer by the time of this blog post, so LineageOS 19.1 was retired naturally. As such, no new device submissions targeting the 19.1 branch will be able to ship builds (you can still apply and fork your work to the organization, though!).
LineageOS 18.1 builds were still not deprecated this year, as Google’s somewhat harsh requirement of BPF support in all Android 12+ devices’ kernels meant that a significant number of our legacy devices on the build roster would have died.
LineageOS 18.1 is still in feature freeze, with each device building monthly, shortly after the Android Security Bulletin is merged for that month.
We will allow new LineageOS 18.1 submissions to be forked to the organization, but we no longer will allow newly submitted LineageOS 18.1 devices to ship.
LineageOS 21 will launch building for a decent selection of devices, with additional devices to come as they are marked as both Charter compliant and ready for builds by their maintainer.
Upgrading to LineageOS 21
To upgrade, please follow the upgrade guide for your device by clicking on it here and then on “Upgrade to a higher version of LineageOS”.
If you’re coming from an unofficial build, you need to follow the good ole’ install guide for your device, just like anyone else looking to install LineageOS for the first time. These can be found at the same place here by clicking on your device and then on “Installation”.
Please note that if you’re currently on an official build, you DO NOT need to wipe your device, unless your device’s wiki page specifically dictates otherwise, as is needed for some devices with massive changes, such as a repartition.
Download portal
While it has been in the making for quite a while and was already released a year ago, it’s still news with regard to this blog post. Our download portal has been redesigned and also gained a few functional improvements:
Dark mode
Downloads of additional images (shown for all devices but not used on all of them, read the instructions to know which ones you need for your device’s installation!)
Verifying downloaded files (see here) – if you go with any download not obtained from us, you can still verify it was originally signed by us and thus untampered with
Wiki
The LineageOS Wiki has also been expanded throughout the year and now offers, in addition to the known and tested instructions for all supported devices, some improvements:
The device overview allows filtering for various attributes you might be interested in a device (please note: choosing a device only based on that list still does not guarantee any device support beyond the point of when you chose it)
The device overview now lists variants of a device and other known marketing names in a more visible way, also allowing for different device information and instructions per variant to be shown
The installation instructions have been paginated, giving users less chance to skip a section involuntarily
In addition to that, we’d like to take this time to remind users to follow the instructions on their device’s respective wiki page. Given the complexity introduced by AOSP changes like System-As-Root, the A/B partition scheme, dynamic partitions, and most recently Virtual A/B (found on the Pixel 5 and other devices launching with Android 11), the instructions many of you are used to following from memory are either no longer valid or are missing very critical steps. As of 16.0, maintainers have been expected to run through the full instructions and verify they work on their devices. The LineageOS Wiki was recently further extended, and maintainers were given significantly more options to customize their device’s specific installation, update, and upgrade instructions.
Developers, Developers, Developers
Or, in this case, maintainers, maintainers, maintainers. We want your device submissions!
If you’re a developer and would like to submit your device for officials, it’s easier than ever. Just follow the instructions here.
The above also applies to people looking to bring back devices that were at one point official but are no longer supported – seriously – even if it’s not yet completely compliant, submit it! Maybe we can help you complete it.
After you submit, you’ll generally receive feedback on your device submission within a few weeks, and in most cases within a week; if it’s up to par, you’ll be invited to our communications instances and your device will be forked to LineageOS’s official repositories.
Don’t have the knowledge to maintain a device, but want to contribute to the platform? We have lots of other things you can contribute to. For instance, our apps suite is always looking for new people to help improve them, or you can contribute to the wiki by adding more useful information & documentation. Gerrit is always open for submissions! Once you’ve contributed a few things, send an email to devrel(at)lineageos.org detailing them, and we’ll get you in the loop.
Also, if you sent a submission via Gmail over the last few months, due to infrastructural issues, some of them didn’t make it to us, so please resend them!
Generic Targets
We’ve talked about these before, but these are important, so we will cover them again.
Though we’ve had buildable generic targets since 2019, to make LineageOS more accessible to developers, and really anyone interested in giving LineageOS a try, we’ve documented how to use them in conjunction with the Android Emulator/Android Studio!
Additionally, similar targets can now be used to build GSI in mobile, Android TV configurations, and Android Automotive (we’ll talk more about this later) making LineageOS more accessible than ever to devices using Google’s Project Treble. We won’t be providing official builds for these targets, due to the fact the user experience varies entirely based on how well the device manufacturer complied with Treble’s requirements, but feel free to go build them yourself and give it a shot!
Please note that Android 12 (and by proxy Android 13/14) diverged GSI and Emulator targets. Emulator targets reside in lineage_sdk_$arch, while GSI targets reside in lineage_gsi_$arch.
Translations
Bilingual? Trilingual? Anything-lingual?
If you think you can help translate LineageOS to a different language, jump over to our wiki and have a go! If your language is not supported natively in Android, reach out to us on Crowdin and we’ll take the necessary steps to include your language. For instance, LineageOS is the first Android custom distribution that has complete support for the Welsh (Cymraeg) language thanks to its community of translators.
Please, contribute to translations only if you are reasonably literate in the target language; poor translations waste both our time and yours.
Looking to rapidly migrate to the cloud? Scale cost-effectively and strengthen disaster recovery? You’re not alone. Here’s how organizations are unlocking the power of hybrid cloud with VMware Cloud on AWS.
Visit Part 1, Part 2, Part 3 and Part 4 of the series for more customer stories across various use cases.
Organizations across different industries continue to benefit from hybrid cloud with VMware Cloud on AWS. Their goals are to increase agility, improve business continuity, scale cost-effectively and rapidly migrate or extend to the public cloud with an integrated, highly available and secure solution.
In this blog, we’ll share more of our customers’ stories and their experiences across various use cases, the business outcomes and how VMware Cloud on AWS helped them overcome the challenges of implementing a hybrid cloud.
Scale beyond on-premises infrastructure to the public cloud
IHS Markit
IHS Markit is a global leader in information, analytics and solutions for the major industries and markets that drive economies worldwide. With more than 5,000 analysts, data scientists, financial experts and industry specialists, their information expertise spans numerous industries, including leading positions in finance, energy and transportation.
“VMware Cloud on AWS helps us build on our success with VMware in our private, on-premises environment and cost-effectively extend services to a global hybrid cloud.” – Ben Tanner, Director of Cloud Enablement, IHS Markit
Get more details about IHS Markit’s experience in this summary.
ZOZO
ZOZO Technologies is the platform developer of ZOZOTOWN, the largest online fashion store in Japan. With the infrastructure supporting ZOZOTOWN built around an on-premises environment, coping with the winter sale – which generates the highest amount of traffic each year – was a challenge. Therefore, the company expanded its existing environment with VMware Cloud on AWS, which has excellent compatibility with conventional infrastructure, and moved to a pay-as-you-go system.
“Expanding our data center to the cloud was a huge challenge for us. By using VMware Cloud on AWS, we not only successfully survived our winter sale, which has the highest volume of traffic each year, but our company also accumulated knowledge that we’ll be able to use in the future.” – Nobuhiko Watanabe ZOZO-SRE Team Leader, SRE Department, Technology Development Division, ZOZO Technologies, Inc.
IndusInd Bank
IndusInd Bank’s vision to provide best-in-class digital banking services required the right technology support to achieve industry dominance. VMware’s solutions helped the bank get centralized control over its applications, hosted either on-premises or in the public cloud. This increased the bank’s agility to deliver business outcomes and consumer applications with a mobile-first strategy, keeping the employee experience in mind. VMware helped IndusInd Bank deploy a hybrid cloud, running on both on-premises infrastructure and VMware Cloud on AWS.
“To create a robust platform for our evolving digital experience capabilities, we needed a scalable and agile solution that could handle a significant increase in customer transactions in both assisted and direct channels. For mission-critical workloads, VMware Cloud on AWS allowed us to enhance the on-premise private cloud set-up, with the flexibility to scale up on demand across private clouds in AWS and on-premise, thereby ensuring that we leverage the proven capabilities of scale with consistency and availability for our businesses.” – Biswabrata Chakravorty, CIO, IndusInd Bank
Get more details about IndusInd Bank’s experience in this case study.
ZENRIN DataCom
ZENRIN DataCom Co., Ltd. is a leading Japanese map publisher. To develop a consistent hybrid cloud IT infrastructure, the company adopted VMware Cloud on AWS. They’re now benefiting from simplified hybrid operations and expect reduced server costs.
“There will be cumulative cost benefits as more services are offered on VMware Cloud on AWS. Without VMware Cloud on AWS, our development costs may have been double or triple!” – Masayoshi Oku, Director and Executive Officer, Engineering Division, ZENRIN DataCom
888 Holdings
888 Holdings Public Limited Company (888) is one of the world’s most popular online gaming entertainment and solutions providers. 888’s headquarters and main operations are in Gibraltar, and the company has licenses in Gibraltar, the U.K., Spain, Italy, Denmark, Ireland, Romania, and the U.S.
888 wants to reduce customer ‘churn’. It operates in a market with little brand loyalty and wants to enhance the end-user experience. It also wants to leverage big data to stay competitive. The company also must cope with the unexpected. The U.K.’s decision to leave the European Union (EU) has impacted where data can be held. 888 needed to move workloads from its data center in Gibraltar, a British Overseas Territory, to a new center somewhere in the EU. VMware Cloud on AWS enables 888 Holdings to rapidly extend, migrate and protect its VMware environment in the AWS public cloud.
“VMware Cloud on AWS means we have a disaster recovery site that is on-call only when we need it – we’re not paying for it when it’s at rest.” – Eran Elbaz, CIO, 888 Holdings
EMPLOYERS
EMPLOYERS strives to meet the needs of its small business insurance policyholders while working to bolster the long-term success of its thousands of appointed agents, many running small businesses of their own. After experiencing new growth of 29 percent in 2018 during its 105th year in business, EMPLOYERS needed to ensure its IT environment would continue to support evolving business needs. In particular, the company’s strategic plans included rolling out new capabilities centered on improving the agent and end-customer experience to help foster the organization’s growth and retention goals.
They selected VMware solutions for the underpinnings of their new foundation, with VMware HCX for accelerated migration to the cloud and VMware Cloud on AWS for their disaster recovery needs. Read more about their story.
Scale and protect on-premises VDI needs in the public cloud
PennyMac
PennyMac, a leading national mortgage lender, is committed to providing every customer with the right home loan and superior service long after closing.
They needed to scale their virtual environment very rapidly in response to changing market conditions. In this video, hear how they leveraged VMware Cloud on AWS to migrate their VDI environment to the public cloud.
OSRAM Continental
OSRAM Continental develops innovative automotive lighting systems to meet the needs of modern mobility concepts. Launched in 2018, the joint venture quickly set up an entirely new, ready-to-run IT infrastructure based on a cloud principle and VMware Cloud on AWS. Thanks to a virtual desktop infrastructure, the company benefits from time and cost savings, maximum flexibility and centralized management, thereby creating the infrastructural prerequisites for an industry in transition.
“VMware Cloud on AWS enabled us to build our entire IT and process landscape from scratch in just six months.” – Michael Schöberl, CIO, OSRAM Continental
Learn more about OSRAM Continental’s journey in this case study.
As the above examples show, customers are increasing agility, improving business continuity, scaling cost-effectively and rapidly migrating or extending to the public cloud with VMware Cloud on AWS, an integrated, highly available and secure solution.
Cheryl Young is a Product Line Marketing Manager in the Cloud Infrastructure Business Group at VMware focused on Google Cloud VMware Engine. She has been working in the enterprise software…
Hornetsecurity is implementing an update to enhance email security by enforcing checks on the “Header-From” value in emails, as per RFC 5322 standards. This initiative is driven by several key reasons:
Preventing Email Delivery Issues: Historically, not enforcing the validity of the originator email address has led to emails being accepted by our system but ultimately rejected by the final destination, especially with most customers now using cloud email service providers that enforce stricter validation.
Enhanced Protection Against Spoofed Emails: By strictly validating the “Header-From” value, we aim to significantly reduce the risk of email spoofing.
Enhance Email Authentication for DKIM/DMARC Alignment: By enforcing RFC 5322 compliance in the “Header-From” field, we can ensure better alignment with DKIM and DMARC standards, thereby significantly improving the security and authenticity of email communications.
Malformed “From” headers often stem from incorrect email server configuration on the sender’s side or from bugs in scripts and other applications. Our new protocol aims to rectify these issues, ensuring that all emails passing through our system are fully compliant with established standards, thus improving the overall security and reliability of email communications.
Implementation Timeline
Stage 1 (Starting 4 March 2024): 1-5% of invalid emails will be rejected.
Stage 2 (Second week): 30% rejection rate.
Stage 3 (Third week): 60% rejection rate.
Final Stage (By the end of the fourth week): 100% rejection rate.
Impact Assessment
Extensive testing over the past six months indicates that the impact on legitimate email delivery is expected to be minimal. However, email administrators should be prepared for potential queries from users experiencing email rejections.
Handling Rejections
When an email is rejected due to a malformed “Header-From”, the sender will receive a bounce-back message with the error “510 5.1.7 malformed Header-From according to RFC 5322”. This message indicates that the email did not meet the necessary header standards.
Identifying Affected Emails
Email administrators can identify affected emails in the Hornetsecurity Control Panel (https://cp.hornetsecurity.com) using the following steps:
Navigate to ELT in the Hornetsecurity Control Panel.
Select your tenant in the top right field.
Choose a date range for your search. A shorter range will yield quicker results.
Click in the “Search” text box, select the “Msg ID” parameter, and type in “hfromfailed” (exact string).
Press ENTER to perform the search.
When email administrators identify emails affected by the “Header-From” checks in the Email Live Tracking (ELT) system, they should promptly verify that the sending email application or server is configured to comply with RFC 5322 standards. This will help maintain email flow integrity.
Defining Exceptions
In implementing the new “Header-From” checks, Hornetsecurity recognizes the need for flexibility in certain cases. Therefore, we have provisioned for the definition of exceptions to these checks.
This section details how to set up these exceptions and the timeline for their deprecation:
Creating Exception Rules: Within the Compliance Filter, you can create rules that define exceptions to the “Header-From” checks. These should be based on the envelope sender address.
Applying the Exceptions: Once defined, these exceptions will allow certain emails to bypass the strict “Header-From” checks.
Timeline for Deprecation of Exceptions applied to the new Header-From checks
Initial Implementation: The ability to define exceptions is available as part of the initial rollout of the “Header-From” checks.
Deprecation Date: These exception provisions are set to be deprecated by the end of June 2024.
The provision for exceptions is intended as a temporary measure to facilitate a smoother transition to the new protocol. By June 2024, all email senders are expected to have had sufficient time to align their email systems with RFC 5322 standards. Deprecating the exceptions is a step towards ensuring full compliance and maximizing the security benefits of the “Header-From” checks.
Conclusion
The enhancement of our RFC-compliance is a significant step toward securing email communications. Adherence to these standards will collectively reduce risks associated with email. For further assistance or clarification, please reach out to our support team at support@hornetsecurity.com.
Invalid “Header From” Examples:

From: <>
Reason: Blank addresses are problematic as they cause issues in scenarios requiring a valid email address, such as allow and deny lists.

From: John Doe john.doe@hornetsecurity.com
Reason: Non-compliant with RFC standards. The email address must be enclosed in angle brackets (< and >) when accompanied by a display name.

While technically RFC-compliant, such formats are often rejected by M365 unless explicit exceptions are configured. We do accept certain email addresses with comments.

From: John, Doe <john.doe@hornetsecurity.com>
Reason: Non-compliant with RFC standards. A display name containing a comma must be enclosed in double quotes.

From: “John Doe <john.doe@hornetsecurity.com>”
Reason: Non-compliant with RFC standards. The entire ‘From’ value is incorrectly enclosed in double quotation marks, which is not allowed.

From: “John Doe <john.doe@hornetsecurity.com>” john.doe@hornetsecurity.com
Reason: Non-compliant with RFC standards. The display name is present, but the email address is not correctly enclosed in angle brackets.

From: “John Doe”<john.doe@hornetsecurity.com>
Reason: Non-compliant with RFC standards due to the absence of white-space between the display name and the email address.
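As a rough illustration of this kind of check (this is not Hornetsecurity’s implementation), Python’s standard library email parser can flag several of the malformed values above:

```python
import email
from email import policy


def header_from_is_valid(raw_from: str) -> bool:
    """Rough RFC 5322 sanity check for a From: header value.

    Parses the value with Python's modern email policy and rejects
    anything that produces parsing defects or lacks a usable addr-spec.
    This only approximates a full RFC 5322 validator.
    """
    msg = email.message_from_string(f"From: {raw_from}\n\n",
                                    policy=policy.default)
    header = msg["from"]
    if header is None or header.defects:
        return False
    # Require at least one address with a non-empty local part and domain.
    return bool(header.addresses) and all(
        a.username and a.domain for a in header.addresses
    )
```

Values such as `From: <>` or a display name containing an unquoted comma produce parsing defects or an empty addr-spec and are rejected, while a properly quoted display name with a bracketed address passes.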
Updated: Jan 31, 2024, 13:03, by Claire Broadley, Content Manager. Reviewed by Jared Atchison, Co-owner.
Do you want to set up Postmaster Tools… but you’re not sure where to start?
Postmaster Tools lets you monitor your spam complaints and domain reputation. That’s super important now that Gmail is blocking emails more aggressively.
Thankfully, Postmaster Tools is free and easy to configure. If you’ve already used a Google service like Analytics, it’ll take just a couple of minutes to set up.
You should set up Postmaster Tools if you meet any of the following criteria:
1. You Regularly Send Emails to Gmail Recipients
Postmaster Tools is a service Google provides for monitoring email sent to Gmail users.
Realistically, most of your email lists are likely to include a large number of Gmail mailboxes unless you’re sending to a very specific group of people, like an internal company mailing list. (According to Techjury, Gmail had a 75.8% share of the email market in 2023.)
Keep in mind that Gmail recipients aren’t always using Gmail email addresses. People who use custom domains or Google Workspace are ‘hidden’, so it’s not always clear who’s using Gmail and who isn’t. To be on the safe side, it’s best to use Postmaster Tools (it’s free).
2. You Send Marketing Emails (or Have a Large Website)
Postmaster Tools works best for bulk email senders, which Google defines as a domain that sends more than 5,000 emails a day.
If you’re sending email newsletters on a regular basis, having Postmaster Tools is going to help.
Likewise, if you use WooCommerce or a similar platform, you likely send a high number of transactional emails: password reset emails, receipts, and so on.
If you don’t send a large number of emails right now, you can still set up Postmaster Tools so you’re prepared for the time you might.
Just note that you may see the following message:
No data to display at present. Please come back later. Postmaster Tools requires your domain to satisfy certain conditions before data is visible for this chart.
This usually means you’re not sending enough emails for Google to be able to calculate meaningful statistics.
It’s up to you if you want to set it up anyway, or skip it until your business grows a little more.
How to Add a Domain to Postmaster Tools
Adding a domain to Postmaster Tools is simple and should take less than 10 minutes.
To get started, head to the Postmaster Tools site and log in. If you’re already using Google Analytics, sign in using the email address you use for your Analytics account.
The welcome popup will already be open. Click on Get Started to begin.
Next, enter the domain name that your emails come from.
This should be the domain you use as the sender, or the ‘from email’, when you’re sending emails from your domain. It will normally be your main website.
If your domain name is already verified for another Google service, that’s all you need to do! You’ll see confirmation that your domain is set up.
If you haven’t used this domain with Google services before, you’ll need to verify it. Google will ask you to add a TXT record to your DNS.
To complete this, head to the control panel for the company you bought your domain from. It’ll likely be your domain name registrar or your web host. If you’re using a service like Cloudflare, you’ll want to open up your DNS records there instead.
Locate the part of the control panel that handles your DNS (which might be called a DNS Zone) and add a new TXT record. Copy the record provided into the fields.
Note: Most providers will ask you to enter a Name, which isn’t shown in Google’s instructions. If your provider doesn’t fill this out by default, you can safely enter @ in the Name field.
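In a zone file, the finished record typically looks something like this (the verification token below is a made-up placeholder; copy the exact value Google shows you):

```
; hypothetical Postmaster Tools verification record
@    3600    IN    TXT    "google-site-verification=EXAMPLE_TOKEN_FROM_GOOGLE"
```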
Now save your record and wait a few minutes. Changes in Cloudflare can be near-instant, but other registrars or hosts may take longer.
After waiting for your change to take effect, switch back to Postmaster Tools and hit Verify to continue.
And that’s it! Now your domain has been added to Postmaster Tools.
How to Read the Charts in Google Postmaster Tools
Google is now tracking various aspects of your email deliverability. It’ll display the data in a series of charts in your account.
Here’s a quick overview of what you can see.
As I mentioned, keep in mind that the data here is only counted from Gmail accounts. It’s not a domain-wide measurement of everything you send.
Spam Rate
Your spam rate is the number of spam complaints received each day as a proportion of the number of emails delivered. You should aim to keep this below 0.1%.
You can do that by making it easy for people to unsubscribe from marketing emails and using double opt-ins rather than single opt-ins.
It’s normal for spam complaint rates to spike occasionally because Google measures each day in isolation.
If you’re seeing a spam rate that is consistently above 0.3%, it’s worth looking into why that’s happening. You might be sending emails to people who don’t want to receive them.
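To make these thresholds concrete, here is a tiny illustrative calculation (the function name is ours, not part of Postmaster Tools):

```python
def spam_rate_percent(complaints: int, delivered: int) -> float:
    """Daily spam rate as Postmaster Tools reports it:
    complaints as a percentage of delivered mail."""
    if delivered == 0:
        return 0.0
    return 100.0 * complaints / delivered


# 5 complaints out of 10,000 delivered messages is 0.05%,
# comfortably under the 0.1% target.
print(spam_rate_percent(5, 10_000))   # 0.05
# 40 complaints out of 10,000 is 0.4%, above the 0.3% red-flag line.
print(spam_rate_percent(40, 10_000))  # 0.4
```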
IP Reputation
IP reputation is the trustworthiness of the IP address your emails come from. Google may mark emails as spam if your IP reputation is poor.
Keep in mind that IP reputation is tied to your email marketing provider. It’s a measure of their IP as well as yours.
If you see a downward trend, check in with the platform you’re using to ask if they’re seeing the same thing.
Domain Reputation
Domain reputation is the trustworthiness of the domain name you’ve verified in Postmaster Tools. This can be factored into Google’s spam scoring, along with other measurements.
The ideal scenario is a consistent rating of High, as shown in our screenshot above.
Wait: What is IP Reputation vs Domain Reputation?
You’ll now see that Google has separate options for IP reputation and domain reputation. Here’s the difference:
IP reputation measures the reputation of the server that actually sends your emails out. This might be a service like Constant Contact, ConvertKit, or Drip. Other people who use the service will share the same IP, so you’re a little more vulnerable to the impact of other users’ actions.
Domain reputation is a measure of the emails that are sent from your domain name as a whole.
Feedback Loop
High-volume or bulk senders can activate this feature to track spam complaints in more detail. You’ll need a special email header called Feedback-ID if you want to use this. Most likely, you won’t need to look at this report.
Authentication
This chart shows you how many emails cleared security checks.
In more technical terms, it shows how many emails attempted to authenticate using DMARC, SPF, and DKIM vs. how many actually did.
Encryption
This chart looks very similar to the domain reputation chart we already showed. It should sit at 100%.
If you’re seeing a lower percentage, you may be using outdated connection details for your email provider.
Check the websites or platforms that are sending emails from your domain and update them from an SSL connection to a TLS connection.
Delivery Errors
Last but not least, the final chart is the most useful. The Delivery Errors report will show you whether emails were rejected or temporarily delayed. A temporary delay is labeled as a TempFail in this report.
If you see any jumps, click on the point in the chart and the reason for the failures will be displayed below it.
Small jumps here and there are not a huge cause for concern. However, very large error rates are a definite red flag. You may have received a 550 error or a 421 error that gives you more clues as to why they’re happening.
Here are the 3 most important error messages related to blocked emails in Gmail:
421-4.7.0 unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited.
550-5.7.1 Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked.
550-5.7.26 This mail is unauthenticated, which poses a security risk to the sender and Gmail users, and has been blocked. The sender must authenticate with at least one of SPF or DKIM. For this message, DKIM checks did not pass and SPF check for example.com did not pass with ip: 192.186.0.1.
If you’re seeing these errors, check that your domain name has the correct DNS records for authenticating email. It’s also a good idea to examine your emails to ensure you have the right unsubscribe links in them.
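For reference, correctly configured authentication records in a DNS zone file look roughly like the following (the domain, provider hostname, selector, and key value are made-up placeholders; your email provider supplies the real values):

```
; hypothetical SPF record authorizing your provider to send for your domain
example.com.                       3600  IN  TXT  "v=spf1 include:_spf.provider.example ~all"

; hypothetical DKIM public key, published under the provider's selector
selector1._domainkey.example.com.  3600  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
```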
Note: WP Mail SMTP preserves the list-unsubscribe headers that your email provider adds. That means that your emails will have a one-click unsubscribe option at the top.
If you’re using a different SMTP plugin, make sure it’s preserving that crucial list-unsubscribe header. If it’s not there, you may want to consider switching to WP Mail SMTP for the best possible protection against spam complaints and failed emails.
Are your emails from WordPress disappearing or landing in the spam folder? You’re definitely not alone. Learn how to authenticate WordPress emails and ensure they always land in your inbox.
Ready to fix your emails? Get started today with the best WordPress SMTP plugin. If you don’t have the time to fix your emails, you can get full White Glove Setup assistance as an extra purchase, and there’s a 14-day money-back guarantee for all paid plans.
If this article helped you out, please follow us on Facebook and Twitter for more WordPress tips and tutorials.
On December 18, 2023, right before the end of our Holiday Bug Extravaganza, we received a submission for a Local File Inclusion vulnerability in Shield Security, a WordPress plugin with more than 50,000 active installations. It’s important to note that this vulnerability is limited to the inclusion of PHP files; however, it could be leveraged by an attacker who has the ability to upload PHP files but cannot directly access those files to execute them.
Props to hir0ot who discovered and responsibly reported this vulnerability through the Wordfence Bug Bounty Program. This researcher earned a bounty of $938.00 for this discovery during our Bug Bounty Program Extravaganza.
All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those still using the free version of our plugin, are protected against any exploits targeting this vulnerability by the Wordfence firewall’s built-in Directory Traversal and Local File Inclusion protection.
We contacted the Shield Security Team on December 21, 2023, and received a response on December 23, 2023. After providing full disclosure details, the developer released a patch on December 23, 2023. We would like to commend the Shield Security Team for their prompt response and timely patch, which was released on the same day.
We urge users to update their sites with the latest patched version of Shield Security, which is version 18.5.10, as soon as possible.
The Shield Security – Smart Bot Blocking & Intrusion Prevention Security plugin for WordPress is vulnerable to Local File Inclusion in all versions up to, and including, 18.5.9 via the render_action_template parameter. This makes it possible for an unauthenticated attacker to include and execute PHP files on the server, allowing the execution of any PHP code in those files.
Technical Analysis
Shield Security is a WordPress security plugin that offers several features to stop attackers and to protect and monitor the website, including a firewall, a malware scanner, and activity logging.
The plugin includes a template management system that renders .twig, .php or .html files. Unfortunately, the plugin’s template inclusion and rendering functionality is implemented insecurely, which allows arbitrary file inclusion in vulnerable versions. The template path is set with the setTemplate() function.
The renderPhp() function in the Render class uses the path_join() function to build the template file path. It then checks that the template file exists and includes it.
Examining the code reveals that there is no file path sanitization anywhere in these functions. This makes it possible to include arbitrary PHP files from the server.
In this vulnerability, file inclusion is limited to PHP files. This means that threat actors cannot use one of the most popular remote code execution methods, log file poisoning. Since the plugin also uses the isFile() function for file checking, the other popular remote code execution method, an attack using PHP wrappers, is also not possible. Nevertheless, an attacker still has options to include a malicious PHP file and execute it on the server, for example by chaining the attack with vulnerabilities in other plugins. However, the attack possibilities are limited: this would most likely be leveraged in an instance where an attacker can upload a PHP file, but does not have direct access to the file to execute it.
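The missing control here is path containment. A minimal sketch of the idea, in Python for illustration (the plugin itself is PHP, and the names below are hypothetical):

```python
import os

TEMPLATE_ROOT = "/srv/app/templates"  # hypothetical template directory


def resolve_template(user_path: str) -> str:
    """Join a user-supplied template name to the template root and refuse
    any result that escapes it, the check the vulnerable code lacked."""
    candidate = os.path.realpath(os.path.join(TEMPLATE_ROOT, user_path))
    # realpath() collapses ../ sequences and symlinks before the check.
    if not candidate.startswith(TEMPLATE_ROOT + os.sep):
        raise ValueError("template path escapes the template directory")
    return candidate
```

The key point is that normalization happens before the containment check, so ../ sequences cannot escape the template root even when the target file exists.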
Wordfence Firewall
The following graphic demonstrates the steps an attacker might take to exploit this vulnerability, and at which point the Wordfence firewall would block the attempt.
The Wordfence firewall rule detects the malicious file path and blocks the request.
Disclosure Timeline
December 18, 2023 – We receive the submission of the Local File Inclusion vulnerability in Shield Security via the Wordfence Bug Bounty Program.
December 20, 2023 – We validate the report and confirm the proof-of-concept exploit.
December 21, 2023 – We initiate contact with the plugin vendor asking that they confirm the inbox for handling the discussion.
December 23, 2023 – The vendor confirms the inbox for handling the discussion.
December 23, 2023 – We send over the full disclosure details. The vendor acknowledges the report and begins working on a fix.
December 23, 2023 – The fully patched version of the plugin, 18.5.10, is released.
Conclusion
In this blog post, we detailed a Local File Inclusion vulnerability in the Shield Security plugin affecting versions 18.5.9 and earlier. This vulnerability makes it possible for unauthenticated threat actors to include PHP files on the server and execute the PHP code they contain, which can be used for complete site compromise. The vulnerability has been fully addressed in version 18.5.10 of the plugin.
We encourage WordPress users to verify that their sites are updated to the latest patched version of Shield Security.
All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those still using the free version of our plugin, are protected against any exploits targeting this vulnerability by the Wordfence firewall’s built-in Directory Traversal and Local File Inclusion protection.
If you know someone who uses this plugin on their site, we recommend sharing this advisory with them to ensure their site remains secure, as this vulnerability poses a significant risk.
Just in time for Data Privacy Day 2024 on January 28, the EU Commission is calling for evidence to understand how the EU’s General Data Protection Regulation (GDPR) has been functioning now that we’re nearing the 6th anniversary of the regulation coming into force.
We’re so glad they asked, because we have some thoughts. And what better way to celebrate privacy day than by discussing whether the application of the GDPR has actually done anything to improve people’s privacy?
The answer is, mostly yes, but in a couple of significant ways – no.
Overall, the GDPR is rightly seen as the global gold standard for privacy protection. It has served as a model for what data protection practices should look like globally, it enshrines data subject rights that have been copied across jurisdictions, and when it took effect, it created a standard for the kinds of privacy protections people worldwide should be able to expect and demand from the entities that handle their personal data. On balance, the GDPR has definitely moved the needle in the right direction for giving people more control over their personal data and in protecting their privacy.
In a couple of key areas, however, we believe the way the GDPR has been applied to data flowing across the Internet has done nothing for privacy and in fact may even jeopardize the protection of personal data. The first area where we see this is with respect to cross-border data transfers. Location has become a proxy for privacy in the minds of many EU data protection regulators, and we think that is the wrong result. The second area is an overly broad interpretation of what constitutes “personal data” by some regulators with respect to Internet Protocol or “IP” addresses. We contend that IP addresses should not always count as personal data, especially when the entities handling IP addresses have no ability on their own to tie those IP addresses to individuals. This is important because the ability to implement a number of industry-leading cybersecurity measures relies on the ability to do threat intelligence on Internet traffic metadata, including IP addresses.
Location should not be a proxy for privacy
Fundamentally, good data security and privacy practices should be able to protect personal data regardless of where that processing or storage occurs. Nevertheless, the GDPR is based on the idea that legal protections should attach to personal data based on the location of the data – where it is generated, processed, or stored. Articles 44 to 49 establish the conditions that must be in place in order for data to be transferred to a jurisdiction outside the EU, with the idea that even if the data is in a different location, the privacy protections established by the GDPR should follow the data. No doubt this approach was influenced by political developments around government surveillance practices, such as the revelations in 2013 of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and that intelligence agencies were scooping up data from choke points on the Internet. And once the GDPR took effect, many data regulators in the EU were of the view that as a result of the GDPR’s restrictions on cross-border data transfers, European personal data simply could not be processed in the United States in a way that would be consistent with the GDPR.
This issue came to a head in July 2020, when the European Court of Justice (CJEU), in its “Schrems II” decision [1], invalidated the EU-US Privacy Shield adequacy standard and questioned the suitability of the EU standard contractual clauses (a mechanism entities can use to ensure that GDPR protections are applied to EU personal data even if it is processed outside the EU). The ruling in some respects left data protection regulators with little room to maneuver on questions of transatlantic data flows. But while some regulators were able to view the Schrems II ruling in a way that would still allow for EU personal data to be processed in the United States, other data protection regulators saw the decision as an opportunity to double down on their view that EU personal data cannot be processed in the US consistent with the GDPR, therefore promoting the misconception that data localization should be a proxy for data protection.
In fact, we would argue that the opposite is the case. From our own experience and according to recent research[2], we know that data localization threatens an organization’s ability to achieve integrated management of cybersecurity risk and limits an entity’s ability to employ state-of-the-art cybersecurity measures that rely on cross-border data transfers to make them as effective as possible. For example, Cloudflare’s Bot Management product only increases in accuracy with continued use on the global network: it detects and blocks traffic coming from likely bots before feeding back learnings to the models backing the product. A diversity of signal and scale of data on a global platform is critical to help us continue to evolve our bot detection tools. If the Internet were fragmented – preventing data from one jurisdiction being used in another – more and more signals would be missed. We wouldn’t be able to apply learnings from bot trends in Asia to bot mitigation efforts in Europe, for example. And if the ability to identify bot traffic is hampered, so is the ability to block those harmful bots from services that process personal data.
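The intuition that pooled, cross-border signals catch more abuse than siloed ones can be illustrated with a toy sketch. This is purely illustrative, not Cloudflare's actual bot-detection pipeline; the IPs and regions are made up:

```python
# Toy illustration (NOT Cloudflare's real system): a blocklist built from
# globally pooled observations covers bots that a single region's
# observations would miss.

def build_blocklist(observations):
    """Union the bot-flagged IPs from each regional observation set."""
    flagged = set()
    for region_obs in observations:
        flagged.update(region_obs)
    return flagged

# Hypothetical per-region observations of bot-controlled IPs.
asia_signals = {"203.0.113.5", "203.0.113.9"}
europe_signals = {"198.51.100.7"}

global_blocklist = build_blocklist([asia_signals, europe_signals])
europe_only_blocklist = build_blocklist([europe_signals])

# A bot first seen in Asia is blocked in Europe only if signals are shared.
print("203.0.113.5" in global_blocklist)       # True
print("203.0.113.5" in europe_only_blocklist)  # False
```

The real systems involve statistical models rather than set unions, but the property being argued for is the same: restricting which observations may cross a border directly shrinks what the defense can see.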
The need for industry-leading cybersecurity measures is self-evident, and it is not as if data protection authorities don’t realize this. If you look at any enforcement action brought against an entity that suffered a data breach, you see data protection regulators insisting that the impacted entities implement ever more robust cybersecurity measures in line with the obligation GDPR Article 32 places on data controllers and processors to “implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk”, “taking into account the state of the art”. In addition, data localization undermines information sharing within industry and with government agencies for cybersecurity purposes, which is generally recognized as vital to effective cybersecurity.
In this way, while the GDPR itself lays out a solid framework for securing personal data to ensure its privacy, the application of the GDPR’s cross-border data transfer provisions has twisted and contorted the purpose of the GDPR. It’s a classic example of not being able to see the forest for the trees. If the GDPR is applied in such a way as to elevate the priority of data localization over the priority of keeping data private and secure, then the protection of ordinary people’s data suffers.
Applying data transfer rules to IP addresses could lead to balkanization of the Internet
The other key way in which the application of the GDPR has been detrimental to the actual privacy of personal data is related to the way the term “personal data” has been defined in the Internet context – specifically with respect to Internet Protocol or “IP” addresses. A world where IP addresses are always treated as personal data and therefore subject to the GDPR’s data transfer rules is a world that could come perilously close to requiring a walled-off European Internet. And as noted above, this could have serious consequences for data privacy, not to mention that it likely would cut the EU off from any number of global marketplaces, information exchanges, and social media platforms.
This is a bit of a complicated argument, so let’s break it down. As most of us know, IP addresses are the addressing system for the Internet. When you send a request to a website, send an email, or communicate online in any way, IP addresses connect your request to the destination you’re trying to access. These IP addresses are the key to making sure Internet traffic gets delivered to where it needs to go. As the Internet is a global network, this means it’s entirely possible that Internet traffic – which necessarily contains IP addresses – will cross national borders. Indeed, the destination you are trying to access may well be located in a different jurisdiction altogether. That’s just the way the global Internet works. So far, so good.
But if IP addresses are considered personal data, then they are subject to data transfer restrictions under the GDPR. And with the way those provisions have been applied in recent years, some data regulators were getting perilously close to saying that IP addresses cannot transit jurisdictional boundaries if it meant the data might go to the US. The EU’s recent approval of the EU-US Data Privacy Framework established adequacy for US entities that certify to the framework, so these cross-border data transfers are not currently an issue. But if the Data Privacy Framework were to be invalidated as the EU-US Privacy Shield was in the Schrems II decision, then we could find ourselves in a place where the GDPR is applied to mean that IP addresses ostensibly linked to EU residents can’t be processed in the US, or potentially not even leave the EU.
If this were the case, then providers would have to start developing Europe-only networks to ensure IP addresses never cross jurisdictional boundaries. But how would people in the EU and US communicate if EU IP addresses can’t go to the US? Would EU citizens be restricted from accessing content stored in the US? It’s an application of the GDPR that would lead to an absurd result – one surely not intended by its drafters. And yet, in light of the Schrems II case and the way the GDPR has been applied, here we are.
A possible solution would be to consider that IP addresses are not always “personal data” subject to the GDPR. In 2016 – even before the GDPR took effect – the CJEU held in Breyer v. Bundesrepublik Deutschland that even dynamic IP addresses, which change with every new connection to the Internet, constituted personal data if an entity processing the IP address could link the IP address to an individual. While the court’s decision did not say that dynamic IP addresses are always personal data under European data protection law, that’s exactly what EU data regulators took from the decision, without considering whether an entity actually has a way to tie the IP address to a real person[3].
The question of when an identifier qualifies as “personal data” is again before the CJEU: in April 2023, the lower EU General Court ruled in SRB v EDPS[4] that transmitted data can be considered anonymised, and therefore not personal data, if the data recipient does not have any additional information reasonably likely to allow it to re-identify the data subjects and has no legal means available to access such information. The appellant – the European Data Protection Supervisor (EDPS) – disagrees. The EDPS, who mainly oversees the privacy compliance of EU institutions and bodies, is appealing the decision and arguing that a unique identifier should qualify as personal data if that identifier could ever be linked to an individual, regardless of whether the entity holding the identifier actually has the means to make such a link.
If the lower court’s common-sense ruling holds, one could argue that IP addresses are not personal data when those IP addresses are processed by entities like Cloudflare, which have no means of connecting an IP address to an individual. If IP addresses are then not always personal data, then IP addresses will not always be subject to the GDPR’s rules on cross-border data transfers.
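For context on what “no means of connecting an IP address to an individual” can look like in practice, operators commonly truncate addresses so they no longer identify a single subscriber. A minimal sketch of that technique; the prefix lengths shown are conventional choices, not anything mandated by the rulings discussed above:

```python
import ipaddress

def truncate_ip(ip: str) -> str:
    """Zero the host portion of an address so it maps to a network rather
    than a single subscriber: keep a /24 for IPv4 and a /48 for IPv6
    (common, but not legally prescribed, prefix lengths)."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(truncate_ip("198.51.100.73"))        # 198.51.100.0
print(truncate_ip("2001:db8:abcd:12::1"))  # 2001:db8:abcd::
```

Once truncated, the stored value can no longer single out one subscriber line, which is exactly the kind of non-linkability the General Court's reasoning turns on.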
Although it may seem counterintuitive, having a standard whereby an IP address is not necessarily “personal data” would actually be a positive development for privacy. If IP addresses can flow freely across the Internet, then entities in the EU can use non-EU cybersecurity providers to help them secure their personal data. Advanced Machine Learning/predictive AI techniques that look at IP addresses to protect against DDoS attacks, prevent bots, or otherwise guard against personal data breaches will be able to draw on attack patterns and threat intelligence from around the world to the benefit of EU entities and residents. But none of these benefits can be realized in a world where IP addresses are always personal data under the GDPR and where the GDPR’s data transfer rules are interpreted to mean IP addresses linked to EU residents can never flow to the United States.
Keeping privacy in focus
On this Data Privacy Day, we urge EU policy makers to look closely at how the GDPR is working in practice, and to take note of the instances where the GDPR is applied in ways that place privacy protections above all other considerations – even appropriate security measures mandated by the GDPR’s Article 32 that take into account the state of the art of technology. When this happens, it can actually be detrimental to privacy. If taken to the extreme, this formulaic approach would not only negatively impact cybersecurity and data protection, but even put into question the functioning of the global Internet infrastructure as a whole, which depends on cross-border data flows. So what can be done to avert this?
First, we believe EU policymakers could adopt guidelines (if not legal clarification) for regulators that IP addresses should not be considered personal data when they cannot be linked by an entity to a real person. Second, policymakers should clarify that the GDPR’s application should be considered with the cybersecurity benefits of data processing in mind. Building on the GDPR’s existing recital 49, which rightly recognizes cybersecurity as a legitimate interest for processing, personal data that needs to be processed outside the EU for cybersecurity purposes should be exempted from the GDPR’s restrictions on international data transfers. This would avoid some of the worst effects of the mindset that currently views data localization as a proxy for data privacy. Such a shift would be a truly pro-privacy application of the GDPR.
Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.
To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.
01/02/2024 – Matthew Prince, John Graham-Cumming, Grant Bourzikas
On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and on Sunday, November 26, we brought in CrowdStrike’s Forensic team to perform their own independent analysis.
Yesterday, CrowdStrike completed its investigation, and we are publishing this blog post to talk about the details of this security incident.
We want to emphasize to our customers that no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. No services were implicated, and no changes were made to our global network systems or configuration. This is the promise of a Zero Trust architecture: like bulkheads in a ship, a compromise of one system is prevented from spreading to the whole organization.
From November 14 to 17, a threat actor did reconnaissance and then accessed our internal wiki (which uses Atlassian Confluence) and our bug database (Atlassian Jira). On November 20 and 21, we saw additional access indicating they may have come back to test access to ensure they had connectivity.
They then returned on November 22 and established persistent access to our Atlassian server using ScriptRunner for Jira, gained access to our source code management system (which uses Atlassian Bitbucket), and tried, unsuccessfully, to access a console server that had access to a data center in São Paulo, Brazil, that Cloudflare had not yet put into production.
They did this by using one access token and three service account credentials that had been stolen during the Okta compromise of October 2023 and that we had failed to rotate. All threat actor access and connections were terminated on November 24, and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.
(Throughout this blog post all dates and times are UTC.)
Even though we understand the operational impact of the incident to be extremely limited, we took this incident very seriously because a threat actor had used stolen credentials to get access to our Atlassian server and accessed some documentation and a limited amount of source code. Based on our collaboration with colleagues in the industry and government, we believe that this attack was performed by a nation state attacker with the goal of obtaining persistent and widespread access to Cloudflare’s global network.
“Code Red” Remediation and Hardening Effort
On November 24, after the threat actor was removed from our environment, our security team pulled in all the people they needed across the company to investigate the intrusion, to confirm that the threat actor had been completely denied access to our systems, and to understand the full extent of what they accessed or tried to access.
Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”. The focus was strengthening, validating, and remediating any control in our environment to ensure we are secure against future intrusion and to validate that the threat actor could not gain access to our environment. Additionally, we continued to investigate every system, account and log to make sure the threat actor did not have persistent access and that we fully understood what systems they had touched and which they had attempted to access.
CrowdStrike performed an independent assessment of the scope and extent of the threat actor’s activity, including a search for any evidence that they still persisted in our systems. CrowdStrike’s investigation provided helpful corroboration and support for our investigation, but did not bring to light any activities that we had missed. This blog post outlines in detail everything we and CrowdStrike uncovered about the activity of the threat actor.
The only production system the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye toward gaining a deeper foothold. Because of that, we decided a huge effort was needed to further harden our security protocols and prevent the threat actor from gaining that foothold, in case we had overlooked something in our log files.
Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in. Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials), physically segment test and staging systems, perform forensic triages on 4,893 systems, and reimage and reboot every machine in our global network, including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).
The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.
We also looked for software packages that hadn’t been updated, user accounts that might have been created, and unused active employee accounts; we went searching for secrets that might have been left in Jira tickets or source code, examined and deleted all HAR files uploaded to the wiki in case they contained tokens of any sort. Whenever in doubt, we assumed the worst and made changes to ensure anything the threat actor was able to access would no longer be in use and therefore no longer be valuable to them.
Every member of the team was encouraged to point out areas the threat actor might have touched, so we could examine log files and determine the extent of the threat actor’s access. By including such a large number of people across the company, we aimed to leave no stone unturned looking for evidence of access or changes that needed to be made to improve security.
The immediate “Code Red” effort ended on January 5, but work continues across the company around credential management, software hardening, vulnerability management, additional alerting, and more.
Attack timeline
The attack started in October with the compromise of Okta, but the threat actor only began targeting our systems using those credentials from the Okta compromise in mid-November.
The following timeline shows the major events:
October 18 – Okta compromise
We’ve written about this before but, in summary, we were (for the second time) the victim of a compromise of Okta’s systems which resulted in a threat actor gaining access to a set of credentials. All of these credentials were meant to be rotated.
Unfortunately, we failed to rotate one service token and three service account credentials (out of the thousands leaked during the Okta compromise).
One was a Moveworks service token that granted remote access into our Atlassian system. The second credential was a service account used by the SaaS-based Smartsheet application that had administrative access to our Atlassian Jira instance, the third was a Bitbucket service account used to access our source code management system, and the fourth was a credential for an AWS environment that had no access to the global network and no customer or sensitive data.
The one service token and three accounts were not rotated because we mistakenly believed they were unused. This was incorrect, and it was how the threat actor first got into our systems and gained persistence in our Atlassian products. Note that this was in no way an error on the part of Atlassian, AWS, Moveworks or Smartsheet. These were merely credentials which we failed to rotate.
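The failure mode described here (credentials skipped during rotation because they were believed unused) can be guarded against mechanically: rotate everything issued before the compromise date, regardless of presumed usage. A hedged sketch, using a hypothetical inventory format; in practice the data would come from a secrets manager's API, not a hard-coded list:

```python
from datetime import date

# Hypothetical credential inventory; illustrative names and dates only.
inventory = [
    {"name": "jira-svc", "last_rotated": date(2023, 10, 20), "believed_unused": False},
    {"name": "bitbucket-svc", "last_rotated": date(2023, 6, 1), "believed_unused": True},
]

COMPROMISE_DATE = date(2023, 10, 18)

def needs_rotation(cred):
    # Rotate every credential not rotated since the compromise, even ones
    # believed unused: "believed unused" is not proof of unused.
    return cred["last_rotated"] <= COMPROMISE_DATE

stale = [c["name"] for c in inventory if needs_rotation(c)]
print(stale)  # ['bitbucket-svc']
```

The point of the rule is that it takes no opinion on whether a credential is in use; the only input that matters is whether it was rotated after the exposure window.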
November 14 09:22:49 – threat actor starts probing
Our logs show that the threat actor started probing and performing reconnaissance of our systems beginning on November 14, looking for a way to use the credentials and what systems were accessible. They attempted to log into our Okta instance and were denied access. They attempted access to the Cloudflare Dashboard and were denied access.
Additionally, the threat actor accessed an AWS environment that is used to power the Cloudflare Apps marketplace. This environment was segmented with no access to global network or customer data. The service account to access this environment was revoked, and we validated the integrity of the environment.
November 15 16:28:38 – threat actor gains access to Atlassian services
The threat actor successfully accessed Atlassian Jira and Confluence on November 15 using the Moveworks service token to authenticate through our gateway, and then they used the Smartsheet service account to gain access to the Atlassian suite. The next day they began looking for information about the configuration and management of our global network, and accessed various Jira tickets.
The threat actor searched the wiki for things like remote access, secret, client-secret, openconnect, cloudflared, and token. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages).
The threat actor accessed Jira tickets about vulnerability management, secret rotation, MFA bypass, network access, and even our response to the Okta incident itself.
The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems: password resets, remote access, configuration, our use of Salt, but they did not target customer data or customer configurations.
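Defenders with similar audit logs can hunt for this kind of reconnaissance by flagging accounts that repeatedly search for secret-related terms. A minimal sketch with a hypothetical log format; the real audit schema will differ by wiki product:

```python
from collections import Counter

# Terms of the kind the write-up associates with reconnaissance.
SUSPICIOUS_TERMS = {"remote access", "secret", "client-secret",
                    "openconnect", "cloudflared", "token"}

def flag_searchers(search_log, threshold=3):
    """search_log: iterable of (user, query) pairs from a wiki audit log.
    Returns users whose suspicious-query count meets the threshold."""
    hits = Counter()
    for user, query in search_log:
        if any(term in query.lower() for term in SUSPICIOUS_TERMS):
            hits[user] += 1
    return {user for user, count in hits.items() if count >= threshold}

# Invented example log entries.
log = [("alice", "team offsite"), ("mallory", "remote access vpn"),
       ("mallory", "client-secret rotation"), ("mallory", "cloudflared config")]
print(flag_searchers(log))  # {'mallory'}
```

A threshold keeps the rule from firing on an engineer who legitimately searches for a token-rotation runbook once; tuning it against baseline search behavior is the real work.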
November 16 14:36:37 – threat actor creates an Atlassian user account
The threat actor used the Smartsheet credential to create an Atlassian account that looked like a normal Cloudflare user. They added this user to a number of groups within Atlassian so that they’d have persistent access to the Atlassian environment should the Smartsheet service account be removed.
November 17 14:33:52 to November 20 09:26:53 – threat actor takes a break from accessing Cloudflare systems
During this period, the attacker took a break from accessing our systems (apart from apparently briefly testing that they still had access) and returned just before Thanksgiving.
November 22 14:18:22 – threat actor gains persistence
Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, a widely used tool that red teams and attackers use to establish “C2” (command and control) connectivity and gain persistent, stealthy access to the computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.
This gave them continuous access to the Atlassian server, which they used to attempt lateral movement. With this access the threat actor attempted, due to a non-enforced ACL, to reach a non-production console server in our São Paulo, Brazil data center. The access was denied, and they were not able to access any of the global network.
Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904 repositories). Of the 120, the threat actor used the Atlassian Bitbucket git archive feature on 76 repositories to download them to the Atlassian server, and even though we were not able to confirm whether or not they had been exfiltrated, we decided to treat them as having been exfiltrated.
The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and our use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets which were rotated immediately even though they were strongly encrypted themselves.
We focused particularly on these 76 source code repositories to look for embedded secrets, (secrets stored in the code were rotated), vulnerabilities and ways in which an attacker could use them to mount a subsequent attack. This work was done as a priority by engineering teams across the company as part of “Code Red”.
As a SaaS company, we’ve long believed that our source code itself is not as precious as the source code of software companies that distribute software to end users. In fact, we’ve open sourced a large amount of our source code and speak openly through our blog about algorithms and techniques we use. So our focus was not on someone having access to the source code, but whether that source code contained embedded secrets (such as a key or token) and vulnerabilities.
November 23 – Discovery and threat actor access termination begins
Our security team was alerted to the threat actor’s presence at 16:00 and deactivated the Smartsheet service account 35 minutes later. 48 minutes later the user account created by the threat actor was found and deactivated. Here’s the detailed timeline for the major actions taken to block the threat actor once the first alert was raised.
15:58 – The threat actor adds the Smartsheet service account to an administrator group.
16:00 – Automated alert about the change at 15:58 to our security team.
16:12 – Cloudflare SOC starts investigating the alert.
16:35 – Smartsheet service account deactivated by Cloudflare SOC.
17:23 – The threat actor-created Atlassian user account is found and deactivated.
17:43 – Internal Cloudflare incident declared.
21:31 – Firewall rules put in place to block the threat actor’s known IP addresses.
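The detection that started this response was an automated alert on a change to administrator-group membership. A sketch of such a rule over an audit event stream; the event schema here is hypothetical, not Atlassian's actual audit format:

```python
# Illustrative group names; a real deployment would enumerate its own.
ADMIN_GROUPS = {"jira-administrators", "confluence-administrators"}

def should_alert(event):
    """Fire on any membership change that touches an admin group."""
    return (event.get("action") == "group_membership_changed"
            and event.get("group") in ADMIN_GROUPS)

# Hypothetical audit event resembling the 15:58 change above.
event = {"action": "group_membership_changed",
         "group": "jira-administrators",
         "actor": "smartsheet-svc",
         "target": "smartsheet-svc"}
print(should_alert(event))  # True
```

The value of such a rule is its near-zero latency: in the timeline above, the alert fired two minutes after the change, which bounded how long the attacker could operate undetected.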
November 24 – Sliver removed; all threat actor access terminated
10:44 – Last known threat actor activity.
11:59 – Sliver removed.
Throughout this timeline, the threat actor tried to access a myriad of other systems at Cloudflare but failed because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools.
To be clear, we saw no evidence whatsoever that the threat actor got access to our global network, data centers, SSL keys, customer databases or configuration information, Cloudflare Workers deployed by us or customers, AI models, network infrastructure, or any of our datastores like Workers KV, R2 or Quicksilver. Their access was limited to the Atlassian suite and the server on which our Atlassian runs.
A large part of our “Code Red” effort was understanding what the threat actor got access to and what they tried to access. By looking at logging across systems we were able to track attempted access to our internal metrics, network configuration, build system, alerting systems, and release management system. Based on our review, none of their attempts to access these systems were successful. Independently, CrowdStrike performed an assessment of the scope and extent of the threat actor’s activity, which did not bring to light activities that we had missed and concluded that the last evidence of threat activity was on November 24 at 10:44.
We are confident that between our investigation and CrowdStrike’s, we fully understand the threat actor’s actions and that they were limited to the systems on which we saw their activity.
Conclusion
This was a security incident involving a sophisticated actor, likely a nation-state, who operated in a thoughtful and methodical manner. The efforts we have taken ensured that the ongoing impact of the incident was limited and that we are well-prepared to fend off any sophisticated attacks in the future. This required the efforts of a significant number of Cloudflare’s engineering staff, and, for over a month, this was the highest priority at Cloudflare. The entire Cloudflare team worked to ensure that our systems were secure and the threat actor’s access was understood, to remediate immediate priorities (such as mass credential rotation), and to build a plan of long-running work to improve our overall security based on areas for improvement discovered during this process.
We are incredibly grateful to everyone at Cloudflare who responded quickly over the Thanksgiving holiday to conduct an initial analysis and lock out the threat actor, and to all those who contributed to this effort. It would be impossible to name everyone involved, but their long hours and dedicated work made it possible to undertake an essential review and change of Cloudflare’s security while keeping our global network and our customers’ services running.
We are grateful to CrowdStrike for having been available immediately to conduct an independent assessment. Now that their final report is complete, we are confident in our internal analysis and remediation of the intrusion and are making this blog post available.
IOCs

Below are the Indicators of Compromise (IOCs) that we saw from this threat actor. We are publishing them so that other organizations, and especially those that may have been impacted by the Okta breach, can search their logs to confirm the same threat actor did not access their systems.
| Indicator | Indicator Type | SHA256 | Description |
| --- | --- | --- | --- |
| 193.142.58[.]126 | IPv4 | N/A | Primary threat actor infrastructure, owned by M247 Europe SRL (Bucharest, Romania) |
| 198.244.174[.]214 | IPv4 | N/A | Sliver C2 server, owned by OVH SAS (London, England) |
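To hunt with these IOCs, the defanged addresses first need to be refanged before matching against historical logs. A minimal sketch; the log lines are invented for illustration:

```python
# Defanged indicators, exactly as published above.
IOCS = ["193.142.58[.]126", "198.244.174[.]214"]

def refang(indicator: str) -> str:
    """Turn a defanged indicator back into a matchable address."""
    return indicator.replace("[.]", ".")

def search_log_lines(lines):
    """Return log lines containing any of the refanged IOC addresses."""
    wanted = {refang(i) for i in IOCS}
    return [line for line in lines if any(ip in line for ip in wanted)]

# Hypothetical log excerpt.
log = ["2023-11-22 conn from 193.142.58.126 to jira",
       "2023-11-22 conn from 10.0.0.5 to wiki"]
print(search_log_lines(log))  # ['2023-11-22 conn from 193.142.58.126 to jira']
```

Naive substring matching can false-positive on longer addresses that contain an IOC as a prefix, so a production search should match on word boundaries or parse the log's source-IP field explicitly.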