Recovery Mode is a last-resort solution to recover an unresponsive UniFi device, often resulting from power loss occurring at the same time as an update. Only use Recovery Mode if you are unable to perform a standard factory reset.
Note: UniFi Power Backup can prevent unexpected power losses from occurring.
The following UniFi devices support Recovery Mode:
Dream Machine, Dream Machine Pro & Dream Wall
Access Points (all models)
Cloud Key & Cloud Key Gen2+
Cameras
USW Flex Mini
Before Considering Recovery Mode
If you are considering Recovery Mode, first try the following two steps:
Reboot your UniFi device. If this resolves your symptoms, no further actions are required.
Factory reset your UniFi device. If you have cloud backups, you can easily restore your settings after factory resetting. If a factory reset works, no further actions are required.
Performing a Device Recovery
Dream Machine, Dream Machine Pro & Dream Wall
Download the most recent firmware for your device, found here.
Completely power-off the UniFi device and unplug it from its power source.
Press and hold the Reset button and then reconnect it to the power source.
Continue holding the Reset button for 5 seconds, or until the display (in supported models) indicates Recovery Mode.
Connect an Ethernet cable from your computer to the first LAN port (Port 1). This is usually the port nearest to the top left corner.
Note: Connect to the Dream Wall via Port 18, not Port 1.
Configure a static IP address on your computer in the 192.168.1.0/24 range (for example, 192.168.1.11).
Windows Client
Navigate to the Windows 10 Network connections:
Settings > Network & Internet > Status > Change Adapter Options
Modify the IPv4 settings of the Ethernet adapter:
Ethernet Adapter > Properties > Internet Protocol Version 4 (TCP/IPv4) > Properties
Select the option to manually enter an IP address and add the following information:
IP address: 192.168.1.11
Subnet mask: 255.255.255.0
Default gateway: <blank>
DNS servers: <blank>
macOS Client
Navigate to the macOS Network connections:
System Preferences > Network > Ethernet Adapter
Select the option to manually enter an IP address and add the following information:
IP Address: 192.168.1.11
Subnet Mask: 255.255.255.0
Router: <blank>
DNS server: <blank>
In a web browser, navigate to http://192.168.1.30 to access the Recovery Mode UI.
Note: The Recovery Mode UI is accessible via HTTP only (not HTTPS). Your browser may try to redirect your session to HTTPS. Use a different browser if necessary.
Select Firmware Update > Choose and browse your computer for the previously downloaded firmware (.bin) image file.
Wait for the upgrade process to complete and reboot the device afterwards.
Access Points
Download the most recent firmware for your device, found here.
Connect your AP and computer to the same network or VLAN, either through a PoE switch or by connecting the computer directly to the network (data) port on the PoE adapter.
Press and hold the Reset button, and connect your computer to the available Ethernet port of the AP.
Continue holding the Reset button until the LED flashes white, blue, off as indicated in our LED Status Guide. This indicates your device is ready for TFTP Recovery and you can release the button.
Set a static IP address on your computer to communicate with the AP, which has a default IP address of 192.168.1.20. The following is an example configuration:
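For example (the host address is an assumption; any unused address in the AP’s subnet other than 192.168.1.20 works):
IP address: 192.168.1.25
Subnet mask: 255.255.255.0
Default gateway: <blank>
DNS servers: <blank>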
Use TFTP to move the firmware from your computer to your AP. There are various programs and methods for accomplishing this. Here are two methods for your reference.
Windows
Use the built-in TFTP command line tool, or a separate program such as Tftpd64 or pumpKIN.
Select the downloaded firmware image and transfer it to the AP.
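For example, using the built-in command line tool (first enable it under Control Panel > Programs > Turn Windows features on or off > TFTP Client; the firmware path is a placeholder):
tftp -i 192.168.1.20 put C:\path\to\firmware_name.bin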
macOS and Linux
Open Terminal.
Enter TFTP mode by entering the command: tftp
Once in TFTP, paste the following commands and hit Enter:
connect 192.168.1.20
binary
rexmt 1
timeout 60
put /path/to/firmware_name.bin
The file transfer will begin. The firmware will upgrade and the device will automatically reboot once it has finished. Do not reboot it yourself.
Cloud Key
Cloud Key Gen2, Gen2 Plus
Download the most recent firmware for your device, found here.
Power off the system.
Press and hold the Reset button and then connect it to the power source.
Continue holding the Reset button for 10 seconds, until the LED flashes blue and white. The LCD screen on the front panel will also read “RECOVERY MODE.”
Open your browser and type the IP address for the Cloud Key, visible on the device’s screen.
Note: The IP address comes from your DHCP server. If it has not been assigned an IP address, you can try the fallback: 192.168.1.30.
Run “Check Filesystem” to search for and repair any problems with your storage disk that may be causing system issues.
Restore the firmware you downloaded in step (1). Note that this will also factory reset your device.
The LED will flash white while upgrading and turn into a steady white when it is complete.
If your device fails again, this is a sign that you should replace your storage disk.
Cloud Key (Gen1)
Download the most recent firmware for your device, found here.
Power off the system.
Press and hold the Reset button and then connect it to the power source.
Continue holding the Reset button for 10 seconds, until the LED flashes blue and white.
Open your browser and type the IP address for the Cloud Key.
Note: The IP address comes from your DHCP server. If it has not been assigned an IP address, you can try the fallback: 192.168.1.30.
If your Cloud Key does have an IP address assigned by the DHCP server, the fallback IP will not work.
User Tip: If you don’t know your Cloud Key’s IP address, you can use the arp -a command, or software such as nmap, to find it.
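Both are run from a computer on the same network as the Cloud Key. For example (the subnet below is an assumption; substitute your own):
arp -a
nmap -sn 192.168.1.0/24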
You will be taken to the Recovery Mode screen. From here you can reset, reboot, power off and, most importantly, upload an updated firmware .bin file.
Upload the firmware you downloaded in step (1).
Once it is uploaded, reboot the Cloud Key to complete the firmware upgrade.
The LED will flash white while upgrading and turn into a steady white when it is complete.
Cameras
Unplug the PoE cable from the camera.
Press and hold the Reset button, then reconnect the camera to its PoE cable.
Continue holding the Reset button for at least 10 seconds, or until you see the LED flash 3 times rapidly.
Release the Reset button.
The device will automatically reboot to an older firmware.
To update to more recent firmware:
Find your specific camera model at our Downloads page using the left-hand menu.
Copy the .bin file link of the firmware.
Use that link to upload the firmware via the camera's web UI, under System Settings.
Alternatively, adopt the camera to your NVR to perform an upgrade via the NVR-hosted UniFi Video user interface.
USW Flex Mini
Prepare a web server (see below*).
Set the server/computer’s IP to a static 192.168.1.99.
The method to set a static IP on a computer varies from platform to platform. Find instructions in your product’s documentation (Windows, macOS or Ubuntu/Linux).
Download the most recent firmware for your device, found here.
Rename the binary to fwupdate.bin and place it in the web server directory created earlier.
Power off the switch by unplugging it from its power source.
Press and hold the Reset button and then connect it to the power source.
Continue holding the Reset button for 10 seconds, until the LED flashes blue, white, off.
The USW Flex Mini should then update automatically.
* The first step in the recovery process is to prepare a web server. See below for a walkthrough on your operating system: Windows, macOS and Ubuntu/Debian.
Prepare a Web Server on Windows
Download Python for Windows (Executable Installer) here.
Open the downloaded file and make sure you select Add Python x.x to PATH during installation.
After the Python installation open Command Prompt as Administrator and confirm that Python is installed correctly with the command below: python -V
Create a directory for the web server by running the commands below:
mkdir c:\webserver
cd c:\webserver
Start the Python web server on port 80. Note that the version of Python can be found with the command from step 3:
Python 3.x: python -m http.server 80
Python 2.x: python -m SimpleHTTPServer 80
Prepare a Web Server on macOS
After the Python installation open Terminal and confirm that Python is installed correctly with the command below: python -V
Create a directory for the web server by running the commands below:
cd ~
mkdir webserver
cd webserver
Start the Python web server on port 80. Note that the version of Python can be found with the command from step 2:
Python 3.x: python -m http.server 80
Python 2.x: python -m SimpleHTTPServer 80
Prepare a Web Server on Ubuntu/Debian
Install Python on your machine with the commands below: sudo apt-get update && sudo apt-get install python3
After the Python installation, open a terminal and confirm that Python is installed correctly with the commands below:
python_version=$(dpkg -l | grep "^ii" | awk '/python/{print$2}' | grep "^python[0-9].[0-9]$" | head -n1)
sudo "${python_version}" -V
Create a directory for the web server by running the commands below:
cd ~
mkdir webserver
cd webserver
Start the Python web server on port 80. Note that the version of Python can be found with the command from step 2:
python3 -m http.server 80
Note that this article is only applicable to advanced users with a self-hosted UniFi Network Server installed on a Windows/macOS/Linux machine. We generally recommend using a UniFi OS Host for the best experience. Visit UI.com to learn more.
This article describes what the system.properties file is used for, and how to edit it.
Introduction
The system.properties file defines system-wide parameters for the UniFi Network Server. It is found in the data folder within <unifi_base>. Some advanced use-cases include:
Manual override of the Application IP Interface (the address to which Devices send inform packets).
Advanced Database adjustments.
Port Assignments, for purposes of the UniFi Network application communicating with Managed Devices, redirecting Guest Portal traffic, etc.
WARNING: Before editing the file, remember to create a backup of your system. It is also necessary to stop the application before performing any change in the file to avoid errors after changes are made.
The system.properties file can be edited directly via any text editor. Keep in mind that lines preceded by hash signs (#) are comments and are non-operational. Make edits at the bottom of the file. After changing this file, you’ll need to manually trigger provisioning on each site in order for the changes to take effect.
Note: The file is created when UniFi Network runs successfully. If you cannot find the file within <unifi_base>, create it by running UniFi Network.
Manually Specify the IP Interface for UniFi Network Application Communication
If a UniFi Host has multiple IP interfaces, the following configuration can manually set the exact IP interface that adopted APs should communicate to the Network application:
system_ip=a.b.c.d # the IP devices should be talking to for inform
Advanced Database Configuration
Below are advanced database configurations that most users will never need. Note: We do not test these configurations; they are enabled for the convenience of database experts. One possible usage scenario is running the application on a NAS, which has a smaller footprint than a normal server, creating a need to reduce the required resources.
unifi.db.nojournal=false # disable mongodb journaling
unifi.db.extraargs # extra mongod args
The configuration below is used to facilitate UniFi Network application installation. Again, most users will never need to set this. When is_default is set to true, the application will start with the factory default configuration. For normal, everyday users, an uninstallation followed by a fresh re-installation is recommended over this.
is_default=true
From the UniFi Network application you can configure the auto-backup frequency, number of backups to store, time of backup, etc. At the time of writing, you cannot change the storage location via the application. We do have a variable in system.properties if you wish to change the storage location. Currently, the default points to:
For Cloud Key: /data/autobackup (where the SD card is mounted as /data by default)
For software installs: {data.dir}/backup/autobackup
autobackup.dir=/some/path
HSTS can be enabled, but should only be done by advanced system administrators who are familiar with it. If you run into issues, you likely will need to clear your browser’s cache after disabling this and restarting the service. To enable HSTS support add the following:
NOTE: Currently, no characters are allowed after the custom line(s). This includes spaces, pound/sharp signs, comments, etc.
SMTP Related Settings
By default, SMTPS validates certificates and will reject self-signed or untrusted certificates. If your mail server uses an untrusted certificate, you must disable certificate verification with the following:
smtp.checkserveridentity=false
Starting with UniFi Network version 6.1, STARTTLS is opportunistically enabled by default; that is, it will be used if the server announces support for it, and it will require a trusted certificate. If using a self-signed or untrusted certificate, you must disable STARTTLS by setting the following:
smtp.starttls_enabled=false
This only controls whether STARTTLS will be used if the server supports it. To force its use, see: starttls_required.
With UniFi Network version 6.1 and newer, STARTTLS is opportunistically enabled by default, but only required when using port 587. To override this behavior, set smtp.starttls_required=true to force the use of STARTTLS on ports other than 587, or set it to false to make STARTTLS optional on port 587.
If smtp.starttls_enabled=false is set, the starttls_required value has no impact.
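Putting the SMTP settings together, a sketch for a mail server with a self-signed certificate might look like this at the bottom of system.properties (include only the lines your setup actually needs):
smtp.checkserveridentity=false
smtp.starttls_enabled=false
# or, instead of disabling STARTTLS, force it on ports other than 587:
# smtp.starttls_required=true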
Since 2014, the National Institute of Standards and Technology (NIST), a U.S. federal agency, has issued guidelines for managing digital identities via Special Publication 800-63B. The latest revision (rev. 3) was released in 2017, and has been updated as recently as 2019. Revision 4 was made available for comment and review; however, revision 3 is still the standard as of the time of this blog post.
Section 5.1.1 – Memorized Secrets provides recommendations for requirements around how users may create new passwords or make password changes, including guidelines around issues such as password strength. Special Publication 800-63B also covers verifiers (software, websites, network directory services, etc.) that validate and handle passwords during authentication and other processes.
Not all organizations must adhere to NIST guidelines. However, many follow NIST password policy recommendations even when not required because they provide a good foundation for sound digital identity management. Indeed, strong password security helps companies block many cybersecurity attacks, including brute-force attacks like credential stuffing and dictionary attacks. In addition, mitigating identity-related security risks helps organizations ensure compliance with a wide range of regulations, such as HIPAA, FISMA and SOX.
Quick List of NIST Password Guidelines
This blog explains many NIST password guidelines in detail, but here’s a quick list:
User-generated passwords should be at least 8 characters in length.
Machine-generated passwords should be at least 6 characters in length.
Users should be able to create passwords at least 64 characters in length.
All ASCII/Unicode characters should be allowed, including emojis and spaces.
Stored passwords should be hashed and salted, and never truncated.
Prospective passwords should be compared against password breach databases and rejected if there’s a match.
Passwords should not expire.
Users should be prevented from using sequential characters (e.g., “1234”) or repeated characters (e.g., “aaaa”).
Two-factor authentication (2FA) should not use SMS for codes.
Knowledge-based authentication (KBA), such as “What was the name of your first pet?”, should not be used.
Users should be allowed 10 failed password attempts before being locked out of a system or service.
Passwords should not have hints.
Complexity requirements — like requiring special characters, numbers or uppercase letters — should not be used.
Context-specific words, such as the name of the service or the individual’s username, should not be permitted.
You probably notice that some of these recommendations represent a departure from previous assumptions and standards. For example, NIST has removed complexity requirements like special characters in passwords; this change was made in part because users find ways to circumvent stringent complexity requirements. Instead of struggling to remember complex passwords and risking getting locked out, they may write their passwords down and leave them near physical computers or servers. Or they simply recycle old passwords based on dictionary words by making minimal changes during password creation, such as incrementing a number at the end.
NIST Guidelines
Now let’s explore the NIST guidelines in more detail.
Password length & processing
Length has long been considered a crucial factor for password security. NIST now recommends a password policy that requires all user-created passwords to be at least 8 characters in length, and all machine-generated passwords to be at least 6 characters in length. Additionally, it’s recommended that users be allowed to create passwords up to at least 64 characters in length.
Verifiers should no longer truncate any passwords during processing. Passwords should be hashed and salted, with the full password hash stored.
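As an illustration of the hash-and-salt recommendation, here is a minimal Python sketch using the standard library’s scrypt function (the cost parameters are assumptions; tune them for your hardware):

import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Derive a salted hash with scrypt; store the salt and the full digest, never truncated.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)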
Also, the recommended NIST account lockout policy is to allow users at least 10 attempts at entering their password before being locked out.
Accepted characters
All ASCII characters, including the space character, should be supported in passwords. NIST specifies that Unicode characters, such as emojis, should be accepted as well.
Users should be prevented from using sequential characters (e.g., “1234”), repeated characters (e.g., “aaaa”) and simple dictionary words.
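A simple screen for such patterns might look like the following (the four-character run threshold is an assumption; NIST doesn’t prescribe one):

def has_trivial_run(password: str, run_length: int = 4) -> bool:
    # Flag ascending runs like "1234"/"abcd" and repeats like "aaaa".
    for i in range(len(password) - run_length + 1):
        window = password[i:i + run_length]
        ascending = all(ord(b) - ord(a) == 1 for a, b in zip(window, window[1:]))
        if ascending or len(set(window)) == 1:
            return True
    return False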
Commonly used & breached passwords
Passwords that are known to be commonly used or compromised should not be permitted. For example, you should disallow passwords in lists from breaches (such as the Have I Been Pwned? database, which contains 570+ million passwords from breaches), previously used passwords, well-known commonly used passwords, and context-specific passwords (e.g., the name of the service).
When a user attempts to use a password that fails this check, a message should be displayed asking them for a different password and providing an explanation for why their previous entry was rejected.
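The Have I Been Pwned database mentioned above exposes a public range API suited to this check; the sketch below sends only the first five characters of the password’s SHA-1 hash (k-anonymity), never the password itself (error handling and rate limiting omitted):

import hashlib
import urllib.request

def is_breached(password: str) -> bool:
    # Only the 5-character hash prefix leaves the machine.
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each line of the response is "<hash-suffix>:<count>".
    return any(line.split(":", 1)[0] == suffix for line in body.splitlines())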
Reduced complexity & password expiration
As explained earlier in the blog, previous password complexity requirements have led to less secure human behavior, instead of the intended effect of tightening security. With that in mind, NIST recommends reduced complexity requirements, which includes removing requirements for special characters, numbers, uppercase characters, etc.
A related recommendation for reducing insecure human behavior is to eliminate password expiration.
No more hints or knowledge-based authentication (KBA)
Although password hints were intended to help users to create more complex passwords, users often choose hints that practically give away their passwords. Accordingly, NIST recommends not allowing password hints.
NIST also recommends not using knowledge-based authentication (KBA), such as questions like “What was the name of your first pet?”
To account for the growing popularity of password managers, users should be able to paste passwords.
SMS is no longer considered a secure option for 2FA. Instead, one-time code providers, such as Google Authenticator or Okta Verify, should be used.
How Netwrix Can Help
Netwrix offers several solutions specifically designed to streamline and strengthen access and password management:
Netwrix Password Policy Enforcer makes it easy to create strong yet flexible password policies that enhance security and compliance without hurting user productivity or burdening helpdesk and IT teams.
Netwrix Password Reset enables users to safely unlock their own accounts and reset or change their own passwords, right from their web browser. This self-service functionality dramatically reduces user frustration and productivity losses while slashing helpdesk call volume.
FAQ
What is NIST Special Publication 800-63B?
NIST’s Digital Identity Guidelines (Special Publication 800-63B) provides reliable recommendations for identity and access management, including effective password policies.
Why does NIST recommend reducing password complexity requirements?
While requiring complex passwords makes them more difficult for attackers to crack, it also makes passwords harder for users to remember. To avoid frustrating lockouts, users tend to respond with behaviors like writing down their credentials on a sticky note by their desk or choosing to periodically reuse the same (or nearly the same) password — which increase security risks. Accordingly, NIST now recommends less stringent complexity requirements.
How governments play a vital role in developing regulations, stopping supply chain attacks, and diminishing other threats to our way of life.
While new issues are always emerging in the world of cybersecurity, some have been present since the beginning, such as what role cybersecurity should play in government operations and, conversely, what role government should play in cybersecurity. The answer to this question continues to shift and evolve over time, but each new leap in technology introduces additional considerations. As we move into the AI era, how can government best keep citizens safe without constraining innovation and the free market — and how can the government use its defensive capabilities to retain an edge in the conflicts of tomorrow?
The day’s first session, “Cybersecurity and Military Defense in an Increasingly Digital World,” offered a deep dive into the latter question. Over the past 20 years, military conflicts have moved from involving just Land, Air and Sea to also being fought in Space and Cyber. While superior technology has given us an upper hand in previous conflicts, in some areas our allies — and our adversaries — are catching up or even surpassing us. In each great technological leap, companies and countries alike ascend and recede, and to keep our edge in the conflicts of the future, the U.S. will need to shed complacency, develop the right policies, move toward greater infrastructure security and tap the capabilities of the private sector.
SonicWall in particular is well-positioned to work with the federal government and the military. For years, we’ve helped secure federal agencies and defense deployments against enemies foreign and domestic, and have worked to shorten and simplify the acquisition and procurement process. Our list of certifications includes FIPS 140-2, Common Criteria, DoDIN APL, Commercial Solutions for Classified (CSfC), USGv6, IPv6 and TAA, among others. And our wide range of certified solutions has been used in a number of government use cases, such as globally distributed networks in military deployments and federal agencies, tip-of-the-spear, hub-and-spoke, and defense-in-depth layered firewall strategies, and more.
The new National Cybersecurity Strategy was at the center of the day’s next session. In “The National Cyber Strategy as Roadmap to a Secure Cyber Future,” panelists outlined this strategic guidance, which was released just two months ago and offers a roadmap for how the U.S. should protect its digital ecosystem against malicious criminal and nation-state actors. The guidance consists of five pillars, all of which SonicWall is in accord with:
Pillar One: Defend Critical Infrastructure SonicWall offers several security solutions that align with Pillar One, including firewalls, intrusion prevention, VPN, advanced threat protection, email security, Zero-Trust network access and more. We’re also working to align with and conform to NIST SSDF and NIST Zero Trust Architecture standards.
Pillar Three: Shape Market Forces to Drive Security and Resilience
This pillar shifts liability from end users to software providers that ignore best practices, ship insecure or vulnerable products, or integrate unvetted or unsafe third-party software. As part of our efforts to align with the NIST SSDF, we’re implementing a Software Bill of Materials (SBOM).
Pillar Four: Invest in a Resilient Future
Given CISA’s prominence in this guidance, any regulations created will likely include threat emulation testing, and will likely be mapped to threat techniques, such as MITRE ATT&CK. SonicWall Capture Client (our EDR solution) is powered by SentinelOne, which has been a participant in the MITRE ATT&CK evaluations since 2018 and was a top performer in the 2022 Evaluations.
Pillar Five: Forge International Partnerships to Pursue Shared Goals
An international company, SonicWall recognizes the importance of international partnerships and works to comply with global regulations such as GDPR, HIPAA, PCI-DSS and more. By sharing threat intelligence and collaborating on mitigation strategies, we work with governments and the rest of the cybersecurity community to pursue shared cybersecurity goals.
And with the continued rise in cybercrime, realizing these goals has never been more important. In “The State of Cybersecurity: Year in Review,” Mandiant CEO Kevin Mandia summarized findings from the 1,163 intrusions his company investigated in 2022. The good news, Mandia said, is that we’re detecting threats faster. In just ten years, we’ve gone from averaging 200 days to notice there’s a problem, to just 16 days currently — but at the same time, an increase in the global median dwell time for ransomware shows there’s still work to be done.
Mandia also outlined the evolution of how cybercriminals enter networks, from Unix platforms to Windows-based attacks, and from phishing to spearphishing to vulnerabilities — bringing patch management once again to the fore.
Deep within the RSAC Sandbox, where today’s defenders learn, play and test their skills, panelists convened to discuss how to stop attackers’ relentless attempts to shift left. “Software Supply Chain: Panel on Threat Intel, Trends, Mitigation Strategies” explained that while the use of third-party components increases agility, it comes with tremendous risk. More than 96% of software organizations rely on third-party code, 90% of which consists of open source—but the developers of this software are frequently single individuals or small groups who may not have time to incorporate proper security, or even know how. Our current strategy of signing at the end isn’t enough, panelists argued—to truly ensure safety, signing should be done throughout the process (otherwise known as “sign at the station”).
Israel provides an example of how a country can approach the issue of software supply chain vulnerability — among other things, the country has created a GitHub and browser extension allowing developers to check packages for malicious code — but much work would need to be done to implement the Israel model in the U.S. AI also provides some hope, but given its current inability to reliably detect malicious code, we’re still a long way from being able to rely on it. In the meantime, organizations will need to rely on tried-and-true solutions such as SBOMs to help guard against supply chain attacks in the near future.
But while AI has tremendous potential to help defenders, it also has terrible potential to aid attackers. In “ChatGPT: A New Generation of Dynamic Machine-Based Attacks,” the speakers highlighted ways that attackers are using the new generation of AI technology to dramatically improve social engineering attempts, expand their efforts to targets in new areas, and even write ransomware and other malicious code. In real time, the speakers demonstrated the difference between previous phishing emails and phishing generated by ChatGPT, including the use of more natural language, the ability to instantly access details about the target and the ability to imitate a leader or colleague trusted by the victim with a minimum of effort. These advancements will lead to a sharp increase in victims of phishing attacks, as well as things like Business Email Compromise.
And while there are guardrails in place to help prevent ChatGPT from being used maliciously, they can be circumvented with breathtaking ease. With the simple adjustment of a prompt, the speakers demonstrated, ransomware and other malicious code can be generated. While this code isn’t functional on its own, it’s just one or two simple adjustments away — and this capability could be used to rapidly increase the speed with which attacks are launched.
These capabilities are especially concerning given the rise in state-sponsored attacks. In “State of the Hack 2023: NSA’s Perspective,” NSA Director of Cybersecurity Rob Joyce addressed a packed house regarding the NSA’s work to prevent the increasing wave of nation-state threats. The two biggest nation-state threats to U.S. cybersecurity continue to be Russia and China, with much of the Russian effort centering around the U.S.’ assistance in the Russia/Ukraine conflict.
As we detailed in our SonicWall 2023 Cyber Threat Report, since the beginning of the conflict, attacks by Russia’s military and associated groups have driven a massive spike in cybercrime in Ukraine. The good news, Joyce said, is that Russia is currently in intelligence-gathering mode when it comes to the U.S., and is specifically taking care not to release large-scale NotPetya-type attacks. But Russia also appears to be playing the long game, and is showing no signs of slowing or scaling back their efforts.
China also appears to be biding its time — but unlike Russia, whose efforts appear to be focused around traditional military dominance, China is seeking technological dominance. Exploitation by China has increased so much that we’ve become numb to it, Joyce argued. And since these nation-state sponsored attackers don’t incur much reputational damage for their misdeeds, they’ve become increasingly brazen in their attacks, going so far as to require any citizen who finds a zero-day to pass details to the government and hosting competitions for building exploits and finding vulnerabilities. And the country is also making efforts to influence international tech standards in an attempt to tip scales in their favor for years to come.
The 2023 RSA Conference has offered a wealth of information on a wide variety of topics, but it will soon draw to a close. Thursday is the last day to visit the SonicWall booth (#N-5585 in Moscone North) and enjoy demos and presentations on all of our latest technology. Don’t head home without stopping by — and don’t forget to check back for the conclusion of our RSAC 2023 coverage!
Applies to: Microsoft Edge, Windows 11, Windows 10, Office for business
Microsoft is always striving to improve and streamline our product experiences—offering a new way to use the classic Microsoft Outlook app on Windows and the Microsoft Edge web browser.
If you have a Microsoft 365 Personal or Family subscription, browser links from the Outlook app will open in Microsoft Edge by default, right alongside the email they’re from in the Microsoft Edge sidebar pane. This allows you to easily access, read, and respond to the message using your matching authenticated profile. No more disruptive switching—just your email and the web content you need to reference, in a single, side-by-side view. And we’re always optimizing the sidebar in Microsoft Edge to give you useful content and tools while you’re browsing so you don’t have to toggle back and forth between windows or even other tabs—whether you’re shopping online or working in a Microsoft 365 web app.
In the future, links from your Microsoft Teams messages will also open in Microsoft Edge by default to help you stay engaged in conversations as you browse the web.
Ultimately though, if this experience isn’t right for you, you can turn off this feature the first time it launches in Microsoft Edge, and then in Outlook settings at any time after that.
FAQ
Why is Microsoft making this change?
To improve your experience between email and browsing—letting you see them both at the same time, in the same place. No more switching back and forth between apps.
To provide a unique experience—at Microsoft, we strive to create the best customer experience across our products.
To reduce task switching and improve workflow and focus—by opening browser links in Microsoft Edge, the original message in Outlook can be viewed alongside web content to easily access, read, and respond to the message, using the matching authenticated profile.
Will this replace my default browser setting in Windows?
No, this only impacts links opened from Microsoft Outlook on Windows and you have the option to turn off this feature in Outlook settings.
I want to open links with the browser set as the default in Windows Settings. How do I do that?
You can choose your preferred browser for opening links from Outlook the first time it launches in Microsoft Edge. After that you can change this setting in Outlook at any time—select File > Options > Advanced > Link handling and choose your preferred browser from the dropdown menu.
In Outlook, go to File.
Select Options.
Select Advanced > Link handling and choose your preferred browser from the dropdown menu.
Will this change affect me if I’m using a Mac?
No, this change will only be applied to Windows 10 and Windows 11 devices.
Send us feedback
Please let us know what you think about the new experience in one of two ways:
In Microsoft Edge, go to Settings and more > Help and feedback > Send feedback. Follow the on-screen instructions and select Send.
Press Alt + Shift + I on your keyboard. Follow the on-screen instructions and select Send.
The SMTP protocol is the main protocol used to transfer messages between mail servers and is, by default, not secure. The Transport Layer Security (TLS) protocol was introduced years ago to support encrypted transmission of messages over SMTP. It’s commonly used opportunistically rather than as a requirement, leaving much email traffic in clear text, vulnerable to interception by nefarious actors. Furthermore, SMTP determines the IP addresses of destination servers through the public DNS infrastructure, which is susceptible to spoofing and Man-in-the-Middle (MITM) attacks. This vulnerability has led to many new standards being created to increase security for sending and receiving email, one of those standards being DNS-based Authentication of Named Entities (DANE).
DANE for SMTP (RFC 7672) uses the presence of a Transport Layer Security Authentication (TLSA) record in a domain’s DNS record set to signal that a domain and its mail server(s) support DANE. If there’s no TLSA record present, DNS resolution for mail flow will work as usual without any DANE checks being attempted. The TLSA record securely signals TLS support and publishes the DANE policy for the domain. So, sending mail servers can successfully authenticate legitimate receiving mail servers using SMTP DANE. This authentication makes it resistant to downgrade and MITM attacks. DANE has direct dependencies on DNSSEC, which works by digitally signing records for DNS lookups using public key cryptography. DNSSEC checks occur on recursive DNS resolvers, the DNS servers that make DNS queries for clients. DNSSEC ensures that DNS records aren’t tampered with and are authentic.
Once the MX, A/AAAA and DNSSEC-related resource records for a domain are returned to the DNS recursive resolver as DNSSEC authentic, the sending mail server will ask for the TLSA record corresponding to the MX host entry or entries. If the TLSA record is present and proven authentic using another DNSSEC check, the DNS recursive resolver will return the TLSA record to the sending mail server.
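These lookups can be reproduced by hand with dig; contoso.com and its MX host are placeholders (the _25._tcp label is where SMTP TLSA records live):
dig contoso.com MX +dnssec
dig _25._tcp.mail.contoso.com TLSA +dnssec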
After the authentic TLSA record is received, the sending mail server establishes an SMTP connection to the MX host associated with the authentic TLSA record. The sending mail server will try to set up TLS and compare the server’s TLS certificate with the data in the TLSA record to validate that the destination mail server connected to the sender is the legitimate receiving mail server. The message will be transmitted (using TLS) if authentication succeeds. When authentication fails or if TLS isn’t supported by the destination server, Exchange Online will retry the entire validation process beginning with a DNS query for the same destination domain again after 15 minutes, then 15 minutes after that, then every hour for the next 24 hours. If authentication continues to fail after 24 hours of retrying, the message will expire, and an NDR with error details will be generated and sent to the sender.
Tip
If you’re not an E5 customer, use the 90-day Microsoft Purview solutions trial to explore how additional Purview capabilities can help your organization manage data security and compliance needs. Start now at the Microsoft Purview compliance portal trials hub. Learn details about signing up and trial terms.
What are the components of DANE?
TLSA Resource Record
The TLS Authentication (TLSA) record is used to associate a server’s X.509 certificate or public key value with the domain name that contains the record. TLSA records can only be trusted if DNSSEC is enabled on your domain. If you’re using a DNS provider to host your domain, DNSSEC may be offered as a setting when configuring your domain with them. To learn more about DNSSEC zone signing, see Overview of DNSSEC | Microsoft Docs.
Example TLSA record:
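A record of the form discussed in the rest of this section, with the host label and TTL as placeholders and the certificate association data abbreviated, looks like this:
_25._tcp.mail.contoso.com. 3600 IN TLSA 3 1 1 abc123…xyz789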
There are four configurable fields unique to the TLSA record type:
Certificate Usage Field: Specifies how the sending email server should verify the destination email server’s certificate.
Value | Acronym | Description
0 (1) | PKIX-TA | Certificate used is the trust-anchor Public CA from the X.509 trust-chain.
1 (1) | PKIX-EE | Certificate checked is the destination server’s; DNSSEC checks must verify its authenticity.
2 | DANE-TA | Use server’s private key from the X.509 tree that must be validated by a trust anchor in the chain of trust. The TLSA record specifies the trust anchor to be used for validating the TLS certificates for the domain.
3 | DANE-EE | Only match against the destination server’s certificate.
(1) Exchange Online follows RFC implementation guidance that Certificate Usage Field values of 0 or 1 shouldn’t be used when DANE is implemented with SMTP. When a TLSA record that has a Certificate Usage field value of 0 or 1 is returned to Exchange Online, Exchange Online will treat it as not usable. If all TLSA records are found unusable, Exchange Online won’t perform the DANE validation steps for 0 or 1 when sending the email. Instead, because of the presence of a TLSA record, Exchange Online will enforce the use of TLS for sending the email, sending the email if the destination email server supports TLS or dropping the email and generating an NDR if the destination email server doesn’t support TLS.
In the example TLSA record, the Certificate Usage Field is set to ‘3’, so the Certificate Association Data (‘abc123…xyz789’) would be matched against the destination server’s certificate only.
Selector field: Indicates which parts of the destination server’s certificate should be checked.
Value | Acronym | Description
0 | Cert | Use the full certificate.
1 | SPKI (Subject Public Key Info) | Use the certificate’s public key and the algorithm with which the public key is identified.
In the example TLSA record, the Selector Field is set to ‘1’, so the Certificate Association Data would be matched using the destination server certificate’s public key and the algorithm with which the public key is identified.
Matching Type Field: Indicates the format in which the certificate is represented in the TLSA record.
Value | Acronym | Description
0 | Full | The data in the TLSA record is the full certificate or SPKI.
1 | SHA-256 | The data in the TLSA record is an SHA-256 hash of either the certificate or the SPKI.
2 | SHA-512 | The data in the TLSA record is an SHA-512 hash of either the certificate or the SPKI.
In the example TLSA record, the Matching Type Field is set to ‘1’, so the Certificate Association Data is an SHA-256 hash of the Subject Public Key Info from the destination server certificate.
Certificate Association Data: Specifies the certificate data that is used for matching against the destination server certificate. This data depends on the Selector Field value and the Matching Type Value.
In the example TLSA record, the Certificate Association data is set to ‘abc123..xyz789’. Since the Selector Field value in the example is set to ‘1’, it would reference the destination server certificate’s public key and the algorithm that’s identified to be used with it. And since the Matching Type field value in the example is set to ‘1’, it would reference the SHA-256 hash of the Subject Public Key Info from the destination server certificate.
How can Exchange Online customers use SMTP DANE Outbound?
As an Exchange Online customer, you don’t need to do anything to configure this enhanced email security for your outbound email. We have built it for you; it’s ON by default for all Exchange Online customers and is used when the destination domain advertises support for DANE. To reap the benefits of sending email with DNSSEC and DANE checks, communicate to the business partners with whom you exchange email that they need to implement DNSSEC and DANE so they can receive email using these standards.
How can Exchange Online customers use SMTP DANE inbound?
Currently, inbound SMTP DANE isn’t supported for Exchange Online. Support for inbound SMTP DANE will be available in the near future.
What is the recommended TLSA record configuration?
Per RFC implementation guidance for SMTP DANE, a TLSA record composed of the Certificate Usage field set to 3, the Selector field set to 1, and the Matching Type field set to 1 is recommended.
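For such a 3 1 1 record, the certificate association data is the SHA-256 hash of the receiving server’s Subject Public Key Info. One way to derive it from a live server (mail.contoso.com is a placeholder):
openssl s_client -starttls smtp -connect mail.contoso.com:25 </dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER | openssl dgst -sha256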
Exchange Online Mail Flow with SMTP DANE
The mail flow process for Exchange Online with SMTP DANE validates domain and resource record security through DNSSEC, TLS support on the destination mail server, and that the destination mail server’s certificate matches what is expected based on its associated TLSA record.
There are only two scenarios where an SMTP DANE failure will result in the email being blocked:
The destination domain signaled DNSSEC support but one or more records were returned as inauthentic.
All MX records for the destination domain have TLSA records and none of the destination servers’ certificates match what was expected per the TLSA record data, or a TLS connection isn’t supported by the destination server.
Related Technologies
Mail Transfer Agent – Strict Transport Security (MTA-STS): helps thwart downgrade and Man-in-the-Middle attacks by providing a mechanism for setting domain policies that specify whether the destination email server supports TLS and what to do when TLS can’t be negotiated, for example stop the transmission. More information about Exchange Online’s upcoming support for inbound and outbound MTA-STS will be published later this year.
DomainKeys Identified Mail (DKIM): uses X.509 certificate information to ensure that destination email systems trust messages sent outbound from your custom domain.
Domain-based Message Authentication, Reporting, and Conformance (DMARC): works with Sender Policy Framework and DomainKeys Identified Mail to authenticate mail senders and ensure that destination email systems trust messages sent from your domain.
Currently, there are four error codes for DANE when sending emails with Exchange Online. Microsoft is actively updating this error code list. The errors will be visible in:
The Exchange Admin Center portal through the Message Trace Details view.
NDRs generated when a message isn’t sent due to a DANE or DNSSEC failure.
4/5.7.321 starttls-not-supported: Destination mail server must support TLS to receive mail.
4/5.7.322 certificate-expired: Destination mail server’s certificate is expired.
4/5.7.323 tlsa-invalid: The domain failed DANE validation.
4/5.7.324 dnssec-invalid: Destination domain returned invalid DNSSEC records.
Note
Currently, when a domain signals that it supports DNSSEC but fails DNSSEC checks, Exchange Online does not generate the 4/5.7.324 dnssec-invalid error. It generates a generic DNS error:
4/5.4.312 DNS query failed
We are actively working to remedy this known limitation. If you receive this error statement, navigate to the Microsoft Remote Connectivity Analyzer and perform the DANE validation test against the domain that generated the 4/5.4.312 error. The results will show if it is a DNSSEC issue or a different DNS issue.
Troubleshooting 4/5.7.321 starttls-not-supported
This error usually indicates an issue with the destination mail server. After receiving the message:
Check that the destination email address was entered correctly.
Alert the destination email administrator that you received this error code so they can determine if the destination server is configured correctly to receive messages using TLS.
Retry sending the email and review the Message Trace Details for the message in the Exchange Admin Center portal.
Troubleshooting 4/5.7.322 certificate-expired
A valid X.509 certificate that hasn’t expired must be presented to the sending email server. X.509 certificates must be renewed after their expiration, commonly annually. After receiving the message:
Alert the destination email administrator that you received this error code and provide the error code string.
Allow time for the destination server certificate to be renewed and the TLSA record to be updated to reference the new certificate. Then, retry sending the email and review the Message Trace Details for the message in the Exchange Admin Center portal.
Troubleshooting 4/5.7.323 tlsa-invalid
This error code is related to a TLSA record misconfiguration and can only be generated after a DNSSEC-authentic TLSA record has been returned. There are many scenarios during the DANE validation that occur after the record has been returned that can result in the code being generated. Microsoft is actively working on the scenarios that are covered by this error code, so that each scenario has a specific code. Currently, one or more of these scenarios could cause the generation of the error code:
The destination mail server’s certificate doesn’t match with what is expected per the authentic TLSA record.
Authentic TLSA record is misconfigured.
The destination domain is being attacked.
Any other DANE failure.
After receiving the message:
Alert the destination email administrator that you received this error code and provide them the error code string.
Allow time for the destination email admin to review their DANE configuration and email server certificate validity. Then, retry sending the email and review the Message Trace Details for the message in the Exchange Admin Center portal.
Troubleshooting 4/5.7.324 dnssec-invalid
This error code is generated when the destination domain indicated it was DNSSEC-authentic but Exchange Online wasn’t able to verify it as DNSSEC-authentic.
After receiving the message:
Alert the destination email administrator that you received this error code and provide them the error code string.
Allow time for the destination email admin to review their domain’s DNSSEC configuration. Then, retry sending the email and review the Message Trace Details for the message in the Exchange Admin Center portal.
Troubleshooting Receiving Emails with SMTP DANE
Currently, there are two methods an admin of a receiving domain can use to validate and troubleshoot their DNSSEC and DANE configuration to receive email from Exchange Online using these standards.
Adopt SMTP TLS-RPT (Transport Layer Security Reporting) introduced in RFC8460
TLS-RPT (https://datatracker.ietf.org/doc/html/rfc8460) is a reporting mechanism for senders to provide details to destination domain administrators about DANE and MTA-STS successes and failures with those respective destination domains. To receive TLS-RPT reports, you only need to add a TXT record to your domain’s DNS records that includes the email address or URI you would like the reports to be sent to. Exchange Online sends TLS-RPT reports in JSON format.
Example record:
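Assuming the contoso.com domain and a hypothetical reporting mailbox, an RFC 8460 record looks like this:
_smtp._tls.contoso.com. IN TXT "v=TLSRPTv1; rua=mailto:tlsrpt@contoso.com"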
The second method is to use the Remote Connectivity Analyzer Microsoft Remote Connectivity Analyzer, which can do the same DNSSEC and DANE checks against your DNS configuration that Exchange Online will do when sending email outside the service. This method is the most direct way of troubleshooting errors in your configuration to receive email from Exchange Online using these standards.
While troubleshooting, the following error codes may be generated:
NDR Code | Description
4/5.7.321 | starttls-not-supported: Destination mail server must support TLS to receive mail.
4/5.7.322 | certificate-expired: Destination mail server’s certificate has expired.
4/5.7.323 | tlsa-invalid: The domain failed DANE validation.
4/5.7.324 | dnssec-invalid: Destination domain returned invalid DNSSEC records.
Note
Currently, when a domain signals that it supports DNSSEC but fails DNSSEC checks, Exchange Online does not generate the 4/5.7.324 dnssec-invalid error. It generates a generic DNS error:
4/5.4.312 DNS query failed
We are actively working to remedy this known limitation. If you receive this error statement, navigate to the Microsoft Remote Connectivity Analyzer and perform the DANE validation test against the domain that generated the 4/5.4.312 error. The results will show if it is a DNSSEC issue or a different DNS issue.
Troubleshooting 4/5.7.321 starttls-not-supported
Note
These steps are for email administrators troubleshooting receiving email from Exchange Online using SMTP DANE.
This error usually indicates an issue with the destination mail server — that is, the mail server the Remote Connectivity Analyzer is testing a connection with. There are generally two scenarios that generate this code:
The destination mail server doesn’t support secure communication at all, and plain, non-encrypted communication must be used.
The destination server is configured improperly and ignores the STARTTLS command.
After receiving the message:
Check the email address.
Locate the IP address that is associated with the error statement so you can identify the mail server the statement is associated with.
Check your mail server’s setting to make sure it’s configured to listen for SMTP traffic (commonly ports 25 and 587).
Wait a few minutes, then retry the test with the Remote Connectivity Analyzer tool.
If it still fails, then try removing the TLSA record and run the test with the Remote Connectivity Analyzer tool again.
If there are no failures, this message may indicate the mail server you’re using to receive mail doesn’t support STARTTLS and you may need to upgrade to one that does in order to use DANE.
Troubleshooting 4/5.7.322 certificate-expired
Note
These steps are for email administrators troubleshooting receiving email from Exchange Online using SMTP DANE.
A valid X.509 certificate that hasn’t expired must be presented to the sending email server. X.509 certificates must be renewed after their expiration, commonly annually. After receiving the message:
Check the IP that is associated with the error statement, so you can identify the mail server it’s associated with. Locate the expired certificate on the email server you identified.
Sign in to your certificate provider’s website.
Select the expired certificate and follow the instructions to renew and to pay for the renewal.
After your provider has verified the purchase, you may download a new certificate.
Install the renewed certificate into its associated mail server.
Update the mail server’s associated TLSA record with the new certificate’s data.
After waiting an appropriate amount of time, retry the test with the Remote Connectivity Analyzer tool.
Troubleshooting 4/5.7.323 tlsa-invalid
Note
These steps are for email administrators troubleshooting receiving email from Exchange Online using SMTP DANE.
This error code is related to a TLSA record misconfiguration and can only be generated after a DNSSEC-authentic TLSA record has been returned. However, there are many scenarios during DANE validation, occurring after the record has been returned, that can result in this code being generated. Microsoft is actively working on the scenarios covered by this error code, so that each scenario has a specific code. Currently, one or more of these scenarios could cause the error code to be generated:
Authentic TLSA record is misconfigured.
The certificate isn’t yet time-valid (it’s configured for a future time window).
Destination domain is being attacked.
Any other DANE failure.
After receiving the message:
Check the IP that is associated with the error statement to identify the mail server it’s associated with.
Identify the TLSA record that is associated with the identified mail server.
Verify the configuration of the TLSA record to ensure that it signals the sender to perform the preferred DANE checks and that the correct certificate data has been included in the TLSA record.
If you have to update the record to fix discrepancies, wait a few minutes and then rerun the test with the Remote Connectivity Analyzer tool.
Locate the certificate on the identified mail server.
Check the time window for which the certificate is valid. If its validity is set to start at a future date, the certificate needs to be reissued for the current date.
Sign in to your certificate provider’s website.
Select the expired certificate and follow the instructions to renew and to pay for the renewal.
After your provider has verified the purchase, you may download a new certificate.
Install the renewed certificate into its associated mail server.
Troubleshooting 4/5.7.324 dnssec-invalid
Note
These steps are for email administrators troubleshooting receiving email from Exchange Online using SMTP DANE.
This error code is generated when the destination domain indicated it’s DNSSEC-authentic but Exchange Online isn’t able to verify it as DNSSEC-authentic. This section isn’t comprehensive for troubleshooting DNSSEC issues and focuses on scenarios where domains previously passed DNSSEC authentication but no longer do.
After receiving the message:
If you’re using a DNS provider, for example GoDaddy, alert your DNS provider of the error so they can work on the troubleshooting and configuration change.
If you’re managing your own DNSSEC infrastructure, there are many DNSSEC misconfigurations that may generate this error message. Some common problems to check for if your zone was previously passing DNSSEC authentication:
Broken trust chain: the parent zone holds a set of DS records that point to something that doesn’t exist in the child zone. Such DS-record pointers can result in the child zone being marked as bogus by validating resolvers.
Resolve by reviewing the child domain’s RRSIG key IDs and ensuring that they match the key IDs in the DS records published in the parent zone.
The RRSIG resource record for the domain isn’t time-valid: it has either expired or its validity period hasn’t begun.
Resolve by generating new signatures for the domain using valid timespans.
Note
This error code is also generated if Exchange Online receives a SERVFAIL response from the DNS server for the TLSA query on the destination domain.
When an outbound email is being sent, if the receiving domain has DNSSEC enabled, we query for TLSA records associated with MX entries in the domain. If no TLSA record is published, the response to the TLSA lookup must be NOERROR (no records of requested type for this domain) or NXDOMAIN (there’s no such domain). DNSSEC requires this response if no TLSA record is published; otherwise, Exchange Online interprets the lack of response as a SERVFAIL error. As per RFC 7672, a SERVFAIL response is untrustworthy; so, the destination domain fails DNSSEC validation. This email is then deferred with the following error:
450 4.7.324 dnssec-invalid: Destination domain returned invalid DNSSEC records
If the email sender reports receiving this error:
If you’re using a DNS provider, for example, GoDaddy, alert your DNS provider of the error so that they can troubleshoot the DNS response. If you’re managing your own DNSSEC infrastructure, it could be an issue with the DNS server itself or with the network.
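A quick spot check of what DNS actually returns for the TLSA query can narrow this down (contoso.com stands in for the destination domain); a SERVFAIL status in the response header points at a DNSSEC or DNS-server problem, while NOERROR or NXDOMAIN is the expected no-record answer:
dig _25._tcp.mail.contoso.com TLSA +dnssec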
Frequently Asked Questions
As an Exchange Online customer, can I opt out of using DNSSEC and/or DANE?
We strongly believe DNSSEC and DANE will significantly increase the security position of our service and benefit all of our customers. We’ve worked diligently over the last year to reduce the risk and severity of the potential impact this deployment might have for Microsoft 365 customers. We’ll be actively monitoring and tracking the deployment to ensure negative impact is minimized as it rolls out. Because of this, tenant-level exceptions or opt-out won’t be available. If you experience any issues related to the enablement of DNSSEC and/or DANE, the different methods for investigating failures noted in this document will help you identify the source of the error. In most cases, the issue will be with the external destination party and you’ll need to communicate to these business partners that they need to correctly configure DNSSEC and DANE in order to receive email from Exchange Online using these standards.
How does DNSSEC relate to DANE?
DNSSEC adds a layer of trust into DNS resolution by applying the public key infrastructure to ensure the records returned in response to a DNS query are authentic. DANE ensures that the receiving mail server is the legitimate and expected mail server for the authentic MX record.
What is the difference between MTA-STS and DANE for SMTP?
DANE and MTA-STS serve the same purpose, but DANE requires DNSSEC for DNS authentication while MTA-STS relies on certificate authorities.
Why isn’t Opportunistic TLS sufficient?
Opportunistic TLS will encrypt communication between two endpoints if both agree to support it. However, even if TLS encrypts the transmission, a domain could be spoofed during DNS resolution such that it points to a malicious actor’s endpoint instead of the real endpoint for the domain. This spoof is a gap in email security that is addressed by implementing MTA-STS and/or SMTP DANE with DNSSEC.
Why isn’t DNSSEC sufficient?
DNSSEC isn’t fully resistant to Man-in-the-Middle attacks and downgrade (from TLS to clear text) attacks for mail flow scenarios. The addition of MTA-STS and DANE along with DNSSEC provides a comprehensive security method to thwart both MITM and downgrade attacks.
This is a Friday long-read, so grab a warm cup of something and kick back because we’re going to take our time on this. The world is about to profoundly change. I know you’re nervous – perhaps excited and optimistic, but if you’ve been paying attention and have been watching the trajectory of this thing, the rational reaction is to be nervous. In this post I’d like to unpack in practical and tangible terms what AI is, where it came from, and the state of play, and then I’ll show you a path that will give you a pretty good shot at surviving the coming revolution.
Who am I? I’m Mark Maunder and I founded Wordfence in 2011 and wrote the early versions of the product until 2015 when Matt Barry took over as lead developer and I morphed into a tech executive running Defiant Inc, which makes Wordfence. We have over 4 million customers using our free product and a large number of paying customers using our various paid WordPress security products. I’ve been a technologist since the early 90s and a kid hacker in my teens in the 80s and 90s. I started my career in mission-critical operations for companies like Coca-Cola, Credit Suisse (now UBS), and DeBeers and then went over to do dev for companies like the BBC and eToys, which was one of the biggest dot-com busts. Around 2001 I created WorkZoo in the UK, the first job meta-search; it later competed with Indeed (which launched after us), made Time Magazine and the NYTimes, and I sold it in 2005. I subsequently launched a Geoblogging platform, inline comments via JS, an ad network, real-time analytics, a localized news website, and more. You could say I’ve been in the innovation game for a minute, and I’m in it for life. I’m based in the USA these days in case you’re curious – moved over here in 2003.
Examining Bubbles
My apologies for the long bio, but what I’d like to illustrate is that I’ve seen tech come and go. The hype cycles I’ve seen typically include:
Outlandish claims about how the tech will solve everything from slicing bread to world peace and everything in between.
Commercial vendors jumping on the emerging mega-trend to surf the wave with proprietary technologies of their own which they position as standards, or at the very least the default choice.
Nascent implementations of the tech that are immature, unstable, rapidly changing, and may very well be abandoned in a few months or a few years.
The press contracting a bad case of rabies and foaming at the mouth about the tech, amplifying the most extreme aspects and use cases and creating a lot of noise, which makes it hard for implementors to sift for the truth and the fundamentals around the technology.
The investment community pouring cash into the space with little focus. This creates an extremely adversarial environment for tech practitioners who are building fundamental value, who now have to compete with powerful VC-backed marketing machines.
As Warren Buffett says, the Innovators, the Imitators, and the Swarming Incompetents enter the space in that order. I’d add that they have a pyramid structure with each successive wave being at least an order of magnitude bigger than the last. Things get crowded for a while.
Then you have the typical bust cycle which cleans house and makes the tech uncool again, but also makes it interesting to the true believers. The VC’s go away and stop making noise that innovators have to compete against. The imitators and swarming incompetents drift off to imitate and mess up something else. The businesses not creating fundamental value fail. Some creating fundamental value fail too but the talent and tech are sometimes reincarnated into something else useful.
So who prospers, and what tech survives after a bust? Sometimes none of it, but helpful derivative technologies are created, like Java Applets in the 90s that inspired Flash which inspired standards-based rich content web browsers.
Sometimes out of the ashes, an Amazon is born, as with the dot-com boom. And sometimes you have incredible innovation where the innovators never see large-scale commercial success, but others do, as with Igor Sysoev, who created Nginx, which eliminated the need for a data center full of web servers to handle large-scale websites. Igor has a commercial thing, but the real winners were companies like Cloudflare who based their global infrastructure on Nginx, reverse proxying massive numbers of connections to origin servers with rules about what gets proxied. Hey, I benefited too. Nginx saved our behinds when Kerry and I were running Feedjit.com from 2007 to 2011 because it let us handle over 1 billion application requests per month on just 6 servers. Thanks Igor!
Blockchain technology is in a bust cycle and you can map the characteristics I defined above to blockchain. It looks eerily similar to the dot-com bust and you’ll see an Amazon emerge from the ashes about a decade from now. It might be Coinbase if they survive the over 80% dive in market cap that may continue, but who knows?
Derivative Versus Fundamental Technology Innovation
I’ve mentioned a few tech bubbles so far and it’s really the first step in pulling us out of the trees so that we can examine the various forests out there. Now let’s go a little more meta and talk about which forests matter. Some technology is derivative. Examples of derivative tech include new stuff that runs inside a browser, for example, WebSockets.
WebSockets are awesome because they let a browser keep a connection open and get push notifications without doing the old TCP three-way handshake to establish a new connection every time the browser wants to check if there’s data waiting on a chat server or whatever. We used to call this long-polling, and I wrote a web server to do long polling, which was a clumsy but necessary approach. So when WebSockets came along we all breathed a sigh of relief, and I happily retired my web server, glad that no one would see my nasty source code, which worked quite well, mind you.
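If you’ve never played with them, here’s roughly what that looks like today in Python with the third-party websockets package (the URL is a placeholder, not a real endpoint):

import asyncio
import websockets

async def main():
    # One handshake up front, then the server can push whenever it likes.
    async with websockets.connect("wss://example.com/chat") as ws:
        async for message in ws:   # no re-polling, no new connections
            print(message)

asyncio.run(main())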
Another derivative technology – and you’re not going to like this – is blockchain. It’s a useful and novel implementation of hashing algorithms and a few other cryptographic tricks, but honestly, we should have disintermediated banking at least two decades ago and the fact that blockchain has still failed to do that is both disappointing and illustrative that it is just a set of derivative applications built on a tower of fundamentals that has a way to go before it matures. The hype cycle and speculative bubble around it was simply humans making human noise.
So that’s derivative tech. In addition to derivative tech, you have what I’d like to call fundamental tech. Electricity is fundamental. It’s cornerstone technology that transformed the world in our ability to use and transport energy which has enabled an industrial and technological revolution the likes of which the world has never seen. The microprocessor is also fundamental tech for similar reasons. You have algorithms that are fundamental tech like the RSA algorithm which allows us to establish a secure communication channel while a bad person is listening in the whole time – the kind of tech that could have changed the outcome of World War II.
The Internet is fundamental tech. It connected the world and gave us the ability to build applications on top like HTTP and the web which are derivative tech.
Oh I know you want to have a bar fight with me at this point and we’ll do that if you’re attending Wordcamp EU – a collegial and metaphorical barfight, that is – but hopefully, you’re picking up what I’m putting down here in a general sense: There is fundamental tech that profoundly enables and changes the world and which many other things are built on top of, and there is derivative tech that gets a lot of attention but isn’t quite as transformative in a historical sense if you’re thinking in terms of centuries. And there’s the big fat grey area in between.
AI is fundamental tech. For decades we have been programming by writing functions by hand. We’ve gotten quite good at structuring our code using metaphors like object-oriented programming to create logical structures that make sense in a human world, and help us organize large code bases. But fundamentally the way we define logic in a program hasn’t changed for a long time. Until now. For the first time in all of history, we can create functions in programming by training them, rather than writing them by hand. In other, slightly more technical words, we can infer a function from observations and then use it. Like babies and toddlers do. This is historic, it’s transformative and it is a fundamental breakthrough.
Funny thing is that until quite recently – around 2015 – AI had suffered many so-called “AI Winters”: periods of significant interest in the field that catalyzed investment dollars, each followed by a setback, usually caused by a reality check, that brought a winter in funding and interest. Does anyone remember the “expert systems” of the 80s? By the early 2000s, AI’s name had been dragged through the mud so many times that anyone doing serious research in the field used different words to describe their work, like “machine learning” or “informatics” or “knowledge systems”.
A few hardcore true believers like Yann LeCun, Yoshua Bengio and Geoffrey Hinton powered through like Frodo and Sam across the Dead Marshes and went on to win the Turing Award, which is basically the Nobel Prize of computer science, and whose ceremony I had the privilege of attending when Rivest, Adleman and Shamir won theirs for public key crypto. The Turing Award is a very big deal, and well deserved considering how adversarial the AI environment was for a while.
So what changed? Well for one thing you’re reading this post because it’s about AI and you’re interested. And you’re interested because you recently used GPT-4, MidJourney, Dall-E or another model to create something. You’re seeing tangible results. And the reason you’re seeing results is that GPU hardware, algorithms, and an interest in the field have brought us to an inflection point where the technology is delivering results that are jaw-dropping enough to catalyze more funding, more research, and more jaw-dropping results. This cycle really picked up steam in 2015, and with the release of GPT-4 recently, has entered a phase of what I would describe as true and consistent exponential growth.
According to NVIDIA, “LLM sizes have been increasing 10X every year for the last few years”. In two years that’s 100X. Three years from now that’s 1000X and so on. Extrapolate that out and be afraid. Or optimistic if your mind isn’t for rent and you are hopeful yet discontent. Rush lyrics aside, that pace should give you an idea of how quickly this thing is coming. And now that we’ve reached the point of inflection I mentioned, where the hardware and algorithms seem to have overcome the cycle of disappointment that AI has been stuck in for decades, I predict that you’ll see consistent and exponential growth in capabilities for the foreseeable future, with a financial bubble and bust in there that won’t be of much consequence to the fundamental value of the technology.
“Thanks for the history lesson Maunder, but you brought us here with promises of telling us what to do about AI. So?”
What to do about AI
So far we’ve discussed what boom cycles look like and the kind of noise and bear traps you should be aware of. We’ve defined what AI is in fundamental terms – a function that you can train rather than hand code. And we’ve hopefully agreed that we’ve entered a period of consistent and exponential growth in the field. Now we’ll chat about how to survive and prosper in a world that looks a lot like when electricity was invented and commercialized, or the microprocessor, or the Internet.
Goldman estimates that AI will add 7% to global GDP at a rate of about 1.5% growth per year. They also estimate that roughly two-thirds of US occupations are exposed to some degree of automation by AI. You can extrapolate this globally. That kind of global disruption is matched only by the industrial revolution or the entire recent tech revolution as a whole starting from 1980. From the same publication, “A recent study by economist David Autor cited in the report found that 60% of today’s workers are employed in occupations that didn’t exist in 1940.” So on an optimistic note, this kind of disruption isn’t a new thing and we’ve been disrupting and adapting for some time now.
Perhaps you’re reading this because you are running a WordPress website, perhaps secured by my product, Wordfence. Which means you’re a creator of some kind. Perhaps you’re a writer, an artist, or perhaps you’re an entrepreneur creating a business out of thin air. [Yes my fellow entreps, you get to hang with the other cool creator kids too!!]. If you don’t plan on adapting at all, that makes you far more vulnerable to this coming wave than say a chef who runs a restaurant, or someone who manages real estate and rentals. And that really is the key: adaptation. So how can we adapt?
If you’re a creator, you need to become a user of AI. You’re probably already using GPT to write copy for your product catalogs on your e-commerce website, or using MidJourney (MJ) or Dall-E to create art for ad campaigns. If you’re a designer or artist, you may feel the kind of resentment this Blender artist does in the Blender subreddit.
“My Job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Rn all I do is prompting, photoshopping and implementing good looking pictures. The reason I went to be a 3D artist in the first place is gone. I wanted to create form In 3D space, sculpt, create. With my own creativity. With my own hands.”
“It came over night for me. I had no choice. And my boss also had no choice. I am now able to create, rig and animate a character thats spit out from MJ in 2-3 days. Before, it took us several weeks in 3D. The difference is: I care, he does not. For my boss its just a huge time/money saver.”
While I sympathize with how hard change and disruption can be, it’s been a constant for the past couple of centuries in many fields. MidJourney has a long way to go before it can match a real-world artist, unless you’re just churning out images and letting the AI guide the design choices and are happy to work around the bugs. For MidJourney and other generative AIs to produce exactly what we want, they’re going to have to get better at understanding what exactly we want to create. And that’s where the skill comes in. You’re already seeing this with a document that someone has created listing famous photographers and examples of their look. This can be used in MJ prompts to say “in the style of” to get a specific look, but it is an incredibly rudimentary approach.
Another way to guide the MJ AI in particular is to blend photos it has generated. Again, super rudimentary, but it’s the start of having the ability to tightly specify exactly what you want and get that out of MJ. And if you need a reminder of how basic it still is, try to get MJ to generate hands. It still sucks, even at version 5.
So if you’re a creator, start getting good at using the tools now, understand their limitations, and evolve as the products evolve until you’re an expert at guiding the AI to create exactly what you want. This will help you guide your customers in explaining the limits of the current state of AI to them and where you add value, let you take immediate advantage of the use that the current tools have, and ramp up your productivity as the tools get better at taking instructions from you.
This applies to writers, artists, designers, filmmakers, photographers, screenwriters, and anyone with creative output. Get good at the tools. Get good at them now. Do it with an open mind. Know that changes aren’t permanent and that change is. (Again with sneaking in the Rush lyrics)
Adapting as a Dev
Coders! My people! We have a problem. Most of you have become users of AI. You’re users of GPT-4 via their API. You’re plugging into other generative AIs via an API. You aren’t rolling your own. And rolling your own is where all the fun is!!
Ever heard of transfer learning? You can grab a pre-trained model from Hugging Face, chop off the head – aka the final layer of the neural net – substitute it with random weights, and train the pre-trained model with your own data to take advantage of the sometimes millions of dollars that someone else already spent training their model. In fact, Facebook’s LLaMA model, one of the largest LLMs in the world, was leaked via torrent recently.
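If you want to see how little code that recipe takes, here’s a hedged sketch using the Hugging Face transformers library. The model name and class count are mine, not something from this post:

from transformers import AutoModelForSequenceClassification

# Load a pretrained body and bolt on a fresh, randomly initialized
# classification head sized for your own task -- the "chop off the head" step.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5)

# Freeze the pretrained body so only the new head trains at first.
for param in model.base_model.parameters():
    param.requires_grad = False

# ...then fine-tune on your own labeled data as usual.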
The most important thing you need to do right now as a developer is to stop being a user of AI and become a dev of AI. GPT-4 is a shiny ball that the world will have forgotten about in a year, but it’s a very shiny and attractive ball right now that is fueling many a late-night dev chat. Remember that stat I gave you above? That LLMs have been increasing in size at 10X per year. The current state of the art will be accessible to you on a desktop in a few years and you need to get ready for that world today.
I’m going to just go ahead and tell you what you need to do to get your AI stuff together, fast.
Ignore the math. Trust me on this. Most people including devs are not good at math and it intimidates the hell out of them. AI is just matrix multiplication and addition using GPU cores to parallelize the ops. Expressing this as code is easy. Expressing it as math will make you hide under your bed and cry. Ignore the math. If you can code, you’ll get it.
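To make the point concrete, here’s the whole idea in a few lines of NumPy – a toy layer with made-up shapes, mine and not from the post:

import numpy as np

x = np.random.rand(64)        # 64 input activations
W = np.random.rand(16, 64)    # 16 neurons, each with 64 weights
b = np.random.rand(16)        # one bias per neuron

h = np.maximum(0, W @ x + b)  # multiply, add, ReLU -- that's a neural-net layer
print(h.shape)                # (16,)

Training is just nudging W and b until the outputs look right, and the GPU’s job is running millions of these multiply-adds in parallel.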
Learn Python. Everything in AI is Python. It’s a beautiful little language that you’ll come to appreciate very quickly if you’re already a dev. It’s like coming over to Aikido if you’re already a black belt. OK the MMA scene kinda messed up my metaphor proving that Aikido is actually worthless, but whatever.
Then go do the Practical Deep Learning for Coders course at fast.ai. It’s how we get our guys up to speed fast in the field and it’s brilliant. Jeremy Howard does a spectacular job of immediately getting you productive and then unpacking the details in a fun, non-mathy way.
As you progress in the course, definitely get up to speed using Jupyter Notebooks and I’d recommend Kaggle for this. They were bought by Google a few years ago and kind of compete with Google’s own notebook system called Colab, but I prefer Kaggle. You get GPU access by simply verifying your email address, and it’s free, which is kind of amazing. So you can use a rich text environment on Kaggle to write your code, see the output and run it on some fairly decent GPUs. Kaggle GPUs perform at about 20% of the speed of my laptop RTX 4090, in case you’re curious about benchmarks.
The course teaches fundamentals, how to use pre-trained models, how to create Jupyter Notebooks or fork others, how to create Hugging Face Spaces, and how to share your models and their output with the world. It is the fastest way right now to transform yourself from an AI user into an AI dev and get drinks bought for you at parties by folks that have not yet made the leap.
Alright, this went long but that was the plan. We’ll talk more about AI. Go forth, be brave, learn, and create!
Mark Maunder – Founder & CEO – Wordfence and Defiant Inc.
Footnotes: All images on the page were created with MidJourney and if you’d like to see the prompt I used, simply view the image in a new tab and the image name is the prompt, all except for the heavy metal hands image which a colleague created. I’ll be in the comments in case there’s discussion.
Note: Content from this post first appeared in r/CrowdStrike
We will continue to update on this dynamic situation as more details become available. CrowdStrike’s Intelligence team is in contact with 3CX.
On March 29, 2023, CrowdStrike observed unexpected malicious activity emanating from a legitimate, signed binary, 3CXDesktopApp — a softphone application from 3CX. The malicious activity includes beaconing to actor-controlled infrastructure, deployment of second-stage payloads, and, in a small number of cases, hands-on-keyboard activity.
The CrowdStrike Falcon® platform has behavioral preventions and atomic indicator detections targeting the abuse of 3CXDesktopApp. In addition, CrowdStrike® Falcon OverWatch™ helps customers stay vigilant against hands-on-keyboard activity.
The 3CXDesktopApp is available for Windows, macOS, Linux and mobile. At this time, activity has been observed on both Windows and macOS.
CrowdStrike Intelligence has assessed there is suspected nation-state involvement by the threat actor LABYRINTH CHOLLIMA. CrowdStrike Intelligence customers received an alert this morning on this active intrusion.
The CrowdStrike Falcon platform protects customers from this attack and has coverage utilizing behavior-based indicators of attack (IOAs) and indicators of compromise (IOCs) based detections targeting malicious behaviors associated with 3CX on both macOS and Windows.
Customers should ensure that prevention policies are properly configured with Suspicious Processes enabled.
Figure 1. CrowdStrike’s indicator of attack (IOA) identifies and blocks the malicious behavior in macOS
Figure 2. CrowdStrike’s indicator of attack (IOA) identifies and blocks the malicious behavior in Windows
Hunting in the CrowdStrike Falcon Platform
Falcon Discover
CrowdStrike Falcon® Discover customers can use the following link: US-1 | US-2 | EU | Gov to look for the presence of 3CXDesktopApp in their environment.
Falcon Insight customers can assess if the 3CXDesktopApp is running in their environment with the following query:
Event Search — Application Search
event_simpleName IN (PeVersionInfo, ProcessRollup2) FileName IN ("3CXDesktopApp.exe", "3CX Desktop App")
| stats dc(aid) as endpointCount by event_platform, FileName, SHA256HashData
Falcon Long Term Repository — Application Search
#event_simpleName=/^(PeVersionInfo|ProcessRollup2)$/ AND (event_platform=Win ImageFileName=/\\3CXDesktopApp\.exe$/i) OR (event_platform=Mac ImageFileName=/\/3CX\sDesktop\sApp/i)
| ImageFileName = /.+(\\|\/)(?<FileName>.+)$/i
| groupBy([event_platform, FileName, SHA256HashData], function=count(aid, distinct=true, as=endpointCount))
Atomic Indicators
The following domains have been observed beaconing, which should be considered an indication of malicious intent.
CrowdStrike Falcon® Insight customers, regardless of retention period, can search for the presence of these domains in their environment spanning back one year using Indicator Graph: US-1 | US-2 | EU | Gov.
Event Search — Domain Search
event_simpleName=DnsRequest DomainName IN (akamaicontainer.com, akamaitechcloudservices.com, azuredeploystore.com, azureonlinecloud.com, azureonlinestorage.com, dunamistrd.com, glcloudservice.com, journalide.org, msedgepackageinfo.com, msstorageazure.com, msstorageboxes.com, officeaddons.com, officestoragebox.com, pbxcloudeservices.com, pbxphonenetwork.com, pbxsources.com, qwepoi123098.com, sbmsa.wiki, sourceslabs.com, visualstudiofactory.com, zacharryblogs.com)
| stats dc(aid) as endpointCount, earliest(ContextTimeStamp_decimal) as firstSeen, latest(ContextTimeStamp_decimal) as lastSeen by DomainName
| convert ctime(firstSeen) ctime(lastSeen)
Here are three of the worst breaches of 2022, the attacker tactics and techniques behind them, and the security controls that can provide effective enterprise protection against them.
#1: The Ransomware as a Service Attack
Ransomware as a service is a type of attack in which the ransomware software and infrastructure are leased out to the attackers. These ransomware services can be purchased on the dark web from other threat actors and ransomware gangs. Common purchasing plans include buying the entire tool, using the existing infrastructure while paying per infection, or letting other attackers perform the service while sharing revenue with them.
In this attack, the threat actor consists of one of the most prevalent ransomware groups, specializing in access via third parties, while the targeted company is a medium-sized retailer with dozens of sites in the United States.
The threat actors used ransomware as a service to breach the victim’s network. They were able to exploit third-party credentials to gain initial access, progress laterally, and ransom the company, all within mere minutes.
The swiftness of this attack was unusual. In most RaaS cases, attackers stay in the network for weeks or months before demanding ransom. What is particularly interesting about this attack is that the company was ransomed in minutes, with no need for discovery or weeks of lateral movement.
A log investigation revealed that the attackers targeted servers that did not exist in this network. As it turns out, the victim had initially been breached and ransomed 13 months before this second ransomware attack. The first attacker group then monetized the first attack not only through the ransom they obtained, but also by selling the company’s network information to the second ransomware group.
In the 13 months between the two attacks, the victim changed its network and removed servers, but the new attackers were not aware of these architectural modifications. The scripts they developed were designed for the previous network map. This also explains how they were able to attack so quickly – they had plenty of information about the network. The main lesson here is that ransomware attacks can be repeated by different groups, especially if the victim pays well.
“RaaS attacks such as this one are a good example of how full visibility allows for early alerting. A global, converged, cloud-native SASE platform that supports all edges, like Cato Networks, provides complete network visibility into network events that are invisible to other providers or may go under the radar as benign events. And, being able to fully contextualize the events allows for early detection and remediation.”
#2: The Critical Infrastructure Attack on Radiation Alert Networks
Attacks on critical infrastructure are becoming more common and more dangerous. Breaches of water supply plants, sewage systems and other such infrastructure could put millions of residents at risk of a human crisis. These infrastructures are also becoming more vulnerable, and OSINT attack surface management tools like Shodan and Censys allow security teams to find such vulnerabilities with ease.
In 2021, two hackers were suspected of targeting radiation alert networks. Their attack relied on two insiders who worked for a third party. These insiders disabled the radiation alert systems, significantly debilitating the ability to monitor radiation. The attackers were then able to delete critical software and disable radiation gauges (which are part of the infrastructure itself).
“Unfortunately, scanning for vulnerable systems in critical infrastructure is easier than ever. While many such organizations have multiple layers of security, they are still using point solutions to try and defend their infrastructure rather than one system that can look holistically at the full attack lifecycle. Breaches are never just a phishing problem, or a credentials problem, or a vulnerable system problem – they are always a combination of multiple compromises performed by the threat actor,” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.
#3: The Three-Step Ransomware Attack That Started with Phishing
The third attack is also a ransomware attack. This time, it consisted of three steps:
1. Infiltration – The attacker was able to gain access to the network through a phishing attack. The victim clicked on a link that generated a connection to an external site, which resulted in the download of the payload.
2. Network activity – In the second phase, the attacker progressed laterally in the network for two weeks, collecting admin passwords and using in-memory fileless malware along the way. Then, on New Year’s Eve, they performed the encryption. This date was chosen since it was (rightfully) assumed the security team would be off on vacation.
3. Exfiltration – Finally, the attackers uploaded the data out of the network.
In addition to these three main steps, additional sub-techniques were employed during the attack and the victim’s point security solutions were not able to block this attack.
“A multiple choke point approach, one that looks horizontally (so to speak) at the attack rather than as a set of vertical, disjointed issues, is the way to enhance detection, mitigation and prevention of such threats. Opposed to popular belief, the attacker needs to be right many times and the defenders only need to be right just once. The underlying technologies to implement a multiple choke point approach are full network visibility via a cloud-native backbone, and a single pass security stack that’s based on ZTNA.” said Etay Maor, Sr. Director of Security Strategy at Cato Networks.
It is common for security professionals to succumb to the “single point of failure fallacy”. However, cyber-attacks are sophisticated events that rarely involve just one tactic or technique as the cause of the breach. Therefore, an all-encompassing outlook is required to successfully mitigate them. Security point solutions address single points of failure: these tools can identify risks, but they will not connect the dots, which can and has led to breaches.
Based on ongoing security research, the Cato Networks Security Team has identified two additional vulnerability and exploit trends that it recommends including in your upcoming security plans:
While Log4j made its debut as early as December of 2021, the noise it’s making hasn’t died down. Log4j is still being used by attackers to exploit systems, as not all organizations have been able to patch their Log4j vulnerabilities or detect Log4j attacks through what is known as “virtual patching”. The team recommends prioritizing Log4j mitigation.
Security solutions like firewalls and VPNs have become access points for attackers. Patching them has become increasingly difficult, especially in the era of architecture cloudification and remote work. It is recommended to pay close attention to these components as they are increasingly vulnerable.
How to Minimize Your Attack Surface and Gain Visibility into the Network
To reduce the attack surface, security professionals need visibility into their networks. Visibility relies on three pillars:
Actionable information – that can be used to mitigate attacks
Reliable information – that minimizes the number of false positives
Timely information – to ensure mitigation happens before the attack has an impact
Once an organization has complete visibility into the activity on its network, it can contextualize the data, decide whether the observed activity should be allowed, denied, monitored, or restricted (or any other action), and then enforce this decision. All these elements must be applied to every entity, be it a user, device, cloud app, etc., all the time, everywhere. That is what SASE is all about.
Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.
There are two main reasons why regular pen testing is necessary for secure web application development:
Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.
Many organizations, big and small, have a once-a-year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to test more often?
The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.
Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.
If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.
To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.
Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.
For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.
To avoid this scenario, it’s important to ensure that your web applications are always available and secure.
Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.
If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.
Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.
Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.
To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.
Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.
If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.
You Don’t Have Internal Security Operations or a Pen Testing Team
This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.
Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.
Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.
Depending on the size and complexity of your organization, you may need to pen test every month or even every week.
During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.
M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.
In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”
To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.
While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.
There are two main types of pen testing: time-boxed and continuous.
Traditional pen testing is done on a set schedule, such as once a year. This type of pen testing is no longer enough in today’s world, as businesses rely more on their web applications.
Continuous pen testing is the process of continuously scanning your systems for vulnerabilities. This allows you to identify and fix vulnerabilities before they can be exploited by attackers. Continuous pen testing allows you to find and fix security issues as they happen instead of waiting for a periodic assessment.
Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.
Pen-testing-as-a-service models are where continuous pen testing shines. Outpost24’s PTaaS (Penetration-Testing-as-a-Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is always up to date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.
Manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
Comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more, so you can be confident that your web applications are secure against the latest threats.
Cost-effective: With Outpost24, you only pay for the services you need, which makes continuous pen testing affordable even for small businesses.
Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.
A once-a-year pen testing cycle may be enough for some organizations, but for most it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.
Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.