Best Practices for setting up Altaro VM Backup

This best practice guide goes through the Altaro VM Backup features explaining their use and the optimal way to configure them in order to make the best use out of the software.

You will need to adapt this to your specific environment, especially depending on the resources you have available; however, this guide takes you through the most important configurations, including those that are often overlooked.

Setting up the Altaro VM Backup Management Console

The Altaro VM Backup Management Console can be utilised to add and manage multiple hosts in one console. However, these hosts must be on the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of Altaro VM Backup at each site.

To manage these multiple installations, you can utilise the ‘Central Monitoring Console’ where you’ll be able to monitor as well as manage these Altaro VM Backup installations remotely.

A single Altaro VM Backup instance can manage both Hyper-V & VMware hosts.

For optimal results, Altaro runs some maintenance-specific tasks as (multiple) single-threaded operations. For this reason, installing on a machine whose CPU has higher single-thread performance will yield better results than installing on a machine whose CPU has more cores but lower single-thread performance.

Thus, for the fastest results, install Altaro VM Backup on a machine with a high single-thread CPU speed.

Backup Locations

Make sure Opportunistic Locks (Oplocks) are disabled if the backup location is a NAS.

If your backup location is a Windows machine, the equivalent is to disable SMB leasing: Set-SmbServerConfiguration -EnableLeasing 0

Run the above command via PowerShell.
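For reference, the following is a minimal PowerShell sketch for checking and changing this setting on a Windows backup target (run in an elevated session; existing SMB sessions may need to reconnect before the change takes effect):

    # Check the current SMB leasing setting on the backup target
    Get-SmbServerConfiguration | Select-Object EnableLeasing

    # Disable SMB leasing (the Windows equivalent of disabling Oplocks on a NAS)
    Set-SmbServerConfiguration -EnableLeasing $false -Force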

Offsite Copies

With Altaro VM Backup, you are provided with the functionality of an Offsite Copy Location, which is a redundant/secondary copy of your backups. You can even back up your VMs to two different offsite copy locations for further redundancy of your data, so you can pick a cloud location as well as an Altaro Offsite Server, for instance.

There are multiple options for setting this up:

  • A physical drive connected to the management console (the best practice for offsite copies is to have them located in another building/location).
  • Drive Rotation/Swap, which allows you to set up a pool of drives/network paths.
  • A network path (LAN only), or an offsite location via a WAN/VPN/Internet connection, which is an ideal tool for Disaster Recovery purposes. Please note that the latter situation (non-LAN) requires use of the Altaro Offsite Server.
  • Backup to Microsoft Azure, Amazon S3 or Wasabi.

Setting up an offsite copy location is as crucial as setting up backups to a primary location. The obvious reason is that you'll have a redundant set of backups to restore from, should the local backups become unusable due to disk corruption or other disk failures. Having a secondary copy of your backup sets also allows you to keep a broader history for your VM backups on your secondary location, so you'll be able to go further back when restoring if required.

Deduplication

Altaro VM Backup makes use of Augmented In-line Deduplication. Enabling this is highly recommended and is done from the ‘Advanced Settings’ screen as this will essentially ensure that any common data blocks across virtual machines are only written to the backup location once. This helps by saving a considerable amount of space and also makes backups much quicker since common information is only transferred once.

Boot From Backup

The Boot From Backup feature comes with two options: ‘Verification Mode’ and ‘Recovery Mode’. It is a very good option for reducing your RTO, since you’re able to boot up the VM immediately from the backup location and start a restore in the background as well.

However, if you are planning to do this, it’s very important that you have a fast backup location that can handle the I/O of a booted VM that is essentially going into production. Please note that once the VM has finished restoring, it’s suggested to restart the restored VM as soon as you get a chance in order to switch to the restored drives, which will have faster I/O throughput.

Notifications

E-mail notifications are a simple and effective method of monitoring backup status, yet they are often overlooked. Setting up these notifications will provide you with a quick overview of the status of your backup jobs, so you won’t need to log in to the Altaro Management console every day to confirm the backup status.

This way you’ll be alerted of any backup failures, allowing you to address the issues before the next backup schedule and ensuring that you always have a restorable backup point. As a general best practice, always monitor your backup notifications.

Master Encryption Key

The Master Encryption Key in Altaro is utilised to encrypt the backups using AES 256-bit. It’s used if you choose to encrypt the local backups from the ‘Advanced Settings’ screen, and it is required when configuring offsite copies, as offsite copies must be encrypted.

Altaro VM Backup will require the encryption key upon restoring, so it’s critical that you either remember it or take note of it in a secure password manager as there is no method of recovery for the master encryption key.

Scheduled Test Drills

Altaro VM Backup has the ability to run manual or automated verification of your backup data. This allows you to run scheduled verification jobs that will check the integrity of your backups on your backup location, or schedule full VM restores so that you can actually boot up the VM and confirm that everything works as expected. The VM will be restored with the NIC disabled so as to avoid IP conflicts with the production machine as well.

Failure of storage devices is not uncommon, therefore scheduling test drills is strongly advised for added peace of mind. Full instructions on configuring test drills are available in the Altaro documentation.

Other General Best Practices

  • Backups and production VMs should not be placed on the same drive.
  • Make sure Opportunistic Locks (Oplocks) are disabled if the backup location is a NAS.
  • Backups should not be placed on a drive where an OS is running.
  • Altaro uses the drive it’s installed on as temporary storage and will require a small amount of free space (varying according to the size of the VMs being backed up).
  • Keep at least 10% of the backup location free.
  • The main Altaro VM Backup installation should not be installed on a machine that is also a domain controller (DC).
  • Directories/files inside the Altaro backup folder should not be tampered with, deleted or moved.
  • Do not take snapshots of VMs hosting DFSR databases: “Snapshots aren’t supported by the DFSR database or any other Windows multi-master databases. This lack of snapshot support includes all virtualization vendors and products. DFSR doesn’t implement USN rollback quarantine protection like Active Directory Domain Services.” (Source: Microsoft documentation.)

Best Practices for Replication

Exclude Page File from Backup

As you’re aware, Altaro VM Backup takes note of all changes since the last backup and transfers the blocks that changed to the backup location. The page file changes very frequently, potentially causing your replication jobs to take longer.

Therefore, excluding the page file from backup means fewer transferred changes, and as a result the replication jobs take less time. This can be done by placing the page file onto a separate VHDX/VMDK file from the VM itself; you can then follow the steps here in order to exclude that VHDX/VMDK file. A sketch of the first step is shown below.
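To illustrate the first step on a Hyper-V host, the following is a minimal PowerShell sketch (the VM name and paths are hypothetical examples) that creates a dedicated VHDX and attaches it to the VM so the guest's page file can be moved onto it:

    # Create a dedicated dynamic VHDX to hold the guest page file (path and size are examples)
    New-VHD -Path 'D:\VMs\WebSrv01\WebSrv01-PageFile.vhdx' -SizeBytes 32GB -Dynamic

    # Attach the new disk to the VM; inside the guest OS, move the page file onto this disk
    Add-VMHardDiskDrive -VMName 'WebSrv01' -Path 'D:\VMs\WebSrv01\WebSrv01-PageFile.vhdx'

Once the page file has been moved inside the guest, the new VHDX can be excluded from the backup configuration as described above.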

High Disk IO and Hypervisor Performance

Replication relies on CDP (Continuous Data Protection) in order to take a backup every couple of minutes or hours, which is what makes Replication possible.

It’s important to note, however, that you should only enable high-frequency CDP (15 minutes or less) on the VMs that really need it. This ensures that those VMs can achieve the selected maximum frequency, and avoids impacting your hypervisor’s performance.

Source :
https://help.altaro.com/hc/en-us/articles/4416921650577-Best-Practices-for-setting-up-Altaro-VM-Backup

How to relocate the Altaro temporary files

Altaro VM Backup will create temporary files during backup operations and by default the location for these files is on the C: drive.

If you’d like to move the location of this temporary directory, follow the steps below for the version you’re running:

7.6 and newer

To do so, you can follow these steps:

  1. Ensure you are running at least 7.6.14. If not, update to the latest version from here
  2. Firstly, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroBackupProfile
    If you’d like to move the temp files on the Altaro Offsite Server, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroOffsiteServerProfile
  3. Then create a text file named “OverrideTempFolder.txt” inside the newly created folder
  4. In the text file, enter the path where you wish to store Altaro’s temp files, for example: { "TempFolderPathOverride": "E:\\Temp\\Altaro" }. Ensure this location exists and that you use a double backslash as the separator (as above)
  5. Restart all Altaro services and on the next operation, the Altaro temporary files will be stored in the above directory (a scripted example of these steps is shown below)
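For version 7.6 and newer, the steps above could be scripted roughly as follows. This is a sketch only: E:\Temp\Altaro is an example path, and the wildcard assumes the Altaro service display names start with "Altaro":

    # Create the Overrides folder (use AltaroOffsiteServerProfile instead for the Offsite Server)
    New-Item -ItemType Directory -Path 'C:\ProgramData\Altaro\AltaroBackupProfile\Overrides' -Force

    # Write the override file with the new temp path (double backslashes, no trailing spaces or line breaks)
    Set-Content -Path 'C:\ProgramData\Altaro\AltaroBackupProfile\Overrides\OverrideTempFolder.txt' -Value '{ "TempFolderPathOverride": "E:\\Temp\\Altaro" }' -NoNewline

    # Make sure the target folder exists, then restart the Altaro services (display names assumed to start with 'Altaro')
    New-Item -ItemType Directory -Path 'E:\Temp\Altaro' -Force
    Get-Service -DisplayName 'Altaro*' | Restart-Service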

7.5 and older

To do so, you can follow these steps:

  1. Ensure you are running at least version 5.0.97
  2. Firstly, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroBackupProfile
    If you’d like to move the temp files on the Altaro Offsite Server, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroOffsiteServerProfile
  3. Then create a text file named “OverrideTempFolder.txt” inside the newly created folder
  4. In the text file, enter the path where you wish to store Altaro’s temp files, for example: E:\AltaroTemp (without quotes); ensure this location exists
  5. Restart all Altaro services and on the next operation, the Altaro temporary files will be stored in the above directory

Tips

  • ProgramData is a hidden folder by default
  • Ensure that the file extension is showing, or you might end up with a file named “OverrideTempFolder.txt.txt”
  • Ensure there are no spaces at the end of the path and no extra line breaks in the text file

    Source :
    https://help.altaro.com/hc/en-us/articles/4416899962001

Which Altaro directories do I need to exclude from AntiVirus software?

If you are running antivirus or other file-scanning software, we recommend excluding a few directories used by Altaro in order to ensure that its operation remains undisrupted.

We do recommend excluding the following:

  • all onsite backup drive directories
  • all offsite backup drive directories
  • C:\ProgramData\Altaro on the Altaro Management and on the Hyper-V hosts
  • C:\Program Files\Altaro on the Altaro Management and on the Hyper-V hosts

Also, if you relocated the Altaro temporary files, be sure to exclude that directory as well.
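If the machines are running Microsoft Defender, these exclusions could be added with a sketch along the following lines ('D:\AltaroBackups' stands in for your actual onsite/offsite backup paths):

    # Exclude the Altaro program and data directories from Microsoft Defender scanning
    Add-MpPreference -ExclusionPath 'C:\ProgramData\Altaro'
    Add-MpPreference -ExclusionPath 'C:\Program Files\Altaro'

    # Exclude the backup locations ('D:\AltaroBackups' is an example path)
    Add-MpPreference -ExclusionPath 'D:\AltaroBackups'

For other antivirus products, add the same paths to that product's exclusion list.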

Source :
https://help.altaro.com/hc/en-us/articles/4416905883409-Which

Altaro: Dealing with “Windows Error 64” and “Windows Error 59”

PROBLEM

The backup fails with one of the following errors:

  • “Windows Error 64: The specified network name is no longer available.”
  • “Windows Error 59: An unexpected network error occurred.”

CAUSE

There are a number of reasons that can cause network issues resulting in a failed backup with Windows Error 64 or 59. Mainly it comes down to potential hardware failures/issues or the configuration of network devices.

Aside from that, firewalls, other traffic on the line, or other software could be placing load on the network or on the storage device itself, pushing it past timeouts or maximum retransmission limits.

Sending backups over an unreliable connection such as a VPN/WAN connection can also result in such a failure, unless you are using the Altaro Offsite Server tool for offsite copies.

Timeouts from specific NAS boxes when using domain credentials can also cause such disconnections.

SOLUTION

There are several distinct solutions for backups failing with this error, since it can occur for a number of reasons.

  • If you’re using a NAS as a backup location, it’s recommended to use the credentials of the NAS box itself, even if it’s joined to Active Directory. Certain NAS models apply a timeout period to connections made via domain credentials, which could be the cause of the backup failure. This also doubles as a security measure to protect against crypto-malware.
  • Another point to keep in mind if you’re using a NAS box is to check whether the particular model you’re using has a sleep/standby option that could be causing such backup failures.
  • If you have other storage media available, try taking backups to that location, as the previous location may be experiencing hardware or software issues that only present themselves during backup times. This serves both as confirmation of whether the issue lies with the previously configured location and as a temporary workaround.
  • If the backup location you have configured is reached over an unreliable network, such as a VPN/WAN connection, please note that this is not supported. Such connections are only supported with the Altaro Offsite Server, which is applicable for offsite copies and not primary backups.
  • If you’re using a backup device, such as a NAS, which supports connections via iSCSI, it’s recommended to set up the backup location this way; devices connected via iSCSI usually offer better and more stable performance.
  • If the backup device is connected to a different switch than the backup server, connect it to the same switch and re-test.
  • It’s recommended to change the network cables connecting the backup device and the backup server; changing the ports on the switch is also suggested.
  • Make sure Opportunistic Locks (Oplocks) are disabled if the backup location is a NAS.
  • If your backup location is a Windows machine, the equivalent is to disable SMB leasing: Set-SmbServerConfiguration -EnableLeasing 0

    Run the above command via PowerShell.
  • It’s also a good idea to reboot the backup device as well as the backup server to clear any open connections and refresh the devices.

  • If the issue persists, the following registry settings on the backup server can be adjusted to make the connection more tolerant of delays (a scripted example follows after this list):

    SessTimeout
    Key: HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\
    DWORD: SessTimeout
    The value entered here should be in seconds. You can try entering a value of 300 seconds (5 minutes) or 600 seconds (10 minutes). The default is 1 minute (60 seconds).
    This will increase the time the backup server waits for a response before the connection is aborted.

    TcpMaxDataRetransmissions
    Key: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    DWORD: TcpMaxDataRetransmissions
    The value entered here reflects the number of retries; the default is 5.
    This will increase the number of times the TCP retransmission mechanism attempts to transmit data before the connection is aborted.
  • If the above does not help and you’re still experiencing issues, it’s recommended to temporarily disable any firewalls and antivirus products on the backup server, the hosts and the backup device. This applies for both software and hardware firewalls.
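Where the registry changes above are needed, they could be applied with a PowerShell sketch along these lines (the values are examples only; a reboot of the backup server is typically required for them to take effect):

    # Increase the SMB session timeout to 300 seconds (5 minutes)
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters' -Name 'SessTimeout' -Value 300 -PropertyType DWord -Force

    # Raise the TCP retransmission attempts from the default of 5 (10 is an example value)
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' -Name 'TcpMaxDataRetransmissions' -Value 10 -PropertyType DWord -Force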

Source :
https://help.altaro.com/hc/en-us/articles/4416921704081-Dealing-w

Microsoft finds Raspberry Robin worm in hundreds of Windows networks

Microsoft says that a recently spotted Windows worm has been found on the networks of hundreds of organizations from various industry sectors.

The malware, dubbed Raspberry Robin, spreads via infected USB devices, and it was first spotted in September 2021 by Red Canary intelligence analysts.

Cybersecurity firm Sekoia also observed it using QNAP NAS devices as command-and-control (C2) servers in early November [PDF], while Microsoft said it found malicious artifacts linked to this worm created in 2019.

Redmond’s findings align with those of Red Canary’s Detection Engineering team, which also detected this worm on the networks of multiple customers, some of them in the technology and manufacturing sectors.

Although Microsoft observed the malware connecting to addresses on the Tor network, the threat actors are yet to exploit the access they gained to their victims’ networks.

This is in spite of the fact that they could easily escalate their attacks given that the malware can bypass User Account Control (UAC) on infected systems using legitimate Windows tools.

Microsoft shared this info in a private threat intelligence advisory shared with Microsoft Defender for Endpoint subscribers and seen by BleepingComputer.

Raspberry Robin worm infection flow (Red Canary)

Abuses Windows legitimate tools to infect new devices

As already mentioned, Raspberry Robin is spreading to new Windows systems via infected USB drives containing a malicious .LNK file.

Once the USB device is attached and the user clicks the link, the worm spawns a msiexec process using cmd.exe to launch a malicious file stored on the infected drive.

It infects new Windows devices, communicates with its command and control servers (C2), and executes malicious payloads using several legitimate Windows utilities:

  • fodhelper (a trusted binary for managing features in Windows settings),
  • msiexec (command line Windows Installer component),
  • and odbcconf (a tool for configuring ODBC drivers).

“While msiexec.exe downloads and executes legitimate installer packages, adversaries also leverage it to deliver malware,” Red Canary researchers explained.

“Raspberry Robin uses msiexec.exe to attempt external network communication to a malicious domain for C2 purposes.”

Security researchers who spotted Raspberry Robin in the wild are yet to attribute the malware to a threat group and are still working on finding its operators’ end goal.

However, Microsoft has tagged this campaign as high-risk, given that the attackers could download and deploy additional malware within the victims’ networks and escalate their privileges at any time.

Source :
https://www.bleepingcomputer.com/news/security/microsoft-finds-raspberry-robin-worm-in-hundreds-of-windows-networks/

What is Shadow IT and why is it so risky?

Shadow IT refers to the practice of users deploying unauthorized technology resources in order to circumvent their IT department. Users may resort to using shadow IT practices when they feel that existing IT policies are too restrictive or get in the way of them being able to do their jobs effectively.

An old school phenomenon

Shadow IT is not new. There have been countless examples of widespread shadow IT use over the years. In the early 2000s, for example, many organizations were reluctant to adopt Wi-Fi for fear that it could undermine their security efforts. However, users wanted the convenience of wireless device usage and often deployed wireless access points without the IT department’s knowledge or consent.

The same thing happened when the iPad first became popular. IT departments largely prohibited iPads from being used with business data because of the inability to apply group policy settings and other security controls to the devices. Even so, users often ignored IT and used iPads anyway.

Of course, IT pros eventually figured out how to secure iPads and Wi-Fi and eventually embraced the technology. However, shadow IT use does not always come with a happy ending. Users who engage in shadow IT use can unknowingly do irreparable harm to an organization.

Even so, the problem of shadow IT use continues to this day. If anything, shadow IT use has increased over the last several years. In 2021 for example, Gartner found that between 30% and 40% of all IT spending (in a large enterprise) goes toward funding shadow IT.

Shadow IT is on the rise in 2022

Remote work post-pandemic

One reason for the rise in shadow IT use is remote work. When users are working from home, it is easier for them to escape the notice of the IT department than it might be if they were to try using unauthorized technology from within the corporate office. A study by Core found that remote work stemming from COVID requirements increased shadow IT use by 59%.

Tech is getting simpler for end-users

Another reason for the increase in shadow IT is the fact that it is easier than ever for a user to circumvent the IT department. Suppose for a moment that a user wants to deploy a particular workload, but the IT department denies the request.

A determined user can simply use their corporate credit card to set up a cloud account. Because this account exists as an independent tenant, IT will have no visibility into the account and may not even know that it exists. This allows the user to run their unauthorized workload with total impunity.

In fact, a 2020 study found that 80% of workers admitted to using unauthorized SaaS applications. This same study also found that the average company’s shadow IT cloud could be 10X larger than the company’s sanctioned cloud usage.

Know your own network

Given the ease with which a user can deploy shadow IT resources, it is unrealistic for IT to assume that shadow IT isn’t happening or that they will be able to detect shadow IT use. As such, the best strategy may be to educate users about the risks posed by shadow IT. A user who has a limited IT background may inadvertently introduce security risks by engaging in shadow IT. According to a Forbes Insights report, 60% of companies do not include shadow IT in their threat assessments.

Similarly, shadow IT use can expose an organization to regulatory penalties. In fact, it is often compliance auditors – not the IT department – who end up being the ones to discover shadow IT use.

Of course, educating users alone is not sufficient to stop shadow IT use. There will always be users who choose to ignore the warnings. Likewise, giving in to users’ demands for particular technologies might not always be in the organization’s best interests either. After all, there is no shortage of poorly written or outdated applications that could pose a significant threat to your organization. Never mind applications that are known for spying on users.

The zero-trust solution to Shadow IT

One of the best options for dealing with shadow IT threats may be to adopt zero trust. Zero-trust is a philosophy in which nothing in your organization is automatically assumed to be trustworthy. User and device identities must be proven each time that they are used to access a resource.

There are many different aspects to a zero-trust architecture, and each organization implements zero-trust differently. Some organizations for instance, use conditional access policies to control access to resources. That way, an organization isn’t just granting a user unrestricted access to a resource, but rather is considering how the user is trying to access the resource. This may involve setting up restrictions around the user’s geographic location, device type, time of day, or other factors.

Zero-trust at the helpdesk

One of the most important things that an organization can do with regard to implementing zero trust is to better secure its helpdesk. Most organizations’ help desks are vulnerable to social engineering attacks.

When a user calls and requests a password reset, the helpdesk technician assumes that the user is who they claim to be, when in reality, the caller could actually be a hacker who is trying to use a password reset request as a way of gaining access to the network. Granting password reset requests without verifying user identities goes against everything that zero trust stands for.

Specops Software’s Secure Service Desk can eliminate this vulnerability by making it impossible for a helpdesk technician to reset a user’s password until that user’s identity has been proven. You can test it out for free to reduce the risks of shadow IT in your network.

Source :
https://thehackernews.com/2022/06/what-is-shadow-it-and-why-is-it-so-risky.html

Securing Port 443: The Gateway To A New Universe

At Wordfence our business is to secure over 4 million WordPress websites and keep them secure. My background is in network operations, and then I transitioned into software development because my ops role was at a scale where I found myself writing a lot of code. This led me to founding startups, and ultimately into starting the cybersecurity business that is Wordfence. But I’ve maintained that ops perspective, and when I think about securing a network, I tend to think of ports.

You can find a rather exhaustive list of TCP and UDP ports on Wikipedia, but for the sake of this discussion let’s focus on a few of the most popular ports:

  • 20 and 21 – FTP
  • 22 – SSH
  • 23 – (Just kidding. You better not be running Telnet)
  • 25 – Email via SMTP
  • 53 – DNS
  • 80 – Unencrypted Web
  • 110 – POP3 (for older email clients)
  • 443 – Web encrypted via TLS
  • 445 – Active Directory or SMB sharing
  • 993 – IMAP (for email clients)
  • 3306 – MySQL
  • 6379 – Redis
  • 11211 – Memcached

If you run your eye down this list, you’ll notice something interesting. The options available to you for services to run on most of these ports are quite limited. Some of them are specific to a single application, like Redis. Others, like SMTP, provide a limited number of applications, either proprietary or open-source. In both cases, you can change the configuration of the application, but it’s rare to write a custom application on one of those ports. Except port 443.

In the case of port 443 and port 80, you have a limited range of web servers listening on those ports, but users are writing a huge range of bespoke applications on port 443, and have a massive selection of applications that they can host on that port. Everything from WordPress to Drupal to Joomla, and more. There are huge lists of Content Management Systems.

Not only do you have a wide range of off-the-shelf web applications that you can run on port 443 or (if you’re silly) port 80, but you also have a range of languages they might be coded in, or in which you can code your own web application. Keep in mind that the web server, in this case, is much like an SSH or IMAP server in that it is listening on the port and handling connections, but the difference is that it is handing off execution to these languages, their various development frameworks, and ultimately the application that a developer has written to handle the incoming request.

With SSH, SMTP, FTP, IMAP, MySQL, Redis and most other services, the process listening on the port is the process that handles the request. With web ports, the process listening on the port delegates the incoming connection to another application, usually written in another language, running at the application layer, that is part of the extremely large and diverse ecosystem of web applications.

This concept in itself – that the applications listening on the web ports are extremely diverse and either home-made or selected from a large and diverse ecosystem – presents unique security challenges. In the case of, say, Redis, you might worry about running a secure version of Redis and making sure it is not misconfigured. In the case of a web server, you may have 50 application instances written in two languages from five different vendors all on the same port, which all need to be correctly configured, have their patch levels maintained, and be written using secure coding practices.

As if that doesn’t make the web ports challenging enough, they are also, for the most part, public. Putting aside internal websites for the moment, perhaps the majority of websites derive their value from making services available to users on the Internet by being public-facing. If you consider the list of ports I have above, or in the Wikipedia article I linked to, many of those ports are only open on internal networks or have access to them controlled if they are external. Web ports for public websites, by their very nature, must be publicly accessible for them to be useful. There are certain public services like SMTP or DNS, but as I mentioned above, the server that is listening on the port is the server handling the request in these cases.

A further challenge when securing websites is that often the monetary and data assets available to an attacker when compromising a website are greater than the assets they may gain compromising a corporate network. You see this with high volume e-commerce websites where a small business is processing a large number of web-based e-commerce transactions below $100. If the attacker compromises their corporate network via leaked AWS credentials, they may gain access to the company bank account and company intellectual property, encrypt the company’s data using ransomware, or perhaps even obtain customer PII. But by compromising the e-commerce website, they can gain access to credit card numbers in-flight, which are far more tradeable, and where the sum of available credit among all cards is greater than all the assets of the small business, including the amount of ransom that business might be able to pay.

Let’s not discount breaches like the 2017 Equifax breach that compromised 163 million American, British and Canadian citizens’ records. That was extremely valuable to the attackers. But targets like this are rare, and the Web presents a target-rich environment. Which is the third point I’d like to make in this post. While an organization may run a handful of services on other ports, many companies – with hosting providers in particular – run a large number of web applications. And an individual or company is far more likely to have a service running on a web port than any other port. Many of us have websites, but how many of us run our own DNS, SMTP, Redis, or another service listening on a port other than 80 or 443? Most of us who run websites also run MySQL on port 3306, but that port should not be publicly accessible if configured correctly.

That port 443 security is different has become clear to us at Wordfence over the years as we have tracked and cataloged a huge number of malware variants, web vulnerabilities, and a wide range of tactics, techniques, and procedures (TTP) that attackers targeting web applications use. Most of these have no relationship with the web server listening on port 443, and nearly all of them have a close relationship with the web application that the web server hands off control to once communication is established.

My hope with this post has been to catalyze a different way of thinking about port 443 and that other insecure port (80) we all hopefully don’t use. Port 443 is not just another service. It is, in fact, the gateway to a whole new universe of programming languages, dev frameworks, and web applications.

In the majority of cases, the gateway to that new universe is publicly accessible.

Once an attacker passes through that gateway, a useful way to think about the web applications hosted on the server is that each application is its own service that needs to have its patch level maintained, needs to be configured correctly, and should be removed if it is not in use to reduce the available attack surface.

If you are a web developer you may already think this way, and if anything, you may be guilty of neglecting services on ports other than port 80 or 443. If you are an operations engineer, or an analyst working in a SOC protecting an enterprise network, you may be guilty of thinking about port 443 as just another port you need to secure.

Think of port 443 as a gateway to a new universe that has no access control, with HTTPS providing easy standardized access, and with a wide range of diverse services running on the other side, that provide an attacker with a target and asset-rich environment.

Footnote: We will be exhibiting at Black Hat in Las Vegas this year at booth 2514 between the main entrance and Innovation City. Our entire team of over 30 people will be there. We’ll have awesome swag, as always. Come and say hi! Our team will also be attending DEF CON immediately after Black Hat.

Written by Mark Maunder – Founder and CEO of Wordfence. 

Source :
https://www.wordfence.com/blog/2022/06/securing-port-443/

Critical PHP flaw exposes QNAP NAS devices to RCE attacks

QNAP has warned customers today that some of its Network Attached Storage (NAS) devices (with non-default configurations) are vulnerable to attacks that would exploit a three-year-old critical PHP vulnerability allowing remote code execution.

“A vulnerability has been reported to affect PHP versions 7.1.x below 7.1.33, 7.2.x below 7.2.24, and 7.3.x below 7.3.11. If exploited, the vulnerability allows attackers to gain remote code execution,” QNAP explained in a security advisory released today.

“To secure your device, we recommend regularly updating your system to the latest version to benefit from vulnerability fixes.”

The Taiwanese hardware vendor has already patched the security flaw (CVE-2019-11043) for some operating system versions exposed to attacks (QTS 5.0.1.2034 build 20220515 or later and QuTS hero h5.0.0.2069 build 20220614 or later).

However, the bug affects a wide range of devices running:

  • QTS 5.0.x and later
  • QTS 4.5.x and later
  • QuTS hero h5.0.x and later
  • QuTS hero h4.5.x and later
  • QuTScloud c5.0.x and later

QNAP customers who want to update their NAS devices to the latest firmware automatically need to log on to QTS, QuTS hero, or QuTScloud as administrator and click the “Check for Update” button under Control Panel > System > Firmware Update.

You can also manually upgrade your device after downloading the update on the QNAP website from Support > Download Center.

QNAP devices targeted by ransomware

Today’s warning comes after the NAS maker warned its customers on Thursday to secure their devices against active attacks deploying DeadBolt ransomware payloads.

BleepingComputer also reported over the weekend that ech0raix ransomware has started targeting vulnerable QNAP NAS devices again, according to sample submissions on the ID Ransomware platform and reports from multiple users who had their systems encrypted.

Until QNAP issues more details on ongoing attacks, the infection vector used in these new DeadBolt and ech0raix campaigns remains unknown.

While QNAP is working on patching the CVE-2019-11043 PHP vulnerability in all vulnerable firmware versions, you should ensure that your device is not exposed to Internet access as an easy way to block incoming attacks.

As QNAP has advised in the past, users with Internet-exposed NAS devices should take the following measures to prevent remote access:

  • Disable the Port Forwarding function of the router: Go to the management interface of your router, check the Virtual Server, NAT, or Port Forwarding settings, and disable the port forwarding setting of the NAS management service ports (ports 8080 and 443 by default).
  • Disable the UPnP function of the QNAP NAS: Go to myQNAPcloud on the QTS menu, click the “Auto Router Configuration,” and unselect “Enable UPnP Port forwarding.”

QNAP also provides detailed info on how to toggle off remote SSH and Telnet connections, change the system port number, change device passwords, and enable IP and account access protection to further secure your device.


Update June 22, 08:45 EDT: After this story was published, QNAP’s PSIRT team updated the original advisory and told BleepingComputer that devices with default configurations are not impacted by CVE-2019-11043.

Also, QNAP said that the Deadbolt ransomware attacks are targeting devices running older system software (released between 2017 and 2019).

For CVE-2019-11043, described in QSA-22-20, to affect our users, there are some prerequisites that need to be met, which are:

  1. nginx is running, and
  2. php-fpm is running.

As we do not have nginx in our software by default, QNAP NAS are not affected by this vulnerability in their default state. If nginx is installed by the user and running, then the update provided with QSA-22-20 should be applied as soon as possible to mitigate associated risks.

We are updating our security advisory QSA-22-20 to reflect the facts stated above. Again we would like to point out that most QNAP NAS users are not affected by this vulnerability since its prerequisites are not met. The risk only exists when there is user-installed nginx present in the system.

 We have also updated the story to reflect the new information provided by QNAP.

Source :
https://www.bleepingcomputer.com/news/security/critical-php-flaw-exposes-qnap-nas-devices-to-rce-attacks/

QNAP Urges Users to Update NAS Devices to Prevent Deadbolt Ransomware Attacks

Taiwanese network-attached storage (NAS) devices maker QNAP on Thursday warned its customers of a fresh wave of DeadBolt ransomware attacks.

The intrusions are said to have targeted TS-x51 series and TS-x53 series appliances running on QTS 4.3.6 and QTS 4.4.1, according to its product security incident response team.

“QNAP urges all NAS users to check and update QTS to the latest version as soon as possible, and avoid exposing their NAS to the internet,” QNAP said in an advisory.

This development marks the third time QNAP devices have come under assault from DeadBolt ransomware since the start of the year.

Deadbolt Ransomware Attacks

In late January, as many as 4,988 DeadBolt-infected QNAP devices were identified, prompting the company to release a forced firmware update. A second uptick in new infections was observed in mid-March.

DeadBolt attacks are also notable for the fact that they allegedly leverage zero-day flaws in the software to gain remote access and encrypt the systems.


According to a new report published by Group-IB, exploitation of security vulnerabilities in public-facing applications emerged as the third most used vector to gain initial access, accounting for 21% of all ransomware attacks investigated by the firm in 2021.

Source :
https://thehackernews.com/2022/05/qnap-urges-users-to-update-nas-devices.html

Examining Emerging Backdoors

Next up in our “This didn’t quite make it into the 2021 Threat Report, but is still really cool” series: New backdoors!

Backdoors are a crucial component of a website infection. They allow the attackers ongoing access to the compromised environment and provide them a “foot in the door” to execute their payload. We see many different types of backdoors with varying functionality.

When our malware research team is provided with a new backdoor they need to write what’s called a “signature” to ensure that we detect and remove it in future security scans. Signatures need names, and over the years we’ve developed something of a taxonomy naming system for all of the different malware that we come across.

In this article we’re going to explore all the different categories of signatures for newly-discovered backdoors throughout the year 2021.

How do Backdoors Work?

HTTP requests to websites typically fall into one of the following categories:

  • POST – sending data to a website
  • GET – requesting data from a website
  • COOKIE – data (such as session data) saved from a website
  • REQUEST – a conjunction of all/any of the three

We see all sorts of different backdoors while cleaning up compromised websites. Sometimes they use one of these types of requests, or a combination of multiple different types.

We’ve broken all newly generated signatures from 2021 down for further analysis into the following categories:

A graph showing the distribution of new backdoor signatures generated in 2021.

Uploaders

By far the most common type of backdoor found in 2021 was an uploader: That is, a PHP script that allows the attackers to upload any file that they want. These malicious files allow anyone with the correct URL path, parameters and (occasionally) access credentials to upload whichever files they want to the web server. Typically, bad actors use these backdoors to upload a webshell, spam directory, dropper, or other type of file giving them full control over the environment.

To avoid detection, attackers are always tweaking their malware by using new methods of obfuscation or concealing backdoors within legitimate-looking images, core files, plugins, or even themes — this can make malicious file uploaders difficult to detect during a casual site review.

Once an attacker has identified a vulnerable environment that they can get a foothold in, planting the uploader is often the next step. After that they have enough access to upload more complicated access points such as a webshell.

Of course there are legitimate uploader scripts, as many websites require functionality to allow users to upload photos or other content to the website. To mitigate risk, secure uploader scripts contain strict rules on how they are able to behave:

  • Only certain file types/extensions are allowed (usually image, or document files)
  • May require authorisation cookies to be set
  • May place files in a restricted directory with PHP execution disabled
  • May disable direct access and instead need to be called by the existing CMS structure

Malicious uploaders, on the other hand, have no such restrictions as they are designed to upload malicious files and PHP scripts.

A malicious uploader script

WebShells

Webshells are a classic type of malware that have been used by attackers for many years. They are administrative dashboards that give the attacker full access to the files and often provide a large amount of information about the hosting environment including operating system, PHP settings, web server configurations, file management, and SQL connections.

The classic FilesMan shell continues to be very popular with attackers. In 2021 we generated 20 new signatures related to new filesman variants alone, not including hack tools which grab filesman shells from remote servers.

Interestingly, a lot of malicious web shells provide far superior functionality than a lot of file managers provided by web hosting providers.

A malicious web shell backdoor

Misc RCE

Sometimes remote code execution backdoors are a little more complicated, or just rely on more basic/generic $_REQUEST calls. This is a PHP global array that contains the content of GET, POST and COOKIE inputs. The content of these variables could be anything and the attacker can fill them, e.g. with the payload, which is then processed. Sometimes the entire payload code is stored there and only very simple code snippets are injected into legitimate files. Such a snippet only loads and executes the content of these variables.

Other times, RCE backdoors make use of multiple different functions and request types.

A remote code execution backdoor

Generic

Not falling into any particular category are our collection of “generic” backdoors. They tend to use a mixture of different functions and methods to maintain backdoor access to the environment. Some are heavily obfuscated and others are mostly in plain text, but what unites them is that they don’t rely on any one technique to backdoor the environment in which they reside.

A generic, malicious backdoor

FILE_GET_CONTENTS

The PHP function file_get_contents fetches a local file or remote file. As far as backdoors are concerned, attackers misuse this function to grab malicious files located on other websites or servers and add them to the victim’s website. This allows them to host the actual malicious content elsewhere, while maintaining all of the same functionality on the victim environment.

Here we have a very simple backdoor using file_get_contents to grab a backdoor from a malicious server. The actual address is obfuscated through use of a URL shortening service:

A backdoor which uses file_get_contents

The footprint of this malware is very small as the payload resides elsewhere, but the functionality is potentially huge.

Remote Code Execution Backdoors

Not to be confused with remote code execution vulnerabilities, these backdoors are crafted to take whatever command is issued to it by the attacker and execute it in the victim’s environment. These PHP backdoors are often more complex than uploaders and allow the attackers more leeway in terms of how they can interact with the victim website.

If a request is sent that matches the parameters of the backdoor it will execute whichever command the attacker instructs so long as it doesn’t get blocked by any security software or firewall running within the environment.

A remote code execution backdoor

Here’s another example of a quite well hidden RCE backdoor in a Magento environment:

A well-hidden RCE backdoor in a Magento environment

Attackers make heavy use of the eval function which executes the command in the victim environment.

FILE_PUT_CONTENTS

These backdoors utilise the PHP function file_put_contents which will write the instructed content to a file on the victim environment.

Here is an example of such a backdoor lodged in a WordPress configuration file wp-config.php:

A backdoor which uses file_put_contents

This backdoor writes the specified malicious content into the file structure of the victim website given the correct parameters in the attacker’s request, allowing them to infect other files on the server with the content of their choice.

cURL

The curl() function facilitates the transmission of data. It can be used maliciously to download remote code which can be executed or directly displayed. This way, malware authors are able to create a small backdoor that only has this curl functionality implemented while the payload itself can be downloaded from a remote source.

It has many uses, and as such can be misused in many ways by attackers. We have seen it used frequently in credit card skimmers to transmit sensitive details to exfiltration destinations. It can also be used in RCE backdoors:

A backdoor which uses CURL

Since the attackers have crafted a backdoor to (mis)use curl, and they control the parameters under which it will function, in this way they are able to send or receive malicious traffic to and from the website, depending on how the backdoor is designed.

Authentication Bypass

These types of backdoors are most often seen in WordPress environments. They are small PHP scripts which allow the attacker to automatically log in to the administrator panel without needing to provide any password.

As long as they include the database configuration file in the script then they are able to set the necessary cookies for authorization, as seen in this example here:

A backdoor which bypasses normal authentication

The existence of such backdoors presents a case that additional authentication requirements should be employed within website environments. Protecting your admin panel with our firewall’s protected page feature is a great way to do this.

If you’re not a user of our firewall there are a lot of other ways that your admin panel can be protected.

Basic RCE via POST

Backdoors that take input through POST requests are quite common and many of the backdoor types that we’ve seen contain such functionality. Some of them, however, are quite small and rely exclusively on POST requests.

The example below shows one such backdoor, coupled with basic password protection to ensure that the backdoor is not used by anybody that does not have access to the password.

A basic remote code execution backdoor which uses POST

Fake Plugins

Another tactic that we’ve seen attackers use is the use of fake plugins. This is frequently used as a payload to deliver spam and malware, since WordPress will load the components present in the ./wp-content/plugins directory.

We’ve also seen attackers use these plugins as backdoors to maintain access to compromised environments.

A fake plugin in a WordPress environment

Since admin panel compromises are a very common attack vector, the usage of fake/malicious backdoor plugins is quite popular with attackers.

System Shell Backdoors

Attackers have also written malware that interacts with the hosting environment itself and will attempt to run shell commands via PHP scripts in the environment. This is not always possible, depending on the security settings of the environment, but here’s an example of one such backdoor:

A system shell backdoor

If system() is disabled in the environment then these will not work, so the functionality of such backdoors will be limited by the security settings in the host.

COOKIE Based Backdoors

Some malware creators use COOKIES as a storage for various data. These can be decryption keys used to decode an otherwise inaccessible payload, or even the entire malicious payload itself.

A cookie based backdoor

CREATE_FUNCTION

The create_function() function is often used by malware instead of (or in conjunction with) the eval() function to hide the execution of the malicious code. The payload is encapsulated inside the crafted custom function, often with an obfuscated name to make the functionality less clear.

This function is then called somewhere else within the code, and thus the payload is evaluated. Backdoors have been found to abuse this to place their payload back on the infected website after it was removed.

A backdoor which creates a malicious function in the victim environment

RCE via GET

Backdoors have also been seen using GET requests for input, rather than POST requests. In the example below we can see that the backdoor will execute the malicious payload if a GET request contains a certain string.

A remote code execution backdoor which uses GET

This allows the attackers to restrict the usage of the backdoor to only those who know the exact parameters to specify in the malicious GET request to the website. If the correct parameters are given then the backdoor will execute its intended function.

Database Management Backdoors

Most often attackers will misuse tools such as Adminer to insert malicious content into the victim website’s database, but occasionally we have seen them craft their own database management tools. This allows them to insert admin users into the website as well as inject malicious JavaScript into the website content to redirect users to spam or scam websites or steal credit card information from eCommerce environments.

A database management backdoor

Conclusion & Mitigation Steps

Backdoors play a crucial role for the attackers in a huge number of website compromises. Once the attackers are able to gain a foothold into an environment their goal is to escalate the level of access they have as much as possible. Certain vulnerabilities will provide them access only to certain directories. For example, a subdirectory of the wp-content/uploads area of the file structure.

Often the first thing they will do is place a malicious uploader or webshell into the environment, giving them full control over the rest of the website files. Once that is established they are able to deliver a payload of their choosing.

If default configurations are in place in a standard WordPress/cPanel/WHM configuration a single compromised admin user on a single website can cause the entire environment to be infected. Attackers can move laterally throughout the environment by the use of symlinks even if the file permissions/ownership are configured correctly.

Malicious actors are writing new code daily to try to evade existing security detections. As security analysts and researchers it’s our job to stay on top of the most recent threats and ensure that our tools and monitoring detect it all.

Throughout the year 2021 we added hundreds of new signatures for newly discovered backdoors. I expect we’ll also be adding hundreds more this year.

If you’d like us to help you monitor and secure your website from backdoors and other threats you can sign up for our platform-agnostic website security services.

Source :
https://blog.sucuri.net/2022/05/examining-emerging-backdoors.html
