CISA warns admins to patch maximum severity SAP vulnerability

The US Cybersecurity and Infrastructure Security Agency (CISA) has warned admins to patch a set of severe security flaws dubbed ICMAD (Internet Communication Manager Advanced Desync) and impacting SAP business apps using Internet Communication Manager (ICM).

CISA added that failing to patch these vulnerabilities exposes organizations with vulnerable servers to data theft, financial fraud risks, disruptions of mission-critical business processes, ransomware attacks, and a halt of all operations.

ICMAD bugs affect most SAP products

Yesterday, Onapsis Research Labs, which found and reported CVE-2022-22536, one of the three ICMAD bugs and the one rated as a maximum severity issue, also cautioned SAP customers to patch immediately (the other two are tracked as CVE-2022-22532 and CVE-2022-22533).

The SAP Product Security Response Team (PSRT) worked with Onapsis to create security patches to address these vulnerabilities and released them on February 8, during this month’s Patch Tuesday.

If successfully exploited, the ICMAD bugs allow attackers to target SAP users, business information, and processes, and steal credentials, trigger denials of service, execute code remotely and, ultimately, fully compromise any unpatched SAP applications.

“The ICM is one of the most important components of an SAP NetWeaver application server: It is present in most SAP products and is a critical part of the overall SAP technology stack, connecting SAP applications with the Internet,” Onapsis explained.

“Malicious actors can easily leverage the most critical vulnerability (CVSSv3 10.0) in unprotected systems; the exploit is simple, requires no previous authentication, no preconditions are necessary, and the payload can be sent through HTTP(S), the most widely used network service to access SAP applications.”

No SAP customers breached using ICMAD exploits so far

SAP’s Director of Security Response, Vic Chung, said the company is currently not aware of any customer networks breached using exploits targeting these vulnerabilities and “strongly” advised all impacted organizations to apply patches “as soon as possible.”

SAP customers can use this open-source tool developed by Onapsis security researchers to help scan systems for ICMAD vulnerabilities.

The German business software developer also patched other maximum severity vulnerabilities associated with the Apache Log4j 2 component used in SAP Commerce, SAP Data Intelligence 3 (on-premise), SAP Dynamic Authorization Management, Internet of Things Edge Platform, and SAP Customer Checkout.

All of them allow remote threat actors to execute code on systems running unpatched software following successful exploitation.

Source :
https://www.bleepingcomputer.com/news/security/cisa-warns-admins-to-patch-maximum-severity-sap-vulnerability/

How to Recover Deleted Emails in Microsoft 365

When the CEO realizes they deleted a vital email thread three weeks ago, email recovery suddenly becomes an urgent task. Sure, you can look in the Deleted Items folder in Outlook, but beyond that, how can you recover what has undergone “permanent” deletion? In this article, we review how you can save the day by bringing supposedly unrecoverable email back from the great beyond.

Deleted Email Recovery in Microsoft and Office 365

Email recovery for Outlook in Exchange Online through Microsoft and Office 365 can be as simple as dragging and dropping the wayward email from the Deleted Items folder to your Inbox. But what do you do when you can’t find the email you want to recover?

First, let’s look at how email recovery is structured in Microsoft 365. There are a few more layers here than you might think! In Microsoft 365, deleted email can be in one of three states: Deleted, Soft-Deleted, or Hard-Deleted. The way you recover email and how long you have to do so depends on the email’s delete status and the applicable retention policy.

Email Recovery in Microsoft 365

Let’s walk through the following graphic and talk about how email gets from one state to another, the default policies, how to recover deleted email in each state, and a few tips along the way.

Items vs. Email

Outlook is all about email yet also has tasks, contacts, calendar events, and other types of information. For example, you can delete calendar entries and may be called on to recover them, just like email. For this reason, the folder for deleted content is called “Deleted Items.” Also, when discussing deletions and recovery, it is common to refer to “items” rather than limiting the discussion to just email.

Policy

Various rules control the retention period for items in the different states of deletion. A policy is an automatically applied action that enforces a rule related to services. Microsoft 365 has hundreds of policies you can tweak to suit your requirements. See Overview of Retention policies for more information.

‘Deleted Items’ Email

When you press the Delete key on an email in Outlook, it’s moved to the Deleted Items folder. That email is now in the “Deleted” state, which simply means it moved to the Deleted Items folder. How long does Outlook retain deleted email? By default – forever! You can recover your deleted mail with just a drag and drop to your Inbox. Done!

If you can’t locate the email in the Deleted Items folder, double-check that you have the Deleted Items folder selected, then scroll to the bottom of the email list. Look for the following message:

Outlook Deleted Items Folder

If you see the above message, your cache settings may be keeping only part of the content in Outlook, with the rest in the cloud. The cache helps to keep mailbox sizes lower on your hard drive, which in turn speeds up search and load times. Click on the link to download the missing messages.

But I Didn’t Delete It!

If you find content in the Deleted Items folder and are sure you did not delete it, you may be right! Administrators can set Microsoft 365 policy to delete old Inbox content automatically.

Mail can ‘disappear’ another way. Some companies enable a personal archive mailbox for users. When enabled, by default, any mail two years or older will “disappear” from your Inbox and the Deleted Items folder. However, there is no need to worry. While apparently missing, the email has simply moved to the Archives Inbox. A personal Archives Inbox shows up as a stand-alone mailbox in Outlook, as shown below.

Stand-alone mailbox in Outlook

As a result, it’s a good idea to search the Archives Inbox, if present, when looking for older messages.

Another setting to check is one that deletes email when Outlook is closed. Access this setting in Outlook by clicking “File,” then “Options,” and finally “Advanced” to display this window:

Outlook Advanced Options

If enabled, Outlook empties the Deleted Items folder when closed. The deleted email then moves to the ‘soft-delete’ state, which is covered next. Keep in mind that with this setting, all emails will be permanently deleted after 28 days.

‘Soft-Deleted’ Email

The next stage in the process is Soft-Deleted. Soft-Deleted email is no longer in the Deleted Items folder but is still easily recovered. At a technical level, the mail is deleted locally from Outlook and placed in the Exchange Online folder named Deletions, which is a sub-folder of Recoverable Items. Any content in the Recoverable Items folder in Exchange Online is, by definition, considered soft-deleted.

You have, by default, 14 days to recover soft-deleted mail. The service administrator can change the retention period to a maximum of 30 days. Be aware that this can consume some of the storage capacity assigned to each user account and you could get charged for overages.
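For admins, the same retention window can be checked and adjusted from Exchange Online PowerShell. A minimal sketch, assuming the ExchangeOnlineManagement module is installed and using a placeholder mailbox address:

```powershell
# Connect to Exchange Online (interactive sign-in)
Connect-ExchangeOnline

# Check the current soft-delete retention window for a mailbox
Get-Mailbox -Identity "user@contoso.com" |
    Select-Object DisplayName, RetainDeletedItemsFor

# Raise it to the 30-day maximum allowed by the service
Set-Mailbox -Identity "user@contoso.com" -RetainDeletedItemsFor 30
```

Values above 30 days are rejected by the service; use a hold if you need longer retention.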

How items become soft-deleted

There are three ways to soft-delete mail or other Outlook items.

  1. Delete an item already in the Deleted Items folder. When you manually delete something that is already in the Deleted Items folder, the item is soft-deleted. Any process, manual or otherwise, that deletes content from this folder results in a soft-delete.
  2. Press Shift + Delete on an email in your Outlook Inbox. This brings up a dialog box asking if you wish to “permanently” delete the email. Clicking Yes removes the email from the Deleted Items folder but only performs a soft-delete. You can still recover the item if you do so within the 14-day retention period.
Soft Deleting Items in Outlook
  3. Use Outlook policies or rules. By default, there are no policies that automatically remove mail from the Deleted Items folder in Outlook. However, users can create rules that ‘permanently’ delete (in reality, soft-delete) email. If you’re troubleshooting missing email, have the user check for such rules: click Rules on the Home menu and examine any created rules in the Rules Wizard shown below.
Microsoft Outlook Policies and Rules

Note that the caution is a bit misleading as the rule’s action will soft-delete the email, which, as already stated, is not an immediate permanent deletion.

Recovering soft-deleted mail

You can recover soft-deleted mail directly in Outlook. Be sure the Deleted Items folder is selected, then look for “Recover items recently removed from this folder” at the top of the mail column, or the “Recover Deleted Items from Server” action on the Home menu bar.

Recovering soft-deleted mail in Outlook

Clicking on the recover items link opens the Recover Deleted Items window.

Recover Deleted Items, Microsoft Outlook

Click on the items you want to recover or Select All, and click OK.

NOTE: The recovered email returns to your Deleted Items folder. Be sure to move it into your Inbox.

If the email you’re looking for is not listed, it could have moved to the next stage: ‘Hard-Deleted.’

While users can recover soft-deleted email, administrators can also recover it on their behalf using the ‘Hard-Deleted’ email recovery process described next (which works for both hard and soft deletions). Also, Microsoft has created two PowerShell cmdlets that are very useful for those who would rather script these tasks: you can use Get-RecoverableItems and Restore-RecoverableItems to search for and restore soft-deleted email.
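As a sketch of how those two cmdlets fit together (the mailbox address, subject, and date range below are placeholders; an Exchange Online PowerShell session with the appropriate roles is assumed):

```powershell
# Find soft-deleted items matching a subject within a date range
Get-RecoverableItems -Identity "user@contoso.com" `
    -SubjectContains "Quarterly Budget" `
    -FilterStartTime "01/01/2022" -FilterEndTime "02/01/2022"

# Restore the matching items to their original folders
Restore-RecoverableItems -Identity "user@contoso.com" `
    -SubjectContains "Quarterly Budget" `
    -FilterStartTime "01/01/2022" -FilterEndTime "02/01/2022"
```

Running the Get- cmdlet first lets you confirm what will be restored before committing to the restore.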

Hard-Deleted Email

The next stage for deletion is ‘Hard-Delete.’ Technically, items are hard-deleted when they are moved from the Recoverable Items folder to the Purges folder in Exchange Online. Administrators can still recover items in this folder within the recovery period set by policy, which ranges from 14 days (the default) to 30 days (the maximum). You can extend retention beyond 30 days by placing a legal or litigation hold on the item or mailbox.

How items become Hard-Deleted

There are two ways content becomes hard-deleted.

  1. By policy, soft-deleted email is moved to the hard-deleted stage when the retention period expires.
  2. Users can hard-delete mail manually by selecting the Purge option in the Recover Deleted Items window shown above. (Again, choosing to ‘permanently delete’ mail with Shift + Del, results in a soft-delete, not a hard-delete.)

Recovering Hard-Deleted Mail

Once email enters the hard-delete stage, users can no longer recover the content. Only service administrators with the proper privileges can initiate recovery, and no administrators have those privileges by default, not even the global admin. The global admin does have the right to assign privileges so that they can give themselves (or others) the necessary rights. Privacy is a concern here since administrators with these privileges can search and export a user’s email.

Microsoft’s online documentation Recover deleted items in a user’s mailbox details the step-by-step instructions for recovering hard-deleted content. The process is a bit messy compared to other administrative tasks. As an overview, the administrator will:

  1. Assign the required permissions.
  2. Search the Inbox for the missing email.
  3. Copy the results to a Discovery mailbox, where you can view mail in the Purged folder (optional).
  4. Export the results to a PST file.
  5. Import the PST into Outlook on the user’s system and locate the missing email in the Purged folder.
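For those who prefer scripting, steps 2–4 correspond roughly to a content search in Security & Compliance PowerShell. A hedged sketch, with placeholder search name, mailbox, and query (a Connect-IPPSSession session and eDiscovery permissions are assumed):

```powershell
# Create a content search scoped to the affected mailbox
New-ComplianceSearch -Name "RecoverBudgetThread" `
    -ExchangeLocation "user@contoso.com" `
    -ContentMatchQuery 'subject:"Budget Approval"'

# Run it, then export the results once the search completes
Start-ComplianceSearch -Identity "RecoverBudgetThread"
New-ComplianceSearchAction -SearchName "RecoverBudgetThread" -Export
```

The export is downloaded as a PST, which you can then open in Outlook on the user’s system.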

Last Chance Recovery

Once hard-deleted items are purged, they are no longer discoverable by any method by users or administrators. You should consider the recovery of such content as unlikely. That said, if the email you are looking for is not recoverable by any of the above methods, you can open a ticket with Microsoft 365 Support. In some circumstances, they may be able to find the email that has been purged but not yet overwritten. They may or may not be willing to look for the email, but it can’t hurt to ask, and it has happened.

What about using Outlook to backup email?

Outlook does allow a user to export email to a PST file. To do this, click “File” in the Outlook main menu, then “Import & Export” as shown below.

Outlook Menu, Import Export

You can specify what you want to export and even protect the file with a password.

While useful from time to time, a backup plan that depends on users manually exporting content to a local file doesn’t scale and isn’t reliable. Consequently, don’t rely on this as a possible backup and recovery solution.

Alternative Strategies

After reading this, you may be thinking, “isn’t there an easier way?” A service like Altaro Office 365 Backup allows you to recover from point-in-time snapshots of an inbox or other Microsoft 365 content. Having a service like this when you get that urgent call to recover a mail from a month ago can be a lifesaver.

Summary

Users can recover most deleted email without administrator intervention. Often, deleted email simply sits in the Deleted Items folder until manually cleared. When that occurs, email enters the ‘soft-deleted’ stage and is easily restored by a user within 14 days. After this period, the item enters the ‘hard-deleted’ state. A service administrator can recover hard-deleted items within the recovery window. After the hard-deleted state, email should be considered unrecoverable. Policies can be applied to extend the retention times of deleted mail in any state. While administrators can go far with the web-based administration tools, the entire recovery process can be scripted with PowerShell to customize and scale larger projects or provide granular discovery. It is always a great idea to use a backup solution designed for Microsoft 365, such as Altaro Office 365 Backup.

Source :
https://www.altaro.com/hyper-v/recover-emails-microsoft-365/

Altaro – The backup snapshot for this VM is not application consistent. The backup will proceed in crash-consistent mode. (Error code ‘RCTCONTROLLER_011’)

APPLIES TO

Windows Server 2016 Hosts or newer

PROBLEM

Backup completes but gives the warning “The backup snapshot for this VM is not application consistent. The backup will proceed in crash-consistent mode. (Error code ‘RCTCONTROLLER_011’)”

SOLUTION

The Microsoft Volume Shadow Copy Service (VSS) is a Microsoft technology that forms part of Windows Server 2008 R2 and later. This component allows applications to access a “point in time” snapshot of a logical drive on the host machine, including any VHDX and related virtual machine files on that drive. This enables these files to be accessed even if they are in use or locked. It also ensures that the VHDX and related files are in a consistent state and all data has been flushed to disk before they are accessed for backup purposes.

Since then, Microsoft has made improvements to this technology, and in Server 2016 and newer it has changed the way it works once again: “Production Checkpoints” now act as a gateway between backup applications and the operating system. VSS exists to address the fact that data can and does change while backups are being taken.

In fact, if you’re getting this warning through Altaro VM Backup, you’ll also get it when running a Production Checkpoint manually. To run a production checkpoint as opposed to a standard one, follow the steps below:

  • Go to Hyper-V Manager
  • Right-click on the VM > Settings
  • Go to Checkpoints
  • Un-tick the option “Create standard checkpoints if it’s not possible to create a production checkpoint”
  • Apply and OK
  • Right-click on the VM “Checkpoint”

The checkpoint operation should now fail because it didn’t manage to run VSS inside the VM and tell applications to cease all I/O and flush outstanding data and operations from memory to disk so that the backup doesn’t miss anything. VSS in general is broad and can be affected by any application running inside the VM.

With that said, it’s not always easy or straightforward to resolve a ‘crash-consistent’ backup; however, you should proceed to troubleshoot as follows:

  1. Firstly, check whether your VM actually requires an Application-Consistent backup. If it does not, you can disable it from the “VSS Settings” screen. Simply uncheck “Application Consistent” and Save changes. More information here.
  2. If your VM is running a non-VSS-aware guest, such as a Linux OS, you can simply go to “VSS Settings” and disable “Application Consistent” for these VMs.
  3. The guest must be running one of the supported guest operating systems for Hyper-V on Windows Server 2016, as listed at the following link: https://technet.microsoft.com/en-gb/windows-server-docs/compute/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows
  4. Ensure you have the latest Windows Updates installed as “Integration Services” are now being deployed through the updates.
  5. If the VM giving trouble is running Microsoft SBS 2011, then please go through this article.
  6. In the Properties dialog of the VM, from either Hyper-V Manager or SCVMM, look on the Integration Services tab and ensure that “Backup (volume checkpoint)” is checked.
  7. The guest VM is in a running state.
  8. All the guest VM’s disks have ample free space available for the internal shadow copy to complete. You must have 10% free disk space on each disk.
  9. In the guest VM, the service with name “Hyper-V Volume Shadow Copy Requestor” is running.
  10. The Checkpoint File Location for the VM must be set to be the same volume in the host operating system as the VHDX files for the VM.
  11. The guest VM must have a SCSI controller attached in the VM settings (in Hyper-V Manager). There is no need to have any disks on the controller, but it must be present.
  12. The guest VM must not have any Shadow Storage assignment of a volume explicitly set to a different volume other than itself. This can be checked by running “vssadmin list shadowstorage” through command line.
  13. If the guest OS has a system reserved partition, verify that it has at least 45MB of free space. If less than that is free, ensure that a windows shadow copy can be created as per the screenshot below:



  14. All of the virtual machine’s volumes must be formatted with NTFS/ReFS. The volume that contains the .VHD(s) for the VM must also be formatted with NTFS/ReFS. The guest operating system’s disks must be “Basic”, not “Dynamic” (this is not the same as dynamic vs. fixed VHDs, see screenshot below):



  15. Run the command below in command prompt inside the VM that is crash-consistent:

    vssadmin list writers

    In the results check that all writers inside the VM are in a “Stable” state and showing “No error”.

  16. Check that the shadowstorage on each drive is not full, ideally set to unbounded. You can set it as unbounded by running the following command in command prompt:

    vssadmin add shadowstorage /For=C: /On=C: /MaxSize=UNBOUNDED
    vssadmin resize shadowstorage /For=C: /On=C: /MaxSize=UNBOUNDED

    Note: Run the two above commands for each drive, each time replacing the drive letter from C: to the drive in question.

  17. The “COM+ Event System”, “Distributed Transaction Coordinator”, “Remote Procedure Call (RPC)”, and “System Event Notification” services must be running within the VM. By default, these are set to “Automatic” and/or “Automatic (Delayed Start)”. The “COM+ System Application” and “Microsoft Software Shadow Copy Provider” and “Volume Shadow Copy” services must at least be set to Manual, which is the default for these. It is acceptable, but not required, to set them to “Automatic” or “Automatic (Delayed Start)”.
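As a sketch, these dependencies can be checked from PowerShell inside the guest (the short names below are the standard Windows service names for the services listed in this step):

```powershell
# EventSystem = COM+ Event System, MSDTC = Distributed Transaction Coordinator,
# RpcSs = Remote Procedure Call (RPC), SENS = System Event Notification,
# COMSysApp = COM+ System Application, swprv = Microsoft Software Shadow Copy
# Provider, VSS = Volume Shadow Copy
Get-Service -Name EventSystem, MSDTC, RpcSs, SENS, COMSysApp, swprv, VSS |
    Select-Object Name, DisplayName, Status, StartType
```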
  18. Check whether you are getting a “vmicvss” event with ID 13 inside the Application event log of the VM, stating that Windows cannot perform an online backup of this system. Event below:

    Event ID: 13
    Source: vmicvss    
    Description: Windows cannot perform an online backup of this system because scoped snapshots are enabled. To resolve this, disable scoped snapshots by creating the following registry value on this computer:
         PATH: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore\
         DWORD: ScopeSnapshots
         Value: 0
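As a sketch, the registry value from the event text can be created with a single elevated command inside the guest VM (restart the VM afterwards):

```powershell
# Disable scoped snapshots, as instructed by event ID 13
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore" /v ScopeSnapshots /t REG_DWORD /d 0 /f
```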
  19. If the issue persists check for warnings or errors in the Application and System event logs on the VM that is giving trouble.
  20. Ensure that the latest Windows Updates for the Host and VM are also applied. Please note that “Optional Updates” are usually also important for VSS operations and are suggested as well.
  21. If none of the above helps, please contact our support team.

    Source :
    https://help.altaro.com/support/solutions/articles/43000469403

What Is VMware Horizon and How Does It Work?

Businesses today have been forced to switch to remote working to ensure business continuity. After the pandemic began in early 2020, it caused a shift to a majority-remote workforce, seemingly overnight. With the change to a distributed workforce, new requirements have emerged for businesses around availability, security, and flexibility.

Virtual Desktop Infrastructure (VDI) is a solution that allows connecting remote workers with virtual desktops and applications running in a corporate data center. VMware Horizon is a VDI solution offered by VMware that provides a robust feature set and capabilities for remote workers. So what is VMware Horizon, and how does it work?

What is VMware Horizon?

Today, the work-from-anywhere model is no longer optional for businesses. Providing accessibility, flexibility, and connectivity for the distributed workforce allows remote employees to remain productive no matter where they are located.

As the pandemic escalated, businesses quickly found that legacy on-premises desktop and app virtualization platforms, which predated the widespread use of the cloud, were not equipped for current challenges. This left many companies struggling to provide the distributed workforce with fast and reliable access to the apps they need for business productivity.

VMware Horizon is an end-to-end solution for managing and delivering virtualized or physical desktops and virtual applications to end-users. It allows administrators to create and broker connections to Windows and Linux virtual desktops, Remote Desktop Services (RDS) applications, and desktops. It can also deliver Linux-hosted applications.

VMware Horizon is a Virtual Desktop Infrastructure (VDI) solution, a core component of VMware’s digital workspace for businesses looking to deliver virtual desktops and applications to their workforce. It provides the tooling and capabilities that enable access from any device and is deeply integrated with other VMware solutions and services such as VMware NSX, VMware Workspace One, vSAN, and others.

VMware Horizon provides secure and robust connectivity for remote workers



Recent VMware Horizon versions have evolved to provide desktop resources on-premises, in the cloud, hybrid clouds, and multi-cloud environments.

VMware Horizon Editions

VMware Horizon is provided in three editions:

  • Horizon Standard
  • Horizon Advanced
  • Horizon Enterprise

All three editions provide the components needed for end-to-end virtual desktop deployment.

What are the key capabilities / features of VMware Horizon?

  • VMware Horizon is a flexible and agile hybrid cloud platform.
  • It enables businesses to utilize existing datacenter based resources, including transforming on-premises desktop and app environments without redeploying.
  • It provides the ability to leverage the cloud for additional capacity and use cases.
  • Choose if and when you transition workloads to optimize performance and lower the cost of on-premises environments.
  • It lets you leverage cloud-native control plane services. As a result, it reduces costs, improves productivity, and shifts IT focus from manual tasks to automated processes.
  • Manage and monitor your deployment from one central management GUI.
  • It offers the ability to meet remote user needs keeping employees connected to desktops and apps from anywhere and any device with a single login. It doesn’t matter where the data resides, on-premises or in the cloud.
  • The Horizon control plane delivers the ability to deploy, manage, and scale virtual desktops and apps across hybrid cloud environments.
  • Horizon is a modern platform for securely delivering virtual desktops and apps across the hybrid cloud, keeping employees connected, productive and engaged, anytime and anywhere.

Deliver applications and desktops automatically and in real-time

One of the key benefits and use cases of VMware Horizon is to deliver applications and desktops automatically and in real-time. Today, many organizations are using VMware Horizon as the vehicle that allows remote workers to connect to virtual machine resources or physical workstations in the corporate network without a VPN or exposing an RDP server to the outside world.

Administrators configure desktop pools consisting of a single desktop or multiple desktops that end-users can connect to and utilize. When there are multiple virtual machines or physical desktops in a single pool, users will be placed on an available desktop resource in the pool.

Desktop pools consist of:

  • Automated desktop pools – An automated desktop pool uses a vCenter Server template or virtual machine snapshot to generate new machines. The machines can be created when the pool is created or generated on demand based on pool usage.
  • Manual desktop pools – A manual desktop pool provides access to an existing set of machines. Any machine that can install the VMware Horizon agent is supported. These include both vCenter virtual machines and physical desktops.
  • RDS Desktop pools – A Microsoft RDS desktop pool provides RDS sessions as machines to Horizon users. The Horizon Connection Server manages the RDS sessions in the same way as normal machines. Microsoft RDS hosts are supported on vCenter virtual machines and physical computers.
Viewing VMware Horizon Desktop Pools



Application Pools provide remote workers with access to published applications, either from a desktop pool or RDS farm.

Viewing a published application in VMware Horizon



The Administration Console also allows quickly performing maintenance tasks such as enabling or disabling specific Horizon Connection Servers and performing backup operations. You can also add vCenter Server environments and integrate your Unified Access Gateways into the environment.

Performing maintenance operations in the VMware Horizon Administration Console



Simplify management and maintenance tasks

One of the key areas where VMware Horizon provides quick time to value is management and maintenance. The VMware Horizon Administration Console is an HTML5 web console that is quick and intuitive. Tasks are wizard-driven with natural workflows.

In the VMware Horizon Administration Console, administrators can easily see:

  • Problem vCenter VMs
  • Problem RDS hosts
  • Events
  • System Health

The VMware Horizon Monitoring dashboard quickly shows the overall system health, sessions, workload, VDI desktops, RDSH desktops, RDSH applications, and other information.

Viewing the VMware Horizon monitoring dashboard



Keep sensitive data safe and enforce endpoint compliance

Several tools and VMware Horizon configurations help keep business-critical and sensitive data safe and enforce endpoint compliance. For example, the Endpoint Compliance Checks feature is part of the Unified Access Gateway (UAG) and provides a layer of security for clients accessing Horizon resources. It helps verify end-user client compliance with predefined policies, such as antivirus or encryption policies on endpoints.

Currently, a couple of providers offer endpoint compliance checks. These include:

  • OPSWAT – The OPSWAT MetaAccess persistent agent or the OPSWAT MetaAccess on-demand agent on the Horizon Client communicates the compliance status to an OPSWAT instance, which can then enforce policies related to the health of the endpoint and the allowed access to Horizon resources.
OPSWAT Endpoint Compliance Checks



  • Workspace ONE Intelligence (Risk Analytics) – The Workspace ONE Intelligence platform has a risk analytics feature. It can assess both user and device risk by identifying behaviours that affect security and calculating a risk score for each device and user. Based on the risk score, policies can define whether or not clients can connect and access resources.

End-user components

There are only a couple of components required on end-user clients for VMware Horizon: you can use either a browser to connect to the Horizon environment or the VMware Horizon Client. Most modern clients feature an HTML5-capable browser that allows connecting to VMware Horizon.

While you can connect to VMware Horizon-enabled endpoints using a web browser, the most robust connection experience is provided with the VMware Horizon Client. However, a question often comes up with the VMware Horizon Client – is it free?

The VMware Horizon Client is indeed a free download from the VMware Customer Connect portal. Also, there is no need to provide an email address and sign up for an account. You can find the most recent download of the VMware Horizon Clients here:

Downloading the VMware Horizon Client



The availability and ease of downloading the VMware Horizon Client help to ensure remote workers can easily download, install, and connect to VMware Horizon resources. Another great feature built into the VMware Horizon Client is checking for and updating the client directly from the interface.

Checking for updates to VMware Horizon Client



When remote workers browse to the public URL of the Unified Access Gateway, the UAG presents the Horizon Connection Server web page, allowing users to download the client or connect to their assigned resources using the VMware Horizon HTML access link.

Browsing to the VMware Horizon web access



VMware Workspace ONE UEM additional components

Organizations using cloud-based VMware Workspace ONE can simplify access to the cloud, mobile, and enterprise applications from various types of devices. Workspace ONE Unified Endpoint Management (UEM) is a single solution for modern, over-the-air management of desktops, mobile, rugged, wearables, and IoT.

Supported devices with Workspace ONE UEM

It manages and secures devices and apps, taking advantage of native MDM capabilities in iOS and Android and the mobile-cloud management efficiencies found in modern versions of Windows, Mac, and Chrome OS.


Managing clients with Workspace ONE UEM requires that the Workspace ONE UEM agent be installed on the managed devices. It can be installed manually, through scripted installations, or by using GPOs. Organizations can also make use of the Workspace ONE Intelligent Hub for an easily integrated digital workspace solution designed to improve employee engagement and productivity through a single app.

Read more about VMware Workspace ONE Intelligent Hub here:

The New Naming Format for VMware Horizon 8

VMware has departed from the naming convention used for legacy versions of VMware Horizon. While older versions were named with a “major.minor” scheme, VMware has adopted a “YYMM” naming convention denoting the year and month of the release, much like other software vendors have in the last couple of years.

VMware Horizon 8 is denoted with a new naming convention in the YYMM format


Any VMware Horizon version number beginning with “20” or higher (for example, 2006 or 2111) is synonymous with VMware Horizon 8 across the various documentation.
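
The YYMM convention can be illustrated with a short sketch (an illustrative example, not a VMware tool; the function names are invented):

```python
# Illustrative parsing of the "YYMM" release naming convention used by
# VMware Horizon 8 (e.g., 2006 = June 2020, 2111 = November 2021).

def parse_horizon_version(version: str):
    """Split a YYMM release string into a full year and month."""
    year = 2000 + int(version[:2])
    month = int(version[2:])
    return year, month

def is_horizon_8(version: str) -> bool:
    # YYMM versions from 2020 onward ("20" or higher) are Horizon 8
    return int(version[:2]) >= 20

print(parse_horizon_version("2111"))  # (2021, 11)
print(is_horizon_8("2006"))           # True
```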

Is VMware Horizon a VPN?

There are many ways that enterprise organizations have traditionally delivered access to internal resources for remote employees. Virtual Private Network (VPN) has historically been a prevalent and familiar way for end-users to access business-critical resources that reside on the internal corporate network from the Internet.

While VPN is more secure than exposing internal resources directly to the Internet (not recommended), it has its own share of security issues. With VPN connections, a VPN client is loaded on the workstation, laptop, or other device, creating a secure, encrypted tunnel between the client and a VPN terminator, such as a firewall or other VPN device.

VPNs traditionally have been used for remote connectivity


While this secures and encrypts the communication between the client and the internal network, it essentially makes the end-user device part of the network. You can think of a VPN connection as simply a “long patch cable” between the corporate network switch and the client. There are ways to secure VPN connections and scope down the resources the external clients can see. However, it opens the door to potentially connecting a client with malware to the corporate network. It also creates the possibility of easy data exfiltration from the corporate network to the client.

VPN connections are also notoriously complex and cumbersome to manage and maintain. Admins must manage each VPN client individually in most cases. In addition, each VPN connection is its own tunnel to the corporate network, creating the need for tedious management of multiple tunnels.

VMware Horizon provides a solution that is not VPN-based and solves the challenges mentioned above with traditional VPN connections. Note the following:

  • Remote users connect to virtual or physical desktops that are provisioned inside the corporate network. This means the end-user remote client is not directly connected to the corporate network
  • While the Horizon Client is recommended for the most robust experience connecting to the VMware Horizon environment, end-users can also connect to provisioned resources over a simple web browser connection, with no client required.
  • VPNs may not work with all types of devices. VMware Horizon connectivity, either via the Horizon Client or web browser connection, means almost any modern device with web connectivity can allow a user to connect to VMware Horizon resources
  • Admins have a consolidated and centrally managed set of infrastructure as a connectivity point, either with the Unified Access Gateways (recommended for secure external connectivity) or the Horizon Connection Servers
  • Combined with VMware NSX-T Data Center, administrators can easily secure the connectivity between VMware Horizon resources and control which resources users can reach, making it an identity-driven solution

VMware Anywhere Workspace

VMware Horizon is a core component of the VMware Anywhere Workspace. What is the VMware Anywhere Workspace? It is a holistic solution that combines multiple components required for effective and efficient secure remote access, including:

  • Digital workspace solution – Provided by VMware Horizon cloud services or on-premises resources
  • Endpoint security – Organizations can seamlessly secure their remote workers’ endpoints with VMware NSX-T Data Center and VMware Carbon Black.
  • Secure Access Service Edge (SASE) – A platform that converges industry-leading cloud networking and cloud security to deliver flexibility, agility, security, and scale for enterprise environments of all sizes.

Note how VMware Horizon fits into the various aspects of VMware Anywhere Workspace:

  • It helps to manage multi-modal employee experience – With the VMware Anywhere Workspace, VMware Horizon can help deliver a familiar desktop and application experience across workspace locations and devices.
  • Security and the distributed edge – VMware Horizon delivers access to desktops and applications to any endpoint.
  • Anywhere Workspace Integrations – Workspace Security brings Carbon Black together with Workspace ONE UEM and VMware Horizon

VMware Horizon Architecture and Logical Components

VMware Horizon has a robust architecture composed of many different components that make up the end-to-end solution. The components of the VMware Horizon architecture include:

  • Horizon Client – The client is the piece that forms the protocol session connection to a Horizon Agent running in a virtual desktop, RDSH server, or physical machine
  • Unified Access Gateway (UAG) – It provides secure edge services for the Horizon Client. The Horizon Client authenticates to a Connection Server through the Unified Access Gateway, then forms a protocol session connection to the UAG, which in turn connects to the Horizon Agent running in a virtual desktop or RDSH server.
  • Horizon Connection Server – The Connection Server brokers and connects users to the Horizon Agent installed on VMs, physical hosts, and RDSH servers. The Connection Server authenticates user sessions through Active Directory, and grants access to the proper entitled resource.
  • Horizon Agent – The agent is installed in the guest OS of the target VM or system. It allows the machine to be managed by the Connection Servers and allows a Horizon Client to connect using the protocol session to the Horizon Agent.
  • RDSH Server – Microsoft Remote Desktop Servers that provide access to published applications and session-based remote desktops to end-users.
  • Virtual Machine – Virtual machines can be configured as persistent or non-persistent desktops. Persistent desktops are usually assigned in a 1-to-1 fashion to a specific user. Non-persistent desktops are assigned in desktop pools that can be dynamically provisioned to users as needed.
  • Physical Desktop – Counterintuitively, VMware Horizon can be used as a secure and efficient way to deliver connectivity to physical desktops to end-users. Starting with VMware Horizon 7.7, VMware introduced the ability to broker physical desktop machines with RDP. In Horizon 7.12, support was added for Blast protocol connectivity to physical desktops.
  • Virtual Application – Horizon can be used with RDSH servers to provide virtual application delivery. Using the functionality of the published application in RDSH, VMware Horizon can deliver the published applications to assigned users.
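
The distinction between persistent (1-to-1) and non-persistent (pooled) desktops described above can be sketched in a few lines of Python (a hypothetical illustration, not VMware code; the class and VM names are invented):

```python
# Hypothetical sketch contrasting persistent 1-to-1 desktop assignment
# with dynamic allocation from a non-persistent desktop pool.

class DesktopPool:
    def __init__(self, desktops, persistent=False):
        self.persistent = persistent
        self.available = list(desktops)
        self.assignments = {}            # user -> dedicated desktop

    def connect(self, user):
        if user in self.assignments:     # persistent users keep their desktop
            return self.assignments[user]
        desktop = self.available.pop(0)  # grab any free desktop from the pool
        if self.persistent:
            self.assignments[user] = desktop
        return desktop

    def disconnect(self, user, desktop):
        if not self.persistent:          # non-persistent desktops return to pool
            self.available.append(desktop)

pool = DesktopPool(["vm-01", "vm-02"], persistent=False)
d = pool.connect("alice")     # alice gets vm-01
pool.disconnect("alice", d)   # vm-01 is returned for the next user
```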

Logical Components

There are other components of Horizon architecture that are considered to be logical components of the solution. Some of the components listed below are not absolutely required. However, they can be used to enhance a Horizon deployment and scale the capabilities, security, and performance of the solution.

  • Workspace ONE Access – VMware Workspace ONE provides the solution for enterprise single sign-on (SSO) for the enterprise. It simplifies the access to apps, desktops, and other resources to the end-user. It can integrate with existing identity providers and provide a seamless login experience to create a smooth access workflow. It also offers application provisioning, a self-service catalogue, and conditional access.
  • App Volumes Manager – VMware App Volumes Manager coordinates and orchestrates the delivery of applications by managing assignments of application volumes. These include packages and writable volumes that can easily assign applications to users, groups, and target computers.
  • Dynamic Environment Manager – User profiles are also challenging in dynamic environments with multiple resources accessed by a single user. Dynamic Environment Manager enables seamless profile management by capturing user settings for the operating system and also end-user applications.
  • VMware vSAN storage – VMware vSAN is a software-defined storage solution that offers many advantages in the enterprise. It can deliver high-performance, highly-scalable storage that can be seamlessly managed from the vSphere Client as part of the native VMware solution. It does this by aggregating locally attached storage in each ESXi host in the vSphere cluster and presenting it as a logical volume for virtual machines and modern workloads. When it comes to VMware Horizon environments that are mission-critical, you want to have highly-resilient storage that is scalable and performant. VMware Horizon environments backed by VMware vSAN work exceptionally well for this use case.
  • VMware NSX-T Data Center – Another consideration for VMware Horizon environments and end-user computing is security. VMware NSX-T Data Center provides the network-based security needed in EUC environments. It allows easily creating secure, resilient, and software-defined networks that allow admins to take advantage of micro-segmentation for VMware Horizon workloads. Each virtual desktop can be isolated from all other virtual desktops using VMware NSX-T Data Center, bolstering security and protecting other critical Horizon infrastructure, such as the Connection Servers.
  • Microsoft SQL Servers – It is recommended to have a dedicated Microsoft SQL Server to house the event databases required by VMware Horizon. Plan your VMware Horizon deployment accordingly.

Horizon Hybrid and Multicloud Architecture

VMware Horizon can be deployed in many different architecture designs. These include on-premises, in the cloud, or a combination of hybrid and multi-cloud architectures.

In a VMware Horizon hybrid deployment, infrastructure can run in an on-premises datacenter with the Horizon control plane running in the cloud, or Horizon can be deployed across both on-premises and public cloud environments with the two joined together. In addition, organizations can connect their existing Horizon 7 or Horizon 8 implementations to the Horizon Cloud Service using the Horizon Cloud Connector appliance.

The VMware Horizon Control Plane Services are designed to meet modern challenges for remote workers and connectivity. Organizations whose virtual desktops and apps span on-premises and cloud environments benefit in particular, since existing VDI implementations are often tied to a single environment. The Horizon Control Plane allows managing all hybrid and multi-cloud deployments and configurations from one place.

VMware Horizon hybrid architecture with the Horizon Control Plane


It provides many benefits outside of management, including:

  • Universal brokering
  • Image management
  • Application management
  • Monitoring
  • Lifecycle management

The Horizon Control Plane Services


Just-in-time desktops and apps

VMware Horizon technology allows organizations to provision “just-in-time” desktops and applications. Using what VMware calls Instant Clone Technology, entire desktops can be provisioned just in time. Instant Clone Technology allows the rapid cloning of running virtual machines in just a few seconds, provisioning on average about one clone per second.

The Instant Clone Technology is really a radical evolution of what VMware Composer clones could do previously. With Instant Clone Technology, the steps required to provision a clone with VMware Composer are dramatically reduced. Note the comparison of the two processes below:

Comparing VMware Horizon Composer with Instant Clone Technology


The VMware Instant Clone Technology was born from a project called “vmFork” that uses rapid in-memory cloning of a running parent virtual machine and copy-on-write to deploy the virtual machines to production rapidly.

  • Copy-on-write – Copy-on-write is an optimization strategy in which tasks share the same data until one of them needs to modify it; at that point, the modifying task first creates a separate private copy so its changes are not visible to the other tasks. With copy-on-write, the parent VM is quiesced and then forked. The forking process creates two branches of development, and the resulting clones receive unique MAC addresses, UUIDs, and other unique information.

Using the Instant Clone Technology with VDI provisioning is perfect for the just-in-time desktop and applications use case. New workstations can quickly be provisioned, just in time for the user to log into the environment. Then, using VMware App Volumes to attach AppStacks to the just-in-time desktops dynamically, you can have fully functional workstations with dynamically assigned applications in a matter of seconds, fully customized for each user.
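
The copy-on-write behavior described above can be sketched conceptually (a simplified illustration of the idea behind vmFork, not actual Instant Clone internals; the class and page names are invented):

```python
# Conceptual sketch of copy-on-write forking: clones share the parent's
# memory pages read-only, and a clone gets a private copy of a page only
# when it writes to that page.

class CowClone:
    def __init__(self, parent_pages):
        self._parent = parent_pages     # shared, read-only view
        self._private = {}              # pages this clone has modified

    def read(self, page):
        # Private copy wins if present; otherwise fall through to parent
        return self._private.get(page, self._parent[page])

    def write(self, page, value):
        self._private[page] = value     # copy-on-write: only now diverge

parent = {"page0": "kernel", "page1": "apps"}
clone_a = CowClone(parent)
clone_b = CowClone(parent)
clone_a.write("page1", "user-a data")
# clone_a sees its private copy; clone_b and the parent are untouched
```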

Should you be using VMware Horizon?

VMware Horizon is a powerful remote connectivity solution that allows businesses today to solve the challenges of remote workers and connectivity needs. In addition, it enables businesses to scale their deployments with modern architectures, including hybrid cloud deployments and multi-cloud architectures.

With the new VMware Horizon Control Plane services, organizations can manage multiple VMware Horizon deployments across sites, clouds, and different infrastructures from the cloud. In addition, it opens up the possibility for organizations to use heterogeneous implementations of virtual desktops that may exist across on-premises and public cloud environments and aggregate these services for end-users.

VMware provides a rich set of additional solutions and services that seamlessly integrate with VMware Horizon and extend the solution’s capabilities, scalability, security, and management. These include VMware vSAN, VMware NSX-T Data Center, VMware Workspace ONE, Workspace ONE UEM, and VMware Anywhere Workspace.

For end-user clients, connecting to Workspace ONE or native VMware Horizon resources is as simple as browsing the solution’s service URLs. While the VMware Horizon Client provides the most robust connectivity experience for end-user clients, users can also use the HTML client to connect to virtual machines, physical desktops, and applications using a simple web browser.

The Instant Clone Technology provided by VMware Horizon allows just-in-time desktops and applications to be provisioned in seconds, a feat that is amazing to see and provides businesses with the capability to have exponentially more scale in providing virtual desktops to end-users. In addition, the dynamic capabilities offered by VMware Horizon allow companies to elastically scale up and scale down virtual desktops, even with on-premises infrastructure.

Source :
https://www.altaro.com/vmware/vmware-horizon/

Working with Failover Cluster Nodes and Storage

The previous entries in this section have gone through the most complex sections of Failover Cluster Manager as it applies to Hyper-V. Most of the tool’s remaining functions deal with the supporting infrastructure for a cluster and are much less involved with the virtual machines. If you’re building up and configuring a brand new cluster, these areas are where you’ll spend a lot of your initial time. For a functioning cluster, they still contain useful information but won’t be frequently accessed.

How to Manage Hyper-V Cluster Nodes in Failover Cluster Manager

In the left pane underneath the cluster that you wish to work with, the second tree item is Nodes. This is where you’ll find the physical systems that perform the virtualization hosting for your cluster. If you have hosts that perform other roles for this cluster but are not cluster members, such as storage nodes, they should not appear here.

While it is technically possible for a single cluster to operate multiple roles, such as Hyper-V and Scale-Out File Server (SOFS), a single cluster cannot serve as both the storage platform and the virtualization platform for the same Hyper-V guests. Differing host types should be placed in separate clusters. The only secondary role supported in a Hyper-V cluster is the Hyper-V Replica Broker.

The typical node view should look something like the following. In this cluster, all nodes are present with a status of Up:

There are two context menus to work with in this section. As with all other aspects of Failover Cluster Manager, you can access an object’s context menu by either right-clicking it or by left-clicking it and looking in the panes at the far right.

For the Nodes tree object itself, there is only one unique item: Add Node. Clicking this will take you through the same screens that you saw in the first section of this application’s tour, except that the outcome will be the addition of a new node to an existing cluster rather than the creation of an all-new cluster. If you proceed through the wizard, you’ll be notified of the need to perform a cluster validation. Remember that you might need an up-to-date validation report if you contact Microsoft support.

The other items on the Nodes tree node’s context menu are standard. You can customize the columns that appear in the center pane by selecting Customize, which is the only option in the View sub-menu. By default, you are shown the Assigned Vote and Current Vote columns, which give you the status of the cluster’s quorum. There is also an Information column that is usually empty but will contain a preview of any error states. The last menu option allows you to Refresh the center pane to have Failover Cluster Manager re-check the status of the nodes. Finally, you can click Help to see Failover Cluster Manager’s MMC help window.

The context menu for a node is more complex, although not nearly to the same degree as what you saw for virtual machines in the Roles node.

Node Context Menu: Pause

Pausing a node makes it an ineligible target for role migrations. The node is still given a vote for quorum and remains in full communication with the other nodes. This is an ideal state if you wish to perform short-term manual maintenance operations on the node. This menu has two sub-menu items: Drain Roles and Do Not Drain Roles.

If you opt to perform a drain, the cluster will attempt to move all roles on that node to other nodes in the cluster based on its own balancing algorithms. Active guests with a priority of Medium or higher will be Live Migrated; all others will be Quick Migrated. Even if the drain operation is not fully successful, the node will be paused in order to prevent it from accepting any new roles.
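
The drain rule above can be expressed as a short sketch (illustrative only, not the actual cluster service logic; the function and priority names are assumptions for the example):

```python
# Illustrative sketch of choosing a migration type for each role during
# a node drain: active guests with Medium or higher priority are Live
# Migrated; everything else is Quick Migrated.

PRIORITY = {"Low": 1, "Medium": 2, "High": 3}

def migration_type(role_priority: str, is_active: bool) -> str:
    if is_active and PRIORITY[role_priority] >= PRIORITY["Medium"]:
        return "Live Migration"
    return "Quick Migration"

print(migration_type("High", is_active=True))    # Live Migration
print(migration_type("Low", is_active=True))     # Quick Migration
print(migration_type("High", is_active=False))   # Quick Migration
```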

Node Context Menu: Resume

The Resume option has the same options as the Pause menu in reverse: Fail Roles Back and Do Not Fail Roles Back. If you choose to perform failback, all roles that were migrated as part of the initial drain operation are retrieved after the node is resumed. Otherwise, they are left where they are.

Node Context Menu: Remote Desktop

This menu option starts the Remote Desktop Client, automatically targeted at the node.

Node Context Menu: Information Details

If any operation resulted in an error status, the Information column will show a short preview. Use this menu item to display the complete error message.

Node Context Menu: Show Critical Events

This menu item will spawn a minimalist event viewer window that shows critical events related to node and quorum management. Despite the implications in the context menu and the spawned window’s title bar text, the events are for all nodes.

Node Context Menu: More Actions

The More Actions menu gives you three sub-items. The first two are Stop Cluster Service and Start Cluster Service. In the current version of Failover Clustering, the outcome of stopping the cluster service in this fashion is very similar to the drain operation, with the exception that the cluster service (clussvc.exe) is gracefully halted. All of the node’s roles are drained and it cannot receive any incoming roles. The node will retain its quorum vote, although Dynamic Quorum may choose to rescind it.

The Start Cluster Service option will not restore drained roles. It will start the service, reattach the node to the cluster, and, if necessary, restore its quorum vote.

The final option on the More Actions menu is Evict. This should only be used when a node is being decommissioned or has failed entirely. In earlier versions of Failover Clustering, evicting a node was a fairly common troubleshooting step. It should no longer be necessary in current versions. Evicting a node does cause configuration information to be lost, so, even if rejoined, pre-existing validation reports may become invalidated.

How to Manipulate Storage for Hyper-V in Failover Cluster Manager

The Storage node of Failover Cluster Manager allows you to work with cluster-controlled storage. Hyper-V does work perfectly well with virtual machines that are placed on file servers running SMB (Server Message Block) protocol version 3 or later; version 3 debuted with Windows Server 2012. These storage locations are not controlled by the cluster and cannot be managed through Failover Cluster Manager, which can only work with standard cluster disks and Cluster Shared Volumes.

The Storage node has two sub-nodes of its own: Disks and Pools. Pools are used with Scale-Out File Servers (SOFS). It is technically possible to run Hyper-V roles and SOFS on the same cluster, but the virtual machines cannot be placed on space used by the same cluster’s SOFS. In addition to being unsupported, the system will error if you attempt to create such a “loopback” configuration.

Disks

For a Hyper-V cluster, the Disks sub-node is typically of much greater use. The only situation in which it would not contain any information is if you are not using a disk witness for quorum and all guests are stored on SMB 3 storage. In order for this section to be of any use, you must have connected shared storage to every one of the nodes using common direct-attached storage through an external SCSI interface, an iSCSI link, or a fibre channel link.

Each shared storage location must be formatted with NTFS or ReFS. A disk to be used for quorum must be formatted with NTFS. The details of preparing storage are not part of this tour. Storage will be talked about in more detail in a later article, but you can find detailed guidance on how to connect storage to a Hyper-V system here. Making the connections on the nodes will not automatically make them available to the cluster. That can be done through this section of Failover Cluster Manager.

To begin, select the Disks node in the left pane and access its context menu. The very first item is Add Disk. If there is no unused storage connected to every node, you’ll receive a dialog indicating as much:

If one or more disks are available, you’ll see something like the following:

The cluster automatically determines the Resource Name by using the text “Cluster Disk” and incrementing a number. Disk Info helps you to identify what is being connected, as the dialog does not read volume information such as labels. The disk signature can also be used to identify the disk; it’s retrievable by using Get-Disk. When adding several disks at once that are of equal size, be certain to note which is which on this screen, as that information will not be so readily available after they are attached to the cluster. Check the box(es) for the disk(s) you’d like to add and click OK. Each disk should then appear in the center pane:

The next item in the Disks sub-node’s context menu is Move Available Storage. Its sub-options are the same as for virtual machine migrations: Best Possible Node and Select Node. These items operate only on standard cluster disks; quorum disks and Cluster Shared Volumes are unaffected. Every standard cluster disk is moved if possible.

The remaining options in this node are the standard View, Refresh, and Help items, which work as they do elsewhere in Failover Cluster Manager.

Disk Items Context Menu

The items in the center pane represent the disk-based storage assigned to the cluster. They have a dynamic context menu. Each item is presented below in alphabetical order.

  • Add to Cluster Shared Volumes: This option is only available for standard cluster disks. Once used, the disk is converted to a CSV. It no longer appears as a separate disk attached to a singular cluster node but becomes an entity underneath C:\ClusterStorage on all nodes. A folder named Volume# will be created to represent this disk. It can be renamed, but doing so after virtual machines are placed on it will cause those virtual machines to break. Any virtual machines that were on the cluster disk before it was converted will also be broken.
  • Bring Online: This returns an offline object to online status. All disk types are eligible.
  • Information Details: If the previous operation on this item in this console resulted in an error, this entry will become active. Clicking it will spawn a dialog with details about the error.
  • Move: The Move option is only available for Cluster Shared Volumes. It reassigns ownership to another node, either automatically with the sub-item Best Possible Node or by manual selection using Select Node.
  • More Actions: As with the menu it’s found in, this displays a dynamic menu with the following possible options:
    • Assign to another role: In a Hyper-V cluster, this menu item is not useful. You do have the ability to assign it directly to a virtual machine role, but that doesn’t grant any special abilities to the virtual machine that it doesn’t already have. Virtual machines can already use any cluster disk as a pass-through disk. Using this menu item could help visually reinforce that a particular virtual machine is using it as pass-through storage.
    • Repair: This item becomes active for a disk in an offline state. Use this menu item in the event that the disk is offline because it has permanently failed and you are replacing it. The replacement disk must be attached to storage but must not have been added as a cluster disk; if it was added, remove it. Upon clicking Repair, a dialog will appear with all available storage. Choose the item that will replace the failed disk.

      Upon selecting the replacement item, it will be added into the cluster with the name of the disk that was replaced. You will be prompted to bring it online to complete the repair.
    • Show Dependency Report: This item is of little use in a Hyper-V cluster as disk resources are not assigned directly to roles. For CSVs, it will display the underlying Cluster Disk resource.
    • Simulate Failure: Triggers the configured failure action for a standard cluster disk or the quorum disk.
    • Turn off Maintenance Mode: Restores a disk object that was previously placed in Maintenance Mode to normal operation.
    • Turn on Maintenance Mode: This mode removes protections against tools such as CHKDSK from running against the volume and disables the cluster’s automated health checks. When activated against a Cluster Shared Volume, you receive a warning that roles will be taken offline; this is not true for virtual machines. However, the volume’s representation under C:\ClusterStorage will disappear and virtual machines in that space cannot be Live Migrated until Maintenance Mode is ended.
  • Properties: A properties dialog will be displayed that will change depending upon the selected item. These will be explored after this list.
  • Remove: The selected standard cluster disk is removed from cluster disks. Virtual machines on it will instantly crash.
  • Remove from role: If a cluster disk is assigned to a role, a menu item will appear allowing you to return it to Available Storage.
  • Remove from Cluster Shared Volumes: The selected CSV is returned from CSV status to standard disk status. Any hosted virtual machines will instantly crash.
  • Show Critical Events: A minimal event viewer dialog is shown with any available critical events about the selected resource.
  • Take Offline: Use this to take any disk resource offline. Any active virtual machines using this storage will instantly crash.

Properties Dialog for Cluster Shared Volumes

Of the three cluster disk types, the properties dialog for a CSV is the simplest:

The only modifiable control is the Name. This name is only used by Failover Cluster Manager and Failover Clustering. It does not change the way that virtual machines refer to their storage. You can change this at any time. The list box shows four sets of information. Volume is the logical path that the CSV is referred to on each node. This can be renamed using traditional file system commands and tools, but doing so after virtual machines are created on it will cause their links to break. Redirected Access indicates if the volume is in Redirected Access mode. Capacity shows the total space on the disk and Free Space displays how much of that space is unused.

Properties Dialog for Standard Cluster Disks and Quorum Disks

The properties dialog is identical for the other two types. It contains a series of tabs. The first is the General tab and it looks very similar to the properties dialog for the CSV:

You can use this page to rename the cluster disk. As with a CSV, nothing is harmed by performing this operation. This dialog shows the cluster’s disk number, which can be referenced with the text-based tools and Disk Management. The center pane shows similar information to that of a CSV, although instead of a symbolic link path, the Volume is the drive letter, if one is assigned, or a raw volume identifier. Since cluster disks do not support Redirected Access mode, that column is not present.

The Dependencies tab will not show anything for the typical cluster disk in a Hyper-V environment since they are not commonly attached to roles. The Policies and Advanced Policies tabs are identical in content and function to those for other cluster resources and were examined in the two preceding articles in this series.

The unique item on this dialog is the Shadow Copies tab. This setting is node independent and should be used instead of the traditional setting in Windows Explorer.

Details Pane

When a single cluster disk is selected in the center pane, that center pane will be divided into upper and lower sections. The lower section will show a quick summary of the item:

You can quickly see the space utilization for the volume and its drive letter or raw volume identifier (standard cluster disks and quorum disks) or its symbolic link (CSVs). In this pane, the item has a single-item context menu. A quorum or standard cluster disk will have the option to Change Drive Letter, which displays a very simple dialog allowing you to clear the drive letter or assign a new one from the available letters. A Cluster Shared Volume will give you the option to Turn On Redirected Access Mode if it is off, or to turn it off if it is on.

The next cluster tree item after Disks is Networks. This section gives access to the networking resources as managed by the cluster. Clicking this tree node will display all of the networks that the cluster is aware of in the center pane. By default, the cluster names them as Cluster Network 1, Cluster Network 2, etc.

The way that Failover Clustering identifies a network is by subnet. Every unique subnet discovered on each host will be displayed here. If a host has two or more adapters in the same subnet, only one of them will be displayed. If any host does not have an adapter in a subnet that can be found on other nodes, that network will be considered Partitioned. Configuring the subnets is a topic that’s tackled in the Networking article. As this is just a tour of the tool, it’s assumed that all of your subnets are already configured as desired.
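As an illustration of this subnet-based logic, here is a minimal Python sketch (node names and addresses are made up for the example) that groups adapters by subnet and flags any subnet not present on every node as partitioned:

```python
import ipaddress

# Hypothetical inventory: each node's adapter IPs, as a cluster
# might discover them. Names and addresses are invented.
node_adapters = {
    "NODE1": ["192.168.10.11", "10.0.5.11"],
    "NODE2": ["192.168.10.12", "10.0.5.12"],
    "NODE3": ["192.168.10.13"],  # missing an adapter in 10.0.5.0/24
}

def partitioned_networks(node_adapters, prefix=24):
    """Group adapters by subnet; a subnet present on some but not
    all nodes is considered partitioned, mirroring the cluster's logic."""
    subnets = {}
    for node, addrs in node_adapters.items():
        for addr in addrs:
            net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
            subnets.setdefault(net, set()).add(node)
    all_nodes = set(node_adapters)
    return [str(net) for net, nodes in subnets.items() if nodes != all_nodes]

print(partitioned_networks(node_adapters))  # → ['10.0.5.0/24']
```

Only the 10.0.5.0/24 subnet is flagged, because NODE3 has no adapter in it, which is exactly the situation Failover Clustering reports as Partitioned.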

The tree node itself has only a single unique item: Live Migration Settings. The others are the standard View, Refresh, and Help items. Clicking the Live Migration Settings item will display a dialog box similar to the following:

This dialog allows you to prioritize how Live Migration will utilize available networks. It should be used judiciously to prevent Live Migrations from drowning out other types of communication. Live Migration traffic will only be allowed on networks that are checked (those networks must also be marked to allow cluster traffic, which will be demonstrated shortly). Items at the top of the dialog will be given preference when networks are selected. If the sending and receiving nodes are both set to use SMB for Live Migration and SMB multichannel is active, all selected networks will carry Live Migration traffic.

Networks List Entries and Context Menus

The center pane of the Networks section of Failover Cluster Manager looks like the following:

The upper portion shows the name, status, and allowed traffic for each network. The context menu for these items contains only three entries: Information Details, Show Critical Events, and Properties. As is common to previously discussed objects in Failover Cluster Manager, the Information Details link shows a pop-up dialog with details about any error message caused in this session and Show Critical Events displays any error events involving the selected item. Properties opens the properties dialog for the item:

The first changeable control is the name. A network can be safely renamed at any time. The most common use of this feature is to give a meaningful description to the network.

The second control group indicates how the network is to be used.

  • Allow cluster network communication on this network grants the ability for cluster communications, such as heartbeat, Redirected Access, and Live Migrations to utilize the selected network.
  • Allow clients to connect through this network is not as meaningful in a Hyper-V cluster as for other cluster types. The network that the cluster name object (CNO) appears on should be marked for client access. Clustered Hyper-V does not expose its virtual machine roles through this network the way that other clustered roles do, so this check box serves little purpose beyond that.
  • Do not allow cluster network communication on this network prohibits the cluster from using the network at all. This will prevent Live Migration traffic even if the network’s box is checked for Live Migration as shown in the preceding sub-section.

The remainder of this dialog shows the status of the network and the subnets that have been detected on it.

Details Pane

The initial dialog for this section showed the Summary pane for the details section. It displays the name of the network and its detected subnets. There is also a Network Connections tab which shows the adapters in the subnet:

If a network is partitioned, this can help you determine which node(s) have lost connectivity or have failed adapters. It can also help you to verify that adapters have been assigned to the correct subnet. The displayed name (Onboard in the above image) is the same name that the host’s management operating system shows for the adapter. These items have a context menu with the options Information Details and Show Critical Events.

Cluster Events

The final cluster tree node is Cluster Events. This contains a display that is very similar to that of the standard Windows Event Viewer. It has been automatically filtered to contain a specific subset of the cluster-related events. Not all possible events are shown. The default view appears below:

The Cluster Events node does have a context menu, displayed on the right in the above screenshot. It will not be discussed in detail here, as it is quite similar to that found in the traditional Event Viewer. One item to point out to those that are not familiar with that interface is Query. Clicking this will show the following dialog, which you can use to tailor what appears here:

Other items in the context menu can be used to further manipulate the query, if you so desire.

A second notable item in this list is Reset Recent Events. This clears the view, but it does not remove the events themselves. It also has the effect of resetting the icon that Failover Cluster uses for the cluster back to its default as shown below:

Wrapping up in the GUI

This concludes the tour of Failover Cluster Manager and the unit on the built-in graphical tools to manage Hyper-V and Failover Clustering. These sections have taken a very thorough look at these tools and their capabilities and will serve as a reference as you work through the rest of the material and into the future.

Source :
https://www.altaro.com/hyper-v/failover-cluster-manager/nodes-storage/

2021 VMware Major Developments, Releases, Updates & More!

Following a year that the world will remember for a long time to come (and mostly not for good reasons), we wrap up 2021 with a plethora of events happening in the tech industry. In the meantime, we certainly hope that you are doing well and staying safe during this upcoming festive period. In this article, we’ll recap the most important VMware news stories of the year and have a look ahead at what 2022 has in store. Let’s get going!

Company Growth

A lot has been going on this year in the VMware space, not only in a technical aspect but also with major changes within the company’s structure and management.

Financially, the company keeps doing very well with projected revenue of over $12.8 billion, an increase of around 9% compared to last year with expected significant growth in the SaaS area.

One of the axes VMware is also working on to generate revenue is the partner incentives program based on the customer life cycle. The new incentives reward partners that deliver PoCs and customer assessments, as well as “sell-through” partners working together.

Acquisitions

VMware acquired a dizzying number of companies over the course of the previous year (2020). However, mergers are time-consuming and are never straightforward when it comes to restructuring teams, merging products into existing portfolios… VMware has put a lot of resources into integrating previous years’ acquisitions into their existing portfolios such as Carbon Black, Salt or Datrium.

This might be the reason why they only acquired one company in 2021 with Mesh7. Let’s have a closer look at what it is.

Mesh7

VMware acquired Mesh7 at the end of the first quarter of 2021. Their technology helps customers improve application resiliency and reliability, and reduce blind spots, through the integration of deep Layer 7 insights with cloud, host, and reputation data. They offer a distributed API Security Mesh solution (API Firewall and API Gateway) focused on securing the application layer at its core in Kubernetes environments.

VMware acquired Mesh7 at the end of March 2021 to further secure Tanzu Service Mesh

VMware uses Envoy as an open-source Layer 7 proxy in Tanzu Service Mesh and Mesh7’s API gateway is being integrated into the solution to further secure the Kubernetes connectivity solution.

VMworld 2021

As usual, let’s quickly recap what happened during VMworld 2021 which was, once again, a virtual event. We will only skim over the surface of what was announced as a lot of other areas were covered such as Security, Networking, End-User services… For more information about the announcements made during this event, head over to our dedicated VMworld 2021 Round-up Article.

Strong focus on multi-cloud

VMware followed the trend set in the previous year with a strong push towards multi-cloud and managed cloud services. VMware Cross-Cloud services will offer a bunch of multi-cloud services you can pick and choose from in a flexible manner to facilitate and accelerate customers’ adoption.

VMware Cross-Cloud services aims at simplifying the shift to a multi-cloud SDDC

VMware Sovereign Cloud tackles the issues around how sensitive data is dealt with through partnerships with Cloud providers. The goal is to offer those public entities and large organizations a data sovereignty seal of approval in a multi-cloud world.

Other announcements in the Cloud space included VMware Cloud on AWS Outpost and improvements to the disaster recovery as a service (DRaaS) offering.

Tanzu gets ever closer to maturity

VMware Tanzu, the company’s Kubernetes portfolio, has been steadily built upon ever since it was announced at VMworld 2019. The big reveal of this year’s event was Tanzu Community Edition, a free and open-source release of the solution aimed at learners and users.

Other Tanzu-related announcements included VMware Cloud with Tanzu Services, managed Tanzu Kubernetes Grid (TKG), Tanzu Mission Control Essentials and a free tier with Tanzu Mission Control Starter.

VMware Tanzu Community Edition is full-featured but free and open-source

Lots of projects in development

VMware always has a bunch of codenamed projects in the works that later become actual products when they reach maturity. Remember how Tanzu used to be known as Project Pacific? In 2021, the company revealed no less than 9 major projects in various areas such as Edge computing, AI/ML, Security, multi-cloud, tiered memory for vSphere, Kubernetes…

Again, you can find the details about these projects in our VMworld 2021 roundup.

Edge Computing

The other area that was largely covered was Edge computing with the announcement of VMware Edge Compute Stack, a purpose-built and integrated stack offering HCI and SDN for small-scale VM and container workloads to effectively extend your SDDC to the Edge.

VMware Edge Compute Stack helps solve use cases for a wide variety of challenges

While a lot of good things went their way, 2021 was an eventful year for VMware. Several big announcements were made that will change the face of the company, and the company’s TAMs had to navigate a few vSphere-related crises.

VMware and DellEMC Split

Probably the biggest announcement of the year was the split from DellEMC, which was the majority stakeholder with 81% of shares in the company. This separation comes 5 years after Dell acquired EMC in September of 2016 for a whopping $67 billion, EMC being VMware’s controlling stakeholder at the time. On November 1st 2021, VMware became a standalone company for the first time since EMC acquired it in 2004, albeit after paying $11.5 billion in dividends to the shareholders.

In a news article, VMware’s new CEO Raghu Raghuram (more on that later) officialized the split and kept emphasizing their multi-cloud strategy with the goal of becoming “the Switzerland of the cloud industry”:

“As a standalone company, we now have the flexibility to partner even more deeply with all cloud and on-premises infrastructure companies to create a better foundation that drives results for our customers. And the increased flexibility we will have to use equity to complete future acquisitions will help us remain competitive.”

VMware has a new CEO

A number of top officers over at VMware left the building and were replaced by new top profiles. Among those, we find the CEO of the company himself. Pat Gelsinger, who led VMware between 2012 and 2021, gave his notice in February to become Intel’s new CEO after spending 30 years (1979 to 2009) as a top profile for the blue team, a very impressive resume if you ask me.

VMware replaced him with Raghu Raghuram, the previous COO who’d been climbing up the corporate ladder since 2003, clocking over 18 years of employment to reach the top of the pyramid.

Raghu Raghuram succeeds Pat Gelsinger as VMware’s CEO

vSphere 7 Update 3 removed

On a more technical note, 2021 was a rather turbulent year for vSphere 7.0. The year started with many customers encountering purple screens on vSphere hypervisors installed on SD cards or USB sticks, which eventually led VMware to pull support for these boot devices. This wasn’t received particularly well among the customer base as many were taken by surprise and now have to plan for it, which will be a large piece of work and investment depending on the size of the environment.

Following this shaky start, customers started having problems with vSphere 7 Update 3 causing PSOD in some instances. In order to fix it, VMware released patches that ended up breaking vSphere HA for many customers using a certain type of Intel adapters. VMware eventually decided to stop the haemorrhage by removing vSphere 7 Update 3 from distribution altogether, just over a month after its release.

vSphere 7 Update 3 was plagued by issues since its initial release

Needless to say, customers were pretty unhappy with how this unfolded. Many blamed the 6-month release cycle and quality control being put to the side in favor of shiny new Cloud or Tanzu features. Let’s hope the split from DellEMC will entice VMware to regain a certain level of quality control and that organizations won’t put the deployment of security patches on hold as a result.

VMware Cloud Universal

As you can tell, VMware is very keen to push Cloud subscriptions to its customers, and VMware Cloud Universal, which was released in April 2021, was another testament to that: a subscription offering access to multi-cloud resources, be it infrastructure, compute, storage, networking, modern apps…

The idea is to be able to flexibly deploy VMware Cloud Infrastructure across private and public clouds. VMware Cloud Universal includes VCF-Subscription (also released in 2021), VMware Cloud on AWS and VMware Cloud on DellEMC.

Now, I’ll admit that it is getting a bit tricky to make sense of the many cloud offerings proposed by VMware with VMC, VMC on AWS, VMware Cloud Universal, VMware Cross-Cloud services and then the subtleties in each one of them.

VMware Cloud Universal allows customers to establish a flexible commercial agreement with VMware to commit once and consume dynamically

Ransomware Attacks Targeting vSphere ESXi

In 2021, we unfortunately witnessed no curb in the growing trend of vSphere ransomware attacks. While most encrypting ransomware attacks were historically focused on Windows and Linux instances, vSphere is now being targeted as well. Bad actors will try to gain access to the virtual infrastructure and initiate encryption of the datastores to claim a ransom, hence impacting every single VM in the environment.

Fortunately, most companies are now investing large amounts of resources to mitigate the risks and protect customers; Altaro, for instance, has been doing so for a long time now.

A Look Ahead to 2022

I wrapped up last year’s roundup with “Watch for 2021 as it is without a doubt that it will be a year packed with major events”. Well, I think it is safe to say that it turned out to be true. VMware’s split from DellEMC will give the company absolute autonomy over its market strategy and path to a multi-cloud world. 2022 will see a maturing of these core cloud technologies alongside VMware doubling down on its acquisition strategy of key technologies that will solidify its commitment to this direction.

While we are eager to find out what it brings in terms of novelties, we are equally looking forward to a return to a more sensible release cycle and the distribution of a stable version of the historic hypervisor (well, that’s my main hope at least!). I’d love to hear your thoughts, so feel free to place your bet in the comment section as to what 2022 will bring!

Source :
https://www.altaro.com/vmware/2021-vmware-developments/

Best Practices for setting up Altaro VM Backup

This best practice guide goes through the Altaro VM Backup features explaining their use and the optimal way to configure them in order to make the best use out of the software.

You will need to adapt this to your specific environment, especially depending on how many resources you have available; this guide also takes you through the most important configurations, which are often overlooked.

Setting up the Altaro VM Backup Management Console

The Altaro VM Backup Management Console can be utilised to add and manage multiple hosts in one console. However these hosts must be in the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of Altaro VM Backup at each site.

To manage these multiple installations, you can utilise the ‘Central Monitoring Console’ where you’ll be able to monitor as well as manage these Altaro VM Backup installations remotely.

A single Altaro VM Backup instance can manage both Hyper-V & VMware hosts.

For optimal results, Altaro runs some maintenance-specific tasks using (multiple) single-threaded operations. For this reason, installing on a machine whose CPU has higher single-thread performance will yield faster results than installing on a machine whose CPU has more cores but lower single-thread performance.

Backup Locations

Make sure Opportunity Locks (Oplocks) are disabled if the backup location is a NAS.

If your backup location is a Windows machine, the equivalent of disabling Oplocks is to disable SMB leasing by running the following command via PowerShell:

Set-SmbServerConfiguration -EnableLeasing 0

Offsite Copies

With Altaro VM Backup, you are provided with the functionality of an Offsite Copy Location, which is a redundant/secondary copy of your backups. You can even back up your VMs to two different offsite copy locations for further redundancy of your data, so you can pick a cloud location as well as an Altaro Offsite Server, for instance.

There are multiple options for setting this up:

  • You can choose a Physical Drive connected to the management console (the best practice for offsites is to have them located in another building/location).
  • Drive Rotation/Swap which allows you to set up a pool of drives/network paths.
  • A Network Path (LAN Only) or else to an offsite location via a WAN/VPN/Internet connection, which is an ideal tool for Disaster Recovery purposes. Please note that the latter situation (non-LAN) requires use of the Altaro Offsite Server.
  • Backup to Microsoft Azure, Amazon S3, or Wasabi.

Setting up an offsite copy location is as crucial as setting up backups to a primary location. Apart from the obvious benefit of having a redundant set of backups to restore from should the local backups become unusable due to disk corruption or other disk failures, a secondary copy of your backup sets also allows you to keep a broader history for your VM backups on your secondary location, so you’ll be able to go further back when restoring if required.

Deduplication

Altaro VM Backup makes use of Augmented In-line Deduplication. Enabling this is highly recommended and is done from the ‘Advanced Settings’ screen as this will essentially ensure that any common data blocks across virtual machines are only written to the backup location once. This helps by saving a considerable amount of space and also makes backups much quicker since common information is only transferred once.
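Conceptually, block-level deduplication can be sketched like this. The following is a toy Python illustration of the idea (not Altaro’s actual implementation): blocks are keyed by their hash, so a block shared across VMs is written to the store only once.

```python
import hashlib

def dedup_store(virtual_disks, block_size=4096):
    """Toy inline deduplication: each unique block is stored once,
    keyed by its hash; duplicates only add a reference."""
    store = {}          # hash -> block bytes (written once)
    references = []     # per-disk list of block hashes
    for disk in virtual_disks:
        refs = []
        for i in range(0, len(disk), block_size):
            block = disk[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            store.setdefault(h, block)   # write only if unseen
            refs.append(h)
        references.append(refs)
    return store, references

# Two "VMs" sharing a common OS block: the shared block is stored once.
common = b"A" * 4096
vm1 = common + b"B" * 4096
vm2 = common + b"C" * 4096
store, refs = dedup_store([vm1, vm2])
print(len(store))  # → 3 unique blocks stored instead of 4
```

Four logical blocks exist across the two disks, but only three are written, which is the space and transfer saving the article describes.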

Boot From Backup

The Boot From Backup drive feature comes along with 2 options, either ‘Verification Mode’ or ‘Recovery Mode’. This is a very good option for getting your RTO down since you’re able to boot up the VM immediately from a backup location and start a restore in the background as well.

However, if you are planning to do this, it’s very important to have a fast backup location that can handle the I/O of a booted VM that’s essentially going into production. Please note that when the VM has finished restoring, it’s suggested to restart the restored VM as soon as you get a chance in order to switch to the restored drives, which will have faster I/O throughput.

Notifications

E-mail notifications are a simple and effective method of monitoring the backup status, yet they’re often overlooked. Setting up these notifications will provide you with a quick overview of the status of your backup jobs, so you won’t need to log in to the Altaro Management console every day to confirm the backup status.

This way you’ll be alerted to any backup failures, allowing you to address them before the next scheduled backup and ensuring that you always have a restorable backup point. As a general best practice, always monitor your backup notifications.

Master Encryption Key

The Master Encryption Key in Altaro is utilised to encrypt the backups using AES 256-bit. It’s used if you choose to encrypt local backups from the ‘Advanced Settings’ screen, and it’s mandatory for offsite copies, as these must always be encrypted.

Altaro VM Backup will require the encryption key upon restoring, so it’s critical that you either remember it or take note of it in a secure password manager as there is no method of recovery for the master encryption key.

Scheduled Test Drills

Altaro VM Backup has the ability to run manual or automated verification of your backup data. This allows you to run scheduled verification jobs that will check the integrity of your backups on your backup location, or schedule full VM restores so that you can actually boot up the VM and confirm that everything works as expected. The VM will be restored with the NIC disabled so as to avoid IP conflicts with the production machine as well.

Failure of storage devices is not uncommon, therefore scheduling test drills is strongly advised for added peace-of-mind. Full instructions on configuring test drills.

Other General Best Practices

  • Backups and production VMs should not be placed on the same drive.
  • Make sure Opportunity Locks (Oplocks) are disabled if the backup location is a NAS.
  • Backups should not be placed on a drive where an OS is running.
  • Altaro uses the drive it’s installed on as temporary storage and will require a small amount of free space (varying according to the size of the VMs being backed up).
  • Keep at least 10% of the backup location free.
  • The main Altaro VM Backup installation should not be installed on a machine that is also a domain controller (DC).
  • Directories/files inside the Altaro backup folder should not be tampered with, deleted or moved.
  • Do not take snapshots of DFSR databases: “Snapshots aren’t supported by the DFSR database or any other Windows multi-master databases. This lack of snapshot support includes all virtualization vendors and products. DFSR doesn’t implement USN rollback quarantine protection like Active Directory Domain Services.” Source. 

Best Practices for Replication

Exclude Page File from Backup

As you’re aware, Altaro VM Backup takes note of all changes since the last backup and transfers the changed blocks to the backup location. The page file changes very often, potentially causing your replication jobs to take longer.

Therefore, excluding the page file from backup means fewer transferred changes and, as a result, shorter replication jobs. This can be done by placing the page file onto a separate VHDX/VMDK file from the VM itself; you can then follow the steps here to exclude that VHDX/VMDK file.
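To see why a churning page file inflates incremental transfers, here is a toy Python sketch of changed-block detection (block size, layout, and contents are made up for illustration):

```python
def changed_blocks(prev, curr, block_size=4096):
    """Count blocks that differ between two snapshots of a disk,
    the way an incremental backup decides what to transfer."""
    count = 0
    for i in range(0, len(curr), block_size):
        if prev[i:i + block_size] != curr[i:i + block_size]:
            count += 1
    return count

# Toy snapshots: 8 stable data blocks plus a 4-block page-file
# region that churns between every backup.
data = b"D" * (8 * 4096)
snap1 = data + b"P" * (4 * 4096)
snap2 = data + b"Q" * (4 * 4096)   # only the page-file region changed
print(changed_blocks(snap1, snap2))  # → 4 blocks to transfer
```

All four blocks that must be transferred belong to the page-file region, so moving that region to a separate, excluded VHDX/VMDK reduces the incremental to zero in this example.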

High Disk IO and Hypervisor Performance

Replication relies on CDP (Continuous Data Protection) to take a backup every couple of minutes or hours.

It’s important to note, however, that you should only enable high-frequency CDP (15 minutes or less) on the VMs that really need it. This ensures that those VMs can achieve the selected maximum frequency without impacting your hypervisor’s performance.

Source :
https://help.altaro.com/support/solutions/articles/43000467315-best-practices-for-setting-up-altaro-vm-backup

Altaro Dealing with “volsnap” errors in the System event log

The volsnap source errors are events that are listed in the Windows System event log. Such events usually contain relevant troubleshooting information as to why the shadow copy was dismounted, which in turn causes backups to fail.

You can refer to this article showcasing the error seen in Altaro.

Below you can find a couple of ‘volsnap’ events that we’ve encountered along with their solutions:


Error Message:

volsnap Event ID 25

The shadow copies of volume D: were deleted because the shadow copy storage could not grow in time. Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.

Solution:

This error is logged when the source drive experiences a high IO load, which causes the shadow copy to dismount and the backup to fail. In this case, you can re-schedule the backup job for a time when there is less IO on the source disk. Moving the shadow copies onto another drive entirely will also help alleviate the IO on the source disk.

You’ll need to ensure the target disk has enough space (10% of the original source volume) and that its performance is on par with the source disk.

You can run the following command on the host in order to place the shadow copy on another disk; drive letters need to be changed accordingly:

vssadmin add shadowstorage /For=D: /On=E: /MaxSize=UNBOUNDED
vssadmin resize shadowstorage /For=D: /On=E: /MaxSize=UNBOUNDED


Adjusting the page file to 1.5 times the amount of RAM can also help the situation. Note that if you set it to the maximum available, you will be required to restart the machine; smaller increases do not typically require a restart.


Error Message:

volsnap Event ID 16

The shadow copies of volume D: were aborted because volume D:, which contains shadow copy storage for this shadow copy, was force dismounted.

volsnap Event ID 14

The shadow copies of volume D: were aborted because of an IO failure on volume D:

Solution:

These two events are usually coupled together. This usually points to a disk issue on the drive being referenced and there should be ‘Disk’ or ‘Ntfs’ events at the same time that give more information on the issue.


Error Message:

volsnap Event ID 24

There was insufficient disk space on volume D: to grow the shadow copy storage for shadow copies of D:. As a result of this failure all shadow copies of volume D: are at risk of being deleted.

volsnap Event ID 35

The shadow copies of volume D: were aborted because the shadow copy storage failed to grow.

Solution:

These two events are usually coupled together. In this case it means that the shadow copy was dismounted due to insufficient disk space on the volume. Please ensure that you have at least 10% free space on the source drives and then run the backup again.
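The 10% free-space rule is easy to verify programmatically. A minimal Python sketch (the path below is an example; point it at the volume holding your shadow copies):

```python
import shutil

def has_min_free_space(total_bytes, free_bytes, threshold=0.10):
    """True when at least `threshold` of the volume is still free."""
    return free_bytes >= total_bytes * threshold

# Check an actual volume (example path; use your source drive):
usage = shutil.disk_usage("/")
print(has_min_free_space(usage.total, usage.free))
```

Running such a check on a schedule, before backup jobs start, catches the insufficient-space condition before volsnap aborts the shadow copy.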


Error Message:

volsnap Event ID 36

The shadow copies of volume D: were aborted because the shadow copy storage could not grow due to a user imposed limit.

Solution:

This particular volsnap error means that the currently imposed limit is preventing the shadow copy storage from growing any larger, causing the shadow copy to dismount and the backup to fail. To resolve it, run the following commands to expand the shadow storage; drive letters need to be changed accordingly:

vssadmin add shadowstorage /For=D: /On=D: /MaxSize=UNBOUNDED
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=UNBOUNDED



Also note that if you’re using a CSV (Cluster Shared Volume), the event log will show an empty space instead of a drive letter.

Source :
https://help.altaro.com/support/solutions/articles/43000494972

What are the system requirements for Altaro VM Backup?

Updated: November 2023

Version 9

The VM Backup Management Console can be utilized to add and manage multiple hosts in one console. However these hosts must be in the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of VM Backup at each site.


Supported Hypervisors (Hosts)

Microsoft Hyper-V

  • Windows Server 2008 R2 SP1 (Only with .NET Framework 4.8 or higher)
  • Windows Hyper-V Server 2008 R2 SP1 (core installation) (Only with .NET Framework 4.8 or higher)
  • Windows Server 2012
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016 (core installation)
  • Windows Server 2019
  • Windows Hyper-V Server 2019 (core installation)
  • Windows Server 2022
  • Windows Hyper-V Server 2022 (core installation)
  • Azure Stack HCI

VMware

  • vSphere: 5.5 / 6.0 / 6.5 / 6.7 / 7.0 / 8.0
  • vCenter: 5.5 / 6.0 / 6.5 / 6.7 / 7.0 / 8.0
  • ESXi: 5.5 / 6.0 / 6.5 / 6.7 / 7.0 / 8.0

Note: vSphere/vCenter/ESXi 5.0/5.1 are no longer supported in V9

It’s important to check the supported version combinations between ESXi and vCenter.

Note that the Free version of VMware ESXi is not supported as it lacks components required by VM Backup.

When making use of the NBD Transport mode, virtual disks cannot be larger than 1TB each. More information here.

Pass-through or RDM (Raw Device Mappings) are not backed up.

Supported Operating Systems

The VM Backup products can be installed on the following OS’s:

VM Backup

  • Windows Server 2008 R2 SP1 (Only with .NET Framework 4.8 or higher)
  • Windows Hyper-V Server 2008 R2 SP1 (core installation) (Only with .NET Framework 4.8 or higher)
  • Windows Server 2012
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016
  • Windows Server 2019
  • Windows Hyper-V Server 2019
  • Windows Hyper-V Server 2019 (core installation)
  • Windows Server 2022
  • Azure Stack HCI

    Note that hosts must be in the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of VM Backup at each site.

Management Tools (UI)

  • Windows 2008 R2 SP1 (Only with .NET Framework 4.8 or higher)
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016 (desktop experience)
  • Windows Server 2019
  • Windows Server 2022
  • Azure Stack HCI
  • Windows 7 (64-Bit)
  • Windows 8 (64-Bit)
  • Windows 10 (64-Bit)
  • Windows 11 (64-Bit)

Offsite Backup Server

  • Windows 2008 R2 SP1 (Only with .NET Framework 4.8 or higher)
  • Windows Hyper-V Server 2008 R2 SP1 (core installation) (Only with .NET Framework 4.8 or higher)
  • Windows Server 2012 
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016
  • Windows Server 2019
  • Windows Hyper-V Server 2019
  • Windows Server 2022
  • Azure Stack HCI

 

Replication Support

[Hyper-V] A Windows Server OS is required for replication. The Offsite Backup Server must be installed on a Windows Server OS that matches the source host OS where the production VMs are running. Below is a list of supported OS’s that you can replicate to:

Host OS              Supported Replication Offsite Backup Server OS
2012                 to 2012
2012 R2              to 2012 R2
2016                 to 2016
2019                 to 2019
2022                 to 2022
Azure Stack HCI      to Azure Stack HCI

[VMware] The host added to the Offsite Backup Server must run the same OS version as the source host being replicated from. Below is a list of supported OS’s that you can replicate to:

Host OS              Supported Replication Host OS
5.5                  to 5.5
6.0                  to 6.0
6.5                  to 6.5
6.7                  to 6.7
7.0                  to 7.0
8.0                  to 8.0

Required Hardware Specifications

VM Backup

  • Minimum of i5 (or equivalent – minimum 4 cores recommended) processor
  • 2 GB RAM + an additional 25MB RAM for every 100GB of data being backed up
  • 1 GB Hard Disk Space (for VM Backup Program and Settings) + 15 GB (for temporary files created during backup operations)
  • Minimum of 10% free disk space on each volume holding live VM data to be used for Microsoft Volume Shadow Copy
  • Minimum of 10% free disk space on each backup location holding backup data to ensure smooth operation
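The sizing rules above (2 GB of base RAM plus 25 MB per 100 GB of data backed up, and at least 10% free space per volume) can be sketched as a quick estimation helper. This is an illustrative sketch only; the function names are not part of the product:

```python
def vmbackup_ram_mb(backup_data_gb: float, base_mb: int = 2048) -> float:
    """Estimated RAM for VM Backup: 2 GB base + 25 MB per 100 GB of backup data."""
    return base_mb + 25 * (backup_data_gb / 100)

def meets_10pct_free(total_bytes: int, free_bytes: int) -> bool:
    """Check the 10% free-space rule (integer math avoids float rounding issues)."""
    return free_bytes * 10 >= total_bytes

# Example: backing up 4 TB (4096 GB) of VM data
print(vmbackup_ram_mb(4096))  # → 3072.0 (MB), i.e. about 3 GB
```

On a live system, `shutil.disk_usage(path)` returns the total and free byte counts that can be fed into `meets_10pct_free` for each volume holding VM data or backups.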

Hyper-V Host Agent

  • 1 GB RAM
  • 2 GB Hard Disk Space

Offsite Backup Server

  • Minimum of i5 (or equivalent – minimum 4 cores recommended) processor
  • 2 GB RAM + an additional 25MB RAM for every 100GB of data being backed up
  • For Replication, ensure that it has enough resources to boot your VMs

Software Prerequisites

  • MS .NET Framework 4.8
  • Minimum screen resolution for the Management console: 1280×800
  • One of the listed supported Operating Systems must be used (Windows client operating systems are not supported unless specified)
  • The main VM Backup installation cannot be installed on a machine that is also a domain controller (DC)

Communication Ports

Below is a list of the default TCP ports used by our software and their purpose. All these ports must be allowed.

36014 : Communication between Management Console UI and VM Backup

36015 : Communication from VM Backup to API Service

36021 & 36022 : Communication between the Host Agents and VM Backup

36023 : Communication from VM Backup to Host Agents

36070 : Communication for the Deduplication Service

36000 & 36001 : Communication between v9 Clients and the Offsite Backup Server

36050 : Communication from Offsite Backup Server UI to Offsite Backup Server

36100 : Communication from VM Backup to the Offsite Backup Server for Replication

36075: Communication for the Deduplication Service for the Offsite Backup Server

36200 – 36220 : Communication from VM Backup to Agents for Boot From Backup

80 & 443 : For Offsite copies to Azure Storage Accounts, Amazon S3 & Wasabi

443 & 7444 & 902 : Communication to VMware Hosts
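As a quick way to verify that the ports above are reachable from a given machine, a plain TCP connect test can be used. This is a hedged sketch: the port list is copied from this section, but the helper function and the example host name are hypothetical:

```python
import socket

# Default V9 TCP ports listed above (excluding the cloud/VMware HTTPS ports)
V9_PORTS = [36014, 36015, 36021, 36022, 36023, 36070,
            36000, 36001, 36050, 36100, 36075, *range(36200, 36221)]

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: list ports a VM Backup server does not appear to be listening on
# unreachable = [p for p in V9_PORTS if not port_open("backup-server", p)]
```

Note that a successful connect only proves something is listening on that port; which service owns it still needs to be confirmed on the server itself.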

Supported Backup Locations

  • USB External Drives
  • eSata External Drives
  • USB Flash Drives
  • Fileserver Network Shares using UNC Paths
  • NAS devices (Network Attached Storage) using UNC Paths
  • PC Internal Hard Drives (recommended only for evaluation purposes)
  • RDX Cartridges
  • Offsite Backup Server (incl. Replication)
  • Azure Cloud Storage Account
  • Amazon S3
  • Wasabi Cloud Storage Account

Note: The backup locations must be in the same LAN and at the same physical site (same building) as the VM management machine and the hosts, with the exception of the Offsite Backup Server/Cloud locations.

Note: Target storage partitions must be either of the below:

  • NTFS/ReFS formatted
  • Network Paths and accessible by SMB3.0

Note: Please ensure that the backup location chosen does not perform any sort of deduplication outside that of VM Backup.

Note: SMB file shares in Cloud locations (such as Azure Files) are not supported as a backup location.

Boot from Backup Requirements

  • For Hyper-V, host OS’s from Windows Server 2012 onward are supported for Boot from Backup Drive. The Microsoft iSCSI Initiator Service must be running on the machine you’re attempting to boot to.
  • VMware requires ports 36200 – 36220 to be open on the firewall, and it also requires an iSCSI Storage Adapter. More information on that here.
  • The datastore chosen must be VMFS.
  • VMs with Storage Spaces volumes are not supported.

File Level/Exchange Item Level Restore Requirements

  • The partition must be formatted NTFS, or ReFS (through Instant Mount – only for File Level Restores)
  • The partition must be formatted as ‘Basic’ and not ‘Dynamic’
  • If the VM has Windows Data Deduplication role enabled, the role must also be enabled where the VM Backup machine is installed (through Instant Mount)
  • The files must NOT be encrypted or compressed at guest OS (VM) level
  • Exchange Item Level Restores are only supported from NTFS formatted partitions
  • Storage Spaces volumes are not supported for file level recovery
  • The following Microsoft Exchange versions are supported:
    • 2007 (up to SP3)
    • 2010 (up to SP3)
    • 2013 (from RTM up to CU21, with the exception of CU 2, 3 and 4)
    • 2016 (up to CU22)
    • 2019 (up to CU11)

Hyper-V Restore Version Compatibility

Virtual machines backed up from Windows Server 2008 R2 SP1 and 2012 hosts have to be restored to hosts running Windows Server 2016 (version 1607) or older.

Virtual machines backed up from Windows Server 2012 R2 and newer can be restored to hosts running up to Windows Server 2019, while VMs backed up from WS2016 and newer can be restored to hosts running up to WS2022.

Naturally, you can restore to a newer operating system, but not to an older one, i.e. you will be able to restore a VM backed up from a 2008 R2 SP1 host to a 2012 host, but not the other way round.

Please note that this also applies when restoring a single virtual hard disk.
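The compatibility rules above reduce to a small lookup: a restore target may be newer than the source host but never older, and each source version has a newest supported target. A hypothetical sketch (version labels are illustrative only):

```python
# Oldest-to-newest Hyper-V host version order, per the section above
ORDER = ["2008R2SP1", "2012", "2012R2", "2016", "2019", "2022"]

# Newest host version each source's backups can be restored to
MAX_TARGET = {
    "2008R2SP1": "2016",  # up to WS2016 version 1607
    "2012": "2016",       # up to WS2016 version 1607
    "2012R2": "2019",
    "2016": "2022",
    "2019": "2022",
    "2022": "2022",
}

def can_restore(source: str, target: str) -> bool:
    """True if a VM (or single virtual disk) backed up on `source`
    can be restored to a host running `target`."""
    s, t = ORDER.index(source), ORDER.index(target)
    return s <= t and t <= ORDER.index(MAX_TARGET[source])

print(can_restore("2008R2SP1", "2012"))  # → True (newer target is fine)
print(can_restore("2012", "2008R2SP1"))  # → False (never to an older host)
```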

Version 8

The VM Backup Management Console can be utilized to add and manage multiple hosts in one console. However these hosts must be in the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of VM Backup at each site.


Supported Hypervisors (Hosts)

Microsoft Hyper-V

  • Windows Server 2008 R2 SP1
  • Windows Hyper-V Server 2008 R2 SP1 (core installation)
  • Windows Server 2012
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016 (core installation)
  • Windows Server 2019
  • Windows Hyper-V Server 2019 (core installation)
  • Windows Server 2022
  • Windows Hyper-V Server 2022 (core installation)
  • Azure Stack HCI

VMware

  • vSphere: 5.0 / 5.1 / 5.5 / 6.0 / 6.5 / 6.7 / 7.0
  • vCenter: 5.0 / 5.1 / 5.5 / 6.0 / 6.5 / 6.7 / 7.0
  • ESXi: 5.0 / 5.1 / 5.5 / 6.0 / 6.5 / 6.7 / 7.0

It’s important to ensure that the ESXi and vCenter versions in use are a supported combination.

Note that the Free version of VMware ESXi is not supported as it lacks components required by VM Backup.

When making use of the NBD Transport mode, virtual disks cannot be larger than 1TB each. More information here.

Pass-through or RDM (Raw Device Mappings) are not backed up.

Backing up VMs that have fault tolerance (FT) enabled is not supported: when FT is enabled, snapshots cannot be taken of those virtual machines, and taking a snapshot is a prerequisite for AVMB to back up a VM.

Restoring to a vVol (VMware Virtual Volume) Datastore is not supported.

Supported Operating Systems

The VM Backup products can be installed on the following OS’s:

VM Backup

  • Windows Server 2008 R2 SP1
  • Windows Hyper-V Server 2008 R2 SP1 (core installation)
  • Windows Server 2012
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016
  • Windows Server 2019
  • Windows Hyper-V Server 2019
  • Windows Hyper-V Server 2019 (core installation)
  • Windows Server 2022
  • Windows Hyper-V Server 2022
  • Windows Hyper-V Server 2022 (core installation)
  • Azure Stack HCI

    Note that hosts must be in the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of VM Backup at each site.

Management Tools (UI)

  • Windows 2008 R2 SP1
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016 (desktop experience)
  • Windows Server 2019
  • Windows Server 2022
  • Azure Stack HCI
  • Windows 7 (64-Bit)
  • Windows 8 (64-Bit)
  • Windows 10 (64-Bit)

Offsite Backup Server

  • Windows 2008 R2 SP1
  • Windows Hyper-V Server 2008 R2 SP1 (core installation)
  • Windows Server 2012 
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2
  • Windows Hyper-V Server 2012 R2 (core installation)
  • Windows Server 2016
  • Windows Server 2016 (desktop experience)
  • Windows Hyper-V Server 2016
  • Windows Server 2019
  • Windows Hyper-V Server 2019
  • Windows Server 2022
  • Windows Hyper-V Server 2022
  • Azure Stack HCI

 

Replication Support

[Hyper-V] A Windows Server OS is required for replication. The Offsite Backup Server must be installed on a Windows Server OS that matches the source host OS where the production VMs are running. Below is a list of supported OS’s that you can replicate to:

Host OS              Supported Replication Offsite Backup Server OS
2012                 to 2012
2012 R2              to 2012 R2
2016                 to 2016
2019                 to 2019
2022                 to 2022
Azure Stack HCI      to Azure Stack HCI

[VMware] The host added to the Offsite Backup Server must run the same OS version as the source host being replicated from. Below is a list of supported OS’s that you can replicate to:

Host OS              Supported Replication Host OS
5.5                  to 5.5
6.0                  to 6.0
6.5                  to 6.5
6.7                  to 6.7
7.0                  to 7.0

Required Hardware Specifications

VM Backup

  • Minimum of i5 (or equivalent – minimum 4 cores recommended) processor
  • 1 GB RAM + an additional 25MB RAM for every 100GB of data being backed up
  • 1 GB Hard Disk Space (for VM Backup Program and Settings) + 15 GB (for temporary files created during backup operations)
  • Minimum of 10% free disk space on each volume holding live VM data to be used for Microsoft Volume Shadow Copy

Hyper-V Host Agent

  • 500 MB RAM 

Offsite Backup Server

  • Minimum of i5 (or equivalent – minimum 4 cores recommended) processor
  • 1 GB RAM + an additional 25MB RAM for every 100GB of data being backed up
  • For Replication, ensure that it has enough resources to boot your VMs

Software Prerequisites

  • MS .NET Framework 4.7.2 
  • Minimum screen resolution for the Management console: 1280×800
  • One of the listed supported Operating Systems must be used (Windows client operating systems are not supported unless specified)

Communication Ports

Below is a list of the default TCP ports used by our software and their purpose. All these ports must be allowed.

35106 : Communication for VMware 6.5 backup and restore operations

35107 : Communication between Management Console UI and VM Backup

35108 : Communication from VM Backup to Hyper-V Host Agents

35113 : Communication from VM Backup to API Service

35114 : Communication for the Deduplication Service 

35116 & 35117 : Communication between v8 Clients and the Offsite Backup Server

35119 : Communication from Offsite Backup Server V8 UI to Offsite Backup Server

35120 : Communication from VM Backup to the Offsite Backup Server for Replication

35121 : Communication for the Deduplication Service for Amazon S3/Wasabi offsite locations

35221 : Communication between the Hyper-V Host Agents and VM Backup

35200 – 35220 : Communication from VM Backup to Agents for VMware Boot From Backup

80 & 443 : For Offsite copies to Azure Storage Accounts, Amazon S3 & Wasabi

443 & 7444 & 902 : Communication to VMware Hosts

Supported Backup Locations

  • USB External Drives
  • eSata External Drives
  • USB Flash Drives
  • Fileserver Network Shares using UNC Paths
  • NAS devices (Network Attached Storage) using UNC Paths
  • PC Internal Hard Drives (recommended only for evaluation purposes)
  • RDX Cartridges
  • Offsite Backup Server (incl. Replication)
  • Azure Cloud Storage Account
  • Amazon S3
  • Wasabi Cloud Storage Account

Note: The backup locations must be in the same LAN and at the same physical site (same building) as the VM management machine and the hosts, with the exception of the Offsite Backup Server/Cloud locations.

Note: Target storage partitions must be either of the below:

  • NTFS/ReFS formatted
  • Network Paths and accessible by SMB3.0

Note: Please ensure that the backup location chosen does not perform any sort of deduplication outside that of VM Backup.

Note: SMB file shares in Cloud locations (such as Azure Files) are not supported as a backup location.

Boot from Backup Requirements

  • For Hyper-V, host OS’s from Windows Server 2012 onward are supported for Boot from Backup Drive. The Microsoft iSCSI Initiator Service must be running on the machine you’re attempting to boot to.
  • VMware requires ports 35200 – 35220 to be open on the firewall, and it also requires an iSCSI Storage Adapter. More information on that here.
  • The datastore chosen must be VMFS.
  • VMs with Storage Spaces volumes are not supported.

File Level/Exchange Item Level Restore Requirements

Hyper-V Restore Version Compatibility

Virtual machines backed up from Windows Server 2008 R2 SP1 and 2012 hosts have to be restored to hosts running Windows Server 2016 (version 1607) or older.

Virtual machines backed up from Windows Server 2012 R2 and newer can be restored to hosts running up to Windows Server 2019, while VMs backed up from WS2016 and newer can be restored to hosts running up to WS2022.

Naturally, you can restore to a newer operating system, but not to an older one, i.e. you will be able to restore a VM backed up from a 2008 R2 SP1 host to a 2012 host, but not the other way round.

Please note that this also applies when restoring a single virtual hard disk.


Source :
https://support.hornetsecurity.com/hc/en-us/articles/19687996547601
