SaaS vs PaaS vs IaaS: What’s the Difference & How to Choose

Companies are increasingly using Cloud services to support their business processes. But which types of Cloud services are there, and what are the differences? Which kind of Cloud service is most suitable for you? Do you want to be completely unburdened, or fully in control? Do you opt for maximum cost savings, or do you want the entire arsenal of possibilities and top performance? Can you still see the forest for the trees? In this article and the next, I describe the different Cloud services, their features and differences, and what exactly you need to pay attention to.

Let’s start with the definition of Cloud computing: the provision of services over the internet (the Cloud). Think of storage, software, servers, databases, etc. Depending on the type of service offered (for example, license management or data storage), you can divide these services into categories such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). These services are provided by a cloud provider. Whether this is Microsoft (Azure), Amazon (AWS), or another vendor (Google, Alibaba, Oracle, etc.), each one offers Cloud services that fall under one of the categories we are about to discuss.

One feature of Cloud computing is that you pay according to your usage and the service you purchase. For SaaS, for example, you pay for the software’s license and support. This also means that if you buy a SaaS service (e.g., Office 365) and don’t use it, you will still be charged. With IaaS storage, on the other hand, you only pay for the amount of storage you actually use, possibly supplemented with additional services such as backup.

Sometimes Cloud services complement each other; think, for example, of DBaaS (Database as a Service), where a database is offered via the Cloud. You often need an application server and other infrastructure to read data from this database, and these usually run in a Landing Zone purchased as an IaaS service. Other services, such as SaaS (Office 365), can also stand entirely on their own.

Each Cloud service has specific characteristics. Sometimes it requires little or no (technical) knowledge, but it can also be challenging to manage and use the services according to best practices. This often depends on how much control you want to keep yourself. If you want an application from the Cloud where you are completely relieved of all worries, little technical knowledge is required from the user or the administrator. But if you want maximum control, IaaS gives you an enormous range of possibilities. In this article, you can read what you need to consider.

It is advisable to think beforehand about precisely what your requirements and wishes are and whether they fit the service you want to purchase. If you wish to use an application in the Cloud but rely on many custom settings, this is often not possible. If you don’t want to be responsible for updating and backing up an application and use little or no customization, SaaS can be very interesting. Also, look at how a service fits into your business process. Does it offer possibilities for automation, reporting, or disaster recovery? Can you temporarily allocate extra resources to handle peak demand (horizontal or vertical scaling), and what guarantees does the supplier offer with this service? Think of RPO/RTO and the accessibility of the service desk in case of a calamity.

Let’s get started quickly!

IaaS (Infrastructure as a Service)

One of the best-known Cloud services is undoubtedly IaaS. For many companies, this is often their first introduction to a Cloud service. You rent the infrastructure from a cloud provider: the network infrastructure, virtual servers (including the operating system), storage, and so on. A feature of IaaS is that you have complete control, both on the management side and in how you deploy resources. This can be done in various automated ways (PowerShell, IaC, DevOps pipelines, etc.) as well as via the classic management interface that all providers offer. Things that are often not possible with a PaaS service are possible with an IaaS service. In principle, you can set up a complete server environment (all services are available for this), while still getting the benefits of the Cloud, such as scalability and pay per use or per resource.

IaaS therefore most resembles an on-premises implementation, and you often see it used in combination with virtual servers. Critical here is a good investigation into possible limitations, for example I/O, because performance in practice can differ from a traditional local environment. You are responsible for arranging security and backup. The advantage is that you influence the choice of technology used: you can customize the setup to your needs and wishes and standardize the configuration for your organization. Deployment can be complex, and you are forced to make your own choices, so some expertise is needed.
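To give you an idea of the automated deployment mentioned above, here is a minimal sketch of provisioning an IaaS virtual machine with the Azure PowerShell (Az) module. The resource group, VM name, region, image, and size are placeholder values, not a recommended configuration; adjust them to your own standards.

Connect-AzAccount

# A resource group is the container for the rented infrastructure.
New-AzResourceGroup -Name 'rg-iaas-demo' -Location 'westeurope'

# Create a small Windows Server VM; the network, disk, and public IP are created with defaults.
New-AzVM -ResourceGroupName 'rg-iaas-demo' `
         -Name 'vm-demo-01' `
         -Location 'westeurope' `
         -Image 'Win2019Datacenter' `
         -Size 'Standard_B2s' `
         -Credential (Get-Credential)

Because you pay per resource, stopping (deallocating) the VM stops the compute charges, while the disks continue to be billed until you remove them.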

PaaS (Platform as a Service)

PaaS stands for Platform as a Service and goes a step further than IaaS. You get a platform on which you can do the configuration yourself. When you use a PaaS service, the vendor takes care of the underlying layer (IaaS) as well as the operating system and middleware, so you sacrifice some control and capabilities. PaaS services are ideal for developers and web and application builders, because you can make an environment available quickly. Using it means you no longer have to worry about the infrastructure, operating system, and middleware; the supplier takes care of these based on best practices. This also offers security advantages, as patching and upgrading these components is now done by the vendor.

Another advantage is that you can focus entirely on what you want to do rather than on managing the environment. You can also easily purchase additional services and quickly scale them up or down. When you are finished, you can stop and remove the resources, so you incur no further costs.
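As a rough sketch of how quickly such an environment can be provisioned and removed again, here is what this could look like for a web application on Azure App Service with the Az PowerShell module. The names and the Basic tier are placeholders, not a recommendation.

# The App Service plan defines the compute tier; the platform (OS, web server, patching) is managed by the provider.
New-AzResourceGroup -Name 'rg-paas-demo' -Location 'westeurope'
New-AzAppServicePlan -ResourceGroupName 'rg-paas-demo' -Name 'plan-demo' -Location 'westeurope' -Tier 'Basic'

# The web app itself: you only bring your code and configuration. The name must be globally unique.
New-AzWebApp -ResourceGroupName 'rg-paas-demo' -Name 'webapp-demo-12345' -Location 'westeurope' -AppServicePlan 'plan-demo'

# When you are finished, remove everything so no further costs are incurred.
Remove-AzResourceGroup -Name 'rg-paas-demo' -Force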

However, do take into account the use of existing software. Not all existing software is suitable for a PaaS environment; for example, you do not have full access (after all, the vendor is responsible). Also, not all CPU power and memory are dedicated to your Cloud application, because it is often hosted on a shared platform where other applications (and databases) may use the same resources. As for the database, you have the same advantages and disadvantages as with DBaaS.

SaaS (Software as a Service)

This is probably a service you’ve been using for a while. In short, you consume applications through the Cloud on a subscription basis. The provider is responsible for managing the infrastructure, patches, and updates. A SaaS solution is ready for use immediately, and you directly benefit from the added value, such as fast scaling up and down and paying per use. Examples are Office 365, SharePoint Online, Salesforce, Exact Online, Dropbox, etc.

Unlike IaaS and PaaS, where there is still a lot of freedom and you have to set everything up yourself, with SaaS it is immediately clear what you are buying and what you will get. With this service, you are relieved of most of your worries: the vendor is responsible for all updates, patches, development, and more. You cannot make any updates or changes to the software yourself.

Many companies use one or more SaaS services, and often there is even a distinction within a company: each department has its own specific applications and associated SaaS services. With this service, you only pay for what you need, including the licenses. These licenses can easily be scaled up or down.

Working with SaaS solutions is interesting for many companies. It is particularly attractive for start-ups, small companies, and freelancers, because you only purchase what you use, you avoid unnecessarily high start-up costs, and you don’t have to worry about maintaining the software.

But SaaS can also be a perfect solution for larger companies. For example, if you hire extra staff for specific periods, you can quickly get these people working with the software they need: you buy a number of additional licenses and stop them again when the temporary staff leaves.
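To sketch what that looks like in practice for Microsoft 365, licenses can be assigned and removed with the Microsoft Graph PowerShell module. The SKU and user below are fictitious examples and your tenant’s SKUs will differ; treat this as an illustration rather than a ready-made script.

Connect-MgGraph -Scopes 'User.ReadWrite.All', 'Organization.Read.All'

# Look up the license SKU you purchased (the SkuPartNumber below is just an example value).
$sku = Get-MgSubscribedSku | Where-Object SkuPartNumber -eq 'O365_BUSINESS_PREMIUM'

# Assign the license to the temporary employee...
Set-MgUserLicense -UserId 'temp.worker@contoso.com' -AddLicenses @(@{ SkuId = $sku.SkuId }) -RemoveLicenses @()

# ...and take it back when the contract ends.
Set-MgUserLicense -UserId 'temp.worker@contoso.com' -AddLicenses @() -RemoveLicenses @($sku.SkuId)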

How can Vembu help you?

BDRSuite is a comprehensive Backup & DR solution designed to protect your business-critical data across Virtual (VMware, Hyper-V), Physical Servers (Windows, Linux), SaaS (Microsoft 365, Google Workspace), AWS EC2 Instances, Endpoints (Windows, Mac), and Applications & Databases (MS Active Directory, MS Exchange, MS Outlook, SharePoint, MS SQL, MySQL).

To protect your workloads running on SaaS (Microsoft 365, Google Workspace), try out a full-featured 30-day free trial of the latest version of BDRSuite.

Source :
https://www.vembu.com/blog/saas-vs-paas-vs-iaas-whats-the-difference-how-to-choose/

Teaming in Azure Stack HCI

As we’re deprecating the vSwitch attached to an LBFO team, this article introduces a new tool for converting your LBFO team to a SET team. To download this tool, run the following command or see the end of this article.

Install-Module Convert-LBFO2SET

Windows Server currently has two inbox teaming mechanisms with two very different purposes. In this article, we’ll describe several reasons why you should use Switch Embedded Teaming (SET) for Azure Stack HCI scenarios and we’ll discuss several long-held teaming myths – We’d love to hear your feedback in the comments below. Let’s get started!

In Windows Server 2012 we released LBFO as an inbox teaming mechanism, and many customers have leveraged this technology to provide load balancing and failover between network adapters. Since then, the rise of software-defined storage and software-defined networking has brought performance and compatibility challenges with the LBFO architecture to the forefront (outlined in this article), which required a change in direction.

This new direction is called Switch Embedded Teaming (SET) and was introduced in Windows Server 2016. SET is available when Hyper-V is installed on any Server OS (Windows Server 2016 and higher) and Windows 10 version 1809 (and higher). You’re not required to run virtual machines to use SET, but the Hyper-V role must be installed.

In summary, LBFO is our older teaming technology that will not see future investment, is not compatible with numerous advanced capabilities, and has been exceeded in both performance and stability by our new technology (SET). We’d like to discuss why you should move off LBFO for virtualized and cloud scenarios. Let’s dig into this paragraph a bit.

LBFO is our older teaming technology that will not see future investment

[Image: the Hyper-V vSwitch on an LBFO team, with the components below the vSwitch that are part of NDIS]

With the intent to bring software-defined technologies like SDNv2 and containers to Windows Server, it became clear that we needed an alternative teaming solution and so we set off creating SET, circa 2014. Simultaneously reaching feature parity and stability with LBFO took time; several early adopters of SET will remember some of these pains. However, SET’s stability, performance, and features have now far surpassed LBFO.

All new features released since Windows Server 2016 (see below) were developed and tested with SET in mind – this includes all Azure Stack HCI solutions you may have purchased; Azure Stack HCI is not tested or certified with LBFO. This is largely due to development simplicity and testing; without driving too far into unimportant details, LBFO teams adapters inside NDIS, which is a large and complex component whose roots date back to Windows 95 (of course updated considerably since then). If your system has a NIC, you’re using NDIS. In the picture shown above, each component below the vSwitch was part of NDIS.

The size and complexity of scenarios included in NDIS made for very complex testing requirements that were only compounded by virtualized and software-defined technologies that considerably hampered feature innovation. You might think that this is just a Microsoft problem, but really this affects NIC vendor driver development time and stability as well.

All-in-all, we’re not focusing on LBFO much these days, particularly as software-defined Windows Server networking scenarios become more exotic with the rise of containers, software-defined networking, and much more. There’s a faster, more stable, and more performant teaming solution called Switch Embedded Teaming.

LBFO is not compatible with several advanced capabilities

Here’s a smattering of scenarios and features that are supported with SET but NOT LBFO:

Windows Admin Center – WAC is the de facto management tool for Windows Server and Azure Stack HCI, with millions of nodes under management. You can create and manage a SET team for a single host, or deploy a SET team to multiple hosts with the new Cluster Creation UI we released at Microsoft Ignite this year to help you deploy Azure Stack HCI solutions (watch the session, try it out, and give us feedback).

LBFO is not available for configuration in Windows Admin Center.

RDMA Teaming – Only SET can team RDMA adapters. RDMA is used, for example, with Storage Spaces Direct (S2D), which requires a reliable, high-bandwidth, low-latency network connection between each node. High bandwidth? Low latency? That’s RDMA’s bag, so it is the recommended pattern with S2D. Reliability? That’s SET’s claim to fame, so these two are a logical pairing.

Guest RDMA: SET supports RDMA into a virtual machine. This doesn’t work with LBFO for two reasons:

  • RDMA adapters cannot be teamed with LBFO – this applies to both host adapters and virtual adapters.
  • RDMA uses SMB Multichannel, which requires multiple adapters to balance traffic across. Since you can’t set vNIC-to-pNIC affinity with LBFO, neither the SMB nor the non-SMB traffic can be made highly available.

Guest Teaming is a strange one; you could add multiple virtual NICs to a Hyper-V VM and, inside the VM, use LBFO to team the virtual NICs. However, you cannot affinitize a virtual NIC (vNIC) to a physical NIC (pNIC), so it’s possible that both vNICs added to the VM are sending and receiving traffic through the same pNIC. If that pNIC fails, you lose both of your virtual NICs.

SET allows you to map each vNIC to a pNIC to ensure that they don’t overlap, ensuring that a Guest Team is able to survive an adapter outage.
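A minimal sketch of that mapping with Set-VMNetworkAdapterTeamMapping is shown below; the switch, adapter, and VM names are placeholders, and the guest example assumes the VM’s network adapters have been given distinct names.

# Host (ManagementOS) vNICs - for example SMB vNICs used by Storage Spaces Direct:
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName 'SMB01' -PhysicalNetAdapterName 'NIC 1'
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName 'SMB02' -PhysicalNetAdapterName 'NIC 2'

# Guest vNICs - each of the VM's two vNICs is pinned to its own physical team member:
Set-VMNetworkAdapterTeamMapping -VMName 'VM01' -VMNetworkAdapterName 'NIC1' -PhysicalNetAdapterName 'NIC 1'
Set-VMNetworkAdapterTeamMapping -VMName 'VM01' -VMNetworkAdapterName 'NIC2' -PhysicalNetAdapterName 'NIC 2'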

Microsoft Software Defined Networking (SDN) – SDN was first released in its modern form in Windows Server 2016 and requires a virtual switch extension called the Virtual Filtering Platform (VFP). VFP is the brains behind SDN and the same extension that runs our public cloud, Azure. VFP can only be added to a SET team.

This means that any of the SDN features (which are included with a Datacenter Edition license) like the Software Load Balancer, Gateways, Distributed Firewall (ACLs), and our modern network QoS capability are also unavailable if you’re using LBFO.

Container Networking – Containers rely on a service called the Host Network Service (HNS). HNS also leverages VFP, and as mentioned in the SDN section, VFP can only be added to a Switch Embedded Team (SET). For more information on Container Networking, please see this link.

Virtual Machine Multi-Queues – VMMQ is a critical performance feature for Azure Stack HCI. VMMQ allows you to assign multiple VMQs to the same virtual NIC. Without it, you rely on expensive software spreading operations (the OS spreads packets across multiple CPUs without hardware (NIC) assistance), which greatly increases CPU utilization on the host and reduces the number of virtual machines you can run.

Moreover, if your vNIC doesn’t get a VMQ, all of its traffic is processed by the default queue. With SET you can assign multiple VMQs to the default queue, which can be shared as needed by any vNIC, allowing more VMs to get the bandwidth they need.
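For orientation, here is a hedged sketch of checking VMQ/VMMQ state and enabling VMMQ on a host vNIC. The vNIC name and queue-pair count are placeholders, and the exact parameter and property names can vary by OS build and NIC driver, so verify them with Get-Help Set-VMNetworkAdapter before relying on this.

# VMQ capability and queue allocation on the physical team members.
Get-NetAdapterVmq

# VMMQ state on the host vNICs (property names as exposed on Windows Server 2019).
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, VmmqEnabled, VmmqQueuePairs

# Enable VMMQ with several queue pairs on a specific host vNIC ('Management' is a placeholder name).
Set-VMNetworkAdapter -ManagementOS -Name 'Management' -VmmqEnabled $true -VmmqQueuePairs 16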

In this video, you can see the performance (throughput) benefits of Switch Embedded Teaming over that of LBFO.  The video demonstrates a 2x throughput improvement with SET over LBFO, while consuming ~10% additional CPU (a result of double the throughput).

Dynamic VMMQ – Dynamic VMMQ (d.VMMQ) won’t work with LBFO either. It is an intelligent queue scheduling algorithm for VMMQ that recognizes when CPU cores are overworked by network traffic and automatically reassigns that network traffic processing to other cores, so your workloads (e.g. VMs, applications, etc.) can run without competing for processor time.

Here’s an example of some of the benefits of Dynamic VMMQ. In the video, you can see the host spending CPU resources processing packets for a specific virtual NIC. When a competing workload begins on the system (which would prevent the virtual NIC from reaching maximum performance), we automatically tune the system by moving one of the workloads to an available processor.

RSC in the vSwitch – an acceleration that coalesces segments destined for the same virtual NIC into a larger segment.

Outbound network traffic is segmented to fit the MTU size of the physical network (a default of ~1500 bytes). Inbound traffic, however, can be coalesced into one big segment. That one big segment takes far less processing than multiple small segments, so once traffic is received by the host, we can combine the segments and deliver several of them to the vNIC at once. SET is aware of RSC coalescing and supports this acceleration as of Windows Server 2019.

We’re continuing to improve this feature for even better performance in the next version of Windows Server and Azure Stack HCI by extending RSC in the vSwitch across the VMBus. In the video below, we show one VM sending traffic to another VM with the improved acceleration disabled – this uses only the original Windows Server 2019 RSC in the vSwitch capabilities.

Next we enable the Windows Server vNext improvements; throughput is improved by ~17 Gbps while CPU resourcing is reduced by approximately 12% (20 cores on the system). This type of traffic pattern is specifically valuable for container scenarios that reside on the same host.

LBFO has been exceeded in both performance and stability by SET

Note: Guest RDMA, RSC in the vSwitch, VMMQ, and Dynamic VMMQ belong in this category as well.

Certified Azure Stack and Azure Stack HCI solutions test only SET

If all that wasn’t enough, both Microsoft and our partners validate and certify their solutions on SET, not LBFO. If you bought a certified Azure Stack HCI solution from one of our partners, or a standard or premium logo’d NIC, it was tested and validated with Switch Embedded Teaming. That means all certification tests were run with SET.

Link Aggregation Control Protocol (LACP)

[Image: an LACP port-channel of 2 x 50 Gbps NICs into a native host]

Ok, so this one is a little counter-intuitive. LACP allows port-channels (switch-dependent teams) to send traffic to the host over more than one physical port simultaneously.

For native hosts this means that every port in the port-channel can send traffic simultaneously – for the system on the right with 2 x 50 Gbps NICs, it looks like one big pipe with a native host potentially receiving 100 Gbps. Naturally, you’d expect that this capability could extend to virtual NICs as well.

But things change with virtualization. When the traffic gets to the host, the NICs need to interrupt multiple, independent processors to exceed what a single CPU core can process – This is what VMMQ does, and as mentioned, VMMQ does not work with LBFO.

LBFO limits you to a single VMQ, so despite having (in the picture) 100 Gbps of inbound bandwidth, you would only receive about 5 Gbps per virtual NIC (or up to ~20 Gbps per vNIC at the painful expense of OS-based software spreading, using CPU cycles that could otherwise run virtual machine workloads).

[Image: a SET (switch-independent) team of 2 x 50 Gbps NICs using VMMQ to spread inbound traffic]

With SET, switch-independent teaming, the hardware assistance of VMMQ, and enough CPUs in the system, you could receive all 100 Gbps of data into the host.

In summary, LACP provides no throughput benefits for Azure Stack HCI scenarios, incurs higher CPU consumption, and cannot auto-tune the system to avoid competition between workloads for virtualized scenarios (Dynamic VMMQ).

Asymmetric Adapters

While we’re myth-busting, let’s talk about adapter symmetry, which describes the degree to which adapters have the same make, model, speed, and configuration – SET requires adapter symmetry for Microsoft support. Usually the easiest way to identify this symmetry is by the device’s Interface Description (with PowerShell, use Get-NetAdapter). If the interface descriptions match (with the exception of the unique number given to each adapter, e.g. Intel NIC #1, Intel NIC #2, etc.), then the adapters are symmetric.
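A quick way to compare those properties side by side is a one-liner like the following (a sketch; the driver-related property names can vary slightly by driver and OS build):

# Compare make, model, speed, and driver across the candidate team members.
Get-NetAdapter | Sort-Object InterfaceDescription |
    Format-Table Name, InterfaceDescription, LinkSpeed, DriverVersion, DriverDate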

Prior to Windows Server 2016, conventional wisdom stated that you should use different NICs with different drivers in a team. The thinking was that if one driver had an issue, another team member would survive, and the team would remain up. This is a common benefit customers cite in favor of LBFO: it supports asymmetric adapters.

However, two drivers mean twice as many things can go wrong, in fact increasing the likelihood of a problem. In our review of customer support cases, a properly designed infrastructure with symmetric adapters is far more stable. As a result, support for asymmetric teams is no longer a differentiator for LBFO, nor do we recommend asymmetric teams for Azure Stack HCI scenarios where reliability is the #1 requirement.

LBFO for management adapters

Some customers I’ve worked with have asked if they should use LBFO for management adapters when no vSwitch is attached – our recommendation is to always use SET whenever it is available. A management adapter’s goal in life is to be stable, and we see fewer support cases with SET.

To be clear, if the adapter is not attached to a virtual switch, LBFO is acceptable; however, you should endeavor to use SET whenever possible for the support reasons outlined in this article.
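If you want to follow that recommendation, a minimal sketch looks like this; the switch name, adapter names, vNIC name, and VLAN ID are placeholders for your own environment.

# Instead of an LBFO team for management traffic, create a SET-backed vSwitch
# and put a host (ManagementOS) vNIC on top of it.
New-VMSwitch -Name 'MgmtSwitch' -NetAdapterName 'NIC 1','NIC 2' -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add a dedicated management vNIC and tag it with the management VLAN.
Add-VMNetworkAdapter -ManagementOS -Name 'Management' -SwitchName 'MgmtSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10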

vSwitch on LBFO Deprecation Status

Recently, we publicly announced our plans to deprecate the use of LBFO with the Hyper-V virtual switch. Moving forward, and due to the various reasons outlined in this article, we have decided to block the binding of the vSwitch on LBFO.

Prior to upgrading from Windows Server 2019 to vNext, or if you have a fresh install of vNext, you will need to convert any LBFO team that is attached to a Hyper-V virtual switch to a SET team. To make this simpler, we’re releasing a tool (available on the PowerShell Gallery) called Convert-LBFO2SET.

You can install this tool using the command:

Install-Module Convert-LBFO2SET

Or for disconnected systems:

Save-Module Convert-LBFO2SET -Path C:\SomeFolderPath

Please see the wiki for instructions on how to use the tool; here’s an example where we convert a system with 10 host vNICs, 10 generation 1 VMs, and 10 generation 2 VMs.
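For orientation only, a conversion typically looks something like the line below. The parameter names shown (-LBFOTeam, -SETTeam, -AllowOutage) may differ between versions of the tool, so confirm the exact syntax with Get-Help Convert-LBFO2SET -Full before running it.

# Hypothetical example: 'OldLBFOTeam' is the existing LBFO team, 'NewSETTeam' is the SET team/vSwitch to create.
# -AllowOutage acknowledges a brief connectivity loss during the conversion.
Convert-LBFO2SET -LBFOTeam 'OldLBFOTeam' -SETTeam 'NewSETTeam' -AllowOutage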

Summary

LBFO remains our teaming solution if you are running your workloads on bare-metal servers. If, however, you are running virtualized or cloud scenarios like Azure Stack HCI, you should give Switch Embedded Teaming serious consideration. As we’ve described in this article, SET has been the Microsoft-recommended teaming solution and focus since Windows Server 2016, as it brings better performance, stability, and feature support than LBFO.

Are there other questions you have about SET and LBFO? Please submit your questions in the comments below!

Thanks for reading,

Dan “All SET for Azure Stack HCI” Cuomo

Source :
https://techcommunity.microsoft.com/t5/networking-blog/teaming-in-azure-stack-hci/ba-p/1070642

Create and manage a Switch Embedded Team

What is Hyper-V Switch Embedded Teaming?

With Windows Server 2016, Microsoft brings us a new NIC (network interface card) teaming solution called Switch Embedded Teaming (SET). It is only available on servers running Windows Server 2016 with the Hyper-V role installed and is not backwards compatible.

SET allows us to group up to eight physical NICs into one team, on top of which you can create one or more vNICs. Microsoft’s reasoning is that SET is best used with NICs of 10 Gbps or faster. SET only supports switch-independent mode, which keeps setup easy to configure. You can either plug all physical NICs into one physical switch or into multiple physical switches, as long as they are on the same subnet. Connecting the SET team to multiple physical switches can give you extra redundancy.

Due to SET’s integration with the Hyper-V Virtual Switch, you cannot use SET teaming inside a virtual machine (VM); SET does not present a team interface to the VM. You can, however, use LBFO (load balancing and failover) teaming inside a VM by adding additional vNICs to the VM.

Which NICs can I use?

If your NICs have passed the Windows Hardware Quality Labs (WHQL) tests, then you can use them to create your SET team. SET does require all NICs that are part of a SET team to be identical; that means they are from the same manufacturer, are the same model, and have the same firmware and driver. You can have up to eight NICs in one SET team.

One good thing about SET teams is that you can configure Remote Direct Memory Access (RDMA) per vNIC. This means you can create multiple vNICs on your SET team: you can configure one vNIC for storage and enable RDMA on it, then create another vNIC for management traffic without RDMA. This is good for hyperconverged solutions.
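A short sketch of that per-vNIC approach is shown below; the switch and vNIC names are placeholders.

# Two host vNICs on the same SET switch: one for storage (RDMA enabled), one for management (no RDMA).
Add-VMNetworkAdapter -ManagementOS -Name 'Storage1' -SwitchName 'SET Team'
Add-VMNetworkAdapter -ManagementOS -Name 'Mgmt' -SwitchName 'SET Team'

# Enable RDMA only on the storage vNIC (host vNICs appear as "vEthernet (<name>)").
Enable-NetAdapterRdma -Name 'vEthernet (Storage1)'

# Confirm which adapters now have RDMA enabled.
Get-NetAdapterRdma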

SET compatibility with Windows Server

SET is compatible with the following networking technologies found in Windows Server 2016.

  • Data Center Bridging (DCB)
  • Hyper-V Network Virtualization — NVGRE and VXLAN are both supported in Windows Server 2016.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) — These are only supported if at least one of the SET Team members support them.
  • Remote Direct Memory Access (RDMA)
  • Single root I/O virtualization (SR-IOV)
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) — These are only supported if all the SET team members support them.
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (vRSS)

SET is not compatible with the following networking technologies in Windows Server 2016.

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS in host or native operating systems
  • Receive side coalescing (RSC)
  • Receive side scaling (RSS)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

SET modes and settings

When you first create a SET team, you cannot configure a team name, and you cannot set NICs to standby like you can with NIC Teaming in Windows Server 2012 R2. In fact, with SET teams all NICs are always active.

Another difference between SET teams and LBFO (load balancing and failover) teams is that with NIC Teaming you had a choice of three different teaming modes, while SET only gives you one: Switch Independent. Switch-independent mode means the physical switch or switches are unaware of the SET team and do not determine how to distribute the network traffic.

There are two team properties that need to be configured when you create a SET Team. They are:

  • Member adapters
  • Load balancing mode

Member adapters

As stated earlier in this post, when you are creating a SET team you can use up to eight identical NICs. You can create your SET switch with just one NIC to start, but any new members must be identical to it.

Load Balancing mode

There are two Load Balancing distribution modes for SET Teams. They are:

  • Hyper-V Port
  • Dynamic

Hyper-V port

When using Hyper-V port mode for SET Teams, VMs are connected to a port on the Hyper-V Switch. This is just like a physical server would be connected to a port on a physical switch. The Hyper-V Virtual Switch port and the VM’s MAC address are used to divide the network traffic between the SET Team members.

If you use Switch Embedded Teaming with Packet Direct, you must use the Switch Independent Teaming mode and the Hyper-V port load balancing mode.

With this teaming mode, any adjacent switches will always see a VM’s MAC address on a given port. This allows the switch to distribute the ingress load (the traffic coming into the Hyper-V host from the switch) to the port where the MAC address is located. Hyper-V port mode is very useful when you utilize Virtual Machine Queues (VMQs) because a queue can be placed on the NIC where the traffic is expected to arrive.

Hyper-V port mode is not the best choice if you are only hosting a few VMs, because it is not granular enough to achieve a well-balanced distribution. This mode also limits each VM to the bandwidth available on a single NIC.

Dynamic

When using the Dynamic port mode for SET Teams, all outbound loads are distributed using a hash of the TCP Ports and IP addresses. Dynamic Port mode also rebalances loads in real time to allow outbound flow to move between each SET Team member.

For inbound loads, all traffic is distributed in the same way as in Hyper-V port mode. Outbound loads are dynamically balanced using flowlets. A flowlet is the portion of a TCP flow between two naturally occurring breaks. When Dynamic mode detects a flowlet boundary, it will automatically rebalance the flow to another member of the SET team, if appropriate. In some uncommon circumstances, Dynamic mode may also rebalance flows that do not contain any flowlets. Because of this, the affinity between a TCP flow and a team member can change as the traffic is balanced.

Creating and managing a SET Team

Although it is recommended that you use Microsoft Virtual Machine Manager (VMM) to configure SET Teams, not every company has this. That’s where PowerShell comes in.

In the following section, we will go through the process of creating a SET team, adding and removing NICs from a SET team, changing the load balancing mode, and removing a SET team.

Create a SET Team

You can only create a SET team at the same time that you create the Hyper-V Virtual Switch. To create a SET team with one NIC, use the following PowerShell code in an elevated PowerShell window:

New-VMSwitch -Name "SET Team" -NetAdapterName "NIC 1" -EnableEmbeddedTeaming $true
 

 
You can change SET Team to any name you need, and change the adapter name NIC 1 to the name of your NIC. You can find your NIC names by using the following PowerShell command:

Get-NetAdapter
 

 
You should now have a list of all your adapters. The first column, Name, is the name of your network adapter; this is the value you need to use for -NetAdapterName.

To create a SET Team with more than one NIC, you can use the following PowerShell command in an elevated PowerShell window:

New-VMSwitch -Name "SET Team" -NetAdapterName "NIC 1","NIC 2"
 

 
Notice that when adding more than one NIC you don’t need to add -EnableEmbeddedTeaming, because it is assumed by Windows PowerShell when the argument to -NetAdapterName is an array of NICs.

Adding and removing SET Team members

The only way to add or remove SET team members is to use the Set-VMSwitchTeam command. You basically name the SET switch and the adapters you want to be in it. Let’s say you have created your SET switch with NIC 1 and NIC 2, and now you want to remove NIC 2 and add NIC 3:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 1","NIC 3"
 

 
If you now need to remove NIC 1, you can use the following command:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 3"
 

 
To add NIC 2 back in to the SET Team, you can use the following command:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 3","NIC 2"
 

Change the Load Balancing Mode

To change the Load Balancing Mode between the two mentioned above, you can use the following PowerShell command (just change Dynamic to HyperVPort):

Set-VMSwitchTeam -Name "SET Team" -LoadBalancingAlgorithm Dynamic
 

 
To view which Load Balancing Mode you currently have:

Get-VMSwitchTeam -Name "SET Team" | FL
 

Remove a SET Team

The only way to remove a SET Team is by removing the Hyper-V Virtual switch completely:

Remove-VMSwitch "SET Team"
 

 
Now you have learned how to create, manage and remove a SET switch. Hopefully, you will find this post beneficial and helpful. If you have any questions, leave a comment.

See more: