SaaS vs PaaS vs IaaS: What’s the Difference & How to Choose

Companies are increasingly using Cloud services to support their business processes. But which types of Cloud services are there, and what are the differences? Which kind of Cloud service is most suitable for you? Do you want to be unburdened, or completely in control? Do you opt for maximum cost savings, or do you want the entire arsenal of possibilities and top performance? Can you still see the forest for the trees? In this article and the next, I describe several different Cloud services, what their differences and features are, and what exactly you need to pay attention to.

Let’s start with the definition of Cloud computing: the provision of services over the internet (the Cloud). Think of storage, software, servers, databases, etc. Depending on the type of service offered (for example, license management or data storage), you can divide these services into categories such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). These services are provided by a cloud provider. Whether this is Microsoft (Azure), Amazon (AWS), or another vendor (Google, Alibaba, Oracle, etc.), each vendor offers Cloud services that fall under one of the categories we are about to discuss.

One feature of Cloud computing is that you pay according to usage and the service you purchase. For example, for SaaS, you pay for the software’s license and support. This also means that if you buy a SaaS service (e.g., Office 365) and don’t use it, you will still be charged. With IaaS, on the other hand, if you purchase storage, you only pay for the amount of storage you use, possibly supplemented with additional services such as backup.

Sometimes Cloud services complement each other; think, for example, of DBaaS (Database as a Service), where a database is offered via the Cloud. Often you need an application server and other infrastructure to read data from this database. These usually run in a Landing Zone, purchased as an IaaS service. But some services can also stand alone, for example, SaaS (Office 365).

Each Cloud service has specific characteristics. Sometimes it requires little or no (technical) knowledge, but it can also be challenging to manage and use the services according to best practices. This often depends on the degree of control you want to keep. If you want an application from the Cloud where you are completely relieved of all worries, little technical knowledge is required from the user or the administrator. But if you want maximum control, IaaS gives you an enormous range of possibilities. In this article, you can read what you need to consider.

It is advisable to think beforehand about what your requirements and wishes are, and whether they fit the service you want to purchase. If you wish to use an application in the Cloud but rely on many custom settings, this is often not possible. If you don’t want to be responsible for updating and backing up an application and use little or no customization, a SaaS solution can be very interesting. Also, look at how a service fits into your business process. Does it offer possibilities for automation, reporting, or disaster recovery? Are there possibilities to temporarily allocate extra resources in case of peak demand (horizontal or vertical scaling), and what guarantees does the supplier offer with this service? Think of RPO/RTO and the accessibility of the service desk in case of a calamity.

Let’s get started quickly!

IaaS (Infrastructure as a Service)

One of the best-known Cloud services is undoubtedly IaaS. For many companies, this is often their first introduction to a Cloud service. You rent the infrastructure from a cloud provider: for example, the network infrastructure, virtual servers (including the operating system), and storage. A feature of IaaS is that you have complete control, both on the management side and in how you deploy resources. This can be done in various automated ways (PowerShell, IaC, DevOps pipelines, etc.) and via the classic management interface that all providers offer. Things that are often not possible with a PaaS service are possible with an IaaS service. In principle, you can set up a complete server environment (all services are available for this), but you still get the benefits of the Cloud, such as scalability and pay per use or per resource.
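
As a hedged illustration of the automated deployment options mentioned above (this example is not part of the original article), the sketch below drives the Azure CLI from Python to create a virtual machine. The resource group, VM name, image alias, and size are illustrative assumptions you would replace with your own values.

  # Minimal sketch: provisioning an IaaS virtual machine by calling the Azure CLI.
  # The resource group, VM name, image alias, and size are illustrative assumptions.
  import subprocess

  subprocess.run(
      [
          "az", "vm", "create",
          "--resource-group", "demo-rg",      # assumed to already exist
          "--name", "demo-vm",
          "--image", "Ubuntu2204",
          "--size", "Standard_B2s",
          "--admin-username", "azureuser",
          "--generate-ssh-keys",
      ],
      check=True,                             # raise if the CLI reports an error
  )
  # Pay-per-use billing starts once the VM is running; deallocate it when not needed.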

IaaS, therefore, most resembles an on-premises implementation. You often see it used in combination with virtual servers. It is critical to investigate possible limitations, for example around I/O, because performance in practice can differ from a traditional local environment. You are responsible for arranging security and backup. The advantage is that you influence the choice of technology used: you can customize the setup according to your needs and wishes, and you can standardize the configuration for your organization. Deployment can be complex, and you are forced to make your own choices, so some expertise is needed.

PaaS (Platform as a Service)

PaaS stands for Platform as a Service and goes a step further than IaaS. You get a platform on which you can do the configuration yourself. When you use a PaaS service, the vendor takes care of the underlying layer (IaaS) as well as the operating system and middleware, so you sacrifice something in terms of control and capabilities. PaaS services are ideal for developers and web and application builders, because you can quickly make an environment available. Using PaaS means you no longer have to worry about the infrastructure, operating system, and middleware; this is taken care of by the supplier based on best practices. This also offers security advantages, as patching and upgrading these components is now done by the vendor.

Another advantage is that you can focus entirely on what you want to do rather than on managing the environment. You can also easily purchase additional services and quickly scale them up or down. When you are finished, you can stop and remove the resources, so you incur no further costs.

However, do take into account the use of existing software. Not all existing software is suitable to run in a PaaS environment; for example, in a PaaS environment you do not have full access (after all, the vendor is responsible). Also, not all CPU power and memory are dedicated to your Cloud application, because it is often hosted on a shared platform, so other applications (and databases) may use the same resources. As for the database, you have the same advantages and disadvantages as with DBaaS.

SaaS (Software as a Service)

This is probably a service you’ve been using for a while. In short, you consume applications through the Cloud on a subscription basis. The provider is responsible for managing the infrastructure, patches, and updates. A SaaS solution is ready for use immediately, and you directly benefit from the added value, such as fast scaling up and down and paying per use. Examples are Office 365, SharePoint Online, Salesforce, Exact Online, Dropbox, etc.

Unlike IaaS and PaaS, where there is still a lot of freedom and you have to set everything up yourself, with SaaS it is immediately clear what you are buying and what you will get. With this service, you are relieved of most of your worries. The vendor is responsible for all updates, patches, development, and more. You cannot make any updates or changes to the software yourself.

Many companies use one or more SaaS services, and often there is even a distinction within a company: each department has its own specific applications and associated SaaS services. With this service, you only pay for what you need, including the licenses, and these licenses can easily be scaled up or down.

Working with SaaS solutions is interesting for many companies. It is particularly interesting for start-ups, small companies, and freelancers because you only purchase what you use, you don’t have unnecessarily high start-up costs, and you don’t have to worry about the maintenance of the software.

But SaaS can also be a perfect solution for larger companies. For example, if you hire extra staff for specific periods, you can quickly get these people working with the software they need. You buy several additional licenses, and you can stop them when the temporary staff leaves.

How can Vembu help you?

BDRSuite is a comprehensive Backup & DR solution designed to protect your business-critical data across Virtual (VMware, Hyper-V), Physical Servers (Windows, Linux), SaaS (Microsoft 365, Google Workspace), AWS EC2 Instances, Endpoints (Windows, Mac), and Applications & Databases (MS Active Directory, MS Exchange, MS Outlook, SharePoint, MS SQL, MySQL).

To protect your workloads running on SaaS (Microsoft 365, Google Workspace), try out a full-featured 30-day free trial of the latest version of BDRSuite.

Source :
https://www.vembu.com/blog/saas-vs-paas-vs-iaas-whats-the-difference-how-to-choose/

Differences between availability modes for an Always On availability group

Applies to:  SQL Server (all supported versions)

In Always On availability groups, the availability mode is a replica property that determines whether a given availability replica can run in synchronous-commit mode. For each availability replica, the availability mode must be configured for either synchronous-commit mode, asynchronous-commit mode, or configuration only mode. If the primary replica is configured for asynchronous-commit mode, it does not wait for any secondary replica to write incoming transaction log records to disk (to harden the log). If a given secondary replica is configured for asynchronous-commit mode, the primary replica does not wait for that secondary replica to harden the log. If both the primary replica and a given secondary replica are configured for synchronous-commit mode, the primary replica waits for the secondary replica to confirm that it has hardened the log (unless the secondary replica fails to ping the primary replica within the primary’s session-timeout period).

 Note

If the primary’s session-timeout period is exceeded by a secondary replica, the primary replica temporarily shifts into asynchronous-commit mode for that secondary replica. When the secondary replica reconnects with the primary replica, they resume synchronous-commit mode.

Supported Availability Modes

Always On availability groups supports three availability modes: asynchronous-commit mode, synchronous-commit mode, and configuration only mode, as follows:

  • Asynchronous-commit mode is a disaster-recovery solution that works well when the availability replicas are distributed over considerable distances. If every secondary replica is running under asynchronous-commit mode, the primary replica does not wait for any of the secondary replicas to harden the log. Rather, immediately after writing the log record to the local log file, the primary replica sends the transaction confirmation to the client. The primary replica runs with minimum transaction latency in relation to a secondary replica that is configured for asynchronous-commit mode. If the current primary is configured for asynchronous-commit availability mode, it will commit transactions asynchronously for all secondary replicas regardless of their individual availability mode settings. For more information, see Asynchronous-Commit Availability Mode, later in this topic.
  • Synchronous-commit mode emphasizes high availability over performance, at the cost of increased transaction latency. Under synchronous-commit mode, transactions wait to send the transaction confirmation to the client until the secondary replica has hardened the log to disk. When data synchronization begins on a secondary database, the secondary replica begins applying incoming log records from the corresponding primary database. As soon as every log record has been hardened, the secondary database enters the SYNCHRONIZED state. Thereafter, every new transaction is hardened by the secondary replica before the log record is written to the local log file. When all the secondary databases of a given secondary replica are synchronized, synchronous-commit mode supports manual failover and, optionally, automatic failover. For more information, see Synchronous-Commit Availability Mode, later in this topic.
  • Configuration only mode applies to availability groups that are not on a Windows Server Failover Cluster. A replica in configuration only mode does not contain user data. In configuration only mode, the replica master database stores availability group configuration metadata. For more information, see Availability group with configuration only replica.

The following illustration shows an availability group with five availability replicas. The primary replica and one secondary replica are configured for synchronous-commit mode with automatic failover. Another secondary replica is configured for synchronous-commit mode with only planned manual failover, and two secondary replicas are configured for asynchronous-commit mode, which supports only forced manual failover (typically called forced failover).

Availability and failover modes of replicas

The synchronization and failover behavior between two availability replicas depends on the availability mode of both replicas. For example, for synchronous commit to occur, both the current primary replica and the secondary replica in question must be configured for synchronous commit. Likewise, for automatic failover to occur, both replicas need to be configured for automatic failover. Therefore, the behavior for the illustrated deployment scenario above can be summarized in the following table, which explores the behavior with each potential primary replica:

Current Primary Replica | Automatic Failover Targets | Synchronous-Commit Mode Behavior With | Asynchronous-Commit Mode Behavior With | Automatic Failover Possible
01 | 02 | 02 and 03 | 04 | Yes
02 | 01 | 01 and 03 | 04 | Yes
03 |  | 01 and 02 | 04 | No
04 |  |  | 01, 02, and 03 | No

Typically, Node 04, as an asynchronous-commit replica, is deployed in a disaster recovery site. The fact that Nodes 01, 02, and 03 remain at asynchronous-commit mode after failing over to Node 04 helps prevent potential performance degradation in your availability group due to high network latency between the two sites.

Asynchronous-Commit Availability Mode

Under asynchronous-commit mode, the secondary replica never becomes synchronized with the primary replica. Though a given secondary database might catch up to the corresponding primary database, any secondary database could lag behind at any point. Asynchronous-commit mode can be useful in a disaster-recovery scenario in which the primary replica and the secondary replica are separated by a significant distance and where you do not want small errors to impact the primary replica, or in situations where performance is more important than synchronized data protection. Furthermore, since the primary replica does not wait for acknowledgements from the secondary replica, problems on the secondary replica never impact the primary replica.

An asynchronous-commit secondary replica attempts to keep up with the log records received from the primary replica. But asynchronous-commit secondary databases always remain unsynchronized and are likely to lag somewhat behind the corresponding primary databases. Typically the gap between an asynchronous-commit secondary database and the corresponding primary database is small. But the gap can become substantial if the server hosting the secondary replica is overloaded or the network is slow.

The only form of failover supported by asynchronous-commit mode is forced failover (with possible data loss). Forcing failover is a last resort intended only for situations in which the current primary replica will remain unavailable for an extended period and immediate availability of primary databases is more critical than the risk of possible data loss. The failover target must be a replica whose role is in the SECONDARY or RESOLVING state. The failover target transitions to the primary role, and its copies of the databases become the primary databases. Any remaining secondary databases, along with the former primary databases, once they become available, are suspended until you manually resume them individually. Under asynchronous-commit mode, any transaction logs that the original primary replica had not yet sent to the former secondary replica are lost. This means that some or all of the new primary databases might be lacking recently committed transactions. For more information on how forced failover works and on best practices for using it, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Availability Mode

Under synchronous-commit availability mode (synchronous-commit mode), after being joined to an availability group, a secondary database catches up to the corresponding primary database and enters the SYNCHRONIZED state. The secondary database remains SYNCHRONIZED as long as data synchronization continues. This guarantees that every transaction that is committed on a given primary database has also been committed on the corresponding secondary database. When every secondary database on a given secondary replica is synchronized, the synchronization-health state of the secondary replica as a whole is HEALTHY.

Factors That Disrupt Data Synchronization

Once all of its databases are synchronized, a secondary replica enters the HEALTHY state. The synchronized secondary replica will remain healthy unless one of the following occurs:

  • A network or computer delay or glitch causes the session between the secondary replica and the primary replica to time out. Note: For information about the session-timeout property of availability replicas, see Overview of Always On Availability Groups (SQL Server).
  • You suspend a secondary database on the secondary replica. The secondary replica ceases to be synchronized, and its synchronization-health state is marked as NOT_HEALTHY. The secondary replica cannot become healthy again until the suspended secondary database is either resumed and resynchronized or removed from the availability group.
  • You add a primary database to the availability group. Previously synchronized secondary replicas enter the NOT_HEALTHY synchronization-health state. This state indicates that at least one database is in the NOT SYNCHRONIZING synchronization state. A given secondary replica cannot be HEALTHY again until a corresponding secondary database has been prepared on the replica, has been joined to the availability group, and has become synchronized with the new primary database.
  • You change the primary replica or the secondary replica to asynchronous-commit availability mode. After changing to asynchronous-commit mode, the secondary replica will remain in the HEALTHY synchronization-health state as long as data synchronization continues. However, if only the primary replica is changed to asynchronous-commit mode, the synchronous-commit secondary replica will enter the PARTIALLY_HEALTHY synchronization-health state. This state indicates that at least one database is in the SYNCHRONIZING synchronization state, but none of the databases are in the NOT SYNCHRONIZING state.
  • You change any secondary replica to synchronous-commit availability mode. This causes that secondary replica to be marked as in the PARTIALLY_HEALTHY synchronization-health state until all of its databases are in the SYNCHRONIZED synchronization state.

 Tip

To view the synchronization health of an availability group, availability replica, or availability database, query the synchronization_health or synchronization_health_desc column of sys.dm_hadr_availability_group_states, sys.dm_hadr_availability_replica_states, or sys.dm_hadr_database_replica_states, respectively.
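
As a hedged illustration of the tip above (not part of the Microsoft article), the following Python sketch uses pyodbc to read replica-level synchronization health from those DMVs; the connection string and server name are assumptions you would adapt to your environment.

  # Hedged sketch: querying Always On replica synchronization health via pyodbc.
  # The driver, server name, and authentication settings are assumptions.
  import pyodbc

  conn = pyodbc.connect(
      "Driver={ODBC Driver 18 for SQL Server};Server=SQLNODE01;"
      "Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=yes"
  )
  cursor = conn.cursor()
  cursor.execute("""
      SELECT ag.name AS availability_group,
             ar.replica_server_name,
             ars.synchronization_health_desc
      FROM sys.availability_groups AS ag
      JOIN sys.availability_replicas AS ar ON ar.group_id = ag.group_id
      JOIN sys.dm_hadr_availability_replica_states AS ars ON ars.replica_id = ar.replica_id
      ORDER BY ag.name, ar.replica_server_name;
  """)
  for group, replica, health in cursor.fetchall():
      print(f"{group}: {replica} -> {health}")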

How Synchronization Works on a Secondary Replica

Under synchronous-commit mode, after a secondary replica joins the availability group and establishes a session with the primary replica, the secondary replica writes incoming log records to disk (hardens the log) and sends a confirmation message to the primary replica. Once the hardened log on the secondary database has caught up to the end of the log on the primary database, the state of the secondary database is set to SYNCHRONIZED. The time required for synchronization depends essentially on how far the secondary database was behind the primary database at the start of the session (measured by the number of log records initially received from the primary replica), the workload on the primary database, and the speed of the computer hosting the server instance of the secondary replica.

Synchronous operation is maintained in the following manner:

  1. On receiving a transaction from a client, the primary replica writes the log for the transaction to the transaction log and concurrently sends the log record to the secondary replicas.
  2. Once a log record is written to the transaction log of the primary database, the transaction can be undone only if there is a failover at this point to a secondary that did not receive the log. The primary replica waits for confirmation from the synchronous-commit secondary replica.
  3. The secondary replica hardens the log and returns an acknowledgement to the primary replica.
  4. On receiving the confirmation from the secondary replica, the primary replica finishes the commit processing and sends a confirmation message to the client. Note: If a synchronous-commit secondary replica times out without confirming that it has hardened the log, the primary marks that secondary replica as failed. The connected state of the secondary replica changes to DISCONNECTED, and the primary replica stops waiting for confirmation from the secondary replica. This behavior ensures that a failed synchronous-commit secondary replica does not prevent hardening of the transaction log on the primary replica.

Synchronous-commit mode protects your data by requiring the data to be synchronized between two places, at the cost of somewhat increasing the latency of the transaction.

Synchronous-Commit Mode with Only Manual Failover

When the primary replica and a synchronous-commit secondary replica are connected and the database is synchronized, manual failover is supported. If the secondary replica goes down, the primary replica is unaffected. The primary replica runs exposed if no SYNCHRONIZED replicas exist (that is, without sending data to any secondary replica). If the primary replica is lost, the secondary replicas enter the RESOLVING state, but the database owner can force a failover to a secondary replica (with possible data loss). For more information, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Mode with Automatic Failover

Automatic failover provides high availability by ensuring that the database is quickly made available again after the loss of the primary replica. To configure an availability group for automatic failover, you need to set both the current primary replica and at least one secondary replica to synchronous-commit mode with automatic failover. You can have up to three automatic failover replicas.

Furthermore, for an automatic failover to be possible at a given time, this secondary replica must be synchronized with the primary replica (that is, the secondary databases are all synchronized), and the Windows Server Failover Clustering (WSFC) cluster must have quorum. If the primary replica becomes unavailable under these conditions, automatic failover occurs. The secondary replica switches to the role of primary, and it offers its database as the primary database. For more information, see the “Automatic Failover” section of the Failover and Failover Modes (Always On Availability Groups) topic.
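
As a hedged sketch of how the availability and failover modes described above can be changed on a replica (this is not from the Microsoft article), the snippet below issues the corresponding ALTER AVAILABILITY GROUP statements through pyodbc against the primary. The availability group name AG1 and replica name SQLNODE02 are placeholders.

  # Hedged sketch: set a replica to synchronous commit with automatic failover.
  # 'AG1' and 'SQLNODE02' are placeholder names; run this against the primary replica.
  import pyodbc

  conn = pyodbc.connect(
      "Driver={ODBC Driver 18 for SQL Server};Server=SQLNODE01;"
      "Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=yes",
      autocommit=True,
  )
  cursor = conn.cursor()
  cursor.execute("ALTER AVAILABILITY GROUP [AG1] "
                 "MODIFY REPLICA ON N'SQLNODE02' WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);")
  cursor.execute("ALTER AVAILABILITY GROUP [AG1] "
                 "MODIFY REPLICA ON N'SQLNODE02' WITH (FAILOVER_MODE = AUTOMATIC);")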

 Note

For more information about WSFC quorum and Always On availability groups, see WSFC Quorum Modes and Voting Configuration (SQL Server).

Data latency on secondary replica

Implementing read-only access to secondary replicas is useful if your read-only workloads can tolerate some data latency. In situations where data latency is unacceptable, consider running read-only workloads against the primary replica.

The primary replica sends log records of changes on the primary database to the secondary replicas. On each secondary database, a dedicated redo thread applies the log records. On a read-access secondary database, a given data change does not appear in query results until the log record that contains the change has been applied to the secondary database and the transaction has been committed on the primary database.

This means that there is some latency, usually only a matter of seconds, between the primary and secondary replicas. In unusual cases, however, for example if network issues reduce throughput, latency can become significant. Latency increases when I/O bottlenecks occur and when data movement is suspended. To monitor suspended data movement, you can use the Always On Dashboard or the sys.dm_hadr_database_replica_states dynamic management view.
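
As a hedged sketch (not from the Microsoft article), the query below inspects sys.dm_hadr_database_replica_states for suspended data movement and for send/redo queue sizes that indicate latency; the pyodbc connection string is an assumption.

  # Hedged sketch: check for suspended data movement and log send/redo queue sizes.
  import pyodbc

  conn = pyodbc.connect(
      "Driver={ODBC Driver 18 for SQL Server};Server=SQLNODE01;"
      "Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=yes"
  )
  for row in conn.execute("""
      SELECT database_id, synchronization_state_desc, is_suspended,
             suspend_reason_desc, log_send_queue_size, redo_queue_size
      FROM sys.dm_hadr_database_replica_states;
  """):
      print(row)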

For more information on investigating redo latency on the secondary replica, please see Troubleshoot primary changes not reflected on secondary replica.

Related Tasks

To change the availability mode and failover mode

To adjust quorum votes

To perform a manual failover

To view availability group, availability replica, and database states

See Also

Overview of Always On Availability Groups (SQL Server)
Failover and Failover Modes (Always On Availability Groups)
Windows Server Failover Clustering (WSFC) with SQL Server

Source :
https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups?view=sql-server-ver16

How to set up myQNAPcloud to remotely access a QNAP NAS

Requirements

Register your NAS with myQNAPcloud

  1. Log in to your QNAP NAS.
  2. Open myQNAPcloud.
  3. Click Get Started.

    The Welcome to myQNAPcloud! window appears.
  4. Follow the steps to register your NAS. Click Next to move to the next step.
    1. Enter your QNAP ID and Password.
    2. Enter a Device name for your NAS.
      Note: This name is used to identify your NAS on myQNAPcloud and must be unique across all users.
    3. Choose what NAS services will be enabled and the Access Control setting.

      Your device is registered on myQNAPcloud.

      A summary page displays all the registration details and service guidelines for your NAS.

Remotely access your QNAP NAS with myQNAPcloud

  1. Go to https://www.myqnapcloud.com/.
  2. Sign in using your QNAP Account.
    Note: If you are already signed in, you are automatically redirected to My Devices.
  3. Go to My Devices.
    The devices registered to your QNAP Account are displayed.
  4. Click the ”  ” button next to the device to display the device IP and SmartURL.
  5. Click SmartURL.

    A login page for your NAS appears.

Source :
https://www.qnap.com/en/how-to/tutorial/article/how-to-set-up-myqnapcloud-to-remotely-access-a-qnap-nas

Moving the Mission Forward: Mandiant Joins Google Cloud

Google’s acquisition of Mandiant is now complete, marking a great moment for our team and for the security community we serve.

As part of Google Cloud, Mandiant now has a far greater capability to close the security gap created by a growing number of adversaries. In my 29 years on the front lines of securing networks, I have seen criminals, nation states, and plain bad actors bring harm to good people. By combining our expertise and intelligence with the scale and resources of Google Cloud, we can make a far greater difference in preventing and countering cyber attacks, while pinpointing new ways to hold adversaries accountable.  

When I founded Mandiant Corporation in 2004, we set out to change how businesses protected themselves from cyber threats. We felt the technologies people depended on to defend ultimately failed to innovate at the pace of the attackers. In order to deliver cyber defenses as dynamic as the threats, we believed you had to have your finger on the pulse of adversaries around the world. To address this need, we set out to respond to as many cyber security breaches as possible. We wanted to learn first-hand how adversaries were circumventing common safeguards with new and novel attacks; monitor the development and deployment of attacker tools, their infrastructure, and their underground economies; and study the attacker’s targeting trends.

Armed with this knowledge and experience, we felt we were best positioned to close the gap between the offense and the defense in the security arms race.  

As we investigated thousands of security incidents over the years, we honed the deep expertise required to find the proverbial needle in the haystack: the trace evidence that something unlawful, unauthorized, or simply unacceptable had occurred. We believed this skill was the foundation to automating security operations through software, so that organizations and governments around the world could easily implement effective security capabilities. 

By joining forces with Google Cloud, we can accelerate this vision. I am very excited that Mandiant and Google Cloud can now work together to leverage our frontline intelligence and security expertise to address a common goal: to relentlessly protect organizations against cyber attacks and provide solutions that allow defenders to operate with confidence in their cyber security posture. More specifically, we can leverage our intelligence differentiator to automate security operations and validate security effectiveness.

Mandiant Remains Relentless

While we are now part of Google Cloud, Mandiant is not going away—in fact, it’s getting stronger. We will maintain our focus on knowing the most about threat actors and extend our reputation for delivering world-class threat intelligence, consulting services, and security solutions. 

Automating Security Operations

Today’s announcement should be welcome news to organizations facing cyber security challenges that have accelerated in frequency, severity, and diversity. I have always believed that organizations can remain resilient in the fight against cyber threats if they have the right combination of expertise, intelligence, and adaptive technology. 

This is why I am a proponent of Google Cloud’s shared fate model. By taking an active stake in the security posture of customers, we can help organizations find and validate potential security issues before they become an incident. Google Cloud and Mandiant have the knowledge and skills to provide an incredibly efficient and effective security operations platform. We are building a “security brain” that scales our team to address the expertise shortage.

Validating Security Effectiveness

Google Cloud’s reach, resources, and focus will accelerate another Mandiant imperative: validating security effectiveness. Organizations today lack the tools needed to validate the effectiveness of security, quantify risk, and exhibit operational competency. Mandiant is working to provide visibility and evidence on the status of how effective security controls are against adversary threats. With this data, organizations have a clear line of sight into optimizing their individual environment against relevant threats.

Advancing Our Mission

Google Cloud has made security the cornerstone of its commitment to users around the world, and the Mandiant acquisition underscores that focus.

We are thrilled to continue moving our mission forward alongside the Google Cloud team. Together, I believe Mandiant and Google Cloud will help reinvent how organizations protect, detect, and respond to threats. This will benefit not only a growing base of customers and partners, but the security community at large.

You can learn more about this milestone moment and the exciting opportunities ahead in this blog post by Google Cloud CEO Thomas Kurian, “Google + Mandiant: Transforming Security Operations and Incident Response.”

Source :
https://www.mandiant.com/resources/blog/mandiant-joins-google-cloud

GIFShell attack creates reverse shell using Microsoft Teams GIFs

A new attack technique called ‘GIFShell’ allows threat actors to abuse Microsoft Teams for novel phishing attacks and covertly executing commands to steal data using … GIFs.

The new attack scenario, shared exclusively with BleepingComputer, illustrates how attackers can string together numerous Microsoft Teams vulnerabilities and flaws to abuse legitimate Microsoft infrastructure to deliver malicious files and commands, and to exfiltrate data via GIFs.

As the data exfiltration is done through Microsoft’s own servers, the traffic will be harder to detect by security software that sees it as legitimate Microsoft Teams traffic.

Overall, the attack technique utilizes a variety of Microsoft Teams flaws and vulnerabilities:

  • Bypassing Microsoft Teams security controls to allow external users to send attachments to Microsoft Teams users.
  • Modifying sent attachments to have users download files from an external URL rather than the generated SharePoint link.
  • Spoofing Microsoft Teams attachments to appear as harmless files but download a malicious executable or document.
  • Insecure URI schemes that allow SMB NTLM hash theft or NTLM Relay attacks.
  • Microsoft supports sending HTML base64 encoded GIFs, but does not scan the byte content of those GIFs. This allows malicious commands to be delivered within a normal-looking GIF.
  • Microsoft stores Teams messages in a parsable log file, located locally on the victim’s machine, and accessible by a low-privileged user.
  • Microsoft servers retrieve GIFs from remote servers, allowing data exfiltration via GIF filenames.

GIFShell – a reverse shell via GIFs

The new attack chain was discovered by cybersecurity consultant and pentester Bobby Rauch, who found numerous vulnerabilities, or flaws, in Microsoft Teams that can be chained together for command execution, data exfiltration, security control bypasses, and phishing attacks.

The main component of this attack is called ‘GIFShell,’ which allows an attacker to create a reverse shell that delivers malicious commands via base64 encoded GIFs in Teams, and exfiltrates the output through GIFs retrieved by Microsoft’s own infrastructure.

To create this reverse shell, the attacker must first convince a user to install a malicious stager that executes commands and uploads command output via a GIF URL to a Microsoft Teams webhook. However, because phishing attacks work well in infecting devices, Rauch came up with a novel phishing attack in Microsoft Teams to aid in this, which we describe in the next section.

GIFShell works by tricking a user into loading a malware executable called the “stager” on their device that will continuously scan the Microsoft Teams logs located at $HOME\AppData\Roaming\Microsoft\Teams\IndexedDB\https_teams.microsoft.com_0.indexeddb.leveldb\*.log.

Microsoft Teams log folder (Source: BleepingComputer)

All received messages are saved to these logs and are readable by all Windows user groups, meaning any malware on the device can access them.
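
As a purely defensive, hedged illustration (my own sketch, not Rauch’s tooling), the snippet below scans those same Teams log files for unusually long base64-looking tokens, a rough heuristic for the kind of embedded payloads described above. The path comes from the article; the 200-character threshold is an assumption.

  # Hedged defensive sketch: flag long base64-like tokens in the local Teams message logs.
  # The 200-character threshold is an illustrative assumption, not a detection rule.
  import glob
  import os
  import re

  log_glob = os.path.expandvars(
      r"%USERPROFILE%\AppData\Roaming\Microsoft\Teams\IndexedDB"
      r"\https_teams.microsoft.com_0.indexeddb.leveldb\*.log"
  )
  base64_token = re.compile(rb"[A-Za-z0-9+/=]{200,}")

  for path in glob.glob(log_glob):
      with open(path, "rb") as handle:
          for match in base64_token.finditer(handle.read()):
              print(f"{path}: suspicious token of length {len(match.group())}")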

Once the stager is in place, a threat actor would create their own Microsoft Teams tenant and contact other Microsoft Teams users outside of their organization. Attackers can easily achieve this as Microsoft allows external communication by default in Microsoft Teams.

To initiate the attack, the threat actor can use Rauch’s GIFShell Python script to send a message to a Microsoft Teams user that contains a specially crafted GIF. This legitimate GIF image has been modified to include commands to execute on a target’s machine.

When the target receives the message, the message and the GIF will be stored in the Microsoft Teams logs, which the malicious stager monitors.

When the stager detects a message with a GIF, it will extract the base64 encoded commands and execute them on the device. The GIFShell PoC will then take the output of the executed command and convert it to base64 text.

This base64 text is used as the filename for a remote GIF embedded in a Microsoft Teams Survey Card that the stager submits to the attacker’s public Microsoft Teams webhook.

As Microsoft Teams renders flash cards for the user, Microsoft’s servers will connect back to the attacker’s server URL to retrieve the GIF, which is named using the base64 encoded output of the executed command.

The GIFShell server running on the attacker’s server will receive this request and automatically decode the filename allowing the attackers to see the output of the command run on the victim’s device, as shown below.

For example, a retrieved GIF file named ‘dGhlIHVzZXIgaXM6IA0KYm9iYnlyYXVjaDYyNzRcYm9iYnlyYXVJa0K.gif’ would decode to the output from the ‘whoami’ command executed on the infected device:

the user is: 
bobbyrauch6274\bobbyrauIkBáë
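
The decoding step is plain base64. The short sketch below reproduces it for the filename quoted above; padding is added because the filename omits it, and the trailing bytes may decode to the stray characters shown in the example output.

  # Decoding the example GIF filename from above with standard base64.
  import base64

  name = "dGhlIHVzZXIgaXM6IA0KYm9iYnlyYXVjaDYyNzRcYm9iYnlyYXVJa0K"
  padded = name + "=" * (-len(name) % 4)   # restore the padding the filename drops
  print(base64.b64decode(padded).decode("utf-8", errors="replace"))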

The threat actors can continue using the GIFShell server to send more GIFs, with further embedded commands to execute, and continue to receive the output when Microsoft attempts to retrieve the GIFs.

As these requests are made by the Microsoft website, urlp.asm.skype.com, used for regular Microsoft Teams communication, the traffic will be seen as legitimate and not detected by security software.

This allows the GIFShell attack to covertly exfiltrate data by mixing the output of their commands with legitimate Microsoft Teams network communication.

Even worse, as Microsoft Teams runs as a background process, it does not even need to be opened by the user to receive the attacker’s commands to execute.

The Microsoft Teams log folder has also been found to be accessed by other programs, including business monitoring software, such as Veriato, and potentially malware.

Microsoft acknowledged the research but said it would not be fixed as no security boundaries were bypassed.

“For this case, 72412, while this is great research and the engineering team will endeavor to improve these areas over time, these all are post exploitation and rely on a target already being compromised,” Microsoft told Rauch in an email shared with BleepingComputer.

“No security boundary appears to be bypassed.  The product team will review the issue for potential future design changes, but this would not be tracked by the security team.”

Abusing Microsoft Teams for phishing attacks

As we previously said, the GIFShell attack requires the installation of an executable that executes commands received within the GIFs.

To aid in this, Rauch discovered Microsoft Teams flaws that allowed him to send malicious files to Teams users but spoof them to look like harmless images in phishing attacks.

“This research demonstrates how it is possible to send highly convincing phishing attachments to victims through Microsoft Teams, without any way for a user to pre-screen whether the linked attachment is malicious or not,” explains Rauch in his writeup on the phishing method.

As we previously said in our discussion about GIFShell, Microsoft Teams allows Microsoft Teams users to message users in other Tenants by default. 

However, to prevent attackers from using Microsoft Teams in malware phishing attacks, Microsoft does not allow external users to send attachments to members of another tenant.

While playing with attachments in Microsoft Teams, Rauch discovered that when someone sends a file to another user in the same tenant, Microsoft generates a SharePoint link that is embedded in a JSON POST request to the Teams endpoint.

This JSON message, though, can then be modified to include any download link an attacker wants, even external links. Even worse, when the JSON is sent to a user via Teams’ conversation endpoint, it can also be used to send attachments as an external user, bypassing Microsoft Teams’ security restrictions.

For example, the JSON below has been modified to show a file name of Christmas_Party_Photo.jpeg but actually delivers a remote Christmas_Party_Photo.jpeg………….exe executable.

Microsoft Teams JSON to spoof an attachment (Source: Bobby Rauch)

When the attachment is rendered in Teams, it is displayed as Christmas_Party_Photo.jpeg, and when highlighting it, it will continue to show that name, as shown below.

Spoofing a JPEG file (Source: Bobby Rauch)

However, when the user clicks on the link, the attachment will download the executable from the attacker’s server.

In addition to using this Microsoft Teams spoofing phishing attack to send malicious files to external users, attackers can also modify the JSON to use Windows URIs, such as ms-excel:, to automatically launch an application to retrieve a document.

Rauch says this would allow attackers to trick users into connecting to a remote network share, letting threat actors steal NTLM hashes, or local attackers perform an NTLM relay attack to elevate privileges.

“These allowed, potentially unsafe URI schemes, combined with the lack of permissions enforcement and attachment spoofing vulnerabilities, can allow for a One Click RCE via NTLM relay in Microsoft Teams,” Rauch explains in his report on the spoofing attack.

Microsoft not immediately fixing bugs

Rauch told BleepingComputer that he disclosed the flaws to Microsoft in May and June of 2022, and despite Microsoft saying they were valid issues, they decided not to fix them immediately.

When BleepingComputer contacted Microsoft about why the bugs were not fixed, we were not surprised by their response regarding the GIFShell attack technique, as it requires the device to already be compromised with malware.

“This type of phishing is important to be aware of and as always, we recommend that users practice good computing habits online, including exercising caution when clicking on links to web pages, opening unknown files, or accepting file transfers.

We’ve assessed the techniques reported by this researcher and have determined that the two mentioned do not meet the bar for an urgent security fix. We’re constantly looking at new ways to better resist phishing to help ensure customer security and may take action in a future release to help mitigate this technique.” – a Microsoft spokesperson. 

However, we were surprised that Microsoft did not consider the ability of external attackers to bypass security controls and send attachments to another tenant to be something that should be immediately fixed.

Furthermore, not immediately fixing the ability to modify JSON attachment cards so that Microsoft Teams recipients could be tricked to download files from remote URLs was also surprising.

However, Microsoft has left the door open to resolving these issues, telling BleepingComputer that they may be serviced in future versions.

“Some lower severity vulnerabilities that don’t pose an immediate risk to customers are not prioritized for an immediate security update, but will be considered for the next version or release of Windows,” explained Microsoft in a statement to BleepingComputer.

Source :
https://www.bleepingcomputer.com/news/security/gifshell-attack-creates-reverse-shell-using-microsoft-teams-gifs/

KB5004442—Manage changes for Windows DCOM Server Security Feature Bypass (CVE-2021-26414)

Summary

The Distributed Component Object Model (DCOM) Remote Protocol is a protocol for exposing application objects using remote procedure calls (RPCs). DCOM is used for communication between the software components of networked devices.  

Hardening changes in DCOM were required for CVE-2021-26414. Therefore, we recommend that you verify whether client or server applications in your environment that use DCOM or RPC work as expected with the hardening changes enabled.

To address the vulnerability described in CVE-2021-26414, you must install updates released September 14, 2021 or later and enable the registry key described below in your environment. We recommend that you complete testing in your environment and enable these hardening changes as soon as possible. If you find issues during testing, you must contact the vendor for the affected client or server software for an update or workaround before early 2022.

Note We recommend that you update your devices to the latest security update available to take advantage of the advanced protections from the latest security threats.

Timeline

Update release | Behavior change
June 8, 2021 | Hardening changes disabled by default but with the ability to enable them using a registry key.
June 14, 2022 | Hardening changes enabled by default but with the ability to disable them using a registry key.
March 14, 2023 | Hardening changes enabled by default with no ability to disable them. By this point, you must resolve any compatibility issues with the hardening changes and applications in your environment.

Registry setting to enable or disable the hardening changes

During the timeline phases in which you can enable or disable the hardening changes for CVE-2021-26414, you can use the following registry key:

  • Path : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Ole\AppCompat
  • Value Name: “RequireIntegrityActivationAuthenticationLevel”
  • Type: dword
  • Value Data: default = 0x00000000 means disabled. 0x00000001 means enabled. If this value is not defined, it will default to enabled.

Note You must enter Value Data in hexadecimal format. 

Important You must restart your device after setting this registry key for it to take effect.

Note Enabling the registry key above will make DCOM servers enforce an Authentication-Level of RPC_C_AUTHN_LEVEL_PKT_INTEGRITY or higher for activation.

Note This registry value does not exist by default; you must create it. Windows will read it if it exists and will not overwrite it.
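
As a hedged sketch (not part of the KB), the value described above can be created with Python's standard winreg module when run as Administrator; this is functionally equivalent to creating it with reg.exe or regedit.

  # Hedged sketch: create and enable the DCOM hardening registry value (run elevated).
  import winreg

  key_path = r"SOFTWARE\Microsoft\Ole\AppCompat"
  with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
      winreg.SetValueEx(key, "RequireIntegrityActivationAuthenticationLevel", 0,
                        winreg.REG_DWORD, 0x00000001)  # 1 = enabled, 0 = disabled
  # Remember: the device must be restarted for the change to take effect.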

New DCOM error events

To help you identify the applications that might have compatibility issues after we enable DCOM security hardening changes, we added new DCOM error events in the System log; see the tables below. The system will log these events if it detects that a DCOM client application is trying to activate a DCOM server using an authentication level that is less than RPC_C_AUTHN_LEVEL_PKT_INTEGRITY. You can trace to the client device from the server-side event log and use client-side event logs to find the application.

Server events

Event ID | Message
10036 | “The server-side authentication level policy does not allow the user %1\%2 SID (%3) from address %4 to activate DCOM server. Please raise the activation authentication level at least to RPC_C_AUTHN_LEVEL_PKT_INTEGRITY in client application.” (%1 – domain, %2 – user name, %3 – User SID, %4 – Client IP Address)

Client events

Event ID | Message
10037 | “Application %1 with PID %2 is requesting to activate CLSID %3 on computer %4 with explicitly set authentication level at %5. The lowest activation authentication level required by DCOM is 5 (RPC_C_AUTHN_LEVEL_PKT_INTEGRITY). To raise the activation authentication level, please contact the application vendor.”
10038 | “Application %1 with PID %2 is requesting to activate CLSID %3 on computer %4 with default activation authentication level at %5. The lowest activation authentication level required by DCOM is 5 (RPC_C_AUTHN_LEVEL_PKT_INTEGRITY). To raise the activation authentication level, please contact the application vendor.” (%1 – Application Path, %2 – Application PID, %3 – CLSID of the COM class the application is requesting to activate, %4 – Computer Name, %5 – Value of Authentication Level)
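
One hedged way to look for the events listed above (not described in the KB itself) is to query the System log with the built-in wevtutil tool, for example driven from Python:

  # Hedged sketch: pull recent DCOM hardening events (10036-10038) from the System log.
  import subprocess

  for event_id in (10036, 10037, 10038):
      query = f"*[System[(EventID={event_id})]]"
      result = subprocess.run(
          ["wevtutil", "qe", "System", f"/q:{query}", "/c:5", "/rd:true", "/f:text"],
          capture_output=True, text=True,
      )
      print(f"--- Event ID {event_id} ---")
      print(result.stdout or "(no events found)")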

Availability

These error events are only available for a subset of Windows versions; see the table below.

Windows version | Available on or after these dates
Windows Server 2022 | September 27, 2021 (KB5005619)
Windows 10, version 2004; Windows 10, version 20H2; Windows 10, version 21H1 | September 1, 2021 (KB5005101)
Windows 10, version 1909 | August 26, 2021 (KB5005103)
Windows Server 2019; Windows 10, version 1809 | August 26, 2021 (KB5005102)
Windows Server 2016; Windows 10, version 1607 | September 14, 2021 (KB5005573)
Windows Server 2012 R2 and Windows 8.1 | October 12, 2021 (KB5006714)

Source :
https://support.microsoft.com/en-us/topic/kb5004442-manage-changes-for-windows-dcom-server-security-feature-bypass-cve-2021-26414-f1400b52-c141-43d2-941e-37ed901c769c

PSA: Nearly 5 Million Attacks Blocked Targeting 0-Day in BackupBuddy Plugin

Late evening, on September 6, 2022, the Wordfence Threat Intelligence team was alerted to the presence of a vulnerability being actively exploited in BackupBuddy, a WordPress plugin we estimate has around 140,000 active installations. This vulnerability makes it possible for unauthenticated users to download arbitrary files from the affected site which can include sensitive information.

After reviewing historical data, we determined that attackers started targeting this vulnerability on August 26, 2022, and that we have blocked 4,948,926 attacks targeting this vulnerability since that time.

The vulnerability affects versions 8.5.8.0 to 8.7.4.1, and has been fully patched as of September 2, 2022 in version 8.7.5. Due to the fact that this is an actively exploited vulnerability, we strongly encourage you to ensure your site has been updated to the latest patched version 8.7.5 which iThemes has made available to all site owners running a vulnerable version regardless of licensing status.

All Wordfence customers, including Wordfence Premium, Wordfence Care, Wordfence Response, and Wordfence Free users, have been, and will continue to be, protected against any attackers trying to exploit this vulnerability due to the Wordfence firewall’s built-in directory traversal and file inclusion firewall rules. Wordfence Premium, Care, and Response customers receive enhanced protection as attackers heavily targeting the vulnerability are blocked by the IP Blocklist.

Vulnerability Details

Description: Arbitrary File Download/Read
Affected Plugin: BackupBuddy
Plugin Slug: backupbuddy
Plugin Developer: iThemes
Affected Versions: 8.5.8.0 – 8.7.4.1
CVE ID: CVE-2022-31474
CVSS Score: 7.5 (High)
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
Fully Patched Version: 8.7.5

The BackupBuddy plugin for WordPress is designed to make back-up management easy for WordPress site owners. One of the features in the plugin is to store back-up files in multiple different locations, known as Destinations, which includes Google Drive, OneDrive, and AWS just to name a few. There is also the ability to store back-up downloads locally via the ‘Local Directory Copy’ option. Unfortunately, the method to download these locally stored files was insecurely implemented making it possible for unauthenticated users to download any file stored on the server.

More specifically, the plugin registers an admin_init hook for the function intended to download local back-up files, and the function itself did not have any capability checks nor any nonce validation. This means that the function could be triggered via any administrative page, including those that can be called without authentication (admin-post.php), making it possible for unauthenticated users to call the function. The back-up path is not validated, and therefore an arbitrary file could be supplied and subsequently downloaded.

Due to this vulnerability being actively exploited, and its ease of exploitation, we are sharing minimal details about this vulnerability.

Indicators of Compromise

The Wordfence firewall has blocked over 4.9 million exploit attempts targeting this vulnerability since August 26, 2022, which is the first indication we have that this vulnerability was being exploited. We are seeing attackers attempting to retrieve sensitive files such as the /wp-config.php and /etc/passwd file which can be used to further compromise a victim.

The top 10 Attacking IP Addresses are as follows:

  • 195.178.120.89 with 1,960,065 attacks blocked
  • 51.142.90.255 with 482,604 attacks blocked
  • 51.142.185.212 with 366,770 attacks blocked
  • 52.229.102.181 with 344,604 attacks blocked
  • 20.10.168.93 with 341,309 attacks blocked
  • 20.91.192.253 with 320,187 attacks blocked
  • 23.100.57.101 with 303,844 attacks blocked
  • 20.38.8.68 with 302,136 attacks blocked
  • 20.229.10.195 with 277,545 attacks blocked
  • 20.108.248.76 with 211,924 attacks blocked

A majority of the attacks we have observed are attempting to read the following files:

  • /etc/passwd
  • /wp-config.php
  • .my.cnf
  • .accesshash

We recommend checking for the ‘local-download’ and/or the ‘local-destination-id’ parameter value when reviewing requests in your access logs. Presence of these parameters along with a full path to a file or the presence of ../../ to a file indicates the site may have been targeted for exploitation by this vulnerability. If the site is compromised, this can suggest that the BackupBuddy plugin was likely the source of compromise.
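
As a hedged helper (my own sketch, not Wordfence tooling), the snippet below scans a web server access log for the parameters and traversal patterns described above; the log path is an assumption you would replace with your server’s access log location.

  # Hedged sketch: scan an access log for BackupBuddy arbitrary-file-download indicators.
  # The log path is an assumption; point it at your web server's access log.
  import re

  indicators = re.compile(
      r"local-download|local-destination-id|\.\./\.\./|/etc/passwd|wp-config\.php"
  )

  with open("/var/log/apache2/access.log", encoding="utf-8", errors="replace") as log:
      for line_number, line in enumerate(log, start=1):
          if indicators.search(line):
              print(f"line {line_number}: {line.strip()}")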

Conclusion

In today’s post, we detailed a zero-day vulnerability being actively exploited in the BackupBuddy plugin that makes it possible for unauthenticated attackers to steal sensitive files from an affected site and use the information obtained in those files to further infect a victim. This vulnerability was patched yesterday and we strongly recommend updating to the latest version of the plugin, currently version 8.7.5, right now since this is an actively exploited vulnerability.

All Wordfence customers, including Wordfence Premium, Wordfence Care, Wordfence Response, and Wordfence Free users, have been, and will continue to be, protected against any attackers trying to exploit this vulnerability due to the Wordfence firewall’s built-in directory traversal and file inclusion firewall rules.

If you believe your site has been compromised as a result of this vulnerability or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both these products include hands-on support in case you need further assistance.

If you know a friend or colleague who is using this plugin on their site, we highly recommend forwarding this advisory to them to help keep their sites protected, as this is a serious vulnerability that is actively being exploited in the wild.

We will continue to monitor the situation and follow up as more information becomes available.

Source :
https://www.wordfence.com/blog/2022/09/psa-nearly-5-million-attacks-blocked-targeting-0-day-in-backupbuddy-plugin/

#StopRansomware: Vice Society

Summary

Actions to take today to mitigate cyber threats from ransomware:

• Prioritize and remediate known exploited vulnerabilities.
• Train users to recognize and report phishing attempts.
• Enable and enforce multifactor authentication.

Note: This joint Cybersecurity Advisory (CSA) is part of an ongoing #StopRansomware effort to publish advisories for network defenders that detail various ransomware variants and ransomware threat actors. These #StopRansomware advisories include recently and historically observed tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) to help organizations protect against ransomware. Visit stopransomware.gov to see all #StopRansomware advisories and to learn more about other ransomware threats and no-cost resources.

The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) are releasing this joint CSA to disseminate IOCs and TTPs associated with Vice Society actors identified through FBI investigations as recently as September 2022. The FBI, CISA, and the MS-ISAC have recently observed Vice Society actors disproportionately targeting the education sector with ransomware attacks.

Over the past several years, the education sector, especially kindergarten through twelfth grade (K-12) institutions, has been a frequent target of ransomware attacks. Impacts from these attacks have included restricted access to networks and data, delayed exams, canceled school days, and unauthorized access to and theft of personal information regarding students and staff. The FBI, CISA, and the MS-ISAC anticipate attacks may increase as the 2022/2023 school year begins and criminal ransomware groups perceive opportunities for successful attacks. School districts with limited cybersecurity capabilities and constrained resources are often the most vulnerable; however, the opportunistic targeting often seen with cyber criminals can still put school districts with robust cybersecurity programs at risk. K-12 institutions may be seen as particularly lucrative targets due to the amount of sensitive student data accessible through school systems or their managed service providers.

The FBI, CISA, and the MS-ISAC encourage organizations to implement the recommendations in the Mitigations section of this CSA to reduce the likelihood and impact of ransomware incidents.


Technical Details

Note: This advisory uses the MITRE ATT&CK® for Enterprise framework, version 11. See MITRE ATT&CK for Enterprise for all referenced tactics and techniques.

Vice Society is an intrusion, exfiltration, and extortion hacking group that first appeared in summer 2021. Vice Society actors do not use a ransomware variant of unique origin. Instead, the actors have deployed versions of Hello Kitty/Five Hands and Zeppelin ransomware, but may deploy other variants in the future.

Vice Society actors likely obtain initial network access through compromised credentials by exploiting internet-facing applications [T1190]. Prior to deploying ransomware, the actors spend time exploring the network, identifying opportunities to increase accesses, and exfiltrating data [TA0010] for double extortion, a tactic whereby actors threaten to publicly release sensitive data unless a victim pays a ransom. Vice Society actors have been observed using a variety of tools, including SystemBC, PowerShell Empire, and Cobalt Strike, to move laterally. They have also used “living off the land” techniques targeting the legitimate Windows Management Instrumentation (WMI) service [T1047] and tainting shared content [T1080].

Vice Society actors have been observed exploiting the PrintNightmare vulnerability (CVE-2021-1675 and CVE-2021-34527) to escalate privileges [T1068]. To maintain persistence, the criminal actors have been observed leveraging scheduled tasks [T1053], creating undocumented autostart Registry keys [T1547.001], and pointing legitimate services to their custom malicious dynamic link libraries (DLLs) through a tactic known as DLL side-loading [T1574.002]. Vice Society actors attempt to evade detection through masquerading their malware and tools as legitimate files [T1036], using process injection [T1055], and likely use evasion techniques to defeat automated dynamic analysis [T1497]. Vice Society actors have been observed escalating privileges, then gaining access to domain administrator accounts, and running scripts to change the passwords of victims’ network accounts to prevent the victim from remediating.
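
For defenders hunting these persistence techniques, the autostart locations and scheduled tasks described above can be enumerated with built-in PowerShell. The sketch below is illustrative only and is not part of the advisory; the Run key paths and the task filter are assumptions to adapt to your environment.

# Illustrative triage sketch (not from the advisory): list autostart Run keys and
# non-Microsoft scheduled tasks so unexpected entries can be reviewed manually.
$runKeys = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run',
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce',
    'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run'
)

foreach ($key in $runKeys) {
    if (Test-Path $key) {
        Write-Output "=== $key ==="
        # Each value under the key is a program configured to start automatically.
        Get-ItemProperty -Path $key | Select-Object * -ExcludeProperty PS* | Format-List
    }
}

# Scheduled tasks outside the \Microsoft\ path are a useful starting point for review.
Get-ScheduledTask |
    Where-Object { $_.TaskPath -notlike '\Microsoft\*' } |
    Select-Object TaskName, TaskPath, State |
    Format-Table -AutoSize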

Indicators of Compromise (IOCs)

Email Addresses
v-society.official@onionmail[.]org
ViceSociety@onionmail[.]org
OnionMail email accounts in the format of [First Name][Last Name]@onionmail[.]org
TOR Address
http://vsociethok6sbprvevl4dlwbqrzyhxcxaqpvcqt5belwvsuxaxsutyad[.]onion
IP Addresses for C2 (with confidence level)
• 5.255.99[.]59 (High Confidence)
• 5.161.136[.]176 (Medium Confidence)
• 198.252.98[.]184 (Medium Confidence)
• 194.34.246[.]90 (Low Confidence)
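
As one concrete, purely illustrative response step (not taken from the advisory), the C2 addresses above can be blocked outbound at the Windows host firewall. The rule name is an arbitrary placeholder, the defanging brackets are removed, and you should validate the addresses against current threat intelligence before deploying anything like this.

# Illustrative sketch: block outbound traffic to the listed C2 addresses with the
# built-in Windows Defender Firewall. The display name is an arbitrary placeholder.
$c2Addresses = '5.255.99.59', '5.161.136.176', '198.252.98.184', '194.34.246.90'

New-NetFirewallRule -DisplayName 'Block Vice Society C2 (AA22-249A)' `
    -Direction Outbound `
    -RemoteAddress $c2Addresses `
    -Action Block `
    -Profile Any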

See Table 1 for file hashes obtained from FBI incident response investigations in September 2022.

Table 1: File Hashes as of September 2022

MD5:
• fb91e471cfa246beb9618e1689f1ae1d

SHA1:
• a0ee0761602470e24bcea5f403e8d1e8bfa29832
• 3122ea585623531df2e860e7d0df0f25cce39b21
• 41dc0ba220f30c70aea019de214eccd650bc6f37
• c9c2b6a5b930392b98f132f5395d54947391cb79

MITRE ATT&CK TECHNIQUES

Vice Society actors have used ATT&CK techniques, similar to Zeppelin techniques, listed in Table 2.

Table 2: Vice Society Actors ATT&CK Techniques for Enterprise

Initial Access
• Exploit Public-Facing Application (T1190): Vice Society actors exploit vulnerabilities in internet-facing systems to gain access to victims’ networks.
• Valid Accounts (T1078): Vice Society actors obtain initial network access through compromised valid accounts.

Execution
• Windows Management Instrumentation (WMI) (T1047): Vice Society actors leverage WMI as a means of “living off the land” to execute malicious commands. WMI is a native Windows administration feature.
• Scheduled Task/Job (T1053): Vice Society actors have used malicious files that create component task schedule objects, which are often meant to register a specific task to autostart on system boot. This facilitates recurring execution of their code.

Persistence
• Modify System Process (T1543.003): Vice Society actors encrypt Windows Operating functions to preserve compromised system functions.
• Registry Run Keys/Startup Folder (T1547.001): Vice Society actors have employed malicious files that create an undocumented autostart Registry key to maintain persistence after boot/reboot.
• DLL Side-Loading (T1574.002): Vice Society actors may directly side-load their payloads by planting their own DLL, then invoking a legitimate application that executes the payload within that DLL. This serves as both a persistence mechanism and a means to masquerade actions under legitimate programs.

Privilege Escalation
• Exploitation for Privilege Escalation (T1068): Vice Society actors have been observed exploiting the PrintNightmare vulnerability (CVE-2021-1675 and CVE-2021-34527) to escalate privileges.

Defense Evasion
• Masquerading (T1036): Vice Society actors may attempt to manipulate features of the files they drop in a victim’s environment to mask the files or make them appear legitimate.
• Process Injection (T1055): Vice Society artifacts have been analyzed to reveal the ability to inject code into legitimate processes for evading process-based defenses. This tactic has other potential impacts, including the ability to escalate privileges or gain additional accesses.
• Sandbox Evasion (T1497): Vice Society actors may have included sleep techniques in their files to hinder common reverse engineering or dynamic analysis.

Lateral Movement
• Taint Shared Content (T1080): Vice Society actors may deliver payloads to remote systems by adding content to shared storage locations such as network drives.

Exfiltration
• Exfiltration (TA0010): Vice Society actors are known for double extortion, which is a second attempt to force a victim to pay by threatening to expose sensitive information if the victim does not pay a ransom.

Impact
• Data Encrypted for Impact (T1486): Vice Society actors have encrypted data on target systems or on large numbers of systems in a network to interrupt availability to system and network resources.
• Account Access Removal (T1531): Vice Society actors run a script to change passwords of victims’ email accounts.

Mitigations

The FBI and CISA recommend organizations, particularly the education sector, establish and maintain strong liaison relationships with the FBI Field Office in their region and their regional CISA Cybersecurity Advisor. The location and contact information for FBI Field Offices and CISA Regional Offices can be located at www.fbi.gov/contact-us/field-offices and www.cisa.gov/cisa-regions, respectively. Through these partnerships, the FBI and CISA can assist with identifying vulnerabilities to academia and mitigating potential threat activity. The FBI and CISA further recommend that academic entities review and, if needed, update incident response and communication plans that list actions an organization will take if impacted by a cyber incident.

The FBI, CISA, and the MS-ISAC recommend network defenders apply the following mitigations to limit potential adversarial use of common system and network discovery techniques and to reduce the risk of compromise by Vice Society actors:

Preparing for Cyber Incidents

  • Maintain offline backups of data, and regularly test backup and restoration. By instituting this practice, the organization ensures it will not be severely interrupted or left with irretrievable data if it is hit by ransomware.
  • Ensure all backup data is encrypted, immutable (i.e., cannot be altered or deleted), and covers the entire organization’s data infrastructure. Ensure your backup data is not already infected.
  • Review the security posture of third-party vendors and those interconnected with your organization. Ensure all connections between third-party vendors and outside software or hardware are monitored and reviewed for suspicious activity.
  • Implement allowlisting policies for applications and remote access that only allow systems to execute known and permitted programs under an established security policy.
  • Document and monitor external remote connections. Organizations should document approved solutions for remote management and maintenance, and immediately investigate if an unapproved solution is installed on a workstation.
  • Implement a recovery plan to maintain and retain multiple copies of sensitive or proprietary data and servers in a physically separate, segmented, and secure location (i.e., hard drive, storage device, the cloud).

Identity and Access Management

  • Require all accounts with password logins (e.g., service account, admin accounts, and domain admin accounts) to comply with National Institute of Standards and Technology (NIST) standards for developing and managing password policies.
    • Use longer passwords consisting of at least 8 characters and no more than 64 characters in length;
    • Store passwords in hashed format using industry-recognized password managers;
    • Add password user “salts” to shared login credentials;
    • Avoid reusing passwords;
    • Implement multiple failed login attempt account lockouts;
    • Disable password “hints”;
    • Refrain from requiring password changes more frequently than once per year unless a password is known or suspected to be compromised.
      Note: NIST guidance suggests favoring longer passwords instead of requiring regular and frequent password resets. Frequent password resets are more likely to result in users developing password “patterns” cyber criminals can easily decipher.
    • Require administrator credentials to install software.
  • Require phishing-resistant multifactor authentication for all services to the extent possible, particularly for webmail, virtual private networks, and accounts that access critical systems.
  • Review domain controllers, servers, workstations, and active directories for new and/or unrecognized accounts.
  • Audit user accounts with administrative privileges and configure access controls according to the principle of least privilege (a minimal audit sketch follows this list).
  • Implement time-based access for accounts set at the admin level and higher. For example, the Just-in-Time (JIT) access method provisions privileged access when needed and can support enforcement of the principle of least privilege (as well as the Zero Trust model). This is a process where a network-wide policy is set in place to automatically disable admin accounts at the Active Directory level when the account is not in direct need. Individual users may submit their requests through an automated process that grants them access to a specified system for a set timeframe when they need to support the completion of a certain task.
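
A minimal sketch of the privileged-account audit mentioned above, assuming the ActiveDirectory RSAT module is available; the group list is an illustrative assumption, not an exhaustive one.

# Illustrative sketch: enumerate members of common privileged AD groups so the list
# can be reviewed against the principle of least privilege.
Import-Module ActiveDirectory

$privilegedGroups = 'Domain Admins', 'Enterprise Admins', 'Schema Admins'

foreach ($group in $privilegedGroups) {
    Get-ADGroupMember -Identity $group -Recursive |
        Where-Object { $_.objectClass -eq 'user' } |
        Get-ADUser -Properties LastLogonDate |
        Select-Object @{ n = 'Group'; e = { $group } }, SamAccountName, Enabled, LastLogonDate
}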

Protective Controls and Architecture

  • Segment networks to prevent the spread of ransomware. Network segmentation can help prevent the spread of ransomware by controlling traffic flows between—and access to—various subnetworks and by restricting adversary lateral movement.
  • Identify, detect, and investigate abnormal activity and potential traversal of the indicated ransomware with a networking monitoring tool. To aid in detecting the ransomware, implement a tool that logs and reports all network traffic, including lateral movement activity on a network. Endpoint detection and response (EDR) tools are particularly useful for detecting lateral connections as they have insight into common and uncommon network connections for each host.
  • Install, regularly update, and enable real time detection for antivirus software on all hosts.
  • Secure and closely monitor remote desktop protocol (RDP) use (see the sketch after this list).
    • Limit access to resources over internal networks, especially by restricting RDP and using virtual desktop infrastructure. If RDP is deemed operationally necessary, restrict the originating sources and require MFA to mitigate credential theft and reuse. If RDP must be available externally, use a VPN, virtual desktop infrastructure, or other means to authenticate and secure the connection before allowing RDP to connect to internal devices. Monitor remote access/RDP logs, enforce account lockouts after a specified number of attempts to block brute force campaigns, log RDP login attempts, and disable unused remote access/RDP ports.
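
The sketch below illustrates a few of the RDP controls above on a single Windows host. It is an assumption-laden example rather than authoritative guidance: the registry values shown are the standard Windows RDP settings, the lockout thresholds are placeholders, and in a domain these controls are better enforced through Group Policy.

# Illustrative sketch: tighten RDP on a single host (prefer Group Policy at scale).

# Require Network Level Authentication (NLA) for RDP connections.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
    -Name 'UserAuthentication' -Value 1

# Disable RDP entirely on hosts that do not need it.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
    -Name 'fDenyTSConnections' -Value 1

# Enforce account lockout after repeated failed logons (thresholds are examples only).
net accounts /lockoutthreshold:5 /lockoutduration:15 /lockoutwindow:15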

Vulnerability and Configuration Management

  • Keep all operating systems, software, and firmware up to date. Timely patching is one of the most efficient and cost-effective steps an organization can take to minimize its exposure to cybersecurity threats. Organizations should prioritize patching of vulnerabilities on CISA’s Known Exploited Vulnerabilities catalog.
  • Disable unused ports.
  • Consider adding an email banner to emails received from outside your organization.
  • Disable hyperlinks in received emails.
  • Disable command-line and scripting activities and permissions. Privilege escalation and lateral movement often depend on software utilities running from the command line. If threat actors are not able to run these tools, they will have difficulty escalating privileges and/or moving laterally.
  • Ensure devices are properly configured and that security features are enabled.
  • Disable ports and protocols that are not being used for a business purpose (e.g., RDP Transmission Control Protocol Port 3389).
  • Restrict Server Message Block (SMB) Protocol within the network to only access servers that are necessary, and remove or disable outdated versions of SMB (i.e., SMB version 1). Threat actors use SMB to propagate malware across organizations. A sketch for checking and disabling SMBv1 follows this list.
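
As a concrete illustration of the SMB guidance above (not text from the advisory), SMB version 1 can be checked and removed on Windows Server with built-in cmdlets.

# Illustrative sketch: verify and disable the outdated SMBv1 protocol on Windows Server.

# Check whether SMBv1 is currently enabled on the SMB server.
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol

# Disable SMBv1 at the SMB server level.
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# Remove the SMBv1 feature entirely (requires a restart to complete).
Uninstall-WindowsFeature -Name FS-SMB1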

REPORTING

The FBI is seeking any information that can be shared, to include boundary logs showing communication to and from foreign IP addresses, a sample ransom note, communications with Vice Society actors, Bitcoin wallet information, decryptor files, and/or a benign sample of an encrypted file.

The FBI, CISA, and the MS-ISAC strongly discourage paying ransom as payment does not guarantee victim files will be recovered. Furthermore, payment may also embolden adversaries to target additional organizations, encourage other criminal actors to engage in the distribution of ransomware, and/or fund illicit activities. Regardless of whether you or your organization have decided to pay the ransom, the FBI and CISA urge you to promptly report ransomware incidents to a local FBI Field Office, or to CISA at report@cisa.gov or (888) 282-0870. SLTT government entities can also report to the MS-ISAC (SOC@cisecurity.org or 866-787-4722).

DISCLAIMER

The information in this report is being provided “as is” for informational purposes only. The FBI, CISA, and the MS-ISAC do not endorse any commercial product or service, including any subjects of analysis. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply endorsement, recommendation, or favoring by the FBI, CISA, or the MS-ISAC.

Revisions

September 6, 2022: Initial Version

Source :
https://www.cisa.gov/uscert/ncas/alerts/aa22-249a

Teaming in Azure Stack HCI

As we’re deprecating the vSwitch attached to an LBFO team, this article introduces a new tool for converting your LBFO team to a SET team. To download this tool, run the following command or see the end of this article.

Install-Module Convert-LBFO2SET

Windows Server currently has two inbox teaming mechanisms with two very different purposes. In this article, we’ll describe several reasons why you should use Switch Embedded Teaming (SET) for Azure Stack HCI scenarios, and we’ll address several long-held teaming myths. We’d love to hear your feedback in the comments below. Let’s get started!

In Windows Server 2012 we released LBFO as an inbox teaming mechanism, and many customers have leveraged this technology to provide load balancing and failover between network adapters. Since then, the rise of software-defined storage and software-defined networking has brought performance and compatibility challenges with the LBFO architecture to the forefront (outlined in this article), which required a change in direction.

This new direction is called Switch Embedded Teaming (SET) and was introduced in Windows Server 2016. SET is available when Hyper-V is installed on any Server OS (Windows Server 2016 and higher) and Windows 10 version 1809 (and higher). You’re not required to run virtual machines to use SET, but the Hyper-V role must be installed.

In summary, LBFO is our older teaming technology that will not see future investment, is not compatible with numerous advanced capabilities, and has been exceeded in both performance and stability by our new technology (SET). We’d like to discuss why you should move off LBFO for virtualized and cloud scenarios. Let’s dig into this paragraph a bit.

LBFO is our older teaming technology that will not see future investment

[Image: Teaming in Azure Stack HCI]

With the intent to bring software-defined technologies like SDNv2 and containers to Windows Server, it became clear that we needed an alternative teaming solution and so we set off creating SET, circa 2014. Simultaneously reaching feature parity and stability with LBFO took time; several early adopters of SET will remember some of these pains. However, SET’s stability, performance, and features have now far surpassed LBFO.

All new features released since Windows Server 2016 (see below) were developed and tested with SET in mind; this includes all Azure Stack HCI solutions you may have purchased. Azure Stack HCI is not tested or certified with LBFO. This is largely due to development simplicity and testing: without driving too far into unimportant details, LBFO teams adapters inside NDIS, which is a large and complex component whose roots date back to Windows 95 (of course updated considerably since then). If your system has a NIC, you’re using NDIS. In the picture shown above, each component below the vSwitch was part of NDIS.

The size and complexity of scenarios included in NDIS made for very complex testing requirements that were only compounded by virtualized and software-defined technologies that considerably hampered feature innovation. You might think that this is just a Microsoft problem, but really this affects NIC vendor driver development time and stability as well.

All in all, we’re not focusing on LBFO much these days, particularly as software-defined Windows Server networking scenarios become more exotic with the rise of containers, software-defined networking, and much more. There’s a faster, more stable, and more performant teaming solution called Switch Embedded Teaming.

LBFO is not compatible with several advanced capabilities

Here’s a smattering of scenarios and features that are supported with SET but NOT LBFO:

Windows Admin Center – WAC is the de facto management tool for Windows Server and Azure Stack HCI, with millions of nodes under management. You can create and manage a SET team for a single host, or deploy a SET team to multiple hosts with the new Cluster Creation UI we released to help you deploy Azure Stack HCI solutions at Microsoft Ignite this year (watch the session, try it out, and give us feedback).

LBFO is not available for configuration in Windows Admin Center.

RDMA Teaming – Only SET can team RDMA adapters. RDMA is used, for example, with Storage Spaces Direct (S2D), which requires a reliable, high-bandwidth, low-latency network connection between each node. High bandwidth? Low latency? That’s RDMA’s bag, so it is the recommended pattern with S2D. Reliability? That’s SET’s claim to fame, so these two are a logical pairing.

Guest RDMA: SET supports RDMA into a virtual machine. This doesn’t work with LBFO for two reasons:

  • RDMA adapters cannot be teamed with LBFO, whether they are host adapters or virtual adapters.
  • RDMA uses SMB Multichannel, which requires multiple adapters to balance traffic across. Since you can’t assign vNIC-to-pNIC affinity with LBFO, neither the SMB nor the non-SMB traffic can be made highly available.

Guest Teaming is a strange one; you could add multiple virtual NICs to a Hyper-V VM; inside the VM, you could use LBFO to team the virtual NICs. However, you cannot affinitize a virtual NIC (vNIC) to a physical NIC (pNIC), so it’s possible that both vNICs added in the VM are sending and receiving traffic out of the same pNIC. If that pNIC fails, you lose both of your virtual NICs. 

SET allows you to map each vNIC to a pNIC to ensure that they don’t overlap, ensuring that a Guest Team is able to survive an adapter outage.
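
For illustration, this vNIC-to-pNIC affinity is configured with Set-VMNetworkAdapterTeamMapping. The adapter and VM names below are placeholders, so treat this as a sketch of the pattern rather than a prescribed configuration.

# Illustrative sketch: pin host and guest vNICs to specific physical team members so a
# single pNIC failure cannot take out both vNICs of a guest team.
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName 'SMB01' `
    -PhysicalNetAdapterName 'NIC 1'

Set-VMNetworkAdapterTeamMapping -VMName 'VM01' -VMNetworkAdapterName 'Network Adapter' `
    -PhysicalNetAdapterName 'NIC 2'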

Microsoft Software Defined Networking – SDN was first released in its modern form in Windows Server 2016 and requires a virtual switch extension called the Virtual Filtering Platform (VFP). VFP is the brains behind SDN and the same extension that runs our public cloud, Azure. VFP can only be added to a SET team.

This means that any of the SDN features (which are included with a Datacenter Edition license) like the Software Load Balancer, Gateways, Distributed Firewall (ACLs), and our modern network QoS capability are also unavailable if you’re using LBFO.

Container Networking – Container networking relies on a service called the Host Network Service (HNS). HNS also leverages VFP, and as mentioned in the SDN section, VFP can only be added to a Switch Embedded Team (SET). For more information on container networking, please see this link.

Virtual Machine Multi-Queues – VMMQ is a critical performance feature for Azure Stack HCI. VMMQ allows you to assign multiple VMQs to the same virtual NIC; without it, you rely on expensive software spreading operations (the OS spreads packets across multiple CPUs without hardware (NIC) assistance) that greatly increase CPU utilization on the host, reducing the number of virtual machines you can run.

Moreover, if your vNIC doesn’t get a VMQ, all traffic is processed by the default queue. With SET you can assign multiple VMQs to the default queue which can be shared as needed by any vNIC allowing more VMs to get the bandwidth they need.
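
For example (a sketch assuming Windows Server 2019 or later; the VM name and queue count are placeholders to adjust for your NIC hardware), VMMQ can be inspected and tuned per vNIC with the Hyper-V cmdlets.

# Illustrative sketch: inspect and tune VMMQ on a VM's network adapter (WS2019+).

# Show the current VMMQ-related settings for the vNIC.
Get-VMNetworkAdapter -VMName 'VM01' | Format-List Name, *Vmmq*

# Enable VMMQ and request a number of queue pairs for that vNIC.
Set-VMNetworkAdapter -VMName 'VM01' -VmmqEnabled $true -VmmqQueuePairs 4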

In this video, you can see the performance (throughput) benefits of Switch Embedded Teaming over that of LBFO.  The video demonstrates a 2x throughput improvement with SET over LBFO, while consuming ~10% additional CPU (a result of double the throughput).

Dynamic VMMQ – d.VMMQ won’t work with LBFO either. Dynamic VMMQ is an intelligent queue scheduling algorithm for VMMQ that recognizes when CPU cores are overworked by network traffic and automatically reassigns that network traffic processing to other cores, so your workloads (e.g., VMs, applications, etc.) can run without competing for processor time.

Here’s an example of some of the benefits of Dynamic VMMQ. In the video, you can see the host, spending CPU resources processing packets for a specific virtual NIC. When a competing workload begins on the system (which would prevent the virtual NIC from reaching maximum performance), we automatically tune the system by moving one of the workloads to an available processor.

RSC in the vSwitch is an acceleration that coalesces segments destined for the same virtual NIC into a larger segment.

Outbound network traffic is slimmed to fit into the MTU size of the physical network (default of ~1500 bytes). However, inbound traffic can be coalesced into one big segment. That one big segment takes far less processing than multiple small segments, so once traffic is received by the host, we can combine the segments and deliver several of them to the vNIC all at once. SET was made aware of RSC coalescing and supports this acceleration as of Windows Server 2019.
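
On Windows Server 2019, the behavior described above can be checked and toggled per virtual switch. A small sketch follows, with the switch name as a placeholder.

# Illustrative sketch: check and control RSC in the vSwitch (Windows Server 2019+).

# Show whether software RSC is active on the switch.
Get-VMSwitch -Name 'SET Team' | Select-Object Name, *Rsc*

# RSC in the vSwitch is on by default; it can be disabled for troubleshooting.
Set-VMSwitch -Name 'SET Team' -EnableSoftwareRsc $false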

We’re continuing to improve this feature for even better performance in the next version of Windows Server and Azure Stack HCI by enabling RSC in the vSwitch to extend over the VMBus. In the video below, we show one VM sending traffic to another VM with the improved acceleration disabled; this uses only the original Windows Server 2019 RSC in the vSwitch capabilities.

Next we enable the Windows Server vNext improvements; throughput is improved by ~17 Gbps while CPU resourcing is reduced by approximately 12% (20 cores on the system). This type of traffic pattern is specifically valuable for container scenarios that reside on the same host.

LBFO has been exceeded in both performance and stability by SET

Note: Guest RDMA, RSC in the vSwitch, VMMQ, and Dynamic VMMQ belong in this category as well.

Certified Azure Stack and Azure Stack HCI solutions test only SET

If all that wasn’t enough, both Microsoft and our partners validate and certify their solutions on SET, not LBFO. If you bought a certified Azure Stack HCI solution from one of our partners, or a standard or premium logo’d NIC, it was tested and validated with Switch Embedded Teaming. That means all certification tests were run with SET.

Link Aggregation Control Protocol (LACP)

[Image: Teaming in Azure Stack HCI]

Ok, so this one is a little counter-intuitive. LACP allows port-channels (switch-dependent teams) to send traffic to the host over more than one physical port simultaneously.

For native hosts this means that every port in the port-channel can send traffic simultaneously – for the system on the right with 2 x 50 Gbps NICs, it looks like one big pipe with a native host potentially receiving 100 Gbps. Naturally, you’d expect that this capability could extend to virtual NICs as well.

But things change with virtualization. When the traffic gets to the host, the NICs need to interrupt multiple, independent processors to exceed what a single CPU core can process – This is what VMMQ does, and as mentioned, VMMQ does not work with LBFO.

LBFO limits you to a single VMQ, and despite having (in the picture) 100 Gbps of inbound bandwidth, you would only receive about 5 Gbps per virtual NIC (or up to ~20 Gbps per vNIC at the painful expense of OS-based software spreading, which burns CPU that could otherwise be used for running virtual machine workloads).

[Image: Teaming in Azure Stack HCI]

With SET, switch-independent teaming, and the hardware assistance of VMMQ and enough CPUs in the system, you could receive all 100 Gbps of data into the host.

In summary, LACP provides no throughput benefits for Azure Stack HCI scenarios, incurs higher CPU consumption, and cannot auto-tune the system to avoid competition between workloads for virtualized scenarios (Dynamic VMMQ).

Asymmetric Adapters

While we’re myth-busting, let’s talk about adapter symmetry, which describes the degree to which adapters have the same make, model, speed, and configuration; SET requires adapter symmetry for Microsoft support. Usually the easiest way to identify this symmetry is by the device Interface Description (with PowerShell, use Get-NetAdapter). If the interface descriptions match (with the exception of the unique number given to each adapter, e.g., Intel NIC #1, Intel NIC #2, etc.), then the adapters are symmetric.
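
A quick way to eyeball that symmetry (a generic sketch, nothing SET-specific) is to compare the relevant adapter properties side by side.

# Illustrative sketch: compare make/model, speed, and driver details across adapters.
Get-NetAdapter |
    Select-Object Name, InterfaceDescription, LinkSpeed, Driver* |
    Sort-Object InterfaceDescription |
    Format-Table -AutoSize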

Prior to Windows Server 2016, conventional wisdom stated that you should use different NICs with different drivers in a team. The thinking was that if one driver had an issue, another team member would survive, and the team would remain up. This is a common benefit customers cite in favor of LBFO: it supports asymmetric adapters.

However, two drivers mean twice as many things can go wrong, in fact increasing the likelihood of a problem. In our review of customer support cases, a properly designed infrastructure with symmetric adapters is far more stable. As a result, support for asymmetric teams is no longer a differentiator for LBFO, nor do we recommend asymmetric teams for Azure Stack HCI scenarios, where reliability is the #1 requirement.

LBFO for management adapters

Some customers I’ve worked with have asked if they should use LBFO for management adapters when a vSwitch is not attached. Our recommendation is to use SET whenever it is available. A management adapter’s goal in life is to be stable, and we see fewer support cases with SET.

To be clear, if the adapter is not attached to a virtual switch, LBFO is acceptable; however, you should endeavor to use SET whenever possible for the support reasons outlined in this article.

vSwitch on LBFO Deprecation Status

Recently, we publicly announced our plans to deprecate the use of LBFO with the Hyper-V virtual switch. Moving forward, and due to the various reasons outlined in this article, we have decided to block the binding of the vSwitch on LBFO.

Prior to upgrading from Windows Server 2019 to vNext, or if you have a fresh install of vNext, you will need to convert any LBFO team that is attached to a Hyper-V virtual switch to a SET team. To make this simpler, we’re releasing a tool (available on the PowerShell Gallery) called Convert-LBFO2SET.

You can install this tool using the command:

Install-Module Convert-LBFO2SET

Or for disconnected systems:

Save-Module Convert-LBFO2SET -Path C:\SomeFolderPath

Please see the wiki for instructions on how to use the tool; here’s an example where we convert a system with 10 host vNICs, 10 generation 1 VMs, and 10 generation 2 VMs.

Summary

LBFO remains our teaming solution if you are running your workloads on bare-metal servers. If, however, you are running virtualized or cloud scenarios like Azure Stack HCI, you should give Switch Embedded Teaming serious consideration. As we’ve described in this article, SET has been the Microsoft-recommended teaming solution and focus since Windows Server 2016, as it brings better performance, stability, and feature support compared to LBFO.

Are there other questions you have about SET and LBFO? Please submit your questions in the comments below!

Thanks for reading,

Dan “All SET for Azure Stack HCI” Cuomo

Source :
https://techcommunity.microsoft.com/t5/networking-blog/teaming-in-azure-stack-hci/ba-p/1070642

Create and manage a Switch Embedded Teaming

What is Hyper-V Switch Embedded Teaming?

With Windows Server 2016, Microsoft brings us a new NIC (network interface card) teaming solution called Switch Embedded Teaming (SET). It is only available on servers running Windows Server 2016 with the Hyper-V role installed and is not backwards compatible.

SET allows us to group up to eight physical NICs to create one or more vNICs. Microsoft’s reasoning behind this is that SET is best used with NICs of 10 Gbps or faster. SET only supports switch-independent mode, which keeps setup simple to configure. You can plug all physical NICs into one physical switch, or into multiple physical switches as long as they are in the same subnet. Connecting the SET Team to multiple physical switches can give you extra redundancy.

Due to SET’s integration with the Hyper-V Virtual Switch, you are unable to use SET teaming inside a virtual machine (VM). SET does not present a team interface to the VM; instead, you can use LBFO (load-balancing and failover) teaming inside a VM by adding additional vNICs to the VM.

Which NICs can I use?

If your NICs have passed the Windows Hardware Qualification and Logo (WHQL) test, then you can use them to create your SET Team. SET does require all NICs that are part of a SET Team to be identical. This means they are from the same manufacturer, are the same model, and have the same firmware and driver. You can only have up to eight NICs in one SET Team.

One good thing about SET teams is that you can configure Remote Direct Memory Access (RDMA) per vNIC. This means you can create multiple vNICs for your SET Team. You can configure a vNIC for storage and enable RDMA on it. You could then create another vNIC for management traffic and not enable RDMA. This is good for hyperconverged solutions.
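
A minimal sketch of that hyperconverged pattern, with placeholder names and assuming a SET switch like the one created later in this article:

# Illustrative sketch: add host vNICs to a SET switch and enable RDMA only where needed.
Add-VMNetworkAdapter -ManagementOS -SwitchName 'SET Team' -Name 'Storage1'
Add-VMNetworkAdapter -ManagementOS -SwitchName 'SET Team' -Name 'Management'

# Enable RDMA on the storage vNIC only; the host exposes it as "vEthernet (Storage1)".
Enable-NetAdapterRdma -Name 'vEthernet (Storage1)'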

SET compatibility with Windows Server

SET is compatible with the following networking technologies found in Windows Server 2016.

  • Datacentre bridging (DCB)
  • Hyper-V Network Virtualization — NV-GRE and VxLAN are both supported in Windows Server 2016.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) — These are only supported if at least one of the SET Team members supports them.
  • Remote Direct Memory Access (RDMA)
  • Single root I/O virtualization (SR-IOV)
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) — These are only supported if all the SET team members support them.
  • Virtual Machine Queues (VMQ)
  • Virtual Receive Side Scaling (RSS)

SET is not compatible with the following networking technologies in Windows Server 2016.

  • 802.1X authentication
  • IPsec Task Offload (IPsecTO)
  • QoS in host or native operating systems
  • Receive side coalescing (RSC)
  • Receive side scaling (RSS)
  • TCP Chimney Offload
  • Virtual Machine QoS (VM-QoS)

SET modes and settings

When you first create a SET Team, you are unable to configure a team name, and you are unable to set NICs to standby like you can with NIC Teaming in Windows Server 2012 R2. In fact, with SET Teams, all NICs are always active.

Another difference between SET Teams and LBFO (load-balancing and failover) teams is that with NIC Teaming you had a choice of three different teaming modes. SET only gives you one choice: Switch Independent. Switch Independent mode means the physical switch or switches are unaware of the SET Team and do not determine how to distribute the network traffic.

There are two team properties that need to be configured when you create a SET Team. They are:

  • Member adapters
  • Load balancing mode

Member adapters

As stated earlier in this post, when you are creating a SET Team, you can use up to eight identical NICs. You can create your SET switch with just one NIC to start, but any new members must be identical to it.

Load Balancing mode

There are two Load Balancing distribution modes for SET Teams. They are:

  • Hyper-V Port
  • Dynamic

Hyper-V port

When using Hyper-V port mode for SET Teams, VMs are connected to a port on the Hyper-V Switch. This is just like a physical server would be connected to a port on a physical switch. The Hyper-V Virtual Switch port and the VM’s MAC address are used to divide the network traffic between the SET Team members.

If you use Switch Embedded Teaming with Packet Direct, you must use the Switch Independent Teaming mode and the Hyper-V port load balancing mode.

With this teaming mode, any adjacent switches will always see a VM’s MAC address on a given port. This allows the switch to distribute the ingress load (the traffic coming into the Hyper-V host from the switch) to the port where the MAC address is located. Hyper-V port mode is very useful when you utilize Virtual Machine Queues (VMQs) because a queue can be placed on the NIC where the traffic is expected to arrive.

Hyper-V port mode is not the best if you are only hosting a few VMs due to it not being granular enough to achieve a well-balanced distribution. This mode also limits each VM to the bandwidth that’s available on a single NIC.

Dynamic

When using the Dynamic port mode for SET Teams, all outbound loads are distributed using a hash of the TCP Ports and IP addresses. Dynamic Port mode also rebalances loads in real time to allow outbound flow to move between each SET Team member.

For inbound loads, all traffic is distributed in the same way as the Hyper-V port mode. Outbound loads use flowlets to dynamically balance. A flowlet is the portion of a TCP flow between two naturally occurring breaks. When the dynamic port mode detects a flowlet boundary, it will automatically rebalance the flow to another member of the SET Team, if appropriate. It is known that in some uncommon circumstances dynamic port mode may rebalance flows that do not contain any flowlets. Due to this, the affinity between a Team member and the TCP flow can change as the traffic is balanced.

Creating and managing a SET Team

Although it is recommended that you use Microsoft Virtual Machine Manager (VMM) to configure SET Teams, not every company has this. That’s where PowerShell comes in.

In the following section, we will go through the process of creating a SET Team, adding and removing NICs in a SET Team, changing the Load Balancing Mode, and removing a SET Team.

Create a SET Team

You can only create a SET Team at the same time that you create a Hyper-V Virtual Switch. To create a SET Team with one NIC, use the following PowerShell code in an elevated PowerShell window:

New-VMSwitch -Name "SET Team" -NetAdapterName "NIC 1" -EnableEmbeddedTeaming $true
You can change "SET Team" to any name you need, and change the adapter name "NIC 1" to the name of your NIC. You can find the names of your NICs by using the following PowerShell command:

Get-NetAdapter
You should have a list of all your adapters. The first column is Name; this is the value you need to use for -NetAdapterName.

To create a SET Team with more than one NIC, you can use the following PowerShell command in an elevated PowerShell window:

New-VMSwitch -Name "SET Team" -NetAdapterName "NIC 1","NIC 2"
Notice that when adding more than one NIC, you don’t need to add -EnableEmbeddedTeaming because it is assumed by Windows PowerShell when the argument to -NetAdapterName is an array of NICs.

Adding and removing SET Team members

The only way to add or remove SET Team members is to use the Set-VMSwitchTeam command. You simply specify the SET switch name and the adapters you want to be in it. Let’s say you have created your SET switch with NIC 1 and NIC 2, and now you want to remove NIC 2 and add NIC 3:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 1","NIC 3"
If you now need to remove NIC 1, you can use the following command:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 3"
To add NIC 2 back in to the SET Team, you can use the following command:

Set-VMSwitchTeam -Name "SET Team" -NetAdapterName "NIC 3","NIC 2"

Change the Load Balancing Mode

To change the Load Balancing Mode between the two modes mentioned above, you can use the following PowerShell command (just change Dynamic to HyperVPort):

Set-VMSwitchTeam -Name "SET Team" -LoadBalancingAlgorithm Dynamic
To view which Load Balancing Mode you currently have:

Get-VMSwitchTeam -Name "SET Team" | FL

Remove a SET Team

The only way to remove a SET Team is by removing the Hyper-V Virtual switch completely:

Remove-VMSwitch "SET Team"
Now you have learned how to create, manage and remove a SET switch. Hopefully, you will find this post beneficial and helpful. If you have any questions, leave a comment.

See more: