Microsoft Office 365 boosts email security against MITM, downgrade attacks

Microsoft has added SMTP MTA Strict Transport Security (MTA-STS) support to Exchange Online to ensure Office 365 customers’ email communication integrity and security.

Redmond first announced MTA-STS’ introduction in September 2020, after revealing that it was also working on adding inbound and outbound support for DNSSEC (Domain Name System Security Extensions) and DANE for SMTP (DNS-based Authentication of Named Entities).

“We have been validating our implementation and are now pleased to announce support for MTA-STS for all outgoing messages from Exchange Online,” the Exchange Online Transport Team said today.

With MTA-STS now available in Office 365, emails sent by users via Exchange Online will only be delivered using connections with both authentication and encryption, protecting the messages from interception and attack attempts.

This new standard strengthens Exchange Online email security and solves several SMTP security problems, including expired TLS certificates, the lack of support for secure protocols, and certificates not issued by trusted third parties or matching server domain names.

Before MTA-STS, emails sent through improperly secured TLS connections were exposed to various attacks, including downgrade and man-in-the-middle attacks.

“Downgrade attacks are possible where the STARTTLS response can be deleted, thus rendering the message in cleartext. Man-in-the-middle (MITM) attacks are also possible, whereby the message can be rerouted to an attacker’s server,” the Exchange team said.

“MTA-STS (RFC8461) helps thwart such attacks by providing a mechanism for setting domain policies that specify whether the receiving domain supports TLS and what to do when TLS can’t be negotiated, for example stop the transmission.”

Microsoft provides guidance on how to adopt MTA-STS, including where to host the policy file on your domain’s web infrastructure.
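An MTA-STS policy is a small plain-text file served over HTTPS from the domain’s mta-sts host and advertised by a _mta-sts DNS TXT record. As a rough illustration only (the domain, field values, and parser below are hypothetical examples based on RFC 8461, not Exchange Online’s implementation), a sending MTA might parse a fetched policy like this:

```python
def parse_mta_sts_policy(text: str) -> dict:
    """Parse the key/value pairs of an MTA-STS policy file (RFC 8461).

    The 'mx' key may appear multiple times, so its values are collected
    into a list; every other key keeps its last value.
    """
    policy = {"mx": []}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep:
            continue  # skip blank or malformed lines
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)
        else:
            policy[key] = value
    return policy

# A hypothetical policy, as it might be served from
# https://mta-sts.example.com/.well-known/mta-sts.txt
sample_policy = """\
version: STSv1
mode: enforce
mx: mail.example.com
mx: *.backup.example.com
max_age: 604800
"""

policy = parse_mta_sts_policy(sample_policy)
print(policy["mode"])
```

In enforce mode, a compliant sender refuses to deliver mail when it cannot negotiate a trusted TLS session with one of the listed MX hosts, which is exactly the “stop the transmission” behavior the Exchange team describes.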

DANE for SMTP support also rolling out

Redmond is still working on rolling out DANE for SMTP (with DNSSEC support), which provides better protection for SMTP connections than MTA-STS does.
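DANE works by publishing TLSA records in a DNSSEC-signed zone that pin the receiving server’s TLS certificate. As a sketch (the host name and key bytes below are made-up placeholders, not a real deployment), the certificate association data for the commonly used “3 1 1” parameter set is just a SHA-256 digest:

```python
import hashlib

def tlsa_3_1_1(spki_der: bytes) -> str:
    """Return the certificate association data for a TLSA 3 1 1 record:
    usage 3 (DANE-EE), selector 1 (SubjectPublicKeyInfo),
    matching type 1 (SHA-256 of the DER-encoded SPKI)."""
    return hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes; a real record is computed from the DER-encoded
# SubjectPublicKeyInfo extracted from the mail server's certificate.
fake_spki = b"example-subject-public-key-info"

# The record a domain owner would publish for its MX host:
print(f"_25._tcp.mail.example.com. IN TLSA 3 1 1 {tlsa_3_1_1(fake_spki)}")
```

A sender that supports DANE looks this record up over DNSSEC and refuses to deliver unless the server’s certificate matches the pinned digest, which is why it does not depend on the unsigned DNS lookups and web-PKI trust that MTA-STS relies on.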

“We will deploy support for DANE for SMTP and DNSSEC in two phases. The first phase, DANE and DNSSEC for outbound email (from Exchange Online to external destinations), is slowly being deployed between now and March 2022. We expect the second phase, support for inbound email, to start by the end of 2022,” the Exchange Team said on Tuesday.

“We’ve been working on support for both MTA-STS and DANE for SMTP. At the very least, we encourage customers to secure their domains with MTA-STS,” Microsoft added today.

“You can use both standards on the same domain at the same time, so customers are free to use both when Exchange Online offers inbound protection using DANE for SMTP by the end of 2022. By supporting both standards, you can account for senders who may support only one method.”

Microsoft has already secured several domains it uses for email transport as a domain owner itself, including primary domains like outlook.com, hotmail.com, and live.com. 

This ensures that all connections from senders who support MTA-STS are better protected from man-in-the-middle attacks. 

Source :
https://www.bleepingcomputer.com/news/microsoft/office-365-boosts-email-security-against-mitm-downgrade-attacks/

Microsoft Windows 10 optional updates fix performance problems introduced last month

Optional updates for Windows 10 and Windows 11 released in January have fixed performance problems when playing games, using the operating system, or even opening folders in File Explorer.

With the January 2022 updates, Microsoft introduced numerous bugs breaking L2TP VPN connections, causing domain controller reboots, and preventing Hyper-V from working.

Microsoft later released out-of-band updates to fix these issues, and those fixes were also rolled into the optional preview updates.

However, these optional updates seem to have fixed more than the reported bugs, as they are also resolving significant performance issues caused by the January updates.

Recent Windows updates caused performance hits

After installing the January 2022 KB5009543 update, Windows 10 users began to notice that Windows suffered from severe performance issues.

These performance issues included slow boots and slow response times when opening the Start Menu, launching apps, playing games, and performing pretty much all of the basic functions of the operating system. In general, Windows felt “laggy” after installing the updates.

“Prior to the update, it took maybe 2 minutes for my laptop to boot to the home screen. It now takes close to a half hour. I’m frustrated to the point where I’m now planning to disable updates and uninstall this update,” a user named Ninja_Bobcat posted on Reddit.

“This update has ruined my laptop in games, namely warzone and apex. Goes to 0 fps and huge stutters everywhere,” another person posted.

“My computer is incredibly slow after KB5009543 security update and KB5008876 windows update. It takes about 3 minutes for my computer to boot and maybe 2-3 minutes to open a tab on chrome. Absolutely killed my computer,” said a third Windows 10 user.

BleepingComputer replicated these performance issues after installing the January 2022 KB5009543 update on multiple laptops.

The good news is that BleepingComputer found that installing the optional KB5009596 preview update released late last month fixed these newly introduced performance issues.

Windows 10 KB5009596 optional update

However, as these updates are optional, many users will not know to install them. Thus, their performance issues will continue until the mandatory February 2022 Patch Tuesday updates are installed, which will include these fixes.

Windows users can install the optional updates by going into Settings, clicking on Windows Update, and manually performing a ‘Check for Updates.’

As this is an optional update, you will need to install KB5009596 by clicking the ‘Download and install’ link.

Windows 11 issues were fixed as well

Not to be outdone by Windows 10, Windows 11 has also been dealing with performance issues within File Explorer.

Users found that it was slow to switch between folders, browse folders, or select files when using File Explorer.

However, the optional Windows 11 KB5008353 cumulative update preview has resolved these issues, with users reporting that File Explorer is back to normal.

“I honestly lost hope because this issue has been there since I upgraded to Win11, other users were claiming it was solved but it wasn’t the case for everyone. However, this update seems to have fixed this issue for good amongst others of course,” a Windows 11 user posted on Reddit.

BleepingComputer has not been able to replicate the performance issues on Windows 11 to test the fix.

BleepingComputer has also reached out to Microsoft with further questions about what has been fixed but has not received a reply as of yet.

Source :
https://www.bleepingcomputer.com/news/microsoft/windows-10-optional-updates-fix-performance-problems-introduced-last-month/

A physical disk resource may not come online on a Microsoft cluster node

This article helps solve an issue where a physical disk resource doesn’t come online on a cluster node.

Applies to:   Windows Server 2012 R2
Original KB number:   981475

Symptoms

On a cluster node that is running Windows Server, a physical disk resource may enter the Failed state when you try to move a group that contains the physical disk resource. If you restart the cluster node that has the physical disk resource that did not come online, the problem is temporarily resolved.

When this problem occurs, the following entries are logged in the Cluster log for the physical disk resource that entered the failed state:

000020cc.000014d0::<DateTime> ERR Physical Disk <Disk Q:>:
DiskspCheckPath: GetFileAttrs(Q:) returned status of 87.
000020cc.000014d0::<DateTime> WARN Physical Disk <Disk Q:>:
DiskspCheckDriveLetter: Checking drive name (Q:) returns 87

Additionally, the following events are logged in the System Event log:

Event Type: Error
Event Source: ClusSvc
Event Category: Physical Disk Resource
Event ID: 1066
Date: <date>
Time: <time>
User: N/A
Computer: <node name>
Description: Cluster disk resource “Disk Q:” is corrupt. Run ‘ChkDsk /F’ to repair problems. The volume name for this resource is “<\?\Volume{4323d41e-1379-11dd-9538-001e0b20dfe6}>”. If available, ChkDsk output will be in the file “C:\WINDOWS\Cluster\ChkDsk_Disk2_SigB05E593B.log”. ChkDsk may write information to the Application Event Log with Event ID 26180.

Event Type: Error
Event Source: ClusSvc
Event Category: Physical Disk Resource
Event ID: 1035
Date: <date>
Time: <time>
User: N/A
Computer: <node name>
Description: Cluster disk resource ‘Disk Q:’ could not be mounted.

Similarly, on a Windows Server cluster node you may see the following entries logged in the Cluster log:

00000db0.00000868::<DateTime> WARN [RES] Physical Disk <Cluster Disk 1>: OnlineThread: Failed to get volume guid for device \?\GLOBALROOT\Device\Harddisk15\Partition1. Error 3
00000db0.00000868::<DateTime> WARN [RES] Physical Disk <Cluster Disk 1>: OnlineThread: Failed to set volguid ??\Volume{3cb36133-0d0b-11df-afcf-005056ab58b9}. Error: 183.
00000db0.00000868::<DateTime> INFO [RES] Physical Disk <Cluster Disk 1>: VolumeIsNtfs: Volume \?\GLOBALROOT\Device\Harddisk15\Partition1\ has FS type NTFS

Cause

This problem is known to occur when antivirus software that is not cluster-aware is installed, upgraded, or reconfigured. For example, this problem is known to occur after you install or migrate to Symantec Endpoint Protection 11.0 Release Update 5 (RU5) on the cluster nodes.

Resolution

To resolve this problem, follow these steps:

  1. Verify that this problem is caused by Symantec Endpoint Protection (SEP) 11.0 Release Update 5 (RU5). To do this, run the Handle.exe utility immediately after the issue occurs on the cluster node where the physical disk resource did not come online. At an elevated command prompt, type the following command, and then press ENTER:

     Handle.exe -a -u drive_letter

     Note: The drive_letter placeholder is the drive designation for the cluster drive that did not come online. For example, if the cluster drive that did not come online is drive Q, type: Handle.exe -a -u Q:

     The problem is caused by the Symantec application if you receive the following output, which identifies the Smc.exe process as the process that owns the handle:

     Handle v3.42
     Copyright (C) 1997-2008 Mark Russinovich
     Sysinternals - www.sysinternals.com

     Smc.exe pid: 856 NT AUTHORITY\SYSTEM 66C: Q:

  2. If the problem is caused by the Symantec application, contact Symantec to obtain Symantec Endpoint Protection 11 Release Update 6 (RU6), which was released to resolve this issue.

More information

For more information about the Handle.exe utility, see Handle v4.22.

The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.

Source :
https://docs.microsoft.com/en-us/troubleshoot/windows-server/high-availability/physical-disk-resource-not-come-online

Microsoft Cluster Shared Volume (CSV) Inside Out

In this blog we will take a look under the hood of the cluster file system in Windows Server 2012 R2 called Cluster Shared Volumes (CSV). This blog post is targeted at developers and ISVs who are looking to integrate their storage solutions with CSV.

Note: Throughout this blog, I will refer to C:\ClusterStorage, assuming that Windows is installed on the C:\ drive. Windows can be installed on any available drive and the CSV namespace will be built on the system drive, but I’ve used C:\ClusterStorage instead of %SystemDrive%\ClusterStorage\ for better readability, since C:\ is the system drive most of the time.

Components


Cluster Shared Volumes in Windows Server 2012 is a completely re-architected solution compared to the Cluster Shared Volumes you knew in Windows Server 2008 R2. Although the user experience may look similar (just a bunch of volumes mapped under C:\ClusterStorage\ that you access through the regular Windows file system interface), under the hood these are two completely different architectures. One of the main changes is that in Windows Server 2012, CSV has been expanded beyond the Hyper-V workload, for example to Scale-Out File Server, and in Windows Server 2012 R2 CSV is also supported with SQL Server 2014.

First, let us look under the hood of CsvFs at the components that constitute the solution.


Figure 1: CSV Components and Data Flow Diagram



The diagram above shows a 3 node cluster. There is one shared disk that is visible to Node 1 and Node 2. Node 3 in this diagram has no direct connectivity to the storage. The disk was first clustered and then added to the Cluster Shared Volume. From the user’s perspective, everything will look the same as in Windows Server 2008 R2. On every cluster node you will find a mount point to the volume: C:\ClusterStorage\Volume1. The “VolumeX” naming can be changed: just rename it in Windows Explorer like you would any other directory. CSV will then take care of synchronizing the updated name around the cluster to ensure all nodes are consistent. Now let’s look at the components that are backing these mount points.

Terminology

The node where NTFS for the clustered CSV disk is mounted is called the Coordinator Node. In this context, any other node that does not have the clustered disk mounted is called a Data Server (DS). Note that the coordinator node is always a data server node at the same time; in other words, the coordinator is a special data server node where NTFS is mounted.

If you have multiple disks in CSV, you can place them on different cluster nodes. The node that hosts a disk will be a Coordinator Node only for the volumes that are located on that disk. Since each node might be hosting a disk, each of them might be a Coordinator Node, but for different disks. So technically, to avoid ambiguity, we should always qualify “Coordinator Node” with the volume name. For instance we should say: “Node 2 is a Coordinator Node for the Volume1”. Most of the examples we will go through in this blog post for simplicity will have only one CSV disk in the cluster so we will drop the qualification part and will just say Coordinator Node to refer to the node that has this disk online.

Sometimes we will use terms “disk” and “volume” interchangeably because in the samples we will be going through one disk will have only one NTFS volume, which is the most common deployment configuration. In practice, you can create multiple volumes on a disk and CSV fully supports that as well. When you move a disk ownership from one cluster node to another, all the volumes will travel along with the disk and any given node will be the coordinator for all volumes on a given disk. Storage Spaces would be one exception from that model, but we will ignore that possibility for now.


This diagram is complicated, so let’s try to break it into pieces, discuss each piece separately, and then hopefully the whole picture together will make more sense.

On Node 2, you can see the following stack that represents mounted NTFS. The cluster guarantees that only one node has NTFS in the state where it can write to the disk; this is important because NTFS is not a clustered file system. CSV provides a layer of orchestration that enables NTFS or ReFS (with Windows Server 2012 R2) to be accessed concurrently by multiple servers. The following blog post explains how the cluster leverages SCSI-3 Persistent Reservation commands with disks to implement that guarantee: https://techcommunity.microsoft.com/t5/Failover-Clustering/Cluster-Shared-Volumes-CSV-Disk-Ownership….



Figure 2: CSV NTFS stack



The cluster makes this volume hidden so that the Volume Manager (Volume in the diagram above) does not assign a volume GUID to it, and no drive letter is assigned. You also will not see this volume using mountvol.exe or the FindFirstVolume() and FindNextVolume() Win32 APIs.

On the NTFS stack the cluster will attach an instance of a file system mini-filter driver called CsvFlt.sys at altitude 404800. You can see that filter attached to the NTFS volume used by CSV if you run the following command:

>fltmc.exe instances


Filter     Volume Name               Altitude   Instance Name
------     -----------               --------   -------------
<skip>
CsvFlt     \Device\HarddiskVolume7   404800     CsvFlt Instance
<skip>


Applications are not expected to access the NTFS stack, and we even go an extra mile to block access to this volume from user mode applications. CsvFlt will check all create requests coming from user mode against the security descriptor that is kept in the cluster public property SharedVolumeSecurityDescriptor. You can use the PowerShell cmdlet “Get-Cluster | fl SharedVolumeSecurityDescriptor” to get to that property. The output of this cmdlet shows the value of the security descriptor in self-relative binary format (http://msdn.microsoft.com/en-us/library/windows/desktop/aa374807(v=vs.85).aspx):

PS > Get-Cluster | fl SharedVolumeSecurityDescriptor

SharedVolumeSecurityDescriptor : {1, 0, 4, 128…}


CsvFlt plays several roles:

  • Provides an extra level of protection for the hidden NTFS volume used for CSV
  • Helps provide a local volume experience (after all, CsvFs does look like a local volume). For instance, you cannot open the volume over SMB or read the USN journal. To enable these kinds of scenarios, CsvFs often marshals the operations that need to be performed to CsvFlt, disguising them behind a tunneling file system control. CsvFlt is responsible for converting the tunneled information back to the original request before forwarding it down the stack to NTFS.
  • It implements several mechanisms to help coordinate certain states across multiple nodes. We will touch on them in future posts; the File Revision Number is one example.

The next stack we will look at is the system volume stack. In the diagram above you see this stack only on the coordinator node, which has NTFS mounted; in practice, exactly the same stack exists on all nodes.



Figure 3: System Volume Stack



The CSV Namespace Filter (CsvNsFlt.sys) is a file system mini-filter driver at an altitude of 404900:
>fltmc instances
Filter     Volume Name   Altitude   Instance Name
------     -----------   --------   -------------
<skip>
CsvNSFlt   C:            404900     CsvNSFlt Instance
<skip>


CsvNsFlt plays the following roles:

  • It protects C:\ClusterStorage by blocking unauthorized attempts that are not coming from the cluster service to delete or create any files or subfolders in this folder or change any attributes on the files. Aside from opening these folders, about the only operation that is not blocked is renaming them. You can use the command prompt or Explorer to rename C:\ClusterStorage\Volume1 to something like C:\ClusterStorage\Accounting. The directory name will be synchronized and updated on all nodes in the cluster.
  • It helps us to dispatch the block level redirected IO. We will cover this in more detail when we talk about block level redirected IO later in this post.

The last stack we will look at is the stack of the CSV file system. Here you will see two modules: the CSV Volume Manager (csvvbus.sys) and the CSV File System (CsvFs.sys). CsvFs is a file system driver that mounts exclusively to the volumes surfaced by CsvVbus.



Figure 5: CsvFs stack

Data Flow


Now that we are familiar with the components and how they are related to each other, let’s look at the data flow.

First let’s look at how metadata flows. Below you can see the same diagram as in Figure 1; I’ve kept only the arrows and blocks that are relevant to the metadata flow and removed the rest from the diagram.



Figure 6: Metadata Flow



Our definition of a metadata operation is everything except read and write. Examples of metadata operations would be create file, close file, rename file, change file attributes, delete file, change file size, any file system control, etc. Some writes may also, as a side effect, cause a metadata change. For instance, an extending write will cause CsvFs to extend all or some of the following: file allocation size, file size, and valid data length. A read might cause CsvFs to query some information from NTFS.

On the diagram above you can see that metadata from any node goes to the NTFS stack on Node 2. Data server nodes (Node 1 and Node 3) are using Server Message Block (SMB) as a protocol to forward metadata over.

Metadata are always forwarded to NTFS. On the coordinator node CsvFs will forward metadata IO directly to the NTFS volume while other nodes will use SMB to forward the metadata over the network.

Next, let’s look at the data flow for Direct IO. The following diagram is produced from the diagram in Figure 1 by removing any blocks and lines that are not relevant to Direct IO. By definition, Direct IO are the reads and writes that never go over the network, but go from CsvFs through CsvVbus straight to the disk stack. To make sure there is no ambiguity, I’ll repeat it again: Direct IO bypasses the volume stack and goes directly to the disk.

Figure 7: Direct IO Flow



Both Node 1 and Node 2 can see the shared disk; they can send reads and writes directly to the disk, completely avoiding sending data over the network. Node 3 is not in the Figure 7 Direct IO Flow diagram since it cannot perform Direct IO, but it is still part of the cluster and will use block level redirected IO for reads and writes.

The next diagram shows the File System Redirected IO flow. The diagram and data flow for redirected IO are very similar to those for metadata in Figure 6 (Metadata Flow):



Figure 8: File System Redirected IO Flow



Later we will discuss when CsvFs uses the file system redirected IO to handle reads and writes and how it compares to what we see on the next diagram – Block Level Redirected IO :



Figure 9: Block Level Redirected IO Flow



Note that in this diagram I have completely removed the CsvFs stack and CSV NTFS stack from the Coordinator Node, leaving only the system volume NTFS stack. The CSV NTFS stack is removed because Block Level Redirected IO completely bypasses it and goes to the disk (yes, like Direct IO it bypasses the volume stack and goes straight to the disk) below the NTFS stack. The CsvFs stack is removed because on the coordinating node CsvFs would never use Block Level Redirected IO and would always talk directly to the disk. Node 3 uses redirected IO because it does not have physical connectivity to the disk.

A curious reader might wonder why Node 1, which can see the disk, would ever use Block Level Redirected IO. There are at least two cases when this might happen. First, although the disk might be visible on the node, it is possible that IO requests will fail because the adapter or storage network switch is misbehaving. In this case, CsvVbus will first attempt to send the IO to the disk and, on failure, will forward the IO to the Coordinator Node using Block Level Redirected IO. The other example is Storage Spaces: if the disk is a Mirrored Storage Space, then CsvFs will never use Direct IO on a data server node, but instead will send the block level IO to the Coordinating Node using Block Level Redirected IO. In Windows Server 2012 R2 you can use the Get-ClusterSharedVolumeState cmdlet to query the CSV state (direct / file level redirected / block level redirected), and if redirected it will state why.

Note that CsvFs sends the Block Level Redirected IO to the CsvNsFlt filter attached to the system volume stack on the Coordinating Node. This filter dispatches this IO directly to the disk bypassing NTFS and volume stack so no other filters below the CsvNsFlt on the system volume will see that IO. Since CsvNsFlt sits at a very high altitude, in practice no one besides this filter will see these IO requests. This IO is also completely invisible to the CSV NTFS stack. You can think about Block Level Redirected IO as a Direct IO that CsvVbus is shipping to the Coordinating Node and then with the help of the CsvNsFlt it is dispatched directly to the disk as a Direct IO is dispatched directly to the disk by CsvVbus.

What are these SMB shares?


CSV uses the Server Message Block (SMB) protocol to communicate with the Coordinator Node. As you know, SMB3 requires certain configuration to work. For instance it requires file shares. Let’s take a look at how cluster configures SMB to enable CSV.

If you dump the list of SMB file shares on a cluster node with CSV volumes, you will see the following:
> Get-SmbShare
Name              ScopeName    Path                Description
----              ---------    ----                -----------
ADMIN$            *            C:\Windows          Remote Admin
C$                *            C:\                 Default share
ClusterStorage$   CLUS030512   C:\ClusterStorage   Cluster Shared Volumes Def...
IPC$              *                                Remote IPC


There is a hidden admin share that is created for CSV, shared as ClusterStorage$. This share is created by the cluster to facilitate remote administration. You should use it in the scenarios where you would normally use an admin share on any other volume (such as D$). This share is scoped to the Cluster Name. Cluster Name is a special kind of Network Name that is designed to be used to manage a cluster. You can learn more about Network Name in the following blog post.  You can access this share using the Cluster Name, i.e. \\<cluster name>\ClusterStorage$

Since this is an admin share, it is ACL’d so only members of the Administrators group have full access to this share. In the output the access control list is defined using Security Descriptor Definition Language (SDDL). You can learn more about SDDL here http://msdn.microsoft.com/en-us/library/windows/desktop/aa379567(v=vs.85).aspx
ShareState                        : Online
ClusterType                      : ScaleOut
ShareType                        : FileSystemDirectory
FolderEnumerationMode : Unrestricted
CachingMode                   : Manual
CATimeout                        : 0
ConcurrentUserLimit         : 0
ContinuouslyAvailable      : False
CurrentUsers                     : 0
Description                        : Cluster Shared Volumes Default Share
EncryptData                       : False
Name                                 : ClusterStorage$
Path                                   : C:\ClusterStorage
Scoped                              : True
ScopeName                       : CLUS030512
SecurityDescriptor             : D:(A;;FA;;;BA)
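The SecurityDescriptor value above is an SDDL string. As a toy illustration only (real SDDL strings are far richer, and this parser is a hypothetical sketch rather than any Windows API), here is how the ACEs in the simple DACL strings shown in this post break down:

```python
import re

# Well-known SID abbreviations appearing in this post (a small subset;
# see Microsoft's SDDL documentation for the full table).
WELL_KNOWN_SIDS = {
    "BA": "Builtin Administrators",
    "SY": "Local System",
}

def parse_simple_sddl_dacl(sddl: str):
    """Return (ace_type, rights, trustee) tuples from a simple SDDL DACL.

    Handles only plain ACEs such as (A;;FA;;;BA); real SDDL strings can
    carry flags, object GUIDs, and conditional expressions this ignores.
    """
    dacl = sddl.split("D:", 1)[1] if "D:" in sddl else sddl
    aces = []
    for ace in re.findall(r"\(([^)]*)\)", dacl):
        fields = ace.split(";")
        ace_type, rights, trustee = fields[0], fields[2], fields[5]
        aces.append((ace_type, rights, WELL_KNOWN_SIDS.get(trustee, trustee)))
    return aces

print(parse_simple_sddl_dacl("D:(A;;FA;;;BA)"))
# [('A', 'FA', 'Builtin Administrators')]
```

So D:(A;;FA;;;BA) on ClusterStorage$ grants Full Access (FA) to the built-in Administrators group and nothing to anyone else, matching the admin-share semantics described above.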


There are also a couple of hidden shares that are used by CSV. You can see them if you add the IncludeHidden parameter to the Get-SmbShare cmdlet. These shares are used only on the Coordinator Node; other nodes either do not have these shares or do not use them:
> Get-SmbShare -IncludeHidden
Name                            ScopeName      Path                          Description
----                            ---------      ----                          -----------
17f81c5c-b533-43f0-a024-dc...   *              \\?\GLOBALROOT\Device\Hard...
ADMIN$                          *              C:\Windows                    Remote Admin
C$                              *              C:\                           Default share
ClusterStorage$                 VPCLUS030512   C:\ClusterStorage             Cluster Shared Volumes Def...
CSV$                            *              C:\ClusterStorage
IPC$                            *                                            Remote IPC


For each Cluster Shared Volume it hosts, the coordinating node creates a share with a name that looks like a GUID. This share is used by CsvFs to communicate with the hidden CSV NTFS stack on the coordinating node; it points to the hidden NTFS volume used by CSV. Metadata and File System Redirected IO flow to the Coordinating Node using this share.
ShareState                           : Online
ClusterType                          : CSV
ShareType                            : FileSystemDirectory
FolderEnumerationMode    : Unrestricted
CachingMode                      : Manual
CATimeout                           : 0
ConcurrentUserLimit            : 0
ContinuouslyAvailable          : False
CurrentUsers                        : 0
Description                           :
EncryptData                          : False
Name                                    : 17f81c5c-b533-43f0-a024-dc431b8a7ee9-1048576$
Path                                      : \\?\GLOBALROOT\Device\Harddisk2\ClusterPartition1\
Scoped                                  : False
ScopeName                          : *
SecurityDescriptor                : O:SYG:SYD:(A;;FA;;;SY)(A;;FA;;;S-1-5-21-2310202761-1163001117-2437225037-1002)
ShadowCopy                         : False
Special                                   : True
Temporary                             : True


On the Coordinating Node you also will see a share with the name CSV$. This share is used to forward Block Level Redirected IO to the Coordinating Node. There is only one CSV$ share on every Coordinating Node:
ShareState                          : Online
ClusterType                         : CSV
ShareType                           : FileSystemDirectory
FolderEnumerationMode   : Unrestricted
CachingMode                     : Manual
CATimeout                          : 0
ConcurrentUserLimit           : 0
ContinuouslyAvailable         : False
CurrentUsers                       : 0
Description                          :
EncryptData                         : False
Name                                  : CSV$
Path                                     : C:\ClusterStorage
Scoped                                : False
ScopeName                         : *
SecurityDescriptor               : O:SYG:SYD:(A;;FA;;;SY)(A;;FA;;;S-1-5-21-2310202761-1163001117-2437225037-1002)
ShadowCopy                       : False
Special                                 : True
Temporary                           : True


Users are not expected to use these shares; they are ACL’d so only Local System and the Failover Cluster Identity user (CLIUSR) have access to them.

All of these shares are temporary: information about them is not kept in any persistent storage, and when a node reboots they are removed from the Server service. The cluster takes care of recreating the shares every time CSV starts up.

Conclusion


You can see that Cluster Shared Volumes in Windows Server 2012 R2 is built on a solid foundation: the Windows storage stack, CSVv1, and SMB3.

Thanks!
Vladimir Petter
Principal Software Development Engineer
Clustering & High-Availability
Microsoft

Additional Resources:


To learn more, here are others in the Cluster Shared Volume (CSV) blog series:

Cluster Shared Volume (CSV) Inside Out

Cluster Shared Volume Diagnostics

Cluster Shared Volume Performance Counters

Cluster Shared Volume Failure Handling

Troubleshooting Cluster Shared Volume Auto-Pauses – Event 5120

Troubleshooting Cluster Shared Volume Recovery Failure – System Event 5142

Cluster Shared Volume – A Systematic Approach to Finding Bottlenecks

Source :
https://techcommunity.microsoft.com/t5/failover-clustering/cluster-shared-volume-csv-inside-out/ba-p/371872

Ubiquiti UniFi – Run the Network Application as a Windows Service

Windows services are useful because they are background applications that don’t require any attention from the end user. The service launches at startup, without any intervention from the user. The service is a direct replacement for running the Network application manually (via the icon or a scheduled task), so there is no need to launch the UniFi Network application separately when it is running as a Windows service.

This article describes how to set up the UniFi Network application to run as a Windows service, and how to update it when it is running this way.

NOTES & REQUIREMENTS:

  • Applicable to the latest UniFi application versions for Windows.
  • This article applies to UniFi applications that are installed on Windows Desktop (Windows 10) and not Windows Server versions.
  • It is recommended to only install the x64 version of Java 8 for the UniFi Cloud Access Portal to work properly.
  • Make sure to allow the ports used by the UniFi application through the Windows Firewall. See the UniFi – Ports Used article for more information. 

How to set up the UniFi Network application as a Windows Service

ATTENTION: It is recommended to only install the x64 version of Java 8 for the UniFi Cloud Access Portal to work properly. However, older versions of the Network application may require both x64 and x86 Java to be installed on a Windows x64 system.

1. Close any instances of the UniFi Network application on the computer. If the UniFi Network application was just installed, make sure to open the application manually at least once, or let it run at the end of the wizard. Once you see the message UniFi Network application (a.b.c) started, the application may be closed.

CLI: Open an administrative Windows Command Prompt (CMD) window.

2. Change the directory to the location of UniFi installation.

cd "%UserProfile%\Ubiquiti UniFi\"

3. Once in the root of the UniFi folder, run the following command to install the UniFi Network application service:

java -jar lib\ace.jar installsvc

4. Wait for the installation to complete, indicated by the Complete Installation log message.

5. Start the service with the command below: 

java -jar lib\ace.jar startsvc

6. Open a browser and navigate to the application’s IP address or https://localhost:8443.

How to upgrade a UniFi Network application that is running as a Windows Service

1. Create a backup of your Network application.

CLI: Open an administrative Windows Command Prompt (CMD) window.

2. Change the directory to the location of UniFi installation.

cd "%UserProfile%\Ubiquiti UniFi\"

3. Once in the root of the UniFi folder, issue the following to uninstall the Network application service:

java -jar lib\ace.jar uninstallsvc

4. Wait for the service uninstall process to complete. 

5. Launch the Network application and update it through the Settings section. Alternatively, download the latest installation file from the Downloads section.

6. Repeat the steps from the section above after the new Network application version is installed.

Source :
https://help.ui.com/hc/en-us/articles/205144550-UniFi-Run-the-Network-Application-as-a-Windows-Service

Ubiquiti UniFi – USG Advanced Configuration Using config.gateway.json

This article describes how to perform advanced configurations on the UniFi Security Gateway (USG and USG-PRO-4) using the config.gateway.json file. This article is not applicable to the UniFi Dream Machine models; the UDM line does not support configurations made outside of the UniFi Network application.

NOTES & REQUIREMENTS:

  • Ubiquiti Support cannot assist in the creation of the config.gateway.json file nor will assistance be provided for command line configuration. If assistance is required, feel free to visit our Community to create a topic and ask for help with your desired configuration.
  • This article covers advanced configuration, and should only be used by advanced users.

Table of Contents

  1. Introduction
  2. Creating the config.gateway.json File
  3. Editing the config.gateway.json File
  4. Testing & Verification
  5. Related Articles

Introduction

The config.gateway.json is a file that sits in the UniFi Network application filesystem and allows custom changes to the USG that aren’t available in the web GUI. Some possible customizations are: configuring site-to-site VPNs with hostnames, policy-routing certain traffic out WAN2, or adding multiple IP addresses to an interface. These features don’t exist in the UniFi Network application yet, so the config.gateway.json file supplements them until they’re available in the GUI.

When making customizations via the config.gateway.json file, it is best to enter only the customizations that can’t be performed via the Network application. If the formatting is incorrect, the USG will enter a provisioning loop and will reboot once it comes out of the loop. At that point the config.gateway.json file can be corrected or removed to resolve the issue.

WARNING: Some users may find they can get away with using the full config, but this is not recommended, as it will most likely cause issues down the road. A provisioning loop might take place when a setting changed in the Network application conflicts with a setting in the config.gateway.json file.

Creating the config.gateway.json File

By default, the config.gateway.json file doesn’t exist; it has to be created before it can be used.

1. Create a new file using a text editor such as TextEdit or Notepad++.

2. The structure of a JSON file is just as important as its contents. Incorrect placement of brackets, indentation, line breaks or any other structural element will make the JSON file invalid. It is recommended to run the text through a JSON validator to verify that the syntax is correct. The JSON Formatter website is one example of the many JSON validators available online.
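
The same validation can be done programmatically with Python’s standard json module. This is a minimal sketch (the sample keys are arbitrary, not a real USG config):

```python
import json

def validate_json(text):
    """Return True if text parses as JSON; print the error location otherwise."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError as err:
        print(f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
        return False

# A stray trailing comma is a common mistake that invalidates the whole file:
validate_json('{"service": {"dns": "forwarding",}}')  # → False
validate_json('{"service": {"dns": "forwarding"}}')   # → True
```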

3. Once the contents of the file have been validated, save it as config.gateway.json and place it under the <unifi_base>/data/sites/site_ID directory on the Network application host.

User Tip: Depending on your operating system, placing the file under this directory might be as simple as drag and drop, or using an FTP server might be necessary. The config.gateway.json file must have unifi:unifi as the owner and group. You can verify this with ls -l <unifi_base>/data/sites/site_ID. To change it, once you’re in the site directory, use the command: chown unifi:unifi config.gateway.json

The location <unifi_base> will vary from one operating system to another. See this article for more information. The site_ID can be seen in the URL of your browser when on the Network application. The original site is named “default”, and every site created after it is assigned a random string. For example, this is what would be seen in the URL bar when inside the dashboard page of a site:

https://127.0.0.1:8443/manage/s/ceb1m27d/dashboard

In the above case, the random string ceb1m27d is the folder name to use under <unifi_base>/data/sites/. Therefore, the config.gateway.json should be placed inside <unifi_base>/data/sites/ceb1m27d/.

User Tips:

  • On Cloud Key install the path for the .json file is: /srv/unifi/data/sites/[site name/default]/
  • On an Ubuntu install the path for the .json file is: /usr/lib/unifi/data/sites/[site name/default]/
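
As a quick illustration of how the site ID maps to the file location, the hypothetical helper below extracts the site ID from a dashboard URL and builds the target path (the default base path is an assumption matching the Ubuntu tip above):

```python
import re
from urllib.parse import urlparse

def site_config_path(dashboard_url, unifi_base="/usr/lib/unifi"):
    """Pull the site ID out of a Network application URL and return the
    full path where config.gateway.json belongs for that site."""
    match = re.search(r"/manage/s/([^/]+)/", urlparse(dashboard_url).path)
    if not match:
        raise ValueError("URL does not contain a /manage/s/<site_ID>/ segment")
    return f"{unifi_base}/data/sites/{match.group(1)}/config.gateway.json"

print(site_config_path("https://127.0.0.1:8443/manage/s/ceb1m27d/dashboard"))
# → /usr/lib/unifi/data/sites/ceb1m27d/config.gateway.json
```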

Editing the config.gateway.json File

Before customizing firewall or NAT rules, take note of the rule numbers used in the UniFi Network application under Settings > Routing & Firewall > Firewall. Default firewall rules start at either 3001 or 6001, and NAT rules also start at 6001 (the two sets do not overlap). The custom rules created in the config.gateway.json cannot duplicate rule numbers already in use on the USG, or a provisioning loop will occur. It is recommended to put custom rules before the existing ruleset, as the lower number wins between two matching rules.

NOTE: When editing this custom JSON file, it is not necessary to include everything. You must only include the complete “path” to the items you have edited; anything outside of that path can be omitted. Think of each node in the JSON file as a folder nested within other folders (except for the level 1 folder, which is the main section). The folder path that takes you from level 1 all the way down to the item you are configuring must be present in the JSON file. See this example where we want to edit “close”, which has the following path: system > conntrack > timeout > tcp > close.

[Image: levels.png – the json path levels from “system” down to “close”]

Notice that in level 3 “modules” is also present along with “timeout”, but we do not include it in the json file because it is not part of the path to “close”. The same goes for the other items in level 5 under “tcp”: they do not need to appear in the config.gateway.json file because they are not part of the path. A successful change of “close” from 10 to 20 would then look like this:

[Image: success.png – the resulting json snippet changing “close” from 10 to 20]
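
The path-only rule can be expressed directly in code: build nothing but the chain of nested objects from level 1 down to the item being changed. A minimal Python sketch (the value 20 mirrors the example above):

```python
import json

# Only the path system > conntrack > timeout > tcp > close is present;
# siblings such as "modules" (level 3) or the other tcp timers (level 5)
# are deliberately omitted because they are not on the path.
config = {
    "system": {
        "conntrack": {
            "timeout": {
                "tcp": {
                    "close": "20"
                }
            }
        }
    }
}
print(json.dumps(config, indent=4))
```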

The following is an example of how a DNAT rule is created for DNS configured using EdgeOS formatting:

1. Connect to the USG via SSH, and issue the following commands:

configure
set service nat rule 1 type destination
set service nat rule 1 inbound-interface eth0
set service nat rule 1 protocol tcp_udp
set service nat rule 1 destination port 53
set service nat rule 1 inside-address address 10.0.0.1
set service nat rule 1 inside-address port 53
commit;save;exit

2. Next, display the config. The following command displays the entire config in JSON format:

mca-ctrl -t dump-cfg

The config can also be exported if preferred. The following example redirects the output to config.txt:

mca-ctrl -t dump-cfg > config.txt

3. Find the section containing the custom changes in the config output; for the example above it would be the following:

                "nat": {
                        "rule": {
                               "1": {
                                       "destination": {
                                               "port": "53"
                                       },
                                       "inbound-interface": "eth0",
                                       "inside-address": {
                                               "address": "10.0.0.1",
                                               "port": "53"
                                       },
                                       "protocol": "tcp_udp",
                                       "type": "destination"
                               },

4. Above is the custom rule, but it’s missing the closing brackets (}) at the end that would make it valid. If you look at the config output from the start, there is a certain format required for the file to be read correctly. Each node in a section must be separated by a comma (,), and each section must begin with an opening bracket ({) and finish with a closing one (}). Follow the existing format carefully. If the above rule is the only change in the config.gateway.json, you would edit it to look like this:

{
       "service": {
                "nat": {
                        "rule": {
                               "1": {
                                       "destination": {
                                               "port": "53"
                                       },
                                       "inbound-interface": "eth0",
                                       "inside-address": {
                                               "address": "10.0.0.1",
                                               "port": "53"
                                       },
                                       "protocol": "tcp_udp",
                                       "type": "destination"
                               }
                       }
               }
       }
}

5. If there are multiple sections to add, say Firewall, Service, VPN, the closing bracket for that section would be followed by a comma (},), before starting the next section. You can see these formatting details in the example below.

DNAT rule numbers range from 1-4999, and Source/Masquerade rule numbers from 5000-9999. If you wanted to add a port forward (DNAT) in the config.gateway.json for WAN2 in a multi-WAN (load-balance) setup, this is what the config.gateway.json would look like with only that NAT rule:

{
	"service": {
		"nat": {
			"rule": {
				"4500": {
					"description": "port_forward_WAN2",
					"destination": {
						"address": "100.64.100.100",
						"port": "22"
					},
					"inbound-interface": "eth3",
					"inside-address": {
						"address": "192.168.1.100"
					},
					"protocol": "tcp",
					"type": "destination"
				}
			}
		}
	}
}
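
A quick sanity check on rule numbers helps avoid the duplicate-rule provisioning loop described earlier. This hypothetical helper classifies a rule number by the ranges stated above (1-4999 DNAT, 5000-9999 source/masquerade):

```python
def nat_rule_kind(rule_number):
    """Classify a USG NAT rule number by the documented ranges."""
    if 1 <= rule_number <= 4999:
        return "destination (DNAT)"
    if 5000 <= rule_number <= 9999:
        return "source/masquerade"
    raise ValueError("NAT rule numbers must be between 1 and 9999")

print(nat_rule_kind(4500))  # → destination (DNAT)
```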

And if we were to add a VPN with hostnames to the file, the config.gateway.json would look like the one below. Notice the opening and closing brackets, as well as the bracket with comma before starting with the "vpn" section:

{
	"service": {
		"nat": {
			"rule": {
				"4500": {
					"description": "port_forward_WAN2",
					"destination": {
						"address": "100.64.100.100",
						"port": "22"
					},
					"inbound-interface": "eth3",
					"inside-address": {
						"address": "192.168.1.100"
					},
					"protocol": "tcp",
					"type": "destination"
				}
			}
		}
	},
	"vpn": {
		"ipsec": {
			"site-to-site": {
				"peer": {
					"yyyy.ignorelist.com": {
						"authentication": {
							"id": "xxxx.ignorelist.com"
						},
						"local-address": "xxxx.ignorelist.com"
					}
				}
			}
		}
	}
}
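
Rather than tracking commas and brackets by hand when combining sections, you can also build the file from dictionaries and let a serializer place the punctuation. A minimal sketch (the section contents are abbreviated from the examples above):

```python
import json

nat_section = {"service": {"nat": {"rule": {"4500": {"type": "destination"}}}}}
vpn_section = {"vpn": {"ipsec": {"site-to-site": {"peer": {}}}}}

# json.dumps emits every comma and closing bracket itself, so the output
# is guaranteed to be syntactically valid JSON.
merged = {**nat_section, **vpn_section}
print(json.dumps(merged, indent=4))
```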

Testing & Verification

It’s recommended to validate the code once you have finished creating the config.gateway.json. There are a number of free options out there; jsonlint.com is used by the Ubiquiti support team quite often.

After adding the config.gateway.json to the UniFi Network site of your choosing, you can test it by running a force provision on the USG in UniFi Devices > select the USG > Config > Manage Device > Force provision. Provisioning will take a while (30 seconds to 3 minutes); if it stays in provisioning longer than that, there may be a formatting error in the config.gateway.json and you are in the provisioning loop mentioned earlier. Check server.log in the application and search for commit error. You can usually find what went wrong with the provisioning of the newly customized configuration in the log files. Find more information about that here.

User Tip: An easy way to test the validity of the json file is: python -m json.tool config.gateway.json

Deleting Changes or Reverting to Previous State

To remove a certain advanced configuration, just delete the section pertinent to that configuration in the config.gateway.json file. To completely remove all advanced configurations created in the config.gateway.json file, delete the file or rename it. This will void all manual changes. The USG will be provisioned with the current config contained within the UniFi Network application.

A best practice when editing an already working config.gateway.json file is to create a backup. If you need to add additional changes to the config.gateway.json file, rename the current file to config.gateway.json.old, essentially creating a backup, and copy all the existing and new changes into a new file named config.gateway.json. This way, if there happens to be any mistakes resulting in a “commit” error or provisioning loop, you can delete config.gateway.json, and try again starting from config.gateway.json.old.

Source :
https://help.ui.com/hc/en-us/articles/215458888-UniFi-USG-Advanced-Configuration-Using-config-gateway-json

Ubiquiti UniFi – Explaining the system.properties File

This article describes what the system.properties file is used for and how to edit it.

NOTES & REQUIREMENTS: This article includes some advanced configurations that should only be performed by advanced users. Advanced configurations are not supported by our Support team. The Community is the best place to find experts to guide you with advanced configurations.

Table of Contents

  1. Introduction
  2. Manually Specify the IP Interface for UniFi Network Application Communication
  3. Advanced Database Configuration
  4. SMTP Related Settings
  5. User Tips & Notes

Introduction

The system.properties file, found in the data folder within <unifi_base>, is the file inside the UniFi server installation directory that defines system-wide parameters for the UniFi Network application. Here are a few notable examples of supported configuration changes made in the system.properties file:

  • Manual override of the Application IP Interface (the address to which Devices send inform packets).
  • Advanced Database adjustments.
  • Port Assignments, for purposes of the UniFi Network application communicating with Managed Devices, redirecting Guest Portal traffic, etc.

WARNING: Before editing the system.properties file, remember to create a backup of your system and download it to a safe place. It is also necessary to stop the application before making any change to the file, to avoid errors after the changes are applied.

The system.properties file can be edited directly with any text editor. Keep in mind that lines preceded by hash tags (#) are comments and are non-operational. Make edits at the bottom of the file. After changing this file, you’ll need to manually trigger provisioning on each site for the changes to take effect.

NOTE: The system.properties file is created when UniFi Network runs successfully. If you cannot find the file within <unifi_base>, create it by running the UniFi Network application.

Manually Specify the IP Interface for UniFi Network Application Communication

If a UniFi OS Console (or device hosting the application) has multiple IP interfaces, the following configuration can manually set the exact IP interface that adopted APs should communicate to the Network application:

  • system_ip=a.b.c.d           # the IP devices should be talking to for inform

Advanced Database Configuration

Below are advanced database configurations that most users will never need. Note: We do not test these configurations; they are enabled for the convenience of database experts. One possible usage scenario is where a few people run the application on a NAS, which has a smaller footprint than a normal server, hence the need to reduce the required resources.

  • unifi.db.nojournal=false    # disable mongodb journaling
  • unifi.db.extraargs            # extra mongod args

The configuration below is used to facilitate UniFi Network application installation. Again, most users will never need to set this. When is_default is set to true, the application will start with the factory default configuration. For normal, everyday users, an uninstall followed by a fresh re-install is recommended over this.

  • is_default=true

From the UniFi Network application you can configure the autobackup frequency, number of backups to store, time of backup, etc. At the time of writing, you cannot change the storage location via the application, but there is a variable in system.properties if you wish to change it. Currently, the default points to:

1. For Cloud Key: /data/autobackup (where SD card is mounted as /data by default)
2. For software installs: {data.dir}/backup/autobackup

  • autobackup.dir=/some/path

It is recommended to manage the UAP-AC-EDU from a local application. Communication from the EDU mobile app is currently relayed from the app to the Network application to the EDU. If the mobile device is remote to the EDU, you just need to open the appropriate ports. If the UniFi Network application is remote to the EDUs, you need to add the following line to system.properties:

  • stream.playback.url.type=inform

(5.5.15+/5.6.7+) We’ve added HSTS support to the application. Note that it is disabled by default and should only be enabled if you know what you’re doing. This will only ever be a system.properties value so it can be easily disabled in case of issues. If you run into issues, you will likely need to clear your browser’s cache after disabling this and restarting the service. To enable HSTS support, add the following:

  • unifi.https.hsts=true
  • unifi.https.hsts.max_age=31536000
  • unifi.https.hsts.preload=false
  • unifi.https.hsts.subdomain=false 

NOTE: Currently, no characters are allowed after the custom line(s). This includes spaces, pound/hash signs, comments, etc.
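
The rule above can be enforced programmatically when adding custom lines. A minimal sketch with a hypothetical helper (the file path and property are examples only):

```python
def append_property(path, key, value):
    """Append key=value at the bottom of system.properties, stripped of any
    trailing characters (spaces, # comments) that would break parsing."""
    line = f"{key}={value}".strip()
    with open(path, "a", encoding="ascii") as f:
        f.write(line + "\n")

# Example: enable HSTS support (stop the application before editing).
append_property("system.properties", "unifi.https.hsts", "true")
```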

SMTP Related Settings

By default, SMTPS validates certificates and will reject self-signed or untrusted certificates. If your mail server uses an untrusted certificate, you must disable certificate verification with the following: smtp.checkserveridentity=false

Starting with UniFi Network version 6.1, STARTTLS is opportunistically enabled by default; i.e., it will be used if the server announces support for it, and it will require a trusted certificate. If using a self-signed or untrusted certificate, you must disable STARTTLS by setting the following: smtp.starttls_enabled=false

This only controls whether STARTTLS will be used if the server supports it. To force its use, see starttls_required below.

With UniFi Network version 6.1 and newer, STARTTLS is opportunistically enabled by default, but only required when using port 587. This behavior can be overridden with smtp.starttls_required: set it to true to force the use of STARTTLS on ports other than 587, or to false to make STARTTLS optional on port 587.

If smtp.starttls_enabled=false is set, the starttls_required value has no impact.
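
The interaction of the two settings can be summarized in a small decision function. This is an illustrative model of the behavior described above, not UniFi code:

```python
def starttls_mode(port, enabled=True, required=None):
    """Return "off", "opportunistic" (used only if the server offers it),
    or "required" (sending fails if STARTTLS cannot be negotiated)."""
    if not enabled:
        return "off"               # starttls_required has no impact here
    if required is None:
        required = (port == 587)   # default: required only on port 587
    return "required" if required else "opportunistic"

print(starttls_mode(587))                # → required
print(starttls_mode(25))                 # → opportunistic
print(starttls_mode(587, enabled=False)) # → off
```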

User Tips & Notes

  • If you receive an error, it’s possible there are hash tags (#) in front of commands. Hash tags indicate comments and will make commands non-operational until the hash tag is removed.
  • If you want to reduce the logging frequency on your RPi UniFi Network application, see this Community thread. ATTENTION: Without logs, it is impossible to receive appropriate support. Use this tip at your own discretion. See how to extract logs in our UniFi – How to View Log Files article.
  • If you cannot find the system.properties file, it might not have been created yet. The file is created once the UniFi app runs successfully. If the application fails to start because of a port clash, that does not count as a successful launch, so the file is never created and you cannot edit it to change the port numbers and avoid the clash.

    Source :
    https://help.ui.com/hc/en-us/articles/205202580-UniFi-Explaining-the-system-properties-File

Ubiquiti UniFi – Ports Used

This article shows what UDP and TCP ports are used by the UniFi Network application by default. The information applies to both Network applications hosted on UniFi OS Consoles, such as the UniFi Cloud Key (UCK-G2, UCK-G2-PLUS, and UC-CK) or UniFi Dream Machine (UDM or UDM-Pro), as well as self-hosted Network applications.

Note: Make sure to always update your Network application to the latest version.

Local Ingress Ports

Protocol | Port number | Usage
UDP      | 3478        | Port used for STUN.
UDP      | 5514        | Port used for remote syslog capture.
TCP      | 8080        | Port used for device and application communication.
TCP      | 443         | Port used for application GUI/API as seen in a web browser (applications hosted on a UniFi OS Console).
TCP      | 8443        | Port used for application GUI/API as seen in a web browser (applications hosted on Windows/macOS/Linux).
TCP      | 8880        | Port used for HTTP portal redirection.
TCP      | 8843        | Port used for HTTPS portal redirection.
TCP      | 6789        | Port used for UniFi mobile speed test.
TCP      | 27117       | Port used for local-bound database communication.
UDP      | 5656-5699   | Ports used by AP-EDU broadcasting.
UDP      | 10001       | Port used for device discovery.
UDP      | 1900        | Port used for “Make application discoverable on L2 network” in the UniFi Network settings.

Note: Although TCP 22 is not one of the ports UniFi Network operates on by default, it is worth mentioning since it is the port used when UniFi devices or the Network application host are accessed via SSH.

Ingress Ports required for L3 management over the internet

Note: These ports need to be open at the gateway/firewall as well as on the UniFi Network application host. This would be achieved by creating port forwards on the gateway/firewall where the application is hosted.

Protocol | Port number | Usage
UDP      | 3478        | Port used for STUN.
TCP      | 8080        | Port used for device and application communication.
TCP      | 443         | Port used for application GUI/API as seen in a web browser (applications hosted on a UniFi OS Console).
TCP      | 8443        | Port used for application GUI/API as seen in a web browser (applications hosted on Windows/macOS/Linux).
TCP      | 8843        | Port used for HTTPS portal redirection.
TCP      | 6789        | Port used for UniFi mobile speed test.

Egress Ports required for UniFi Remote Access

Note: In most cases, these ports will be open and unrestricted by default.

Protocol | Port number | Usage
UDP      | 3478        | Port used for STUN.
TCP/UDP  | 443         | Port used for Remote Access service.
TCP      | 8883        | Port used for Remote Access service.

Changing Default Ports

Changing default port assignments can only be done on self-hosted Network applications (Windows/macOS/Linux). This can be accomplished as follows:

1. Close any instances of the UniFi Network application.

2. Modify the system.properties file, located at <unifi_base>/data/system.properties.

  • For example, if port 8081 was in use and port 8089 was open, you could change it by modifying unifi.shutdown.port=8081 to unifi.shutdown.port=8089

3. Restart the UniFi Network application.

Note: Make sure there are no leading or trailing spaces, comments, or other characters like hash tags (#) on any custom lines. Otherwise, UniFi Network will ignore the customizations.

Source :
https://help.ui.com/hc/en-us/articles/218506997-UniFi-Ports-Used

Ubiquiti UniFi – UAP Status Meaning Definitions

This article describes the different statuses the UniFi Network application may ascribe to a UniFi Access Point within the UniFi Devices section.

[Image: device.status.png – device status column in the UniFi Devices list]
Device Status | Description
Connected | The UAP is physically wired to the network by an ethernet cable. The UAP is in a connected state, able to service WLAN stations. Currently, no updates/changes to configuration are being run on the UAP.
Connected (Wireless) | The UAP is wirelessly uplinked to a physically wired AP.
Connected (100 FDX) | The UAP is physically wired to the network at 100 Mbps in full-duplex mode. This will appear when a UAP is connected but not at the ideal connection rate. FDX stands for Full Duplex. It may appear as 10/100/1000, HDX or FDX.
Connected (Disabled) | The UAP has been disabled in the UniFi Network application (Properties > Manage Device > Disable this Device). It will be excluded from the dashboard status, and its LED and WLAN will be turned off.
Connected (Limited) | This will appear when a UAP is connected and can reach the UniFi Network application, but is unable to reach either the gateway or the custom IP defined for the uplink connectivity monitor. In this state downlink UAPs (wireless UAPs) will become Isolated.
Provisioning | The UAP is in a connected state; however, it is applying updates/changes to the configuration and will shortly reboot (temporarily disconnecting WLAN stations) and return back online.
Restarting | After clicking on the Restart button in the Actions column, the device will restart.
Adopting | Device is adopting normally.
Pending Adoption | The UAP has been detected by the UniFi Network application, but is not adopted yet. Click on the Adopt button to do so.
Pending Adoption (Update Required) | Devices with firmware too old for the UniFi Network application will see this when attempting to adopt. Clicking Update will upgrade the device to the latest stable firmware release and adopt it to the application.
Heartbeat Missed | The UniFi Network application did not receive a reply at the dynamically scheduled interval. This will appear before “Disconnected,” usually about 30-45 seconds after missing the interval. This is usually seen when configuring a wireless uplink; if the state does not change to Connected after a while, something went wrong. See this article for wireless uplink instructions.
Disconnected | The adopted UAP is now in a disconnected state, meaning the UniFi Network application does not have connectivity to the access point (check cables, network settings, and changes to topology).
Isolated | The adopted UAP is unable to reach the gateway and is awaiting a nearby, wired UAP already managed by the UniFi Network application in order to establish a “wireless uplink.” See this article for wireless uplink instructions.
Managed by Other | A UAP is located on the same network as the UniFi Network application, but is already bound to another UniFi Network application. Providing the username/password of the UAP will unbind it from the existing Network application and begin adoption in the current application. See this article: UniFi – Advanced Adoption of a “Managed By Other” Device.
Upgrading | The UAP is upgrading and should not be disconnected. This should be accompanied by the AP’s LED flashing. (See what the different LED combinations mean in this article.)
RF Scanning | Appears when an RF Scan is taking place.

Source :
https://help.ui.com/hc/en-us/articles/205231710-UniFi-UAP-Status-Meaning-Definitions

Ubiquiti UniFi – How to View Log Files

The log directory for Linux is mentioned below, as it is the consistent folder location on the officially supported distros. It is the same whether you install the UniFi Network application on your own installation of Debian or Ubuntu, or on a UniFi Cloud Key. Depending on the platform being used and how it is configured, there may be other locations, but on the supported distros /usr/lib/unifi/logs will always contain the files.

There are three locations where you can view log files related to UniFi devices and the Network application: /var/log/messages, server.log, and mongod.log. See below for what you will find in each.

1. UniFi Dream Machines

/var/log/messages

2. UniFi AP: contains info local to UniFi Access Points, like 802.11 info

/var/log/messages

3. UniFi Switch: contains info local to the switch, like port link state changes, spanning tree events, etc.

/var/log/messages

4. UniFi Security Gateway: contains USG’s general logging.

/var/log/messages

5. UniFi Network application:

  • Contains information about the Network application, communication with UAPs, etc:
server.log
  • Contains information about UniFi software local to Network application installed on a PC.
mongod.log

How to View Log Files: UniFi APs and Switches

To view log files under UAP and USW:

1. Connect to UAP or USW via SSH.
2. Type:

cat /var/log/messages

3. View output.

To view the live logs, with output updating in your SSH session as new logs are appended, run the following instead of the above cat command.

tail -f /var/log/messages

How to View Log Files: UniFi Security Gateways

To view log files under a USG:

1. Connect to the USG via SSH.
2. In the EdgeOS CLI, the log can be viewed by running the following commands:

General Logging

show log

IPsec VPN Logging

show vpn log 

FreeRADIUS Logging 

sudo cat /var/log/freeradius/radius.log

DNSmasq Logging

sudo cat /var/log/dnsmasq.log

IPS/IDS Engine Logging

sudo cat /var/log/suricata/suricata.log

3. View live logging.

To view the live logs, with output updating in your SSH session as new logs are appended, run the following instead of the cat command above.

tail -f /var/log/messages

User Tip: If you would like to see only the last few lines, the tail utility can be used. The command below will output the last 10 lines of the radius.log file.

sudo tail -n 10 /var/log/freeradius/radius.log

NOTE:Firewall logs aren’t in the UI yet. Please see this Community post for more details.  

How to Download Encrypted Log Files from the Network Application

The Network application also allows users to download log files to share with Ubiquiti support, but these logs are encrypted (for security reasons), so you will not be able to view them yourself. For viewing, use the options described above. It is important to note that this support file does not include device logs.

In the Network application, go to Settings > System Settings > Maintenance > Support Information, and click Download Logs.

For Network applications not hosted on UniFi OS Consoles, you can find the logs in the following locations:

  • Windows: C:\Users\<username>\Ubiquiti UniFi\logs\
  • macOS: /Users/<username>/Library/Application\ Support/UniFi/logs/
  • UniFi Cloud Key and Debian/Ubuntu Linux*: /usr/lib/unifi/logs/ 

    Source :
    https://help.ui.com/hc/en-us/articles/204959834-UniFi-How-to-View-Log-Files