Top 10 AI Security Risks According to OWASP

By: Trend Micro
August 15, 2023
Read time: 4 min (1157 words)

The unveiling of the first-ever Open Worldwide Application Security Project (OWASP) risk list for large language model AI chatbots was yet another sign of generative AI’s rush into the mainstream—and a crucial step toward protecting enterprises from AI-related threats.

For more than 20 years, the Open Worldwide Application Security Project (OWASP) top 10 risk list has been a go-to reference in the fight to make software more secure. So it’s no surprise developers and cybersecurity professionals paid close attention earlier this spring when OWASP published an all-new list focused on large language model AI vulnerabilities.

OWASP’s move is yet more proof of how quickly AI chatbots have swept into the mainstream. Nearly half (48%) of corporate respondents to one survey said that by February 2023 they had already replaced workers with ChatGPT—just three months after its public launch. With many observers expressing concern that AI adoption has rushed ahead without understanding of the risks involved, the OWASP top 10 AI risk list is both timely and essential.

Large language model vulnerabilities at a glance

OWASP has released two draft versions of its AI vulnerability list so far: one in May 2023 and a July 1 update with refined classifications and definitions, examples, scenarios, and links to additional references. The most recent is labeled ‘version 0.5’, and a formal version 1 is reported to be in the works.

We did some analysis and found the vulnerabilities identified by OWASP fall broadly into three categories:

  1. Access risks associated with exploited privileges and unauthorized actions.
  2. Data risks such as data manipulation or loss of services.
  3. Reputational and business risks resulting from bad AI outputs or actions.

In this blog, we take a closer look at the specific risks in each case and offer some suggestions about how to handle them.

1. Access risks

Of the 10 vulnerabilities listed by OWASP, four are specific to access and misuse of privileges: insecure plugins, insecure output handling, permissions issues, and excessive agency.

According to OWASP, any large language model that uses insecure plugins to receive “free-form text” inputs could be exposed to malicious requests, resulting in unwanted behaviors or the execution of unauthorized remote code. On the flipside, plugins or applications that handle large language model outputs insecurely—without evaluating them—could be susceptible to cross-site and server-side request forgeries, unauthorized privilege escalations, hijack attacks, and more.

Similarly, when authorizations aren’t tracked between plugins, permissions issues can arise that open the way for indirect prompt injections or malicious plugin usage.

Finally, because AI chatbots are ‘actors’ able to make and implement decisions, it matters how much free rein (i.e., agency) they’re given. As OWASP explains, “When LLMs interface with other systems, unrestricted agency may lead to undesirable operations and actions.” Examples include personal mail reader assistants being exploited to propagate spam or customer service AI chatbots manipulated into issuing undeserved refunds.

In all of these cases, the large language model becomes a conduit for bad actors to infiltrate systems.

2. Data risks

Poisoned training data, supply chain vulnerabilities, prompt injection vulnerabilities, and denials of service are all data-specific AI risks.

Data can be poisoned deliberately by bad actors who want to harm an organization. It can also be distorted inadvertently when an AI system learns from unreliable or unvetted sources. Both types of poisoning can occur within an active AI chatbot application or emerge from the large language model supply chain, where reliance on pre-trained models, crowdsourced data, and insecure plugin extensions may produce biased data outputs, security breaches, or system failures.

With prompt injections, ill-meaning inputs may cause a large language model AI chatbot to expose data that should be kept private or perform other actions that lead to data compromises.

AI denial of service attacks are similar to classic DOS attacks. They may aim to overwhelm a large language model and deprive users of access to data and apps, or—because many AI chatbots rely on pay-as-you-go IT infrastructure—force the system to consume excessive resources and rack up massive costs.

3. Reputational and business risks

The final OWASP vulnerability (according to our buckets) is already having consequences around the world today: overreliance on AI. There’s no shortage of stories about large language models generating false or inappropriate outputs, from fabricated citations and legal precedents to racist and sexist language.

OWASP points out that depending on AI chatbots without proper oversight can make organizations vulnerable to publishing misinformation or offensive content that results in reputational damage or even legal action.

Given all these various risks, the question becomes, “What can we do about it?” Fortunately, there are some protective steps organizations can take.

What enterprises can do about large language model vulnerabilities

From our perspective at Trend Micro, defending against AI access risks requires a zero-trust security stance with disciplined separation of systems (sandboxing). Even though generative AI has the ability to challenge zero-trust defenses in ways that other IT systems don’t—because it can mimic trusted entities—a zero-trust posture still adds checks and balances that make it easier to identify and contain unwanted activity. OWASP also advises that large language models “should not self-police” and calls for controls to be embedded in application programming interfaces (APIs).

Sandboxing is also key to protecting data privacy and integrity: keeping confidential information fully separated from shareable data and making it inaccessible to AI chatbots and other public-facing systems. (See our recent blog on AI cybersecurity policies for more.)

Good separation of data prevents large language models from including private or personally identifiable information in public outputs, and from being publicly prompted to interact with secure applications such as payment systems in inappropriate ways.

On the reputational front, the simplest remedies are to not rely solely on AI-generated content or code, and to never publish or use AI outputs without first verifying they are true, accurate, and reliable.

Many of these defensive measures can—and should—be embedded in corporate policies. Once an appropriate policy foundation is in place, security technologies such as endpoint detection and response (EDR), extended detection and response (XDR), and security information and event management (SIEM) can be used for enforcement and to monitor for potentially harmful activity.

Large language model AI chatbots are here to stay

OWASP’s initial work cataloguing AI risks proves that concerns about the rush to embrace AI are well justified. At the same time, AI clearly isn’t going anywhere, so understanding the risks and taking responsible steps to mitigate them is critically important.

Setting up the right policies to manage AI use and implementing those policies with the help of cybersecurity solutions is a good first step. So is staying informed. The way we see it at Trend Micro, OWASP’s top 10 AI risk list is bound to become as much of an annual must-read as its original application security list has been since 2003.

Next steps

For more Trend Micro thought leadership on AI chatbot security, check out these resources:

Source :
https://www.trendmicro.com/en_us/research/23/h/top-ai-risks.html

The Current Security State of Private 5G Networks

By: Trend Micro
August 18, 2023
Read time: 3 min (931 words)

Private 5G networks offer businesses enhanced security, reliability, and scalability. Learn more about why private 5G could be the future of secure networking.

Source :
https://www.trendmicro.com/en_us/research/23/h/private-5g-network-security.html

An Overview of the New Rhysida Ransomware Targeting the Healthcare Sector

By: Trend Micro Research
August 09, 2023
Read time: 7 min (1966 words)

Updated on August 9, 2023, 9:30 a.m. EDT: We updated the entry to include an analysis of current Rhysida ransomware samples’ encryption routine.  
Updated on August 14, 2023, 6:00 a.m. EDT: We updated the entry to include Trend XDR workbench alerts for Rhysida and its components.

Introduction

On August 4, 2023, the HHS’ Health Sector Cybersecurity Coordination Center (HC3) released a security alert about a relatively new ransomware called Rhysida (detected as Ransom.PS1.RHYSIDA.SM), which has been active since May 2023. In this blog entry, we will provide details on Rhysida, including its targets and what we know about its infection chain.

Who is behind the Rhysida ransomware?

Not much is currently known about the threat actors behind Rhysida in terms of origin or affiliations. According to the HC3 alert, Rhysida presents itself as a “cybersecurity team” that offers to assist victims in finding security weaknesses within their networks and systems. In fact, the group’s first appearance involved the use of a victim chat support portal.

Who are Rhysida’s targets?

As mentioned earlier, Rhysida, which was previously known for targeting the education, government, manufacturing, and tech industries, among others, has begun conducting attacks on healthcare and public health organizations. The healthcare industry has seen an increasing number of ransomware attacks over the past five years. This includes a recent incident involving Prospect Medical Holdings, a California-based healthcare system, which occurred in early August (although the group behind the attack has yet to be named as of writing).

Data from Trend Micro™ Smart Protection Network™ (SPN) shows a similar trend: detections from May to August 2023 indicate that Rhysida’s operators are targeting multiple industries rather than focusing on a single sector.

The threat actor also targets organizations around the world, with SPN data showing several countries where Rhysida binaries were detected, including Indonesia, Germany, and the United States.

Figure 1. The industry and country detection count for Rhysida ransomware based on Trend SPN data from May to August 2023

How does a Rhysida attack proceed?

Figure 2. The Rhysida ransomware infection chain

Rhysida ransomware usually arrives on a victim’s machine via phishing lures, after which Cobalt Strike is used for lateral movement within the system.

Additionally, our telemetry shows that the threat actors execute PsExec to deploy PowerShell scripts and the Rhysida ransomware payload itself. The PowerShell script (g.ps1), detected as Trojan.PS1.SILENTKILL.A, is used by the threat actors to terminate antivirus-related processes and services, delete shadow copies, modify remote desktop protocol (RDP) configurations, and change the active directory (AD) password.

Interestingly, it appears that the script (g.ps1) was updated by the threat actors during execution, eventually leading us to a PowerShell version of the Rhysida ransomware.

Rhysida ransomware employs a 4096-bit RSA key and AES-CTR for file encryption, which we discuss in detail in a succeeding section. After successful encryption, it appends the .rhysida extension and drops the ransom note CriticalBreachDetected.pdf.

This ransom note is fairly unusual — instead of an outright ransom demand as seen in most ransom notes from other ransomware families, the Rhysida ransom note is presented as an alert from the Rhysida “cybersecurity team” notifying victims that their system has been compromised and their files encrypted. The ransom demand comes in the form of a “unique key” designed to restore encrypted files, which must be paid for by the victim.

Summary of malware and tools used by Rhysida

  • Malware: RHYSIDA, SILENTKILL, Cobalt Strike
  • Tools: PsExec
Tactic | Malware/Tool | Details
Initial Access | Phishing | Based on external reports, Rhysida uses phishing lures for initial access
Lateral Movement | PsExec | Microsoft tool used for remote execution
Lateral Movement | Cobalt Strike | Third-party tool abused for lateral movement
Defense Evasion | SILENTKILL | Malware deployed to terminate security-related processes and services, delete shadow copies, modify RDP configurations, and change the AD password
Impact | Rhysida ransomware | Ransomware encryption
Table 1. A summary of the malware, tools, and exploits used by Rhysida

A closer look at Rhysida’s encryption routine 
After analyzing current Rhysida samples, we observed that the ransomware uses LibTomCrypt, an open-source cryptographic library, to implement its encryption routine. Figure 3 shows the procedures Rhysida follows when initializing its encryption parameters. 

Figure 3. Rhysida’s parameters for encryption

Rhysida uses LibTomCrypt’s pseudorandom number generator (PRNG) functionalities for key and initialization vector (IV) generation. The init_prng function is used to initialize PRNG functionalities as shown in Figure 4. The same screenshot also shows how the ransomware uses the library’s ChaCha20 PRNG functionality.

Figure 4. Rhysida’s use of the “init_prng” function

After the PRNG is initialized, Rhysida then proceeds to import the embedded RSA key and declares the encryption algorithm it will use for file encryption:

  • It will use the register_cipher function to “register” the algorithm (in this case, aes) to its table of usable ciphers.
  • It will use the find_cipher function to store the algorithm to be used (still aes) in the variable CIPHER.

Afterward, it will proceed to also register and declare aes for its Cipher Hash Construction (CHC) functionalities. 

Based on our analysis, Rhysida’s encryption routine follows these steps:

  1. After it reads file contents for encryption, it will use the initialized PRNG’s function, chacha20_prng_read, to generate both a key and an IV that are unique for each file.
  2. It will use the ctr_start function to initialize the cipher that will be used, which is aes (from the variable CIPHER), in counter or CTR mode.
  3. The generated key and IV are then encrypted with the rsa_encrypt_key_ex function.
  4. Once the key and IV are encrypted, Rhysida will proceed to encrypt the file using LibTomCrypt’s ctr_encrypt function.
Figure 5. Rhysida’s encryption routine

Unfortunately, since each encrypted file has a unique key and IV — and only the attackers have a copy of the associated private key — decryption is currently not feasible.

How can organizations protect themselves from Rhysida and other ransomware families?

Although we are still in the process of fully analyzing Rhysida ransomware and its tools, tactics, and procedures (TTPs), the best practices for defending against ransomware attacks still hold true for Rhysida and other ransomware families.

Here are several recommended measures that organizations can implement to safeguard their systems from ransomware attacks:

  • Create an inventory of assets and data
  • Review event and incident logs
  • Manage hardware and software configurations
  • Grant administrative privileges and access only when relevant to an employee’s role and responsibilities
  • Enforce security configurations on network infrastructure devices like firewalls and routers
  • Establish a software whitelist permitting only legitimate applications
  • Perform routine vulnerability assessments
  • Apply patches or virtual patches for operating systems and applications
  • Keep software and applications up to date using their latest versions
  • Integrate data protection, backup, and recovery protocols
  • Enable multifactor authentication (MFA) mechanisms
  • Utilize sandbox analysis to intercept malicious emails
  • Regularly educate and evaluate employees’ security aptitude
  • Deploy security tools (such as XDR) which are capable of detecting abuse of legitimate applications

Indicators of compromise

Hashes

The indicators of compromise for this entry can be found here.

MITRE ATT&CK Matrix

Tactic | Technique | Description
Initial Access | T1566 Phishing | Based on external reports, Rhysida uses phishing lures for initial access.
Execution | T1059.003 Command and Scripting Interpreter: Windows Command Shell | It uses cmd.exe to execute commands.
Execution | T1059.001 Command and Scripting Interpreter: PowerShell | It uses PowerShell to create a scheduled task named Rhsd pointing to the ransomware.
Persistence | T1053.005 Scheduled Task/Job: Scheduled Task | When executed with the argument -S, it creates a scheduled task named Rhsd that executes the ransomware.
Defense Evasion | T1070.004 Indicator Removal: File Deletion | Rhysida ransomware deletes itself after execution. The scheduled task (Rhsd) it created is also deleted after execution.
Defense Evasion | T1070.001 Indicator Removal: Clear Windows Event Logs | It uses wevtutil.exe to clear Windows event logs.
Discovery | T1083 File and Directory Discovery | It enumerates and looks for files to encrypt in all local drives.
Discovery | T1082 System Information Discovery | It obtains the number of processors and system information.
Impact | T1490 Inhibit System Recovery | It uses vssadmin to remove volume shadow copies.
Impact | T1486 Data Encrypted for Impact | It uses a 4096-bit RSA key and ChaCha20 for file encryption. It avoids encrypting files with the following strings in their file names: .bat, .bin, .cab, .cmd, .com, .cur, .diagcab, .diagcfg, .diagpkg, .drv, .dll, .exe, .hlp, .hta, .ico, .msi, .ocx, .ps1, .psm1, .scr, .sys, .ini, Thumbs.db, .url, and .iso. It avoids encrypting files found in the following folders: $Recycle.Bin, Boot, Documents and Settings, PerfLogs, ProgramData, Recovery, System Volume Information, Windows, $RECYCLE.BIN, and ApzData. It appends the .rhysida extension to the file names of encrypted files, encrypts all system drives from A to Z, and drops the following ransom note: {Encrypted Directory}\CriticalBreachDetected.pdf.
Impact | T1491.001 Defacement: Internal Defacement | It changes the desktop wallpaper after encryption and prevents the user from changing it back by modifying the NoChangingWallpaper registry value.

Trend Micro Solutions

Trend solutions such as Apex One, Deep Security, Cloud One Workload Security, Worry-Free Business Security, Deep Discovery Web Inspector, Titanium Internet Security, and Cloud Edge can help protect against attacks employed by the Rhysida ransomware.

The following solutions protect Trend customers from Rhysida attacks:

Trend Micro solutions | Detection Patterns / Policies / Rules
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Titanium Internet Security, Trend Micro Cloud One Workload Security, Trend Micro Worry-Free Business Security Services | Ransom.Win64.RHYSIDA.SM, Ransom.Win64.RHYSIDA.THEBBBC, Ransom.Win64.RHYSIDA.THFOHBC, Trojan.PS1.SILENTKILL.SMAJC, Trojan.PS1.SILENTKILL.A
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Worry-Free Business Security Services, Trend Micro Titanium Internet Security | RAN4056T, RAN4052T
Trend Micro Apex One, Trend Micro Deep Discovery Web Inspector | DDI Rule ID: 597 – “PsExec tool detected”; DDI Rule ID: 1847 – “PsExec tool detected – Class 2”; DDI Rule ID: 4524 – “Possible Renamed PSEXEC Service – SMB2 (Request)”; DDI Rule ID: 4466 – “PsExec Clones – SMB2 (Request)”; DDI Rule ID: 4571 – “Possible Suspicious Named Pipe – SMB2 (REQUEST)”; DDI Rule ID: 4570 – “COBALTSTRIKE – DNS (RESPONSE)”; DDI Rule ID: 4152 – “COBALTSTRIKE – HTTP (Response)”; DDI Rule ID: 4469 – “APT – COBALTSRIKE – HTTP (RESPONSE)”; DDI Rule ID: 4594 – “COBALTSTRIKE – HTTP (REQUEST) – Variant 3”; DDI Rule ID: 4153 – “COBALTSTRIKE – HTTP (Request) – Variant 2”; DDI Rule ID: 2341 – “COBALTSTRIKE – HTTP (Request)”; DDI Rule ID: 4390 – “CobaltStrike – HTTPS (Request)”; DDI Rule ID: 4870 – “COBEACON DEFAULT NAMED PIPE – SMB2 (Request)”; DDI Rule ID: 4861 – “COBEACON – DNS (Response) – Variant 3”; DDI Rule ID: 4860 – “COBEACON – DNS (Response) – Variant 2”; DDI Rule ID: 4391 – “COBEACON – DNS (Response)”
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Worry-Free Business Security Services, Trend Micro Titanium Internet Security, Trend Micro Cloud Edge | Troj.Win32.TRX.XXPE50FFF071

Trend Micro XDR uses the following workbench alerts to protect customers from Rhysida-related attacks:

Cobalt Strike

Workbench Alert | ID
Anomalous Regsvr32 Execution Leading to Cobalt Strike | 63758d9f-4405-4ec5-b421-64aef7c85dca
COBALT C2 Connection | afd1fa1f-b8fc-4979-8bf7-136db80aa264
Early Indicator of Attack via Cobalt Strike | 0ddda3c1-dd25-4975-a4ab-b1fa9065568d
Lateral Movement of Cobalt Strike Beacon | 5c7cdb1d-c9fb-4b1d-b71f-9a916b10b513
Possible Cobalt Strike Beacon | 45ca58cc-671b-42ab-a388-d972ff571d68
Possible Cobalt Strike Beacon Active Directory Database Dumping | 1f103cab-9517-455d-ad08-70eaa05b8f8d
Possible Cobalt Strike Connection | 85c752b8-93c2-4450-81eb-52ec6161088e
Possible Cobalt Strike Privilege Escalation Behavior | 2c997bac-4fc0-43b4-8279-6f2e7cf723ae
Possible Fileless Cobalt Strike | cf1051ba-5360-4226-8ffb-955fe849db53

PsExec

Workbench Alert | ID
Possible Credential Access via PSEXESVC Command Execution | 0b870a13-e371-4bad-9221-be7ad98f16d7
Possible Powershell Process Injection via PSEXEC | 7fe83eb8-f40f-43be-8edd-f6cbc1399ac0
Possible Remote Ransomware Execution via PsExec | 47fbd8f3-9fb5-4595-9582-eb82566ead7a
PSEXEC Execution By Process | e011b6b9-bdef-47b7-b823-c29492cab414
Remote Execution of Windows Command Shell via PsExec | b21f4b3e-c692-4eaf-bee0-ece272b69ed0
Suspicious Execution of PowerShell Parameters and PSEXEC | 26371284-526b-4028-810d-9ac71aad2536
Suspicious Mimikatz Credential Dumping via PsExec | 8004d0ac-ea48-40dd-aabf-f96c24906acf

SILENTKILL

Workbench Alert | ID
Possible Disabling of Antivirus Software | 64a633e4-e1e3-443a-8a56-7574c022d23f
Suspicious Deletion of Volume Shadow Copy | 5707562c-e4bf-4714-90b8-becd19bce8e5

Rhysida

Workbench Alert | ID
Ransom Note Detection (Real-time Scan) | 16423703-6226-4564-91f2-3c03f2409843
Ransomware Behavior Detection | 6afc8c15-a075-4412-98c1-bb2b25d6e05e
Ransomware Detection (Real-time Scan) | 2c5e7584-b88e-4bed-b80c-dfb7ede8626d
Scheduled Task Creation via Command Line | 05989746-dc16-4589-8261-6b604cd2e186
System-Defined Event Logs Clearing via Wevtutil | 639bd61d-8aee-4538-bc37-c630dd63d80f

Trend Micro Vision One hunting query

Trend Vision One customers can use the following hunting query to search for Rhysida within their system:

processCmd:”powershell.exe*\\*$\?.ps1″ OR (objectFilePath:”?:*\\??\\psexec.exe” AND processCmd:”*cmd.exe*\\??\\??.bat”)
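
If Trend Vision One is not available, a rough local sweep for the artifacts described above (the .rhysida extension and the CriticalBreachDetected.pdf ransom note) can be approximated in PowerShell. This is only a hypothetical triage sketch, not a Trend Micro detection:

Get-ChildItem -Path C:\ -Recurse -Force -ErrorAction SilentlyContinue -Include "*.rhysida","CriticalBreachDetected.pdf" | Select-Object FullName, LastWriteTime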

Source :
https://www.trendmicro.com/en_us/research/23/h/an-overview-of-the-new-rhysida-ransomware.html

What do Allow, Deny & Discard do on a SonicWall Access Rule?

Last Update : 07/25/2022

Description

This article explains the three Actions available on an access rule.

Resolution

Firewall rules, in general, are based on the concept of Implicit Deny.  Implicit Deny means that the default answer to whether a communication is allowed to transit the firewall is always No or Deny.  Therefore, the majority of Access Rules tend to be Allow.  A firewall processes a communication, inbound or outbound, from the highest-priority rule to the lowest.  Once a rule is found whose conditions match, that rule is executed by the firewall.  Allow, Deny, and Discard are the actions the firewall can take for any communication that meets the conditions of a particular Access Rule.  Should a communication come into the firewall and no Access Rule meets the conditions to allow it through, the firewall will drop the communication.

Gen7 Add access rule dialog box

Image

Allow – This means that the firewall will permit the communication to continue through the firewall to its destination.

 NOTE: When creating a new access rule, the default Action on your firewall is set to Allow. 

Gen6 Add access rule dialog box

Deny – This means that when a communication is found to match the conditions of an Access Rule with the Deny action, the communication will not be permitted to proceed.  The communication is dropped by the firewall, a RST (reset) packet is sent back to the originating device, and the communication is ended.  The RST packet is a communication that goes back to the originator of the traffic stating that the connection has been closed.  Under most circumstances, you should not have to write a Deny rule, as Deny is the default action as described above.

 NOTE: Be advised that the RST packet is a normal part of network communications and is not unique to the SonicWall.

Discard – This option is much like Deny in that it will stop and drop the communication.  In this instance, the firewall will not send a RST packet as described in the Deny action above.  When the RST packet does not go back as with Deny, the originator has no confirmation that there is a device to respond at the IP address it is trying to reach.  Even if the originator suspects that a security function is stopping it, it still will not know anything for sure.  This is essentially Stealth Mode applied at the Access Rule level.

Related Articles

Categories

Source :
https://www.sonicwall.com/support/knowledge-base/what-does-the-allow-deny-discard-do-on-an-access-rule/220725123655973/

Accessing Safemode when Sonicwall firewall is not reachable via CLI or GUI

Last Update : 05/09/2023

Description

This article describes how to put a SonicWall into safe mode through the GUI or through the command line interface (CLI).

You may require to follow this article for the following:

  • Firewall not accessible any longer due to configuration issues or other causes
  • Perform a firmware upgrade when it fails via normal means.
  • Perform a ROM/Safemode version upgrade.
  • View the boot logs or other diagnostic information.

 NOTE: Factory Reset via safemode is a required step when the device turns on but it is not reachable. A backup of the settings will be required after the factory reset or the firewall has to be reconfigured from scratch.

Resolution

ACCESSING SAFEMODE WHEN FIREWALL IS NOT REACHABLE VIA CLI/UI:

  1. Using a paperclip or similarly sized object, press and hold down the RST button located in the small hole on the front or back of the device (depending on the appliance) for at least 60 Seconds. Once the test light on the device becomes solid or begins to blink then the SonicWall is in safe mode.

     NOTE: On an NSsp 13700 or NSa Series appliance, press the button, but you do not need to hold it down.
  2. Connect a computer directly to the following Interface, depending on what model SonicWall you have, via an ethernet cable.
    1. Manually assign a static IP, subnet mask, and gateway (the gateway will be the Safemode firewall IP) on the connected computer's NIC, depending on the SonicWall appliance.
    2. Open the browser on the client connected to the firewall and go to: http://<Safemode_Firewall_IP> (the Safemode firewall IP for your model is listed in the table below).

      Generation/Model | Interface to be used while in Safemode | Safemode Firewall IP | Recommended IP to be set on client
      Generation 5 | X0 | 192.168.168.168 | 192.168.168.10 / 255.255.255.0
      Generation 6 & 7, SOHO & TZ devices | X0 | 192.168.168.168 | 192.168.168.10 / 255.255.255.0
      Generation 6 & 7, NSa/SM/NSsp devices | MGMT interface | 192.168.1.254 | 192.168.1.10 / 255.255.255.0

      CAUTION: Safemode is only available via HTTP, so you have to manually type http://; otherwise the browser will automatically take you to https://.

       NOTE: For new safe mode options on Gen7, please refer: Safemode options on SonicWall Gen 7 devices

ACCESSING SAFEMODE VIA CLI

 NOTE: There is an E-CLI command safemode that restarts the firewall in SafeMode for Generation 7 (NSsp 13700 or NSa).

  1.  If you’re unfamiliar with how to access the SonicWall management using CLI please reference How to login to the appliance using the Command Line Interface (CLI).
  2. Once logged into the CLI, input the following commands.

    Safemode
    yes
  3. The SonicWall will reboot and enter safe mode.
    Image
  4. Reference the steps above to login to the safe mode GUI, beginning with “Connect a computer directly to the following Interface…”

Below you can find some additional information about what you can do in SafeMode:

Reset your firewall to Factory Default

  1. Select Current Firmware with Factory Default Settings and confirm.
  2. Your firewall will restart to factory default.
  3. After the reboot, login to the SonicWall management GUI via X0 Interface on the default firewall IP (192.168.168.168).
     NOTE: Make sure to modify the NIC Settings of the client connected to X0 to match the new firewall default settings (Gateway: 192.168.168.168 and NetMask: 255.255.255.0).

    Image

Upgrading the Gen 6 Firmware or ROM Version from Safe Mode

  1. Download the desired firmware version from MySonicWall.com or have the desired ROM Version on hand. ROM Packs are only available via SonicWall technical support.
     NOTE: Upgrading the ROM version only applies to Generation 6 NSA SonicWalls – 2600, 3600, 4600, 5600, and 6600. Unless you have been requested to upgrade the ROM version by SonicWall technical support do not attempt to do so.  
  2. Select Upload New Firmware and follow the prompt in the pop-up window to upload the firmware or ROM version to the SonicWall.
  3. You should now see the New Firmware or Uploaded ROM Pack on the safe mode GUI. You can boot to the new firmware or ROM by clicking the boot icon on the far right.
     NOTE: Booting to a new firmware or ROM version will reboot the SonicWall and exit safe mode. Make sure you’re completely finished with the SonicWall’s safe mode before selecting boot. 
  4. After the reboot, login to the SonicWall management GUI as you normally would. Navigate to Monitor | Current Status | System Status.
  5. On the Status screen you should see the new firmware version listed under Firmware Version or the new ROM version listed under Safemode Version.

Gen 7 (Using SafeMode to Upgrade Firmware):

  1. Once you enter the URL in the web browser to get to the safe mode page on SonicWall Gen 7 devices, you need to authenticate using the Maintenance Key.
  2. In the Maintenance Key prompt, type in or paste the key you got from MySonicWall and then click Authenticate. If your appliance is running SonicOS 7.0.1 and is not yet registered, use its Auth Code as the key. (To find the Maintenance key, please refer to: Safemode options on SonicWall Gen 7 devices)
    Image

  3. Safe mode page is displayed

    Image
  4. Click Upload Image, and then browse to the location where you saved the SonicOS firmware image, select the file, and click Upload.
  5. Click the Boot button in the row for Available Image Version and select one of the following:
    1. Boot Available Image with Current Configuration: Use this option to restart the appliance with your current configuration settings.
    2. Boot Available Image with Factory Default Configuration: Use this option to restart the appliance with factory default configuration settings. The configuration settings revert to default values, but logs and local backups remain in place.
    3. Boot Available Image with Backup Configuration: Use this option to restart the appliance with saved backup configuration settings. You can choose which backup to use. 

      Image
  6. In the confirmation dialog, click Boot to proceed.
  7. Wait while the firmware is installed, then booted. 
  8. Login to the SonicWall management GUI as you normally would.

Related Articles

Categories

Source :
https://www.sonicwall.com/support/knowledge-base/accessing-safemode-when-firewall-is-not-reachable-via-cli-or-gui/170507123738054/

How can I access the SonicWall Management Interface?

Last Update: 03/13/2023

Description

The SonicWall UTM appliance has a web-based graphical user interface for configuring the security appliance. This is the primary means of configuring the device.

Resolution

By default, all the interfaces (ports like WAN, OPT or X1, X2) are unconfigured except the LAN or X0 interface. The LAN or X0 interface is pre-configured with an IP address of 192.168.168.168 and a subnet mask of 255.255.255.0.

You could also determine the LAN or X0 interface IP address by using the Setup Tool (Windows SetupTool – https://software.sonicwall.com/UtilityTools/SetupTool.exe)

Image
Your UTM appliance package will contain, among other things, an Ethernet cable. Connect one end of the cable to the LAN or X0 interface of the SonicWall and the other end to a computer. Make sure the LED alongside LAN or X0 is lit solid.

As the UTM appliance is not pre-configured with DHCP, the computer connected to it must be configured with a static IP address. Set the computer IP address in the same subnet as the SonicWall LAN or X0.

 EXAMPLE: 192.168.168.2 with a subnet mask of 255.255.255.0.
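
If the management computer runs Windows, the static address can also be assigned from PowerShell. A minimal sketch, assuming the adapter alias is "Ethernet" (adjust to match your NIC):

New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.168.2 -PrefixLength 24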

Open an Internet browser and enter 192.168.168.168 in the address bar.

As this is the first time you are accessing the SonicWall UTM management interface, you will be presented with a wizard. You could follow the wizard to set a new admin password and other information. You could skip the wizard and login directly to the interface by clicking the click here link in the wizard prompt. 

Quick Configuration for Gen6 Appliances with SonicOS 6.5 & above.
Image

When attempting to login directly you will be prompted for a username and password. By default the username is admin and the password is password. Once successfully logged in you can change the password under Manage | Appliance | Base Settings | Administrator Name & Password.

Further configuration of the device can be done either manually, by navigating the tabs on the left-hand side of the interface, or by using the wizard. The wizard can be accessed by clicking on the Wizards icon at the top of the interface.

TROUBLESHOOTING
  • Make sure there is physical connectivity between the computer and the SonicWall.
  • It is always recommended to connect the computer directly to SonicWall instead of through a switch or hub.
  • The LAN or X0 interface LED should be lit solid. If the computer is a PC, the Network Connection Status should show connected.
  • Although the SonicWall is auto-MDIX capable, try a crossover cable.
     TIP: If physical connection has been established but the user is unable to access the management interface try doing a ping to the IP address 192.168.168.168 from the computer.
    If the ping test passes and the user is unable to open the interface page in the browser,  try the following:
  1.  Reboot the SonicWall.
  2.  Clear the browser cache.

See also:

Related Articles

Categories

Source :
https://www.sonicwall.com/support/knowledge-base/how-can-i-access-the-sonicwall-management-interface/170503695604558/

SonicWall Application Rule Common Configurations

Last Update 03/26/2020

Description

This document explains in detail how the SonicWall rulebase works and provides common configurations.

Topics include:

  • Application Rule tips
  • The SonicOS rulebase
  • App Rules positive matching
  • Inspection of encrypted traffic
  • Methods of designing a rulebase

Resolution

The SonicOS Rulebase
SonicWall has two rulebases, one for Stateful Packet Inspection (SPI), and one for Deep Packet Inspection (DPI). The SPI rulebase deals with socket filters that are defined between source and destination address objects to a combination of destination port and protocol, or a range of ports, called a service. Optionally, source ports can also be defined within the service which is more useful for legacy UDP services than for modern services that randomize the source port. A connection is established with the first UDP packet, or after a successful TCP handshake. All other protocols behave like UDP and establish a connection with the first packet.

App Rules, in contrast, monitor traffic of established connections. When an application is detected and a rule matches, the rule action is applied such as dropping the connection.
Access Rules are processed top-down, which means that on the first rule that is matched, (counted from the top) the rule action is applied, and the rulebase is exited. No further rulebase processing follows. This is the industry standard implementation for SPI rules. In contrast, no industry standard implementation exists for App Rules. In addition to standard top-down behavior known from SPI rules, some vendors match top down, but do not drop out with the first match. SonicOS does something in-between: rule order is non-deterministic because rules are internally optimized for processing speed. App Rules cannot overlap. Per definition, only one rule can match. If a matching rule is found, the rule action is applied.

Access Rules have Allowed, Deny, and Discard actions. The difference between Deny and Discard is that Deny sends a segment with TCP RST flag back, whereas Discard silently drops the packet. It is best to use Discard in most cases, unless that breaks something like long living dormant TCP connections that lack higher layer health monitoring as can be found in some legacy custom applications. Both actions terminate the connection and remove it from the connection table. App Rules can apply various actions but Allowed is not one of them. The reason is that App Rules check on an already established connection. By the very nature on how DPI works, the connection has to be established so that the DPI engine can look for clues within the data traffic to determine the application.

Access Rules are enforced between zones that have interfaces assigned. One zone may match to one or multiple interfaces. App Rules are enforced on ingress of a zone, or globally. Both Access Rules and App Rules can be assigned address objects and address groups. Only one object can be assigned per rule. If multiple objects in a rule are desired, a group needs to be created. Groups can be nested.
In addition to defining source and destination address objects in App Rules, source address exclusions can be defined so that App Rules do not overlap. Both Access Rules and App Rules can have socket services assigned. In contrast to Access Rules, App Rules cannot have service groups. Services are less often used in App Rules because App Signatures generally match independent of sockets. The reason to assign a service is to limit application matches to one specific socket, such as an Application on a cleartext HTTP socket that needs to be dropped. App Rules also may match on indirect traffic such as DNS when inspecting a Web session on an HTTP socket. This is often not obvious. In addition to dropping the connection that carries the service, control connections, or peripheral connections like DNS can be targeted by signatures within one App. This is a reason that one typically wants to leave the socket out of the match criteria for an App Rule.

App Rules match on applications which is the main difference to Access Rules that only match on a socket. A variety of match objects can be defined to match within a certain context such as file names, as well as categories, applications, and application sub lists like Social Networking, Facebook, and Like button. The same connection can match many different applications such as HTTP and Netflix. Users are treated as a filter – after a rule was matched. Users are not part of the match criteria of the rule itself. Vendors are not consistent in the implementation of users. Many implement it like SonicWall but some also make the user a match criteria. In SonicOS, an action is applied to all include users minus those users that overlap with exclude users. There is only one rule check; no other rule check is performed regardless whether the user matches or not. Access Rules and App Rules are similar in their behavior to unmatched users. Access Rules apply the inverse of the action such as Deny instead of Allowed, or vice versa. App Rules do not have an Allowed action by their very nature. Unmatched users are simply not applied any action. If the action is Drop, not matched traffic is simply passed without logging. The same is true for the No Action that produces a log for matched users. Remember that not matched users include all user(s) in exclude and all other users not in include. In other words, a rule is applied only to all include users that are not in exclude. All non-defined users are treated as not matching.

Exclude is a concept present in many objects in SonicOS. An exclude is a minus to an include, which means applied to the rule is only what is left of the include, once the exclude was subtracted. No matching of the rule applies to anything in the exclude. This is a bit complicated, but exclude users only matters if also at least partially part of the include. An exclude that does not overlap with an include has no function. This is the same behavior for other object types.

The user concept in SonicOS is a filter after a rule match was made. Only the leftover of include users after subtracting excluded users is applied to that particular matched rule. Users that do not match are no longer processed in the rulebase. This is important to understand.

Image

App Rules
IF source:

  • src-zone
  • src-ip MINUS excluded src-ip

AND IF destination:

  • dst-ip

AND IF application:

  • Apps identified by DPI MINUS excluded Apps, limited to socket

THEN

  • user MINUS excluded users filter
  • action: Drop, BWM, no-DPI, log, nothing

App Rules Positive Matching

While an Access Rule can determine the socket within the first one to three segments of a connection, an App Rule match can only be determined deeper into the life of the connection, after the connection has been established. This puts positive matching at a conundrum. How, for instance, do you permit a connection with Netflix before you even know that the connection carries Netflix? And how do you make sure, after Netflix was detected in a connection stream, that it does not carry other traffic, such as tunneled VPN traffic?

These are interesting questions, and essentially, there is no precise solution. Vendors differ in the implementation of App Rules. Some vendors focus on winning over firewall operators that are used to maintaining SPI rulebases with hundreds or thousands of simple rules, by hiding the abstractions of App Rules under the hood. The nice thing is that operators can treat App Rules the same way as Access Rules. It is also nice that migrating an Access Rule base into next-gen land is as easy as swapping socket service objects for App objects. The big disadvantage of this approach is that it is a very rough interface abstraction. A hacker who studies that specific interface abstraction can make traffic look like Netflix and tunnel malicious traffic through a rule that allows Netflix traffic.

SonicWall decided for the sake of efficacy not to implement such user interface abstraction. With SonicOS App Rules follow very closely the inner working of the DPI engine. If an App is detected, the operator can decide what to do about traffic following the detection. If we want to allow Netflix traffic, we really do not care about detecting Netflix at all. We care about detecting traffic that is NOT Netflix so that we can drop this. Whatever we do not drop, is implicitly allowed at the end of the App Rule base. This is the opposite from an Access Rule base where everything is implicitly dropped at the end of the rulebase. Rules are written in a way to disallow all the things that we do not want in our network excluding those Apps that we want. The easiest way to do this is per category. We drop traffic for instance from the entire Multimedia category, with the exclusion of Netflix that we are allowing. This would drop any traffic for which an App Signature exists in the category Multimedia that is NOT Netflix. At the same time, we still can drop traffic from other categories such as Proxies and protect ourselves from an evasion attack.

Inspection of Encrypted Traffic

Access Rules work the same whether traffic is cleartext or encrypted – unless traffic is tunneled within an encrypted connection. For App Rules, all encrypted traffic looks like tunneled as the App detection has to happen within the encrypted traffic stream.
SonicOS solves this problem via DPI-SSL. DPI-SSL client-side intercepts traffic from a client, decrypts it, scans it, re-encrypts it and sends it off on its way to the server. On the return wing, the opposite happens. Vendors who do not implement such functionality fly blind. They have devices that can be easily evaded by SSL or SSH encrypted traffic that already today makes up over 60% of the Internet traffic.

Methods of Designing a Rulebase

The first decision that is made is whether a rule should be an Access Rule or an App Rule. If a rule does not contain a service, or a socket can be clearly defined, then an Access Rule is the better approach. If a rule uses a generic socket, or can run on dynamic sockets, then an App Rule needs to be chosen. As described above, Access Rules can be negative or positive, hence explicitly permit traffic, or drop traffic. App Rules by design can only be negative. Also, remember that App Rules cannot overlap, hence unlike with Access Rules, rule order does not matter. The author prepared a worksheet where you can turn a positive match into a negative match for an entire category. To allow an application, you deny the entire App Category with the exception of the allowed application. This is a simple approach to configure a positive match on an App Rule.

When you design rules with users, make sure to summarize users into user groups for common applications that are dropped. Again, focus on what is dropped. If you have a combination of networks with users and networks without users, make sure that you put the networks without users in the src-ip exclude field when referencing a user. If you do not do that, the rule is skipped because the networks without users would not match any include users, and you drop out of the rulebase. Everything that you do not explicitly deny in an App Rule is automatically allowed, just the opposite of an Access Rule, where everything that is not explicitly allowed is implicitly denied at the end of the rulebase.

Examples
Admin: YouTube, Vudu, Hulu
Faculty: YouTube and Vudu
Students: YouTube
Nobody: Netflix
Rule 1: Netflix DENY Admin, Faculty, Students
Rule 2: Hulu DENY Faculty, Students
Rule 3: Vudu DENY Students
Rule 4: MULTIMEDIA except Netflix, Hulu, Vudu DENY all-users

Make use of the spreadsheet to carefully plan out your rulebase before configuring it. On the Applications tab, choose a category in column B. Then in columns D through H, set the field to TRUE for the users you want this application allowed for. If you do not use users, simply use column D only. Columns J through N are the negative representation, converting a positive match to a negative match as it is entered in an App Rule. App Rules can only drop a connection AFTER an App was recognized. Hence, we cannot permit an App explicitly. Create an App Rule where you deny all users that show TRUE in columns J through N for that application. Put those apps that are allowed (FALSE in J through N) into the exclude Apps. Keep in mind that in SonicOS App Rules cannot overlap. Create non-overlapping rules with the help of excludes. In App Rules, the user group is only applied to include users. All users that are not in include, or are excluded, drop out of the rule base without any action, and the packet is allowed. If you need a final explicit deny rule, build rules with all app categories that are not used and simply drop this traffic.

Related Articles

Categories

Source :
https://www.sonicwall.com/support/knowledge-base/application-rule-common-configurations/180208123013371/

Enable Remote Desktop (Windows 10, 11, Windows Server)

Last Updated: June 22, 2023 by Robert Allen

In this guide, you will learn how to enable Remote Desktop on Windows 10, 11, and Windows Server. I’ll also show you how to enable RDP using PowerShell and Group Policy.

Tip: Use a remote desktop connection manager to manage multiple remote desktop connections. You can organize your desktops and servers into groups for easy access.

Table of contents

In the diagram below, my admin workstation is PC1. I’m going to enable RDP on PC2, PC3, and Server1 so that I can remotely connect to them. RDP uses port TCP 3389. You can change the RDP listening port by modifying the registry.
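
For reference, the listening port is stored under the RDP-Tcp registry key. A hedged PowerShell sketch that moves RDP to port 3390 (any free port will do; the change takes effect after the Remote Desktop service or the computer restarts):

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name "PortNumber" -Value 3390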

Enable Remote Desktop on Windows 10

In this example, I’m going to enable remote desktop on PC2, which is running Windows 10.

Step 1. Enable Remote Desktop

Right click the start menu and select system.

Under related settings click on Remote desktop.

Click the slider button to enable remote desktop.

You will get a popup to confirm that you want to enable Remote desktop. Click confirm.

Next, Click on Advanced Settings.

Make sure “Require computers to use Network Level Authentication to connect” is selected.

This setting forces the user to authenticate before a remote desktop session is started, adding a layer of security and preventing unauthorized remote connections.

Step 2. Select Users Accounts

The next step is to ensure only specific accounts can use RDP.

By default, only members of the local administrators group will be allowed to connect using remote desktop.

To add or remove user accounts click on “select users that can remotely access this PC”.

To add a user click the Add button and search for the username.

In this example, I’m going to add a user Adam A. Anderson.

Tip. I recommend creating a domain group to allow RDP access. This will make it easier to manage and audit RDP access.
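
If you prefer PowerShell, the same change can be made with the built-in LocalAccounts module. A small sketch, where the account name is only an example:

Add-LocalGroupMember -Group "Remote Desktop Users" -Member "CONTOSO\adam.anderson"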

That was the last step, remote desktop is now enabled.

Let’s test the connection.

From PC1 I open Remote Desktop Connection and enter PC2.

I am prompted to enter credentials.

Success!

I now have a remote desktop connection to PC2.

In the screenshot below you can see I’m connected via console to PC1 and I have a remote desktop connection open to PC2.

Enable Remote Desktop on Windows 11

In this example, I’ll enable remote desktop on my Windows 11 computer (PC3).

Step 1. Enable Remote Desktop

Click on search.

Enter “remote desktop” and click on “Remote desktop settings”

Click the slider to enable remote desktop. You will get a popup to confirm.

Click the down arrow and verify “Require devices to use Network Level Authentication to connect” is enabled.

Remote Desktop is now enabled. In the next step, you will select which users are allowed to use remote desktop.

Step 2. Remote Desktop Users

By default, only members of the local administrators group can use remote desktop. To add additional users follow these steps.

Click on “Remote Desktop users”

Click on add and search or enter a user to add. In this example, I’ll add the user adam.reed.

Now I’ll test if remote desktop is working.

From my workstation PC1 I’ll create a remote desktop connection to PC3 (windows 11).

Enter the password to connect.

The connection is good!

You can see in the screenshot below I’m on the console of PC1 and I have a remote desktop connection to PC3 that is running Windows 11.

Enable Remote Desktop on Windows Server

In this example, I’ll enable remote desktop on Windows Server 2022.

Step 1. Enable Remote Desktop.

Right click the start menu and select System.

On the settings screen under related settings click on “Remote desktop”.

Click the slider button to enable remote desktop.

You will get a popup to confirm that you want to enable Remote desktop. Click confirm.

Click on Advanced settings.

Make sure “Require computers to use Network level Authentication to connect” is enabled.

Remote desktop is now enabled, the next step is to select users that can remotely access the PC.

Step 2. Select User accounts

By default, only members of the local administrators group will be allowed to connect using remote desktop.

To add additional users click on “select users that can remotely access this PC”.

Next, click add then enter or search for users to add. In this example, I’ll add the user robert.allen. Click ok.

Now I’ll test if remote desktop is working on my Windows 2022 server.

From my workstation (pc2) I open the remote desktop connection client, enter srv-vm1, and click connect. Enter my username and password and click ok.

Awesome, it works!

I’ve established a remote session to my Windows 2022 server from my Windows 10 computer.

PowerShell Enable Remote Desktop

To enable Remote Desktop using PowerShell use the command below. This will enable RDP on the local computer.

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -name "fDenyTSConnections" -value 0
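
Note that unlike the Settings slider, the registry change alone does not open Windows Firewall. If the firewall is enabled, you will likely also need to allow the built-in rule group (the display group name below applies to English-language systems):

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"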

You can use the below PowerShell command to check if remote desktop is enabled.


if ((Get-ItemProperty "hklm:\System\CurrentControlSet\Control\Terminal Server").fDenyTSConnections -eq 0) { write-host "RDP is Enabled" } else { write-host "RDP is NOT enabled" }

To enable remote desktop remotely you can use the invoke-command. This requires PS remoting to be enabled, check out my article on remote powershell for more details.

In this example, I’ll enable remote desktop on the remote computer PC2.

invoke-command -ComputerName pc2 -scriptblock {Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -name "fDenyTSConnections" -value 0} 

Group Policy Configuration to allow RDP

If you need to enable and manage the remote desktop settings on multiple computers, then you should use Group Policy or Intune.

Follow the steps below to create a new GPO.

Step 1. Create a new GPO

Open the group policy management console and right click the OU or root domain to create a new GPO.

In this example, I’m going to create a new GPO on my ADPPRO Computers OU, this OU has all my client computers.

Give the GPO a name.

Edit the GPO and browse to the following policy setting.

Computer Configuration -> Policies -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Connections

Enable the policy setting -> Allow users to connect remotely by using Remote Desktop Services

That is the only policy setting that needs to be enabled to allow remote desktop.

Step 2. Update Computer GPO

The GPO policies will auto refresh on remote computers every 90 minutes.

To manually update GPO on a computer run the gpupdate command.
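
For example, run the following from a command prompt or PowerShell on the client to force an immediate refresh:

gpupdate /force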

When remote desktop is managed with group policy the settings will be greyed out. This will allow you to have consistent settings across all your computers. It will also prevent users or the helpdesk from modifying the settings.

That’s a wrap.

I just showed you several ways to enable remote desktop on Windows computers. If you are using Active Directory with domain joined computers then enabling RDP via group policy is the best option.

Related Articles

Source :
https://activedirectorypro.com/enable-remote-desktop-windows/

10 NTFS Permissions Management Best Practices

Last Updated: July 27, 2023 by Robert Allen

This is a list of 10 best NTFS permissions management tips, techniques, and best practices.

These are strategies I have used to implement and manage NTFS security permissions on Windows file shares in medium and large organizations.

NTFS permissions management is critical to ensuring your data is secure from threats and prevents unauthorized access. NTFS permissions need to be properly configured when enabling shared folders on your network.

Let’s get started.

1. Audit & Review NTFS Permissions

Whether you have an existing file server or are setting up a new one, it is important to review your NTFS permissions; at times this can even be a requirement of an audit. To simplify this task, I recommend using an NTFS permissions reporting tool that can scan all folders and show you who has access to what. With a reporting tool, you can list all folder permissions, verify users have the correct permissions, check inheritance, find insecure permissions, verify directory rights, and export the report to CSV, Excel, or PDF.

AD Pro NTFS Permissions Reporter
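If you do not have a reporting tool handy, built-in PowerShell can produce a basic report. A minimal sketch, assuming a share root of D:\Shares (adjust the path for your environment):

# Export every access control entry under D:\Shares to a CSV report
$root = 'D:\Shares'
Get-ChildItem -Path $root -Directory -Recurse | ForEach-Object {
    $folder = $_.FullName
    (Get-Acl -Path $folder).Access | ForEach-Object {
        [PSCustomObject]@{
            Folder    = $folder
            Identity  = $_.IdentityReference
            Rights    = $_.FileSystemRights
            Type      = $_.AccessControlType
            Inherited = $_.IsInherited
        }
    }
} | Export-Csv -Path .\ntfs-permissions.csv -NoTypeInformation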

2. Secure NTFS Permissions with Security Groups

It is a best practice to create security groups to set NTFS permissions rather than using individual user accounts. Security groups have the following advantages:

  • Easier to manage permissions for a group of users
  • Easily remove a user’s permissions
  • Easily grant users access to a file or folder
  • Makes it easier to identify who has access to what
  • Simplifies auditing and compliance reports

Let me walk through an example of how using security groups simplifies NTFS permissions management.

Say you have 100 employees who need access to the accounting folder: 80 need read/write permissions and the other 20 need read-only access.

To set these permissions you only need to create two security groups, and then configure the permissions for these two groups. Example below.

Example of using security groups to manage NTFS permissions.

Now as new employees are hired, all you need to do is add the user to one of these groups to give them access. To remove access you would just remove them from the group.

If you did not use security groups for the NTFS permissions, you would have to add all 100 users to the ACL, which would be very time-consuming and difficult to manage. Example below.

Example of setting individual accounts on NTFS ACL permissions. This is a bad design.

Always use security groups to manage the ACL on NTFS permissions.
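As a rough sketch of the accounting example above (the group names, the ADPRO domain, and the folder path are assumptions for illustration):

# Create the read/write and read-only security groups in Active Directory
New-ADGroup -Name "Accounting-RW" -GroupScope Global
New-ADGroup -Name "Accounting-RO" -GroupScope Global

# Grant the groups on the folder; (OI)(CI) makes the entries inherit to subfolders and files
icacls "D:\Shares\Accounting" /grant "ADPRO\Accounting-RW:(OI)(CI)M"
icacls "D:\Shares\Accounting" /grant "ADPRO\Accounting-RO:(OI)(CI)RX"

From then on, access is managed purely by group membership (Add-ADGroupMember / Remove-ADGroupMember) rather than by touching the ACL.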

3. Standardized Naming Convention & Documentation

This is my favorite NTFS Permissions management tip.

You can easily provide groups of users with unwanted access if you do not use descriptive security group names.

For example, the accounting department just purchased a SaaS-based accounting program. It can sync with Active Directory for single sign-on and permissions. The administrator created Accounting_1 and Accounting_2 groups to manage access to the software: Accounting_1 grants full access and Accounting_2 is limited. Both group names are generic, with no description or documentation.

The accounting department also needs a shared folder set up so they can share and collaborate on some files. The administrator thinks, “I’ve already got accounting groups configured,” and proceeds to use Accounting_1. Users are added to Accounting_1 to provide access to the NTFS share, but unfortunately this also grants those users full access to the SaaS accounting program.

Bad Security Group Names

The groups below are examples of bad security group names: they are generic and have no description, telling the administrator nothing. You would have to scan the entire network to know where these groups are being used.

Good Security Group Names

In the examples below, you can look at the group name and instantly know what it is used for, and there is additional information in the description field.

Do not create generic security group names; instead, be descriptive about their use and fill in the description field.
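For example, the name can encode the server, share, and rights level, and the description field can spell out exactly what the group grants. A small sketch using a hypothetical naming convention:

# Descriptive name plus a description that says exactly what the group grants
New-ADGroup -Name "FS01-Accounting-RW" -GroupScope Global `
    -Description "Grants Modify (read/write) on the Accounting share on FS01"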

4. Do Not Use the Everyone Group (For Anything)

I might get some hate mail for this, but seriously, what is the justification for using the Everyone group? There is no good reason to use it.

You should not set the Everyone group on the ACL.

What is the Everyone group?

All interactive, network, dial-up, and authenticated users are members of the Everyone group. This special identity group gives wide access to system resources. When a user logs on to the network, the user is automatically added to the Everyone group. Membership is controlled by the operating system.
Source: https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/understand-special-identities-groups

The Everyone group also includes the Guest account. This is just bad news for security, so I highly recommend never using the Everyone group for anything.

Unfortunately, there are some poorly designed programs and tech support staff that do not understand this. Has a vendor’s tech support ever told you, “you need to add the Everyone group and give it full permissions”? This is horrible advice, and if followed, it significantly weakens the security of your network.

Some admins will argue that it is not an issue to use Everyone on share permissions and then lock access down using NTFS permissions. This would still allow attackers to scan and detect shared folders in the network, so why allow it? Instead, use the principle of least privilege and only allow those who need access.

You can quickly find where the Everyone account is in use by using a reporting tool and filtering for the account.

In the example below, I scanned my file server and found 4 folders that grant the Everyone account full control, which is not good.

Easily search for the everyone group using the AD Pro Toolkit
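You can do a similar, if slower, check with built-in PowerShell. A minimal sketch, again assuming a D:\Shares root:

# List every folder under D:\Shares where the Everyone group appears on the ACL
Get-ChildItem -Path 'D:\Shares' -Directory -Recurse | ForEach-Object {
    $acl = Get-Acl -Path $_.FullName
    $everyone = $acl.Access | Where-Object { $_.IdentityReference.Value -eq 'Everyone' }
    if ($everyone) {
        [PSCustomObject]@{ Folder = $_.FullName; Rights = ($everyone.FileSystemRights -join ', ') }
    }
}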

5. Use the Principle of Least Privilege

The principle of least privilege means a user should only have access to the data, resources, and applications needed to complete a required task.

Avoiding unnecessary permissions prevents mishandling of company data and helps mitigate security threats.

Just because a user is part of a department doesn’t mean they need full access to all department folders and files. Consider using read-only and read/write groups to set granular permissions on files and folders.

6. Avoid setting Full Access Permissions

Only the administrator account or other IT staff should have full control of files and folders. I can’t think of a good reason a regular user needs full control. Giving regular users full control grants them the ability to change settings and permissions, which is a bad idea.

Do not give regular users full access

7. Limit the Depth of Setting NTFS Permissions

Try to limit setting NTFS permissions to no more than two or three levels deep. There will always be exceptions to this rule, but if you set no rules for these permissions, things will get out of control. Your users will request special permissions for every file or folder, which will cause problems.

Here is an example.

The accounting department has a folder structure with a level 1 folder and two subfolders (level 2 and level 3). It is no problem to set explicit permissions on level 1 and level 2, but I would not go any deeper (level 3), as this becomes difficult to manage; the same goes for files.

I would also try to limit explicit permissions to folders only. Users will call and want specific permissions set on individual files; this becomes a pain to manage, so try to avoid it.

8. Avoid Breaking Inheritance

By default, the permissions set at the root folder will be inherited by all subfolders. If you break inheritance it can make it difficult to read and manage NTFS permissions.

Let’s look at an example.

In the above screenshot, accounting, sales, and purchasing are what I consider the root folders. These folders have NTFS permissions set, and all their subfolders inherit those permissions.

For example, I set permissions on the accounting folder, and all its subfolders inherit those permissions. If I broke inheritance on a subfolder, I would have to set its NTFS permissions separately.

There will be times when you need to break inheritance, such as limiting access to a specific folder, but this should be kept to a minimum.

You can easily check for folder inheritance with the AD Pro Toolkit.

Audit Folder Inheritance with the AD Pro Toolkit
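Built-in PowerShell can also flag folders where inheritance has been broken. A short sketch, assuming the same D:\Shares layout:

# Folders where inheritance from the parent has been disabled (protected ACLs)
Get-ChildItem -Path 'D:\Shares' -Directory -Recurse |
    Where-Object { (Get-Acl -Path $_.FullName).AreAccessRulesProtected } |
    Select-Object FullName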

9. Use Access Based Enumeration (ABE)

Access Based Enumeration allows you to hide files and folders from users who do not have permission. Limiting visibility to files and folders makes it easier for your users to browse and access resources.

If ABE is not enabled, users will still see folders they do not have access to but will be denied access if they try to open them. This can cause confusion, so it is best to just hide them.

To enable ABE follow these steps.

1. Open Server Manager

2. Click on File and Storage Services (left sidebar menu)

3. Click on Shares

4. Right click the share and select properties.

5. Click on Settings

6. Check “Enable access-based enumeration”.

Enable access based enumeration
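The same setting can also be applied from PowerShell on the file server, which is handy when you have many shares. A sketch, assuming a share named Accounting:

# Enable access-based enumeration on a single share
Set-SmbShare -Name 'Accounting' -FolderEnumerationMode AccessBased -Force

# Verify the setting across all shares
Get-SmbShare | Select-Object Name, FolderEnumerationMode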

10. Prevent Users from Creating Folders in the Root

It can be frustrating when you take the time to organize and clean up your folders, only to then find a bunch of new folders in the root directory.

What usually happens is someone creates a folder and uses it to share files with other users, bypassing the security you have put in place. To fix this, set read and execute permissions at the root folder only; do not let this permission propagate to subfolders. You will then need to add the group again and set the permissions you want on the subfolders. Be careful configuring this, as you can easily mess up permissions.
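One way to express this with icacls is to grant read & execute on the root without inheritance flags (so it applies to that folder only) and grant the department groups on their own subfolders. A sketch only; the paths, domain, and group names are assumptions:

# Read & execute on the root folder only (no (OI)(CI), so it does not propagate to subfolders)
icacls "D:\Shares" /grant "ADPRO\All-Staff:RX"

# Department groups get their rights on their own subfolders, inherited downwards
icacls "D:\Shares\Accounting" /grant "ADPRO\Accounting-RW:(OI)(CI)M"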

Bonus #1. File Screening Management

File screen management can increase security and help control data on your Windows file shares. File screen management can be used for the following:

  • Block certain file types such as exe, bat, and video files.
  • Quota management – limit disk space usage for users and groups.
  • Storage reports – generate storage reports to see who is using the most space and which file types.
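These capabilities come from the File Server Resource Manager (FSRM) role. As a quick sketch of blocking executables on a share (the path is an assumption; “Block Executable Files” is one of FSRM’s built-in screen templates):

# Requires the FSRM role and its PowerShell module
Install-WindowsFeature FS-Resource-Manager -IncludeManagementTools

# Block executable files on the accounting share using the built-in template
New-FsrmFileScreen -Path 'D:\Shares\Accounting' -Template 'Block Executable Files'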

Bonus #2. Use Volume Shadow Copy Service (VSS)

VSS is a built-in Windows technology that allows you to take point-in-time snapshots (shadow copies) of a volume. This allows you to keep copies of your file shares or any other data that resides on the volume. VSS works great as a quick way to recover deleted files and folders from your file servers, but it should not be used as your only backup solution.
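On a file server, you can list and create shadow copies from an elevated prompt with vssadmin (scheduling regular snapshots is normally configured in the volume’s Shadow Copies settings). A quick sketch for a D: data volume:

# List existing shadow copies for the D: volume
vssadmin list shadows /for=D:

# Create a new point-in-time shadow copy (Windows Server editions)
vssadmin create shadow /for=D: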

I hope you enjoyed this article. If you have questions or comments please post them below.

Source:
https://activedirectorypro.com/ntfs-permissions-management-best-practices/

Shared Storage and Monitoring for VMware vSphere Cluster as a base building block with Software Defined Storage from StarWind

By Vladan SEGET | Last Updated: July 31, 2023

Shared storage is a critical component of a VMware vSphere cluster. In a vSphere cluster, multiple hosts are grouped together to provide a pool of computing resources that can be used to run virtual machines. These hosts are connected to shared storage, which provides a centralized location for storing virtual machine files, such as virtual disks and configuration files. This shared storage is accessible to all hosts in the cluster, allowing virtual machines to be migrated between hosts without the need to copy files between them.

Shared storage is a base building block without which most (if not all) cluster services will not work. It is a requirement for vSphere HA, DRS, FT, and other cluster services.

What are the benefits of shared storage?

There are several benefits to using shared storage in a vSphere cluster. One of the most significant benefits is the ability to migrate virtual machines between hosts using vMotion. vMotion allows virtual machines to be moved between hosts without any downtime, allowing administrators to perform maintenance tasks or balance the load on the hosts without impacting the availability of virtual machines. This is possible because the virtual machine files are stored on shared storage, which is accessible to all hosts in the cluster.
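As an illustration, with VMware PowerCLI a live migration is a single cmdlet once the VM’s files sit on shared storage (the vCenter, VM, and host names below are placeholders):

# vMotion a running VM to another host in the cluster (requires VMware PowerCLI)
Connect-VIServer -Server vcenter01.lab.local
Move-VM -VM (Get-VM 'app01') -Destination (Get-VMHost 'esx02.lab.local')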

Another benefit of shared storage is the ability to use advanced features such as High Availability (HA) and Distributed Resource Scheduler (DRS). HA provides automatic failover of virtual machines in the event of a host failure, while DRS provides load balancing of virtual machines across hosts in the cluster. Both of these features rely on shared storage to function properly.

There are several types of shared storage that can be used in a vSphere cluster, including Fibre Channel, iSCSI, and NFS. Each of these storage types has its own advantages and disadvantages, and the choice of storage type will depend on factors such as performance requirements, budget, and existing infrastructure.
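As an example of how one of these storage types is consumed, an NFS export can be mounted as the same datastore on every host in a cluster with PowerCLI (the cluster name, NAS host, and export path are placeholders):

# Mount the same NFS export as a shared datastore on every host in the cluster
Get-Cluster 'Cluster01' | Get-VMHost | ForEach-Object {
    New-Datastore -Nfs -VMHost $_ -Name 'ds-nfs01' -NfsHost 'nas01.lab.local' -Path '/export/ds01'
}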

In addition to choosing the right type of shared storage, it is also important to properly configure and manage the storage environment. This includes tasks such as setting up storage arrays, configuring storage networking, and monitoring storage performance. VMware provides a number of tools and best practices to help administrators manage shared storage in a vSphere cluster, including the vSphere Storage APIs, vSphere Storage DRS, and the vSphere Web Client.

StarWind SAN and NAS has another advantage over a hardware-based storage array: cost. In addition, in a hardware array, even though you can have multiple PSUs, CPUs, controller cards, or NICs, you can only have a single motherboard, which is still a single point of failure. StarWind SAN and NAS, being software-based, is configured to run on at least two nodes, where each node contributes its internal disks and RAM to the storage pool created by StarWind. As a result, when one host fails, the other host still has your VM files, because the storage is simply mirrored. If you have vSphere HA, the VMs are restarted on the remaining host automatically; without vSphere HA, you simply start those VMs manually on the remaining host.

What is StarWind SAN and NAS?

StarWind SAN and NAS is software that turns your server or a group of servers into a powerful and easy-to-use storage appliance. It eliminates the need for expensive and complex storage hardware and provides a cost-effective and scalable storage solution for your virtualized environment.

Benefits of StarWind SAN and NAS for VMware vSphere

High Availability – StarWind SAN and NAS provides high availability by creating a redundant storage pool that can withstand hardware failures. It uses synchronous replication to keep the data in sync between the nodes, ensuring that there is no data loss in case of a failure.

Scalability – StarWind SAN and NAS is highly scalable and can be easily expanded by adding more nodes to the storage pool. This allows you to scale your storage capacity as your business grows, without having to invest in expensive hardware.

Cost-Effective – StarWind SAN and NAS is a cost-effective storage solution that eliminates the need for expensive hardware. It uses commodity hardware and turns it into a powerful storage appliance, reducing the overall cost of ownership.

Easy to Use – StarWind SAN and NAS is easy to use and can be set up in minutes. It comes with a user-friendly web-based interface that allows you to manage your storage pool and monitor its performance.

Performance – StarWind SAN and NAS provides high-performance storage that can meet the demands of your virtualized environment. It uses advanced caching algorithms to optimize the performance of your storage pool, ensuring that your virtual machines run smoothly.

Integration with VMware vSphere – StarWind SAN and NAS integrates seamlessly with VMware vSphere, providing a powerful and scalable storage solution for your virtualized environment. It supports all the features of VMware vSphere, including vMotion, High Availability, and Distributed Resource Scheduler.

StarWind Virtual SAN – StarWind Virtual SAN is a software that eliminates the need for physical shared storage by simply “mirroring” internal hard disks and flash between hypervisor servers. It creates a VM-centric and high-performing storage pool for a VMware cluster. This allows you to create a highly available and scalable storage solution for your virtualized environment.

Quote:

StarWind SAN & NAS supports hardware and software-based storage redundancy configurations. The solution allows turning your server with internal storage into a redundant storage array presented as NAS or SAN, exposing standard protocols such as iSCSI, SMB, and NFS. It features Web-based UI, Text-based UI, vCenter Plugin, and Command-line interface for your cluster-wide operations.

A while back, we created a short video of the deployment process for vSphere. However, please note that this product is evolving, and today it might look a bit different. Check the latest StarWind SAN and NAS version here.

https://www.youtube.com/embed/4Wzzk-d_BOM

What about the vCenter Server Appliance in a 2-host config?

Note: in a 2-node config, your vCenter Server Appliance (VCSA) should be stored on shared storage. If you are running your VCSA from local storage on one of your ESXi hosts, you risk downtime of your VCSA if that particular host fails. This does not mean, however, that vSphere HA or other cluster services will fail. Not at all: the VCSA is used only to configure vSphere HA and is not responsible for triggering the actual HA failover. It means you can “lose” your VCSA and still have your VMs restarted on the remaining host automatically.

Performance Improvements of vSphere cluster

StarWind SAN and NAS can improve the performance of VMware vSphere in several ways. One of the main ways is through the use of StarWind Virtual SAN for vSphere, which creates a VM-centric and high-performing storage pool for a VMware cluster. This allows for faster data access and improved performance for virtual machines. StarWind SAN and NAS also uses advanced caching algorithms to optimize the performance of the storage pool. This ensures that frequently accessed data is stored in cache, reducing the time it takes to access the data and improving overall performance.

In addition, StarWind SAN and NAS provides high availability and redundancy, which can improve performance by reducing downtime and ensuring that data is always available. This is achieved through synchronous replication, which keeps the data in sync between the nodes, ensuring that there is no data loss in case of a failure. It supports all the features of VMware vSphere, including vMotion, High Availability, and Distributed Resource Scheduler, which can further improve performance by allowing for workload balancing and resource optimization.

Final Words

In conclusion, shared storage is a critical component of a VMware vSphere cluster. It provides a centralized location for storing virtual machine files, allowing virtual machines to be migrated between hosts without downtime and enabling advanced features such as HA and DRS. Properly configuring and managing shared storage is essential for ensuring the availability and performance of virtual machines in a vSphere cluster.

StarWind SAN and NAS is a powerful and cost-effective storage solution that can be used with VMware vSphere. It provides high availability, scalability, and performance, making it an ideal storage solution for virtualized environments. Its seamless integration with VMware vSphere and support for all its features make it a must-have for any virtualized environment.
