New SEC Cybersecurity Rules: What You Need to Know

By: Greg Young – Trendmicro
August 03, 2023
Read time: 4 min (1014 words)

The US Securities and Exchange Commission (SEC) recently adopted rules regarding mandatory cybersecurity disclosure. Explore what this announcement means for you and your organization.

On July 26, 2023, the US Securities and Exchange Commission (SEC) adopted rules regarding mandatory cybersecurity disclosure. What does this mean for you and your organization? As I understand them, here are the major takeaways that cybersecurity and business leaders need to know:

Who does this apply to?

The rules announced apply only to registrants of the SEC, i.e., companies filing documents with the US SEC. Not surprisingly, this isn’t limited to attacks on assets located within the US, so incidents concerning SEC registrant companies’ assets in other countries are in scope. This scope also, not surprisingly, does not include the government, companies not subject to SEC reporting (i.e., privately held companies), and other organizations.

Breach notification for these others will be the subject of separate compliance regimes, which will hopefully, at some point in time, be harmonized and/or unified to some degree with the SEC reporting.

Advice for security leaders: be aware that these new rules could require “double reporting,” such as for publicly traded critical infrastructure companies. Having multiple compliance regimes, however, is not new for cybersecurity.

What are the general disclosure requirements?

Some pundits have said “four days after an incident” but that’s not quite correct. The SEC says that “material breaches” must be reported “four business days after a registrant determines that a cybersecurity incident is material.”

We’ve hit the first squishy bit: materiality. Directing companies to disclose material events shouldn’t be necessary, but there’s a mixed record of companies making materiality determinations in the course of operating a public company. But what kind of cybersecurity incident would be likely to be important to a reasonable investor?

We’ve seen giant breaches that paradoxically did not move stock prices, and minor breaches that did. I’m clearly on the side of compliance and disclosure, but I recognize it is a gray area. Recently we saw some companies that had the MOVEit vulnerability exploited but suffered no data loss. Should they report? But in some cases, their response to the vulnerability cost millions: how about then? I expect and hope there will be further guidance.

Advice for security leaders: monitor the breach investigation and monitor the analysis of materiality. Security leaders won’t often make that call but should give guidance and continuous updates to the CxO who are responsible.

The second squishy bit is that the report is due four business days after the incident is determined to be material. So not four days after the incident, but four days after the materiality determination. I understand why it was structured this way, as a small indicator of compromise must be followed up before the scope and nature of a breach can be understood, including whether a breach has occurred at all. But this does give a window for some of the foot-dragging on disclosure we’ve unfortunately seen, including from product companies with vulnerabilities.

Advice for security leaders: make management aware of the four-day reporting requirement and monitor the clock once the material line is crossed or identified.
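
To make the four-business-day clock concrete, here is a minimal sketch (illustrative only, not legal advice) that estimates the filing deadline from the date a materiality determination is made. It assumes plain weekday counting and ignores US federal holidays, which counsel would need to factor in.

    from datetime import date, timedelta

    def filing_deadline(determination_date: date, business_days: int = 4) -> date:
        """Rough estimate: N business days after the materiality determination.
        Skips weekends only; federal holidays are deliberately ignored here."""
        deadline = determination_date
        remaining = business_days
        while remaining > 0:
            deadline += timedelta(days=1)
            if deadline.weekday() < 5:  # Monday=0 ... Friday=4
                remaining -= 1
        return deadline

    # A determination made on Thursday, August 3, 2023 would put the deadline
    # at Wednesday, August 9, 2023.
    print(filing_deadline(date(2023, 8, 3)))  # 2023-08-09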

Are there extensions?

There are, but not because you need more time. Instead “The disclosure may be delayed if the United States Attorney General determines that immediate disclosure would pose a substantial risk to national security or public safety and notifies the Commission of such determination in writing.” Note that it specifically states that the Attorney General (AG) makes that determination, and the AG communicates this to the SEC. There could be some delegation of this authority within the Department of Justice in the future, but today it is the AG.

How does it compare to other countries and compliance regimes?

Breach and incident reporting and disclosure is not new, and the concept of reporting material events is already commonplace around the world. GDPR breach reporting is 72 hours, HHS HIPAA requires notice not later than 60 days and 90 days to individuals affected, and the UK Financial Conduct Authority (FCA) has breach reporting requirements. Canada has draft legislation in Bill C-26 that looks at mandatory reporting through the lens of critical industries, which includes verticals such as banking and telecoms but not public companies. Many of the world’s financial oversight bodies do not require breach notification for public companies in the exchanges they are responsible for.

Advice to security leaders: consider the new SEC rules as clarification and amplification of existing reporting requirements for material events rather than a new regime or something harsher or different from what applies in other geographies.

Is breach reporting the only new rule?

No, I’ve only focused on incident reporting in this post. There are a few more. The two most noteworthy ones are:

  • Regulation S-K Item 106, requiring registrants to “describe their processes, if any, for assessing, identifying, and managing material risks from cybersecurity threats, as well as the material effects or reasonably likely material effects of risks from cybersecurity threats and previous cybersecurity incidents.”
  • Also specified is that annual 10-Ks “describe the board of directors’ oversight of risks from cybersecurity threats and management’s role and expertise in assessing and managing material risks from cybersecurity threats.”

Bottom line

SEC mandatory reporting for material cybersecurity events was already required under the general reporting requirements; however, the timelines and nature of the reporting are getting real, with a ticking four-business-day timer on them.

Stepping back from the rules, the importance of visibility and continuous monitoring are the real takeaways. Time to detection can’t be at the speed of your least experienced analyst. Platform means unified visibility rather than a wall of consoles. Finding and stopping breaches means internal visibility must include a rich array of telemetry, and that it be continuously monitored.

Many SEC registrants have operations outside the US, and that means visibility needs to include threat intelligence that is localized to other geographies. These new SEC rules show more than ever that cyber risk is business risk.

To learn more about cyber risk management, check out the following resources:

Source :
https://www.trendmicro.com/en_us/research/23/h/sec-cybersecurity-rules-2023.html

Cybersecurity Threat 1H 2023 Brief with Generative AI

By: Trend Micro
August 08, 2023
Read time: 4 min (1020 words)

How generative AI influenced threat trends in 1H 2023

A lot can change in cybersecurity over the course of just six months in criminal marketplaces. In the first half of 2023, the rapid expansion of generative AI began to be felt in scams such as virtual kidnapping and in new tools built for cybercriminals: offerings like WormGPT and FraudGPT are already being marketed. The use of AI empowers adversaries to carry out more sophisticated attacks and poses a new set of challenges. The good news is that the same technology can also be used to empower security teams to work more effectively.

As we analyze the major events and patterns observed during this time, we uncover critical insights that can help businesses stay ahead of risk and prepare for the challenges that lie ahead in the second half of the year.

AI-Driven Tools in Cybercrime

The adoption of AI in organizations has increased significantly, offering numerous benefits. However, cybercriminals are also harnessing the power of AI to carry out attacks more efficiently.

As detailed in a Trend research report in June, virtual kidnapping is a relatively new and concerning type of imposter scam. The scammer extorts their victims by tricking them into believing they are holding a friend or family member hostage. In reality, it is AI technology known as a “deepfake,” which enables the fraudster to impersonate the real voice of the “hostage” whilst on the phone. Audio harvested from their social media posts will typically be used to train the AI model.

However, it is generative AI that’s playing an increasingly important role earlier on in the attack chain—by accelerating what would otherwise be a time-consuming process of selecting the right victims. To find those most likely to pay up when confronted with traumatic content, threat groups can use generative AI like ChatGPT to filter large quantities of potential victim data, fusing it with geolocation and advertising analytics. The result is a risk-based scoring system that can show scammers at a glance where they should focus their attacks.

This isn’t just theory. Virtual kidnapping scams are already happening. The bad news is that generative AI could be leveraged to make such attacks even more automated and effective in the future. An attacker could generate a script via ChatGPT to then convert to the hostage’s voice using deepfake and a text-to-speech app.

Of course, virtual kidnapping is just one of a growing number of scams that are continually being refined and improved by threat actors. Pig butchering is another type of investment fraud where the victim is befriended online, sometimes on romance sites, and then tricked into depositing their money into fictitious cryptocurrency schemes. It’s feared that these fraudsters could use ChatGPT and similar tools to improve their conversational techniques and perhaps even shortlist victims most likely to fall for the scams.

What to expect

The emergence of generative AI tools enables cybercriminals to automate and improve the efficiency of their attacks. The future may witness the development of AI-driven threats like DDoS attacks, wipers, and more, increasing the sophistication and scale of cyberattacks.

One area of concern is the use of generative AI to select victims based on extensive data analysis. This capability allows cybercriminals to target individuals and organizations with precision, maximizing the impact of their attacks.

Fighting back

Fortunately, security experts like Trend are also developing AI tools to help customers mitigate such threats. Trend pioneered the use of AI and machine learning for cybersecurity—embedding the technology in products as far back as 2005. From those early days of spam filtering, we began developing models designed to detect and block unknown threats more effectively.

Trend’s defense strategy

Most recently, we began leveraging generative AI to enhance security operations. Companion is a cybersecurity assistant designed to automate repetitive tasks and thereby free up time-poor analysts to focus on high-value tasks. It can also help to fill skills gaps by decoding complex scripts, triaging and recommending actions, and explaining and contextualizing alerts for SecOps staff.

What else happened in 1H 2023?

Ransomware: Adapting and Growing

Ransomware attacks are becoming more sophisticated, with threat actors leveraging AI-enabled tools to automate their malicious activities. One new player on the scene, Mimic, has abused legitimate search tools to identify and encrypt specific files for maximum impact. Meanwhile, the Royal ransomware group has expanded its targets to include Linux platforms, signaling an escalation in its capabilities.

According to Trend data, ransomware groups have been targeting finance, IT, and healthcare industries the most in 2023. From January 1 to July 17, 2023, there have been 219, 206, and 178 successful compromises of victims in these industries, respectively.

Our research findings revealed that ransomware groups are collaborating more frequently, leading to lower costs and increased market presence. Some groups are showing a shift in motivation, with recent attacks resembling those of advanced persistent threat (APT) groups. To combat these evolving threats, organizations need to implement a “shift left” strategy, fortifying their defenses to prevent threats from gaining access to their networks in the first place.

Vulnerabilities: Paring Down Cyber Risk Index

While the Cyber Risk Index (CRI) has dropped to a moderate range, the threat landscape remains concerning. Threat actors are exploiting smaller platforms, as with Clop ransomware targeting MOVEit and compromising government agencies. New top-level domains from Google pose risks for concealing malicious URLs. Connected cars create new avenues for hackers. Proactive cyber risk management is crucial.

Campaigns: Evading Detection and Expanding Targets

Malicious actors are continually updating their tactics, techniques, and procedures (TTPs) to evade detection and cast a wider net for victims. APT34, for instance, used DNS-based communication combined with legitimate SMTP mail traffic to bypass security policies. Meanwhile, Earth Preta has shifted its focus to target critical infrastructure and key institutions using hybrid techniques to deploy malware.

Persistent threats like the APT41 subgroup Earth Longzhi have resurfaced with new techniques, targeting firms in multiple countries. These campaigns reflect a coordinated approach to cyber espionage, and businesses must remain vigilant against such attacks.

To learn more about Trend’s 2023 Midyear Cybersecurity Report, please visit: https://www.trendmicro.com/vinfo/us/security/research-and-analysis/threat-reports/roundup/stepping-ahead-of-risk-trend-micro-2023-midyear-cybersecurity-threat-report

Source :
https://www.trendmicro.com/en_us/research/23/h/cybersecurity-threat-2023-generative-ai.html

The Journey to Zero Trust with Industry Frameworks

By: Alifiya Sadikali – Trendmicro
August 09, 2023
Read time: 4 min (1179 words)

Discover the core principles and frameworks of Zero Trust, NIST 800-207 guidelines, and best practices when implementing CISA’s Zero Trust Maturity Model.

With the growing number of devices connected to the internet, traditional security measures are no longer enough to keep your digital assets safe. To protect your organization from digital threats, it’s crucial to establish strong security protocols and take proactive measures to stay vigilant.

What is Zero Trust?

Zero Trust is a cybersecurity philosophy based on the premise that threats can arise internally and externally. With Zero Trust, no user, system, or service should automatically be trusted, regardless of its location within or outside the network. Providing an added layer of security to protect sensitive data and applications, Zero Trust only grants access to authenticated and authorized users and devices. And in the event of a data breach, compartmentalizing access to individual resources limits potential damage.

Your organization should consider Zero Trust as a proactive security strategy to protect its data and assets better.

The pillars of Zero Trust

At its core, Zero Trust is built on a few fundamental principles:

  • Verify explicitly. Only grant access once the user or device has been explicitly authenticated and verified. By doing so, you can ensure that only those with a legitimate need to access your organization’s resources can do so.
  • Least privilege access. Only give users access to the resources they need to do their job and nothing more. Limiting access in this way prevents unauthorized access to your organization’s data and applications.
  • Assume breach. Act as if a compromise to your organization’s security has occurred. Take steps to minimize the damage, including monitoring for unusual activity, limiting access to sensitive data, and ensuring that backups are up-to-date and secure.
  • Microsegmentation. Divide your organization’s network into smaller, more manageable segments and apply security controls to each segment individually. This reduces the risk of a breach spreading from one part of your network to another.
  • Security automation. Use tools and technologies to automate the process of monitoring, detecting, and responding to security threats. This ensures that your organization’s security is always up-to-date and can react quickly to new threats and vulnerabilities.

A Zero Trust approach is a proactive and effective way to protect your organization’s data and assets from cyber-attacks and data breaches. By following these core principles, your organization can minimize the risk of unauthorized access, reduce the impact of a breach, and ensure that your organization’s security is always up-to-date and effective.

The role of NIST 800-207 in Zero Trust

NIST Special Publication 800-207, Zero Trust Architecture, was developed by the National Institute of Standards and Technology. It provides guidelines and best practices for organizations to manage and mitigate cybersecurity risks.

Designed to be flexible and adaptable for a variety of organizations and industries, the framework supports the customization of cybersecurity plans to meet their specific needs. Its implementation can help organizations improve their cybersecurity posture and protect against cyber threats.

One of the most important recommendations of NIST 800-207 is to establish a policy engine, policy administrator, and policy enforcement point. This will help ensure consistent policy enforcement and that access is granted only to those who need it.
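
The sketch below is illustrative only (it is not from NIST 800-207 itself) and shows how those three components are commonly described as dividing the work: the policy engine decides, the policy administrator relays the decision, and the policy enforcement point sits in the data path and allows or blocks the session. The rules and names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user: str
        resource: str
        device_compliant: bool
        mfa_passed: bool

    class PolicyEngine:
        """Makes the grant/deny decision (hypothetical rules for illustration)."""
        def decide(self, req: AccessRequest) -> bool:
            return req.device_compliant and req.mfa_passed

    class PolicyAdministrator:
        """Relays the engine's decision to the enforcement point."""
        def __init__(self, engine: PolicyEngine):
            self.engine = engine
        def authorize(self, req: AccessRequest) -> bool:
            return self.engine.decide(req)

    class PolicyEnforcementPoint:
        """Sits in the data path and enforces the decision per session."""
        def __init__(self, admin: PolicyAdministrator):
            self.admin = admin
        def handle(self, req: AccessRequest) -> str:
            return "session established" if self.admin.authorize(req) else "access denied"

    pep = PolicyEnforcementPoint(PolicyAdministrator(PolicyEngine()))
    print(pep.handle(AccessRequest("alice", "payroll-db", True, True)))   # session established
    print(pep.handle(AccessRequest("bob", "payroll-db", False, True)))    # access denied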

Another critical recommendation is conducting continuous monitoring and having real-time risk-based decision-making capabilities. This can help you quickly identify and respond to potential threats.

Additionally, it is essential to understand and map dependencies among assets and resources. This will help you ensure your security measures are appropriately targeted based on potential vulnerabilities.

Finally, NIST recommends replacing traditional paradigms, such as implicit trust in assets or entities, with a “trust but verify” methodology. Adopting this approach can better protect your organization’s assets and resources from internal and external threats.

CISA’s Zero Trust Maturity Model

The Zero Trust Maturity Model (ZMM), developed by CISA, provides a comprehensive framework for assessing an organization’s Zero Trust posture. This model covers critical areas including:

  • Identity management: To implement a Zero Trust strategy, it is important to begin with identity. This involves continuously verifying, authenticating, and authorizing any entity before granting access to corporate resources. To achieve this, comprehensive visibility is necessary.
  • Devices, networks, applications: To maintain Zero Trust, use endpoint detection and response capabilities to detect threats and keep track of device assets, network connections, application configurations, and vulnerabilities. Continuously assess and score device security posture and implement risk-informed authentication protocols to ensure only trusted devices, networks and applications can access sensitive data and enterprise systems.
  • Data and governance: To maximize security, implement prevention, detection, and response measures for identity, devices, networks, IoT, and cloud. Monitor legacy protocols and device encryption status. Apply Data Loss Prevention and access control policies based on risk profiles.
  • Visibility and analytics: Zero Trust strategies cannot succeed within silos. By collecting data from various sources within an organization, organizations can gain a complete view of all entities and resources. This data can be analyzed through threat intelligence, generating reliable and contextualized alerts. By tracking broader incidents connected to the same root cause, organizations can make informed policy decisions and take appropriate response actions.
  • Automation and orchestration: To effectively automate security responses, it is important to have access to comprehensive data that can inform the orchestration of systems and manage permissions. This includes identifying the types of data being protected and the entities that are accessing it. By doing so, it ensures that there is proper oversight and security throughout the development process of functions, products, and services.

By thoroughly evaluating these areas, your organization can identify potential vulnerabilities in its security measures and take prompt action to improve your overall cybersecurity posture. CISA’s ZMM offers a holistic approach to security that will enable your organization to remain vigilant against potential threats.

Implementing Zero Trust with Trend Vision One

Trend Vision One seamlessly integrates with third-party partner ecosystems and aligns to industry frameworks and best practices, including NIST and CISA, offering coverage from prevention to extended detection and response across all pillars of Zero Trust.

Trend Vision One is an innovative solution that empowers organizations to identify their vulnerabilities, monitor potential threats, and evaluate risks in real-time, enabling them to make informed decisions regarding access control. With its open platform approach, Trend enables seamless integration with third-party partner ecosystems, including IAM, Vulnerability Management, Firewall, BAS, and SIEM/SOAR vendors, providing a comprehensive and unified source of truth for risk assessment within your current security framework. Additionally, Trend Vision One is interoperable with SWG, CASB, and ZTNA and includes Attack Surface Management and XDR, all within a single console.

Conclusion

CISOs today understand that the journey towards achieving Zero Trust is a gradual process that requires careful planning, step-by-step implementation, and a shift in mindset towards proactive security and cyber risk management. By understanding the core principles of Zero Trust and utilizing the guidelines provided by NIST and CISA to operationalize Zero Trust with Trend Vision One, you can ensure that your organization’s cybersecurity measures are strong and can adapt to the constantly changing threat landscape.

To read more thought leadership and research about Zero Trust, click here.

Source :
https://www.trendmicro.com/en_us/research/23/h/industry-zero-trust-frameworks.html

ChatGPT Highlights a Flaw in the Educational System

By: William Malik – Trendmicro
August 14, 2023
Read time: 4 min (1014 words)

Rethinking learning metrics and fostering critical thinking in the era of generative AI and LLMs

I recently participated in a conversation about artificial intelligence, specifically ChatGPT and its kin, with a group of educators in South Africa. They were concerned that the software would help students cheat.

We discussed two possible alternatives to ChatGPT. First, teachers could require that students submit handwritten homework. This would force students to at least read the material once before submitting it. Second, teachers could grade paper submissions no higher than 89 percent (a “B”); to earn an “A,” the student would have to stand in front of the class, verbally discuss the material, their research, and their conclusions, and answer any questions the teacher or other classmates might ask. (With that verbal defense of the ideas, the teacher might even waive the requirement for a paper submission at all!)

The fundamental problem is that the grading system depends on homework. If education aims to teach an individual both a) a body of knowledge and b) the techniques of reasoning with that knowledge, then the metrics used to prove that achievement are misaligned.

One of the most quoted management scientists is Frederick W. Taylor. He is best known for saying, “If you can’t measure it, you can’t manage it.” Interestingly, he never said that, which is fortunate because it is entirely wrong. People manage things without metrics all the time, from driving a car to raising children. What he did say was, “If you measure it, you’ll manage it,” and he intended it as a warning. Whenever you adopt a metric, you will adjust your assessment of the underlying process in terms of your chosen metric. His warning is to be very careful about which metrics you choose.

Sometime in the past forty years, we decided that the purpose of education is to do well on tests. Unfortunately, that is also wrong. The purpose of education is to teach people to gather evidence and to think clearly about it. Students should learn how to judge various forms of evidence. They should understand rhetorical techniques (in the classical sense, how to render ideas clearly). They should be aware of common errors in thinking: the cognitive pitfalls we all fall into when rushed or distracted, and the logical fallacies that rob our arguments of their validity.

Large Language Models (LLMs) aggregate vast troves of text. Those data sources are not curated, so LLMs reflect the biases, logical limitations, and cognitive distortions in so much of what’s online. We are all familiar with early chatbots that were easily corrupted – the Microsoft chatbot Tay was perverted into being a racist resonator. (See “Twitter taught Microsoft’s AI Chatbot to be a Racist A**hole in Less than a Day” from The Verge, March 24, 2016, at https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist accessed Aug 2023.)

LLMs do not think. They scan as much material as possible, then build a set of probabilities about which word is most likely to follow another word. If the word “pterodactyl” occurs in a text, then the next most likely word might be “soaring,” and “flying” might be in second place. If ChatGPT gets the word “pterodactyl” as input, it will put “soaring” next to it. This may look plausible to a person reading the output, but it cannot be correct. Correctness implies some kind of comprehension and judgment. ChatGPT does neither. It merely arranges words based on their statistical likelihood in the LLM’s database. We are now learning that LLMs that ingest computer-generated content become even more skewed – augmenting the likelihood of one word following another by rescanning the previous output. Over time, LLMs fed AI-generated content will drift farther and farther from actual human writing. The oft-mentioned hallucinations that LLMs generate will become more common as the distillation and amplification of the more likely subset of words leads to a contracted pool of possible machine-generated responses. Eventually – if we are not able to prevent LLMs from ingesting already-processed content – the output of ChatGPT will become more and more constrained, which, taken to the extreme, will yield one plot, one answer, one painting, and one outcome regardless of the specific input. Long before then, people will have abandoned LLM-based efforts for any activity that requires creativity.
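
To make the “statistical likelihood” point concrete, here is a deliberately tiny toy sketch (a stand-in for the idea, not how ChatGPT or any real LLM is implemented): it counts which words follow which in a small corpus and always emits the most frequent continuation, with no notion of whether the result is true.

    from collections import Counter, defaultdict

    corpus = ("the pterodactyl was soaring over the cliff "
              "while another pterodactyl was flying low").split()

    # Count how often each word follows each other word (a bigram table).
    follows = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follows[current_word][next_word] += 1

    def most_likely_next(word: str) -> str:
        """Return the statistically most frequent continuation; no comprehension involved."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(most_likely_next("pterodactyl"))  # "was" in this toy corpus: plausible-looking, not understood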

Where can LLMs help? By sorting through bounded sets of information. That means an LLM trained on protein sequences could rapidly develop a most likely model for a protein that could attack a particular disease or interrupt an allergic reaction. In that case, the issue isn’t seeking creativity but rapidly scanning a set of nearly identical data points to find the few that stand out enough to make a difference. A human doing this kind of work would quickly grow bored and likely make errors. LLMs can help science move quickly through vast quantities of data in closed domains. But when looking at an unbounded domain (art, poetry, fiction, movies, music, and the like), LLMs can only build average content, filling in the space between works. Artists seek to reach beyond the space their prior work defined.

The core problem with LLMs may be unsolvable. At this point, various organizations are exploring ways to tag AI-generated content (written and graphic) so humans can spend a moment assessing the accuracy and validity of the material. Of course, message digests can be corrupted and watermarks forged. A bad actor might maliciously tag authentic content as AI-generated. Recent developments include malicious ChatGPT variants designed to create BEC and phishing email content.

Students will always look for a shortcut, and that habit is difficult to overcome. In business, it will also be tempting for bureaucrats to use tools to simplify their tasks. How will your firm incorporate LLMs safely into your business processes? Organizations should consider how they will audit their internal procedures to ensure that LLM outputs are incorporated appropriately into communications. Imagine the potential for harm if some publicly traded company was found to have used an LLM to develop its annual financial report!

What do you think? Let me know in the comments below, or contact me @wjmalik@noc.social

Source :
https://www.trendmicro.com/en_us/research/23/h/chatgpt-flaw.html

OT Security is Less Mature but Progressing Rapidly

By: Kazuhisa Tagaya – Trendmicro
August 14, 2023
Read time: 2 min (638 words)

The latest study said that OT security is less mature in several capabilities than IT security, but most organizations are improving it.

We asked participants whether their organizations’ OT security capabilities are less mature or more mature than IT, with reference to the NIST CSF.

As an average of all items, 39.5% answered that OT has a lower level of maturity (18% answered that OT security is more mature, and 36.4% that it is at the same level).

Categorizing the security capabilities into the five cores of the NIST CSF and aggregating the responses for each core, Detect was the core most often rated as lower maturity in OT security than in IT (42%).

Figure 1: What security capabilities in OT are lower than IT (NIST CSF 5 Core)

Furthermore, looking at the specific security capabilities, “Cyber event detection” had the highest share of respondents rating OT lower than IT (45.7%).

Figure 2: What security capabilities in OT are lower than IT (detail)

The OT environment has more diverse legacy assets and protocol stacks dedicated to ICS/OT, making it difficult to implement sensors to detect malicious behavior or to apply patches to those assets. The inability to implement uniform measures in the same way as IT security is an obstacle to raising the maturity level.

Detection in OT: Endpoint and Network

The survey asked respondents about their Endpoint Detection and Response (EDR) and Network Security Monitoring (NSM) implementations to measure their visibility in their OT environments. They answered whether EDR (including antivirus) was implemented in the following three places.

  • Server assets running commercial OS (Windows, Linux, Unix): 41%
  • Engineering (engineering workstations, instrumentation laptops, calibration and test equipment) assets running commercial OS (Windows, Unix, Linux): 34%
  • Operator assets (HMI, workstations) running commercial OS (Windows, Linux, Unix): 33% 

In addition, 76% of organizations that have already deployed EDR said they plan to expand their deployment within 24 months.

Figure 3: EDR deployment

We also asked whether NSM (including IDS) was implemented at the following levels referring to the Purdue model.

  • Purdue Level 4 (Enterprise): 30%
  • Purdue Level 3.5 (DMZ): 36%
  • Purdue Level 3 (Site or SCADA-wide): 38%
  • Purdue Level 2 (Control): 20%
  • Purdue Levels 1/0 (Sensors and Actuators): 8%

Like EDR, 70% of organizations that have already implemented NSM said they have plans to expand implementation within 24 months.

Figure 4: NSM deployment

In this survey, EDR implementation rates tended to vary depending on the respondent’s industry and the size of the organization. The implementation rate of NSM was relatively high in the DMZ and at Level 3, and decreased in the lower layers. However, it is not appropriate to draw decisive conclusions from the average values in these questions, because where EDR and NSM are implemented varies by organization. The implementation rates shown here are just a rough benchmark. Where and how much to invest depends on the environment and the decision-making of the organization. Asset owners can use these results as a reference for where to implement EDR and NSM and to evaluate their implementation plans.

To learn about how to assess risk in your OT environment to invest appropriately, please refer to our practices of risk assessment in smart factories.

Reference:
Breaking IT/OT Silos with ICS/OT Visibility – 2023 SANS ICS/OT visibility survey

Source :
https://www.trendmicro.com/en_us/research/23/h/ot-security-2023.html

Top 10 AI Security Risks According to OWASP

By: Trend Micro
August 15, 2023
Read time: 4 min (1157 words)

The unveiling of the first-ever Open Worldwide Application Security Project (OWASP) risk list for large language model AI chatbots was yet another sign of generative AI’s rush into the mainstream—and a crucial step toward protecting enterprises from AI-related threats.

For more than 20 years, the Open Worldwide Application Security Project (OWASP) top 10 risk list has been a go-to reference in the fight to make software more secure. So it’s no surprise developers and cybersecurity professionals paid close attention earlier this spring when OWASP published an all-new list focused on large language model AI vulnerabilities.

OWASP’s move is yet more proof of how quickly AI chatbots have swept into the mainstream. Nearly half (48%) of corporate respondents to one survey said that by February 2023 they had already replaced workers with ChatGPT—just three months after its public launch. With many observers expressing concern that AI adoption has rushed ahead without understanding of the risks involved, the OWASP top 10 AI risk list is both timely and essential.

Large language model vulnerabilities at a glance

OWASP has released two draft versions of its AI vulnerability list so far: one in May 2023 and a July 1 update with refined classifications and definitions, examples, scenarios, and links to additional references. The most recent is labeled ‘version 0.5’, and a formal version 1 is reported to be in the works.

We did some analysis and found the vulnerabilities identified by OWASP fall broadly into three categories:

  1. Access risks associated with exploited privileges and unauthorized actions.
  2. Data risks such as data manipulation or loss of services.
  3. Reputational and business risks resulting from bad AI outputs or actions.

In this blog, we take a closer look at the specific risks in each case and offer some suggestions about how to handle them.

1. Access risks

Of the 10 vulnerabilities listed by OWASP, four are specific to access and misuse of privileges: insecure plugins, insecure output handling, permissions issues, and excessive agency.

According to OWASP, any large language model that uses insecure plugins to receive “free-form text” inputs could be exposed to malicious requests, resulting in unwanted behaviors or the execution of unauthorized remote code. On the flip side, plugins or applications that handle large language model outputs insecurely (without evaluating them) could be susceptible to cross-site and server-side request forgeries, unauthorized privilege escalations, hijack attacks, and more.
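
As a hedged illustration of the output-handling point (our own example, not a control prescribed by OWASP), the sketch below treats model output as untrusted input: the hypothetical handler refuses to forward a reply containing markup or shell metacharacters rather than rendering or executing it directly.

    import re

    # Patterns we refuse to forward untreated (a hypothetical, deliberately strict policy).
    FORBIDDEN = re.compile(r"[<>`$;|&]|\b(?:DROP|DELETE|rm\s+-rf)\b", re.IGNORECASE)

    def handle_llm_output(reply: str) -> str:
        """Treat the model's reply like any other untrusted input before using it downstream."""
        if FORBIDDEN.search(reply):
            return "[blocked: model output contained disallowed content]"
        return reply

    print(handle_llm_output("Your ticket has been created."))
    print(handle_llm_output("<script>fetch('https://evil.example')</script>"))  # blocked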

Similarly, when authorizations aren’t tracked between plugins, permissions issues can arise that open the way for indirect prompt injections or malicious plugin usage.

Finally, because AI chatbots are ‘actors’ able to make and implement decisions, it matters how much free rein (i.e., agency) they’re given. As OWASP explains, “When LLMs interface with other systems, unrestricted agency may lead to undesirable operations and actions.” Examples include personal mail reader assistants being exploited to propagate spam or customer service AI chatbots manipulated into issuing undeserved refunds.

In all of these cases, the large language model becomes a conduit for bad actors to infiltrate systems.

2. Data risks

Poisoned training data, supply chain vulnerabilities, prompt injection vulnerabilities, and denials of service are all data-specific AI risks.

Data can be poisoned deliberately by bad actors who want to harm an organization. It can also be distorted inadvertently when an AI system learns from unreliable or unvetted sources. Both types of poisoning can occur within an active AI chatbot application or emerge from the large language model supply chain, where reliance on pre-trained models, crowdsourced data, and insecure plugin extensions may produce biased data outputs, security breaches, or system failures.

With prompt injections, ill-meaning inputs may cause a large language model AI chatbot to expose data that should be kept private or perform other actions that lead to data compromises.

AI denial of service attacks are similar to classic DOS attacks. They may aim to overwhelm a large language model and deprive users of access to data and apps, or—because many AI chatbots rely on pay-as-you-go IT infrastructure—force the system to consume excessive resources and rack up massive costs.
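
One common mitigation (our suggestion, not something mandated by OWASP) is to cap how much work any single caller can force the model to do. The hypothetical sketch below enforces a per-user token budget over a rolling hour before a request is ever forwarded to a pay-as-you-go model.

    import time
    from collections import defaultdict

    class TokenBudget:
        """Per-user budget of model tokens per rolling hour (limits are illustrative)."""
        def __init__(self, max_tokens_per_hour: int = 50_000):
            self.max_tokens = max_tokens_per_hour
            self.usage = defaultdict(list)  # user -> [(timestamp, tokens), ...]

        def allow(self, user: str, requested_tokens: int) -> bool:
            now = time.time()
            # Keep only usage from the last hour.
            self.usage[user] = [(t, n) for t, n in self.usage[user] if now - t < 3600]
            spent = sum(n for _, n in self.usage[user])
            if spent + requested_tokens > self.max_tokens:
                return False  # reject instead of racking up infrastructure costs
            self.usage[user].append((now, requested_tokens))
            return True

    budget = TokenBudget(max_tokens_per_hour=10_000)
    print(budget.allow("tenant-a", 8_000))  # True
    print(budget.allow("tenant-a", 5_000))  # False: would exceed the hourly budget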

3. Reputational and business risks

The final OWASP vulnerability (according to our buckets) is already having consequences around the world today: overreliance on AI. There’s no shortage of stories about large language models generating false or inappropriate outputs, from fabricated citations and legal precedents to racist and sexist language.

OWASP points out that depending on AI chatbots without proper oversight can make organizations vulnerable to publishing misinformation or offensive content that results in reputational damage or even legal action.

Given all these various risks, the question becomes, “What can we do about it?” Fortunately, there are some protective steps organizations can take.

What enterprises can do about large language model vulnerabilities

From our perspective at Trend Micro, defending against AI access risks requires a zero-trust security stance with disciplined separation of systems (sandboxing). Even though generative AI has the ability to challenge zero-trust defenses in ways that other IT systems don’t—because it can mimic trusted entities—a zero-trust posture still adds checks and balances that make it easier to identify and contain unwanted activity. OWASP also advises that large language models “should not self-police” and calls for controls to be embedded in application programming interfaces (APIs).

Sandboxing is also key to protecting data privacy and integrity: keeping confidential information fully separated from shareable data and making it inaccessible to AI chatbots and other public-facing systems. (See our recent blog on AI cybersecurity policies for more.)

Good separation of data prevents large language models from including private or personally identifiable information in public outputs, and from being publicly prompted to interact with secure applications such as payment systems in inappropriate ways.
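
A minimal sketch of that separation, assuming a hypothetical redact-before-prompt step (the patterns and labels are illustrative, not a complete PII catalogue; a real deployment would use a vetted DLP capability):

    import re

    # Illustrative patterns only.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Strip recognizable PII before the text is ever sent to a public-facing chatbot."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    prompt = "Summarize this note: contact jane.doe@example.com, SSN 123-45-6789."
    print(redact(prompt))
    # Summarize this note: contact [email removed], SSN [ssn removed].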

On the reputational front, the simplest remedies are to not rely solely on AI-generated content or code, and to never publish or use AI outputs without first verifying they are true, accurate, and reliable.

Many of these defensive measures can—and should—be embedded in corporate policies. Once an appropriate policy foundation is in place, security technologies such as endpoint detection and response (EDR), extended detection and response (XDR), and security information and event management (SIEM) can be used for enforcement and to monitor for potentially harmful activity.

Large language model AI chatbots are here to stay

OWASP’s initial work cataloguing AI risks proves that concerns about the rush to embrace AI are well justified. At the same time, AI clearly isn’t going anywhere, so understanding the risks and taking responsible steps to mitigate them is critically important.

Setting up the right policies to manage AI use and implementing those policies with the help of cybersecurity solutions is a good first step. So is staying informed. The way we see it at Trend Micro, OWASP’s top 10 AI risk list is bound to become as much of an annual must-read as its original application security list has been since 2003.

Next steps

For more Trend Micro thought leadership on AI chatbot security, check out these resources:

Source :
https://www.trendmicro.com/en_us/research/23/h/top-ai-risks.html

An Overview of the New Rhysida Ransomware Targeting the Healthcare Sector

By: Trend Micro Research
August 09, 2023
Read time: 7 min (1966 words)

Updated on August 9, 2023, 9:30 a.m. EDT: We updated the entry to include an analysis of current Rhysida ransomware samples’ encryption routine.  
Updated on August 14, 2023, 6:00 a.m. EDT: We updated the entry to include Trend XDR workbench alerts for Rhysida and its components.

Introduction

On August 4, 2023, the HHS’ Health Sector Cybersecurity Coordination Center (HC3) released a security alert about a relatively new ransomware called Rhysida (detected as Ransom.PS1.RHYSIDA.SM), which has been active since May 2023. In this blog entry, we will provide details on Rhysida, including its targets and what we know about its infection chain.

Who is behind the Rhysida ransomware?

Not much is currently known about the threat actors behind Rhysida in terms of origin or affiliations. According to the HC3 alert, Rhysida presents itself as a “cybersecurity team” that offers to assist victims in finding security weaknesses within their networks and systems. In fact, the group’s first appearance involved the use of a victim chat support portal.

Who are Rhysida’s targets?

As mentioned earlier, Rhysida, which was previously known for targeting the education, government, manufacturing, and tech industries, among others, has begun conducting attacks on healthcare and public health organizations. The healthcare industry has seen an increasing number of ransomware attacks over the past five years. This includes a recent incident involving Prospect Medical Holdings, a California-based healthcare system, that occurred in early August (although the group behind the attack has yet to be named as of writing).

Data from Trend Micro™ Smart Protection Network™ (SPN) shows a similar trend, where detections from May to August 2023 show that its operators are targeting multiple industries rather than focusing on just a single sector.

The threat actor also targets organizations around the world, with SPN data showing several countries where Rhysida binaries were detected, including Indonesia, Germany, and the United States.

Figure 1. The industry and country detection count for Rhysida ransomware based on Trend SPN data from May to August 2023

How does a Rhysida attack proceed?

Figure 2. The Rhysida ransomware infection chain

Rhysida ransomware usually arrives on a victim’s machine via phishing lures, after which Cobalt Strike is used for lateral movement within the system.

Additionally, our telemetry shows that the threat actors execute PsExec to deploy PowerShell scripts and the Rhysida ransomware payload itself. The PowerShell script (g.ps1), detected as Trojan.PS1.SILENTKILL.A, is used by the threat actors to terminate antivirus-related processes and services, delete shadow copies, modify remote desktop protocol (RDP) configurations, and change the active directory (AD) password.

Interestingly, it appears that the script (g.ps1) was updated by the threat actors during execution, eventually leading us to a PowerShell version of the Rhysida ransomware.

Rhysida ransomware employs a 4096-bit RSA key and AES-CTR for file encryption, which we discuss in detail in a succeeding section. After successful encryption, it appends the .rhysida extension and drops the ransom note CriticalBreachDetected.pdf.

This ransom note is fairly unusual — instead of an outright ransom demand as seen in most ransom notes from other ransomware families, the Rhysida ransom note is presented as an alert from the Rhysida “cybersecurity team” notifying victims that their system has been compromised and their files encrypted. The ransom demand comes in the form of a “unique key” designed to restore encrypted files, which must be paid for by the victim.

Summary of malware and tools used by Rhysida

  • Malware: RHYSIDA, SILENTKILL, Cobalt Strike
  • Tools: PsExec
Tactic | Malware/Tool | Details
Initial Access | Phishing | Based on external reports, Rhysida uses phishing lures for initial access
Lateral Movement | PsExec | Microsoft tool used for remote execution
Lateral Movement | Cobalt Strike | Third-party tool abused for lateral movement
Defense Evasion | SILENTKILL | Malware deployed to terminate security-related processes and services, delete shadow copies, modify RDP configurations, and change the AD password
Impact | Rhysida ransomware | Ransomware encryption

Table 1. A summary of the malware, tools, and exploits used by Rhysida

A closer look at Rhysida’s encryption routine

After analyzing current Rhysida samples, we observed that the ransomware uses LibTomCrypt, an open-source cryptographic library, to implement its encryption routine. Figure 3 shows the procedures Rhysida follows when initializing its encryption parameters.

Figure 3. Rhysida’s parameters for encryption

Rhysida uses LibTomCrypt’s pseudorandom number generator (PRNG) functionalities for key and initialization vector (IV) generation. The init_prng function is used to initialize PRNG functionalities as shown in Figure 4. The same screenshot also shows how the ransomware uses the library’s ChaCha20 PRNG functionality.

Figure 4. Rhysida’s use of the “init_prng” function

After the PRNG is initialized, Rhysida then proceeds to import the embedded RSA key and declares the encryption algorithm it will use for file encryption:

  • It will use the register_cipher function to “register” the algorithm (in this case, aes) to its table of usable ciphers.
  • It will use the find_cipher function to store the algorithm to be used (still aes) in the variable CIPHER.

Afterward, it will proceed to also register and declare aes for its Cipher Hash Construction (CHC) functionalities. 

Based on our analysis, Rhysida’s encryption routine follows these steps:

  1. After it reads file contents for encryption, it will use the initialized PRNG’s function, chacha20_prng_read, to generate both a key and an IV that are unique for each file.
  2. It will use the ctr_start function to initialize the cipher that will be used, which is aes (from the variable CIPHER), in counter or CTR mode.
  3. The generated key and IV are then encrypted with the rsa_encrypt_key_ex function.
  4. Once the key and IV are encrypted, Rhysida will proceed to encrypt the file using LibTomCrypt’s ctr_encrypt function.

Figure 5. Rhysida’s encryption routine

Unfortunately, since each encrypted file has a unique key and IV — and only the attackers have a copy of the associated private key — decryption is currently not feasible.

How can organizations protect themselves from Rhysida and other ransomware families?

Although we are still in the process of fully analyzing the Rhysida ransomware and its tactics, techniques, and procedures (TTPs), the best practices for defending against ransomware attacks still hold true for Rhysida and other ransomware families.

Here are several recommended measures that organizations can implement to safeguard their systems from ransomware attacks:

  • Create an inventory of assets and data
  • Review event and incident logs
  • Manage hardware and software configurations
  • Grant administrative privileges and access only when relevant to an employee’s role and responsibilities
  • Enforce security configurations on network infrastructure devices like firewalls and routers
  • Establish a software whitelist permitting only legitimate applications
  • Perform routine vulnerability assessments
  • Apply patches or virtual patches for operating systems and applications
  • Keep software and applications up to date using their latest versions
  • Integrate data protection, backup, and recovery protocols
  • Enable multifactor authentication (MFA) mechanisms
  • Utilize sandbox analysis to intercept malicious emails
  • Regularly educate and evaluate employees’ security aptitude
  • Deploy security tools (such as XDR) that are capable of detecting abuse of legitimate applications

Indicators of compromise

Hashes

The indicators of compromise for this entry can be found here.

MITRE ATT&CK Matrix

Tactic | Technique | Details
Initial Access | T1566 Phishing | Based on external reports, Rhysida uses phishing lures for initial access.
Execution | T1059.003 Command and Scripting Interpreter: Windows Command Shell | It uses cmd.exe to execute commands.
Execution | T1059.001 Command and Scripting Interpreter: PowerShell | It uses PowerShell to create a scheduled task named Rhsd pointing to the ransomware.
Persistence | T1053.005 Scheduled Task/Job: Scheduled Task | When executed with the argument -S, it creates a scheduled task named Rhsd that executes the ransomware.
Defense Evasion | T1070.004 Indicator Removal: File Deletion | Rhysida ransomware deletes itself after execution. The scheduled task (Rhsd) it creates is also deleted after execution.
Defense Evasion | T1070.001 Indicator Removal: Clear Windows Event Logs | It uses wevtutil.exe to clear Windows event logs.
Discovery | T1083 File and Directory Discovery | It enumerates and looks for files to encrypt in all local drives.
Discovery | T1082 System Information Discovery | It obtains the number of processors and system information.
Impact | T1490 Inhibit System Recovery | It uses vssadmin to remove volume shadow copies.
Impact | T1486 Data Encrypted for Impact | It uses a 4096-bit RSA key and ChaCha20 for file encryption. It avoids encrypting files with the following strings in their file names: .bat, .bin, .cab, .cmd, .com, .cur, .diagcab, .diagcfg, .diagpkg, .drv, .dll, .exe, .hlp, .hta, .ico, .msi, .ocx, .ps1, .psm1, .scr, .sys, .ini, Thumbs.db, .url, .iso. It avoids encrypting files found in the following folders: $Recycle.Bin, Boot, Documents and Settings, PerfLogs, ProgramData, Recovery, System Volume Information, Windows, $RECYCLE.BIN, ApzData. It appends the .rhysida extension to the file names of encrypted files, encrypts all system drives from A to Z, and drops the following ransom note: {Encrypted Directory}\CriticalBreachDetected.pdf.
Impact | T1491.001 Defacement: Internal Defacement | It changes the desktop wallpaper after encryption and prevents the user from changing it back by modifying the NoChangingWallpaper registry value.

Trend Micro Solutions

Trend solutions such as Apex One, Deep Security, Cloud One Workload Security, Worry-Free Business Security, Deep Discovery Web Inspector, Titanium Internet Security, and Cloud Edge can help protect against attacks employed by the Rhysida ransomware.

The following solutions protect Trend customers from Rhysida attacks:

Trend Micro solutions | Detection Patterns / Policies / Rules
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Titanium Internet Security, Trend Micro Cloud One Workload Security, Trend Micro Worry-Free Business Security Services | Ransom.Win64.RHYSIDA.SM, Ransom.Win64.RHYSIDA.THEBBBC, Ransom.Win64.RHYSIDA.THFOHBC, Trojan.PS1.SILENTKILL.SMAJC, Trojan.PS1.SILENTKILL.A
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Worry-Free Business Security Services, Trend Micro Titanium Internet Security | RAN4056T, RAN4052T
Trend Micro Apex One, Trend Micro Deep Discovery Web Inspector | DDI Rule ID: 597 – “PsExec tool detected”; DDI Rule ID: 1847 – “PsExec tool detected – Class 2”; DDI Rule ID: 4524 – “Possible Renamed PSEXEC Service – SMB2 (Request)”; DDI Rule ID: 4466 – “PsExec Clones – SMB2 (Request)”; DDI Rule ID: 4571 – “Possible Suspicious Named Pipe – SMB2 (REQUEST)”; DDI Rule ID: 4570 – “COBALTSTRIKE – DNS(RESPONSE)”; DDI Rule ID: 4152 – “COBALTSTRIKE – HTTP (Response)”; DDI Rule ID: 4469 – “APT – COBALTSRIKE – HTTP (RESPONSE)”; DDI Rule ID: 4594 – “COBALTSTRIKE – HTTP(REQUEST) – Variant 3”; DDI Rule ID: 4153 – “COBALTSTRIKE – HTTP (Request) – Variant 2”; DDI Rule ID: 2341 – “COBALTSTRIKE – HTTP (Request)”; DDI Rule ID: 4390 – “CobaltStrike – HTTPS (Request)”; DDI Rule ID: 4870 – “COBEACON DEFAULT NAMED PIPE – SMB2 (Request)”; DDI Rule ID: 4861 – “COBEACON – DNS (Response) – Variant 3”; DDI Rule ID: 4860 – “COBEACON – DNS (Response) – Variant 2”; DDI Rule ID: 4391 – “COBEACON – DNS (Response)”
Trend Micro Apex One, Trend Micro Deep Security, Trend Micro Worry-Free Business Security Services, Trend Micro Titanium Internet Security, Trend Micro Cloud Edge | Troj.Win32.TRX.XXPE50FFF071

Trend Micro XDR uses the following workbench alerts to protect customers from Rhysida-related attacks:

Cobalt Strike

Workbench Alert | ID
Anomalous Regsvr32 Execution Leading to Cobalt Strike | 63758d9f-4405-4ec5-b421-64aef7c85dca
COBALT C2 Connection | afd1fa1f-b8fc-4979-8bf7-136db80aa264
Early Indicator of Attack via Cobalt Strike | 0ddda3c1-dd25-4975-a4ab-b1fa9065568d
Lateral Movement of Cobalt Strike Beacon | 5c7cdb1d-c9fb-4b1d-b71f-9a916b10b513
Possible Cobalt Strike Beacon | 45ca58cc-671b-42ab-a388-d972ff571d68
Possible Cobalt Strike Beacon Active Directory Database Dumping | 1f103cab-9517-455d-ad08-70eaa05b8f8d
Possible Cobalt Strike Connection | 85c752b8-93c2-4450-81eb-52ec6161088e
Possible Cobalt Strike Privilege Escalation Behavior | 2c997bac-4fc0-43b4-8279-6f2e7cf723ae
Possible Fileless Cobalt Strike | cf1051ba-5360-4226-8ffb-955fe849db53

PsExec

Workbench Alert | ID
Possible Credential Access via PSEXESVC Command Execution | 0b870a13-e371-4bad-9221-be7ad98f16d7
Possible Powershell Process Injection via PSEXEC | 7fe83eb8-f40f-43be-8edd-f6cbc1399ac0
Possible Remote Ransomware Execution via PsExec | 47fbd8f3-9fb5-4595-9582-eb82566ead7a
PSEXEC Execution By Process | e011b6b9-bdef-47b7-b823-c29492cab414
Remote Execution of Windows Command Shell via PsExec | b21f4b3e-c692-4eaf-bee0-ece272b69ed0
Suspicious Execution of PowerShell Parameters and PSEXEC | 26371284-526b-4028-810d-9ac71aad2536
Suspicious Mimikatz Credential Dumping via PsExec | 8004d0ac-ea48-40dd-aabf-f96c24906acf

SILENTKILL

Workbench Alert | ID
Possible Disabling of Antivirus Software | 64a633e4-e1e3-443a-8a56-7574c022d23f
Suspicious Deletion of Volume Shadow Copy | 5707562c-e4bf-4714-90b8-becd19bce8e5

Rhysida

Workbench Alert | ID
Ransom Note Detection (Real-time Scan) | 16423703-6226-4564-91f2-3c03f2409843
Ransomware Behavior Detection | 6afc8c15-a075-4412-98c1-bb2b25d6e05e
Ransomware Detection (Real-time Scan) | 2c5e7584-b88e-4bed-b80c-dfb7ede8626d
Scheduled Task Creation via Command Line | 05989746-dc16-4589-8261-6b604cd2e186
System-Defined Event Logs Clearing via Wevtutil | 639bd61d-8aee-4538-bc37-c630dd63d80f

Trend Micro Vision One hunting query

Trend Vision One customers can use the following hunting query to search for Rhysida within their system:

processCmd:”powershell.exe*\\*$\?.ps1″ OR (objectFilePath:”?:*\\??\\psexec.exe” AND processCmd:”*cmd.exe*\\??\\??.bat”)

Source :
https://www.trendmicro.com/en_us/research/23/h/an-overview-of-the-new-rhysida-ransomware.html

What do the Allow, Deny & Discard actions do on a SonicWall Access Rule?

Last Update : 07/25/2022

Description

This article explains the three Actions available on an access rule.

Resolution

Firewall rules, in general, are based on the concept of Implicit Deny. Implicit Deny means that the default answer to whether a communication is allowed to transit the firewall is always No, or Deny. Therefore, the majority of Access Rules tend to be Allow rules. A firewall processes a communication, inbound or outbound, from the highest-priority rule to the lowest. Once a rule is found whose conditions match, that rule is executed by the firewall. Allow, Deny, and Discard are the actions that the firewall takes for any communication that meets the conditions of a particular Access Rule. Should a communication come into the firewall and no Access Rule meet the conditions to allow it through, the firewall will drop the communication.
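
To illustrate the first-match, implicit-deny behavior described above (a conceptual sketch in Python, not SonicOS code; the zones, services, and rule names are hypothetical), the following walks an ordered rule list from highest priority to lowest and falls back to dropping traffic that matches nothing:

    from dataclasses import dataclass

    @dataclass
    class Rule:
        name: str
        src_zone: str
        dst_zone: str
        service: str   # e.g. "HTTPS" or "ANY"
        action: str    # "ALLOW", "DENY" (drop + TCP RST), or "DISCARD" (silent drop)

    RULES = [  # ordered from highest priority to lowest
        Rule("Allow LAN to WAN web", "LAN", "WAN", "HTTPS", "ALLOW"),
        Rule("Block guest to LAN", "GUEST", "LAN", "ANY", "DISCARD"),
    ]

    def evaluate(src_zone: str, dst_zone: str, service: str) -> str:
        for rule in RULES:
            if (rule.src_zone == src_zone and rule.dst_zone == dst_zone
                    and rule.service in ("ANY", service)):
                return rule.action   # the first matching rule wins
        return "DISCARD"             # implicit deny: unmatched traffic is dropped

    print(evaluate("LAN", "WAN", "HTTPS"))   # ALLOW
    print(evaluate("GUEST", "LAN", "SSH"))   # DISCARD
    print(evaluate("WAN", "LAN", "HTTPS"))   # DISCARD (no rule matched)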

Gen7 Add access rule dialog box

Allow – This means that the firewall will permit the communication to continue through the firewall to its destination.

 NOTE: When creating a new access rule, the default Action on your firewall is set to Allow. 

Gen6 Add access rule dialog box

Deny – This means that when a communication is found to match the conditions of an Access Rule with the Deny action, the communication will not be permitted to proceed. The communication is dropped by the firewall, an RST (reset) packet is sent back to the originating device, and the communication is ended. The RST packet is a message that goes back to the originator of the traffic stating that the connection has been closed. Under most circumstances, you should not have to write a Deny rule, as deny is the default behavior described above.

 NOTE: Be advised that the RST packet is a normal part of network communications and is not unique to the SonicWall.

Discard – This option is much like Deny in that it stops and drops the communication. In this instance, however, the firewall does not send an RST packet as described in the Deny action above. Because no RST packet goes back, the originator has no confirmation that there is any device responding at the IP address it is trying to reach. Even if the originator suspects that a security function is stopping it, it still cannot know for sure. This is essentially Stealth Mode applied at the Access Rule level.
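The practical difference between Deny and Discard is easy to observe from the client side: Deny produces an immediate connection refusal (the RST), while Discard leaves the client waiting until its own timeout expires. A minimal sketch, assuming a TCP service and a host/port you are authorized to probe (the address below is a placeholder from the TEST-NET range):

import socket

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    # Classify what a blocked TCP connection looks like from the client side.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "allowed (connection established)"
    except ConnectionRefusedError:
        return "likely Deny: an RST came back immediately"
    except socket.timeout:
        return "likely Discard (or no device): no response before the timeout"
    except OSError as exc:
        return f"other network error: {exc}"

# Example: probe a host/port you control and are permitted to test.
print(probe("192.0.2.10", 443))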


Source :
https://www.sonicwall.com/support/knowledge-base/what-does-the-allow-deny-discard-do-on-an-access-rule/220725123655973/

Accessing Safemode when the SonicWall firewall is not reachable via CLI or GUI

Last Update : 05/09/2023

Description

This article describes how to put a SonicWall into safe mode through the GUI or through the command line interface (CLI).

You may need to follow this article in the following situations:

  • The firewall is no longer accessible due to configuration issues or other causes.
  • Performing a firmware upgrade when it fails via normal means.
  • Performing a ROM/Safemode version upgrade.
  • Viewing the boot logs or other diagnostic information.

 NOTE: A factory reset via safe mode is a required step when the device powers on but is not reachable. A backup of the settings will be required to restore the configuration after the factory reset; otherwise the firewall has to be reconfigured from scratch.

Resolution

ACCESSING SAFEMODE WHEN FIREWALL IS NOT REACHABLE VIA CLI/UI:

  1. Using a paperclip or similarly sized object, press and hold down the RST button located in the small hole on the front or back of the device (depending on the appliance) for at least 60 seconds. Once the Test light on the device becomes solid or begins to blink, the SonicWall is in safe mode.

     NOTE: On an NSsp 13700 or NSa Series appliance, press the button, but you do not need to hold it down.
  2. Connect a computer directly to the appropriate interface for your SonicWall model (see the table below) via an Ethernet cable.
    1. Manually assign a static IP address, subnet mask and gateway (the gateway will be the Safemode firewall IP) on the connected computer's NIC, depending on the SonicWall appliance.
    2. Open a browser on the client connected to the firewall and go to: http://<Safemode_Firewall_IP> (a scripted reachability check is sketched after this list).

      Generation/Model | Interface to be used while in Safemode | Safemode Firewall IP | Recommended IP to be set on client
      Generation 5 | X0 | 192.168.168.168 | 192.168.168.10 / 255.255.255.0
      Generation 6 & 7 (SOHO & TZ devices) | X0 | 192.168.168.168 | 192.168.168.10 / 255.255.255.0
      Generation 6 & 7 (NSa/SM/NSsp devices) | MGMT interface | 192.168.1.254 | 192.168.1.10 / 255.255.255.0

      CAUTION: Safemode is only available via HTTP, so you have to manually type http://; otherwise the browser will automatically take you to https://.

       NOTE: For new safe mode options on Gen7, please refer to: Safemode options on SonicWall Gen 7 devices
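As referenced in step 2, a quick way to confirm the appliance is actually serving the safe mode page is to request it over plain HTTP from the statically configured client. This is a minimal sketch; the address below assumes the MGMT-interface case from the table, so substitute 192.168.168.168 for X0-based models.

import urllib.request

# Safemode firewall IP from the table above: 192.168.1.254 for the MGMT
# interface on NSa/SM/NSsp devices, 192.168.168.168 for X0-based models.
SAFEMODE_URL = "http://192.168.1.254/"

try:
    # Plain HTTP on purpose: safe mode is not served over HTTPS.
    with urllib.request.urlopen(SAFEMODE_URL, timeout=5) as response:
        print("Safe mode UI reachable, HTTP status:", response.status)
except OSError as exc:
    print("Safe mode UI not reachable yet:", exc)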

ACCESSING SAFEMODE VIA CLI

 NOTE: There is an E-CLI command safemode that restarts the firewall in SafeMode for Generation 7 (NSsp 13700 or NSa).

  1. If you're unfamiliar with how to access the SonicWall management interface using the CLI, please reference How to login to the appliance using the Command Line Interface (CLI).
  2. Once logged into the CLI, input the following commands.

    Safemode
    yes
  3. The SonicWall will reboot and enter safe mode.
  4. Reference the steps above to login to the safe mode GUI, beginning with “Connect a computer directly to the following Interface…”

Below you can find some additional information about what you can do in SafeMode:

Reset your firewall to Factory Default

  1. Select Current Firmware with Factory Default Settings and confirm.
  2. Your firewall will restart to factory default.
  3. After the reboot, login to the SonicWall management GUI via X0 Interface on the default firewall IP (192.168.168.168).
     NOTE: Make sure to modify the NIC Settings of the client connected to X0 to match the new firewall default settings (Gateway: 192.168.168.168 and NetMask: 255.255.255.0).


Upgrading the Gen 6 Firmware or ROM Version from Safe Mode

  1. Download the desired firmware version from MySonicWall.com or have the desired ROM Version on hand. ROM Packs are only available via SonicWall technical support.
     NOTE: Upgrading the ROM version only applies to Generation 6 NSA SonicWalls – 2600, 3600, 4600, 5600, and 6600. Unless you have been requested to upgrade the ROM version by SonicWall technical support do not attempt to do so.  
  2. Select Upload New Firmware and follow the prompt in the pop-up window to upload the firmware or ROM version to the SonicWall.
  3. You should now see the New Firmware or Uploaded ROM Pack on the safe mode GUI. You can boot to the new firmware or ROM by clicking the boot icon on the far right.
     NOTE: Booting to a new firmware or ROM version will reboot the SonicWall and exit safe mode. Make sure you’re completely finished with the SonicWall’s safe mode before selecting boot. 
  4. After the reboot, login to the SonicWall management GUI as you normally would. Navigate to Monitor | Current Status | System Status.
  5. On the Status screen you should see the new firmware version listed under Firmware Version or the new ROM version listed under Safemode Version.

Gen 7 (Using SafeMode to Upgrade Firmware):

  1. Once you enter the URL in the web browser to get to the safe mode page on SonicWall Gen 7 devices, you need to authenticate using the Maintenance Key.
  2. In the Maintenance Key prompt, type in or paste the key you got from MySonicWall and then click Authenticate. If your appliance is running SonicOS 7.0.1 and is not yet registered, use its Auth Code as the key. (To find the Maintenance key, please refer to: Safemode options on SonicWall Gen 7 devices)

  3. The safe mode page is displayed.

  4. Click Upload Image, and then browse to the location where you saved the SonicOS firmware image, select the file, and click Upload.
  5. Click the Boot button in the row for Available Image Version and select one of the following:
    1. Boot Available Image with Current Configuration: Use this option to restart the appliance with your current configuration settings.
    2. Boot Available Image with Factory Default Configuration: Use this option to restart the appliance with factory default configuration settings. The configuration settings revert to default values, but logs and local backups remain in place.
    3. Boot Available Image with Backup Configuration: Use this option to restart the appliance with saved backup configuration settings. You can choose which backup to use. 

  6. In the confirmation dialog, click Boot to proceed.
  7. Wait while the firmware is installed and booted.
  8. Login to the SonicWall management GUI as you normally would.


Source :
https://www.sonicwall.com/support/knowledge-base/accessing-safemode-when-firewall-is-not-reachable-via-cli-or-gui/170507123738054/

How can I access the SonicWall Management Interface?

Last Update: 03/13/2023

Description

The SonicWall UTM appliance has a web-based graphical user interface for configuring the security appliance. This is the primary means of configuring the device.

Resolution

By default, all the interfaces (ports like WAN, OPT or X1, X2) are unconfigured except the LAN or X0 interface. The LAN or X0 interface is pre-configured with an IP address of 192.168.168.168 and a subnet mask of 255.255.255.0.

You could also determine the LAN or X0 interface IP address by using the Setup Tool (Windows SetupTool – https://software.sonicwall.com/UtilityTools/SetupTool.exe)

Your UTM appliance package will contain, among other things, an Ethernet cable. Connect one end of the cable to the LAN or X0 interface of the SonicWall and the other end to a computer. Make sure the LED alongside LAN or X0 is lit solid.

As the UTM appliance is not pre-configured with DHCP, the computer connected to it must be configured with a static IP address. Set the computer's IP address in the same subnet as the SonicWall LAN or X0 interface.

 EXAMPLE: 192.168.168.2 with a subnet mask of 255.255.255.0.
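If you want to double-check that the address you picked really falls in the same subnet as the X0 interface, the arithmetic can be done with Python's standard ipaddress module (a small illustration, not part of the original article):

import ipaddress

# X0 default address and mask from above.
lan_network = ipaddress.ip_network("192.168.168.168/255.255.255.0", strict=False)

# Candidate static address for the management computer.
client_ip = ipaddress.ip_address("192.168.168.2")

print(lan_network)               # 192.168.168.0/24
print(client_ip in lan_network)  # True -> same subnet as the X0 interface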

Open an Internet browser and enter 192.168.168.168 in the address bar.

As this is the first time you are accessing the SonicWall UTM management interface, you will be presented with a wizard. You can follow the wizard to set a new admin password and other information, or skip the wizard and log in directly to the interface by clicking the click here link in the wizard prompt.

Quick Configuration for Gen6 Appliances with SonicOS 6.5 & above.

When attempting to log in directly, you will be prompted for a username and password. By default the username is admin and the password is password. Once successfully logged in, you can change the password under Manage | Appliance | Base Settings | Administrator Name & Password.

Further configuration of the device can be done either manually, by navigating the tabs on the left-hand side of the interface, or by using the wizard. The wizard can be accessed by clicking on the Wizards icon at the top of the interface.

TROUBLESHOOTING
  • Make sure there is physical connectivity between the computer and the SonicWall.
  • It is always recommended to connect the computer directly to the SonicWall instead of through a switch or hub.
  • The LAN or X0 interface LED should be lit solid. If the computer is a PC, the Network Connection Status should show connected.
  • Although the SonicWall is Auto-MDIX capable, try a crossover cable.
     TIP: If a physical connection has been established but the user is unable to access the management interface, try pinging the IP address 192.168.168.168 from the computer.
    If the ping test passes but the user is still unable to open the interface page in the browser, try the following (a scripted version of this check is sketched after these steps):
  1.  Reboot the SonicWall.
  2.  Clear the browser cache.
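For reference, the ping-then-browser check from the TIP above can be scripted. This is a rough sketch, not a SonicWall tool: it shells out to the operating system's ping utility and then tries a plain HTTP request to the default X0 address.

import platform
import subprocess
import urllib.request

FIREWALL_IP = "192.168.168.168"

# One echo request via the OS ping utility ("-n" on Windows, "-c" elsewhere).
count_flag = "-n" if platform.system() == "Windows" else "-c"
ping_ok = subprocess.run(
    ["ping", count_flag, "1", FIREWALL_IP],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
).returncode == 0
print("Ping reachable:", ping_ok)

if ping_ok:
    try:
        with urllib.request.urlopen(f"http://{FIREWALL_IP}/", timeout=5) as resp:
            print("Management UI responded with HTTP status:", resp.status)
    except OSError as exc:
        print("Ping works but the web UI did not respond:", exc)
        print("Try rebooting the SonicWall and clearing the browser cache.")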


Source :
https://www.sonicwall.com/support/knowledge-base/how-can-i-access-the-sonicwall-management-interface/170503695604558/
