How GRC protects the value of organizations — A simple guide to data quality and integrity

Contemporary organizations understand the importance of data and its impact on improving interactions with customers, offering quality products or services, and building loyalty.

Data is fundamental to business success. It allows companies to make the right decisions at the right time and deliver the high-quality, personalized products and services that customers expect.

There is a challenge, though.

Businesses are collecting more data than ever before, and new technologies have accelerated this process dramatically. As a result, organizations have significant volumes of data, making it hard to manage, protect, and get value from it.

Here is where Governance, Risk, and Compliance (GRC) comes in. GRC enables companies to define and implement the best practices, procedures, and governance to ensure the data is clean, safe, and reliable across the board.

More importantly, organizations can use GRC platforms like StandardFusion to create an organizational culture around security. The objective is to encourage everyone to understand how their actions affect the business’s success.

Now, the big question is:

Are organizations getting value from their data?

To answer that, first, it’s important to understand the following two concepts.

Data quality

Data quality represents how reliably the information serves an organization’s specific needs — mainly supporting decision-making.

Some of these needs might be:

  • Operations – Where and how can we be more efficient?
  • Resource distribution – Do we have any excess? Where? And why?
  • Planning – How likely is this scenario to occur? What can we do about it?
  • Management – What methods are working? What processes need improvement?

From a GRC standpoint, companies can achieve data quality by creating rules and policies so the entire organization can use that data in the same ways. These policies could, for example, define how to label, transfer, process, and maintain information.

Data Integrity

Data integrity focuses on the trustworthiness of the information in terms of its physical and logical validity. Some of the key characteristics to ensure the usability of data are:

  • Consistency
  • Accuracy
  • Validity
  • Truthfulness

GRC’s goal for data integrity is to keep the information reliable by eliminating unwanted changes between updates or modifications. It is all about the data’s accuracy, availability, and trust.

How GRC empowers organizations to achieve high-quality data

Organizations that want to leverage their data to generate value must ensure the information they collect is helpful and truthful. The following are the key characteristics of high-quality data:

  • Completeness: The expected data to make decisions is present.
  • Uniqueness: There is no duplication of data.
  • Timeliness: The data is up-to-date and available to use when needed.
  • Validity: The information has the proper format and matches the requirements.
  • Accuracy: The data describes the object correctly in a real-world context.
  • Consistency: The data is the same across multiple databases.
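
To make these characteristics concrete, here is a minimal sketch of how a team might automate a few of the checks in Python with pandas. The column names, the email rule, and the 90-day freshness window are hypothetical illustrations, not part of any specific GRC policy or platform.

```python
# Minimal sketch: automating a few of the data quality checks described above.
# Column names ("customer_id", "email", "updated_at") and thresholds are
# hypothetical examples chosen for illustration only.
import pandas as pd


def data_quality_report(df: pd.DataFrame) -> dict:
    """Return simple scores for completeness, uniqueness, validity, and timeliness."""
    return {
        # Completeness: share of non-missing values across the whole table
        "completeness": float(1 - df.isna().mean().mean()),
        # Uniqueness: share of rows not duplicated on the key column
        "uniqueness": float(1 - df.duplicated(subset=["customer_id"]).mean()),
        # Validity: share of email values matching a basic format rule
        "validity": float(
            df["email"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$").mean()
        ),
        # Timeliness: share of records updated within the last 90 days
        "timeliness": float(
            (
                pd.Timestamp.now(tz="UTC") - pd.to_datetime(df["updated_at"], utc=True)
                < pd.Timedelta(days=90)
            ).mean()
        ),
    }


if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@example.com", "not-an-email", None, "d@example.com"],
        "updated_at": ["2022-09-01", "2021-01-15", "2022-08-20", "2022-09-10"],
    })
    print(data_quality_report(sample))
```

In practice, checks like these would run inside whatever pipeline or tooling the organization already uses; the point is that each characteristic can be expressed as a measurable rule that a GRC policy can reference.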

A powerful way to make sure the company’s data maintains these six characteristics is by leveraging the power of GRC.

Why?

Because GRC empowers organizations to set standards, regulations, and security controls to avoid mistakes, standardize tasks and guide personnel when collecting and dealing with vital information.

GRC helps organizations answer the following questions:

  • How is the company ensuring that data is available for internal decisions and for clients?
  • Is everyone taking the proper steps to collect and process data?
  • Have redundancies been removed?
  • Is the organization prepared for unexpected events?
  • Does the organization have a backup system?
  • Are the key processes standardized?

Overall, GRC aims to build shared attitudes and actions towards security.

Why every organization needs high-quality data and how GRC helps

Unless the data companies collect is high-quality and trustworthy, there’s no value in it — it becomes a liability and a risk for the organization.

Modern companies recognize data as an essential asset that impacts their bottom line. Furthermore, they understand that poor data quality can damage credibility, reduce sales, and minimize growth.

In today’s world, organizations are aiming to be data-driven. However, becoming a data-driven organization is tough without a GRC program.

How so?

Governance, Risk, and Compliance enable organizations to protect and manage data quality by creating standardized, controlled, and repeatable processes. This is key because every piece of data an organization processes has an associated risk.

By understanding these risks, companies can implement the necessary controls and policies for handling and extracting data correctly so that every department can access the same quality information.

Organizations without structured, high-quality data can’t extract value from it, and they face the following risks:

  • Missed opportunities: Many leads are lost because of incomplete or inaccurate data. Incorrect data also means wrong insights, resulting in missed critical business opportunities.
  • Lost revenue: According to Gartner’s 2021 research, the average financial impact of poor data quality on organizations is $12.9 million annually.
  • Poor customer experience: When data quality is poor, organizations can’t identify customers’ pain points and preferences. As a result, the products or services offered don’t match customers’ needs and expectations.
  • Lack of compliance: In industries where regulations govern relationships or customer transactions, maintaining good-quality data can be the difference between compliance and fines of millions of dollars. GRC is vital to staying compliant as new regulations evolve worldwide.
  • Increased expenses: A few years ago, IBM research showed that poor-quality data cost businesses $3.1 trillion in the US alone. How? Time spent finding the correct data, fixing errors, and hunting for information and confirming sources.
  • Misanalysis: Around 84% of CEOs are concerned about the quality of the data they base decisions on. Wrong data leads to bad decisions and ultimately damages operations, finances, HR, and every other area of the company.
  • Reputational damage: In today’s world, customers spend a lot of time reading reviews before making a decision. If a company fails to satisfy its customers, everyone will know.
  • Reduced efficiency: Poor data quality forces employees to perform manual data quality checks, losing time and money.

To sum up:

Having the right processes to manipulate data will prevent organizations from missing business opportunities, damaging their reputation, and doing unnecessary repetitive tasks.

How GRC supports data-driven businesses and the key benefits of clean data

Data-driven businesses embrace the use of data (and its analysis) to get insights that can improve the organization. The efficient management of big data through GRC tools helps identify new business opportunities, strengthen customer experiences, grow sales, improve operations, and more.

For example, GRC helps data-driven businesses by allowing them to create and manage the right policies to process and protect the company’s data.

More importantly, organizations can also control individual policies to ensure they have been distributed and acknowledged accordingly.

Clean data has numerous “easy-to-identify” benefits, but many others are less obvious. Trustworthy data not only improves efficiency and results; it also supports fundamental, vital factors that affect business performance and success.

What are these factors?

Fundamental benefits:

  • Profits/Revenue
  • Internal communication
  • Employees’ confidence to share information
  • Company’s reputation
  • Trust

Operational benefits:

  • Efficiency
  • Business outcomes
  • Fewer privacy issues
  • Customer satisfaction
  • Better audience-targeting

How GRC protects the value of businesses and their data

In today’s world, companies should be measured not only by traditional financial metrics but also by the amount of monetizable data they can capture, consume, store, and use. More importantly, they should be measured by how that data helps the organization’s internal processes become faster and more agile.

When people think of high-quality data and big data, they usually associate the two with large organizations, especially technology and social media platforms. However, high-quality data gives organizations of any size plenty of benefits.

Data quality and integrity help organizations to:

  • Understand their clients
  • Enhance business operations
  • Understand industry best practices
  • Identify the best partnership options
  • Strengthen business culture
  • Deliver better results
  • Make more money

Using the right GRC platform helps companies create and control the policies and practices to ensure their data is valid, consistent, accurate, and complete — allowing them to get all these benefits.

The key benefit of using GRC tools is that businesses can deliver what customers expect at a greater scale and with higher precision and velocity.

Now, what does this have to do with value?

By protecting the value of data, organizations are protecting their overall worth. Indeed, GRC empowers companies to create a culture of value, giving everyone education and agency so they can make better decisions.

Also, GRC helps companies tell better security stories. These stories aim to build trust with customers and partners, enter new markets, and shorten sales cycles.

To summarize:

A better understanding of customers and processes — through data — will lead to better products and services, enhanced experiences, and long-lasting relationships with customers. All these represent growth and more revenue for companies.

What happens when a company’s data is not safe? Can it damage the company’s value?

Trust is a vital component of any interaction (business or personal), and, as such, organizations must protect it — without trust, there is no business.

When data is not protected, the chances of breaches are higher, causing direct and indirect costs.

Direct costs are:

  • Fines
  • Lawsuits
  • Stolen information
  • Compensations
  • Potential business loss

Indirect costs are:

  • Reputation/Trust
  • PR activities
  • Lost revenue from downtime
  • New and better protection

Often, reputation damages can cause long-term harm to organizations, making it hard for them to acquire and maintain business. In fact, reputation loss is the company’s biggest worry, followed by financial costs, system damage, and downtime.

So, what does all this mean?

It’s not just about collecting data; it is also about how companies reduce risks and leverage and protect the data they have. GRC integrates data security, helping organizations be better prepared against unauthorized access, corruption, or theft.

Moreover, GRC tools can help elevate data security by controlling policies, regulations, and predictable issues within the organization.

The bottom line?

When companies can’t gain or retain customers because of a lack of trust, the organization’s value will be significantly lower — or even zero. Unfortunately, this is even more true for small and medium-sized companies.

How to use GRC to achieve and maintain high-quality data

Many organizations have trouble managing their data, which, unfortunately, leads to poor decisions and a lack of trust from employees and customers.

Moreover, although companies know how costly wrong information is, many are not ensuring data quality through the right processes and controls. In fact, Harvard Business Review reported that 47% of newly created data records have at least one critical error.

Why is that?

Because there is a lack of focus on the right processes and systems that need to be in place to ensure quality data.

What do poor processes cause?

  • Human errors
  • Wrong data handling
  • Inaccurate formatting
  • Different sets of data for various departments
  • Unawareness of risks
  • Incorrect data input or extraction

Fortunately, GRC’s primary goal is to develop the right policies and procedures to ensure everyone in the organization appropriately manages the data.

GRC aims to create a data structure based on proper governance that dictates how people organize and handle the company’s information. As a result, GRC empowers companies to extract value from their data.

That is not everything.

Governance, Risk, and Compliance allow organizations to understand the risks associated with data handling and guide managers to create and distribute the policies that will support any data-related activity.

The following are some of the ways GRC is used to achieve and maintain high-quality data:

  • Data governance: Data governance is more than setting rules and telling people what to do. Instead, it is a collection of processes, roles, policies, standards, and metrics that will lead to a cultural change to ensure effective management of information throughout the organization.
  • Education: Achieving good data quality is not easy. It requires a deep understanding of data quality principles, processes, and technologies. GRC facilitates the education process by allowing the organization to seamlessly implement, share, and communicate its policies and standards to every department.
  • Everyone is involved: Everyone must understand the organization’s goal for data quality and the different processes and approaches that will be implemented. GRC focuses on cultural change.
  • Be aware of threats: When managing data, each process has risks associated with it. The mission of GRC is for the organization to recognize and deal with potential threats effectively. When companies are aware of risks, they can implement the necessary controls and rules to protect the data.
  • One single source of truth: A single source of truth ensures everyone in the organization makes decisions based on the same consistent and accurate data. GRC can help by defining the governance over data usage and manipulation. Furthermore, GRC makes it easy to communicate policies, see who the policy creator is, and ensure employees are acting according to the standards.

Get a free consultation with StandardFusion to learn more about how GRC and data governance can boost your organization’s value.

Source :
https://thehackernews.com/2022/09/how-grc-protects-value-of-organizations.html

Over 280,000 WordPress Sites Attacked Using WPGateway Plugin Zero-Day Vulnerability

A zero-day flaw in the latest version of a WordPress premium plugin known as WPGateway is being actively exploited in the wild, potentially allowing malicious actors to completely take over affected sites.

Tracked as CVE-2022-3180 (CVSS score: 9.8), the issue is being weaponized to add a malicious administrator user to sites running the WPGateway plugin, WordPress security company Wordfence noted.

“Part of the plugin functionality exposes a vulnerability that allows unauthenticated attackers to insert a malicious administrator,” Wordfence researcher Ram Gall said in an advisory.

WPGateway is billed as a means for site administrators to install, backup, and clone WordPress plugins and themes from a unified dashboard.

The most common indicator that a website running the plugin has been compromised is the presence of an administrator with the username “rangex.”

Additionally, the appearance of requests to “//wp-content/plugins/wpgateway/wpgateway-webservice-new.php?wp_new_credentials=1” in the access logs is a sign that the WordPress site has been targeted using the flaw, although it doesn’t necessarily imply a successful breach.

Wordfence said it blocked over 4.6 million attacks attempting to take advantage of the vulnerability against more than 280,000 sites in the past 30 days.

Further details about the vulnerability have been withheld owing to active exploitation and to prevent other actors from taking advantage of the shortcoming. In the absence of a patch, users are recommended to remove the plugin from their WordPress installations until a fix is available.

The development comes days after Wordfence warned of in-the-wild abuse of another zero-day flaw in a WordPress plugin called BackupBuddy.

The disclosure also arrives as Sansec revealed that threat actors broke into the extension license system of FishPig, a vendor of popular Magento-WordPress integrations, to inject malicious code that’s designed to install a remote access trojan called Rekoobe.

Source :
https://thehackernews.com/2022/09/over-280000-wordpress-sites-attacked.html

Microsoft’s Latest Security Update Fixes 64 New Flaws, Including a Zero-Day

Tech giant Microsoft on Tuesday shipped fixes to quash 64 new security flaws across its software lineup, including one zero-day flaw that has been actively exploited in real-world attacks.

Of the 64 bugs, five are rated Critical, 57 are rated Important, one is rated Moderate, and one is rated Low in severity. The patches are in addition to 16 vulnerabilities that Microsoft addressed in its Chromium-based Edge browser earlier this month.

“In terms of CVEs released, this Patch Tuesday may appear on the lighter side in comparison to other months,” Bharat Jogi, director of vulnerability and threat research at Qualys, said in a statement shared with The Hacker News.

“However, this month hit a sizable milestone for the calendar year, with MSFT having fixed the 1000th CVE of 2022 – likely on track to surpass 2021 which patched 1,200 CVEs in total.”

The actively exploited vulnerability in question is CVE-2022-37969 (CVSS score: 7.8), a privilege escalation flaw affecting the Windows Common Log File System (CLFS) Driver, which could be leveraged by an adversary to gain SYSTEM privileges on an already compromised asset.

“An attacker must already have access and the ability to run code on the target system. This technique does not allow for remote code execution in cases where the attacker does not already have that ability on the target system,” Microsoft said in an advisory.

The tech giant credited four different sets of researchers from CrowdStrike, DBAPPSecurity, Mandiant, and Zscaler for reporting the flaw, which may be an indication of widespread exploitation in the wild, Greg Wiseman, product manager at Rapid7, said in a statement.

CVE-2022-37969 is also the second actively exploited zero-day flaw in the CLFS component after CVE-2022-24521 (CVSS score: 7.8), the latter of which was resolved by Microsoft as part of its April 2022 Patch Tuesday updates.

It’s not immediately clear if CVE-2022-37969 is a patch bypass for CVE-2022-24521. Other critical flaws of note are as follows –

  • CVE-2022-34718 (CVSS score: 9.8) – Windows TCP/IP Remote Code Execution Vulnerability
  • CVE-2022-34721 (CVSS score: 9.8) – Windows Internet Key Exchange (IKE) Protocol Extensions Remote Code Execution Vulnerability
  • CVE-2022-34722 (CVSS score: 9.8) – Windows Internet Key Exchange (IKE) Protocol Extensions Remote Code Execution Vulnerability
  • CVE-2022-34700 (CVSS score: 8.8) – Microsoft Dynamics 365 (on-premises) Remote Code Execution Vulnerability
  • CVE-2022-35805 (CVSS score: 8.8) – Microsoft Dynamics 365 (on-premises) Remote Code Execution Vulnerability

“An unauthenticated attacker could send a specially crafted IP packet to a target machine that is running Windows and has IPSec enabled, which could enable a remote code execution exploitation,” Microsoft said about CVE-2022-34721 and CVE-2022-34722.

Also resolved by Microsoft are 15 remote code execution flaws in Microsoft ODBC Driver, Microsoft OLE DB Provider for SQL Server, and Microsoft SharePoint Server and five privilege escalation bugs spanning Windows Kerberos and Windows Kernel.

The September release is further notable for patching yet another elevation of privilege vulnerability in the Print Spooler module (CVE-2022-38005, CVSS score: 7.8) that could be abused to obtain SYSTEM-level permissions.

Lastly, included in the raft of security updates is a fix released by chipmaker Arm for a speculative execution vulnerability called Branch History Injection or Spectre-BHB (CVE-2022-23960) that came to light earlier this March.

“This class of vulnerabilities poses a large headache to the organizations attempting mitigation, as they often require updates to the operating systems, firmware and in some cases, a recompilation of applications and hardening,” Jogi said. “If an attacker successfully exploits this type of vulnerability, they could gain access to sensitive information.”

Software Patches from Other Vendors

Aside from Microsoft, security updates have also been released by other vendors since the start of the month to rectify dozens of vulnerabilities, including —

CRITICAL SECURITY BULLETIN: September 2022 Security Bulletin for Trend Micro Apex One

Summary

Release Date: Sept. 13, 2022
CVE Identifier(s): CVE-2022-40139 through CVE-2022-40144
Platform(s): Windows
CVSS 3.0 Score(s): 5.5 – 8.2
Severity Rating(s): Medium – High

Trend Micro has released a new Service Pack for Trend Micro Apex One (On Premise) and Critical Patches for Apex One as a Service (SaaS) that resolve multiple vulnerabilities in the product.

Please note – Trend Micro has observed at least one active attempt of potential attacks against at least one of these vulnerabilities in the wild (ITW) – details below. Customers are strongly encouraged to update to the latest versions as soon as possible.

Affected Version(s)

Product | Affected Version(s) | Platform | Language(s)
Apex One | 2019 (On-prem) | Windows | English
Apex One | SaaS | Windows | English


Solution

Trend Micro has released the following solutions to address the issue:

Product | Updated Version | Notes | Platform | Availability
Apex One | Apex One SP1 (b11092/11088) | Readme | Windows | Now Available
Apex One (SaaS) | August 2022 Monthly Patch (202208)* | Readme | Windows | Now Available

These are the minimum recommended version(s) of the patches and/or builds required to address the issue. Trend Micro highly encourages customers to obtain the latest version of the product if there is a newer one available than the one listed in this bulletin.

* Please note that some of the vulnerabilities listed below were addressed in earlier monthly SaaS updates, but Trend Micro recommends that Apex One as a Service customers are always on the latest available build to ensure all issues are properly resolved.

Customers are encouraged to visit Trend Micro’s Download Center to obtain prerequisite software (such as Service Packs) before applying any of the solutions above.


Vulnerability Details

CVE-2022-40139:  Improper Validation of Rollback Mechanism Components RCE Vulnerability 
CVSSv3: 7.2: AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H
Improper validation of some components used by the rollback mechanism in Trend Micro Apex One and Trend Micro Apex One as a Service clients could allow an Apex One server administrator to instruct affected clients to download an unverified rollback package, which could lead to remote code execution.

Please note: an attacker must first obtain Apex One server administration console access in order to exploit this vulnerability.

ITW Alert: Trend Micro has observed at least one active attempt of potential exploitation of this vulnerability in the wild.

CVE-2022-40140:  Origin Validation Error Denial-of-Service Vulnerability 
ZDI-CAN-16314
CVSSv3: 5.5: AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

An origin validation error vulnerability in Trend Micro Apex One and Apex One as a Service could allow a local attacker to cause a denial-of-service on affected installations.

Please note: an attacker must first obtain the ability to execute low-privileged code on the target system in order to exploit this vulnerability.

CVE-2022-40141:  Information Disclosure Vulnerability 
CVSSv3: 5.6: AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:L
A vulnerability in Trend Micro Apex One and Apex One as a Service could allow an attacker to intercept and decode certain communication strings that may contain some identification attributes of a particular Apex One server.

CVE-2022-40142:  Agent Link Following Local Privilege Escalation Vulnerability 
ZDI-CAN-16691
CVSSv3: 7.8: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

A security link following local privilege escalation vulnerability in Trend Micro Apex One and Trend Micro Apex One as a Service agents could allow a local attacker to create a writable folder in an arbitrary location and escalate privileges on affected installations.

Please note: an attacker must first obtain the ability to execute low-privileged code on the target system in order to exploit this vulnerability.

CVE-2022-40143:  Link Following Local Privilege Escalation Vulnerability 
ZDI-CAN-16435
CVSSv3: 7.3: AV:L/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H

A link following local privilege escalation vulnerability in Trend Micro Apex One and Trend Micro Apex One as a Service servers could allow a local attacker to abuse an insecure directory that could allow a low-privileged user to run arbitrary code with elevated privileges.

Please note: an attacker must first obtain the ability to execute low-privileged code on the target system in order to exploit this vulnerability.

CVE-2022-40144:  Login Authentication Bypass Vulnerability 
JVN#36454862
CVSSv3: 8.2: AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:H

A vulnerability in Trend Micro Apex One and Trend Micro Apex One as a Service could allow an attacker to bypass the product’s login authentication by falsifying request parameters on affected installations.


Mitigating Factors

Exploiting these types of vulnerabilities generally requires that an attacker has access (physical or remote) to a vulnerable machine. In addition to the timely application of patches and updated solutions, customers are also advised to review remote access to critical systems and ensure that policies and perimeter security are up to date.

However, even though an exploit may require several specific conditions to be met, Trend Micro strongly encourages customers to update to the latest builds as soon as possible.


Acknowledgement

Trend Micro would like to thank the following individuals for responsibly disclosing these issues and working with Trend Micro to help protect our customers:


External Reference(s)

The following advisories may be found at Trend Micro’s Zero Day Initiative Published Advisories site:

  • ZDI-CAN-16314
  • ZDI-CAN-16691
  • ZDI-CAN-16435

The following advisory may be found at Japan Vulnerability Notes (JVN):

  • JVN#36454862

SaaS vs PaaS vs IaaS: What’s the Difference & How to Choose

Companies are increasingly using Cloud services to support their business processes. But which types of Cloud services are there, and what is the difference? Which kind of Cloud service is most suitable for you? Do you want to be unburdened or completely in control? Do you opt for maximum cost savings, or do you want the entire arsenal of possibilities and top performance? Can you still see the forest for the trees? In this article and in the next, I describe several different Cloud services, what the differences and features are and what exactly you need to pay attention to.

Let’s start with the definition of Cloud computing: the provision of services using the internet (Cloud). Think of storage, software, servers, databases, etc. Depending on the type of service offered (think of license management or data storage), you can divide these services into categories. Examples are IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service), etc. These services are provided by a cloud provider. Whether this is Microsoft (Azure), Amazon (AWS), or another vendor (Google, Alibaba, Oracle, etc.), each vendor offers Cloud services that fall under one of the categories we are about to discuss.

One feature of Cloud computing is that you pay according to the usage and the service you purchase. For example, for SaaS, you pay for the software’s license and support. This also means that if you buy a SaaS service (e.g., Office 365) and don’t use it, you will still be charged. At the same time, if you purchase storage with IaaS, for example, you only pay for the amount of storage you use, possibly supplemented with additional services such as backup, etc.

Sometimes Cloud services complement each other; think, for example, of DBaaS (Database as a Service), where a database is offered via the Cloud. Often you need an application server and other infrastructure to read data from this database. These usually run in a Landing Zone, purchased as an IaaS service. But some services can also stand alone, for example, SaaS (Office 365).

Each Cloud service has specific characteristics. Sometimes it requires little or no (technical) knowledge, but it can also be challenging to manage and use the services according to best practices. This often depends on the degree to which you want to be in control. If you want an application from the Cloud where you are completely relieved of all worries, this requires little technical knowledge from the user or the administrator. But if you want maximum control, then IaaS gives you an enormous range of possibilities. In this article, you can read what you need to consider.

It is advisable to think beforehand about what your requirements and wishes are precisely and whether this fits in with the service you want to purchase. If you wish to use an application in the Cloud but use many custom settings, this is often not possible. If you don’t want to be responsible for updating and backing up an application and use little or no customization, a SaaS can be very interesting. Also, look at how a service fits into your business process. Does it offer possibilities for automation, reporting, or disaster recovery? Are there possibilities to temporarily allocate extra resources in case of peak demand (horizontal or vertical scaling up), and what guarantees does the supplier offer with this service? Think of RPO / RTO and accessibility of the service desk in case of a calamity.

Let’s get started quickly!

IaaS (Infrastructure as a Service)

One of the best-known Cloud services is undoubtedly IaaS. For many companies, this is often their first introduction to a Cloud service. You rent the infrastructure from a cloud provider: for example, the network infrastructure, virtual servers (including the operating system), and storage. A feature of IaaS is that you have complete control, both on the management side and in how you deploy resources (requests). This can be done in various automated ways (PowerShell, IaC, DevOps pipelines, etc.) and via the classic management interface that all providers offer. Things that are often not possible with a PaaS service are possible with an IaaS service. You have complete control. In principle, you can set up a complete server environment (all services are available for this), but you still get the benefits of the Cloud, such as scalability and pay per use or per resource.
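
As an illustration of that automated, pay-per-resource model, here is a minimal sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-resource packages). The subscription ID, resource group name, and region are placeholders, and credentials are assumed to be available to DefaultAzureCredential (for example, via environment variables or an Azure CLI login); it is a sketch of programmatic provisioning, not a recommended production setup.

```python
# Minimal sketch: requesting IaaS resources programmatically with the Azure SDK
# for Python. The subscription ID, names, and region below are placeholders.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# DefaultAzureCredential picks up credentials from environment variables,
# a managed identity, or an interactive Azure CLI login.
credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # placeholder

resource_client = ResourceManagementClient(credential, subscription_id)

# Create (or update) a resource group: the container under which IaaS resources
# such as virtual networks, disks, and virtual machines are deployed and billed.
resource_group = resource_client.resource_groups.create_or_update(
    "rg-demo-iaas",              # hypothetical resource group name
    {"location": "westeurope"},  # hypothetical region
)
print(f"Provisioned resource group {resource_group.name} in {resource_group.location}")
```

The same kind of management client exists for compute, network, and storage resources, which is what makes pay-per-resource provisioning scriptable and repeatable.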

IaaS, therefore, most closely resembles an on-premise implementation. You often see it used in combination with virtual servers. It is critical to investigate possible limitations, such as I/O, because performance in practice can differ from a traditional local environment. You are responsible for arranging security and backup. The advantage is that you influence the choice of technology used: you can customize the setup to your needs and wishes and standardize the configuration for your organization. Deployment can be complex, and you are forced to make your own choices, so some expertise is needed.

PaaS (Platform as a Service)

PaaS stands for Platform as a Service and goes further than IaaS. You get a platform where you can do the configuration yourself. When you use a PaaS service, the vendor takes care of the underlying layer (IaaS), the operating system, and the middleware. So you sacrifice something in terms of control and capabilities. PaaS services are ideal for developers and web and application builders. After all, you can quickly make an environment available. Using it means you no longer have to worry about the infrastructure, operating system, and middleware; the supplier takes care of these based on best practices. This also offers security advantages, as patching and upgrading are now handled by the vendor.

Another advantage is that you can entirely focus on what you want to do and not on managing the environment. You can also easily purchase additional services and quickly scale them up or down. When you are finished, you can remove and stop the resources, so you have no more costs.

However, do take into account the use of existing software. Not all existing software is suitable to function in a PaaS environment; for example, in a PaaS environment, you do not have full access (after all, the vendor is responsible). Also, not all CPU power and memory are allocated to the Cloud application. This is because it is often hosted on a shared platform, so other applications (and databases) may use the same resources. As for the database, you have the same advantages and disadvantages as with DBaaS.

SaaS (Software as a Service)

This is probably a service you’ve been using for a while. In short, you consume applications through the Cloud on a subscription basis. The provider is responsible for managing the infrastructure, patches, and updates. A SaaS solution is ready for use immediately, and you directly benefit from the added value, such as fast scaling up and down and paying per use. Examples are Office 365, SharePoint Online, Salesforce, Exact Online, Dropbox, etc.

Unlike IaaS and PaaS, where you still have a lot of freedom and have to set everything up yourself, with SaaS it is immediately clear what you are buying and what you will get. With this service, you are relieved of most of your worries. The vendor is responsible for all updates, patches, development, and more. You cannot make any updates or changes to the software with this service.

Many companies use one or more SaaS services, and there is often a distinction even within a company. For example, each department has its own specific applications and associated SaaS services. With this service, you only pay for what you need, including the licenses. These licenses can easily be scaled up or down.

Working with SaaS solutions is interesting for many companies. It is particularly interesting for start-ups, small companies, and freelancers because you only purchase what you use, you don’t have unnecessarily high start-up costs, and you don’t have to worry about maintaining the software.

But SaaS can also be a perfect solution for larger companies. For example, if you hire extra staff for specific periods, you can quickly get these people working with the software they need. You buy several additional licenses, and you can stop them when the temporary staff leaves.

How can Vembu help you?

BDRSuite is a comprehensive Backup & DR solution designed to protect your business-critical data across Virtual (VMware, Hyper-V), Physical Servers (Windows, Linux), SaaS (Microsoft 365, Google Workspace), AWS EC2 Instances, Endpoints (Windows, Mac), and Applications & Databases (MS Active Directory, MS Exchange, MS Outlook, SharePoint, MS SQL, MySQL).

To protect your workloads running on SaaS (Microsoft 365, Google Workspace), try out a full-featured 30-day free trial of the latest version of BDRSuite.

Source :
https://www.vembu.com/blog/saas-vs-paas-vs-iaas-whats-the-difference-how-to-choose/

Differences between availability modes for an Always On availability group

Applies to:  SQL Server (all supported versions)

In Always On availability groups, the availability mode is a replica property that determines whether a given availability replica can run in synchronous-commit mode. For each availability replica, the availability mode must be configured for either synchronous-commit mode, asynchronous-commit mode, or configuration only mode. If the primary replica is configured for asynchronous-commit mode, it does not wait for any secondary replica to write incoming transaction log records to disk (to harden the log). If a given secondary replica is configured for asynchronous-commit mode, the primary replica does not wait for that secondary replica to harden the log. If the primary replica and a given secondary replica are both configured for synchronous-commit mode, the primary replica waits for the secondary replica to confirm that it has hardened the log (unless the secondary replica fails to ping the primary replica within the primary’s session-timeout period).

 Note

If primary’s session-timeout period is exceeded by a secondary replica, the primary replica temporarily shifts into asynchronous-commit mode for that secondary replica. When the secondary replica reconnects with the primary replica, they resume synchronous-commit mode.

Supported Availability Modes

Always On availability groups supports three availability modes: asynchronous-commit mode, synchronous-commit mode, and configuration only mode, as follows:

  • Asynchronous-commit mode is a disaster-recovery solution that works well when the availability replicas are distributed over considerable distances. If every secondary replica is running under asynchronous-commit mode, the primary replica does not wait for any of the secondary replicas to harden the log. Rather, immediately after writing the log record to the local log file, the primary replica sends the transaction confirmation to the client. The primary replica runs with minimum transaction latency in relation to a secondary replica that is configured for asynchronous-commit mode. If the current primary is configured for asynchronous commit availability mode, it will commit transactions asynchronously for all secondary replicas regardless of their individual availability mode settings. For more information, see Asynchronous-Commit Availability Mode, later in this topic.
  • Synchronous-commit mode emphasizes high availability over performance, at the cost of increased transaction latency. Under synchronous-commit mode, transactions wait to send the transaction confirmation to the client until the secondary replica has hardened the log to disk. When data synchronization begins on a secondary database, the secondary replica begins applying incoming log records from the corresponding primary database. As soon as every log record has been hardened, the secondary database enters the SYNCHRONIZED state. Thereafter, every new transaction is hardened by the secondary replica before the log record is written to the local log file. When all the secondary databases of a given secondary replica are synchronized, synchronous-commit mode supports manual failover and, optionally, automatic failover. For more information, see Synchronous-Commit Availability Mode, later in this topic.
  • Configuration only mode applies to availability groups that are not on a Windows Server Failover Cluster. A replica in configuration only mode does not contain user data. In configuration only mode, the replica master database stores availability group configuration metadata. For more information, see Availability group with configuration only replica.

The following illustration shows an availability group with five availability replicas. The primary replica and one secondary replica are configured for synchronous-commit mode with automatic failover. Another secondary replica is configured for synchronous-commit mode with only planned manual failover, and two secondary replicas are configured for asynchronous-commit mode, which supports only forced manual failover (typically called forced failover).

Availability and failover modes of replicas

The synchronization and failover behavior between two availability replicas depends on the availability mode of both replicas. For example, for synchronous commit to occur, both the current primary replica and the secondary replica in question must be configured for synchronous commit. Likewise, for automatic failover to occur, both replicas need to be configured for automatic failover. Therefore, the behavior for the illustrated deployment scenario above can be summarized in the following table, which explores the behavior with each potential primary replica:

Current Primary Replica | Automatic Failover Targets | Synchronous-Commit Mode Behavior With | Asynchronous-Commit Mode Behavior With | Automatic Failover Possible
01 | 02 | 02 and 03 | 04 | Yes
02 | 01 | 01 and 03 | 04 | Yes
03 | (none) | 01 and 02 | 04 | No
04 | (none) | (none) | 01, 02, and 03 | No

Typically, Node 04, as an asynchronous-commit replica, is deployed in a disaster recovery site. The fact that Nodes 01, 02, and 03 remain in asynchronous-commit mode after failing over to Node 04 helps prevent potential performance degradation in your availability group due to high network latency between the two sites.

Asynchronous-Commit Availability Mode

Under asynchronous-commit mode, the secondary replica never becomes synchronized with the primary replica. Though a given secondary database might catch up to the corresponding primary database, any secondary database could lag behind at any point. Asynchronous-commit mode can be useful in a disaster-recovery scenario in which the primary replica and the secondary replica are separated by a significant distance and where you do not want small errors to impact the primary replica or in situations where performance is more important than synchronized data protection. Furthermore, since the primary replica does not wait for acknowledgements from the secondary replica, problems on the secondary replica never impact the primary replica.

An asynchronous-commit secondary replica attempts to keep up with the log records received from the primary replica. But asynchronous-commit secondary databases always remain unsynchronized and are likely to lag somewhat behind the corresponding primary databases. Typically the gap between an asynchronous-commit secondary database and the corresponding primary database is small. But the gap can become substantial if the server hosting the secondary replica is overloaded or the network is slow.

The only form of failover supported by asynchronous-commit mode is forced failover (with possible data loss). Forcing failover is a last resort intended only for situations in which the current primary replica will remain unavailable for an extended period and immediate availability of primary databases is more critical than the risk of possible data loss. The failover target must be a replica whose role is in the SECONDARY or RESOLVING state. The failover target transitions to the primary role, and its copies of the databases become the primary databases. Any remaining secondary databases, along with the former primary databases, once they become available, are suspended until you manually resume them individually. Under asynchronous-commit mode, any transaction logs that the original primary replica had not yet sent to the former secondary replica are lost. This means that some or all of the new primary databases might be lacking recently committed transactions. For more information on how forced failover works and on best practices for using it, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Availability Mode

Under synchronous-commit availability mode (synchronous-commit mode), after being joined to an availability group, a secondary database catches up to the corresponding primary database and enters the SYNCHRONIZED state. The secondary database remains SYNCHRONIZED as long as data synchronization continues. This guarantees that every transaction that is committed on a given primary database has also been committed on the corresponding secondary database. When every secondary database on a given secondary replica is synchronized, the synchronization-health state of the secondary replica as a whole is HEALTHY.

Factors That Disrupt Data Synchronization

Once all of its databases are synchronized, a secondary replica enters the HEALTHY state. The synchronized secondary replica will remain healthy unless one of the following occurs:

  • A network or computer delay or glitch causes the session between the secondary replica and primary replica to time out. Note: For information about the session-timeout property of availability replicas, see Overview of Always On Availability Groups (SQL Server).
  • You suspend a secondary database on the secondary replica. The secondary replica ceases to be synchronized, and its synchronization-health state is marked as NOT_HEALTHY. The secondary replica cannot become healthy again until the suspended secondary database is either resumed and resynchronized or removed from the availability group.
  • You add a primary database to the availability group. Previously synchronized secondary replicas enter the NOT_HEALTHY synchronization-health state. This state indicates that at least one database is in the NOT SYNCHRONIZING synchronization state. A given secondary replica cannot be HEALTHY again until a corresponding secondary database has been prepared on the replica, has been joined to the availability group, and has become synchronized with the new primary database.
  • You change the primary replica or the secondary replica to asynchronous-commit availability mode. After changing to asynchronous-commit mode, the secondary replica will remain in the HEALTHY synchronization-health state as long as data synchronization continues. However, if only the primary replica is changed to asynchronous-commit mode, the synchronous-commit secondary replica will enter the PARTIALLY_HEALTHY synchronization-health state. This state indicates that at least one database is in the SYNCHRONIZING synchronization state, but none of the databases are in the NOT SYNCHRONIZING state.
  • You change any secondary replica to synchronous-commit availability mode. This causes that secondary replica to be marked as in the PARTIALLY_HEALTHY synchronization-health state until all of its databases are in the SYNCHRONIZED synchronization state.

 Tip

To view the synchronization health of an availability group, availability replica, or availability database, query the synchronization_health or synchronization_health_desc column of sys.dm_hadr_availability_group_states, sys.dm_hadr_availability_replica_states, or sys.dm_hadr_database_replica_states, respectively.
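
As an illustration of the tip above, the following is a minimal sketch that queries two of those DMVs from Python with pyodbc. The connection string and server name are placeholders, and querying these views normally requires the VIEW SERVER STATE permission; adapt it to however you already connect to the instance.

```python
# Minimal sketch: checking availability-group synchronization health through the
# DMVs mentioned above, using pyodbc. The connection string is a placeholder.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-sql-server;DATABASE=master;"  # placeholder server name
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT ag.name AS availability_group,
       ars.role_desc,
       drs.database_id,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON drs.replica_id = ars.replica_id
JOIN sys.availability_groups AS ag
    ON drs.group_id = ag.group_id;
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        print(row.availability_group, row.role_desc, row.database_id,
              row.synchronization_state_desc, row.synchronization_health_desc)
```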

How Synchronization Works on a Secondary Replica

Under the synchronous-commit mode, after a secondary replica joins the availability group and establishes a session with the primary replica, the secondary replica writes incoming log records to disk (hardens the log) and sends a confirmation message to the primary replica. Once the hardened log on the secondary database has caught up to the end of the log on the primary database, the state of the secondary database is set to SYNCHRONIZED. The time required for synchronization depends essentially on how far the secondary database was behind the primary database at the start of the session (measured by the number of log records initially received from the primary replica), the workload on the primary database, and the speed of the computer of the server instance that hosts the secondary replica.

Synchronous operation is maintained in the following manner:

  1. On receiving a transaction from a client, the primary replica writes the log for the transaction to the transaction log and concurrently sends the log record to the secondary replicas.
  2. Once a log record is written to the transaction log of the primary database, the transaction can be undone only if there is a failover at this point to a secondary that did not receive the log. The primary replica waits for confirmation from the synchronous-commit secondary replica.
  3. The secondary replica hardens the log and returns an acknowledgement to the primary replica.
  4. On receiving the confirmation from the secondary replica, the primary replica finishes the commit processing and sends a confirmation message to the client. Note: If a synchronous-commit secondary replica times out without confirming that it has hardened the log, the primary marks that secondary replica as failed. The connected state of the secondary replica changes to DISCONNECTED, and the primary replica stops waiting for confirmation from the secondary replica. This behavior ensures that a failed synchronous-commit secondary replica does not prevent hardening of the transaction log on the primary replica.

Synchronous-commit mode protects your data by requiring the data to be synchronized between two places, at the cost of somewhat increasing the latency of the transaction.

Synchronous-Commit Mode with Only Manual Failover

When these replicas are connected and the database is synchronized, manual failover is supported. If the secondary replica goes down, the primary replica is unaffected. The primary replica runs exposed if no SYNCHRONIZED replicas exist (that is, without sending data to any secondary replica). If the primary replica is lost, the secondary replicas enter the RESOLVING state, but the database owner can force a failover to the secondary replica (with possible data loss). For more information, see Failover and Failover Modes (Always On Availability Groups).

Synchronous-Commit Mode with Automatic Failover

Automatic failover provides high availability by ensuring that the database is quickly made available again after the loss of the primary replica. To configure an availability group for automatic failover, you need to set both the current primary replica and at least one secondary replica to synchronous-commit mode with automatic failover. You can have up to three automatic failover replicas.

Furthermore, for an automatic failover to be possible at a given time, this secondary replica must be synchronized with the primary replica (that is, the secondary databases are all synchronized), and the Windows Server Failover Clustering (WSFC) cluster must have quorum. If the primary replica becomes unavailable under these conditions, automatic failover occurs. The secondary replica switches to the role of primary, and it offers its database as the primary database. For more information, see the “Automatic Failover ” section of the Failover and Failover Modes (Always On Availability Groups) topic.

 Note

For more information about WSFC quorum and Always On availability groups, see WSFC Quorum Modes and Voting Configuration (SQL Server).
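
As a sketch of that configuration step, the statements below switch a secondary replica to synchronous-commit mode with automatic failover. The availability group name (AG1) and replica name (SQLNODE02) are placeholders, the statements are normally run while connected to the current primary replica, and the same change must also be applied to the primary replica for automatic failover to be possible, as described above.

```python
# Minimal sketch: setting a replica to synchronous-commit mode with automatic
# failover via pyodbc. Availability group and replica names are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=primary-replica-host;DATABASE=master;"  # placeholder primary replica
    "Trusted_Connection=yes;"
)

STATEMENTS = [
    "ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE02' "
    "WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);",
    "ALTER AVAILABILITY GROUP [AG1] MODIFY REPLICA ON N'SQLNODE02' "
    "WITH (FAILOVER_MODE = AUTOMATIC);",
]

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    for statement in STATEMENTS:
        cursor.execute(statement)  # each option change takes effect immediately
```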

Data latency on secondary replica

Implementing read-only access to secondary replicas is useful if your read-only workloads can tolerate some data latency. In situations where data latency is unacceptable, consider running read-only workloads against the primary replica.

The primary replica sends log records of changes on the primary database to the secondary replicas. On each secondary database, a dedicated redo thread applies the log records. On a read-access secondary database, a given data change does not appear in query results until the log record that contains the change has been applied to the secondary database and the transaction has been committed on the primary database.

This means that there is some latency, usually only a matter of seconds, between the primary and secondary replicas. In unusual cases, however, for example if network issues reduce throughput, latency can become significant. Latency increases when I/O bottlenecks occur and when data movement is suspended. To monitor suspended data movement, you can use the Always On Dashboard or the sys.dm_hadr_database_replica_states dynamic management view.

For more information on investigating redo latency on the secondary replica, please see Troubleshoot primary changes not reflected on secondary replica.

Related Tasks

To change the availability mode and failover mode

To adjust quorum votes

To perform a manual failover

To view availability group, availability replica, and database states

See Also

Overview of Always On Availability Groups (SQL Server)
Failover and Failover Modes (Always On Availability Groups)
Windows Server Failover Clustering (WSFC) with SQL Server

Source :
https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/availability-modes-always-on-availability-groups?view=sql-server-ver16

PSA: Zero-Day Vulnerability in WPGateway Actively Exploited in the Wild

On September 8, 2022, the Wordfence Threat Intelligence team became aware of an actively exploited zero-day vulnerability being used to add a malicious administrator user to sites running the WPGateway plugin. We released a firewall rule to Wordfence Premium, Wordfence Care, and Wordfence Response customers to block the exploit on the same day, September 8, 2022.

Sites still running the free version of Wordfence will receive the same protection 30 days later, on October 8, 2022. The Wordfence firewall has successfully blocked over 4.6 million attacks targeting this vulnerability against more than 280,000 sites in the past 30 days.

Vulnerability Details

Description: Unauthenticated Privilege Escalation
Affected Plugin: WPGateway
Plugin Slug: wpgateway
Plugin Developer: Jack Hopman/WPGateway
Affected Versions: <= 3.5
CVE ID: CVE-2022-3180
CVSS Score: 9.8 (Critical)
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Fully Patched Version: N/A

The WPGateway plugin is a premium plugin tied to the WPGateway cloud service, which offers its users a way to set up and manage WordPress sites from a single dashboard. Part of the plugin functionality exposes a vulnerability that allows unauthenticated attackers to insert a malicious administrator.

We obtained a current copy of the plugin on September 9, 2022, and determined that it is vulnerable, at which time we contacted the plugin vendor with our initial disclosure. We have reserved vulnerability identifier CVE-2022-3180 for this issue.

As this is an actively exploited zero-day vulnerability, and attackers are already aware of the mechanism required to exploit it, we are releasing this public service announcement (PSA) to all of our users. We are intentionally withholding certain details to prevent further exploitation. As a reminder, an attacker with administrator privileges has effectively achieved a complete site takeover.

Indicators of compromise

If you are working to determine whether a site has been compromised using this vulnerability, the most common indicator of compromise is a malicious administrator with the username of rangex.

If you see this user added to your dashboard, it means that your site has been compromised.

Additionally, you can check your site’s access logs for requests to //wp-content/plugins/wpgateway/wpgateway-webservice-new.php?wp_new_credentials=1

If these requests are present in your logs, they indicate that your site has been attacked using an exploit targeting this vulnerability, but do not necessarily indicate that it has been successfully compromised.
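
As a practical aid, here is a minimal sketch that scans an access log for that request pattern. The log path is a placeholder for wherever your web server writes its access log, and a match only confirms an attack attempt, as noted above; checking the dashboard (or the users table) for an administrator named rangex remains the primary indicator of an actual compromise.

```python
# Minimal sketch: scanning a web server access log for the WPGateway exploit
# request pattern described in this advisory. The log path is a placeholder.
from pathlib import Path

# The advisory shows the path with a leading double slash; substring matching
# catches both single- and double-slash variants.
INDICATOR = "/wp-content/plugins/wpgateway/wpgateway-webservice-new.php?wp_new_credentials=1"
ACCESS_LOG = Path("/var/log/nginx/access.log")  # placeholder path

matches = [
    line.rstrip()
    for line in ACCESS_LOG.read_text(errors="replace").splitlines()
    if INDICATOR in line
]

if matches:
    print(f"{len(matches)} request(s) matched the WPGateway exploit pattern:")
    for line in matches[:20]:  # print the first few matches only
        print("  ", line)
    print("These requests indicate an attack attempt, not necessarily a successful compromise.")
else:
    print("No matching requests found in this log file.")
```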

Conclusion

In today’s post, we detailed a zero-day vulnerability being actively exploited in the WPGateway plugin.

Wordfence Premium, Wordfence Care, and Wordfence Response customers received a firewall rule on September 8, 2022, protecting against this vulnerability, while sites still using the free version of Wordfence will receive the same protection 30 days later, on October 8, 2022.

If you have the WPGateway plugin installed, we urge you to remove it immediately until a patch is made available and to check for malicious administrator users in your WordPress dashboard.

If you know a friend or colleague who is using this plugin on their site, we highly recommend forwarding this advisory to them to help keep their sites protected, as this is a serious vulnerability that is actively being exploited in the wild. Please help make the WordPress community aware of this issue.

If you believe your site has been compromised as a result of this vulnerability or any other vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both these products include hands-on support in case you need further assistance.

Our investigation is ongoing, and we will provide more information in an additional blog post when it becomes available.

Special thanks to Threat Intelligence Lead Chloe Chamberland for spotting this exploit in the wild.

Source :
https://www.wordfence.com/blog/2022/09/psa-zero-day-vulnerability-in-wpgateway-actively-exploited-in-the-wild/

WP Shield Security PRO – Release 16.1

It’s been a few months in the making, but it’s finally here – our most exciting release (yet again!) of Shield Security for WordPress.

This release is absolutely packed with goodies and our headline feature – integration with CrowdSec – deserves an article all to itself.

Here you’ll discover all the exciting things we’ve packed into ShieldPRO v16 and why you should be upgrading as soon as it’s out.

Let’s dig into all the new goodies…

#1 Partnership with CrowdSec for Crowd-Sourced IP Intelligence

This is, to our mind, one of the most exciting developments for WordPress security for a very long time.

We’ve wanted to achieve this level of protection against bots for years, as we firmly believe that good WordPress security starts with intelligent blocking malicious IP addresses.

Shield already does an effective job of this with its automatic block list system, but we've now achieved group intelligence: every WordPress site running Shield will benefit from the experiences of all the other websites running Shield.

This is a big topic so we’ve dedicated a whole article to it – learn about the new partnership here.

#2 Brand New IP Rules and Blocking Engine

IP Blocking has been a part of ShieldPRO practically from the outset. It's core to our WordPress security philosophy.

With such a long-standing feature, you can imagine that the knowledge and experience that went into the original system isn't as thorough as what we have today. We've come a long way, I can promise you.

This release, spurred on by the new CrowdSec integration, sees the much-needed overhaul of our IP management system. It’s smarter and more versatile, and altogether much faster!

Shield must look up a visitor's IP address on every single request to a WordPress site. If we can improve the speed of that lookup, we improve Shield's overall performance.

#3 Improved UI

Shield has a number of different subsystems, many of which are related. The scan results page is linked to the scanner configuration page, for example.

To date, when you wanted to view any section of the plugin, it would reload the entire page. We've done some work to reduce full page reloads so that you can stay "where you are" while viewing the contents of another page.

In particular we’re referring to “Configuration” pages. Links to such areas will now open in an overlay, letting you keep your current page active while you review and adjust settings.

Another UI enhancement is a new title bar across every page of the plugin, letting you see more clearly where you are, along with important links to help resources.

This title bar also includes our brand new “super search box”…

#4 Shield’s Super Search Box

We mentioned UI improvements already, but this deserves a section all to itself.

To say Shield is a large plugin is understating it. There are many options pages, as well as tools, tables, data, charts, and more.

Finding your way around can be a bit tricky. Since we built it, we know it inside out. But for everyone who uses it as a tool to protect their sites, it's not always obvious where to go to find the "thing" you need.

No longer!

With Shield’s “Super Search Box”, you can find almost anything you need, and jump directly to it. Currently you can search for:

  • Specific configuration options
  • Tools such as Import/Export, Admin Notes, Debug
  • Logs such as Activity Logs and Traffic Logs
  • IP Rules
  • IP addresses – it’ll open a popup to review the data Shield holds on any particular IP
  • External links such as Shield's homepage, Facebook page, helpdesk, CrowdSec etc.

We’ll develop this a bit more over time as we get feedback from you on what you’d like to see in there.

#5 Lighter, Faster Scan Results Display

Shield’s scans can turn up a lot of results and some customers have reported trouble on some servers with limited resources.

We’ve redesigned how the scan results are built, so it’s faster and lighter on both your browser and the WordPress server.

#6 Improved Human SPAM Detection

After working with a customer on some issues she faced with Human SPAM, we’ve developed enhancements to how Shield will detect repeated human spam comments.

For example, a spammer may post a comment and trigger our human SPAM scanner, but then fire off more comments which might bypass the same scanner. Shield will now use its previous SPAM detections to inform how it assesses future comments, too.

We also squashed a bug where Shield wasn’t properly honouring the “disallowed keywords” option built into WordPress itself.

#7 Custom Activity Logs and Events

Shield covers a lot of areas when it comes to monitoring events that happen on a WordPress site. But we typically don’t cover 3rd party plugins.

So, based on the feedback from a number of interested customers, we’ve added the ability for any PHP developer to add custom events to Shield’s Activity Logs.

When might you find that useful?

You could, for example, track WooCommerce orders, or you could be facing a particularly menacing visitor who repeats an undesirable action on your site that's not covered by Shield, and decide to block their IP.

You can do whatever you want with this, though you should always take care when allocating offenses to actions as you may inadvertently block legitimate users.
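
If you would like to experiment with the WooCommerce example above, here is a rough sketch. The woocommerce_new_order action it hooks is a real WooCommerce hook, but the shield_security_log_custom_event() helper it calls is a hypothetical placeholder, not a documented Shield function – consult Shield's developer documentation for the actual API to register and fire custom Activity Log events.

<?php
/**
 * Illustrative sketch only. The WooCommerce hook is real, but the
 * Shield-side call below is a hypothetical placeholder -- replace it
 * with the function or hook documented by Shield for custom events.
 */
add_action( 'woocommerce_new_order', function ( $order_id ) {
    // Hypothetical helper name; Shield's actual custom-event API may differ.
    if ( function_exists( 'shield_security_log_custom_event' ) ) {
        shield_security_log_custom_event(
            'woo_new_order',                       // custom event identifier
            array( 'order_id' => (int) $order_id ) // extra data to record in the log
        );
    }
} );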

#8 All-New Guided Setup Wizard

When first installing a platform like Shield Security for WordPress, it can be a little overwhelming. Shield is a large plugin, with many features, tools and options.

We’ve had a “Welcome Wizard” in Shield for a while, but it was a little rough around the edges. For this release we decided to revamp it and provide a new guided setup wizard, helping newcomers get up-to-speed more quickly.

Anyone can access the Guided Setup from the Super Search Box (search: “Wizard”), or from the Shield > Tools menu.

A Change To Minimum Supported WordPress Version

We try to make Shield Security as backward-compatible as possible, for as long as it makes sense to do so.

However, our code development and testing must reflect this, and the burden of support increases the further back we go in supporting older versions.

Our Telemetry data suggests that there are no WordPress sites below version 4.7 running the Shield plugin. Of course, we can only go on what data has been sent to us. But we have to draw the line somewhere, and with Shield v16, we’re drawing the line at WordPress 4.7.

As more data comes through and time marches on, we'll gradually increase our minimum requirements, so we strongly suggest you keep your WordPress sites and hosting platforms as up-to-date as possible.

Comments, Feedback and Suggestions

A lot of work has gone into this release that will, we hope, improve security for all users by making it much easier to see what's going on and which areas need improvement. The Security Rules Engine is one of our most exciting developments to date, and we can hardly wait to get the first iteration into your hands and start further development on it.

As always, we welcome your thoughts and feedback so please do feel free to leave your comments and suggestions below.

Source :
https://getshieldsecurity.com/blog/wp-shield-security-pro-release-16-1/

ShieldPRO 16.1.0 Upgrade Guide

ShieldPRO 16.1.0 for WordPress is a major release packed with many changes and improvements, including UI enhancements, integration with CrowdSec, the ability to permanently block IPs, and much more.

This guide outlines what has been added, removed, changed, or improved, and what fixes we've made.

Firstly, we're going to explain which major changes have been made and which options you may need to review.

New Added Features

For the 16.1.0 release, we added the following:

  • CrowdSec Integration

With the CrowdSec integration, your WordPress sites will have access to intelligence about malicious IP addresses before they've ever accessed your website. (This intelligence will have already been gathered for you by other websites.)

This reduces the "window" available to malicious bots to zero.

The settings can be found under the IP Blocking section:

There are two options available:

  1. CrowdSec IP Blocking – how Shield should block requests from IP addresses found on CrowdSec’s list of malicious IP addresses.
  2. CrowdSec Enroll ID – link site to your CrowdSec console by providing your Enroll ID.

  • Custom Activity Log Events

There is now the option to log custom events to Shield's Activity Log. It's impossible for Shield to log every possible event for every plugin and scenario, so you can now add logging for all your desired site events. This is an advanced option and will require professional software development experience to implement.

  • Logging: App Password Creation

Shield now captures creation of new Application Passwords in the Activity Log.

  • Shield’s Super Search Box

This search box will look for almost anything you need and provide you with links directly to the item in question. 

Currently you can search for:

  • Specific configuration options
  • Tools such as Import/Export, Admin Notes, Debug
  • Logs such as Activity Logs and Traffic Logs
  • IP Rules
  • IP addresses – it’ll open a popup in-situ to review the data Shield holds on any particular IP
  • External links such as Shield’s homepage, Facebook page, helpdesk, CrowdSec etc.

The Super Search Box is accessible and visible from every page inside the plugin.

  • Shield Beta Access Option

Enabling the Shield Beta Access option allows you to gain access to beta versions of the Shield Security plugin.

  • All-New Guided Setup Wizard

For this release, we revamped the old welcome wizard and provided a new guided setup wizard, helping newcomers get up to speed more quickly.

You can access the Guided Setup from the Super Search Box (search: “Wizard”), or from the Shield > Tools menu.

For whitelisted IP addresses, there are no restrictions whatsoever for visitors using that IP – none of the settings apply to it, including the hidden login URL feature.

We also added a special notice that is displayed to users with a whitelisted IP.

Changes

Change 1: Improved UI

We’ve done some work to reduce full page reloads so that you can stay “where you are” while viewing the contents of another page.

In particular we’re referring to “Options/Configuration” pages. Links to such areas will now open in an overlay, letting you keep your current page active while you review and adjust settings.

Also, the IP analysis dialog now opens in an overlay.

Another UI enhancement is a new top title bar across every page of the plugin, letting you see more clearly where you are, along with important links to help and other resources.

Change 2: Completely New IP Rules and Blocking Engine

This release, spurred on by our CrowdSec integration, sees the much-needed overhaul of our IP management system. It’s smarter and more versatile and altogether much faster.

We also made some UI enhancements on the Management & Analysis section:

  • The “Manage IP” section has been renamed to “IP Rules”
  • The IP block and bypass lists have been merged into a single new table
  • The IP Analysis dialog is now separate and can be loaded for each IP directly from within IP Rules, the Activity Log, and the Traffic Log
  • A “Reset” option has been added to the IP analysis dialog
  • Manually adding an IP to the block or bypass list has been merged into a single “Add New IP” option
  • Manually or automatically blocked IPs can now be permanently blocked

    You can do this when manually adding an IP to the block list, or directly from within the IP analysis dialog.

Change 3: Improved Build Custom Charts option

Shield events are now displayed in the form of a list, making it much easier to select the events you want.



Improvements

For the 16.1.0 release, we’ve made the following improvements:

  • Improved and Faster Scan Results Display

    We’ve redesigned how the scan results are built so it’s faster and lighter on your browser and on the server itself.

    Eliminated errors and slow processing when displaying scan results pages for large datasets. Shield now uses highly optimised queries to request only the records required to display the current table page.
  • Improved Human SPAM Detection
    We’ve added some enhancements to how Shield detects repeated human spam comments.

    We also squashed a bug where Shield wasn’t properly honouring the “disallowed keywords” option built into WordPress itself.
  • A change to minimum supported WordPress version: 4.7
    Based on Shield telemetry data, we’re pushing our minimum supported WordPress version up to 4.7. We’ll continue to push this upwards as usage data suggests it makes sense to do so.
  • Protection Against Unauthorised Deactivation
    The Security Admin feature that protects against unauthorised deactivation has been further strengthened with offenses.
  • Shield Navigation Bar
    Shield now offers a much better navbar on the dashboard, with built-in search, helpdesk links, and updates.

Removed Options

For the 16.1.0 release, we removed the following options:

  • Auto Block Expiration – the “1 minute” option has been removed (under the Config > IP Blocking section).
  • Leading Schema Firewall Rule
    This rule flagged too many false positives for members.

Fixes

For the 16.1.0 release, we’ve made the following fixes:

  • Mitigated a fatal error caused by the latest wpForo plugin passing NULL to locale filters.
  • Fixed a bug when specifying a particular list while adding or removing an IP address using WP-CLI.
  • Shield no longer attempts to solve the issue of invalid ‘from’ email addresses on a WordPress site.

For more information on Shield 16.1.0 release, read this blog article here.

Source :
https://help.getshieldsecurity.com/article/476-shieldpro-1610-upgrade-guide

How to set up the Surveillance Station of QNAP NAS?

Introduction

To satisfy the increasing demand for embedded network surveillance solutions on NAS, QNAP unveiled a value-added application ‘Surveillance Station’ on its All-in-One Turbo NAS Series. The Surveillance Station enables users to configure and connect many IP cameras at the same time and manage functions including live audio & video monitoring, recording, and playback. Installation and configuration can be easily carried out remotely in a web browser in a few steps. Various recording modes are provided: continuous recording, motion-detection recording, and scheduled recording. Users can flexibly define the recording settings according to their security plans.
The Surveillance Station supports a large number of IP camera brands. You can find a list of supported cameras at: https://www.qnap.com/compatibility.

Contents

  • Plan your home/office network topology
  • Set up the IP Cameras
  • Configure the Surveillance Station on the QNAP NAS
  • Configure Alarm Recording on the QNAP NAS
  • Play Video Files from the Surveillance Station

Plan Your Home/Office Network Topology

Write down your plan of the home/office network before starting to set up the surveillance system. Consider the following when doing so:

  • The IP address of the NAS
  • The IP address of the cameras
  • The IP address of your router and the wireless SSID

Your computer, the NAS, and the IP cameras should be connected to the same router on the LAN. Assign fixed IP addresses to the NAS and the IP cameras.
For example:

  • The LAN IP of the router: 192.168.1.100
  • Camera 1 IP: 192.168.1.10 (fixed IP)
  • Camera 2 IP: 192.168.1.20 (fixed IP)
  • NAS IP: 192.168.1.60 (fixed IP)

Set up the IP Cameras

Configure the IP address for both IP cameras using the following steps.
You can download a camera IP Finder from the official website of your camera’s vendor.
The name of the IP finder may differ between vendors; it is a utility that helps you search for the IP address of the camera.
CONNECT the IP camera to your home/office network with a network cable and run the IP Finder. Set the IP address of the cameras so that they are on the same LAN as the computer. You will then be able to log in to the configuration page of the camera with a web browser. Enter the IP address of the first camera as 192.168.1.10. The default gateway should be set as the LAN IP of the router (192.168.1.100 in our example).

Note: The default IP and ID of administrator may differ based on what camera model is used.

ENTER the web configuration page of the IP camera.
You will then be able to view the monitoring image.

GO to ‘Network/Network’ and check the IP settings of the camera.

NEXT, if you are using a wireless IP camera, please go to “Network/Wireless” and configure the wireless settings of your camera. Please ensure the camera’s settings are complete.

Repeat the above steps to set up the second camera.
To summarize, so far you have finished the following settings:

  • Camera 1 IP: 192.168.1.10
  • Camera 2 IP: 192.168.1.20

Note:
If you forget the camera settings, please press the reset button on the back of the camera for 5-10 seconds. The camera will be restored to its default settings. You can then set the IP address and log in to the camera’s configuration page using the default login name and password. The reset function may differ by camera brand, so please refer to the camera’s user manual in advance.

Configure the Surveillance Station on the QNAP NAS

Go to “Control Panel” > “System Settings” >”Network” > “TCP/IP” and press the “Edit” button to specify a fixed IP to the NAS: 192.168.1.60. The default gateway should be the same as the LAN IP of your router, which is 192.168.1.100 in our example.

Install Surveillance Station

  • Auto installation: Go to “App Center” > “Surveillance” > “Surveillance Station” and click “Add to QTS” to start installation.
  • Manual installation: Download the Surveillance Station QPKG from the App Center on the QNAP website. Then install it by clicking the “Install Manually” button and selecting the location of the downloaded QPKG file.

Please note: To ensure proper operation of the Surveillance Station, we recommend rebooting the Turbo NAS after the installation is completed.

In the Surveillance Station, please go to “Settings”, select “Camera 1”, and then click the add button to enter the camera configuration, e.g. name, model, IP address, recording settings, and recording schedule.

In our demonstration we will assign the following IPs to each camera:
Camera 1 IP: 192.168.1.10
Camera 2 IP: 192.168.1.20

Note:
Before applying the settings, you may click “Test” on the right to ensure the connection to the IP camera is successful.

You can enable or change the recording options of the camera on the next page. Click “Next” to move to the next page.

On this page, you will see the “Schedule Settings.” In the table, 0~23 represents the time period. For example, 0 means 00:00~01:00, 1 means 01:00~02:00. You can set a continuous recording in any period that you want.

You will then see the “Confirm Settings” page.

After you have added the network cameras to the NAS, go to the “Monitor” page. The first time you access this page in a browser, you have to install the ActiveX control (QMon.cab) in order to view the images from Camera 1 and Camera 2.

Note:
You can use the Surveillance Station in Chrome, Firefox or IE. The browser will prompt you to install the “ActiveX control” (QMon.cab) before using Monitor or Playback functions. Please follow the on-screen instructions to complete the installation.

Note:
When you click on the monitoring screen of a camera, the frame will become orange, indicating the selected camera.
In Surveillance Station 5, there is a new feature called “Instant Playback”. You can click the floating button to play back recordings and find recent events.

Configure Alarm Recording on the QNAP NAS

The Surveillance Station supports alarm recording by schedule. To use this function, go to “Camera Settings” > “Alarm Settings” in the Surveillance Station. You can select ‘Traditional Mode’ for basic configuration or ‘Advanced Mode’ to define advanced alarm events.

  • Traditional Mode:
    You may define the criteria for enabling alarm recording, then click ‘Apply’ to save the changes.
  • Advanced Mode:
    You may select an event on the left side and add an action on the right side by clicking “Add”.

Then you may choose the action type you need for this event.

For example, the event “Motion Detection” has a corresponding action, “Recording”.

Play Video Files from the Surveillance Station

Enter the playback page and follow the steps below to play the video files on the remote Surveillance Station.

1. Drag and drop camera(s) from the server/camera tree to the respective playback window(s) to select the channel(s) for playback.

2. Select the playback date. You can examine each channel to know the time range when the files were recorded for each IP camera. The blue cells indicate regular recording files and the red cells indicate alarm recording files. If a cell is blank, it means no files were recorded at that time.

3. Click the play button to start the playback. You can control the speed and playback direction by dragging the button right or left on the shuttle bar.

4. Specify the time to play back the recording files from that moment onward. You can view the preview image on the timeline bar to find the moment you want to play.

5. Click the corresponding button to control all the playback windows at once. When this function is enabled, the playback options (play, pause, stop, previous/next frame, previous/next file, speed adjustment) will be applied to all the playback windows.

Source :
https://www.qnap.com/en/how-to/tutorial/article/how-to-set-up-the-surveillance-station-of-qnap-nas