Twitch Suffers Massive 125GB Data and Source Code Leak Due to Server Misconfiguration

Interactive livestreaming platform Twitch acknowledged a “breach” after an anonymous poster on the 4chan messaging board leaked its source code, an unreleased Steam competitor from Amazon Game Studios, details of creator payouts, proprietary software development kits, and other internal tools.

The Amazon-owned service said it’s “working with urgency to understand the extent of this,” adding the data was exposed “due to an error in a Twitch server configuration change that was subsequently accessed by a malicious third party.”

“At this time, we have no indication that login credentials have been exposed,” Twitch noted in a post published late Wednesday. “Additionally, full credit card numbers are not stored by Twitch, so full credit card numbers were not exposed.”

The forum user claimed the hack is designed to “foster more disruption and competition in the online video streaming space” because “their community is a disgusting toxic cesspool.” The development was first reported by Video Games Chronicle, which said Twitch was internally “aware” of the leak on October 4. The leak has also been labeled as “part one,” suggesting that there could be more on the way.

The massive trove, which comes in the form of a 125GB Torrent, allegedly includes —

  • The entirety of Twitch’s source code with commit history “going back to its early beginnings”
  • Proprietary software development kits and internal AWS services used by Twitch
  • An unreleased Steam competitor, codenamed Vapor, from Amazon Game Studios
  • Information on other Twitch properties like IGDB and CurseForge
  • Creator revenue reports from 2019 to 2021
  • Mobile, desktop, and console Twitch clients, and
  • A cache of internal “red teaming” tools designed to improve security

The leak of internal source code poses a serious security risk because it allows interested parties to search the code for vulnerabilities. While the data doesn’t include password-related details, users are advised to change their credentials as a precautionary measure and to turn on two-factor authentication for additional security.

Source :
https://thehackernews.com/2021/10/twitch-suffers-massive-125gb-data-and.html

CISA, NIST Says Use Cybersecurity Control Systems

In July 2021, US President Joe Biden signed a memorandum on improving the cybersecurity of US critical infrastructure control systems. It establishes a voluntary initiative encouraging collaboration between the federal government and the critical infrastructure community to improve control systems cybersecurity.

In line with this memorandum, the Department of Homeland Security (DHS) is instructed to lead the development of preliminary cross-sector control system cybersecurity performance goals and sector-specific performance goals within one year of the memorandum.

The Cybersecurity and Infrastructure Security Agency (CISA), together with the National Institute of Standards and Technology (NIST), performed a primary crosswalk of available control system resources and recommended practices produced by the US government and the private sector.

The crosswalk focused on various cybersecurity documents related to best practices and risk mitigation. These documents include CISA Cyber Essentials; NISTIR 8183, Rev. 1, “Cybersecurity Framework Version 1.1 Manufacturing Profile”; and CISA Pipeline Cyber Risk Mitigation.

Upon review, CISA and NIST identified nine categories of recommended cybersecurity practices and used these categories as the foundation for preliminary control systems cybersecurity performance goals.

The nine categories are:

  • Risk Management and Cybersecurity Governance, which aims to “identify and document cybersecurity control systems using established recommended practices”.
  • Architecture and Design, which has the objective of integrating cybersecurity and resilience into system architecture in line with established best practices.
  • Configuration and Change Management. This category aims to document and control hardware and software inventory, system settings, configurations, and network traffic flows during the control system hardware and software lifecycles.
  • Physical Security, which aims to limit physical access to systems, facilities, equipment, and other infrastructure assets to authorized users.
  • System and Data Integrity, Availability, and Confidentiality. This category aims to protect the control system and its data against corruption, compromise, or loss.
  • Continuous Monitoring and Vulnerability Management, which aims to implement and perform continuous monitoring of control systems cybersecurity threats and vulnerabilities.
  • Training and Awareness aims to train personnel to have the fundamental knowledge and skills needed to determine control systems cybersecurity risks.
  • Incident Response and Recovery. This category aims to implement and test control system response and recovery plans with clearly defined roles and responsibilities.
  • Supply Chain Risk Management, which aims to identify risks associated with control system hardware, software, and managed services.

CISA explained that the nine categories’ goals outlined above are “foundational activities for effective risk management”, representing high-level cybersecurity best practices. The agency also said that these are not an exhaustive guide to all facets of an effective cybersecurity program.

As cyber threats and risks become more and more sophisticated and difficult to mitigate, it is important for critical infrastructure owners to future-proof their enterprises, minimizing operational risks and disturbances.

Apart from practices identified by CISA and NIST, owners and users should understand various practical countermeasures that should be considered during their planning and design phases.

Check out our “Best Practices for Securing Smart Factories: Three Steps to Keep Operations Running” to learn more about security issues, defense strategies, and the benefit of efficiently securing factories with minimal TCO.

Source :
https://www.trendmicro.com/en_us/research/21/j/cisa-nist-says-use-cybersecurity-control-systems.html

What happened on the Internet during the Facebook outage

It’s been a few days now since Facebook, Instagram, and WhatsApp went AWOL and experienced one of the most extended and rough downtime periods in their existence.

When that happened, we reported our bird’s-eye view of the event and posted the blog Understanding How Facebook Disappeared from the Internet where we tried to explain what we saw and how DNS and BGP, two of the technologies at the center of the outage, played a role in the event.

In the meantime, more information has surfaced, and Facebook has published a blog post giving more details of what happened internally.

As we said before, these events are a gentle reminder that the Internet is a vast network of networks, and we, as industry players and end-users, are part of it and should work together.

In the aftermath of an event of this size, we don’t waste much time debating how peers handled the situation. We do, however, ask ourselves the more important questions: “How did this affect us?” and “What if this had happened to us?” Asking and answering these questions whenever something like this happens is a great and healthy exercise that helps us improve our own resilience.

Today, we’re going to show you how the Facebook and affiliate sites downtime affected us, and what we can see in our data.

1.1.1.1

1.1.1.1 is a fast and privacy-centric public DNS resolver operated by Cloudflare, used by millions of users, browsers, and devices worldwide. Let’s look at our telemetry and see what we find.

First, the obvious. If we look at the response rate, there was a massive spike in the number of SERVFAIL codes. SERVFAILs can happen for several reasons; we have an excellent blog called Unwrap the SERVFAIL that you should read if you’re curious.

In this case, we started serving SERVFAIL responses to all facebook.com and whatsapp.com DNS queries because our resolver couldn’t access the upstream Facebook authoritative servers. That was about 60 times more than the average on a typical day.
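
For readers who want to reproduce this kind of check, here is a minimal sketch using the dnspython library (an assumption of this example, not something the post prescribes) that sends a query straight to 1.1.1.1 and prints the DNS response code:

```python
import dns.message
import dns.query
import dns.rcode

# Build a plain UDP DNS query for facebook.com and send it to 1.1.1.1.
query = dns.message.make_query("facebook.com", "A")
response = dns.query.udp(query, "1.1.1.1", timeout=3)

# During the outage this would have printed SERVFAIL, because the resolver
# could not reach Facebook's authoritative name servers; normally it prints NOERROR.
print("Response code:", dns.rcode.to_text(response.rcode()))
```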

If we look at all the queries, not specific to Facebook or WhatsApp domains, and we split them by IPv4 and IPv6 clients, we can see that our load increased too.

As explained before, this is due to a snowball effect associated with applications and users retrying after the errors and generating even more traffic. In this case, 1.1.1.1 had to handle more than the expected rate for A and AAAA queries.

Here’s another fun one.

DNS vs. DoT and DoH. Typically, DNS queries and responses are sent in plaintext over UDP (or TCP sometimes), and that’s been the case for decades now. Naturally, this poses security and privacy risks to end-users as it allows in-transit attacks or traffic snooping.

With DNS over TLS (DoT) and DNS over HTTPS (DoH), clients can talk DNS using well-known, well-supported encryption and authentication protocols.

Our learning center has a good article on “DNS over TLS vs. DNS over HTTPS” that you can read. Browsers like Chrome, Firefox, and Edge have supported DoH for some time now, WARP uses DoH too, and you can even configure your operating system to use the new protocols.
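
As an illustration of how simple a DoH lookup is, here is a small sketch against Cloudflare’s public DoH JSON endpoint (the endpoint and JSON fields follow Cloudflare’s documented format; the requests library is an assumption of this example):

```python
import requests

# Query Cloudflare's public DoH JSON endpoint for facebook.com.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "facebook.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
data = resp.json()

# Status 0 means NOERROR; Status 2 is SERVFAIL, which is what 1.1.1.1 returned
# for Facebook domains while their authoritative servers were unreachable.
print("DNS status:", data["Status"])
for answer in data.get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```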

When Facebook went offline, we saw the number of DoT and DoH SERVFAIL responses grow to more than 300x the average rate.

So, we got hammered with lots of requests and errors, causing traffic spikes to our 1.1.1.1 resolver and causing an unexpected load in the edge network and systems. How did we perform during this stressful period?

Quite well. 1.1.1.1 kept its cool and continued serving the vast majority of requests around the famous 10ms mark. An insignificant fraction of p95 and p99 percentiles saw increased response times, probably due to timeouts trying to reach Facebook’s nameservers.

Another interesting perspective is the distribution of the ratio between SERVFAIL and good DNS answers, by country. In theory, the higher this ratio is, the more the country uses Facebook. Here’s the map with the countries that suffered the most:

Here’s the top twelve country list, ordered by those that apparently use Facebook, WhatsApp and Instagram the most:

Country                     SERVFAIL/Good Answers ratio
Turkey                      7.34
Grenada                     4.84
Congo                       4.44
Lesotho                     3.94
Nicaragua                   3.57
South Sudan                 3.47
Syrian Arab Republic        3.41
Serbia                      3.25
Turkmenistan                3.23
United Arab Emirates        3.17
Togo                        3.14
French Guiana               3.00

Impact on other sites

When Facebook, Instagram, and WhatsApp aren’t around, the world turns to other places to look for information on what’s going on, other forms of entertainment or other applications to communicate with their friends and family. Our data shows us those shifts. While Facebook was going down, other services and platforms were going up.

To get an idea of the changing traffic patterns, we looked at DNS queries as an indicator of increased traffic to specific sites or types of sites.

Here are a few examples.

Other social media platforms saw a slight increase in use, compared to normal.

Traffic to messaging platforms like Telegram, Signal, Discord and Slack got a little push too.

Nothing like a little gaming time when Instagram is down, we guess, when looking at traffic to sites like Steam, Xbox, Minecraft and others.

And yes, people want to know what’s going on and fall back on news sites like CNN, New York Times, The Guardian, Wall Street Journal, Washington Post, Huffington Post, BBC, and others:

Attacks

One could speculate that the Internet was under attack from malicious hackers. Our Firewall doesn’t agree; nothing out of the ordinary stands out.

Network Error Logs

Network Error Logging, NEL for short, is an experimental technology supported in Chrome. A website can issue a Report-To header and ask the browser to send reports about network problems, like bad requests or DNS issues, to a specific endpoint.

Cloudflare uses NEL data to quickly help triage end-user connectivity issues when end-users reach our network. You can learn more about this feature in our help center.

If Facebook is down and their DNS isn’t responding, Chrome will start reporting NEL events every time one of the pages in our zones fails to load Facebook comments, posts, ads, or authentication buttons. This chart shows it clearly.
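
For context, here is a rough sketch of the kind of Report-To and NEL response headers an origin can emit to opt its pages into Network Error Logging. The values and the reporting endpoint below are illustrative only; Cloudflare sets its own headers when the feature is enabled on a zone:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Illustrative header values; "reports.example.com" is a hypothetical collector.
REPORT_TO = json.dumps({
    "group": "network-errors",
    "max_age": 604800,
    "endpoints": [{"url": "https://reports.example.com/nel"}],
})
NEL = json.dumps({"report_to": "network-errors", "max_age": 604800})

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Report-To", REPORT_TO)  # where the browser should send reports
        self.send_header("NEL", NEL)              # enables Network Error Logging for this origin
        self.end_headers()
        self.wfile.write(b"<html><body>hello</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```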

WARP

Cloudflare announced WARP in 2019, and called it “A VPN for People Who Don’t Know What V.P.N. Stands For” and offered it for free to its customers. Today WARP is used by millions of people worldwide to securely and privately access the Internet on their desktop and mobile devices. Here’s what we saw during the outage by looking at traffic volume between WARP and Facebook’s network:

You can see how the steep drop in Facebook ASN traffic coincides with the start of the incident and how it compares to the same period the day before.

Our own traffic

People tend to think of Facebook as a place to visit. We log in, we access Facebook, we post. It turns out that Facebook likes to visit us too, quite a lot. Like Google and other platforms, Facebook uses an army of crawlers to constantly check websites for data and updates. Those robots gather information about website content, such as titles, descriptions, thumbnail images, and metadata. You can learn more about this on the “The Facebook Crawler” page and the Open Graph website.

Here’s what we see when traffic is coming from the Facebook ASN, supposedly from crawlers, to our CDN sites:

The robots went silent.

What about the traffic coming to our CDN sites from Facebook User-Agents? The gap is indisputable.

We see about 30% of a typical request rate hitting us. But it’s not zero; why is that?

We’ll let you in on a little secret: never trust User-Agent information; it’s broken. User-Agent spoofing is everywhere. Browsers, apps, and other clients deliberately change the User-Agent string when they fetch pages from the Internet to hide, gain access to certain features, or bypass paywalls (because paywalled sites want sites like Facebook to index their content, so that they then get more traffic from links).
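
As a trivial illustration of why the header proves nothing, here is a sketch in which an ordinary script claims to be Facebook’s crawler (the URL is a placeholder and the requests library is an assumption of this example):

```python
import requests

# Any client can put any string in the User-Agent header.
# "facebookexternalhit" is the string Facebook's crawler normally sends;
# here an ordinary script claims it, which is why UA-based trust is unreliable.
resp = requests.get(
    "https://example.com/",  # placeholder URL
    headers={"User-Agent": "facebookexternalhit/1.1"},
    timeout=5,
)
print(resp.status_code)
```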

Fortunately, there are newer, and privacy-centric standards emerging like User-Agent Client Hints.

Core Web Vitals

Core Web Vitals are the subset of Web Vitals, an initiative by Google to provide a unified interface to measure real-world quality signals when a user visits a web page. Such signals include Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).

We use Core Web Vitals with our privacy-centric Web Analytics product and collect anonymized data on how end-users experience the websites that enable this feature.

One of the metrics we can calculate using these signals is the page load time. Our theory is that if a page includes scripts coming from external sites (for example, Facebook “like” buttons, comments, ads), and they are unreachable, its total load time gets affected.

We used a list of about 400 domains that we know embed Facebook scripts in their pages and looked at the data.

Now let’s look at the Largest Contentful Paint. LCP marks the point in the page load timeline when the page’s main content has likely loaded. The faster the LCP is, the better the end-user experience.

Again, the page load experience got visibly degraded.

The outcome seems clear. Sites that use Facebook scripts in their pages took 1.5x longer to load during the outage, with some of them taking more than 2x the usual time. Facebook’s outage dragged down the performance of some other sites.

Conclusion

When Facebook, Instagram, and WhatsApp went down, the Web felt it. Some websites got slower or lost traffic, other services and platforms got unexpected load, and people lost the ability to communicate or do business normally.

Source :
https://blog.cloudflare.com/during-the-facebook-outage/

Top 10 Dangerous DNS Attacks Types and The Prevention Measures

As the title suggests, today we are going to discuss the top 10 DNS attacks and how to mitigate them. The Domain Name System (DNS) remains under constant attack, and there seems to be no end in sight as the threats grow increasingly sophisticated.

DNS primarily uses UDP and, in some cases, TCP as well. UDP is connectionless, which makes it easy to spoof.

That makes the DNS protocol remarkably popular as a DDoS tool. Often described as the internet’s phonebook, DNS is part of the global internet infrastructure that translates between the names people know and the numbers computers need to reach a website or send an email.

DNS has long been a target for attackers looking to steal corporate and private data, and the warnings of the past year indicate a worsening of the situation.

As per IDC’s research, the average cost associated with a DNS attack rose by 49% compared with a year earlier. In the U.S., the average cost of a DNS attack tops out at more than $1.27 million.

Approximately half of the respondents (48%) report losing more than $500,000 to a DNS attack, and about 10% say they lost more than $5 million on each breach. In addition, the majority of U.S. companies say that it took more than one day to resolve a DNS attack.

According to the same research, both in-house and cloud applications were affected, with threats causing in-house application downtime roughly doubling, making it the most widespread form of damage IDC recorded.

As a result, DNS attacks are moving away from pure brute force toward more sophisticated attacks operating from the internal network. These sophisticated attacks will push organizations to use intelligent mitigation tools so they can cope with insider threats.

Below, we cover the top 10 DNS attacks and the measures that mitigate them, so that organizations can recognize these attacks and respond quickly.

Common DNS Attack Types:

  1. DNS Cache Poisoning Attack
  2. Distributed Reflection Denial of Service (DRDoS)
  3. DNS Hijacking
  4. Phantom Domain Attack
  5. TCP SYN Floods
  6. Random Subdomain Attack
  7. DNS Tunneling
  8. DNS Flood Attack
  9. Domain Hijacking
  10. Botnet-based Attacks

DNS Cache Poisoning Attack

First up is cache poisoning, one of the most frequent DNS attacks. Its main aim is to send web users to scam websites. For example, a user enters gmail.com in the web browser to check their mailbox.

If the DNS cache has been poisoned, the page that loads is not gmail.com but a scam page chosen by the criminal, designed, for example, to harvest mailbox credentials. Users who typed the correct domain name have no indication that the website they are visiting is not the real one but a scam.

Cache poisoning

This creates an excellent opportunity for cybercriminals to use phishing techniques to steal information, whether login credentials or credit card details, from unsuspecting victims. The attack can be devastating, depending on several factors, including the attacker’s intent and the impact of the DNS poisoning.

DNS Attack Mitigation – Cache poisoning

There are several ways to prevent or remediate this attack. To start, IT teams should configure DNS servers to rely as little as possible on trust relationships with other DNS servers. Doing so makes it more difficult for attackers to use their own DNS servers to corrupt their targets’ servers. IT teams should also configure their DNS name servers to:

  • Restrict recursive queries.
  • Store only data related to the requested domain.
  • Restrict query responses to information about the requested domain only.

There are also tools available to help organizations prevent cache poisoning attacks. The best known is DNSSEC (Domain Name System Security Extensions), developed by the Internet Engineering Task Force, which provides reliable authentication of DNS data.
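
As a quick client-side illustration of DNSSEC at work, the sketch below (using the dnspython library, an assumption of this example rather than something the article prescribes) asks a validating resolver such as 1.1.1.1 to indicate, via the AD flag, whether the returned answer passed DNSSEC validation. Note that an unsigned zone simply comes back without the flag, so this is a diagnostic aid rather than proof of tampering:

```python
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]            # a validating public resolver
resolver.flags = dns.flags.RD | dns.flags.AD  # ask the resolver to report validation status

answer = resolver.resolve("cloudflare.com", "A")

# The AD (Authenticated Data) flag is set only when the resolver
# successfully validated the answer with DNSSEC.
print("DNSSEC-validated:", bool(answer.response.flags & dns.flags.AD))
for record in answer:
    print(record.address)
```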

Distributed Reflection Denial of Service (DRDoS)

Distributed reflective denial-of-service (DRDoS) attacks focus on taking down the availability of a target with an overwhelming volume of UDP responses. In these attacks, the attacker sends DNS, NTP, or similar requests with a spoofed source IP, so that a much larger response is delivered to the host that actually sits at the forged address.

DRDoS Attack

UDP is the protocol of choice for this type of attack because it does not establish connection state. By contrast, a spoofed source IP in the SYN packet of a TCP connection would cause immediate termination, because the SYN/ACK would go to the wrong place.

This is what makes reflection possible. When these attacks are carried out at scale, the “distributed” part of the idea becomes clear: many endpoints transmit spoofed UDP requests, generating responses that are all concentrated on a single target.

Once these response packets begin to arrive, the target experiences a loss of availability.

How to Prevent?

Organizations should prepare for DDoS attacks in advance; it is much harder to respond once an attack is already underway.

While DDoS attacks can’t be prevented entirely, steps can be taken to make it harder for an attacker to render a network unresponsive. The following steps help scatter organizational assets so that they do not present a single concentrated target to an attacker.

  • First, locate servers in different data centers.
  • Ensure that your data centers are located on different networks.
  • Make sure that your data centers have multiple paths to the internet.
  • Make sure that the data centers, or the networks the data centers are connected to, have no notable security holes or single points of failure.

For an organization that relies on servers and an internet presence, it is essential to make sure that devices are geographically dispersed and not concentrated in a single data center.

If resources are already geographically dispersed, it is important to verify that each data center has more than one path to the internet and that not all data centers are connected to the same internet provider.

DNS Hijacking

DNS hijacking is a technique in which an attacker redirects users to a rogue DNS (Domain Name System) server. It may be achieved by using malicious software or by unauthorized alteration of a server.

DNS Hijacking

Once the attacker has control of the DNS, they can direct anyone who uses it to a web page that looks identical but carries extra content such as advertisements. They can also direct users to pages carrying malware, or to a third-party search engine.

How to Prevent?

A DNS name server is a highly sensitive piece of infrastructure that requires strong protection measures, because it can be hijacked and used by hackers to mount DDoS attacks on others. Here are some measures to prevent DNS hijacking:

  • Check for resolvers on your network.
  • Strictly restrict access to name servers.
  • Use measures against cache poisoning.
  • Promptly patch known vulnerabilities.
  • Separate the authoritative name server from the resolver.
  • Restrict zone transfers.

Phantom domain attack

Phantom domain attacks are somewhat similar to random subdomain attacks. In this kind of attack, the attacker targets your DNS resolver and forces it to use up resources resolving what we call “phantom” domains, which never respond to queries.

Phantom Domain Attack

The main goal of this attack is to keep the DNS resolver waiting a long time for answers that never come, ultimately leading to failures or degraded DNS performance.

How to Prevent?

To identify phantom domain attacks, analyze your log messages. You can also follow the steps below to mitigate this attack.

  • First, increase the number of recursive clients.
  • Use an appropriate combination of the following parameters to get the best results.
  • Restrict recursive queries per server and per zone.
  • Enable hold-down for non-responsive servers and monitor recursive queries per zone.

When you enable any of these options, the default values are set at a level appropriate for overall operations. Keep the default values when using these settings, and make sure you understand the consequences before replacing them.

TCP SYN Floods

A SYN flood is a common form of denial-of-service (DDoS) attack that can target any internet-facing system implementing Transmission Control Protocol (TCP) services.

A SYN flood is a type of TCP state-exhaustion attack that attempts to consume the connection state tables present in common infrastructure components such as load balancers, firewalls, Intrusion Prevention Systems (IPS), and the application servers themselves.

TCP SYN Flooding Attack

This type of attack can bring down even high-capacity devices designed to manage millions of connections. A TCP SYN flood occurs when the attacker floods the system with SYN requests in order to overwhelm the target and make it incapable of responding to new, legitimate connection requests.

It drives all of the target server’s communication ports into a half-open state.

How to Prevent?

Firewalls and IPS devices, while important to network security, are not sufficient to protect a network from complex DDoS attacks.

Today’s more sophisticated attack methodologies demand a multi-faceted approach that lets defenders look at both internet infrastructure and network availability.

Here are some capabilities you can count on for stronger DDoS protection and faster mitigation of TCP SYN flood attacks:

  • Support for both inline and out-of-band deployment, to ensure that there is no single point of failure on the network.
  • Extensive network visibility, with the capacity to see and analyze traffic from different parts of the network.
  • Multiple sources of threat intelligence, including statistical anomaly detection, customizable threshold alerts, and fingerprints of known threats, to ensure fast and reliable detection.

Finally, scalability to handle attacks of all sizes, from the low end to the high end.

Random Subdomain Attack

This is not the most prevalent type of DNS attack, but it occurs from time to time on many networks. Random subdomain attacks can often be classified as DoS attacks, as they share the same goal as a simple DoS.

In this case, attackers send a large number of DNS queries against a healthy and active domain name. The queries do not target the primary domain name itself; instead, they hammer a large number of non-existent subdomains.

Random Subdomain Attack

The main goal of this attack is to create a DoS that overwhelms the authoritative DNS server that serves the primary domain name, ultimately disrupting all DNS record lookups.

It’s an attack that’s hard to identify, as the queries come from infected users who don’t even realize they’re sending them, from what are otherwise legitimate computers.

How to Prevent?

Here is a simple approach to mitigating random subdomain attacks in about 30 minutes:

  • First, learn the techniques for mitigating attacks that generate extreme traffic on resolvers and on the web resources associated with the victim names being taken down.
  • Next, look into modern capabilities such as Response Rate Limiting for protecting the DNS servers that draw these attacks.

DNS tunneling

This is a cyberattack technique used to carry encoded data from other applications inside DNS queries and responses.

DNS Tunneling

This technique was not originally created to attack hosts but to bypass network controls; today, however, it is mostly used to carry out remote attacks.

To implement DNS tunneling, attackers need access to a compromised system, as well as access to an internal DNS server, a domain name, and an authoritative DNS server.
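
Detection often starts with spotting unusually long or random-looking query names in DNS logs. The sketch below shows one such heuristic; the length and entropy thresholds are illustrative values, not authoritative ones:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the given string."""
    counts = Counter(text)
    total = len(text)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

def looks_like_tunneling(qname: str, max_len: int = 52, max_entropy: float = 4.0) -> bool:
    """Flag query names whose subdomain part is unusually long or random-looking."""
    labels = qname.rstrip(".").split(".")
    subdomain = ".".join(labels[:-2])  # crude: everything left of the registered domain
    if not subdomain:
        return False
    return len(subdomain) > max_len or shannon_entropy(subdomain) > max_entropy

# A base32-style label typical of tunneled payloads vs. a normal hostname.
print(looks_like_tunneling("nbswy3dpeb3w64tmmqqhi2djnzts4y3pnu.t.example.com"))  # True
print(looks_like_tunneling("www.example.com"))                                   # False
```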

How to Prevent?

You can configure the firewall to identify and block DNS tunneling by designing an application rule that uses a protocol object. Three steps help mitigate these types of attacks:

  • Create an access rule.
  • Create a protocol object.
  • Create an application rule.

DNS Flood Attack

This is one of the most basic types of DNS attack: in this distributed denial-of-service (DDoS) attack, the intruder hits your DNS servers directly.

The main goal of this kind of DNS flood is to completely overload your server so that it can no longer serve DNS requests, because the attack affects the resource record lookups for all the hosted DNS zones.

DNS Flood Attack

This kind of attack is easy to mitigate when the source is a single IP. It can get complicated, however, when it becomes a DDoS in which hundreds or thousands of hosts are involved.

Many queries will be immediately identified as malicious, but many legitimate-looking requests are also made to mislead defense mechanisms, which can make mitigation difficult at times.

How to Prevent?

The Domain Name System (DNS) has become a target of distributed denial-of-service (DDoS) attacks. When a DNS server is under a DDoS flood attack, all the domain data under that DNS becomes unreachable, ultimately making the corresponding domain names unavailable.

For this type of attack, one proposed method combines periodic refreshing of stale content with a list of the most commonly queried domain names across several DNS servers. Simulation results show that such a method can serve more than 70% of total replies from cache during a massive DNS flood attack.

Domain Hijacking

This type of attack involves changes to your DNS servers and domain registrar settings that can direct your traffic away from the original servers to new destinations.

Domain hijacking is usually carried out by exploiting a vulnerability in the domain name registrar’s system, but it can also be performed at the DNS level when attackers take control of your DNS records.

Once attackers have hijacked your domain name, they can use it for malicious activity such as setting up a fake page for payment systems like PayPal, Visa, or online banking. Attackers produce an identical copy of the real website that harvests critical personal information, such as email addresses, usernames, and passwords.

How to Prevent?

You can mitigate domain hijacking by following the few steps mentioned below.

  • Upgrade your DNS infrastructure.
  • Use DNSSEC.
  • Secure access.
  • Use a client (registrar) lock.

Botnet-based Attacks

A botnet is a collection of internet-connected devices that can be used to carry out distributed denial-of-service (DDoS) attacks, steal data, send spam, and give the attacker access to each device and its connection.

Botnet-based Attacks

Botnets are diverse and evolving threats, and these attacks are bound to grow in parallel with our increasing dependence on digital devices, the internet, and emerging technologies.

Botnets can be viewed both as attacks in themselves and as platforms for future attacks. With that in mind, it is worth understanding how a botnet is defined and organized, and how it is created and used.

How to Prevent?

This is one of the most common attack patterns victims face every day. To mitigate these types of attacks, we have listed a few steps below that should be helpful:

  • First, understand your vulnerabilities properly.
  • Next, secure your IoT devices.
  • Separate mitigation myths from facts.
  • Discover, classify and control.

Conclusion

As you can see, DNS service is essential to keeping your company’s websites and online services working day to day. If you’re looking for ways to defend against these kinds of DNS attacks, the measures above should help.

Microsoft Says Its Systems Were Also Breached in Massive SolarWinds Hack

The massive state-sponsored espionage campaign that compromised software maker SolarWinds also targeted Microsoft, as the unfolding investigation into the hacking spree reveals the incident may have been far wider in scope, sophistication, and impact than previously thought.

News of Microsoft’s compromise was first reported by Reuters, which also said the company’s own products were then used to strike other victims by leveraging its cloud offerings, citing people familiar with the matter.

The Windows maker, however, denied the threat actor had infiltrated its production systems to stage further attacks against its customers.

In a statement to The Hacker News via email, the company said —

“Like other SolarWinds customers, we have been actively looking for indicators of this actor and can confirm that we detected malicious SolarWinds binaries in our environment, which we isolated and removed. We have not found evidence of access to production services or customer data. Our investigations, which are ongoing, have found absolutely no indications that our systems were used to attack others.”

Characterizing the hack as “a moment of reckoning,” Microsoft president Brad Smith said it has notified over 40 customers located in Belgium, Canada, Israel, Mexico, Spain, the UAE, the UK, and the US that were singled out by the attackers. 44% of the victims are in the information technology sector, including software firms, IT services, and equipment providers.

CISA Issues New Advisory

The development comes as the US Cybersecurity and Infrastructure Security Agency (CISA) published a fresh advisory, stating the “APT actor [behind the compromises] has demonstrated patience, operational security, and complex tradecraft in these intrusions.”

“This threat poses a grave risk to the Federal Government and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations,” it added.

But in a twist, the agency also said it identified additional initial infection vectors, other than the SolarWinds Orion platform, that have been leveraged by the adversary to mount the attacks, including a previously stolen key to circumvent Duo’s multi-factor authentication (MFA) to access the mailbox of a user via Outlook Web App (OWA) service.

Digital forensics firm Volexity, which tracks the actor under the moniker Dark Halo, said the MFA bypass was one of the three incidents between late 2019 and 2020 aimed at a US-based think tank.

The entire intrusion campaign came to light earlier this week when FireEye disclosed it had detected a breach that also pilfered its Red Team penetration testing tools.

Since then, a number of agencies have been found to be attacked, including the US departments of Treasury, Commerce, Homeland Security, and Energy, the National Nuclear Security Administration (NNSA), and several state department networks.

While many details continue to remain unclear, the revelation about new modes of attack raises more questions about the level of access the attackers were able to gain across government and corporate systems worldwide.

Microsoft, FireEye, and GoDaddy Create a Killswitch

Over the last few days, Microsoft, FireEye, and GoDaddy seized control over one of the main GoDaddy domains — avsvmcloud[.]com — that was used by the hackers to communicate with the compromised systems, reconfiguring it to create a killswitch that would prevent the SUNBURST malware from continuing to operate on victims’ networks.

For its part, SolarWinds has not yet disclosed how exactly the attacker managed to gain extensive access to its systems to be able to insert malware into the company’s legitimate software updates.

Recent evidence, however, points to a compromise of its build and software release system. An estimated 18,000 Orion customers are said to have downloaded the updates containing the back door.

Symantec, which earlier uncovered more than 2,000 systems belonging to 100 customers that received the trojanized SolarWinds Orion updates, has now confirmed the deployment of a separate second-stage payload called Teardrop that’s used to install the Cobalt Strike Beacon against select targets of interest.

The hacks are believed to be the work of APT29, a Russian threat group also known as Cozy Bear, which has been linked to a series of breaches of critical US infrastructure over the past year.

The latest slew of intrusions has also led CISA, the US Federal Bureau of Investigation (FBI), and the Office of the Director of National Intelligence (ODNI) to issue a joint statement, stating the agencies are gathering intelligence in order to attribute, pursue, and disrupt the responsible threat actors.

Calling for stronger steps to hold nation-states accountable for cyberattacks, Smith said the attacks represent “an act of recklessness that created a serious technological vulnerability for the United States and the world.”

“In effect, this is not just an attack on specific targets, but on the trust and reliability of the world’s critical infrastructure in order to advance one nation’s intelligence agency,” he added.

How to Use Password Length to Set Best Password Expiration Policy

One of the many features of an Active Directory Password Policy is the maximum password age. Traditional Active Directory environments have long used password aging as a means to bolster password security. Native password aging in the default Active Directory Password Policy is relatively limited in its configuration settings.

Let’s take a look at a few best practices that have changed in regards to password aging. What controls can you enforce in regards to password aging using the default Active Directory Password Policy? Are there better tools that organizations can use regarding controlling the maximum password age for Active Directory user accounts?

What password aging best practices have changed?

Password aging for Active Directory user accounts has long been a controversial topic in security best practices.

While many organizations still apply more traditional password aging rules, noted security organizations have provided updated password aging guidance. Microsoft has said that they are dropping the password-expiration policies from the Security baseline for Windows 10 v1903 and Windows Server v1903. The National Institute of Standards and Technology (NIST) has long offered a cybersecurity framework and security best practice recommendations.

As updated in SP 800-63B Section 5.1.1.2 of the Digital Identity Guidelines – Authentication and Lifecycle Management, note the following guidance:

“Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.” NIST helps to explain the guidance change in their FAQ page covering the Digital Identity Guidelines.

It states: “Users tend to choose weaker memorized secrets when they know that they will have to change them in the near future. When those changes do occur, they often select a secret that is similar to their old memorized secret by applying a set of common transformations such as increasing a number in the password. This practice provides a false sense of security if any of the previous secrets has been compromised since attackers can apply these same common transformations. But if there is evidence that the memorized secret has been compromised, such as by a breach of the verifier’s hashed password database or observed fraudulent activity, subscribers should be required to change their memorized secrets. However, this event-based change should occur rarely, so that they are less motivated to choose a weak secret with the knowledge that it will only be used for a limited period of time.”
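
To make the “common transformations” point concrete, here is a small illustrative sketch (not part of the NIST guidance) of how cheaply an attacker can derive likely successors from a leaked password:

```python
import re

# Sketch of the "common transformation" NIST describes: if an old password leaked,
# an attacker can cheaply guess likely successors, e.g. by bumping a trailing number.
def likely_successors(old_password: str, tries: int = 5):
    match = re.search(r"(\d+)$", old_password)
    if not match:
        # No trailing digits: try appending a digit instead.
        return [old_password + str(i) for i in range(1, tries + 1)]
    base, number = old_password[: match.start()], int(match.group(1))
    return [f"{base}{number + i}" for i in range(1, tries + 1)]

print(likely_successors("Summer2021!"))  # trailing "!" -> appends digits
print(likely_successors("Passw0rd7"))    # -> Passw0rd8, Passw0rd9, ...
```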

With the new guidance from the above organizations and many others, security experts acknowledge that password aging, at least in itself, is not necessarily a good strategy to prevent the compromise of passwords in the environment.

The recent changes in password aging guidance also apply to traditional Microsoft Active Directory Password Policies.

Active Directory Password Policy Password Aging

The capabilities of the password change policies in default Active Directory Password Policies are limited. You can configure the maximum password age, and that is all. By default, Active Directory includes the following Password Policy settings:

  • Enforce password history
  • Maximum password age
  • Minimum password age
  • Minimum password length
  • Minimum password length audit
  • Password must meet complexity requirements
  • Store passwords using reversible encryption

When you double click the maximum password age, you can configure the maximum number of days a user can use the same password.

When you look at the explanation given for the password age, you will see the following in the Group Policy setting:

“This security setting determines the period of time (in days) that a password can be used before the system requires the user to change it. You can set passwords to expire after a number of days between 1 and 999, or you can specify that passwords never expire by setting the number of days to 0. If the maximum password age is between 1 and 999 days, the minimum password age must be less than the maximum password age. If the maximum password age is set to 0, the minimum password age can be any value between 0 and 998 days.”

Defining the maximum password age with Active Directory Password Policy

With the default policy setting, you really can either turn the policy on or off and then set the number of days before the user password expires. What if you had further options to control the maximum password age and set different values based on the password complexity?

Specops Length Based Password Policy

As mentioned, recent guidance from many cybersecurity best practice authorities recommends against forced password changes and details the reasons for this change. However, many organizations may still leverage password aging as a part of their overall password security strategy to protect against user passwords falling into the wrong hands. What if IT admins had features in addition to what is provided by Active Directory?

Specops Password Policy provides many additional features when compared to the default Active Directory Password Policy settings, including password expiration. One of the options contained in the Specops Password Policy is called “Length based password aging.”

Using this setting, organizations can define different “levels” of password expiration based on the user password’s length. It allows much more granularity in how organizations configure password aging in an Active Directory environment compared to using the default Active Directory Password Policy configuration settings.

It also allows targeting the weakest passwords in the environment and forcing these to age out the quickest. As you will note in the screenshot, the length-based password aging in Specops Password Policy is highly configurable.

It includes the following settings:

  • Number of expiration levels – Enter how many expiration levels there will be. An expiration level determines how many extra days the user will have until their password expires and they are required to change it. This depends on how long the user’s password is. To increase the number of levels, move the slider to the right. The maximum number of expiration levels that can exist is 5.
  • Characters per level – The number of additional characters per level that define the extra days in password expiration
  • Extra days per level – How many additional expiration days each level is worth.
  • Disable expiration for the last level – Passwords that meet the requirements for the final expiration level in the list will not expire.
Configuring the Length based password policy in Specops Password Policy
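
To make the level mechanics above concrete, here is an illustrative sketch of how a length-based expiration could be computed. It is not Specops code, and the numbers are example values rather than product defaults:

```python
# Illustrative parameters (example values only, not product defaults).
BASE_EXPIRATION_DAYS = 90   # expiration for passwords at the minimum length
CHARS_PER_LEVEL = 4         # extra characters needed to reach the next level
EXTRA_DAYS_PER_LEVEL = 30   # extra days of validity granted per level
MIN_LENGTH = 8
MAX_LEVELS = 5
DISABLE_EXPIRATION_AT_LAST_LEVEL = True

def expiration_days(password_length: int):
    """Return the number of days before expiry, or None if the password never expires."""
    level = min((password_length - MIN_LENGTH) // CHARS_PER_LEVEL, MAX_LEVELS - 1)
    level = max(level, 0)
    if DISABLE_EXPIRATION_AT_LAST_LEVEL and level == MAX_LEVELS - 1:
        return None  # long enough passwords never expire
    return BASE_EXPIRATION_DAYS + level * EXTRA_DAYS_PER_LEVEL

for length in (8, 12, 16, 20, 24):
    print(length, expiration_days(length))
```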

Specops allows easily notifying end-users when their password is close to expiring. It will inform end-users at login or by way of sending an email notification. You can configure the days before expiration value for each of these settings.

Configuring password expiration notifications in Specops Password Policy

Organizations define the minimum and maximum password length configurations in the Password Rules area of the Specops Password Policy configuration. If you change the minimum and maximum password length configuration, the password length values in each level of the length-based password expiration will change as well.

Configuring the minimum and maximum password length

Combined with other Specops Password Policy features, such as breached password protection, the length-based password expiration strengthens enterprise password policies for both on-premises and remote workers.

Wrapping Up

Password aging has long been a feature of Active Directory Password Policies in most enterprise environments. However, as attackers get better at compromising passwords, new security best practice guidance is no longer recommending organizations make use of standard password aging.

Specops Password Policy provides compelling password aging capabilities that allow extending password aging features compared to default Active Directory Password Policies. By adding expiration levels, Specops Password Policy allows effectively targeting weak passwords in the environment by quickly aging these passwords out. End-users can use strong passwords much longer.

Organizations can even decide never to expire specific passwords that meet the defined password length. Using Specops Password Policy features, including length-based password expiration, helps to ensure more robust password security in the environment. Click here to learn more.

Microsoft Office 365 adds protection against downgrade and MITM attacks

Microsoft is working on adding SMTP MTA Strict Transport Security (MTA-STS) support to Exchange Online to ensure Office 365 customers’ email communication security and integrity.

Once MTA-STS is available in Office 365 Exchange Online, emails sent by users via Exchange Online will only be delivered over connections with both authentication and encryption, protecting against both interception and tampering.

Protection against MITM and downgrade attacks

MTA-STS strengthens Exchange Online email security and solves multiple SMTP security problems including the lack of support for secure protocols, expired TLS certificates, and certs not issued by trusted third parties or matching server domain names.

Given that mail servers will still deliver emails even though a properly secured TLS connection can’t be created, SMTP connections are exposed to various attacks including downgrade and man-in-the-middle attacks.

“[D]owngrade attacks are possible where the STARTTLS response can be deleted, thus rendering the message in clear text,” Microsoft says. “Man-in-the-middle (MITM) attacks are also possible, whereby the message can be rerouted to an attacker’s server.”

“MTA-STS (RFC8461) helps thwart such attacks by providing a mechanism for setting domain policies that specify whether the receiving domain supports TLS and what to do when TLS can’t be negotiated, for example stop the transmission,” the company explains in a Microsoft 365 roadmap entry.
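
For reference, RFC 8461 has the sending side discover the receiving domain’s policy over HTTPS at a well-known location (signalled by a _mta-sts DNS TXT record). Here is a minimal sketch of that policy fetch; the domain is a placeholder and the requests library is an assumption of this example:

```python
import requests

def fetch_mta_sts_policy(domain: str) -> dict:
    """Fetch and parse the MTA-STS policy file published by a recipient domain."""
    url = f"https://mta-sts.{domain}/.well-known/mta-sts.txt"
    text = requests.get(url, timeout=10).text
    policy, mx_hosts = {}, []
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if key == "mx":
            mx_hosts.append(value)
        else:
            policy[key] = value
    policy["mx"] = mx_hosts
    return policy

# A policy in "enforce" mode tells senders to refuse delivery when TLS
# cannot be negotiated with a listed MX host.
print(fetch_mta_sts_policy("example.com"))  # placeholder domain
```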

“Exchange Online (EXO) outbound mail flow now supports MTA-STS,” Microsoft also adds.

Exchange Online SMTP MTA Strict Transport Security (MTA-STS) support is currently in development and the company is planning to make it generally available during December in all environments, for all Exchange Online users.

DNSSEC and DANE for SMTP also coming

Microsoft is also working on adding support for DNSSEC (Domain Name System Security Extensions) and DANE for SMTP (DNS-based Authentication of Named Entities) to Office 365 Exchange Online.

Support for the two SMTP standards will be added to both inbound and outbound mail, “specific to SMTP traffic between SMTP gateways” according to the Microsoft 365 roadmap and this blog post.

According to Microsoft, after including support for the two SMTP security standards in Exchange Online:

  1. DANE for SMTP will provide a more secure method for email transport. DANE uses the presence of DNS TLSA resource records to securely signal TLS support to ensure sending servers can successfully authenticate legitimate receiving email servers. This makes the secure connection resistant to downgrade and MITM attacks.
  2. DNSSEC works by digitally signing records for DNS lookup using public key cryptography. This ensures that the received DNS records have not been tampered with and are authentic. 
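
The TLSA lookup described in point 1 can be illustrated with a short sketch (using the dnspython library, an assumption of this example; a real DANE-validating sender would also require the records to be protected by DNSSEC):

```python
import dns.resolver

def fetch_tlsa(mx_host: str, port: int = 25):
    """Return any TLSA records a DANE-aware sender would look up for an MX host."""
    qname = f"_{port}._tcp.{mx_host}"
    try:
        answer = dns.resolver.resolve(qname, "TLSA")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []  # no DANE policy published for this host
    return [(r.usage, r.selector, r.mtype, r.cert.hex()) for r in answer]

# "mail.example.com" is a placeholder MX host.
print(fetch_tlsa("mail.example.com"))
```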

Microsoft is planning to release DANE and DNSSEC for SMTP in two phases, with the first one to include only outbound support during December 2020 and with the second to add inbound support by the end of next year.

Source :
https://www.bleepingcomputer.com/news/security/office-365-adds-protection-against-downgrade-and-mitm-attacks/

Introducing Google Workspace

For more than a decade, we’ve been building products to help people transform the way they work.

Now, work itself is transforming in unprecedented ways. For many of us, work is no longer a physical place we go to, and interactions that used to take place in person are being rapidly digitized. Office workers no longer have impromptu discussions at the coffee machine or while walking to meetings together, and instead have turned their homes into workspaces. Frontline workers, from builders on a construction site to delivery specialists keeping critical supply chains moving, are turning to their phones to help get their jobs done. Meanwhile, doctors treating patients and local government agencies engaging with their communities are accelerating how they use technology to deliver their services.

Amidst this transformation, time is more fragmented—split between work and personal responsibilities—and human connections are more difficult than ever to establish and maintain.

These are unique challenges, but they also represent a significant opportunity to help people succeed in this highly distributed and increasingly digitized world. With the right solution in place, people are able to collaborate more easily, spend time on what matters most, and foster human connections, no matter where they are.

That solution is Google Workspace: everything you need to get anything done, now in one place. Google Workspace includes all of the productivity apps you know and love—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, and many more. Whether you’re returning to the office, working from home, on the frontlines with your mobile device, or connecting with customers, Google Workspace is the best way to create, communicate, and collaborate.

With Google Workspace, we’re introducing three major developments:

  • A new, deeply integrated user experience that helps teams collaborate more effectively, frontline workers stay connected, and businesses power new digital customer experiences
  • A new brand identity that reflects our ambitious product vision and the way our products work together
  • New ways to get started with solutions tailored to the unique needs of our broad range of customers

New user experience

At Next OnAir in July, we announced a better home for work. One that thoughtfully brings together core tools for communication and collaboration—like chat, email, voice and video calling, and content management and collaboration—into a single, unified experience to ensure that employees have access to everything they need in one place. This integrated experience is now generally available to all paying customers of Google Workspace.

In the coming months we’ll also be bringing this new experience to consumers to help them do things like set up a neighborhood group, manage a family budget, or plan a celebration using integrated tools like Gmail, Chat, Meet, Docs, and Tasks. 

We’ve already made it easier for business users to connect with customers and partners using guest access features in Chat and Drive, and in the coming weeks, you’ll be able to dynamically create and collaborate on a document with guests in a Chat room. This makes it easy to share content and directly work together with those outside your organization, and ensure that everyone has access and visibility to the same information.

When every minute you spend at work is a minute you could be helping your daughter with her homework, efficiency is everything. We’ve been working hard to add helpful features that make it easier to get your most important work done. For example, in Docs, Sheets, and Slides, you can now preview a linked file without having to open a new tab—which means less time spent moving between apps, and more time getting work done. And beginning today, when you @mention someone in your document, a smart chip will show contact details, including for those outside your organization, provide context and even suggest actions like adding that person to Contacts or reaching out via email, chat or video. 

By connecting you to relevant content and people right in Docs, Sheets and Slides, Google Workspace helps you get more done from where you already are.

We also recognize that reinforcing human connections is even more important when people are working remotely and interacting with their customers digitally. It’s what keeps teams together and helps build trust and loyalty with your customers.

Back in July, we shared that we’re bringing Meet picture-in-picture to Gmail and Chat, so you can actually see and hear the people you’re working with, while you’re collaborating. In the coming months, we’ll be rolling out Meet picture-in-picture to Docs, Sheets, and Slides, too. This is especially powerful for customer interactions where you’re pitching a proposal or walking through a document. Where before, you could only see the file you were presenting, now you’ll get all those valuable nonverbal cues that come with actually seeing someone’s face.

And because we know many companies are implementing a mix of remote and in-person work environments, Meet supports a variety of devices with the best of Google AI built-in. From helpful and inclusive Series One hardware kits that provide immersive sound and effortless scalability, to native integrations with Chromecast and Nest Smart Displays that make your work experience more enjoyable—whether that’s at home or in the office.

New brand identity

10 years ago, when many of our products were first developed, they were created as individual apps that solved distinct challenges—like a better email with Gmail, or a new way for individuals to collaborate together with Docs. Over time, our products have become more integrated, so much so that the lines between our apps have started to disappear.

Our new Google Workspace brand reflects this more connected, helpful, and flexible experience, and our icons will reflect the same. In the coming weeks, you will see new four-color icons for Gmail, Drive, Calendar, Meet, and our collaborative content creation tools like Docs, Sheets, Slides that are part of the same family. They represent our commitment to building integrated communication and collaboration experiences for everyone, all with helpfulness from Google.

We are also bringing Google Workspace to our education and nonprofit customers in the coming months. Education customers can continue to access our tools via G Suite for Education, which includes Classroom, Assignments, Gmail, Calendar, Drive, Docs, Sheets, Slides, and Meet. G Suite for Nonprofits will continue to be available to eligible organizations through the Google for Nonprofits program.

New ways to get started

Simplicity, helpfulness, flexibility—these guiding principles apply both to the way people experience our products and to the way we do business. All of our customers share a need for transformative solutions—whether to power remote work, support frontline workers, create immersive digital experiences for their own customers, or all of the above—but their storage, management, and security and compliance needs often vary greatly. 

In order to provide more choice and help customers get the most out of Google Workspace, we are evolving our editions to provide more tailored offerings. Our new editions for smaller businesses are aimed at those often looking to make fast, self-serviced purchases. Our editions for larger enterprises are designed to help organizations that have more complex implementation needs and often require technical assistance over the course of a longer buying and deployment cycle. 

You can learn more about these new offerings on our pricing page. And existing customers can read more here.

Empowering our customers and partners

You, our customers and our users, are our inspiration as we work together to navigate the change ahead. This is an incredibly challenging time, but we believe it’s also the beginning of a new approach to working together. One that is more productive, collaborative, and impactful.

Google Workspace embodies our vision for a future where work is more flexible, time is more precious, and enabling stronger human connections becomes even more important. It’s a vision we’ve been building toward for more than a decade, and one we’re excited to bring to life together with you.

Source :
https://cloud.google.com/blog/products/workspace/introducing-google-workspace

Network-layer DDoS attack trends for Q2 2020

In the first quarter of 2020, within a matter of weeks, our way of life shifted. We’ve become reliant on online services more than ever. Employees that can are working from home, students of all ages and grades are taking classes online, and we’ve redefined what it means to stay connected. The more the public is dependent on staying connected, the larger the potential reward for attackers to cause chaos and disrupt our way of life. It is therefore no surprise that in Q1 2020 (January 1, 2020 to March 31, 2020) we reported an increase in the number of attacks—especially after various government authority mandates to stay indoors—shelter-in-place went into effect in the second half of March.

In Q2 2020 (April 1, 2020 to June 30, 2020), this trend of increasing DDoS attacks continued and even accelerated:

  1. The number of L3/4 DDoS attacks observed on our network doubled compared to the first three months of the year.
  2. The scale of the largest L3/4 DDoS attacks increased significantly. In fact, we observed some of the largest attacks ever recorded over our network.
  3. We observed more attack vectors being deployed and attacks were more geographically distributed.

The number of global L3/4 DDoS attacks in Q2 doubled

Gatebot is Cloudflare’s primary DDoS protection system. It automatically detects and mitigates globally distributed DDoS attacks. A global DDoS attack is an attack that we observe in more than one of our edge data centers. These attacks are usually generated by sophisticated attackers employing botnets in the range of tens of thousands to millions of bots.

Sophisticated attackers kept Gatebot busy in Q2. The total number of global L3/4 DDoS attacks that Gatebot detected and mitigated in Q2 doubled quarter over quarter. In our Q1 DDoS report, we reported a spike in the number and size of attacks. We continue to see this trend accelerate through Q2; over 66% of all global DDoS attacks in 2020 occurred in the second quarter (nearly 100% increase). May was the busiest month in the first half of 2020, followed by June and April. Almost a third of all L3/4 DDoS attacks occurred in May.

In fact, 63% of all L3/4 DDoS attacks that peaked over 100 Gbps occurred in May. As the global pandemic continued to intensify around the world in May, attackers were especially eager to take down websites and other Internet properties.

Small attacks continue to dominate in numbers as big attacks get bigger in size

A DDoS attack’s strength is equivalent to its size—the actual number of packets or bits flooding the link to overwhelm the target. A ‘large’ DDoS attack refers to an attack that peaks at a high rate of Internet traffic. The rate can be measured in terms of packets or bits. Attacks with high bit rates attempt to saturate the Internet link, and attacks with high packet rates attempt to overwhelm the routers or other in-line hardware devices.

Similar to Q1, the majority of L3/4 DDoS attacks that we observed in Q2 were also relatively ‘small’ with regards to the scale of Cloudflare’s network. In Q2, nearly 90% of all L3/4 DDoS attacks that we saw peaked below 10 Gbps. Small attacks that peak below 10 Gbps can still easily cause an outage to most of the websites and Internet properties around the world if they are not protected by a cloud-based DDoS mitigation service.

Similarly, from a packet rate perspective, 76% of all L3/4 DDoS attacks in Q2 peaked at up to 1 million packets per second (pps). Typically, a 1 Gbps Ethernet interface can deliver anywhere between 80k and 1.5M pps. Assuming the interface also serves legitimate traffic, and that most organizations have much less than a 1 Gbps interface, you can see how even these ‘small’ packet rate DDoS attacks can easily take down Internet properties.
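
To make those packet-rate numbers concrete, here is a short back-of-the-envelope sketch (purely illustrative, not part of the Cloudflare report) that computes the theoretical packets-per-second ceiling of a 1 Gbps Ethernet link for different frame sizes; minimum-size frames land near 1.5M pps and maximum-size frames near 80k pps, which matches the range quoted above.

    # Back-of-the-envelope packet-rate arithmetic for a 1 Gbps Ethernet link.
    # Assumes 20 bytes of per-frame overhead (preamble + inter-frame gap).
    LINK_BPS = 1_000_000_000          # 1 Gbps link speed
    PER_FRAME_OVERHEAD = 20           # bytes of preamble and inter-frame gap

    def max_pps(frame_bytes: int) -> float:
        """Theoretical packets per second for a given Ethernet frame size."""
        bits_per_frame = (frame_bytes + PER_FRAME_OVERHEAD) * 8
        return LINK_BPS / bits_per_frame

    for size in (64, 512, 1518):      # minimum, mid-size, and maximum frames
        print(f"{size:>5} B frames -> ~{max_pps(size):,.0f} pps")
    # Prints roughly 1.49M pps for 64 B frames and 81k pps for 1518 B frames,
    # i.e. the 80k-1.5M pps range mentioned for a 1 Gbps interface.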

In terms of duration, 83% of all attacks lasted between 30 and 60 minutes. We saw a similar trend in Q1, with 79% of attacks falling in the same duration range. This may seem like a short duration, but imagine it as a 30 to 60 minute cyber battle between your security team and the attackers. Now it doesn’t seem so short. Additionally, if a DDoS attack creates an outage or service degradation, the recovery time to reboot your appliances and relaunch your services can be much longer, costing you lost revenue and reputation for every minute.

In Q2, we saw the largest DDoS attacks on our network, ever

This quarter, we saw an increasing number of large-scale attacks, both in terms of packet rate and bit rate. In fact, 88% of all DDoS attacks in 2020 that peaked above 100 Gbps were launched after shelter-in-place went into effect in March. Once again, May was not just the busiest month with the highest number of attacks overall; it also saw the greatest number of large attacks above 100 Gbps.

From the packet rate perspective, June took the lead with a whopping 754 million pps attack. Besides that attack, the maximum packet rates stayed mostly consistent throughout the quarter at around 200 million pps.

The 754 million pps attack was automatically detected and mitigated by Cloudflare. The attack was part of an organized four-day campaign that lasted from June 18 to June 21. As part of the campaign, attack traffic from over 316,000 IP addresses targeted a single Cloudflare IP address.

Cloudflare’s DDoS protection systems automatically detected and mitigated the attack, and due to the size and global coverage of our network, there was no impact to performance. A globally interconnected network is crucial when mitigating large attacks: it can absorb the attack traffic and mitigate it close to the source, while continuing to serve legitimate customer traffic without inducing latency or service interruptions.

The United States is targeted with the most attacks

When we look at the L3/4 DDoS attack distribution by country, our data centers in the United States received the highest number of attacks (22.6%), followed by Germany (4.4%), Canada (2.7%) and Great Britain (2.6%).

However, when we look at the total attack bytes mitigated by each Cloudflare data center, the United States still leads (34.9%), followed by Hong Kong (6.6%), Russia (6.5%), Germany (4.5%) and Colombia (3.7%). The difference comes down to the total amount of bandwidth generated in each attack. For instance, while Hong Kong did not make it into the top 10 by attack count due to the relatively small number of attacks observed there (1.8%), those attacks were highly volumetric and generated so much attack traffic that they pushed Hong Kong into second place by attack bytes.

When analyzing L3/4 DDoS attacks, we bucket the traffic by the Cloudflare edge data center locations and not by the location of the source IP. The reason is that when attackers launch L3/4 attacks, they can ‘spoof’ (alter) the source IP address in order to obfuscate the attack source. If we were to derive the country based on a spoofed source IP, we would get a spoofed country. Cloudflare is able to overcome the challenge of spoofed IPs by displaying the attack data by the location of the Cloudflare data center in which the attack was observed. We’re able to achieve geographical accuracy in our report because we have data centers in over 200 cities around the world.
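
As a rough illustration of that bucketing (a minimal sketch, not Cloudflare’s actual pipeline; the record fields are hypothetical), attributing each mitigated attack to the observing data center’s country rather than geolocating the possibly spoofed source IP could look like this:

    # Minimal sketch: attribute each mitigated attack to the country of the
    # data center that observed it, ignoring the (possibly spoofed) source IP.
    from collections import Counter

    def attack_share_by_location(attack_records):
        """attack_records: iterable of dicts such as
        {'datacenter_country': 'US', 'source_ip': '203.0.113.7'} (hypothetical fields)."""
        counts = Counter(record["datacenter_country"] for record in attack_records)
        total = sum(counts.values())
        # Return each country's share of observed attacks as a fraction of the total.
        return {country: count / total for country, count in counts.items()}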

57% of all L3/4 DDoS attacks in Q2 were SYN floods

An attack vector is a term used to describe the attack method. In Q2, we observed an increase in the number of vectors used by attackers in L3/4 DDoS attacks. A total of 39 different types of attack vectors were used in Q2, compared to 34 in Q1. SYN floods formed the majority with over 57% in share, followed by RST (13%), UDP (7%), CLDAP (6%) and SSDP (3%) attacks.

SYN flood attacks aim to exploit the handshake process of a TCP connection. By repeatedly sending initial connection request packets with a synchronize flag (SYN), the attacker attempts to overwhelm the router’s connection table that tracks the state of TCP connections. The router replies with a packet that contains a synchronize-acknowledgment flag (SYN-ACK), allocates a certain amount of memory for each given connection, and waits in vain for the client to respond with a final acknowledgment (ACK). Given a sufficient number of SYNs occupying the router’s memory, the router is unable to allocate further memory for legitimate clients, causing a denial of service.
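
As a simple illustration of why unanswered SYNs are dangerous (a minimal sketch under hypothetical thresholds, not how Cloudflare’s systems actually work), a device could track half-open connections per source and act once a source accumulates too many of them. Note that real SYN floods usually spoof their source addresses, which is exactly why per-source bookkeeping alone is not enough.

    # Hypothetical sketch: track half-open TCP handshakes (SYN seen, no final ACK)
    # and flag sources that accumulate too many of them. Thresholds are made up.
    from collections import defaultdict

    HALF_OPEN_LIMIT = 100             # hypothetical per-source ceiling

    half_open = defaultdict(set)      # source IP -> set of outstanding flow IDs

    def on_syn(src_ip, flow_id):
        half_open[src_ip].add(flow_id)
        # Too many outstanding handshakes from one source looks like a SYN flood.
        return "challenge_or_drop" if len(half_open[src_ip]) > HALF_OPEN_LIMIT else "allow"

    def on_final_ack(src_ip, flow_id):
        # A completed handshake releases the half-open entry.
        half_open[src_ip].discard(flow_id)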

No matter the attack vector, Cloudflare automatically detects and mitigates stateful or stateless DDoS attacks using our three-pronged protection approach, comprising our home-built DDoS protection systems (a simplified sketch of the sample-analyze-mitigate loop they share follows the list):

  1. Gatebot – Cloudflare’s centralized DDoS protection system for detecting and mitigating globally distributed volumetric DDoS attacks. Gatebot runs in our network’s core data center. It receives samples from every one of our edge data centers, analyzes them, and automatically sends mitigation instructions when attacks are detected. Gatebot is also synchronized with each of our customers’ web servers to identify their health and trigger tailored protection accordingly.
  2. dosd (denial of service daemon) – Cloudflare’s decentralized DDoS protection systems. dosd runs autonomously in each server in every Cloudflare data center around the world, analyzes traffic, and applies local mitigation rules when needed. Besides being able to detect and mitigate attacks at super fast speeds, dosd significantly improves our network resilience by delegating the detection and mitigation capabilities to the edge.
  3. flowtrackd (flow tracking daemon) – Cloudflare’s TCP state tracking machine for detecting and mitigating the most randomized and sophisticated TCP-based DDoS attacks in unidirectional routing topologies. flowtrackd is able to identify the state of a TCP connection and then drops, challenges or rate-limits packets that don’t belong to a legitimate connection.
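
The sketch below is a deliberately simplified, hypothetical illustration of the sample-analyze-mitigate loop these systems share; the sampling rate, threshold, and rule format are invented for illustration and are not Cloudflare internals.

    # Hypothetical sketch of a sample -> analyze -> mitigate loop. The sampling
    # rate, threshold, and rule format are illustrative, not Cloudflare's.
    from collections import Counter

    SAMPLE_RATE = 1 / 1000            # assume 1-in-1,000 packet sampling
    PPS_THRESHOLD = 1_000_000         # flag a target once implied rate exceeds ~1M pps

    def build_mitigation_rules(samples, window_seconds):
        """samples: iterable of (dst_ip, protocol, tcp_flags) tuples from one window."""
        per_target = Counter(samples)
        rules = []
        for (dst_ip, protocol, tcp_flags), count in per_target.items():
            estimated_pps = count / SAMPLE_RATE / window_seconds
            if estimated_pps > PPS_THRESHOLD:
                # A real system would emit a much richer signature (ports, packet
                # lengths, TTLs, payload patterns); here the rule is just a match tuple.
                rules.append({"action": "drop",
                              "match": {"dst": dst_ip, "proto": protocol, "flags": tcp_flags}})
        return rules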

In addition to our automated DDoS protection systems, Cloudflare also generates real-time threat intelligence that automatically mitigates attacks. Furthermore, Cloudflare provides its customers with firewall, rate-limiting, and additional tools to further customize and optimize their protection.

Cloudflare DDoS mitigation

As Internet usage continues to evolve for businesses and individuals, expect DDoS tactics to adapt as well. Cloudflare protects websites, applications, and entire networks from DDoS attacks of any size, kind, or level of sophistication.

Our customers and industry analysts recommend our comprehensive solution for three main reasons:

  1. Network scale: Cloudflare’s 37 Tbps network can easily block attacks of any size, type, or level of sophistication. The Cloudflare network has a DDoS mitigation capacity that is higher than the next four competitors—combined.
  2. Time-to-mitigation: Cloudflare mitigates most network layer attacks in under 10 seconds globally, and provides immediate mitigation (0 seconds) when static rules are preconfigured. With our global presence, Cloudflare mitigates attacks close to the source with minimal latency. In some cases, traffic is even faster than over the public Internet.
  3. Threat intelligence: Cloudflare’s DDoS mitigation is powered by threat intelligence harnessed from the over 27 million Internet properties on our network. Additionally, this threat intelligence is incorporated into customer-facing firewalls and tools in order to empower our customers.

Cloudflare is uniquely positioned to deliver DDoS mitigation with unparalleled scale, speed, and smarts because of the architecture of our network. Cloudflare’s network is like a fractal—every service runs on every server in every Cloudflare data center that spans over 200 cities globally. This enables Cloudflare to detect and mitigate attacks close to the source of origin, no matter the size, source, or type of attack.

To learn more about Cloudflare’s DDoS solution, contact us or get started.

You can also join an upcoming live webinar where we will be discussing these trends, and strategies enterprises can implement to combat DDoS attacks and keep their networks online and fast.

Source :
https://blog.cloudflare.com/network-layer-ddos-attack-trends-for-q2-2020/

Google Pixel 4a is the first device to go through ioXt at launch

Trust is very important when it comes to the relationship between a user and their smartphone. While phone functionality and design can enhance the user experience, security is fundamental and foundational to our relationship with our phones. There are multiple ways to build trust around the security capabilities that a device provides, and we continue to invest in verifiable ways to do just that.

Pixel 4a ioXt certification

Today we are happy to announce that the Pixel 4/4 XL and the newly launched Pixel 4a are the first Android smartphones to go through ioXt certification against the Android Profile.

The Internet of Secure Things Alliance (ioXt) manages a security compliance assessment program for connected devices. ioXt has over 200 members across various industries, including Google, Amazon, Facebook, T-Mobile, Comcast, Zigbee Alliance, Z-Wave Alliance, Legrand, Resideo, Schneider Electric, and many others. With so many companies involved, ioXt covers a wide range of device types, including smart lighting, smart speakers, webcams, and Android smartphones.

The core focus of ioXt is “to set security standards that bring security, upgradability and transparency to the market and directly into the hands of consumers.” This is accomplished by assessing devices against a baseline set of requirements and relying on publicly available evidence. The goal of ioXt’s approach is to enable users, enterprises, regulators, and other stakeholders to understand the security in connected products to drive better awareness towards how these products are protecting the security and privacy of users.

ioXt’s baseline security requirements are tailored for product classes, and the ioXt Android Profile enables smartphone manufacturers to differentiate security capabilities, including biometric authentication strength, security update frequency, length of security support lifetime commitment, vulnerability disclosure program quality, and preloaded app risk minimization.

We believe that using a widely known industry consortium standard for Pixel certification provides increased trust in the security claims we make to our users. NCC Group has published an audit report that can be downloaded here. The report documents the evaluation of Pixel 4/4 XL and Pixel 4a against the ioXt Android Profile.

Security by Default is one of the most important criteria used in the ioXt Android profile. Security by Default rates devices by cumulatively scoring the risk for all preloads on a particular device. For this particular measurement, we worked with a team of university experts from the University of Cambridge, University of Strathclyde, and Johannes Kepler University in Linz to create a formula that considers the risk of platform signed apps, pregranted permissions on preloaded apps, and apps communicating using cleartext traffic.

[Image: Screenshot of the presentation of the Android Device Security Database at the Android Security Symposium 2020]

In partnership with those teams, Google created Uraniborg, an open source tool that collects the necessary attributes from the device and runs them through this formula to come up with a raw score. NCC Group leveraged Uraniborg to conduct the assessment for the ioXt Security by Default category.
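
To make the idea concrete, the sketch below shows what a cumulative preload-risk score of that general shape could look like; the weights and field names are invented for illustration and are not the published Uraniborg formula.

    # Hypothetical sketch of a cumulative preload-risk score. The weights and
    # fields are made up for illustration and are not the Uraniborg formula.
    def preload_risk_score(preloaded_apps):
        """preloaded_apps: list of dicts with 'platform_signed' (bool),
        'pregranted_permissions' (list), and 'uses_cleartext' (bool)."""
        W_PLATFORM_SIGNED = 3.0       # platform-signed apps carry the most privilege
        W_PREGRANTED_PERM = 1.0       # each pregranted permission adds some risk
        W_CLEARTEXT = 2.0             # cleartext traffic exposes data in transit

        score = 0.0
        for app in preloaded_apps:
            score += W_PLATFORM_SIGNED if app["platform_signed"] else 0.0
            score += W_PREGRANTED_PERM * len(app["pregranted_permissions"])
            score += W_CLEARTEXT if app["uses_cleartext"] else 0.0
        return score                  # a lower cumulative score means less preload risk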

As part of our ongoing certification efforts, we look forward to submitting future Pixel smartphones through the ioXt standard, and we encourage the Android device ecosystem to participate in similar transparency efforts for their devices.

Acknowledgements: This post leveraged contributions from Sudhi Herle, Billy Lau and Sam Schumacher

Source :
https://security.googleblog.com/2020/08/pixel-4a-is-first-device-to-go-through.html