Australian researchers record world’s fastest internet speed from a single optical chip

Researchers from Monash, Swinburne and RMIT universities have successfully tested and recorded the world’s fastest internet data speed from a single optical chip – fast enough to download 1000 high-definition movies in a split second.

Published in the prestigious journal Nature Communications, these findings not only have the potential to fast-track the next 25 years of Australia’s telecommunications capacity, but also open the possibility of this home-grown technology being rolled out across the world.

In light of the pressures being placed on the world’s internet infrastructure, recently highlighted by isolation policies as a result of COVID-19, the research team led by Dr Bill Corcoran (Monash), Distinguished Professor Arnan Mitchell (RMIT) and Professor David Moss (Swinburne) were able to achieve a data speed of 44.2 Terabits per second (Tbps) from a single light source.

This technology has the capacity to support the high-speed internet connections of 1.8 million households in Melbourne at the same time, and of billions of users across the world during peak periods.

Demonstrations of this magnitude are usually confined to a laboratory. For this study, however, researchers achieved these speeds using existing communications infrastructure, which allowed them to load-test the network efficiently.

They used a new device known as a micro-comb, a single piece of equipment that replaces 80 lasers and is smaller and lighter than existing telecommunications hardware. It was installed in, and load-tested using, existing infrastructure that mirrors the network used by the NBN.

The micro-comb chip over a A$2 coin. This tiny chip produces an infrared rainbow of light, the equivalent of 80 lasers. The ribbon to the right of the image is an array of optical fibres connected to the device. The chip itself measures about 3x5 mm.

This is the first time a micro-comb has been used in a field trial, and it represents the highest amount of data ever produced from a single optical chip.

“We’re currently getting a sneak peek of how the infrastructure for the internet will hold up in two to three years’ time, due to the unprecedented number of people using the internet for remote work, socialising and streaming. It’s really showing us that we need to be able to scale the capacity of our internet connections,” says Dr Bill Corcoran, co-lead author of the study and Lecturer in Electrical and Computer Systems Engineering at Monash University.

“What our research demonstrates is the ability for fibres that we already have in the ground, thanks to the NBN project, to be the backbone of communications networks now and in the future. We’ve developed something that is scalable to meet future needs.

“And it’s not just Netflix we’re talking about here – it’s the broader scale of what we use our communication networks for. This data can be used for self-driving cars and future transportation and it can help the medicine, education, finance and e-commerce industries, as well as enable us to read with our grandchildren from kilometres away.”

To illustrate the impact optical micro-combs have on optimising communication systems, researchers installed 76.6km of ‘dark’ optical fibres between RMIT’s Melbourne City Campus and Monash University’s Clayton Campus. The optical fibres were provided by AARNet, Australia’s Academic and Research Network.

Within these fibres, researchers placed the micro-comb – contributed by Swinburne, as part of a broad international collaboration – which acts like a rainbow made up of hundreds of high quality infrared lasers from a single chip. Each ‘laser’ has the capacity to be used as a separate communications channel.

Researchers were able to send maximum data down each channel, simulating peak internet usage, across 4 THz of bandwidth.
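
For a rough sense of what those figures mean together, the quoted numbers imply a spectral efficiency of about 11 bits per second per hertz and an average of just over 550 Gb/s per comb line. A back-of-the-envelope check, using only the figures quoted in this article:

```python
# Back-of-the-envelope check from the figures quoted above:
# 44.2 Tbps total rate, 4 THz of optical bandwidth, 80 comb lines ("lasers").
total_rate_tbps = 44.2
bandwidth_thz = 4.0
channels = 80

print(f"Spectral efficiency: {total_rate_tbps / bandwidth_thz:.2f} b/s/Hz")  # ~11.05
print(f"Average per channel: {total_rate_tbps * 1000 / channels:.1f} Gb/s")  # ~552.5
```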

Distinguished Professor Mitchell said reaching the optimum data speed of 44.2 Tbps showed the potential of existing Australian infrastructure. The future ambition of the project is to scale the current transmitters up from hundreds of gigabits per second towards tens of terabits per second without increasing size, weight or cost.

“Long-term, we hope to create integrated photonic chips that could enable this sort of data rate to be achieved across existing optical fibre links with minimal cost,” Distinguished Professor Mitchell says.

“Initially, these would be attractive for ultra-high speed communications between data centres. However, we could imagine this technology becoming sufficiently low cost and compact that it could be deployed for commercial use by the general public in cities across the world.”

Professor Moss, Director of the Optical Sciences Centre at Swinburne, says: “In the 10 years since I co-invented micro-comb chips, they have become an enormously important field of research.

“It is truly exciting to see their capability in ultra-high bandwidth fibre optic telecommunications coming to fruition. This work represents a world-record for bandwidth down a single optical fibre from a single chip source, and represents an enormous breakthrough for part of the network which does the heaviest lifting. Micro-combs offer enormous promise for us to meet the world’s insatiable demand for bandwidth.”

To download a copy of the paper, please visit: https://doi.org/10.1038/s41467-020-16265-x

Source :
http://www.swinburne.edu.au/news/latest-news/2020/05/australian-researchers-record-worlds-fastest-internet-speed-from-a-single-optical-chip.php

World Record Transmission of 172 Terabit/s over 2,040 km in Coupled-3-core Multi-core Fiber

  • A world record for high-capacity, long-haul transmission in standard-diameter optical fibers was achieved in a coupled-3-core multi-core fiber with characteristics similar to multi-mode fibers.
  • The signal-processing complexity is significantly reduced compared to multi-mode fibers.
  • The fiber type is promising for early adoption in high-capacity backbone transmission systems, as it can be cabled with the same technology as conventional fibers.
In a collaboration between researchers from the Network Systems Research Institute at the National Institute of Information and Communications Technology (NICT, President: TOKUDA Hideyuki, Ph.D.), led by RADEMACHER Georg, and researchers from NOKIA Bell Labs (Bell Labs, President: WELDON Marcus), led by RYF Roland, transmission of 172 terabit/s over 2,040 km was successfully demonstrated using a standard outer diameter (0.125 mm) coupled-3-core optical fiber.
Using the product of data-rate and distance as a general index of transmission capability, this achieves 351 petabit/s x km, more than doubling the current world record for standard outer diameter optical fibers employing space-division multiplexing. The coupled-core multi-core fiber used here requires signal processing on the receiving side after transmission, but the signal-processing load is lower than for the more commonly investigated few-mode fibers. In addition, the fiber has the same outer diameter as standard optical fibers, which allows it to be made into cables with existing technologies and equipment, simplifying timely adoption of coupled-core multi-core fibers in the industry.
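
As a quick arithmetic check of the record figure, a minimal sketch in Python using only the numbers quoted above:

```python
# Capacity-distance product from the figures quoted above:
# 172 Tb/s transmitted over 2,040 km.
data_rate_tbps = 172
distance_km = 2040

product_pbps_km = data_rate_tbps * distance_km / 1000  # Tb/s*km -> Pb/s*km
print(f"{product_pbps_km:.2f} Pb/s x km")  # 350.88, i.e. ~351 as reported
```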
The results of this experiment were presented at the 43rd International Conference on Optical Fiber Communications (OFC 2020), where they were accepted as a Post Deadline Paper.

Background

Figure 1: Data-rates and distances reported to date with standard cladding diameter optical fibers
In order to cope with ever-increasing communication traffic, researchers around the world are actively studying new types of optical fibers that can exceed the limits of conventional fibers, together with large-scale optical transmission experiments that use them. Research pursuing the ultimate in capacity focuses on multi-core and multi-mode fibers, which increase the number of cores in a fiber and transmit optical signals in different modes through each core. Research aimed at early commercialization, on the other hand, focuses on multi-core or multi-mode fibers with a standard outer diameter (0.125 mm), in consideration of manufacturing methods and ease of handling.

Achievements

NICT constructed a large-capacity, long-distance transmission system building on Bell Labs' earlier long-distance demonstration, which exploited the suppressed modal dispersion of a coupled-core multi-core fiber. 359 wavelength channels were modulated with 16QAM signals, and a total data-rate of 172 terabit/s was successfully transmitted over 2,040 km. Converted to the product of transmission capacity and distance, a general indicator of transmission capability, this equals 351 petabit/s x km, more than twice the current world record.
When coupled-core multi-core fibers are used for transmission, the interference between the optical signals in different cores must be removed by signal processing (MIMO processing) on the receiving side. To date, transmission over coupled-core multi-core fibers had been performed only over a limited signal band (less than 5 nanometers of wavelength range), and it was unclear whether long-distance transmission characteristics and large capacity could be achieved simultaneously in these fibers.
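
To make the MIMO idea concrete, here is a toy illustration only, not the receiver DSP actually used: a simple zero-forcing "unmixing" of three coupled channels standing in for the far more elaborate adaptive equalizers in real coupled-core receivers.

```python
# Toy 3x3 MIMO "unmixing": a stand-in for the adaptive MIMO DSP used in
# real coupled-core receivers (illustration only, not the actual algorithm).
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # core-coupling matrix
x = np.array([1 + 1j, -1 + 1j, 1 - 1j])                     # symbols sent on 3 cores
y = H @ x                                                    # signals mixed in the fiber
x_hat = np.linalg.solve(H, y)                                # receiver-side MIMO processing
print(np.allclose(x, x_hat))                                 # True: inter-core interference removed
```
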
In this experiment, using a standard outer diameter optical fiber, we succeeded in transmitting 17 times the backbone communication capacity of Japan over a distance of 2,040 km. Because the standard outer diameter fiber is compatible with conventional fiber cables, the prospects for early commercialization of large-capacity backbone communication systems have improved.
Figure 2: Experimental demonstrations of advanced optical fibers by NICT

Future Prospects

We will work on research and development of future optical communication infrastructure technology that can smoothly accommodate traffic from services such as 5G and international communications via submarine cables.
The paper on these results was presented at the 43rd International Conference on Optical Fiber Communication (OFC 2020, March 8 (Sun) - March 12 (Thu)), one of the largest international conferences on optical fiber communication, held in San Diego, USA. It was highly rated and presented in the Post Deadline session, which is known for the release of the latest important research results, on Thursday, March 12, 2020.

References

International Conference: 43rd International Conference on Optical Fiber Communications (OFC 2020) Post Deadline Paper
Title: 172 Tb/s C+L Band Transmission over 2,040 km Strongly Coupled 3-Core Fiber
Authors: Georg Rademacher, Ruben S. Luís, Benjamin J. Puttnam, Roland Ryf, Sjoerd v. d. Heide, Tobias A. Eriksson, Nicolas K. Fontaine, Haoshuo Chen, René-Jean Essiambre, Yoshinari Awaji, Hideaki Furukawa, and Naoya Wada

Source :
https://www.nict.go.jp/en/press/2020/04/02-1.html

DoH! What’s all the fuss about DNS over HTTPS?

Cisco Umbrella now supports DoH

Not all DNS services are created equal. Some break. Some fail to connect to domain servers. Speeds can vary, and if not kept up to date, some DNS services can affect the ability to work efficiently. But with more than a decade of leadership in recursive DNS services (13+ years and counting!), Cisco Umbrella boasts significant advantages when it comes to understanding how both legitimate and non-legitimate parties register domains, provision infrastructure, and route internet traffic.

Back in the old days when we were known as OpenDNS, we started with the mission to deliver the most reliable, safest, smartest, and fastest DNS resolution in the world. It was a pretty tall order, but we did it — and we’re still doing it today under our new name, Cisco Umbrella. (Here’s one for the trivia champions: OpenDNS was acquired by Cisco on August 27, 2015.)

In fact, TechRadar Pro recognized us as having the best free and public DNS server for 2020. You don’t have to take our word for it — check it out here. But just because we’re the best doesn’t mean we’ll stop innovating.

We recently announced support for DNS over HTTPS, commonly referred to as DoH, a standard published by the Internet Engineering Task Force (IETF). Cisco Umbrella offers DNS resolution over an HTTPS endpoint as part of our home and enterprise customer DNS services. Users may now choose to use the DoH endpoint instead of sending DNS queries over plaintext for increased security and privacy. DoH can increase user privacy and security by preventing eavesdropping and manipulation of DNS data by man-in-the-middle attacks. In addition, when DoH is enabled, it ensures that your ISP can’t collect personal information related to your browsing history. It can often improve performance, too.

How does it work?

DoH works just like a normal DNS request, except that the query is carried over HTTPS (and therefore TCP) rather than the usual plaintext UDP. Both kinds of request take a domain name that a user types into their browser and send a query to a DNS server to learn the numerical IP address of the web server hosting that site. The key difference is that DoH takes the DNS query and sends it to a DoH-compatible DNS server (resolver) via an encrypted HTTPS connection on port 443, rather than in plaintext on port 53. DoH prevents third-party observers from sniffing traffic and understanding what DNS queries users have run or what websites users are intending to access. Since the DoH (DNS) request is encrypted, it’s even invisible to cybersecurity software that relies on passive DNS monitoring to block requests to known malicious domains.
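
To see the mechanics, here is a minimal sketch of a DoH lookup in Python. It uses Google's public JSON DoH endpoint purely as an example resolver; any DoH-compatible resolver, including Umbrella's, plays the same role.

```python
# Minimal DoH lookup sketch: the DNS question travels inside an HTTPS
# request on port 443 instead of a plaintext UDP packet on port 53.
# Google's public JSON DoH endpoint is used here only as an example.
import requests

resp = requests.get(
    "https://dns.google/resolve",
    params={"name": "example.com", "type": "A"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
```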

DoH is a choice, not a requirement

So what’s all the fuss about DoH? It all comes down to user privacy. And since privacy is a hot topic, it will continue to be blogged and chatted about widely. To block or not to block DoH is a personal choice. Mozilla blazed the trail with the Firefox browser, but other vendors like Microsoft and Google have announced plans to add support for DoH in future releases of Windows and Chrome. Mozilla started enabling DoH by default in version 69 of Firefox, rolling it out gradually from September 2019. Cisco Umbrella supports Mozilla’s ‘use-application-dns.net’ canary domain, meaning that Firefox will disable DoH for users of Cisco Umbrella.

Because DoH is configured within the application, the DNS servers configured by the operating system are not used. This means that the protection provided by Cisco Umbrella may be bypassed by applications using DoH. But don’t worry… you can block this feature easily with Umbrella, too. Most of our enterprise customers choose not to utilize DoH. It isn’t right for everyone.

Protect your Umbrella settings

Our team at Cisco Umbrella recommends that companies use enterprise policies to manage DoH on endpoints they control. For detailed help on how to proceed, check out this helpful article, GPO and DoH.

To block DoH providers and keep your Umbrella deployment settings, follow these simple steps:

1. Navigate to Policies > Content Categories

2. Select the category setting currently in use.

3. Ensure that “Proxy/Anonymizer” is selected.

[Image: example settings for blocking DNS over HTTPS (DoH) providers in the Umbrella dashboard]

4. Save.

Your users will now remain covered by Umbrella as Firefox gradually rolls out this change to its users.

How to disable DoH in Firefox

Firefox allows users (via settings) and organizations (via enterprise policies and a canary domain lookup) to disable DoH when it interferes with a preferred policy. For existing Firefox users who are based in the United States, the notification below will display if and when DoH is first enabled, allowing the user to opt out of DoH and continue using their default OS DNS resolver.

[Image: example of the Mozilla notification regarding DNS over HTTPS (DoH)]
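
For organizations that manage Firefox centrally, DoH can also be disabled through Firefox's enterprise policy engine. Below is a minimal sketch of a policies.json that turns DoH off and locks the setting; the DNSOverHTTPS policy is documented in Mozilla's policy templates, but treat the exact file placement and rollout as deployment-specific.

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}
```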

Reliable, effective protection with Cisco Umbrella

Cisco Umbrella is the leading provider of network security and DNS services, enabling the world to connect to the internet with confidence on any device. When connecting directly to the internet, organizations need security that is incredibly reliable and eliminates performance problems for end users. Umbrella is built upon a global cloud infrastructure that has delivered 100% uptime since 2006 and we provide automated failover for simplified deployment and management. By leveraging our extensive peering relationships with the top internet service providers (ISPs), content delivery networks (CDNs), and SaaS platforms, such as O365, Umbrella optimizes the routes between core networks and our cloud hubs, providing superior performance and user satisfaction.

Umbrella’s support for DoH is just another demonstration of our commitment to delivering the best, most reliable, and fastest internet experience to more than 100 million enterprise and consumer users (and counting).

For more information on DoH, visit our knowledge base.

Source :
https://umbrella.cisco.com/blog/doh-whats-all-the-fuss-about-dns-over-https

5 reasons to move your endpoint security to the cloud now

As the world adopts work-from-home initiatives, we’ve seen many organizations accelerate their plans to move from on-premises endpoint security and detection and response (EDR/XDR) solutions to Software as a Service versions. Several customers who switched to the SaaS version last year recently wrote to tell us how glad they were to have done so as they transitioned to remote work. Here are 5 reasons to consider moving to a cloud-managed solution:

  1. No internal infrastructure management = less risk

If you haven’t found the time to update your endpoint security software and are one or two versions behind, you are putting your organization at risk of attack. Older versions do not have the same level of protection against ransomware and fileless attacks. Just as the threats are always evolving, so is the technology built to protect against them.

With Apex One as a Service, you always have the latest version. There are no software patches to apply or Apex One servers to manage – we take care of it for you. If you are working remotely, this is one less task to worry about and fewer servers in your environment that might need your attention.

  2. High availability, reliability

With redundant processes and continuous service monitoring, Apex One as a Service delivers the uptime you need with 99.9% availability. The operations team also proactively monitors for potential issues on your endpoints and, with your prior approval, can fix minor issues with an endpoint agent before they need your attention.

  3. Faster Detection and Response (EDR/XDR)

By transferring endpoint telemetry to a cloud data lake, detection and response activities like investigations and sweeping can be processed much faster. For example, creating a root cause analysis diagram in the cloud takes a fraction of the time, since the data is readily available and can be quickly processed with the compute power of the cloud.

  4. Increased MITRE mapping

The unmatched power of cloud computing also enables analytics across a high volume of events and telemetry to identify a suspicious series of activities. This allows not only for innovative detection methods but also for additional mapping of techniques and tactics to the MITRE framework. Building the equivalent compute power in an on-premises architecture would be cost prohibitive.

  5. XDR – Combined Endpoint + Email Detection and Response

According to Verizon, 94% of malware incidents start with email. When an endpoint incident occurs, chances are it came from an email message, and you want to know which other users have the same email or email attachment in their inbox. You can ask your email admin to run these searches for you, but that takes time and coordination. As Forrester recognized in the recently published report The Forrester Wave™: Enterprise Detection and Response, Q1 2020:

“Trend Micro delivers XDR functionality that can be impactful today. Phishing may be the single most effective way for an adversary to deliver targeted payloads deep into an infrastructure. Trend Micro recognized this and made its first entrance into XDR by integrating Microsoft office 365 and Google G suite management capabilities into its EDR workflows.”

This XDR capability is available today by combining alerts, logs and activity data of Apex One as a Service and Trend Micro Cloud App Security. Endpoint data is linked with Office 365 or G Suite email information from Cloud App Security to quickly assess the email impact without having to use another tool or coordinate with other groups.

Moving endpoint protection and detection and response to the cloud yields enormous savings in customer time while increasing protection and capabilities. If you are licensed with our Smart Protection Suites, you already have access to Apex One as a Service, and our support team is ready to help you with your migration. If you are on an older suite, talk to your Trend Micro sales rep about moving to a license that includes SaaS.

Source :

https://blog.trendmicro.com/5-reasons-to-move-your-endpoint-security-to-the-cloud-now/

This Week in Security News: 5 Reasons to Move Your Endpoint Security to the Cloud Now and ICEBUCKET Group Mimics Smart TVs to Steal Ad Money

Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, learn about 5 reasons your organization should consider moving to a cloud managed solution. Also, read about a massive online fraud operation that has been mimicking smart TVs to fool online advertisers and gain unearned profits from online ads.

Read on:

Letter from the CEO: A Time of Kindness and Compassion

As a global company with headquarters in Japan, Trend Micro has been exposed to COVID-19 from the very early days when it first erupted in Asia. During these difficult times, Trend Micro has also witnessed the amazing power of positivity and kindness around the world. In this blog, read more about the importance of compassion during these unprecedented times from Trend Micro’s CEO, Eva Chen.

What Do Serverless Compute Platforms Mean for Security?

Developers deploying containers to restricted platforms or “serverless” containers to the likes of AWS Fargate, for example, should think about security differently – by looking upward, looking left, and also looking all around your cloud domain for opportunities to properly secure your cloud-native applications.

April Patch Tuesday: Microsoft Battles 4 Bugs Under Active Exploit

Microsoft released its April 2020 Patch Tuesday security updates, its first big patch update released since the work-from-home era began, with a whopping 113 vulnerabilities. Microsoft has seen a 44% increase in the number of CVEs patched between January to April 2020 compared to the same time period in 2019, according to Trend Micro’s Zero Day Initiative – a likely result of an increasing number of researchers looking for bugs and an expanding portfolio of supported products.

5 Reasons to Move Your Endpoint Security to the Cloud Now

As the world adopts work from home initiatives, we’ve seen many organizations accelerate their plans to move from on-premises endpoint security and detection and response (EDR/XDR) solutions to SaaS versions. In this blog, learn about 5 reasons you should consider moving to a cloud managed solution.

Why Running a Privileged Container is Not a Good Idea

Containers are not, by any means, new. They have been consistently and increasingly adopted in the past few years, with security being a popular related topic. It is well-established that giving administrative powers to server users is not a good security practice. In the world of containers, we have the same paradigm. In this article, Trend Micro’s Fernando Cardoso explains why running a privileged container is a bad idea.

Why CISOs Are Demanding Detection and Response Everywhere

Over the past three decades, Trend Micro has observed the industry trends that have the biggest impact on its customers. One of the big things we’ve noticed is that threats move largely in tandem with changes to IT infrastructure. As digital transformation continues to remain a priority, it also comes with an expanded corporate attack surface, driving security leaders to demand enhanced visibility, detection and response across the entire enterprise — not just the endpoint.

Shift Well-Architecture Left. By Extension, Security Will Follow

Using Infrastructure as Code (IaC) is the norm in the cloud. From CloudFormation, CDK, Terraform, Serverless Framework and ARM, the options are nearly endless. IaC allows architects and DevOps engineers to version the application infrastructure as much as the developers are already versioning the code. So, any bad change, no matter if on the application code or infrastructure, can be easily inspected or, even better, rolled back.

Work from Home Presents a Data Security Challenge for Banks

The mass relocation of financial services employees from the office to their couch, dining table or spare room to stop the spread of the deadly novel coronavirus is a significant data security concern, according to several industry experts. In this article, learn how managers can support security efforts from Trend Micro’s Bill Malik.

Principles of a Cloud Migration – Security, The W5H

For as long as cloud providers have been in business, discussing the Shared Responsibility Model has been a priority when it comes to customer operation teams. It defines the different aspects of control, and with that control comes the need to secure, manage, and maintain. In this blog, Trend Micro highlights some of the requirements and discusses the organization’s layout for responsibility.

Coronavirus Update App Leads to Project Spy Android and iOS Spyware

Trend Micro discovered a potential cyberespionage campaign, dubbed Project Spy, that infects Android and iOS devices with spyware. Project Spy uses the COVID-19 pandemic as a lure, posing as an app called ‘Coronavirus Updates’. Trend Micro also found similarities in two older samples disguised as a Google service and, subsequently, as a music app. Trend Micro noted a small number of downloads of the app in Pakistan, India, Afghanistan, Bangladesh, Iran, Saudi Arabia, Austria, Romania, Grenada and Russia.

Exposing Modular Adware: How DealPly, IsErIk, and ManageX Persist in Systems

Trend Micro has observed suspicious activities caused by adware, with common behaviors that include access to random domains with alternating consonant and vowel names, scheduled tasks, and in-memory execution via WScript that has proven to be an effective method of hiding its operations. In this blog, Trend Micro walks through its analysis of three adware events linked to and named DealPly, IsErIk and ManageX.

ICEBUCKET Group Mimicked Smart TVs to Steal Ad Money

Cybersecurity firm and bot detection platform White Ops has discovered a massive online fraud operation that for the past few months has been mimicking smart TVs to fool online advertisers and gain unearned profits from online ads. White Ops has named this operation ICEBUCKET and has described it as “the largest case of SSAI spoofing” known to date.

Fake Messaging App Installers Promoted on Fraudulent Download Sites, Target Russian Users

Fake installers of popular messaging apps are being propagated via fraudulent download sites, as disclosed in a series of tweets by a security researcher from CronUp. Trend Micro has also encountered samples of the files. The sites and the apps are in Russian and are aiming to bait Russian users.

“Twin Flower” Campaign Jacks Up Network Traffic, Downloads Files, Steals Data

A campaign dubbed “Twin Flower” has been detected by Jinshan security researchers in a report published in Chinese and analyzed by Trend Micro. The files are believed to be downloaded unknowingly when users visit malicious sites, or dropped into the system by other malware. The potentially unwanted application (PUA) PUA.Win32.BoxMini.A files are either a component or the main executable of a music downloader that automatically downloads music files without user consent.

Undertaking Security Challenges in Hybrid Cloud Environments

Businesses are now turning to hybrid cloud environments to make the most of the cloud’s dependability and dynamicity. The hybrid cloud gives organizations the speed and scalability of the public cloud, as well as the control and reliability of the private cloud. A 2019 Nutanix survey shows that 85% of its respondents regard the hybrid cloud as the ideal IT operating model.

How to Secure Video Conferencing Apps

What do businesses have to be wary of when it comes to their video conferencing software? Vulnerabilities, for one. Threat actors are not shy about using everything they have in their toolbox and are always on the lookout for any flaw or vulnerability they can exploit to pull off malicious attacks. In this blog, learn about securing your video conferencing apps and best practices for strengthening the security of work-from-home setups.

Monitoring and Maintaining Trend Micro Home Network Security – Part 4: Best Practices

In the last blog of this four-part series, Trend Micro delves deeper into regular monitoring and maintenance of home network security, to ensure you’re getting the best protection that Trend Micro Home Network Security can provide your connected home.

Surprised by the ICEBUCKET operation that has been described as “the largest case of SSAI spoofing” known to date? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.

Source :
https://blog.trendmicro.com/this-week-in-security-news-5-reasons-to-move-your-endpoint-security-to-the-cloud-now-and-icebucket-group-mimics-smart-tvs-to-steal-ad-money/

Effective Business Continuity Plans Require CISOs to Rethink WAN Connectivity

As more businesses leverage remote, mobile, and temporary workforces, the elements of business continuity planning are evolving and requiring that IT professionals look deep into the nuts and bolts of connectivity.

CISOs and their team members are facing new challenges each and every day, many of which have been driven by digital transformation, as well as the adoption of other productivity-enhancing technologies.

A case in point is the rapidly evolving need to support remote and mobile users as businesses change how they interact with staffers.

For example, the recent COVID-19 crisis has forced the majority of businesses worldwide to support employees that work from home or other remote locations.

Many businesses are encountering numerous problems with connection reliability, as well as the challenges presented by rapidly scaling connectivity to meet a growing number of remote workers.

Add to that security and privacy issues, and it becomes evident that CISOs may very well face what may become insurmountable challenges to keep things working and secure.

It is the potential for disruption that is bringing Business Continuity Planning (BCP) to the forefront of many IT conversations. What's more, many IT professionals are quickly coming to the conclusion that persistent WAN and internet connectivity proves to be the foundation of an effective business continuity plan.

VPNs are Failing to Deliver

Virtual Private Networks (VPNs) are often the first choice for creating secure connections into a corporate network from the outside world.

However, VPNs were initially designed to allow a remote endpoint to attach to an internal local area network and grant that system access to data and applications stored on the network. They were built for occasional connectivity, with a focus on ease of use.

Yet VPNs are quickly beginning to show their limitations when placed under the demands of supporting a rapidly deployed remote workforce.

One of the most significant issues around VPNs comes in the context of scalability; in other words, VPNs can be complicated to scale quickly.

For the most part, VPNs are licensed by connection and are supported by an appliance on the network side that encrypts and decrypts traffic. The more VPN users that are added, the more licenses and processing power are needed, which ultimately adds unforeseen costs and introduces additional latency into the network.

Eventually, VPNs can break under strain, and that creates an issue around business continuity. Simply put, if VPNs become overwhelmed by increased traffic, connectivity may fail and employees' ability to access the network may be impacted; business continuity suffers as a result.

VPNs are also used for site-to-site connections, where the bandwidth may be shared not only between a branch office and a headquarters office but also with remote users. A situation such as that can completely derail an organization's ability to do business if those VPNs fail.

Perhaps an even bigger concern with VPNs comes in the form of cybersecurity. VPNs that are used to give remote users access to a network are only as reliable as the credentials that are given to those remote users.

In some cases, users may share password and login information with others, or carelessly expose their systems to intrusion or theft. Ultimately, VPNs may pave the way for attacks on the corporate network by allowing bad actors to access systems.

ZTNA Moves Beyond VPNs

With VPN technology becoming suspect in the rapid expansion of remote workforces, CISOs and IT pros are looking for alternatives to ensure reliable and secure connections into the network from remote workers.

The desire to bridge security and reliability is driven by continuity, as well as operational issues. CISOs are looking to keep costs down, provide a level of security, without compromising performance, and still meet projected growth.

Many enterprises thought that the answer to the VPN dilemma could be found in SDP (Software Defined Perimeters) or ZTNA (Zero Trust Network Access), two acronyms that have become interchangeable in the arena of cybersecurity.

ZTNA was built for the cloud as a solution that shifts security from the network to the applications. In other words, ZTNA is application-centric, meaning that users are granted access to applications and not the complete network.

Of course, ZTNA does much more than that. ZTNA can "hide" applications, while still granting access to authorized users. Unlike VPNs, ZTNA technology does not broadcast any information outside of the network for authentication, whereas VPN concentrators sit at the edge of the network for all to see, making them a target for malicious attackers.

What's more, ZTNA uses inside-out connections, which means IP addresses are never exposed to the internet. Instead of granting access to the network like a VPN, ZTNA technology uses a micro-segmentation approach, where a secure segment is created between the end-user and the named application.

ZTNA creates an access environment that provides private access to an application for an individual user, and only grants the lowest level of privileges to that user.

ZTNA technology decouples access to applications from access to the network, creating a new paradigm of connectivity. ZTNA based solutions also capture much more information than a VPN, which helps with analytics and security planning.

While a VPN may only track a device's IP address, port data, and protocols, ZTNA solutions capture data around the user identity, named application, latency, locations, and much more. It creates an environment that allows administrators to be more proactive and more easily consume and analyze the information.

While ZTNA may be a monumental step forward from legacy VPN systems, ZTNA solutions are not without their own concerns. ZTNA solutions do not address performance and scalability issues and may lack the core components of continuity, such as failover and automated rerouting of traffic.

In other words, ZTNA may require those additional third-party solutions to be added to the mix to support BCP.

Resolving ZTNA and VPN issues with SASE

A newer technology, which goes by the moniker of SASE (Secure Access Service Edge), may very well have the answer to the dilemmas of security, continuity, and scale that both ZTNA and VPNs introduce into the networking equation.

The Secure Access Service Edge (SASE) model was proposed by Gartner's leading security analysts, Neil MacDonald, Lawrence Orans, and Joe Skorupa. Gartner presents SASE as a way to collapse the networking and security stacks of SD-WANs into a fully integrated offering that is both easy to deploy and manage.

Gartner sees SASE as a game-changer in the world of wide-area networking and cloud connectivity. The research house expects 40% of enterprises to adopt SASE by 2024. However, a significant challenge remains: networking and cybersecurity vendors are still building their SASE offerings, and very few are actually available at this time.

One such vendor is Cato Networks, which offers a fully baked SASE solution and has been identified as one of the leaders in the SASE game by Gartner.

SASE differs significantly from the VPN and ZTNA models by leveraging a native cloud architecture that is built on the concepts of SD-WAN (Software-Defined Wide Area Network). According to Gartner, SASE is an identity-driven connectivity platform that uses a native cloud architecture to support secure connectivity at the network edge that is globally distributed.

SASE gives organizations access to what is essentially a private networking backbone that runs within the global internet. What's more, SASE incorporates automated failover, AI-driven performance tuning, and multiple secure paths into the private backbone.

SASE is deployed at the edge of the network, where the LAN connects to the public internet to access cloud or other services. And as with other SD-WAN offerings, the edge has to connect to something beyond the four walls of the private network.

In Cato's case, the company has created a global private backbone, which is connected via multiple network providers. Cato has built a private cloud that can be reached over the public internet.

SASE also offers the ability to combine the benefits of SDP with the resiliency of an SD-WAN, without introducing any of the shortcomings of a VPN.

Case in point is Cato's Instant Access, a clientless connectivity model that uses a Software-Defined Perimeter (SDP) solution to grant secure access to cloud-delivered applications for authorized remote users.

Instant Access offers multi-factor authentication, single sign-on, and least-privileged access, and is incorporated into the combined networking and security stacks. Since it is built on SASE, full administrator visibility is a reality, as are simplified deployment, instant scalability, integrated performance management, and automated failover.

[Video: Cato Networks' Remote Access product demo]

In Cato's case, continuous threat protection keeps remote workers, as well as the network, safe from network-based threats. Cato's security stack includes NGFW, SWG, IPS, advanced anti-malware, and Managed Threat Detection and Response (MDR) service. Of course, Cato isn't the only player in the SASE game; other vendors pushing into SASE territory include Cisco, Akamai, Palo Alto Networks, Symantec, VMWare, and Netskope.

SASE Addresses the Problems of VPNs, ZTNA -- and More

With VPNs coming up short and ZTNA lacking critical functionality, such as ease of scale and performance management, it is quickly becoming evident that CISOs may need to take a long hard look at SASE.

SASE addresses the all too common problems that VPNs are introducing into a rapidly evolving remote work paradigm, while still offering the application-centric security that ZTNA brings to the table.

What's more, SASE brings with it advanced security, enhanced visibility, and reliability that will go a long way to improving continuity, while also potentially lowering costs.

Source :
https://thehackernews.com/2020/05/rethink-wan-connectivity.html

British Airline EasyJet Suffers Data Breach Exposing 9 Million Customers’ Data

British low-cost airline EasyJet today admitted that the company has fallen victim to a cyber-attack, which it labeled "highly sophisticated," exposing email addresses and travel details of around 9 million of its customers.

In an official statement released today, EasyJet confirmed that of the 9 million affected users, a small subset of customers, 2,208 in total, also had their credit card details stolen, though no passport details were accessed.

The airline did not disclose precisely how the breach happened, when it happened, when the company discovered it, how the sophisticated attackers managed to gain unauthorized access to the private information of its customers, or for how long they had access to the airline's systems.

However, EasyJet assured its users that the company had closed off the unauthorized access following the discovery and that it found "no evidence that any personal information of any nature has been misused" by the attackers.

"As soon as we became aware of the attack, we took immediate steps to respond to and manage the incident and engaged leading forensic experts to investigate the issue," the company said in a statement published today.

EasyJet has also notified the Information Commissioner's Office (ICO), Britain's data protection agency, and continues to investigate the breach incident to determine its extent and further enhance its security environment.

"We take the cybersecurity of our systems very seriously and have robust security measures in place to protect our customers' personal information. However, this is an evolving threat as cyber attackers get ever more sophisticated," says EasyJet Chief Executive Officer Johan Lundgren.

"Since we became aware of the incident, it has become clear that owing to COVID-19, there is heightened concern about personal data being used for online scams. Every business must continue to stay agile to stay ahead of the threat."

As a precautionary measure recommended by the ICO, the airline has started contacting all customers whose travel and credit card details were accessed in the breach to advise them to be "extra vigilant, particularly if they receive unsolicited communications."

Affected customers will be notified by May 26.

Last year, the ICO fined British Airways a record £183 million for failing to protect the personal information of around half a million of its customers during a 2018 security breach involving a Magecart-style card-skimming attack on its website.

Affected customers should be suspicious of phishing emails, which are usually the next step cybercriminals take to trick users into giving away further account details such as passwords and banking information.

Customers whose credit card details were exposed are advised to block the affected cards and request new ones from their financial institution, to keep a close eye on bank and payment card statements for any unusual activity, and to report anything suspicious to the bank.

Source :
https://thehackernews.com/2020/05/easyjet-data-breach-hacking.html

New Bluetooth Vulnerability Exposes Billions of Devices to Hackers

Academics from École Polytechnique Fédérale de Lausanne (EPFL) disclosed a security vulnerability in Bluetooth that could potentially allow an attacker to spoof a remotely paired device, exposing over a billion modern devices to hackers.

The attacks, dubbed Bluetooth Impersonation AttackS or BIAS, concern Bluetooth Classic, which supports Basic Rate (BR) and Enhanced Data Rate (EDR) for wireless data transfer between devices.

"The Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment," the researchers outlined in the paper. "Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade."

Given the widespread impact of the vulnerability, the researchers said they responsibly disclosed the findings to the Bluetooth Special Interest Group (SIG), the organization that oversees the development of Bluetooth standards, in December 2019.

The Bluetooth SIG acknowledged the flaw, adding it has made changes to resolve the vulnerability. "These changes will be introduced into a future specification revision," the SIG said.

The BIAS Attack

For BIAS to be successful, an attacking device would need to be within the wireless range of a vulnerable Bluetooth device that has previously established a BR/EDR connection with another Bluetooth device whose address is known to the attacker.

The flaw stems from how two previously paired devices handle the long-term key, also known as the link key, that's used to mutually authenticate the devices and activate a secure connection between them.

The link key also ensures that users don't have to pair their devices every time a data transfer occurs between, say, a wireless headset and a phone, or between two laptops.

The attacker, then, can exploit the bug to request a connection to a vulnerable device by forging the other end's Bluetooth address, and vice versa, thus spoofing the identity and gaining full access to another device without actually possessing the long term pairing key that was used to establish a connection.

Put differently, the attack allows a bad actor to impersonate the address of a device previously paired with the target device.

What's more, BIAS can be combined with other attacks, including the KNOB (Key Negotiation of Bluetooth) attack, which occurs when a third party forces two or more victims to agree on an encryption key with reduced entropy, thus allowing the attacker to brute-force the encryption key and use it to decrypt communications.

Devices Not Updated Since December 2019 Affected

With most standard-compliant Bluetooth devices impacted by the vulnerability, the researchers said they tested the attack against as many as 30 devices, including smartphones, tablets, laptops, headphones, and single-board computers such as Raspberry Pi. All the devices were found to be vulnerable to BIAS attacks.

The Bluetooth SIG said it's updating the Bluetooth Core Specification to "avoid a downgrade of secure connections to legacy encryption," which lets the attacker initiate "a master-slave role switch to place itself into the master role and become the authentication initiator."

In addition to urging companies to apply the necessary patches, the organization is recommending that Bluetooth users install the latest updates from device and operating system manufacturers.

"The BIAS attacks are the first uncovering issues related to Bluetooth's secure connection establishment authentication procedures, adversarial role switches, and Secure Connections downgrades," the research team concluded. "The BIAS attacks are stealthy, as Bluetooth secure connection establishment does not require user interaction."

Source :

New DNS Vulnerability Lets Attackers Launch Large-Scale DDoS Attacks

Israeli cybersecurity researchers have disclosed details about a new flaw in the DNS protocol that can be exploited to launch amplified, large-scale distributed denial-of-service (DDoS) attacks to take down targeted websites.

Called NXNSAttack, the flaw hinges on the DNS delegation mechanism to force DNS resolvers to generate more DNS queries to authoritative servers of the attacker's choice, potentially causing a botnet-scale disruption to online services.

"We show that the number of DNS messages exchanged in a typical resolution process might be much higher in practice than what is expected in theory, mainly due to a proactive resolution of name-servers' IP addresses," the researchers said in the paper.

"We show how this inefficiency becomes a bottleneck and might be used to mount a devastating attack against either or both, recursive resolvers and authoritative servers."

Following responsible disclosure of NXNSAttack, several of the companies in charge of the internet infrastructure, including PowerDNS (CVE-2020-10995), CZ.NIC (CVE-2020-12667), Cloudflare, Google, Amazon, Microsoft, Oracle-owned Dyn, Verisign, and IBM Quad9, have patched their software to address the problem.

The DNS infrastructure has previously been on the receiving end of a rash of DDoS attacks through the infamous Mirai botnet, including the 2016 attack against the Dyn DNS service, which crippled some of the world's biggest sites, including Twitter, Netflix, Amazon, and Spotify.

The NXNSAttack Method

A recursive DNS lookup happens when a DNS server communicates with multiple authoritative DNS servers in a hierarchical sequence to locate an IP address associated with a domain (e.g., www.google.com) and return it to the client.

This resolution typically starts with the DNS resolver operated by your ISP or with public DNS servers, like Cloudflare (1.1.1.1) or Google (8.8.8.8), whichever is configured on your system.

The resolver passes the request to an authoritative DNS name server if it's unable to locate the IP address for a given domain name.

But if the first authoritative DNS name server also doesn't hold the desired records, it returns a delegation message with the addresses of the next authoritative servers that the DNS resolver can query.

In other words, an authoritative server tells the recursive resolver: "I do not know the answer, go and query these and these name servers, e.g., ns1, ns2, etc., instead".

This hierarchical process goes on until the DNS resolver reaches the correct authoritative server that provides the domain's IP address, allowing the user to access the desired website.
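
The referral step is easy to observe directly. Here is a minimal sketch using the dnspython library (pip install dnspython), which is this example's own choice of tooling, not something the researchers used: it sends one query to a root server and prints the delegation it gets back.

```python
# One step of iterative resolution: ask a root server for www.google.com.
# The root does not know the answer, so its response carries a referral:
# the AUTHORITY section lists the .com name servers to ask next.
import dns.message
import dns.query

query = dns.message.make_query("www.google.com", "A")
response = dns.query.udp(query, "198.41.0.4", timeout=5)  # a.root-servers.net

for rrset in response.authority:
    print(rrset)
```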

Researchers found that these large, undesired overheads can be exploited to trick recursive resolvers into continuously sending large numbers of packets to a targeted domain instead of to legitimate authoritative servers.

In order to mount the attack through a recursive resolver, the attacker must be in possession of an authoritative server, the researchers said.

"This can be easily achieved by buying a domain name. An adversary who acts as an authoritative server can craft any NS referral response as an answer to different DNS queries," the researchers said.

The NXNSAttack works by sending a request for an attacker-controlled domain (e.g., "attacker.com") to a vulnerable DNS resolving server, which would forward the DNS query to the attacker-controlled authoritative server.

Instead of returning addresses to the actual authoritative servers, the attacker-controlled authoritative server responds to the DNS query with a list of fake server names or subdomains controlled by the threat actor that points to a victim DNS domain.

The DNS resolver then forwards the query to all the nonexistent subdomains, creating a massive surge in traffic to the victim domain.

The researchers said the attack can amplify the number of packets exchanged by the recursive resolver by a factor of more than 1,620, thereby overwhelming not only the DNS resolvers with more requests than they can handle, but also flooding the target domain with superfluous requests and taking it down.

What's more, using a botnet such as Mirai as a DNS client can further augment the scale of the attack.

"Controlling and acquiring a huge number of clients and a large number of authoritative NSs by an attacker is easy and cheap in practice," the researchers said.

"Our initial goal was to investigate the efficiency of recursive resolvers and their behavior under different attacks, and we ended up finding a new seriously looking vulnerability, the NXNSAttack," the researchers concluded.

"The key ingredients of the new attack are (i) the ease with which one can own or control an authoritative name server, and (ii) the usage of nonexistent domain names for name servers and (iii) the extra redundancy placed in the DNS structure to achieve fault tolerance and fast response time," they added.

It's highly recommended that network administrators who run their own DNS servers update their DNS resolver software to the latest version.

Source :
https://thehackernews.com/2020/05/dns-server-ddos-attack.html

What is Azure Bastion?

In this post, you’ll get a short introduction to Azure Bastion Host. To be honest, I still don’t know if I should pronounce it as [basˈti̯oːn] (German), /bæstʃən/ (US English) or [basˈt̪jõn] (French), but that shouldn’t stop us from learning more about Azure Bastion Host, what it is, and when it’s useful.

We will also point you to a webinar on Azure Security along the way.

So let’s start.

What is Azure Bastion Host?

Azure Bastion Host is a Jump-server as a Service within an Azure vNet (note that this service is currently in preview). What does that mean exactly? Well, a jump server is a fixed point on a network that is the sole place for you to remote in, reach other servers and services, and manage the environment. Now some will say: "But I can build my own jump server VM myself!" While you’re certainly free to do that, there are some key differences between the self-built VM option and a Bastion Host.

A regular Jump-server VM must either be reachable via VPN or have a public IP with RDP and/or SSH open to the Internet. Option one, in some environments, is rather complex. Option two is a security nightmare. With Azure Bastion Host, you can solve this access issue. Azure Bastion enables you to use RDP and SSH via the Internet or (if available) via a VPN, through the Azure Portal. The VM does not need a public IP, which GREATLY increases security for the target machine.

NOTE: Looking for more great content on security? Watch our webinar on Azure Security Center On-Demand.

After the deployment (which we’ll talk about in a second), Bastion becomes the 3rd option when connecting to a VM through the Azure Portal, as shown below.

[Screenshot: Bastion shown as a connection option for a virtual machine in the Azure Portal]

After you hit connect, an HTTPS browser window will open and your session will run within an SSL-encrypted window.

[Screenshot: a Bastion session running in the browser]

Azure Bastion Use Cases

Now let’s list some possible use-cases. Azure Bastion can be very useful (but not limited) to these scenarios:

  1. Your Azure-based VMs are running in a subscription where you’re unable to connect via VPN, and for security reasons, you cannot set up a dedicated Jump-host within that vNet.
  2. The usage of a Jump-host or Terminal Server in Azure would be more cost-intensive than using a Bastion Host within the VNet (e.g. when you have more than one admin or user working on the host at the same time.)
  3. You want to give developers access to a single VM without giving them access to additional services like a VPN or other things running within the VNet.
  4. You want to implement Just in Time (JIT) administration in Azure. You can deploy and enable Bastion Host on the fly, as you need it. This allows you to implement it as part of your operating system runbook when you need to maintain the OS of an Azure-based VM. Azure Bastion allows you to do this without setting up permanent access to the VM.

How to deploy Azure Bastion Host in preview

The way you deploy Azure Bastion Host within a VNet is pretty straightforward. Let’s go through the steps together.

    1. Open the Azure Preview Portal through the following link.
    2. Search for the feature in the Azure Marketplace and walk through the deployment wizard by filling out the fields shown below.

[Screenshot: the Create a Bastion deployment wizard]

Again, the deployment is quite simple and most options are fairly well explained within the UI. However, if you want further details, you can find them in the official feature documentation here.
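
If you prefer the command line over the portal wizard, the same deployment can be sketched with the Azure CLI. Treat this as a sketch under a few assumptions: the resource names are placeholders, the dedicated subnet must be named exactly AzureBastionSubnet, and while the service is in preview the bastion commands may require a recent CLI version or extension.

```bash
# Sketch: deploy Azure Bastion into an existing vNet (names are placeholders).
# Bastion requires a dedicated subnet named exactly "AzureBastionSubnet"
# and a Standard-SKU public IP.
az network vnet subnet create \
  --resource-group MyRG --vnet-name MyVNet \
  --name AzureBastionSubnet --address-prefixes 10.0.254.0/27

az network public-ip create \
  --resource-group MyRG --name MyBastionIP --sku Standard

az network bastion create \
  --resource-group MyRG --name MyBastion \
  --vnet-name MyVNet --public-ip-address MyBastionIP --location westeurope
```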

Also, be aware that a Bastion Host must be implemented in every vNet where you want to connect to a VM. Currently, Bastion does not support vNet Peering.

How Much Does Azure Bastion Cost?

Pricing for Bastion is pretty easy to understand. As with all Microsoft VM services, you pay for the time the Bastion host is deployed, and for each Bastion service you have deployed. You can easily calculate the costs for the Bastion Hosts you need via the Azure Price Calculator.

I made my example for one Bastion Host in West Europe, with the assumption it would be needed all month long.

[Screenshot: example cost estimate for one Bastion Host in the Azure Price Calculator]

Bastion Roadmap Items

Being in preview, there are still a number of things that Microsoft is adding to Bastion’s feature set. This includes things like:

  1. Single-Sign-On with Azure AD
  2. Multi-Factor Auth
  3. vNet Peering (Not confirmed, but being HEAVILY requested by the community right now)

vNet Peering support would make it so that only a single Bastion Host in a Hub or Security vNet is needed.

You can see additional feature requests or submit your own via the Microsoft Feedback Forum.

If you like a feature request or want to push your own, keep an eye on the votes. The more votes a piece of feedback has, the more likely it is that Microsoft will work on the feature.

Source :
https://www.altaro.com/hyper-v/what-is-azure-bastion/