In the world of data security there are many different types of encryption, but arguably the two most common are AES and PGP. With so many three-letter acronyms in the technical landscape, it’s easy to get lost in data security conversations. So let’s catch up!
First, we’ll define both AES and PGP, and then we’ll look at how they compare to each other.
AES encryption
AES stands for Advanced Encryption Standard. It is a symmetric key encryption algorithm built on the Rijndael cipher, proposed by two cryptographers. The algorithm was developed after the National Institute of Standards and Technology (NIST) called on the cryptographic community to develop a new standard. NIST spent five years evaluating 15 competing designs for the AES project. In 2001, NIST announced the cipher developed by the two Belgians, Joan Daemen and Vincent Rijmen, as the adopted standard (known as FIPS-197) for electronic data encryption.
AES is a symmetric key encryption algorithm, which essentially means that the same key is used to encrypt and decrypt the data. A computer program takes clear text, processes it through an encryption key, and returns ciphertext. To decrypt the data, the program processes the ciphertext again with the same key and reproduces the clear text. This method requires fewer computational resources to complete the cipher process, which means a lower performance impact. AES is a good method for protecting sensitive data stored in large databases.
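To make the idea concrete, here is a minimal sketch of symmetric encryption and decryption with AES, assuming the third-party Python cryptography package (the library choice is ours, not something specified above):

```python
# A minimal sketch of AES in symmetric mode, assuming the third-party
# Python "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the single shared secret key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"sensitive database record", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # the same key decrypts
assert plaintext == b"sensitive database record"
```

Note that the same `key` value appears in both calls: whoever holds it can both encrypt and decrypt.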
That said, AES will not always be your go-to for encrypting data.
When sharing sensitive information with trading partners or transferring information across networks, using AES would leave your data vulnerable because you would need to share your encryption key with your trading partners. This means that while they would be able to decrypt the information you sent them, they could also decrypt anything else encrypted using that same key.
And if the key itself were compromised, then anyone in its possession could decrypt your data.
PGP encryption
The answer to the data-sharing problem above is PGP encryption. PGP uses both symmetric and asymmetric keys to encrypt data being transferred across networks.
PGP stands for Pretty Good Privacy. The name is ironic, because it’s actually much better than just “pretty good.”
PGP was developed by the American computer scientist Phil Zimmermann, who made it available for non-commercial use at no charge in 1991. To encrypt data, PGP generates a one-time symmetric key, encrypts the data with it, and then protects that symmetric key with an asymmetric key pair.
Asymmetric encryption uses two different keys for encrypting and decrypting sensitive information. The keys are created together as a mathematically linked pair, divided into what are called a public key and a private key. Data encrypted with the public key can only be decrypted with its matching private key.
PGP’s encryption is just as strong as AES, but it adds a layer of security: anyone who holds only the public key cannot decrypt the data. Another benefit of asymmetric encryption is authentication. After you have exchanged public keys with your trading partners, the private keys can be used to digitally sign the encrypted content, allowing the recipient to verify the authenticity of the sender.
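As a rough illustration of that sign-and-verify flow, here is a sketch using generic RSA signatures via the Python cryptography package. To be clear, this is not PGP’s actual message format (PGP defines its own), just the underlying principle:

```python
# A sketch of the sign-then-verify idea using generic RSA signatures via
# the Python "cryptography" package. This is not PGP's message format,
# only the underlying principle.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()   # shared with trading partners

message = b"encrypted payload sent to a trading partner"
signature = sender_private.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The recipient verifies with the sender's public key; verify() raises
# InvalidSignature if the message or signature was tampered with.
sender_public.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```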
PGP requires more computational resources, which is why it is usually not recommended for encrypting data in large databases where information is accessed frequently and every record you touch needs to be run through a cryptographic process.
AES or PGP: Which should I use?
When you are considering which encryption to use for your sensitive information, choose whichever will suit your needs best:
AES is fast and works best in closed systems and large databases
PGP should be used when sharing information across an open network, but it can be slower and works better for individual files.
One thing that’s become abundantly clear in the internet age is that preventing unauthorized people from gaining access to the data stored in web-enabled computer systems is extremely difficult. All it takes is for a worker to click on the wrong link in an email, or respond unwarily to a seemingly legitimate request for information, and an intruder could gain complete access to all your data. In today’s regulatory and public relations environments, that kind of breach can be catastrophic.
But what if you could be assured that even if an attacker got access to your information, they couldn’t use it? That’s the role of data encryption.
How encryption works
The basic idea of encryption is to convert data into a form in which the original meaning is masked, and only those who are properly authorized can decipher it. This is done by scrambling the information using mathematical functions based on a number called a key. An inverse process, using the same or a different key, is used to unscramble (or decrypt) the information. If the same key is used for both encryption and decryption, the process is said to be symmetric. If different keys are used, the process is said to be asymmetric.
Two of the most widely used encryption algorithms today are AES and RSA. Both are highly effective and secure, but they are typically used in different ways. Let’s take a look at how they compare.
AES encryption
AES (Advanced Encryption Standard) has become the encryption algorithm of choice for governments, financial institutions, and security-conscious enterprises around the world. The U.S. National Security Agency (NSA) uses it to protect the country’s “top secret” information.
The AES algorithm successively applies a series of mathematical transformations to each 128-bit block of data. Because the computational requirements of this approach are low, AES can be used with consumer computing devices such as laptops and smartphones, as well as for quickly encrypting large amounts of data. For example, the IBM z14 mainframe series uses AES to enable pervasive encryption in which all the data in the entire system, whether at rest or in transit, is encrypted.
AES is a symmetric algorithm which uses the same 128-, 192-, or 256-bit key for both encryption and decryption (the security of an AES system increases exponentially with key length). Even with a 128-bit key, the task of cracking AES by checking each of the 2^128 possible key values (a “brute force” attack) is so computationally intensive that even the fastest supercomputer would require, on average, more than 100 trillion years to do it. In fact, AES has never been cracked and, based on current technological trends, is expected to remain secure for years to come.
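You can sanity-check that claim with back-of-the-envelope arithmetic. The guessing rate below is a generous assumption of ours; whatever realistic rate you plug in, the timescale stays absurd:

```python
# Back-of-the-envelope check of the brute-force claim. The guessing rate
# below (10**18 keys per second) is our own generous assumption.
keys = 2**128
guesses_per_second = 10**18
seconds_per_year = 60 * 60 * 24 * 365

average_years = (keys / 2) / guesses_per_second / seconds_per_year
print(f"{average_years:.2e} years")   # about 5.4e12 years at this rate
```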
RSA encryption
RSA is named for the MIT scientists (Rivest, Shamir, and Adleman) who first described it in 1977. It is an asymmetric algorithm that uses a publicly known key for encryption, but requires a different key, known only to the intended recipient, for decryption. In this system, appropriately called public key cryptography (PKC), the public key is the product of multiplying two huge prime numbers together. Only that product, 1024, 2048, or 4096 bits in length, is made public. But RSA decryption requires knowledge of the two prime factors of that product. Because there is no known method of calculating the prime factors of such large numbers, only the creator of the public key can also generate the private key required for decryption.
RSA is more computationally intensive than AES, and much slower. It’s normally used to encrypt only small amounts of data.
How AES and RSA work together
A major issue with AES is that, as a symmetric algorithm, it requires that both the encryptor and the decryptor use the same key. This gives rise to a crucial key management issue – how can that all-important secret key be distributed to perhaps hundreds of recipients around the world without running a huge risk of it being carelessly or deliberately compromised somewhere along the way? The answer is to combine the strengths of AES and RSA encryption.
In many modern communication environments, including the internet, the bulk of the data exchanged is encrypted by the speedy AES algorithm. To get the secret key required to decrypt that data, authorized recipients publish a public key while retaining an associated private key that only they know. The sender then uses that public key and RSA to encrypt and transmit to each recipient their own secret AES key, which can be used to decrypt the data.
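Here is a minimal sketch of that hybrid pattern, again assuming the Python cryptography package: AES-GCM encrypts the bulk data, while RSA (with OAEP padding) encrypts only the small AES key:

```python
# A minimal sketch of hybrid AES + RSA encryption, assuming the Python
# "cryptography" package: AES-GCM encrypts the bulk data, RSA-OAEP wraps
# only the small AES key.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The recipient publishes a public key and keeps the private key secret.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender: encrypt the bulk data with a fresh AES key...
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"bulk data, possibly gigabytes", None)

# ...then encrypt just the 32-byte AES key with the recipient's RSA key.
wrapped_key = recipient_public.encrypt(aes_key, OAEP)

# Recipient: unwrap the AES key with the private key, then decrypt the data.
recovered_key = recipient_private.decrypt(wrapped_key, OAEP)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"bulk data, possibly gigabytes"
```

The expensive RSA operation touches only 32 bytes; fast AES handles the payload, which is exactly why the combination scales.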
Every so often, we encounter someone still using antiquated DES for encryption. If your organization hasn’t switched to the Advanced Encryption Standard (AES), it’s time for an upgrade. To better understand why, let’s compare DES and AES encryption.
Data Encryption Standard (DES)
What is DES encryption?
DES is a symmetric block cipher (shared secret key), with a key length of 56-bits. Published as the Federal Information Processing Standards (FIPS) 46 standard in 1977, DES was officially withdrawn in 2005.
The federal government originally developed DES encryption over 35 years ago to provide cryptographic security for all government communications. The idea was to ensure government systems all used the same, secure standard to facilitate interconnectivity.
Why DES is no longer effective
To show that DES was inadequate and should no longer be used in important systems, a series of challenges was sponsored to see how long it would take to decrypt a message. Two organizations played key roles in breaking DES: distributed.net and the Electronic Frontier Foundation (EFF).
The DES I contest (1997) took 84 days to break the encrypted message using a brute force attack.
In 1998, there were two DES II challenges issued. The first challenge took just over a month and the decrypted text was “The unknown message is: Many hands make light work”. The second challenge took less than three days, with the plaintext message “It’s time for those 128-, 192-, and 256-bit keys”.
The final DES III challenge in early 1999 only took 22 hours and 15 minutes. Electronic Frontier Foundation’s Deep Crack computer (built for less than $250,000) and distributed.net’s computing network found the 56-bit DES key, deciphered the message, and they (EFF & distributed.net) won the contest. The decrypted message read “See you in Rome (Second AES Candidate Conference, March 22-23, 1999)”, and was found after checking about 30 percent of the key space – finally proving that DES belonged to the past.
Even Triple DES is not enough protection
Triple DES (3DES) – also known as Triple Data Encryption Algorithm (TDEA) – is a way of using DES encryption three times. But even Triple DES was proven ineffective against brute force attacks (in addition to slowing down the process substantially).
According to draft guidance published by NIST on July 19, 2018, TDEA/3DES is officially being retired. The guidelines propose that Triple DES be deprecated for all new applications and disallowed after 2023.
Advanced Encryption Standard (AES)
What is AES encryption?
Published as the FIPS 197 standard in 2001, AES is a more mathematically efficient and elegant cryptographic algorithm, but its main strength rests in the option of various key lengths. AES allows you to choose a 128-bit, 192-bit, or 256-bit key, making it exponentially stronger than the 56-bit key of DES.
In terms of structure, DES uses the Feistel network, which divides the block into two halves before going through the encryption steps. AES, on the other hand, uses a substitution-permutation network, a series of substitution and permutation steps that create the encrypted block. The original DES designers made a great contribution to data security, but one could say that the aggregate effort of cryptographers behind the AES algorithm has been far greater.
One of the original requirements from the National Institute of Standards and Technology (NIST) for the DES replacement algorithm was that it had to be efficient both in software and hardware implementations. (DES was originally practical only in hardware implementations.) Java and C reference implementations were used to do performance analysis of the algorithms. AES was chosen through an open competition with 15 candidates from as many research teams around the world, and the total amount of resources allocated to that process was tremendous.
Finally, in October 2000, a NIST press release announced the selection of Rijndael as the proposed Advanced Encryption Standard (AES).
What are the differences between DES and AES encryption?
In short: DES, published as FIPS 46 in 1977, uses a 56-bit key and a Feistel structure, and was officially withdrawn in 2005. AES, published as FIPS 197 in 2001, offers 128-, 192-, or 256-bit keys and a substitution-permutation structure, and remains the standard today.
It started with one weird tweet. Then another. Quickly, some of the most prominent accounts on Twitter were all sending out the same message:
I am giving back to the community.
All Bitcoin sent to the address below will be sent back double! If you send $1,000, I will send back $2,000. Only doing this for 30 minutes.
[- BITCOIN WALLET ADDRESS -]
Are Apple, Elon Musk, Barack Obama, Uber, Joe Biden, and a host of others participating in a very transparent bitcoin scheme?
No. Of course not. The question was whether individual accounts were compromised or if something deeper was going on.
User Account Protection
These high profile accounts are prime targets for cybercriminals. They have a broad reach, and even a brief compromise of one of these accounts would significantly increase a hacker’s reputation in the underground.
That is why these accounts leverage the protections made available by Twitter in order to keep their accounts safe.
While it’s believed that one or two of these accounts failed to take these measures, it’s highly unlikely that dozens and dozens of them did. So what happened?
Rumours Swirl
As with any public attack, the Twitter-verse (ironically) was abuzz with speculation. That speculation ramped up when Twitter took the reasonable step of preventing any verified account from tweeting for about three hours.
This step helped prevent any additional scam tweets from being published and further raised the profile of this attack.
While some might shy away from raising the profile of an attack, this was a reasonable trade-off to prevent further damage to affected accounts and to help prevent the attack from taking more ground.
This move also provided a hint as to what was going on. If individual accounts were being attacked, it’s unlikely that this type of measure would’ve done much to stop the attacker. However, if the attacker was accessing a backend system, this mitigation would be effective.
Had Twitter itself been hacked?
Occam’s Razor
When imagining attack scenarios, a direct breach of the main service is one of the scenarios examined most closely, and as a result it is also one of the most planned-for.
Twitter — like any company — has challenges with its systems, but they center primarily around content moderation…their backend security is top-notch.
An example of this is an incident in 2018, when Twitter engineers made a mistake that meant anyone’s password could have been exposed in internal logs. Just in case, Twitter urged everyone to reset their password.
While possible, it’s unlikely that Twitter’s backend systems were directly breached. There is a much simpler potential explanation: insider access.
Internal Screenshot
Soon after the attack, some in the security community noticed a screenshot of an internal Twitter support tool circulating in underground discussion forums. This rare inside view showed what a Twitter support team member would apparently see.
“We used a rep that literally done all the work for us.”
Anonymous Source
What remains unclear is whether this is a case of social engineering (tricking a privileged insider into taking action) or a malicious insider (someone internally motivated to attack the system).
The difference is important for other defenders out there.
The investigation is ongoing, and Twitter continues to provide updates via @TwitterSupport.
Social Engineering
Donie O’Sullivan from CNN has a fantastic interview with the legendary Rachel Tobac showing how simple social engineering can be and the dangerous impact it can have.
If this attack was conducted through social engineering, the security team at Twitter would need to implement additional processes and controls to ensure that it doesn’t happen again.
Such a situation is what your team also needs to look at. While password resets, account closures, data transfers, and other critical processes are at particular risk of social engineering, financial transactions are atop the cybercriminal’s target list.
Adding additional side-channel confirmations, additional steps for verifications, firm and clear approvals and other process steps can help organizations mitigate these types of social engineering attacks.
Malicious Insider
If the attack turns out to be from a malicious insider, defenders need to take a different approach.
Malicious insiders are both a security problem and a human resources one.
From the security perspective, two key principles help mitigate the potential of these attacks: least privilege and separation of duties.
Least privilege means making sure individuals have only the technical access needed to complete their assigned tasks. Combined with smart separation of duties (one person to request a change, another to approve it), this significantly reduces the possibility of these attacks causing harm.
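Here is a toy sketch, entirely hypothetical and not based on Twitter’s real tooling, of how those two principles might look in code:

```python
# A toy sketch (hypothetical, not Twitter's tooling) of least privilege
# plus separation of duties.
ROLE_PERMISSIONS = {
    "support_tier1": {"view_account"},                    # least privilege:
    "support_tier2": {"view_account", "request_reset"},   # only what the job needs
    "supervisor":    {"view_account", "approve_reset"},
}

def can(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def reset_account(requester, requester_role, approver, approver_role):
    # Separation of duties: the requester may not approve their own change,
    # and the two steps require different permissions.
    if requester == approver:
        raise PermissionError("requester cannot approve their own change")
    if not can(requester_role, "request_reset"):
        raise PermissionError("requester lacks request_reset")
    if not can(approver_role, "approve_reset"):
        raise PermissionError("approver lacks approve_reset")
    print("reset executed")

reset_account("alice", "support_tier2", "bob", "supervisor")   # succeeds
```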
The other—and not often spoken of—side of these attacks is the reason behind the malicious intent. Some people are just malicious, and when presented with an opportunity, they will take it.
Other times, it’s an employee that feels neglected, passed over, or is disgruntled in some other way. A strong internal community, regular communication, and a strong HR program can help address these issues before they escalate to the point where aiding a cybercriminal becomes an enticing choice.
Support Risks
Underlying this whole situation is a more challenging issue: the level of access that support has to any given system.
It’s easy to think of a Twitter account as “yours.” It’s not. It’s part of a system run by a company that needs to monitor the health of the system, respond to support issues, and aid law enforcement when legally required.
All of these requirements necessitate a level of access that most don’t think about.
How often are you sharing sensitive information via direct message? Those messages are most likely accessible by support.
What’s to prevent them from accessing any given account or message at any time? We don’t know.
Hopefully, Twitter—and others—have clear guardrails (technical and policy-based) in place to prevent abuse of support access, and they regularly audit them.
It’s a hard balance to strike. User trust is at stake but also the viability of running a service.
Clear, transparent policies and controls are the keys to success here.
Abuse can be internal or external. Support teams typically have privileged access but are also among the lowest paid in the organization. Support—outside of the SRE community—is usually seen as entry-level.
These teams have highly sensitive access, and when things go south, can do a lot of harm. Again, the principles of least privilege, separation of duties, and a strong set of policies can help.
What’s Next?
In the coming days, more details of the attack will surface. In the meantime, the community is still struggling to reconcile the level of access gained and how it was used.
Getting access to some of the world’s most prominent accounts and then conducting a bitcoin scam? Based on the bitcoin transactions, it appears the cybercriminals made off with a little over USD 100,000. Not insignificant, but surely there were other opportunities?
Occam’s razor can help here again. Bitcoin scams and coin miners are the most direct methods for cybercriminals to capitalize on their efforts. Given the high-profile nature of the attack, the window before discovery was always going to be short. This may have been the “safest” bet for the criminal(s) to profit from this hack.
In the end, it’s a lesson for users of social networks and other services; even if you take all of the reasonable security precautions, you are relying on the service itself to help protect you. That might not always hold true.
For service providers and defenders, it’s a harsh reminder that the very tooling you put in place to run your service may be its biggest risk…a risk that’s often overlooked and underestimated.
We believe that WISPs serve a crucial role in these difficult times by providing Internet connectivity to all our communities. Our goal with UNMS Cloud and CRM is to empower WISPs with world-class tools and services so that they can focus on connecting the world.
That’s why we are proud to introduce the Ubiquiti Payment Gateway.
Easy and Affordable Payment Processing
We know that fees can add up. That’s why Ubiquiti Payment Gateway is offering an industry-leading processing fee of 1.9%+30c per transaction for the first year.
Better yet, the UPG is simple to use! No need to set up accounts with other payment gateways or use a separate site to manage your subscriptions – simply activate the UPG with a few clicks, go through our quick onboarding process, and you will be using the UPG in no time.
If you are currently using other payment options for your subscriptions, you can easily switch to the UPG from the billing settings. We will continue to support other payment options if you prefer to keep your existing payment processors.
For now, Ubiquiti Payment Gateway is only available in the United States, but we are working to bring it to other countries. Stay tuned.
Automatic Payments
The UPG isn’t the only thing we’ve been working on. We know that managing monthly payments can be time-consuming. That’s why we have built autopayments into the latest release of CRM. You can activate it in the billing settings:
Autopayments can be set to trigger at invoice creation date or at the due date. No more need to keep track of due dates!
An IoT device is simply any physical device with a defined purpose that has an operating system and can communicate through the internet with other things. Projections show that by 2021, about 25 billion IoT devices will be in operation, and 75 billion by the year 2025.
The support of so many connected devices used to be impossible. Now, advances in technology such as IPv6—the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet—and 5G are enabling the IoT revolution.
Benefits of IoT
The benefits of IoT span across all industries, including agriculture and healthcare, but personal lives are enhanced by IoT as well. For example, IoT thermostats monitor and control temperature, which is both convenient and cost saving. Smart watches and Fitbits monitor health stats such as pulse and steps, going so far as to send this information to a doctor or sounding an alert if a risk is detected. Smart cities, homes, and cars are other large-scale examples of IoT. While the ultimate realization of these technologies is a long way off and involves the use of imagination, advancements in IoT aren’t slowing down.
In fact, wearables are a perfect example of this. What was once a clunky step-tracking device is now a fashion statement that serves multiple purposes. In addition, designers and engineers are playing with fabrics that can be interwoven with IoT components so a sport shoe can measure speed, heartbeat, and sweat output or a jacket can charge phones.
Cyber Security With Wearables
However, wearables are prone to cyber attacks. While not a wearable, a similar connected device, a pacemaker, was compromised in 2018, which opened the industry’s eyes to the risks that come along with IoT devices. As Dr. Antoniou explains, “The pacemaker was compromised through a remote execution of the code into the person who was having the pacemaker.”
Manufacturers of connected wearables must practice due diligence to ensure that device security is implemented correctly. Dr. Antoniou emphasizes that the onus lies with the manufacturer.
Smart Cities and IoT
When people think of smart cities, they often envision traffic signals that change according to the current traffic pattern, tickets handed out automatically after cameras catch illegal incidents, or tolls automatically deducted from checking accounts when a sensor deems it appropriate. Smart cities are so much more than that, however.
Dr. Antoniou explains that a smart city is an ecosystem of those sensor components plus the services the city provides. That includes public lighting, smart roads and parks, and free Wi-Fi across the city. Services include DMV renewal and efficiency measures that keep costs and resource drain low through the use of a connected device or app.
Enterprise IoT
Enterprise IoT, also called Industry 4.0, consists of IoT devices that are designed to operate within a business to drive efficiency, effectiveness, and cost savings. Examples include voice-over-IP phones, smart lighting within the building, and the smart TVs and vending machines located in an enterprise building. With these tools, TVs gain internet access and vending machines can take debit cards. Security features like cameras and intrusion detection also fall into the realm of Enterprise IoT.
There is some concern that Industry 4.0 will eliminate jobs, but Dr. Antoniou believes the contrary. “I think we will see some reduction in certain jobs, but then we will see more demand in other jobs. As we know, cyber security is a very hot field nowadays, and if you go to the Department of Labor, you can see millions of openings especially in cyber security.” He goes on to explain that the IT administration and project management jobs lost to IoT will be filled by cyber security jobs—and then some. He also believes any collateral damage will be worth one other key benefit: sustainability.
“Sustainability is a big, big issue and a trending around the globe. So these devices, they will be helping us to accomplish [the things that make] a better planet: reduce waste [and] make more effective use of resources and consumption.”
On-Prem IoT Security
Many at-home IoT devices run on Wi-Fi connected to home modems. Dr. Antoniou encourages everyone who purchases a new IoT device to always read the manufacturer instructions in order to understand what kind of security parameters and configurations need to be put in place for that device. He also talks about Rule Zero, or his firewall rule. “I explicitly deny everything inbound to my home… That would protect your IoT, but also your other devices that are connected to your home network.”
Dr. Antoniou stresses the fact that IoT technology is still in its infancy. There are a lot of security and connectivity kinks to be worked out. Too many manufacturers are rolling out new, snazzy devices without actively imagining all the future security risks the device may enable. Cyber security needs to be an active part of the manufacturing supply chain.
Digital Identities
Finally, each device must have its own digital identity, or an identity that the device can assume for the entirety of its life. “So the digital identities on the IoTs, it is similar to what we call the identity access management, and it’s important to have them. And today, we don’t have a centralized digital identity management for IoTs.” Dr. Antoniou is an expert in the future of digital identity evolution: “if you get that digital ID and marry it with a microchip that is embedded to this device and it creates a strong encryption algorithm and somehow creates a digital ID in a centralized identity and access management database that is utilizing blockchain for verification, authentication, and authorization, that device now has a digital ID. It has a body of existence.”
Humans are identified by a social security number, which enables transactions like home loans or tax payments. Digital identities for IoT devices identify them within their ecosystem. From there, authorization is granted only to the IDs of the devices we want active on our home or enterprise network. This system is not currently in place. For example, a rogue employee could potentially go to work, pair their smart watch with a Bluetooth device, piggyback into the work network, and steal data. If that smart watch had a digital ID, the network would know instantly that it doesn’t belong.
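A toy sketch of that allowlist idea, with entirely made-up device IDs and no claim to match any existing standard:

```python
# A hypothetical sketch of a network that admits only devices whose
# digital IDs appear in a registry. The names and the registry itself
# are illustrative assumptions, not an existing standard.
REGISTERED_DEVICE_IDS = {
    "thermostat-7f3a9c",
    "badge-reader-1b44d2",
}

def admit_to_network(device_id: str) -> bool:
    if device_id in REGISTERED_DEVICE_IDS:
        print(f"{device_id}: admitted")
        return True
    print(f"{device_id}: unknown digital ID, blocked")
    return False

admit_to_network("thermostat-7f3a9c")   # admitted
admit_to_network("smartwatch-cafe01")   # blocked: not registered
```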
Currently, Dr. Antoniou explains that the best defense to IoT threats is enterprise education and policy. By running a risk analysis, companies start to think about connectivity as a whole. From there, they can create policies and train employees on those policies.
When asked about current IoT regulations, Dr. Antoniou explains that there aren’t any yet. Some countries are farther ahead than others, however, and most countries are working on them. There are also commonly accepted preliminary guidelines. “NIST, the National Institute of Standards and Technology, run by the United States government, has some preliminary frameworks for IoT, but it has not come to fruition as a standard yet.”
In today’s blog post, we’ll talk about the difference between authoritative and recursive domain name system (DNS) servers. We’ll explain how these two types of DNS servers form the foundation of the internet and help the world stay connected.
What is the domain name system?
Every computer on the Internet identifies itself with an “Internet Protocol” or “IP” address, which is a series of numbers — just like a phone number. That means you can contact any of those computers by typing in the website name, or you can type the IP address into your browser address bar. Either method will get you to the same destination. All servers that host websites and apps on the internet have IP addresses, too.
Give it a try: the IP address of the Cisco Umbrella website is 67.215.70.40.
The domain name system (DNS) is sometimes referred to as the “phone book” of the Internet. You can connect to our website by typing in the IP address in the address bar of your browser, but it’s much easier to type in umbrella.cisco.com. DNS was invented so that people didn’t need to remember long IP address numbers (like phone numbers) and could look up websites by human-friendly names like umbrella.cisco.com instead.
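You can watch this phone-book lookup happen with a single call from Python’s standard library:

```python
# The "phone book" lookup in one call from Python's standard library.
import socket

ip = socket.gethostbyname("umbrella.cisco.com")
print(ip)   # the IP address your browser would actually connect to
```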
There are too many sites on the Internet for your personal computer to keep a complete list. DNS servers power a website directory service to make things easier for humans. And like phone books, the directory is split into pieces: you won’t find one big book that contains every listing for everyone in the world (how many pages would that require? That’s a question for a different blog post).
There are two types of DNS servers: authoritative and recursive. Authoritative nameservers are like the phone book company that publishes multiple phone books, one per region. Recursive DNS servers are like someone who uses a phone book to look up the number to contact a person or company. Keep in mind, these companies don’t actually decide what number belongs to which person or company — that’s the responsibility of domain name registrars.
Let’s talk about the two different types in more detail.
What is a recursive DNS server?
When you type a website address into your browser address bar, it might seem like magic happens. In reality, the DNS system makes effortless internet browsing possible. First, your browser connects to a recursive DNS server. There are many thousands of recursive DNS servers in the world. Many people use the recursive DNS servers managed by their Internet Service Provider (ISP) and never change them. If you’re a Cisco Umbrella customer, you’re using our recursive DNS servers instead.
Once your computer connects to its assigned recursive DNS server, it asks the question “what’s the IP address assigned to that website name?” The recursive DNS server doesn’t have a copy of the phone book, but it does know where to find one. So it connects to another type of DNS server to continue the search.
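Here is a sketch of directing that question at a specific recursive server, using the third-party dnspython package; 208.67.222.222 is one of the public Cisco Umbrella (OpenDNS) resolver addresses:

```python
# A sketch using the third-party dnspython package (pip install dnspython)
# to ask a specific recursive DNS server instead of the system default.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222"]    # the recursive server we chose

for record in resolver.resolve("umbrella.cisco.com", "A"):
    print(record.address)
```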
What is an authoritative DNS nameserver?
The second type of DNS server holds a copy of the regional phone book that matches IP addresses with domain names. These are called authoritative DNS servers. Authoritative DNS nameservers are responsible for providing answers to recursive DNS nameservers about where specific websites can be found. These answers contain important information for each domain, like IP addresses.
Like phone books, there are different authoritative DNS servers that cover different regions (a company, the local area, your country, etc.) No matter what region it covers, an authoritative DNS server performs two important tasks. First, it stores lists of domain names and their associated IP addresses. Second, it responds to requests from a recursive DNS server (the person who needs to look up a number) about the correct IP address assigned to a domain name. After getting the answer, the recursive DNS server sends that information back to the computer (and browser) that requested it. The computer connects to the IP address, and the website loads, leading to a happy user who can go on with their day.
Putting it all together
This process happens so quickly that you don’t even notice it happening — unless, of course, something is broken.
Let’s use a real world example. Imagine that you are sitting at your computer and you want to search for pictures of cats wearing bow ties (hey, we don’t judge). So you decide to visit Google to do a web search.
First, you type www.google.com into your web browser. However, your computer doesn’t know the IP address of the server for www.google.com. So your computer starts by sending a query to its assigned recursive DNS nameserver. For this example, we’ll assume you’re one of our customers, so it’s a Cisco Umbrella server. Your computer asks the recursive DNS server to locate the IP address of www.google.com. The Cisco Umbrella recursive DNS nameserver is now assigned the task of finding the IP address of the website. Google is a popular website, so its result will probably be cached. But if the recursive DNS nameserver did not already have a DNS record for www.google.com cached in its system, it will need to ask for help from the authoritative DNS hierarchy to get the answer. This is more likely if you are going to a website that is newer or less popular.
Each part of a domain like www.google.com has a specific authoritative DNS nameserver (or group of redundant authoritative nameservers).
At the top of the server tree are the root domain nameservers. Every website address has an implied “.” at the end, even if we don’t type it in. This “.” designates the DNS root nameservers at the top of the DNS hierarchy. The root domain nameservers will know the IP addresses of the authoritative nameservers that handle DNS queries for the Top Level Domains (TLD) like “.com”, “.edu”, or “.gov”. The Umbrella recursive DNS server first asks the root domain nameserver for the IP address of the .com TLD server, since www.google.com is within the .com TLD.
The root domain nameserver responds with the address of the TLD server. Next, the Umbrella recursive DNS server asks the TLD authoritative server where it can find the authoritative DNS server for www.google.com. The TLD authoritative server responds, and the process continues: the authoritative server for www.google.com is asked where to find www.google.com, and it responds with the answer. Once the Cisco Umbrella recursive DNS server knows the IP address for the website, it responds to your computer with the appropriate IP address. Your browser loads Google, and you can get started with more important business: finding pictures of cats in bow ties.
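For the curious, here is a simplified sketch of the first hop of that delegation chase, using dnspython. The root server IP is real (a.root-servers.net), but a production resolver follows the whole referral chain and handles caching, retries, and much more:

```python
# A simplified sketch of the first hop of iterative DNS resolution.
import dns.message
import dns.query

ROOT_SERVER = "198.41.0.4"   # a.root-servers.net

def ask(server_ip: str, name: str):
    """Send a single non-recursive query and return the raw response."""
    query = dns.message.make_query(name, "A")
    return dns.query.udp(query, server_ip, timeout=5)

# Step 1: ask a root server. It does not know www.google.com's address,
# so it responds with a referral to the .com TLD nameservers.
response = ask(ROOT_SERVER, "www.google.com")
for rrset in response.authority:
    print(rrset)   # NS records for com.

# A full implementation would read a TLD server address from
# response.additional, repeat the query there, then query google.com's
# authoritative server, and finally return the A record to the client.
```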
Without DNS, the internet stops working
The DNS system is so important to the modern world that we often refer to it as the foundation of the internet. If your recursive DNS service breaks for some reason, you won’t be able to connect to websites unless you type in the IP addresses directly — and who keeps an emergency list of IP addresses in their desk? If the recursive DNS service you use is working, but has been slowed down for some reason (like a cyberattack), then your connection to websites will be slowed down, too.
Cisco Umbrella launched its recursive DNS service in 2006 (as OpenDNS) to provide everyone with reliable, safe, smart, and fast Internet connectivity. Umbrella has a highly resilient recursive DNS network. We’ve had 100% uptime with no DNS outages in our history. Our 30-plus worldwide data centers use anycast routing to send requests transparently to the fastest available data center with automatic failover.
By configuring your network to use Umbrella’s recursive DNS service, you’ll get the fastest and most reliable connectivity you can imagine. But Umbrella provides much more than just plain old internet browsing. Learn more about how we make the internet a safer place for cats in bow ties in our post about DNS-layer security.
VirusTotal, the famous multi-antivirus scanning service owned by Google, recently announced new threat detection capabilities it added with the help of an Israeli cybersecurity firm.
VirusTotal provides a free online service that analyzes suspicious files and URLs to detect malware and automatically shares them with the security community. With the onslaught of new malware types and samples, researchers rely on the rapid discovery and sharing provided by VirusTotal to keep their companies safe from attacks.
VirusTotal relies on a continuous stream of new malware discoveries to protect its members from significant damage.
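For example, a researcher can look up a file hash through VirusTotal’s public v3 REST API. This sketch uses the Python requests package and the well-known EICAR test-file hash; you would substitute your own API key:

```python
# A sketch of looking up a file hash via VirusTotal's public v3 REST API.
import requests

API_KEY = "YOUR_VT_API_KEY"                       # assumption: your own key
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"    # EICAR test file (MD5)

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
    timeout=10,
)
resp.raise_for_status()
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(stats)   # e.g. how many engines flag the file as malicious
```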
Cynet, the creator of the autonomous breach protection platform, has now integrated its Cynet Detection Engine into VirusTotal.
The benefits of this partnership are twofold. First, Cynet provides the VirusTotal partner network cutting-edge threat intelligence from its ML-based detection engine (CyAI) that actively protects the company’s clients around the globe.
CyAI is a continuously learning and evolving detection model that routinely contributes information about new threats that are not available in VirusTotal. Although many vendors use AI/ML models, the models’ ability to detect new threats varies greatly.
Cynet routinely outperforms third party and open source detection platforms and is frequently relied upon in incident response cases when underlying threats remain hidden from other solutions.
For example, Cynet recently conducted an Incident Response engagement for a large telecom provider. Cynet discovered several malicious files that did not appear in the VirusTotal database.
Contributing information on these newly discovered files helps our entire industry perform better and protect businesses against cyber-attacks.
Second, Cynet will leverage intelligence in VirusTotal to inform its CyAI model in order to continuously improve its detection capabilities and accuracy.
Cynet AI is continually evolving, constantly learning new datasets in order to improve its accuracy and decrease its already-low false positive ratio. Comparing files found to be malicious by CyAI against files also found to be malicious by other providers helps to quickly validate Cynet’s findings.
With Docker gaining popularity as a service to package and deploy software applications, malicious actors are taking advantage of the opportunity to target exposed API endpoints and craft malware-infested images to facilitate distributed denial-of-service (DDoS) attacks and mine cryptocurrencies.
According to a report published by Palo Alto Networks’ Unit 42 threat intelligence team, the purpose of these Docker images is to generate funds by deploying a cryptocurrency miner using Docker containers and leveraging the Docker Hub repository to distribute these images.
“Docker containers provide a convenient way for packaging software, which is evident by its increasing adoption rate,” Unit 42 researchers said. “This, combined with coin mining, makes it easy for a malicious actor to distribute their images to any machine that supports Docker and instantly starts using its compute resources towards cryptojacking.”
Docker is a well-known platform-as-a-service (PaaS) solution for Linux and Windows that allows developers to deploy, test, and package their applications in a contained virtual environment — in a way that isolates the service from the host system they run on.
The now-removed Docker Hub account, named “azurenql,” consisted of eight repositories hosting six malicious images capable of mining Monero, a privacy-focused cryptocurrency.
The malware author behind the images used a Python script to trigger the cryptojacking operation and took advantage of network anonymizing tools such as ProxyChains and Tor to evade network detection.
The coin mining code within the image then exploited the processing power of the infected systems to mine the blocks.
The images hosted on this account have been collectively pulled over two million times since the start of the campaign in October 2019, with one of the wallet IDs used to earn more than 525.38 XMR ($36,000).
Exposed Docker Servers Targeted With DDoS Malware
That’s not all. In a new mass-scanning operation spotted by Trend Micro researchers, unprotected Docker servers are being targeted with at least two different kinds of malware — XOR DDoS and Kaiji — to collect system information and carry out DDoS attacks.
“Attackers usually used botnets to perform brute-force attacks after scanning for open Secure Shell (SSH) and Telnet ports,” the researchers said. “Now, they are also searching for Docker servers with exposed ports (2375).”
It’s worth noting that both XOR DDoS and Kaiji are Linux trojans known for their ability to conduct DDoS attacks, with the latter written entirely from scratch using Go programming language to target IoT devices via SSH brute-forcing.
The XOR DDoS malware strain works by searching for hosts with exposed Docker API ports, sending a command to list all the containers hosted on the target server, and then infecting them with XOR DDoS.
Likewise, the Kaiji malware scans the internet for hosts with exposed port 2375 to deploy a rogue ARM container (“linux_arm”) that executes the Kaiji binary.
“While the XOR DDoS attack infiltrated the Docker server to infect all the containers hosted on it, the Kaiji attack deploys its own container that will house its DDoS malware,” the researchers said, noting the difference between the two malware variants.
In addition, both pieces of malware gather the details needed to mount a DDoS attack, such as domain names, network speeds, identifiers of running processes, and CPU and network information.
“Threat actors behind malware variants constantly upgrade their creations with new capabilities so that they can deploy their attacks against other entry points,” the researchers concluded.
“As they are relatively convenient to deploy in the cloud, Docker servers are becoming an increasingly popular option for companies. However, these also make them an attractive target for cybercriminals who are on the constant lookout for systems that they can exploit.”
Users and organizations who run Docker instances are advised to immediately check whether they expose API endpoints on the Internet, close those ports, and adhere to recommended best practices.
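A quick way to perform that check is to see whether the Docker Engine API answers unauthenticated on TCP 2375. This sketch uses Python’s requests package and a placeholder address; only probe systems you own:

```python
# A quick self-check: does the Docker Engine API answer unauthenticated
# on TCP 2375? The host below is a placeholder (TEST-NET documentation
# address); replace it with your own server's public address.
import requests

HOST = "203.0.113.10"

try:
    resp = requests.get(f"http://{HOST}:2375/version", timeout=5)
    print("EXPOSED: Docker API answered, version", resp.json().get("Version"))
except requests.RequestException:
    print("No unauthenticated Docker API reachable on port 2375 (good).")
```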
There’s nothing normal about the “new business normal.” The past few months have represented a complete shift in the way we think of work — and with vastly more employees working remotely than ever before, bringing with them an unprecedented quantity of exposure points and risk, the traditional cybersecurity model is proving woefully inadequate.
As cybercriminals ramp up attacks on anyone they perceive to be vulnerable, it isn’t enough to simply enable working from home. To truly ensure business continuity, you must secure and rearchitect these massively distributed networks with a platform capable of stopping the ever-increasing number of threats — both known and unknown.
Many businesses need to secure remote branch offices and retail stores, but it often isn’t possible — or practical — to have dedicated IT staff at each of these locations. SonicWall SD-Branch enables your organization to provide seamless connectivity that keeps pace with escalating bandwidth demands, and allows you to quickly and cost-effectively upgrade the network security at your remote locations.
Secure SD-Branch is a comprehensive solution that combines the power of secure SD-WAN, secure wireless and wired LAN technology with zero-touch deployment. Through the power of Capture Security Center — SonicWall’s cloud-based, single-pane-of-glass management console — the management, reporting and analytics for all locations is centralized and accessible from any web-enabled device.
SonicWall Switches
The shift to remote work has resulted in a sudden rise in the use of high-bandwidth applications — something that can easily overwhelm branch networks. At the same time, monitoring, managing and continually refreshing a growing number of network devices across multiple branches has grown exponentially more difficult, especially since many branch locations don’t have trained IT staff.
SonicWall Switches offer multi-gigabit wired performance that lets you rapidly scale your branch networks through remote installation. Available in seven models — ranging from eight to 48 ports, with gigabit and 10 gigabit ethernet ports — SonicWall Switches deliver network switching that accommodates the growing number of mobile and IoT devices in branch locations and provides the network performance needed to support cloud-delivered applications. SonicWall Switches also fit seamlessly into your existing SonicWall ecosystem, helping you to unify your network security posture. They’re SD-Branch-ready and managed via firewalls — either locally or through SonicWall’s cloud-based Capture Security Center — for unified, single-pane-of-glass management of your entire SonicWall infrastructure.
SonicWall Capture Client 3.0
SonicWall Capture Client 3.0 allows employees to operate remotely without having to worry about advanced threats, all while giving administrators comprehensive visibility and the ability to extend standard protections to remote endpoints. SonicWall Capture Client 3.0 is the latest iteration of our lightweight, unified endpoint protection platform, and it features a number of new and upgraded capabilities.
Capture Client 3.0’s comprehensive, client-based content filtering allows you to easily extend network-based content filtering to off-network users. It provides HTTP and HTTPS traffic inspection capabilities, along with the ability to assign exclusions for trusted applications or blacklist untrusted applications. Capture Client also offers real-time visibility of applications and identifies vulnerabilities.
Starting with Capture Client 3.0, administrators can leverage Azure Active Directory properties for granular policy assignment based on categories such as group membership — regardless of whether the directory is hosted on-prem or in the cloud.
Capture Client 3.0 also brings in support for the SentinelOne Linux agent, enabling you to extend next-generation antimalware capabilities to Linux servers. This feature will allow customers to safeguard Linux-based workloads irrespective of their location — on-prem or in the cloud.