New Cache Side Channel Attack Can De-Anonymize Targeted Online Users

A group of academics from the New Jersey Institute of Technology (NJIT) has warned of a novel technique that could be used to defeat anonymity protections and identify a unique website visitor.

“An attacker who has complete or partial control over a website can learn whether a specific target (i.e., a unique individual) is browsing the website,” the researchers said. “The attacker knows this target only through a public identifier, such as an email address or a Twitter handle.”

The cache-based targeted de-anonymization attack is a cross-site leak that involves the adversary leveraging a service such as Google Drive, Dropbox, or YouTube to privately share a resource (e.g., image, video, or a YouTube playlist) with the target, followed by embedding the shared resource into the attack website.

This can be achieved by, say, privately sharing the resource with the target using the victim’s email address or the appropriate username associated with the service and then inserting the leaky resource using an <iframe> HTML tag.

In the next step, the attacker tricks the victim into visiting the malicious website and clicking on the aforementioned content, causing the shared resource to be loaded as a pop-under window (as opposed to a pop-up) or a browser tab — a method that’s been used by advertisers to sneakily load ads.

This exploit page, as rendered by the target’s browser, is used to determine whether the visitor can access the shared resource; successful access indicates that the visitor is indeed the intended target.

The attack, in a nutshell, aims to unmask the users of a website under the attacker’s control by linking those visitors to their social media accounts or email addresses through a piece of shared content.

In a hypothetical scenario, a bad actor could share a video hosted on Google Drive with a target’s email address, then embed this video in the lure website. When visitors land on the portal, successful loading of the video can be used as a yardstick to infer whether the intended victim is among them.


The attacks, which are practical to exploit across desktop and mobile systems with multiple CPU microarchitectures and different web browsers, are made possible by means of a cache-based side channel that’s used to glean if the shared resource has been loaded and therefore distinguish between targeted and non-targeted users.

Put differently, the idea is to observe the subtle timing differences that arise when the shared resource is being accessed by the two sets of users, which, in turn, occurs due to differences in the time it takes to return an appropriate response from the web server depending on the user’s authorization status.

The attacks also take into account a second set of differences on the client-side that happens when the web browser renders the relevant content or error page based on the response received.

“There are two main causes for differences in the observed side channel leakages between targeted and non-targeted users – a server-side timing difference and a client-side rendering difference,” the researchers said.
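
To make the server-side component concrete, here is a minimal sketch (the URL and session cookie are hypothetical, and this is not the researchers’ tooling) of the authorization-dependent timing difference described above, measured directly; the real attack observes this difference indirectly through a cache-based side channel in the victim’s browser:

    # Minimal sketch: measure the authorization-dependent response time of a
    # privately shared resource. The URL and cookie below are hypothetical;
    # the actual attack infers this difference via the browser cache rather
    # than via direct requests.
    import statistics
    import time

    import requests

    SHARED_RESOURCE_URL = "https://drive.example.com/shared/abc123"  # hypothetical

    def median_response_time(session: requests.Session, samples: int = 20) -> float:
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            session.get(SHARED_RESOURCE_URL)
            timings.append(time.perf_counter() - start)
        return statistics.median(timings)

    authorized = requests.Session()
    authorized.cookies.set("session", "TARGET_USER_SESSION")  # hypothetical cookie

    anonymous = requests.Session()  # no credentials: server returns an error page

    # A measurable gap between the two medians is the server-side leak the
    # researchers describe; constant-time responses would close it.
    print("authorized:", median_response_time(authorized))
    print("anonymous: ", median_response_time(anonymous))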


While most popular platforms such as those from Google, Facebook, Instagram, LinkedIn, Twitter, and TikTok were found susceptible, one notable service that’s immune to the attack is Apple iCloud.

It’s worth pointing out that the de-anonymization method banks on the prerequisite that the targeted user is already logged in to the service. As a mitigation, the researchers have released a browser extension called Leakuidator+ that’s available for Chrome, Firefox, and Tor browsers.

To counter the timing and rendering side channels, website owners are recommended to design web servers to return their responses in constant time, irrespective of whether the user is provisioned to access the shared resource, and make their error pages as similar as possible to the content pages to minimize the attacker-observable differences.

“As an example, if an authorized user was going to be shown a video, the error page for the non-targeted user should also be made to show a video,” the researchers said, adding websites should also be made to require user interaction before rendering content.
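
As a minimal sketch of the constant-time recommendation (a hypothetical Flask endpoint, not any specific vendor’s code), the server below pads every response to a fixed floor and serves an error page shaped like the content page:

    # Minimal sketch of the mitigation, assuming a hypothetical Flask service:
    # pad every response to a fixed duration and serve an error page that
    # looks like the content page, so neither timing nor rendering leaks.
    import time

    from flask import Flask, Response

    app = Flask(__name__)
    RESPONSE_FLOOR_SECONDS = 0.25  # chosen above the slowest legitimate path

    def is_authorized(resource_id: str) -> bool:
        return False  # placeholder for the real authorization check

    @app.route("/shared/<resource_id>")
    def shared_resource(resource_id: str) -> Response:
        start = time.monotonic()
        if is_authorized(resource_id):
            body = "<video src='/media/real.mp4'></video>"
        else:
            # Error page deliberately shaped like the content page.
            body = "<video src='/media/placeholder.mp4'></video>"
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, RESPONSE_FLOOR_SECONDS - elapsed))  # constant-time pad
        return Response(body, status=200, mimetype="text/html")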

“Knowing the precise identity of the person who is currently visiting a website can be the starting point for a range of nefarious targeted activities that can be executed by the operator of that website.”

The findings arrive weeks after researchers from the University of Hamburg, Germany, demonstrated that mobile devices leak identifying information such as passwords and past holiday locations via Wi-Fi probe requests.

In a related development, MIT researchers last month revealed that the root cause of a website fingerprinting attack was not signals generated by cache contention (aka a cache-based side channel) but rather system interrupts, while showing that interrupt-based side channels can be used to mount a powerful website fingerprinting attack.

Source :
https://thehackernews.com/2022/07/new-cache-side-channel-attack-can-de.html

5 Key Things We Learned from CISOs of Smaller Enterprises Survey

New survey reveals lack of staff, skills, and resources driving smaller teams to outsource security.

As business begins its return to normalcy (however “normal” may look), CISOs at small and medium-size enterprises (500–10,000 employees) were asked to share their cybersecurity challenges and priorities, and their responses were compared with those of a similar survey from 2021.

Here are the 5 key things we learned from 200 responses:

— Remote Work Has Accelerated the Use of EDR Technologies

In 2021, 52% of CISOs surveyed were relying on endpoint detection and response (EDR) tools. This year that number has leapt to 85%. In contrast, last year 45% were using network detection and response (NDR) tools, while this year just 6% employ NDR. Compared to 2021, double the number of CISOs and their organizations are seeing the value of extended detection and response (XDR) tools, which combine EDR with integrated network signals. This is likely due to the increase in remote work, which is more difficult to secure than when employees work within the company’s network environment.

— 90% of CISOs Use an MDR Solution

There is a massive skills gap in the cybersecurity industry, and CISOs are under increasing pressure to recruit internally. Especially in small security teams where additional headcount is not the answer, CISOs are turning to outsourced services to fill the void. In 2021, 47% of CISOs surveyed relied on a Managed Security Services Provider (MSSP), while 53% were using a managed detection and response (MDR) service. This year, just 21% are using an MSSP, and 90% are using MDR.

— Overlapping Threat Protection Tools are the #1 Pain Point for Small Teams

The majority (87%) of companies with small security teams struggle to manage and operate their threat protection products. Among these companies, 44% struggle with overlapping capabilities, while 42% struggle to visualize the full picture of an attack when it occurs. These challenges are intrinsically connected, as teams find it difficult to get a single, comprehensive view with multiple tools.

— Small Security Teams Are Ignoring More Alerts

Small security teams are giving less attention to their security alerts. Last year 14% of CISOs said they look only at critical alerts, while this year that number jumped to 21%. In addition, organizations are increasingly letting automation take the wheel. Last year, 16% said they ignore automatically remediated alerts, and this year that’s true for 34% of small security teams.

— 96% of CISOs Are Planning to Consolidate Security Platforms

Almost all CISOs surveyed have consolidation of security tools on their to-do lists, compared to 61% in 2021. Not only does consolidation reduce the number of alerts – making it easier to prioritize and view all threats – respondents believe it will stop them from missing threats (57%), reduce the need for specific expertise (56%), and make it easier to correlate findings and visualize the risk landscape (46%). XDR technologies have emerged as the preferred method of consolidation, with 63% of CISOs calling it their top choice.

Download 2022 CISO Survey of Small Cyber Security Teams to see all the results.

Source :
https://thehackernews.com/2022/07/5-key-things-we-learned-from-cisos-of.html

Spectre and Meltdown Attacks Against OpenSSL

The OpenSSL Technical Committee (OTC) was recently made aware of several potential attacks against the OpenSSL libraries which might permit information leakage via the Spectre attack [1]. Although there are currently no known exploits for the Spectre attacks identified, it is plausible that some of them might be exploitable.

Local side channel attacks, such as these, are outside the scope of our security policy; however, the project generally does introduce mitigations when they are discovered. In this case, the OTC has decided that these attacks will not be mitigated by changes to the OpenSSL code base. The full reasoning behind this is given below.

The Spectre attack vector, while applicable everywhere, is most important for code running in enclaves such as trusted execution environments, because it bypasses the protections they are designed to offer.

The reasoning behind the OTC’s decision to not introduce mitigations for these attacks is multifold:

  • Such issues do not fall under the scope of our defined security policy. Even though we often apply mitigations for such issues we do not mandate that they are addressed.
  • Maintaining code with mitigations in place would be significantly more difficult. Most potentially vulnerable code is extremely non-obvious, even to experienced security programmers. It would thus be quite easy to introduce new attack vectors or fix existing ones unknowingly. The mitigations themselves obscure the code which increases the maintenance burden.
  • Automated verification and testing of the attacks is necessary but not sufficient. We do not have automated detection for this family of vulnerabilities and if we did, it is likely that variations would escape detection. This does not mean we won’t add automated checking for issues like this at some stage.
  • These problems are fundamentally a bug in the hardware. The software running on the hardware cannot be expected to mitigate all such attacks. Some of the in-CPU caches are completely opaque to software and cannot be easily flushed, making software mitigation quixotic. However, the OTC recognises that fixing hardware is difficult and in some cases impossible.
  • Some kernels and compilers can provide partial mitigation. Specifically, several common compilers have introduced code generation options addressing some of these classes of vulnerability:
    • GCC has the -mindirect-branch, -mfunction-return and -mindirect-branch-register options
    • LLVM has the -mretpoline option
    • MSVC has the /Qspectre option

  [1] Nicholas Mosier, Hanna Lachnitt, Hamed Nemati, and Caroline Trippel, “Axiomatic Hardware-Software Contracts for Security,” in Proceedings of the 49th ACM/IEEE International Symposium on Computer Architecture (ISCA), 2022.

Posted by OpenSSL Technical Committee May 13th, 2022 12:00 am

Source :
https://www.openssl.org/blog/blog/2022/05/13/spectre-meltdown/

Prepare for a New Cryptographic Standard to Protect Against Future Quantum-Based Threats

The National Institute of Standards and Technology (NIST) has announced that a new post-quantum cryptographic standard will replace current public-key cryptography, which is vulnerable to quantum-based attacks. Note: the term “post-quantum cryptography” is often referred to as “quantum-resistant cryptography” and includes, “cryptographic algorithms or methods that are assessed not to be specifically vulnerable to attack by either a CRQC [cryptanalytically relevant quantum computer] or classical computer.” (See the National Security Memorandum on Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems for more information).

Although NIST will not publish the new post-quantum cryptographic standard for use by commercial products until 2024, CISA and NIST strongly recommend organizations start preparing for the transition now by following the Post-Quantum Cryptography Roadmap, which includes:

  • Inventorying your organization’s systems for applications that use public-key cryptography (a minimal illustrative sketch of this step follows the list).
  • Testing the new post-quantum cryptographic standard in a lab environment; however, organizations should wait until the official release to implement the new standard in a production environment.
  • Creating a plan for transitioning your organization’s systems to the new cryptographic standard that includes:
    • Performing an interdependence analysis, which should reveal issues that may impact the order of systems transition;
    • Decommissioning old technology that will become unsupported upon publication of the new standard; and
    • Ensuring validation and testing of products that incorporate the new standard.
  • Creating acquisition policies regarding post-quantum cryptography. This process should include:
    • Setting new service levels for the transition.
    • Surveying vendors to determine possible integration into your organization’s roadmap and to identify needed foundational technologies.
  • Alerting your organization’s IT departments and vendors about the upcoming transition.
  • Educating your organization’s workforce about the upcoming transition and providing any applicable training.

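As a minimal sketch of the first roadmap item (the certificate directory below is a hypothetical starting point; a real inventory would also cover protocols, libraries, and hardware, not just certificate files), the following uses the Python cryptography package to flag certificates whose public keys rely on quantum-vulnerable algorithms:

    # Minimal sketch of the inventory step: flag which certificates in a
    # directory still rely on quantum-vulnerable public-key algorithms.
    # The path is hypothetical; adapt it to your environment.
    from pathlib import Path

    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    def classify(cert_path: Path) -> str:
        cert = x509.load_pem_x509_certificate(cert_path.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            return f"RSA-{key.key_size} (quantum-vulnerable)"
        if isinstance(key, ec.EllipticCurvePublicKey):
            return f"EC-{key.curve.name} (quantum-vulnerable)"
        return type(key).__name__

    for path in Path("/etc/ssl/certs").glob("*.pem"):  # hypothetical location
        try:
            print(path.name, "->", classify(path))
        except ValueError:
            pass  # not a PEM certificate
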
For additional guidance and background, CISA and NIST strongly encourage users and administrators to review the Post-Quantum Cryptography Roadmap and NIST’s post-quantum cryptography project resources.

Best Practices for setting up Altaro VM Backup

This best practice guide goes through the Altaro VM Backup features, explaining their use and the optimal way to configure them in order to get the best out of the software.

You will need to adapt this to your specific environment, especially depending on the resources you have available; however, this guide takes you through the most important configurations, including those that are often overlooked.

Setting up the Altaro VM Backup Management Console

The Altaro VM Backup Management Console can be utilised to add and manage multiple hosts in one console. However, these hosts must be on the same LAN and at the same physical site (same building). Setups with multiple physical sites must have an instance of Altaro VM Backup at each site.

To manage these multiple installations, you can utilise the ‘Central Monitoring Console’ where you’ll be able to monitor as well as manage these Altaro VM Backup installations remotely.

A single Altaro VM Backup instance can manage both Hyper-V & VMware hosts.

For optimal results, note that Altaro runs some maintenance-specific tasks using (multiple) single-threaded operations. For this reason, installing Altaro VM Backup on a machine whose CPU has higher single-thread performance will yield faster results than installing on a machine whose CPU has more cores but lower single-thread performance.

Backup Locations

Make sure Opportunity Locks (Oplocks) are disabled if the backup location is a NAS.

If your backup location is a Windows machine, disable the equivalent of Oplocks (SMB leasing) by running the following command in PowerShell:

    Set-SmbServerConfiguration -EnableLeasing 0

Offsite Copies

With Altaro VM Backup, you are provided with the functionality of an Offsite Copy Location, which is a redundant/secondary copy of your backups. You can even back up your VMs to 2 different offsite copy locations for further redundancy of your data, so you can pick a cloud location as well as an Altaro Offsite Server, for instance.

There are multiple options for setting this up:

  • You can choose a Physical Drive connected to the management console (the best practice for offsites is to have them located in another building/location).
  • Drive Rotation/Swap which allows you to set up a pool of drives/network paths.
  • A Network Path (LAN only), or an offsite location via a WAN/VPN/Internet connection, which is an ideal tool for disaster recovery purposes. Please note that the latter situation (non-LAN) requires use of the Altaro Offsite Server.
  • Backup to Microsoft Azure, Amazon S3, or Wasabi.

Setting up an offsite copy location is as crucial as setting up backups to a primary location. Apart from the obvious benefit of having a redundant set of backups to restore from should the local backups become unusable due to disk corruption or other disk failures, a secondary copy of your backup sets also allows you to keep a broader history for your VM backups on your secondary location, so you’ll be able to go further back when restoring if required.

Deduplication

Altaro VM Backup makes use of Augmented In-line Deduplication. Enabling this from the ‘Advanced Settings’ screen is highly recommended, as it ensures that any common data blocks across virtual machines are only written to the backup location once. This saves a considerable amount of space and also makes backups much quicker, since common information is only transferred once.
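
To illustrate the general idea behind block-level deduplication (a generic sketch of content-addressed storage, not Altaro’s actual Augmented In-line engine), identical blocks are stored and transferred only once:

    # Generic illustration of block-level deduplication (not Altaro's engine):
    # identical blocks across VM disk images are stored and transferred once.
    import hashlib

    BLOCK_SIZE = 4096
    store: dict[str, bytes] = {}  # content-addressed block store

    def backup(data: bytes) -> list[str]:
        """Return the recipe (list of block hashes) needed to restore data."""
        recipe = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)  # written only if not seen before
            recipe.append(digest)
        return recipe

    def restore(recipe: list[str]) -> bytes:
        return b"".join(store[digest] for digest in recipe)

    vm1 = b"A" * 8192 + b"unique-to-vm1"
    vm2 = b"A" * 8192 + b"unique-to-vm2"  # shares two blocks with vm1
    r1, r2 = backup(vm1), backup(vm2)
    assert restore(r1) == vm1 and restore(r2) == vm2
    print(f"{len(r1) + len(r2)} blocks referenced, {len(store)} stored")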

Boot From Backup

The Boot From Backup feature comes with 2 options: ‘Verification Mode’ and ‘Recovery Mode’. This is a very good option for getting your RTO (recovery time objective) down, since you’re able to boot up the VM immediately from a backup location and start a restore in the background as well.

However, if you are planning to do this, it’s very important to have a fast backup location that can handle the I/O of a booted VM that’s essentially going into production. Please note that when the VM has finished restoring, it’s suggested to restart the restored VM as soon as you get a chance in order to switch to the restored drives, which will have faster I/O throughput.

Notifications

E-mail notifications are a simple and effective method of monitoring backup status, yet they’re often overlooked. Setting up these notifications will provide you with a quick overview of the status of your backup jobs, so you won’t need to log in to the Altaro Management Console every day to confirm the backup status.

This way you’ll be alerted of any backup failures, allowing you to address said issues before the next backup schedule and thereby ensuring that you always have a restorable backup point. As a general best practice, always monitor your backup notifications.

Master Encryption Key

The Master Encryption Key in Altaro is utilised to encrypt backups using AES 256-bit. It’s used if you choose to encrypt the local backups from the ‘Advanced Settings’ screen, and it’s required when configuring offsite copies, as offsite copies must be encrypted.

Altaro VM Backup will require the encryption key upon restoring, so it’s critical that you either remember it or take note of it in a secure password manager as there is no method of recovery for the master encryption key.

Scheduled Test Drills

Altaro VM Backup has the ability to run manual or automated verification of your backup data. This allows you to run scheduled verification jobs that will check the integrity of your backups on your backup location, or schedule full VM restores so that you can actually boot up the VM and confirm that everything works as expected. The VM will be restored with the NIC disabled so as to avoid IP conflicts with the production machine as well.

Failure of storage devices is not uncommon, therefore scheduling test drills is strongly advised for added peace of mind. Full instructions on configuring test drills are available in the Altaro documentation.

Other General Best Practices

  • Backups and production VMs should not be placed on the same drive.
  • Make sure Opportunity Locks (Oplocks) are disabled if the backup location is a NAS.
  • Backups should not be placed on a drive where an OS is running.
  • Altaro uses the drive it’s installed on as temporary storage and will require a small amount of free space (varying according to the size of the VMs being backed up).
  • Keep at least 10% of the backup location free.
  • The main Altaro VM Backup installation should not be installed on a machine that is also a domain controller (DC).
  • Directories/files inside the Altaro backup folder should not be tampered with, deleted or moved.
  • Do not take snapshots of DFSR databases: “Snapshots aren’t supported by the DFSR database or any other Windows multi-master databases. This lack of snapshot support includes all virtualization vendors and products. DFSR doesn’t implement USN rollback quarantine protection like Active Directory Domain Services.” (source)

Best Practices for Replication

Exclude Page File from Backup

As you’re aware, Altaro VM Backup takes note of all changes since the last backup and transfers all of the changed blocks to the backup location. The page file changes very often, potentially causing your replication jobs to take longer.

Therefore, excluding the page file from backup means fewer transferred changes, and as a result the replication jobs take less time. This can be done by placing the page file onto a separate VHDX/VMDK file from the VM itself; you can then follow the steps in the Altaro knowledge base to exclude that VHDX/VMDK file.

High Disk IO and Hypervisor Performance

Replication makes use of CDP (Continuous Data Protection) in order to take a backup every couple of minutes/hours, which is what makes replication possible.

It’s important to note, however, that you should only enable high-frequency CDP (15 minutes or less) on the VMs that really need it. This ensures that those VMs will be able to achieve the selected maximum frequency without impacting your hypervisor’s performance.

Source :
https://help.altaro.com/hc/en-us/articles/4416921650577-Best-Practices-for-setting-up-Altaro-VM-Backup

How to relocate the Altaro temporary files

Altaro VM Backup will create temporary files during backup operations and by default the location for these files is on the C: drive.

If you’d like to move the location of this temporary directory, please follow the steps below for the version you’re running:

7.6 and newer

To do so, you can follow these steps:

  1. Ensure you are running at least 7.6.14; if not, update to the latest version
  2. Firstly, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroBackupProfile
    If you’d like to move the temp files on the Altaro Offsite Server, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroOffsiteServerProfile
  3. Then create a text file named “OverrideTempFolder.txt” inside the newly created folder
  4. In the text file, enter the path where you wish to store Altaro’s temp files, for example:
    { "TempFolderPathOverride": "E:\\Temp\\Altaro" }
    Ensure this location exists and that you use a double backslash as a separator (as above)
  5. Restart all Altaro services; on the next operation, the Altaro temporary files will be stored in the above directory

7.5 and older

To do so, you can follow these steps:

  1. Ensure you are running at least version 5.0.97
  2. Firstly, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroBackupProfile
    If you’d like to move the temp files on the Altaro Offsite Server, create a folder named “Overrides” in this path: C:\ProgramData\Altaro\AltaroOffsiteServerProfile
  3. Then create a text file named “OverrideTempFolder.txt” inside the newly created folder
  4. In the text file, enter the path where you wish to store Altaro’s temp files, for example: E:\AltaroTemp (without quotes). Ensure this location exists
  5. Restart all Altaro services; on the next operation, the Altaro temporary files will be stored in the above directory

Tips

  • ProgramData is a hidden folder by default
  • Ensure that the file extension is showing, or you might end up with a file named “OverrideTempFolder.txt.txt”
  • Ensure there are no spaces at the end of the path and no extra line breaks in the text file

Source :
https://help.altaro.com/hc/en-us/articles/4416899962001

Which Altaro directories do I need to exclude from AntiVirus software?

If you are running antivirus software or file-scanning software, we do recommend excluding a couple of directories used by Altaro in order to ensure that its operation remains undisrupted.

We do recommend excluding the following:

  • all onsite backup drive directories
  • all offsite backup drive directories
  • C:\ProgramData\Altaro on the Altaro Management machine and on the Hyper-V hosts
  • C:\Program Files\Altaro on the Altaro Management machine and on the Hyper-V hosts

Also, if you have relocated the Altaro temporary files, ensure that you exclude that directory as well.

Source :
https://help.altaro.com/hc/en-us/articles/4416905883409-Which

Microsoft finds Raspberry Robin worm in hundreds of Windows networks

Microsoft says that a recently spotted Windows worm has been found on the networks of hundreds of organizations from various industry sectors.

The malware, dubbed Raspberry Robin, spreads via infected USB devices, and it was first spotted in September 2021 by Red Canary intelligence analysts.

Cybersecurity firm Sekoia also observed it using QNAP NAS devices as command-and-control (C2) servers in early November [PDF], while Microsoft said it found malicious artifacts linked to this worm created in 2019.

Redmond’s findings align with those of Red Canary’s Detection Engineering team, which also detected this worm on the networks of multiple customers, some of them in the technology and manufacturing sectors.

Although Microsoft observed the malware connecting to addresses on the Tor network, the threat actors are yet to exploit the access they gained to their victims’ networks.

This is in spite of the fact that they could easily escalate their attacks given that the malware can bypass User Account Control (UAC) on infected systems using legitimate Windows tools.

Microsoft shared this info in a private threat intelligence advisory shared with Microsoft Defender for Endpoint subscribers and seen by BleepingComputer.

Raspberry Robin worm infection flow (Red Canary)

Abuses Windows legitimate tools to infect new devices

As already mentioned, Raspberry Robin is spreading to new Windows systems via infected USB drives containing a malicious .LNK file.

Once the USB device is attached and the user clicks the link, the worm spawns a msiexec process using cmd.exe to launch a malicious file stored on the infected drive.

It infects new Windows devices, communicates with its command and control servers (C2), and executes malicious payloads using several legitimate Windows utilities:

  • fodhelper (a trusted binary for managing features in Windows settings),
  • msiexec (command line Windows Installer component),
  • and odbcconf (a tool for configuring ODBC drivers).

“While msiexec.exe downloads and executes legitimate installer packages, adversaries also leverage it to deliver malware,” Red Canary researchers explained.

“Raspberry Robin uses msiexec.exe to attempt external network communication to a malicious domain for C2 purposes.”
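
For defenders, here is a minimal sketch of what hunting for this behavior might look like (the CSV export and its field names are hypothetical, and this is not Microsoft’s or Red Canary’s detection logic): flag msiexec launched by cmd.exe with an external URL on the command line.

    # Generic sketch: flag cmd.exe spawning msiexec with a URL argument,
    # the Raspberry Robin behavior described above. The CSV format here is
    # hypothetical; adapt the field names to your own process-event logs.
    import csv
    import re

    URL_PATTERN = re.compile(r"https?://", re.IGNORECASE)

    def suspicious_msiexec_events(log_path: str):
        with open(log_path, newline="") as f:
            for event in csv.DictReader(f):  # expects parent, image, cmdline columns
                parent = event["parent"].lower()
                image = event["image"].lower()
                if parent.endswith("cmd.exe") and image.endswith("msiexec.exe") \
                        and URL_PATTERN.search(event["cmdline"]):
                    yield event

    for hit in suspicious_msiexec_events("process_events.csv"):
        print("possible Raspberry Robin C2 attempt:", hit["cmdline"])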

Security researchers who spotted Raspberry Robin in the wild are yet to attribute the malware to a threat group and are still working on finding its operators’ end goal.

However, Microsoft has tagged this campaign as high-risk, given that the attackers could download and deploy additional malware within the victims’ networks and escalate their privileges at any time.

Source :
https://www.bleepingcomputer.com/news/security/microsoft-finds-raspberry-robin-worm-in-hundreds-of-windows-networks/

What is Shadow IT and why is it so risky?

Shadow IT refers to the practice of users deploying unauthorized technology resources in order to circumvent their IT department. Users may resort to using shadow IT practices when they feel that existing IT policies are too restrictive or get in the way of them being able to do their jobs effectively.

An old school phenomenon

Shadow IT is not new. There have been countless examples of widespread shadow IT use over the years. In the early 2000s, for example, many organizations were reluctant to adopt Wi-Fi for fear that it could undermine their security efforts. However, users wanted the convenience of wireless device usage and often deployed wireless access points without the IT department’s knowledge or consent.

The same thing happened when the iPad first became popular. IT departments largely prohibited iPads from being used with business data because of the inability to apply group policy settings and other security controls to the devices. Even so, users often ignored IT and used iPads anyway.

Of course, IT pros eventually figured out how to secure iPads and Wi-Fi and eventually embraced the technology. However, shadow IT use does not always come with a happy ending. Users who engage in shadow IT use can unknowingly do irreparable harm to an organization.

Even so, the problem of shadow IT use continues to this day. If anything, shadow IT use has increased over the last several years. In 2021 for example, Gartner found that between 30% and 40% of all IT spending (in a large enterprise) goes toward funding shadow IT.

Shadow IT is on the rise in 2022

Remote work post-pandemic

One reason for the rise in shadow IT use is remote work. When users are working from home, it is easier for them to escape the notice of the IT department than it would be if they tried using unauthorized technology from within the corporate office. A study by Core found that remote work stemming from COVID requirements increased shadow IT use by 59%.

Tech is getting simpler for end-users

Another reason for the increase in shadow IT is the fact that it is easier than ever for a user to circumvent the IT department. Suppose for a moment that a user wants to deploy a particular workload, but the IT department denies the request.

A determined user can simply use their corporate credit card to set up a cloud account. Because this account exists as an independent tenant, IT will have no visibility into the account and may not even know that it exists. This allows the user to run their unauthorized workload with total impunity.

In fact, a 2020 study found that 80% of workers admitted to using unauthorized SaaS applications. This same study also found that the average company’s shadow IT cloud could be 10X larger than the company’s sanctioned cloud usage.

Know your own network

Given the ease with which a user can deploy shadow IT resources, it is unrealistic for IT to assume that shadow IT isn’t happening or that they will be able to detect shadow IT use. As such, the best strategy may be to educate users about the risks posed by shadow IT. A user who has a limited IT background may inadvertently introduce security risks by engaging in shadow IT. According to a Forbes Insights report, 60% of companies do not include shadow IT in their threat assessments.

Similarly, shadow IT use can expose an organization to regulatory penalties. In fact, it is often compliance auditors – not the IT department – who end up being the ones to discover shadow IT use.

Of course, educating users alone is not sufficient to stop shadow IT use. There will always be users who choose to ignore the warnings. Likewise, giving in to users’ demands for particular technologies might not always be in the organization’s best interests either. After all, there is no shortage of poorly written or outdated applications that could pose a significant threat to your organization, never mind applications that are known for spying on users.

The zero-trust solution to Shadow IT

One of the best options for dealing with shadow IT threats may be to adopt zero trust. Zero-trust is a philosophy in which nothing in your organization is automatically assumed to be trustworthy. User and device identities must be proven each time that they are used to access a resource.

There are many different aspects to a zero-trust architecture, and each organization implements zero-trust differently. Some organizations for instance, use conditional access policies to control access to resources. That way, an organization isn’t just granting a user unrestricted access to a resource, but rather is considering how the user is trying to access the resource. This may involve setting up restrictions around the user’s geographic location, device type, time of day, or other factors.
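
As a minimal sketch of such a conditional access check (generic policy values, not any specific vendor’s implementation), a decision can combine the signals mentioned above before granting a session:

    # Generic sketch of a conditional-access decision: identity alone is not
    # enough; location, device posture, and time of day are evaluated per request.
    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class AccessRequest:
        user: str
        mfa_passed: bool
        country: str
        device_managed: bool
        local_time: time

    ALLOWED_COUNTRIES = {"US", "CA"}          # example policy values
    WORK_HOURS = (time(7, 0), time(20, 0))

    def grant(request: AccessRequest) -> bool:
        return (
            request.mfa_passed                        # proven identity
            and request.country in ALLOWED_COUNTRIES  # geographic restriction
            and request.device_managed                # device posture
            and WORK_HOURS[0] <= request.local_time <= WORK_HOURS[1]
        )

    print(grant(AccessRequest("alice", True, "US", True, time(9, 30))))   # True
    print(grant(AccessRequest("alice", True, "US", False, time(9, 30))))  # False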

Zero-trust at the helpdesk

One of the most important things that an organization can do with regard to implementing zero trust is to better secure its helpdesk. Most organizations’ help desks are vulnerable to social engineering attacks.

When a user calls and requests a password reset, the helpdesk technician assumes that the user is who they claim to be, when in reality, the caller could actually be a hacker who is trying to use a password reset request as a way of gaining access to the network. Granting password reset requests without verifying user identities goes against everything that zero trust stands for.
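
To make the point concrete, here is a minimal sketch of a zero-trust reset workflow (a generic illustration, not Specops’ product): the technician cannot proceed until the caller proves control of an enrolled factor.

    # Generic sketch of a zero-trust helpdesk reset: the technician cannot
    # reset a password until the caller confirms an out-of-band challenge.
    import secrets

    def send_challenge(enrolled_device: str) -> str:
        code = f"{secrets.randbelow(10**6):06d}"
        print(f"(pushing code to {enrolled_device})")  # e.g. authenticator app
        return code

    def reset_password(username: str, enrolled_device: str) -> None:
        expected = send_challenge(enrolled_device)
        supplied = input(f"Code read back by '{username}': ").strip()
        if not secrets.compare_digest(expected, supplied):
            raise PermissionError("identity not proven; reset refused")
        print(f"identity verified; issuing reset link for {username}")

    reset_password("jdoe", "jdoe's enrolled phone")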

Specops Software’s Secure Service Desk can eliminate this vulnerability by making it impossible for a helpdesk technician to reset a user’s password until that user’s identity has been proven. You can test it out for free to reduce the risks of shadow IT in your network.

Source :
https://thehackernews.com/2022/06/what-is-shadow-it-and-why-is-it-so-risky.html

Vulnerability in Amazon Photos Android App Exposed User Information

Cybersecurity firm Checkmarx has published details on a high-severity vulnerability in the Amazon Photos Android application that could have allowed malicious apps to steal an Amazon access token.

With more than 50 million downloads, Amazon Photos offers cloud storage, allowing users to store photos and videos at their original quality, as well as to print and share photos, and to display them on multiple Amazon devices.

In November 2021, Checkmarx researchers identified an issue in the application that could have leaked the Amazon access token to malicious applications on the user’s device, potentially exposing the user’s personal information. The bug was addressed in December 2021.

The leaked Amazon access token is used for user authentication across Amazon APIs, including some that contain personal information such as names, addresses, and emails. Through the Amazon Drive API, for example, the attacker could access the user’s files, Checkmarx says.

The issue, the researchers explain, resided in a misconfigured component that was “exported in the app’s manifest file, thus allowing external applications to access it.”

The issue resulted in the access token being sent in the header of an HTTP request, but the most important aspect was the fact that an attacker could control the server receiving this request.

“The activity is declared with an intent-filter used by the application to decide the destination of the request containing the access token. Knowing this, a malicious application installed on the victim’s phone could send an intent that effectively launches the vulnerable activity and triggers the request to be sent to a server controlled by the attacker,” Checkmarx notes.

The leaked token could provide the attacker with access to all of the user information available through the Amazon API. Using the Amazon Drive API, the attacker could access users’ files and read, re-write, or delete their contents.

The researchers also explain that the access token could have allowed an attacker to modify files and erase their history to prevent recovery, or to completely delete files and folders from the user’s Amazon Drive account.

“With all these options available for an attacker, a ransomware scenario was easy to come up with as a likely attack vector. A malicious actor would simply need to read, encrypt, and re-write the customer’s files while erasing their history,” the researchers say.

The vulnerability might have had a wider impact, given that the potentially affected APIs that the researchers identified represent only a small subset of the entire Amazon ecosystem, Checkmarx also notes.

Source :
https://www.securityweek.com/vulnerability-amazon-photos-android-app-exposed-user-information