Part 7: See How Customers are Accelerating Cloud Transformation with VMware Cloud on AWS

Ruchi Tandon
August 25, 2022

Looking to rapidly migrate to the cloud? Scale cost-effectively and strengthen disaster recovery? You’re not alone. Here’s how organizations are unlocking the power of hybrid cloud with VMware Cloud on AWS.

As we welcome back customers, partners, colleagues, and friends of VMware in person once again at VMware Explore 2022, one thing is unchanged – the impact that VMware Cloud on AWS has had on our customers’ cloud migration journeys.

In this blog, I want to highlight some of the recent customer stories and share our customers’ experiences with VMware Cloud on AWS. Also, check out Part 1, Part 2, Part 3, Part 4, Part 5, and Part 6 of this blog series for more customer stories across various use cases.

Schibsted Media Group

Schibsted Moves to the Cloud to Support Rapid Expansion and Gain Competitive Advantage

Schibsted Media Group, or Schibsted, a leading media corporation in Scandinavia, wanted to create a unified digital platform for its portfolio of 55+ brands, allowing each company to scale operations easily. To support their rapid growth, Schibsted’s team also knew that having a cloud strategy in place would be beneficial when acquiring new companies. It would also significantly reduce the time and resources otherwise spent managing multiple vendors and local data centers.

With VMware Cloud on AWS, the team at Schibsted was able to shut down 350 on-premises servers and migrate traditional workloads and legacy software to the cloud faster than expected. Their enterprise systems running apps such as Newspilot, SAP, HR systems, and a variety of advertising platforms now all run on VMware Cloud on AWS. Working with VMware, Schibsted has achieved considerable cost savings compared to on-premises data centers and hopes to save more on operating costs with every new acquisition in the future.

Here is what Schibsted has to say about their experience of using the service: 

“We have a public cloud strategy, and traditional workloads are now running on VMware Cloud on AWS. It is a scalable platform that we are taking full advantage of to become cloud-native. We were one of the first customers in the Nordics who started using it.”

– Ken Sivertsen, Cloud Infrastructure Architect, Schibsted Enterprise Technology

Check out the case study to learn more about their experience of using VMware Cloud on AWS.

Lotte

Lotte Moves to the Cloud for Future-Readiness

When several divisions of Lotte merged into a single corporate entity in 2018—the year of its 70th anniversary—they decided to embark on a digital transformation (DX) journey to enhance synergies and encourage business growth. After merging, Lotte found it challenging to ensure smooth business operations and a good employee experience due to silos between the merged departments. This had the potential to impact business development. Moreover, Lotte had been operating an on-premises VDI environment on Windows 7. When Windows 7 support ended, there was an urgent need to move to Windows 10 to strengthen Lotte’s VDI resources and improve their systems more broadly.

Lotte decided to use VMware Cloud on AWS because it offered the agility and flexibility of the cloud with a proven track record and made migration easy. Working with VMware and partners DXC Technology Japan and AWS Japan, Lotte has now migrated 4,000 VDI units to VMware Cloud on AWS. Doing so has improved the employee experience and maximized business profitability. It has also positioned them well for future expansion and helped them reduce the time needed for infrastructure maintenance and operations.

Here is what Lotte has to say about their experience of using the product: 

“We’re currently running VMware Cloud on AWS VDI environment alongside our on-premises environment and there is no difference between them. The user experience is virtually unchanged and everyone finds it easy to use.”

– Mr. Hisaaki Ogata, Senior Manager of ICT Strategic Division, Lotte Corporation

Read their customer story in detail here.

KDDI Corporation

KDDI Corporation Innovates with Modern Applications

Keeping its agile mindset front and center, KDDI Corporation needed a go-to-market strategy to deploy its applications and deliver new services more rapidly than before. Their developers wanted the ability to concentrate on app development above all else. They also wanted monitoring, log collection, and security features integrated into the platform. In addition, the IT team at KDDI expected their environments to become more and more complex as KDDI deployed applications at the edge and in the cloud.

Solutions like VMware Cloud on AWS and Project Pacific helped KDDI achieve consistent control over such complex environments with a single portal.

Here is what KDDI Corporation has to say about their experience of using the product:

“Going forward, our environments will become more and more complex as we deploy applications at the edge and in the cloud. So, we are looking to VMware Cloud on AWS and Project Pacific to help us achieve consistent control over such complex environments with a single portal.” 

– Takeshi Maehara, General Manager, KDDI Corporation

Learn more about KDDI’s story here.

State of Louisiana

State of Louisiana Unifies IT Service Delivery, Improves Medicare Enrollment, and Rapidly Responds to Disasters with VMware Cloud on AWS

Reforming government is a constant process requiring continuous innovation, creativity, and vigilance, including the technology on which government operates. For the State of Louisiana, that meant embarking on a statewide initiative to transform security and modernize data center operations. The goal: Take IT from legacy mainframes to cloud-based, mobile-ready application delivery. Louisiana decided to partner with VMware to modernize its data centers, transform digital workspaces for users, and move toward a common operating model that spans both private and public clouds.

To extend its on-premises data centers and easily migrate application workloads to the public cloud, the state decided to use VMware Cloud on AWS. With VMware SDDC software running on the AWS cloud, the state can seamlessly integrate with the public cloud and scale easily while leveraging existing VMware skill sets. It can also use familiar tools, such as vRealize Suite and NSX, to extend intelligent operations and micro-segmentation to the public cloud, helping keep its environment manageable and secure. As Louisiana adopts a public cloud-first strategy to reduce costs further, it will use VMware Cloud on AWS to evolve into DevOps methodologies and become an even more efficient broker of IT services.

Here is what the State of Louisiana has to say about their experience of using the product:

“VMware Cloud on AWS will help us take advantage of the elasticity of public cloud, giving us workload portability, a platform for next-gen apps, and easy access to AWS services.”

– Michael Allison, CTO, State of Louisiana

Learn more about the State of Louisiana’s experience of using VMware Cloud on AWS in this case study.

GuideOne

VMware Cloud on AWS Enables Cloud-Native Capabilities Without Increasing IT Budget For GuideOne

GuideOne, an insurance firm in the United States with over 600 employees and more than $500 million in annual revenue, maintains an environment of 16 ESXi hosts and 800 virtual machines (VMs).

The organization faced several challenges that prompted its investment in VMware Cloud on AWS. It supported workloads with on-premises hardware but wanted to move to the cloud to avoid the headaches and costs associated with managing its on-premises deployment. The organization also wanted the cost and capability benefits of cloud computing, but it wanted to minimize the likelihood of outages, delays, and cost overruns that could occur when migrating legacy workloads to the public cloud.

The move to VMware Cloud on AWS produced some great results: 

  • Eliminated 40% of its data center footprint and reduced power costs
  • Reallocated resources to strategic IT initiatives
  • Invested in its own employees and avoided costs of recruiting cloud-native skills
  • Avoided hardware expansion and refresh costs
  • Avoided costly application rearchitecture
  • Flattened IT budget while providing business with new capabilities
  • Enabled a more responsive and compliant security environment

Here is what GuideOne has to say about their experience of using the product:

“[VMware Cloud on AWS] is a quick way of getting into the cloud. You don’t have to do as much QA when it comes to switching over the workloads because you are doing it at the hypervisor level, and you’re really only worried about performance and latency.”

– IT Director, GuideOne

Read more about their experience here.

A Global Financial Firm

VMware Cloud on AWS Provides a Frictionless Path to Capital and Operational Cost Savings for a Global Financial Services Firm

A global financial services firm headquartered in the United States, with over 10,000 employees and more than $3 billion in annual revenue, now maintains three software-defined data centers (SDDCs) with a total of 42 hosts and roughly 800 virtual machines (VMs).

Prior to investing in VMware Cloud on AWS, the organization relied on outsourcing vendors to maintain its data centers. When the contract was up, the organization could not easily switch providers and did not want to reinvest in building a new data center. In its on-premises environment, the organization was also limited to inefficient disaster recovery processes, which hindered development teams. Additionally, a portfolio of 150 applications, many of them legacy applications, meant unnecessary maintenance and operations costs. The organization was struggling to modernize its application portfolio due to the slow speed of service of the vendors managing its on-premises environment.

The business decision-makers were also wary of upcoming data center deadlines.

The move to VMware Cloud on AWS produced some great results: 

  • Retired on-premises data center and reduced annual operating costs by 59%
  • Avoided costly infrastructure refreshes, saving ~$10M
  • Reduced downtime
  • Improved IT agility
  • Modernized application portfolio, saving $200K in annual spend
  • Improved business resilience across 35 offices during the pandemic by being in the cloud

Here is what this Financial Services organization has to say about their experience of using the product:

“Modern applications require modern infrastructure. So today we’re upscaling, we’re new-skilling, and we’re reskilling. I’ve been trying to retire apps my whole time here and was not able to until we moved to the cloud [with VMware Cloud on AWS].”

– Associate Director of Cloud Infrastructure, Financial Services organization

Read more about their experience here.

So don’t wait any further. Start your cloud migration and application modernization journey with VMware Cloud on AWS. If you are interested in finding out how much you could save, try the VMware Cloud on AWS TCO Calculator. To learn more about VMware Cloud on AWS, here are some learning resources. Or, you can get started now with VMware Cloud on AWS by purchasing the service online.

Resources for VMware Cloud on AWS

Ruchi Tandon

Ruchi is a Senior Product Marketing Manager for VMware Cloud on AWS at VMware Inc. With 14+ years of strong technology, data, and marketing background, Ruchi brings deep experience in…

Source :
https://blogs.vmware.com/cloud/2022/08/25/part-7-see-how-customers-are-accelerating-cloud-transformation-with-vmware-cloud-on-aws/

Part 6: See How Customers Are Unlocking the Power of Hybrid Cloud with VMware Cloud on AWS

Narayan Bharadwaj
February 28, 2022

VMware Cloud on AWS has been helping customers accelerate their hybrid cloud journeys for 4+ years. Customers across different industries and geographies have been using the service in their cloud migration and application modernization journeys.

In this blog, let’s check out some of the recent customer stories and hear what customers have to say about their experience of using VMware Cloud on AWS. Also, check out Part 1, Part 2, Part 3, Part 4, and Part 5 of this blog series for more customer stories across a variety of use cases.

The College of New Jersey: The College of New Jersey (TCNJ) serves 7,400 students in the US. To deliver students a modern, high-caliber learning experience accessible from anywhere on or off campus, the college needed to embrace more cloud technology and establish a secure virtual desktop infrastructure at scale. Partnering with non-profit technology services provider NJ Edge, the team rolled out a “work from anywhere” solution with VMware Horizon on VMware Cloud on AWS. The college migrated its VMware vSphere clusters to VMware Cloud on AWS, leveraging VMware HCX to simplify and streamline the migration.

“VMware Cloud on AWS requires significantly fewer resources to manage than our on-premises environment. We can spin down resources when everything slows down after graduation in the summer.” Leonard Niebo, Associate Vice President & Chief Information Officer, Office of Information Technology, TCNJ

Check out the case study to learn more about their experience of using VMware Cloud on AWS.

Kem One: Kem One, the second-largest PVC manufacturer in Europe, wanted to renew its ageing IT infrastructure, divided between two data centers in the Lyon region. With VMware HCX and VMware Cloud on AWS, Kem One migrated 280 virtual machines from on-premises to cloud with minimal downtime for their 900 SAP users.

“The migration of our information system to the AWS public cloud was completed in a matter of months, thanks to solutions from VMware and support from TeamWork. In the end, we save 26% on our infrastructure costs, gain agility and all without penalizing our 900 users for a single moment.” Jean-Yves Pottier, Head of IT Infrastructure, Kem One

Learn more about Kem One’s experience of using VMware Cloud on AWS in this summary.

Clark County: Clark County, Nevada governs one of the largest counties in the US, with jurisdiction over an area of 476 square miles including the Las Vegas Strip. Clark County provides services to more than 2.4 million citizens and 45 million visitors every year. With a digital-first operating model, Clark County wanted to provide a better experience to its citizens through modern digital services. It also wanted to offer a “work from anywhere” option to its employees when the COVID-19 pandemic hit. For that, it opted for VMware Cloud on AWS.

“We had really positive feedback after the rollout because with VMware, everything simply worked.” Martin Bennett, Technical Services Manager, Clark County

Check out this summary and watch the video to learn more about how they used VMware Cloud on AWS, VMware Horizon and VMware Workspace One to accelerate their digital transformation journey.

Sterling National Bank: Sterling National Bank serves consumers and business owners across New York and the Hudson Valley regions, as well as providing nationwide specialty financial services. To keep up with rapid growth and continue to offer competitive, digital services to clients, it needed to shut down on-premises data centers that were costly to maintain and migrate to the cloud. The bank partnered with Deloitte to implement VMware Cloud on AWS, to fully embrace the public cloud. This technology transformation from on-premises to cloud took place against the backdrop of the COVID-19 pandemic but going from requirements gathering to migration was still completed in just 12 months. With this cloud migration initiative, Sterling National Bank is now able to get new products to clients faster, embrace exciting new technologies such as AI, and enhance back-office efficiency by up to 75 percent.

“Our migration to VMware Cloud on AWS was so seamless that our users and application owners were unaware any changes took place. There was no downtime at all,” Vesko Pehlivanov, Sr. Managing Director, Solution Strategy and Architecture, Sterling National Bank

Read more about their experience of using VMware Cloud on AWS in this summary.

The Chilean Institute of Workplace Safety: The Chilean Institute of Workplace Safety (El Instituto de Seguridad Laboral de Chile, or ISL) is the public entity in charge of administering Social Security benefits that cover risks related to workplace accidents and illnesses. They were having difficulty maintaining operational continuity due to random power issues in their data center. They decided to extend their private cloud to the public cloud using VMware Cloud on AWS. VMware Cloud Foundation along with VMware Cloud on AWS resolved key problems and allowed their IT team to work with the flexibility and scalability required to optimize how they support the business.

Read more about their experience of using VMware’s hybrid cloud solution in this summary.

SGB-SMIT Group: SGB-SMIT Group is the largest independent manufacturer of power transformers in Europe and its success hinges on its close proximity to customers and fast time-to-market. After doubling its global footprint in five years, the business was fragmented, hampering its ability to continue to scale and grow. With VMware Cloud on AWS, now the company has the scalability to support global business growth and deliver a consistent, virtualized desktop environment with VMware Horizon.

Check out the customer story here.

National Stock Exchange of India Ltd (NSE): The National Stock Exchange of India Ltd. (NSE), the second-largest electronic stock exchange in the world, needed a solution to help modernize its IT platform so it could keep up with growth in trading volume and consistently innovate with new offerings for customers. With VMware Cloud on AWS, NSE can now easily extend its on-premises data center workloads to the public cloud and meet its resource scalability needs. It has also helped NSE save on training costs, as the IT staff did not need to be retrained separately for AWS.

“With the VMware Cloud on AWS-based hybrid cloud infrastructure, we have been able to simplify administrative tasks; automate manual processes; scale up on-demand; and improve our business agility. This deployment has helped us to stay at the cutting edge of technology for years to come. Our infrastructure is now future-proof.” Shiv Kumar Bhasin, Chief Technology and Operations Officer, NSE

Read about NSE’s experience of using VMware Cloud on AWS in this summary.

So don’t wait any further. Start your cloud migration and application modernization journey with VMware Cloud on AWS. Learn about, test-drive, and purchase the service online by visiting the VMware Cloud on AWS Get Started page. And don’t forget to check out the additional resources mentioned below.

Narayan Bharadwaj

Narayan leads the Cloud Solutions team at VMware in a general management role. The Cloud Solutions team builds and operates the VMware Cloud SaaS platform for our public cloud solutions…

Source :
https://blogs.vmware.com/cloud/2022/02/28/part-6-see-how-customers-are-unlocking-the-power-of-hybrid-cloud-with-vmware-cloud-on-aws/

Part 5: See How Customers Are Unlocking the Power of Hybrid Cloud with VMware Cloud on AWS

Cheryl Young
November 24, 2020

Looking to rapidly migrate to the cloud? Scale cost-effectively and strengthen disaster recovery? You’re not alone. Here’s how organizations are unlocking the power of hybrid cloud with VMware Cloud on AWS.

Visit Part 1, Part 2, Part 3, and Part 4 of the series for more customer stories across various use cases.

Organizations across different industries continue to benefit from hybrid cloud with VMware Cloud on AWS. Their goals are to increase agility, improve business continuity, scale cost-effectively and rapidly migrate or extend to the public cloud with an integrated, highly available and secure solution.

In this blog, we’ll share more of our customers’ stories and their experiences across various use cases, the business outcomes and how VMware Cloud on AWS helped them overcome the challenges of implementing a hybrid cloud.

Scale beyond on-premises infrastructure to the public cloud

IHS Markit 

IHS Markit is a global leader in information, analytics and solutions for the major industries and markets that drive economies worldwide. With more than 5,000 analysts, data scientists, financial experts and industry specialists, their information expertise spans numerous industries, including leading positions in finance, energy and transportation.

“VMware Cloud on AWS helps us build on our success with VMware in our private, on-premises environment and cost-effectively extend services to a global hybrid cloud.”
– Ben Tanner, Director of Cloud Enablement, IHS Markit

Get more details about IHS Markit’s experience in this summary.

ZOZO

ZOZO Technologies is the platform developer of ZOZOTOWN, the largest online fashion store in Japan. With the infrastructure supporting ZOZOTOWN built around an on-premises environment, coping with the winter sale – which generates the highest amount of traffic each year – was a challenge. Therefore, the company expanded its existing environment with VMware Cloud on AWS, which has excellent compatibility with conventional infrastructure, and moved to a pay-as-you-go system. 

“Expanding our data center to the cloud was a huge challenge for us. By using VMware Cloud on AWS, we not only successfully survived our winter sale, which has the highest volume of traffic each year, but our company also accumulated knowledge that we’ll be able to use in the future.”
– Nobuhiko Watanabe, ZOZO-SRE Team Leader, SRE Department, Technology Development Division, ZOZO Technologies, Inc.

Read more about ZOZO’s success in this summary.

Rapidly migrate to the public cloud

IndusInd Bank

IndusInd Bank’s vision to provide best-in-class digital banking services required the right technology support to achieve industry dominance. VMware’s solutions helped the bank get centralized control over its applications, hosted either on-premises or in the public cloud. This increased the bank’s agility to deliver business outcomes and consumer applications with a mobile-first strategy, keeping the employee experience in mind. VMware helped IndusInd Bank deploy a hybrid cloud, running on both on-premises infrastructure and VMware Cloud on AWS.

“To create a robust platform for our evolving digital experience capabilities, we needed a scalable and agile solution that could handle a significant increase in customer transactions in both assisted and direct channels. For mission-critical workloads, VMware Cloud on AWS allowed us to enhance the on-premise private cloud set-up, with the flexibility to scale up on demand across private clouds in AWS and on-premise, thereby ensuring that we leverage the proven capabilities of scale with consistency and availability for our businesses.”
– Biswabrata Chakravorty, CIO, IndusInd Bank

Get more details about IndusInd Bank’s experience in this case study.

ZENRIN DataCom

ZENRIN DataCom Co., Ltd. is a leading Japanese map publisher. To develop a consistent hybrid cloud IT infrastructure, the company adopted VMware Cloud on AWS. They’re now benefiting from simplified hybrid operations and expect reduced server costs.

“There will be cumulative cost benefits as more services are offered on VMware Cloud on AWS. Without VMware Cloud on AWS, our development costs may have been double or triple!”
– Masayoshi Oku, Director and Executive Officer, Engineering Division, ZENRIN DataCom

Learn more about ZENRIN DataCom’s story.

Strengthen Disaster Recovery

888 Holdings

888 Holdings Public Limited Company (888) is one of the world’s most popular online gaming entertainment and solutions providers. 888’s headquarters and main operations are in Gibraltar, and the company has licenses in Gibraltar, the U.K., Spain, Italy, Denmark, Ireland, Romania, and the U.S. 

888 wants to reduce customer ‘churn’. It operates in a market with little brand loyalty and wants to enhance the end-user experience. It also wants to leverage big data to stay competitive. The company also must cope with the unexpected. The U.K.’s decision to leave the European Union (EU) has impacted where data can be held. 888 needed to move workloads from its data center in Gibraltar, a British Overseas Territory, to a new center somewhere in the EU. VMware Cloud on AWS enables 888 Holdings to rapidly extend, migrate and protect its VMware environment in the AWS public cloud. 

“VMware Cloud on AWS means we have a disaster recovery site that is on-call only when we need it – we’re not paying for it when it’s at rest.”
– Eran Elbaz, CIO, 888 Holdings

Read more about 888’s story in this case study.

EMPLOYERS

EMPLOYERS strives to meet the needs of its small business insurance policyholders while working to bolster the long-term success of its thousands of appointed agents, many running small businesses of their own. After experiencing new growth of 29 percent in 2018 during its 105th year in business, EMPLOYERS needed to ensure its IT environment would continue to support evolving business needs. In particular, the company’s strategic plans included rolling out new capabilities centered on improving the agent and end-customer experience to help foster the organization’s growth and retention goals.

They selected VMware solutions for the underpinnings of their new foundation, with VMware HCX for accelerated migration to the cloud and VMware Cloud on AWS for their disaster recovery needs. Read more about their story.

Scale and protect on-premises VDI needs in the public cloud

PennyMac 

PennyMac, a leading national mortgage lender, is committed to providing every customer with the right home loan and superior service long after closing. 

They needed to scale their virtual environment very rapidly in response to changing market conditions. In this video, hear how they leveraged VMware Cloud on AWS to migrate their VDI environment to the public cloud.

OSRAM Continental

OSRAM Continental develops innovative automotive lighting systems to meet the needs of modern mobility concepts. Launched in 2018, the joint venture quickly set up an entirely new, ready-to-run IT infrastructure based on a cloud principle and VMware Cloud on AWS. Thanks to a virtual desktop infrastructure, the company benefits from time and cost savings, maximum flexibility and centralized management, thereby creating the infrastructural prerequisites for an industry in transition. 

“VMware Cloud on AWS enabled us to build our entire IT and process landscape from scratch in just six months.”
– Michael Schöberl, CIO, OSRAM Continental

Learn more about OSRAM Continental’s journey in this case study.

As the above examples show, customers are increasing agility, improving business continuity, scaling cost-effectively and rapidly migrating or extending to the public cloud with VMware Cloud on AWS, an integrated, highly available and secure solution.

To learn more about VMware Cloud on AWS, check out these resources. Or, you can get started with VMware Cloud on AWS online.

Resources

Cheryl Young

Cheryl Young is a Product Line Marketing Manager in the Cloud Infrastructure Business Group at VMware focused on Google Cloud VMware Engine. She has been working in the enterprise software…

Source :
https://blogs.vmware.com/cloud/2020/11/24/part-5-see-customers-unlocking-power-hybrid-cloud-vmware-cloud-aws/

Enhancing RFC-compliance for message header from addresses

06/02/2024

Hornetsecurity is implementing an update to enhance email security by enforcing checks on the “Header-From” value in emails, as per RFC 5322 standards.
This initiative is driven by several key reasons:

  1. Preventing Email Delivery Issues: Historically, not enforcing the validity of the originator email address has led to emails being accepted by our system but ultimately rejected by the final destination, especially with most customers now using cloud email service providers that enforce stricter validation.
  2. Enhanced Protection Against Spoofed Emails: By strictly validating the “Header-From” value, we aim to significantly reduce the risk of email spoofing.
  3. Enhancing Email Authentication for DKIM/DMARC Alignment: By enforcing RFC 5322 compliance in the “Header-From” field, we can ensure better alignment with DKIM and DMARC standards, thereby significantly improving the security and authenticity of email communications.

The cause of malformed “From” headers often stems from incorrect email server configurations by the sender or from bugs in scripts or other applications. Our new protocol aims to rectify these issues, ensuring that all emails passing through our system are fully compliant with established standards, thus improving the overall security and reliability of email communications.

Implementation Timeline

  • Stage 1 (Starting 4 March 2024): 1-5% of invalid emails will be rejected.
  • Stage 2 (Second week): 30% rejection rate.
  • Stage 3 (Third week): 60% rejection rate.
  • Final Stage (By the end of the fourth week): 100% rejection rate.

Impact Assessment

Extensive testing over the past six months indicates that the impact on legitimate email delivery is expected to be minimal. However, email administrators should be prepared for potential queries from users experiencing email rejections.

Handling Rejections

When an email is rejected due to a malformed “Header-From”, the sender will receive a bounce-back message with the error “510 5.1.7 malformed Header-From according to RFC 5322”. This message indicates that the email did not meet the necessary header standards.

Identifying Affected Emails

Email administrators can identify affected emails in the Hornetsecurity Control Panel (https://cp.hornetsecurity.com) using the following steps:

  1. Navigate to ELT in the Hornetsecurity Control Panel.
  2. Select your tenant in the top right field.
  3. Choose a date range for your search. A shorter range will yield quicker results.
  4. Click in the “Search” text box, select the “Msg ID” parameter, and type in “hfromfailed” (exact string).
  5. Press ENTER to perform the search.

When email administrators identify emails affected by the “Header-From” checks in the Email Live Tracking (ELT) system, they should promptly verify that the sending email application or server settings are correctly configured to comply with RFC 5322 standards. This will help maintain email flow integrity.

Defining Exceptions

In implementing the new “Header-From” checks, Hornetsecurity recognizes the need for flexibility in certain cases. Therefore, we have provisioned for the definition of exceptions to these checks.

This section details how to set up these exceptions and the timeline for their deprecation:

Configuring Exceptions

  1. Accessing the Control Panel: Log in to the Hornetsecurity Control Panel at https://cp.hornetsecurity.com.
  2. Navigating to the Compliance Filter.
  3. Creating Exception Rules: Within the Compliance Filter, you can create rules that define exceptions to the “Header-From” checks. These should be based on the envelope sender address.
  4. Applying the Exceptions: Once defined, these exceptions will allow certain emails to bypass the strict “Header-From” checks.

Timeline for Deprecation of Exceptions applied to the new Header-From checks

  • Initial Implementation: The ability to define exceptions is available as part of the initial rollout of the “Header-From” checks.
  • Deprecation Date: These exception provisions are set to be deprecated by the end of June 2024.

The provision for exceptions is intended as a temporary measure to facilitate a smoother transition to the new protocol. By June 2024, all email senders will have had sufficient time to align their email systems with RFC 5322 standards. Deprecating the exceptions is a step towards ensuring full compliance and maximizing the security benefits of the “Header-From” checks.

Conclusion

The enhancement of our RFC-compliance is a significant step toward securing email communications. Adherence to these standards will collectively reduce risks associated with email. For further assistance or clarification, please reach out to our support team at support@hornetsecurity.com.

Invalid “Header From” Examples:

  • From: <>
    Reason: Blank addresses are problematic as they cause issues in scenarios requiring a valid email address, such as allow and deny lists.
  • From: John Doe john.doe@hornetsecurity.com
    Reason: Non-compliant with RFC standards. The email address must be enclosed in angle brackets (< and >) when accompanied by a display name.
  • From: “John Doe” <john.doe@hornetsecurity.com> (Peter’s cousin)
    Reason: While technically RFC-compliant, such formats are often rejected by M365 unless explicit exceptions are configured. We do accept certain email addresses with comments.
  • From: John, Doe <john.doe@hornetsecurity.com>
    Reason: Non-compliant with RFC standards. A display name containing a comma must be enclosed in double quotes.
  • From: “John Doe <john.doe@hornetsecurity.com>”
    Reason: Non-compliant with RFC standards. The entire ‘From’ value is incorrectly enclosed in double quotation marks, which is not allowed.
  • From: “John Doe <john.doe@hornetsecurity.com>” john.doe@hornetsecurity.com
    Reason: Non-compliant with RFC standards. The display name is present, but the email address is not correctly enclosed in angle brackets.
  • From: “John Doe”<john.doe@hornetsecurity.com>
    Reason: Non-compliant with RFC standards due to the absence of white-space between the display name and the email address.
  • From: “Nested Brackets” <<info@hornetsecurity.com>
    Reason: Nested angle brackets are not allowed in the “addr-spec” part of the email address.
  • From: Peter Martin <e14011>
    Reason: Non-compliant with RFC standards. The domain part of the email address (“addr-spec”) is missing.
  • From: “News” <news.@hornetsecurity.com>
    Reason: Non-compliant with RFC standards. The local part of the email address must not end with a dot.
  • Missing “From” header altogether
    Reason: A “From” header is mandatory in emails. The absence of this header is a clear violation of RFC standards.

Valid “Header From” Examples:

  • From: john.doe@hornetsecurity.com
    Reason: RFC-compliant.
  • From: <john.doe@hornetsecurity.com>
    Reason: RFC-compliant.
  • From: “Doe, John” <john.doe@hornetsecurity.com>
    Reason: RFC-compliant.
  • From: “John Doe” <john.doe@hornetsecurity.com>
    Reason: RFC-compliant.
  • From: < john.doe@hornetsecurity.com >
    Reason: RFC-compliant but not recommended because of the spaces between the email address and the angle brackets.
  • From: John Doe <john.doe@hornetsecurity.com>
    Reason: Acceptable, although it is recommended that the display name be enclosed in double quotes if it contains any white-space.
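
To make these rules concrete, below is a minimal, illustrative check in PHP that flags most of the invalid examples above. This is not Hornetsecurity’s actual filter: the function name and the simplified patterns are our own, and the full RFC 5322 grammar (comments, quoted pairs, obsolete forms) is much richer, so a production system should use a complete parser.

    <?php
    // Minimal illustrative sketch, not Hornetsecurity's actual filter: a
    // simplified "Header-From" check that flags most of the invalid examples
    // listed above.

    function isValidHeaderFrom(string $value): bool {
        // Simplified addr-spec: a dot-atom local part that must not start or
        // end with a dot, followed by a dotted domain.
        $atom = '[A-Za-z0-9!#$%&\'*+\/=?^_`{|}~-]';
        $addr = $atom . '+(?:\.' . $atom . '+)*@[A-Za-z0-9-]+(?:\.[A-Za-z0-9-]+)+';

        // Case 1: a bare address, optionally wrapped in angle brackets.
        if (preg_match('/^\s*<?\s*' . $addr . '\s*>?\s*$/', $value)) {
            // Reject mismatched or nested brackets such as "<<info@...>".
            return substr_count($value, '<') === substr_count($value, '>')
                && substr_count($value, '<') <= 1;
        }

        // Case 2: a display name (quoted, or unquoted without commas) plus a
        // bracketed address. The sketch skips some finer rules, e.g. the
        // required white-space between the display name and "<".
        $name = '(?:"[^"\\\\]*"|[^",<>]*)';
        return (bool)preg_match('/^\s*' . $name . '\s*<' . $addr . '>\s*$/', $value);
    }

    // A few of the examples above:
    var_dump(isValidHeaderFrom('john.doe@hornetsecurity.com'));               // true
    var_dump(isValidHeaderFrom('"Doe, John" <john.doe@hornetsecurity.com>')); // true
    var_dump(isValidHeaderFrom('<>'));                                        // false (blank address)
    var_dump(isValidHeaderFrom('John, Doe <john.doe@hornetsecurity.com>'));   // false (unquoted comma)
    var_dump(isValidHeaderFrom('"News" <news.@hornetsecurity.com>'));         // false (dot before @)

As the comments note, the sketch deliberately skips some of the finer rules, which is exactly why a real implementation should rely on a full RFC 5322 parser rather than a handful of regular expressions.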

Source :
https://support.hornetsecurity.com/hc/en-us/articles/22036971529617-Enhancing-RFC-compliance-for-message-header-from-addresses

Local File Inclusion Vulnerability Patched in Shield Security WordPress Plugin

István Márton
February 5, 2024

On December 18, 2023, right before the end of the Holiday Bug Extravaganza, we received a submission for a Local File Inclusion vulnerability in Shield Security, a WordPress plugin with more than 50,000 active installations. It’s important to note that this vulnerability is limited to the inclusion of PHP files; however, it could be leveraged by an attacker who has the ability to upload PHP files but cannot directly access those files to execute them.

Props to hir0ot who discovered and responsibly reported this vulnerability through the Wordfence Bug Bounty Program. This researcher earned a bounty of $938.00 for this discovery during our Bug Bounty Program Extravaganza.

All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those still using the free version of our plugin, are protected against any exploits targeting this vulnerability by the Wordfence firewall’s built-in Directory Traversal and Local File Inclusion protection.

We contacted the Shield Security Team on December 21, 2023, and received a response on December 23, 2023. After providing full disclosure details, the developer released a patch on December 23, 2023. We would like to commend the Shield Security Team for their prompt response and timely patch, which was released on the same day.

We urge users to update their sites with the latest patched version of Shield Security, which is version 18.5.10, as soon as possible.

Vulnerability Summary from Wordfence Intelligence

Description: Shield Security – Smart Bot Blocking & Intrusion Prevention Security <= 18.5.9 – Unauthenticated Local File Inclusion
Affected Plugin: Shield Security – Smart Bot Blocking & Intrusion Prevention Security
Plugin Slug: wp-simple-firewall
Affected Versions: <= 18.5.9
CVE ID: CVE-2023-6989
CVSS Score: 9.8 (Critical)
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
Researcher/s: hir0ot
Fully Patched Version: 18.5.10
Bounty Awarded: $938.00

The Shield Security – Smart Bot Blocking & Intrusion Prevention Security plugin for WordPress is vulnerable to Local File Inclusion in all versions up to, and including, 18.5.9 via the render_action_template parameter. This makes it possible for an unauthenticated attacker to include and execute PHP files on the server, allowing the execution of any PHP code in those files.

Technical Analysis

Shield Security is a WordPress website security plugin that offers several features to stop attackers and to protect and monitor the website, including a firewall, a malware scanner, and activity logging.

The plugin includes a template management system that renders .twig.php or .html files. Unfortunately, the insecure implementation of the plugin’s template inclusion and rendering functionality allows for arbitrary file inclusion in vulnerable versions. The template path is set with the setTemplate() function.

    public function setTemplate( $templatePath ) {
        $this->template = $templatePath;
        if ( property_exists( $this, 'sTemplate' ) ) {
            $this->sTemplate = $templatePath;
        }
        return $this;
    }

The renderPhp() function in the Render class uses the path_join() function to build the template path from the template root and the configured template. It then checks that the result is an existing file and includes it.

    private function renderPhp() :string {
        if ( \count( $this->getRenderVars() ) > 0 ) {
            \extract( $this->getRenderVars() );
        }
        $template = path_join( $this->getTemplateRoot(), $this->getTemplate() );
        if ( Services::WpFs()->isFile( $template ) ) {
            \ob_start();
            include( $template );
            $contents = \ob_get_clean();
        }
        else {
            $contents = 'Error: Template file not found: ' . $template;
        }
        return (string)$contents;
    }

Examining the code reveals that there is no file path sanitization anywhere in these functions. This makes it possible to include arbitrary PHP files from the server.

In this vulnerability, file inclusion is limited to PHP files. This means threat actors cannot use one of the most popular routes to remote code execution, log file poisoning. Since the plugin also uses the isFile() function to check files, the other popular route, a PHP wrapper attack, is not possible either. Nevertheless, an attacker still has options to include and execute a malicious PHP file on the server, for example by chaining the attack with vulnerabilities in other plugins. That said, the attack possibilities are limited: this would most likely be leveraged where an attacker can upload a PHP file but has no direct way to access and execute it.
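
As an illustration of how this class of bug is typically closed (a general sketch, not Shield Security’s actual 18.5.10 patch; the safeTemplatePath() helper below is our own), the joined path can be canonicalized and checked against the template root before it is ever passed to include():

    <?php
    // Illustrative sketch only, not Shield Security's actual patch:
    // canonicalize the joined template path and refuse anything that escapes
    // the template root or carries an unexpected extension.

    function safeTemplatePath(string $templateRoot, string $template): ?string {
        $root = realpath($templateRoot);
        $path = realpath($templateRoot . DIRECTORY_SEPARATOR . $template);

        // realpath() resolves '..' sequences and symlinks, and returns false
        // for files that do not exist.
        if ($root === false || $path === false) {
            return null;
        }
        // The canonical path must stay inside the template root.
        if (strpos($path, $root . DIRECTORY_SEPARATOR) !== 0) {
            return null;
        }
        // Only render the file types the template engine is meant to handle.
        if (!preg_match('/\.(?:twig\.php|html)$/', $path)) {
            return null;
        }
        return $path;
    }

    // Usage inside a render function: include only a validated path.
    // $path = safeTemplatePath($this->getTemplateRoot(), $this->getTemplate());
    // if ($path !== null) { include $path; }

The key design choice is to validate the canonicalized path rather than the raw input: because realpath() collapses any ../ sequences before the prefix comparison, a traversal payload cannot smuggle the final path outside the template root.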

Wordfence Firewall

The following graphic demonstrates the steps an attacker might take to exploit this vulnerability, and the point at which the Wordfence firewall would block the attempt.

The Wordfence firewall rule detects the malicious file path and blocks the request.

Disclosure Timeline

December 18, 2023 – We receive the submission of the Local File Inclusion vulnerability in Shield Security via the Wordfence Bug Bounty Program.
December 20, 2023 – We validate the report and confirm the proof-of-concept exploit.
December 21, 2023 – We initiate contact with the plugin vendor asking that they confirm the inbox for handling the discussion.
December 23, 2023 – The vendor confirms the inbox for handling the discussion.
December 23, 2023 – We send over the full disclosure details. The vendor acknowledges the report and begins working on a fix.
December 23, 2023 – The fully patched version of the plugin, 18.5.10, is released.

Conclusion

In this blog post, we detailed a Local File Inclusion vulnerability within the Shield Security plugin affecting versions 18.5.9 and earlier. This vulnerability allows unauthenticated threat actors to include and execute PHP files on the server, allowing the execution of any PHP code in those files, which can be used for complete site compromise. The vulnerability has been fully addressed in version 18.5.10 of the plugin.

We encourage WordPress users to verify that their sites are updated to the latest patched version of Shield Security.

All Wordfence Premium, Wordfence Care, and Wordfence Response customers, as well as those still using the free version of our plugin, are protected against any exploits targeting this vulnerability by the Wordfence firewall’s built-in Directory Traversal and Local File Inclusion protection.

If you know someone who uses this plugin on their site, we recommend sharing this advisory with them to ensure their site remains secure, as this vulnerability poses a significant risk.


Source :
https://www.wordfence.com/blog/2024/02/local-file-inclusion-vulnerability-patched-in-shield-security-wordpress-plugin/

Reflecting on the GDPR to celebrate Privacy Day 2024

26/01/2024
Emily Hancock

10 min read

This post is also available in Deutsch, Français, 日本語 and Nederlands.

Just in time for Data Privacy Day 2024 on January 28, the EU Commission is calling for evidence to understand how the EU’s General Data Protection Regulation (GDPR) has been functioning now that we’re nearing the 6th anniversary of the regulation coming into force.

We’re so glad they asked, because we have some thoughts. And what better way to celebrate privacy day than by discussing whether the application of the GDPR has actually done anything to improve people’s privacy?

The answer is, mostly yes, but in a couple of significant ways – no.

Overall, the GDPR is rightly seen as the global gold standard for privacy protection. It has served as a model for what data protection practices should look like globally, it enshrines data subject rights that have been copied across jurisdictions, and when it took effect, it created a standard for the kinds of privacy protections people worldwide should be able to expect and demand from the entities that handle their personal data. On balance, the GDPR has definitely moved the needle in the right direction for giving people more control over their personal data and in protecting their privacy.

In a couple of key areas, however, we believe the way the GDPR has been applied to data flowing across the Internet has done nothing for privacy and in fact may even jeopardize the protection of personal data. The first area where we see this is with respect to cross-border data transfers. Location has become a proxy for privacy in the minds of many EU data protection regulators, and we think that is the wrong result. The second area is an overly broad interpretation of what constitutes “personal data” by some regulators with respect to Internet Protocol or “IP” addresses. We contend that IP addresses should not always count as personal data, especially when the entities handling IP addresses have no ability on their own to tie those IP addresses to individuals. This is important because the ability to implement a number of industry-leading cybersecurity measures relies on the ability to do threat intelligence on Internet traffic metadata, including IP addresses.  

Location should not be a proxy for privacy

Fundamentally, good data security and privacy practices should be able to protect personal data regardless of where that processing or storage occurs. Nevertheless, the GDPR is based on the idea that legal protections should attach to personal data based on the location of the data – where it is generated, processed, or stored. Articles 44 to 49 establish the conditions that must be in place in order for data to be transferred to a jurisdiction outside the EU, with the idea that even if the data is in a different location, the privacy protections established by the GDPR should follow the data. No doubt this approach was influenced by political developments around government surveillance practices, such as the revelations in 2013 of secret documents describing the relationship between the US NSA (and its Five Eyes partners) and large Internet companies, and that intelligence agencies were scooping up data from choke points on the Internet. And once the GDPR took effect, many data regulators in the EU were of the view that as a result of the GDPR’s restrictions on cross-border data transfers, European personal data simply could not be processed in the United States in a way that would be consistent with the GDPR.

This issue came to a head in July 2020, when the European Court of Justice (CJEU), in its “Schrems II” decision1, invalidated the EU-US Privacy Shield adequacy standard and questioned the suitability of the EU standard contractual clauses (a mechanism entities can use to ensure that GDPR protections are applied to EU personal data even if it is processed outside the EU). The ruling in some respects left data protection regulators with little room to maneuver on questions of transatlantic data flows. But while some regulators were able to view the Schrems II ruling in a way that would still allow for EU personal data to be processed in the United States, other data protection regulators saw the decision as an opportunity to double down on their view that EU personal data cannot be processed in the US consistent with the GDPR, therefore promoting the misconception that data localization should be a proxy for data protection.

In fact, we would argue that the opposite is the case. From our own experience and according to recent research2, we know that data localization threatens an organization’s ability to achieve integrated management of cybersecurity risk and limits an entity’s ability to employ state-of-the-art cybersecurity measures that rely on cross-border data transfers to make them as effective as possible. For example, Cloudflare’s Bot Management product only increases in accuracy with continued use on the global network: it detects and blocks traffic coming from likely bots before feeding back learnings to the models backing the product. A diversity of signal and scale of data on a global platform is critical to help us continue to evolve our bot detection tools. If the Internet were fragmented – preventing data from one jurisdiction being used in another – more and more signals would be missed. We wouldn’t be able to apply learnings from bot trends in Asia to bot mitigation efforts in Europe, for example. And if the ability to identify bot traffic is hampered, so is the ability to block those harmful bots from services that process personal data.

The need for industry-leading cybersecurity measures is self-evident, and it is not as if data protection authorities don’t realize this. If you look at any enforcement action brought against an entity that suffered a data breach, you see data protection regulators insisting that the impacted entities implement ever more robust cybersecurity measures in line with the obligation GDPR Article 32 places on data controllers and processors to “develop appropriate technical and organizational measures to ensure a level of security appropriate to the risk”, “taking into account the state of the art”. In addition, data localization undermines information sharing within industry and with government agencies for cybersecurity purposes, which is generally recognized as vital to effective cybersecurity.

In this way, while the GDPR itself lays out a solid framework for securing personal data to ensure its privacy, the application of the GDPR’s cross-border data transfer provisions has twisted and contorted the purpose of the GDPR. It’s a classic example of not being able to see the forest for the trees. If the GDPR is applied in such a way as to elevate the priority of data localization over the priority of keeping data private and secure, then the protection of ordinary people’s data suffers.

Applying data transfer rules to IP addresses could lead to balkanization of the Internet

The other key way in which the application of the GDPR has been detrimental to the actual privacy of personal data is related to the way the term “personal data” has been defined in the Internet context – specifically with respect to Internet Protocol or “IP” addresses. A world where IP addresses are always treated as personal data and therefore subject to the GDPR’s data transfer rules is a world that could come perilously close to requiring a walled-off European Internet. And as noted above, this could have serious consequences for data privacy, not to mention that it likely would cut the EU off from any number of global marketplaces, information exchanges, and social media platforms.

This is a bit of a complicated argument, so let’s break it down. As most of us know, IP addresses are the addressing system for the Internet. When you send a request to a website, send an email, or communicate online in any way, IP addresses connect your request to the destination you’re trying to access. These IP addresses are the key to making sure Internet traffic gets delivered to where it needs to go. As the Internet is a global network, this means it’s entirely possible that Internet traffic – which necessarily contains IP addresses – will cross national borders. Indeed, the destination you are trying to access may well be located in a different jurisdiction altogether. That’s just the way the global Internet works. So far, so good.

But if IP addresses are considered personal data, then they are subject to data transfer restrictions under the GDPR. And with the way those provisions have been applied in recent years, some data regulators were getting perilously close to saying that IP addresses cannot transit jurisdictional boundaries if it meant the data might go to the US. The EU’s recent approval of the EU-US Data Privacy Framework established adequacy for US entities that certify to the framework, so these cross-border data transfers are not currently an issue. But if the Data Privacy Framework were to be invalidated as the EU-US Privacy Shield was in the Schrems II decision, then we could find ourselves in a place where the GDPR is applied to mean that IP addresses ostensibly linked to EU residents can’t be processed in the US, or potentially not even leave the EU.

If this were the case, then providers would have to start developing Europe-only networks to ensure IP addresses never cross jurisdictional boundaries. But how would people in the EU and US communicate if EU IP addresses can’t go to the US? Would EU citizens be restricted from accessing content stored in the US? It’s an application of the GDPR that would lead to an absurd result – one surely not intended by its drafters. And yet, in light of the Schrems II case and the way the GDPR has been applied, here we are.

A possible solution would be to consider that IP addresses are not always “personal data” subject to the GDPR. In 2016 – even before the GDPR took effect – the Court of Justice of the European Union (CJEU) established the view in Breyer v. Bundesrepublik Deutschland that even dynamic IP addresses, which change with every new connection to the Internet, constituted personal data if an entity processing the IP address could link the IP addresses to an individual. While the court’s decision did not say that dynamic IP addresses are always personal data under European data protection law, that’s exactly what EU data regulators took from the decision, without considering whether an entity actually has a way to tie the IP address to a real person3.

The question of when an identifier qualifies as “personal data” is again before the CJEU: In April 2023, the lower EU General Court ruled in SRB v EDPS4 that transmitted data can be considered anonymised and therefore not personal data if the data recipient does not have any additional information reasonably likely to allow it to re-identify the data subjects and has no legal means available to access such information. The appellant – the European Data Protection Supervisor (EDPS) – disagrees. The EDPS, who mainly oversees the privacy compliance of EU institutions and bodies, is appealing the decision and arguing that a unique identifier should qualify as personal data if that identifier could ever be linked to an individual, regardless of whether the entity holding the identifier actually had the means to make such a link.

If the lower court’s common-sense ruling holds, one could argue that IP addresses are not personal data when those IP addresses are processed by entities like Cloudflare, which have no means of connecting an IP address to an individual. If IP addresses are then not always personal data, then IP addresses will not always be subject to the GDPR’s rules on cross-border data transfers.

Although it may seem counterintuitive, having a standard whereby an IP address is not necessarily “personal data” would actually be a positive development for privacy. If IP addresses can flow freely across the Internet, then entities in the EU can use non-EU cybersecurity providers to help them secure their personal data. Advanced Machine Learning/predictive AI techniques that look at IP addresses to protect against DDoS attacks, prevent bots, or otherwise guard against personal data breaches will be able to draw on attack patterns and threat intelligence from around the world to the benefit of EU entities and residents. But none of these benefits can be realized in a world where IP addresses are always personal data under the GDPR and where the GDPR’s data transfer rules are interpreted to mean IP addresses linked to EU residents can never flow to the United States.

Keeping privacy in focus

On this Data Privacy Day, we urge EU policy makers to look closely at how the GDPR is working in practice, and to take note of the instances where the GDPR is applied in ways that place privacy protections above all other considerations – even appropriate security measures mandated by the GDPR’s Article 32 that take into account the state of the art of technology. When this happens, it can actually be detrimental to privacy. If taken to the extreme, this formulaic approach would not only negatively impact cybersecurity and data protection, but even put into question the functioning of the global Internet infrastructure as a whole, which depends on cross-border data flows. So what can be done to avert this?

First, we believe EU policymakers could adopt guidelines (if not legal clarification) for regulators that IP addresses should not be considered personal data when they cannot be linked by an entity to a real person. Second, policymakers should clarify that the GDPR’s application should be considered with the cybersecurity benefits of data processing in mind. Building on the GDPR’s existing recital 49, which rightly recognizes cybersecurity as a legitimate interest for processing, personal data that needs to be processed outside the EU for cybersecurity purposes should be exempted from GDPR restrictions to international data transfers. This would avoid some of the worst effects of the mindset that currently views data localization as a proxy for data privacy. Such a shift would be a truly pro-privacy application of the GDPR.

1 Case C-311/18, Data Protection Commissioner v Facebook Ireland and Maximillian Schrems.
2 Swire, Peter and Kennedy-Mayo, DeBrae and Bagley, Andrew and Modak, Avani and Krasser, Sven and Bausewein, Christoph, Risks to Cybersecurity from Data Localization, Organized by Techniques, Tactics, and Procedures (2023).
3 Different decisions by the European data protection authorities, namely the Austrian DSB (December 2021), the French CNIL (February 2022) and the Italian Garante (June 2022), while analyzing the use of Google Analytics, have rejected the relative approach used by the Breyer case and considered that an IP address should always be considered as personal data. Only the decision issued by the Spanish AEPD (December 2022) followed the same interpretation of the Breyer case. In addition, see paragraphs 109 and 136 in Guidelines by Supervisory Authorities for Tele-Media Providers, DSK (2021).
4 Single Resolution Board v EDPS, Court of Justice of the European Union, April 2023.


Source :
https://blog.cloudflare.com/reflecting-on-the-gdpr-to-celebrate-privacy-day-2024/

Thanksgiving 2023 security incident

01/02/2024
Matthew Prince, John Graham-Cumming, Grant Bourzikas

11 min read

On Thanksgiving Day, November 23, 2023, Cloudflare detected a threat actor on our self-hosted Atlassian server. Our security team immediately began an investigation, cut off the threat actor’s access, and on Sunday, November 26, we brought in CrowdStrike’s Forensic team to perform their own independent analysis.

Yesterday, CrowdStrike completed its investigation, and we are publishing this blog post to talk about the details of this security incident.

We want to emphasize to our customers that no Cloudflare customer data or systems were impacted by this event. Because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools, the threat actor’s ability to move laterally was limited. No services were implicated, and no changes were made to our global network systems or configuration. This is the promise of a Zero Trust architecture: it’s like bulkheads in a ship where a compromise in one system is limited from compromising the whole organization.

From November 14 to 17, a threat actor did reconnaissance and then accessed our internal wiki (which uses Atlassian Confluence) and our bug database (Atlassian Jira). On November 20 and 21, we saw additional access indicating they may have come back to test access to ensure they had connectivity.

They then returned on November 22 and established persistent access to our Atlassian server using ScriptRunner for Jira, gained access to our source code management system (which uses Atlassian Bitbucket), and tried, unsuccessfully, to access a console server with access to our new data center in São Paulo, Brazil, which Cloudflare had not yet put into production.

They did this by using one access token and three service account credentials that had been taken, and that we failed to rotate, after the Okta compromise of October 2023. All threat actor access and connections were terminated on November 24 and CrowdStrike has confirmed that the last evidence of threat activity was on November 24 at 10:44.

(Throughout this blog post all dates and times are UTC.)

Even though we understand the operational impact of the incident to be extremely limited, we took this incident very seriously because a threat actor had used stolen credentials to get access to our Atlassian server and accessed some documentation and a limited amount of source code. Based on our collaboration with colleagues in the industry and government, we believe that this attack was performed by a nation state attacker with the goal of obtaining persistent and widespread access to Cloudflare’s global network.

“Code Red” Remediation and Hardening Effort

On November 24, after the threat actor was removed from our environment, our security team pulled in all the people they needed across the company to investigate the intrusion, ensure that the threat actor had been completely denied access to our systems, and make sure we understood the full extent of what they accessed or tried to access.

Then, from November 27, we redirected the efforts of a large part of the Cloudflare technical staff (inside and outside the security team) to work on a single project dubbed “Code Red”. The focus was on strengthening, validating, and remediating every control in our environment to ensure we are secure against future intrusion and to confirm that the threat actor could no longer gain access to our environment. Additionally, we continued to investigate every system, account, and log to make sure the threat actor did not have persistent access and that we fully understood what systems they had touched and which they had attempted to access.

CrowdStrike performed an independent assessment of the scope and extent of the threat actor’s activity, including a search for any evidence that they still persisted in our systems. CrowdStrike’s investigation provided helpful corroboration and support for our investigation, but did not bring to light any activities that we had missed. This blog post outlines in detail everything we and CrowdStrike uncovered about the activity of the threat actor.

The only production system the threat actor could access using the stolen credentials was our Atlassian environment. Analyzing the wiki pages they accessed, bug database issues, and source code repositories, it appears they were looking for information about the architecture, security, and management of our global network; no doubt with an eye on gaining a deeper foothold. Because of that, we decided a huge effort was needed to further harden our security protocols to prevent the threat actor from gaining that foothold in case we had overlooked something in our log files.

Our aim was to prevent the attacker from using the technical information about the operations of our network as a way to get back in. Even though we believed, and later confirmed, the attacker had limited access, we undertook a comprehensive effort to rotate every production credential (more than 5,000 individual credentials), physically segment test and staging systems, perform forensic triage on 4,893 systems, and reimage and reboot every machine in our global network, including all the systems the threat actor accessed and all Atlassian products (Jira, Confluence, and Bitbucket).

The threat actor also attempted to access a console server in our new, and not yet in production, data center in São Paulo. All attempts to gain access were unsuccessful. To ensure these systems are 100% secure, equipment in the Brazil data center was returned to the manufacturers. The manufacturers’ forensic teams examined all of our systems to ensure that no access or persistence was gained. Nothing was found, but we replaced the hardware anyway.

We also looked for software packages that hadn’t been updated, user accounts that might have been created, and unused active employee accounts; we went searching for secrets that might have been left in Jira tickets or source code, examined and deleted all HAR files uploaded to the wiki in case they contained tokens of any sort. Whenever in doubt, we assumed the worst and made changes to ensure anything the threat actor was able to access would no longer be in use and therefore no longer be valuable to them.

Every member of the team was encouraged to point out areas the threat actor might have touched, so we could examine log files and determine the extent of the threat actor’s access. By including such a large number of people across the company, we aimed to leave no stone unturned looking for evidence of access or changes that needed to be made to improve security.

The immediate “Code Red” effort ended on January 5, but work continues across the company around credential management, software hardening, vulnerability management, additional alerting, and more.

Attack timeline

The attack started in October with the compromise of Okta, but the threat actor only began targeting our systems using those credentials from the Okta compromise in mid-November.

The following timeline shows the major events:

October 18 – Okta compromise

We’ve written about this before but, in summary, we were (for the second time) the victim of a compromise of Okta’s systems which resulted in a threat actor gaining access to a set of credentials. All of these credentials were meant to be rotated.

Unfortunately, we failed to rotate one service token and three service account credentials (out of thousands) that were leaked during the Okta compromise.

One was a Moveworks service token that granted remote access into our Atlassian system. The second credential was a service account used by the SaaS-based Smartsheet application that had administrative access to our Atlassian Jira instance. The third was a Bitbucket service account used to access our source code management system, and the fourth was a credential for an AWS environment that had no access to the global network and no customer or sensitive data.

The one service token and three accounts were not rotated because it was mistakenly believed they were unused. This was incorrect, and it was how the threat actor first got into our systems and gained persistence in our Atlassian products. Note that this was in no way an error on the part of Atlassian, AWS, Moveworks, or Smartsheet. These were merely credentials which we failed to rotate.

November 14 09:22:49 – threat actor starts probing

Our logs show that the threat actor started probing and performing reconnaissance of our systems beginning on November 14, looking for ways to use the credentials and determine which systems were accessible. They attempted to log into our Okta instance and were denied access. They attempted to access the Cloudflare Dashboard and were denied access.

Additionally, the threat actor accessed an AWS environment that is used to power the Cloudflare Apps marketplace. This environment was segmented with no access to global network or customer data. The service account to access this environment was revoked, and we validated the integrity of the environment.

November 15 16:28:38 – threat actor gains access to Atlassian services

The threat actor successfully accessed Atlassian Jira and Confluence on November 15 using the Moveworks service token to authenticate through our gateway, and then they used the Smartsheet service account to gain access to the Atlassian suite. The next day they began looking for information about the configuration and management of our global network, and accessed various Jira tickets.

The threat actor searched the wiki for things like remote access, secret, client-secret, openconnect, cloudflared, and token. They accessed 36 Jira tickets (out of a total of 2,059,357 tickets) and 202 wiki pages (out of a total of 194,100 pages).

The threat actor accessed Jira tickets about vulnerability management, secret rotation, MFA bypass, network access, and even our response to the Okta incident itself.

The wiki searches and pages accessed suggest the threat actor was very interested in all aspects of access to our systems: password resets, remote access, configuration, our use of Salt, but they did not target customer data or customer configurations.

November 16 14:36:37 – threat actor creates an Atlassian user account

The threat actor used the Smartsheet credential to create an Atlassian account that looked like a normal Cloudflare user. They added this user to a number of groups within Atlassian so that they’d have persistent access to the Atlassian environment should the Smartsheet service account be removed.

November 17 14:33:52 to November 20 09:26:53 – threat actor takes a break from accessing Cloudflare systems

During this period, the attacker took a break from accessing our systems (apart from apparently briefly testing that they still had access) and returned just before Thanksgiving.

November 22 14:18:22 – threat actor gains persistence

Since the Smartsheet service account had administrative access to Atlassian Jira, the threat actor was able to install the Sliver Adversary Emulation Framework, a widely used tool that red teams and attackers use to enable “C2” (command and control) connectivity and gain persistent, stealthy access to a computer on which it is installed. Sliver was installed using the ScriptRunner for Jira plugin.

This gave them continuous access to the Atlassian server, which they used to attempt lateral movement. With this access, the threat actor attempted to reach a non-production console server in our São Paulo, Brazil data center due to a non-enforced ACL. The access was denied, and they were not able to reach any of the global network.

Over the next day, the threat actor viewed 120 code repositories (out of a total of 11,904 repositories). Of the 120, the threat actor used the Atlassian Bitbucket git archive feature on 76 repositories to download them to the Atlassian server, and even though we were not able to confirm whether or not they had been exfiltrated, we decided to treat them as having been exfiltrated.

The 76 source code repositories were almost all related to how backups work, how the global network is configured and managed, how identity works at Cloudflare, remote access, and our use of Terraform and Kubernetes. A small number of the repositories contained encrypted secrets; these were rotated immediately even though they were strongly encrypted.

We focused particularly on these 76 source code repositories to look for embedded secrets (any secrets stored in the code were rotated), vulnerabilities, and ways in which an attacker could use them to mount a subsequent attack. This work was done as a priority by engineering teams across the company as part of “Code Red”.

As a SaaS company, we’ve long believed that our source code itself is not as precious as the source code of software companies that distribute software to end users. In fact, we’ve open sourced a large amount of our source code and speak openly through our blog about algorithms and techniques we use. So our focus was not on someone having access to the source code, but whether that source code contained embedded secrets (such as a key or token) and vulnerabilities.

November 23 – Discovery and threat actor access termination begins

Our security team was alerted to the threat actor’s presence at 16:00 and deactivated the Smartsheet service account 35 minutes later. 48 minutes later the user account created by the threat actor was found and deactivated. Here’s the detailed timeline for the major actions taken to block the threat actor once the first alert was raised.

15:58 – The threat actor adds the Smartsheet service account to an administrator group.
16:00 – An automated alert about the 15:58 change reaches our security team.
16:12 – Cloudflare SOC starts investigating the alert.
16:35 – Smartsheet service account deactivated by Cloudflare SOC.
17:23 – The threat actor-created Atlassian user account is found and deactivated.
17:43 – Internal Cloudflare incident declared.
21:31 – Firewall rules put in place to block the threat actor’s known IP addresses.

November 24 – Sliver removed; all threat actor access terminated

10:44 – Last known threat actor activity.
11:59 – Sliver removed.

Throughout this timeline, the threat actor tried to access a myriad of other systems at Cloudflare but failed because of our access controls, firewall rules, and use of hard security keys enforced using our own Zero Trust tools.

To be clear, we saw no evidence whatsoever that the threat actor got access to our global network, data centers, SSL keys, customer databases or configuration information, Cloudflare Workers deployed by us or customers, AI models, network infrastructure, or any of our datastores like Workers KV, R2 or Quicksilver. Their access was limited to the Atlassian suite and the server on which our Atlassian runs.

A large part of our “Code Red” effort was understanding what the threat actor got access to and what they tried to access. By looking at logging across systems we were able to track attempted access to our internal metrics, network configuration, build system, alerting systems, and release management system. Based on our review, none of their attempts to access these systems were successful. Independently, CrowdStrike performed an assessment of the scope and extent of the threat actor’s activity, which did not bring to light activities that we had missed and concluded that the last evidence of threat activity was on November 24 at 10:44.

We are confident that between our investigation and CrowdStrike’s, we fully understand the threat actor’s actions and that they were limited to the systems on which we saw their activity.

Conclusion

This was a security incident involving a sophisticated actor, likely a nation-state, who operated in a thoughtful and methodical manner. The efforts we have taken ensure that the ongoing impact of the incident was limited and that we are well-prepared to fend off any sophisticated attacks in the future. This required the efforts of a significant number of Cloudflare’s engineering staff, and, for over a month, this was the highest priority at Cloudflare. The entire Cloudflare team worked to ensure that our systems were secure, to understand the threat actor’s access, to remediate immediate priorities (such as mass credential rotation), and to build a plan of long-running work to improve our overall security based on areas for improvement discovered during this process.

We are incredibly grateful to everyone at Cloudflare who responded quickly over the Thanksgiving holiday to conduct an initial analysis and lock out the threat actor, and all those who contributed to this effort. It would be impossible to name everyone involved, but their long hours and dedicated work made it possible to undertake an essential review and change of Cloudflare’s security while keeping our global network and our customers’ services running.

We are grateful to CrowdStrike for having been available immediately to conduct an independent assessment. Now that their final report is complete, we are confident in our internal analysis and remediation of the intrusion and are making this blog post available.

IOCs
Below are the Indicators of Compromise (IOCs) that we saw from this threat actor. We are publishing them so that other organizations, and especially those that may have been impacted by the Okta breach, can search their logs to confirm the same threat actor did not access their systems.

Indicator: 193.142.58[.]126
  Type: IPv4 | SHA256: N/A
  Description: Primary threat actor infrastructure, owned by M247 Europe SRL (Bucharest, Romania)

Indicator: 198.244.174[.]214
  Type: IPv4 | SHA256: N/A
  Description: Sliver C2 server, owned by OVH SAS (London, England)

Indicator: idowall[.]com
  Type: Domain | SHA256: N/A
  Description: Infrastructure serving the Sliver payload

Indicator: jvm-agent
  Type: Filename | SHA256: bdd1a085d651082ad567b03e5186d1d46d822bb7794157ab8cce95d850a3caaf
  Description: Sliver payload


Source :
https://blog.cloudflare.com/thanksgiving-2023-security-incident

AnyDesk says hackers breached its production servers, reset passwords

By Lawrence Abrams
February 2, 2024

AnyDesk confirmed today that it suffered a recent cyberattack that allowed hackers to gain access to the company’s production systems. BleepingComputer has learned that source code and private code signing keys were stolen during the attack.

AnyDesk is a remote access solution that allows users to remotely access computers over a network or the internet. The program is very popular with enterprises, which use it for remote support or to access colocated servers.

The software is also popular among threat actors who use it for persistent access to breached devices and networks.

The company reports having 170,000 customers, including 7-Eleven, Comcast, Samsung, MIT, NVIDIA, SIEMENS, and the United Nations.

AnyDesk hacked

In a statement shared with BleepingComputer late Friday afternoon, AnyDesk says they first learned of the attack after detecting indications of an incident on their production servers. 

After conducting a security audit, they determined their systems were compromised and activated a response plan with the help of cybersecurity firm CrowdStrike.

AnyDesk did not share details on whether data was stolen during the attack. However, BleepingComputer has learned that the threat actors stole source code and code signing certificates.

The company also confirmed ransomware was not involved but didn’t share too much information about the attack other than saying their servers were breached, with the advisory mainly focusing on how they responded to the incident.

As part of their response, AnyDesk says they have revoked security-related certificates and remediated or replaced systems as necessary. They also reassured customers that AnyDesk was safe to use and that there was no evidence of end-user devices being affected by the incident.

“We can confirm that the situation is under control and it is safe to use AnyDesk. Please ensure that you are using the latest version, with the new code signing certificate,” AnyDesk said in a public statement.

While the company says that no authentication tokens were stolen, out of caution AnyDesk is revoking all passwords to its web portal and suggests changing your password if it’s used on other sites.

“AnyDesk is designed in a way which session authentication tokens cannot be stolen. They only exist on the end user’s device and are associated with the device fingerprint. These tokens never touch our systems,” AnyDesk told BleepingComputer in response to our questions about the attack.

“We have no indication of session hijacking as to our knowledge this is not possible.”

The company has already begun replacing stolen code signing certificates, with Günter Born of BornCity first reporting that they are using a new certificate in AnyDesk version 8.0.8, released on January 29th. The only listed change in the new version is that the company switched to a new code signing certificate and will revoke the old one soon.

BleepingComputer looked at previous versions of the software, and the older executables were signed under the name ‘philandro Software GmbH’ with serial number 0dbf152deaf0b981a8a938d53f769db8. The new version is now signed under ‘AnyDesk Software GmbH,’ with a serial number of 0a8177fcd8936a91b5e0eddf995b0ba5, as shown below.

Signed AnyDesk 8.0.6 (left) vs AnyDesk 8.0.8 (right)
Source: BleepingComputer

Certificates are usually not invalidated unless they have been compromised, such as being stolen in attacks or publicly exposed.

While AnyDesk has not shared when the breach occurred, Born reported that AnyDesk suffered a four-day outage starting on January 29th, during which the company disabled the ability to log in to the AnyDesk client.

“my.anydesk II is currently undergoing maintenance, which is expected to last for the next 48 hours or less,” reads the AnyDesk status message page.

“You can still access and use your account normally. Logging in to the AnyDesk client will be restored once the maintenance is complete.”

Yesterday, access was restored, allowing users to log in to their accounts, but AnyDesk did not provide any reason for the maintenance in the status updates.

However, AnyDesk has confirmed to BleepingComputer that this maintenance is related to the cybersecurity incident.

It is strongly recommended that all users switch to the new version of the software, as the old code signing certificate will soon be revoked.

Furthermore, while AnyDesk says that passwords were not stolen in the attack, the threat actors did gain access to production systems, so it is strongly advised that all AnyDesk users change their passwords. If the same password is used at other sites, it should be changed there as well.

Every week, it feels like we learn of a new breach against well-known companies.

Last night, Cloudflare disclosed that they were hacked on Thanksgiving using authentication keys stolen during last year’s Okta cyberattack.

Last week, Microsoft also revealed that they were hacked by Russian state-sponsored hackers named Midnight Blizzard, who also attacked HPE in May.


Source :
https://www.bleepingcomputer.com/news/security/anydesk-says-hackers-breached-its-production-servers-reset-passwords/

Gmail Blocking Your Emails? Here’s How to Fix It (Feb 2024)

Updated: Jan 22, 2024, 15:17
By Claire Broadley Content Manager
REVIEWED By Jared Atchison President and co-owner

Is Gmail blocking emails that you send? You’re not alone.

Google has always been strict in blocking rogue senders in its fight against spam.

In 2024, it’s tightening up the rules and enforcing stricter anti-spam limits. That means emails you send to Gmail mailboxes won’t arrive if you’re not compliant.

The amount of spam email that Google’s servers deal with is mind-bogglingly huge. About half of all emails sent daily are spam, and according to The Tech Report, about 1.8 billion people use Gmail. Google has a vested interest in keeping spam out of its customers’ inboxes.

This article explains who’s impacted by Google’s new sending requirements, what exactly will change this year, and what you need to do to ensure your emails are delivered.


Why Is Gmail Blocking My Emails?

Gmail is likely blocking your emails for one of 2 reasons. Either you’re on a spam blacklist already, or you don’t comply with its new requirements for bulk senders.

Reason 1. Google Put Your Domain On a Spam Blacklist

It only takes a few people to click Mark as Spam in Gmail for your domain reputation to be impacted. This can result in Gmail adding your domain to a blacklist if the spam complaints build up.

Once you’re on a blacklist, you’ll have to earn the trust of email providers to be removed.


“Getting off a blacklist is often not a straightforward task. It’s usually not just a case of requesting your removal – you’ll also have to show what you’ve done to resolve the issues that led to your blacklisting in the first place.”

-Rachel Adnyana, Email Deliverability Expert at SendLayer

Blacklists are not new, but the threshold for being added to one is lower than it once was.

The telltale sign that you’re on a blacklist is an error like this:

421-4.7.0 unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been temporarily rate limited.

550-5.7.1 Our system has detected an unusual rate of unsolicited mail originating from your IP address. To protect our users from spam, mail sent from your IP address has been blocked.

You may see a different 4xx or 5xx error when sending an email if you’re impacted by this. You can look through the SendLayer error library if you see an error you don’t understand.

If you don’t see errors, try running your domain name or sender IP through the blacklist checker at MXToolbox:

Stop WordPress emails going to spam with blacklist check

We’ll explain how you can resolve this problem in just a minute. First, let’s look at the other possible cause of emails to Gmail being blocked.

Reason 2. Your Emails Aren’t Authenticated

Emails are often sent without authentication, but they are sometimes delivered anyway.

If you have a WordPress website, it’ll send emails without authentication by default. You will likely find them in your spam folder.

Gmail contact form entry in spam folder

Some Gmail users will find their contact form emails don’t arrive at all.

As email providers become less tolerant of unauthenticated emails, we’re seeing more support tickets from customers whose WordPress emails go to spam. Some say they used to be delivered, but now aren’t. It’s confusing when this happens. “I didn’t change anything, so why did my emails stop sending?”

It’s not that your website changed. It’s more likely that the rules for detecting spam got tougher. Soon, senders who don’t authenticate their emails will be blocked from emailing Gmail recipients at all.

The telltale sign is an error like this:

550-5.7.26 This mail is unauthenticated, which poses a security risk to the sender and Gmail users, and has been blocked. The sender must authenticate with at least one of SPF or DKIM. For this message, DKIM checks did not pass and SPF check for example.com did not pass with ip: 192.186.0.1.

You can see more details about error 550-5.7.26 in the SendLayer error library.

As you can see, Google is cracking down on domains that don’t have SPF, DMARC, and DKIM configured. If you’re not sure what that means, I’ll explain more in the next section.

Who Do Gmail’s New Rules Apply To?

Initially, the SPF, DMARC, and DKIM requirement will apply to bulk senders. Google defines a bulk sender as a domain that has, at some point, sent more than 5,000 emails to Gmail recipients in a single day.

  • ‘Gmail recipients’ means anyone with an email ending @gmail.com or @googlemail.com, and people who are using custom domains or Google Workspace to receive emails.
  • You only need to send 5,000 emails once to be considered a bulk sender forever. Remember: this applies to all emails you send from your domain.

Email authentication is best practice and should be set up to maintain good deliverability — even if you’re not considered a bulk sender.

How to Stop Gmail Blocking Your Emails

Now to the important part. How do you stop Gmail blocking the emails you send?

Email deliverability issues can seriously harm your business. If you use Google Workspace, they could even prevent you from sending emails to your own employees.

If your newsletters are considered to be spam, and people mark them as such, that could mean your purchase receipts don’t get through in the future.

No matter why Gmail is blocking your emails, the solutions are the same. First, let’s set up a free reporting tool so you can see your email spam complaints.

1. Set Up Google Postmaster Tools (Bulk Senders)

Google Postmaster Tools is a free tool that will show you exactly what your spam complaint rate is.

If you send a large number of emails, it’s worth creating an account because it will allow you to understand your current standing with Gmail.

You’ll need to authenticate your domain before your spam complaint rate appears. If you’ve already authenticated it for services like Google Analytics, you may find that setup is almost instant.

Verified domain in Postmaster Tools

If you see any spikes in Postmaster Tools’ spam reporting, or you’re consistently maintaining a level of spam complaints over 0.1%, you might not be able to send emails to Gmail recipients (and that includes customers on Google Workspace).

The absolute maximum spam complaint rate that Google will tolerate is 0.3%.

Example of a Postmaster Tools report for Gmail recipients

If your spam complaints are trending higher, it’s a sign you need to get to the bottom of the causes. People could be marking emails as spam for all kinds of reasons, but here are a few that Google has specifically highlighted:

  • You might be sending emails to people who are not expecting to receive them.

Trying to get people onto a mailing list to inflate its size can be tempting. After all, you’ll cast a wider net when you send out a marketing email.

But it will hurt your deliverability too. More people will mark your emails as spam if you don’t give them any choice.

  • You might not be making it easy for people to unsubscribe.

You need to have a way for people to unsubscribe from your emails. You also need to implement a one-click unsubscribe list header if your email marketing platform supports that.

  • People could be sending spam through your website forms.

This is surprisingly common. If you don’t protect your contact form from spam, the junk email that passes through it hurts your deliverability because it appears to come from your domain.

  • You have a security issue on your website and you’re spamming people without even knowing.

In WordPress, there are a few common causes of poor security:

  • Poor security on your WordPress admin account, meaning your passwords are easy to guess and other people can get into your dashboard.
  • Nulled plugins, which can contain malicious code, including code that sends spam or phishing emails.
  • Poor security on your hosting account; for example, if you have a VPS, you need to watch out for hackers getting access and setting up SMTP relays that blast out emails without you knowing.

All in all, this is about keeping a close eye on what you’re sending and who you’re sending to.

2. Authenticate Emails From WordPress

If you’re still using WordPress without an SMTP plugin, we highly recommend that you install one to stop messages to Gmail from being blocked.

WP Mail SMTP steps in to handle all outgoing email from your WordPress site, routing it through a proper email provider. That authenticates the emails and stops them from being blocked.

WP Mail SMTP is easy to set up thanks to the Setup Wizard, and it supports many popular email platforms.

Choosing a mailer in the WP Mail SMTP setup wizard

You can also purchase the additional plugin setup service if you need a hand getting your email authentication working.


The Pro version of WP Mail SMTP is worth it because it adds lots of useful email logging and routing features. But if you just need to fix blocked emails to Gmail, the free version of WP Mail SMTP will do that.

Read more about setting up WordPress emails with authentication using WP Mail SMTP.

3. Implement DKIM, DMARC, and SPF

We already talked about issues that can arise without proper authentication.

You can authenticate your emails by ensuring they have the correct email headers: DKIM, SPF, and DMARC.

These 3 records prove that the emails you send are from you — the domain owner — not a random person pretending to be you.

What Are DMARC, SPF, and DKIM

In the past, you could get away without setting up these records, but Google will no longer allow you to skip this. If you’re seeing the 5.7.26 error from Gmail, you need to review your DNS records to figure out what’s missing.

Your email provider(s) will typically provide all 3 records and explain how to add them to your DNS. If you need a little more help, we have a few blog posts to help you understand what’s required.
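For illustration, here’s roughly what the three records look like as DNS TXT entries. Everything below is a placeholder (example.com, the “s1” DKIM selector, and the report address); your email provider supplies the real values:

example.com.                TXT  "v=spf1 include:_spf.google.com ~all"
s1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key from your provider>"
_dmarc.example.com.         TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"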

Just to add: Google also requires a PTR record, which is sometimes called forward reverse DNS, or full circle DNS.

Full circle reverse DNS lookup for PTR record

Your web host or email provider should handle the creation and management of your PTR record, but it’s worth checking that it has been set up, just to rule out any future problems. See our post on What is a PTR record? to find out more.

Once your DNS has been set up, send a test email to AboutMy.Email, which will check your email for compliance.
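If your site runs WordPress, one quick way to fire off that test message is with core’s wp_mail() function. This is just a sketch: the recipient below is a placeholder, and you should use the unique address AboutMy.Email generates for you.

<?php
// Send a one-off test message so AboutMy.Email can check SPF, DKIM, and DMARC.
wp_mail(
    'your-unique-address@aboutmy.email', // placeholder: use the address the service gives you
    'Deliverability test',
    'Checking SPF, DKIM, and DMARC compliance.'
);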

4. Use the Correct From Email When Sending

The From Email is the sender email — the email address your emails appear to come from.

You should send emails from an email address at the same domain as your website. In other words, don’t authenticate your domain and send emails from a totally different account elsewhere.  Make sure everything matches.

WP Mail SMTP has settings specifically to allow you to set the from email (and the corresponding from name):

from name and from email

What about real email addresses vs fake ones? It’s good practice to avoid using noreply@domain.com (or any non-existent email address) as a From Email.
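If you’d rather set the sender in code than in a plugin’s settings, WordPress core exposes filters that do the same job. A minimal sketch (the address and name are placeholders):

<?php
// Keep the From address on the same domain you authenticated.
add_filter( 'wp_mail_from', function ( $from ) {
    return 'hello@example.com';
} );

// And give it a matching, human-readable name.
add_filter( 'wp_mail_from_name', function ( $name ) {
    return 'Example Site';
} );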

5. Send Email With TLS

When you’re sending emails through WordPress (or any other platform) using an SMTP server, you should use a provider that uses TLS to make the connection.

TLS stands for Transport Layer Security. It’s the modern, more secure successor to SSL, and it has effectively replaced the older SSL protocol.

wp mail smtp host and port settings

We don’t need to go into a huge amount of detail on this. Most email providers will support TLS so you may already be using it. But it’s worth double-checking your account to make sure you’re using the latest settings.

6. Add Unsubscribe Links to Marketing Emails

Most businesses send transactional emails and marketing emails.

So what’s the difference?

  • Transactional emails are emails that are necessary for the normal operation of your business. Password reset emails, renewal reminders, and receipts are all transactional. These kinds of emails usually need to be delivered immediately to be effective.
  • Marketing emails are emails you send to promote your products and services. They don’t necessarily need to be sent immediately, and they are not essential for a customer.

There are 2 things to think about here.

First, marketing emails must have an unsubscribe link in the footer of the email. The link doesn’t have to be huge, but it has to be clearly visible.

Unsubscribe link example

Second, you should also make sure that your newsletters have a one-click unsubscribe link at the top.

One click unsubscribe link

In Gmail, this link triggers an instant unsubscribe popup. This is going to be important if you want to prevent your emails from being blocked in the future.

Gmail one click unsubscribe popup

The one-click unsubscribe link near the subject line is triggered by list unsubscribe headers. Your email provider should be able to add these headers for you.

If you’re not sure what to ask for, the header is the technical part of the email that we don’t normally see; here’s what it looks like:

List unsubscribe header example
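If you’re asking your provider what to add, these are the standard headers (defined in RFC 2369 and RFC 8058); the address and URL below are placeholders:

List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?u=12345>
List-Unsubscribe-Post: List-Unsubscribe=One-Click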

One question we’re asked a lot is this: Do transactional emails need to have unsubscribe links? They do not. However:

  • Include unsubscribe links in all marketing emails.
  • Don’t send emails that have a mixture of transactional and marketing content in them to try to get around this rule.
  • It’s OK to give people the choice of which email marketing lists they want to be subscribed to, but Google is clear that you must also provide an option to unsubscribe from all marketing emails.

7. Use Double Optins Where Possible

Google recommends that everyone who sends marketing emails uses double optins.

A double optin means that someone has to choose to join your list and confirm their choice, usually by clicking a confirmation link.

While Google won’t block emails to Gmail if you don’t use double optins, the truth is that single optins result in higher spam complaints. So implementing double optins will help keep that important spam complaint rate low.

The downside of double optins is that you’ll grow your list more slowly because you will sign up fewer leads.

Recovering From a Gmail Block

If your WordPress emails are being blocked to Gmail recipients, running through this guide should help you to figure out the reason why.

  • If Google is rejecting emails from your domain because it’s missing some crucial DNS records, adding them might resolve the problem quickly.
  • If your domain or IP is on a blacklist, it’ll take longer to recover. You’ll need to earn the trust of email providers and slowly improve your domain or IP reputation.
  • Make it easy for people to leave your mailing lists and don’t send them emails they don’t want. This will reduce the likelihood of them marking emails as spam, therefore keeping your spam complaint rate low.
  • It can take time to clean up your lists, but removing people who aren’t opening your emails is a good first step. Re-engagement workflows typically unsubscribe people who aren’t responsive, helping to reduce spam complaints, and automatically unsubscribing invalid email addresses can also help.

Email providers like Brevo or SMTP.com are used to helping customers with these issues. If you’re concerned, reach out to them for advice. They may be able to change your sender IP or help you look into your bounce rates to diagnose the problem.

It’s difficult to say how long recovery will take. It depends on the reason you were blocked and the severity of the problem. Either way, prevention is always better than the cure.

If WordPress emails are not being delivered to Gmail and you can’t figure out why, our support team is standing by to help.


Next, Boost Your Site Security

Improving your site’s security will help you to block malicious logins, and that will reduce the risk of people using your domain to send spam.

Check out our list of the best security plugins for WordPress to harden your security against common threats.

Ready to fix your emails? Get started today with the best WordPress SMTP plugin. If you don’t have the time to fix your emails, you can get full White Glove Setup assistance as an extra purchase, and there’s a 14-day money-back guarantee for all paid plans.

If this article helped you out, please follow us on Facebook and Twitter for more WordPress tips and tutorials.

Source :
https://wpmailsmtp.com/fix-gmail-blocking-emails/

How to Diagnose High Admin-Ajax Usage on Your WordPress Site

Salman Ravoof, January 8, 2024

Ajax is a JavaScript-based web technology that helps you to build dynamic and interactive websites. WordPress uses Ajax to power many of its core admin area features such as auto-saving posts, user session management, and notifications.

By default, WordPress directs all Ajax calls through the admin-ajax.php file located in the site’s /wp-admin directory.

Numerous simultaneous Ajax requests can lead to high admin-ajax.php usage, resulting in a considerably slower server and website. It’s one of the most common problems faced by many unoptimized WordPress sites. Typically, it manifests itself as a slow website or an HTTP 5xx error (mostly 504 or 502 errors).

In this article, you’ll learn about WordPress’ admin-ajax.php file, how it works, its benefits and drawbacks, and how you can diagnose and fix the high admin-ajax.php usage issue.

Ready to go? Let’s roll out!

What Is the admin-ajax.php File?

The admin-ajax.php file contains all the code for routing Ajax requests on WordPress. Its primary purpose is to establish a connection between the client and the server using Ajax. WordPress uses it to refresh the page’s contents without reloading it, thus making it dynamic and interactive to the users.

A basic overview of how Admin Ajax works on WordPress

Since the WordPress core already uses Ajax to power its various backend features, you can use the same functions to add Ajax to your own plugins and themes. All you need to do is register an action, point it at your site’s admin-ajax.php file, and define how you want it to return the value. You can set it to return HTML, JSON, or even XML.
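As a minimal sketch (my_action, the handler name, and the returned payload are placeholder examples, not part of WordPress itself), registering an Ajax action looks like this:

<?php
// In a plugin file or a theme's functions.php.
add_action( 'wp_ajax_my_action', 'my_action_handler' );        // for logged-in users
add_action( 'wp_ajax_nopriv_my_action', 'my_action_handler' ); // for logged-out visitors

function my_action_handler() {
    // Reject requests that don't carry the nonce generated for this action.
    check_ajax_referer( 'my_action_nonce' );

    // Send a JSON response; wp_send_json_success() also terminates the request.
    wp_send_json_success( array( 'time' => current_time( 'mysql' ) ) );
}

A POST to /wp-admin/admin-ajax.php with action=my_action (plus the nonce) is then routed to this handler.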

admin-ajax.php file in WordPress

As per WordPress Trac, the admin-ajax.php file first appeared in WordPress 2.1. It’s also referred to as Ajax Admin in the WordPress development community.

Checking Ajax usage in MyKinsta dashboard

The chart above only shows the amount of admin-ajax.php requests, not where they might be coming from. It’s a great way to see when the spikes are occurring. You can combine it with other techniques mentioned in this post to narrow down the primary cause.

Checking the number of admin-ajax.php requests in Chrome DevTools

You can also use Chrome DevTools to see how many requests are being sent to admin-ajax.php, and check the Timing tab under the Network section to find out how much time it takes to process these requests.

As for finding the exact reason behind high admin-ajax.php usage, there are two main causes: one on the frontend and the other on the backend. We’ll discuss both below.


How to Debug High admin-ajax.php Usage on WordPress

Third-party plugins are one of the most common reasons behind high admin-ajax.php usage. Typically, this issue is seen on the site’s frontend and shows up frequently in speed test reports.

But plugins aren’t the only culprit here: themes, the WordPress core, the web server, and even a DDoS attack can also be behind high Admin Ajax usage.

Let’s explore them in more detail.

How to Determine the Origin of High admin-ajax.php Usage for Plugins and Themes

Ajax-powered plugins in WordPress.org repository

Ajax is often used by WordPress developers to create dynamic and interactive plugins and themes. Some popular examples include adding features such as live search, product filters, infinite scroll, dynamic shopping cart, and chat box.

Just because a plugin uses Ajax doesn’t mean that it’ll slow down your site.

Viewing the admin-ajax.php request in a WebPageTest report

Usually, Admin Ajax loads towards the end of the page load. Also, you can set Ajax requests to load asynchronously, so they can have little to no effect on the page’s perceived performance for the user.

As you can see in the WebPageTest report above, admin-ajax.php loads towards the end of the requests queue, but it still takes up 780 ms. That’s a lot of time for just one request.

GTmetrix report indicating a serious admin-ajax.php usage spike

When developers don’t implement Ajax properly on WordPress, it can lead to drastic performance issues. The above GTmetrix report is a perfect example of such behavior.

You can also use GTmetrix to dig into individual post and response data. You can use this feature to pinpoint what’s causing the issue.

To do that, go to GTmetrix report’s Waterfall tab, and then find and click the POST admin-ajax.php item. You’ll see three tabs for this request: Headers, Post, and Response.

POST admin-ajax.php request’s Headers data

Checking out the request’s Post and Response tabs will give you some hints to find out the reasons behind the performance issue. For this site, you can see clues in the Response tab.

POST admin-ajax.php request’s Response data

You can see that part of the response has something to do with an input tag with id set to “fusion-form-nonce-656”.

A quick search of this clue will lead you to ThemeFusion’s website, the creators of the Avada theme. Hence, you can conclude that the request originates from the theme or one of the plugins it’s bundled with.

In such a case, you must first ensure that the Avada theme and all its related plugins are fully updated. If that doesn’t fix the issue, you can try disabling the theme to see whether the problem goes away.

Unlike disabling a plugin, disabling a theme isn’t feasible in most scenarios. Hence, try optimizing the theme to remove any bottlenecks. You can also reach out to the theme’s support team to see if they can suggest a better solution.

Testing another slow website in GTmetrix led to finding similar issues with Visual Composer page builder and Notification Bar plugins.

Another POST admin-ajax.php request’s Response data
POST admin-ajax.php request’s Post data

Thankfully, if you cannot resolve an issue with a plugin, you most likely have many alternative plugins available to try out. For example, when it comes to page builders, you could also try out Beaver Builder or Elementor.


How to Determine the Origin of High admin-ajax.php Usage Manually

Sometimes, the Post and Response data presented in speed test reports may not be as clear and straightforward. Here, finding the origin of high admin-ajax.php usage isn’t as easy. In such cases, you can always do it the old-school way.

Disable all your site’s plugins, clear your site’s cache (if any), and then run a speed test again. If admin-ajax.php is still present, then the most likely culprit is the theme. But if it’s nowhere to be found, then you must activate each plugin one by one and run the speed tests each time. By process of elimination, you’ll zero in on the issue’s origin.

Tip: Using a staging environment (e.g. Kinsta’s staging environment) is a great way to run tests on your site without affecting your live site. Once you’ve determined the cause and fixed the issue in the staging environment, you can push the changes to your live site.

Diagnosing Backend Server Issues with admin-ajax.php

The second most common reason for high admin-ajax.php usage is the WordPress Heartbeat API generating frequent Ajax calls, leading to high CPU usage on the server. Typically, this is caused by many users being logged into the WordPress backend dashboard, so you won’t see it show up in speed tests.

By default, the Heartbeat API polls the admin-ajax.php file every 15 seconds to auto-save posts or pages. If you’re using a shared hosting server, then you don’t have many server resources dedicated to your site. If you’re editing a post or page and leave the tab open for a significant time, then it can rack up a lot of Admin Ajax requests.

For example, when you’re writing or editing posts, a single user alone can generate 240 requests in an hour!

Frequent autosave admin-ajax.php requests

That’s a lot of requests on the backend with just one user. Now imagine a site where there are multiple editors logged in concurrently. Such a site can rack up Ajax requests rapidly, generating high CPU usage.

That was the situation discovered by DARTDrones when the company was preparing its WooCommerce site for an expected surge in traffic following an appearance on Shark Tank.

Before being featured on the television show, the DARTDrones site was receiving over 4,100 admin-ajax.php calls in a day with only 2,000 unique visitors. That’s a weak requests-to-visits ratio.

Heavy admin-ajax.php usage on dartdrones.com

Investigators noticed the /wp-admin referrer URL and correctly determined the root cause. These requests were due to DARTDrones’ admins and editors updating the site frequently in anticipation of the show.

WordPress partially fixed this Heartbeat API issue long ago. For instance, you can reduce the frequency of requests generated by the Heartbeat API on hosts with limited resources, and the API suspends itself after one hour of keyboard/mouse/touch inactivity.
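If you want to dial the polling down yourself, WordPress core’s heartbeat_settings filter lets you change the interval. A minimal sketch (the 60-second value is just an example):

<?php
// Slow the Heartbeat API down to one poll per minute.
add_filter( 'heartbeat_settings', function ( $settings ) {
    $settings['interval'] = 60; // in seconds; the default on post-edit screens is 15
    return $settings;
} );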

Info

If you are using WP Rocket, then Heartbeat Control is now a built-in feature instead of a standalone plugin.

High Traffic Due to a DDoS Attack or Spam Bots

Overwhelming your site with a DDoS attack or spam bots can also lead to high admin-ajax.php usage. However, such an attack doesn’t necessarily target admin-ajax.php specifically; the increased Admin Ajax usage is just collateral damage.

If your site is under a DDoS attack, your priority should be to get it behind a robust CDN/WAF like Cloudflare or Sucuri. Every hosting plan with Kinsta includes free Cloudflare integration and Kinsta CDN, which can help you offload your website’s resources to a large extent.

To learn more about how you can protect your websites from malicious attacks like these, you can refer to our in-depth guide on how to stop a DDoS attack.

Summary

WordPress uses Ajax in its Heartbeat API to implement many of its core features. However, it can lead to increased load times if not used correctly. This is typically caused by a high frequency of requests to the admin-ajax.php file.

In this article, you learned the various causes for high admin-ajax.php usage, how to diagnose what’s responsible for this symptom, and how you can go about fixing it. In most cases, following this guide should get your site back up and running smoothly in no time.

However, in some cases, upgrading to a server with more resources is the only viable solution, especially for demanding use cases such as ecommerce and membership sites. If you’re running such a site, consider upgrading to a managed WordPress host that is experienced with these types of performance issues.

If you’re still struggling with high admin-ajax.php usage on your WordPress site, let us know in the comments section.



Salman Ravoof

Salman Ravoof is a self-taught web developer, writer, creator, and a huge admirer of Free and Open Source Software (FOSS). Besides tech, he’s excited by science, philosophy, photography, arts, cats, and food. Learn more about him on his website, and connect with Salman on Twitter.

Source :
https://kinsta.com/blog/admin-ajax-php/