The PQXDH Key Agreement Protocol

Revision 1, 2023-05-24

Ehren Kret, Rolfe Schmidt

1. Introduction

This document describes the “PQXDH” (or “Post-Quantum Extended Diffie-Hellman”) key agreement protocol. PQXDH establishes a shared secret key between two parties who mutually authenticate each other based on public keys. PQXDH provides post-quantum forward secrecy and a form of cryptographic deniability but still relies on the hardness of the discrete log problem for mutual authentication in this revision of the protocol.

PQXDH is designed for asynchronous settings where one user (“Bob”) is offline but has published some information to a server. Another user (“Alice”) wants to use that information to send encrypted data to Bob, and also establish a shared secret key for future communication.

2. Preliminaries

2.1. PQXDH parameters

An application using PQXDH must decide on several parameters:

curve      A Montgomery curve for which XEdDSA [1] is specified; at present this is one of curve25519 or curve448
hash       A 256- or 512-bit hash function (e.g. SHA-256 or SHA-512)
info       An ASCII string identifying the application, with a minimum length of 8 bytes
pqkem      A post-quantum key encapsulation mechanism (e.g. CRYSTALS-KYBER-1024 [2])
EncodeEC   A function that encodes a curve public key into a byte sequence
DecodeEC   A function that decodes a byte sequence into a curve public key; the inverse of EncodeEC
EncodeKEM  A function that encodes a pqkem public key into a byte sequence
DecodeKEM  A function that decodes a byte sequence into a pqkem public key; the inverse of EncodeKEM

For example, an application could choose curve as curve25519, hash as SHA-512, info as “MyProtocol”, and pqkem as CRYSTALS-KYBER-1024.

The recommended implementation of EncodeEC consists of a single-byte constant representation of curve followed by little-endian encoding of the u-coordinate as specified in [3]. The single-byte representation of curve is defined by the implementer. Similarly the recommended implementation of DecodeEC reads the first byte to determine the parameter curve. If the first byte does not represent a recognized curve, the function fails. Otherwise it applies the little-endian decoding of the u-coordinate for curve as specified in [3].

The recommended implementation of EncodeKEM consists of a single-byte constant representation of pqkem followed by the encoding of PQKPK specified by pqkem. The single-byte representation of pqkem is defined by the implementer. Similarly the recommended implementation of DecodeKEM reads the first byte to determine the parameter pqkem. If the first byte does not represent a recognized key encapsulation mechanism, the function fails. Otherwise it applies the decoding specified by the selected key encapsulation mechanism.
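The recommended encodings above can be sketched as follows. The identifier bytes (0x01 for curve25519, 0x02 for CRYSTALS-KYBER-1024) are hypothetical values chosen for illustration; the specification leaves them to the implementer.

```python
# Sketch of the recommended EncodeEC/DecodeEC and EncodeKEM/DecodeKEM.
# The single-byte identifiers below are illustrative only; the spec
# leaves their values to the implementer.

CURVE25519_ID = b"\x01"   # hypothetical identifier for curve25519
KYBER1024_ID = b"\x02"    # hypothetical identifier for CRYSTALS-KYBER-1024

def encode_ec(u_coordinate_le: bytes) -> bytes:
    """Identifier byte followed by the 32-byte little-endian u-coordinate."""
    assert len(u_coordinate_le) == 32
    return CURVE25519_ID + u_coordinate_le

def decode_ec(data: bytes) -> bytes:
    """Inverse of encode_ec; fails if the curve identifier is unrecognized."""
    if data[:1] != CURVE25519_ID or len(data) != 33:
        raise ValueError("unrecognized curve or malformed encoding")
    return data[1:]

def encode_kem(pk: bytes) -> bytes:
    """Identifier byte followed by the pqkem-defined public key encoding."""
    return KYBER1024_ID + pk

def decode_kem(data: bytes) -> bytes:
    """Inverse of encode_kem; fails if the KEM identifier is unrecognized."""
    if data[:1] != KYBER1024_ID:
        raise ValueError("unrecognized key encapsulation mechanism")
    return data[1:]
```

Because the identifier byte is checked before any curve or KEM arithmetic runs, a bundle built for an unsupported parameter set fails fast during decoding.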

2.2. Cryptographic notation

Throughout this document, all public keys have a corresponding private key, but to simplify descriptions we will identify key pairs by the public key and assume that the corresponding private key can be accessed by the key owner.

This document will use the following notation:

  • The concatenation of byte sequences X and Y is X || Y.
  • DH(PK1, PK2) represents a byte sequence which is the shared secret output from an Elliptic Curve Diffie-Hellman function involving the key pairs represented by public keys PK1 and PK2. The Elliptic Curve Diffie-Hellman function will be either the X25519 or X448 function from [3], depending on the curve parameter.
  • Sig(PK, M, Z) represents the byte sequence that is a curve XEdDSA signature on the byte sequence M which was created by signing M with PK’s corresponding private key and using 64 bytes of randomness Z. This signature verifies with public key PK. The signing and verification functions for XEdDSA are specified in [1].
  • KDF(KM) represents 32 bytes of output from the HKDF algorithm [4] using hash with inputs:
    • HKDF input key material = F || KM, where KM is an input byte sequence containing secret key material, and F is a byte sequence containing 32 0xFF bytes if curve is curve25519, and 57 0xFF bytes if curve is curve448. As in XEdDSA [1], F ensures that the first bits of the HKDF input key material are never a valid encoding of a scalar or elliptic curve point.
    • HKDF salt = A zero-filled byte sequence with length equal to the hash output length, in bytes.
    • HKDF info = The concatenation of string representations of the four PQXDH parameters info, curve, hash, and pqkem into a single string separated with ‘_’, such as “MyProtocol_CURVE25519_SHA-512_CRYSTALS-KYBER-1024”. The string representations of the PQXDH parameters are defined by the implementer.
  • (CT, SS) = PQKEM-ENC(PK) represents a tuple of the byte sequence that is the KEM ciphertext, CT, output by the algorithm pqkem together with the shared secret byte sequence SS encapsulated by the ciphertext using the public key PK.
  • PQKEM-DEC(PK, CT) represents the shared secret byte sequence SS decapsulated from a pqkem ciphertext using the private key counterpart of the public key PK used to encapsulate the ciphertext CT.
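The KDF above can be sketched with a small HKDF implementation (extract-and-expand per RFC 5869 [4]). This sketch assumes the example parameters curve25519 and SHA-512, so F is 32 0xFF bytes and the salt is 64 zero bytes; since only 32 output bytes are needed, a single expand block suffices.

```python
import hashlib
import hmac

# Sketch of KDF(KM) for the example parameters curve = curve25519,
# hash = SHA-512, info = "MyProtocol", pqkem = CRYSTALS-KYBER-1024.
F = b"\xff" * 32       # 32 0xFF bytes for curve25519
SALT = b"\x00" * 64    # zero-filled, equal to the SHA-512 output length
INFO = b"MyProtocol_CURVE25519_SHA-512_CRYSTALS-KYBER-1024"

def kdf(km: bytes) -> bytes:
    """Return 32 bytes of HKDF-SHA-512 output on F || KM (RFC 5869)."""
    prk = hmac.new(SALT, F + km, hashlib.sha512).digest()        # HKDF-Extract
    t1 = hmac.new(prk, INFO + b"\x01", hashlib.sha512).digest()  # HKDF-Expand, block T(1)
    return t1[:32]
```

A production implementation would use a vetted HKDF (e.g. from a cryptography library) rather than hand-rolling the HMAC calls, but the inputs shown here are exactly the ones the definition above prescribes.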

2.3. Roles

The PQXDH protocol involves three parties: Alice, Bob, and a server.

  • Alice wants to send Bob some initial data using encryption, and also establish a shared secret key which may be used for bidirectional communication.
  • Bob wants to allow parties like Alice to establish a shared key with him and send encrypted data. However, Bob might be offline when Alice attempts to do this. To enable this, Bob has a relationship with some server.
  • The server can store messages from Alice to Bob which Bob can later retrieve. The server also lets Bob publish some data which the server will provide to parties like Alice. The amount of trust placed in the server is discussed in Section 4.9.

In some systems the server role might be divided between multiple entities, but for simplicity we assume a single server that provides the above functions for Alice and Bob.

2.4. Elliptic Curve Keys

PQXDH uses the following elliptic curve public keys:

IKA                  Alice’s identity key
IKB                  Bob’s identity key
EKA                  Alice’s ephemeral key
SPKB                 Bob’s signed prekey
(OPKB1, OPKB2, …)    Bob’s set of one-time prekeys

The elliptic curve public keys used within a PQXDH protocol run must either all be in curve25519 form, or they must all be in curve448 form, depending on the curve parameter [3].

Each party has a long-term identity elliptic curve public key (IKA for Alice, IKB for Bob).

Bob also has a signed prekey SPKB, which he changes periodically and signs each time with IKB, and a set of one-time prekeys (OPKB1, OPKB2, …), which are each used in a single PQXDH protocol run. (“Prekeys” are so named because they are essentially protocol messages which Bob publishes to the server prior to Alice beginning the protocol run.) These keys will be uploaded to the server as described in Section 3.2.

During each protocol run, Alice generates a new ephemeral key pair with public key EKA.

2.5. Post-Quantum Key Encapsulation Keys

PQXDH uses the following post-quantum key encapsulation public keys:

PQSPKB                   Bob’s signed last-resort pqkem prekey
(PQOPKB1, PQOPKB2, …)    Bob’s set of signed one-time pqkem prekeys

The pqkem public keys used within a PQXDH protocol run must all use the same pqkem parameter.

Bob has a signed last-resort post-quantum prekey PQSPKB, which he changes periodically and signs each time with IKB, and a set of signed one-time prekeys (PQOPKB1, PQOPKB2, …) which are also signed with IKB and each used in a single PQXDH protocol run. These keys will be uploaded to the server as described in Section 3.2. The name “last-resort” refers to the fact that the last-resort prekey is only used when one-time pqkem prekeys are not available. This can happen when the number of prekey bundles downloaded for Bob exceeds the number of one-time pqkem prekeys Bob has uploaded (see Section 3 for details about the role of the server).

3. The PQXDH protocol

3.1. Overview

PQXDH has three phases:

  1. Bob publishes his elliptic curve identity key, elliptic curve prekeys, and pqkem prekeys to a server.
  2. Alice fetches a “prekey bundle” from the server, and uses it to send an initial message to Bob.
  3. Bob receives and processes Alice’s initial message.

The following sections explain these phases.

3.2. Publishing keys

Bob generates a sequence of 64-byte random values ZSPK, ZPQSPK, Z1, Z2, … and publishes a set of keys to the server containing:

  • Bob’s curve identity key IKB
  • Bob’s signed curve prekey SPKB
  • Bob’s signature on the curve prekey Sig(IKB, EncodeEC(SPKB), ZSPK)
  • Bob’s signed last-resort pqkem prekey PQSPKB
  • Bob’s signature on the pqkem prekey Sig(IKB, EncodeKEM(PQSPKB), ZPQSPK)
  • A set of Bob’s one-time curve prekeys (OPKB1, OPKB2, OPKB3, …)
  • A set of Bob’s signed one-time pqkem prekeys (PQOPKB1, PQOPKB2, PQOPKB3, …)
  • The set of Bob’s signatures on the signed one-time pqkem prekeys (Sig(IKB, EncodeKEM(PQOPKB1), Z1), Sig(IKB, EncodeKEM(PQOPKB2), Z2), Sig(IKB, EncodeKEM(PQOPKB3), Z3), …)

Bob only needs to upload his identity key to the server once. However, Bob may upload new one-time prekeys at other times (e.g. when the server informs Bob that the server’s store of one-time prekeys is getting low).

For both the signed curve prekey and the signed last-resort pqkem prekey, Bob will upload a new prekey along with its signature under IKB at some interval (e.g. once a week or once a month). Each new signed prekey and its signature replace the previous values.

After uploading a new pair of signed curve and signed last-resort pqkem prekeys, Bob may keep the private key corresponding to the previous pair around for some period of time to handle messages using it that may have been delayed in transit. Eventually, Bob should delete this private key for forward secrecy (one-time prekey private keys will be deleted as Bob receives messages using them; see Section 3.4).

3.3. Sending the initial message

To perform a PQXDH key agreement with Bob, Alice contacts the server and fetches a “prekey bundle” containing the following values:

  • Bob’s curve identity key IKB
  • Bob’s signed curve prekey SPKB
  • Bob’s signature on the curve prekey Sig(IKB, EncodeEC(SPKB), ZSPK)
  • Either one of Bob’s signed one-time pqkem prekeys PQOPKBn or, if no signed one-time pqkem prekey remains, Bob’s signed last-resort pqkem prekey PQSPKB. Call this key PQPKB.
  • Bob’s signature on the pqkem prekey Sig(IKB, EncodeKEM(PQPKB), ZPQPK)
  • (Optionally) Bob’s one-time curve prekey OPKBn

The server should provide one of Bob’s curve one-time prekeys if one exists and then delete it. If all of Bob’s curve one-time prekeys on the server have been deleted, the bundle will not contain a one-time curve prekey element.

The server should prefer to provide one of Bob’s pqkem one-time signed prekeys PQOPKBn if one exists and then delete it. If all of Bob’s pqkem one-time signed prekeys on the server have been deleted, the bundle will instead contain Bob’s pqkem last-resort signed prekey PQSPKB.
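The server-side selection rules above can be sketched as follows. The store layout and field names are hypothetical, and a real server must make the pop-and-delete atomic and persistent; this only illustrates the preference order and fallback behavior.

```python
# Sketch of server-side prekey bundle assembly. Key material is shown as
# opaque bytes; the store layout and field names are illustrative only.

def build_prekey_bundle(store: dict) -> dict:
    bundle = {
        "IK": store["identity_key"],
        "SPK": store["signed_prekey"],
        "SPK_sig": store["signed_prekey_sig"],
    }
    # Prefer a signed one-time pqkem prekey; fall back to the last-resort key.
    if store["pq_one_time_prekeys"]:
        pq = store["pq_one_time_prekeys"].pop(0)   # delete after handing out
    else:
        pq = store["pq_last_resort_prekey"]        # last resort, never deleted here
    bundle["PQPK"], bundle["PQPK_sig"] = pq
    # A curve one-time prekey is included only if one remains.
    if store["one_time_prekeys"]:
        bundle["OPK"] = store["one_time_prekeys"].pop(0)
    return bundle
```

Note the asymmetry: when curve one-time prekeys run out the bundle simply omits that field, whereas the pqkem slot is always filled, using the last-resort key if necessary.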

Alice verifies the signatures on the prekeys. If any signature check fails, Alice aborts the protocol. Otherwise, if all signature checks pass, Alice then generates an ephemeral curve key pair with public key EKA. Alice additionally generates a pqkem encapsulated shared secret:

    (CT, SS) = PQKEM-ENC(PQPKB)
If the bundle does not contain a curve one-time prekey, she calculates:

    DH1 = DH(IKA, SPKB)
    DH2 = DH(EKA, IKB)
    DH3 = DH(EKA, SPKB)
    SK = KDF(DH1 || DH2 || DH3 || SS)

If the bundle does contain a curve one-time prekey, the calculation is modified to include an additional DH:

    DH4 = DH(EKA, OPKB)
    SK = KDF(DH1 || DH2 || DH3 || DH4 || SS)

After calculating SK, Alice deletes her ephemeral private key, the DH outputs, the shared secret SS, and the ciphertext CT.
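The derivation above can be sketched end to end with stand-ins for the real primitives. The "DH" below is classic Diffie-Hellman in a small multiplicative group (NOT X25519), the KEM shared secret is a random placeholder, and the KDF is simplified; the sketch only shows how the DH outputs and SS are concatenated, and its symmetry means Bob can recompute the same SK from his own private keys.

```python
import hashlib
import secrets

# Toy sketch of Alice's SK derivation (one-time prekey present).
# The group below is NOT a secure group; it stands in for curve25519.
P = 2**127 - 1   # toy prime modulus, illustration only
G = 5            # toy generator

def dh(priv: int, pub: int) -> bytes:
    """Stand-in for DH(): shared secret from a private and a public key."""
    return pow(pub, priv, P).to_bytes(16, "little")

def kdf(km: bytes) -> bytes:
    """Simplified stand-in for the HKDF-based KDF of Section 2.2."""
    return hashlib.sha512(b"\xff" * 32 + km).digest()[:32]

# Toy key pairs for Alice (IK_A, EK_A) and Bob (IK_B, SPK_B, OPK_B).
ik_a, ik_b, spk_b, ek_a, opk_b = (secrets.randbelow(P - 2) + 2 for _ in range(5))
IK_A, IK_B = pow(G, ik_a, P), pow(G, ik_b, P)
SPK_B, EK_A, OPK_B = pow(G, spk_b, P), pow(G, ek_a, P), pow(G, opk_b, P)

ss = secrets.token_bytes(32)   # placeholder for the PQKEM shared secret SS

dh1 = dh(ik_a, SPK_B)
dh2 = dh(ek_a, IK_B)
dh3 = dh(ek_a, SPK_B)
dh4 = dh(ek_a, OPK_B)          # only present when a one-time prekey was fetched
sk = kdf(dh1 + dh2 + dh3 + dh4 + ss)
```

Note that DH1 and DH2 bind the identity keys into SK (mutual authentication), while DH3, DH4, and SS supply forward secrecy.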

Alice then calculates an “associated data” byte sequence AD that contains identity information for both parties:

    AD = EncodeEC(IKA) || EncodeEC(IKB)

Alice may optionally append additional information to AD, such as Alice and Bob’s usernames, certificates, or other identifying information.

Alice then sends Bob an initial message containing:

  • Alice’s identity key IKA
  • Alice’s ephemeral key EKA
  • The pqkem ciphertext CT encapsulating SS for PQPKB
  • Identifiers stating which of Bob’s prekeys Alice used
  • An initial ciphertext encrypted with some AEAD encryption scheme [5] using AD as associated data and using an encryption key which is either SK or the output from some cryptographic PRF keyed by SK.

The initial ciphertext is typically the first message in some post-PQXDH communication protocol. In other words, this ciphertext typically has two roles, serving as the first message within some post-PQXDH protocol, and as part of Alice’s PQXDH initial message.

The initial message must be encoded in an unambiguous format to avoid confusion of the message items by the recipient.
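One simple way to meet this unambiguity requirement is length prefixing, sketched below. The 4-byte big-endian length fields are an illustrative choice, not mandated by this document; any self-describing format (e.g. protocol buffers) works equally well.

```python
import struct

# Illustrative length-prefixed encoding for the fields of Alice's initial
# message (identity key, ephemeral key, KEM ciphertext, prekey identifiers,
# AEAD ciphertext). Each field is preceded by its 4-byte big-endian length.

def encode_fields(fields: list[bytes]) -> bytes:
    return b"".join(struct.pack(">I", len(f)) + f for f in fields)

def decode_fields(data: bytes) -> list[bytes]:
    fields, i = [], 0
    while i < len(data):
        (n,) = struct.unpack_from(">I", data, i)
        i += 4
        if i + n > len(data):
            raise ValueError("truncated field")
        fields.append(data[i:i + n])
        i += n
    return fields
```

Because every field carries its own length, variable-size items such as KEM ciphertexts can never be confused with adjacent fields, even when a field is empty.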

After sending this, Alice may continue using SK or keys derived from SK within the post-PQXDH protocol for communication with Bob, subject to the security considerations discussed in Section 4.

3.4. Receiving the initial message

Upon receiving Alice’s initial message, Bob retrieves Alice’s identity key and ephemeral key from the message. Bob also loads his identity private key and the private key(s) corresponding to the signed prekeys and one-time prekeys Alice used.

Using these keys, Bob calculates PQKEM-DEC(PQPKB, CT) as the shared secret SS and repeats the DH and KDF calculations from the previous section to derive SK, and then deletes the DH values and SS values.

Bob then constructs the AD byte sequence using IKA and IKB as described in the previous section. Finally, Bob attempts to decrypt the initial ciphertext using SK and AD. If the initial ciphertext fails to decrypt, then Bob aborts the protocol and deletes SK.

If the initial ciphertext decrypts successfully, the protocol is complete for Bob. For forward secrecy, Bob deletes the ciphertext and any one-time prekey private key that was used. Bob may then continue using SK or keys derived from SK within the post-PQXDH protocol for communication with Alice subject to the security considerations discussed in Section 4.

4. Security considerations

The security of the composition of X3DH [6] with the Double Ratchet [7] was formally studied in [8] and proven secure under the Gap Diffie-Hellman assumption (GDH)[9]. PQXDH composed with the Double Ratchet retains this security against an adversary without access to a quantum computer, but strengthens the security of the initial handshake to require the solution of both GDH and Module-LWE [10]. The remainder of this section discusses an incomplete list of further security considerations.

4.1. Authentication

Before or after a PQXDH key agreement, the parties may compare their identity public keys IKA and IKB through some authenticated channel. For example, they may compare public key fingerprints manually, or by scanning a QR code. Methods for doing this are outside the scope of this document.

Authentication in PQXDH is not quantum-secure. In the presence of an active quantum adversary, the parties receive no cryptographic guarantees as to who they are communicating with. Post-quantum secure deniable mutual authentication is an open research problem which we hope to address with a future revision of this protocol.

If authentication is not performed, the parties receive no cryptographic guarantee as to who they are communicating with.

4.2. Protocol replay

If Alice’s initial message doesn’t use a one-time prekey, it may be replayed to Bob and he will accept it. This could cause Bob to think Alice had sent him the same message (or messages) repeatedly.

To mitigate this, a post-PQXDH protocol may wish to quickly negotiate a new encryption key for Alice based on fresh random input from Bob. This is the typical behavior of Diffie-Hellman-based ratcheting protocols [7].

Bob could attempt other mitigations, such as maintaining a blacklist of observed messages, or replacing old signed prekeys more rapidly. Analyzing these mitigations is beyond the scope of this document.

4.3. Replay and key reuse

Another consequence of the replays discussed in the previous section is that a successfully replayed initial message would cause Bob to derive the same SK in different protocol runs.

For this reason, any post-PQXDH protocol that uses SK to derive encryption keys MUST take measures to prevent catastrophic key reuse. For example, Bob could use a DH-based ratcheting protocol to combine SK with a freshly generated DH output to get a randomized encryption key [7].

4.4. Deniability

Informally, cryptographic deniability means that a protocol neither gives its participants a publishable cryptographic proof of the contents of their communication nor proof of the fact that they communicated. PQXDH, like X3DH, aims to provide both Alice and Bob deniability that they communicated with each other in a context where a “judge” who may have access to one or more parties’ secret keys is presented with a transcript allegedly created by communication between Alice and Bob.

We focus on offline deniability because if either party is collaborating with a third party during protocol execution, they will be able to provide proof of their communication to such a third party. This limitation on “online” deniability appears to be intrinsic to the asynchronous setting [11].

PQXDH has some forms of cryptographic deniability. Motivated by the goals of X3DH, Brendel et al. [12] introduce a notion of 1-out-of-2 deniability for semi-honest parties and a “big brother” judge with access to all parties’ secret keys. Since either Alice or Bob can create a fake transcript using only their own secret keys, PQXDH has this deniability property. Vatandas, et al. [13] prove that X3DH is deniable in a different sense subject to certain “Knowledge of Diffie-Hellman Assumptions”. PQXDH is deniable in this sense for Alice, subject to the same assumptions, and we conjecture that it is deniable for Bob subject to an additional Plaintext Awareness (PA) assumption for pqkem. We note that Kyber uses a variant of the Fujisaki-Okamoto transform with implicit rejection [14] and is therefore not PA as is. However, in PQXDH, an AEAD ciphertext encrypted with the session key is always sent along with the Kyber ciphertext. This should offer the same guarantees as PA. We encourage the community to investigate the precise deniability properties of PQXDH.

These assertions all pertain to deniability in the classical setting. As discussed in [15], we expect that for future revisions of this protocol (that provide post-quantum mutual authentication) assertions about deniability against semi-honest quantum adversaries will hold. Deniability in the face of malicious quantum adversaries requires further research.

4.5. Signatures

It might be tempting to omit the prekey signature after observing that mutual authentication and forward secrecy are achieved by the DH calculations. However, this would allow a “weak forward secrecy” attack: A malicious server could provide Alice a prekey bundle with forged prekeys, and later compromise Bob’s IKB to calculate SK.

Alternatively, it might be tempting to replace the DH-based mutual authentication (i.e. DH1 and DH2) with signatures from the identity keys. However, this reduces deniability, increases the size of initial messages, and increases the damage done if ephemeral or prekey private keys are compromised, or if the signature scheme is broken.

4.6. Key compromise

Compromise of a party’s private keys has a disastrous effect on security, though the use of ephemeral keys and prekeys provides some mitigation.

Compromise of a party’s identity private key allows impersonation of that party to others. Compromise of a party’s prekey private keys may affect the security of older or newer SK values, depending on many considerations.

A full analysis of all possible compromise scenarios is outside the scope of this document, however a partial analysis of some plausible scenarios is below:

  • If either an elliptic curve one-time prekey (OPKB) or a post-quantum key encapsulation one-time prekey (PQOPKB) are used for a protocol run and deleted as specified, then a compromise of Bob’s identity key and prekey private keys at some future time will not compromise the older SK.
  • If one-time prekeys were not used for a protocol run, then a compromise of the private keys for IKB, SPKB, and PQSPKB from that protocol run would compromise the SK that was calculated earlier. Frequent replacement of signed prekeys mitigates this, as does using a post-PQXDH ratcheting protocol which rapidly replaces SK with new keys to provide fresh forward secrecy [7].
  • Compromise of prekey private keys may enable attacks that extend into the future, such as passive calculation of SK values, and impersonation of arbitrary other parties to the compromised party (“key-compromise impersonation”). These attacks are possible until the compromised party replaces his compromised prekeys on the server (in the case of passive attack); or deletes his compromised signed prekey’s private key (in the case of key-compromise impersonation).

4.7. Passive quantum adversaries

PQXDH is designed to prevent “harvest now, decrypt later” attacks by adversaries with access to a quantum computer capable of computing discrete logarithms in curve.

  • If an attacker has recorded the public information and the message from Alice to Bob, even access to a quantum computer will not compromise SK.
  • If a post-quantum key encapsulation one-time prekey (PQOPKB) is used for a protocol run and deleted as specified then compromise after deletion and access to a quantum computer at some future time will not compromise the older SK.
  • If post-quantum one-time prekeys were not used for a protocol run, then access to a quantum computer and a compromise of the private key for PQSPKB from that protocol run would compromise the SK that was calculated earlier. Frequent replacement of signed prekeys mitigates this, as does using a post-PQXDH ratcheting protocol which rapidly replaces SK with new keys to provide fresh forward secrecy [7].

4.8. Active quantum adversaries

PQXDH is not designed to provide protection against active quantum attackers. An active attacker with access to a quantum computer capable of computing discrete logarithms in curve can compute DH(PK1, PK2) and Sig(PK, M, Z) for all elliptic curve keys PK1, PK2, and PK. This allows an attacker to impersonate Alice by using the quantum computer to compute the secret key corresponding to IKA and then continuing with the protocol. A malicious server with access to such a quantum computer could impersonate Bob by generating new key pairs PQSPK’B and PQOPK’B, computing the secret key corresponding to IKB, then using IKB to sign the newly generated post-quantum KEM keys and delivering these attacker-generated keys in place of Bob’s post-quantum KEM keys when Alice requests a prekey bundle.

It is tempting to consider adding a post-quantum identity key that Bob could use to sign the post-quantum prekeys. This would prevent the malicious server attack described above and provide Alice a cryptographic guarantee that she is communicating with Bob, but it does not provide mutual authentication. Bob does not have any cryptographic guarantee about who he is communicating with. The post-quantum KEM and signature schemes being standardized by NIST [16] do not provide a mechanism for post-quantum deniable mutual authentication, although this can be achieved through the use of a post-quantum ring signature or designated verifier signature [12][15]. We urge the community to work toward standardization of these or other mechanisms that will allow deniable mutual authentication.

4.9. Server trust

A malicious server could cause communication between Alice and Bob to fail (e.g. by refusing to deliver messages).

If Alice and Bob authenticate each other as in Section 4.1, then the only additional attack available to the server is to refuse to hand out one-time prekeys, causing forward secrecy for SK to depend on the signed prekey’s lifetime (as analyzed in Section 4.6).

This reduction in initial forward secrecy could also happen if one party maliciously drains another party’s one-time prekeys, so the server should attempt to prevent this (e.g. with rate limits on fetching prekey bundles).

4.10. Identity binding

Authentication as in Section 4.1 does not necessarily prevent an “identity misbinding” or “unknown key share” attack.

This results when an attacker (“Charlie”) falsely presents Bob’s identity key fingerprint to Alice as his (Charlie’s) own, and then either forwards Alice’s initial message to Bob, or falsely presents Bob’s contact information as his own. The effect of this is that Alice thinks she is sending an initial message to Charlie when she is actually sending it to Bob.

To make this more difficult the parties can include more identifying information into AD, or hash more identifying information into the fingerprint, such as usernames, phone numbers, real names, or other identifying information. Charlie would be forced to lie about these additional values, which might be difficult.

However, there is no way to reliably prevent Charlie from lying about additional values, and including more identity information into the protocol often brings trade-offs in terms of privacy, flexibility, and user interface. A detailed analysis of these trade-offs is beyond the scope of this document.

4.11. Risks of weak randomness sources

In addition to concerns about the generation of the keys themselves, the security of the PQKEM shared secret relies on the random source available to Alice’s machine at the time of running the PQKEM-ENC operation. This leads to a situation similar to what we face with a Diffie-Hellman exchange. For both Diffie-Hellman and Kyber, if Alice has weak entropy then the resulting shared secret will have low entropy when conditioned on Bob’s public key. Thus both the classical and post-quantum security of SK depend on the strength of Alice’s random source.

Kyber hashes Bob’s public key with Alice’s random bits to generate the shared secret, making Bob’s key contributory, as it is with a Diffie-Hellman key exchange. This does not reduce the dependence on Alice’s entropy source, as described above, but it does limit Alice’s ability to control the post-quantum shared secret. Not all KEMs make Bob’s key contributory and this is a property to consider when selecting pqkem.

5. IPR

This document is hereby placed in the public domain.

6. Acknowledgements

The PQXDH protocol was developed by Ehren Kret and Rolfe Schmidt as an extension of the X3DH protocol [6] by Moxie Marlinspike and Trevor Perrin. Thanks to Trevor Perrin for discussions on the design of this protocol.

Thanks to Bas Westerbaan, Chris Peikert, Daniel Collins, Deirdre Connolly, John Schanck, Jon Millican, Jordan Rose, Karthik Bhargavan, Loïs Huguenin-Dumittan, Peter Schwabe, Rune Fiedler, Shuichi Katsumata, Sofía Celi, and Yo’av Rieck for helpful discussions and editorial feedback.

Thanks to the Kyber team [17] for their work on the Kyber key encapsulation mechanism.

7. References

[1] T. Perrin, “The XEdDSA and VXEdDSA Signature Schemes,” 2016.

[2] “Module-Lattice-Based Key-Encapsulation Mechanism Standard.”

[3] A. Langley, M. Hamburg, and S. Turner, “Elliptic Curves for Security.” Internet Engineering Task Force; RFC 7748 (Informational); IETF, Jan-2016.

[4] H. Krawczyk and P. Eronen, “HMAC-based Extract-and-Expand Key Derivation Function (HKDF).” Internet Engineering Task Force; RFC 5869 (Informational); IETF, May-2010.

[5] P. Rogaway, “Authenticated-Encryption with Associated-Data,” in Proceedings of the 9th ACM Conference on Computer and Communications Security, 2002.

[6] M. Marlinspike and T. Perrin, “The X3DH Key Agreement Protocol,” 2016.

[7] T. Perrin and M. Marlinspike, “The Double Ratchet Algorithm,” 2016.

[8] K. Cohn-Gordon, C. Cremers, B. Dowling, L. Garratt, and D. Stebila, “A Formal Security Analysis of the Signal Messaging Protocol,” J. Cryptol., vol. 33, no. 4, 2020.

[9] T. Okamoto and D. Pointcheval, “The Gap-Problems: A New Class of Problems for the Security of Cryptographic Schemes,” in Proceedings of the 4th International Workshop on Practice and Theory in Public Key Cryptography, 2001.

[10] A. Langlois and D. Stehlé, “Worst-Case to Average-Case Reductions for Module Lattices,” Des. Codes Cryptography, vol. 75, no. 3, Jun. 2015.

[11] N. Unger and I. Goldberg, “Deniable Key Exchanges for Secure Messaging,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015.

[12] J. Brendel, R. Fiedler, F. Günther, C. Janson, and D. Stebila, “Post-Quantum Asynchronous Deniable Key Exchange and the Signal Handshake,” in Public-Key Cryptography – PKC 2022, 2022, vol. 13178.

[13] N. Vatandas, R. Gennaro, B. Ithurburn, and H. Krawczyk, “On the Cryptographic Deniability of the Signal Protocol,” in Applied Cryptography and Network Security – ACNS 2020, 2020, vol. 12147.

[14] D. Hofheinz, K. Hövelmanns, and E. Kiltz, “A Modular Analysis of the Fujisaki-Okamoto Transformation,” in Theory of Cryptography – TCC 2017, 2017, vol. 10677.

[15] K. Hashimoto, S. Katsumata, K. Kwiatkowski, and T. Prest, “An Efficient and Generic Construction for Signal’s Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable,” J. Cryptol., vol. 35, no. 3, 2022.

[16] NIST, “Post-Quantum Cryptography.”

[17] “Kyber Key Encapsulation Mechanism.”

One of the most significant advantages of using WordPress is its unparalleled flexibility and customization options. Whether you’re in the healthcare sector, the food and beverage industry, or running an eCommerce store, WordPress has you covered. With its array of specialized themes and plugins tailored to business needs, you can establish a strong online presence that aligns with your brand and business goals.


Themes offer the first layer of customization. Designed specifically for various business sectors, they provide built-in functionalities like portfolios, customer testimonials, and eCommerce features. You can establish your visual brand identity effortlessly, without writing a single line of code.


Moving on to plugins, they are the true workhorses of WordPress customization. WordPress has tens of thousands of plugins in its directory, and these handy additions can add virtually any functionality you can imagine. Whether you need an appointment booking system, a members-only section, or automated marketing solutions, there’s a plugin for that. Some plugins even allow you to handle multiple currencies, making your website more accommodating for international customers.

By combining the right themes and plugins, WordPress allows you unparalleled control over how your website looks and functions. This isn’t just advantageous for you as the business owner; it also dramatically enhances the user experience. Your customers can interact with a platform that is both visually appealing and highly functional, meeting their needs no matter where they are in the world or what currency they prefer to use.


Security Features

Having a secure website is a non-negotiable for businesses. Luckily, WordPress takes security seriously and offers a multitude of features to help you protect your online assets. For starters, the platform releases regular updates to address known security vulnerabilities, ensuring that you are always running the most secure version possible.

Change reporting is another powerful feature provided by WordPress security plugins, allowing you to monitor real-time changes on your site. Any unauthorized modifications can trigger alerts, enabling you to take quick action. Additionally, many plugins offer malware scanning, which continuously scans your site’s files to detect malicious code and potential threats.

Intrusion prevention mechanisms are also commonly found in WordPress security solutions. These tools can block suspicious IP addresses, limit login attempts, and even implement two-factor authentication to add an extra layer of protection to your site.
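The login-attempt limiting these security plugins perform can be sketched in a few lines. The class name, thresholds, and time window below are illustrative assumptions, not any particular plugin’s API:

```python
import time

class LoginLimiter:
    """Blocks an IP after too many failed logins inside a sliding time window."""

    def __init__(self, max_attempts=5, window_seconds=900):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.failures = {}  # ip -> list of failure timestamps

    def record_failure(self, ip, now=None):
        now = now if now is not None else time.time()
        # Keep only failures still inside the window, then add this one
        attempts = [t for t in self.failures.get(ip, []) if now - t < self.window]
        attempts.append(now)
        self.failures[ip] = attempts

    def is_blocked(self, ip, now=None):
        now = now if now is not None else time.time()
        recent = [t for t in self.failures.get(ip, []) if now - t < self.window]
        return len(recent) >= self.max_attempts
```

A real plugin would persist these counters and pair them with IP blocklists and two-factor prompts, but the core bookkeeping is this simple.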

While no system can guarantee 100% security, WordPress comes close by offering a range of robust features that work together to minimize risks. By taking advantage of these tools, you’re not just protecting your website; you’re safeguarding your business reputation and the trust of your customers. However, it’s crucial to remember that users also bear the responsibility for keeping themes and plugins updated, as outdated software can pose security risks.

SEO Capabilities

Visibility is crucial for any online business, and WordPress shines when it comes to search engine optimization (SEO). The platform is designed with built-in SEO features that allow for custom permalinks, meta descriptions, and image alt text, making it easier for search engines to read and index your site.

Additionally, SEO plugins like Yoast and All in One SEO can further enhance your optimization efforts. These plugins help you target specific keywords and improve content readability. Site speed, an important SEO factor, can be optimized by choosing a quality hosting service like HostDash.

WordPress themes are also generally responsive, adapting to various screen sizes, which is vital for mobile optimization—a significant factor in search rankings. Analytics plugins offer insights into your site’s performance, and local SEO can be easily managed for businesses operating in specific geographic locations.

Whether you want to target a global or local audience, WordPress has the tools and setup to help you achieve your specific SEO goals. By leveraging WordPress’s SEO features, you set the stage for better visibility, increased customer engagement, and, ultimately, business growth.

eCommerce Solutions

In today’s digital age, having an eCommerce capability is often essential for business growth. WordPress makes this transition smooth and simple. Through its seamless integration with WooCommerce and other eCommerce plugins, WordPress allows businesses to set up an online store effortlessly.

WooCommerce Integration

WooCommerce is the go-to eCommerce plugin for WordPress users, enabling a wide range of functionalities, from inventory management to payment gateway integration. The setup is straightforward, allowing even those with minimal technical expertise to launch an online store.

Payment and Currency Flexibility

One of the benefits of using WordPress for eCommerce is the range of payment options available. Whether your customers prefer credit card payments, PayPal, or digital wallets, WordPress has you covered. Some plugins even support transactions in multiple currencies, which is ideal for businesses looking to serve an international clientele.

Shipping Solutions

Shipping is a critical component of any eCommerce operation. WordPress simplifies this aspect as well, with options for calculating real-time shipping costs and even printing shipping labels directly from your dashboard.

Content Management

Managing content effectively is at the heart of any successful online business. WordPress makes this task simple and intuitive. Built originally as a blogging platform, WordPress has advanced content management capabilities that extend far beyond just text-based posts. It supports a wide range of media types, including images, videos, and audio files, allowing you to create a rich, multimedia experience for your visitors.

One of the standout features is the built-in editor, which provides a user-friendly interface for creating and formatting your content. This editor allows for real-time previews so you can see how changes will look before they go live. Beyond the visual aspects, WordPress enables easy content organization through categories, tags, and custom taxonomies. You can also schedule posts in advance, freeing you from having to manually update content and allowing you to focus on other aspects of your business.

Even more appealing is how WordPress content management intersects with other functionalities. You can easily link blog posts to specific products in your online store or incorporate SEO best practices directly into your content using plugins. All these features work in tandem to make your site not just a promotional tool, but a comprehensive platform for customer engagement and business growth.


Scalability

As your business grows, you need a platform that can grow with you, and WordPress excels in this aspect. The platform allows you to scale up or down easily based on your business needs. Whether you’re adding new products, launching a subscription service, or expanding into new markets, WordPress remains stable and functional. Furthermore, it’s essential to choose a hosting plan that can adapt as you grow. The right host will offer various server resources and hosting plans that can be modified to meet your increasing requirements, ensuring that scaling up doesn’t become a bottleneck for your business.

Analytics and Reporting

Data is vital in understanding how your business is performing, and WordPress allows for seamless integration with analytics tools like Google Analytics. With just a few clicks, you can have access to a wealth of information ranging from visitor demographics to behavior patterns. WordPress also offers plugins that can help you monitor key performance indicators (KPIs). By keeping an eye on these metrics, you can gain valuable insights into customer behavior, which in turn can inform your business strategies and help you make data-driven decisions.


Conclusion

In sum, WordPress isn’t just a platform for bloggers; it’s a comprehensive tool for businesses of all sizes. Its open-source nature, scalability, and a vast array of customization options make it a compelling choice for entrepreneurs looking to build an online presence without breaking the bank. With robust security measures, SEO capabilities, and integrated eCommerce solutions, WordPress offers a well-rounded package that can adapt to your evolving business needs.

Whether you’re looking to attract a global audience, keep your site secure, or gain valuable insights through analytics, WordPress provides the tools you need to not just survive, but thrive in the competitive digital landscape.

Top 5 Security Misconfigurations Causing Data Breaches in 2023

Edward Kost
updated May 15, 2023

Security misconfigurations are a common and significant cybersecurity issue that can leave businesses vulnerable to data breaches. According to the latest data breach investigation report by IBM and the Ponemon Institute, the average cost of a breach has reached US$4.35 million. Many data breaches are caused by avoidable errors like security misconfiguration. By following the tips in this article, you could identify and address a security error that could save you millions of dollars in damages.

What is a Security Misconfiguration?

A security misconfiguration occurs when a system, application, or network device’s settings are not correctly configured, leaving it exposed to potential cyber threats. This could be due to default configurations left unchanged, unnecessary features enabled, or permissions set too broadly. Hackers often exploit these misconfigurations to gain unauthorized access to sensitive data, launch malware attacks, or carry out phishing attacks, among other malicious activities.

What Causes Security Misconfigurations?

Security misconfigurations can result from various factors, including human error, lack of awareness, and insufficient security measures. For instance, employees might configure systems without a thorough understanding of security best practices, or security teams might overlook crucial security updates due to the growing complexity of cloud services and infrastructures.

Additionally, the rapid shift to remote work during the pandemic has increased the attack surface for cybercriminals, making it more challenging for security teams to manage and monitor potential vulnerabilities.

List of Common Types of Security Misconfigurations Facilitating Data Breaches

Some common types of security misconfigurations include:

1. Default Settings

With the rise of cloud solutions such as Amazon Web Services (AWS) and Microsoft Azure, companies increasingly rely on these platforms to store and manage their data. However, using cloud services also introduces new security risks, such as the potential for misconfigured settings or unauthorized access.

A prominent example of insecure default software settings that could have facilitated a significant breach is the Microsoft Power Apps data leak incident of 2021. By default, Power Apps portal data feeds were set to be accessible to the public.

Unless developers explicitly set OData feeds to private, virtually anyone could access the backend databases of applications built with Power Apps. UpGuard researchers located the exposure and notified Microsoft, who promptly addressed the leak. UpGuard’s detection helped Microsoft avoid a large-scale breach that could have potentially compromised 38 million records.

2. Unnecessary Features

Enabling features or services not required for a system’s operation can increase its attack surface, making it more vulnerable to threats. Some examples of unnecessary product features include remote administration tools, file-sharing services, and unused network ports. To mitigate data breach risks, organizations should conduct regular reviews of their systems and applications to identify and disable or remove features that are not necessary for their operations.

Additionally, organizations should practice the principle of least functionality, ensuring that systems are deployed with only the minimal set of features and services required for their specific use case.

3. Insecure Permissions

Overly permissive access controls can allow unauthorized users to access sensitive data or perform malicious actions. To address this issue, organizations should implement the principle of least privilege, granting users the minimum level of access necessary to perform their job functions. This can be achieved through proper role-based access control (RBAC) configurations and regular audits of user privileges. Additionally, organizations should ensure that sensitive data is appropriately encrypted both in transit and at rest, further reducing the risk of unauthorized access.
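The principle of least privilege described above reduces, in code, to an explicit allow-list: a role grants nothing it does not name. The roles and permissions in this sketch are hypothetical examples, not any specific IAM product’s schema:

```python
# Role-based access control: each role maps to the minimal set of
# permissions its job function actually requires.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete", "manage_users"},
}

def is_allowed(role, permission):
    """Grant access only if the role explicitly includes the permission.

    Unknown roles get an empty permission set, so access defaults to denied.
    """
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The important design choice is the deny-by-default fallback: a misspelled or unprovisioned role gets no access rather than some implicit baseline.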

4. Outdated Software

Failing to apply security patches and updates can expose systems to known vulnerabilities. To protect against data breaches resulting from outdated software, organizations should have a robust patch management program in place. This includes regularly monitoring for available patches and updates, prioritizing their deployment based on the severity of the vulnerabilities being addressed, and verifying the successful installation of these patches.

Additionally, organizations should consider implementing automated patch management solutions and vulnerability scanning tools to streamline the patching process and minimize the risk of human error.
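The core of the patch-monitoring step is comparing installed versions against the latest releases, component by component. The package names and version numbers below are illustrative, and this naive dotted-numeric comparison is a sketch rather than a full version-spec parser:

```python
def parse_version(v):
    """Turn '2.4.10' into (2, 4, 10) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed, latest):
    """Return the names of packages whose installed version lags the latest."""
    return sorted(
        name for name, version in installed.items()
        if name in latest and parse_version(version) < parse_version(latest[name])
    )

installed = {"openssl": "3.0.8", "nginx": "1.24.0", "postgresql": "15.2"}
latest = {"openssl": "3.0.13", "nginx": "1.24.0", "postgresql": "15.6"}
print(find_outdated(installed, latest))  # → ['openssl', 'postgresql']
```

In practice the `latest` map would come from a vulnerability feed or the vendor’s release metadata, and the output would feed a severity-based prioritization queue.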

5. Insecure API Configurations

APIs that are not adequately secured can allow threat actors to access sensitive information or manipulate systems. API misconfigurations, like the one that led to T-Mobile’s 2023 data breach, are becoming more common. As more companies move their services to the cloud, securing these APIs and preventing the data leaks they facilitate is becoming a bigger challenge.

To mitigate the risks associated with insecure API configurations, organizations should implement strong authentication and authorization mechanisms, such as OAuth 2.0 or API keys, to ensure only authorized clients can access their APIs. Additionally, organizations should conduct regular security assessments and penetration testing to identify and remediate potential vulnerabilities in their API configurations.

Finally, adopting a secure software development lifecycle (SSDLC) and employing API security best practices, such as rate limiting and input validation, can help prevent data breaches stemming from insecure APIs.
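Two of the API hardening practices named above, rate limiting and input validation, fit in a short sketch. The token-bucket parameters and the username pattern are illustrative assumptions, not a prescription:

```python
import re
import time

class TokenBucket:
    """Per-client rate limiter: refills `rate` tokens/second up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        """Consume one token if available; otherwise reject the request."""
        now = now if now is not None else time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Input validation via a strict allow-list pattern: anything not matching
# the pattern is rejected outright rather than sanitized after the fact.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value):
    return bool(USERNAME_RE.match(value))
```

A gateway would keep one bucket per API key or client IP; the allow-list approach to validation generalizes to any field with a known shape.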

How to Avoid Security Misconfigurations Impacting Your Data Breach Resilience

To protect against security misconfigurations, organizations should:

1. Implement a Comprehensive Security Policy

Implement a cybersecurity policy covering all system and application configuration aspects, including guidelines for setting permissions, enabling features, and updating software.

2. Implement a Cyber Threat Awareness Program

An essential security measure that should accompany the remediation of security misconfigurations is employee threat awareness training. Of those who recently suffered cloud security breaches, 55% of respondents identified human error as the primary cause.

With your employees equipped to correctly respond to common cybercrime tactics that preceded data breaches, such as social engineering attacks and social media phishing attacks, your business could avoid a security incident should threat actors find and exploit an overlooked security misconfiguration.

Phishing attacks involve tricking individuals into revealing sensitive information that could be used to compromise an account or facilitate a data breach. During these attacks, threat actors target account login credentials, credit card numbers, and even phone numbers to exploit multi-factor authentication (MFA).

Phishing attacks are becoming increasingly sophisticated, with cybercriminals using automation and other tools to target large numbers of individuals. 

Here’s an example of a phishing campaign where a hacker has built a fake login page to steal a customer’s banking credentials. As you can see, the fake login page looks almost identical to the actual page, and an unsuspecting eye will not notice anything suspicious.

Real Commonwealth Bank Login Page.
Fake Commonwealth Bank Login Page.

Because password reuse is common amongst the general population, phishing campaigns could involve fake login pages for social media websites, such as LinkedIn, popular websites like Amazon, and even SaaS products. Hackers implementing such tactics hope the same credentials are used for logging into banking websites.

Cyber threat awareness training is the best defense against phishing, the most common attack vector leading to data breaches and ransomware attacks.

Because small businesses often lack the resources and expertise of larger companies, they usually don’t have the budget for additional security programs like awareness training. This is why, according to a recent report, 61% of small and medium-sized businesses experienced at least one cyber attack in the past year, and 40% experienced eight or more attacks.

Luckily, with the help of ChatGPT, small businesses can implement an internal threat awareness program at a fraction of the cost. Industries at a heightened risk of suffering a data breach, such as healthcare, should especially prioritize awareness of the cyber threat landscape.

3. Use Multi-Factor Authentication

Use MFA and strong access management controls to limit unauthorized access to sensitive systems and data.

Previously compromised passwords are often used to hack into accounts. MFA adds additional authentication protocols to the login process, making it difficult to compromise an account even if hackers get their hands on a stolen password.
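The most common second factor, a time-based one-time password (TOTP), is a small, well-specified algorithm (RFC 6238): hash a shared secret together with the current 30-second time step and truncate the result to six digits. A stdlib-only sketch:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    # The moving factor is the number of `step`-second intervals since epoch
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code changes every 30 seconds, a stolen password alone is not enough: the attacker would also need the shared secret held by the user’s authenticator app.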

4. Use Strong Access Management Controls

Identity and Access Management (IAM) systems ensure users only have access to the data and applications they need to do their jobs and that permissions are revoked when an employee leaves the company or changes roles.

The 2023 Thales Data Threat Report found that 28% of respondents found IAM to be the most effective data security control for preventing personal data compromise.

5. Keep All Software Patched and Updated

Keep all environments up-to-date by promptly applying patches and updates. Consider patching a “golden image” and deploying it across your environment. Perform regular scans and audits to identify potential security misconfigurations and missing patches.

An attack surface monitoring solution, such as UpGuard, can detect vulnerable software versions that have been impacted by zero-days and other known security flaws.

6. Deploy Security Tools

Deploy security tools, such as intrusion detection and prevention systems (IDPS) and security information and event management (SIEM) solutions, to monitor and respond to potential threats.

It’s also essential to implement tools to defend against tactics often used to complement data breach attempts. One example is DDoS attacks, a type of attack where a server is flooded with fake traffic to force it offline, allowing hackers to exploit security misconfigurations during the chaos of excessive downtime.

Another important security tool is a data leak detection solution for discovering compromised account credentials published on the dark web. These credentials, if exploited, allow hackers to compress the data breach lifecycle, making these events harder to detect and intercept.

Data leaks compressing the data breach lifecycle.

7. Implement a Zero-Trust Architecture

One of the main ways that companies can protect themselves from cloud-related security threats is by implementing a Zero Trust security architecture. This approach assumes all requests for access to resources are potentially malicious and, therefore, require additional verification before granting access.

A Zero-Trust approach to security assumes that all users, devices, and networks are untrustworthy until proven otherwise.

8. Develop a Repeatable Hardening Process

Establish a process that can be easily replicated to ensure consistent, secure configurations across production, development, and QA environments. Use different passwords for each environment and automate the process for efficient deployment. Be sure to address IoT devices in the hardening process: these devices tend to ship with default factory passwords, making them easy targets for botnets that launch DDoS attacks.

9. Implement a Secure Application Architecture

Design your application architecture to obfuscate general access to sensitive resources using the principle of network segmentation.

Cloud infrastructure has become a significant cybersecurity issue in the last decade. Barely a month goes by without a major security breach at a cloud service provider or a large corporation using cloud services.

10. Maintain a Structured Development Cycle

Facilitate security testing during development by adhering to a well-organized development process. Following cybersecurity best practices this early in the development process sets the foundation for a resilient security posture that will protect your data even as your company scales.

Implement a secure software development lifecycle (SSDLC) that incorporates security checkpoints at each stage of development, including requirements gathering, design, implementation, testing, and deployment. Additionally, train your development team in secure coding practices and encourage a culture of security awareness to help identify and remediate potential vulnerabilities before they make their way into production environments.

11. Review Custom Code

If using custom code, employ a static code security scanner before integrating it into the production environment. These scanners can automatically analyze code for potential vulnerabilities and compliance issues, reducing the risk of security misconfigurations.

Additionally, have security professionals conduct manual reviews and dynamic testing to identify issues that may not be detected by automated tools. This combination of automated and manual testing ensures that custom code is thoroughly vetted for security risks before deployment.

12. Utilize a Minimal Platform

Remove unused features, insecure frameworks, and unnecessary documentation, samples, or components from your platform. Adopt a “lean” approach to your software stack by only including components that are essential for your application’s functionality.

This reduces the attack surface and minimizes the chances of security misconfigurations. Furthermore, keep an inventory of all components and their associated security risks to better manage and mitigate potential vulnerabilities.

13. Review Cloud Storage Permissions

Regularly examine permissions for cloud storage, such as S3 buckets, and incorporate security configuration updates and reviews into your patch management process. This process should be a standard inclusion across all cloud security measures. Ensure that access controls are properly configured to follow the principle of least privilege, and encrypt sensitive data both in transit and at rest.

Implement monitoring and alerting mechanisms to detect unauthorized access or changes to your cloud storage configurations. By regularly reviewing and updating your cloud storage permissions, you can proactively identify and address potential security misconfigurations, thereby enhancing your organization’s data breach resilience.
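A recurring ACL review can be partly automated by flagging any grant that exposes a bucket to the general public. The structure below mirrors the shape of an S3 ACL response, but the grants shown are hypothetical audit data, not output from a real account:

```python
# URI that S3 uses to represent "everyone" in an ACL grant
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl):
    """Return the permissions an ACL grants to the general public."""
    return [
        grant["Permission"]
        for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") == ALL_USERS
    ]

bucket_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"},
    ]
}
print(public_grants(bucket_acl))  # → ['READ']
```

In a real pipeline the ACLs would be fetched per bucket via the cloud provider’s API, and any non-empty result would raise an alert for review under the least-privilege policy.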

How UpGuard Can Help

UpGuard’s IP monitoring feature monitors all IP addresses associated with your attack surface for security issues, misconfigurations, and vulnerabilities. UpGuard’s attack surface monitoring solution can also identify common misconfigurations and security issues shared across your organization and its subsidiaries, including the exposure of WordPress usernames, vulnerable server versions, and a range of attack vectors facilitating first- and third-party data breaches.

UpGuard’s Risk Profile feature displays security vulnerabilities associated with end-of-life software.

To further expand its mitigation of data breach threat categories, UpGuard offers a data leak detection solution that scans ransomware blogs on the dark web for compromised credentials and any leaked data that could help hackers breach your network and sensitive resources.

UpGuard’s ransomware blog detection feature.

Cybersecurity and Social Responsibility: Ethical Considerations

Kyle Chin
updated Aug 21, 2023

Cybersecurity is necessary to protect data from criminals, but the world of cybersecurity is not so simple. A discussion of cybersecurity ethics therefore needs to examine the morality of businesses collecting, processing, using, and storing data.

How cybersecurity professionals affect security measures is also worth exploring. Businesses and individuals should ask themselves whether the ends justify the means and to what extent they are willing to sacrifice data privacy for data protection.

This post underlines the ethical concerns and cybersecurity issues surrounding information security policies, procedures, systems, and teams and how they ought to contribute to the well-being of consumers.

What Are Ethics in Cybersecurity?

Ethics can be described as ideals and values that determine how people live and, increasingly, how businesses and their employees work.

While it is far from the technical specifications of networks and device configurations, it is an increasingly important part of business operations. It can be codified and included in an organization’s framework, determining acceptable behavior throughout the company in any scenario.

One of the main benefits of a strong ethical foundation for a business is that it will have a moral compass to help make ethical decisions in a rapidly changing business environment. The world is experiencing massive changes in information technology with advancements in artificial intelligence, machine learning algorithms, 5G, and data collection and processing.

The cyber threat landscape is also rapidly evolving, and businesses must make critical decisions about protecting themselves and their clients. With cybercrime on the rise and emerging threats driven by new technology such as AI, businesses need to elevate their cybersecurity. Doing so without sacrificing the customers or clients they set out to protect requires a strong ethical foundation and a written code of conduct.

The ACM Code of Ethics and Professional Conduct

In 1992, the Association for Computing Machinery (ACM) developed its Code of Ethics and Professional Conduct for computer systems workers. While it is not mandated, except for members of the ACM, it can be a useful starting point for Chief Information Security Officers (CISOs) and other stakeholders to think about and take a stance on ethical practices when tackling sensitive cybersecurity issues.

The Code of Ethics was revisited and revised in 2018. While the Code will likely see further updates in the face of 5G, AI, and other advances in computing, it remains a valuable resource for anyone seeking to define ethical standards concerning computer systems and technology.

Having a clear set of ethical principles is helpful because it can clarify and speed up important decision-making in an increasingly complex, rapidly evolving cyber threat landscape.

The ACM Code of Ethics is divided into four categories:

  • General Ethical Principles
  • Professional Responsibilities
  • Professional Leadership Principles
  • Compliance with the Code

General Ethical Principles

The General Ethical Principles section makes the following assertions about the role of computing professionals. Computing professionals should:

  1. Use their skills to benefit society and people’s well-being, and note that everyone is a stakeholder in computing.
  2. Avoid negative and unjust consequences, noting that well-intended actions can result in harm that they should then mitigate.
  3. Fully disclose all pertinent computing issues and not misrepresent data while being transparent about their capabilities to perform necessary tasks.
  4. Demonstrate respect and tolerance for all people.
  5. Credit the creators of the resources they use.
  6. Respect privacy, using best cybersecurity practices, including data limitation.
  7. Honor confidentiality, including trade secrets, business strategies, and client data.

Professional Responsibilities

The Professional Responsibilities section also says that computing professionals must prioritize high-quality services, maintain competence and ethical practice, promote computing awareness, and perform their duties within authorized boundaries.

  1. Strive to achieve high quality in both the processes and products of professional work.
  2. Maintain high standards of professional competence, conduct, and ethical practice.
  3. Know and respect existing rules pertaining to professional work.
  4. Accept and provide appropriate professional review.
  5. Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.
  6. Perform work only in areas of competence.
  7. Foster public awareness and understanding of computing, related technologies, and their consequences.
  8. Access computing and communication resources only when authorized or when compelled by the public good.
  9. Design and implement systems that are robustly and usably secure.

Professional Leadership Principles

Professional Leadership pertains to any position within an organization that carries influence or managerial responsibility over other members; anyone in such a position has increased responsibilities to uphold the values set by the organization.

  1. Ensure that the public good is the central concern during all professional computing work.
  2. Articulate, encourage acceptance of, and evaluate fulfillment of social responsibilities by the organization or group members.
  3. Manage personnel and resources to enhance the quality of working life.
  4. Articulate, apply, and support policies and processes that reflect the principles of the Code.
  5. Create opportunities for members of the organization or group to grow as professionals.
  6. Use care when modifying or retiring systems.
  7. Recognize and take special care of systems that become integrated into the infrastructure of society.

Compliance with the Code

Compliance with the Code of Ethics is what ensures cybersecurity professionals uphold these ethical standards. Without enforcement of the Code of Ethics or similar ethical commitments, it is impossible to document and recognize adherence to ethics and social responsibility.

  1. Uphold, promote, and respect the principles of the Code.
  2. Treat violations of the Code as inconsistent with membership in the ACM.

Corporate Social Responsibility and Cybersecurity

To compete with other businesses and deliver the user experiences that consumers expect, modern businesses are obligated to collect and process increasing amounts of data. This particular genie is already out of the bottle, so the question is not really whether big data should exist but how businesses use and protect it.

Cybersecurity helps prevent and mitigate data breaches and attacks that threaten information security, so it is crucial for public safety and well-being, as well as helping to ensure the longevity of businesses. There is so much at stake that cybersecurity professionals should be willing to come under scrutiny by those in and outside the field.

Cyber ethics encapsulates common courtesy, trust, and legal considerations. Acting ethically should protect individuals, organizations, and the wider economy. So it’s vital for cyber professionals and the organizations that employ them. The following considerations will explore what makes effective cybersecurity and explain how poor cybersecurity is not only ineffective but also potentially unethical.

Information Security

Businesses have a moral obligation to protect their customers and business partners. They benefit from data that allows them to operate and can give them a competitive advantage, but they need to protect that information from hackers and accidental leaks.

Unfortunately, businesses that are hacked are often at fault. While nobody deserves to be hacked, a business’s moral obligations to consumers are such that they are expected to have adequate cybersecurity for their computer systems and respond promptly and decisively in the event of a cyber incident.

Equifax’s 2017 cyber attack is a prime example of a business that damaged its reputation due to inadequate cybersecurity and poor response to attacks. It was hacked around May 2017 but did not disclose the breach until September.

While Equifax’s president for Europe said that protecting consumer and client data was always its top priority, the company failed to patch a software security vulnerability it had known about since March and failed to notify affected customers so they could take steps to protect themselves from phishing, identity theft, and other kinds of fraud.

Equifax’s human and technological failures compromised 14.5 million sensitive data records, including addresses, birth dates, driver’s licenses, and social security numbers. The breach also put the firm’s morality into question: it processes sensitive information and purports to help customers with their financial security, yet its ineffective cybersecurity procedures put those very people at risk.


Ethically, businesses should be prepared to disclose the risks inherent to the business if they could substantially affect people, whether customers, business partners, or their supply chain.

Data breach reporting is a significant part of a business’s transparency. While reporting a breach highlights a business in crisis, failing to report promptly can lead to a more significant loss of trust, criticism from industry professionals, and sometimes, as in Equifax’s case, action from investigators.

Even if a business operates in an unregulated industry or a cyber attack does not cause business disruption or affect clients, reporting all data breaches is a worthwhile ethical consideration. The more businesses report cyber attacks, the more information there is for cybersecurity experts and industry professionals to share and learn from. This protects other businesses and their clients from emerging threats.

While applicable regulations may not require revealing a vulnerability or data breach, there is a moral question as to whether this information should be shared regardless. Being transparent about discovering vulnerabilities can help all businesses protect their information systems and clients.

Cyber incidents are varied, and cybercriminals are continually researching new methods to apply and vulnerabilities to exploit. So how businesses respond to threats and potential threats needs to change on a case-by-case basis. However, they can base their decision-making on an explicit, underlying ethical framework that guides the business according to its values and corporate social responsibility.

While some businesses reject revealing data breaches “unnecessarily” for fear of losing trust or business, disclosing data breaches late can cause more damage and even harsh penalties. Handling a crisis professionally and ethically can even be good for a firm’s reputation, as in the case of Norsk Hydro’s handling of the fallout from its 2019 ransomware attack, which impressed industry professionals and cybersecurity experts.

Organizations and their cybersecurity teams can reap rewards from being proactive and enacting policies and procedures according to a defined, documented code of ethics.

Security vs. Privacy Protection

A prime ethical dilemma in cybersecurity concerns cybersecurity experts’ privileged access to sensitive information. In effect, they must understand how cybercriminals operate and, in theory, be able to perform the same feats without crossing the line into the territory of black hat hackers.

Cybersecurity professionals set access privileges, monitor network activity, and can read people’s emails. They can scan machines, giving them the power both to protect and to compromise people’s personal lives.

Collecting data leads to ethical questions but so does protecting it. Ethically, everyone deserves dignity, which is tied in with privacy. But how do businesses achieve privacy when they collect customer data, and that data must be protected?

Social engineering and identity theft are among the biggest cyber risks to the public, partly because they can affect people beyond those whose data is stored. With stolen data, a cybercriminal can launch phishing attacks against the victim and their associates.

Keeping personally identifiable information (PII) secure, therefore, is paramount. However, that requires personnel to access and in some ways manipulate that data. Anyone working in cybersecurity is walking a tightrope of ethical issues every day. It’s helpful to acknowledge this so that grey areas can be defined and clients are reassured.


Excellent cybersecurity is not just about technical standards. Cybersecurity professionals need to demonstrate their moral standards when handling sensitive data. During daily duties, cybersecurity professionals will have access to confidential data and files. This could include sensitive data such as payroll details, private emails, and medical records.

Intellectual property theft is one of the most costly cybercrimes: stealing a business’s product designs and concepts can give competitors an unfair advantage while saving them the massive cost and time investment of product development. Nation-states may sponsor cyber espionage to achieve this advantage, risking the destabilization of the affected nation’s market and economy. In a critical infrastructure industry, such as defense or healthcare, intellectual property theft can pose a serious risk to human life.

It almost goes without saying that cybersecurity staff shouldn’t say anything to the public about the confidential data and intellectual property they see, nor should they store or transmit it in any way that is not aligned with the business’s goals to protect data. “Almost” because ethical debates often involve bringing things out of the shadows and into the light.

An implicit understanding may not be enough to ensure the confidentiality of sensitive data. It’s better to have documented policies and procedures regarding confidentiality and the organization’s attitude to how cybersecurity interacts with personal data.

On April 13, 2023, federal investigators arrested Jack Teixeira, an air national guardsman, in connection with the unauthorized transmission of classified US intelligence documents. Teixeira’s role in the Massachusetts Air National Guard was as a Cyber Transport Systems Journeyman responsible for maintaining communication networks.

While some claim that he acted as a whistleblower, he shared the documents in a small private group on a social media platform and does not appear to have intended to share them with a wider audience.

Nonetheless, this massive data security breach calls into question cybersecurity professionals’ commitment to upholding the law when faced with tempting confidential information. Cybersecurity teams must be continuously committed and engaged to perform their duties honorably, within the law, and according to the expectations of their employers.

Although the Association for Computing Machinery (ACM) developed a Code of Ethics and Professional Conduct for computing professionals, ethics in cybersecurity is not regulated, and ethics can’t be ensured by law enforcement.

Having said that, unethical behavior can lead to fines, loss of revenue, and loss of customers, so businesses and cybersecurity professionals will benefit from addressing ethics seriously.

While there’s no handy accreditation that cybersecurity staff can achieve to attest to their honesty, hiring organizations should look at a cybersecurity firm’s history and culture for evidence of its ethical stance on cybersecurity.


Cybersecurity professionals cannot afford a lapse of concentration or a couple of days where they’re off their game and let things slide. Responsibility for others’ information security is a massive contractual and ethical responsibility, and in the event of a cyber incident, scrutiny will fall on the assigned cybersecurity team or professional almost regardless of what any individual did.

Cybersecurity professionals must maintain their competence level, respect sensitive information privacy, and uphold the well-being of those they serve. It requires honesty for these team members to evaluate their skills, abilities, and alertness and ensure that they take the appropriate action to stay on top of their game.

Ethical Hacking

Ethical hacking refers to sanctioned hacking by businesses of their own systems to discover vulnerabilities and security gaps. Ethical hackers attempt to find and exploit vulnerabilities and break into information systems so those issues can be fixed before cybercriminals find them.

Now imagine an analogous “ethical break-in,” in which an ethical burglar breaks into people’s homes and then advises them on which locks they should have used and where to hide their laptops. Ethical hackers use otherwise illegal means to achieve positive results.

To protect data from hackers, particularly as they use increasingly sophisticated methods and rapidly advancing technologies, cybersecurity professionals must master the same techniques. They need to know how black hat hackers commit crimes such as stealing credit card data; what stops them from doing the same is their commitment to ethical principles.

Cyber professionals must be aware of computer ethics since what they do gives them access to privileged information. This is especially true for professionals working in critical infrastructure, including defense, healthcare, finance, and manufacturing, where the consequences of unethical actions regarding sensitive data could cause serious harm to individuals, organizations, and the economy.

Cybersecurity professionals and businesses that need them must understand cyber ethics and insist that a moral code is always evident in their attitude and behavior.


Before the dark web became known as a haven for hackers and cybercriminals to extort money, purchase malware, and prepare to commit multiple kinds of cybercrime, it existed in large part to protect whistleblowers.

Whistleblowing refers to someone, typically an employee, reporting their organization’s wrongdoing. A whistleblower’s objection might be that the organization or someone in it is acting illegally, fraudulently, immorally, or without proper regard for safety or human rights. Furthermore, the issue should be in the public interest.

Public sector whistleblowers are protected by the First Amendment. Even so, whistleblowing might be considered a grey area when considering cyber ethics.

If a cybersecurity expert reveals confidential information to stop a harmful practice, the objective is good, but how they achieved this breaks the ethical confidentiality essential to that employee-employer relationship.

Edward Snowden famously blew the whistle on the National Security Agency’s unethical, invasive surveillance of innocent US citizens. While the former computer intelligence consultant and CIA systems administrator is a hero to many, his actions were criminal. The US Department of Justice charged him with stealing government property and violating the Espionage Act of 1917.

Jesselyn Radack, from the Government Accountability Project, argued that Snowden’s contract with the Government was less important than the social contract of a democracy.

Security vs. Functionality

While organizations have a responsibility to society to protect data, they need to balance this requirement with maintaining functionality. A technically workable cybersecurity solution is not necessarily the best if it prevents the organization from operating. This is a moral debate because organizations won’t always use the most secure cybersecurity practices or systems. Operating a modern business means navigating such trade-offs daily.

Cybersecurity experts have a responsibility to balance securing information and keeping organizations running. Some businesses need to be able to work quickly, such as in healthcare where the most robust security system could slow daily operations and risk human life. A holistic approach to information security is required based on thorough risk management.


Exploring the ePrivacy Directive

Leah Sadoian
updated Sep 15, 2023

There are a variety of cybersecurity regulations in Europe, including the ePrivacy Directive, which focuses on enhancing data protection, the processing of personal data, and privacy in the digital age. This Directive, which the proposed ePrivacy Regulation is set to replace, continues the European Union’s ongoing efforts to create cohesive and comprehensive data protection and cybersecurity standards across all member states.


What is the ePrivacy Directive?

The Privacy and Electronic Communications Directive 2002/58/EC, or the ePrivacy Directive, is a European Union cybersecurity directive on data protection and privacy protection. The current ePrivacy Directive addresses the growing landscape of new digital technologies and electronic communications services. The Directive aims to harmonize national protection of fundamental rights within the EU, including privacy, confidentiality, and free data movement.

The ePrivacy Directive was enacted in 2002. It required each EU Member State to pass its own national data protection and privacy laws regulating essential issues like consent, spam marketing, cookies, and confidentiality.

Key Components of the ePrivacy Directive

Since the ePrivacy Directive focuses on the protection of online privacy in the electronic communications sector, the Directive’s key components include standards around how people communicate with each other electronically, aligning them with recent technological advancements.

Cookies and Consent Mechanisms

A significant component of the ePrivacy Directive concerns cookies, the small data files websites use to track user behavior. Specifically, the Directive states that websites must obtain informed user consent before storing or retrieving any information on users’ electronic devices, giving the ePrivacy Directive the nickname “cookie law.”

Gaining this consent includes providing end-users with information about the purpose of the data storage and an opportunity to accept or opt-out. Many websites utilize a cookie banner to obtain cookie consent for website visitors. However, cookies essential for site functionality or for delivering a service requested by a user (like tracking the items in an online shopping cart) are exempt from this requirement. Note that the Directive applies to both first-party and third-party cookies.
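The consent rule described above can be sketched as a toy server-side check. This is a hypothetical illustration (the cookie names, the `cookies_allowed_to_set` helper, and the two-category split are all assumptions), not a compliance implementation:

```python
# Hypothetical sketch of the Directive's consent rule: non-essential cookies
# may only be set after informed opt-in; cookies strictly necessary for a
# requested service (e.g. a shopping cart) are exempt.

ESSENTIAL = {"session_id", "shopping_cart"}      # exempt: needed for a requested service
NON_ESSENTIAL = {"analytics_id", "ad_tracker"}   # tracking cookies need prior opt-in

def cookies_allowed_to_set(requested, user_opted_in):
    """Return the subset of requested cookies that may be stored."""
    allowed = requested & ESSENTIAL              # always permitted, no banner needed
    if user_opted_in:                            # informed consent given via a banner
        allowed |= requested & NON_ESSENTIAL
    return allowed

# Before consent, only the functional cookie survives:
print(cookies_allowed_to_set({"session_id", "ad_tracker"}, user_opted_in=False))
# -> {'session_id'}
```

Note that a real consent mechanism also has to cover first-party versus third-party cookies and record the consent itself; the sketch only shows the gating decision.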

Protection of Personal Data in Communications

Concerning data protection, the Directive states that providers of electronic communication services must ensure that their services are secure—which in turn secures any personal data that may be shared through those services. Standard electronic communication services include email and instant messaging.

These providers must also inform their users whenever a risk, such as a data breach or ransomware attack, leaves their personal data vulnerable to misuse.

Data Retention

Data retention refers to how, and for how long, companies keep user data, and the ePrivacy Directive includes standards for this practice.

Specifically, the Directive states that when providers of services no longer need your data, they must erase or anonymize it. There are specific situations in which data retention is allowed, such as billing services or issues of national security.

Otherwise, data may only be retained if a user consents to it, and they must also be informed why the data is being processed and the length of time it will be stored.
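The erase-or-anonymize rule above can be illustrated with a minimal sketch. The 90-day window, the record fields, and the `apply_retention` helper are all assumptions chosen for illustration:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # assumed billing-retention window, purely illustrative

def apply_retention(record, now):
    """Erase identifying fields once a record is no longer needed.

    `record` is a hypothetical dict with `collected_at`, `email`, and
    `usage_minutes` keys; a real system would cover far more cases.
    """
    if now - record["collected_at"] <= RETENTION:
        return record                              # still needed, e.g. for billing
    if record.get("consented_to_retention"):
        return record                              # user agreed to longer storage
    # Anonymize: drop directly identifying fields, keep aggregate-safe ones.
    return {"collected_at": record["collected_at"],
            "usage_minutes": record["usage_minutes"]}
```

The design choice mirrors the Directive's structure: retention is the exception (billing, consent), and the default path strips personal data.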

Unsolicited Marketing Communications

The ePrivacy Directive includes strict restrictions on the use of digital marketing communications. Unsolicited communications for direct marketing purposes are not allowed without the recipient’s consent. This includes email and text message marketing.

Typically, this is done through opt-in or opt-out systems determined by individual EU member states. However, the overall rule is that marketing communications cannot be sent without explicit consent from the user.

Location Data

The ePrivacy Directive sets instructions for using location data obtained through electronic communications. Specifically, location data must be processed with informed consent and should be anonymized when no longer needed.

This provision is especially relevant for mobile service providers and location-based services. As with the marketing communications provision, an opt-in or opt-out mechanism allows users to provide explicit consent before their location data is processed.

Communications Confidentiality

Companies that provide electronic communication services must implement appropriate security measures to safeguard users’ data. They must also notify users and relevant authorities in case of any security breaches involving personal data. Additionally, the Directive governs how traffic data, which includes information about communication between individuals, can be processed and stored.

Even though the primary goal of the ePrivacy Directive is to protect confidentiality, it does allow for the retention of metadata for billing, service quality, and other purposes. Member states may require data retention under specific conditions, often related to national security or criminal investigations.

Member State Laws

The ePrivacy Directive requires every EU Member State to establish national laws to accomplish the Directive’s goals. This leads to some variation in the rules across countries, unlike the GDPR, which is a regulation and applies directly throughout the EU.

How the ePrivacy Directive Affects the GDPR

The General Data Protection Regulation (GDPR) is a mandatory regulation in Europe that protects the personal data of its citizens. Since the GDPR and the ePrivacy directive both concern data privacy, they work in tandem across various components.

  • Scope: The ePrivacy Directive focuses explicitly on the electronic communications sector, and the GDPR extends data privacy laws to other industries that process personal data.
  • Consent: Both the ePrivacy Directive and the GDPR focus on user consent, but the GDPR also outlines principles of lawful processing, including contractual necessity, legitimate interests, and legal obligation.
  • Confidentiality vs. Data Protection: The ePrivacy Directive is primarily concerned with the privacy and security of electronic communications, and the GDPR includes broader concepts of data protection like data minimization, accountability, and individuals’ rights to access, rectify, and erase personal data.
  • Security Measures: The ePrivacy Directive requires providers of electronic communication services to implement security measures to protect user information. At the same time, the GDPR mandates robust security measures and includes the concept of “data protection by design and default.”
  • Data Breach Notifications: Both require notification of data breaches to users and regulatory authorities. The ePrivacy Directive only requires communication service providers to provide notification, but the GDPR extends that requirement to all data controllers and processors.

Who Must Comply with the ePrivacy Directive?

The ePrivacy Directive applies to entities providing electronic communication services in the EU, including but not limited to:

  • Telecommunication Companies: Traditional telecom providers offering fixed or mobile telephony services.
  • Internet Service Providers (ISPs): Entities providing internet connectivity services.
  • Over-the-top (OTT) Providers: Companies that offer online communication services, such as instant messaging apps and VoIP services like Skype or WhatsApp.
  • Website Owners: Any website that uses cookies or similar technologies to track user behavior must comply with the Directive.
  • Email and SMS Marketers: Businesses that send marketing messages via email or SMS must adhere to the rules set by the Directive.
  • Location-Based Services: Services that use location data also fall under the Directive’s jurisdiction.

Penalties for Noncompliance

Penalties for failing to comply with the ePrivacy Directive may differ across EU Member States, as each country is responsible for incorporating the Directive into national law. As a result, penalties can vary from monetary fines to legal actions, and the severity of the consequences will depend on the nature of the breach and the location of the incident. Below are some typical types of penalties that may be enforced:

  • Financial Fines: These can vary widely from state to state but are generally designed to be dissuasive. Some countries have a cap on fines, while others may calculate them as a percentage of the annual turnover of the offending company.
  • Legal Sanctions: In some instances, severe or repeat violations may result in legal action, including the possibility of criminal charges.
  • Reputational Damage: Beyond legal penalties, companies that violate ePrivacy laws often suffer significant reputational damage, which can result in loss of customer trust and revenue.
  • Cease and Desist Orders: Regulatory bodies may require the violating entity to stop the offending action immediately, often at the cost of temporarily or permanently turning off a service or feature.
  • Data Audits: In some cases, the regulatory bodies may require a thorough audit of data protection practices within the offending organization.
  • Notification Requirements: Failing to notify the authorities and individuals affected by a data breach, as stipulated by the Directive, can lead to additional penalties.

In 2022, Google and Meta were both found to be in violation of the ePrivacy Directive and faced steep fines for their non-compliance. France’s Commission Nationale Informatique & Libertés (CNIL) fined Google €150M and Meta (Facebook) €60M for not offering users an option to reject non-essential cookies as easily as the option to accept all tracking. This violates the ePrivacy Directive’s requirements around cookies and consent mechanisms.

The Future: Introducing the ePrivacy Regulation

Since 2002, the digital communications industry has evolved rapidly, which means the ePrivacy Directive needed drastic updating. In 2017, the European Commission proposed the ePrivacy Regulation, which aims to replace the existing ePrivacy Directive and better align EU privacy rules with the General Data Protection Regulation (GDPR).

The regulation is still under discussion amongst the EU Council because of the scope of the rules and the impact it would have on big tech companies, large telecom providers, and even areas of online advertising, media, and national security.

This new legislation is a regulation of the European Parliament and Council of the European Union. It specifies and complements the ePrivacy Directive on privacy-related topics such as the confidentiality of communications, consumer privacy controls through electronic consent and browsers, and cookies.

Key Differences

  • Legal Form and Scope: Under a directive, member states must achieve specific goals but may decide how to do so, which can lead to differences in implementation across countries. The ePrivacy Regulation is a directly applicable law enforceable across the European Union, creating greater consistency.
  • Cookies and Trackers: The ePrivacy Regulation expands on the requirement for user consent before utilizing cookies and tracking technologies but simplifies the rules around this requirement. This can include allowing users to consent through browser extensions and specific exceptions for cookies that improve user experience.
  • Consent: The ePrivacy Regulation aligns the ePrivacy Directive’s requirements for user consent with the GDPR’s more stringent standards. This also simplifies consent mechanisms.
  • Electronic Marketing: The ePrivacy Regulation extends the ePrivacy Directive’s restriction on unsolicited communications for marketing purposes to cover new marketing methods and forms of electronic communication, like marketing through social media platforms.
  • Data Protection and Security: The ePrivacy Directive requires service providers to utilize security measures and report data breaches. The ePrivacy Regulation aligns those requirements with the GDPR’s broader data protection framework, which has stricter data breach notification timelines.
  • Penalties: Instead of allowing individual member states to determine penalties for noncompliance, the ePrivacy Regulation adopts a penalty framework similar to the GDPR, with fines based on a company’s global turnover, up to 4% or up to €20 million, whichever is higher. It also gives more power to Data Protection Authorities, aligning it with the GDPR.
  • International Impact: The ePrivacy Regulation’s alignment with the GDPR means data protection standards no longer focus primarily on EU member states but affect any company that offers services to or transfers data about EU residents, even if it is not located within the EU.
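The penalty cap in the list above is simple arithmetic. A sketch, where `max_fine_eur` is a hypothetical helper name and the figures come from the GDPR-style framework described above:

```python
def max_fine_eur(global_turnover_eur):
    """Maximum fine under the GDPR-style cap the ePrivacy Regulation adopts:
    4% of global annual turnover or EUR 20 million, whichever is higher."""
    return max(0.04 * global_turnover_eur, 20_000_000)

# A company with EUR 1B turnover: 4% (EUR 40M) exceeds the EUR 20M floor.
print(max_fine_eur(1_000_000_000))
# A smaller company with EUR 100M turnover: 4% is only EUR 4M, so the floor applies.
print(max_fine_eur(100_000_000))
```

In other words, the €20M figure acts as a floor on the cap, so small companies cannot escape meaningful exposure simply by having low turnover.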

UpGuard Helps Your Organization Stay Compliant with Privacy Regulations

Enhance your organization’s data privacy standards with UpGuard. Whether you’re looking to stay compliant with the EU’s ePrivacy Regulation or the CCPA in the states, our all-in-one attack surface management platform, BreachSight, helps you understand the risks impacting your external security posture and know that your assets are constantly monitored and protected.

UpGuard BreachSight features include:

  • Security Ratings: Use our security ratings for a data-driven, objective, and dynamic measurement of your organization’s security posture. Our security ratings are generated by analyzing trusted commercial, open-source, and proprietary threat intelligence feeds and non-intrusive data collection methods.
  • Continuous Security Monitoring: Get real-time information about misconfigurations, understand your risk profile, and get started in minutes, not weeks, with our fully integrated solution and API. Because we use externally verifiable information, you won’t have to lift a finger to get started.
  • Attack Surface Reduction: Reduce your attack surface by discovering exploitable vulnerabilities and permutations of your domains at risk of typosquatting.
  • Data Protection: UpGuard’s proprietary Data Leak Search Engine scans every corner of the Internet and identifies data that presents a risk. It monitors your Internet presence, including websites, cloud storage buckets, and source code repos where exposed data can surface.
  • Workflows and Waivers: Simplify and accelerate how you remediate issues, waive risks, and respond to security queries. Use our real-time data to get information about risks, rely on our workflows to track progress, and know precisely when issues are fixed.
  • Security Profile: Eliminate security questionnaires and stop answering the same questions repeatedly. Create an UpGuard security profile and share it before being asked.
  • Reporting and Insights: The Reports Library makes accessing tailor-made reports for different stakeholders in one centralized location easier and faster. See all risks–across various domains, IPs, and categories–in the UpGuard platform or extract the data directly from the API.
  • Business Operation Management: Share access to your UpGuard account with other team members with confidence. Each user gets an individual account with fine-grained access control.
  • Third-Party Integrations: Integrate and extend the UpGuard platform with other tools with our easy-to-use API that can save hours of human time.


What is ISO 31000? An Effective Risk Management Strategy

Edward Kost
updated Sep 14, 2023

ISO 31000 was specifically developed to help organizations effectively cope with unexpected events while managing risks. Besides mitigating operational risks, ISO 31000 supports increased resilience across all risk management categories, including the most complicated group to manage effectively – digital threats.

Whether you’re considering implementing ISO 31000 or you’re not very familiar with this framework, this post provides a comprehensive overview of the standard.


What is ISO 31000?

ISO 31000 is an international standard outlining a structure that supports effective risk management strategies. The standard is divided into three sections:

  1. Principles
  2. Framework
  3. Process


The objective of the principles of ISO 31000 is to simultaneously increase the value-creation and protection aspects of a management system.

The 11 principles of ISO 31000 are as follows:

  • Risk management creates and protects value – Risk management should support objective achievement and performance improvements across various sectors, including human health and safety, cybersecurity, regulatory compliance, environmental protection, governance, and reputation.
  • Risk management is an integral part of all organizational processes – Risk management shouldn’t be separated from the main body of a management system. It should be integrated into an organization’s processes to create a risk-aware culture. Management teams should champion this cultural change.
  • Risk management is systematic, structured, and timely – Risk management should cover the complete scope of systemic risk. It shouldn’t be focused on a single business component prone to risks, like the sales cycle.
  • Risk management is tailored – A risk management program should be tailored to your objectives within the context of internal and external risk profiles.
  • Risk management is transparent and inclusive – All appropriate stakeholders and decision-makers should be involved in ensuring risk management remains relevant and updated.
  • Risk management is dynamic, iterative, and responsive to change – A risk management program shouldn’t be based on a rigid template. It should be dynamic, capable of conforming to changing internal and external threat landscapes.
  • Risk management is based on the best available information – Risk management processes shouldn’t be limited to historical data, stakeholders’ feedback, forecasts, and expert judgments. It’s essential to consider the limitation of data sources and the likely possibility of divergent opinions among experts.
  • Risk management is part of decision-making – Risk management should help leadership teams make intelligent risk mitigation decisions by understanding which risks should be prioritized to maximize impact.
  • Risk management takes human and cultural factors into account – All risk management activities should be assigned to individuals with the most relevant competencies. Appropriate tools should be available to these individuals to support their efforts as much as possible.
  • Risk management facilitates continual improvement of the organization – Strategies should be developed to ensure risk management efforts are continuously improving.
  • Risk management explicitly addresses uncertainty – Risk management should directly address uncertainty by understanding its nature and finding ways to mitigate it.


The framework component of the ISO 31000 standard outlines the structure of a risk management framework, but not in a prescriptive way. The objective is to help organizations integrate risk management into their overall management system based on their unique risk exposure context. Businesses should implement the framework through the lens of their risk management objectives, prioritizing the most relevant aspect of the proposed framework. This flexibility makes any management system capable of mapping to ISO 31000, making the standard industry agnostic.

ISO 31000 can be implemented by any industry to reduce enterprise risk, regardless of size or existing risk management process.

The driving factor for the framework aspect of ISO 31000 is the management team’s commitment to embedding a risk management culture across all organizational levels.

Leadership and commitment branching out into 5 points - integration, design, implementation, evaluation, and improvement.

The five framework pillars of ISO 31000 are as follows:

  • Integration – The risk management framework should be integrated into all business processes, a change that follows the management team’s push for a cultural shift towards greater risk awareness.
  • Design – The design of the final risk management framework must consider the organization’s unique risk exposure and risk appetite.
  • Implementation – An implementation strategy should consider potential roadblocks, resources, timeframes, key personnel, and mechanisms for tracking the framework’s efficacy following implementation.
  • Evaluation – The evaluation component broadens the focus to measuring framework efficacy. This process could involve appealing to various data sources, such as customer complaints, the number of unexpected risk-related events, etc.
  • Improvement – This is the final step of the popular management system design model, Plan, Do, Check, Act (PDCA). Improvements should be made based on the insights gathered in the evaluation phase. The objective of each improvement iteration is to reduce the number of surprises the risk management framework fails to anticipate.

The design of the risk framework should be based on business objectives and a risk management policy within an organization’s unique risk context (the contextualization of risks is a recurring theme in ISO 31000).

Risk management policy feeding program design, which is part of a cycle consisting of - program design, implementation, monitoring, improvement.

The Framework stage sets the broad risk management context, which is then refined in the Process stage, setting the foundation for more meaningful insights gathered through risk assessments.


The process approach to ISO 31000 is represented graphically as follows:

Risk management process lifecycle.

Communication and Consultation

The first stage of this process approach is communication and consultation. The more cross-functional opinions that are heard, the more comprehensive your risk management efforts will be. This stage draws upon ISO 31000’s inclusivity and cultural factor principles.

Communications aren’t just limited to internal functions. External stakeholders should be involved in all decision-making processes. This will encourage stakeholder involvement in all stages of the risk management program’s development – which supports the primary objective of the Framework stage in ISO 31000:2018.

Scope, Context, and Criteria

Ideally, many of these mechanisms should already be established in your management system. The scope of all management activities is performed within the organization’s context, as defined in ISO 9001 Clause 4.1.

Contextual intelligence is a consideration of all internal and external issues impacting the achievement of business objectives. Contextualization can be achieved by gathering information from the following sources:

  • Risk assessment of internal and external risk factors
  • Internal audits
  • Organization policy statements
  • The use of a SWOT template (Strengths, Weaknesses, Opportunities, Threats)
  • Strategy documents
  • Questionnaires (for internal and external process investigations)
  • Interviews (with stakeholders, senior management, cross-functional teams including finance, human resources, engineering, training, etc.).

Learn about UpGuard’s security questionnaires >

The criteria used to assess risk depend on the most appropriate initiative and objective methodology, as outlined in the value creation principle of ISO 31000.

This could include:

  • Strategic objectives
  • Operational objectives
  • Business objectives
  • Health and safety objectives
  • Cybersecurity objectives

Start by narrowing your focus to a single scope. Then, after the process has been proven to work, expand your scope into other regions.

Risk Assessment

After defining your scope, context, and criteria, the actual risk assessment process begins. There are three primary stages in the risk assessment lifecycle.

  • Risk Identification – Understanding the source of discovered risks and their classification (whether they originate from internal or external attack surfaces)
  • Risk Analysis – Understanding the impact of identified and potential risks, and the efficacy of their associated security controls
  • Risk Evaluation – Comparing discovered risks against your risk register, then deciding which risks should be addressed based on an acceptance criterion defined by your risk appetite
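The three assessment stages above can be sketched as a simple data flow. This is a minimal illustration only; all risk names, scores, and the appetite threshold are hypothetical:

```python
# Minimal sketch of the risk assessment lifecycle (all data hypothetical):
# Identification -> Analysis -> Evaluation against a risk register.

RISK_APPETITE = 6  # acceptance threshold on an assumed 1-10 scale

# Identification: discovered risks and their origin
discovered = [
    {"name": "Unpatched web server", "origin": "external"},
    {"name": "Stale admin account", "origin": "internal"},
]

# Analysis: impact score for each risk (hypothetical scores)
analysis = {"Unpatched web server": 8, "Stale admin account": 4}

# Evaluation: compare against the register and the acceptance criterion
risk_register = {"Stale admin account"}  # risks already tracked

def evaluate(risks, scores, register, appetite):
    """Return risks that are new or that exceed the risk appetite."""
    to_treat = []
    for r in risks:
        score = scores[r["name"]]
        if r["name"] not in register or score > appetite:
            to_treat.append((r["name"], score))
    return to_treat

print(evaluate(discovered, analysis, risk_register, RISK_APPETITE))
# → [('Unpatched web server', 8)]
```

The known, tracked risk below the appetite threshold is accepted; the new high-impact risk is queued for treatment.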

Learn about UpGuard’s vendor risk assessment features >

Risk evaluation data will determine which actions need to take place. Any control adjustments or framework improvements will be relative to each unique scope, context, and criteria scenario.

Stakeholders should be involved in deciding how to best respond to risk evaluation insights.

Risk Treatment

The risk treatment stage is where you decide the best course of action. These decisions will depend on your risk appetite, which defines the threshold between the levels of risk that can be accepted and those that need to be addressed.

Different types of risk should be considered, including:

  • Strategic risks
  • Cybersecurity risks
  • Reputational risks
Security controls suppress cybersecurity inherent risks within acceptable risk appetite levels

Your methodology for treating risks depends on the risk culture being developed by the management team. Some organizations have a high risk tolerance, while others (such as those in heavily regulated industries like healthcare) have a very low tolerance for risk. These tolerance bands are decided during the calculation of your risk appetite. If your risk appetite has already been determined, review it to ensure it’s clear enough to support the risk management standards of ISO 31000.

Learn how to calculate your risk appetite >

A risk matrix is helpful in the risk treatment phase as it indicates what risks should be prioritized in remediation efforts to minimize impact.

In the context of Vendor Risk Management, a risk matrix indicates which vendors pose the most significant risk to an organization’s security posture.
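A likelihood-by-impact matrix is one common way to build this prioritization. The sketch below is illustrative only; the banding thresholds and vendor scores are assumptions, not an ISO 31000 requirement:

```python
# Sketch of a 5x5 likelihood-by-impact risk matrix. The score bands and
# vendor data below are hypothetical examples, not prescribed values.

def risk_level(likelihood: int, impact: int) -> str:
    """Map a likelihood/impact pair (each 1-5) to a priority band."""
    score = likelihood * impact
    if score >= 15:
        return "critical"   # remediate first
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Hypothetical vendors with assessed likelihood/impact scores
vendors = {"Vendor A": (5, 4), "Vendor B": (2, 2), "Vendor C": (3, 3)}

# Rank vendors by raw score so remediation starts with the worst offender
ranked = sorted(vendors.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (l, i) in ranked:
    print(name, risk_level(l, i))
```

Running this prints Vendor A as critical (score 20), Vendor C as high (9), and Vendor B as medium (4), giving a treatment order at a glance.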

For a deep dive into Vendor Risk Management, read this post.

These insights, coupled with an ability to project the impact of selected remediation tasks, help response teams optimize their risk treatment efforts, supporting the continuous improvement objectives of ISO 31000.

UpGuard’s vendor risk matrix.
Remediation impact projections on the UpGuard platform.

Another form of risk treatment is to outsource the responsibility to a third party. For example, third-party risk management, the process of managing security risks caused by third-party vendors, could be outsourced to a team of cybersecurity experts. Your organization will still be responsible for the outcome of detected risks but without the added burden of also having to manage them.

The benefit of reduced internal resources makes outsourcing third-party risk management a very economical choice for scaling businesses.

Watch this video to learn about UpGuard’s Third-Party Risk Management Service.

Monitoring and Review

Evaluating the effectiveness of your implemented risk framework will determine whether or not your ISO 31000 risk management program was a profitable investment. During each review and iteration process, be sure to keep the human and cultural factor principle front of mind – don’t forget the people impacted by each iteration. 

Your risk mitigation objectives shouldn’t be so ambitious that you must handcuff your employees. You need to strike the perfect balance between risk management, risk acceptance, and employee well-being.

Recording and Reporting

Finally, all risk management activities should be recorded. Not only will this support stakeholders with their ongoing risk-based strategic decisions, but it will also provide you with a reference for tracking your management systems maturity throughout the ISO 31000 implementation lifecycle.


The Windows Server Hardening Checklist 2023

UpGuard Team
updated Jan 08, 2023

Whether you’re deploying hundreds of Windows servers into the cloud, or handbuilding physical servers for a small business, having a proper method to ensure a secure, reliable environment is crucial to keeping your ecosystem safe from data breaches.

Everyone knows that an out-of-the-box Windows server may not have all the necessary security measures in place to go right into production, although Microsoft has been improving the default configuration in every server version. UpGuard presents this ten step checklist to ensure that your Windows servers have been sufficiently hardened against most cyber attacks.

Specific best practices differ depending on need, but addressing these ten areas before subjecting a server to the internet will protect against the most common exploits. Many of these are standard recommendations that apply to servers of any flavor, while some are Windows specific, delving into some of the ways you can tighten up the Microsoft server platform. Details on hardening Linux servers can be found in our article 10 Essential Steps to Configuring a New Server.

  1. User configuration – Protect your credentials
  2. Network configuration – Establish communications
  3. Features and roles configuration – Add what you need, remove what you don’t
  4. Update installation – Patch vulnerabilities
  5. NTP configuration – Prevent clock drift
  6. Firewall configuration – Minimize your external footprint
  7. Remote access configuration – Harden remote administration sessions
  8. Service configuration – Minimize your attack surface
  9. Further hardening – Protect the OS and other applications
  10. Logging and monitoring – Know what’s happening on your system
  11. Frequently asked questions – Common questions about server hardening

1. User Configuration

Modern Windows Server editions force you to do this, but make sure the password for the local Administrator account is reset to something secure. Furthermore, disable the local administrator whenever possible. There are very few scenarios where this account is required and because it’s a popular target for attack, it should be disabled altogether to prevent it from being exploited.

With that account out of the way, you need to set up an admin account to use. You can either add an appropriate domain account, if your server is a member of an Active Directory (AD), or create a new local account and put it in the administrators group. Either way, you may want to consider using a non-administrator account to handle your business whenever possible, requesting elevation using Windows sudo equivalent, “Run As” and entering the password for the administrator account when prompted.

Verify that the local guest account is disabled where applicable. None of the built-in accounts are secure, guest perhaps least of all, so just close that door. Double check your security groups to make sure everyone is where they are supposed to be (adding domain accounts to the remote desktop users group, for example.)

Don’t forget to protect your passwords. Use a strong password policy to make sure accounts on the server can’t be compromised. If your server is a member of AD, the password policy will be set at the domain level in the Default Domain Policy. For standalone servers, the policy can be set in the local policy editor. Either way, a good password policy will at least establish the following:

  • Complexity and length requirements – how strong the password must be
  • Password expiration – how long the password is valid
  • Password history – how long until previous passwords can be reused
  • Account lockout – how many failed password attempts before the account is suspended

Old passwords account for many successful hacks, so be sure to protect against these by requiring regular password changes.
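The four policy elements above can be sketched as a validation routine. The threshold values here are made up for illustration; real values come from your Default Domain Policy or local policy:

```python
import re
from datetime import datetime, timedelta

# Sketch of the four password policy checks described above.
# All thresholds are illustrative assumptions, not recommended values.
MIN_LENGTH = 12
MAX_AGE = timedelta(days=90)       # expiration
HISTORY_DEPTH = 5                  # previous passwords that cannot be reused
LOCKOUT_THRESHOLD = 5              # failed attempts before suspension

def check_password(candidate, previous, last_changed, failed_attempts):
    """Return a list of policy violations (empty means compliant)."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append("too short")
    if not (re.search(r"[A-Z]", candidate) and re.search(r"[a-z]", candidate)
            and re.search(r"\d", candidate)):
        problems.append("insufficient complexity")
    if candidate in previous[-HISTORY_DEPTH:]:
        problems.append("reused from history")
    if datetime.now() - last_changed > MAX_AGE:
        problems.append("expired")
    if failed_attempts >= LOCKOUT_THRESHOLD:
        problems.append("account locked")
    return problems

print(check_password("Ab1", [], datetime.now(), 0))  # fails the length check
```

In practice these rules are enforced by Group Policy rather than application code, but expressing them this way makes the acceptance criteria explicit and testable.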

2. Network Configuration

Production servers should have a static IP so clients can reliably find them. This IP should be in a protected segment, behind a firewall. Configure at least two DNS servers for redundancy and double check name resolution using nslookup from the command prompt. Ensure the server has a valid A record in DNS with the name you want, as well as a PTR record for reverse lookups. Note that it may take several hours for DNS changes to propagate across the internet, so production addresses should be established well before a go live window. Finally, disable any network services the server won’t be using, such as IPv6. This depends on your environment and any changes here should be well-tested before going into production.
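`nslookup` performs the forward and reverse checks above interactively; the same round-trip can be sketched with Python's standard library. The hostname is illustrative, and results depend on your resolver configuration:

```python
import socket

# Sketch: verify that a host's forward (A) and reverse (PTR) DNS records
# agree. Substitute your server's real DNS name for the example hostname.

def dns_round_trip(hostname: str) -> bool:
    """True if the host's A record and PTR record point at each other."""
    ip = socket.gethostbyname(hostname)               # forward lookup (A record)
    try:
        rname, aliases, _ = socket.gethostbyaddr(ip)  # reverse lookup (PTR)
    except socket.herror:
        return False                                  # no PTR record at all
    return hostname in [rname] + aliases

print(dns_round_trip("localhost"))
```

A `False` result flags a missing or mismatched PTR record, which is worth fixing before go-live since some services refuse connections when reverse lookups fail.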

3. Windows Features and Roles Configuration

Microsoft uses roles and features to manage OS packages. Roles are basically a collection of features designed for a specific purpose, so generally roles can be chosen if the server fits one, and then the features can be customized from there. Two equally important things to do are 1) make sure everything you need is installed. This might be a .NET framework version or IIS, but without the right pieces your applications won’t work. 2) Uninstall everything you don’t need. Extraneous packages unnecessarily extend the attack surface of the server and should be removed whenever possible. This is equally true for default applications installed on the server that won’t be used. Servers should be designed with necessity in mind and stripped lean to make the necessary parts function as smoothly and quickly as possible.

4. Update Installation

This may seem to go without saying, but the best way to keep your server secure is to keep it up to date. This doesn’t necessarily mean living on the cutting edge and applying updates as soon as they are released with little to no testing, but simply having a process to ensure updates do get applied within a reasonable window. Most exploited vulnerabilities are over a year old, though critical updates should be applied as soon as possible in testing and then in production if there are no problems. 

There are different kinds of updates: patches tend to address a single vulnerability; roll-ups are a group of packages that address several, perhaps related, vulnerabilities; and service packs are updates to a wide range of vulnerabilities, comprising dozens or hundreds of individual patches. Be sure to peek into the many Microsoft user forums after an update is released to find out what kind of experience other people are having with it. Keep in mind that the version of the OS is a type of update too, and using years-old server versions puts you well behind the security curve.

If your production schedule allows it, you should configure automatic updates on your server. Unfortunately, the manpower to review and test every patch is lacking from many IT shops and this can lead to stagnation when it comes to installing updates. It’s much more dangerous, however, to leave a production system unpatched than to automatically update it, at least for critical patches. If at all possible, the updates should be staggered so test environments receive them a week or so earlier, giving teams a chance to observe their behavior. Optional updates can be done manually, as they usually address minor issues.

Other MS software updates through Windows Update as well, so make sure to turn on updates for other products if you’re running Exchange, SQL or another MS server technology. Each application should be updated regularly and with testing.

5. NTP Configuration

A time difference of merely 5 minutes will completely break Windows logons and various other functions that rely on Kerberos authentication. Servers that are domain members will automatically have their time synched with a domain controller upon joining the domain, but stand alone servers need to have NTP set up to sync to an external source so the clock remains accurate. Domain controllers should also have their time synched to a time server, ensuring the entire domain remains within operational range of actual time.
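The 5-minute figure is Kerberos's default allowable clock skew, which makes it a natural monitoring threshold. A sketch of that check, with sample timestamps:

```python
from datetime import datetime, timedelta

# Kerberos rejects authentication when clocks differ by more than the
# allowable skew (5 minutes by default). A drift-check sketch with
# hypothetical timestamps:

MAX_SKEW = timedelta(minutes=5)

def within_kerberos_tolerance(server_time, reference_time):
    """True if the server's clock is close enough to the time source."""
    return abs(server_time - reference_time) <= MAX_SKEW

ref = datetime(2023, 1, 8, 12, 0, 0)
print(within_kerberos_tolerance(datetime(2023, 1, 8, 12, 3, 0), ref))  # → True (3 min drift)
print(within_kerberos_tolerance(datetime(2023, 1, 8, 12, 6, 0), ref))  # → False (6 min drift)
```

In production the reference time would come from an NTP query rather than a hard-coded value; the point is that drift should be alerted on well before it reaches the Kerberos limit.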

6. Firewall Configuration

If you’re building a web server, for example, you’re only going to want web ports (80 and 443) open to that server from the internet. If anonymous internet clients can talk to the server on other ports, that opens a huge and unnecessary security risk. If the server has other functions such as remote desktop (RDP) for management, they should only be available over a VPN connection, ensuring that unauthorized people can’t exploit the port at will from the net.

The Windows firewall is a decent built-in software firewall that allows configuration of port-based traffic from within the OS. On a stand alone server, or any server without a hardware firewall in front of it, the Windows firewall will at least provide some protection against network based attacks by limiting the attack surface to the allowed ports. That said, a hardware firewall is always a better choice because it offloads the traffic to another device and offers more options on handling that traffic, leaving the server to perform its main duty. Whichever method you use, the key point is to restrict traffic to only necessary pathways.
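The web-server example above boils down to an allow-list audit. A minimal sketch, where the port sets and scan results are hypothetical:

```python
# Sketch: derive an internet-facing allow-list for a web server and flag
# anything else found listening. All port data here is hypothetical.

ALLOWED_FROM_INTERNET = {80, 443}   # HTTP/HTTPS only
VPN_ONLY = {3389}                   # RDP: reachable via VPN, never the internet

def audit(listening_ports):
    """Return listening ports that must not be exposed to the internet."""
    return sorted(p for p in listening_ports
                  if p not in ALLOWED_FROM_INTERNET)

# e.g. a port scan shows the server listening on these ports:
print(audit({80, 443, 3389, 445}))  # → [445, 3389]
```

Here 445 (SMB) and 3389 (RDP) are flagged: the first should be blocked outright, the second moved behind the VPN, leaving only the web ports exposed.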

7. Remote Access Configuration

As mentioned above, if you use RDP, be sure it is only accessible via VPN if at all possible. Leaving it open to the internet doesn’t guarantee you’ll get hacked, but it does offer potential hackers another inroad into your server.

Make sure RDP is only accessible by authorized users. By default, all administrators can use RDP once it is enabled on the server. Additional people can join the Remote Desktop Users group for access without becoming administrators.

In addition to RDP, various other remote access mechanisms such as Powershell and SSH should be carefully locked down if used and made accessible only within a VPN environment. Telnet should never be used at all, as it passes information in plain text and is woefully insecure in several ways. Same goes for FTP. Use SFTP or SSH (from a VPN) whenever possible and avoid any unencrypted communications altogether.

8. Service Configuration

Windows server has a set of default services that start automatically and run in the background. Many of these are required for the OS to function, but some are not and should be disabled if not in use. Following the same logic as the firewall, we want to minimize the attack surface of the server by disabling everything other than primary functionality. Older versions of MS server have more unneeded services than newer, so carefully check any 2008 or 2003 (!) servers.

Important services should be set to start automatically so that the server can recover without human interaction after failure. For more complex applications, take advantage of the Automatic (Delayed Start) option to give other services a chance to get going before launching intensive application services. You can also set up service dependencies in which a service will wait for another service or set of services to successfully start before starting. Dependencies also allow you to stop and start an entire chain at once, which can be helpful when timing is important.
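A service dependency chain is just a graph that must start in topological order. The sketch below uses Python's standard-library `graphlib` (Python 3.9+) with hypothetical service names:

```python
from graphlib import TopologicalSorter

# Sketch of service dependencies as described above: each service waits
# for the services it depends on. Service names are hypothetical.

deps = {
    "WebApp":   {"Database", "Cache"},  # WebApp starts after both
    "Database": {"Storage"},
    "Cache":    set(),
    "Storage":  set(),
}

# static_order() yields a valid start order: dependencies always come
# before the services that depend on them.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

This is exactly the guarantee Windows service dependencies give you: Storage always precedes Database, and both Database and Cache precede WebApp, so stopping or starting the chain as a unit is safe.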

Finally, every service runs in the security context of a specific user. For default Windows services, this is often as the Local System, Local Service or Network Service accounts. This configuration may work most of the time, but for application and user services, best practice dictates setting up service specific accounts, either locally or in AD, to handle these services with the minimum amount of access necessary. This keeps malicious actors who have compromised an application from extending that compromise into other areas of the server or domain.

9. Further Hardening

Microsoft provides best practices analyzers based on role and server version that can help you further harden your systems by scanning and making recommendations.

Although User Account Control (UAC) can get annoying, it serves the important purpose of abstracting executables from the security context of the logged in user. This means that even when you’re logged in as an admin, UAC will prevent applications from running as you without your consent. This prevents malware from running in the background and malicious websites from launching installers or other code. Leave UAC on whenever possible.

The tips in this guide help secure the Windows operating system, but every application you run should be hardened as well. Common Microsoft server applications such as MSSQL and Exchange have specific security mechanisms that can help protect them against attacks like the WannaCry ransomware; be sure to research and tweak each application for maximum resilience. If you’re building a web server, you can also follow our hardening guide to improve its internet facing security.

10. Logging and Monitoring

Finally, you need to make sure that your logs and monitoring are configured and capturing the data you want so that in the event of a problem, you can quickly find what you need and remediate it. Logging works differently depending on whether your server is part of a domain. Domain logons are processed by domain controllers, and as such, they have the audit logs for that activity, not the local system. Stand alone servers will have security audits available and can be configured to show passes and/or failures.

Check the max size of your logs and scope them to an appropriate size. Log defaults are almost always far too small to monitor complex production applications. As such, disk space should be allocated during server builds for logging, especially for applications like MS Exchange. Logs should be backed up according to your organization’s retention policies and then cleared to make room for more current events.

Consider a centralized log management solution if handling logs individually on servers gets overwhelming. Like a syslog server in the Linux world, a centralized event viewer for Windows servers can help speed up troubleshooting and remediation times for medium to large environments.

Establish a performance baseline and set up notification thresholds for important metrics. Whether you use the built-in Windows performance monitor, or a third party solution that uses a client or SNMP to gather data, you need to be gathering performance info on every server. Things like available disk space, processor and memory use, network activity and even temperature should be constantly analyzed and recorded so anomalies can be easily identified and dealt with. This step is often skipped over due to the hectic nature of production schedules, but in the long run it will pay dividends because troubleshooting without established baselines is basically shooting in the dark.
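One simple way to turn a baseline into notification thresholds is to flag samples that stray several standard deviations from the established mean. A sketch with made-up metric values:

```python
from statistics import mean, stdev

# Sketch: flag metric samples that deviate from an established baseline
# by more than k standard deviations. All numbers are hypothetical.

baseline = [41.0, 43.5, 42.2, 40.8, 42.9, 41.7, 43.1, 42.4]  # e.g. % CPU

def is_anomalous(sample, history, k=3.0):
    """True if the sample falls outside mean ± k standard deviations."""
    m, s = mean(history), stdev(history)
    return abs(sample - m) > k * s

print(is_anomalous(42.5, baseline))  # → False (near baseline)
print(is_anomalous(97.0, baseline))  # → True  (far outside)
```

Real monitoring tools use more robust statistics, but even this crude rule makes the payoff of baselining concrete: without the history, 97% CPU is just a number; with it, it's an actionable anomaly.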

11. Frequently Asked Questions About Windows Server Hardening

What is Server Hardening?

Hardening is a catch-all term for the changes made in configuration, access control, network settings and server environment, including applications, in order to improve the server security and overall security of an organization’s IT infrastructure. Different benchmarks exist for Windows server hardening, including Microsoft Security Benchmarks as well as CIS Benchmark hardening standards established by the Center For Internet Security. Benchmarks from CIS cover network security hardening for cloud platforms such as Microsoft Azure as well as application security policy for software such as Microsoft SharePoint, along with database hardening for Microsoft SQL Server, among others. 

How Do I Harden a Web Server?

It’s good practice to follow a standard web server hardening process for new servers before they go into production. Never attempt to harden web servers in use, as this can disrupt your production workloads unpredictably. Instead, provision fresh servers for hardening, then migrate your applications after hardening and fully testing the setup. A good first step when hardening a Windows web server involves patching the server with the latest service packs from Microsoft before moving on to securing your web server software such as Microsoft IIS, Apache, PHP, or Nginx.

Harden system access and configure network traffic controls: set a minimum password length, configure the Windows Firewall (which allows you to implement functionality similar to iptables using traffic policy), set up a hardware firewall if one is available, and configure your audit policy and log settings. Eliminate potential backdoors an attacker could use, from the firmware level (ensure your servers have the latest BIOS firmware, hardened against firmware attacks) all the way to IP address rules limiting unauthorized access and the removal of unused services or unnecessary software. Make sure all file system volumes use the NTFS filesystem, and configure file permissions to limit users to least privilege access. You should also install anti-virus software as part of your standard server security configuration, ideally with daily updates and real-time protection.

What is the Most Important Process in Windows Server Hardening?

To really secure your servers against the most common attacks, you must adopt something of the hacker mindset yourself, which means scanning for potential vulnerabilities from the viewpoint of how a malicious attacker might look for an opening. Inevitably, the largest hacks tend to occur when servers have poor or incorrect access control permissions, ranging from lax file system permissions to network and device permissions. In a statistical study of recent security breaches, poor access management was found to be the root cause behind an overwhelming majority of data breaches, with 74% of breaches involving the use of a privileged account in some capacity.

Perhaps the most dangerous but pervasive form of poor access control is granting of Everyone Write/Modify or Read permissions on files and folders with sensitive contents, which occurs so frequently as a natural offshoot of complex organizational collaborative team structures. To reduce exposure through access control, set group policy and permissions to the minimum privileges acceptable, and consider implementing strict protocols such as 2 Factor Authentication as well as zero trust privilege to ensure resources are only accessed by authenticated actors. 

Other common areas of vulnerability include social engineering and servers running unpatched software. To address these, your team should undergo regular cybersecurity training, and you should regularly test and apply the most recent security patches for the software running on your servers. On this last point, you should also remove unnecessary services from your servers, as these hurt the security of your IT infrastructure in two crucial ways: they broaden an attacker’s potential target area, and they leave old services running in the background that may be several patches behind, making them attractive targets for exploits. In reality, there is no system hardening silver bullet that will secure your Windows server against any and all attacks. The best hardening process follows information security best practices end to end, from hardening the operating system itself to application and database hardening.

Which Windows Server Version is the Most Secure?

The latest versions of Windows Server tend to be the most secure since they use the most current server security best practices. For cutting edge server security, you should be looking at recent versions, including Windows Server 2008 R2, Windows Server 2012 R2, Windows Server 2016, and the most recent release, Windows Server 2019. Microsoft has added significantly to the security profile of its server OS in Windows Server 2019, with far-reaching security-focused updates that acknowledge the widespread impact of breaches and attacks. These new features make Windows Server 2019 the most formidable of the line from a security perspective. 

Windows Server 2019 features such as Windows Defender ATP Exploit Guard and Attack Surface Reduction(ASR) help to lock down your systems against intrusion and provide advanced tools for blocking malicious file access, scripts, ransomware, and other attacks. Network protection features in Windows Server 2019 provide protection against web attacks through IP blocking to eliminate outbound processes to untrusted hosts. Advanced audit policy settings in Windows Server 2019, including the Microsoft Defender Advanced Threat Protection Incidents queue help you get a granular event log for monitoring threats that require manual action or follow up.  

Final Thoughts

Defining your ideal state is an important first step for server management. Building new servers to meet that ideal takes it a step further. But creating a reliable and scalable server management process requires continuous testing of actual state against the expected ideal. This is because configurations drift over time: updates, changes made by IT, integration of new software: the causes are endless.

UpGuard provides both unparalleled visibility into your IT environment and the means to control configuration drift by checking it against your desired state and notifying you when assets fall out of compliance. Compare systems to one another or in a group to see how configurations differ, or compare a system to itself over time to discover historical trends.

Is your business at risk of a security breach?

UpGuard can protect your business from data breaches, identify all of your data leaks, and help you continuously monitor the security posture of all your vendors.

UpGuard also supports compliance across a myriad of security frameworks, including the new requirements set by Biden’s Cybersecurity Executive Order.



Initial and Advanced Firewall Setup for high security environments


This article covers initial and advanced firewall configuration for environments that require strict security compliance, such as military and closed environments.


Resolution for SonicOS 7.X

This release includes significant user interface changes and many new features that are different from the SonicOS 6.5 and earlier firmware. The below resolution is for customers using SonicOS 7.X firmware.

Interfaces Configuration:

After collecting all necessary infrastructure-related information, such as the relevant service IP networks, addresses, and so on, you can begin the basic configuration. To complete the basic configuration, follow these steps:

  1. Log in on the default LAN interface X0, using the default IP address.
  2. Go to Network | System | Interfaces.


  3. Under the Interface Settings section, click the Configure icon and assign relevant IP addresses to the interfaces in the trusted and untrusted zones.

  4. Based on the information previously collected, assign the IP address to the interfaces in the correct subnet; you can use the default network as well.
  5. Enable HTTPS management and user management on the interfaces.
  6. Enable the desired protocols on the LAN and WAN interfaces.
  7. Configure the management interface with the appropriate IP addresses, net masks, and gateways. This is used only for controlling management traffic to the firewall.
  8. Disable DHCP server: Uncheck ‘Enable DHCP Server’ under Network | System | DHCP Server > DHCPv4 Server Settings.


  9. Set the firewall host and domain names: navigate to Device | Settings | Administration > Firewall Administration.
    i.  Enter the firewall name in the ‘Firewall Name’ box.
    ii. Enter the firewall domain name in the ‘Firewall’s Domain Name’ box and click Accept.


Administrator Settings:

  1. Set Administrator Account Properties:
    a. Under Firewall Administrator| Administrator Name & Password: Verify Administrator Name and set the password.
    b. Under Device | Settings | Administration with Login/Multiple Administrators

    i. Check ‘Password Must be Changed Every (days)’
    ii. Check ‘Bar repeated passwords for this many changes’ – set to ‘10’
    iii. Select ‘New password must contain 8 characters different from the old password’
    iv. Set ‘Enforce a minimum password length of:’ to ‘16’
    v. Set ‘Enforce password complexity’ to ‘Require alphabetic, numeric, and symbolic characters’ (from the drop-down box choices)
    vi. Set ‘Complexity Requirement’ to ‘2’ in each box
    vii. Check all ‘Apply the above password constraints for:’ boxes
    viii. Set the ‘Log out the administrator after inactivity of (minutes)’ timer to ‘10’
    ix. Check the ‘Enable the administrator/user lockout’ checkbox

    1. Set ‘Failed login attempts per minute before lockout’ to ‘3’
    2. Set ‘Lockout Period (minutes)’ to ‘30’
    x. Set ‘Max login attempts through CLI’ to ‘3’, then click ‘Accept’ (may require a reboot).


    xi. Under ‘Multiple Administrators’ – Select ‘Enable Multiple Administrative Roles’


    xii. Under Audit/SonicOS API, ‘Enhanced Audit Logging Support’ – Select ‘Enable Enhanced Audit Logging’, then click Accept.
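The password constraints configured in steps iv through vii above can be expressed as a short validation routine. The sketch below is illustrative only: the function name and the position-wise comparison against the old password are assumptions, not SonicOS behavior.

```python
import string

# Illustrative sketch of the password policy configured above:
# minimum length 16, a 'Complexity Requirement' of 2 characters per class
# (alphabetic, numeric, symbolic), and at least 8 characters changed from
# the old password. Not SonicOS code; the comparison method is an assumption.

def meets_policy(password: str, old_password: str = "") -> bool:
    if len(password) < 16:
        return False
    alpha = sum(c.isalpha() for c in password)
    digits = sum(c.isdigit() for c in password)
    symbols = sum(c in string.punctuation for c in password)
    if alpha < 2 or digits < 2 or symbols < 2:
        return False
    if old_password:
        # Position-wise difference count; the firewall's exact method is not documented
        diff = sum(1 for a, b in zip(password, old_password) if a != b)
        diff += abs(len(password) - len(old_password))
        if diff < 8:
            return False
    return True
```

A 17-character password with two digits and two symbols passes; reusing an old password with only the last character changed fails the 8-character-difference check.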


User Configuration:

Force a new login session after a password change and display user login information since last login:
Navigate to Device | Users | Settings, select ‘Force Relogin After Password Change’ and ‘Display User Login Info Since Last Login’, then click ‘Accept’.


Advanced Configuration:

For advanced configuration of the firewall, complete the following additional steps:

  1. If a closed system is necessary, go to the Backend Server Communication section and enable the ‘Prevent communication with Backend servers’ option after the licensing protocol synchronizes. See the SonicOS Administration Guide for more information on manually updating signatures.
  2. On the Diag page (internal settings):
    1. Go to the Security Services Settings section, click Apply IPS Signatures Bidirectionally.
    2. Go to the ICMP Settings section, disable both ICMP packet settings.
    3. Under the VPN Settings section, enable the Trust Built-in CA certificates for IKE authentication and Local certificate import option.
    4. Click Close.
  3. Navigate to Device > Diagnostics and deselect “Periodic Secure Diagnostic Reporting for Support Purposes” and “Automatic Secure Crash Analysis Reporting”, then click “Accept”.
  4. Restart the firewall.
  5. Disable Advanced Networking:
    a. Navigate to Network | System | Dynamic Routing and disable Advanced Routing.

  6. Change the IKEv2 Dynamic Client Proposal in the IPSec VPN Advanced Settings to require at least DH Group 14, AES-256 encryption, and SHA-256 authentication:
    a. In IPSec VPN / Advanced, navigate to ‘IKEv2 Settings’ and click the ‘IKEv2 Dynamic Client Proposal’ button
    b. Change ‘DH Group’ to ‘14’ as appropriate
    c. Change ‘Encryption’ to ‘AES-256’
    d. Change ‘Authentication’ to ‘SHA-256’
    e. Click ‘Accept’ and then ‘Accept’ again
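As a sanity check, the minimums required in step 6 can be stated as a small predicate. This is a hedged sketch: the dictionary keys and the set of acceptable values are illustrative assumptions, not SonicOS data structures.

```python
# Minimum IKEv2 proposal policy from step 6: at least DH Group 14,
# AES-256 encryption, and SHA-256 authentication.
# Key names and values here are illustrative, not a SonicOS schema.
MIN_DH_GROUP = 14
ALLOWED_ENCRYPTION = {"AES-256"}
ALLOWED_AUTH = {"SHA-256", "SHA-384", "SHA-512"}  # assumes stronger hashes also qualify

def proposal_ok(proposal: dict) -> bool:
    # A proposal passes only when every field meets or exceeds the minimum
    return (
        proposal.get("dh_group", 0) >= MIN_DH_GROUP
        and proposal.get("encryption") in ALLOWED_ENCRYPTION
        and proposal.get("authentication") in ALLOWED_AUTH
    )
```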


Setting configuration:

  1. Turn off SSH and SNMP management (not allowed in FIPS mode):
    a. Navigate to Network | System | Interfaces and select the configuration icon for X0 (this assumes it is the only interface on which SSH or SNMP management might be enabled; turn them off on any other interfaces configured for SSH and/or SNMP management)
    b. Deselect SSH or SNMP as appropriate.
    c. Click ‘OK’.

  2. Set a session quota for each management IP (NOTE: this applies to both IPv4 and IPv6):
    a. Using the browser, navigate to the internal settings (diag) page by changing the URL from https://<IP address>/sonicui/7/m/Mgmt/settings to https://<IP address>/sonicui/7/m/Mgmt/settings/diag.
    b. Check the box labeled ‘Set Connection Limitation of Management Policies’, then accept and exit internal settings.

    i. NOTE: This will require an automatic reboot

  3. Enable “Drop and log network packets whose source or destination address is reserved by RFC”:
    a. In Network | Firewall | Advanced > IPv6 Settings, navigate to the ‘IPv6 Advanced Configuration’ section.
    b. Check the option ‘Drop and log network packets whose source or destination address is reserved by RFC’ and accept it.


172220 Log alert when the log buffer is 75% full
Log -> Settings -> Log Category -> General -> Logs at 75% of maximum: set the priority to Alert


172218 Minimum number of characters changed for password should be eight (8)
Device > Settings > Administration > Login/Multiple Administrators:


172221 Login history during a user defined time period
Diag settings: a new checkbox sets the time interval for login history. Note: the login history is displayed on the System -> Status page. The text in the display still shows “since system restart”, but it is actually since the organizationally defined time period in the setting below.


Sample output:
• Last successful login timestamp 04/11/2016 17:30:32.000.
• Number of all user successful login attempts since system reset is 1.

Note: Login history for CAC user with LDAP

Login history for a CAC user with credentials imported from LDAP will be recorded only when the user accounts are imported from LDAP locally onto the firewall. For the firewall to track the account’s history, the user account information must be available locally on the firewall.
If using CAC with LDAP, import the LDAP user accounts locally by clicking ‘Import Users’ and then clicking ‘Save’.

172219 Minimum password lifetime
Device > Settings > Administration > Login/Multiple Administrators:


172223 Password complexity requirements should be applicable to OTP
Device > Users > Settings: checkbox to apply password constraints to OTP


171473 Indefinite lockout of a user for wrong password (Device > Settings > Administration > Login/Multiple Administrators)
172217 Enforce a limit of number of invalid consecutive logons within a time period


Resolution for SonicOS 6.5

This release includes significant user interface changes and many new features that are different from the SonicOS 6.2 and earlier firmware. The below resolution is for customers using SonicOS 6.5 firmware.

Interfaces Configuration:

After collecting all necessary infrastructure-related information, such as the relevant service IP networks, addresses, and so on, you can begin the basic configuration. To complete the basic configuration, follow these steps:

  1. Log in on the default LAN interface X0, using the default IP address.
  2. Go to Manage | Network | Interfaces.

  3. Under the Interface Settings section, click the Configure icon and assign relevant IP addresses to the interfaces in the trusted and untrusted zones.

  4. Based on the information previously collected, assign the IP address to the interfaces in the correct subnet; you can use the default network as well.
  5. Enable HTTPS management and user management on the interfaces.
  6. Enable the desired protocols on the LAN and WAN interfaces.
  7. Configure the management interface with the appropriate IP addresses, net masks, and gateways. This is used only for controlling management traffic to the firewall.
  8. Disable DHCP server: Uncheck ‘Enable DHCP Server’ under Manage | Network| DHCP Server > DHCPv4 Server Leases Scopes.

  9. Set the firewall host and domain names: navigate to Manage | Appliance | Base Settings > Firewall Administration.
    i.  Enter the firewall name in the ‘Firewall Name’ box.
    ii. Enter the firewall domain name in the ‘Firewall’s Domain Name’ box and click Accept.


Administrator Settings:

  1. Set Administrator Account Properties:
    a. Under Administrator Name & Password: Verify Administrator Name and set the password.
    b. Under Administration / Login Security with Login/Multiple Administrators

    i. Check ‘Password Must be Changed Every (days)’
    ii. Check ‘Bar repeated passwords for this many changes’ – set to ‘10’
    iii. Select ‘New password must contain 8 characters different from the old password’
    iv. Set ‘Enforce a minimum password length of:’ to ‘16’
    v. Set ‘Enforce password complexity’ to ‘Require alphabetic, numeric, and symbolic characters’ (from the drop-down box choices)
    vi. Set ‘Complexity Requirement’ to ‘2’ in each box
    vii. Check all ‘Apply the above password constraints for:’ boxes
    viii. Set the ‘Log out the administrator after inactivity of (minutes)’ timer to ‘10’
    ix. Check the ‘Enable the administrator/user lockout’ checkbox

    1. Set ‘Failed login attempts per minute before lockout’ to ‘3’
    2. Set ‘Lockout Period (minutes)’ to ‘30’
    x. Set ‘Max login attempts through CLI’ to ‘3’, then click ‘Accept’ (may require a reboot).


    xi. Under ‘Multiple Administrators’ – Select ‘Enable Multiple Administrative Roles’.


    xii. Under ‘Enhanced Audit Logging Support’ – Select ‘Enable Enhanced Audit Logging’, click Accept.


User Configuration:

Force a new login session after a password change and display user login information since last login:
Navigate to Manage | Users | Settings, select ‘Force Relogin After Password Change’ and ‘Display User Login Info Since Last Login’, then click ‘Accept’.


Advanced Configuration:

For advanced configuration of the firewall, complete the following additional steps:

  1. If a closed system is necessary, go to the Backend Server Communication section and enable the ‘Prevent communication with Backend servers’ option after the licensing protocol synchronizes. See the SonicOS Administration Guide for more information on manually updating signatures.
  2. On the Diag page (internal settings):
    1. Go to the Security Services Settings section, click Apply IPS Signatures Bidirectionally.
    2. Go to the ICMP Settings section, disable both ICMP packet settings.
    3. Under the VPN Settings section, enable the Trust Built-in CA certificates for IKE authentication and Local certificate import option.
    4. Click Accept and exit the internal settings.
  3. Navigate to Device > Diagnostics and deselect “Periodic Secure Diagnostic Reporting for Support Purposes” and “Automatic Secure Crash Analysis Reporting”, then click “Accept”.
  4. Restart the firewall.
  5. Disable Advanced Networking:
    a. In Network / Routing, change ‘Advanced Routing’ to ‘Simple RIP Advertisement’.

  6. Change the IKEv2 Dynamic Client Proposal in the IPSec VPN Advanced Settings to require at least DH Group 14, AES-256 encryption, and SHA-256 authentication:
    a. In IPSec VPN / Advanced, navigate to ‘IKEv2 Settings’ and click the ‘IKEv2 Dynamic Client Proposal’ button
    b. Change ‘DH Group’ to ‘14’ as appropriate
    c. Change ‘Encryption’ to ‘AES-256’
    d. Change ‘Authentication’ to ‘SHA-256’
    e. Click ‘OK’ and then ‘Accept’.


Setting configuration:

  1. Turn off SSH and SNMP management (not allowed in FIPS mode):
    a. Navigate to Manage | Network | Interfaces and select the configuration icon for X0 (this assumes it is the only interface on which SSH or SNMP management might be enabled; turn them off on any other interfaces configured for SSH and/or SNMP management)
    b. Deselect SSH or SNMP as appropriate.
    c. Click ‘OK’.

  2. Set session quota for each management IP (NOTE: This applies to both IPv4 and IPv6):
    a. Using the browser, navigate to the diag.html page (<IP address of host>/diag.html)
    b. Check the box labeled ‘Set Connection Limitation of Management Policies’

    i. NOTE: This will require an automatic reboot

  3. Enable “Drop and log network packets whose source or destination address is reserved by RFC”:
    a. In Firewall Settings > Advanced Settings, navigate to the ‘IPv6 Advanced Configuration’ section.
    b. Check the option ‘Drop and log network packets whose source or destination address is reserved by RFC’ and accept it.


172220 Log alert when the log buffer is 75% full
Log -> Base Setup -> Log Category -> General -> Logs at 75% of maximum: set the priority to Alert


172218 Minimum number of characters changed for password should be eight (8)
Manage > Appliance > Base Settings > Login Security:


172221 Login history during a user defined time period
Diag settings: a new checkbox sets the time interval for login history. Note: the login history is displayed on the System -> Status page. The text in the display still shows “since system restart”, but it is actually since the organizationally defined time period in the setting below.


Sample output:
• Last successful login timestamp 04/11/2016 17:30:32.000.
• Number of all user successful login attempts since system reset is 1.

Note: Login history for CAC user with LDAP

Login history for a CAC user with credentials imported from LDAP will be recorded only when the user accounts are imported from LDAP locally onto the firewall. For the firewall to track the account’s history, the user account information must be available locally on the firewall.
If using CAC with LDAP, import the LDAP user accounts locally by clicking ‘Import Users’ and then clicking ‘Save’.

172219 Minimum password lifetime
Manage > Appliance > Base Settings > Login Security:


172223 Password complexity requirements should be applicable to OTP
Manage > Users > Settings: checkbox to apply password constraints to OTP


171473 Indefinite lockout of a user for wrong password (Manage > Appliance > Base Settings > Login Security)
172217 Enforce a limit of number of invalid consecutive logons within a time period



Can Settings be Exported/Imported from one SonicWall to Another? (Support Matrix)


While settings can be exported from one SonicWall to another, not every model of SonicWall is compatible with all others. Similarly, some firmware versions are not compatible with subsequent versions, as new features were added or changes were made to existing features. This article details which settings files are supported to and from each SonicWall UTM device to help administrators avoid possible settings corruption from unsupported settings imports.


Support Matrix for Gen 7 Products

SonicOS 7 is only compatible with Gen 7 hardware such as the TZ570 and 670. SonicOS is the minimum version supported for settings import to a TZ running SonicOS 7. Existing settings for Global Bandwidth Management, Virtual Assist and Content Filter Client Enforcement cannot be imported into SonicOS 7. Global Bandwidth Management is replaced by Advanced Bandwidth Management, and the other features are deprecated in SonicOS 7.

NOTE: Settings import from Gen6/6.5 is only supported with the migration tool. For help creating a Gen7 settings file using the migration tool, please follow: How to Create Gen 7 Settings File by Using the Online Migration Tool. Once you have a Gen7-compatible configuration from the migration tool, settings can be imported into the relevant Gen7 models as per the product matrix.
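Conceptually, the migration described above must drop settings for features that SonicOS 7 no longer supports. The sketch below illustrates that filtering step; the key names are hypothetical labels for this illustration, not the actual exported settings schema.

```python
# Hypothetical labels for the features noted above as not importable into
# SonicOS 7. These are NOT the real keys used in a SonicWall settings export.
DEPRECATED_IN_SONICOS7 = {
    "global_bandwidth_management",  # replaced by Advanced Bandwidth Management
    "virtual_assist",               # deprecated in SonicOS 7
    "cfs_client_enforcement",       # deprecated in SonicOS 7
}

def strip_deprecated(settings: dict) -> dict:
    # Return a copy of the settings with unsupported feature sections removed
    return {k: v for k, v in settings.items() if k not in DEPRECATED_IN_SONICOS7}
```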


Configuration Settings Import Support by Version

 CAUTION: Settings from a higher firmware version cannot be imported into a lower version of firmware. For example, it is not supported to import 6.5.3.x settings into 6.5.1.x firmware.
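The caution above amounts to a simple ordering rule on dotted version strings, sketched here. The parsing convention (treating a non-numeric part such as a trailing "x" as 0) is an assumption for illustration.

```python
# Hedged sketch of the compatibility rule: settings exported from higher
# firmware must not be imported into lower firmware. Version strings are
# SonicOS-style dotted versions, e.g. "6.5.3.1".

def parse_version(v: str) -> tuple:
    # "6.5.3.1" -> (6, 5, 3, 1); non-numeric parts (e.g. "x") count as 0
    return tuple(int(p) if p.isdigit() else 0 for p in v.split("."))

def import_supported(source: str, destination: str) -> bool:
    # Tuple comparison gives the usual lexicographic version ordering
    return parse_version(source) <= parse_version(destination)
```

Under this rule, importing 6.5.1.x settings into 6.5.3.x firmware is allowed, while the reverse is rejected, matching the example in the caution.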


The following matrix illustrates the supported source and destination versions of SonicOS when importing configuration settings from one appliance to another. SonicOS 6.5 and 7.0 are included.


Configuration settings import to a TZ running SonicOS 7 from any SonicOS 6.x version prior to SonicOS 6.5.x is supported as a two-step process:

  1. Upgrade the TZ from SonicOS 6.x to SonicOS or higher.
  2. Export settings from the upgraded TZ and then use the migration tool to import them to the TZ running SonicOS 7.

Configuration Settings Import Support by Platform

The matrix in this section shows the SonicWall firewalls running SonicOS 6.5 or 7.0 whose configuration settings can be imported to SonicWall platforms running SonicOS 7.0.

In the matrix, the source firewalls are in the left column, and the destination firewalls are listed across the top.


The legend for the above table is:


 NOTE: Settings import is supported from SOHO running SonicOS 5.9 to SonicWall platforms running SonicOS 7.0. This is a special case, as the SOHO cannot run SonicOS 6.5.

Support Matrix for Importing Preferences from Gen 5 to Gen 6 Products

 NOTE: Upgrading from SonicOS 5.9.0.x to SonicOS 6.1.x.x is NOT supported at this time.

 NOTE: SonicOS running on NSv does NOT support settings import from a physical appliance to a virtual NSv

Failing to follow the guidelines provided in this article may result in a failed upgrade and/or corruption of the configuration file, which would then require manually reconfiguring the firewall settings.

 TIP: When importing settings to a TZ Series Firewall, make sure to disable Portshield on the destination Firewall beforehand to ensure the interface configuration will be updated.

 CAUTION: Settings from a higher firmware version cannot be imported into a lower version of firmware. For example, importing settings from newer firmware into older firmware is unsupported.



TZ Series / SOHO Series Configuration Import Support


NSA / SuperMassive Configuration Import Support


NSa Configuration Import Support



 NOTE:  SonicOS running on NSv Gen 6/6.5 does not support settings import to NSv Gen 7 devices at this time.

See also:

How to Understand and Resolve Settings Corruption

How do I safely perform a firmware downgrade?

SonicOS 6.5 administrative and upgrade guides – Reference Links
