TeamViewer Two-Factor Authentication for connections

By .Carol.fg.

Last Updated: 

This article provides a step-by-step guide to activating Two-factor authentication for connections (also known as TFA for connections). This feature enables you to allow or deny connections via push notifications on a mobile device.

This article applies to all Windows users using TeamViewer (Classic) 15.17 (and newer) and macOS and Linux users in version 15.22 (and newer).

What is Two-factor authentication for connections?

TFA for connections offers an extra layer of protection to desktop computers.

When enabled, connections to that computer need to be approved using a push notification sent to specific mobile devices. 

Enabling Two-factor authentication for connections and adding approval devices

Windows and Linux:

1. In the TeamViewer (Classic) application, click the gear icon in the top-right menu.

2. Click on the Security tab on the left.

3. You will find the Two-factor authentication for connections section at the bottom.

4. Click on Configure… to open the list of approval devices.

5. To add a new mobile device to receive the push notifications, click Add.

6. You will now see a QR code that needs to be scanned by your mobile device.

Below please find a step-by-step gif for Windows, Linux, and macOS:

Windows

TFA for connections.gif

Linux

Linux add new device.gif

macOS

MAC1_community.gif

7. On the mobile device, download and install the TeamViewer Remote Control app:

a. Android

📌Note: This feature is only available on Android 6.0 or higher.

b. iOS

8. In the TeamViewer Remote Control app, go to Settings → TFA for connections.

9. You will see a short explanation and the option to open the camera to scan the QR code.

10. Tap on Scan QR code and you will be asked to give the TeamViewer app permission to access the camera.

11. After permission is given, the camera will open. Point the camera at the QR code on the desktop computer (see Step 6 above).

12. The activation will happen automatically, and a success message will be displayed. 

13. The new device is now included in the list of approval devices.

14. From now on, any connection to this desktop computer will need to be approved using a push notification.

📌 Note: TFA for connections cannot be remotely disabled if the approval device is not accessible. Due to this, we recommend setting up an additional approval device as a backup.

Removing approval devices

1. Select an approval device from the list and click Remove or the X.

2. You will be asked to confirm the action.

3. Click Remove again: the mobile device will be removed from the list of approval devices and won’t receive any further push notifications.

4. If the Approval devices list is empty, Two-factor authentication for connections will be completely disabled.

Below please find a step-by-step gif for Windows, Linux, and macOS:

▹ Windows:

Removing approval devices[1).gif

▹ Linux:

linux remove device.gif

▹ macOS:

MAC2_community.gif

Remote connections when Two-factor authentication for connections is enabled

TFA for connections does not replace any existing authentication method. When enabled, it adds an extra security layer against unauthorized access.

When connecting to a desktop computer protected by TFA for connections, a push notification will be sent to all of the approval devices.

You can either:

  • accept/deny the connection request via the system notification:
  • accept/deny the connection request by tapping the TeamViewer notification. It will lead you to the following screen within the TeamViewer application, where you can accept/deny the connection:

Multiple approval devices

All approval devices in the list will receive a push notification. 

The first notification that is answered on any of the devices will be used to allow or deny the connection.
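
The first-answer-wins behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not TeamViewer's actual implementation: each approval device posts its answer to a shared queue, and only the first answer is used.

```python
import queue

def await_first_decision(decisions, timeout=30.0):
    """Return (device, allowed) for the first answer received, or None on timeout."""
    try:
        return decisions.get(timeout=timeout)
    except queue.Empty:
        return None  # no device answered; treat the connection as denied

responses = queue.Queue()
responses.put(("phone", True))    # first device to answer
responses.put(("tablet", False))  # arrives later and is ignored
print(await_first_decision(responses))  # -> ('phone', True)
```

Any later answers from the other devices are simply never read, which mirrors the behavior where only the first notification answered allows or denies the connection.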

Source :
https://community.teamviewer.com/English/kb/articles/108791-two-factor-authentication-for-connections

TeamViewer Zero Knowledge Account Recovery

By .Carol.fg.

Last Updated: 

TeamViewer offers the possibility to activate Account Recovery based on the zero-knowledge principle.

This is a major security enhancement for your TeamViewer account and a unique offering on the market. 

This article applies to all users.

What is Zero Knowledge Account Recovery?

In cases where you cannot remember your TeamViewer Account credentials, you click on I forgot my password, which triggers an email with a clickable link that leads you to the option of resetting your password.  

The regular reset process leads you to a page where you can set a new password for your account.

Zero Knowledge Account Recovery adds another layer of security to this process: the reset requires you to enter your account's unique 64-character Zero Knowledge Account Recovery Code to prove your identity. Note that this happens without any intervention by, or knowledge of, the TeamViewer infrastructure.

Activate Zero Knowledge Account Recovery

To activate Zero Knowledge Account Recovery please follow the steps below: 

1. Log in with your TeamViewer account at login.teamviewer.com

2. Click Edit profile under your profile name (upper right corner). 

3. Go to Security in the left menu 

4. Click the Activate Zero knowledge account recovery button

📌 Note: The password recovery code is a unique 64-character code that allows you to regain access if you forget your password. It is absolutely essential that you print or download your recovery code and keep it in a secure place.

⚠ IMPORTANT: Without the recovery code you won’t be able to recover your account. Access to your account will be irreversibly lost. The data is encrypted with the key and you are the only owner of this key. TeamViewer has no access to it.

5. A pop-up window appears with the above information. Click Generate Recovery Code to proceed.

6. The Recovery Code is shown. Download or print the code, then tick the checkbox confirming that you acknowledge and understand that if you lose your Zero Knowledge Account Recovery Code, you won’t be able to recover your password and will lose access to your account forever.

⚠ Do not tick the box unless you understand the implications.

7. Once you have downloaded or printed the recovery code and ticked the acknowledgment box, you can activate Zero Knowledge Account Recovery by clicking Activate.
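
As an illustration of the zero-knowledge principle behind this feature, the sketch below generates a 64-character code client-side and computes a hash "fingerprint" of it. In a zero-knowledge design, only such a fingerprint would ever need to leave your device, never the code itself. The alphabet and hashing scheme here are assumptions for illustration, not TeamViewer's actual format or implementation.

```python
import hashlib
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits  # assumed code alphabet

def generate_recovery_code(length=64):
    # Generated client-side from a cryptographically secure source.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def fingerprint(code):
    # A zero-knowledge server stores only a hash of the code, never the
    # code itself, so possession of the code is the only proof of identity.
    return hashlib.sha256(code.encode()).hexdigest()

code = generate_recovery_code()
print(len(code), fingerprint(code)[:8])
```

Because the code is generated locally and cannot be derived from its fingerprint, losing the code means losing access forever, which is exactly why the warning above is so emphatic.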

Deactivate Zero Knowledge Account Recovery 

To deactivate Zero Knowledge Account Recovery please follow the steps below: 

1. Log in with your TeamViewer account at login.teamviewer.com

2. Click Edit profile under your profile name (upper right corner). 

3. Go to Security in the left menu 

4. Click the Deactivate Zero knowledge account recovery button

5. A pop-up appears. Tick the checkbox confirming that you acknowledge and understand that you will be deactivating your Zero Knowledge Account Recovery.

6. Click Deactivate to deactivate the Zero Knowledge Account recovery for your TeamViewer Account.

Reset your password

To reset the password for your TeamViewer account, please follow the steps below (more info here: Reset account password):

1. Go to https://login.teamviewer.com/LogOn#lost-password 

2. Enter your email address in the form, confirm you're not a robot, and click Change password.

3. You'll get the following notification:

4. Check your email inbox for an email from TeamViewer and click the button within the email.

5. You'll be taken to a page where you are asked to fill in your Zero Knowledge Account Recovery Code and a new password:

6. Confirm the chosen password by entering it again, and finish the process by clicking OK.

Source :
https://community.teamviewer.com/English/kb/articles/108862-zero-knowledge-account-recovery

Ports used by TeamViewer

By Ying_Q

Last Updated: 

TeamViewer is designed to connect easily to remote computers without any special firewall configurations being necessary.

This article applies to all users in all licenses.

In the vast majority of cases, TeamViewer will work if surfing on the internet is possible. TeamViewer makes outbound connections to the internet, which are usually not blocked by firewalls.

However, in some situations, for example in a corporate environment with strict security policies, a firewall might be set up to block all unknown outbound connections, and in this case, you will need to configure the firewall to allow TeamViewer to connect out through it.

TeamViewer's Ports

These are the ports that TeamViewer needs to use.

TCP/UDP Port 5938

TeamViewer prefers to make outbound TCP and UDP connections over port 5938. This is the primary port it uses, and TeamViewer performs best using this port. Your firewall should allow this at a minimum.

TCP Port 443

If TeamViewer can’t connect over port 5938, it will next try to connect over TCP port 443.

However, our mobile apps running on iOS and Windows Mobile don’t use port 443.

📌Note: Port 443 is also used by our custom modules which are created in the Management Console. If you're deploying a custom module, e.g. through Group Policy, then you need to ensure that port 443 is open on the computers to which you're deploying. Port 443 is also used for a few other things, including TeamViewer (Classic) update checks.

TCP Port 80

If TeamViewer can’t connect over port 5938 or 443, it will then try TCP port 80. The connection speed over this port is slower and less reliable than ports 5938 or 443, due to the additional overhead it uses, and there is no automatic reconnection if the connection is temporarily lost. For this reason, port 80 is only used as a last resort.
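
The fallback order above (5938, then 443, then 80) can be checked from a client machine with a small script. This is a diagnostic sketch, not a TeamViewer tool: substitute a host you control for testing, and note that it checks outbound TCP only, not UDP 5938.

```python
import socket

# Documented preference order: 5938 first, then 443, then 80 as a last resort.
PORT_PREFERENCE = (5938, 443, 80)

def first_reachable_port(host, ports=PORT_PREFERENCE, timeout=3.0):
    """Return the first port on which an outbound TCP connection succeeds."""
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue  # blocked or unreachable; fall back to the next port
    return None
```

If this returns 80 rather than 5938, your firewall is forcing TeamViewer onto its slowest path, which is worth fixing.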

Our mobile apps running on Windows Mobile don’t use port 80. However, our iOS and Android apps can use port 80 if necessary.

Windows Mobile

Our mobile apps running on Windows Mobile can only connect out over port 5938. If the TeamViewer app on your mobile device won’t connect and tells you to “check your internet connection”, it’s probably because this port is being blocked by your mobile data provider or your WiFi router/firewall.

Destination IP addresses

The TeamViewer software makes connections to our master servers located around the world. These servers use a number of different IP address ranges, which are also frequently changing. As such, we are unable to provide a list of our server IPs. However, all of our IP addresses have PTR records that resolve to *.teamviewer.com. You can use this to restrict the destination IP addresses that you allow through your firewall or proxy server.

Having said that, from a security point of view this should not really be necessary: TeamViewer only ever initiates outgoing data connections through a firewall, so it is sufficient to simply block all incoming connections on your firewall and only allow outgoing connections over port 5938, regardless of the destination IP address.
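
If you do want to build a PTR-based allow-list as described above, the check can be sketched as follows. This is an illustrative helper for a firewall or proxy script, not an official TeamViewer tool; note the matching is deliberately strict so that names that merely contain "teamviewer.com" do not pass.

```python
import socket

def is_teamviewer_hostname(hostname):
    # Match teamviewer.com itself or any subdomain, but nothing that merely
    # contains the string (e.g. "faketeamviewer.com" must not match).
    return hostname == "teamviewer.com" or hostname.endswith(".teamviewer.com")

def resolves_to_teamviewer(ip):
    """Check an IP address's PTR record against *.teamviewer.com."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
    except OSError:
        return False  # no PTR record, or the reverse lookup failed
    return is_teamviewer_hostname(hostname)
```

Keep in mind that PTR records are attacker-controllable for IPs you don't own, so this is a convenience filter rather than a strong security boundary, which is consistent with the advice above that such filtering is not strictly necessary.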

Ports Used per Operating System

Summarizing the sections above:

  • Windows, macOS, Linux: 5938, 443, 80
  • Android: 5938, 443, 80
  • iOS: 5938, 80
  • Windows Mobile: 5938 only

Source :
https://community.teamviewer.com/English/kb/articles/4139-ports-used-by-teamviewer

Ubiquiti Unifi reset to Factory Defaults

Updated on 27 Jun 2023

A factory reset is useful for creating a fresh setup of a UniFi Console or a device that was previously configured in a managed state.

Restoring with the Reset Button

All UniFi devices have a Reset button. You can return a device to a factory-default state by holding this for 5-10 seconds (depending on the device), or until the LEDs indicate the restore has begun. Your device must remain powered during this process.

UniFi PoE Adapters also have a Reset button that can be used if the actual device is mounted and out of reach. 

Example: The diagram below illustrates how to locate this button on the UDM Pro.

udm-pro-topology.png

Restoring From Your UniFi Application

UniFi Devices

All UniFi devices can be restored to their factory defaults via their respective web or mobile applications. This is located in the Manage section of a device’s settings. Depending on the application, this may be referred to as Forget (UniFi Network) or Unmanage (UniFi Protect).

Selecting this option will unmanage the device from your UniFi Console and restore the device to a factory default state.

UniFi Consoles

A UniFi Console admin with Owner privileges has the ability to restore their console using the “Factory Reset” button located in the UniFi OS System settings. 

Frequently Asked Questions

Why does my device still appear in my application after I restored it using the physical Reset button?

Why does my device say “Managed by Other”?

This will occur if the device was managed by another instance of a UniFi application. This includes cases where the UniFi Console (e.g., Dream Machine Pro, or Cloud Key) was factory restored, because the UniFi device still considers itself as being managed by the ‘old’ application console, prior to restoration.

There are several options to resolve this:

  • Restore the UniFi Console from a backup in which the device was already managed.
  • Factory restore the UniFi device and then re-adopt it.
  • Reassign the device using the UniFi Network mobile app.
    Note: This can only be done by the account owner and requires them to have previously signed into the mobile app while the device was managed.

Note: If you are self-hosting the Network application, you should only ever download the UniFi software on a single machine which will act as the UniFi Console. Some users mistakenly download this multiple times because they believe it is a requirement to manage their Network Application from other devices, but this is actually creating a completely new instance. To manage your network from another device, you can type in the IP address of the UniFi Console while connected to the same local network. Alternatively, you can enable Remote Access to manage your network anywhere. See Connecting to UniFi to learn more.

Why is my UniFi Device not factory restoring?

Ensure that your device remains powered on during the restoration process, otherwise it will not occur. 

It is also possible that you held the button too briefly (resulting in a reboot) or too long (resulting in entering TFTP Recovery Mode). Refer to our UniFi Device LED Status guide for more information.


Source:
https://help.ubnt.com/hc/en-us/articles/205143490-UniFi-How-to-Reset-the-UniFi-Access-Point-to-Factory-Defaults

Turning a Fast Network into a Smart Network with Autopilot

At Fastly we often highlight our powerful POPs and modern architecture when asked how we're different from, and better than, the competition. Today we're excited to give you another peek under the hood at the kind of innovation we can achieve on a modern network that is fully software-defined.

This past February, Fastly delivered a new record of 81.9 Tbps of traffic during the Super Bowl, and absolutely no one had to do anything with egress policies to manage that traffic over the course of the event thanks to Autopilot. Autopilot is our new zero-touch egress traffic engineering automation system, and because it was running, no manual interventions were required even for this record-breaking day of service. This means that for the first time ever at Fastly we set a new traffic record for the Fastly network while reducing the number of people who were needed to manage it. (And we notably reduced that number all the way to zero.) It took a lot of people across different Fastly teams, working incredibly hard, to improve the self-managing capabilities of our network, and the result is a network with complete automation that can react quickly and more frequently to failures, congestion, and performance degradation with zero manual intervention. 

Autopilot brings many benefits to Fastly, but it is even better for our customers who can now be even more confident in our ability to manage events like network provider failures or DDoS attacks and unexpected traffic spikes — all while maintaining a seamless and unimpacted experience for their end users. Let’s look at how we got here, and just how well Autopilot works. (Oh, but if you’re not a customer yet, get in touch or get started with our free tier. This is the network you want to be on.)

Getting to this result required a lot of effort over several years. Exactly three years ago, we shared how we managed the traffic during the 2020 Super Bowl. At that time, an earlier generation of our traffic engineering automation would route traffic around common capacity bottlenecks while requiring operators to deal with only the most complex cases. That approach served us well for the traffic and network footprint we had three years ago, but it still limited our ability to scale because, while we had reduced human involvement, people were still required to deal reactively with capacity. This meant hiring and onboarding would become a bottleneck of its own, as we would need to scale the number of network operators at least at the same rate as the expansion of our network. On top of that, while we can prepare and be effective during a planned event like a Super Bowl, human neurophysiology is not always at its peak performance when woken up in the middle of the night to deal with unexpected internet weather events.

Achieving complete automation with Autopilot and Precision Path

The only way forward was to remove humans from the picture entirely. This single improvement allows us to scale easily while also greatly improving our handling of capacity and performance issues. Manual interventions have a cost. They require a human to reason about the problem at hand and make a decision. This cannot be done an infinite number of times, so we must conserve effort and act only when the problem is large enough to impact customer performance. It also means that when a human-driven action is taken, it normally moves a larger amount of traffic, to avoid having to deal with the same issue again soon and to minimize the number of human interventions needed.

With complete automation the cost of making an action is virtually 0, allowing very frequent micro-optimizations whenever small issues occur, or are about to occur. The additional precision and reactivity provided by full automation makes it possible to safely run links at higher utilization and rapidly move traffic around as necessary.

Figure: Egress interface traffic demand over capacity. Multiple interfaces had a demand that exceeded three times the physical capacity available during the Super Bowl, triggering automated traffic engineering overrides, which enabled continued efficient delivery without negative consequences to the network.

The graph above shows an example where Autopilot detected traffic demand exceeding physical link capacity. During the Super Bowl this demand exceeded 3 times the available capacity in some cases. Without Autopilot the peaks in traffic demand would have overwhelmed those links, requiring a lot of human intervention to prevent failure, but then to manage all of the downstream impacts of those interventions in order to get the network operating at top efficiency again. With Autopilot the network deflected traffic onto secondary paths automatically and we were able to deliver the excess demand without any performance degradation.

This post sheds light on the systems we built to scale handling large traffic events without any operator intervention.

Technical problem

Figure – Fastly POP is interconnected to the Internet via multiple peers and transit providers

The Fastly network of Points of Presence (POPs) is distributed across the world. Each POP is “multihomed”, i.e., it is interconnected to the Internet via a number of different networks, which are either peers or transit providers, for capacity and reliability purposes. With multiple routing options available, the challenge is how to select the best available path. We need to ensure that we pick the best performing route (in any given moment), and quickly move traffic away from paths experiencing failures or congestion.

Network providers use a protocol called Border Gateway Protocol (BGP) to exchange information about the reachability of Internet destinations. Fastly consumes BGP updates from its neighbors, and learns which neighbor can be used to deliver traffic to a given destination. However, BGP has several limitations. First, it is not capacity or performance aware: it can only be used to communicate whether an Internet destination can be reached or not, but not whether there is enough capacity to deliver the desired amount of traffic or what the throughput or latency would be for that delivery. Second, BGP is slow at reacting to remote failures: if a failure on a remote path occurs, it typically takes minutes for updates to be propagated, during which time blackholes and loops may occur.

Solving these problems without creating new ones is challenging, especially when operating at the scale of tens of Terabits per second (Tbps) of traffic. In fact, while it is desirable to rapidly route around failures, we need to be careful in those processes as well because rerouting large amounts of traffic erroneously can move traffic away from a well performing path onto a worse performing one and create congestion downstream as a result of our action, resulting in poor user experience. In other words, if decisions are not made carefully, some actions that are taken to reduce congestion will actually increase it instead – sometimes significantly.

Fastly’s solution to the problem is to use two different control systems that operate at different timescales, ensuring we rapidly route around failures while keeping traffic on the best-performing paths.

The first system, which operates at a timescale of tens of milliseconds (a few round trips), monitors the performance of each TCP connection between Fastly and end users. If the connection fails to make forward progress for a few round trip times, it reroutes that individual connection onto alternate paths until it resumes progress. This is the system underlying our Precision Path product for protecting connections between Fastly and end users, and it makes sure we rapidly react to network failures by surgically rerouting individual flows that are experiencing issues on these smaller timescales.

The second system, internally named Autopilot, operates over a longer timescale. Every minute it estimates the residual capacity of our links and the performance of network paths collected via network measurements. It uses that information to ensure traffic is allocated to links in order to optimize performance and prevent links from becoming congested. This system has a slower reaction time, but makes a more informed decision based on several minutes of high resolution network telemetry data. Autopilot ensures that large amounts of traffic can be moved confidently without downstream negative effects.

These two systems work together to rapidly reroute struggling flows onto working paths and to periodically adjust our overall routing configuration with enough data to make safe decisions. They operate 24/7 but had a particularly prominent role during the Super Bowl, where they rerouted 300 Gbps and 9 Tbps of traffic respectively, which would have otherwise been delivered over faulty, congested, or underperforming paths.

This approach to egress traffic engineering using systems operating at different timescales to balance reactivity, accuracy, and safety of routing decisions is the first of its type in the industry to the best of our knowledge. In the remainder of this blog post, we are going to cover how both systems work but we’ll need to first make a small digression to explain how we route traffic out of our POPs, which is unusual and another approach where we’re also industry leaders.

Figure – Amount of traffic (absolute and percentage of total traffic) delivered by Precision Path and Autopilot respectively during the Super Bowl

Fastly network architecture

Figure – Fastly POP architecture

A typical Fastly POP comprises a layer of servers that are interconnected with all peers and transit providers via a tier of network switches. The typical approach to building an edge cloud POP consists of using network routers, which have a large enough memory to store the entire Internet routing table. In contrast, Fastly started designing a routing architecture that pushed all routes to end hosts in order to build a more cost-effective network, but we quickly realized and embraced the powerful capabilities that this architecture made possible. Endpoints that have visibility into the performance of flows now also have the means to influence their routing. This is one of the key reasons Fastly’s networking capabilities, programmability, flexibility, and ease of use continue to exceed the competition.

Here’s how our routing architecture works: Both switches and servers run routing daemons, which are instances of the BIRD Internet Routing Daemon with some proprietary patches applied to it. The daemons running on switches learn all routes advertised by our transits and peers. However, instead of injecting those routes in the routing table of the switches, they propagate them down to the servers which will then inject them into their routing tables. To make it possible for servers to then route traffic to the desired transit or peer, we use the Multiprotocol Label Switching (MPLS) protocol. We populate each switch with an entry in their MPLS lookup table (Label Forwarding Information Base [LFIB]) per each egress port and we tag all BGP route announcements propagated down to the servers with a community encoding the MPLS label that is used to route that traffic. The servers use this information to populate their routing table and use the appropriate label to route traffic out of the POP. We discuss this more at length in a scientific paper we published at USENIX NSDI ‘21.
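
The route-propagation scheme above can be modeled in a few lines. This is a toy sketch: the idea that a BGP community encodes the egress MPLS label comes from the text, but the "asn:label" community layout used here is invented for illustration, and Fastly's actual encoding is internal.

```python
def label_from_community(community):
    # Assumed layout "asn:label", e.g. "64512:16001" carries MPLS label 16001.
    _asn, label = community.split(":")
    return int(label)

class HostRoutingTable:
    """Per-server routing state: destination prefix -> egress MPLS label."""

    def __init__(self):
        self._routes = {}

    def learn(self, prefix, community):
        # The switch tags each announcement with the label of its egress port.
        self._routes[prefix] = label_from_community(community)

    def egress_label(self, prefix):
        return self._routes.get(prefix)

table = HostRoutingTable()
table.learn("203.0.113.0/24", "64512:16001")
print(table.egress_label("203.0.113.0/24"))  # -> 16001
```

The key property, as the text explains, is that the server (not the switch) holds the full table and picks the egress path by stamping the corresponding MPLS label on outgoing packets.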

Quickly routing around failures with Precision Path

Our approach of pushing all routes to the servers, giving endpoints the ability to reroute based on transport and application-layer metrics, made it possible to build Precision Path. Precision Path works on a timeframe of tens of milliseconds to reroute individual flows in cases of path failures and severe congestion. It’s great at quickly routing away from failures happening right in the moment, but it’s not aware of, or able to make decisions about, proactively selecting the best path. Precision Path is good at steering away from trouble, but not at zooming out and getting a better overall picture to select an optimized new route. The technology behind our Precision Path product is discussed in this blog post and, more extensively, in this peer-reviewed scientific paper, but here’s a brief explanation.

Figure – Precision path rerouting decision logic for connections being established (left) and connections already established (right).

This system is a Linux kernel patch that monitors the health status of individual TCP connections. When a connection fails to make forward progress for a few round trip times (RTTs), indicating a potential path failure, it is rerouted onto a randomly chosen alternate path until it resumes forward progress. Being able to make per-flow rerouting decisions is made possible by our host-based routing architecture, where servers select routes for outgoing traffic by applying MPLS labels. End hosts can move traffic rapidly on a per-flow granularity because they have both visibility into the progress of connections and the means to change network route selection. This system is remarkably effective at rapidly addressing short-lived failures and performance degradation that operators, or any other telemetry-driven traffic engineering, would be too slow to address. The downside is that it only reacts to severe performance degradations that are already visible in the data plane, and it moves traffic onto randomly selected alternate paths, which avoid the failure but might not be the most optimal ones.
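
The per-connection decision described above can be sketched as follows. The real system is a Linux kernel patch operating on live TCP state; the threshold, data layout, and function names here are invented for illustration.

```python
import random

STALL_RTTS = 3  # assumed: reroute after this many RTTs without forward progress

def maybe_reroute(conn, alternate_labels):
    """Move a stalled connection onto a randomly chosen alternate path."""
    if conn["rtts_without_progress"] >= STALL_RTTS and alternate_labels:
        # Pick a random alternate egress label; it avoids the failing path
        # but is not guaranteed to be the best-performing one.
        conn["label"] = random.choice(alternate_labels)
        conn["rtts_without_progress"] = 0
        return True
    return False

conn = {"label": 16001, "rtts_without_progress": 3}
print(maybe_reroute(conn, [16002, 16003]), conn["label"])
```

The random choice is exactly the limitation the text notes: the new path is merely non-failing, and it is Autopilot's job (below in the original post) to steer traffic onto paths known to perform well.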

Making more informed long-term routing decisions with Autopilot

Autopilot complements Precision Path: it does not respond as quickly, but it makes more informed decisions based on knowledge of which paths perform better or are currently less congested. Rather than just moving traffic away from a failed path (like Precision Path), it moves larger amounts of traffic *toward* better parts of the network. Autopilot has not been presented before today, and we are excited to detail it extensively in this post.

Autopilot is a controller that receives network telemetry signals from our network such as packet samples, link capacity, RTT, packet loss measurements, and availability of routes for each given destination. Every minute, the Autopilot controller collects network telemetry, uses it to project per-egress interface traffic demand without override paths, and makes decisions to reroute traffic onto alternate paths if one or more links are about to reach full capacity or if the currently used path for a given destination is underperforming its alternatives.

Figure – Autopilot architecture diagram

Autopilot’s architecture is comprised of three components (shown above):

  1. A route manager, which peers with each switch within a POP and receives all route updates the switch received from its neighbors over a BGP peering session. The route manager provides an API that allows consumers to know what routes are available for a given destination prefix. The route manager also offers the ability to inject route overrides via its API. This is executed by announcing a BGP route update to the switch with a higher local preference value than routes learned from other peers and transit providers. This new route announcement will win the BGP tie-breaking mechanism and be inserted into servers’ routing tables and used to route traffic.
  2. A telemetry collector, which receives sFlow packet samples from all the switches of a POP, allowing an estimation of the traffic volume broken down by destination interface and destination prefix, as well as latency and packet loss measurements, collected from servers, for all traffic between Fastly POPs over all available providers.
  3. A controller, which consumes (every minute) the latest telemetry data (traffic volumes and performance) as well as all routes available for the prefixes currently served by the POP, and then computes whether to inject a BGP route override to steer traffic over alternate paths.
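
A highly simplified sketch of the controller's once-per-minute decision step might look like the following. The names, the 80% headroom threshold, and the greedy move-the-biggest-flows strategy are all invented for illustration; they are not Fastly's actual algorithm.

```python
HEADROOM = 0.8  # assumed: start shifting traffic past 80% of link capacity

def plan_overrides(demand_bps, capacity_bps, alternates):
    """demand_bps: {(interface, prefix): bps}, capacity_bps: {interface: bps},
    alternates: {prefix: [fallback interfaces]} -> {prefix: new interface}."""
    load = {}
    for (iface, _prefix), bps in demand_bps.items():
        load[iface] = load.get(iface, 0) + bps

    overrides = {}
    # Greedily move the largest flows off overloaded links first.
    for (iface, prefix), bps in sorted(demand_bps.items(), key=lambda kv: -kv[1]):
        if load[iface] <= HEADROOM * capacity_bps[iface]:
            continue  # link already within headroom; nothing to do
        for alt in alternates.get(prefix, []):
            if load.get(alt, 0) + bps <= HEADROOM * capacity_bps.get(alt, 0):
                overrides[prefix] = alt  # would be injected as a BGP override
                load[iface] -= bps
                load[alt] = load.get(alt, 0) + bps
                break
    return overrides

demand = {("transit1", "198.51.100.0/24"): 60e9, ("transit1", "203.0.113.0/24"): 50e9}
capacity = {"transit1": 100e9, "transit2": 100e9}
alts = {"198.51.100.0/24": ["transit2"], "203.0.113.0/24": ["transit2"]}
print(plan_overrides(demand, capacity, alts))  # -> {'198.51.100.0/24': 'transit2'}
```

In the real system the resulting overrides are announced to the switches via BGP with a higher local preference, as step 1 above describes, so servers pick them up through the normal route-propagation path.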

Making Precision Path and Autopilot work together

One challenge of having multiple control systems operating on the same inputs and outputs is having them work collaboratively to select the overall best options rather than compete with each other. Trying to select the best option from the limited vantage point of each separate optimization process could actually lead to additional disruption and do more harm than good. To the best of our knowledge, we are the first in the industry using this multi-timescale approach to traffic engineering.

The key challenge here is that once a flow is being rerouted by Precision Path, it no longer responds to BGP routing changes, including those triggered by Autopilot. As a result, Autopilot needs to account for the amount of traffic currently controlled by Precision Path in its decisions. We addressed this problem in two ways: first, by tuning Precision Path to minimize the amount of traffic it reroutes, and second, by making that traffic observable to Autopilot so that it can be factored into Autopilot's decisions.

When we first deployed Precision Path, we fine-tuned its configuration to minimize false positives. False positives would result in traffic being rerouted away from an optimal path that is temporarily experiencing a small hiccup, and onto longer paths with worse performance, which could in turn lead to a worse degradation by impacting the performance of affected TCP connections. We reported extensively on our tuning experiments in this paper. However, this is not enough, because even if we make the right decision at the time of rerouting a connection, the originally preferred path may recover a few minutes after the reroute, and this is typically what happens when BGP eventually catches up with the failure and withdraws routes through the failed path. To make sure we reroute connections back onto the preferred path when recovered, Precision Path probes the original path every five minutes after the first reroute, and if the preferred path is functional, it moves the connection back onto it. This mechanism is particularly helpful for long-lived connections, such as video streaming, which would otherwise be stuck on a backup path for their entire lifetime. This also minimizes the amount of traffic that Autopilot cannot control, giving it more room to maneuver.

The problem of making the amount of traffic routed by Precision Path visible to Autopilot is trickier. As we discuss earlier in this post, Autopilot learns the volume of traffic sent over each interface from sFlow packet samples emitted by switches. These samples report, among other things, over which interface the packets were sent and which MPLS label they carried, but do not report any information about how that MPLS label was applied. Our solution was to create a new set of alternate MPLS labels for our egress ports and allocate them for exclusive usage by Precision Path. This way, by looking up an MPLS label in our IP address management database, we can quickly find out if that packet was routed according to BGP path selection or according to Precision Path rerouting. We expose this information to the Autopilot controller, which treats Precision Path traffic as “uncontrollable”, i.e., traffic that will not move away from its current path even if the preferred route for its destination prefix is updated.
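The label-based classification above amounts to a set lookup per sample. A minimal sketch, with made-up label values and field names standing in for the IP address management database:

```python
# Hypothetical label allocations: one set reserved exclusively for
# Precision Path reroutes, one installed by normal BGP path selection.
PRECISION_PATH_LABELS = {30100, 30101}
BGP_LABELS = {20100, 20101}

def classify(sample):
    """Classify a sampled packet by the MPLS label it carried."""
    label = sample["mpls_label"]
    if label in PRECISION_PATH_LABELS:
        return "precision_path"   # uncontrollable by Autopilot
    if label in BGP_LABELS:
        return "bgp"
    return "unknown"

# Sum up traffic Autopilot must treat as pinned to its current path:
samples = [{"mpls_label": 30100, "bytes": 1500}, {"mpls_label": 20100, "bytes": 900}]
uncontrollable = sum(s["bytes"] for s in samples if classify(s) == "precision_path")
assert uncontrollable == 1500
```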

Making automation safe

Customers trust us to act as a middleman between their services and their users, and we take that responsibility very seriously. While automating network operations allows for a more seamless experience for our customers, we also want to provide assurances of its reliability. We design all our automation with safety and operability at its core. Our systems fail gracefully when issues occur and are built so that network operators can always step in and override their behavior using routing policy adjustments. The last aspect is particularly important because it allows operators to apply tools and techniques learned in environments without automation. Minimizing cognitive overhead by automating more and more of the problem is particularly important to reduce the amount of time needed to solve problems when operating under duress. These are some of the approaches we used to make our automation safe and operable:

Standard operator tooling: both Precision Path and Autopilot can be controlled using standard network operator tools and techniques.

Precision Path can be disabled on individual routes by injecting a specific BGP community on an individual route announcement, which is a very common task that network engineers typically perform for a variety of reasons. Precision Path can also be disabled on an individual TCP session by setting a specific forwarding mark on the socket, which makes it possible to run active measurements without Precision Path kicking in and polluting results.

Autopilot route reselection is based on BGP best path selection, i.e., it will try to reroute traffic onto the second best path according to BGP best path selection. As a result, operators can influence which path Autopilot will fail over to by applying BGP policy changes such as altering MED or local pref values, and this is also a very common technique.

Finally, data about whether connections were routed on paths selected by Precision Path or Autopilot is collected by our network telemetry systems, which allows us to reconstruct what happened after the fact.

Data quality auditing: We audit the quality of data fed into our automation and have configured our systems to avoid executing any change if input data is inconsistent. In the case of Autopilot, for example, we compare egress flow estimation collected via packet samples against an estimation collected via interface counters, and if they diverge beyond a given threshold it means at least one of the estimations must be wrong, and we do not apply any change. The graph below shows the difference between those two estimations during the Super Bowl on one North American POP.


Figure – Difference between link utilization estimates obtained via interface counters and packet samples. The +/- 5% thresholds represent the acceptable margins of error.
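The consistency check described above reduces to a simple guard before any change is executed. A sketch with assumed fractional-utilization inputs (the 5% threshold matches the margins in the figure; function names are illustrative):

```python
def estimates_consistent(util_from_samples, util_from_counters, threshold=0.05):
    """True if the two link-utilization estimates agree within the threshold."""
    return abs(util_from_samples - util_from_counters) <= threshold

def maybe_apply(change, util_from_samples, util_from_counters, apply_change):
    """Execute a change only when both estimation methods agree;
    if they diverge, at least one must be wrong, so do nothing."""
    if estimates_consistent(util_from_samples, util_from_counters):
        apply_change(change)
        return True
    return False
```

The guard is deliberately conservative: a divergence does not say which estimate is wrong, only that the controller should not act on either.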

What-if analysis and control groups: in addition to monitoring input data, we also audit the decisions made by systems and step in to correct them if they misbehave. Precision Path uses treatment and control groups. We randomly select a small percentage of connections to be part of a control group for which Precision Path is disabled and then monitor their performance compared to the others where Precision Path is enabled. If control connections perform better than treatment connections, our engineering team is alerted and steps in to investigate and remediate. Similarly, in Autopilot, before deploying a configuration change to our algorithm, we run it in “shadow” mode where the new algorithm makes decisions, but they are not applied to the network. The new algorithm will only be deployed if it performs at least as well as the one that is currently running.

Fail-static: when a failure occurs in any component of our systems, rather than failing closed or open, they fail static, i.e., leave the network in the last known working configuration and alert our engineering team to investigate the problem.
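The fail-static pattern can be sketched as a control loop that never rolls back or applies defaults on error. This is an illustrative skeleton, not Fastly's actual controller code:

```python
class FailStaticController:
    """On any error: keep the last known working configuration and alert,
    rather than failing open (removing control) or closed (blocking traffic)."""

    def __init__(self, alert):
        self.last_applied = None   # last configuration successfully applied
        self.alert = alert

    def tick(self, compute_decision, apply_change):
        try:
            decision = compute_decision()
            apply_change(decision)
            self.last_applied = decision
        except Exception as exc:
            # Fail static: no rollback, no default config; just alert a human.
            self.alert(f"controller error, keeping last config: {exc}")
        return self.last_applied
```

The key property is that an exception anywhere in the decide/apply path leaves `last_applied`, and therefore the network, untouched.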

Conclusions

This blog post is a view into how Fastly automates egress traffic engineering to make sure our customers’ traffic reaches their end users reliably. We continue to innovate and push the boundaries of what is possible while maintaining a focus on performance that is unrivaled. If you are thinking that you want your traffic to be handled by people who are not only experts, but also care this much, now is a great time to get in touch. Or if you’re thinking you want to be a part of innovation like this, check out our open listings here: https://www.fastly.com/about/careers/current-openings.

Open Source Software

The automation built into our network was made possible by open source technology. Open source is a part of Fastly’s heritage — we’re built on it, contribute to it, and open source our own projects whenever we can. What’s more, we’ve committed $50 million in free services to Fast Forward, to give back to the projects that make the internet, and our products, work. To make our large network automation possible, we used: 

  • Kafka – distributed event streaming platform
  • pmacct – sFlow collector
  • goBGP – BGP routing daemon library, used to build the Autopilot route collector/injector
  • BIRD – BGP routing daemon running on our switches and servers.

We did our best to contribute back to the community by submitting to their maintainers improvements and bug fixes that we implemented as part of our work. We are sending our deepest gratitude to the people that created these projects. If you’re an open source maintainer or contributor and would like to explore joining Fast Forward, reach out here. 

Lorenzo Saino

Director of Engineering

Lorenzo Saino is a director of engineering at Fastly, where he leads the teams responsible for building the systems that control and optimize Fastly’s network infrastructure. During his tenure at Fastly, he built systems solving problems related to load balancing, distributed health checking, routing resilience, traffic engineering and network telemetry. Before joining Fastly he received a PhD from University College London. His thesis investigated design issues in networked caching systems.

lorenzosaino

Jeremiah Millay

Principal Network Engineer

Jeremiah Millay is a Principal Engineer on the Network Systems team at Fastly where he spends most of his time focused on network automation and writing software with the goal of improving network operations at Fastly. Prior to Fastly he spent a number of years as a Network Engineer for various regional internet service providers.

Paolo Alvarado

Senior Manager of Technical Operations

Paolo Alvarado is a Senior Manager of Technical Operations at Fastly. Paolo has over 10 years of experience working with content delivery networks in customer-facing and behind-the-scenes roles. Paolo joined Fastly to help build out the Fastly Tokyo office before moving into network operations. Currently, he manages a team of Network and System Operation engineers to meet the challenges of building and running a large scale network.

Hossein Lotfi

VP of Engineering leading Network Systems Organization

Hossein Lotfi is VP of Engineering leading the Network Systems Organization at Fastly. Hossein has over 20 years of experience building networks and large-scale systems, ranging from startups to hyper-scale cloud infrastructure. He has scaled multiple engineering organizations geared towards rapid development of novel innovations, informed and inspired by deep involvement with the operational challenges of global-scale systems. At Fastly, Hossein is responsible for building reliable, cost-effective, and low-latency systems to connect Fastly with end users and customer infrastructures. The Network Systems Organization teams include Kernel, DataPath (XDP), L7 Load Balancing, TLS Termination, DDoS Defence, Network Architecture, Network Modeling and Provisioning Systems, Traffic Engineering, Network Telemetry, DNS, Hardware Engineering, Pre-Production Testing, and Fastly’s Edge Delivery platform.

Source :
https://www.fastly.com/blog/turning-a-fast-network-into-a-smart-network-with-autopilot

Change the owner of computer objects in Active Directory

Wolfgang Sommergut Thu, Jun 15 2023

When a user joins a computer to an AD domain, they automatically become the owner of the corresponding AD object. This is why standard users should not have the domain join permission. If they still own computer objects, it is recommended for security reasons to replace them with a service account.

As a best practice, Microsoft recommends revoking the domain join permission from regular users. Instead, it is advised to delegate this task to service accounts whose permissions are tailored to this purpose. By doing so, a known attack vector is eliminated.


If the domain join is delegated to specific accounts after end users have already added numerous computers to the domain, it is recommended that the owner of these computer objects be changed.

This also applies if a domain admin has been used for this purpose until now.

Active Directory Users and Computers

To view the permissions and the owner of a computer object in AD Users and Computers (ADUC), open the properties of the computer object, switch to the Security tab, and click Advanced.

Edit the owner of a computer object in Active Directory with AD Users and Computers

If necessary, you can enter a new owner by clicking the Change link in that section.

In ADUC, you can only edit the permissions of individual objects. If you select multiple objects, the Properties dialog will not display the Security tab.

Display owner with PowerShell

For bulk operations, it is therefore recommended to use PowerShell. If you first want to get an overview of multiple objects’ ownership, there are several options available.

One approach is to generate a list of computer names and owners by expanding the nTSecurityDescriptor attribute using Select-Object:

Get-ADComputer -Filter * -Properties nTSecurityDescriptor -PipelineVariable p |
Select-Object -ExpandProperty nTSecurityDescriptor |
Select-Object @{n="Computer";e={ $p.Name }}, @{n="Owner";e={ $_.Owner }}

Display all domain computers and their owners with PowerShell

Alternatively, you can use Get-Acl to retrieve the owner for each computer individually. When outputting the results using Format-List, you can use TrimStart() to remove the leading "CN=" from PSChildName:

Get-ADComputer -Filter * |
ForEach-Object { Get-Acl -Path "AD:$($_.DistinguishedName)" } |
Format-List @{n="Name";e={$_.PSChildName.TrimStart("CN=")}}, @{n="Owner";e={$_.Owner}}

This variant has the advantage of generating the necessary ACL objects, which are required if you want to change the owner. The following script accomplishes this task:

$user = New-Object System.Security.Principal.NTAccount("contoso\djoin")

Get-ADComputer -Filter 'Name -like "win11*"' |
ForEach-Object {
    $acl = Get-Acl -Path "AD:$($_.DistinguishedName)"
    $acl.SetOwner($user)
    Set-Acl -Path "AD:$($_.DistinguishedName)" -AclObject $acl
}

In this example, all computers whose names begin with “Win11” are assigned contoso\djoin as the new owner.

Assign a new owner to computer objects with Set Acl

It is worth mentioning that the SetOwner method expects an IdentityReference object, such as System.Security.Principal.NTAccount. However, Get-ADUser returns objects of the type Microsoft.ActiveDirectory.Management.ADUser. If you want to retrieve the principal using this cmdlet, then you need to call it as follows:

$user = New-Object System.Security.Principal.SecurityIdentifier ((Get-ADUser -Identity "myuser").SID)

Summary

For security reasons, it is not recommended to let users join PCs to an AD domain. However, if you have allowed this in the past, it is advisable to assign new owners to the computer objects.

Source :
https://4sysops.com/archives/change-the-owner-of-computer-objects-in-active-directory/

Complete Solidworks Clean Uninstall Guide

ARTICLE BY GOENGINEER ON DEC 23, 2022

If you need to uninstall SOLIDWORKS, this guide walks through the entire process including preparing your machine and uninstalling Windows items, SOLIDWORKS items, SOLIDWORKS prerequisites, and more. Let’s get started.

*Disclaimer: This document assumes that SOLIDWORKS was originally installed using the default locations. If your installation location for SOLIDWORKS differs, please adjust the procedure below accordingly. The procedure in the following document requires access and edits to the Windows registry. Any such edits are done at your own risk and should only be attempted under the advisement of your IT professional. If you are uncomfortable with following the procedure below, please seek assistance. It is advised to back up your registry and any valued data before making any changes to the system registry. GoEngineer assumes no responsibility for any consequences, unintended or otherwise, resulting from changes made to the system registry.

Preparing your machine


Confirm and create backup
Copy Settings Wizard (CSW) or Manual Registry Export

  • If the machine has multiple users, run the Copy Settings Wizard for each user while they are logged in, and set up a file naming convention like “SWSettings_UserName_Date.sldreg”.
  • If the machine has just one user, you can create a single file to hold all settings, for example “SWSettings_Date.sldreg”.
  • For a Manual Registry Export, go to the search bar, type in regedit.exe, then press Enter. Depending on how your IT department has your permissions set up, you may have to get one of them involved for this step.
  • Once in the registry editor, navigate to the following keys, right-click on them and select Export. Then name them according to the file name suggested under each one.
    • HKEY_CURRENT_USER\Software\SolidWorks
      •  HKCU_SWX_UserName.reg
    • HKEY_LOCAL_MACHINE\SOFTWARE\SolidWorks
      • HKLM_SWX_UserName.reg
    • HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SolidWorks
      • HKLM_6432_SWX_UserName.reg
  • Backup all customizations like document templates, sheet formats, toolbox databases, weldment profiles, design library, etc.
    • You can go to System Options | File Locations within SOLIDWORKS to verify the locations of any customizations. Note: Any files in custom locations will not be removed during uninstallation, but files in any of the default folders will be, so make sure to double-check this step. 
  • Create a restore point and/or backup of your system.

Items needed

  • Obtain the software, either via download from the customer portal or from DVD. It is recommended that you obtain the latest service pack for the version you are planning to install.
  • Local Admin rights/permissions on the machine. Domain admin will not suffice.
  • For SA licenses you will need the serial number(s). For SNL you will need the serial number and license server name in this format “25734@ServerName”.

Uninstalling

Windows Items

  • Disable UAC (User Account Control) by moving the slider to the lowest setting.
  • Disable any Anti-Viral or Malware software if permitted. You may have to get IT involved in this step.

SOLIDWORKS Items

  • Transfer License (For SA only, and only if you are moving machines or upgrading to a different major version)

While in SOLIDWORKS, go to the Help menu and select “Deactivate License”. This will bring up the SOLIDWORKS Product Activation dialog.

SOLIDWORKS Product Activation screen

Then click the “Select All” button and click Next. Finally, you will see a completion dialog showing the activations were transferred successfully.

  • Uninstall SOLIDWORKS via these steps below.
    1. Go to Start -> Control Panel -> Programs and Features. Select SOLIDWORKS 20XX SPX and then click Uninstall at the top of the list.
    2. This will open the SOLIDWORKS Installation Manager to the Summary page. Then select “Change” to the right of the Advanced Options section.

      How to uninstall solidworks
    3. In the dialog that comes up, select the boxes for the items you want removed from the system. If you have multiple versions installed and you are not uninstalling the latest version, some boxes will not be available.

      Advanced options uninstalling solidworks
    4. Click “Back to Summary”, then click “Remove Items” to start the uninstallation process.
      Note: This process will not remove any SOLIDWORKS Parts, assemblies, or drawings.

SOLIDWORKS Prerequisites

  • Manually remove prerequisites. ** Not all may be installed on your machine depending on the SOLIDWORKS version. **
    Bonjour
    Microsoft .Net Framework 3.5 (Use Windows Components to Uninstall)
    Microsoft .Net Framework 4.0
    Microsoft .Net Framework 4.5
    Microsoft Office Web Components 11
    Remove all instances of C++ Runtimes
    Note: Do not remove all if you have any programming or other CAD software installed. Contact us for the exact current list of runtimes to remove.
    Microsoft Visual Studio 2005 Tools for Applications
    Microsoft Visual Studio 2005 Remote Debugger Light
    VBA 7.1
    • For SOLIDWORKS 2013 and higher, please follow these steps to uninstall VBA.
    • Open Command Prompt (Start | All Programs | Accessories then right-click on Command Prompt and select “Run as Administrator”).
    • Then type: msiexec.exe /X{90120064-0070-0000-0000-4000000FF1CE} and press Enter.

Manual System Cleaning Steps

  • Manually clean remaining folders and registry.
    • Registry: (Windows Start Menu > type “regedit” > press Enter) ** Please use caution when editing the Windows Registry. Deletions cannot be undone. **
      • HKEY_CURRENT_USER\SOFTWARE
        • \eDrawings
        • \SolidWorks ** If you have PDM installed, DO NOT DELETE this key. Delete only the IM & SOLIDWORKS [version] subkeys. **
        • \SOLIDWORKS 20XX
        • \SolidWorks BackOffice
        • \SRAC
        • \Microsoft\VSTAHOST
        • \Microsoft\VSTAHostConfig
      • HKEY_LOCAL_MACHINE\SOFTWARE ** Wow6432Node keys on 64-bit machines only **
        • \eDrawings
        • \SolidWorks ** If you have PDM installed, DO NOT DELETE this key. Delete only the IM & SOLIDWORKS [version] subkeys. **
        • \SolidWorks Corporation
        • \SRAC
        • \Wow6432Node\SolidWorks ** If you have PDM Pro & Stnd installed, DO NOT DELETE this key. Delete only the IM & SOLIDWORKS [version] subkeys. **
        • \Wow6432Node\SolidWorks Corporation
        • \Wow6432Node\Microsoft\VSTAHOST
        • \Wow6432Node\Microsoft\VSTAHostConfig
    • SOLIDWORKS Directories:
      • C:\Program Files\SolidWorks Corp\
      • C:\Program Files\Common Files\eDrawings
      • C:\Program Files\Common Files\eDrawings<year>
      • C:\Program Files\Common Files\SolidWorks Installation Manager
      • C:\Program Files\Common Files\SolidWorks Shared
      • 64-bit Operating System Below Only
        • C:\Program Files (x86)\Common Files\eDrawings
        • C:\Program Files (x86)\Common Files\SolidWorks Installation Manager
        • C:\Program Files (x86)\Common Files\SolidWorks Shared
        • C:\Program Files (x86)\SolidWorks Shared

Note: The “ProgramData” folder is hidden by default. Go to the Control Panel and select Folder Options. Open the View tab and select “Show hidden files, folders and drives” from the list and click OK.

    • C:\ProgramData\COSMOS Applications\
    • C:\ProgramData\SolidWorks
    • C:\ProgramData\SolidWorks Electrical
    • C:\ProgramData\SolidWorks Flow Simulation
    • C:\SolidWorks Data – Note: also delete any duplicates, such as C:\SolidWorks Data (2)
    • C:\Users\%username%\Documents\SolidWorks Downloads\SolidWorks [version]
    • C:\Users\%username%\AppData\Local\Microsoft\VSTAHost\SolidWorks_VSTA
    • C:\Users\%username%\AppData\Local\Temp\Solidworks
    • C:\Users\%username%\AppData\Local\SolidWorks
    • C:\Users\%username%\AppData\Local\TempSWBackupDirectory
    • C:\Users\%username%\AppData\Roaming\SolidWorks
    • C:\Users\%username%\AppData\Roaming\Microsoft\VSTAHost\SolidWorks_VSTA

Contact GoEngineer

Please contact us if you have any questions or issues with the steps above via one of the following methods.

SOLIDWORKS TRAINING

Expand your SOLIDWORKS skillset and enroll in a professional SOLIDWORKS training course. Choose how you want to learn (online self-paced, virtual classroom, in-person classroom, or onsite), and then select your level of training. 

Editor’s Note: This article was originally published in August 2017 and has been updated for accuracy and comprehensiveness.

Source :
https://www.goengineer.com/blog/complete-solidworks-clean-uninstall-guide

Qnap QuTS hero h5.1.0 | Release Notes

QuTS hero h5.1.0
2023-05-29

QuTS hero h5.1.0 brings many important new features to further enhance security, improve performance, and boost productivity for your QNAP NAS. You can now log in with more secure verification methods, delegate administrative tasks to general users, and centrally manage NAS devices via AMIZ Cloud. You can also benefit from smarter disk migration, smoother file browsing and search in File Station, more powerful SMB signing and file sharing, more convenient storage pool expansion, and much more. See What’s New to learn about main features and Other Changes to learn about other features, enhancements, and changes.

We also include fixes for reported issues and provide information about known issues. For details, see Fixed and Known Issues. You should also see Important Notes before updating QuTS hero.

What’s New

Storage pool expansion by adding disks to an existing RAID group

Users can now expand a storage pool by adding disks to expand an existing RAID group within the pool. When expanding the RAID group, users can also migrate the RAID group to a different RAID type.

To use this function, go to Storage & Snapshots > Storage > Storage/Snapshots, select a storage pool, click Manage > Storage Pool > Action > Expand Pool to open the Expand Storage Pool Wizard, and then select Add new disk(s) to an existing RAID group.

Support for SMB multichannel

Users can now allow SMB 3.x clients to establish multiple network connections simultaneously to an SMB file share. Multichannel can increase the network performance by aggregating network bandwidth over multiple NICs and mitigating network disruption by increasing network fault tolerance.

To enable SMB multichannel, go to Control Panel > Network & File Services > Win/Mac/NFS/WebDAV > Microsoft Networking, and then select Enable SMB Multichannel.

SMB multichannel is only supported on the following clients using SMB 3.0 or later:

  • Windows 8.1 and later
  • Windows Server 2012 and later
  • macOS Big Sur 11.3.1 and later

AES-128-GMAC algorithm support for SMB signing

QuTS hero h5.1.0 now supports the Advanced Encryption Standard (AES) Galois Message Authentication Code (GMAC) cipher suite for SMB signing. SMB signing can use this algorithm to sign and verify messages using 128-bit keys, and can automatically negotiate this method when connecting to a client device that supports the same algorithm standard.

To enable SMB signing, go to Control Panel > Network & File Services > Win/Mac/NFS/WebDAV > Microsoft Networking > Advanced Settings, and then configure the SMB signing settings. Make sure that the highest SMB version is set to SMB 3.

Delegated Administration for better organization flexibility and productivity

In modern organizations, IT administrators are often overwhelmed by the sheer number of tasks and responsibilities. QuTS hero h5.1.0 now supports Delegated Administration, which allows administrators to delegate various roles to general users so that they can perform routine tasks, control their data, manage system resources, and monitor device status even when IT administrators are not available. You can choose from a wide range of roles, including System Management, Application Management, Backup Management, Shared Folder Management, and many more. To ensure system security, we recommend only granting permissions that are essential for performing required tasks.

This feature not only helps reduce the workloads of administrators but also greatly enhances productivity and flexibility for your organization. You can also easily view the roles currently assigned to each user and change their roles anytime according to your needs. To configure these settings, go to Control Panel > Privilege > Delegated Administration. To learn more about Delegated Administration, check QuTS hero h5.1.0 User Guide.

2-step verification and passwordless login for enhanced account security

QuTS hero now supports passwordless login, which replaces your password with a more secure verification method. Instead of entering a password, you can scan a QR code or approve a login request with your mobile device to verify your identity. QuTS hero now also supports more verification methods for 2-step verification. In addition to a security code (TOTP), you can also choose to scan a QR code, approve a login request, or enter an online verification code to add an extra layer of security to protect your NAS account.

To configure these settings, go to the NAS desktop, click your username on the taskbar, and then select Login and Security. You can download and install QNAP Authenticator from App Store or Google Play and pair this mobile app with your NAS to secure your NAS account. Note that you cannot use 2-step verification and passwordless login at the same time.

Centralized NAS management with AMIZ Cloud

You can now add the NAS to an organization when setting up the myQNAPcloud service for your NAS. This allows organization administrators to remotely access, manage, and monitor various system resources on the NAS via AMIZ Cloud, a central cloud management platform designed for QNAP devices.

To manage the NAS via AMIZ Cloud, you must enable AMIZ Cloud Agent in myQNAPcloud. This utility communicates with AMIZ Cloud and collects the data of various resources on your device for analytics purposes, without any personally identifiable information.

Automatic disk replacement with Predictive Migration before potential failure

Predictive Migration is a major improvement over the original Predictive S.M.A.R.T. Migration feature in Storage & Snapshots. This upgrade now allows users to specify multiple trigger events that prompt the system to automatically replace a disk before it fails.

Besides S.M.A.R.T. warnings, users can also specify trigger events from other monitoring systems such as Western Digital Device Analytics (WDDA), IronWolf Health Management (IHM), DA Drive Analyzer, and SSD estimated remaining life. When a specified trigger event occurs (for example, a disk’s WDDA status changes to “Warning” or the SSD estimated remaining life reaches 3%), the system automatically replaces the disk and migrates all its data to a spare disk. This process protects your data better and is safer than manually initiating a full RAID rebuild after the disk fails.

To configure Predictive Migration, go to Storage & Snapshots > Global Settings > Disk Health.

Lists of recent files in File Station for easier file browsing

With the new Recent Files feature in File Station, you can now easily locate files that were recently uploaded, opened, or deleted. These three folders are conveniently grouped together under the Recent File folder at the upper left portion of File Station.

File content search in File Station with Qsirch integration

The original search function in File Station could only search for file names of a specific file type. With the integration of Qsirch into File Station, you can now search for file content using keywords, and also search for multiple file types using these keywords at the same time. To use this feature, you need to install Qsirch, an app that can index the files on your device and greatly facilitate your file search.

Other Changes

Control Panel

  • Users can now configure an individual folder to inherit permissions from its parent folder or to remove the inherited permissions anytime. Users can also make a folder extend its permissions to all its subfolders and files. To configure permission inheritance on a folder, go to Control Panel > Privilege > Shared Folders, and then click the Edit Shared Folder Permissions icon under Action.
  • Added additional specification information for memory slots in Control Panel > System Status > Hardware Information.
  • Changed the behavior and the description of certain permission settings as we do not recommend using the default administrator account “admin”.
  • Optimized the process of restoring the LDAP database.
  • The “Network Recycle Bin” feature has been renamed to “Recycle Bin” in Network & File Services.
  • The automatic firmware update settings have been streamlined with the following changes:
    • The selectable options for automatic firmware updates have been greatly simplified. Users now select one of three firmware types to automatically update their system with: quality updates, critical updates, or latest updates.
    • “Security updates” are now “critical updates”. Critical updates include security fixes as well as critical system issue fixes.
    • “Quality updates” now include security fixes and critical issue fixes in addition to bug fixes.
    • “Feature updates” are now “latest updates” and include quality and critical updates in addition to new features, enhancements, and bug fixes.
    • Update notifications no longer need to be enabled separately for each firmware type. Notifications are now either enabled or disabled for all firmware types.
  • The time interval for observing successive failed login attempts can now be configured to be between 0 and 600 minutes. Moreover, a time interval of 0 minutes means that failed login attempts are never reset.
  • You can now include more information from account profiles when importing and exporting user accounts.
  • You can now select the direction to append the custom header for the reverse proxy rule.
  • Users can now edit and enable or disable existing power schedules in Control Panel > System > Power > Power Schedule. Previously, users could only add or remove power schedules.
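As an aside on the failed-login setting above, the special case of a 0-minute interval (failed attempts are never reset) can be illustrated with a short sketch. The class below is purely illustrative and is not QNAP’s implementation:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the failed-login observation window described above.
# A window of 0 minutes means failed attempts accumulate forever (never reset).
@dataclass
class FailedLoginTracker:
    window_minutes: int                    # 0..600; 0 = never reset
    attempts: list = field(default_factory=list)

    def record_failure(self, now_seconds: float) -> int:
        """Record one failed attempt; return how many count toward blocking."""
        if self.window_minutes > 0:
            # Drop attempts that fall outside the observation window.
            cutoff = now_seconds - self.window_minutes * 60
            self.attempts = [t for t in self.attempts if t >= cutoff]
        self.attempts.append(now_seconds)
        return len(self.attempts)
```

With a 1-minute window, a failure two minutes after the first counts alone; with a 0-minute window, both failures still count.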

Desktop & Login

  • You can now log out of your account on all devices, browsers, and applications at once. To use this feature, go to the desktop, click your username on the taskbar, and then go to Login and Security > Password.
  • Added an icon on the top-right corner of the desktop to indicate whether the device has enabled myQNAPcloud and been associated with a QNAP ID or whether the device has joined AMIZ Cloud.
  • Users can now save their QuTS hero login credentials in their web browser. To enhance the security of your QuTS hero user account, we recommend enabling 2-step verification.

App Center

  • Users can now configure a schedule for automatic installations of app updates.

File Station

  • Added prompt banners to remind users to turn on related browsing functions for multimedia files.
  • Enhanced the Background Tasks display UI.
  • Improved File Station performance and enhanced file browsing experience.

Help Center

  • Redesigned the user interface of Help Center for a better user experience.

Initialization

  • You can now purchase licenses during QuTScloud installation.

iSCSI & Fibre Channel

  • Added a new settings page for managing default iSCSI CHAP authentication settings, which you can use for multiple iSCSI targets. You can find these settings in iSCSI & Fibre Channel > Global Settings > Default iSCSI CHAP. When creating or editing a target, you can choose to use the default CHAP settings or configure unique settings for the target.
  • Added the client umask feature to assign default permissions for existing and new files and folders.
  • When creating an iSCSI target, you can now select the network interfaces that an iSCSI target will use for data transmission. Previously, users could only do so after the target was created.

Network & Virtual Switch

  • Network & Virtual Switch can now record event logs when the system identifies conflicting IP addresses between the local device and another device on the same network.
  • Users can now configure the MAC address when creating or modifying a virtual switch.
  • When selecting the system default gateway automatically, you can now configure the checking target by specifying the domain name or IP address.

NFS

  • NFS service now supports both NFSv4 and NFSv4.1 protocols.
  • Users can now set rpcbind to assign fixed ports to RPC services. Make sure that you configure the firewall rules accordingly to allow connections only on the fixed ports.

PHP System Module

  • Updated the built-in PHP version to 8.2.0.

Resource Monitor

  • Resource Monitor now displays the space used by files created from Qsync file versioning.

SAMBA

  • Updated Samba to version 4.15.
  • You can now aggregate up to 50 shared folders on a Windows network.

Storage & Snapshots

  • Added support for disk failure prediction from ULINK’s DA Drive Analyzer. Registered users of DA Drive Analyzer can now also monitor disk failure prediction statuses in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Added support for Seagate dual-actuator disks. These disks appear with a “Seagate DA” tag in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Added support for Western Digital Device Analytics (WDDA) for Western Digital (WD) disks. To view WDDA information, go to Storage & Snapshots > Storage > Disks/VJBOD > Disks, select a WD disk, and click Health > View Details.
  • Improved the “Enable Read Acceleration” feature so that it not only improves the read performance of new files added to a shared folder (starting in QuTS hero h5.0.1), but also improves the read performance of existing files (starting in QuTS hero h5.1.0). This feature can be enabled for shared folders after upgrading from QuTS hero h5.0.0 or earlier to QuTS hero h5.0.1 or later.
  • Increased the maximum number of disks in RAID-TP from 16 to 24.
  • Redesigned the presentation of disk information into tabular format for enhanced user experience, now viewable in Storage & Snapshots > Storage > Disks/VJBOD > Disks.
  • Renamed the function “Replace & Detach” to “Replace” and added the option for users to choose whether to designate the replaced disk as a spare disk or to detach it from the system.
  • You can now select up to 24 disks for a single RAID-TP group.
  • Encrypted LUNs are now supported in VJBOD, SnapSync, Snapshot Replica, and snapshot import/export operations.
  • Improved the user interface on various snapshot-related screens.
  • Users can now change the destination IP address in Snapshot Replica jobs.
  • Added a new window that automatically appears when you insert new disks and helps you decide what to do with them. You can also access this window any time by going to Storage & Snapshots > Storage > Disks/VJBOD > Disks > More > Manage Free Disks.
  • After rebuilding a RAID group with a spare disk, the failed disk’s slot becomes reserved for a spare disk. To free up this slot for other purposes, go to Storage & Snapshots > Storage > Disks/VJBOD > Disks, select the disk slot, and click Action > Free Up Spare Disk Slot.
  • Users can now enable and disable QNAP SSD Antiwear Leveling (QSAL) on an existing SSD storage pool any time. Richer information is also available for QSAL-enabled pools, including replacement priority recommendation and charts showing the remaining capacity and life of the SSDs in the pool. To configure QSAL or view QSAL information, go to Storage & Snapshots > Storage > Storage/Snapshots, click an SSD storage pool, and then click Manage > QSAL.

System

  • If you forget your password, you now need to enter a verification code when resetting it. This extra step helps enhance your account security.

Important Note

  • In QuTS hero h5.0.1 or earlier, users can no longer create new VJBOD disks from a remote NAS if the remote NAS is running QuTS hero h5.1.0 or later. If there were existing VJBOD disk connections to the remote NAS before it was updated to QuTS hero h5.1.0 or later, these VJBOD disks are unaffected and remain operational after the update. In QuTS hero h5.1.0 or later, users can still create VJBOD disks from a remote NAS running QuTS hero h5.0.1 or earlier.
  • Removed support for CO Video.


Source :
https://www.qnap.com/en/release-notes/quts_hero/overview/h5.1.0

How To Stop Windows From Updating Graphics Drivers

Updated on January 12, 2023

Marlo Strydom

You may have noticed that Windows 10 is very eager to keep your system software up to date. The OS will automatically download and install new drivers for your graphics card, sound card, modem, or other hardware components.

While this can be convenient, it also risks breaking previous driver configurations and introducing bugs to your system through the updated driver. 

Here’s how to stop Windows from updating graphics drivers:

  1. Click on the Start menu.
  2. Double click on Advanced System Settings.
  3. On the System Properties window, select the Hardware tab.
  4. Select Device Installation Settings.
  5. Save to apply the setting.

In this article, I’ll take you through the quickest way to stop Windows from updating graphics drivers on your computer.

1. Click on the Start Menu

An open Start Menu in Windows.

The Start menu provides easy access to commonly used programs and system settings. You can click on the Start button from the taskbar or press the Windows key on your keyboard.

The Windows Start Menu (as shown above) should pop up whichever approach you take, allowing you to proceed to the next step.

2. Double click on Advanced System Settings 

Type in advanced system and open advanced system settings.

In the search bar at the bottom of the Start menu, type Advanced System and select Advanced System Settings from the options that appear.

Windows 10 comes with a range of system settings that you can access to control how the operating system and its apps behave. 

3. On the System Properties Window, select the Hardware tab

The Hardware tab in the System Properties window is selected.

Windows provides a variety of built-in system properties, which are attributes that describe specific features of the system.

You can view and change these properties on the System Properties window. Specifically, you’ll want to access the Hardware tab, which you can do by selecting it in the menu that pops up.

4. Select Device Installation Settings

Click on the Device Installation Button.

Device Installation Settings is where Windows 10 controls whether drivers and manufacturer software are downloaded automatically.

No is selected to stop Windows from updating graphics drivers.

Here, the system will ask you whether you want to automatically download the manufacturer’s apps and custom icons available for your device.

Select No (your device might not work as expected).

5. Save to apply the setting

Hit the Save Changes button to save the settings.

Lastly, click Save to apply the new settings.

Using the Device Installation Settings is one of the easiest ways to prevent your Windows device from automatically updating drivers.

If that doesn’t work, there’s no need to worry. There are a few other alternative solutions you can try.

How to stop Windows from updating graphics drivers in alternative ways

Windows Update automatically downloads and installs software updates that are released from time to time.

These updates introduce new features, fix problems with existing apps, or improve the operating system’s performance.

If you have an OEM computer or a pre-installed version of Windows on your computer, you might not have much control over what updates get installed on your system.

Sometimes these updates can cause stability issues and lead to blue screen crashes.

If this happens to you after installing graphics driver updates, here are some alternative ways to stop Windows from updating graphics drivers in the future:

Stop Automatic Updates through the Local Group Policy Editor

Windows Local Group Policy Editor (LGPE) is a snap-in that can be used to manage local group policy objects on Windows operating systems.

In Windows, the traditional system controller isn’t always enough for managing user settings and other configurations.

Administrators can use the Local Group Policy Editor to: 

  • Manage the operating system and user behavior.
  • Set restrictions on user applications.
  • Control what software the computer can install.
  • Restrict their access to certain programs and folders, and much more. 

In this section, we’ll focus on how to use it to stop Windows from automatically updating graphics drivers.

Exclude driver updates for Windows updates in Group Policy Editor.
  1. Press the Windows key or click the Start button.
  2. Type gpedit.msc into the search box at the bottom left corner.
  3. Open the Local Group Policy Editor.
  4. Click Computer Configuration.
  5. Navigate to Administrative Templates > Windows Components.
  6. Scroll down to Windows Update.
  7. Navigate to Do not include drivers with Windows Update and double click.
  8. You should see three options: Not Configured, Enabled, and Disabled.
  9. Select Enabled and click on Apply.
  10. Click OK.

Windows will still receive other updates but will exclude drivers from being installed on your computer. In that case, you may have to download and install drivers manually.

Stop graphics drivers update with Windows Registry

The Windows Registry is a central location for storing configuration information and user settings for Windows and its applications.

The registry stores information about user preferences, operating system settings, and application configurations to help your computer run smoothly and efficiently.

The Windows registry is organized into different categories known as keys. Each key stores specific information in the form of values, which are either numbers or text strings.

Create a new folder under the Windows folder in Windows Registry.

Here’s how to use the registry to stop automatic driver updates:

  1. Press the Windows key or click the Start button.
  2. Type regedit into the search box and open the Registry Editor app.
  3. Allow the application to make changes to your computer.
  4. Navigate to HKEY_LOCAL_MACHINE, go to SOFTWARE and scroll down to Policies.
  5. Select Microsoft and right-click Windows.
  6. Select New > Key.
  7. Name the newly created key WindowsUpdate (Windows expects this exact name).
  8. Right-click your newly created key, and go to New > DWORD (32-bit) Value.
  9. Right-click the DWORD and rename it as ExcludeWUDriversInQualityUpdate.
  10. Right-click the new DWORD, select Modify, set the value data to 1, and click OK.
Windows key and value added to the Registry to stop Windows updating graphics card driver.
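The registry steps above amount to creating a single key and value. As a sketch, an equivalent .reg file would look like the following (the path below assumes the standard Windows Update policy location; verify it matches your system before merging):

```reg
Windows Registry Editor Version 5.00

; Exclude driver packages from Windows quality updates
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"ExcludeWUDriversInQualityUpdate"=dword:00000001
```

Saving this as a .reg file and double-clicking it applies the same change as the manual steps; deleting the value (or setting it to 0) restores the default behavior.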

Using the Microsoft Show or Hide Updates Troubleshooter

The Microsoft Show or Hide Updates Troubleshooter is a lightweight, portable tool that can be used to identify and hide problematic updates on your computer.

Unfortunately, if you’re using Windows 10, you’re bound to run into some issues.

You might find yourself in a situation where an update messes up something essential to your workflow.

Perhaps an upcoming update has broken some functionality or compatibility with other programs.

In that case, try the Show or Hide Updates Troubleshooter to block automatic updates for a problematic driver:

  1. Download the Microsoft Show or Hide Updates Troubleshooter.
  2. Run the troubleshooter to select the drivers that will install automatically.
  3. Click Next and wait as the system detects problems.
  4. Select Hide Updates.
  5. Choose your graphics drivers from the list.

Hidden drivers will be temporarily blocked from automatic updates. If this tool does not work, you may always roll back to the previous version of the driver.

In that case, you may have to uninstall the driver, download the previous version from the vendor’s website, and reinstall it.

Troubleshooting graphics drivers on Windows

We don’t normally think of graphics cards as independent units, but if you’re an avid gamer or a professional video editor, you know how important they are in your work.

Graphics drivers are programs that tell your computer how to efficiently operate its graphical user interface (GUI). 

If you recently updated graphics drivers and are experiencing issues with your Windows 10 computer, you may want to revert back to the previous version of the graphics driver.

Updating graphics drivers may differ depending on your system’s manufacturer and graphics card type.

Here are some things to check if you’re having trouble with your graphics driver:

  • Low-performance computer: A slow computer is one of the most common graphics driver issues. While many computers experience some slowdown over time, poor graphics drivers can worsen this issue. Playing games, editing videos, or performing other tasks requiring high graphics levels with problematic graphics drivers is virtually impossible.
  • Display issues: Display issues can be caused by several culprits, including a faulty computer or a faulty graphics driver. While a faulty computer is less likely, a faulty graphics driver is much more common. You can tell that your graphics drivers are not working properly by checking for distorted images, colors that look washed out, or black and white screen issues.
  • Video card issues: A video card is responsible for converting your computer’s instructions into images that appear on your monitor. If your video card is misconfigured or damaged, it can cause various issues, including distorted images, poor color quality, or even a blank or black screen. 
  • Failed driver updates and installs: A failed driver update is one of the most common graphics driver issues. Fortunately, this problem is easy to spot and usually has an easy fix. Download and reinstall your graphics driver.

A graphics driver issue can cause various subsequent problems, including low performance, display issues, and video card issues.

To troubleshoot these issues, start by ensuring that your computer’s display is set up properly. Then, if your computer has display issues, check your computer’s graphics settings.

Check for Windows Updates

Performing Windows updates check and updating Windows.

When looking for potential issues with your Windows 10 computer, don’t forget to check whether any Windows updates are interfering with your graphics drivers.

When your computer installs a Windows update, it may modify the driver file associated with your graphics card, which can cause your computer to malfunction. 

To check for pending Windows updates:

  1. Open the Windows 10 Start Menu
  2. Go to Settings > Update & Security > Windows Update.
  3. Select Check for updates.

Windows users are always at risk of viruses, trojan horses, and other malicious software. Windows updates help protect against these threats.

It’s up to you to keep your computer secure. Check for updates regularly and install them as soon as possible to avoid problems that can slow down or crash your computer.

Reinstall the current version of the graphics driver

Uninstalling the device driver in the Device Manager in Windows.

If you’ve recently installed new graphics drivers and are having issues, it might be best to uninstall them and reinstall the older version.

To remove the current graphics driver:

  1. Open the Start Menu and search for Device Manager. You can also open the Control Panel and search for Device Manager
  2. Once there, select Display adapters and right-click the installed graphics driver
  3. Select Properties.
  4. Navigate to the Driver tab.
  5. Select Uninstall Device and reboot your computer.

Once you’ve uninstalled the driver, go back to the manufacturer’s website and download the previous version, then install it manually.

Roll back to an earlier version of the graphics driver

If you’ve tried installing a different graphics driver and the issues persist, you can roll back to an earlier version of the driver. 

  1. Open the Device Manager and scroll down to Display adapters.
  2. Right-click your installed graphics driver (under Display Adapters) and select Properties
  3. Navigate to the Driver tab.
  4. Select Roll Back Driver and follow the on-screen instructions to return to an earlier version of the graphics driver.

If none of the above solutions work, it may be best to completely uninstall your current graphics driver, restart your computer, and then manually reinstall the latest version of the driver.

Check your Device Manager

If your computer is running slowly and you suspect that the graphics driver may be to blame, one thing to check is the Device Manager.

In the Device Manager, you can see a list of all the hardware installed on your computer. 

If there is a yellow ! or red X next to a device, it means the computer is having some issues with it.

If there is an exclamation mark next to your graphics card, it means there’s a problem with the device driver, which can slow down your computer.

Check for hardware compatibility issue(s) and update(s)

If you recently installed a new driver accompanied by a new device and are experiencing issues when using it, you could be experiencing a hardware compatibility issue.

To check for compatibility issues, you can browse through the supported devices list for the program or device you’re using. 

You should also update the device driver to ensure it is compatible with your computer and operating system.

If you’re using a brand-new device, it may not yet have a working third-party graphics driver installed on your computer. Check the manufacturer’s website to see if a compatible version has been released.

Ensure your computer is using the latest software

Last but not least, ensure that your computer has the latest updates. If your computer runs slow or has issues, outdated software could be the cause.

While some updates are crucial and address important computer security issues, others may create more problems than they solve. 

Windows will automatically prompt you to install new updates when available. However, you might need to check for updates yourself in some instances. 

Installing the latest updates for your computer’s operating system, browser, and other programs keeps your computer safe from cyber criminals who try to exploit outdated software.

Updating software regularly also helps prevent crashes, reduce blue screen errors and increase system performance.

Final thoughts

If you’ve had enough of Windows automatically updating your graphics drivers, the good news is there are several simple solutions to this issue.

By reading through the previous sections, you’ll be able to gain much greater control over your device.

Source :
https://computerinfobits.com/how-to-stop-windows-from-updating-graphics-drivers/

How to use CHATGPT to write a blog post: easy step-by-step guide

By Emily Brookes
Last updated: May 5, 2023

In this article, we’re going to show you how to use ChatGPT to write a blog post. If you’re new to using AI content generators, don’t worry. We will be walking you through the entire process step-by-step.

ChatGPT is a game-changer for marketers, bloggers, and pretty much anyone who does anything online; it can even help you brainstorm. And although it might sound like AI will take everyone’s jobs, we should embrace AI technology and use it to create better content more quickly.

Before we jump into this topic, it’s worth noting here that it is highly likely that OpenAI will be adding a digital watermark to content generated by ChatGPT.

If you intend to publish this content online, you should either rewrite the output in your own words or use a more comprehensive AI writing tool like Jasper to write or rewrite the paragraphs for you, based on the outline and ideas generated by ChatGPT (and check out our thoughts on the future of white-collar work in the age of AI here).

You Can Try Jasper for Free Right Here


HOW TO USE CHAT GPT TO WRITE A BLOG POST

Writing a blog post is an area where ChatGPT can excel. But the thing is, it won’t simply produce the perfect blog post at the click of a button. ChatGPT needs detailed instructions to produce good content.

And of course, when it comes to creativity and original ideas, you will still need to add a human touch.

That being said, ChatGPT can be used for pretty much every part of the writing process when guided carefully by a human writer.

Often, blog articles are relatively short, focused pieces that center primarily on one topic. Because of this, ChatGPT will happily suffice for short blog posts on simple topics.

However, a higher standard can often be achieved by augmenting the process with Jasper’s AI writing capabilities.

Here’s how to use ChatGPT to write a blog post.

BRAINSTORM TOPICS AND TITLE IDEAS

Chat GPT has emerged as a useful brainstorming tool. It’s becoming increasingly popular with bloggers and copywriters to help them with writer’s block.

It offers a quick and convenient way of generating relevant topics and title suggestions. To get started, you must create a free account with OpenAI. There is a paid version available, too—ChatGPT Plus.

In this guide, we’re going to be using the free version, but you can use either.

Once you’re signed in, you can enter a prompt in the chat box at the bottom of the page. For example: “Generate 12 new topic ideas and titles for a dog training blog.”

If you’re happy with the generated text, you can move on to the next step. Alternatively, you can also ask ChatGPT to regenerate the response for more ideas.

USE CHATGPT TO HELP YOU WRITE A SOLID OUTLINE

Once you have established a topic, the next step is to use ChatGPT to write an outline for your blog post.

Doing this manually can be a time-consuming process. But the good news is, ChatGPT will make it a lot easier.

It will provide you with a detailed outline which you can then edit or add to yourself with your own ideas.

First, you will need to enter your command into ChatGPT.

Command example: Create a detailed outline for a blog post titled “Mastering Recall: Tips and Techniques for Training Your Dog to Come When Called”.

ChatGPT will then provide you with a detailed outline that you can tweak as needed.

Now that you’ve got an outline, you can use either ChatGPT or another tool like Jasper to create content for each section of your blog post.

HOW TO USE CHATGPT TO HELP WRITE EACH SECTION OF YOUR BLOG POST

If you want to use ChatGPT to write a blog post, you’re going to need to break down what you want into different sections and categories. That way, you can ask ChatGPT to write each section for you as you go.

After that, you can piece them all together at the end to create a long-form blog post you can publish.

If you’re writing a shorter piece of content of up to 500 words, then technically, you could just ask it to write a whole blog post in one go.

However, in general, breaking this down into sections is the best way to go about this. This will ensure that the topic is covered thoroughly and in the appropriate order.

Doing this is also essential if you want to create long-form content.

ASK CHATGPT TO WRITE YOUR INTRODUCTION

A strong start to any blog post is a must. This is why you want to start by asking ChatGPT to write your introduction for you.

Ask ChatGPT to write an introduction to your blog post.

Example prompt:

Write an introduction for a blog post titled “Mastering Recall: Tips and Techniques for Training Your Dog to Come When Called”.

And here’s what ChatGPT generated based on that prompt:

As you can see, it has done a pretty good job in just a few seconds.

You can now tweak this introduction if required. This is a good time to add your own expertise and introduce yourself as an authority on the topic.

ENTER EACH SUBHEADING IN CHATGPT AS A QUESTION

The next step is to create content for each subheading detailed in your outline.

ChatGPT is designed to be an AI chatbot rather than exclusively an article writer. Because of this, it works well if you enter your prompts as questions.

If you make the headings within your article questions, then you can ask ChatGPT to answer each question for you. You can then use the answer it generates as the basis for each paragraph of your blog post.

So for the first subheading, “Explanation of the importance of recall training”, you would enter a prompt of “Explain the importance of recall training for dogs”.

ChatGPT will then respond to this prompt, providing another section of your blog post.
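The subheading-as-question step can be scripted when you have many sections to draft. Here is a small illustrative Python helper — the function and its rewrite rules are my own sketch, not a ChatGPT feature — that turns outline subheadings like “Explanation of the importance of recall training” into question-style prompts:

```python
# Illustrative helper: turn outline subheadings into question-style prompts.
# The rewrite rules below are simple heuristics, not an official ChatGPT feature.

def subheading_to_prompt(subheading: str) -> str:
    """Convert an outline subheading into a prompt suitable for ChatGPT."""
    s = subheading.strip().rstrip(".")
    lower = s.lower()
    # "Explanation of X" -> "Explain X"
    if lower.startswith("explanation of "):
        return "Explain " + s[len("Explanation of "):]
    # "Benefits of X" -> "What are the benefits of X?"
    if lower.startswith("benefits of "):
        return "What are the " + s[0].lower() + s[1:] + "?"
    # Fallback: ask ChatGPT to cover the subheading directly.
    return f"Write a blog post section about: {s}"

outline = [
    "Explanation of the importance of recall training",
    "Benefits of a reliable recall",
    "Common recall training mistakes",
]
prompts = [subheading_to_prompt(h) for h in outline]
```

Each resulting prompt can then be pasted into the chat box one section at a time, as described above.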

Note: If you intend to publish this content online, you should either rewrite the output in your own words or use a more comprehensive tool like Jasper to write or rewrite the paragraphs for you, based on the outline created by ChatGPT.

 Try Jasper Here Free

ASK CHATGPT TO WRITE A CONCLUSION PARAGRAPH

Ending any blog post on a high is a great idea. Once you are certain your blog post has thoroughly covered the topic at hand, it’s time to close things off.

Simply ask ChatGPT to create a conclusion based on the topic you’re already writing about. You can even go one step further and ask it to include things like a call to action or next steps.

You might want to change things a little to ensure your brand and/or name is mentioned. However, asking ChatGPT to write you a conclusion paragraph gives you a solid starting point.

When you start by asking ChatGPT to write you a conclusion, it will tell you that it needs to know the topic of the blog and the main points you have mentioned in the post, so it can conclude your blog post accurately.

REVIEW AND EDIT YOUR BLOG POST

Just because ChatGPT (or indeed any AI writing software) has created a post for you, that doesn’t mean you should use it as it is. It’s important to thoroughly review and edit the content. Make sure that it reads well and keeps in line with your existing brand voice. 

Most people won’t respond well to content they think has been auto-generated, so putting across your voice and ensuring that it sounds in line with the rest of your content is essential.

This is something that you should be double-checking in the review stage of your blog post.

FACT-CHECKING 

ChatGPT’s knowledge generally ends in the latter part of 2021. This means that some of the facts it gives may be outdated and, therefore, inaccurate.

Before you publish a post, while you’re reviewing it, you should make sure that any facts mentioned are accurate and edit them if they’re not.

It’s all well and good having a well-written article, but if the information within it is inaccurate, it could destroy any trust you have built with your readers or audience.

Instead, spend some time checking all of the facts for yourself. This way, you can be sure that the content you are putting out there is going to be well received by its intended audience.

CHECK FOR PLAGIARISM WITH GRAMMARLY

While text generated with ChatGPT should be unique, that’s not always the case, so it’s a good idea to double-check it. Grammarly is a popular free tool for checking spelling and grammar in written content, and it has a built-in plagiarism checker.

It’s worth spending a couple of minutes copying and pasting your AI-generated content into Grammarly’s Plagiarism Checker just to give it the once over before it goes live.

Get Grammarly Here

IS CHATGPT GOOD FOR BLOGGING?

Overall, ChatGPT is a super useful tool for digital marketers and bloggers to have as part of their content creation toolkit.

You can use it for everything from blog writing to writing a meta description and even generating social media captions. It can also be used for keyword research and to help you generate new keyword ideas.

The main thing to bear in mind is that it’s likely that content generated with ChatGPT is watermarked or soon will be.

This means that Google and other search engines, along with AI content detection tools like Originality.ai, will usually be able to tell if your content is AI-generated.

However, that doesn’t mean you should dismiss ChatGPT altogether. But it does mean you need to be savvy and do what you can to get the most out of the tool.

Teaming up ChatGPT with other tools like Jasper can be a great way to get the most out of your content marketing efforts. This can also help you to get around the potential ‘Watermarking’ issues that you may come across in the future with Chat GPT.

ChatGPT isn’t really designed for long-form content writing, so you probably won’t use it to create entire blog posts in one go. However, there’s nothing to say that facility won’t come in the future. And there are already awesome courses like AI for blogging that are helping students profit from this new technology.

What it does is offer a quick and easy way to get blog post ideas, expand on ideas you already have, and even get an idea of what other people might be writing about within your niche.

You can then use the information you have gathered from ChatGPT in Jasper to create a unique, high-quality long-form blog post that you would be proud to publish on your platform.

Try Jasper Here Free

Source :
https://www.nichepursuits.com/how-to-use-chatgpt-to-write-a-blog-post/