Introducing Google Cloud’s Secret Manager

Many applications require credentials to connect to a database, API keys to invoke a service, or certificates for authentication. Managing and securing access to these secrets is often complicated by secret sprawl, poor visibility, or lack of integrations.

Secret Manager is a new Google Cloud service that provides a secure and convenient method for storing API keys, passwords, certificates, and other sensitive data. Secret Manager provides a central place and single source of truth to manage, access, and audit secrets across Google Cloud.

Secret Manager offers many important features:

  • Global names and replication: Secrets are project-global resources. You can choose between automatic and user-managed replication policies, so you control where your secret data is stored.
  • First-class versioning: Secret data is immutable and most operations take place on secret versions. With Secret Manager, you can pin a secret to specific versions like 42 or floating aliases like latest.
  • Principles of least privilege: Only project owners have permissions to access secrets. Other roles must explicitly be granted permissions through Cloud IAM.
  • Audit logging: With Cloud Audit Logging enabled, every interaction with Secret Manager generates an audit entry. You can ingest these logs into anomaly detection systems to spot abnormal access patterns and alert on possible security breaches.
  • Strong encryption guarantees: Data is encrypted in transit with TLS and at rest with AES-256-bit encryption keys. Support for customer-managed encryption keys (CMEK) is coming soon.
  • VPC Service Controls: Enable context-aware access to Secret Manager from hybrid environments with VPC Service Controls.

The Secret Manager beta is available to all Google Cloud customers today. To get started, check out the Secret Manager Quickstarts. Let’s take a deeper dive into some of Secret Manager’s functionality.

Global names and replication

Early customer feedback identified that regionalization is often a pain point in existing secrets management tools, even though credentials like API keys or certificates rarely differ across cloud regions. For this reason, secret names are global within their project.

While secret names are global, the secret data is regional. Some enterprises want full control over the regions in which their secrets are stored, while others do not have a preference. Secret Manager addresses both of these customer requirements and preferences with replication policies.

  • Automatic replication: The simplest replication policy is to let Google choose the regions where Secret Manager secrets should be replicated.
  • User-managed replication: If given a user-managed replication policy, Secret Manager replicates secret data into all the user-supplied locations. You don’t need to install any additional software or run additional services—Google handles data replication to your specified regions. Customers who want more control over the regions where their secret data is stored should choose this replication strategy.
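
For example, a user-managed policy can be specified when the secret is created with the gcloud CLI shown later in this post; the region list here is illustrative:

```shell
# Sketch: create a secret whose data is replicated only to the listed regions.
$ gcloud beta secrets create "my-secret" \
    --replication-policy "user-managed" \
    --locations "us-east1,us-west1" \
    --data-file "/tmp/my-secret.txt"
```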

First-class versioning

Versioning is a core tenet of reliable systems to support gradual rollout, emergency rollback, and auditing. Secret Manager automatically versions secret data using secret versions, and most operations—like access, destroy, disable, and enable—take place on a secret version.

Production deployments should always be pinned to a specific secret version. Updating a secret should be treated in the same way as deploying a new version of the application. Rapid iteration environments like development and staging, on the other hand, can use Secret Manager’s latest alias, which always returns the most recent version of the secret.
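
With the gcloud CLI, a deployment pinned to a specific version accesses it by number rather than through the latest alias (version 42 here is illustrative):

```shell
# Access an explicitly pinned secret version rather than "latest":
$ gcloud beta secrets versions access "42" --secret "my-secret"
```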

Integrations

In addition to the Secret Manager API and client libraries, you can also use the Cloud SDK to create secrets:

$ gcloud beta secrets create "my-secret" --replication-policy "automatic" --data-file "/tmp/my-secret.txt"

and to access secret versions:

$ gcloud beta secrets versions access "latest" --secret "my-secret"
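
Adding new secret data to an existing secret creates a new version; a sketch (the file path is illustrative):

```shell
$ gcloud beta secrets versions add "my-secret" --data-file "/tmp/my-secret-v2.txt"
```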

Discovering secrets

As mentioned above, Secret Manager can store a variety of secrets. You can use Cloud DLP to help find secrets using infoType detectors for credentials and secrets. The following command will search all files in a source directory and produce a report of possible secrets to migrate to Secret Manager:

$ find . -type f | xargs -n1 gcloud alpha dlp text inspect --info-types="AUTH_TOKEN,ENCRYPTION_KEY,GCP_CREDENTIALS,PASSWORD" --content-file

If you currently store secrets in a Cloud Storage bucket, you can configure a DLP job to scan your bucket in the Cloud Console.

Over time, native Secret Manager integrations will become available in other Google Cloud products and services.

What about Berglas?

Berglas is an open source project for managing secrets on Google Cloud. You can continue to use Berglas as-is and, beginning with v0.5.0, you can use it to create and access secrets directly from Secret Manager using the sm:// prefix.

$ berglas access sm://my-project/api-key
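
A common pattern is to load the secret into an environment variable at process start; project and secret names as in the hypothetical example above:

```shell
$ export API_KEY="$(berglas access sm://my-project/api-key)"
```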

If you want to move your secrets from Berglas into Secret Manager, the berglas migrate command provides a one-time automated migration.

Accelerating security

Security is central to modern software development, and we’re excited to help you make your environment more secure by adding secrets management to our existing Google Cloud security product portfolio. With Secret Manager, you can easily manage, audit, and access secrets like API keys and credentials across Google Cloud.

To learn more, check out the Secret Manager documentation and Secret Manager pricing pages.

Source :
https://cloud.google.com/blog/products/identity-security/introducing-google-clouds-secret-manager

Set up Chrome Browser Cloud Management

Enroll cloud-managed Chrome Browsers

Next: 3. Set policies for enrolled Chrome Browsers

After you have access to your Google Admin console, here’s how to enroll the devices where you want to manage Chrome Browsers. You’ll then be able to enforce policies for any users who open Chrome Browser on an enrolled device.

Step 1: Generate enrollment token

  1. In your Google Admin console (at admin.google.com)…

  2. Go to Device management.
  3. (Optional) To add browsers in the top-level organization in your domain, keep Include all organizational units selected. Alternatively, you can generate a token that will enroll browsers directly to a specific organizational unit by selecting it in the left navigation before moving on to the next step. For more information, see Add an organizational unit.
  4. At the bottom, click Add to generate an enrollment token.
  5. In the box, click Copy to copy the enrollment token.

Step 2: Enroll browsers with the enrollment token

Enroll browsers on Windows

Option 1: Use the Group Policy Management Editor

Under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome, set CloudManagementEnrollmentToken to the generated token you copied above.

Clear the current enrollment, if one exists, by deleting:
HKEY_LOCAL_MACHINE\SOFTWARE\Google\Chrome\Enrollment

(Optional) By default, if enrollment fails (for example if the enrollment token is invalid or revoked), Chrome will start in an unmanaged state. If you instead want to prevent Chrome browser from starting if enrollment fails, set CloudManagementEnrollmentMandatory under HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome to true.
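
As a sketch, the steps above can be run from an elevated command prompt; <your-token> is a placeholder, and storing the mandatory flag as a REG_DWORD of 1 follows Chrome's usual convention for boolean policies:

```
reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v CloudManagementEnrollmentToken /t REG_SZ /d "<your-token>" /f
reg delete "HKLM\SOFTWARE\Google\Chrome\Enrollment" /f
reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v CloudManagementEnrollmentMandatory /t REG_DWORD /d 1 /f
```

The reg delete line is only needed when a previous enrollment exists.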

Notes:

  • The token must be set at a local machine level. It won’t work at the user level.
  • If the machines you are enrolling are imaged from the same Windows source, make sure that you have used Microsoft’s System Preparation tool (Sysprep) so that each enrolled machine has a unique identifier.

Option 2: Download the reg file

Click Download .reg file. The downloaded .reg file automatically adds the token and clears the current enrollment when run.

When you use the reg file, Chrome browser will still respect the CloudManagementEnrollmentMandatory policy in Option 1, blocking launch if enrollment fails. See the note above if you’re enrolling machines imaged from the same Windows source.

Enroll browsers on Mac

Option 1: Use a policy

Push the token to your browser as a policy named CloudManagementEnrollmentToken. Setting policies on Mac devices requires the Apple Profile Manager.

Note: If you choose to manually set policies, be aware that Mac OS will delete the policy files on every sign-in. Learn more about setting up policies on Mac in the Quick Start Guide and help center.

(Optional) By default, if enrollment fails (for example if the enrollment token is invalid or revoked), Chrome will start in an unmanaged state. If you instead want to prevent Chrome browser from starting if enrollment fails, set CloudManagementEnrollmentMandatory to true

Option 2: Use a text file

Push the token in a text file called CloudManagementEnrollmentToken, under /Library/Google/Chrome/. This file must only contain the token and be encoded as a .txt file, but should not have the .txt filename extension.

(Optional) By default, if enrollment fails (for example if the enrollment token is invalid or revoked), Chrome will start in an unmanaged state. If you instead want to prevent Chrome browser from starting if enrollment fails, create a file called CloudManagementEnrollmentOptions under /Library/Google/Chrome/ with the text Mandatory (case sensitive). This file must be encoded as a .txt file, but should not have the .txt filename extension.
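
The two files can be staged with a short script. This sketch writes to a local staging directory (the real destination is /Library/Google/Chrome/, which requires sudo), and the token value is a placeholder:

```shell
#!/bin/sh
# Stage the enrollment files locally; copy them to /Library/Google/Chrome/
# with sudo on the target Mac. DEST and TOKEN are illustrative defaults.
DEST="${DEST:-./chrome-enrollment-staging}"
TOKEN="${TOKEN:-replace-with-your-enrollment-token}"
mkdir -p "$DEST"
# The token file must contain only the token and have no .txt extension.
printf '%s' "$TOKEN" > "$DEST/CloudManagementEnrollmentToken"
# Optional: block Chrome from starting if enrollment fails (case sensitive).
printf 'Mandatory' > "$DEST/CloudManagementEnrollmentOptions"
```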

If a token is pushed using both methods above, Chrome will use the value present in the policy and ignore the file. The token is stored in a directory under the home directory on the user’s Mac. Each Mac OS user must enroll separately.

Enroll browsers on Linux machines

The token can be pushed by creating a text file called enrollment_token, under /etc/opt/chrome/policies/enrollment. This file must only contain the token and nothing else.

(Optional) By default, if enrollment fails (for example if the enrollment token is invalid or revoked), Chrome will start in an unmanaged state. If you instead want to prevent Chrome browser from starting if enrollment fails, create a file called CloudManagementEnrollmentOptions under /etc/opt/chrome/policies/enrollment/ with the text Mandatory (case sensitive). This file must be encoded as a .txt file, but should not have the .txt filename extension.
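
A sketch of the same setup as a script; it stages into a local directory (copy the result to /etc/opt/chrome/policies/enrollment/ as root), and the token value is a placeholder:

```shell
#!/bin/sh
# Stage the Linux enrollment files; the real destination is
# /etc/opt/chrome/policies/enrollment/ (root-owned). Values are placeholders.
DEST="${DEST:-./linux-enrollment-staging}"
mkdir -p "$DEST"
# enrollment_token must contain only the token and nothing else.
printf '%s' "replace-with-your-enrollment-token" > "$DEST/enrollment_token"
# Optional: require successful enrollment before Chrome will start.
printf 'Mandatory' > "$DEST/CloudManagementEnrollmentOptions"
```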

Step 3: Launch Chrome Browser and confirm enrollment

  1. After setting the enrollment token using one of the methods in Step 2, quit Chrome Browser (if it’s open) and launch Chrome Browser on the managed device.
  2. Sign in to the Google Admin console (admin.google.com).
  3. Go to Device management and then Chrome management and then Managed browsers. All browsers that have been launched with your enrollment token will appear in the browser list.
  4. (Optional) To see additional details, click a machine’s name.

Notes: 

  • If you have multiple installations of Chrome Browser on a single device, they will show up in the browser list as a single managed browser.
  • Enrollment tokens are only used during enrollment. After enrollment, they can be revoked in the Admin console. However, enrolled browsers will still be registered.
  • On Windows, only system installations are supported because Chrome Browser requires admin privileges to register.

Just after registering, not many fields are populated. You need to enable browser reporting to access detailed reporting information. For more information, see Step 4: Enable Chrome Browser reporting.

Unenroll and re-enroll devices

To remove policies and to unenroll a device from Chrome Browser Cloud Management, delete both the enrollment token and the device token.

To re-enroll a device, delete the device token while leaving the enrollment token in place. The device token was created by Chrome during the initial enrollment. Make sure not to revoke the enrollment token. If you accidentally delete the enrollment token, create a new one.

Note: Unenrolling browsers from Chrome Browser Cloud Management doesn’t delete the data that’s already uploaded to the Google Admin console. To delete uploaded data, delete the corresponding device from the Admin console.

Questions

When are enrollment tokens used?

Enrollment tokens are only used during enrollment. They can be revoked after enrollment and enrolled browsers will still be registered.

Does this token enrollment process require admin privileges on Windows?

Yes. On Windows, only system installations are supported.

What gets uploaded during the enrollment process?

During the enrollment process, Chrome Browser uploads the following information:

  •   Enrollment token
  •   Device ID
  •   Machine name
  •   OS platform
  •   OS version

Why don’t I see a Chrome management section in my Admin console?

If you have the legacy free edition of G Suite, Chrome management isn’t currently available in your Admin console. Support for legacy free edition will be rolled out in the future.

source:
https://support.google.com/chrome/a/answer/9301891?hl=en

Configure Google Drive File Stream

Configure Drive File Stream

You can specify custom options for Drive File Stream, including the default drive letter on Windows, the mount point on macOS, the cache location, bandwidth limits, and proxy settings. These configurations can be set at the user or host level, and persist when Drive File Stream restarts.

Where to update settings

To set the Drive File Stream options, you update registry keys (Windows) or use the defaults command (macOS). If you’re not familiar with making these updates, contact your administrator or check your operating system documentation. Additionally, administrators can choose to set override values that end users can’t change.

Windows

Host-wide: HKEY_LOCAL_MACHINE\Software\Google\DriveFS
User only: HKEY_CURRENT_USER\Software\Google\DriveFS
Override: HKEY_LOCAL_MACHINE\Software\Policies\Google\DriveFS
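
For example, an admin could pin browser authentication on as an override value that end users can't change. A sketch from an elevated Windows command prompt, using a REG_DWORD of 1 for true per the boolean convention described under Settings:

```
reg add "HKLM\Software\Policies\Google\DriveFS" /v ForceBrowserAuth /t REG_DWORD /d 1 /f
```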

macOS

Host-wide: /Library/Preferences/com.google.drivefs.settings
User only: ~/Library/Preferences/com.google.drivefs.settings
Override: /Library/Managed Preferences/com.google.drivefs.settings.plist

macOS examples

Host-wide mount point:
sudo defaults write /Library/Preferences/com.google.drivefs.settings DefaultMountPoint '/Volumes/Google Drive File Stream'

Host-wide trusted certificates file:
sudo defaults write /Library/Preferences/com.google.drivefs.settings TrustedRootCertsFile /Library/MyCompany/DriveFileStream/MyProxyCert.pem

User maximum download bandwidth:
defaults write com.google.drivefs.settings BandwidthRxKBPS -int 100

User-enabled browser authentication:
defaults write com.google.drivefs.settings ForceBrowserAuth -bool true

Settings

Set these name/value pairs using the registry keys or defaults command, as described above. On Windows, create the registry keys if they don’t already exist. On macOS, the defaults command maintains a plist file for settings. You should not modify the plist file directly, as some changes might not be applied.

Setting name, value type, and description:

AutoStartOnLogin*
DWORD (Windows), Bool (macOS)
Start Drive File Stream automatically on session login.

BandwidthRxKBPS
DWORD (Windows), Number (macOS)
Maximum downstream kilobytes per second.

BandwidthTxKBPS
DWORD (Windows), Number (macOS)
Maximum upstream kilobytes per second.

ContentCachePath
String
Sets the path to the content cache location on a connected APFS, HFS+, or NTFS file system.

When Drive File Stream restarts, local data in the old content cache moves to the new content cache location. If you delete your custom setting, data moves back to the default location.

The default cache location is:

Windows: %LOCALAPPDATA%\Google\DriveFS
macOS: ~/Library/Application Support/Google/DriveFS

ContentCacheMaxKbytes
QWORD (Windows), Number (macOS)
Sets the limit on content cache size in kilobytes. The limit is capped at 20% of the available space on the hard drive, regardless of the setting value. The setting does not apply to files made available offline or files that are in the process of uploading. This setting is only available to admins, as an override or host-wide setting.

DefaultMountPoint
String
Windows: Sets the mounted drive letter. You can use an environment variable to specify the drive letter.
macOS: Sets the mounted drive path. You can include a tilde (~) or environment variables in the path.

DisableRealTimePresence*
DWORD (Windows), Bool (macOS)
Disables real-time presence in Microsoft Office. This can also be disabled for organizational units from the Admin console. See step 3 of Deploy Drive File Stream.

ForceBrowserAuth*
DWORD (Windows), Bool (macOS)
Use browser authentication. If your organization uses security keys or SSO, this setting may resolve sign-in problems.

MinFreeDiskSpaceKBytes
QWORD (Windows), Number (macOS)
Controls the amount of local space used by Drive File Stream's cache. Stops writing content to the disk when free disk space drops below this threshold, in kilobytes.

Proxy settings:

DisableSSLValidation*
DWORD (Windows), Bool (macOS)
Disables validation of SSL traffic. Traffic is still encrypted, but Drive File Stream will not verify that the SSL certificates of the upstream servers are valid. This is inherently insecure: it would allow a man-in-the-middle attack against traffic to Google Drive. Only settable host-wide.

TrustedRootCertsFile
String
The full path to an alternate file to use for validating host SSL certificates. It must be in Privacy Enhanced Mail (PEM) format. Set this if your users are on networks with decrypting proxies.

The file should contain the contents of the roots.pem file shipped with Drive File Stream, plus the certificates used to authenticate your proxy. These additions should correspond to the proxy-signing certificates you added to the certificate stores in your fleet of machines.

You can find roots.pem in:

Windows: \Program Files\Google\DriveFS\<version>\config\roots.pem
macOS: /Applications/Google Drive File Stream.app/Contents/Resources/roots.pem

Only settable host-wide.

DisableCRLCheck*
DWORD (Windows), Bool (macOS)
Disables checking Certificate Revocation Lists (CRLs) provided by certificate authorities.

If not explicitly set, this defaults to true when TrustedRootCertsFile is provided, and false otherwise. Sites that use self-signed certificates for their content inspection proxies typically don't provide a CRL.

Enterprises that specify a CRL in their proxy certificate can explicitly set DisableCRLCheck to 0 for the added check.

For boolean values, use 1 for true and 0 for false (Windows), or use true and false (macOS).

Related topics

Source:

https://support.google.com/a/answer/7644837

Attackers Use Legacy IMAP Protocol to Bypass Multifactor Authentication in Cloud Accounts, Leading to Internal Phishing and BEC


Threats to cloud-based applications have been growing, and passwords, the traditional method used to secure accounts, are often no longer enough to protect users from the dangers they potentially face. The need for more comprehensive security in cloud-based applications has led to vendors offering multifactor authentication (MFA) as an integral feature of their products and services. By using MFA, users limit the risk that an attacker will gain control of their accounts by spreading authentication across multiple devices.

However, while MFA provides an additional layer of security for protecting account access, it's not foolproof. For example, a recent study from Proofpoint examined brute-force attacks against user accounts in major cloud services. The attacks reportedly took advantage of legacy email protocols, phishing, and credential dumps to bypass MFA.

Notably, attackers were able to abuse legacy protocols, most commonly the IMAP authentication protocol, to bypass even multifactor authentication. The study noted that the IMAP protocol can be abused in certain situations, such as when users employ third-party email clients that lack modern authentication support. IMAP abuse can also occur in two other cases: when targets do not implement application passwords, and when it is directed against shared email accounts where IMAP is not blocked and/or MFA cannot be used. The report also said these attacks often go undetected, appearing as isolated failed logins rather than external attacks. Threat actors use these accounts as entry points into the system, after which lateral movement is carried out via internal phishing and BEC to expand their reach within the organization.

The six-month study saw over 72 percent of cloud tenants being targeted at least once by attackers, while 40 percent had at least one compromised account within their system. Even more concerning, 15 out of every 10,000 active user accounts were successfully breached. Hijacked servers and routers were used as the main attack platforms, with the network devices gaining access to approximately one new tenant every 2.5 days during a 50-day period.

Roughly 60 percent of the tenants in the study that were using Microsoft Office 365 and G Suite were targeted with password-spraying attacks via IMAP, and 25 percent of those experienced a successful breach.

As more companies across industries adopt cloud-based services, it’s expected that cybercriminals will go after accounts for cloud-based platforms. Once an account has been compromised, whether through hacking or brute force, the account could be used to communicate with executives and their staff. Internal BEC emails could trick the targets into transferring funds and personal or corporate data or downloading malicious files. Compromised email accounts, for example, had been found replying to email threads to deliver malware. These BEC attempts can be difficult to detect given that they come from legitimate (though compromised) email accounts.

A feature such as MFA is only one part of an effective multilayered security implementation. Organizations looking to boost their security can start with these best practices:

  • Passwords still have a role to play as a component of multifactor authentication. Ensure that users have strong, regularly changed passwords to stay protected from brute-force attacks. This includes using at least 12 characters with a mix of uppercase and lowercase letters, numbers, and special characters. Ask users to avoid common or easily guessable passwords, as well as passwords that contain obvious information such as names or birthdates.
  • Educate employees on how to identify phishing attacks. Common indicators that an email is a phishing attempt include suspicious-looking email addresses and the presence of misspellings and typographical errors.
  • Furthermore, attackers often try to make their phishing attempts as convincing as possible. Thus, users should avoid giving out personal and company information unless they are absolutely certain that the person or group they are communicating with is legitimate.

Given that cybercriminals use compromised accounts and internal BEC emails, organizations should also consider the use of security solutions designed to combat the growing threat. Trend Micro's existing BEC protection uses AI, including expert rules and machine learning, to analyze email behavior and intention. The new and innovative Writing Style DNA technology goes further by using machine learning to recognize the DNA of an executive's writing style based on past written emails. Designed for high-profile users who are prone to being spoofed, Writing Style DNA technology can detect forged emails when the writing style of an email does not match that of the supposed sender. The technology is used by Trend Micro™ Cloud App Security™ and ScanMail™ Suite for Microsoft® Exchange™ solutions to cross-match the email content's writing style to the sender's by taking into account the following criteria: capital letters, short words, punctuation marks, function words, word repeats, distinct words, sentence length, and blank lines, among 7,000 other writing characteristics.

Source
https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/attackers-use-legacy-imap-protocol-to-bypass-multifactor-authentication-in-cloud-accounts-leading-to-internal-phishing-and-bec

Multi-Cloud Disaster Recovery Benefits and Challenges

The cloud has definitely changed both operations and data protection requirements for almost all businesses today. Not only is the cloud the basis for popular SaaS applications like Office 365, it is also used as a backup and DR target by many organizations.

Using the cloud opens up new possibilities for DR. However, one growing complication for DR and the cloud is the use of multiple clouds. Today, many businesses have adopted multiple clouds: many use both Amazon AWS and Microsoft Azure, or in some cases Google Cloud or IBM Cloud. According to research by the IBM Institute for Business Value, 85% of today's enterprises operate in multi-cloud environments, and most of the organizations that don't currently have a multi-cloud IT strategy plan to adopt one in the near future. The IBM research estimates that by 2021, 98% of businesses will have moved to multiple hybrid clouds. Similarly, an ESG study found that 81% of enterprises use more than one public cloud infrastructure service provider, while only 15% use a single cloud provider.

Multi-Cloud Advantages

Using multiple clouds definitely has its advantages. Cost is one of the primary driving factors: in the IBM study, which surveyed 1,016 executives from 19 different industries, 66% of respondents said multi-cloud is crucial to reducing costs. Using multiple clouds not only allows you to pick the most cost-effective options, it also allows you to pick the best cloud services for your specific business needs. Adopting a multi-cloud strategy can also help businesses avoid vendor lock-in, decreasing their dependence on a single cloud provider.

Multi-Cloud DR Planning

As a general rule, the big public cloud providers like AWS and Azure are more reliable than your own local data centers. Even so, a large-scale disaster could potentially impact both your organization and your cloud provider. Multi-cloud disaster recovery enables you to replicate your resources to a second cloud provider in another geographic region. Typically, it's best to use a second cloud provider within the same country, since crossing international boundaries can bring up legal and regulatory constraints that you are probably better off without. Locating the second cloud provider in a different geographic region ensures that there is virtually no chance both providers will suffer a major outage at the same time. For instance, you could use one provider's US west coast region and another provider's east coast region.

There are challenges in using multi-cloud DR. Each cloud provider has its own management portal and its own services, which require different skill sets. For IaaS implementations, be aware that the cloud providers use different on-disk formats for their VMs: Microsoft Azure uses the VHD format, while AWS uses the AMI format. As a general rule, each cloud provider's DR services are not designed to deal with multiple cloud providers. However, some third-party DR solutions can bridge multiple clouds, making it far easier to implement a multi-cloud DR strategy. If you're looking to implement a multi-cloud DR plan, it's best to begin with a small, tightly scoped proof of concept (POC) before expanding to the rest of your organization. And like all DR plans, regular testing is a must.

Source
https://www.petri.com/multi-cloud-disaster-recovery-benefits-and-challenges