External Remote Cache and Execution
External remote resources
Every Workflows deployment includes a remote cache compliant with the Bazel Remote Execution Protocol v2. Workflows supports the provisioning of external-facing resources for use in an office/team environment. With a secure OpenID Connect (OIDC)-based or HTTP Basic Auth-based authentication scheme, only authenticated users will be able to access the externally facing remote resource cluster. With an external cluster, a team can make better use of off-site resources to work faster and more effectively, reducing thrash caused by individual machine settings and resource allocation. Configuration of the external cluster is discussed in detail below.
For configuration of the remote cache, see remote cache. For configuration of remote build execution (RBE), see remote build execution.
Enable an external remote resource cluster
For customers whose team workflows require an externalized remote cluster, one can be bootstrapped with minimal additional configuration. Note that this does not externalize the remote cluster used by the CI runners; it creates a separate deployment explicitly for external use cases, with the necessary authentication to permit use by the Bazel command-line tool on individual machines.
Before enabling the external remote cache, a Route53 public hosted zone (AWS) or a Cloud DNS public zone (GCP) is required for the domain that fronts the external cache. If you use another DNS provider such as Cloudflare, you must point your external DNS at the new hosted zone's name servers after it is created. For Cloudflare, follow this guide. Provisioning instructions for Route53/Cloud DNS and the external cache follow in the next section.
Also, for AWS, pass vpc_subnets_public to the Aspect Workflows module in order for the remote cluster to be exposed to the public Internet.
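For example, a minimal sketch of that setting (the subnet IDs are placeholders for your own VPC's public subnets; only vpc_subnets_public is the relevant argument here):
module "aspect_workflows" {
  ...
  # Public subnets that the Internet-facing load balancer can use.
  # Replace the placeholder IDs with your VPC's public subnets.
  vpc_subnets_public = [
    "<public subnet id 1>",
    "<public subnet id 2>",
  ]
  ...
}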
Set up a hosted zone on your cloud provider
- AWS
- GCP
Provision a new Route53 public hosted zone by adding the following Terraform to your existing code and apply. You do not need to create any additional records inside this zone. The Aspect Workflows module adds all required A records to make the external remote cache functional and discoverable.
module "remote-cache-dns" {
source = "terraform-aws-modules/route53/aws//modules/zones"
version = "2.10.2"
zones = {
"<DNS name from your provider, e.g. remote-cache.aspect.build>" = {
domain_name = "<DNS name>"
comment = "<DNS name>"
tags = {
Name = "<DNS name>"
}
}
}
}
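If you delegate this zone from another DNS provider (for example Cloudflare), you will need the zone's name servers. A hedged sketch using the AWS CLI, with the hosted zone ID as a placeholder:
# List the name servers for the hosted zone created above.
aws route53 get-hosted-zone --id <hosted zone ID> --query "DelegationSet.NameServers"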
Provision a new Cloud DNS public zone by adding the following Terraform to your existing code and apply. You do not need to create any additional records inside this zone. The Aspect Workflows module adds all required A records to make the external remote cache functional and discoverable.
resource "google_dns_managed_zone" "external_remote_zone" {
project = local.project
name = "external-remote"
dns_name = "<DNS name>." # Note the trailing `.`
description = "Zone for external remote cache and execution"
visibility = "public"
}
To find the name servers to register with your DNS provider, navigate to Cloud DNS, click on the zone that was created, then click REGISTRAR SETUP.
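Alternatively, the name servers can be listed from the command line; a sketch assuming the zone name used in the Terraform above:
# Print the name servers assigned to the Cloud DNS zone created above.
gcloud dns managed-zones describe external-remote --project <project id> --format="value(nameServers)"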
Enable the external remote cluster
From within your Aspect Workflows module definition, add the following code:
- AWS
- GCP
module "aspect_workflows" {
...
external_remote = {
dns = {
hosted_zone_id = module.remote-cache-dns.route53_zone_zone_id["<DNS name>"]
}
}
...
}
module "aspect_workflows" {
...
external_remote = {
dns = {
zone_name = google_dns_managed_zone.external_remote_zone.name
}
}
...
}
As with the CI runner remote cluster, you can customize the external remote cluster to suit your team's needs. When you apply the Workflows module, a new Internet-facing load balancer is spun up with either an HTTP Basic Auth or an OpenID Connect (OIDC) scheme over HTTPS/TLS. Instructions for invocation and use follow in the next section.
Enabling OIDC for the external remote cluster
If organizational rules require it, the external remote cluster can be configured to use OIDC as the authentication scheme. This is considered more secure than HTTP Basic Auth, which has a single shared key that is rotated every month. To enable this functionality, a customer must provide all the OIDC configuration options. Some guides for setting up OIDC with popular IdPs are included below.
- AWS
- GCP
module "aspect_workflows" {
...
external_remote = {
...
oidc = {
issuer = "https://<endpoint>" // example
auth_endpoint = "https://<endpoint>/auth" // example
token_endpoint = "https://<endpoint>/token" // example
user_info_endpoint = "https://<endpoint>/userInfo" // example
client_id = "<id>"
client_secret = "<sensitive-secret>" // this should be stored in a sensitive Terraform value
session_timeout_seconds = 604800 // 7 days in seconds, the default
}
}
}
Enable the following APIs if they are not already enabled:
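As a hedged sketch, the IAP and Identity Platform APIs are the likely candidates given the steps that follow; the service names below are assumptions, so verify them against your deployment before enabling:
# Assumed services for the IAP / Identity Platform (GCIP) setup below; confirm before enabling.
gcloud services enable iap.googleapis.com identitytoolkit.googleapis.com --project <project id>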
Configure OAuth consent screen
In order to enable OIDC using external identities from Identity Platform (GCIP), IAP must first be enabled with IAM authentication, then manually switched over to use external identities for authentication. This is because GKE Ingress does not yet have rich support for authentication via GCIP. This requires the creation of an OAuth Consent Screen. This consent screen will not be used, but it is required for the terraform apply to succeed.
Configure your project's consent screen at https://console.cloud.google.com/apis/credentials/consent
- Select Internal
- Click Create
- Set the App name field to Aspect Workflows External Remote
- Set User support email to a user at your company
- Set an email for Developer contact information
- Click Save and continue
- Don't set any scopes
- Click Save and continue
Obtain the ID of the consent screen by running:
gcloud iap oauth-brands list
There should be a single "brand" with an ID of the form projects/<number>/brands/<number>. Copy this ID and input it as the value of consent_screen_brand in the Terraform below, in addition to the information provided by your OIDC provider.
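If you prefer to print only the ID, a small sketch using the same command:
# Print only the brand ID (projects/<number>/brands/<number>) for use as consent_screen_brand.
gcloud iap oauth-brands list --format="value(name)"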
Terraform module
module "aspect_workflows" {
...
auth = {
consent_screen_brand = "projects/<number>/brands/<number>"
providers = {
oidc = {
# Rename to identify your provider
my-provider = {
display_name = "My Identity Provider"
issuer = "https://<endpoint>" # Fill in the endpoint
client_id = "<id>" # Fill in the client id
# The client secret should be stored in a sensitive Terraform value
client_secret = data.google_secret_manager_secret_version_access.oidc_client_secret_version.secret_data
}
}
}
}
external_remote = {
...
iap = {
oidc_provider = "my-provider" # Name of provider declared above
}
}
}
# Example secret access
data "google_secret_manager_secret_version_access" "oidc_client_secret_version" {
project = "my-project"
secret = "external-remote-oidc-client-secret"
}
Manually switch IAP to use external identities
After a successful apply, wait for the GKE Ingress resource to be created. The Ingress is listed in the Cloud console; wait for its status to show a green checkmark before proceeding.
Navigate to Identity-Aware Proxy. There will be several backend services named external/external-buildbarn-frontend, one of which has the IAP toggle switched on. This backend will be switched to use external identities (GCIP) for authentication.
Select the backend with IAP enabled.
- Select "Start" under "Use external identities for authorisation"
- Select "Create a sign-in page for me" and choose a region
- Check the OIDC provider that you configured above in Terraform
- Click "Save"
- After the sign-in page has been created, you will be given a URL to the sign-in page to copy. Copy it.
- If the table still shows the backend as using IAM and not GCIP, refresh the page
Add auth callback URLs to your OIDC provider
Using the sign-in page URL copied in the previous step, add the following two URIs to your OIDC provider's list of authorized redirect URIs.
<signInPageUrl>/__/auth/handler
https://aw-remote-ext.<DNS name>/__auth__
Wait for Google to verify the managed certificate
Google can take up to 24 hours to provision the managed TLS certificate for the remote endpoint. You can view the status of the certificate by navigating to the Ingress in the Cloud console, clicking on the Ingress resource, scrolling down to Front-end configuration, and clicking on the certificate resource. You are ready to use the remote cache endpoint once the certificate shows a green check under its status.
Identity provider guides
There are some important caveats to consider when using OIDC. First, all authentication concerns are outsourced to the specific IdP. This means that if the IdP sets controls over timeouts of credentials, the remote cluster has no control over those settings, and cannot override them. The remote cluster also has no concept of "log out", and so will continue to allow access until credentials expire. Finally, because of how OIDC works, the cluster will cache the access token for a user on sign in, and then use the refresh token to refresh it until the session token expires (as configured above). This means that if a user's access to the IdP is revoked while the access token is still active, their session will still be valid, but their refresh event will fail as soon as the access token expires. This window is typically small, but is wholly at the discretion of the IdP.
Connecting bazel to the external remote cluster
Depending on the authentication scheme used, there are different processes for connecting bazel to the external remote cluster. Once configured, the external remote cluster should be just as performant as the CI runner cluster, and can be tuned to meet any team workload requirements.
Common settings
Regardless of the underlying authentication scheme, the following settings need to be added to a user's .bazelrc file to enable connectivity to the remote resource cluster.
- AWS
- GCP
build --remote_accept_cached
build --remote_upload_local_results
# if using the remote cache
build --remote_cache="grpcs://aw-remote-ext.<DNS name>:8980"
# if using the remote executor
build --remote_executor="grpcs://aw-remote-ext.<DNS name>:8980"
# if using the remote downloader
build --experimental_remote_downloader="grpcs://aw-remote-ext.<DNS name>:8981"
build --remote_accept_cached
build --remote_upload_local_results
# if using the remote cache
build --remote_cache="grpcs://aw-remote-ext.<DNS name>"
# if using the remote executor
build --remote_executor="grpcs://aw-remote-ext.<DNS name>"
The above lines can be collapsed into one at the user's discretion. The --remote_accept_cached and --remote_upload_local_results flags default to true, and can be omitted if no other configuration overrides those defaults.
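For example, the cache-only settings could be written as a single line; the port shown matches the AWS example above, so adjust or drop it for GCP:
build --remote_accept_cached --remote_upload_local_results --remote_cache="grpcs://aw-remote-ext.<DNS name>:8980"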
OIDC
In order for bazel to obtain up-to-the-second valid credentials for a given IdP and OIDC configuration, a special utility called a credential helper is used.
For Workflows, Aspect has developed a purpose-built credential helper designed to work with Workflows-instantiated remote clusters. It will not work with remote clusters from any other provider. First, download the correct credential helper for your platform.
Once downloaded, unzip the file, which should provide the credential-helper binary. This binary should be moved somewhere accessible from the user's $PATH (meaning it can be invoked directly from a terminal). It can optionally be renamed, e.g. to aspect-credential-helper, if there are other helpers on the user's machine. The same binary can be reused for all Aspect external remote clusters, independent of the OIDC provider behind each cluster.
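A minimal sketch of that installation on a Unix-like machine, assuming the archive was downloaded as credential-helper.zip and that /usr/local/bin is on the user's $PATH:
# Unpack the downloaded archive and install the binary somewhere on $PATH.
unzip credential-helper.zip
chmod +x credential-helper
# Optional rename to distinguish it from other credential helpers on the machine.
mv credential-helper /usr/local/bin/aspect-credential-helper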
Once downloaded and placed on the $PATH, the following line must be added to the .bazelrc file to point Bazel at the credential helper for the remote cluster:
build --credential_helper="aw-remote-ext.<DNS name>"=aspect-credential-helper
Once the configuration is complete, a user must log in to their IdP by running the following command on the command line:
credential-helper login aw-remote-ext.<DNS name>
This will save the user's credentials in a local keychain for retrieval on each Bazel build. When the underlying session token expires, the user will have to run the same command again. So long as they remain signed in to their IdP in the background, they will not need to sign in more frequently than that, as the refresh token will retrieve up-to-date credentials in the backend.
HTTP Basic Auth (AWS only)
The Workflows module stores the HTTP Basic Auth username and password combination in AWS Systems Manager Parameter Store. The SecureString parameter name is aw_external_cache_auth_header. A quick link to the parameter is: https://<AWS_REGION>.console.aws.amazon.com/systems-manager/parameters/aw_external_cache_auth_header/description?tab=Table.
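The value can also be retrieved from the command line; a sketch assuming the AWS CLI is configured for the account that hosts the Workflows deployment:
# Fetch and decrypt the Basic Auth header value stored by the Workflows module.
aws ssm get-parameter --name aw_external_cache_auth_header --with-decryption --query "Parameter.Value" --output text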
Once the parameter value is retrieved, it can be added to the bazel command as follows:
--remote_header="Authorization=Basic <INSERT AUTH KEY FROM SSM HERE>"