
External Remote Cache and Execution

External remote resources

Every Workflows deployment includes a remote cache compliant with the Bazel Remote Execution Protocol v2. Workflows supports the provisioning of external-facing resources for use in an office/team environment. With a secure OpenID Connect (OIDC)-based or HTTP Basic Auth-based authentication scheme, only authenticated users will be able to access the externally facing remote resource cluster. With an external cluster, a team can make better use of off-site resources to work faster and more effectively, reducing thrash caused by individual machine settings and resource allocation. Configuration of the external cluster is discussed in detail below.

For configuration of the remote cache, see remote cache. For configuration of the remote build execution (RBE), see remote build execution.

Enable an external remote resource cluster

For customers whose team workflows require an externalized remote cluster, one can be bootstrapped with minimal additional configuration. Note that this does not externalize the remote cluster used by the CI runners; it creates a separate deployment explicitly for external use cases, with the necessary authentication to permit use by the Bazel command-line tool on individual machines.

Before enabling the external remote cache, a Route53 public hosted zone (AWS) or a Cloud DNS public zone (GCP) is required for the domain that fronts the external cache. If you use another DNS provider such as Cloudflare, you must point the external DNS name at the hosted zone after it is created. For Cloudflare, follow this guide. Provisioning instructions for Route53/Cloud DNS and the external cache follow in the next section.

Also, for AWS, pass vpc_subnets_public to the Aspect Workflows module so that the remote cluster is exposed to the public Internet.
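
For example (a minimal sketch; the subnet IDs are placeholders for your own public subnets):

module "aspect_workflows" {
  ...
  // Public subnets used to expose the Internet-facing load balancer.
  vpc_subnets_public = ["<public-subnet-id-1>", "<public-subnet-id-2>"]
  ...
}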

Set up a hosted zone on your cloud provider

Provision a new Route53 public hosted zone by adding the following Terraform to your existing code and apply. You do not need to create any additional records inside this zone. The Aspect Workflows module adds all required A records to make the external remote cache functional and discoverable.

module "remote-cache-dns" {
source = "terraform-aws-modules/route53/aws//modules/zones"
version = "2.10.2"

zones = {
"<DNS name from your provider, e.g. remote-cache.aspect.build>" = {
domain_name = "<DNS name>"
comment = "<DNS name>"
tags = {
Name = "<DNS name>"
}
}
}
}
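
If you also need the zone's name servers, for example to delegate the domain from Cloudflare as described above, the zones module exposes them as an output. A minimal sketch (the output name remote_cache_name_servers is illustrative):

output "remote_cache_name_servers" {
  // Name servers to copy into NS records at your upstream DNS provider.
  value = module.remote-cache-dns.route53_zone_name_servers["<DNS name>"]
}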

Enable the external remote cluster

From within your Aspect Workflows module definition, add the following code:

module "aspect_workflows" {
...
external_remote = {
dns = {
hosted_zone_id = module.remote-cache-dns.route53_zone_zone_id["<DNS name>"]
}
}
...
}

As with the CI runner remote cluster, you can customize the external remote cluster to suit your team's needs. When you apply the Workflows module, a new Internet-facing load balancer is spun up with either an HTTP Basic Auth or an OpenID Connect (OIDC) scheme over HTTPS/TLS. Instructions for invocation and use follow in the sections below.

Enabling OIDC for the external remote cluster

If organizational rules require it, the external remote cluster can be configured to use OIDC as the authentication scheme. This is considered more secure than HTTP Basic Auth, which has a single shared key that is rotated every month. To enable this functionality, a customer must provide all the OIDC configuration options. Some guides for setting up OIDC with popular IdPs are included below.

module "aspect_workflows" {
...
external_remote = {
...
oidc = {
issuer = "https://<endpoint>" // example
auth_endpoint = "https://<endpoint>/auth" // example
token_endpoint = "https://<endpoint>/token" // example
user_info_endpoint = "https://<endpoint>/userInfo" // example
client_id = "<id>"
client_secret = "<sensitive-secret>" // this should be stored in a sensitive Terraform value
session_timeout_seconds = 604800 // 7 days in seconds, the default
}
}
}
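
Rather than hard-coding the client secret, store it in a sensitive Terraform value, as the comment above suggests. A minimal sketch (the variable name oidc_client_secret is illustrative):

variable "oidc_client_secret" {
  type      = string
  sensitive = true // redacts the value from Terraform plan/apply output
}

The oidc block can then reference it with client_secret = var.oidc_client_secret.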

Identity provider guides

There are some important caveats to consider when using OIDC. First, all authentication concerns are outsourced to the specific IdP. This means that if the IdP sets controls over credential timeouts, the remote cluster has no control over those settings and cannot override them. The remote cluster also has no concept of "log out", and so will continue to allow access until credentials expire. Finally, because of how OIDC works, the cluster caches a user's access token on sign-in, then uses the refresh token to refresh it until the session token expires (as configured above). This means that if a user's access to the IdP is revoked while the access token is still active, their session remains valid until the access token expires, at which point the refresh fails. For example, if the IdP issues access tokens with a one-hour lifetime, a revoked user could retain access for up to roughly one hour. This window is typically small, but is wholly at the discretion of the IdP.

Connecting Bazel to the external remote cluster

Depending on the authentication scheme used, there are different processes for connecting Bazel to the external remote cluster. Once configured, the external remote cluster should be just as performant as the CI runner cluster, and can be tuned to meet any team workload requirements.

Common settings

Regardless of underlying authentication scheme, the following settings need to be added to a user's .bazelrc file to enable connectivity to the remote resource cluster.

build --remote_accept_cached
build --remote_upload_local_results

# if using the remote cache
build --remote_cache="grpcs://aw-remote-ext.<DNS name>:8980"
# if using the remote executor
build --remote_executor="grpcs://aw-remote-ext.<DNS name>:8980"
# if using the remote downloader
build --experimental_remote_downloader="grpcs://aw-remote-ext.<DNS name>:8981"

The above lines can be collapsed into one at the user's discretion, as shown below. The first two flags, --remote_accept_cached and --remote_upload_local_results, are true by default and can be omitted if no other configuration overrides those defaults.
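
For example, a cache-only configuration with the defaults written out explicitly could read:

build --remote_accept_cached --remote_upload_local_results --remote_cache="grpcs://aw-remote-ext.<DNS name>:8980"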

OIDC

In order for Bazel to obtain up-to-the-second valid credentials for a given IdP and OIDC configuration, a special utility called a credential helper is used. For Workflows, Aspect has developed a purpose-built credential helper designed to work with Workflows-instantiated remote clusters; it will not work with remote clusters from any other provider. First, download the correct credential helper for your platform.

Once downloaded, unzip the file, which should provide the credential-helper binary. This binary should be moved to somewhere accessible from the user's $PATH (meaning it can be invoked directly from a terminal). It can optionally be renamed, e.g. to aspect-credential-helper if there are other helpers on a user's machine. This binary can be reused for all Aspect external remote clusters, independent of underlying OIDC providers per cluster.
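
For example, on macOS or Linux (a minimal sketch; the ~/.local/bin destination is an assumption and must already be on your $PATH):

# Make the unzipped binary executable and move it onto the $PATH.
chmod +x credential-helper
mv credential-helper ~/.local/bin/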

Once the helper is downloaded and placed on the $PATH, the following line must be added to the .bazelrc file to point Bazel at the credential helper for the remote cluster (if you renamed the binary, use that name instead):

build --credential_helper=aw-remote-ext.<DNS name>=credential-helper

Once the configuration is complete, a user must log in to their IdP by running the following command on the command line:

credential-helper login aw-remote-ext.<DNS name>

This will save the user's credentials in a local keychain for retrieval on each Bazel build. When the underlying session token expires, the user will have to run the same command again. So long as they remain signed in to their IdP in the background, they will not need to sign in more frequently than that, as the refresh token retrieves up-to-date credentials in the backend.

HTTP Basic Auth (AWS only)

The Workflows module stores the HTTP Basic Auth username and password combination in AWS Systems Manager Parameter Store.

The SecureString parameter name is aw_external_cache_auth_header. A quick link to the parameter is: https://<AWS_REGION>.console.aws.amazon.com/systems-manager/parameters/aw_external_cache_auth_header/description?tab=Table.
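
The value can also be retrieved from the terminal with the AWS CLI, assuming credentials that are allowed to read and decrypt the parameter:

aws ssm get-parameter \
  --name aw_external_cache_auth_header \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text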

Once the parameter value is retrieved, it can be added to the Bazel command as follows:

--remote_header="Authorization=Basic <INSERT AUTH KEY FROM SSM HERE>"