Bazel's Evaluation Model

By the end of this section, you should have a rough mental model of what Bazel is doing behind the scenes as it performs your build and test commands.

Bazel Evaluation Model

Bazel's runtime is a series of phases. You won't normally interact with them directly: if you simply ask Bazel to run a test, it performs these phases automatically.

They are "pipelined" - a phase may begin before the previous one has finished.

note

Starting in Bazel 7, execution can start before analysis is complete, thanks to Skymeld.

  1. Configure "phase". Read all the source files and update the BUILD files.

In our case we will run aspect configure, but most repositories compile a "Gazelle" binary and then run that.

This phase is manual - Bazel doesn't do this for you, because it may take longer than Bazel is willing to spend on a "no-op" build. It's not even regarded as a "phase" in Bazel terminology, but it is the first step in the process so we model it this way in the course.

Developers are expected to "configure" when they get an error message that BUILD files are out-of-date. It's also possible to automate this in other ways, such as with editor extensions, autogazelle, etc.
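
Where the generator needs steering, Gazelle-style tools are typically controlled with directive comments checked into the BUILD files themselves. A small sketch (the exclude directive is a common Gazelle convention; the path is made up):

# Directives are ordinary comments that the configure/Gazelle run reads.
# gazelle:exclude third_party/vendored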

  2. Fetching "phase". Download external resources that are required by the requested targets.

This phase is implicit: during the "Loading" phase, any reference that walks outside the sources in the monorepo automatically triggers a fetch.

Bazel's dependency graph determines which dependencies are fetched, and fetching is lazy: only what the requested targets actually need gets downloaded. For example, a third-party package that isn't used anywhere in the repository should never be fetched.

If you see something being fetched that shouldn't be, please raise an issue with your DevInfra team!
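
Concretely (hypothetical target, using the conventional rules_jvm_external label form), the dependency edge below is what pulls in the external @maven repository; if no target anywhere referenced it, it would never be downloaded:

java_library(
    name = "storage",
    srcs = ["Storage.java"],
    # This label walks outside the monorepo into the external @maven
    # repository, so building :storage triggers its fetch.
    deps = ["@maven//:com_google_guava_guava"],
)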

  3. Loading phase. Load and evaluate all extensions and all BUILD files that are needed for the build.

This triggers fetching as well. Executing the BUILD files simply instantiates rules: each time a rule is called, it adds a target to the graph. This is also where macros are evaluated.
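
As a sketch of what that means (hypothetical names, not the real definitions in this repository): loading the BUILD file below evaluates the macro, which in turn calls two rule functions; each call adds a target to the graph, and nothing is compiled yet.

# defs.bzl - a macro is just a Starlark function that instantiates rules.
def java_binary_with_lib(name, srcs, deps = []):
    # Each rule call adds one target to the graph during loading.
    native.java_library(
        name = name + "_lib",
        srcs = srcs,
        deps = deps,
    )
    native.java_binary(
        name = name,
        main_class = "com.example.Main",  # hypothetical
        runtime_deps = [":" + name + "_lib"],
    )

# BUILD - evaluated during the loading phase.
load(":defs.bzl", "java_binary_with_lib")

java_binary_with_lib(
    name = "client",
    srcs = ["Client.java"],
)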

  4. Analysis phase. The implementation function for each rule is executed, and actions are instantiated.

An "action" describes how to generate a set of outputs from a set of inputs, such as "run gcc on hello.c to get hello.o".

A rule must declare explicitly which files it will generate before the actual commands are executed. In other words, the analysis phase takes the graph generated by the loading phase and produces an action graph.
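
A minimal custom rule sketch (hypothetical, not part of this repository) shows where that happens: the implementation function runs during the analysis phase, declares its outputs up front, and registers an action, but the command itself only runs later, during execution.

# compress.bzl - illustrative only.
def _gzip_impl(ctx):
    # Analysis: declare the output file before any command is executed.
    out = ctx.actions.declare_file(ctx.label.name + ".gz")

    # Register an action: "run gzip on the input to produce the output".
    # Nothing runs here; Bazel just records the step in the action graph.
    ctx.actions.run_shell(
        inputs = [ctx.file.src],
        outputs = [out],
        command = "gzip -c '%s' > '%s'" % (ctx.file.src.path, out.path),
    )

    return [DefaultInfo(files = depset([out]))]

gzip_file = rule(
    implementation = _gzip_impl,
    attrs = {"src": attr.label(allow_single_file = True)},
)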

  5. Execution phase. Actions are executed when at least one of their outputs is required.

The bazel-out folder is up-to-date at the end of the execution phase.

Try it: Interacting with the phases

We will now run a series of commands to walk through the Bazel phases in order.

Configure Phase

Run bazel configure. All the source files for the enabled languages are parsed to infer their dependency graph.

As a result, the BUILD files should now be up-to-date.
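
For instance, a Python package under cli/ might now contain a target like the following (hypothetical file and target names), with srcs and deps inferred from the import statements in the sources:

load("@rules_python//python:defs.bzl", "py_library")

# Generated (and kept up-to-date) by `bazel configure`.
py_library(
    name = "app",
    srcs = [
        "app.py",
        "config.py",
    ],
    deps = [
        "//cli/utils",          # inferred from: import utils
        "@pip//:requests_pkg",  # inferred from: import requests
    ],
)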

Fetching Phase

  1. Run bazel fetch cli/... to download all the Python dependencies from PyPI.
  2. Run bazel fetch frontend/... to download all the JavaScript dependencies from npm, and so on.

You can also fetch third-party packages directly:
% bazel fetch @org_golang_google_grpc//... @org_golang_google_protobuf//...
% bazel fetch @pip//:requests_pkg
% bazel fetch @maven//...
% bazel fetch @npm//:all
% bazel fetch @swiftpkg_swifterswift//...

As a result, you ought to be able to turn off Wi-Fi or unplug the network and still perform the remaining phases.

Caching fetches: the repository cache

Fetching external dependencies can be slow. In a big monorepo, you'll download many large files for hermetic toolchains.

Bazel caches these in the $(bazel info repository_cache) folder.

  • It caches the downloaded files themselves, not the extracted repositories.
  • Always give the integrity hash on your downloads: that hash is the cache key.
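
Concretely, the hash you declare on a download is what the repository cache is keyed on. A sketch (made-up URL and a placeholder checksum):

load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "zlib",
    urls = ["https://example.com/zlib-1.3.tar.gz"],
    strip_prefix = "zlib-1.3",
    # Placeholder value: the repository cache is keyed by this hash, so a
    # second checkout (or a build after `bazel clean --expunge`) re-uses the
    # already-downloaded file instead of hitting the network again.
    sha256 = "<sha256 of the archive>",
)
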
note

There's no cache for the external repositories created by repository rules (only the downloaded files are cached), which is a frequent source of lost performance. This may be fixed in the future; see the proposal "A true repository cache for Bazel".
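
A minimal repository rule sketch (hypothetical) shows the distinction: only the download below can hit the repository cache; everything the rule does afterwards is redone every time the repository is refetched.

def _my_repo_impl(ctx):
    # The downloaded archive is cached in the repository cache, keyed by its hash.
    ctx.download_and_extract(
        url = "https://example.com/tool-1.0.tar.gz",  # made-up URL
        sha256 = "<sha256 of the archive>",
    )

    # Work done by the rule itself is NOT cached anywhere: it runs again
    # whenever the external repository has to be recreated.
    ctx.execute(["./configure-tool.sh"])
    ctx.file("BUILD.bazel", 'exports_files(["tool"])')

my_repo = repository_rule(implementation = _my_repo_impl)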

Loading Phase

We can trigger loading to occur by performing a query.

Run bazel query 'somepath(client, @@zlib~1.3//:zlib.h)' to see why our Java application depends on zlib.

As a result, Bazel constructs a "Dependency Graph" whose nodes are targets and whose edges are the dependencies between them.

Analysis Phase

This can be thought of as a "dry run" of the build. Bazel will figure out all the build steps ("actions") that need to be performed.

Awkwardly, instead of an "analyze" command, we must run it this way: bazel build --nobuild //backend/...

As a result, we know the build has been defined without any errors.

Execution Phase

Finally, we have everything in place for Bazel to spawn subprocesses that do the actual work of the build.

Run bazel build //....