Building your own build backend for Pixi
Introduction
Pixi is a modern package manager built on top of the conda ecosystem. It provides reproducible environments consisting of cross-platform packages for any language.
Normally, development tooling and package publishing live in separate worlds: you use one workflow to build and test your software locally, and a completely different one to publish it. Wouldn't it be great to use the same manifest that describes your project environment to also define how the package is built, versioned, and released?
Pixi-build is a solution to this problem. It lets you describe how your software should be built directly in the manifest and uses a backend to turn that description into a full package recipe. This means you can develop, test, and publish from the same project without rewriting metadata or juggling multiple tools.
Backends are standalone servers that implement the actual build process and communicate with Pixi over a simple JSON-RPC protocol. This design makes Pixi extensible: any language ecosystem can define its own build rules without changing Pixi itself.
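Conceptually, the exchange between Pixi and a backend is just ordinary JSON-RPC messages. The sketch below, written as Python dictionaries, is purely illustrative; the method and field names are assumptions, not the actual protocol schema.

# Illustrative only: the method and field names below are assumptions,
# not the real pixi-build JSON-RPC schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "conda/build",  # hypothetical method name
    "params": {"manifest_path": "pixi.toml", "host_platform": "linux-64"},
}
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"packages": ["your-package-1.0.0-h123456_0.conda"]},
}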
Pixi-build
Recipe generation
Usually, recipes are written manually and built using a program like rattler-build. If you’ve ever written a package recipe — of any kind — you’ve probably noticed how repetitive they can be.
Every C++ package that uses CMake looks roughly like this — same compilers, same build script, same boilerplate:
package:
  name: ${{ name|lower }}
  version: ${{ version }}

source:
  url: ...
  sha256: ...

build:
  number: 0
  script:
    content: |-
      mkdir -p build
      cd build
      cmake -GNinja -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_INSTALL_PREFIX=$PREFIX \
        -DCMAKE_EXPORT_COMPILE_COMMANDS=ON \
        -DBUILD_SHARED_LIBS=ON \
        ..
      cmake --build . --target install

requirements:
  build:
    - ${{ compiler('cxx') }}
    - cmake
    - ninja

tests:
  - script:
      - ${PKG_NAME} --help

about: ...

extra:
  recipe-maintainers:
    - YOU
And if you want to write recipes for several packages, it gets even more cumbersome and tedious. For each recipe, you have to manually copy this structure, modify parameters, and, if you plan to share it, submit it for review to conda-forge.
A natural reaction is to introduce templates for common ecosystems such as CMake, Rust, or Python. That helps, but you still end up editing variables by hand.
A more ambitious solution is a recipe generator that fills in defaults automatically from project metadata. This reduces work, yet you still need to maintain the generated files themselves.
A pixi-build backend solves this final step. Instead of dozens of recipes, you maintain one backend capable of generating them on demand. The same tool can build both your internal and external packages, simplifying the development process. That’s what pixi-build does.
Showcase
Let’s take a look at our sample package from above and how it would be written using pixi-build. We will use the pixi-build-cmake backend, which encodes best practices for building CMake packages. With it, our definition shrinks down to just the following pixi.toml content in the root directory of the corresponding project repository.
[package]
name = "YOUR_PACKAGE_NAME"
version = "1.0.0"

[package.build.backend]
name = "pixi-build-cmake"
version = "*"
Pretty neat, isn’t it? Our backend is responsible for discovering the package name, version, and other metadata required for a top-notch package. It may still require some assistance, though, since it cannot necessarily find all the metadata or dependencies required for building and running our program. Let’s say that we want to compile a small C++ program that uses SDL2. We have to add that dependency manually, which can be done by adding the following to our manifest:
[package.host-dependencies]
sdl2 = "*"
Not only that, but we can also depend on such packages from source. It is as simple as adding the following source dependency
[dependencies]
YOUR_PACKAGE_NAME = { path = "path/to/your/package" }
to your dependencies table.
This will make it compile successfully. You can find a more in-depth overview of building your package in this awesome blog post from Ruben.
Nowadays you can also have a pixi-build package that depends on a package that in turn depends on another package, and so on, like in the example above! This is called a recursive source dependency in the documentation.
Why and how to write your own backend?
The short answer: if you need to package "typical" software and want to do less, then pixi-build backends are for you! Now let's focus on the more interesting part — writing your own backend.
First of all, you should check out our documentation for pixi-build backends to see if we already have a backend for your use case. If not, I'll teach you how to easily write your own backend using Python.
We will be more specific from now on and write a build backend. On today's menu: the Go programming language!
Note: Most of the backends that we currently have are written in Rust and are very similar in structure to what we will do in Python. You can compare the implementations yourself, as the pixi-build-go repository contains both Rust and Python versions. Find a link in the exercises section.
Writing a backend
A backend consists of two components:
A server that will accept and respond to JSON-RPC requests from Pixi.
A recipe generator that will create a rattler-build recipe based on the package configuration and source directory.
And that's it! Thankfully, we already implemented all server logic for you, so your task is only to implement the recipe generator.
Creating a project
Start by creating a new Python project using Pixi and adding the dependencies:
pixi init --format pyproject
pixi add packaging jinja2 pydantic py-pixi-build-backend
We're eating our own dog food, so we will use pixi-build-python to build our backend:
[tool.pixi.package.build.backend]
name = "pixi-build-python"
version = "*"

[tool.pixi.package.host-dependencies]
hatchling = "*"
python = ">=3.10"

[tool.pixi.package.run-dependencies]
python = ">=3.10"
jinja2 = "*"
packaging = "*"
pydantic = ">=2.8.2,<3"
py-pixi-build-backend = "*"
The generator is defined in Python by creating a class that inherits from GenerateRecipeProtocol. It is responsible for converting a project model into an intermediate recipe, which will be used for the build. You can think of the intermediate recipe as a regular rattler-build recipe that you write by hand.
Structure of recipe generation
The main method of the generator is generate_recipe, which accepts all information that you might need to generate a recipe. We will focus only on this method, but don't forget to provide blank implementations for other methods.
The implementation of generate_recipe method typically consists of the following steps:
1. Initialize a metadata provider. This structure scans the source code to obtain project metadata. It is responsible for automatically filling in fields such as name, version, license, license_file, repository, etc. For the full list of values it provides, check out the MetadataProvider class definition.
2. Generate the skeleton of the recipe from the project model and metadata provider. This can be achieved by calling the GeneratedRecipe.from_model method.
3. Add dependencies. This part consists of resolving dependencies from the project model (they can be target-specific) and adding compilers, standard libraries, and custom dependencies.
4. Create the build script. Here we have to render the build script and assign the rendered string to the recipe. The content must be valid cmd.exe code on Windows and valid bash code on UNIX systems, respectively.
5. Return the generated recipe.
The process may vary slightly depending on the complexity of your backend, but this is a good starting point.
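As a preview, here is a rough sketch of how these steps could fit together in the generator. The class name GoGenerator and the parameter list are assumptions for illustration; take the real signature from the GenerateRecipeProtocol definition in py-pixi-build-backend. GoMetadataProvider is the metadata provider we define in the next section.

# Sketch only: the parameter names below are assumptions based on how we use
# them later in this post; check GenerateRecipeProtocol for the real signature.
class GoGenerator(GenerateRecipeProtocol):
    def generate_recipe(self, model, config, manifest_root, host_platform):
        # 1. Metadata provider that scans the source tree.
        metadata_provider = GoMetadataProvider(manifest_root)
        # 2. Recipe skeleton from the project model.
        generated_recipe = GeneratedRecipe.from_model(model, metadata_provider)
        # 3. Add dependencies: compilers, standard libraries, custom packages.
        # 4. Render the build script and attach it to the recipe.
        #    (both implemented in the sections below)
        # 5. Return the generated recipe.
        return generated_recipe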
Now, let's go through the process in chronological order.
Implementation
We start by creating the metadata provider. We will provide only a blank implementation for now and return to field discovery later.
It looks like this:
class GoMetadataProvider(MetadataProvider):
    def __init__(self):
        pass
Another important component is a configuration that we parse from the manifest package description. Therefore, we declare it as follows:
class GoBackendConfig(BaseModel):
    model_config = ConfigDict(extra="forbid", populate_by_name=True)

    extra_args: List[str] = Field(default_factory=list, alias="extra-args")
    extra_input_globs: List[str] = Field(default_factory=list, alias="extra-input-globs")
    compilers: List[str] = Field(default_factory=list)
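As a quick illustration of how the aliases behave, this is roughly how such a configuration could be parsed. The [package.build.config] table name in the comment is an assumption about where these options would live in the manifest; the point is just that populate_by_name plus the aliases let the kebab-case keys map onto our snake_case fields.

# Hypothetical raw configuration as it might arrive from the manifest,
# e.g. from a table like [package.build.config] (table name is an assumption):
raw_config = {"extra-args": ["-ldflags=-s -w"], "compilers": ["go-nocgo"]}

config_model = GoBackendConfig.model_validate(raw_config)
assert config_model.extra_args == ["-ldflags=-s -w"]
assert config_model.compilers == ["go-nocgo"]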
We return to our generator and generate a skeleton of our recipe from the project model:
metadata_provider = GoMetadataProvider(manifest_root)
generated_recipe = GeneratedRecipe.from_model(model, metadata_provider)
This generated recipe doesn't contain any dependencies yet, so let's add the compiler. First, we obtain the requirements section of the recipe and then resolve it. We do this to get the dependencies specified in the manifest file in place. After that, we add the compilers to the requirements.
requirements = generated_recipe.recipe.requirements
resolved_build = requirements.resolve(host_platform).build

compilers = config_model.compilers or ["go-nocgo"]
for language in compilers:
    language_compiler = _default_compiler(host_platform, language)
    if language_compiler in resolved_build:
        continue
    requirements.build.append(ItemPackageDependency(f"${{{{ compiler('{language}') }}}}"))
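The _default_compiler helper is not shown above. A minimal sketch, assuming it just maps a language to the conda compiler package name for the host platform (the <language>_<platform> naming is the usual conda convention, but the exact string is an assumption), could look like this:

def _default_compiler(host_platform, language: str) -> str:
    # Assumption: conda compiler packages follow the "<language>_<platform>"
    # naming convention, e.g. "go-nocgo_linux-64" on linux-64.
    return f"{language}_{host_platform}"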
After specifying dependencies, we have to produce the build script. As stated above, the build script is platform-native shell code: bash on UNIX platforms and cmd.exe on Windows. To dynamically generate those scripts and avoid as much boilerplate as possible, we will use the Jinja2 template engine. We define BuildScriptContext and implement the render function as follows:
build_template = _ENVIRONMENT.get_template("build_script.j2")
utils_module = _load_utils_module(self.is_windows)

context = {
    "source_dir": self.source_dir,
    "main_package": self._main_package_expr(),
    "extra_args": list(self.extra_args),
    "is_windows": self.is_windows,
}
context.update(_utils_context(utils_module))

script = build_template.render(**context)
return script.strip().splitlines()
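For context, here is a minimal sketch of the surrounding BuildScriptContext class, assuming a plain dataclass: the field names are taken from how we construct it later in this post, while the _main_package_expr helper is an assumption made for illustration.

from dataclasses import dataclass, field
from pathlib import Path
from typing import List


@dataclass
class BuildScriptContext:
    source_dir: str
    main_module_path: Path  # relative path to the main module
    extra_args: List[str] = field(default_factory=list)
    is_windows: bool = False

    def _main_package_expr(self) -> str:
        # Assumption: turn the relative module path into a `go build` target,
        # e.g. Path("cmd/gojq") -> "./cmd/gojq" and Path(".") -> ".".
        path = self.main_module_path.as_posix()
        return "." if path == "." else f"./{path}"

    def render(self) -> List[str]:
        # Body shown above.
        ...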
We will show only build_script.j2; all missing names should be self-evident:
{%- include "utils.j2" -%} {{ prelude }} cd {{ source_dir }} go build -o {{ env("PREFIX") }}/bin/{{ env("PKG_NAME") }}{{ ".exe" if windows }} {{ extra_args | join(" ") }} ./cmd/{{ env("PKG_NAME") }} {{ posthoc }} {{ postlude }}
The only thing remaining is to update generate_recipe to call the rendering in the generator, return our recipe, and we're done!
build_script_lines = BuildScriptContext(
    source_dir=str(manifest_root),
    main_module_path=metadata_provider.main_module_path(),
    extra_args=config_model.extra_args,
    is_windows=Platform.current().is_windows,
).render()

generated_recipe.recipe.build.script = Script(content=build_script_lines, env={})
return generated_recipe
And that's it! We have a simple working build backend in our hands. Let's proceed to testing it.
Testing our backend
Let's look at a couple of cherry-picked examples and assume that it will work for different packages! For the sake of simplicity, we will focus only on CLI programs, but theoretically nothing stops us from building graphical applications.
Initial trials
We start by picking the simplest possible program that matches our current build script, and the one I was looking at while writing it. Gojq is a rewrite of the famous jq program in Go. Its main features are:
Statically linked,
Arbitrary precision arithmetic,
Support for YAML files.
Start by globally installing pixi-build-go: change your directory to its root directory (where pixi.toml is), then run
pixi global install --path .
similar to cargo install --path ., to install the backend globally. We need this to try it out hassle-free.
Clone the gojq repository locally and put this pixi.toml in it:
[workspace]
authors = ["remimimimimi <remimimimimi@protonmail.com>"]
channels = ["conda-forge"]
name = "gojq"
platforms = ["linux-64"]
version = "0.1.0"
preview = ["pixi-build"]

[package]
name = "gojq"
version = "0.12.17"

[package.build.backend]
name = "pixi-build-go"
version = "*"
channels = [
  "https://prefix.dev/pixi-build-backends",
  "https://prefix.dev/conda-forge",
]
Now we run
PIXI_BUILD_BACKEND_OVERRIDE_ALL=1 pixi build
and then
$ pixi exec -s $(pwd)/gojq-0.12.17-hbf21a9e_0.conda gojq --help
gojq - Go implementation of jq

Version: 0.12.17 (rev: HEAD/go1.25.3)

Synopsis:
  % echo '{"foo": 128}' | gojq '.foo'

Usage:
  gojq [OPTIONS]

Command Options:
  -r, --raw-output          output raw strings
      --raw-output0         implies -r with NUL character delimiter
  -j, --join-output         implies -r with no newline delimiter
  -c, --compact-output      output without pretty-printing
      --indent number       number of spaces for indentation
      --tab                 use tabs for indentation
      --yaml-output         output in YAML format
  -C, --color-output        output with colors even if piped
  -M, --monochrome-output   output without colors
  -n, --null-input          use null as input value
  -R, --raw-input           read input as raw strings
      --stream              parse input in stream fashion
      --yaml-input          read input as YAML format
  -s, --slurp               read all inputs into an array
  -f, --from-file           load query from file
  -L, --library-path dir    directory to search modules from
      --arg name value      set a string value to a variable
      --argjson name value  set a JSON value to a variable
      --slurpfile name file set the JSON contents of a file to a variable
      --rawfile name file   set the contents of a file to a variable
      --args                consume remaining arguments as positional string values
      --jsonargs            consume remaining arguments as positional JSON values
  -e, --exit-status         exit 1 when the last value is false or null
  -v, --version             display version information

Help Option:
  -h, --help                display this help information
Success! But let's try something different. Now I want to measure how productive I was. The ideal tool for solving my problem is scc — a line counter program that also outputs some (maybe) useful statistics.
We clone the scc repository and put in a similar pixi.toml with the following difference:
[package] name = "scc" version = "3.6.0"
Then run pixi build and… hit an error! We only support the main module at ./cmd/${PKG_NAME}. That is not the case for scc, which has its main module at the root directory of its repository. Let's tweak the backend to support this (quite common) case as well.
Improving metadata discovery
Start by imagining how we should tweak the recipe build script to match our expectations. It should look something like this:
go build -o {{ env("PREFIX") }}/bin/{{ env("PKG_NAME") }}{{ ".exe" if is_windows }} {{ extra_args | join(" ") }} {{ main_package }} {{ posthoc }}
So we somehow have to obtain a path for the main_package. We extend BuildScriptContext with this field and then add a simple function to the metadata provider to obtain it.
The logic is pretty simple: we check whether ./cmd/${PKG_NAME} contains the main module and fall back to the current directory otherwise.
def main_module_path(self) -> Path:
    """Return the relative path to the main module entry point."""
    project_name = self.name()
    candidate = self.project_root / "cmd" / project_name
    if self._contains_main_entry(candidate):
        return Path("cmd") / project_name
    return Path(".")
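The _contains_main_entry helper is not shown above. A minimal sketch, assuming it simply looks for a Go file that declares package main in the candidate directory, could be:

def _contains_main_entry(self, directory: Path) -> bool:
    # Assumption: a directory is the main module if any of its .go files
    # contains a `package main` declaration.
    if not directory.is_dir():
        return False
    for go_file in directory.glob("*.go"):
        text = go_file.read_text(encoding="utf-8", errors="ignore")
        if any(line.strip() == "package main" for line in text.splitlines()):
            return True
    return False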
Notice that we've used self.name(). Since we need the package name to obtain the correct path, let's also provide simple name discovery. We will implement the simplest possible logic and just use the name of the directory where the manifest is located as the name of our package. Extend the MetadataProvider constructor to store the project root and implement our method:
def name(self) -> str:
    """Infer the package name from the project root directory name."""
    return self.project_root.name
Now we initialize BuildScriptContext with
build_script_lines = BuildScriptContext(
    source_dir=str(manifest_root),
    main_module_path=metadata_provider.main_module_path(),
    extra_args=config_model.extra_args,
    is_windows=Platform.current().is_windows,
).render()
and go back to scc.
Testing out the improved version
Now we try to build scc again, and this time we again get total success!
$ pixi exec -s $(pwd)/scc-3.6.0-hbf21a9e_0.conda scc ../pixi-build-go
───────────────────────────────────────────────────────────────────────────────
Language                 Files     Lines   Blanks  Comments     Code Complexity
───────────────────────────────────────────────────────────────────────────────
Python                      10       631      138        53      440         86
Rust                         6       780      116       146      518         31
Jinja                        4        87       18         2       67         16
TOML                         3       148       25         3      120          1
License                      1        28        6         0       22          0
Markdown                     1         2        0         0        2          0
───────────────────────────────────────────────────────────────────────────────
Total                       25     1,676      303       204    1,169        134
───────────────────────────────────────────────────────────────────────────────
Estimated Cost to Develop (organic) $31,827
Estimated Schedule Effort (organic) 3.71 months
Estimated People Required (organic) 0.76
───────────────────────────────────────────────────────────────────────────────
Processed 53488 bytes, 0.053 megabytes (SI)
───────────────────────────────────────────────────────────────────────────────
For the sake of completeness, let's also try fzf. We clone the repository, add our typical pixi.toml without the package.name field (only with the version), run pixi build and
~/Projects/Cloned/fzf $ pixi exec -s $(pwd)/fzf-0.66.1-hbf21a9e_0.conda fzf --version
0.66 (devel)
obtain total satisfaction!
Exercises
You can't learn something well without doing exercises, so here are a few ideas for you to experiment with and improve this backend. All sources can be found in the pixi-build-go repository.
Currently, we don't specify the license type and license file. Extend the metadata provider to discover both automatically.
Add proper error handling to the metadata provider. Currently it doesn't give proper errors when tools fail, for example those from the previous exercise.
Add support for specifying which binary we want to build in the manifest.
Add support for Cgo packages.
Conclusion
At this point you have a minimal backend that can receive build requests from Pixi, translate them into rattler-build recipes, and return the results. From here, the possibilities open up quickly.
You can encode best practices for your language ecosystem (compilers, flags, layout conventions).
You can integrate external metadata sources.
You can experiment with hybrid builds that combine Python, C++, or Rust code in one project.
In short, pixi-build turns repetitive packaging work into a programmable layer. Instead of writing and maintaining dozens of recipes, you write logic once and reuse it everywhere.
If you want to explore further, start with the examples in the Pixi repository or check the pixi-build-go repository for more details on implementing your own backend.