I really like the ambition here; at the same time, it's hard for me to translate that ambition into confidence that I could start using this now and actually replace some of my existing tooling.
At the same time, I kind of hate that they went for bash-compatible. I know everybody thinks bash is unbeatable, but at some point _surely_ we're going to move past its awful syntax and footguns (and SQL, and can I have a pony?)...
Also, what would you prefer to see instead of bash-compatible?
https://github.com/nlewo/nix2container#nix2container
Dagger looks nifty, but it's too bad that it's still basically an imperative "start from thing, do stuff to it, publish" kind of model, rather than just expressing a desired end state and letting an underlying engine find its way there.
So, if you took Nix, replaced the static Scheme-like DSL with a proper API, built SDKs in five languages for that API, and then also built a bash-like shell for easy scripting, you would start to have something that approximates Dagger.
We have a very active Discord; feel free to come by and ask all the tough questions!
What we're focusing on is greenfield applications of container tech: things that should be containerized, but aren't. For example:
- Cross-platform builds
- Complex integration testing environments
- Data processing pipelines
- AI agent workflows (where you give an LLM the ability to perform tasks in a controlled environment)
In those kinds of workflows, there is no dominant tool to replace, Docker or otherwise. Everyone builds their own unique snowflake of a monolith by gluing together a dozen tools. Dagger aims to replace that glue with a modular, composable system.
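As a taste of the cross-platform case, here's a minimal Dagger Shell sketch. It assumes the shell exposes the core API's platform argument as a --platform flag, and that stdout prints the command's output; treat both as assumptions rather than documented behavior:

container --platform=linux/arm64 | from alpine | with-exec uname -m | stdout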
I think there's a decent chance we end up giving Dagger a spin this year.
I have never heard of Kabuki, and couldn't find it in a quick web search. Did you mean Kaniko?
> Can one exchange lower / independent layers like with nix container builds?
Yes.
For example, here's a Dagger module for building Alpine Linux containers in an order-independent way. Under the hood it's derived from the 'apko' build tool created by the folks at Chainguard. https://daggerverse.dev/mod/github.com/dagger/dagger/modules/alpine
And here's a Dagger Shell command for building a container using that module:
github.com/dagger/dagger/modules/alpine | container --packages=git,openssh,curl | terminal
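Since the package set is declarative rather than layered, reordering it should produce the same image; a hedged illustration using the same module with the list shuffled:

github.com/dagger/dagger/modules/alpine | container --packages=curl,git,openssh | terminal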
You mentioned deb packages. Your intuition is correct: Dagger doesn't magically make .deb or .rpm packages reorderable, since those packaging systems are designed in a way that makes it difficult. But it does provide the primitives, and a cross-language programming environment, for creating the "adaptors" you described in a way that maximizes reuse. If someone does for deb what Chainguard did for apk, you can trivially integrate that into a Dagger module.

It seems like you're cutting an interesting track on providing a better way than Dockerfiles with their imperative layers, but pragmatic enough to not insist on taking over the whole world like Bazel or Nix.
Is it? Docker is quite long in the tooth at this point and is a long way from perfect. Docker's design is rife with poor choices and unaddressed flaws. One I've been tracking for years: there is no straightforward way to undo several things that get inherited by derived images, such as VOLUME and EXPOSE, and Docker Inc. doesn't care no matter how long the GitHub threads get.
So I think there is ample room for alternatives.
But if you're talking specifically about shell or a "native" REPL in a safer language - I also want to scratch that itch. Some day I'll dust off https://github.com/vito/dash :)
I think one could wrap the Go SDK with CUE, and have oft considered doing it, but the work has not migrated from the backlog yet. I've been toying with a CUE powered monorepo DX CLI that is more akin to BuildPacks, using Dagger as the containerization layer.
Or there's the "copy my Ansible playbooks into the container and run them there against localhost" approach, which can be mitigated by what Packer does in at least running the playbook over SSH. But then Packer has no concept of multi-stage builds or of copying in assets from other containers.
There's a lot of room to do better in this space.
This is workable-ish for the "base developer + ship a product" use cases, but it creates a bit of a crisis of repeatability. You've got a complicated CI config implementing all this mystery meat that falls in the middle of the sandwich, and there's separate documentation somewhere else that explains what the dev workflow is supposed to be, but no one is explicitly tasked with keeping the CI script and the dev workflow in sync, so they inevitably drift over time.
Plus, long and complicated CI-specific scripts are a smell all on their own, as was discussed here on HN a few days ago.
I don't know about containers specifically, since I've never bothered to use packer for that process, but it does seem that packer supports multi-step artifact production and their example even uses docker to demonstrate it https://github.com/hashicorp/packer/blob/v1.9.5/website/content/guides/packer-on-cicd/pipelineing-builds.mdx#chaining-together-several-of-the-same-builders-to-make-save-points
Dagger is built on the same underlying tech as docker build (buildkit). So the compatibility bridge is not a re-implementation of Dockerfile, it's literally the official upstream implementation.
Here's an example that 1) fetches a random git repo, 2) builds from its Dockerfile, 3) opens an interactive terminal to look inside, and 4) publishes to a registry once you exit the terminal:
git https://github.com/goreleaser/goreleaser |
head |
tree |
docker-build |
terminal |
publish ttl.sh/goreleaser-example-image
In the specific case of Dockerfile compatibility, I don't actually know if it will be smart enough to drop you into the exact intermediate state that the Docker build failed in, or if it atomically reverts the whole 'docker build' operation.
It seems like it would be good to be able to prevent the pipeline from publishing the image, if the inspection with 'terminal' shows there's something wrong (with e.g. 'exit 1'). I looked a little bit into the help system, and it doesn't seem that there's a way from inside the 'terminal' function to signal that the pipeline should stop. Semantics like bash's "set -e -o pipefail" might help here.
with-exec lets you specify that you want a command to succeed with e.g.
container | from alpine | with-exec --expect SUCCESS /bin/false | publish ...
If you try that, the pipeline will stop before publishing the image.

By the way, in your example, `--expect SUCCESS` is the default behavior of with-exec, so you can simplify your pipeline to:
container | from alpine | with-exec /bin/false | publish ...
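Going the other direction, the core API's expect argument also has an ANY value for commands that are allowed to fail; assuming the shell flag accepts it the same way (a sketch, not documented behavior), a failing step would then not stop the pipeline:

container | from alpine | with-exec --expect ANY /bin/false | publish ttl.sh/example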
Thank you! Would you be willing to open an issue on our GitHub repo? If not, I will take care of it.

Dagger can read Dockerfiles as-is: https://docs.dagger.io/cookbook#build-image-from-dockerfile
Docker shell container - https://github.com/jrz/container-shell
It's really neat, I recommend checking it out.
I am imagining this with a simple Cloudflare tunnel and self-hosted GitLab, and I am really seeing an open-source way that developers can REALLY scale.
I mean, Docker is really great, but Dagger in notebook formats just seems really cool, ngl.
The best part of the Dagger + Runme combo is that it runs entirely locally. This isn't just a huge win for portability; it also cuts down development cycle times significantly.
Resources available so far:
- https://docs.runme.dev/guide/dagger
- https://github.com/runmedev/vscode-runme/blob/58ea9a10c00df7f3f4e0ba15a06e3503a649eff8/dagger/notebook/shell.dag
dagger Dockerfile L4
and it pops open a shell at that point. Then I don't need new syntax.

https://docs.earthly.dev/docs/guides/debugging
there's also https://github.com/ktock/buildg
At first we'd hoped it could replace Jenkins - it provided an alternative way to run and debug CI pipelines, right on your machine! You could write in Go and just import what you needed. The dev direction feels more scattered now: trying to replace Docker, be a new shell(?), and, weirdly, be some kind of LangChain? Doing something different doesn't imply better. A new set of complicated CLI args is no better than what we started with (shell scripts, or Jenkinsfiles to integrate Docker builds). I'm a little bummed that the project has seemingly drifted (in my view) from the original mission.
I think this approach of trying out a lot of different things and seeing what sticks can be nice, but not in its current form.
I have installed Dagger to give it a try, though, and I am just not sure how to make it work; the quickstart / hello world doesn't work.
No, I don't want to make an AI; I just want to see what you really are.
And so I do agree with your statement!
It seems strange that, especially with the push for modules, this was integrated as a core type. It has nothing to do with buildkit or builds.
With LLMs becoming core to the development workflow, it kinda makes sense to have a primitive, since LLM I/O is a bit different from other functions. I haven't tried it, but this thread had me go look at the details, and now it's on the roadmap: probably build something of an agentic workflow that uses a tool in a container and see how it works out. I'm still skeptical that Dagger is a good tool or ecosystem for LLM work without integrations to all the extra stuff you need around LLMs and agents.
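To make that concrete, here's what such an experiment might look like in Dagger Shell. The llm type is real per this thread, but the with-prompt and last-reply function names are my guesses at how its API would map into the shell, so treat this purely as a sketch:

llm | with-prompt "Suggest a containerized workflow for running integration tests" | last-reply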
But I can appreciate your perspective. Thank you for sharing your point of view.
After taking Nix for a spin, I cannot be bothered to learn another custom tool with a bespoke language when I already have containers for doing the same things.
For Dagger, I can choose from a number of languages I already know, and the Docker concepts map over nearly 1:1.
That's also the case with the Docker ecosystem. On top of that, you need to take into account the base image, versions, etc.
In the end, what I look for is a project being able to build my source code, with runtime dependencies and supporting tools that won't change over time, for the architecture that I need.
I have not yet encountered actual Nix usage in my professional or open source work, so I don't see Nix as eating anyone's lunch.
So yeah, nowhere near eating anyone's lunch, let alone the behemoth that is Docker.
This is an interesting problem you've faced; their marketing claim is that they have the most comprehensive catalog of packages (and I'm inclined to believe it). I very rarely run into broken packages, and that's usually resolved by using a stable release for that specific package - and it's not like my usage of packages is lightweight (7292 lines of Nix config). That's on NixOS (and Silverblue, and Ubuntu) at least.
Things just work with pacman or dnf.
I have definitely encountered packages available in Homebrew but not in nixpkgs, so the idea that anything missing from nixpkgs is missing from Homebrew too is wrong. Another package I use has been out of date for months. Again, it's quality over quantity, and Nix lacks the quality that I deem more important for myself.
Someone else published a nixpkg for my project, but it is wrong. As an OSS maintainer, I have reduced the number of places I publish; there are too many package managers these days for my limited time.
We tried to fit dagger where we had jenkins - not just for binary builds, but for the other stuff. Mounting secrets for git clones / NPM installs, integration tests, terraform execution, build notifications and logging.
Caching is great, and dagger/nix both have interesting options here, but that's more of a bonus and not the core value prop of a build orchestrator.
"Tried", implying it didn't go well and isn't a fit for replacing Jenkins?
Every feature we ship is carefully designed for consistency with the overall design. For example, Dagger Shell is built on the same Dagger Engine that we've been steadily improving for years. It's just another client. Our goal is to build a platform that feels like Lego: each new piece makes all other pieces more useful, because they all can be composed together into a consistent system.
That said, in the early days it was definitely pitched for CI/CD - and this is how we've implemented it.
> What is it?
> Programmable: develop your CI/CD pipelines as code, in the same programming language as your application.

> Who is it for?
> A developer wishing your CI pipelines were code instead of YAML
https://github.com/dagger/dagger/blob/0620b658242fdf62c872c667623c9d47f79c1f6c/README.md
Edit: This functionality/interaction with the dagger engine still exists today, and is what we rely on. The original comment is more of an observation on the new directions the project has taken since then.
I just wanted to clarify that in terms of product design and engineering, there is unwavering focus and continuity. Everything we build is carefully designed to fit with the rest. We are emphatically not throwing unrelated products at the wall to see what sticks.
For example, I saw your comment elsewhere about the LLM type not belonging in the core. That's a legitimate concern that we debated ourselves. In the end we think there is a good design reason to make it core; we may be wrong, but the point is that we take those kinds of design decisions seriously and we take all use cases into account when we make them.
Huh? When did that change?
Dagger seems interesting, but this opinion that the shell and standard Unix tools are archaic is a dubious starting principle.
Every now and then a project with similar ideas comes along, and whether it rejects the notion of passing unstructured data between commands, addresses the footguns and frustrations of shell scripting, or is built with a more modern and "safe" programming language, it ultimately never seems to catch on as much as traditional Unix shells and tooling has.
The reality is that these tools and the design choices made 50 years ago are timeless in nature. They're deliberately lacking in features and don't attach themselves to any specific tech du jour. It's this primitive nature and the "do one thing well" philosophy that makes them infinitely composable. The same pipelines Unix nerds were building 50 years ago to solve problems are still useful today, which is remarkable when you consider how quickly technology moves.
Sure, new tools are invented all the time, and they might do things better than old ones. I use `eza` instead of `ls`; `fd` instead of `find` (mostly); `rg` instead of `grep` (mostly); `fzf` is a pretty essential addition to my workflow, and so on. But the underlying principles of these tools are still the same as the tools they're replacing. They're just slightly modernized and ergonomic versions of them.
Whether or not we need a `container` command, `from alpine`, or an entire new shell for that, is a separate topic. It could be argued that this could be accomplished with a few functions or standalone commands. Even if we do need this new tooling, that's great, but don't tell me that it's meant to replace a proven set of tools and workflows[1]. When containers stop being popular, will we still need this?
Also, "Daggerverse" and "modules"? Great, let's bring in the npm mentality to the shell, just what I needed.
[1]: Ah, they don't, it's meant to serve as a complement. Alright, fair enough. I'm lowering my pitchfork.
"The devops operating system" does what?
- Build, run, and publish containers
- Containerize language-specific builds
- Run integration tests in ephemeral environments, in a repeatable way
- Run your tools in controlled sandboxes with only the files, secrets and network access they need
It also mentions what you might replace with it:
> When a workflow is too complex to fit in a regular shell, the next available option is often a brittle monolith: not as simple as a shell script, not as robust as full-blown software. The Dagger Shell aspires to help you replace that monolith with a collection of simple modules, composed with standard interfaces.
How does Dagger help? I just can't get anything concrete from their site.
If you're working on a team, your shell scripts, Python programs, and Makefiles are often not portable between local and CI, and, even worse, between your local machine and your colleagues' local machines. This is where Dagger shines, because it lets you do all that stuff in a fully portable way.
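As a minimal sketch of what "portable" means here: the toolchain comes from the image rather than the host, so the same one-liner behaves identically on any machine with the dagger CLI (stdout printing the command's output is an assumption on my part):

container | from golang | with-exec go version | stdout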
Low pro teams and individuals will happily abuse any new tool.
Mature teams stick to local-first CI - i.e. being able to run the pipeline locally - and then translate it to whatever is being used in their build and test infrastructure.
> The cross-platform composition engine
> Build powerful software environments from modular components and simple functions. Perfect for complex builds and AI workflows.
This is generic to the point of being useless.
Everything is a composition engine. JavaScript is a composition engine. macOS is a composition engine.
Answering "what does dagger do" can be tough, because there are very broad applications.
It's always hard to describe general-purpose platforms like these. Dagger is an open platform for building composable software. It has SDKs, a sandboxed execution environment, observability, and now this shell interface.
Bash pipes make intuitive sense: you're taking the output of one process and pushing it into another. Here, the use of pipes makes little sense. Why is "from alpine" being passed into "with-exec"? What is moving between those processes?
In this case, `from alpine` is a function attached to the container type that has many additional functions. You chain them together to do stuff. You can do it through code as if it was any other object, but this shell allows you to do things without code as well.
Perhaps the example is too simple to feel useful, but being able to pipe primitives like files, directories, containers, secrets, and even any custom object makes it possible to rapidly experiment with and compose pipelines.
`container().from("alpine").withExec(["apk", "add", "curl"]).terminal()`
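For comparison, the equivalent Dagger Shell pipeline, following the earlier examples in this thread (the curl package is just a stand-in):

container | from alpine | with-exec apk add curl | terminal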
If they were Unix pipes, they'd be context-independent tools being composed together.
Dagger can be useful, but I feel a bit deceived, and it makes me worry about the rest of the quality of this product.
So it's a bit of a hybrid between bash and PowerShell.
Nix, Bazel, and similar alternatives may have learning curves, but they provide clearer guardrails and more predictable outcomes without exposing so many implementation details. I deeply regret the time I spent working with Dagger.
Feel free to send me an email if you'd like (lev@dagger.io); it would be great to learn from your experience so we can continue to improve the platform for everyone.
However, I am very happy somebody finally put the prompt at the TOP of a terminal window; for years I have been wondering why people like to bow their heads to the bottom of the screen.
I guess it is the expected gesture for IT devs and admins nowadays to follow and bow your head, so this is a great help for people to understand what the right posture in life is!
- Dagger supports OpenTelemetry natively: it emits traces, metrics, and logs that you can send to your favorite observability tool, or to our commercial product Dagger Cloud (which has a free tier), which lets you drill down into the trace. See https://docs.dagger.io/features/visualization
- The Dagger CLI also renders the otel traces, and you can browse them as well. Press escape, then navigate with up, down, enter, escape. Same drill-down functionality but straight from the CLI
- You can call dagger with the --interactive flag. If you do, in case of an error you will get an interactive shell to inspect the state at time of failure.
- You can also call the `Container.terminal()` function from your code, to force an interactive terminal to appear for the user at that stage. Sort of like a "debug printf" but with an interactive terminal. (The last two options are sketched below.)
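A quick sketch of those last two options. The --interactive flag is from the list above, but the module function name build is hypothetical:

dagger --interactive call build

And forcing a terminal at a chosen step, as already shown elsewhere in this thread:

container | from alpine | terminal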
B: What's a container?
A: It lets you execute a sequence of commands in an isolated and controlled environment that can be readily rebuilt, cloned or disposed of.
B: How do we do that? Using a standard procedural format such as a shell script or sequential command list?
A: Far too simple. Let's pipe everything through a new and evolving middleware control stack that re-implements standard primitives like file creation but lets you introduce new bugs and edge cases in any language you choose to use, thereby ensuring the host environment also requires additional configuration! And keep that interface unstable! And definitely don't write man pages!
B: TF?
Those who don't understand Unix are condemned to reinvent it, poorly. - Henry Spencer
Maybe the sandboxing will be nice, and they can provide it in a way that isn't tied to a particular shell. However, sandboxing at the level of a shell command is too coarse-grained for some things.
From the post:
> Dagger Shell isn’t meant to replace your system shell, but to complement it. When a workflow is too complex to fit in a regular shell, the next available option is often a brittle monolith: not as simple as a shell script, not as robust as full-blown software. The Dagger Shell aspires to help you replace that monolith with a collection of simple modules, composed with standard interfaces.
But how can you really differentiate between a user opening git and some other program running git?
I think we would need friction there, some sort of manual intervention.
The best I could think of was something like a Bitwarden/KeePassXC-style CLI, where it requires a password and would just straight up copy that token into GitHub.
If we are really talking about end-to-end security / you have the source code, you could theoretically also compile git with the specific idea / implementation of whatever encrypted password manager you might use directly into the code of git / github, but I know that can be overkill for this problem.
I could even run git and gh in a container that has a volume mount to access the directory.
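In Dagger Shell terms, that might look something like the sketch below. with-directory and host | directory are assumed from the core API's Container.withDirectory and Host.directory, and $() is assumed to compose sub-pipelines; none of this is verified:

container | from alpine | with-exec apk add git | with-directory /src $(host | directory .) | terminal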
I think I have an idea of what this could look like, and I might try to prototype it with fish and see which code paths it goes down, to gauge how secure it's likely to be.
It's not much of a shell otherwise, is it?