
Bun, a Deep Dive: The Ambitious Project That Wants to Rewrite the Entire JavaScript Toolchain

By 王若风
May 6, 2026, 22:47


Hi everyone, I'm 若风.

In early December 2025, Bun announced it was joining Anthropic, and Anthropic followed with its own announcement: it had acquired Bun.

I also noticed that in its announcement post, Bun mentioned that AI coding tools such as Claude Code, Factory AI, and OpenCode all run on Bun. In other words, it is no longer just a runtime that individual developers try out for fun; it has entered the internal execution path of the AI coding toolchain.

These two events together made me curious about Bun. To be honest, my impression of it had been stuck at "yet another JS runtime claiming to be faster than Node." But the acquisition suggested there was more to the story: why would Anthropic buy a JavaScript runtime?

With that question in mind, I spent a good amount of time on a deep dive. The more I looked, the more interesting Bun turned out to be. It is not just "faster than Node"; it is trying to re-answer a fundamental question: how many layers of tooling should modern JavaScript engineering actually consist of?

This article is the result of that research.

The Conclusion First

Bun is not a "framework" like React or Vue. It is an infrastructure product that tries to compress a JavaScript runtime, a package manager, a test runner, a bundler, and a full-stack dev server into a single binary.

In plain terms: Bun wants to collapse the whole "Node.js + npm/pnpm + Jest/Vitest + esbuild/Vite" combo into one command called bun.
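To make that concrete, here is a rough, illustrative mapping from the traditional stack to Bun subcommands. Command names come from Bun's own docs, but treat exact flags as version-dependent and check `bun --help` before relying on any of them:

```shell
# Illustrative mapping only — flags vary by version; do not run as-is.
npm install           # -> bun install        (package manager)
npx ts-node app.ts    # -> bun run app.ts     (runs TS/JSX natively)
npx jest              # -> bun test           (built-in test runner)
npx esbuild app.ts    # -> bun build app.ts --outdir dist   (bundler)
```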

That is an enormous ambition — large enough that the community is now sharply divided over Bun: one camp is excited, because it really is fast; the other is wary, because the surface area has grown so wide that whether quality can keep up is an open question.

Where Bun Came From: A Case of "I Can't Stand This Anymore"

Bun did not start from an abstract desire to "build a faster JS runtime." It started from very concrete engineering frustration.

Creator Jarred Sumner has told the origin story in consistently engineering-centric terms across interviews and project retrospectives: he was building an in-browser voxel game, and as the codebase grew, the dev-server feedback loop became unbearable. An InfoWorld interview mentions that going from saving a file to seeing the change in the browser took about 30 seconds; later retrospectives also describe the pain as waits on the order of tens of seconds. Not a philosophical problem — the kind of engineering problem that grinds away your patience bit by bit.

So he first hacked on a JSX/TypeScript transpiler, and then, trying to get Next.js server-side rendering to work, found himself inevitably dragged into runtime territory.

Many products look, on the surface, like "a new requirement giving rise to a new architecture." In practice they are more often "one sufficiently stubborn tinkerer, pushed into deep water by one concrete pain point." Bun is the latter.

Two Key Choices That Determined Everything About Bun

In 2021, Bun made two technical choices that locked in virtually all of its later strengths and weaknesses.

Choice one: Zig, not Rust

Jarred has said he initially considered Rust but ultimately chose Zig. The reason was not "a better ecosystem" but something plainer: he wanted sufficiently direct control over low-level behavior, memory, and syscall paths.

Bun has been an aggressively performance-oriented product from day one. A language choice is not a peripheral decision; it shapes the team's entire mental model.

Choice two: JavaScriptCore, not V8

This one mattered even more. Node and Deno are both built on V8; Bun instead went down the Safari lineage and embedded WebKit's JavaScriptCore.

Why? Because JavaScriptCore starts up fast.

This choice gave Bun a strong startup-speed advantage and set it on a distinct technical path from Node and Deno from day one. But the cost is equally clear: in Node compatibility, V8's private C++ APIs, and Windows support, Bun was destined to have a harder time than anyone evolving directly on top of V8.

In other words, many of Bun's later strengths — and much of its later trouble — were already baked into those two 2021 decisions.

Bun's Evolution: From "Runs a Demo" to "Acquired by Anthropic"

Bun has gone through roughly four phases, each with a different core tension.

Phase one: Genesis (2021 - July 2022)

Core tension: how to turn one person's anger at an inefficient JavaScript toolchain into a technically viable new product.

The pivotal decisions were Zig and JavaScriptCore.

Phase two: Brand explosion (July 2022 - September 2023)

In July 2022, Bun v0.1.0 shipped and quickly went viral. What the early community bought into was not maturity but three things:

First, a unifying narrative. The Node-era JS stack had come to resemble Lego: Node as the runtime, npm/yarn/pnpm for packages, Babel/SWC/tsx for transpilation, Jest/Vitest for testing, webpack/esbuild/rollup/Vite for bundling. Every extra layer means another config file, another cache, another seam where bugs live. Bun's opening move ran in the opposite direction: stop assembling — here is one tool for everything.

Second, a speed story strong enough to carry itself. From the early website to today's homepage, Bun has always led with "fast": fast startup, fast HTTP, fast WebSocket, fast bun install, fast bun test.

Third, a more pragmatic compatibility strategy. Early Deno read more like a manifesto for "rewriting the modern JS runtime": beautiful in principle, costly to migrate to. Bun aimed from the start at being "Node's replacement," not "Node's critic." It did not try to re-educate developers; it wanted them to bring their existing projects and just try it.

This matters enormously, because in runtime competition, migration cost is closer to the real decision criterion than theoretical elegance.

Phase three: Platformization and the compatibility grind (September 2023 - October 2025)

Bun 1.0 (2023-09-08): the product definition completed for the first time

Bun 1.0 was more than a jump from 0.x to 1.0. What really mattered is that it stated the product definition in full for the first time, explicitly listing what it aims to replace: Node.js, npx, dotenv, nodemon, babel, ts-node, esbuild, webpack, npm, yarn, pnpm, Jest, Vitest.

The pitch was no longer just performance but a complete development path.

Behind this is a strong product philosophy: instead of installing ever more tools to patch JavaScript, let the runtime itself grow the organs developers actually use at high frequency.

Bun 1.1 (2024-04-01): Windows support, from star project to platform project

Windows support is not glamorous, but it determines whether a much larger pool of developers can adopt you at all. The Bun 1.1 release post stated that Bun on Windows passed 98% of the test suite Bun itself uses on macOS and Linux.

The significance of this step: you are no longer serving only enthusiasts willing to tinker; you are starting to take responsibility for ordinary developers at large. For an infrastructure product, that is the necessary step from "project" to "platform."

Bun 1.2 (2025-01-22): a methodology upgrade

Most people looking at Bun 1.2 see the new capabilities first: Bun.s3, Bun.sql, and the bun.lock text lockfile.

But I think the real watershed was the methodology upgrade in how it approached Node.js compatibility. Previously, Bun fixed Node compatibility reactively: whichever npm package broke got patched — whack-a-mole. Starting with 1.2, Bun began systematically running Node.js's own official test suite.

This amounts to Bun conceding a point: if you seriously want to take over Node's position as the runtime, the decisive battlefield is not the marketing page but compatibility regression testing.

It is the key step in Bun's transition from "a product with strong imagination" to "a project where engineering discipline is taking shape."

Bun 1.3 (2025-10-10): from runtime to full-stack substrate

In the official words: Bun 1.3 turns Bun into a batteries-included full-stack JavaScript runtime.

It added a built-in frontend dev server, built-in hot reload, the ability to run HTML files directly, stronger Bun.serve() routing, plus MySQL and Redis clients.

By 1.3, Bun no longer wanted to be merely "a faster Node replacement." What it wants is this: one language, one set of commands, one binary, wiring together frontend development, backend services, testing, bundling, dependency management, and even part of your database access.
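A sketch of what that looks like in practice, assuming a machine with Bun 1.3+ installed. The commands mirror the 1.3 release notes, but verify exact behavior against `bun --help` on your version:

```shell
bun ./index.html                          # frontend dev server with hot reload
bun run server.ts                         # backend service, e.g. via Bun.serve()
bun test                                  # built-in test runner
bun build ./src/index.ts --outdir ./dist  # built-in bundler
bun install                               # built-in package manager
```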

Phase four: Binding to AI infrastructure (December 2025 - present)

The Anthropic acquisition

On December 2, 2025, Bun announced it was joining Anthropic. Jarred was blunt in the announcement: as of publication, Bun's revenue was zero.

That is refreshingly honest. Bun's default commercialization plan had been some kind of cloud hosting product, but after 2024, AI coding tools suddenly became the much bigger wave. The Bun team judged that placing itself inside the Anthropic ecosystem — as the infrastructure layer for Claude Code, the Claude Agent SDK, and future AI coding products — was more interesting than continuing to figure out monetization as an independent startup.

There is a turn here worth dwelling on: Bun was born from slow frontend hot reload, originally serving "humans writing code faster." By late 2025, the new narrative is: AI agents are writing, testing, and running ever more code, which makes the runtime and toolchain layer more important, not less.

This is not a repackaging. The strategic center of gravity genuinely moved.

2026: continuing to polish the foundations

As of May 6, 2026, the latest stable release I could find is v1.3.13. These 1.3.x point releases show Bun entering the second, more infrastructure-like stage of its life: polishing testing, installation, memory, and compatibility around real workloads — the low-level paths that are hard to tell stories about but dominate how the product actually feels.

For example, in 1.3.13: bun test gained --parallel, --isolate, --shard, and --changed; bun install now streams tarball decompression, which the team says cuts memory in large monorepos to 1/17 of what it was; source-map memory usage dropped by up to 8x; and baseline runtime memory fell roughly another 5%.
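As a quick sketch, the new test-runner flags look like this. The shard syntax shown is my assumption, not taken from the release notes, so check `bun test --help` on your version:

```shell
bun test --parallel    # run test files concurrently
bun test --isolate     # isolate each file's runtime state
bun test --shard 1/4   # run one shard of the suite (CI fan-out)
bun test --changed     # only run tests affected by changed files
```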

The Competitive Landscape: Bun vs Node vs Deno

Bun does not have a single competitor. It has at least three kinds: Node.js (the default standard), Deno (the methodological rival), and the traditional Node toolchain assembly (the most real day-to-day rival).

Bun vs Node.js

Node's value is not "best at every individual thing" but "proven stable enough across countless real production environments, with an ecosystem that orbits it by default." Node never tried to swallow the whole toolchain; it made the runtime a public foundation and let the ecosystem grow freely on top.

Bun is more aggressive. Not content with being a runtime, it wants to pull in the entire set of everyday tools you would normally install around one.

Where Bun wins: extremely high integration, fast startup and fast feedback (especially friendly to CLIs and AI agents), and a more modern product expression.

Where Bun loses: stability and edge-case compatibility are not yet at Node's scale, the maintenance surface is very wide, and production predictability still trails Node.

The real decision logic for users usually looks like:

  • Pick Bun: new projects, internal tools, CLIs, AI agents — small migration baggage, and you want fewer layers of tooling
  • Pick Node: deep dependencies in legacy projects, a team that prizes stability, and production-incident costs that far outweigh a faster daily loop

Bun vs Deno

The two are often lumped together, but their methodologies differ sharply.

Deno's path: establish principles first, then move toward reality. Secure by default, Web standards first, zero-config TypeScript. Deno 2's pragmatic turn is unmistakable: the official release post explicitly mentions support for package.json, node_modules, npm, and much stronger Node compatibility.

Bun's path: swallow reality first, then backfill principles. From day one it emphasized being a "drop-in replacement for Node.js." The priority was never the elegance of a permission model but "how fast can an existing project run."

Three paths, three temperaments:

  • Node is the default highway
  • Deno is a new city with redesigned traffic rules
  • Bun is a project trying to convert the old city's most congested streets straight into expressways

Bun vs the traditional toolchain

This is the most real competition. Most teams do not ask "should we switch from Node to Bun?" They ask: is it worth migrating our existing Node + pnpm + Vite + Vitest stack to Bun wholesale?

The traditional assembly's strength is that every layer is relatively mature and individually replaceable. Its weakness is that you shoulder the assembly cost yourself: different tools repeatedly parse the same sources, rebuild the same module graphs, and maintain their own caches.

Where Bun wins: less duplicated work, a smaller configuration surface, and a smooth partial-adoption path (you can start by using only bun install).

Where Bun loses: each piece of the traditional assembly is individually more mature, specialized tools still win in extreme scenarios, and many seasoned teams dislike "one core tool in charge of too much."

Bun's Strengths and Weaknesses Come From the Same Source

What is most fascinating about Bun and what is most worrying about it are, in fact, the same thing.

Its core strength comes from a very stable value ordering: anything that slows a developer's feedback loop deserves to be systematically rebuilt. Faster installs, faster startup, TypeScript/JSX without extra transpilation, tests and bundling without detours, cheaper CLI distribution.

But precisely because it does not stop at the runtime and keeps pulling high-frequency capabilities into core — the package manager, the test runner, the bundler, the dev server, database clients, an object-storage client — the maintenance radius keeps expanding. A problem on any one surface drags down the whole brand.

You can find very concrete worries on GitHub. One user observed steadily climbing CPU in an API service built on Elysia.js + Prisma; another switched the same service from Node to Bun and watched memory on GKE shoot from 500MB to the 1.2GB container limit. These are individual issue reports, not proof that Bun misbehaves in every production scenario, but they do show the community's concerns about stability and edge compatibility are not coming from nowhere.

These voices reflect the most real fault line in Bun's current reputation: at the developer-experience layer, many people already love it; at the critical production-infrastructure layer, many are not yet at ease.

What Happens Next?

I see three scripts.

Most likely: enter from the periphery, seep toward the center

Bun keeps growing into key infrastructure for the AI-native JavaScript toolchain and for CLI distribution. Many teams start with bun install, bun test, standalone-executable compilation, and internal scripts, then gradually decide whether core services should move too.

Most dangerous: features outrun quality

Features keep expanding at high speed, quality governance fails to keep pace, and Bun carries a long-term "not stable enough" label in production backend scenarios. It ends up as "a great dev tool and CLI substrate" rather than infrastructure that comprehensively replaces Node.

Most optimistic: the default JS execution layer of the AI era

Anthropic's resources, the real needs of AI coding tools, and the Bun team's sustained obsession with performance and DX compound into one outcome: within two to three years, Bun pushes Node compatibility and stability up another level and becomes the default JavaScript execution and distribution layer of the AI era.

Closing Thoughts

Looking back over Bun's trajectory, one impression stands out.

Node stands for "keep the runtime small, let the ecosystem grow large." Deno stands for "establish principles first, then accommodate reality." Bun stands for "weld the layers that most affect productivity back into one piece, and prioritize real migration paths and shorter feedback loops."

So Bun's core competitiveness was never just speed. It is that Bun is re-answering the question of how many layers of tooling modern JavaScript engineering should consist of.

That is what makes it worth studying seriously. It is also why it is destined to remain controversial.

For everyday developers, my advice:

If you are building new projects, CLI tools, internal tools, or anything AI-agent related, Bun is genuinely worth a try. You can start with just bun install and bun test, capture the local wins first, and skip the all-at-once migration.

If you maintain a large production system, staying on Node is the safer choice. Bun is not yet at the stage where you can confidently bet core services on it.

But whether or not you use Bun, it is changing the rules of the JavaScript toolchain. That alone deserves attention.


How to Install Docker on Ubuntu 26.04

Docker is an open-source container platform that lets you build, test, and run applications as portable containers. Containers package the application code together with its dependencies, which makes it easier to run the same workload across development machines, test environments, and servers.

Docker is widely used in development workflows and DevOps pipelines because it keeps application environments predictable.

This tutorial explains how to install Docker on Ubuntu 26.04.

Supported Ubuntu Versions

Docker is available from the standard Ubuntu repositories, but those packages are often outdated. To ensure you get the latest stable version, we will install Docker from the official Docker repository.

At the time of writing, the Docker repository provides packages for current supported Ubuntu releases, including Ubuntu 26.04 LTS, Ubuntu 25.10, Ubuntu 24.04 LTS, and Ubuntu 22.04 LTS.

Info
Derivatives like Linux Mint are not officially supported, though they often work.

Prerequisites

Before you begin, make sure that:

  • You are running a 64-bit supported Ubuntu version
  • You have a user account with sudo privileges
  • Your system is connected to the internet and up to date

Uninstall any old or conflicting Docker packages first to avoid potential issues:

Terminal
sudo apt remove docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc

Installing Docker on Ubuntu 26.04

Installing Docker on Ubuntu is relatively straightforward. We will enable the Docker repository, import the repository GPG key, and install the Docker packages.

Step 1: Update the Package Index and Install Dependencies

First, update the package index and install packages required to use repositories over HTTPS:

Terminal
sudo apt update
sudo apt install ca-certificates curl

Step 2: Import Docker’s Official GPG Key

Add Docker’s official GPG key so your system can verify package authenticity:

Terminal
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Step 3: Add the Docker APT Repository

Add the Docker repository to your system:

Terminal
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF

$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") reads your Ubuntu codename from the OS release file. On Ubuntu 26.04 it returns resolute.
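If you want to see what the two substitutions in the heredoc will expand to on your machine before writing the file, you can run them by hand:

```shell
# Print the values that will be substituted into docker.sources.
# UBUNTU_CODENAME is set on Ubuntu; VERSION_CODENAME is the fallback
# used by derivatives and other Debian-family systems.
. /etc/os-release
echo "Suites: ${UBUNTU_CODENAME:-$VERSION_CODENAME}"
echo "Architectures: $(dpkg --print-architecture)"
```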

Step 4: Install Docker Engine

Tip
If you want to install a specific Docker version, skip this step and go to the next one.

Now that the Docker repository is enabled, update the package index and install Docker:

Terminal
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Installing a Specific Docker Version (Optional)

If you want to install a specific Docker version instead of the latest one, first list the available versions:

Terminal
sudo apt update
apt list --all-versions docker-ce

The available Docker versions are printed in the second column:

output
docker-ce/resolute 5:29.4.0-1~ubuntu.26.04~resolute amd64
docker-ce/resolute 5:29.3.1-1~ubuntu.26.04~resolute amd64
docker-ce/resolute 5:29.3.0-1~ubuntu.26.04~resolute amd64
...

Install a specific version by adding =<VERSION> after the package name:

Terminal
DOCKER_VERSION="<VERSION>"
sudo apt install docker-ce=$DOCKER_VERSION docker-ce-cli=$DOCKER_VERSION containerd.io docker-buildx-plugin docker-compose-plugin

Replace <VERSION> with the exact version string returned by apt list --all-versions docker-ce.

Verify the Docker Installation

On most Ubuntu systems, the Docker service starts automatically after installation. If it does not, start it manually:

Terminal
sudo systemctl start docker

Check the service status:

Terminal
sudo systemctl status docker

The output will look something like this:

output
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running)
...

You can also confirm that the client and daemon are responding:

Terminal
sudo docker version

To verify that Docker can pull and run containers correctly, run the test image:

Terminal
sudo docker run hello-world

If the image is not found locally, Docker will download it from Docker Hub, run the container, print a “Hello from Docker” message, and exit.


The container stops after printing the message because it has no long-running process.

Run Docker Commands Without sudo (Highly Recommended)

By default, only root and a user with sudo privileges can run Docker commands.

To allow a non-root user to execute Docker commands, add the user to the docker group:

Terminal
sudo usermod -aG docker $USER

$USER is an environment variable that holds the currently logged-in username. If you want to execute commands with another user, replace $USER with that username.

Run newgrp docker or log out and log back in for the group membership changes to take effect.
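You can check whether the membership is visible to your current shell; if docker is missing from the output, the session has not picked up the new group yet:

```shell
# Print the groups of the current session, one per line.
id -nG | tr ' ' '\n'
```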

After that, you can rerun the test container without sudo:

Terminal
docker run hello-world

Info
By default, Docker pulls images from Docker Hub, a cloud-based registry service that stores Docker images in public or private repositories.

Updating Docker

When a new Docker version is released, update it using standard system commands:

Terminal
sudo apt update
sudo apt upgrade

To prevent Docker from being updated automatically, mark it as held:

Terminal
sudo apt-mark hold docker-ce

Uninstalling Docker

Before uninstalling Docker, it is recommended to remove all containers, images, volumes, and networks. Run the following commands to stop all running containers and remove all Docker objects:

Terminal
docker container stop $(docker container ls -aq)
docker system prune -a --volumes -f

Remove Docker packages:

Terminal
sudo apt purge docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt autoremove

To completely remove Docker data from your system, delete the directories that were created during the installation process:

Terminal
sudo rm -rf /var/lib/{docker,containerd}

Troubleshooting

Permission denied while trying to connect to the Docker daemon socket
Your user is not yet in the docker group, or the group membership has not taken effect. Run sudo usermod -aG docker $USER, then log out and back in (or run newgrp docker).

Cannot connect to the Docker daemon. Is the daemon running?
The Docker service is not running. Start it with sudo systemctl start docker and check its status with sudo systemctl status docker.

Package ‘docker-ce’ has no installation candidate
The Docker repository was not added correctly. Re-run Steps 2 and 3, then run sudo apt update before installing.

Containers fail to start after an OS or Docker upgrade
Check the Docker daemon status, the installed Docker version, and your container runtime configuration. If the host or Docker Engine was upgraded recently, review the Docker release notes and verify that your workloads are using a supported configuration before restarting them.

Conclusion

Installing Docker from the official repository ensures you always have access to the latest stable releases and security updates. For post-install steps such as configuring log drivers or setting up rootless mode, see the official post-install guide.

Docker Compose: Define and Run Multi-Container Apps

Docker Compose is a tool for defining and running multi-container applications. Instead of starting each container separately with docker run, you describe all of your services, networks, and volumes in a single YAML file and bring the entire environment up with one command.

This guide explains how Docker Compose V2 works, walks through a practical example with a SvelteKit development server and PostgreSQL database, and covers the most commonly used Compose directives and commands.

Quick Reference

For a printable quick reference, see the Docker cheatsheet.

Command Description
docker compose up Start all services (foreground)
docker compose up -d Start all services in detached mode
docker compose down Stop and remove containers and networks
docker compose down -v Also remove named volumes
docker compose ps List running services
docker compose logs SERVICE View logs for a service
docker compose logs -f SERVICE Follow logs in real time
docker compose exec SERVICE sh Open a shell in a running container
docker compose stop Stop containers without removing them
docker compose build Build or rebuild images
docker compose pull Pull the latest images

Prerequisites

The examples in this guide require Docker with the Compose plugin installed. Docker Desktop includes Compose by default. On Linux, install the docker-compose-plugin package alongside Docker Engine.

To verify Compose is available, run:

Terminal
docker compose version
output
Docker Compose version v2.x.x

Compose V2 runs as docker compose (with a space), not docker-compose. The old V1 binary is end-of-life and not covered in this guide.

The Compose File

Modern Docker Compose prefers a file named compose.yaml, though docker-compose.yml is still supported for compatibility. In this guide, we will use docker-compose.yml because many readers still recognize that name first.

A Compose file describes your application’s environment using three top-level keys:

  • services — defines each container: its image, ports, volumes, environment variables, and dependencies
  • volumes — declares named volumes that persist data across container restarts
  • networks — defines custom networks for service communication (Compose creates a default network automatically)

Compose V2 does not require a version: field at the top of the file. If you see version: '3' in older guides, that is legacy syntax kept for backward compatibility, not something new Compose files need.

YAML uses indentation to define structure. Use spaces, not tabs.

Setting Up the Example

In this example, we will run a SvelteKit development server alongside a PostgreSQL 16 database using Docker Compose.

Start by creating a new SvelteKit project. The npm create command launches an interactive wizard — select “Skeleton project” when prompted, then choose your preferred options for TypeScript and linting:

Terminal
npm create svelte@latest myapp
cd myapp

If you already have a Node.js project, skip this step and use your project directory instead.

Next, create a docker-compose.yml file in the project root:

docker-compose.yml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    ports:
      - "5173:5173"
    environment:
      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    command: sh -c "npm install && npm run dev -- --host"

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  node_modules:

Compose V2 automatically reads a .env file in the project directory and substitutes ${VAR} references in the compose file. Create a .env file to define the database password:

.env
POSTGRES_PASSWORD=changeme

Warning
Do not commit .env to version control. Add it to your .gitignore file to keep credentials out of your repository.

The password above is for local development only. We will walk through each directive in the compose file in the next section.
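Before starting anything, you can ask Compose to print the fully-substituted configuration. This is a quick way to confirm that ${POSTGRES_PASSWORD} was actually picked up from .env (it requires Docker with the Compose plugin, so treat it as an optional check):

```shell
docker compose config   # render the compose file with all ${VAR} references resolved
```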

Understanding the Compose File

Let us go through each directive in the compose file.

services

The services key is the core of any compose file. Each entry under services defines one container. The key name (app, db) becomes the service name, and Compose also uses it as the hostname for inter-service communication — so the app container can reach the database at db:5432.

image

The image directive tells Compose which Docker image to pull. Both node:20-alpine and postgres:16-alpine are pulled from Docker Hub if they are not already present on your machine. The alpine variant uses Alpine Linux as a base, which keeps image sizes small.

working_dir

The working_dir directive sets the working directory inside the container. All commands run in /app, which is where we mount the project files.

volumes

The app service uses two volume entries:

txt
- .:/app
- node_modules:/app/node_modules

The first entry is a bind mount. It maps the current directory on your host (.) to /app inside the container. Any file you edit on your host is immediately reflected inside the container, which is what enables hot-reload during development.

The second entry is a named volume. Without it, the bind mount would overwrite /app/node_modules with whatever is in your host directory — which may be empty or incompatible. The node_modules named volume tells Docker to keep a separate copy of node_modules inside the container so the bind mount does not interfere with it.

The db service uses a named volume for its data directory:

txt
- postgres_data:/var/lib/postgresql/data

This ensures that your database data persists across container restarts. When you run docker compose down, the postgres_data volume is kept. Only docker compose down -v removes it.

Named volumes must be declared at the top level under volumes:. Both postgres_data and node_modules are listed there with no additional configuration, which tells Docker to manage them using its default storage driver.
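Both named volumes are visible to the plain docker CLI once created. Note that Compose prefixes volume names with the project name, which defaults to the directory name — myapp in this example:

```shell
docker volume ls                           # list Docker-managed volumes
docker volume inspect myapp_postgres_data  # driver, host mountpoint, labels
```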

ports

The ports directive maps ports between the host machine and the container in "HOST:CONTAINER" format. The entry "5173:5173" exposes the Vite development server so you can open http://localhost:5173 in your browser.

environment

The environment directive sets environment variables inside the container. The app service receives the DATABASE_URL connection string, which a SvelteKit application can read to connect to the database.

Notice the ${POSTGRES_PASSWORD} syntax. Compose reads this value from the .env file and substitutes it before starting the container. This keeps credentials out of the compose file itself.

depends_on

By default, depends_on only controls the order in which containers start — it does not wait for a service to be ready. Compose V2 supports a long-form syntax with a condition key that changes this behavior:

yaml
depends_on:
  db:
    condition: service_healthy

With condition: service_healthy, Compose waits until the db service passes its healthcheck before starting app. This prevents the Node.js process from trying to connect to PostgreSQL before the database is accepting connections.

command

The command directive overrides the default command defined in the image. Here it runs npm install to install dependencies, then starts the Vite dev server with --host to bind to 0.0.0.0 inside the container (required for Docker to forward the port to your host):

txt
sh -c "npm install && npm run dev -- --host"

Running npm install on every container start is slow but convenient for development — you do not need to install dependencies manually before running docker compose up. For production, use a multi-stage Dockerfile to pre-install dependencies and build the application.

healthcheck

The healthcheck directive defines a command Compose runs inside the container to determine whether the service is ready. The pg_isready utility checks whether PostgreSQL is accepting connections:

yaml
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5

Compose runs this check every 5 seconds. Once the service passes its first check, it is marked as healthy and any services using condition: service_healthy are allowed to start. If the check fails 5 consecutive times, the service is marked as unhealthy.
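From another terminal, you can watch the health state move from starting to healthy with a plain docker command (the container name shown assumes the default project name myapp):

```shell
docker inspect --format '{{.State.Health.Status}}' myapp-db-1
```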

Managing the Application

Run all commands from the project directory — the directory where your docker-compose.yml is located.

Starting Services

To start all services and stream their logs to your terminal, run:

Terminal
docker compose up

Compose pulls any missing images, creates the network, starts the db container, waits for it to pass its healthcheck, then starts the app container. Press Ctrl+C to stop.

To start in detached mode and run the services in the background:

Terminal
docker compose up -d

Viewing Status and Logs

To list running services and their current state:

Terminal
docker compose ps
output
NAME IMAGE COMMAND SERVICE STATUS PORTS
myapp-app-1 node:20-alpine "docker-entrypoint.s…" app Up 2 minutes 0.0.0.0:5173->5173/tcp
myapp-db-1 postgres:16-alpine "docker-entrypoint.s…" db Up 2 minutes 5432/tcp

To view the logs for a specific service:

Terminal
docker compose logs app

To follow the logs in real time (like tail -f):

Terminal
docker compose logs -f app

Press Ctrl+C to stop following.

Running Commands Inside a Container

To open an interactive shell inside the running app container:

Terminal
docker compose exec app sh

This is useful for running one-off commands, inspecting the file system, or debugging. Type exit to leave the shell.

Stopping and Removing Services

To stop running containers without removing them — preserving named volumes and the network:

Terminal
docker compose stop

To start them again after stopping:

Terminal
docker compose start

To stop containers and remove them along with the network:

Terminal
docker compose down

Named volumes (postgres_data, node_modules) are preserved. Your database data is safe.

To also remove all named volumes — this deletes your database data:

Terminal
docker compose down -v

Use this when you want a completely clean environment.

Common Directives Reference

The following directives are not used in the example above but are commonly needed in real projects.

restart

The restart directive controls what Compose does when a container exits:

yaml
services:
  app:
    restart: unless-stopped

The available policies are:

  • no — do not restart (default)
  • always — always restart, including on system reboot
  • unless-stopped — restart unless the container was explicitly stopped
  • on-failure — restart only when the container exits with a non-zero status

Use unless-stopped for long-running services in single-host deployments.

build

Instead of pulling a pre-built image, build tells Compose to build the image from a local Dockerfile:

yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile

context is the directory Compose sends to the Docker daemon as the build context. dockerfile specifies the Dockerfile path relative to context. If a service defines both image and build, Compose builds from the Dockerfile and tags the resulting image with the name given in image.

env_file

The env_file directive injects environment variables into a container from a file at runtime:

yaml
services:
  app:
    env_file:
      - .env.local

This is different from Compose’s native .env substitution. The .env file at the project root is read by Compose itself to substitute ${VAR} references in the compose file. The env_file directive injects variables directly into the container’s environment. Both can be used together.

networks

Compose creates a default bridge network connecting all services automatically. You can define custom networks to isolate groups of services or control how containers communicate:

yaml
services:
  app:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
  backend:

A service only communicates with other services on the same network. In this example, app can reach db because both share the backend network. A service attached only to frontend — not backend — has no route to db.

profiles

Profiles let you define optional services that only start when explicitly requested. This is useful for development tools, debuggers, or admin interfaces that you do not want running all the time:

yaml
services:
  app:
    image: node:20-alpine

  adminer:
    image: adminer
    profiles:
      - tools
    ports:
      - "8080:8080"

Running docker compose up starts only app. To also start adminer, pass the profile flag:

Terminal
docker compose --profile tools up

Troubleshooting

Port is already in use
Another process on your host is using the same port. Change the host-side port in the ports: mapping. For example, change "5173:5173" to "5174:5173" to expose the container on port 5174 instead.

Service cannot connect to the database
If you are not using condition: service_healthy, depends_on only ensures the db container starts before app — it does not wait for PostgreSQL to be ready to accept connections. Add a healthcheck to the db service and set condition: service_healthy in depends_on as shown in the example above.

node_modules is empty inside the container
The bind mount .:/app maps your host directory to /app, which overwrites /app/node_modules with whatever is on your host. If your host directory has no node_modules, the container sees none either. Fix this by adding a named volume entry node_modules:/app/node_modules as shown in the example. Docker preserves the named volume’s contents and the bind mount does not overwrite it.

FAQ

What is the difference between docker compose down and docker compose stop?
docker compose stop stops the running containers but leaves them on disk along with the network and volumes. You can restart them with docker compose start. docker compose down removes the containers and the network. Named volumes are kept unless you add the -v flag.

How do I view logs for a specific service?
Run docker compose logs SERVICE, replacing SERVICE with the service name defined in your compose file (for example, docker compose logs app). To follow logs in real time, add the -f flag: docker compose logs -f app.

How do I pass secrets without hardcoding them in the Compose file?
Create a .env file in the project directory with your credentials (for example, POSTGRES_PASSWORD=changeme) and reference them in the compose file using ${VAR} syntax. Compose reads the .env file automatically. Add .env to your .gitignore so credentials are never committed to version control.

Can I use Docker Compose in production?
Compose works well for single-host deployments — it is simpler to operate than Kubernetes when you only have one server. Use restart: unless-stopped to keep services running after reboots, and keep secrets in environment variables rather than hardcoded values. For multi-host deployments, look at Docker Swarm or Kubernetes.

What is the difference between a bind mount and a named volume?
A bind mount (./host-path:/container-path) maps a specific directory from your host into the container. Changes on either side are immediately visible on the other. A named volume (volume-name:/container-path) is managed entirely by Docker — it persists data independently of the host directory structure and is not tied to a specific path on your machine. Use bind mounts for source code (so edits take effect immediately) and named volumes for database data (so it persists reliably).

Conclusion

Docker Compose gives you a straightforward way to define and run multi-container development environments with a single YAML file. Once you are comfortable with the basics, the next step is writing custom Dockerfiles to build your own images instead of relying on generic ones.

How to Install Git on Debian 13

Git is the world’s most popular distributed version control system, used by many open-source and commercial projects. It allows you to collaborate with fellow developers, keep track of your code changes, revert to previous stages, create branches, and more.

This guide covers installing and configuring Git on Debian 13 (Trixie) using apt or by compiling from source.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Task Command
Install Git (apt) sudo apt install git
Check Git version git --version
Set username git config --global user.name "Your Name"
Set email git config --global user.email "you@example.com"
View config git config --list

Installing Git with Apt

This is the quickest way to install Git on Debian.

Check if Git is already installed:

Terminal
git --version

If Git is not installed, you will see a “command not found” message. Otherwise, it shows the installed version.

Use the apt package manager to install Git:

Terminal
sudo apt update
sudo apt install git

Verify the installation:

Terminal
git --version

Debian 13 stable currently provides Git 2.47.3:

output
git version 2.47.3

You can now start configuring Git.

When a new version of Git is released, you can update using sudo apt update && sudo apt upgrade.

Installing Git from Source

The main benefit of installing Git from source is that you can compile any version you want. However, you cannot maintain your installation through the apt package manager.

Install the build dependencies:

Terminal
sudo apt update
sudo apt install libcurl4-gnutls-dev libexpat1-dev cmake gettext libz-dev libssl-dev gcc wget

Visit the Git download page to find the latest version.

At the time of writing, the latest stable Git version is 2.53.0.

If you need a different version, visit the Git archive to find available releases.

Download and extract the source to /usr/src:

Terminal
wget -c https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.53.0.tar.gz -O - | sudo tar -xz -C /usr/src

Navigate to the source directory and compile:

Terminal
cd /usr/src/git-*
sudo make prefix=/usr/local all
sudo make prefix=/usr/local install

The compilation may take some time depending on your system.

If your shell still resolves /usr/bin/git after installation, open a new terminal or verify your PATH and binary location with:

Terminal
which git
echo $PATH

Verify the installation:

Terminal
git --version
output
git version 2.53.0

To upgrade to a newer version later, repeat the same process with the new version number.

Configuring Git

After installing Git, configure your username and email address. Git associates your identity with every commit you make.

Set your global commit name and email:

Terminal
git config --global user.name "Your Name"
git config --global user.email "youremail@yourdomain.com"

Verify the configuration:

Terminal
git config --list
output
user.name=Your Name
user.email=youremail@yourdomain.com

The configuration is stored in ~/.gitconfig:

~/.gitconfig
[user]
name = Your Name
email = youremail@yourdomain.com

You can edit the configuration using the git config command or by editing ~/.gitconfig directly.

For a deeper walkthrough, see How to Configure Git Username and Email.
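Omitting --global scopes settings to a single repository, which is handy for keeping separate work and personal identities. A minimal sketch in a throwaway repository (the path, name, and address are illustrative):

```shell
# Create a throwaway repository; omitting --global stores the values
# in the repository's own .git/config instead of ~/.gitconfig.
mkdir -p /tmp/demo-repo && cd /tmp/demo-repo
git init -q
git config user.name "Work Account"
git config user.email "you@work.example"
# The repository-local value overrides any global setting here:
git config user.email
```

Repository-local values take precedence over ~/.gitconfig for commits made in that repository only.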

Troubleshooting

E: Unable to locate package git
Run sudo apt update first and verify you are on Debian 13 repositories. If sources were recently changed, refresh package metadata again.

git --version still shows an older version after source install
Your shell may still resolve /usr/bin/git before /usr/local/bin/git. Check with which git and adjust PATH order if needed.

Build fails with missing headers or libraries
One or more dependencies are missing. Re-run the dependency install command and then compile again.

make succeeds but git command is not found
Confirm install step ran successfully: sudo make prefix=/usr/local install. Then check /usr/local/bin/git exists.

FAQ

Should you use apt or source on Debian 13?
For most systems, use apt because updates are integrated with Debian security and package management. Build from source only when you need a newer Git release than the repository version.

Does compiling from source replace the apt package automatically?
No. Source builds under /usr/local and can coexist with the apt package in /usr/bin. Your PATH order determines which binary runs by default.

How can you remove a source-installed Git version?
If you built from the source tree, run sudo make prefix=/usr/local uninstall from that same source directory.

Conclusion

We covered two ways to install Git on Debian 13: using apt, which provides Git 2.47.3, or compiling from source for the latest version. The default repository version is sufficient for most use cases.

For more information, see the Pro Git book.

How to Revert a Commit in Git

The git revert command creates a new commit that undoes the changes introduced by a specified commit. Unlike git reset, which rewrites the commit history, git revert preserves the full history and is the safe way to undo changes that have already been pushed to a shared repository.

This guide explains how to use git revert to undo one or more commits with practical examples.

Quick Reference

Task                                 Command
Revert the last commit               git revert HEAD
Revert without opening editor        git revert --no-edit HEAD
Revert a specific commit             git revert COMMIT_HASH
Revert without committing            git revert --no-commit HEAD
Revert a range of commits            git revert --no-commit HEAD~3..HEAD
Revert a merge commit                git revert -m 1 MERGE_COMMIT_HASH
Abort a revert in progress           git revert --abort
Continue after resolving conflicts   git revert --continue

Syntax

The general syntax for the git revert command is:

Terminal
git revert [OPTIONS] COMMIT
  • OPTIONS — Flags that modify the behavior of the command.
  • COMMIT — The commit hash or reference to revert.

git revert vs git reset

Before diving into examples, it is important to understand the difference between git revert and git reset, as they serve different purposes:

  • git revert — Creates a new commit that undoes the changes from a previous commit. The original commit remains in the history. This is safe to use on branches that have been pushed to a remote repository.
  • git reset — Moves the HEAD pointer backward, effectively removing commits from the history. This rewrites the commit history and should not be used on shared branches without coordinating with your team.

As a general rule, use git revert for commits that have already been pushed, and git reset for local commits that have not been shared.
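The difference is easy to see end-to-end in a throwaway repository (the path and file name are illustrative):

```shell
# Build a tiny two-commit history, then revert the latest commit.
mkdir -p /tmp/revert-demo && cd /tmp/revert-demo
git init -q
git config user.name "Test" && git config user.email "t@example.com"
echo "v1" > app.txt && git add app.txt && git commit -qm "Add app.txt"
echo "v2" > app.txt && git commit -qam "Change app.txt"
# The revert keeps both earlier commits and adds a third that restores v1:
git revert --no-edit HEAD
cat app.txt                 # back to "v1"
git log --oneline | wc -l   # three commits: two originals plus the revert
```

With git reset --hard HEAD~1, by contrast, the "Change app.txt" commit would disappear from the log entirely.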

Reverting the Last Commit

To revert the most recent commit, run git revert followed by HEAD:

Terminal
git revert HEAD

Git will open your default text editor so you can edit the revert commit message. The default message looks like this:

plain
Revert "Original commit message"

This reverts commit abc1234def5678...

Save and close the editor to complete the revert. Git will create a new commit that reverses the changes from the last commit.

To verify the revert, use git log to see the new revert commit in the history:

Terminal
git log --oneline -3
output
a1b2c3d (HEAD -> main) Revert "Add new feature"
f4e5d6c Add new feature
b7a8c9d Update configuration

The original commit (f4e5d6c) remains in the history, and the new revert commit (a1b2c3d) undoes its changes.

Reverting a Specific Commit

You do not have to revert the most recent commit. You can revert any commit in the history by specifying its commit hash.

First, find the commit hash using git log:

Terminal
git log --oneline
output
a1b2c3d (HEAD -> main) Update README
f4e5d6c Add login feature
b7a8c9d Fix navigation bug
e0f1a2b Add search functionality

To revert the “Fix navigation bug” commit, pass its hash to git revert:

Terminal
git revert b7a8c9d

Git will create a new commit that undoes only the changes introduced in commit b7a8c9d, leaving all other commits intact.

Reverting Without Opening an Editor

If you want to use the default revert commit message without opening an editor, use the --no-edit option:

Terminal
git revert --no-edit HEAD
output
[main d5e6f7a] Revert "Add new feature"
2 files changed, 0 insertions(+), 15 deletions(-)

This is useful when scripting or when you do not need a custom commit message.

Reverting Without Committing

By default, git revert automatically creates a new commit. If you want to stage the reverted changes without committing them, use the --no-commit (or -n) option:

Terminal
git revert --no-commit HEAD

The changes will be applied to the working directory and staging area, but no commit is created. You can then review the changes, make additional modifications, and commit manually:

Terminal
git status
git commit -m "Revert feature and clean up related code"

This is useful when you want to combine the revert with other changes in a single commit.
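For example, to fold a revert and an unrelated cleanup edit into one commit (the path and file names are illustrative):

```shell
# Set up a throwaway repo with two commits.
mkdir -p /tmp/nc-demo && cd /tmp/nc-demo
git init -q
git config user.name "Test" && git config user.email "t@example.com"
echo "one" > f.txt && git add f.txt && git commit -qm "one"
echo "two" > f.txt && git commit -qam "two"
# Stage the revert without committing, add an extra change, commit once:
git revert --no-commit HEAD
echo "cleanup note" > notes.txt && git add notes.txt
git commit -qm "Revert 'two' and add notes"
```

The history ends with a single commit that both undoes "two" and introduces notes.txt.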

Reverting Multiple Commits

Reverting a Range of Commits

To revert a range of consecutive commits, specify the range using the .. notation:

Terminal
git revert --no-commit HEAD~3..HEAD

This reverts the last three commits. The --no-commit option stages all the reverted changes without creating individual revert commits, allowing you to commit them as a single revert:

Terminal
git commit -m "Revert last three commits"

Without --no-commit, Git will create a separate revert commit for each commit in the range.
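A quick sketch of a range revert in a throwaway repository (four numbered commits, the last three undone as one):

```shell
mkdir -p /tmp/range-demo && cd /tmp/range-demo
git init -q
git config user.name "Test" && git config user.email "t@example.com"
# Four commits, each writing its step number into file.txt:
for n in 1 2 3 4; do echo "$n" > file.txt; git add file.txt; git commit -qm "step $n"; done
# Stage reverts of steps 4, 3, and 2 (applied newest first), then commit once:
git revert --no-commit HEAD~3..HEAD
git commit -qm "Revert last three commits"
cat file.txt   # back to "1"
```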

Reverting Multiple Individual Commits

To revert multiple specific (non-consecutive) commits, list them one after another:

Terminal
git revert --no-commit abc1234 def5678 ghi9012
git commit -m "Revert selected commits"

Reverting a Merge Commit

Merge commits have two parent commits, so Git needs to know which parent to revert to. Use the -m option followed by the parent number (usually 1 for the branch you merged into):

Terminal
git revert -m 1 MERGE_COMMIT_HASH

In the following example, we revert a merge commit and keep the main branch as the base:

Terminal
git revert -m 1 a1b2c3d
  • -m 1 — Tells Git to use the first parent (the branch the merge was made into, typically main or master) as the base.
  • -m 2 — Would use the second parent (the branch that was merged in).

You can check the parents of a merge commit using:

Terminal
git log --oneline --graph
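Putting it together, a minimal sketch that merges a feature branch and then reverts the merge (the path, branch, and file names are illustrative; git init -b requires Git 2.28 or later):

```shell
mkdir -p /tmp/merge-demo && cd /tmp/merge-demo
git init -qb main
git config user.name "Test" && git config user.email "t@example.com"
echo "base" > f.txt && git add f.txt && git commit -qm "base"
# Diverge: feature edits f.txt, main gains an unrelated commit.
git checkout -qb feature
echo "feature" > f.txt && git commit -qam "feature change"
git checkout -q main
echo "main" > g.txt && git add g.txt && git commit -qm "main change"
git merge -q --no-edit feature       # creates a two-parent merge commit
# Undo the merge, keeping main (parent 1) as the base:
git revert --no-edit -m 1 HEAD
cat f.txt   # back to "base"; g.txt is untouched
```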

Handling Conflicts During Revert

If the code has changed since the original commit, Git may encounter conflicts when applying the revert. When this happens, Git will pause the revert and show conflicting files:

output
CONFLICT (content): Merge conflict in src/app.js
error: could not revert abc1234... Add new feature
hint: After resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' and run 'git revert --continue'.

To resolve the conflict:

  1. Open the conflicting files and resolve the merge conflicts manually.

  2. Stage the resolved files:

    Terminal
    git add src/app.js

  3. Continue the revert:

    Terminal
    git revert --continue

If you decide not to proceed with the revert, you can abort it:

Terminal
git revert --abort

This restores the repository to the state before you started the revert.
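A conflict is easy to reproduce in a throwaway repository: three commits edit the same line, and reverting the middle one clashes with the latest change (the path and file names are illustrative):

```shell
mkdir -p /tmp/conflict-demo && cd /tmp/conflict-demo
git init -q
git config user.name "Test" && git config user.email "t@example.com"
echo "alpha" > f.txt && git add f.txt && git commit -qm "A"
echo "beta"  > f.txt && git commit -qam "B"
echo "gamma" > f.txt && git commit -qam "C"
# Reverting B wants to turn "beta" back into "alpha", but the line now
# says "gamma", so Git stops with a conflict (and a non-zero exit code):
git revert --no-edit HEAD~1 || true
git status --short      # f.txt is listed as unmerged (UU)
# Give up and restore the pre-revert state:
git revert --abort
cat f.txt               # "gamma" again, working tree clean
```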

Pushing a Revert to a Remote Repository

Since git revert creates a new commit rather than rewriting history, you can safely push it to a shared repository using a regular push:

Terminal
git push origin main

There is no need for --force because the commit history is not rewritten.

Common Options

The git revert command accepts several options:

  • --no-edit — Use the default commit message without opening an editor.
  • --no-commit (-n) — Apply the revert to the working directory and index without creating a commit.
  • -m parent-number — Specify which parent to use when reverting a merge commit (usually 1).
  • --abort — Cancel the revert operation and return to the pre-revert state.
  • --continue — Continue the revert after resolving conflicts.
  • --skip — Skip the current commit and continue reverting the rest.

Troubleshooting

Revert is in progress
If you see a message that a revert is in progress, either continue with git revert --continue after resolving conflicts or cancel with git revert --abort.

Revert skips a commit
If Git reports that a commit was skipped because it was already applied, it means the changes are already present. You can proceed or use git revert --skip to continue.

FAQ

What is the difference between git revert and git reset?
git revert creates a new commit that undoes changes while preserving the full commit history. git reset moves the HEAD pointer backward and can remove commits from the history. Use git revert for shared branches and git reset for local, unpushed changes.

Can I revert a commit that has already been pushed?
Yes. This is exactly what git revert is designed for. Since it creates a new commit rather than rewriting history, it is safe to push to shared repositories without disrupting other collaborators.

How do I revert a merge commit?
Use the -m option to specify the parent number. For example, git revert -m 1 MERGE_HASH reverts the merge and keeps the first parent (usually the main branch) as the base.

Can I undo a git revert?
Yes. Since a revert is just a regular commit, you can revert the revert commit itself: git revert REVERT_COMMIT_HASH. This effectively re-applies the original changes.

What happens if there are conflicts during a revert?
Git will pause the revert and mark the conflicting files. You need to resolve the conflicts manually, stage the files with git add, and then run git revert --continue to complete the operation.

Conclusion

The git revert command is the safest way to undo changes in a Git repository, especially for commits that have already been pushed. It creates a new commit that reverses the specified changes while keeping the full commit history intact.

If you have any questions, feel free to leave a comment below.
