
git clone: Clone a Repository

git clone copies an existing Git repository into a new directory on your local machine. It creates a remote-tracking branch for each branch on the remote and sets up a remote called origin pointing back to the source.

This guide explains how to use git clone with the most common options and protocols.

Syntax

The basic syntax of the git clone command is:

txt
git clone [OPTIONS] REPOSITORY [DIRECTORY]

REPOSITORY is the URL or path of the repository to clone. DIRECTORY is optional; if omitted, Git uses the repository name as the directory name.

Clone a Remote Repository

The most common way to clone a repository is over HTTPS. For example, to clone a GitHub repository:

Terminal
git clone https://github.com/user/repo.git

This creates a repo directory in the current working directory, initializes a .git directory inside it, and checks out the default branch.
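You can confirm the origin remote that a clone sets up with git remote -v. The following is a self-contained sketch using a throwaway local repository; every path is temporary and illustrative:

```shell
# Sketch: show the "origin" remote created by git clone.
# Uses a disposable local repository instead of a real remote.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/source"
git -C "$tmp/source" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "init"
git clone -q "$tmp/source" "$tmp/copy"
git -C "$tmp/copy" remote -v    # origin <path> (fetch) / origin <path> (push)
rm -rf "$tmp"
```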

You can also clone over SSH if you have an SSH key configured:

Terminal
git clone git@github.com:user/repo.git

SSH is generally preferred for repositories you will push to, as it does not require entering credentials each time.

Clone into a Specific Directory

By default, git clone creates a directory named after the repository. To clone into a different directory, pass the target path as the second argument:

Terminal
git clone https://github.com/user/repo.git my-project

To clone into the current directory, use . as the target:

Terminal
git clone https://github.com/user/repo.git .
Warning
Cloning into . requires the current directory to be empty.

Clone a Specific Branch

By default, git clone checks out the default branch (usually main or master). To clone and check out a different branch, use the -b option:

Terminal
git clone -b develop https://github.com/user/repo.git

The full repository history is still downloaded; only the checked-out branch differs. To limit the download to a single branch, combine -b with --single-branch:

Terminal
git clone -b develop --single-branch https://github.com/user/repo.git
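To verify the effect, compare the remote-tracking branches with git branch -r. A self-contained sketch with a throwaway local repository (paths are illustrative):

```shell
# Sketch: --single-branch fetches only the requested branch.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/source"
git -C "$tmp/source" -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C "$tmp/source" branch develop
git clone -q -b develop --single-branch "$tmp/source" "$tmp/copy"
git -C "$tmp/copy" branch -r    # lists origin/develop but not the default branch
rm -rf "$tmp"
```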

Shallow Clone

A shallow clone downloads only a limited number of recent commits rather than the full history. This is useful for speeding up the clone of large repositories when you do not need the full commit history.

To clone with only the latest commit:

Terminal
git clone --depth 1 https://github.com/user/repo.git

To include the last 10 commits:

Terminal
git clone --depth 10 https://github.com/user/repo.git

Shallow clones are commonly used in CI/CD pipelines to reduce download time. To convert a shallow clone to a full clone later, run:

Terminal
git fetch --unshallow
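You can check how much history a clone actually contains with git rev-list --count HEAD. A self-contained sketch; note that Git ignores --depth for plain local paths, so the source is addressed with a file:// URL:

```shell
# Sketch: a --depth 1 clone keeps exactly one commit of history.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/source"
for i in 1 2 3; do
  git -C "$tmp/source" -c user.email=a@b -c user.name=demo \
      commit -q --allow-empty -m "commit $i"
done
git clone -q --depth 1 "file://$tmp/source" "$tmp/copy"
git -C "$tmp/copy" rev-list --count HEAD    # prints 1, not 3
rm -rf "$tmp"
```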

Clone a Local Repository

git clone also works with local paths. This is useful for creating an isolated copy for testing:

Terminal
git clone /path/to/local/repo new-copy

Troubleshooting

Destination directory is not empty
git clone URL . works only when the current directory is empty. If files already exist, clone into a new directory or move the existing files out of the way first.

Authentication failed over HTTPS
Most Git hosting services no longer accept account passwords for Git operations over HTTPS. Use a personal access token or configure a credential manager so Git can authenticate securely.

Permission denied (publickey)
This error usually means your SSH key is missing, not loaded into your SSH agent, or not added to your account on the Git hosting service. Verify your SSH setup before retrying the clone.

Host key verification failed
SSH could not verify the identity of the remote server. Confirm you are connecting to the correct host, then accept or update the host key in ~/.ssh/known_hosts.
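If the host's key has legitimately changed, for example after a server migration, ssh-keygen -R removes the stale entry so SSH re-verifies on the next connection. A self-contained sketch against a throwaway file; in real use, drop -f to edit ~/.ssh/known_hosts:

```shell
# Sketch: remove a stale known_hosts entry so SSH re-verifies the host key.
# Demonstrated on a throwaway file; github.com is a placeholder host.
set -e
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$tmp/hostkey"               # dummy key pair
printf 'github.com %s\n' "$(cat "$tmp/hostkey.pub")" > "$tmp/known_hosts"
ssh-keygen -R github.com -f "$tmp/known_hosts"                 # drops the entry
rm -rf "$tmp"
```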

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git clone URL Clone a repository via HTTPS or SSH
git clone URL dir Clone into a specific directory
git clone URL . Clone into the current (empty) directory
git clone -b branch URL Clone and check out a specific branch
git clone -b branch --single-branch URL Clone only a specific branch
git clone --depth 1 URL Shallow clone: latest commit only
git clone --depth N URL Shallow clone: last N commits
git clone /path/to/repo Clone a local repository

FAQ

What is the difference between HTTPS and SSH cloning?
HTTPS cloning works out of the box but prompts for credentials unless you use a credential manager. SSH cloning requires a key pair to be configured but does not prompt for a password on each push or pull.

Does git clone download all branches?
Yes. All remote branches are fetched, but only the default branch is checked out locally. Use git branch -r to list all remote branches, or git checkout branch-name to switch to one.

How do I clone a private repository?
For HTTPS, provide your username and a personal access token when prompted. For SSH, ensure your public key is added to your account on the hosting service.

Can I clone a specific tag instead of a branch?
Yes. Use -b with the tag name: git clone -b v1.0.0 https://github.com/user/repo.git. Git checks out that tag in a detached HEAD state, so create a branch if you plan to make commits.

Conclusion

git clone is the starting point for working with any existing Git repository. After cloning, you can inspect the commit history with git log, review changes with git diff, and configure your username and email if you have not done so already.

df Cheatsheet

Basic Usage

Common ways to check disk space.

Command Description
df Show disk usage for all mounted filesystems
df /path Show filesystem usage for the given path
df -h Human-readable sizes (K, M, G — powers of 1024)
df -H Human-readable sizes (K, M, G — powers of 1000)
df -a Include all filesystems (including pseudo and duplicate)
df --total Add a grand total row at the bottom

Size Formats

Control how sizes are displayed.

Option Description
-h Auto-scale using powers of 1024 (K, M, G)
-H Auto-scale using powers of 1000 (SI units)
-k Display sizes in 1K blocks (default on most systems)
-m Display sizes in 1M blocks
-BG Display sizes in 1G blocks
-B SIZE Use SIZE-byte blocks (e.g., -BM, -BG, -B512)

Output Columns

What each column in the default output means.

Column Description
Filesystem Device or remote path of the filesystem
1K-blocks / Size Total size of the filesystem
Used Space currently in use
Available Space available for use
Use% Percentage of space used
Mounted on Directory where the filesystem is mounted

Filesystem Type

Show or filter by filesystem type.

Option Description
-T Show filesystem type column
-t TYPE Only show filesystems of the given type
-x TYPE Exclude filesystems of the given type
-l Show only local filesystems (exclude network mounts)

Examples:

Command Description
df -t ext4 Show only ext4 filesystems
df -x tmpfs Exclude tmpfs from output
df -Th Show type column with human-readable sizes

Inode Usage

Show inode counts instead of block usage.

Command Description
df -i Show inode usage for all filesystems
df -ih Show inode usage with human-readable counts
df -i /path Show inode usage for a specific filesystem
Column Description
Inodes Total number of inodes
IUsed Inodes in use
IFree Inodes available
IUse% Percentage of inodes used

Custom Output Fields

Select specific fields with --output.

Field Description
source Filesystem device
fstype Filesystem type
size Total size
used Space used
avail Space available
pcent Percentage used
iused Inodes used
iavail Inodes available
ipcent Inode percentage used
target Mount point

Example: df --output=source,size,used,avail,pcent,target -h
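The --output selectors make df easy to script. Below is a sketch of a simple disk-space alarm; it assumes GNU df (coreutils), and the threshold value is illustrative:

```shell
# Sketch: warn when any local filesystem crosses a usage threshold.
THRESHOLD=90    # illustrative; pick a value that suits your environment
df -l --output=pcent,target | tail -n +2 | while read -r pcent target; do
  usage=${pcent%\%}    # strip the trailing % sign
  if [ "$usage" -ge "$THRESHOLD" ] 2>/dev/null; then
    echo "WARNING: $target is at ${usage}% capacity"
  fi
done
```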

Related Guides

Use these references for deeper disk usage workflows.

Guide Description
How to Check Disk Space in Linux Using the df Command Full df guide with practical examples
Du Command in Linux Check disk usage for directories and files

"Replace Everyone Below the Male Second Lead with AI" Has the Whole Internet Arguing, but the People Who Should Panic Haven't Made a Sound

By Selina
March 20, 2026, 17:12

"From now on, roles below the male second lead won't need real actors. They'll all be done with AI."

This message threw China's entertainment world into chaos yesterday. It appeared to come from a film and TV "industry insider" who shared not only the industry's dire situation but also this bombshell of a claim.

The chain of reasoning sounds smooth: platforms only care about traffic → supporting actors and extras bring no traffic → people with no traffic are no different from green-screen cutout material → so AI can replace them entirely.

The comments quickly split into two camps: one side panicking that "even the male second lead isn't safe," the other thrilled that "AI is finally coming for the 2.08-million-a-day club."

Both sides are probably overthinking it.

What platforms want has never been to "eliminate people"

Take the mainstream video platforms, iQIYI, Youku, Tencent Video and the rest. They make money mainly from two things: selling memberships and selling ads.

▲ iQIYI's 2025 revenue. Image from: Morketing

Why do members pay? To follow shows, follow on-screen couples, follow actors. Why do advertisers spend? Because traffic follows people: whoever can trend is whoever advertisers will back.

In the platforms' monetization logic, then, content is far more replaceable than people. Hundreds of dramas are queued up on a platform at any moment, and overlapping premises and near-identical genres are the norm. Swap the name on one costume-romance IP drama and viewers might not be able to tell it from the next.

Actors are different. Liu Yifei is Liu Yifei: her fans, her buzz, and her ability to sell products cannot be swapped out. That is how content fell to second place while people became the "irreplaceable variable."

So in the platforms' monetization formula, content can be copied in bulk, but the people inside it cannot. Whether a drama is worth money depends first on who stars in it, whether they can generate topics, and whether they can trend. The content itself is just the container that gets those people seen.

Follow that logic and ask what cost the platforms really want to squeeze. It is not necessarily the actors. More often, it is everything other than the actors.

A drama's budget can be roughly split in two. The first part is "people costs": actor fees, the director, the screenwriter (embodied in the IP). These are the links that directly produce traffic, and cutting them means cutting the platform's main source of income.

The second part is "costs other than people": sets and art direction, costumes, makeup and props, post-production VFX, marshaling crowds of extras for wide shots, location shoots, even the script and story itself. These are the infrastructure that gets the traffic stars seen.

AI genuinely has room to substitute in this second part, in the name of extreme cost-cutting and efficiency. Virtual backgrounds can replace physical set construction, AI-assisted VFX can cut post-production labor, and AI crowd fill can replace the logistics costs of hundreds of extras in distant shots. All of this is happening, and it will accelerate.

But supporting roles such as the male and female second leads do not fall within that "infrastructure."

Supporting actors are upstream of traffic

The role supporting actors play in production is far more complex than "one more person in the frame."

First, the physical needs of performance. Actors do not monologue at a camera; they need scene partners to feed them emotion, set the rhythm, and anchor eye contact. Replace every supporting role with AI composited in post, and the leads have to act against empty air. Anyone with on-set experience knows how enormous the gap is between playing to a green screen and playing to a real person. That is not something technical progress can bridge; it is a fundamental problem of acting methodology.

Second, the talent pipeline. Today's second leads are tomorrow's first leads. Zhao Liying went from supporting parts to being the backbone of 《花千骨》 (The Journey of Flower); Yang Zi started as a child star and played countless roles before reaching 《香蜜沉沉烬如霜》 (Ashes of Love).

The whole traffic ecosystem depends on a pipeline that keeps delivering new faces. Performers start in small roles, gradually become known to the public, and then grow into new top stars who can "carry a show."

Replacing everyone below the male second lead with AI cuts that pipeline. Three to five years later, as today's top stars fade and no new generation follows, the platforms' own traffic pool runs dry.

Finally, there is audience expectation management. People watch live-action dramas precisely because they are live action. An AI supporting character that is even slightly off, with a delayed expression, vacant eyes, or movement that is not quite natural, will be magnified to the maximum by viewers.

▲ An AI live-action drama

Audiences of animation and comic-style dramas can accept non-realistic characters because their expectations are different from the start. But mix AI characters into a live-action drama and the audience's tolerance threshold is extremely low, and the risk of breaking the illusion extremely high.

A ready-made example is sitting right there. Last year Hollywood produced its first "AI actor," Tilly Norwood, built by the British production company Particle6 and billed as the start of a new era of AI performance.

And the result? The actors' union and virtually the entire industry pushed back almost unanimously, and to this day not a single serious film or TV production actually uses her. Last week she released the first "work" since her debut, a music video, 《Take the Lead》: an AI-generated face sings, and with motion-capture assistance her movements and expressions are exaggerated to the point of looking almost sinister.

As for how it turned out, Gizmodo's headline put it most bluntly: "terrible." An AI actor backed by heavy investment and a professional team spent the better part of a year to deliver, as her report card, one widely ridiculed music video. That is probably the most honest progress report on the "AI replaces human actors" road.

Even with Hollywood's demonstration falling flat, imitators are following anyway. Yesterday, 耀客传媒 announced two "cyber actors," 秦凌岳 (Qin Lingyue) and 林汐颜 (Lin Xiyan), opened social media accounts for them, and announced that the pair will star in the AIGC series 《秦岭青铜诡事录》, planned for launch in April.

Netizens: I don't dare open my eyes. I hope this is a hallucination.

What is AI actually replacing in film and TV?

The answer is already on the table: where AI can contribute most is "the other links."

AI generates virtual sets, replacing physical construction and location shoots. AI assists compositing and visual effects, cutting production time and labor costs. AI fills in crowds for distant wide shots, and note that it is distant shots, the kind where viewers never look closely at faces. AI assists script development and storyboard previews, speeding up pre-production.

All of this points in the same direction: the platform's ideal state is "one top-traffic star + everything else at minimum cost = content with the highest margin."

AI is the tool for reaching that ideal. What it compresses is the cost of "everything else." It does not replace the top stars themselves, and it will not replace the supporting actors who could become the next top stars.

While everyone debates whether AI will replace human actors, a genuinely new track has quietly started running: AI-native content.

AI comic dramas, AI short films, AI interactive narratives: this content involves no human actors from the very beginning. Viewers click in already knowing "no real people performed this," so the psychological expectations are completely different and the uncanny-valley problem never arises. With AI tools, a single creator can handle the entire pipeline from script to finished film, and the barrier to production drops off a cliff.

▲ The AI short film 《霍去病》

This is AI's real point of impact on the film and TV industry. It is not replacing anyone on the old track; it is opening a new road alongside it.

So rather than worrying about AI taking actors' jobs, watch how AI is setting up its own shop. Live-action film and TV will not disappear any time soon; as long as human audiences are willing to pay for human performances, the business keeps turning.

But AI-native content is creating a brand-new market, one that needs no soundstage, no extras, and perhaps no actors at all.

That is the real sea change. And it has already begun.

# Follow ifanr's official WeChat account, 爱范儿 (WeChat ID: ifanr), for more great content, delivered first.

jobs Command in Linux: List and Manage Background Jobs

The jobs command is a Bash built-in that lists all background and suspended processes associated with the current shell session. It shows the job ID, state, and the command that started each job.

This article explains how to use the jobs command, what each option does, and how job specifications work for targeting individual jobs.

Syntax

The general syntax for the jobs command is:

txt
jobs [OPTIONS] [JOB_SPEC...]

When called without arguments, jobs lists all active jobs in the current shell.

Listing Jobs

To see all background and suspended jobs, run jobs without any arguments:

Terminal
jobs

If you have two processes (one running and one stopped), the output will look similar to this:

output
[1]- Stopped vim notes.txt
[2]+ Running ping google.com &

Each line shows:

  • The job number in brackets ([1], [2])
  • A + or - sign indicating the current and previous job (more on this below)
  • The job state (Running, Stopped, Done)
  • The command that started the job

To include the process ID (PID) in the output, use the -l option:

Terminal
jobs -l
output
[1]- 14205 Stopped vim notes.txt
[2]+ 14230 Running ping google.com &

The PID is useful when you need to send a signal to a process with the kill command.

Job States

Jobs reported by the jobs command can be in one of these states:

  • Running — the process is actively executing in the background.
  • Stopped — the process has been suspended, typically by pressing Ctrl+Z.
  • Done — the process has finished. This state appears once and is then cleared from the list.
  • Terminated — the process was killed by a signal.

When a background job finishes, the shell displays its Done status the next time you press Enter or run a command.

Job Specifications

Job specifications (job specs) let you target a specific job when using commands like fg, bg, kill, or disown. They always start with a percent sign (%):

  • %1, %2, … — refer to a job by its number.
  • %% or %+ — the current job (the most recently backgrounded or suspended job, marked with +).
  • %- — the previous job (marked with -).
  • %string — the job whose command starts with string. For example, %ping matches a job started with ping google.com.
  • %?string — the job whose command contains string anywhere. For example, %?google also matches ping google.com.

Here is a practical example. Start two background jobs:

Terminal
sleep 300 &
ping -c 100 google.com > /dev/null &

Now list them:

Terminal
jobs
output
[1]- Running sleep 300 &
[2]+ Running ping -c 100 google.com > /dev/null &

You can bring the sleep job to the foreground by its number:

Terminal
fg %1

Or by the command prefix:

Terminal
fg %sleep

Both are equivalent and bring job [1] to the foreground.

Options

The jobs command accepts the following options:

  • -l — List jobs with their process IDs in addition to the normal output.
  • -p — Print only the PID of each job’s process group leader. This is useful for scripting.
  • -r — List only running jobs.
  • -s — List only stopped (suspended) jobs.
  • -n — Show only jobs whose status has changed since the last notification.

To show only running jobs:

Terminal
jobs -r

To get just the PIDs of all jobs:

Terminal
jobs -p
output
14205
14230
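A common follow-up is feeding those PIDs straight to kill to stop every background job at once. A minimal sketch, with sleep standing in for real work:

```shell
# Sketch: terminate all background jobs of the current shell in one command.
sleep 300 &
sleep 400 &
kill $(jobs -p)    # jobs -p expands to the PIDs of both sleeps
wait               # reap the terminated jobs
echo "All background jobs terminated."
```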

Using jobs in Interactive Shells

The jobs command is primarily useful in an interactive shell, where job control is enabled. In a non-interactive script, job control is off by default: fg, bg, and job suspension are unavailable, so tracking background processes by PID is more reliable than relying on job specs.

If you need to manage background work in a script, track process IDs with $! and use wait with the PID instead of a job spec:

sh
# Start a background task
long_task &
task_pid=$!

# Poll until it finishes
while kill -0 "$task_pid" 2>/dev/null; do
  echo "Task is still running..."
  sleep 5
done

echo "Task complete."

To wait for several background tasks, store their PIDs and pass them to wait:

sh
task_a &
pid_a=$!

task_b &
pid_b=$!

task_c &
pid_c=$!

wait "$pid_a" "$pid_b" "$pid_c"
echo "All tasks complete."

In an interactive shell, you can still wait for a job by its job spec:

Terminal
wait %1

Quick Reference

Command Description
jobs List all background and stopped jobs
jobs -l List jobs with PIDs
jobs -p Print only PIDs
jobs -r List only running jobs
jobs -s List only stopped jobs
fg %1 Bring job 1 to the foreground
bg %1 Resume job 1 in the background
kill %1 Terminate job 1
disown %1 Remove job 1 from the job table
wait Wait for all background jobs to finish

FAQ

What is the difference between jobs and ps?
jobs shows only the processes started from the current shell session. The ps command lists all processes running on the system, regardless of which shell started them.

Why does jobs show nothing even though processes are running?
jobs only tracks processes started from the current shell. If you started a process in a different terminal or via a system service, it will not appear in jobs. Use ps aux to find it instead.

How do I stop a background job?
Use kill %N where N is the job number. For example, kill %1 sends SIGTERM to job 1. If the job does not stop, use kill -9 %1 to force-terminate it.

What do the + and - signs mean in jobs output?
The + marks the current job, the one that fg and bg will act on by default. The - marks the previous job. When the current job finishes, the previous job becomes the new current job.

How do I keep a background job running after closing the terminal?
Use nohup before the command, or disown after backgrounding it. For long-running interactive sessions, consider screen or tmux.

Conclusion

The jobs command lists all background and suspended processes in the current shell session. Use it together with fg, bg, and kill to control jobs interactively. For a broader guide on running and managing background processes, see How to Run Linux Commands in the Background.

top Cheatsheet

Startup Options

Common command-line flags for launching top.

Command Description
top Start top with default settings
top -d 5 Set refresh interval to 5 seconds
top -n 3 Exit after 3 screen updates
top -u username Show only processes owned by a user
top -p 1234,5678 Monitor specific PIDs
top -b Batch mode (non-interactive, for scripts and logging)
top -b -n 1 Print a single snapshot and exit
top -H Show individual threads instead of processes

Interactive Navigation

Key commands available while top is running.

Key Description
q Quit top
h or ? Show help screen
Space Refresh the display immediately
d or s Change the refresh interval
k Kill a process (prompts for PID and signal)
r Renice a process (change priority)
u Filter by user
n or # Set the number of displayed processes
W Save current settings to ~/.toprc

Sorting

Change the sort column interactively.

Key Description
P Sort by CPU usage (default)
M Sort by memory usage
N Sort by PID
T Sort by cumulative CPU time
R Reverse the current sort order
< / > Move the sort column left / right
F or O Open the field management screen to pick a sort column

Display Toggles

Show or hide parts of the summary and task list.

Key Description
l Toggle the load average line
t Cycle through CPU summary modes (bar, text, off)
m Cycle through memory summary modes (bar, text, off)
1 Toggle per-CPU breakdown (one line per core)
H Toggle thread view (show individual threads)
c Toggle between command name and full command line
V Toggle forest (tree) view
x Highlight the current sort column
z Toggle color output

Filtering and Searching

Narrow the process list while top is running.

Key Description
u Show only processes for a specific user
U Show processes by effective or real user
o / O Add a filter (e.g., COMMAND=nginx or %CPU>5.0)
Ctrl+O Show active filters
= Clear all filters for the current window
L Search for a string in the display
& Find next occurrence of the search string

Summary Area Fields

Key metrics in the header area.

Field Description
load average System load over 1, 5, and 15 minutes
us CPU time in user space
sy CPU time in kernel space
ni CPU time for niced (reprioritized) processes
id CPU idle time
wa CPU time waiting for I/O
hi / si Hardware / software interrupt time
st CPU time stolen by hypervisor (VMs)
MiB Mem Total, free, used, and buffer/cache memory
MiB Swap Total, free, used swap, and available memory

Batch Mode and Logging

Use top in scripts or for capturing snapshots.

Command Description
top -b -n 1 Print one snapshot to stdout
top -b -n 5 -d 2 > top.log Log 5 snapshots at 2-second intervals
top -b -n 1 -o %MEM Single snapshot sorted by memory
top -b -n 1 -u www-data Snapshot of one user’s processes
top -b -n 1 -p 1234 Snapshot of a specific PID
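Batch mode output pipes cleanly into standard text tools. For example, this sketch captures only the five-line summary header (uptime and load average, tasks, CPU, and memory) from a single snapshot:

```shell
# Sketch: grab just the summary header from one non-interactive snapshot.
top -b -n 1 | head -n 5
```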

Related Guides

Full guides for process monitoring and management.

Guide Description
top Command in Linux Full top guide with examples
ps Command in Linux List and filter processes
Kill Command in Linux Send signals to processes by PID
Linux Uptime Command Check system uptime and load average
Check Memory in Linux Inspect RAM and swap usage

Docker Compose: Define and Run Multi-Container Apps

Docker Compose is a tool for defining and running multi-container applications. Instead of starting each container separately with docker run, you describe all of your services, networks, and volumes in a single YAML file and bring the entire environment up with one command.

This guide explains how Docker Compose V2 works, walks through a practical example with a SvelteKit development server and PostgreSQL database, and covers the most commonly used Compose directives and commands.

Quick Reference

For a printable quick reference, see the Docker cheatsheet.

Command Description
docker compose up Start all services (foreground)
docker compose up -d Start all services in detached mode
docker compose down Stop and remove containers and networks
docker compose down -v Also remove named volumes
docker compose ps List running services
docker compose logs SERVICE View logs for a service
docker compose logs -f SERVICE Follow logs in real time
docker compose exec SERVICE sh Open a shell in a running container
docker compose stop Stop containers without removing them
docker compose build Build or rebuild images
docker compose pull Pull the latest images

Prerequisites

The examples in this guide require Docker with the Compose plugin installed. Docker Desktop includes Compose by default. On Linux, install the docker-compose-plugin package alongside Docker Engine.

To verify Compose is available, run:

Terminal
docker compose version
output
Docker Compose version v2.x.x

Compose V2 runs as docker compose (with a space), not docker-compose. The old V1 binary is end-of-life and not covered in this guide.

The Compose File

Modern Docker Compose prefers a file named compose.yaml, though docker-compose.yml is still supported for compatibility. In this guide, we will use docker-compose.yml because many readers still recognize that name first.

A Compose file describes your application’s environment using three top-level keys:

  • services — defines each container: its image, ports, volumes, environment variables, and dependencies
  • volumes — declares named volumes that persist data across container restarts
  • networks — defines custom networks for service communication (Compose creates a default network automatically)

Compose V2 does not require a version: field at the top of the file. If you see version: '3' in older guides, that is legacy syntax kept for backward compatibility, not something new Compose files need.

YAML uses indentation to define structure. Use spaces, not tabs.

Setting Up the Example

In this example, we will run a SvelteKit development server alongside a PostgreSQL 16 database using Docker Compose.

Start by creating a new SvelteKit project. The npm create command launches an interactive wizard — select “Skeleton project” when prompted, then choose your preferred options for TypeScript and linting:

Terminal
npm create svelte@latest myapp
cd myapp

If you already have a Node.js project, skip this step and use your project directory instead.

Next, create a docker-compose.yml file in the project root:

docker-compose.ymlyaml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    ports:
      - "5173:5173"
    environment:
      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    command: sh -c "npm install && npm run dev -- --host"

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  node_modules:

Compose V2 automatically reads a .env file in the project directory and substitutes ${VAR} references in the compose file. Create a .env file to define the database password:

.envtxt
POSTGRES_PASSWORD=changeme
Warning
Do not commit .env to version control. Add it to your .gitignore file to keep credentials out of your repository.

The password above is for local development only. We will walk through each directive in the compose file in the next section.

Understanding the Compose File

Let us go through each directive in the compose file.

services

The services key is the core of any compose file. Each entry under services defines one container. The key name (app, db) becomes the service name, and Compose also uses it as the hostname for inter-service communication — so the app container can reach the database at db:5432.

image

The image directive tells Compose which Docker image to pull. Both node:20-alpine and postgres:16-alpine are pulled from Docker Hub if they are not already present on your machine. The alpine variant uses Alpine Linux as a base, which keeps image sizes small.

working_dir

The working_dir directive sets the working directory inside the container. All commands run in /app, which is where we mount the project files.

volumes

The app service uses two volume entries:

txt
- .:/app
- node_modules:/app/node_modules

The first entry is a bind mount. It maps the current directory on your host (.) to /app inside the container. Any file you edit on your host is immediately reflected inside the container, which is what enables hot-reload during development.

The second entry is a named volume. Without it, the bind mount would overwrite /app/node_modules with whatever is in your host directory — which may be empty or incompatible. The node_modules named volume tells Docker to keep a separate copy of node_modules inside the container so the bind mount does not interfere with it.

The db service uses a named volume for its data directory:

txt
- postgres_data:/var/lib/postgresql/data

This ensures that your database data persists across container restarts. When you run docker compose down, the postgres_data volume is kept. Only docker compose down -v removes it.

Named volumes must be declared at the top level under volumes:. Both postgres_data and node_modules are listed there with no additional configuration, which tells Docker to manage them using its default storage driver.

ports

The ports directive maps ports between the host machine and the container in "HOST:CONTAINER" format. The entry "5173:5173" exposes the Vite development server so you can open http://localhost:5173 in your browser.

environment

The environment directive sets environment variables inside the container. The app service receives the DATABASE_URL connection string, which a SvelteKit application can read to connect to the database.

Notice the ${POSTGRES_PASSWORD} syntax. Compose reads this value from the .env file and substitutes it before starting the container. This keeps credentials out of the compose file itself.

depends_on

By default, depends_on only controls the order in which containers start — it does not wait for a service to be ready. Compose V2 supports a long-form syntax with a condition key that changes this behavior:

yaml
depends_on:
  db:
    condition: service_healthy

With condition: service_healthy, Compose waits until the db service passes its healthcheck before starting app. This prevents the Node.js process from trying to connect to PostgreSQL before the database is accepting connections.

command

The command directive overrides the default command defined in the image. Here it runs npm install to install dependencies, then starts the Vite dev server with --host to bind to 0.0.0.0 inside the container (required for Docker to forward the port to your host):

txt
sh -c "npm install && npm run dev -- --host"

Running npm install on every container start is slow but convenient for development — you do not need to install dependencies manually before running docker compose up. For production, use a multi-stage Dockerfile to pre-install dependencies and build the application.
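As a contrast with the dev setup above, here is what such a production build might look like. This is a minimal sketch, not this project's actual Dockerfile: it assumes the SvelteKit Node adapter, whose build output lands in build/ by default, and an illustrative port.

```dockerfile
# Sketch: multi-stage production build. Stage names, the port, and output
# paths are illustrative and assume the SvelteKit Node adapter's defaults.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci                      # install all deps for the build
COPY . .
RUN npm run build               # emits the server bundle into build/

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/build ./build
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev           # runtime deps only
EXPOSE 3000
CMD ["node", "build"]
```

The first stage installs everything needed to compile the app; the final image carries only the build output and production dependencies, so nothing is installed at container start.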

healthcheck

The healthcheck directive defines a command Compose runs inside the container to determine whether the service is ready. The pg_isready utility checks whether PostgreSQL is accepting connections:

yaml
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5

Compose runs this check every 5 seconds. Once the service passes its first check, it is marked as healthy and any services using condition: service_healthy are allowed to start. If the check fails 5 consecutive times, the service is marked as unhealthy.
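If the database needs a warm-up window before its first successful check, the Compose file format also supports a start_period key: failures during that window do not count against the retries limit. A sketch with illustrative values:

```yaml
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 5s
  timeout: 5s
  retries: 5
  start_period: 10s   # grace period before failed checks start counting
```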

Managing the Application

Run all commands from the project directory — the directory where your docker-compose.yml is located.

Starting Services

To start all services and stream their logs to your terminal, run:

Terminal
docker compose up

Compose pulls any missing images, creates the network, starts the db container, waits for it to pass its healthcheck, then starts the app container. Press Ctrl+C to stop.

To start in detached mode and run the services in the background:

Terminal
docker compose up -d

Viewing Status and Logs

To list running services and their current state:

Terminal
docker compose ps
output
NAME IMAGE COMMAND SERVICE STATUS PORTS
myapp-app-1 node:20-alpine "docker-entrypoint.s…" app Up 2 minutes 0.0.0.0:5173->5173/tcp
myapp-db-1 postgres:16-alpine "docker-entrypoint.s…" db Up 2 minutes 5432/tcp

To view the logs for a specific service:

Terminal
docker compose logs app

To follow the logs in real time (like tail -f):

Terminal
docker compose logs -f app

Press Ctrl+C to stop following.

Running Commands Inside a Container

To open an interactive shell inside the running app container:

Terminal
docker compose exec app sh

This is useful for running one-off commands, inspecting the file system, or debugging. Type exit to leave the shell.

Stopping and Removing Services

To stop running containers without removing them — preserving named volumes and the network:

Terminal
docker compose stop

To start them again after stopping:

Terminal
docker compose start

To stop containers and remove them along with the network:

Terminal
docker compose down

Named volumes (postgres_data, node_modules) are preserved. Your database data is safe.

To also remove all named volumes — this deletes your database data:

Terminal
docker compose down -v

Use this when you want a completely clean environment.

Common Directives Reference

The following directives are not used in the example above but are commonly needed in real projects.

restart

The restart directive controls what Compose does when a container exits:

yaml
services:
  app:
    restart: unless-stopped

The available policies are:

  • no — do not restart (default)
  • always — always restart, including on system reboot
  • unless-stopped — restart unless the container was explicitly stopped
  • on-failure — restart only when the container exits with a non-zero status

Use unless-stopped for long-running services in single-host deployments.

build

Instead of pulling a pre-built image, build tells Compose to build the image from a local Dockerfile:

yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile

context is the directory Compose sends to the Docker daemon as the build context. dockerfile specifies the Dockerfile path relative to context. When both image and build are present, Compose builds the image and tags it with the given image name.

env_file

The env_file directive injects environment variables into a container from a file at runtime:

yaml
services:
  app:
    env_file:
      - .env.local

This is different from Compose’s native .env substitution. The .env file at the project root is read by Compose itself to substitute ${VAR} references in the compose file. The env_file directive injects variables directly into the container’s environment. Both can be used together.

networks

Compose creates a default bridge network connecting all services automatically. You can define custom networks to isolate groups of services or control how containers communicate:

yaml
services:
  app:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
  backend:

A service only communicates with other services on the same network. In this example, app can reach db because both share the backend network. A service attached only to frontend — not backend — has no route to db.

profiles

Profiles let you define optional services that only start when explicitly requested. This is useful for development tools, debuggers, or admin interfaces that you do not want running all the time:

yaml
services:
  app:
    image: node:20-alpine

  adminer:
    image: adminer
    profiles:
      - tools
    ports:
      - "8080:8080"

Running docker compose up starts only app. To also start adminer, pass the profile flag:

Terminal
docker compose --profile tools up

Troubleshooting

Port is already in use
Another process on your host is using the same port. Change the host-side port in the ports: mapping. For example, change "5173:5173" to "5174:5173" to expose the container on port 5174 instead.

Service cannot connect to the database
If you are not using condition: service_healthy, depends_on only ensures the db container starts before app — it does not wait for PostgreSQL to be ready to accept connections. Add a healthcheck to the db service and set condition: service_healthy in depends_on as shown in the example above.

node_modules is empty inside the container
The bind mount .:/app maps your host directory to /app, which overwrites /app/node_modules with whatever is on your host. If your host directory has no node_modules, the container sees none either. Fix this by adding a named volume entry node_modules:/app/node_modules as shown in the example. Docker preserves the named volume’s contents and the bind mount does not overwrite it.

FAQ

What is the difference between docker compose down and docker compose stop?
docker compose stop stops the running containers but leaves them on disk along with the network and volumes. You can restart them with docker compose start. docker compose down removes the containers and the network. Named volumes are kept unless you add the -v flag.

How do I view logs for a specific service?
Run docker compose logs SERVICE, replacing SERVICE with the service name defined in your compose file (for example, docker compose logs app). To follow logs in real time, add the -f flag: docker compose logs -f app.

How do I pass secrets without hardcoding them in the Compose file?
Create a .env file in the project directory with your credentials (for example, POSTGRES_PASSWORD=changeme) and reference them in the compose file using ${VAR} syntax. Compose reads the .env file automatically. Add .env to your .gitignore so credentials are never committed to version control.

Can I use Docker Compose in production?
Compose works well for single-host deployments — it is simpler to operate than Kubernetes when you only have one server. Use restart: unless-stopped to keep services running after reboots, and keep secrets in environment variables rather than hardcoded values. For multi-host deployments, look at Docker Swarm or Kubernetes.

What is the difference between a bind mount and a named volume?
A bind mount (./host-path:/container-path) maps a specific directory from your host into the container. Changes on either side are immediately visible on the other. A named volume (volume-name:/container-path) is managed entirely by Docker — it persists data independently of the host directory structure and is not tied to a specific path on your machine. Use bind mounts for source code (so edits take effect immediately) and named volumes for database data (so it persists reliably).

Conclusion

Docker Compose gives you a straightforward way to define and run multi-container development environments with a single YAML file. Once you are comfortable with the basics, the next step is writing custom Dockerfiles to build your own images instead of relying on generic ones.

diff Cheatsheet

Basic Usage

Compare files and save output.

Command Description
diff file1 file2 Compare two files (normal format)
diff -u file1 file2 Unified format (most common)
diff -c file1 file2 Context format
diff -y file1 file2 Side-by-side format
diff file1 file2 > file.patch Save diff to a patch file

Output Symbols

What each symbol means in the diff output.

Symbol Format Meaning
< normal Line from file1 (removed)
> normal Line from file2 (added)
- unified/context Line removed
+ unified/context Line added
! context Line changed
@@ unified Hunk header with line numbers

Directory Comparison

Compare entire directory trees.

Command Description
diff -r dir1 dir2 Compare directories recursively
diff -rq dir1 dir2 Only report which files differ
diff -rN dir1 dir2 Treat absent files as empty
diff -rNu dir1 dir2 Unified recursive diff (for patches)

Filtering

Ignore specific types of differences.

Option Description
-i Ignore case differences
-w Ignore all whitespace
-b Ignore changes in whitespace amount
-B Ignore blank lines
--strip-trailing-cr Ignore Windows carriage returns

Common Options

Option Description
-u Unified format
-c Context format
-y Side-by-side format
-U n Show n lines of context (default: 3)
-C n Show n lines of context (context format)
-W n Set column width for side-by-side
-s Report when files are identical
--color Colorize output
-q Brief output (only whether files differ)

Patch Workflow

Create and apply patches with diff and patch.

Command Description
diff -u file1 file2 > file.patch Create a patch for a single file
diff -rNu dir1 dir2 > dir.patch Create a patch for a directory
patch file1 < file.patch Apply patch to file1
patch -p1 < dir.patch Apply directory patch
patch -R file1 < file.patch Reverse (undo) a patch
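
The first and third rows of the table can be sketched as a round trip; the file names are illustrative, and the patch utility must be installed:

```shell
# Create two slightly different files.
printf 'alpha\nbeta\n'  > file1
printf 'alpha\ngamma\n' > file2

# diff exits with status 1 when the files differ, so ignore that status.
diff -u file1 file2 > file.patch || true

# Apply the patch: file1 now has the same content as file2.
patch file1 < file.patch
diff -q file1 file2 && echo "files now identical"
```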

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
     print("Starting app")
+    print("Debug mode enabled")
     run()

Lines starting with + were added. Lines starting with - were removed. Lines with no prefix are context — unchanged lines shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix:

Terminal
git diff HEAD^ HEAD

HEAD refers to the latest commit. HEAD^ is its parent. You can go further back with HEAD~2, HEAD~3, and so on.


Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.
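
The difference between the two forms is easy to see in a throwaway repository. A sketch, assuming Git 2.28+ (for init -b); the file names are illustrative:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email "you@example.com"
git config user.name  "You"

echo base > shared.txt
git add shared.txt && git commit -qm "base"

git switch -qc feature
echo feature-work >> shared.txt
git commit -qam "feature work"

git switch -q main
echo main-only > main.txt
git add main.txt && git commit -qm "main work"

# Endpoint comparison: also reports main.txt, because it exists only in main.
git diff --name-only main feature
# Merge-base comparison: only what feature itself changed since branching.
git diff --name-only main...feature
```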

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py     | 3 +++
 config.yml | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide .

export Cheatsheet

Basic Syntax

Core export command forms in Bash.

Command Description
export VAR=value Create and export a variable
export VAR Export an existing shell variable
export -p List all exported variables
export -n VAR Remove the export property
help export Show Bash help for export

Export Variables

Use export to pass variables to child processes.

Command Description
export APP_ENV=production Export one variable
PORT=8080; export PORT Define first, export later
export PATH="$HOME/bin:$PATH" Extend PATH
export EDITOR=nano Set a default editor
export LANG=en_US.UTF-8 Set a locale variable

Export for One Shell Session

These changes last only in the current shell session unless you save them in a startup file.

Command Description
export DEBUG=1 Enable a debug variable for the current session
export API_URL=https://api.example.com Set an application endpoint
export PATH="$HOME/.local/bin:$PATH" Add a per-user bin directory
echo "$DEBUG" Verify that the exported variable is set
bash -c 'echo "$DEBUG"' Confirm the child shell inherits the variable
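
The difference between a plain shell variable and an exported one can be demonstrated directly; APP_DEBUG_DEMO is an arbitrary illustrative name:

```shell
APP_DEBUG_DEMO=1                     # plain shell variable, not exported
bash -c 'echo "child sees: ${APP_DEBUG_DEMO:-nothing}"'   # child sees: nothing

export APP_DEBUG_DEMO                # now inherited by child processes
bash -c 'echo "child sees: ${APP_DEBUG_DEMO:-nothing}"'   # child sees: 1
```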

Export Functions

Bash can export functions to child Bash shells.

Command Description
greet() { echo "Hello"; } Define a shell function
export -f greet Export a function
bash -c 'greet' Run the exported function in a child shell
export -nf greet Remove the export property from a function
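
Putting the rows above together, assuming the snippet itself runs under Bash (export -f is a Bash extension, not POSIX):

```shell
# Functions are not inherited by default; export -f serializes the
# function into the environment so child Bash shells can call it.
greet() { echo "Hello from an exported function"; }
export -f greet
bash -c 'greet'
```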

Remove or Reset Exports

Use these commands when you no longer want a variable inherited.

Command Description
export -n VAR Keep the variable, but stop exporting it
unset VAR Remove the variable completely
unset -f greet Remove a shell function
export -n PATH Stop exporting PATH in the current shell
env | grep '^VAR=' Check whether VAR appears in the exported environment

Make Variables Persistent

Add export lines to shell startup files when you want them loaded automatically.

File Use
~/.bashrc Interactive non-login Bash shells
~/.bash_profile Login Bash shells
/etc/environment System-wide environment variables
/etc/profile System-wide shell startup logic
source ~/.bashrc Reload the current shell after editing

Troubleshooting

Quick checks for common export problems.

Issue Check
A child process cannot see the variable Confirm you used export VAR or export VAR=value
The variable disappears in a new terminal Add the export line to ~/.bashrc or another startup file
A function is missing in a child shell Export it with export -f name and use Bash as the child shell
The shell still sees the variable after export -n export -n removes inheritance only; use unset to remove the variable completely
The variable is set but a command ignores it Check whether the program reads that variable or expects a config file instead

Related Guides

Use these guides for broader shell and environment-variable workflows.

Guide Description
export Command in Linux Full guide to export options and examples
How to Set and List Environment Variables in Linux Broader environment-variable overview
env cheatsheet Inspect and override environment variables
Bash cheatsheet Quick reference for Bash syntax and shell behavior

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date:   Fri Mar 7 09:15:04 2026 +0100

    Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

 src/auth.py        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/models.py      | 12 +++++++
 tests/test_auth.py | 45 +++++++++++++++++
 3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
*   9c8b7a6 Merge branch 'dark-mode'
|\
| * 3e4f5a6 WIP: dark mode toggle
* | a3f1d2e Add user authentication module
|/
* c1d2e3f Update README with setup instructions
* 7b8c9d0 Fix login form validation

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.
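
A throwaway repository makes this concrete. A sketch, assuming Git 2.28+ (for init -b); the commit messages are illustrative:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email "you@example.com"
git config user.name  "You"

git commit -q --allow-empty -m "base"
git switch -qc feature
git commit -q --allow-empty -m "feature: add toggle"
git commit -q --allow-empty -m "feature: fix toggle"
git switch -q main
git commit -q --allow-empty -m "main: hotfix"

# Lists only the two feature commits; the hotfix on main is excluded.
git log --oneline main..feature
```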

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.

How do I find a commit by its message?
Use git log --grep="search term"; add -i for case-insensitive matching. The pattern is matched against the full commit message, body as well as subject. When you pass multiple --grep patterns, commits matching any one of them are shown; add --all-match to require all of them. Use -E when the pattern needs extended regular expressions.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit .

What Are We Really Saying When We Say "Humanities Grads Can Do AI"?

By Selina
March 16, 2026, 10:06

"Humanities grads can do AI too." "What a comeback!" On the Chinese internet, the forced pairing of "humanities" and "AI" has become a recurring fixture.

Every so often, the label gets pinned on someone and generates a brief round of traffic: either an underdog success story or material for mockery, depending on the mood of the comment section.

One Label, Three Stories

The latest case is Yang Tianrun, an AI entrepreneur with a finance background who is building a multi-agent coordination platform. Describing himself as "a humanities guy who cannot write a single line of code," he set up a group of AI agents to submit code contributions in bulk to OpenClaw, one of the hottest open-source projects on GitHub.

He wanted to test a hypothesis: can someone with no technical knowledge at all take part in a top open-source project purely by directing AI?

The result: 134 pull requests, 21 merged, 113 rejected. The first few PRs were decent and were accepted by the maintainers. But after he gave the agents an instruction to speed up, things quickly spun out of control. The agents began churning out low-quality code like an assembly line and spamming the comment threads with @-mentions urging maintainers to review. OpenClaw's admins stepped in to clean up, and GitHub subsequently changed its rules to cap PR submissions.

Notoriety is publicity too, and notoriety after fame is even better publicity. Yang was packaged as a poster child for the "humanities-grad comeback," a role he seemed happy to play. In an interview with PingWest, he put it this way:

Not knowing how to code is actually an advantage. The AI is Van Gogh, and you are a minor painter. What right do you have to tell Van Gogh which brushstroke to use?

The more you think about that, the more alarming it gets. He treats "not understanding the underlying system" as a kind of liberation: no need to know what the system is doing, only to tell it what you want. The consequence was that when his agents started mass-producing junk code, he could not even diagnose what was happening, because he never knew what he was operating in the first place.

He thought he was directing Van Gogh. He was actually driving blind in a car with no brakes, without even knowing where the brakes were supposed to be.

The discussion around the incident promptly fell into two extremes: either "humanities grads can do AI" or "humanities grads should stay away from AI"; either a heroic leap across the divide or a laughable fall into it.

If that is the full extent of our imagination about humanities people doing AI, it is a poor imagination indeed.

Why Claude Needs a Philosopher

As we have written before, Anthropic's office has a card-carrying humanities scholar deeply involved in building Claude. Not testing whether it can write code, not checking its math, but holding long conversations with it about values, about fine points of phrasing, about how it should express itself in the face of uncertainty.

Amanda Askell, Scottish, 37 this year. Her career path is itself an unusual story: as an undergraduate she started in fine art and philosophy, then moved to pure philosophy, earned a BPhil at Oxford and a PhD in philosophy at NYU. Her doctoral research was on the Pareto principle in infinite ethics: what rules should ethical rankings follow when infinitely many moral agents, or an infinite time horizon, are involved.

That sounds like the academic field furthest from Silicon Valley, yet she went on to join OpenAI's policy team and then Anthropic's alignment team. Since 2021 she has led Anthropic's character alignment work, which focuses on shaping how Claude converses with people, how it expresses a position under uncertainty, and how it judges conflicts between values. In 2024 she made the TIME100 AI list. The Wall Street Journal described her day-to-day work as "learning Claude's reasoning patterns and correcting its behavioral quirks with prompts more than 100 pages long." She is said to be the human who has talked with Claude more than anyone else on the planet.

Why does an AI company need a philosopher for this? The answer hides in some very concrete technical choices.

In January this year, Anthropic released an 80-page document dubbed Claude's "constitution." The media focused on the speculation at the end about AI consciousness; the boss, Dario Amodei, has of course dropped hints in that direction himself.

More noteworthy is the document's underlying logic: teaching an AI to understand why it should behave a certain way works better than telling it how to behave. That is a technical judgment, namely that internalized values produce more reliable behavior than rule-following, and the intellectual foundation of that judgment comes from someone trained in fine art and philosophy.

Amanda's case answers a question: can knowledge from disciplines dismissed as "useless" become a core capability of a technical system? The answer is not just yes; without her philosophical training, Claude's alignment problem could not be solved by existing engineering methods.

A Discipline Renamed

If Amanda's story shows that training in certain disciplines filed under "humanities" can be a core AI capability, Junyang Lin's story makes a bigger point: an entire discipline has been running at the bottom of the large-model stack all along.

After Lin left Tongyi Qianwen (Qwen), Chinese internet coverage kept repeating the same line: he has a background in applied linguistics. A few retellings later, that had morphed into "he is a humanities grad."

It is the same label that was pinned on Yang Tianrun, but here it is badly distorted.

Lin studied linguistics, an umbrella discipline whose branches cover language teaching, language policy, and translation studies, and also computational linguistics. Computational linguistics, you could say, is the discipline that natural language processing (NLP) was born from.

Chomsky introduced formal grammars in the 1950s, a theoretical tool that directly spawned the syntactic parsing techniques of early NLP. Daniel Jurafsky and Christopher Manning, authors of the two most-cited textbooks in NLP, both came out of linguistics.

▲ Noam Chomsky

In other words, "a linguistics graduate doing NLP" is like "a physics graduate doing chip design": an orthodox path, not a crossover.

The sense of surprise is entirely manufactured by the Chinese context. The institutional inertia of the gaokao's arts/science split stuffed "linguistics" into the "humanities" slot of the popular mental model. But the core methodology of linguistics, formalization, statistical modeling, and corpus annotation, is engineering thinking at heart. Lin's collaborators at Peking University, Xu Sun and Qi Su, are both NLP researchers; when he joined DAMO Academy in 2019, he joined its NLP team. This was never the story of a humanities grad stumbling into technology. It never was.

More worth unpacking than "Lin doesn't really count as a humanities grad" is the role linguistics actually plays inside the large-model stack. It runs far deeper, and far more invisibly, than most people assume.

Take tokenization. The first step in any language model's handling of text is cutting the input into basic units the model can process. For English, spaces provide natural word boundaries, so it looks simple. Chinese has no spaces, and the use of every punctuation mark can change what a sentence means.

Should 「我在北京大学读书」 ("I study at Peking University") be segmented as 我/在/北京/大学/读书 or as 我/在/北京大学/读书? This is not an engineering question with one right answer; it depends on how you understand Chinese word structure and semantic units.

In late 2024, researchers published a paper specifically on improving the Arabic tokenization efficiency of Qwen models, because general-purpose schemes degrade markedly on such languages. The Qwen series' multilingual performance does not come from treating every language as a variant of English; it comes from design choices grounded in the structural differences between languages.

Or take feedback alignment. In the RLHF pipeline, annotators judge which of two model answers is "better." That judgment sounds subjective, but behind it sits a framework linguistics has studied for decades: pragmatics.

When annotators evaluate what makes a "good answer," they are in effect judging the cooperative principle (does the answer give enough information, but not too much?), conversational implicature (does it capture what the user actually wanted to ask, not just the literal question?), and contextual appropriateness (is saying it this way suitable in this situation?).

"Helpful, Harmless, Honest," the widely used alignment criteria, is essentially pragmatics' basic principles translated into engineering.

Lin's academic trajectory also shows a distinctly linguistic research style. OFA (One For All), which he led, appeared at ICML 2022, a top machine-learning venue, and has been cited nearly 1,500 times. Its core idea was not to build a dedicated solution per task, but to unify cross-modal tasks such as image generation, visual grounding, image captioning, and text classification in one sufficiently general sequence-to-sequence framework.

From OFA to Qwen-VL (over 2,200 citations) to Qwen2.5 and the latest 3.5, one thread runs throughout: rather than invent a special-purpose solution for each problem, find a general framework good enough that every problem can be solved inside it.

Cover the most phenomena with the fewest rules: that has been linguistics' core pursuit for decades. The whole academic ambition of generative grammar is to find a finite system of rules that can generate unbounded linguistic expression. OFA's architectural philosophy is isomorphic to it: writing a dedicated rule set for every linguistic phenomenon is not realistic; look for an underlying framework that unifies them.

Lin is good at large models not because a linguistics background "can also" do AI, but because linguistic training shaped a particular academic taste, a preference for unification and formalization. In the large-model era, that taste happens to be a core competitive advantage.

Invisible Foundations, Visible Demand

Three people, one label, three completely different structures.

Yang Tianrun did not understand the underlying system, treated not understanding as an advantage, and lost control. That is the hollow version of "humanities grads doing AI": the label generated traffic, but no disciplinary training was doing any work. His story shows precisely what happens when "humanities grad" is nothing but a marketing label.

Amanda Askell's philosophical training constitutes the core methodology of the alignment problem. Without her, Claude would not be Claude. Her story answers whether knowledge deemed "useless" can become the core capability of a technical system: not only can it, it is irreplaceable.

Junyang Lin's linguistic training constitutes the hidden infrastructure of the large-model stack. His "humanities background" was never a crossover; it is the orthodox path. His story asks how "hidden" the humanities' contribution to frontier technology really is, and whether it is now becoming visible.

And the ultimate question is not "can humanities grads do AI" but whether we can grasp one point: judging knowledge and disciplines by whether they look "useful" on the surface is obsolete.

As large models move from being merely usable toward being reliable and controllable, the value of these disciplines filed under "humanities" is expanding, not shrinking. The stronger the model, the more it needs precise evaluation frameworks to diagnose where and why it fails, the more it needs an understanding of the complexity of language and meaning to design better training data, and the more it needs disciplinarily informed judgment on alignment.

The "humanities-grad comeback" narrative, whether in praise or in ridicule, obscures the shift actually underway: invisible foundations are turning into visible demand.


git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, and Git will refuse to switch branches if doing so would overwrite them. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.
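
The full save-and-restore cycle can be tried in a throwaway repository. A sketch, assuming Git 2.28+ (for init -b); the file name is illustrative:

```shell
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email "you@example.com"
git config user.name  "You"

echo one > notes.txt
git add notes.txt && git commit -qm "initial"

echo two >> notes.txt        # uncommitted change to a tracked file
git stash push -q            # working tree is clean again
git status --porcelain       # prints nothing

git stash pop -q             # change restored, stash entry removed
git status --porcelain       # notes.txt shows as modified again
```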

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for normal day-to-day use and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide .

env Cheatsheet

Basic Syntax

Core env command forms.

Command Description
env Print the current environment
env --help Show available options
env --version Show the installed env version
env -0 Print variables separated with NUL bytes

Inspect Environment Variables

Use env with filters to inspect specific variables.

Command Description
env | sort Print the environment sorted alphabetically
env | grep '^PATH=' Show only the PATH variable
env | grep '^HOME=' Show only the HOME variable
env | grep '^LANG=' Show only the LANG variable

Run Commands with Temporary Variables

Set variables for one command without changing the current shell session.

Command Description
env VAR=value command Run one command with a temporary variable
env VAR1=dev VAR2=1 command Set multiple temporary variables
env PATH=/custom/bin:$PATH command Override PATH for one command
env LANG=C command Run a command with the C locale
env HOME=/tmp bash Start a shell with a temporary home directory

Clean or Remove Variables

Start with a minimal environment or remove selected variables.

Command Description
env -i command Run a command with an empty environment
env -i PATH=/usr/bin:/bin bash --noprofile --norc Start a mostly clean shell with a minimal PATH
env -u VAR command Run a command without one variable
env -u http_proxy command Remove a proxy variable for one command
env -i VAR=value command Run a command with only the variables you set explicitly

Common Variables

These variables are often inspected or overridden with env.

Variable Description
PATH Directories searched for commands
HOME Current user’s home directory
USER Current user name
SHELL Default login shell
LANG Locale and language setting
TZ System timezone (e.g. America/New_York)
EDITOR Default text editor
TERM Terminal type (e.g. xterm-256color)
TMPDIR Directory for temporary files
PWD Current working directory

Troubleshooting

Quick checks for common env issues.

Issue Check
A temporary variable does not persist env VAR=value command affects only that command and its children
A command is not found after env -i Add a minimal PATH, such as /usr/bin:/bin
Output is hard to parse safely Use env -0 with tools that support NUL-delimited input
A variable is still visible in the shell env does not modify the parent shell; use export or unset the variable in the shell itself
A locale-sensitive command behaves differently Check whether LANG or related locale variables were overridden

Related Guides

Use these guides for broader environment-variable workflows.

Guide Description
How to Set and List Environment Variables in Linux Full guide to listing and setting environment variables
export Command in Linux Export shell variables to child processes
Bashrc vs Bash Profile Understand shell startup files
Bash cheatsheet Quick reference for Bash syntax and variables

Python Cheatsheet

Data Types

Core Python types and how to inspect them.

Syntax Description
42, 3.14, -7 int, float literals
True, False bool
"hello", 'world' str (immutable)
[1, 2, 3] list — mutable, ordered
(1, 2, 3) tuple — immutable, ordered
{"a": 1, "b": 2} dict — key-value pairs
{1, 2, 3} set — unique, unordered
None Absence of a value
type(x) Get type of x
isinstance(x, int) Check if x is of a given type

Operators

Arithmetic, comparison, logical, and membership operators.

Operator Description
+, -, *, / Add, subtract, multiply, divide
// Floor division
% Modulo (remainder)
** Exponentiation
==, != Equal, not equal
<, >, <=, >= Comparison
and, or, not Logical operators
in, not in Membership test
is, is not Identity test
x if cond else y Ternary (conditional) expression

String Methods

Common operations on str objects. Strings are immutable — methods return a new string.

Method Description
s.upper() / s.lower() Convert to upper / lowercase
s.strip() Remove leading and trailing whitespace
s.lstrip() / s.rstrip() Remove leading / trailing whitespace
s.split(",") Split on delimiter, return list
",".join(lst) Join list elements into a string
s.replace("old", "new") Replace all occurrences
s.find("x") First index of substring (-1 if not found)
s.startswith("x") True if string starts with prefix
s.endswith("x") True if string ends with suffix
s.count("x") Count non-overlapping occurrences
s.isdigit() / s.isalpha() Check all digits / all letters
len(s) String length
s[i], s[i:j], s[::step] Index, slice, step
"x" in s Substring membership test

String Formatting

Embed values in strings.

Syntax Description
f"Hello, {name}" f-string (Python 3.6+)
f"{val:.2f}" Float with 2 decimal places
f"{val:>10}" Right-align in 10 characters
f"{val:,}" Thousands separator
f"{{name}}" Literal braces in output
"Hello, {}".format(name) str.format()
"Hello, %s" % name %-formatting (legacy)

List Methods

Common operations on list objects.

Method Description
lst.append(x) Add element to end
lst.extend([x, y]) Add multiple elements to end
lst.insert(i, x) Insert element at index i
lst.remove(x) Remove first occurrence of x
lst.pop() Remove and return last element
lst.pop(i) Remove and return element at index i
lst.index(x) First index of x
lst.count(x) Count occurrences of x
lst.sort() Sort in place (ascending)
lst.sort(reverse=True) Sort in place (descending)
lst.reverse() Reverse in place
lst.copy() Shallow copy
lst.clear() Remove all elements
len(lst) List length
lst[i], lst[i:j] Index and slice
x in lst Membership test

Dictionary Methods

Common operations on dict objects.

Method Description
d[key] Get value by key (raises KeyError if missing)
d.get(key) Get value or None if key not found
d.get(key, default) Get value or default if key not found
d[key] = val Set or update a value
del d[key] Delete a key
d.pop(key) Remove key and return its value
d.keys() View of all keys
d.values() View of all values
d.items() View of all (key, value) pairs
d.update(d2) Merge another dict into d
d.setdefault(key, val) Set key if missing, return value
key in d Check if key exists
len(d) Number of key-value pairs
{**d1, **d2} Merge dicts (Python 3.9+: d1 | d2)

Sets

Common operations on set objects.

Syntax Description
set() Empty set (use this, not {})
s.add(x) Add element
s.remove(x) Remove element (raises KeyError if missing)
s.discard(x) Remove element (no error if missing)
s1 | s2 Union
s1 & s2 Intersection
s1 - s2 Difference
s1 ^ s2 Symmetric difference
s1 <= s2 Subset check
x in s Membership test

Control Flow

Conditionals, loops, and flow control keywords.

Syntax Description
if cond: / elif cond: / else: Conditional blocks
for item in iterable: Iterate over a sequence
for i, v in enumerate(lst): Iterate with index and value
while cond: Loop while condition is true
break Exit loop immediately
continue Skip to next iteration
pass No-op placeholder
match x: / case val: Pattern matching (Python 3.10+)

Functions

Define and call functions.

Syntax Description
def func(x, y): Define a function
return val Return a value
def func(x=0): Default argument
def func(*args): Variable positional arguments
def func(**kwargs): Variable keyword arguments
lambda x: x * 2 Anonymous (lambda) function
func(y=1, x=2) Call with keyword arguments
result = func(x) Call and capture return value

File I/O

Open, read, and write files.

Syntax Description
open("f.txt", "r") Open for reading (default mode)
open("f.txt", "w") Open for writing — creates or overwrites
open("f.txt", "a") Open for appending
open("f.txt", "rb") Open for reading in binary mode
with open("f.txt") as f: Open with context manager (auto-close)
f.read() Read entire file as a string
f.read(n) Read n characters
f.readline() Read one line
f.readlines() Read all lines into a list
for line in f: Iterate over lines
f.write("text") Write string to file
f.writelines(lst) Write list of strings

Exception Handling

Catch and handle runtime errors.

Syntax Description
try: Start guarded block
except ValueError: Catch a specific exception
except (TypeError, ValueError): Catch multiple exception types
except Exception as e: Catch and bind exception to variable
else: Run if no exception was raised
finally: Always run — use for cleanup
raise ValueError("msg") Raise an exception
raise Re-raise the current exception

Comprehensions

Build lists, dicts, sets, and generators in one expression.

Syntax Description
[x for x in lst] List comprehension
[x for x in lst if cond] Filtered list comprehension
[f(x) for x in range(n)] Comprehension with expression
{k: v for k, v in d.items()} Dict comprehension
{x for x in lst} Set comprehension
(x for x in lst) Generator expression (lazy)

Useful Built-ins

Frequently used Python built-in functions.

Function Description
print(x) Print to stdout
input("prompt") Read user input as string
len(x) Length of sequence or collection
range(n) / range(a, b, step) Integer sequence
enumerate(lst) (index, value) pairs
zip(a, b) Pair elements from two iterables
map(func, lst) Apply function to each element
filter(func, lst) Filter elements by predicate
sorted(lst) Return sorted copy
sum(lst), min(lst), max(lst) Sum, minimum, maximum
abs(x), round(x, n) Absolute value, round to n places
int(x), float(x), str(x) Type conversion

Related Guides

Guide Description
Python f-Strings String formatting with f-strings
Python Lists List operations and methods
Python Dictionaries Dictionary operations and methods
Python for Loop Iterating with for loops
Python while Loop Looping with while
Python if/else Statement Conditionals and branching
How to Replace a String in Python String replacement methods
How to Split a String in Python Splitting strings into lists

Python f-Strings: String Formatting in Python 3

f-strings, short for formatted string literals, are the most concise and readable way to format strings in Python. Introduced in Python 3.6, they let you embed variables and expressions directly inside a string by prefixing it with f or F.

This guide explains how to use f-strings in Python, including expressions, format specifiers, number formatting, alignment, and the debugging shorthand introduced in Python 3.8.

Basic Syntax

An f-string is a string literal prefixed with f. Any expression inside curly braces {} is evaluated at runtime and its result is inserted into the string:

py
name = "Leia"
age = 30
print(f"My name is {name} and I am {age} years old.")
output
My name is Leia and I am 30 years old.

Any valid Python expression can appear inside the braces — variables, attribute access, method calls, or arithmetic:

py
price = 49.99
quantity = 3
print(f"Total: {price * quantity}")
output
Total: 149.97

Expressions Inside f-Strings

You can call methods and functions directly inside the braces:

py
name = "alice"
print(f"Hello, {name.upper()}!")
output
Hello, ALICE!

Ternary expressions work as well:

py
age = 20
print(f"Status: {'adult' if age >= 18 else 'minor'}")
output
Status: adult

You can also access dictionary keys and list items:

py
user = {"name": "Bob", "city": "Berlin"}
print(f"{user['name']} lives in {user['city']}.")
output
Bob lives in Berlin.

Escaping Braces

To include a literal curly brace in an f-string, double it:

py
value = 42
print(f"{{value}} = {value}")
output
{value} = 42

Multiline f-Strings

To build a multiline f-string, wrap it in triple quotes:

py
name = "Leia"
age = 30
message = f"""
Name: {name}
Age: {age}
"""
print(message)
output
Name: Leia
Age: 30

Alternatively, use implicit string concatenation to keep each line short:

py
message = (
 f"Name: {name}\n"
 f"Age: {age}"
)

Format Specifiers

f-strings support Python’s format specification mini-language. The full syntax is:

txt
{value:[fill][align][sign][width][grouping][.precision][type]}
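Several of these fields can be combined in a single specifier. The sketch below uses arbitrary placeholder values to show fill, alignment, width, and precision together, plus sign with grouping:

```py
pi = 3.14159
n = 1234567
print(f"{pi:*>12.3f}")  # fill '*', align '>', width 12, precision .3, type f
print(f"{n:+,}")        # sign '+', ',' thousands grouping
```
output
*******3.142
+1,234,567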

Floats and Precision

Control the number of decimal places with :.Nf:

py
pi = 3.14159265
print(f"{pi:.2f}")
print(f"{pi:.4f}")
output
3.14
3.1416

Thousands Separator

Use , to add a thousands separator:

py
population = 1_234_567
print(f"{population:,}")
output
1,234,567

Percentage

Use :.N% to format a value as a percentage:

py
score = 0.875
print(f"Score: {score:.1%}")
output
Score: 87.5%

Scientific Notation

Use :e for scientific notation:

py
distance = 149_600_000
print(f"{distance:e}")
output
1.496000e+08

Alignment and Padding

Use <, >, and ^ to align text within a fixed width. Optionally specify a fill character before the alignment symbol:

py
print(f"{'left':<10}|")
print(f"{'right':>10}|")
print(f"{'center':^10}|")
print(f"{'padded':*^10}|")
output
left      |
     right|
  center  |
**padded**|

Number Bases

Convert integers to binary, octal, or hexadecimal with the b, o, x, and X type codes:

py
n = 255
print(f"Binary: {n:b}")
print(f"Octal: {n:o}")
print(f"Hex (lower): {n:x}")
print(f"Hex (upper): {n:X}")
output
Binary: 11111111
Octal: 377
Hex (lower): ff
Hex (upper): FF

Debugging with =

Python 3.8 introduced the = shorthand, which prints both the expression and its value. This is useful for quick debugging:

py
x = 42
items = [1, 2, 3]
print(f"{x=}")
print(f"{len(items)=}")
output
x=42
len(items)=3

The = shorthand preserves the exact expression text, making it more informative than a plain print().

Quick Reference

Syntax Description
f"{var}" Insert a variable
f"{expr}" Insert any expression
f"{val:.2f}" Float with 2 decimal places
f"{val:,}" Integer with thousands separator
f"{val:.1%}" Percentage with 1 decimal place
f"{val:e}" Scientific notation
f"{val:<10}" Left-align in 10 characters
f"{val:>10}" Right-align in 10 characters
f"{val:^10}" Center in 10 characters
f"{val:*^10}" Center with * fill
f"{val:b}" Binary
f"{val:x}" Hexadecimal (lowercase)
f"{val=}" Debug: print expression and value (3.8+)

FAQ

What Python version do f-strings require?
f-strings were introduced in Python 3.6. If you are on Python 3.5 or earlier, use str.format() or % formatting instead.

What is the difference between f-strings and str.format()?
f-strings are evaluated at runtime and embed expressions inline. str.format() uses positional or keyword placeholders and is slightly more verbose. f-strings are generally faster and easier to read for straightforward formatting.
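As a minimal side-by-side comparison (with placeholder values), the two lines below produce the same string:

```py
name, score = "Ada", 0.913
print(f"{name}: {score:.1%}")            # f-string: expression and format inline
print("{}: {:.1%}".format(name, score))  # str.format: positional placeholders
```
output
Ada: 91.3%
Ada: 91.3%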

Can I use quotes inside an f-string expression?
Yes, but you must use a different quote type from the one wrapping the f-string. For example, if the f-string uses double quotes, use single quotes inside the braces: f"Hello, {user['name']}". In Python 3.12 and later, the same quote type can be reused inside the braces.

Can I use f-strings with multiline expressions?
In Python 3.11 and earlier, expressions inside {} cannot contain backslashes. Python 3.12 lifted this restriction, but for portable code you can pre-compute the value in a variable first and reference that variable in the f-string.
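For example, computing a joined string outside the braces and interpolating the result works on every Python version that supports f-strings:

```py
lines = ["first", "second"]
joined = "\n".join(lines)      # compute the newline join outside the braces
print(f"Report:\n{joined}")    # \n in the literal part is always allowed
```
output
Report:
first
second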

Are f-strings faster than % formatting?
In many common cases, f-strings are faster than % formatting and str.format(), while also being easier to read. Exact performance depends on the expression and Python version, so readability is usually the more important reason to prefer them.

Conclusion

f-strings are the preferred way to format strings in Python 3.6 and later. They are concise, readable, and support the full format specification mini-language for controlling number precision, alignment, and type conversion. For related string operations, see Python String Replace and How to Split a String in Python.

kill Cheatsheet

Basic Syntax

Core kill command forms.

Command Description
kill PID Send SIGTERM (graceful stop) to a process
kill -9 PID Force kill a process (SIGKILL)
kill -1 PID Reload process config (SIGHUP)
kill -l List all available signal names and numbers
kill -0 PID Check if a PID exists without sending a signal

Signal Reference

Signals most commonly used with kill, killall, and pkill.

Signal Number Description
SIGHUP 1 Reload config — most daemons reload settings without restarting
SIGINT 2 Interrupt process — equivalent to pressing Ctrl+C
SIGQUIT 3 Quit and write a core dump for debugging
SIGKILL 9 Force kill — cannot be caught or ignored; always terminates immediately
SIGTERM 15 Graceful stop — default signal; process can clean up before exiting
SIGUSR1 10 User-defined signal 1 — meaning depends on the application
SIGUSR2 12 User-defined signal 2 — meaning depends on the application
SIGCONT 18 Resume a process that was suspended with SIGSTOP
SIGSTOP 19 Suspend process — cannot be caught or ignored
SIGTSTP 20 Terminal stop — equivalent to pressing Ctrl+Z

Kill by PID

Send signals to one or more specific processes.

Command Description
kill 1234 Gracefully stop PID 1234
kill -9 1234 5678 Force kill multiple PIDs at once
kill -HUP 1234 Reload config for PID 1234
kill -STOP 1234 Suspend PID 1234
kill -CONT 1234 Resume suspended PID 1234
kill -9 $(pidof firefox) Force kill all PIDs for a process by name

killall: Kill by Name

Send signals to all processes matching an exact name.

Command Description
killall process_name Send SIGTERM to all matching processes
killall -9 process_name Force kill all matching processes
killall -HUP nginx Reload all nginx processes
killall -u username process_name Kill matching processes owned by a user
killall -v process_name Kill and report which processes were signaled
killall -r "pattern" Match process names with a regex pattern

Background Jobs

Kill jobs running in the current shell session.

Command Description
jobs List background jobs and their job numbers
kill %1 Send SIGTERM to background job number 1
kill -9 %1 Force kill background job number 1
kill %+ Kill the current background job
kill %% Kill the current background job (%% and %+ are equivalent)

Troubleshooting

Quick fixes for common kill issues.

Issue Check
Process does not stop after SIGTERM Wait a moment, then escalate to kill -9 PID
No such process error PID has already exited; verify with ps -p PID
Operation not permitted Process belongs to another user — use sudo kill PID
SIGKILL has no effect Process is in uninterruptible sleep (D state in ps); only a reboot can free it
Not sure which PID to kill Find it first with pidof name, pgrep name, or ps aux | grep name

Related Guides

Guide Description
kill Command in Linux Full guide to kill options, signals, and examples
How to Kill a Process in Linux Practical walkthrough for finding and stopping processes
pkill Command in Linux Kill processes by name and pattern
pgrep Command in Linux Find process PIDs before signaling
ps Command in Linux Inspect the running process list

watch Cheatsheet

Basic Syntax

Core watch command forms.

Command Description
watch command Run a command repeatedly every 2 seconds
watch -n 5 command Refresh every 5 seconds
watch -n 1 date Update output every second
watch --help Show available options

Common Monitoring Tasks

Use watch to keep an eye on changing system state.

Command Description
watch free -h Monitor memory usage
watch df -h Monitor disk space
watch uptime Check load averages and uptime
watch "ps -ef | grep nginx" Monitor nginx processes
watch "ss -tulpn" Monitor listening sockets

Timing and Refresh Control

Control how often watch reruns the command.

Command Description
watch -n 0.5 command Refresh every 0.5 seconds
watch -n 10 command Refresh every 10 seconds
watch -t command Hide the header line
watch -p command Try to run at precise intervals
watch -x command arg1 arg2 Run the command directly without sh -c

Highlighting Changes

Make changing output easier to spot.

Command Description
watch -d command Highlight differences between updates
watch -d free -h Highlight memory changes
watch -d "ip -brief address" Highlight interface state or address changes
watch -d -n 1 "cat /proc/loadavg" Highlight load-average changes

Commands with Pipes and Quotes

Wrap pipelines and shell syntax in quotes unless you use -x.

Command Description
watch "ps -ef | grep apache" Monitor apache processes
watch "ls -lh /var/log | tail" Watch the end of a directory listing
watch "grep -c error /var/log/syslog" Count matching lines repeatedly
watch -x ls -lh /var/log Run ls directly without shell parsing
watch "find /tmp -maxdepth 1 -type f | wc -l" Count files in /tmp repeatedly

Exit Conditions and Beeps

Stop or alert when output changes.

Command Description
watch -g command Exit when command output changes
watch -b command Beep if the command exits with non-zero status
watch -b -n 5 "systemctl is-active nginx" Alert if the service is no longer active
watch -g "cat /tmp/status.txt" Exit when the file content changes

Troubleshooting

Quick fixes for common watch issues.

Issue Check
Pipes or redirects do not work Quote the whole command or use sh -c
The screen flickers too much Increase the interval or hide the header with -t
Output changes are hard to spot Add -d to highlight differences
The command exits immediately Check the command syntax outside watch first
You need shell expansion and variables Use quoted shell commands instead of -x

Related Guides

Use these guides for broader monitoring and process tasks.

Guide Description
Linux Watch Command Full guide to watch options and examples
ps Command in Linux Inspect running processes
free Command in Linux Check memory usage
ss Command in Linux Inspect sockets and ports
top Command in Linux Monitor processes interactively

Big Tech Layoffs Cut Into an Artery: Make Employees Learn AI, Then Cut Them, as the Homepage Turns Into a Dog Gallery Overnight

By Selina
March 12, 2026, 10:17

"The Systems Are Not Doing So Well"

If you opened Amazon in the small hours around March 5, you might have wondered whether you had typed the wrong URL: the screen was full of puppy pictures and a giant "Sorry."

This is Amazon's signature move. When the site crashes, it rolls out cute dog pictures, drops to its knees in apology, and tries to soothe its users.

"As you have probably all heard, our systems and the related foundational services have not been doing so well lately."

That is the opening of an internal email that Dave Treadwell, Amazon's senior vice president of e-commerce foundational services, sent to the engineering teams on March 10. That afternoon, Amazon held an emergency "deep retrospective" meeting on the recent string of system outages.

The incidents all pointed to the same place: programs written with AI assistance had suddenly developed bugs.

The same thing happened last December. Kiro, Amazon's internal AI coding assistant, decided on its own to "delete and rebuild the entire environment" while fixing an environment issue, causing a 13-hour regional AWS outage. Amazon initially called it "user error, not AI error." Security researcher Jamieson O'Reilly pushed back: "At least without AI, a human would have had to type in a whole sequence of commands by hand, with more chances along the way to notice their own mistakes."

Where were the people? Does a company as vast as Amazon have no engineers left?

Fewer and fewer of them. Amazon is in the middle of its largest sustained layoffs in three years.

The Closed Loop of Putting AI on the Job

In October 2025 it cut 14,000 corporate positions; in January 2026 it cut another 16,000; in early March the robotics division laid off more than 100 engineers, a division whose VP had called robotics a "strategic priority" not long before.

Over three years, Amazon has eliminated more than 57,000 corporate positions. At the same time, multiple business units across the company are planning further large-scale workforce adjustments, internally framed as part of an "AI-first development" transformation.

CEO Andy Jassy has said publicly that the corporate workforce will keep shrinking, but that AI will deliver efficiency gains.

But who is supposed to build that AI, if not those same engineers?

On Silicon Valley social media and technical forums, one narrative pattern keeps recurring: laid-off employees discover that they had earlier been asked to systematically document their workflows, decision logic, and operating procedures, which management called "knowledge management" or "process optimization," and that these documents were ultimately used to train AI systems. Some teams were cut wholesale after using AI tools to dramatically raise their productivity.

The details of these individual cases are hard to verify one by one. One "insider leak" about Amazon layoffs that spread widely on social media last week has been confirmed to be AI-generated fabrication.

But a false narrative could rack up 2 million views precisely because the structural fear it describes is real: when a company requires employees to systematically document their own work, and the ultimate purpose of those documents is to train an AI meant to replace them, that is not "automation replacing repetitive labor." It is asking workers to build, with their own hands, the tool that will replace them.

The value of training data lies in this: once it has been extracted, the person can be discarded. During the Industrial Revolution, the Luddites smashed the looms, but at least the looms had not been designed by the weavers themselves. The engineer of 2026 faces a more refined dilemma: your expertise, your judgment, your intuition for edge cases, the very things that make you irreplaceable, have been converted into training data.

There is even a paradox hidden here. Refuse to use AI and you are laid off for "low efficiency." Embrace AI and raise your efficiency, and you have personally proven that AI can do your job, so you are laid off too.

One petition signatory who was laid off last year said: "As soon as AI was introduced, shorter hours were demanded; people were expected to finish more work in less time, and it was hinted that we would be graded on how we used AI."

The only "safe" position seems to be the person who manages the AI. But when Amazon has its senior engineers act as reviewers, the essence of their work shifts from "creating" to "reviewing," and the latter is exactly the kind of task that is easier to standardize, and therefore easier to automate.

When your job description changes from "engineer" to "reviewer," you have become a conduit, not a destination.

On one side the layoffs accelerate; on the other, AI starts breaking systems, and the people who remain are told to backstop it. Humans hand decision-making to AI; AI bears no consequences; the consequences land back on humans, but by then the people who could have backstopped them have already been laid off. The loop is closed.

The Employee Removal Plan

Amazon's employees are not waiting to be picked off. They have launched a joint petition, encouraging employees to sign and inviting outsiders to join; more than four thousand people have taken part so far.

David Graeber wrote in Bullshit Jobs that the cruelest thing about modern work is not the exhaustion; it is knowing clearly that your work is dissolving your own reason for existing, and not being able to stop.

Amazon is not an isolated case. Jack Dorsey's Block laid off 4,000 people in February. An Orgvue survey found that more than half of business leaders regretted replacing employees with AI, but the layoffs cannot be undone. Amazon's case is worth singling out not only for the scale (57,000 positions is staggering on its own) but because it may demonstrate a cycle:

Document the work → train the AI → use the AI to boost efficiency → prove that people are replaceable → lay people off → the AI breaks → have the remaining people review the AI → keep laying people off.

Leaked internal Amazon documents suggest that the company's long-term goal is itself only a tiny, insignificant step within a much larger plan of "prosperity through layoffs."

That small step is done. The larger plan is still running, and it will not stop.


Python Virtual Environments: venv and virtualenv

A Python virtual environment is a self-contained directory that includes its own Python interpreter and a set of installed packages, isolated from the system-wide Python installation. Using virtual environments lets each project maintain its own dependencies without affecting other projects on the same machine.

This guide explains how to create and manage virtual environments using venv (built into Python 3) and virtualenv (a popular third-party alternative) on Linux and macOS.

Quick Reference

Task Command
Install venv support (Ubuntu, Debian) sudo apt install python3-venv
Create environment python3 -m venv venv
Activate (Linux / macOS) source venv/bin/activate
Deactivate deactivate
Install a package pip install package-name
Install specific version pip install package==1.2.3
List installed packages pip list
Save dependencies pip freeze > requirements.txt
Install from requirements file pip install -r requirements.txt
Create with specific Python version python3.11 -m venv venv
Install virtualenv pip install virtualenv
Delete environment rm -rf venv

What Is a Python Virtual Environment?

When you install a package globally with pip install, it is available to every Python script on the system. This becomes a problem when two projects need different versions of the same library — for example, one requires requests==2.28.0 and another requires requests==2.31.0. Installing both globally is not possible.

A virtual environment solves this by giving each project its own isolated space for packages. Activating an environment prepends its bin/ directory to your PATH, so python and pip resolve to the versions inside the environment rather than the system-wide ones.
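One way to confirm this isolation from inside Python: in an active virtual environment, sys.prefix points at the environment directory while sys.base_prefix still points at the base installation. A small check (a sketch, not part of the venv module itself):

```py
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix differs from sys.base_prefix.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```

Run it with the environment activated and again after deactivate to see the value flip.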

Installing venv

The venv module ships with Python 3 and requires no separate installation on most systems. On Ubuntu and Debian, the module is packaged separately:

Terminal
sudo apt install python3-venv

On Fedora, RHEL, and derivatives, venv is included with the Python package by default.

Verify your Python version before creating an environment:

Terminal
python3 --version

For more on checking your Python installation, see How to Check Python Version.

Creating a Virtual Environment

Navigate to your project directory and run:

Terminal
python3 -m venv venv

The second venv is the name of the directory that will be created. The conventional names are venv or .venv. The directory contains a copy of the Python binary, pip, and the standard library.

To create the environment in a specific location rather than the current directory:

Terminal
python3 -m venv ~/projects/myproject/venv

Activating the Virtual Environment

Before using the environment, you need to activate it.

On Linux and macOS:

Terminal
source venv/bin/activate

Once activated, your shell prompt changes to show the environment name:

output
(venv) user@host:~/myproject$

From this point on, python and pip refer to the versions inside the virtual environment, not the system-wide ones.

To deactivate the environment and return to the system Python:

Terminal
deactivate

Installing Packages

With the environment active, install packages using pip as you normally would. If pip is not installed on your system, see How to Install Python Pip on Ubuntu.

Terminal
pip install requests

To install a specific version:

Terminal
pip install requests==2.31.0

To list all packages installed in the current environment:

Terminal
pip list

Packages installed here are completely isolated from the system and from other virtual environments.

Managing Dependencies with requirements.txt

Sharing a project with others, or deploying it to another machine, requires a way to reproduce the same package versions. The convention is to save all dependencies to a requirements.txt file:

Terminal
pip freeze > requirements.txt

The file lists every installed package and its exact version:

output
certifi==2024.2.2
charset-normalizer==3.3.2
idna==3.6
requests==2.31.0
urllib3==2.2.1

To install all packages from the file in a fresh environment:

Terminal
pip install -r requirements.txt

This is the standard workflow for reproducing an environment on a different machine or in a CI/CD pipeline.

Using a Specific Python Version

By default, python3 -m venv uses whichever Python 3 version is the system default. If you have multiple Python versions installed, you can specify which one to use:

Terminal
python3.11 -m venv venv

This creates an environment based on Python 3.11 regardless of the system default. To check which versions are available:

Terminal
python3 --version
python3.11 --version

virtualenv: An Alternative to venv

virtualenv is a third-party package that predates venv and offers a few additional features. Install it with pip:

Terminal
pip install virtualenv

Creating and activating an environment with virtualenv follows the same pattern:

Terminal
virtualenv venv
source venv/bin/activate

To use a specific Python interpreter:

Terminal
virtualenv -p python3.11 venv

When to use virtualenv over venv:

  • You need to create environments faster — virtualenv creates environments noticeably faster than venv, which matters when you create them frequently.
  • You are working with Python 2 (legacy codebases only).
  • You need features like --copies to copy binaries instead of symlinking.

For most modern Python 3 projects, venv is sufficient and has the advantage of requiring no installation.

Excluding the Environment from Version Control

The virtual environment directory should never be committed to version control. It is large, machine-specific, and fully reproducible from requirements.txt. Add it to your .gitignore:

Terminal
echo "venv/" >> .gitignore

If you named your environment .venv instead, add .venv/ to .gitignore.

Troubleshooting

python3 -m venv venv fails with “No module named venv”
On Ubuntu and Debian, the venv module is not included in the base Python package. Install it with sudo apt install python3-venv, then retry.

pip install installs packages globally instead of into the environment
The environment is not activated. Run source venv/bin/activate first. You can verify by checking which pip — it should point to a path inside your venv/ directory.

“command not found: python” after activating the environment
The environment uses python3 as the binary name if it was created with python3 -m venv. Use python3 explicitly, or check which python and which python3 inside the active environment.

The environment breaks after moving or renaming its directory
Virtual environments contain hardcoded absolute paths to the Python interpreter. Moving or renaming the directory invalidates those paths. Delete the directory and recreate the environment in the new location, then reinstall from requirements.txt.

FAQ

What is the difference between venv and virtualenv?
venv is a standard library module included with Python 3 — no installation needed. virtualenv is a third-party package with a longer history, faster environment creation, and Python 2 support. For new Python 3 projects, venv is the recommended choice.

Should I commit the virtual environment to git?
No. Add venv/ (or .venv/) to your .gitignore and commit only requirements.txt. Anyone checking out the project can recreate the environment with pip install -r requirements.txt.

Do I need a virtual environment for every project?
It is strongly recommended. Without isolated environments, installing or upgrading a package for one project can break another. The overhead of creating a virtual environment is minimal.

What is the difference between pip freeze and pip list?
pip list shows installed packages in a human-readable format. pip freeze outputs them in requirements.txt format (package==version) suitable for use with pip install -r.

Can I use a virtual environment with a different Python version than the system default?
Yes. Pass the path to the desired interpreter when creating the environment: python3.11 -m venv venv. The environment will use that interpreter for both python and pip commands.

Conclusion

Virtual environments are the standard way to manage Python project dependencies. Create one with python3 -m venv venv, activate it with source venv/bin/activate, and use pip freeze > requirements.txt to capture your dependencies. For more on managing Python installations, see How to Check Python Version.
