Today — March 18, 2026 · Linux Tips, Tricks and Tutorials on Linuxize

Docker Compose: Define and Run Multi-Container Apps

Docker Compose is a tool for defining and running multi-container applications. Instead of starting each container separately with docker run, you describe all of your services, networks, and volumes in a single YAML file and bring the entire environment up with one command.

This guide explains how Docker Compose V2 works, walks through a practical example with a SvelteKit development server and PostgreSQL database, and covers the most commonly used Compose directives and commands.

Quick Reference

For a printable quick reference, see the Docker cheatsheet.

Command Description
docker compose up Start all services (foreground)
docker compose up -d Start all services in detached mode
docker compose down Stop and remove containers and networks
docker compose down -v Also remove named volumes
docker compose ps List running services
docker compose logs SERVICE View logs for a service
docker compose logs -f SERVICE Follow logs in real time
docker compose exec SERVICE sh Open a shell in a running container
docker compose stop Stop containers without removing them
docker compose build Build or rebuild images
docker compose pull Pull the latest images

Prerequisites

The examples in this guide require Docker with the Compose plugin installed. Docker Desktop includes Compose by default. On Linux, install the docker-compose-plugin package alongside Docker Engine.

To verify Compose is available, run:

Terminal
docker compose version
output
Docker Compose version v2.x.x

Compose V2 runs as docker compose (with a space), not docker-compose. The old V1 binary is end-of-life and not covered in this guide.

The Compose File

Modern Docker Compose prefers a file named compose.yaml, though docker-compose.yml is still supported for compatibility. In this guide, we will use docker-compose.yml because many readers still recognize that name first.

A Compose file describes your application’s environment using three top-level keys:

  • services — defines each container: its image, ports, volumes, environment variables, and dependencies
  • volumes — declares named volumes that persist data across container restarts
  • networks — defines custom networks for service communication (Compose creates a default network automatically)

Compose V2 does not require a version: field at the top of the file. If you see version: '3' in older guides, that is legacy syntax kept for backward compatibility, not something new Compose files need.

YAML uses indentation to define structure. Use spaces, not tabs.
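As an illustrative skeleton (the service, volume, and network names here are invented, not part of the example that follows), the three top-level keys fit together like this:

```yaml
services:
  web:
    image: nginx:alpine   # one container per entry under services
    ports:
      - "8080:80"

volumes:
  data:                   # named volume, managed by Docker

networks:
  internal:               # optional custom network
```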

Setting Up the Example

In this example, we will run a SvelteKit development server alongside a PostgreSQL 16 database using Docker Compose.

Start by creating a new SvelteKit project. The npm create command launches an interactive wizard — select “Skeleton project” when prompted, then choose your preferred options for TypeScript and linting:

Terminal
npm create svelte@latest myapp
cd myapp

If you already have a Node.js project, skip this step and use your project directory instead.

Next, create a docker-compose.yml file in the project root:

docker-compose.yml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    ports:
      - "5173:5173"
    environment:
      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    command: sh -c "npm install && npm run dev -- --host"

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  node_modules:

Compose V2 automatically reads a .env file in the project directory and substitutes ${VAR} references in the compose file. Create a .env file to define the database password:

.env
POSTGRES_PASSWORD=changeme
Warning
Do not commit .env to version control. Add it to your .gitignore file to keep credentials out of your repository.

The password above is for local development only. We will walk through each directive in the compose file in the next section.

Understanding the Compose File

Let us go through each directive in the compose file.

services

The services key is the core of any compose file. Each entry under services defines one container. The key name (app, db) becomes the service name, and Compose also uses it as the hostname for inter-service communication — so the app container can reach the database at db:5432.

image

The image directive tells Compose which Docker image to pull. Both node:20-alpine and postgres:16-alpine are pulled from Docker Hub if they are not already present on your machine. The alpine variant uses Alpine Linux as a base, which keeps image sizes small.

working_dir

The working_dir directive sets the working directory inside the container. All commands run in /app, which is where we mount the project files.

volumes

The app service uses two volume entries:

txt
- .:/app
- node_modules:/app/node_modules

The first entry is a bind mount. It maps the current directory on your host (.) to /app inside the container. Any file you edit on your host is immediately reflected inside the container, which is what enables hot-reload during development.

The second entry is a named volume. Without it, the bind mount would overwrite /app/node_modules with whatever is in your host directory — which may be empty or incompatible. The node_modules named volume tells Docker to keep a separate copy of node_modules inside the container so the bind mount does not interfere with it.

The db service uses a named volume for its data directory:

txt
- postgres_data:/var/lib/postgresql/data

This ensures that your database data persists across container restarts. When you run docker compose down, the postgres_data volume is kept. Only docker compose down -v removes it.

Named volumes must be declared at the top level under volumes:. Both postgres_data and node_modules are listed there with no additional configuration, which tells Docker to manage them using its default storage driver.

ports

The ports directive maps ports between the host machine and the container in "HOST:CONTAINER" format. The entry "5173:5173" exposes the Vite development server so you can open http://localhost:5173 in your browser.

environment

The environment directive sets environment variables inside the container. The app service receives the DATABASE_URL connection string, which a SvelteKit application can read to connect to the database.

Notice the ${POSTGRES_PASSWORD} syntax. Compose reads this value from the .env file and substitutes it before starting the container. This keeps credentials out of the compose file itself.

depends_on

By default, depends_on only controls the order in which containers start — it does not wait for a service to be ready. Compose V2 supports a long-form syntax with a condition key that changes this behavior:

yaml
depends_on:
  db:
    condition: service_healthy

With condition: service_healthy, Compose waits until the db service passes its healthcheck before starting app. This prevents the Node.js process from trying to connect to PostgreSQL before the database is accepting connections.

command

The command directive overrides the default command defined in the image. Here it runs npm install to install dependencies, then starts the Vite dev server with --host to bind to 0.0.0.0 inside the container (required for Docker to forward the port to your host):

txt
sh -c "npm install && npm run dev -- --host"

Running npm install on every container start is slow but convenient for development — you do not need to install dependencies manually before running docker compose up. For production, use a multi-stage Dockerfile to pre-install dependencies and build the application.

healthcheck

The healthcheck directive defines a command Compose runs inside the container to determine whether the service is ready. The pg_isready utility checks whether PostgreSQL is accepting connections:

yaml
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5

Compose runs this check every 5 seconds. Once the service passes its first check, it is marked as healthy and any services using condition: service_healthy are allowed to start. If the check fails 5 consecutive times, the service is marked as unhealthy.
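One commonly useful addition not shown above is start_period, which gives a slow-starting container a grace window during which failed checks do not count toward the retry limit. A sketch:

```yaml
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 5s
  timeout: 5s
  retries: 5
  start_period: 10s   # failures during the first 10s are not counted
```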

Managing the Application

Run all commands from the project directory — the directory where your docker-compose.yml is located.

Starting Services

To start all services and stream their logs to your terminal, run:

Terminal
docker compose up

Compose pulls any missing images, creates the network, starts the db container, waits for it to pass its healthcheck, then starts the app container. Press Ctrl+C to stop.

To start in detached mode and run the services in the background:

Terminal
docker compose up -d

Viewing Status and Logs

To list running services and their current state:

Terminal
docker compose ps
output
NAME          IMAGE                COMMAND                  SERVICE   STATUS         PORTS
myapp-app-1   node:20-alpine       "docker-entrypoint.s…"   app       Up 2 minutes   0.0.0.0:5173->5173/tcp
myapp-db-1    postgres:16-alpine   "docker-entrypoint.s…"   db        Up 2 minutes   5432/tcp

To view the logs for a specific service:

Terminal
docker compose logs app

To follow the logs in real time (like tail -f):

Terminal
docker compose logs -f app

Press Ctrl+C to stop following.

Running Commands Inside a Container

To open an interactive shell inside the running app container:

Terminal
docker compose exec app sh

This is useful for running one-off commands, inspecting the file system, or debugging. Type exit to leave the shell.

Stopping and Removing Services

To stop running containers without removing them — preserving named volumes and the network:

Terminal
docker compose stop

To start them again after stopping:

Terminal
docker compose start

To stop containers and remove them along with the network:

Terminal
docker compose down

Named volumes (postgres_data, node_modules) are preserved. Your database data is safe.

To also remove all named volumes — this deletes your database data:

Terminal
docker compose down -v

Use this when you want a completely clean environment.

Common Directives Reference

The following directives are not used in the example above but are commonly needed in real projects.

restart

The restart directive controls what Compose does when a container exits:

yaml
services:
  app:
    restart: unless-stopped

The available policies are:

  • no — do not restart (default)
  • always — always restart, including on system reboot
  • unless-stopped — restart unless the container was explicitly stopped
  • on-failure — restart only when the container exits with a non-zero status

Use unless-stopped for long-running services in single-host deployments.

build

Instead of pulling a pre-built image, build tells Compose to build the image from a local Dockerfile:

yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile

context is the directory Compose sends to the Docker daemon as the build context. dockerfile specifies the Dockerfile path relative to context. If both image and build are specified, Compose builds the image locally and tags it with the name given under image.

env_file

The env_file directive injects environment variables into a container from a file at runtime:

yaml
services:
  app:
    env_file:
      - .env.local

This is different from Compose’s native .env substitution. The .env file at the project root is read by Compose itself to substitute ${VAR} references in the compose file. The env_file directive injects variables directly into the container’s environment. Both can be used together.

networks

Compose creates a default bridge network connecting all services automatically. You can define custom networks to isolate groups of services or control how containers communicate:

yaml
services:
  app:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
  backend:

A service only communicates with other services on the same network. In this example, app can reach db because both share the backend network. A service attached only to frontend — not backend — has no route to db.

profiles

Profiles let you define optional services that only start when explicitly requested. This is useful for development tools, debuggers, or admin interfaces that you do not want running all the time:

yaml
services:
  app:
    image: node:20-alpine

  adminer:
    image: adminer
    profiles:
      - tools
    ports:
      - "8080:8080"

Running docker compose up starts only app. To also start adminer, pass the profile flag:

Terminal
docker compose --profile tools up

Troubleshooting

Port is already in use
Another process on your host is using the same port. Change the host-side port in the ports: mapping. For example, change "5173:5173" to "5174:5173" to expose the container on port 5174 instead.

Service cannot connect to the database
If you are not using condition: service_healthy, depends_on only ensures the db container starts before app — it does not wait for PostgreSQL to be ready to accept connections. Add a healthcheck to the db service and set condition: service_healthy in depends_on as shown in the example above.

node_modules is empty inside the container
The bind mount .:/app maps your host directory to /app, which overwrites /app/node_modules with whatever is on your host. If your host directory has no node_modules, the container sees none either. Fix this by adding a named volume entry node_modules:/app/node_modules as shown in the example. Docker preserves the named volume’s contents and the bind mount does not overwrite it.

FAQ

What is the difference between docker compose down and docker compose stop?
docker compose stop stops the running containers but leaves them on disk along with the network and volumes. You can restart them with docker compose start. docker compose down removes the containers and the network. Named volumes are kept unless you add the -v flag.

How do I view logs for a specific service?
Run docker compose logs SERVICE, replacing SERVICE with the service name defined in your compose file (for example, docker compose logs app). To follow logs in real time, add the -f flag: docker compose logs -f app.

How do I pass secrets without hardcoding them in the Compose file?
Create a .env file in the project directory with your credentials (for example, POSTGRES_PASSWORD=changeme) and reference them in the compose file using ${VAR} syntax. Compose reads the .env file automatically. Add .env to your .gitignore so credentials are never committed to version control.

Can I use Docker Compose in production?
Compose works well for single-host deployments — it is simpler to operate than Kubernetes when you only have one server. Use restart: unless-stopped to keep services running after reboots, and keep secrets in environment variables rather than hardcoded values. For multi-host deployments, look at Docker Swarm or Kubernetes.

What is the difference between a bind mount and a named volume?
A bind mount (./host-path:/container-path) maps a specific directory from your host into the container. Changes on either side are immediately visible on the other. A named volume (volume-name:/container-path) is managed entirely by Docker — it persists data independently of the host directory structure and is not tied to a specific path on your machine. Use bind mounts for source code (so edits take effect immediately) and named volumes for database data (so it persists reliably).

Conclusion

Docker Compose gives you a straightforward way to define and run multi-container development environments with a single YAML file. Once you are comfortable with the basics, the next step is writing custom Dockerfiles to build your own images instead of relying on generic ones.

diff Cheatsheet

Basic Usage

Compare files and save output.

Command Description
diff file1 file2 Compare two files (normal format)
diff -u file1 file2 Unified format (most common)
diff -c file1 file2 Context format
diff -y file1 file2 Side-by-side format
diff file1 file2 > file.patch Save diff to a patch file

Output Symbols

What each symbol means in the diff output.

Symbol Format Meaning
< normal Line from file1 (removed)
> normal Line from file2 (added)
- unified/context Line removed
+ unified/context Line added
! context Line changed
@@ unified Hunk header with line numbers
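The unified-format symbols can be seen with two throwaway files (the file names and contents here are invented for illustration):

```shell
printf 'one\ntwo\nthree\n' > a.txt
printf 'one\nTWO\nthree\n' > b.txt

# '-' marks the line from a.txt, '+' the line from b.txt, and the
# '@@' hunk header gives the line numbers. diff exits with status 1
# when files differ, so '|| true' keeps 'set -e' scripts running.
diff -u a.txt b.txt || true
```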

Directory Comparison

Compare entire directory trees.

Command Description
diff -r dir1 dir2 Compare directories recursively
diff -rq dir1 dir2 Only report which files differ
diff -rN dir1 dir2 Treat absent files as empty
diff -rNu dir1 dir2 Unified recursive diff (for patches)

Filtering

Ignore specific types of differences.

Option Description
-i Ignore case differences
-w Ignore all whitespace
-b Ignore changes in whitespace amount
-B Ignore blank lines
--strip-trailing-cr Ignore Windows carriage returns

Common Options

Option Description
-u Unified format
-c Context format
-y Side-by-side format
-U n Show n lines of context (default: 3)
-C n Show n lines of context (context format)
-W n Set column width for side-by-side
-s Report when files are identical
--color Colorize output
-q Brief output (only whether files differ)

Patch Workflow

Create and apply patches with diff and patch.

Command Description
diff -u file1 file2 > file.patch Create a patch for a single file
diff -rNu dir1 dir2 > dir.patch Create a patch for a directory
patch file1 < file.patch Apply patch to file1
patch -p1 < dir.patch Apply directory patch
patch -R file1 < file.patch Reverse (undo) a patch
Yesterday — March 17, 2026

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
     print("Starting app")
+    print("Debug mode enabled")
     run()

Lines starting with + were added. Lines starting with - were removed. Lines with no prefix are context — unchanged lines shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.
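A scratch repository makes the distinction visible. This is a sketch; the repository, file names, and messages are invented, and `git init -b main` assumes Git 2.28 or newer:

```shell
git init -q -b main demo && cd demo
git config user.email dev@example.com && git config user.name dev

echo "v1" > app.txt
git add app.txt && git commit -qm "initial"

echo "v2" > app.txt       # modify the file without staging it
git diff --stat           # listed as an unstaged change

git add app.txt
git diff --stat           # empty: nothing unstaged remains
git diff --staged --stat  # the staged change shows here
```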

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix:

Terminal
git diff HEAD^ HEAD

HEAD refers to the latest commit and HEAD^ to its parent, so this shows exactly what changed in the last commit. You can go further back with HEAD~2, HEAD~3, and so on.


Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.
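The difference between the two forms can be demonstrated in a scratch repository. A sketch, with invented names and messages (`git init -b main` assumes Git 2.28 or newer):

```shell
git init -q -b main repo && cd repo
git config user.email dev@example.com && git config user.name dev

echo "base" > file.txt
git add file.txt && git commit -qm "base"

# feature-branch adds a line
git checkout -qb feature-branch
echo "feature work" >> file.txt
git commit -aqm "feature change"

# meanwhile, main moves on independently
git checkout -q main
echo "main work" > other.txt
git add other.txt && git commit -qm "main change"

# Two endpoints: main's new file also appears in the diff
git diff --stat main feature-branch

# Merge base to tip: only what feature-branch itself changed
git diff --stat main...feature-branch
```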

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py     | 3 +++
 config.yml | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.
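A quick way to see the effect without a repository is git diff --no-index, which compares arbitrary files (the file names here are invented):

```shell
printf 'alpha  beta\n' > before.txt
printf 'alpha beta\n'  > after.txt

# Without -w the whitespace change is reported ('|| true' absorbs
# the non-zero exit status git diff uses when files differ)
git diff --no-index before.txt after.txt || true

# With -w the whitespace-only difference is ignored: no output
git diff --no-index -w before.txt after.txt || true
```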

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide.

Earlier

export Cheatsheet

Basic Syntax

Core export command forms in Bash.

Command Description
export VAR=value Create and export a variable
export VAR Export an existing shell variable
export -p List all exported variables
export -n VAR Remove the export property
help export Show Bash help for export

Export Variables

Use export to pass variables to child processes.

Command Description
export APP_ENV=production Export one variable
PORT=8080; export PORT Define first, export later
export PATH="$HOME/bin:$PATH" Extend PATH
export EDITOR=nano Set a default editor
export LANG=en_US.UTF-8 Set a locale variable

Export for One Shell Session

These changes last only in the current shell session unless you save them in a startup file.

Command Description
export DEBUG=1 Enable a debug variable for the current session
export API_URL=https://api.example.com Set an application endpoint
export PATH="$HOME/.local/bin:$PATH" Add a per-user bin directory
echo "$DEBUG" Verify that the exported variable is set
bash -c 'echo "$DEBUG"' Confirm the child shell inherits the variable
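The last two rows can be verified in one short session. A sketch (the variable name is arbitrary):

```shell
unset DEBUG                               # start from a clean slate

DEBUG=1                                   # plain shell variable, not exported
bash -c 'echo "child: ${DEBUG:-unset}"'   # prints: child: unset

export DEBUG                              # mark it for inheritance
bash -c 'echo "child: ${DEBUG:-unset}"'   # prints: child: 1
```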

Export Functions

Bash can export functions to child Bash shells.

Command Description
greet() { echo "Hello"; } Define a shell function
export -f greet Export a function
bash -c 'greet' Run the exported function in a child shell
export -nf greet Remove the export property from a function
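Function export can be verified the same way. Note this sketch must be run under Bash itself, since export -f is a Bash extension:

```shell
greet() { echo "Hello"; }

# Without export -f, the child shell has no such command
bash -c 'greet' 2>/dev/null || echo "child: greet not found"

export -f greet   # Bash-specific: POSIX sh does not export functions
bash -c 'greet'   # prints: Hello
```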

Remove or Reset Exports

Use these commands when you no longer want a variable inherited.

Command Description
export -n VAR Keep the variable, but stop exporting it
unset VAR Remove the variable completely
unset -f greet Remove a shell function
export -n PATH Stop exporting PATH in the current shell
env | grep '^VAR=' Check whether VAR is present in the environment

Make Variables Persistent

Add export lines to shell startup files when you want them loaded automatically.

File Use
~/.bashrc Interactive non-login Bash shells
~/.bash_profile Login Bash shells
/etc/environment System-wide environment variables
/etc/profile System-wide shell startup logic
source ~/.bashrc Reload the current shell after editing

Troubleshooting

Quick checks for common export problems.

Issue Check
A child process cannot see the variable Confirm you used export VAR or export VAR=value
The variable disappears in a new terminal Add the export line to ~/.bashrc or another startup file
A function is missing in a child shell Export it with export -f name and use Bash as the child shell
The shell still sees the variable after export -n export -n removes inheritance only; use unset to remove the variable completely
The variable is set but a command ignores it Check whether the program reads that variable or expects a config file instead

Related Guides

Use these guides for broader shell and environment-variable workflows.

Guide Description
export Command in Linux Full guide to export options and examples
How to Set and List Environment Variables in Linux Broader environment-variable overview
env cheatsheet Inspect and override environment variables
Bash cheatsheet Quick reference for Bash syntax and shell behavior

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date:   Fri Mar 7 09:15:04 2026 +0100

    Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.
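A throwaway repository shows the format directly. This is a sketch with invented names and messages (`git init -b main` assumes Git 2.28 or newer):

```shell
git init -q -b main log-demo && cd log-demo
git config user.email dev@example.com && git config user.name dev

echo "a" > f.txt && git add f.txt && git commit -qm "first commit"
echo "b" >> f.txt && git commit -aqm "second commit"

# One abbreviated hash plus subject per line, newest first
git log --oneline
```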

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

 src/auth.py        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/models.py      | 12 +++++++
 tests/test_auth.py | 45 +++++++++++++++++
 3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
* a3f1d2e Add user authentication module
* 7b8c9d0 Fix login form validation
| * 3e4f5a6 WIP: dark mode toggle
|/
* c1d2e3f Update README with setup instructions

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.

How do I find a commit by its message?
Use git log --grep="search term". For case-insensitive search add -i. By default, --grep matches anywhere in the commit message, subject and body alike. When you pass multiple --grep patterns, commits matching any of them are shown; add --all-match to require all patterns. Use -E if the pattern is an extended regular expression.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit.

git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, but you cannot switch branches with a dirty working tree. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).
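The full save-and-restore cycle can be sketched end to end on a scratch repository (file names and paths here are arbitrary):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                  # scratch repository
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "line 1" > notes.txt
git add notes.txt
git commit -qm "add notes"

echo "line 2" >> notes.txt         # uncommitted change
git stash push -q                  # working tree reverts to the last commit
git status --porcelain             # prints nothing: the tree is clean
git stash pop -q                   # the change comes back
tail -n 1 notes.txt                # prints: line 2
```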

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.
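The sequence is easiest to follow on a scratch repository. The branch name resume-wip below is illustrative; note that the stashed line applies cleanly even though main has since rewritten the same file:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                   # scratch repository
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "v1" > app.txt
git add app.txt
git commit -qm "base"

echo "wip change" >> app.txt
git stash push -q -m "WIP"
echo "conflicting rewrite" > app.txt    # the branch moves on without us
git commit -qam "diverge"

git stash branch resume-wip         # new branch at the stash's base commit
cat app.txt                         # v1 plus the stashed line, no conflict
```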

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for normal day-to-day use and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.
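The recovery steps above can be exercised on a scratch repository; the stash messages are illustrative:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"                   # scratch repository
git init -q
git config user.email demo@example.com
git config user.name Demo
echo base > f.txt
git add f.txt
git commit -qm base

echo wip >> f.txt
git stash push -q -m "precious work"
hash=$(git rev-parse 'stash@{0}')
git stash drop -q                   # oops

git fsck --no-reflogs | grep "dangling commit $hash"   # still in the object store
git stash store -m "recovered stash" "$hash"
git stash list                      # stash@{0}: recovered stash
```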

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide.

env Cheatsheet

Basic Syntax

Core env command forms.

Command Description
env Print the current environment
env --help Show available options
env --version Show the installed env version
env -0 Print variables separated with NUL bytes

Inspect Environment Variables

Use env with filters to inspect specific variables.

Command Description
env | sort Print the environment in alphabetical order
env | grep '^PATH=' Show the PATH variable
env | grep '^HOME=' Show the HOME variable
env | grep '^LANG=' Show the LANG variable

Run Commands with Temporary Variables

Set variables for one command without changing the current shell session.

Command Description
VAR=value env command Run one command with a temporary variable
VAR1=dev VAR2=1 env command Set multiple temporary variables
env PATH=/custom/bin:$PATH command Override PATH for one command
env LANG=C command Run a command with the C locale
env HOME=/tmp bash Start a shell with a temporary home directory
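The one-shot behavior is easy to verify: the variable exists in the child process but never leaks into the calling shell. GREETING is an arbitrary example name:

```shell
#!/bin/sh
# the child process (env) sees the variable...
GREETING=hello env | grep '^GREETING='   # prints: GREETING=hello

# ...but the calling shell does not
echo "${GREETING:-unset}"                # prints: unset
```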

Clean or Remove Variables

Start with a minimal environment or remove selected variables.

Command Description
env -i command Run a command with an empty environment
env -i PATH=/usr/bin:/bin bash --noprofile --norc Start a mostly clean shell with a minimal PATH
env -u VAR command Run a command without one variable
env -u http_proxy command Remove a proxy variable for one command
env -i VAR=value command Run a command with only the variables you set explicitly
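To see how little survives env -i, print the resulting environment. Only the variables you pass explicitly remain, plus the few a POSIX shell sets for itself (such as PWD):

```shell
#!/bin/sh
# HOME, USER, LANG, ... are all gone; only PATH was passed through
env -i PATH=/usr/bin:/bin sh -c 'env | sort'
```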

Common Variables

These variables are often inspected or overridden with env.

Variable Description
PATH Directories searched for commands
HOME Current user’s home directory
USER Current user name
SHELL Default login shell
LANG Locale and language setting
TZ System timezone (e.g. America/New_York)
EDITOR Default text editor
TERM Terminal type (e.g. xterm-256color)
TMPDIR Directory for temporary files
PWD Current working directory

Troubleshooting

Quick checks for common env issues.

Issue Check
A temporary variable does not persist env VAR=value command affects only that command and its children
A command is not found after env -i Add a minimal PATH, such as /usr/bin:/bin
Output is hard to parse safely Use env -0 with tools that support NUL-delimited input
A variable is still visible in the shell env does not modify the parent shell; use export or unset the variable in the shell itself
A locale-sensitive command behaves differently Check whether LANG or related locale variables were overridden

Related Guides

Use these guides for broader environment-variable workflows.

Guide Description
How to Set and List Environment Variables in Linux Full guide to listing and setting environment variables
export Command in Linux Export shell variables to child processes
Bashrc vs Bash Profile Understand shell startup files
Bash cheatsheet Quick reference for Bash syntax and variables

Python Cheatsheet

Data Types

Core Python types and how to inspect them.

Syntax Description
42, 3.14, -7 int, float literals
True, False bool
"hello", 'world' str (immutable)
[1, 2, 3] list — mutable, ordered
(1, 2, 3) tuple — immutable, ordered
{"a": 1, "b": 2} dict — key-value pairs
{1, 2, 3} set — unique, unordered
None Absence of a value
type(x) Get type of x
isinstance(x, int) Check if x is of a given type

Operators

Arithmetic, comparison, logical, and membership operators.

Operator Description
+, -, *, / Add, subtract, multiply, divide
// Floor division
% Modulo (remainder)
** Exponentiation
==, != Equal, not equal
<, >, <=, >= Comparison
and, or, not Logical operators
in, not in Membership test
is, is not Identity test
x if cond else y Ternary (conditional) expression

String Methods

Common operations on str objects. Strings are immutable — methods return a new string.

Method Description
s.upper() / s.lower() Convert to upper / lowercase
s.strip() Remove leading and trailing whitespace
s.lstrip() / s.rstrip() Remove leading / trailing whitespace
s.split(",") Split on delimiter, return list
",".join(lst) Join list elements into a string
s.replace("old", "new") Replace all occurrences
s.find("x") First index of substring (-1 if not found)
s.startswith("x") True if string starts with prefix
s.endswith("x") True if string ends with suffix
s.count("x") Count non-overlapping occurrences
s.isdigit() / s.isalpha() Check all digits / all letters
len(s) String length
s[i], s[i:j], s[::step] Index, slice, step
"x" in s Substring membership test

String Formatting

Embed values in strings.

Syntax Description
f"Hello, {name}" f-string (Python 3.6+)
f"{val:.2f}" Float with 2 decimal places
f"{val:>10}" Right-align in 10 characters
f"{val:,}" Thousands separator
f"{{name}}" Literal braces in output
"Hello, {}".format(name) str.format()
"Hello, %s" % name %-formatting (legacy)

List Methods

Common operations on list objects.

Method Description
lst.append(x) Add element to end
lst.extend([x, y]) Add multiple elements to end
lst.insert(i, x) Insert element at index i
lst.remove(x) Remove first occurrence of x
lst.pop() Remove and return last element
lst.pop(i) Remove and return element at index i
lst.index(x) First index of x
lst.count(x) Count occurrences of x
lst.sort() Sort in place (ascending)
lst.sort(reverse=True) Sort in place (descending)
lst.reverse() Reverse in place
lst.copy() Shallow copy
lst.clear() Remove all elements
len(lst) List length
lst[i], lst[i:j] Index and slice
x in lst Membership test

Dictionary Methods

Common operations on dict objects.

Method Description
d[key] Get value by key (raises KeyError if missing)
d.get(key) Get value or None if key not found
d.get(key, default) Get value or default if key not found
d[key] = val Set or update a value
del d[key] Delete a key
d.pop(key) Remove key and return its value
d.keys() View of all keys
d.values() View of all values
d.items() View of all (key, value) pairs
d.update(d2) Merge another dict into d
d.setdefault(key, val) Set key if missing, return value
key in d Check if key exists
len(d) Number of key-value pairs
{**d1, **d2} Merge dicts (Python 3.9+: d1 | d2)
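The two merge forms at the bottom of the table behave the same way; in both, keys from the right-hand dict win:

```python
d1 = {"a": 1, "b": 2}
d2 = {"b": 20, "c": 3}

merged = {**d1, **d2}       # unpacking merge, Python 3.5+
print(merged)               # {'a': 1, 'b': 20, 'c': 3}

merged2 = d1 | d2           # union operator, Python 3.9+
print(merged2 == merged)    # True
```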

Sets

Common operations on set objects.

Syntax Description
set() Empty set (use this, not {})
s.add(x) Add element
s.remove(x) Remove element (raises KeyError if missing)
s.discard(x) Remove element (no error if missing)
s1 | s2 Union
s1 & s2 Intersection
s1 - s2 Difference
s1 ^ s2 Symmetric difference
s1 <= s2 Subset check
x in s Membership test

Control Flow

Conditionals, loops, and flow control keywords.

Syntax Description
if cond: / elif cond: / else: Conditional blocks
for item in iterable: Iterate over a sequence
for i, v in enumerate(lst): Iterate with index and value
while cond: Loop while condition is true
break Exit loop immediately
continue Skip to next iteration
pass No-op placeholder
match x: / case val: Pattern matching (Python 3.10+)

Functions

Define and call functions.

Syntax Description
def func(x, y): Define a function
return val Return a value
def func(x=0): Default argument
def func(*args): Variable positional arguments
def func(**kwargs): Variable keyword arguments
lambda x: x * 2 Anonymous (lambda) function
func(y=1, x=2) Call with keyword arguments
result = func(x) Call and capture return value

File I/O

Open, read, and write files.

Syntax Description
open("f.txt", "r") Open for reading (default mode)
open("f.txt", "w") Open for writing — creates or overwrites
open("f.txt", "a") Open for appending
open("f.txt", "rb") Open for reading in binary mode
with open("f.txt") as f: Open with context manager (auto-close)
f.read() Read entire file as a string
f.read(n) Read n characters
f.readline() Read one line
f.readlines() Read all lines into a list
for line in f: Iterate over lines
f.write("text") Write string to file
f.writelines(lst) Write list of strings
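A minimal round trip ties the table together: write with mode "w", read back with the default mode, and let the context manager handle closing. The temporary path is created just for the example:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# "w" creates the file (or overwrites it); the with-block closes it automatically
with open(path, "w") as f:
    f.write("first line\n")
    f.writelines(["second line\n"])

with open(path) as f:            # default mode is "r"
    lines = [line.rstrip("\n") for line in f]

print(lines)                     # ['first line', 'second line']
```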

Exception Handling

Catch and handle runtime errors.

Syntax Description
try: Start guarded block
except ValueError: Catch a specific exception
except (TypeError, ValueError): Catch multiple exception types
except Exception as e: Catch and bind exception to variable
else: Run if no exception was raised
finally: Always run — use for cleanup
raise ValueError("msg") Raise an exception
raise Re-raise the current exception

Comprehensions

Build lists, dicts, sets, and generators in one expression.

Syntax Description
[x for x in lst] List comprehension
[x for x in lst if cond] Filtered list comprehension
[f(x) for x in range(n)] Comprehension with expression
{k: v for k, v in d.items()} Dict comprehension
{x for x in lst} Set comprehension
(x for x in lst) Generator expression (lazy)
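Each comprehension form from the table, applied to the same data:

```python
nums = range(10)

squares = [n * n for n in nums if n % 2 == 0]   # filtered list comprehension
print(squares)                                   # [0, 4, 16, 36, 64]

lengths = {w: len(w) for w in ["ab", "abcd"]}    # dict comprehension
print(lengths)                                   # {'ab': 2, 'abcd': 4}

unique = {n % 3 for n in nums}                   # set comprehension
print(unique)                                    # {0, 1, 2}

total = sum(n * n for n in nums)                 # generator expression: lazy, no list built
print(total)                                     # 285
```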

Useful Built-ins

Frequently used Python built-in functions.

Function Description
print(x) Print to stdout
input("prompt") Read user input as string
len(x) Length of sequence or collection
range(n) / range(a, b, step) Integer sequence
enumerate(lst) (index, value) pairs
zip(a, b) Pair elements from two iterables
map(func, lst) Apply function to each element
filter(func, lst) Filter elements by predicate
sorted(lst) Return sorted copy
sum(lst), min(lst), max(lst) Sum, minimum, maximum
abs(x), round(x, n) Absolute value, round to n places
int(x), float(x), str(x) Type conversion

Related Guides

Guide Description
Python f-Strings String formatting with f-strings
Python Lists List operations and methods
Python Dictionaries Dictionary operations and methods
Python for Loop Iterating with for loops
Python while Loop Looping with while
Python if/else Statement Conditionals and branching
How to Replace a String in Python String replacement methods
How to Split a String in Python Splitting strings into lists

Python f-Strings: String Formatting in Python 3

f-strings, short for formatted string literals, are the most concise and readable way to format strings in Python. Introduced in Python 3.6, they let you embed variables and expressions directly inside a string by prefixing it with f or F.

This guide explains how to use f-strings in Python, including expressions, format specifiers, number formatting, alignment, and the debugging shorthand introduced in Python 3.8.

Basic Syntax

An f-string is a string literal prefixed with f. Any expression inside curly braces {} is evaluated at runtime and its result is inserted into the string:

py
name = "Leia"
age = 30
print(f"My name is {name} and I am {age} years old.")
output
My name is Leia and I am 30 years old.

Any valid Python expression can appear inside the braces — variables, attribute access, method calls, or arithmetic:

py
price = 49.99
quantity = 3
print(f"Total: {price * quantity}")
output
Total: 149.97

Expressions Inside f-Strings

You can call methods and functions directly inside the braces:

py
name = "alice"
print(f"Hello, {name.upper()}!")
output
Hello, ALICE!

Ternary expressions work as well:

py
age = 20
print(f"Status: {'adult' if age >= 18 else 'minor'}")
output
Status: adult

You can also access dictionary keys and list items:

py
user = {"name": "Bob", "city": "Berlin"}
print(f"{user['name']} lives in {user['city']}.")
output
Bob lives in Berlin.

Escaping Braces

To include a literal curly brace in an f-string, double it:

py
value = 42
print(f"{{value}} = {value}")
output
{value} = 42

Multiline f-Strings

To build a multiline f-string, wrap it in triple quotes:

py
name = "Leia"
age = 30
message = f"""
Name: {name}
Age: {age}
"""
print(message)
output
Name: Leia
Age: 30

Alternatively, use implicit string concatenation to keep each line short:

py
message = (
 f"Name: {name}\n"
 f"Age: {age}"
)

Format Specifiers

f-strings support Python’s format specification mini-language. The full syntax is:

txt
{value:[fill][align][sign][width][grouping][.precision][type]}
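All of these fields can be combined in a single specifier. The value below is padded with *, right-aligned, signed, 12 characters wide, grouped with commas, and shown with 2 decimal places:

```python
amount = 1234.5
#       fill align sign width grouping .precision type
#        *    >    +    12      ,        .2        f
print(f"{amount:*>+12,.2f}")   # ***+1,234.50
```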

Floats and Precision

Control the number of decimal places with :.Nf:

py
pi = 3.14159265
print(f"{pi:.2f}")
print(f"{pi:.4f}")
output
3.14
3.1416

Thousands Separator

Use , to add a thousands separator:

py
population = 1_234_567
print(f"{population:,}")
output
1,234,567

Percentage

Use :.N% to format a value as a percentage:

py
score = 0.875
print(f"Score: {score:.1%}")
output
Score: 87.5%

Scientific Notation

Use :e for scientific notation:

py
distance = 149_600_000
print(f"{distance:e}")
output
1.496000e+08

Alignment and Padding

Use <, >, and ^ to align text within a fixed width. Optionally specify a fill character before the alignment symbol:

py
print(f"{'left':<10}|")
print(f"{'right':>10}|")
print(f"{'center':^10}|")
print(f"{'padded':*^10}|")
output
left      |
     right|
  center  |
**padded**|

Number Bases

Convert integers to binary, octal, or hexadecimal with the b, o, x, and X type codes:

py
n = 255
print(f"Binary: {n:b}")
print(f"Octal: {n:o}")
print(f"Hex (lower): {n:x}")
print(f"Hex (upper): {n:X}")
output
Binary: 11111111
Octal: 377
Hex (lower): ff
Hex (upper): FF

Debugging with =

Python 3.8 introduced the = shorthand, which prints both the expression and its value. This is useful for quick debugging:

py
x = 42
items = [1, 2, 3]
print(f"{x=}")
print(f"{len(items)=}")
output
x=42
len(items)=3

The = shorthand preserves the exact expression text, making it more informative than a plain print().

Quick Reference

Syntax Description
f"{var}" Insert a variable
f"{expr}" Insert any expression
f"{val:.2f}" Float with 2 decimal places
f"{val:,}" Integer with thousands separator
f"{val:.1%}" Percentage with 1 decimal place
f"{val:e}" Scientific notation
f"{val:<10}" Left-align in 10 characters
f"{val:>10}" Right-align in 10 characters
f"{val:^10}" Center in 10 characters
f"{val:*^10}" Center with * fill
f"{val:b}" Binary
f"{val:x}" Hexadecimal (lowercase)
f"{val=}" Debug: print expression and value (3.8+)

FAQ

What Python version do f-strings require?
f-strings were introduced in Python 3.6. If you are on Python 3.5 or earlier, use str.format() or % formatting instead.

What is the difference between f-strings and str.format()?
f-strings are evaluated at runtime and embed expressions inline. str.format() uses positional or keyword placeholders and is slightly more verbose. f-strings are generally faster and easier to read for straightforward formatting.

Can I use quotes inside an f-string expression?
Yes, but you must use a different quote type from the one wrapping the f-string. For example, if the f-string uses double quotes, use single quotes inside the braces: f"Hello, {user['name']}". In Python 3.12 and later, the same quote type can be reused inside the braces.

Can I use f-strings with multiline expressions?
Before Python 3.12, expressions inside {} cannot contain backslashes, so pre-compute the value in a variable first and reference that variable in the f-string. Python 3.12 lifted this restriction.

Are f-strings faster than % formatting?
In many common cases, f-strings are faster than % formatting and str.format(), while also being easier to read. Exact performance depends on the expression and Python version, so readability is usually the more important reason to prefer them.

Conclusion

f-strings are the preferred way to format strings in Python 3.6 and later. They are concise, readable, and support the full format specification mini-language for controlling number precision, alignment, and type conversion. For related string operations, see Python String Replace and How to Split a String in Python.

kill Cheatsheet

Basic Syntax

Core kill command forms.

Command Description
kill PID Send SIGTERM (graceful stop) to a process
kill -9 PID Force kill a process (SIGKILL)
kill -1 PID Reload process config (SIGHUP)
kill -l List all available signal names and numbers
kill -0 PID Check if a PID exists without sending a signal
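The basic forms can be sketched against a harmless background process (sleep here is just a stand-in for a real workload):

```shell
#!/bin/sh
sleep 300 &                 # disposable background process
pid=$!

kill -0 "$pid" && echo "process $pid is running"   # -0 only checks existence

kill "$pid"                 # default SIGTERM; sleep exits immediately
wait "$pid" 2>/dev/null || true   # reap it so the PID is really gone

kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```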

Signal Reference

Signals most commonly used with kill, killall, and pkill.

Signal Number Description
SIGHUP 1 Reload config — most daemons reload settings without restarting
SIGINT 2 Interrupt process — equivalent to pressing Ctrl+C
SIGQUIT 3 Quit and write a core dump for debugging
SIGKILL 9 Force kill — cannot be caught or ignored; always terminates immediately
SIGTERM 15 Graceful stop — default signal; process can clean up before exiting
SIGUSR1 10 User-defined signal 1 — meaning depends on the application
SIGUSR2 12 User-defined signal 2 — meaning depends on the application
SIGCONT 18 Resume a process that was suspended with SIGSTOP
SIGSTOP 19 Suspend process — cannot be caught or ignored
SIGTSTP 20 Terminal stop — equivalent to pressing Ctrl+Z

Kill by PID

Send signals to one or more specific processes.

Command Description
kill 1234 Gracefully stop PID 1234
kill -9 1234 5678 Force kill multiple PIDs at once
kill -HUP 1234 Reload config for PID 1234
kill -STOP 1234 Suspend PID 1234
kill -CONT 1234 Resume suspended PID 1234
kill -9 $(pidof firefox) Force kill all PIDs for a process by name

killall: Kill by Name

Send signals to all processes matching an exact name.

Command Description
killall process_name Send SIGTERM to all matching processes
killall -9 process_name Force kill all matching processes
killall -HUP nginx Reload all nginx processes
killall -u username process_name Kill matching processes owned by a user
killall -v process_name Kill and report which processes were signaled
killall -r "pattern" Match process names with a regex pattern

Background Jobs

Kill jobs running in the current shell session.

Command Description
jobs List background jobs and their job numbers
kill %1 Send SIGTERM to background job number 1
kill -9 %1 Force kill background job number 1
kill %+ Kill the current job (synonym for %%)
kill %% Kill the current (most recent) background job

Troubleshooting

Quick fixes for common kill issues.

Issue Check
Process does not stop after SIGTERM Wait a moment, then escalate to kill -9 PID
No such process error PID has already exited; verify with ps -p PID
Operation not permitted Process belongs to another user — use sudo kill PID
SIGKILL has no effect Process is in uninterruptible sleep (D state in ps); it cannot die until the blocking I/O completes, and if that never happens, only a reboot clears it
Not sure which PID to kill Find it first with pidof name, pgrep name, or ps aux | grep name

Related Guides

Guide Description
kill Command in Linux Full guide to kill options, signals, and examples
How to Kill a Process in Linux Practical walkthrough for finding and stopping processes
pkill Command in Linux Kill processes by name and pattern
pgrep Command in Linux Find process PIDs before signaling
ps Command in Linux Inspect the running process list

watch Cheatsheet

Basic Syntax

Core watch command forms.

Command Description
watch command Run a command repeatedly every 2 seconds
watch -n 5 command Refresh every 5 seconds
watch -n 1 date Update output every second
watch --help Show available options

Common Monitoring Tasks

Use watch to keep an eye on changing system state.

Command Description
watch free -h Monitor memory usage
watch df -h Monitor disk space
watch uptime Check load averages and uptime
watch "ps -ef | grep nginx" Monitor nginx processes
watch "ss -tulpn" Monitor listening sockets

Timing and Refresh Control

Control how often watch reruns the command.

Command Description
watch -n 0.5 command Refresh every 0.5 seconds
watch -n 10 command Refresh every 10 seconds
watch -t command Hide the header line
watch -p command Try to run at precise intervals
watch -x command arg1 arg2 Run the command directly without sh -c

Highlighting Changes

Make changing output easier to spot.

Command Description
watch -d command Highlight differences between updates
watch -d free -h Highlight memory changes
watch -d "ip -brief address" Highlight interface state or address changes
watch -d -n 1 "cat /proc/loadavg" Highlight load-average changes

Commands with Pipes and Quotes

Wrap pipelines and shell syntax in quotes unless you use -x.

Command Description
watch "ps -ef | grep apache" Rerun a filtered process listing
watch "ls -lh /var/log | tail" Watch the end of a directory listing
watch "grep -c error /var/log/syslog" Count matching lines repeatedly
watch -x ls -lh /var/log Run ls directly without shell parsing
watch "find /tmp -maxdepth 1 -type f | wc -l" Count files in a directory repeatedly

Exit Conditions and Beeps

Stop or alert when output changes.

Command Description
watch -g command Exit when command output changes
watch -b command Beep if the command exits with non-zero status
watch -b -n 5 "systemctl is-active nginx" Alert if the service is no longer active
watch -g "cat /tmp/status.txt" Exit when the file content changes

Troubleshooting

Quick fixes for common watch issues.

Issue Check
Pipes or redirects do not work Quote the whole command or use sh -c
The screen flickers too much Increase the interval or hide the header with -t
Output changes are hard to spot Add -d to highlight differences
The command exits immediately Check the command syntax outside watch first
You need shell expansion and variables Use quoted shell commands instead of -x

Related Guides

Use these guides for broader monitoring and process tasks.

Guide Description
Linux Watch Command Full guide to watch options and examples
ps Command in Linux Inspect running processes
free Command in Linux Check memory usage
ss Command in Linux Inspect sockets and ports
top Command in Linux Monitor processes interactively

Python Virtual Environments: venv and virtualenv

A Python virtual environment is a self-contained directory that includes its own Python interpreter and a set of installed packages, isolated from the system-wide Python installation. Using virtual environments lets each project maintain its own dependencies without affecting other projects on the same machine.

This guide explains how to create and manage virtual environments using venv (built into Python 3) and virtualenv (a popular third-party alternative) on Linux and macOS.

Quick Reference

Task Command
Install venv support (Ubuntu, Debian) sudo apt install python3-venv
Create environment python3 -m venv venv
Activate (Linux / macOS) source venv/bin/activate
Deactivate deactivate
Install a package pip install package-name
Install specific version pip install package==1.2.3
List installed packages pip list
Save dependencies pip freeze > requirements.txt
Install from requirements file pip install -r requirements.txt
Create with specific Python version python3.11 -m venv venv
Install virtualenv pip install virtualenv
Delete environment rm -rf venv

What Is a Python Virtual Environment?

When you install a package globally with pip install, it is available to every Python script on the system. This becomes a problem when two projects need different versions of the same library — for example, one requires requests==2.28.0 and another requires requests==2.31.0. Installing both globally is not possible.

A virtual environment solves this by giving each project its own isolated space for packages. Activating an environment prepends its bin/ directory to your PATH, so python and pip resolve to the versions inside the environment rather than the system-wide ones.
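You can watch the PATH switch happen. The environment directory below is a scratch location; any path works:

```shell
#!/bin/sh
set -e
venv_dir=$(mktemp -d)/venv          # scratch location for the example
python3 -m venv "$venv_dir"

. "$venv_dir/bin/activate"
command -v python                   # now resolves inside the environment
python -c 'import sys; print(sys.prefix != sys.base_prefix)'   # True inside a venv

deactivate
command -v python3                  # back to the system interpreter
```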

Installing venv

The venv module ships with Python 3 and requires no separate installation on most systems. On Ubuntu and Debian, the module is packaged separately:

Terminal
sudo apt install python3-venv

On Fedora, RHEL, and derivatives, venv is included with the Python package by default.

Verify your Python version before creating an environment:

Terminal
python3 --version

For more on checking your Python installation, see How to Check Python Version.

Creating a Virtual Environment

Navigate to your project directory and run:

Terminal
python3 -m venv venv

The second venv is the name of the directory that will be created. The conventional names are venv or .venv. The directory contains a copy of the Python binary, pip, and the standard library.

To create the environment in a specific location rather than the current directory:

Terminal
python3 -m venv ~/projects/myproject/venv

Activating the Virtual Environment

Before using the environment, you need to activate it.

On Linux and macOS:

Terminal
source venv/bin/activate

Once activated, your shell prompt changes to show the environment name:

output
(venv) user@host:~/myproject$

From this point on, python and pip refer to the versions inside the virtual environment, not the system-wide ones.

To deactivate the environment and return to the system Python:

Terminal
deactivate

Installing Packages

With the environment active, install packages using pip as you normally would. If pip is not installed on your system, see How to Install Python Pip on Ubuntu.

Terminal
pip install requests

To install a specific version:

Terminal
pip install requests==2.31.0

To list all packages installed in the current environment:

Terminal
pip list

Packages installed here are completely isolated from the system and from other virtual environments.

Managing Dependencies with requirements.txt

Sharing a project with others, or deploying it to another machine, requires a way to reproduce the same package versions. The convention is to save all dependencies to a requirements.txt file:

Terminal
pip freeze > requirements.txt

The file lists every installed package and its exact version:

output
certifi==2024.2.2
charset-normalizer==3.3.2
idna==3.6
requests==2.31.0
urllib3==2.2.1

To install all packages from the file in a fresh environment:

Terminal
pip install -r requirements.txt

This is the standard workflow for reproducing an environment on a different machine or in a CI/CD pipeline.

Using a Specific Python Version

By default, python3 -m venv uses whichever Python 3 version is the system default. If you have multiple Python versions installed, you can specify which one to use:

Terminal
python3.11 -m venv venv

This creates an environment based on Python 3.11 regardless of the system default. To check which versions are available:

Terminal
python3 --version
python3.11 --version

virtualenv: An Alternative to venv

virtualenv is a third-party package that predates venv and offers a few additional features. Install it with pip:

Terminal
pip install virtualenv

Creating and activating an environment with virtualenv follows the same pattern:

Terminal
virtualenv venv
source venv/bin/activate

To use a specific Python interpreter:

Terminal
virtualenv -p python3.11 venv

When to use virtualenv over venv:

  • You need faster environment creation: virtualenv creates environments significantly faster than venv.
  • You are working with Python 2 (legacy codebases only).
  • You need features like --copies to copy binaries instead of symlinking.

For most modern Python 3 projects, venv is sufficient and has the advantage of requiring no installation.

Excluding the Environment from Version Control

The virtual environment directory should never be committed to version control. It is large, machine-specific, and fully reproducible from requirements.txt. Add it to your .gitignore:

Terminal
echo "venv/" >> .gitignore

If you named your environment .venv instead, add .venv/ to .gitignore.

Troubleshooting

python3 -m venv venv fails with “No module named venv”
On Ubuntu and Debian, the venv module is not included in the base Python package. Install it with sudo apt install python3-venv, then retry.

pip install installs packages globally instead of into the environment
The environment is not activated. Run source venv/bin/activate first. You can verify by checking which pip — it should point to a path inside your venv/ directory.

“command not found: python” after activating the environment
A venv created with python3 -m venv provides python, python3, and version-specific symlinks in its bin/ directory, so python should resolve once the environment is active. If it does not, the environment is most likely not activated in the current shell; re-run source venv/bin/activate and verify with which python and which python3.

The environment breaks after moving or renaming its directory
Virtual environments contain hardcoded absolute paths to the Python interpreter. Moving or renaming the directory invalidates those paths. Delete the directory and recreate the environment in the new location, then reinstall from requirements.txt.

FAQ

What is the difference between venv and virtualenv?
venv is a standard library module included with Python 3 — no installation needed. virtualenv is a third-party package with a longer history, faster environment creation, and Python 2 support. For new Python 3 projects, venv is the recommended choice.

Should I commit the virtual environment to git?
No. Add venv/ (or .venv/) to your .gitignore and commit only requirements.txt. Anyone checking out the project can recreate the environment with pip install -r requirements.txt.

Do I need a virtual environment for every project?
It is strongly recommended. Without isolated environments, installing or upgrading a package for one project can break another. The overhead of creating a virtual environment is minimal.

What is the difference between pip freeze and pip list?
pip list shows installed packages in a human-readable format. pip freeze outputs them in requirements.txt format (package==version) suitable for use with pip install -r.

Can I use a virtual environment with a different Python version than the system default?
Yes. Pass the path to the desired interpreter when creating the environment: python3.11 -m venv venv. The environment will use that interpreter for both python and pip commands.

Conclusion

Virtual environments are the standard way to manage Python project dependencies. Create one with python3 -m venv venv, activate it with source venv/bin/activate, and use pip freeze > requirements.txt to capture your dependencies. For more on managing Python installations, see How to Check Python Version.

tcpdump Cheatsheet

Basic Syntax

Core tcpdump command forms.

Command Description
sudo tcpdump Start capturing on the default interface
sudo tcpdump -i eth0 Capture on a specific interface
sudo tcpdump -i any Capture on all interfaces
sudo tcpdump -D List available interfaces
sudo tcpdump -h Show help and usage

Limit and Format Output

Control how much data is shown and how packets are displayed.

Command Description
sudo tcpdump -c 10 Stop after 10 packets
sudo tcpdump -n Do not resolve hostnames
sudo tcpdump -nn Do not resolve hostnames or service names
sudo tcpdump -v Verbose output
sudo tcpdump -X Show packet contents in hex and ASCII

Protocol Filters

Capture only the protocol traffic you care about.

Command Description
sudo tcpdump tcp Capture TCP packets only
sudo tcpdump udp Capture UDP packets only
sudo tcpdump icmp Capture ICMP packets only
sudo tcpdump arp Capture ARP traffic
sudo tcpdump port 53 Capture DNS traffic on port 53

Host and Port Filters

Match packets by source, destination, host, or port.

Command Description
sudo tcpdump host 192.168.1.10 Capture traffic to or from one host
sudo tcpdump src host 192.168.1.10 Capture packets from one source host
sudo tcpdump dst host 192.168.1.10 Capture packets to one destination host
sudo tcpdump port 22 Capture SSH traffic
sudo tcpdump src port 443 Capture packets from source port 443

Combine Filters

Use boolean operators to build precise capture expressions.

Command Description
sudo tcpdump 'tcp and port 80' Capture HTTP traffic over TCP
sudo tcpdump 'host 10.0.0.5 and port 22' Capture SSH traffic for one host
sudo tcpdump 'src 10.0.0.5 and dst port 443' Match one source and HTTPS destination
sudo tcpdump 'port 80 or port 443' Capture HTTP or HTTPS traffic
sudo tcpdump 'net 192.168.1.0/24 and not port 22' Capture a subnet except SSH

Write and Read Capture Files

Save traffic to a file or inspect an existing pcap capture.

Command Description
sudo tcpdump -w capture.pcap Write packets to a pcap file
sudo tcpdump -r capture.pcap Read packets from a pcap file
sudo tcpdump -i eth0 -w web.pcap port 80 Save filtered traffic to a file
sudo tcpdump -nn -r capture.pcap Read a file without name resolution
sudo tcpdump -r capture.pcap 'host 10.0.0.5' Apply a filter while reading a pcap

Common Use Cases

Practical commands for day-to-day packet inspection.

Command Description
sudo tcpdump -i any port 22 Watch SSH connections
sudo tcpdump -i any port 53 Inspect DNS queries and replies
sudo tcpdump -i eth0 host 8.8.8.8 Trace traffic to one external host
sudo tcpdump -i any 'tcp port 80 or tcp port 443' Watch web traffic
sudo tcpdump -i any icmp Check ping and ICMP traffic

Troubleshooting

Quick checks for common tcpdump issues.

Issue Check
You do not have permission to capture on that device Run with sudo or verify packet-capture capabilities
No packets appear Confirm the correct interface with tcpdump -D and use -i any if needed
Hostnames make output slow Add -n or -nn to disable name resolution
Output is too noisy Add -c, protocol filters, or host/port filters to narrow the capture
Need to inspect later Write to a file with -w capture.pcap and review it with tcpdump -r or Wireshark

Related Guides

Use these guides for broader networking and packet-capture workflows.

Guide Description
tcpdump Command in Linux Full tcpdump guide with detailed examples
ss Command in Linux Inspect sockets and listening services
ping cheatsheet Test reachability and latency
IP command cheatsheet Check interfaces, addresses, and routes
How to Check Open Ports in Linux Review listening ports before capturing traffic

How to Create a systemd Service File in Linux

systemd is the init system used by most modern Linux distributions, including Ubuntu, Debian, Fedora, and RHEL. It manages system processes, handles service dependencies, and starts services automatically at boot.

A systemd service file — also called a unit file — tells systemd how to start, stop, and manage a process. This guide explains how to create a systemd service file, install it on the system, and manage it with systemctl.

Quick Reference

Task Command
Create a service file sudo nano /etc/systemd/system/myservice.service
Reload systemd after changes sudo systemctl daemon-reload
Enable service at boot sudo systemctl enable myservice
Start the service sudo systemctl start myservice
Enable and start at once sudo systemctl enable --now myservice
Check service status sudo systemctl status myservice
Stop the service sudo systemctl stop myservice
Disable at boot sudo systemctl disable myservice
View service logs sudo journalctl -u myservice
View live logs sudo journalctl -fu myservice

Understanding systemd Unit Files

Service files are plain text files with a .service extension. They are stored in one of two locations:

  • /etc/systemd/system/ — system-wide services, managed by root. Files here take priority over package-installed units.
  • /usr/lib/systemd/system/ or /lib/systemd/system/ — units installed by packages, depending on the distribution. Do not edit these directly; copy them to /etc/systemd/system/ to override.

A service file is divided into three sections:

  • [Unit] — describes the service and declares dependencies.
  • [Service] — defines how to start, stop, and run the process.
  • [Install] — controls how and when the service is enabled.

Creating a Simple Service File

In this example, we will create a service that runs a custom Bash script. First, create the script you want to run:

Terminal
sudo nano /usr/local/bin/myapp.sh

Add the following content to the script:

/usr/local/bin/myapp.sh
#!/bin/bash
echo "myapp started" >> /var/log/myapp.log
# your application logic here
exec /usr/local/bin/myapp

Make the script executable:

Terminal
sudo chmod +x /usr/local/bin/myapp.sh

Now create the service file:

Terminal
sudo nano /etc/systemd/system/myapp.service

Add the following content:

/etc/systemd/system/myapp.service
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving the file, reload systemd to pick up the new unit:

Terminal
sudo systemctl daemon-reload

Enable and start the service:

Terminal
sudo systemctl enable --now myapp

Check that it is running:

Terminal
sudo systemctl status myapp
output
● myapp.service - My Custom Application
     Loaded: loaded (/etc/systemd/system/myapp.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2026-03-10 09:00:00 CET; 2s ago
   Main PID: 1234 (myapp.sh)
      Tasks: 1 (limit: 4915)
     CGroup: /system.slice/myapp.service
             └─1234 /bin/bash /usr/local/bin/myapp.sh

Unit File Sections Explained

[Unit] Section

The [Unit] section contains metadata and dependency declarations.

Common directives:

  • Description= — a short human-readable name shown in systemctl status output.
  • After= — start this service only after the listed units are active. Does not create a dependency; use Requires= for hard dependencies.
  • Requires= — this service requires the listed units. If they fail, this service fails too.
  • Wants= — like Requires=, but the service still starts even if the listed units fail.
  • Documentation= — a URL or man page reference for the service.
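
Note that Requires= creates a dependency but no ordering: without a matching After=, systemd may start both units in parallel. A minimal sketch pairing the two (the postgresql.service target is just an illustrative example):

```ini
[Unit]
Description=My App (needs the database)
# Fail if postgresql.service cannot start...
Requires=postgresql.service
# ...and wait until it is up before starting this unit.
After=postgresql.service
```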

[Service] Section

The [Service] section defines the process and how systemd manages it.

Common directives:

  • Type= — the startup type. See Service Types below.
  • ExecStart= — the command to run when the service starts. Must be an absolute path.
  • ExecStop= — the command to run when the service stops. If omitted, systemd sends SIGTERM.
  • ExecReload= — the command to reload the service configuration without restarting.
  • Restart= — when to restart automatically. See Restart Policies below.
  • RestartSec= — how many seconds to wait before restarting. Default is 100ms.
  • User= — run the process as this user instead of root.
  • Group= — run the process as this group.
  • WorkingDirectory= — set the working directory for the process.
  • Environment= — set environment variables: Environment="KEY=value".
  • EnvironmentFile= — read environment variables from a file.

[Install] Section

The [Install] section controls how the service is enabled.

  • WantedBy=multi-user.target — the most common value. Enables the service for normal multi-user (non-graphical) boot.
  • WantedBy=graphical.target — use this if the service requires a graphical environment.

Service Types

The Type= directive tells systemd how the process behaves at startup.

Type Description
simple Default. The process started by ExecStart is the main process.
forking The process forks and the parent exits. systemd tracks the child. Use with traditional daemons.
oneshot The process runs once and exits. systemd waits for it to finish before marking the service as active.
notify Like simple, but the process notifies systemd when it is ready using sd_notify().
idle Like simple, but the process is delayed until all active jobs finish.

For most custom scripts and applications, Type=simple is the right choice.
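
For one-off setup tasks, Type=oneshot is often combined with RemainAfterExit=yes so the unit stays reported as active after the command finishes. A hypothetical sketch (the path is illustrative):

```ini
[Unit]
Description=Prepare application data directory

[Service]
Type=oneshot
# Runs once at start; RemainAfterExit keeps the unit "active" afterwards.
RemainAfterExit=yes
ExecStart=/usr/bin/mkdir -p /var/lib/myapp

[Install]
WantedBy=multi-user.target
```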

Restart Policies

The Restart= directive controls when systemd automatically restarts the service.

Value Restarts when
no Never (default).
always The process exits for any reason.
on-failure The process exits with a non-zero code, is killed by a signal, or times out.
on-abnormal The process is killed by a signal, times out, or the watchdog fires; exit codes (clean or not) never trigger a restart.
on-success Only if the process exits cleanly (exit code 0).

For long-running services, Restart=on-failure is the most common choice. Use Restart=always for services that must stay running under all circumstances.
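
A service that crashes immediately can otherwise restart in a tight loop; systemd rate-limits this with StartLimitIntervalSec= and StartLimitBurst= in the [Unit] section. A sketch with illustrative values:

```ini
[Unit]
Description=My Custom Application
# Give up if the service restarts more than 5 times within 60 seconds.
StartLimitIntervalSec=60
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
RestartSec=5
```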

Running a Service as a Non-Root User

Running services as root is a security risk. Use the User= and Group= directives to run the service as a dedicated user:

/etc/systemd/system/myapp.service
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
User=myappuser
Group=myappuser
ExecStart=/usr/local/bin/myapp
Restart=on-failure
WorkingDirectory=/opt/myapp
Environment="NODE_ENV=production"

[Install]
WantedBy=multi-user.target

Create the system user before enabling the service:

Terminal
sudo useradd -r -s /usr/sbin/nologin myappuser

Troubleshooting

Service fails to start — “Failed to start myapp.service”
Check the logs with journalctl: sudo journalctl -u myapp --since "5 minutes ago". The output shows the exact error message from the process.

Changes to the service file have no effect
You must run sudo systemctl daemon-reload after every edit to the unit file. systemd reads the file at reload time, not at start time.

“ExecStart= must be an absolute path”
The ExecStart= value must start with /. Use the full path to the binary — for example /usr/bin/python3 not python3. Use which python3 to find the absolute path.

Service starts but immediately exits
The process may be crashing on startup. Check sudo journalctl -u myapp -n 50 for error output. If Type=simple, make sure ExecStart= runs the process in the foreground — not a script that launches a background process and exits.

Service does not start at boot despite being enabled
Confirm the [Install] section has a valid WantedBy= target and that systemctl enable was run after daemon-reload. Check with systemctl is-enabled myapp.

FAQ

Where should I put my service file?
Put custom service files in /etc/systemd/system/. Files there take priority over package-installed units in /usr/lib/systemd/system/ and persist across package upgrades.

What is the difference between enable and start?
systemctl start starts the service immediately for the current session only. systemctl enable creates a symlink so the service starts automatically at boot. To do both at once, use systemctl enable --now myservice.

How do I view logs for my service?
Use sudo journalctl -u myservice to see all logs. Add -f to follow live output, or --since "1 hour ago" to limit the time range. See the journalctl guide for full usage.

Can I run a service as a regular user without root?
Yes, using user-level systemd units stored in ~/.config/systemd/user/. Manage them with systemctl --user enable myservice. They run when the user logs in and stop when they log out (unless lingering is enabled with loginctl enable-linger username).

How do I reload a service after changing its configuration?
If the service supports it, use sudo systemctl reload myservice — this sends SIGHUP or runs ExecReload= without restarting the process. Otherwise, use sudo systemctl restart myservice.

Conclusion

Creating a systemd service file gives you full control over how and when your process runs — including automatic restarts, boot integration, and user isolation. Once the file is in place, use systemctl enable --now to activate it and journalctl -u to monitor it. For scheduling tasks that run once at a set time rather than continuously, consider cron jobs as an alternative.

su Cheatsheet

Basic Syntax

Core su command forms.

Command Description
su Switch to root using the current shell environment
su - Switch to root with a full login shell
su username Switch to another user
su - username Switch to another user with a login shell
su --help Show available options

Login Shell vs Current Shell

Choose whether to keep the current environment or start a clean login session.

Command Description
su Keep most of the current environment
su - Load the target user’s login environment
su - username Change to the target user’s home directory and shell
whoami Confirm the current effective user
pwd Check whether the working directory changed

Run a Command as Another User

Use -c to run a single command without starting a full interactive shell.

Command Description
su -c 'whoami' Run a single command as root
su -c 'id' username Run a command as another user
su -c 'ps aux' Run a quoted command with spaces
su -c 'cd /tmp && pwd' username Run a compound shell command
su -c 'echo $HOME' username Check the target user’s home in a non-login shell

Shell and Environment

Adjust the shell or preserve the caller’s environment when needed.

Command Description
su -s /bin/bash username Use Bash as the target shell
su -s /usr/bin/zsh username Use Zsh as the target shell
su -p Preserve the current environment
echo $HOME Check the current home directory
echo $SHELL Check the current shell

Root Access Patterns

Compare common root-shell workflows.

Command Description
su Prompt for the root password
su - Prompt for the root password and start a login shell
sudo su - Use sudo to start a root login shell
sudo -i Start a root login shell without calling su directly
exit Leave the current su shell

Troubleshooting

Quick checks for common su issues.

Issue Check
Authentication failure Verify the target user’s password, not your own
su: user username does not exist Confirm the account exists with id username
This account is currently not available Check the target shell in /etc/passwd; service accounts often use /usr/sbin/nologin
su keeps the old PATH Use su - to start a clean login shell
Root access fails on Ubuntu The root account may be locked; use sudo -i instead

Related Guides

Use these guides when working with users, passwords, and privilege escalation.

Guide Description
su Command in Linux Full su guide with examples and sudo comparison
sudo Command in Linux Run commands with elevated privileges
How to Change User Password in Linux Set or reset passwords
useradd cheatsheet Create and manage user accounts
How to Add User to Group in Linux Manage group membership

ping Cheatsheet

Basic Syntax

Core ping command forms.

Command Description
ping host Continuously ping a host
ping -c 4 host Send 4 echo requests, then stop
ping -i 2 host Wait 2 seconds between packets
ping -w 10 host Stop after 10 seconds
ping --help Show available options

Common Reachability Checks

Quick tests for DNS names and IP addresses.

Command Description
ping google.com Test DNS resolution and connectivity
ping 8.8.8.8 Test reachability to a public IPv4 address
ping localhost Verify local TCP/IP stack
ping $(hostname -I | awk '{print $1}') Ping the machine's own primary IP address
ping router.local Test a local network device by hostname

Count and Timing

Control how many packets are sent and how long ping runs.

Command Description
ping -c 3 host Send exactly 3 packets
ping -c 5 -i 0.5 host Send 5 packets at 0.5-second intervals
ping -w 5 host Stop after 5 seconds total
ping -W 2 host Wait up to 2 seconds for each reply
ping -c 10 -q host Show summary only after 10 packets

IPv4 and IPv6

Force the address family when needed.

Command Description
ping -4 host Use IPv4 only
ping -6 host Use IPv6 only
ping6 host IPv6 ping on systems that provide ping6
ping -c 4 -4 example.com Check IPv4 replies for a dual-stack host
ping -c 4 -6 example.com Check IPv6 replies for a dual-stack host

Packet Size and Interface

Adjust packet payload and source interface.

Command Description
ping -s 1400 host Send larger packets with 1400-byte payload
ping -s 56 host Use the default payload size explicitly
ping -I eth0 host Send packets from a specific interface
ping -I 192.168.1.10 host Use a specific source address
ping -D host Print timestamps before each reply

Troubleshooting

Quick checks for common ping issues.

Issue Check
Name or service not known DNS failed; test with an IP address directly
Destination Host Unreachable Check routing, gateway, and local network link
100% packet loss The host may be down, blocked by a firewall, or not routing replies
ping: socket: Operation not permitted Use sudo or verify capabilities on systems with restricted raw sockets
IPv6 ping fails only Confirm the host has AAAA records and IPv6 connectivity

Related Guides

Use these guides for broader network troubleshooting workflows.

Guide Description
ping Command in Linux Full ping guide with detailed examples
IP command cheatsheet Inspect interfaces, addresses, and routes
ss Command in Linux Inspect sockets and active network connections
traceroute Command in Linux Trace the route packets take to a host
SSH cheatsheet Quick reference for remote connectivity commands

xargs Cheatsheet

Basic Syntax

Core xargs command forms.

Command Description
xargs command Read stdin and pass items to a command
printf '%s\n' a b c | xargs command Pass newline-separated items to a command
cat list.txt | xargs command Read arguments from a file through stdin
xargs Use /bin/echo as the default command
xargs --help Show available options

Limit Arguments

Control how many items xargs passes at a time.

Command Description
printf '%s\n' a b c | xargs -n 1 echo Pass one argument per command run
printf '%s\n' a b c d | xargs -n 2 echo Pass two arguments per command run
printf '%s\n' a b c | xargs -L 1 echo Read one input line per command run
printf '%s\n' a b c | xargs -P 4 echo Run up to four commands in parallel
printf '%s\n' a b c | xargs -n 100 rm Batch large argument lists

Replace Input

Use placeholders when each input item must appear in a specific position.

Command Description
printf '%s\n' file1 file2 | xargs -I {} touch {} Replace {} with each input item
printf '%s\n' file1 file2 | xargs -I % sh -c 'echo %; ls -l %' Run multiple commands per item
printf '%s\n' img1 img2 | xargs -I {} mv {} {}.bak Reuse the same item twice
printf '%s\n' user1 user2 | xargs -I {} id {} Insert input into a fixed command pattern
printf '%s\n' src1 src2 | xargs -I {} cp {} /backup/ Copy each input item to a directory

Safe File Handling

Use null-delimited input when paths may contain spaces or special characters.

Command Description
find . -type f -print0 | xargs -0 rm -f Remove found files safely
find . -name '*.log' -print0 | xargs -0 ls -lh List matching files safely
find . -type f -print0 | xargs -0 -n 1 basename Process one safe path at a time
printf '%s\0' 'file one' 'file two' | xargs -0 -n 1 echo Feed null-delimited names directly
find /var/www -type f -print0 | xargs -0 chmod 644 Apply permissions to many files safely
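
The difference matters as soon as a filename contains a space: with default whitespace splitting, a name like report 2024.log would reach rm as two separate arguments. A quick sketch in a throwaway directory:

```shell
# Sketch: filenames with spaces survive -print0 / -0 intact.
tmpdir=$(mktemp -d)
touch "$tmpdir/report 2024.log" "$tmpdir/notes.log"

# Null-delimited: each path arrives at rm as exactly one argument.
find "$tmpdir" -name '*.log' -print0 | xargs -0 rm -f

remaining=$(find "$tmpdir" -name '*.log' | wc -l | tr -d ' ')
echo "remaining: $remaining"    # prints: remaining: 0

rm -rf "$tmpdir"
```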

Preview and Confirm

Check generated commands before running them.

Command Description
printf '%s\n' a b c | xargs -t touch Print each command before execution
printf '%s\n' a b c | xargs -p rm Prompt before running the command
find . -type f -print0 | xargs -0 -t rm -f Preview destructive file removals
find . -type f -print0 | xargs -0 echo rm -f Dry run by replacing rm with echo
printf '%s\n' a b c | xargs -r echo Do nothing if stdin is empty

Read from Files

Load items from a file instead of a pipeline.

Command Description
xargs -a list.txt echo Read arguments from list.txt
xargs -a ips.txt -L 1 ping -c 1 Read one IP per line and ping it
xargs -a packages.txt sudo apt install Install packages listed in a file
xargs -a dirs.txt mkdir -p Create directories from a file
xargs -a users.txt -n 1 id Check users listed in a file

Troubleshooting

Quick checks for common xargs issues.

Issue Check
Filenames split at spaces Use find -print0 | xargs -0
Too many arguments at once Add -n N to batch input
Command order looks wrong Add -t to print generated commands
Empty input still runs command Use -r to skip empty stdin
Need item in the middle of a command Use -I {} with a placeholder

Related Guides

Use these guides for full command workflows.

Guide Description
xargs Command in Linux Full xargs tutorial with practical examples
find Files in Linux Build file lists to pass into xargs
rm Command in Linux Remove files safely in bulk operations
grep Command in Linux Filter text before passing results to xargs
Bash Cheatsheet Shell patterns for scripts and pipelines

wc Cheatsheet

Basic Syntax

Core wc command forms.

Command Description
wc file.txt Show lines, words, and bytes for a file
wc file1 file2 Show counts per file and a total line
wc *.log Count all matching files
command | wc Count output from another command
wc --help Show available options

Common Count Flags

Use flags to request specific counters.

Command Description
wc -l file.txt Count lines only
wc -w file.txt Count words only
wc -c file.txt Count bytes only
wc -m file.txt Count characters only
wc -L file.txt Show longest line length

Useful Pipelines

Practical wc combinations for quick shell checks.

Command Description
ls -1 | wc -l Count directory entries (one per line)
grep -r "ERROR" /var/log | wc -l Count matching log lines
find . -type f | wc -l Count files recursively
ps aux | wc -l Count process list lines
cat file.txt | wc -w Count words from stdin

Multi-File Counting

Summarize multiple files and totals.

Command Description
wc -l *.txt Line count for each file plus total
wc -w docs/*.md Word count per Markdown file and total
wc -c part1 part2 part3 Byte count per file and combined total
wc -m *.csv Character count for each CSV file
wc -L *.log Max line length per file and max total

Script-Friendly Patterns

Extract numeric output safely in scripts.

Command Description
count=$(wc -l < file.txt) Capture pure line count without filename
words=$(wc -w < file.txt) Capture word count only
bytes=$(wc -c < file.txt) Capture byte count only
if [ "$(wc -l < file.txt)" -gt 1000 ]; then ... fi Threshold check in scripts
printf '%s\n' "$text" | wc -m Count characters in a variable
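
The redirection form matters because wc file.txt prints the filename after the number, while wc < file.txt prints the number alone. A quick sketch:

```shell
# Sketch: capture a pure line count for use in a script.
tmpfile=$(mktemp)
printf '%s\n' alpha beta gamma > "$tmpfile"

wc -l "$tmpfile"              # number plus filename
count=$(wc -l < "$tmpfile")   # number only (may be space-padded on some systems)
echo "count=$count"

rm -f "$tmpfile"
```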

Troubleshooting

Quick checks for common wc confusion.

Issue Check
Count includes filename Use input redirection: wc -l < file
Unexpected word count Confirm whitespace and delimiters in the file
Character and byte counts differ Use -m for characters and -c for bytes
Total line missing with many files Ensure shell glob matches at least one file
Pipeline count seems off by one Some commands add headers; account for that

Related Guides

Use these guides for complete text-processing workflows.

Guide Description
linux wc Command Full wc guide with detailed examples
head Command in Linux Show first lines of files
tail Command in Linux Show last lines and follow logs
grep Command in Linux Search and filter matching lines
find Files in Linux Build file lists for counting

useradd Cheatsheet

Basic Syntax

Core useradd command forms.

Command Description
sudo useradd username Create a user account with defaults
sudo useradd -m username Create user and home directory
sudo useradd -m -s /bin/bash username Create user with explicit login shell
sudo useradd -m -c "Full Name" username Create user with GECOS/comment field
sudo useradd -D Show current default useradd settings

Home Directory and Shell

Set home path and login shell at creation time.

Command Description
sudo useradd -m username Create /home/username if missing
sudo useradd -M username Create user without home directory
sudo useradd -d /srv/appuser -m appuser Create user with custom home path
sudo useradd -s /bin/zsh username Set login shell to Zsh
sudo useradd -s /usr/sbin/nologin serviceuser Disable interactive login for service account

Groups and Permissions

Assign primary and supplementary groups during creation.

Command Description
sudo useradd -m -g developers username Set primary group to developers
sudo useradd -m -G sudo username Add user to supplementary sudo group
sudo useradd -m -G docker,developers username Add user to multiple supplementary groups
id username Verify UID, GID, and group membership
groups username Show group memberships for a user

UID, Expiry, and Inactive Policy

Control account identity and lifetime.

Command Description
sudo useradd -m -u 1050 username Create user with specific UID
sudo useradd -m -e 2026-12-31 username Set account expiration date
sudo useradd -m -f 30 username Disable account after 30 inactive days
sudo useradd -m -k /etc/skel username Use skeleton directory for initial files
sudo chage -l username Inspect account aging and expiry policy

Password and Account Activation

Set password and verify account usability.

Command Description
sudo passwd username Set or reset user password
sudo passwd -l username Lock account password login
sudo passwd -u username Unlock account password login
sudo su - username Test login environment for new user
getent passwd username Confirm user entry in account database

Defaults and Safe Workflow

Check defaults first and validate each account creation.

Command Description
sudo useradd -D Show defaults (HOME, SHELL, SKEL, etc.)
sudo useradd -D -s /bin/bash Change default shell for future users
sudo useradd -m newuser && sudo passwd newuser Common two-step creation flow
sudo usermod -aG sudo newuser Grant admin privileges after creation
sudo userdel -r username Remove user and home directory when deprovisioning

Troubleshooting

Quick checks for common useradd errors.

Issue Check
useradd: user 'name' already exists Confirm with id name or choose a different username
group 'name' does not exist Create group first with groupadd or use an existing group
Home directory not created Use -m and verify defaults with useradd -D
Cannot log in after creation Check shell (getent passwd user) and set password with passwd
UID conflict Verify used UIDs in /etc/passwd before assigning -u manually

Related Guides

Use these guides for full account lifecycle tasks.

Guide Description
How to Create Users in Linux Using the useradd Command Full useradd tutorial with examples
usermod Command in Linux Modify existing user accounts
How to Delete Users in Linux Using userdel Remove users safely
How to Add User to Group in Linux Manage supplementary groups
How to Change User Password in Linux Set and rotate account passwords