Yesterday — March 25, 2026 · Linux Tips, Tricks and Tutorials on Linuxize

firewalld Cheatsheet

Basic Commands

Start, stop, and reload the firewalld service.

Command Description
firewall-cmd --state Check if firewalld is running
sudo systemctl start firewalld Start the service
sudo systemctl stop firewalld Stop the service
sudo systemctl enable firewalld Enable at boot
sudo systemctl disable firewalld Disable at boot
sudo firewall-cmd --reload Reload rules without dropping connections
sudo firewall-cmd --complete-reload Full reload, resets all connections

Runtime vs Permanent

By default, firewall-cmd changes apply at runtime only and are lost on reload. Add --permanent to persist a rule, then reload to activate it.

Command Description
sudo firewall-cmd --add-service=http Allow HTTP (runtime only)
sudo firewall-cmd --add-service=http --permanent Allow HTTP (survives reload)
sudo firewall-cmd --reload Activate permanent rules
sudo firewall-cmd --runtime-to-permanent Save all runtime rules as permanent
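
The two-step workflow above can be sketched as a short shell sequence; the http service is just an example:

```shell
# Persist a rule (written to the permanent config, not yet active)
sudo firewall-cmd --permanent --add-service=http

# Activate the permanent configuration without dropping existing connections
sudo firewall-cmd --reload

# Confirm the rule is now part of the runtime configuration
sudo firewall-cmd --list-services
```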

Zones

Zones define trust levels for network connections. Each interface belongs to one zone.

Command Description
firewall-cmd --get-zones List all available zones
firewall-cmd --get-default-zone Show the default zone
sudo firewall-cmd --set-default-zone=public Set the default zone
firewall-cmd --get-active-zones Show active zones and their interfaces
firewall-cmd --zone=public --list-all List all settings for a zone
sudo firewall-cmd --zone=public --change-interface=eth0 Assign interface to zone (runtime)
sudo firewall-cmd --zone=public --add-interface=eth0 --permanent Assign interface permanently
sudo firewall-cmd --zone=public --remove-interface=eth0 Remove interface from zone

Services

Allow or block named services defined in /usr/lib/firewalld/services/.

Command Description
firewall-cmd --get-services List all predefined services
firewall-cmd --zone=public --list-services List services allowed in zone
firewall-cmd --info-service=http Show ports and protocols for a service
sudo firewall-cmd --zone=public --add-service=http --permanent Allow service permanently
sudo firewall-cmd --zone=public --remove-service=http --permanent Remove service

Ports

Open or close individual ports when no predefined service exists.

Command Description
firewall-cmd --zone=public --list-ports List open ports in zone
sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent Open a TCP port
sudo firewall-cmd --zone=public --add-port=4000-4500/tcp --permanent Open a port range
sudo firewall-cmd --zone=public --remove-port=8080/tcp --permanent Close a port

Rich Rules

Rich rules allow fine-grained control over source, destination, port, and action.

Command Description
firewall-cmd --zone=public --list-rich-rules List rich rules in zone
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" accept' --permanent Allow traffic from subnet
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="203.0.113.10" reject' --permanent Reject traffic from IP
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="22" protocol="tcp" accept' --permanent Allow SSH from subnet
sudo firewall-cmd --zone=public --remove-rich-rule='rule family="ipv4" source address="203.0.113.10" reject' --permanent Remove a rich rule

Masquerade (NAT)

Masquerading lets machines on a private network reach the internet through the firewall host.

Command Description
firewall-cmd --zone=public --query-masquerade Check if masquerading is enabled
sudo firewall-cmd --zone=public --add-masquerade --permanent Enable masquerading
sudo firewall-cmd --zone=public --remove-masquerade --permanent Disable masquerading

Logging

Control which denied packets are logged to help with debugging.

Command Description
firewall-cmd --get-log-denied Show current log-denied setting
sudo firewall-cmd --set-log-denied=all Log all denied packets
sudo firewall-cmd --set-log-denied=unicast Log denied unicast only
sudo firewall-cmd --set-log-denied=off Disable denied-packet logging
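
Denied packets go to the kernel log. Assuming a systemd-based system with journald, one way to watch them is:

```shell
# Turn on logging of all denied packets
sudo firewall-cmd --set-log-denied=all

# Follow rejected/dropped packets in the kernel log
sudo journalctl -k --grep 'REJECT|DROP' -f
```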

Common Server Setup

Baseline rules for a web server using firewalld.

Command Description
sudo firewall-cmd --set-default-zone=public Set zone to public
sudo firewall-cmd --zone=public --add-service=ssh --permanent Keep SSH access
sudo firewall-cmd --zone=public --add-service=http --permanent Allow HTTP
sudo firewall-cmd --zone=public --add-service=https --permanent Allow HTTPS
sudo firewall-cmd --reload Activate all permanent rules
firewall-cmd --zone=public --list-all Verify active rules
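
The same baseline can be applied as one short script (run as root or via sudo); adjust the service list to your needs:

```shell
#!/bin/sh
# Minimal firewalld baseline for a web server
set -e

firewall-cmd --set-default-zone=public
for svc in ssh http https; do
    firewall-cmd --zone=public --add-service="$svc" --permanent
done

# Activate the permanent rules and show the result
firewall-cmd --reload
firewall-cmd --zone=public --list-all
```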

How to Set Up a Firewall with UFW on Ubuntu 24.04

A firewall is a tool for monitoring and filtering incoming and outgoing network traffic. It works by defining a set of security rules that determine whether to allow or block specific traffic.

Ubuntu ships with a firewall configuration tool called UFW (Uncomplicated Firewall). It is a user-friendly front-end for managing iptables firewall rules. Its main goal is to make managing a firewall easier or, as the name says, uncomplicated.

This article describes how to use the UFW tool to configure and manage a firewall on Ubuntu 24.04. A properly configured firewall is one of the most important aspects of overall system security.

Prerequisites

Only root or users with sudo privileges can manage the system firewall. The best practice is to run administrative tasks as a sudo user.

Install UFW

UFW is part of the standard Ubuntu 24.04 installation and should be present on your system. If for some reason it is not installed, you can install the package by typing:

Terminal
sudo apt update
sudo apt install ufw

Check UFW Status

UFW is disabled by default. You can check the status of the UFW service with the following command:

Terminal
sudo ufw status verbose

The output will show that the firewall status is inactive:

output
Status: inactive

If UFW is activated, the output will look something like the following:

output
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

UFW Default Policies

The default behavior of the UFW firewall is to block all incoming and forwarding traffic and allow all outbound traffic. This means that anyone trying to access your server will not be able to connect unless you specifically open the port. Applications and services running on your server will be able to access the outside world.

The default policies are defined in the /etc/default/ufw file and can be changed either by manually modifying the file or with the sudo ufw default <policy> <chain> command.

Firewall policies are the foundation for building more complex and user-defined rules. Generally, the initial UFW default policies are a good starting point.
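
For example, the stock policy (deny incoming, allow outgoing) can be restated explicitly with the ufw default command:

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
```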

Application Profiles

An application profile is a text file in INI format that describes the service and contains firewall rules for the service. Application profiles are created in the /etc/ufw/applications.d directory during the installation of the package.

You can list all application profiles available on your server by typing:

Terminal
sudo ufw app list

Depending on the packages installed on your system, the output will look similar to the following:

output
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH

To find more information about a specific profile and included rules, use the following command:

Terminal
sudo ufw app info 'Nginx Full'

The output shows that the ‘Nginx Full’ profile opens ports 80 and 443.

output
Profile: Nginx Full
Title: Web Server (Nginx, HTTP + HTTPS)
Description: Small, but very powerful and efficient web server
Ports:
80,443/tcp

You can also create custom profiles for your applications.

Enabling UFW

If you are connecting to your Ubuntu server from a remote location, before enabling the UFW firewall you must explicitly allow incoming SSH connections. Otherwise, you will no longer be able to connect to the machine.

To configure your UFW firewall to allow incoming SSH connections, type the following command:

Terminal
sudo ufw allow ssh
output
Rules updated
Rules updated (v6)

If SSH is running on a non-standard port, you need to open that port.

For example, if your ssh daemon listens on port 7722, enter the following command to allow connections on that port:

Terminal
sudo ufw allow 7722/tcp

Now that the firewall is configured to allow incoming SSH connections, you can enable it by typing:

Terminal
sudo ufw enable
output
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup

You will be warned that enabling the firewall may disrupt existing ssh connections; type y and hit Enter.

Opening Ports

Depending on the applications that run on the system, you may also need to open other ports. The general syntax to open a port is as follows:

Terminal
ufw allow port_number/protocol

Below are a few ways to allow HTTP connections.

The first option is to use the service name. UFW checks the /etc/services file for the port and protocol of the specified service:

Terminal
sudo ufw allow http

You can also specify the port number and the protocol:

Terminal
sudo ufw allow 80/tcp

When no protocol is given, UFW creates rules for both tcp and udp.
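
For instance, omitting the protocol opens the port for both:

```shell
# Creates rules for 80/tcp and 80/udp
sudo ufw allow 80
```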

Another option is to use the application profile; in this case, ‘Nginx HTTP’:

Terminal
sudo ufw allow 'Nginx HTTP'

UFW also supports another syntax for specifying the protocol using the proto keyword:

Terminal
sudo ufw allow proto tcp to any port 80

Port Ranges

UFW also allows you to open port ranges. The start and the end ports are separated by a colon (:), and you must specify the protocol, either tcp or udp.

For example, if you want to allow ports from 7100 to 7200 on both tcp and udp, run the following commands:

Terminal
sudo ufw allow 7100:7200/tcp
sudo ufw allow 7100:7200/udp

Specific IP Address and Port

To allow connections on all ports from a given source IP, use the from keyword followed by the source address.

Here is an example of allowlisting an IP address:

Terminal
sudo ufw allow from 64.63.62.61

If you want to allow the given IP address access only to a specific port, use the to any port keyword followed by the port number.

For example, to allow access on port 22 from a machine with IP address 64.63.62.61, enter:

Terminal
sudo ufw allow from 64.63.62.61 to any port 22

Subnets

The syntax for allowing connections to a subnet of IP addresses is the same as when using a single IP address. The only difference is that you need to specify the netmask.

Below is an example showing how to allow access for IP addresses ranging from 192.168.1.1 to 192.168.1.254 to port 3306 (MySQL):

Terminal
sudo ufw allow from 192.168.1.0/24 to any port 3306

Specific Network Interface

To allow connections on a particular network interface, use the in on keyword followed by the name of the network interface:

Terminal
sudo ufw allow in on eth2 to any port 3306

Denying Connections

The default policy for all incoming connections is set to deny, and if you have not changed it, UFW will block all incoming connections unless you specifically open the connection.

Writing deny rules is the same as writing allow rules; you only need to use the deny keyword instead of allow.

Say you opened ports 80 and 443, and your server is under attack from the 23.24.25.0/24 network. To deny all connections from that network, run:

Terminal
sudo ufw deny from 23.24.25.0/24

To deny access only to ports 80 and 443 from 23.24.25.0/24, use the following command:

Terminal
sudo ufw deny proto tcp from 23.24.25.0/24 to any port 80,443

Deleting UFW Rules

There are two ways to delete UFW rules: by rule number, and by specifying the actual rule.

Deleting rules by rule number is easier, especially when you are new to UFW. To delete a rule by number, first find the number of the rule you want to delete. To get a list of numbered rules, use the ufw status numbered command:

Terminal
sudo ufw status numbered
output
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere
[ 2] 80/tcp                     ALLOW IN    Anywhere
[ 3] 8080/tcp                   ALLOW IN    Anywhere

To delete rule number 3, the one that allows connections to port 8080, enter:

Terminal
sudo ufw delete 3

The second method is to delete a rule by specifying the actual rule. For example, if you added a rule to open port 8069 you can delete it with:

Terminal
sudo ufw delete allow 8069

Disabling UFW

If you want to stop UFW and deactivate all the rules, use:

Terminal
sudo ufw disable

To re-enable UFW and activate all rules, type:

Terminal
sudo ufw enable

Resetting UFW

Resetting UFW will disable it and delete all active rules. This is helpful if you want to revert all your changes and start fresh.

To reset UFW, run the following command:

Terminal
sudo ufw reset

IP Masquerading

IP Masquerading is a variant of NAT (network address translation) in the Linux kernel that translates network traffic by rewriting the source and destination IP addresses and ports. With IP Masquerading, you can allow one or more machines in a private network to communicate with the internet using one Linux machine that acts as a gateway.

Configuring IP Masquerading with UFW involves several steps.

First, you need to enable IP forwarding. To do that, open the /etc/ufw/sysctl.conf file:

Terminal
sudo nano /etc/ufw/sysctl.conf

Find and uncomment the line which reads net.ipv4.ip_forward=1:

/etc/ufw/sysctl.conf
net.ipv4.ip_forward=1

Next, you need to configure UFW to allow forwarded packets. Open the UFW configuration file:

Terminal
sudo nano /etc/default/ufw

Locate the DEFAULT_FORWARD_POLICY key, and change the value from DROP to ACCEPT:

/etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"

Now you need to set the default policy for the POSTROUTING chain in the nat table and the masquerade rule. To do so, open the /etc/ufw/before.rules file:

Terminal
sudo nano /etc/ufw/before.rules

Append the following lines:

/etc/ufw/before.rules
#NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]

# Forward traffic through eth0 - Change to public network interface
-A POSTROUTING -s 10.8.0.0/16 -o eth0 -j MASQUERADE

# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT

Replace eth0 in the -A POSTROUTING line with the name of your public network interface.

When you are done, save and close the file. Finally, reload the UFW rules:

Terminal
sudo ufw disable
sudo ufw enable
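
After the reload, you can verify that forwarding and the NAT rule are in place; the interface and subnet below match the earlier example:

```shell
# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# The POSTROUTING chain should contain the MASQUERADE rule for 10.8.0.0/16
sudo iptables -t nat -L POSTROUTING -n -v
```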

Troubleshooting

UFW blocks SSH after enabling it
If you enabled UFW without first allowing SSH, you will lose access to a remote server. To recover, you need console access (via your hosting provider’s web console) and run sudo ufw allow ssh followed by sudo ufw enable. Always allow SSH before enabling UFW on a remote machine.

Rules not active after ufw reset
After a reset, UFW is disabled and all rules are cleared. You need to re-add your rules and run sudo ufw enable to bring the firewall back up.

IPv6 rules not created
If ufw status shows rules only for IPv4, check that IPV6=yes is set in /etc/default/ufw, then run sudo ufw disable && sudo ufw enable to reload the configuration.

Application profile not found
If sudo ufw allow 'Nginx Full' returns an error, the profile may not be installed yet. Install the relevant package first (for example, nginx), then retry.

Conclusion

We have shown you how to install and configure a UFW firewall on your Ubuntu 24.04 server. For a full reference of UFW commands and options, see the UFW man page.

Earlier — Linux Tips, Tricks and Tutorials on Linuxize

apt vs apt-get: What Is the Difference?

Both apt and apt-get manage packages on Debian-based systems. They talk to the same package database, resolve the same dependencies, and pull from the same repositories. Yet they exist as separate tools, and choosing the wrong one in the wrong context can cause minor headaches.

This guide explains how apt and apt-get differ, when to use each one, and how their commands map to each other.

Why Two Tools Exist

apt-get has been the standard Debian package manager since 1998. It is reliable and well-documented, but its interface was designed for an era when terminal usability was an afterthought. Related tasks like searching and showing package details live in a separate tool called apt-cache, so users constantly switch between the two.

The apt command was introduced in Debian 8 and Ubuntu 14.04 (APT version 1.0, released in 2014) as a user-friendly front end. It combines the most common apt-get and apt-cache operations into a single command, adds a progress bar, and uses colored output by default.

Both tools are part of the same apt package and share the same underlying libraries. Installing one always installs the other.

User-Facing Differences

When you run apt install in a terminal, you will notice a few things that apt-get does not do:

  • A progress bar at the bottom of the screen shows overall download and install progress.
  • Newly installed packages are highlighted in the output.
  • The number of upgradable packages is shown after apt update finishes.
  • Output is formatted with color when connected to a terminal.

These features make apt more pleasant for interactive use. They also make its output less predictable for scripts, since the formatting can change between APT versions without warning. The apt man page states this explicitly: the command line interface of apt “may change between versions” and “should not be used in scripts.”

apt-get, on the other hand, guarantees a stable output format. Its behavior and exit codes are consistent across versions, which is why automation tools like Ansible, Docker, and CI pipelines use apt-get rather than apt.

Command Comparison

The table below maps the most common apt commands to their apt-get or apt-cache equivalents:

Task apt apt-get / apt-cache
Update package index apt update apt-get update
Upgrade all packages apt upgrade apt-get upgrade
Full upgrade (may remove packages) apt full-upgrade apt-get dist-upgrade
Install a package apt install pkg apt-get install pkg
Remove a package apt remove pkg apt-get remove pkg
Remove with config files apt purge pkg apt-get purge pkg
Remove unused dependencies apt autoremove apt-get autoremove
Search for a package apt search keyword apt-cache search keyword
Show package details apt show pkg apt-cache show pkg
List installed packages apt list --installed dpkg --list
List upgradable packages apt list --upgradable (no direct equivalent)
Edit sources list apt edit-sources (manual file editing)

As you can see, apt merges apt-get and apt-cache into one tool and adds a few convenience commands like apt list and apt edit-sources that have no direct apt-get equivalent.

When to Use apt

Use apt for everyday interactive work in the terminal. If you are installing a package, checking for updates, or searching the repository by hand, apt is the better choice. The progress bar and cleaner output make it easier to follow what is happening.

For example, to update your package index and upgrade all packages:

Terminal
sudo apt update && sudo apt upgrade

To search for a package and then install it:

Terminal
apt search nginx
Terminal
sudo apt install nginx

When to Use apt-get

Use apt-get in shell scripts, Dockerfiles, CI pipelines, and any automated workflow. The stable output format means your scripts will not break when the system upgrades to a newer APT version.

A typical Dockerfile uses apt-get with the -y flag to skip confirmation prompts:

dockerfile
RUN apt-get update && apt-get install -y \
 curl \
 git \
 && rm -rf /var/lib/apt/lists/*

The same applies to Bash scripts and configuration management tools. If the command runs without a human watching the output, use apt-get.
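
A defensive pattern for unattended runs (a sketch; the package names are placeholders) combines -y with a noninteractive frontend:

```shell
#!/bin/sh
# Unattended package installation, suitable for CI or provisioning scripts
set -e

# Suppress interactive configuration prompts from packages
export DEBIAN_FRONTEND=noninteractive

sudo apt-get update
sudo apt-get install -y --no-install-recommends curl git

# Trim the local package cache afterwards
sudo apt-get clean
```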

Advanced Operations

The commands below work with both apt and apt-get, but they are better documented and more established under apt-get. In scripts and Dockerfiles, prefer the apt-get form for consistency and stability.

  • apt-get install --no-install-recommends pkg — Install a package without pulling in recommended (but not required) dependencies. This is widely used in Docker images to keep the image size small.
  • apt-get install --allow-downgrades pkg — Allow installing an older version of a package.
  • apt-get build-dep pkg — Install all build dependencies needed to compile a package from source.
  • apt-get source pkg — Download the source package.
  • apt-get download pkg — Download the .deb file without installing it.
  • apt-get changelog pkg — Display the package changelog.
  • apt-get clean / apt-get autoclean — Clear the local package cache.

FAQ

Is apt-get deprecated? No. apt-get is actively maintained and receives updates alongside apt. The APT developers have stated that apt-get will not be removed. It remains the recommended tool for scripts and automation.

Can I replace all apt-get commands with apt? For interactive terminal use, yes. For scripts, no. The apt output format is not guaranteed to remain stable, and its interface may change between versions. Stick with apt-get in any automated workflow.

Is there a performance difference between apt and apt-get? No. Both tools use the same libraries and resolve dependencies the same way. The only difference is the user interface. Install and download speeds are identical.

What about apt-cache? Is it still needed? For interactive use, apt search and apt show replace the most common apt-cache operations. For scripts or when you need advanced query options like apt-cache policy or apt-cache depends, you should still use apt-cache directly.

Conclusion

Use apt when you are typing commands at a terminal, and apt-get when you are writing scripts or Dockerfiles. For a complete reference of apt subcommands, see the apt Command in Linux guide and the APT cheatsheet.

git clone: Clone a Repository

git clone copies an existing Git repository into a new directory on your local machine. It sets up tracking branches for each remote branch and creates a remote called origin pointing back to the source.

This guide explains how to use git clone with the most common options and protocols.

Syntax

The basic syntax of the git clone command is:

txt
git clone [OPTIONS] REPOSITORY [DIRECTORY]

REPOSITORY is the URL or path of the repository to clone. DIRECTORY is optional; if omitted, Git uses the repository name as the directory name.

Clone a Remote Repository

The most common way to clone a repository is over HTTPS. For example, to clone a GitHub repository:

Terminal
git clone https://github.com/user/repo.git

This creates a repo directory in the current working directory, initializes a .git directory inside it, and checks out the default branch.

You can also clone over SSH if you have an SSH key configured:

Terminal
git clone git@github.com:user/repo.git

SSH is generally preferred for repositories you will push to, as it does not require entering credentials each time.

Clone into a Specific Directory

By default, git clone creates a directory named after the repository. To clone into a different directory, pass the target path as the second argument:

Terminal
git clone https://github.com/user/repo.git my-project

To clone into the current directory, use . as the target:

Terminal
git clone https://github.com/user/repo.git .
Warning
Cloning into . requires the current directory to be empty.

Clone a Specific Branch

By default, git clone checks out the default branch (usually main or master). To clone and check out a different branch, use the -b option:

Terminal
git clone -b develop https://github.com/user/repo.git

The full repository history is still downloaded; only the checked-out branch differs. To limit the download to a single branch, combine -b with --single-branch:

Terminal
git clone -b develop --single-branch https://github.com/user/repo.git

Shallow Clone

A shallow clone downloads only a limited number of recent commits rather than the full history. This is useful for speeding up the clone of large repositories when you do not need the full commit history.

To clone with only the latest commit:

Terminal
git clone --depth 1 https://github.com/user/repo.git

To include the last 10 commits:

Terminal
git clone --depth 10 https://github.com/user/repo.git

Shallow clones are commonly used in CI/CD pipelines to reduce download time. To convert a shallow clone to a full clone later, run:

Terminal
git fetch --unshallow
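
If you are unsure whether a working copy is shallow, recent Git versions (2.15 and later) can tell you directly:

```shell
# Prints "true" in a shallow clone, "false" otherwise
git rev-parse --is-shallow-repository
```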

Clone a Local Repository

git clone also works with local paths. This is useful for creating an isolated copy for testing:

Terminal
git clone /path/to/local/repo new-copy

Troubleshooting

Destination directory is not empty
git clone URL . works only when the current directory is empty. If files already exist, clone into a new directory or move the existing files out of the way first.

Authentication failed over HTTPS
Most Git hosting services no longer accept account passwords for Git operations over HTTPS. Use a personal access token or configure a credential manager so Git can authenticate securely.

Permission denied (publickey)
This error usually means your SSH key is missing, not loaded into your SSH agent, or not added to your account on the Git hosting service. Verify your SSH setup before retrying the clone.

Host key verification failed
SSH could not verify the identity of the remote server. Confirm you are connecting to the correct host, then accept or update the host key in ~/.ssh/known_hosts.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git clone URL Clone a repository via HTTPS or SSH
git clone URL dir Clone into a specific directory
git clone URL . Clone into the current (empty) directory
git clone -b branch URL Clone and check out a specific branch
git clone -b branch --single-branch URL Clone only a specific branch
git clone --depth 1 URL Shallow clone: latest commit only
git clone --depth N URL Shallow clone: last N commits
git clone /path/to/repo Clone a local repository

FAQ

What is the difference between HTTPS and SSH cloning?
HTTPS cloning works out of the box but prompts for credentials unless you use a credential manager. SSH cloning requires a key pair to be configured but does not prompt for a password on each push or pull.

Does git clone download all branches?
Yes. All remote branches are fetched, but only the default branch is checked out locally. Use git branch -r to list all remote branches, or git checkout branch-name to switch to one.

How do I clone a private repository?
For HTTPS, provide your username and a personal access token when prompted. For SSH, ensure your public key is added to your account on the hosting service.

Can I clone a specific tag instead of a branch?
Yes. Use -b with the tag name: git clone -b v1.0.0 https://github.com/user/repo.git. Git checks out that tag in a detached HEAD state, so create a branch if you plan to make commits.
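
Continuing that example, to keep working after cloning a tag you would branch off the detached HEAD; the tag and branch names below are illustrative, and git switch requires Git 2.23 or later:

```shell
git clone -b v1.0.0 https://github.com/user/repo.git
cd repo

# HEAD is detached at the tag; create a branch before committing
git switch -c hotfix-from-v1.0.0
```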

Conclusion

git clone is the starting point for working with any existing Git repository. After cloning, you can inspect the commit history with git log, review changes with git diff, and configure your username and email if you have not done so already.

df Cheatsheet

Basic Usage

Common ways to check disk space.

Command Description
df Show disk usage for all mounted filesystems
df /path Show filesystem usage for the given path
df -h Human-readable sizes (K, M, G — powers of 1024)
df -H Human-readable sizes (K, M, G — powers of 1000)
df -a Include all filesystems (including pseudo and duplicate)
df --total Add a grand total row at the bottom

Size Formats

Control how sizes are displayed.

Option Description
-h Auto-scale using powers of 1024 (K, M, G)
-H Auto-scale using powers of 1000 (SI units)
-k Display sizes in 1K blocks (default on most systems)
-m Display sizes in 1M blocks
-BG Display sizes in 1G blocks
-B SIZE Use SIZE-byte blocks (e.g., -BM, -BG, -B512)

Output Columns

What each column in the default output means.

Column Description
Filesystem Device or remote path of the filesystem
1K-blocks / Size Total size of the filesystem
Used Space currently in use
Available Space available for use
Use% Percentage of space used
Mounted on Directory where the filesystem is mounted

Filesystem Type

Show or filter by filesystem type.

Option Description
-T Show filesystem type column
-t TYPE Only show filesystems of the given type
-x TYPE Exclude filesystems of the given type
-l Show only local filesystems (exclude network mounts)

Examples:

Command Description
df -t ext4 Show only ext4 filesystems
df -x tmpfs Exclude tmpfs from output
df -Th Show type column with human-readable sizes

Inode Usage

Show inode counts instead of block usage.

Command Description
df -i Show inode usage for all filesystems
df -ih Show inode usage with human-readable counts
df -i /path Show inode usage for a specific filesystem
Column Description
Inodes Total number of inodes
IUsed Inodes in use
IFree Inodes available
IUse% Percentage of inodes used

Custom Output Fields

Select specific fields with --output.

Field Description
source Filesystem device
fstype Filesystem type
size Total size
used Space used
avail Space available
pcent Percentage used
iused Inodes used
iavail Inodes available
ipcent Inode percentage used
target Mount point

Example: df --output=source,size,used,avail,pcent,target -h
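
--output makes df easy to script against. Below is a sketch of a simple usage alert; the 90% threshold is arbitrary:

```shell
#!/bin/sh
# Warn about filesystems above a usage threshold.
# df --output=pcent,target prints lines like " 42% /"
threshold=90

df --output=pcent,target | tail -n +2 | while read -r pcent target; do
    usage=${pcent%\%}                              # strip the trailing %
    case $usage in ''|*[!0-9]*) continue ;; esac   # skip non-numeric entries
    if [ "$usage" -gt "$threshold" ]; then
        echo "WARNING: $target is at ${pcent} used"
    fi
done
```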

Related Guides

Use these references for deeper disk usage workflows.

Guide Description
How to Check Disk Space in Linux Using the df Command Full df guide with practical examples
Du Command in Linux Check disk usage for directories and files

jobs Command in Linux: List and Manage Background Jobs

The jobs command is a Bash built-in that lists all background and suspended processes associated with the current shell session. It shows the job ID, state, and the command that started each job.

This article explains how to use the jobs command, what each option does, and how job specifications work for targeting individual jobs.

Syntax

The general syntax for the jobs command is:

txt
jobs [OPTIONS] [JOB_SPEC...]

When called without arguments, jobs lists all active jobs in the current shell.

Listing Jobs

To see all background and suspended jobs, run jobs without any arguments:

Terminal
jobs

If you have two processes (one running and one stopped), the output will look similar to this:

output
[1]- Stopped vim notes.txt
[2]+ Running ping google.com &

Each line shows:

  • The job number in brackets ([1], [2])
  • A + or - sign indicating the current and previous job (more on this below)
  • The job state (Running, Stopped, Done)
  • The command that started the job

To include the process ID (PID) in the output, use the -l option:

Terminal
jobs -l
output
[1]- 14205 Stopped vim notes.txt
[2]+ 14230 Running ping google.com &

The PID is useful when you need to send a signal to a process with the kill command.

Job States

Jobs reported by the jobs command can be in one of these states:

  • Running — the process is actively executing in the background.
  • Stopped — the process has been suspended, typically by pressing Ctrl+Z.
  • Done — the process has finished. This state appears once and is then cleared from the list.
  • Terminated — the process was killed by a signal.

When a background job finishes, the shell displays its Done status the next time you press Enter or run a command.

Job Specifications

Job specifications (job specs) let you target a specific job when using commands like fg, bg, kill, or disown. They always start with a percent sign (%):

  • %1, %2, … — refer to a job by its number.
  • %% or %+ — the current job (the most recently backgrounded or suspended job, marked with +).
  • %- — the previous job (marked with -).
  • %string — the job whose command starts with string. For example, %ping matches a job started with ping google.com.
  • %?string — the job whose command contains string anywhere. For example, %?google also matches ping google.com.

Here is a practical example. Start two background jobs:

Terminal
sleep 300 &
ping -c 100 google.com > /dev/null &

Now list them:

Terminal
jobs
output
[1]- Running sleep 300 &
[2]+ Running ping -c 100 google.com > /dev/null &

You can bring the sleep job to the foreground by its number:

Terminal
fg %1

Or by the command prefix:

Terminal
fg %sleep

Both are equivalent and bring job [1] to the foreground.

Options

The jobs command accepts the following options:

  • -l — List jobs with their process IDs in addition to the normal output.
  • -p — Print only the PID of each job’s process group leader. This is useful for scripting.
  • -r — List only running jobs.
  • -s — List only stopped (suspended) jobs.
  • -n — Show only jobs whose status has changed since the last notification.

To show only running jobs:

Terminal
jobs -r

To get just the PIDs of all jobs:

Terminal
jobs -p
output
14205
14230

Using jobs in Interactive Shells

The jobs command is primarily useful in an interactive shell where job control is enabled. In a regular shell script, job control is usually off, so jobs may return no output even when background processes exist.

If you need to manage background work in a script, track process IDs with $! and use wait with the PID instead of a job spec:

sh
# Start a background task
long_task &
task_pid=$!

# Poll until it finishes
while kill -0 "$task_pid" 2>/dev/null; do
  echo "Task is still running..."
  sleep 5
done

echo "Task complete."
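wait also returns the exit status of the job it waited for, so a script can detect failures rather than just completion. A minimal sketch, where false stands in for any task that fails:

```shell
#!/usr/bin/env sh
# wait returns the waited-for job's exit status.
false &            # stand-in for a background task that fails
task_pid=$!
wait "$task_pid"
status=$?
echo "task exited with status $status"   # prints: task exited with status 1
```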

To wait for several background tasks, store their PIDs and pass them to wait:

sh
task_a &
pid_a=$!

task_b &
pid_b=$!

task_c &
pid_c=$!

wait "$pid_a" "$pid_b" "$pid_c"
echo "All tasks complete."
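If you want to react to each completion as it happens rather than wait for all tasks at once, Bash 4.3 and later offers wait -n, which blocks until any one background job finishes. A sketch, assuming the Bash-only -n option is available (note that a failing job also ends the loop, since wait -n returns its non-zero status):

```shell
#!/usr/bin/env bash
# Process completions one at a time (requires Bash 4.3+).
sleep 0.2 &
sleep 0.4 &

# wait -n returns non-zero once no children remain.
while wait -n 2>/dev/null; do
  echo "a task finished"
done
echo "all tasks done"
```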

In an interactive shell, you can still wait for a job by its job spec:

Terminal
wait %1

Quick Reference

Command Description
jobs List all background and stopped jobs
jobs -l List jobs with PIDs
jobs -p Print only PIDs
jobs -r List only running jobs
jobs -s List only stopped jobs
fg %1 Bring job 1 to the foreground
bg %1 Resume job 1 in the background
kill %1 Terminate job 1
disown %1 Remove job 1 from the job table
wait Wait for all background jobs to finish

FAQ

What is the difference between jobs and ps?
jobs shows only the processes started from the current shell session. The ps command lists all processes running on the system, regardless of which shell started them.

Why does jobs show nothing even though processes are running?
jobs only tracks processes started from the current shell. If you started a process in a different terminal or via a system service, it will not appear in jobs. Use ps aux to find it instead.

How do I stop a background job?
Use kill %N where N is the job number. For example, kill %1 sends SIGTERM to job 1. If the job does not stop, use kill -9 %1 to force-terminate it.

What do the + and - signs mean in jobs output?
The + marks the current job, the one that fg and bg will act on by default. The - marks the previous job. When the current job finishes, the previous job becomes the new current job.

How do I keep a background job running after closing the terminal?
Use nohup before the command, or disown after backgrounding it. For long-running interactive sessions, consider screen or tmux.

Conclusion

The jobs command lists all background and suspended processes in the current shell session. Use it together with fg, bg, and kill to control jobs interactively. For a broader guide on running and managing background processes, see How to Run Linux Commands in the Background.

top Cheatsheet

Startup Options

Common command-line flags for launching top.

Command Description
top Start top with default settings
top -d 5 Set refresh interval to 5 seconds
top -n 3 Exit after 3 screen updates
top -u username Show only processes owned by a user
top -p 1234,5678 Monitor specific PIDs
top -b Batch mode (non-interactive, for scripts and logging)
top -b -n 1 Print a single snapshot and exit
top -H Show individual threads instead of processes

Interactive Navigation

Key commands available while top is running.

Key Description
q Quit top
h or ? Show help screen
Space Refresh the display immediately
d or s Change the refresh interval
k Kill a process (prompts for PID and signal)
r Renice a process (change priority)
u Filter by user
n or # Set the number of displayed processes
W Save current settings to ~/.toprc

Sorting

Change the sort column interactively.

Key Description
P Sort by CPU usage (default)
M Sort by memory usage
N Sort by PID
T Sort by cumulative CPU time
R Reverse the current sort order
< / > Move the sort column left / right
F or O Open the field management screen to pick a sort column

Display Toggles

Show or hide parts of the summary and task list.

Key Description
l Toggle the load average line
t Cycle through CPU summary modes (bar, text, off)
m Cycle through memory summary modes (bar, text, off)
1 Toggle per-CPU breakdown (one line per core)
H Toggle thread view (show individual threads)
c Toggle between command name and full command line
V Toggle forest (tree) view
x Highlight the current sort column
z Toggle color output

Filtering and Searching

Narrow the process list while top is running.

Key Description
u Show only processes for a specific user
U Show processes by effective or real user
o / O Add a filter (e.g., COMMAND=nginx or %CPU>5.0)
Ctrl+O Show active filters
= Clear all filters for the current window
L Search for a string in the display
& Find next occurrence of the search string

Summary Area Fields

Key metrics in the header area.

Field Description
load average System load over 1, 5, and 15 minutes
us CPU time in user space
sy CPU time in kernel space
ni CPU time for niced (reprioritized) processes
id CPU idle time
wa CPU time waiting for I/O
hi / si Hardware / software interrupt time
st CPU time stolen by hypervisor (VMs)
MiB Mem Total, free, used, and buffer/cache memory
MiB Swap Total, free, used swap, and available memory

Batch Mode and Logging

Use top in scripts or for capturing snapshots.

Command Description
top -b -n 1 Print one snapshot to stdout
top -b -n 5 -d 2 > top.log Log 5 snapshots at 2-second intervals
top -b -n 1 -o %MEM Single snapshot sorted by memory
top -b -n 1 -u www-data Snapshot of one user’s processes
top -b -n 1 -p 1234 Snapshot of a specific PID

Related Guides

Full guides for process monitoring and management.

Guide Description
top Command in Linux Full top guide with examples
ps Command in Linux List and filter processes
Kill Command in Linux Send signals to processes by PID
Linux Uptime Command Check system uptime and load average
Check Memory in Linux Inspect RAM and swap usage

Docker Compose: Define and Run Multi-Container Apps

Docker Compose is a tool for defining and running multi-container applications. Instead of starting each container separately with docker run, you describe all of your services, networks, and volumes in a single YAML file and bring the entire environment up with one command.

This guide explains how Docker Compose V2 works, walks through a practical example with a SvelteKit development server and PostgreSQL database, and covers the most commonly used Compose directives and commands.

Quick Reference

For a printable quick reference, see the Docker cheatsheet.

Command Description
docker compose up Start all services (foreground)
docker compose up -d Start all services in detached mode
docker compose down Stop and remove containers and networks
docker compose down -v Also remove named volumes
docker compose ps List running services
docker compose logs SERVICE View logs for a service
docker compose logs -f SERVICE Follow logs in real time
docker compose exec SERVICE sh Open a shell in a running container
docker compose stop Stop containers without removing them
docker compose build Build or rebuild images
docker compose pull Pull the latest images

Prerequisites

The examples in this guide require Docker with the Compose plugin installed. Docker Desktop includes Compose by default. On Linux, install the docker-compose-plugin package alongside Docker Engine.

To verify Compose is available, run:

Terminal
docker compose version
output
Docker Compose version v2.x.x

Compose V2 runs as docker compose (with a space), not docker-compose. The old V1 binary is end-of-life and not covered in this guide.

The Compose File

Modern Docker Compose prefers a file named compose.yaml, though docker-compose.yml is still supported for compatibility. In this guide, we will use docker-compose.yml because many readers still recognize that name first.

A Compose file describes your application’s environment using three top-level keys:

  • services — defines each container: its image, ports, volumes, environment variables, and dependencies
  • volumes — declares named volumes that persist data across container restarts
  • networks — defines custom networks for service communication (Compose creates a default network automatically)

Compose V2 does not require a version: field at the top of the file. If you see version: '3' in older guides, that is legacy syntax kept for backward compatibility, not something new Compose files need.

YAML uses indentation to define structure. Use spaces, not tabs.

Setting Up the Example

In this example, we will run a SvelteKit development server alongside a PostgreSQL 16 database using Docker Compose.

Start by creating a new SvelteKit project. The npm create command launches an interactive wizard — select “Skeleton project” when prompted, then choose your preferred options for TypeScript and linting:

Terminal
npm create svelte@latest myapp
cd myapp

If you already have a Node.js project, skip this step and use your project directory instead.

Next, create a docker-compose.yml file in the project root:

docker-compose.ymlyaml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    ports:
      - "5173:5173"
    environment:
      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    command: sh -c "npm install && npm run dev -- --host"

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres_data:
  node_modules:

Compose V2 automatically reads a .env file in the project directory and substitutes ${VAR} references in the compose file. Create a .env file to define the database password:

.envtxt
POSTGRES_PASSWORD=changeme
Warning
Do not commit .env to version control. Add it to your .gitignore file to keep credentials out of your repository.

The password above is for local development only. We will walk through each directive in the compose file in the next section.

Understanding the Compose File

Let us go through each directive in the compose file.

services

The services key is the core of any compose file. Each entry under services defines one container. The key name (app, db) becomes the service name, and Compose also uses it as the hostname for inter-service communication — so the app container can reach the database at db:5432.

image

The image directive tells Compose which Docker image to pull. Both node:20-alpine and postgres:16-alpine are pulled from Docker Hub if they are not already present on your machine. The alpine variant uses Alpine Linux as a base, which keeps image sizes small.

working_dir

The working_dir directive sets the working directory inside the container. All commands run in /app, which is where we mount the project files.

volumes

The app service uses two volume entries:

txt
- .:/app
- node_modules:/app/node_modules

The first entry is a bind mount. It maps the current directory on your host (.) to /app inside the container. Any file you edit on your host is immediately reflected inside the container, which is what enables hot-reload during development.

The second entry is a named volume. Without it, the bind mount would overwrite /app/node_modules with whatever is in your host directory — which may be empty or incompatible. The node_modules named volume tells Docker to keep a separate copy of node_modules inside the container so the bind mount does not interfere with it.

The db service uses a named volume for its data directory:

txt
- postgres_data:/var/lib/postgresql/data

This ensures that your database data persists across container restarts. When you run docker compose down, the postgres_data volume is kept. Only docker compose down -v removes it.

Named volumes must be declared at the top level under volumes:. Both postgres_data and node_modules are listed there with no additional configuration, which tells Docker to manage them using its default storage driver.

ports

The ports directive maps ports between the host machine and the container in "HOST:CONTAINER" format. The entry "5173:5173" exposes the Vite development server so you can open http://localhost:5173 in your browser.
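The host side of a mapping can also include a specific IP address in "HOST_IP:HOST_PORT:CONTAINER_PORT" format. For example, a hypothetical addition to the db service that exposes PostgreSQL to the host for debugging while keeping it off external interfaces:

```yaml
ports:
  - "127.0.0.1:5432:5432"   # reachable only from the host's loopback interface
```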

environment

The environment directive sets environment variables inside the container. The app service receives the DATABASE_URL connection string, which a SvelteKit application can read to connect to the database.

Notice the ${POSTGRES_PASSWORD} syntax. Compose reads this value from the .env file and substitutes it before starting the container. This keeps credentials out of the compose file itself.

depends_on

By default, depends_on only controls the order in which containers start — it does not wait for a service to be ready. Compose V2 supports a long-form syntax with a condition key that changes this behavior:

yaml
depends_on:
  db:
    condition: service_healthy

With condition: service_healthy, Compose waits until the db service passes its healthcheck before starting app. This prevents the Node.js process from trying to connect to PostgreSQL before the database is accepting connections.

command

The command directive overrides the default command defined in the image. Here it runs npm install to install dependencies, then starts the Vite dev server with --host to bind to 0.0.0.0 inside the container (required for Docker to forward the port to your host):

txt
sh -c "npm install && npm run dev -- --host"

Running npm install on every container start is slow but convenient for development — you do not need to install dependencies manually before running docker compose up. For production, use a multi-stage Dockerfile to pre-install dependencies and build the application.

healthcheck

The healthcheck directive defines a command Compose runs inside the container to determine whether the service is ready. The pg_isready utility checks whether PostgreSQL is accepting connections:

yaml
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5

Compose runs this check every 5 seconds. Once the service passes its first check, it is marked as healthy and any services using condition: service_healthy are allowed to start. If the check fails 5 consecutive times, the service is marked as unhealthy.
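Healthchecks also accept a start_period key, which gives slow-starting services a grace window during which failed checks do not count toward the retry limit. A sketch of the same check with a grace period added:

```yaml
healthcheck:
  test: ["CMD-SHELL", "pg_isready -U postgres"]
  interval: 5s
  timeout: 5s
  retries: 5
  start_period: 30s   # failures during the first 30s do not count toward retries
```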

Managing the Application

Run all commands from the project directory — the directory where your docker-compose.yml is located.

Starting Services

To start all services and stream their logs to your terminal, run:

Terminal
docker compose up

Compose pulls any missing images, creates the network, starts the db container, waits for it to pass its healthcheck, then starts the app container. Press Ctrl+C to stop.

To start in detached mode and run the services in the background:

Terminal
docker compose up -d

Viewing Status and Logs

To list running services and their current state:

Terminal
docker compose ps
output
NAME          IMAGE                COMMAND                  SERVICE   STATUS         PORTS
myapp-app-1   node:20-alpine       "docker-entrypoint.s…"   app       Up 2 minutes   0.0.0.0:5173->5173/tcp
myapp-db-1    postgres:16-alpine   "docker-entrypoint.s…"   db        Up 2 minutes   5432/tcp

To view the logs for a specific service:

Terminal
docker compose logs app

To follow the logs in real time (like tail -f):

Terminal
docker compose logs -f app

Press Ctrl+C to stop following.

Running Commands Inside a Container

To open an interactive shell inside the running app container:

Terminal
docker compose exec app sh

This is useful for running one-off commands, inspecting the file system, or debugging. Type exit to leave the shell.

Stopping and Removing Services

To stop running containers without removing them — preserving named volumes and the network:

Terminal
docker compose stop

To start them again after stopping:

Terminal
docker compose start

To stop containers and remove them along with the network:

Terminal
docker compose down

Named volumes (postgres_data, node_modules) are preserved. Your database data is safe.

To also remove all named volumes — this deletes your database data:

Terminal
docker compose down -v

Use this when you want a completely clean environment.

Common Directives Reference

The following directives are not used in the example above but are commonly needed in real projects.

restart

The restart directive controls what Compose does when a container exits:

yaml
services:
  app:
    restart: unless-stopped

The available policies are:

  • no — do not restart (default)
  • always — always restart, including on system reboot
  • unless-stopped — restart unless the container was explicitly stopped
  • on-failure — restart only when the container exits with a non-zero status

Use unless-stopped for long-running services in single-host deployments.

build

Instead of pulling a pre-built image, build tells Compose to build the image from a local Dockerfile:

yaml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile

context is the directory Compose sends to the Docker daemon as the build context. dockerfile specifies the Dockerfile path relative to context. If both image and build are specified, Compose builds the image and tags it with the name given in image.
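A sketch combining the two keys, where Compose builds from the local Dockerfile and tags the result with the given name (myorg/myapp:dev is an illustrative tag, not a real image):

```yaml
services:
  app:
    build: .
    image: myorg/myapp:dev   # name given to the image Compose builds
```

This is convenient when you want to push a locally built development image to a registry under a predictable name.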

env_file

The env_file directive injects environment variables into a container from a file at runtime:

yaml
services:
  app:
    env_file:
      - .env.local

This is different from Compose’s native .env substitution. The .env file at the project root is read by Compose itself to substitute ${VAR} references in the compose file. The env_file directive injects variables directly into the container’s environment. Both can be used together.

networks

Compose creates a default bridge network connecting all services automatically. You can define custom networks to isolate groups of services or control how containers communicate:

yaml
services:
  app:
    networks:
      - frontend
      - backend
  db:
    networks:
      - backend

networks:
  frontend:
  backend:

A service only communicates with other services on the same network. In this example, app can reach db because both share the backend network. A service attached only to frontend — not backend — has no route to db.

profiles

Profiles let you define optional services that only start when explicitly requested. This is useful for development tools, debuggers, or admin interfaces that you do not want running all the time:

yaml
services:
  app:
    image: node:20-alpine

  adminer:
    image: adminer
    profiles:
      - tools
    ports:
      - "8080:8080"

Running docker compose up starts only app. To also start adminer, pass the profile flag:

Terminal
docker compose --profile tools up

Troubleshooting

Port is already in use
Another process on your host is using the same port. Change the host-side port in the ports: mapping. For example, change "5173:5173" to "5174:5173" to expose the container on port 5174 instead.

Service cannot connect to the database
If you are not using condition: service_healthy, depends_on only ensures the db container starts before app — it does not wait for PostgreSQL to be ready to accept connections. Add a healthcheck to the db service and set condition: service_healthy in depends_on as shown in the example above.

node_modules is empty inside the container
The bind mount .:/app maps your host directory to /app, which overwrites /app/node_modules with whatever is on your host. If your host directory has no node_modules, the container sees none either. Fix this by adding a named volume entry node_modules:/app/node_modules as shown in the example. Docker preserves the named volume’s contents and the bind mount does not overwrite it.

FAQ

What is the difference between docker compose down and docker compose stop?
docker compose stop stops the running containers but leaves them on disk along with the network and volumes. You can restart them with docker compose start. docker compose down removes the containers and the network. Named volumes are kept unless you add the -v flag.

How do I view logs for a specific service?
Run docker compose logs SERVICE, replacing SERVICE with the service name defined in your compose file (for example, docker compose logs app). To follow logs in real time, add the -f flag: docker compose logs -f app.

How do I pass secrets without hardcoding them in the Compose file?
Create a .env file in the project directory with your credentials (for example, POSTGRES_PASSWORD=changeme) and reference them in the compose file using ${VAR} syntax. Compose reads the .env file automatically. Add .env to your .gitignore so credentials are never committed to version control.

Can I use Docker Compose in production?
Compose works well for single-host deployments — it is simpler to operate than Kubernetes when you only have one server. Use restart: unless-stopped to keep services running after reboots, and keep secrets in environment variables rather than hardcoded values. For multi-host deployments, look at Docker Swarm or Kubernetes.

What is the difference between a bind mount and a named volume?
A bind mount (./host-path:/container-path) maps a specific directory from your host into the container. Changes on either side are immediately visible on the other. A named volume (volume-name:/container-path) is managed entirely by Docker — it persists data independently of the host directory structure and is not tied to a specific path on your machine. Use bind mounts for source code (so edits take effect immediately) and named volumes for database data (so it persists reliably).

Conclusion

Docker Compose gives you a straightforward way to define and run multi-container development environments with a single YAML file. Once you are comfortable with the basics, the next step is writing custom Dockerfiles to build your own images instead of relying on generic ones.

diff Cheatsheet

Basic Usage

Compare files and save output.

Command Description
diff file1 file2 Compare two files (normal format)
diff -u file1 file2 Unified format (most common)
diff -c file1 file2 Context format
diff -y file1 file2 Side-by-side format
diff file1 file2 > file.patch Save diff to a patch file
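In scripts, diff's exit status is often more useful than its output: 0 means the files are identical, 1 means they differ, and 2 signals an error. A quick demonstration (f1 and f2 are throwaway files):

```shell
#!/usr/bin/env sh
# diff exit codes: 0 = identical, 1 = different, 2 = trouble.
cd "$(mktemp -d)"
printf 'a\n' > f1
printf 'b\n' > f2
diff -q f1 f1; echo "identical -> $?"
diff -q f1 f2 > /dev/null; echo "different -> $?"
```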

Output Symbols

What each symbol means in the diff output.

Symbol Format Meaning
< normal Line from file1 (removed)
> normal Line from file2 (added)
- unified/context Line removed
+ unified/context Line added
! context Line changed
@@ unified Hunk header with line numbers

Directory Comparison

Compare entire directory trees.

Command Description
diff -r dir1 dir2 Compare directories recursively
diff -rq dir1 dir2 Only report which files differ
diff -rN dir1 dir2 Treat absent files as empty
diff -rNu dir1 dir2 Unified recursive diff (for patches)

Filtering

Ignore specific types of differences.

Option Description
-i Ignore case differences
-w Ignore all whitespace
-b Ignore changes in whitespace amount
-B Ignore blank lines
--strip-trailing-cr Ignore Windows carriage returns

Common Options

Option Description
-u Unified format
-c Context format
-y Side-by-side format
-U n Show n lines of context (default: 3)
-C n Show n lines of context (context format)
-W n Set column width for side-by-side
-s Report when files are identical
--color Colorize output
-q Brief output (only whether files differ)

Patch Workflow

Create and apply patches with diff and patch.

Command Description
diff -u file1 file2 > file.patch Create a patch for a single file
diff -rNu dir1 dir2 > dir.patch Create a patch for a directory
patch file1 < file.patch Apply patch to file1
patch -p1 < dir.patch Apply directory patch
patch -R file1 < file.patch Reverse (undo) a patch

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
     print("Starting app")
+    print("Debug mode enabled")
     run()

Lines starting with + were added. Lines starting with - were removed. Lines with no prefix are context — unchanged lines shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix:

Terminal
git diff HEAD^ HEAD

HEAD refers to the latest commit. HEAD^ is one commit before it. You can go further back with HEAD~2, HEAD~3, and so on.

Because HEAD^ HEAD spans exactly one commit, this same command also answers the everyday question of what changed in the last commit.

Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py     | 3 +++
 config.yml | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.
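You can see the effect in a throwaway repository: an indentation-only change is invisible to -w but still registers as a change otherwise. A sketch (the file name f.txt and the commit identity are placeholders):

```shell
#!/usr/bin/env sh
# An indentation-only edit: invisible with -w, visible without it.
cd "$(mktemp -d)"
git init -q
printf 'line one\n  line two\n' > f.txt
git add f.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm init
printf 'line one\n    line two\n' > f.txt   # change indentation only
git diff -w -- f.txt       # prints nothing
git diff --stat -- f.txt   # still reports f.txt as changed
```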

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide.

export Cheatsheet

Basic Syntax

Core export command forms in Bash.

Command Description
export VAR=value Create and export a variable
export VAR Export an existing shell variable
export -p List all exported variables
export -n VAR Remove the export property
help export Show Bash help for export

Export Variables

Use export to pass variables to child processes.

Command Description
export APP_ENV=production Export one variable
PORT=8080; export PORT Define first, export later
export PATH="$HOME/bin:$PATH" Extend PATH
export EDITOR=nano Set a default editor
export LANG=en_US.UTF-8 Set a locale variable

Export for One Shell Session

These changes last only in the current shell session unless you save them in a startup file.

Command Description
export DEBUG=1 Enable a debug variable for the current session
export API_URL=https://api.example.com Set an application endpoint
export PATH="$HOME/.local/bin:$PATH" Add a per-user bin directory
echo "$DEBUG" Verify that the exported variable is set
bash -c 'echo "$DEBUG"' Confirm the child shell inherits the variable
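The last two rows above are worth trying side by side: only exported variables reach a child shell. A quick demonstration (PLAIN and SHARED are illustrative names):

```shell
#!/usr/bin/env sh
# Plain assignments stay in the current shell; exported ones are inherited.
PLAIN=one
export SHARED=two
sh -c 'echo "PLAIN=${PLAIN:-unset} SHARED=${SHARED:-unset}"'
# prints: PLAIN=unset SHARED=two
```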

Export Functions

Bash can export functions to child Bash shells.

Command Description
greet() { echo "Hello"; } Define a shell function
export -f greet Export a function
bash -c 'greet' Run the exported function in a child shell
export -nf greet Remove the export property from a function
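Function export is Bash-specific: the child process must also be Bash, or the function will not be visible. A minimal demonstration:

```shell
#!/usr/bin/env bash
# Exported functions are passed through the environment to child bash shells.
greet() { echo "Hello"; }
export -f greet
bash -c 'greet'    # prints: Hello
```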

Remove or Reset Exports

Use these commands when you no longer want a variable inherited.

Command Description
export -n VAR Keep the variable, but stop exporting it
unset VAR Remove the variable completely
unset -f greet Remove a shell function
export -n PATH Stop exporting PATH in the current shell
env | grep '^VAR=' Check whether the variable is present in the environment

Make Variables Persistent

Add export lines to shell startup files when you want them loaded automatically.

File Use
~/.bashrc Interactive non-login Bash shells
~/.bash_profile Login Bash shells
/etc/environment System-wide environment variables
/etc/profile System-wide shell startup logic
source ~/.bashrc Reload the current shell after editing
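
The pattern is the same for any of the files above: append an export line, then source the file or open a new shell. This sketch uses a temporary file in place of ~/.bashrc, and MY_TOOL_HOME is a made-up example variable:

```shell
rcfile=$(mktemp)                                 # stand-in for ~/.bashrc
echo 'export MY_TOOL_HOME="$HOME/.mytool"' >> "$rcfile"

. "$rcfile"                  # same effect as: source ~/.bashrc
echo "$MY_TOOL_HOME"         # expands to $HOME/.mytool
rm -f "$rcfile"
```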

Troubleshooting

Quick checks for common export problems.

Issue Check
A child process cannot see the variable Confirm you used export VAR or export VAR=value
The variable disappears in a new terminal Add the export line to ~/.bashrc or another startup file
A function is missing in a child shell Export it with export -f name and use Bash as the child shell
The shell still sees the variable after export -n export -n removes inheritance only; use unset to remove the variable completely
The variable is set but a command ignores it Check whether the program reads that variable or expects a config file instead

Related Guides

Use these guides for broader shell and environment-variable workflows.

Guide Description
export Command in Linux Full guide to export options and examples
How to Set and List Environment Variables in Linux Broader environment-variable overview
env cheatsheet Inspect and override environment variables
Bash cheatsheet Quick reference for Bash syntax and shell behavior

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date: Mon Mar 10 14:22:31 2026 +0100
Add user authentication module
commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date: Fri Mar 7 09:15:04 2026 +0100
Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date: Mon Mar 10 14:22:31 2026 +0100
Add user authentication module
src/auth.py | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
src/models.py | 12 +++++++
tests/test_auth.py | 45 +++++++++++++++++
3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
* a3f1d2e Add user authentication module
* 7b8c9d0 Fix login form validation
| * 3e4f5a6 WIP: dark mode toggle
|/
* c1d2e3f Update README with setup instructions

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.

How do I find a commit by its message?
Use git log --grep="search term". The pattern is matched against the full commit message, both the subject and the body. Add -i for a case-insensitive search, and -E if you need extended regular expressions. When you pass multiple --grep patterns, commits matching any one of them are shown; add --all-match to show only commits that match all of the patterns.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit .

git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, but you cannot switch branches with a dirty working tree. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for normal day-to-day use and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide .

env Cheatsheet

Basic Syntax

Core env command forms.

Command Description
env Print the current environment
env --help Show available options
env --version Show the installed env version
env -0 Print variables separated with NUL bytes

Inspect Environment Variables

Use env with filters to inspect specific variables.

Command Description
env | sort Print all variables in alphabetical order
env | grep '^PATH=' Show the PATH variable
env | grep '^HOME=' Show the HOME variable
env | grep '^LANG=' Show the LANG variable

Run Commands with Temporary Variables

Set variables for one command without changing the current shell session.

Command Description
VAR=value env command Run one command with a temporary variable
VAR1=dev VAR2=1 env command Set multiple temporary variables
env PATH=/custom/bin:$PATH command Override PATH for one command
env LANG=C command Run a command with the C locale
env HOME=/tmp bash Start a shell with a temporary home directory
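
The key property of this form is scope: the variable exists for that one command and its children, and the calling shell is untouched. A quick check (TMP_DEMO is an arbitrary example name):

```shell
env TMP_DEMO=1 sh -c 'echo "child sees TMP_DEMO=$TMP_DEMO"'   # child sees TMP_DEMO=1
echo "parent sees TMP_DEMO=${TMP_DEMO:-unset}"                # parent sees TMP_DEMO=unset
```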

Clean or Remove Variables

Start with a minimal environment or remove selected variables.

Command Description
env -i command Run a command with an empty environment
env -i PATH=/usr/bin:/bin bash --noprofile --norc Start a mostly clean shell with a minimal PATH
env -u VAR command Run a command without one variable
env -u http_proxy command Remove a proxy variable for one command
env -i VAR=value command Run a command with only the variables you set explicitly
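
Both -i and -u can be verified directly. Using the absolute path to env avoids a PATH lookup in the emptied environment; FOO is an arbitrary example variable:

```shell
# With -i, only the variables you pass explicitly survive.
env -i FOO=bar /usr/bin/env          # prints exactly: FOO=bar

# With -u, one variable is removed for a single command.
FOO=bar env -u FOO sh -c 'echo "FOO=${FOO:-unset}"'   # prints FOO=unset
```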

Common Variables

These variables are often inspected or overridden with env.

Variable Description
PATH Directories searched for commands
HOME Current user’s home directory
USER Current user name
SHELL Default login shell
LANG Locale and language setting
TZ System timezone (e.g. America/New_York)
EDITOR Default text editor
TERM Terminal type (e.g. xterm-256color)
TMPDIR Directory for temporary files
PWD Current working directory

Troubleshooting

Quick checks for common env issues.

Issue Check
A temporary variable does not persist env VAR=value command affects only that command and its children
A command is not found after env -i Add a minimal PATH, such as /usr/bin:/bin
Output is hard to parse safely Use env -0 with tools that support NUL-delimited input
A variable is still visible in the shell env does not modify the parent shell; use export or unset the variable in the shell itself
A locale-sensitive command behaves differently Check whether LANG or related locale variables were overridden

Related Guides

Use these guides for broader environment-variable workflows.

Guide Description
How to Set and List Environment Variables in Linux Full guide to listing and setting environment variables
export Command in Linux Export shell variables to child processes
Bashrc vs Bash Profile Understand shell startup files
Bash cheatsheet Quick reference for Bash syntax and variables

Python Cheatsheet

Data Types

Core Python types and how to inspect them.

Syntax Description
42, 3.14, -7 int, float literals
True, False bool
"hello", 'world' str (immutable)
[1, 2, 3] list — mutable, ordered
(1, 2, 3) tuple — immutable, ordered
{"a": 1, "b": 2} dict — key-value pairs
{1, 2, 3} set — unique, unordered
None Absence of a value
type(x) Get type of x
isinstance(x, int) Check if x is of a given type

Operators

Arithmetic, comparison, logical, and membership operators.

Operator Description
+, -, *, / Add, subtract, multiply, divide
// Floor division
% Modulo (remainder)
** Exponentiation
==, != Equal, not equal
<, >, <=, >= Comparison
and, or, not Logical operators
in, not in Membership test
is, is not Identity test
x if cond else y Ternary (conditional) expression

String Methods

Common operations on str objects. Strings are immutable — methods return a new string.

Method Description
s.upper() / s.lower() Convert to upper / lowercase
s.strip() Remove leading and trailing whitespace
s.lstrip() / s.rstrip() Remove leading / trailing whitespace
s.split(",") Split on delimiter, return list
",".join(lst) Join list elements into a string
s.replace("old", "new") Replace all occurrences
s.find("x") First index of substring (-1 if not found)
s.startswith("x") True if string starts with prefix
s.endswith("x") True if string ends with suffix
s.count("x") Count non-overlapping occurrences
s.isdigit() / s.isalpha() Check all digits / all letters
len(s) String length
s[i], s[i:j], s[::step] Index, slice, step
"x" in s Substring membership test
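
Because strings are immutable, every method above returns a new string, which makes chaining natural:

```python
s = "  Hello, World  "
trimmed = s.strip()                        # "Hello, World"
print(trimmed.lower())                     # hello, world
print(trimmed.split(", "))                 # ['Hello', 'World']
print("-".join(["a", "b", "c"]))           # a-b-c
print(trimmed.replace("World", "Python"))  # Hello, Python
print(repr(s))                             # original is unchanged
```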

String Formatting

Embed values in strings.

Syntax Description
f"Hello, {name}" f-string (Python 3.6+)
f"{val:.2f}" Float with 2 decimal places
f"{val:>10}" Right-align in 10 characters
f"{val:,}" Thousands separator
f"{{name}}" Literal braces in output
"Hello, {}".format(name) str.format()
"Hello, %s" % name %-formatting (legacy)

List Methods

Common operations on list objects.

Method Description
lst.append(x) Add element to end
lst.extend([x, y]) Add multiple elements to end
lst.insert(i, x) Insert element at index i
lst.remove(x) Remove first occurrence of x
lst.pop() Remove and return last element
lst.pop(i) Remove and return element at index i
lst.index(x) First index of x
lst.count(x) Count occurrences of x
lst.sort() Sort in place (ascending)
lst.sort(reverse=True) Sort in place (descending)
lst.reverse() Reverse in place
lst.copy() Shallow copy
lst.clear() Remove all elements
len(lst) List length
lst[i], lst[i:j] Index and slice
x in lst Membership test

Dictionary Methods

Common operations on dict objects.

Method Description
d[key] Get value by key (raises KeyError if missing)
d.get(key) Get value or None if key not found
d.get(key, default) Get value or default if key not found
d[key] = val Set or update a value
del d[key] Delete a key
d.pop(key) Remove key and return its value
d.keys() View of all keys
d.values() View of all values
d.items() View of all (key, value) pairs
d.update(d2) Merge another dict into d
d.setdefault(key, val) Set key if missing, return value
key in d Check if key exists
len(d) Number of key-value pairs
{**d1, **d2} Merge dicts (Python 3.9+: d1 | d2)
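
The safe-lookup and merge rows above are the ones that trip people up most often. A short demonstration:

```python
d = {"a": 1, "b": 2}
print(d.get("c"))          # None, not a KeyError
print(d.get("c", 0))       # 0 (fallback default)
d.setdefault("c", 3)       # inserts "c" only because it is missing
merged = {**d, "d": 4}     # on Python 3.9+: d | {"d": 4}
print(merged)              # {'a': 1, 'b': 2, 'c': 3, 'd': 4}
```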

Sets

Common operations on set objects.

Syntax Description
set() Empty set (use this, not {})
s.add(x) Add element
s.remove(x) Remove element (raises KeyError if missing)
s.discard(x) Remove element (no error if missing)
s1 | s2 Union
s1 & s2 Intersection
s1 - s2 Difference
s1 ^ s2 Symmetric difference
s1 <= s2 Subset check
x in s Membership test

Control Flow

Conditionals, loops, and flow control keywords.

Syntax Description
if cond: / elif cond: / else: Conditional blocks
for item in iterable: Iterate over a sequence
for i, v in enumerate(lst): Iterate with index and value
while cond: Loop while condition is true
break Exit loop immediately
continue Skip to next iteration
pass No-op placeholder
match x: / case val: Pattern matching (Python 3.10+)

Functions

Define and call functions.

Syntax Description
def func(x, y): Define a function
return val Return a value
def func(x=0): Default argument
def func(*args): Variable positional arguments
def func(**kwargs): Variable keyword arguments
lambda x: x * 2 Anonymous (lambda) function
func(y=1, x=2) Call with keyword arguments
result = func(x) Call and capture return value

File I/O

Open, read, and write files.

Syntax Description
open("f.txt", "r") Open for reading (default mode)
open("f.txt", "w") Open for writing — creates or overwrites
open("f.txt", "a") Open for appending
open("f.txt", "rb") Open for reading in binary mode
with open("f.txt") as f: Open with context manager (auto-close)
f.read() Read entire file as a string
f.read(n) Read n characters
f.readline() Read one line
f.readlines() Read all lines into a list
for line in f: Iterate over lines
f.write("text") Write string to file
f.writelines(lst) Write list of strings

Exception Handling

Catch and handle runtime errors.

Syntax Description
try: Start guarded block
except ValueError: Catch a specific exception
except (TypeError, ValueError): Catch multiple exception types
except Exception as e: Catch and bind exception to variable
else: Run if no exception was raised
finally: Always run — use for cleanup
raise ValueError("msg") Raise an exception
raise Re-raise the current exception

Comprehensions

Build lists, dicts, sets, and generators in one expression.

Syntax Description
[x for x in lst] List comprehension
[x for x in lst if cond] Filtered list comprehension
[f(x) for x in range(n)] Comprehension with expression
{k: v for k, v in d.items()} Dict comprehension
{x for x in lst} Set comprehension
(x for x in lst) Generator expression (lazy)
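
One of each, side by side:

```python
nums = [3, 1, 4, 1, 5]
squares = [n * n for n in nums]                 # [9, 1, 16, 1, 25]
evens = [n for n in nums if n % 2 == 0]         # [4]
positions = {n: i for i, n in enumerate(nums)}  # last index wins for duplicates
unique = {n for n in nums}                      # {1, 3, 4, 5}
total = sum(n * n for n in nums)                # generator: no intermediate list
print(squares, evens, unique, total)
```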

Useful Built-ins

Frequently used Python built-in functions.

Function Description
print(x) Print to stdout
input("prompt") Read user input as string
len(x) Length of sequence or collection
range(n) / range(a, b, step) Integer sequence
enumerate(lst) (index, value) pairs
zip(a, b) Pair elements from two iterables
map(func, lst) Apply function to each element
filter(func, lst) Filter elements by predicate
sorted(lst) Return sorted copy
sum(lst), min(lst), max(lst) Sum, minimum, maximum
abs(x), round(x, n) Absolute value, round to n places
int(x), float(x), str(x) Type conversion

Related Guides

Guide Description
Python f-Strings String formatting with f-strings
Python Lists List operations and methods
Python Dictionaries Dictionary operations and methods
Python for Loop Iterating with for loops
Python while Loop Looping with while
Python if/else Statement Conditionals and branching
How to Replace a String in Python String replacement methods
How to Split a String in Python Splitting strings into lists

Python f-Strings: String Formatting in Python 3

f-strings, short for formatted string literals, are the most concise and readable way to format strings in Python. Introduced in Python 3.6, they let you embed variables and expressions directly inside a string by prefixing it with f or F.

This guide explains how to use f-strings in Python, including expressions, format specifiers, number formatting, alignment, and the debugging shorthand introduced in Python 3.8.

Basic Syntax

An f-string is a string literal prefixed with f. Any expression inside curly braces {} is evaluated at runtime and its result is inserted into the string:

py
name = "Leia"
age = 30
print(f"My name is {name} and I am {age} years old.")
output
My name is Leia and I am 30 years old.

Any valid Python expression can appear inside the braces — variables, attribute access, method calls, or arithmetic:

py
price = 49.99
quantity = 3
print(f"Total: {price * quantity}")
output
Total: 149.97

Expressions Inside f-Strings

You can call methods and functions directly inside the braces:

py
name = "alice"
print(f"Hello, {name.upper()}!")
output
Hello, ALICE!

Ternary expressions work as well:

py
age = 20
print(f"Status: {'adult' if age >= 18 else 'minor'}")
output
Status: adult

You can also access dictionary keys and list items:

py
user = {"name": "Bob", "city": "Berlin"}
print(f"{user['name']} lives in {user['city']}.")
output
Bob lives in Berlin.

Escaping Braces

To include a literal curly brace in an f-string, double it:

py
value = 42
print(f"{{value}} = {value}")
output
{value} = 42

Multiline f-Strings

To build a multiline f-string, wrap it in triple quotes:

py
name = "Leia"
age = 30
message = f"""
Name: {name}
Age: {age}
"""
print(message)
output
Name: Leia
Age: 30

Alternatively, use implicit string concatenation to keep each line short:

py
message = (
 f"Name: {name}\n"
 f"Age: {age}"
)

Format Specifiers

f-strings support Python’s format specification mini-language. The full syntax is:

txt
{value:[fill][align][sign][width][grouping][.precision][type]}
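
The fields compose left to right. For example, combining fill, alignment, width, grouping, and precision in one specifier:

```python
value = 1234.5678
print(f"{value:0>12,.2f}")   # fill '0', right-align, width 12, comma grouping, 2 decimals
print(f"{3.14159:+.3f}")     # always show the sign, 3 decimal places
```

This prints 00001,234.57 and +3.142.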

Floats and Precision

Control the number of decimal places with :.Nf:

py
pi = 3.14159265
print(f"{pi:.2f}")
print(f"{pi:.4f}")
output
3.14
3.1416

Thousands Separator

Use , to add a thousands separator:

py
population = 1_234_567
print(f"{population:,}")
output
1,234,567

Percentage

Use :.N% to format a value as a percentage:

py
score = 0.875
print(f"Score: {score:.1%}")
output
Score: 87.5%

Scientific Notation

Use :e for scientific notation:

py
distance = 149_600_000
print(f"{distance:e}")
output
1.496000e+08

Alignment and Padding

Use <, >, and ^ to align text within a fixed width. Optionally specify a fill character before the alignment symbol:

py
print(f"{'left':<10}|")
print(f"{'right':>10}|")
print(f"{'center':^10}|")
print(f"{'padded':*^10}|")
output
left      |
     right|
  center  |
**padded**|

Number Bases

Convert integers to binary, octal, or hexadecimal with the b, o, x, and X type codes:

py
n = 255
print(f"Binary: {n:b}")
print(f"Octal: {n:o}")
print(f"Hex (lower): {n:x}")
print(f"Hex (upper): {n:X}")
output
Binary: 11111111
Octal: 377
Hex (lower): ff
Hex (upper): FF

Debugging with =

Python 3.8 introduced the = shorthand, which prints both the expression and its value. This is useful for quick debugging:

py
x = 42
items = [1, 2, 3]
print(f"{x=}")
print(f"{len(items)=}")
output
x=42
len(items)=3

The = shorthand preserves the exact expression text, making it more informative than a plain print().

Quick Reference

Syntax Description
f"{var}" Insert a variable
f"{expr}" Insert any expression
f"{val:.2f}" Float with 2 decimal places
f"{val:,}" Integer with thousands separator
f"{val:.1%}" Percentage with 1 decimal place
f"{val:e}" Scientific notation
f"{val:<10}" Left-align in 10 characters
f"{val:>10}" Right-align in 10 characters
f"{val:^10}" Center in 10 characters
f"{val:*^10}" Center with * fill
f"{val:b}" Binary
f"{val:x}" Hexadecimal (lowercase)
f"{val=}" Debug: print expression and value (3.8+)

FAQ

What Python version do f-strings require?
f-strings were introduced in Python 3.6. If you are on Python 3.5 or earlier, use str.format() or % formatting instead.

What is the difference between f-strings and str.format()?
f-strings are evaluated at runtime and embed expressions inline. str.format() uses positional or keyword placeholders and is slightly more verbose. f-strings are generally faster and easier to read for straightforward formatting.

Can I use quotes inside an f-string expression?
Yes, but you must use a different quote type from the one wrapping the f-string. For example, if the f-string uses double quotes, use single quotes inside the braces: f"Hello, {user['name']}". In Python 3.12 and later, the same quote type can be reused inside the braces.

Can I use f-strings with multiline expressions?
In Python 3.11 and earlier, expressions inside {} cannot contain backslashes. Python 3.12 lifts this restriction, but for code that must run on older versions, pre-compute the value in a variable first and reference that variable in the f-string.

Are f-strings faster than % formatting?
In many common cases, f-strings are faster than % formatting and str.format(), while also being easier to read. Exact performance depends on the expression and Python version, so readability is usually the more important reason to prefer them.

Conclusion

f-strings are the preferred way to format strings in Python 3.6 and later. They are concise, readable, and support the full format specification mini-language for controlling number precision, alignment, and type conversion. For related string operations, see Python String Replace and How to Split a String in Python .

kill Cheatsheet

Basic Syntax

Core kill command forms.

Command Description
kill PID Send SIGTERM (graceful stop) to a process
kill -9 PID Force kill a process (SIGKILL)
kill -1 PID Reload process config (SIGHUP)
kill -l List all available signal names and numbers
kill -0 PID Check if a PID exists without sending a signal

Signal Reference

Signals most commonly used with kill, killall, and pkill.

Signal Number Description
SIGHUP 1 Reload config — most daemons reload settings without restarting
SIGINT 2 Interrupt process — equivalent to pressing Ctrl+C
SIGQUIT 3 Quit and write a core dump for debugging
SIGKILL 9 Force kill — cannot be caught or ignored; always terminates immediately
SIGTERM 15 Graceful stop — default signal; process can clean up before exiting
SIGUSR1 10 User-defined signal 1 — meaning depends on the application
SIGUSR2 12 User-defined signal 2 — meaning depends on the application
SIGCONT 18 Resume a process that was suspended with SIGSTOP
SIGSTOP 19 Suspend process — cannot be caught or ignored
SIGTSTP 20 Terminal stop — equivalent to pressing Ctrl+Z

Kill by PID

Send signals to one or more specific processes.

Command Description
kill 1234 Gracefully stop PID 1234
kill -9 1234 5678 Force kill multiple PIDs at once
kill -HUP 1234 Reload config for PID 1234
kill -STOP 1234 Suspend PID 1234
kill -CONT 1234 Resume suspended PID 1234
kill -9 $(pidof firefox) Force kill all PIDs for a process by name
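
A common pattern combines several rows of this table: send SIGTERM, give the process a moment, then escalate to SIGKILL only if kill -0 shows it is still alive. A sketch using a throwaway sleep process as a stand-in:

```shell
sleep 300 &                       # stand-in for a misbehaving process
pid=$!

kill "$pid"                       # polite request: SIGTERM
sleep 1
if kill -0 "$pid" 2>/dev/null; then
    kill -9 "$pid"                # still running, force kill
fi
wait "$pid" 2>/dev/null || true   # reap the child
```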

killall: Kill by Name

Send signals to all processes matching an exact name.

Command Description
killall process_name Send SIGTERM to all matching processes
killall -9 process_name Force kill all matching processes
killall -HUP nginx Reload all nginx processes
killall -u username process_name Kill matching processes owned by a user
killall -v process_name Kill and report which processes were signaled
killall -r "pattern" Match process names with a regex pattern

Background Jobs

Kill jobs running in the current shell session.

Command Description
jobs List background jobs and their job numbers
kill %1 Send SIGTERM to background job number 1
kill -9 %1 Force kill background job number 1
kill %+ Kill the current (most recently started) background job
kill %% Kill the current background job (%% and %+ are synonyms)

Troubleshooting

Quick fixes for common kill issues.

Issue Check
Process does not stop after SIGTERM Wait a moment, then escalate to kill -9 PID
No such process error PID has already exited; verify with ps -p PID
Operation not permitted Process belongs to another user — use sudo kill PID
SIGKILL has no effect Process is stuck in uninterruptible sleep (D state in ps); it cannot die until the blocking I/O completes or the system reboots
Not sure which PID to kill Find it first with pidof name, pgrep name, or ps aux | grep name
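For the last row in the table, finding the PID reliably is easier when the target has a distinctive name. One way to try this safely is to run a copy of `sleep` under a made-up name (`demo_proc` here is an arbitrary name for the demo) and match it exactly with pgrep:

```shell
#!/bin/sh
# Give the target process a unique name so exact matching is unambiguous.
bin=$(mktemp -d)
cp "$(command -v sleep)" "$bin/demo_proc"
"$bin/demo_proc" 300 &
pid=$!
sleep 1                     # give the child a moment to exec

pgrep -x demo_proc          # -x matches the exact process name; prints the PID

kill "$pid"
wait "$pid" 2>/dev/null
rm -rf "$bin"
```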

Related Guides

Guide Description
kill Command in Linux Full guide to kill options, signals, and examples
How to Kill a Process in Linux Practical walkthrough for finding and stopping processes
pkill Command in Linux Kill processes by name and pattern
pgrep Command in Linux Find process PIDs before signaling
ps Command in Linux Inspect the running process list

watch Cheatsheet

Basic Syntax

Core watch command forms.

Command Description
watch command Run a command repeatedly every 2 seconds
watch -n 5 command Refresh every 5 seconds
watch -n 1 date Update output every second
watch --help Show available options
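Conceptually, watch is a clear-and-rerun loop. This rough shell equivalent (bounded to three iterations so the demo terminates; real watch runs until Ctrl+C) can stand in on systems where watch is not installed:

```shell
#!/bin/sh
# Rough stand-in for `watch -n 2 df -h`, limited to 3 iterations for the demo.
interval=2
i=0
while [ "$i" -lt 3 ]; do
    clear 2>/dev/null || printf '\n'   # `clear` needs a terminal; fall back to a newline
    date                               # mimic watch's header line
    df -h                              # the command being monitored
    sleep "$interval"
    i=$((i + 1))
done
```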

Common Monitoring Tasks

Use watch to keep an eye on changing system state.

Command Description
watch free -h Monitor memory usage
watch df -h Monitor disk space
watch uptime Check load averages and uptime
watch "ps -ef | grep nginx" Monitor nginx processes
watch "ss -tulpn" Monitor listening sockets

Timing and Refresh Control

Control how often watch reruns the command.

Command Description
watch -n 0.5 command Refresh every 0.5 seconds
watch -n 10 command Refresh every 10 seconds
watch -t command Hide the header line
watch -p command Try to run at precise intervals
watch -x command arg1 arg2 Run the command directly without sh -c

Highlighting Changes

Make changing output easier to spot.

Command Description
watch -d command Highlight differences between updates
watch -d free -h Highlight memory changes
watch -d "ip -brief address" Highlight interface state or address changes
watch -d -n 1 "cat /proc/loadavg" Highlight load-average changes

Commands with Pipes and Quotes

Wrap pipelines and shell syntax in quotes unless you use -x.

Command Description
watch "ps -ef | grep apache" Filter the process list for apache
watch "ls -lh /var/log | tail" List /var/log and show the last entries
watch "grep -c error /var/log/syslog" Count matching lines repeatedly
watch -x ls -lh /var/log Run ls directly without shell parsing
watch "find /tmp -maxdepth 1 -type f | wc -l" Count the files in /tmp repeatedly

Exit Conditions and Beeps

Stop or alert when output changes.

Command Description
watch -g command Exit when command output changes
watch -b command Beep if the command exits with non-zero status
watch -b -n 5 "systemctl is-active nginx" Alert if the service is no longer active
watch -g "cat /tmp/status.txt" Exit when the file content changes

Troubleshooting

Quick fixes for common watch issues.

Issue Check
Pipes or redirects do not work Quote the whole command or use sh -c
The screen flickers too much Increase the interval or hide the header with -t
Output changes are hard to spot Add -d to highlight differences
The command exits immediately Check the command syntax outside watch first
You need shell expansion and variables Use quoted shell commands instead of -x

Related Guides

Use these guides for broader monitoring and process tasks.

Guide Description
Linux Watch Command Full guide to watch options and examples
ps Command in Linux Inspect running processes
free Command in Linux Check memory usage
ss Command in Linux Inspect sockets and ports
top Command in Linux Monitor processes interactively

Python Virtual Environments: venv and virtualenv

A Python virtual environment is a self-contained directory that includes its own Python interpreter and a set of installed packages, isolated from the system-wide Python installation. Using virtual environments lets each project maintain its own dependencies without affecting other projects on the same machine.

This guide explains how to create and manage virtual environments using venv (built into Python 3) and virtualenv (a popular third-party alternative) on Linux and macOS.

Quick Reference

Task Command
Install venv support (Ubuntu, Debian) sudo apt install python3-venv
Create environment python3 -m venv venv
Activate (Linux / macOS) source venv/bin/activate
Deactivate deactivate
Install a package pip install package-name
Install specific version pip install package==1.2.3
List installed packages pip list
Save dependencies pip freeze > requirements.txt
Install from requirements file pip install -r requirements.txt
Create with specific Python version python3.11 -m venv venv
Install virtualenv pip install virtualenv
Delete environment rm -rf venv

What Is a Python Virtual Environment?

When you install a package globally with pip install, it is available to every Python script on the system. This becomes a problem when two projects need different versions of the same library — for example, one requires requests==2.28.0 and another requires requests==2.31.0. Installing both globally is not possible.

A virtual environment solves this by giving each project its own isolated space for packages. Activating an environment prepends its bin/ directory to your PATH, so python and pip resolve to the versions inside the environment rather than the system-wide ones.
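The resolution change described above is easy to observe by comparing what `python3` points to before and after activation. A sketch using a throwaway directory, assuming `python3` with the venv module is installed:

```shell
#!/bin/sh
# Create a throwaway environment and watch command resolution change.
dir=$(mktemp -d)
python3 -m venv "$dir/venv"

command -v python3               # system interpreter, e.g. /usr/bin/python3

. "$dir/venv/bin/activate"       # POSIX spelling of `source`
command -v python3               # now resolves to .../venv/bin/python3
command -v pip                   # pip inside the environment

deactivate
command -v python3               # back to the system interpreter
rm -rf "$dir"
```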

Installing venv

The venv module ships with Python 3 and requires no separate installation on most systems. On Ubuntu and Debian, the module is packaged separately:

Terminal
sudo apt install python3-venv

On Fedora, RHEL, and derivatives, venv is included with the Python package by default.

Verify your Python version before creating an environment:

Terminal
python3 --version

For more on checking your Python installation, see How to Check Python Version.

Creating a Virtual Environment

Navigate to your project directory and run:

Terminal
python3 -m venv venv

The second venv is the name of the directory that will be created. The conventional names are venv or .venv. The directory contains a copy of the Python binary, pip, and the standard library.

To create the environment in a specific location rather than the current directory:

Terminal
python3 -m venv ~/projects/myproject/venv

Activating the Virtual Environment

Before using the environment, you need to activate it.

On Linux and macOS:

Terminal
source venv/bin/activate

Once activated, your shell prompt changes to show the environment name:

output
(venv) user@host:~/myproject$

From this point on, python and pip refer to the versions inside the virtual environment, not the system-wide ones.

To deactivate the environment and return to the system Python:

Terminal
deactivate

Installing Packages

With the environment active, install packages using pip as you normally would. If pip is not installed on your system, see How to Install Python Pip on Ubuntu.

Terminal
pip install requests

To install a specific version:

Terminal
pip install requests==2.31.0

To list all packages installed in the current environment:

Terminal
pip list

Packages installed here are completely isolated from the system and from other virtual environments.

Managing Dependencies with requirements.txt

Sharing a project with others, or deploying it to another machine, requires a way to reproduce the same package versions. The convention is to save all dependencies to a requirements.txt file:

Terminal
pip freeze > requirements.txt

The file lists every installed package and its exact version:

output
certifi==2024.2.2
charset-normalizer==3.3.2
idna==3.6
requests==2.31.0
urllib3==2.2.1

To install all packages from the file in a fresh environment:

Terminal
pip install -r requirements.txt

This is the standard workflow for reproducing an environment on a different machine or in a CI/CD pipeline.
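The round trip can be exercised end to end without touching the network by freezing a fresh (empty) environment and replaying it into another; with real dependencies the steps are identical:

```shell
#!/bin/sh
# Capture dependencies in one environment, replay them in another.
work=$(mktemp -d)

python3 -m venv "$work/env1"
. "$work/env1/bin/activate"
pip freeze > "$work/requirements.txt"     # empty here; normally lists pkg==version pairs
deactivate

python3 -m venv "$work/env2"
. "$work/env2/bin/activate"
pip install -r "$work/requirements.txt"   # installs exactly the captured versions
pip list                                  # verify the resulting package set
deactivate
rm -rf "$work"
```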

Using a Specific Python Version

By default, python3 -m venv uses whichever Python 3 version is the system default. If you have multiple Python versions installed, you can specify which one to use:

Terminal
python3.11 -m venv venv

This creates an environment based on Python 3.11 regardless of the system default. To check which versions are available:

Terminal
python3 --version
python3.11 --version

virtualenv: An Alternative to venv

virtualenv is a third-party package that predates venv and offers a few additional features. Install it with pip:

Terminal
pip install virtualenv

Creating and activating an environment with virtualenv follows the same pattern:

Terminal
virtualenv venv
source venv/bin/activate

To use a specific Python interpreter:

Terminal
virtualenv -p python3.11 venv

When to use virtualenv over venv:

  • You need to create many environments quickly — virtualenv creates them noticeably faster than venv.
  • You are working with Python 2 (legacy codebases only).
  • You need features like --copies to copy binaries instead of symlinking.

For most modern Python 3 projects, venv is sufficient and has the advantage of requiring no installation.

Excluding the Environment from Version Control

The virtual environment directory should never be committed to version control. It is large, machine-specific, and fully reproducible from requirements.txt. Add it to your .gitignore:

Terminal
echo "venv/" >> .gitignore

If you named your environment .venv instead, add .venv/ to .gitignore.

Troubleshooting

python3 -m venv venv fails with “No module named venv”
On Ubuntu and Debian, the venv module is not included in the base Python package. Install it with sudo apt install python3-venv, then retry.

pip install installs packages globally instead of into the environment
The environment is not activated. Run source venv/bin/activate first. You can verify by checking which pip — it should point to a path inside your venv/ directory.

“command not found: python” after activating the environment
The environment uses python3 as the binary name if it was created with python3 -m venv. Use python3 explicitly, or check which python and which python3 inside the active environment.

The environment breaks after moving or renaming its directory
Virtual environments contain hardcoded absolute paths to the Python interpreter. Moving or renaming the directory invalidates those paths. Delete the directory and recreate the environment in the new location, then reinstall from requirements.txt.
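The hardcoded paths are visible directly in the environment's files (exact contents vary slightly between Python versions, so treat the grep patterns as a sketch):

```shell
#!/bin/sh
# Inspect the absolute paths a fresh environment bakes in at creation time.
dir=$(mktemp -d)
python3 -m venv "$dir/venv"

grep '^home' "$dir/venv/pyvenv.cfg"           # absolute path to the base interpreter's bin dir
grep 'VIRTUAL_ENV=' "$dir/venv/bin/activate"  # the environment's own recorded location

rm -rf "$dir"
```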

FAQ

What is the difference between venv and virtualenv?
venv is a standard library module included with Python 3 — no installation needed. virtualenv is a third-party package with a longer history, faster environment creation, and Python 2 support. For new Python 3 projects, venv is the recommended choice.

Should I commit the virtual environment to git?
No. Add venv/ (or .venv/) to your .gitignore and commit only requirements.txt. Anyone checking out the project can recreate the environment with pip install -r requirements.txt.

Do I need a virtual environment for every project?
It is strongly recommended. Without isolated environments, installing or upgrading a package for one project can break another. The overhead of creating a virtual environment is minimal.

What is the difference between pip freeze and pip list?
pip list shows installed packages in a human-readable format. pip freeze outputs them in requirements.txt format (package==version) suitable for use with pip install -r.

Can I use a virtual environment with a different Python version than the system default?
Yes. Pass the path to the desired interpreter when creating the environment: python3.11 -m venv venv. The environment will use that interpreter for both python and pip commands.

Conclusion

Virtual environments are the standard way to manage Python project dependencies. Create one with python3 -m venv venv, activate it with source venv/bin/activate, and use pip freeze > requirements.txt to capture your dependencies. For more on managing Python installations, see How to Check Python Version.
