dd Command in Linux: Copy Disks, Partitions, and Files

Most of the time when you copy a file on Linux, you reach for cp. It walks the filesystem, respects permissions, and works on individual files. But when you need to copy an entire disk, write an ISO image to a USB stick, or create a byte-for-byte backup of a partition, the filesystem view is not enough. For that, Linux ships with dd, a tool that copies raw data block by block between any two files or devices.

This guide explains how to use dd safely to write ISO images, clone disks, back up partitions, create test files, and benchmark storage performance.

dd Syntax

The general form of the dd command is:

txt
dd if=INPUT of=OUTPUT [OPTIONS]

The two key operands are if (input file) and of (output file). Either one can be a regular file, a block device such as /dev/sda, or even /dev/zero and /dev/urandom. Options control the block size, how many blocks to copy, and what conversions to apply.
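As a quick, harmless illustration of the two operands, dd can copy an ordinary file much like cp would (the file names here are placeholders):

```shell
# Create a small file, then copy it byte for byte with dd.
printf 'hello from dd\n' > source.txt
dd if=source.txt of=copy.txt
cat copy.txt
```

dd reports the number of records copied on stderr; the copy itself is identical to the source.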

Warning
dd writes exactly what you tell it to write, to exactly the target you point it at. Pointing of= at the wrong disk will overwrite that disk without warning or confirmation. Always double-check the target device with lsblk or sudo fdisk -l before running a dd command.

Writing an ISO Image to a USB Drive

The most common use for dd is flashing a Linux ISO to a USB drive so you can boot and install from it. First, identify the USB device with lsblk or sudo fdisk -l:

Terminal
lsblk
output
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0   0 465.8G  0 disk
├─sda1   8:1   0   512M  0 part /boot/efi
└─sda2   8:2   0 465.3G  0 part /
sdb      8:16  1  14.5G  0 disk
└─sdb1   8:17  1  14.5G  0 part

Here sda is the system disk and sdb is the USB drive. Make sure the USB is unmounted before writing to it:

Terminal
sudo umount /dev/sdb1

Then write the ISO image:

Terminal
sudo dd if=ubuntu-24.04-desktop-amd64.iso of=/dev/sdb bs=4M status=progress oflag=sync

Notice that the output target is the whole disk (/dev/sdb), not a partition (/dev/sdb1). Writing an ISO to a partition would leave the USB unbootable. The options do the following:

  • bs=4M - Read and write 4 MB at a time, which is much faster than the 512-byte default.
  • status=progress - Print a live progress indicator so you can see how much has been written.
  • oflag=sync - Flush every write to the device, so the final byte count reflects data actually on disk.

When dd finishes, run sync once more to make sure all buffered writes hit the USB stick before you remove it:

Terminal
sync

For alternative methods and distro-specific tips, see the guide on how to create a bootable Linux USB drive.

Cloning a Whole Disk

To make an exact block-level copy of one disk onto another, point if= and of= at two different block devices:

Terminal
sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress

The destination disk must be at least as large as the source. Anything already on it is overwritten. The conversion flags handle read errors gracefully:

  • conv=noerror - Keep going when a read error occurs instead of aborting.
  • conv=sync - Pad short reads with zeros so the output stays aligned with the source.

Clone offline whenever possible. Cloning a disk that is actively being written to produces an inconsistent copy, especially for databases and journaling filesystems.

Backing Up a Partition to an Image File

dd can also write a partition or whole disk to a regular file. That file becomes a bit-for-bit image you can restore later:

Terminal
sudo dd if=/dev/sda1 of=/backup/sda1.img bs=4M status=progress

To save space, pipe the output through gzip:

Terminal
sudo dd if=/dev/sda1 bs=4M status=progress | gzip -c > /backup/sda1.img.gz

The compressed image is much smaller for filesystems that contain text or empty space, though the compression step adds CPU time.

To restore the image to a partition:

Terminal
gunzip -c /backup/sda1.img.gz | sudo dd of=/dev/sda1 bs=4M status=progress

The target partition must be unmounted during restore, and it must be at least as large as the original.

Backing Up and Restoring the MBR

The Master Boot Record sits in the first 512 bytes of a disk that uses a traditional BIOS partition table. You can back it up with dd:

Terminal
sudo dd if=/dev/sda of=/backup/mbr.img bs=512 count=1

To restore the MBR, swap the input and output:

Terminal
sudo dd if=/backup/mbr.img of=/dev/sda bs=512 count=1

On modern UEFI systems the partition table uses GPT instead of MBR, so this trick only applies to older BIOS installations. For GPT disks, use sgdisk --backup and sgdisk --load-backup from the gdisk package.

Creating an Empty File of a Specific Size

dd is often used to create test files of a known size. The count option sets how many blocks to write, and bs sets the block size:

Terminal
dd if=/dev/zero of=testfile bs=1M count=100

This creates a 100 MB file filled with zeros, useful for testing backups, simulating quota limits, or preparing a swap file. To create a file filled with random data instead, read from /dev/urandom:

Terminal
dd if=/dev/urandom of=random.bin bs=1M count=10

Random data is slower to generate because the kernel has to produce the bytes, while /dev/zero is effectively free.
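A related trick uses seek instead of count to create a sparse file: dd seeks past the first gigabyte of the output without writing any data, so the file has an apparent size of 1 GiB but allocates almost no disk space (the file name is a placeholder):

```shell
# Seek 1 GiB into the output, write zero blocks: result is a sparse file.
dd if=/dev/zero of=sparse.img bs=1 count=0 seek=1G
ls -lh sparse.img   # apparent size: 1.0G
du -h sparse.img    # actual allocation on disk: (almost) nothing
```

This is useful for preparing disk images for virtual machines, where the space is only consumed as data is actually written.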

Creating a Swap File

One practical use of the empty-file pattern is creating a swap file:

Terminal
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

The chmod 600 step keeps the swap file readable only by root, which is required by mkswap on most distros. After swapon, the 2 GB swap file is active immediately. Add an entry to /etc/fstab to make it permanent.
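The fstab entry for a swap file uses none as the mount point and swap as the filesystem type; a typical line looks like this (sw is the traditional options value, and some distros use defaults instead):

```txt
/swapfile none swap sw 0 0
```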

Wiping a Disk

To destroy data on a disk before disposing of it or repurposing it, overwrite the entire device with zeros:

Terminal
sudo dd if=/dev/zero of=/dev/sdb bs=4M status=progress

For a stronger wipe that makes recovery harder on traditional spinning disks, use random data:

Terminal
sudo dd if=/dev/urandom of=/dev/sdb bs=4M status=progress

On SSDs, a dd wipe is not the right tool. SSDs remap blocks internally, so a full write does not guarantee every cell was overwritten. Use blkdiscard or the drive vendor’s secure-erase utility instead.

Benchmarking Disk Write Speed

dd is a quick way to measure sustained write throughput. Write a 1 GB file of zeros and let dd report the rate:

Terminal
dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct
output
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.1 s, 346 MB/s

The oflag=direct flag bypasses the page cache, so the number reflects actual disk speed rather than memory throughput. For a read benchmark, drop the caches first and then read the file back:

Terminal
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
dd if=tempfile of=/dev/null bs=1M

Remove the test file when you are done:

Terminal
rm tempfile

For more detailed I/O profiling, tools like fio and ioping produce more accurate, repeatable results.

Showing Progress on a Running dd

If you started dd without status=progress, you can still ask it to print a status line by sending the USR1 signal to its process. From another terminal, run:

Terminal
sudo kill -USR1 $(pidof dd)

dd will respond by printing the number of bytes copied so far and its current rate, then keep running.

Common Options

A quick rundown of the options covered in this guide, plus a few more worth knowing:

  • if=FILE - Input file or device.
  • of=FILE - Output file or device.
  • bs=N - Block size for both reads and writes (e.g., 4M for 4 MB).
  • ibs=N / obs=N - Separate input and output block sizes.
  • count=N - Number of input blocks to copy.
  • skip=N - Skip N blocks at the start of the input.
  • seek=N - Skip N blocks at the start of the output.
  • status=progress - Print progress while copying.
  • conv=noerror - Continue past read errors.
  • conv=sync - Pad short reads with zeros to keep block alignment.
  • oflag=direct - Bypass the kernel page cache on write.
  • oflag=sync - Flush each write to the device before continuing.
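The skip and seek options are easiest to see on regular files. This sketch (file names are placeholders) carves four 512-byte blocks out of the middle of a file:

```shell
# Build a 10-block sample file, then copy blocks 3-6 out of it:
# skip=2 skips the first two input blocks, count=4 copies four blocks.
dd if=/dev/urandom of=sample.bin bs=512 count=10
dd if=sample.bin of=slice.bin bs=512 skip=2 count=4
ls -l slice.bin   # 2048 bytes
```

The same idea works on devices, which is how dd extracts or patches an exact byte range on a disk.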

Quick Reference

  • sudo dd if=image.iso of=/dev/sdX bs=4M status=progress oflag=sync - Write an ISO image to a USB drive
  • sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress - Clone one disk to another
  • sudo dd if=/dev/sda1 of=/backup/sda1.img bs=4M status=progress - Back up a partition to an image file
  • sudo dd if=/dev/sda bs=4M status=progress | gzip -c > disk.img.gz - Compressed disk image backup
  • sudo dd if=/dev/sda of=/backup/mbr.img bs=512 count=1 - Back up the MBR
  • dd if=/dev/zero of=testfile bs=1M count=100 - Create a 100 MB file of zeros
  • sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress - Wipe a disk with zeros
  • dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct - Benchmark disk write speed
  • sudo kill -USR1 $(pidof dd) - Ask a running dd for its progress

Troubleshooting

dd: failed to open ‘/dev/sdX’: Permission denied
Writing to a block device requires root privileges. Prefix the command with sudo. If you are already using sudo, double-check that the device path is correct and that the device is not exclusively held by another process.

dd: error writing ‘/dev/sdX’: No space left on device
The destination is smaller than the source. When cloning, the target disk or partition must be at least as large as the source. When writing an ISO, the USB drive must be at least as large as the ISO file.

The write is much slower than expected
The default block size of 512 bytes forces millions of tiny syscalls. Set bs=4M or bs=64K to speed things up. Also check whether other processes are doing heavy I/O on the same device.

The USB does not boot after writing the ISO
Verify that you wrote the image to the disk (/dev/sdb), not a partition (/dev/sdb1). Also confirm the ISO download with its SHA256 checksum; a truncated or corrupted download produces an unbootable stick.

dd: invalid number: ‘4M’
Older or minimal systems may ship a dd that does not accept the M, G, or K suffixes. On those systems, spell out the block size in bytes (bs=4194304 for 4 MB), or install GNU coreutils.

FAQ

What does dd stand for?
The name comes from the IBM Job Control Language statement “Data Definition.” Because of how often the command is used to overwrite disks by mistake, many Linux users jokingly call it “disk destroyer.” Either way, the behavior is the same: it copies bytes from input to output without touching the filesystem layer.

Is dd faster than cp for copying large files?
For regular files on a normal filesystem, cp is usually just as fast and safer to use. dd only wins when you are copying raw devices, working with exact byte offsets, or deliberately limiting how much data is copied with count.

Can I use dd on an SSD without damaging it?
Reading and writing with dd is fine. The concern with SSDs is full-disk wipes: a single pass of zeros is enough to erase visible data, but repeated full-drive writes wear out the flash cells. For secure erase, use blkdiscard or the drive’s built-in secure-erase command.

How do I see how much data dd has copied?
Add status=progress to the command, or send the running process the USR1 signal with sudo kill -USR1 $(pidof dd). Both methods print the byte count and current throughput without interrupting the copy.

Why use bs=4M instead of the default?
The default block size of 512 bytes means dd issues a separate read and write for every 512 bytes of data. Larger block sizes such as 4 MB dramatically reduce syscall overhead and let the kernel fill the disk pipeline efficiently, often cutting copy times by a factor of ten.

Conclusion

dd is one of the most direct tools in the Linux toolbox. It copies bytes, no more and no less, between any two files or devices. That power makes it indispensable for flashing installers, cloning disks, and building image backups, and the same power makes it unforgiving if you point it at the wrong target.

For related disk tools, see the guides on fdisk for partitioning and df for checking free space.

How to Install and Use uv: Fast Python Package Manager

If you have worked with Python for a while, you have probably juggled pip, virtualenv, pip-tools, pipx, and pyenv just to keep a few projects running. Each tool does one thing, and wiring them together is slow and fragile. uv is a single binary that replaces all of them. It is written in Rust, published by Astral (the team behind ruff), and is typically ten to a hundred times faster than pip for common workflows.

This guide explains how to install uv on Linux, create and manage Python projects with it, add dependencies, work with virtual environments, and install specific Python versions.

Quick Reference

  • uv init my-project - Create a new project with pyproject.toml
  • uv add requests - Add a dependency and update the lockfile
  • uv add --dev pytest - Add a development dependency
  • uv remove requests - Remove a dependency
  • uv sync - Install dependencies from the lockfile
  • uv lock - Update the lockfile
  • uv run script.py - Run a command in the project environment
  • uv venv - Create a virtual environment
  • uv pip install requests - Pip-compatible install interface
  • uv python install 3.12 - Install a specific Python version
  • uv python list - List installed and available Python versions
  • uv tool install ruff - Install a CLI tool globally
  • uvx ruff check . - Run a tool without installing it permanently

Installing uv

The recommended install path on Linux is the standalone script from Astral. It downloads a prebuilt binary, places it in ~/.local/bin, and does not require Python to be installed first:

Terminal
curl -LsSf https://astral.sh/uv/install.sh | sh

The script prints where it put the binary and whether it updated your shell rc file. Reload your shell or run source ~/.bashrc so the new PATH takes effect, then verify the install:

Terminal
uv --version
output
uv 0.11.11

If you prefer to install from a package manager, uv is also available through pip, pipx, and Homebrew:

Terminal
pip install uv
Terminal
pipx install uv
Terminal
brew install uv

On Ubuntu, Debian, and derivatives, pip may refuse a system-wide install because the Python environment is marked as externally managed. In that case, use pipx install uv or the standalone installer.

If you installed uv with the standalone installer, update it to the latest release with:

Terminal
uv self update

Creating a Project

uv init scaffolds a new project with a pyproject.toml, a README.md, a .python-version file, and a starter module:

Terminal
uv init my-project
cd my-project
output
Initialized project `my-project`

The generated pyproject.toml looks like this:

pyproject.toml
[project]
name = "my-project"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []

uv has not created a virtual environment yet. That happens the first time you add a dependency or run a command in the project.

Adding and Removing Dependencies

To add a package, use uv add:

Terminal
uv add requests
output
Resolved 5 packages in 320ms
Installed 5 packages in 45ms
+ certifi==2025.1.31
+ charset-normalizer==3.4.1
+ idna==3.10
+ requests==2.32.3
+ urllib3==2.3.0

On the first run, uv creates a .venv directory in the project root, resolves the dependency graph, writes a uv.lock file that pins every package and its transitive dependencies, and installs everything into the venv. The requests line is also added to pyproject.toml under dependencies.

Development-only dependencies go into a separate group with --dev:

Terminal
uv add --dev pytest ruff

They are recorded under a dev dependency group in pyproject.toml and are installed into the same venv, but they are excluded when your project is published or installed as a library.

Remove a dependency with uv remove:

Terminal
uv remove requests

uv updates pyproject.toml, updates uv.lock, and uninstalls the package along with any dependencies that are no longer needed.

Running Code in the Project Environment

You do not have to activate the virtual environment manually. uv run executes a command inside the project venv and syncs dependencies first if needed:

Terminal
uv run python main.py
Terminal
uv run pytest

Scripts declared in pyproject.toml under [project.scripts] are also available:

Terminal
uv run my-command
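For reference, such a script entry in pyproject.toml maps a command name to a function; the module path here (my_project.cli:main) is an assumption for illustration:

```toml
[project.scripts]
my-command = "my_project.cli:main"
```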

If you prefer the classic workflow, activate the venv yourself:

Terminal
source .venv/bin/activate
python main.py

Either approach works, but uv run has the advantage of keeping the environment in sync with uv.lock every time you invoke it.

Syncing and Reproducing an Environment

When you clone a project that uses uv, run uv sync to install exactly the versions pinned in uv.lock:

Terminal
uv sync
output
Resolved 12 packages in 5ms
Installed 12 packages in 110ms

uv sync is deterministic: given the same uv.lock, it produces the same environment on any machine. This is the command to run in CI, in Docker builds, and on every fresh checkout.

If pyproject.toml changes but the lockfile is out of date, refresh the lock with:

Terminal
uv lock

Use uv lock --upgrade to bump every dependency to the latest version allowed by the version constraints in pyproject.toml, and uv lock --upgrade-package requests to bump a single package.

Working with Virtual Environments Directly

uv can also be used as a faster drop-in for virtualenv. To create a standalone venv in the current directory:

Terminal
uv venv
output
Using Python 3.12.8
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate

Specify a Python version with --python:

Terminal
uv venv --python 3.11

Specify a different path:

Terminal
uv venv /tmp/myenv

Once the venv exists, use uv pip as a fast replacement for the regular pip commands:

Terminal
uv pip install requests
uv pip install -r requirements.txt
uv pip freeze
uv pip list

The uv pip interface is intentionally compatible with pip, which makes it easy to adopt uv in existing projects without restructuring them as pyproject.toml-based projects.

For a refresher on classic virtual environments, see the guide on Python virtual environments.

Installing and Managing Python Versions

uv can download and manage Python interpreters without touching your system Python. List the versions available and installed:

Terminal
uv python list

Install a specific version:

Terminal
uv python install 3.12
Terminal
uv python install 3.11 3.13

The downloaded interpreters live under ~/.local/share/uv/python/ and are independent of the system package manager. To use a specific version in a project, set it in the .python-version file or pass --python when creating the venv:

Terminal
uv venv --python 3.13

When uv run starts a project, it reads .python-version and the requires-python constraint in pyproject.toml, then picks or downloads a matching interpreter automatically.

Installing CLI Tools

uv ships with a tool manager similar to pipx. It installs Python-based CLIs in isolated environments so they do not pollute your system Python:

Terminal
uv tool install ruff
output
Resolved 1 package in 80ms
Installed 1 package in 12ms
+ ruff==0.9.2
Installed 1 executable: ruff

The ruff binary is now on your PATH. List, upgrade, or remove tools:

Terminal
uv tool list
uv tool upgrade ruff
uv tool uninstall ruff

To run a tool without installing it permanently, use uvx (an alias for uv tool run):

Terminal
uvx ruff check .

uvx downloads the tool into a cached environment, runs it, and reuses the cache on subsequent invocations. This is the fastest way to try a CLI without making it permanent.

Running Single-File Scripts

A Python script can declare its dependencies inline with a PEP 723 comment block. uv run reads that block and sets up an ephemeral environment before executing the script:

script.py
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///

import requests

response = requests.get("https://api.github.com")
print(response.status_code)

Run the script with:

Terminal
uv run script.py

uv creates a temporary venv, installs requests, runs the script, and keeps the environment in a cache for next time. This pattern is useful for small utilities and one-off automation where a full project is overkill.

Troubleshooting

uv: command not found after install
The installer places the binary in ~/.local/bin, which may not be on your PATH. Add it in your shell rc file: export PATH="$HOME/.local/bin:$PATH", then reload the shell. Verify with which uv.

Installation fails with “error: externally-managed-environment”
This message comes from system pip, not from uv. Use pipx install uv or the standalone installer instead. Installing Python packages into the system Python is discouraged on modern Ubuntu and Debian releases.

uv sync installs different versions than expected
Check that uv.lock is committed to your repository. Without the lockfile, uv sync falls back to resolving from pyproject.toml and may pick newer versions. Commit uv.lock to keep environments reproducible across machines.

uv cannot find a suitable Python interpreter
Run uv python install 3.12 (or whichever version your project requires) to let uv download a matching interpreter. The requires-python field in pyproject.toml tells uv which versions are acceptable.

Permission denied when installing to the system
uv does not need root. If a command prompts for sudo, the destination path is wrong. Use a project virtual environment or uv tool install for CLIs instead of writing to system directories.

FAQ

How is uv different from pip?
pip installs packages into an existing Python environment. uv is a full project manager: it creates and manages virtual environments, locks dependencies, installs Python interpreters, and runs scripts and tools. uv pip also provides a drop-in pip-compatible command for teams that want just the speed without adopting the full workflow.

Does uv replace Poetry or Hatch?
For most projects, yes. uv reads the standard pyproject.toml format used by Poetry and Hatch, manages lockfiles, and handles dependency groups. If you depend on specific features of Poetry plugins or Hatch build hooks, evaluate those needs before migrating.

Is uv compatible with existing requirements.txt workflows?
Yes. uv pip install -r requirements.txt works as a faster replacement for pip install -r, and uv pip compile generates a pinned requirements file from your unpinned input, similar to pip-compile from pip-tools.

Can I use uv with Docker?
Yes, and it is a good fit. The recommended pattern is to copy pyproject.toml and uv.lock first, run uv sync --frozen --no-install-project to install dependencies, then copy the rest of the source and run uv sync --frozen. This keeps the dependency layer cached across builds.
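A minimal Dockerfile following that pattern might look like this sketch (the base image, entry point, and uv image tag are assumptions):

```dockerfile
FROM python:3.12-slim

# Copy the uv binary from Astral's official image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

WORKDIR /app

# Dependency layer: cached until pyproject.toml or uv.lock changes.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project

# Source layer: copied after dependencies so code edits do not bust the cache.
COPY . .
RUN uv sync --frozen

CMD ["uv", "run", "python", "main.py"]
```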

Where does uv store downloaded interpreters and package cache?
Interpreters go under ~/.local/share/uv/python/ and the package cache lives in ~/.cache/uv/. Both paths respect the XDG environment variables, so you can override them by setting XDG_DATA_HOME and XDG_CACHE_HOME.

Conclusion

uv rolls together the jobs that previously required pip, virtualenv, pip-tools, pipx, and pyenv into one fast binary. The speed is the headline feature, but the real payoff is a single, consistent workflow for projects, tools, scripts, and Python versions.

For related Python tooling on Linux, see the guide on installing Python on Ubuntu 24.04 and the guide on Python virtual environments.

netstat Command in Linux: Network Connections and Statistics

When something on a server is holding port 80, a connection is refusing to close, or a service is not reachable from the outside, the first question is almost always the same: what is actually listening, and who is talking to whom. For years, the answer on Linux was the netstat command.

netstat is part of the classic net-tools package and prints network connections, listening ports, routing tables, interface counters, and per-protocol statistics. It has been deprecated in favor of ss and ip, but it is still installed on many systems and is the tool many sysadmins reach for first. This guide explains how to read its output and which flags cover the cases you run into day to day.

Install netstat

On most modern distributions, net-tools is not installed by default. If netstat is not available, install it with your package manager.

On Ubuntu, Debian, and Derivatives:

Terminal
sudo apt install net-tools

On Fedora, RHEL, and Derivatives:

Terminal
sudo dnf install net-tools

Once the package is installed, verify the binary is on your PATH:

Terminal
netstat --version

netstat Syntax

The general form of the command is:

txt
netstat [OPTIONS]

Run without options, netstat prints a list of open non-listening sockets, which is rarely what you want. In practice, you almost always pass a combination of flags that describe what kind of sockets you care about (TCP, UDP, Unix), what state they are in, and how you want the output formatted.

List All Connections

To list every connection, both listening and established, use the -a option:

Terminal
sudo netstat -a

The output is divided into two parts. The top lists Internet sockets with columns for protocol, receive and send queue sizes, local and foreign address, and state. The bottom lists Unix domain sockets used for local inter-process communication.

Running netstat with sudo is recommended: without root privileges, netstat cannot read process information for sockets owned by other users.

Show TCP and UDP Connections

The -t and -u flags filter the output by protocol. To show only TCP connections:

Terminal
sudo netstat -at

To show only UDP connections:

Terminal
sudo netstat -au

You can combine both flags to get TCP and UDP together:

Terminal
sudo netstat -atu

Show Listening Ports

When you want to know which services are accepting new connections, use the -l option. For TCP, it restricts the output to sockets in the LISTEN state. UDP does not use the same connection states, but UDP sockets that are ready to receive traffic are still shown.

Terminal
sudo netstat -tuln

The combined flags read as: show TCP (-t) and UDP (-u) sockets that are listening (-l), with numeric addresses and ports (-n). The output looks similar to this:

output
Proto Recv-Q Send-Q Local Address   Foreign Address State
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN
tcp        0      0 127.0.0.1:3306  0.0.0.0:*       LISTEN
tcp6       0      0 :::80           :::*            LISTEN
udp        0      0 0.0.0.0:68      0.0.0.0:*

From the output, we can see that SSH is listening on all interfaces on port 22, MySQL is bound to localhost only on port 3306, and a web server is listening on port 80 over IPv6.

Show the Process Using a Port

To find out which program owns a socket, add the -p option. It prints the PID and the process name next to each connection:

Terminal
sudo netstat -tulnp
output
Proto Recv-Q Send-Q Local Address   Foreign Address State  PID/Program name
tcp        0      0 0.0.0.0:22      0.0.0.0:*       LISTEN 812/sshd
tcp        0      0 127.0.0.1:3306  0.0.0.0:*       LISTEN 1034/mysqld
tcp6       0      0 :::80           :::*            LISTEN 1591/nginx: master

The PID/Program name column ties each listening port to a process. This is the quickest way to find out which service is holding a port when you get an Address already in use error.

To check one specific port, pipe the output to grep:

Terminal
sudo netstat -tulnp | grep ':80'

If the command prints a row, a process is listening on port 80. If it prints nothing, no TCP or UDP listener matched that port on the local system.

Show Numeric Output

By default, netstat resolves IP addresses to hostnames and port numbers to service names (for example, 22 becomes ssh). On a busy system, this can be slow because each address needs a DNS lookup.

The -n option disables name resolution and prints raw numbers:

Terminal
sudo netstat -n

Numeric output is also easier to pipe into other tools such as grep or awk, because the field values are predictable.

Continuous Monitoring

The -c option tells netstat to print the output every second until you stop it with Ctrl+C. It is useful when you want to watch a connection appear, change state, and close:

Terminal
sudo netstat -atnc

Each iteration prints the full table again, so it is best used together with a filter such as grep to isolate a single connection.

Show TCP Connection States

TCP connections move through states such as ESTABLISHED, LISTEN, TIME_WAIT, and CLOSE_WAIT. To show active TCP sockets with numeric addresses, run:

Terminal
sudo netstat -ant

The State column tells you where each connection is in the TCP lifecycle:

output
Proto Recv-Q Send-Q Local Address    Foreign Address     State
tcp        0      0 192.168.1.10:22  192.168.1.5:52710   ESTABLISHED
tcp        0      0 192.168.1.10:80  203.0.113.25:51422  TIME_WAIT
tcp        0      0 127.0.0.1:3306   127.0.0.1:41834     CLOSE_WAIT

ESTABLISHED means the connection is active. TIME_WAIT is normal after a connection closes. A large number of CLOSE_WAIT entries can mean the local application is not closing sockets correctly.

To show only established TCP connections, filter the output:

Terminal
sudo netstat -ant | grep ESTABLISHED
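If you want a quick tally of how many connections are in each state, count the State column with awk (the NR > 2 condition skips the two header lines netstat prints):

```shell
sudo netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn
```

Each output row is a count followed by a state name, sorted with the most common state first.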

For new troubleshooting work, ss gives better built-in state filters, such as ss -tn state established.

Count Connections

On busy web servers, you may want a quick count instead of a full connection table. To count all current TCP connections that involve port 80, run:

Terminal
sudo netstat -ant | grep ':80' | wc -l

To count only established connections on port 80, add the TCP state to the filter:

Terminal
sudo netstat -ant | grep ':80' | grep ESTABLISHED | wc -l

These counts are useful as a quick signal, but they are not a replacement for application metrics. The number can change while the pipeline is running, especially on high-traffic systems.

Display the Routing Table

To print the kernel IP routing table, use the -r option:

Terminal
netstat -rn
output
Kernel IP routing table
Destination   Gateway       Genmask         Flags MSS Window irtt Iface
0.0.0.0       192.168.1.1   0.0.0.0         UG      0 0         0 eth0
192.168.1.0   0.0.0.0       255.255.255.0   U       0 0         0 eth0

The first row is the default route, which sends all traffic not matching another rule to 192.168.1.1. The ip route command from the iproute2 suite is the modern equivalent and is what you should use on new systems.

Interface Statistics

The -i option prints a per-interface summary that includes received and transmitted packets, errors, drops, and the MTU:

Terminal
netstat -i
output
Kernel Interface table
Iface   MTU    RX-OK RX-ERR RX-DRP    TX-OK TX-ERR TX-DRP Flg
eth0   1500  2345678      0     12  1987654      0      0 BMRU
lo    65536    55432      0      0    55432      0      0 LRU

A growing RX-ERR or TX-ERR column is a hint that you have a cabling, duplex, or driver issue on that interface.

Protocol Statistics

To print a summary of statistics for each protocol, use -s:

Terminal
netstat -s

The output is long and grouped by protocol (Ip, Tcp, Udp, and so on). It includes counters such as total packets received, segments retransmitted, and connection resets. You can narrow the output to a single protocol by combining -s with -t or -u:

Terminal
netstat -st

netstat vs ss

On modern Linux distributions, ss is the recommended replacement for netstat. It is faster and reads information directly from the kernel through Netlink, with richer filter support. The ip command replaces the routing and interface parts of netstat.

A few common translations:

  • netstat -tuln is equivalent to ss -tuln
  • netstat -tulnp is equivalent to sudo ss -tulnp
  • netstat -s maps to ss -s (a much shorter summary)
  • netstat -i maps to ip -s link
  • netstat -r maps to ip route

For a full walkthrough of the replacement tool, see the guide on the ss command.

Quick Reference

For a printable quick reference, see the netstat cheatsheet.

  • netstat - Show active non-listening sockets
  • sudo netstat -a - Show all listening and non-listening sockets
  • sudo netstat -at - Show all TCP sockets
  • sudo netstat -au - Show all UDP sockets
  • sudo netstat -tuln - Show TCP and UDP listening sockets with numeric output
  • sudo netstat -tulnp - Show listening sockets with PID and process name
  • sudo netstat -tulnp | grep ':80' - Find the process listening on port 80
  • sudo netstat -ant - Show all TCP sockets with numeric addresses
  • sudo netstat -ant | grep ESTABLISHED - Show established TCP connections
  • sudo netstat -ant | grep ':80' | wc -l - Count TCP connections involving port 80
  • sudo netstat -atnc - Refresh TCP connection output every second
  • netstat -rn - Show the routing table with numeric addresses
  • netstat -i - Show interface statistics
  • netstat -s - Show protocol statistics
  • netstat -st - Show TCP protocol statistics

Troubleshooting

netstat: command not found
The net-tools package is not installed. Install it with your package manager, or use ss instead.

No PID/Program name column shown
You need to run the command with sudo. Without root privileges, netstat cannot read process information for sockets owned by other users.

Output is slow or hangs
DNS resolution is slow or failing. Add the -n option to disable name lookups and print numeric addresses and ports.

A port shows up as LISTEN on :: but not on 0.0.0.0
The service is listening on the IPv6 wildcard address. On systems with dual-stack enabled, this accepts IPv4 traffic too. Check your IPv6 configuration if only IPv4 clients are failing.

FAQ

Is netstat deprecated?
Yes. The net-tools suite, which includes netstat, ifconfig, and route, has been deprecated for years in favor of the iproute2 tools ss and ip. It is still functional and still shipped by most distributions, but new scripts should target ss and ip.

What is the difference between netstat on Linux and Windows?
The command name is the same, and the general idea is the same, but the flags are different. This guide covers the Linux version from net-tools. On Windows, the closest equivalents to netstat -tulnp are netstat -ano and Get-NetTCPConnection in PowerShell.

How do I find what process is using a specific port?
Run sudo netstat -tulnp | grep ':PORT', replacing PORT with the port number. The last column shows the PID and program name. With ss, the same query is sudo ss -tulnp 'sport = :PORT'.

Why is a port still TIME_WAIT after I stopped the service?
TIME_WAIT is a normal TCP state that keeps a closed connection around for a short period so late packets can be handled safely. The kernel clears these entries automatically, usually within one or two minutes.

Conclusion

netstat remains useful when you need to inspect network connections on systems that still have net-tools installed. For new scripts and current Linux systems, prefer ss for sockets and ip for routes and interface statistics.

git branch Command: Create, List, and Delete Branches

Branches are how Git keeps your work separated from the main line of development. A new branch lets you experiment, fix a bug, or build a feature without touching what is already shipped, and a merge brings the work back together when it is ready. The tool that creates, lists, and removes those branches is git branch.

This guide explains how to use git branch to manage local and remote branches, filter by merged or upstream state, rename branches, and clean up stale ones.

git branch Syntax

The general form of the command is:

txt
git branch [OPTIONS] [BRANCH] [START_POINT]

With no arguments, git branch lists your local branches and marks the currently checked-out one with an asterisk. With a branch name, it creates a new branch pointing at the current commit. With additional flags, it can list remote branches, delete branches, rename them, or set upstream tracking.

Listing Branches

To list the local branches in the current repository, run git branch with no arguments:

Terminal
git branch
output
  develop
  feature/login
* main

The asterisk marks the branch you are currently on. The branches are sorted alphabetically.

To include remote-tracking branches, add -a (all):

Terminal
git branch -a
output
  develop
  feature/login
* main
  remotes/origin/HEAD -> origin/main
  remotes/origin/develop
  remotes/origin/main

Anything prefixed with remotes/ is a local reference to a branch on a remote, not a branch you can commit to directly. Check it out to create a local tracking branch.

To list only remote-tracking branches, use -r:

Terminal
git branch -r

To see the latest commit on each branch, add -v:

Terminal
git branch -v
output
  develop       a1b2c3d Add release notes
  feature/login 3d4e5f6 Stub login form
* main          7e8f9a0 Merge pull request #42

For even more detail, -vv also prints the upstream branch each local branch tracks and whether it is ahead, behind, or in sync:

Terminal
git branch -vv
output
  develop       a1b2c3d [origin/develop] Add release notes
  feature/login 3d4e5f6 [origin/feature/login: ahead 2] Stub login form
* main          7e8f9a0 [origin/main] Merge pull request #42

Creating a Branch

To create a new branch, pass the name as an argument. This creates the branch at the current HEAD but does not switch to it:

Terminal
git branch feature/login

To start the branch from a specific commit, tag, or another branch, pass it as the second argument:

Terminal
git branch hotfix v1.4.0
Terminal
git branch hotfix 3d4e5f6

Creating a branch without switching to it is useful when you want to mark a commit for later work. Most of the time, though, you create a branch and switch to it in one step. The modern command for that is git switch -c:

Terminal
git switch -c feature/login

The older equivalent is git checkout -b feature/login. Both create the branch at the current commit and check it out immediately. See the create and list Git branches guide for a fuller walkthrough.

Renaming a Branch

To rename the branch you are currently on, use -m with the new name:

Terminal
git branch -m new-name

To rename any branch, pass both the old and the new name:

Terminal
git branch -m old-name new-name

If a branch with the target name already exists, this form fails rather than overwrite it. To force the rename and overwrite the existing branch, use the capital -M instead. Use -M with care: it discards the destination branch.

After renaming a branch that is already pushed, push the new name to the remote and delete the old one so collaborators see the change. The rename a Git branch guide walks through the full local-and-remote flow.

Deleting a Branch

To delete a local branch that has been merged into the current branch, use -d:

Terminal
git branch -d feature/login
output
Deleted branch feature/login (was 3d4e5f6).

Git refuses to delete a branch that has unmerged work, to protect you from losing commits:

output
error: The branch 'feature/wip' is not fully merged.
If you are sure you want to delete it, run 'git branch -D feature/wip'.

If you are sure the work is no longer needed, force the deletion with -D:

Terminal
git branch -D feature/wip

You cannot delete the branch you are currently on. Switch to another branch first, then delete it. For remote branch deletion, see the delete local and remote Git branch guide.
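The create, rename, and delete steps compose into a short end-to-end run. The sketch below uses a throwaway repository so nothing real is touched; the branch names are examples:

```shell
set -e
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git branch feature/login                     # create at HEAD
git branch -m feature/login feature/signin   # rename
git branch -d feature/signin                 # delete: it points at HEAD, so -d succeeds
git branch                                   # only the default branch remains
```

Because the branch was never moved off the initial commit, it counts as merged and the plain -d deletion goes through without a force flag.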

Filtering by Merged State

git branch can filter branches by whether they are fully merged into the current branch. List branches that have been merged:

Terminal
git branch --merged

This is the safest way to find branches that can be deleted without losing history. Before deleting anything, preview the filtered list:

Terminal
git branch --merged | grep -vE '^\*|^[[:space:]]*(main|develop)$'

The grep filter keeps the current branch and the protected main and develop branches out of the list. If the output looks correct, pipe the same list to git branch -d:

Terminal
git branch --merged | grep -vE '^\*|^[[:space:]]*(main|develop)$' | xargs -r git branch -d

To list branches that still have unmerged work, use --no-merged:

Terminal
git branch --no-merged

These are the branches you should review before deleting, because they contain commits that are not on your current branch.

Filtering Remote Branches Too

--merged and --no-merged accept a reference argument. To check against main from the remote:

Terminal
git branch --merged origin/main

This is useful in CI or cleanup scripts where you want to see which local branches are safely merged into the shipped code, regardless of which branch you are currently on.

Setting an Upstream Branch

An upstream branch tells Git which remote branch your local branch tracks for git pull and git push. To set it on an existing branch, use --set-upstream-to (or -u):

Terminal
git branch --set-upstream-to=origin/main main
output
Branch 'main' set up to track remote branch 'main' from 'origin'.

To remove tracking information from a branch, use --unset-upstream:

Terminal
git branch --unset-upstream main

When you push a new branch to a remote for the first time, pass -u to git push to create the remote branch and set it as the upstream in one step:

Terminal
git push -u origin feature/login

Showing the Current Branch

To print just the name of the branch you are on, use --show-current:

Terminal
git branch --show-current
output
main

This is the right command for shell prompts and scripts. It prints nothing if you are in a detached HEAD state, which scripts can detect and handle.
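A minimal prompt-helper sketch built on that behavior, branching on the empty string that --show-current prints in a detached HEAD:

```shell
# --show-current prints nothing on a detached HEAD, so an empty
# result signals "not on a branch"
branch=$(git branch --show-current 2>/dev/null)
if [ -n "$branch" ]; then
  echo "on branch: $branch"
else
  echo "detached HEAD (or not inside a repository)"
fi
```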

Copying a Branch

git branch -c copies a branch, including its reflog and configuration, to a new name:

Terminal
git branch -c old-branch new-branch

The original branch is kept. Use the capital -C to force the copy and overwrite an existing target branch. Copying is less common than creating a fresh branch, but it is useful when you want to preserve the history and config of a branch under a new name.

Sorting the Branch List

By default, branches are listed alphabetically. The --sort option changes the order. To sort by the date of the most recent commit:

Terminal
git branch --sort=-committerdate

The minus sign reverses the order, so the most recently updated branch appears first. This is handy when you return to a project after a break and want to see what you were working on last.
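Assuming Git 2.13 or newer, --sort also combines with --format to add a relative date column, which makes the ordering visible at a glance (the format string here is just an illustration):

```shell
# most recently updated branch first, with when it was last committed to
git branch --sort=-committerdate \
  --format='%(committerdate:relative)%09%(refname:short)'
```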

Quick Reference

Command Description
git branch List local branches
git branch -a List local and remote-tracking branches
git branch -r List only remote-tracking branches
git branch -v Show the latest commit on each branch
git branch -vv Show upstream tracking and ahead/behind counts
git branch NAME Create a new branch at HEAD
git branch NAME START Create a branch at a specific commit or tag
git branch -m NEW Rename the current branch
git branch -m OLD NEW Rename another branch
git branch -d NAME Delete a merged branch
git branch -D NAME Force-delete a branch with unmerged work
git branch --merged List branches merged into the current branch
git branch --no-merged List branches with unmerged work
git branch -u origin/main Set the upstream of the current branch
git branch --show-current Print the current branch name
git branch --sort=-committerdate Sort by most recent commit

FAQ

What is the difference between git branch and git checkout?
git branch manages branches: creating, listing, renaming, and deleting. git checkout (and its modern replacement git switch) changes which branch you are on. Creating and switching in one step uses git switch -c or git checkout -b.

How do I delete a remote branch?
git branch -d only deletes local branches. To delete a branch on the remote, run git push origin --delete BRANCH_NAME. The delete local and remote Git branch guide covers the full flow, including cleanup of stale tracking references.

Why does Git say “branch is not fully merged”?
The branch has commits that are not reachable from your current branch. Git is protecting you from losing work. Review the commits with git log BRANCH_NAME, merge them if needed, and then use -d again. If you are certain the commits are not needed, force the delete with -D.

How do I see which branch a remote branch tracks locally?
Run git branch -vv. The output includes each local branch’s upstream in brackets along with its ahead and behind counts relative to that upstream.

Can I list branches sorted by who committed most recently?
Yes. Use git branch --sort=-committerdate to sort branches by the timestamp of their most recent commit, with the freshest branch first. This is a fast way to see what is actively being worked on.

Conclusion

git branch covers the full lifecycle of a branch: creating it, renaming it, keeping track of what it points at, and deleting it when the work is done. Combined with git switch for moving between branches and git merge for integrating changes, it is the foundation of any Git workflow.

For related Git commands, see the git diff guide for reviewing changes and the git log guide for inspecting branch history.

git fetch vs git pull: What Is the Difference?

Sooner or later, every Git user runs into the same question: should you use git fetch or git pull to get the latest changes from the remote? The two commands look similar, and both talk to the remote repository, but they do very different things to your working branch.

This guide explains the difference between git fetch and git pull, how each command works, and when to use one over the other.

Quick Reference

If you want to… Use
Download remote changes without touching your branch git fetch
Download and immediately integrate remote changes git pull
Preview incoming commits before merging git fetch + git log HEAD..origin/main --oneline
Update your branch but refuse merge commits git pull --ff-only
Rebase local commits on top of remote changes git pull --rebase

What git fetch Does

git fetch downloads commits, branches, and tags from a remote repository and stores them locally under remote-tracking references such as origin/main or origin/feature-x. It does not touch your working directory, your current branch, or the index.

Terminal
git fetch origin
output
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
From github.com:example/project
4a1c2e3..9f7b8d1 main -> origin/main
a6d4c02..1e2f3a4 feature-x -> origin/feature-x

The output shows which remote branches were updated. After the fetch, the local branch main is still pointing to whatever commit it was on before. Only the remote-tracking branch origin/main has moved.

You can then inspect what changed before doing anything with it:

Terminal
git log main..origin/main

This is the safe way to look at new work without merging or rebasing yet. It gives you a preview of what the remote has that your branch does not.

What git pull Does

git pull is a convenience command. Under the hood, it runs git fetch followed by an integration step (a merge by default, or a rebase if configured). In other words:

Terminal
git pull origin main

is roughly equivalent to:

Terminal
git fetch origin
git merge origin/main

After git pull, your current branch has moved forward, and your working directory may contain new files, updated files, or merge conflicts that need resolving. The change is immediate and affects your checked-out branch.

Side-By-Side Comparison

The difference between the two commands comes down to what they change in your repository.

Aspect git fetch git pull
Downloads new commits from remote Yes Yes
Updates remote-tracking branches (origin/*) Yes Yes
Updates your current local branch No Yes
Modifies the working directory No Yes
Can create merge conflicts No Yes
Safe to run at any time Yes Only when ready to integrate

git fetch is read-only from your branch’s perspective. git pull actively changes your branch.

When to Use git fetch

Use git fetch when you want to see what changed on the remote without integrating anything yet. This is useful before you start new work, before rebasing a feature branch, or when you want to inspect a teammate’s branch safely.

For example, after fetching, you can compare your branch with the remote like this:

Terminal
git log HEAD..origin/main --oneline

This shows commits that exist on the remote but not in your current branch. Running git fetch often is cheap and has no side effects on your work, which is why many developers do it regularly throughout the day.

When to Use git pull

Use git pull when you are ready to bring remote changes into your current branch and keep working on top of them. The common flow looks like this:

Terminal
git checkout main
git pull origin main

This is the quickest way to sync a branch with its upstream when you know the integration will be straightforward, such as on a shared main branch where you rarely have local commits. git pull always updates the branch you currently have checked out, so it is worth confirming where you are before you run it.

If you want a cleaner history without merge commits, configure pull to rebase:

Terminal
git config --global pull.rebase true

From that point on, git pull runs git fetch followed by git rebase instead of git merge. Your local commits are replayed on top of the fetched changes, producing a linear history.

Avoiding Merge Surprises

The biggest reason to prefer git fetch over git pull in day-to-day work is control. A bare git pull on a branch where you have local commits can create a merge commit, introduce conflicts, or rewrite history during a rebase, all in one step.

A safer pattern is:

Terminal
git fetch origin
git log HEAD..origin/main --oneline
git merge origin/main

You fetch first, review what is coming, then decide whether to merge, rebase, or defer the integration. For active feature branches, this workflow avoids most of the “what just happened to my branch” moments.

If you already ran git pull and need to inspect what happened, start with:

Terminal
git log --oneline --decorate -n 5

This helps you confirm whether Git created a merge commit, fast-forwarded the branch, or rebased your local work.

If the pull created a merge commit that you do not want, ORIG_HEAD often points to the commit your branch was on before the pull. In that case, you can reset back to it:

Terminal
git reset --hard ORIG_HEAD
Warning
git reset --hard discards uncommitted changes. Use it only when you are sure you do not need the current state of your working tree.

Useful Flags

A few flags make both commands more predictable in real workflows.

  • git fetch --all - Fetch from every configured remote, not only origin.
  • git fetch --prune - Remove remote-tracking branches that no longer exist on the remote.
  • git fetch --tags - Download tags in addition to branches.
  • git pull --rebase - Rebase your local commits on top of the fetched changes instead of merging.
  • git pull --ff-only - Update the branch only if Git can fast-forward it; prevents unexpected merge commits.

--ff-only is especially useful on shared branches. If your branch cannot move forward by simply advancing the pointer, the pull fails and you decide what to do next.
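To get that behavior every time without typing the flag, the same policy can be set once in configuration:

```shell
# plain `git pull` now refuses anything that is not a fast-forward
git config --global pull.ff only
```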

FAQ

Does git fetch modify any local files?
No. It only updates remote-tracking references such as origin/main. Your branches, working directory, and staging area stay the same.

Is git pull the same as git fetch plus git merge?
By default, yes. With pull.rebase set to true (or --rebase on the command line), it runs git fetch followed by git rebase instead.

Which one should I use daily?
Prefer git fetch for visibility and git pull --ff-only (or git pull --rebase) for the integration step. A bare git pull works, but it hides two operations behind one command.

How do I see what git fetch downloaded?
Use git log main..origin/main to see commits on the remote that are not yet in your local branch, or git diff main origin/main to compare the content directly.

Can git fetch cause conflicts?
No. Conflicts only happen when you merge or rebase the fetched commits into your branch. Fetching itself is conflict-free.

Conclusion

git fetch lets you inspect remote changes without touching your branch, while git pull fetches and integrates those changes in one step. If you want more control and fewer surprises, fetch first, review the incoming commits, and then merge or rebase on your own terms. For more Git workflow details, see the git log command and git diff command guides.

sha256sum and md5sum Commands: Verify File Integrity in Linux

When you download an ISO image, a backup archive, or a large release tarball, there is no easy way to tell by looking whether the file arrived intact. A single flipped bit can break a boot image or turn a compressed archive into junk, and a compromised mirror can serve a tampered file that looks legitimate. The fix is to compare a cryptographic fingerprint of the file against a value the publisher has signed or posted somewhere you trust.

This guide shows how to use sha256sum and md5sum to generate, compare, and verify checksums on Linux, and when to use each one.

sha256sum and md5sum Syntax

Both commands follow the same form:

txt
sha256sum [OPTIONS] [FILE]...
md5sum [OPTIONS] [FILE]...

Without options, each command prints a hex digest followed by two spaces and the file name. Pass -c to verify files against a list of previously generated checksums.

sha256sum vs md5sum

The two tools do the same job: they read a file and print a fixed-length fingerprint. The difference is the algorithm and, by extension, how safe the result is against intentional tampering.

md5sum uses the MD5 algorithm and produces a 128-bit digest. MD5 is fast but has been broken for years: it is possible to construct two different files that share the same MD5 hash. Treat it as a checksum for accidental corruption only, not for authenticity or security.

sha256sum uses SHA-256 from the SHA-2 family and produces a 256-bit digest. It is the default choice for verifying downloads, release artifacts, and anything where the threat model includes a malicious middle party. Most Linux distributions publish SHA256SUMS files next to their ISO images for exactly this reason.

When in doubt, use sha256sum. Reach for md5sum only when a publisher provides MD5 values and nothing stronger, or when you need quick parity checks between known-good files.

Generating a Checksum for a File

To produce a SHA-256 digest for a single file, pass it as an argument:

Terminal
sha256sum ubuntu-24.04.2-desktop-amd64.iso
output
5e38b55d57d94ff029719342357325ed3bda38fa80054f9330dc789cd2d43931  ubuntu-24.04.2-desktop-amd64.iso

The output is one line: the hex digest, two spaces, and the file name. The same file always produces the same digest, so you can run the command again after a copy or a download and compare the values by eye.
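In a script, that eyeball comparison becomes a string comparison. A minimal sketch, using two throwaway copies in place of a real download:

```shell
# two identical files stand in for an original and its downloaded copy
printf 'payload\n' > original.bin
cp original.bin downloaded.bin
a=$(sha256sum original.bin | awk '{print $1}')
b=$(sha256sum downloaded.bin | awk '{print $1}')
[ "$a" = "$b" ] && echo "digests match" || echo "digests differ"
```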

md5sum behaves the same way:

Terminal
md5sum ubuntu-24.04.2-desktop-amd64.iso
output
2e3720b76b2f9f96edc43ec4d87d7d52  ubuntu-24.04.2-desktop-amd64.iso

Notice that the MD5 digest is shorter. That is the 128-bit hash encoded in 32 hex characters, compared with 64 hex characters for SHA-256.

Generating Checksums for Multiple Files

Both commands accept any number of file arguments and print one line per file:

Terminal
sha256sum *.tar.gz
output
b2b09c1e04b2a3a4c5d6e7f890123456789abcdef0123456789abcdef0123456  backup-2026-04-01.tar.gz
c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8091a2b3c4d5e6f708192a3b4c5d6e7f8  backup-2026-04-08.tar.gz

To save the output to a checksum file that you can share or verify later, redirect it:

Terminal
sha256sum *.tar.gz > SHA256SUMS

The resulting SHA256SUMS file can be published alongside the archives, and anyone who downloads them can run a single command to confirm that their copy matches.

Verifying a File Against a Known Checksum

The most common task is to check that a download matches the digest the publisher posted. Distributions usually ship a SHA256SUMS file that lists every release file and its digest. Change into the directory that holds both the archive and the checksum file, then pass -c:

Terminal
sha256sum -c SHA256SUMS
output
ubuntu-24.04.2-desktop-amd64.iso: OK
ubuntu-24.04.2-live-server-amd64.iso: OK

sha256sum reads each line of SHA256SUMS, recomputes the digest for the named file, and prints OK when they match. Any line whose file is missing or whose digest differs prints a clear error and causes the command to exit with a non-zero status, which is convenient in scripts.

When you only care about one file out of many, grep the relevant line into the check:

Terminal
grep "ubuntu-24.04.2-desktop-amd64.iso" SHA256SUMS | sha256sum -c -

The trailing - tells sha256sum to read the checksum list from standard input. This keeps the verification scoped to a single file without creating a second checksum file.

Comparing a File to a Published Digest

Sometimes the publisher does not provide a full SHA256SUMS file but instead shows a single digest on a release page. You can compare it directly without creating a file:

Terminal
echo "5e38b55d57d94ff029719342357325ed3bda38fa80054f9330dc789cd2d43931  ubuntu-24.04.2-desktop-amd64.iso" | sha256sum -c -
output
ubuntu-24.04.2-desktop-amd64.iso: OK

The key detail is the double space between the digest and the file name. That is the exact format sha256sum produces and expects, and a single space will cause the check to fail.
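A quick way to see that format in action is to build the line with printf, which makes the two-space separator explicit (sample.txt is a throwaway file):

```shell
# note the two spaces in the printf format string
printf 'hello\n' > sample.txt
digest=$(sha256sum sample.txt | awk '{print $1}')
printf '%s  %s\n' "$digest" sample.txt | sha256sum -c -
```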

Quiet and Warn Modes

During an automated check, the per-file OK lines can be noisy. The --quiet option suppresses successful lines so only failures appear:

Terminal
sha256sum -c --quiet SHA256SUMS

If every file passes, the command prints nothing and exits with status 0. If a file fails, you see a single failure line and a non-zero exit status, which fits well in a CI job or a backup script.
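That exit-status contract keeps the verification step of a backup script short. A sketch, with placeholder file names:

```shell
# generate a checksum list, then verify it silently; any failure
# stops the script with a non-zero status
printf 'backup data\n' > backup-2026-04-01.tar.gz
sha256sum backup-2026-04-01.tar.gz > SHA256SUMS
if sha256sum -c --quiet SHA256SUMS; then
  echo "all checksums OK"
else
  echo "checksum failure" >&2
  exit 1
fi
```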

To flag lines in a checksum file that are not formatted correctly, add --warn. This is helpful when the file was assembled by hand and you want to be sure every entry parses.

Common Options

The flags below are the ones you are likely to use day to day:

  • -c, --check - Read digests from a file and verify each one.
  • -b, --binary - Mark files as binary in the output (default on Linux).
  • -t, --text - Read files in text mode (rare on Linux, kept for portability).
  • --quiet - Suppress OK lines when checking.
  • --status - Print nothing; rely on the exit status alone.
  • --ignore-missing - Skip files listed in the digest file that are not present.
  • --tag - Output BSD-style tagged format, useful when mixing hash algorithms.

md5sum accepts the same flags, which makes it easy to swap one command for the other when the algorithm changes.

Quick Reference

Command Description
sha256sum file.iso Generate a SHA-256 checksum for one file
md5sum file.iso Generate an MD5 checksum for one file
sha256sum *.tar.gz > SHA256SUMS Save checksums for multiple files
sha256sum -c SHA256SUMS Verify files against a checksum list
grep "file.iso" SHA256SUMS | sha256sum -c - Verify a single file from a checksum list
sha256sum -c --quiet SHA256SUMS Show only failures during verification

Verifying a Download End to End

Putting the pieces together, a typical download check looks like this:

Terminal
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.2-desktop-amd64.iso
wget https://releases.ubuntu.com/24.04/SHA256SUMS
sha256sum -c --ignore-missing SHA256SUMS

The --ignore-missing flag keeps the check focused on the file you actually downloaded instead of failing on every other release listed in SHA256SUMS.

For full confidence, also verify the signature on SHA256SUMS itself using the publisher’s GPG key. A digest is only as trustworthy as the source you got it from, so a signed checksum file closes the loop.

Troubleshooting

Checksum mismatch on a freshly downloaded file
Re-download the file, ideally from a different mirror. The most common cause is a truncated transfer or a network error. If the second download still fails, the file on the mirror may be stale or tampered with, and you should report it to the project.

No such file or directory when running with -c
The names in the checksum file are resolved relative to the current directory. Change into the directory that holds the files, or edit the checksum file to use paths that match where the files live.

improperly formatted checksum line warnings
A single space between the digest and the file name instead of two, trailing whitespace, or Windows line endings will all trip the parser. Run dos2unix on the file, or recreate it with a fresh run such as sha256sum *.tar.gz > SHA256SUMS to reset the format.

The MD5 digest matches but you still do not trust the file
You are right to be cautious. MD5 collisions are practical, so match an MD5 against accidental corruption only. Ask the publisher for a SHA-256 digest or a signed checksum file.

FAQ

Which is faster, sha256sum or md5sum?
md5sum is faster, sometimes noticeably so on large files. The speed difference rarely matters on modern hardware, and it is not a good reason to pick MD5 over SHA-256 for security-sensitive checks.

Can I use sha256sum on a directory?
Not directly. Hash tools operate on files. To produce a digest that represents an entire directory, pipe a deterministic listing such as find ... -type f -print0 | sort -z | xargs -0 sha256sum through another sha256sum.
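A runnable sketch of that approach, using a small throwaway tree:

```shell
# hash every file in a stable order, then hash the resulting list
# to get one digest for the whole tree
mkdir -p demo/sub
printf 'a\n' > demo/file1
printf 'b\n' > demo/sub/file2
find demo -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum
```

The sort -z step matters: without a deterministic file order, the same tree could produce different digests on different runs.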

Where do I find the expected digest for a Linux distro ISO?
Every major distribution publishes a SHA256SUMS or SHA512SUMS file on its download page, usually along with a detached GPG signature. Prefer those files over digests shown in third-party blog posts.

Is sha256sum available by default on Linux?
Yes. Both sha256sum and md5sum ship with the GNU coreutils package, which is installed on every mainstream Linux distribution.

Conclusion

sha256sum should be the default for verifying downloads and backups, with md5sum reserved for quick corruption checks when the publisher provides nothing stronger. When you pair a SHA-256 digest with a signed checksum file and a trusted key, you can answer the question that matters most: did I get the file the publisher actually shipped?

git cherry-pick Command: Apply Commits from Another Branch

Sometimes the change you need is already written, just on the wrong branch. A hotfix may land on main when it also needs to go to a maintenance branch, or a useful commit may be buried in a feature branch that you do not want to merge wholesale. In that situation, git cherry-pick lets you copy the effect of a specific commit onto your current branch.

This guide explains how git cherry-pick works, how to apply one or more commits safely, and how to handle the conflicts that can appear along the way.

git cherry-pick Syntax

The general syntax for git cherry-pick is:

txt
git cherry-pick [OPTIONS] COMMIT...
  • OPTIONS - Flags that change how Git applies the commit.
  • COMMIT - One or more commit hashes, branch references, or commit ranges.

git cherry-pick replays the changes introduced by the selected commit on top of your current branch. Git creates a new commit, so the result has a different commit hash even when the file changes are the same.

Cherry-Picking a Single Commit

Start by finding the commit you want to copy. This example lists the recent commits on a feature branch:

Terminal
git log --oneline feature/auth
output
a3f1c92 Fix null pointer in auth handler
d8b22e1 Add login form validation
7c4e003 Refactor session logic

The output gives you the abbreviated commit hashes. In this case, a3f1c92 is the fix we want to move.

Switch to the target branch before running git cherry-pick:

Terminal
git switch main
git cherry-pick a3f1c92
output
[main 9b2d4f1] Fix null pointer in auth handler
Date: Tue Apr 14 10:42:00 2026 +0200
1 file changed, 2 insertions(+)

Git applies the change from a3f1c92 to main and creates a new commit, 9b2d4f1. The subject line is the same, but the commit hash is different because the parent commit is different.

Cherry-Picking Multiple Commits

If you need more than one non-consecutive commit, pass each hash in the order you want Git to apply them:

Terminal
git cherry-pick a3f1c92 d8b22e1

Git creates a separate new commit for each one. This works well when you need a few targeted fixes but do not want the rest of the source branch.

For a range of consecutive commits, use the range notation:

Terminal
git cherry-pick a3f1c92^..7c4e003

This tells Git to include a3f1c92 and every commit after it up to 7c4e003. If you omit the caret, the starting commit itself is excluded:

Terminal
git cherry-pick a3f1c92..7c4e003

That form applies every commit after a3f1c92 through 7c4e003.
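The inclusive and exclusive forms are easy to mix up, so here is a throwaway-repository sketch you can run in an empty scratch directory; the branch and file names are examples:

```shell
set -e
git init -q cp-demo && cd cp-demo
git config user.email demo@example.com
git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm base
git switch -q -c feature
echo one >> file.txt && git commit -qam one
echo two >> file.txt && git commit -qam two
git switch -q -                                          # back to the default branch
git cherry-pick "$(git rev-parse feature~1)^..feature"   # picks "one" and "two"
git log --oneline                                        # base plus the two replayed commits
```

Dropping the caret from the range would exclude the "one" commit and replay only "two".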

Applying Changes Without Committing

Sometimes you want the changes from a commit, but not an automatic commit for each one. Use --no-commit (or -n) to apply the changes to your working tree and staging area without creating the commit yet:

Terminal
git cherry-pick --no-commit a3f1c92

This is useful when you want to combine several small fixes into one commit on the target branch, or when you need to edit the files before committing.

After reviewing the result, create the commit yourself:

Terminal
git status
git commit -m "Backport auth null-check fix"

This gives you more control over the final commit message and lets you group related backports together.
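The squash-style backport can be sketched end to end in a scratch repository; the file name and commit messages below are invented for the demo:

```shell
# Combine two cherry-picked fixes into a single commit with --no-commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name Demo
git checkout -qb main
echo base > app.txt && git add app.txt && git commit -qm "Initial commit"

# Two small fixes on a feature branch.
git checkout -qb feature
echo one >> app.txt && git commit -qam "Fix one"
c1=$(git rev-parse HEAD)
echo two >> app.txt && git commit -qam "Fix two"
c2=$(git rev-parse HEAD)

# Apply both without committing, then create one combined commit.
git checkout -q main
git cherry-pick -n "$c1" "$c2"
git commit -qm "Backport both fixes as one commit"
git log --oneline
```

The log on main shows only the initial commit and the single backport commit, even though the changes came from two source commits.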

Recording Where the Commit Came From

For maintenance branches and backports, it is often helpful to keep a reference to the original commit. Use -x to append the source commit hash to the new commit message:

Terminal
git cherry-pick -x a3f1c92

Git adds a line like this to the new commit message:

output
(cherry picked from commit a3f1c92...)

That extra line makes future audits easier, especially when you need to prove that a fix on a release branch came from a reviewed change on another branch.
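A quick way to see the effect of -x in a scratch repository (all names below are demo values):

```shell
# Cherry-pick with -x and inspect the resulting commit message.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name Demo
git checkout -qb main
echo base > f.txt && git add f.txt && git commit -qm "Initial commit"

git checkout -qb feature
echo fix >> f.txt && git commit -qam "Fix bug"
src=$(git rev-parse HEAD)

# -x appends the provenance line to the new commit message.
git checkout -q main
git cherry-pick -x "$src"
git log -1 --format=%B
```

The printed message body ends with a "(cherry picked from commit ...)" line naming the full source hash.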

Cherry-Picking a Merge Commit

Cherry-picking a regular commit is straightforward, but merge commits need one extra option. Git must know which parent to treat as the main line:

Terminal
git cherry-pick -m 1 MERGE_COMMIT_HASH
  • -m 1 - Use the first parent as the base, which is usually the branch that received the merge.
  • -m 2 - Use the second parent instead.

If you are not sure which parent is which, inspect the history first:

Terminal
git log --oneline --graph

Cherry-picking merge commits is more advanced and easier to get wrong. If the goal is to bring over an entire merged feature, a normal merge is often clearer than cherry-picking the merge commit itself.

Resolving Conflicts

If the target branch has changed in the same area of code, Git may stop and ask you to resolve a conflict:

output
error: could not apply a3f1c92... Fix null pointer in auth handler
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".

Open the conflicted file and look for the conflict markers:

output
<<<<<<< HEAD
return session.getUser();
=======
if (session == null) return null;
return session.getUser();
>>>>>>> a3f1c92 (Fix null pointer in auth handler)

Edit the file to keep the final version you want, then stage it and continue:

Terminal
git add src/auth.js
git cherry-pick --continue

If you decide the commit is not worth applying after all, abort the operation:

Terminal
git cherry-pick --abort

git cherry-pick --abort puts the branch back where it was before the cherry-pick started.
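The abort behavior is easy to verify in a scratch repository: force a conflict, abort, and confirm HEAD is back where it started. The setup below is hypothetical:

```shell
# Force a cherry-pick conflict, then abort and verify nothing changed.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name Demo
git checkout -qb main
printf 'original\n' > f.txt && git add f.txt && git commit -qm "Initial commit"

# Change the same line on both branches to guarantee a conflict.
git checkout -qb feature
printf 'feature change\n' > f.txt && git commit -qam "Feature edit"
pick=$(git rev-parse HEAD)
git checkout -q main
printf 'main change\n' > f.txt && git commit -qam "Main edit"
before=$(git rev-parse HEAD)

# The pick conflicts; abort restores the branch exactly as it was.
if ! git cherry-pick "$pick" 2>/dev/null; then
  git cherry-pick --abort
fi
git rev-parse HEAD
```

After the abort, HEAD and the working tree match the state recorded before the cherry-pick started.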

A Safe Backport Workflow

When you cherry-pick onto a release or maintenance branch, slow down and make the source obvious. A simple workflow looks like this:

Terminal
git switch release/1.4
git pull --ff-only
git cherry-pick -x a3f1c92
git status

The important part is the sequence. Start from the branch that needs the fix, make sure it is up to date, cherry-pick with -x, then review and test the branch before pushing it. This avoids the common mistake of copying a fix into an outdated branch and shipping an untested backport.

If the picked commit depends on earlier refactors or new APIs that are not present on the target branch, stop there. In that case, either copy the prerequisite commits too or recreate the fix manually.

When to Use git cherry-pick

git cherry-pick is a good fit when you need a precise change without the rest of the branch:

  • Backporting a bug fix from main to a release branch
  • Recovering one useful commit from an abandoned feature branch
  • Moving a small fix that was committed on the wrong branch
  • Pulling a reviewed change into a hotfix branch without merging unrelated work

Avoid it when the target branch needs the full context of the source branch. If the commit depends on earlier commits, shared refactors, or schema changes, a merge or rebase is usually the cleaner option.

Troubleshooting

error: could not apply ... during cherry-pick
The target branch has conflicting changes. Resolve the files Git marks as conflicted, stage them with git add, then run git cherry-pick --continue.

Cherry-pick created duplicate-looking history
That is normal. Cherry-pick copies the effect of a commit, not the original object. The new commit has a different hash because it has a different parent.

The picked commit does not build on the target branch
The commit likely depends on earlier work that is missing from the target branch. Inspect the source branch history with git log and either cherry-pick the prerequisites too or reimplement the change manually.

You picked the wrong commit
If the cherry-pick already completed, use git revert on the new commit. If the operation is still in progress, use git cherry-pick --abort.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Task Command
Pick one commit git cherry-pick COMMIT
Pick several specific commits git cherry-pick C1 C2 C3
Pick a consecutive range including the first commit git cherry-pick A^..B
Apply changes without committing git cherry-pick --no-commit COMMIT
Record the source commit in the message git cherry-pick -x COMMIT
Continue after resolving a conflict git cherry-pick --continue
Skip the current commit in a sequence git cherry-pick --skip
Abort the operation git cherry-pick --abort
Pick a merge commit git cherry-pick -m 1 MERGE_COMMIT

FAQ

What is the difference between git cherry-pick and git merge?
git cherry-pick copies the effect of selected commits onto your current branch. git merge joins two branch histories and brings over all commits that are missing from the target branch.

Does cherry-pick change the original commit?
No. The source commit stays exactly where it is. Git creates a new commit on your current branch with the same file changes.

Should I use -x every time?
Not always, but it is a good habit for backports and maintenance branches. It gives you a clear link back to the original commit.

Can I cherry-pick commits from another remote branch?
Yes. Fetch the remote branch first, then cherry-pick the commit hash you want. You can inspect the incoming history with git log or review the file changes with git diff before applying anything.

Conclusion

git cherry-pick is the tool to reach for when one commit matters more than the branch it came from. Use it for targeted fixes, keep -x in mind for backports, and fall back to merge or rebase when the change depends on broader branch context.

ufw Command in Linux: Uncomplicated Firewall Reference

If your server is open to the public network, it needs a firewall. On Ubuntu and Debian, the easiest way to set one up is with ufw. This tool sits on top of iptables (or nftables on newer systems) and replaces complex rules with simple, easy-to-read commands.

This guide walks through the ufw command, from enabling the firewall and setting default policies to writing rules for ports, services, and specific addresses.

ufw Syntax

The general form of the command is:

txt
sudo ufw [OPTIONS] COMMAND [ARGS]
  • OPTIONS - Flags such as --dry-run to preview what a rule would do.
  • COMMAND - The action, for example enable, allow, deny, delete, or status.
  • ARGS - The rule body, such as a port number, a service name, or a full rule specification.

ufw changes firewall rules, so almost every invocation needs sudo.

Checking the Firewall Status

Before you touch the rules, check whether the firewall is running and what is already in place:

Terminal
sudo ufw status

If ufw is inactive, you will see:

output
Status: inactive

This confirms that ufw is installed but not currently enforcing any firewall rules.

Once active, the same command lists the rules:

output
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)

For a numbered view that is easier to reference when deleting rules, use:

Terminal
sudo ufw status numbered

And for more detailed output that shows the default policies and logging level:

Terminal
sudo ufw status verbose

Enabling and Disabling the Firewall

The first time you enable ufw, make sure you have already allowed SSH. Enabling the firewall over an SSH session without an allow ssh rule will lock you out of the server.

Allow SSH first:

Terminal
sudo ufw allow ssh

Then turn the firewall on:

Terminal
sudo ufw enable
output
Firewall is active and enabled on system startup

At this point, ufw starts enforcing the rules immediately and will come back automatically after a reboot.

To stop the firewall and clear it from the startup sequence:

Terminal
sudo ufw disable

A disabled ufw keeps the rules you defined. When you re-enable it, the same rules come back.

Default Policies

Default policies decide what happens to traffic that does not match any rule. The usual hardening is to deny incoming traffic and allow outgoing traffic:

Terminal
sudo ufw default deny incoming
sudo ufw default allow outgoing

After running these two commands, you only need to open the specific inbound ports your server should expose. Outbound traffic still flows freely, which is what most servers need.

You can also set the default for forwarded traffic, which matters if the host acts as a router or runs containers:

Terminal
sudo ufw default deny routed

Allowing Connections

The allow command is the one you will use most often. It accepts a service name, a port, a protocol, or a full rule.

Allow a Service by Name

ufw reads /etc/services and understands common service names:

Terminal
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https

Each command opens the matching port (22, 80, and 443) using the protocol recorded in /etc/services, which is TCP for all three.

Allow a Specific Port

To open a port directly, pass the port number:

Terminal
sudo ufw allow 8080

By default, this opens the port for both TCP and UDP. To limit it to one protocol, add /tcp or /udp:

Terminal
sudo ufw allow 8080/tcp
sudo ufw allow 53/udp

Allow a Port Range

Port ranges require an explicit protocol:

Terminal
sudo ufw allow 6000:6007/tcp
sudo ufw allow 6000:6007/udp

The colon separates the start and the end of the range, both inclusive.

Allow Traffic From a Specific Address

To accept connections from a single IP address, use from:

Terminal
sudo ufw allow from 203.0.113.25

To restrict that to a single service, add to any port:

Terminal
sudo ufw allow from 203.0.113.25 to any port 22

Allow Traffic From a Subnet

CIDR notation works the same way for whole networks:

Terminal
sudo ufw allow from 192.168.1.0/24 to any port 3306

This is a common pattern for databases: the port stays closed to the internet and is only reachable from your internal network.

Allow Traffic on a Specific Interface

To tie a rule to a network interface, add on INTERFACE:

Terminal
sudo ufw allow in on eth1 to any port 3306

The in keyword applies the rule to inbound traffic on eth1. Use out for outbound traffic.

Denying Connections

The deny command is the mirror of allow and takes the same arguments:

Terminal
sudo ufw deny 23
sudo ufw deny from 198.51.100.77

When the default incoming policy is already deny, you usually do not need to write explicit deny rules for closed ports. Explicit denies are useful when you want to block a specific address while keeping a port open to everyone else.

To actively refuse connections instead of silently dropping the packets, use reject:

Terminal
sudo ufw reject 23

A reject sends an ICMP message back to the sender, while a deny drops the packet with no response.

Rate Limiting SSH

ufw can throttle repeated connections from the same IP, which is handy for SSH brute-force protection:

Terminal
sudo ufw limit ssh

The limit rule denies a connection if the source address attempts six or more connections in 30 seconds. This is a quick way to slow down password-guessing bots without installing a full intrusion-prevention tool.

Deleting Rules

There are two common ways to remove a rule. The first is to repeat the rule with delete in front:

Terminal
sudo ufw delete allow 8080/tcp

The second is to delete by rule number:

Terminal
sudo ufw status numbered
output
 To Action From
-- ------ ----
[ 1] 22/tcp ALLOW IN Anywhere
[ 2] 80/tcp ALLOW IN Anywhere
[ 3] 8080/tcp ALLOW IN Anywhere
Terminal
sudo ufw delete 3

ufw asks for confirmation before removing the rule. Keep in mind that after a deletion, the numbering shifts for the remaining rules, so always check the numbered status again before deleting another rule.
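Because the numbers shift, a script that removes several rules should work from the highest number down. A sketch that extracts rule numbers from a captured status listing and orders them for deletion; the listing text is a made-up sample, and the final ufw call is shown only as a comment:

```shell
# A captured `ufw status numbered` listing (hypothetical sample text).
status='[ 1] 22/tcp     ALLOW IN    Anywhere
[ 2] 80/tcp     ALLOW IN    Anywhere
[ 3] 8080/tcp   ALLOW IN    Anywhere'

# Pull out the bracketed rule numbers and sort them highest-first, so that
# deleting rule 3 does not renumber rules 1 and 2 underneath you.
to_delete=$(printf '%s\n' "$status" | sed -n 's/^\[ *\([0-9]*\)\].*/\1/p' | sort -rn)
echo "$to_delete"
# Each number could then be passed to: sudo ufw delete "$n"
```

Deleting in descending order means every remaining number still refers to the rule you expect.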

For a deeper walkthrough, see how to list and delete UFW firewall rules.

Application Profiles

Services that come with their own ufw profile can be referenced by name. List them with:

Terminal
sudo ufw app list
output
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH

To open the ports that a profile covers, pass its name to allow:

Terminal
sudo ufw allow 'Nginx Full'

Quote profile names that contain spaces. To see what ports a profile includes, use:

Terminal
sudo ufw app info 'Nginx Full'

Dry Run

Before you commit to a change, you can preview the iptables rules ufw would add with --dry-run:

Terminal
sudo ufw --dry-run allow 8080/tcp

No rule is created. The output shows the exact lines ufw would write, which is useful when you are writing rules over SSH and want to double-check them first.

Logging

ufw can log packets that match rules, which helps when troubleshooting. The log level can be adjusted:

Terminal
sudo ufw logging on
sudo ufw logging medium

Valid levels are off, low, medium, high, and full. ufw logs through the kernel log facility, so the exact destination depends on your system logging setup. On many Ubuntu systems with rsyslog, you will see entries in /var/log/ufw.log, and you can often inspect them with journalctl as well.

Reset the Firewall

To clear every rule and set ufw back to its fresh state, run:

Terminal
sudo ufw reset
Warning
ufw reset disables the firewall and removes every rule, including the one that allows SSH. If you are connected over SSH, add an allow rule for SSH and re-enable the firewall right after the reset, or you will lose access the moment you turn it back on.

IPv6 Support

On most modern systems, ufw can manage IPv6 rules as well as IPv4 rules. Check the setting in /etc/default/ufw:

/etc/default/ufw
sh
IPV6=yes

When IPV6=yes, generic rules such as sudo ufw allow 22/tcp are created for both IPv4 and IPv6. Rules that include an explicit address stay specific to that address family. In the status output, the IPv6 rules appear with (v6) next to them.

Quick Reference

For a printable quick reference, see the ufw cheatsheet.

Task Command
Show status sudo ufw status verbose
Show numbered rules sudo ufw status numbered
Enable firewall sudo ufw enable
Disable firewall sudo ufw disable
Default deny incoming sudo ufw default deny incoming
Default allow outgoing sudo ufw default allow outgoing
Allow SSH sudo ufw allow ssh
Allow port sudo ufw allow 8080/tcp
Allow port range sudo ufw allow 6000:6007/tcp
Allow from IP sudo ufw allow from 203.0.113.25
Allow from subnet sudo ufw allow from 192.168.1.0/24
Rate-limit SSH sudo ufw limit ssh
Delete rule by number sudo ufw delete 3
List app profiles sudo ufw app list
Reset all rules sudo ufw reset

Troubleshooting

ufw: command not found
Install the ufw package with sudo apt install ufw on Ubuntu or Debian. On Fedora, RHEL, and derivatives, firewalld is the default and ufw is rarely used.

Locked out after enabling the firewall over SSH
The default deny policy blocked your SSH session. You will need console access to the server. Once in, run sudo ufw allow ssh and sudo ufw reload to restore access. The usual safeguard is to allow SSH before the first enable.

Rule added but traffic still blocked
Confirm the rule is in the right direction (inbound vs outbound) and on the right interface. Check with sudo ufw status verbose. If Docker is running, it writes its own iptables rules that can bypass ufw.

Changes do not take effect
After manual edits to /etc/ufw/ files, reload the firewall with sudo ufw reload. Commands issued through ufw apply immediately and do not need a reload.

FAQ

Is ufw the same as iptables?
No. ufw is a front end that generates iptables (or nftables) rules for you. The kernel still enforces the rules through its packet filter, but you do not need to edit the raw tables yourself.

Does ufw start automatically after reboot?
Yes, once you run sudo ufw enable, the service is registered with systemd and starts on boot. You can confirm with systemctl status ufw.

How do I allow a port for a single IP address?
Use the from and to any port form: sudo ufw allow from 203.0.113.25 to any port 22. This keeps the port closed to the rest of the internet and only opens it for that one address.

What is the difference between deny and reject?
deny drops the packet without any reply. reject sends an ICMP message back to the sender telling them the connection was refused. Reject is slightly friendlier to well-behaved clients but makes the server more visible to scans.

How do I undo everything and start over?
Run sudo ufw reset. It disables the firewall and removes every rule. Remember to re-add the SSH rule before re-enabling, especially on a remote server.

Conclusion

ufw keeps firewall management readable: allow what you need, deny what you do not, and lean on the default policies to cover the rest. If you want step-by-step instructions for setting it up on a new server, see how to set up a firewall with UFW on Ubuntu 24.04.

nslookup Command in Linux: Query DNS Records

When a website does not load or email stops arriving, the first thing to check is whether the domain resolves to the correct address. The nslookup command is a quick way to query DNS servers and inspect the records behind a domain name.

nslookup ships with most Linux distributions and works on macOS and Windows as well. It supports both one-off queries from the command line and an interactive mode for running multiple lookups in a row.

This guide explains how to use nslookup with practical examples covering record types, reverse lookups, and troubleshooting.

Syntax

txt
nslookup [OPTIONS] [NAME] [SERVER]
  • NAME — The domain name or IP address to look up.
  • SERVER — The DNS server to query. If omitted, nslookup uses the server configured in /etc/resolv.conf.
  • OPTIONS — Query options such as -type=MX or -debug.

When called without arguments, nslookup starts in interactive mode.

Installing nslookup

On most distributions nslookup is already installed. To check, run:

Terminal
nslookup -version

If the command is not found, install it using your distribution’s package manager.

Install nslookup on Ubuntu, Debian, and Derivatives

Terminal
sudo apt update && sudo apt install dnsutils

Install nslookup on Fedora, RHEL, and Derivatives

Terminal
sudo dnf install bind-utils

Install nslookup on Arch Linux

Terminal
sudo pacman -S bind

The nslookup command is bundled with the same packages that provide dig.

Look Up a Domain Name

The simplest use is passing a domain name as an argument:

Terminal
nslookup linux.org
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

The first two lines show the DNS server that answered the query. Everything under “Non-authoritative answer” is the actual result. In this case, linux.org resolves to three IPv4 addresses.

“Non-authoritative” means the answer came from a resolver’s cache rather than directly from the domain’s authoritative name server.
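When scripting around nslookup, it helps to strip the answer down to bare addresses. A sketch that parses a captured answer; the text below reuses the sample output above, so no network access is needed:

```shell
# A captured nslookup answer (sample text from the output above).
answer='Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   linux.org
Address: 104.26.14.72
Name:   linux.org
Address: 104.26.15.72'

# Keep only Address lines without a "#port" suffix, skipping the server line.
ips=$(printf '%s\n' "$answer" | awk '$1 == "Address:" && $2 !~ /#/ { print $2 }')
echo "$ips"
```

The filter on "#" matters: the server's own address line carries a "#53" port suffix and would otherwise slip into the results.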

Query a Specific DNS Server

By default, nslookup queries the resolver configured in /etc/resolv.conf. To query a different server, add it as the last argument.

For example, to query Google’s public DNS:

Terminal
nslookup linux.org 8.8.8.8
output
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

This is useful when you want to compare results across different resolvers or verify whether a DNS change has propagated to public servers.

Query Record Types

By default, nslookup returns A (IPv4 address) records. Use the -type option to query other record types.

MX Records (Mail Servers)

MX records identify the mail servers responsible for receiving email for a domain:

Terminal
nslookup -type=mx google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

The number before the mail server hostname is the priority. A lower number means higher priority.

NS Records (Name Servers)

NS records show which name servers are authoritative for a domain:

Terminal
nslookup -type=ns google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com nameserver = ns1.google.com.
google.com nameserver = ns2.google.com.
google.com nameserver = ns3.google.com.
google.com nameserver = ns4.google.com.

TXT Records

TXT records store arbitrary text data, commonly used for SPF, DKIM, and domain ownership verification:

Terminal
nslookup -type=txt google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com text = "v=spf1 include:_spf.google.com ~all"
google.com text = "facebook-domain-verification=22rm551cu4k0ab0bxsw536tlds4h95"
google.com text = "docusign=05958488-4752-4ef2-95eb-aa7ba8a3bd0e"

The output may include many entries. The example above shows a subset of the TXT records returned for google.com.

AAAA Records (IPv6)

AAAA records return the IPv6 address of a domain:

Terminal
nslookup -type=aaaa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: google.com
Address: 2a00:1450:4017:818::200e

SOA Record (Start of Authority)

The SOA record contains administrative information about the domain, including the primary name server, the responsible email address, and timing parameters for zone transfers:

Terminal
nslookup -type=soa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com
origin = ns1.google.com
mail addr = dns-admin.google.com
serial = 897592583
refresh = 900
retry = 900
expire = 1800
minimum = 60

The serial number increments each time the zone is updated. DNS secondaries use it to decide whether they need a zone transfer.

CNAME Records

CNAME records point one domain name to another:

Terminal
nslookup -type=cname www.github.com

If a CNAME record exists, the output shows the canonical name the alias points to. If the domain does not have a CNAME record, nslookup returns No answer.

Run an ANY Query

To ask the DNS server for an ANY response, use -type=any:

Terminal
nslookup -type=any google.com

ANY queries do not reliably return every record type for a domain. Many DNS servers return only a subset of records or refuse the query entirely.

Reverse DNS Lookup

A reverse lookup finds the hostname associated with an IP address. Pass an IP address instead of a domain name:

Terminal
nslookup 208.118.235.148
output
148.235.118.208.in-addr.arpa name = ip-208-118-235-148.twdx.net.

Reverse lookups query PTR records. They are useful for verifying that an IP address maps back to the expected hostname, which matters for mail server configuration and security checks.

Interactive Mode

Running nslookup without arguments starts an interactive session where you can run multiple queries without retyping the command:

Terminal
nslookup
output
>

At the > prompt, type a domain name to look it up:

output
> linux.org
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72

You can change query settings during the session with the set command. For example, to switch to MX record lookups and then query a domain:

output
> set type=mx
> google.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

To change the DNS server:

output
> server 8.8.8.8
Default server: 8.8.8.8
Address: 8.8.8.8#53

Type exit to leave interactive mode.

Interactive mode is convenient when you need to test several domains or record types in a row without running separate commands each time.

Debugging DNS Issues

The -debug option shows the full query and response details, including TTL values and additional sections that nslookup normally hides:

Terminal
nslookup -debug linux.org

The debug output is verbose, but it is helpful when you need to see TTL values, check whether answers are authoritative, or trace unexpected behavior.

nslookup vs dig

Both nslookup and dig query DNS servers, but they differ in output and capabilities:

  • nslookup produces simpler, more readable output. It also has an interactive mode that is convenient for quick checks.
  • dig provides detailed, structured output with sections (QUESTION, ANSWER, AUTHORITY, ADDITIONAL) and supports advanced options like +trace for tracing the full resolution path and +dnssec for verifying DNSSEC signatures.

For quick lookups and basic troubleshooting, nslookup is often faster to type and read. For in-depth DNS debugging, dig gives you more control and detail.

Troubleshooting

nslookup returns NXDOMAIN
The domain does not exist or is misspelled. Verify the domain name and check that it is registered.

nslookup returns SERVFAIL
The DNS server could not process the query. Try a different resolver to isolate the problem:

Terminal
nslookup linux.org 1.1.1.1

If public resolvers return the correct answer, the issue is with your configured resolver.

Connection timed out; no servers could be reached
This means nslookup could not contact the DNS server. Check your network connection and verify that /etc/resolv.conf contains a reachable name server. A firewall may also be blocking outbound DNS traffic on port 53.

Non-authoritative answer appears on every query
This is normal. It means the answer came from a resolver’s cache, not directly from the domain’s authoritative server. The result is still valid.

Quick Reference

For a printable quick reference, see the nslookup cheatsheet.

Task Command
Look up a domain nslookup example.com
Query a specific DNS server nslookup example.com 8.8.8.8
Query MX records nslookup -type=mx example.com
Query NS records nslookup -type=ns example.com
Query TXT records nslookup -type=txt example.com
Query AAAA (IPv6) records nslookup -type=aaaa example.com
Query SOA record nslookup -type=soa example.com
Query CNAME record nslookup -type=cname example.com
Run an ANY query nslookup -type=any example.com
Reverse DNS lookup nslookup 192.0.2.1
Start interactive mode nslookup
Enable debug output nslookup -debug example.com

FAQ

Can I use nslookup to check DNS propagation?
Yes. Query the same domain against several public DNS servers and compare the results. For example, run nslookup example.com 8.8.8.8, nslookup example.com 1.1.1.1, and nslookup example.com 9.9.9.9. If the answers differ, the change has not fully propagated.

Is nslookup deprecated?
The ISC (the organization behind BIND) once marked nslookup as deprecated in favor of dig, but later reversed that decision. nslookup is actively maintained and included in current BIND releases. It remains a practical tool for quick DNS lookups.

What does “Non-authoritative answer” mean?
It means the response came from a caching resolver, not from one of the domain’s authoritative name servers. The data is still accurate, but it may be slightly behind if a DNS change was made very recently and the cache has not expired yet.

Conclusion

The nslookup command is a quick way to query DNS records from the command line. Use -type to look up MX, NS, TXT, AAAA, and other record types, and pass a server argument to test against a specific resolver. For deeper DNS debugging, pair it with dig.

env Command in Linux: Show and Set Environment Variables

When you need to run a program with a different set of variables, test a script in a clean environment, or write a shebang line that works across systems, the env command is the right tool. It prints the current environment, sets or removes variables for a single command, and can even start a process with no inherited variables at all.

This guide covers how to use the env command with practical examples for everyday tasks.

env Syntax

txt
env [OPTIONS] [NAME=VALUE]... [COMMAND [ARGS]]

When called without arguments, env prints every environment variable in the current session, one per line. When followed by NAME=VALUE pairs and a command, it runs that command with the specified variables added or changed without affecting the current shell.

Print All Environment Variables

The simplest use of env is printing the full environment:

Terminal
env
output
SHELL=/bin/bash
USER=john
HOME=/home/john
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm-256color
XDG_SESSION_TYPE=tty
EDITOR=vim

Each line is a KEY=value pair. This is identical to what printenv shows with no arguments. The difference between the two commands becomes clear when you start passing options: printenv is designed for inspecting variables, while env is designed for modifying the environment before launching a program.

Print Variables with NUL Separator

By default, env separates each variable with a newline character. If variable values contain newlines themselves, the output becomes ambiguous. The -0 (or --null) option ends each entry with a NUL byte instead:

Terminal
env -0 | tr '\0' '\n' | head -5

This is useful when piping environment data into tools that expect NUL-delimited input, such as xargs with its -0 flag.

Run a Command with Modified Variables

The most practical feature of env is launching a command with variables changed for that command only. Place NAME=VALUE pairs before the command:

Terminal
env LANG=C sort unsorted.txt

This runs sort with LANG set to C, which forces byte-order sorting regardless of the system locale. The current shell’s LANG value stays unchanged after the command finishes.

You can set multiple variables at once:

Terminal
env DB_HOST=localhost DB_PORT=5432 python3 app.py

The application sees DB_HOST and DB_PORT in its environment, but those variables do not persist in your shell session after the process exits.
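The scoping is easy to confirm from the shell itself; the variable name below is arbitrary:

```shell
# The child process sees the variable; the parent shell never does.
unset DB_HOST
child=$(env DB_HOST=localhost sh -c 'echo "$DB_HOST"')
echo "child saw: $child"                   # child saw: localhost
echo "parent sees: ${DB_HOST:-<unset>}"    # parent sees: <unset>
```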

Tip
You can achieve the same result with the shell’s built-in VAR=value command syntax (for example, LANG=C sort unsorted.txt). The env command is useful when you need additional options like -i or -u, or when you want to be explicit about environment manipulation in scripts.

Run a Command in a Clean Environment

The -i (or --ignore-environment) option clears the entire inherited environment before running the command. Only the variables you explicitly set on the command line will be present:

Terminal
env -i bash -c 'env'
output
PWD=/home/john
SHLVL=1
_=/usr/bin/env

The output shows almost nothing. The shell itself sets a few internal variables (PWD, SHLVL, _), but everything the parent shell normally passes along (PATH, HOME, LANG, USER) is gone.

This is useful for testing whether a script depends on variables it does not set itself. You can combine -i with explicit variables to create a minimal, controlled environment:

Terminal
env -i HOME=/home/john PATH=/usr/bin:/bin bash -c 'echo $HOME; echo $PATH'
output
/home/john
/usr/bin:/bin

A bare - (hyphen) works as a shorthand for -i:

Terminal
env - PATH=/usr/bin bash -c 'echo $PATH'
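The clean-environment behavior can be verified directly; the HOME value below is arbitrary:

```shell
# Only variables passed on the command line survive -i.
home=$(env -i HOME=/tmp/demo sh -c 'echo "$HOME"')
user=$(env -i sh -c 'echo "${USER:-gone}"')
echo "home=$home user=$user"
```

The first lookup prints the value you passed in; the second shows that an ordinarily inherited variable such as USER is simply absent.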

Unset a Variable for a Command

The -u (or --unset) option removes a specific variable from the environment before running the command:

Terminal
env -u EDITOR vim

This launches vim without the EDITOR variable in its environment. Other variables remain untouched. You can unset multiple variables by repeating -u:

Terminal
env -u LANG -u LC_ALL python3 script.py

The difference from -i is that -u is surgical: it removes only the named variables and leaves everything else in place.
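A quick check of the -u behavior; the EDITOR value is arbitrary:

```shell
# EDITOR is inherited normally, but removed for the env -u invocation.
export EDITOR=vim
with=$(sh -c 'echo "${EDITOR:-unset}"')
without=$(env -u EDITOR sh -c 'echo "${EDITOR:-unset}"')
echo "inherited: $with, with -u: $without"
```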

Change Directory Before Running a Command

The -C (or --chdir) option changes the working directory before executing the command:

Terminal
env -C /var/log cat syslog | head -3

This is equivalent to running cd /var/log && cat syslog, but without affecting the current shell’s working directory. It is a convenient way to run a command in another directory from within a script or one-liner.
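
You can confirm that the current shell's working directory is untouched (assuming a GNU env new enough to support -C):

```shell
cd /
env -C /tmp pwd   # runs pwd with /tmp as its working directory
pwd               # the current shell is still in /
```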

Info
The -C option requires GNU coreutils 8.28 or later. Check your version with env --version.

Using env in Shebangs

One of the most common uses of env is in shebang lines at the top of scripts:

sh
#!/usr/bin/env bash
echo "Hello from Bash"

When the kernel encounters #!/usr/bin/env bash, it runs /usr/bin/env with bash as its argument. env then searches the PATH for the bash executable and runs it. This is more portable than hardcoding #!/bin/bash, because bash is not always located in /bin on every system (for example, on FreeBSD it is typically at /usr/local/bin/bash).
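
A minimal demonstration, using the throwaway path /tmp/hello.sh purely for illustration:

```shell
cat > /tmp/hello.sh <<'EOF'
#!/usr/bin/env bash
echo "Hello from Bash"
EOF
chmod +x /tmp/hello.sh
/tmp/hello.sh   # env searches PATH for bash, then bash runs the script
# prints: Hello from Bash
```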

The same pattern works for other interpreters:

sh
#!/usr/bin/env python3
sh
#!/usr/bin/env node

On systems with GNU coreutils 8.30 or later, you can pass arguments to the interpreter using the -S (split string) option:

sh
#!/usr/bin/env -S python3 -u

Without -S, the kernel treats python3 -u as a single argument. The -S option tells env to split the string into separate arguments before executing.
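
You can observe the splitting directly with a harmless command, again assuming GNU coreutils 8.30 or later:

```shell
# env splits the single quoted string into three arguments: echo, one, two
env -S 'echo one two'
# prints: one two
```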

env Options

  • -i, --ignore-environment - Start with an empty environment
  • -u NAME, --unset=NAME - Remove NAME from the environment
  • -C DIR, --chdir=DIR - Change working directory to DIR before running the command
  • -0, --null - End each output line with a NUL byte instead of a newline
  • -S STRING, --split-string=STRING - Split STRING into separate arguments (useful in shebangs)
  • -v, --debug - Print verbose information for each processing step
  • --block-signal=SIG - Block delivery of the specified signal to the command
  • --default-signal=SIG - Reset signal handling to the default for the command
  • --ignore-signal=SIG - Set signal handling to ignore for the command

Quick Reference

For a printable quick reference, see the env cheatsheet.

Command Description
env Print all environment variables
env -0 Print variables with NUL separator
env VAR=value command Run a command with a modified variable
env -i command Run a command in a clean environment
env -i VAR=value command Run a command with only the specified variables
env -u VAR command Run a command with a variable removed
env -C /path command Run a command in a different directory
#!/usr/bin/env bash Portable shebang line
#!/usr/bin/env -S python3 -u Shebang with interpreter arguments

FAQ

What is the difference between env and printenv?
Both commands print environment variables when called without arguments. The difference is in their purpose: printenv is a read-only inspection tool that can print individual variables by name (printenv HOME). env is designed to modify the environment and run commands. Use printenv when you need to check a value, and env when you need to change the environment for a process.

What is the difference between env and export?
export is a shell built-in that adds a variable to the current shell’s environment for the remainder of the session (or until you unset it). env sets variables only for the duration of a single command and does not affect the current shell. If you need a variable to persist for all subsequent commands in your session, use export. If you need a variable set for one command only, use env or the shell’s VAR=value command syntax.

When should I use env -i?
Use env -i when you want to verify that a script or program works without relying on inherited environment variables. It is also useful in security-sensitive contexts where you want to prevent a child process from seeing variables like AWS_SECRET_ACCESS_KEY or database credentials that exist in the parent shell.

Why use #!/usr/bin/env bash instead of #!/bin/bash?
The env-based shebang is more portable. On most Linux distributions, bash lives at /bin/bash, but on other Unix systems (FreeBSD, macOS with Homebrew, NixOS) it may be installed elsewhere. Using #!/usr/bin/env bash searches the PATH for the interpreter, so the script works regardless of where bash is installed.

Conclusion

The env command gives you fine-grained control over the environment a process sees without touching your current shell session. For a broader look at how environment and shell variables work in Linux, including persistent configuration and the PATH variable, see the guide on how to set and list environment variables.

htop Command in Linux: Monitor Processes Interactively

When a server starts responding slowly or a desktop session feels sluggish, you need to figure out which process is consuming the most CPU or memory. The top command can do this, but its interface is minimal and navigation takes some getting used to. htop is an interactive process viewer that improves on top with color-coded meters, mouse support, and the ability to scroll, search, filter, and kill processes without leaving the viewer.

This guide explains how to install and use htop to monitor system resources and manage processes on Linux.

Installing htop

htop is available in the default repositories of most Linux distributions but is not always pre-installed.

On Ubuntu, Debian, and Derivatives:

Terminal
sudo apt install htop

On Fedora, RHEL, and Derivatives:

Terminal
sudo dnf install htop

Verify the installation by checking the version:

Terminal
htop --version
output
htop 3.4.1

htop Syntax

txt
htop [OPTIONS]

Running htop without arguments opens the interactive process viewer with default settings:

Terminal
htop

Understanding the Interface

The htop screen is divided into three areas: a header with system meters, a process list in the center, and a footer with function key shortcuts.

Header Area

The top section shows system-wide resource usage at a glance:

  • CPU bars: one bar per CPU core, color-coded by usage type. Green is normal user processes, red is kernel (system) activity, blue is low-priority (nice) processes, and cyan is virtualization overhead (steal time)
  • Memory bar: shows used, buffered, and cached memory as a proportion of total RAM. Green is memory in use by applications, blue is buffers, and yellow is file cache
  • Swap bar: shows swap usage. If this bar is consistently full, the system does not have enough physical memory for its workload
  • Tasks: total number of processes and threads, with a count of how many are currently running
  • Load average: the 1-minute, 5-minute, and 15-minute system load averages
  • Uptime: how long the system has been running since the last boot

Process List

Below the header, each row represents one process. The default columns are:

  • PID - process ID
  • USER - the owner of the process
  • PRI - kernel scheduling priority
  • NI - nice value (user-space priority, ranges from -20 to 19)
  • VIRT - total virtual memory the process has allocated
  • RES - resident memory, the portion of physical RAM the process is actually using
  • SHR - shared memory, the portion of RES that is shared with other processes (such as shared libraries)
  • S - process state (R running, S sleeping, D uninterruptible sleep, Z zombie, T stopped)
  • CPU% - percentage of CPU time the process is using
  • MEM% - percentage of physical memory the process is using
  • TIME+ - total CPU time consumed since the process started
  • Command - the full command line that launched the process

Function Keys

The bottom bar shows the available actions:

  • F1 - open the help screen
  • F2 - open the setup menu to customize meters and columns
  • F3 - search for a process by name
  • F4 - filter the process list to show only matching entries
  • F5 - toggle tree view
  • F6 - choose a column to sort by
  • F7 - decrease the nice value (higher priority) of the selected process
  • F8 - increase the nice value (lower priority) of the selected process
  • F9 - send a signal to the selected process
  • F10 - quit htop

Sorting Processes

By default, htop sorts processes by CPU usage, so the heaviest processes float to the top. To change the sort column, press F6 and select a column from the list using the arrow keys.

For quick sorting without opening the menu, use these shortcut keys:

  • P - sort by CPU%
  • M - sort by MEM%
  • T - sort by TIME+

Press the same key again to reverse the sort order. This is useful when you want to find the least active processes or the ones that started most recently.

You can also set the initial sort column from the command line:

Terminal
htop -s PERCENT_MEM

This starts htop with the process list sorted by memory usage, which is handy when you are investigating high memory consumption on a server.

Searching for a Process

Press F3 (or /) to open the search bar at the bottom of the screen. Type part of the process name and htop will highlight the first matching entry in the process list. Press F3 again to jump to the next match.

The search is incremental, so the highlight updates as you type each character. Press Esc to close the search bar and return to normal navigation.

Filtering Processes

Press F4 to activate the filter. Unlike search, which jumps between matches, filtering hides all processes that do not match the text you type. Only matching entries remain visible in the list.

This is especially useful on busy systems with hundreds of running processes. For example, typing postgres after pressing F4 reduces the list to only PostgreSQL worker processes and the postmaster, making it much easier to see their combined resource usage.

Press Esc to clear the filter and restore the full process list.

Tree View

Press F5 to toggle tree view. In this mode, htop groups child processes under their parent and displays the process hierarchy as an indented tree. This makes it straightforward to see which processes were spawned by a service manager, a shell session, or a container runtime.

To start htop in tree view directly from the command line:

Terminal
htop -t

Tree view combines well with filtering. Press F4, type a service name, and the tree shows only that service and its child processes. For a dedicated look at process hierarchies outside of htop, see the pstree command.

Killing a Process

To stop a process from within htop, use the arrow keys to highlight it (or search with F3), then press F9. This opens a signal selection menu on the left side of the screen.

The most commonly used signals are:

  • 15 SIGTERM - asks the process to shut down gracefully. This is the default and the safest first choice
  • 9 SIGKILL - forces the process to terminate immediately. Use this only when SIGTERM does not work, because the process gets no chance to clean up
  • 2 SIGINT - the same signal sent by pressing Ctrl+C in a terminal
  • 1 SIGHUP - often used to tell a daemon to reload its configuration without restarting

Select the signal with the arrow keys and press Enter to send it. For more detail on process signals and how to send them from the command line, see the kill command guide.

Warning
SIGKILL bypasses all cleanup routines. Files and sockets may be left in an inconsistent state. Always try SIGTERM first and wait a few seconds before escalating to SIGKILL.

You can also tag multiple processes with Space and then press F9 to send a signal to all of them at once. Press U to untag all processes when you are done.

Changing Process Priority

You can adjust the scheduling priority (nice value) of a process directly from htop. Select the process and press:

  • F7 - decrease the nice value (give the process higher priority)
  • F8 - increase the nice value (give the process lower priority)

Nice values range from -20 (highest priority) to 19 (lowest priority). Only root can set negative nice values, so you will need to run htop with sudo to raise a process above normal priority.

This is useful when a background task like a large compilation is consuming too much CPU and you want to lower its priority so that interactive services remain responsive.
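
The F7/F8 adjustments map onto the same nice values used by the standalone nice and renice commands. As a rough command-line sketch, starting a process through nice and then printing its inherited niceness shows the value htop would display in the NI column:

```shell
# start a child at niceness 10; `nice` with no arguments
# prints the niceness the child inherited
nice -n 10 nice
```

renice changes the value of an already-running process, making it the closest command-line equivalent of pressing F7/F8 in htop.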

Showing Only One User’s Processes

To limit the process list to a single user, start htop with the -u option:

Terminal
htop -u www-data

This shows only processes owned by www-data, which is useful when you are debugging a web server or an application service and do not want system processes cluttering the view.

Monitoring Specific Processes

To watch a known set of processes by their PIDs, use the -p option:

Terminal
htop -p 1234,5678

The display is restricted to those two processes, but the header meters still show system-wide resource usage. This gives you a focused view while keeping overall load visible at the top.

Customizing the Display

Press F2 to open the setup screen. From here you can:

  • Rearrange the header meters (move CPU bars, swap positions, switch between bar, text, graph, or LED display styles)
  • Add or remove columns in the process list
  • Change the color scheme
  • Toggle display options like showing kernel threads or custom thread names

Changes are saved to ~/.config/htop/htoprc and persist across sessions.

Command-Line Options

  • -d N - set the update interval to N tenths of a second. For example, -d 20 updates every 2 seconds instead of the default 1.5 seconds
  • -u USER - show only processes belonging to USER
  • -p PID[,PID...] - show only the specified process IDs
  • -t - start in tree view
  • -s COLUMN - sort by the given column name (use names like PERCENT_CPU, PERCENT_MEM, TIME)
  • -C - use monochrome mode (no colors), useful on terminals with limited color support
  • -H - highlight new and old processes

Quick Reference

For a printable quick reference, see the htop cheatsheet.

Key / Command Action
F1 Help
F2 Setup (customize meters and columns)
F3 or / Search for a process
F4 Filter the process list
F5 Toggle tree view
F6 Choose sort column
F7 / F8 Decrease / increase nice value
F9 Send a signal to the selected process
F10 or q Quit
P Sort by CPU%
M Sort by MEM%
T Sort by TIME+
Space Tag a process
U Untag all processes
htop -u USER Show only one user’s processes
htop -p PID,PID Monitor specific PIDs
htop -t Start in tree view
htop -d 20 Update every 2 seconds
htop -s PERCENT_MEM Sort by memory usage

Troubleshooting

htop: command not found
htop is not installed by default on every distribution. Install it with your package manager: sudo apt install htop on Debian-based systems or sudo dnf install htop on Fedora and RHEL.

Cannot change process priority (permission denied)
Setting a nice value below 0 requires root privileges. Run htop with sudo to adjust priority for processes owned by other users or to set negative nice values.

CPU bars show unexpected colors
Each color represents a different type of CPU usage. Green is normal user processes, red is kernel activity, blue is low-priority (nice) processes, and cyan is virtualization steal time. Press F1 inside htop to see the full color legend for your version.

Memory bar looks full but the system is responsive
Linux uses free memory for disk cache and buffers. The memory bar in htop distinguishes between application memory (green), buffers (blue), and cache (yellow). Cached memory is released to applications when they need it, so a full-looking bar does not mean the system is out of memory.

Processes appear and disappear quickly
Short-lived processes can start and exit between screen refreshes. Lower the update interval with -d 5 (half a second) to catch fast processes. Keep in mind that a very short interval increases the CPU usage of htop itself.

FAQ

What is the difference between htop and top?
Both show running processes and system metrics in real time. htop adds color-coded CPU and memory meters, mouse support, horizontal and vertical scrolling, built-in search and filtering, and the ability to kill or renice processes without typing a PID. The top command is pre-installed on virtually every Linux system, while htop may need to be installed separately.

Does htop use more resources than top?
Slightly. htop reads additional process details from /proc, so it uses a bit more CPU and memory than top. On modern hardware the difference is negligible, even on systems with thousands of processes.

What do VIRT, RES, and SHR mean?
VIRT is the total virtual memory the process has mapped, including memory it has allocated but may not be using yet. RES (Resident) is the portion actually held in physical RAM. SHR (Shared) is the subset of RES that is shared with other processes, such as shared libraries loaded by multiple programs.

How do I save my htop configuration?
Press F2 to open the setup screen, make your changes, and press F10 to exit. htop saves the configuration automatically to ~/.config/htop/htoprc.

Can I run htop over SSH?
Yes. htop works in any terminal emulator, including SSH sessions. If you are monitoring a remote server over a long session, consider running htop inside screen or tmux so the session persists if the connection drops.

Conclusion

htop gives you a clear, interactive view of what is running on your system and the resources each process is consuming. For related tools, see the ps command for snapshot process listings, kill for sending signals from the command line, and pstree for visualizing the process hierarchy.

lsof Command in Linux: List Open Files and Network Connections

When you delete a file but the disk space does not come back, or a port appears occupied and you cannot tell what is using it, the lsof command gives you the answer. lsof stands for “list open files.” In Linux, many resources are represented as files, so lsof can show regular files, directories, pipes, sockets, and network connections.

This guide explains how to use lsof to inspect open files, trace network connections, and identify which processes hold resources on your system.

lsof Syntax

txt
lsof [OPTIONS] [FILE]

When called without arguments, lsof attempts to list every open file visible to the current user. That output is usually thousands of lines long, so in practice you will almost always combine it with a filter or run it with sudo when you need a full system-wide view.

Understanding the Output

Running lsof without arguments produces output like this:

Terminal
sudo lsof | head -5
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root cwd DIR 8,1 4096 2 /
systemd 1 root rtd DIR 8,1 4096 2 /
nginx 1234 root 6u IPv4 28743 0t0 TCP *:http (LISTEN)
bash 5678 john 1u REG 8,1 102400 789 /home/john/log.txt

Each column tells you something specific:

  • COMMAND - the name of the process
  • PID - process ID
  • USER - the user running the process
  • FD - file descriptor (cwd is the current working directory, txt is the program executable, a number like 4u is an open file handle where r means read, w means write, and u means read and write)
  • TYPE - type of file (REG for a regular file, DIR for a directory, IPv4/IPv6 for network sockets, unix for Unix domain sockets)
  • SIZE/OFF - file size or current offset
  • NAME - the file path or network address

Find What Is Using a Port

The most common use for lsof is finding which process is listening on a given port. For that, use numeric output, limit the search to TCP, and filter to the LISTEN state:

Terminal
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 6u IPv4 34120 0t0 TCP *:80 (LISTEN)
nginx 1235 www-data 6u IPv4 34120 0t0 TCP *:80 (LISTEN)

The output shows that nginx is listening on port 80. The PID column gives you the process ID if you need to stop or restart it. The -nP options keep addresses and port numbers numeric, which makes the output easier to read in scripts and troubleshooting sessions.

To filter by both protocol and port number, specify the protocol before the port:

Terminal
sudo lsof -nP -iTCP:443 -sTCP:LISTEN

This is useful when both TCP and UDP services share the same port number and you want to narrow the results to listening TCP sockets only.

List All Network Connections

To see all open network connections on the system, pass -i without a port:

Terminal
sudo lsof -i

To narrow the results to a specific protocol, add TCP or UDP:

Terminal
sudo lsof -i TCP
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 6u IPv4 34120 0t0 TCP *:http (LISTEN)
sshd 2345 root 3u IPv4 39210 0t0 TCP *:ssh (LISTEN)
curl 5678 john 5u IPv4 56789 0t0 TCP workstation:54321->93.184.216.34:http (ESTABLISHED)

Listening services show an asterisk for the address. Established connections show the remote address and port.

List Files Opened by a Process

To see all files opened by a specific process, pass the process ID with -p:

Terminal
sudo lsof -p 1234

To exclude a process instead, prefix the PID with ^:

Terminal
sudo lsof -p ^1234

You can also filter by process name with -c. This matches any running process whose name starts with the given string:

Terminal
sudo lsof -c nginx

List Files Opened by a User

To see all files a particular user has open, use -u followed by the username:

Terminal
lsof -u john

To see files opened by everyone except that user, prefix the username with ^:

Terminal
sudo lsof -u ^john

This is useful on multi-user systems when you want to audit what a specific account is accessing.

Find Which Process Has a File Open

Pass a file path directly to lsof to see which processes currently have it open:

Terminal
lsof /var/log/nginx/access.log
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 5w REG 8,1 249856 654321 /var/log/nginx/access.log
nginx 1235 www-data 5w REG 8,1 249856 654321 /var/log/nginx/access.log

This works for any file, including device files, sockets, and named pipes.

List All Open Files in a Directory

To list all files open within a directory and its subdirectories, use +D:

Terminal
sudo lsof +D /var/log

For a non-recursive listing (top-level only), use lowercase +d:

Terminal
sudo lsof +d /var/log

+D is especially useful before unmounting a filesystem. If any process has a file open inside the mount point, umount will refuse to proceed. Running lsof +D /mountpoint tells you exactly which process is blocking it.

Find Deleted Files Still Holding Disk Space

When you delete a file that a running process still has open, the kernel keeps the data on disk until the process closes the file descriptor or exits. This is a common cause of the “disk is full but I cannot find any large files” problem.

To find these held-open deleted files, use +L1:

Terminal
sudo lsof +L1
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
python3 4567 john 3r REG 8,1 2097152 0 456 /tmp/data.bin (deleted)

The NLINK column shows 0, meaning the file has no directory entry left but is still open. Restarting the process listed under COMMAND will release the space. For more ways to find large files on Linux, including files that are still visible on disk, see that dedicated guide.
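
The mechanism behind held-open deleted files can be reproduced without lsof: data stays readable through any file descriptor that was opened before the unlink. The temporary file below is only for demonstration:

```shell
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"   # open fd 3 on the file
rm "$tmp"        # unlink it; the data remains on disk
cat <&3          # prints: still here
exec 3<&-        # close the fd; only now is the space freed
```

While fd 3 is still open, running sudo lsof +L1 on the same system would list this file with NLINK 0 and the (deleted) marker.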

Get Only Process IDs

The -t option outputs only process IDs, with no headers or other columns. This is designed for scripting:

Terminal
sudo lsof -t -iTCP:8080 -sTCP:LISTEN
output
3456

A common pattern is to stop the process listening on a port:

Terminal
kill $(sudo lsof -t -iTCP:8080 -sTCP:LISTEN)

Verify the process first with sudo lsof -nP -iTCP:8080 -sTCP:LISTEN before killing it, so you know what you are stopping.

Combining Filters

By default, lsof treats multiple options as OR logic: the output includes files matching any of the conditions. To switch to AND logic (a result must satisfy all conditions), add -a:

Terminal
sudo lsof -a -u john -i TCP

Without -a, this command would return all TCP connections on the system plus all files opened by john. With -a, it returns only TCP connections that belong specifically to john.

Quick Reference

For a printable quick reference, see the lsof cheatsheet.

Command Description
lsof List all open files
lsof -nP -iTCP:PORT -sTCP:LISTEN Find what is listening on a TCP port
lsof -i TCP List all TCP connections
lsof -nP -iTCP:443 -sTCP:LISTEN Filter by protocol and port
lsof -p PID Files opened by a process
lsof -c nginx Files opened by processes named nginx
lsof -u john Files opened by a user
lsof /path/to/file Processes that have a specific file open
lsof +D /dir All open files in a directory (recursive)
lsof +L1 Deleted files still held open
lsof -t -iTCP:PORT -sTCP:LISTEN Output only PIDs listening on a TCP port
lsof -a -u john -i TCP AND logic: TCP connections for a specific user

FAQ

Why does lsof require sudo?
Without root privileges, lsof can only show files opened by your own processes. Running it with sudo gives you visibility into all processes on the system. Some distributions also restrict access to /proc for non-root users, which limits what lsof can read without elevated permissions.

What is the difference between lsof and ss?
lsof covers the full range of open files: regular files, directories, pipes, devices, and network sockets in a single view. The ss command is purpose-built for network sockets and provides more detail about socket states and statistics. Use lsof when you want to tie a network connection back to a specific file or see everything a process has open. Use ss when you need to inspect socket state in detail.

How do I find which process is using a port?
Run sudo lsof -nP -iTCP:PORT -sTCP:LISTEN, replacing PORT with the number you want to check. The ss command with ss -tlnp is another approach covered in the guide on how to check listening ports.

Can I monitor file access in real time with lsof?
No. lsof takes a snapshot at the moment you run it. For real-time filesystem event monitoring, use inotifywait from the inotify-tools package.

lsof shows a file path ending in (deleted). What does that mean?
The file was removed from the directory while a process still had it open. The data remains on disk until the process releases the file descriptor. This is the cause of the “disk full but no large files visible” problem described in the section above. Use lsof +L1 to list all such files and restart the holding process to free the space.

Conclusion

lsof is one of the most practical diagnostic tools available on Linux. Whether you are tracking down a port conflict, auditing which files a user has open, or recovering disk space from deleted but held-open files, it gives you a direct view into what every process is accessing at any moment.

For related tools, see the ps command guide for process inspection and the ss command guide for dedicated network socket statistics.

sed Delete Lines: Remove Lines by Number, Pattern, or Range

When working with text files, you will often need to remove specific lines, whether it is stripping comments from a configuration file, cleaning up blank lines in log output, or cutting a range of lines from a data dump. The sed stream editor handles all of these cases from the command line, without opening the file in an editor.

sed processes input line by line, applies your commands, and writes the result to standard output. The original file stays untouched unless you explicitly tell sed to edit in place. If you need to find and replace text rather than remove entire lines, see How to Use sed to Find and Replace Strings in Files.

This guide explains how to use the sed delete command (d) with practical examples covering line numbers, patterns, ranges, and regular expressions.

How the Delete Command Works

The general form of a sed delete expression is:

txt
sed 'ADDRESSd' file

ADDRESS selects which lines to delete, and d is the delete command. When sed encounters a line that matches the address, it removes that line from the output entirely. Any line that does not match the address is printed as usual.

For demonstration purposes, the examples in this guide use the following file:

colors.txttxt
red
green
blue
yellow
white
black
orange
purple

Delete by Line Number

The simplest form of delete targets a specific line number. To remove the third line, place 3 before the d command:

Terminal
sed '3d' colors.txt
output
red
green
yellow
white
black
orange
purple

Notice that “blue” (which was on line 3) is gone from the output, but the original colors.txt file is unchanged. This is the default behavior of sed, and it gives you a safe way to preview changes before committing them.
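
Once the preview looks right, the -i option makes sed rewrite the file itself. With GNU sed, appending a suffix directly to -i keeps a backup of the original (BSD sed expects the suffix as a separate argument):

```shell
# preview the change on stdout first; colors.txt is untouched
sed '3d' colors.txt

# then edit in place, saving the original as colors.txt.bak
sed -i.bak '3d' colors.txt
```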

Sometimes you need to remove the last line of a file without knowing how many lines it contains. The $ address represents the last line:

Terminal
sed '$d' colors.txt
output
red
green
blue
yellow
white
black
orange

Here “purple” (the last line) has been removed from the output.

Delete Multiple Specific Lines

If you need to remove several individual lines, separate the addresses with a semicolon. The following command deletes lines 2, 5, and 8 in a single pass:

Terminal
sed '2d;5d;8d' colors.txt
output
red
blue
yellow
black
orange

The output is missing “green” (line 2), “white” (line 5), and “purple” (line 8). You can list as many addresses as needed this way.

Delete a Range of Lines

To delete a block of consecutive lines, specify a range with two numbers separated by a comma. The following command removes lines 3 through 6:

Terminal
sed '3,6d' colors.txt
output
red
green
orange
purple

Everything from “blue” (line 3) through “black” (line 6) is gone, and the remaining lines print normally.

You can also combine a line number with $ to delete from a given line to the end of the file. This keeps only the first four lines:

Terminal
sed '5,$d' colors.txt
output
red
green
blue
yellow

Adding ! after the address inverts the selection, so sed deletes every line that is not in the range. The following command keeps only lines 3 through 5 and removes everything else:

Terminal
sed '3,5!d' colors.txt
output
blue
yellow
white

This is a convenient way to extract a slice from a file without piping through head and tail.
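
The inverted delete is equivalent to suppressing default output with -n and printing only the selected range, so these two commands produce identical results:

```shell
sed '3,5!d' colors.txt    # delete every line outside 3-5
sed -n '3,5p' colors.txt  # print only lines 3-5
```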

Delete Every Nth Line

GNU sed supports the step address first~step, which matches every Nth line starting from a given position. To delete every even-numbered line (2, 4, 6, …):

Terminal
sed '0~2d' colors.txt
output
red
blue
white
orange

The first number is the starting offset (0 means “before the first line”, so the first match is line 2) and the second number is the step. In this case sed deletes lines 2, 4, 6, and 8.

To delete every third line starting from line 3:

Terminal
sed '3~3d' colors.txt
output
red
green
yellow
white
orange
purple

Lines 3 (“blue”) and 6 (“black”) are removed. This syntax is specific to GNU sed and is not available in the BSD version shipped with macOS.

Delete Lines Matching a Pattern

One of the most common uses of sed delete is removing lines that contain a specific string. Wrap the search pattern in forward slashes before the d command:

Terminal
sed '/blue/d' colors.txt
output
red
green
yellow
white
black
orange
purple

Every line containing the string “blue” is removed from the output. Keep in mind that this is a substring match, so a pattern like /bl/d would also delete the line “black” because it contains “bl”.
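
When you need to delete only an exact line rather than any line containing the string, anchor the pattern with ^ and $:

```shell
# substring match: removes both "blue" and "blueberry"
printf 'blue\nblueberry\n' | sed '/blue/d'

# anchored match: removes only the exact line "blue"
printf 'blue\nblueberry\n' | sed '/^blue$/d'
# prints: blueberry
```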

If you want the match to be case-insensitive, add the I flag after the closing slash of the pattern, before d:

Terminal
sed '/Blue/Id' colors.txt

This removes lines containing “blue”, “Blue”, “BLUE”, or any other case variation.

Delete Lines Not Matching a Pattern

Adding ! after the pattern address inverts the match, so sed deletes every line that does not contain the pattern. The following command removes all lines that do not have the letter “e”:

Terminal
sed '/e/!d' colors.txt
output
red
green
blue
yellow
white
orange
purple

Only “black” (line 6) was removed because it is the only line without an “e”. This works similarly to grep, but the advantage of sed is that you can combine it with other editing commands in the same expression.

Delete a Range Between Two Patterns

You can use two patterns separated by a comma to define a range. sed starts deleting at the first line that matches the opening pattern and stops after the first line that matches the closing pattern. The following command removes everything from “green” through “white”:

Terminal
sed '/green/,/white/d' colors.txt
output
red
black
orange
purple

Both the “green” and “white” lines are included in the deletion, along with “blue” and “yellow” between them.

You can also mix a line number with a pattern. This deletes from line 1 through the first line that contains “blue”:

Terminal
sed '1,/blue/d' colors.txt
output
yellow
white
black
orange
purple

This kind of mixed address is useful when you know where the range starts (a fixed line) but not where it ends.
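As a sketch of that use case, the following strips a header of unknown length that always begins at line 1 and ends at a marker line; the file contents here are hypothetical:

```shell
# Delete from line 1 through the first line matching "END HEADER",
# keeping everything after the marker.
printf '# generated file\n# END HEADER\nalpha\nbeta\n' \
  | sed '1,/END HEADER/d'
# Output:
# alpha
# beta
```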

Delete Empty Lines

Removing blank lines is one of the most common sed tasks. To delete completely empty lines, use the pattern ^$, which matches lines where the beginning and end have nothing between them:

Terminal
sed '/^$/d' file.txt

This works well for truly empty lines, but it will not catch lines that contain only spaces or tabs. To remove those as well, use the [[:space:]] character class:

Terminal
sed '/^[[:space:]]*$/d' file.txt

The * quantifier matches zero or more whitespace characters, so this pattern covers both empty lines and lines that appear blank but contain invisible whitespace.
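The difference between the two patterns shows up with a small test input that contains one truly empty line and one line of only spaces:

```shell
# ^$ removes only the truly empty line, leaving 3 lines...
printf 'alpha\n\n   \nomega\n' | sed '/^$/d' | wc -l
# ...while ^[[:space:]]*$ also removes the spaces-only line,
# leaving 2 lines.
printf 'alpha\n\n   \nomega\n' | sed '/^[[:space:]]*$/d' | wc -l
```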

Delete Lines Matching a Regular Expression

The pattern between the slashes is a regular expression, so you can use anchors, character classes, and quantifiers for more precise matching.

A common example is stripping comment lines from a configuration file. The ^# pattern matches any line that starts with #:

Terminal
sed '/^#/d' config.txt

To delete lines that end with a semicolon, anchor the pattern to the end of the line with $:

Terminal
sed '/;$/d' source.txt

You can also match by line length. The following command deletes lines that are shorter than 5 characters. The \{0,4\} quantifier allows between 0 and 4 repetitions of the preceding . (which matches any single character):

Terminal
sed '/^.\{0,4\}$/d' file.txt

If you find the backslash escaping awkward, use extended regular expressions with the -E flag (or -r on older systems). This lets you write the same expression without escaping the braces:

Terminal
sed -E '/^.{0,4}$/d' file.txt

Combining Delete with Other Commands

One of the strengths of sed is that you can chain multiple operations in a single expression. For example, the following command first removes comment lines and then replaces “localhost” with “127.0.0.1” in whatever remains:

Terminal
sed '/^#/d; s/localhost/127.0.0.1/g' config.txt

The commands are separated by a semicolon and execute left to right. If a line is deleted by an earlier command, the later commands never see it, so the replacement only applies to non-comment lines.
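A small self-contained pipeline demonstrates the ordering; the sample config lines are made up for illustration:

```shell
# The delete runs first and ends the cycle for the comment line,
# so the substitution only ever sees "host=localhost".
printf '# localhost settings\nhost=localhost\n' \
  | sed '/^#/d; s/localhost/127.0.0.1/g'
# Output:
# host=127.0.0.1
```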

Editing Files In Place

All of the examples above print the modified text to standard output without changing the original file. When you are satisfied with the result, add the -i flag to apply the changes directly:

Terminal
sed -i '/^$/d' file.txt
Warning
The -i flag modifies the file permanently. Always preview the output without -i first, or create a backup by passing a suffix: sed -i.bak '/^$/d' file.txt. This saves the original as file.txt.bak.

On macOS, the BSD version of sed requires an explicit extension argument even if you do not want a backup. Use sed -i '' '/^$/d' file.txt for in-place editing without creating a backup file.

Quick Reference

For a printable quick reference, see the sed cheatsheet.

Expression What it deletes
sed '3d' Line 3
sed '$d' Last line
sed '2d;5d' Lines 2 and 5
sed '3,6d' Lines 3 through 6
sed '5,$d' Line 5 to end of file
sed '3,5!d' All lines except 3 through 5
sed '0~2d' Every even-numbered line
sed '/pattern/d' Lines matching pattern
sed '/pattern/!d' Lines not matching pattern
sed '/start/,/end/d' From first match of start to first match of end
sed '/^$/d' Empty lines
sed '/^#/d' Comment lines starting with #

FAQ

How do I delete a line containing a specific word? Use sed '/word/d' file.txt. This removes any line where “word” appears as a substring. If you want to match the whole word only (so that “keyword” is not affected), use word boundaries: sed '/\bword\b/d' file.txt.

Can I delete lines from multiple files at once? Yes. Pass multiple filenames after the expression: sed -i '/pattern/d' file1.txt file2.txt. To process files recursively, combine sed with find: find . -name "*.log" -exec sed -i '/DEBUG/d' {} +.

What is the difference between sed '/pattern/d' and grep -v 'pattern'? Both remove matching lines from the output. The difference is that sed can combine deletion with other editing commands in the same expression and supports in-place editing with -i. If all you need is simple line filtering, grep -v is shorter to type.

How do I delete a range of lines and save the result? You can either redirect the output to a new file (sed '3,6d' file.txt > cleaned.txt) or edit in place with a backup (sed -i.bak '3,6d' file.txt). Do not redirect to the same input file, because the shell will truncate it before sed has a chance to read it.

Conclusion

The sed delete command gives you a fast way to remove lines by number, pattern, or range without opening a file in an editor. For replacing text within lines rather than removing them, see the companion guide on sed find and replace. If you need to work with individual columns or fields within a line, awk is often the better tool for the job.

git clone: Clone a Repository

git clone copies an existing Git repository into a new directory on your local machine. It creates remote-tracking branches for every branch on the remote and sets up a remote called origin pointing back to the source.

This guide explains how to use git clone with the most common options and protocols.

Syntax

The basic syntax of the git clone command is:

txt
git clone [OPTIONS] REPOSITORY [DIRECTORY]

REPOSITORY is the URL or path of the repository to clone. DIRECTORY is optional; if omitted, Git uses the repository name as the directory name.

Clone a Remote Repository

The most common way to clone a repository is over HTTPS. For example, to clone a GitHub repository:

Terminal
git clone https://github.com/user/repo.git

This creates a repo directory in the current working directory, initializes a .git directory inside it, and checks out the default branch.

You can also clone over SSH if you have an SSH key configured:

Terminal
git clone git@github.com:user/repo.git

SSH is generally preferred for repositories you will push to, as it does not require entering credentials each time.

Clone into a Specific Directory

By default, git clone creates a directory named after the repository. To clone into a different directory, pass the target path as the second argument:

Terminal
git clone https://github.com/user/repo.git my-project

To clone into the current directory, use . as the target:

Terminal
git clone https://github.com/user/repo.git .
Warning
Cloning into . requires the current directory to be empty.

Clone a Specific Branch

By default, git clone checks out the default branch (usually main or master). To clone and check out a different branch, use the -b option:

Terminal
git clone -b develop https://github.com/user/repo.git

The full repository history is still downloaded; only the checked-out branch differs. To limit the download to a single branch, combine -b with --single-branch:

Terminal
git clone -b develop --single-branch https://github.com/user/repo.git

Shallow Clone

A shallow clone downloads only a limited number of recent commits rather than the full history. This is useful for speeding up the clone of large repositories when you do not need the full commit history.

To clone with only the latest commit:

Terminal
git clone --depth 1 https://github.com/user/repo.git

To include the last 10 commits:

Terminal
git clone --depth 10 https://github.com/user/repo.git

Shallow clones are commonly used in CI/CD pipelines to reduce download time. To convert a shallow clone to a full clone later, run:

Terminal
git fetch --unshallow

Clone a Local Repository

git clone also works with local paths. This is useful for creating an isolated copy for testing:

Terminal
git clone /path/to/local/repo new-copy
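The whole flow can be exercised offline with a throwaway repository. The mktemp paths, identity, and commit message below are illustrative only:

```shell
# Create a small source repository in a temporary directory.
src=$(mktemp -d)
git init -q "$src"
git -C "$src" config user.name demo
git -C "$src" config user.email demo@example.com
git -C "$src" commit -q --allow-empty -m "initial commit"

# Clone it and confirm the origin remote points back at the source.
dst=$(mktemp -d)/new-copy
git clone -q "$src" "$dst"
git -C "$dst" remote get-url origin   # prints the source path
```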

Troubleshooting

Destination directory is not empty
git clone URL . works only when the current directory is empty. If files already exist, clone into a new directory or move the existing files out of the way first.

Authentication failed over HTTPS
Most Git hosting services no longer accept account passwords for Git operations over HTTPS. Use a personal access token or configure a credential manager so Git can authenticate securely.

Permission denied (publickey)
This error usually means your SSH key is missing, not loaded into your SSH agent, or not added to your account on the Git hosting service. Verify your SSH setup before retrying the clone.

Host key verification failed
SSH could not verify the identity of the remote server. Confirm you are connecting to the correct host, then accept or update the host key in ~/.ssh/known_hosts.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git clone URL Clone a repository via HTTPS or SSH
git clone URL dir Clone into a specific directory
git clone URL . Clone into the current (empty) directory
git clone -b branch URL Clone and check out a specific branch
git clone -b branch --single-branch URL Clone only a specific branch
git clone --depth 1 URL Shallow clone: latest commit only
git clone --depth N URL Shallow clone: last N commits
git clone /path/to/repo Clone a local repository

FAQ

What is the difference between HTTPS and SSH cloning?
HTTPS cloning works out of the box but prompts for credentials unless you use a credential manager. SSH cloning requires a key pair to be configured but does not prompt for a password on each push or pull.

Does git clone download all branches?
Yes. All remote branches are fetched, but only the default branch is checked out locally. Use git branch -r to list all remote branches, or git checkout branch-name to switch to one.

How do I clone a private repository?
For HTTPS, provide your username and a personal access token when prompted. For SSH, ensure your public key is added to your account on the hosting service.

Can I clone a specific tag instead of a branch?
Yes. Use -b with the tag name: git clone -b v1.0.0 https://github.com/user/repo.git. Git checks out that tag in a detached HEAD state, so create a branch if you plan to make commits.

Conclusion

git clone is the starting point for working with any existing Git repository. After cloning, you can inspect the commit history with git log, review changes with git diff, learn when to use git fetch vs git pull, and configure your username and email if you have not done so already.

jobs Command in Linux: List and Manage Background Jobs

The jobs command is a Bash built-in that lists all background and suspended processes associated with the current shell session. It shows the job ID, state, and the command that started each job.

This article explains how to use the jobs command, what each option does, and how job specifications work for targeting individual jobs.

Syntax

The general syntax for the jobs command is:

txt
jobs [OPTIONS] [JOB_SPEC...]

When called without arguments, jobs lists all active jobs in the current shell.

Listing Jobs

To see all background and suspended jobs, run jobs without any arguments:

Terminal
jobs

If you have two processes (one running and one stopped), the output will look similar to this:

output
[1]- Stopped vim notes.txt
[2]+ Running ping google.com &

Each line shows:

  • The job number in brackets ([1], [2])
  • A + or - sign indicating the current and previous job (more on this below)
  • The job state (Running, Stopped, Done)
  • The command that started the job

To include the process ID (PID) in the output, use the -l option:

Terminal
jobs -l
output
[1]- 14205 Stopped vim notes.txt
[2]+ 14230 Running ping google.com &

The PID is useful when you need to send a signal to a process with the kill command.

Job States

Jobs reported by the jobs command can be in one of these states:

  • Running — the process is actively executing in the background.
  • Stopped — the process has been suspended, typically by pressing Ctrl+Z.
  • Done — the process has finished. This state appears once and is then cleared from the list.
  • Terminated — the process was killed by a signal.

When a background job finishes, the shell displays its Done status the next time you press Enter or run a command.

Job Specifications

Job specifications (job specs) let you target a specific job when using commands like fg, bg, kill, or disown. They always start with a percent sign (%):

  • %1, %2, … — refer to a job by its number.
  • %% or %+ — the current job (the most recently backgrounded or suspended job, marked with +).
  • %- — the previous job (marked with -).
  • %string — the job whose command starts with string. For example, %ping matches a job started with ping google.com.
  • %?string — the job whose command contains string anywhere. For example, %?google also matches ping google.com.

Here is a practical example. Start two background jobs:

Terminal
sleep 300 &
ping -c 100 google.com > /dev/null &

Now list them:

Terminal
jobs
output
[1]- Running sleep 300 &
[2]+ Running ping -c 100 google.com > /dev/null &

You can bring the sleep job to the foreground by its number:

Terminal
fg %1

Or by the command prefix:

Terminal
fg %sleep

Both are equivalent and bring job [1] to the foreground.

Options

The jobs command accepts the following options:

  • -l — List jobs with their process IDs in addition to the normal output.
  • -p — Print only the PID of each job’s process group leader. This is useful for scripting.
  • -r — List only running jobs.
  • -s — List only stopped (suspended) jobs.
  • -n — Show only jobs whose status has changed since the last notification.

To show only running jobs:

Terminal
jobs -r

To get just the PIDs of all jobs:

Terminal
jobs -p
output
14205
14230

Using jobs in Interactive Shells

The jobs command is primarily useful in an interactive shell where job control is enabled. In a regular shell script, job control is usually off, so jobs may return no output even when background processes exist.

If you need to manage background work in a script, track process IDs with $! and use wait with the PID instead of a job spec:

sh
# Start a background task
long_task &
task_pid=$!

# Poll until it finishes
while kill -0 "$task_pid" 2>/dev/null; do
  echo "Task is still running..."
  sleep 5
done

echo "Task complete."

To wait for several background tasks, store their PIDs and pass them to wait:

sh
task_a &
pid_a=$!

task_b &
pid_b=$!

task_c &
pid_c=$!

wait "$pid_a" "$pid_b" "$pid_c"
echo "All tasks complete."

In an interactive shell, you can still wait for a job by its job spec:

Terminal
wait %1

Quick Reference

Command Description
jobs List all background and stopped jobs
jobs -l List jobs with PIDs
jobs -p Print only PIDs
jobs -r List only running jobs
jobs -s List only stopped jobs
fg %1 Bring job 1 to the foreground
bg %1 Resume job 1 in the background
kill %1 Terminate job 1
disown %1 Remove job 1 from the job table
wait Wait for all background jobs to finish

FAQ

What is the difference between jobs and ps?
jobs shows only the processes started from the current shell session. The ps command lists all processes running on the system, regardless of which shell started them.

Why does jobs show nothing even though processes are running?
jobs only tracks processes started from the current shell. If you started a process in a different terminal or via a system service, it will not appear in jobs. Use ps aux to find it instead.

How do I stop a background job?
Use kill %N where N is the job number. For example, kill %1 sends SIGTERM to job 1. If the job does not stop, use kill -9 %1 to force-terminate it.

What do the + and - signs mean in jobs output?
The + marks the current job, the one that fg and bg will act on by default. The - marks the previous job. When the current job finishes, the previous job becomes the new current job.

How do I keep a background job running after closing the terminal?
Use nohup before the command, or disown after backgrounding it. For long-running interactive sessions, consider Screen or Tmux.

Conclusion

The jobs command lists all background and suspended processes in the current shell session. Use it together with fg, bg, and kill to control jobs interactively. For a broader guide on running and managing background processes, see How to Run Linux Commands in the Background.

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
     print("Starting app")
+    print("Debug mode enabled")
     run()

Lines starting with + were added. Lines starting with - were removed. Lines with no prefix are context — unchanged lines shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix:

Terminal
git diff HEAD^ HEAD

HEAD refers to the latest commit. HEAD^ is one commit before it. You can go further back with HEAD~2, HEAD~3, and so on.

Because this compares the latest commit against its parent, it is also the quickest way to see exactly what changed in the last commit.
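You can try this on a throwaway repository; everything below (paths, identity, file name) is illustrative:

```shell
# Build a repository with two commits in a temporary directory.
repo=$(mktemp -d)
cd "$repo" && git init -q .
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "first"
echo "hello" > greeting.txt
git add greeting.txt && git commit -q -m "second"

# The summary shows that the last commit added greeting.txt:
git diff --stat HEAD^ HEAD
```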

Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py     | 3 +++
 config.yml | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD. For a clearer overview of how git fetch differs from git pull, see our guide on git fetch vs git pull.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide, and for syncing remote changes safely, see git fetch vs git pull.

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date:   Fri Mar 7 09:15:04 2026 +0100

    Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

 src/auth.py        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/models.py      | 12 +++++++
 tests/test_auth.py | 45 +++++++++++++++++
 3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
* a3f1d2e Add user authentication module
* 7b8c9d0 Fix login form validation
| * 3e4f5a6 WIP: dark mode toggle
|/
* c1d2e3f Update README with setup instructions

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.

How do I find a commit by its message?
Use git log --grep="search term". The pattern is matched against the whole commit message, body included, not just the subject line. Add -i for a case-insensitive search, and -E if the pattern uses extended regular expression syntax. When you supply multiple --grep patterns, commits matching any of them are listed; add --all-match to require that every pattern matches.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit. If you need to move one specific change onto another branch, see git cherry-pick.

git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, but you cannot switch branches with a dirty working tree. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).
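The save-and-restore cycle can be walked end to end in a throwaway repository; the paths and file contents below are illustrative:

```shell
# Set up a repository with one committed file.
repo=$(mktemp -d)
cd "$repo" && git init -q .
git config user.name demo
git config user.email demo@example.com
echo "v1" > app.conf
git add app.conf && git commit -q -m "add config"

# Modify the file, stash, and confirm the tree matches the commit.
echo "v2" > app.conf
git stash push -q
cat app.conf    # prints v1

# Restore the stashed change.
git stash pop -q
cat app.conf    # prints v2
```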

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for day-to-day work and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.
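As a runnable sketch of that recovery in a throwaway repository (names hypothetical; actual hashes will differ):

```shell
# throwaway repo: drop a stash, then recover it from the dangling commit
git init recover-demo && cd recover-demo
git config user.email "demo@example.com" && git config user.name "Demo"
echo "base" > file.txt && git add file.txt && git commit -m "base"
echo "lost work" > file.txt
git stash && git stash drop                  # the stash commit is now unreachable
hash=$(git fsck --no-reflogs 2>/dev/null | awk '/dangling commit/ {print $3; exit}')
git stash store -m "recovered stash" "$hash" # re-register it as a stash entry
git stash list                               # back at stash@{0}
```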

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide.

How to Create a systemd Service File in Linux

systemd is the init system used by most modern Linux distributions, including Ubuntu, Debian, Fedora, and RHEL. It manages system processes, handles service dependencies, and starts services automatically at boot.

A systemd service file — also called a unit file — tells systemd how to start, stop, and manage a process. This guide explains how to create a systemd service file, install it on the system, and manage it with systemctl.

Quick Reference

Task Command
Create a service file sudo nano /etc/systemd/system/myservice.service
Reload systemd after changes sudo systemctl daemon-reload
Enable service at boot sudo systemctl enable myservice
Start the service sudo systemctl start myservice
Enable and start at once sudo systemctl enable --now myservice
Check service status sudo systemctl status myservice
Stop the service sudo systemctl stop myservice
Disable at boot sudo systemctl disable myservice
View service logs sudo journalctl -u myservice
View live logs sudo journalctl -fu myservice

Understanding systemd Unit Files

Service files are plain text files with a .service extension. They are stored in one of two locations:

  • /etc/systemd/system/ — system-wide services, managed by root. Files here take priority over package-installed units.
  • /usr/lib/systemd/system/ or /lib/systemd/system/ — units installed by packages, depending on the distribution. Do not edit these directly; copy them to /etc/systemd/system/ to override.
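For example, rather than copying a whole packaged unit, you can create a drop-in override (shown here for a hypothetical nginx.service; sudo systemctl edit nginx creates this file for you) containing only the directives you want to change:

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# only the overridden directives go here; everything else
# still comes from the packaged unit file
[Service]
Restart=always
RestartSec=3
```

If you create the file by hand instead of via systemctl edit, run sudo systemctl daemon-reload afterwards so systemd picks up the override.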

A service file is divided into three sections:

  • [Unit] — describes the service and declares dependencies.
  • [Service] — defines how to start, stop, and run the process.
  • [Install] — controls how and when the service is enabled.

Creating a Simple Service File

In this example, we will create a service that runs a custom Bash script. First, create the script you want to run:

Terminal
sudo nano /usr/local/bin/myapp.sh

Add the following content to the script:

/usr/local/bin/myapp.shsh
#!/bin/bash
echo "myapp started" >> /var/log/myapp.log
# placeholder application logic: keep a long-running
# foreground process, as Type=simple expects
while true; do
    sleep 60
done

Make the script executable:

Terminal
sudo chmod +x /usr/local/bin/myapp.sh

Now create the service file:

Terminal
sudo nano /etc/systemd/system/myapp.service

Add the following content:

/etc/systemd/system/myapp.serviceini
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving the file, reload systemd to pick up the new unit:

Terminal
sudo systemctl daemon-reload

Enable and start the service:

Terminal
sudo systemctl enable --now myapp

Check that it is running:

Terminal
sudo systemctl status myapp
output
● myapp.service - My Custom Application
Loaded: loaded (/etc/systemd/system/myapp.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2026-03-10 09:00:00 CET; 2s ago
Main PID: 1234 (myapp.sh)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/myapp.service
└─1234 /bin/bash /usr/local/bin/myapp.sh

Unit File Sections Explained

[Unit] Section

The [Unit] section contains metadata and dependency declarations.

Common directives:

  • Description= — a short human-readable name shown in systemctl status output.
  • After= — start this service only after the listed units are active. Does not create a dependency; use Requires= for hard dependencies.
  • Requires= — this service requires the listed units. If they fail, this service fails too.
  • Wants= — like Requires=, but the service still starts even if the listed units fail.
  • Documentation= — a URL or man page reference for the service.

[Service] Section

The [Service] section defines the process and how systemd manages it.

Common directives:

  • Type= — the startup type. See Service Types below.
  • ExecStart= — the command to run when the service starts. Must be an absolute path.
  • ExecStop= — the command to run when the service stops. If omitted, systemd sends SIGTERM.
  • ExecReload= — the command to reload the service configuration without restarting.
  • Restart= — when to restart automatically. See Restart Policies below.
  • RestartSec= — how many seconds to wait before restarting. Default is 100ms.
  • User= — run the process as this user instead of root.
  • Group= — run the process as this group.
  • WorkingDirectory= — set the working directory for the process.
  • Environment= — set environment variables: Environment="KEY=value".
  • EnvironmentFile= — read environment variables from a file.
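For example, assuming a file /etc/myapp/env containing plain KEY=value lines such as PORT=8080 (both the path and the variable are hypothetical), the unit can load and expand those values:

```ini
[Service]
# read KEY=value pairs from the file; no 'export' keyword needed
EnvironmentFile=/etc/myapp/env
# ${PORT} expands to the value read from the environment file
ExecStart=/usr/local/bin/myapp --port ${PORT}
```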

[Install] Section

The [Install] section controls how the service is enabled.

  • WantedBy=multi-user.target — the most common value. Enables the service for normal multi-user (non-graphical) boot.
  • WantedBy=graphical.target — use this if the service requires a graphical environment.

Service Types

The Type= directive tells systemd how the process behaves at startup.

Type Description
simple Default. The process started by ExecStart is the main process.
forking The process forks and the parent exits. systemd tracks the child. Use with traditional daemons.
oneshot The process runs once and exits. systemd waits for it to finish before marking the service as active.
notify Like simple, but the process notifies systemd when it is ready using sd_notify().
idle Like simple, but the process is delayed until all active jobs finish.

For most custom scripts and applications, Type=simple is the right choice.
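A minimal Type=oneshot unit, by contrast, suits a task that runs once at boot rather than a long-running process (the script path here is hypothetical):

```ini
[Unit]
Description=One-time setup task

[Service]
Type=oneshot
ExecStart=/usr/local/bin/setup.sh
# keep the unit marked 'active' after the script exits,
# so it is not started again
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```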

Restart Policies

The Restart= directive controls when systemd automatically restarts the service.

Value Restarts when
no Never (default).
always The process exits for any reason.
on-failure The process exits with a non-zero code, is killed by a signal, or times out.
on-abnormal The process is killed by a signal or times out; exit codes, clean or not, do not trigger a restart.
on-success Only if the process exits cleanly (exit code 0).

For long-running services, Restart=on-failure is the most common choice. Use Restart=always for services that must stay running under all circumstances.
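As a sketch, a service that must stay running can combine Restart=always with a rate limit so a crash loop does not restart indefinitely (the binary path is hypothetical):

```ini
[Unit]
Description=Crash-resilient worker
# give up if the service restarts more than 4 times in 60 seconds
StartLimitIntervalSec=60
StartLimitBurst=4

[Service]
ExecStart=/usr/local/bin/worker
Restart=always
RestartSec=5
```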

Running a Service as a Non-Root User

Running services as root is a security risk. Use the User= and Group= directives to run the service as a dedicated user:

/etc/systemd/system/myapp.serviceini
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
User=myappuser
Group=myappuser
ExecStart=/usr/local/bin/myapp
Restart=on-failure
WorkingDirectory=/opt/myapp
Environment="NODE_ENV=production"

[Install]
WantedBy=multi-user.target

Create the system user before enabling the service:

Terminal
sudo useradd -r -s /usr/sbin/nologin myappuser

Troubleshooting

Service fails to start — “Failed to start myapp.service”
Check the logs with journalctl: sudo journalctl -u myapp --since "5 minutes ago". The output shows the exact error message from the process.

Changes to the service file have no effect
You must run sudo systemctl daemon-reload after every edit to the unit file. systemd reads the file at reload time, not at start time.

“ExecStart= must be an absolute path”
The ExecStart= value must start with /. Use the full path to the binary — for example /usr/bin/python3 not python3. Use which python3 to find the absolute path.

Service starts but immediately exits
The process may be crashing on startup. Check sudo journalctl -u myapp -n 50 for error output. If Type=simple, make sure ExecStart= runs the process in the foreground — not a script that launches a background process and exits.

Service does not start at boot despite being enabled
Confirm the [Install] section has a valid WantedBy= target and that systemctl enable was run after daemon-reload. Check with systemctl is-enabled myapp.

FAQ

Where should I put my service file?
Put custom service files in /etc/systemd/system/. Files there take priority over package-installed units in /usr/lib/systemd/system/ and persist across package upgrades.

What is the difference between enable and start?
systemctl start starts the service immediately for the current session only. systemctl enable creates a symlink so the service starts automatically at boot. To do both at once, use systemctl enable --now myservice.

How do I view logs for my service?
Use sudo journalctl -u myservice to see all logs. Add -f to follow live output, or --since "1 hour ago" to limit the time range. See the journalctl guide for full usage.

Can I run a service as a regular user without root?
Yes, using user-level systemd units stored in ~/.config/systemd/user/. Manage them with systemctl --user enable myservice. They run when the user logs in and stop when they log out (unless lingering is enabled with loginctl enable-linger username).

How do I reload a service after changing its configuration?
If the service supports it, use sudo systemctl reload myservice — this sends SIGHUP or runs ExecReload= without restarting the process. Otherwise, use sudo systemctl restart myservice.

Conclusion

Creating a systemd service file gives you full control over how and when your process runs — including automatic restarts, boot integration, and user isolation. Once the file is in place, use systemctl enable --now to activate it and journalctl -u to monitor it. For scheduling tasks that run once at a set time rather than continuously, consider cron jobs as an alternative.
