nslookup Command in Linux: Query DNS Records

When a website does not load or email stops arriving, the first thing to check is whether the domain resolves to the correct address. The nslookup command is a quick way to query DNS servers and inspect the records behind a domain name.

nslookup ships with most Linux distributions and works on macOS and Windows as well. It supports both one-off queries from the command line and an interactive mode for running multiple lookups in a row.

This guide explains how to use nslookup with practical examples covering record types, reverse lookups, and troubleshooting.

Syntax

txt
nslookup [OPTIONS] [NAME] [SERVER]
  • NAME — The domain name or IP address to look up.
  • SERVER — The DNS server to query. If omitted, nslookup uses the server configured in /etc/resolv.conf.
  • OPTIONS — Query options such as -type=MX or -debug.

When called without arguments, nslookup starts in interactive mode.

Installing nslookup

On most distributions nslookup is already installed. To check, run:

Terminal
nslookup -version

If the command is not found, install it using your distribution’s package manager.

Install nslookup on Ubuntu, Debian, and Derivatives

Terminal
sudo apt update && sudo apt install dnsutils

Install nslookup on Fedora, RHEL, and Derivatives

Terminal
sudo dnf install bind-utils

Install nslookup on Arch Linux

Terminal
sudo pacman -S bind

The nslookup command is bundled with the same packages that provide dig.

Look Up a Domain Name

The simplest use is passing a domain name as an argument:

Terminal
nslookup linux.org
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

The first two lines show the DNS server that answered the query. Everything under “Non-authoritative answer” is the actual result. In this case, linux.org resolves to three IPv4 addresses.

“Non-authoritative” means the answer came from a resolver’s cache rather than directly from the domain’s authoritative name server.

Query a Specific DNS Server

By default, nslookup queries the resolver configured in /etc/resolv.conf. To query a different server, add it as the last argument.

For example, to query Google’s public DNS:

Terminal
nslookup linux.org 8.8.8.8
output
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

This is useful when you want to compare results across different resolvers or verify whether a DNS change has propagated to public servers.

Query Record Types

By default, nslookup returns A (IPv4 address) records. Use the -type option to query other record types.

MX Records (Mail Servers)

MX records identify the mail servers responsible for receiving email for a domain:

Terminal
nslookup -type=mx google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

The number before the mail server hostname is the priority. A lower number means higher priority.

NS Records (Name Servers)

NS records show which name servers are authoritative for a domain:

Terminal
nslookup -type=ns google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com nameserver = ns1.google.com.
google.com nameserver = ns2.google.com.
google.com nameserver = ns3.google.com.
google.com nameserver = ns4.google.com.

TXT Records

TXT records store arbitrary text data, commonly used for SPF, DKIM, and domain ownership verification:

Terminal
nslookup -type=txt google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com text = "v=spf1 include:_spf.google.com ~all"
google.com text = "facebook-domain-verification=22rm551cu4k0ab0bxsw536tlds4h95"
google.com text = "docusign=05958488-4752-4ef2-95eb-aa7ba8a3bd0e"

The output may include many entries. The example above shows a subset of the TXT records returned for google.com.

AAAA Records (IPv6)

AAAA records return the IPv6 address of a domain:

Terminal
nslookup -type=aaaa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: google.com
Address: 2a00:1450:4017:818::200e

SOA Record (Start of Authority)

The SOA record contains administrative information about the domain, including the primary name server, the responsible email address, and timing parameters for zone transfers:

Terminal
nslookup -type=soa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com
origin = ns1.google.com
mail addr = dns-admin.google.com
serial = 897592583
refresh = 900
retry = 900
expire = 1800
minimum = 60

The serial number increments each time the zone is updated. DNS secondaries use it to decide whether they need a zone transfer.

CNAME Records

CNAME records point one domain name to another:

Terminal
nslookup -type=cname www.github.com

If a CNAME record exists, the output shows the canonical name the alias points to. If the domain does not have a CNAME record, nslookup returns No answer.

Run an ANY Query

To ask the DNS server for an ANY response, use -type=any:

Terminal
nslookup -type=any google.com

ANY queries do not reliably return every record type for a domain. Many DNS servers return only a subset of records or refuse the query entirely.

Reverse DNS Lookup

A reverse lookup finds the hostname associated with an IP address. Pass an IP address instead of a domain name:

Terminal
nslookup 208.118.235.148
output
148.235.118.208.in-addr.arpa name = ip-208-118-235-148.twdx.net.

Reverse lookups query PTR records. They are useful for verifying that an IP address maps back to the expected hostname, which matters for mail server configuration and security checks.
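
The in-addr.arpa name shown in the output is nothing exotic: it is the IP's four octets reversed, with a fixed zone suffix appended. A quick sketch of that mapping in shell (the IP is the one from the example above):

```shell
ip="208.118.235.148"
# A PTR lookup reverses the four octets and appends the in-addr.arpa zone:
ptr_name=$(echo "$ip" | awk -F. '{print $4 "." $3 "." $2 "." $1 ".in-addr.arpa"}')
echo "$ptr_name"
# prints: 148.235.118.208.in-addr.arpa
```

This is the name nslookup queries for a PTR record behind the scenes when you pass it a bare IP address.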

Interactive Mode

Running nslookup without arguments starts an interactive session where you can run multiple queries without retyping the command:

Terminal
nslookup
output
>

At the > prompt, type a domain name to look it up:

output
> linux.org
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72

You can change query settings during the session with the set command. For example, to switch to MX record lookups and then query a domain:

output
> set type=mx
> google.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

To change the DNS server:

output
> server 8.8.8.8
Default server: 8.8.8.8
Address: 8.8.8.8#53

Type exit to leave interactive mode.

Interactive mode is convenient when you need to test several domains or record types in a row without running separate commands each time.

Debugging DNS Issues

The -debug option shows the full query and response details, including TTL values and additional sections that nslookup normally hides:

Terminal
nslookup -debug linux.org

The debug output is verbose, but it is helpful when you need to see TTL values, check whether answers are authoritative, or trace unexpected behavior.

nslookup vs dig

Both nslookup and dig query DNS servers, but they differ in output and capabilities:

  • nslookup produces simpler, more readable output. It also has an interactive mode that is convenient for quick checks.
  • dig provides detailed, structured output with sections (QUESTION, ANSWER, AUTHORITY, ADDITIONAL) and supports advanced options like +trace for tracing the full resolution path and +dnssec for verifying DNSSEC signatures.

For quick lookups and basic troubleshooting, nslookup is often faster to type and read. For in-depth DNS debugging, dig gives you more control and detail.

Troubleshooting

nslookup returns NXDOMAIN
The domain does not exist or is misspelled. Verify the domain name and check that it is registered.

nslookup returns SERVFAIL
The DNS server could not process the query. Try a different resolver to isolate the problem:

Terminal
nslookup linux.org 1.1.1.1

If public resolvers return the correct answer, the issue is with your configured resolver.

Connection timed out; no servers could be reached
This means nslookup could not contact the DNS server. Check your network connection and verify that /etc/resolv.conf contains a reachable name server. A firewall may also be blocking outbound DNS traffic on port 53.

Non-authoritative answer appears on every query
This is normal. It means the answer came from a resolver’s cache, not directly from the domain’s authoritative server. The result is still valid.

Quick Reference

For a printable quick reference, see the nslookup cheatsheet.

Task Command
Look up a domain nslookup example.com
Query a specific DNS server nslookup example.com 8.8.8.8
Query MX records nslookup -type=mx example.com
Query NS records nslookup -type=ns example.com
Query TXT records nslookup -type=txt example.com
Query AAAA (IPv6) records nslookup -type=aaaa example.com
Query SOA record nslookup -type=soa example.com
Query CNAME record nslookup -type=cname example.com
Run an ANY query nslookup -type=any example.com
Reverse DNS lookup nslookup 192.0.2.1
Start interactive mode nslookup
Enable debug output nslookup -debug example.com

FAQ

Can I use nslookup to check DNS propagation?
Yes. Query the same domain against several public DNS servers and compare the results. For example, run nslookup example.com 8.8.8.8, nslookup example.com 1.1.1.1, and nslookup example.com 9.9.9.9. If the answers differ, the change has not fully propagated.

Is nslookup deprecated?
The ISC (the organization behind BIND) once marked nslookup as deprecated in favor of dig, but later reversed that decision. nslookup is actively maintained and included in current BIND releases. It remains a practical tool for quick DNS lookups.

What does “Non-authoritative answer” mean?
It means the response came from a caching resolver, not from one of the domain’s authoritative name servers. The data is still accurate, but it may be slightly behind if a DNS change was made very recently and the cache has not expired yet.

Conclusion

The nslookup command is a quick way to query DNS records from the command line. Use -type to look up MX, NS, TXT, AAAA, and other record types, and pass a server argument to test against a specific resolver. For deeper DNS debugging, pair it with dig .

Debian vs Ubuntu Server: Which One Should You Use?

If you are setting up a new Linux server, Debian and Ubuntu are usually the two options that come up first. Both are mature, well-supported, and used across everything from small VPS deployments to large production environments. They also share the same package format and package manager, so the day-to-day administration feels familiar on both.

The decision usually comes down to tradeoffs: Debian gives you a leaner and more conservative base, while Ubuntu gives you newer defaults, a broader support ecosystem, and easier onboarding. This guide shows where those differences matter in practice.

Shared Foundation

Ubuntu is built on top of Debian. Canonical (the company behind Ubuntu) takes a snapshot of Debian’s unstable branch every six months, applies its own patches and defaults, and publishes a new Ubuntu release.

Because of this shared lineage, the two distributions have a lot in common:

  • Both use apt and dpkg for package management.
  • Both use systemd as the init system.
  • Both use .deb packages, and many third-party vendors ship a single .deb that works on either distribution.
  • Configuration file locations, service management commands, and filesystem layout follow the same conventions.

If you already know one, you can work comfortably on the other with very little adjustment.
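
One concrete trace of the shared lineage is /etc/os-release, which both distributions ship: Ubuntu's ID_LIKE field points back at Debian, so scripts can branch on the distribution family rather than the exact name. A small sketch (output shown is what Ubuntu typically reports; Debian leaves ID_LIKE unset):

```shell
# ID names the distribution; ID_LIKE names what it derives from.
# On Ubuntu this prints something like: ID=ubuntu ID_LIKE=debian
. /etc/os-release
echo "ID=$ID ID_LIKE=${ID_LIKE:-}"
```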

Release Cycle and Support

The biggest practical difference between Debian and Ubuntu is how often they release and how long each release is supported.

Debian does not follow a fixed release schedule. A new stable version ships roughly every two years, but only when the release team considers it ready. Each stable release receives approximately three years of full security support from the Debian Security Team, followed by about two more years of extended support through the Debian LTS project. Debian 13 (Trixie), released in August 2025, is the current stable version as of this writing. Debian 12 (Bookworm), released in June 2023, is now oldstable.

Ubuntu follows a predictable time-based schedule. A new version ships every six months (April and October), and every two years the April release is designated a Long-Term Support (LTS) version. LTS releases receive five years of free security updates, extendable to ten years through Ubuntu Pro (free for up to five machines for personal use). Ubuntu 24.04 LTS (Noble Numbat) is the current LTS release.

                        Debian Stable              Ubuntu LTS
Release cadence         ~2 years, when ready       Every 2 years (April of even years)
Free security support   ~3 years                   5 years
Extended support        ~2 years (Debian LTS)      Up to 10 years (Ubuntu Pro)
Current release         13 Trixie (August 2025)    24.04 Noble (April 2024)

If you want a release calendar you can plan around years in advance, Ubuntu is easier to work with. If you prefer a slower release model that puts stability first, Debian is usually the better choice.

Package Freshness vs Stability

Debian Stable freezes all package versions at release time. Once Bookworm shipped, its packages only receive security patches and critical bug fixes, never feature updates. This means you will not encounter unexpected behavior changes after a routine apt upgrade, but it also means you may be running older versions of languages, databases, or runtimes for the life of the release.

Ubuntu LTS takes a similar approach to stability, but because it pulls from a more recent Debian snapshot and applies its own patches, LTS packages tend to be slightly newer at release time. Ubuntu also offers PPAs (Personal Package Archives) as an official mechanism for installing newer software outside the main repository, though mixing PPAs with production servers carries its own risks.

Both distributions offer backports repositories for users who need specific newer packages without upgrading the entire system.

For server workloads where you install most software through containers or version managers (for example, nvm for Node.js or pyenv for Python), base package age matters less. The distribution becomes a stable foundation, and you manage application-level versions separately.

Default Installation and Setup

Ubuntu Server ships with a guided installer (Subiquity) that walks you through disk layout, networking, user creation, and optional snap-based packages like Docker. It installs a small but functional set of tools by default and can import your SSH keys from GitHub or Launchpad during setup.

Debian’s installer is more traditional. It offers the same configuration options, but the interface is text-based and expects you to make more decisions yourself. The resulting system is leaner; Debian installs very little beyond the base system unless you explicitly select additional task groups during installation.

If you provision servers with tools like Ansible, Terraform, or cloud-init, the installer matters less because you will not spend much time in it after the first boot. It matters more for quick manual deployments and local VMs, where Ubuntu usually gets you to a usable system faster.

Commercial Support and Cloud Ecosystem

Canonical offers paid support contracts, compliance certifications (FIPS, CIS benchmarks), and a managed services tier for Ubuntu. Ubuntu Pro extends security patching beyond the main repository and adds kernel livepatch for rebootless security updates. That matters most for teams with uptime targets, compliance requirements, or formal support needs.

Debian is entirely community-driven. There is no single company behind it, and there is no commercial support offering from the project itself. Third-party consultancies provide Debian support, but the ecosystem around it is smaller.

In the cloud, Ubuntu has a strong lead in first-party image availability. Every major provider (AWS, Google Cloud, Azure, DigitalOcean, Hetzner) offers official Ubuntu images, and many default to Ubuntu when launching a new instance. Debian images are also available on all major platforms, but Ubuntu tends to receive new platform features and optimized images first.

For container base images, both are widely used. Debian slim images are a popular choice for minimal Docker containers, while Ubuntu images are common when teams want closer parity between their server OS and their container environment.

Security

Both distributions have strong security track records. The Debian Security Team and the Ubuntu Security Team publish advisories and patches regularly, and both maintain clear processes for reporting and fixing vulnerabilities.

The key difference is scope. Ubuntu LTS includes five years of standard security maintenance for packages in the Main repository. Ubuntu Pro extends coverage to Main and Universe, effectively the full Ubuntu archive. Debian’s security team focuses on the main repository, and Debian’s LTS team covers only a subset of packages during the extended support period.

For kernel security, Ubuntu offers kernel livepatch through Ubuntu Pro, which applies critical kernel fixes without a reboot. Debian does not have an equivalent service built into the project, though third-party solutions exist.

In practice, both distributions are secure when properly maintained. The difference is in how much of the maintenance is handled for you and for how long.

When to Choose Debian

Choose Debian when you want:

  • Maximum stability with few surprises after upgrades. Debian Stable is one of the most conservative general-purpose distributions available.
  • A minimal base system that you build up yourself. Debian installs very little by default, which keeps the system lean and gives you tighter control over what runs on the server.
  • No vendor dependency. Debian is maintained by a community of volunteers and governed by a social contract. There is no single company setting the direction of the project.
  • A lightweight container base. Debian slim images are a common choice when image size matters.

Debian is often the better fit for long-running infrastructure, self-managed servers, and setups where predictability matters more than convenience.

When to Choose Ubuntu

Choose Ubuntu when you want:

  • Faster time to a working server. The installer and default configuration get you to a usable system quickly, especially on cloud platforms where Ubuntu images are often the default.
  • Longer official support. Five years of free LTS support (ten with Ubuntu Pro) gives you a wider upgrade window than Debian’s default support period.
  • Commercial backing. If your organization requires vendor support contracts, compliance certifications, or managed services, Canonical provides them.
  • Broader documentation and community. Ubuntu’s larger user base means you will usually find more tutorials, examples, and third-party guides written for it.
  • Cloud-heavy workflows. First-party cloud images and strong cloud-init support make Ubuntu an easy default for many hosted deployments.

Ubuntu is often the easier choice for cloud deployments, teams that need commercial support, and environments where onboarding speed matters.

Side-by-Side Summary

                          Debian Stable                     Ubuntu LTS
Based on                  Independent                       Debian (unstable branch)
Governance                Community (Debian Project)        Corporate (Canonical) + community
Release schedule          When ready (~2 years)             Fixed (every 2 years)
Free support period       ~5 years (3 + 2 LTS)              5 years (10 with Pro)
Package freshness         Conservative (frozen at release)  Slightly newer at release
Default install           Minimal                           Functional with guided setup
Cloud image availability  Good                              Excellent (often the default)
Commercial support        Third-party only                  Canonical (Ubuntu Pro)
Kernel livepatch          Not built in                      Available via Ubuntu Pro
Container base image      Popular (especially slim)         Popular
Package manager           apt / dpkg                        apt / dpkg
Init system               systemd                           systemd

FAQ

Can I migrate a server from Ubuntu to Debian or vice versa?
It is technically possible but not straightforward. The distributions share the same package format, but they differ in package versions, configuration defaults, and init scripts. A clean reinstall with configuration management (Ansible, Puppet, or similar) is the safer path. Plan the migration as a new deployment rather than an in-place conversion.

Do Debian and Ubuntu run the same software?
For the most part, yes. Most common server software is available on both Debian and Ubuntu, whether through the official repositories, vendor-provided .deb packages, containers, or upstream binaries. That said, package versions, repository availability, and vendor support can differ between the two distributions and between releases.

Which is better for Docker and containers?
Both work well as Docker hosts. Installing Docker follows nearly the same steps on either distribution. For base images inside containers, Debian slim images are slightly smaller, while Ubuntu images offer closer parity with Ubuntu-based host systems. The difference is marginal for most workloads.

Which is more secure?
Neither is inherently more secure than the other. Both have dedicated security teams and fast patch turnaround. Ubuntu Pro extends patching to a wider package set and adds kernel livepatch, which can matter in environments with strict uptime or compliance requirements. On a properly maintained server, both are equally secure.

Conclusion

Debian and Ubuntu are close enough that you can run the same kinds of workloads on either one. If you want the more conservative and self-managed option, pick Debian. If you want faster onboarding, broader vendor support, and a smoother cloud path, pick Ubuntu.

Linux from Zero (20260408)

By walking957
April 8, 2026, 10:05

Linux

I. Basic Commands

1. Basic Linux Commands

Question: What are the basic Linux commands?

Answer (core): ls, cd, pwd, mkdir, rm, cp, and mv are the basic commands.

Code example

# Directory operations
ls -la                       # long listing, including hidden files
ls -lh                       # human-readable sizes
cd /path/to/dir              # change directory
pwd                          # print working directory
mkdir -p /path/to/dir        # create directory (with parents)
mkdir -m 755 dir             # create with specific permissions

# File operations
rm -rf dir                   # delete a directory recursively
rm file.txt                  # delete a file
cp -r source dest            # copy a directory
cp file.txt dest/            # copy a file
mv oldname newname           # rename / move

# Viewing files
cat file.txt                 # whole file
head -n 20 file.txt          # first 20 lines
tail -n 20 file.txt          # last 20 lines
tail -f /var/log/syslog      # follow a log in real time

# Searching
grep "pattern" file.txt      # search text
find / -name "filename"      # find files by name
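
The search commands above compose well. A self-contained sketch combining find and grep against a throwaway directory (the paths are illustrative):

```shell
mkdir -p /tmp/demo/sub
echo "hello world" > /tmp/demo/sub/a.txt
echo "nothing here" > /tmp/demo/sub/b.txt
# List only the .txt files whose contents match the pattern:
find /tmp/demo -name "*.txt" -exec grep -l "hello" {} +
# prints: /tmp/demo/sub/a.txt
rm -r /tmp/demo
```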

2. File Permissions

Question: How do Linux file permissions work?

Answer (core): Linux file permissions cover three classes (owner, group, others), each with rwx bits.

Code example

# View permissions
ls -l file.txt
# -rw-r--r-- 1 user group 4096 Jan 15 10:00 file.txt
# Permission bits: -rw-r--r--
# type, then owner / group / others
# r=4 w=2 x=1

# Change permissions
chmod 755 file               # numeric form
chmod u+x file               # add execute for owner
chmod g-w file               # remove write for group
chmod o+r file               # add read for others
chmod a+x file               # add execute for everyone

# Change ownership
chown user:group file        # change owner and group
chown user file              # change owner only
chgrp group file             # change group only

# Special permission bits
chmod +s file                # SUID/SGID
chmod +t file                # sticky bit
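
The numeric form is just the r=4, w=2, x=1 values summed per class. A quick round-trip check using GNU stat (the file name is illustrative):

```shell
touch /tmp/perm-demo.txt
chmod 640 /tmp/perm-demo.txt          # 6 = rw- owner, 4 = r-- group, 0 = --- others
stat -c '%a %A' /tmp/perm-demo.txt    # prints: 640 -rw-r-----
rm /tmp/perm-demo.txt
```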

II. Text Processing

3. Text Processing Commands

Question: What are the main text processing commands?

Answer (core): awk, sed, and grep are powerful text processing tools.

Code example

# awk - text analysis
awk '{print $1, $3}' file.txt         # print columns 1 and 3
awk -F',' '{print $2}' file.csv       # custom field separator
awk '/pattern/ {print $0}' file.txt   # pattern matching
awk 'NR==5' file.txt                  # line 5 only

# sed - text substitution
sed 's/old/new/g' file.txt           # replace globally
sed -i 's/old/new/g' file.txt        # edit the file in place
sed '1,5d' file.txt                  # delete lines 1-5
sed -n '2p' file.txt                 # print line 2 only

# grep - searching
grep "pattern" file.txt
grep -r "pattern" dir/               # recursive search
grep -i "pattern" file.txt           # case-insensitive
grep -v "pattern" file.txt           # invert match
grep -E "regex" file.txt             # extended regular expressions

# Combining with pipes
cat file.txt | grep "pattern" | awk '{print $2}' | sort
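
The same tools can be exercised end to end against a throwaway CSV (the file name and contents are illustrative):

```shell
printf 'alice,30\nbob,25\n' > /tmp/people.csv
awk -F',' '{print $1}' /tmp/people.csv   # first column: alice, then bob
sed 's/alice/carol/' /tmp/people.csv     # substitution appears in the output only;
                                         # without -i the file is unchanged
rm /tmp/people.csv
```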

III. Process Management

4. Process Management

Question: What are the process management commands?

Answer (core): ps, top, and kill are used for process management.

Code example

# View processes
ps aux                           # all processes
ps -ef                           # full-format listing
top                              # real-time monitoring
htop                             # enhanced top

# Find processes
ps aux | grep nginx
pgrep -f "nginx"

# Signals and kill
kill -l                          # list signals
kill -9 pid                      # force kill (SIGKILL)
kill -15 pid                     # graceful termination (SIGTERM)
killall nginx                    # kill by name
pkill -f "nginx"                 # kill by pattern

# Background processes
nohup command &                  # run in background, survive logout
bg                               # resume a job in the background
fg                               # bring a job to the foreground
jobs                             # list jobs

IV. Networking

5. Network Commands

Question: What are the common network commands?

Answer (core): ping, curl, wget, netstat, and ss are the common network commands.

Code example

# Test connectivity
ping -c 4 example.com           # send 4 packets
ping -i 0.5 example.com         # 0.5-second interval

# Downloads
curl https://example.com               # fetch a page
curl -O https://example.com/file.txt   # download, keeping the remote filename
curl -L url                            # follow redirects
wget url                               # download tool
wget -c url                            # resume a partial download

# Network diagnostics
netstat -tuln                  # listening ports
netstat -anp | grep 80         # find what is using a port
ss -tuln                       # modern replacement for netstat
netstat -i                     # interface statistics

# Advanced curl usage
curl -X POST url -d "data"    # POST request
curl -H "Header: value" url   # custom header
curl -v url                    # verbose output

V. Disk and Memory

6. Disk and Memory

Question: What are the disk and memory commands?

Answer (core): df, du, and free are the usual commands for checking disk and memory usage.

Code example

# Disk usage
df -h                          # human-readable sizes
df -i                          # inode usage
du -sh *                       # size of each entry in the current directory
du -sh /path/to/dir            # size of a specific directory

# Memory usage
free -h                        # human-readable sizes
free -m                        # in megabytes
cat /proc/meminfo              # detailed memory info

# Mounting disks
mount /dev/sdb1 /mnt/usb       # mount
umount /mnt/usb                # unmount
df -T                          # show filesystem types

env Command in Linux: Show and Set Environment Variables

When you need to run a program with a different set of variables, test a script in a clean environment, or write a shebang line that works across systems, the env command is the right tool. It prints the current environment, sets or removes variables for a single command, and can even start a process with no inherited variables at all.

This guide covers how to use the env command with practical examples for everyday tasks.

env Syntax

txt
env [OPTIONS] [NAME=VALUE]... [COMMAND [ARGS]]

When called without arguments, env prints every environment variable in the current session, one per line. When followed by NAME=VALUE pairs and a command, it runs that command with the specified variables added or changed without affecting the current shell.

Print All Environment Variables

The simplest use of env is printing the full environment:

Terminal
env
output
SHELL=/bin/bash
USER=john
HOME=/home/john
LANG=en_US.UTF-8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
TERM=xterm-256color
XDG_SESSION_TYPE=tty
EDITOR=vim

Each line is a KEY=value pair. This is identical to what printenv shows with no arguments. The difference between the two commands becomes clear when you start passing options: printenv is designed for inspecting variables, while env is designed for modifying the environment before launching a program.

Print Variables with NUL Separator

By default, env separates each variable with a newline character. If variable values contain newlines themselves, the output becomes ambiguous. The -0 (or --null) option ends each entry with a NUL byte instead:

Terminal
env -0 | tr '\0' '\n' | head -5

This is useful when piping environment data into tools that expect NUL-delimited input, such as xargs with its -0 flag.

Run a Command with Modified Variables

The most practical feature of env is launching a command with variables changed for that command only. Place NAME=VALUE pairs before the command:

Terminal
env LANG=C sort unsorted.txt

This runs sort with LANG set to C, which forces byte-order sorting regardless of the system locale. The current shell’s LANG value stays unchanged after the command finishes.

You can set multiple variables at once:

Terminal
env DB_HOST=localhost DB_PORT=5432 python3 app.py

The application sees DB_HOST and DB_PORT in its environment, but those variables do not persist in your shell session after the process exits.

Tip
You can achieve the same result with the shell’s built-in VAR=value command syntax (for example, LANG=C sort unsorted.txt). The env command is useful when you need additional options like -i or -u, or when you want to be explicit about environment manipulation in scripts.

Run a Command in a Clean Environment

The -i (or --ignore-environment) option clears the entire inherited environment before running the command. Only the variables you explicitly set on the command line will be present:

Terminal
env -i bash -c 'env'
output
PWD=/home/john
SHLVL=1
_=/usr/bin/env

The output shows almost nothing. The shell itself sets a few internal variables (PWD, SHLVL, _), but everything the parent shell normally passes along (PATH, HOME, LANG, USER) is gone.

This is useful for testing whether a script depends on variables it does not set itself. You can combine -i with explicit variables to create a minimal, controlled environment:

Terminal
env -i HOME=/home/john PATH=/usr/bin:/bin bash -c 'echo $HOME; echo $PATH'
output
/home/john
/usr/bin:/bin

A bare - (hyphen) works as a shorthand for -i:

Terminal
env - PATH=/usr/bin bash -c 'echo $PATH'

Unset a Variable for a Command

The -u (or --unset) option removes a specific variable from the environment before running the command:

Terminal
env -u EDITOR vim

This launches vim without the EDITOR variable in its environment. Other variables remain untouched. You can unset multiple variables by repeating -u:

Terminal
env -u LANG -u LC_ALL python3 script.py

The difference from -i is that -u is surgical: it removes only the named variables and leaves everything else in place.
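
The surgical behavior is easy to see with two variables, one removed and one left alone (FOO and BAR are placeholder names):

```shell
# FOO is removed by -u; BAR passes through untouched:
FOO=1 BAR=2 env -u FOO sh -c 'echo "FOO=${FOO:-unset} BAR=${BAR:-unset}"'
# prints: FOO=unset BAR=2
```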

Change Directory Before Running a Command

The -C (or --chdir) option changes the working directory before executing the command:

Terminal
env -C /var/log cat syslog | head -3

This is equivalent to running cd /var/log && cat syslog, but without affecting the current shell’s working directory. It is a convenient way to run a command in another directory from within a script or one-liner.

Info
The -C option requires GNU coreutils 8.28 or later. Check your version with env --version.

Using env in Shebangs

One of the most common uses of env is in shebang lines at the top of scripts:

sh
#!/usr/bin/env bash
echo "Hello from Bash"

When the kernel encounters #!/usr/bin/env bash, it runs /usr/bin/env with bash as its argument. env then searches the PATH for the bash executable and runs it. This is more portable than hardcoding #!/bin/bash, because bash is not always located in /bin on every system (for example, on FreeBSD it is typically at /usr/local/bin/bash).

The same pattern works for other interpreters:

sh
#!/usr/bin/env python3
sh
#!/usr/bin/env node

On systems with GNU coreutils 8.30 or later, you can pass arguments to the interpreter using the -S (split string) option:

sh
#!/usr/bin/env -S python3 -u

Without -S, the kernel treats python3 -u as a single argument. The -S option tells env to split the string into separate arguments before executing.
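
The splitting behavior can be observed outside a shebang too. The sketch below assumes GNU coreutils 8.30 or later:

```shell
# Without -S, env looks for a single program literally named "echo one two":
env 'echo one two' 2>/dev/null || echo "no such program"
# With -S, the string is split into a command and its arguments:
env -S 'echo one two'
# prints: one two
```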

env Options

  • -i, --ignore-environment - Start with an empty environment
  • -u NAME, --unset=NAME - Remove NAME from the environment
  • -C DIR, --chdir=DIR - Change working directory to DIR before running the command
  • -0, --null - End each output line with a NUL byte instead of a newline
  • -S STRING, --split-string=STRING - Split STRING into separate arguments (useful in shebangs)
  • -v, --debug - Print verbose information for each processing step
  • --block-signal=SIG - Block delivery of the specified signal to the command
  • --default-signal=SIG - Reset signal handling to the default for the command
  • --ignore-signal=SIG - Set signal handling to ignore for the command

Quick Reference

For a printable quick reference, see the env cheatsheet.

Command Description
env Print all environment variables
env -0 Print variables with NUL separator
env VAR=value command Run a command with a modified variable
env -i command Run a command in a clean environment
env -i VAR=value command Run a command with only the specified variables
env -u VAR command Run a command with a variable removed
env -C /path command Run a command in a different directory
#!/usr/bin/env bash Portable shebang line
#!/usr/bin/env -S python3 -u Shebang with interpreter arguments

FAQ

What is the difference between env and printenv?
Both commands print environment variables when called without arguments. The difference is in their purpose: printenv is a read-only inspection tool that can print individual variables by name (printenv HOME). env is designed to modify the environment and run commands. Use printenv when you need to check a value, and env when you need to change the environment for a process.

What is the difference between env and export?
export is a shell built-in that adds a variable to the current shell’s environment for the rest of the session (or until you unset it). env sets variables only for the duration of a single command and does not affect the current shell. If you need a variable to persist for all subsequent commands in your session, use export. If you need a variable set for one command only, use env or the shell’s VAR=value command syntax.

When should I use env -i?
Use env -i when you want to verify that a script or program works without relying on inherited environment variables. It is also useful in security-sensitive contexts where you want to prevent a child process from seeing variables like AWS_SECRET_ACCESS_KEY or database credentials that exist in the parent shell.

Why use #!/usr/bin/env bash instead of #!/bin/bash?
The env-based shebang is more portable. On most Linux distributions, bash lives at /bin/bash, but on other Unix systems (FreeBSD, macOS with Homebrew, NixOS) it may be installed elsewhere. Using #!/usr/bin/env bash searches the PATH for the interpreter, so the script works regardless of where bash is installed.

Conclusion

The env command gives you fine-grained control over the environment a process sees without touching your current shell session. For a broader look at how environment and shell variables work in Linux, including persistent configuration and the PATH variable, see the guide on how to set and list environment variables.

htop Command in Linux: Monitor Processes Interactively

When a server starts responding slowly or a desktop session feels sluggish, you need to figure out which process is consuming the most CPU or memory. The top command can do this, but its interface is minimal and navigation takes some getting used to. htop is an interactive process viewer that improves on top with color-coded meters, mouse support, and the ability to scroll, search, filter, and kill processes without leaving the viewer.

This guide explains how to install and use htop to monitor system resources and manage processes on Linux.

Installing htop

htop is available in the default repositories of most Linux distributions but is not always pre-installed.

On Ubuntu, Debian, and Derivatives:

Terminal
sudo apt install htop

On Fedora, RHEL, and Derivatives:

Terminal
sudo dnf install htop

Verify the installation by checking the version:

Terminal
htop --version
output
htop 3.4.1

htop Syntax

txt
htop [OPTIONS]

Running htop without arguments opens the interactive process viewer with default settings:

Terminal
htop

Understanding the Interface

The htop screen is divided into three areas: a header with system meters, a process list in the center, and a footer with function key shortcuts.

Header Area

The top section shows system-wide resource usage at a glance:

  • CPU bars: one bar per CPU core, color-coded by usage type. Green is normal user processes, red is kernel (system) activity, blue is low-priority (nice) processes, and cyan is virtualization overhead (steal time)
  • Memory bar: shows used, buffered, and cached memory as a proportion of total RAM. Green is memory in use by applications, blue is buffers, and yellow is file cache
  • Swap bar: shows swap usage. If this bar is consistently full, the system does not have enough physical memory for its workload
  • Tasks: total number of processes and threads, with a count of how many are currently running
  • Load average: the 1-minute, 5-minute, and 15-minute system load averages
  • Uptime: how long the system has been running since the last boot

Process List

Below the header, each row represents one process. The default columns are:

  • PID - process ID
  • USER - the owner of the process
  • PRI - kernel scheduling priority
  • NI - nice value (user-space priority, ranges from -20 to 19)
  • VIRT - total virtual memory the process has allocated
  • RES - resident memory, the portion of physical RAM the process is actually using
  • SHR - shared memory, the portion of RES that is shared with other processes (such as shared libraries)
  • S - process state (R running, S sleeping, D uninterruptible sleep, Z zombie, T stopped)
  • CPU% - percentage of CPU time the process is using
  • MEM% - percentage of physical memory the process is using
  • TIME+ - total CPU time consumed since the process started
  • Command - the full command line that launched the process

Function Keys

The bottom bar shows the available actions:

  • F1 - open the help screen
  • F2 - open the setup menu to customize meters and columns
  • F3 - search for a process by name
  • F4 - filter the process list to show only matching entries
  • F5 - toggle tree view
  • F6 - choose a column to sort by
  • F7 - decrease the nice value (higher priority) of the selected process
  • F8 - increase the nice value (lower priority) of the selected process
  • F9 - send a signal to the selected process
  • F10 - quit htop

Sorting Processes

By default, htop sorts processes by CPU usage, so the heaviest processes float to the top. To change the sort column, press F6 and select a column from the list using the arrow keys.

For quick sorting without opening the menu, use these shortcut keys:

  • P - sort by CPU%
  • M - sort by MEM%
  • T - sort by TIME+

Press the same key again to reverse the sort order. This is useful when you want to find the least active processes, for example those that have accumulated almost no CPU time.

You can also set the initial sort column from the command line:

Terminal
htop -s PERCENT_MEM

This starts htop with the process list sorted by memory usage, which is handy when you are investigating high memory consumption on a server.

Searching for a Process

Press F3 (or /) to open the search bar at the bottom of the screen. Type part of the process name and htop will highlight the first matching entry in the process list. Press F3 again to jump to the next match.

The search is incremental, so the highlight updates as you type each character. Press Esc to close the search bar and return to normal navigation.

Filtering Processes

Press F4 to activate the filter. Unlike search, which jumps between matches, filtering hides all processes that do not match the text you type. Only matching entries remain visible in the list.

This is especially useful on busy systems with hundreds of running processes. For example, typing postgres after pressing F4 reduces the list to only PostgreSQL worker processes and the postmaster, making it much easier to see their combined resource usage.

Press Esc to clear the filter and restore the full process list.

Tree View

Press F5 to toggle tree view. In this mode, htop groups child processes under their parent and displays the process hierarchy as an indented tree. This makes it straightforward to see which processes were spawned by a service manager, a shell session, or a container runtime.

To start htop in tree view directly from the command line:

Terminal
htop -t

Tree view combines well with filtering. Press F4, type a service name, and the tree shows only that service and its child processes. For a dedicated look at process hierarchies outside of htop, see the pstree command.

Killing a Process

To stop a process from within htop, use the arrow keys to highlight it (or search with F3), then press F9. This opens a signal selection menu on the left side of the screen.

The most commonly used signals are:

  • 15 SIGTERM - asks the process to shut down gracefully. This is the default and the safest first choice
  • 9 SIGKILL - forces the process to terminate immediately. Use this only when SIGTERM does not work, because the process gets no chance to clean up
  • 2 SIGINT - the same signal sent by pressing Ctrl+C in a terminal
  • 1 SIGHUP - often used to tell a daemon to reload its configuration without restarting

Select the signal with the arrow keys and press Enter to send it. For more detail on process signals and how to send them from the command line, see the kill command guide.

Warning
SIGKILL bypasses all cleanup routines. Files and sockets may be left in an inconsistent state. Always try SIGTERM first and wait a few seconds before escalating to SIGKILL.

You can also tag multiple processes with Space and then press F9 to send a signal to all of them at once. Press U to untag all processes when you are done.

Changing Process Priority

You can adjust the scheduling priority (nice value) of a process directly from htop. Select the process and press:

  • F7 - decrease the nice value (give the process higher priority)
  • F8 - increase the nice value (give the process lower priority)

Nice values range from -20 (highest priority) to 19 (lowest priority). Only root can set negative nice values, so you will need to run htop with sudo to raise a process above normal priority.

This is useful when a background task like a large compilation is consuming too much CPU and you want to lower its priority so that interactive services remain responsive.
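
The same nice scale applies outside htop. As a quick sanity check of the range, you can launch a command at a lower priority with the standalone nice command (the echoed message is just a placeholder):

```shell
# Start a command with nice value 10 (lower priority than the default 0).
# Negative values require root, mirroring htop's F7 restriction.
nice -n 10 sh -c 'echo running at reduced priority'
```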

Showing Only One User’s Processes

To limit the process list to a single user, start htop with the -u option:

Terminal
htop -u www-data

This shows only processes owned by www-data, which is useful when you are debugging a web server or an application service and do not want system processes cluttering the view.

Monitoring Specific Processes

To watch a known set of processes by their PIDs, use the -p option:

Terminal
htop -p 1234,5678

The display is restricted to those two processes, but the header meters still show system-wide resource usage. This gives you a focused view while keeping overall load visible at the top.

Customizing the Display

Press F2 to open the setup screen. From here you can:

  • Rearrange the header meters (move CPU bars, swap positions, switch between bar, text, graph, or LED display styles)
  • Add or remove columns in the process list
  • Change the color scheme
  • Toggle display options like showing kernel threads or custom thread names

Changes are saved to ~/.config/htop/htoprc and persist across sessions.

Command-Line Options

  • -d N - set the update interval to N tenths of a second. For example, -d 20 updates every 2 seconds instead of the default 1.5 seconds
  • -u USER - show only processes belonging to USER
  • -p PID[,PID...] - show only the specified process IDs
  • -t - start in tree view
  • -s COLUMN - sort by the given column name (use names like PERCENT_CPU, PERCENT_MEM, TIME)
  • -C - use monochrome mode (no colors), useful on terminals with limited color support
  • -H - highlight new and old processes

Quick Reference

For a printable quick reference, see the htop cheatsheet.

Key / Command Action
F1 Help
F2 Setup (customize meters and columns)
F3 or / Search for a process
F4 Filter the process list
F5 Toggle tree view
F6 Choose sort column
F7 / F8 Decrease / increase nice value
F9 Send a signal to the selected process
F10 or q Quit
P Sort by CPU%
M Sort by MEM%
T Sort by TIME+
Space Tag a process
U Untag all processes
htop -u USER Show only one user’s processes
htop -p PID,PID Monitor specific PIDs
htop -t Start in tree view
htop -d 20 Update every 2 seconds
htop -s PERCENT_MEM Sort by memory usage

Troubleshooting

htop: command not found
htop is not installed by default on every distribution. Install it with your package manager: sudo apt install htop on Debian-based systems or sudo dnf install htop on Fedora and RHEL.

Cannot change process priority (permission denied)
Setting a nice value below 0 requires root privileges. Run htop with sudo to adjust priority for processes owned by other users or to set negative nice values.

CPU bars show unexpected colors
Each color represents a different type of CPU usage. Green is normal user processes, red is kernel activity, blue is low-priority (nice) processes, and cyan is virtualization steal time. Press F1 inside htop to see the full color legend for your version.

Memory bar looks full but the system is responsive
Linux uses free memory for disk cache and buffers. The memory bar in htop distinguishes between application memory (green), buffers (blue), and cache (yellow). Cached memory is released to applications when they need it, so a full-looking bar does not mean the system is out of memory.

Processes appear and disappear quickly
Short-lived processes can start and exit between screen refreshes. Lower the update interval with -d 5 (half a second) to catch fast processes. Keep in mind that a very short interval increases the CPU usage of htop itself.

FAQ

What is the difference between htop and top?
Both show running processes and system metrics in real time. htop adds color-coded CPU and memory meters, mouse support, horizontal and vertical scrolling, built-in search and filtering, and the ability to kill or renice processes without typing a PID. The top command is pre-installed on virtually every Linux system, while htop may need to be installed separately.

Does htop use more resources than top?
Slightly. htop reads additional process details from /proc, so it uses a bit more CPU and memory than top. On modern hardware the difference is negligible, even on systems with thousands of processes.

What do VIRT, RES, and SHR mean?
VIRT is the total virtual memory the process has mapped, including memory it has allocated but may not be using yet. RES (Resident) is the portion actually held in physical RAM. SHR (Shared) is the subset of RES that is shared with other processes, such as shared libraries loaded by multiple programs.

How do I save my htop configuration?
Press F2 to open the setup screen, make your changes, and press F10 to exit. htop saves the configuration automatically to ~/.config/htop/htoprc.

Can I run htop over SSH?
Yes. htop works in any terminal emulator, including SSH sessions. If you are monitoring a remote server over a long session, consider running htop inside screen or tmux so the session persists if the connection drops.

Conclusion

htop gives you a clear, interactive view of what is running on your system and the resources each process is consuming. For related tools, see the ps command for snapshot process listings, kill for sending signals from the command line, and pstree for visualizing the process hierarchy.

lsof Command in Linux: List Open Files and Network Connections

When you delete a file but the disk space does not come back, or a port appears occupied and you cannot tell what is using it, the lsof command gives you the answer. lsof stands for “list open files.” In Linux, many resources are represented as files, so lsof can show regular files, directories, pipes, sockets, and network connections.

This guide explains how to use lsof to inspect open files, trace network connections, and identify which processes hold resources on your system.

lsof Syntax

txt
lsof [OPTIONS] [FILE]

When called without arguments, lsof attempts to list every open file visible to the current user. That output is usually thousands of lines long, so in practice you will almost always combine it with a filter or run it with sudo when you need a full system-wide view.

Understanding the Output

Running lsof without arguments produces output like this:

Terminal
sudo lsof | head -5
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root cwd DIR 8,1 4096 2 /
systemd 1 root rtd DIR 8,1 4096 2 /
nginx 1234 root 6u IPv4 28743 0t0 TCP *:http (LISTEN)
bash 5678 john 1u REG 8,1 102400 789 /home/john/log.txt

Each column tells you something specific:

  • COMMAND - the name of the process
  • PID - process ID
  • USER - the user running the process
  • FD - file descriptor (cwd is the current working directory, txt is the program executable, a number like 4u is an open file handle where r means read, w means write, and u means read and write)
  • TYPE - type of file (REG for a regular file, DIR for a directory, IPv4/IPv6 for network sockets, unix for Unix domain sockets)
  • SIZE/OFF - file size or current offset
  • NAME - the file path or network address

Find What Is Using a Port

The most common use for lsof is finding which process is listening on a given port. For that, use numeric output, limit the search to TCP, and filter to the LISTEN state:

Terminal
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 6u IPv4 34120 0t0 TCP *:80 (LISTEN)
nginx 1235 www-data 6u IPv4 34120 0t0 TCP *:80 (LISTEN)

The output shows that nginx is listening on port 80. The PID column gives you the process ID if you need to stop or restart it. The -nP options keep addresses and port numbers numeric, which makes the output easier to read in scripts and troubleshooting sessions.

To filter by both protocol and port number, specify the protocol before the port:

Terminal
sudo lsof -nP -iTCP:443 -sTCP:LISTEN

This is useful when both TCP and UDP services share the same port number and you want to narrow the results to listening TCP sockets only.

List All Network Connections

To see all open network connections on the system, pass -i without a port:

Terminal
sudo lsof -i

To narrow the results to a specific protocol, add TCP or UDP:

Terminal
sudo lsof -i TCP
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 6u IPv4 34120 0t0 TCP *:http (LISTEN)
sshd 2345 root 3u IPv4 39210 0t0 TCP *:ssh (LISTEN)
curl 5678 john 5u IPv4 56789 0t0 TCP workstation:54321->93.184.216.34:http (ESTABLISHED)

Listening services show an asterisk for the address. Established connections show the remote address and port.

List Files Opened by a Process

To see all files opened by a specific process, pass the process ID with -p:

Terminal
sudo lsof -p 1234

To exclude a process instead, prefix the PID with ^:

Terminal
sudo lsof -p ^1234

You can also filter by process name with -c. This matches any running process whose name starts with the given string:

Terminal
sudo lsof -c nginx

List Files Opened by a User

To see all files a particular user has open, use -u followed by the username:

Terminal
lsof -u john

To see files opened by everyone except that user, prefix the username with ^:

Terminal
sudo lsof -u ^john

This is useful on multi-user systems when you want to audit what a specific account is accessing.

Find Which Process Has a File Open

Pass a file path directly to lsof to see which processes currently have it open:

Terminal
lsof /var/log/nginx/access.log
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 1234 root 5w REG 8,1 249856 654321 /var/log/nginx/access.log
nginx 1235 www-data 5w REG 8,1 249856 654321 /var/log/nginx/access.log

This works for any file, including device files, sockets, and named pipes.

List All Open Files in a Directory

To list all files open within a directory and its subdirectories, use +D:

Terminal
sudo lsof +D /var/log

For a non-recursive listing (top-level only), use lowercase +d:

Terminal
sudo lsof +d /var/log

+D is especially useful before unmounting a filesystem. If any process has a file open inside the mount point, umount will refuse to proceed. Running lsof +D /mountpoint tells you exactly which process is blocking it.

Find Deleted Files Still Holding Disk Space

When you delete a file that a running process still has open, the kernel keeps the data on disk until the process closes the file descriptor or exits. This is a common cause of the “disk is full but I cannot find any large files” problem.

To find these held-open deleted files, use +L1:

Terminal
sudo lsof +L1
output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
python3 4567 john 3r REG 8,1 2097152 0 456 /tmp/data.bin (deleted)

The NLINK column shows 0, meaning the file has no directory entry left but is still open. Restarting the process listed under COMMAND will release the space. For more ways to find large files on Linux, including files that are still visible on disk, see the dedicated guide.
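
You can reproduce the underlying mechanism with nothing more than a shell and the /proc filesystem. This is a Linux-specific sketch; run it as a script so that $$ refers to the shell that holds the descriptor:

```shell
#!/usr/bin/env bash
# Open a file, delete it, and show that the kernel still tracks it.
f=$(mktemp)
exec 3<"$f"            # keep the file open on descriptor 3
rm "$f"                # remove the directory entry
ls -l /proc/$$/fd/3    # the symlink target now ends in "(deleted)"
exec 3<&-              # closing the descriptor releases the space
```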

Get Only Process IDs

The -t option outputs only process IDs, with no headers or other columns. This is designed for scripting:

Terminal
sudo lsof -t -iTCP:8080 -sTCP:LISTEN
output
3456

A common pattern is to stop the process listening on a port:

Terminal
kill $(sudo lsof -t -iTCP:8080 -sTCP:LISTEN)

Verify the process first with sudo lsof -nP -iTCP:8080 -sTCP:LISTEN before killing it, so you know what you are stopping.

Combining Filters

By default, lsof treats multiple options as OR logic: the output includes files matching any of the conditions. To switch to AND logic (a result must satisfy all conditions), add -a:

Terminal
sudo lsof -a -u john -i TCP

Without -a, this command would return all TCP connections on the system plus all files opened by john. With -a, it returns only TCP connections that belong specifically to john.

Quick Reference

For a printable quick reference, see the lsof cheatsheet.

Command Description
lsof List all open files
lsof -nP -iTCP:PORT -sTCP:LISTEN Find what is listening on a TCP port
lsof -i TCP List all TCP connections
lsof -nP -iTCP:443 -sTCP:LISTEN Filter by protocol and port
lsof -p PID Files opened by a process
lsof -c nginx Files opened by processes named nginx
lsof -u john Files opened by a user
lsof /path/to/file Processes that have a specific file open
lsof +D /dir All open files in a directory (recursive)
lsof +L1 Deleted files still held open
lsof -t -iTCP:PORT -sTCP:LISTEN Output only PIDs listening on a TCP port
lsof -a -u john -i TCP AND logic: TCP connections for a specific user

FAQ

Why does lsof require sudo?
Without root privileges, lsof can only show files opened by your own processes. Running it with sudo gives you visibility into all processes on the system. Some distributions also restrict access to /proc for non-root users, which limits what lsof can read without elevated permissions.

What is the difference between lsof and ss?
lsof covers the full range of open files: regular files, directories, pipes, devices, and network sockets in a single view. The ss command is purpose-built for network sockets and provides more detail about socket states and statistics. Use lsof when you want to tie a network connection back to a specific file or see everything a process has open. Use ss when you need to inspect socket state in detail.

How do I find which process is using a port?
Run sudo lsof -nP -iTCP:PORT -sTCP:LISTEN, replacing PORT with the number you want to check. The ss command with ss -tlnp is another approach covered in the guide on how to check listening ports.

Can I monitor file access in real time with lsof?
No. lsof takes a snapshot at the moment you run it. For real-time filesystem event monitoring, use inotifywait from the inotify-tools package.

lsof shows a file path ending in (deleted). What does that mean?
The file was removed from the directory while a process still had it open. The data remains on disk until the process releases the file descriptor. This is the cause of the “disk full but no large files visible” problem described in the section above. Use lsof +L1 to list all such files and restart the holding process to free the space.

Conclusion

lsof is one of the most practical diagnostic tools available on Linux. Whether you are tracking down a port conflict, auditing which files a user has open, or recovering disk space from deleted but held-open files, it gives you a direct view into what every process is accessing at any moment.

For related tools, see the ps command guide for process inspection and the ss command guide for dedicated network socket statistics.

sed Delete Lines: Remove Lines by Number, Pattern, or Range

When working with text files, you will often need to remove specific lines, whether it is stripping comments from a configuration file, cleaning up blank lines in log output, or cutting a range of lines from a data dump. The sed stream editor handles all of these cases from the command line, without opening the file in an editor.

sed processes input line by line, applies your commands, and writes the result to standard output. The original file stays untouched unless you explicitly tell sed to edit in place. If you need to find and replace text rather than remove entire lines, see How to Use sed to Find and Replace Strings in Files.

This guide explains how to use the sed delete command (d) with practical examples covering line numbers, patterns, ranges, and regular expressions.

How the Delete Command Works

The general form of a sed delete expression is:

txt
sed 'ADDRESSd' file

ADDRESS selects which lines to delete, and d is the delete command. When sed encounters a line that matches the address, it removes that line from the output entirely. Any line that does not match the address is printed as usual.

For demonstration purposes, the examples in this guide use the following file:

colors.txttxt
red
green
blue
yellow
white
black
orange
purple

Delete by Line Number

The simplest form of delete targets a specific line number. To remove the third line, place 3 before the d command:

Terminal
sed '3d' colors.txt
output
red
green
yellow
white
black
orange
purple

Notice that “blue” (which was on line 3) is gone from the output, but the original colors.txt file is unchanged. This is the default behavior of sed, and it gives you a safe way to preview changes before committing them.

Sometimes you need to remove the last line of a file without knowing how many lines it contains. The $ address represents the last line:

Terminal
sed '$d' colors.txt
output
red
green
blue
yellow
white
black
orange

Here “purple” (the last line) has been removed from the output.

Delete Multiple Specific Lines

If you need to remove several individual lines, separate the addresses with a semicolon. The following command deletes lines 2, 5, and 8 in a single pass:

Terminal
sed '2d;5d;8d' colors.txt
output
red
blue
yellow
black
orange

The output is missing “green” (line 2), “white” (line 5), and “purple” (line 8). You can list as many addresses as needed this way.

Delete a Range of Lines

To delete a block of consecutive lines, specify a range with two numbers separated by a comma. The following command removes lines 3 through 6:

Terminal
sed '3,6d' colors.txt
output
red
green
orange
purple

Everything from “blue” (line 3) through “black” (line 6) is gone, and the remaining lines print normally.

You can also combine a line number with $ to delete from a given line to the end of the file. This keeps only the first four lines:

Terminal
sed '5,$d' colors.txt
output
red
green
blue
yellow

Adding ! after the address inverts the selection, so sed deletes every line that is not in the range. The following command keeps only lines 3 through 5 and removes everything else:

Terminal
sed '3,5!d' colors.txt
output
blue
yellow
white

This is a convenient way to extract a slice from a file without piping through head and tail.
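
As a point of comparison, the same slice can be produced with head and tail. The following sketch builds the sample file in /tmp so it can be run anywhere:

```shell
# Recreate the sample file used in this guide
printf 'red\ngreen\nblue\nyellow\nwhite\nblack\norange\npurple\n' > /tmp/colors.txt

# Keep only lines 3-5 with sed's inverted range
sed '3,5!d' /tmp/colors.txt

# The same three lines via head and tail
head -5 /tmp/colors.txt | tail -3
```

Both commands print blue, yellow, and white, but the sed form does it in a single process.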

Delete Every Nth Line

GNU sed supports the step address first~step, which matches every Nth line starting from a given position. To delete every even-numbered line (2, 4, 6, …):

Terminal
sed '0~2d' colors.txt
output
red
blue
white
orange

The first number is the starting offset (0 means “before the first line”, so the first match is line 2) and the second number is the step. In this case sed deletes lines 2, 4, 6, and 8.

To delete every third line starting from line 3:

Terminal
sed '3~3d' colors.txt
output
red
green
yellow
white
orange
purple

Lines 3 (“blue”) and 6 (“black”) are removed. This syntax is specific to GNU sed and is not available in the BSD version shipped with macOS.

Delete Lines Matching a Pattern

One of the most common uses of sed delete is removing lines that contain a specific string. Wrap the search pattern in forward slashes before the d command:

Terminal
sed '/blue/d' colors.txt
output
red
green
yellow
white
black
orange
purple

Every line containing the string “blue” is removed from the output. Keep in mind that this is a substring match, so a pattern like /bl/d would also delete the line “black” because it contains “bl”.

If you want the match to be case-insensitive, add the I flag after d:

Terminal
sed '/Blue/Id' colors.txt

This removes lines containing “blue”, “Blue”, “BLUE”, or any other case variation.

Delete Lines Not Matching a Pattern

Adding ! after the pattern address inverts the match, so sed deletes every line that does not contain the pattern. The following command removes all lines that do not have the letter “e”:

Terminal
sed '/e/!d' colors.txt
output
red
green
blue
yellow
white
orange
purple

Only “black” (line 6) was removed because it is the only line without an “e”. This works similarly to grep, but the advantage of sed is that you can combine it with other editing commands in the same expression.

Delete a Range Between Two Patterns

You can use two patterns separated by a comma to define a range. sed starts deleting at the first line that matches the opening pattern and stops after the first line that matches the closing pattern. The following command removes everything from “green” through “white”:

Terminal
sed '/green/,/white/d' colors.txt
output
red
black
orange
purple

Both the “green” and “white” lines are included in the deletion, along with “blue” and “yellow” between them.

You can also mix a line number with a pattern. This deletes from line 1 through the first line that contains “blue”:

Terminal
sed '1,/blue/d' colors.txt
output
yellow
white
black
orange
purple

This kind of mixed address is useful when you know where the range starts (a fixed line) but not where it ends.

Delete Empty Lines

Removing blank lines is one of the most common sed tasks. To delete completely empty lines, use the pattern ^$, which matches lines where the beginning and end have nothing between them:

Terminal
sed '/^$/d' file.txt

This works well for truly empty lines, but it will not catch lines that contain only spaces or tabs. To remove those as well, use the [[:space:]] character class:

Terminal
sed '/^[[:space:]]*$/d' file.txt

The * quantifier matches zero or more whitespace characters, so this pattern covers both empty lines and lines that appear blank but contain invisible whitespace.
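
The difference is easy to verify with a small sample that contains both a truly empty line and a whitespace-only line (written to an arbitrary file in /tmp for the demonstration):

```shell
# alpha, an empty line, a line of spaces and a tab, then beta
printf 'alpha\n\n \t \nbeta\n' > /tmp/blank-demo.txt

# Removes both blank variants, leaving only alpha and beta
sed '/^[[:space:]]*$/d' /tmp/blank-demo.txt
```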

Delete Lines Matching a Regular Expression

The pattern between the slashes is a regular expression, so you can use anchors, character classes, and quantifiers for more precise matching.

A common example is stripping comment lines from a configuration file. The ^# pattern matches any line that starts with #:

Terminal
sed '/^#/d' config.txt

To delete lines that end with a semicolon, anchor the pattern to the end of the line with $:

Terminal
sed '/;$/d' source.txt

You can also match by line length. The following command deletes lines shorter than 5 characters: the \{0,4\} quantifier applies to the preceding . (any character), so the pattern matches any line of 0 to 4 characters:

Terminal
sed '/^.\{0,4\}$/d' file.txt

If you find the backslash escaping awkward, use extended regular expressions with the -E flag (or -r on older systems). This lets you write the same expression without escaping the braces:

Terminal
sed -E '/^.{0,4}$/d' file.txt

Combining Delete with Other Commands

One of the strengths of sed is that you can chain multiple operations in a single expression. For example, the following command first removes comment lines and then replaces “localhost” with “127.0.0.1” in whatever remains:

Terminal
sed '/^#/d; s/localhost/127.0.0.1/g' config.txt

The commands are separated by a semicolon and execute left to right. If a line is deleted by an earlier command, the later commands never see it, so the replacement only applies to non-comment lines.
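The ordering matters. A quick sketch with a hypothetical config.txt, where the comment line itself mentions "localhost", shows that the deleted line never reaches the substitution:

```shell
# The comment line also contains "localhost", but it is deleted
# before the s/// command runs, so only the real setting is rewritten.
printf '%s\n' '# localhost settings' 'host=localhost' > config.txt

sed '/^#/d; s/localhost/127.0.0.1/g' config.txt
# host=127.0.0.1
```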

Editing Files In Place

All of the examples above print the modified text to standard output without changing the original file. When you are satisfied with the result, add the -i flag to apply the changes directly:

Terminal
sed -i '/^$/d' file.txt
Warning
The -i flag modifies the file permanently. Always preview the output without -i first, or create a backup by passing a suffix: sed -i.bak '/^$/d' file.txt. This saves the original as file.txt.bak.

On macOS, the BSD version of sed requires an explicit extension argument even if you do not want a backup. Use sed -i '' '/^$/d' file.txt for in-place editing without creating a backup file.
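If a script has to run on both GNU/Linux and macOS, one common sketch is to detect the sed flavor first. GNU sed accepts --version while BSD sed does not, which makes a usable probe:

```shell
# Portable in-place blank-line deletion: GNU sed takes bare -i,
# BSD/macOS sed requires an explicit (possibly empty) backup suffix.
if sed --version >/dev/null 2>&1; then
  sed -i '/^$/d' file.txt        # GNU sed
else
  sed -i '' '/^$/d' file.txt     # BSD/macOS sed
fi
```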

Quick Reference

For a printable quick reference, see the sed cheatsheet .

Expression What it deletes
sed '3d' Line 3
sed '$d' Last line
sed '2d;5d' Lines 2 and 5
sed '3,6d' Lines 3 through 6
sed '5,$d' Line 5 to end of file
sed '3,5!d' All lines except 3 through 5
sed '0~2d' Every even-numbered line
sed '/pattern/d' Lines matching pattern
sed '/pattern/!d' Lines not matching pattern
sed '/start/,/end/d' From first match of start to first match of end
sed '/^$/d' Empty lines
sed '/^#/d' Comment lines starting with #

FAQ

How do I delete a line containing a specific word? Use sed '/word/d' file.txt. This removes any line where “word” appears as a substring. If you want to match the whole word only (so that “keyword” is not affected), use word boundaries: sed '/\bword\b/d' file.txt.

Can I delete lines from multiple files at once? Yes. Pass multiple filenames after the expression: sed -i '/pattern/d' file1.txt file2.txt. To process files recursively, combine sed with find: find . -name "*.log" -exec sed -i '/DEBUG/d' {} +.

What is the difference between sed '/pattern/d' and grep -v 'pattern'? Both remove matching lines from the output. The difference is that sed can combine deletion with other editing commands in the same expression and supports in-place editing with -i. If all you need is simple line filtering, grep -v is shorter to type.

How do I delete a range of lines and save the result? You can either redirect the output to a new file (sed '3,6d' file.txt > cleaned.txt) or edit in place with a backup (sed -i.bak '3,6d' file.txt). Do not redirect to the same input file, because the shell will truncate it before sed has a chance to read it.
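A safe pattern for the redirect approach is a temporary file that replaces the original only after sed has finished reading it:

```shell
# Write to a temp file first, then swap it into place.
# The && ensures the original is only replaced if sed succeeded.
sed '3,6d' file.txt > file.txt.tmp && mv file.txt.tmp file.txt
```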

Conclusion

The sed delete command gives you a fast way to remove lines by number, pattern, or range without opening a file in an editor. For replacing text within lines rather than removing them, see the companion guide on sed find and replace . If you need to work with individual columns or fields within a line, awk is often the better tool for the job.

git clone: Clone a Repository

git clone copies an existing Git repository into a new directory on your local machine. It sets up tracking branches for each remote branch and creates a remote called origin pointing back to the source.

This guide explains how to use git clone with the most common options and protocols.

Syntax

The basic syntax of the git clone command is:

txt
git clone [OPTIONS] REPOSITORY [DIRECTORY]

REPOSITORY is the URL or path of the repository to clone. DIRECTORY is optional; if omitted, Git uses the repository name as the directory name.

Clone a Remote Repository

The most common way to clone a repository is over HTTPS. For example, to clone a GitHub repository:

Terminal
git clone https://github.com/user/repo.git

This creates a repo directory in the current working directory, initializes a .git directory inside it, and checks out the default branch.

You can also clone over SSH if you have an SSH key configured :

Terminal
git clone git@github.com:user/repo.git

SSH is generally preferred for repositories you will push to, as it does not require entering credentials each time.

Clone into a Specific Directory

By default, git clone creates a directory named after the repository. To clone into a different directory, pass the target path as the second argument:

Terminal
git clone https://github.com/user/repo.git my-project

To clone into the current directory, use . as the target:

Terminal
git clone https://github.com/user/repo.git .
Warning
Cloning into . requires the current directory to be empty.

Clone a Specific Branch

By default, git clone checks out the default branch (usually main or master). To clone and check out a different branch, use the -b option:

Terminal
git clone -b develop https://github.com/user/repo.git

The full repository history is still downloaded; only the checked-out branch differs. To limit the download to a single branch, combine -b with --single-branch:

Terminal
git clone -b develop --single-branch https://github.com/user/repo.git

Shallow Clone

A shallow clone downloads only a limited number of recent commits rather than the full history. This is useful for speeding up the clone of large repositories when you do not need the full commit history.

To clone with only the latest commit:

Terminal
git clone --depth 1 https://github.com/user/repo.git

To include the last 10 commits:

Terminal
git clone --depth 10 https://github.com/user/repo.git

Shallow clones are commonly used in CI/CD pipelines to reduce download time. To convert a shallow clone to a full clone later, run:

Terminal
git fetch --unshallow
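To check whether a clone is shallow in the first place, Git 2.15 and later can report it directly:

```shell
# Prints "true" in a shallow (--depth) clone, "false" otherwise.
# Requires Git 2.15 or newer.
git rev-parse --is-shallow-repository
```

This is handy in CI scripts that need the full history for tasks such as changelog generation and must unshallow on demand.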

Clone a Local Repository

git clone also works with local paths. This is useful for creating an isolated copy for testing:

Terminal
git clone /path/to/local/repo new-copy

Troubleshooting

Destination directory is not empty
git clone URL . works only when the current directory is empty. If files already exist, clone into a new directory or move the existing files out of the way first.

Authentication failed over HTTPS
Most Git hosting services no longer accept account passwords for Git operations over HTTPS. Use a personal access token or configure a credential manager so Git can authenticate securely.

Permission denied (publickey)
This error usually means your SSH key is missing, not loaded into your SSH agent, or not added to your account on the Git hosting service. Verify your SSH setup before retrying the clone.

Host key verification failed
SSH could not verify the identity of the remote server. Confirm you are connecting to the correct host, then accept or update the host key in ~/.ssh/known_hosts.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git clone URL Clone a repository via HTTPS or SSH
git clone URL dir Clone into a specific directory
git clone URL . Clone into the current (empty) directory
git clone -b branch URL Clone and check out a specific branch
git clone -b branch --single-branch URL Clone only a specific branch
git clone --depth 1 URL Shallow clone: latest commit only
git clone --depth N URL Shallow clone: last N commits
git clone /path/to/repo Clone a local repository

FAQ

What is the difference between HTTPS and SSH cloning?
HTTPS cloning works out of the box but prompts for credentials unless you use a credential manager. SSH cloning requires a key pair to be configured but does not prompt for a password on each push or pull.

Does git clone download all branches?
Yes. All remote branches are fetched, but only the default branch is checked out locally. Use git branch -r to list all remote branches, or git checkout branch-name to switch to one.

How do I clone a private repository?
For HTTPS, provide your username and a personal access token when prompted. For SSH, ensure your public key is added to your account on the hosting service.

Can I clone a specific tag instead of a branch?
Yes. Use -b with the tag name: git clone -b v1.0.0 https://github.com/user/repo.git. Git checks out that tag in a detached HEAD state, so create a branch if you plan to make commits.

Conclusion

git clone is the starting point for working with any existing Git repository. After cloning, you can inspect the commit history with git log , review changes with git diff , and configure your username and email if you have not done so already.

jobs Command in Linux: List and Manage Background Jobs

The jobs command is a Bash built-in that lists all background and suspended processes associated with the current shell session. It shows the job ID, state, and the command that started each job.

This article explains how to use the jobs command, what each option does, and how job specifications work for targeting individual jobs.

Syntax

The general syntax for the jobs command is:

txt
jobs [OPTIONS] [JOB_SPEC...]

When called without arguments, jobs lists all active jobs in the current shell.

Listing Jobs

To see all background and suspended jobs, run jobs without any arguments:

Terminal
jobs

If you have two processes (one running and one stopped), the output will look similar to this:

output
[1]- Stopped vim notes.txt
[2]+ Running ping google.com &

Each line shows:

  • The job number in brackets ([1], [2])
  • A + or - sign indicating the current and previous job (more on this below)
  • The job state (Running, Stopped, Done)
  • The command that started the job

To include the process ID (PID) in the output, use the -l option:

Terminal
jobs -l
output
[1]- 14205 Stopped vim notes.txt
[2]+ 14230 Running ping google.com &

The PID is useful when you need to send a signal to a process with the kill command.

Job States

Jobs reported by the jobs command can be in one of these states:

  • Running — the process is actively executing in the background.
  • Stopped — the process has been suspended, typically by pressing Ctrl+Z.
  • Done — the process has finished. This state appears once and is then cleared from the list.
  • Terminated — the process was killed by a signal.

When a background job finishes, the shell displays its Done status the next time you press Enter or run a command.

Job Specifications

Job specifications (job specs) let you target a specific job when using commands like fg, bg, kill , or disown. They always start with a percent sign (%):

  • %1, %2, … — refer to a job by its number.
  • %% or %+ — the current job (the most recently backgrounded or suspended job, marked with +).
  • %- — the previous job (marked with -).
  • %string — the job whose command starts with string. For example, %ping matches a job started with ping google.com.
  • %?string — the job whose command contains string anywhere. For example, %?google also matches ping google.com.

Here is a practical example. Start two background jobs:

Terminal
sleep 300 &
ping -c 100 google.com > /dev/null &

Now list them:

Terminal
jobs
output
[1]- Running sleep 300 &
[2]+ Running ping -c 100 google.com > /dev/null &

You can bring the sleep job to the foreground by its number:

Terminal
fg %1

Or by the command prefix:

Terminal
fg %sleep

Both are equivalent and bring job [1] to the foreground.

Options

The jobs command accepts the following options:

  • -l — List jobs with their process IDs in addition to the normal output.
  • -p — Print only the PID of each job’s process group leader. This is useful for scripting.
  • -r — List only running jobs.
  • -s — List only stopped (suspended) jobs.
  • -n — Show only jobs whose status has changed since the last notification.

To show only running jobs:

Terminal
jobs -r

To get just the PIDs of all jobs:

Terminal
jobs -p
output
14205
14230
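Because -p prints bare PIDs, the output feeds directly into other commands. A common sketch is stopping every background job of the current shell at once:

```shell
# Stop all background jobs of the current shell in one go.
# $(jobs -p) expands to the PIDs that jobs -p would print.
kill $(jobs -p)
```

If there are no jobs, kill complains about a missing argument, so scripts often guard this with a test like [ -n "$(jobs -p)" ] first.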

Using jobs in Interactive Shells

The jobs command is primarily useful in an interactive shell where job control is enabled. In a regular shell script, job control is usually off, so jobs may return no output even when background processes exist.

If you need to manage background work in a script, track process IDs with $! and use wait with the PID instead of a job spec:

sh
# Start a background task
long_task &
task_pid=$!

# Poll until it finishes
while kill -0 "$task_pid" 2>/dev/null; do
    echo "Task is still running..."
    sleep 5
done

echo "Task complete."

To wait for several background tasks, store their PIDs and pass them to wait:

sh
task_a &
pid_a=$!

task_b &
pid_b=$!

task_c &
pid_c=$!

wait "$pid_a" "$pid_b" "$pid_c"
echo "All tasks complete."
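When the tasks can fail, waiting on each PID individually also lets you collect their exit statuses. A sketch (task_a and task_b are placeholder commands):

```shell
# task_a and task_b are hypothetical long-running commands.
task_a & pid_a=$!
task_b & pid_b=$!

# wait PID returns that child's exit status.
wait "$pid_a"; status_a=$?
wait "$pid_b"; status_b=$?

echo "task_a exited with $status_a, task_b exited with $status_b"
```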

In an interactive shell, you can still wait for a job by its job spec:

Terminal
wait %1

Quick Reference

Command Description
jobs List all background and stopped jobs
jobs -l List jobs with PIDs
jobs -p Print only PIDs
jobs -r List only running jobs
jobs -s List only stopped jobs
fg %1 Bring job 1 to the foreground
bg %1 Resume job 1 in the background
kill %1 Terminate job 1
disown %1 Remove job 1 from the job table
wait Wait for all background jobs to finish

FAQ

What is the difference between jobs and ps?
jobs shows only the processes started from the current shell session. The ps command lists all processes running on the system, regardless of which shell started them.

Why does jobs show nothing even though processes are running?
jobs only tracks processes started from the current shell. If you started a process in a different terminal or via a system service, it will not appear in jobs. Use ps aux to find it instead.

How do I stop a background job?
Use kill %N where N is the job number. For example, kill %1 sends SIGTERM to job 1. If the job does not stop, use kill -9 %1 to force-terminate it.

What do the + and - signs mean in jobs output?
The + marks the current job, the one that fg and bg will act on by default. The - marks the previous job. When the current job finishes, the previous job becomes the new current job.

How do I keep a background job running after closing the terminal?
Use nohup before the command, or disown after backgrounding it. For long-running interactive sessions, consider Screen or Tmux .

Conclusion

The jobs command lists all background and suspended processes in the current shell session. Use it together with fg, bg, and kill to control jobs interactively. For a broader guide on running and managing background processes, see How to Run Linux Commands in the Background .

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
     print("Starting app")
+    print("Debug mode enabled")
     run()

Lines starting with + were added, lines starting with - were removed, and lines starting with a space are unchanged context shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix. HEAD refers to the latest commit and HEAD^ is the commit before it, so this shows what changed in the last commit:

Terminal
git diff HEAD^ HEAD

You can go further back with HEAD~2, HEAD~3, and so on.

Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py     | 3 +++
 config.yml | 1 -
 2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.
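As a sketch of the full round trip (changes.patch is an illustrative name):

```shell
# Capture unstaged changes as a patch file:
git diff > changes.patch

# On the target machine or branch: dry-run first, then apply.
git apply --check changes.patch   # verifies the patch applies cleanly
git apply changes.patch
```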

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide .

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date:   Fri Mar 7 09:15:04 2026 +0100

    Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date:   Mon Mar 10 14:22:31 2026 +0100

    Add user authentication module

 src/auth.py        | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/models.py      | 12 +++++++
 tests/test_auth.py | 45 +++++++++++++++++
 3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
* a3f1d2e Add user authentication module
* 7b8c9d0 Fix login form validation
| * 3e4f5a6 WIP: dark mode toggle
|/
* c1d2e3f Update README with setup instructions

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation
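The format placeholders also make git log easy to post-process in scripts. For example, a rough per-author commit count built from %an:

```shell
# One author name per commit, counted and sorted by frequency.
git log --pretty=format:"%an" | sort | uniq -c | sort -rn
```

For this particular summary, git shortlog -sn produces the same result natively and is usually the better choice; the pipeline is shown to illustrate the technique.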

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.

How do I find a commit by its message?
Use git log --grep="search term". Add -i for a case-insensitive search. Note that --grep matches against the entire commit message, both the subject line and the body. When you pass multiple --grep patterns, a commit matches if any one of them does; add --all-match to require that every pattern match. Use -E if you need extended regular expressions in the pattern.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit .

git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, but you cannot switch branches with a dirty working tree. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for normal day-to-day use and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.
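A minimal sketch of the pathspec form, using a throwaway repository with two modified files (file names are illustrative):

```shell
# Stash only one of two modified files with a pathspec.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo

printf 'a\n' > a.txt
printf 'b\n' > b.txt
git add .
git commit -qm "initial commit"

echo "a2" > a.txt
echo "b2" > b.txt

# Stash only a.txt; the uncommitted change to b.txt stays in place.
git stash push -q -m "only a" -- a.txt
git status --short
```

After the stash, a.txt is back at its committed content while b.txt remains modified.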

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.
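The recovery flow can be sketched in a throwaway repository. In a real incident the hash comes from git fsck; here it is recorded up front with git rev-parse to keep the example deterministic:

```shell
# Demonstrate recovering a dropped stash via git stash store.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "v1" > f.txt
git add f.txt
git commit -qm "initial commit"

echo "v2" > f.txt
git stash push -q -m "wip"
hash=$(git rev-parse "stash@{0}")   # the underlying stash commit
git stash drop -q                   # simulate the accidental drop

# Point refs/stash back at the now-dangling commit.
git stash store -m "recovered stash" "$hash"
git stash list
```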

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide.

How to Create a systemd Service File in Linux

systemd is the init system used by most modern Linux distributions, including Ubuntu, Debian, Fedora, and RHEL. It manages system processes, handles service dependencies, and starts services automatically at boot.

A systemd service file — also called a unit file — tells systemd how to start, stop, and manage a process. This guide explains how to create a systemd service file, install it on the system, and manage it with systemctl.

Quick Reference

Task Command
Create a service file sudo nano /etc/systemd/system/myservice.service
Reload systemd after changes sudo systemctl daemon-reload
Enable service at boot sudo systemctl enable myservice
Start the service sudo systemctl start myservice
Enable and start at once sudo systemctl enable --now myservice
Check service status sudo systemctl status myservice
Stop the service sudo systemctl stop myservice
Disable at boot sudo systemctl disable myservice
View service logs sudo journalctl -u myservice
View live logs sudo journalctl -fu myservice

Understanding systemd Unit Files

Service files are plain text files with a .service extension. They are stored in one of two locations:

  • /etc/systemd/system/ — system-wide services, managed by root. Files here take priority over package-installed units.
  • /usr/lib/systemd/system/ or /lib/systemd/system/ — units installed by packages (which path is used depends on the distribution). Do not edit these directly; copy them to /etc/systemd/system/ to override.

A service file is divided into three sections:

  • [Unit] — describes the service and declares dependencies.
  • [Service] — defines how to start, stop, and run the process.
  • [Install] — controls how and when the service is enabled.

Creating a Simple Service File

In this example, we will create a service that runs a custom Bash script. First, create the script you want to run:

Terminal
sudo nano /usr/local/bin/myapp.sh

Add the following content to the script:

/usr/local/bin/myapp.shsh
#!/bin/bash
echo "myapp started" >> /var/log/myapp.log
# your application logic here
exec /usr/local/bin/myapp

Make the script executable:

Terminal
sudo chmod +x /usr/local/bin/myapp.sh

Now create the service file:

Terminal
sudo nano /etc/systemd/system/myapp.service

Add the following content:

/etc/systemd/system/myapp.serviceini
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving the file, reload systemd to pick up the new unit:

Terminal
sudo systemctl daemon-reload

Enable and start the service:

Terminal
sudo systemctl enable --now myapp

Check that it is running:

Terminal
sudo systemctl status myapp
output
● myapp.service - My Custom Application
Loaded: loaded (/etc/systemd/system/myapp.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2026-03-10 09:00:00 CET; 2s ago
Main PID: 1234 (myapp.sh)
Tasks: 1 (limit: 4915)
CGroup: /system.slice/myapp.service
└─1234 /bin/bash /usr/local/bin/myapp.sh

Unit File Sections Explained

[Unit] Section

The [Unit] section contains metadata and dependency declarations.

Common directives:

  • Description= — a short human-readable name shown in systemctl status output.
  • After= — start this service only after the listed units are active. Does not create a dependency; use Requires= for hard dependencies.
  • Requires= — this service requires the listed units. If they fail, this service fails too.
  • Wants= — like Requires=, but the service still starts even if the listed units fail.
  • Documentation= — a URL or man page reference for the service.

[Service] Section

The [Service] section defines the process and how systemd manages it.

Common directives:

  • Type= — the startup type. See Service Types below.
  • ExecStart= — the command to run when the service starts. Must be an absolute path.
  • ExecStop= — the command to run when the service stops. If omitted, systemd sends SIGTERM.
  • ExecReload= — the command to reload the service configuration without restarting.
  • Restart= — when to restart automatically. See Restart Policies below.
  • RestartSec= — how many seconds to wait before restarting. Default is 100ms.
  • User= — run the process as this user instead of root.
  • Group= — run the process as this group.
  • WorkingDirectory= — set the working directory for the process.
  • Environment= — set environment variables: Environment="KEY=value".
  • EnvironmentFile= — read environment variables from a file.

[Install] Section

The [Install] section controls how the service is enabled.

  • WantedBy=multi-user.target — the most common value. Enables the service for normal multi-user (non-graphical) boot.
  • WantedBy=graphical.target — use this if the service requires a graphical environment.

Service Types

The Type= directive tells systemd how the process behaves at startup.

Type Description
simple Default. The process started by ExecStart is the main process.
forking The process forks and the parent exits. systemd tracks the child. Use with traditional daemons.
oneshot The process runs once and exits. systemd waits for it to finish before starting dependent units.
notify Like simple, but the process notifies systemd when it is ready using sd_notify().
idle Like simple, but the process is delayed until all active jobs finish.

For most custom scripts and applications, Type=simple is the right choice.

Restart Policies

The Restart= directive controls when systemd automatically restarts the service.

Value Restarts when
no Never (default).
always The process exits for any reason.
on-failure The process exits with a non-zero code, is killed by a signal, or times out.
on-abnormal Killed by a signal or times out; exit codes, clean or not, never trigger a restart.
on-success Only if the process exits cleanly (exit code 0).

For long-running services, Restart=on-failure is the most common choice. Use Restart=always for services that must stay running under all circumstances.
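Restarts are also rate-limited: if a unit fails too often too quickly, systemd marks it failed and stops retrying. The hypothetical fragment below sketches tuning that limit with StartLimitIntervalSec= and StartLimitBurst= (the values and the binary path are illustrative):

```ini
[Unit]
Description=Example Service
# Give up if the unit fails 5 times within 30 seconds
StartLimitIntervalSec=30
StartLimitBurst=5

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure
# Wait 2 seconds between restart attempts
RestartSec=2
```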

Running a Service as a Non-Root User

Running services as root is a security risk. Use the User= and Group= directives to run the service as a dedicated user:

/etc/systemd/system/myapp.serviceini
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
User=myappuser
Group=myappuser
ExecStart=/usr/local/bin/myapp
Restart=on-failure
WorkingDirectory=/opt/myapp
Environment="NODE_ENV=production"

[Install]
WantedBy=multi-user.target

Create the system user before enabling the service:

Terminal
sudo useradd -r -s /usr/sbin/nologin myappuser

Troubleshooting

Service fails to start — “Failed to start myapp.service”
Check the logs with journalctl: sudo journalctl -u myapp --since "5 minutes ago". The output shows the exact error message from the process.

Changes to the service file have no effect
You must run sudo systemctl daemon-reload after every edit to the unit file. systemd reads the file at reload time, not at start time.

“ExecStart= must be an absolute path”
The ExecStart= value must start with /. Use the full path to the binary — for example /usr/bin/python3 not python3. Use which python3 to find the absolute path.

Service starts but immediately exits
The process may be crashing on startup. Check sudo journalctl -u myapp -n 50 for error output. If Type=simple, make sure ExecStart= runs the process in the foreground — not a script that launches a background process and exits.

Service does not start at boot despite being enabled
Confirm the [Install] section has a valid WantedBy= target and that systemctl enable was run after daemon-reload. Check with systemctl is-enabled myapp.

FAQ

Where should I put my service file?
Put custom service files in /etc/systemd/system/. Files there take priority over package-installed units in /usr/lib/systemd/system/ and persist across package upgrades.

What is the difference between enable and start?
systemctl start starts the service immediately for the current session only. systemctl enable creates a symlink so the service starts automatically at boot. To do both at once, use systemctl enable --now myservice.

How do I view logs for my service?
Use sudo journalctl -u myservice to see all logs. Add -f to follow live output, or --since "1 hour ago" to limit the time range. See the journalctl guide for full usage.

Can I run a service as a regular user without root?
Yes, using user-level systemd units stored in ~/.config/systemd/user/. Manage them with systemctl --user enable myservice. They run when the user logs in and stop when they log out (unless lingering is enabled with loginctl enable-linger username).

How do I reload a service after changing its configuration?
If the service supports it, use sudo systemctl reload myservice — this sends SIGHUP or runs ExecReload= without restarting the process. Otherwise, use sudo systemctl restart myservice.

Conclusion

Creating a systemd service file gives you full control over how and when your process runs — including automatic restarts, boot integration, and user isolation. Once the file is in place, use systemctl enable --now to activate it and journalctl -u to monitor it. For scheduling tasks that run once at a set time rather than continuously, consider cron jobs as an alternative.

ss Command in Linux: Display Socket Statistics

ss is a command-line utility for displaying socket statistics on Linux. It is the modern replacement for the deprecated netstat command and is faster, more detailed, and available by default on all current Linux distributions.

This guide explains how to use ss to list open sockets, filter results by protocol and port, and identify which process is using a given connection.

ss Syntax

txt
ss [OPTIONS] [FILTER]

When invoked without options, ss displays all non-listening sockets that have an established connection.

List All Sockets

To list all sockets regardless of state, use the -a option:

Terminal
ss -a

The output includes columns for the socket type (Netid), state, receive and send queue sizes, local address and port, and peer address and port:

output
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.1.10:ssh 192.168.1.5:52710
tcp LISTEN 0 128 0.0.0.0:http 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:bootpc 0.0.0.0:*

Filter by Socket Type

TCP Sockets (-t)

To list only TCP sockets:

Terminal
ss -t

To include listening TCP sockets as well, combine with -a:

Terminal
ss -ta

UDP Sockets (-u)

To list only UDP sockets:

Terminal
ss -ua

Unix Domain Sockets (-x)

To list Unix domain sockets used for inter-process communication:

Terminal
ss -xa

Show Listening Sockets

The -l option shows only sockets that are in the listening state:

Terminal
ss -tl

The most commonly used combination is -tulpn, which shows all TCP and UDP listening sockets with process names and numeric addresses:

Terminal
ss -tulpn
output
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,fd=3))
tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=5678,fd=6))
udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=910,fd=6))

Each option in the combination does the following:

  • -t — show TCP sockets
  • -u — show UDP sockets
  • -l — show listening sockets only
  • -p — show the process name and PID
  • -n — show numeric addresses and ports instead of resolving hostnames and service names
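Built on these flags, the hypothetical helper below (the function name is made up for the example) reports whether anything is listening on a given TCP port:

```shell
# Exit 0 when some TCP socket is listening on the given port.
port_listening() {
  # Skip the header line; any remaining output means a listener matched.
  ss -tln "sport = :$1" | tail -n +2 | grep -q .
}

if port_listening 22; then
  echo "port 22 is in use"
else
  echo "port 22 is free"
fi
```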

Show Process Information

The -p option adds the process name and PID to the output. This requires root privileges to see processes owned by other users:

Terminal
sudo ss -tp
output
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 0 0 192.168.1.10:ssh 192.168.1.5:52710 users:(("sshd",pid=2341,fd=5))

Use Numeric Output

By default, ss resolves port numbers to service names (for example, port 22 becomes ssh). The -n option disables this and shows raw port numbers:

Terminal
ss -tn

This is useful when you need to match exact port numbers in scripts or when name resolution is slow.

Filter by Port

To find which process is using a specific port, filter by destination or source port. For example, to list all sockets using port 80:

Terminal
ss -tulpn | grep :80

You can also use the built-in filter syntax:

Terminal
ss -tnp 'dport = :443'

To filter by source port:

Terminal
ss -tnp 'sport = :22'

Filter by Connection State

ss supports filtering by connection state. Common states include ESTABLISHED, LISTEN, TIME-WAIT, and CLOSE-WAIT.

To show only established TCP connections:

Terminal
ss -tn state ESTABLISHED

To show only sockets in the TIME-WAIT state:

Terminal
ss -tn state TIME-WAIT

Filter by Address

To show sockets connected to or from a specific IP address:

Terminal
ss -tn dst 192.168.1.5

To filter by source address:

Terminal
ss -tn src 192.168.1.10

You can combine address and port filters:

Terminal
ss -tnp dst 192.168.1.5 dport = :22

Show IPv4 or IPv6 Only

To restrict output to IPv4 sockets, use -4:

Terminal
ss -tln -4

To show only IPv6 sockets, use -6:

Terminal
ss -tln -6

Show Summary Statistics

The -s option prints a summary of socket counts by type and state without listing individual sockets:

Terminal
ss -s
output
Total: 312
TCP: 14 (estab 4, closed 3, orphaned 0, timewait 3)
Transport Total IP IPv6
RAW 1 0 1
UDP 6 4 2
TCP 11 7 4
INET 18 11 7
FRAG 0 0 0

This is useful for a quick overview of the network state on a busy server.

Practical Examples

The following examples cover common diagnostics you will use together with tools like ip and ifconfig, and the guide on checking listening ports.

Find which process is listening on port 8080:

Terminal
sudo ss -tlpn 'sport = :8080'

List all established SSH connections to your server:

Terminal
ss -tn state ESTABLISHED '( dport = :22 or sport = :22 )'

Show all connections to a remote host:

Terminal
ss -tn dst 203.0.113.10

Count established TCP connections:

Terminal
ss -tn state ESTABLISHED | tail -n +2 | wc -l
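Building on the examples above, the sketch below counts established connections per remote address, which helps spot a single host holding many connections. It assumes the default output layout: when you filter by state, ss omits the State column, so the peer address is the fourth field:

```shell
# Count established TCP connections per remote peer, busiest first.
ss -tn state established \
  | tail -n +2 \
  | awk '{sub(/:[0-9]+$/, "", $4); print $4}' \
  | sort | uniq -c | sort -rn | head
```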

Quick Reference

Command Description
ss -a List all sockets
ss -t List TCP sockets
ss -u List UDP sockets
ss -x List Unix domain sockets
ss -l Show listening sockets only
ss -tulpn Listening TCP/UDP with process and numeric output
ss -tp TCP sockets with process names
ss -tn TCP sockets with numeric addresses
ss -s Show socket summary statistics
ss -tn state ESTABLISHED Show established TCP connections
ss -tnp dport = :80 Filter by destination port
ss -tn dst 192.168.1.5 Filter by remote address
ss -4 IPv4 sockets only
ss -6 IPv6 sockets only

Troubleshooting

ss -p does not show process names
Process information for sockets owned by other users requires elevated privileges. Use sudo ss -tp or sudo ss -tulpn.

Filters return no results
Use quoted filter expressions such as ss -tn 'dport = :443', and verify whether you should filter by sport or dport.

Service names hide numeric ports
If output shows service names (ssh, http) instead of port numbers, add -n to keep numeric ports and avoid lookup ambiguity.

Output is too broad on busy servers
Start with protocol and state filters (-t, -u, state ESTABLISHED) and then add address or port filters to narrow results.

You need command-level context, not only sockets
Use ss with ps or pgrep when you need additional process detail.

FAQ

What is the difference between ss and netstat?
ss is the modern replacement for netstat. It reads directly from kernel socket structures, making it significantly faster on systems with many connections. netstat is part of the net-tools package, which is deprecated and not installed by default on most current distributions.

Why do I need sudo with ss -p?
Without root privileges, ss can only show process information for sockets owned by your own user. To see process names and PIDs for all sockets, run ss with sudo.

What does Recv-Q and Send-Q mean in the output?
Recv-Q is the number of bytes received but not yet read by the application. Send-Q is the number of bytes sent but not yet acknowledged by the remote host. Non-zero values on a listening socket or consistently high values can indicate a performance issue.

How do I find which process is using a specific port?
Run sudo ss -tulpn | grep :<port>. The -p flag adds process information and -n keeps port numbers numeric so the grep match is reliable.

Conclusion

ss is the standard tool for inspecting socket connections on modern Linux systems. The -tulpn combination covers most day-to-day needs, while the state and address filters make it easy to narrow results on busy servers. For related network diagnostics, see the ip and ifconfig command guides, or the guide on checking listening ports for a broader overview.

uniq Command in Linux: Remove and Count Duplicate Lines

uniq is a command-line utility that filters adjacent duplicate lines from sorted input and writes the result to standard output. It is most commonly used together with sort to remove or count duplicates across an entire file.

This guide explains how to use the uniq command with practical examples.

uniq Command Syntax

The syntax for the uniq command is as follows:

txt
uniq [OPTIONS] [INPUT [OUTPUT]]

If no input file is specified, uniq reads from standard input. The optional OUTPUT argument redirects the result to a file instead of standard output.

Removing Duplicate Lines

By default, uniq removes adjacent duplicate lines, keeping one copy of each. Because it only compares consecutive lines, the input must be sorted first — otherwise only back-to-back duplicates are removed.

Given a file with repeated entries:

fruits.txttxt
apple
apple
banana
cherry
banana
cherry
cherry

Running uniq on the unsorted file removes only the back-to-back duplicates (the two adjacent apple lines and the two adjacent cherry lines at the end), leaving the later banana and cherry entries in place:

Terminal
uniq fruits.txt
output
apple
banana
cherry
banana
cherry

To remove all duplicates regardless of position, sort the file first:

Terminal
sort fruits.txt | uniq
output
apple
banana
cherry

Counting Occurrences

To prefix each output line with the number of times it appears, use the -c (--count) option:

Terminal
sort fruits.txt | uniq -c
output
      2 apple
      2 banana
      3 cherry

The count and the line are separated by whitespace. This is especially useful for finding the most frequent lines in a file. To rank them from most to least common, pipe the output back to sort:

Terminal
sort fruits.txt | uniq -c | sort -rn
output
      3 cherry
      2 banana
      2 apple

Showing Only Duplicate Lines

To print only lines that appear more than once (one copy per group), use the -d (--repeated) option:

Terminal
sort fruits.txt | uniq -d
output
apple
banana
cherry

To print every instance of a repeated line rather than just one, use -D:

Terminal
sort fruits.txt | uniq -D
output
apple
apple
banana
banana
cherry
cherry
cherry

Showing Only Unique Lines

To print only lines that appear exactly once (no duplicates), use the -u (--unique) option:

Terminal
sort fruits.txt | uniq -u

If every line appears more than once, the output is empty. In the example above, all three fruits are duplicated, so nothing is printed.

Ignoring Case

By default, uniq comparisons are case-sensitive, so Apple and apple are treated as different lines. To compare lines case-insensitively, use the -i (--ignore-case) option:

Terminal
sort -f file.txt | uniq -i

Skipping Fields and Characters

uniq can be told to skip leading fields or characters before comparing lines.

To skip the first N whitespace-separated fields, use -f N (--skip-fields=N). This is useful when lines share a common prefix (such as a timestamp) that should not be part of the comparison:

log.txttxt
2026-01-01 ERROR disk full
2026-01-02 ERROR disk full
2026-01-03 WARNING low memory
Terminal
uniq -f 1 log.txt
output
2026-01-01 ERROR disk full
2026-01-03 WARNING low memory

The first field (the date) is skipped, so the two ERROR disk full lines are treated as duplicates.

To skip the first N characters instead of fields, use -s N (--skip-chars=N). To limit the comparison to the first N characters per line, use -w N (--check-chars=N).
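A quick sketch of -w in action, using a made-up ids.txt where the first six characters form an ID:

```shell
# Work in a scratch directory so the example file does not clutter anything.
cd "$(mktemp -d)"
printf 'user01 alice\nuser01 bob\nuser02 carol\n' > ids.txt

# Compare only the first 6 characters, so both user01 lines collapse to one.
uniq -w 6 ids.txt
# user01 alice
# user02 carol
```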

Combining uniq with Other Commands

uniq works well in pipelines with grep, cut, sort, and wc.

To count the number of unique lines in a file:

Terminal
sort file.txt | uniq | wc -l

To find the top 10 most common words in a text file:

Terminal
grep -Eo '[[:alnum:]]+' file.txt | sort | uniq -c | sort -rn | head -10

To list unique IP addresses from an access log:

Terminal
awk '{print $1}' /var/log/nginx/access.log | sort | uniq

To find lines that appear in one file but not another (using -u against merged sorted files):

Terminal
sort file1.txt file2.txt | uniq -u
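A worked example of the two-file case with illustrative contents:

```shell
cd "$(mktemp -d)"
printf 'apple\nbanana\n' > file1.txt
printf 'banana\ncherry\n' > file2.txt

# banana appears in both files, so it becomes a duplicate in the merged
# stream and is suppressed; apple and cherry each appear once and survive.
sort file1.txt file2.txt | uniq -u
# apple
# cherry
```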

Quick Reference

Command Description
sort file.txt | uniq Remove all duplicate lines
sort file.txt | uniq -c Count occurrences of each line
sort file.txt | uniq -c | sort -rn Rank lines by frequency
sort file.txt | uniq -d Show only duplicate lines (one per group)
sort file.txt | uniq -D Show all instances of duplicate lines
sort file.txt | uniq -u Show only lines that appear exactly once
uniq -i file.txt Compare lines case-insensitively
uniq -f 2 file.txt Skip first 2 fields when comparing
uniq -s 5 file.txt Skip first 5 characters when comparing
uniq -w 10 file.txt Compare only first 10 characters

Troubleshooting

Duplicates are not removed
uniq only removes adjacent duplicate lines. If the file is not sorted, non-consecutive duplicates are not detected. Always sort the input first: sort file.txt | uniq.

-c output has inconsistent spacing
The count is right-aligned and padded with spaces. If you need to process the output further, use awk to normalize spacing while preserving the full line text: sort file.txt | uniq -c | awk '{count=$1; $1=""; sub(/^ +/, ""); print count, $0}'.

Case variants are treated as different lines
Use the -i option to compare case-insensitively. Also sort with -f so that case-insensitive duplicates are adjacent before uniq processes them: sort -f file.txt | uniq -i.

FAQ

What is the difference between sort -u and sort | uniq?
Both produce the same output for simple deduplication. sort -u is slightly more efficient. sort | uniq is more flexible because uniq supports options like -c (count occurrences) and -d (show only duplicates) that sort -u does not.

Does uniq modify the input file?
No. uniq reads from a file or standard input and writes to standard output. The original file is never modified. To save the result, use redirection: sort file.txt | uniq > deduped.txt.

How do I count unique lines in a file?
Pipe through sort, uniq, and wc: sort file.txt | uniq | wc -l.

How do I find lines that only appear in one of two files?
Merge both files and use uniq -u: sort file1.txt file2.txt | uniq -u. Lines shared by both files become duplicates in the merged sorted stream and are suppressed. Lines that exist in only one file remain unique and are printed.

Can uniq work on columns instead of full lines?
Yes. Use -f N to skip the first N fields, -s N to skip the first N characters, or -w N to limit comparison to the first N characters. This lets you deduplicate based on a portion of each line.

Conclusion

The uniq command is a focused tool for filtering and counting duplicate lines. It is most effective when used after sort in a pipeline and pairs naturally with wc, grep, and head for text analysis tasks.

If you have any questions, feel free to leave a comment below.

top Command in Linux: Monitor Processes in Real Time

top is a command-line utility that displays running processes and system resource usage in real time. It helps you inspect CPU usage, memory consumption, load averages, and process activity from a single interactive view.

This guide explains how to use top to monitor your system and quickly identify resource-heavy processes.

top Command Syntax

The syntax for the top command is as follows:

txt
top [OPTIONS]

Run top without arguments to open the interactive monitor:

Terminal
top

Understanding the top Output

The screen is split into two areas:

  • Summary area — system-wide metrics displayed in the header lines
  • Task area — per-process metrics listed below the header

A typical top header looks like this:

output
top - 14:32:01 up 3 days, 4:12, 2 users, load average: 0.45, 0.31, 0.28
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.2 us, 1.1 sy, 0.0 ni, 95.3 id, 0.2 wa, 0.0 hi, 0.1 si, 0.0 st
MiB Mem : 15886.7 total, 8231.4 free, 4912.5 used, 2742.8 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 10374.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1234 www-data 20 0 123456 45678 9012 S 4.3 0.3 0:42.31 nginx
987 mysql 20 0 987654 321098 12345 S 1.7 2.0 3:12.04 mysqld
5678 root 20 0 65432 5678 3456 S 0.0 0.0 0:00.12 sshd

CPU State Percentages

The %Cpu(s) line breaks CPU time into the following states:

  • us — time running user-space (non-kernel) processes
  • sy — time running kernel processes
  • ni — time running user processes with a manually adjusted nice value
  • id — idle time
  • wa — time waiting for I/O to complete
  • hi — time handling hardware interrupt requests
  • si — time handling software interrupt requests
  • st — time stolen from this virtual machine by the hypervisor (relevant on VMs)

High wa typically points to a disk or network I/O bottleneck. High us or sy points to CPU pressure. For memory details, see the free command. For load average context, see uptime.

Task Area Columns

Column Description
PID Process ID
USER Owner of the process
PR Scheduling priority assigned by the kernel
NI Nice value (-20 = highest priority, 19 = lowest)
VIRT Total virtual memory the process has mapped
RES Resident set size — physical RAM currently in use
SHR Shared memory with other processes
S Process state: R running, S sleeping, D uninterruptible, Z zombie
%CPU CPU usage since the last refresh
%MEM Percentage of physical RAM in use
TIME+ Total CPU time consumed since the process started
COMMAND Process name or full command line (toggle with c)

Common Interactive Controls

Press these keys while top is running:

Key Action
q Quit top
h Show help
k Kill a process by PID
r Renice a process
P Sort by CPU usage
M Sort by memory usage
N Sort by PID
T Sort by running time
1 Toggle per-CPU core display
c Toggle full command line
u Filter by user

Refresh Interval and Batch Mode

To change the refresh interval (seconds), use -d:

Terminal
top -d 2

To run top in batch mode (non-interactive), use -b. This is useful for scripts and logs:

Terminal
top -b -n 1

To capture multiple snapshots:

Terminal
top -b -n 5 -d 1 > top-snapshot.txt
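Batch output can be post-processed like any text. The sketch below assumes procps-ng top, where -o selects the sort field; it prints the column header and the five busiest processes by CPU:

```shell
# One snapshot sorted by CPU; print from the header row, then keep 6 lines
# (the header plus the top 5 processes).
top -b -n 1 -o %CPU | awk '/PID/{found=1} found' | head -n 6
```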

Filtering and Sorting

Start top with a user filter:

Terminal
top -u root

To monitor a specific process by PID, use -p:

Terminal
top -p 1234

Pass a comma-separated list to monitor multiple PIDs at once:

Terminal
top -p 1234,5678

To show individual threads instead of processes, use -H:

Terminal
top -H

Thread view is useful when profiling multi-threaded applications — each thread appears as a separate row with its own CPU and memory usage.

Inside top, use:

  • P to sort by CPU
  • M to sort by memory
  • u to show a specific user

For deeper process filtering, combine top with ps, pgrep, and kill.

Quick Reference

Command Description
top Start interactive process monitor
top -d 2 Refresh every 2 seconds
top -u user Show only a specific user’s processes
top -p PID Monitor a specific process by PID
top -H Show individual threads
top -b -n 1 Single non-interactive snapshot
top -b -n 5 -d 1 Collect 5 snapshots at 1-second intervals

Troubleshooting

top is not installed
Install the procps package (procps-ng on some distributions), then run top again.

Display refresh is too fast or too slow
Use -d to tune the refresh rate, for example top -d 2.

Cannot kill or renice a process
You may not own the process. Use sudo or run as a user with sufficient privileges.

FAQ

What is the difference between top and htop?
top is available by default on most Linux systems and is lightweight. htop provides a richer interactive UI with color coding, mouse support, and easier navigation, but requires a separate installation.

What does load average mean?
Load average shows the average number of runnable or uninterruptible tasks over the last 1, 5, and 15 minutes. A value above the number of CPU cores indicates the system is under pressure. See uptime for more detail.

What is a zombie process?
A zombie process has finished execution but its entry remains in the process table because the parent process has not yet read its exit status. A small number of zombie processes is harmless; a growing count may indicate a bug in the parent application.

What do VIRT, RES, and SHR mean?
VIRT is the total virtual address space the process has reserved. RES is the actual physical RAM currently in use. SHR is the portion of RES that is shared with other processes (for example, shared libraries). RES minus SHR gives the memory used exclusively by that process.

Can I save top output to a file?
Yes. Use batch mode, for example: top -b -n 1 > top.txt.

Conclusion

The top command is one of the fastest ways to inspect process activity and system load in real time. Learn the interactive keys and sort options to diagnose performance issues quickly.

If you have any questions, feel free to leave a comment below.

sort Command in Linux: Sort Lines of Text

sort is a command-line utility that sorts lines of text from files or standard input and writes the result to standard output. By default, sort arranges lines alphabetically, but it supports numeric sorting, reverse order, sorting by specific fields, and more.

This guide explains how to use the sort command with practical examples.

sort Command Syntax

The syntax for the sort command is as follows:

txt
sort [OPTIONS] [FILE]...

If no file is specified, sort reads from standard input. When multiple files are given, their contents are merged and sorted together.

Sorting Lines Alphabetically

By default, sort sorts lines in ascending alphabetical order. To sort a file, pass the file name as an argument:

Terminal
sort file.txt

For example, given a file with the following content:

file.txt
banana
apple
cherry
date

Running sort produces:

output
apple
banana
cherry
date

The original file is not modified. sort writes the sorted output to standard output. To save the result to a file, use output redirection:

Terminal
sort file.txt > sorted.txt

Sorting in Reverse Order

To sort in descending order, use the -r (--reverse) option:

Terminal
sort -r file.txt
output
date
cherry
banana
apple

Sorting Numerically

Alphabetical sorting does not work correctly for numbers. For example, 10 would sort before 2 because 1 comes before 2 alphabetically. Use the -n (--numeric-sort) option to sort numerically:

Terminal
sort -n numbers.txt

Given the file:

numbers.txt
10
2
30
5

The output is:

output
2
5
10
30

To sort numerically in reverse (largest first), combine -n and -r:

Terminal
sort -nr numbers.txt
output
30
10
5
2

Sorting by a Specific Column

By default, sort uses the entire line as the sort key. To sort by a specific field (column), use the -k (--key) option followed by the field number.

Fields are separated by whitespace by default. For example, to sort the following file by the second column (file size):

files.txt
report.pdf 2500
notes.txt 80
archive.tar 15000
readme.md 200
Terminal
sort -k2 -n files.txt
output
notes.txt 80
readme.md 200
report.pdf 2500
archive.tar 15000

To use a different field separator, specify it with the -t option. For example, to sort /etc/passwd by the third field (UID), using : as the separator:

Terminal
sort -t: -k3 -n /etc/passwd

Removing Duplicate Lines

To sort and output only unique lines, use the -u (--unique) option. Because sorting places equal lines next to each other, all duplicates are removed:

Terminal
sort -u file.txt

Given a file with repeated entries:

file.txt
apple
banana
apple
cherry
banana
output
apple
banana
cherry

This is equivalent to running sort file.txt | uniq.
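The equivalence is easy to verify on a small sample. This is only a sketch; fruits.txt and the two comparison files are hypothetical names:

```shell
# Create a small sample file with duplicate lines.
printf 'apple\nbanana\napple\ncherry\nbanana\n' > fruits.txt

# Both pipelines yield the same unique, sorted output.
sort -u fruits.txt > a.txt
sort fruits.txt | uniq > b.txt

# diff exits 0 (and prints nothing) when the files are identical.
diff a.txt b.txt && echo "identical"
```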

Ignoring Case

By default, sort is case-sensitive. Uppercase letters sort before lowercase. To sort case-insensitively, use the -f (--ignore-case) option:

Terminal
sort -f file.txt

Checking if a File Is Already Sorted

To verify that a file is already sorted, use the -c (--check) option. It exits silently if the file is sorted, or prints an error and exits with a non-zero status if it is not:

Terminal
sort -c file.txt
output
sort: file.txt:2: disorder: banana

This is useful in scripts to validate input before processing.
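For example, a script can refuse to process input that is not numerically sorted by testing the exit status of sort -nc. This is only a sketch; ids.txt is a hypothetical file name:

```shell
# Hypothetical numeric input; sort -nc exits 0 if sorted, non-zero if not.
printf '2\n5\n10\n30\n' > ids.txt

if sort -nc ids.txt 2>/dev/null; then
    echo "ids.txt is sorted - proceeding"
else
    echo "ids.txt is not sorted - aborting" >&2
    exit 1
fi
```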

Sorting Human-Readable Sizes

When working with output from commands like du, sizes are expressed in human-readable form such as 1K, 2M, or 3G. Standard numeric sort does not handle these correctly. Use the -h (--human-numeric-sort) option:

Terminal
du -sh /var/* | sort -h
output
4.0K /var/backups
16K /var/cache
1.2M /var/log
3.4G /var/lib

Combining sort with Other Commands

sort is commonly used in pipelines with other commands.

To count and rank the most frequent words in a file, combine grep , sort, and uniq:

Terminal
grep -Eo '[[:alnum:]_]+' file.txt | sort | uniq -c | sort -rn

To display the ten largest files in a directory, combine sort with head :

Terminal
du -sh /var/* | sort -rh | head -10

To extract and sort a specific field from a CSV using cut :

Terminal
cut -d',' -f2 data.csv | sort

Quick Reference

Option Description
sort file.txt Sort alphabetically (ascending)
sort -r file.txt Sort in reverse order
sort -n file.txt Sort numerically
sort -nr file.txt Sort numerically, largest first
sort -k2 file.txt Sort by the second field
sort -t: -k3 file.txt Sort by field 3 using : as separator
sort -u file.txt Sort and remove duplicate lines
sort -f file.txt Sort case-insensitively
sort -h file.txt Sort human-readable sizes (1K, 2M, 3G)
sort -c file.txt Check if file is already sorted

For a printable quick reference, see the sort cheatsheet .

Troubleshooting

Numbers sort in wrong order
You are using alphabetical sort on numeric data. Add the -n option to sort numerically: sort -n file.txt.

Human-readable sizes sort incorrectly
Sizes like 1K, 2M, and 3G require -h (--human-numeric-sort). Standard -n only handles plain integers.

Uppercase entries appear before lowercase
sort is case-sensitive by default. Use -f to sort case-insensitively, or use LC_ALL=C sort to enforce byte-order sorting.

sort file.txt > file.txt produces an empty file
Output redirection with > truncates the file before reading. Never redirect to the same file you are sorting: sort file.txt > file.txt will empty the file. Use a temporary file or the sponge utility from moreutils.
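A safe manual in-place sort can be sketched with a temporary file (data.txt is a hypothetical name); the built-in alternative is sort -o file file:

```shell
# Sample input to sort "in place".
printf 'b\na\nc\n' > data.txt

# Write the sorted output to a temporary file first, then replace the
# original only if sort succeeded. The input is never truncated mid-read.
tmp=$(mktemp)
sort data.txt > "$tmp" && mv "$tmp" data.txt

cat data.txt
```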

FAQ

Does sort modify the original file?
No. sort writes to standard output and never modifies the input file. Use sort file.txt > sorted.txt or sort -o file.txt file.txt to save the output.

How do I sort a file and save it in place?
Use the -o option: sort -o file.txt file.txt. Unlike > file.txt, the -o option is safe to use with the same input and output file.

How do I sort by the last column?
sort does not support negative field numbers, so there is no built-in option for the last field. A common workaround is to use awk to prepend the last field as a sort key, sort on it, and then strip the key.
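The awk workaround can be sketched like this (the sample data is hypothetical; it works even when rows have different column counts):

```shell
# File name first, a variable number of columns, size last.
printf 'report.pdf x 2500\nnotes.txt 80\narchive.tar y z 15000\n' > files.txt

# Prepend the last field ($NF) as a numeric sort key, sort on it,
# then cut the key back off.
awk '{print $NF, $0}' files.txt | sort -n | cut -d' ' -f2-
```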

What is the difference between sort -u and sort | uniq?
sort -u is slightly more efficient as it removes duplicates during sorting. sort | uniq is more flexible because uniq supports options like -c (count occurrences) and -d (show only duplicates).

How do I sort lines randomly?
Use the shuf command: shuf file.txt produces a true random permutation of the input lines. sort -R also shuffles, but it sorts by a random hash of each key, so identical lines end up grouped together.

Conclusion

The sort command is a versatile tool for ordering lines of text in files or command output. It is most powerful when combined with other commands like grep , cut , and head in pipelines.

If you have any questions, feel free to leave a comment below.

journalctl Command in Linux: Query and Filter System Logs

journalctl is a command-line utility for querying and displaying logs collected by systemd-journald, the systemd logging daemon. It gives you structured access to all system logs — kernel messages, service output, authentication events, and more — from a single interface.

This guide explains how to use journalctl to view, filter, and manage system logs.

journalctl Command Syntax

The general syntax for the journalctl command is:

txt
journalctl [OPTIONS] [MATCHES]

When invoked without any options, journalctl displays all collected logs starting from the oldest entry, piped through a pager (usually less). Press q to exit.

Only the root user or members of the adm or systemd-journal groups can read system logs. Regular users can view their own user journal with the --user flag.

Quick Reference

Command Description
journalctl Show all logs
journalctl -f Follow new log entries in real time
journalctl -n 50 Show last 50 lines
journalctl -r Show logs newest first
journalctl -e Jump to end of logs
journalctl -u nginx Logs for a specific unit
journalctl -u nginx -f Follow unit logs in real time
journalctl -b Current boot logs
journalctl -b -1 Previous boot logs
journalctl --list-boots List all boots
journalctl -p err Errors and above
journalctl -p warning --since "1 hour ago" Recent warnings
journalctl -k Kernel messages
journalctl --since "yesterday" Logs since yesterday
journalctl --since "2026-02-01" --until "2026-02-02" Logs in a time window
journalctl -g "failed" Search by pattern
journalctl -o json-pretty JSON output
journalctl --disk-usage Show journal disk usage
journalctl --vacuum-size=500M Reduce journal to 500 MB

For a printable quick reference, see the journalctl cheatsheet .

Viewing System Logs

To view all system logs, run journalctl without any options:

Terminal
journalctl

To show the most recent entries first, use the -r flag:

Terminal
journalctl -r

To jump directly to the end of the log, use -e:

Terminal
journalctl -e

To show the last N lines (similar to tail ), use the -n flag:

Terminal
journalctl -n 50

To disable the pager and print directly to the terminal, use --no-pager:

Terminal
journalctl --no-pager

Following Logs in Real Time

To stream new log entries as they arrive (similar to tail -f), use the -f flag:

Terminal
journalctl -f

This is one of the most useful options for monitoring a running service or troubleshooting an active issue. Press Ctrl+C to stop.

Filtering by Systemd Unit

To view logs for a specific systemd service, use the -u flag followed by the unit name:

Terminal
journalctl -u nginx

You can combine -u with other filters. For example, to follow nginx logs in real time:

Terminal
journalctl -u nginx -f

To view logs for multiple units at once, specify -u more than once:

Terminal
journalctl -u nginx -u php-fpm

To print the last 100 lines for a service without the pager:

Terminal
journalctl -u nginx -n 100 --no-pager

For more on starting and stopping services, see how to start, stop, and restart Nginx and Apache .

Filtering by Time

Use --since and --until to limit log output to a specific time range.

To show logs since a specific date and time:

Terminal
journalctl --since "2026-02-01 10:00"

To show logs within a window:

Terminal
journalctl --since "2026-02-01 10:00" --until "2026-02-01 12:00"

journalctl accepts many natural time expressions:

Terminal
journalctl --since "1 hour ago"
journalctl --since "yesterday"
journalctl --since today

You can combine time filters with unit filters. For example, to view nginx logs from the past hour:

Terminal
journalctl -u nginx --since "1 hour ago"

Filtering by Priority

systemd uses the standard syslog priority levels. Use the -p flag to filter by severity:

Terminal
journalctl -p err

The output will include the specified priority and all higher-severity levels. The available priority levels from highest to lowest are:

Level Name Description
0 emerg System is unusable
1 alert Immediate action required
2 crit Critical conditions
3 err Error conditions
4 warning Warning conditions
5 notice Normal but significant events
6 info Informational messages
7 debug Debug-level messages

To view only warnings and above from the last hour:

Terminal
journalctl -p warning --since "1 hour ago"

Filtering by Boot

The journal stores logs from multiple boots. Use -b to filter by boot session.

To view logs from the current boot:

Terminal
journalctl -b

To view logs from the previous boot:

Terminal
journalctl -b -1

To list all available boot sessions with their IDs and timestamps:

Terminal
journalctl --list-boots

The output will look something like this:

output
-2 abc123def456 Mon 2026-02-24 08:12:01 CET—Mon 2026-02-24 18:43:22 CET
-1 def456abc789 Tue 2026-02-25 09:05:14 CET—Tue 2026-02-25 21:11:03 CET
0 789abcdef012 Wed 2026-02-26 08:30:41 CET—Wed 2026-02-26 14:00:00 CET

To view logs for a specific boot ID:

Terminal
journalctl -b abc123def456

To view errors from the previous boot:

Terminal
journalctl -b -1 -p err

Kernel Messages

To view kernel messages only (equivalent to dmesg ), use the -k flag:

Terminal
journalctl -k

To view kernel messages from the current boot:

Terminal
journalctl -k -b

To view kernel errors from the previous boot:

Terminal
journalctl -k -p err -b -1

Filtering by Process

In addition to filtering by unit, you can filter logs by process name, executable path, PID, or user ID using journal fields.

To filter by process name:

Terminal
journalctl _COMM=sshd

To filter by executable path:

Terminal
journalctl _EXE=/usr/sbin/sshd

To filter by PID:

Terminal
journalctl _PID=1234

To filter by user ID:

Terminal
journalctl _UID=1000

Multiple fields can be combined to narrow the results further.

Searching Log Messages

To search log messages by a pattern, use the -g flag followed by a regular expression:

Terminal
journalctl -g "failed"

To search within a specific unit:

Terminal
journalctl -u ssh -g "invalid user"

You can also pipe journalctl output to grep for more complex matching:

Terminal
journalctl -u nginx -n 500 --no-pager | grep -i "upstream"

Output Formats

By default, journalctl displays logs in a human-readable format. Use the -o flag to change the output format.

To display logs with ISO 8601 timestamps:

Terminal
journalctl -o short-iso

To display logs as JSON (useful for scripting and log shipping):

Terminal
journalctl -o json-pretty

To display message text only, without metadata:

Terminal
journalctl -o cat

The most commonly used output formats are:

Format Description
short Default human-readable format
short-iso ISO 8601 timestamps
short-precise Microsecond-precision timestamps
json One JSON object per line
json-pretty Formatted JSON
cat Message text only

Managing Journal Size

The journal stores logs on disk under /var/log/journal/. To check how much disk space the journal is using:

Terminal
journalctl --disk-usage
output
Archived and active journals take up 512.0M in the file system.

To reduce the journal size, use the --vacuum-size, --vacuum-time, or --vacuum-files options:

Terminal
journalctl --vacuum-size=500M
Terminal
journalctl --vacuum-time=30d
Terminal
journalctl --vacuum-files=5

These commands remove old archived journal files until the specified limit is met. To configure a permanent size limit, edit /etc/systemd/journald.conf and set SystemMaxUse=.
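A minimal persistent configuration might look like this (the 500M cap is an example value, not a recommendation; tune it to your disk):

```ini
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
SystemMaxUse=500M
```

Apply the change by restarting the daemon with sudo systemctl restart systemd-journald.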

Practical Troubleshooting Workflow

When a service fails, a short sequence of commands can isolate the issue quickly. First, check the service state with systemctl :

Terminal
sudo systemctl status nginx

Then inspect recent error-level logs for that unit:

Terminal
sudo journalctl -u nginx -p err -n 100 --no-pager

If the problem started after reboot, inspect previous boot logs:

Terminal
sudo journalctl -u nginx -b -1 -p err --no-pager

To narrow the time window around the incident:

Terminal
sudo journalctl -u nginx --since "30 minutes ago" --no-pager

If you need pattern matching across many lines, pipe to grep :

Terminal
sudo journalctl -u nginx -n 500 --no-pager | grep -Ei "error|failed|timeout"

Troubleshooting

“No journal files were found”
The systemd journal may not be persistent on your system. Check if /var/log/journal/ exists. If it does not, create it with sudo mkdir -p /var/log/journal and restart systemd-journald. Alternatively, set Storage=persistent in /etc/systemd/journald.conf.

“Permission denied” reading logs
Regular users can only access their own user journal. To read system logs, run journalctl with sudo, or add your user to the adm or systemd-journal group: sudo usermod -aG systemd-journal USERNAME.

-g pattern search returns no results
The -g flag uses PCRE2 regular expressions. Make sure the pattern is correct and that your journalctl version supports -g (available on modern systemd releases). As an alternative, pipe the output to grep.

Logs missing after reboot
The journal is stored in memory by default on some distributions. To enable persistent storage across reboots, set Storage=persistent in /etc/systemd/journald.conf and restart systemd-journald.

Journal consuming too much disk space
Use journalctl --disk-usage to check the current size, then journalctl --vacuum-size=500M to trim old entries. For a permanent limit, configure SystemMaxUse= in /etc/systemd/journald.conf.

FAQ

What is the difference between journalctl and /var/log/syslog?
/var/log/syslog is a plain text file written by rsyslog or syslog-ng. journalctl reads the binary systemd journal, which stores structured metadata alongside each message. The journal offers better filtering, field-based queries, and persistent boot tracking.

How do I view logs for a service that keeps restarting?
Use journalctl -u servicename -f to follow logs in real time, or journalctl -u servicename -n 200 to view the most recent entries. Adding -p err will surface only error-level messages.

How do I check logs from before the current boot?
Use journalctl -b -1 for the previous boot, or journalctl --list-boots to see all available boot sessions and then journalctl -b BOOTID to query a specific one.

Can I export logs to a file?
Yes. Use journalctl --no-pager > output.log for plain text, or journalctl -o json-pretty > output.json for structured JSON. You can combine this with any filter flags.

How do I reduce the amount of disk space used by the journal?
Run journalctl --vacuum-size=500M to immediately trim archived logs to 500 MB. For a persistent limit, set SystemMaxUse=500M in /etc/systemd/journald.conf and restart the journal daemon with systemctl restart systemd-journald.

Conclusion

journalctl is a powerful and flexible tool for querying the systemd journal. Whether you are troubleshooting a failing service, reviewing kernel messages, or auditing authentication events, mastering its filter options saves significant time. If you have any questions, feel free to leave a comment below.

Linux patch Command: Apply Diff Files

The patch command applies a set of changes described in a diff file to one or more original files. It is the standard way to distribute and apply source code changes, security fixes, and configuration updates in Linux, and pairs directly with the diff command.

This guide explains how to use the patch command in Linux with practical examples.

Syntax

The general syntax of the patch command is:

txt
patch [OPTIONS] [ORIGINALFILE] [PATCHFILE]

You can pass the patch file using the -i option or pipe it through standard input:

txt
patch [OPTIONS] -i patchfile [ORIGINALFILE]
patch [OPTIONS] [ORIGINALFILE] < patchfile

patch Options

Option Long form Description
-i FILE --input=FILE Read the patch from FILE instead of stdin
-p N --strip=N Strip N leading path components from file names in the patch
-R --reverse Reverse the patch (undo a previously applied patch)
-b --backup Back up the original file before patching
--dry-run Test the patch without making any changes
-d DIR --directory=DIR Change to DIR before doing anything
-N --forward Skip patches that appear to be already applied
-l --ignore-whitespace Ignore whitespace differences when matching lines
-f --force Force apply even if the patch does not match cleanly
-u --unified Interpret the patch as a unified diff

Create and Apply a Basic Patch

The most common workflow is to use diff to generate a patch file and patch to apply it.

Start with two versions of a file. Here is the original:

hello.py
print("Hello, World!")
print("Version 1")

And the updated version:

hello_new.py
print("Hello, Linux!")
print("Version 2")

Use diff with the -u flag to generate a unified diff and save it to a patch file:

Terminal
diff -u hello.py hello_new.py > hello.patch

The hello.patch file will contain the following differences:

output
--- hello.py 2026-02-21 10:00:00.000000000 +0100
+++ hello_new.py 2026-02-21 10:05:00.000000000 +0100
@@ -1,2 +1,2 @@
-print("Hello, World!")
-print("Version 1")
+print("Hello, Linux!")
+print("Version 2")

To apply the patch to the original file:

Terminal
patch hello.py hello.patch
output
patching file hello.py

hello.py now contains the updated content from hello_new.py.

Dry Run Before Applying

Use --dry-run to test whether a patch will apply cleanly without actually modifying any files:

Terminal
patch --dry-run hello.py hello.patch
output
patching file hello.py

If the patch cannot be applied cleanly, patch reports the conflict without touching any files. This is useful before applying patches from external sources or when you are unsure whether a patch has already been applied.
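The whole generate, test, apply cycle can be scripted. The sketch below recreates the example files from this article and only applies the patch when the dry run succeeds:

```shell
# Recreate the example files (names are from the examples above).
printf 'print("Hello, World!")\n' > hello.py
printf 'print("Hello, Linux!")\n' > hello_new.py

# diff exits 1 when the files differ, so don't treat that as a failure.
diff -u hello.py hello_new.py > hello.patch || true

# Gate the real apply on a successful dry run; --dry-run touches nothing.
if patch --dry-run hello.py hello.patch > /dev/null; then
    patch hello.py hello.patch
else
    echo "patch does not apply cleanly" >&2
fi
```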

Back Up the Original File

Use the -b option to create a backup of the original file before applying the patch. The backup is saved with a .orig extension:

Terminal
patch -b hello.py hello.patch
output
patching file hello.py

After running this command, hello.py.orig contains the original unpatched file. You can restore it manually if needed.

Strip Path Components with -p

When a patch file is generated from a different directory structure than the one where you apply it, the file paths inside the patch may not match your local paths. Use -p N to strip N leading path components from the paths recorded in the patch.

For example, a patch generated with git diff will reference files like:

output
--- a/src/utils/hello.py
+++ b/src/utils/hello.py

To apply this patch from the project root, use -p1 to strip the a/ and b/ prefixes:

Terminal
patch -p1 < hello.patch

-p1 is the standard option for patches generated by git diff and by diff -u from the root of a project directory. Without it, patch would look for a file named a/src/utils/hello.py on disk, which does not exist.

Reverse a Patch

Use the -R option to undo a previously applied patch and restore the original file:

Terminal
patch -R hello.py hello.patch
output
patching file hello.py

The file is restored to its state before the patch was applied.

Apply a Multi-File Patch

A single patch file can contain changes to multiple files. In this case, you do not specify a target file on the command line — patch reads the file names from the diff headers inside the patch:

Terminal
patch -p1 < project.patch

patch processes each file name from the diff headers and applies the corresponding changes. Use -p1 when the patch was produced by git diff or diff -u from the project root.

Quick Reference

Task Command
Apply a patch patch original.txt changes.patch
Apply from stdin patch -p1 < changes.patch
Specify patch file with -i patch -i changes.patch original.txt
Dry run (test without applying) patch --dry-run -p1 < changes.patch
Back up original before patching patch -b original.txt changes.patch
Strip one path prefix (git patches) patch -p1 < changes.patch
Reverse a patch patch -R original.txt changes.patch
Ignore whitespace differences patch -l original.txt changes.patch
Apply to a specific directory patch -d /path/to/dir -p1 < changes.patch

Troubleshooting

Hunk #1 FAILED at line N.
The patch does not match the current content of the file. This usually means the file has already been modified since the patch was created, or you are applying the patch against the wrong version. patch creates a .rej file containing the failed hunk so you can review and apply the change manually.

Reversed (or previously applied) patch detected! Assume -R?
patch has detected that the changes in the patch are already present in the file — the patch may have been applied before. Answer y to reverse the patch or n to skip it. Use -N (--forward) to automatically skip already-applied patches without prompting.

patch: **** malformed patch at line N
The patch file is corrupted or not in a recognized format. Make sure the patch was generated with diff -u (unified format). For a context-format patch, pass the -c flag so patch interprets it correctly.

File paths in the patch do not match files on disk.
Adjust the -p N value. Run patch --dry-run -p0 < changes.patch first, then increment N (-p1, -p2) until patch finds the correct files.
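The probing can be scripted. The sketch below builds a tiny git-style patch (paths prefixed with a/ and b/, file names hypothetical) and tries increasing -p values until the dry run succeeds; --batch suppresses interactive prompts when a file is not found:

```shell
# Set up a file and a patch whose paths carry the a/ b/ prefixes.
mkdir -p src && printf 'old line\n' > src/app.txt
cat > changes.patch <<'EOF'
--- a/src/app.txt
+++ b/src/app.txt
@@ -1 +1 @@
-old line
+new line
EOF

# Probe -p0 through -p3; apply with the first value that dry-runs cleanly.
for n in 0 1 2 3; do
    if patch --dry-run --batch -p"$n" < changes.patch > /dev/null 2>&1; then
        echo "patch applies with -p$n"
        patch -p"$n" < changes.patch
        break
    fi
done
```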

FAQ

How do I create a patch file?
Use the diff command with the -u flag: diff -u original.txt updated.txt > changes.patch. The -u flag produces a unified diff, which is the format patch works with by default. See the diff command guide for details.

What does -p1 mean?
-p1 tells patch to strip one leading path component from file names in the patch. Patches generated by git diff prefix file paths with a/ and b/, so -p1 removes that prefix before patch looks for the file on disk.

How do I undo a patch I already applied?
Run patch -R originalfile patchfile. patch reads the diff in reverse and restores the original content.

What is a .rej file?
When patch cannot apply one or more sections of a patch (called hunks), it writes the failed hunks to a file with a .rej extension alongside the original file. You can open the .rej file and apply those changes manually.

What is the difference between patch and git apply?
Both apply diff files, but git apply is designed for use inside a Git repository and integrates with the Git index. patch is a standalone POSIX tool that works on any file regardless of version control.

Conclusion

The patch command is the standard tool for applying diff files in Linux. Use --dry-run to verify a patch before applying it, -b to keep a backup of the original, and -R to reverse changes when needed.

If you have any questions, feel free to leave a comment below.
