git clone: Clone a Repository

git clone copies an existing Git repository into a new directory on your local machine. It sets up tracking branches for each remote branch and creates a remote called origin pointing back to the source.

This guide explains how to use git clone with the most common options and protocols.

Syntax

The basic syntax of the git clone command is:

txt
git clone [OPTIONS] REPOSITORY [DIRECTORY]

REPOSITORY is the URL or path of the repository to clone. DIRECTORY is optional; if omitted, Git uses the repository name as the directory name.

Clone a Remote Repository

The most common way to clone a repository is over HTTPS. For example, to clone a GitHub repository:

Terminal
git clone https://github.com/user/repo.git

This creates a repo directory in the current working directory, initializes a .git directory inside it, and checks out the default branch.

You can also clone over SSH if you have an SSH key configured:

Terminal
git clone git@github.com:user/repo.git

SSH is generally preferred for repositories you will push to, as it does not require entering credentials each time.

Clone into a Specific Directory

By default, git clone creates a directory named after the repository. To clone into a different directory, pass the target path as the second argument:

Terminal
git clone https://github.com/user/repo.git my-project

To clone into the current directory, use . as the target:

Terminal
git clone https://github.com/user/repo.git .
Warning
Cloning into . requires the current directory to be empty.

Clone a Specific Branch

By default, git clone checks out the default branch (usually main or master). To clone and check out a different branch, use the -b option:

Terminal
git clone -b develop https://github.com/user/repo.git

The full repository history is still downloaded; only the checked-out branch differs. To limit the download to a single branch, combine -b with --single-branch:

Terminal
git clone -b develop --single-branch https://github.com/user/repo.git

Shallow Clone

A shallow clone downloads only a limited number of recent commits rather than the full history. This is useful for speeding up the clone of large repositories when you do not need the full commit history.

To clone with only the latest commit:

Terminal
git clone --depth 1 https://github.com/user/repo.git

To include the last 10 commits:

Terminal
git clone --depth 10 https://github.com/user/repo.git

Shallow clones are commonly used in CI/CD pipelines to reduce download time. To convert a shallow clone to a full clone later, run:

Terminal
git fetch --unshallow

Clone a Local Repository

git clone also works with local paths. This is useful for creating an isolated copy for testing:

Terminal
git clone /path/to/local/repo new-copy
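As a self-contained sketch (assuming git is installed; all paths are temporary), the whole local round trip looks like this:

```shell
# Build a throwaway source repository with one commit, then clone it
# and confirm that the clone's "origin" remote points back at it.
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" config user.email you@example.com
git -C "$src" config user.name "You"
git -C "$src" commit -q --allow-empty -m "initial commit"

git clone -q "$src" "$src-copy"
git -C "$src-copy" remote get-url origin   # prints the source path
```

Because the clone is a full, independent copy, you can experiment in it freely and delete it when you are done.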

Troubleshooting

Destination directory is not empty
git clone URL . works only when the current directory is empty. If files already exist, clone into a new directory or move the existing files out of the way first.
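A small defensive sketch of that emptiness check, using ls -A so that hidden files count as well (the directory here is temporary):

```shell
# A directory is only safe for 'git clone <url> .' when ls -A
# reports nothing -- hidden files make it non-empty too.
dir=$(mktemp -d)
if [ -z "$(ls -A "$dir")" ]; then
  echo "$dir is empty: safe to clone into it with 'git clone <url> .'"
fi
```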

Authentication failed over HTTPS
Most Git hosting services no longer accept account passwords for Git operations over HTTPS. Use a personal access token or configure a credential manager so Git can authenticate securely.
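One way to stop repeated prompts is to enable a credential helper; a minimal sketch (the cache helper keeps credentials in memory for 15 minutes by default, while store would persist them to disk in plain text):

```shell
# Remember HTTPS credentials in memory so that each fetch or push
# does not prompt again.
git config --global credential.helper cache
git config --global credential.helper   # prints: cache
```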

Permission denied (publickey)
This error usually means your SSH key is missing, not loaded into your SSH agent, or not added to your account on the Git hosting service. Verify your SSH setup before retrying the clone.
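Two quick local checks, sketched defensively so that they only report and never fail:

```shell
# Is there a key pair on disk, and is it loaded into the agent?
ls ~/.ssh/id_*.pub 2>/dev/null || echo "no SSH public keys found"
ssh-add -l 2>/dev/null || echo "no agent running or no keys loaded"
```

If both checks pass, confirm that the public key is registered with your account on the hosting service before retrying the clone.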

Host key verification failed
SSH could not verify the identity of the remote server. Confirm you are connecting to the correct host, then accept or update the host key in ~/.ssh/known_hosts.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git clone URL Clone a repository via HTTPS or SSH
git clone URL dir Clone into a specific directory
git clone URL . Clone into the current (empty) directory
git clone -b branch URL Clone and check out a specific branch
git clone -b branch --single-branch URL Clone only a specific branch
git clone --depth 1 URL Shallow clone: latest commit only
git clone --depth N URL Shallow clone: last N commits
git clone /path/to/repo Clone a local repository

FAQ

What is the difference between HTTPS and SSH cloning?
HTTPS cloning works out of the box but prompts for credentials unless you use a credential manager. SSH cloning requires a key pair to be configured but does not prompt for a password on each push or pull.

Does git clone download all branches?
Yes. All remote branches are fetched, but only the default branch is checked out locally. Use git branch -r to list all remote branches, or git checkout branch-name to switch to one.

How do I clone a private repository?
For HTTPS, provide your username and a personal access token when prompted. For SSH, ensure your public key is added to your account on the hosting service.

Can I clone a specific tag instead of a branch?
Yes. Use -b with the tag name: git clone -b v1.0.0 https://github.com/user/repo.git. Git checks out that tag in a detached HEAD state, so create a branch if you plan to make commits.
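A self-contained sketch of that flow, with a temporary local repository standing in for the GitHub URL:

```shell
# Create a tagged source repository.
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" config user.email you@example.com
git -C "$src" config user.name "You"
git -C "$src" commit -q --allow-empty -m "release"
git -C "$src" tag v1.0.0

# Cloning at a tag leaves HEAD detached; branch off before committing.
git clone -q -b v1.0.0 "$src" "$src-clone"
git -C "$src-clone" switch -c hotfix
```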

Conclusion

git clone is the starting point for working with any existing Git repository. After cloning, you can inspect the commit history with git log, review changes with git diff, and configure your username and email if you have not done so already.

jobs Command in Linux: List and Manage Background Jobs

The jobs command is a Bash built-in that lists all background and suspended processes associated with the current shell session. It shows the job ID, state, and the command that started each job.

This article explains how to use the jobs command, what each option does, and how job specifications work for targeting individual jobs.

Syntax

The general syntax for the jobs command is:

txt
jobs [OPTIONS] [JOB_SPEC...]

When called without arguments, jobs lists all active jobs in the current shell.

Listing Jobs

To see all background and suspended jobs, run jobs without any arguments:

Terminal
jobs

If you have two processes (one running and one stopped), the output will look similar to this:

output
[1]- Stopped vim notes.txt
[2]+ Running ping google.com &

Each line shows:

  • The job number in brackets ([1], [2])
  • A + or - sign indicating the current and previous job (more on this below)
  • The job state (Running, Stopped, Done)
  • The command that started the job

To include the process ID (PID) in the output, use the -l option:

Terminal
jobs -l
output
[1]- 14205 Stopped vim notes.txt
[2]+ 14230 Running ping google.com &

The PID is useful when you need to send a signal to a process with the kill command.
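For example (with sleep standing in for a real job), you can drive a background process through the same suspend/resume cycle by PID:

```shell
sleep 300 &          # throwaway background job
pid=$!

kill -STOP "$pid"    # suspend it, as Ctrl+Z would for a foreground job
kill -CONT "$pid"    # resume it in the background
kill -TERM "$pid"    # ask it to exit
```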

Job States

Jobs reported by the jobs command can be in one of these states:

  • Running — the process is actively executing in the background.
  • Stopped — the process has been suspended, typically by pressing Ctrl+Z.
  • Done — the process has finished. This state appears once and is then cleared from the list.
  • Terminated — the process was killed by a signal.

When a background job finishes, the shell displays its Done status the next time you press Enter or run a command.

Job Specifications

Job specifications (job specs) let you target a specific job when using commands like fg, bg, kill, or disown. They always start with a percent sign (%):

  • %1, %2, … — refer to a job by its number.
  • %% or %+ — the current job (the most recently backgrounded or suspended job, marked with +).
  • %- — the previous job (marked with -).
  • %string — the job whose command starts with string. For example, %ping matches a job started with ping google.com.
  • %?string — the job whose command contains string anywhere. For example, %?google also matches ping google.com.

Here is a practical example. Start two background jobs:

Terminal
sleep 300 &
ping -c 100 google.com > /dev/null &

Now list them:

Terminal
jobs
output
[1]- Running sleep 300 &
[2]+ Running ping -c 100 google.com > /dev/null &

You can bring the sleep job to the foreground by its number:

Terminal
fg %1

Or by the command prefix:

Terminal
fg %sleep

Both are equivalent and bring job [1] to the foreground.

Options

The jobs command accepts the following options:

  • -l — List jobs with their process IDs in addition to the normal output.
  • -p — Print only the PID of each job’s process group leader. This is useful for scripting.
  • -r — List only running jobs.
  • -s — List only stopped (suspended) jobs.
  • -n — Show only jobs whose status has changed since the last notification.

To show only running jobs:

Terminal
jobs -r

To get just the PIDs of all jobs:

Terminal
jobs -p
output
14205
14230
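Because jobs -p prints one PID per line, it feeds naturally into kill; a sketch that terminates every background job of the current shell (the sleep commands are placeholders):

```shell
sleep 300 &                 # two placeholder jobs
sleep 300 &

pids=$(jobs -p)             # PIDs of all jobs, one per line
if [ -n "$pids" ]; then
  kill $pids                # unquoted on purpose: one argument per PID
fi
```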

Using jobs in Interactive Shells

The jobs command is primarily useful in an interactive shell where job control is enabled. In a regular shell script, job control is usually off, so jobs may return no output even when background processes exist.

If you need to manage background work in a script, track process IDs with $! and use wait with the PID instead of a job spec:

sh
# Start a background task
long_task &
task_pid=$!

# Poll until it finishes
while kill -0 "$task_pid" 2>/dev/null; do
  echo "Task is still running..."
  sleep 5
done

echo "Task complete."

To wait for several background tasks, store their PIDs and pass them to wait:

sh
task_a &
pid_a=$!

task_b &
pid_b=$!

task_c &
pid_c=$!

wait "$pid_a" "$pid_b" "$pid_c"
echo "All tasks complete."

In an interactive shell, you can still wait for a job by its job spec:

Terminal
wait %1

Quick Reference

Command Description
jobs List all background and stopped jobs
jobs -l List jobs with PIDs
jobs -p Print only PIDs
jobs -r List only running jobs
jobs -s List only stopped jobs
fg %1 Bring job 1 to the foreground
bg %1 Resume job 1 in the background
kill %1 Terminate job 1
disown %1 Remove job 1 from the job table
wait Wait for all background jobs to finish

FAQ

What is the difference between jobs and ps?
jobs shows only the processes started from the current shell session. The ps command lists all processes running on the system, regardless of which shell started them.

Why does jobs show nothing even though processes are running?
jobs only tracks processes started from the current shell. If you started a process in a different terminal or via a system service, it will not appear in jobs. Use ps aux to find it instead.

How do I stop a background job?
Use kill %N where N is the job number. For example, kill %1 sends SIGTERM to job 1. If the job does not stop, use kill -9 %1 to force-terminate it.

What do the + and - signs mean in jobs output?
The + marks the current job, the one that fg and bg will act on by default. The - marks the previous job. When the current job finishes, the previous job becomes the new current job.

How do I keep a background job running after closing the terminal?
Use nohup before the command, or disown after backgrounding it. For long-running interactive sessions, consider Screen or Tmux.
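A minimal sketch of the nohup variant, with sleep 300 standing in for the real long-running command:

```shell
# Immune to hangup from the start; output goes to a log file.
nohup sleep 300 > /tmp/task.log 2>&1 &
echo "started as PID $!"
```

For a job that is already running in the background, disown %1 removes it from the job table so the shell will not send it SIGHUP on exit.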

Conclusion

The jobs command lists all background and suspended processes in the current shell session. Use it together with fg, bg, and kill to control jobs interactively. For a broader guide on running and managing background processes, see How to Run Linux Commands in the Background.

git diff Command: Show Changes Between Commits

Every change in a Git repository can be inspected before it is staged or committed. The git diff command shows the exact lines that were added, removed, or modified — between your working directory, the staging area, and any two commits or branches.

This guide explains how to use git diff to review changes at every stage of your workflow.

Basic Usage

Run git diff with no arguments to see all unstaged changes — differences between your working directory and the staging area:

Terminal
git diff
output
diff --git a/app.py b/app.py
index 3b1a2c4..7d8e9f0 100644
--- a/app.py
+++ b/app.py
@@ -10,6 +10,7 @@ def main():
print("Starting app")
+ print("Debug mode enabled")
run()

Lines starting with + were added. Lines starting with - were removed. Lines with no prefix are context — unchanged lines shown for reference.

If nothing is shown, all changes are already staged or there are no changes at all.

Staged Changes

To see changes that are staged and ready to commit, use --staged (or its synonym --cached):

Terminal
git diff --staged

This compares the staging area against the last commit. It shows exactly what will go into the next commit.

Terminal
git diff --cached

Both flags are equivalent. Use whichever you prefer.

Comparing Commits

Pass two commit hashes to compare them directly:

Terminal
git diff abc1234 def5678

To compare a commit against its parent, use the ^ suffix:

Terminal
git diff HEAD^ HEAD

HEAD refers to the latest commit. HEAD^ is one commit before it. You can go further back with HEAD~2, HEAD~3, and so on.

The HEAD^ HEAD pair above is also the quickest way to review what the most recent commit changed.

Comparing Branches

Use the same syntax to compare two branches:

Terminal
git diff main feature-branch

This shows all differences between the tips of the two branches.

To see only what a branch adds relative to main — excluding changes already in main — use the three-dot notation:

Terminal
git diff main...feature-branch

The three-dot form finds the common ancestor of both branches and diffs from there to the tip of feature-branch. This is the most useful form for reviewing a pull request before merging.

Comparing a Specific File

To limit the diff to a single file, pass the path after --:

Terminal
git diff -- path/to/file.py

The -- separator tells Git the argument is a file path, not a branch name. Combine it with commit references to compare a file across commits:

Terminal
git diff HEAD~3 HEAD -- path/to/file.py

Summary with --stat

To see a summary of which files changed and how many lines were added or removed — without the full diff — use --stat:

Terminal
git diff --stat
output
 app.py | 3 +++
config.yml | 1 -
2 files changed, 3 insertions(+), 1 deletion(-)

This is useful for a quick overview before reviewing the full diff.

Word-Level Diff

By default, git diff highlights changed lines. To highlight only the changed words within a line, use --word-diff:

Terminal
git diff --word-diff
output
@@ -10,6 +10,6 @@ def main():
print("[-Starting-]{+Running+} app")

Removed words appear in [-brackets-] and added words in {+braces+}. This is especially useful for prose or configuration files where only part of a line changes.

Ignoring Whitespace

To ignore whitespace-only changes, use -w:

Terminal
git diff -w

Use -b to ignore changes in the amount of whitespace (but not all whitespace):

Terminal
git diff -b

These options are useful when reviewing files that have been reformatted or indented differently.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git diff Unstaged changes in working directory
git diff --staged Staged changes ready to commit
git diff HEAD^ HEAD Changes in the last commit
git diff abc1234 def5678 Diff between two commits
git diff main feature Diff between two branches
git diff main...feature Changes in feature since branching from main
git diff -- file Diff for a specific file
git diff --stat Summary of changed files and line counts
git diff --word-diff Word-level diff
git diff -w Ignore all whitespace changes

FAQ

What is the difference between git diff and git diff --staged?
git diff shows changes in your working directory that have not been staged yet. git diff --staged shows changes that have been staged with git add and are ready to commit. To see all changes — staged and unstaged combined — use git diff HEAD.

What does the three-dot ... notation do in git diff?
git diff main...feature finds the common ancestor of main and feature and shows the diff from that ancestor to the tip of feature. This isolates the changes that the feature branch introduces, ignoring any new commits in main since the branch was created. In git diff, the two-dot form is just another way to name the two revision endpoints, so git diff main..feature is effectively the same as git diff main feature.

How do I see only the file names that changed, not the full diff?
Use git diff --name-only. To include the change status (modified, added, deleted), use git diff --name-status instead.

How do I compare my local branch against the remote?
Fetch first to update remote-tracking refs, then diff against the updated remote branch. To see what your current branch changes relative to origin/main, use git fetch && git diff origin/main...HEAD. If you want a direct endpoint comparison instead, use git fetch && git diff origin/main HEAD.

Can I use git diff to generate a patch file?
Yes. Redirect the output to a file: git diff > changes.patch. Apply it on another machine with git apply changes.patch.
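A self-contained sketch of that round trip (temporary repositories; assumes git is installed): export uncommitted changes as a patch file, then replay them onto a clean clone.

```shell
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" config user.email you@example.com
git -C "$src" config user.name "You"
echo "v1" > "$src/app.txt"
git -C "$src" add app.txt
git -C "$src" commit -q -m "initial"

echo "v2" >> "$src/app.txt"              # uncommitted edit
patch_file=$(mktemp)
git -C "$src" diff > "$patch_file"       # export the edit

git clone -q "$src" "$src-clone"         # clean copy at "initial"
git -C "$src-clone" apply "$patch_file"  # replay the edit
grep v2 "$src-clone/app.txt"             # prints: v2
```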

Conclusion

git diff is the primary tool for reviewing changes before you stage or commit them. Use git diff for unstaged work, git diff --staged before committing, and git diff main...feature to review a branch before merging. For a history of committed changes, see the git log guide.

git log Command: View Commit History

Every commit in a Git repository is recorded in the project history. The git log command lets you browse that history — showing who made each change, when, and why. It is one of the most frequently used Git commands and one of the most flexible.

This guide explains how to use git log to view, filter, and format commit history.

Basic Usage

Run git log with no arguments to see the full commit history of the current branch, starting from the most recent commit:

Terminal
git log
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date: Mon Mar 10 14:22:31 2026 +0100
Add user authentication module
commit 7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c
Author: John Doe <john@example.com>
Date: Fri Mar 7 09:15:04 2026 +0100
Fix login form validation

Each entry shows the full commit hash, author name and email, date, and commit message. Press q to exit the log view.

Compact Output with --oneline

The default output is verbose. Use --oneline to show one commit per line — the abbreviated hash and the subject line only:

Terminal
git log --oneline
output
a3f1d2e Add user authentication module
7b8c9d0 Fix login form validation
c1d2e3f Update README with setup instructions

This is useful for a quick overview of recent work.

Limiting the Number of Commits

To show only the last N commits, pass -n followed by the number:

Terminal
git log -n 5

You can also write it without the space:

Terminal
git log -5

Filtering by Author

Use --author to show only commits by a specific person. The value is matched as a substring against the author name or email:

Terminal
git log --author="Jane"

To match multiple authors, pass --author more than once:

Terminal
git log --author="Jane" --author="John"

Filtering by Date

Use --since and --until to limit commits to a date range. These options accept a variety of formats:

Terminal
git log --since="2 weeks ago"
git log --since="2026-03-01" --until="2026-03-15"

Searching Commit Messages

Use --grep to find commits whose message contains a keyword:

Terminal
git log --grep="authentication"

The search is case-sensitive by default. Add -i to make it case-insensitive:

Terminal
git log -i --grep="fix"

Viewing Changed Files

To see a summary of which files were changed in each commit without the full diff, use --stat:

Terminal
git log --stat
output
commit a3f1d2e4b5c6789012345678901234567890abcd
Author: Jane Smith <jane@example.com>
Date: Mon Mar 10 14:22:31 2026 +0100
Add user authentication module
src/auth.py | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
src/models.py | 12 +++++++
tests/test_auth.py | 45 +++++++++++++++++
3 files changed, 141 insertions(+)

To see the full diff for each commit, use -p (or --patch):

Terminal
git log -p

Combine with -n to limit the output:

Terminal
git log -p -3

Viewing History of a Specific File

To see only the commits that touched a particular file, pass the file path after --:

Terminal
git log -- path/to/file.py

The -- separator tells Git that what follows is a path, not a branch name. Add -p to see the exact changes made to that file in each commit:

Terminal
git log -p -- path/to/file.py

Branch and Graph View

To visualize how branches diverge and merge, use --graph:

Terminal
git log --oneline --graph
output
* a3f1d2e Add user authentication module
* 7b8c9d0 Fix login form validation
| * 3e4f5a6 WIP: dark mode toggle
|/
* c1d2e3f Update README with setup instructions

Add --all to include all branches, not just the current one:

Terminal
git log --oneline --graph --all

This gives a full picture of the repository’s branch and merge history.

Comparing Two Branches

To see commits that are in one branch but not another, use the .. notation:

Terminal
git log main..feature-branch

This shows commits reachable from feature-branch that are not yet in main — useful for reviewing what a branch adds before merging.

Custom Output Format

Use --pretty=format: to define exactly what each log line shows. Some useful placeholders:

  • %h — abbreviated commit hash
  • %an — author name
  • %ar — author date, relative
  • %s — subject (first line of commit message)

For example:

Terminal
git log --pretty=format:"%h %an %ar %s"
output
a3f1d2e Jane Smith 5 days ago Add user authentication module
7b8c9d0 John Doe 8 days ago Fix login form validation

Excluding Merge Commits

To hide merge commits and see only substantive changes, use --no-merges:

Terminal
git log --no-merges --oneline

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git log Full commit history
git log --oneline One line per commit
git log -n 10 Last 10 commits
git log --author="Name" Filter by author
git log --since="2 weeks ago" Filter by date
git log --grep="keyword" Search commit messages
git log --stat Show changed files per commit
git log -p Show full diff per commit
git log -- file History of a specific file
git log --oneline --graph Visual branch graph
git log --oneline --graph --all Graph of all branches
git log main..feature Commits in feature not in main
git log --no-merges Exclude merge commits
git log --pretty=format:"%h %an %s" Custom output format

FAQ

How do I search for a commit that changed a specific piece of code?
Use -S (the “pickaxe” option) to find commits that added or removed a specific string: git log -S "function_name". For regex patterns, use -G "pattern" instead.

How do I see who last changed each line of a file?
Use git blame path/to/file. It annotates every line with the commit hash, author, and date of the last change.

What is the difference between git log and git reflog?
git log shows the commit history of the current branch. git reflog shows every movement of HEAD — including commits that were reset, rebased, or are no longer reachable from any branch. It is the main recovery tool if you lose commits accidentally.
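A self-contained recovery sketch in a temporary repository: "lose" a commit with a hard reset, find its hash in the reflog (the sed step automates what you would normally do by eye), and restore it on a new branch.

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "precious work"

git reset -q --hard HEAD^                   # "precious work" is now lost
lost=$(git log -g --format=%H | sed -n 2p)  # HEAD's position before reset
git branch recovered "$lost"                # reachable again
git log --format=%s -1 recovered            # prints: precious work
```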

How do I find a commit by its message?
Use git log --grep="search term". The pattern is matched against the full commit message, both subject and body. Add -i for a case-insensitive search and -E to use extended regular expressions. When you pass several --grep patterns, commits matching any of them are shown; add --all-match to require a commit to match all of the patterns.

How do I show a single commit in full detail?
Use git show <hash>. It prints the commit metadata and the full diff. To show just the files changed without the diff, use git show --stat <hash>.

Conclusion

git log is the primary tool for understanding a project’s history. Start with --oneline for a quick overview, use --graph --all to visualize branches, and combine --author, --since, and --grep to zero in on specific changes. For undoing commits found in the log, see the git revert guide or how to undo the last commit.

git stash: Save and Restore Uncommitted Changes

When you are in the middle of a feature and need to switch branches to fix a bug or review someone else’s work, you have a problem: your changes are not ready to commit, but you cannot switch branches with a dirty working tree. git stash solves this by saving your uncommitted changes to a temporary stack so you can restore them later.

This guide explains how to use git stash to save, list, apply, and delete stashed changes.

Basic Usage

The simplest form saves all modifications to tracked files and reverts the working tree to match the last commit:

Terminal
git stash
output
Saved working directory and index state WIP on main: abc1234 Add login page

Your working tree is now clean. You can switch branches, pull updates, or apply a hotfix. When you are ready to return to your work, restore the stash.

By default, git stash saves both staged and unstaged changes to tracked files. It does not save untracked or ignored files unless you explicitly include them (see below).
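The save/restore cycle, as a self-contained sketch in a temporary repository (assumes git is installed):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

echo "line 1" > notes.txt
git add notes.txt
git commit -q -m "add notes"

echo "line 2" >> notes.txt   # uncommitted edit
git stash                    # working tree reverts to the last commit
wc -l < notes.txt            # prints: 1
git stash pop                # the edit comes back
wc -l < notes.txt            # prints: 2
```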

Stashing with a Message

A plain git stash entry shows the branch and last commit message, which becomes hard to read when you have multiple stashes. Use -m to attach a descriptive label:

Terminal
git stash push -m "WIP: user authentication form"

This makes it easy to identify the right stash when you list them later.

Listing Stashes

To see all saved stashes, run:

Terminal
git stash list
output
stash@{0}: On main: WIP: user authentication form
stash@{1}: WIP on main: abc1234 Add login page

Stashes are indexed from newest (stash@{0}) to oldest. The index is used when you want to apply or drop a specific stash.

To inspect the contents of a stash before applying it, use git stash show:

Terminal
git stash show stash@{0}

Add -p to see the full diff:

Terminal
git stash show -p stash@{0}

Applying Stashes

There are two ways to restore a stash.

git stash pop applies the most recent stash and removes it from the stash list:

Terminal
git stash pop

git stash apply applies a stash but keeps it in the list so you can apply it again or to another branch:

Terminal
git stash apply

To apply a specific stash by index, pass the stash reference:

Terminal
git stash apply stash@{1}

If applying the stash causes conflicts, resolve them the same way you would resolve a merge conflict, then stage the resolved files.

Stashing Untracked and Ignored Files

By default, git stash only saves changes to files that Git already tracks. New files you have not yet staged are left behind.

Use -u (or --include-untracked) to include untracked files:

Terminal
git stash -u

To include both untracked and ignored files — for example, generated build artifacts or local config files — use -a (or --all):

Terminal
git stash -a

Partial Stash

If you only want to stash some of your changes and leave others in the working tree, use the -p (or --patch) flag to interactively select which hunks to stash:

Terminal
git stash -p

Git will step through each changed hunk and ask whether to stash it. Type y to stash the hunk, n to leave it, or ? for a full list of options.

Creating a Branch from a Stash

If you have been working on a stash for a while and the branch has diverged enough to cause conflicts on apply, you can create a new branch from the stash directly:

Terminal
git stash branch new-branch-name

This creates a new branch, checks it out at the commit where the stash was originally made, applies the stash, and drops it from the list if it applied cleanly. It is the safest way to resume stashed work that has grown out of sync with the base branch.

Deleting Stashes

To remove a specific stash once you no longer need it:

Terminal
git stash drop stash@{0}

To delete all stashes at once:

Terminal
git stash clear

Use clear with care — there is no undo.

Quick Reference

For a printable quick reference, see the Git cheatsheet.

Command Description
git stash Stash staged and unstaged changes
git stash push -m "message" Stash with a descriptive label
git stash -u Include untracked files
git stash -a Include untracked and ignored files
git stash -p Interactively choose what to stash
git stash list List all stashes
git stash show stash@{0} Show changed files in a stash
git stash show -p stash@{0} Show full diff of a stash
git stash pop Apply and remove the latest stash
git stash apply stash@{0} Apply a stash without removing it
git stash branch branch-name Create a branch from the latest stash
git stash drop stash@{0} Delete a specific stash
git stash clear Delete all stashes

FAQ

What is the difference between git stash pop and git stash apply?
pop applies the stash and removes it from the stash list. apply restores the changes but keeps the stash entry so you can apply it again — for example, to multiple branches. Use pop for normal day-to-day use and apply when you need to reuse the stash.

Does git stash save untracked files?
No, not by default. Run git stash -u to include untracked files, or git stash -a to also include files that match your .gitignore patterns.

Can I stash only specific files?
Yes. In modern Git, you can pass pathspecs to git stash push, for example: git stash push -m "label" -- path/to/file. If you need a more portable workflow, use git stash -p to interactively select which hunks to stash.

How do I recover a dropped stash?
If you accidentally ran git stash drop or git stash clear, the underlying commit may still exist for a while. Run git fsck --no-reflogs | grep "dangling commit" to list unreachable commits, then git show <hash> to inspect them. If you find the right stash commit, you can recreate it with git stash store -m "recovered stash" <hash> and then apply it normally.

Can I apply a stash to a different branch?
Yes. Switch to the target branch with git switch branch-name, then run git stash apply stash@{0}. If there are no conflicts, the changes are applied cleanly. If the branches have diverged significantly, consider git stash branch to create a fresh branch from the stash instead.

Conclusion

git stash is the cleanest way to set aside incomplete work without creating a throwaway commit. Use git stash push -m to keep your stash list readable, prefer pop for one-time restores, and use git stash branch when applying becomes messy. For a broader overview of Git workflows, see the Git cheatsheet or the git revert guide.

How to Create a systemd Service File in Linux

systemd is the init system used by most modern Linux distributions, including Ubuntu, Debian, Fedora, and RHEL. It manages system processes, handles service dependencies, and starts services automatically at boot.

A systemd service file — also called a unit file — tells systemd how to start, stop, and manage a process. This guide explains how to create a systemd service file, install it on the system, and manage it with systemctl.

Quick Reference

Task Command
Create a service file sudo nano /etc/systemd/system/myservice.service
Reload systemd after changes sudo systemctl daemon-reload
Enable service at boot sudo systemctl enable myservice
Start the service sudo systemctl start myservice
Enable and start at once sudo systemctl enable --now myservice
Check service status sudo systemctl status myservice
Stop the service sudo systemctl stop myservice
Disable at boot sudo systemctl disable myservice
View service logs sudo journalctl -u myservice
View live logs sudo journalctl -fu myservice

Understanding systemd Unit Files

Service files are plain text files with a .service extension. They are stored in one of two locations:

  • /etc/systemd/system/ — system-wide services, managed by root. Files here take priority over package-installed units.
  • /usr/lib/systemd/system/ or /lib/systemd/system/ — units installed by packages, depending on the distribution. Do not edit these directly; copy them to /etc/systemd/system/ to override.
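Rather than copying the whole file, you can also override individual directives with a drop-in; a minimal sketch (nginx is only an illustrative unit name) saved as /etc/systemd/system/nginx.service.d/override.conf:

```ini
[Service]
# Directives here merge with, and take precedence over, the packaged
# unit file; run 'sudo systemctl daemon-reload' after saving.
Restart=always
RestartSec=10
```

Running sudo systemctl edit nginx.service creates and opens exactly this kind of drop-in for you.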

A service file is divided into three sections:

  • [Unit] — describes the service and declares dependencies.
  • [Service] — defines how to start, stop, and run the process.
  • [Install] — controls how and when the service is enabled.

Creating a Simple Service File

In this example, we will create a service that runs a custom Bash script. First, create the script you want to run:

Terminal
sudo nano /usr/local/bin/myapp.sh

Add the following content to the script:

/usr/local/bin/myapp.sh
#!/bin/bash
echo "myapp started" >> /var/log/myapp.log
# replace the line below with your application; exec keeps it running in the foreground
exec /usr/local/bin/myapp

Make the script executable:

Terminal
sudo chmod +x /usr/local/bin/myapp.sh

Now create the service file:

Terminal
sudo nano /etc/systemd/system/myapp.service

Add the following content:

/etc/systemd/system/myapp.service
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

After saving the file, reload systemd to pick up the new unit:

Terminal
sudo systemctl daemon-reload

Enable and start the service:

Terminal
sudo systemctl enable --now myapp

Check that it is running:

Terminal
sudo systemctl status myapp
output
● myapp.service - My Custom Application
     Loaded: loaded (/etc/systemd/system/myapp.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2026-03-10 09:00:00 CET; 2s ago
   Main PID: 1234 (myapp.sh)
      Tasks: 1 (limit: 4915)
     CGroup: /system.slice/myapp.service
             └─1234 /bin/bash /usr/local/bin/myapp.sh

Unit File Sections Explained

[Unit] Section

The [Unit] section contains metadata and dependency declarations.

Common directives:

  • Description= — a short human-readable name shown in systemctl status output.
  • After= — start this service after the listed units, if they are started at all. This controls ordering only and does not pull units in; use Requires= or Wants= for dependencies.
  • Requires= — this service requires the listed units. If they fail to start, this service fails too. Combine with After= if you also need ordering.
  • Wants= — like Requires=, but the service still starts even if the listed units fail.
  • Documentation= — a URL or man page reference for the service.
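Putting these together, here is a sketch of a [Unit] section for a service that waits for the network and prefers, but does not require, a local Redis instance (the redis-server.service name is an assumption; substitute your own unit names):

```ini
[Unit]
Description=My Custom Application
Documentation=man:myapp(8)
# Wants= pulls redis-server in when available, but myapp still starts without it
Wants=redis-server.service
# After= controls ordering only; it never pulls units in by itself
After=network.target redis-server.service
```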

[Service] Section

The [Service] section defines the process and how systemd manages it.

Common directives:

  • Type= — the startup type. See Service Types below.
  • ExecStart= — the command to run when the service starts. Must be an absolute path.
  • ExecStop= — the command to run when the service stops. If omitted, systemd sends SIGTERM.
  • ExecReload= — the command to reload the service configuration without restarting.
  • Restart= — when to restart automatically. See Restart Policies below.
  • RestartSec= — how many seconds to wait before restarting. Default is 100ms.
  • User= — run the process as this user instead of root.
  • Group= — run the process as this group.
  • WorkingDirectory= — set the working directory for the process.
  • Environment= — set environment variables: Environment="KEY=value".
  • EnvironmentFile= — read environment variables from a file.
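For example, a sketch of a [Service] section that sets one variable inline and loads the rest from a file (the /etc/myapp/env path is an assumption; the leading - tells systemd to ignore a missing file rather than fail):

```ini
[Service]
ExecStart=/usr/local/bin/myapp
WorkingDirectory=/opt/myapp
Environment="PORT=8080"
# the '-' prefix makes a missing file non-fatal
EnvironmentFile=-/etc/myapp/env
```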

[Install] Section

The [Install] section controls how the service is enabled.

  • WantedBy=multi-user.target — the most common value. Enables the service for normal multi-user (non-graphical) boot.
  • WantedBy=graphical.target — use this if the service requires a graphical environment.

Service Types

The Type= directive tells systemd how the process behaves at startup.

Type Description
simple Default. The process started by ExecStart is the main process.
forking The process forks and the parent exits. systemd tracks the child. Use with traditional daemons.
oneshot The process runs once and exits. systemd waits for it to finish before marking the service as active.
notify Like simple, but the process notifies systemd when it is ready using sd_notify().
idle Like simple, but the process is delayed until all active jobs finish.

For most custom scripts and applications, Type=simple is the right choice.
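For example, a one-time setup task is commonly modeled as Type=oneshot with RemainAfterExit=yes, which keeps the unit in the active state after the script finishes (the script path is an assumption):

```ini
[Unit]
Description=One-time setup task

[Service]
Type=oneshot
ExecStart=/usr/local/bin/setup.sh
# without this, the unit would show as inactive (dead) once the script exits
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```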

Restart Policies

The Restart= directive controls when systemd automatically restarts the service.

Value Restarts when
no Never (default).
always The process exits for any reason.
on-failure The process exits with a non-zero code, is killed by a signal, or times out.
on-abnormal Killed by a signal or times out; never restarted based on the exit code alone.
on-success Only if the process exits cleanly (exit code 0).

For long-running services, Restart=on-failure is the most common choice. Use Restart=always for services that must stay running under all circumstances.

Running a Service as a Non-Root User

Running services as root is a security risk. Use the User= and Group= directives to run the service as a dedicated user:

/etc/systemd/system/myapp.service
[Unit]
Description=My Custom Application
After=network.target

[Service]
Type=simple
User=myappuser
Group=myappuser
ExecStart=/usr/local/bin/myapp
Restart=on-failure
WorkingDirectory=/opt/myapp
Environment="NODE_ENV=production"

[Install]
WantedBy=multi-user.target

Create the system user before enabling the service:

Terminal
sudo useradd -r -s /usr/sbin/nologin myappuser

Troubleshooting

Service fails to start — “Failed to start myapp.service”
Check the logs with journalctl: sudo journalctl -u myapp --since "5 minutes ago". The output shows the exact error message from the process.

Changes to the service file have no effect
You must run sudo systemctl daemon-reload after every edit to the unit file. systemd reads the file at reload time, not at start time.

“ExecStart= must be an absolute path”
The ExecStart= value must start with /. Use the full path to the binary — for example /usr/bin/python3 not python3. Use which python3 to find the absolute path.

Service starts but immediately exits
The process may be crashing on startup. Check sudo journalctl -u myapp -n 50 for error output. If Type=simple, make sure ExecStart= runs the process in the foreground — not a script that launches a background process and exits.

Service does not start at boot despite being enabled
Confirm the [Install] section has a valid WantedBy= target and that systemctl enable was run after daemon-reload. Check with systemctl is-enabled myapp.

FAQ

Where should I put my service file?
Put custom service files in /etc/systemd/system/. Files there take priority over package-installed units in /usr/lib/systemd/system/ and persist across package upgrades.

What is the difference between enable and start?
systemctl start starts the service immediately for the current session only. systemctl enable creates a symlink so the service starts automatically at boot. To do both at once, use systemctl enable --now myservice.

How do I view logs for my service?
Use sudo journalctl -u myservice to see all logs. Add -f to follow live output, or --since "1 hour ago" to limit the time range. See the journalctl guide for full usage.

Can I run a service as a regular user without root?
Yes, using user-level systemd units stored in ~/.config/systemd/user/. Manage them with systemctl --user enable myservice. They run when the user logs in and stop when they log out (unless lingering is enabled with loginctl enable-linger username).

How do I reload a service after changing its configuration?
If the service supports it, use sudo systemctl reload myservice — this sends SIGHUP or runs ExecReload= without restarting the process. Otherwise, use sudo systemctl restart myservice.

Conclusion

Creating a systemd service file gives you full control over how and when your process runs — including automatic restarts, boot integration, and user isolation. Once the file is in place, use systemctl enable --now to activate it and journalctl -u to monitor it. For scheduling tasks that run once at a set time rather than continuously, consider cron jobs as an alternative.

ss Command in Linux: Display Socket Statistics

ss is a command-line utility for displaying socket statistics on Linux. It is the modern replacement for the deprecated netstat command and is faster, more detailed, and available by default on all current Linux distributions.

This guide explains how to use ss to list open sockets, filter results by protocol and port, and identify which process is using a given connection.

ss Syntax

txt
ss [OPTIONS] [FILTER]

When invoked without options, ss lists open non-listening sockets, such as established TCP connections.

List All Sockets

To list all sockets regardless of state, use the -a option:

Terminal
ss -a

The output includes columns for the socket type (Netid), state, receive and send queue sizes, local address and port, and peer address and port:

output
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.1.10:ssh 192.168.1.5:52710
tcp LISTEN 0 128 0.0.0.0:http 0.0.0.0:*
udp UNCONN 0 0 0.0.0.0:bootpc 0.0.0.0:*

Filter by Socket Type

TCP Sockets (-t)

To list only TCP sockets:

Terminal
ss -t

To include listening TCP sockets as well, combine with -a:

Terminal
ss -ta

UDP Sockets (-u)

To list only UDP sockets:

Terminal
ss -ua

Unix Domain Sockets (-x)

To list Unix domain sockets used for inter-process communication:

Terminal
ss -xa

Show Listening Sockets

The -l option shows only sockets that are in the listening state:

Terminal
ss -tl

The most commonly used combination is -tulpn, which shows all TCP and UDP listening sockets with process names and numeric addresses:

Terminal
ss -tulpn
output
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234,fd=3))
tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=5678,fd=6))
udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=910,fd=6))

Each option in the combination does the following:

  • -t — show TCP sockets
  • -u — show UDP sockets
  • -l — show listening sockets only
  • -p — show the process name and PID
  • -n — show numeric addresses and ports instead of resolving hostnames and service names
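Because the Local Address:Port column always ends in :PORT, you can post-process the output into a bare list of listening ports. A sketch, assuming your iproute2 version supports -H (suppress the header line):

```shell
# print unique numeric TCP listening ports, one per line
ss -tlnH | awk '{n = split($4, a, ":"); print a[n]}' | sort -un
```

Splitting on every colon and taking the last element also handles IPv6 addresses such as [::]:80.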

Show Process Information

The -p option adds the process name and PID to the output. This requires root privileges to see processes owned by other users:

Terminal
sudo ss -tp
output
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 0 0 192.168.1.10:ssh 192.168.1.5:52710 users:(("sshd",pid=2341,fd=5))

Use Numeric Output

By default, ss resolves port numbers to service names (for example, port 22 becomes ssh). The -n option disables this and shows raw port numbers:

Terminal
ss -tn

This is useful when you need to match exact port numbers in scripts or when name resolution is slow.

Filter by Port

To find which process is using a specific port, filter by destination or source port. For example, to list all sockets using port 80:

Terminal
ss -tulpn | grep :80

You can also use the built-in filter syntax:

Terminal
ss -tnp 'dport = :443'

To filter by source port:

Terminal
ss -tnp 'sport = :22'

Filter by Connection State

ss supports filtering by connection state. Common states include ESTABLISHED, LISTEN, TIME-WAIT, and CLOSE-WAIT.

To show only established TCP connections:

Terminal
ss -tn state ESTABLISHED

To show only sockets in the TIME-WAIT state:

Terminal
ss -tn state TIME-WAIT

Filter by Address

To show sockets connected to or from a specific IP address:

Terminal
ss -tn dst 192.168.1.5

To filter by source address:

Terminal
ss -tn src 192.168.1.10

You can combine address and port filters:

Terminal
ss -tnp 'dst 192.168.1.5 and dport = :22'

Show IPv4 or IPv6 Only

To restrict output to IPv4 sockets, use -4:

Terminal
ss -tln -4

To show only IPv6 sockets, use -6:

Terminal
ss -tln -6

Show Summary Statistics

The -s option prints a summary of socket counts by type and state without listing individual sockets:

Terminal
ss -s
output
Total: 312
TCP: 14 (estab 4, closed 3, orphaned 0, timewait 3)
Transport Total IP IPv6
RAW 1 0 1
UDP 6 4 2
TCP 11 7 4
INET 18 11 7
FRAG 0 0 0

This is useful for a quick overview of the network state on a busy server.

Practical Examples

The following examples cover common diagnostics, often used alongside tools like ip and ifconfig when checking listening ports.

Find which process is listening on port 8080:

Terminal
sudo ss -tlpn 'sport = :8080'

List all established SSH connections to your server:

Terminal
ss -tn state ESTABLISHED '( dport = :22 or sport = :22 )'

Show all connections to a remote host:

Terminal
ss -tn dst 203.0.113.10

Count established TCP connections:

Terminal
ss -tn state ESTABLISHED | tail -n +2 | wc -l

Quick Reference

Command Description
ss -a List all sockets
ss -t List TCP sockets
ss -u List UDP sockets
ss -x List Unix domain sockets
ss -l Show listening sockets only
ss -tulpn Listening TCP/UDP with process and numeric output
ss -tp TCP sockets with process names
ss -tn TCP sockets with numeric addresses
ss -s Show socket summary statistics
ss -tn state ESTABLISHED Show established TCP connections
ss -tnp dport = :80 Filter by destination port
ss -tn dst 192.168.1.5 Filter by remote address
ss -4 IPv4 sockets only
ss -6 IPv6 sockets only

Troubleshooting

ss -p does not show process names
Process information for sockets owned by other users requires elevated privileges. Use sudo ss -tp or sudo ss -tulpn.

Filters return no results
Use quoted filter expressions such as ss -tn 'dport = :443', and verify whether you should filter by sport or dport.

Service names hide numeric ports
If output shows service names (ssh, http) instead of port numbers, add -n to keep numeric ports and avoid lookup ambiguity.

Output is too broad on busy servers
Start with protocol and state filters (-t, -u, state ESTABLISHED) and then add address or port filters to narrow results.

You need command-level context, not only sockets
Use ss with ps or pgrep when you need additional process detail.

FAQ

What is the difference between ss and netstat?
ss is the modern replacement for netstat. It reads directly from kernel socket structures, making it significantly faster on systems with many connections. netstat is part of the net-tools package, which is deprecated and not installed by default on most current distributions.

Why do I need sudo with ss -p?
Without root privileges, ss can only show process information for sockets owned by your own user. To see process names and PIDs for all sockets, run ss with sudo.

What do Recv-Q and Send-Q mean in the output?
Recv-Q is the number of bytes received but not yet read by the application. Send-Q is the number of bytes sent but not yet acknowledged by the remote host. Non-zero values on a listening socket or consistently high values can indicate a performance issue.

How do I find which process is using a specific port?
Run sudo ss -tulpn | grep :<port>. The -p flag adds process information and -n keeps port numbers numeric so the grep match is reliable.

Conclusion

ss is the standard tool for inspecting socket connections on modern Linux systems. The -tulpn combination covers most day-to-day needs, while the state and address filters make it easy to narrow results on busy servers. For related network diagnostics, see the ip and ifconfig command guides, or the guide on checking listening ports for a broader overview.

uniq Command in Linux: Remove and Count Duplicate Lines

uniq is a command-line utility that filters adjacent duplicate lines from sorted input and writes the result to standard output. It is most commonly used together with sort to remove or count duplicates across an entire file.

This guide explains how to use the uniq command with practical examples.

uniq Command Syntax

The syntax for the uniq command is as follows:

txt
uniq [OPTIONS] [INPUT [OUTPUT]]

If no input file is specified, uniq reads from standard input. The optional OUTPUT argument redirects the result to a file instead of standard output.

Removing Duplicate Lines

By default, uniq removes adjacent duplicate lines, keeping one copy of each. Because it only compares consecutive lines, the input must be sorted first — otherwise only back-to-back duplicates are removed.

Given a file with repeated entries:

fruits.txt
apple
apple
banana
cherry
banana
cherry
cherry

Running uniq on the unsorted file removes only the back-to-back duplicates (the consecutive apple lines at the top and the consecutive cherry lines at the bottom), but leaves the non-adjacent second banana and second cherry intact:

Terminal
uniq fruits.txt
output
apple
banana
cherry
banana
cherry

To remove all duplicates regardless of position, sort the file first:

Terminal
sort fruits.txt | uniq
output
apple
banana
cherry

Counting Occurrences

To prefix each output line with the number of times it appears, use the -c (--count) option:

Terminal
sort fruits.txt | uniq -c
output
      2 apple
      2 banana
      3 cherry

The count and the line are separated by whitespace. This is especially useful for finding the most frequent lines in a file. To rank them from most to least common, pipe the output back to sort:

Terminal
sort fruits.txt | uniq -c | sort -rn
output
      3 cherry
      2 banana
      2 apple

Showing Only Duplicate Lines

To print only lines that appear more than once (one copy per group), use the -d (--repeated) option:

Terminal
sort fruits.txt | uniq -d
output
apple
banana
cherry

To print every instance of a repeated line rather than just one, use -D:

Terminal
sort fruits.txt | uniq -D
output
apple
apple
banana
banana
cherry
cherry
cherry

Showing Only Unique Lines

To print only lines that appear exactly once (no duplicates), use the -u (--unique) option:

Terminal
sort fruits.txt | uniq -u

If every line appears more than once, the output is empty. In the example above, all three fruits are duplicated, so nothing is printed.

Ignoring Case

By default, uniq comparisons are case-sensitive, so Apple and apple are treated as different lines. To compare lines case-insensitively, use the -i (--ignore-case) option:

Terminal
sort -f file.txt | uniq -i

Skipping Fields and Characters

uniq can be told to skip leading fields or characters before comparing lines.

To skip the first N whitespace-separated fields, use -f N (--skip-fields=N). This is useful when lines share a common prefix (such as a timestamp) that should not be part of the comparison:

log.txt
2026-01-01 ERROR disk full
2026-01-02 ERROR disk full
2026-01-03 WARNING low memory
Terminal
uniq -f 1 log.txt
output
2026-01-01 ERROR disk full
2026-01-03 WARNING low memory

The first field (the date) is skipped, so the two ERROR disk full lines are treated as duplicates.

To skip the first N characters instead of fields, use -s N (--skip-chars=N). To limit the comparison to the first N characters per line, use -w N (--check-chars=N).
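As a quick sketch with inline sample input, -w 10 limits the comparison to the first 10 characters (here, a date), so the second line is treated as a duplicate of the first:

```shell
printf '2026-01-01 ERROR disk full\n2026-01-01 WARNING low memory\n2026-01-02 ERROR disk full\n' | uniq -w 10
# 2026-01-01 ERROR disk full
# 2026-01-02 ERROR disk full
```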

Combining uniq with Other Commands

uniq works well in pipelines with grep, cut, sort, and wc.

To count the number of unique lines in a file:

Terminal
sort file.txt | uniq | wc -l

To find the top 10 most common words in a text file:

Terminal
grep -Eo '[[:alnum:]]+' file.txt | sort | uniq -c | sort -rn | head -10

To list unique IP addresses from an access log:

Terminal
awk '{print $1}' /var/log/nginx/access.log | sort | uniq

To find lines that appear in one file but not another (using -u against merged sorted files):

Terminal
sort file1.txt file2.txt | uniq -u
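A quick self-contained sketch using throwaway files under /tmp: b and c appear in both files and are suppressed as duplicates, while a and d survive:

```shell
printf 'a\nb\nc\n' > /tmp/file1.txt
printf 'b\nc\nd\n' > /tmp/file2.txt
# lines present in both files cancel out; lines unique to one file remain
sort /tmp/file1.txt /tmp/file2.txt | uniq -u
# a
# d
```

Note this assumes each file is internally duplicate-free; if not, deduplicate each with sort -u first.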

Quick Reference

Command Description
sort file.txt | uniq Remove all duplicate lines
sort file.txt | uniq -c Count occurrences of each line
sort file.txt | uniq -c | sort -rn Rank lines by frequency
sort file.txt | uniq -d Show only duplicate lines (one per group)
sort file.txt | uniq -D Show all instances of duplicate lines
sort file.txt | uniq -u Show only lines that appear exactly once
uniq -i file.txt Compare lines case-insensitively
uniq -f 2 file.txt Skip first 2 fields when comparing
uniq -s 5 file.txt Skip first 5 characters when comparing
uniq -w 10 file.txt Compare only first 10 characters

Troubleshooting

Duplicates are not removed
uniq only removes adjacent duplicate lines. If the file is not sorted, non-consecutive duplicates are not detected. Always sort the input first: sort file.txt | uniq.

-c output has inconsistent spacing
The count is right-aligned and padded with spaces. If you need to process the output further, use awk to normalize spacing while preserving the full line text: sort file.txt | uniq -c | awk '{count=$1; $1=""; sub(/^ +/, ""); print count, $0}'.

Case variants are treated as different lines
Use the -i option to compare case-insensitively. Also sort with -f so that case-insensitive duplicates are adjacent before uniq processes them: sort -f file.txt | uniq -i.

FAQ

What is the difference between sort -u and sort | uniq?
Both produce the same output for simple deduplication. sort -u is slightly more efficient. sort | uniq is more flexible because uniq supports options like -c (count occurrences) and -d (show only duplicates) that sort -u does not.

Does uniq modify the input file?
No. uniq reads from a file or standard input and writes to standard output. The original file is never modified. To save the result, use redirection: sort file.txt | uniq > deduped.txt.

How do I count unique lines in a file?
Pipe through sort, uniq, and wc: sort file.txt | uniq | wc -l.

How do I find lines that only appear in one of two files?
Merge both files and use uniq -u: sort file1.txt file2.txt | uniq -u. Lines shared by both files become duplicates in the merged sorted stream and are suppressed. Lines that exist in only one file remain unique and are printed.

Can uniq work on columns instead of full lines?
Yes. Use -f N to skip the first N fields, -s N to skip the first N characters, or -w N to limit comparison to the first N characters. This lets you deduplicate based on a portion of each line.

Conclusion

The uniq command is a focused tool for filtering and counting duplicate lines. It is most effective when used after sort in a pipeline and pairs naturally with wc, grep, and head for text analysis tasks.

If you have any questions, feel free to leave a comment below.

top Command in Linux: Monitor Processes in Real Time

top is a command-line utility that displays running processes and system resource usage in real time. It helps you inspect CPU usage, memory consumption, load averages, and process activity from a single interactive view.

This guide explains how to use top to monitor your system and quickly identify resource-heavy processes.

top Command Syntax

The syntax for the top command is as follows:

txt
top [OPTIONS]

Run top without arguments to open the interactive monitor:

Terminal
top

Understanding the top Output

The screen is split into two areas:

  • Summary area — system-wide metrics displayed in the header lines
  • Task area — per-process metrics listed below the header

A typical top header looks like this:

output
top - 14:32:01 up 3 days, 4:12, 2 users, load average: 0.45, 0.31, 0.28
Tasks: 213 total, 1 running, 212 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.2 us, 1.1 sy, 0.0 ni, 95.3 id, 0.2 wa, 0.0 hi, 0.1 si, 0.0 st
MiB Mem : 15886.7 total, 8231.4 free, 4912.5 used, 2742.8 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 10374.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1234 www-data 20 0 123456 45678 9012 S 4.3 0.3 0:42.31 nginx
987 mysql 20 0 987654 321098 12345 S 1.7 2.0 3:12.04 mysqld
5678 root 20 0 65432 5678 3456 S 0.0 0.0 0:00.12 sshd

CPU State Percentages

The %Cpu(s) line breaks CPU time into the following states:

  • us — time running user-space (non-kernel) processes
  • sy — time running kernel processes
  • ni — time running user processes with a manually adjusted nice value
  • id — idle time
  • wa — time waiting for I/O to complete
  • hi — time handling hardware interrupt requests
  • si — time handling software interrupt requests
  • st — time stolen from this virtual machine by the hypervisor (relevant on VMs)

High wa typically points to a disk or network I/O bottleneck. High us or sy points to CPU pressure. For memory details, see the free command. For load average context, see uptime.

Task Area Columns

Column Description
PID Process ID
USER Owner of the process
PR Scheduling priority assigned by the kernel
NI Nice value (-20 = highest priority, 19 = lowest)
VIRT Total virtual memory the process has mapped
RES Resident set size — physical RAM currently in use
SHR Shared memory with other processes
S Process state: R running, S sleeping, D uninterruptible, Z zombie
%CPU CPU usage since the last refresh
%MEM Percentage of physical RAM in use
TIME+ Total CPU time consumed since the process started
COMMAND Process name or full command line (toggle with c)

Common Interactive Controls

Press these keys while top is running:

Key Action
q Quit top
h Show help
k Kill a process by PID
r Renice a process
P Sort by CPU usage
M Sort by memory usage
N Sort by PID
T Sort by running time
1 Toggle per-CPU core display
c Toggle full command line
u Filter by user

Refresh Interval and Batch Mode

To change the refresh interval (seconds), use -d:

Terminal
top -d 2

To run top in batch mode (non-interactive), use -b. This is useful for scripts and logs:

Terminal
top -b -n 1

To capture multiple snapshots:

Terminal
top -b -n 5 -d 1 > top-snapshot.txt
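If you only need the load-average trend rather than full process tables, you can keep just the summary line from each snapshot. A sketch, assuming the header format shown earlier (the load-log.txt filename is arbitrary):

```shell
# record one "top - ... load average: ..." line per snapshot
top -b -n 3 -d 1 | grep '^top - ' > load-log.txt
```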

Filtering and Sorting

Start top with a user filter:

Terminal
top -u root

To monitor a specific process by PID, use -p:

Terminal
top -p 1234

Pass a comma-separated list to monitor multiple PIDs at once:

Terminal
top -p 1234,5678

To show individual threads instead of processes, use -H:

Terminal
top -H

Thread view is useful when profiling multi-threaded applications — each thread appears as a separate row with its own CPU and memory usage.

Inside top, use:

  • P to sort by CPU
  • M to sort by memory
  • u to show a specific user

For deeper process filtering, combine top with ps, pgrep, and kill.

Quick Reference

Command Description
top Start interactive process monitor
top -d 2 Refresh every 2 seconds
top -u user Show only a specific user’s processes
top -p PID Monitor a specific process by PID
top -H Show individual threads
top -b -n 1 Single non-interactive snapshot
top -b -n 5 -d 1 Collect 5 snapshots at 1-second intervals

Troubleshooting

top is not installed
Install the procps package (procps-ng on some distributions), then run top again.

Display refresh is too fast or too slow
Use -d to tune the refresh rate, for example top -d 2.

Cannot kill or renice a process
You may not own the process. Use sudo or run as a user with sufficient privileges.

FAQ

What is the difference between top and htop?
top is available by default on most Linux systems and is lightweight. htop provides a richer interactive UI with color coding, mouse support, and easier navigation, but requires a separate installation.

What does load average mean?
Load average shows the average number of runnable or uninterruptible tasks over the last 1, 5, and 15 minutes. A value above the number of CPU cores indicates the system is under pressure. See uptime for more detail.

What is a zombie process?
A zombie process has finished execution but its entry remains in the process table because the parent process has not yet read its exit status. A small number of zombie processes is harmless; a growing count may indicate a bug in the parent application.

What do VIRT, RES, and SHR mean?
VIRT is the total virtual address space the process has reserved. RES is the actual physical RAM currently in use. SHR is the portion of RES that is shared with other processes (for example, shared libraries). RES minus SHR gives the memory used exclusively by that process.

Can I save top output to a file?
Yes. Use batch mode, for example: top -b -n 1 > top.txt.

Conclusion

The top command is one of the fastest ways to inspect process activity and system load in real time. Learn the interactive keys and sort options to diagnose performance issues quickly.

If you have any questions, feel free to leave a comment below.

sort Command in Linux: Sort Lines of Text

sort is a command-line utility that sorts lines of text from files or standard input and writes the result to standard output. By default, sort arranges lines alphabetically, but it supports numeric sorting, reverse order, sorting by specific fields, and more.

This guide explains how to use the sort command with practical examples.

sort Command Syntax

The syntax for the sort command is as follows:

txt
sort [OPTIONS] [FILE]...

If no file is specified, sort reads from standard input. When multiple files are given, their contents are merged and sorted together.

Sorting Lines Alphabetically

By default, sort sorts lines in ascending alphabetical order. To sort a file, pass the file name as an argument:

Terminal
sort file.txt

For example, given a file with the following content:

file.txt
banana
apple
cherry
date

Running sort produces:

output
apple
banana
cherry
date

The original file is not modified. sort writes the sorted output to standard output. To save the result to a file, use output redirection:

Terminal
sort file.txt > sorted.txt

Sorting in Reverse Order

To sort in descending order, use the -r (--reverse) option:

Terminal
sort -r file.txt
output
date
cherry
banana
apple

Sorting Numerically

Alphabetical sorting does not work correctly for numbers. For example, 10 would sort before 2 because 1 comes before 2 alphabetically. Use the -n (--numeric-sort) option to sort numerically:

Terminal
sort -n numbers.txt

Given the file:

numbers.txt
10
2
30
5

The output is:

output
2
5
10
30

To sort numerically in reverse (largest first), combine -n and -r:

Terminal
sort -nr numbers.txt
output
30
10
5
2

Sorting by a Specific Column

By default, sort uses the entire line as the sort key. To sort by a specific field (column), use the -k (--key) option followed by the field number.

Fields are separated by whitespace by default. For example, to sort the following file by the second column (file size):

files.txt
report.pdf 2500
notes.txt 80
archive.tar 15000
readme.md 200
Terminal
sort -k2 -n files.txt
output
notes.txt 80
readme.md 200
report.pdf 2500
archive.tar 15000

To use a different field separator, specify it with the -t option. For example, to sort /etc/passwd by the third field (UID), using : as the separator:

Terminal
sort -t: -k3 -n /etc/passwd
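The same -t/-k combination works on any delimited data. A minimal sketch with inline CSV input (hypothetical names and ages), sorting numerically by the second column:

```shell
printf 'alice,30\nbob,5\ncarol,12\n' | sort -t, -k2 -n
# bob,5
# carol,12
# alice,30
```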

Removing Duplicate Lines

To sort and remove duplicate lines in a single step, use the -u (--unique) option:

Terminal
sort -u file.txt

Given a file with repeated entries:

file.txt
apple
banana
apple
cherry
banana
output
apple
banana
cherry

This is equivalent to running sort file.txt | uniq.

Ignoring Case

By default, sort is case-sensitive. Uppercase letters sort before lowercase. To sort case-insensitively, use the -f (--ignore-case) option:

Terminal
sort -f file.txt

Checking if a File Is Already Sorted

To verify that a file is already sorted, use the -c (--check) option. It exits silently if the file is sorted, or prints an error and exits with a non-zero status if it is not:

Terminal
sort -c file.txt
output
sort: file.txt:2: disorder: apple

This is useful in scripts to validate input before processing.
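Because -c communicates through the exit status, it drops neatly into shell conditionals. A sketch, assuming the file.txt from the earlier examples:

```shell
# sort -c exits 0 if the file is sorted, non-zero otherwise
if sort -c file.txt 2>/dev/null; then
    echo "file.txt is sorted"
else
    echo "file.txt is not sorted" >&2
fi
```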

Sorting Human-Readable Sizes

When working with output from commands like du, sizes are expressed in human-readable form such as 1K, 2M, or 3G. Standard numeric sort does not handle these correctly. Use the -h (--human-numeric-sort) option:

Terminal
du -sh /var/* | sort -h
output
4.0K /var/backups
16K /var/cache
1.2M /var/log
3.4G /var/lib

Combining sort with Other Commands

sort is commonly used in pipelines with other commands.

To count and rank the most frequent words in a file, combine grep, sort, and uniq:

Terminal
grep -Eo '[[:alnum:]_]+' file.txt | sort | uniq -c | sort -rn

To display the ten largest files in a directory, combine sort with head:

Terminal
du -sh /var/* | sort -rh | head -10

To extract and sort a specific field from a CSV using cut:

Terminal
cut -d',' -f2 data.csv | sort

Quick Reference

Option Description
sort file.txt Sort alphabetically (ascending)
sort -r file.txt Sort in reverse order
sort -n file.txt Sort numerically
sort -nr file.txt Sort numerically, largest first
sort -k2 file.txt Sort by the second field
sort -t: -k3 file.txt Sort by field 3 using : as separator
sort -u file.txt Sort and remove duplicate lines
sort -f file.txt Sort case-insensitively
sort -h file.txt Sort human-readable sizes (1K, 2M, 3G)
sort -c file.txt Check if file is already sorted

For a printable quick reference, see the sort cheatsheet.

Troubleshooting

Numbers sort in wrong order
You are using alphabetical sort on numeric data. Add the -n option to sort numerically: sort -n file.txt.

Human-readable sizes sort incorrectly
Sizes like 1K, 2M, and 3G require -h (--human-numeric-sort). Standard -n only handles plain integers.

Uppercase entries appear before lowercase
sort is case-sensitive by default. Use -f to sort case-insensitively, or use LC_ALL=C sort to enforce byte-order sorting.

Output looks correct on screen but differs from file
Output redirection with > truncates the file before reading. Never redirect to the same file you are sorting: sort file.txt > file.txt will empty the file. Use a temporary file or the sponge utility from moreutils.

FAQ

Does sort modify the original file?
No. sort writes to standard output and never modifies the input file. Use sort file.txt > sorted.txt or sort -o file.txt file.txt to save the output.

How do I sort a file and save it in place?
Use the -o option: sort -o file.txt file.txt. Unlike > file.txt, the -o option is safe to use with the same input and output file.

How do I sort by the last column?
sort does not support negative field numbers. Use awk to move the last field to the front, sort, then strip it: awk '{print $NF, $0}' file.txt | sort | cut -d' ' -f2-.

What is the difference between sort -u and sort | uniq?
sort -u is slightly more efficient as it removes duplicates during sorting. sort | uniq is more flexible because uniq supports options like -c (count occurrences) and -d (show only duplicates).

How do I sort lines randomly?
Use sort -R (random sort) or the shuf command: shuf file.txt. Note that sort -R sorts by a random hash of each key, so identical lines stay grouped together; shuf produces a true random permutation of the input lines.
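One property worth knowing: shuffling reorders lines without adding or dropping any, which you can verify by sorting the shuffled output (the /tmp path is illustrative):

```shell
printf 'a\nb\nc\nd\n' > /tmp/lines.txt

# shuf reorders lines randomly; sorting the shuffled output
# recovers the original set of lines exactly
shuf /tmp/lines.txt | sort > /tmp/check.txt
sort /tmp/lines.txt | diff - /tmp/check.txt && echo "same lines, new order"
```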

Conclusion

The sort command is a versatile tool for ordering lines of text in files or command output. It is most powerful when combined with other commands like grep, cut, and head in pipelines.

If you have any questions, feel free to leave a comment below.

journalctl Command in Linux: Query and Filter System Logs

journalctl is a command-line utility for querying and displaying logs collected by systemd-journald, the systemd logging daemon. It gives you structured access to all system logs — kernel messages, service output, authentication events, and more — from a single interface.

This guide explains how to use journalctl to view, filter, and manage system logs.

journalctl Command Syntax

The general syntax for the journalctl command is:

txt
journalctl [OPTIONS] [MATCHES]

When invoked without any options, journalctl displays all collected logs starting from the oldest entry, piped through a pager (usually less). Press q to exit.

Only the root user or members of the adm or systemd-journal groups can read system logs. Regular users can view their own user journal with the --user flag.

Quick Reference

Command Description
journalctl Show all logs
journalctl -f Follow new log entries in real time
journalctl -n 50 Show last 50 lines
journalctl -r Show logs newest first
journalctl -e Jump to end of logs
journalctl -u nginx Logs for a specific unit
journalctl -u nginx -f Follow unit logs in real time
journalctl -b Current boot logs
journalctl -b -1 Previous boot logs
journalctl --list-boots List all boots
journalctl -p err Errors and above
journalctl -p warning --since "1 hour ago" Recent warnings
journalctl -k Kernel messages
journalctl --since "yesterday" Logs since yesterday
journalctl --since "2026-02-01" --until "2026-02-02" Logs in a time window
journalctl -g "failed" Search by pattern
journalctl -o json-pretty JSON output
journalctl --disk-usage Show journal disk usage
journalctl --vacuum-size=500M Reduce journal to 500 MB

For a printable quick reference, see the journalctl cheatsheet.

Viewing System Logs

To view all system logs, run journalctl without any options:

Terminal
journalctl

To show the most recent entries first, use the -r flag:

Terminal
journalctl -r

To jump directly to the end of the log, use -e:

Terminal
journalctl -e

To show the last N lines (similar to tail), use the -n flag:

Terminal
journalctl -n 50

To disable the pager and print directly to the terminal, use --no-pager:

Terminal
journalctl --no-pager

Following Logs in Real Time

To stream new log entries as they arrive (similar to tail -f), use the -f flag:

Terminal
journalctl -f

This is one of the most useful options for monitoring a running service or troubleshooting an active issue. Press Ctrl+C to stop.

Filtering by Systemd Unit

To view logs for a specific systemd service, use the -u flag followed by the unit name:

Terminal
journalctl -u nginx

You can combine -u with other filters. For example, to follow nginx logs in real time:

Terminal
journalctl -u nginx -f

To view logs for multiple units at once, specify -u more than once:

Terminal
journalctl -u nginx -u php-fpm

To print the last 100 lines for a service without the pager:

Terminal
journalctl -u nginx -n 100 --no-pager

For more on starting and stopping services, see how to start, stop, and restart Nginx and Apache.

Filtering by Time

Use --since and --until to limit log output to a specific time range.

To show logs since a specific date and time:

Terminal
journalctl --since "2026-02-01 10:00"

To show logs within a window:

Terminal
journalctl --since "2026-02-01 10:00" --until "2026-02-01 12:00"

journalctl accepts many natural time expressions:

Terminal
journalctl --since "1 hour ago"
journalctl --since "yesterday"
journalctl --since today

You can combine time filters with unit filters. For example, to view nginx logs from the past hour:

Terminal
journalctl -u nginx --since "1 hour ago"

Filtering by Priority

systemd uses the standard syslog priority levels. Use the -p flag to filter by severity:

Terminal
journalctl -p err

The output will include the specified priority and all higher-severity levels. The available priority levels from highest to lowest are:

Level Name Description
0 emerg System is unusable
1 alert Immediate action required
2 crit Critical conditions
3 err Error conditions
4 warning Warning conditions
5 notice Normal but significant events
6 info Informational messages
7 debug Debug-level messages

To view only warnings and above from the last hour:

Terminal
journalctl -p warning --since "1 hour ago"

Filtering by Boot

The journal stores logs from multiple boots. Use -b to filter by boot session.

To view logs from the current boot:

Terminal
journalctl -b

To view logs from the previous boot:

Terminal
journalctl -b -1

To list all available boot sessions with their IDs and timestamps:

Terminal
journalctl --list-boots

The output will look something like this:

output
-2 abc123def456 Mon 2026-02-24 08:12:01 CET—Mon 2026-02-24 18:43:22 CET
-1 def456abc789 Tue 2026-02-25 09:05:14 CET—Tue 2026-02-25 21:11:03 CET
0 789abcdef012 Wed 2026-02-26 08:30:41 CET—Wed 2026-02-26 14:00:00 CET

To view logs for a specific boot ID:

Terminal
journalctl -b abc123def456

To view errors from the previous boot:

Terminal
journalctl -b -1 -p err

Kernel Messages

To view kernel messages only (equivalent to dmesg), use the -k flag:

Terminal
journalctl -k

To view kernel messages from the current boot:

Terminal
journalctl -k -b

To view kernel errors from the previous boot:

Terminal
journalctl -k -p err -b -1

Filtering by Process

In addition to filtering by unit, you can filter logs by process name, executable path, PID, or user ID using journal fields.

To filter by process name:

Terminal
journalctl _COMM=sshd

To filter by executable path:

Terminal
journalctl _EXE=/usr/sbin/sshd

To filter by PID:

Terminal
journalctl _PID=1234

To filter by user ID:

Terminal
journalctl _UID=1000

Multiple fields can be combined to narrow the results further.

Searching Log Messages

To search log messages by a pattern, use the -g flag followed by a regular expression:

Terminal
journalctl -g "failed"

To search within a specific unit:

Terminal
journalctl -u ssh -g "invalid user"

You can also pipe journalctl output to grep for more complex matching:

Terminal
journalctl -u nginx -n 500 --no-pager | grep -i "upstream"

Output Formats

By default, journalctl displays logs in a human-readable format. Use the -o flag to change the output format.

To display logs with ISO 8601 timestamps:

Terminal
journalctl -o short-iso

To display logs as JSON (useful for scripting and log shipping):

Terminal
journalctl -o json-pretty

To display message text only, without metadata:

Terminal
journalctl -o cat

The most commonly used output formats are:

Format Description
short Default human-readable format
short-iso ISO 8601 timestamps
short-precise Microsecond-precision timestamps
json One JSON object per line
json-pretty Formatted JSON
cat Message text only

Managing Journal Size

The journal stores logs on disk under /var/log/journal/. To check how much disk space the journal is using:

Terminal
journalctl --disk-usage
output
Archived and active journals take up 512.0M in the file system.

To reduce the journal size, use the --vacuum-size, --vacuum-time, or --vacuum-files options:

Terminal
journalctl --vacuum-size=500M
Terminal
journalctl --vacuum-time=30d
Terminal
journalctl --vacuum-files=5

These commands remove old archived journal files until the specified limit is met. To configure a permanent size limit, edit /etc/systemd/journald.conf and set SystemMaxUse=.
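A minimal /etc/systemd/journald.conf sketch with the settings mentioned above (the values are illustrative, not recommendations):

```ini
# /etc/systemd/journald.conf (illustrative values)
[Journal]
# Keep logs on disk across reboots
Storage=persistent
# Cap the journal's total disk usage
SystemMaxUse=500M
# Drop entries older than one month
MaxRetentionSec=1month
```

After editing the file, apply the changes with systemctl restart systemd-journald.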

Practical Troubleshooting Workflow

When a service fails, you can use a short sequence of commands to isolate the issue quickly. First, check the service state with systemctl:

Terminal
sudo systemctl status nginx

Then inspect recent error-level logs for that unit:

Terminal
sudo journalctl -u nginx -p err -n 100 --no-pager

If the problem started after reboot, inspect previous boot logs:

Terminal
sudo journalctl -u nginx -b -1 -p err --no-pager

To narrow the time window around the incident:

Terminal
sudo journalctl -u nginx --since "30 minutes ago" --no-pager

If you need pattern matching across many lines, pipe to grep:

Terminal
sudo journalctl -u nginx -n 500 --no-pager | grep -Ei "error|failed|timeout"

Troubleshooting

“No journal files were found”
The systemd journal may not be persistent on your system. Check if /var/log/journal/ exists. If it does not, create it with sudo mkdir -p /var/log/journal and restart systemd-journald. Alternatively, set Storage=persistent in /etc/systemd/journald.conf.

“Permission denied” reading logs
Regular users can only access their own user journal. To read system logs, run journalctl with sudo, or add your user to the adm or systemd-journal group: usermod -aG systemd-journal USERNAME.

-g pattern search returns no results
The -g flag uses PCRE2 regular expressions. Make sure the pattern is correct and that your journalctl version supports -g (available on modern systemd releases). As an alternative, pipe the output to grep.

Logs missing after reboot
The journal is stored in memory by default on some distributions. To enable persistent storage across reboots, set Storage=persistent in /etc/systemd/journald.conf and restart systemd-journald.

Journal consuming too much disk space
Use journalctl --disk-usage to check the current size, then journalctl --vacuum-size=500M to trim old entries. For a permanent limit, configure SystemMaxUse= in /etc/systemd/journald.conf.

FAQ

What is the difference between journalctl and /var/log/syslog?
/var/log/syslog is a plain text file written by rsyslog or syslog-ng. journalctl reads the binary systemd journal, which stores structured metadata alongside each message. The journal offers better filtering, field-based queries, and persistent boot tracking.

How do I view logs for a service that keeps restarting?
Use journalctl -u servicename -f to follow logs in real time, or journalctl -u servicename -n 200 to view the most recent entries. Adding -p err will surface only error-level messages.

How do I check logs from before the current boot?
Use journalctl -b -1 for the previous boot, or journalctl --list-boots to see all available boot sessions and then journalctl -b BOOTID to query a specific one.

Can I export logs to a file?
Yes. Use journalctl --no-pager > output.log for plain text, or journalctl -o json-pretty > output.json for structured JSON. You can combine this with any filter flags.

How do I reduce the amount of disk space used by the journal?
Run journalctl --vacuum-size=500M to immediately trim archived logs to 500 MB. For a persistent limit, set SystemMaxUse=500M in /etc/systemd/journald.conf and restart the journal daemon with systemctl restart systemd-journald.

Conclusion

journalctl is a powerful and flexible tool for querying the systemd journal. Whether you are troubleshooting a failing service, reviewing kernel messages, or auditing authentication events, mastering its filter options saves significant time. If you have any questions, feel free to leave a comment below.

Linux patch Command: Apply Diff Files

The patch command applies a set of changes described in a diff file to one or more original files. It is the standard way to distribute and apply source code changes, security fixes, and configuration updates in Linux, and pairs directly with the diff command.

This guide explains how to use the patch command in Linux with practical examples.

Syntax

The general syntax of the patch command is:

txt
patch [OPTIONS] [ORIGINALFILE] [PATCHFILE]

You can pass the patch file using the -i option or pipe it through standard input:

txt
patch [OPTIONS] -i patchfile [ORIGINALFILE]
patch [OPTIONS] [ORIGINALFILE] < patchfile

patch Options

Option Long form Description
-i FILE --input=FILE Read the patch from FILE instead of stdin
-p N --strip=N Strip N leading path components from file names in the patch
-R --reverse Reverse the patch (undo a previously applied patch)
-b --backup Back up the original file before patching
--dry-run Test the patch without making any changes
-d DIR --directory=DIR Change to DIR before doing anything
-N --forward Skip patches that appear to be already applied
-l --ignore-whitespace Ignore whitespace differences when matching lines
-f --force Force apply even if the patch does not match cleanly
-u --unified Interpret the patch as a unified diff

Create and Apply a Basic Patch

The most common workflow is to use diff to generate a patch file and patch to apply it.

Start with two versions of a file. Here is the original:

hello.pytxt
print("Hello, World!")
print("Version 1")

And the updated version:

hello_new.pytxt
print("Hello, Linux!")
print("Version 2")

Use diff with the -u flag to generate a unified diff and save it to a patch file:

Terminal
diff -u hello.py hello_new.py > hello.patch

The hello.patch file will contain the following differences:

output
--- hello.py 2026-02-21 10:00:00.000000000 +0100
+++ hello_new.py 2026-02-21 10:05:00.000000000 +0100
@@ -1,2 +1,2 @@
-print("Hello, World!")
-print("Version 1")
+print("Hello, Linux!")
+print("Version 2")

To apply the patch to the original file:

Terminal
patch hello.py hello.patch
output
patching file hello.py

hello.py now contains the updated content from hello_new.py.

Dry Run Before Applying

Use --dry-run to test whether a patch will apply cleanly without actually modifying any files:

Terminal
patch --dry-run hello.py hello.patch
output
patching file hello.py

If the patch cannot be applied cleanly, patch reports the conflict without touching any files. This is useful before applying patches from external sources or when you are unsure whether a patch has already been applied.

Back Up the Original File

Use the -b option to create a backup of the original file before applying the patch. The backup is saved with a .orig extension:

Terminal
patch -b hello.py hello.patch
output
patching file hello.py

After running this command, hello.py.orig contains the original unpatched file. You can restore it manually if needed.
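The backup workflow can be exercised end to end with scratch files; a minimal sketch (paths and contents are illustrative):

```shell
# Scratch files (paths are illustrative)
printf 'line one\nline two\n' > /tmp/notes.txt
printf 'line ONE\nline two\n' > /tmp/notes_new.txt

# diff exits 1 when the files differ, so guard it in scripts
diff -u /tmp/notes.txt /tmp/notes_new.txt > /tmp/notes.patch || true

# Apply with a backup; the pre-patch content is kept in notes.txt.orig
patch -b /tmp/notes.txt /tmp/notes.patch

# notes.txt now has the new content; the backup holds the original
grep -q 'line ONE' /tmp/notes.txt
grep -q 'line one' /tmp/notes.txt.orig
```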

Strip Path Components with -p

When a patch file is generated from a different directory structure than the one where you apply it, the file paths inside the patch may not match your local paths. Use -p N to strip N leading path components from the paths recorded in the patch.

For example, a patch generated with git diff will reference files like:

output
--- a/src/utils/hello.py
+++ b/src/utils/hello.py

To apply this patch from the project root, use -p1 to strip the a/ and b/ prefixes:

Terminal
patch -p1 < hello.patch

-p1 is the standard option for patches generated by git diff and by diff -u from the root of a project directory. Without it, patch would look for a file named a/src/utils/hello.py on disk, which does not exist.

Reverse a Patch

Use the -R option to undo a previously applied patch and restore the original file:

Terminal
patch -R hello.py hello.patch
output
patching file hello.py

The file is restored to its state before the patch was applied.
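A minimal sketch that applies a patch and then reverses it, confirming the file round-trips back to its original content (paths are illustrative):

```shell
printf 'alpha\nbeta\n' > /tmp/orig.txt
cp /tmp/orig.txt /tmp/work.txt
printf 'alpha\nBETA\n' > /tmp/new.txt

# diff exits 1 when the files differ, so guard it in scripts
diff -u /tmp/work.txt /tmp/new.txt > /tmp/change.patch || true

patch /tmp/work.txt /tmp/change.patch      # apply the change
patch -R /tmp/work.txt /tmp/change.patch   # undo it

# cmp exits 0 when the file matches the pre-patch original
cmp /tmp/orig.txt /tmp/work.txt && echo "restored"
```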

Apply a Multi-File Patch

A single patch file can contain changes to multiple files. In this case, you do not specify a target file on the command line — patch reads the file names from the diff headers inside the patch:

Terminal
patch -p1 < project.patch

patch processes each file name from the diff headers and applies the corresponding changes. Use -p1 when the patch was produced by git diff or diff -u from the project root.

Quick Reference

Task Command
Apply a patch patch original.txt changes.patch
Apply from stdin patch -p1 < changes.patch
Specify patch file with -i patch -i changes.patch original.txt
Dry run (test without applying) patch --dry-run -p1 < changes.patch
Back up original before patching patch -b original.txt changes.patch
Strip one path prefix (git patches) patch -p1 < changes.patch
Reverse a patch patch -R original.txt changes.patch
Ignore whitespace differences patch -l original.txt changes.patch
Apply to a specific directory patch -d /path/to/dir -p1 < changes.patch

Troubleshooting

Hunk #1 FAILED at line N.
The patch does not match the current content of the file. This usually means the file has already been modified since the patch was created, or you are applying the patch against the wrong version. patch creates a .rej file containing the failed hunk so you can review and apply the change manually.

Reversed (or previously applied) patch detected! Assume -R?
patch has detected that the changes in the patch are already present in the file — the patch may have been applied before. Answer y to reverse the patch or n to skip it. Use -N (--forward) to automatically skip already-applied patches without prompting.

patch: **** malformed patch at line N
The patch file is corrupted or not in a recognized format. Make sure the patch was generated with diff -u (unified format). For a context-format patch, pass the -c flag to patch instead.

File paths in the patch do not match files on disk.
Adjust the -p N value. Run patch --dry-run -p0 < changes.patch first, then increment N (-p1, -p2) until patch finds the correct files.

FAQ

How do I create a patch file?
Use the diff command with the -u flag: diff -u original.txt updated.txt > changes.patch. The -u flag produces a unified diff, which is the format patch works with by default. See the diff command guide for details.

What does -p1 mean?
-p1 tells patch to strip one leading path component from file names in the patch. Patches generated by git diff prefix file paths with a/ and b/, so -p1 removes that prefix before patch looks for the file on disk.

How do I undo a patch I already applied?
Run patch -R originalfile patchfile. patch reads the diff in reverse and restores the original content.

What is a .rej file?
When patch cannot apply one or more sections of a patch (called hunks), it writes the failed hunks to a file with a .rej extension alongside the original file. You can open the .rej file and apply those changes manually.

What is the difference between patch and git apply?
Both apply diff files, but git apply is designed for use inside a Git repository and integrates with the Git index. patch is a standalone POSIX tool that works on any file regardless of version control.

Conclusion

The patch command is the standard tool for applying diff files in Linux. Use --dry-run to verify a patch before applying it, -b to keep a backup of the original, and -R to reverse changes when needed.

If you have any questions, feel free to leave a comment below.

Understanding the /etc/fstab File in Linux

The /etc/fstab file (filesystem table) is a system configuration file that defines how filesystems, partitions, and storage devices are mounted at boot time. The system reads this file during startup and mounts each entry automatically.

Understanding /etc/fstab is essential when you need to add a new disk, create a swap file, mount a network share, or change mount options for an existing filesystem.

This guide explains the /etc/fstab file format, what each field means, common mount options, and how to add new entries safely.

/etc/fstab Format

The /etc/fstab file is a plain text file with one entry per line. Each line defines a filesystem to mount. Lines beginning with # are comments and are ignored by the system.

To view the contents of the file safely, use less:

Terminal
less /etc/fstab

A typical /etc/fstab file looks like this:

output
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 / ext4 errors=remount-ro 0 1
UUID=b2c3d4e5-f6a7-8901-bcde-f12345678901 /home ext4 defaults 0 2
UUID=c3d4e5f6-a7b8-9012-cdef-123456789012 none swap sw 0 0
tmpfs /tmp tmpfs defaults,noatime 0 0

Each entry contains six space-separated fields:

txt
UUID=a1b2c3d4...  /home  ext4  defaults  0  2
[--------------]  [---]  [--]  [------]  -  -
       |            |     |       |      |  |
       |            |     |       |      |  +-> 6. Pass (fsck order)
       |            |     |       |      +----> 5. Dump (backup flag)
       |            |     |       +-----------> 4. Options
       |            |     +-------------------> 3. Type
       |            +-------------------------> 2. Mount point
       +--------------------------------------> 1. File system

Field Descriptions

  1. File system — The device or partition to mount. This can be specified as:

    • A UUID: UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890
    • A disk label: LABEL=home
    • A device path: /dev/sda1
    • A network path: 192.168.1.10:/export/share (for NFS)

    Using UUIDs is recommended because device paths like /dev/sda1 can change if disks are added or removed. To find the UUID of a partition, run blkid:

    Terminal
    sudo blkid
  2. Mount point — The directory where the filesystem is attached. The directory must already exist. Common mount points include /, /home, /boot, and /mnt/data. For swap entries, this field is set to none.

  3. Type — The filesystem type. Common values include:

    • ext4 — The default Linux filesystem
    • xfs — High-performance filesystem used on RHEL-based distributions
    • btrfs — Copy-on-write filesystem with snapshot support
    • swap — Swap partition or file
    • tmpfs — Temporary filesystem stored in memory
    • nfs — Network File System
    • vfat — FAT32 filesystem (USB drives, EFI partitions)
    • auto — Let the kernel detect the filesystem type automatically
  4. Options — A comma-separated list of mount options. See the Common Mount Options section below for details.

  5. Dump — Used by the dump backup utility. A value of 0 means the filesystem is not included in backups. A value of 1 means it is. Most modern systems do not use dump, so this is typically set to 0.

  6. Pass — The order in which fsck checks filesystems at boot. The root filesystem should be 1. Other filesystems should be 2 so they are checked after root. A value of 0 means the filesystem is not checked.

Common Mount Options

The fourth field in each fstab entry is a comma-separated list of mount options. The following options are the most commonly used:

  • defaults — Uses the standard default options (rw, suid, dev, exec, auto, nouser, async). Some effective behaviors can still vary by filesystem and kernel settings.
  • ro — Mount the filesystem as read-only.
  • rw — Mount the filesystem as read-write.
  • noatime — Do not update file access times. This can improve performance, especially on SSDs.
  • nodiratime — Do not update directory access times.
  • noexec — Do not allow execution of binaries on the filesystem.
  • nosuid — Do not allow set-user-ID or set-group-ID bits to take effect.
  • nodev — Do not interpret character or block special devices on the filesystem.
  • nofail — Do not report errors if the device does not exist at boot. Useful for removable drives and network shares.
  • auto — Mount the filesystem automatically at boot (default behavior).
  • noauto — Do not mount automatically at boot. The filesystem can still be mounted manually with mount.
  • user — Allow a regular user to mount the filesystem.
  • errors=remount-ro — Remount the filesystem as read-only if an error occurs. Common on root filesystem entries.
  • _netdev — The filesystem requires network access. The system waits for the network to be available before mounting. Use this for NFS, CIFS, and iSCSI mounts.
  • x-systemd.automount — Mount the filesystem on first access instead of at boot. Managed by systemd.

You can combine multiple options separated by commas:

txt
UUID=a1b2c3d4... /data ext4 defaults,noatime,nofail 0 2
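Since fstab fields are whitespace-separated, standard text tools can inspect them. A minimal awk sketch that lists the mount point, type, and options of each entry (the sample file is illustrative; point it at /etc/fstab on a real system):

```shell
# Sample fstab content (illustrative)
cat > /tmp/fstab.sample <<'EOF'
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=a1b2c3d4 /home ext4 defaults,noatime 0 2
tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0
EOF

# Print mount point, type, and options for each non-comment entry
awk '!/^#/ && NF {printf "%-8s %-6s %s\n", $2, $3, $4}' /tmp/fstab.sample
```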

Adding an Entry to /etc/fstab

Before editing /etc/fstab, always create a backup:

Terminal
sudo cp /etc/fstab /etc/fstab.bak

Step 1: Find the UUID

Identify the UUID of the partition you want to mount:

Terminal
sudo blkid /dev/sdb1
output
/dev/sdb1: UUID="d4e5f6a7-b8c9-0123-def0-123456789abc" TYPE="ext4"

Step 2: Create the Mount Point

Create the directory where the filesystem will be mounted:

Terminal
sudo mkdir -p /mnt/data

Step 3: Add the Entry

Open /etc/fstab in a text editor:

Terminal
sudo nano /etc/fstab

Add a new line at the end of the file:

/etc/fstabsh
UUID=d4e5f6a7-b8c9-0123-def0-123456789abc /mnt/data ext4 defaults,nofail 0 2

Step 4: Test the Entry

Instead of rebooting, use mount -a to mount all entries in /etc/fstab that are not already mounted:

Terminal
sudo mount -a

If the command produces no output, the entry is correct. If there is an error, fix the fstab entry before rebooting — an incorrect fstab can prevent the system from booting normally.

Verify the filesystem is mounted:

Terminal
df -h /mnt/data

Common fstab Examples

Swap File

To add a swap file to fstab:

/etc/fstabsh
/swapfile none swap sw 0 0

NFS Network Share

To mount an NFS share that requires network access:

/etc/fstabsh
192.168.1.10:/export/share /mnt/nfs nfs defaults,_netdev,nofail 0 0

CIFS/SMB Windows Share

To mount a Windows/Samba share with a credentials file:

/etc/fstabsh
//192.168.1.20/share /mnt/smb cifs credentials=/etc/samba/creds,_netdev,nofail 0 0

Set strict permissions on the credentials file so other users cannot read it:

Terminal
sudo chmod 600 /etc/samba/creds

USB or External Drive

To mount a removable drive that may not always be attached:

/etc/fstabsh
UUID=e5f6a7b8-c9d0-1234-ef01-23456789abcd /mnt/usb ext4 defaults,nofail,noauto 0 0

The nofail option prevents boot errors when the drive is not connected. The noauto option prevents automatic mounting — mount it manually with sudo mount /mnt/usb when needed.

tmpfs for /tmp

To mount /tmp as a temporary filesystem in memory:

/etc/fstabsh
tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0

Quick Reference

Task Command
View fstab contents less /etc/fstab
Back up fstab sudo cp /etc/fstab /etc/fstab.bak
Find partition UUIDs sudo blkid
Mount all fstab entries sudo mount -a
Check mounted filesystems mount or df -h
Check filesystem type lsblk -f
Restore fstab from backup sudo cp /etc/fstab.bak /etc/fstab

Troubleshooting

System does not boot after editing fstab
An incorrect fstab entry can cause a boot failure. Boot into recovery mode or a live USB, mount the root filesystem, and fix or restore /etc/fstab from the backup. Always test with sudo mount -a before rebooting.

mount -a reports “wrong fs type” or “bad superblock”
The filesystem type in the fstab entry does not match the actual filesystem on the device. Use sudo blkid or lsblk -f to check the correct type.

Network share fails to mount at boot
Add the _netdev option to tell the system to wait for network availability before mounting. For systemd-based systems, x-systemd.automount can also help with timing issues.

“mount point does not exist”
The directory specified in the second field does not exist. Create it with mkdir -p /path/to/mountpoint before running mount -a.

UUID changed after reformatting a partition
Reformatting a partition assigns a new UUID. Run sudo blkid to find the new UUID and update the fstab entry accordingly.

FAQ

What happens if I make an error in /etc/fstab?
If the entry references a non-existent device without the nofail option, the system may drop to an emergency shell during boot. Always use nofail for non-essential filesystems and test with sudo mount -a before rebooting.

Should I use UUID or device path (/dev/sda1)?
Use UUID. Device paths can change if you add or remove disks, or if the boot order changes. UUIDs are unique to each filesystem and do not change unless you reformat the partition.

What does the nofail option do?
It tells the system to continue booting even if the device is not present or cannot be mounted. Without nofail, a missing device causes the system to drop to an emergency shell.

How do I remove an fstab entry?
Open /etc/fstab with sudo nano /etc/fstab, delete or comment out the line (add # at the beginning), save the file, and then unmount the filesystem with sudo umount /mount/point.

What is the difference between noauto and nofail?
noauto prevents the filesystem from being mounted automatically at boot — you must mount it manually. nofail still mounts automatically but does not cause a boot error if the device is missing.

Conclusion

The /etc/fstab file controls how filesystems are mounted at boot. Each entry specifies the device, mount point, filesystem type, options, and check order. Always back up fstab before editing, use UUIDs instead of device paths, and test changes with sudo mount -a before rebooting.

If you have any questions, feel free to leave a comment below.

Traceroute Command in Linux

The traceroute command is a network diagnostic tool that displays the path packets take from your system to a destination host. It shows each hop (router) along the route and the time it takes for packets to reach each one.

Network administrators use traceroute to identify where packets are being delayed or dropped, making it essential for troubleshooting connectivity issues, latency problems, and routing failures.

This guide covers how to use the traceroute command with practical examples and explanations of the most common options.

Syntax

The general syntax for the traceroute command is:

Terminal
traceroute [OPTIONS] DESTINATION
  • OPTIONS — Flags that modify the behavior of the command.
  • DESTINATION — The target hostname or IP address to trace.

Installing traceroute

The traceroute command is not installed by default on all Linux distributions. To check if it is available on your system, type:

Terminal
traceroute --version

If traceroute is not present, the command will print “traceroute: command not found”. You can install it using your distribution’s package manager.

Install traceroute on Ubuntu, Debian, and Derivatives

Terminal
sudo apt update && sudo apt install traceroute

Install traceroute on Fedora, RHEL, and Derivatives

Terminal
sudo dnf install traceroute

Install traceroute on Arch Linux

Terminal
sudo pacman -S traceroute

How traceroute works

When you run traceroute, it sends packets with incrementally increasing TTL (Time to Live) values, starting at 1. Each router along the path decrements the TTL by 1. When the TTL reaches 0, the router discards the packet and sends back an ICMP “Time Exceeded” message.

By increasing the TTL with each round of packets, traceroute discovers each hop along the route until the packets reach the final destination.

By default, traceroute sends three UDP packets per hop (on Linux) and displays the round-trip time for each packet.
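You can observe the TTL mechanism yourself with ping, which accepts a fixed TTL via the -t option. This is a minimal sketch, assuming iputils ping and a reachable network; 8.8.8.8 is just an example destination:

```shell
# Poor man's traceroute: send one ping per TTL value and show which
# router answered with "Time to live exceeded" (printed as "From <router>"),
# or the destination's reply time once the TTL is large enough.
for ttl in 1 2 3 4 5; do
  printf 'TTL=%s: ' "$ttl"
  ping -c 1 -t "$ttl" -W 1 8.8.8.8 2>&1 \
    | grep -m 1 -oE 'From [^ ]+|time=[0-9.]+ ms' \
    || echo 'no reply'
done
```

Each line shows either the router that discarded the packet at that TTL or, once the TTL is large enough to reach 8.8.8.8, the reply time from the destination itself.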

Basic Usage

To trace the route to a destination, run traceroute followed by the hostname or IP address:

Terminal
traceroute google.com

The output should look something like this:

output
traceroute to google.com (142.250.185.78), 30 hops max, 60 byte packets
 1  router.local (192.168.1.1)  1.234 ms  1.102 ms  1.056 ms
 2  10.0.0.1 (10.0.0.1)  12.345 ms  12.234 ms  12.123 ms
 3  isp-gateway.example.net (203.0.113.1)  15.678 ms  15.567 ms  15.456 ms
 4  core-router.example.net (198.51.100.1)  20.123 ms  20.012 ms  19.901 ms
 5  google-peer.example.net (192.0.2.1)  22.345 ms  22.234 ms  22.123 ms
 6  142.250.185.78 (142.250.185.78)  25.678 ms  25.567 ms  25.456 ms

Understanding the Output

Each line in the traceroute output represents a hop along the route. Let us break down what each field means:

  • Hop number — The sequential number of the router in the path (1, 2, 3, etc.).
  • Hostname — The DNS name of the router, if available.
  • IP address — The IP address of the router in parentheses.
  • Round-trip times — Three time measurements in milliseconds, one for each probe packet sent to that hop.

The first line shows the destination, maximum number of hops (default 30), and packet size (default 60 bytes).
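When reading long traces, it can help to condense each hop's three probe times into a single average. A small awk sketch (the sample output in the here-document is made up for illustration):

```shell
# For each hop line, average every value that is followed by "ms"
# and print "hop <n> avg <ms>". NR > 1 skips the header line.
awk 'NR > 1 {
  sum = 0; n = 0
  for (i = 2; i <= NF; i++) if ($(i + 1) == "ms") { sum += $i; n++ }
  if (n) printf "hop %s avg %.3f ms\n", $1, sum / n
}' <<'EOF'
traceroute to example.com (93.184.216.34), 30 hops max, 60 byte packets
 1  192.168.1.1  1.234 ms  1.102 ms  1.056 ms
 2  10.0.0.1  12.345 ms  12.234 ms  12.123 ms
EOF
```

To use it on live data, pipe real output in instead of the here-document, for example traceroute -n example.com | awk '...'. Hops that only show asterisks have no "ms" values and are skipped.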

Interpreting the Results

Asterisks (* * *) indicate that no response was received for that hop. This can happen when:

  • The router is configured to not respond to traceroute probes.
  • A firewall is blocking the packets.
  • The packets were lost due to network congestion.

Increasing latency at a specific hop suggests a bottleneck or congested link at that point in the network.

Consistent high latency from a certain hop onward indicates the issue is at or before that router.

Common Options

The traceroute command accepts several options to customize its behavior:

  • -n — Do not resolve IP addresses to hostnames. This speeds up the output by skipping DNS lookups.
  • -m max_ttl — Set the maximum number of hops (default is 30).
  • -q nqueries — Set the number of probe packets per hop (default is 3).
  • -w waittime — Set the time in seconds to wait for a response (default is 5).
  • -I — Use ICMP ECHO packets instead of UDP (requires root privileges).
  • -T — Use TCP SYN packets instead of UDP (requires root privileges).
  • -p port — Set the destination port for UDP or TCP probes.
  • -s source_addr — Use the specified source IP address.
  • -i interface — Send packets through the specified network interface.

Skip DNS Resolution

To speed up the trace and display only IP addresses, use the -n option:

Terminal
traceroute -n google.com
output
traceroute to google.com (142.250.185.78), 30 hops max, 60 byte packets
 1  192.168.1.1  1.234 ms  1.102 ms  1.056 ms
 2  10.0.0.1  12.345 ms  12.234 ms  12.123 ms
 3  203.0.113.1  15.678 ms  15.567 ms  15.456 ms

This is useful when DNS resolution is slow or when you only need IP addresses.

Change Maximum Hops

By default, traceroute stops after 30 hops. To change this limit, use the -m option:

Terminal
traceroute -m 15 google.com

This limits the trace to 15 hops maximum.

Change Number of Probes

To send a different number of probe packets per hop, use the -q option:

Terminal
traceroute -q 1 google.com

This sends only one probe per hop, resulting in faster but less detailed output.

Use ICMP Instead of UDP

By default, Linux traceroute uses UDP packets. Some networks block UDP, so you can use ICMP ECHO packets instead:

Terminal
sudo traceroute -I google.com
Info
The -I option requires root privileges because sending raw ICMP packets requires elevated permissions.

Use TCP Instead of UDP

For networks that block both UDP and ICMP, you can use TCP SYN packets:

Terminal
sudo traceroute -T google.com

You can also specify a port, such as port 443 for HTTPS:

Terminal
sudo traceroute -T -p 443 google.com

This is useful for tracing routes through firewalls that only allow specific TCP ports.

Trace IPv6 Routes

To trace IPv6 routes, use the -6 option:

Terminal
traceroute -6 ipv6.google.com

Specify Source Interface

If your system has multiple network interfaces, you can specify which one to use:

Terminal
traceroute -i eth0 google.com

Or specify the source IP address:

Terminal
traceroute -s 192.168.1.100 google.com

Traceroute vs tracepath

Linux systems often include tracepath, which is similar to traceroute but does not require root privileges and automatically discovers the MTU (Maximum Transmission Unit) along the path.

Feature          traceroute          tracepath
Root required    Yes (for ICMP/TCP)  No
Protocol         UDP, ICMP, TCP      UDP only
MTU discovery    No                  Yes
Customization    Many options        Limited

Use tracepath for quick traces without root access:

Terminal
tracepath google.com

Use traceroute when you need more control over the probe method or when tracepath does not provide enough information.

Practical Examples

Diagnose Slow Connections

If a website is loading slowly, trace the route to identify where the delay occurs:

Terminal
traceroute -n example.com

Look for hops with significantly higher latency than the previous ones. The hop before the latency spike is often the source of the problem.

Check if a Host is Reachable

If ping shows packet loss, use traceroute to find where packets are being dropped:

Terminal
traceroute google.com

Hops showing * * * followed by successful hops indicate a router that does not respond to probes but forwards traffic. If all remaining hops show * * *, the issue is at or after the last responding hop.

Trace Through a Firewall

If standard UDP probes are blocked, try ICMP or TCP:

Terminal
sudo traceroute -I google.com
sudo traceroute -T -p 80 google.com

Compare Routes to Different Servers

To understand routing differences, trace routes to multiple servers:

Terminal
traceroute -n server1.example.com
traceroute -n server2.example.com

This helps identify whether traffic to different destinations takes different paths through your network.
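One way to compare two saved traces is to line up their hop IP columns. The trace files below are hypothetical samples; in practice create them with traceroute -n server1.example.com > trace1.txt and so on:

```shell
# Two hypothetical saved traces (create real ones with: traceroute -n HOST > FILE).
cat > trace1.txt <<'EOF'
traceroute to server1.example.com (203.0.113.10), 30 hops max, 60 byte packets
 1  192.168.1.1  1.2 ms  1.1 ms  1.0 ms
 2  10.0.0.1  12.3 ms  12.2 ms  12.1 ms
 3  203.0.113.10  15.1 ms  15.0 ms  14.9 ms
EOF
cat > trace2.txt <<'EOF'
traceroute to server2.example.com (198.51.100.20), 30 hops max, 60 byte packets
 1  192.168.1.1  1.2 ms  1.1 ms  1.0 ms
 2  10.0.0.2  13.3 ms  13.2 ms  13.1 ms
 3  198.51.100.20  18.1 ms  18.0 ms  17.9 ms
EOF
# Extract the hop IP column from each trace and print the columns side by side.
awk 'NR > 1 { print $2 }' trace1.txt > hops1.txt
awk 'NR > 1 { print $2 }' trace2.txt > hops2.txt
paste hops1.txt hops2.txt
```

Rows where the two columns differ mark the hop at which the paths diverge; in the sample above, the routes share the first hop and split at hop 2.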

Quick Reference

Task                     Command
Basic trace              traceroute example.com
Skip DNS resolution      traceroute -n example.com
Limit to N hops          traceroute -m 15 example.com
One probe per hop        traceroute -q 1 example.com
Use ICMP                 sudo traceroute -I example.com
Use TCP                  sudo traceroute -T example.com
Use TCP on port 443      sudo traceroute -T -p 443 example.com
Specify interface        traceroute -i eth0 example.com
Set timeout              traceroute -w 3 example.com
Trace with tracepath     tracepath example.com

Troubleshooting

All hops show * * *
The destination or your network may be blocking traceroute probes. Try using ICMP (-I) or TCP (-T) instead of the default UDP. If the issue persists, a firewall between you and the destination is likely blocking all probe types.

Only the first hop responds
Your local router responds, but nothing beyond it does. This often indicates a firewall or routing issue at your ISP. Contact your network administrator or ISP for assistance.

Trace never completes
The destination may not be reachable, or the maximum hop count is too low. Increase the maximum hops with -m 60 and check if the trace progresses further.

High latency at a specific hop
A single hop with high latency does not always indicate a problem. Routers often deprioritize ICMP responses. If the final destination has acceptable latency, the intermediate high latency may not affect actual traffic.

Latency increases then decreases
This can occur due to asymmetric routing, where the return path differs from the outbound path. The displayed times measure the full round trip, so a longer return path can inflate the latency shown for an intermediate hop.

Permission denied
Options like -I (ICMP) and -T (TCP) require root privileges. Run the command with sudo.

FAQ

What is the difference between traceroute and ping?
ping tests whether a destination is reachable and measures round-trip latency. traceroute shows the path packets take and the latency at each hop along the route. Use ping for basic connectivity checks and traceroute for diagnosing where problems occur.

Why do some hops show asterisks?
Asterisks (* * *) mean no response was received. The router may be configured to ignore traceroute probes, a firewall may be blocking them, or the packets may have been lost. This does not necessarily mean the router is down.

What is the default protocol used by traceroute?
On Linux, traceroute uses UDP by default. On Windows, tracert uses ICMP. You can switch Linux traceroute to ICMP with -I or TCP with -T.

How do I trace the route on Windows?
Windows uses the tracert command instead of traceroute. The syntax is similar: tracert example.com. It uses ICMP by default.

What does TTL mean in traceroute?
TTL (Time to Live) is a field in the IP packet header that limits the packet’s lifespan. Each router decrements the TTL by 1. When it reaches 0, the router discards the packet and sends an ICMP “Time Exceeded” message. Traceroute uses this mechanism to discover each hop.

How can I trace the route to a specific port?
Use the -p option with TCP (-T) or UDP to specify the destination port:

Terminal
sudo traceroute -T -p 443 example.com

Is there an alternative to traceroute for continuous diagnostics?
mtr combines ping and traceroute in a single, continuously updating view and is useful for ongoing packet loss and latency checks.
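For a one-shot summary rather than the interactive view, mtr's report mode sends a fixed number of probe rounds and then prints a table. A sketch, assuming the mtr package is installed:

```shell
# Run mtr in report mode (-r): send 3 probe rounds (-c 3), then print
# per-hop loss and latency statistics. Falls back to a message if mtr
# is missing or the host is unreachable.
mtr -r -c 3 google.com 2>&1 || echo "mtr not available or host unreachable"
```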

Conclusion

The traceroute command is an essential tool for diagnosing network connectivity and routing issues. It shows the path packets take to a destination and helps identify where delays or failures occur.

For more options, refer to the traceroute man page by running man traceroute in your terminal.

If you have any questions, feel free to leave a comment below.
