journalctl Command in Linux: Query and Filter System Logs

journalctl is a command-line utility for querying and displaying logs collected by systemd-journald, the systemd logging daemon. It gives you structured access to all system logs — kernel messages, service output, authentication events, and more — from a single interface.

This guide explains how to use journalctl to view, filter, and manage system logs.

journalctl Command Syntax

The general syntax for the journalctl command is:

txt
journalctl [OPTIONS] [MATCHES]

When invoked without any options, journalctl displays all collected logs starting from the oldest entry, piped through a pager (usually less). Press q to exit.

Only the root user or members of the adm or systemd-journal groups can read system logs. Regular users can view their own user journal with the --user flag.
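To check whether your account already has journal access, list your group memberships; the usermod line is shown as a comment because it requires root and only takes effect at the next login:

```shell
# List the groups for the current user; look for adm or systemd-journal
id -nG

# Grant journal access if needed (requires root; takes effect at next login):
#   sudo usermod -aG systemd-journal "$USER"
```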

Quick Reference

Command Description
journalctl Show all logs
journalctl -f Follow new log entries in real time
journalctl -n 50 Show last 50 lines
journalctl -r Show logs newest first
journalctl -e Jump to end of logs
journalctl -u nginx Logs for a specific unit
journalctl -u nginx -f Follow unit logs in real time
journalctl -b Current boot logs
journalctl -b -1 Previous boot logs
journalctl --list-boots List all boots
journalctl -p err Errors and above
journalctl -p warning --since "1 hour ago" Recent warnings
journalctl -k Kernel messages
journalctl --since "yesterday" Logs since yesterday
journalctl --since "2026-02-01" --until "2026-02-02" Logs in a time window
journalctl -g "failed" Search by pattern
journalctl -o json-pretty JSON output
journalctl --disk-usage Show journal disk usage
journalctl --vacuum-size=500M Reduce journal to 500 MB

For a printable quick reference, see the journalctl cheatsheet.

Viewing System Logs

To view all system logs, run journalctl without any options:

Terminal
journalctl

To show the most recent entries first, use the -r flag:

Terminal
journalctl -r

To jump directly to the end of the log, use -e:

Terminal
journalctl -e

To show the last N lines (similar to tail), use the -n flag:

Terminal
journalctl -n 50

To disable the pager and print directly to the terminal, use --no-pager:

Terminal
journalctl --no-pager

Following Logs in Real Time

To stream new log entries as they arrive (similar to tail -f), use the -f flag:

Terminal
journalctl -f

This is one of the most useful options for monitoring a running service or troubleshooting an active issue. Press Ctrl+C to stop.

Filtering by Systemd Unit

To view logs for a specific systemd service, use the -u flag followed by the unit name:

Terminal
journalctl -u nginx

You can combine -u with other filters. For example, to follow nginx logs in real time:

Terminal
journalctl -u nginx -f

To view logs for multiple units at once, specify -u more than once:

Terminal
journalctl -u nginx -u php-fpm

To print the last 100 lines for a service without the pager:

Terminal
journalctl -u nginx -n 100 --no-pager

For more on starting and stopping services, see how to start, stop, and restart Nginx and Apache.

Filtering by Time

Use --since and --until to limit log output to a specific time range.

To show logs since a specific date and time:

Terminal
journalctl --since "2026-02-01 10:00"

To show logs within a window:

Terminal
journalctl --since "2026-02-01 10:00" --until "2026-02-01 12:00"

journalctl accepts many natural time expressions:

Terminal
journalctl --since "1 hour ago"
journalctl --since "yesterday"
journalctl --since today

You can combine time filters with unit filters. For example, to view nginx logs from the past hour:

Terminal
journalctl -u nginx --since "1 hour ago"

Filtering by Priority

systemd uses the standard syslog priority levels. Use the -p flag to filter by severity:

Terminal
journalctl -p err

The output includes the specified priority and all more severe levels. The available priority levels, from most to least severe, are:

Level Name Description
0 emerg System is unusable
1 alert Immediate action required
2 crit Critical conditions
3 err Error conditions
4 warning Warning conditions
5 notice Normal but significant events
6 info Informational messages
7 debug Debug-level messages

To view only warnings and above from the last hour:

Terminal
journalctl -p warning --since "1 hour ago"

Filtering by Boot

The journal stores logs from multiple boots. Use -b to filter by boot session.

To view logs from the current boot:

Terminal
journalctl -b

To view logs from the previous boot:

Terminal
journalctl -b -1

To list all available boot sessions with their IDs and timestamps:

Terminal
journalctl --list-boots

The output will look something like this:

output
-2 abc123def456 Mon 2026-02-24 08:12:01 CET—Mon 2026-02-24 18:43:22 CET
-1 def456abc789 Tue 2026-02-25 09:05:14 CET—Tue 2026-02-25 21:11:03 CET
0 789abcdef012 Wed 2026-02-26 08:30:41 CET—Wed 2026-02-26 14:00:00 CET

To view logs for a specific boot ID:

Terminal
journalctl -b abc123def456

To view errors from the previous boot:

Terminal
journalctl -b -1 -p err

Kernel Messages

To view kernel messages only (equivalent to dmesg), use the -k flag:

Terminal
journalctl -k

To view kernel messages from the current boot:

Terminal
journalctl -k -b

To view kernel errors from the previous boot:

Terminal
journalctl -k -p err -b -1

Filtering by Process

In addition to filtering by unit, you can filter logs by process name, executable path, PID, or user ID using journal fields.

To filter by process name:

Terminal
journalctl _COMM=sshd

To filter by executable path:

Terminal
journalctl _EXE=/usr/sbin/sshd

To filter by PID:

Terminal
journalctl _PID=1234

To filter by user ID:

Terminal
journalctl _UID=1000

Multiple fields can be combined to narrow the results further.
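For example, assuming an sshd process running as root, the two field matches below are ANDed together, so only entries carrying both values are shown:

```shell
# Entries must match both fields: produced by the sshd binary AND by UID 0
journalctl _COMM=sshd _UID=0 -n 20 --no-pager
```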

Searching Log Messages

To search log messages by a pattern, use the -g flag followed by a regular expression:

Terminal
journalctl -g "failed"

To search within a specific unit:

Terminal
journalctl -u ssh -g "invalid user"

You can also pipe journalctl output to grep for more complex matching:

Terminal
journalctl -u nginx -n 500 --no-pager | grep -i "upstream"

Output Formats

By default, journalctl displays logs in a human-readable format. Use the -o flag to change the output format.

To display logs with ISO 8601 timestamps:

Terminal
journalctl -o short-iso

To display logs as JSON (useful for scripting and log shipping):

Terminal
journalctl -o json-pretty

To display message text only, without metadata:

Terminal
journalctl -o cat

The most commonly used output formats are:

Format Description
short Default human-readable format
short-iso ISO 8601 timestamps
short-precise Microsecond-precision timestamps
json One JSON object per line
json-pretty Formatted JSON
cat Message text only
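The json format pairs naturally with jq for scripting. A small sketch, assuming jq is installed; MESSAGE is a standard journal field name:

```shell
# Print only the message text of the last 10 nginx entries
journalctl -u nginx -n 10 -o json | jq -r '.MESSAGE'
```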

Managing Journal Size

The journal stores logs on disk under /var/log/journal/. To check how much disk space the journal is using:

Terminal
journalctl --disk-usage
output
Archived and active journals take up 512.0M in the file system.

To reduce the journal size, use the --vacuum-size, --vacuum-time, or --vacuum-files options:

Terminal
journalctl --vacuum-size=500M

Terminal
journalctl --vacuum-time=30d

Terminal
journalctl --vacuum-files=5

These commands remove old archived journal files until the specified limit is met. To configure a permanent size limit, edit /etc/systemd/journald.conf and set SystemMaxUse=.
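A minimal sketch of the persistent configuration; the directive belongs in the [Journal] section, and the daemon must be restarted afterwards with sudo systemctl restart systemd-journald:

```ini
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M
```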

Practical Troubleshooting Workflow

When a service fails, use a short sequence of checks to isolate the issue quickly. First, check the service state with systemctl:

Terminal
sudo systemctl status nginx

Then inspect recent error-level logs for that unit:

Terminal
sudo journalctl -u nginx -p err -n 100 --no-pager

If the problem started after reboot, inspect previous boot logs:

Terminal
sudo journalctl -u nginx -b -1 -p err --no-pager

To narrow the time window around the incident:

Terminal
sudo journalctl -u nginx --since "30 minutes ago" --no-pager

If you need pattern matching across many lines, pipe to grep:

Terminal
sudo journalctl -u nginx -n 500 --no-pager | grep -Ei "error|failed|timeout"

Troubleshooting

“No journal files were found”
The systemd journal may not be persistent on your system. Check if /var/log/journal/ exists. If it does not, create it with mkdir -p /var/log/journal and restart systemd-journald. Alternatively, set Storage=persistent in /etc/systemd/journald.conf.
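The steps above can be sketched as follows (systemd-tmpfiles fixes the directory's ownership and ACLs, which plain mkdir does not):

```shell
# Create the persistent journal directory and fix its ownership/ACLs
sudo mkdir -p /var/log/journal
sudo systemd-tmpfiles --create --prefix /var/log/journal
sudo systemctl restart systemd-journald
```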

“Permission denied” reading logs
Regular users can only access their own user journal. To read system logs, run journalctl with sudo, or add your user to the adm or systemd-journal group: usermod -aG systemd-journal USERNAME.

-g pattern search returns no results
The -g flag uses PCRE2 regular expressions. Make sure the pattern is correct and that your journalctl version supports -g (available on modern systemd releases). As an alternative, pipe the output to grep.

Logs missing after reboot
The journal is stored in memory by default on some distributions. To enable persistent storage across reboots, set Storage=persistent in /etc/systemd/journald.conf and restart systemd-journald.

Journal consuming too much disk space
Use journalctl --disk-usage to check the current size, then journalctl --vacuum-size=500M to trim old entries. For a permanent limit, configure SystemMaxUse= in /etc/systemd/journald.conf.

FAQ

What is the difference between journalctl and /var/log/syslog?
/var/log/syslog is a plain text file written by rsyslog or syslog-ng. journalctl reads the binary systemd journal, which stores structured metadata alongside each message. The journal offers better filtering, field-based queries, and persistent boot tracking.

How do I view logs for a service that keeps restarting?
Use journalctl -u servicename -f to follow logs in real time, or journalctl -u servicename -n 200 to view the most recent entries. Adding -p err will surface only error-level messages.

How do I check logs from before the current boot?
Use journalctl -b -1 for the previous boot, or journalctl --list-boots to see all available boot sessions and then journalctl -b BOOTID to query a specific one.

Can I export logs to a file?
Yes. Use journalctl --no-pager > output.log for plain text, or journalctl -o json-pretty > output.json for structured JSON. You can combine this with any filter flags.

How do I reduce the amount of disk space used by the journal?
Run journalctl --vacuum-size=500M to immediately trim archived logs to 500 MB. For a persistent limit, set SystemMaxUse=500M in /etc/systemd/journald.conf and restart the journal daemon with systemctl restart systemd-journald.

Conclusion

journalctl is a powerful and flexible tool for querying the systemd journal. Whether you are troubleshooting a failing service, reviewing kernel messages, or auditing authentication events, mastering its filter options saves significant time. If you have any questions, feel free to leave a comment below.

ps Cheatsheet

Basic Syntax

Core ps command forms.

Command Description
ps Show processes in the current shell
ps -e Show all running processes
ps -f Show full-format output
ps aux BSD-style all-process listing with user and resource info
ps -ef SysV-style all-process listing

List Processes

Common process listing commands.

Command Description
ps -e List all processes
ps -ef Full process list with PPID and start time
ps aux Detailed list including CPU and memory usage
ps -u username Processes owned by a specific user
ps -p 1234 Show one process by PID

Select and Filter

Filter output to specific process groups.

Command Description
ps -C nginx Match processes by command name
ps -p 1234,5678 Show multiple PIDs
ps -u root -U root Show processes by effective and real user
ps -t pts/0 Show processes attached to a terminal
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu Custom output sorted by CPU

Custom Output Columns

Show only the process fields you need.

Command Description
ps -eo pid,cmd PID and command
ps -eo user,pid,%cpu,%mem,cmd User, PID, CPU, memory, command
ps -eo pid,lstart,cmd PID with full start time
ps -o pid= -o comm= Output without column headers
ps -p 1234 -o pid,ppid,user,%cpu,%mem,cmd Custom fields for one PID

Process Tree and Parent/Child

Inspect process hierarchy.

Command Description
ps -ejH Hierarchical process view
ps -axjf Forest view (BSD style)
ps -o pid,ppid,cmd -p 1234 Parent-child context for one process
ps -eo pid,ppid,cmd --sort=ppid Group processes by parent PID

Useful Patterns

Common real-world combinations.

Command Description
ps aux | grep nginx Find processes by name (grep itself appears in the output)
ps -C nginx -o pid,cmd Cleaner command-name search without grep
ps -eo pid,%cpu,%mem,cmd --sort=-%mem | head Top processes by memory usage
ps -eo pid,%cpu,cmd --sort=-%cpu | head Top processes by CPU usage
ps -ef | grep nginx | grep -v grep Search by name while excluding the grep process itself

Troubleshooting

Quick checks for common ps usage issues.

Issue Check
Command not visible in output Use ps -ef or ps aux for full list
Process disappears between checks It may be short-lived; sample repeatedly or use watch
grep shows itself Use ps -C name or pgrep instead of raw grep
Missing expected process details Add fields with -o (for example %cpu, %mem, lstart)
Need exact process ID for kill Use ps -C name -o pid= or pgrep name

Related Guides

Use these guides for full process-management workflows.

Guide Description
Ps Command in Linux Full ps guide with examples
Kill Command in Linux Terminate processes by PID
Pkill Command in Linux Kill processes by name/pattern
Pgrep Command in Linux Find process IDs by name and pattern
How to Kill a Process in Linux Practical process-stop workflow

cp Cheatsheet

Basic Syntax

Core command forms for copy operations.

Command Description
cp [OPTIONS] SOURCE DEST Copy one file to destination
cp [OPTIONS] SOURCE... DIRECTORY Copy multiple sources into a directory
cp -r [OPTIONS] SOURCE DEST Copy directory recursively
cp -- FILE DEST Copy file whose name starts with -

Copy Files

Common file copy commands.

Command Description
cp file.txt /tmp/ Copy file to another directory
cp file.txt newname.txt Copy and rename in same directory
cp file1 file2 /backup/ Copy multiple files to a directory
cp *.log /var/log/archive/ Copy files matching pattern
cp /src/file.txt /dest/newname.txt Copy and rename to another directory

Copy Directories

Copy entire directory trees.

Command Description
cp -r dir/ /dest/ Copy directory recursively
cp -r dir1 dir2 /dest/ Copy multiple directories
cp -r /src/dir /dest/dir-new Copy and rename directory
cp -r dir/. /dest/ Copy directory contents only (not the directory itself)

Overwrite Behavior

Control what happens when the destination already exists.

Command Description
cp -i file.txt /dest/ Prompt before overwrite
cp -n file.txt /dest/ Never overwrite existing file
cp -f file.txt /dest/ Force overwrite without prompt
cp -u file.txt /dest/ Copy only if source is newer than destination

Preserve Attributes

Keep timestamps, ownership, and permissions when copying.

Command Description
cp -p file.txt /dest/ Preserve mode, ownership, and timestamps
cp -a dir/ /dest/ Archive mode — preserve all attributes, copy recursively
cp --preserve=timestamps file.txt /dest/ Preserve only timestamps
cp --preserve=mode file.txt /dest/ Preserve only permissions

Useful Patterns

Common real-world cp command combinations.

Command Description
cp file.txt{,.bak} Quick backup via brace expansion
cp -v file.txt /dest/ Verbose output
cp -rv dir/ /dest/ Verbose recursive copy output
find . -name '*.conf' -exec cp -t /backup/ {} + Copy matched files with find
cp -a /src/. /dest/ Mirror directory preserving all attributes

Troubleshooting

Quick checks for common copy errors.

Note: cp -t in the find example is GNU-specific and may not be available on non-GNU systems.

Issue Check
omitting directory Add -r to copy directories
Permission denied Check source read permission and destination write permission
No such file or directory Verify source path with ls -l source
Destination file overwritten Use -i or -n to protect existing files
Attributes not preserved Use -a or -p to preserve ownership and timestamps

Related Guides

Use these guides for detailed copy workflows.

Guide Description
Linux cp Command: Copy Files and Directories Full cp guide with examples
How to Copy Files and Directories in Linux Overview of all copy tools
mv Cheatsheet Move and rename files
rsync Cheatsheet Remote and local sync with progress

PHP Error Reporting: Enable, Display, and Log Errors

PHP error reporting controls which errors are shown, logged, or silently ignored. Configuring it correctly is essential during development — where you want full visibility — and on production servers — where errors should be logged but never displayed to users.

This guide explains how to configure PHP error reporting using php.ini, the error_reporting() function, and .htaccess. If you are not sure which PHP version is active, first check it with the PHP version guide.

Quick Reference

Task Setting / Command
Enable all errors in php.ini error_reporting = E_ALL
Display errors on screen display_errors = On
Hide errors from users (production) display_errors = Off
Log errors to a file log_errors = On
Set custom log file error_log = /var/log/php/error.log
Enable all errors at runtime error_reporting(E_ALL);
Suppress all errors at runtime error_reporting(0);

PHP Error Levels

PHP categorizes errors into levels. Each level has a name (constant) and a numeric value. You can combine levels using bitwise operators to control exactly which errors are reported.

The most commonly used levels are:

  • E_ERROR — fatal errors that stop script execution (undefined function, missing module)
  • E_WARNING — non-fatal errors that allow execution to continue (wrong function argument, missing include)
  • E_PARSE — syntax errors detected at parse time; always stop execution
  • E_NOTICE — minor issues such as using an undefined variable; useful during development
  • E_DEPRECATED — warnings about features that will be removed in future PHP versions
  • E_ALL — all errors and warnings; recommended during development
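The bitwise combination works on plain integers. A quick sketch of the arithmetic behind E_ALL & ~E_NOTICE, using the numeric values E_ALL = 32767 and E_NOTICE = 8 from current PHP releases:

```shell
# E_ALL with the E_NOTICE bit cleared
E_ALL=32767
E_NOTICE=8
echo $(( E_ALL & ~E_NOTICE ))   # prints 32759
```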

For a full list of error constants, see the PHP error constants documentation.

Configure Error Reporting in php.ini

The php.ini file is the main configuration file for PHP. Changes here apply globally to all PHP scripts on the server. After editing php.ini, restart the web server for the changes to take effect.

To find the location of your active php.ini file, run:

Terminal
php --ini

Enable Error Reporting

Open php.ini and set the following directives:

ini
; Report all errors
error_reporting = E_ALL

; Display errors on the page (development only)
display_errors = On

; Display startup sequence errors
display_startup_errors = On
Warning
Never enable display_errors on a production server. Displaying error messages to users exposes file paths, database credentials, and application logic that can be exploited.

Log Errors to a File

To write errors to a log file instead of displaying them, set:

ini
; Disable displaying errors
display_errors = Off

; Enable error logging
log_errors = On

; Set the path to the log file
error_log = /var/log/php/error.log

Make sure the directory exists and is writable by the web server user. To create the directory:

Terminal
sudo mkdir -p /var/log/php
sudo chown www-data:www-data /var/log/php

Recommended Development Configuration

ini
error_reporting = E_ALL
display_errors = On
display_startup_errors = On
log_errors = On
error_log = /var/log/php/error.log

Recommended Production Configuration

ini
error_reporting = E_ALL & ~E_DEPRECATED
display_errors = Off
display_startup_errors = Off
log_errors = On
error_log = /var/log/php/error.log

This reports all serious errors while suppressing deprecation notices, and logs everything without displaying anything to users.

Configure Error Reporting at Runtime

You can override php.ini settings within a PHP script using the error_reporting() function and ini_set(). Runtime changes apply only to the current script.

To enable all errors at the top of a script:

php
<?php
ini_set('display_errors', '1');
ini_set('display_startup_errors', '1');
error_reporting(E_ALL);

To report all errors except notices:

php
<?php
error_reporting(E_ALL & ~E_NOTICE);

To suppress all error output (not recommended for debugging):

php
<?php
error_reporting(0);

Runtime configuration is useful during development when you do not have access to php.ini, but it should not replace proper server-level configuration on production systems.

Configure Error Reporting in .htaccess

If you are on a shared hosting environment without access to php.ini, you can configure PHP error reporting via .htaccess (when PHP runs as an Apache module):

apache
php_flag display_errors On
php_value error_reporting -1
php_flag log_errors On
php_value error_log /var/log/php/error.log

Using -1 for error_reporting enables all possible errors, including those added in future PHP versions.

Info
.htaccess directives only work when PHP runs as an Apache module (mod_php). They have no effect with PHP-FPM.

View PHP Error Logs

Once log_errors is enabled, errors are written to the file specified by error_log. To monitor the log in real time, use the tail -f command:

Terminal
tail -f /var/log/php/error.log

If no error_log path is set, PHP writes errors to the web server error log:

  • Apache: /var/log/apache2/error.log
  • Nginx + PHP-FPM: /var/log/nginx/error.log and the PHP-FPM log (for example /var/log/php/php-fpm.log; the exact path varies by distribution and PHP version)

For related service commands, see how to start, stop, or restart Apache and how to start, stop, or restart Nginx.

Troubleshooting

Errors are not displayed even with display_errors = On
Confirm you are editing the correct php.ini file. Run php --ini or call phpinfo() in a script to see which configuration file is loaded and the current value of display_errors.

Changes to php.ini have no effect
The web server must be restarted after editing php.ini. For Apache, run sudo systemctl restart apache2. For Nginx + PHP-FPM, restart Nginx and your versioned PHP-FPM service (for example, sudo systemctl restart nginx php8.3-fpm).

No errors appear in the log file
Check that the log file path is writable by the web server user (www-data or nginx), and that log_errors = On is set in the active php.ini.

ini_set('display_errors', '1') has no effect
Fatal parse errors (E_PARSE) occur before any PHP code runs, so ini_set() cannot catch them. Enable display_errors in php.ini or .htaccess to see parse errors.

Error suppression operator @ hides errors
PHP’s @ prefix suppresses errors for a single expression. If errors from a specific function are missing from logs, check whether the call is prefixed with @.

FAQ

What is the difference between display_errors and log_errors?
display_errors controls whether errors are output directly to the browser or CLI. log_errors controls whether errors are written to the error log file. On production, always set display_errors = Off and log_errors = On.

Which error_reporting level should I use?
Use E_ALL during development to catch every possible issue. On production, use E_ALL & ~E_DEPRECATED to log serious errors while suppressing deprecation notices that do not require immediate action.

How do I check the current error reporting level?
Call error_reporting() with no arguments: it returns the current bitmask as an integer. You can also check it in the output of phpinfo().

Can I log errors to a database instead of a file?
Not directly via php.ini. Use a custom error handler registered with set_error_handler() and set_exception_handler() to send errors to a database, email, or external logging service.

Where is php.ini located?
Run php --ini from the command line or add <?php phpinfo(); ?> to a script and load it in a browser. The “Loaded Configuration File” row shows the active path. Common locations are /etc/php/8.x/cli/php.ini (CLI) and /etc/php/8.x/apache2/php.ini (Apache).

Conclusion

During development, set error_reporting = E_ALL and display_errors = On in php.ini to see all errors immediately. On production, set display_errors = Off and log_errors = On to log errors silently without exposing details to users.

Make changes in php.ini for permanent server-wide configuration, use error_reporting() and ini_set() for per-script overrides, or use .htaccess on shared hosting without php.ini access.

If you have any questions, feel free to leave a comment below.

chown Cheatsheet

Basic Syntax

Use these core command forms for chown.

Command Description
chown USER FILE Change file owner
chown USER:GROUP FILE Change owner and group
chown :GROUP FILE Change group only
chown USER: FILE Change owner and set group to user’s login group
chown --reference=REF FILE Copy owner and group from another file

Common Examples

Common ownership changes for files and directories.

Command Description
chown root file.txt Set owner to root
chown www-data:www-data /var/www/index.html Set owner and group for a web file
sudo chown $USER:$USER file.txt Return ownership to current user
chown :developers app.log Change group only
chown --reference=source.txt target.txt Match ownership of another file

Recursive Changes

Apply ownership updates to full directory trees.

Command Description
chown -R USER:GROUP /path Recursively change owner and group
chown -R USER /path Recursively change owner only
chown -R :GROUP /path Recursively change group only
chown -R -h USER:GROUP /path Change symlink ownership itself during recursion
chown -R --from=OLDUSER:OLDGROUP NEWUSER:NEWGROUP /path Change only matching current ownership

Symlinks and Traversal

Control how chown treats symbolic links.

Command Description
chown USER:GROUP symlink Change target by default
chown -h USER:GROUP symlink Change symlink itself (not target)
chown -R -H USER:GROUP /path Follow symlink command-line args to directories
chown -R -L USER:GROUP /path Follow all directory symlinks
chown -R -P USER:GROUP /path Never follow symlinks (default)

Safe Patterns

Use these patterns to avoid ownership mistakes.

Command Description
chown --from=root root:root /path/file Change only if current owner matches
find /path -user olduser -exec chown newuser {} + Target only files owned by one user
find /path -group oldgroup -exec chown :newgroup {} + Target only one group
ls -l /path/file Verify ownership before and after changes
id username Confirm user and group names exist

Common Errors

Quick checks when ownership changes fail.

Issue Check
Operation not permitted You need root privileges; run with sudo
invalid user Verify user exists with getent passwd username
invalid group Verify group exists with getent group groupname
Changes did not apply recursively Confirm -R was used
Access still denied after chown Check permission bits with ls -l and ACLs

Related Guides

Use these guides for full ownership and permissions workflows.

Guide Description
Chown Command in Linux Full chown guide with examples
chgrp Command in Linux Change file group ownership
How to Change File Permissions in Linux (chmod command) Update permission bits
Understanding Linux File Permissions Ownership and permission model
How to List Groups in Linux Check group membership and IDs

rm Cheatsheet

Basic Syntax

Core command forms for file and directory removal.

Command Description
rm [OPTIONS] FILE... Remove one or more files
rm -r [OPTIONS] DIRECTORY... Remove directories recursively
rm -- FILE Treat argument as filename even if it starts with -
rm -i FILE Prompt before each removal

Remove Files

Common file deletion commands.

Command Description
rm file.txt Remove one file
rm file1 file2 file3 Remove multiple files
rm *.log Remove files matching pattern
rm -- -strange-filename Remove file named with leading -
rm -v file.txt Remove file with verbose output

Remove Directories

Delete directories and their contents.

Command Description
rm -r dir/ Remove directory recursively
rm -rf dir/ Force recursive removal without prompts
rm -r dir1 dir2 Remove multiple directories
rm -r -- */ Remove all directories in current path

Prompt and Safety Options

Control how aggressively rm deletes files.

Command Description
rm -i file.txt Prompt before each file removal
rm -I file1 file2 Prompt once before deleting many files
rm --interactive=always file.txt Always prompt
rm --interactive=once *.tmp Prompt once
rm -f file.txt Ignore nonexistent files, never prompt

Useful Patterns

Frequent real-world combinations.

Command Description
find . -type f -name '*.tmp' -delete Remove matching temporary files
find . -type d -empty -delete Remove empty directories
rm -rf -- build/ dist/ Remove common build directories
rm -f -- *.bak *.old Remove backup files quietly

Troubleshooting

Quick checks for common removal errors.

Issue Check
Permission denied Check ownership and permissions; use sudo only when needed
Is a directory Add -r to remove directories
No such file or directory Verify path and shell glob expansion
Cannot remove write-protected file Use -i for prompts or -f to force
File name starts with - Use rm -- filename

Related Guides

Use these references for safer file management workflows.

Guide Description
Rm Command in Linux Full rm guide with examples
How to Find Files in Linux Using the Command Line Find and target files before deleting
Chmod Command in Linux Fix permission errors before removal
Linux Commands Cheatsheet General Linux command quick reference

Linux patch Command: Apply Diff Files

The patch command applies a set of changes described in a diff file to one or more original files. It is the standard way to distribute and apply source code changes, security fixes, and configuration updates in Linux, and pairs directly with the diff command.

This guide explains how to use the patch command in Linux with practical examples.

Syntax

The general syntax of the patch command is:

txt
patch [OPTIONS] [ORIGINALFILE] [PATCHFILE]

You can pass the patch file using the -i option or pipe it through standard input:

txt
patch [OPTIONS] -i patchfile [ORIGINALFILE]
patch [OPTIONS] [ORIGINALFILE] < patchfile

patch Options

Option Long form Description
-i FILE --input=FILE Read the patch from FILE instead of stdin
-p N --strip=N Strip N leading path components from file names in the patch
-R --reverse Reverse the patch (undo a previously applied patch)
-b --backup Back up the original file before patching
--dry-run Test the patch without making any changes
-d DIR --directory=DIR Change to DIR before doing anything
-N --forward Skip patches that appear to be already applied
-l --ignore-whitespace Ignore whitespace differences when matching lines
-f --force Force apply even if the patch does not match cleanly
-u --unified Interpret the patch as a unified diff

Create and Apply a Basic Patch

The most common workflow is to use diff to generate a patch file and patch to apply it.

Start with two versions of a file. Here is the original:

hello.py
print("Hello, World!")
print("Version 1")

And the updated version:

hello_new.py
print("Hello, Linux!")
print("Version 2")

Use diff with the -u flag to generate a unified diff and save it to a patch file:

Terminal
diff -u hello.py hello_new.py > hello.patch

The hello.patch file will contain the following differences:

output
--- hello.py 2026-02-21 10:00:00.000000000 +0100
+++ hello_new.py 2026-02-21 10:05:00.000000000 +0100
@@ -1,2 +1,2 @@
-print("Hello, World!")
-print("Version 1")
+print("Hello, Linux!")
+print("Version 2")

To apply the patch to the original file:

Terminal
patch hello.py hello.patch
output
patching file hello.py

hello.py now contains the updated content from hello_new.py.

Dry Run Before Applying

Use --dry-run to test whether a patch will apply cleanly without actually modifying any files:

Terminal
patch --dry-run hello.py hello.patch
output
patching file hello.py

If the patch cannot be applied cleanly, patch reports the conflict without touching any files. This is useful before applying patches from external sources or when you are unsure whether a patch has already been applied.

Back Up the Original File

Use the -b option to create a backup of the original file before applying the patch. The backup is saved with a .orig extension:

Terminal
patch -b hello.py hello.patch
output
patching file hello.py

After running this command, hello.py.orig contains the original unpatched file. You can restore it manually if needed.

Strip Path Components with -p

When a patch file is generated from a different directory structure than the one where you apply it, the file paths inside the patch may not match your local paths. Use -p N to strip N leading path components from the paths recorded in the patch.

For example, a patch generated with git diff will reference files like:

output
--- a/src/utils/hello.py
+++ b/src/utils/hello.py

To apply this patch from the project root, use -p1 to strip the a/ and b/ prefixes:

Terminal
patch -p1 < hello.patch

-p1 is the standard option for patches generated by git diff and by diff -u from the root of a project directory. Without it, patch would look for a file named a/src/utils/hello.py on disk, which does not exist.

Reverse a Patch

Use the -R option to undo a previously applied patch and restore the original file:

Terminal
patch -R hello.py hello.patch
output
patching file hello.py

The file is restored to its state before the patch was applied.

Apply a Multi-File Patch

A single patch file can contain changes to multiple files. In this case, you do not specify a target file on the command line — patch reads the file names from the diff headers inside the patch:

Terminal
patch -p1 < project.patch

patch processes each file name from the diff headers and applies the corresponding changes. Use -p1 when the patch was produced by git diff or diff -u from the project root.
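The full round trip can be sketched with a throwaway demo tree. This is a sketch, not a prescribed layout: the /tmp/demo path and the project-old/ and project-new/ directory names are placeholders.

```shell
# Sketch: build a multi-file patch from two directory trees and apply it.
# The /tmp/demo path and directory names are placeholders.
mkdir -p /tmp/demo/project-old/src /tmp/demo/project-new/src
printf 'version=1\n' > /tmp/demo/project-old/src/config.txt
printf 'version=2\n' > /tmp/demo/project-new/src/config.txt

cd /tmp/demo
# -r recurses, -N treats absent files as empty, -u emits unified diffs.
# diff exits 1 when differences are found, so ignore its exit status.
diff -ruN project-old/ project-new/ > project.patch || true

# Apply from inside the original tree; -p1 strips the project-old/ prefix.
cd project-old
patch -p1 < ../project.patch
cat src/config.txt   # prints version=2
```

After the patch is applied, the old tree matches the new one file for file.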

Quick Reference

Task Command
Apply a patch patch original.txt changes.patch
Apply from stdin patch -p1 < changes.patch
Specify patch file with -i patch -i changes.patch original.txt
Dry run (test without applying) patch --dry-run -p1 < changes.patch
Back up original before patching patch -b original.txt changes.patch
Strip one path prefix (git patches) patch -p1 < changes.patch
Reverse a patch patch -R original.txt changes.patch
Ignore whitespace differences patch -l original.txt changes.patch
Apply to a specific directory patch -d /path/to/dir -p1 < changes.patch

Troubleshooting

Hunk #1 FAILED at line N.
The patch does not match the current content of the file. This usually means the file has already been modified since the patch was created, or you are applying the patch against the wrong version. patch creates a .rej file containing the failed hunk so you can review and apply the change manually.
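A short wrapper can surface those reject files immediately after a failed run. This is a sketch that assumes a patch named changes.patch with -p1-style paths; --batch suppresses interactive prompts so it is safe in scripts.

```shell
# Sketch: apply a patch non-interactively and, if any hunk fails,
# list the .rej files patch left behind for manual review.
# changes.patch is a placeholder name.
if ! patch --batch -p1 < changes.patch; then
    echo "Some hunks failed. Review these reject files:" >&2
    find . -name '*.rej' -print >&2
fi
```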

Reversed (or previously applied) patch detected! Assume -R?
patch has detected that the changes in the patch are already present in the file — the patch may have been applied before. Answer y to reverse the patch or n to skip it. Use -N (--forward) to automatically skip already-applied patches without prompting.

patch: **** malformed patch at line N
The patch file is corrupted or not in a format patch recognizes. Make sure the patch was generated with diff -u (unified format). For other formats, pass the matching flag, such as -c for context diffs.

File paths in the patch do not match files on disk.
Adjust the -p N value. Run patch --dry-run -p0 < changes.patch first, then increment N (-p1, -p2) until patch finds the correct files.
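That trial-and-error loop can be automated. The following sketch defines a helper (find_strip_level is a made-up name, not a standard tool) that probes -p0 through -p3 with --dry-run, so nothing on disk is modified while probing.

```shell
# Sketch: probe which -p level lets a patch apply cleanly.
# find_strip_level is a hypothetical helper name.
find_strip_level() {
    patchfile=$1
    for n in 0 1 2 3; do
        # --dry-run changes nothing; --batch suppresses interactive prompts
        if patch --dry-run --batch -p"$n" < "$patchfile" >/dev/null 2>&1; then
            echo "$n"
            return 0
        fi
    done
    return 1
}

# Example usage (changes.patch is a placeholder):
# level=$(find_strip_level changes.patch) && patch -p"$level" < changes.patch
```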

FAQ

How do I create a patch file?
Use the diff command with the -u flag: diff -u original.txt updated.txt > changes.patch. The -u flag produces a unified diff, which is the format patch works with by default. See the diff command guide for details.

What does -p1 mean?
-p1 tells patch to strip one leading path component from file names in the patch. Patches generated by git diff prefix file paths with a/ and b/, so -p1 removes that prefix before patch looks for the file on disk.

How do I undo a patch I already applied?
Run patch -R originalfile patchfile. patch reads the diff in reverse and restores the original content.

What is a .rej file?
When patch cannot apply one or more sections of a patch (called hunks), it writes the failed hunks to a file with a .rej extension alongside the original file. You can open the .rej file and apply those changes manually.

What is the difference between patch and git apply?
Both apply diff files, but git apply is designed for use inside a Git repository and integrates with the Git index. patch is a standalone POSIX tool that works on any file regardless of version control.

Conclusion

The patch command is the standard tool for applying diff files in Linux. Use --dry-run to verify a patch before applying it, -b to keep a backup of the original, and -R to reverse changes when needed.

If you have any questions, feel free to leave a comment below.

mv Cheatsheet

Basic Syntax

Core command forms for move and rename operations.

Command Description
mv [OPTIONS] SOURCE DEST Move or rename one file/directory
mv [OPTIONS] SOURCE... DIRECTORY Move multiple sources into destination directory
mv file.txt newname.txt Rename file in same directory
mv dir1 dir2 Rename directory

Move Files

Move files between directories.

Command Description
mv file.txt /tmp/ Move one file to another directory
mv file1 file2 /backup/ Move multiple files
mv *.log /var/log/archive/ Move files matching pattern
mv /src/file.txt /dest/newname.txt Move and rename in one step

Move Directories

Move or rename complete directory trees.

Command Description
mv project/ /opt/ Move directory to another location
mv olddir newdir Rename directory
mv dir1 dir2 /dest/ Move multiple directories
mv /src/dir /dest/dir-new Move and rename directory

Overwrite Behavior

Control what happens when destination already exists.

Command Description
mv -i file.txt /dest/ Prompt before overwrite
mv -n file.txt /dest/ Never overwrite existing file
mv -f file.txt /dest/ Force overwrite without prompt
mv -u file.txt /dest/ Move only if source is newer

Backup and Safety

Protect destination files while moving.

Command Description
mv -b file.txt /dest/ Create backup of overwritten destination
mv --backup=numbered file.txt /dest/ Numbered backups (.~1~, .~2~)
mv -v file.txt /dest/ Verbose output
mv -iv file.txt /dest/ Interactive + verbose

Useful Patterns

Common real-world mv command combinations.

Command Description
mv -- *.txt archive/ Move files when names may start with -
mv "My File.txt" /dest/ Move file with spaces in name
find . -maxdepth 1 -name '*.tmp' -exec mv -t /tmp/archive {} + Move matched files safely with find
mv /path/file{,.bak} Quick rename via brace expansion

Troubleshooting

Quick checks for common move/rename errors.

Issue Check
No such file or directory Verify source path with ls -l source
Permission denied Check destination permissions and ownership
Wrong file overwritten Use -i or -n for safer moves
Wildcard misses hidden files * does not match dotfiles by default
Option-like filename fails Use -- before source names

Related Guides

Use these guides for detailed move and rename workflows.

Guide Description
How to Move Files in Linux with mv Command Full mv guide with examples
How to Rename Files in Linux File renaming patterns and tools
How to Rename Directories in Linux Directory rename methods
Cp Command in Linux: Copy Files and Directories Compare copy vs move workflows

ls Cheatsheet

Basic Listing

Use these commands for everyday directory listing.

Command Description
ls List files in current directory
ls /path/to/dir List files in specific directory
ls -1 One entry per line
ls -a Include hidden files
ls -A Include hidden files except . and ..

Long Format and Metadata

Show permissions, ownership, size, and timestamps.

Command Description
ls -l Long listing format
ls -lh Human-readable file sizes
ls -la Long format with hidden files
ls -n Numeric UID and GID
ls -li Show inode numbers

Sorting

Sort files by time, size, extension, or version.

Command Description
ls -lt Sort by modification time (newest first)
ls -ltr Sort by modification time (oldest first)
ls -lS Sort by file size (largest first)
ls -lX Sort by extension
ls -lv Natural sort for version-like names

Time Display

Control which timestamp is shown.

Command Description
ls -lt --time=atime Sort/show by access time
ls -lt --time=ctime Sort/show by status change time
ls -l --time-style=long-iso ISO-like date format
ls -l --full-time Full timestamp precision

Directory Views

List directories recursively or show directory entries only.

Command Description
ls -la --group-directories-first Long listing with directories before files
ls -d */ List only directories in current path
ls -ld /path/to/dir Show metadata for directory itself
ls -R Recursive listing
ls -laR Recursive long listing with hidden files

Output Formatting

Adjust visual style and indicators.

Command Description
ls -F Append indicator (/, *, @) by file type
ls -p Append / to directories
ls -m Comma-separated output
ls -x List entries across rows instead of down columns
ls --color=auto Enable colorized output when supported

Filtering with Globs

List files that match shell patterns.

Command Description
ls *.log List files ending in .log
ls file?.txt Match single-character wildcard
ls [ab]*.conf Match names starting with a or b
ls -d .[^.]* List hidden files (common shell pattern)

Common Patterns

Frequent command combinations.

Command Description
ls -lah Most common detailed listing
ls -lhS Largest files first with readable sizes
ls -lat Newest files first including hidden entries
ls -1 | wc -l Count entries in the current directory
ls -l | grep '^d' List only directories (long format)

Troubleshooting

Quick checks for typical listing issues.

Issue Check
Hidden files not visible Add -a or -A
File sizes are hard to read Use -h with -l
Wrong sort order Confirm flags (-t, -S, -X, -r)
No color output Try ls --color=auto and check alias settings
Path looks empty Verify permissions with ls -ld /path

Related Guides

Use these references for deeper file management workflows.

Guide Description
How to List Files in Linux Using the ls Command Full ls guide with practical examples
Du Command in Linux Check disk usage and file sizes
Linux Commands Cheatsheet General command quick reference

ZIP/UNZIP Cheatsheet

Basic Syntax

Common command forms for ZIP operations.

Command Description
zip [OPTIONS] archive.zip files Create or update ZIP archive
unzip [OPTIONS] archive.zip Extract ZIP archive
zip -r archive.zip directory/ Recursively archive a directory
unzip archive.zip -d /path/ Extract to a specific directory
unzip -t archive.zip Test archive integrity

Create ZIP Archives

Create archives from files and directories.

Command Description
zip archive.zip file1 file2 Archive multiple files
zip -r project.zip project/ Archive directory recursively
zip -j archive.zip path/to/file Archive files without directory paths (flat)
zip -9 -r backup.zip /etc/ Maximum compression
zip -0 -r store.zip media/ No compression (faster)
zip -r logs.zip /var/log -x "*.gz" Exclude matching files
zip -q -r archive.zip dir/ Create archive silently (no output)

Update Existing Archives

Add, refresh, or remove archive entries.

Command Description
zip archive.zip newfile.txt Add file to existing archive
zip -r archive.zip newdir/ Add directory to existing archive
zip -u archive.zip file.txt Update only changed files
zip -d archive.zip "*.tmp" Delete matching entries
zip -FS archive.zip Sync archive with filesystem state

List and Inspect

Check archive contents before extraction.

Command Description
unzip -l archive.zip List archived files
unzip -Z -v archive.zip Detailed listing (sizes, ratio, methods)
zipinfo archive.zip Display archive metadata
zipinfo -1 archive.zip List filenames only
unzip -t archive.zip Test archive integrity

Extract Archives

Extract all files or selected paths.

Command Description
unzip archive.zip Extract into current directory
unzip archive.zip -d /tmp/extract Extract to target directory
unzip archive.zip file.txt Extract one file
unzip archive.zip "dir/*" Extract matching path pattern
unzip -n archive.zip Never overwrite existing files
unzip -o archive.zip Overwrite existing files without prompt
unzip -q archive.zip Extract silently (no output)

Password-Protected Archives

Create and extract encrypted ZIP files.

Command Description
zip -e secure.zip file.txt Create encrypted archive (interactive password)
zip -er secure-dir.zip secrets/ Encrypt directory archive
unzip secure.zip Extract encrypted ZIP (prompts for password)
zipcloak archive.zip Add encryption to an existing archive
zipcloak -d archive.zip Remove encryption from archive

Split Archives

Split large ZIP files into smaller chunks.

Command Description
zip -r -s 100m backup.zip bigdir/ Create split archive with 100 MB parts
zip -s 0 split.zip --out merged.zip Recombine split ZIP into one file
unzip split.zip Extract split archive (all parts required)
zip -s 2g -r media.zip media/ Create split archive with 2 GB parts

Troubleshooting

Common ZIP/UNZIP problems and checks.

Issue Check
unzip: cannot find or open Verify path and filename, then run ls -lh archive.zip
CRC error during extraction Run unzip -t archive.zip to test integrity
Files overwritten without warning Use unzip -n archive.zip to skip existing files
Wrong file permissions after extract Check with ls -l, then adjust using chmod/chown
Password prompt fails Re-enter password carefully; verify archive is encrypted with zipinfo

Related Guides

Use these guides for full walkthroughs.

Guide Description
How to Zip Files and Directories in Linux Detailed ZIP creation examples
How to Unzip Files in Linux Detailed extraction methods
Tar Cheatsheet Tar and compressed archive reference

Dnf Command in Linux: Package Management Guide

dnf (Dandified YUM) is the default package manager on Fedora, RHEL, AlmaLinux, Rocky Linux, and other RPM-based distributions. It replaces the older yum package manager and provides faster dependency resolution, better performance, and a cleaner command interface.

Most dnf commands require root privileges, so you need to run them with sudo.

This guide explains the most common dnf commands for day-to-day package management.

Checking for Updates (dnf check-update)

Before installing or upgrading packages, check which packages have updates available:

Terminal
dnf check-update

The command lists all packages with available updates and returns exit code 100 if updates are available, or 0 if the system is up to date. This makes it useful in scripts.
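For example, a maintenance script might branch on that exit status. This is a sketch; the messages are illustrative, and exit codes other than 100 and 0 indicate an error.

```shell
#!/bin/sh
# Sketch: branch on dnf check-update's exit status in a script.
# 100 = updates available, 0 = up to date, anything else = error.
status=0
dnf check-update > /dev/null 2>&1 || status=$?

case $status in
    100) echo "updates available - run: sudo dnf upgrade" ;;
    0)   echo "system is up to date" ;;
    *)   echo "dnf check-update failed (exit $status)" >&2 ;;
esac
```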

Upgrading Packages (dnf upgrade)

To upgrade all installed packages to their latest available versions, run:

Terminal
sudo dnf upgrade

To refresh the repository metadata and upgrade in one step:

Terminal
sudo dnf upgrade --refresh

To upgrade a single package:

Terminal
sudo dnf upgrade package_name

To apply only security updates:

Terminal
sudo dnf upgrade --security

Installing Packages (dnf install)

To install a package, run:

Terminal
sudo dnf install package_name

To install multiple packages at once, specify them as a space-separated list:

Terminal
sudo dnf install package1 package2

To install a local RPM file, provide the full path:

Terminal
sudo dnf install /full/path/package.rpm

dnf automatically resolves and installs all required dependencies.

To reinstall a package (for example, to restore corrupted files):

Terminal
sudo dnf reinstall package_name

Removing Packages (dnf remove)

To remove an installed package:

Terminal
sudo dnf remove package_name

You can specify multiple packages separated by spaces:

Terminal
sudo dnf remove package1 package2

The remove command also removes packages that depend on the one being removed.

Removing Unused Dependencies (dnf autoremove)

When a package is removed, its dependencies may no longer be needed by any other package. To remove these orphaned dependencies:

Terminal
sudo dnf autoremove

Searching for Packages (dnf search)

To search for a package by name or description:

Terminal
dnf search package_name

The command searches package names and summaries. To extend the search to package descriptions and URLs as well, add the --all flag:

Terminal
dnf search --all package_name

Package Information (dnf info)

To display detailed information about a package, including its version, repository, size, and description:

Terminal
dnf info package_name

This works for both installed and available packages.

Finding Which Package Provides a File (dnf provides)

To find which package provides a specific file or command:

Terminal
dnf provides /usr/bin/curl

This is useful when a command is missing and you need to know which package to install.

Listing Packages (dnf list)

To list all installed packages:

Terminal
dnf list installed

To list all available packages from enabled repositories:

Terminal
dnf list available

To check whether a specific package is installed:

Terminal
dnf list installed | grep package_name

To list packages that have updates available:

Terminal
dnf list updates

Package Groups (dnf group)

DNF organizes related packages into groups. To list all available groups:

Terminal
dnf group list

To install a group (for example, “Development Tools”):

Terminal
sudo dnf group install "Development Tools"

To remove a group:

Terminal
sudo dnf group remove "Development Tools"

Module Streams (dnf module)

DNF modules allow you to install specific versions (streams) of software. For example, you can choose between Node.js 18 or 20.

To list available modules:

Terminal
dnf module list

To enable a specific module stream:

Terminal
sudo dnf module enable nodejs:20

To install a module with its default profile:

Terminal
sudo dnf module install nodejs:20

To reset a module to its default state:

Terminal
sudo dnf module reset nodejs

Managing Repositories (dnf config-manager)

The config-manager command is provided by the dnf-plugins-core package. Install it first if the subcommand is missing:

Terminal
sudo dnf install dnf-plugins-core

To list all enabled repositories:

Terminal
dnf repolist

To list all repositories, including disabled ones:

Terminal
dnf repolist all

To enable a repository:

Terminal
sudo dnf config-manager --set-enabled repo_id

To disable a repository:

Terminal
sudo dnf config-manager --set-disabled repo_id

To show detailed information about a repository:

Terminal
dnf repoinfo repo_id

Cleaning the Cache (dnf clean)

DNF caches repository metadata and downloaded packages locally. To clear all cached data:

Terminal
sudo dnf clean all

To rebuild the metadata cache:

Terminal
sudo dnf makecache

Cleaning the cache is useful when you encounter stale metadata errors or want to free disk space.

Transaction History (dnf history)

DNF records every transaction (install, upgrade, remove) in a history log. To view the transaction history:

Terminal
dnf history

To see the details of a specific transaction:

Terminal
dnf history info 25

To undo a transaction (revert the changes it made):

Terminal
sudo dnf history undo 25

This is useful when an upgrade causes problems and you need to roll back.

Quick Reference

Task Command
Check for available updates dnf check-update
Upgrade all packages sudo dnf upgrade
Install a package sudo dnf install package_name
Install a local RPM file sudo dnf install /path/file.rpm
Remove a package sudo dnf remove package_name
Remove unused dependencies sudo dnf autoremove
Search for a package dnf search keyword
Show package details dnf info package_name
Find which package provides a file dnf provides /path/to/file
List installed packages dnf list installed
List enabled repositories dnf repolist
Clear cached data sudo dnf clean all
View transaction history dnf history
Undo a transaction sudo dnf history undo ID

For a printable quick reference, see the DNF cheatsheet.

Troubleshooting

“Error: Failed to download metadata for repo”
The repository metadata is stale or the mirror is unreachable. Run sudo dnf clean all followed by sudo dnf makecache to refresh the cache. If the problem persists, check your network connection and the repository URL in /etc/yum.repos.d/.

“No match for argument: package_name”
The package does not exist in any enabled repository. Verify the package name with dnf search and check that the correct repository is enabled with dnf repolist.

Dependency conflict during upgrade
If a dependency conflict prevents an upgrade, review the error message carefully. You can retry with sudo dnf upgrade --allowerasing, but only after confirming which packages will be removed.

GPG key verification failed
The repository GPG key is not imported. DNF prompts you to accept the key during the first install from a new repository. If you need to import it manually, use sudo rpm --import KEY_URL.

Transaction undo fails
Not all transactions can be undone. If packages have been updated by later transactions, the undo may conflict. Check dnf history info ID for details and consider a manual rollback.

FAQ

What is the difference between dnf and yum?
dnf is the successor to yum. It uses the same repository format and configuration files, but provides faster dependency resolution, better memory usage, and a more consistent command interface. On modern Fedora and RHEL systems, yum is a symlink to dnf.

Is dnf update the same as dnf upgrade?
Yes. dnf update is an alias for dnf upgrade. Both commands upgrade all installed packages to the latest available versions.

How do I install a specific version of a package?
Specify the version with a dash: sudo dnf install package_name-1.2.3. To list all available versions, use dnf --showduplicates list package_name.

How do I prevent a package from being upgraded?
Use the versionlock plugin: sudo dnf install dnf-plugin-versionlock, then sudo dnf versionlock add package_name. To remove the lock later, use sudo dnf versionlock delete package_name.

What is the equivalent of apt autoremove in dnf?
The equivalent is sudo dnf autoremove. It removes packages that were installed as dependencies but are no longer required by any installed package.

Conclusion

dnf is the standard package manager for Fedora, RHEL, and RPM-based distributions. It handles installing, upgrading, removing, and searching packages, as well as managing repositories and module streams. To learn more, run man dnf in your terminal.

If you have any questions, feel free to leave a comment below.

IP Command Cheatsheet

Basic Syntax

Use this structure for most ip operations.

Command Description
ip [OPTIONS] OBJECT COMMAND General ip command syntax
ip -br a Show addresses in brief format
ip -c a Show colorized output
ip -4 a Show only IPv4 addresses
ip -6 a Show only IPv6 addresses

Show Interfaces and Addresses

Inspect links and assigned IP addresses.

Command Description
ip link show Show all network interfaces
ip link show dev eth0 Show one interface
ip addr show Show all IP addresses
ip addr show dev eth0 Show addresses on one interface
ip -br addr Brief interface and address overview

Add and Remove IP Addresses

Assign or remove IP addresses on interfaces.

Command Description
sudo ip addr add 192.168.1.50/24 dev eth0 Add IPv4 address
sudo ip addr del 192.168.1.50/24 dev eth0 Remove IPv4 address
sudo ip addr add 2001:db8::50/64 dev eth0 Add IPv6 address
sudo ip addr flush dev eth0 Remove all addresses from interface
ip addr show dev eth0 Verify interface addresses

Bring Interfaces Up or Down

Enable, disable, or rename network links.

Command Description
sudo ip link set dev eth0 up Bring interface up
sudo ip link set dev eth0 down Bring interface down
sudo ip link set dev eth0 mtu 9000 Change MTU
sudo ip link set dev eth0 name lan0 Rename interface
ip -br link Show link state quickly

Routing Table

Inspect and manage network routes.

Command Description
ip route show Show IPv4 routing table
ip -6 route show Show IPv6 routing table
ip route get 8.8.8.8 Show route used for destination
sudo ip route add default via 192.168.1.1 Add default gateway
sudo ip route del default Remove default gateway
sudo ip route add 10.10.0.0/16 via 192.168.1.254 dev eth0 Add static route

Neighbor (ARP/NDP) Table

View and manage neighbor cache entries.

Command Description
ip neigh show Show neighbor table
ip neigh show dev eth0 Show neighbors for one interface
sudo ip neigh flush dev eth0 Clear neighbor entries on interface
sudo ip neigh del 192.168.1.10 dev eth0 Remove a neighbor entry
ip -s neigh Show neighbor statistics

Policy Routing

Work with multiple routing tables and rules.

Command Description
ip rule show List policy routing rules
sudo ip rule add from 192.168.10.0/24 table 100 Route source subnet using table 100
sudo ip route add default via 10.0.0.1 table 100 Add default route to custom table
sudo ip rule del from 192.168.10.0/24 table 100 Remove policy rule
ip route show table 100 Show routes in table 100

Network Namespaces

Inspect or run commands inside network namespaces.

Command Description
ip netns list List network namespaces
sudo ip netns add ns1 Create namespace
sudo ip netns exec ns1 ip a Run ip a inside namespace
sudo ip netns del ns1 Delete namespace
ip -n ns1 route Show routes in namespace

Troubleshooting

Fast checks for common network issues.

Issue Check
No IP assigned to interface ip addr show dev eth0
Interface is down ip link show dev eth0 then sudo ip link set dev eth0 up
Wrong default route ip route show and verify default via ...
Cannot reach destination ip route get DESTINATION_IP
Stale ARP/neighbor entry sudo ip neigh flush dev eth0

Related Guides

Use these articles for detailed networking workflows.

Guide Description
Linux ip Command with Examples Complete ip command guide
How to Find Your IP Address in Linux Public and private IP lookup methods
Traceroute Command in Linux Path and hop diagnostics
UFW Cheatsheet Firewall rules quick reference

Understanding the /etc/fstab File in Linux

The /etc/fstab file (filesystem table) is a system configuration file that defines how filesystems, partitions, and storage devices are mounted at boot time. The system reads this file during startup and mounts each entry automatically.

Understanding /etc/fstab is essential when you need to add a new disk, create a swap file, mount a network share, or change mount options for an existing filesystem.

This guide explains the /etc/fstab file format, what each field means, common mount options, and how to add new entries safely.

/etc/fstab Format

The /etc/fstab file is a plain text file with one entry per line. Each line defines a filesystem to mount. Lines beginning with # are comments and are ignored by the system.

To view the contents of the file safely, use less:

Terminal
less /etc/fstab

A typical /etc/fstab file looks like this:

output
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890 / ext4 errors=remount-ro 0 1
UUID=b2c3d4e5-f6a7-8901-bcde-f12345678901 /home ext4 defaults 0 2
UUID=c3d4e5f6-a7b8-9012-cdef-123456789012 none swap sw 0 0
tmpfs /tmp tmpfs defaults,noatime 0 0

Each entry contains six space-separated fields:

txt
UUID=a1b2c3d4... /home ext4 defaults 0 2
[---------------] [---] [--] [------] - -
| | | | | |
| | | | | +-> 6. Pass (fsck order)
| | | | +----> 5. Dump (backup flag)
| | | +-----------> 4. Options
| | +------------------> 3. Type
| +-------------------------> 2. Mount point
+---------------------------------------------> 1. File system

Field Descriptions

  1. File system — The device or partition to mount. This can be specified as:

    • A UUID: UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890
    • A disk label: LABEL=home
    • A device path: /dev/sda1
    • A network path: 192.168.1.10:/export/share (for NFS)

    Using UUIDs is recommended because device paths like /dev/sda1 can change if disks are added or removed. To find the UUID of a partition, run blkid:

    Terminal
    sudo blkid
  2. Mount point — The directory where the filesystem is attached. The directory must already exist. Common mount points include /, /home, /boot, and /mnt/data. For swap entries, this field is set to none.

  3. Type — The filesystem type. Common values include:

    • ext4 — The default Linux filesystem
    • xfs — High-performance filesystem used on RHEL-based distributions
    • btrfs — Copy-on-write filesystem with snapshot support
    • swap — Swap partition or file
    • tmpfs — Temporary filesystem stored in memory
    • nfs — Network File System
    • vfat — FAT32 filesystem (USB drives, EFI partitions)
    • auto — Let the kernel detect the filesystem type automatically
  4. Options — A comma-separated list of mount options. See the Common Mount Options section below for details.

  5. Dump — Used by the dump backup utility. A value of 0 means the filesystem is not included in backups. A value of 1 means it is. Most modern systems do not use dump, so this is typically set to 0.

  6. Pass — The order in which fsck checks filesystems at boot. The root filesystem should be 1. Other filesystems should be 2 so they are checked after root. A value of 0 means the filesystem is not checked.
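Because the format is strictly whitespace-separated, standard text tools can parse it. For example, this awk one-liner (a sketch, not an official interface) prints the first three fields of every active entry:

```shell
# Print device, mount point, and filesystem type for each fstab entry,
# skipping comment and blank lines.
awk '$1 !~ /^#/ && NF >= 3 { printf "%-45s %-12s %s\n", $1, $2, $3 }' /etc/fstab
```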

Common Mount Options

The fourth field in each fstab entry is a comma-separated list of mount options. The following options are the most commonly used:

  • defaults — Uses the standard default options (rw, suid, dev, exec, auto, nouser, async). Some effective behaviors can still vary by filesystem and kernel settings.
  • ro — Mount the filesystem as read-only.
  • rw — Mount the filesystem as read-write.
  • noatime — Do not update file access times. This can improve performance, especially on SSDs.
  • nodiratime — Do not update directory access times.
  • noexec — Do not allow execution of binaries on the filesystem.
  • nosuid — Do not allow set-user-ID or set-group-ID bits to take effect.
  • nodev — Do not interpret character or block special devices on the filesystem.
  • nofail — Do not report errors if the device does not exist at boot. Useful for removable drives and network shares.
  • auto — Mount the filesystem automatically at boot (default behavior).
  • noauto — Do not mount automatically at boot. The filesystem can still be mounted manually with mount.
  • user — Allow a regular user to mount the filesystem.
  • errors=remount-ro — Remount the filesystem as read-only if an error occurs. Common on root filesystem entries.
  • _netdev — The filesystem requires network access. The system waits for the network to be available before mounting. Use this for NFS, CIFS, and iSCSI mounts.
  • x-systemd.automount — Mount the filesystem on first access instead of at boot. Managed by systemd.

You can combine multiple options separated by commas:

txt
UUID=a1b2c3d4... /data ext4 defaults,noatime,nofail 0 2

Adding an Entry to /etc/fstab

Before editing /etc/fstab, always create a backup:

Terminal
sudo cp /etc/fstab /etc/fstab.bak

Step 1: Find the UUID

Identify the UUID of the partition you want to mount:

Terminal
sudo blkid /dev/sdb1
output
/dev/sdb1: UUID="d4e5f6a7-b8c9-0123-def0-123456789abc" TYPE="ext4"

Step 2: Create the Mount Point

Create the directory where the filesystem will be mounted:

Terminal
sudo mkdir -p /mnt/data

Step 3: Add the Entry

Open /etc/fstab in a text editor:

Terminal
sudo nano /etc/fstab

Add a new line at the end of the file:

/etc/fstab
UUID=d4e5f6a7-b8c9-0123-def0-123456789abc /mnt/data ext4 defaults,nofail 0 2

Step 4: Test the Entry

Instead of rebooting, use mount -a to mount all entries in /etc/fstab that are not already mounted:

Terminal
sudo mount -a

If the command produces no output, the entry is correct. If there is an error, fix the fstab entry before rebooting — an incorrect fstab can prevent the system from booting normally.

Verify the filesystem is mounted:

Terminal
df -h /mnt/data

Common fstab Examples

Swap File

To add a swap file to fstab:

/etc/fstab
/swapfile none swap sw 0 0

NFS Network Share

To mount an NFS share that requires network access:

/etc/fstab
192.168.1.10:/export/share /mnt/nfs nfs defaults,_netdev,nofail 0 0

CIFS/SMB Windows Share

To mount a Windows/Samba share with a credentials file:

/etc/fstab
//192.168.1.20/share /mnt/smb cifs credentials=/etc/samba/creds,_netdev,nofail 0 0

Set strict permissions on the credentials file so other users cannot read it:

Terminal
sudo chmod 600 /etc/samba/creds
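
The credentials file itself is a few key=value lines read by mount.cifs (username=, password=, and optionally domain=). A sketch with placeholder values written to a scratch path — substitute your real share account and /etc/samba/creds on an actual system:

```shell
# Create a CIFS credentials file (placeholder values), then lock it down.
cat > /tmp/creds <<'EOF'
username=shareuser
password=sharepass
EOF
chmod 600 /tmp/creds
stat -c '%a' /tmp/creds
```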

USB or External Drive

To mount a removable drive that may not always be attached:

/etc/fstab
UUID=e5f6a7b8-c9d0-1234-ef01-23456789abcd /mnt/usb ext4 defaults,nofail,noauto 0 0

The nofail option prevents boot errors when the drive is not connected. The noauto option prevents automatic mounting — mount it manually with sudo mount /mnt/usb when needed.

tmpfs for /tmp

To mount /tmp as a temporary filesystem in memory:

/etc/fstab
tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0

Quick Reference

Task Command
View fstab contents less /etc/fstab
Back up fstab sudo cp /etc/fstab /etc/fstab.bak
Find partition UUIDs sudo blkid
Mount all fstab entries sudo mount -a
Check mounted filesystems mount or df -h
Check filesystem type lsblk -f
Restore fstab from backup sudo cp /etc/fstab.bak /etc/fstab

Troubleshooting

System does not boot after editing fstab
An incorrect fstab entry can cause a boot failure. Boot into recovery mode or a live USB, mount the root filesystem, and fix or restore /etc/fstab from the backup. Always test with sudo mount -a before rebooting.

mount -a reports “wrong fs type” or “bad superblock”
The filesystem type in the fstab entry does not match the actual filesystem on the device. Use sudo blkid or lsblk -f to check the correct type.

Network share fails to mount at boot
Add the _netdev option to tell the system to wait for network availability before mounting. For systemd-based systems, x-systemd.automount can also help with timing issues.

“mount point does not exist”
The directory specified in the second field does not exist. Create it with mkdir -p /path/to/mountpoint before running mount -a.

UUID changed after reformatting a partition
Reformatting a partition assigns a new UUID. Run sudo blkid to find the new UUID and update the fstab entry accordingly.
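
Updating the entry is a one-line sed substitution. The sketch below edits a scratch copy built from this guide's example entry (the replacement UUID is a placeholder — take the real one from blkid); run the same substitution on /etc/fstab only after backing it up:

```shell
# Swap the stale UUID for the new one in a scratch copy of an fstab entry.
printf 'UUID=d4e5f6a7-b8c9-0123-def0-123456789abc /mnt/data ext4 defaults,nofail 0 2\n' > /tmp/fstab.copy
sed -i 's/d4e5f6a7-b8c9-0123-def0-123456789abc/11112222-3333-4444-5555-666677778888/' /tmp/fstab.copy
cat /tmp/fstab.copy
```

After editing the real file, confirm with sudo mount -a before rebooting.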

FAQ

What happens if I make an error in /etc/fstab?
If the entry references a non-existent device without the nofail option, the system may drop to an emergency shell during boot. Always use nofail for non-essential filesystems and test with sudo mount -a before rebooting.

Should I use UUID or device path (/dev/sda1)?
Use UUID. Device paths can change if you add or remove disks, or if the boot order changes. UUIDs are unique to each filesystem and do not change unless you reformat the partition.

What does the nofail option do?
It tells the system to continue booting even if the device is not present or cannot be mounted. Without nofail, a missing device causes the system to drop to an emergency shell.

How do I remove an fstab entry?
Open /etc/fstab with sudo nano /etc/fstab, delete or comment out the line (add # at the beginning), save the file, and then unmount the filesystem with sudo umount /mount/point.

What is the difference between noauto and nofail?
noauto prevents the filesystem from being mounted automatically at boot — you must mount it manually. nofail still mounts automatically but does not cause a boot error if the device is missing.

Conclusion

The /etc/fstab file controls how filesystems are mounted at boot. Each entry specifies the device, mount point, filesystem type, options, and check order. Always back up fstab before editing, use UUIDs instead of device paths, and test changes with sudo mount -a before rebooting.

If you have any questions, feel free to leave a comment below.

DNF Cheatsheet

Basic Commands

Start with package lists and metadata.

Command Description
dnf --version Show DNF version
dnf check-update List available updates
dnf makecache Refresh repository metadata cache
dnf repolist List enabled repositories
dnf repolist all List all repositories

Search and Info

Find packages and inspect details.

Command Description
dnf search nginx Search packages by keyword
dnf info nginx Show package details
dnf provides /usr/bin/python3 Find package that provides a file
dnf list installed List installed packages
dnf list available List available packages from repos

Install and Remove

Install, remove, and reinstall packages.

Command Description
sudo dnf install nginx Install one package
sudo dnf install nginx php-fpm Install multiple packages
sudo dnf remove nginx Remove package
sudo dnf autoremove Remove unneeded dependencies
sudo dnf reinstall nginx Reinstall package

Update and Upgrade

Keep the system and packages up to date.

Command Description
sudo dnf update Update installed packages
sudo dnf upgrade Upgrade packages (update is an alias of upgrade)
sudo dnf upgrade --refresh Refresh metadata and upgrade
sudo dnf update --security Apply security updates only
sudo dnf offline-upgrade download Prepare offline upgrade (where supported)

Groups and Modules

Work with package groups and modular streams.

Command Description
dnf group list List package groups
sudo dnf group install "Development Tools" Install package group
sudo dnf group remove "Development Tools" Remove package group
dnf module list List module streams
sudo dnf module enable nodejs:20 Enable a module stream
sudo dnf module reset nodejs Reset module stream

Repository Management

Enable, disable, and inspect repositories.

Command Description
sudo dnf config-manager --set-enabled repo_id Enable repository
sudo dnf config-manager --set-disabled repo_id Disable repository
dnf repoinfo Show repo details
dnf repoinfo repo_id Show one repository details
sudo dnf clean all Clear all cache data

Query and History

Review installed files and transaction history.

Command Description
rpm -ql nginx List files installed by package
rpm -qf /usr/sbin/nginx Find package owning a file
dnf history Show transaction history
dnf history info 25 Show details of transaction ID 25
sudo dnf history undo 25 Undo transaction ID 25

Troubleshooting

Common checks when package operations fail.

Issue Check
Metadata errors or stale cache Run sudo dnf clean all then sudo dnf makecache
Package not found Verify enabled repos with dnf repolist and use dnf search
Dependency conflicts Retry with --allowerasing only after reviewing affected packages
GPG key error Import/verify repository GPG key and retry
Slow mirror response Refresh metadata and test another mirror/repo configuration

Related Guides

Use these references for broader package management workflows.

Guide Description
How to Use apt Command Package management on Ubuntu, Debian, and derivatives
Linux Commands Cheatsheet General Linux command quick reference

MySQL/MariaDB Cheatsheet

Connect and Exit

Open a SQL session and disconnect safely.

Command Description
mysql -u root -p Connect as root (prompt for password)
mysql -u user -p -h 127.0.0.1 Connect to specific host
mysql -u user -p -P 3306 Connect on custom port
sudo mysql Log in using Unix socket auth (common on Debian/Ubuntu)
exit Leave MySQL/MariaDB shell

Basic SQL Checks

Run quick checks after connecting.

SQL Description
SELECT VERSION(); Show server version
SHOW DATABASES; List all databases
USE db_name; Switch active database
SHOW TABLES; List tables in current database
SHOW GRANTS FOR 'user'@'host'; Show user privileges

Database Management

Create, inspect, and remove databases.

SQL Description
CREATE DATABASE db_name; Create a database
CREATE DATABASE db_name CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci; Create DB with charset/collation
SHOW DATABASES LIKE 'db_name'; Check if DB exists
DROP DATABASE db_name; Delete a database
SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA; List schemas via information schema

User Management

Create users and manage authentication.

SQL Description
CREATE USER 'app'@'localhost' IDENTIFIED BY 'strong_password'; Create local user
CREATE USER 'app'@'%' IDENTIFIED BY 'strong_password'; Create remote-capable user
ALTER USER 'app'@'localhost' IDENTIFIED BY 'new_password'; Change password
DROP USER 'app'@'localhost'; Delete user
SELECT user, host FROM mysql.user; List users and hosts

Privileges

Grant, review, and revoke access.

SQL Description
GRANT ALL PRIVILEGES ON db_name.* TO 'app'@'localhost'; Grant full DB access
GRANT SELECT,INSERT,UPDATE,DELETE ON db_name.* TO 'app'@'localhost'; Grant specific privileges
REVOKE ALL PRIVILEGES ON db_name.* FROM 'app'@'localhost'; Remove DB privileges
SHOW GRANTS FOR 'app'@'localhost'; Review granted privileges
FLUSH PRIVILEGES; Reload grant tables

Backup and Restore

Use mysqldump for exports and imports.

Command Description
mysqldump -u root -p db_name > db_name.sql Backup one database
mysqldump -u root -p --databases db1 db2 > multi.sql Backup multiple databases
mysqldump -u root -p --all-databases > all.sql Backup all databases
mysql -u root -p db_name < db_name.sql Restore into existing database
gunzip < db_name.sql.gz | mysql -u root -p db_name Restore compressed SQL dump

Import and Export Data

Common import/export patterns.

Command Description
mysql -u root -p -e "SHOW DATABASES;" Run one-off SQL from shell
mysql -u root -p db_name -e "SOURCE /path/file.sql" Import SQL file via client
mysql -u root -p -Nse "SELECT NOW();" Non-interactive query output
mysqldump -u root -p db_name table_name > table.sql Export one table
mysql -u root -p db_name < table.sql Import one table dump

Service and Health Checks

Verify daemon state and open ports.

Command Description
systemctl status mysql Check MySQL service status
systemctl status mariadb Check MariaDB service status
sudo systemctl restart mysql Restart MySQL service
sudo systemctl restart mariadb Restart MariaDB service
ss -tulpn | grep 3306 Confirm server is listening

Common Troubleshooting

Quick checks for frequent connection and auth problems.

Issue Check
Access denied for user Verify user/host pair and password, then check SHOW GRANTS
Can't connect to local MySQL server through socket Check service status and socket path in config
Can't connect to MySQL server on 'host' Confirm host/port reachability and firewall rules
Unknown database Verify database name with SHOW DATABASES;
Restore fails on collation/charset Ensure server supports source collation and set utf8mb4 where needed

Related Guides

Use these articles for detailed MySQL/MariaDB workflows.

Guide Description
How to Manage MySQL Databases and Users from the Command Line End-to-end admin workflow
How to Create MySQL Users Accounts and Grant Privileges User and privilege setup
How to Back Up and Restore MySQL Databases with Mysqldump Backup and restore patterns
How to Show a List of All Databases in MySQL Database discovery commands
List (Show) Tables in a MySQL Database Table listing commands
How to Check the MySQL Version Version checks and verification

UFW Cheatsheet

Basic Commands

Start with status and firewall state.

Command Description
ufw status Show firewall status and rules
ufw status verbose Show detailed status and defaults
sudo ufw enable Enable UFW
sudo ufw disable Disable UFW
sudo ufw reload Reload rules
sudo ufw reset Reset UFW to defaults

Default Policies

Set default inbound and outbound behavior.

Command Description
sudo ufw default deny incoming Deny all incoming by default
sudo ufw default allow outgoing Allow all outgoing by default
sudo ufw default deny outgoing Deny all outgoing by default
sudo ufw default allow incoming Allow all incoming (not recommended on servers)

Allow and Deny Rules

Allow or block traffic by port and protocol.

Command Description
sudo ufw allow 22 Allow port 22 (TCP and UDP)
sudo ufw allow 80/tcp Allow HTTP over TCP
sudo ufw allow 443/tcp Allow HTTPS over TCP
sudo ufw deny 25 Deny SMTP port 25
sudo ufw reject 23 Reject Telnet connections
sudo ufw limit 22/tcp Rate-limit SSH connections

Rule Management

List, delete, and clean specific rules.

Command Description
sudo ufw status numbered List rules with numbers
sudo ufw delete allow 80/tcp Delete matching rule
sudo ufw delete 3 Delete rule by number
sudo ufw delete deny 25 Delete a deny rule

IP-Based Rules

Allow or deny traffic from specific hosts and networks.

Command Description
sudo ufw allow from 203.0.113.10 Allow all traffic from one IP
sudo ufw deny from 203.0.113.10 Block all traffic from one IP
sudo ufw allow from 203.0.113.10 to any port 22 Allow SSH from one IP
sudo ufw allow from 10.0.0.0/24 to any port 3306 Allow MySQL from a subnet
sudo ufw deny from 198.51.100.0/24 to any port 22 proto tcp Deny TCP SSH from subnet

Application Profiles

Use service profiles from /etc/ufw/applications.d/.

Command Description
sudo ufw app list List available application profiles
sudo ufw app info "Nginx Full" Show ports/protocols for profile
sudo ufw allow "OpenSSH" Allow profile rules
sudo ufw deny "Nginx HTTP" Deny profile rules
sudo ufw delete allow "OpenSSH" Remove allowed profile

Logging

Control and inspect UFW logging.

Command Description
sudo ufw logging on Enable logging
sudo ufw logging off Disable logging
sudo ufw logging low Set low log level
sudo ufw logging medium Set medium log level
sudo ufw logging high Set high log level

Common Server Setup

Baseline rules for a web server.

Command Description
sudo ufw default deny incoming Deny incoming by default
sudo ufw default allow outgoing Allow outgoing by default
sudo ufw allow OpenSSH Keep SSH access
sudo ufw allow 80/tcp Allow HTTP
sudo ufw allow 443/tcp Allow HTTPS
sudo ufw enable Activate firewall
sudo ufw status verbose Verify active rules

Troubleshooting

Quick checks for common UFW issues.

Issue Check
SSH access lost after enable Ensure OpenSSH is allowed before ufw enable
Rule did not apply Run sudo ufw reload and re-check with ufw status numbered
Service still unreachable Confirm service is listening (ss -tulpn) and port/protocol match
Rules conflict Check order with ufw status numbered and delete/re-add as needed
UFW not active at boot Verify service state with systemctl status ufw

Related Guides

Use these guides for full UFW workflows.

Guide Description
How to Set Up a Firewall with UFW on Ubuntu 20.04 Full UFW setup on Ubuntu 20.04
How to Set Up a Firewall with UFW on Ubuntu 18.04 UFW setup on Ubuntu 18.04
How to Set Up a Firewall with UFW on Debian 10 UFW setup on Debian 10
How to List and Delete UFW Firewall Rules Rule management and cleanup

chmod Cheatsheet

Basic Syntax

Use these core command forms for chmod.

Command Description
chmod MODE FILE General chmod syntax
chmod 644 file.txt Set numeric permissions
chmod u+x script.sh Add execute for owner
chmod g-w file.txt Remove write for group
chmod o=r file.txt Set others to read-only

Numeric Modes

Common numeric permission combinations.

Mode Meaning
600 Owner read/write
644 Owner read/write, group+others read
640 Owner read/write, group read
700 Owner full access only
755 Owner full access, group+others read/execute
775 Owner+group full access, others read/execute
444 Read-only for everyone

Symbolic Modes

Change specific permissions without replacing all bits.

Command Description
chmod u+x file Add execute for owner
chmod g-w file Remove write for group
chmod o-rwx file Remove all permissions for others
chmod ug+rw file Add read/write for owner and group
chmod a+r file Add read for all users
chmod a-x file Remove execute for all users

Files and Directories

Typical permission patterns for files and directories.

Command Description
chmod 644 file.txt Standard file permissions
chmod 755 dir/ Standard executable directory permissions
chmod u=rw,go=r file.txt Symbolic equivalent of 644
chmod u=rwx,go=rx dir/ Symbolic equivalent of 755
chmod +x script.sh Make script executable

Recursive Changes

Apply permission updates to directory trees.

Command Description
chmod -R 755 project/ Recursively set mode for all entries
chmod -R u+rwX project/ Add read/write and smart execute recursively
find project -type f -exec chmod 644 {} + Set files to 644
find project -type d -exec chmod 755 {} + Set directories to 755
chmod -R g-w shared/ Remove group write recursively
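
The find-based file/directory split from the table above can be verified on a scratch tree — directories end up 755, files 644 (paths below are throwaway examples):

```shell
# Apply the file/dir split to a scratch tree and confirm the resulting modes.
mkdir -p /tmp/proj/sub && touch /tmp/proj/sub/notes.txt
find /tmp/proj -type d -exec chmod 755 {} +
find /tmp/proj -type f -exec chmod 644 {} +
stat -c '%a %n' /tmp/proj/sub /tmp/proj/sub/notes.txt
```

This is safer than chmod -R 755, which would also mark every regular file executable.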

Special Bits

Setuid, setgid, and sticky bit examples.

Command Description
chmod 4755 /usr/local/bin/tool Setuid on executable
chmod 2755 /srv/shared Setgid on directory
chmod 1777 /tmp/mytmp Sticky bit on world-writable directory
chmod u+s file Add setuid (symbolic)
chmod g+s dir Add setgid (symbolic)
chmod +t dir Add sticky bit (symbolic)
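
In numeric form the special bits appear as an extra leading digit (4 setuid, 2 setgid, 1 sticky), which stat can confirm — a quick check on a scratch directory:

```shell
# Setgid on a directory shows up as a leading 2 in the octal mode.
mkdir -p /tmp/shared
chmod 2755 /tmp/shared
stat -c '%a' /tmp/shared
```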

Safe Patterns

Use these patterns to avoid unsafe permission changes.

Command Description
chmod 600 ~/.ssh/id_ed25519 Secure SSH private key
chmod 700 ~/.ssh Secure SSH directory
chmod 644 ~/.ssh/id_ed25519.pub Public key permissions
chmod 750 /var/www/app Limit web root access
chmod 755 script.sh Safer than 777 for scripts

Common Errors

Quick checks when permission changes do not work.

Issue Check
Operation not permitted Check file ownership with ls -l and apply with the correct user or sudo
Permission still denied after chmod Parent directory may block access; check directory execute (x) bit
Cannot chmod symlink target as expected chmod applies to target file, not link metadata
Recursive mode broke app files Reset with separate file/dir modes using find ... -type f/-type d
Changes revert on mounted share Filesystem mount options/ACL may override mode bits

Related Guides

Use these guides for full permission and ownership workflows.

Guide Description
How to Change File Permissions in Linux (chmod command) Full chmod guide with examples
Chmod Recursive: Change File Permissions Recursively in Linux Recursive permission strategies
What Does chmod 777 Mean Security impact of 777
Chown Command in Linux (File Ownership) Change file and directory ownership
Umask Command in Linux Default permissions for new files
Understanding Linux File Permissions Permission model explained

Best Linux Distributions for Every Use Case

A Linux distribution (or “distro”) is an operating system built on the Linux kernel, combined with GNU tools, libraries, and software packages. Each distro includes a desktop environment, package manager, and preinstalled applications tailored to specific use cases.

With hundreds of Linux distributions available, choosing the right one can be overwhelming. This guide covers the best Linux distributions for different types of users, from complete beginners to security professionals.

How to Choose a Linux Distro

When selecting a Linux distribution, consider these factors:

  • Experience level — Some distros are beginner-friendly, while others require technical knowledge
  • Hardware — Older computers benefit from lightweight distros like Xubuntu and other Xfce-based systems
  • Purpose — Desktop use, gaming, server deployment, or security testing all have ideal distros
  • Software availability — Check if your required applications are available in the distro’s repositories
  • Community support — Larger communities mean more documentation and help

Linux Distros for Beginners

These distributions are designed with user-friendliness in mind, featuring intuitive interfaces and easy installation.

Ubuntu

Ubuntu is the most popular Linux distribution and an excellent starting point for newcomers. Developed by Canonical, it offers a polished desktop experience with the GNOME environment and extensive hardware support.

Ubuntu desktop screenshot

Ubuntu comes in several editions:

  • Ubuntu Desktop — Standard desktop with GNOME
  • Ubuntu Server — For server deployments
  • Kubuntu, Lubuntu, Xubuntu — Alternative desktop environments

Ubuntu releases new versions every six months, with Long Term Support (LTS) versions every two years receiving five years of security updates.

Website: https://ubuntu.com/

Linux Mint

Linux Mint is an excellent choice for users coming from Windows. Its Cinnamon desktop environment provides a familiar layout with a taskbar, start menu, and system tray.

Linux Mint desktop screenshot

Key features:

  • Comes with multimedia codecs preinstalled
  • LibreOffice productivity suite included
  • Update Manager for easy system maintenance
  • Available with Cinnamon, MATE, or Xfce desktops

Website: https://linuxmint.com/

Pop!_OS

Developed by System76, Pop!_OS is based on Ubuntu but optimized for productivity and gaming. It ships with the COSMIC desktop environment (built in Rust by System76), featuring built-in window tiling, a launcher for quick app access, and excellent NVIDIA driver support out of the box.

Pop!_OS desktop screenshot

Pop!_OS is particularly popular among:

  • Gamers (thanks to Steam and Proton integration)
  • Developers (includes many development tools)
  • Users with NVIDIA graphics cards

Website: https://pop.system76.com/

Zorin OS

Zorin OS is a beginner-focused distribution designed to make the move from Windows or macOS easier. It includes polished desktop layouts, strong hardware compatibility, and a simple settings experience for new users.

Zorin OS is a strong option when you want:

  • A familiar desktop layout with minimal setup
  • Stable Ubuntu-based package compatibility
  • Good out-of-the-box support for everyday desktop tasks

Zorin OS desktop screenshot

Website: https://zorin.com/os/

elementary OS

elementary OS is a design-focused distribution with a macOS-like interface called Pantheon. It emphasizes simplicity, consistency, and a curated app experience through its AppCenter.

elementary OS desktop screenshot

elementary OS is a good fit when you want:

  • A clean, visually polished desktop out of the box
  • A curated app store with native applications
  • A macOS-like workflow on Linux

Website: https://elementary.io/

Lightweight Linux Distros

These distributions are designed for older hardware or users who prefer a minimal, fast system.

Xubuntu

Xubuntu combines Ubuntu’s reliability with the Xfce desktop environment, offering a good balance between performance and features. It is lighter than standard Ubuntu while remaining full-featured for daily desktop use.

Xubuntu is a practical choice when you need:

  • Better performance on older hardware
  • A traditional desktop workflow
  • Ubuntu repositories and long-term support options

Xubuntu desktop screenshot

Website: https://xubuntu.org/

Lubuntu

Lubuntu uses the LXQt desktop environment, making it one of the lightest Ubuntu-based distributions available. It is designed for very old or resource-constrained hardware where even Xfce feels heavy.

Lubuntu desktop screenshot

Lubuntu works well when you need:

  • Minimal memory and CPU usage
  • A functional desktop on very old hardware
  • Access to Ubuntu repositories and LTS support

Website: https://lubuntu.me/

Linux Distros for Advanced Users

These distributions offer more control and customization but require technical knowledge to set up and maintain.

Arch Linux

Arch Linux follows a “do-it-yourself” philosophy, providing a minimal base system that users build according to their needs. It uses a rolling release model, meaning you always have the latest software without major version upgrades.

Key features:

  • Pacman package manager with access to vast repositories
  • Arch User Repository (AUR) for community packages
  • Excellent documentation in the Arch Wiki
  • Complete control over every aspect of the system

Arch Linux desktop screenshot
Tip
Arch Linux requires manual installation via the command line. If you want the Arch experience with an easier setup, consider Manjaro or EndeavourOS.

Website: https://archlinux.org/

EndeavourOS

EndeavourOS is an Arch-based distribution that keeps the Arch philosophy while simplifying installation and initial setup. It is popular among users who want a near-Arch experience without doing a fully manual install.

EndeavourOS gives you:

  • Rolling release updates
  • Access to Arch repositories and AUR packages
  • A cleaner onboarding path than a base Arch install

EndeavourOS desktop screenshot

Website: https://endeavouros.com/

Fedora

Fedora is a cutting-edge distribution sponsored by Red Hat. It showcases the latest open-source technologies while maintaining stability, making it popular among developers and system administrators.

Fedora desktop screenshot

Fedora editions include:

  • Fedora Workstation — Desktop with GNOME
  • Fedora Server — For server deployments
  • Fedora Silverblue — Immutable desktop OS
  • Fedora Spins — Alternative desktops (KDE, Xfce, etc.)

Many Red Hat technologies debut in Fedora before reaching RHEL, making it ideal for learning enterprise Linux.

Website: https://fedoraproject.org/

openSUSE

openSUSE is a community-driven distribution known for its stability and powerful administration tools. It offers two main variants:

  • openSUSE Leap — Regular releases based on SUSE Linux Enterprise
  • openSUSE Tumbleweed — Rolling release with the latest packages

The YaST (Yet another Setup Tool) configuration utility makes system administration straightforward, handling everything from software installation to network configuration.

openSUSE desktop screenshot

Website: https://www.opensuse.org/

Linux Distros for Gaming

Gaming-focused distributions prioritize current graphics stacks, controller support, and compatibility with modern Steam and Proton workflows.

Bazzite

Bazzite is an immutable Fedora-based desktop optimized for gaming and handheld devices. It ships with gaming-focused defaults and integrates well with Steam, Proton, and modern GPU drivers.

Bazzite is ideal when you want:

  • A Steam-first gaming setup
  • Reliable rollback and update behavior from an immutable base
  • A distro tuned for gaming PCs and handheld hardware

Bazzite desktop screenshot

Website: https://bazzite.gg/

Linux Distros for Servers

These distributions are optimized for stability, security, and long-term support in server environments.

Debian

Debian is one of the oldest and most influential Linux distributions. Known for its rock-solid stability and rigorous testing process, it serves as the foundation for Ubuntu, Linux Mint, Kali Linux, and many other distributions.

Debian desktop screenshot

Debian offers three release channels:

  • Stable — Thoroughly tested, ideal for production servers
  • Testing — Upcoming stable release with newer packages
  • Unstable (Sid) — Rolling release with the latest software

With over 59,000 packages in its repositories, Debian supports more hardware architectures than most other Linux distributions.

Website: https://www.debian.org/

Red Hat Enterprise Linux (RHEL)

RHEL is the industry standard for enterprise Linux deployments. It offers:

  • 10-year support lifecycle
  • Certified hardware and software compatibility
  • Red Hat Insights for predictive analytics
  • Professional support from Red Hat

RHEL runs on multiple architectures including x86_64, ARM64, IBM Power, and IBM Z.

Website: https://www.redhat.com/

Rocky Linux

After CentOS shifted to CentOS Stream, Rocky Linux emerged as a community-driven RHEL-compatible distribution. Founded by one of the original CentOS creators, it provides 1:1 binary compatibility with RHEL.

Rocky Linux desktop screenshot

Rocky Linux is ideal for:

  • Organizations previously using CentOS
  • Production servers requiring stability
  • Anyone needing RHEL compatibility without the cost

Website: https://rockylinux.org/

Ubuntu Server

Ubuntu Server is widely used for cloud deployments and containerized workloads. It powers a significant portion of public cloud instances on AWS, Google Cloud, and Azure.

Features include:

  • Regular and LTS releases
  • Excellent container and Kubernetes support
  • Ubuntu Pro for extended security maintenance
  • Snap packages for easy application deployment

Website: https://ubuntu.com/server

SUSE Linux Enterprise Server (SLES)

SUSE Linux Enterprise Server is designed for mission-critical workloads. It excels in:

  • SAP HANA deployments
  • High-performance computing
  • Mainframe environments
  • Edge computing

SLES offers a common codebase across different environments, simplifying workload migration.

Website: https://www.suse.com/products/server/

Linux Distros for Security and Privacy

These distributions focus on security testing, anonymity, and privacy protection.

Kali Linux

Kali Linux is the industry-standard platform for penetration testing and security research. Maintained by Offensive Security, it includes hundreds of security tools preinstalled.

Common use cases:

  • Penetration testing
  • Security auditing
  • Digital forensics
  • Reverse engineering

Kali Linux desktop screenshot
Warning
Kali Linux is designed for security professionals. It should not be used as a daily driver operating system.

Website: https://www.kali.org/

Tails

Tails (The Amnesic Incognito Live System) is a portable operating system designed for privacy and anonymity. It runs from a USB drive and routes all traffic through the Tor network.

Key features:

  • Leaves no trace on the host computer
  • All connections go through Tor
  • Built-in encryption tools
  • Amnesic by design (forgets everything on shutdown)

Tails desktop screenshot

Website: https://tails.net/

Qubes OS

Qubes OS takes a unique approach to security by isolating different activities in separate virtual machines called “qubes.” If one qube is compromised, others remain protected.

The Xen hypervisor runs directly on hardware, providing strong isolation between:

  • Work applications
  • Personal browsing
  • Untrusted software
  • Sensitive data

Website: https://www.qubes-os.org/

Parrot Security OS

Parrot Security is a Debian-based distribution for security testing, development, and privacy. It is lighter than Kali Linux and can serve as a daily driver.

Parrot offers several editions:

  • Security Edition — Full security toolkit
  • Home Edition — Privacy-focused daily use
  • Cloud Edition — For cloud deployments

Website: https://parrotsec.org/

Getting Started

Once you have chosen a distro, the next steps are:

  1. Download the ISO from the official website
  2. Create a bootable USB drive — See our guide on creating a bootable Linux USB
  3. Try it live before installing (most distros support this)
  4. Install following the distro’s installation wizard
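
Step 2 typically comes down to a dd invocation. The sketch below copies a small stand-in file instead of a real ISO and device, since dd irrevocably overwrites its target — on real hardware the target is a device path like /dev/sdX (a placeholder; always verify it with lsblk first):

```shell
# dd writes an image byte-for-byte to its target. A tiny file stands in for
# the ISO, and a regular file stands in for the USB device (/dev/sdX on
# real hardware).
printf 'ISO-IMAGE' > /tmp/demo.iso
dd if=/tmp/demo.iso of=/tmp/usb.img bs=4M conv=fsync 2>/dev/null
cat /tmp/usb.img
```

On a real system the equivalent would be along the lines of `sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync`, with the device path substituted for your actual USB drive.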

Quick Comparison

Distro Best For Desktop Package Manager Based On
Ubuntu Beginners GNOME APT Debian
Linux Mint Windows users Cinnamon APT Ubuntu
Zorin OS New Linux users GNOME (Zorin desktop) APT Ubuntu
elementary OS macOS-like experience Pantheon APT Ubuntu
Fedora Developers GNOME DNF Independent
Debian Stability/Servers GNOME APT Independent
Arch Linux Advanced users Any Pacman Independent
EndeavourOS Arch with easier setup Xfce (default) Pacman Arch Linux
Pop!_OS Gaming/Developers COSMIC APT Ubuntu
Bazzite Gaming KDE/GNOME variants RPM-OSTree Fedora
Rocky Linux Enterprise servers None DNF RHEL
Xubuntu Older hardware Xfce APT Ubuntu
Lubuntu Very old hardware LXQt APT Ubuntu
Kali Linux Security testing Xfce APT Debian

FAQ

Which Linux distro is best for beginners?
Ubuntu, Linux Mint, and Zorin OS are the best choices for beginners. Ubuntu has the largest community and most documentation, while Linux Mint and Zorin OS provide a familiar desktop experience.

Can I try a Linux distro without installing it?
Yes. Most distributions support “live booting” from a USB drive, allowing you to test the system without making any changes to your computer.

Is Linux free?
Most Linux distributions are completely free to download and use. Some enterprise distros like RHEL offer paid support subscriptions.

Can I run Windows software on Linux?
Many Windows applications run on Linux through Wine or Proton (for games via Steam). Native alternatives like LibreOffice, GIMP, and Firefox are also available.

What is a rolling release distro?
A rolling release distro (like Arch Linux or openSUSE Tumbleweed) delivers continuous updates instead of major version upgrades. You always have the latest software, but updates require more attention.

Conclusion

The best Linux distribution depends entirely on your needs and experience level. If you are new to Linux, start with Ubuntu, Linux Mint, or Zorin OS. If you want full control over your system, try Arch Linux, EndeavourOS, or Fedora. For gaming, Pop!_OS and Bazzite are strong options. For servers, Debian, Rocky Linux, Ubuntu Server, and RHEL are all solid choices. For security testing, Kali Linux and Parrot Security are the industry standards.

Most distributions are free to download and try. Create a bootable USB, test a few options, and find the one that fits your workflow.

If you have any questions, feel free to leave a comment below.

SCP Cheatsheet

Basic Syntax

Use this general form for scp commands.

Command Description
scp SOURCE DEST General scp syntax
scp file.txt user@host:/path/ Copy local file to remote
scp user@host:/path/file.txt . Copy remote file to current directory
scp user@host:/path/file.txt /local/path/ Copy remote file to local directory

Upload Files

Copy local files to a remote host.

Command Description
scp file.txt user@host:/tmp/ Upload one file
scp file1 file2 user@host:/tmp/ Upload multiple files
scp *.log user@host:/var/log/archive/ Upload matching files
scp -p file.txt user@host:/tmp/ Preserve modification times and mode

Download Files

Copy files from a remote host to your local system.

Command Description
scp user@host:/tmp/file.txt . Download to current directory
scp user@host:/tmp/file.txt ~/Downloads/ Download to specific directory
scp user@host:'/var/log/*.log' . Download remote wildcard (quoted)
scp user@host:/tmp/file.txt ./new-name.txt Download and rename locally

Copy Directories

Use -r for recursive directory transfers.

Command Description
scp -r dir/ user@host:/tmp/ Upload directory recursively
scp -r user@host:/var/www/ ./backup/ Download directory recursively
scp -r dir1 dir2 user@host:/tmp/ Upload multiple directories
scp -rp project/ user@host:/srv/ Recursive copy and preserve attributes

Ports, Keys, and Identity

Connect with custom SSH settings.

Command Description
scp -P 2222 file.txt user@host:/tmp/ Use custom SSH port
scp -i ~/.ssh/id_ed25519 file.txt user@host:/tmp/ Use specific private key
scp -o IdentityFile=~/.ssh/id_ed25519 file.txt user@host:/tmp/ Set key with -o option
scp -o StrictHostKeyChecking=yes file.txt user@host:/tmp/ Enforce host key verification
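Instead of repeating `-P` and `-i` on every call, the same settings can live in your SSH client configuration, which `scp` reads just like `ssh` does. A sketch (the host alias `myserver`, hostname, and key path are placeholders, not values from this cheatsheet):

```txt
# ~/.ssh/config
Host myserver
    HostName host.example.com
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```

With that in place, the commands above shorten to `scp file.txt myserver:/tmp/`.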

Performance and Reliability

Tune speed, verbosity, and resilience.

Command Description
scp -C large-file.iso user@host:/tmp/ Enable compression
scp -l 8000 file.txt user@host:/tmp/ Limit bandwidth (Kbit/s)
scp -v file.txt user@host:/tmp/ Verbose output for debugging
scp -q file.txt user@host:/tmp/ Quiet mode
scp -o ConnectTimeout=10 file.txt user@host:/tmp/ Set connection timeout

Remote to Remote Copy

Transfer files between two remote hosts.

Command Description
scp user1@host1:/path/file user2@host2:/path/ Copy between remote hosts
scp -3 user1@host1:/path/file user2@host2:/path/ Route transfer through local host
scp -P 2222 user1@host1:/path/file user2@host2:/path/ Use custom port (applies to both hosts)
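The difference between the two forms above is where authentication happens. Without `-3`, host1 opens a direct SSH connection to host2, so host1 needs credentials for host2; with `-3`, data is routed through your local machine, so only your local credentials are needed (slower, but simpler auth). A hedged sketch with placeholder hostnames and paths:

```shell
# Placeholder hosts/paths for illustration only. Without -3, host1 must be
# able to authenticate to host2 on its own; -3 relays through this machine.
CMD="scp -3 user1@host1:/backups/db.sql user2@host2:/restore/"
echo "$CMD"   # shown rather than run, since no real remotes exist here
```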

Common Patterns

Frequently used command combinations.

Command Description
scp -r ./site user@host:/var/www/ Deploy static site files
scp -i ~/.ssh/id_ed25519 -P 2222 backup.sql user@host:/tmp/ Upload with key and custom port
scp user@host:/etc/nginx/nginx.conf ./ Pull config for review
scp -rp ./configs user@host:/etc/myapp/ Copy configs and keep metadata
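Patterns like the deploy command above often end up wrapped in a small script. A minimal sketch, assuming a placeholder remote (`user@host`) and paths; the `DRY_RUN` guard prints the command instead of executing it, so the script can be sanity-checked without a reachable server:

```shell
#!/bin/sh
# Hypothetical deploy helper around scp. All names below are assumptions.
HOST="user@host"          # placeholder remote
SRC="./site"              # local build output
DEST="/var/www/"          # remote web root
CMD="scp -rp $SRC $HOST:$DEST"

if [ "${DRY_RUN:-1}" = "1" ]; then
  # Default to a dry run: show what would be transferred.
  echo "would run: $CMD"
else
  $CMD
fi
```

Run with `DRY_RUN=0` once the output looks right.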

Troubleshooting

Problem Solution
Permission denied Verify user has write access to the destination path
Host key verification failed ssh-keygen -R hostname to remove old key, then retry
Connection refused on custom port scp -P PORT file user@host:/path/ (uppercase -P)
Transfer stalls or times out scp -o ConnectTimeout=10 -o ServerAliveInterval=15 file user@host:/path/
Not a regular file error Add -r for directories: scp -r dir/ user@host:/path/
Protocol error on OpenSSH 9.0+ scp -O file user@host:/path/ to use legacy SCP protocol
Debug connection issues scp -v file user@host:/path/ for verbose SSH output
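When a transfer stalls or errors out mid-way, it is worth confirming the file arrived intact before retrying. One common check is comparing checksums on both ends. A runnable sketch, with a local `cp` standing in for the `scp` download so it works without a remote host:

```shell
# Integrity check after a transfer. 'cp' stands in for
# 'scp user@host:/tmp/report.txt .' so this runs with no remote.
printf 'quarterly numbers\n' > /tmp/report.txt
cp /tmp/report.txt /tmp/report-copy.txt

# Compare SHA-256 checksums of source and destination copies.
src=$(sha256sum /tmp/report.txt | awk '{print $1}')
dst=$(sha256sum /tmp/report-copy.txt | awk '{print $1}')

if [ "$src" = "$dst" ]; then
  echo "checksums match"
else
  echo "transfer corrupted" >&2
fi
```

In the real scenario you would run `sha256sum` on the remote side via `ssh user@host sha256sum /tmp/report.txt` and compare the two values.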

Related Guides

Use these articles for detailed file transfer and SSH workflows.

Guide Description
How to Use SCP Command to Securely Transfer Files Full scp guide with practical examples
SSH Command in Linux SSH options, authentication, and connection examples
How to Use Linux SFTP Command to Transfer Files Interactive secure file transfer over SSH
How to Use Rsync for Local and Remote Data Transfer Incremental sync and directory transfer