Today (April 25, 2026) · Linux Tips, Tricks and Tutorials on Linuxize

How to Install Nginx on Ubuntu 26.04

Nginx (pronounced “engine x”) is an open-source, high-performance HTTP and reverse proxy server used by some of the largest sites on the Internet. It can act as a standalone web server, load balancer, content cache, and proxy for HTTP and non-HTTP services.

This guide explains how to install Nginx on Ubuntu 26.04, configure the firewall, and manage the service.

Prerequisites

Before continuing, make sure you are logged in as a user with sudo privileges, and that no other service, such as Apache, is listening on port 80 or 443.

Installing Nginx

Nginx is available in the default Ubuntu repositories. To install it, run:

Terminal
sudo apt update
sudo apt install nginx

Once the installation completes, the Nginx service starts automatically. To verify that it is running:

Terminal
sudo systemctl status nginx
output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enabled)
Active: active (running) since Fri 2026-04-25 15:00:00 UTC; 10s ago
...

Nginx is now installed and running. You can manage the Nginx service in the same way as any other systemd unit.
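The most common management commands are the standard systemctl ones; reload applies configuration changes without dropping active connections:

```shell
# Stop, start, or restart the service:
sudo systemctl stop nginx
sudo systemctl start nginx
sudo systemctl restart nginx

# Re-read the configuration without dropping connections:
sudo systemctl reload nginx

# Prevent or re-enable automatic start at boot:
sudo systemctl disable nginx
sudo systemctl enable nginx
```

Before reloading, it is a good habit to validate the configuration with sudo nginx -t.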

Configuring the Firewall

Now that Nginx is running, make sure the firewall allows traffic on HTTP (port 80) and HTTPS (port 443). If you are using ufw, enable the Nginx Full profile, which covers both ports:

Terminal
sudo ufw allow 'Nginx Full'

To verify the firewall status:

Terminal
sudo ufw status
output
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
Nginx Full ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)

Testing the Installation

To confirm Nginx is serving requests, open http://YOUR_IP in a browser. You should see the default Nginx welcome page. You can also test from the command line:

Terminal
curl -I http://localhost
output
HTTP/1.1 200 OK
Server: nginx/1.28.1 (Ubuntu)
...

Nginx Configuration Structure

  • All Nginx configuration files are located in /etc/nginx/.
  • The main configuration file is /etc/nginx/nginx.conf.
  • Create a separate configuration file for each domain to keep things manageable. Server block files are stored in /etc/nginx/sites-available/ and must be symlinked to /etc/nginx/sites-enabled/ to be active.
  • Follow the standard naming convention: if your domain is mydomain.com, name the file /etc/nginx/sites-available/mydomain.com.conf.
  • The /etc/nginx/snippets/ directory holds reusable configuration fragments that can be included in server block files.
  • Log files (access.log and error.log) are located in /var/log/nginx/. Use a separate log file per server block.
  • Common document root locations: /var/www/<site_name>, /var/www/html/<site_name>, or /home/<user>/<site_name>.
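As an illustration of these conventions, a minimal server block for a hypothetical domain mydomain.com might look like this (a sketch; adjust the domain, root, and log paths to your setup):

```nginx
# /etc/nginx/sites-available/mydomain.com.conf
server {
    listen 80;
    listen [::]:80;

    server_name mydomain.com www.mydomain.com;
    root /var/www/mydomain.com;
    index index.html;

    access_log /var/log/nginx/mydomain.com.access.log;
    error_log /var/log/nginx/mydomain.com.error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Activate it by symlinking the file into sites-enabled, then test and reload: sudo ln -s /etc/nginx/sites-available/mydomain.com.conf /etc/nginx/sites-enabled/ followed by sudo nginx -t && sudo systemctl reload nginx.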

Conclusion

Nginx is now installed and ready to serve your applications on Ubuntu 26.04. To set up multiple sites on the same server, see the guide on Nginx server blocks.

How to Install PostgreSQL on Ubuntu 26.04

PostgreSQL is an open-source, object-relational database management system known for its reliability, feature set, and standards compliance. It is widely used in production environments ranging from small applications to large-scale data warehouses.

This guide explains how to install PostgreSQL 17 on Ubuntu 26.04, set up roles and authentication, and configure remote access.

Prerequisites

To follow this guide, you need to be logged in as a user with sudo privileges.

Installing PostgreSQL on Ubuntu 26.04

The default Ubuntu 26.04 repositories include PostgreSQL 17.

Start by updating the local package index:

Terminal
sudo apt update

Install the PostgreSQL server and the contrib package, which provides several additional features:

Terminal
sudo apt install postgresql postgresql-contrib

Once the installation completes, the PostgreSQL service starts automatically. Use the psql tool to verify the installation by connecting to the PostgreSQL server and printing its version:

Terminal
sudo -u postgres psql -c "SELECT version();"
output
 version
----------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 17.6 (Ubuntu 17.6-1build1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 13.3.0-6ubuntu2) 13.3.0, 64-bit
(1 row)

PostgreSQL has been installed and is ready to use.

PostgreSQL Roles and Authentication

Database access in PostgreSQL is managed through roles. A role can represent a database user or a group of database users.

PostgreSQL supports several authentication methods. The most commonly used are:

  • Trust lets a role connect without a password, as long as the conditions in pg_hba.conf are met.
  • Password lets a role connect by providing a password. Passwords can be stored as scram-sha-256 or md5.
  • Ident is available only for TCP/IP connections. It obtains the client’s operating system username, with optional user-name mapping.
  • Peer works like Ident, but for local connections only.

PostgreSQL client authentication is defined in pg_hba.conf. By default, PostgreSQL uses the peer authentication method for local connections.

The postgres user is created automatically during installation. It is the superuser for the PostgreSQL instance, equivalent to the MySQL root user.

To log in to the PostgreSQL server as the postgres user, use the sudo command:

Terminal
sudo -u postgres psql

You can also switch to the user first:

Terminal
sudo su - postgres
psql

To exit the PostgreSQL shell, type:

Terminal
\q

Creating a Role and Database

Only superusers and roles with the CREATEROLE privilege can create new roles.

The following example creates a role named john, sets a password for the role, and creates a database named johndb owned by that role:

  1. Create a new PostgreSQL role:

    Terminal
    sudo -u postgres createuser john
  2. Set a password for the role:

    Terminal
    sudo -u postgres psql
    sql
    ALTER ROLE john WITH ENCRYPTED PASSWORD 'strong_password';
    Warning
    Use a real password here, and do not store it in shell history, scripts, or version control.
    
  3. Create a new database owned by that role:

    Terminal
    sudo -u postgres createdb johndb --owner=john

Because john owns the database, it can create objects in the default public schema without additional grants.
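If you prefer to stay inside the psql shell, the same role and database can be created with SQL statements, equivalent to the createuser and createdb wrappers above:

```sql
-- Run these from: sudo -u postgres psql
CREATE ROLE john WITH LOGIN PASSWORD 'strong_password';
CREATE DATABASE johndb OWNER john;
```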

Enabling Remote Access

By default, PostgreSQL only listens on localhost (127.0.0.1).

To allow remote connections, open the PostgreSQL configuration file:

Terminal
sudo nano /etc/postgresql/17/main/postgresql.conf

Find the listen_addresses line in the CONNECTIONS AND AUTHENTICATION section and set it to '*' to listen on all interfaces:

/etc/postgresql/17/main/postgresql.conf
listen_addresses = '*'

Save the file and restart the PostgreSQL service:

Terminal
sudo systemctl restart postgresql

Verify that PostgreSQL is now listening on all interfaces:

Terminal
ss -nlt | grep 5432
output
LISTEN 0 244 0.0.0.0:5432 0.0.0.0:*
LISTEN 0 244 [::]:5432 [::]:*

The next step is to configure which hosts are allowed to connect by editing pg_hba.conf:

Terminal
sudo nano /etc/postgresql/17/main/pg_hba.conf

Add rules at the bottom of the file. Some examples:

/etc/postgresql/17/main/pg_hba.conf
# TYPE DATABASE USER ADDRESS METHOD
# Allow jane to connect to all databases from anywhere using a password
host all jane 0.0.0.0/0 scram-sha-256
# Allow jane to connect only to janedb from anywhere
host janedb jane 0.0.0.0/0 scram-sha-256
# Allow jane to connect from a trusted host without a password
host all jane 192.168.1.134/32 trust

Reload PostgreSQL to apply the changes:

Terminal
sudo systemctl reload postgresql

Finally, open port 5432 in the firewall. To allow access from a specific subnet:

Terminal
sudo ufw allow proto tcp from 192.168.1.0/24 to any port 5432
Warning
Make sure your firewall is configured to accept connections only from trusted IP addresses.

Conclusion

We have shown you how to install PostgreSQL 17 on Ubuntu 26.04 and covered the basics of roles, authentication, and remote access. For more details, consult the PostgreSQL 17 documentation.

How to Install MySQL on Ubuntu 26.04

MySQL is one of the most popular open-source relational database management systems. It is fast, easy to manage, scalable, and an integral part of the popular LAMP and LEMP stacks.

This guide explains how to install and secure MySQL 8.4 on an Ubuntu 26.04 machine. Upon completion, you will have a fully functional database server that you can use for your projects.

Quick Reference

Task Command
Install MySQL sudo apt install mysql-server
Check service status sudo systemctl status mysql
Start/Stop/Restart MySQL sudo systemctl start/stop/restart mysql
Secure MySQL sudo mysql_secure_installation
Log in as root sudo mysql
Check MySQL version mysql --version
View MySQL logs sudo journalctl -u mysql
Open firewall port sudo ufw allow 3306/tcp

Prerequisites

To follow this guide, you need to be logged in as a user with sudo privileges.

Installing MySQL on Ubuntu 26.04

The default Ubuntu 26.04 repositories include MySQL 8.4.

Start by updating the local package index:

Terminal
sudo apt update

Install the MySQL server package:

Terminal
sudo apt install mysql-server

Once the installation completes, the MySQL service starts automatically. To verify that the MySQL server is running, type:

Terminal
sudo systemctl status mysql

The output should show that the service is enabled and running:

output
● mysql.service - MySQL Community Server
Loaded: loaded (/usr/lib/systemd/system/mysql.service; enabled; preset: enabled)
Active: active (running) since Fri 2026-04-25 09:14:23 UTC; 15s ago
Process: 1234 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 1242 (mysqld)
Status: "Server is operational"
Tasks: 38 (limit: 4558)
Memory: 362.4M
CPU: 1.183s
CGroup: /system.slice/mysql.service
└─1242 /usr/sbin/mysqld

If the server fails to start, you can check the logs with journalctl:

Terminal
sudo journalctl -u mysql

To check the installed version, run:

Terminal
mysql --version
output
mysql Ver 8.4.8-0ubuntu1 for Linux on x86_64 ((Ubuntu))

Securing MySQL

MySQL includes a script named mysql_secure_installation that helps you improve the security of the database server.

Run the script:

Terminal
sudo mysql_secure_installation

You will be asked to configure the VALIDATE PASSWORD component, which tests the strength of MySQL users’ passwords:

output
Securing the MySQL server deployment.
Connecting to MySQL using a blank password.
VALIDATE PASSWORD COMPONENT can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD component?
Press y|Y for Yes, any other key for No: y

If you enabled the component, you will next be asked to choose one of three password validation policy levels:

output
There are three levels of password validation policy:
LOW Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file
Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1

Next, you will be asked to remove the anonymous user, restrict root user access to the local machine, remove the test database, and reload privilege tables. Answer y to all questions:

output
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
- Dropping test database...
Success.
Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.
All done!

Logging In as Root

You can interact with the MySQL server from the command line using the MySQL client utility, which is installed as a dependency of the MySQL server package.

On Ubuntu 26.04, the MySQL root account uses the auth_socket plugin by default. This plugin authenticates users that connect from localhost through the Unix socket file, which means you cannot authenticate as root by providing a password.

To log in to the MySQL server as the root user, type:

Terminal
sudo mysql

You will be presented with the MySQL shell:

output
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.4.8-0ubuntu1 (Ubuntu)
Copyright (c) 2000, 2024, Oracle and/or its affiliates.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

To exit the MySQL shell, type exit or press Ctrl+D.

Changing Root Authentication

If you want to log in to your MySQL server as root using an external program such as phpMyAdmin, you have two options.

Option 1: Change the Authentication Method

Change the authentication method from auth_socket to caching_sha2_password:

sql
ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'very_strong_password';
FLUSH PRIVILEGES;
Info
caching_sha2_password is the default authentication plugin in MySQL 8.4. The older mysql_native_password plugin is deprecated and disabled by default.
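To see which authentication plugin each account currently uses, you can query the mysql.user system table from the MySQL shell:

```sql
SELECT user, host, plugin FROM mysql.user;
```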

Option 2: Create a Dedicated Administrative User

The recommended approach is to create a new administrative user with access to all databases:

sql
CREATE USER 'administrator'@'localhost' IDENTIFIED BY 'very_strong_password';
GRANT ALL PRIVILEGES ON *.* TO 'administrator'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Creating a Database and User

Once MySQL is secured, you can create databases and user accounts. For example, to create a database named myapp and a user that has access to it:

Log in to the MySQL shell:

Terminal
sudo mysql

Create the database:

sql
CREATE DATABASE myapp;

Create a user and grant privileges on the database:

sql
CREATE USER 'myappuser'@'localhost' IDENTIFIED BY 'strong_password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myappuser'@'localhost';
FLUSH PRIVILEGES;

To allow the user to connect from any host (for remote access), replace 'localhost' with '%'.
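For example, a remote-capable variant of the same user would look like this (prefer a specific host or subnet pattern over '%' where possible):

```sql
CREATE USER 'myappuser'@'%' IDENTIFIED BY 'strong_password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myappuser'@'%';
FLUSH PRIVILEGES;
```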

Enabling Remote Access

By default, MySQL only listens on localhost (127.0.0.1). To allow remote connections, you need to change the bind address.

Open the MySQL configuration file:

Terminal
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

Find the line that starts with bind-address and change it to 0.0.0.0 to listen on all interfaces, or to a specific IP address:

/etc/mysql/mysql.conf.d/mysqld.cnf
bind-address = 0.0.0.0

Restart the MySQL service for the changes to take effect:

Terminal
sudo systemctl restart mysql
Warning
Allowing remote access to your MySQL server can be a security risk. Make sure to only grant remote access to users who need it and use strong passwords. Consider using a firewall to restrict access to the MySQL port (3306) to trusted IP addresses only.

For production, consider enabling TLS so remote connections are encrypted. This protects credentials and data in transit when connecting over untrusted networks.

Configuring the Firewall

If you have a firewall enabled and want to allow connections to MySQL from remote machines, you need to open port 3306.

To allow access from a specific IP address:

Terminal
sudo ufw allow from 192.168.1.100 to any port 3306

To allow access from any IP address (not recommended for production):

Terminal
sudo ufw allow 3306/tcp

FAQ

What version of MySQL does Ubuntu 26.04 include?
Ubuntu 26.04 ships with MySQL 8.4 in its default repositories. To check the exact version installed on your system, run mysql --version.

What is the difference between auth_socket and caching_sha2_password?
auth_socket authenticates users based on the operating system user connecting through the Unix socket. No password is needed, but you must use sudo. caching_sha2_password uses traditional password-based authentication.

How do I reset the MySQL root password?
Stop the MySQL service with sudo systemctl stop mysql, start the server with --skip-grant-tables, log in without a password, run FLUSH PRIVILEGES; to reload the grant tables, then run ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'new_password';, and finally restart the service normally.

Is mysql_native_password still supported?
It is still available, but it is deprecated and disabled by default in MySQL 8.4. Use caching_sha2_password for new installations unless you have a specific compatibility requirement.

Can I install a newer MySQL release on Ubuntu 26.04?
Yes. If you need a newer release than the one in the Ubuntu repositories, you can use the official MySQL APT repository. For most setups, the Ubuntu package is the simplest choice.

Conclusion

We have shown you how to install and secure MySQL 8.4 on Ubuntu 26.04. Now that your database server is up and running, your next step could be to learn how to manage MySQL user accounts and databases.

How to Install Docker on Ubuntu 26.04

Docker is an open-source container platform that lets you build, test, and run applications as portable containers. Containers package the application code together with its dependencies, which makes it easier to run the same workload across development machines, test environments, and servers.

Docker is widely used in development workflows and DevOps pipelines because it keeps application environments predictable.

This tutorial explains how to install Docker on Ubuntu 26.04.

Supported Ubuntu Versions

Docker is available from the standard Ubuntu repositories, but those packages are often outdated. To ensure you get the latest stable version, we will install Docker from the official Docker repository.

At the time of writing, the Docker repository provides packages for current supported Ubuntu releases, including Ubuntu 26.04 LTS, Ubuntu 25.10, Ubuntu 24.04 LTS, and Ubuntu 22.04 LTS.

Info
Derivatives like Linux Mint are not officially supported, though they often work.

Prerequisites

Before you begin, make sure that:

  • You are running a 64-bit supported Ubuntu version
  • You have a user account with sudo privileges
  • Your system is connected to the internet and up to date

Uninstall any old or conflicting Docker packages first to avoid potential issues. It is fine if apt reports that some of these packages are not installed:

Terminal
sudo apt remove docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc

Installing Docker on Ubuntu 26.04

Installing Docker on Ubuntu is relatively straightforward. We will enable the Docker repository, import the repository GPG key, and install the Docker packages.

Step 1: Update the Package Index and Install Dependencies

First, update the package index and install the packages required to use repositories over HTTPS:

Terminal
sudo apt update
sudo apt install ca-certificates curl

Step 2: Import Docker’s Official GPG Key

Add Docker’s official GPG key so your system can verify package authenticity:

Terminal
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Step 3: Add the Docker APT Repository

Add the Docker repository to your system:

Terminal
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF

$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") reads your Ubuntu codename from the OS release file. On Ubuntu 26.04 it returns resolute.
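You can preview what that substitution expands to on your system before adding the repository:

```shell
# Print the codename that the Suites: field will contain
. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}"
```

If this prints nothing, check that /etc/os-release defines UBUNTU_CODENAME or VERSION_CODENAME before proceeding.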

Step 4: Install Docker Engine

Tip
If you want to install a specific Docker version, skip this step and go to the next one.

Now that the Docker repository is enabled, update the package index and install Docker:

Terminal
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Installing a Specific Docker Version (Optional)

If you want to install a specific Docker version instead of the latest one, first list the available versions:

Terminal
sudo apt update
apt list --all-versions docker-ce

The available Docker versions are printed in the second column:

output
docker-ce/resolute 5:29.4.0-1~ubuntu.26.04~resolute amd64
docker-ce/resolute 5:29.3.1-1~ubuntu.26.04~resolute amd64
docker-ce/resolute 5:29.3.0-1~ubuntu.26.04~resolute amd64
...

Install a specific version by adding =<VERSION> after the package name:

Terminal
DOCKER_VERSION="<VERSION>"
sudo apt install docker-ce=$DOCKER_VERSION docker-ce-cli=$DOCKER_VERSION containerd.io docker-buildx-plugin docker-compose-plugin

Replace <VERSION> with the exact version string returned by apt list --all-versions docker-ce.

Verify the Docker Installation

On most Ubuntu systems, the Docker service starts automatically after installation. If it does not, start it manually:

Terminal
sudo systemctl start docker

Check the service status:

Terminal
sudo systemctl status docker

The output will look something like this:

output
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running)
...

You can also confirm that the client and daemon are responding:

Terminal
sudo docker version

To verify that Docker can pull and run containers correctly, run the test image:

Terminal
sudo docker run hello-world

If the image is not found locally, Docker will download it from Docker Hub, run the container, print a “Hello from Docker” message, and exit.

The container stops after printing the message because it has no long-running process.

Run Docker Commands Without sudo (Highly Recommended)

By default, only root and a user with sudo privileges can run Docker commands.

To allow a non-root user to execute Docker commands, add the user to the docker group:

Terminal
sudo usermod -aG docker $USER

$USER is an environment variable that holds the currently logged-in username. If you want to execute commands with another user, replace $USER with that username.

Run newgrp docker or log out and log back in for the group membership changes to take effect.
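You can check whether the new membership is active in your current session; the sketch below prints docker when it is, and a short note when it is not yet:

```shell
# Prints "docker" when the group is active in this session,
# or a short note when it is not
id -nG | tr ' ' '\n' | grep -x docker || echo "docker group not active yet"
```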

After that, you can rerun the test container without sudo:

Terminal
docker run hello-world
Info
By default, Docker pulls images from Docker Hub. It is a cloud-based registry service that stores Docker images in public or private repositories.

Updating Docker

When a new Docker version is released, update it using standard system commands:

Terminal
sudo apt update
sudo apt upgrade

To prevent Docker from being updated automatically, mark it as held:

Terminal
sudo apt-mark hold docker-ce

Uninstalling Docker

Before uninstalling Docker, it is recommended to remove all containers, images, volumes, and networks:

Run the following commands to stop all running containers and remove all Docker objects:

Terminal
docker container stop $(docker container ls -aq)
docker system prune -a --volumes -f

Remove Docker packages:

Terminal
sudo apt purge docker-ce docker-ce-cli containerd.io \
docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo apt autoremove

To completely remove Docker data from your system, delete the directories that were created during the installation process:

Terminal
sudo rm -rf /var/lib/{docker,containerd}

Troubleshooting

Permission denied while trying to connect to the Docker daemon socket
Your user is not yet in the docker group, or the group membership has not taken effect. Run sudo usermod -aG docker $USER, then log out and back in (or run newgrp docker).

Cannot connect to the Docker daemon. Is the daemon running?
The Docker service is not running. Start it with sudo systemctl start docker and check its status with sudo systemctl status docker.

Package ‘docker-ce’ has no installation candidate
The Docker repository was not added correctly. Re-run Steps 2 and 3, then run sudo apt update before installing.

Containers fail to start after an OS or Docker upgrade
Check the Docker daemon status, the installed Docker version, and your container runtime configuration. If the host or Docker Engine was upgraded recently, review the Docker release notes and verify that your workloads are using a supported configuration before restarting them.

Conclusion

Installing Docker from the official repository ensures you always have access to the latest stable releases and security updates. For post-install steps such as configuring log drivers or setting up rootless mode, see the official post-install guide.

Yesterday (April 24, 2026) · Linux Tips, Tricks and Tutorials on Linuxize

How to Install Node.js and npm on Ubuntu 26.04

Node.js is an open-source JavaScript runtime built on Chrome’s V8 engine. It lets you run JavaScript outside the browser and is commonly used for APIs, command-line tools, and server-side applications. npm is the default package manager for Node.js.

This guide covers three ways to install Node.js and npm on Ubuntu 26.04:

  • NodeSource repository - Install a specific Node.js version. NodeSource supports Node.js v25.x, v24.x, and v22.x.
  • nvm (Node Version Manager) - Manage multiple Node.js versions on the same machine. This is the preferred method for developers.
  • Ubuntu repository - The easiest way. Ubuntu 26.04 includes Node.js v22.x, which is suitable for many applications.

Choose the method that fits your needs. If you are not sure which version to install, check the documentation of the application you are deploying.

Quick Reference

Task Command
Install via NodeSource sudo apt install nodejs (after adding repo)
Install via nvm nvm install --lts
Install via Ubuntu repo sudo apt install nodejs npm
Check Node.js version node -v
Check npm version npm -v
List installed versions (nvm) nvm ls
Switch Node.js version (nvm) nvm use <version>
Set default version (nvm) nvm alias default <version>
Uninstall Node.js sudo apt remove nodejs or nvm uninstall <version>

Installing Node.js and npm from NodeSource

NodeSource maintains an APT repository with multiple Node.js versions. Use this method when you need a specific version.

Install the required dependencies:

Terminal
sudo apt update
sudo apt install ca-certificates curl gnupg

Import the NodeSource GPG key:

Terminal
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

NodeSource provides the following versions:

  • v25.x - The current release.
  • v24.x - The latest LTS version (Krypton).
  • v22.x - Maintenance LTS version (Jod).

We will install Node.js version 24.x. Change NODE_MAJOR=24 to your preferred version if needed:

Terminal
NODE_MAJOR=24
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list

Install Node.js and npm:

Terminal
sudo apt update
sudo apt install nodejs

The nodejs package includes both node and npm binaries.

Verify the installation:

Terminal
node --version

You should see output similar to this:

output
v24.x
Terminal
npm --version

The output will show the npm version bundled with the installed Node.js release:

output
11.x

To compile native addons from npm, install the development tools:

Terminal
sudo apt install build-essential

Installing Node.js and npm using NVM

NVM (Node Version Manager) is a bash script that lets you manage multiple Node.js versions per user. This is the preferred method for developers who need to switch between versions.

Download and install nvm:

Terminal
wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash

Do not run the installer with sudo, as that would install nvm for the root user only.

The script clones the repository to ~/.nvm and adds the required lines to your shell profile:

output
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

Either close and reopen your terminal or run the commands above to load nvm in the current session.

Verify the installation:

Terminal
nvm -v
output
0.40.3

List all available Node.js versions:

Terminal
nvm list-remote
output
...
v22.x (LTS: Jod)
...
v24.x (LTS: Krypton)
...
v25.x

Install the latest Node.js version:

Terminal
nvm install node
output
...
Now using node v25.x (npm v11.x)
Creating default alias: default -> node (-> v25.x)

Verify the installation:

Terminal
node -v
output
v25.x

Install additional versions:

Terminal
nvm install --lts
nvm install 22

List installed versions:

Terminal
nvm ls
output
-> v22.x
v24.x
v25.x
default -> node (-> v25.x)
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v25.x) (default)
stable -> 25 (-> v25.x) (default)
lts/* -> lts/krypton (-> v24.x)
lts/jod -> v22.x
lts/krypton -> v24.x

The arrow (-> v22.x) indicates the currently active version. The default version is the one activated in new shells.
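In project directories, you can also pin a Node.js version with an .nvmrc file so that nvm use picks it up automatically (a sketch using a hypothetical project directory):

```shell
# Inside your project directory:
echo "24" > .nvmrc
nvm use    # reads .nvmrc and activates Node.js v24.x
```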

Switch to a different version:

Terminal
nvm use 24
output
Now using node v24.x (npm v11.x)

Change the default version:

Terminal
nvm alias default 24

Installing Node.js and npm from the Ubuntu repository

Ubuntu 26.04 includes Node.js v22.x in its default repositories. This version works well for many applications and receives security updates through Ubuntu.

Install Node.js and npm:

Terminal
sudo apt update
sudo apt install nodejs npm

This installs Node.js along with the tools needed to compile native addons from npm.

Verify the installation:

Terminal
node -v
output
v22.x
Terminal
npm -v
output
10.x
Info
If you need a newer Node.js version, use NodeSource or nvm instead.

Uninstalling Node.js

The uninstall method depends on how you installed Node.js.

NodeSource or Ubuntu repository:

Terminal
sudo apt remove nodejs
sudo apt autoremove

nvm:

Terminal
nvm uninstall 24

To completely remove nvm, delete the ~/.nvm directory and remove the nvm lines from your ~/.bashrc or ~/.zshrc file.

Conclusion

We covered three ways to install Node.js and npm on Ubuntu 26.04. NodeSource provides specific versions, nvm offers flexibility for managing multiple versions, and the Ubuntu repository includes a stable LTS version out of the box.

For more information, see the official Node.js documentation.

How to Upgrade to Ubuntu 26.04

Ubuntu 26.04 LTS (Resolute Raccoon) was released on April 23, 2026. It ships with Linux kernel 7.0, GNOME 50, Python 3.14, PHP 8.5, Java 25, TPM-backed full-disk encryption, post-quantum cryptography support in OpenSSL, and a Wayland-only GNOME session. As an LTS release, it receives standard security updates for five years, with expanded coverage available through Ubuntu Pro.

This guide shows how to upgrade to Ubuntu 26.04 LTS from the command line. Ubuntu 25.10 systems can upgrade now, while Ubuntu 24.04 LTS systems will be offered the standard upgrade path after Ubuntu 26.04.1 is released.

Prerequisites

You need to be logged in as root or a user with sudo privileges to perform the upgrade.

You can upgrade to Ubuntu 26.04 from Ubuntu 25.10 today; Ubuntu 24.04 LTS systems will be offered the upgrade after Ubuntu 26.04.1 is released. If you are running an older release, you must upgrade step by step through the supported path, for example to Ubuntu 22.04 and then to 24.04, before continuing.

Make sure you have a working internet connection before starting.

Back Up Your Data

Before starting a major version upgrade, make sure you have a complete backup of your data. If you are running Ubuntu on a virtual machine, take a full system snapshot so you can restore quickly if anything goes wrong.

Update Currently Installed Packages

Before starting the release upgrade, bring your existing system fully up to date.

Check whether any packages are marked as held back, as they can interfere with the upgrade:

Terminal
sudo apt-mark showhold

An empty output means there are no held packages. If there are held packages, unhold them with:

Terminal
sudo apt-mark unhold package_name

Refresh the package index and upgrade all installed packages:

Terminal
sudo apt update
sudo apt upgrade
Info
If the kernel is upgraded during this step, reboot the machine and log back in before continuing.

Perform a full distribution upgrade to resolve any remaining dependency changes:

Terminal
sudo apt full-upgrade

Remove automatically installed dependencies that are no longer needed:

Terminal
sudo apt --purge autoremove

Upgrade to Ubuntu 26.04 LTS

You can upgrade from the command line using do-release-upgrade, which works for both desktop and server installations.

do-release-upgrade is part of the update-manager-core package, which is installed by default on most Ubuntu systems. If it is not present, install it first:

Terminal
sudo apt install update-manager-core
Info
Check that the upgrade policy in /etc/update-manager/release-upgrades is set to Prompt=lts or Prompt=normal. If it is set to Prompt=never, the upgrade will not start.
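A quick sketch for checking and, if needed, correcting the policy in the file mentioned above:

```shell
# Show the current upgrade policy
grep '^Prompt' /etc/update-manager/release-upgrades

# Change it to lts if it is currently set to never
sudo sed -i 's/^Prompt=.*/Prompt=lts/' /etc/update-manager/release-upgrades
```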

If you are upgrading over SSH, do-release-upgrade may start an additional SSH daemon on port 1022 so you can reconnect if the main session drops. If you use a firewall, you may need to open that port temporarily:

Terminal
sudo ufw allow 1022/tcp

To begin the release upgrade, run:

Terminal
sudo do-release-upgrade

If do-release-upgrade reports that no new release is available, the upgrade window for your current version has not opened yet. Wait for the standard rollout rather than forcing the upgrade.

The tool will disable third-party repositories, update the apt sources to point to the Ubuntu 26.04 repositories, and begin downloading the required packages.

You will be prompted several times during the process. When asked whether services should be automatically restarted, type y. When asked about configuration files, type Y to accept the package maintainer’s version if you have not made custom changes; otherwise keep your current version to avoid losing your customizations.

The upgrade runs inside a GNU screen session and will automatically re-attach if the connection drops.

The process may take some time depending on the number of packages, your hardware, and your internet speed.

Once the new packages are installed, the tool will ask whether to remove obsolete software. Type d to review the list first, or y to proceed with removal.

When the upgrade finishes, you will be prompted to restart the system. Type y to reboot and complete the upgrade.

Verify the Upgrade

After the system boots, log in and check the Ubuntu version:

Terminal
lsb_release -a
output
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 26.04 LTS
Release: 26.04
Codename: resolute

You can also confirm the kernel version:

Terminal
uname -r

The output should show a 7.0.x kernel.

Troubleshooting

Third-party repository errors during apt update
The upgrade tool disables third-party repositories automatically. If you see errors before running do-release-upgrade, disable the affected sources under /etc/apt/sources.list.d/ and re-enable them one by one after the upgrade completes.

“Packages have been kept back”
Run sudo apt full-upgrade to resolve held-back packages before starting the upgrade. Do not proceed with do-release-upgrade until there are no held packages.

SSH connection drops mid-upgrade
The upgrade runs inside a GNU screen session. Reconnect over SSH and run sudo screen -r to re-attach to the running session.

A service or container does not work after the reboot
If something that worked before the upgrade fails afterward, check its status and logs first. Run systemctl status service_name and journalctl -xe for services, and review your container runtime logs and configuration for Docker or containerd workloads. Third-party packages and older configurations sometimes need updates after a major release upgrade.

GNOME desktop shows only Wayland sessions
Ubuntu 26.04 drops the X11 GNOME session. If you relied on an X11 desktop session, applications that do not support Wayland natively will run through XWayland automatically. For most applications this is transparent, but some older tools may behave differently.

Conclusion

Your system is now running Ubuntu 26.04 LTS Resolute Raccoon. Re-enable any third-party repositories you disabled before the upgrade and verify that your critical services are running. For a full list of known issues and changes, see the official Ubuntu 26.04 release notes.


git branch Command: Create, List, and Delete Branches

Branches are how Git keeps your work separated from the main line of development. A new branch lets you experiment, fix a bug, or build a feature without touching what is already shipped, and a merge brings the work back together when it is ready. The tool that creates, lists, and removes those branches is git branch.

This guide explains how to use git branch to manage local and remote branches, filter by merged or upstream state, rename branches, and clean up stale ones.

git branch Syntax

The general form of the command is:

txt
git branch [OPTIONS] [BRANCH] [START_POINT]

With no arguments, git branch lists your local branches and marks the currently checked-out one with an asterisk. With a branch name, it creates a new branch pointing at the current commit. With additional flags, it can list remote branches, delete branches, rename them, or set upstream tracking.

Listing Branches

To list the local branches in the current repository, run git branch with no arguments:

Terminal
git branch
output
  develop
  feature/login
* main

The asterisk marks the branch you are currently on. The branches are sorted alphabetically.

To include remote-tracking branches, add -a (all):

Terminal
git branch -a
output
  develop
  feature/login
* main
  remotes/origin/HEAD -> origin/main
  remotes/origin/develop
  remotes/origin/main

Anything prefixed with remotes/ is a local reference to a branch on a remote, not a branch you can commit to directly. Check it out to create a local tracking branch.
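For example, if origin/develop exists on the remote but there is no local develop branch, git switch creates a local tracking branch automatically when the name matches exactly one remote branch:

```shell
git switch develop
# equivalent explicit form:
# git switch -c develop origin/develop
```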

To list only remote-tracking branches, use -r:

Terminal
git branch -r

To see the latest commit on each branch, add -v:

Terminal
git branch -v
output
  develop       a1b2c3d Add release notes
  feature/login 3d4e5f6 Stub login form
* main          7e8f9a0 Merge pull request #42

For even more detail, -vv also prints the upstream branch each local branch tracks and whether it is ahead, behind, or in sync:

Terminal
git branch -vv
output
  develop       a1b2c3d [origin/develop] Add release notes
  feature/login 3d4e5f6 [origin/feature/login: ahead 2] Stub login form
* main          7e8f9a0 [origin/main] Merge pull request #42

Creating a Branch

To create a new branch, pass the name as an argument. This creates the branch at the current HEAD but does not switch to it:

Terminal
git branch feature/login

To start the branch from a specific commit, tag, or another branch, pass it as the second argument:

Terminal
git branch hotfix v1.4.0
Terminal
git branch hotfix 3d4e5f6

Creating a branch without switching to it is useful when you want to mark a commit for later work. Most of the time, though, you create a branch and switch to it in one step. The modern command for that is git switch -c:

Terminal
git switch -c feature/login

The older equivalent is git checkout -b feature/login. Both create the branch at the current commit and check it out immediately. See the create and list Git branches guide for a fuller walkthrough.

Renaming a Branch

To rename the branch you are currently on, use -m with the new name:

Terminal
git branch -m new-name

To rename any branch, pass both the old and the new name:

Terminal
git branch -m old-name new-name

If a branch with the target name already exists, this form fails to protect you from overwriting it. To force the rename and overwrite the existing branch, use the capital -M instead. Use -M with care: it discards the destination branch.

After renaming a branch that is already pushed, push the new name to the remote and delete the old one so collaborators see the change. The rename a Git branch guide walks through the full local-and-remote flow.
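The remote side of that flow looks like this, using the placeholder names old-name and new-name on origin:

```shell
# Push the renamed branch and set it as the upstream
git push -u origin new-name

# Remove the old branch name from the remote
git push origin --delete old-name
```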

Deleting a Branch

To delete a local branch that has been merged into the current branch, use -d:

Terminal
git branch -d feature/login
output
Deleted branch feature/login (was 3d4e5f6).

Git refuses to delete a branch that has unmerged work, to protect you from losing commits:

output
error: The branch 'feature/wip' is not fully merged.
If you are sure you want to delete it, run 'git branch -D feature/wip'.

If you are sure the work is no longer needed, force the deletion with -D:

Terminal
git branch -D feature/wip

You cannot delete the branch you are currently on. Switch to another branch first, then delete it. For remote branch deletion, see the delete local and remote Git branch guide.

Filtering by Merged State

git branch can filter branches by whether they are fully merged into the current branch. List branches that have been merged:

Terminal
git branch --merged

This is the safest way to find branches that can be deleted without losing history. Before deleting anything, preview the filtered list:

Terminal
git branch --merged | grep -vE '^\*|^[[:space:]]*(main|develop)$'

The grep filter keeps the current branch and the protected main and develop branches out of the list. If the output looks correct, pipe the same list to git branch -d:

Terminal
git branch --merged | grep -vE '^\*|^[[:space:]]*(main|develop)$' | xargs -r git branch -d

To list branches that still have unmerged work, use --no-merged:

Terminal
git branch --no-merged

These are the branches you should review before deleting, because they contain commits that are not on your current branch.

Filtering Remote Branches Too

--merged and --no-merged accept a reference argument. To check against main from the remote:

Terminal
git branch --merged origin/main

This is useful in CI or cleanup scripts where you want to see which local branches are safely merged into the shipped code, regardless of which branch you are currently on.

Setting an Upstream Branch

An upstream branch tells Git which remote branch your local branch tracks for git pull and git push. To set it on an existing branch, use --set-upstream-to (or -u):

Terminal
git branch --set-upstream-to=origin/main main
output
Branch 'main' set up to track remote branch 'main' from 'origin'.

To remove tracking information from a branch, use --unset-upstream:

Terminal
git branch --unset-upstream main

When you push a new branch to a remote for the first time, pass -u to git push to create the remote branch and set it as the upstream in one step:

Terminal
git push -u origin feature/login

Showing the Current Branch

To print just the name of the branch you are on, use --show-current:

Terminal
git branch --show-current
output
main

This is the right command for shell prompts and scripts. It prints nothing if you are in a detached HEAD state, which scripts can detect and handle.
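A small sketch of that detached-HEAD handling in a script:

```shell
branch=$(git branch --show-current)
if [ -z "$branch" ]; then
  echo "detached HEAD (no branch checked out)"
else
  echo "on branch $branch"
fi
```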

Copying a Branch

git branch -c copies a branch, including its reflog and configuration, to a new name:

Terminal
git branch -c old-branch new-branch

The original branch is kept. Use the capital -C to force the copy and overwrite an existing target branch. Copying is less common than creating a fresh branch, but it is useful when you want to preserve the history and config of a branch under a new name.

Sorting the Branch List

By default, branches are listed alphabetically. The --sort option changes the order. To sort by the date of the most recent commit:

Terminal
git branch --sort=-committerdate

The minus sign reverses the order, so the most recently updated branch appears first. This is handy when you return to a project after a break and want to see what you were working on last.

Quick Reference

Command Description
git branch List local branches
git branch -a List local and remote-tracking branches
git branch -r List only remote-tracking branches
git branch -v Show the latest commit on each branch
git branch -vv Show upstream tracking and ahead/behind counts
git branch NAME Create a new branch at HEAD
git branch NAME START Create a branch at a specific commit or tag
git branch -m NEW Rename the current branch
git branch -m OLD NEW Rename another branch
git branch -d NAME Delete a merged branch
git branch -D NAME Force-delete a branch with unmerged work
git branch --merged List branches merged into the current branch
git branch --no-merged List branches with unmerged work
git branch -u origin/main Set the upstream of the current branch
git branch --show-current Print the current branch name
git branch --sort=-committerdate Sort by most recent commit

FAQ

What is the difference between git branch and git checkout?
git branch manages branches: creating, listing, renaming, and deleting. git checkout (and its modern replacement git switch) changes which branch you are on. Creating and switching in one step uses git switch -c or git checkout -b.

How do I delete a remote branch?
git branch -d only deletes local branches. To delete a branch on the remote, run git push origin --delete BRANCH_NAME. The delete local and remote Git branch guide covers the full flow, including cleanup of stale tracking references.

Why does Git say “branch is not fully merged”?
The branch has commits that are not reachable from your current branch. Git is protecting you from losing work. Review the commits with git log BRANCH_NAME, merge them if needed, and then use -d again. If you are certain the commits are not needed, force the delete with -D.

How do I see which branch a remote branch tracks locally?
Run git branch -vv. The output includes each local branch’s upstream in brackets along with its ahead and behind counts relative to that upstream.

Can I list branches sorted by who committed most recently?
Yes. Use git branch --sort=-committerdate to sort branches by the timestamp of their most recent commit, with the freshest branch first. This is a fast way to see what is actively being worked on.

Conclusion

git branch covers the full lifecycle of a branch: creating it, renaming it, keeping track of what it points at, and deleting it when the work is done. Combined with git switch for moving between branches and git merge for integrating changes, it is the foundation of any Git workflow.

For related Git commands, see the git diff guide for reviewing changes and the git log guide for inspecting branch history.

timedatectl Cheatsheet

Basic Status

Check the current time, date, time zone, and sync state.

Command Description
timedatectl Show current time, time zone, and NTP status
timedatectl status Show the same formatted status explicitly
timedatectl show Show status in key-value format
timedatectl show --property=Timezone --value Print the current time zone only
timedatectl show --property=NTPSynchronized --value Print whether the clock is synchronized

Time Zones

List and change time zone settings.

Command Description
timedatectl list-timezones List all available time zones
timedatectl list-timezones | grep -i berlin Filter the time zone list
sudo timedatectl set-timezone Europe/Berlin Set the system time zone
sudo timedatectl set-timezone Etc/UTC Switch the system back to UTC
timedatectl show --property=Timezone --value Confirm the current time zone in scripts

NTP Synchronization

Enable, disable, and inspect time synchronization.

Command Description
sudo timedatectl set-ntp true Enable automatic time sync
sudo timedatectl set-ntp false Disable automatic time sync
timedatectl timesync-status Show the current NTP server, offset, and poll interval
timedatectl show-timesync Show timesync details in key-value format
timedatectl show --property=NTP --value Show whether NTP is enabled

Set Time and Date

Disable NTP first when setting the clock manually.

Command Description
sudo timedatectl set-ntp false Turn off NTP before manual changes
sudo timedatectl set-time '2026-04-21 14:30:00' Set both date and time
sudo timedatectl set-time '14:30:00' Set the time only
sudo timedatectl set-time '2026-04-21' Set the date only
sudo timedatectl set-ntp true Re-enable NTP after manual changes

RTC and Hardware Clock

Control whether the RTC uses UTC or local time.

Command Description
timedatectl show --property=LocalRTC --value Check whether the RTC uses local time
sudo timedatectl set-local-rtc 0 Set the RTC to UTC
sudo timedatectl set-local-rtc 1 Set the RTC to local time
sudo timedatectl set-local-rtc 1 --adjust-system-clock Switch RTC mode and adjust the system clock
timedatectl Check the "RTC in local TZ" line in the status output

Script-Friendly Output

Extract single values for shell scripts and automation.

Command Description
timedatectl show --property=Timezone --value Get the current time zone
timedatectl show --property=LocalRTC --value Get the RTC mode
timedatectl show --property=CanNTP --value Check whether NTP is supported
timedatectl show --property=NTP --value Check whether NTP is enabled
timedatectl show --property=NTPSynchronized --value Check whether the clock is synchronized
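For example, a monitoring snippet built on these properties might look like this (a sketch; assumes a systemd host where timedatectl is available):

```shell
# Warn when the system clock is not NTP-synchronized
if [ "$(timedatectl show --property=NTPSynchronized --value)" != "yes" ]; then
  echo "warning: system clock is not NTP-synchronized" >&2
fi
```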

Remote and Container Use

Run timedatectl against another host or a local container.

Command Description
timedatectl -H user@server status Check time settings on a remote host over SSH
timedatectl -H root@server set-timezone UTC Change the remote host time zone
timedatectl -M mycontainer status Check time settings in a local container
timedatectl -M mycontainer show --property=Timezone --value Print the container time zone only

Quick Fixes

Use these when timedatectl does not behave as expected.

Command Description
sudo timedatectl set-ntp false Fix the "Automatic time synchronization is enabled" error before set-time
timedatectl timesync-status Check which server is syncing the clock
timedatectl --no-pager Print directly without opening a pager
sudo date -s '2026-04-21 14:30:00' Set the clock on non-systemd systems where timedatectl does not work

Related Guides

Use these articles for deeper explanations and step-by-step instructions.

Guide Description
How to Set or Change the Time Zone in Linux Change the system time zone with timedatectl, tzdata, or /etc/localtime
date Command in Linux Read and set the system clock with the traditional date command
journalctl Cheatsheet Inspect time sync and service logs from the systemd journal

Bash Strict Mode: set -euo pipefail Explained

By default, a Bash script will keep running after a command fails, treat unset variables as empty strings, and hide errors inside pipelines. That is convenient for one-off commands in an interactive shell, but inside a script it is a recipe for silent corruption: a failed backup step that still reports success, an unset path that expands to rm -rf /, a curl | sh pipeline that swallows a 500 error.

Bash strict mode is a short set of flags you put at the top of a script to make the shell fail loudly instead. This guide explains each flag in set -euo pipefail, shows what it changes with real examples, and covers the cases where you need to turn it off.

The Strict Mode Line

You will see this line at the top of many modern shell scripts:

script.sh
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

Each flag addresses a different class of silent failure:

  • set -e exits immediately when a command returns a non-zero status.
  • set -u treats references to unset variables as an error.
  • set -o pipefail makes a pipeline fail if any command in it fails, not just the last.
  • IFS=$'\n\t' narrows word splitting so unquoted expansions do not split on spaces.

You can enable the options separately, but in practice they are almost always used together.

set -e: Exit on Error

Without set -e, a failing command in a script does not stop execution. The script happily continues to the next line:

no-strict.sh
#!/usr/bin/env bash
cp /does/not/exist /tmp/dest
echo "Backup complete"

Running it prints both the error and the success message:

Terminal
./no-strict.sh
output
cp: cannot stat '/does/not/exist': No such file or directory
Backup complete

The script exits with status 0, so any caller thinks the backup worked. Adding set -e changes the behavior:

strict.sh
#!/usr/bin/env bash
set -e
cp /does/not/exist /tmp/dest
echo "Backup complete"
Terminal
./strict.sh
echo $?
output
cp: cannot stat '/does/not/exist': No such file or directory
1

The script stops at the failing cp, never prints “Backup complete”, and exits with a non-zero status. Anyone scheduling this through cron or a CI pipeline now gets a real failure signal.

Opting Out for a Single Command

Sometimes you expect a command to fail and want to handle the result yourself. Append || true to tell set -e to ignore the exit status of that one command:

sh
grep "pattern" config.txt || true

You can also use if or &&, which set -e treats as deliberate checks:

sh
if ! grep -q "pattern" config.txt; then
 echo "pattern missing"
fi

This is the canonical way to check for something without exiting the script when it is not there.

set -u: Fail on Unset Variables

A classic shell footgun is a typo in a variable name. Without strict mode, Bash silently expands the misspelled variable to an empty string:

unset.sh
#!/usr/bin/env bash
TARGET_DIR="/var/backups"
rm -rf "$TARGE_DIR/old"

The script deletes /old because $TARGE_DIR is empty. With set -u, the same script exits before the rm runs:

unset-strict.sh
#!/usr/bin/env bash
set -u
TARGET_DIR="/var/backups"
rm -rf "$TARGE_DIR/old"
output
./unset-strict.sh: line 4: TARGE_DIR: unbound variable

Handling Optional Variables

Environment variables that may or may not be set need a default to play nicely with set -u. The ${VAR:-default} syntax provides one without touching the original:

sh
PORT="${PORT:-8080}"
echo "Listening on $PORT"

If $PORT is unset or empty, the script uses 8080. If it is set, the original value is kept.

set -o pipefail: Catch Failures Inside Pipelines

By default, the exit status of a pipeline is the exit status of the last command. Everything to the left can fail silently:

sh
curl https://bad.example.com/data | tee data.txt

If curl cannot reach the server, for example because the hostname does not resolve, the pipeline still exits 0 because tee wrote its (empty) input successfully. (Note that curl treats an HTTP error such as a 404 as success unless you pass -f.) With set -o pipefail, the pipeline's exit status becomes that of the rightmost command that exited with a non-zero status:

pipefail.sh
#!/usr/bin/env bash
set -o pipefail
curl https://bad.example.com/data | tee data.txt
echo "Exit: $?"
output
curl: (6) Could not resolve host: bad.example.com
Exit: 6

This is the one flag that fixes the most “my script said it succeeded but clearly did not” bugs, and it is the one most often forgotten when people add just set -e.
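When you need the individual exit statuses rather than just the combined one, Bash stores them in the PIPESTATUS array immediately after the pipeline runs:

```shell
#!/usr/bin/env bash
set -o pipefail
false | true
# $? is 1 under pipefail; PIPESTATUS holds each command's own status
echo "pipeline exit: $? parts: ${PIPESTATUS[*]}"
```

PIPESTATUS is overwritten by the next command, so read it on the line right after the pipeline.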

IFS: Safer Word Splitting

The Internal Field Separator (IFS) controls how Bash splits unquoted expansions into words. The default is space, tab, and newline, which means a filename with a space in it gets split into two arguments:

sh
files="one two.txt three.txt"   # two files: "one two.txt" and "three.txt"
for f in $files; do
 echo "$f"
done
output
one
two.txt
three.txt

Setting IFS=$'\n\t' removes the space from the separator list. You still get clean splitting on newlines (useful for line-oriented output such as from find) and on tabs (useful for TSV data), but spaces inside values are preserved.

Even with a narrower IFS, always quote your expansions ("$var", "${array[@]}"). IFS tightening is a safety net, not a replacement for quoting.
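Here is the same kind of loop with the stricter IFS, using newline-separated filenames that contain spaces:

```shell
#!/usr/bin/env bash
IFS=$'\n\t'
files=$'report one.txt\nreport two.txt'
for f in $files; do
  echo "-> $f"
done
# prints:
# -> report one.txt
# -> report two.txt
```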

When Strict Mode Gets in the Way

Strict mode is opinionated, and a few situations push back. The most common one is reading a file line by line with a while loop and a counter:

sh
count=0
while read -r line; do
 count=$((count + 1))
done < input.txt

That works fine, but if you increment with ((count++)) inside set -e, the script exits on the first iteration because ((count++)) returns the pre-increment value (0), which Bash treats as a failure. Use count=$((count + 1)) or ((count++)) || true to stay compatible.
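You can see the difference directly:

```shell
# Fails: ((count++)) evaluates to 0, which set -e treats as an error
bash -ec 'count=0; ((count++)); echo "reached"' || echo "exited early"

# Succeeds: an arithmetic assignment always returns status 0
bash -ec 'count=0; count=$((count + 1)); echo "reached $count"'
```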

Another common case is probing for a command:

sh
if ! command -v jq >/dev/null; then
 echo "jq is not installed"
 exit 1
fi

Using command -v inside an if is safe even under set -e, because set -e does not trigger on commands in conditional contexts.

If you need to temporarily disable a flag for a specific block, turn it off and back on:

sh
set +e
some_flaky_command
result=$?
set -e

This is cleaner than sprinkling || true everywhere when you have a block of commands that need softer error handling.

A Minimal Strict Script Template

Use this as a starting point for new scripts:

template.sh
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

# Script body goes here.

It is four lines, and it catches the majority of silent failures that cause shell scripts to misbehave in production.

Quick Reference

Flag What it does
set -e Exit when any command returns a non-zero status
set -u Treat unset variables as an error
set -o pipefail A pipeline fails if any command in it fails
IFS=$'\n\t' Word-split only on newline and tab, not on spaces
set +e / set +u Turn the matching flag back off for a block
command || true Ignore the exit status of a single command
${VAR:-default} Provide a default for optional variables

Troubleshooting

Script exits silently with no error message
A command is failing under set -e without printing anything. Run the script with bash -x script.sh to trace execution and see which line aborts.

unbound variable on a variable that sometimes is set
Replace the reference with ${VAR:-} to provide an empty default, or ${VAR:-fallback} to provide a meaningful one.

Pipeline fails on a command that is supposed to stop early
Commands like head can cause the producer to receive SIGPIPE, which pipefail treats as a failure. Either redesign the pipeline or wrap the producer in || true when the early exit is expected.

Strict mode breaks a sourced script
The set flags apply to the current shell. If you source a script that expects the old behavior, either fix the sourced script or wrap the source call between set +e and set -e.

FAQ

Should every Bash script use strict mode?
For scripts that run unattended (cron jobs, CI steps, deployment scripts), yes. For short interactive helpers, the flags are still useful, but the cost of a missed edge case is lower.

Does strict mode work in sh or dash?
set -e and set -u are POSIX, so they work in any compliant shell. set -o pipefail is a Bash extension that also works in Zsh and Ksh, but not in pure POSIX sh or dash.

Is set -e really that unreliable?
set -e has well-known edge cases, especially around functions and subshells. It is not a replacement for explicit error handling in critical paths, but combined with pipefail and -u it catches far more bugs than it causes.

How do I pass set -euo pipefail to a one-liner?
Use bash -euo pipefail -c 'your command' when invoking Bash from another program such as ssh or a Makefile.

Conclusion

set -euo pipefail and a tighter IFS catch many of the silent failures that make shell scripts hard to trust. Pair strict mode with clear error handling using Bash functions and a proper shebang line when you want scripts to fail early and predictably.

git fetch vs git pull: What Is the Difference?

Sooner or later, every Git user runs into the same question: should you use git fetch or git pull to get the latest changes from the remote? The two commands look similar, and both talk to the remote repository, but they do very different things to your working branch.

This guide explains the difference between git fetch and git pull, how each command works, and when to use one over the other.

Quick Reference

If you want to… Use
Download remote changes without touching your branch git fetch
Download and immediately integrate remote changes git pull
Preview incoming commits before merging git fetch + git log HEAD..origin/main --oneline
Update your branch but refuse merge commits git pull --ff-only
Rebase local commits on top of remote changes git pull --rebase

What git fetch Does

git fetch downloads commits, branches, and tags from a remote repository and stores them locally under remote-tracking references such as origin/main or origin/feature-x. It does not touch your working directory, your current branch, or the index.

Terminal
git fetch origin
output
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
From github.com:example/project
4a1c2e3..9f7b8d1 main -> origin/main
a6d4c02..1e2f3a4 feature-x -> origin/feature-x

The output shows which remote branches were updated. After the fetch, the local branch main is still pointing to whatever commit it was on before. Only the remote-tracking branch origin/main has moved.

You can then inspect what changed before doing anything with it:

Terminal
git log main..origin/main

This is the safe way to look at new work without merging or rebasing yet. It gives you a preview of what the remote has that your branch does not.

What git pull Does

git pull is a convenience command. Under the hood, it runs git fetch followed by an integration step (a merge by default, or a rebase if configured). In other words:

Terminal
git pull origin main

is roughly equivalent to:

Terminal
git fetch origin
git merge origin/main

After git pull, your current branch has moved forward, and your working directory may contain new files, updated files, or merge conflicts that need resolving. The change is immediate and affects your checked-out branch.

Side-By-Side Comparison

The difference between the two commands comes down to what they change in your repository.

Aspect git fetch git pull
Downloads new commits from remote Yes Yes
Updates remote-tracking branches (origin/*) Yes Yes
Updates your current local branch No Yes
Modifies the working directory No Yes
Can create merge conflicts No Yes
Safe to run at any time Yes Only when ready to integrate

git fetch is read-only from your branch’s perspective. git pull actively changes your branch.

When to Use git fetch

Use git fetch when you want to see what changed on the remote without integrating anything yet. This is useful before you start new work, before rebasing a feature branch, or when you want to inspect a teammate’s branch safely.

For example, after fetching, you can compare your branch with the remote like this:

Terminal
git log HEAD..origin/main --oneline

This shows commits that exist on the remote but not in your current branch. Running git fetch often is cheap and has no side effects on your work, which is why many developers do it regularly throughout the day.

When to Use git pull

Use git pull when you are ready to bring remote changes into your current branch and keep working on top of them. The common flow looks like this:

Terminal
git checkout main
git pull origin main

This is the quickest way to sync a branch with its upstream when you know the integration will be straightforward, such as on a shared main branch where you rarely have local commits. git pull always updates the branch you currently have checked out, so it is worth confirming where you are before you run it.

If you want a cleaner history without merge commits, configure pull to rebase:

Terminal
git config --global pull.rebase true

From that point on, git pull runs git fetch followed by git rebase instead of git merge. Your local commits are replayed on top of the fetched changes, producing a linear history.

Avoiding Merge Surprises

The biggest reason to prefer git fetch over git pull in day-to-day work is control. A bare git pull on a branch where you have local commits can create a merge commit, introduce conflicts, or rewrite history during a rebase, all in one step.

A safer pattern is:

Terminal
git fetch origin
git log HEAD..origin/main --oneline
git merge origin/main

You fetch first, review what is coming, then decide whether to merge, rebase, or defer the integration. For active feature branches, this workflow avoids most of the “what just happened to my branch” moments.

If you already ran git pull and need to inspect what happened, start with:

Terminal
git log --oneline --decorate -n 5

This helps you confirm whether Git created a merge commit, fast-forwarded the branch, or rebased your local work.

If the pull created a merge commit that you do not want, ORIG_HEAD often points to the commit your branch was on before the pull. In that case, you can reset back to it:

Terminal
git reset --hard ORIG_HEAD
Warning
git reset --hard discards uncommitted changes. Use it only when you are sure you do not need the current state of your working tree.

Useful Flags

A few flags make both commands more predictable in real workflows.

  • git fetch --all - Fetch from every configured remote, not only origin.
  • git fetch --prune - Remove remote-tracking branches that no longer exist on the remote.
  • git fetch --tags - Download tags in addition to branches.
  • git pull --rebase - Rebase your local commits on top of the fetched changes instead of merging.
  • git pull --ff-only - Update the branch only if Git can fast-forward it; prevents unexpected merge commits.

--ff-only is especially useful on shared branches. If your branch cannot move forward by simply advancing the pointer, the pull fails and you decide what to do next.

FAQ

Does git fetch modify any local files?
No. It only updates remote-tracking references such as origin/main. Your branches, working directory, and staging area stay the same.

Is git pull the same as git fetch plus git merge?
By default, yes. With pull.rebase set to true (or --rebase on the command line), it runs git fetch followed by git rebase instead.

Which one should I use daily?
Prefer git fetch for visibility and git pull --ff-only (or git pull --rebase) for the integration step. A bare git pull works, but it hides two operations behind one command.

How do I see what git fetch downloaded?
Use git log main..origin/main to see commits on the remote that are not yet in your local branch, or git diff main origin/main to compare the content directly.

Can git fetch cause conflicts?
No. Conflicts only happen when you merge or rebase the fetched commits into your branch. Fetching itself is conflict-free.

Conclusion

git fetch lets you inspect remote changes without touching your branch, while git pull fetches and integrates those changes in one step. If you want more control and fewer surprises, fetch first, review the incoming commits, and then merge or rebase on your own terms. For more Git workflow details, see the git log command and git diff command guides.

sha256sum and md5sum Commands: Verify File Integrity in Linux

When you download an ISO image, a backup archive, or a large release tarball, there is no easy way to tell by looking whether the file arrived intact. A single flipped bit can break a boot image or turn a compressed archive into junk, and a compromised mirror can serve a tampered file that looks legitimate. The fix is to compare a cryptographic fingerprint of the file against a value the publisher has signed or posted somewhere you trust.

This guide shows how to use sha256sum and md5sum to generate, compare, and verify checksums on Linux, and when to use each one.

sha256sum and md5sum Syntax

Both commands follow the same form:

txt
sha256sum [OPTIONS] [FILE]...
md5sum [OPTIONS] [FILE]...

Without options, each command prints a hex digest followed by two spaces and the file name. Pass -c to verify files against a list of previously generated checksums.

sha256sum vs md5sum

The two tools do the same job: they read a file and print a fixed-length fingerprint. The difference is the algorithm and, by extension, how safe the result is against intentional tampering.

md5sum uses the MD5 algorithm and produces a 128-bit digest. MD5 is fast but has been broken for years: it is possible to construct two different files that share the same MD5 hash. Treat it as a checksum for accidental corruption only, not for authenticity or security.

sha256sum uses SHA-256 from the SHA-2 family and produces a 256-bit digest. It is the default choice for verifying downloads, release artifacts, and anything where the threat model includes a malicious middle party. Most Linux distributions publish SHA256SUMS files next to their ISO images for exactly this reason.

When in doubt, use sha256sum. Reach for md5sum only when a publisher provides MD5 values and nothing stronger, or when you need quick parity checks between known-good files.

Generating a Checksum for a File

To produce a SHA-256 digest for a single file, pass it as an argument:

Terminal
sha256sum ubuntu-24.04.2-desktop-amd64.iso
output
5e38b55d57d94ff029719342357325ed3bda38fa80054f9330dc789cd2d43931  ubuntu-24.04.2-desktop-amd64.iso

The output is one line: the hex digest, two spaces, and the file name. The same file always produces the same digest, so you can run the command again after a copy or a download and compare the values by eye.
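That determinism is easy to confirm from the shell; the string being hashed here is arbitrary:

```shell
# Hash the same input twice; the digests are identical.
a=$(printf '%s' "hello" | sha256sum | awk '{print $1}')
b=$(printf '%s' "hello" | sha256sum | awk '{print $1}')
test "$a" = "$b" && echo "stable digest: $a"
# → stable digest: 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```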

md5sum behaves the same way:

Terminal
md5sum ubuntu-24.04.2-desktop-amd64.iso
output
2e3720b76b2f9f96edc43ec4d87d7d52  ubuntu-24.04.2-desktop-amd64.iso

Notice that the MD5 digest is shorter. That is the 128-bit hash encoded in 32 hex characters, compared with 64 hex characters for SHA-256.

Generating Checksums for Multiple Files

Both commands accept any number of file arguments and print one line per file:

Terminal
sha256sum *.tar.gz
output
b2b09c1e04b2a3a4c5d6e7f890123456789abcdef0123456789abcdef0123456  backup-2026-04-01.tar.gz
c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8091a2b3c4d5e6f708192a3b4c5d6e7f8  backup-2026-04-08.tar.gz

To save the output to a checksum file that you can share or verify later, redirect it:

Terminal
sha256sum *.tar.gz > SHA256SUMS

The resulting SHA256SUMS file can be published alongside the archives, and anyone who downloads them can run a single command to confirm that their copy matches.
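A quick round trip shows the generate-then-verify flow end to end; the file names and contents are invented for the demo:

```shell
# Generate a checksum file for two small files, then verify it.
set -e
tmp=$(mktemp -d)
cd "$tmp"
echo "alpha" > a.txt
echo "beta"  > b.txt

sha256sum a.txt b.txt > SHA256SUMS
sha256sum -c SHA256SUMS
# → a.txt: OK
# → b.txt: OK
```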

Verifying a File Against a Known Checksum

The most common task is to check that a download matches the digest the publisher posted. Distributions usually ship a SHA256SUMS file that lists every release file and its digest. Change into the directory that holds both the archive and the checksum file, then pass -c:

Terminal
sha256sum -c SHA256SUMS
output
ubuntu-24.04.2-desktop-amd64.iso: OK
ubuntu-24.04.2-live-server-amd64.iso: OK

sha256sum reads each line of SHA256SUMS, recomputes the digest for the named file, and prints OK when they match. Any line whose file is missing or whose digest differs prints a clear error and causes the command to exit with a non-zero status, which is convenient in scripts.

When you only care about one file out of many, grep the relevant line into the check:

Terminal
grep "ubuntu-24.04.2-desktop-amd64.iso" SHA256SUMS | sha256sum -c -

The trailing - tells sha256sum to read the checksum list from standard input. This keeps the verification scoped to a single file without creating a second checksum file.

Comparing a File to a Published Digest

Sometimes the publisher does not provide a full SHA256SUMS file but instead shows a single digest on a release page. You can compare it directly without creating a file:

Terminal
echo "5e38b55d57d94ff029719342357325ed3bda38fa80054f9330dc789cd2d43931  ubuntu-24.04.2-desktop-amd64.iso" | sha256sum -c -
output
ubuntu-24.04.2-desktop-amd64.iso: OK

The key detail is the double space between the digest and the file name. That is the exact format sha256sum produces and expects, and a single space will cause the check to fail.
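The same on-the-fly verification works for any file. This sketch (file name and contents invented for the demo) computes a digest, builds the checksum line itself with the required two spaces, and pipes it into the check:

```shell
# Verify one file against a digest string constructed on the fly.
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'hi\n' > note.txt

digest=$(sha256sum note.txt | awk '{print $1}')
echo "$digest  note.txt" | sha256sum -c -
# → note.txt: OK
```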

Quiet and Warn Modes

During an automated check, the per-file OK lines can be noisy. The --quiet option suppresses successful lines so only failures appear:

Terminal
sha256sum -c --quiet SHA256SUMS

If every file passes, the command prints nothing and exits with status 0. If a file fails, you see a single failure line and a non-zero exit status, which fits well in a CI job or a backup script.
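A small sketch shows how the exit status drives a script's decision. The file name and contents are invented, and the file is deliberately tampered with after checksumming:

```shell
# Tamper with a file after checksumming it; verification fails and the
# non-zero exit status selects the script's error branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
echo "payload" > f.bin
sha256sum f.bin > SHA256SUMS
echo "tampered" > f.bin

if sha256sum -c --quiet SHA256SUMS >/dev/null 2>&1; then
    echo "verified"
else
    echo "checksum mismatch detected"
fi
# → checksum mismatch detected
```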

To flag lines in a checksum file that are not formatted correctly, add --warn. This is helpful when the file was assembled by hand and you want to be sure every entry parses.

Common Options

The flags below are the ones you are likely to use day to day:

  • -c, --check - Read digests from a file and verify each one.
  • -b, --binary - Mark files as binary in the output (default on Linux).
  • -t, --text - Read files in text mode (rare on Linux, kept for portability).
  • --quiet - Suppress OK lines when checking.
  • --status - Print nothing; rely on the exit status alone.
  • --ignore-missing - Skip files listed in the digest file that are not present.
  • --tag - Output BSD-style tagged format, useful when mixing hash algorithms.

md5sum accepts the same flags, which makes it easy to swap one command for the other when the algorithm changes.

Quick Reference

Command Description
sha256sum file.iso Generate a SHA-256 checksum for one file
md5sum file.iso Generate an MD5 checksum for one file
sha256sum *.tar.gz > SHA256SUMS Save checksums for multiple files
sha256sum -c SHA256SUMS Verify files against a checksum list
grep "file.iso" SHA256SUMS | sha256sum -c - Verify one file from a checksum list
sha256sum -c --quiet SHA256SUMS Show only failures during verification

Verifying a Download End to End

Putting the pieces together, a typical download check looks like this:

Terminal
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.2-desktop-amd64.iso
wget https://releases.ubuntu.com/24.04/SHA256SUMS
sha256sum -c --ignore-missing SHA256SUMS

The --ignore-missing flag keeps the check focused on the file you actually downloaded instead of failing on every other release listed in SHA256SUMS.

For full confidence, also verify the signature on SHA256SUMS itself using the publisher’s GPG key. A digest is only as trustworthy as the source you got it from, so a signed checksum file closes the loop.

Troubleshooting

Checksum mismatch on a freshly downloaded file
Re-download the file, ideally from a different mirror. The most common cause is a truncated transfer or a network error. If the second download still fails, the file on the mirror may be stale or tampered with, and you should report it to the project.

No such file or directory when running with -c
The names in the checksum file are resolved relative to the current directory. Change into the directory that holds the files, or edit the checksum file to use paths that match where the files live.

improperly formatted checksum line warnings
A single space between the digest and the file name instead of two, trailing whitespace, or Windows line endings will all trip the parser. Run dos2unix on the file or recreate it with sha256sum > SHA256SUMS to reset the format.

The MD5 digest matches but you still do not trust the file
You are right to be cautious. MD5 collisions are practical, so a matching MD5 only rules out accidental corruption. Ask the publisher for a SHA-256 digest or a signed checksum file.

FAQ

Which is faster, sha256sum or md5sum?
md5sum is faster, sometimes noticeably so on large files. The speed difference rarely matters on modern hardware, and it is not a good reason to pick MD5 over SHA-256 for security-sensitive checks.

Can I use sha256sum on a directory?
Not directly. Hash tools operate on files. To produce a digest that represents an entire directory, pipe a deterministic listing such as find ... -type f -print0 | sort -z | xargs -0 sha256sum through another sha256sum.
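As a sketch, assuming a small docs directory created just for the demo, the pipeline looks like this:

```shell
# Digest a directory: hash each file, sort the listing deterministically,
# then hash the combined output into a single fingerprint.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/docs"
echo "one" > "$tmp/docs/a.txt"
echo "two" > "$tmp/docs/b.txt"

cd "$tmp"
find docs -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum
```

The sort step matters: find returns files in filesystem order, which is not stable across machines, so sorting first keeps the final digest reproducible.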

Where do I find the expected digest for a Linux distro ISO?
Every major distribution publishes a SHA256SUMS or SHA512SUMS file on its download page, usually along with a detached GPG signature. Prefer those files over digests shown in third-party blog posts.

Is sha256sum available by default on Linux?
Yes. Both sha256sum and md5sum ship with the GNU coreutils package, which is installed on every mainstream Linux distribution.

Conclusion

sha256sum should be the default for verifying downloads and backups, with md5sum reserved for quick corruption checks when the publisher provides nothing stronger. When you pair a SHA-256 digest with a signed checksum file and a trusted key, you can answer the question that matters most: did I get the file the publisher actually shipped?

PostgreSQL Cheatsheet

Basic Syntax

Core PostgreSQL command forms.

Command Description
psql Open an interactive PostgreSQL shell using local defaults
psql -U user -d dbname Connect as a specific user to a specific database
psql -h host -p 5432 -U user -d dbname Connect to a remote PostgreSQL server
psql -c "SQL_STATEMENT" Run one SQL command and exit
sudo -u postgres psql Open psql as the local postgres superuser

Connect and Switch

Common ways to connect and move between databases.

Command Description
sudo -u postgres psql Connect locally as the postgres system user
psql -U app_user -d app_db Connect to app_db as app_user
psql "host=localhost port=5432 dbname=app_db user=app_user" Connect with a connection string
\c app_db Switch to another database inside psql
\conninfo Show the current connection details

Databases

Create, list, rename, and remove databases.

Command Description
CREATE DATABASE app_db; Create a new database
CREATE DATABASE app_db OWNER app_user; Create a database owned by a specific role
\l List databases
ALTER DATABASE app_db RENAME TO app_prod; Rename a database
DROP DATABASE app_db; Delete a database

Roles and Users

Create login roles and inspect existing roles.

Command Description
CREATE ROLE app_user; Create a role without login
CREATE ROLE app_user WITH LOGIN PASSWORD 'strong_password'; Create a login role
CREATE USER app_user WITH PASSWORD 'strong_password'; Shortcut for a login role
ALTER ROLE app_user WITH PASSWORD 'new_password'; Change a role password
\du List roles and attributes

Grant and Revoke Privileges

Give or remove access at the database, schema, and table levels.

Command Description
GRANT CONNECT ON DATABASE app_db TO app_user; Allow a role to connect to a database
GRANT USAGE, CREATE ON SCHEMA public TO app_user; Allow schema access and object creation
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE orders TO app_user; Grant table privileges
REVOKE INSERT, UPDATE ON TABLE orders FROM app_user; Remove selected table privileges
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO app_user; Grant defaults for future tables

Table and Schema Introspection

Inspect schemas, tables, columns, and query results.

Command Description
\dn List schemas
\dt List tables in the current search path
\dt public.* List tables in the public schema
\d orders Describe a table, view, or sequence
SELECT * FROM orders LIMIT 10; Preview rows from a table

psql Meta-Commands

Useful built-in psql commands for daily administration.

Command Description
\? Show psql meta-command help
\h CREATE ROLE Show SQL help for one statement
\x Toggle expanded output for wide rows
\timing Toggle query timing display
\q Quit psql

Backup and Restore

Common logical backup and restore commands.

Command Description
pg_dump -U app_user -d app_db > app_db.sql Export a database as plain SQL
pg_dump -Fc -U app_user -d app_db -f app_db.dump Create a custom-format backup
psql -U app_user -d app_db < app_db.sql Restore a plain SQL dump
pg_restore -U app_user -d app_db app_db.dump Restore a custom-format dump
pg_dumpall > cluster.sql Back up all databases and global objects

Version and Service Checks

Quick checks for server version and service status.

Command Description
SELECT version(); Show the PostgreSQL server version
psql --version Show the client version
SHOW server_version; Show the server version only
sudo systemctl status postgresql Check the PostgreSQL service state
sudo systemctl restart postgresql Restart the PostgreSQL service

Related Guides

Use these guides for full PostgreSQL walkthroughs.

Guide Description
PostgreSQL User Management: Create Users and Grant Privileges Full guide to roles, passwords, and grants
How to Check the PostgreSQL Version Find the installed and running PostgreSQL version
How to Install PostgreSQL on Ubuntu 20.04 Install PostgreSQL on Ubuntu
How to Install PostgreSQL on Debian 10 Install PostgreSQL on Debian
How to Install PostgreSQL on CentOS 8 Install PostgreSQL on CentOS

PostgreSQL User Management: Create Users and Grant Privileges

When you run a PostgreSQL database in production, you rarely want every application and every developer to connect as the postgres superuser. A clean setup gives each service its own login role with a scoped set of privileges, so a bug or a leaked password cannot touch unrelated data.

PostgreSQL handles this with roles. A role can represent a single user, a group, or both at the same time, and privileges are granted to roles rather than to raw login accounts. This guide explains how to create roles, set passwords, grant and revoke privileges, and clean up roles you no longer need.

Roles vs Users in PostgreSQL

Historically, PostgreSQL had separate CREATE USER and CREATE GROUP statements. Modern versions replaced both with a single concept: the role. A role with the LOGIN attribute can connect to the server, which makes it a user. A role without LOGIN is typically used as a group that other roles inherit privileges from.

In practice, CREATE USER is still valid and is treated as a shortcut for CREATE ROLE ... LOGIN. We will use both forms in this guide.

Connecting to PostgreSQL

All the commands below run inside the psql shell. On most systems you can open it as the postgres system user:

Terminal
sudo -u postgres psql

You will see a prompt like this:

output
postgres=#

Every SQL statement ends with a semicolon. If you forget it, psql keeps waiting for more input.

Creating a Role

The simplest form of CREATE ROLE takes just a name:

sql
CREATE ROLE linuxize;

This role exists but cannot log in yet and has no password.

To create a login role with a password, use LOGIN:

sql
CREATE ROLE linuxize_login WITH LOGIN PASSWORD 'strong_password_here';

The equivalent shortcut is:

sql
CREATE USER linuxize_user WITH PASSWORD 'strong_password_here';

Both statements produce the same result. Use whichever form reads more clearly in your scripts.

Warning
Do not commit passwords to version control. Keep them in a .env file, a secrets manager, or a provisioning tool, and make sure the file is listed in .gitignore.

Useful Role Attributes

You can combine several attributes in a single CREATE ROLE statement. These are the ones you will reach for most often:

  • LOGIN - The role can connect to the server.
  • PASSWORD 'secret' - Sets the login password.
  • SUPERUSER - Grants full access, equivalent to the postgres role. Use sparingly.
  • CREATEDB - The role may create new databases.
  • CREATEROLE - The role may create and modify other roles.
  • INHERIT - The role automatically inherits privileges of roles it is a member of. This is the default.
  • VALID UNTIL 'timestamp' - Expires the password at the given time.
  • CONNECTION LIMIT n - Caps the number of concurrent connections for this role.

For example, to create a login role that can also create databases and is limited to ten concurrent connections:

sql
CREATE ROLE app_owner WITH LOGIN PASSWORD 'strong_password_here' CREATEDB CONNECTION LIMIT 10;

Listing Existing Roles

To see every role on the server, use the \du meta-command:

sql
\du
output
                             List of roles
 Role name |                         Attributes
-----------+------------------------------------------------------------
 app_owner | Create DB, 10 connections
 linuxize  | Cannot login
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS

The Attributes column tells you what each role can do. Roles created without LOGIN are flagged Cannot login; an otherwise blank column means the role has only the default attributes, such as INHERIT.

Changing a Role

Use ALTER ROLE to change attributes, rename a role, or reset a password. To change the password:

sql
ALTER ROLE linuxize WITH PASSWORD 'new_password_here';

To grant an additional attribute:

sql
ALTER ROLE linuxize CREATEDB;

To remove one, prefix it with NO:

sql
ALTER ROLE linuxize NOCREATEDB;

Creating a Database for the Role

It is common to give each application its own database owned by its login role. Create the database and assign ownership in one statement:

sql
CREATE DATABASE linuxize_app OWNER linuxize;

The owner of a database has full control over it, so the role can create tables, schemas, and other objects without any further grants.

Granting Privileges

Privileges in PostgreSQL are granted at several levels: database, schema, table, column, sequence, and function. The syntax is consistent across levels:

sql
GRANT privilege_list ON object_type object_name TO role_name;

Database-level Privileges

To let a role connect to a database and create objects in it:

sql
GRANT CONNECT ON DATABASE linuxize_app TO linuxize;
GRANT CREATE ON DATABASE linuxize_app TO linuxize;

CONNECT controls whether the role can open a session to the database. CREATE controls whether it can create schemas.

Schema-level Privileges

To let a role create and use objects inside a schema:

sql
GRANT USAGE ON SCHEMA public TO linuxize;
GRANT CREATE ON SCHEMA public TO linuxize;

USAGE is required for almost every operation inside the schema. CREATE lets the role add new tables, views, or functions.

Table-level Privileges

Table privileges match the common SQL operations:

sql
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE orders TO linuxize;

To grant every available table privilege at once:

sql
GRANT ALL PRIVILEGES ON TABLE orders TO linuxize;

To grant the same privileges on every existing table in a schema:

sql
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO linuxize;

This only affects tables that exist at the moment you run the command. Tables created later are not covered.

Default Privileges for Future Objects

To cover tables created in the future, set default privileges:

sql
ALTER DEFAULT PRIVILEGES IN SCHEMA public
 GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO linuxize;

From this point on, any new table created in public by the current role inherits the listed privileges for linuxize.

Revoking Privileges

REVOKE is the mirror of GRANT and uses the same structure:

sql
REVOKE INSERT, UPDATE, DELETE ON TABLE orders FROM linuxize;

To strip every privilege on a table:

sql
REVOKE ALL PRIVILEGES ON TABLE orders FROM linuxize;

Revoking a privilege only affects the privileges you previously granted. Ownership is a separate concept: the owner of an object always keeps full control over it, regardless of grants.

Group Roles

To manage privileges for a team, create a role without LOGIN and add members to it:

sql
CREATE ROLE readonly;
GRANT CONNECT ON DATABASE linuxize_app TO readonly;
GRANT USAGE ON SCHEMA public TO readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;

GRANT readonly TO linuxize;

linuxize now inherits every privilege granted to readonly. To remove the membership later:

sql
REVOKE readonly FROM linuxize;

This pattern scales better than granting privileges role by role, because you change permissions in one place.

Deleting a Role

To drop a role, use DROP ROLE:

sql
DROP ROLE linuxize;

PostgreSQL refuses the command if the role still owns any objects or holds any privileges. To clean these up first, reassign the objects and drop dependent privileges in each database where the role owns objects:

sql
REASSIGN OWNED BY linuxize TO postgres;
DROP OWNED BY linuxize;
DROP ROLE linuxize;

REASSIGN OWNED transfers ownership of objects owned by the role in the current database, and DROP OWNED removes any remaining privileges there. If the role owns objects in other databases, repeat the cleanup in each one before dropping the role.

Quick Reference

Task Statement
Create a login role CREATE ROLE name WITH LOGIN PASSWORD 'pass';
Create a user (shortcut) CREATE USER name WITH PASSWORD 'pass';
List roles \du
Change a password ALTER ROLE name WITH PASSWORD 'new';
Add attribute ALTER ROLE name CREATEDB;
Remove attribute ALTER ROLE name NOCREATEDB;
Create database with owner CREATE DATABASE db OWNER name;
Grant table privileges GRANT SELECT, INSERT ON TABLE t TO name;
Grant all table privileges in schema GRANT ... ON ALL TABLES IN SCHEMA public TO name;
Default privileges for new tables ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ... TO name;
Revoke privileges REVOKE ALL PRIVILEGES ON TABLE t FROM name;
Add role to group GRANT group_role TO name;
Drop role and its objects REASSIGN OWNED BY name TO postgres; DROP OWNED BY name; DROP ROLE name;

Troubleshooting

ERROR: permission denied for schema public
Grant USAGE on the schema: GRANT USAGE ON SCHEMA public TO role_name;. On PostgreSQL 15 and later, the public schema is no longer writable by default, so you may also need GRANT CREATE ON SCHEMA public.

ERROR: role "name" cannot be dropped because some objects depend on it
Reassign and drop the role’s objects first: REASSIGN OWNED BY name TO postgres; DROP OWNED BY name;, then run DROP ROLE name; again.

FATAL: password authentication failed for user
Check that the role has LOGIN and that the authentication method in pg_hba.conf matches the client (for example, md5 or scram-sha-256). Reload the configuration with SELECT pg_reload_conf(); after editing pg_hba.conf.
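For reference, a pg_hba.conf entry that lets this guide's example role connect over local TCP with SCRAM authentication might look like this (the database and role names are the examples used in this guide; adjust the address and method to your environment):

```txt
# TYPE  DATABASE      USER      ADDRESS         METHOD
host    linuxize_app  linuxize  127.0.0.1/32    scram-sha-256
```

Entries are matched top to bottom, so place more specific rules above broad catch-all lines.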

New tables are not visible to a read-only user
Grants on ALL TABLES IN SCHEMA only cover tables that exist at grant time. Set ALTER DEFAULT PRIVILEGES for the owning role so new tables inherit the read permissions automatically.

FAQ

What is the difference between CREATE USER and CREATE ROLE?
CREATE USER is a shortcut for CREATE ROLE ... LOGIN. Both create the same kind of object. Use CREATE USER when you want a login account and CREATE ROLE when you are creating a group role without login.

Can a single role be both a user and a group?
Yes. A role with LOGIN can still be granted to other roles. Members inherit its privileges as long as INHERIT is set, which is the default.

How do I change the owner of an existing database?
Use ALTER DATABASE db_name OWNER TO new_owner;. The new owner gains full control over the database and its objects.

Do I need to restart PostgreSQL after creating a role?
No. Role changes take effect immediately. Only changes to pg_hba.conf or the main server configuration require a reload or restart.

Conclusion

Roles are the single unit of access control in PostgreSQL. Give each application its own login role, group shared privileges into group roles, and use ALTER DEFAULT PRIVILEGES so future objects stay consistent with your current policy.

For related reading, see how to check the PostgreSQL version and the installation guide for your distribution under the PostgreSQL tag .

git cherry-pick Command: Apply Commits from Another Branch

Sometimes the change you need is already written, just on the wrong branch. A hotfix may land on main when it also needs to go to a maintenance branch, or a useful commit may be buried in a feature branch that you do not want to merge wholesale. In that situation, git cherry-pick lets you copy the effect of a specific commit onto your current branch.

This guide explains how git cherry-pick works, how to apply one or more commits safely, and how to handle the conflicts that can appear along the way.

Syntax

The general syntax for git cherry-pick is:

txt
git cherry-pick [OPTIONS] COMMIT...
  • OPTIONS - Flags that change how Git applies the commit.
  • COMMIT - One or more commit hashes, branch references, or commit ranges.

git cherry-pick replays the changes introduced by the selected commit on top of your current branch. Git creates a new commit, so the result has a different commit hash even when the file changes are the same.

Cherry-Picking a Single Commit

Start by finding the commit you want to copy. This example lists the recent commits on a feature branch:

Terminal
git log --oneline feature/auth
output
a3f1c92 Fix null pointer in auth handler
d8b22e1 Add login form validation
7c4e003 Refactor session logic

The output gives you the abbreviated commit hashes. In this case, a3f1c92 is the fix we want to move.

Switch to the target branch before running git cherry-pick:

Terminal
git switch main
git cherry-pick a3f1c92
output
[main 9b2d4f1] Fix null pointer in auth handler
Date: Tue Apr 14 10:42:00 2026 +0200
1 file changed, 2 insertions(+)

Git applies the change from a3f1c92 to main and creates a new commit, 9b2d4f1. The subject line is the same, but the commit hash is different because the parent commit is different.

Cherry-Picking Multiple Commits

If you need more than one non-consecutive commit, pass each hash in the order you want Git to apply them:

Terminal
git cherry-pick a3f1c92 d8b22e1

Git creates a separate new commit for each one. This works well when you need a few targeted fixes but do not want the rest of the source branch.

For a range of consecutive commits, use the range notation with the oldest commit first:

Terminal
git cherry-pick 7c4e003^..a3f1c92

This tells Git to include 7c4e003 and every commit after it up to a3f1c92. If you omit the caret, the starting commit itself is excluded:

Terminal
git cherry-pick 7c4e003..a3f1c92

That form applies every commit after 7c4e003 through a3f1c92.

Applying Changes Without Committing

Sometimes you want the changes from a commit, but not an automatic commit for each one. Use --no-commit (or -n) to apply the changes to your working tree and staging area without creating the commit yet:

Terminal
git cherry-pick --no-commit a3f1c92

This is useful when you want to combine several small fixes into one commit on the target branch, or when you need to edit the files before committing.

After reviewing the result, create the commit yourself:

Terminal
git status
git commit -m "Backport auth null-check fix"

This gives you more control over the final commit message and lets you group related backports together.

Recording Where the Commit Came From

For maintenance branches and backports, it is often helpful to keep a reference to the original commit. Use -x to append the source commit hash to the new commit message:

Terminal
git cherry-pick -x a3f1c92

Git adds a line like this to the new commit message:

output
(cherry picked from commit a3f1c92...)

That extra line makes future audits easier, especially when you need to prove that a fix on a release branch came from a reviewed change on another branch.

Cherry-Picking a Merge Commit

Cherry-picking a regular commit is straightforward, but merge commits need one extra option. Git must know which parent to treat as the main line:

Terminal
git cherry-pick -m 1 MERGE_COMMIT_HASH
  • -m 1 - Use the first parent as the base, which is usually the branch that received the merge.
  • -m 2 - Use the second parent instead.

If you are not sure which parent is which, inspect the history first:

Terminal
git log --oneline --graph

Cherry-picking merge commits is more advanced and easier to get wrong. If the goal is to bring over an entire merged feature, a normal merge is often clearer than cherry-picking the merge commit itself.

Resolving Conflicts

If the target branch has changed in the same area of code, Git may stop and ask you to resolve a conflict:

output
error: could not apply a3f1c92... Fix null pointer in auth handler
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".

Open the conflicted file and look for the conflict markers:

output
<<<<<<< HEAD
return session.getUser();
=======
if (session == null) return null;
return session.getUser();
>>>>>>> a3f1c92 (Fix null pointer in auth handler)

Edit the file to keep the final version you want, then stage it and continue:

Terminal
git add src/auth.js
git cherry-pick --continue

If you decide the commit is not worth applying after all, abort the operation:

Terminal
git cherry-pick --abort

git cherry-pick --abort puts the branch back where it was before the cherry-pick started.

A Safe Backport Workflow

When you cherry-pick onto a release or maintenance branch, slow down and make the source obvious. A simple workflow looks like this:

Terminal
git switch release/1.4
git pull --ff-only
git cherry-pick -x a3f1c92
git status

The important part is the sequence. Start from the branch that needs the fix, make sure it is up to date, cherry-pick with -x, then review and test the branch before pushing it. This avoids the common mistake of copying a fix into an outdated branch and shipping an untested backport.
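Here is the same workflow condensed into a self-contained sketch you can run in a temporary directory. The repository, branch, and commit names are illustrative, and the `git pull --ff-only` step is omitted because there is no remote:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -qb main
git config user.email demo@example.com
git config user.name demo

echo 'v1' > app.txt
git add app.txt && git commit -qm "initial release"
git branch release/1.4

# The fix lands on main first.
echo 'fix' >> app.txt
git commit -qam "Fix null pointer in auth handler"
fix_hash=$(git rev-parse HEAD)

# Backport with -x so the message records the source commit.
git switch -q release/1.4
git cherry-pick -x "$fix_hash" >/dev/null

# The new commit's message carries the audit trail.
git log -1 --format=%B
```

The last command prints the original subject line plus the `(cherry picked from commit ...)` trailer pointing at the commit on main.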

If the picked commit depends on earlier refactors or new APIs that are not present on the target branch, stop there. In that case, either copy the prerequisite commits too or recreate the fix manually.

When to Use git cherry-pick

git cherry-pick is a good fit when you need a precise change without the rest of the branch:

  • Backporting a bug fix from main to a release branch
  • Recovering one useful commit from an abandoned feature branch
  • Moving a small fix that was committed on the wrong branch
  • Pulling a reviewed change into a hotfix branch without merging unrelated work

Avoid it when the target branch needs the full context of the source branch. If the commit depends on earlier commits, shared refactors, or schema changes, a merge or rebase is usually the cleaner option.

Troubleshooting

error: could not apply ... during cherry-pick
The target branch has conflicting changes. Resolve the files Git marks as conflicted, stage them with git add, then run git cherry-pick --continue.

Cherry-pick created duplicate-looking history
That is normal. Cherry-pick copies the effect of a commit, not the original object. The new commit has a different hash because it has a different parent.

The picked commit does not build on the target branch
The commit likely depends on earlier work that is missing from the target branch. Inspect the source branch history with git log and either cherry-pick the prerequisites too or reimplement the change manually.

You picked the wrong commit
If the cherry-pick already completed, use git revert on the new commit. If the operation is still in progress, use git cherry-pick --abort.

Quick Reference

For a printable quick reference, see the Git cheatsheet .

Task Command
Pick one commit git cherry-pick COMMIT
Pick several specific commits git cherry-pick C1 C2 C3
Pick a consecutive range including the first commit git cherry-pick A^..B
Apply changes without committing git cherry-pick --no-commit COMMIT
Record the source commit in the message git cherry-pick -x COMMIT
Continue after resolving a conflict git cherry-pick --continue
Skip the current commit in a sequence git cherry-pick --skip
Abort the operation git cherry-pick --abort
Pick a merge commit git cherry-pick -m 1 MERGE_COMMIT

FAQ

What is the difference between git cherry-pick and git merge?
git cherry-pick copies the effect of selected commits onto your current branch. git merge joins two branch histories and brings over all commits that are missing from the target branch.

Does cherry-pick change the original commit?
No. The source commit stays exactly where it is. Git creates a new commit on your current branch with the same file changes.

Should I use -x every time?
Not always, but it is a good habit for backports and maintenance branches. It gives you a clear link back to the original commit.

Can I cherry-pick commits from another remote branch?
Yes. Fetch the remote branch first, then cherry-pick the commit hash you want. You can inspect the incoming history with git log or review the file changes with git diff before applying anything.

Conclusion

git cherry-pick is the tool to reach for when one commit matters more than the branch it came from. Use it for targeted fixes, keep -x in mind for backports, and fall back to merge or rebase when the change depends on broader branch context.

ufw Command in Linux: Uncomplicated Firewall Reference

If your server is open to the public network, it needs a firewall. On Ubuntu and Debian, the easiest way to set one up is with ufw. This tool sits on top of iptables (or nftables on newer systems) and replaces complex rules with simple, easy-to-read commands.

This guide walks through the ufw command, from enabling the firewall and setting default policies to writing rules for ports, services, and specific addresses.

ufw Syntax

The general form of the command is:

txt
sudo ufw [OPTIONS] COMMAND [ARGS]
  • OPTIONS - Flags such as --dry-run to preview what a rule would do.
  • COMMAND - The action, for example enable, allow, deny, delete, or status.
  • ARGS - The rule body, such as a port number, a service name, or a full rule specification.

ufw changes firewall rules, so almost every invocation needs sudo.

Checking the Firewall Status

Before you touch the rules, check whether the firewall is running and what is already in place:

Terminal
sudo ufw status

If ufw is inactive, you will see:

output
Status: inactive

This confirms that ufw is installed but not currently enforcing any firewall rules.

Once active, the same command lists the rules:

output
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
80/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)

For a numbered view that is easier to reference when deleting rules, use:

Terminal
sudo ufw status numbered

And for a more detailed output that shows default policies and logging level:

Terminal
sudo ufw status verbose

Enabling and Disabling the Firewall

The first time you enable ufw, make sure you have already allowed SSH. Enabling the firewall over an SSH session without an allow ssh rule can lock you out of the server: even if your current session survives, any new SSH connection will be blocked.

Allow SSH first:

Terminal
sudo ufw allow ssh

Then turn the firewall on:

Terminal
sudo ufw enable
output
Firewall is active and enabled on system startup

At this point, ufw starts enforcing the rules immediately and will come back automatically after a reboot.

To stop the firewall and clear it from the startup sequence:

Terminal
sudo ufw disable

A disabled ufw keeps the rules you defined. When you re-enable it, the same rules come back.

Default Policies

Default policies decide what happens to traffic that does not match any rule. The usual hardening is to deny incoming traffic and allow outgoing traffic:

Terminal
sudo ufw default deny incoming
sudo ufw default allow outgoing

After running these two commands, you only need to open the specific inbound ports your server should expose. Outbound traffic still flows freely, which is what most servers need.

You can also set the default for forwarded traffic, which matters if the host acts as a router or runs containers:

Terminal
sudo ufw default deny routed

Allowing Connections

The allow command is the one you will use most often. It accepts a service name, a port, a protocol, or a full rule.

Allow a Service by Name

ufw reads /etc/services and understands common service names:

Terminal
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https

Each command opens the matching port (22, 80, and 443) using the protocol listed in /etc/services, which is TCP for all three.

Allow a Specific Port

To open a port directly, pass the port number:

Terminal
sudo ufw allow 8080

By default, this opens the port for both TCP and UDP. To limit it to one protocol, add /tcp or /udp:

Terminal
sudo ufw allow 8080/tcp
sudo ufw allow 53/udp

Allow a Port Range

Port ranges require an explicit protocol:

Terminal
sudo ufw allow 6000:6007/tcp
sudo ufw allow 6000:6007/udp

The colon separates the start and the end of the range, both inclusive.

Allow Traffic From a Specific Address

To accept connections from a single IP address, use from:

Terminal
sudo ufw allow from 203.0.113.25

To restrict that to a single service, add to any port:

Terminal
sudo ufw allow from 203.0.113.25 to any port 22

Allow Traffic From a Subnet

CIDR notation works the same way for whole networks:

Terminal
sudo ufw allow from 192.168.1.0/24 to any port 3306

This is a common pattern for databases: the port stays closed to the internet and is only reachable from your internal network.

Allow Traffic on a Specific Interface

To tie a rule to a network interface, add on INTERFACE:

Terminal
sudo ufw allow in on eth1 to any port 3306

The in keyword applies the rule to inbound traffic on eth1. Use out for outbound traffic.

Denying Connections

The deny command is the mirror of allow and takes the same arguments:

Terminal
sudo ufw deny 23
sudo ufw deny from 198.51.100.77

When the default incoming policy is already deny, you usually do not need to write explicit deny rules for closed ports. Explicit denies are useful when you want to block a specific address while keeping a port open to everyone else.

To actively refuse connections instead of silently dropping them, use reject:

Terminal
sudo ufw reject 23

A reject sends an ICMP message back to the sender, while a deny drops the packet with no response.

Rate Limiting SSH

ufw can throttle repeated connections from the same IP, which is handy for SSH brute-force protection:

Terminal
sudo ufw limit ssh

The limit rule denies a connection if the source address attempts six or more connections in 30 seconds. This is a quick way to slow down password-guessing bots without installing a full intrusion-prevention tool.

Deleting Rules

There are two common ways to remove a rule. The first is to repeat the rule with delete in front:

Terminal
sudo ufw delete allow 8080/tcp

The second is to delete by rule number:

Terminal
sudo ufw status numbered
output
 To Action From
-- ------ ----
[ 1] 22/tcp ALLOW IN Anywhere
[ 2] 80/tcp ALLOW IN Anywhere
[ 3] 8080/tcp ALLOW IN Anywhere
Terminal
sudo ufw delete 3

ufw asks for confirmation before removing the rule. Keep in mind that after a deletion, the numbering shifts for the remaining rules, so always check the numbered status again before deleting another rule.

For a deeper walkthrough, see how to list and delete UFW firewall rules .

Application Profiles

Services that come with their own ufw profile can be referenced by name. List them with:

Terminal
sudo ufw app list
output
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH

To open the ports that a profile covers, pass its name to allow:

Terminal
sudo ufw allow 'Nginx Full'

Quote profile names that contain spaces. To see what ports a profile includes, use:

Terminal
sudo ufw app info 'Nginx Full'

Dry Run

Before you commit to a change, you can preview the iptables rules ufw would add with --dry-run:

Terminal
sudo ufw --dry-run allow 8080/tcp

No rule is created. The output shows the exact lines ufw would write, which is useful when you are writing rules over SSH and want to double-check them first.

Logging

ufw can log packets that match rules, which helps when troubleshooting. The log level can be adjusted:

Terminal
sudo ufw logging on
sudo ufw logging medium

Valid levels are off, low, medium, high, and full. ufw logs through the kernel log facility, so the exact destination depends on your system logging setup. On many Ubuntu systems with rsyslog, you will see entries in /var/log/ufw.log, and you can often inspect them with journalctl as well.

Reset the Firewall

To clear every rule and set ufw back to its fresh state, run:

Terminal
sudo ufw reset
Warning
ufw reset disables the firewall and removes every rule, including the one that allows SSH. If you are connected over SSH, add an allow rule for SSH and re-enable the firewall right after the reset, or you will lose access the moment you turn it back on.

IPv6 Support

On most modern systems, ufw can manage IPv6 rules as well as IPv4 rules. Check the setting in /etc/default/ufw:

/etc/default/ufw
IPV6=yes

When IPV6=yes, generic rules such as sudo ufw allow 22/tcp are created for both IPv4 and IPv6. Rules that include an explicit address stay specific to that address family. In the status output, the IPv6 rules appear with (v6) next to them.

Quick Reference

For a printable quick reference, see the ufw cheatsheet .

Task Command
Show status sudo ufw status verbose
Show numbered rules sudo ufw status numbered
Enable firewall sudo ufw enable
Disable firewall sudo ufw disable
Default deny incoming sudo ufw default deny incoming
Default allow outgoing sudo ufw default allow outgoing
Allow SSH sudo ufw allow ssh
Allow port sudo ufw allow 8080/tcp
Allow port range sudo ufw allow 6000:6007/tcp
Allow from IP sudo ufw allow from 203.0.113.25
Allow from subnet sudo ufw allow from 192.168.1.0/24
Rate-limit SSH sudo ufw limit ssh
Delete rule by number sudo ufw delete 3
List app profiles sudo ufw app list
Reset all rules sudo ufw reset

Troubleshooting

ufw: command not found
Install the ufw package with sudo apt install ufw on Ubuntu or Debian. On Fedora, RHEL, and derivatives, firewalld is the default and ufw is rarely used.

Locked out after enabling the firewall over SSH
The default deny policy blocked your SSH session. You will need console access to the server. Once in, run sudo ufw allow ssh and sudo ufw reload to restore access. The usual safeguard is to allow SSH before the first enable.

Rule added but traffic still blocked
Confirm the rule is in the right direction (inbound vs outbound) and on the right interface. Check with sudo ufw status verbose. If Docker is running, it writes its own iptables rules that can bypass ufw.

Changes do not take effect
After manual edits to /etc/ufw/ files, reload the firewall with sudo ufw reload. Commands issued through ufw apply immediately and do not need a reload.

FAQ

Is ufw the same as iptables?
No. ufw is a front end that generates iptables (or nftables) rules for you. The kernel still enforces the rules through its packet filter, but you do not need to edit the raw tables yourself.

Does ufw start automatically after reboot?
Yes, once you run sudo ufw enable, the service is registered with systemd and starts on boot. You can confirm with systemctl status ufw.

How do I allow a port for a single IP address?
Use the from and to any port form: sudo ufw allow from 203.0.113.25 to any port 22. This keeps the port closed to the rest of the internet and only opens it for that one address.

What is the difference between deny and reject?
deny drops the packet without any reply. reject sends an ICMP message back to the sender telling them the connection was refused. Reject is slightly friendlier to well-behaved clients but makes the server more visible to scans.

How do I undo everything and start over?
Run sudo ufw reset. It disables the firewall and removes every rule. Remember to re-add the SSH rule before re-enabling, especially on a remote server.

Conclusion

ufw keeps firewall management readable: allow what you need, deny what you do not, and lean on the default policies to cover the rest. If you want step-by-step instructions for setting it up on a new server, see how to set up a firewall with UFW on Ubuntu 24.04 .

du Cheatsheet

Basic Usage

Common ways to check directory and file sizes.

Command Description
du Show disk usage of the current directory and its subdirectories
du file Show disk usage of a single file
du dir1 dir2 Show disk usage for multiple paths
du -h dir Human-readable sizes (K, M, G)
du -sh dir Show only the total size of a directory
du -sh * Show the size of every item in the current directory
sudo du -sh /var Run with sudo to read root-owned paths

Size Formats

Control how sizes are printed.

Option Description
-h Human-readable, powers of 1024 (K, M, G)
--si Human-readable, powers of 1000 (SI units)
-k Display sizes in 1K blocks (default on most systems)
-m Display sizes in 1M blocks
-BG Display sizes in 1G blocks
-B SIZE Use SIZE-byte blocks, for example -BM or -B512
-b Equivalent to --apparent-size --block-size=1

Summary and Totals

Reduce noise or add a grand total row.

Command Description
du -s dir Show only the total for the given directory
du -sh dir Total in human-readable format
du -c dir1 dir2 Add a total line at the bottom
du -csh /var/log /var/lib Human-readable totals plus a combined grand total
du -a dir Include every file in the listing, not just directories

Depth Control

Limit how deep du descends into the directory tree.

Command Description
du -h --max-depth=1 dir Show only the first level of subdirectories
du -h --max-depth=2 dir Show two levels deep
du -h -d 1 dir Short form of --max-depth=1
du -h --max-depth=0 dir Show only the directory total (same as -s)

Excluding Files

Skip paths or patterns from the report.

Option Description
--exclude=PATTERN Skip files and directories matching the shell pattern
--exclude-from=FILE Read exclude patterns from a file
-x Stay on the same filesystem (skip mounted ones)

Examples:

Command Description
du -sh --exclude="*.log" /var Exclude .log files from the total
du -sh --exclude=node_modules ~/projects Skip node_modules directories
du -xsh / Total of the root filesystem only, ignoring mounts
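To confirm an exclude pattern works, compare the totals with and without it. A quick sketch with throwaway files of known size:

```shell
set -e
work=$(mktemp -d)
cd "$work"
mkdir logs
dd if=/dev/zero of=logs/app.log bs=1024 count=100 status=none
dd if=/dev/zero of=logs/notes.txt bs=1024 count=10 status=none

du -sk logs                    # total includes the 100K log file
du -sk --exclude='*.log' logs  # total drops to roughly the text file
```

The second total is smaller because every path matching `*.log` is skipped before the sizes are summed.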

Sorting and Top N

Combine du with sort and head to find the largest items.

Command Description
du -h dir | sort -rh Sort entries by size, largest first
du -h dir | sort -rh | head -10 List the 10 largest items
du -h --max-depth=1 / | sort -rh | head -20 Largest top-level directories under /
du -ah dir | sort -rh | head -10 Largest individual files and directories
du -sh */ | sort -rh Sort current directory’s children by size
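Putting the pipeline together, this sketch creates two directories of known size and lists them largest first (all paths are throwaway):

```shell
set -e
work=$(mktemp -d)
cd "$work"
mkdir big small
dd if=/dev/zero of=big/data bs=1024 count=200 status=none
dd if=/dev/zero of=small/data bs=1024 count=10 status=none

# -k gives fixed 1K units so the numeric sort is unambiguous.
du -sk -- */ | sort -rn
```

With human-readable output you would use `sort -rh` instead, which understands the K/M/G suffixes.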

Apparent vs Disk Usage

du reports allocated blocks by default. Use these flags to see actual byte counts.

Option Description
--apparent-size Show how many bytes the file contains, not how much it occupies on disk
-b Apparent size in bytes (shorthand for --apparent-size --block-size=1)

Examples:

Command Description
du -sh --apparent-size /var/log Apparent size of /var/log
du -sb file Exact byte count of a file
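The difference is easiest to see with a sparse file: it has a large logical size, but almost no blocks allocated on disk. This sketch assumes a filesystem with sparse-file support, which covers ext4, XFS, and tmpfs:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# 10 MiB of logical size, but no data was ever written.
truncate -s 10M sparse.img

du -k sparse.img                  # allocated blocks: near zero
du -k --apparent-size sparse.img  # logical size: 10240
```

The default output answers "how much disk does this consume", while --apparent-size answers "how many bytes does this file contain".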

Counting and Time

Less common but useful options.

Option Description
-L Follow all symbolic links
-P Never follow symbolic links (default)
-l Count hard-linked files every time they appear, instead of once
--time Show last modification time of the file or directory
--time=atime Show the access time instead of modification time
-0 Use a NUL character as the line separator (for piping into xargs -0)

Related Guides

Use these references for deeper disk usage workflows.

Guide Description
du Command in Linux Full du guide with practical examples
How to Get the Size of a File or Directory Focused walkthrough for sizing files and directories
How to Check Disk Space in Linux Using df Filesystem-level disk space reporting
Find Large Files in Linux Locate the biggest files across a tree

nslookup Cheatsheet

Basic Syntax

Core nslookup command forms.

Command Description
nslookup example.com Look up a domain using the default resolver
nslookup example.com 8.8.8.8 Query a specific DNS server
nslookup -type=mx example.com Query a specific record type
nslookup 192.0.2.1 Run a reverse DNS lookup
nslookup Start interactive mode

Common Lookups

Quick checks for hostnames and addresses.

Command Description
nslookup example.com Look up A and AAAA records using the default resolver
nslookup www.example.com Check a hostname or subdomain
nslookup localhost Verify local name resolution
nslookup 127.0.0.1 Reverse lookup for the local loopback address
nslookup 192.0.2.1 Reverse lookup for a public IP address

Record Types

Use -type to query specific DNS records.

Command Description
nslookup -type=a example.com Query IPv4 address records
nslookup -type=aaaa example.com Query IPv6 address records
nslookup -type=mx example.com Query mail exchanger records
nslookup -type=ns example.com Query authoritative name servers
nslookup -type=txt example.com Query TXT records
nslookup -type=soa example.com Query the SOA record
nslookup -type=cname www.example.com Check whether a hostname is an alias
nslookup -type=any example.com Run an ANY query

Specific DNS Servers

Compare answers from different resolvers.

Command Description
nslookup example.com 8.8.8.8 Query Google Public DNS
nslookup example.com 1.1.1.1 Query Cloudflare DNS
nslookup example.com 9.9.9.9 Query Quad9
nslookup -type=mx example.com 8.8.8.8 Query MX records from a specific resolver
nslookup -type=txt example.com 1.1.1.1 Compare TXT answers between resolvers

Interactive Mode

Run multiple queries in one session.

Command Description
nslookup Open interactive mode
set type=mx Switch the active query type to MX
set type=txt Switch the active query type to TXT
server 8.8.8.8 Change the active DNS server
example.com Query a domain after entering interactive mode
exit Leave the interactive session

Troubleshooting

Quick checks for common nslookup errors.

Issue Check
NXDOMAIN Verify the domain name and make sure it exists
SERVFAIL Try another resolver such as 8.8.8.8 or 1.1.1.1
Connection timed out; no servers could be reached Check network access and verify /etc/resolv.conf
Non-authoritative answer Normal cached response from a resolver
No answer The queried record type is not set for that name

Related Guides

Use these guides for fuller DNS troubleshooting workflows.

Guide Description
nslookup Command in Linux Full nslookup guide with practical examples
How to Use the dig Command to Query DNS in Linux Detailed dig guide for deeper DNS debugging
ping Cheatsheet Quick connectivity checks before DNS troubleshooting
IP command cheatsheet Inspect interfaces, addresses, and routes
curl Cheatsheet Test HTTP reachability after DNS resolution succeeds

nslookup Command in Linux: Query DNS Records

When a website does not load or email stops arriving, the first thing to check is whether the domain resolves to the correct address. The nslookup command is a quick way to query DNS servers and inspect the records behind a domain name.

nslookup ships with most Linux distributions and works on macOS and Windows as well. It supports both one-off queries from the command line and an interactive mode for running multiple lookups in a row.

This guide explains how to use nslookup with practical examples covering record types, reverse lookups, and troubleshooting.

Syntax

txt
nslookup [OPTIONS] [NAME] [SERVER]
  • NAME — The domain name or IP address to look up.
  • SERVER — The DNS server to query. If omitted, nslookup uses the server configured in /etc/resolv.conf.
  • OPTIONS — Query options such as -type=MX or -debug.

When called without arguments, nslookup starts in interactive mode.

Installing nslookup

On most distributions nslookup is already installed. To check, run:

Terminal
nslookup -version

If the command is not found, install it using your distribution’s package manager.

Install nslookup on Ubuntu, Debian, and Derivatives

Terminal
sudo apt update && sudo apt install dnsutils

Install nslookup on Fedora, RHEL, and Derivatives

Terminal
sudo dnf install bind-utils

Install nslookup on Arch Linux

Terminal
sudo pacman -S bind

The nslookup command is bundled with the same packages that provide dig .

Look Up a Domain Name

The simplest use is passing a domain name as an argument:

Terminal
nslookup linux.org
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

The first two lines show the DNS server that answered the query. Everything under “Non-authoritative answer” is the actual result. In this case, linux.org resolves to three IPv4 addresses.

“Non-authoritative” means the answer came from a resolver’s cache rather than directly from the domain’s authoritative name server.

Query a Specific DNS Server

By default, nslookup queries the resolver configured in /etc/resolv.conf. To query a different server, add it as the last argument.

For example, to query Google’s public DNS:

Terminal
nslookup linux.org 8.8.8.8
output
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72
Name: linux.org
Address: 172.67.73.26

This is useful when you want to compare results across different resolvers or verify whether a DNS change has propagated to public servers.

Query Record Types

By default, nslookup returns A (IPv4 address) records. Use the -type option to query other record types.

MX Records (Mail Servers)

MX records identify the mail servers responsible for receiving email for a domain:

Terminal
nslookup -type=mx google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

The number before the mail server hostname is the priority. A lower number means higher priority.

NS Records (Name Servers)

NS records show which name servers are authoritative for a domain:

Terminal
nslookup -type=ns google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com nameserver = ns1.google.com.
google.com nameserver = ns2.google.com.
google.com nameserver = ns3.google.com.
google.com nameserver = ns4.google.com.

TXT Records

TXT records store arbitrary text data, commonly used for SPF, DKIM, and domain ownership verification:

Terminal
nslookup -type=txt google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com text = "v=spf1 include:_spf.google.com ~all"
google.com text = "facebook-domain-verification=22rm551cu4k0ab0bxsw536tlds4h95"
google.com text = "docusign=05958488-4752-4ef2-95eb-aa7ba8a3bd0e"

The output may include many entries. The example above shows a subset of the TXT records returned for google.com.

AAAA Records (IPv6)

AAAA records return the IPv6 address of a domain:

Terminal
nslookup -type=aaaa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: google.com
Address: 2a00:1450:4017:818::200e

SOA Record (Start of Authority)

The SOA record contains administrative information about the domain, including the primary name server, the responsible email address, and timing parameters for zone transfers:

Terminal
nslookup -type=soa google.com
output
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com
origin = ns1.google.com
mail addr = dns-admin.google.com
serial = 897592583
refresh = 900
retry = 900
expire = 1800
minimum = 60

The serial number increments each time the zone is updated. DNS secondaries use it to decide whether they need a zone transfer.

CNAME Records

CNAME records point one domain name to another:

Terminal
nslookup -type=cname www.github.com

If a CNAME record exists, the output shows the canonical name the alias points to. If the domain does not have a CNAME record, nslookup returns No answer.

Run an ANY Query

To ask the DNS server for an ANY response, use -type=any:

Terminal
nslookup -type=any google.com

ANY queries do not reliably return every record type for a domain. Many DNS servers return only a subset of records or refuse the query entirely.

Reverse DNS Lookup

A reverse lookup finds the hostname associated with an IP address. Pass an IP address instead of a domain name:

Terminal
nslookup 208.118.235.148
output
148.235.118.208.in-addr.arpa name = ip-208-118-235-148.twdx.net.

Reverse lookups query PTR records. They are useful for verifying that an IP address maps back to the expected hostname, which matters for mail server configuration and security checks.

Interactive Mode

Running nslookup without arguments starts an interactive session where you can run multiple queries without retyping the command:

Terminal
nslookup
output
>

At the > prompt, type a domain name to look it up:

output
> linux.org
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: linux.org
Address: 104.26.14.72
Name: linux.org
Address: 104.26.15.72

You can change query settings during the session with the set command. For example, to switch to MX record lookups and then query a domain:

output
> set type=mx
> google.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
google.com mail exchanger = 10 smtp.google.com.

To change the DNS server:

output
> server 8.8.8.8
Default server: 8.8.8.8
Address: 8.8.8.8#53

Type exit to leave interactive mode.

Interactive mode is convenient when you need to test several domains or record types in a row without running separate commands each time.

Debugging DNS Issues

The -debug option shows the full query and response details, including TTL values and additional sections that nslookup normally hides:

Terminal
nslookup -debug linux.org

The debug output is verbose, but it is helpful when you need to see TTL values, check whether answers are authoritative, or trace unexpected behavior.

nslookup vs dig

Both nslookup and dig query DNS servers, but they differ in output and capabilities:

  • nslookup produces simpler, more readable output. It also has an interactive mode that is convenient for quick checks.
  • dig provides detailed, structured output with sections (QUESTION, ANSWER, AUTHORITY, ADDITIONAL) and supports advanced options like +trace for tracing the full resolution path and +dnssec for verifying DNSSEC signatures.

For quick lookups and basic troubleshooting, nslookup is often faster to type and read. For in-depth DNS debugging, dig gives you more control and detail.

Troubleshooting

nslookup returns NXDOMAIN
The domain does not exist or is misspelled. Verify the domain name and check that it is registered.

nslookup returns SERVFAIL
The DNS server could not process the query. Try a different resolver to isolate the problem:

Terminal
nslookup linux.org 1.1.1.1

If public resolvers return the correct answer, the issue is with your configured resolver.

Connection timed out; no servers could be reached
This means nslookup could not contact the DNS server. Check your network connection and verify that /etc/resolv.conf contains a reachable name server. A firewall may also be blocking outbound DNS traffic on port 53.

Non-authoritative answer appears on every query
This is normal. It means the answer came from a resolver’s cache, not directly from the domain’s authoritative server. The result is still valid.

Quick Reference

For a printable quick reference, see the nslookup cheatsheet .

Task Command
Look up a domain nslookup example.com
Query a specific DNS server nslookup example.com 8.8.8.8
Query MX records nslookup -type=mx example.com
Query NS records nslookup -type=ns example.com
Query TXT records nslookup -type=txt example.com
Query AAAA (IPv6) records nslookup -type=aaaa example.com
Query SOA record nslookup -type=soa example.com
Query CNAME record nslookup -type=cname example.com
Run an ANY query nslookup -type=any example.com
Reverse DNS lookup nslookup 192.0.2.1
Start interactive mode nslookup
Enable debug output nslookup -debug example.com

FAQ

Can I use nslookup to check DNS propagation?
Yes. Query the same domain against several public DNS servers and compare the results. For example, run nslookup example.com 8.8.8.8, nslookup example.com 1.1.1.1, and nslookup example.com 9.9.9.9. If the answers differ, the change has not fully propagated.

Is nslookup deprecated?
The ISC (the organization behind BIND) once marked nslookup as deprecated in favor of dig, but later reversed that decision. nslookup is actively maintained and included in current BIND releases. It remains a practical tool for quick DNS lookups.

What does “Non-authoritative answer” mean?
It means the response came from a caching resolver, not from one of the domain’s authoritative name servers. The data is still accurate, but it may be slightly behind if a DNS change was made very recently and the cache has not expired yet.

Conclusion

The nslookup command is a quick way to query DNS records from the command line. Use -type to look up MX, NS, TXT, AAAA, and other record types, and pass a server argument to test against a specific resolver. For deeper DNS debugging, pair it with dig .

Debian vs Ubuntu Server: Which One Should You Use?

If you are setting up a new Linux server, Debian and Ubuntu are usually the two options that come up first. Both are mature, well-supported, and used across everything from small VPS deployments to large production environments. They also share the same package format and package manager, so the day-to-day administration feels familiar on both.

The decision usually comes down to tradeoffs: Debian gives you a leaner and more conservative base, while Ubuntu gives you newer defaults, a broader support ecosystem, and easier onboarding. This guide shows where those differences matter in practice.

Shared Foundation

Ubuntu is built on top of Debian. Canonical (the company behind Ubuntu) takes a snapshot of Debian’s unstable branch every six months, applies its own patches and defaults, and publishes a new Ubuntu release.

Because of this shared lineage, the two distributions have a lot in common:

  • Both use apt and dpkg for package management.
  • Both use systemd as the init system.
  • Both use .deb packages, and many third-party vendors ship a single .deb that works on either distribution.
  • Configuration file locations, service management commands, and filesystem layout follow the same conventions.

If you already know one, you can work comfortably on the other with very little adjustment.
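As an illustration, the basic administration workflow below runs unchanged on either distribution (nginx is only a stand-in package here):

```shell
# Identical day-to-day administration on Debian and Ubuntu
sudo apt update                      # refresh package lists
sudo apt install nginx               # install a package
dpkg -L nginx                        # list files the package installed
sudo systemctl enable --now nginx    # enable and start the service (systemd on both)
sudo systemctl status nginx          # inspect the running service
```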

Release Cycle and Support

The biggest practical difference between Debian and Ubuntu is how often they release and how long each release is supported.

Debian does not follow a fixed release schedule. A new stable version ships roughly every two years, but only when the release team considers it ready. Each stable release receives approximately three years of full security support from the Debian Security Team, followed by about two more years of extended support through the Debian LTS project. Debian 13 (Trixie), released in August 2025, is the current stable version as of this writing. Debian 12 (Bookworm), released in June 2023, is now oldstable.

Ubuntu follows a predictable time-based schedule. A new version ships every six months (April and October), and every two years the April release is designated a Long-Term Support (LTS) version. LTS releases receive five years of free security updates, extendable to ten years through Ubuntu Pro (free for up to five machines for personal use). Ubuntu 24.04 LTS (Noble Numbat) is the current LTS release.

| | Debian Stable | Ubuntu LTS |
|---|---|---|
| Release cadence | ~2 years, when ready | Every 2 years (April of even years) |
| Free security support | ~3 years | 5 years |
| Extended support | ~2 years (Debian LTS) | Up to 10 years (Ubuntu Pro) |
| Current release | 13 Trixie (August 2025) | 24.04 Noble (April 2024) |

If you want a release calendar you can plan around years in advance, Ubuntu is easier to work with. If you prefer a slower release model that puts stability first, Debian is usually the better choice.

Package Freshness vs Stability

Debian Stable freezes all package versions at release time. Once Bookworm shipped, its packages only receive security patches and critical bug fixes, never feature updates. This means you will not encounter unexpected behavior changes after a routine apt upgrade, but it also means you may be running older versions of languages, databases, or runtimes for the life of the release.

Ubuntu LTS takes a similar approach to stability, but because it pulls from a more recent Debian snapshot and applies its own patches, LTS packages tend to be slightly newer at release time. Ubuntu also offers PPAs (Personal Package Archives) as an official mechanism for installing newer software outside the main repository, though mixing PPAs with production servers carries its own risks.

Both distributions offer backports repositories for users who need specific newer packages without upgrading the entire system.
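As a sketch of how this looks on Debian 13 (Trixie), you add the backports repository and then pull a single package from it with -t, while everything else stays on stable (the package name here is only an example):

```shell
# Enable the backports repository for the current stable release
echo "deb http://deb.debian.org/debian trixie-backports main" \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Install one package from backports; all other packages remain on stable
sudo apt install -t trixie-backports cockpit
```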

For server workloads where you install most software through containers or version managers (for example, nvm for Node.js or pyenv for Python), base package age matters less. The distribution becomes a stable foundation, and you manage application-level versions separately.

Default Installation and Setup

Ubuntu Server ships with a guided installer (Subiquity) that walks you through disk layout, networking, user creation, and optional snap-based packages like Docker. It installs a small but functional set of tools by default and can import your SSH keys from GitHub or Launchpad during setup.

Debian’s installer is more traditional. It offers the same configuration options, but the interface is text-based and expects you to make more decisions yourself. The resulting system is leaner; Debian installs very little beyond the base system unless you explicitly select additional task groups during installation.

If you provision servers with tools like Ansible, Terraform, or cloud-init, the installer matters less because you will not spend much time in it after the first boot. It matters more for quick manual deployments and local VMs, where Ubuntu usually gets you to a usable system faster.

Commercial Support and Cloud Ecosystem

Canonical offers paid support contracts, compliance certifications (FIPS, CIS benchmarks), and a managed services tier for Ubuntu. Ubuntu Pro extends security patching beyond the main repository and adds kernel livepatch for rebootless security updates. That matters most for teams with uptime targets, compliance requirements, or formal support needs.

Debian is entirely community-driven. There is no single company behind it, and there is no commercial support offering from the project itself. Third-party consultancies provide Debian support, but the ecosystem around it is smaller.

In the cloud, Ubuntu has a strong lead in first-party image availability. Every major provider (AWS, Google Cloud, Azure, DigitalOcean, Hetzner) offers official Ubuntu images, and many default to Ubuntu when launching a new instance. Debian images are also available on all major platforms, but Ubuntu tends to receive new platform features and optimized images first.

For container base images, both are widely used. Debian slim images are a popular choice for minimal Docker containers, while Ubuntu images are common when teams want closer parity between their server OS and their container environment.

Security

Both distributions have strong security track records. The Debian Security Team and the Ubuntu Security Team publish advisories and patches regularly, and both maintain clear processes for reporting and fixing vulnerabilities.

The key difference is scope. Ubuntu LTS includes five years of standard security maintenance for packages in the Main repository. Ubuntu Pro extends coverage to Main and Universe, effectively the full Ubuntu archive. Debian’s security team focuses on the main repository, and Debian’s LTS team covers only a subset of packages during the extended support period.

For kernel security, Ubuntu offers kernel livepatch through Ubuntu Pro, which applies critical kernel fixes without a reboot. Debian does not have an equivalent service built into the project, though third-party solutions exist.
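On an Ubuntu machine with a Pro subscription, enabling livepatch is a short sequence with the pro client. This is a sketch; it assumes you have a token from your Ubuntu Pro account, and the placeholder below must be replaced with it:

```shell
sudo pro attach <your-token>    # attach the machine to your Ubuntu Pro account
sudo pro enable livepatch       # turn on kernel livepatching
canonical-livepatch status      # verify the patch state
```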

In practice, both distributions are secure when properly maintained. The difference is in how much of the maintenance is handled for you and for how long.

When to Choose Debian

Choose Debian when you want:

  • Maximum stability with few surprises after upgrades. Debian Stable is one of the most conservative general-purpose distributions available.
  • A minimal base system that you build up yourself. Debian installs very little by default, which keeps the system lean and gives you tighter control over what runs on the server.
  • No vendor dependency. Debian is maintained by a community of volunteers and governed by a social contract. There is no single company setting the direction of the project.
  • A lightweight container base. Debian slim images are a common choice when image size matters.

Debian is often the better fit for long-running infrastructure, self-managed servers, and setups where predictability matters more than convenience.

When to Choose Ubuntu

Choose Ubuntu when you want:

  • Faster time to a working server. The installer and default configuration get you to a usable system quickly, especially on cloud platforms where Ubuntu images are often the default.
  • Longer official support. Five years of free LTS support (ten with Ubuntu Pro) gives you a wider upgrade window than Debian’s default support period.
  • Commercial backing. If your organization requires vendor support contracts, compliance certifications, or managed services, Canonical provides them.
  • Broader documentation and community. Ubuntu’s larger user base means you will usually find more tutorials, examples, and third-party guides written for it.
  • Cloud-heavy workflows. First-party cloud images and strong cloud-init support make Ubuntu an easy default for many hosted deployments.

Ubuntu is often the easier choice for cloud deployments, teams that need commercial support, and environments where onboarding speed matters.

Side-by-Side Summary

| | Debian Stable | Ubuntu LTS |
|---|---|---|
| Based on | Independent | Debian (unstable branch) |
| Governance | Community (Debian Project) | Corporate (Canonical) + community |
| Release schedule | When ready (~2 years) | Fixed (every 2 years) |
| Free support period | ~5 years (3 + 2 LTS) | 5 years (10 with Pro) |
| Package freshness | Conservative (frozen at release) | Slightly newer at release |
| Default install | Minimal | Functional with guided setup |
| Cloud image availability | Good | Excellent (often the default) |
| Commercial support | Third-party only | Canonical (Ubuntu Pro) |
| Kernel livepatch | Not built in | Available via Ubuntu Pro |
| Container base image | Popular (especially slim) | Popular |
| Package manager | apt / dpkg | apt / dpkg |
| Init system | systemd | systemd |

FAQ

Can I migrate a server from Ubuntu to Debian or vice versa?
It is technically possible but not straightforward. The distributions share the same package format, but they differ in package versions, configuration defaults, and init scripts. A clean reinstall with configuration management (Ansible, Puppet, or similar) is the safer path. Plan the migration as a new deployment rather than an in-place conversion.

Do Debian and Ubuntu run the same software?
For the most part, yes. Most common server software is available on both Debian and Ubuntu, whether through the official repositories, vendor-provided .deb packages, containers, or upstream binaries. That said, package versions, repository availability, and vendor support can differ between the two distributions and between releases.

Which is better for Docker and containers?
Both work well as Docker hosts. Installing Docker follows nearly the same steps on either distribution. For base images inside containers, Debian slim images are slightly smaller, while Ubuntu images offer closer parity with Ubuntu-based host systems. The difference is marginal for most workloads.
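The near-identical install steps can be seen in Docker's official apt repository setup, where only the distribution path (debian vs ubuntu) differs. This sketch derives both values from /etc/os-release, so the same commands work on either host:

```shell
# Detect the distribution; /etc/os-release sets ID ("debian" or "ubuntu")
# and VERSION_CODENAME (e.g. "trixie" or "noble")
. /etc/os-release

# Add Docker's signing key and repository for this distribution
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL "https://download.docker.com/linux/${ID}/gpg" \
  | sudo tee /etc/apt/keyrings/docker.asc > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/${ID} ${VERSION_CODENAME} stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list

# Install the Docker engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
```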

Which is more secure?
Neither is inherently more secure than the other. Both have dedicated security teams and fast patch turnaround. Ubuntu Pro extends patching to a wider package set and adds kernel livepatch, which can matter in environments with strict uptime or compliance requirements. On a properly maintained server, both are equally secure.

Conclusion

Debian and Ubuntu are close enough that you can run the same kinds of workloads on either one. If you want the more conservative and self-managed option, pick Debian. If you want faster onboarding, broader vendor support, and a smoother cloud path, pick Ubuntu.
