NOTE: WHEN WRITING THIS GUIDE, IT WAS BASED ON
- UBUNTU SERVER v23.1
- DOCKER ENGINE v24.0.7, build afdd53b
- DOCKER-COMPOSE v2.24.0
YOU WILL ONLY NEED A MONITOR AND A KEYBOARD FOR THE FIRST FEW STEPS; AFTER THAT, A LAPTOP OVER SSH REPLACES THEM
- Downloading and Installing Ubuntu Server
- Fix GRUB to have the dual boot
- Setting up internet connection
- Headless operation
- Installing docker and docker-compose
- Mounting folders on Synology, Mini-PC and Laptop
- Copying configuration files from Synology
- Migrating docker containers environments
- Adding a bit of security to the Mini-PC
- Easy access to the docker-compose file
- My RAM usage is Leaking
- Missed Timezone Setting
- Install Tailscale VPN
- OpenSSH
- SSH Public Keys
To begin with, I needed to download a copy of Ubuntu Server. Why this and not another distribution? Simply because it is terminal-only, with no UI, meaning fewer resources and less memory consumption.
Simply head to the download page.
The installation steps are very easy. Just follow what is on the screen, and pay attention to 4 things:
- Do not connect to the network during installation
- Make sure the minimum swap partition size is RAM size + 2GB
- Choose an easy password, as it will be prompted for a lot while setting up the mini-pc (e.g. 1234abcd, 1234, abcd ...etc)
- Choose the Ubuntu Server option, not the minimized option
In case it will be installed side-by-side with Windows, what I did was install LinuxMint (or any distro with a UI), do the partitioning during that installation, then re-install Ubuntu Server on top of the created LinuxMint partitions. I had to do that because I could not have more than 4 primary partitions (1 boot, 1 Windows, 1 system restore, 1 Ubuntu ext4), which meant a swap partition was otherwise not doable
This step is only needed if the Mini-PC came with a pre-installed Windows that needs to be kept alongside Ubuntu Server (just like my case)
There might be a simpler way, but I am not aware of one. The steps below are what I personally did, and they worked fine for me
- Download any copy of Ubuntu or LinuxMint with a UI
- Copy it to a USB and make it bootable
- Boot into this image, I used LinuxMint 21
- Once it is loaded, search for grub in the application menu and choose Boot-Repair
- Wait till it loads, then select the first option 'Recommended repair'
- The last step is to make sure 'Reinstall GRUB' is selected before clicking 'Apply'
Restart the Mini-PC, make sure the boot menu offers dual boot, and try both Windows and Ubuntu Server before proceeding further. If not, refer to this site for Boot-Repair usage.
Once the installation is successful, the mini-pc will reboot into terminal mode. Enter the username and password chosen during installation.
Type in:
sudo apt install net-tools
then
ifconfig
and take note of the ethernet adapter name. Mine was enp0s31f6
If this gave a connection error instead, that is normal: Ubuntu looks for the package online and the network is not configured yet. You can proceed with the next steps (alternatively, ip a lists the interfaces without installing anything).
Now, we need to define the network parameters
Type in
ls /etc/netplan/
and take note of the output file there. Mine was 50-cloud-init.yaml
Now, we need to edit this file and provide the network details, and assign a static IP for the mini-pc
In my case, I have to type
sudoedit /etc/netplan/50-cloud-init.yaml
A screen will open with some commented values. At the end of that file, enter the following:
network:
  ethernets:
    enp0s31f6: # the interface name already in the file; keep it, or take it from the ifconfig output above
      renderer: networkd
      addresses:
        - 192.168.1.5/24 # static IP of the mini-pc
      routes:
        - to: default
          via: 192.168.1.1 # gateway, usually the home network router IP
      nameservers:
        addresses: [8.8.8.8] # if you have AdGuard or Pi-hole installed, list its IP first and separate with a comma
Press ctrl+X, then type in y and press enter
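Since netplan rejects a file with wrong indentation, it is worth sanity-checking the structure before rebooting. A minimal sketch, using a temporary file rather than the real /etc/netplan path, with the placeholder values from above:

```shell
# Write the example netplan config to a temp file and grep for the keys
# that are most often lost to bad indentation. Values are the placeholders
# from the guide, not real settings.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
network:
  ethernets:
    enp0s31f6:
      renderer: networkd
      addresses:
        - 192.168.1.5/24
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8]
EOF
# Each required key must be present
for key in 'renderer: networkd' 'to: default' 'via: 192.168.1.1'; do
  grep -q "$key" "$CFG" || { echo "missing: $key"; exit 1; }
done
echo "netplan sketch contains the expected keys"
```

On the real machine, sudo netplan try is safer than netplan apply on a headless box, since it rolls the change back automatically if you lose connectivity and cannot confirm.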
Now connect the mini-pc to an ethernet cable, reboot it, and make sure all is working before moving to the next steps in headless mode
Then type in sudo poweroff to shut it down
Proceed with this step only when the internet is working on the mini-pc. To confirm, connect an ethernet cable and a screen, reboot, log in, and in the terminal type ping 8.8.8.8
If the connection is up, you should see responses like the ones below. Otherwise, refer to the official documentation and see which setup suits you
Now, when all is successful, you can move the mini-pc into your server rack and remove the monitor and keyboard. Boot it up, wait a few minutes, then use any terminal application to SSH into it.
Use Snowflake, a free cross-platform SSH tool available for both Windows and Linux
Once installed, it is straightforward to SSH into the mini-pc, it will look like this
All the next steps will be done through this terminal. This is called headless mode: the mini-pc is connected to the local network and accessed remotely from a laptop, rather than through a directly attached monitor and keyboard. This is how a server should operate
Ubuntu Server comes with no Docker installed. To make sure, type in sudo docker info
and you shall get output as in below. If Docker is missing, we need to install it, along with docker-compose
To install the Docker engine, as well as docker-compose, follow these steps in the terminal
- Add Docker's official GPG key
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
- Add the repository to Apt sources
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
- Install the latest version
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- Verify the success of the installation
sudo docker run hello-world
If successful, you should get something like this
If it was not successful, refer to this guide and see what suits your case
- To manage Docker as a non-root user, we will create a group named docker and add the current user to this group
sudo groupadd docker
sudo usermod -aG docker $USER
- Log out and log back in to re-evaluate the group membership, then activate the changes by
newgrp docker
- Verify everything is successful by running the docker command without sudo
docker run hello-world
If it was not successful, refer to this guide and see what suits your case
The last step is to make sure that docker is started on boot
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
(If you ever want to stop them from starting on boot, run the same two commands with disable in place of enable)
Verify all engines are installed and running by executing systemctl status docker
. You should see that the engine is active and in green as below
Verify the installed versions are the latest by executing docker --version; docker-compose --version; ctr version
You may get an outdated docker-compose version like the one below (personally, I faced this issue). In this case, you need to manually update docker-compose by following the steps below
Docker version 24.0.7, build afdd53b
Docker Compose version v1.29.2, build unknown
Client:
  Version:    1.6.27
  Revision:   a1496014c916f9e62104b33d1bb5bd03b0858e59
  Go version: go1.20.13
ctr: failed to dial "/run/containerd/containerd.sock": connection error: desc = "transport: error while dialing: dial unix /run/containerd/containerd.sock: connect: permission denied"
- Backup the current version on the host machine
sudo mv /usr/local/bin/docker-compose /usr/local/bin/docker-compose-backup
- Download the latest docker-compose, which was v2.24.0 at the time of writing
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
- Make it executable
sudo chmod +x /usr/local/bin/docker-compose
- Verify the version is updated; it should be >= 2.24.0
docker-compose --version
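If you prefer to script this check, a small hypothetical helper can parse the version string and tell whether the manual update above is still needed; the sample strings mirror the outputs shown in this section:

```shell
# Return success when a docker-compose version string is v2 or later.
# (is_compose_v2 is a hypothetical helper, not a docker command.)
is_compose_v2() {
  # Grab the first x.y.z token, e.g. "v2.24.0" -> "2.24.0"
  ver=$(echo "$1" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
  major=${ver%%.*}
  [ "$major" -ge 2 ]
}
is_compose_v2 "Docker Compose version v2.24.0" && echo "compose is v2, nothing to do"
is_compose_v2 "docker-compose version 1.29.2, build unknown" || echo "compose is v1, update it manually"
```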
In this step, we will mount the old docker folder from Synology onto the new Mini-PC drive.
First, we need to create a mapping folder on the mini-pc. If this folder is on the same partition as the Ubuntu Server, start from step #5
- Map the other drive (in my case, I have a 128gb nvme for the OS and a 500gb SATA for the docker configuration files and volume mounting). To auto-mount the 500gb partition, execute
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINTS
and take note of the partition UUID, mine is in red below
If the partitions are confusing and you want to see the partition sizes next to the names, execute sudo fdisk -l and look them up in the results above
Make sure this partition is formatted as ext4; NTFS will cause a lot of issues when used with docker volume mounting
- Make a folder to mount this partition at, by executing
mkdir /home/USER/docker
- Add this partition to the fstab to have it auto-mounted at every boot, by executing
sudoedit /etc/fstab
and adding the following line at the end of the file:
UUID=0ea7f90f-6cd8-4e10-a3d4-4d070da6da7b /home/USER/docker ext4 rw,relatime,discard 0 2
This makes sure that my other drive (500gb), is always accessible from the docker folder under my home
- Reload the changes, and apply them by
sudo systemctl daemon-reload
sudo findmnt --verify
sudo mount -a
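The fstab line above can also be generated from variables, which makes it harder to mistype the UUID. A sketch using the placeholder values from this section:

```shell
# Build the fstab entry from the UUID reported by lsblk and the mount
# point created earlier. Both values are the placeholders from the guide.
UUID="0ea7f90f-6cd8-4e10-a3d4-4d070da6da7b"
MNT="/home/USER/docker"
FSTAB_LINE="UUID=$UUID $MNT ext4 rw,relatime,discard 0 2"
echo "$FSTAB_LINE"
```

On the real machine you would append it with echo "$FSTAB_LINE" | sudo tee -a /etc/fstab and then run sudo mount -a as above.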
- Make a folder to mount the Synology docker folder to, by executing
mkdir /home/USER/syno-docker
Assuming that on the Synology (IP 192.168.1.4) the docker files are saved at /volume1/docker and shared as syno-docker, edit fstab and add the line below to the end of the file. Just change ADMIN and PASSWORD (and the share name, if yours differs)
//192.168.1.4/syno-docker /home/USER/syno-docker cifs username=ADMIN,password=PASSWORD,uid=1000,gid=1000 0 0
- Reload the changes, and apply them by
sudo systemctl daemon-reload
sudo findmnt --verify
sudo mount -a
After mapping the folders, we are now ready to copy the docker configuration files to the mini-pc by cp -r /home/USER/syno-docker/ /home/USER/docker/
The last step is to change the ownership and mode of this folder, to make it accessible to the docker group and avoid issues when loading the containers. Simply execute
sudo chown -R USER:docker /home/USER/syno-docker/
sudo chmod -R 777 /home/USER/syno-docker/
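To be confident the copy is complete before changing ownership, diff -r can compare source and destination recursively. A dry-run sketch with temp directories standing in for the real syno-docker and docker folders:

```shell
# Simulate the copy-and-verify idea: SRC stands in for the mounted
# Synology folder, DST for the local docker folder.
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/app"
echo "key=value" > "$SRC/app/.env"   # a sample config file, incl. a dotfile
cp -r "$SRC/." "$DST/"               # the "/." also picks up hidden files
# diff -r exits non-zero if anything differs, catching partial copies
diff -r "$SRC" "$DST" && echo "copy verified"
```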
Another and easier way is to install a file manager on the mini-pc in docker. I used Cloud Commander
Before creating the docker containers, make sure that all environment variables are adjusted to the mini-pc's values. In my case, I had to edit the .env file and modify the following
- LOCAL_TIME
- LOCAL_HOST
- PUID
- PGID
- DOCKER_SOCKET
- MEDIA_PATH
- PERSIST
- DOWNLOADS
- BACKUPS
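Editing the .env by hand works, but the adjustments can also be scripted with sed. A sketch against a temporary file; the variable names come from the list above, while the old and new values are made-up placeholders:

```shell
# Rewrite a few .env entries in place. The file and its values are
# illustrative; on the real host this would target the compose .env file.
ENVFILE=$(mktemp)
printf 'PUID=1026\nPGID=100\nLOCAL_HOST=192.168.1.4\n' > "$ENVFILE"
sed -i \
  -e 's/^PUID=.*/PUID=1000/' \
  -e 's/^PGID=.*/PGID=1000/' \
  -e 's/^LOCAL_HOST=.*/LOCAL_HOST=192.168.1.5/' \
  "$ENVFILE"
cat "$ENVFILE"
```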
Now, everything is ready, just execute docker-compose up -d
and all containers will be loaded.
Two final steps come in handy, which are:
- Setting a stronger password
Simply type in sudo passwd USERNAME
, where USERNAME is the current user logged in
- Changing SSH port
We need to edit the socket file that defines the SSH port. To do so, type in sudoedit /lib/systemd/system/ssh.socket
and add ListenStream=1122
to the end of the file (the SSH port will be 1122 in this example). Make sure a ListenStream line is not there already; if it is, just change the port from 22 to the desired one. Press Ctrl+X, then y, and press enter to save and exit.
Next, we need to update and apply the changes made by
sudo systemctl daemon-reload
sudo systemctl restart ssh
To make sure that the new port was successfully assigned and updated, just execute sudo lsof -i -P -n | grep LISTEN | grep ssh
. You should see the new port in place of 22 (in the photo, I left the port unchanged, for illustration only)
THE ABOVE METHOD MIGHT NOT BE A PERMANENT SOLUTION FOR UBUNTU 24.04 LTS VERSION, THEN YOU WILL NEED THE ALTERNATIVE METHOD BELOW:
- Execute
sudoedit /etc/ssh/sshd_config
- Uncomment, by removing the # at the beginning of the line that says Port 54747
- Replace the number 54747 with the desired port you are after, e.g. 1111
- check it has been changed by executing
sudo netstat -lntp | grep ssh
Lastly, log off and SSH again with the newly assigned port
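A quick way to confirm which port sshd will actually use is to read the uncommented Port line from the config. A sketch against a temp file standing in for /etc/ssh/sshd_config:

```shell
# Extract the active Port from an sshd_config-style file; commented
# defaults like "#Port 22" are deliberately ignored.
CONF=$(mktemp)
printf '#Port 22\nPort 1111\nPermitRootLogin no\n' > "$CONF"
PORT=$(grep -E '^Port ' "$CONF" | awk '{print $2}' | tail -n1)
echo "sshd will listen on port $PORT"
```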
This step is a bonus, and only applicable if you are a Linux user (Windows users should search for the equivalent, which I believe is called folder mapping via Explorer). I personally prefer to have it, as I like to access everything from my laptop in headless mode. In this step, I will mount a folder on my laptop that holds all the docker mounts and the docker-compose file on the Mini-PC. By that, I can edit the compose file, as well as create new folders for new containers, using the LinuxMint UI on my laptop instead of doing it through the terminal on the Mini-PC (easier)
To do so, we need to use NFS mounting, as this is the standard way to mount linux-to-linux folders.
- Install NFS
sudo apt update
sudo apt install nfs-kernel-server
- Edit the exports file to define the folder and the IP allowed to access the server
sudoedit /etc/exports
- Add the below line at the end of the file, assuming the mount folder on the Mini-PC is located at /home/USER/docker and the laptop has a static IP of 192.168.1.100
/home/USER/docker 192.168.1.100(rw,sync,no_subtree_check)
- Reload the changes, and apply them by
sudo systemctl restart nfs-kernel-server
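A malformed /etc/exports line makes the NFS server refuse the export, so it helps to check the "path client(options)" shape before restarting. A sketch on a temp file with the example line from above:

```shell
# Validate that the exports line matches "path client(options)".
EXPORTS=$(mktemp)
echo '/home/USER/docker 192.168.1.100(rw,sync,no_subtree_check)' > "$EXPORTS"
if grep -Eq '^/[^ ]+ [0-9.]+\([a-z_,]+\)$' "$EXPORTS"; then
  echo "exports line looks valid"
else
  echo "exports line malformed"
fi
```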
- On the laptop, install the NFS client
sudo apt update
sudo apt install nfs-common
- Make the mapping folder by
mkdir /home/USER/minipc-docker
- Check that both machines can see and communicate with each other, by executing
sudo showmount -e 192.168.1.5
and you shall see something like this
Export list for 192.168.1.5:
/home/USER/docker 192.168.1.100
- Verify the folder can be mounted by executing
sudo mount -t nfs 192.168.1.5:/home/USER/docker /home/USER/minipc-docker
- If the folder was mounted successfully, make the mounting automatic on boot by adding it to the fstab via
sudoedit /etc/fstab
and adding the following line at the end of the file:
192.168.1.5:/home/USER/docker /home/USER/minipc-docker nfs defaults 0 0
- Reload the changes, and have them ready to be auto-applied on the next boot
sudo systemctl daemon-reload
sudo findmnt --verify
After I finished setting everything up and started using it as my main server host, I noticed that the RAM usage starts at ~25% on boot and then keeps increasing until it reaches ~95% and stays there. Then swap kicks in!
You can simply check the usage by executing free -m
I tried to troubleshoot where the leak was coming from: the system setup, the swap allocation, the hardware, or even the docker containers. None of these was the answer, especially since the maximum usage of any single container never exceeded 2.5%
I came to the conclusion that it is a kernel issue: on version 6.5, the kernel slowly leaks RAM until usage maxes out. So I needed to downgrade to an earlier version.
To check your kernel version, simply execute uname -r
Now, how to downgrade it? Simply follow the steps:
- First, we need to download a bash tool that fetches the list of available kernel versions and then automatically downloads the desired one
wget https://raw.githubusercontent.com/pimlie/ubuntu-mainline-kernel.sh/master/ubuntu-mainline-kernel.sh
- Then make this tool executable
chmod +x ubuntu-mainline-kernel.sh
- Search for the available kernel versions
./ubuntu-mainline-kernel.sh -r | grep 5.15
- Based on the results, choose the build number and download it (this time with sudo, as it installs the kernel)
sudo ./ubuntu-mainline-kernel.sh -i v5.15.90
- Check the entries available in the GRUB bootloader, as these will be used to set the default kernel on boot
grep 'menuentry \|submenu ' /boot/grub/grub.cfg | cut -f2 -d "'"
- Edit the grub config to boot into the installed kernel
sudoedit /etc/default/grub
- Now replace
GRUB_DEFAULT=0
with
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.15.90-051590-generic"
- Update GRUB by
sudo update-grub
- Reboot the host
sudo reboot now
- Once booted up, check that all the steps worked by verifying the kernel with
uname -r
and the RAM usage with
free -m
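The GRUB_DEFAULT edit above can be rehearsed with sed on a temp copy before touching the real /etc/default/grub; the menu entry string is the one used in this section:

```shell
# Dry-run the GRUB_DEFAULT replacement on a stand-in for /etc/default/grub.
GRUBFILE=$(mktemp)
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > "$GRUBFILE"
ENTRY='Advanced options for Ubuntu>Ubuntu, with Linux 5.15.90-051590-generic'
# The entry contains no "/" or "&", so a plain s/// substitution is safe
sed -i "s/^GRUB_DEFAULT=.*/GRUB_DEFAULT=\"$ENTRY\"/" "$GRUBFILE"
grep '^GRUB_DEFAULT=' "$GRUBFILE"
```

On the real file, follow up with sudo update-grub as above; sed -i is only this carefree here because we operate on a throwaway copy.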
After you have successfully downgraded the kernel, check the RAM usage; it should be stabilised somewhere between 25% and 28%
The last thing is to remove the previously installed kernel v6.5.0
- Take note of the installed kernels when executing
dpkg --list | grep linux-image
- Uninstall the kernels by executing the commands below; this removed all three for me, as they are all v6.5.0 but different builds
sudo apt remove linux-headers-6.5.0*
sudo apt remove linux-image-6.5.0*
sudo apt remove linux-modules-6.5.0*
- The last step is to re-edit the GRUB menu and make it boot the installed kernel regardless of version, so that if the kernel is updated to a later build, it boots automatically
sudoedit /etc/default/grub
- Now replace
GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 5.15.90-051590-generic"
back to
GRUB_DEFAULT=0
- Reboot the host
sudo reboot now
and once booted, re-check that you are still on the desired kernel with
uname -r
as well as the RAM usage with
free -m
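Whether a host is on the leaky 6.5 series can also be decided programmatically from the uname -r string; is_leaky_kernel below is a hypothetical helper, exercised against sample version strings:

```shell
# Return success when the kernel release string is in the 6.5 series
# that this section downgrades away from.
is_leaky_kernel() {
  case "$1" in
    6.5.*) return 0 ;;
    *)     return 1 ;;
  esac
}
is_leaky_kernel "6.5.0-14-generic" && echo "6.5 series: consider the downgrade"
is_leaky_kernel "5.15.90-051590-generic" || echo "5.15 series: fine"
```

On the real host you would call it as is_leaky_kernel "$(uname -r)".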
In case you missed setting the timezone during installation (like what happened to me), you can still re-configure it to the correct time zone.
First, double-check that the timezone is incorrectly set by executing timedatectl
. In my case, it was set to Etc/UTC as below
Second, find the desired Timezone name installed on the system by executing timedatectl list-timezones | grep -i melbourne
. Take note of the results. In my case, it is Australia/Melbourne
Third, change the time zone to match the needed one by executing sudo timedatectl set-timezone Australia/Melbourne
Lastly, check that the timezone has been set as required by executing timedatectl
and seeing it is changed to Australia/Melbourne
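The timezone lookup can be rehearsed offline too; the sketch below greps a small sample of timedatectl list-timezones output instead of running timedatectl itself:

```shell
# Find the full zone name from a partial, case-insensitive match,
# mirroring "timedatectl list-timezones | grep -i melbourne".
TZLIST=$(mktemp)
printf 'Australia/Adelaide\nAustralia/Brisbane\nAustralia/Melbourne\nEtc/UTC\n' > "$TZLIST"
TZNAME=$(grep -i melbourne "$TZLIST")
echo "matched timezone: $TZNAME"
```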
In case a VPN is needed to access the Mini-PC (I personally prefer that, in case something happens to remote access while away from home), I advise installing Tailscale VPN, as it is the easiest one available.
Simply SSH into the Mini-PC and execute the following (nothing more to be done; this magic script from Tailscale itself handles everything)
curl -fsSL https://tailscale.com/install.sh | sh
Once everything is done, simply run Tailscale by sudo tailscale up
THIS STEP IS ONLY NEEDED FOR UBUNTU 24.04 LTS, AS SSH IS NOT INSTALLED BY DEFAULT AT INSTALLATION TIME. IT CAN BE INSTALLED IN THE EARLY STAGES IF NEEDED, IN WHICH CASE THE BELOW IS NOT NEEDED; ONLY FOLLOW THESE STEPS IF IT WAS MISSED, AS IN MY CASE
The steps are straightforward, just execute them in the order below:
sudo apt update
sudo apt upgrade
sudo apt install ssh
sudo systemctl enable ssh
To check that everything is done properly, check the SSH status by sudo systemctl status ssh
. If it does not show as active (in green), start the service by sudo systemctl start ssh
CHECK HERE TO CHANGE THE SSH PORT
SSH public keys add a more secure layer of protection when SSH-ing into a server. They are used instead of the username/password, and each key is specific to the device trying to SSH into the server.
To generate and enable public keys for a device, there are 2 parts: one on the host (the device trying to access, i.e. the laptop), and one on the server, i.e. the Mini-PC
1-Host:
Open the terminal and execute the following to generate a key specific to this host only
ssh-keygen -t ed25519
. During this process, you will be prompted for a key name; save it as you wish. For this example, consider it saved as host-keys
Then, you need to insert the generated key into the server to allow access. Take care of the following: replace USER with the server username, and set PORT to the SSH port (drop the -p option entirely if the port was kept as the default 22)
cat ssh/host-keys.pub | ssh [email protected] -p _PORT_ "cat >> ~/.ssh/authorized_keys"
chmod -R 600 ssh
2-Server:
Now on the server side, Mini-PC, simply execute chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
.
Now everything is set up, and to test, go to the terminal in the host (laptop), and try to SSH into the server (mini-pc) by executing ssh -i ssh/host-keys [email protected] -p _PORT_
You should be able to SSH into the server without being asked for any username/password. If you execute the above from another laptop that does not have its public key registered, it will fail and fall back to prompting for a username/password, as shown below
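The permission bits from the server-side chmod commands are exactly what sshd checks before honouring authorized_keys, so they can be verified with stat. A sketch against a temp directory standing in for ~/.ssh:

```shell
# Recreate the ~/.ssh permission layout in a temp dir and verify it;
# sshd ignores authorized_keys when these modes are too open.
SSHDIR=$(mktemp -d)
touch "$SSHDIR/authorized_keys"
chmod 700 "$SSHDIR"
chmod 600 "$SSHDIR/authorized_keys"
[ "$(stat -c %a "$SSHDIR")" = "700" ] && \
  [ "$(stat -c %a "$SSHDIR/authorized_keys")" = "600" ] && \
  echo "permissions correct"
```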
This guide is licensed under the CC0 1.0 Universal license. The terms of the license are detailed in LICENSE