There seem to be lots of posts about changing your hostname on a RaspberryPi. The issue is that “something” keeps overriding the changes for you. This thing seems to be cloud-init, which appears to be installed by default now.
Digging about, I found people turning off and removing cloud-init. I also found people trying to lean into cloud-init – which I also attempted, but it always reverted my change.
What I eventually found is that the user-data file in /boot/firmware doesn’t seem to be read correctly on boot. The system appears to read a cached version which is stored in /var/lib/cloud/instances.
My solution? I told cloud-init to purge its cache:
sudo cloud-init clean
And then I rebooted the device. The change I was attempting to make appeared straight after the reboot. I have no idea if there is a better way of having the instance file refresh – not gone that far down the cloud-init rabbithole. But I wanted to get my method out there, as it might be a bit simpler.
FWIW, what I was attempting was to get the fully qualified domain name correctly reflected in the system. I did this by adding the following to the user-data file:
fqdn: <full name of the device>
Now when I run hostname -f, I get the correct response.
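Putting the pieces together, a minimal sketch of the fix. The FQDN below is a placeholder, and the snippet works on a scratch copy so it is safe to dry-run; on the Pi the real file is /boot/firmware/user-data.

```shell
# Scratch copy of user-data; on the Pi the real file is /boot/firmware/user-data.
USERDATA=$(mktemp)
printf '#cloud-config\nfqdn: pi4.home.example.net\n' > "$USERDATA"   # placeholder FQDN
grep '^fqdn:' "$USERDATA"
# Then purge cloud-init's cached instance data and reboot:
#   sudo cloud-init clean
#   sudo reboot
```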
There you go – a different approach to changing a RaspberryPi hostname.
This collates the lessons I have learned when I attempted to convert the root filesystem to btrfs on my RaspberryPi.
First things first – it is NOT just a case of running btrfs-convert and changing a couple of bits of config. I initially thought it was, and this bit me quite spectacularly. I’ve now spent a few hours getting this working, so learn from my experience.
I should note that my work comes from digging about on the internet. These posts helped point the way:
You will need to be able to work with two operating systems, which may mean you need to use a USB -> SD Card converter.
Check whether btrfs is being loaded on boot. If it isn’t, it will need adding FIRST. Do not go any further until the module is included in the initramfs.
Check in boot log
Run this command: dmesg | grep -i btrfs
You are looking for this in the output: [ 2.651955] Btrfs loaded, zoned=no, fsverity=no
Check in kernel
Can see the module here: lsmod | grep -i btrfs
Output should be similar to:
btrfs                1630208  1
xor                    12288  1 btrfs
raid6_pq              106496  1 btrfs
Check whether btrfs is in the initramfs
Run this: lsinitramfs /boot/initrd.img-$(uname -r) | grep btrfs
You are looking specifically for these lines:
usr/lib/modules/6.12.75+rpt-rpi-v8/kernel/fs/btrfs
usr/lib/modules/6.12.75+rpt-rpi-v8/kernel/fs/btrfs/btrfs.ko.xz
If these lines are NOT appearing then you MUST do the following:
Add BTRFS into the initramfs modules: echo 'btrfs' | sudo tee -a /etc/initramfs-tools/modules
You can check that the initramfs is now correct using the lsinitramfs command above.
Reboot your system
Check for the module in the boot log AND the kernel as before.
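A sketch of the remediation step, using a temp file in place of /etc/initramfs-tools/modules so it can be dry-run safely. One thing worth noting (an addition of mine, not from the original steps): on the real system the initramfs normally needs rebuilding with update-initramfs before the lsinitramfs check will show the module.

```shell
MODULES=$(mktemp)                      # stands in for /etc/initramfs-tools/modules
echo 'btrfs' | tee -a "$MODULES"
grep -qx 'btrfs' "$MODULES" && echo "btrfs listed"
# On the real system, rebuild the initramfs so the change takes effect:
#   sudo update-initramfs -u
#   lsinitramfs /boot/initrd.img-$(uname -r) | grep btrfs
```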
Now you can install btrfs: sudo apt install btrfs-progs
The system is now prepared – shut it down and BACKUP the SD Card.
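For the backup, a full image of the card with dd is the usual approach; the device name below is an assumption (check with lsblk first). The snippet dry-runs the same command shape against a scratch file rather than a real card.

```shell
# On the real system, something like:
#   sudo dd if=/dev/sdb of=~/pi-backup.img bs=4M status=progress conv=fsync
SRC=$(mktemp); IMG=$(mktemp)
head -c 1048576 /dev/urandom > "$SRC"   # 1 MiB of stand-in "card" data
dd if="$SRC" of="$IMG" bs=64K status=none
cmp -s "$SRC" "$IMG" && echo "image matches source"
```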
Everything from here on could lead to a broken system so it is at your own risk. If you can’t risk having a broken system, and starting everything again from scratch, do not go any further.
Boot from another SD Card. This system will need btrfs installed on it obviously.
Check the root volume for errors: sudo fsck -f /dev/sdb2
Convert the root volume: sudo btrfs-convert /dev/sdb2
Obviously, this will take a while
It will finish, with output similar to:
btrfs-convert from btrfs-progs v6.14
Source filesystem:
  Type:           ext2
  Label:          rootfs
  Blocksize:      4096
  UUID:           ed6c7f1b-238b-41a1-b4b6-7bcdef3270fe
Target filesystem:
  Label:
  Blocksize:      4096
  Nodesize:       16384
  UUID:           cd4d1c9c-8fe9-4fac-a8e5-1e828ef54432
  Checksum:       crc32c
  Features:       extref, skinny-metadata, no-holes, free-space-tree (default)
    Data csum:    yes
    Inline data:  yes
    Copy xattr:   yes
Reported stats:
  Total space:       63319310336
  Free space:        50360090624 (79.53%)
  Inode count:           3813760
  Free inodes:           3590302
  Block count:          15458816
Create initial btrfs filesystem
Create ext2 image file
Create btrfs metadata
Copy inodes [o] [ 223370/ 223458]
Free space cache cleared
Conversion complete
At this point, conversion is complete.
Perform a sanity mount of this converted filesystem: sudo mount /dev/sdb2 /mnt
Assuming it mounts fine, we want to fix fstab: sudo vi /mnt/etc/fstab
It will look something like this:
proc                  /proc           proc    defaults          0 0
PARTUUID=3ae81c03-01  /boot/firmware  vfat    defaults          0 2
PARTUUID=3ae81c03-02  /               ext4    defaults,noatime  0 1
Convert the line for / to use btrfs. NB The PARTUUIDs did not change for me.
Will now look like this:
proc                  /proc           proc    defaults          0 0
PARTUUID=3ae81c03-01  /boot/firmware  vfat    defaults          0 2
PARTUUID=3ae81c03-02  /               btrfs   defaults,noatime  0 1
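The fstab edit can also be scripted with sed. The PARTUUID below is the one from my card (yours will differ), and the snippet works on a scratch copy so nothing real is touched.

```shell
FSTAB=$(mktemp)   # scratch copy; the real file is /mnt/etc/fstab
printf 'PARTUUID=3ae81c03-02 / ext4 defaults,noatime 0 1\n' > "$FSTAB"
sed -i 's/ ext4 / btrfs /' "$FSTAB"   # swap the filesystem type for /
cat "$FSTAB"
```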
Unmount the converted root: sudo umount /mnt
Mount the boot filesystem: sudo mount /dev/sdb1 /mnt
Update boot options: sudo vi /mnt/cmdline.txt
Should look like this: console=serial0,115200 console=tty1 root=PARTUUID=3ae81c03-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles ds=nocloud;i=rpi-imager-1777239363059 cfg80211.ieee80211_regdom=GB
Change rootfstype to btrfs.
Will now look like this: console=serial0,115200 console=tty1 root=PARTUUID=3ae81c03-02 rootfstype=btrfs fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles ds=nocloud;i=rpi-imager-1777239363059 cfg80211.ieee80211_regdom=GB
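Likewise, the cmdline.txt change is a one-liner with sed; again this dry-runs against a scratch copy (shortened placeholder contents, not my full command line).

```shell
CMDLINE=$(mktemp)   # scratch copy; the real file is /mnt/cmdline.txt
echo 'console=tty1 root=PARTUUID=3ae81c03-02 rootfstype=ext4 fsck.repair=yes rootwait' > "$CMDLINE"
sed -i 's/rootfstype=ext4/rootfstype=btrfs/' "$CMDLINE"
grep -o 'rootfstype=[a-z0-9]*' "$CMDLINE"
```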
Everything should now be ready. Shut everything down, put your SD Card in and cross your fingers. It should boot.
Once the system boots you should see further entries related to btrfs in the boot log if you look using dmesg | grep -i btrfs
[ 0.000000] Kernel command line: coherent_pool=1M 8250.nr_uarts=1 snd_bcm2835.enable_headphones=0 cgroup_disable=memory numa_policy=interleave nvme.max_host_mem_size_mb=0 snd_bcm2835.enable_headphones=1 snd_bcm2835.enable_hdmi=1 snd_bcm2835.enable_hdmi=0 numa=fake=2 system_heap.max_order=0 iommu_dma_numa_policy=interleave smsc95xx.macaddr=E4:5F:01:05:57:6B vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000 console=ttyS0,115200 console=tty1 root=PARTUUID=3ae81c03-02 rootfstype=btrfs fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles ds=nocloud;i=rpi-imager-1777239363059 cfg80211.ieee80211_regdom=GB
[ 2.689648] Btrfs loaded, zoned=no, fsverity=no
[ 2.994418] BTRFS: device label rootfs devid 1 transid 209 /dev/mmcblk0p2 (179:2) scanned by mount (212)
[ 2.995544] BTRFS info (device mmcblk0p2): first mount of filesystem cd4d1c9c-8fe9-4fac-a8e5-1e828ef54432
[ 2.995568] BTRFS info (device mmcblk0p2): using crc32c (crc32c-generic) checksum algorithm
[ 3.040169] BTRFS info (device mmcblk0p2): enabling ssd optimizations
[ 3.040186] BTRFS info (device mmcblk0p2): turning on async discard
[ 3.040191] BTRFS info (device mmcblk0p2): enabling free space tree
Once you are happy everything is alright, you can delete the fallback subvolume:
sudo btrfs subvolume delete /ext2_saved
Should see similar to:
Delete subvolume 256 (no-commit): '//ext2_saved'
And that should be it – root is converted. Enjoy your btrfs-based RaspberryPi system.
Things I discovered.
If, like me, you stuffed up the initramfs step, you can recover by copying an initramfs for the same kernel (with the module included) to the boot volume. The one that was important to me was the one called initramfs8 in that directory.
If you’ve stuffed up the cmdline.txt file you can also copy that.
Remember that you do have to correct the PARTUUID when you do this (I didn’t).
The only wrinkle I have found is that the remote node doesn’t always remember the multiple keys properly – as it is only supposed to have one key.
On ASUSWRT, the main router strips out a false carriage return character, and I put the carriage return back. If the AIMesh node reboots, I have to go and do that same action by hand.
This post is intended to cover how to install the Traefik Proxy on a Debian server. The process is a little complex as, unfortunately, there isn’t a .deb file that I could find.
This will not be of use until a functional configuration is in place.
Starter Configuration
Static Configuration
Now copy the below starter configuration and place it in /etc/traefik/traefik.yaml
################################################################
#
# Configuration sample for Traefik v2.
#
# For Traefik v1:
# https://github.com/traefik/traefik/blob/v1.7/traefik.sample.toml
#
################################################################
################################################################
# Global configuration
################################################################
global:
  checkNewVersion: true
  sendAnonymousUsage: true
################################################################
# EntryPoints configuration
################################################################
entryPoints:
  web:
    address: :85
  websecure:
    address: :448
################################################################
# Traefik logs configuration
################################################################
log:
  level: DEBUG
################################################################
# API and dashboard configuration
################################################################
# Enable API and dashboard
api:
  dashboard: true
providers:
  file:
    filename: /etc/traefik/dynamic.yaml
In short, this file will:
Make traefik listen on ports 85 and 448
Increase logging to debug level, so we can see what is going on
Enable the API and the Dashboard
Configure a file provider called dynamic.yaml
Dynamic Configuration
Now create an authentication user (replace the content of the angle brackets) – you will need the output from this in the next step.
htpasswd -nb <user> <password>
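On Debian, htpasswd comes from the apache2-utils package. If it is not installed, openssl can generate a compatible APR1-MD5 hash; the user and password below are placeholders, not values from the post.

```shell
# Alternative to htpasswd -nb, using openssl (placeholder credentials)
APPUSER=admin
APPHASH=$(openssl passwd -apr1 'changeme')
echo "${APPUSER}:${APPHASH}"
```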
Follow this up with placing this in /etc/traefik/dynamic.yaml replacing <string from htpasswd> with your string from htpasswd (yes, put it inside the double quotes).
# dynamic.yaml
http:
  routers:
    api:
      rule: "PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
      service: api@internal
      entryPoints:
        - "web"
      middlewares:
        - auth
    catchall:
      # attached only to web entryPoint
      entryPoints:
        - "web"
      # catchall rule
      rule: "PathPrefix(`/`)"
      service: unavailable
      # lowest possible priority
      # evaluated when no other router is matched
      priority: 1
  middlewares:
    auth:
      basicAuth:
        users:
          - "<string from htpasswd>"
  services:
    # Service that will always answer a 503 Service Unavailable response
    unavailable:
      loadBalancer:
        servers: {}
Yes, the backticks are correct; single quotes will cause a problem because of the Go-style matcher syntax.
This file sets up the following:
Two routers
API
catch traffic coming in from the “web” entry point (85)
protects using the auth middleware
looks for /api and /dashboard traffic
will then route traffic to the api@internal service
CatchAll
catch traffic coming in from the “web” entry point (85)
looks for anything under /
will route traffic to unavailable service.
Middleware – to protect the api and dashboard endpoints from prying eyes.
Service – to show when things are broken.
Install Traefik Proxy
The next step of the install of Traefik Proxy on Debian is to extract the downloaded archive:
cd /opt/traefik
tar -zxvf <download location>/traefik_v<version>_<platform>.tar.gz
Since Traefik Proxy will be using privileged ports (those below 1024), it needs a capability granted:
sudo setcap 'cap_net_bind_service=+ep' traefik
Testing Traefik Proxy
This is where things begin to get fun. I had some fun and games getting the Dashboard to work initially, hence this write-up for posterity.
You MUST have the PathPrefix rule to make Dashboard and API endpoints work right.
On the plus side, its presence in dynamic.yaml means we can fiddle about with the file and Traefik Proxy will just reload it.
Now, simply run the Traefik Proxy executable – it looks in standard locations for configuration and /etc/traefik/traefik.yaml is one of those.
./traefik
If it works you should see something like this:
You should also be able to access various URLs.
api/endpoints
dashboard
anything else
Assuming everything is working, we can now go about locking down the software installation, enabling the service configuration that was set up, and linking it to Docker.
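For locking it in, a systemd unit is the usual route on Debian. This is a sketch under the assumption that the binary lives in /opt/traefik and the config in /etc/traefik (paths from this post) – it is not the exact unit I run:

```ini
# /etc/systemd/system/traefik.service (illustrative sketch)
[Unit]
Description=Traefik Proxy
After=network-online.target

[Service]
Type=simple
ExecStart=/opt/traefik/traefik --configFile=/etc/traefik/traefik.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After placing the file, a sudo systemctl daemon-reload followed by sudo systemctl enable --now traefik should bring it up on boot.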
Troubleshooting
Hopefully, Traefik Proxy is working for you. If it isn’t, the terminal that Traefik is running in should give you some indication of what has gone wrong.
It is likely to be related to listening on the selected ports. The simplest option would then be to move to a different port by altering the values for address in /etc/traefik/traefik.yaml.
Having done this, stop traefik [if needed] and re-run it.
As part of the creation of my new lab installation I wanted to migrate WordPress from barebones Apache/FPM into a custom PHP container.
The original thought was to segregate the web server and PHP FPM from one another – but it seemed to be far too complex. I eventually found this web site:
This site is talking about a generalised solution for embedding any PHP application into a container, but it turns out the approach worked fine for WordPress too.
In short, it is easier to start off with a PHP-FPM container. So I chose one from the DockerHub PHP site.
Having done that, I created a Docker file:
# Dockerfile.nginx - PHP-FPM with Nginx in a single container
FROM php:8.3-fpm-alpine
# Install Nginx
RUN apk add --no-cache nginx curl
# Install PHP extensions
RUN apk add --no-cache \
freetype-dev \
libjpeg-turbo-dev \
libpng-dev \
libzip-dev \
icu-dev \
postgresql-dev \
&& docker-php-ext-configure gd --with-freetype --with-jpeg \
&& docker-php-ext-install -j$(nproc) \
gd zip intl pdo pdo_mysql pdo_pgsql opcache
# PHP-FPM configuration
COPY php-fpm.conf /usr/local/etc/php-fpm.d/www.conf
COPY php-production.ini /usr/local/etc/php/conf.d/production.ini
# Nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx-default.conf /etc/nginx/http.d/default.conf
# Copy application code
COPY . /var/www/html/
RUN chown -R www-data:www-data /var/www/html
# Startup script that runs both Nginx and PHP-FPM
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
EXPOSE 80
HEALTHCHECK --interval=15s --timeout=5s --retries=3 \
CMD curl -f http://localhost/health.php || exit 1
ENTRYPOINT ["/docker-entrypoint.sh"]
This file:
Installs NGINX
Configures NGINX
Installs PHP modules.
Configures FPM
Installs CURL
Defines a CURL healthcheck
Copies the application in and fixes the permissions.
TODO: add actual example files.
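In the meantime, here is a hedged sketch of what the docker-entrypoint.sh the Dockerfile copies in could look like – these are not the actual contents of my script, just the usual pattern of backgrounding PHP-FPM and keeping Nginx in the foreground. The snippet writes it out and syntax-checks it.

```shell
# Write an illustrative entrypoint and check its syntax (contents are a sketch,
# not the real file from this post).
cat > /tmp/docker-entrypoint.sh <<'EOF'
#!/bin/sh
set -e
php-fpm -D                      # start PHP-FPM in the background
exec nginx -g 'daemon off;'     # nginx in the foreground keeps the container alive
EOF
sh -n /tmp/docker-entrypoint.sh && echo "syntax OK"
```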
I then just archived my installation on the old server, and transferred the directories to the Lab server. These were then mounted as volumes.
Obviously, I need to do regular checks of the image to make sure it is patched up – but WordPress is working in my custom PHP container (as you can tell by being here now!)
I’ve recently reinstalled my HomeAssistant using HAOS, rather than my previous Supervisor install – mainly because the suppliers of HomeAssistant have desupported that method. However, I still want to be able to connect at the operating system level, rather than only relying on HomeAssistant WebUI for everything – and that means getting HAOS SSH mode working.
I initially attempted to follow the “USB Method” documented here:
However, it didn’t seem to want to work – even though the commands appeared to complete correctly. And I obviously couldn’t see if it was working, as I don’t have access to the operating system to check!
Open the add-on Web UI and run the following:

cp /etc/ssh/authorized_keys /share
docker run -it --rm \
    -v /root:/mnt/root \
    -v /mnt/data/supervisor/share:/mnt/share \
    bash

Then, inside that container:

cp /mnt/share/authorized_keys /mnt/root/.ssh
exit

And back in the add-on shell:

rm /share/authorized_keys
ha host reboot
HomeAssistant will now reboot (the final command told it to).
That’s all. You should be able to access HAOS using ssh and the same keys on:
My Lab server needs to be able to send emails now. I’ve done some digging about, and have decided that I will configure Postfix to forward things to my email hosting provider, instead of exim4.
Obviously, this required a bit more thumping this time, as I’ve never used Postfix before.
Annoyingly, the defaults that Debian sets up result in warnings being generated in the log files, which means you have to manually tune the main.cf file to make the warnings go away. Only then could I configure it to forward to my email host.
These initial changes involve me adding the following:
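My exact lines aren’t reproduced here, but a typical Debian relayhost setup with SASL authentication to the provider looks something like the following – the host, port and credential paths are placeholders, not my real values:

```ini
# /etc/postfix/main.cf additions (illustrative placeholders)
relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

The sasl_passwd map then holds the provider credentials, and needs postmap running against it before Postfix is reloaded.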
As part of setting up my new lab server, I needed to transfer Nginx Proxy Manager from its old home to a new home. This is intended to capture what I did, in case I ever need to repeat the process.
Backup npm_data/_data and npm_letsencrypt/_data locations on the old host.
Migrate the stack from the old host to the new host using Portainer.
Stop the new stack
The migration will have created an empty installation, which is obviously not useful for me.
Restore the backup of the stack volumes, replacing the ones the migration just created.
Restart the stack.
The transfer of Nginx Proxy Manager is now complete. Check that it has connectivity to the downstream servers, and then move any firewall rules so that it is now the external server rather than the old one.
Obviously now clean up the old installation on the old docker server.
I use Nginx Proxy Manager to direct things around my URLs and web sites – including Home Assistant.
It’s a great product, but it does mean I have to configure my applications correctly to route via it.
This is a note, because I had forgotten I had done a thing!
Remember that in Home Assistant, there will need to be an entry under “http” in “trusted proxies” section for the server doing the work.
This was mainly caused by the fact that I was changing from “co-locating” the docker containers that I use for HA and NPM to a setup that had to route between servers.
My original values listed 172.20.x.x and I now needed to list the server IP address.
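In configuration.yaml terms, the change looks something like this – the addresses are placeholders standing in for the old docker network and the new NPM server IP:

```yaml
# configuration.yaml (illustrative addresses)
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.20.0.0/16   # old co-located docker network
    - 192.168.1.10    # placeholder: the NPM server's address
```

Home Assistant needs a restart for changes under http to take effect.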
I use the product Portainer to manage the various docker containers I have at home. Periodically, I change my infrastructure set up, which results in my having to transfer the Portainer containers to a different server.
This time, I have decided to document what I did, in an attempt to not have to recreate the wheel next time I need to perform this activity.
Check you have the command line for starting Portainer.
Obviously, you will need this to start up portainer when it gets where it is going.
Make a volume on the new host.
This will hold a copy of the persisted Portainer data.
Stop portainer on the old host
Backup the old volume on the old host.
You’ll likely need to be root to view the folder correctly.
Transfer the backed up file to the new host using scp.
Restore the backup file into the volume on the new host.
You’ll likely need to be root to get the permissions correct.
Startup portainer using the volume
At this point the transfer of Portainer is technically complete.
Test you can connect to portainer.
Startup portainer agent on the old host.
This is so that I have access to the old host again. I’m going to need this to clean up the old host.
Redirect the proxy.
I use NPM and I needed to fix the proxy to fetch portainer from the new host.
Clean up stacks
In this situation I am moving to a new Lab server, so I have copied stacks from the old host to the new host, manually backing up and transferring volumes as required.
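The backup/transfer/restore steps above can be sketched with tar. The snippet uses temp directories standing in for the volume's _data directory on each host (the volume name and /var/lib/docker/volumes path are assumptions about a default install), so it is safe to dry-run.

```shell
# Stand-ins for /var/lib/docker/volumes/portainer_data/_data on each host
OLD=$(mktemp -d)   # old host's volume contents
NEW=$(mktemp -d)   # new host's freshly created, empty volume
echo 'portainer.db' > "$OLD/portainer.db"
tar -C "$OLD" -czf "$OLD.tar.gz" .
# On the real hosts: scp the archive across, then unpack as root
tar -C "$NEW" -xzf "$OLD.tar.gz"
ls "$NEW"
```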