Get rid of Windows 11 File Explorer’s “Start Backup” advertisement

The latest version of Windows started to aggressively advertise OneDrive’s backup feature in File Explorer, where it would prominently show a “Start backup” button as part of the navigation bar:

Clicking this button, even accidentally, triggers an annoying dialog that, with its preselected options, might start a backup of data to Microsoft’s cloud that you never intended:

This notification can be turned off via a hidden setting in File Explorer’s options. Open Explorer’s settings, select the “View” tab, find the “Show sync provider notifications” setting, and uncheck it.
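If you prefer to script this (for example to roll it out to several machines), the checkbox appears to be backed by a registry value; a minimal sketch, assuming the usual Explorer Advanced key (log out and in again afterwards, as above):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v ShowSyncProviderNotifications /t REG_DWORD /d 0 /f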

After logging out and logging in again, the notification is gone. To be honest, I don’t know which other notifications I am now missing, but so far I have not noticed anything important.

Btrfs raid1 vs. mdadm raid1

RAID is about up-time. Or, if you are lucky, about the chance of avoiding a restore from backup. RAID is not a backup, though. And there are no backups, just successful or failed restores. (Those are the most important proverbs that come to my mind right now.)

Given that I’m obsessed with backups, but also “lazy” in the sense that I want to avoid actually having to restore from them, I’ve been using RAID1 in my data store for at least 15 years now. For the first 12 of those years, I used ext3/4 on top of mdadm-managed RAID1. About 3-4 years ago, I switched most of my storage to Btrfs, using the filesystem’s built-in RAID1 mode.

In this article I want to give a short reasoning for that switch. I initially wanted it to kick off a mini-series of blog posts on Btrfs features that you might find useful, but due to some discussion on Mastodon, I already posted my article about speeding up Btrfs RAID1 using LVM cache. You should check that one out as well.

Continue reading “Btrfs raid1 vs. mdadm raid1”

All Bluesky content is public


Needing an invite to join, the apps, etc. all give a certain sense of privacy over on Bluesky. But that’s just for show. The API that powers the app is publicly available, no authentication needed. Every post made on Bluesky can be queried by everyone, even without an invite.
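You can try this yourself with a single unauthenticated curl call; a minimal sketch, assuming the public AppView XRPC endpoint and the app.bsky.feed.getPostThread method (<did> and <record-key> are placeholders taken from a real post’s URL):

# no login, no API key: the whole discussion thread comes back as JSON
curl -s "https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?uri=at://<did>/app.bsky.feed.post/<record-key>"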

Mario Zechner has demonstrated this well with his low-effort (but amazing!) tool Skyview (source code available on GitHub).

It’s a pure client-side web application that requires the link to a Bluesky posting as input and then renders the entire discussion thread around it. Pure client-side, no server, no authentication. Amazing!

Screenshot of Skyview with one of my own postings (in German), reminding about the lack of privacy of postings on Bluesky.

That’s not a problem in itself, but just keep it in mind.

Bluesky with own domain-handle and .well-known/atproto-did

TLDR: Beware that there must be no newline at the end of the .well-known/atproto-did file and that the content type needs to be text/plain. echo -n to the rescue instead of vim.


I recently received an invite to Bluesky and so far I’ve enjoyed the experience. Early-day Twitter feeling. I can recommend checking it out if you get an invite.

One very intriguing thing is that Bluesky allows for your own domain to be your handle. So I decided to go with @martin.dont-panic.cc.

The process is described in a blog post by Bluesky. There are two main options to verify your domain ownership: a DNS TXT record or an HTTPS request to https://martin.dont-panic.cc/.well-known/atproto-did (in my case). Since everyone is doing DNS, I wanted to try out HTTPS/.well-known. (Of course, there needs to be a martin.dont-panic.cc DNS entry to get to the web server, but no special TXT record for the verification.)
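For comparison, the DNS variant is a TXT record on the _atproto subdomain carrying the DID as its value, along the lines of:

_atproto.martin.dont-panic.cc. TXT "did=did:plc:njnt2ukwkoljfxnsqsbs5mdm"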

I wanted to serve the file as a static file in the filesystem via nginx. So I set up the following static nginx configuration:

server {
        listen 443 ssl;
        server_name martin.dont-panic.cc;

        root /var/www/cc/dont-panic/martin/;
        index index.html;
        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
        location = /.well-known/atproto-did {
                default_type text/plain;
        }
        # ... lots of SSL stuff omitted ...
}

So basically this tells nginx to try to serve the request as an existing file, then as a directory, and to fall back to a 404 otherwise. It forces text/plain for the /.well-known/atproto-did file, since otherwise it would be served as application/octet-stream, which violates the requirements.

Then I used vim to simply create the file and validated via curl that its content was served correctly.
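A check along these lines (a reconstruction, not my exact invocation) shows both the Content-Type header and, via a hex dump, any stray trailing newline; in hindsight it would have revealed the problem described below immediately:

curl -si https://martin.dont-panic.cc/.well-known/atproto-did
# headers should include: Content-Type: text/plain
curl -s https://martin.dont-panic.cc/.well-known/atproto-did | xxd
# the dump must end with the DID itself, not with a trailing 0a (newline) byte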

Looked good, so I hit the verify button. And it failed. After a few retries, I suspected that maybe it was the final newline at the end of the file, which vim adds by default.

New approach (note the “-n”!):

echo -n "did:plc:njnt2ukwkoljfxnsqsbs5mdm" > /var/www/cc/dont-panic/martin/.well-known/atproto-did

One click on verify later, Bluesky accepted the handle as verified and I could switch over from my previous user name.

So, looking forward to hearing from you either in the comments here or via Bluesky. Follow me! 😉

overlay2 for Docker within an unprivileged LXC container

For my Jenkins installation I use a Docker agent inside an LXC container. I want this container to be unprivileged, so that the host is somewhat protected from misconfiguration (not deliberate attacks). The default setup works fine, but after a bit of experimenting, I noticed that I was quickly running out of disk space. The reason turned out to be that Docker had fallen back to the vfs storage driver instead of overlay2; vfs basically creates a full copy for every layer and every running container.

# docker info | grep Storage
 Storage Driver: vfs

Further investigation showed that this was due to the container being unprivileged. A short experiment with making the container privileged instead led to issues with the cgroup management of the outer Docker container on the host. So what was the reason for the issues? It seems that the ID mapping / shifting of the user IDs prevented the overlay2 driver from working.

Therefore I decided to try mounting a host directory as a “device” into the container’s /var/lib/docker. But with the shift=true option this again fails, since then the underlying filesystem is shiftfs and not plain ext4 (see the supported filesystems for the various storage drivers). So a solution without “shift” is required.

UIDs are shifted by a fixed offset per container; in my case it’s 1,000,000. You need to figure this out for your system, but it’s likely the same. So by creating the external storage directory with this offset as its owner and then mounting it into the container without shifting, things start to work.
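To find the offset on your system, you can look at the subordinate ID ranges assigned to root, or (on reasonably recent LXD versions) query the base of the ID map the container actually uses:

# the first number after "root:" is the offset used for unprivileged containers
grep root /etc/subuid /etc/subgid
# alternatively, ask LXD directly
lxc config get mycontainer volatile.idmap.base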

export CONTAINER_NAME=mycontainer
export DOCKER_STORAGE_DIRECTORY=/mnt/pool/mycontainer/var-lib-docker

mkdir -p "$DOCKER_STORAGE_DIRECTORY"
chown 1000000:1000000 "$DOCKER_STORAGE_DIRECTORY"

lxc config device add "$CONTAINER_NAME" var-lib-docker disk source="$DOCKER_STORAGE_DIRECTORY" path=/var/lib/docker

# important, security.nesting is required for nested containers to work!
lxc config set "$CONTAINER_NAME" security.nesting=true

After this docker info | grep Storage finally showed what I wanted:

# docker info | grep Storage
 Storage Driver: overlay2

Add SSH host key fingerprint to Jenkins for Git checkouts

I have a self-hosted Gitea instance, and also operate my own Jenkins instance. On the Jenkins instance, strict host-key checking is enabled. When adding the first reference to a Git repository hosted on my server, the following error appears:

Failed to connect to repository : Command "git ls-remote -h -- ssh://git@<myserver>:22222/martin/jenkins-test-docker-pipeline.git HEAD" returned status code 128:
stdout:
stderr: No ECDSA host key is known for [myserver]:22222 and you have requested strict checking.
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

The reason is that this is the first time I’m accessing a repository on this server, so the SSH host key is not yet in the known_hosts file for this SSH connection. Since I run this installation of Jenkins inside a Docker container and don’t want to manually edit files in its filesystem, I rely on the appropriate settings in Manage Jenkins > Security > Git Host Key Verification Configuration. This is set to Manually Provided Keys.

The easy solution would be to set it to Accept First Connection. But I want to stay in manual mode. The easiest way to get the SSH host keys is via ssh-keyscan (-p 22222 specifies the SSH server port, which is a non-standard port in my case):

ssh-keyscan -p 22222 myserver

The output looks like this:

# myserver:22222 SSH-2.0-OpenSSH_9.1
[myserver]:22222 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC085ixMnTlpr0pxXmkeJ6X479mbW/9PGDeUvD8hnG7EVUn3WsnnSG8yZkmU+jzg2W+xmFd7WIdaYLt6UcGvCS3RZIye68+qu64UToKX6CdTQOWyj6z9kd8tLoPBobsBd7tRyGaXU4c4UkCR5M44KhYtbQz0bgL7u+sL0z+R3lbOVyXaYPiSmUf/Wsd8fA2VcdWHkXJx0MMNMSVj/hgkZR7RfHzP4SZSqRLhn/AzIdx4DDuyGyPbVxu1ppnFtumRwlBkgat9UpMWkelREhcUdJtrZO1KPpA6DOkxIH8X/WtXyWToS9EjPb8FVTvzdjG2C4Zi0DkogH3no9vQcXLiihz
# myserver:22222 SSH-2.0-OpenSSH_9.1
[myserver]:22222 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDfTT9eEpDmd7ToGAorTW1X9uuJVhZl+KX9phmTpTy2e8U7l31jWn2TnKlXOp5oKgivpQ2cVjcTyazyrFB7MhgI=
# myserver:22222 SSH-2.0-OpenSSH_9.1
[myserver]:22222 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFoEzPpEWApszceLM/jWHvAbrTppjsTzftw79yTSS5Po
# myserver:22222 SSH-2.0-OpenSSH_9.1
# myserver:22222 SSH-2.0-OpenSSH_9.1

It only makes sense to copy the non-comment lines (the ones not starting with a #) into the configuration.
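Since ssh-keyscan writes those comment lines to stderr anyway, you can also simply suppress them:

ssh-keyscan -p 22222 myserver 2>/dev/null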

Now Git checkouts to this repository should work, once you have configured the appropriate credentials.

Enable RSA-based public-keys for ssh when accessing legacy devices

When accessing old devices that do not yet support modern encryption algorithms, current Ubuntu installations might reject the connection because the signature algorithm for the public key is disabled, e.g.

sign_and_send_pubkey: no mutual signature supported

You can enable this on a per-command level by adding the following option to your SSH command line:

ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa ...

As an alternative you can add this permanently for a host by adding it to the host’s configuration in your $HOME/.ssh/config:

Host myhost
  PubkeyAcceptedKeyTypes +ssh-rsa

This also works for other key types like ssh-dss.
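Two related hints: on OpenSSH 8.5 and newer the option is named PubkeyAcceptedAlgorithms (the old name remains as an alias), and if the legacy device also only offers an RSA host key, you may need to re-enable that as well, e.g.:

ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa -o HostKeyAlgorithms=+ssh-rsa myhost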

Note: In general you should only do this when accessing legacy devices that you have no possibility to upgrade to state-of-the-art encryption algorithms. Those algorithms got deprecated for a reason. Therefore, always do this on a per-command or per-target-host level instead of blindly enabling those algorithms in your global SSH config.

Speeding up Btrfs RAID1 with LVM Cache

Logical Volume Manager 2 (lvm2) is a very powerful toolset for managing physical storage devices and logical volumes. I’ve been using it instead of plain disk partitions for over a decade now. LVM gives you full control over where logical volumes are placed, plus a ton of other features I have not even tried yet: it can provide software RAID, it can provide error correction, and you can move logical volumes around while they are actively in use. In short, LVM is an awesome tool that should be in every Linux admin’s toolbox.

Today I want to show how I used LVM’s cache volume feature to drastically speed up a Btrfs RAID1 situated on two slow desktop HDDs, using two cheap SSDs also attached to the same computer, while still maintaining reasonable error resilience against single failing devices.

Creating the cached LVs and Btrfs RAID1

The setup is as follows:

  • 2x 4TB HDD (slow), /dev/sda1, /dev/sdb1
  • 2x 128GB SSD (consumer-grade, SATA), /dev/sdc1, /dev/sdd1
  • All of these devices are part of the Volume Group vg0
  • The goal is to use Btrfs’s RAID1 mode instead of MD RAID or lvmraid, because Btrfs has built-in checksums and can therefore detect and correct problems a little better: it can determine which leg of the mirror is the correct one. A rough sketch of the commands follows below.
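A minimal sketch of what that looks like, assuming the volume group vg0 from the list above; LV names and sizes are illustrative, the linked article has the real walkthrough:

# one logical volume per HDD, each pinned to its disk
lvcreate -n data0 -L 3.6T vg0 /dev/sda1
lvcreate -n data1 -L 3.6T vg0 /dev/sdb1
# attach one SSD as a cache to each HDD-backed LV
lvcreate --type cache -L 100G -n cache0 vg0/data0 /dev/sdc1
lvcreate --type cache -L 100G -n cache1 vg0/data1 /dev/sdd1
# create the Btrfs RAID1 (data and metadata) across the two cached LVs
mkfs.btrfs -m raid1 -d raid1 /dev/vg0/data0 /dev/vg0/data1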
Continue reading “Speeding up Btrfs RAID1 with LVM Cache”

Trilium – An Awesome Note-taking App

I’ve been a long-time user of WikidPad for personal note-taking. Unfortunately, development has slowed down over time and it was time for me to look for an alternative. And wow, did I find one that really ticks most (all) of my boxes: meet Trilium, the most feature-packed outliner / hierarchical note-taking app I’ve ever encountered.

Take a look at the screenshot tour to get a feeling of what’s possible with Trilium.

The features I adore most about it:

  • Can act standalone and in a client/server model
  • Server provides a browser-based interface to the instance
  • Client-application can work offline and then sync back changes to the central server instance
  • It’s incredibly scriptable using JavaScript
  • mermaid.js support for quickly creating diagrams
  • Linking, Cross-Linking, Cloning of notes in various places
  • Journal functions

There are also a ton of features that I don’t use personally, e.g. encrypted notes that are only available once you enter your decryption password.

I very much recommend that you give it a look and a try!

Moto G6 Plus without GPS Lock

I’ve been a quite happy owner of the Moto G6 Plus for some years now. Since the beginning, I’ve had one “minor” issue: sometimes the GPS would suddenly stop getting a lock, which was especially cumbersome when I was using the phone as a navigation system while driving. Today, the GPS lost its lock mid-drive and I was not able to reestablish it, not even by power-cycling the device. Various attempts at changing battery-saving options and location accuracy settings did not result in any improvement either (normally they did). The internal diagnostics of the device (*#*#2486#*#*) just said it didn’t get a lock.

My assumption was that it might somehow be related to the A-GPS data. Therefore I checked whether there was any tool in the Play Store that might help me clear the A-GPS data, and luckily I stumbled upon “GPS Status & Toolbox”. Even the free version allowed me to clear the A-GPS data, and from this “cold start” the device got a lock rather quickly. To support the devs, I decided to upgrade to the PRO version for less than €2.00.

I’m now curious whether this is a long-term fix or just a lucky coincidence. I’m hoping for the former.

Quick Checklist

  • Disable battery optimizations on Google Maps (and any navigational map you might be using)
  • Disable battery optimization for the “LocationService”
  • Turn off WiFi and Bluetooth background scans, since they might clash with the Improved Google Location Accuracy setting
  • Use a tool (like GPS Status & Toolbox) to reset A-GPS data of the GPS receiver

Update 2020-01-06

Since I installed “GPS Status & Toolbox”, the problem has been fixed. I have not had any trouble getting a GPS fix since.