Install Resilio-Sync on OpenWrt

Hi. There are several posts on this topic, including an official one, but as far as I searched, none of them are fully up to date. After testing it myself, here is an up-to-date guide for installing Resilio-Sync on OpenWrt.

Preparation:

  • A router with at least 1 GB of free memory (RAM) after all the services you require are running, and ~50 MB free on its internal storage.
  • A dedicated USB drive, or internal storage with enough space left. I use a 500 GB USB drive. When everything is done, more than 30 GB is used, though I guess most of that is the ext4 journal.
  • A tool to transfer files to OpenWrt, such as scp or any other tool with scp support.
  • Upgrade your OpenWrt to the latest version.
  • Know the architecture of your router's chip: armhf for newer chips, armel for older ones, and arm64 for 64-bit. Note: arm64 means not only that the chip is 64-bit capable, but also that your system actually runs it in 64-bit mode. A quick check is shown below.
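
If you are not sure about the architecture, you can check it on the router itself. This is just a quick hint, assuming a typical OpenWrt build:

uname -m
opkg print-architecture

uname -m prints the machine name, like aarch64 (use arm64) or armv7l (usually armhf), and opkg print-architecture lists the package architectures your build accepts.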

Now let’s start.

Step 1: USB drive preparation

If you need to use internal storage, see Step 1A instead.

  1. Log in to a terminal. The web-based TTYD terminal and SSH are both fine.
  2. Check the current device list with the command
ls /dev/sd*

Nothing should be printed if no disk is attached. Note down everything reported.

  3. Attach your USB drive and re-run the command above; you will notice new entries. Normally they look like /dev/sdX and /dev/sdXY, where X is a letter and Y is a number. If this is the first drive found, X should be a, so /dev/sda in full. I will use /dev/sda in the examples below; change it to the correct name if yours differs.
  4. Use fdisk to initialize the disk:
fdisk /dev/sda

First, use the command p to list the current partitions and d to remove them one by one. Then use n to create a primary partition. If an existing filesystem signature is found while creating the partition, you can remove it when asked. Finally, use w to save. If anything goes wrong, q without w quits fdisk without saving any changes. If everything went right, ls /dev/sda* will now report /dev/sda1 as the only numbered entry.

  5. Create an ext4 file system on /dev/sda1 with the command
mkfs.ext4 /dev/sda1
  6. Reboot your router with the USB drive attached, then go back to the terminal.
  7. Use the command
df -h

to check your devices. You should see /dev/sda1 mounted somewhere like /mnt/sda1. Note down the mount point (like /mnt/sda1) for the following steps. If the value is not /mnt/sda1 on your device, substitute the right one. A manual mount fallback is shown below.
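
If the drive is not mounted automatically on your build, you can mount it by hand. A minimal example, assuming the same device and mount point:

mkdir -p /mnt/sda1
mount /dev/sda1 /mnt/sda1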

Step 1A: Use internal storage

This step is for those who want to use internal storage instead. Skip it if you followed Step 1.

Find a place to store all the folders and files created in later steps and note down its path. I will use /mnt/sda1 in the following steps; replace it with your own path.

Step 2: Prepare the Debian system

OpenWrt is a lightweight Linux that ships without many standard system files. Before installing Resilio-Sync, we need to prepare a full Linux system core.

  1. First, install the tools for bootstrapping Debian. These packages will be installed to internal storage.
opkg install debootstrap binutils
  2. Install the Debian files. Note: if your router's chip is not arm64, change the command to the correct architecture name. The path /mnt/sda1 is used here.
debootstrap --arch=arm64 buster /mnt/sda1/debian http://ftp.de.debian.org/debian

This command downloads the Debian files into a new folder named debian under /mnt/sda1, which should be your USB drive. If something goes wrong, remove the folder /mnt/sda1/debian with the command rm -fr /mnt/sda1/debian before re-running this command.
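
As a quick sanity check that the bootstrap completed, list the new folder; it should show a standard Debian root layout (bin, etc, usr, var, and so on):

ls /mnt/sda1/debian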

  3. Bind the system folders from OpenWrt into the Debian system with these commands.
mount --bind /dev /mnt/sda1/debian/dev/
mount --bind /proc /mnt/sda1/debian/proc/
mount --bind /sys /mnt/sda1/debian/sys/
ln -s /bin/bash /mnt/sda1/debian/bin/ash
  4. Then start the Debian bash with this command.
chroot /mnt/sda1/debian/ /bin/bash

Now bash is running inside the Debian system, and the file system root has changed to the debian folder on your USB drive.
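
If you want to confirm which system you are in, checking /etc/os-release is a simple test; inside the chroot it should report Debian:

cat /etc/os-release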

  5. (Optional) To avoid confusing the two environments, a good trick is to change the shell prompt for the chroot. You can run these commands to do that. This step is optional but recommended.
echo 'PS1="CHROOT:\w# "' >> ~/.bashrc
exit
chroot /mnt/sda1/debian/ /bin/bash

You will notice CHROOT shown on the left of the prompt whenever the Debian system is in use.

  6. Now let's prepare the Debian system by installing locales.
apt-get install locales
dpkg-reconfigure locales

You can select en_US.UTF-8 or any other locale you like.

Step 3: Install Resilio-Sync

  1. Download the right DEB package of Resilio-Sync for your router's chip from here.
  2. Transfer the deb file to the path /mnt/sda1/debian on your router using scp or any other tool.
  3. Use ls / in the chroot terminal to make sure the file is there.
  4. Install the package using dpkg -i. If the file name is resilio-sync_2.7.2.1375-1_arm64.deb, the command is
dpkg -i /resilio-sync_2.7.2.1375-1_arm64.deb
  5. (Optional) Mark the service as auto-start with the following command. Actually, because the router does not boot Debian directly, this command has no real effect; I still include it as an optional step to satisfy my obsessive-compulsive side 🙂
systemctl enable resilio-sync
  6. (Optional) Edit the config if you need to. The config file is located at /etc/resilio-sync/config.json in the chroot terminal.
  7. Exit the chroot by typing exit and pressing Enter.
  8. Create a file to start Resilio-Sync with OpenWrt. This is what actually does the work, unlike step 5. Place a file /etc/init.d/resilio-sync with the content below.
#!/bin/sh /etc/rc.common
#

START=99
STOP=10

. $IPKG_INSTROOT/lib/functions.sh
. $IPKG_INSTROOT/lib/functions/service.sh

start() {
        mount --bind /proc /mnt/sda1/debian/proc
        chroot /mnt/sda1/debian /bin/bash /etc/init.d/resilio-sync start
}

restart() {
        mount --bind /proc /mnt/sda1/debian/proc
        chroot /mnt/sda1/debian /bin/bash /etc/init.d/resilio-sync restart
}

stop() {
        chroot /mnt/sda1/debian /bin/bash /etc/init.d/resilio-sync stop
        umount /mnt/sda1/debian/proc
}
enable() {
        err=1
        name="$(basename "${initscript}")"
        [ "$START" ] && \
                ln -sf "../init.d/$name" "$IPKG_INSTROOT/etc/rc.d/S${START}${name}" && \
                err=0
        [ "$STOP" ] && \
                ln -sf "../init.d/$name" "$IPKG_INSTROOT/etc/rc.d/K${STOP}${name}" && \
                err=0
        return $err
}
disable() {
        name="$(basename "${initscript}")"
        rm -f "$IPKG_INSTROOT"/etc/rc.d/S??$name
        rm -f "$IPKG_INSTROOT"/etc/rc.d/K??$name
}

Then mark the file as executable and enable it to auto-start with these commands:

chmod +x /etc/init.d/resilio-sync
/etc/init.d/resilio-sync enable

You can verify that a service named resilio-sync with number 99 appears on the System – Startup page of the router management portal. It should be marked as enabled as well.
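
If you prefer not to reboot right away, you can also start the service manually and check that the Resilio-Sync process is up. The process name rslsync is an assumption based on the stock Debian package:

/etc/init.d/resilio-sync start
ps | grep rslsync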

  9. Append the path of the resilio-sync init script to /etc/sysupgrade.conf so it is included in the backup package and preserved across upgrades. The command is:
echo "/etc/init.d/resilio-sync" >> /etc/sysupgrade.conf

Now everything is done. I will leave it to you to reboot your router and start your Resilio-Sync mission.

Thanks for help from:

Use GZipStream as a response in plain ASP.NET Core 5

Recently, I have been struggling to make ASP.NET Core 5 work with GZipStream. My requirement is simple: use GZipStream to compress a large text before sending it as an ASP.NET response.

Here is my first attempt, which does not work:

//This code is not working.
public static async Task WriteWithGZipAndCompleteAsync(this HttpResponse response, string text)
{
    response.StatusCode = 200;
    response.ContentType = "application/gzip";

    await using var gzip = new GZipStream(response.Body, CompressionLevel.Optimal, true);
    await using var streamWriter = new StreamWriter(gzip, Encoding.UTF8, -1, true);

    await streamWriter.WriteAsync(text);
    await streamWriter.FlushAsync();
    streamWriter.Close();

    await gzip.FlushAsync();
    gzip.Close();

    await response.CompleteAsync();
}

It does send the text to the client, but gunzip reports "unexpected end of file" after decompressing all the text.

The second version is worse; it sends nothing at all:

//This code is not working either.
public static async Task WriteWithGZipAndCompleteAsync(this HttpResponse response, string text)
{
    response.StatusCode = 200;
    response.ContentType = "application/gzip";

    using (var compressed = new MemoryStream())
    {
        using (var gzip = new GZipStream(compressed, CompressionLevel.Optimal, true))
        {
            var uncompressed = Encoding.UTF8.GetBytes(text);
            gzip.Write(uncompressed);

            gzip.Flush();
            gzip.Close();
        }

        var result = compressed.ToArray();
        response.Body.Write(result);
    }

    await response.CompleteAsync();
}

Finally, I got it working with the code below:

public static async Task WriteWithGZipAndCompleteAsync(this HttpResponse response, string text)
{
    response.StatusCode = 200;
    response.ContentType = "application/gzip";

    await using var gzip = new GZipStream(response.Body, CompressionLevel.Optimal, true);
    await using var streamWriter = new StreamWriter(gzip, Encoding.UTF8, -1, true);

    await streamWriter.WriteAsync(text);
    await streamWriter.FlushAsync();
    await gzip.FlushAsync();
    await response.Body.FlushAsync();

    //WARNING: DO NOT CALL CompleteAsync here; it will throw an exception.
}
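
For completeness, here is a hypothetical usage sketch of that extension method inside a controller action; the controller, route, and payload are made up for illustration:

[ApiController]
[Route("export")]
public class ExportController : ControllerBase
{
    [HttpGet]
    public async Task Get()
    {
        //A large payload standing in for real data.
        var text = new string('x', 1_000_000);
        await Response.WriteWithGZipAndCompleteAsync(text);
    }
}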

Conclusion:

  • When Close() is called on instances of StreamWriter and GZipStream, the underlying stream gets closed, NO MATTER what value of leaveOpen was specified. Huh? Weird? But that is what happens here.
  • Do not call response.CompleteAsync() after writing data to response.Body.
    • If Close() was called on the streams beforehand, response.CompleteAsync() will throw an ObjectDisposedException: Cannot access a closed stream.
    • If Close() is not present, as in the code above, response.CompleteAsync() will throw an InvalidOperationException: Writing is not allowed after writer was completed.

SanDisk portable SSD strange problem

I bought a SanDisk portable SSD last year, the SanDisk Extreme Pro Portable (E80). It works on many computers without problems, but when connected to Windows on a NUC11PAHi7, it can only be started with a USB A-to-C cable. With a USB C-to-C cable, the device is reported as unable to start.

Last week, I tried to connect this device to my Raspberry Pi 4 (8 GB); it could not be recognized at all. I contacted support and got a replacement. The new device looks the same as the old one, with the same name, but the model number has changed to E81. The new one can be detected and mounted on the Raspberry Pi 4, but whenever I write large files to it, I/O errors are thrown.

Here is the log from dmesg:

[ 82.174619] usb 2-2: USB disconnect, device number 2
[ 86.214549] usb 2-2: new SuperSpeed Gen 1 USB device number 3 using xhci_hcd
[ 86.235518] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 86.235534] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 86.235546] usb 2-2: Product: Extreme Pro 55AF
[ 86.235556] usb 2-2: Manufacturer: SanDisk
[ 86.235567] usb 2-2: SerialNumber: <Removed By Me>
[ 86.246327] scsi host0: uas
[ 87.451423] scsi 0:0:0:0: Direct-Access SanDisk Extreme Pro 55AF 1084 PQ: 0 ANSI: 6
[ 87.452523] scsi 0:0:0:1: Enclosure SanDisk SES Device 1084 PQ: 0 ANSI: 6
[ 87.453670] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 87.454934] sd 0:0:0:0: [sda] 3906963617 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 87.454971] scsi 0:0:0:1: Attached scsi generic sg1 type 13
[ 87.455110] sd 0:0:0:0: [sda] Write Protect is off
[ 87.455125] sd 0:0:0:0: [sda] Mode Sense: 37 00 10 00
[ 87.455454] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 87.456143] sd 0:0:0:0: [sda] Optimal transfer size 2097152 bytes
[ 87.476901] sda: sda1
[ 87.478994] sd 0:0:0:0: [sda] Attached SCSI disk
[ 87.489999] scsi 0:0:0:1: Failed to get diagnostic page 0x1
[ 87.495768] scsi 0:0:0:1: Failed to bind enclosure -19
[ 87.501102] ses 0:0:0:1: Attached Enclosure device
[ 118.602926] exfat: module is from the staging directory, the quality is unknown, you have been warned.
[ 118.606363] exFAT: Version 1.3.0
[ 118.607475] [EXFAT] trying to mount...
[ 118.737713] [EXFAT] mounted successfully
[ 257.485781] usb 2-2: USB disconnect, device number 3
[ 257.487191] blk_update_request: I/O error, dev sda, sector 2048 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[ 257.497889] Buffer I/O error on dev sda1, logical block 0, lost async page write
[ 257.505495] blk_update_request: I/O error, dev sda, sector 34816 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
[ 257.516111] Buffer I/O error on dev sda1, logical block 32768, lost async page write
[ 257.523988] Buffer I/O error on dev sda1, logical block 32769, lost async page write
[ 257.531876] blk_update_request: I/O error, dev sda, sector 36864 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[ 257.542483] Buffer I/O error on dev sda1, logical block 34816, lost async page write
[ 259.414440] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 259.653823] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 259.954174] usb 2-2: new SuperSpeed Gen 1 USB device number 4 using xhci_hcd
[ 259.975162] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 259.975179] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 259.975190] usb 2-2: Product: Extreme Pro 55AF
[ 259.975200] usb 2-2: Manufacturer: SanDisk
[ 259.975211] usb 2-2: SerialNumber: <Removed By Me>
[ 259.984054] scsi host1: uas
[ 262.170728] scsi 1:0:0:0: Direct-Access SanDisk Extreme Pro 55AF 1084 PQ: 0 ANSI: 6
[ 262.171845] scsi 1:0:0:1: Enclosure SanDisk SES Device 1084 PQ: 0 ANSI: 6
[ 262.173027] sd 1:0:0:0: Attached scsi generic sg0 type 0
[ 262.173449] ses 1:0:0:1: Attached Enclosure device
[ 262.174092] ses 1:0:0:1: Attached scsi generic sg1 type 13
[ 262.174542] ses 1:0:0:1: Failed to get diagnostic page 0x1
[ 262.176656] sd 1:0:0:0: [sdb] 3906963617 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 262.180401] sd 1:0:0:0: [sdb] Write Protect is off
[ 262.180412] sd 1:0:0:0: [sdb] Mode Sense: 37 00 10 00
[ 262.180485] ses 1:0:0:1: Failed to bind enclosure -19
[ 262.185106] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 262.186292] sd 1:0:0:0: [sdb] Optimal transfer size 2097152 bytes
[ 262.216969] sdb: sdb1
[ 262.219280] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 263.425802] usb 2-2: USB disconnect, device number 4
[ 263.428047] sd 1:0:0:0: [sdb] Synchronizing SCSI cache
[ 263.665835] sd 1:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 263.994479] usb 2-2: new SuperSpeed Gen 1 USB device number 5 using xhci_hcd
[ 264.015056] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 264.015069] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 264.015078] usb 2-2: Product: Extreme Pro 55AF
[ 264.015086] usb 2-2: Manufacturer: SanDisk
[ 264.015095] usb 2-2: SerialNumber: <Removed By Me>
[ 264.023743] scsi host1: uas
[ 265.437813] usb 2-2: USB disconnect, device number 5
[ 265.438360] xhci_hcd 0000:01:00.0: WARNING: Host System Error

Enable rc.local on Ubuntu 20.04

For compatibility reasons, rc.local is still supported by new versions of Ubuntu, but it is disabled by default. This guide shows how to enable running rc.local at system startup.

  1. Create rc.local if it does not exist.

Run nano /etc/rc.local. If the file does not exist, use the code below as the default rc.local content.

#!/bin/bash

exit 0

Run chmod +x /etc/rc.local to give it execute permission.

2. Create the systemd service file.

Run nano /etc/systemd/system/rc-local.service to create the systemd service file and paste in the text below.

[Unit]
Description=/etc/rc.local Support
ConditionPathExists=/etc/rc.local

[Service]
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

3. Configure systemd with the command below.

systemctl enable rc-local

All done. rc.local will run at system startup.
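
To verify without rebooting, you can start the unit manually and check its state:

systemctl start rc-local
systemctl status rc-local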

A bash script for copying data from SD card

When I go out, I usually take a GoPro and a drone. When traveling for several days, I often need to copy data from the SD cards to a hard drive. I have owned some products for this scenario, like the My Passport Wireless Pro, but none of them are good enough. Take that WD disk for example: it is very slow, has a bad app, and the worst thing is the built-in battery. Many designers never considered how their products would be taken on a plane.

Finally, I decided to build a Raspberry Pi with all the tools and scripts I need inside.

This is the script for copying data from an SD card to a USB disk. It is tested with the Ubuntu 20.04.1 ARM64 version on a Raspberry Pi 4.

Before using this script, prepare the disk that will receive the data by creating a folder named Target on it. Of course, the file system of that partition must be writable on your device.

You can change the folder settings by editing the # define block.

  • The MountPoint settings are the paths used internally by this script.
  • TargetFolder is the path of the target folder. The partition containing this folder will be detected as the target. The default value is "/Target".
  • SourceTestFolder is the folder used to detect the source. Note: all files, not only those within this folder, will be copied. The default value is "/DCIM"; SD cards from digital cameras and drones should all contain this folder.

To use this script, connect your target disk and SD card to your device (mounting is not required) and run the script. Both source and target are detected automatically, and you will be asked for a subfolder name. All files from the source are then copied into that subfolder under the TargetFolder of the target disk. The files on the SD card are NOT deleted after copying.

If an argument is provided, its value is used as the subfolder name. A simplified sketch of this detect-and-copy flow follows.
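
As a rough illustration of the flow described above (not the actual script), a simplified sketch might look like this; the device globs and paths are assumptions:

#!/bin/bash
# Simplified sketch of the detect-and-copy flow; not the original script.
TargetFolder="/Target"
SourceTestFolder="/DCIM"
MountRoot="/tmp/sdcopy"

Source=""
Target=""

# Mount every candidate partition and look for the marker folders.
for dev in /dev/sd?? /dev/mmcblk?p?; do
    [ -b "$dev" ] || continue
    mp="$MountRoot/$(basename "$dev")"
    mkdir -p "$mp"
    mount "$dev" "$mp" 2>/dev/null || continue
    [ -d "$mp$TargetFolder" ] && Target="$mp$TargetFolder"
    [ -d "$mp$SourceTestFolder" ] && Source="$mp"
done

[ -n "$Source" ] && [ -n "$Target" ] || { echo "Source or target not found."; exit 1; }

# Use the first argument as the subfolder name, or ask for one.
Sub="$1"
[ -n "$Sub" ] || read -r -p "Subfolder name: " Sub

mkdir -p "$Target/$Sub"
cp -rv "$Source/." "$Target/$Sub/"   # files on the SD card are left in place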

The source is licensed under the MIT license. Click here to get it.

Forward Client Certificate to .net Core App through Nginx

When a .NET Core app is deployed on Linux with Kestrel, Nginx usually works in front of it as a proxy. Nginx handles all the HTTPS-related work and forwards a plain HTTP request to the core app. Some apps may require clients to present a certificate for authentication; in that case, the client certificate needs to be forwarded to the core app.

Core App

First, the core app needs to be prepared to receive and check the client certificate.

public class StartUp
{
    ...
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        //Add code here
        services.AddCertificateForwarding(options => options.CertificateHeader = "X-ARR-ClientCert");
        //PointA - for later reference
    }
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        //Add code here
        app.UseCertificateForwarding();
        app.UseAuthentication();
        app.UseAuthorization();
        ...
    }
}

Note: UseHttpsRedirection() cannot be used because the core app is set to use HTTP only. UseCertificateForwarding() may expose a security issue, so you could put it behind a switch and enable it only when required. The header name X-ARR-ClientCert can be changed as you wish.

By default, the core app validates the client certificate against the locally trusted CAs. For additional tuning, add this code at the PointA position above.

services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        options.Events = new CertificateAuthenticationEvents
        {
            OnCertificateValidated = aMethod,
            OnAuthenticationFailed = anotherMethod
        };
    });

It is not required to provide both OnCertificateValidated and OnAuthenticationFailed at the same time. Check this doc for details.

Nginx

Now some lines need to be added to the Nginx configuration.

Check Certificate with Nginx

To have Nginx check the client certificate,

ssl_client_certificate file;
ssl_verify_client on;

is required. The file should contain the trusted CA certificates in PEM format. When using multiple CA certificates, write all of them into the same file, as shown below. When the client certificate is not mandatory, change on to optional.
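
For example, two CA files could be combined like this (the file names are hypothetical):

cat rootCA.pem intermediateCA.pem > /etc/nginx/trusted-cas.pem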

Not Check Certificate with Nginx

If Nginx should leave the certificate checking to the core app, simply use

ssl_verify_client optional_no_ca;

in the site file. This lets Nginx pass the client certificate to the proxy untouched.

Pass to Proxy

After applying one of the configurations above, the client certificate is ready to be passed to the proxied app, the core app. The line below does that.

proxy_set_header X-ARR-ClientCert $ssl_client_escaped_cert;

If you changed the header name X-ARR-ClientCert above, use the same value here. This directive can be placed inside a location block too; a combined example follows.
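
Putting the pieces together, a minimal sketch of a site config might look like this; the certificate paths, server name, and upstream port are assumptions:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/server.pem;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Either verify the client certificate here ...
    ssl_client_certificate /etc/nginx/trusted-cas.pem;
    ssl_verify_client optional;
    # ... or pass it through unverified:
    # ssl_verify_client optional_no_ca;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header X-ARR-ClientCert $ssl_client_escaped_cert;
    }
}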

Now you can get on with the dirty job of checking everything about the client certificate in your core app. 😀