SanDisk portable SSD strange problem

I bought a SanDisk portable SSD last year, the SanDisk Extreme Pro Portable (E80). It works on many computers without problems. But when connected to Windows on a NUC11PAHi7, it only starts when using a USB-A to C cable; with a USB C-to-C cable, the device is marked as unable to start.

Last week, I tried to connect this device to my Raspberry Pi 4 (8 GB), and it was not recognized at all. I contacted service support and got a replacement. The new device looks the same as the old one, with the same name, but the model number changed to E81. The new one can be detected and mounted on the Raspberry Pi 4, but whenever I write large files to it, an I/O error is thrown.

Here is the log from dmesg:

[ 82.174619] usb 2-2: USB disconnect, device number 2
[ 86.214549] usb 2-2: new SuperSpeed Gen 1 USB device number 3 using xhci_hcd
[ 86.235518] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 86.235534] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 86.235546] usb 2-2: Product: Extreme Pro 55AF
[ 86.235556] usb 2-2: Manufacturer: SanDisk
[ 86.235567] usb 2-2: SerialNumber: <Removed By Me>
[ 86.246327] scsi host0: uas
[ 87.451423] scsi 0:0:0:0: Direct-Access SanDisk Extreme Pro 55AF 1084 PQ: 0 ANSI: 6
[ 87.452523] scsi 0:0:0:1: Enclosure SanDisk SES Device 1084 PQ: 0 ANSI: 6
[ 87.453670] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 87.454934] sd 0:0:0:0: [sda] 3906963617 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 87.454971] scsi 0:0:0:1: Attached scsi generic sg1 type 13
[ 87.455110] sd 0:0:0:0: [sda] Write Protect is off
[ 87.455125] sd 0:0:0:0: [sda] Mode Sense: 37 00 10 00
[ 87.455454] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 87.456143] sd 0:0:0:0: [sda] Optimal transfer size 2097152 bytes
[ 87.476901] sda: sda1
[ 87.478994] sd 0:0:0:0: [sda] Attached SCSI disk
[ 87.489999] scsi 0:0:0:1: Failed to get diagnostic page 0x1
[ 87.495768] scsi 0:0:0:1: Failed to bind enclosure -19
[ 87.501102] ses 0:0:0:1: Attached Enclosure device
[ 118.602926] exfat: module is from the staging directory, the quality is unknown, you have been warned.
[ 118.606363] exFAT: Version 1.3.0
[ 118.607475] [EXFAT] trying to mount...
[ 118.737713] [EXFAT] mounted successfully
[ 257.485781] usb 2-2: USB disconnect, device number 3
[ 257.487191] blk_update_request: I/O error, dev sda, sector 2048 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[ 257.497889] Buffer I/O error on dev sda1, logical block 0, lost async page write
[ 257.505495] blk_update_request: I/O error, dev sda, sector 34816 op 0x1:(WRITE) flags 0x800 phys_seg 2 prio class 0
[ 257.516111] Buffer I/O error on dev sda1, logical block 32768, lost async page write
[ 257.523988] Buffer I/O error on dev sda1, logical block 32769, lost async page write
[ 257.531876] blk_update_request: I/O error, dev sda, sector 36864 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
[ 257.542483] Buffer I/O error on dev sda1, logical block 34816, lost async page write
[ 259.414440] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 259.653823] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 259.954174] usb 2-2: new SuperSpeed Gen 1 USB device number 4 using xhci_hcd
[ 259.975162] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 259.975179] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 259.975190] usb 2-2: Product: Extreme Pro 55AF
[ 259.975200] usb 2-2: Manufacturer: SanDisk
[ 259.975211] usb 2-2: SerialNumber: <Removed By Me>
[ 259.984054] scsi host1: uas
[ 262.170728] scsi 1:0:0:0: Direct-Access SanDisk Extreme Pro 55AF 1084 PQ: 0 ANSI: 6
[ 262.171845] scsi 1:0:0:1: Enclosure SanDisk SES Device 1084 PQ: 0 ANSI: 6
[ 262.173027] sd 1:0:0:0: Attached scsi generic sg0 type 0
[ 262.173449] ses 1:0:0:1: Attached Enclosure device
[ 262.174092] ses 1:0:0:1: Attached scsi generic sg1 type 13
[ 262.174542] ses 1:0:0:1: Failed to get diagnostic page 0x1
[ 262.176656] sd 1:0:0:0: [sdb] 3906963617 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 262.180401] sd 1:0:0:0: [sdb] Write Protect is off
[ 262.180412] sd 1:0:0:0: [sdb] Mode Sense: 37 00 10 00
[ 262.180485] ses 1:0:0:1: Failed to bind enclosure -19
[ 262.185106] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 262.186292] sd 1:0:0:0: [sdb] Optimal transfer size 2097152 bytes
[ 262.216969] sdb: sdb1
[ 262.219280] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 263.425802] usb 2-2: USB disconnect, device number 4
[ 263.428047] sd 1:0:0:0: [sdb] Synchronizing SCSI cache
[ 263.665835] sd 1:0:0:0: [sdb] Synchronize Cache(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[ 263.994479] usb 2-2: new SuperSpeed Gen 1 USB device number 5 using xhci_hcd
[ 264.015056] usb 2-2: New USB device found, idVendor=0781, idProduct=55af, bcdDevice=10.84
[ 264.015069] usb 2-2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 264.015078] usb 2-2: Product: Extreme Pro 55AF
[ 264.015086] usb 2-2: Manufacturer: SanDisk
[ 264.015095] usb 2-2: SerialNumber: 323130333757343030363638
[ 264.023743] scsi host1: uas
[ 265.437813] usb 2-2: USB disconnect, device number 5
[ 265.438360] xhci_hcd 0000:01:00.0: WARNING: Host System Error

Enable rc.local on Ubuntu 20.04

For compatibility reasons, rc.local is still supported by newer versions of Ubuntu, but it is disabled by default. This guide shows how to enable running rc.local at system startup.

  1. Create rc.local if it does not exist.

Run nano /etc/rc.local. If the file does not exist, use the code below as the default rc.local content.

#!/bin/bash

exit 0

Run chmod +x /etc/rc.local to make it executable.
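The whole step can be sketched as a single safe-to-run script; it uses a scratch path under /tmp instead of /etc, so you can try it without touching the system:

```shell
#!/bin/bash
# Sketch of step 1: create a minimal rc.local and mark it executable.
# /tmp is used here instead of /etc so the sketch is safe to run.
RC=/tmp/rc.local.demo

cat > "$RC" <<'EOF'
#!/bin/bash

exit 0
EOF

chmod +x "$RC"          # same as: chmod +x /etc/rc.local
"$RC" && echo "rc.local exited cleanly"
```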

2. Create systemd service file.

Run nano /etc/systemd/system/rc-local.service to create the systemd service file, and paste the text below into it.

[Unit]
Description=/etc/rc.local Support
ConditionPathExists=/etc/rc.local

[Service]
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

3. Configure systemd using the bash command below.

systemctl enable rc-local

All done. rc.local will be run at system startup.

A bash script for copying data from SD card

When I go out, I usually take a GoPro and a drone. When traveling for several days, it is often necessary to copy data from SD cards to a hard drive. I own some products for this scenario, like the My Passport Wireless Pro, but none of them are good enough. Take that WD disk for example: it is very slow, has a bad app, and worst of all is the built-in battery; many designers did not consider how to take their products on a plane.

Finally, I decided to build a Raspberry Pi with all the tools and scripts I need inside.

This is the script for copying data from an SD card to a USB disk. It is tested with the Ubuntu 20.04.1 ARM64 version on a Raspberry Pi 4.

Before using this script, prepare the disk that will hold the data by creating a folder named Target on it. Of course, the file system of the partition should be writable on your device.

You can change the folder settings by editing the # define block.

  • MountPoint settings are the paths used in this script.
  • TargetFolder is the path of the target folder. The partition containing this folder will be detected as the target. The default value is “/Target”.
  • SourceTestFolder is the folder used to detect the source. Note: all files on the source, not only those within this folder, will be copied. The default value is “/DCIM”; all SD cards from digital cameras and drones should contain this folder.

To use this script, connect your target disk and SD card to your device (mounting is not required) and run the script. Both source and target will be detected automatically, and you will be asked for a subfolder name. All files from the source will then be copied into that subfolder under the TargetFolder of the target disk. The files on the SD card will NOT be deleted after copying.

If an argument is provided, its value will be used as the subfolder name.
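The folder-based detection described above can be illustrated with a small sketch. The marker folder names match the defaults, and the mounted partitions are simulated with directories under /tmp, so it can be run without any real disks:

```shell
#!/bin/bash
# Simulate mounted partitions with plain directories
# (the real script scans and mounts block devices instead).
base=/tmp/sdcopy-demo
rm -rf "$base"
mkdir -p "$base/disk1/Target" "$base/disk2/DCIM/100MEDIA"

target="" ; source=""
for part in "$base"/*; do
    [ -d "$part/Target" ] && target="$part"   # TargetFolder marker found
    [ -d "$part/DCIM" ]   && source="$part"   # SourceTestFolder marker found
done

echo "source=$source"
echo "target=$target"
```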

The source is licensed under the MIT license. Click here to get it.

Forward Client Certificate to .net Core App through Nginx

When a .NET Core app is deployed on Linux with Kestrel, Nginx usually works in front of it as a proxy. Typically, Nginx handles everything related to HTTPS and forwards a plain HTTP request to the app. Some apps may require clients to use a certificate for authentication; in this case, the client certificate needs to be forwarded to the app.

Core App

First, the app needs to be prepared to receive and check the client certificate.

public class StartUp
{
    ...
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        //Add code here
        services.AddCertificateForwarding(options => options.CertificateHeader = "X-ARR-ClientCert");
        //PointA - for later reference
    }
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        //Add code here
        app.UseCertificateForwarding(); //must come before authentication
        app.UseAuthentication();
        app.UseAuthorization();
        ...
    }
}

Note: UseHttpsRedirection() cannot be used here because the app is set to use HTTP only. UseCertificateForwarding() may expose a security issue; you could guard it behind a switch and enable it only when required. The header name X-ARR-ClientCert can be changed as you wish.

By default, the app validates the client certificate against the locally trusted CAs. For additional tuning, add the code below at the PointA position above.

services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        options.Events = new CertificateAuthenticationEvents
        {
            OnCertificateValidated = aMethod,
            OnAuthenticationFailed = anotherMethod
        };
    });
It is not required to provide OnCertificateValidated and OnAuthenticationFailed at the same time. Check this doc for details.
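For reference, the two delegates above could look like this; the names and the checks are illustrative, not part of the original setup:

```csharp
private static Task aMethod(CertificateValidatedContext context)
{
    // Example: accept the certificate after your own checks
    // (e.g. compare the thumbprint against an allow-list).
    context.Success();
    return Task.CompletedTask;
}

private static Task anotherMethod(CertificateAuthenticationFailedContext context)
{
    // Example: reject with a reason.
    context.Fail("Invalid client certificate.");
    return Task.CompletedTask;
}
```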

Nginx

Now, a few lines need to be added to the Nginx configuration.

Check Certificate with Nginx

To have Nginx check the client certificate itself,

ssl_client_certificate file;
ssl_verify_client on;

is required. The file should contain the trusted CA certificates in PEM format. When using multiple CA certificates, write all of them into the same file. If the client certificate should be optional rather than mandatory, change on to optional.

Not Check Certificate with Nginx

If we want Nginx to leave the certificate checking to the app, simply use

ssl_verify_client optional_no_ca;

in the site file. This makes Nginx pass the client certificate to the proxied app untouched.

Pass to Proxy

After applying one of the configurations above, the client certificate is ready to be passed to the proxied app. The line below does that.

proxy_set_header X-ARR-ClientCert $ssl_client_escaped_cert;

If you changed the header name X-ARR-ClientCert above, use the same value here. This line can also be placed inside a location block.
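Putting the pieces together, a site file could look roughly like this; the server name, certificate paths, and upstream port are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;                         # placeholder

    ssl_certificate     /etc/nginx/certs/site.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/site.key;

    # Either check the client certificate in Nginx...
    ssl_client_certificate /etc/nginx/certs/trusted-ca.pem;
    ssl_verify_client on;
    # ...or leave the checking to the app:
    # ssl_verify_client optional_no_ca;

    location / {
        proxy_pass http://127.0.0.1:5000;            # Kestrel, placeholder port
        proxy_set_header X-ARR-ClientCert $ssl_client_escaped_cert;
    }
}
```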

Now you can enjoy the dirty job of checking everything about the client certificate in your app. 😀

Thread-Safe calling support in Remote Agency 2

In the next release of Remote Agency, thread-safe calling support will be added.

In the current version, all access to assets happens on the thread that delivers the message to the Remote Agency Manager. Due to network transport, this may result in multithreaded calls on the target service object. Without special treatment, errors may occur when accessing objects that are not designed to be thread safe.

In the next release, a new attribute is introduced. The user can specify the threading behavior for each interface or service object class: Free (like now), using a SynchronizationContext (useful in form-based programs), one dedicated free thread, or one task scheduler. A new task scheduler is built into Remote Agency that always uses a single thread to execute all jobs one by one. The task scheduler also supports passing in a user thread as its working thread, so user code can use the same thread to initialize an object and then hand that thread over to the task scheduler to run all access to that object.
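A single-thread task scheduler of the kind described can be sketched like this; it is a generic illustration, not Remote Agency's actual implementation:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Runs every queued task on one dedicated thread, one by one.
public sealed class SingleThreadTaskScheduler : TaskScheduler
{
    private readonly BlockingCollection<Task> _queue = new BlockingCollection<Task>();

    public SingleThreadTaskScheduler()
    {
        var thread = new Thread(() =>
        {
            foreach (var task in _queue.GetConsumingEnumerable())
                TryExecuteTask(task);
        });
        thread.IsBackground = true;
        thread.Start();
    }

    protected override void QueueTask(Task task) => _queue.Add(task);

    // Never inline: everything must go through the dedicated thread.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        => false;

    protected override IEnumerable<Task> GetScheduledTasks() => _queue.ToArray();
}
```

Such a scheduler can then be passed to Task.Factory.StartNew or TaskFactory to guarantee that all work on a non-thread-safe object runs on the same thread.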

Serializer for Remote Agency 2

In the next major release of Remote Agency, the default serializer will be changed to JSON, provided by Json.NET, instead of the DataContractSerializer shipped with the .NET runtime.

This change was made because type support, especially generic type support, is too weak in DataContractSerializer, and many user-defined classes are not marked with [DataMember] correctly. The change also makes it possible to serialize and deserialize data in one phase, instead of the two phases in version 1, because the serializer recognizes generic types automatically, without requiring the generic types to be extracted in advance by code.