Why you should use Syncthing

I’ve been a user of Dropbox for ages, I’ve tried ownCloud, I’ve tried Box, and probably numerous others that I’ve forgotten about, but in the past year I’ve migrated over to Syncthing, and I haven’t looked back. Open-source software, a well-designed protocol, complete ownership of your data – I could go on… but this post by gbolo explains it perfectly!

Syncthing – Why you should be using it

Upgrading the TP-Link Archer C5 (v1.2) to an Archer C7 (v2.0)

I own a TP-Link Archer C5 router, version 1.2 – which is identical to the TP-Link Archer C7, version 2.0, save for some limitations introduced through software. These limitations include a 300Mbps cap on 2.4GHz (450Mbps for the C7) and an 867Mbps cap on 5GHz (1300Mbps on the C7). Not that much, but still enough to be worth tinkering for.
Since I was looking at increasing the WiFi speeds in my home, I searched around a bit, and found out on Stefan Thesen’s blog and Hagensieker’s blog that it is perfectly possible :)

First, make sure you definitely have an Archer C5 version 1.2, with three antennas. Don’t even try with another version. If it breaks, no one is to blame but you.

You’ll need to flash DD-WRT, OpenWRT or LEDE-Project (check the respective projects for instructions on how to do that) first.

Next, download an Archer C7 firmware from the TP-Link website. I downloaded version 3.14.1 (141110) – which contains the firmware in the file ArcherC7v2_v3_en_3_14_1_up_boot(141110).bin

Now, remove the first 257 blocks of 512 bytes (131,584 bytes in total), which contain the bootloader (which we don’t need to flash): dd if='ArcherC7v2_v3_en_3_14_1_up_boot(141110).bin' of=tplink_mod.bin skip=257 bs=512 (note the quotes around the input filename – the parentheses would otherwise confuse the shell). In case you don’t trust doing it yourself, you can also download the prepared firmware from the blog of Stefan.
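If you want to sanity-check the arithmetic of the strip, you can rehearse the same dd against a dummy file of known size first (a sketch; firmware.bin is a stand-in for the real image name):

```shell
# Create a 512,000-byte stand-in for the firmware image
dd if=/dev/zero of=firmware.bin bs=512 count=1000 2>/dev/null
# Strip the first 257 blocks of 512 bytes (131,584 bytes), same as above
dd if=firmware.bin of=tplink_mod.bin skip=257 bs=512 2>/dev/null
# Remaining size: 512000 - 131584 = 380416 bytes
stat -c%s tplink_mod.bin
```

If the size comes out as input minus 131,584, the strip did what you expected.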

Next, you can transfer this file (using SFTP) to /tmp on your router, and then force flash it: sysupgrade -F /tmp/tplink_mod.bin. This will flash the firmware and reboot the router. You’ll have to reconnect to it afterwards, and the web interface should report an Archer C7 :)

Afterwards you can either upgrade to the latest C7 firmware, or whichever third-party firmware you want. I reflashed to LEDE-Project.

Initial testing showed an improvement in WiFi throughput speeds – so I’m happy with my ‘new’ C7 :)

Installing microG services (as Play Services replacement) on the Asus TF101 tablet

I still have an Asus Transformer TF101 tablet in use – running Marshmallow – but after a Play Services upgrade in which Google inserted some NEON instructions (which the TF101 does not support), a lot of “Play Services has stopped working” popups showed up, making the tablet nigh unusable. Initial tests blocking the upgrade of the services yielded no success, and a lot of programs demand the newer versions of the services anyway.

In my searches I ran across the microG Project – “A free-as-in-freedom re-implementation of Google’s proprietary Android user space apps and libraries.” Sounded interesting, so I went and tried it, with success, on the tablet. It runs faster, battery life is better, and it works for everything I use it for.

Below you can find the steps I used. These apply to the Transformer TF101, and come with no guarantees whatsoever.

Preparing the tablet

  • First, you’ll need to uninstall both “Google Play Services” and the “Google Play Store”. Use something like Lucky Patcher, or Titanium Backup, or whatnot, to remove them.
  • Reflash the ROM for KatKiss (I’m using 6.0.1 #29) and SuperSU (linked on the same page). Do NOT install opengapps!
  • Install F-Droid.
    Make sure you enable “Expert Mode” and “Unstable updates” in the settings, as we need the latest version of the packages.
  • Add the repository for microG: https://microg.org/fdroid/repo (as described here)
  • Temporarily disable the F-Droid repository.
  • Install the following items using F-Droid:
    • microG Services Core
    • microG Service Framework Proxy
  • Re-enable the F-Droid repository, and install

Patching the ROM to allow signature spoofing
Download (with git) a copy of Lanchon’s haystack repository: git clone https://github.com/Lanchon/haystack.git

Make sure your tablet is connected through USB and that adb works, then execute these commands in the directory where you cloned the git repository:
(you can find more information on the page of the git repository)

  • ./pull-fileset tf101
  • ./patch-fileset patches/sigspoof-hook-4.1-6.0/ 23 tf101/
  • ./patch-fileset patches/sigspoof-core/ 23 tf101__sigspoof-hook-4.1-6.0/
  • ./patch-fileset patches/sigspoof-ui-global-4.1-6.0/ 23 tf101__sigspoof-hook-4.1-6.0__sigspoof-core/
  • ./push-fileset tf101__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0/
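The five steps above can be wrapped in a small script so a failing step aborts the run instead of pushing a half-patched fileset (a sketch; it assumes the haystack checkout is the working directory and the tablet is reachable over adb):

```shell
# Write the wrapper out and syntax-check it with sh -n
cat > /tmp/patch-tf101.sh <<'EOF'
#!/bin/sh
set -e  # abort on the first failing step
./pull-fileset tf101
./patch-fileset patches/sigspoof-hook-4.1-6.0/ 23 tf101/
./patch-fileset patches/sigspoof-core/ 23 tf101__sigspoof-hook-4.1-6.0/
./patch-fileset patches/sigspoof-ui-global-4.1-6.0/ 23 tf101__sigspoof-hook-4.1-6.0__sigspoof-core/
./push-fileset tf101__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0/
EOF
sh -n /tmp/patch-tf101.sh && echo "syntax OK"
```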

Reboot the tablet. Afterwards, go to “Settings”, “Developer options”, scroll to the bottom and enable “Allow signature spoofing”.

Configuring microG Services
Go into the application drawer, and look for an application called “microG Settings”.

  • Tap “Permission Missing” and give all permissions
  • Enable “Google device registration”
  • Enable “Google Cloud Messaging”
  • Go into “UnifiedNlp Settings”, tap both “location backend” and “address lookup backends”, and enable the backends there.
  • Go back to the main menu of microG Settings, tap “Self-Check”, and make sure it doesn’t complain about anything.
  • In “Self-Check”, make sure to tap “Battery optimizations ignored” to allow the service to run in the background

Reinstall Google Play Store
Download the Play Store from e.g. APKMirror (http://www.apkmirror.com/apk/google-inc/google-play-store/) to your PC. Rename it to com.android.vending.apk
Execute the following with adb:

  • adb remount
  • adb shell mkdir /system/priv-app/Phonesky
  • adb push com.android.vending.apk /system/priv-app/Phonesky/

Reboot the tablet one last time. Now you should have the Play Store available and you can install apps again to your heart’s content ;)

Tweaking WooCommerce payment workflows

I’m playing part-time webmaster for the choir I sing in, and as such, am getting up close and personal with WooCommerce. Quite a nifty shopping cart, but it does require a lot of tweaks to really make it work to your liking – unless you’re willing to shell out a lot of cash.

The latest modification was changing the workflow of the payment gateways – more specifically, the BACS gateway (Bank Account Clearing System – or as we mortals call it, wire transfer).

The default flow for WooCommerce (for this gateway) is:

  1. Order is put in by customer
  2. Order is automatically flagged as on-hold, and a mail is sent out to the customer with the bank info
  3. Customer (supposedly) pays
  4. Store manager sees the payment, and flags order as processing – another mail is sent out with the notification that it’s being processed
  5. Store manager (hopefully) ships the product, flags the order as completed and another mail is sent out with ‘order complete’ status.

Now, for our uses, the on-hold status is a bit superfluous (and we’ve had people getting confused by it).
We’d rather have it go straight to processing, and have that mail contain the bank information (only for BACS payments, of course).

After some testing, I came up with two solutions: one very hacky and not maintainable, the other better. Both need to be inserted in your theme’s functions.php file.

/* override gateway for BACS */
add_filter('woocommerce_payment_gateways', 'my_core_gateways');
function my_core_gateways($methods) {
  foreach ($methods as &$method) {
    if ($method == 'WC_Gateway_BACS') {
      $method = 'WC_Gateway_BACS_custom';
    }
  }
  return $methods;
}

/* custom gateway processor for BACS */
class WC_Gateway_BACS_custom extends WC_Gateway_BACS {
  public function email_instructions( $order, $sent_to_admin, $plain_text = false ) {
    if ( ! $sent_to_admin && 'bacs' === $order->payment_method && $order->has_status( 'processing' ) ) {
      if ( $this->instructions ) {
        echo wpautop( wptexturize( $this->instructions ) ) . PHP_EOL;
      }
      /* dirty hack to get access to the parent's private bank_details() */
      $reflector = new ReflectionObject($this);
      $method = $reflector->getMethod('bank_details');
      $method->setAccessible(true);
      $result = $method->invoke($this, $order->id);
    }
  }

  public function process_payment( $order_id ) {
    $order = wc_get_order( $order_id );

    // Mark as processing (we're awaiting the payment)
    $order->update_status( 'processing', __( 'Awaiting BACS payment', 'woocommerce' ) );

    // Reduce stock levels
    $order->reduce_order_stock();

    // Remove cart
    WC()->cart->empty_cart();

    // Return thankyou redirect
    return array(
      'result'   => 'success',
      'redirect' => $this->get_return_url( $order )
    );
  }
}
I have several reservations about the code above: it’s basically shamelessly copying and overloading the two functions of the parent class, and calling a private function which is internal to the parent class – both of which might cause trouble if there are big changes in WooCommerce. It works, but well, it’s ugly. So I looked for a better way to tackle this.

add_action( 'woocommerce_email_before_order_table', 'add_order_email_instructions', 10, 2 );
add_action( 'woocommerce_thankyou', 'bacs_order_payment_processing_order_status', 10, 1 );

function bacs_order_payment_processing_order_status( $order_id ) {
  if ( ! $order_id ) {
    return;
  }

  $order = new WC_Order( $order_id );
  if ( 'bacs' === $order->payment_method && ( 'on-hold' == $order->status || 'pending' == $order->status ) ) {
    // Skip the on-hold step and move wire transfers straight to processing
    $order->update_status( 'processing' );
  } else {
    return;
  }
}

function add_order_email_instructions( $order, $sent_to_admin ) {
  if ( ! $sent_to_admin && 'bacs' === $order->payment_method && $order->has_status( 'processing' ) ) {
    $gw = new WC_Gateway_BACS();
    $reflector = new ReflectionObject( $gw );
    $method = $reflector->getMethod( 'bank_details' );
    $method->setAccessible( true );
    $result = $method->invoke( $gw, $order->id );
  }
}
Still not as clean as I’d like, as we’re still invoking an internal function, but at least we’re using the proper hooks to tweak WooCommerce. I’ll update if I ever find a better way to get to the bank details.

Using a Yubikey for account security

I got a Yubikey 4 half a year ago (during Red Hat Summit 2016), but until now I didn’t do much with it. Time to change that ;)

If you have any more ideas on how to use the Yubikey, feel free to comment!

Also, if you’re not using two-factor authentication yet, I urge you to start using it. It gives you a nice additional layer of account security, with limited hassle. It doesn’t even have to cost you any money if you’re using a software solution. Check out twofactorauth.org for a (non-comprehensive) list of sites that support it!


Replacing Crashplan

I’ve been a longtime user of Crashplan, an easy-to-use cloud backup solution. It works well, and it used to work on nearly any platform that had a Java run-time and some add-on opensource libraries. I’ve used it for some time on my Raspberry Pi to automatically back up my data to the cloud. (Crashplan on ARM – the architecture of the Raspberry Pi – is an unsupported configuration, though.)

Used to work, past tense.

Code42 (the company behind Crashplan) decided to incorporate a new library (libc42archive.so) in the latest update of their client, version 4.8, which has no ARM counterpart. Only x86 (and amd64) architectures are supported, removing a lot of devices which were able to run Crashplan from the list. No source code is available, so this is basically a call to stop using Crashplan on anything other than Intel-compatible architectures. Bleh.
(I opened a support ticket to ask them to restore compatibility, but I’m not holding my breath for it)

I was able to keep it alive for some time by downgrading back to version 4.7 and making the upgrade directory immutable, but it seems that this trick has run its course. The client needs to be version 4.8 or you aren’t allowed to connect to the Crashplan back-end.

So, I needed a new solution – one that is open source (I don’t want to run into that issue again), offers client-side encryption, and does incremental-forever style backups. Being able to store the result in the cloud was a no-brainer. After some testing of various tools, I ended up with the following combination: borgbackup for the backups themselves, and rclone to sync them offsite.

While Crashplan offered immediate push to the cloud, the workflow is now somewhat different: every day a script is triggered (via cron), which runs borgbackup against a USB-connected hard disk for my local (and optionally NFS-shared) data. This allows for fast backups, fast deduplication, and encryption. No data leaves my network at this point.
When all backups are done, the encrypted repository is synced (using rclone) to Backblaze B2, bringing my offsite backup in sync with the local repository.
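A minimal sketch of such a cron-driven script is below. All paths, the repository location, retention settings, and the B2 remote name are assumptions, not my actual configuration; here it is written to a file and syntax-checked with sh -n:

```shell
cat > /tmp/backup.sh <<'EOF'
#!/bin/sh
# Nightly backup sketch: borg to a USB disk, then rclone the repo to B2.
# Every path and remote name below is a placeholder.
export BORG_REPO=/mnt/usbdisk/borgrepo
borg create --compression lz4 ::'{hostname}-{now:%Y-%m-%d}' /home /etc
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
rclone sync /mnt/usbdisk/borgrepo b2:my-backup-bucket
EOF
sh -n /tmp/backup.sh && echo "syntax OK"
```

Hooked into cron (e.g. `0 3 * * * /home/pi/bin/backup.sh`), this gives the local-then-offsite flow described above.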

Using an intermediate USB hard disk is not ideal, but it gives me yet another copy of my data – which is convenient when I’ve just deleted a file that I really did want to keep.

To give you an idea about the compression and deduplication statistics:

                       Original size      Compressed size    Deduplicated size
All archives:                1.10 TB              1.07 TB            446.63 GB

1.10TB is compressed to 1.07TB, and deduplication brings that down to an archive of 446GB. Less than half ;)
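To put a number on “less than half” (borg reports decimal units, so 1.10 TB = 1100 GB), a quick check of the ratio:

```shell
# Deduplicated size as a percentage of the original size
awk 'BEGIN { printf "%.1f%%\n", 446.63 / 1100 * 100 }'
# prints 40.6%
```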

To be able to find a file that has been deleted at some point, you can use borgbackup mount :: /<mountpoint> – this will mount the entire repository (using FUSE) on that directory, making it available for browsing. Don’t forget to unmount it using fusermount -u /<mountpoint> when you’re finished.

I’ve uploaded the script to my scripts repository on GitHub.

A new home, a new look

I finally decided to retire my custom-written CMS (wrote it back in 2003, when I had more free time), which – while obviously fantastic, super advanced, and capable of making coffee for you – had some drawbacks. Mostly being that it was a pain to use and update content with.

I also maintained a blog, on another domain, which was being updated more frequently than this site.

So, to make matters easier on myself, I’ve merged the (old) articles with the blog and moved it to my main domain – kcore.org.

I also threw in a new look (which – if you’re reading this on RSS – you won’t notice), activated SSL (thanks to Let’s Encrypt), and installed a boatload of redirects to keep the spice flowing!

Welcome! :)

Running Crashplan (headless) on a Raspberry Pi 2

In my grand scheme of “abuse all the low-power computing things!”, I’ve moved my Crashplan backups over to the Raspberry Pi 2 (rpi2 for short). Installation is relatively painless: download the installer from the Crashplan site, unpack, and execute. I installed mine under /opt/crashplan.

Afterwards, there are some things to fix, though, as Crashplan is only supported on the Intel architecture by default:

Install a working JRE (& dependencies for the GUI app should you want to launch it through X forwarding):
apt-get install oracle-java8-jdk libswt-gtk-3-jni libswt-cairo-gtk-3-jni
rm /opt/crashplan/jre; ln -s /usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre/ /opt/crashplan/jre
rm /opt/crashplan/lib/swt.jar; ln -s /usr/share/java/swt.jar /opt/crashplan/lib/swt.jar

Replace some libraries with their recompiled variants – you can compile them yourself (thanks to Jon Rogers for the instructions) or download them straight from his site if you’re lazy.
wget http://www.jonrogers.co.uk/wp-content/uploads/2012/05/libmd5.so -O /opt/crashplan/libmd5.so
wget http://www.jonrogers.co.uk/wp-content/uploads/2012/05/libjtux.so -O /opt/crashplan/libjtux.so

Add a library to the CrashplanEngine startup classpath:
sed -i 's|FULL_CP="|FULL_CP="/usr/share/java/jna.jar:|' /opt/crashplan/bin/CrashPlanEngine
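You can verify on a scratch copy that the sed expression does what’s intended; the FULL_CP line below is a simplified stand-in for the real one in CrashPlanEngine:

```shell
# Dummy one-line stand-in for the classpath assignment
printf 'FULL_CP="original-classpath"\n' > /tmp/engine.test
# Same substitution as above: prepend jna.jar to the classpath
sed -i 's|FULL_CP="|FULL_CP="/usr/share/java/jna.jar:|' /tmp/engine.test
cat /tmp/engine.test
# FULL_CP="/usr/share/java/jna.jar:original-classpath"
```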
And now you should be able to start your engine(s)!
/opt/crashplan/bin/CrashPlanEngine start
And the desktop app (which you can forward to your local Linux PC via ssh -X user@rpi2) – this does take forever to start, but it works. Or you can use these instructions (from Crashplan Support) to administer it remotely.